Merge pull request #939 from httprunner/v3
`.github/workflows/unittest.yml`

```text
@@ -32,6 +32,8 @@ jobs:
          poetry run hrun
          poetry run har2case
          poetry run coverage run --source=httprunner -m pytest tests
      - name: coverage report
        run: |
          poetry run coverage xml
          poetry run coverage report -m
      - name: Codecov
```
# Release History

## 3.0.13 (2020-06-17)

**Added**

- feat: log client/server IP and port

**Fixed**

- fix: avoid '.csv' being converted to '_csv'
- fix: convert HAR to JSON format testcase
- fix: missing ${var} handling when overriding config variables
- fix: SyntaxError caused by quote in case of headers."Set-Cookie"
- fix: FileExistsError when the specified project name conflicts with an existing file
- fix: testcase path handling error when path starts with "./" or ".\\"

## 3.0.12 (2020-06-14)

**Fixed**
# Quick Start

First of all, remember that HttpRunner is a simple yet powerful HTTP(S) testing framework. This document will help you learn HttpRunner in 10 minutes.
## Gold Sponsor

[<img src="/assets/hogwarts.png" alt="霍格沃兹测试学院" width="400">](https://ceshiren.com/)

> [霍格沃兹测试学院](https://ceshiren.com/) is an industry-leading high-end education brand for test development, owned by 测吧(北京)科技有限公司. Its courses are taught by front-line testing experts from BAT companies, offering hands-on training in API test automation, mobile test automation, performance testing, continuous integration and DevOps, as well as a referral service for outstanding test-development talent. [Click to learn!](https://ke.qq.com/course/254956?flowToken=1014690)

[霍格沃兹测试学院](https://ceshiren.com/) is the first Gold Sponsor of HttpRunner.

### Open Source Sponsor
`docs/user/gen_tests.md`

# Record & Generate testcase

## capture HTTP request and response

Before we write testcases, we should know the details of the API. A good choice is to use a web debugging proxy tool like `Charles Proxy` to capture the HTTP traffic.

For example, the image below illustrates posting form data to [`postman-echo.com`][postman-echo].

## export sessions to HAR file

Then we can select the captured request & response and export the sessions to an HTTP Archive (.har) file.

![charles-export-session](/images/charles-export-session.png)

![charles-save-har](/images/charles-save-har.png)

## generate testcase with har2case

Once you have the HAR file, you can use the builtin command `har2case` to convert it to an HttpRunner testcase.
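This conversion is possible because a HAR file is plain JSON with a well-defined structure (`log.entries[*].request` / `response`). Below is a minimal illustrative sketch of the kind of information `har2case` reads from each entry, assuming a standard HAR layout; it is not HttpRunner's actual code:

```python
import json

# A minimal HAR snippet (illustrative); real files come from the Charles export above.
har_text = """
{"log": {"entries": [
    {"request":  {"method": "POST", "url": "https://postman-echo.com/post"},
     "response": {"status": 200}}
]}}
"""

har = json.loads(har_text)
# Each entry pairs one captured request with its response.
records = [
    (e["request"]["method"], e["request"]["url"], e["response"]["status"])
    for e in har["log"]["entries"]
]
print(records)  # [('POST', 'https://postman-echo.com/post', 200)]
```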

### help

```text
$ har2case -h
usage: har2case har2case [-h] [-2y] [-2j] [--filter FILTER]
                         [--exclude EXCLUDE]
                         [har_source_file]

positional arguments:
  har_source_file       Specify HAR source file

optional arguments:
  -h, --help            show this help message and exit
  -2y, --to-yml, --to-yaml
                        Convert to YAML format, if not specified, convert to
                        pytest format by default.
  -2j, --to-json        Convert to JSON format, if not specified, convert to
                        pytest format by default.
  --filter FILTER       Specify filter keyword, only url include filter string
                        will be converted.
  --exclude EXCLUDE     Specify exclude keyword, url that includes exclude
                        string will be ignored, multiple keywords can be
                        joined with '|'
```

### generate testcase (pytest)

Since HttpRunner `3.0.7`, `har2case` converts HAR files to pytest format by default, and it is strongly recommended to write and maintain testcases in pytest format instead of the former `YAML/JSON` formats.

```text
$ har2case har/postman-echo-post-form.har
2020-06-15 15:08:01.187 | INFO | httprunner.ext.har2case.core:gen_testcase:332 - Start to generate testcase from har/postman-echo-post-form.har
2020-06-15 15:08:01.187 | INFO | httprunner.ext.har2case.core:_make_testcase:323 - Extract info from HAR file and prepare for testcase.
2020-06-15 15:08:01.191 | INFO | httprunner.loader:load_dot_env_file:130 - Loading environment variables from /Users/debugtalk/Desktop/demo/.env
2020-06-15 15:08:01.191 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: USERNAME
2020-06-15 15:08:01.191 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: PASSWORD
2020-06-15 15:08:01.193 | INFO | httprunner.make:make_testcase:310 - start to make testcase: /Users/debugtalk/Desktop/demo/har/postman-echo-post-form.har
2020-06-15 15:08:01.193 | INFO | httprunner.make:make_testcase:383 - generated testcase: /Users/debugtalk/Desktop/demo/har/postman_echo_post_form_test.py
2020-06-15 15:08:01.194 | INFO | httprunner.make:format_pytest_with_black:147 - format pytest cases with black ...
reformatted /Users/debugtalk/Desktop/demo/har/postman_echo_post_form_test.py
All done! ✨ 🍰 ✨
1 file reformatted.
2020-06-15 15:08:01.469 | INFO | httprunner.ext.har2case.core:gen_testcase:353 - generated testcase: /Users/debugtalk/Desktop/demo/har/postman_echo_post_form_test.py
```

The generated pytest file is a standard Python file, shown below.

```python
# NOTE: Generated By HttpRunner v3.0.12
# FROM: har/postman-echo-post-form.har

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase


class TestCasePostmanEchoPostForm(HttpRunner):
    config = Config("testcase description").verify(False)

    teststeps = [
        Step(
            RunRequest("/get")
            .get("https://postman-echo.com/get")
            .with_params(**{"foo1": "bar1", "foo2": "bar2"})
            .with_headers(
                **{
                    "User-Agent": "PostmanRuntime/7.24.1",
                    "Accept": "*/*",
                    "Cache-Control": "no-cache",
                    "Postman-Token": "6606343b-10e5-4165-a89f-6c301b762ce0",
                    "Host": "postman-echo.com",
                    "Accept-Encoding": "gzip, deflate, br",
                    "Connection": "keep-alive",
                    "Cookie": "sails.sid=s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU",
                }
            )
            .with_cookies(
                **{
                    "sails.sid": "s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU"
                }
            )
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal('headers."Content-Type"', "application/json; charset=utf-8")
            .assert_equal(
                "body.url", "https://postman-echo.com/get?foo1=bar1&foo2=bar2"
            )
        ),
        Step(
            RunRequest("/post")
            .post("https://postman-echo.com/post")
            .with_headers(
                **{
                    "User-Agent": "PostmanRuntime/7.24.1",
                    "Accept": "*/*",
                    "Cache-Control": "no-cache",
                    "Postman-Token": "3e408e9d-25ca-4b31-b04b-7f4898a8cd49",
                    "Host": "postman-echo.com",
                    "Accept-Encoding": "gzip, deflate, br",
                    "Connection": "keep-alive",
                    "Content-Type": "application/x-www-form-urlencoded",
                    "Content-Length": "19",
                    "Cookie": "sails.sid=s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU",
                }
            )
            .with_cookies(
                **{
                    "sails.sid": "s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU"
                }
            )
            .with_data({"foo1": "bar1", "foo2": "bar2"})
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal('headers."Content-Type"', "application/json; charset=utf-8")
            .assert_equal("body.data", "")
            .assert_equal("body.url", "https://postman-echo.com/post")
        ),
    ]


if __name__ == "__main__":
    TestCasePostmanEchoPostForm().test_start()
```

And it can be run with the `hrun` command or the native `pytest` command. In fact, `hrun` is only a wrapper around `pytest`, so the effect is essentially the same.

```text
$ hrun har/postman_echo_post_form_test.py
2020-06-15 15:23:03.502 | INFO | httprunner.loader:load_dot_env_file:130 - Loading environment variables from /Users/debugtalk/Desktop/demo/.env
2020-06-15 15:23:03.502 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: USERNAME
2020-06-15 15:23:03.502 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: PASSWORD
2020-06-15 15:23:03.503 | INFO | httprunner.make:format_pytest_with_black:147 - format pytest cases with black ...
All done! ✨ 🍰 ✨
1 file left unchanged.
2020-06-15 15:23:03.662 | INFO | httprunner.cli:main_run:56 - start to run tests with pytest. HttpRunner version: 3.0.12
====================================================================== test session starts ======================================================================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/Desktop/demo
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 1 item

har/postman_echo_post_form_test.py .                                                                                                                      [100%]

======================================================================= 1 passed in 2.60s =======================================================================
```

```text
$ pytest har/postman_echo_post_form_test.py
====================================================================== test session starts ======================================================================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/Desktop/demo
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 1 item

har/postman_echo_post_form_test.py .                                                                                                                      [100%]

================================================================= 1 passed, 1 warning in 4.11s ==================================================================
```

### generate testcase (YAML/JSON)

Of course, you can also generate the former `YAML/JSON` testcase formats. Just add the `-2y/--to-yml` or `-2j/--to-json` argument to `har2case`.

```text
$ har2case har/postman-echo-post-form.har -2j
2020-06-15 15:32:02.955 | INFO | httprunner.ext.har2case.core:gen_testcase:332 - Start to generate testcase from har/postman-echo-post-form.har
2020-06-15 15:32:02.955 | INFO | httprunner.ext.har2case.core:_make_testcase:323 - Extract info from HAR file and prepare for testcase.
2020-06-15 15:32:02.958 | INFO | httprunner.ext.har2case.utils:dump_json:122 - dump testcase to JSON format.
2020-06-15 15:32:02.959 | INFO | httprunner.ext.har2case.utils:dump_json:131 - Generate JSON testcase successfully: har/postman-echo-post-form.json
2020-06-15 15:32:02.959 | INFO | httprunner.ext.har2case.core:gen_testcase:353 - generated testcase: har/postman-echo-post-form.json
```

```json
{
    "config": {
        "name": "testcase description",
        "variables": {},
        "verify": false
    },
    "teststeps": [
        {
            "name": "/get",
            "request": {
                "url": "https://postman-echo.com/get",
                "params": {
                    "foo1": "bar1",
                    "foo2": "bar2"
                },
                "method": "GET",
                "cookies": {
                    "sails.sid": "s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU"
                },
                "headers": {
                    "User-Agent": "PostmanRuntime/7.24.1",
                    "Accept": "*/*",
                    "Cache-Control": "no-cache",
                    "Postman-Token": "6606343b-10e5-4165-a89f-6c301b762ce0",
                    "Host": "postman-echo.com",
                    "Accept-Encoding": "gzip, deflate, br",
                    "Connection": "keep-alive",
                    "Cookie": "sails.sid=s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU"
                }
            },
            "validate": [
                {
                    "eq": [
                        "status_code",
                        200
                    ]
                },
                {
                    "eq": [
                        "headers.Content-Type",
                        "application/json; charset=utf-8"
                    ]
                },
                {
                    "eq": [
                        "body.url",
                        "https://postman-echo.com/get?foo1=bar1&foo2=bar2"
                    ]
                }
            ]
        },
        {
            "name": "/post",
            "request": {
                "url": "https://postman-echo.com/post",
                "method": "POST",
                "cookies": {
                    "sails.sid": "s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU"
                },
                "headers": {
                    "User-Agent": "PostmanRuntime/7.24.1",
                    "Accept": "*/*",
                    "Cache-Control": "no-cache",
                    "Postman-Token": "3e408e9d-25ca-4b31-b04b-7f4898a8cd49",
                    "Host": "postman-echo.com",
                    "Accept-Encoding": "gzip, deflate, br",
                    "Connection": "keep-alive",
                    "Content-Type": "application/x-www-form-urlencoded",
                    "Content-Length": "19",
                    "Cookie": "sails.sid=s%3AQG_EVeNRw8k1xxZ6v_SG401VTpmJDSRu.fTAGx3JnZUT7S0c2%2FrD9cxUhQemIsm78nifYZYHpPCU"
                },
                "data": {
                    "foo1": "bar1",
                    "foo2": "bar2"
                }
            },
            "validate": [
                {
                    "eq": [
                        "status_code",
                        200
                    ]
                },
                {
                    "eq": [
                        "headers.Content-Type",
                        "application/json; charset=utf-8"
                    ]
                },
                {
                    "eq": [
                        "body.data",
                        ""
                    ]
                },
                {
                    "eq": [
                        "body.url",
                        "https://postman-echo.com/post"
                    ]
                }
            ]
        }
    ]
}
```

The `YAML/JSON` testcase contains the same information as the `pytest` testcase, and you can run it with the `hrun` command.
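As a quick check that the JSON form carries the same structure, the step names and request methods can be read back with the standard library; the `config`, `teststeps`, and `request` fields follow the generated JSON shown above:

```python
import json

# A trimmed-down version of the generated JSON testcase above.
testcase = json.loads("""
{
    "config": {"name": "testcase description", "verify": false},
    "teststeps": [
        {"name": "/get",  "request": {"method": "GET",  "url": "https://postman-echo.com/get"}},
        {"name": "/post", "request": {"method": "POST", "url": "https://postman-echo.com/post"}}
    ]
}
""")

steps = [(step["name"], step["request"]["method"]) for step in testcase["teststeps"]]
print(steps)  # [('/get', 'GET'), ('/post', 'POST')]
```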

```text
$ hrun har/postman-echo-post-form.json
2020-06-15 15:37:15.621 | INFO | httprunner.loader:load_dot_env_file:130 - Loading environment variables from /Users/debugtalk/Desktop/demo/.env
2020-06-15 15:37:15.622 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: USERNAME
2020-06-15 15:37:15.622 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: PASSWORD
2020-06-15 15:37:15.623 | INFO | httprunner.make:make_testcase:310 - start to make testcase: /Users/debugtalk/Desktop/demo/har/postman-echo-post-form.json
2020-06-15 15:37:15.625 | INFO | httprunner.make:make_testcase:383 - generated testcase: /Users/debugtalk/Desktop/demo/har/postman_echo_post_form_test.py
2020-06-15 15:37:15.625 | INFO | httprunner.make:format_pytest_with_black:147 - format pytest cases with black ...
reformatted /Users/debugtalk/Desktop/demo/har/postman_echo_post_form_test.py
All done! ✨ 🍰 ✨
1 file reformatted, 1 file left unchanged.
2020-06-15 15:37:15.962 | INFO | httprunner.cli:main_run:56 - start to run tests with pytest. HttpRunner version: 3.0.12
====================================================================== test session starts ======================================================================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/Desktop/demo
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 1 item

har/postman_echo_post_form_test.py .                                                                                                                      [100%]

======================================================================= 1 passed in 2.03s =======================================================================
```
[postman-echo]: https://docs.postman-echo.com/?version=latest

`docs/user/run_testcase.md`

# Run Testcase

Once a testcase is ready, you can run it with the `hrun` command.

Note that `hrun` is a command alias of `httprunner run`; they have the same effect.

```text
hrun = httprunner run
```

## run testcases in diverse ways

`HttpRunner` can run testcases in diverse ways.

You can run a single testcase by specifying its file path.

```text
$ hrun path/to/testcase1
```

You can also run several testcases by specifying multiple testcase file paths.

```text
$ hrun path/to/testcase1 path/to/testcase2
```

If you want to run all the testcases of a project, specify the project folder path.

```text
$ hrun path/to/testcase_folder/
```

## run YAML/JSON testcases

If your testcases are written in YAML/JSON format, `hrun` will first convert them to pytest (Python) files and then run them with the `pytest` command.

That is to say,

```text
hrun = make + pytest
```

In most cases, the generated pytest files are placed in the same folder as the original YAML/JSON files, with the same file name except that a `_test` suffix is added and the `.yml/.yaml/.json` extension is replaced with `.py`.

```text
/path/to/example.yml => /path/to/example_test.py
```

However, if the testcase folder name or file name contains symbols like dots, hyphens or spaces, these symbols are replaced with underscores to avoid syntax errors when importing the Python classes (testcase references). Also, a folder/file name starting with a digit gets a `T` prefix, because Python module and class names cannot start with a digit.

```text
path 1/a.b-2/3.yml => path_1/a_b_2/T3_test.py
```
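These renaming rules can be sketched in a few lines of Python. This is an illustrative approximation of the behavior described above, not HttpRunner's actual implementation:

```python
import re

def pytest_file_name(yaml_or_json_name: str) -> str:
    """Approximate the YAML/JSON -> pytest file naming rules described above."""
    # Drop the .yml/.yaml/.json extension.
    stem = re.sub(r"\.(yml|yaml|json)$", "", yaml_or_json_name)
    # Dots, hyphens and spaces become underscores.
    stem = re.sub(r"[.\- ]", "_", stem)
    # A leading digit gets a "T" prefix (module names cannot start with a digit).
    if stem and stem[0].isdigit():
        stem = "T" + stem
    return stem + "_test.py"

print(pytest_file_name("example.yml"))  # example_test.py
print(pytest_file_name("3.yml"))        # T3_test.py
```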

## run pytest testcases

If your testcases are written in pytest format, or you want to run the pytest files converted from YAML/JSON testcases, both the `hrun` and `pytest` commands work. What you need to remember is that `hrun` only wraps `pytest`, so all of `pytest`'s arguments can be used with `hrun`.

```text
$ hrun -h
usage: hrun [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring expression. An expression is a python evaluatable expression where all names are
                        substring-matched against test names and their parent classes. Example: -k 'test_method or test_other' matches all test functions and
                        classes whose name contains 'test_method' or 'test_other', while -k 'not test_method' matches those that don't contain 'test_method' in
                        their names. -k 'not test_method and not test_other' will eliminate the matches. Additionally keywords are matched to classes and
                        functions containing extra names in their 'extra_keyword_matches' set, as well as functions which have names assigned directly to them.
                        The matching is case-insensitive.
  -m MARKEXPR           only run tests matching given mark expression. example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --maxfail=num         exit after first num failures or errors.
  --strict-markers, --strict
                        markers not registered in the `markers` section of the configuration file raise errors.
  -c file               load configuration from `file` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur.
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute path:
                        '/home/user/root_dir'; path with variables: '$HOME/root_dir'.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --import-mode={prepend,append}
                        prepend/append to sys.path when importing test modules, default is to prepend.
  --pdb                 start the interactive Python debugger on errors or KeyboardInterrupt.
  --pdbcls=modulename:classname
                        start a custom interactive Python debugger on errors. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
  -s                    shortcut for --capture=no.
  --runxfail            report the results of xfail tests as if they were not marked
  --lf, --last-failed   rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  run all tests but run the last failures first. This may re-order tests and thus lead to repeated fixture setup/teardown
  --nf, --new-first     run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        which tests to run with no previously (known) failures.
  --sw, --stepwise      exit on test failure and continue from last failing test next time
  --stepwise-skip       ignore the first failing test but stop on the next failing test
  --allure-severities=SEVERITIES_SET
                        Comma-separated list of severity names. Tests only with these severities will be run. Possible values are: blocker, critical, normal,
                        minor, trivial.
  --allure-epics=EPICS_SET
                        Comma-separated list of epic names. Run tests that have at least one of the specified feature labels.
  --allure-features=FEATURES_SET
                        Comma-separated list of feature names. Run tests that have at least one of the specified feature labels.
  --allure-stories=STORIES_SET
                        Comma-separated list of story names. Run tests that have at least one of the specified story labels.
  --allure-link-pattern=LINK_TYPE:LINK_PATTERN
                        Url pattern for link type. Allows short links in test, like 'issue-1'. Text will be formatted to full url with python str.format().

reporting:
  --durations=N         show N slowest setup/test durations (N=0 for all).
  -v, --verbose         increase verbosity.
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity. Default is 0.
  -r chars              show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output,
                        (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list.
                        (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        disable warnings summary
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'.
  --full-trace          don't cut any tracebacks (default is to cut).
  --color=color         color terminal output (yes/no/auto).
  --pastebin=mode       send failed|all info to bpaste.net pastebin service.
  --junit-xml=path      create junit-xml style report file at given path.
  --junit-prefix=str    prepend prefix to classnames in junit-xml output
  --result-log=path     DEPRECATED path for machine-readable result log.
  --html=path           create html report file at given path.
  --self-contained-html
                        create a self-contained html file containing all necessary styles, scripts, and images - this means that the report may not render or
                        function where CSP restrictions are in place (see https://developer.mozilla.org/docs/Web/Security/CSP)
  --css=path            append given css file content to report style file.

collection:
  --collect-only, --co  only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --ignore-glob=path    ignore path pattern during collection (multi-allowed).
  --deselect=nodeid_prefix
                        deselect item (via node id prefix) during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --keep-duplicates     Keep duplicate tests.
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --doctest-modules     run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        choose another output format for diffs on doctest failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        ignore doctest ImportErrors
  --doctest-continue-on-failure
                        for a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run.(warning: this directory is removed if it exists)
  -V, --version         display pytest version and information about plugins.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin module name or entry point (multi-allowed). To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        trace considerations of conftest.py files.
  --debug               store internal tracing debug information in 'pytestdebug.log'.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools. 'plain' performs no assertion debugging. 'rewrite' (the default) rewrites assert statements in test
                        modules on import to provide assert expression information.
  --setup-only          only setup fixtures, do not execute tests.
  --setup-show          show setup of fixtures while executing tests.
  --setup-plan          show what fixtures and tests would be executed but don't execute anything.

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        set which warnings to report, see -W option of python itself.

logging:
  --no-print-logs       disable printing caught logs on failed tests.
  --log-level=LEVEL     level of messages to catch/display. Not set by default, so it depends on the root/parent log handler's effective level, where it is
                        "WARNING" by default.
  --log-format=LOG_FORMAT
                        log format as used by the logging module.
  --log-date-format=LOG_DATE_FORMAT
                        log date format as used by the logging module.
  --log-cli-level=LOG_CLI_LEVEL
                        cli logging level.
  --log-cli-format=LOG_CLI_FORMAT
                        log format as used by the logging module.
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        log date format as used by the logging module.
  --log-file=LOG_FILE   path to a file when logging will be written to.
  --log-file-level=LOG_FILE_LEVEL
                        log file logging level.
  --log-file-format=LOG_FILE_FORMAT
                        log format as used by the logging module.
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        log date format as used by the logging module.
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.

reporting:
  --alluredir=DIR       Generate Allure report in the specified directory (may not exist)
  --clean-alluredir     Clean alluredir folder if it exists
  --allure-no-capture   Do not attach pytest captured logging/stdout/stderr to report

custom options:
  --metadata=key value  additional metadata.
  --metadata-from-json=METADATA_FROM_JSON
                        additional metadata from a json string.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist):   markers for test functions
  empty_parameter_set_mark (string):
                        default marker for empty parametersets
  norecursedirs (args): directory patterns to avoid for recursion
  testpaths (args):     directories to search for tests when no files or directories are given in the command line.
  usefixtures (args):   list of default fixtures to be used with this project
  python_files (args):  glob-style file patterns for Python test module discovery
  python_classes (args):
                        prefixes or glob names for Python test class discovery
  python_functions (args):
                        prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
                        disable string escape non-ascii characters, might cause unwanted side effects(use at your own risk)
  console_output_style (string):
                        console output: "classic", or with additional progress information ("progress" (percentage) | "count").
  xfail_strict (bool):  default for the strict parameter of xfail markers when not given explicitly (default: False)
  enable_assertion_pass_hook (bool):
                        Enables the pytest_assertion_pass hook.Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string):
                        Test suite name for JUnit report
  junit_logging (string):
                        Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):
                        Capture log information for passing tests to JUnit report:
  junit_duration_report (string):
                        Duration time to report: one of total|call
  junit_family (string):
                        Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):
                        option flags for doctests
  doctest_encoding (string):
                        encoding used for doctest files
  cache_dir (string):   cache directory path.
  filterwarnings (linelist):
                        Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  log_print (bool):     default value for --no-print-logs
  log_level (string):   default value for --log-level
  log_format (string):  default value for --log-format
  log_date_format (string):
                        default value for --log-date-format
  log_cli (bool):       enable log display during test run (also known as "live logging").
  log_cli_level (string):
                        default value for --log-cli-level
  log_cli_format (string):
                        default value for --log-cli-format
  log_cli_date_format (string):
                        default value for --log-cli-date-format
  log_file (string):    default value for --log-file
  log_file_level (string):
                        default value for --log-file-level
  log_file_format (string):
                        default value for --log-file-format
  log_file_date_format (string):
                        default value for --log-file-date-format
  log_auto_indent (string):
                        default value for --log-auto-indent
  faulthandler_timeout (string):
                        Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish. Not available on Windows.
  addopts (args):       extra command line options
  minversion (string):  minimally required pytest version
  render_collapsed (bool):
                        Open the report with all rows collapsed. Useful for very large reports

environment variables:
  PYTEST_ADDOPTS                 extra command line options
  PYTEST_PLUGINS                 comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
  PYTEST_DEBUG                   set to enable debug tracing of pytest's internals


to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option
```
|
||||
|
||||
## execution logs

By default, `hrun` will not print details of request and response data.

```text
$ hrun examples/postman_echo/request_methods/request_with_functions.yml
2020-06-17 15:39:41.041 | INFO | httprunner.make:make_testcase:317 - start to make testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions.yml
2020-06-17 15:39:41.042 | INFO | httprunner.make:make_testcase:390 - generated testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions_test.py
2020-06-17 15:39:41.042 | INFO | httprunner.make:format_pytest_with_black:154 - format pytest cases with black ...
reformatted /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions_test.py
All done! ✨ 🍰 ✨
1 file reformatted, 1 file left unchanged.
2020-06-17 15:39:41.315 | INFO | httprunner.cli:main_run:56 - start to run tests with pytest. HttpRunner version: 3.0.13
====================================================================== test session starts ======================================================================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 1 item

examples/postman_echo/request_methods/request_with_functions_test.py . [100%]

======================================================================= 1 passed in 2.98s =======================================================================
```

If you want to view details of request & response data, extraction and validation, you can add the argument `-s` (shortcut for `--capture=no`).

```text
$ hrun -s examples/postman_echo/request_methods/request_with_functions.yml
2020-06-17 15:42:54.369 | INFO | httprunner.make:make_testcase:317 - start to make testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions.yml
2020-06-17 15:42:54.369 | INFO | httprunner.make:make_testcase:390 - generated testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions_test.py
2020-06-17 15:42:54.370 | INFO | httprunner.make:format_pytest_with_black:154 - format pytest cases with black ...
reformatted /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/request_methods/request_with_functions_test.py
All done! ✨ 🍰 ✨
1 file reformatted, 1 file left unchanged.
2020-06-17 15:42:54.699 | INFO | httprunner.cli:main_run:56 - start to run tests with pytest. HttpRunner version: 3.0.13
====================================================================== test session starts ======================================================================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 1 item

examples/postman_echo/request_methods/request_with_functions_test.py 2020-06-17 15:42:55.017 | INFO | httprunner.runner:test_start:435 - Start to run testcase: request methods testcase with functions, TestCase ID: cc404c49-000f-485c-b4c1-ac3367a053fe
2020-06-17 15:42:55.018 | INFO | httprunner.runner:__run_step:278 - run step begin: get with params >>>>>>
2020-06-17 15:42:56.326 | DEBUG | httprunner.client:log_print:40 -
================== request details ==================
method   : GET
url      : https://postman-echo.com/get?foo1=bar11&foo2=bar21&sum_v=3
headers  : {
    "User-Agent": "HttpRunner/3.0.13",
    "Accept-Encoding": "gzip, deflate",
    "Accept": "*/*",
    "Connection": "keep-alive",
    "HRUN-Request-ID": "HRUN-cc404c49-000f-485c-b4c1-ac3367a053fe-775018",
    "Content-Length": "2",
    "Content-Type": "application/json"
}
cookies  : {}
body     : {}

2020-06-17 15:42:56.327 | DEBUG | httprunner.client:log_print:40 -
================== response details ==================
status_code  : 200
headers      : {
    "Date": "Wed, 17 Jun 2020 07:42:56 GMT",
    "Content-Type": "application/json; charset=utf-8",
    "Content-Length": "477",
    "Connection": "keep-alive",
    "ETag": "W/\"1dd-2JtBYPcnh8D6fqLz8KFn16Oq1R0\"",
    "Vary": "Accept-Encoding",
    "set-cookie": "sails.sid=s%3A6J_EtUk3nkL_C2xtx-NtAXrlA5wPxEgk.gIO2yBbtvGWIIgQ%2F2mZhMkU669G3F60cvLAPWbwyoGM; Path=/; HttpOnly"
}
cookies      : {
    "sails.sid": "s%3A6J_EtUk3nkL_C2xtx-NtAXrlA5wPxEgk.gIO2yBbtvGWIIgQ%2F2mZhMkU669G3F60cvLAPWbwyoGM"
}
encoding     : utf-8
content_type : application/json; charset=utf-8
body         : {
    "args": {
        "foo1": "bar11",
        "foo2": "bar21",
        "sum_v": "3"
    },
    "headers": {
        "x-forwarded-proto": "https",
        "x-forwarded-port": "443",
        "host": "postman-echo.com",
        "x-amzn-trace-id": "Root=1-5ee9c980-d8e98cc72a26ef24f5819ce3",
        "content-length": "2",
        "user-agent": "HttpRunner/3.0.13",
        "accept-encoding": "gzip, deflate",
        "accept": "*/*",
        "hrun-request-id": "HRUN-cc404c49-000f-485c-b4c1-ac3367a053fe-775018",
        "content-type": "application/json"
    },
    "url": "https://postman-echo.com/get?foo1=bar11&foo2=bar21&sum_v=3"
}

2020-06-17 15:42:56.328 | INFO | httprunner.client:request:203 - status_code: 200, response_time(ms): 1307.33 ms, response_length: 477 bytes
2020-06-17 15:42:56.328 | INFO | httprunner.response:extract:152 - extract mapping: {'foo3': 'bar21'}
2020-06-17 15:42:56.328 | INFO | httprunner.response:validate:209 - assert status_code equal 200(int) ==> pass
2020-06-17 15:42:56.329 | INFO | httprunner.response:validate:209 - assert body.args.foo1 equal bar11(str) ==> pass
2020-06-17 15:42:56.329 | INFO | httprunner.response:validate:209 - assert body.args.sum_v equal 3(str) ==> pass
2020-06-17 15:42:56.329 | INFO | httprunner.response:validate:209 - assert body.args.foo2 equal bar21(str) ==> pass
2020-06-17 15:42:56.330 | INFO | httprunner.runner:__run_step:290 - run step end: get with params <<<<<<

<Omit>

2020-06-17 15:42:57.019 | INFO | httprunner.runner:test_start:444 - generate testcase log: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/examples/postman_echo/logs/cc404c49-000f-485c-b4c1-ac3367a053fe.run.log
.

======================================================================= 1 passed in 2.13s =======================================================================
```

Also, an execution log file will be generated for each testcase, located at `<ProjectRootDir>/logs/TestCaseID.run.log`.
## TestCase ID & Request ID

For the sake of troubleshooting, each testcase generates a unique ID (uuid4), and a `HRUN-Request-ID` field containing the testcase ID is automatically added to the headers of each request.

```text
HRUN-Request-ID = "HRUN-<TestCase ID>-<timestamp_six_digits>"
timestamp_six_digits = str(int(time.time() * 1000))[-6:]
```

In other words, all requests within one testcase share the same `HRUN-Request-ID` prefix, while each request gets a unique `HRUN-Request-ID` suffix.
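The construction above can be sketched in plain Python; the helper name below is illustrative, not part of HttpRunner's API:

```python
import time
import uuid


def make_hrun_request_id(testcase_id: str) -> str:
    # last six digits of the current epoch timestamp in milliseconds
    timestamp_six_digits = str(int(time.time() * 1000))[-6:]
    return f"HRUN-{testcase_id}-{timestamp_six_digits}"


testcase_id = str(uuid.uuid4())  # generated once per testcase
print(make_hrun_request_id(testcase_id))
```

Two requests issued at different milliseconds within the same testcase thus share the prefix but differ in the suffix.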
## arguments for v2.x compatibility

Besides all the arguments of `pytest`, `hrun` has several additional arguments to keep compatibility with HttpRunner v2.x.

- `--failfast`: has no effect; this argument is removed automatically
- `--report-file`: specify the HTML report file path; this argument is replaced with `--html --self-contained-html` and an HTML report is generated with the `pytest-html` plugin
- `--save-tests`: if set, HttpRunner v3.x creates a pytest conftest.py file containing a session fixture that aggregates each testcase's summary and dumps it to summary.json
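The generated conftest.py follows the usual pytest session-fixture pattern; the sketch below is a hypothetical illustration of that pattern, not the file HttpRunner actually emits:

```python
import json

import pytest

# collected testcase summaries (each testcase would append its own entry)
summaries = []


@pytest.fixture(scope="session", autouse=True)
def session_fixture():
    # runs once per pytest session; code after yield runs at session teardown
    yield
    with open("summary.json", "w") as f:
        json.dump(summaries, f)
```

A session-scoped, autouse fixture is the natural place for this because it wraps the entire test run without requiring any test to request it explicitly.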
102
docs/user/scaffold.md
Normal file
@@ -0,0 +1,102 @@
# Scaffold

If you want to create a new project, you can use the scaffold to get started quickly.

## help

```text
$ httprunner startproject -h
usage: httprunner startproject [-h] [project_name]

positional arguments:
  project_name  Specify new project name.

optional arguments:
  -h, --help    show this help message and exit
```

## create new project

The only argument you need to specify is the project name.

```text
$ httprunner startproject demo
2020-06-15 11:53:25.498 | INFO | httprunner.scaffold:create_scaffold:37 - Create new project: demo
Project Root Dir: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo

created folder: demo
created folder: demo/har
created folder: demo/testcases
created folder: demo/reports
created file: demo/testcases/demo_testcase_request.yml
created file: demo/testcases/demo_testcase_ref.yml
created file: demo/debugtalk.py
created file: demo/.env
created file: demo/.gitignore

$ tree demo -a
demo
├── .env
├── .gitignore
├── debugtalk.py
├── har
├── reports
└── testcases
    ├── demo_testcase_ref.yml
    └── demo_testcase_request.yml

3 directories, 5 files
```

If you specify a project name that already exists, you will get a warning.

```text
$ httprunner startproject demo
2020-06-15 11:55:03.192 | WARNING | httprunner.scaffold:create_scaffold:32 - Project demo exists, please specify a new project name.

$ tree demo -a
demo
├── .env
├── .gitignore
├── debugtalk.py
├── har
├── reports
└── testcases
    ├── demo_testcase_ref.yml
    └── demo_testcase_request.yml

3 directories, 5 files
```

## run scaffold project

The scaffold project contains several valid testcases, so you can run tests without any edits.

```text
$ hrun demo
2020-06-15 11:57:15.883 | INFO | httprunner.loader:load_dot_env_file:130 - Loading environment variables from /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/.env
2020-06-15 11:57:15.883 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: USERNAME
2020-06-15 11:57:15.884 | DEBUG | httprunner.utils:set_os_environ:32 - Set OS environment variable: PASSWORD
2020-06-15 11:57:15.885 | INFO | httprunner.make:make_testcase:310 - start to make testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_ref.yml
2020-06-15 11:57:15.898 | INFO | httprunner.make:make_testcase:310 - start to make testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_request.yml
2020-06-15 11:57:15.899 | INFO | httprunner.make:make_testcase:383 - generated testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_request_test.py
2020-06-15 11:57:15.900 | INFO | httprunner.make:make_testcase:383 - generated testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_ref_test.py
2020-06-15 11:57:15.911 | INFO | httprunner.make:make_testcase:310 - start to make testcase: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_request.yml
2020-06-15 11:57:15.912 | INFO | httprunner.make:__ensure_project_meta_files:128 - copy .env to /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/_env
2020-06-15 11:57:15.912 | INFO | httprunner.make:format_pytest_with_black:147 - format pytest cases with black ...
reformatted /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_ref_test.py
reformatted /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner/demo/testcases/demo_testcase_request_test.py
All done! ✨ 🍰 ✨
2 files reformatted, 1 file left unchanged.
2020-06-15 11:57:16.299 | INFO | httprunner.cli:main_run:56 - start to run tests with pytest. HttpRunner version: 3.0.12
====================================================================== test session starts ======================================================================
platform darwin -- Python 3.7.5, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: /Users/debugtalk/MyProjects/HttpRunner-dev/HttpRunner
plugins: metadata-1.9.0, allure-pytest-2.8.16, html-2.1.1
collected 2 items

demo/testcases/demo_testcase_request_test.py . [ 50%]
demo/testcases/demo_testcase_ref_test.py . [100%]

======================================================================= 2 passed in 6.87s =======================================================================
```
55
docs/user/testing_report.md
Normal file
@@ -0,0 +1,55 @@
# Testing Report

Benefiting from its integration with `pytest`, HttpRunner v3.x can make use of all the pytest plugins, including testing report plugins like `pytest-html` and `allure-pytest`.

## builtin html report

The `pytest-html` plugin is installed along with HttpRunner. When you want to generate an HTML report for testcase execution, add the command argument `--html`.

```text
$ hrun /path/to/testcase --html=report.html
```

If you want to create a self-contained report, a single HTML file that is more convenient to share, add another command argument, `--self-contained-html`.

```text
$ hrun /path/to/testcase --html=report.html --self-contained-html
```

You can refer to [`pytest-html`](https://pypi.org/project/pytest-html/) for more details.

## allure report

`allure-pytest` is an optional dependency of HttpRunner, so if you want to generate an allure report, you should install the `allure-pytest` plugin separately.

```text
$ pip3 install "allure-pytest"
```

Or you can install HttpRunner with the allure extra package.

```text
$ pip3 install "httprunner[allure]"
```

Once `allure-pytest` is ready, the following arguments can be used with the `hrun/pytest` command.

- `--alluredir=DIR`: Generate Allure report in the specified directory (may not exist)
- `--clean-alluredir`: Clean alluredir folder if it exists
- `--allure-no-capture`: Do not attach pytest captured logging/stdout/stderr to report

To enable the Allure listener to collect results during test execution, simply add the `--alluredir` option and provide the path to the folder where results should be stored. E.g.:

```text
$ hrun /path/to/testcase --alluredir=/tmp/my_allure_results
```

To see the actual report after your tests have finished, you need to use the Allure command-line utility to generate the report from the results.

```text
$ allure serve /tmp/my_allure_results
```

This command will show the generated report in your default browser.

You can refer to [`allure-pytest`](https://docs.qameta.io/allure/#_pytest) for more details.
267
docs/user/write_testcase.md
Normal file
@@ -0,0 +1,267 @@
# Write Testcase

HttpRunner v3.x supports three testcase formats: `pytest`, `YAML` and `JSON`. It is strongly recommended to write and maintain testcases in the `pytest` format instead of the former `YAML/JSON` formats.

The format relations are illustrated below:



## record & generate testcase

If the SUT (system under test) is ready, the most efficient way is to capture HTTP traffic first and then generate testcases from the HAR file. Refer to [`Record & Generate testcase`](/user/gen_tests/) for more details.

Based on the generated pytest testcase, you can then make adjustments as needed, so you need to know the details of the testcase format.

## testcase structure

Each testcase is a subclass of `HttpRunner` and must have two class attributes: `config` and `teststeps`.

- config: configure testcase-level settings, including `base_url`, `verify`, `variables` and `export`.
- teststeps: list of test steps (`List[Step]`); each step corresponds to an API request or a reference call to another testcase. Besides, `variables`/`extract`/`validate`/`hooks` mechanisms are supported to create extremely complex test scenarios.
```python
from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase


class TestCaseRequestWithFunctions(HttpRunner):
    config = (
        Config("request methods testcase with functions")
        .variables(
            **{
                "foo1": "config_bar1",
                "foo2": "config_bar2",
                "expect_foo1": "config_bar1",
                "expect_foo2": "config_bar2",
            }
        )
        .base_url("https://postman-echo.com")
        .verify(False)
        .export(*["foo3"])
    )

    teststeps = [
        Step(
            RunRequest("get with params")
            .with_variables(
                **{"foo1": "bar11", "foo2": "bar21", "sum_v": "${sum_two(1, 2)}"}
            )
            .get("/get")
            .with_params(**{"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"})
            .with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
            .extract()
            .with_jmespath("body.args.foo2", "foo3")
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal("body.args.foo1", "bar11")
            .assert_equal("body.args.sum_v", "3")
            .assert_equal("body.args.foo2", "bar21")
        ),
        Step(
            RunRequest("post form data")
            .with_variables(**{"foo2": "bar23"})
            .post("/post")
            .with_headers(
                **{
                    "User-Agent": "HttpRunner/${get_httprunner_version()}",
                    "Content-Type": "application/x-www-form-urlencoded",
                }
            )
            .with_data("foo1=$foo1&foo2=$foo2&foo3=$foo3")
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal("body.form.foo1", "$expect_foo1")
            .assert_equal("body.form.foo2", "bar23")
            .assert_equal("body.form.foo3", "bar21")
        ),
    ]


if __name__ == "__main__":
    TestCaseRequestWithFunctions().test_start()
```
## chain call

One of the most awesome features of HttpRunner v3.x is `chain call`: you do not need to remember any testcase format details, and you get intelligent completion when writing testcases in an IDE.





## config

Each testcase should have one `config` part, in which you can configure testcase-level settings.

### name (required)

Specify the testcase name. This will be displayed in execution logs and test reports.

### base_url (optional)

Specify the common scheme and host part of the SUT, e.g. `https://postman-echo.com`. If `base_url` is specified, the url in each teststep only needs to set the relative path. This is especially useful if you want to switch between different SUT environments.

### variables (optional)

Specify common variables of the testcase. Each teststep can reference config variables that are not set in its own step variables. In other words, step variables have higher priority than config variables.
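The priority rule can be illustrated with a plain dict merge (a simplified sketch, not HttpRunner's actual resolution code):

```python
# config-level variables, shared by all steps
config_variables = {"foo1": "config_bar1", "foo2": "config_bar2"}

# step-level variables, only visible inside this step
step_variables = {"foo2": "step_bar2"}

# step variables take precedence over config variables of the same name
resolved = {**config_variables, **step_variables}
print(resolved)  # {'foo1': 'config_bar1', 'foo2': 'step_bar2'}
```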
### verify (optional)

Specify whether to verify the server's TLS certificate. This is especially useful when you want to record HTTP traffic of the testcase execution, because an SSLError will occur if `verify` is not set or is set to True.

> SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1076)'))
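For intuition, here is the equivalent plain `requests` usage that this setting maps to, as a minimal sketch:

```python
import requests

session = requests.Session()
# disable TLS certificate verification for every request in this session;
# equivalent per-request form: session.get(url, verify=False)
session.verify = False
```

Note that `requests` emits an InsecureRequestWarning for unverified HTTPS requests; that is expected when capturing traffic through a proxy with a self-signed certificate.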
### export (optional)

Specify the exported session variables of the testcase. Consider each testcase as a black box: config `variables` is the input part, and config `export` is the output part. In particular, when a testcase is referenced in another testcase's step and some session variables are extracted from it for use in subsequent teststeps, those session variables should be configured in the config `export` part.

## teststeps

Each testcase should have one or more ordered test steps (`List[Step]`); each step corresponds to an API request or a reference call to another testcase.



> Notice: The concept of API in HttpRunner v2.x has been deprecated for simplification. You can consider an API as a testcase that has only one request step.

### RunRequest(name)

`RunRequest` is used in a step to make a request to the API and perform extractions or validations on the response.

The argument `name` of RunRequest is used to specify the teststep name, which will be displayed in execution logs and test reports.

#### .with_variables

Specify teststep variables. The variables of each step are independent, so if you want to share variables across multiple steps, you should define them in config variables. Besides, step variables override config variables that have the same name.

#### .method(url)

Specify the HTTP method and the url of the SUT. These correspond to the `method` and `url` arguments of [`requests.request`][requests.request].

If `base_url` is set in config, url only needs to set the relative path.

#### .with_params

Specify the query string for the request url. This corresponds to the `params` argument of [`requests.request`][requests.request].

#### .with_headers

Specify HTTP headers for the request. This corresponds to the `headers` argument of [`requests.request`][requests.request].

#### .with_cookies

Specify HTTP request cookies. This corresponds to the `cookies` argument of [`requests.request`][requests.request].

#### .with_data

Specify the HTTP request body. This corresponds to the `data` argument of [`requests.request`][requests.request].

#### .with_json

Specify the HTTP request body in JSON. This corresponds to the `json` argument of [`requests.request`][requests.request].

#### extract

##### .with_jmespath

Extract from the JSON response body with [jmespath][jmespath].

> with_jmespath(jmes_path: Text, var_name: Text)

- jmes_path: jmespath expression, refer to [JMESPath Tutorial][jmespath_tutorial] for more details
- var_name: the variable name that stores the extracted value; it can be referenced by subsequent test steps
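To get a feel for dotted-path extraction, here is a simplified stand-in for jmespath's basic lookups (real jmespath expressions also support filters, slices and projections):

```python
from functools import reduce


def search(path, data):
    # walk the dict following each dotted key; returns None if any step is missing
    return reduce(
        lambda d, key: d.get(key) if isinstance(d, dict) else None,
        path.split("."),
        data,
    )


# a response body shaped like the postman-echo example above
body = {"args": {"foo1": "bar11", "foo2": "bar21", "sum_v": "3"}}
print(search("args.foo2", body))  # bar21
```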
#### validate

##### .assert_XXX

Extract from the JSON response body with [jmespath][jmespath] and validate against the expected value.

> assert_XXX(jmes_path: Text, expected_value: Any)

- jmes_path: jmespath expression, refer to [JMESPath Tutorial][jmespath_tutorial] for more details
- expected_value: the expected value; a variable or function reference can also be used here

The image below shows HttpRunner's builtin validators.


### RunTestCase(name)

`RunTestCase` is used in a step to reference another testcase call.

The argument `name` of RunTestCase is used to specify the teststep name, which will be displayed in execution logs and test reports.

#### .with_variables

Same as RunRequest's `.with_variables`.

#### .call

Specify the referenced testcase class.

#### .export

Specify the session variable names to export from the referenced testcase. The exported variables can be referenced by subsequent test steps.
```python
import os
import sys

sys.path.insert(0, os.getcwd())

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

from examples.postman_echo.request_methods.request_with_functions_test import (
    TestCaseRequestWithFunctions as RequestWithFunctions,
)


class TestCaseRequestWithTestcaseReference(HttpRunner):
    config = (
        Config("request methods testcase: reference testcase")
        .variables(
            **{
                "foo1": "testsuite_config_bar1",
                "expect_foo1": "testsuite_config_bar1",
                "expect_foo2": "config_bar2",
            }
        )
        .base_url("https://postman-echo.com")
        .verify(False)
    )

    teststeps = [
        Step(
            RunTestCase("request with functions")
            .with_variables(
                **{"foo1": "testcase_ref_bar1", "expect_foo1": "testcase_ref_bar1"}
            )
            .call(RequestWithFunctions)
            .export(*["foo3"])
        ),
        Step(
            RunRequest("post form data")
            .with_variables(**{"foo1": "bar1"})
            .post("/post")
            .with_headers(
                **{
                    "User-Agent": "HttpRunner/${get_httprunner_version()}",
                    "Content-Type": "application/x-www-form-urlencoded",
                }
            )
            .with_data("foo1=$foo1&foo2=$foo3")
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal("body.form.foo1", "bar1")
            .assert_equal("body.form.foo2", "bar21")
        ),
    ]


if __name__ == "__main__":
    TestCaseRequestWithTestcaseReference().test_start()
```


[requests.request]: https://requests.readthedocs.io/en/master/api/#requests.request
[jmespath]: https://jmespath.org/
[jmespath_tutorial]: https://jmespath.org/tutorial.html
@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/httpbin/basic.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/httpbin/hooks.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/httpbin/load_image.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/httpbin/upload.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/httpbin/validate.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/request_with_functions.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/request_with_testcase_reference.yml

import os

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/hardcode.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/request_with_functions.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/request_with_testcase_reference.yml

import os

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/request_with_variables.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/validate_with_functions.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase

@@ -1,4 +1,4 @@
# NOTE: Generated By HttpRunner v3.0.11
# NOTE: Generated By HttpRunner v3.0.13
# FROM: examples/postman_echo/request_methods/validate_with_variables.yml

from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase
@@ -1,4 +1,4 @@
-__version__ = "3.0.12"
+__version__ = "3.0.13"
 __description__ = "One-stop solution for HTTP(S) testing."
 
 from httprunner.runner import HttpRunner
@@ -12,6 +12,7 @@ from requests.exceptions import (
     RequestException,
 )
 
+from httprunner.exceptions import NetworkFailure
 from httprunner.models import RequestData, ResponseData
 from httprunner.models import SessionData, ReqRespData
 from httprunner.utils import lower_dict_keys, omit_long_data
@@ -172,16 +173,31 @@ class HttpSession(requests.Session):
         # timeout default to 120 seconds
         kwargs.setdefault("timeout", 120)
 
+        # set stream to True, in order to get client/server IP/Port
+        kwargs["stream"] = True
+
         start_timestamp = time.time()
         response = self._send_request_safe_mode(method, url, **kwargs)
         response_time_ms = round((time.time() - start_timestamp) * 1000, 2)
 
-        # get the length of the content, but if the argument stream is set to True, we take
-        # the size from the content-length header, in order to not trigger fetching of the body
-        if kwargs.get("stream", False):
-            content_size = int(dict(response.headers).get("content-length") or 0)
-        else:
-            content_size = len(response.content or "")
+        try:
+            client_ip, client_port = response.raw.connection.sock.getsockname()
+            self.data.address.client_ip = client_ip
+            self.data.address.client_port = client_port
+            logger.debug(f"client IP: {client_ip}, Port: {client_port}")
+        except AttributeError as ex:
+            raise NetworkFailure(f"failed to get client address info: {ex}")
+
+        try:
+            server_ip, server_port = response.raw.connection.sock.getpeername()
+            self.data.address.server_ip = server_ip
+            self.data.address.server_port = server_port
+            logger.debug(f"server IP: {server_ip}, Port: {server_port}")
+        except AttributeError as ex:
+            raise NetworkFailure(f"failed to get server address info: {ex}")
+
+        # get length of the response content
+        content_size = int(dict(response.headers).get("content-length") or 0)
 
         # record the consumed time
         self.data.stat.response_time_ms = response_time_ms
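The hunk above forces `stream=True` so that the underlying socket is still attached to `response.raw.connection` when the addresses are read; once the body has been consumed, urllib3 releases the connection and there is no socket left to inspect, which is what the `AttributeError` guard catches. A minimal sketch of the same two socket calls outside requests (the helper name is mine, not HttpRunner's):

```python
import socket


def peer_and_local_address(sock: socket.socket):
    """Return ((local_ip, local_port), (peer_ip, peer_port)) for a connected
    socket -- the same getsockname()/getpeername() pair the diff reads off
    response.raw.connection.sock to log client and server addresses."""
    return sock.getsockname(), sock.getpeername()
```

Against a live response the equivalent calls are `response.raw.connection.sock.getsockname()` and `.getpeername()`, exactly as in the diff.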
@@ -27,6 +27,10 @@ class TeardownHooksFailure(MyBaseFailure):
     pass
 
 
+class NetworkFailure(MyBaseFailure):
+    pass
+
+
 """ error type exceptions
 these exceptions will mark test as error
 """
@@ -67,7 +67,7 @@ def main_har2case(args):
 
     if args.to_yaml:
         output_file_type = "YAML"
-    elif args.to_yaml:
+    elif args.to_json:
        output_file_type = "JSON"
     else:
         output_file_type = "pytest"
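The one-character fix above matters because the old `elif args.to_yaml:` branch repeated the condition already tested by `if args.to_yaml:`, so it was unreachable and `--to-json` silently fell through to pytest output. The corrected decision table, extracted as a standalone function (the function name is mine):

```python
def resolve_output_file_type(to_yaml: bool, to_json: bool) -> str:
    # mirrors the fixed branch in main_har2case: to_yaml was tested twice
    # before this change, so the JSON branch could never be reached
    if to_yaml:
        return "YAML"
    elif to_json:
        return "JSON"
    else:
        return "pytest"
```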
@@ -338,8 +338,6 @@ class HarParser(object):
             capture_exception(ex)
             raise
 
-        logger.debug("prepared testcase: {}".format(testcase))
-
         if file_type == "JSON":
             output_testcase_file = f"{harfile}.json"
             utils.dump_json(testcase, output_testcase_file)
@@ -64,6 +64,13 @@ if __name__ == "__main__":
 
 
 def __ensure_absolute(path: Text) -> Text:
+    if path.startswith("./"):
+        # Linux/Darwin, hrun ./test.yml
+        path = path[len("./") :]
+    elif path.startswith(".\\"):
+        # Windows, hrun .\\test.yml
+        path = path[len(".\\") :]
+
     path = ensure_path_sep(path)
     project_meta = load_project_meta(path)
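The added guard strips a leading `./` (Linux/Darwin) or `.\` (Windows) before the path is normalized, so `hrun ./test.yml` and `hrun .\test.yml` resolve the same way as the bare `hrun test.yml`. The same logic in isolation (the function name is mine, not HttpRunner's):

```python
def strip_relative_prefix(path: str) -> str:
    # "./test.yml" and ".\test.yml" previously confused the downstream
    # project-meta lookup; remove the current-directory prefix first
    if path.startswith("./"):
        path = path[len("./"):]
    elif path.startswith(".\\"):
        path = path[len(".\\"):]
    return path
```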
@@ -271,7 +278,7 @@ def make_teststep_chain_style(teststep: Dict) -> Text:
         # request step
         step_info += ".extract()"
         for extract_name, extract_path in teststep["extract"].items():
-            step_info += f'.with_jmespath("{extract_path}", "{extract_name}")'
+            step_info += f""".with_jmespath('{extract_path}', '{extract_name}')"""
 
     if "export" in teststep:
         # reference testcase step
@@ -110,6 +110,13 @@ class RequestStat(BaseModel):
     elapsed_ms: float = 0
 
 
+class AddressData(BaseModel):
+    client_ip: Text = "N/A"
+    client_port: int = 0
+    server_ip: Text = "N/A"
+    server_port: int = 0
+
+
 class RequestData(BaseModel):
     method: MethodEnum = MethodEnum.GET
     url: Url
@@ -140,6 +147,7 @@ class SessionData(BaseModel):
     # while when 30X redirect occurs, req_resps will contain multiple request & response
     req_resps: List[ReqRespData] = []
     stat: RequestStat = RequestStat()
+    address: AddressData = AddressData()
     validators: Dict = {}
@@ -1,4 +1,5 @@
 import os.path
+import subprocess
 import sys
 
 from loguru import logger
@@ -18,25 +19,40 @@ def init_parser_scaffold(subparsers):
 def create_scaffold(project_name):
     """ create scaffold with specified project name.
     """
+
+    def show_tree(prj_name):
+        try:
+            print(f"\n$ tree {prj_name} -a")
+            subprocess.run(["tree", prj_name, "-a"])
+            print("")
+        except FileNotFoundError:
+            logger.warning("tree command not exists, ignore.")
+
     if os.path.isdir(project_name):
         logger.warning(
-            f"Folder {project_name} exists, please specify a new folder name."
+            f"Project folder {project_name} exists, please specify a new project name."
         )
-        return
+        show_tree(project_name)
+        return 1
+    elif os.path.isfile(project_name):
+        logger.warning(
+            f"Project name {project_name} conflicts with existed file, please specify a new one."
+        )
+        return 1
 
-    logger.info(f"Start to create new project: {project_name}")
-    logger.info(f"CWD: {os.getcwd()}")
+    logger.info(f"Create new project: {project_name}")
+    print(f"Project Root Dir: {os.path.join(os.getcwd(), project_name)}\n")
 
     def create_folder(path):
         os.makedirs(path)
         msg = f"created folder: {path}"
-        logger.info(msg)
+        print(msg)
 
     def create_file(path, file_content=""):
         with open(path, "w") as f:
             f.write(file_content)
         msg = f"created file: {path}"
-        logger.info(msg)
+        print(msg)
 
     demo_testcase_request_content = """
 config:
@@ -178,8 +194,10 @@ def sleep(n_secs):
     create_file(os.path.join(project_name, ".env"), demo_env_content)
     create_file(os.path.join(project_name, ".gitignore"), ignore_content)
 
+    show_tree(project_name)
+    return 0
+
 
 def main_scaffold(args):
     capture_message("startproject with scaffold")
-    create_scaffold(args.project_name)
-    sys.exit(0)
+    sys.exit(create_scaffold(args.project_name))
@@ -210,8 +210,12 @@ def ensure_file_path_valid(file_path: Text) -> Text:
             # 19 => T19, 2C => T2C
             name = f"T{name}"
 
-        # handle cases when directory name includes dot/hyphen/space
-        name = name.replace(" ", "_").replace(".", "_").replace("-", "_")
+        if name.startswith("."):
+            # avoid ".csv" been converted to "_csv"
+            pass
+        else:
+            # handle cases when directory name includes dot/hyphen/space
+            name = name.replace(" ", "_").replace(".", "_").replace("-", "_")
 
         path_names.append(name)
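The new branch above skips sanitizing any name that starts with a dot, so a data file like `.csv` keeps its extension instead of becoming `_csv`, while ordinary path components still have spaces, dots, and hyphens replaced. The same rule in isolation (the function name is mine, not HttpRunner's):

```python
def sanitize_path_component(name: str) -> str:
    if name.startswith("."):
        # avoid ".csv" being converted to "_csv"
        return name
    # directory names with dot/hyphen/space must become valid identifiers
    return name.replace(" ", "_").replace(".", "_").replace("-", "_")
```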
@@ -239,8 +243,9 @@ def override_config_variables(
     """
     step_new_variables = {}
     for key, value in step_variables.items():
-        if f"${key}" == value:
+        if f"${key}" == value or "${" + key + "}" == value:
             # e.g. {"base_url": "$base_url"}
+            # or {"base_url": "${base_url}"}
             continue
 
         step_new_variables[key] = value
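The widened condition above treats both placeholder spellings, `$var` and `${var}`, as self-references: a step variable such as `{"base_url": "${base_url}"}` only points back at the config-level variable and must not override it. The filter in isolation (the function name is mine, not HttpRunner's):

```python
def drop_self_references(step_variables: dict) -> dict:
    filtered = {}
    for key, value in step_variables.items():
        if value == f"${key}" or value == "${" + key + "}":
            # e.g. {"base_url": "$base_url"} or {"base_url": "${base_url}"}
            continue
        filtered[key] = value
    return filtered
```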
@@ -53,5 +53,11 @@ extra:
 nav:
     - Introduction: index.md
     - Installation: installation.md
+    - User Guide:
+        - Scaffold: user/scaffold.md
+        - Record & Generate testcase: user/gen_tests.md
+        - Write Testcase: user/write_testcase.md
+        - Run Testcase: user/run_testcase.md
+        - Testing Report: user/testing_report.md
     - Sponsors: sponsors.md
     - CHANGELOG: CHANGELOG.md
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "httprunner"
-version = "3.0.12"
+version = "3.0.13"
 description = "One-stop solution for HTTP(S) testing."
 license = "Apache-2.0"
 readme = "README.md"
@@ -14,7 +14,7 @@ class TestLoader(unittest.TestCase):
         self.assertEqual(len(testcase_obj.teststeps), 3)
 
     def test_load_json_file_file_format_error(self):
-        json_tmp_file = "/tmp/tmp.json"
+        json_tmp_file = "tmp.json"
         # create empty file
         with open(json_tmp_file, "w") as f:
             f.write("")
@@ -111,7 +111,7 @@ class TestLoader(unittest.TestCase):
         start_path = os.path.join(os.getcwd(), "examples", "httpbin")
         self.assertEqual(
             loader.locate_file(start_path, "debugtalk.py"),
-            os.path.join(os.getcwd(), "examples/httpbin/debugtalk.py"),
+            os.path.join(os.getcwd(), "examples", "httpbin", "debugtalk.py"),
         )
         self.assertEqual(
             loader.locate_file("examples/httpbin/", "debugtalk.py"),
@@ -119,5 +119,5 @@ class TestLoader(unittest.TestCase):
         )
         self.assertEqual(
             loader.locate_file("examples/httpbin/", "debugtalk.py"),
-            os.path.join(os.getcwd(), "examples/httpbin/debugtalk.py"),
+            os.path.join(os.getcwd(), "examples", "httpbin", "debugtalk.py"),
         )
@@ -25,7 +25,12 @@ class TestMake(unittest.TestCase):
             testcase_python_list[0],
             os.path.join(
                 os.getcwd(),
-                "examples/postman_echo/request_methods/request_with_variables_test.py",
+                os.path.join(
+                    "examples",
+                    "postman_echo",
+                    "request_methods",
+                    "request_with_variables_test.py",
+                ),
             ),
         )
@@ -38,13 +43,23 @@ class TestMake(unittest.TestCase):
         self.assertIn(
             os.path.join(
                 os.getcwd(),
-                "examples/postman_echo/request_methods/request_with_testcase_reference_test.py",
+                os.path.join(
+                    "examples",
+                    "postman_echo",
+                    "request_methods",
+                    "request_with_testcase_reference_test.py",
+                ),
             ),
             testcase_python_list,
         )
 
         with open(
-            "examples/postman_echo/request_methods/request_with_testcase_reference_test.py"
+            os.path.join(
+                "examples",
+                "postman_echo",
+                "request_methods",
+                "request_with_testcase_reference_test.py",
+            )
         ) as f:
             content = f.read()
             self.assertIn(
@@ -65,7 +80,12 @@ from examples.postman_echo.request_methods.request_with_functions_test import (
         self.assertIn(
             os.path.join(
                 os.getcwd(),
-                "examples/postman_echo/request_methods/request_with_functions_test.py",
+                os.path.join(
+                    "examples",
+                    "postman_echo",
+                    "request_methods",
+                    "request_with_functions_test.py",
+                ),
             ),
             testcase_python_list,
         )
@@ -76,20 +96,42 @@ from examples.postman_echo.request_methods.request_with_functions_test import (
             (os.path.join(os.getcwd(), "mubu_login_test.py"), "MubuLogin"),
         )
         self.assertEqual(
-            convert_testcase_path(os.path.join(os.getcwd(), "path/to/mubu.login.yml")),
-            (os.path.join(os.getcwd(), "path/to/mubu_login_test.py"), "MubuLogin"),
+            convert_testcase_path(
+                os.path.join(os.getcwd(), os.path.join("path", "to", "mubu.login.yml"))
+            ),
+            (
+                os.path.join(
+                    os.getcwd(), os.path.join("path", "to", "mubu_login_test.py")
+                ),
+                "MubuLogin",
+            ),
         )
         self.assertEqual(
-            convert_testcase_path("path/to 2/mubu.login.yml"),
-            (os.path.join(os.getcwd(), "path/to_2/mubu_login_test.py"), "MubuLogin"),
+            convert_testcase_path(os.path.join("path", "to 2", "mubu.login.yml")),
+            (
+                os.path.join(
+                    os.getcwd(), os.path.join("path", "to_2", "mubu_login_test.py")
+                ),
+                "MubuLogin",
+            ),
         )
         self.assertEqual(
-            convert_testcase_path("path/to-2/mubu login.yml"),
-            (os.path.join(os.getcwd(), "path/to_2/mubu_login_test.py"), "MubuLogin"),
+            convert_testcase_path(os.path.join("path", "to-2", "mubu login.yml")),
+            (
+                os.path.join(
+                    os.getcwd(), os.path.join("path", "to_2", "mubu_login_test.py")
+                ),
+                "MubuLogin",
+            ),
         )
         self.assertEqual(
-            convert_testcase_path("path/to.2/幕布login.yml"),
-            (os.path.join(os.getcwd(), "path/to_2/幕布login_test.py"), "幕布Login"),
+            convert_testcase_path(os.path.join("path", "to.2", "幕布login.yml")),
+            (
+                os.path.join(
+                    os.getcwd(), os.path.join("path", "to_2", "幕布login_test.py")
+                ),
+                "幕布Login",
+            ),
         )
 
     def test_make_testsuite(self):
@@ -99,14 +141,26 @@ from examples.postman_echo.request_methods.request_with_functions_test import (
         self.assertIn(
             os.path.join(
                 os.getcwd(),
-                "examples/postman_echo/request_methods/demo_testsuite_yml/request_with_functions_test.py",
+                os.path.join(
+                    "examples",
+                    "postman_echo",
+                    "request_methods",
+                    "demo_testsuite_yml",
+                    "request_with_functions_test.py",
+                ),
             ),
             testcase_python_list,
         )
         self.assertIn(
             os.path.join(
                 os.getcwd(),
-                "examples/postman_echo/request_methods/demo_testsuite_yml/request_with_testcase_reference_test.py",
+                os.path.join(
+                    "examples",
+                    "postman_echo",
+                    "request_methods",
+                    "demo_testsuite_yml",
+                    "request_with_testcase_reference_test.py",
+                ),
             ),
             testcase_python_list,
         )
@@ -147,5 +201,5 @@ from examples.postman_echo.request_methods.request_with_functions_test import (
         teststep_chain_style = make_teststep_chain_style(step)
         self.assertEqual(
             teststep_chain_style,
-            """Step(RunRequest("get with params").with_variables(**{'foo1': 'bar1', 'foo2': 123, 'sum_v': '${sum_two(1, 2)}'}).get("/get").with_params(**{'foo1': '$foo1', 'foo2': '$foo2', 'sum_v': '$sum_v'}).with_headers(**{'User-Agent': 'HttpRunner/${get_httprunner_version()}'}).extract().with_jmespath("body.args.foo1", "session_foo1").with_jmespath("body.args.foo2", "session_foo2").validate().assert_equal("status_code", 200).assert_equal("body.args.sum_v", "3"))""",
+            """Step(RunRequest("get with params").with_variables(**{'foo1': 'bar1', 'foo2': 123, 'sum_v': '${sum_two(1, 2)}'}).get("/get").with_params(**{'foo1': '$foo1', 'foo2': '$foo2', 'sum_v': '$sum_v'}).with_headers(**{'User-Agent': 'HttpRunner/${get_httprunner_version()}'}).extract().with_jmespath('body.args.foo1', 'session_foo1').with_jmespath('body.args.foo2', 'session_foo2').validate().assert_equal("status_code", 200).assert_equal("body.args.sum_v", "3"))""",
         )
@@ -107,20 +107,26 @@ class TestUtils(unittest.TestCase):
 
     def test_ensure_file_path_valid(self):
         self.assertEqual(
-            ensure_file_path_valid("examples/a-b.c/d f/hardcode.yml"),
-            os.path.join(os.getcwd(), "examples/a_b_c/d_f/hardcode.yml"),
+            ensure_file_path_valid(
+                os.path.join("examples", "a-b.c", "d f", "hardcode.yml")
+            ),
+            os.path.join(os.getcwd(), "examples", "a_b_c", "d_f", "hardcode.yml"),
         )
         self.assertEqual(
-            ensure_file_path_valid("1/2B/3.yml"),
-            os.path.join(os.getcwd(), "T1/T2B/T3.yml"),
+            ensure_file_path_valid(os.path.join("1", "2B", "3.yml")),
+            os.path.join(os.getcwd(), "T1", "T2B", "T3.yml"),
         )
         self.assertEqual(
-            ensure_file_path_valid("examples/a-b.c/2B/hardcode.yml"),
-            os.path.join(os.getcwd(), "examples/a_b_c/T2B/hardcode.yml"),
+            ensure_file_path_valid(
+                os.path.join("examples", "a-b.c", "2B", "hardcode.yml")
+            ),
+            os.path.join(os.getcwd(), "examples", "a_b_c", "T2B", "hardcode.yml"),
         )
         self.assertEqual(
-            ensure_file_path_valid("examples/postman_echo/request_methods/"),
-            os.path.join(os.getcwd(), "examples/postman_echo/request_methods"),
+            ensure_file_path_valid(
+                os.path.join("examples", "postman_echo", "request_methods")
+            ),
+            os.path.join(os.getcwd(), "examples", "postman_echo", "request_methods"),
         )
         self.assertEqual(
             ensure_file_path_valid(os.path.join(os.getcwd(), "test.yml")),
@@ -130,7 +136,8 @@ class TestUtils(unittest.TestCase):
             ensure_file_path_valid(os.getcwd()), os.getcwd(),
         )
         self.assertEqual(
-            ensure_file_path_valid(os.getcwd() + ".csv"), os.getcwd() + ".csv",
+            ensure_file_path_valid(os.path.join(os.getcwd(), "demo", ".csv")),
+            os.path.join(os.getcwd(), "demo", ".csv"),
         )
 
     def test_safe_dump_json(self):