Mirror of https://github.com/httprunner/httprunner.git (synced 2026-05-13 08:59:44 +08:00)

Commit: remove docs, move docs to universal repo (https://github.com/HttpRunner/docs)
## Unable to install PyUnitReport dependency library automatically

If something goes wrong during installation, like below:

```text
Downloading/unpacking PyUnitReport (from HttpRunner)
Could not find any downloads that satisfy the requirement PyUnitReport (from HttpRunner)
```

you can install `PyUnitReport` manually first.

```bash
$ pip install git+https://github.com/debugtalk/PyUnitReport.git#egg=PyUnitReport
```

Then everything should be OK when you reinstall `HttpRunner`.

```bash
$ pip install git+https://github.com/debugtalk/HttpRunner.git#egg=HttpRunner
```
## Installation

`HttpRunner` is available on [`PyPI`][PyPI] and can be installed through pip or easy_install.

```bash
$ pip install HttpRunner
```

or

```bash
$ easy_install HttpRunner
```

If you want to keep up with the latest version, you can install from the GitHub repository URL.

```bash
$ pip install git+https://github.com/HttpRunner/HttpRunner.git#egg=HttpRunner
```

## Upgrade

If you have installed `HttpRunner` before and want to upgrade to the latest version, use the `-U` option. It works with each installation method described above.

```bash
$ pip install -U HttpRunner
$ easy_install -U HttpRunner
$ pip install -U git+https://github.com/HttpRunner/HttpRunner.git#egg=HttpRunner
```

## Check Installation

When HttpRunner is installed, a **httprunner** (**hrun** for short) command should be available in your shell (if you're not using virtualenv—which you should—make sure your Python script directory is on your path).

To see the `HttpRunner` version:

```bash
$ httprunner -V  # same as: hrun -V
HttpRunner version: 0.8.1b
PyUnitReport version: 0.1.3b
```

To see available options, run:

```bash
$ httprunner -h  # same as: hrun -h
usage: main-debug.py [-h] [-V] [--no-html-report]
                     [--html-report-name HTML_REPORT_NAME]
                     [--html-report-template HTML_REPORT_TEMPLATE]
                     [--log-level LOG_LEVEL] [--log-file LOG_FILE]
                     [--dot-env-path DOT_ENV_PATH] [--failfast]
                     [--startproject STARTPROJECT]
                     [--validate [VALIDATE [VALIDATE ...]]]
                     [--prettify [PRETTIFY [PRETTIFY ...]]]
                     [testcase_paths [testcase_paths ...]]

One-stop solution for HTTP(S) testing.

positional arguments:
  testcase_paths        testcase file path

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         show version
  --no-html-report      do not generate html report.
  --html-report-name HTML_REPORT_NAME
                        specify html report name, only effective when
                        generating html report.
  --html-report-template HTML_REPORT_TEMPLATE
                        specify html report template path.
  --log-level LOG_LEVEL
                        Specify logging level, default is INFO.
  --log-file LOG_FILE   Write logs to specified file path.
  --dot-env-path DOT_ENV_PATH
                        Specify .env file path, which is useful for keeping
                        sensitive data.
  --failfast            Stop the test run on the first error or failure.
  --startproject STARTPROJECT
                        Specify new project name.
  --validate [VALIDATE [VALIDATE ...]]
                        Validate JSON testcase format.
  --prettify [PRETTIFY [PRETTIFY ...]]
                        Prettify JSON testcase format.
```

## Supported Python Versions

HttpRunner supports Python 2.7, 3.4, 3.5, and 3.6, and we strongly recommend `Python 3.6`.

`HttpRunner` has been tested on `macOS`, `Linux` and `Windows` platforms.

[PyPI]: https://pypi.python.org/pypi
# Introduction

## Design Philosophy

Take full advantage of Python's existing powerful libraries: [`Requests`][requests], [`unittest`][unittest] and [`Locust`][Locust], and achieve the goals of API automation testing, production environment monitoring, and API performance testing in a concise and elegant manner.

## Key Features

- Inherit all powerful features of [`Requests`][requests]; just have fun handling HTTP in a human way.
- Define testcases in YAML or JSON format in a concise and elegant manner.
- Supports `function`/`variable`/`extract`/`validate` mechanisms to create full test scenarios.
- With the `debugtalk.py` plugin, module functions can be auto-discovered in recursive upward directories.
- Testcases can be run in diverse ways: a single testcase, multiple testcases, or an entire project folder.
- Test reports are concise and clear, with detailed log records. See [`PyUnitReport`][PyUnitReport].
- With reuse of [`Locust`][Locust], you can run performance tests without extra work.
- CLI commands supported, a perfect combination with [Jenkins][Jenkins].

## Learn more

You can read this [blog][HttpRunner-blog] to learn more about the background and initial thoughts behind `HttpRunner`.

[requests]: http://docs.python-requests.org/en/master/
[unittest]: https://docs.python.org/3/library/unittest.html
[Locust]: http://locust.io/
[PyUnitReport]: https://github.com/HttpRunner/PyUnitReport
[Jenkins]: https://jenkins.io/index.html
[HttpRunner-blog]: http://debugtalk.com/post/ApiTestEngine-api-test-best-practice/
## Versioning

Starting from version 2.0, HttpRunner follows the [`Semantic Versioning`][SemVer] scheme. Written by GitHub co-founder Tom Preston-Werner, this scheme is now widely adopted; following it keeps HttpRunner aligned with the open-source ecosystem and helps avoid "dependency hell".

Specifically, HttpRunner uses `MAJOR.MINOR.PATCH` version numbers:

- MAJOR: incremented for major upgrades that break backward compatibility
- MINOR: incremented when functionality is added in a backward-compatible manner
- PATCH: incremented for backward-compatible bug fixes

Of course, in practice not every commit bumps PATCH; on top of these principles, a pre-release suffix (-alpha/beta/rc) or build metadata (+20190101) may be appended to the version number as needed.
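The `MAJOR.MINOR.PATCH` ordering described above can be sketched with plain tuple comparison. This is an illustrative snippet, not part of HttpRunner itself:

```python
def parse_version(version):
    # Strip any pre-release ("-alpha") or build-metadata ("+20190101") suffix,
    # then split the MAJOR.MINOR.PATCH core into a tuple of ints.
    core = version.split("-")[0].split("+")[0]
    return tuple(int(part) for part in core.split("."))

# Tuples compare element by element, matching semver core ordering.
assert parse_version("2.1.0") > parse_version("2.0.9")
assert parse_version("2.2.0-alpha+20190101") == (2, 2, 0)
```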
## Branching Strategy

HttpRunner's development follows GitHub Flow.

![]()

## Commit Message

Commit messages follow this format convention:

```xml
<type>(<scope>): <subject>
<BLANK LINE>
<body>
<BLANK LINE>
<footer>
```

- **type** (required), roughly one of:
    - feat: a new feature
    - fix: a bug fix
    - docs: documentation changes
    - style: formatting (changes that do not affect how the code runs)
    - perf: performance improvement
    - refactor: refactoring (code changes that neither add a feature nor fix a bug)
    - test: adding tests
    - build: changes to the build process or auxiliary tools
- **subject** (required), a brief summary of the change
- scope (optional), the scope affected by the commit; project-specific, usually the corresponding module
- body (optional), a detailed description of the change
- footer (optional), typically BREAKING CHANGE notes or related issues; see the reference documentation for details
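As a rough illustration (not an official lint tool), the `<type>(<scope>): <subject>` header line of this convention can be checked with a regular expression:

```python
import re

# Types from the convention above; the (scope) part is optional.
HEADER_RE = re.compile(
    r"^(feat|fix|docs|style|perf|refactor|test|build)"
    r"(\([\w\-]+\))?: .+$"
)

assert HEADER_RE.match("feat(runner): support retry on failure")
assert HEADER_RE.match("docs: fix typo in quickstart")
assert not HEADER_RE.match("update stuff")
```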
[SemVer]: https://semver.org/
## Development

To develop or debug `HttpRunner`, clone the source code first.

```bash
$ git clone https://github.com/HttpRunner/HttpRunner.git
```

Then install all dependencies:

```bash
$ pip install -r requirements-dev.txt
```

Now you can use `main-debug.py` as the debugging entrance.

```bash
# debug hrun
$ python main-debug.py hrun -h

# debug locusts
$ python main-debug.py locusts -h
```
Welcome to HttpRunner's documentation!

Click here to view the [documentation in Chinese](http://cn.httprunner.org).
## Load Test

With reuse of [`Locust`][Locust], you can run performance tests without extra work.

```bash
$ locusts -V
[2017-08-26 23:45:42,246] bogon/INFO/stdout: Locust 0.8a2
[2017-08-26 23:45:42,246] bogon/INFO/stdout:
```

For full usage, run `locusts -h` to see the help, and you will find that it is the same as `locust -h`.

The only difference is the `-f` argument. If you specify `-f` with a Python locustfile, it behaves the same as `locust`, while if you specify `-f` with a `YAML/JSON` testcase file, the testcase is converted to a Python locustfile first and then passed to `locust`.

```bash
$ locusts -f examples/first-testcase.yml
[2017-08-18 17:20:43,915] Leos-MacBook-Air.local/INFO/locust.main: Starting web monitor at *:8089
[2017-08-18 17:20:43,918] Leos-MacBook-Air.local/INFO/locust.main: Starting Locust 0.8a2
```

In this case, you can reuse all features of [`Locust`][Locust].

That's not all. With the `--processes` argument, you can even start a locust master and a specified number of slaves (defaulting to the number of CPU cores) at one time, which means you can leverage all CPUs of your machine.

```bash
$ locusts -f examples/first-testcase.yml --processes 4
[2017-08-26 23:51:47,071] bogon/INFO/locust.main: Starting web monitor at *:8089
[2017-08-26 23:51:47,075] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,078] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,080] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,083] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,084] bogon/INFO/locust.runners: Client 'bogon_656e0af8e968a8533d379dd252422ad3' reported as ready. Currently 1 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_09f73850252ee4ec739ed77d3c4c6dba' reported as ready. Currently 2 clients ready to swarm.
[2017-08-26 23:51:47,084] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_869f7ed671b1a9952b56610f01e2006f' reported as ready. Currently 3 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_80a804cda36b80fac17b57fd2d5e7cdb' reported as ready. Currently 4 clients ready to swarm.
```

![]()

Enjoy!

[Locust]: http://locust.io/
# QuickStart

## Introduction to Sample Interface Service

Along with this project, I devised a sample interface service which you can use to get familiar with `HttpRunner`.

This sample service has two main parts:

- Authorization: requests to the other APIs must first sign with some header fields and get a token.
- RESTful APIs for user management: you can do CRUD manipulation on users.

As you can see, it is very similar to mainstream production systems. Therefore, once you are familiar with handling this demo service, you can master most test scenarios in your project.

## Launch Sample Interface Service

The demo service is a Flask server; we can launch it this way.

```text
$ export FLASK_APP=tests/api_server.py
$ flask run
 * Serving Flask app "tests.api_server"
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
```

Now the sample interface service is running, and we can move on to the next step.

## Capture HTTP request and response

Before we write testcases, we should know the details of the API. It is a good choice to use a web debugging proxy tool like `Charles Proxy` to capture the HTTP traffic.

For example, the images below illustrate getting a token from the sample service first, and then creating one user successfully.

![]()

![]()

After a thorough understanding of the APIs, we can begin to write testcases.

## Write the first test case

Open your favorite text editor and write test cases like this.

```yaml
- test:
    name: get token
    request:
        url: http://127.0.0.1:5000/api/get-token
        method: POST
        headers:
            user_agent: iOS/10.3
            device_sn: 9TN6O2Bn1vzfybF
            os_platform: ios
            app_version: 2.8.6
        json:
            sign: 19067cf712265eb5426db8d3664026c1ccea02b9

- test:
    name: create user which does not exist
    request:
        url: http://127.0.0.1:5000/api/users/1000
        method: POST
        headers:
            device_sn: 9TN6O2Bn1vzfybF
            token: F8prvGryC5beBr4g
        json:
            name: "user1"
            password: "123456"
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 201}
        - {"check": "content.success", "comparator": "eq", "expect": true}
```

As you can see, each API request is described in a `test` block, and the `request` field describes the details of the HTTP request, including url, method, headers and data, in line with the captured traffic.

You may wonder why we use the `json` field rather than `data`. That's because the POST data is in `JSON` format: when we use `json` to indicate the post data, we do not have to specify `Content-Type: application/json` in the request headers or dump the data before the request.

Have you recalled some familiar scenes?

Yes! That's what we do in [`requests.request`][requests.request]! Since `HttpRunner` takes full reuse of [`Requests`][requests], it inherits all its powerful features, and we can handle HTTP requests the way we did before.
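The `json` field behaves like the `json=` parameter in Requests: the payload is serialized for you and the `Content-Type` header is set automatically. A minimal stdlib sketch of that behavior (illustrative only, not HttpRunner's actual code):

```python
import json

def prepare_json_request(payload, headers=None):
    # Mimic what happens when you pass `json=` instead of `data=`:
    # dump the payload to a JSON string and default the Content-Type header.
    headers = dict(headers or {})
    headers.setdefault("Content-Type", "application/json")
    return headers, json.dumps(payload)

headers, body = prepare_json_request(
    {"sign": "19067cf712265eb5426db8d3664026c1ccea02b9"},
    headers={"device_sn": "9TN6O2Bn1vzfybF"},
)
assert headers["Content-Type"] == "application/json"
```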
## Run test cases

Suppose the testcase file is named [`quickstart-demo-rev-0.yml`][quickstart-demo-rev-0] and is located in the `examples` folder; then we can run it this way.

```text
$ ate examples/quickstart-demo-rev-0.yml
Running tests...
----------------------------------------------------------------------
get token ... INFO:root: Start to POST http://127.0.0.1:5000/api/get-token
INFO:root: status_code: 200, response_time: 48 ms, response_length: 46 bytes
OK (0.049669)s
create user which does not exist ... INFO:root: Start to POST http://127.0.0.1:5000/api/users/1000
ERROR:root: Failed to POST http://127.0.0.1:5000/api/users/1000! exception msg: 403 Client Error: FORBIDDEN for url: http://127.0.0.1:5000/api/users/1000
ERROR (0.006471)s
----------------------------------------------------------------------
Ran 2 tests in 0.056s

FAILED
(Errors=1)
```

Oops! The second test case failed with a 403 status code.

That is because we sent the same data as captured in `Charles Proxy`, while the `token` is generated dynamically; thus the recorded data cannot be used twice directly.
## Optimize test case: correlation

To fix this problem, we should correlate the `token` field in the second API test case; this is called `correlation`.

```yaml
- test:
    name: get token
    request:
        url: http://127.0.0.1:5000/api/get-token
        method: POST
        headers:
            user_agent: iOS/10.3
            device_sn: 9TN6O2Bn1vzfybF
            os_platform: ios
            app_version: 2.8.6
        json:
            sign: 19067cf712265eb5426db8d3664026c1ccea02b9
    extract:
        - token: content.token
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 200}
        - {"check": "content.token", "comparator": "len_eq", "expect": 16}

- test:
    name: create user which does not exist
    request:
        url: http://127.0.0.1:5000/api/users/1000
        method: POST
        headers:
            device_sn: 9TN6O2Bn1vzfybF
            token: $token
        json:
            name: "user1"
            password: "123456"
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 201}
        - {"check": "content.success", "comparator": "eq", "expect": true}
```

As you can see, the `token` field is no longer hardcoded; instead it is extracted from the first API response with the `extract` mechanism and assigned to the `token` variable, which can then be referenced by subsequent API requests.

Now we save the test cases to [`quickstart-demo-rev-1.yml`][quickstart-demo-rev-1] and rerun it, and we will find both API requests succeed.
## Optimize test case: parameterization

Let's look back at our test set `quickstart-demo-rev-1.yml`: the `device_sn` field is still hardcoded, which may be quite different from actual scenarios.

In actual scenarios, each user's `device_sn` is different, so we should parameterize the request parameters; this is called `parameterization`. Meanwhile, the `sign` field is calculated from the other header fields, so it changes significantly if any header field changes even slightly.

However, the test cases are only `YAML` documents, and it is impossible to generate parameters dynamically in plain text. Fortunately, in `HttpRunner` we can combine `Python` scripts with `YAML/JSON` test cases.

To achieve this goal, we can utilize the `debugtalk.py` plugin and the `variables` mechanism.

To be specific, we create a Python file (`examples/debugtalk.py`) and implement the related algorithms in it. The `debugtalk.py` file can be located beside the `YAML/JSON` testcase file, or in any upward recursive folder. Since we want `debugtalk.py` to be importable, we should put an `__init__.py` in its folder to make it a Python module.

```python
import hashlib
import hmac
import random
import string

SECRET_KEY = "DebugTalk"

def get_sign(*args):
    content = ''.join(args).encode('ascii')
    sign_key = SECRET_KEY.encode('ascii')
    sign = hmac.new(sign_key, content, hashlib.sha1).hexdigest()
    return sign

def gen_random_string(str_len):
    random_char_list = []
    for _ in range(str_len):
        random_char = random.choice(string.ascii_letters + string.digits)
        random_char_list.append(random_char)

    random_string = ''.join(random_char_list)
    return random_string
```

And then, we can revise our demo test case to reference these functions. Suppose the revised file is named [`quickstart-demo-rev-2.yml`][quickstart-demo-rev-2].
```yaml
- test:
    name: get token
    variables:
        - user_agent: 'iOS/10.3'
        - device_sn: ${gen_random_string(15)}
        - os_platform: 'ios'
        - app_version: '2.8.6'
    request:
        url: http://127.0.0.1:5000/api/get-token
        method: POST
        headers:
            user_agent: $user_agent
            device_sn: $device_sn
            os_platform: $os_platform
            app_version: $app_version
        json:
            sign: ${get_sign($user_agent, $device_sn, $os_platform, $app_version)}
    extract:
        - token: content.token
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 200}
        - {"check": "content.token", "comparator": "len_eq", "expect": 16}

- test:
    name: create user which does not exist
    request:
        url: http://127.0.0.1:5000/api/users/1000
        method: POST
        headers:
            device_sn: $device_sn
            token: $token
        json:
            name: "user1"
            password: "123456"
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 201}
        - {"check": "content.success", "comparator": "eq", "expect": true}
```

In this revised test case, both the `variable reference` and `function invoke` mechanisms are used.

To allow fields like `device_sn` to be used more than once, we bind values to variables in the `variables` block. When we bind variables, we can bind an exact value to a variable name, or call a function and bind the evaluated value to it.

When we want to reference a variable in the test case, we do so with the escape character `$`. For example, `$user_agent` is not taken as a normal string; `HttpRunner` treats it as a variable named `user_agent` and looks up and returns its bound value.

When we want to invoke a function, we use another escape sequence, `${}`. Any content in `${}` is treated as a function call, so we should make sure we call functions in the right way. Variables can also be referenced as function parameters.
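A toy version of this templating makes the two escape forms concrete. This is a simplified illustration, not HttpRunner's real parser; the `render` helper, `upper` function and variable names are all made up for the example:

```python
import re

def render(template, variables, functions):
    # First expand ${func($a, $b)} style calls, then bare $var references.
    def call(match):
        name, argstr = match.group(1), match.group(2)
        args = [variables[a.strip().lstrip("$")] for a in argstr.split(",") if a.strip()]
        return str(functions[name](*args))
    template = re.sub(r"\$\{(\w+)\(([^)]*)\)\}", call, template)
    return re.sub(r"\$(\w+)", lambda m: str(variables[m.group(1)]), template)

vars_ = {"user_agent": "iOS/10.3", "os_platform": "ios"}
funcs = {"upper": lambda s: s.upper()}
assert render("$user_agent", vars_, funcs) == "iOS/10.3"
assert render("${upper($os_platform)}", vars_, funcs) == "IOS"
```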
## Optimize test case: overall config block

There is still one issue unsolved.

The `device_sn` field is defined in the first API test case, so it may be impossible to reference it in the other test cases. Context separation is a well-designed mechanism, and we should follow this good practice.

To handle this case, an overall `config` block is supported in `HttpRunner`. If we define variables or import functions in the `config` block, they become global and can be referenced throughout the whole test set.

```yaml
# examples/quickstart-demo-rev-3.yml
- config:
    name: "smoketest for CRUD users."
    variables:
        - device_sn: ${gen_random_string(15)}
    request:
        base_url: http://127.0.0.1:5000
        headers:
            device_sn: $device_sn

- test:
    name: get token
    variables:
        - user_agent: 'iOS/10.3'
        - os_platform: 'ios'
        - app_version: '2.8.6'
    request:
        url: /api/get-token
        method: POST
        headers:
            user_agent: $user_agent
            os_platform: $os_platform
            app_version: $app_version
        json:
            sign: ${get_sign($user_agent, $device_sn, $os_platform, $app_version)}
    extract:
        - token: content.token
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 200}
        - {"check": "content.token", "comparator": "len_eq", "expect": 16}

- test:
    name: create user which does not exist
    request:
        url: /api/users/1000
        method: POST
        headers:
            token: $token
        json:
            name: "user1"
            password: "123456"
    validate:
        - {"check": "status_code", "comparator": "eq", "expect": 201}
        - {"check": "content.success", "comparator": "eq", "expect": true}
```

As you can see, we define variables in the `config` block. We can also set `base_url` in the `config` block, so that each API request url can use a relative path. Besides, we can set common fields in the `config` `request`, such as `device_sn` in headers.
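Resolving each relative `url` against the config `base_url` works like standard URL resolution; whether HttpRunner uses `urljoin` internally is an assumption, but the stdlib shows the observable behavior:

```python
from urllib.parse import urljoin

base_url = "http://127.0.0.1:5000"

# An absolute path like "/api/get-token" is resolved against the base URL,
# which is what makes relative urls in each test block possible.
assert urljoin(base_url, "/api/get-token") == "http://127.0.0.1:5000/api/get-token"
assert urljoin(base_url, "/api/users/1000") == "http://127.0.0.1:5000/api/users/1000"
```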
Until now, the test cases are finished and each detail is handled properly.

## Run test cases and generate report

Finally, let's run the test set [`quickstart-demo-rev-3.yml`][quickstart-demo-rev-3] once more.

```text
$ ate examples/quickstart-demo-rev-3.yml
Running tests...
----------------------------------------------------------------------
get token ... INFO:root: Start to POST http://127.0.0.1:5000/api/get-token
INFO:root: status_code: 200, response_time: 33 ms, response_length: 46 bytes
OK (0.037027)s
create user which does not exist ... INFO:root: Start to POST http://127.0.0.1:5000/api/users/1000
INFO:root: status_code: 201, response_time: 15 ms, response_length: 54 bytes
OK (0.016414)s
----------------------------------------------------------------------
Ran 2 tests in 0.054s
OK

Generating HTML reports...
Template is not specified, load default template instead.
Reports generated: /Users/Leo/MyProjects/HttpRunner/reports/quickstart-demo-rev-0/2017-08-01-16-51-51.html
```

Great! The test case runs successfully and generates an `HTML` test report.

![]()

## Further more

This is just a starting point; see the `advanced guide` for advanced features.

- templating
- [`data extraction and validation`][extraction-and-validation]
- [`comparator`][comparator]

[requests]: http://docs.python-requests.org/en/master/
[requests.request]: http://docs.python-requests.org/en/master/api/#requests.request
[comparator]: write-testcases.md#comparator
[extraction-and-validation]: write-testcases.md#extraction-and-validation
[quickstart-demo-rev-0]: ../examples/quickstart-demo-rev-0.yml
[quickstart-demo-rev-1]: ../examples/quickstart-demo-rev-1.yml
[quickstart-demo-rev-2]: ../examples/quickstart-demo-rev-2.yml
[quickstart-demo-rev-3]: ../examples/quickstart-demo-rev-3.yml
## Run testcases

`HttpRunner` can run testcases in diverse ways.

You can run a single testcase by specifying the testcase file path.

```text
$ httprunner filepath/testcase.yml
```

You can also run several testcases by specifying multiple testcase file paths.

```text
$ httprunner filepath1/testcase1.yml filepath2/testcase2.yml
```

If you want to run all testcases of a whole project, specify the project folder path.

```text
$ httprunner testcases_folder_path
```

When you do continuous integration testing or production environment monitoring with `Jenkins`, you may need to send test result notifications. For instance, you can send email with the mailgun service as below.

```text
$ httprunner filepath/testcase.yml --report-name ${BUILD_NUMBER} \
    --mailgun-smtp-username "qa@debugtalk.com" \
    --mailgun-smtp-password "12345678" \
    --email-sender excited@samples.mailgun.org \
    --email-recepients ${MAIL_RECEPIENTS} \
    --jenkins-job-name ${JOB_NAME} \
    --jenkins-job-url ${JOB_URL} \
    --jenkins-build-number ${BUILD_NUMBER}
```
It is recommended to write testcases in `YAML` format.

## Demo

Here is a testcase example of a typical scenario: get a `token` at the beginning, and have each subsequent request take the `token` in its headers.

```yaml
- config:
    name: "create user testcases."
    variables:
        - user_agent: 'iOS/10.3'
        - device_sn: ${gen_random_string(15)}
        - os_platform: 'ios'
        - app_version: '2.8.6'
    request:
        base_url: "http://127.0.0.1:5000"
        headers:
            Content-Type: application/json
            device_sn: $device_sn

- test:
    name: get token
    request:
        url: /api/get-token
        method: POST
        headers:
            user_agent: $user_agent
            device_sn: $device_sn
            os_platform: $os_platform
            app_version: $app_version
        json:
            sign: ${get_sign($user_agent, $device_sn, $os_platform, $app_version)}
    extract:
        - token: content.token
    validate:
        - eq: ["status_code", 200]
        - len_eq: ["content.token", 16]

- test:
    name: create user which does not exist
    request:
        url: /api/users/1000
        method: POST
        headers:
            token: $token
        json:
            name: "user1"
            password: "123456"
    validate:
        - eq: ["status_code", 201]
        - eq: ["content.success", true]
```

Function invocation is supported in `YAML/JSON` format testcases, such as `gen_random_string` and `get_sign` above. This mechanism relies on the `debugtalk.py` hot plugin: we define functions in the `debugtalk.py` file, and they are auto-discovered and invoked at runtime.
For detailed rules on writing testcases, you can read the [`quickstart`](quickstart.md) documents.

## Comparator

`HttpRunner` currently supports the following comparators.

| comparator | Description | A(check), B(expect) | examples |
| --- | --- | --- | --- |
| `eq`, `==` | value is equal | A == B | 9 eq 9 |
| `lt` | less than | A < B | 7 lt 8 |
| `le` | less than or equals | A <= B | 7 le 8, 8 le 8 |
| `gt` | greater than | A > B | 8 gt 7 |
| `ge` | greater than or equals | A >= B | 8 ge 7, 8 ge 8 |
| `ne` | not equals | A != B | 6 ne 9 |
| `str_eq` | string equals | str(A) == str(B) | 123 str_eq '123' |
| `len_eq`, `count_eq` | length or count equals | len(A) == B | 'abc' len_eq 3, [1,2] len_eq 2 |
| `len_gt`, `count_gt` | length greater than | len(A) > B | 'abc' len_gt 2, [1,2,3] len_gt 2 |
| `len_ge`, `count_ge` | length greater than or equals | len(A) >= B | 'abc' len_ge 3, [1,2,3] len_ge 3 |
| `len_lt`, `count_lt` | length less than | len(A) < B | 'abc' len_lt 4, [1,2,3] len_lt 4 |
| `len_le`, `count_le` | length less than or equals | len(A) <= B | 'abc' len_le 3, [1,2,3] len_le 3 |
| `contains` | contains | B in A | 'abc' contains 'a', [1,2,3] contains 2 |
| `contained_by` | contained by | A in B | 'a' contained_by 'abc', 1 contained_by [1,2] |
| `type_match` | type matches | isinstance(A, B) | 123 type_match 'int' |
| `regex_match` | regex matches | re.match(B, A) | 'abcdef' regex_match 'a\w+d' |
| `startswith` | starts with | A.startswith(B) is True | 'abc' startswith 'ab' |
| `endswith` | ends with | A.endswith(B) is True | 'abc' endswith 'bc' |
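Comparators like these can be modeled as simple predicate functions over (check, expect) pairs. This is an illustrative sketch, not HttpRunner's actual source:

```python
import re

# Each comparator maps (check_value, expect_value) to a bool.
comparators = {
    "eq": lambda a, b: a == b,
    "lt": lambda a, b: a < b,
    "len_eq": lambda a, b: len(a) == b,
    "contains": lambda a, b: b in a,
    "contained_by": lambda a, b: a in b,
    "regex_match": lambda a, b: re.match(b, a) is not None,
    "startswith": lambda a, b: str(a).startswith(str(b)),
    "endswith": lambda a, b: str(a).endswith(str(b)),
}

assert comparators["len_eq"]("abc", 3)
assert comparators["regex_match"]("abcdef", r"a\w+d")
assert comparators["contained_by"](1, [1, 2])
```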
## Extraction and Validation

Suppose we get the following HTTP response.

```javascript
// status code: 200

// response headers
{
    "Content-Type": "application/json"
}

// response body content
{
    "success": false,
    "person": {
        "name": {
            "first_name": "Leo",
            "last_name": "Lee"
        },
        "age": 29,
        "cities": ["Guangzhou", "Shenzhen"]
    }
}
```

In `extract` and `validate`, we can use chained operations to extract data fields from the HTTP response.

For instance, if we want to get `Content-Type` from the response headers, we can specify `headers.content-type`; if we want to get `first_name` from the response content, we can specify `content.person.name.first_name`.

Lists are slightly different, because we use an index to locate a list item. For example, `Guangzhou` in the response content can be specified by `content.person.cities.0`.

```javascript
// get status code
status_code

// get headers field
headers.content-type

// get content field
body.success
content.success
text.success
content.person.name.first_name
content.person.cities.1
```
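The chained-field syntax can be modeled as a walk through nested dicts and lists. A minimal illustrative implementation (not HttpRunner's real extractor):

```python
def extract_field(response, chain):
    # Walk a dot-separated chain such as "content.person.cities.0"
    # through nested dicts, using int indices for list items.
    value = response
    for key in chain.split("."):
        if isinstance(value, list):
            value = value[int(key)]
        else:
            value = value[key]
    return value

resp = {
    "status_code": 200,
    "headers": {"content-type": "application/json"},
    "content": {
        "success": False,
        "person": {
            "name": {"first_name": "Leo", "last_name": "Lee"},
            "age": 29,
            "cities": ["Guangzhou", "Shenzhen"],
        },
    },
}
assert extract_field(resp, "content.person.name.first_name") == "Leo"
assert extract_field(resp, "content.person.cities.0") == "Guangzhou"
```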
```yaml
extract:
    - content_type: headers.content-type
    - first_name: content.person.name.first_name
validate:
    - eq: ["status_code", 200]
    - eq: ["headers.content-type", "application/json"]
    - gt: ["headers.content-length", 40]
    - eq: ["content.success", true]
    - len_eq: ["content.token", 16]
```