mirror of https://github.com/httprunner/httprunner.git
synced 2026-05-13 17:29:56 +08:00

Merge branch 'master' into dev

.github/ISSUE_TEMPLATE/bug_report.md (vendored): 6 changes
@@ -1,7 +1,9 @@
---
name: Bug report
about: Create a report to help us improve

title: ''
labels: Pending
assignees: debugtalk
---

## Describe the bug

@@ -14,7 +16,7 @@ Please complete the following information:

- OS: [e.g. macos, Linux, Windows]
- Python [e.g. 3.6]
- HttpRunner [e.g. 1.5.11]
- HttpRunner [e.g. 2.1.2]

## Traceback
.github/ISSUE_TEMPLATE/bug_report_zh.md (vendored): 4 changes
@@ -1,7 +1,9 @@
---
name: Bug report (Chinese)
about: Submit a bug report

title: ''
labels: Pending
assignees: debugtalk
---

## Describe the bug
.github/ISSUE_TEMPLATE/feature_request.md (vendored, new file): 19 changes
@@ -0,0 +1,19 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: Pending
assignees: debugtalk
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
.github/ISSUE_TEMPLATE/feature_request_zh.md (vendored, new file): 19 changes
@@ -0,0 +1,19 @@
---
name: Feature request (Chinese)
about: A feature you would like to see added or improved
title: ''
labels: Pending
assignees: debugtalk
---

## Background

> Focus on the problem you ran into: in which scenario do HttpRunner's current features fail to meet (or poorly meet) your needs.

## Expected feature

> Describe the feature you would like HttpRunner to provide.

## Example (optional)

> Describing your need alongside an example helps the developers understand it more precisely.
.github/workflows/integration_test.yml (vendored, new file): 36 changes
@@ -0,0 +1,36 @@
name: integration_test

on: [push]

jobs:
  integration_test:

    name: integration_test - ${{ matrix.python-version }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      max-parallel: 6
      matrix:
        python-version: [2.7, 3.5, 3.6, 3.7, 3.8]
        os: [ubuntu-latest, macos-latest] # TODO: windows-latest

    steps:
    - uses: actions/checkout@v1
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v1
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install poetry
        poetry --version
        poetry install -vv
    - name: Test package installation
      run: |
        poetry build
        ls dist/*.whl | xargs pip install # test installation
        hrun -V
        locusts -V
    - name: Run smoketest for hrun command
      run: |
        cd tests/httpbin && hrun basic.yml --failfast && cd -
.github/workflows/unittest.yml (vendored, new file): 47 changes
@@ -0,0 +1,47 @@
name: unittest

on: [push]

jobs:
  unittest:

    name: unittest - ${{ matrix.python-version }} on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      max-parallel: 12
      matrix:
        python-version: [2.7, 3.5, 3.6, 3.7] # TODO: 3.8
        os: [ubuntu-latest, macos-latest] # TODO: windows-latest

    steps:
    - uses: actions/checkout@v1
    - name: Set up Python ${{ matrix.python-version }}
      uses: actions/setup-python@v1
      with:
        python-version: ${{ matrix.python-version }}
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install poetry
        poetry --version
        poetry install -vv
    - name: Run unittest for httprunner
      run: |
        poetry run python -m httprunner.cli hrun -V
        poetry run python -m httprunner.cli hrun -h
        poetry run coverage run --source=httprunner -m unittest discover
        poetry run coverage xml
        poetry run coverage report -m
    - name: Codecov
      uses: codecov/codecov-action@v1.0.5
      with:
        # User defined upload name. Visible in Codecov UI
        name: httprunner
        # Repository upload token - get it from codecov.io
        token: ${{ secrets.CODECOV_TOKEN }}
        # Path to coverage file to upload
        file: ./coverage.xml
        # Flag upload to group coverage metrics (e.g. unittests | integration | ui,chrome)
        flags: unittests
        # Specify whether or not CI build should fail if Codecov runs into an error during upload
        fail_ci_if_error: true
.gitignore (vendored): 4 changes
@@ -14,4 +14,6 @@ logs
locustfile.py
site/
reports
.venv
*.xml
htmlcov/
.travis.yml (deleted): 22 changes
@@ -1,22 +0,0 @@
sudo: false
language: python
python:
- 2.7
- 3.5
- 3.6
matrix:
  include: # Required for Python 3.7+
  - python: 3.7
    dist: xenial
  - python: 3.8
    dist: xenial
install:
- pip install poetry
- poetry install -vvv
script:
- python -m httprunner.cli hrun -V
- python -m httprunner.cli hrun -h
- poetry build
- poetry run coverage run --source=httprunner -m unittest discover
after_success:
- poetry run coveralls
README.md: 13 changes
@@ -2,8 +2,11 @@
# HttpRunner

[](https://pepy.tech/project/httprunner)
[](https://travis-ci.org/httprunner/httprunner)
[](https://coveralls.io/github/HttpRunner/HttpRunner?branch=master)
[](https://github.com/httprunner/httprunner/actions)
[](https://github.com/httprunner/httprunner/actions)
[](https://codecov.io/gh/httprunner/httprunner)
[](https://pypi.python.org/pypi/httprunner)
[](https://pypi.python.org/pypi/httprunner)
[](https://testerhome.com/github_statistics)

@@ -49,6 +52,12 @@ Thank you to all our sponsors! ✨🍰✨ ([become a sponsor](docs/sponsors.md))

霍格沃兹测试学院 (Hogwarts Testing Academy) is HttpRunner's first gold sponsor.

### Open Source Sponsor

[<img src="docs/assets/sentry-logo-black.svg" alt="Sentry" width="150">](https://sentry.io/_/open-source/)

HttpRunner is on the Sentry sponsored plan.

## How to Contribute

1. Check for [open issues](https://github.com/httprunner/httprunner/issues) or [open a fresh issue](https://github.com/httprunner/httprunner/issues/new/choose) to start a discussion around a feature idea or a bug.
@@ -1,5 +1,106 @@
# Release History
## 2.4.8 (2019-12-25)

**Added**

- feat: store parse failed api/testcase/testsuite file path in `logs/xxx.parse_failed.json`
- feat: add exception SummaryEmpty

**Fixed**

- fix: display request & response details in report when extraction failed
- fix: include CHANGELOG in package

**Changed**

- change: use sys.exit(code) in hrun main

## 2.4.7 (2019-12-24)

**Added**

- feat: report user id to sentry

**Fixed**

- fix #797: locusts command error

## 2.4.6 (2019-12-23)

**Added**

- feat: report tests start event and running exception to sentry

**Fixed**

- fix: ensure initializing sentry_sdk on startup

## 2.4.5 (2019-12-20)

**Added**

- feat: integrate sentry sdk

**Fixed**

- fix: catch UnicodeDecodeError when json loads request body
- fix: display indented json for request json body

**Changed**

- change: detect request/response bytes encoding, instead of assuming utf-8
- refactor: make report as submodule

## 2.4.4 (2019-12-17)

**Added**

- feat: add keyword `body` to reference response body

**Changed**

- refactor: dumps request/response headers, display indented json in html report
- refactor: dumps request/response body if it is in json format, display indented json in html report
- change: unify response field(content/json/text) to `body` in html report

## 2.4.3 (2019-12-16)

**Added**

- feat: load api content on demand

**Changed**

- refactor: use poetry>=1.0.0
- test: migrate from travis CI to github actions
- test: migrate from coveralls to codecov
- test: run matrix tests on linux/macos/~~windows~~ and Python 2.7/3.5/3.6/3.7/3.8

## 2.4.2 (2019-12-13)

**Changed**

- refactor: replace with open file handler, avoid reading files into memory
- refactor: rename plugin to extension, httprunner/plugins -> httprunner/ext
- docs: update installation doc for developers

## 2.4.1 (2019-12-12)

**Added**

- feat: add `upload` keyword for upload test, see [doc](https://docs.httprunner.org/prepare/upload-case/)
- test: pip install package
- test: hrun command

**Fixed**

- fix: typo testfile_paths
- fix: check if locustio installed
- fix: dump json file name is empty when running relative testfile

## 2.4.0 (2019-12-11)

**Added**
@@ -2,7 +2,7 @@

HttpRunner is a testing framework developed in Python; it runs on macOS, Linux, and Windows.

**Python version**: HttpRunner supports every Python version from 3.4 onward and runs [continuous integration tests][travis-ci] on Travis-CI, covering versions 2.7/3.4/3.5/3.6/3.7. Although HttpRunner temporarily keeps compatibility with Python 2.7, Python 3.4 or above is strongly recommended.
**Python version**: HttpRunner supports every Python version from 3.5 onward and runs [continuous integration tests][travis-ci] on Travis-CI, covering versions 2.7/3.5/3.6/3.7/3.8. Although HttpRunner temporarily keeps compatibility with Python 2.7, Python 3.6 or above is strongly recommended.

**Operating system**: macOS/Linux is recommended.
@@ -45,10 +45,10 @@ the three commands httprunner, hrun, and ate are completely equivalent, with identical features

```text
$ hrun -V
2.0.2
2.4.1

$ har2case -V
0.2.0
0.3.1
```

## Developer mode

@@ -57,10 +57,10 @@ $ har2case -V

If you not only use HttpRunner but also need to develop and debug it, follow the steps below.

HttpRunner uses [pipenv][pipenv] to manage its dependencies; if you have not installed pipenv yet, install it first with the following command:
HttpRunner uses [poetry][poetry] to manage its dependencies; if you have not installed poetry yet, install it first with the following command:

```bash
$ pip install pipenv
$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python
```

Get the HttpRunner source code:
@@ -72,49 +72,67 @@ $ git clone https://github.com/HttpRunner/HttpRunner.git

Enter the repository directory and install all dependencies:

```bash
$ pipenv install --dev
$ poetry install
```

Run the unit tests; if they all pass, the environment works correctly.

```bash
$ pipenv run python -m unittest discover
$ poetry run python -m unittest discover
```

Inspect HttpRunner's dependencies:

```text
$ pipenv graph

HttpRunner==2.0.0
  - colorama [required: Any, installed: 0.4.0]
  - colorlog [required: Any, installed: 3.1.4]
  - har2case [required: Any, installed: 0.2.0]
    - PyYAML [required: Any, installed: 3.13]
  - Jinja2 [required: Any, installed: 2.10]
    - MarkupSafe [required: >=0.23, installed: 1.0]
  - PyYAML [required: Any, installed: 3.13]
  - requests [required: Any, installed: 2.20.0]
    - certifi [required: >=2017.4.17, installed: 2018.10.15]
    - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
    - idna [required: >=2.5,<2.8, installed: 2.7]
    - urllib3 [required: >=1.21.1,<1.25, installed: 1.24]
  - requests-toolbelt [required: Any, installed: 0.8.0]
    - requests [required: >=2.0.1,<3.0.0, installed: 2.20.0]
      - certifi [required: >=2017.4.17, installed: 2018.10.15]
      - chardet [required: >=3.0.2,<3.1.0, installed: 3.0.4]
      - idna [required: >=2.5,<2.8, installed: 2.7]
      - urllib3 [required: >=1.21.1,<1.25, installed: 1.24]
```

```bash
$ poetry show --tree
colorama 0.4.1 Cross-platform colored terminal text.
colorlog 4.0.2 Log formatting with colors!
└── colorama *
coverage 4.5.4 Code coverage measurement for Python
coveralls 1.8.2 Show coverage stats online via coveralls.io
├── coverage >=3.6,<5.0
├── docopt >=0.6.1
├── requests >=1.0.0
│   ├── certifi >=2017.4.17
│   ├── chardet >=3.0.2,<3.1.0
│   ├── idna >=2.5,<2.9
│   └── urllib3 >=1.21.1,<1.25.0 || >1.25.0,<1.25.1 || >1.25.1,<1.26
└── urllib3 *
filetype 1.0.5 Infer file type and MIME type of any file/buffer. No external dependencies.
flask 0.12.4 A microframework based on Werkzeug, Jinja2 and good intentions
├── click >=2.0
├── itsdangerous >=0.21
├── jinja2 >=2.4
│   └── markupsafe >=0.23
└── werkzeug >=0.7
future 0.18.1 Clean single-source support for Python 3 and 2
har2case 0.3.1 Convert HAR(HTTP Archive) to YAML/JSON testcases for HttpRunner.
└── pyyaml *
jinja2 2.10.3 A very fast and expressive template engine.
└── markupsafe >=0.23
jsonpath 0.82 An XPath for JSON
pyyaml 5.1.2 YAML parser and emitter for Python
requests 2.22.0 Python HTTP for Humans.
├── certifi >=2017.4.17
├── chardet >=3.0.2,<3.1.0
├── idna >=2.5,<2.9
└── urllib3 >=1.21.1,<1.25.0 || >1.25.0,<1.25.1 || >1.25.1,<1.26
requests-toolbelt 0.9.1 A utility belt for advanced users of python-requests
└── requests >=2.0.1,<3.0.0
    ├── certifi >=2017.4.17
    ├── chardet >=3.0.2,<3.1.0
    ├── idna >=2.5,<2.9
    └── urllib3 >=1.21.1,<1.25.0 || >1.25.0,<1.25.1 || >1.25.1,<1.26
```
Debug run commands:

```bash
# debug hrun
$ pipenv run python main-debug.py hrun -h
$ poetry run python -m httprunner -h

# debug locusts
$ pipenv run python main-debug.py locusts -h
$ pipenv run python -m httprunner.ext.locusts -h
```

## Docker

@@ -124,4 +142,4 @@ TODO

[travis-ci]: https://travis-ci.org/HttpRunner/HttpRunner
[Locust]: http://locust.io/
[har2case]: https://github.com/HttpRunner/har2case
[pipenv]: https://docs.pipenv.org/
[poetry]: https://github.com/sdispater/poetry
docs/assets/sentry-logo-black.svg (new file): 1 change
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 717.11 249.68"><title>sentry-logo-black</title><path d="M430.56,143.76,386.07,86.33H375v77h11.22v-59l45.74,59h9.82v-77H430.56Zm-112-14.27H358.4v-10H318.52V96.31h45v-10H307.07v77h57v-10H318.52Zm-46.84-9.78c-15.57-3.72-19.83-6.69-19.83-13.84,0-6.46,5.71-10.81,14.22-10.81,7.09,0,14.07,2.51,21.3,7.67l6.06-8.54c-8-6.13-16.65-9-27.13-9-15.25,0-25.89,9-25.89,21.92,0,13.84,9,18.63,25.5,22.63,14.51,3.35,18.93,6.5,18.93,13.5s-6,11.38-15.35,11.38c-9.07,0-16.81-3-25-9.82l-6.79,8.08a47.82,47.82,0,0,0,31.41,11.6c16.49,0,27.14-8.87,27.14-22.6C296.27,130.23,289.38,124,271.68,119.71Zm373.9-33.37-23.19,36.31-23-36.31H586l30.51,46.54v30.47h11.56V132.53l30.5-46.19ZM450.87,96.76H476.1v66.58h11.57V96.76h25.23V86.33h-62ZM566.4,133.28c11.64-3.21,18-11.37,18-23,0-14.78-10.84-24-28.28-24H522v77h11.45V135.62h19.42l19.54,27.72h13.37l-21.1-29.58Zm-33-7.52V96.53H555c11.27,0,17.74,5.31,17.74,14.56,0,8.91-6.92,14.67-17.62,14.67ZM144.9,65.43a13.75,13.75,0,0,0-23.81,0l-19.6,33.95,5,2.87a96.14,96.14,0,0,1,47.83,77.4H140.56a82.4,82.4,0,0,0-41-65.54l-5-2.86L76.3,143l5,2.87a46.35,46.35,0,0,1,22.46,33.78H72.33a2.27,2.27,0,0,1-2-3.41l8.76-15.17a31.87,31.87,0,0,0-10-5.71L60.42,170.5a13.75,13.75,0,0,0,11.91,20.62h43.25v-5.73A57.16,57.16,0,0,0,91.84,139l6.88-11.92a70.93,70.93,0,0,1,30.56,58.26v5.74h36.65v-5.73A107.62,107.62,0,0,0,117.09,95.3L131,71.17a2.27,2.27,0,0,1,3.93,0l60.66,105.07a2.27,2.27,0,0,1-2,3.41H179.4c.18,3.83.2,7.66,0,11.48h14.24a13.75,13.75,0,0,0,11.91-20.62Z" style="fill:#221f20"/></svg>
docs/prepare/upload-case.md (new file): 51 changes
@@ -0,0 +1,51 @@
For file-upload test scenarios, HttpRunner integrates [requests_toolbelt][1] to implement uploads.

Before using it, make sure the following dependencies are installed:

- [requests_toolbelt](https://github.com/requests/toolbelt)
- [filetype](https://github.com/h2non/filetype.py)

With the built-in `upload` keyword, uploads are easy to write (available since 2.4.1).

```yaml
- test:
    name: upload file
    request:
        url: http://httpbin.org/upload
        method: POST
        headers:
            Cookie: session=AAA-BBB-CCC
        upload:
            file: "data/file_to_upload"
            field1: "value1"
            field2: "value2"
    validate:
        - eq: ["status_code", 200]
```

You can also keep using the earlier style (available since 2.0).

```yaml
- test:
    name: upload file
    variables:
        file: "data/file_to_upload"
        field1: "value1"
        field2: "value2"
        m_encoder: ${multipart_encoder(file=$file, field1=$field1, field2=$field2)}
    request:
        url: http://httpbin.org/upload
        method: POST
        headers:
            Content-Type: ${multipart_content_type($m_encoder)}
            Cookie: session=AAA-BBB-CCC
        data: $m_encoder
    validate:
        - eq: ["status_code", 200]
```

Reference example: [httprunner/tests/httpbin/upload.v2.yml][2]

[1]: https://toolbelt.readthedocs.io/en/latest/uploading-data.html
[2]: https://github.com/httprunner/httprunner/blob/master/tests/httpbin/upload.v2.yml
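Under the hood, the `upload` keyword builds a fields mapping for requests_toolbelt's MultipartEncoder: values that resolve to an existing file become `(filename, content, mime_type)` tuples, while everything else passes through as a plain form field. A stdlib-only sketch of that decision (using `mimetypes` in place of the `filetype` package HttpRunner actually depends on; `build_upload_fields` is a hypothetical helper name):

```python
import mimetypes
import os
import tempfile


def build_upload_fields(base_dir, **kwargs):
    """Map upload fields: existing files -> (name, bytes, mime), else raw value."""
    fields = {}
    for key, value in kwargs.items():
        path = value if os.path.isabs(value) else os.path.join(base_dir, value)
        if os.path.isfile(path):
            # stdlib stand-in for the filetype package's MIME detection
            mime = mimetypes.guess_type(path)[0] or "text/html"
            with open(path, "rb") as f:
                fields[key] = (os.path.basename(path), f.read(), mime)
        else:
            fields[key] = value
    return fields


# demo: one real file field plus one plain form field
with tempfile.TemporaryDirectory() as tmp:
    file_path = os.path.join(tmp, "file_to_upload.txt")
    with open(file_path, "w") as f:
        f.write("hello")
    fields = build_upload_fields(tmp, file=file_path, field1="value1")
    assert fields["file"][0] == "file_to_upload.txt"
    assert fields["field1"] == "value1"
```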
@@ -10,9 +10,15 @@

霍格沃兹测试学院 (Hogwarts Testing Academy) is HttpRunner's first gold sponsor.

### Open Source Sponsor

[<img src="./assets/sentry-logo-black.svg" alt="Sentry" width="150">](https://sentry.io/_/open-source/)

HttpRunner is on the Sentry sponsored plan.

## Become a Sponsor

If your company, or you personally, would like to sponsor HttpRunner, see the plans below and contact the [project author](mailto:mail@debugtalk.com) for details.
If your company, or you personally, would like to sponsor HttpRunner, see the plans below and contact the [project author](mailto:debugtalk@gmail.com) for details.

| Tier | Gold Sponsor | Silver Sponsor | Personal Donation |
|:---:|:---:|:---:|:---:|
@@ -1,4 +1,16 @@
__version__ = "2.4.0"
__version__ = "2.4.8"
__description__ = "One-stop solution for HTTP(S) testing."

__all__ = ["__version__", "__description__"]

import uuid

import sentry_sdk

sentry_sdk.init(
    dsn="https://cc6dd86fbe9f4e7fbd95248cfcff114d@sentry.io/1862849",
    release="httprunner@{}".format(__version__)
)

with sentry_sdk.configure_scope() as scope:
    scope.set_user({"id": uuid.getnode()})
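The user id reported to Sentry above is `uuid.getnode()`, a stable, machine-derived identifier rather than anything personal. A minimal sketch of what that value looks like:

```python
import uuid

# uuid.getnode() returns the hardware (MAC) address as a 48-bit
# positive integer; when no MAC can be read, Python falls back to a
# random 48-bit number with the multicast bit set.
node_id = uuid.getnode()

assert isinstance(node_id, int)
assert 0 <= node_id < 2 ** 48
```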
@@ -1,6 +1,5 @@
import sys

from httprunner.cli import main


if __name__ == "__main__":
    sys.exit(main())
    main()
@@ -1,6 +1,8 @@
import os
import unittest

from sentry_sdk import capture_message

from httprunner import (__version__, exceptions, loader, logger, parser,
                        report, runner, utils)
@@ -183,6 +185,7 @@ class HttpRunner(object):
    def run_tests(self, tests_mapping):
        """ run testcase/testsuite data
        """
        capture_message("start to run tests")
        project_mapping = tests_mapping.get("project_mapping", {})
        self.project_working_directory = project_mapping.get("PWD", os.getcwd())

@@ -192,6 +195,10 @@ class HttpRunner(object):
        # parse tests
        self.exception_stage = "parse tests"
        parsed_testcases = parser.parse_tests(tests_mapping)
        parse_failed_testfiles = parser.get_parse_failed_testfiles()
        if parse_failed_testfiles:
            logger.log_warning("parse failures occurred ...")
            utils.dump_logs(parse_failed_testfiles, project_mapping, "parse_failed")

        if self.save_tests:
            utils.dump_logs(parsed_testcases, project_mapping, "parsed")

@@ -274,6 +281,8 @@ class HttpRunner(object):
            path_or_tests:
                str: testcase/testsuite file/folder path
                dict: valid testcase/testsuite data
            dot_env_path (str): specified .env file path.
            mapping (dict): if mapping is specified, it will override variables in config block.

        Returns:
            dict: result summary
@@ -3,19 +3,13 @@ Built-in functions used in YAML/JSON testcases.
"""

import datetime
import os
import random
import string
import time

import filetype
from requests_toolbelt import MultipartEncoder

from httprunner.compat import builtin_str, integer_types
from httprunner.exceptions import ParamsError

PWD = os.getcwd()


def gen_random_string(str_len):
    """ generate random string with specified length

@@ -44,62 +38,3 @@ def sleep(n_secs):
    """
    time.sleep(n_secs)
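The hunk above truncates the body of `gen_random_string`; a plausible stand-in implementation (an assumption matching the docstring, not necessarily HttpRunner's exact code):

```python
import random
import string


def gen_random_string(str_len):
    """ generate random string with specified length
    """
    # draw each character uniformly from ASCII letters and digits
    return "".join(
        random.choice(string.ascii_letters + string.digits)
        for _ in range(str_len)
    )


print(len(gen_random_string(16)))  # 16
```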
"""
upload files with requests-toolbelt
e.g.

- test:
    name: upload file
    variables:
        file_path: "data/test.env"
        multipart_encoder: ${multipart_encoder(file=$file_path)}
    request:
        url: /post
        method: POST
        headers:
            Content-Type: ${multipart_content_type($multipart_encoder)}
        data: $multipart_encoder
    validate:
        - eq: ["status_code", 200]
        - startswith: ["content.files.file", "UserName=test"]
"""


def multipart_encoder(**kwargs):
    """ initialize MultipartEncoder with uploading fields.
    """

    def get_filetype(file_path):
        file_type = filetype.guess(file_path)
        if file_type:
            return file_type.mime
        else:
            return "text/html"

    fields_dict = {}
    for key, value in kwargs.items():

        if os.path.isabs(value):
            _file_path = value
            is_file = True
        else:
            global PWD
            _file_path = os.path.join(PWD, value)
            is_file = os.path.isfile(_file_path)

        if is_file:
            filename = os.path.basename(_file_path)
            with open(_file_path, 'rb') as f:
                mime_type = get_filetype(_file_path)
                fields_dict[key] = (filename, f.read(), mime_type)
        else:
            fields_dict[key] = value

    return MultipartEncoder(fields=fields_dict)


def multipart_content_type(multipart_encoder):
    """ prepare Content-Type for request headers
    """
    return multipart_encoder.content_type
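`multipart_content_type` simply exposes the encoder's `content_type` attribute: a `multipart/form-data` header carrying the boundary token that separates the form fields. A stdlib-only sketch of that header's shape (the real value comes from requests_toolbelt's MultipartEncoder; the boundary here is just an illustrative random token):

```python
import uuid

# the boundary is a random token; each form field in the body is
# delimited by "--<boundary>" lines
boundary = uuid.uuid4().hex
content_type = "multipart/form-data; boundary={}".format(boundary)

assert content_type.startswith("multipart/form-data; boundary=")
```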
@@ -2,6 +2,8 @@ import argparse
import os
import sys

from sentry_sdk import capture_exception

from httprunner import __description__, __version__
from httprunner.api import HttpRunner
from httprunner.compat import is_py2

@@ -64,23 +66,23 @@ def main():
    if len(sys.argv) == 1:
        # no argument passed
        parser.print_help()
        return 0
        sys.exit(0)

    if args.version:
        color_print("{}".format(__version__), "GREEN")
        return 0
        sys.exit(0)

    if args.validate:
        validate_json_file(args.validate)
        return 0
        sys.exit(0)
    if args.prettify:
        prettify_json_file(args.prettify)
        return 0
        sys.exit(0)

    project_name = args.startproject
    if project_name:
        create_scaffold(project_name)
        return 0
        sys.exit(0)

    runner = HttpRunner(
        failfast=args.failfast,

@@ -91,7 +93,7 @@ def main():

    err_code = 0
    try:
        for path in args.testcase_paths:
        for path in args.testfile_paths:
            summary = runner.run(path, dot_env_path=args.dot_env_path)
            report_dir = args.report_dir or os.path.join(runner.project_working_directory, "reports")
            gen_html_report(

@@ -101,12 +103,13 @@ def main():
                report_file=args.report_file
            )
            err_code |= (0 if summary and summary["success"] else 1)
    except Exception:
    except Exception as ex:
        color_print("!!!!!!!!!! exception stage: {} !!!!!!!!!!".format(runner.exception_stage), "YELLOW")
        raise
        capture_exception(ex)
        err_code = 1

    return err_code
    sys.exit(err_code)


if __name__ == '__main__':
    sys.exit(main())
    main()
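The change above moves exit-code handling from `return err_code` to `sys.exit(err_code)` inside `main()`, so the process exit status is set even when `main()` is invoked directly (e.g. as a console-script entry point) rather than through `sys.exit(main())`. A small sketch of the mechanism:

```python
import sys


def main():
    err_code = 1
    # sys.exit() raises SystemExit; the interpreter turns the code it
    # carries into the process exit status no matter how main() was
    # invoked
    sys.exit(err_code)


try:
    main()
except SystemExit as exc:
    assert exc.code == 1
```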
@@ -14,6 +14,74 @@ from httprunner.utils import lower_dict_keys, omit_long_data
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def get_req_resp_record(resp_obj):
    """ get request and response info from Response() object.
    """
    def log_print(req_resp_dict, r_type):
        msg = "\n================== {} details ==================\n".format(r_type)
        for key, value in req_resp_dict[r_type].items():
            msg += "{:<16} : {}\n".format(key, repr(value))
        logger.log_debug(msg)

    req_resp_dict = {
        "request": {},
        "response": {}
    }

    # record actual request info
    req_resp_dict["request"]["url"] = resp_obj.request.url
    req_resp_dict["request"]["method"] = resp_obj.request.method
    req_resp_dict["request"]["headers"] = dict(resp_obj.request.headers)

    request_body = resp_obj.request.body
    if request_body:
        request_content_type = lower_dict_keys(
            req_resp_dict["request"]["headers"]
        ).get("content-type")
        if request_content_type and "multipart/form-data" in request_content_type:
            # upload file type
            req_resp_dict["request"]["body"] = "upload file stream (OMITTED)"
        else:
            req_resp_dict["request"]["body"] = request_body

    # log request details in debug mode
    log_print(req_resp_dict, "request")

    # record response info
    req_resp_dict["response"]["ok"] = resp_obj.ok
    req_resp_dict["response"]["url"] = resp_obj.url
    req_resp_dict["response"]["status_code"] = resp_obj.status_code
    req_resp_dict["response"]["reason"] = resp_obj.reason
    req_resp_dict["response"]["cookies"] = resp_obj.cookies or {}
    req_resp_dict["response"]["encoding"] = resp_obj.encoding
    resp_headers = dict(resp_obj.headers)
    req_resp_dict["response"]["headers"] = resp_headers

    lower_resp_headers = lower_dict_keys(resp_headers)
    content_type = lower_resp_headers.get("content-type", "")
    req_resp_dict["response"]["content_type"] = content_type

    if "image" in content_type:
        # response is image type, record bytes content only
        req_resp_dict["response"]["body"] = resp_obj.content
    else:
        try:
            # try to record json data
            if isinstance(resp_obj, response.ResponseObject):
                req_resp_dict["response"]["body"] = resp_obj.json
            else:
                req_resp_dict["response"]["body"] = resp_obj.json()
        except ValueError:
            # only record at most 512 text characters
            resp_text = resp_obj.text
            req_resp_dict["response"]["body"] = omit_long_data(resp_text)

    # log response details in debug mode
    log_print(req_resp_dict, "response")

    return req_resp_dict
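`lower_dict_keys` is an HttpRunner utility; a minimal equivalent (an assumption about its behavior, inferred from the usage above) shows how the multipart check works case-insensitively regardless of how the header key was capitalized:

```python
def lower_dict_keys(origin_dict):
    # assumed behavior: lowercase every top-level key, keep values as-is
    return {key.lower(): value for key, value in origin_dict.items()}


headers = {"Content-Type": "multipart/form-data; boundary=abc123"}
content_type = lower_dict_keys(headers).get("content-type", "")

# for multipart uploads, the request body is replaced by a placeholder
# instead of being recorded verbatim
assert "multipart/form-data" in content_type
```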
class ApiResponse(Response):

    def raise_for_status(self):

@@ -62,79 +130,12 @@ class HttpSession(requests.Session):
            }
        }

    def get_req_resp_record(self, resp_obj):
        """ get request and response info from Response() object.
        """
        def log_print(req_resp_dict, r_type):
            msg = "\n================== {} details ==================\n".format(r_type)
            for key, value in req_resp_dict[r_type].items():
                msg += "{:<16} : {}\n".format(key, repr(value))
            logger.log_debug(msg)

        req_resp_dict = {
            "request": {},
            "response": {}
        }

        # record actual request info
        req_resp_dict["request"]["url"] = resp_obj.request.url
        req_resp_dict["request"]["method"] = resp_obj.request.method
        req_resp_dict["request"]["headers"] = dict(resp_obj.request.headers)

        request_body = resp_obj.request.body
        if request_body:
            request_content_type = lower_dict_keys(
                req_resp_dict["request"]["headers"]
            ).get("content-type")
            if request_content_type and "multipart/form-data" in request_content_type:
                # upload file type
                req_resp_dict["request"]["body"] = "upload file stream (OMITTED)"
            else:
                req_resp_dict["request"]["body"] = request_body

        # log request details in debug mode
        log_print(req_resp_dict, "request")

        # record response info
        req_resp_dict["response"]["ok"] = resp_obj.ok
        req_resp_dict["response"]["url"] = resp_obj.url
        req_resp_dict["response"]["status_code"] = resp_obj.status_code
        req_resp_dict["response"]["reason"] = resp_obj.reason
        req_resp_dict["response"]["cookies"] = resp_obj.cookies or {}
        req_resp_dict["response"]["encoding"] = resp_obj.encoding
        resp_headers = dict(resp_obj.headers)
        req_resp_dict["response"]["headers"] = resp_headers

        lower_resp_headers = lower_dict_keys(resp_headers)
        content_type = lower_resp_headers.get("content-type", "")
        req_resp_dict["response"]["content_type"] = content_type

        if "image" in content_type:
            # response is image type, record bytes content only
            req_resp_dict["response"]["content"] = resp_obj.content
        else:
            try:
                # try to record json data
                if isinstance(resp_obj, response.ResponseObject):
                    req_resp_dict["response"]["json"] = resp_obj.json
                else:
                    req_resp_dict["response"]["json"] = resp_obj.json()
            except ValueError:
                # only record at most 512 text characters
                resp_text = resp_obj.text
                req_resp_dict["response"]["text"] = omit_long_data(resp_text)

        # log response details in debug mode
        log_print(req_resp_dict, "response")

        return req_resp_dict

    def update_last_req_resp_record(self, resp_obj):
        """
        update request and response info from Response() object.
        """
        self.meta_data["data"].pop()
        self.meta_data["data"].append(self.get_req_resp_record(resp_obj))
        self.meta_data["data"].append(get_req_resp_record(resp_obj))

    def request(self, method, url, name=None, **kwargs):
        """

@@ -207,7 +208,7 @@ class HttpSession(requests.Session):
        # record request and response histories, include 30X redirection
        response_list = response.history + [response]
        self.meta_data["data"] = [
            self.get_req_resp_record(resp_obj)
            get_req_resp_record(resp_obj)
            for resp_obj in response_list
        ]
@@ -6,18 +6,23 @@ from httprunner.compat import JSONDecodeError, FileNotFoundError
|
||||
these exceptions will mark test as failure
|
||||
"""
|
||||
|
||||
|
||||
class MyBaseFailure(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class ValidationFailure(MyBaseFailure):
|
||||
pass
|
||||
|
||||
|
||||
class ExtractFailure(MyBaseFailure):
|
||||
pass
|
||||
|
||||
|
||||
class SetupHooksFailure(MyBaseFailure):
|
||||
pass
|
||||
|
||||
|
||||
class TeardownHooksFailure(MyBaseFailure):
|
||||
pass
|
||||
|
||||
@@ -26,35 +31,51 @@ class TeardownHooksFailure(MyBaseFailure):
|
||||
these exceptions will mark test as error
|
||||
"""
|
||||
|
||||
|
||||
class MyBaseError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class FileFormatError(MyBaseError):
|
||||
pass
|
||||
|
||||
|
||||
class ParamsError(MyBaseError):
|
||||
pass
|
||||
|
||||
|
||||
class NotFoundError(MyBaseError):
|
||||
pass
|
||||
|
||||
|
||||
class FileNotFound(FileNotFoundError, NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class FunctionNotFound(NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class VariableNotFound(NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class EnvNotFound(NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class CSVNotFound(NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class ApiNotFound(NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class TestcaseNotFound(NotFoundError):
|
||||
pass
|
||||
|
||||
|
||||
class SummaryEmpty(MyBaseError):
|
||||
""" test result summary data is empty
|
||||
"""
|
||||
|
||||
@@ -1,2 +1,2 @@
|
||||
# NOTICE:
|
||||
# This file should not be deleted, or ImportError will be raised in Python 2.7 when importing plugin
|
||||
# This file should not be deleted, or ImportError will be raised in Python 2.7 when importing extension
|
||||
@@ -17,7 +17,7 @@ $ locusts -f xxx.yml --processes
|
||||
```
|
||||
|
||||
```shell script
|
||||
$ python3 -m httprunner.plugins.locusts -h
|
||||
$ python3 -m httprunner.ext.locusts -h
|
||||
|
||||
Usage: locust [options] [LocustClass [LocustClass2 ... ]]
|
||||
|
||||
4
httprunner/ext/locusts/__main__.py
Normal file
@@ -0,0 +1,4 @@
from httprunner.ext.locusts.cli import main

if __name__ == "__main__":
    main()
@@ -2,6 +2,7 @@ try:
    # monkey patch ssl at beginning to avoid RecursionError when running locust.
    from gevent import monkey
    monkey.patch_ssl()
    from locust import main as locust_main
except ImportError:
    msg = """
Locust is not installed, install first and try again.
@@ -61,8 +62,7 @@ def gen_locustfile(testcase_file_path):


def start_locust_main():
    from locust.main import main
    main()
    locust_main.main()


def start_master(sys_argv):
@@ -5,7 +5,7 @@ from locust import HttpLocust, TaskSet, task
from locust.events import request_failure

from httprunner.exceptions import MyBaseError, MyBaseFailure
from httprunner.plugins.locusts.utils import prepare_locust_tests
from httprunner.ext.locusts.utils import prepare_locust_tests
from httprunner.runner import Runner

logging.getLogger().setLevel(logging.CRITICAL)
144
httprunner/ext/uploader/__init__.py
Normal file
@@ -0,0 +1,144 @@
""" upload test extension.

If you want to use this extension, you should install the following dependencies first.

- requests_toolbelt
- filetype

Then you can write upload test script as below:

    - test:
        name: upload file
        request:
            url: http://httpbin.org/upload
            method: POST
            headers:
                Cookie: session=AAA-BBB-CCC
            upload:
                file: "data/file_to_upload"
                field1: "value1"
                field2: "value2"
        validate:
            - eq: ["status_code", 200]

For compatibility, you can also write upload test script in the old way:

    - test:
        name: upload file
        variables:
            file: "data/file_to_upload"
            field1: "value1"
            field2: "value2"
            m_encoder: ${multipart_encoder(file=$file, field1=$field1, field2=$field2)}
        request:
            url: http://httpbin.org/upload
            method: POST
            headers:
                Content-Type: ${multipart_content_type($m_encoder)}
                Cookie: session=AAA-BBB-CCC
            data: $m_encoder
        validate:
            - eq: ["status_code", 200]

"""

import os
import sys

try:
    import filetype
    from requests_toolbelt import MultipartEncoder
except ImportError:
    msg = """
uploader extension dependencies are not installed, install them first and try again.
install with pip:
$ pip install requests_toolbelt filetype
"""
    print(msg)
    sys.exit(0)

from httprunner.exceptions import ParamsError


def prepare_upload_test(test_dict):
    """ preprocess for upload test
        replace `upload` info with MultipartEncoder

    Args:
        test_dict (dict):

            {
                "variables": {},
                "request": {
                    "url": "http://httpbin.org/upload",
                    "method": "POST",
                    "headers": {
                        "Cookie": "session=AAA-BBB-CCC"
                    },
                    "upload": {
                        "file": "data/file_to_upload",
                        "md5": "123"
                    }
                }
            }

    """
    upload_json = test_dict["request"].pop("upload", {})
    if not upload_json:
        raise ParamsError("invalid upload info: {}".format(upload_json))

    params_list = []
    for key, value in upload_json.items():
        test_dict["variables"][key] = value
        params_list.append("{}=${}".format(key, key))

    params_str = ", ".join(params_list)
    test_dict["variables"]["m_encoder"] = "${multipart_encoder(" + params_str + ")}"

    test_dict["request"].setdefault("headers", {})
    test_dict["request"]["headers"]["Content-Type"] = "${multipart_content_type($m_encoder)}"

    test_dict["request"]["data"] = "$m_encoder"
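The `prepare_upload_test` preprocessing can be exercised on a plain dict. The sketch below reproduces the same transformation standalone (with a stand-in `ParamsError`, since `httprunner.exceptions` is not imported here), showing how the `upload` block is replaced by `$m_encoder` references:

```python
class ParamsError(Exception):
    """Stand-in for httprunner.exceptions.ParamsError."""
    pass


def prepare_upload_test(test_dict):
    # pop the `upload` block; an empty or missing block is a parameter error
    upload_json = test_dict["request"].pop("upload", {})
    if not upload_json:
        raise ParamsError("invalid upload info: {}".format(upload_json))

    # each upload field becomes a test variable plus a `key=$key` call argument
    params_list = []
    for key, value in upload_json.items():
        test_dict["variables"][key] = value
        params_list.append("{}=${}".format(key, key))

    params_str = ", ".join(params_list)
    test_dict["variables"]["m_encoder"] = "${multipart_encoder(" + params_str + ")}"

    # the encoder supplies both the Content-Type header and the request body
    test_dict["request"].setdefault("headers", {})
    test_dict["request"]["headers"]["Content-Type"] = "${multipart_content_type($m_encoder)}"
    test_dict["request"]["data"] = "$m_encoder"


test = {
    "variables": {},
    "request": {
        "url": "http://httpbin.org/upload",
        "method": "POST",
        "upload": {"file": "data/file_to_upload", "field1": "value1"},
    },
}
prepare_upload_test(test)
print(test["variables"]["m_encoder"])  # ${multipart_encoder(file=$file, field1=$field1)}
print(test["request"]["data"])         # $m_encoder
```

In the old-style script shown in the module docstring, the user writes these `$m_encoder` references by hand; the preprocessor simply generates them from the `upload` block.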

def multipart_encoder(**kwargs):
    """ initialize MultipartEncoder with uploading fields.
    """

    def get_filetype(file_path):
        file_type = filetype.guess(file_path)
        if file_type:
            return file_type.mime
        else:
            return "text/html"

    fields_dict = {}
    for key, value in kwargs.items():

        if os.path.isabs(value):
            # value is absolute file path
            _file_path = value
            is_exists_file = os.path.isfile(value)
        else:
            # value is not absolute file path, check if it is relative file path
            from httprunner.loader import get_pwd
            _file_path = os.path.join(get_pwd(), value)
            is_exists_file = os.path.isfile(_file_path)

        if is_exists_file:
            # value is file path to upload
            filename = os.path.basename(_file_path)
            mime_type = get_filetype(_file_path)
            # TODO: fix ResourceWarning for unclosed file
            file_handler = open(_file_path, 'rb')
            fields_dict[key] = (filename, file_handler, mime_type)
        else:
            fields_dict[key] = value

    return MultipartEncoder(fields=fields_dict)


def multipart_content_type(m_encoder):
    """ prepare Content-Type for request headers
    """
    return m_encoder.content_type
@@ -9,6 +9,7 @@ HttpRunner loader
"""

from httprunner.loader.check import is_testcase_path, is_testcases, validate_json_file
from httprunner.loader.locate import get_project_working_directory as get_pwd
from httprunner.loader.load import load_csv_file, load_builtin_functions
from httprunner.loader.buildup import load_cases, load_project_data

@@ -16,6 +17,7 @@ __all__ = [
    "is_testcase_path",
    "is_testcases",
    "validate_json_file",
    "get_pwd",
    "load_csv_file",
    "load_builtin_functions",
    "load_project_data",

@@ -2,8 +2,7 @@ import importlib
import os

from httprunner import exceptions, logger, utils
from httprunner.builtin import functions
from httprunner.loader.load import load_module_functions, load_folder_content, load_file, load_dot_env_file, \
from httprunner.loader.load import load_module_functions, load_file, load_dot_env_file, \
    load_folder_files
from httprunner.loader.locate import init_project_working_directory, get_project_working_directory

@@ -50,12 +49,16 @@ def __extend_with_api_ref(raw_testinfo):
    # type 1: api is defined in individual file
    api_name = api_path

    try:
    if api_name in tests_def_mapping["api"]:
        block = tests_def_mapping["api"][api_name]
        # NOTICE: avoid project_mapping being changed during iteration.
        raw_testinfo["api_def"] = utils.deepcopy_dict(block)
    except KeyError:
    elif not os.path.isfile(api_name):
        raise exceptions.ApiNotFound("{} not found!".format(api_name))
    else:
        block = load_file(api_name)

        # NOTICE: avoid project_mapping being changed during iteration.
        raw_testinfo["api_def"] = utils.deepcopy_dict(block)
        tests_def_mapping["api"][api_name] = block


def __extend_with_testcase_ref(raw_testinfo):
@@ -335,7 +338,6 @@ def load_test_file(path):

    """
    raw_content = load_file(path)
    loaded_content = None

    if isinstance(raw_content, dict):

@@ -378,77 +380,6 @@ def load_test_file(path):
    return loaded_content


def load_api_folder(api_folder_path):
    """ load api definitions from api folder.

    Args:
        api_folder_path (str): api files folder.

        api file should be in the following format:
        [
            {
                "api": {
                    "def": "api_login",
                    "request": {},
                    "validate": []
                }
            },
            {
                "api": {
                    "def": "api_logout",
                    "request": {},
                    "validate": []
                }
            }
        ]

    Returns:
        dict: api definition mapping.

            {
                "api_login": {
                    "function_meta": {"func_name": "api_login", "args": [], "kwargs": {}}
                    "request": {}
                },
                "api_logout": {
                    "function_meta": {"func_name": "api_logout", "args": [], "kwargs": {}}
                    "request": {}
                }
            }

    """
    api_definition_mapping = {}

    api_items_mapping = load_folder_content(api_folder_path)

    for api_file_path, api_items in api_items_mapping.items():
        # TODO: add JSON schema validation
        if isinstance(api_items, list):
            for api_item in api_items:
                key, api_dict = api_item.popitem()
                api_id = api_dict.get("id") or api_dict.get("def") \
                    or api_dict.get("name")
                if key != "api" or not api_id:
                    raise exceptions.ParamsError(
                        "Invalid API defined in {}".format(api_file_path))

                if api_id in api_definition_mapping:
                    raise exceptions.ParamsError(
                        "Duplicated API ({}) defined in {}".format(
                            api_id, api_file_path))
                else:
                    api_definition_mapping[api_id] = api_dict

        elif isinstance(api_items, dict):
            if api_file_path in api_definition_mapping:
                raise exceptions.ParamsError(
                    "Duplicated API defined: {}".format(api_file_path))
            else:
                api_definition_mapping[api_file_path] = api_items

    return api_definition_mapping


def load_project_data(test_path, dot_env_path=None):
    """ load api, testcases, .env, debugtalk.py functions.
        api/testcases folder is relative to project_working_directory
@@ -480,14 +411,9 @@ def load_project_data(test_path, dot_env_path=None):
    debugtalk_functions = {}

    # locate PWD and load debugtalk.py functions

    project_mapping["PWD"] = project_working_directory
    functions.PWD = project_working_directory  # TODO: remove
    project_mapping["functions"] = debugtalk_functions
    project_mapping["test_path"] = test_path

    # load api
    tests_def_mapping["api"] = load_api_folder(os.path.join(project_working_directory, "api"))
    project_mapping["test_path"] = os.path.abspath(test_path)

    return project_mapping


@@ -127,6 +127,8 @@ def is_testcase_path(path):
    if not os.path.exists(path):
        return False

    # TODO: check file format if valid

    return True


@@ -185,31 +185,6 @@ def load_dot_env_file(dot_env_path):
    return env_variables_mapping


def load_folder_content(folder_path):
    """ load api/testcases/testsuites definitions from folder.

    Args:
        folder_path (str): api/testcases/testsuites files folder.

    Returns:
        dict: api definition mapping.

            {
                "tests/api/basic.yml": [
                    {"api": {"def": "api_login", "request": {}, "validate": []}},
                    {"api": {"def": "api_logout", "request": {}, "validate": []}}
                ]
            }

    """
    items_mapping = {}

    for file_path in load_folder_files(folder_path):
        items_mapping[file_path] = load_file(file_path)

    return items_mapping


def load_module_functions(module):
    """ load python module functions.


@@ -16,6 +16,14 @@ variable_regex_compile = re.compile(r"\$\{(\w+)\}|\$(\w+)")
# function notation, e.g. ${func1($var_1, $var_3)}
function_regex_compile = re.compile(r"\$\{(\w+)\(([\$\w\.\-/\s=,]*)\)\}")

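The two compiled patterns above drive HttpRunner's `$var` / `${func(...)}` notation; a quick check of what each one captures:

```python
import re

# same patterns as in the parser module above
variable_regex_compile = re.compile(r"\$\{(\w+)\}|\$(\w+)")
function_regex_compile = re.compile(r"\$\{(\w+)\(([\$\w\.\-/\s=,]*)\)\}")

# $var and ${var} land in different capture groups of the alternation
print(variable_regex_compile.match("$token").groups())    # (None, 'token')
print(variable_regex_compile.match("${token}").groups())  # ('token', None)

# function notation captures the name and the raw argument string
m = function_regex_compile.match("${multipart_encoder(file=$file, field1=$field1)}")
print(m.group(1))  # multipart_encoder
print(m.group(2))  # file=$file, field1=$field1
```

The raw argument string from group 2 is what `parse_function_params` later splits into positional and keyword arguments.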
""" Store parse failed api/testcase/testsuite file path
"""
parse_failed_testfiles = {}


def get_parse_failed_testfiles():
    return parse_failed_testfiles


def parse_string_value(str_value):
    """ parse string to number if possible
@@ -428,6 +436,11 @@ def get_mapping_function(function_name, functions_mapping):
    elif function_name in ["environ", "ENV"]:
        return utils.get_os_environ

    elif function_name in ["multipart_encoder", "multipart_content_type"]:
        # extension for upload test
        from httprunner.ext import uploader
        return getattr(uploader, function_name)

    try:
        # check if HttpRunner builtin functions
        built_in_functions = loader.load_builtin_functions()
@@ -439,8 +452,9 @@ def get_mapping_function(function_name, functions_mapping):
        # check if Python builtin functions
        return getattr(builtins, function_name)
    except AttributeError:
        # is not builtin function
        raise exceptions.FunctionNotFound("{} is not found.".format(function_name))
        pass

    raise exceptions.FunctionNotFound("{} is not found.".format(function_name))


def parse_function_params(params):
@@ -1139,6 +1153,8 @@ def __prepare_testcase_tests(tests, config, project_mapping, session_variables_s

        # 3, testcase_def config => testcase_def test_dict
        test_dict = _parse_testcase(test_dict, project_mapping, session_variables_set)
        if not test_dict:
            continue

    elif "api_def" in test_dict:
        # test_dict has API reference
@@ -1146,13 +1162,18 @@ def __prepare_testcase_tests(tests, config, project_mapping, session_variables_s
        api_def_dict = test_dict.pop("api_def")
        _extend_with_api(test_dict, api_def_dict)

    # verify priority: testcase teststep > testcase config
    if "request" in test_dict:
        if "verify" not in test_dict["request"]:
            test_dict["request"]["verify"] = config_verify

        if "upload" in test_dict["request"]:
            from httprunner.ext.uploader import prepare_upload_test
            prepare_upload_test(test_dict)

    # current teststep variables
    teststep_variables_set |= set(test_dict.get("variables", {}).keys())

    # verify priority: testcase teststep > testcase config
    if "request" in test_dict and "verify" not in test_dict["request"]:
        test_dict["request"]["verify"] = config_verify

    # move extracted variable to session variables
    if "extract" in test_dict:
        extract_mapping = utils.ensure_mapping_format(test_dict["extract"])
@@ -1205,21 +1226,34 @@ def _parse_testcase(testcase, project_mapping, session_variables_set=None):

    """
    testcase.setdefault("config", {})
    prepared_config = __prepare_config(
        testcase["config"],
        project_mapping,
        session_variables_set
    )
    prepared_testcase_tests = __prepare_testcase_tests(
        testcase["teststeps"],
        prepared_config,
        project_mapping,
        session_variables_set
    )
    return {
        "config": prepared_config,
        "teststeps": prepared_testcase_tests
    }

    try:
        prepared_config = __prepare_config(
            testcase["config"],
            project_mapping,
            session_variables_set
        )
        prepared_testcase_tests = __prepare_testcase_tests(
            testcase["teststeps"],
            prepared_config,
            project_mapping,
            session_variables_set
        )
        return {
            "config": prepared_config,
            "teststeps": prepared_testcase_tests
        }
    except (exceptions.MyBaseFailure, exceptions.MyBaseError):
        testcase_type = testcase["type"]
        testcase_path = testcase.get("path")

        global parse_failed_testfiles
        if testcase_type not in parse_failed_testfiles:
            parse_failed_testfiles[testcase_type] = []

        parse_failed_testfiles[testcase_type].append(testcase_path)

        return None


def __get_parsed_testsuite_testcases(testcases, testsuite_config, project_mapping):
@@ -1275,6 +1309,7 @@ def __get_parsed_testsuite_testcases(testcases, testsuite_config, project_mappin
    parsed_testcase = testcase.pop("testcase_def")
    parsed_testcase.setdefault("config", {})
    parsed_testcase["path"] = testcase["testcase"]
    parsed_testcase["type"] = "testcase"
    parsed_testcase["config"]["name"] = testcase_name

    if "weight" in testcase:
@@ -1320,6 +1355,8 @@ def __get_parsed_testsuite_testcases(testcases, testsuite_config, project_mappin
            parameter_variables
        )
        parsed_testcase_copied = _parse_testcase(testcase_copied, project_mapping)
        if not parsed_testcase_copied:
            continue
        parsed_testcase_copied["config"]["name"] = parse_lazy_data(
            parsed_testcase_copied["config"]["name"],
            testcase_copied["config"]["variables"]
@@ -1328,6 +1365,8 @@ def __get_parsed_testsuite_testcases(testcases, testsuite_config, project_mappin

    else:
        parsed_testcase = _parse_testcase(parsed_testcase, project_mapping)
        if not parsed_testcase:
            continue
        parsed_testcase_list.append(parsed_testcase)

    return parsed_testcase_list
@@ -1424,7 +1463,10 @@ def parse_tests(tests_mapping):

    elif test_type == "testcases":
        for testcase in tests_mapping["testcases"]:
            testcase["type"] = "testcase"
            parsed_testcase = _parse_testcase(testcase, project_mapping)
            if not parsed_testcase:
                continue
            testcases.append(parsed_testcase)

    elif test_type == "apis":
@@ -1434,9 +1476,13 @@ def parse_tests(tests_mapping):
            "config": {
                "name": api_content.get("name")
            },
            "teststeps": [api_content]
            "teststeps": [api_content],
            "path": api_content.pop("path", None),
            "type": api_content.pop("type", "api")
        }
        parsed_testcase = _parse_testcase(testcase, project_mapping)
        if not parsed_testcase:
            continue
        testcases.append(parsed_testcase)

    return testcases

@@ -1,4 +0,0 @@
from httprunner.plugins.locusts.cli import main

if __name__ == "__main__":
    main()
@@ -1,386 +0,0 @@
import io
import os
import platform
import time
import unittest
from base64 import b64encode
from collections import Iterable
from datetime import datetime

from jinja2 import Template, escape
from requests.cookies import RequestsCookieJar

from httprunner import __version__, logger
from httprunner.compat import basestring, bytes, json, numeric_types


def get_platform():
    return {
        "httprunner_version": __version__,
        "python_version": "{} {}".format(
            platform.python_implementation(),
            platform.python_version()
        ),
        "platform": platform.platform()
    }


def get_summary(result):
    """ get summary from test result

    Args:
        result (instance): HtmlTestResult() instance

    Returns:
        dict: summary extracted from result.

            {
                "success": True,
                "stat": {},
                "time": {},
                "records": []
            }

    """
    summary = {
        "success": result.wasSuccessful(),
        "stat": {
            'total': result.testsRun,
            'failures': len(result.failures),
            'errors': len(result.errors),
            'skipped': len(result.skipped),
            'expectedFailures': len(result.expectedFailures),
            'unexpectedSuccesses': len(result.unexpectedSuccesses)
        }
    }
    summary["stat"]["successes"] = summary["stat"]["total"] \
        - summary["stat"]["failures"] \
        - summary["stat"]["errors"] \
        - summary["stat"]["skipped"] \
        - summary["stat"]["expectedFailures"] \
        - summary["stat"]["unexpectedSuccesses"]

    summary["time"] = {
        'start_at': result.start_at,
        'duration': result.duration
    }
    summary["records"] = result.records

    return summary

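The `successes` counter in `get_summary` is derived rather than read from the result object: every test not accounted for by the other five counters is a success. In miniature (the numbers are illustrative):

```python
# stat shape as produced by get_summary; illustrative numbers
stat = {
    "total": 10,
    "failures": 1,
    "errors": 1,
    "skipped": 2,
    "expectedFailures": 0,
    "unexpectedSuccesses": 0,
}

# successes = total minus every other outcome
stat["successes"] = (stat["total"] - stat["failures"] - stat["errors"]
                     - stat["skipped"] - stat["expectedFailures"]
                     - stat["unexpectedSuccesses"])
print(stat["successes"])  # 6
```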

def aggregate_stat(origin_stat, new_stat):
    """ aggregate new_stat to origin_stat.

    Args:
        origin_stat (dict): origin stat dict, will be updated with new_stat dict.
        new_stat (dict): new stat dict.

    """
    for key in new_stat:
        if key not in origin_stat:
            origin_stat[key] = new_stat[key]
        elif key == "start_at":
            # start datetime
            origin_stat["start_at"] = min(origin_stat["start_at"], new_stat["start_at"])
        elif key == "duration":
            # duration = max_end_time - min_start_time
            max_end_time = max(origin_stat["start_at"] + origin_stat["duration"],
                               new_stat["start_at"] + new_stat["duration"])
            min_start_time = min(origin_stat["start_at"], new_stat["start_at"])
            origin_stat["duration"] = max_end_time - min_start_time
        else:
            origin_stat[key] += new_stat[key]

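A standalone run of `aggregate_stat` shows the interesting case: `duration` is not simply summed, it is recomputed as the span between the earliest start and the latest end. The function body is copied from the code above; the sample timestamps are illustrative:

```python
def aggregate_stat(origin_stat, new_stat):
    """Merge new_stat into origin_stat in place."""
    for key in new_stat:
        if key not in origin_stat:
            origin_stat[key] = new_stat[key]
        elif key == "start_at":
            # keep the earliest start datetime
            origin_stat["start_at"] = min(origin_stat["start_at"], new_stat["start_at"])
        elif key == "duration":
            # duration = max_end_time - min_start_time
            max_end_time = max(origin_stat["start_at"] + origin_stat["duration"],
                               new_stat["start_at"] + new_stat["duration"])
            min_start_time = min(origin_stat["start_at"], new_stat["start_at"])
            origin_stat["duration"] = max_end_time - min_start_time
        else:
            origin_stat[key] += new_stat[key]


# run 1 starts at t=100 and lasts 5s; run 2 starts at t=103 and lasts 4s
origin = {"total": 2, "start_at": 100.0, "duration": 5.0}
aggregate_stat(origin, {"total": 3, "start_at": 103.0, "duration": 4.0})
print(origin)  # {'total': 5, 'start_at': 100.0, 'duration': 7.0}
```

The overlapping runs end at t=107, so the aggregated duration is 7s, not 5s + 4s.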

def stringify_summary(summary):
    """ stringify summary, in order to dump json file and generate html report.
    """
    for index, suite_summary in enumerate(summary["details"]):

        if not suite_summary.get("name"):
            suite_summary["name"] = "testcase {}".format(index)

        for record in suite_summary.get("records"):
            meta_datas = record['meta_datas']
            __stringify_meta_datas(meta_datas)
            meta_datas_expanded = []
            __expand_meta_datas(meta_datas, meta_datas_expanded)
            record["meta_datas_expanded"] = meta_datas_expanded
            record["response_time"] = __get_total_response_time(meta_datas_expanded)


def __stringify_request(request_data):
    """ stringify HTTP request data

    Args:
        request_data (dict): HTTP request data in dict.

            {
                "url": "http://127.0.0.1:5000/api/get-token",
                "method": "POST",
                "headers": {
                    "User-Agent": "python-requests/2.20.0",
                    "Accept-Encoding": "gzip, deflate",
                    "Accept": "*/*",
                    "Connection": "keep-alive",
                    "user_agent": "iOS/10.3",
                    "device_sn": "TESTCASE_CREATE_XXX",
                    "os_platform": "ios",
                    "app_version": "2.8.6",
                    "Content-Type": "application/json",
                    "Content-Length": "52"
                },
                "json": {
                    "sign": "cb9d60acd09080ea66c8e63a1c78c6459ea00168"
                },
                "verify": false
            }

    """
    for key, value in request_data.items():

        if isinstance(value, list):
            value = json.dumps(value, indent=2, ensure_ascii=False)

        elif isinstance(value, bytes):
            try:
                encoding = "utf-8"
                value = escape(value.decode(encoding))
            except UnicodeDecodeError:
                pass

        elif not isinstance(value, (basestring, numeric_types, Iterable)):
            # class instance, e.g. MultipartEncoder()
            value = repr(value)

        elif isinstance(value, RequestsCookieJar):
            value = value.get_dict()

        request_data[key] = value


def __stringify_response(response_data):
    """ stringify HTTP response data

    Args:
        response_data (dict):

            {
                "status_code": 404,
                "headers": {
                    "Content-Type": "application/json",
                    "Content-Length": "30",
                    "Server": "Werkzeug/0.14.1 Python/3.7.0",
                    "Date": "Tue, 27 Nov 2018 06:19:27 GMT"
                },
                "encoding": "None",
                "content_type": "application/json",
                "ok": false,
                "url": "http://127.0.0.1:5000/api/users/9001",
                "reason": "NOT FOUND",
                "cookies": {},
                "json": {
                    "success": false,
                    "data": {}
                }
            }

    """
    for key, value in response_data.items():

        if isinstance(value, list):
            value = json.dumps(value, indent=2, ensure_ascii=False)

        elif isinstance(value, bytes):
            try:
                encoding = response_data.get("encoding")
                if not encoding or encoding == "None":
                    encoding = "utf-8"

                if key == "content" and "image" in response_data["content_type"]:
                    # display image
                    value = "data:{};base64,{}".format(
                        response_data["content_type"],
                        b64encode(value).decode(encoding)
                    )
                else:
                    value = escape(value.decode(encoding))
            except UnicodeDecodeError:
                pass

        elif not isinstance(value, (basestring, numeric_types, Iterable)):
            # class instance, e.g. MultipartEncoder()
            value = repr(value)

        elif isinstance(value, RequestsCookieJar):
            value = value.get_dict()

        response_data[key] = value


def __expand_meta_datas(meta_datas, meta_datas_expanded):
    """ expand meta_datas to one level

    Args:
        meta_datas (dict/list): maybe in nested format

    Returns:
        list: expanded list in one level

    Examples:
        >>> meta_datas = [
                [
                    dict1,
                    dict2
                ],
                dict3
            ]
        >>> meta_datas_expanded = []
        >>> __expand_meta_datas(meta_datas, meta_datas_expanded)
        >>> print(meta_datas_expanded)
        [dict1, dict2, dict3]

    """
    if isinstance(meta_datas, dict):
        meta_datas_expanded.append(meta_datas)
    elif isinstance(meta_datas, list):
        for meta_data in meta_datas:
            __expand_meta_datas(meta_data, meta_datas_expanded)


def __get_total_response_time(meta_datas_expanded):
    """ calculate total response time of all meta_datas
    """
    try:
        response_time = 0
        for meta_data in meta_datas_expanded:
            response_time += meta_data["stat"]["response_time_ms"]

        return "{:.2f}".format(response_time)

    except TypeError:
        # failure exists
        return "N/A"

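`__get_total_response_time` relies on the `TypeError` raised when a failed step leaves a non-numeric `response_time_ms` behind. A standalone sketch (renamed to a public name, since the double-underscore name is module-private):

```python
def get_total_response_time(meta_datas_expanded):
    """Sum response_time_ms over all meta_datas; 'N/A' if any value is non-numeric."""
    try:
        response_time = 0
        for meta_data in meta_datas_expanded:
            response_time += meta_data["stat"]["response_time_ms"]
        return "{:.2f}".format(response_time)
    except TypeError:
        # a failure leaves a non-numeric value (e.g. None); adding it raises TypeError
        return "N/A"


print(get_total_response_time([{"stat": {"response_time_ms": 12.3}},
                               {"stat": {"response_time_ms": 7.7}}]))   # 20.00
print(get_total_response_time([{"stat": {"response_time_ms": None}}]))  # N/A
```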

def __stringify_meta_datas(meta_datas):

    if isinstance(meta_datas, list):
        for _meta_data in meta_datas:
            __stringify_meta_datas(_meta_data)
    elif isinstance(meta_datas, dict):
        data_list = meta_datas["data"]
        for data in data_list:
            __stringify_request(data["request"])
            __stringify_response(data["response"])


def gen_html_report(summary, report_template=None, report_dir=None, report_file=None):
    """ render html report with specified report name and template

    Args:
        summary (dict): test result summary data
        report_template (str): specify html report template path, template should be in Jinja2 format.
        report_dir (str): specify html report save directory
        report_file (str): specify html report file path, this has higher priority than specifying report dir.

    """
    if not report_template:
        report_template = os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            "static",
            "report_template.html"
        )
        logger.log_debug("No html report template specified, use default.")
    else:
        logger.log_info("render with html report template: {}".format(report_template))

    logger.log_info("Start to render Html report ...")

    start_at_timestamp = int(summary["time"]["start_at"])
    summary["time"]["start_datetime"] = datetime.fromtimestamp(start_at_timestamp).strftime('%Y-%m-%d %H:%M:%S')

    if report_file:
        report_dir = os.path.dirname(report_file)
        report_file_name = os.path.basename(report_file)
    else:
        report_dir = report_dir or os.path.join(os.getcwd(), "reports")
        report_file_name = "{}.html".format(start_at_timestamp)

    if not os.path.isdir(report_dir):
        os.makedirs(report_dir)

    report_path = os.path.join(report_dir, report_file_name)
    with io.open(report_template, "r", encoding='utf-8') as fp_r:
        template_content = fp_r.read()
        with io.open(report_path, 'w', encoding='utf-8') as fp_w:
            rendered_content = Template(
                template_content,
                extensions=["jinja2.ext.loopcontrols"]
            ).render(summary)
            fp_w.write(rendered_content)

    logger.log_info("Generated Html report: {}".format(report_path))

    return report_path


class HtmlTestResult(unittest.TextTestResult):
    """ A html result class that can generate formatted html results.
        Used by TextTestRunner.
    """
    def __init__(self, stream, descriptions, verbosity):
        super(HtmlTestResult, self).__init__(stream, descriptions, verbosity)
        self.records = []

    def _record_test(self, test, status, attachment=''):
        data = {
            'name': test.shortDescription(),
            'status': status,
            'attachment': attachment,
            "meta_datas": test.meta_datas
        }
        self.records.append(data)

    def startTestRun(self):
        self.start_at = time.time()

    def startTest(self, test):
        """ add start test time """
        super(HtmlTestResult, self).startTest(test)
        logger.color_print(test.shortDescription(), "yellow")

    def addSuccess(self, test):
        super(HtmlTestResult, self).addSuccess(test)
        self._record_test(test, 'success')
        print("")

    def addError(self, test, err):
        super(HtmlTestResult, self).addError(test, err)
        self._record_test(test, 'error', self._exc_info_to_string(err, test))
        print("")

    def addFailure(self, test, err):
        super(HtmlTestResult, self).addFailure(test, err)
        self._record_test(test, 'failure', self._exc_info_to_string(err, test))
        print("")

    def addSkip(self, test, reason):
        super(HtmlTestResult, self).addSkip(test, reason)
        self._record_test(test, 'skipped', reason)
        print("")

    def addExpectedFailure(self, test, err):
        super(HtmlTestResult, self).addExpectedFailure(test, err)
        self._record_test(test, 'ExpectedFailure', self._exc_info_to_string(err, test))
        print("")

    def addUnexpectedSuccess(self, test):
        super(HtmlTestResult, self).addUnexpectedSuccess(test)
        self._record_test(test, 'UnexpectedSuccess')
        print("")

    @property
    def duration(self):
        return time.time() - self.start_at
20 httprunner/report/__init__.py Normal file
@@ -0,0 +1,20 @@
"""
HttpRunner report

- summarize: aggregate test stat data to summary
- stringify: stringify summary, in order to dump json file and generate html report.
- html: render html report
"""

from httprunner.report.summarize import get_platform, aggregate_stat, get_summary
from httprunner.report.stringify import stringify_summary
from httprunner.report.html import HtmlTestResult, gen_html_report

__all__ = [
    "get_platform",
    "aggregate_stat",
    "get_summary",
    "stringify_summary",
    "HtmlTestResult",
    "gen_html_report"
]
15 httprunner/report/html/__init__.py Normal file
@@ -0,0 +1,15 @@
"""
HttpRunner html report

- result: define resultclass for unittest TextTestRunner
- gen_report: render html report with jinja2 template

"""

from httprunner.report.html.result import HtmlTestResult
from httprunner.report.html.gen_report import gen_html_report

__all__ = [
    "HtmlTestResult",
    "gen_html_report"
]
62 httprunner/report/html/gen_report.py Normal file
@@ -0,0 +1,62 @@
import io
import os
from datetime import datetime

from jinja2 import Template

from httprunner import logger
from httprunner.exceptions import SummaryEmpty


def gen_html_report(summary, report_template=None, report_dir=None, report_file=None):
    """ render html report with specified report name and template

    Args:
        summary (dict): test result summary data
        report_template (str): specify html report template path, template should be in Jinja2 format.
        report_dir (str): specify html report save directory
        report_file (str): specify html report file path, this has higher priority than specifying report dir.

    """
    if not summary["time"] or summary["stat"]["testcases"]["total"] == 0:
        logger.log_error("test result summary is empty ! {}".format(summary))
        raise SummaryEmpty

    if not report_template:
        report_template = os.path.join(
            os.path.abspath(os.path.dirname(__file__)),
            "template.html"
        )
        logger.log_debug("No html report template specified, use default.")
    else:
        logger.log_info("render with html report template: {}".format(report_template))

    logger.log_info("Start to render Html report ...")

    start_at_timestamp = int(summary["time"]["start_at"])
    summary["time"]["start_datetime"] = datetime.fromtimestamp(start_at_timestamp).strftime('%Y-%m-%d %H:%M:%S')

    if report_file:
        report_dir = os.path.dirname(report_file)
        report_file_name = os.path.basename(report_file)
    else:
        report_dir = report_dir or os.path.join(os.getcwd(), "reports")
        report_file_name = "{}.html".format(start_at_timestamp)

    if not os.path.isdir(report_dir):
        os.makedirs(report_dir)

    report_path = os.path.join(report_dir, report_file_name)
    with io.open(report_template, "r", encoding='utf-8') as fp_r:
        template_content = fp_r.read()
        with io.open(report_path, 'w', encoding='utf-8') as fp_w:
            rendered_content = Template(
                template_content,
                extensions=["jinja2.ext.loopcontrols"]
            ).render(summary)
            fp_w.write(rendered_content)

    logger.log_info("Generated Html report: {}".format(report_path))

    return report_path
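The report-path resolution in `gen_html_report` above can be sketched standalone: an explicit `report_file` wins over `report_dir`, and the default file name is the start timestamp. The function name `resolve_report_path` is illustrative; directory creation and template rendering are omitted.

```python
import os

# Minimal sketch of the path resolution above (names are illustrative,
# not part of the httprunner API).
def resolve_report_path(start_at_timestamp, report_dir=None, report_file=None):
    if report_file:
        # explicit file path takes priority: derive dir and name from it
        report_dir = os.path.dirname(report_file)
        report_file_name = os.path.basename(report_file)
    else:
        # fall back to <cwd>/reports and a timestamp-based file name
        report_dir = report_dir or os.path.join(os.getcwd(), "reports")
        report_file_name = "{}.html".format(start_at_timestamp)
    return os.path.join(report_dir, report_file_name)

print(resolve_report_path(1575000000, report_file=os.path.join("out", "run1.html")))
print(resolve_report_path(1575000000, report_dir="reports"))
```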
64 httprunner/report/html/result.py Normal file
@@ -0,0 +1,64 @@
import time
import unittest

from httprunner import logger


class HtmlTestResult(unittest.TextTestResult):
    """ A html result class that can generate formatted html results.
        Used by TextTestRunner.
    """
    def __init__(self, stream, descriptions, verbosity):
        super(HtmlTestResult, self).__init__(stream, descriptions, verbosity)
        self.records = []

    def _record_test(self, test, status, attachment=''):
        data = {
            'name': test.shortDescription(),
            'status': status,
            'attachment': attachment,
            "meta_datas": test.meta_datas
        }
        self.records.append(data)

    def startTestRun(self):
        self.start_at = time.time()

    def startTest(self, test):
        """ add start test time """
        super(HtmlTestResult, self).startTest(test)
        logger.color_print(test.shortDescription(), "yellow")

    def addSuccess(self, test):
        super(HtmlTestResult, self).addSuccess(test)
        self._record_test(test, 'success')
        print("")

    def addError(self, test, err):
        super(HtmlTestResult, self).addError(test, err)
        self._record_test(test, 'error', self._exc_info_to_string(err, test))
        print("")

    def addFailure(self, test, err):
        super(HtmlTestResult, self).addFailure(test, err)
        self._record_test(test, 'failure', self._exc_info_to_string(err, test))
        print("")

    def addSkip(self, test, reason):
        super(HtmlTestResult, self).addSkip(test, reason)
        self._record_test(test, 'skipped', reason)
        print("")

    def addExpectedFailure(self, test, err):
        super(HtmlTestResult, self).addExpectedFailure(test, err)
        self._record_test(test, 'ExpectedFailure', self._exc_info_to_string(err, test))
        print("")

    def addUnexpectedSuccess(self, test):
        super(HtmlTestResult, self).addUnexpectedSuccess(test)
        self._record_test(test, 'UnexpectedSuccess')
        print("")

    @property
    def duration(self):
        return time.time() - self.start_at
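A result class like the one above plugs into `unittest.TextTestRunner` via its `resultclass` parameter. A minimal runnable sketch, with hypothetical class and test names (it records successes only, and skips the httprunner-specific `meta_datas` field):

```python
import io
import unittest

# Hypothetical stripped-down analogue of HtmlTestResult above.
class RecordingResult(unittest.TextTestResult):
    def __init__(self, stream, descriptions, verbosity):
        super(RecordingResult, self).__init__(stream, descriptions, verbosity)
        self.records = []

    def addSuccess(self, test):
        super(RecordingResult, self).addSuccess(test)
        # shortDescription() returns the first line of the test docstring
        self.records.append({"name": test.shortDescription(), "status": "success"})

class DemoCase(unittest.TestCase):
    def test_ok(self):
        """demo passing case"""
        self.assertTrue(True)

# swap in the custom result class; write runner noise to a StringIO
runner = unittest.TextTestRunner(stream=io.StringIO(), resultclass=RecordingResult)
result = runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(DemoCase))
print(result.records)  # [{'name': 'demo passing case', 'status': 'success'}]
```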
@@ -232,14 +232,10 @@
<tr>
    <th>{{key}}</th>
    <td>
        {% if key == "headers" %}
            {% for header_key, header_value in req_resp.request.headers.items() %}
            <div>
                <strong>{{ header_key }}</strong>: {{ header_value }}
            </div>
            {% endfor %}
        {% if key in ["headers", "body"] %}
            <pre>{{ value | e }}</pre>
        {% else %}
            {{value}}
        {% endif %}
    </td>
</tr>
@@ -254,20 +250,14 @@
<tr>
    <th>{{key}}</th>
    <td>
        {% if key == "headers" %}
            {% for header_key, header_value in req_resp.response.headers.items() %}
            <div>
                <strong>{{ header_key }}</strong>: {{ header_value }}
            </div>
            {% endfor %}
        {% elif key == "content" %}
        {% if key == "headers" %}
            <pre>{{ value | e }}</pre>
        {% elif key == "body" %}
            {% if "image" in req_resp.response.content_type %}
                <img src="{{ req_resp.response.content }}" />
            {% else %}
                {{ value }}
            {% endif %}
        {% elif key in ["text", "json"] %}
            <pre>{{ value | e }}</pre>
        {% endif %}
        {% else %}
            {{ value }}
        {% endif %}
216 httprunner/report/stringify.py Normal file
@@ -0,0 +1,216 @@
from base64 import b64encode
from collections import Iterable

from jinja2 import escape
from requests.cookies import RequestsCookieJar

from httprunner.compat import basestring, bytes, json, numeric_types, JSONDecodeError


def dumps_json(value):
    """ dumps json value to indented string

    Args:
        value (dict): raw json data

    Returns:
        str: indented json dump string

    """
    return json.dumps(value, indent=2, ensure_ascii=False)


def detect_encoding(value):
    try:
        return json.detect_encoding(value)
    except AttributeError:
        return "utf-8"


def __stringify_request(request_data):
    """ stringify HTTP request data

    Args:
        request_data (dict): HTTP request data in dict.

            {
                "url": "http://127.0.0.1:5000/api/get-token",
                "method": "POST",
                "headers": {
                    "User-Agent": "python-requests/2.20.0",
                    "Accept-Encoding": "gzip, deflate",
                    "Accept": "*/*",
                    "Connection": "keep-alive",
                    "user_agent": "iOS/10.3",
                    "device_sn": "TESTCASE_CREATE_XXX",
                    "os_platform": "ios",
                    "app_version": "2.8.6",
                    "Content-Type": "application/json",
                    "Content-Length": "52"
                },
                "body": b'{"sign": "cb9d60acd09080ea66c8e63a1c78c6459ea00168"}',
                "verify": false
            }

    """
    for key, value in request_data.items():

        if isinstance(value, (list, dict)):
            value = dumps_json(value)

        elif isinstance(value, bytes):
            try:
                encoding = detect_encoding(value)
                value = value.decode(encoding)
                if key == "body":
                    try:
                        # request body is in json format
                        value = json.loads(value)
                        value = dumps_json(value)
                    except JSONDecodeError:
                        pass
                value = escape(value)
            except UnicodeDecodeError:
                pass

        elif not isinstance(value, (basestring, numeric_types, Iterable)):
            # class instance, e.g. MultipartEncoder()
            value = repr(value)

        elif isinstance(value, RequestsCookieJar):
            value = value.get_dict()

        request_data[key] = value


def __stringify_response(response_data):
    """ stringify HTTP response data

    Args:
        response_data (dict):

            {
                "status_code": 404,
                "headers": {
                    "Content-Type": "application/json",
                    "Content-Length": "30",
                    "Server": "Werkzeug/0.14.1 Python/3.7.0",
                    "Date": "Tue, 27 Nov 2018 06:19:27 GMT"
                },
                "encoding": "None",
                "content_type": "application/json",
                "ok": false,
                "url": "http://127.0.0.1:5000/api/users/9001",
                "reason": "NOT FOUND",
                "cookies": {},
                "body": {
                    "success": false,
                    "data": {}
                }
            }

    """
    for key, value in response_data.items():

        if isinstance(value, (list, dict)):
            value = dumps_json(value)

        elif isinstance(value, bytes):
            try:
                encoding = response_data.get("encoding")
                if not encoding or encoding == "None":
                    encoding = detect_encoding(value)

                if key == "body" and "image" in response_data["content_type"]:
                    # display image
                    value = "data:{};base64,{}".format(
                        response_data["content_type"],
                        b64encode(value).decode(encoding)
                    )
                else:
                    value = escape(value.decode(encoding))
            except UnicodeDecodeError:
                pass

        elif not isinstance(value, (basestring, numeric_types, Iterable)):
            # class instance, e.g. MultipartEncoder()
            value = repr(value)

        elif isinstance(value, RequestsCookieJar):
            value = value.get_dict()

        response_data[key] = value


def __expand_meta_datas(meta_datas, meta_datas_expanded):
    """ expand meta_datas to one level

    Args:
        meta_datas (dict/list): maybe in nested format

    Returns:
        list: expanded list in one level

    Examples:
        >>> meta_datas = [
                [
                    dict1,
                    dict2
                ],
                dict3
            ]
        >>> meta_datas_expanded = []
        >>> __expand_meta_datas(meta_datas, meta_datas_expanded)
        >>> print(meta_datas_expanded)
        [dict1, dict2, dict3]

    """
    if isinstance(meta_datas, dict):
        meta_datas_expanded.append(meta_datas)
    elif isinstance(meta_datas, list):
        for meta_data in meta_datas:
            __expand_meta_datas(meta_data, meta_datas_expanded)


def __get_total_response_time(meta_datas_expanded):
    """ calculate total response time of all meta_datas
    """
    try:
        response_time = 0
        for meta_data in meta_datas_expanded:
            response_time += meta_data["stat"]["response_time_ms"]

        return "{:.2f}".format(response_time)

    except TypeError:
        # failure exists
        return "N/A"


def __stringify_meta_datas(meta_datas):

    if isinstance(meta_datas, list):
        for _meta_data in meta_datas:
            __stringify_meta_datas(_meta_data)
    elif isinstance(meta_datas, dict):
        data_list = meta_datas["data"]
        for data in data_list:
            __stringify_request(data["request"])
            __stringify_response(data["response"])


def stringify_summary(summary):
    """ stringify summary, in order to dump json file and generate html report.
    """
    for index, suite_summary in enumerate(summary["details"]):

        if not suite_summary.get("name"):
            suite_summary["name"] = "testcase {}".format(index)

        for record in suite_summary.get("records"):
            meta_datas = record['meta_datas']
            __stringify_meta_datas(meta_datas)
            meta_datas_expanded = []
            __expand_meta_datas(meta_datas, meta_datas_expanded)
            record["meta_datas_expanded"] = meta_datas_expanded
            record["response_time"] = __get_total_response_time(meta_datas_expanded)
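The depth-first flattening used by `__expand_meta_datas` above can be exercised on its own; this sketch uses a public name (`expand_meta_datas`) and toy step dicts purely for illustration.

```python
# Standalone sketch of the meta_datas flattening above: nested
# lists of dicts are expanded depth-first into a single flat list.
def expand_meta_datas(meta_datas, expanded):
    if isinstance(meta_datas, dict):
        expanded.append(meta_datas)
    elif isinstance(meta_datas, list):
        for item in meta_datas:
            expand_meta_datas(item, expanded)

nested = [[{"name": "step1"}, {"name": "step2"}], {"name": "step3"}]
flat = []
expand_meta_datas(nested, flat)
print([d["name"] for d in flat])  # ['step1', 'step2', 'step3']
```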
82 httprunner/report/summarize.py Normal file
@@ -0,0 +1,82 @@
import platform

from httprunner import __version__


def get_platform():
    return {
        "httprunner_version": __version__,
        "python_version": "{} {}".format(
            platform.python_implementation(),
            platform.python_version()
        ),
        "platform": platform.platform()
    }


def aggregate_stat(origin_stat, new_stat):
    """ aggregate new_stat to origin_stat.

    Args:
        origin_stat (dict): origin stat dict, will be updated with new_stat dict.
        new_stat (dict): new stat dict.

    """
    for key in new_stat:
        if key not in origin_stat:
            origin_stat[key] = new_stat[key]
        elif key == "start_at":
            # start datetime
            origin_stat["start_at"] = min(origin_stat["start_at"], new_stat["start_at"])
        elif key == "duration":
            # duration = max_end_time - min_start_time
            max_end_time = max(origin_stat["start_at"] + origin_stat["duration"],
                               new_stat["start_at"] + new_stat["duration"])
            min_start_time = min(origin_stat["start_at"], new_stat["start_at"])
            origin_stat["duration"] = max_end_time - min_start_time
        else:
            origin_stat[key] += new_stat[key]


def get_summary(result):
    """ get summary from test result

    Args:
        result (instance): HtmlTestResult() instance

    Returns:
        dict: summary extracted from result.

            {
                "success": True,
                "stat": {},
                "time": {},
                "records": []
            }

    """
    summary = {
        "success": result.wasSuccessful(),
        "stat": {
            'total': result.testsRun,
            'failures': len(result.failures),
            'errors': len(result.errors),
            'skipped': len(result.skipped),
            'expectedFailures': len(result.expectedFailures),
            'unexpectedSuccesses': len(result.unexpectedSuccesses)
        }
    }
    summary["stat"]["successes"] = summary["stat"]["total"] \
        - summary["stat"]["failures"] \
        - summary["stat"]["errors"] \
        - summary["stat"]["skipped"] \
        - summary["stat"]["expectedFailures"] \
        - summary["stat"]["unexpectedSuccesses"]

    summary["time"] = {
        'start_at': result.start_at,
        'duration': result.duration
    }
    summary["records"] = result.records

    return summary
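The duration rule in `aggregate_stat` above (merged duration = latest end time minus earliest start time, not the sum of durations) can be checked standalone. This sketch repeats the same logic outside the module with made-up numbers:

```python
# Same aggregation logic as aggregate_stat above, restated here so it
# can run without httprunner; the input dicts are illustrative.
def aggregate_stat(origin_stat, new_stat):
    for key in new_stat:
        if key not in origin_stat:
            origin_stat[key] = new_stat[key]
        elif key == "start_at":
            origin_stat["start_at"] = min(origin_stat["start_at"], new_stat["start_at"])
        elif key == "duration":
            # merged run spans from the earliest start to the latest end
            max_end_time = max(origin_stat["start_at"] + origin_stat["duration"],
                               new_stat["start_at"] + new_stat["duration"])
            min_start_time = min(origin_stat["start_at"], new_stat["start_at"])
            origin_stat["duration"] = max_end_time - min_start_time
        else:
            origin_stat[key] += new_stat[key]

origin = {"start_at": 100.0, "duration": 5.0, "total": 2}
new = {"start_at": 103.0, "duration": 4.0, "total": 3}
aggregate_stat(origin, new)
# ends at max(105.0, 107.0) = 107.0, starts at 100.0 -> duration 7.0
print(origin)  # {'start_at': 100.0, 'duration': 7.0, 'total': 5}
```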
@@ -175,7 +175,7 @@ class ResponseObject(object):
            raise exceptions.ExtractFailure(err_msg)

        # response body
        elif top_query in ["content", "text", "json"]:
        elif top_query in ["body", "content", "text", "json"]:
            try:
                body = self.json
            except exceptions.JSONDecodeError:
@@ -222,7 +222,8 @@ class ResponseObject(object):
        # others
        else:
            err_msg = u"Failed to extract attribute from response! => {}\n".format(field)
            err_msg += u"available response attributes: status_code, cookies, elapsed, headers, content, text, json, encoding, ok, reason, url.\n\n"
            err_msg += u"available response attributes: status_code, cookies, elapsed, headers, content, " \
                       u"text, json, encoding, ok, reason, url.\n\n"
            err_msg += u"If you want to set attribute in teardown_hooks, take the following example as reference:\n"
            err_msg += u"response.new_attribute = 'new_attribute_value'\n"
            logger.log_error(err_msg)

@@ -1,5 +1,6 @@
# encoding: utf-8

from enum import Enum
from unittest.case import SkipTest

from httprunner import exceptions, logger, response, utils
@@ -8,6 +9,11 @@ from httprunner.context import SessionContext
from httprunner.validator import Validator


class HookTypeEnum(Enum):
    SETUP = 1
    TEARDOWN = 2


class Runner(object):
    """ Running testcases.

@@ -74,11 +80,11 @@ class Runner(object):
        self.session_context = SessionContext(config_variables)

        if testcase_setup_hooks:
            self.do_hook_actions(testcase_setup_hooks, "setup")
            self.do_hook_actions(testcase_setup_hooks, HookTypeEnum.SETUP)

    def __del__(self):
        if self.testcase_teardown_hooks:
            self.do_hook_actions(self.testcase_teardown_hooks, "teardown")
            self.do_hook_actions(self.testcase_teardown_hooks, HookTypeEnum.TEARDOWN)

    def __clear_test_data(self):
        """ clear request and response data
@@ -131,10 +137,10 @@ class Runner(object):
            format2 (str): only call hook functions.
                ${func()}

            hook_type (enum): setup/teardown
            hook_type (HookTypeEnum): setup/teardown

        """
        logger.log_debug("call {} hook actions.".format(hook_type))
        logger.log_debug("call {} hook actions.".format(hook_type.name))
        for action in actions:

            if isinstance(action, dict) and len(action) == 1:
@@ -215,7 +221,7 @@ class Runner(object):
        # setup hooks
        setup_hooks = test_dict.get("setup_hooks", [])
        if setup_hooks:
            self.do_hook_actions(setup_hooks, "setup")
            self.do_hook_actions(setup_hooks, HookTypeEnum.SETUP)

        try:
            method = parsed_test_request.pop('method')
@@ -245,32 +251,7 @@ class Runner(object):
            )
        resp_obj = response.ResponseObject(resp)

        # teardown hooks
        teardown_hooks = test_dict.get("teardown_hooks", [])
        if teardown_hooks:
            self.session_context.update_test_variables("response", resp_obj)
            self.do_hook_actions(teardown_hooks, "teardown")
        self.http_client_session.update_last_req_resp_record(resp_obj)

        # extract
        extractors = test_dict.get("extract", {})
        extracted_variables_mapping = resp_obj.extract_response(extractors)
        self.session_context.update_session_variables(extracted_variables_mapping)

        # validate
        validators = test_dict.get("validate") or test_dict.get("validators") or []
        validate_script = test_dict.get("validate_script", [])
        if validate_script:
            validators.append({
                "type": "python_script",
                "script": validate_script
            })

        validator = Validator(self.session_context, resp_obj)
        try:
            validator.validate(validators)
        except (exceptions.ParamsError,
                exceptions.ValidationFailure, exceptions.ExtractFailure):
        def log_req_resp_details():
            err_msg = "{} DETAILED REQUEST & RESPONSE {}\n".format("*" * 32, "*" * 32)

            # log request
@@ -291,12 +272,39 @@ class Runner(object):
            err_msg += "body: {}\n".format(repr(resp_obj.text))
            logger.log_error(err_msg)

        # teardown hooks
        teardown_hooks = test_dict.get("teardown_hooks", [])
        if teardown_hooks:
            self.session_context.update_test_variables("response", resp_obj)
            self.do_hook_actions(teardown_hooks, HookTypeEnum.TEARDOWN)
        self.http_client_session.update_last_req_resp_record(resp_obj)

        # extract
        extractors = test_dict.get("extract", {})
        try:
            extracted_variables_mapping = resp_obj.extract_response(extractors)
            self.session_context.update_session_variables(extracted_variables_mapping)
        except (exceptions.ParamsError, exceptions.ExtractFailure):
            log_req_resp_details()
            raise

        finally:
            # get request/response data and validate results
            self.meta_datas = getattr(self.http_client_session, "meta_data", {})
            self.meta_datas["validators"] = validator.validation_results
        # validate
        validators = test_dict.get("validate") or test_dict.get("validators") or []
        validate_script = test_dict.get("validate_script", [])
        if validate_script:
            validators.append({
                "type": "python_script",
                "script": validate_script
            })

        validator = Validator(self.session_context, resp_obj)
        try:
            validator.validate(validators)
        except exceptions.ValidationFailure:
            log_req_resp_details()
            raise

        return validator.validation_results

    def _run_testcase(self, testcase_dict):
        """ run single testcase.

@@ -374,13 +382,18 @@ class Runner(object):
            self._run_testcase(test_dict)
        else:
            # api
            validation_results = {}
            try:
                self._run_test(test_dict)
                validation_results = self._run_test(test_dict)
            except Exception:
                # log exception request_type and name for locust stat
                self.exception_request_type = test_dict["request"]["method"]
                self.exception_name = test_dict.get("name")
                raise
            finally:
                # get request/response data and validate results
                self.meta_datas = getattr(self.http_client_session, "meta_data", {})
                self.meta_datas["validators"] = validation_results

    def export_variables(self, output_variables_list):
        """ export current testcase variables

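The `HookTypeEnum` change above replaces the bare "setup"/"teardown" strings passed to `do_hook_actions`; logging then uses the enum's `.name` attribute. A minimal sketch of that pattern:

```python
from enum import Enum

# Same two-member enum as the diff above; log_hook is a hypothetical
# stand-in for the logger.log_debug call in do_hook_actions.
class HookTypeEnum(Enum):
    SETUP = 1
    TEARDOWN = 2

def log_hook(hook_type):
    # .name yields the member's identifier, e.g. "SETUP"
    return "call {} hook actions.".format(hook_type.name)

print(log_hook(HookTypeEnum.SETUP))     # call SETUP hook actions.
print(log_hook(HookTypeEnum.TEARDOWN))  # call TEARDOWN hook actions.
```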
@@ -571,6 +571,10 @@ def dump_json_file(json_data, json_file_abs_path):
        except TypeError:
            return str(obj)

    file_foder_path = os.path.dirname(json_file_abs_path)
    if not os.path.isdir(file_foder_path):
        os.makedirs(file_foder_path)

    try:
        with io.open(json_file_abs_path, 'w', encoding='utf-8') as outfile:
            if is_py2:
@@ -579,6 +583,7 @@ def dump_json_file(json_data, json_file_abs_path):
                json_data,
                indent=4,
                separators=(',', ':'),
                encoding="utf8",
                ensure_ascii=False,
                cls=PythonObjectEncoder
            ))
@@ -626,9 +631,6 @@ def prepare_dump_json_file_abs_path(project_mapping, tag_name):
    test_file_name, _file_suffix = os.path.splitext(test_file)
    dump_file_name = "{}.{}.json".format(test_file_name, tag_name)

    if not os.path.isdir(file_foder_path):
        os.makedirs(file_foder_path)

    dumped_json_file_abs_path = os.path.join(file_foder_path, dump_file_name)
    return dumped_json_file_abs_path

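The first hunk above moves the directory guard into `dump_json_file` itself: the parent folder is created before the JSON file is written. A self-contained sketch of that behavior (paths and data are illustrative):

```python
import io
import json
import os
import tempfile

def dump_json_file(json_data, json_file_abs_path):
    # guard added in the diff above: create the parent folder first
    file_folder_path = os.path.dirname(json_file_abs_path)
    if file_folder_path and not os.path.isdir(file_folder_path):
        os.makedirs(file_folder_path)
    with io.open(json_file_abs_path, "w", encoding="utf-8") as outfile:
        outfile.write(json.dumps(
            json_data, indent=4, separators=(",", ":"), ensure_ascii=False))

with tempfile.TemporaryDirectory() as tmp:
    # "logs" does not exist yet; dump_json_file creates it
    target = os.path.join(tmp, "logs", "summary.json")
    dump_json_file({"success": True}, target)
    print(os.path.isfile(target))  # True
```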
@@ -1,5 +1,9 @@
# require mkdocs-material 3.x
#
# pip install mkdocs
# pip install mkdocs-material

# Project information
site_name: HttpRunner V2.x 中文使用文档
site_description: HttpRunner V2.x User Documentation
@@ -64,6 +68,7 @@ nav:
      - 参数化数据驱动: prepare/parameters.md
      - Validate & Prettify: prepare/validate-pretty.md
      - 信息安全: prepare/security.md
      - 文件上传场景: prepare/upload-case.md
  - 测试执行:
      - 运行测试(CLI): run-tests/cli.md
      - 测试报告: run-tests/report.md

260 poetry.lock generated
@@ -4,7 +4,7 @@ description = "Python package for providing Mozilla's CA Bundle."
name = "certifi"
optional = false
python-versions = "*"
version = "2019.9.11"
version = "2019.11.28"

[[package]]
category = "main"
@@ -27,8 +27,8 @@ category = "main"
description = "Cross-platform colored terminal text."
name = "colorama"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
version = "0.4.1"
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
version = "0.4.3"

[[package]]
category = "main"
@@ -50,29 +50,13 @@ python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*, <4"
version = "4.5.4"

[[package]]
category = "dev"
description = "Show coverage stats online via coveralls.io"
name = "coveralls"
category = "main"
description = "Python 3.4 Enum backported to 3.3, 3.2, 3.1, 2.7, 2.6, 2.5, and 2.4"
marker = "python_version >= \"2.7\" and python_version < \"2.8\""
name = "enum34"
optional = false
python-versions = "*"
version = "1.8.2"

[package.dependencies]
coverage = ">=3.6,<5.0"
docopt = ">=0.6.1"
requests = ">=1.0.0"

[package.dependencies.urllib3]
python = "<3"
version = "*"

[[package]]
category = "dev"
description = "Pythonic argument parser, that will make you smile"
name = "docopt"
optional = false
python-versions = "*"
version = "0.6.2"
version = "1.1.6"

[[package]]
category = "main"
@@ -103,7 +87,7 @@ marker = "python_version >= \"2.7\" and python_version < \"2.8\""
name = "future"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
version = "0.18.1"
version = "0.18.2"

[[package]]
category = "main"
@@ -143,6 +127,9 @@ version = "2.10.3"
[package.dependencies]
MarkupSafe = ">=0.23"

[package.extras]
i18n = ["Babel (>=0.8)"]

[[package]]
category = "main"
description = "An XPath for JSON"
@@ -165,7 +152,7 @@ description = "YAML parser and emitter for Python"
name = "pyyaml"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
version = "5.1.2"
version = "5.2"

[[package]]
category = "main"
@@ -181,6 +168,10 @@ chardet = ">=3.0.2,<3.1.0"
idna = ">=2.5,<2.9"
urllib3 = ">=1.21.1,<1.25.0 || >1.25.0,<1.25.1 || >1.25.1,<1.26"

[package.extras]
security = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)"]
socks = ["PySocks (>=1.5.6,<1.5.7 || >1.5.7)", "win-inet-pton"]

[[package]]
category = "main"
description = "A utility belt for advanced users of python-requests"
@@ -192,13 +183,44 @@ version = "0.9.1"
[package.dependencies]
requests = ">=2.0.1,<3.0.0"

[[package]]
category = "main"
description = "Python client for Sentry (https://getsentry.com)"
name = "sentry-sdk"
optional = false
python-versions = "*"
version = "0.13.5"

[package.dependencies]
certifi = "*"
urllib3 = ">=1.10.0"

[package.extras]
aiohttp = ["aiohttp (>=3.5)"]
beam = ["beam (>=2.12)"]
bottle = ["bottle (>=0.12.13)"]
celery = ["celery (>=3)"]
django = ["django (>=1.8)"]
falcon = ["falcon (>=1.4)"]
flask = ["flask (>=0.11)", "blinker (>=1.1)"]
pyspark = ["pyspark (>=2.4.4)"]
rq = ["0.6"]
sanic = ["sanic (>=0.8)"]
sqlalchemy = ["sqlalchemy (>=1.2)"]
tornado = ["tornado (>=5)"]

[[package]]
category = "main"
description = "HTTP library with thread-safe connection pooling, file post, and more."
name = "urllib3"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4"
version = "1.25.6"
version = "1.25.7"

[package.extras]
brotli = ["brotlipy (>=0.6.0)"]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,<1.5.7 || >1.5.7,<2.0)"]

[[package]]
category = "dev"
@@ -208,30 +230,166 @@ optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
version = "0.16.0"

[package.extras]
dev = ["pytest", "coverage", "tox", "sphinx", "pallets-sphinx-themes", "sphinx-issues"]
termcolor = ["termcolor"]
watchdog = ["watchdog"]

[metadata]
content-hash = "836d6dec466dfbf8a14481ed801c053a902b3fa6d8b75cf4f5aba4539c0899af"
content-hash = "7b478db27fe6f36aeed7f90b6c67efe5903fb43bb899bb66a1a65b80b8637c5a"
python-versions = "~2.7 || ^3.5"

[metadata.hashes]
certifi = ["e4f3620cfea4f83eedc95b24abd9cd56f3c4b146dd0177e83a21b4eb49e21e50", "fd7c7c74727ddcf00e9acd26bba8da604ffec95bf1c2144e67aff7a8b50e6cef"]
chardet = ["84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae", "fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"]
click = ["2335065e6395b9e67ca716de5f7526736bfa6ceead690adf616d925bdc622b13", "5b94b49521f6456670fdb30cd82a4eca9412788a93fa6dd6df72c94d5a8ff2d7"]
colorama = ["05eed71e2e327246ad6b38c540c4a3117230b19679b875190486ddd2d721422d", "f8ac84de7840f5b9c4e3347b3c1eaa50f7e49c2b07596221daec5edaabbd7c48"]
colorlog = ["3cf31b25cbc8f86ec01fef582ef3b840950dea414084ed19ab922c8b493f9b42", "450f52ea2a2b6ebb308f034ea9a9b15cea51e65650593dca1da3eb792e4e4981"]
coverage = ["08907593569fe59baca0bf152c43f3863201efb6113ecb38ce7e97ce339805a6", "0be0f1ed45fc0c185cfd4ecc19a1d6532d72f86a2bac9de7e24541febad72650", "141f08ed3c4b1847015e2cd62ec06d35e67a3ac185c26f7635f4406b90afa9c5", "19e4df788a0581238e9390c85a7a09af39c7b539b29f25c89209e6c3e371270d", "23cc09ed395b03424d1ae30dcc292615c1372bfba7141eb85e11e50efaa6b351", "245388cda02af78276b479f299bbf3783ef0a6a6273037d7c60dc73b8d8d7755", "331cb5115673a20fb131dadd22f5bcaf7677ef758741312bee4937d71a14b2ef", "386e2e4090f0bc5df274e720105c342263423e77ee8826002dcffe0c9533dbca", "3a794ce50daee01c74a494919d5ebdc23d58873747fa0e288318728533a3e1ca", "60851187677b24c6085248f0a0b9b98d49cba7ecc7ec60ba6b9d2e5574ac1ee9", "63a9a5fc43b58735f65ed63d2cf43508f462dc49857da70b8980ad78d41d52fc", "6b62544bb68106e3f00b21c8930e83e584fdca005d4fffd29bb39fb3ffa03cb5", "6ba744056423ef8d450cf627289166da65903885272055fb4b5e113137cfa14f", "7494b0b0274c5072bddbfd5b4a6c6f18fbbe1ab1d22a41e99cd2d00c8f96ecfe", "826f32b9547c8091679ff292a82aca9c7b9650f9fda3e2ca6bf2ac905b7ce888", "93715dffbcd0678057f947f496484e906bf9509f5c1c38fc9ba3922893cda5f5", "9a334d6c83dfeadae576b4d633a71620d40d1c379129d587faa42ee3e2a85cce", "af7ed8a8aa6957aac47b4268631fa1df984643f07ef00acd374e456364b373f5", "bf0a7aed7f5521c7ca67febd57db473af4762b9622254291fbcbb8cd0ba5e33e", "bf1ef9eb901113a9805287e090452c05547578eaab1b62e4ad456fcc049a9b7e", "c0afd27bc0e307a1ffc04ca5ec010a290e49e3afbe841c5cafc5c5a80ecd81c9", "dd579709a87092c6dbee09d1b7cfa81831040705ffa12a1b248935274aee0437", "df6712284b2e44a065097846488f66840445eb987eb81b3cc6e4149e7b6982e1", "e07d9f1a23e9e93ab5c62902833bf3e4b1f65502927379148b6622686223125c", "e2ede7c1d45e65e209d6093b762e98e8318ddeff95317d07a27a2140b80cfd24", "e4ef9c164eb55123c62411f5936b5c2e521b12356037b6e1c2617cef45523d47", "eca2b7343524e7ba246cab8ff00cab47a2d6d54ada3b02772e908a45675722e2", "eee64c616adeff7db37cc37da4180a3a5b6177f5c46b187894e633f088fb5b28", "ef824cad1f980d27f26166f86856efe11eff9912c4fed97d3804820d43fa550c", 
"efc89291bd5a08855829a3c522df16d856455297cf35ae827a37edac45f466a7", "fa964bae817babece5aa2e8c1af841bebb6d0b9add8e637548809d040443fee0", "ff37757e068ae606659c28c3bd0d923f9d29a85de79bf25b2b34b148473b5025"]
coveralls = ["9bc5a1f92682eef59f688a8f280207190d9a6afb84cef8f567fa47631a784060", "fb51cddef4bc458de347274116df15d641a735d3f0a580a9472174e2e62f408c"]
docopt = ["49b3a825280bd66b3aa83585ef59c4a8c82f2c8a522dbe754a8bc8d08c85c491"]
filetype = ["17a3b885f19034da29640b083d767e0f13c2dcb5dcc267945c8b6e5a5a9013c7", "4967124d982a71700d94a08c49c4926423500e79382a92070f5ab248d44fe461"]
flask = ["2ea22336f6d388b4b242bc3abf8a01244a8aa3e236e7407469ef78c16ba355dd", "6c02dbaa5a9ef790d8219bdced392e2d549c10cd5a5ba4b6aa65126b2271af29"]
future = ["858e38522e8fd0d3ce8f0c1feaf0603358e366d5403209674c7b617fa0c24093"]
har2case = ["84d3a5cc9fbb16e45372e7e880a936c59bbe8e9b66bad81927769e64f608e2af", "8f159ec7cba82ec4282f46af4a9dac89f65e62796521b2426d3c89c3c9fd8579"]
idna = ["c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407", "ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c"]
itsdangerous = ["321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19", "b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749"]
jinja2 = ["74320bb91f31270f9551d46522e33af46a80c3d619f4a4bf42b3164d30b5911f", "9fe95f19286cfefaa917656583d020be14e7859c6b0252588391e47db34527de"]
jsonpath = ["46d3fd2016cd5b842283d547877a02c418a0fe9aa7a6b0ae344115a2c990fef4"]
markupsafe = ["00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473", "09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161", "09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235", "1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5", "24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff", "29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b", "43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1", "46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e", "500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183", "535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66", "62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1", "6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1", "717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e", "79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b", "7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905", "88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735", "8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d", "98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e", "9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d", "9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c", "ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21", "b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2", "b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5", "b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b", "ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6", "c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f", "cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f", "e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"]
pyyaml = ["0113bc0ec2ad727182326b61326afa3d1d8280ae1122493553fd6f4397f33df9", "01adf0b6c6f61bd11af6e10ca52b7d4057dd0be0343eb9283c878cf3af56aee4", "5124373960b0b3f4aa7df1707e63e9f109b5263eca5976c66e08b1c552d4eaf8", "5ca4f10adbddae56d824b2c09668e91219bb178a1eee1faa56af6f99f11bf696", "7907be34ffa3c5a32b60b95f4d95ea25361c951383a894fec31be7252b2b6f34", "7ec9b2a4ed5cad025c2278a1e6a19c011c80a3caaac804fd2d329e9cc2c287c9", "87ae4c829bb25b9fe99cf71fbb2140c448f534e24c998cc60f39ae4f94396a73", "9de9919becc9cc2ff03637872a440195ac4241c80536632fffeb6a1e25a74299", "a5a85b10e450c66b49f98846937e8cfca1db3127a9d5d1e31ca45c3d0bef4c5b", "b0997827b4f6a7c286c01c5f60384d218dca4ed7d9efa945c3e1aa623d5709ae", "b631ef96d3222e62861443cc89d6563ba3eeb816eeb96b2629345ab795e53681", "bf47c0607522fdbca6c9e817a6e81b08491de50f3766a7a0e6a5be7905961b41", "f81025eddd0327c7d4cfe9b62cf33190e1e736cc6e97502b3ec425f574b3e7a8"]
requests = ["11e007a8a2aa0323f5a921e9e6a2d7e4e67d9877e85773fba9ba6419025cbeb4", "9cf5292fcd0f598c671cfc1e0d7d1a7f13bb8085e9a590f48c010551dc6c4b31"]
requests-toolbelt = ["380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f", "968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"]
urllib3 = ["3de946ffbed6e6746608990594d08faac602528ac7015ac28d33cee6a45b7398", "9a107b99a5393caf59c7aa3c1249c16e6879447533d0887f4336dde834c7be86"]
werkzeug = ["7280924747b5733b246fe23972186c6b348f9ae29724135a6dfc1e53cea433e7", "e5f4a1f98b52b18a93da705a7458e55afb26f32bff83ff5d19189f92462d65c4"]
[metadata.files]
certifi = [
    {file = "certifi-2019.11.28-py2.py3-none-any.whl", hash = "sha256:017c25db2a153ce562900032d5bc68e9f191e44e9a0f762f373977de9df1fbb3"},
    {file = "certifi-2019.11.28.tar.gz", hash = "sha256:25b64c7da4cd7479594d035c08c2d809eb4aab3a26e5a990ea98cc450c320f1f"},
]
chardet = [
    {file = "chardet-3.0.4-py2.py3-none-any.whl", hash = "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"},
    {file = "chardet-3.0.4.tar.gz", hash = "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae"},
]
click = [
    {file = "Click-7.0-py2.py3-none-any.whl", hash = "sha256:2335065e6395b9e67ca716de5f7526736bfa6ceead690adf616d925bdc622b13"},
    {file = "Click-7.0.tar.gz", hash = "sha256:5b94b49521f6456670fdb30cd82a4eca9412788a93fa6dd6df72c94d5a8ff2d7"},
]
colorama = [
    {file = "colorama-0.4.3-py2.py3-none-any.whl", hash = "sha256:7d73d2a99753107a36ac6b455ee49046802e59d9d076ef8e47b61499fa29afff"},
    {file = "colorama-0.4.3.tar.gz", hash = "sha256:e96da0d330793e2cb9485e9ddfd918d456036c7149416295932478192f4436a1"},
]
colorlog = [
    {file = "colorlog-4.0.2-py2.py3-none-any.whl", hash = "sha256:450f52ea2a2b6ebb308f034ea9a9b15cea51e65650593dca1da3eb792e4e4981"},
    {file = "colorlog-4.0.2.tar.gz", hash = "sha256:3cf31b25cbc8f86ec01fef582ef3b840950dea414084ed19ab922c8b493f9b42"},
]
coverage = [
    {file = "coverage-4.5.4-cp26-cp26m-macosx_10_12_x86_64.whl", hash = "sha256:eee64c616adeff7db37cc37da4180a3a5b6177f5c46b187894e633f088fb5b28"},
    {file = "coverage-4.5.4-cp27-cp27m-macosx_10_12_x86_64.whl", hash = "sha256:ef824cad1f980d27f26166f86856efe11eff9912c4fed97d3804820d43fa550c"},
    {file = "coverage-4.5.4-cp27-cp27m-macosx_10_13_intel.whl", hash = "sha256:9a334d6c83dfeadae576b4d633a71620d40d1c379129d587faa42ee3e2a85cce"},
    {file = "coverage-4.5.4-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:7494b0b0274c5072bddbfd5b4a6c6f18fbbe1ab1d22a41e99cd2d00c8f96ecfe"},
    {file = "coverage-4.5.4-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:826f32b9547c8091679ff292a82aca9c7b9650f9fda3e2ca6bf2ac905b7ce888"},
    {file = "coverage-4.5.4-cp27-cp27m-win32.whl", hash = "sha256:63a9a5fc43b58735f65ed63d2cf43508f462dc49857da70b8980ad78d41d52fc"},
    {file = "coverage-4.5.4-cp27-cp27m-win_amd64.whl", hash = "sha256:e2ede7c1d45e65e209d6093b762e98e8318ddeff95317d07a27a2140b80cfd24"},
    {file = "coverage-4.5.4-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:dd579709a87092c6dbee09d1b7cfa81831040705ffa12a1b248935274aee0437"},
    {file = "coverage-4.5.4-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:08907593569fe59baca0bf152c43f3863201efb6113ecb38ce7e97ce339805a6"},
    {file = "coverage-4.5.4-cp33-cp33m-macosx_10_10_x86_64.whl", hash = "sha256:6b62544bb68106e3f00b21c8930e83e584fdca005d4fffd29bb39fb3ffa03cb5"},
    {file = "coverage-4.5.4-cp34-cp34m-macosx_10_12_x86_64.whl", hash = "sha256:331cb5115673a20fb131dadd22f5bcaf7677ef758741312bee4937d71a14b2ef"},
    {file = "coverage-4.5.4-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:bf1ef9eb901113a9805287e090452c05547578eaab1b62e4ad456fcc049a9b7e"},
    {file = "coverage-4.5.4-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:386e2e4090f0bc5df274e720105c342263423e77ee8826002dcffe0c9533dbca"},
    {file = "coverage-4.5.4-cp34-cp34m-win32.whl", hash = "sha256:fa964bae817babece5aa2e8c1af841bebb6d0b9add8e637548809d040443fee0"},
    {file = "coverage-4.5.4-cp34-cp34m-win_amd64.whl", hash = "sha256:df6712284b2e44a065097846488f66840445eb987eb81b3cc6e4149e7b6982e1"},
    {file = "coverage-4.5.4-cp35-cp35m-macosx_10_12_x86_64.whl", hash = "sha256:efc89291bd5a08855829a3c522df16d856455297cf35ae827a37edac45f466a7"},
    {file = "coverage-4.5.4-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:e4ef9c164eb55123c62411f5936b5c2e521b12356037b6e1c2617cef45523d47"},
    {file = "coverage-4.5.4-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:ff37757e068ae606659c28c3bd0d923f9d29a85de79bf25b2b34b148473b5025"},
    {file = "coverage-4.5.4-cp35-cp35m-win32.whl", hash = "sha256:bf0a7aed7f5521c7ca67febd57db473af4762b9622254291fbcbb8cd0ba5e33e"},
    {file = "coverage-4.5.4-cp35-cp35m-win_amd64.whl", hash = "sha256:19e4df788a0581238e9390c85a7a09af39c7b539b29f25c89209e6c3e371270d"},
    {file = "coverage-4.5.4-cp36-cp36m-macosx_10_13_x86_64.whl", hash = "sha256:60851187677b24c6085248f0a0b9b98d49cba7ecc7ec60ba6b9d2e5574ac1ee9"},
    {file = "coverage-4.5.4-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:245388cda02af78276b479f299bbf3783ef0a6a6273037d7c60dc73b8d8d7755"},
    {file = "coverage-4.5.4-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:c0afd27bc0e307a1ffc04ca5ec010a290e49e3afbe841c5cafc5c5a80ecd81c9"},
    {file = "coverage-4.5.4-cp36-cp36m-win32.whl", hash = "sha256:6ba744056423ef8d450cf627289166da65903885272055fb4b5e113137cfa14f"},
    {file = "coverage-4.5.4-cp36-cp36m-win_amd64.whl", hash = "sha256:af7ed8a8aa6957aac47b4268631fa1df984643f07ef00acd374e456364b373f5"},
    {file = "coverage-4.5.4-cp37-cp37m-macosx_10_13_x86_64.whl", hash = "sha256:3a794ce50daee01c74a494919d5ebdc23d58873747fa0e288318728533a3e1ca"},
    {file = "coverage-4.5.4-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:0be0f1ed45fc0c185cfd4ecc19a1d6532d72f86a2bac9de7e24541febad72650"},
    {file = "coverage-4.5.4-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:eca2b7343524e7ba246cab8ff00cab47a2d6d54ada3b02772e908a45675722e2"},
    {file = "coverage-4.5.4-cp37-cp37m-win32.whl", hash = "sha256:93715dffbcd0678057f947f496484e906bf9509f5c1c38fc9ba3922893cda5f5"},
    {file = "coverage-4.5.4-cp37-cp37m-win_amd64.whl", hash = "sha256:23cc09ed395b03424d1ae30dcc292615c1372bfba7141eb85e11e50efaa6b351"},
    {file = "coverage-4.5.4-cp38-cp38-macosx_10_13_x86_64.whl", hash = "sha256:141f08ed3c4b1847015e2cd62ec06d35e67a3ac185c26f7635f4406b90afa9c5"},
    {file = "coverage-4.5.4.tar.gz", hash = "sha256:e07d9f1a23e9e93ab5c62902833bf3e4b1f65502927379148b6622686223125c"},
]
enum34 = [
    {file = "enum34-1.1.6-py2-none-any.whl", hash = "sha256:6bd0f6ad48ec2aa117d3d141940d484deccda84d4fcd884f5c3d93c23ecd8c79"},
    {file = "enum34-1.1.6-py3-none-any.whl", hash = "sha256:644837f692e5f550741432dd3f223bbb9852018674981b1664e5dc339387588a"},
    {file = "enum34-1.1.6.tar.gz", hash = "sha256:8ad8c4783bf61ded74527bffb48ed9b54166685e4230386a9ed9b1279e2df5b1"},
    {file = "enum34-1.1.6.zip", hash = "sha256:2d81cbbe0e73112bdfe6ef8576f2238f2ba27dd0d55752a776c41d38b7da2850"},
]
filetype = [
    {file = "filetype-1.0.5-py2.py3-none-any.whl", hash = "sha256:4967124d982a71700d94a08c49c4926423500e79382a92070f5ab248d44fe461"},
    {file = "filetype-1.0.5.tar.gz", hash = "sha256:17a3b885f19034da29640b083d767e0f13c2dcb5dcc267945c8b6e5a5a9013c7"},
]
flask = [
    {file = "Flask-0.12.4-py2.py3-none-any.whl", hash = "sha256:6c02dbaa5a9ef790d8219bdced392e2d549c10cd5a5ba4b6aa65126b2271af29"},
    {file = "Flask-0.12.4.tar.gz", hash = "sha256:2ea22336f6d388b4b242bc3abf8a01244a8aa3e236e7407469ef78c16ba355dd"},
]
future = [
    {file = "future-0.18.2.tar.gz", hash = "sha256:b1bead90b70cf6ec3f0710ae53a525360fa360d306a86583adc6bf83a4db537d"},
]
har2case = [
    {file = "har2case-0.3.1-py2.py3-none-any.whl", hash = "sha256:84d3a5cc9fbb16e45372e7e880a936c59bbe8e9b66bad81927769e64f608e2af"},
    {file = "har2case-0.3.1.tar.gz", hash = "sha256:8f159ec7cba82ec4282f46af4a9dac89f65e62796521b2426d3c89c3c9fd8579"},
]
idna = [
    {file = "idna-2.8-py2.py3-none-any.whl", hash = "sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c"},
    {file = "idna-2.8.tar.gz", hash = "sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407"},
]
itsdangerous = [
    {file = "itsdangerous-1.1.0-py2.py3-none-any.whl", hash = "sha256:b12271b2047cb23eeb98c8b5622e2e5c5e9abd9784a153e9d8ef9cb4dd09d749"},
    {file = "itsdangerous-1.1.0.tar.gz", hash = "sha256:321b033d07f2a4136d3ec762eac9f16a10ccd60f53c0c91af90217ace7ba1f19"},
]
jinja2 = [
    {file = "Jinja2-2.10.3-py2.py3-none-any.whl", hash = "sha256:74320bb91f31270f9551d46522e33af46a80c3d619f4a4bf42b3164d30b5911f"},
    {file = "Jinja2-2.10.3.tar.gz", hash = "sha256:9fe95f19286cfefaa917656583d020be14e7859c6b0252588391e47db34527de"},
]
jsonpath = [
    {file = "jsonpath-0.82.tar.gz", hash = "sha256:46d3fd2016cd5b842283d547877a02c418a0fe9aa7a6b0ae344115a2c990fef4"},
]
markupsafe = [
    {file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-win32.whl", hash = "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b"},
    {file = "MarkupSafe-1.1.1-cp27-cp27m-win_amd64.whl", hash = "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e"},
    {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f"},
    {file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-macosx_10_6_intel.whl", hash = "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-win32.whl", hash = "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21"},
    {file = "MarkupSafe-1.1.1-cp34-cp34m-win_amd64.whl", hash = "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1"},
    {file = "MarkupSafe-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl", hash = "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66"},
    {file = "MarkupSafe-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl", hash = "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2"},
    {file = "MarkupSafe-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c"},
    {file = "MarkupSafe-1.1.1.tar.gz", hash = "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b"},
]
pyyaml = [
    {file = "PyYAML-5.2-cp27-cp27m-win32.whl", hash = "sha256:35ace9b4147848cafac3db142795ee42deebe9d0dad885ce643928e88daebdcc"},
    {file = "PyYAML-5.2-cp27-cp27m-win_amd64.whl", hash = "sha256:ebc4ed52dcc93eeebeae5cf5deb2ae4347b3a81c3fa12b0b8c976544829396a4"},
    {file = "PyYAML-5.2-cp35-cp35m-win32.whl", hash = "sha256:38a4f0d114101c58c0f3a88aeaa44d63efd588845c5a2df5290b73db8f246d15"},
    {file = "PyYAML-5.2-cp35-cp35m-win_amd64.whl", hash = "sha256:483eb6a33b671408c8529106df3707270bfacb2447bf8ad856a4b4f57f6e3075"},
    {file = "PyYAML-5.2-cp36-cp36m-win32.whl", hash = "sha256:7f38e35c00e160db592091751d385cd7b3046d6d51f578b29943225178257b31"},
    {file = "PyYAML-5.2-cp36-cp36m-win_amd64.whl", hash = "sha256:0e7f69397d53155e55d10ff68fdfb2cf630a35e6daf65cf0bdeaf04f127c09dc"},
    {file = "PyYAML-5.2-cp37-cp37m-win32.whl", hash = "sha256:e4c015484ff0ff197564917b4b4246ca03f411b9bd7f16e02a2f586eb48b6d04"},
    {file = "PyYAML-5.2-cp37-cp37m-win_amd64.whl", hash = "sha256:4b6be5edb9f6bb73680f5bf4ee08ff25416d1400fbd4535fe0069b2994da07cd"},
    {file = "PyYAML-5.2-cp38-cp38-win32.whl", hash = "sha256:8100c896ecb361794d8bfdb9c11fce618c7cf83d624d73d5ab38aef3bc82d43f"},
    {file = "PyYAML-5.2-cp38-cp38-win_amd64.whl", hash = "sha256:2e9f0b7c5914367b0916c3c104a024bb68f269a486b9d04a2e8ac6f6597b7803"},
    {file = "PyYAML-5.2.tar.gz", hash = "sha256:c0ee8eca2c582d29c3c2ec6e2c4f703d1b7f1fb10bc72317355a746057e7346c"},
]
requests = [
    {file = "requests-2.22.0-py2.py3-none-any.whl", hash = "sha256:9cf5292fcd0f598c671cfc1e0d7d1a7f13bb8085e9a590f48c010551dc6c4b31"},
    {file = "requests-2.22.0.tar.gz", hash = "sha256:11e007a8a2aa0323f5a921e9e6a2d7e4e67d9877e85773fba9ba6419025cbeb4"},
]
requests-toolbelt = [
    {file = "requests-toolbelt-0.9.1.tar.gz", hash = "sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"},
    {file = "requests_toolbelt-0.9.1-py2.py3-none-any.whl", hash = "sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f"},
]
sentry-sdk = [
    {file = "sentry-sdk-0.13.5.tar.gz", hash = "sha256:c6b919623e488134a728f16326c6f0bcdab7e3f59e7f4c472a90eea4d6d8fe82"},
    {file = "sentry_sdk-0.13.5-py2.py3-none-any.whl", hash = "sha256:05285942901d38c7ce2498aba50d8e87b361fc603281a5902dda98f3f8c5e145"},
]
urllib3 = [
    {file = "urllib3-1.25.7-py2.py3-none-any.whl", hash = "sha256:a8a318824cc77d1fd4b2bec2ded92646630d7fe8619497b142c84a9e6f5a7293"},
    {file = "urllib3-1.25.7.tar.gz", hash = "sha256:f3c5fd51747d450d4dcf6f923c81f78f811aab8205fda64b0aba34a4e48b0745"},
]
werkzeug = [
    {file = "Werkzeug-0.16.0-py2.py3-none-any.whl", hash = "sha256:e5f4a1f98b52b18a93da705a7458e55afb26f32bff83ff5d19189f92462d65c4"},
    {file = "Werkzeug-0.16.0.tar.gz", hash = "sha256:7280924747b5733b246fe23972186c6b348f9ae29724135a6dfc1e53cea433e7"},
]

@@ -1,13 +1,13 @@
[tool.poetry]
name = "httprunner"
version = "2.4.0"
version = "2.4.8"
description = "One-stop solution for HTTP(S) testing."
license = "Apache-2.0"
readme = "README.md"
authors = ["debugtalk <debugtalk@gmail.com>"]

homepage = "https://github.com/HttpRunner/HttpRunner"
repository = "https://github.com/HttpRunner/HttpRunner"
homepage = "https://github.com/httprunner/httprunner"
repository = "https://github.com/httprunner/httprunner"
documentation = "https://docs.httprunner.org"

keywords = ["HTTP", "api", "test", "requests", "locustio"]
@@ -27,7 +27,7 @@ classifiers = [
    "Programming Language :: Python :: 3.8"
]

include = ["CHANGELOG.md", "httprunner/static/*"]
include = ["docs/CHANGELOG.md"]

[tool.poetry.dependencies]
python = "~2.7 || ^3.5"
@@ -40,19 +40,20 @@ colorama = "^0.4.1"
colorlog = "^4.0.2"
filetype = "^1.0.5"
jsonpath = "^0.82"
sentry-sdk = "^0.13.5"
future = { version = "^0.18.1", python = "~2.7" }
enum34 = { version = "^1.1.6", python = "~2.7" }

[tool.poetry.dev-dependencies]
flask = "<1.0.0"
coverage = "^4.5.4"
coveralls = "^1.8.2"

[tool.poetry.scripts]
hrun = "httprunner.cli:main"
ate = "httprunner.cli:main"
httprunner = "httprunner.cli:main"
locusts = "httprunner.plugins.locusts.cli:main"
locusts = "httprunner.ext.locusts.cli:main"

[build-system]
requires = ["poetry>=0.12"]
requires = ["poetry>=1.0.0"]
build-backend = "poetry.masonry.api"

@@ -15,8 +15,8 @@ try:
except ImportError:
    httpbin_app = None
HTTPBIN_HOST = "httpbin.org"
HTTPBIN_PORT = 443
HTTPBIN_SERVER = "https://{}:{}".format(HTTPBIN_HOST, HTTPBIN_PORT)
HTTPBIN_PORT = 80
HTTPBIN_SERVER = "http://{}:{}".format(HTTPBIN_HOST, HTTPBIN_PORT)

FLASK_APP_PORT = 5000
SECRET_KEY = "DebugTalk"
@@ -93,6 +93,7 @@ def validate_request(func):
def index():
    return "Hello World!"


@app.route('/api/get-token', methods=['POST'])
def get_token():
    device_sn = request.headers.get('device_sn', "")
@@ -121,6 +122,7 @@ def get_token():
    response.headers["Content-Type"] = "application/json"
    return response


@app.route('/api/users')
@validate_request
def get_users():
@@ -134,6 +136,7 @@ def get_users():
    response.headers["Content-Type"] = "application/json"
    return response


@app.route('/api/reset-all')
@validate_request
def clear_users():
@@ -145,6 +148,7 @@ def clear_users():
    response.headers["Content-Type"] = "application/json"
    return response


@app.route('/api/users/<int:uid>', methods=['POST'])
@validate_request
def create_user(uid):
@@ -167,6 +171,7 @@ def create_user(uid):
    response.headers["Content-Type"] = "application/json"
    return response


@app.route('/api/users/<int:uid>')
@validate_request
def get_user(uid):
@@ -188,6 +193,7 @@ def get_user(uid):
    response.headers["Content-Type"] = "application/json"
    return response


@app.route('/api/users/<int:uid>', methods=['PUT'])
@validate_request
def update_user(uid):
@@ -209,6 +215,7 @@ def update_user(uid):
    response.headers["Content-Type"] = "application/json"
    return response


@app.route('/api/users/<int:uid>', methods=['DELETE'])
@validate_request
def delete_user(uid):

@@ -33,7 +33,7 @@ class ApiServerUnittest(unittest.TestCase):
        )
        cls.flask_process.start()
        cls.httpbin_process.start()
        time.sleep(0.1)
        time.sleep(1)
        cls.api_client = requests.Session()

    @classmethod

@@ -2,14 +2,15 @@
name: basic test with httpbin
base_url: https://httpbin.org/

- test:
    name: index
    request:
        url: /
        method: GET
    validate:
        - eq: ["status_code", 200]
        - contains: [content, "HTTP Request & Response Service"]
#- test:
# TODO: fix compatibility with Python 2.7, UnicodeDecodeError
#    name: index
#    request:
#        url: /
#        method: GET
#    validate:
#        - eq: ["status_code", 200]
#        - contains: [content, "HTTP Request & Response Service"]

- test:
    name: headers

30 tests/httpbin/upload.v2.yml Normal file
@@ -0,0 +1,30 @@
config:
    name: test upload file with httpbin
    base_url: ${get_httpbin_server()}

teststeps:
-
    name: upload file
    variables:
        file_path: "data/test.env"
        m_encoder: ${multipart_encoder(file=$file_path)}
    request:
        url: /post
        method: POST
        headers:
            Content-Type: ${multipart_content_type($m_encoder)}
        data: $m_encoder
    validate:
        - eq: ["status_code", 200]
        - startswith: ["content.files.file", "UserName=test"]

-
    name: upload file with keyword
    request:
        url: /post
        method: POST
        upload:
            file: "data/test.env"
    validate:
        - eq: ["status_code", 200]
        - startswith: ["content.files.file", "UserName=test"]
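The `multipart_encoder`/`upload` helpers used in the testcase above wrap requests-toolbelt under the hood. As a rough stdlib-only sketch (not HttpRunner's actual implementation; the field name `file` and the `UserName=test` payload simply mirror the test data above), this is approximately the multipart/form-data body such a step ends up posting:

```python
# Minimal sketch of a single-file multipart/form-data upload body.
# A real run would read data/test.env; here a stand-in payload is used.
import uuid


def encode_multipart(field_name: str, filename: str, payload: bytes):
    """Return (content_type, body) for a single-file multipart upload."""
    boundary = uuid.uuid4().hex  # random boundary, like MultipartEncoder generates
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field_name}"; filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + payload + f"\r\n--{boundary}--\r\n".encode()
    return f"multipart/form-data; boundary={boundary}", body


content_type, body = encode_multipart("file", "test.env", b"UserName=test")
```

POSTing `body` with a `Content-Type: <content_type>` header to httpbin's `/post` echoes the file back under `content.files.file`, which is what the `startswith` validator checks.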
@@ -6,14 +6,24 @@
    name: upload file
    variables:
        file_path: "data/test.env"
        multipart_encoder: ${multipart_encoder(file=$file_path)}
        m_encoder: ${multipart_encoder(file=$file_path)}
    request:
        url: /post
        method: POST
        headers:
            Content-Type: ${multipart_content_type($multipart_encoder)}
        data: $multipart_encoder
            Content-Type: ${multipart_content_type($m_encoder)}
        data: $m_encoder
    validate:
        - eq: ["status_code", 200]
        - startswith: ["content.files.file", "UserName=test"]

- test:
    name: upload file with keyword
    request:
        url: /post
        method: POST
        upload:
            file: "data/test.env"
    validate:
        - eq: ["status_code", 200]
        - startswith: ["content.files.file", "UserName=test"]

@@ -1,3 +1,4 @@
import json
import os
import re
import shutil
@@ -222,12 +223,17 @@ class TestHttpRunner(ApiServerUnittest):
        self.assertIn("records", summary["details"][0])

    def test_run_yaml_upload(self):
        summary = self.runner.run("tests/httpbin/upload.yml")
        self.assertTrue(summary["success"])
        self.assertEqual(summary["stat"]["testcases"]["total"], 1)
        self.assertEqual(summary["stat"]["teststeps"]["total"], 1)
        self.assertIn("details", summary)
        self.assertIn("records", summary["details"][0])
        upload_cases_list = [
            "tests/httpbin/upload.yml",
            "tests/httpbin/upload.v2.yml"
        ]
        for upload_case in upload_cases_list:
            summary = self.runner.run(upload_case)
            self.assertTrue(summary["success"])
            self.assertEqual(summary["stat"]["testcases"]["total"], 1)
            self.assertEqual(summary["stat"]["teststeps"]["total"], 2)
            self.assertIn("details", summary)
            self.assertIn("records", summary["details"][0])

    def test_run_post_data(self):
        testcases = [
@@ -262,8 +268,9 @@
        self.assertTrue(summary["success"])
        self.assertEqual(summary["stat"]["testcases"]["total"], 1)
        self.assertEqual(summary["stat"]["teststeps"]["total"], 1)
        resp_json = json.loads(summary["details"][0]["records"][0]["meta_datas"]["data"][0]["response"]["body"])
        self.assertEqual(
            summary["details"][0]["records"][0]["meta_datas"]["data"][0]["response"]["json"]["data"],
            resp_json["data"],
            "abc"
        )

@@ -550,12 +557,11 @@ class TestHttpRunner(ApiServerUnittest):
            }
        )

    # def test_validate_response_content(self):
    #     # TODO: fix compatibility with Python 2.7
    #     testcase_file_path = os.path.join(
    #         os.getcwd(), 'tests/httpbin/basic.yml')
    #     summary = self.runner.run(testcase_file_path)
    #     self.assertTrue(summary["success"])
    def test_validate_response_content(self):
        testcase_file_path = os.path.join(
            os.getcwd(), 'tests/httpbin/basic.yml')
        summary = self.runner.run(testcase_file_path)
        self.assertTrue(summary["success"])

    def test_html_report_xss(self):
        testcases = [

@@ -1,7 +1,7 @@
import os
import unittest

from httprunner.plugins.locusts.utils import prepare_locust_tests
from httprunner.ext.locusts.utils import prepare_locust_tests


class TestLocust(unittest.TestCase):
@@ -277,13 +277,6 @@ class TestSuiteLoader(unittest.TestCase):
        with self.assertRaises(exceptions.FileNotFound):
            loader.load_cases(path)

    def test_load_api_folder(self):
        path = os.path.join(os.getcwd(), "tests", "api")
        api_definition_mapping = buildup.load_api_folder(path)
        api_file_path = os.path.join(os.getcwd(), "tests", "api", "get_token.yml")
        self.assertIn(api_file_path, api_definition_mapping)
        self.assertIn("request", api_definition_mapping[api_file_path])

    def test_load_project_tests(self):
        buildup.load_project_data(os.path.join(os.getcwd(), "tests"))
        api_file_path = os.path.join(os.getcwd(), "tests", "api", "get_token.yml")

@@ -150,10 +150,3 @@ class TestFileLoader(unittest.TestCase):
        )
        env_variables_mapping = load.load_dot_env_file(dot_env_path)
        self.assertEqual(env_variables_mapping, {})

    def test_load_folder_content(self):
        path = os.path.join(os.getcwd(), "tests", "api")
        items_mapping = load.load_folder_content(path)
        file_path = os.path.join(os.getcwd(), "tests", "api", "reset_all.yml")
        self.assertIn(file_path, items_mapping)
        self.assertIsInstance(items_mapping[file_path], dict)

@@ -1206,8 +1206,9 @@ class TestParser(unittest.TestCase):
                }
            ]
        }
        with self.assertRaises(exceptions.VariableNotFound):
            parser.parse_tests(tests_mapping)
        parser.parse_tests(tests_mapping)
        parse_failed_testfiles = parser.get_parse_failed_testfiles()
        self.assertIn("testcase", parse_failed_testfiles)

    def test_parse_tests_base_url_teststep_empty(self):
        """ base_url & verify: priority test_dict > config
