make documents on readthedocs

This commit is contained in:
debugtalk
2017-11-08 19:15:54 +08:00
parent c916e9dc84
commit f259ecd6a6
21 changed files with 646 additions and 385 deletions

6
.gitignore vendored

@@ -1,6 +1,8 @@
*.pyc
__pycache__
.DS_Store
.vscode
.pypirc
*/tmp/*
build/*
dist/*
@@ -8,4 +10,6 @@ dist/*
.python-version
logs/%
.coverage
locustfile.py
locustfile.py
_build

199
README.md

@@ -23,211 +23,12 @@ Take full reuse of Python's existing powerful libraries: [`Requests`][requests],
- With reuse of [`Locust`][Locust], you can run performance tests without extra work.
- CLI command supported, perfect combination with [Jenkins][Jenkins].
[*`Background Introduction (中文版)`*](docs/background-CN.md) | [*`Feature Descriptions (中文版)`*](docs/feature-descriptions-CN.md)
## Installation/Upgrade
```bash
$ pip install HttpRunner
```
To upgrade all specified packages to the newest available version, you should add the `-U` option.
If there is a problem with the installation or upgrade, you can check the [`FAQ`](docs/FAQ.md).
To confirm the installation or upgrade succeeded, run `httprunner -V` and check that it prints the expected version number.
```text
$ httprunner -V
HttpRunner version: 0.8.0
```
Execute the command `httprunner -h` to view command help.
```text
$ httprunner -h
usage: httprunner [-h] [-V] [--log-level LOG_LEVEL] [--report-name REPORT_NAME]
                  [--failfast] [--startproject STARTPROJECT]
                  [testset_paths [testset_paths ...]]

HttpRunner.

positional arguments:
  testset_paths         testset file path

optional arguments:
  -h, --help            show this help message and exit
  -V, --version         show version
  --log-level LOG_LEVEL
                        Specify logging level, default is INFO.
  --report-name REPORT_NAME
                        Specify report name, default is generated time.
  --failfast            Stop the test run on the first error or failure.
  --startproject STARTPROJECT
                        Specify new project name.
```
## Write testcases
It is recommended to write testcases in `YAML` format.
Here is a testset example of a typical scenario: get a `token` at the beginning, then have each subsequent request carry the `token` in its headers.
```yaml
- config:
    name: "create user testsets."
    variables:
        - user_agent: 'iOS/10.3'
        - device_sn: ${gen_random_string(15)}
        - os_platform: 'ios'
        - app_version: '2.8.6'
    request:
        base_url: http://127.0.0.1:5000
        headers:
            Content-Type: application/json
            device_sn: $device_sn

- test:
    name: get token
    request:
        url: /api/get-token
        method: POST
        headers:
            user_agent: $user_agent
            device_sn: $device_sn
            os_platform: $os_platform
            app_version: $app_version
        json:
            sign: ${get_sign($user_agent, $device_sn, $os_platform, $app_version)}
    extract:
        - token: content.token
    validate:
        - {"check": "status_code", "comparator": "eq", "expected": 200}
        - {"check": "content.token", "comparator": "len_eq", "expected": 16}

- test:
    name: create user which does not exist
    request:
        url: /api/users/1000
        method: POST
        headers:
            token: $token
        json:
            name: "user1"
            password: "123456"
    validate:
        - {"check": "status_code", "comparator": "eq", "expected": 201}
        - {"check": "content.success", "comparator": "eq", "expected": true}
```
Function invocation is supported in `YAML/JSON` format testcases, as with `gen_random_string` and `get_sign` above. This mechanism relies on the `debugtalk.py` hot plugin: define functions in a `debugtalk.py` file and they will be auto-discovered and invoked at runtime.
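For illustration, a minimal `debugtalk.py` might look like the sketch below. The implementations are hypothetical stand-ins, not the project's actual helpers; in particular, the real signing algorithm is application-specific.

```python
# debugtalk.py -- illustrative sketch of a hot-plugin module.
# Both implementations are assumptions, not HttpRunner's own code.
import hashlib
import random
import string


def gen_random_string(str_len):
    """Return a random alphanumeric string of the given length (e.g. a device_sn)."""
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(int(str_len)))


def get_sign(*args):
    """Compute an MD5 signature over the concatenated arguments (placeholder scheme)."""
    content = ''.join(args).encode('utf-8')
    return hashlib.md5(content).hexdigest()
```

With a file like this placed next to your testcases, `${gen_random_string(15)}` and `${get_sign(...)}` can resolve at runtime.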
For detailed rules on writing testcases, read the [`QuickStart`][quickstart] documentation.
## Run testcases
`HttpRunner` can run testcases in diverse ways.
You can run a single testset by specifying its file path.
```text
$ httprunner filepath/testcase.yml
```
You can also run several testsets by specifying multiple testset file paths.
```text
$ httprunner filepath1/testcase1.yml filepath2/testcase2.yml
```
If you want to run testsets of a whole project, you can achieve this goal by specifying the project folder path.
```text
$ httprunner testcases_folder_path
```
When you run continuous integration tests or production environment monitoring with `Jenkins`, you may need to send test result notifications. For instance, you can send email via the mailgun service as below.
```text
$ httprunner filepath/testcase.yml --report-name ${BUILD_NUMBER} \
--mailgun-smtp-username "qa@debugtalk.com" \
--mailgun-smtp-password "12345678" \
--email-sender excited@samples.mailgun.org \
--email-recepients ${MAIL_RECEPIENTS} \
--jenkins-job-name ${JOB_NAME} \
--jenkins-job-url ${JOB_URL} \
--jenkins-build-number ${BUILD_NUMBER}
```
## Performance test
With reuse of [`Locust`][Locust], you can run performance tests without extra work.
```bash
$ locusts -V
[2017-08-26 23:45:42,246] bogon/INFO/stdout: Locust 0.8a2
[2017-08-26 23:45:42,246] bogon/INFO/stdout:
```
For full usage, run `locusts -h`; you will find it is the same as `locust -h`.
The only difference is the `-f` argument. If you specify `-f` with a Python locustfile, it behaves the same as `locust`; if you specify `-f` with a `YAML/JSON` testcase file, the file will first be converted to a Python locustfile and then passed to `locust`.
```bash
$ locusts -f examples/first-testcase.yml
[2017-08-18 17:20:43,915] Leos-MacBook-Air.local/INFO/locust.main: Starting web monitor at *:8089
[2017-08-18 17:20:43,918] Leos-MacBook-Air.local/INFO/locust.main: Starting Locust 0.8a2
```
In this case, you can reuse all features of [`Locust`][Locust].
That's not all. With the `--full-speed` argument, you can even start locust with one master and several slaves (defaulting to the number of CPU cores) at once, which means you can leverage all CPU cores of your machine.
```bash
$ locusts -f examples/first-testcase.yml --full-speed
[2017-08-26 23:51:47,071] bogon/INFO/locust.main: Starting web monitor at *:8089
[2017-08-26 23:51:47,075] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,078] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,080] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,083] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,084] bogon/INFO/locust.runners: Client 'bogon_656e0af8e968a8533d379dd252422ad3' reported as ready. Currently 1 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_09f73850252ee4ec739ed77d3c4c6dba' reported as ready. Currently 2 clients ready to swarm.
[2017-08-26 23:51:47,084] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_869f7ed671b1a9952b56610f01e2006f' reported as ready. Currently 3 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_80a804cda36b80fac17b57fd2d5e7cdb' reported as ready. Currently 4 clients ready to swarm.
```
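Under the hood, `--full-speed` boils down to launching one master process and one slave per CPU core. A rough sketch of that idea, assuming Locust 0.8's `--master`/`--slave` flags; the helper name is made up:

```python
# Hypothetical sketch: build the command lines that --full-speed would start.
import multiprocessing


def build_full_speed_commands(locustfile, slaves=None):
    """Return command lines for one master and N slave locust processes."""
    slaves = slaves or multiprocessing.cpu_count()  # default: one slave per core
    master = ["locust", "-f", locustfile, "--master"]
    workers = [["locust", "-f", locustfile, "--slave"] for _ in range(slaves)]
    return master, workers
```

Each command list could then be handed to `subprocess.Popen`, producing the cluster of processes shown in the log above.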
![](docs/locusts-full-speed.jpg)
Enjoy!
## Supported Python Versions
Python `2.7`, `3.4`, `3.5` and `3.6`.
`HttpRunner` has been tested on `macOS`, `Linux` and `Windows` platforms.
## Development
To develop or debug `HttpRunner`, you can install the relevant requirements and use `main-ate.py` or `main-locust.py` as entry points.
```bash
$ pip install -r requirements.txt
$ python main-ate.py -h
$ python main-locust.py -h
```
## To learn more ...
- [Best Engineering Practices for Automated API Testing: ApiTestEngine](http://debugtalk.com/post/ApiTestEngine-api-test-best-practice/)
- [`ApiTestEngine QuickStart`][quickstart]
- [The Evolution of ApiTestEngine (0): Tests Before Development](http://debugtalk.com/post/ApiTestEngine-0-setup-CI-test/)
- [The Evolution of ApiTestEngine (1): Building the Basic Framework](http://debugtalk.com/post/ApiTestEngine-1-setup-basic-framework/)
- [The Evolution of ApiTestEngine (2): Exploring an Elegant Testcase Description Format](http://debugtalk.com/post/ApiTestEngine-2-best-testcase-description/)
- [The Evolution of ApiTestEngine (3): Defining Python Functions in Testcases](http://debugtalk.com/post/ApiTestEngine-3-define-functions-in-yaml-testcases/)
- [The Evolution of ApiTestEngine (4): Calling Python Functions in Testcases](http://debugtalk.com/post/ApiTestEngine-4-call-functions-in-yaml-testcases/)
- [ApiTestEngine Integrates Locust for a Better Performance Testing Experience](http://debugtalk.com/post/apitestengine-supersede-locust/)
- [Convention over Configuration: ApiTestEngine's Hot-Plugin Mechanism](http://debugtalk.com/post/apitestengine-hot-plugin/)
[requests]: http://docs.python-requests.org/en/master/
[unittest]: https://docs.python.org/3/library/unittest.html

19
docs/FAQ.rst Normal file

@@ -0,0 +1,19 @@
FAQ
===
Unable to install PyUnitReport dependency library automatically
---------------------------------------------------------------
If the installation fails with an error like the one below. ::

    Downloading/unpacking PyUnitReport (from HttpRunner)
      Could not find any downloads that satisfy the requirement PyUnitReport (from HttpRunner)

You can install ``PyUnitReport`` manually first. ::

    pip install PyUnitReport

Then reinstall ``HttpRunner`` and everything should work. ::

    pip install HttpRunner

74
docs/Installation.rst Normal file

@@ -0,0 +1,74 @@
.. default-role:: code
Installation
============
``HttpRunner`` is available on `PyPI`_ and can be installed through pip or easy_install. ::

    $ pip install HttpRunner

or ::

    $ easy_install HttpRunner

If you want to keep up with the latest version, you can install from the GitHub repository URL. ::

    $ pip install git+https://github.com/HttpRunner/HttpRunner.git#egg=HttpRunner

Upgrade
-------

If you have installed ``HttpRunner`` before and want to upgrade to the latest version, use the ``-U`` option.
It works with each installation method described above. ::

    $ pip install -U HttpRunner
    $ easy_install -U HttpRunner
    $ pip install -U git+https://github.com/HttpRunner/HttpRunner.git#egg=HttpRunner
Check Installation
------------------
When HttpRunner is installed, a **httprunner** command should be available in your shell (if you're not using
virtualenv—which you should—make sure your python script directory is on your path).
To see ``HttpRunner`` version: ::

    $ httprunner -V
    HttpRunner version: 0.8.1b
    PyUnitReport version: 0.1.3b

To see available options, run::

    $ httprunner -h
    usage: httprunner [-h] [-V] [--log-level LOG_LEVEL] [--report-name REPORT_NAME]
                      [--failfast] [--startproject STARTPROJECT]
                      [testset_paths [testset_paths ...]]

    HttpRunner.

    positional arguments:
      testset_paths         testset file path

    optional arguments:
      -h, --help            show this help message and exit
      -V, --version         show version
      --log-level LOG_LEVEL
                            Specify logging level, default is INFO.
      --report-name REPORT_NAME
                            Specify report name, default is generated time.
      --failfast            Stop the test run on the first error or failure.
      --startproject STARTPROJECT
                            Specify new project name.
Supported Python Versions
-------------------------
HttpRunner supports Python 2.7, 3.4, 3.5, and 3.6. We strongly recommend using ``Python 3.6``.
.. _PyPI: https://pypi.python.org/pypi

28
docs/Introduction.md Normal file

@@ -0,0 +1,28 @@
# Introduction
## Design Philosophy
Take full advantage of Python's existing powerful libraries ([`Requests`][requests], [`unittest`][unittest] and [`Locust`][Locust]) to achieve API automated testing, production environment monitoring, and API performance testing in a concise and elegant manner.
## Key Features
- Inherit all powerful features of [`Requests`][requests] and handle HTTP requests in a human-friendly way.
- Define testcases in YAML or JSON format in a concise and elegant manner.
- Supports `function`/`variable`/`extract`/`validate` mechanisms to create full test scenarios.
- With the `debugtalk.py` plugin, module functions can be auto-discovered in recursive upward directories.
- Testcases can be run in diverse ways: a single testset, multiple testsets, or an entire project folder.
- Test reports are concise and clear, with detailed log records. See [`PyUnitReport`][PyUnitReport].
- With reuse of [`Locust`][Locust], you can run performance tests without extra work.
- CLI command supported, perfect combination with [Jenkins][Jenkins].
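The `debugtalk.py` auto-discovery in "recursive upward directories" mentioned above can be sketched roughly as follows. This is a guess at the mechanism, and `locate_debugtalk` is a made-up name:

```python
# Sketch: climb from a testcase's directory toward the filesystem root
# and stop at the first debugtalk.py found (illustrative only).
import os


def locate_debugtalk(start_path):
    """Search start_path and each parent directory for a debugtalk.py file."""
    path = os.path.abspath(start_path)
    while True:
        candidate = os.path.join(path, "debugtalk.py")
        if os.path.isfile(candidate):
            return candidate
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root without a match
            return None
        path = parent
```

Because the search walks upward, one `debugtalk.py` at the project root can serve testcases in any nested subfolder.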
## Learn more
You can read this [blog][HttpRunner-blog] to learn more about the background and initial thoughts of `HttpRunner`.
[requests]: http://docs.python-requests.org/en/master/
[unittest]: https://docs.python.org/3/library/unittest.html
[Locust]: http://locust.io/
[PyUnitReport]: https://github.com/HttpRunner/PyUnitReport
[Jenkins]: https://jenkins.io/index.html
[HttpRunner-blog]: http://debugtalk.com/post/ApiTestEngine-api-test-best-practice/

20
docs/Makefile Normal file

@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = python -msphinx
SPHINXPROJ = HttpRunner
SOURCEDIR = .
BUILDDIR = _build
# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

0
docs/README.rst Normal file

15
docs/_static/my.css vendored Normal file

@@ -0,0 +1,15 @@
@import 'https://media.readthedocs.org/css/sphinx_rtd_theme.css';
.wy-nav-content {
max-width: 1020px
}
.rst-content .topic {
border: silver 1px solid;
margin: 10px auto;
padding: 10px;
}
.rst-content .highlight>pre {
line-height: 1.5;
}


@@ -1,44 +0,0 @@
## Background

There are plenty of API testing tools on the market already, such as `Postman`, `JMeter`, and `RobotFramework`, and most testers have used at least one of them, at least judging by the résumés I see. Besides these mature tools, many technically capable testers (and developers) have built their own API test frameworks, of widely varying quality.

Yet when I set out to introduce automated API testing in my team, I searched around and could not find a tool or framework I was really satisfied with; every candidate fell somewhat short of what I had in mind.

So what would an ideal API test automation framework look like?

A testing tool (or framework) divorced from real business scenarios is meaningless, so let us first look at some common scenarios from daily work:

- a tester or developer, while debugging an issue, wants to call an API and check whether it responds normally;
- a tester manually testing a feature needs an order number, which can be produced by calling several APIs in sequence to walk through the ordering flow;
- before starting functional testing of a release, a tester wants to check that all of the system's APIs work, and only then begin manual testing;
- a developer wants to check, before committing code, whether the new code affects the system's existing APIs;
- the team needs a scheduled daily check that all APIs in the test environment work, to ensure the day's commits have not broken the main branch;
- the team needs to check all production APIs every 30 minutes, so that production outages are detected promptly;
- the team occasionally runs performance tests on core business scenarios, and hopes to reduce the effort by reusing the API testing work directly.

These scenarios should all look familiar; they are things we need to do in daily work. But without a suitable tool they are often very inefficient, or important work such as API regression testing and production API monitoring simply never gets done.

Start with the simplest case, manually invoking an API for testing. Some will say `Postman` covers this. Indeed, as a general-purpose API testing tool, `Postman` can construct API requests and inspect responses, so at that level it meets the functional requirements of API testing. But in a concrete project, using `Postman` is not that efficient.

Take a very common example:

> An API has a large number of request parameters and requires an `MD5` signature check: the request Headers must contain a `sign` parameter whose value is obtained by computing the `MD5` of the concatenation of the `URL`, `Method`, and `Body`.

Recall what testing this API involves. First we fill in all the parameters by hand according to the API documentation; then we concatenate all the parameter values as the signing scheme requires, compute the MD5 in a separate tool, and fill the result into the `sign` parameter; only then can we send the request, inspect the response, and manually check whether it is correct. Worst of all, every time we need to call this API, the whole procedure has to be repeated. The practical consequence is that, faced with APIs that have many parameters or require signature verification, testers may choose to skip API testing altogether.

Besides single API calls, we often need to call several APIs in combination. For example, testers of a logistics system frequently need an order number generated under a specific combination of conditions. Because an order number is tied to many parts of the business, it is hard to create directly in the database, so the common practice among business testers is to simulate the ordering flow each time an order number is needed, calling the corresponding APIs in sequence. Given how troublesome a single manual API call already is, imagine how time-consuming it is to call several APIs by hand every time.

Next, automated API testing. Most API test frameworks support it, and the usual approach is to write API test cases in code, or to adopt a data-driven style; with command-line (CLI) support, one can then hook into `Jenkins` or `crontab` for continuous integration or scheduled API monitoring.

The approach is sound; the problem lies in getting it adopted in real projects. The most reliable way to maintain automated test cases is still to write them in code, which is dependable and flexible, as many veterans who learned from painful lessons will attest; there are even anti-test-framework opinions circulating online. The problem is that not every tester on a project can write code, and a mandate does not make them learn overnight. In that situation it is hard to push API test automation in a concrete project. I can help write part of it, but many API test cases also depend on business logic and scenarios, and I really cannot invest that much time given how many projects I support. Partly for such reasons, many test frameworks advocate a data-driven style that separates business test cases from execution code. However, business scenarios are often complex, and the template engines of most frameworks are not expressive enough to describe test scenarios concisely, so they do not get widely adopted either.

Many more problems could be listed; these are real pain points in the daily testing work of internet companies.

Against this background, I came up with the idea of building [`ApiTestEngine`][ApiTestEngine].

As for the positioning of [`ApiTestEngine`][ApiTestEngine]: rather than a tool or a framework, it should be a set of best engineering practices for automated API testing, and `concise, elegant, practical` should be its core trait.

Of course, every engineer's notion of `best engineering practices` differs to some extent; I hope we can exchange ideas often and make progress together through the collision of minds.

[ApiTestEngine]: https://github.com/debugtalk/HttpRunner

192
docs/conf.py Normal file

@@ -0,0 +1,192 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#
# HttpRunner documentation build configuration file, created by
# sphinx-quickstart on Wed Nov 8 14:28:04 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import os
on_rtd = os.environ.get('READTHEDOCS') == 'True'
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.githubpages'
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
from recommonmark.parser import CommonMarkParser
source_parsers = {
'.md': CommonMarkParser,
}
source_suffix = ['.rst', '.md']
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'HttpRunner'
copyright = '2017, DebugTalk'
author = 'debugtalk'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.8'
# The full version, including alpha/beta/rc tags.
release = '0.8.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'zh'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
if on_rtd:
    html_theme = 'default'
    html_context = {
        'css_files': [
            'https://media.readthedocs.org/css/sphinx_rtd_theme.css',
            'https://media.readthedocs.org/css/readthedocs-doc-embed.css',
            '_static/my.css',
        ],
    }
else:
    import sphinx_rtd_theme
    html_theme = "sphinx_rtd_theme"
    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
    html_style = 'my.css'
html_show_sourcelink = False
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
# html_sidebars = {
# '**': [
# 'about.html',
# 'navigation.html',
# 'relations.html', # needs 'show_related': True theme option to display
# 'searchbox.html',
# # 'donate.html',
# ]
# }
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'HttpRunnerdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'HttpRunner.tex', 'HttpRunner Documentation',
'debugtalk', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'httprunner', 'HttpRunner Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'HttpRunner', 'HttpRunner Documentation',
author, 'HttpRunner', 'One line description of project.',
'Miscellaneous'),
]

9
docs/development.md Normal file

@@ -0,0 +1,9 @@
## Development
To develop or debug `HttpRunner`, you can install the relevant requirements and use `main-ate.py` or `main-locust.py` as entry points.
```bash
$ pip install -r requirements.txt
$ python main-ate.py -h
$ python main-locust.py -h
```


@@ -1,138 +0,0 @@
## Core Features

- Supports all common HTTP methods for API requests, including GET/POST/HEAD/PUT/DELETE
- Testcases are separated from code and maintained in a concise, elegant way, with `YAML/JSON` support
- The testcase description format is expressive: input parameters and expected outputs can be described concisely
- API testcases are reusable, which makes it easy to build complex test scenarios
- Flexible execution: single API call tests, batch API call tests, and scheduled test runs
- Test reports are concise and clear, with detailed logs including request latency and response data
- One framework, several roles: API management, automated API testing, and API performance testing (combined with Locust)
- Extensible, e.g. towards a web-based platform

## Features in Detail

> Supports all common HTTP methods for API requests, including GET/POST/HEAD/PUT/DELETE

My language of choice is Python, and the best way to send HTTP requests in Python is the [`Requests`][Requests] library: concise, elegant, and powerful.

> Testcases are separated from code and maintained in a concise, elegant way, with `YAML` support

The best way to separate testcases from code is to build a testcase loader engine and a testcase execution engine, which is the most elegant design I arrived at while building the [`AppiumBooster`][AppiumBooster] framework. This requires defining a standard testcase data structure up front, as the bridge between the loader and the executor.

Note that the testcase data structure must contain every piece of information an API testcase needs: the request (URL, Headers, Method, parameters, etc.) and the expected response (StatusCode, ResponseHeaders, ResponseContent, etc.).

The benefit is that no matter how testcases are described ([`YAML`][YAML], JSON, CSV, Excel, XML, ...), and no matter whether they are organized into business layers, business testcases can be converted into the standard structure as long as the loader implements the corresponding converter. The execution engine never needs to care about the concrete description format: it reads the request and the expected response from the standard structure, constructs and sends the HTTP request, and compares the actual response against the expectation.

As for why [`YAML`][YAML] is called out explicitly: I consider it the best testcase description format, concise without clutter yet able to carry rich information. That is a personal preference; if you prefer another format, just implement the corresponding converter.

> The testcase description format is expressive

Once testcases are separated from framework code, describing business test scenarios falls entirely on the testcases. If we describe them in [`YAML`][YAML], then [`YAML`][YAML] must be able to express all kinds of complex business scenarios.

So how should "expressive" be understood? Passing simple literal parameter values is easy, so consider a few relatively complex but common examples:

- a request parameter must contain the current timestamp;
- a request parameter must contain a 16-character random string;
- the request carries a signature computed as the md5 of several concatenated request parameters;
- the response Headers must contain an `X-ATE-V` field, and we must check that its value is greater than 100;
- the response contains a string, and we must verify that it contains a 10-digit order number;
- the response is a deeply nested json structure, and we must check whether a certain element at a certain level equals True.

None of these values can be written literally in a testcase. With testcases written as Python scripts this would be easy to solve with a Python function, but testcases are now separated from code and we cannot execute Python functions inside [`YAML`][YAML]. So what can be done?

The answer: define a function escape sequence and implement a custom template. This is not hard to grasp; it is the usual approach of template languages. For example, if we define `${}` as the escape sequence, then the content inside `${}` is no longer treated as a plain string: it is resolved as a variable, or executed as a function to obtain the actual value. The execution engine has to support this; the simplest implementation extracts the string inside `${}` and computes the expression's value with `eval`. For more complex needs, we can also wrap common API testing operations into a set of keywords and use those keywords when writing testcases.
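The escape-and-eval idea can be sketched in a few lines. This is illustrative only; `gen_random_string` here is a stand-in for a user-defined helper, and the real engine is more involved:

```python
# Sketch: treat ${...} as an expression to evaluate, everything else as text.
import re
import random
import string


def gen_random_string(str_len):
    """Stand-in for a user-defined helper function."""
    return ''.join(random.choice(string.ascii_letters) for _ in range(str_len))


def render(template, variables=None):
    """Replace each ${expr} in template with the eval() result of expr."""
    funcs = {"gen_random_string": gen_random_string}

    def evaluate(match):
        # expose registered functions plus the caller's variables to eval
        return str(eval(match.group(1), funcs, dict(variables or {})))

    return re.sub(r"\$\{(.*?)\}", evaluate, template)
```

For instance, `render("sn-${gen_random_string(16)}")` produces a fresh 19-character value each call, which is exactly the behaviour the examples above require.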
> API testcases are reusable, which makes it easy to build complex test scenarios

In many systems the APIs are linked by business logic. For example, calling the login API requires first calling the captcha API and including the returned captcha in the login request; calling a data query API requires including the session value returned by the login API. If every business flow under test described all of its API calls from scratch, there would be massive duplication and testcase maintenance would become bloated.

A better approach is to wrap each API call as a standalone testcase, and then describe a business scenario by selecting the relevant testcases and chaining them in order, like building blocks. If you have read the [`AppiumBooster`][AppiumBooster] introduction, you may also recall that commonly used features can be grouped into module suites, which can in turn be assembled at a higher level into even more complex scenarios.

One key problem has to be solved here: passing parameters between testcases. It is not complicated: in one testcase we bind a key value extracted from the response to a named variable, and later testcases reference `${variable_name}` in their request parameters.

> Flexible execution: single API call tests, batch API call tests, and scheduled test runs

As the background scenarios showed, API testing tools are needed in many situations. Besides scheduled automated runs over all APIs, they often assist manual testing, in a `half-manual + half-automated` mode.

The biggest problem business testers face when using such tools is that, besides the business feature itself, they must spend a lot of time on technical implementation details such as signature verification, and in repeated operations the latter usually takes most of the time.

This cannot be avoided entirely: APIs differ greatly between systems, and no tool can handle every case automatically. But we can separate an API's technical implementation details from its business parameters, so that business testers only need to care about the business parameters.

Concretely, we can configure a template for each API that encapsulates the parameters and technical details unrelated to the business function, such as signature verification, timestamps, and random values, while exposing the business-related parameters as configurable.

The benefit is that the non-business parameters and technical details only need to be encapsulated once, and that work can be done by a developer or test developer, relieving the business testers. Once the template is in place, testers only care about the business parameters and, combined with the business test cases, can easily generate multiple API testcases from the template.

> Test reports are concise and clear, with detailed logs

Test reports should follow the principle of being concise without being simplistic. "Concise", because most of the time we only need to tell in the shortest time whether all APIs run normally. "Not simplistic", because when there are failed testcases we want as much detail as possible: test time, request parameters, response content, response latency, and so on.

While reading the `locust` source I was deeply impressed by how it wraps its [`HTTP` client](https://github.com/locustio/locust/blob/master/locust/clients.py): it subclasses `requests.Session`, overrides the `request` method in the subclass `HttpSession`, and wraps `requests.Session.request` inside it.

```python
request_meta = {}
# set up pre_request hook for attaching meta data to the request object
request_meta["method"] = method
request_meta["start_time"] = time.time()
response = self._send_request_safe_mode(method, url, **kwargs)
# record the consumed time
request_meta["response_time"] = int((time.time() - request_meta["start_time"]) * 1000)
request_meta["content_size"] = int(response.headers.get("content-length") or 0)
```

Every `HttpLocust` virtual user client is an `HttpSession` instance, so each `HTTP` request both takes full advantage of the [`Requests`][Requests] library and records raw performance data such as response time and response size. The implementation is truly elegant.

Inspired by this, detailed request and response data can be saved in the same way. For example, to save the `Response`'s `Headers` and `Body`, only two more lines are needed:

```python
request_meta["response_headers"] = response.headers
request_meta["response_content"] = response.content
```

> One framework, several roles: API management, automated API testing, and API performance testing (combined with Locust)

Strictly speaking, API performance testing should not fall within the responsibilities of an API test automation framework. But real projects demand exactly that: both API automation testing and API performance testing, without maintaining two codebases.

Thanks to the `locust` performance testing framework, automation and performance scripts really can be unified.

As mentioned above, every `HttpLocust` virtual user client is an `HttpSession` instance, and `HttpSession` inherits from `requests.Session`, so every `HttpLocust` client is also a `requests.Session` instance.

Likewise, when we use the [`Requests`][Requests] library for API testing, the request client is also a `requests.Session` instance; we usually just use the simplified `requests` API. The following two usages are equivalent:

```python
resp = requests.get('http://debugtalk.com')
# equivalent to:
client = requests.Session()
resp = client.get('http://debugtalk.com')
```

With this relationship in place, switching between API automation testing and performance testing becomes easy. Inside the framework, the `HTTP` client can be initialized as follows:

```python
def __init__(self, origin, kwargs, http_client_session=None):
    self.http_client_session = http_client_session or requests.Session()
```

By default, `http_client_session` is a `requests.Session` instance, used for API testing; when performance testing is needed, just pass in `locust`'s `HttpSession` instance instead.

> Extensible, e.g. towards a web-based platform

When the platform is to be promoted to a broader user base, such as product managers and operations staff, a web front end becomes unavoidable. Viewing testcase runs, configuring API modules, and managing testcases on a web platform is indeed much more convenient.

Still, for an API test framework a `web platform` is only icing on the cake. Early on we can focus on the command-line (CLI) invocation and a well-defined data storage structure, and later add the web platform with a web framework such as Flask.

[AppiumBooster]: https://github.com/debugtalk/AppiumBooster
[Requests]: http://docs.python-requests.org/en/master/
[YAML]: http://pyyaml.org/
20
docs/index.rst Normal file

@@ -0,0 +1,20 @@
.. HttpRunner documentation master file, created by
   sphinx-quickstart on Wed Nov 8 14:28:04 2017.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to HttpRunner's documentation!
======================================

.. toctree::
   :maxdepth: 1
   :caption: Contents

   Introduction
   Installation
   quickstart
   write-testcases
   run-testcases
   load-test
   development
   FAQ

45
docs/load-test.md Normal file

@@ -0,0 +1,45 @@
## Load Test
With reuse of [`Locust`][Locust], you can run performance tests without extra work.
```bash
$ locusts -V
[2017-08-26 23:45:42,246] bogon/INFO/stdout: Locust 0.8a2
[2017-08-26 23:45:42,246] bogon/INFO/stdout:
```
For full usage, run `locusts -h`; you will find it is the same as `locust -h`.
The only difference is the `-f` argument. If you specify `-f` with a Python locustfile, it behaves the same as `locust`; if you specify `-f` with a `YAML/JSON` testcase file, the file will first be converted to a Python locustfile and then passed to `locust`.
```bash
$ locusts -f examples/first-testcase.yml
[2017-08-18 17:20:43,915] Leos-MacBook-Air.local/INFO/locust.main: Starting web monitor at *:8089
[2017-08-18 17:20:43,918] Leos-MacBook-Air.local/INFO/locust.main: Starting Locust 0.8a2
```
In this case, you can reuse all features of [`Locust`][Locust].
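The `-f` dispatch described above amounts to a small branch on the file extension. A rough sketch with hypothetical helper names; the real conversion generates a temporary Python locustfile with rather more machinery:

```python
# Hypothetical sketch of the locusts -f dispatch rule.
import os


def convert_to_locustfile(path):
    """Placeholder for the YAML/JSON -> Python locustfile conversion step."""
    base, _ = os.path.splitext(path)
    return base + "_locustfile.py"


def resolve_locustfile(path):
    """Pass a Python locustfile through unchanged; convert YAML/JSON testcases."""
    if path.endswith(".py"):
        return path
    if path.endswith((".yml", ".yaml", ".json")):
        return convert_to_locustfile(path)
    raise ValueError("unsupported file type: %s" % path)
```

Whatever path comes back is then handed to `locust` exactly as if the user had written the locustfile by hand.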
That's not all. With the `--full-speed` argument, you can even start locust with one master and several slaves (defaulting to the number of CPU cores) at once, which means you can leverage all CPU cores of your machine.
```bash
$ locusts -f examples/first-testcase.yml --full-speed
[2017-08-26 23:51:47,071] bogon/INFO/locust.main: Starting web monitor at *:8089
[2017-08-26 23:51:47,075] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,078] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,080] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,083] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,084] bogon/INFO/locust.runners: Client 'bogon_656e0af8e968a8533d379dd252422ad3' reported as ready. Currently 1 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_09f73850252ee4ec739ed77d3c4c6dba' reported as ready. Currently 2 clients ready to swarm.
[2017-08-26 23:51:47,084] bogon/INFO/locust.main: Starting Locust 0.8a2
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_869f7ed671b1a9952b56610f01e2006f' reported as ready. Currently 3 clients ready to swarm.
[2017-08-26 23:51:47,085] bogon/INFO/locust.runners: Client 'bogon_80a804cda36b80fac17b57fd2d5e7cdb' reported as ready. Currently 4 clients ready to swarm.
```
![](images/locusts-full-speed.jpg)
Enjoy!
[Locust]: http://locust.io/


@@ -30,9 +30,9 @@ Before we write testcases, we should know the details of the API. It is a good c
For example, the image below illustrates getting token from the sample service first, and then creating one user successfully.
![](ate-quickstart-http-1.jpg)
![](images/ate-quickstart-http-1.jpg)
![](ate-quickstart-http-2.jpg)
![](images/ate-quickstart-http-2.jpg)
After thorough understanding of the APIs, we can now begin to write testcases.
@@ -316,7 +316,7 @@ Reports generated: /Users/Leo/MyProjects/HttpRunner/reports/quickstart-demo-rev-
Great! The test case runs successfully and generates a `HTML` test report.
![](ate-quickstart-demo-report.jpg)
![](images/ate-quickstart-demo-report.jpg)
## Further more

34
docs/run-testcases.md Normal file

@@ -0,0 +1,34 @@
## Run testcases
`HttpRunner` can run testcases in diverse ways.
You can run a single testset by specifying its file path.
```text
$ httprunner filepath/testcase.yml
```
You can also run several testsets by specifying multiple testset file paths.
```text
$ httprunner filepath1/testcase1.yml filepath2/testcase2.yml
```
If you want to run testsets of a whole project, you can achieve this goal by specifying the project folder path.
```text
$ httprunner testcases_folder_path
```
When you run continuous integration tests or production environment monitoring with `Jenkins`, you may need to send test result notifications. For instance, you can send email via the mailgun service as below.
```text
$ httprunner filepath/testcase.yml --report-name ${BUILD_NUMBER} \
--mailgun-smtp-username "qa@debugtalk.com" \
--mailgun-smtp-password "12345678" \
--email-sender excited@samples.mailgun.org \
--email-recepients ${MAIL_RECEPIENTS} \
--jenkins-job-name ${JOB_NAME} \
--jenkins-job-url ${JOB_URL} \
--jenkins-build-number ${BUILD_NUMBER}
```

182
docs/write-testcases.rst Normal file

@@ -0,0 +1,182 @@
.. default-role:: code
Write testcases
===============
It is recommended to write testcases in `YAML` format.
demo
----
Here is a testset example of a typical scenario: get a `token` at the beginning, then have each subsequent request carry the `token` in its headers.
.. code-block:: yaml

    - config:
        name: "create user testsets."
        variables:
            - user_agent: 'iOS/10.3'
            - device_sn: ${gen_random_string(15)}
            - os_platform: 'ios'
            - app_version: '2.8.6'
        request:
            base_url: http://127.0.0.1:5000
            headers:
                Content-Type: application/json
                device_sn: $device_sn

    - test:
        name: get token
        request:
            url: /api/get-token
            method: POST
            headers:
                user_agent: $user_agent
                device_sn: $device_sn
                os_platform: $os_platform
                app_version: $app_version
            json:
                sign: ${get_sign($user_agent, $device_sn, $os_platform, $app_version)}
        extract:
            - token: content.token
        validate:
            - {"check": "status_code", "comparator": "eq", "expected": 200}
            - {"check": "content.token", "comparator": "len_eq", "expected": 16}

    - test:
        name: create user which does not exist
        request:
            url: /api/users/1000
            method: POST
            headers:
                token: $token
            json:
                name: "user1"
                password: "123456"
        validate:
            - {"check": "status_code", "comparator": "eq", "expected": 201}
            - {"check": "content.success", "comparator": "eq", "expected": true}
Function invocation is supported in `YAML/JSON` format testcases, such as `gen_random_string` and `get_sign` above. This mechanism relies on the `debugtalk.py` hot plugin: we define functions in the `debugtalk.py` file, and they are auto-discovered and invoked at runtime.
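As an illustration, a `debugtalk.py` file for the demo above might look like the following. This is a minimal sketch: the real signing algorithm of the sample service is not shown in this document, so the MD5-over-joined-arguments scheme and the `SECRET_KEY` value are assumptions for illustration only.

.. code-block:: python

    import hashlib
    import random
    import string

    SECRET_KEY = "DebugTalk"  # hypothetical secret, for illustration only

    def gen_random_string(str_len):
        """Generate a random string of the specified length."""
        chars = string.ascii_letters + string.digits
        return "".join(random.choice(chars) for _ in range(int(str_len)))

    def get_sign(*args):
        """Sign the request identity fields (assumed MD5 scheme)."""
        content = "".join(args) + SECRET_KEY
        return hashlib.md5(content.encode("utf-8")).hexdigest()

With this file placed in the project directory, `${gen_random_string(15)}` and `${get_sign(...)}` in the testset resolve to calls of these functions at runtime.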
For detailed rules on writing testcases, you can read the :doc:`quickstart` document.
Comparator
----------
``HttpRunner`` currently supports the following comparators.
+---------------------------+---------------------------+-------------------------+--------------------------+
| comparator | Description | A(check), B(expected) | examples |
+===========================+===========================+=========================+==========================+
| ``eq``, ``==`` | value is equal | A == B | 9 eq 9 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``lt`` | less than | A < B | 7 lt 8 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``le`` | less than or equals | A <= B | 7 le 8, 8 le 8 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``gt`` | greater than | A > B | 8 gt 7 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``ge`` | greater than or equals | A >= B | 8 ge 7, 8 ge 8 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``ne`` | not equals | A != B | 6 ne 9 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``str_eq`` | string equals | str(A) == str(B) | 123 str_eq '123' |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``len_eq``, ``count_eq`` | length or count equals | len(A) == B | | 'abc' len_eq 3 |
| | | | | [1,2] len_eq 2 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``len_gt``, ``count_gt`` | length greater than | len(A) > B | | 'abc' len_gt 2 |
| | | | | [1,2,3] len_gt 2 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``len_ge``, ``count_ge`` | length greater than | len(A) >= B | | 'abc' len_ge 3 |
|                           | or equals                 |                         | | [1,2,3] len_ge 3      |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``len_lt``, ``count_lt`` | length less than | len(A) < B | | 'abc' len_lt 4 |
| | | | | [1,2,3] len_lt 4 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``len_le``, ``count_le`` | length less than | len(A) <= B | | 'abc' len_le 3 |
| | or equals | | | [1,2,3] len_le 3 |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``contains``              | contains                  | B in A                  | | 'abc' contains 'a'    |
|                           |                           |                         | | [1,2,3] contains 1    |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``contained_by`` | contained by | A in B | | 'a' contained_by 'abc' |
| | | | | 1 contained_by [1,2] |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``type`` | A is instance of B | isinstance(A, B) | 123 type 'int' |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``regex`` | regex matches | re.match(B, A) | 'abcdef' regex 'a\w+d' |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``startswith`` | starts with | A.startswith(B) is True | 'abc' startswith 'ab' |
+---------------------------+---------------------------+-------------------------+--------------------------+
| ``endswith`` | ends with | A.endswith(B) is True | 'abc' endswith 'bc' |
+---------------------------+---------------------------+-------------------------+--------------------------+
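To make the table concrete, the following sketch (not ``HttpRunner``'s actual validator code) shows how a few of these comparators map to plain Python checks; the dictionary-dispatch structure is an assumption for illustration.

.. code-block:: python

    import re

    # Minimal comparator dispatch illustrating the semantics in the table above.
    COMPARATORS = {
        "eq": lambda a, b: a == b,
        "len_eq": lambda a, b: len(a) == b,
        "contains": lambda a, b: b in a,
        "contained_by": lambda a, b: a in b,
        "regex": lambda a, b: re.match(b, a) is not None,
        "startswith": lambda a, b: a.startswith(b),
    }

    def validate(check_value, comparator, expected):
        """Raise AssertionError when the check fails, mirroring a validate item."""
        assert COMPARATORS[comparator](check_value, expected), \
            "%r %s %r failed" % (check_value, comparator, expected)

For example, `validate("abc", "len_eq", 3)` passes silently, while `validate(6, "eq", 9)` raises an `AssertionError`.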
Extraction and Validation
-------------------------
Suppose we get the following HTTP response.
.. code-block:: javascript

    // status code: 200

    // response headers
    {
        "Content-Type": "application/json"
    }

    // response body content
    {
        "success": false,
        "person": {
            "name": {
                "first_name": "Leo",
                "last_name": "Lee"
            },
            "age": 29,
            "cities": ["Guangzhou", "Shenzhen"]
        }
    }
In `extract` and `validate`, we can use chained operations to extract data fields from the HTTP response.
For instance, if we want to get `Content-Type` in the response headers, we can specify `headers.content-type`; if we want to get `first_name` in the response content, we can specify `content.person.name.first_name`.
Lists are slightly different, since we use an index to locate a list item. For example, `Guangzhou` in the response content can be specified by `content.person.cities.0`.
.. code-block:: javascript

    // get status code
    status_code

    // get headers field
    headers.content-type

    // get content field
    body.success
    content.success
    text.success

    content.person.name.first_name
    content.person.cities.1
.. code-block:: yaml

    extract:
        - content_type: headers.content-type
        - first_name: content.person.name.first_name
    validate:
        - {"check": "status_code", "comparator": "eq", "expected": 200}
        - {"check": "headers.content-type", "expected": "application/json"}
        - {"check": "headers.content-length", "comparator": "gt", "expected": 40}
        - {"check": "content.success", "comparator": "eq", "expected": true}
        - {"check": "content.token", "comparator": "len_eq", "expected": 16}
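The chained extraction described above can be sketched in a few lines of Python: split the chain on dots, then walk into dicts by key and into lists by index. The helper name `extract_field` is hypothetical; the data comes from the sample response.

.. code-block:: python

    def extract_field(data, chain):
        """Walk a dot-separated chain through nested dicts and lists."""
        for key in chain.split("."):
            if isinstance(data, list):
                data = data[int(key)]  # list items are located by index
            else:
                data = data[key]
        return data

    resp_content = {
        "success": False,
        "person": {
            "name": {"first_name": "Leo", "last_name": "Lee"},
            "age": 29,
            "cities": ["Guangzhou", "Shenzhen"],
        },
    }

    extract_field(resp_content, "person.name.first_name")  # 'Leo'
    extract_field(resp_content, "person.cities.0")         # 'Guangzhou'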