init: move from httprunner/httprunner

This commit is contained in:
lilong.129
2025-02-05 21:32:44 +08:00
commit f4860de5ad
104 changed files with 12602 additions and 0 deletions

46
.gitignore vendored Normal file

@@ -0,0 +1,46 @@
# Binaries for programs and plugins
*.exe
*.exe~
*.dll
*.so
*.dylib
# Test binary, built with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# system or IDE generated files
__debug_bin
.vscode/
.idea/
.DS_Store
*.bak
.commit.txt
# project output files
site/
output/
logs
*.log
*.pcap
.coverage
reports
results
*.xml
htmlcov/
screenshots/
# built plugins
debugtalk.bin
debugtalk.so
# python files
.venv
__pycache__
*.pyc
dist
*.egg-info
.python-version
.pytest_cache

201
LICENSE Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2017 debugtalk
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

0
examples/__init__.py Normal file

0
examples/data/.csv Normal file

30
examples/data/a-b.c/1.yml Normal file

@@ -0,0 +1,30 @@
config:
    name: "request methods testcase with functions"
    variables:
        foo1: config_bar1
        foo2: config_bar2
    base_url: "https://postman-echo.com"
    verify: False

teststeps:
-
    name: get with params
    variables:
        foo1: bar1
        sum_v: "${sum_two(1, 2)}"
    request:
        method: GET
        url: /get
        params:
            foo1: $foo1
            foo2: $foo2
            sum_v: $sum_v
        headers:
            User-Agent: HttpRunner/${get_httprunner_version()}
    extract:
        session_foo2: "body.args.foo2"
    validate:
        - eq: ["status_code", 200]
        - eq: ["body.args.foo1", "bar1"]
        - eq: ["body.args.sum_v", "3"]
        - eq: ["body.args.foo2", "config_bar2"]
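The validators above depend on variable precedence: the step-level `foo1` shadows the config value, while `foo2` falls back to the config. A minimal sketch of that precedence, assuming plain dict-merge semantics (not HttpRunner's actual resolver):

```python
# Variable precedence sketch: step variables shadow config variables.
config_vars = {"foo1": "config_bar1", "foo2": "config_bar2"}
step_vars = {"foo1": "bar1"}

# Later (step-level) definitions win; config values remain as fallbacks.
merged = {**config_vars, **step_vars}

print(merged["foo1"])  # step-level value wins: bar1
print(merged["foo2"])  # config value persists: config_bar2
```

This is why the testcase expects `body.args.foo1` to be `bar1` but `body.args.foo2` to still be `config_bar2`.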


@@ -0,0 +1,26 @@
config:
    name: "reference testcase unittest for abnormal folder path"
    base_url: "https://postman-echo.com"
    verify: False

teststeps:
-
    name: request with functions
    testcase: a-b.c/1.yml
    export:
        - session_foo2
-
    name: post form data
    variables:
        foo1: bar12
    request:
        method: POST
        url: /post
        headers:
            User-Agent: HttpRunner/${get_httprunner_version()}
            Content-Type: "application/x-www-form-urlencoded"
        data: "foo1=$foo1&foo2=$session_foo2"
    validate:
        - eq: ["status_code", 200]
        - eq: ["body.form.foo1", "bar12"]
        - eq: ["body.form.foo2", "config_bar2"]


@@ -0,0 +1,34 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: a-b.c/1.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseT1(HttpRunner):
    config = (
        Config("request methods testcase with functions")
        .variables(**{"foo1": "config_bar1", "foo2": "config_bar2"})
        .base_url("https://postman-echo.com")
        .verify(False)
    )

    teststeps = [
        Step(
            RunRequest("get with params")
            .with_variables(**{"foo1": "bar1", "sum_v": "${sum_two(1, 2)}"})
            .get("/get")
            .with_params(**{"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"})
            .with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
            .extract()
            .with_jmespath("body.args.foo2", "session_foo2")
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal("body.args.foo1", "bar1")
            .assert_equal("body.args.sum_v", "3")
            .assert_equal("body.args.foo2", "config_bar2")
        ),
    ]


if __name__ == "__main__":
    TestCaseT1().test_start()


@@ -0,0 +1,44 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: a-b.c/2 3.yml
from httprunner import HttpRunner, Config, Step, RunRequest
from httprunner import RunTestCase
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from a_b_c.T1_test import TestCaseT1 as T1
class TestCaseT23(HttpRunner):
    config = (
        Config("reference testcase unittest for abnormal folder path")
        .base_url("https://postman-echo.com")
        .verify(False)
    )

    teststeps = [
        Step(RunTestCase("request with functions").call(T1).export(*["session_foo2"])),
        Step(
            RunRequest("post form data")
            .with_variables(**{"foo1": "bar12"})
            .post("/post")
            .with_headers(
                **{
                    "User-Agent": "HttpRunner/${get_httprunner_version()}",
                    "Content-Type": "application/x-www-form-urlencoded",
                }
            )
            .with_data("foo1=$foo1&foo2=$session_foo2")
            .validate()
            .assert_equal("status_code", 200)
            .assert_equal("body.form.foo1", "bar12")
            .assert_equal("body.form.foo2", "config_bar2")
        ),
    ]


if __name__ == "__main__":
    TestCaseT23().test_start()


@@ -0,0 +1 @@
# NOTICE: Generated By HttpRunner. DO NOT EDIT!


@@ -0,0 +1,11 @@
curl httpbin.org
curl "https://httpbin.org/get?key1=value1&key2=value2"
curl -H "Content-Type: application/json" -H "Authorization: Bearer b7d03a6947b217efb6f3ec3bd3504582" -d '{"type":"A","name":"www","data":"162.10.66.0","priority":null,"port":null,"weight":null}' "https://httpbin.org/post"
curl -F "dummyName=dummyFile" -F file1=@file1.txt -F file2=@file2.txt https://httpbin.org/post
curl https://httpbin.org/post -d 'shipment[to_address][id]=adr_HrBKVA85' -d 'shipment[from_address][id]=adr_VtuTOj7o' -d 'shipment[parcel][id]=prcl_WDv2VzHp' -d 'shipment[is_return]=true' -d 'shipment[customs_info][id]=cstinfo_bl5sE20Y'
curl https://httpbin.org/post -H "Content-Type: application/x-www-form-urlencoded" --data "key1=value+1&key2=value%3A2"
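The last command posts URL-encoded data in which `+` stands for a space and `%3A` for a colon. Decoding the body with the standard library shows what the server actually receives:

```python
from urllib.parse import parse_qs

# Decode the form body from the last curl command above.
body = "key1=value+1&key2=value%3A2"
decoded = parse_qs(body)

print(decoded)  # '+' -> ' ', '%3A' -> ':'
```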


@@ -0,0 +1,13 @@
from httprunner import __version__
def get_httprunner_version():
    return __version__


def sum_two(m, n):
    return m + n


def get_variables():
    return {"foo1": "session_bar1"}
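Expressions like `${sum_two(1, 2)}` in the YAML testcases refer to these debugtalk functions. A rough sketch of how such a call expression could be resolved against a function map (a hypothetical helper handling integer arguments only, not HttpRunner's actual parser):

```python
import re


def sum_two(m, n):
    return m + n


# Map of function names exposed to ${...} expressions.
FUNCTIONS = {"sum_two": sum_two}


def resolve(expr):
    # Match ${func(arg1, arg2)} and call the mapped function.
    match = re.fullmatch(r"\$\{(\w+)\((.*)\)\}", expr)
    name, raw_args = match.group(1), match.group(2)
    args = [int(a.strip()) for a in raw_args.split(",") if a.strip()]
    return FUNCTIONS[name](*args)


print(resolve("${sum_two(1, 2)}"))  # -> 3
```

This matches the testcase expectation that `sum_v` evaluates to `3`.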

356
examples/data/har/demo.har Normal file

@@ -0,0 +1,356 @@
{
"log": {
"version": "1.2",
"creator": {
"name": "Charles Proxy",
"version": "4.6.1"
},
"entries": [
{
"startedDateTime": "2021-10-15T20:29:14.396+08:00",
"time": 1528,
"request": {
"method": "GET",
"url": "https://postman-echo.com/get?foo1=HDnY8&foo2=34.5",
"httpVersion": "HTTP/1.1",
"cookies": [],
"headers": [
{
"name": "Host",
"value": "postman-echo.com"
},
{
"name": "User-Agent",
"value": "HttpRunnerPlus"
},
{
"name": "Accept-Encoding",
"value": "gzip"
}
],
"queryString": [
{
"name": "foo1",
"value": "HDnY8"
},
{
"name": "foo2",
"value": "34.5"
}
],
"headersSize": 113,
"bodySize": 0
},
"response": {
"_charlesStatus": "COMPLETE",
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [
{
"name": "sails.sid",
"value": "s%3Az_LpglkKxTvJ_eHVUH6V67drKp0AGWW-.PidabaXOnatLRP47hVyqqepl6BdrpEQzRlJQXtbIiwk",
"path": "/",
"domain": null,
"expires": null,
"httpOnly": true,
"secure": false,
"comment": null,
"_maxAge": null
}
],
"headers": [
{
"name": "Date",
"value": "Fri, 15 Oct 2021 12:29:15 GMT"
},
{
"name": "Content-Type",
"value": "application/json; charset=utf-8"
},
{
"name": "Content-Length",
"value": "300"
},
{
"name": "ETag",
"value": "W/\"12c-1pyB4v4mv3hdBoU+8cUmx4p37qI\""
},
{
"name": "Vary",
"value": "Accept-Encoding"
},
{
"name": "set-cookie",
"value": "sails.sid=s%3Az_LpglkKxTvJ_eHVUH6V67drKp0AGWW-.PidabaXOnatLRP47hVyqqepl6BdrpEQzRlJQXtbIiwk; Path=/; HttpOnly"
},
{
"name": "Connection",
"value": "keep-alive"
}
],
"content": {
"size": 300,
"mimeType": "application/json; charset=utf-8",
"text": "eyJhcmdzIjp7ImZvbzEiOiJIRG5ZOCIsImZvbzIiOiIzNC41In0sImhlYWRlcnMiOnsieC1mb3J3YXJkZWQtcHJvdG8iOiJodHRwcyIsIngtZm9yd2FyZGVkLXBvcnQiOiI0NDMiLCJob3N0IjoicG9zdG1hbi1lY2hvLmNvbSIsIngtYW16bi10cmFjZS1pZCI6IlJvb3Q9MS02MTY5NzQxYi01YjgyNTRjZTZjZThlNTU2NTRiNzc3MmQiLCJ1c2VyLWFnZW50IjoiSHR0cEJvb21lciIsImFjY2VwdC1lbmNvZGluZyI6Imd6aXAifSwidXJsIjoiaHR0cHM6Ly9wb3N0bWFuLWVjaG8uY29tL2dldD9mb28xPUhEblk4JmZvbzI9MzQuNSJ9",
"encoding": "base64"
},
"redirectURL": null,
"headersSize": 0,
"bodySize": 300
},
"serverIPAddress": "44.193.31.23",
"cache": {},
"timings": {
"dns": 105,
"connect": 1108,
"ssl": 721,
"send": 1,
"wait": 312,
"receive": 2
}
},
{
"startedDateTime": "2021-10-15T20:29:16.120+08:00",
"time": 306,
"request": {
"method": "POST",
"url": "https://postman-echo.com/post",
"httpVersion": "HTTP/1.1",
"cookies": [
{
"name": "sails.sid",
"value": "s%3Az_LpglkKxTvJ_eHVUH6V67drKp0AGWW-.PidabaXOnatLRP47hVyqqepl6BdrpEQzRlJQXtbIiwk"
}
],
"headers": [
{
"name": "Host",
"value": "postman-echo.com"
},
{
"name": "User-Agent",
"value": "Go-http-client/1.1"
},
{
"name": "Content-Length",
"value": "28"
},
{
"name": "Content-Type",
"value": "application/json; charset=UTF-8"
},
{
"name": "Cookie",
"value": "sails.sid=s%3Az_LpglkKxTvJ_eHVUH6V67drKp0AGWW-.PidabaXOnatLRP47hVyqqepl6BdrpEQzRlJQXtbIiwk"
},
{
"name": "Accept-Encoding",
"value": "gzip"
}
],
"queryString": [],
"postData": {
"mimeType": "application/json; charset=UTF-8",
"text": "{\"foo1\":\"HDnY8\",\"foo2\":12.3}"
},
"headersSize": 269,
"bodySize": 28
},
"response": {
"_charlesStatus": "COMPLETE",
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [
{
"name": "sails.sid",
"value": "s%3AS5e7w0zQ0xAsCwh9L8T6R7QLYCO7_gtD.r8%2B2w9IWqEIfuVkrZjnxzm2xADIk34zKAWXRPapr%2FAw",
"path": "/",
"domain": null,
"expires": null,
"httpOnly": true,
"secure": false,
"comment": null,
"_maxAge": null
}
],
"headers": [
{
"name": "Date",
"value": "Fri, 15 Oct 2021 12:29:16 GMT"
},
{
"name": "Content-Type",
"value": "application/json; charset=utf-8"
},
{
"name": "Content-Length",
"value": "526"
},
{
"name": "ETag",
"value": "W/\"20e-aXqJ0H6Q30sU41c/D7asB+yXWeQ\""
},
{
"name": "Vary",
"value": "Accept-Encoding"
},
{
"name": "set-cookie",
"value": "sails.sid=s%3AS5e7w0zQ0xAsCwh9L8T6R7QLYCO7_gtD.r8%2B2w9IWqEIfuVkrZjnxzm2xADIk34zKAWXRPapr%2FAw; Path=/; HttpOnly"
},
{
"name": "Connection",
"value": "keep-alive"
}
],
"content": {
"size": 526,
"mimeType": "application/json; charset=utf-8",
"text": "eyJhcmdzIjp7fSwiZGF0YSI6eyJmb28xIjoiSERuWTgiLCJmb28yIjoxMi4zfSwiZmlsZXMiOnt9LCJmb3JtIjp7fSwiaGVhZGVycyI6eyJ4LWZvcndhcmRlZC1wcm90byI6Imh0dHBzIiwieC1mb3J3YXJkZWQtcG9ydCI6IjQ0MyIsImhvc3QiOiJwb3N0bWFuLWVjaG8uY29tIiwieC1hbXpuLXRyYWNlLWlkIjoiUm9vdD0xLTYxNjk3NDFjLTIxN2RiMGI3MWFkYjgwYmQ3ODUxOTI2OCIsImNvbnRlbnQtbGVuZ3RoIjoiMjgiLCJ1c2VyLWFnZW50IjoiR28taHR0cC1jbGllbnQvMS4xIiwiY29udGVudC10eXBlIjoiYXBwbGljYXRpb24vanNvbjsgY2hhcnNldD1VVEYtOCIsImNvb2tpZSI6InNhaWxzLnNpZD1zJTNBel9McGdsa0t4VHZKX2VIVlVINlY2N2RyS3AwQUdXVy0uUGlkYWJhWE9uYXRMUlA0N2hWeXFxZXBsNkJkcnBFUXpSbEpRWHRiSWl3ayIsImFjY2VwdC1lbmNvZGluZyI6Imd6aXAifSwianNvbiI6eyJmb28xIjoiSERuWTgiLCJmb28yIjoxMi4zfSwidXJsIjoiaHR0cHM6Ly9wb3N0bWFuLWVjaG8uY29tL3Bvc3QifQ==",
"encoding": "base64"
},
"redirectURL": null,
"headersSize": 0,
"bodySize": 526
},
"serverIPAddress": "44.193.31.23",
"cache": {},
"timings": {
"dns": -1,
"connect": -1,
"ssl": -1,
"send": 1,
"wait": 304,
"receive": 1
}
},
{
"startedDateTime": "2021-10-15T20:29:16.427+08:00",
"time": 305,
"request": {
"method": "POST",
"url": "https://postman-echo.com/post",
"httpVersion": "HTTP/1.1",
"cookies": [
{
"name": "sails.sid",
"value": "s%3AS5e7w0zQ0xAsCwh9L8T6R7QLYCO7_gtD.r8%2B2w9IWqEIfuVkrZjnxzm2xADIk34zKAWXRPapr%2FAw"
}
],
"headers": [
{
"name": "Host",
"value": "postman-echo.com"
},
{
"name": "User-Agent",
"value": "Go-http-client/1.1"
},
{
"name": "Content-Length",
"value": "20"
},
{
"name": "Content-Type",
"value": "application/x-www-form-urlencoded; charset=UTF-8"
},
{
"name": "Cookie",
"value": "sails.sid=s%3AS5e7w0zQ0xAsCwh9L8T6R7QLYCO7_gtD.r8%2B2w9IWqEIfuVkrZjnxzm2xADIk34zKAWXRPapr%2FAw"
},
{
"name": "Accept-Encoding",
"value": "gzip"
}
],
"queryString": [],
"postData": {
"mimeType": "application/x-www-form-urlencoded; charset=UTF-8",
"params": [
{
"name": "foo1",
"value": "HDnY8"
},
{
"name": "foo2",
"value": "12.3"
}
]
},
"headersSize": 290,
"bodySize": 20
},
"response": {
"_charlesStatus": "COMPLETE",
"status": 200,
"statusText": "OK",
"httpVersion": "HTTP/1.1",
"cookies": [
{
"name": "sails.sid",
"value": "s%3AMp2gGgeCCDM4sRS_MfL1q-hAkL3bAk84.9XT7TTW8QzueQqtQ6bQM%2BgHqiUBbkJSfgM5CbfhFreQ",
"path": "/",
"domain": null,
"expires": null,
"httpOnly": true,
"secure": false,
"comment": null,
"_maxAge": null
}
],
"headers": [
{
"name": "Date",
"value": "Fri, 15 Oct 2021 12:29:16 GMT"
},
{
"name": "Content-Type",
"value": "application/json; charset=utf-8"
},
{
"name": "Content-Length",
"value": "551"
},
{
"name": "ETag",
"value": "W/\"227-micuvGYwtEZN542D1sTL0hAZaRs\""
},
{
"name": "Vary",
"value": "Accept-Encoding"
},
{
"name": "set-cookie",
"value": "sails.sid=s%3AMp2gGgeCCDM4sRS_MfL1q-hAkL3bAk84.9XT7TTW8QzueQqtQ6bQM%2BgHqiUBbkJSfgM5CbfhFreQ; Path=/; HttpOnly"
},
{
"name": "Connection",
"value": "keep-alive"
}
],
"content": {
"size": 551,
"mimeType": "application/json; charset=utf-8",
"text": "eyJhcmdzIjp7fSwiZGF0YSI6IiIsImZpbGVzIjp7fSwiZm9ybSI6eyJmb28xIjoiSERuWTgiLCJmb28yIjoiMTIuMyJ9LCJoZWFkZXJzIjp7IngtZm9yd2FyZGVkLXByb3RvIjoiaHR0cHMiLCJ4LWZvcndhcmRlZC1wb3J0IjoiNDQzIiwiaG9zdCI6InBvc3RtYW4tZWNoby5jb20iLCJ4LWFtem4tdHJhY2UtaWQiOiJSb290PTEtNjE2OTc0MWMtNWI5ZDEyMWI2N2FlZTI0MTUyMmQzMjE2IiwiY29udGVudC1sZW5ndGgiOiIyMCIsInVzZXItYWdlbnQiOiJHby1odHRwLWNsaWVudC8xLjEiLCJjb250ZW50LXR5cGUiOiJhcHBsaWNhdGlvbi94LXd3dy1mb3JtLXVybGVuY29kZWQ7IGNoYXJzZXQ9VVRGLTgiLCJjb29raWUiOiJzYWlscy5zaWQ9cyUzQVM1ZTd3MHpRMHhBc0N3aDlMOFQ2UjdRTFlDTzdfZ3RELnI4JTJCMnc5SVdxRUlmdVZrclpqbnh6bTJ4QURJazM0ektBV1hSUGFwciUyRkF3IiwiYWNjZXB0LWVuY29kaW5nIjoiZ3ppcCJ9LCJqc29uIjp7ImZvbzEiOiJIRG5ZOCIsImZvbzIiOiIxMi4zIn0sInVybCI6Imh0dHBzOi8vcG9zdG1hbi1lY2hvLmNvbS9wb3N0In0=",
"encoding": "base64"
},
"redirectURL": null,
"headersSize": 0,
"bodySize": 551
},
"serverIPAddress": "44.193.31.23",
"cache": {},
"timings": {
"dns": -1,
"connect": -1,
"ssl": -1,
"send": 0,
"wait": 303,
"receive": 2
}
}
]
}
}
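Response bodies in this HAR are base64-encoded in each entry's `content.text` field (note `"encoding": "base64"`). Decoding one back to JSON takes only the standard library; the content dict below is a shortened stand-in for an entry from the file:

```python
import base64
import json

# A HAR "content" object with a base64-encoded JSON body (shortened example).
content = {
    "encoding": "base64",
    "text": base64.b64encode(b'{"args": {"foo1": "HDnY8"}}').decode(),
}

body = content["text"]
if content.get("encoding") == "base64":
    body = json.loads(base64.b64decode(content["text"]))

print(body["args"]["foo1"])  # -> HDnY8
```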


@@ -0,0 +1 @@
# NOTICE: Generated By HttpRunner. DO NOT EDIT!


@@ -0,0 +1 @@
HttpRunner is an open source API testing tool that supports HTTP(S)/HTTP2/WebSocket/RPC network protocols, covering API testing, performance testing and digital experience monitoring (DEM) test types. Enjoy!

Binary file not shown (image, 316 KiB).


@@ -0,0 +1,498 @@
{
"info": {
"_postman_id": "0417a445-b206-4ea2-b1d2-5441afd6c6b9",
"name": "postman collection demo",
"schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
},
"item": [
{
"name": "folder1",
"item": [
{
"name": "folder2",
"item": [
{
"name": "Get with params",
"request": {
"method": "GET",
"header": [],
"url": {
"raw": "https://postman-echo.com/:path?k1=v1&k2=v2",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"query": [
{
"key": "k1",
"value": "v1"
},
{
"key": "k2",
"value": "v2"
},
{
"key": "k3",
"value": "v3",
"disabled": true
}
],
"variable": [
{
"key": "path",
"value": "get"
}
]
}
},
"response": [
{
"name": "Get with params case1",
"originalRequest": {
"method": "GET",
"header": [],
"url": {
"raw": "https://postman-echo.com/:path?k1=v1&k2=v2",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"query": [
{
"key": "k1",
"value": "v1"
},
{
"key": "k2",
"value": "v2"
},
{
"key": "k3",
"value": "v3",
"disabled": true
}
],
"variable": [
{
"key": "path",
"value": "get"
}
]
}
},
"status": "OK",
"code": 200,
"_postman_previewlanguage": "json",
"header": [
{
"key": "Date",
"value": "Mon, 16 May 2022 12:12:28 GMT"
},
{
"key": "Content-Type",
"value": "application/json; charset=utf-8"
},
{
"key": "Content-Length",
"value": "508"
},
{
"key": "Connection",
"value": "keep-alive"
},
{
"key": "ETag",
"value": "W/\"1fc-x4EIPFQzoLX0HenCFPx6HNfG0lc\""
},
{
"key": "Vary",
"value": "Accept-Encoding"
},
{
"key": "set-cookie",
"value": "sails.sid=s%3AX2aa_Z7gbcUqIWAjlBkytBRmQ4WCvc3D.pX9Qxh8aO9Ict0BL4CrRhdDJmz81UVmwFsV5Nx30Ils; Path=/; HttpOnly"
}
],
"cookie": [],
"body": "{\n \"args\": {\n \"k1\": \"v1\",\n \"k2\": \"v2\"\n },\n \"headers\": {\n \"x-forwarded-proto\": \"https\",\n \"x-forwarded-port\": \"443\",\n \"host\": \"postman-echo.com\",\n \"user-agent\": \"PostmanRuntime/7.29.0\",\n \"accept\": \"*/*\",\n \"accept-encoding\": \"gzip, deflate, br\",\n \"cookie\": \"Cookie_1=c1; Cookie_2=c2; sails.sid=s%3AGX6aS9b_phvUSUk66w7ZBgWuOPI7IIKT.ayEGTaW4U35eAWyPz%2Fh6Q74DonNcbqw3H5Q5Zv%2BfKMY\"\n },\n \"url\": \"https://postman-echo.com/get?k1=v1&k2=v2\"\n}"
},
{
"name": "Get with params case2",
"originalRequest": {
"method": "GET",
"header": [],
"url": {
"raw": "https://postman-echo.com/:path?k1=v1&k3=v3",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"query": [
{
"key": "k1",
"value": "v1"
},
{
"key": "k2",
"value": "v2",
"disabled": true
},
{
"key": "k3",
"value": "v3"
}
],
"variable": [
{
"key": "path",
"value": "get"
}
]
}
},
"status": "OK",
"code": 200,
"_postman_previewlanguage": "json",
"header": [
{
"key": "Date",
"value": "Mon, 16 May 2022 12:14:04 GMT"
},
{
"key": "Content-Type",
"value": "application/json; charset=utf-8"
},
{
"key": "Content-Length",
"value": "504"
},
{
"key": "Connection",
"value": "keep-alive"
},
{
"key": "ETag",
"value": "W/\"1f8-tMaKs4xmwr+3su3I8mcgR0p+ucw\""
},
{
"key": "Vary",
"value": "Accept-Encoding"
},
{
"key": "set-cookie",
"value": "sails.sid=s%3AMNuX_i0KgaP_KuuMpYB8RtCNipCGJWVw.4ETfPHxE81Omqb6Yli%2FezUU8CXyYBcN3%2Bxkx5htwh8Y; Path=/; HttpOnly"
}
],
"cookie": [],
"body": "{\n \"args\": {\n \"k1\": \"v1\",\n \"k3\": \"v3\"\n },\n \"headers\": {\n \"x-forwarded-proto\": \"https\",\n \"x-forwarded-port\": \"443\",\n \"host\": \"postman-echo.com\",\n \"user-agent\": \"PostmanRuntime/7.29.0\",\n \"accept\": \"*/*\",\n \"accept-encoding\": \"gzip, deflate, br\",\n \"cookie\": \"Cookie_1=c1; Cookie_2=c2; sails.sid=s%3AX2aa_Z7gbcUqIWAjlBkytBRmQ4WCvc3D.pX9Qxh8aO9Ict0BL4CrRhdDJmz81UVmwFsV5Nx30Ils\"\n },\n \"url\": \"https://postman-echo.com/get?k1=v1&k3=v3\"\n}"
}
]
}
]
}
]
},
{
"name": "folder3",
"item": [
{
"name": "Post form-data",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "formdata",
"formdata": [
{
"key": "k1",
"value": "v1",
"type": "text"
},
{
"key": "k2",
"value": "v2",
"type": "text"
},
{
"key": "k3",
"value": "v3",
"type": "text",
"disabled": true
},
{
"key": "intro_key",
"type": "file",
"src": "intro.txt"
},
{
"key": "logo_key",
"type": "file",
"src": "logo.jpeg"
}
]
},
"url": {
"raw": "https://postman-echo.com/:path",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"variable": [
{
"key": "path",
"value": "post"
}
]
}
},
"response": []
},
{
"name": "Post x-www-form-urlencoded",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "urlencoded",
"urlencoded": [
{
"key": "k1",
"value": "v1",
"type": "text"
},
{
"key": "k2",
"value": "v2",
"type": "text"
},
{
"key": "k3",
"value": "v3",
"type": "text",
"disabled": true
}
]
},
"url": {
"raw": "https://postman-echo.com/:path",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"variable": [
{
"key": "path",
"value": "post"
}
]
}
},
"response": []
},
{
"name": "Post raw json",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "raw",
"raw": "{\n \"k1\": \"v1\",\n \"k2\": \"v2\"\n}",
"options": {
"raw": {
"language": "json"
}
}
},
"url": {
"raw": "https://postman-echo.com/:path",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"variable": [
{
"key": "path",
"value": "post"
}
]
}
},
"response": []
},
{
"name": "Post raw text",
"request": {
"method": "POST",
"header": [],
"body": {
"mode": "raw",
"raw": "have a nice day",
"options": {
"raw": {
"language": "text"
}
}
},
"url": {
"raw": "https://postman-echo.com/:path",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"variable": [
{
"key": "path",
"value": "post"
}
]
}
},
"response": []
}
]
},
{
"name": "Get request headers",
"request": {
"method": "GET",
"header": [
{
"key": "User-Agent",
"value": "HttpRunner",
"type": "text"
},
{
"key": "User-Name",
"value": "bbx",
"type": "text",
"disabled": true
},
{
"key": "Connection",
"value": "close",
"type": "text"
}
],
"url": {
"raw": "https://postman-echo.com/:path",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"variable": [
{
"key": "path",
"value": "headers"
}
]
}
},
"response": [
{
"name": "Get request headers case1",
"originalRequest": {
"method": "GET",
"header": [
{
"key": "User-Agent",
"value": "HttpRunner",
"type": "text"
},
{
"key": "User-Name",
"value": "bbx",
"type": "text",
"disabled": true
},
{
"key": "Cookie",
"value": "Cookie_1=c1; Cookie_2=c2; sails.sid=s%3AGX6aS9b_phvUSUk66w7ZBgWuOPI7IIKT.ayEGTaW4U35eAWyPz%2Fh6Q74DonNcbqw3H5Q5Zv%2BfKMY",
"type": "text"
}
],
"url": {
"raw": "https://postman-echo.com/:path",
"protocol": "https",
"host": [
"postman-echo",
"com"
],
"path": [
":path"
],
"variable": [
{
"key": "path",
"value": "headers"
}
]
}
},
"status": "OK",
"code": 200,
"_postman_previewlanguage": "json",
"header": [
{
"key": "Date",
"value": "Mon, 16 May 2022 12:14:25 GMT"
},
{
"key": "Content-Type",
"value": "application/json; charset=utf-8"
},
{
"key": "Content-Length",
"value": "541"
},
{
"key": "Connection",
"value": "keep-alive"
},
{
"key": "ETag",
"value": "W/\"21d-ld5UvFTaRM6lihVnvCj6mZm5Of0\""
},
{
"key": "Vary",
"value": "Accept-Encoding"
}
],
"cookie": [],
"body": "{\n \"headers\": {\n \"x-forwarded-proto\": \"https\",\n \"x-forwarded-port\": \"443\",\n \"host\": \"postman-echo.com\",\n \"user-agent\": \"HttpRunner\",\n \"cookie\": \"Cookie_1=c1; Cookie_2=c2; sails.sid=s%3AGX6aS9b_phvUSUk66w7ZBgWuOPI7IIKT.ayEGTaW4U35eAWyPz%2Fh6Q74DonNcbqw3H5Q5Zv%2BfKMY\",\n \"accept\": \"*/*\",\n \"accept-encoding\": \"gzip, deflate, br\"\n }\n}"
}
]
}
]
}
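Note the `"disabled": true` flags on some query params, form fields, and headers in this collection: an importer is expected to skip those entries. A small sketch of that filtering, assuming the collection JSON is already loaded as a dict:

```python
# Skip entries marked "disabled": true, as a Postman importer would.
query = [
    {"key": "k1", "value": "v1"},
    {"key": "k2", "value": "v2"},
    {"key": "k3", "value": "v3", "disabled": True},
]

params = {q["key"]: q["value"] for q in query if not q.get("disabled")}
print(params)  # k3 is dropped
```

This mirrors the recorded responses above, where the disabled `k3` never appears in `body.args`.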


@@ -0,0 +1,4 @@
headers:
    Content-Type: "application/x-www-form-urlencoded"
cookies:
    UserName: "debugtalk"


@@ -0,0 +1,5 @@
override: true
headers:
    Content-Type: "application/x-www-form-urlencoded"
cookies:
    UserName: "debugtalk"

BIN
examples/data/sqlite.db Normal file

Binary file not shown.


@@ -0,0 +1 @@
# NOTICE: Generated By HttpRunner. DO NOT EDIT!


@@ -0,0 +1,4 @@
username,password
test1,111111
test2,222222
test3,333333
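A CSV like this drives data-parameterized runs, one testcase iteration per row. Reading it into per-row variable dicts needs only the standard library (the data is inlined here for the sketch):

```python
import csv
import io

# The account CSV above, inlined for the sketch.
raw = "username,password\ntest1,111111\ntest2,222222\ntest3,333333\n"
rows = list(csv.DictReader(io.StringIO(raw)))

print(len(rows))  # one parameter set per data row -> 3
print(rows[0])    # first parameter set
```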


@@ -0,0 +1,89 @@
config:
    name: basic test with httpbin
    base_url: ${get_httpbin_server()}

teststeps:
-
    name: headers
    request:
        url: /headers
        method: GET
    validate:
        - eq: ["status_code", 200]
        - eq: [body.headers.Host, "127.0.0.1"]
-
    name: user-agent
    request:
        url: /user-agent
        method: GET
    validate:
        - eq: ["status_code", 200]
        - startswith: [body."user-agent", "python-requests"]
-
    name: get without params
    request:
        url: /get
        method: GET
    validate:
        - eq: ["status_code", 200]
        - eq: [body.args, {}]
-
    name: get with params in url
    request:
        url: /get?a=1&b=2
        method: GET
    validate:
        - eq: ["status_code", 200]
        - eq: [body.args, {'a': '1', 'b': '2'}]
-
    name: get with params in params field
    request:
        url: /get
        params:
            a: 1
            b: 2
        method: GET
    validate:
        - eq: ["status_code", 200]
        - eq: [body.args, {'a': '1', 'b': '2'}]
-
    name: set cookie
    request:
        url: /cookies/set?name=value
        method: GET
    validate:
        - eq: ["status_code", 200]
        - eq: [body.cookies.name, "value"]
-
    name: extract cookie
    request:
        url: /cookies
        method: GET
    validate:
        - eq: ["status_code", 200]
        - eq: [body.cookies.name, "value"]
-
    name: post data
    request:
        url: /post
        method: POST
        headers:
            Content-Type: application/json
        data: abc
    validate:
        - eq: ["status_code", 200]
-
    name: validate body length
    request:
        url: /spec.json
        method: GET
    validate:
        - len_eq: ["body", 9]


@@ -0,0 +1,79 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: basic.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseBasic(HttpRunner):
config = Config("basic test with httpbin").base_url("${get_httpbin_server()}")
teststeps = [
Step(
RunRequest("headers")
.get("/headers")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.headers.Host", "127.0.0.1")
),
Step(
RunRequest("user-agent")
.get("/user-agent")
.validate()
.assert_equal("status_code", 200)
.assert_startswith('body."user-agent"', "python-requests")
),
Step(
RunRequest("get without params")
.get("/get")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args", {})
),
Step(
RunRequest("get with params in url")
.get("/get?a=1&b=2")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args", {"a": "1", "b": "2"})
),
Step(
RunRequest("get with params in params field")
.get("/get")
.with_params(**{"a": 1, "b": 2})
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args", {"a": "1", "b": "2"})
),
Step(
RunRequest("set cookie")
.get("/cookies/set?name=value")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.cookies.name", "value")
),
Step(
RunRequest("extract cookie")
.get("/cookies")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.cookies.name", "value")
),
Step(
RunRequest("post data")
.post("/post")
.with_headers(**{"Content-Type": "application/json"})
.with_data("abc")
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("validate body length")
.get("/spec.json")
.validate()
.assert_length_equal("body", 9)
),
]
if __name__ == "__main__":
TestCaseBasic().test_start()


@@ -0,0 +1,148 @@
import os
import random
import string
import time
import uuid
from loguru import logger
from httprunner.utils import HTTP_BIN_URL
def get_httpbin_server():
return HTTP_BIN_URL
def setup_testcase(variables):
logger.info(f"setup_testcase, variables: {variables}")
variables["request_id_prefix"] = str(int(time.time()))
def teardown_testcase():
logger.info("teardown_testcase.")
def setup_teststep(request, variables):
logger.info(f"setup_teststep, request: {request}, variables: {variables}")
request.setdefault("headers", {})
request_id_prefix = variables["request_id_prefix"]
request["headers"]["HRUN-Request-ID"] = request_id_prefix + "-" + str(uuid.uuid4())
def teardown_teststep(response):
logger.info(f"teardown_teststep, response status code: {response.status_code}")
def sum_two(m, n):
return m + n
def sum_status_code(status_code, expect_sum):
"""sum status code digits
e.g. 400 => 4, 201 => 3
"""
sum_value = 0
for digit in str(status_code):
sum_value += int(digit)
assert sum_value == expect_sum
def is_status_code_200(status_code):
return status_code == 200
os.environ["TEST_ENV"] = "PRODUCTION"
def skip_test_in_production_env():
"""skip this test in production environment"""
return os.environ["TEST_ENV"] == "PRODUCTION"
def get_user_agent():
return ["iOS/10.1", "iOS/10.2"]
def gen_app_version():
return [{"app_version": "2.8.5"}, {"app_version": "2.8.6"}]
def get_account():
return [
{"username": "user1", "password": "111111"},
{"username": "user2", "password": "222222"},
]
def get_account_in_tuple():
return [("user1", "111111"), ("user2", "222222")]
def gen_random_string(str_len):
random_char_list = []
for _ in range(str_len):
random_char = random.choice(string.ascii_letters + string.digits)
random_char_list.append(random_char)
random_string = "".join(random_char_list)
return random_string
def setup_hook_add_kwargs(request):
request["key"] = "value"
def setup_hook_remove_kwargs(request):
request.pop("key")
def teardown_hook_sleep_N_secs(response, n_secs):
"""sleep n seconds after request"""
if response.status_code == 200:
time.sleep(0.1)
else:
time.sleep(n_secs)
def hook_print(msg):
print(msg)
def modify_request_json(request, os_platform):
request["json"]["os_platform"] = os_platform
def setup_hook_httpntlmauth(request):
if "httpntlmauth" in request:
from requests_ntlm import HttpNtlmAuth
auth_account = request.pop("httpntlmauth")
request["auth"] = HttpNtlmAuth(
auth_account["username"], auth_account["password"]
)
def alter_response(response):
response.status_code = 500
response.headers["Content-Type"] = "html/text"
response.body["headers"]["Host"] = "127.0.0.1:8888"
response.new_attribute = "new_attribute_value"
response.new_attribute_dict = {"key": 123}
def alter_response_302(response):
response.status_code = 500
response.headers["Content-Type"] = "html/text"
response.text = "abcdef"
response.new_attribute = "new_attribute_value"
response.new_attribute_dict = {"key": 123}
def alter_response_error(response):
# NameError
not_defined_variable
def gen_variables():
return {"var_a": 1, "var_b": 2}
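The hook functions in this debugtalk.py receive mutable runner state: `setup_teststep`, for instance, mutates the outgoing `request` dict in place to stamp a per-step request ID derived from the `request_id_prefix` set in `setup_testcase`. A self-contained sketch of that call pattern (plain dicts, no httprunner runtime involved):

```python
import time
import uuid

def setup_teststep(request, variables):
    # Mirror of the hook above: mutate the outgoing request dict in place
    request.setdefault("headers", {})
    prefix = variables["request_id_prefix"]
    request["headers"]["HRUN-Request-ID"] = prefix + "-" + str(uuid.uuid4())

variables = {"request_id_prefix": str(int(time.time()))}
request = {"method": "GET", "url": "/headers"}
setup_teststep(request, variables)
print(request["headers"]["HRUN-Request-ID"])
```

Because the hook mutates its argument rather than returning a value, the runner sees the injected header on the very dict it is about to send.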


@@ -0,0 +1,36 @@
config:
name: basic test with httpbin
base_url: ${get_httpbin_server()}
setup_hooks:
- ${hook_print(setup)}
teardown_hooks:
- ${hook_print(teardown)}
teststeps:
-
name: headers
variables:
a: 123
request:
url: /headers
method: GET
setup_hooks:
- ${setup_hook_add_kwargs($request)}
- ${setup_hook_remove_kwargs($request)}
teardown_hooks:
- ${teardown_hook_sleep_N_secs($response, 1)}
validate:
- eq: ["status_code", 200]
- contained_by: [body.headers.Host, "${get_httpbin_server()}"]
-
name: alter response
request:
url: /headers
method: GET
teardown_hooks:
- ${alter_response($response)}
validate:
- eq: ["status_code", 500]
- eq: [headers."Content-Type", "html/text"]
- eq: [body.headers.Host, "127.0.0.1:8888"]


@@ -0,0 +1,35 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: hooks.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseHooks(HttpRunner):
config = Config("basic test with httpbin").base_url("${get_httpbin_server()}")
teststeps = [
Step(
RunRequest("headers")
.with_variables(**{"a": 123})
.setup_hook("${setup_hook_add_kwargs($request)}")
.setup_hook("${setup_hook_remove_kwargs($request)}")
.get("/headers")
.teardown_hook("${teardown_hook_sleep_N_secs($response, 1)}")
.validate()
.assert_equal("status_code", 200)
.assert_contained_by("body.headers.Host", "${get_httpbin_server()}")
),
Step(
RunRequest("alter response")
.get("/headers")
.teardown_hook("${alter_response($response)}")
.validate()
.assert_equal("status_code", 500)
.assert_equal('headers."Content-Type"', "html/text")
.assert_equal("body.headers.Host", "127.0.0.1:8888")
),
]
if __name__ == "__main__":
TestCaseHooks().test_start()


@@ -0,0 +1,37 @@
config:
name: load images
base_url: ${get_httpbin_server()}
teststeps:
-
name: get png image
request:
url: /image/png
method: GET
validate:
- eq: ["status_code", 200]
-
name: get jpeg image
request:
url: /image/jpeg
method: GET
validate:
- eq: ["status_code", 200]
-
name: get webp image
request:
url: /image/webp
method: GET
validate:
- eq: ["status_code", 200]
-
name: get svg image
request:
url: /image/svg
method: GET
validate:
- eq: ["status_code", 200]


@@ -0,0 +1,39 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: load_image.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseLoadImage(HttpRunner):
config = Config("load images").base_url("${get_httpbin_server()}")
teststeps = [
Step(
RunRequest("get png image")
.get("/image/png")
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("get jpeg image")
.get("/image/jpeg")
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("get webp image")
.get("/image/webp")
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("get svg image")
.get("/image/svg")
.validate()
.assert_equal("status_code", 200)
),
]
if __name__ == "__main__":
TestCaseLoadImage().test_start()


@@ -0,0 +1,4 @@
UserName=test
Password=654321
PROJECT_KEY=AAABBBCCC
content_type=application/json; charset=UTF-8


@@ -0,0 +1,30 @@
config:
name: test upload file with httpbin
base_url: ${get_httpbin_server()}
teststeps:
-
name: upload file
variables:
file_path: "test.env"
m_encoder: ${multipart_encoder(file=$file_path)}
request:
url: /post
method: POST
headers:
Content-Type: ${multipart_content_type($m_encoder)}
data: $m_encoder
validate:
- eq: ["status_code", 200]
- startswith: ["body.files.file", "UserName=test"]
-
name: upload file with keyword
request:
url: /post
method: POST
upload:
file: "test.env"
validate:
- eq: ["status_code", 200]
- startswith: ["body.files.file", "UserName=test"]


@@ -0,0 +1,38 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: upload.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseUpload(HttpRunner):
config = Config("test upload file with httpbin").base_url("${get_httpbin_server()}")
teststeps = [
Step(
RunRequest("upload file")
.with_variables(
**{
"file_path": "test.env",
"m_encoder": "${multipart_encoder(file=$file_path)}",
}
)
.post("/post")
.with_headers(**{"Content-Type": "${multipart_content_type($m_encoder)}"})
.with_data("$m_encoder")
.validate()
.assert_equal("status_code", 200)
.assert_startswith("body.files.file", "UserName=test")
),
Step(
RunRequest("upload file with keyword")
.post("/post")
.upload(**{"file": "test.env"})
.validate()
.assert_equal("status_code", 200)
.assert_startswith("body.files.file", "UserName=test")
),
]
if __name__ == "__main__":
TestCaseUpload().test_start()


@@ -0,0 +1,4 @@
user_agent
iOS/10.1
iOS/10.2
iOS/10.3


@@ -0,0 +1,35 @@
config:
name: basic test with httpbin
base_url: ${get_httpbin_server()}
teststeps:
-
name: validate response with json path
request:
url: /get
params:
a: 1
b: 2
method: GET
validate:
- eq: ["status_code", 200]
- eq: ["body.args.a", "1"]
- eq: ["body.args.b", "2"]
validate_script:
- "assert status_code == 200"
-
name: validate response with python script
request:
url: /get
params:
a: 1
b: 2
method: GET
validate:
- eq: ["status_code", 200]
validate_script:
- "assert status_code == 201"
- "a = response_json.get('args').get('a')"
- "assert a == '1'"


@@ -0,0 +1,31 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: validate.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseValidate(HttpRunner):
config = Config("basic test with httpbin").base_url("${get_httpbin_server()}")
teststeps = [
Step(
RunRequest("validate response with json path")
.get("/get")
.with_params(**{"a": 1, "b": 2})
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args.a", "1")
.assert_equal("body.args.b", "2")
),
Step(
RunRequest("validate response with python script")
.get("/get")
.with_params(**{"a": 1, "b": 2})
.validate()
.assert_equal("status_code", 200)
),
]
if __name__ == "__main__":
TestCaseValidate().test_start()


@@ -0,0 +1,20 @@
# NOTE: Generated By hrp v4.2.0, DO NOT EDIT!
import sys
import os
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
from debugtalk import *
if __name__ == "__main__":
import funppy
funppy.register("get_httprunner_version", get_httprunner_version)
funppy.register("sum_two", sum_two)
funppy.register("get_testcase_config_variables", get_testcase_config_variables)
funppy.register("get_testsuite_config_variables", get_testsuite_config_variables)
funppy.register("get_app_version", get_app_version)
funppy.register("calculate_two_nums", calculate_two_nums)
funppy.register("fake_rand_count", fake_rand_count)
funppy.serve()



@@ -0,0 +1,65 @@
# NOTICE: Generated By HttpRunner.
import json
import os
import time
import pytest
from loguru import logger
from httprunner.utils import get_platform, ExtendJSONEncoder
@pytest.fixture(scope="session", autouse=True)
def session_fixture(request):
"""setup and teardown each task"""
logger.info("start running testcases ...")
start_at = time.time()
yield
logger.info("task finished, generate task summary for --save-tests")
summary = {
"success": True,
"stat": {
"testcases": {"total": 0, "success": 0, "fail": 0},
"teststeps": {"total": 0, "failures": 0, "successes": 0},
},
"time": {"start_at": start_at, "duration": time.time() - start_at},
"platform": get_platform(),
"details": [],
}
for item in request.node.items:
testcase_summary = item.instance.get_summary()
summary["success"] &= testcase_summary.success
summary["stat"]["testcases"]["total"] += 1
summary["stat"]["teststeps"]["total"] += len(testcase_summary.step_results)
if testcase_summary.success:
summary["stat"]["testcases"]["success"] += 1
summary["stat"]["teststeps"]["successes"] += len(
testcase_summary.step_results
)
else:
summary["stat"]["testcases"]["fail"] += 1
summary["stat"]["teststeps"]["successes"] += (
len(testcase_summary.step_results) - 1
)
summary["stat"]["teststeps"]["failures"] += 1
testcase_summary_json = testcase_summary.dict()
testcase_summary_json["records"] = testcase_summary_json.pop("step_results")
summary["details"].append(testcase_summary_json)
summary_path = os.path.join(
os.getcwd(), "examples/postman_echo/logs/request_methods/hardcode.summary.json"
)
summary_dir = os.path.dirname(summary_path)
os.makedirs(summary_dir, exist_ok=True)
with open(summary_path, "w", encoding="utf-8") as f:
json.dump(summary, f, indent=4, ensure_ascii=False, cls=ExtendJSONEncoder)
logger.info(f"generated task summary: {summary_path}")


@@ -0,0 +1 @@
# NOTICE: Generated By HttpRunner. DO NOT EDIT!


@@ -0,0 +1,34 @@
config:
name: "set & delete cookies."
base_url: "https://postman-echo.com"
verify: False
export: ["cookie_foo1"]
teststeps:
-
name: set cookie foo1 & foo2 & foo3
request:
method: GET
url: /cookies/set
params:
foo1: bar1
foo2: bar2
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
extract:
cookie_foo1: body.cookies.foo1
validate:
- eq: ["status_code", 200]
- eq: ["body.cookies.foo1", "bar1"]
- eq: ["body.cookies.foo2", "bar2"]
-
name: delete cookie foo2
request:
method: GET
url: /cookies/delete?foo2
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
validate:
- eq: ["status_code", 200]
- eq: ["body.cookies.foo1", "bar1"]
- eq: ["body.cookies.foo2", null]


@@ -0,0 +1,41 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: cookie_manipulation/hardcode.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseHardcode(HttpRunner):
config = (
Config("set & delete cookies.")
.base_url("https://postman-echo.com")
.verify(False)
.export(*["cookie_foo1"])
)
teststeps = [
Step(
RunRequest("set cookie foo1 & foo2 & foo3")
.get("/cookies/set")
.with_params(**{"foo1": "bar1", "foo2": "bar2"})
.with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
.extract()
.with_jmespath("body.cookies.foo1", "cookie_foo1")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.cookies.foo1", "bar1")
.assert_equal("body.cookies.foo2", "bar2")
),
Step(
RunRequest("delete cookie foo2")
.get("/cookies/delete?foo2")
.with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.cookies.foo1", "bar1")
.assert_equal("body.cookies.foo2", None)
),
]
if __name__ == "__main__":
TestCaseHardcode().test_start()


@@ -0,0 +1,41 @@
config:
name: "set & delete cookies."
variables:
foo1: bar1
foo2: bar2
base_url: "https://postman-echo.com"
verify: False
export: ["cookie_foo1", "cookie_foo3"]
teststeps:
-
name: set cookie foo1 & foo2 & foo3
variables:
foo3: bar3
request:
method: GET
url: /cookies/set
params:
foo1: bar111
foo2: $foo2
foo3: $foo3
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
extract:
cookie_foo1: $.cookies.foo1
cookie_foo3: $.cookies.foo3
validate:
- eq: ["status_code", 200]
- ne: ["$.cookies.foo3", "$foo3"]
-
name: delete cookie foo2
request:
method: GET
url: /cookies/delete?foo2
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
validate:
- eq: ["status_code", 200]
- ne: ["$.cookies.foo1", "$foo1"]
- eq: ["$.cookies.foo1", "$cookie_foo1"]
- eq: ["$.cookies.foo3", "$cookie_foo3"]


@@ -0,0 +1,44 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: cookie_manipulation/set_delete_cookies.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseSetDeleteCookies(HttpRunner):
config = (
Config("set & delete cookies.")
.variables(**{"foo1": "bar1", "foo2": "bar2"})
.base_url("https://postman-echo.com")
.verify(False)
.export(*["cookie_foo1", "cookie_foo3"])
)
teststeps = [
Step(
RunRequest("set cookie foo1 & foo2 & foo3")
.with_variables(**{"foo3": "bar3"})
.get("/cookies/set")
.with_params(**{"foo1": "bar111", "foo2": "$foo2", "foo3": "$foo3"})
.with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
.extract()
.with_jmespath("$.cookies.foo1", "cookie_foo1")
.with_jmespath("$.cookies.foo3", "cookie_foo3")
.validate()
.assert_equal("status_code", 200)
.assert_not_equal("$.cookies.foo3", "$foo3")
),
Step(
RunRequest("delete cookie foo2")
.get("/cookies/delete?foo2")
.with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
.validate()
.assert_equal("status_code", 200)
.assert_not_equal("$.cookies.foo1", "$foo1")
.assert_equal("$.cookies.foo1", "$cookie_foo1")
.assert_equal("$.cookies.foo3", "$cookie_foo3")
),
]
if __name__ == "__main__":
TestCaseSetDeleteCookies().test_start()


@@ -0,0 +1,42 @@
from httprunner import __version__
def get_httprunner_version():
return __version__
def sum_two(m, n):
return m + n
def get_testcase_config_variables():
return {"foo1": "testcase_config_bar1", "foo2": "testcase_config_bar2"}
def get_testsuite_config_variables():
return {"foo1": "testsuite_config_bar1", "foo2": "testsuite_config_bar2"}
def get_app_version():
return [3.1, 3.0]
def calculate_two_nums(a, b=1):
return [a + b, b - a]
def fake_rand_count():
"""
return 1 at first call
return 2 at second call
"""
l = []
def func():
l.append(1)
return len(l)
return func
fake_randnum = fake_rand_count()
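`fake_rand_count` above is a stateful closure: each call to the returned function appends to the enclosed list and returns its new length, so the first call yields 1 and the second yields 2. That is exactly what `request_with_retry.py` relies on when it retries once and then asserts `body.args.foo1 == "2"`. A minimal standalone sketch of the same pattern:

```python
def make_counter():
    """Closure-based call counter, mirroring fake_rand_count above."""
    calls = []

    def count():
        calls.append(1)       # record this invocation in the enclosed list
        return len(calls)     # the call number doubles as the return value

    return count

counter = make_counter()
print(counter())  # 1
print(counter())  # 2 -- a retried step observes the incremented value
```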


@@ -0,0 +1 @@
# NOTICE: Generated By HttpRunner. DO NOT EDIT!


@@ -0,0 +1,4 @@
username,password
test1,111111
test2,222222
test3,333333


@@ -0,0 +1,61 @@
import uuid
from typing import List
import pytest
from httprunner import Config, Step
from loguru import logger
@pytest.fixture(scope="session", autouse=True)
def session_fixture(request):
"""setup and teardown each task"""
total_testcases_num = request.node.testscollected
testcases = []
for item in request.node.items:
testcase = {
"name": item.cls.config.name,
"path": item.cls.config.path,
"node_id": item.nodeid,
}
testcases.append(testcase)
logger.debug(f"collected {total_testcases_num} testcases: {testcases}")
yield
logger.debug("teardown task fixture")
# teardown task
# TODO: upload task summary
@pytest.fixture(scope="function", autouse=True)
def testcase_fixture(request):
"""setup and teardown each testcase"""
config: Config = request.cls.config
teststeps: List[Step] = request.cls.teststeps
logger.debug(f"setup testcase fixture: {config.name} - {request.module.__name__}")
def update_request_headers(steps, index):
for teststep in steps:
if teststep.request:
index += 1
teststep.request.headers["X-Request-ID"] = f"{prefix}-{index}"
elif teststep.testcase and hasattr(teststep.testcase, "teststeps"):
update_request_headers(teststep.testcase.teststeps, index)
# you can update testcase teststep like this
prefix = f"HRUN-{uuid.uuid4()}"
update_request_headers(teststeps, 0)
yield
logger.debug(
f"teardown testcase fixture: {config.name} - {request.module.__name__}"
)
summary = request.instance.get_summary()
logger.debug(f"testcase result summary: {summary}")
# TODO: upload testcase summary


@@ -0,0 +1,55 @@
config:
name: "request methods testcase in hardcode"
base_url: "https://postman-echo.com"
verify: False
teststeps:
-
name: get with params
request:
method: GET
url: /get
params:
foo1: bar1
foo2: bar2
headers:
:authority: postman-echo.com
:method: POST
:path: /get
        :scheme: https
User-Agent: HttpRunner/3.0
validate:
- eq: ["status_code", 200]
-
name: post raw text
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "text/plain"
data: "This is expected to be sent back as part of response body."
validate:
- eq: ["status_code", 200]
-
name: post form data
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "application/x-www-form-urlencoded"
data: "foo1=bar1&foo2=bar2"
validate:
- eq: ["status_code", 200]
-
name: put request
request:
method: PUT
url: /put
headers:
User-Agent: HttpRunner/3.0
Content-Type: "text/plain"
data: "This is expected to be sent back as part of response body."
validate:
- eq: ["status_code", 200]


@@ -0,0 +1,68 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/hardcode.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseHardcode(HttpRunner):
config = (
Config("request methods testcase in hardcode")
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunRequest("get with params")
.get("/get")
.with_params(**{"foo1": "bar1", "foo2": "bar2"})
.with_headers(
**{
":authority": "postman-echo.com",
":method": "POST",
":path": "/get",
                ":scheme": "https",
"User-Agent": "HttpRunner/3.0",
}
)
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("post raw text")
.post("/post")
.with_headers(
**{"User-Agent": "HttpRunner/3.0", "Content-Type": "text/plain"}
)
.with_data("This is expected to be sent back as part of response body.")
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("post form data")
.post("/post")
.with_headers(
**{
"User-Agent": "HttpRunner/3.0",
"Content-Type": "application/x-www-form-urlencoded",
}
)
.with_data("foo1=bar1&foo2=bar2")
.validate()
.assert_equal("status_code", 200)
),
Step(
RunRequest("put request")
.put("/put")
.with_headers(
**{"User-Agent": "HttpRunner/3.0", "Content-Type": "text/plain"}
)
.with_data("This is expected to be sent back as part of response body.")
.validate()
.assert_equal("status_code", 200)
),
]
if __name__ == "__main__":
TestCaseHardcode().test_start()


@@ -0,0 +1,69 @@
config:
name: "request methods testcase with functions"
variables:
foo1: config_bar1
foo2: config_bar2
expect_foo1: config_bar1
expect_foo2: config_bar2
base_url: "https://postman-echo.com"
verify: False
weight: 2
export: ["foo3"]
teststeps:
-
name: get with params
variables:
foo1: bar11
foo2: bar21
sum_v: "${sum_two(1, 2)}"
request:
method: GET
url: /get
params:
foo1: $foo1
foo2: $foo2
sum_v: $sum_v
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
extract:
foo3: "body.args.foo2"
validate:
- eq: ["status_code", 200]
- eq: ["body.args.foo1", "bar11"]
- eq: ["body.args.sum_v", "3"]
- eq: ["body.args.foo2", "bar21"]
-
name: post raw text
variables:
foo1: "bar12"
foo3: "bar32"
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
Content-Type: "text/plain"
data: "This is expected to be sent back as part of response body: $foo1-$foo2-$foo3."
validate:
- eq: ["status_code", 200]
- eq: ["body.data", "This is expected to be sent back as part of response body: bar12-$expect_foo2-bar32."]
- type_match: ["body.json", None]
- type_match: ["body.json", NoneType]
- type_match: ["body.json", null]
-
name: post form data
variables:
foo2: bar23
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
Content-Type: "application/x-www-form-urlencoded"
data: "foo1=$foo1&foo2=$foo2&foo3=$foo3"
validate:
- eq: ["status_code", 200, "response status code should be 200"]
- eq: ["body.form.foo1", "$expect_foo1"]
- eq: ["body.form.foo2", "bar23"]
- eq: ["body.form.foo3", "bar21"]


@@ -0,0 +1,84 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/request_with_functions.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseRequestWithFunctions(HttpRunner):
config = (
Config("request methods testcase with functions")
.variables(
**{
"foo1": "config_bar1",
"foo2": "config_bar2",
"expect_foo1": "config_bar1",
"expect_foo2": "config_bar2",
}
)
.base_url("https://postman-echo.com")
.verify(False)
.export(*["foo3"])
)
teststeps = [
Step(
RunRequest("get with params")
.with_variables(
**{"foo1": "bar11", "foo2": "bar21", "sum_v": "${sum_two(1, 2)}"}
)
.get("/get")
.with_params(**{"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"})
.with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
.extract()
.with_jmespath("body.args.foo2", "foo3")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args.foo1", "bar11")
.assert_equal("body.args.sum_v", "3")
.assert_equal("body.args.foo2", "bar21")
),
Step(
RunRequest("post raw text")
.with_variables(**{"foo1": "bar12", "foo3": "bar32"})
.post("/post")
.with_headers(
**{
"User-Agent": "HttpRunner/${get_httprunner_version()}",
"Content-Type": "text/plain",
}
)
.with_data(
"This is expected to be sent back as part of response body: $foo1-$foo2-$foo3."
)
.validate()
.assert_equal("status_code", 200)
.assert_equal(
"body.data",
"This is expected to be sent back as part of response body: bar12-$expect_foo2-bar32.",
)
.assert_type_match("body.json", "None")
.assert_type_match("body.json", "NoneType")
.assert_type_match("body.json", None)
),
Step(
RunRequest("post form data")
.with_variables(**{"foo2": "bar23"})
.post("/post")
.with_headers(
**{
"User-Agent": "HttpRunner/${get_httprunner_version()}",
"Content-Type": "application/x-www-form-urlencoded",
}
)
.with_data("foo1=$foo1&foo2=$foo2&foo3=$foo3")
.validate()
.assert_equal("status_code", 200, "response status code should be 200")
.assert_equal("body.form.foo1", "$expect_foo1")
.assert_equal("body.form.foo2", "bar23")
.assert_equal("body.form.foo3", "bar21")
),
]
if __name__ == "__main__":
TestCaseRequestWithFunctions().test_start()


@@ -0,0 +1,33 @@
config:
name: "request methods testcase: validate with parameters"
parameters:
user_agent: ["iOS/10.1", "iOS/10.2"]
username-password: ${parameterize(request_methods/account.csv)}
app_version: ${get_app_version()}
variables:
app_version: f1
base_url: "https://postman-echo.com"
verify: False
teststeps:
-
name: get with params
variables:
foo1: $username
foo2: $password
sum_v: "${sum_two(1, $app_version)}"
request:
method: GET
url: /get
params:
foo1: $foo1
foo2: $foo2
sum_v: $sum_v
headers:
User-Agent: $user_agent,$app_version
extract:
session_foo2: "body.args.foo2"
validate:
- eq: ["status_code", 200]
- str_eq: ["body.args.sum_v", "${sum_two(1, $app_version)}"]
# - less_than: ["body.args.sum_v", "${sum_two(2, 2)}"] FIXME: TypeError: '<' not supported between instances of 'str' and 'int'
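The `parameters` block above drives data-driven execution: httprunner expands independent parameter sources into their cartesian product, while a hyphen-joined name like `username-password` binds paired values taken from a single source. Assuming `account.csv` holds the three accounts shown earlier and `get_app_version()` returns the two versions from debugtalk.py, a minimal sketch of that expansion (plain `itertools`, not the httprunner API):

```python
from itertools import product

# Assumed inputs, copied from the example files above
user_agents = ["iOS/10.1", "iOS/10.2"]
accounts = [("test1", "111111"), ("test2", "222222"), ("test3", "333333")]  # account.csv
app_versions = [3.1, 3.0]                                                   # get_app_version()

# Independent sources combine as a cartesian product;
# "username-password" contributes each CSV row as a bound pair.
combinations = [
    {"user_agent": ua, "username": user, "password": pwd, "app_version": ver}
    for ua, (user, pwd), ver in product(user_agents, accounts, app_versions)
]

print(len(combinations))  # 2 * 3 * 2 = 12 parameterized runs
```

Each of the 12 dicts becomes one `param` passed into the `@pytest.mark.parametrize`-driven `test_start` shown in the generated test file.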


@@ -0,0 +1,53 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/request_with_parameters.yml
import pytest
from httprunner import HttpRunner, Config, Step, RunRequest
from httprunner import Parameters
class TestCaseRequestWithParameters(HttpRunner):
@pytest.mark.parametrize(
"param",
Parameters(
{
"user_agent": ["iOS/10.1", "iOS/10.2"],
"username-password": "${parameterize(request_methods/account.csv)}",
"app_version": "${get_app_version()}",
}
),
)
def test_start(self, param):
super().test_start(param)
config = (
Config("request methods testcase: validate with parameters")
.variables(**{"app_version": "f1"})
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunRequest("get with params")
.with_variables(
**{
"foo1": "$username",
"foo2": "$password",
"sum_v": "${sum_two(1, $app_version)}",
}
)
.get("/get")
.with_params(**{"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"})
.with_headers(**{"User-Agent": "$user_agent,$app_version"})
.extract()
.with_jmespath("body.args.foo2", "session_foo2")
.validate()
.assert_equal("status_code", 200)
.assert_string_equals("body.args.sum_v", "${sum_two(1, $app_version)}")
),
]
if __name__ == "__main__":
TestCaseRequestWithParameters().test_start()


@@ -0,0 +1,31 @@
# -*- coding: utf-8 -*-
"""
@Date : 2022/4/7
@File : request_with_retry.py
@Author : duanchao.bill
@Desc :
"""
from httprunner import HttpRunner, Config, Step, RunRequest, RunTestCase
class TestCaseRetry(HttpRunner):
config = (
Config("request methods testcase in hardcode")
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunRequest("run with retry")
.with_retry(retry_times=1, retry_interval=1)
.get("/get")
.with_params(**{"foo1": "${fake_randnum()}"})
.with_headers(**{"User-Agent": "HttpRunner/3.0"})
.validate()
.assert_equal("body.args.foo1", "2")
)
]


@@ -0,0 +1,37 @@
config:
name: "request methods testcase: reference testcase"
variables:
foo1: testsuite_config_bar1
expect_foo1: testsuite_config_bar1
expect_foo2: config_bar2
base_url: "https://postman-echo.com"
verify: False
teststeps:
-
name: request with functions
variables:
foo1: testcase_ref_bar1
expect_foo1: testcase_ref_bar1
setup_hooks:
- ${sleep(0.1)}
testcase: request_methods/request_with_functions.yml
teardown_hooks:
- ${sleep(0.2)}
export:
- foo3
-
name: post form data
variables:
foo1: bar1
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
Content-Type: "application/x-www-form-urlencoded"
data: "foo1=$foo1&foo2=$foo3"
validate:
- eq: ["status_code", 200]
- eq: ["body.form.foo1", "bar1"]
- eq: ["body.form.foo2", "bar21"]


@@ -0,0 +1,62 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/request_with_testcase_reference.yml
from httprunner import HttpRunner, Config, Step, RunRequest
from httprunner import RunTestCase
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent))
from request_methods.request_with_functions_test import (
TestCaseRequestWithFunctions as RequestWithFunctions,
)
class TestCaseRequestWithTestcaseReference(HttpRunner):
config = (
Config("request methods testcase: reference testcase")
.variables(
**{
"foo1": "testsuite_config_bar1",
"expect_foo1": "testsuite_config_bar1",
"expect_foo2": "config_bar2",
}
)
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunTestCase("request with functions")
.with_variables(
**{"foo1": "testcase_ref_bar1", "expect_foo1": "testcase_ref_bar1"}
)
.setup_hook("${sleep(0.1)}")
.call(RequestWithFunctions)
.teardown_hook("${sleep(0.2)}")
.export(*["foo3"])
),
Step(
RunRequest("post form data")
.with_variables(**{"foo1": "bar1"})
.post("/post")
.with_headers(
**{
"User-Agent": "HttpRunner/${get_httprunner_version()}",
"Content-Type": "application/x-www-form-urlencoded",
}
)
.with_data("foo1=$foo1&foo2=$foo3")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.form.foo1", "bar1")
.assert_equal("body.form.foo2", "bar21")
),
]
if __name__ == "__main__":
TestCaseRequestWithTestcaseReference().test_start()


@@ -0,0 +1,78 @@
config:
name: "request methods testcase with variables"
variables: ${get_testcase_config_variables()}
base_url: "https://postman-echo.com"
verify: False
teststeps:
-
name: get with params
variables:
foo1: bar11
foo2: bar21
request:
method: GET
url: /get
params:
foo1: $foo1
foo2: $foo2
headers:
User-Agent: HttpRunner/3.0
extract:
foo3: "body.args.foo2"
validate:
- eq: ["status_code", 200]
- eq: ["body.args.foo1", "bar11"]
- eq: ["body.args.foo2", "bar21"]
-
name: post raw text
variables:
foo1: "bar12"
foo3: "bar32"
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "text/plain"
data: "This is expected to be sent back as part of response body: $foo1-$foo2-$foo3."
validate:
- eq: ["status_code", 200]
- eq: ["body.data", "This is expected to be sent back as part of response body: bar12-testcase_config_bar2-bar32."]
-
name: post form data
variables:
foo2: bar23
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "application/x-www-form-urlencoded"
data: "foo1=$foo1&foo2=$foo2&foo3=$foo3"
validate:
- eq: ["status_code", 200]
- eq: ["body.form.foo1", "testcase_config_bar1"]
- eq: ["body.form.foo2", "bar23"]
- eq: ["body.form.foo3", "bar21"]
-
name: post form data using json
variables:
foo2: bar23
jsondata:
foo1: $foo1
foo2: $foo2
foo3: $foo3
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "application/json"
json: $jsondata
validate:
- eq: ["status_code", 200]
- eq: ["body.data.foo1", "testcase_config_bar1"]
- eq: ["body.data.foo2", "bar23"]
- eq: ["body.data.foo3", "bar21"]


@@ -0,0 +1,86 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/request_with_variables.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseRequestWithVariables(HttpRunner):
config = (
Config("request methods testcase with variables")
.variables(**{"foo1": "testcase_config_bar1", "foo2": "testcase_config_bar2"})
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunRequest("get with params")
.with_variables(**{"foo1": "bar11", "foo2": "bar21"})
.get("/get")
.with_params(**{"foo1": "$foo1", "foo2": "$foo2"})
.with_headers(**{"User-Agent": "HttpRunner/3.0"})
.extract()
.with_jmespath("body.args.foo2", "foo3")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args.foo1", "bar11")
.assert_equal("body.args.foo2", "bar21")
),
Step(
RunRequest("post raw text")
.with_variables(**{"foo1": "bar12", "foo3": "bar32"})
.post("/post")
.with_headers(
**{"User-Agent": "HttpRunner/3.0", "Content-Type": "text/plain"}
)
.with_data(
"This is expected to be sent back as part of response body: $foo1-$foo2-$foo3."
)
.validate()
.assert_equal("status_code", 200)
.assert_equal(
"body.data",
"This is expected to be sent back as part of response body: bar12-testcase_config_bar2-bar32.",
)
),
Step(
RunRequest("post form data")
.with_variables(**{"foo2": "bar23"})
.post("/post")
.with_headers(
**{
"User-Agent": "HttpRunner/3.0",
"Content-Type": "application/x-www-form-urlencoded",
}
)
.with_data("foo1=$foo1&foo2=$foo2&foo3=$foo3")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.form.foo1", "testcase_config_bar1")
.assert_equal("body.form.foo2", "bar23")
.assert_equal("body.form.foo3", "bar21")
),
Step(
RunRequest("post form data using json")
.with_variables(
**{
"foo2": "bar23",
"jsondata": {"foo1": "$foo1", "foo2": "$foo2", "foo3": "$foo3"},
}
)
.post("/post")
.with_headers(
**{"User-Agent": "HttpRunner/3.0", "Content-Type": "application/json"}
)
.with_json("$jsondata")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.data.foo1", "testcase_config_bar1")
.assert_equal("body.data.foo2", "bar23")
.assert_equal("body.data.foo3", "bar21")
),
]
if __name__ == "__main__":
TestCaseRequestWithVariables().test_start()


@@ -0,0 +1,29 @@
config:
name: "request methods testcase: validate with functions"
variables:
foo1: session_bar1
base_url: "https://postman-echo.com"
verify: False
teststeps:
-
name: get with params
variables:
foo1: bar1
foo2: session_bar2
sum_v: "${sum_two(1, 2)}"
request:
method: GET
url: /get
params:
foo1: $foo1
foo2: $foo2
sum_v: $sum_v
headers:
User-Agent: HttpRunner/${get_httprunner_version()}
extract:
session_foo2: "body.args.foo2"
validate:
- eq: ["status_code", 200]
- eq: ["body.args.sum_v", "3"]
# - less_than: ["body.args.sum_v", "${sum_two(2, 2)}"] FIXME: TypeError: '<' not supported between instances of 'str' and 'int'


@@ -0,0 +1,34 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/validate_with_functions.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseValidateWithFunctions(HttpRunner):
config = (
Config("request methods testcase: validate with functions")
.variables(**{"foo1": "session_bar1"})
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunRequest("get with params")
.with_variables(
**{"foo1": "bar1", "foo2": "session_bar2", "sum_v": "${sum_two(1, 2)}"}
)
.get("/get")
.with_params(**{"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"})
.with_headers(**{"User-Agent": "HttpRunner/${get_httprunner_version()}"})
.extract()
.with_jmespath("body.args.foo2", "session_foo2")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args.sum_v", "3")
),
]
if __name__ == "__main__":
TestCaseValidateWithFunctions().test_start()


@@ -0,0 +1,58 @@
config:
name: "request methods testcase: validate with variables"
variables:
foo1: session_bar1
base_url: "https://postman-echo.com"
verify: False
teststeps:
-
name: get with params
variables:
foo1: bar1
foo2: session_bar2
request:
method: GET
url: /get
params:
foo1: $foo1
foo2: $foo2
headers:
User-Agent: HttpRunner/3.0
extract:
session_foo2: "body.args.foo2"
validate:
- eq: ["status_code", 200]
- eq: ["body.args.foo1", "$foo1"]
- eq: ["body.args.foo2", "$foo2"]
-
name: post raw text
variables:
foo1: "hello world"
foo3: "$session_foo2"
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "text/plain"
data: "This is expected to be sent back as part of response body: $foo1-$foo3."
validate:
- eq: ["status_code", 200]
- eq: ["body.data", "This is expected to be sent back as part of response body: hello world-$foo3."]
-
name: post form data
variables:
foo1: bar1
foo2: bar2
request:
method: POST
url: /post
headers:
User-Agent: HttpRunner/3.0
Content-Type: "application/x-www-form-urlencoded"
data: "foo1=$foo1&foo2=$foo2"
validate:
- eq: ["status_code", 200]
- eq: ["body.form.foo1", "$foo1"]
- eq: ["body.form.foo2", "$foo2"]


@@ -0,0 +1,66 @@
# NOTE: Generated By HttpRunner v4.3.5
# FROM: request_methods/validate_with_variables.yml
from httprunner import HttpRunner, Config, Step, RunRequest
class TestCaseValidateWithVariables(HttpRunner):
config = (
Config("request methods testcase: validate with variables")
.variables(**{"foo1": "session_bar1"})
.base_url("https://postman-echo.com")
.verify(False)
)
teststeps = [
Step(
RunRequest("get with params")
.with_variables(**{"foo1": "bar1", "foo2": "session_bar2"})
.get("/get")
.with_params(**{"foo1": "$foo1", "foo2": "$foo2"})
.with_headers(**{"User-Agent": "HttpRunner/3.0"})
.extract()
.with_jmespath("body.args.foo2", "session_foo2")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.args.foo1", "$foo1")
.assert_equal("body.args.foo2", "$foo2")
),
Step(
RunRequest("post raw text")
.with_variables(**{"foo1": "hello world", "foo3": "$session_foo2"})
.post("/post")
.with_headers(
**{"User-Agent": "HttpRunner/3.0", "Content-Type": "text/plain"}
)
.with_data(
"This is expected to be sent back as part of response body: $foo1-$foo3."
)
.validate()
.assert_equal("status_code", 200)
.assert_equal(
"body.data",
"This is expected to be sent back as part of response body: hello world-$foo3.",
)
),
Step(
RunRequest("post form data")
.with_variables(**{"foo1": "bar1", "foo2": "bar2"})
.post("/post")
.with_headers(
**{
"User-Agent": "HttpRunner/3.0",
"Content-Type": "application/x-www-form-urlencoded",
}
)
.with_data("foo1=$foo1&foo2=$foo2")
.validate()
.assert_equal("status_code", 200)
.assert_equal("body.form.foo1", "$foo1")
.assert_equal("body.form.foo2", "$foo2")
),
]
if __name__ == "__main__":
TestCaseValidateWithVariables().test_start()

6
examples/pytest.ini Normal file

@@ -0,0 +1,6 @@
[pytest]
addopts = -s
# https://docs.pytest.org/en/latest/how-to/output.html
junit_logging = all
junit_duration_report = total
log_cli = False


@@ -0,0 +1,36 @@
import sys
from pathlib import Path
from httprunner.database.engine import DBEngine
sys.path.insert(0, str(Path(__file__).parent.parent))
from httprunner import HttpRunner, Config, Step, RunSqlRequest # noqa:E402
class TestCaseDemoSqlite(HttpRunner):
config = Config("run sqlite demo")
teststeps = [
Step(
RunSqlRequest("run a sqlite demo")
.fetchmany("select * from student;", 5)
.extract()
.with_jmespath("[0].name", "name")
.validate()
.assert_equal(
"[0]",
{
"id": 1,
"name": "Jack",
"fullname": {"first_name": "Jack", "last_name": "Tomson"},
},
)
.assert_equal("[0].fullname.first_name", "Jack")
)
]
def test_start(self):
eg = DBEngine(db_uri="sqlite:///../data/sqlite.db")
self.with_db_engine(eg)
super().test_start()

115
httprunner/README.md Normal file

@@ -0,0 +1,115 @@
# Code Reading Guide (Python Part)
## Core Data Structures
HttpRunner is built around `TestCase`, which abstracts any test scenario as an ordered collection of steps.
```py
class TestCase(BaseModel):
config: TConfig
teststeps: List[TStep]
```
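To make the shape of this structure concrete, here is a minimal runnable sketch. Note that the real project defines `TConfig`/`TStep` as pydantic models with many more fields; plain stdlib dataclasses and the fields shown here are illustrative stand-ins only.

```python
# Sketch of the TestCase container: a config plus an ordered list of steps.
# Field names beyond config/teststeps are illustrative, not the real models.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TConfig:
    name: str
    base_url: str = ""
    verify: bool = False


@dataclass
class TStep:
    name: str
    variables: dict = field(default_factory=dict)


@dataclass
class TestCase:
    config: TConfig
    teststeps: List[TStep]


tc = TestCase(
    config=TConfig(name="demo testcase", base_url="https://postman-echo.com"),
    teststeps=[TStep(name="get with params"), TStep(name="post form data")],
)
print(len(tc.teststeps))  # 2
```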
Each test step type inherits from `IStep` and must implement at least the following 4 methods; the step's behavior is implemented in its `run` method.
```py
class IStep(object):
def name(self) -> str:
raise NotImplementedError
def type(self) -> str:
raise NotImplementedError
def struct(self) -> TStep:
raise NotImplementedError
def run(self, runner) -> StepData:
# runner: HttpRunner
raise NotImplementedError
```
By simply following the `IStep` interface definition, we can implement all kinds of test step types. The step types currently supported by the Python version are:
- [request](step_request.py): send a single HTTP request
- [testcase](step_testcase.py): reference and run another testcase file
Based on this mechanism, we can extend support to new protocols such as HTTP/2, WebSocket, and RPC, as well as new test types such as UI automation. We can even mix different Step types within a single testcase, e.g. hybrid HTTP/RPC/UI scenarios.
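As a self-contained illustration of that extension mechanism, the sketch below implements a hypothetical `RunSleep` step against the `IStep` interface shown above; the `RunSleep` name and the returned dict are invented for this example, not part of the library.

```python
# Hypothetical new step type following the IStep interface: all behavior
# lives in run(), so the runner can dispatch it like any other step.
import time


class IStep(object):
    def name(self) -> str:
        raise NotImplementedError

    def type(self) -> str:
        raise NotImplementedError

    def struct(self):
        raise NotImplementedError

    def run(self, runner):
        raise NotImplementedError


class RunSleep(IStep):
    def __init__(self, name: str, seconds: float):
        self._name = name
        self._seconds = seconds

    def name(self) -> str:
        return self._name

    def type(self) -> str:
        return "sleep"

    def struct(self):
        return {"name": self._name, "seconds": self._seconds}

    def run(self, runner):
        # in real usage, runner would be the SessionRunner carrying session state
        time.sleep(self._seconds)
        return {"name": self._name, "success": True}


step = RunSleep("pause briefly", 0.01)
print(step.type(), step.run(None)["success"])  # sleep True
```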
## Writing Test Cases
## Main Execution Flow
### Overall Controller: pytest
Unlike the Golang version, the control logic of the Python version is built on pytest's testcase discovery and execution mechanism.
- When running JSON/YAML testcases, hrp first converts them into pytest-compatible testcases
- When writing pytest testcases by hand, they must follow HttpRunner's format requirements
### pytest Testcase Format Requirements
All testcases must inherit from `HttpRunner`, structured as follows:
```py
class TestCaseRequestWithFunctions(HttpRunner):
config = (
Config("request methods testcase with functions")
)
teststeps = [
Step(
RunRequest("get with params")...
),
Step(
RunRequest("post raw text")...
),
Step(
RunRequest("post form data")...
),
]
```
Complete examples:
- [request_with_functions_test.py](../examples/postman_echo/request_methods/request_with_functions_test.py): a testcase containing plain requests
- [request_with_testcase_reference_test.py](../examples/postman_echo/request_methods/request_with_testcase_reference_test.py): a testcase that references another testcase
### Testcase Executor: SessionRunner
The actual execution of a testcase is handled by `SessionRunner`. Each TestCase maps to one instance, which holds not only the testcase content itself, but also the session data collected during execution and the final test result summary.
```py
class SessionRunner(object):
config: Config
teststeps: List[object] # list of Step
...
```
One method deserves particular attention:
- `test_start`: discovered by pytest as the execution entry point; it runs all test steps in sequence
```py
def test_start(self, param: Dict = None) -> "SessionRunner":
"""main entrance, discovered by pytest"""
self.__start_at = time.time()
try:
# run step in sequential order
for step in self.teststeps:
self.__run_step(step)
finally:
logger.info(f"generate testcase log: {self.__log_path}")
self.__duration = time.time() - self.__start_at
```
In the main flow, SessionRunner does not need to care about the concrete type of each step; it uniformly calls `step.run(self)`, and the actual logic is implemented in each step's `run` method.
```py
def run(self, runner: HttpRunner) -> StepData:
return self.__step.run(runner)
```
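The type-agnostic dispatch described above can be sketched as follows; `MiniSessionRunner` and `EchoStep` are simplified stand-ins for `SessionRunner` and the `Step` wrappers, not the actual classes.

```python
# Sketch of the uniform dispatch loop: the runner iterates its steps and
# calls step.run(self) without ever inspecting the concrete step type.
class EchoStep:
    def __init__(self, name):
        self._name = name

    def run(self, runner):
        # a real step would issue a request here; we just record execution
        runner.executed.append(self._name)
        return {"name": self._name}


class MiniSessionRunner:
    def __init__(self, teststeps):
        self.teststeps = teststeps
        self.executed = []

    def test_start(self):
        # run steps in sequential order, type-agnostic
        for step in self.teststeps:
            step.run(self)
        return self


r = MiniSessionRunner([EchoStep("step 1"), EchoStep("step 2")]).test_start()
print(r.executed)  # ['step 1', 'step 2']
```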

38
httprunner/__init__.py Normal file

@@ -0,0 +1,38 @@
__version__ = "v4.3.5"
__description__ = "One-stop solution for HTTP(S) testing."
from httprunner.config import Config
from httprunner.parser import parse_parameters as Parameters
from httprunner.runner import HttpRunner
from httprunner.step import Step
from httprunner.step_request import RunRequest
from httprunner.step_sql_request import (
RunSqlRequest,
StepSqlRequestExtraction,
StepSqlRequestValidation,
)
from httprunner.step_testcase import RunTestCase
from httprunner.step_thrift_request import (
RunThriftRequest,
StepThriftRequestExtraction,
StepThriftRequestValidation,
)
__all__ = [
"__version__",
"__description__",
"HttpRunner",
"Config",
"Step",
"RunRequest",
"RunSqlRequest",
"StepSqlRequestValidation",
"StepSqlRequestExtraction",
"RunTestCase",
"Parameters",
"RunThriftRequest",
"StepThriftRequestValidation",
"StepThriftRequestExtraction",
]

5
httprunner/__main__.py Normal file

@@ -0,0 +1,5 @@
from httprunner.cli import main
if __name__ == "__main__":
main()


@@ -0,0 +1,2 @@
from httprunner.builtin.comparators import *
from httprunner.builtin.functions import *


@@ -0,0 +1,129 @@
"""
Built-in validate comparators.
"""
import re
from typing import Text, Any, Union
def equal(check_value: Any, expect_value: Any, message: Text = ""):
assert check_value == expect_value, message
def greater_than(
check_value: Union[int, float], expect_value: Union[int, float], message: Text = ""
):
assert check_value > expect_value, message
def less_than(
check_value: Union[int, float], expect_value: Union[int, float], message: Text = ""
):
assert check_value < expect_value, message
def greater_or_equals(
check_value: Union[int, float], expect_value: Union[int, float], message: Text = ""
):
assert check_value >= expect_value, message
def less_or_equals(
check_value: Union[int, float], expect_value: Union[int, float], message: Text = ""
):
assert check_value <= expect_value, message
def not_equal(check_value: Any, expect_value: Any, message: Text = ""):
assert check_value != expect_value, message
def string_equals(check_value: Text, expect_value: Any, message: Text = ""):
assert str(check_value) == str(expect_value), message
def length_equal(check_value: Text, expect_value: int, message: Text = ""):
assert isinstance(expect_value, int), "expect_value should be int type"
assert len(check_value) == expect_value, message
def length_greater_than(
check_value: Text, expect_value: Union[int, float], message: Text = ""
):
assert isinstance(
expect_value, (int, float)
), "expect_value should be int/float type"
assert len(check_value) > expect_value, message
def length_greater_or_equals(
check_value: Text, expect_value: Union[int, float], message: Text = ""
):
assert isinstance(
expect_value, (int, float)
), "expect_value should be int/float type"
assert len(check_value) >= expect_value, message
def length_less_than(
check_value: Text, expect_value: Union[int, float], message: Text = ""
):
assert isinstance(
expect_value, (int, float)
), "expect_value should be int/float type"
assert len(check_value) < expect_value, message
def length_less_or_equals(
check_value: Text, expect_value: Union[int, float], message: Text = ""
):
assert isinstance(
expect_value, (int, float)
), "expect_value should be int/float type"
assert len(check_value) <= expect_value, message
def contains(check_value: Any, expect_value: Any, message: Text = ""):
assert isinstance(
check_value, (list, tuple, dict, str, bytes)
), "check_value should be list/tuple/dict/str/bytes type"
assert expect_value in check_value, message
def contained_by(check_value: Any, expect_value: Any, message: Text = ""):
assert isinstance(
expect_value, (list, tuple, dict, str, bytes)
), "expect_value should be list/tuple/dict/str/bytes type"
assert check_value in expect_value, message
def type_match(check_value: Any, expect_value: Any, message: Text = ""):
def get_type(name):
if isinstance(name, type):
return name
elif isinstance(name, str):
try:
return __builtins__[name]
except KeyError:
raise ValueError(name)
else:
raise ValueError(name)
if expect_value in ["None", "NoneType", None]:
assert check_value is None, message
else:
assert type(check_value) == get_type(expect_value), message
def regex_match(check_value: Text, expect_value: Any, message: Text = ""):
assert isinstance(expect_value, str), "expect_value should be Text type"
assert isinstance(check_value, str), "check_value should be Text type"
assert re.match(expect_value, check_value), message
def startswith(check_value: Any, expect_value: Any, message: Text = ""):
assert str(check_value).startswith(str(expect_value)), message
def endswith(check_value: Text, expect_value: Any, message: Text = ""):
assert str(check_value).endswith(str(expect_value)), message


@@ -0,0 +1,35 @@
"""
Built-in functions used in YAML/JSON testcases.
"""
import datetime
import random
import string
import time
from httprunner.exceptions import ParamsError
def gen_random_string(str_len):
"""generate random string with specified length"""
return "".join(
random.choice(string.ascii_letters + string.digits) for _ in range(str_len)
)
def get_timestamp(str_len=13):
"""get timestamp string, length must be between 1 and 16"""
if isinstance(str_len, int) and 0 < str_len < 17:
return str(time.time()).replace(".", "")[:str_len]
raise ParamsError("timestamp length must be between 1 and 16.")
def get_current_date(fmt="%Y-%m-%d"):
"""get current date, default format is %Y-%m-%d"""
return datetime.datetime.now().strftime(fmt)
def sleep(n_secs):
"""sleep n seconds"""
time.sleep(n_secs)

152
httprunner/cli.py Normal file

@@ -0,0 +1,152 @@
import argparse
import enum
import os
import sys
import pytest
from loguru import logger
from httprunner import __description__, __version__
from httprunner.compat import ensure_cli_args
from httprunner.make import init_make_parser, main_make
from httprunner.utils import ga4_client, init_logger, init_sentry_sdk
init_sentry_sdk()
def init_parser_run(subparsers):
sub_parser_run = subparsers.add_parser(
"run", help="Make HttpRunner testcases and run with pytest."
)
return sub_parser_run
def main_run(extra_args) -> enum.IntEnum:
ga4_client.send_event("hrun")
# keep compatibility with v2
extra_args = ensure_cli_args(extra_args)
tests_path_list = []
extra_args_new = []
for item in extra_args:
if not os.path.exists(item):
# item is not file/folder path
extra_args_new.append(item)
else:
# item is file/folder path
tests_path_list.append(item)
if len(tests_path_list) == 0:
# no testcase path was specified
logger.error(f"No valid testcase path in cli arguments: {extra_args}")
sys.exit(1)
testcase_path_list = main_make(tests_path_list)
if not testcase_path_list:
logger.error("No valid testcases found, exit 1.")
sys.exit(1)
if "--tb=short" not in extra_args_new:
extra_args_new.append("--tb=short")
extra_args_new.extend(testcase_path_list)
logger.info(f"start to run tests with pytest. HttpRunner version: {__version__}")
return pytest.main(extra_args_new)
def main():
"""API test: parse command line options and run commands."""
parser = argparse.ArgumentParser(description=__description__)
parser.add_argument(
"-V", "--version", dest="version", action="store_true", help="show version"
)
subparsers = parser.add_subparsers(help="sub-command help")
init_parser_run(subparsers)
sub_parser_make = init_make_parser(subparsers)
if len(sys.argv) == 1:
# httprunner
parser.print_help()
sys.exit(0)
elif len(sys.argv) == 2:
# print help for sub-commands
if sys.argv[1] in ["-V", "--version"]:
# httprunner -V
print(f"{__version__}")
elif sys.argv[1] in ["-h", "--help"]:
# httprunner -h
parser.print_help()
elif sys.argv[1] == "run":
# httprunner run
pytest.main(["-h"])
elif sys.argv[1] == "make":
# httprunner make
sub_parser_make.print_help()
sys.exit(0)
elif (
len(sys.argv) == 3 and sys.argv[1] == "run" and sys.argv[2] in ["-h", "--help"]
):
# httprunner run -h
pytest.main(["-h"])
sys.exit(0)
extra_args = []
if len(sys.argv) >= 2 and sys.argv[1] in ["run"]:
args, extra_args = parser.parse_known_args()
else:
args = parser.parse_args()
if args.version:
print(f"{__version__}")
sys.exit(0)
# set log level
try:
index = extra_args.index("--log-level")
if index < len(extra_args) - 1:
level = extra_args[index + 1]
else:
# not specify log level value
level = "INFO" # default
except ValueError:
level = "INFO" # default
init_logger(level)
if sys.argv[1] == "run":
sys.exit(main_run(extra_args))
elif sys.argv[1] == "make":
main_make(args.testcase_path)
def main_hrun_alias():
"""command alias
hrun = httprunner run
"""
if len(sys.argv) == 2:
if sys.argv[1] in ["-V", "--version"]:
# hrun -V
sys.argv = ["httprunner", "-V"]
elif sys.argv[1] in ["-h", "--help"]:
pytest.main(["-h"])
sys.exit(0)
else:
# hrun /path/to/testcase
sys.argv.insert(1, "run")
else:
sys.argv.insert(1, "run")
main()
def main_make_alias():
"""command alias
hmake = httprunner make
"""
sys.argv.insert(1, "make")
main()
if __name__ == "__main__":
main()

62
httprunner/cli_test.py Normal file

@@ -0,0 +1,62 @@
import io
import os
import sys
import unittest
import pytest
from httprunner import loader
from httprunner.cli import main, main_run
class TestCli(unittest.TestCase):
def setUp(self):
self.captured_output = io.StringIO()
sys.stdout = self.captured_output
def tearDown(self):
sys.stdout = sys.__stdout__ # Reset redirect.
def test_show_version(self):
sys.argv = ["hrun", "-V"]
with self.assertRaises(SystemExit) as cm:
main()
self.assertEqual(cm.exception.code, 0)
from httprunner import __version__
self.assertIn(__version__, self.captured_output.getvalue().strip())
def test_show_help(self):
sys.argv = ["hrun", "-h"]
with self.assertRaises(SystemExit) as cm:
main()
self.assertEqual(cm.exception.code, 0)
from httprunner import __description__
self.assertIn(__description__, self.captured_output.getvalue().strip())
def test_debug_pytest(self):
cwd = os.getcwd()
try:
os.chdir(os.path.join(cwd, "examples", "postman_echo"))
exit_code = pytest.main(
["-s", "request_methods/request_with_testcase_reference_test.py"]
)
self.assertEqual(exit_code, 0)
finally:
os.chdir(cwd)
def test_run_testcase_with_abnormal_path(self):
loader.project_meta = None
exit_code = main_run(["examples/data/a-b.c/2 3.yml"])
self.assertEqual(exit_code, 0)
self.assertTrue(os.path.exists("examples/data/a_b_c/__init__.py"))
self.assertTrue(os.path.exists("examples/data/debugtalk.py"))
self.assertTrue(os.path.exists("examples/data/a_b_c/T1_test.py"))
self.assertTrue(os.path.exists("examples/data/a_b_c/T2_3_test.py"))

238
httprunner/client.py Normal file

@@ -0,0 +1,238 @@
import json
import time
import requests
import urllib3
from loguru import logger
from requests import Request, Response
from requests.exceptions import (
InvalidSchema,
InvalidURL,
MissingSchema,
RequestException,
)
from httprunner.models import RequestData, ResponseData
from httprunner.models import SessionData, ReqRespData
from httprunner.utils import lower_dict_keys, omit_long_data
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class ApiResponse(Response):
def raise_for_status(self):
if hasattr(self, "error") and self.error:
raise self.error
Response.raise_for_status(self)
def get_req_resp_record(resp_obj: Response) -> ReqRespData:
"""get request and response info from Response() object."""
def log_print(req_or_resp, r_type):
msg = f"\n================== {r_type} details ==================\n"
for key, value in req_or_resp.dict().items():
if isinstance(value, dict) or isinstance(value, list):
value = json.dumps(value, indent=4, ensure_ascii=False)
msg += "{:<8} : {}\n".format(key, value)
logger.debug(msg)
# record actual request info
request_headers = dict(resp_obj.request.headers)
request_cookies = resp_obj.request._cookies.get_dict()
request_body = resp_obj.request.body
if request_body is not None:
try:
request_body = json.loads(request_body)
except json.JSONDecodeError:
# str: a=1&b=2
pass
except UnicodeDecodeError:
# bytes/bytearray: request body in protobuf
pass
except TypeError:
# neither str nor bytes/bytearray, e.g. <MultipartEncoder>
pass
request_content_type = lower_dict_keys(request_headers).get("content-type")
if request_content_type and "multipart/form-data" in request_content_type:
# upload file type
request_body = "upload file stream (OMITTED)"
request_data = RequestData(
method=resp_obj.request.method,
url=resp_obj.request.url,
headers=request_headers,
cookies=request_cookies,
body=request_body,
)
# log request details in debug mode
log_print(request_data, "request")
# record response info
resp_headers = dict(resp_obj.headers)
lower_resp_headers = lower_dict_keys(resp_headers)
content_type = lower_resp_headers.get("content-type", "")
if "image" in content_type:
# response is image type, record bytes content only
response_body = resp_obj.content
else:
try:
# try to record json data
response_body = resp_obj.json()
except ValueError:
# only record at most 512 text characters
resp_text = resp_obj.text
response_body = omit_long_data(resp_text)
response_data = ResponseData(
status_code=resp_obj.status_code,
cookies=resp_obj.cookies or {},
encoding=resp_obj.encoding,
headers=resp_headers,
content_type=content_type,
body=response_body,
)
# log response details in debug mode
log_print(response_data, "response")
req_resp_data = ReqRespData(request=request_data, response=response_data)
return req_resp_data
class HttpSession(requests.Session):
"""
Class for performing HTTP requests and holding (session-) cookies between requests (in order
to be able to log in and out of websites). Each request is logged so that HttpRunner can
display statistics.
This is a slightly extended version of `python-requests <http://python-requests.org>`_'s
:py:class:`requests.Session` class and mostly this class works exactly the same.
"""
def __init__(self):
super(HttpSession, self).__init__()
self.data = SessionData()
def update_last_req_resp_record(self, resp_obj):
"""
update request and response info from Response() object.
"""
# TODO: fix
self.data.req_resps.pop()
self.data.req_resps.append(get_req_resp_record(resp_obj))
def request(self, method, url, name=None, **kwargs):
"""
Constructs and sends a :py:class:`requests.Request`.
Returns :py:class:`requests.Response` object.
:param method:
method for the new :class:`Request` object.
:param url:
URL for the new :class:`Request` object.
:param name: (optional)
Placeholder, make compatible with Locust's HttpSession
:param params: (optional)
Dictionary or bytes to be sent in the query string for the :class:`Request`.
:param data: (optional)
Dictionary or bytes to send in the body of the :class:`Request`.
:param headers: (optional)
Dictionary of HTTP Headers to send with the :class:`Request`.
:param cookies: (optional)
Dict or CookieJar object to send with the :class:`Request`.
:param files: (optional)
Dictionary of ``'filename': file-like-objects`` for multipart encoding upload.
:param auth: (optional)
Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth.
:param timeout: (optional)
How long to wait for the server to send data before giving up, as a float, or \
a (`connect timeout, read timeout <user/advanced.html#timeouts>`_) tuple.
:type timeout: float or tuple
:param allow_redirects: (optional)
Set to True by default.
:type allow_redirects: bool
:param proxies: (optional)
Dictionary mapping protocol to the URL of the proxy.
:param stream: (optional)
whether to immediately download the response content. Defaults to ``False``.
:param verify: (optional)
if ``True``, the SSL cert will be verified. A CA_BUNDLE path can also be provided.
:param cert: (optional)
if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
"""
self.data = SessionData()
# timeout default to 120 seconds
kwargs.setdefault("timeout", 120)
# set stream to True, in order to get client/server IP/Port
kwargs["stream"] = True
start_timestamp = time.time()
response = self._send_request_safe_mode(method, url, **kwargs)
response_time_ms = round((time.time() - start_timestamp) * 1000, 2)
try:
client_ip, client_port = response.raw._connection.sock.getsockname()
self.data.address.client_ip = client_ip
self.data.address.client_port = client_port
logger.debug(f"client IP: {client_ip}, Port: {client_port}")
except Exception:
pass
try:
server_ip, server_port = response.raw._connection.sock.getpeername()
self.data.address.server_ip = server_ip
self.data.address.server_port = server_port
logger.debug(f"server IP: {server_ip}, Port: {server_port}")
except Exception:
pass
# get length of the response content
content_size = int(dict(response.headers).get("content-length") or 0)
# record the consumed time
self.data.stat.response_time_ms = response_time_ms
self.data.stat.elapsed_ms = response.elapsed.microseconds / 1000.0
self.data.stat.content_size = content_size
# record request and response histories, include 30X redirection
response_list = response.history + [response]
self.data.req_resps = [
get_req_resp_record(resp_obj) for resp_obj in response_list
]
try:
response.raise_for_status()
except RequestException as ex:
logger.error(f"{str(ex)}")
else:
logger.info(
f"status_code: {response.status_code}, "
f"response_time(ms): {response_time_ms} ms, "
f"response_length: {content_size} bytes"
)
return response
def _send_request_safe_mode(self, method, url, **kwargs):
"""
Send an HTTP request, and catch any exception that might occur due to connection problems.
Safe mode has been removed from requests 1.x.
"""
try:
return requests.Session.request(self, method, url, **kwargs)
except (MissingSchema, InvalidSchema, InvalidURL):
raise
except RequestException as ex:
resp = ApiResponse()
resp.error = ex
resp.status_code = 0 # with this status_code, content returns None
resp.request = Request(method, url).prepare()
return resp

73
httprunner/client_test.py Normal file

@@ -0,0 +1,73 @@
import unittest
from httprunner.client import HttpSession
from httprunner.utils import HTTP_BIN_URL
class TestHttpSession(unittest.TestCase):
def setUp(self):
self.session = HttpSession()
def test_request_http(self):
self.session.request("get", f"{HTTP_BIN_URL}/get")
address = self.session.data.address
self.assertGreater(len(address.server_ip), 0)
self.assertEqual(address.server_port, 80)
self.assertGreater(len(address.client_ip), 0)
self.assertGreater(address.client_port, 10000)
def test_request_https(self):
self.session.request("get", "https://postman-echo.com/get")
address = self.session.data.address
self.assertGreater(len(address.server_ip), 0)
self.assertEqual(address.server_port, 443)
self.assertGreater(len(address.client_ip), 0)
self.assertGreater(address.client_port, 10000)
def test_request_http_allow_redirects(self):
self.session.request(
"get",
f"{HTTP_BIN_URL}/redirect-to?url=https%3A%2F%2Fgithub.com",
allow_redirects=True,
)
address = self.session.data.address
self.assertNotEqual(address.server_ip, "N/A")
self.assertEqual(address.server_port, 443)
self.assertNotEqual(address.server_ip, "N/A")
self.assertGreater(address.client_port, 10000)
def test_request_https_allow_redirects(self):
self.session.request(
"get",
"https://postman-echo.com/redirect-to?url=https%3A%2F%2Fgithub.com",
allow_redirects=True,
)
address = self.session.data.address
self.assertNotEqual(address.server_ip, "N/A")
self.assertEqual(address.server_port, 443)
self.assertNotEqual(address.server_ip, "N/A")
self.assertGreater(address.client_port, 10000)
def test_request_http_not_allow_redirects(self):
self.session.request(
"get",
f"{HTTP_BIN_URL}/redirect-to?url=https%3A%2F%2Fgithub.com",
allow_redirects=False,
)
address = self.session.data.address
self.assertEqual(address.server_ip, "N/A")
self.assertEqual(address.server_port, 0)
self.assertEqual(address.client_ip, "N/A")
self.assertEqual(address.client_port, 0)
def test_request_https_not_allow_redirects(self):
self.session.request(
"get",
"https://postman-echo.com/redirect-to?url=https%3A%2F%2Fgithub.com",
allow_redirects=False,
)
address = self.session.data.address
self.assertEqual(address.server_ip, "N/A")
self.assertEqual(address.server_port, 0)
self.assertEqual(address.client_ip, "N/A")
self.assertEqual(address.client_port, 0)

385
httprunner/compat.py Normal file

@@ -0,0 +1,385 @@
"""
This module handles compatibility issues between testcase format v2, v3 and v4.
"""
import os
import sys
from typing import List, Dict, Text, Union, Any
from loguru import logger
from httprunner import exceptions
from httprunner.loader import load_project_meta, convert_relative_project_root_dir
from httprunner.parser import parse_data
from httprunner.utils import sort_dict_by_custom_order
def convert_variables(
raw_variables: Union[Dict, Text], test_path: Text
) -> Dict[Text, Any]:
if isinstance(raw_variables, Dict):
return raw_variables
elif isinstance(raw_variables, Text):
# get variables by function, e.g. ${get_variables()}
project_meta = load_project_meta(test_path)
variables = parse_data(raw_variables, {}, project_meta.functions)
return variables
else:
raise exceptions.TestCaseFormatError(
f"Invalid variables format: {raw_variables}"
)
def _convert_request(request: Dict) -> Dict:
if "body" in request:
content_type = ""
if "headers" in request and "Content-Type" in request["headers"]:
content_type = request["headers"]["Content-Type"]
if content_type.startswith("application/json"):
request["json"] = request.pop("body")
else:
request["data"] = request.pop("body")
return _sort_request_by_custom_order(request)
def _convert_jmespath(raw: Text) -> Text:
if not isinstance(raw, Text):
raise exceptions.TestCaseFormatError(f"Invalid jmespath extractor: {raw}")
# content.xx/json.xx => body.xx
if raw.startswith("content"):
raw = f"body{raw[len('content'):]}"
elif raw.startswith("json"):
raw = f"body{raw[len('json'):]}"
raw_list = raw.split(".")
for i, item in enumerate(raw_list):
item = item.strip('"')
if item.lower().startswith("content-") or item.lower() in ["user-agent"]:
# add quotes for some field in white list
# e.g. headers.Content-Type => headers."Content-Type"
raw_list[i] = f'"{item}"'
return ".".join(raw_list)
def _convert_extractors(extractors: Union[List, Dict]) -> Dict:
"""convert extract list(v2) to dict(v3)
Args:
extractors: [{"varA": "content.varA"}, {"varB": "json.varB"}]
Returns:
{"varA": "body.varA", "varB": "body.varB"}
"""
v3_extractors: Dict = {}
if isinstance(extractors, List):
# [{"varA": "content.varA"}, {"varB": "json.varB"}]
for extractor in extractors:
if not isinstance(extractor, Dict):
logger.error(f"Invalid extractor: {extractors}")
sys.exit(1)
for k, v in extractor.items():
v3_extractors[k] = v
elif isinstance(extractors, Dict):
# {"varA": "body.varA", "varB": "body.varB"}
v3_extractors = extractors
else:
logger.error(f"Invalid extractor: {extractors}")
sys.exit(1)
for k, v in v3_extractors.items():
v3_extractors[k] = _convert_jmespath(v)
return v3_extractors
def _convert_validators(validators: List) -> List:
for v in validators:
if "check" in v and "expect" in v:
# format1: {"check": "content.abc", "assert": "eq", "expect": 201}
v["check"] = _convert_jmespath(v["check"])
elif len(v) == 1:
# format2: {'eq': ['status_code', 201]}
comparator = list(v.keys())[0]
v[comparator][0] = _convert_jmespath(v[comparator][0])
return validators
def _sort_request_by_custom_order(request: Dict) -> Dict:
custom_order = [
"method",
"url",
"params",
"headers",
"cookies",
"data",
"json",
"files",
"timeout",
"allow_redirects",
"proxies",
"verify",
"stream",
"auth",
"cert",
]
return sort_dict_by_custom_order(request, custom_order)
def _sort_step_by_custom_order(step: Dict) -> Dict:
custom_order = [
"name",
"variables",
"request",
"testcase",
"setup_hooks",
"teardown_hooks",
"extract",
"validate",
"validate_script",
]
return sort_dict_by_custom_order(step, custom_order)
def _ensure_step_attachment(step: Dict) -> Dict:
test_dict = {
"name": step["name"],
}
if "request" in step:
test_dict["request"] = _convert_request(step["request"])
if "variables" in step:
test_dict["variables"] = step["variables"]
if "setup_hooks" in step:
test_dict["setup_hooks"] = step["setup_hooks"]
if "teardown_hooks" in step:
test_dict["teardown_hooks"] = step["teardown_hooks"]
if "extract" in step:
test_dict["extract"] = _convert_extractors(step["extract"])
if "export" in step:
test_dict["export"] = step["export"]
if "validate" in step:
if not isinstance(step["validate"], List):
raise exceptions.TestCaseFormatError(
f'Invalid teststep validate: {step["validate"]}'
)
test_dict["validate"] = _convert_validators(step["validate"])
if "validate_script" in step:
test_dict["validate_script"] = step["validate_script"]
return test_dict
def ensure_testcase_v4_api(api_content: Dict) -> Dict:
logger.info("convert api in v2/v3 to testcase format v4")
teststep = {
"request": _convert_request(api_content["request"]),
}
teststep.update(_ensure_step_attachment(api_content))
teststep = _sort_step_by_custom_order(teststep)
config = {"name": api_content["name"]}
extract_variable_names: List = list(teststep.get("extract", {}).keys())
if extract_variable_names:
config["export"] = extract_variable_names
return {
"config": config,
"teststeps": [teststep],
}
def ensure_testcase_v4(test_content: Dict) -> Dict:
logger.info("ensure compatibility with testcase format v2/v3")
    v4_content = {"config": test_content["config"], "teststeps": []}
    if "teststeps" not in test_content:
        logger.error(f"missing teststeps: {test_content}")
        sys.exit(1)
    if not isinstance(test_content["teststeps"], list):
        logger.error(
            f'teststeps should be list type, got {type(test_content["teststeps"])}: {test_content["teststeps"]}'
        )
        sys.exit(1)
    for step in test_content["teststeps"]:
        teststep = {}
        if "request" in step:
            pass
        elif "api" in step:
            teststep["testcase"] = step.pop("api")
        elif "testcase" in step:
            teststep["testcase"] = step.pop("testcase")
        else:
            raise exceptions.TestCaseFormatError(f"Invalid teststep: {step}")
        teststep.update(_ensure_step_attachment(step))
        teststep = _sort_step_by_custom_order(teststep)
        v4_content["teststeps"].append(teststep)
    return v4_content
def ensure_cli_args(args: List) -> List:
"""ensure compatibility with deprecated cli args in v2"""
# remove deprecated --failfast
if "--failfast" in args:
logger.warning("remove deprecated argument: --failfast")
args.pop(args.index("--failfast"))
# convert --report-file to --html
if "--report-file" in args:
logger.warning("replace deprecated argument --report-file with --html")
index = args.index("--report-file")
args[index] = "--html"
args.append("--self-contained-html")
# keep compatibility with --save-tests in v2
if "--save-tests" in args:
        logger.warning(
            "generate conftest.py to keep compatibility with --save-tests in v2"
        )
args.pop(args.index("--save-tests"))
_generate_conftest_for_summary(args)
return args
def _generate_conftest_for_summary(args: List):
for arg in args:
if os.path.exists(arg):
test_path = arg
# FIXME: several test paths maybe specified
break
else:
logger.error(f"No valid test path specified! \nargs: {args}")
sys.exit(1)
conftest_content = '''# NOTICE: Generated By HttpRunner.
import json
import os
import time
import pytest
from loguru import logger
from httprunner.utils import get_platform, ExtendJSONEncoder
@pytest.fixture(scope="session", autouse=True)
def session_fixture(request):
"""setup and teardown each task"""
logger.info("start running testcases ...")
start_at = time.time()
yield
logger.info("task finished, generate task summary for --save-tests")
summary = {
"success": True,
"stat": {
"testcases": {"total": 0, "success": 0, "fail": 0},
"teststeps": {"total": 0, "failures": 0, "successes": 0},
},
"time": {"start_at": start_at, "duration": time.time() - start_at},
"platform": get_platform(),
"details": [],
}
for item in request.node.items:
testcase_summary = item.instance.get_summary()
summary["success"] &= testcase_summary.success
summary["stat"]["testcases"]["total"] += 1
summary["stat"]["teststeps"]["total"] += len(testcase_summary.step_results)
if testcase_summary.success:
summary["stat"]["testcases"]["success"] += 1
summary["stat"]["teststeps"]["successes"] += len(
testcase_summary.step_results
)
else:
summary["stat"]["testcases"]["fail"] += 1
summary["stat"]["teststeps"]["successes"] += (
len(testcase_summary.step_results) - 1
)
summary["stat"]["teststeps"]["failures"] += 1
testcase_summary_json = testcase_summary.dict()
testcase_summary_json["records"] = testcase_summary_json.pop("step_results")
summary["details"].append(testcase_summary_json)
summary_path = r"{{SUMMARY_PATH_PLACEHOLDER}}"
summary_dir = os.path.dirname(summary_path)
os.makedirs(summary_dir, exist_ok=True)
with open(summary_path, "w", encoding="utf-8") as f:
json.dump(summary, f, indent=4, ensure_ascii=False, cls=ExtendJSONEncoder)
logger.info(f"generated task summary: {summary_path}")
'''
project_meta = load_project_meta(test_path)
project_root_dir = project_meta.RootDir
conftest_path = os.path.join(project_root_dir, "conftest.py")
test_path = os.path.abspath(test_path)
logs_dir_path = os.path.join(project_root_dir, "logs")
test_path_relative_path = convert_relative_project_root_dir(test_path)
    if os.path.isdir(test_path):
        file_folder_path = os.path.join(logs_dir_path, test_path_relative_path)
        dump_file_name = "all.summary.json"
    else:
        file_relative_folder_path, test_file = os.path.split(test_path_relative_path)
        file_folder_path = os.path.join(logs_dir_path, file_relative_folder_path)
        test_file_name, _ = os.path.splitext(test_file)
        dump_file_name = f"{test_file_name}.summary.json"
    summary_path = os.path.join(file_folder_path, dump_file_name)
conftest_content = conftest_content.replace(
"{{SUMMARY_PATH_PLACEHOLDER}}", summary_path
)
dir_path = os.path.dirname(conftest_path)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
with open(conftest_path, "w", encoding="utf-8") as f:
f.write(conftest_content)
logger.info("generated conftest.py to generate summary.json")
def ensure_path_sep(path: Text) -> Text:
"""ensure compatibility with different path separators of Linux and Windows"""
if "/" in path:
path = os.sep.join(path.split("/"))
if "\\" in path:
path = os.sep.join(path.split("\\"))
return path
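The extractor-path rewriting in compat.py can be exercised in isolation. Below is a standalone sketch reimplementing the core of `_convert_jmespath` (no httprunner import required), so the v2→v4 rules are easy to try out; the white list of header fields to quote mirrors the one in the code above.

```python
def convert_jmespath(raw: str) -> str:
    # content.xx / json.xx => body.xx
    if raw.startswith("content"):
        raw = "body" + raw[len("content"):]
    elif raw.startswith("json"):
        raw = "body" + raw[len("json"):]
    # quote hyphenated header fields: headers.Content-Type => headers."Content-Type"
    parts = raw.split(".")
    for i, item in enumerate(parts):
        item = item.strip('"')
        if item.lower().startswith("content-") or item.lower() in ["user-agent"]:
            parts[i] = f'"{item}"'
    return ".".join(parts)

print(convert_jmespath("content.abc"))           # body.abc
print(convert_jmespath("headers.Content-Type"))  # headers."Content-Type"
```

Already-quoted fields pass through unchanged, because each dotted segment is stripped of quotes before being re-quoted.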

httprunner/compat_test.py Normal file

@@ -0,0 +1,266 @@
import os
import unittest
from httprunner import compat, exceptions, loader
from httprunner.utils import HTTP_BIN_URL
class TestCompat(unittest.TestCase):
def setUp(self):
loader.project_meta = None
def test_convert_variables(self):
raw_variables = {"var1": 1, "var2": "val2"}
self.assertEqual(
compat.convert_variables(raw_variables, "examples/data/a-b.c/1.yml"),
{"var1": 1, "var2": "val2"},
)
raw_variables = "${get_variables()}"
self.assertEqual(
compat.convert_variables(raw_variables, "examples/data/a-b.c/1.yml"),
{"foo1": "session_bar1"},
)
with self.assertRaises(exceptions.TestCaseFormatError):
raw_variables = [{"var1": 1}, {"var2": "val2", "var3": 3}]
compat.convert_variables(raw_variables, "examples/data/a-b.c/1.yml")
with self.assertRaises(exceptions.TestCaseFormatError):
compat.convert_variables(None, "examples/data/a-b.c/1.yml")
def test_convert_request(self):
request_with_json_body = {
"method": "POST",
"url": "https://postman-echo.com/post",
"headers": {"Content-Type": "application/json"},
"body": {"k1": "v1", "k2": "v2"},
}
self.assertEqual(
compat._convert_request(request_with_json_body),
{
"method": "POST",
"url": "https://postman-echo.com/post",
"headers": {"Content-Type": "application/json"},
"json": {"k1": "v1", "k2": "v2"},
},
)
request_with_text_body = {
"method": "POST",
"url": "https://postman-echo.com/post",
"headers": {"Content-Type": "text/plain"},
"body": "have a nice day",
}
self.assertEqual(
compat._convert_request(request_with_text_body),
{
"method": "POST",
"url": "https://postman-echo.com/post",
"headers": {"Content-Type": "text/plain"},
"data": "have a nice day",
},
)
def test_convert_jmespath(self):
self.assertEqual(compat._convert_jmespath("content.abc"), "body.abc")
self.assertEqual(compat._convert_jmespath("json.abc"), "body.abc")
self.assertEqual(
compat._convert_jmespath("headers.Content-Type"), 'headers."Content-Type"'
)
self.assertEqual(
compat._convert_jmespath("headers.User-Agent"), 'headers."User-Agent"'
)
self.assertEqual(
compat._convert_jmespath('headers."Content-Type"'), 'headers."Content-Type"'
)
self.assertEqual(
compat._convert_jmespath("body.users[-1]"),
"body.users[-1]",
)
self.assertEqual(
compat._convert_jmespath("body.result.WorkNode_-1"),
"body.result.WorkNode_-1",
)
def test_convert_extractors(self):
self.assertEqual(
compat._convert_extractors(
[{"varA": "content.varA"}, {"varB": "json.varB"}]
),
{"varA": "body.varA", "varB": "body.varB"},
)
self.assertEqual(
compat._convert_extractors([{"varA": "content[0].varA"}]),
{"varA": "body[0].varA"},
)
self.assertEqual(
compat._convert_extractors({"varA": "content[0].varA"}),
{"varA": "body[0].varA"},
)
def test_convert_validators(self):
self.assertEqual(
compat._convert_validators(
[{"check": "content.abc", "assert": "eq", "expect": 201}]
),
[{"check": "body.abc", "assert": "eq", "expect": 201}],
)
self.assertEqual(
compat._convert_validators([{"eq": ["content.abc", 201]}]),
[{"eq": ["body.abc", 201]}],
)
self.assertEqual(
compat._convert_validators([{"eq": ["content[0].name", 201]}]),
[{"eq": ["body[0].name", 201]}],
)
def test_ensure_testcase_v4_api(self):
api_content = {
"name": "get with params",
"request": {
"method": "GET",
"url": "/get",
"params": {"foo1": "bar1", "foo2": "bar2"},
"headers": {"User-Agent": "HttpRunner/3.0"},
},
"extract": [{"varA": "content.varA"}, {"user_agent": "headers.User-Agent"}],
"validate": [{"eq": ["content.varB", 200]}, {"lt": ["json[0].varC", 0]}],
}
self.assertEqual(
compat.ensure_testcase_v4_api(api_content),
{
"config": {
"name": "get with params",
"export": ["varA", "user_agent"],
},
"teststeps": [
{
"name": "get with params",
"request": {
"method": "GET",
"url": "/get",
"params": {"foo1": "bar1", "foo2": "bar2"},
"headers": {"User-Agent": "HttpRunner/3.0"},
},
"extract": {
"varA": "body.varA",
"user_agent": 'headers."User-Agent"',
},
"validate": [
{"eq": ["body.varB", 200]},
{"lt": ["body[0].varC", 0]},
],
}
],
},
)
def test_ensure_testcase_v4(self):
testcase_content = {
"config": {"name": "xxx", "base_url": HTTP_BIN_URL},
"teststeps": [
{
"name": "get with params",
"request": {
"method": "GET",
"url": "/get",
"params": {"foo1": "bar1", "foo2": "bar2"},
"headers": {"User-Agent": "HttpRunner/3.0"},
},
"extract": [
{"varA": "content.varA"},
{"user_agent": "headers.User-Agent"},
],
"validate": [
{"eq": ["content.varB", 200]},
{"lt": ["json[0].varC", 0]},
],
}
],
}
self.assertEqual(
compat.ensure_testcase_v4(testcase_content),
{
"config": {"name": "xxx", "base_url": HTTP_BIN_URL},
"teststeps": [
{
"name": "get with params",
"request": {
"method": "GET",
"url": "/get",
"params": {"foo1": "bar1", "foo2": "bar2"},
"headers": {"User-Agent": "HttpRunner/3.0"},
},
"extract": {
"varA": "body.varA",
"user_agent": 'headers."User-Agent"',
},
"validate": [
{"eq": ["body.varB", 200]},
{"lt": ["body[0].varC", 0]},
],
}
],
},
)
def test_ensure_cli_args(self):
args1 = ["examples/postman_echo/request_methods/hardcode.yml", "--failfast"]
self.assertEqual(
compat.ensure_cli_args(args1),
["examples/postman_echo/request_methods/hardcode.yml"],
)
args2 = ["examples/postman_echo/request_methods/hardcode.yml", "--save-tests"]
self.assertEqual(
compat.ensure_cli_args(args2),
["examples/postman_echo/request_methods/hardcode.yml"],
)
self.assertTrue(os.path.isfile("examples/postman_echo/conftest.py"))
args3 = [
"examples/postman_echo/request_methods/hardcode.yml",
"--report-file",
"report.html",
]
self.assertEqual(
compat.ensure_cli_args(args3),
[
"examples/postman_echo/request_methods/hardcode.yml",
"--html",
"report.html",
"--self-contained-html",
],
)
args4 = [
"examples/postman_echo/request_methods/hardcode.yml",
"--failfast",
"--save-tests",
"--report-file",
"report.html",
]
self.assertEqual(
compat.ensure_cli_args(args4),
[
"examples/postman_echo/request_methods/hardcode.yml",
"--html",
"report.html",
"--self-contained-html",
],
)
def test_ensure_file_path(self):
self.assertEqual(
compat.ensure_path_sep("demo\\test.yml"), os.sep.join(["demo", "test.yml"])
)
self.assertEqual(
compat.ensure_path_sep(os.path.join(os.getcwd(), "demo\\test.yml")),
os.path.join(os.getcwd(), os.sep.join(["demo", "test.yml"])),
)
self.assertEqual(
compat.ensure_path_sep("demo/test.yml"), os.sep.join(["demo", "test.yml"])
)
self.assertEqual(
compat.ensure_path_sep(os.path.join(os.getcwd(), "demo/test.yml")),
os.path.join(os.getcwd(), os.sep.join(["demo", "test.yml"])),
)

httprunner/config.py Normal file

@@ -0,0 +1,138 @@
import copy
import inspect
from typing import Text
from httprunner.models import TConfig, TConfigThrift, TConfigDB, ProtoType, VariablesMapping
class ConfigThrift(object):
def __init__(self, config: TConfig) -> None:
self.__config = config
self.__config.thrift = TConfigThrift()
def psm(self, psm: Text) -> "ConfigThrift":
self.__config.thrift.psm = psm
return self
def env(self, env: Text) -> "ConfigThrift":
self.__config.thrift.env = env
return self
def cluster(self, cluster: Text) -> "ConfigThrift":
self.__config.thrift.cluster = cluster
return self
def service_name(self, service_name: Text) -> "ConfigThrift":
self.__config.thrift.service_name = service_name
return self
def method(self, method: Text) -> "ConfigThrift":
self.__config.thrift.method = method
return self
    def ip(self, ip: Text) -> "ConfigThrift":
        self.__config.thrift.ip = ip
        return self
def port(self, port: int) -> "ConfigThrift":
self.__config.thrift.port = port
return self
def timeout(self, timeout: int) -> "ConfigThrift":
self.__config.thrift.timeout = timeout
return self
def proto_type(self, proto_type: ProtoType) -> "ConfigThrift":
self.__config.thrift.proto_type = proto_type
return self
def trans_type(self, trans_type: ProtoType) -> "ConfigThrift":
self.__config.thrift.trans_type = trans_type
return self
def struct(self) -> TConfig:
return self.__config
class ConfigDB(object):
def __init__(self, config: TConfig):
self.__config = config
self.__config.db = TConfigDB()
def psm(self, psm):
self.__config.db.psm = psm
return self
def user(self, user):
self.__config.db.user = user
return self
def password(self, password):
self.__config.db.password = password
return self
def ip(self, ip):
self.__config.db.ip = ip
return self
def port(self, port: int):
self.__config.db.port = port
return self
def database(self, database: Text):
self.__config.db.database = database
return self
def struct(self) -> TConfig:
return self.__config
class Config(object):
def __init__(self, name: Text) -> None:
caller_frame = inspect.stack()[1]
self.__name: Text = name
self.__base_url: Text = ""
self.__variables: VariablesMapping = {}
self.__config = TConfig(name=name, path=caller_frame.filename)
@property
def name(self) -> Text:
return self.__config.name
@property
def path(self) -> Text:
return self.__config.path
def variables(self, **variables) -> "Config":
self.__variables.update(variables)
return self
def base_url(self, base_url: Text) -> "Config":
self.__base_url = base_url
return self
def verify(self, verify: bool) -> "Config":
self.__config.verify = verify
return self
def export(self, *export_var_name: Text) -> "Config":
self.__config.export.extend(export_var_name)
self.__config.export = list(set(self.__config.export))
return self
def struct(self) -> TConfig:
self.__init()
return self.__config
def thrift(self) -> ConfigThrift:
self.__init()
return ConfigThrift(self.__config)
def db(self) -> ConfigDB:
self.__init()
return ConfigDB(self.__config)
def __init(self) -> None:
self.__config.name = self.__name
self.__config.base_url = self.__base_url
self.__config.variables = copy.copy(self.__variables)
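`Config` above is a fluent builder: each setter mutates an internal model and returns `self`, and `struct()` finalizes the model. A minimal standalone sketch of the pattern, using a hypothetical `TConfig` stand-in rather than httprunner's real pydantic model:

```python
class TConfig:
    """Stand-in for the real config model (assumption for illustration)."""
    def __init__(self, name=""):
        self.name = name
        self.base_url = ""

class Config:
    def __init__(self, name):
        self.__config = TConfig(name)

    def base_url(self, base_url):
        # each setter returns self so calls can be chained
        self.__config.base_url = base_url
        return self

    def struct(self):
        # finalize and hand back the underlying model
        return self.__config

cfg = Config("request methods").base_url("https://postman-echo.com").struct()
print(cfg.name, cfg.base_url)
```

The same chaining style is what lets testcases read declaratively, e.g. `Config("name").base_url(...).verify(False).struct()` in the real API.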


@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
import datetime
import json
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
class DBEngine(object):
def __init__(self, db_uri):
"""
db_uri = f'mysql+pymysql://{username}:{password}@{host}:{port}/{database}?charset=utf8mb4'
"""
engine = create_engine(db_uri)
self.session = sessionmaker(bind=engine, autocommit=True)()
@staticmethod
def value_decode(row: dict):
"""
Try to decode value of table
datetime.datetime-->string
datetime.date-->string
json str-->dict
:param row:
:return:
"""
for k, v in row.items():
if isinstance(v, datetime.datetime):
row[k] = v.strftime("%Y-%m-%d %H:%M:%S")
elif isinstance(v, datetime.date):
row[k] = v.strftime("%Y-%m-%d")
elif isinstance(v, str):
try:
row[k] = json.loads(v)
except ValueError:
pass
def _fetch(self, query, size=-1, commit=True):
query = query.strip()
result = self.session.execute(query)
if query.upper()[:6] == "SELECT":
if size < 0:
al = result.fetchall()
al = [dict(el) for el in al]
for el in al:
self.value_decode(el)
return al or None
            elif size == 1:
                row = result.fetchone()
                if row is None:
                    return None
                one = dict(row)
                self.value_decode(one)
                return one
else:
mny = result.fetchmany(size)
mny = [dict(el) for el in mny]
for el in mny:
self.value_decode(el)
return mny or None
elif query.upper()[:6] in ("UPDATE", "DELETE", "INSERT"):
return {"rowcount": result.rowcount}
def fetchone(self, query, commit=True):
return self._fetch(query, size=1, commit=commit)
def fetchmany(self, query, size, commit=True):
return self._fetch(query=query, size=size, commit=commit)
def fetchall(self, query, commit=True):
return self._fetch(query=query, size=-1, commit=commit)
def insert(self, query, commit=True):
return self._fetch(query=query, commit=commit)
def delete(self, query, commit=True):
return self._fetch(query=query, commit=commit)
def update(self, query, commit=True):
return self._fetch(query=query, commit=commit)
if __name__ == "__main__":
# db = DBEngine("mysql+pymysql://xxxxx:xxxxx@10.0.0.1:3306/dbname?charset=utf8mb4")
db = DBEngine("sqlite:////Users/xxx/HttpRunner/examples/data/sqlite.db")
    print(db.fetchmany("select * from student", 5))
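The `value_decode` normalization above (datetimes to strings, JSON strings to dicts) can be demonstrated without SQLAlchemy. A hedged sketch using only the stdlib `sqlite3` module on an in-memory database, with the same decode logic inlined:

```python
import datetime
import json
import sqlite3

def value_decode(row: dict):
    # same normalization as DBEngine.value_decode above
    for k, v in row.items():
        if isinstance(v, datetime.datetime):
            row[k] = v.strftime("%Y-%m-%d %H:%M:%S")
        elif isinstance(v, str):
            try:
                # json.JSONDecodeError is a subclass of ValueError
                row[k] = json.loads(v)
            except ValueError:
                pass

conn = sqlite3.connect(":memory:")
conn.execute("create table student (id integer, profile text)")
conn.execute('insert into student values (1, \'{"name": "test1"}\')')
cur = conn.execute("select * from student")
columns = [c[0] for c in cur.description]
row = dict(zip(columns, cur.fetchone()))
value_decode(row)
print(row)
```

The `profile` column round-trips from a JSON string into a dict, which is what makes jmespath-style assertions on nested fields possible downstream.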

httprunner/exceptions.py Normal file

@@ -0,0 +1,92 @@
""" failure type exceptions
these exceptions will mark test as failure
"""
class MyBaseFailure(Exception):
pass
class ParseTestsFailure(MyBaseFailure):
pass
class ValidationFailure(MyBaseFailure):
pass
class ExtractFailure(MyBaseFailure):
pass
class SetupHooksFailure(MyBaseFailure):
pass
class TeardownHooksFailure(MyBaseFailure):
pass
""" error type exceptions
these exceptions will mark test as error
"""
class MyBaseError(Exception):
pass
class FileFormatError(MyBaseError):
pass
class TestCaseFormatError(FileFormatError):
pass
class TestSuiteFormatError(FileFormatError):
pass
class ParamsError(MyBaseError):
pass
class NotFoundError(MyBaseError):
pass
class FileNotFound(FileNotFoundError, NotFoundError):
pass
class FunctionNotFound(NotFoundError):
pass
class VariableNotFound(NotFoundError):
pass
class EnvNotFound(NotFoundError):
pass
class CSVNotFound(NotFoundError):
pass
class ApiNotFound(NotFoundError):
pass
class TestcaseNotFound(NotFoundError):
pass
class SummaryEmpty(MyBaseError):
"""test result summary data is empty"""
class SqlMethodNotSupport(MyBaseError):
pass
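The split between the two exception roots matters at run time: `MyBaseFailure` subclasses mark a test as failed (an assertion did not hold), while `MyBaseError` subclasses mark it as errored (the test could not run at all). A minimal sketch of dispatching on the two roots:

```python
class MyBaseFailure(Exception):
    pass

class ValidationFailure(MyBaseFailure):
    pass

class MyBaseError(Exception):
    pass

class FileFormatError(MyBaseError):
    pass

def run_step(exc):
    try:
        raise exc
    except MyBaseFailure:
        return "failure"  # assertion did not hold
    except MyBaseError:
        return "error"    # test could not run at all

print(run_step(ValidationFailure("status_code != 200")))  # failure
print(run_step(FileFormatError("bad yaml")))              # error
```

Because every concrete exception inherits from exactly one root, a runner only needs the two `except` clauses to classify outcomes.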


@@ -0,0 +1,2 @@
# NOTICE:
# This file should not be deleted, otherwise ImportError will be raised in Python 2.7 when importing the extension


@@ -0,0 +1,178 @@
""" upload test extension.
If you want to use this extension, you should install the following dependencies first.
- requests_toolbelt
- filetype
Then you can write upload test script as below:
- test:
name: upload file
request:
url: https://httpbin.org/upload
method: POST
headers:
Cookie: session=AAA-BBB-CCC
upload:
file: "data/file_to_upload"
field1: "value1"
field2: "value2"
validate:
- eq: ["status_code", 200]
For compatibility, you can also write upload test script in old way:
- test:
name: upload file
variables:
file: "data/file_to_upload"
field1: "value1"
field2: "value2"
m_encoder: ${multipart_encoder(file=$file, field1=$field1, field2=$field2)}
request:
url: https://httpbin.org/upload
method: POST
headers:
Content-Type: ${multipart_content_type($m_encoder)}
Cookie: session=AAA-BBB-CCC
data: $m_encoder
validate:
- eq: ["status_code", 200]
"""
import os
import sys
from typing import Text
from httprunner.models import VariablesMapping, FunctionsMapping, TStep
from httprunner.parser import parse_data
from loguru import logger
try:
import filetype
from requests_toolbelt import MultipartEncoder
UPLOAD_READY = True
except ModuleNotFoundError:
UPLOAD_READY = False
def ensure_upload_ready():
if UPLOAD_READY:
return
msg = """
    uploader extension dependencies are not installed, install them first and try again.
install with pip:
$ pip install requests_toolbelt filetype
or you can install httprunner with optional upload dependencies:
$ pip install "httprunner[upload]"
"""
logger.error(msg)
sys.exit(1)
def prepare_upload_step(
step: TStep, step_variables: VariablesMapping, functions: FunctionsMapping
):
"""preprocess for upload test
replace `upload` info with MultipartEncoder
Args:
step: teststep
{
"variables": {},
"request": {
"url": "https://httpbin.org/upload",
"method": "POST",
"headers": {
"Cookie": "session=AAA-BBB-CCC"
},
"upload": {
"file": "data/file_to_upload"
"md5": "123"
}
}
}
functions: functions mapping
"""
if not step.request.upload:
return
# parse upload info
step.request.upload = parse_data(step.request.upload, step_variables, functions)
ensure_upload_ready()
params_list = []
for key, value in step.request.upload.items():
step_variables[key] = value
params_list.append(f"{key}=${key}")
params_str = ", ".join(params_list)
step_variables["m_encoder"] = "${multipart_encoder(" + params_str + ")}"
step.request.headers["Content-Type"] = "${multipart_content_type($m_encoder)}"
step.request.data = "$m_encoder"
def multipart_encoder(**kwargs):
"""initialize MultipartEncoder with uploading fields.
Returns:
MultipartEncoder: initialized MultipartEncoder object
"""
def get_filetype(file_path):
file_type = filetype.guess(file_path)
if file_type:
return file_type.mime
else:
return "text/html"
ensure_upload_ready()
fields_dict = {}
for key, value in kwargs.items():
if os.path.isabs(value):
# value is absolute file path
_file_path = value
is_exists_file = os.path.isfile(value)
else:
# value is not absolute file path, check if it is relative file path
from httprunner.loader import load_project_meta
project_meta = load_project_meta("")
_file_path = os.path.join(project_meta.RootDir, value)
is_exists_file = os.path.isfile(_file_path)
if is_exists_file:
# value is file path to upload
filename = os.path.basename(_file_path)
mime_type = get_filetype(_file_path)
# TODO: fix ResourceWarning for unclosed file
file_handler = open(_file_path, "rb")
fields_dict[key] = (filename, file_handler, mime_type)
else:
fields_dict[key] = value
return MultipartEncoder(fields=fields_dict)
def multipart_content_type(m_encoder) -> Text:
"""prepare Content-Type for request headers
Args:
m_encoder: MultipartEncoder object
Returns:
content type
"""
ensure_upload_ready()
return m_encoder.content_type
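`get_filetype` above falls back to `text/html` when the `filetype` package cannot identify a file from its content. A comparable stdlib-only sketch using `mimetypes`, which guesses from the file extension rather than the content (so behavior differs for extensionless files):

```python
import mimetypes

def guess_mime(file_path: str) -> str:
    # extension-based guess; fall back to text/html like get_filetype above
    mime, _ = mimetypes.guess_type(file_path)
    return mime or "text/html"

print(guess_mime("photo.png"))  # image/png
print(guess_mime("README"))     # no extension -> fallback text/html
```

Content-based detection (as with `filetype.guess`) is more robust for uploads, since it works even when the file name carries no extension; the stdlib version trades that robustness for zero extra dependencies.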

httprunner/loader.py Normal file

@@ -0,0 +1,432 @@
import csv
import importlib
import json
import os
import sys
import types
from typing import Callable, Dict, List, Text, Tuple, Union
import yaml
from loguru import logger
from pydantic import ValidationError
from httprunner import builtin, exceptions, utils
from httprunner.models import ProjectMeta, TestCase
project_meta: Union[ProjectMeta, None] = None
def _load_yaml_file(yaml_file: Text) -> Dict:
"""load yaml file and check file content format"""
with open(yaml_file, mode="rb") as stream:
try:
yaml_content = yaml.load(stream, Loader=yaml.FullLoader)
except yaml.YAMLError as ex:
err_msg = f"YAMLError:\nfile: {yaml_file}\nerror: {ex}"
logger.error(err_msg)
            raise exceptions.FileFormatError(err_msg)
return yaml_content
def _load_json_file(json_file: Text) -> Dict:
"""load json file and check file content format"""
with open(json_file, mode="rb") as data_file:
try:
json_content = json.load(data_file)
except json.JSONDecodeError as ex:
err_msg = f"JSONDecodeError:\nfile: {json_file}\nerror: {ex}"
raise exceptions.FileFormatError(err_msg)
return json_content
def load_test_file(test_file: Text) -> Dict:
"""load testcase/testsuite file content"""
if not os.path.isfile(test_file):
raise exceptions.FileNotFound(f"test file not exists: {test_file}")
file_suffix = os.path.splitext(test_file)[1].lower()
if file_suffix == ".json":
test_file_content = _load_json_file(test_file)
elif file_suffix in [".yaml", ".yml"]:
test_file_content = _load_yaml_file(test_file)
else:
# '' or other suffix
raise exceptions.FileFormatError(
f"testcase/testsuite file should be YAML/JSON format, invalid format file: {test_file}"
)
return test_file_content
def load_testcase(testcase: Dict) -> TestCase:
try:
# validate with pydantic TestCase model
testcase_obj = TestCase.parse_obj(testcase)
except ValidationError as ex:
err_msg = f"TestCase ValidationError:\nerror: {ex}\ncontent: {testcase}"
raise exceptions.TestCaseFormatError(err_msg)
return testcase_obj
def load_testcase_file(testcase_file: Text) -> TestCase:
"""load testcase file and validate with pydantic model"""
testcase_content = load_test_file(testcase_file)
testcase_obj = load_testcase(testcase_content)
testcase_obj.config.path = testcase_file
return testcase_obj
def load_dot_env_file(dot_env_path: Text) -> Dict:
"""load .env file.
Args:
dot_env_path (str): .env file path
Returns:
dict: environment variables mapping
{
"UserName": "debugtalk",
"Password": "123456",
"PROJECT_KEY": "ABCDEFGH"
}
Raises:
exceptions.FileFormatError: If .env file format is invalid.
"""
if not os.path.isfile(dot_env_path):
return {}
logger.info(f"Loading environment variables from {dot_env_path}")
env_variables_mapping = {}
with open(dot_env_path, mode="rb") as fp:
for line in fp:
# maxsplit=1
line = line.strip()
if not len(line) or line.startswith(b"#"):
continue
if b"=" in line:
variable, value = line.split(b"=", 1)
elif b":" in line:
variable, value = line.split(b":", 1)
else:
raise exceptions.FileFormatError(".env format error")
env_variables_mapping[
variable.strip().decode("utf-8")
] = value.strip().decode("utf-8")
utils.set_os_environ(env_variables_mapping)
return env_variables_mapping
def load_csv_file(csv_file: Text) -> List[Dict]:
"""load csv file and check file content format
Args:
csv_file (str): csv file path, csv file content is like below:
Returns:
list: list of parameters, each parameter is in dict format
Examples:
>>> cat csv_file
username,password
test1,111111
test2,222222
test3,333333
>>> load_csv_file(csv_file)
[
{'username': 'test1', 'password': '111111'},
{'username': 'test2', 'password': '222222'},
{'username': 'test3', 'password': '333333'}
]
"""
if not os.path.isabs(csv_file):
global project_meta
if project_meta is None:
raise exceptions.MyBaseFailure("load_project_meta() has not been called!")
# make compatible with Windows/Linux
csv_file = os.path.join(project_meta.RootDir, *csv_file.split("/"))
if not os.path.isfile(csv_file):
# file path not exist
raise exceptions.CSVNotFound(csv_file)
csv_content_list = []
with open(csv_file, encoding="utf-8") as csvfile:
reader = csv.DictReader(csvfile)
for row in reader:
csv_content_list.append(row)
return csv_content_list
def load_folder_files(folder_path: Text, recursive: bool = True) -> List:
"""load folder path, return all files endswith .yml/.yaml/.json/_test.py in list.
Args:
folder_path (str): specified folder path to load
recursive (bool): load files recursively if True
Returns:
list: files endswith yml/yaml/json
"""
if isinstance(folder_path, (list, set)):
files = []
for path in set(folder_path):
files.extend(load_folder_files(path, recursive))
return files
if not os.path.exists(folder_path):
return []
file_list = []
for dirpath, dirnames, filenames in os.walk(folder_path):
filenames_list = []
for filename in filenames:
if not filename.lower().endswith((".yml", ".yaml", ".json", "_test.py")):
continue
filenames_list.append(filename)
for filename in filenames_list:
file_path = os.path.join(dirpath, filename)
file_list.append(file_path)
if not recursive:
break
return file_list
def load_module_functions(module) -> Dict[Text, Callable]:
"""load python module functions.
Args:
module: python module
Returns:
dict: functions mapping for specified python module
{
"func1_name": func1,
"func2_name": func2
}
"""
module_functions = {}
for name, item in vars(module).items():
if isinstance(item, types.FunctionType):
module_functions[name] = item
return module_functions
def load_builtin_functions() -> Dict[Text, Callable]:
"""load builtin module functions"""
return load_module_functions(builtin)
def locate_file(start_path: Text, file_name: Text) -> Text:
"""locate filename and return absolute file path.
searching is recursive upward until the system root dir is reached.
Args:
file_name (str): target file name to locate
start_path (str): start locating path, may be a file path or a directory path
Returns:
str: located absolute file path
Raises:
exceptions.FileNotFound: if the file cannot be located.
"""
if os.path.isfile(start_path):
start_dir_path = os.path.dirname(start_path)
elif os.path.isdir(start_path):
start_dir_path = start_path
else:
raise exceptions.FileNotFound(f"invalid path: {start_path}")
file_path = os.path.join(start_dir_path, file_name)
if os.path.isfile(file_path):
# ensure absolute
return os.path.abspath(file_path)
# system root dir
# Windows, e.g. 'E:\\'
# Linux/Darwin, '/'
parent_dir = os.path.dirname(start_dir_path)
if parent_dir == start_dir_path:
raise exceptions.FileNotFound(f"{file_name} not found in {start_path}")
# locate recursive upward
return locate_file(parent_dir, file_name)
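The upward search in locate_file terminates because os.path.dirname of a root directory returns the root itself. A self-contained sketch of the same loop, exercised against a temporary tree instead of a real project:

```python
import os
import tempfile

# Recreate the upward search: check the current directory, then walk parent
# directories until the target is found or the filesystem root is reached
# (at the root, dirname becomes a no-op and the search stops).
def locate_upward(start_dir, file_name):
    candidate = os.path.join(start_dir, file_name)
    if os.path.isfile(candidate):
        return os.path.abspath(candidate)
    parent = os.path.dirname(start_dir)
    if parent == start_dir:  # system root, e.g. '/' or 'E:\\'
        return None
    return locate_upward(parent, file_name)

with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "debugtalk.py"), "w").close()
    nested = os.path.join(root, "a", "b")
    os.makedirs(nested)
    found = locate_upward(nested, "debugtalk.py")
    assert found == os.path.join(root, "debugtalk.py")
```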
def locate_debugtalk_py(start_path: Text) -> Text:
"""locate debugtalk.py file
Args:
start_path (str): start locating path,
may be a testcase file path or a directory path
Returns:
str: debugtalk.py file path, None if not found
"""
try:
# locate debugtalk.py file.
debugtalk_path = locate_file(start_path, "debugtalk.py")
except exceptions.FileNotFound:
debugtalk_path = None
return debugtalk_path
def locate_project_root_directory(test_path: Text) -> Tuple[Text, Text]:
"""locate debugtalk.py path as project root directory
Args:
test_path: specified testfile path
Returns:
(str, str): debugtalk.py path, project_root_directory
"""
def prepare_path(path):
if not os.path.exists(path):
err_msg = f"path not exist: {path}"
logger.error(err_msg)
raise exceptions.FileNotFound(err_msg)
if not os.path.isabs(path):
path = os.path.join(os.getcwd(), path)
return path
test_path = prepare_path(test_path)
# locate debugtalk.py file
debugtalk_path = locate_debugtalk_py(test_path)
if debugtalk_path:
# The folder containing debugtalk.py will be treated as the project RootDir.
project_root_directory = os.path.dirname(debugtalk_path)
else:
# debugtalk.py not found, use os.getcwd() as project RootDir.
project_root_directory = os.getcwd()
return debugtalk_path, project_root_directory
def load_debugtalk_functions() -> Dict[Text, Callable]:
"""load project debugtalk.py module functions
debugtalk.py should be located in project root directory.
Returns:
dict: debugtalk module functions mapping
{
"func1_name": func1,
"func2_name": func2
}
"""
# load debugtalk.py module
try:
imported_module = importlib.import_module("debugtalk")
except Exception as ex:
logger.error(f"error occurred in debugtalk.py: {ex}")
sys.exit(1)
# reload to refresh previously loaded module
imported_module = importlib.reload(imported_module)
return load_module_functions(imported_module)
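load_debugtalk_functions depends on two importlib behaviors: import_module resolves a module by name through sys.path, and reload re-executes an already-imported module so edits take effect. A sketch with a hypothetical demo_talk module in a temp directory:

```python
import importlib
import os
import sys
import tempfile

# Import a module by name from a directory on sys.path, then reload it after
# editing the source -- the same pattern used for debugtalk.py.
# The module name "demo_talk" is made up for this sketch.
with tempfile.TemporaryDirectory() as root:
    path = os.path.join(root, "demo_talk.py")
    with open(path, "w") as f:
        f.write("def hello():\n    return 'v1'\n")
    sys.path.insert(0, root)
    try:
        mod = importlib.import_module("demo_talk")
        assert mod.hello() == "v1"
        with open(path, "w") as f:
            f.write("def hello():\n    return 'version2'\n")
        st = os.stat(path)
        os.utime(path, (st.st_atime + 5, st.st_mtime + 5))  # defeat bytecode cache
        mod = importlib.reload(mod)  # re-executes the updated source
        assert mod.hello() == "version2"
    finally:
        sys.path.remove(root)
        sys.modules.pop("demo_talk", None)
```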
def load_project_meta(test_path: Text, reload: bool = False) -> ProjectMeta:
"""load testcases, .env, debugtalk.py functions.
testcases folder is relative to project_root_directory
by default, project_meta will be loaded only once, unless set reload to true.
Args:
test_path (str): test file/folder path, locate project RootDir from this path.
reload: reload project meta if set true, default to false
Returns:
project loaded api/testcases definitions,
environments and debugtalk.py functions.
"""
global project_meta
if project_meta and (not reload):
return project_meta
project_meta = ProjectMeta()
if not test_path:
return project_meta
debugtalk_path, project_root_directory = locate_project_root_directory(test_path)
# add project RootDir to sys.path
sys.path.insert(0, project_root_directory)
# load .env file
# NOTICE:
# environment variables may be loaded in debugtalk.py
# thus .env file should be loaded before loading debugtalk.py
dot_env_path = os.path.join(project_root_directory, ".env")
dot_env = load_dot_env_file(dot_env_path)
if dot_env:
project_meta.env = dot_env
project_meta.dot_env_path = dot_env_path
if debugtalk_path:
# load debugtalk.py functions
debugtalk_functions = load_debugtalk_functions()
else:
debugtalk_functions = {}
# locate project RootDir and load debugtalk.py functions
project_meta.RootDir = project_root_directory
project_meta.functions = debugtalk_functions
project_meta.debugtalk_path = debugtalk_path
return project_meta
def convert_relative_project_root_dir(abs_path: Text) -> Text:
"""convert absolute path to relative path, based on project_meta.RootDir
Args:
abs_path: absolute path
Returns: relative path based on project_meta.RootDir
"""
_project_meta = load_project_meta(abs_path)
if not abs_path.startswith(_project_meta.RootDir):
raise exceptions.ParamsError(
f"failed to convert absolute path to relative path based on project_meta.RootDir\n"
f"abs_path: {abs_path}\n"
f"project_meta.RootDir: {_project_meta.RootDir}"
)
return abs_path[len(_project_meta.RootDir) + 1 :]
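The final slice works because RootDir does not end with a separator, so len(RootDir) + 1 skips the prefix plus exactly one os.sep. A sketch with made-up paths:

```python
import os

# Slicing off len(root) + 1 removes the RootDir prefix plus the path
# separator, mirroring convert_relative_project_root_dir.
root = os.path.join(os.sep, "proj")
abs_path = os.path.join(root, "testcases", "demo.yml")
assert abs_path.startswith(root)
relative = abs_path[len(root) + 1:]
assert relative == os.path.join("testcases", "demo.yml")
```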

httprunner/loader_test.py Normal file

@@ -0,0 +1,127 @@
import os
import unittest
from httprunner import exceptions, loader
class TestLoader(unittest.TestCase):
def test_load_testcase_file(self):
path = "examples/postman_echo/request_methods/request_with_variables.yml"
testcase_obj = loader.load_testcase_file(path)
self.assertEqual(
testcase_obj.config.name, "request methods testcase with variables"
)
self.assertEqual(len(testcase_obj.teststeps), 4)
def test_load_json_file_file_format_error(self):
json_tmp_file = "tmp.json"
# create empty file
with open(json_tmp_file, "w") as f:
f.write("")
with self.assertRaises(exceptions.FileFormatError):
loader._load_json_file(json_tmp_file)
os.remove(json_tmp_file)
# create empty json file
with open(json_tmp_file, "w") as f:
f.write("{}")
loader._load_json_file(json_tmp_file)
os.remove(json_tmp_file)
# create invalid format json file
with open(json_tmp_file, "w") as f:
f.write("abc")
with self.assertRaises(exceptions.FileFormatError):
loader._load_json_file(json_tmp_file)
os.remove(json_tmp_file)
def test_load_testcases_bad_filepath(self):
testcase_file_path = os.path.join(os.getcwd(), "examples/data/demo")
with self.assertRaises(exceptions.FileNotFound):
loader.load_testcase_file(testcase_file_path)
def test_load_csv_file_one_parameter(self):
csv_file_path = os.path.join(os.getcwd(), "examples/httpbin/user_agent.csv")
csv_content = loader.load_csv_file(csv_file_path)
self.assertEqual(
csv_content,
[
{"user_agent": "iOS/10.1"},
{"user_agent": "iOS/10.2"},
{"user_agent": "iOS/10.3"},
],
)
def test_load_csv_file_multiple_parameters(self):
csv_file_path = os.path.join(os.getcwd(), "examples/httpbin/account.csv")
csv_content = loader.load_csv_file(csv_file_path)
self.assertEqual(
csv_content,
[
{"username": "test1", "password": "111111"},
{"username": "test2", "password": "222222"},
{"username": "test3", "password": "333333"},
],
)
def test_load_folder_files(self):
folder = os.path.join(os.getcwd(), "examples")
file1 = os.path.join(os.getcwd(), "examples", "test_utils.py")
file2 = os.path.join(os.getcwd(), "examples", "httpbin", "hooks.yml")
files = loader.load_folder_files(folder, recursive=False)
self.assertEqual(files, [])
files = loader.load_folder_files(folder)
self.assertIn(file2, files)
self.assertNotIn(file1, files)
files = loader.load_folder_files("not_existed_folder", recursive=False)
self.assertEqual([], files)
files = loader.load_folder_files(file2, recursive=False)
self.assertEqual([], files)
def test_load_custom_dot_env_file(self):
dot_env_path = os.path.join(os.getcwd(), "examples", "httpbin", "test.env")
env_variables_mapping = loader.load_dot_env_file(dot_env_path)
self.assertIn("PROJECT_KEY", env_variables_mapping)
self.assertEqual(env_variables_mapping["UserName"], "test")
self.assertEqual(
env_variables_mapping["content_type"], "application/json; charset=UTF-8"
)
def test_load_env_path_not_exist(self):
dot_env_path = os.path.join(
os.getcwd(),
"tests",
"data",
)
env_variables_mapping = loader.load_dot_env_file(dot_env_path)
self.assertEqual(env_variables_mapping, {})
def test_locate_file(self):
with self.assertRaises(exceptions.FileNotFound):
loader.locate_file(os.getcwd(), "debugtalk.py")
with self.assertRaises(exceptions.FileNotFound):
loader.locate_file("", "debugtalk.py")
start_path = os.path.join(os.getcwd(), "examples", "httpbin")
self.assertEqual(
loader.locate_file(start_path, "debugtalk.py"),
os.path.join(os.getcwd(), "examples", "httpbin", "debugtalk.py"),
)
self.assertEqual(
loader.locate_file("examples/httpbin/", "debugtalk.py"),
os.path.join(os.getcwd(), "examples", "httpbin", "debugtalk.py"),
)
self.assertEqual(
loader.locate_file("examples/httpbin/", "debugtalk.py"),
os.path.join(os.getcwd(), "examples", "httpbin", "debugtalk.py"),
)

httprunner/make.py Normal file

@@ -0,0 +1,574 @@
import os
import string
import subprocess
import sys
from typing import Dict, List, Set, Text, Tuple
import jinja2
from loguru import logger
from httprunner import __version__, exceptions
from httprunner.compat import (
convert_variables,
ensure_path_sep,
ensure_testcase_v4,
ensure_testcase_v4_api,
)
from httprunner.loader import (
convert_relative_project_root_dir,
load_folder_files,
load_project_meta,
load_test_file,
load_testcase,
)
from httprunner.response import uniform_validator
from httprunner.utils import ga4_client, is_support_multiprocessing
""" cache converted pytest files, avoid duplicate making
"""
pytest_files_made_cache_mapping: Dict[Text, Text] = {}
""" save generated pytest files to run, except referenced testcase
"""
pytest_files_run_set: Set = set()
__TEMPLATE__ = jinja2.Template(
"""# NOTE: Generated By HttpRunner {{ version }}
# FROM: {{ testcase_path }}
{%- if parameters or skip %}
import pytest
{% endif %}
from httprunner import HttpRunner, Config, Step, RunRequest
{%- if parameters %}
from httprunner import Parameters
{%- endif %}
{%- if reference_testcase %}
from httprunner import RunTestCase
{%- endif %}
{%- for import_str in imports_list %}
{{ import_str }}
{%- endfor %}
class {{ class_name }}(HttpRunner):
{% if parameters and skip %}
@pytest.mark.parametrize("param", Parameters({{ parameters }}))
@pytest.mark.skip(reason={{ skip }})
def test_start(self, param):
super().test_start(param)
{% elif parameters %}
@pytest.mark.parametrize("param", Parameters({{ parameters }}))
def test_start(self, param):
super().test_start(param)
{% elif skip %}
@pytest.mark.skip(reason={{ skip }})
def test_start(self):
super().test_start()
{% endif %}
config = {{ config_chain_style }}
teststeps = [
{% for step_chain_style in teststeps_chain_style %}
{{ step_chain_style }},
{% endfor %}
]
if __name__ == "__main__":
{{ class_name }}().test_start()
"""
)
def __ensure_absolute(path: Text) -> Text:
if path.startswith("./"):
# Linux/Darwin, hrun ./test.yml
path = path[2:]
elif path.startswith(".\\"):
# Windows, hrun .\test.yml
path = path[2:]
path = ensure_path_sep(path)
project_meta = load_project_meta(path)
if os.path.isabs(path):
absolute_path = path
else:
absolute_path = os.path.join(project_meta.RootDir, path)
if not os.path.isfile(absolute_path):
logger.error(f"Invalid testcase file path: {absolute_path}")
sys.exit(1)
return absolute_path
def ensure_file_abs_path_valid(file_abs_path: Text) -> Text:
"""ensure file path valid for pytest, handle cases when directory name includes dot/hyphen/space
Args:
file_abs_path: absolute file path
Returns:
ensured valid absolute file path
"""
project_meta = load_project_meta(file_abs_path)
raw_abs_file_name, file_suffix = os.path.splitext(file_abs_path)
file_suffix = file_suffix.lower()
raw_file_relative_name = convert_relative_project_root_dir(raw_abs_file_name)
if raw_file_relative_name == "":
return file_abs_path
path_names = []
for name in raw_file_relative_name.rstrip(os.sep).split(os.sep):
if name[0] in string.digits:
# ensure file name not startswith digit
# 19 => T19, 2C => T2C
name = f"T{name}"
if name.startswith("."):
# avoid ".csv" being converted to "_csv"
pass
else:
# handle cases when directory name includes dot/hyphen/space
name = name.replace(" ", "_").replace(".", "_").replace("-", "_")
path_names.append(name)
new_file_path = os.path.join(
project_meta.RootDir, f"{os.sep.join(path_names)}{file_suffix}"
)
return new_file_path
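The sanitization rules above condense into a small helper; this sketch mirrors them (the function name sanitize is not part of the module): prefix a "T" when a component starts with a digit, keep hidden-file names like ".csv" untouched, and map space/dot/hyphen to underscores everywhere else.

```python
import string

# One path component at a time, following ensure_file_abs_path_valid:
# digit prefix -> "T", leading dot -> untouched, space/dot/hyphen -> "_".
def sanitize(name):
    if name[0] in string.digits:
        name = f"T{name}"
    if name.startswith("."):
        return name
    return name.replace(" ", "_").replace(".", "_").replace("-", "_")

assert sanitize("2 3") == "T2_3"
assert sanitize("a-b.c") == "a_b_c"
assert sanitize(".csv") == ".csv"
```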
def __ensure_testcase_module(path: Text):
"""ensure pytest files are in python module, generate __init__.py on demand"""
init_file = os.path.join(os.path.dirname(path), "__init__.py")
if os.path.isfile(init_file):
return
with open(init_file, "w", encoding="utf-8") as f:
f.write("# NOTICE: Generated By HttpRunner. DO NOT EDIT!\n")
def convert_testcase_path(testcase_abs_path: Text) -> Tuple[Text, Text]:
"""convert single YAML/JSON testcase path to python file"""
testcase_new_path = ensure_file_abs_path_valid(testcase_abs_path)
dir_path = os.path.dirname(testcase_new_path)
file_name, _ = os.path.splitext(os.path.basename(testcase_new_path))
testcase_python_abs_path = os.path.join(dir_path, f"{file_name}_test.py")
# convert title case, e.g. request_with_variables => RequestWithVariables
name_in_title_case = file_name.title().replace("_", "")
return testcase_python_abs_path, name_in_title_case
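The class-name conversion is plain str.title() plus underscore removal, as the comment above notes:

```python
# request_with_variables => RequestWithVariables; the generated pytest
# file then becomes request_with_variables_test.py.
file_name = "request_with_variables"
class_name = file_name.title().replace("_", "")
assert class_name == "RequestWithVariables"
assert f"{file_name}_test.py" == "request_with_variables_test.py"
```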
def format_pytest_with_black(*python_paths: Text):
logger.info("format pytest cases with black ...")
try:
if is_support_multiprocessing() or len(python_paths) <= 1:
subprocess.run(["black", *python_paths], check=True)
else:
logger.warning(
"this system does not support multiprocessing well, format files one by one ..."
)
for path in python_paths:
subprocess.run(["black", path], check=True)
except subprocess.CalledProcessError as ex:
logger.error(ex)
sys.exit(1)
except OSError:
err_msg = """
missing dependency tool: black
install black manually and try again:
$ pip install black
"""
logger.error(err_msg)
sys.exit(1)
def make_config_chain_style(config: Dict) -> Text:
config_chain_style = f'Config("{config["name"]}")'
if config["variables"]:
variables = config["variables"]
config_chain_style += f".variables(**{variables})"
if "base_url" in config:
config_chain_style += f'.base_url("{config["base_url"]}")'
if "verify" in config:
config_chain_style += f'.verify({config["verify"]})'
if "export" in config:
config_chain_style += f'.export(*{config["export"]})'
return config_chain_style
def make_config_skip(config: Dict) -> Text:
config_chain_style = None
if "skip" in config:
if config["skip"]:
config_chain_style = config["skip"]
else:
config_chain_style = '"skip unconditionally"'
return config_chain_style
def make_request_chain_style(request: Dict) -> Text:
method = request["method"].lower()
url = request["url"]
request_chain_style = f'.{method}("{url}")'
if "params" in request:
params = request["params"]
request_chain_style += f".with_params(**{params})"
if "headers" in request:
headers = request["headers"]
request_chain_style += f".with_headers(**{headers})"
if "cookies" in request:
cookies = request["cookies"]
request_chain_style += f".with_cookies(**{cookies})"
if "data" in request:
data = request["data"]
if isinstance(data, Text):
data = f'"{data}"'
request_chain_style += f".with_data({data})"
if "json" in request:
req_json = request["json"]
if isinstance(req_json, Text):
req_json = f'"{req_json}"'
request_chain_style += f".with_json({req_json})"
if "timeout" in request:
timeout = request["timeout"]
request_chain_style += f".set_timeout({timeout})"
if "verify" in request:
verify = request["verify"]
request_chain_style += f".set_verify({verify})"
if "allow_redirects" in request:
allow_redirects = request["allow_redirects"]
request_chain_style += f".set_allow_redirects({allow_redirects})"
if "upload" in request:
upload = request["upload"]
request_chain_style += f".upload(**{upload})"
return request_chain_style
def make_teststep_chain_style(teststep: Dict) -> Text:
if teststep.get("request"):
step_info = f'RunRequest("{teststep["name"]}")'
elif teststep.get("testcase"):
step_info = f'RunTestCase("{teststep["name"]}")'
else:
raise exceptions.TestCaseFormatError(f"Invalid teststep: {teststep}")
if "variables" in teststep:
variables = teststep["variables"]
step_info += f".with_variables(**{variables})"
if "setup_hooks" in teststep:
setup_hooks = teststep["setup_hooks"]
for hook in setup_hooks:
if isinstance(hook, Text):
step_info += f'.setup_hook("{hook}")'
elif isinstance(hook, Dict) and len(hook) == 1:
assign_var_name, hook_content = list(hook.items())[0]
step_info += f'.setup_hook("{hook_content}", "{assign_var_name}")'
else:
raise exceptions.TestCaseFormatError(f"Invalid setup hook: {hook}")
if teststep.get("request"):
step_info += make_request_chain_style(teststep["request"])
elif teststep.get("testcase"):
testcase = teststep["testcase"]
call_ref_testcase = f".call({testcase})"
step_info += call_ref_testcase
if "teardown_hooks" in teststep:
teardown_hooks = teststep["teardown_hooks"]
for hook in teardown_hooks:
if isinstance(hook, Text):
step_info += f'.teardown_hook("{hook}")'
elif isinstance(hook, Dict) and len(hook) == 1:
assign_var_name, hook_content = list(hook.items())[0]
step_info += f'.teardown_hook("{hook_content}", "{assign_var_name}")'
else:
raise exceptions.TestCaseFormatError(f"Invalid teardown hook: {hook}")
if "extract" in teststep:
# request step
step_info += ".extract()"
for extract_name, extract_path in teststep["extract"].items():
step_info += f""".with_jmespath('{extract_path}', '{extract_name}')"""
if "export" in teststep:
# reference testcase step
export: List[Text] = teststep["export"]
step_info += f".export(*{export})"
if "validate" in teststep:
step_info += ".validate()"
for v in teststep["validate"]:
validator = uniform_validator(v)
assert_method = validator["assert"]
check = validator["check"]
if '"' in check:
# e.g. body."user-agent" => 'body."user-agent"'
check = f"'{check}'"
else:
check = f'"{check}"'
expect = validator["expect"]
if isinstance(expect, Text):
expect = f'"{expect}"'
message = validator["message"]
if message:
step_info += f".assert_{assert_method}({check}, {expect}, '{message}')"
else:
step_info += f".assert_{assert_method}({check}, {expect})"
return f"Step({step_info})"
def make_testcase(testcase: Dict, dir_path: Text = None) -> Text:
"""convert valid testcase dict to pytest file path"""
# ensure compatibility with testcase format v2/v3
testcase = ensure_testcase_v4(testcase)
# validate testcase format
load_testcase(testcase)
testcase_abs_path = __ensure_absolute(testcase["config"]["path"])
logger.info(f"start to make testcase: {testcase_abs_path}")
testcase_python_abs_path, testcase_cls_name = convert_testcase_path(
testcase_abs_path
)
if dir_path:
testcase_python_abs_path = os.path.join(
dir_path, os.path.basename(testcase_python_abs_path)
)
global pytest_files_made_cache_mapping
if testcase_python_abs_path in pytest_files_made_cache_mapping:
return testcase_python_abs_path
config = testcase["config"]
config["path"] = convert_relative_project_root_dir(testcase_python_abs_path)
config["variables"] = convert_variables(
config.get("variables", {}), testcase_abs_path
)
# prepare reference testcase
imports_list = []
teststeps = testcase["teststeps"]
for teststep in teststeps:
if not teststep.get("testcase"):
continue
# make ref testcase pytest file
ref_testcase_path = __ensure_absolute(teststep["testcase"])
test_content = load_test_file(ref_testcase_path)
if not isinstance(test_content, Dict):
raise exceptions.TestCaseFormatError(f"Invalid teststep: {teststep}")
# api in v2/v3 format, convert to v4 testcase
if "request" in test_content and "name" in test_content:
test_content = ensure_testcase_v4_api(test_content)
test_content.setdefault("config", {})["path"] = ref_testcase_path
ref_testcase_python_abs_path = make_testcase(test_content)
# override testcase export
ref_testcase_export: List = test_content["config"].get("export", [])
if ref_testcase_export:
step_export: List = teststep.setdefault("export", [])
step_export.extend(ref_testcase_export)
teststep["export"] = list(set(step_export))
# prepare ref testcase class name
ref_testcase_cls_name = pytest_files_made_cache_mapping[
ref_testcase_python_abs_path
]
teststep["testcase"] = ref_testcase_cls_name
# prepare import ref testcase
ref_testcase_python_relative_path = convert_relative_project_root_dir(
ref_testcase_python_abs_path
)
ref_module_name, _ = os.path.splitext(ref_testcase_python_relative_path)
ref_module_name = ref_module_name.replace(os.sep, ".")
import_expr = f"from {ref_module_name} import TestCase{ref_testcase_cls_name} as {ref_testcase_cls_name}"
if import_expr not in imports_list:
imports_list.append(import_expr)
testcase_path = convert_relative_project_root_dir(testcase_abs_path)
# current file compared to ProjectRootDir
diff_levels = len(testcase_path.split(os.sep))
if len(imports_list) > 0 and diff_levels > 0:
parent = ".parent" * diff_levels
import_deps = f"""
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__){parent}))
"""
imports_list.insert(0, import_deps)
data = {
"version": __version__,
"testcase_path": testcase_path,
"class_name": f"TestCase{testcase_cls_name}",
"imports_list": imports_list,
"config_chain_style": make_config_chain_style(config),
"skip": make_config_skip(config),
"parameters": config.get("parameters"),
"reference_testcase": any(step.get("testcase") for step in teststeps),
"teststeps_chain_style": [
make_teststep_chain_style(step) for step in teststeps
],
}
content = __TEMPLATE__.render(data)
# ensure new file's directory exists
dir_path = os.path.dirname(testcase_python_abs_path)
if not os.path.exists(dir_path):
os.makedirs(dir_path)
with open(testcase_python_abs_path, "w", encoding="utf-8") as f:
f.write(content)
pytest_files_made_cache_mapping[testcase_python_abs_path] = testcase_cls_name
__ensure_testcase_module(testcase_python_abs_path)
logger.info(f"generated testcase: {testcase_python_abs_path}")
return testcase_python_abs_path
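One detail of make_testcase worth spelling out: when a generated file references other testcases, it prepends RootDir to sys.path by chaining .parent once per path component between the pytest file and RootDir. A sketch of that arithmetic (POSIX-style paths for brevity):

```python
from pathlib import Path

# Each component of the testcase path (including the file name) adds one
# ".parent"; chaining them climbs from the generated file back to RootDir.
testcase_path = "suite/api/case_test.py"
diff_levels = len(testcase_path.split("/"))  # 3 components
parent_expr = ".parent" * diff_levels
assert parent_expr == ".parent.parent.parent"

# Equivalent pathlib navigation from the generated file's location:
p = Path("/proj/suite/api/case_test.py")
assert p.parent.parent.parent == Path("/proj")
```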
def __make(tests_path: Text):
"""make testcase(s) with testcase/folder absolute path
generated pytest file path will be cached in pytest_files_made_cache_mapping
Args:
tests_path: should be in absolute path
"""
logger.info(f"make path: {tests_path}")
test_files = []
if os.path.isdir(tests_path):
files_list = load_folder_files(tests_path)
test_files.extend(files_list)
elif os.path.isfile(tests_path):
test_files.append(tests_path)
else:
raise exceptions.TestcaseNotFound(f"Invalid tests path: {tests_path}")
for test_file in test_files:
if test_file.lower().endswith("_test.py"):
pytest_files_run_set.add(test_file)
continue
try:
test_content = load_test_file(test_file)
except (exceptions.FileNotFound, exceptions.FileFormatError) as ex:
logger.warning(f"Invalid test file: {test_file}\n{type(ex).__name__}: {ex}")
continue
if not isinstance(test_content, Dict):
logger.warning(
f"Invalid test file: {test_file}\n"
f"reason: test content not in dict format."
)
continue
# api in v2/v3 format, convert to v4 testcase
if "request" in test_content and "name" in test_content:
test_content = ensure_testcase_v4_api(test_content)
if "config" not in test_content:
logger.warning(
f"Invalid testcase file: {test_file}\nreason: missing config part."
)
continue
elif not isinstance(test_content["config"], Dict):
logger.warning(
f"Invalid testcase file: {test_file}\n"
f"reason: config should be dict type, got {test_content['config']}"
)
continue
# ensure path absolute
test_content.setdefault("config", {})["path"] = test_file
# invalid format
if "teststeps" not in test_content:
logger.warning(f"Invalid testcase file: {test_file}")
# testcase
try:
testcase_pytest_path = make_testcase(test_content)
pytest_files_run_set.add(testcase_pytest_path)
except exceptions.TestCaseFormatError as ex:
logger.warning(
f"Invalid testcase file: {test_file}\n{type(ex).__name__}: {ex}"
)
continue
def main_make(tests_paths: List[Text]) -> List[Text]:
if not tests_paths:
return []
ga4_client.send_event("hmake")
for tests_path in tests_paths:
tests_path = ensure_path_sep(tests_path)
if not os.path.isabs(tests_path):
tests_path = os.path.join(os.getcwd(), tests_path)
try:
__make(tests_path)
except exceptions.MyBaseError as ex:
logger.error(ex)
sys.exit(1)
# format pytest files
pytest_files_format_list = pytest_files_made_cache_mapping.keys()
format_pytest_with_black(*pytest_files_format_list)
return list(pytest_files_run_set)
def init_make_parser(subparsers):
"""make testcases: parse command line options and run commands."""
parser = subparsers.add_parser(
"make",
help="Convert YAML/JSON testcases to pytest cases.",
)
parser.add_argument(
"testcase_path", nargs="*", help="Specify YAML/JSON testcase file/folder path"
)
return parser
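init_make_parser registers the make subcommand on a shared argparse subparsers object. A minimal sketch of the wiring (the prog name hrun is assumed from the comments above):

```python
import argparse

# A "make" subcommand that collects zero or more testcase paths, mirroring
# init_make_parser's nargs="*" positional argument.
parser = argparse.ArgumentParser(prog="hrun")
subparsers = parser.add_subparsers(dest="command")
make_parser = subparsers.add_parser(
    "make", help="Convert YAML/JSON testcases to pytest cases."
)
make_parser.add_argument("testcase_path", nargs="*")

args = parser.parse_args(["make", "a.yml", "b.yml"])
assert args.command == "make"
assert args.testcase_path == ["a.yml", "b.yml"]
```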

httprunner/make_test.py Normal file

@@ -0,0 +1,213 @@
import os
import unittest
from httprunner import loader
from httprunner.make import (
main_make,
convert_testcase_path,
pytest_files_made_cache_mapping,
make_config_chain_style,
make_teststep_chain_style,
pytest_files_run_set,
ensure_file_abs_path_valid,
)
class TestMake(unittest.TestCase):
def setUp(self) -> None:
pytest_files_made_cache_mapping.clear()
pytest_files_run_set.clear()
loader.project_meta = None
self.data_dir = os.path.join(os.getcwd(), "examples", "data")
def test_make_testcase(self):
path = ["examples/postman_echo/request_methods/request_with_variables.yml"]
testcase_python_list = main_make(path)
self.assertEqual(
testcase_python_list[0],
os.path.join(
os.getcwd(),
os.path.join(
"examples",
"postman_echo",
"request_methods",
"request_with_variables_test.py",
),
),
)
def test_make_testcase_with_ref(self):
path = [
"examples/postman_echo/request_methods/request_with_testcase_reference.yml"
]
testcase_python_list = main_make(path)
self.assertEqual(len(testcase_python_list), 1)
self.assertIn(
os.path.join(
os.getcwd(),
os.path.join(
"examples",
"postman_echo",
"request_methods",
"request_with_testcase_reference_test.py",
),
),
testcase_python_list,
)
with open(
os.path.join(
"examples",
"postman_echo",
"request_methods",
"request_with_testcase_reference_test.py",
)
) as f:
content = f.read()
self.assertIn(
"""
from request_methods.request_with_functions_test import (
TestCaseRequestWithFunctions as RequestWithFunctions,
)
""",
content,
)
self.assertIn(
".call(RequestWithFunctions)",
content,
)
def test_make_testcase_folder(self):
path = ["examples/postman_echo/request_methods/"]
testcase_python_list = main_make(path)
self.assertIn(
os.path.join(
os.getcwd(),
os.path.join(
"examples",
"postman_echo",
"request_methods",
"request_with_functions_test.py",
),
),
testcase_python_list,
)
def test_ensure_file_path_valid(self):
self.assertEqual(
ensure_file_abs_path_valid(os.path.join(self.data_dir, "a-b.c", "2 3.yml")),
os.path.join(self.data_dir, "a_b_c", "T2_3.yml"),
)
loader.project_meta = None
self.assertEqual(
ensure_file_abs_path_valid(
os.path.join(os.getcwd(), "examples", "postman_echo", "request_methods")
),
os.path.join(os.getcwd(), "examples", "postman_echo", "request_methods"),
)
loader.project_meta = None
self.assertEqual(
ensure_file_abs_path_valid(os.path.join(os.getcwd(), "pyproject.toml")),
os.path.join(os.getcwd(), "pyproject.toml"),
)
loader.project_meta = None
self.assertEqual(
ensure_file_abs_path_valid(os.getcwd()),
os.getcwd(),
)
loader.project_meta = None
self.assertEqual(
ensure_file_abs_path_valid(os.path.join(self.data_dir, ".csv")),
os.path.join(self.data_dir, ".csv"),
)
def test_convert_testcase_path(self):
self.assertEqual(
convert_testcase_path(os.path.join(self.data_dir, "a-b.c", "2 3.yml")),
(
os.path.join(self.data_dir, "a_b_c", "T2_3_test.py"),
"T23",
),
)
self.assertEqual(
convert_testcase_path(os.path.join(self.data_dir, "a-b.c", "中文case.yml")),
(
os.path.join(self.data_dir, "a_b_c", "中文case_test.py"),
"中文Case",
),
)
def test_make_config_chain_style(self):
config = {
"name": "request methods testcase: validate with functions",
"variables": {"foo1": "bar1", "foo2": 22},
"base_url": "https://postman_echo.com",
"verify": False,
"path": "examples/postman_echo/request_methods/validate_with_functions_test.py",
}
self.assertEqual(
make_config_chain_style(config),
"""Config("request methods testcase: validate with functions").variables(**{'foo1': 'bar1', 'foo2': 22}).base_url("https://postman_echo.com").verify(False)""",
)
def test_make_teststep_chain_style(self):
step = {
"name": "get with params",
"variables": {
"foo1": "bar1",
"foo2": 123,
"sum_v": "${sum_two(1, 2)}",
},
"request": {
"method": "GET",
"url": "/get",
"params": {"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"},
"headers": {"User-Agent": "HttpRunner/${get_httprunner_version()}"},
},
"testcase": "CLS_LB(TestCaseDemo)CLS_RB",
"extract": {
"session_foo1": "body.args.foo1",
"session_foo2": "body.args.foo2",
},
"validate": [
{"eq": ["status_code", 200]},
{"eq": ["body.args.sum_v", "3"]},
],
}
teststep_chain_style = make_teststep_chain_style(step)
self.assertEqual(
teststep_chain_style,
"""Step(RunRequest("get with params").with_variables(**{'foo1': 'bar1', 'foo2': 123, 'sum_v': '${sum_two(1, 2)}'}).get("/get").with_params(**{'foo1': '$foo1', 'foo2': '$foo2', 'sum_v': '$sum_v'}).with_headers(**{'User-Agent': 'HttpRunner/${get_httprunner_version()}'}).extract().with_jmespath('body.args.foo1', 'session_foo1').with_jmespath('body.args.foo2', 'session_foo2').validate().assert_equal("status_code", 200).assert_equal("body.args.sum_v", "3"))""",
)
def test_make_requests_with_json_chain_style(self):
step = {
"name": "get with params",
"variables": {
"foo1": "bar1",
"foo2": 123,
"sum_v": "${sum_two(1, 2)}",
"myjson": {"name": "user", "password": "123456"},
},
"request": {
"method": "GET",
"url": "/get",
"params": {"foo1": "$foo1", "foo2": "$foo2", "sum_v": "$sum_v"},
"headers": {"User-Agent": "HttpRunner/${get_httprunner_version()}"},
"json": "$myjson",
},
"testcase": "CLS_LB(TestCaseDemo)CLS_RB",
"extract": {
"session_foo1": "body.args.foo1",
"session_foo2": "body.args.foo2",
},
"validate": [
{"eq": ["status_code", 200]},
{"eq": ["body.args.sum_v", "3"]},
],
}
teststep_chain_style = make_teststep_chain_style(step)
self.assertEqual(
teststep_chain_style,
"""Step(RunRequest("get with params").with_variables(**{'foo1': 'bar1', 'foo2': 123, 'sum_v': '${sum_two(1, 2)}', 'myjson': {'name': 'user', 'password': '123456'}}).get("/get").with_params(**{'foo1': '$foo1', 'foo2': '$foo2', 'sum_v': '$sum_v'}).with_headers(**{'User-Agent': 'HttpRunner/${get_httprunner_version()}'}).with_json("$myjson").extract().with_jmespath('body.args.foo1', 'session_foo1').with_jmespath('body.args.foo2', 'session_foo2').validate().assert_equal("status_code", 200).assert_equal("body.args.sum_v", "3"))""",
)

httprunner/models.py Normal file

@@ -0,0 +1,305 @@
import os
from enum import Enum
from typing import Any, Callable, Dict, List, Text, Union
from pydantic import BaseModel, Field, HttpUrl
Name = Text
Url = Text
BaseUrl = Union[HttpUrl, Text]
VariablesMapping = Dict[Text, Any]
FunctionsMapping = Dict[Text, Callable]
Headers = Dict[Text, Text]
Cookies = Dict[Text, Text]
Verify = bool
Hooks = List[Union[Text, Dict[Text, Text]]]
Export = List[Text]
Validators = List[Dict]
Env = Dict[Text, Any]
class MethodEnum(Text, Enum):
GET = "GET"
POST = "POST"
PUT = "PUT"
DELETE = "DELETE"
HEAD = "HEAD"
OPTIONS = "OPTIONS"
PATCH = "PATCH"
class ProtoType(Enum):
Binary = 1
CyBinary = 2
Compact = 3
Json = 4
class TransType(Enum):
Buffered = 1
CyBuffered = 2
Framed = 3
CyFramed = 4
# configs for thrift rpc
class TConfigThrift(BaseModel):
psm: Text = None
env: Text = None
cluster: Text = None
target: Text = None
include_dirs: List[Text] = None
thrift_client: Any = None
timeout: int = 10
idl_path: Text = None
method: Text = None
ip: Text = "127.0.0.1"
port: int = 9000
service_name: Text = None
proto_type: ProtoType = ProtoType.Binary
trans_type: TransType = TransType.Buffered
# configs for db
class TConfigDB(BaseModel):
psm: Text = None
user: Text = None
password: Text = None
ip: Text = None
port: int = 3306
database: Text = None
class TransportEnum(Text, Enum):
BUFFERED = "buffered"
FRAMED = "framed"
class TThriftRequest(BaseModel):
"""rpc request model"""
method: Text = ""
params: Dict = {}
thrift_client: Any = None
idl_path: Text = "" # idl local path
timeout: int = 10 # sec
transport: TransportEnum = TransportEnum.BUFFERED
include_dirs: List[Union[Text, None]] = [] # param of thriftpy2.load
target: Text = "" # tcp://{ip}:{port} or sd://psm?cluster=xx&env=xx
env: Text = "prod"
cluster: Text = "default"
psm: Text = ""
service_name: Text = None
ip: Text = None
port: int = None
proto_type: ProtoType = None
trans_type: TransType = None
class SqlMethodEnum(Text, Enum):
FETCHONE = "FETCHONE"
FETCHMANY = "FETCHMANY"
FETCHALL = "FETCHALL"
INSERT = "INSERT"
UPDATE = "UPDATE"
DELETE = "DELETE"
class TSqlRequest(BaseModel):
"""sql request model"""
db_config: TConfigDB = TConfigDB()
method: SqlMethodEnum = None
sql: Text = None
size: int = 0 # limit nums of sql result
class TConfig(BaseModel):
name: Name
verify: Verify = False
base_url: BaseUrl = ""
# Text: prepare variables in debugtalk.py, ${gen_variables()}
variables: Union[VariablesMapping, Text] = {}
parameters: Union[VariablesMapping, Text] = {}
# setup_hooks: Hooks = []
# teardown_hooks: Hooks = []
export: Export = []
path: Text = None
# configs for other protocols
thrift: TConfigThrift = None
db: TConfigDB = TConfigDB()
class TRequest(BaseModel):
"""requests.Request model"""
method: MethodEnum
url: Url
params: Dict[Text, Text] = {}
headers: Headers = {}
req_json: Union[Dict, List, Text] = Field(None, alias="json")
data: Union[Text, Dict[Text, Any]] = None
cookies: Cookies = {}
timeout: float = 120
allow_redirects: bool = True
verify: Verify = False
upload: Dict = {} # used for upload files
class TStep(BaseModel):
name: Name
request: Union[TRequest, None] = None
testcase: Union[Text, Callable, None] = None
variables: VariablesMapping = {}
setup_hooks: Hooks = []
teardown_hooks: Hooks = []
# used to extract request's response field
extract: VariablesMapping = {}
# used to export session variables from referenced testcase
export: Export = []
validators: Validators = Field([], alias="validate")
validate_script: List[Text] = []
retry_times: int = 0
retry_interval: int = 0 # sec
thrift_request: Union[TThriftRequest, None] = None
sql_request: Union[TSqlRequest, None] = None
class TestCase(BaseModel):
config: TConfig
teststeps: List[TStep]
class ProjectMeta(BaseModel):
debugtalk_py: Text = "" # debugtalk.py file content
debugtalk_path: Text = "" # debugtalk.py file path
dot_env_path: Text = "" # .env file path
functions: FunctionsMapping = {} # functions defined in debugtalk.py
env: Env = {}
RootDir: Text = (
os.getcwd()
) # project root directory (absolute), i.e. the directory where debugtalk.py is located
class TestsMapping(BaseModel):
project_meta: ProjectMeta
testcases: List[TestCase]
class TestCaseTime(BaseModel):
start_at: float = 0
start_at_iso_format: Text = ""
duration: float = 0
class TestCaseInOut(BaseModel):
config_vars: VariablesMapping = {}
export_vars: Dict = {}
class RequestStat(BaseModel):
content_size: float = 0
response_time_ms: float = 0
elapsed_ms: float = 0
class AddressData(BaseModel):
client_ip: Text = "N/A"
client_port: int = 0
server_ip: Text = "N/A"
server_port: int = 0
class RequestData(BaseModel):
method: MethodEnum = MethodEnum.GET
url: Url
headers: Headers = {}
cookies: Cookies = {}
body: Union[Text, bytes, List, Dict, None] = {}
class ResponseData(BaseModel):
status_code: int
headers: Dict
cookies: Cookies
encoding: Union[Text, None] = None
content_type: Text
body: Union[Text, bytes, List, Dict, None]
class ReqRespData(BaseModel):
request: RequestData
response: ResponseData
class SessionData(BaseModel):
"""request session data, including request, response, validators and stat data"""
success: bool = False
# in most cases, req_resps contains a single request & response
# when a 30X redirect occurs, req_resps contains multiple request & response pairs
req_resps: List[ReqRespData] = []
stat: RequestStat = RequestStat()
address: AddressData = AddressData()
validators: Dict = {}
class StepResult(BaseModel):
"""teststep data, each step maybe corresponding to one request or one testcase"""
name: Text = "" # teststep name
step_type: Text = "" # teststep type, request or testcase
success: bool = False
data: Union[SessionData, List["StepResult"]] = None
elapsed: float = 0.0 # teststep elapsed time
content_size: float = 0 # response content size
export_vars: VariablesMapping = {}
attachment: Text = "" # teststep attachment
StepResult.update_forward_refs()
class IStep(object):
def name(self) -> str:
raise NotImplementedError
def type(self) -> str:
raise NotImplementedError
def struct(self) -> TStep:
raise NotImplementedError
def run(self, runner) -> StepResult:
# runner: HttpRunner
raise NotImplementedError
class TestCaseSummary(BaseModel):
name: Text
success: bool
case_id: Text
time: TestCaseTime
in_out: TestCaseInOut = {}
log: Text = ""
step_results: List[StepResult] = []
class PlatformInfo(BaseModel):
httprunner_version: Text
python_version: Text
platform: Text
class Stat(BaseModel):
total: int = 0
success: int = 0
fail: int = 0
class TestSuiteSummary(BaseModel):
success: bool = False
stat: Stat = Stat()
time: TestCaseTime = TestCaseTime()
platform: PlatformInfo
testcases: List[TestCaseSummary]

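Taken together, the models above form a simple tree: a TestCase holds one TConfig plus an ordered list of TStep, and a run produces StepResult entries that may nest (a referenced testcase yields `List["StepResult"]`). A minimal stdlib sketch of that shape, using plain dataclasses as stand-ins for the pydantic models (the field subset is illustrative only, not the full schema):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class Config:  # stand-in for TConfig
    name: str
    base_url: str = ""
    variables: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Step:  # stand-in for TStep
    name: str
    request: Optional[Dict[str, Any]] = None  # stand-in for TRequest
    validators: List[Dict] = field(default_factory=list)


@dataclass
class Case:  # stand-in for TestCase
    config: Config
    teststeps: List[Step] = field(default_factory=list)


# build a one-step case the way converted testcases are structured
case = Case(
    config=Config(name="demo", base_url="https://postman-echo.com"),
    teststeps=[
        Step(
            name="get with params",
            request={"method": "GET", "url": "/get"},
            validators=[{"assert_equal": ["status_code", 200]}],
        )
    ],
)
print(case.config.name, len(case.teststeps))
```

The real models add validation, field aliases (e.g. `validate` → `validators`), and defaults via pydantic; the tree shape, however, is exactly this.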
httprunner/parser.py Normal file

@@ -0,0 +1,606 @@
import ast
import builtins
import os
import re
from typing import Any, Callable, Dict, List, Set, Text
from urllib.parse import urlparse
from loguru import logger
from httprunner import exceptions, loader, utils
from httprunner.models import FunctionsMapping, VariablesMapping
# use $$ to escape $ notation
dolloar_regex_compile = re.compile(r"\$\$")
# variable notation, e.g. ${var} or $var
# variable should start with a-zA-Z_
variable_regex_compile = re.compile(r"\$\{([a-zA-Z_]\w*)\}|\$([a-zA-Z_]\w*)")
# function notation, e.g. ${func1($var_1, $var_3)}
function_regex_compile = re.compile(r"\$\{([a-zA-Z_]\w*)\(([\$\w\.\-/\s=,]*)\)\}")
def parse_string_value(str_value: Text) -> Any:
"""parse string to number if possible
e.g. "123" => 123
"12.2" => 12.3
"abc" => "abc"
"$var" => "$var"
"""
try:
return ast.literal_eval(str_value)
except ValueError:
return str_value
except SyntaxError:
# e.g. $var, ${func}
return str_value
def build_url(base_url, step_url):
"""prepend url with base_url unless it's already an absolute URL"""
o_step_url = urlparse(step_url)
if o_step_url.netloc != "":
# step url is absolute url
return step_url
# step url is relative, based on base url
o_base_url = urlparse(base_url)
if o_base_url.netloc == "":
# base url missing
raise exceptions.ParamsError("base url missing!")
path = o_base_url.path.rstrip("/") + "/" + o_step_url.path.lstrip("/")
o_step_url = (
o_step_url._replace(scheme=o_base_url.scheme)
._replace(netloc=o_base_url.netloc)
._replace(path=path)
)
return o_step_url.geturl()
def regex_findall_variables(raw_string: Text) -> List[Text]:
"""extract all variable names from content, which is in format $variable
Args:
raw_string (str): string content
Returns:
list: variables list extracted from string content
Examples:
>>> regex_findall_variables("$variable")
["variable"]
>>> regex_findall_variables("/blog/$postid")
["postid"]
>>> regex_findall_variables("/$var1/$var2")
["var1", "var2"]
>>> regex_findall_variables("abc")
[]
"""
try:
match_start_position = raw_string.index("$", 0)
except ValueError:
return []
vars_list = []
while match_start_position < len(raw_string):
# Notice: notation priority
# $$ > $var
# search $$
dollar_match = dolloar_regex_compile.match(raw_string, match_start_position)
if dollar_match:
match_start_position = dollar_match.end()
continue
# search variable like ${var} or $var
var_match = variable_regex_compile.match(raw_string, match_start_position)
if var_match:
var_name = var_match.group(1) or var_match.group(2)
vars_list.append(var_name)
match_start_position = var_match.end()
continue
curr_position = match_start_position
try:
# find next $ location
match_start_position = raw_string.index("$", curr_position + 1)
except ValueError:
# break while loop
break
return vars_list
def regex_findall_functions(content: Text) -> List[Text]:
"""extract all functions from string content, which are in format ${fun()}
Args:
content (str): string content
Returns:
list: functions list extracted from string content
Examples:
>>> regex_findall_functions("${func(5)}")
["func(5)"]
>>> regex_findall_functions("${func(a=1, b=2)}")
["func(a=1, b=2)"]
>>> regex_findall_functions("/api/1000?_t=${get_timestamp()}")
["get_timestamp()"]
>>> regex_findall_functions("/api/${add(1, 2)}")
["add(1, 2)"]
>>> regex_findall_functions("/api/${add(1, 2)}?_t=${get_timestamp()}")
["add(1, 2)", "get_timestamp()"]
"""
try:
return function_regex_compile.findall(content)
except TypeError as ex:
logger.error(f"regex findall functions error: {ex}")
return []
def extract_variables(content: Any) -> Set:
"""extract all variables in content recursively."""
if isinstance(content, (list, set, tuple)):
variables = set()
for item in content:
variables = variables | extract_variables(item)
return variables
elif isinstance(content, dict):
variables = set()
for key, value in content.items():
variables = variables | extract_variables(value)
return variables
elif isinstance(content, str):
return set(regex_findall_variables(content))
return set()
def parse_function_params(params: Text) -> Dict:
"""parse function params to args and kwargs.
Args:
params (str): function param in string
Returns:
dict: function meta dict
{
"args": [],
"kwargs": {}
}
Examples:
>>> parse_function_params("")
{'args': [], 'kwargs': {}}
>>> parse_function_params("5")
{'args': [5], 'kwargs': {}}
>>> parse_function_params("1, 2")
{'args': [1, 2], 'kwargs': {}}
>>> parse_function_params("a=1, b=2")
{'args': [], 'kwargs': {'a': 1, 'b': 2}}
>>> parse_function_params("1, 2, a=3, b=4")
{'args': [1, 2], 'kwargs': {'a': 3, 'b': 4}}
"""
function_meta = {"args": [], "kwargs": {}}
params_str = params.strip()
if params_str == "":
return function_meta
args_list = params_str.split(",")
for arg in args_list:
arg = arg.strip()
if "=" in arg:
key, value = arg.split("=")
function_meta["kwargs"][key.strip()] = parse_string_value(value.strip())
else:
function_meta["args"].append(parse_string_value(arg))
return function_meta
def get_mapping_variable(
variable_name: Text, variables_mapping: VariablesMapping
) -> Any:
"""get variable from variables_mapping.
Args:
variable_name (str): variable name
variables_mapping (dict): variables mapping
Returns:
mapping variable value.
Raises:
exceptions.VariableNotFound: variable is not found.
"""
# TODO: get variable from debugtalk module and environ
try:
return variables_mapping[variable_name]
except KeyError:
raise exceptions.VariableNotFound(
f"{variable_name} not found in {variables_mapping}"
)
def get_mapping_function(
function_name: Text, functions_mapping: FunctionsMapping
) -> Callable:
"""get function from functions_mapping,
if not found, fall back to builtin functions.
Args:
function_name (str): function name
functions_mapping (dict): functions mapping
Returns:
mapping function object.
Raises:
exceptions.FunctionNotFound: function is neither defined in debugtalk.py nor builtin.
"""
if function_name in functions_mapping:
return functions_mapping[function_name]
elif function_name in ["parameterize", "P"]:
return loader.load_csv_file
elif function_name in ["environ", "ENV"]:
return utils.get_os_environ
elif function_name in ["multipart_encoder", "multipart_content_type"]:
# extension for upload test
from httprunner.ext import uploader
return getattr(uploader, function_name)
try:
# check if HttpRunner builtin functions
built_in_functions = loader.load_builtin_functions()
return built_in_functions[function_name]
except KeyError:
pass
try:
# check if Python builtin functions
return getattr(builtins, function_name)
except AttributeError:
pass
raise exceptions.FunctionNotFound(f"{function_name} is not found.")
def parse_string(
raw_string: Text,
variables_mapping: VariablesMapping,
functions_mapping: FunctionsMapping,
) -> Any:
"""parse string content with variables and functions mapping.
Args:
raw_string: raw string content to be parsed.
variables_mapping: variables mapping.
functions_mapping: functions mapping.
Returns:
str: parsed string content.
Examples:
>>> raw_string = "abc${add_one($num)}def"
>>> variables_mapping = {"num": 3}
>>> functions_mapping = {"add_one": lambda x: x + 1}
>>> parse_string(raw_string, variables_mapping, functions_mapping)
"abc4def"
"""
try:
match_start_position = raw_string.index("$", 0)
parsed_string = raw_string[0:match_start_position]
except ValueError:
parsed_string = raw_string
return parsed_string
while match_start_position < len(raw_string):
# Notice: notation priority
# $$ > ${func($a, $b)} > $var
# search $$
dollar_match = dolloar_regex_compile.match(raw_string, match_start_position)
if dollar_match:
match_start_position = dollar_match.end()
parsed_string += "$"
continue
# search function like ${func($a, $b)}
func_match = function_regex_compile.match(raw_string, match_start_position)
if func_match:
func_name = func_match.group(1)
func = get_mapping_function(func_name, functions_mapping)
func_params_str = func_match.group(2)
function_meta = parse_function_params(func_params_str)
args = function_meta["args"]
kwargs = function_meta["kwargs"]
parsed_args = parse_data(args, variables_mapping, functions_mapping)
parsed_kwargs = parse_data(kwargs, variables_mapping, functions_mapping)
try:
func_eval_value = func(*parsed_args, **parsed_kwargs)
except Exception as ex:
logger.error(
f"call function error:\n"
f"func_name: {func_name}\n"
f"args: {parsed_args}\n"
f"kwargs: {parsed_kwargs}\n"
f"{type(ex).__name__}: {ex}"
)
raise
func_raw_str = "${" + func_name + f"({func_params_str})" + "}"
if func_raw_str == raw_string:
# raw_string is a function, e.g. "${add_one(3)}", return its eval value directly
return func_eval_value
# raw_string contains one or many functions, e.g. "abc${add_one(3)}def"
parsed_string += str(func_eval_value)
match_start_position = func_match.end()
continue
# search variable like ${var} or $var
var_match = variable_regex_compile.match(raw_string, match_start_position)
if var_match:
var_name = var_match.group(1) or var_match.group(2)
var_value = get_mapping_variable(var_name, variables_mapping)
if f"${var_name}" == raw_string or "${" + var_name + "}" == raw_string:
# raw_string is a variable, $var or ${var}, return its value directly
return var_value
# raw_string contains one or many variables, e.g. "abc${var}def"
parsed_string += str(var_value)
match_start_position = var_match.end()
continue
curr_position = match_start_position
try:
# find next $ location
match_start_position = raw_string.index("$", curr_position + 1)
remain_string = raw_string[curr_position:match_start_position]
except ValueError:
remain_string = raw_string[curr_position:]
# break while loop
match_start_position = len(raw_string)
parsed_string += remain_string
return parsed_string
def parse_data(
raw_data: Any,
variables_mapping: VariablesMapping = None,
functions_mapping: FunctionsMapping = None,
) -> Any:
"""parse raw data with evaluated variables mapping.
Notice: variables_mapping should not contain any variable or function.
"""
if isinstance(raw_data, str):
# string content may contain variables and functions
variables_mapping = variables_mapping or {}
functions_mapping = functions_mapping or {}
# only strip spaces and tabs; \n\r are kept because they may be used in the content
raw_data = raw_data.strip(" \t")
return parse_string(raw_data, variables_mapping, functions_mapping)
elif isinstance(raw_data, (list, set, tuple)):
return [
parse_data(item, variables_mapping, functions_mapping) for item in raw_data
]
elif isinstance(raw_data, dict):
parsed_data = {}
for key, value in raw_data.items():
parsed_key = parse_data(key, variables_mapping, functions_mapping)
parsed_value = parse_data(value, variables_mapping, functions_mapping)
parsed_data[parsed_key] = parsed_value
return parsed_data
else:
# other types, e.g. None, int, float, bool
return raw_data
def parse_variables_mapping(
variables_mapping: VariablesMapping, functions_mapping: FunctionsMapping = None
) -> VariablesMapping:
parsed_variables: VariablesMapping = {}
while len(parsed_variables) != len(variables_mapping):
for var_name in variables_mapping:
if var_name in parsed_variables:
continue
var_value = variables_mapping[var_name]
variables = extract_variables(var_value)
# check if reference variable itself
if var_name in variables:
# e.g.
# variables_mapping = {"token": "abc$token"}
# variables_mapping = {"key": ["$key", 2]}
raise exceptions.VariableNotFound(var_name)
# check if reference variable not in variables_mapping
not_defined_variables = [
v_name for v_name in variables if v_name not in variables_mapping
]
if not_defined_variables:
# e.g. {"varA": "123$varB", "varB": "456$varC"}
# e.g. {"varC": "${sum_two($a, $b)}"}
raise exceptions.VariableNotFound(not_defined_variables)
try:
parsed_value = parse_data(
var_value, parsed_variables, functions_mapping
)
except exceptions.VariableNotFound:
continue
parsed_variables[var_name] = parsed_value
return parsed_variables
def parse_parameters(
parameters: Dict,
) -> List[Dict]:
"""parse parameters and generate cartesian product.
Args:
parameters (Dict): parameter name and value mapping
parameter value may be in three types:
(1) data list, e.g. ["iOS/10.1", "iOS/10.2", "iOS/10.3"]
(2) call built-in parameterize function, "${parameterize(account.csv)}"
(3) call custom function in debugtalk.py, "${gen_app_version()}"
Returns:
list: cartesian product list
Examples:
>>> parameters = {
"user_agent": ["iOS/10.1", "iOS/10.2", "iOS/10.3"],
"username-password": "${parameterize(account.csv)}",
"app_version": "${gen_app_version()}",
}
>>> parse_parameters(parameters)
"""
parsed_parameters_list: List[List[Dict]] = []
# load project_meta functions
project_meta = loader.load_project_meta(os.getcwd())
functions_mapping = project_meta.functions
for parameter_name, parameter_content in parameters.items():
parameter_name_list = parameter_name.split("-")
if isinstance(parameter_content, List):
# (1) data list
# e.g. {"app_version": ["2.8.5", "2.8.6"]}
# => [{"app_version": "2.8.5", "app_version": "2.8.6"}]
# e.g. {"username-password": [["user1", "111111"], ["test2", "222222"]}
# => [{"username": "user1", "password": "111111"}, {"username": "user2", "password": "222222"}]
parameter_content_list: List[Dict] = []
for parameter_item in parameter_content:
if not isinstance(parameter_item, (list, tuple)):
# "2.8.5" => ["2.8.5"]
parameter_item = [parameter_item]
# ["app_version"], ["2.8.5"] => {"app_version": "2.8.5"}
# ["username", "password"], ["user1", "111111"] => {"username": "user1", "password": "111111"}
parameter_content_dict = dict(zip(parameter_name_list, parameter_item))
parameter_content_list.append(parameter_content_dict)
elif isinstance(parameter_content, Text):
# (2) & (3)
parsed_parameter_content: List = parse_data(
parameter_content, {}, functions_mapping
)
if not isinstance(parsed_parameter_content, List):
raise exceptions.ParamsError(
f"parameters content should be in List type, got {parsed_parameter_content} for {parameter_content}"
)
parameter_content_list: List[Dict] = []
for parameter_item in parsed_parameter_content:
if isinstance(parameter_item, Dict):
# get subset by parameter name
# {"app_version": "${gen_app_version()}"}
# gen_app_version() => [{'app_version': '2.8.5'}, {'app_version': '2.8.6'}]
# {"username-password": "${get_account()}"}
# get_account() => [
# {"username": "user1", "password": "111111"},
# {"username": "user2", "password": "222222"}
# ]
parameter_dict: Dict = {
key: parameter_item[key] for key in parameter_name_list
}
elif isinstance(parameter_item, (List, tuple)):
if len(parameter_name_list) == len(parameter_item):
# {"username-password": "${get_account()}"}
# get_account() => [("user1", "111111"), ("user2", "222222")]
parameter_dict = dict(zip(parameter_name_list, parameter_item))
else:
raise exceptions.ParamsError(
f"parameter names length are not equal to value length.\n"
f"parameter names: {parameter_name_list}\n"
f"parameter values: {parameter_item}"
)
elif len(parameter_name_list) == 1:
# {"user_agent": "${get_user_agent()}"}
# get_user_agent() => ["iOS/10.1", "iOS/10.2"]
# each item yields its own dict: {"user_agent": "iOS/10.1"}, then {"user_agent": "iOS/10.2"}
parameter_dict = {parameter_name_list[0]: parameter_item}
else:
raise exceptions.ParamsError(
f"Invalid parameter names and values:\n"
f"parameter names: {parameter_name_list}\n"
f"parameter values: {parameter_item}"
)
parameter_content_list.append(parameter_dict)
else:
raise exceptions.ParamsError(
f"parameter content should be List or Text(variables or functions call), got {parameter_content}"
)
parsed_parameters_list.append(parameter_content_list)
return utils.gen_cartesian_product(*parsed_parameters_list)
class Parser(object):
def __init__(self, functions_mapping: FunctionsMapping = None) -> None:
self.functions_mapping = functions_mapping
def parse_string(
self, raw_string: Text, variables_mapping: VariablesMapping
) -> Any:
return parse_string(raw_string, variables_mapping, self.functions_mapping)
def parse_variables(self, variables_mapping: VariablesMapping) -> VariablesMapping:
return parse_variables_mapping(variables_mapping, self.functions_mapping)
def parse_data(
self, raw_data: Any, variables_mapping: VariablesMapping = None
) -> Any:
return parse_data(raw_data, variables_mapping, self.functions_mapping)
def get_mapping_function(self, func_name: Text) -> Callable:
return get_mapping_function(func_name, self.functions_mapping)

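The precedence implemented in parse_string above ($$ escape beats ${func(...)}, which beats $var/${var}) can be exercised with a compact, stdlib-only re-implementation. This is a deliberately simplified sketch: it renders everything to text and supports positional function args only, whereas the real parse_string preserves the native type when the whole string is a single variable or function call, and also handles kwargs:

```python
import re

# Regexes mirroring parser.py: $$ escape, ${var}/$var, ${func(params)}
DOLLAR = re.compile(r"\$\$")
VAR = re.compile(r"\$\{([a-zA-Z_]\w*)\}|\$([a-zA-Z_]\w*)")
FUNC = re.compile(r"\$\{([a-zA-Z_]\w*)\(([\$\w\.\-/\s=,]*)\)\}")


def render(raw, variables, functions):
    """Resolve $$, then ${func(...)}, then $var/${var}, left to right (text only)."""
    out, pos = "", 0
    while pos < len(raw):
        if raw[pos] != "$":
            nxt = raw.find("$", pos)
            if nxt == -1:  # no more notation, copy the tail
                out += raw[pos:]
                break
            out += raw[pos:nxt]
            pos = nxt
            continue
        m = DOLLAR.match(raw, pos)
        if m:  # $$ -> literal $
            out += "$"
            pos = m.end()
            continue
        m = FUNC.match(raw, pos)
        if m:  # ${func(a, b)} -> render each param, then call the function
            name, params = m.group(1), m.group(2)
            args = [render(p.strip(), variables, functions)
                    for p in params.split(",") if p.strip()]
            out += str(functions[name](*args))
            pos = m.end()
            continue
        m = VAR.match(raw, pos)
        if m:  # ${var} or $var -> substitute the mapped value
            out += str(variables[m.group(1) or m.group(2)])
            pos = m.end()
            continue
        out += raw[pos]  # a lone $ that matches nothing
        pos += 1
    return out


print(render("abc${add_one($num)}def", {"num": 3},
             {"add_one": lambda x: int(x) + 1}))  # -> abc4def
```

Note how the parse order matters: matching VAR before FUNC would consume the `$num` inside `${add_one($num)}` wrongly; the real parser orders its match attempts the same way.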
httprunner/parser_test.py Normal file

@@ -0,0 +1,574 @@
import os
import time
import unittest
from httprunner import parser
from httprunner.exceptions import FunctionNotFound, VariableNotFound
from httprunner.loader import load_project_meta
class TestParserBasic(unittest.TestCase):
def test_build_url(self):
url = parser.build_url("https://postman-echo.com", "/get")
self.assertEqual(url, "https://postman-echo.com/get")
url = parser.build_url("https://postman-echo.com", "get")
self.assertEqual(url, "https://postman-echo.com/get")
url = parser.build_url("https://postman-echo.com/", "/get")
self.assertEqual(url, "https://postman-echo.com/get")
url = parser.build_url("https://postman-echo.com/abc/", "/get?a=1&b=2")
self.assertEqual(url, "https://postman-echo.com/abc/get?a=1&b=2")
url = parser.build_url("https://postman-echo.com/abc/", "get?a=1&b=2")
self.assertEqual(url, "https://postman-echo.com/abc/get?a=1&b=2")
# omit query string in base url
url = parser.build_url("https://postman-echo.com/abc?x=6&y=9", "/get?a=1&b=2")
self.assertEqual(url, "https://postman-echo.com/abc/get?a=1&b=2")
url = parser.build_url("", "https://postman-echo.com/get")
self.assertEqual(url, "https://postman-echo.com/get")
# notice: step request url > config base url
url = parser.build_url("https://postman-echo.com", "https://httpbin.org/get")
self.assertEqual(url, "https://httpbin.org/get")
def test_parse_variables_mapping(self):
variables = {"varA": "$varB", "varB": "$varC", "varC": "123", "a": 1, "b": 2}
parsed_variables = parser.parse_variables_mapping(variables)
print(parsed_variables)
self.assertEqual(parsed_variables["varA"], "123")
self.assertEqual(parsed_variables["varB"], "123")
def test_parse_variables_mapping_exception(self):
variables = {"varA": "$varB", "varB": "$varC", "a": 1, "b": 2}
with self.assertRaises(VariableNotFound):
parser.parse_variables_mapping(variables)
def test_parse_string_value(self):
self.assertEqual(parser.parse_string_value("123"), 123)
self.assertEqual(parser.parse_string_value("12.3"), 12.3)
self.assertEqual(parser.parse_string_value("a123"), "a123")
self.assertEqual(parser.parse_string_value("$var"), "$var")
self.assertEqual(parser.parse_string_value("${func}"), "${func}")
def test_regex_findall_variables(self):
self.assertEqual(parser.regex_findall_variables("$variable"), ["variable"])
self.assertEqual(parser.regex_findall_variables("${variable}123"), ["variable"])
self.assertEqual(parser.regex_findall_variables("/blog/$postid"), ["postid"])
self.assertEqual(
parser.regex_findall_variables("/$var1/$var2"), ["var1", "var2"]
)
self.assertEqual(parser.regex_findall_variables("abc"), [])
self.assertEqual(parser.regex_findall_variables("Z:2>1*0*1+1$a"), ["a"])
self.assertEqual(parser.regex_findall_variables("Z:2>1*0*1+1$$a"), [])
self.assertEqual(parser.regex_findall_variables("Z:2>1*0*1+1$$$a"), ["a"])
self.assertEqual(parser.regex_findall_variables("Z:2>1*0*1+1$$$$a"), [])
self.assertEqual(parser.regex_findall_variables("Z:2>1*0*1+1$$a$b"), ["b"])
self.assertEqual(parser.regex_findall_variables("Z:2>1*0*1+1$$a$$b"), [])
# variable should not start with digit
self.assertEqual(parser.regex_findall_variables("$1a"), [])
self.assertEqual(parser.regex_findall_variables("${1a}"), [])
def test_extract_variables(self):
self.assertEqual(parser.extract_variables("$var"), {"var"})
self.assertEqual(parser.extract_variables("$var123"), {"var123"})
self.assertEqual(parser.extract_variables("$var_name"), {"var_name"})
self.assertEqual(parser.extract_variables("var"), set())
self.assertEqual(parser.extract_variables("a$var"), {"var"})
self.assertEqual(parser.extract_variables("$v ar"), {"v"})
self.assertEqual(parser.extract_variables(" "), set())
self.assertEqual(parser.extract_variables("$abc*"), {"abc"})
self.assertEqual(parser.extract_variables("${func()}"), set())
self.assertEqual(parser.extract_variables("${func(1,2)}"), set())
self.assertEqual(
parser.extract_variables("${gen_md5($TOKEN, $data, $random)}"),
{"TOKEN", "data", "random"},
)
self.assertEqual(parser.extract_variables("Z:2>1*0*1+1$$1"), set())
def test_parse_function_params(self):
self.assertEqual(parser.parse_function_params(""), {"args": [], "kwargs": {}})
self.assertEqual(parser.parse_function_params("5"), {"args": [5], "kwargs": {}})
self.assertEqual(
parser.parse_function_params("1, 2"), {"args": [1, 2], "kwargs": {}}
)
self.assertEqual(
parser.parse_function_params("a=1, b=2"),
{"args": [], "kwargs": {"a": 1, "b": 2}},
)
self.assertEqual(
parser.parse_function_params("a= 1, b =2"),
{"args": [], "kwargs": {"a": 1, "b": 2}},
)
self.assertEqual(
parser.parse_function_params("1, 2, a=3, b=4"),
{"args": [1, 2], "kwargs": {"a": 3, "b": 4}},
)
self.assertEqual(
parser.parse_function_params("$request, 123"),
{"args": ["$request", 123], "kwargs": {}},
)
self.assertEqual(parser.parse_function_params(" "), {"args": [], "kwargs": {}})
self.assertEqual(
parser.parse_function_params("hello world, a=3, b=4"),
{"args": ["hello world"], "kwargs": {"a": 3, "b": 4}},
)
self.assertEqual(
parser.parse_function_params("$request, 12 3"),
{"args": ["$request", "12 3"], "kwargs": {}},
)
def test_extract_functions(self):
self.assertEqual(parser.regex_findall_functions("${func()}"), [("func", "")])
self.assertEqual(parser.regex_findall_functions("${func(5)}"), [("func", "5")])
self.assertEqual(
parser.regex_findall_functions("${func(a=1, b=2)}"), [("func", "a=1, b=2")]
)
self.assertEqual(
parser.regex_findall_functions("${func(1, $b, c=$x, d=4)}"),
[("func", "1, $b, c=$x, d=4")],
)
self.assertEqual(
parser.regex_findall_functions("/api/1000?_t=${get_timestamp()}"),
[("get_timestamp", "")],
)
self.assertEqual(
parser.regex_findall_functions("/api/${add(1, 2)}"), [("add", "1, 2")]
)
self.assertEqual(
parser.regex_findall_functions("/api/${add(1, 2)}?_t=${get_timestamp()}"),
[("add", "1, 2"), ("get_timestamp", "")],
)
self.assertEqual(
parser.regex_findall_functions("abc${func(1, 2, a=3, b=4)}def"),
[("func", "1, 2, a=3, b=4")],
)
def test_parse_data_string_with_variables(self):
variables_mapping = {
"var_1": "abc",
"var_2": "def",
"var_3": 123,
"var_4": {"a": 1},
"var_5": True,
"var_6": None,
}
self.assertEqual(parser.parse_data("$var_1", variables_mapping), "abc")
self.assertEqual(parser.parse_data("${var_1}", variables_mapping), "abc")
self.assertEqual(parser.parse_data("var_1", variables_mapping), "var_1")
self.assertEqual(parser.parse_data("$var_1#XYZ", variables_mapping), "abc#XYZ")
self.assertEqual(
parser.parse_data("${var_1}#XYZ", variables_mapping), "abc#XYZ"
)
self.assertEqual(
parser.parse_data("/$var_1/$var_2/var3", variables_mapping), "/abc/def/var3"
)
self.assertEqual(parser.parse_data("$var_3", variables_mapping), 123)
self.assertEqual(parser.parse_data("$var_4", variables_mapping), {"a": 1})
self.assertEqual(parser.parse_data("$var_5", variables_mapping), True)
self.assertEqual(parser.parse_data("abc$var_5", variables_mapping), "abcTrue")
self.assertEqual(
parser.parse_data("abc$var_4", variables_mapping), "abc{'a': 1}"
)
self.assertEqual(parser.parse_data("$var_6", variables_mapping), None)
with self.assertRaises(VariableNotFound):
parser.parse_data("/api/$SECRET_KEY", variables_mapping)
self.assertEqual(
parser.parse_data(["$var_1", "$var_2"], variables_mapping), ["abc", "def"]
)
self.assertEqual(
parser.parse_data({"$var_1": "$var_2"}, variables_mapping), {"abc": "def"}
)
# format: $var
value = parser.parse_data("ABC$var_1", variables_mapping)
self.assertEqual(value, "ABCabc")
value = parser.parse_data("ABC$var_1$var_3", variables_mapping)
self.assertEqual(value, "ABCabc123")
value = parser.parse_data("ABC$var_1/$var_3", variables_mapping)
self.assertEqual(value, "ABCabc/123")
value = parser.parse_data("ABC$var_1/", variables_mapping)
self.assertEqual(value, "ABCabc/")
value = parser.parse_data("ABC$var_1$", variables_mapping)
self.assertEqual(value, "ABCabc$")
value = parser.parse_data("ABC$var_1/123$var_1/456", variables_mapping)
self.assertEqual(value, "ABCabc/123abc/456")
value = parser.parse_data("ABC$var_1/$var_2/$var_1", variables_mapping)
self.assertEqual(value, "ABCabc/def/abc")
value = parser.parse_data("func1($var_1, $var_3)", variables_mapping)
self.assertEqual(value, "func1(abc, 123)")
# format: ${var}
value = parser.parse_data("ABC${var_1}", variables_mapping)
self.assertEqual(value, "ABCabc")
value = parser.parse_data("ABC${var_1}${var_3}", variables_mapping)
self.assertEqual(value, "ABCabc123")
value = parser.parse_data("ABC${var_1}/${var_3}", variables_mapping)
self.assertEqual(value, "ABCabc/123")
value = parser.parse_data("ABC${var_1}/", variables_mapping)
self.assertEqual(value, "ABCabc/")
value = parser.parse_data("ABC${var_1}123", variables_mapping)
self.assertEqual(value, "ABCabc123")
value = parser.parse_data("ABC${var_1}/123${var_1}/456", variables_mapping)
self.assertEqual(value, "ABCabc/123abc/456")
value = parser.parse_data("ABC${var_1}/${var_2}/${var_1}", variables_mapping)
self.assertEqual(value, "ABCabc/def/abc")
value = parser.parse_data("func1(${var_1}, ${var_3})", variables_mapping)
self.assertEqual(value, "func1(abc, 123)")
def test_parse_data_multiple_identical_variables(self):
variables_mapping = {
"var_1": "abc",
"var_2": "def",
}
self.assertEqual(
parser.parse_data("/$var_1/$var_2/$var_1", variables_mapping),
"/abc/def/abc",
)
variables_mapping = {"userid": 100, "data": 1498}
content = "/users/$userid/training/$data?userId=$userid&data=$data"
self.assertEqual(
parser.parse_data(content, variables_mapping),
"/users/100/training/1498?userId=100&data=1498",
)
variables_mapping = {"user": 100, "userid": 1000, "data": 1498}
content = "/users/$user/$userid/$data?userId=$userid&data=$data"
self.assertEqual(
parser.parse_data(content, variables_mapping),
"/users/100/1000/1498?userId=1000&data=1498",
)
def test_parse_data_string_with_functions(self):
import random
import string
functions_mapping = {
"gen_random_string": lambda str_len: "".join(
random.choice(string.ascii_letters + string.digits)
for _ in range(str_len)
)
}
result = parser.parse_data(
"${gen_random_string(5)}", functions_mapping=functions_mapping
)
self.assertEqual(len(result), 5)
functions_mapping["add_two_nums"] = lambda a, b=1: a + b
self.assertEqual(
parser.parse_data(
"${add_two_nums(1)}", functions_mapping=functions_mapping
),
2,
)
self.assertEqual(
parser.parse_data(
"${add_two_nums(1, 2)}", functions_mapping=functions_mapping
),
3,
)
self.assertEqual(
parser.parse_data(
"/api/${add_two_nums(1, 2)}", functions_mapping=functions_mapping
),
"/api/3",
)
with self.assertRaises(FunctionNotFound):
parser.parse_data("/api/${gen_md5(abc)}")
variables_mapping = {
"var_1": "abc",
"var_2": "def",
"var_3": 123,
"var_4": {"a": 1},
"var_5": True,
"var_6": None,
}
functions_mapping = {"func1": lambda x, y: str(x) + str(y)}
value = parser.parse_data(
"${func1($var_1, $var_3)}", variables_mapping, functions_mapping
)
self.assertEqual(value, "abc123")
value = parser.parse_data(
"ABC${func1($var_1, $var_3)}DE", variables_mapping, functions_mapping
)
self.assertEqual(value, "ABCabc123DE")
value = parser.parse_data(
"ABC${func1($var_1, $var_3)}$var_5", variables_mapping, functions_mapping
)
self.assertEqual(value, "ABCabc123True")
value = parser.parse_data(
"ABC${func1($var_1, $var_3)}DE$var_4", variables_mapping, functions_mapping
)
self.assertEqual(value, "ABCabc123DE{'a': 1}")
value = parser.parse_data(
"ABC$var_5${func1($var_1, $var_3)}", variables_mapping, functions_mapping
)
self.assertEqual(value, "ABCTrueabc123")
value = parser.parse_data(
"ABC${ord(a)}DEF${len(abcd)}", variables_mapping, functions_mapping
)
self.assertEqual(value, "ABC97DEF4")
def test_parse_data_func_var_duplicate(self):
variables_mapping = {
"var_1": "abc",
"var_2": "def",
"var_3": 123,
"var_4": {"a": 1},
"var_5": True,
"var_6": None,
}
functions_mapping = {"func1": lambda x, y: str(x) + str(y)}
value = parser.parse_data(
"ABC${func1($var_1, $var_3)}--${func1($var_1, $var_3)}",
variables_mapping,
functions_mapping,
)
self.assertEqual(value, "ABCabc123--abc123")
value = parser.parse_data(
"ABC${func1($var_1, $var_3)}$var_1", variables_mapping, functions_mapping
)
self.assertEqual(value, "ABCabc123abc")
value = parser.parse_data(
"ABC${func1($var_1, $var_3)}$var_1--${func1($var_1, $var_3)}$var_1",
variables_mapping,
functions_mapping,
)
self.assertEqual(value, "ABCabc123abc--abc123abc")
def test_parse_data_func_abnormal(self):
variables_mapping = {
"var_1": "abc",
"var_2": "def",
"var_3": 123,
"var_4": {"a": 1},
"var_5": True,
"var_6": None,
}
functions_mapping = {"func1": lambda x, y: str(x) + str(y)}
# {
value = parser.parse_data("ABC$var_1{", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc{")
value = parser.parse_data(
"{ABC$var_1{}a}", variables_mapping, functions_mapping
)
self.assertEqual(value, "{ABCabc{}a}")
value = parser.parse_data(
"AB{C$var_1{}a}", variables_mapping, functions_mapping
)
self.assertEqual(value, "AB{Cabc{}a}")
# }
value = parser.parse_data("ABC$var_1}", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc}")
# $$
value = parser.parse_data("ABC$$var_1{", variables_mapping, functions_mapping)
self.assertEqual(value, "ABC$var_1{")
# $$$
value = parser.parse_data("ABC$$$var_1{", variables_mapping, functions_mapping)
self.assertEqual(value, "ABC$abc{")
# $$$$
value = parser.parse_data("ABC$$$$var_1{", variables_mapping, functions_mapping)
self.assertEqual(value, "ABC$$var_1{")
# ${
value = parser.parse_data("ABC$var_1${", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc${")
value = parser.parse_data("ABC$var_1${a", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc${a")
# $}
value = parser.parse_data("ABC$var_1$}a", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc$}a")
# }{
value = parser.parse_data("ABC$var_1}{a", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc}{a")
# {}
value = parser.parse_data("ABC$var_1{}a", variables_mapping, functions_mapping)
self.assertEqual(value, "ABCabc{}a")
def test_parse_data_request(self):
content = {
"request": {
"url": "/api/users/$uid",
"method": "$method",
"headers": {"token": "$token"},
"data": {
"null": None,
"true": True,
"false": False,
"empty_str": "",
"value": "abc${add_one(3)}def",
},
}
}
variables_mapping = {"uid": 1000, "method": "POST", "token": "abc123"}
functions_mapping = {"add_one": lambda x: x + 1}
result = parser.parse_data(content, variables_mapping, functions_mapping)
self.assertEqual("/api/users/1000", result["request"]["url"])
self.assertEqual("abc123", result["request"]["headers"]["token"])
self.assertEqual("POST", result["request"]["method"])
self.assertIsNone(result["request"]["data"]["null"])
self.assertTrue(result["request"]["data"]["true"])
self.assertFalse(result["request"]["data"]["false"])
self.assertEqual("", result["request"]["data"]["empty_str"])
self.assertEqual("abc4def", result["request"]["data"]["value"])
def test_parse_data_testcase(self):
variables = {
"uid": "1000",
"random": "A2dEx",
"authorization": "a83de0ff8d2e896dbd8efb81ba14e17d",
"data": {"name": "user", "password": "123456"},
}
functions = {
"add_two_nums": lambda a, b=1: a + b,
"get_timestamp": lambda: int(time.time() * 1000),
}
testcase_template = {
"url": "http://127.0.0.1:5000/api/users/$uid/${add_two_nums(1,2)}",
"method": "POST",
"headers": {
"Content-Type": "application/json",
"authorization": "$authorization",
"random": "$random",
"sum": "${add_two_nums(1, 2)}",
},
"body": "$data",
}
parsed_testcase = parser.parse_data(testcase_template, variables, functions)
self.assertEqual(
parsed_testcase["url"], "http://127.0.0.1:5000/api/users/1000/3"
)
self.assertEqual(
parsed_testcase["headers"]["authorization"], variables["authorization"]
)
self.assertEqual(parsed_testcase["headers"]["random"], variables["random"])
self.assertEqual(parsed_testcase["body"], variables["data"])
self.assertEqual(parsed_testcase["headers"]["sum"], 3)
def test_parse_parameters_testcase(self):
parameters = {
"user_agent": ["iOS/10.1", "iOS/10.2"],
"username-password": "${parameterize(request_methods/account.csv)}",
"sum": "${calculate_two_nums(1, 2)}",
}
load_project_meta(
os.path.join(
os.path.dirname(os.path.dirname(__file__)),
"examples",
"postman_echo",
"request_methods",
),
)
parsed_params = parser.parse_parameters(parameters)
self.assertEqual(len(parsed_params), 2 * 3 * 2)
self.assertIn(
{
"username": "test1",
"password": "111111",
"user_agent": "iOS/10.1",
"sum": 3,
},
parsed_params,
)
self.assertIn(
{
"username": "test1",
"password": "111111",
"user_agent": "iOS/10.1",
"sum": 1,
},
parsed_params,
)
self.assertIn(
{
"username": "test1",
"password": "111111",
"user_agent": "iOS/10.2",
"sum": 3,
},
parsed_params,
)
self.assertIn(
{
"username": "test1",
"password": "111111",
"user_agent": "iOS/10.2",
"sum": 1,
},
parsed_params,
)
self.assertIn(
{
"username": "test2",
"password": "222222",
"user_agent": "iOS/10.1",
"sum": 3,
},
parsed_params,
)
self.assertIn(
{
"username": "test2",
"password": "222222",
"user_agent": "iOS/10.1",
"sum": 1,
},
parsed_params,
)
self.assertIn(
{
"username": "test2",
"password": "222222",
"user_agent": "iOS/10.2",
"sum": 3,
},
parsed_params,
)
self.assertIn(
{
"username": "test2",
"password": "222222",
"user_agent": "iOS/10.2",
"sum": 1,
},
parsed_params,
)
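The substitution behavior exercised by the tests above can be illustrated with a hypothetical, heavily simplified stand-in for `parser.parse_data` (`simple_parse` is illustrative only; the real parser also resolves `${func(args)}` calls, nested structures, and non-string variable types):

```python
import re

# Hypothetical simplified sketch of the $var / ${var} / $$ substitution
# rules the tests above exercise; the real parser.parse_data additionally
# resolves ${func(args)} calls and preserves non-string variable types.
def simple_parse(content, variables):
    def repl(match):
        if match.group(0) == "$$":
            return "$"  # $$ escapes to a literal dollar sign
        name = match.group(1) or match.group(2)
        return str(variables[name])

    # ${var} or $var references a variable; $$ is an escape
    return re.sub(r"\$\$|\$\{(\w+)\}|\$(\w+)", repl, content)
```

With this sketch, `simple_parse("/$var_1/$var_2/$var_1", {"var_1": "abc", "var_2": "def"})` yields `"/abc/def/abc"`, matching the first assertion in `test_parse_data_multiple_identical_variables`.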

httprunner/response.py
from typing import Dict, Text, Any
import jmespath
from jmespath.exceptions import JMESPathError
from loguru import logger
from httprunner import exceptions
from httprunner.exceptions import ValidationFailure, ParamsError
from httprunner.models import VariablesMapping, Validators
from httprunner.parser import parse_string_value, Parser
def get_uniform_comparator(comparator: Text):
"""convert comparator alias to uniform name"""
if comparator in ["eq", "equals", "equal"]:
return "equal"
elif comparator in ["lt", "less_than"]:
return "less_than"
elif comparator in ["le", "less_or_equals"]:
return "less_or_equals"
elif comparator in ["gt", "greater_than"]:
return "greater_than"
elif comparator in ["ge", "greater_or_equals"]:
return "greater_or_equals"
elif comparator in ["ne", "not_equal"]:
return "not_equal"
elif comparator in ["str_eq", "string_equals"]:
return "string_equals"
elif comparator in ["len_eq", "length_equal"]:
return "length_equal"
elif comparator in [
"len_gt",
"length_greater_than",
]:
return "length_greater_than"
elif comparator in [
"len_ge",
"length_greater_or_equals",
]:
return "length_greater_or_equals"
elif comparator in ["len_lt", "length_less_than"]:
return "length_less_than"
elif comparator in [
"len_le",
"length_less_or_equals",
]:
return "length_less_or_equals"
else:
return comparator
def uniform_validator(validator):
"""unify validator
Args:
        validator (dict): validator may be in two formats:
format1: this is kept for compatibility with the previous versions.
{"check": "status_code", "comparator": "eq", "expect": 201, "message": "test"}
{"check": "status_code", "assert": "eq", "expect": 201, "msg": "test"}
format2: recommended new version, {assert: [check_item, expected_value, msg]}
{'eq': ['status_code', 201, "test"]}
    Returns:
dict: validator info
{
"check": "status_code",
"expect": 201,
"assert": "equal",
            "message": "test"
}
"""
if not isinstance(validator, dict):
raise ParamsError(f"invalid validator: {validator}")
if "check" in validator and "expect" in validator:
# format1
check_item = validator["check"]
expect_value = validator["expect"]
if "assert" in validator:
comparator = validator.get("assert")
else:
comparator = validator.get("comparator", "eq")
if "msg" in validator:
message = validator.get("msg")
else:
message = validator.get("message", "")
elif len(validator) == 1:
# format2
comparator = list(validator.keys())[0]
compare_values = validator[comparator]
if not isinstance(compare_values, list) or len(compare_values) not in [2, 3]:
raise ParamsError(f"invalid validator: {validator}")
check_item = compare_values[0]
expect_value = compare_values[1]
if len(compare_values) == 3:
message = compare_values[2]
else:
# len(compare_values) == 2
message = ""
else:
raise ParamsError(f"invalid validator: {validator}")
    # uniform comparator, e.g. lt => less_than, eq => equal
assert_method = get_uniform_comparator(comparator)
return {
"check": check_item,
"expect": expect_value,
"assert": assert_method,
"message": message,
}
class ResponseObjectBase(object):
def __init__(self, resp_obj, parser: Parser):
"""initialize with a response object
Args:
resp_obj (instance): requests.Response instance
"""
self.resp_obj = resp_obj
self.parser = parser
self.validation_results: Dict = {}
def extract(
self,
extractors: Dict[Text, Text],
variables_mapping: VariablesMapping = None,
) -> Dict[Text, Any]:
if not extractors:
return {}
extract_mapping = {}
for key, field in extractors.items():
if "$" in field:
# field contains variable or function
field = self.parser.parse_data(field, variables_mapping)
field_value = self._search_jmespath(field)
extract_mapping[key] = field_value
logger.info(f"extract mapping: {extract_mapping}")
return extract_mapping
def _search_jmespath(self, expr: Text) -> Any:
try:
check_value = jmespath.search(expr, self.resp_obj)
except JMESPathError as ex:
logger.error(
f"failed to search with jmespath\n"
f"expression: {expr}\n"
f"data: {self.resp_obj}\n"
f"exception: {ex}"
)
raise
return check_value
def validate(
self,
validators: Validators,
variables_mapping: VariablesMapping = None,
):
variables_mapping = variables_mapping or {}
self.validation_results = {}
if not validators:
return
validate_pass = True
failures = []
for v in validators:
if "validate_extractor" not in self.validation_results:
self.validation_results["validate_extractor"] = []
u_validator = uniform_validator(v)
# check item
check_item = u_validator["check"]
if "$" in check_item:
# check_item is variable or function
check_item = self.parser.parse_data(check_item, variables_mapping)
check_item = parse_string_value(check_item)
if check_item and isinstance(check_item, Text):
check_value = self._search_jmespath(check_item)
else:
# variable or function evaluation result is "" or not text
check_value = check_item
# comparator
assert_method = u_validator["assert"]
assert_func = self.parser.get_mapping_function(assert_method)
# expect item
expect_item = u_validator["expect"]
# parse expected value with config/teststep/extracted variables
expect_value = self.parser.parse_data(expect_item, variables_mapping)
# message
message = u_validator["message"]
# parse message with config/teststep/extracted variables
message = self.parser.parse_data(message, variables_mapping)
validate_msg = f"assert {check_item} {assert_method} {expect_value}({type(expect_value).__name__})"
validator_dict = {
"comparator": assert_method,
"check": check_item,
"check_value": check_value,
"expect": expect_item,
"expect_value": expect_value,
"message": message,
}
try:
assert_func(check_value, expect_value, message)
validate_msg += "\t==> pass"
logger.info(validate_msg)
validator_dict["check_result"] = "pass"
except AssertionError as ex:
validate_pass = False
validator_dict["check_result"] = "fail"
validate_msg += "\t==> fail"
validate_msg += (
f"\n"
f"check_item: {check_item}\n"
f"check_value: {check_value}({type(check_value).__name__})\n"
f"assert_method: {assert_method}\n"
f"expect_value: {expect_value}({type(expect_value).__name__})"
)
message = str(ex)
if message:
validate_msg += f"\nmessage: {message}"
logger.error(validate_msg)
failures.append(validate_msg)
self.validation_results["validate_extractor"].append(validator_dict)
if not validate_pass:
            failures_string = "\n".join(failures)
raise ValidationFailure(failures_string)
class ResponseObject(ResponseObjectBase):
def __getattr__(self, key):
if key in ["json", "content", "body"]:
try:
value = self.resp_obj.json()
except ValueError:
value = self.resp_obj.content
elif key == "cookies":
value = self.resp_obj.cookies.get_dict()
else:
try:
value = getattr(self.resp_obj, key)
except AttributeError:
err_msg = "ResponseObject does not have attribute: {}".format(key)
logger.error(err_msg)
raise exceptions.ParamsError(err_msg)
self.__dict__[key] = value
return value
def _search_jmespath(self, expr: Text) -> Any:
resp_obj_meta = {
"status_code": self.status_code,
"headers": self.headers,
"cookies": self.cookies,
"body": self.body,
}
if not expr.startswith(tuple(resp_obj_meta.keys())):
            if hasattr(self.resp_obj, expr):
                return getattr(self.resp_obj, expr)
else:
return expr
try:
check_value = jmespath.search(expr, resp_obj_meta)
except JMESPathError as ex:
logger.error(
f"failed to search with jmespath\n"
f"expression: {expr}\n"
f"data: {resp_obj_meta}\n"
f"exception: {ex}"
)
raise
return check_value
class ThriftResponseObject(ResponseObjectBase):
pass
class SqlResponseObject(ResponseObjectBase):
pass
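The alias normalization in `get_uniform_comparator` above is essentially a table lookup; the same mapping can be sketched as a plain dict (abbreviated here to a few representative entries):

```python
# Sketch of get_uniform_comparator above as a plain dict lookup
# (abbreviated; only some of the aliases are listed here).
COMPARATOR_ALIASES = {
    "eq": "equal", "equals": "equal",
    "lt": "less_than",
    "le": "less_or_equals",
    "gt": "greater_than",
    "ge": "greater_or_equals",
    "ne": "not_equal",
    "str_eq": "string_equals",
    "len_eq": "length_equal",
}

def uniform_comparator(name):
    # unknown names pass through unchanged, mirroring the else branch
    return COMPARATOR_ALIASES.get(name, name)
```

A dict keeps each alias and its canonical name adjacent, so adding a new alias is a one-line change rather than another elif branch.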

import unittest
import requests
from httprunner.parser import Parser
from httprunner.response import ResponseObject, uniform_validator
from httprunner.utils import HTTP_BIN_URL
class TestResponse(unittest.TestCase):
def setUp(self) -> None:
resp = requests.post(
f"{HTTP_BIN_URL}/anything",
json={
"locations": [
{"name": "Seattle", "state": "WA"},
{"name": "New York", "state": "NY"},
{"name": "Bellevue", "state": "WA"},
{"name": "Olympia", "state": "WA"},
]
},
)
parser = Parser(
functions_mapping={"get_name": lambda: "name", "get_num": lambda x: x}
)
self.resp_obj = ResponseObject(resp, parser)
def test_extract(self):
variables_mapping = {"body": "body"}
extract_mapping = self.resp_obj.extract(
{
"var_1": "body.json.locations[0]",
"var_2": "body.json.locations[3].name",
"var_3": "$body.json.locations[3].name",
"var_4": "$body.json.locations[3].${get_name()}",
},
variables_mapping=variables_mapping,
)
self.assertEqual(extract_mapping["var_1"], {"name": "Seattle", "state": "WA"})
self.assertEqual(extract_mapping["var_2"], "Olympia")
self.assertEqual(extract_mapping["var_3"], "Olympia")
self.assertEqual(extract_mapping["var_4"], "Olympia")
def test_validate(self):
self.resp_obj.validate(
[
{"eq": ["body.json.locations[0].name", "Seattle"]},
{"eq": ["body.json.locations[0]", {"name": "Seattle", "state": "WA"}]},
],
)
def test_validate_variables(self):
variables_mapping = {"index": 1, "var_empty": ""}
self.resp_obj.validate(
[
{"eq": ["body.json.locations[$index].name", "New York"]},
{"eq": ["$var_empty", ""]},
],
variables_mapping=variables_mapping,
)
def test_validate_functions(self):
variables_mapping = {"index": 1}
self.resp_obj.validate(
[
{"eq": ["${get_num(0)}", 0]},
{"eq": ["${get_num($index)}", 1]},
],
variables_mapping=variables_mapping,
)
def test_uniform_validator(self):
validators = [
{
"check": "status_code",
"comparator": "eq",
"expect": 201,
"message": "test",
},
{"check": "status_code", "assert": "eq", "expect": 201, "msg": "test"},
{"eq": ["status_code", 201, "test"]},
]
expected = {
"check": "status_code",
"assert": "equal",
"expect": 201,
"message": "test",
}
for validator in validators:
self.assertEqual(uniform_validator(validator), expected)

httprunner/runner.py
import os
import time
import uuid
from datetime import datetime
from typing import Dict, List, Text
try:
import allure
ALLURE = allure
except ModuleNotFoundError:
ALLURE = None
from loguru import logger
from httprunner.client import HttpSession
from httprunner.config import Config
from httprunner.exceptions import ParamsError, ValidationFailure
from httprunner.loader import load_project_meta
from httprunner.models import (
ProjectMeta,
StepResult,
TConfig,
TestCaseInOut,
TestCaseSummary,
TestCaseTime,
VariablesMapping,
)
from httprunner.parser import Parser
from httprunner.utils import LOGGER_FORMAT, merge_variables, ga4_client
class SessionRunner(object):
config: Config
teststeps: List[object] # list of Step
parser: Parser = None
session: HttpSession = None
case_id: Text = ""
root_dir: Text = ""
thrift_client = None
db_engine = None
__config: TConfig
__project_meta: ProjectMeta = None
__export: List[Text] = []
__step_results: List[StepResult] = []
__session_variables: VariablesMapping = {}
__is_referenced: bool = False
# time
__start_at: float = 0
__duration: float = 0
# log
__log_path: Text = ""
def __init(self):
self.__config = self.config.struct()
self.__session_variables = self.__session_variables or {}
self.__start_at = 0
self.__duration = 0
self.__is_referenced = self.__is_referenced or False
self.__project_meta = self.__project_meta or load_project_meta(
self.__config.path
)
self.case_id = self.case_id or str(uuid.uuid4())
self.root_dir = self.root_dir or self.__project_meta.RootDir
self.__log_path = os.path.join(self.root_dir, "logs", f"{self.case_id}.run.log")
self.__step_results = self.__step_results or []
self.session = self.session or HttpSession()
self.parser = self.parser or Parser(self.__project_meta.functions)
def with_session(self, session: HttpSession) -> "SessionRunner":
self.session = session
return self
def get_config(self) -> TConfig:
return self.__config
def set_referenced(self) -> "SessionRunner":
self.__is_referenced = True
return self
def with_case_id(self, case_id: Text) -> "SessionRunner":
self.case_id = case_id
return self
def with_variables(self, variables: VariablesMapping) -> "SessionRunner":
self.__session_variables = variables
return self
def with_export(self, export: List[Text]) -> "SessionRunner":
self.__export = export
return self
def with_thrift_client(self, thrift_client) -> "SessionRunner":
self.thrift_client = thrift_client
return self
def with_db_engine(self, db_engine) -> "SessionRunner":
self.db_engine = db_engine
return self
def __parse_config(self, param: Dict = None) -> None:
# parse config variables
self.__config.variables.update(self.__session_variables)
if param:
self.__config.variables.update(param)
self.__config.variables = self.parser.parse_variables(self.__config.variables)
# parse config name
self.__config.name = self.parser.parse_data(
self.__config.name, self.__config.variables
)
# parse config base url
self.__config.base_url = self.parser.parse_data(
self.__config.base_url, self.__config.variables
)
def get_export_variables(self) -> Dict:
# override testcase export vars with step export
export_var_names = self.__export or self.__config.export
export_vars_mapping = {}
for var_name in export_var_names:
if var_name not in self.__session_variables:
raise ParamsError(
f"failed to export variable {var_name} from session variables {self.__session_variables}"
)
export_vars_mapping[var_name] = self.__session_variables[var_name]
return export_vars_mapping
def get_summary(self) -> TestCaseSummary:
"""get testcase result summary"""
start_at_timestamp = self.__start_at
start_at_iso_format = datetime.utcfromtimestamp(start_at_timestamp).isoformat()
summary_success = True
for step_result in self.__step_results:
if not step_result.success:
summary_success = False
break
return TestCaseSummary(
name=self.__config.name,
success=summary_success,
case_id=self.case_id,
time=TestCaseTime(
start_at=self.__start_at,
start_at_iso_format=start_at_iso_format,
duration=self.__duration,
),
in_out=TestCaseInOut(
config_vars=self.__config.variables,
export_vars=self.get_export_variables(),
),
log=self.__log_path,
step_results=self.__step_results,
)
def merge_step_variables(self, variables: VariablesMapping) -> VariablesMapping:
# override variables
# step variables > extracted variables from previous steps
variables = merge_variables(variables, self.__session_variables)
# step variables > testcase config variables
variables = merge_variables(variables, self.__config.variables)
# parse variables
return self.parser.parse_variables(variables)
def __run_step(self, step):
        """run teststep; step may be any kind that implements the IStep interface
Args:
step (Step): teststep
"""
logger.info(f"run step begin: {step.name()} >>>>>>")
# run step
for i in range(step.retry_times + 1):
try:
if ALLURE is not None:
with ALLURE.step(f"step: {step.name()}"):
step_result: StepResult = step.run(self)
else:
step_result: StepResult = step.run(self)
break
except ValidationFailure:
if i == step.retry_times:
raise
else:
logger.warning(
                        f"run step {step.name()} validation failed, wait {step.retry_interval} sec and try again"
)
time.sleep(step.retry_interval)
logger.info(
f"run step retry ({i + 1}/{step.retry_times} time): {step.name()} >>>>>>"
)
# save extracted variables to session variables
self.__session_variables.update(step_result.export_vars)
# update testcase summary
self.__step_results.append(step_result)
logger.info(f"run step end: {step.name()} <<<<<<\n")
def test_start(self, param: Dict = None) -> "SessionRunner":
"""main entrance, discovered by pytest"""
ga4_client.send_event("test_start")
print("\n")
self.__init()
self.__parse_config(param)
if ALLURE is not None and not self.__is_referenced:
# update allure report meta
ALLURE.dynamic.title(self.__config.name)
ALLURE.dynamic.description(f"TestCase ID: {self.case_id}")
logger.info(
f"Start to run testcase: {self.__config.name}, TestCase ID: {self.case_id}"
)
logger.add(self.__log_path, format=LOGGER_FORMAT, level="DEBUG")
self.__start_at = time.time()
try:
# run step in sequential order
for step in self.teststeps:
self.__run_step(step)
finally:
logger.info(f"generate testcase log: {self.__log_path}")
if ALLURE is not None:
ALLURE.attach.file(
self.__log_path,
name="all log",
attachment_type=ALLURE.attachment_type.TEXT,
)
self.__duration = time.time() - self.__start_at
return self
class HttpRunner(SessionRunner):
# split SessionRunner to keep consistent with golang version
pass
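The retry logic in `__run_step` above (attempt `retry_times + 1` times, sleep between attempts, and re-raise only when the final attempt still fails) can be isolated as a small standalone sketch; `ValidationFailure` is stubbed here in place of `httprunner.exceptions.ValidationFailure`:

```python
import time

class ValidationFailure(Exception):
    # stand-in for httprunner.exceptions.ValidationFailure
    pass

def run_with_retry(step_func, retry_times, retry_interval):
    # attempt retry_times + 1 times in total; sleep between attempts,
    # re-raising only when the final attempt still fails
    for i in range(retry_times + 1):
        try:
            return step_func()
        except ValidationFailure:
            if i == retry_times:
                raise
            time.sleep(retry_interval)
```

A step that fails validation twice with `retry_times=2` still succeeds on its third attempt; a step that fails all three attempts propagates the last `ValidationFailure`.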

httprunner/step.py
from typing import Union
from httprunner import HttpRunner
from httprunner.models import StepResult, TRequest, TStep, TestCase
from httprunner.step_request import (
RequestWithOptionalArgs,
StepRequestExtraction,
StepRequestValidation,
)
from httprunner.step_sql_request import (
RunSqlRequest,
StepSqlRequestExtraction,
StepSqlRequestValidation,
)
from httprunner.step_testcase import StepRefCase
from httprunner.step_thrift_request import (
RunThriftRequest,
StepThriftRequestExtraction,
StepThriftRequestValidation,
)
class Step(object):
def __init__(
self,
step: Union[
StepRequestValidation,
StepRequestExtraction,
RequestWithOptionalArgs,
StepRefCase,
RunSqlRequest,
StepSqlRequestValidation,
StepSqlRequestExtraction,
RunThriftRequest,
StepThriftRequestValidation,
StepThriftRequestExtraction,
],
):
self.__step = step
@property
def request(self) -> TRequest:
return self.__step.struct().request
@property
def testcase(self) -> TestCase:
return self.__step.struct().testcase
@property
def retry_times(self) -> int:
return self.__step.struct().retry_times
@property
def retry_interval(self) -> int:
return self.__step.struct().retry_interval
def struct(self) -> TStep:
return self.__step.struct()
def name(self) -> str:
return self.__step.name()
def type(self) -> str:
return self.__step.type()
def run(self, runner: HttpRunner) -> StepResult:
return self.__step.run(runner)

httprunner/step_request.py
import json
import time
from typing import Any, Dict, List, Text, Union
import requests
from loguru import logger
from httprunner import utils
from httprunner.exceptions import ValidationFailure
from httprunner.ext.uploader import prepare_upload_step
from httprunner.models import (
Hooks,
IStep,
MethodEnum,
StepResult,
TRequest,
TStep,
VariablesMapping,
)
from httprunner.parser import build_url, parse_variables_mapping
from httprunner.response import ResponseObject
from httprunner.runner import ALLURE, HttpRunner
def call_hooks(
runner: HttpRunner, hooks: Hooks, step_variables: VariablesMapping, hook_msg: Text
):
"""call hook actions.
Args:
        hooks (list): each hook in the hooks list may be in two formats.
format1 (str): only call hook functions.
${func()}
format2 (dict): assignment, the value returned by hook function will be assigned to variable.
{"var": "${func()}"}
step_variables: current step variables to call hook, include two special variables
request: parsed request dict
response: ResponseObject for current response
hook_msg: setup/teardown request/testcase
"""
logger.info(f"call hook actions: {hook_msg}")
if not isinstance(hooks, List):
logger.error(f"Invalid hooks format: {hooks}")
return
for hook in hooks:
if isinstance(hook, Text):
# format 1: ["${func()}"]
logger.debug(f"call hook function: {hook}")
runner.parser.parse_data(hook, step_variables)
elif isinstance(hook, Dict) and len(hook) == 1:
# format 2: {"var": "${func()}"}
var_name, hook_content = list(hook.items())[0]
hook_content_eval = runner.parser.parse_data(hook_content, step_variables)
logger.debug(
f"call hook function: {hook_content}, got value: {hook_content_eval}"
)
logger.debug(f"assign variable: {var_name} = {hook_content_eval}")
step_variables[var_name] = hook_content_eval
else:
logger.error(f"Invalid hook format: {hook}")
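The two hook formats described in the docstring above can be sketched independently of the runner; in this sketch `evaluate` is an assumed stand-in for `runner.parser.parse_data`:

```python
# Standalone sketch of the two hook formats handled by call_hooks above.
# `evaluate` stands in for runner.parser.parse_data (an assumption of
# this sketch, not the real signature).
def apply_hooks(hooks, step_variables, evaluate):
    for hook in hooks:
        if isinstance(hook, str):
            # format 1: "${func()}" -- evaluated only for its side effects
            evaluate(hook, step_variables)
        elif isinstance(hook, dict) and len(hook) == 1:
            # format 2: {"var": "${func()}"} -- result assigned to a variable
            var_name, expr = next(iter(hook.items()))
            step_variables[var_name] = evaluate(expr, step_variables)
```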
def pretty_format(v) -> str:
if isinstance(v, dict):
return json.dumps(v, indent=4, ensure_ascii=False)
if isinstance(v, requests.structures.CaseInsensitiveDict):
return json.dumps(dict(v.items()), indent=4, ensure_ascii=False)
return repr(utils.omit_long_data(v))
def run_step_request(runner: HttpRunner, step: TStep) -> StepResult:
"""run teststep: request"""
step_result = StepResult(
name=step.name,
step_type="request",
success=False,
)
start_time = time.time()
# parse
functions = runner.parser.functions_mapping
step_variables = runner.merge_step_variables(step.variables)
prepare_upload_step(step, step_variables, functions)
# parse variables
step_variables = parse_variables_mapping(step_variables, functions)
request_dict = step.request.dict()
request_dict.pop("upload", None)
parsed_request_dict = runner.parser.parse_data(request_dict, step_variables)
request_headers = parsed_request_dict.pop("headers", {})
# omit pseudo header names for HTTP/1, e.g. :authority, :method, :path, :scheme
request_headers = {
key: request_headers[key] for key in request_headers if not key.startswith(":")
}
request_headers[
"HRUN-Request-ID"
] = f"HRUN-{runner.case_id}-{str(int(time.time() * 1000))[-6:]}"
parsed_request_dict["headers"] = request_headers
step_variables["request"] = parsed_request_dict
# setup hooks
if step.setup_hooks:
call_hooks(runner, step.setup_hooks, step_variables, "setup request")
# prepare arguments
config = runner.get_config()
method = parsed_request_dict.pop("method")
url_path = parsed_request_dict.pop("url")
url = build_url(config.base_url, url_path)
parsed_request_dict["verify"] = config.verify
parsed_request_dict["json"] = parsed_request_dict.pop("req_json", {})
# log request
request_print = "====== request details ======\n"
request_print += f"url: {url}\n"
request_print += f"method: {method}\n"
for k, v in parsed_request_dict.items():
request_print += f"{k}: {pretty_format(v)}\n"
logger.debug(request_print)
if ALLURE is not None:
ALLURE.attach(
request_print,
name="request details",
attachment_type=ALLURE.attachment_type.TEXT,
)
resp = runner.session.request(method, url, **parsed_request_dict)
# log response
response_print = "====== response details ======\n"
response_print += f"status_code: {resp.status_code}\n"
response_print += f"headers: {pretty_format(resp.headers)}\n"
try:
resp_body = resp.json()
except (requests.exceptions.JSONDecodeError, json.decoder.JSONDecodeError):
resp_body = resp.content
response_print += f"body: {pretty_format(resp_body)}\n"
logger.debug(response_print)
if ALLURE is not None:
ALLURE.attach(
response_print,
name="response details",
attachment_type=ALLURE.attachment_type.TEXT,
)
resp_obj = ResponseObject(resp, runner.parser)
step_variables["response"] = resp_obj
# teardown hooks
if step.teardown_hooks:
call_hooks(runner, step.teardown_hooks, step_variables, "teardown request")
# extract
extractors = step.extract
extract_mapping = resp_obj.extract(extractors, step_variables)
step_result.export_vars = extract_mapping
variables_mapping = step_variables
variables_mapping.update(extract_mapping)
# validate
validators = step.validators
try:
resp_obj.validate(validators, variables_mapping)
step_result.success = True
except ValidationFailure:
raise
finally:
session_data = runner.session.data
session_data.success = step_result.success
session_data.validators = resp_obj.validation_results
# save step data
step_result.data = session_data
step_result.elapsed = time.time() - start_time
return step_result
class StepRequestValidation(IStep):
def __init__(self, step: TStep):
self.__step = step
def assert_equal(
self, jmes_path: Text, expected_value: Any, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append({"equal": [jmes_path, expected_value, message]})
return self
def assert_not_equal(
self, jmes_path: Text, expected_value: Any, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"not_equal": [jmes_path, expected_value, message]}
)
return self
def assert_greater_than(
self, jmes_path: Text, expected_value: Union[int, float], message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"greater_than": [jmes_path, expected_value, message]}
)
return self
def assert_less_than(
self, jmes_path: Text, expected_value: Union[int, float], message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"less_than": [jmes_path, expected_value, message]}
)
return self
def assert_greater_or_equals(
self, jmes_path: Text, expected_value: Union[int, float], message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"greater_or_equals": [jmes_path, expected_value, message]}
)
return self
def assert_less_or_equals(
self, jmes_path: Text, expected_value: Union[int, float], message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"less_or_equals": [jmes_path, expected_value, message]}
)
return self
def assert_length_equal(
self, jmes_path: Text, expected_value: int, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"length_equal": [jmes_path, expected_value, message]}
)
return self
def assert_length_greater_than(
self, jmes_path: Text, expected_value: int, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"length_greater_than": [jmes_path, expected_value, message]}
)
return self
def assert_length_less_than(
self, jmes_path: Text, expected_value: int, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"length_less_than": [jmes_path, expected_value, message]}
)
return self
def assert_length_greater_or_equals(
self, jmes_path: Text, expected_value: int, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"length_greater_or_equals": [jmes_path, expected_value, message]}
)
return self
def assert_length_less_or_equals(
self, jmes_path: Text, expected_value: int, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"length_less_or_equals": [jmes_path, expected_value, message]}
)
return self
def assert_string_equals(
self, jmes_path: Text, expected_value: Any, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"string_equals": [jmes_path, expected_value, message]}
)
return self
def assert_startswith(
self, jmes_path: Text, expected_value: Text, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"startswith": [jmes_path, expected_value, message]}
)
return self
def assert_endswith(
self, jmes_path: Text, expected_value: Text, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"endswith": [jmes_path, expected_value, message]}
)
return self
def assert_regex_match(
self, jmes_path: Text, expected_value: Text, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"regex_match": [jmes_path, expected_value, message]}
)
return self
def assert_contains(
self, jmes_path: Text, expected_value: Any, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"contains": [jmes_path, expected_value, message]}
)
return self
def assert_contained_by(
self, jmes_path: Text, expected_value: Any, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"contained_by": [jmes_path, expected_value, message]}
)
return self
def assert_type_match(
self, jmes_path: Text, expected_value: Any, message: Text = ""
) -> "StepRequestValidation":
self.__step.validators.append(
{"type_match": [jmes_path, expected_value, message]}
)
return self
def struct(self) -> TStep:
return self.__step
def name(self) -> Text:
return self.__step.name
def type(self) -> Text:
return f"request-{self.__step.request.method}"
def run(self, runner: HttpRunner):
return run_step_request(runner, self.__step)
class StepRequestExtraction(IStep):
def __init__(self, step: TStep):
self.__step = step
def with_jmespath(self, jmes_path: Text, var_name: Text) -> "StepRequestExtraction":
self.__step.extract[var_name] = jmes_path
return self
# def with_regex(self):
# # TODO: extract response html with regex
# pass
#
# def with_jsonpath(self):
# # TODO: extract response json with jsonpath
# pass
def validate(self) -> StepRequestValidation:
return StepRequestValidation(self.__step)
def struct(self) -> TStep:
return self.__step
def name(self) -> Text:
return self.__step.name
def type(self) -> Text:
return f"request-{self.__step.request.method}"
def run(self, runner: HttpRunner):
return run_step_request(runner, self.__step)
class RequestWithOptionalArgs(IStep):
def __init__(self, step: TStep):
self.__step = step
def with_params(self, **params) -> "RequestWithOptionalArgs":
self.__step.request.params.update(params)
return self
def with_headers(self, **headers) -> "RequestWithOptionalArgs":
self.__step.request.headers.update(headers)
return self
def with_cookies(self, **cookies) -> "RequestWithOptionalArgs":
self.__step.request.cookies.update(cookies)
return self
def with_data(self, data) -> "RequestWithOptionalArgs":
self.__step.request.data = data
return self
def with_json(self, req_json) -> "RequestWithOptionalArgs":
self.__step.request.req_json = req_json
return self
def set_timeout(self, timeout: float) -> "RequestWithOptionalArgs":
self.__step.request.timeout = timeout
return self
def set_verify(self, verify: bool) -> "RequestWithOptionalArgs":
self.__step.request.verify = verify
return self
def set_allow_redirects(self, allow_redirects: bool) -> "RequestWithOptionalArgs":
self.__step.request.allow_redirects = allow_redirects
return self
def upload(self, **file_info) -> "RequestWithOptionalArgs":
self.__step.request.upload.update(file_info)
return self
def teardown_hook(
self, hook: Text, assign_var_name: Text = None
) -> "RequestWithOptionalArgs":
if assign_var_name:
self.__step.teardown_hooks.append({assign_var_name: hook})
else:
self.__step.teardown_hooks.append(hook)
return self
def extract(self) -> StepRequestExtraction:
return StepRequestExtraction(self.__step)
def validate(self) -> StepRequestValidation:
return StepRequestValidation(self.__step)
def struct(self) -> TStep:
return self.__step
def name(self) -> Text:
return self.__step.name
def type(self) -> Text:
return f"request-{self.__step.request.method}"
def run(self, runner: HttpRunner):
return run_step_request(runner, self.__step)
class RunRequest(object):
def __init__(self, name: Text):
self.__step = TStep(name=name)
def with_variables(self, **variables) -> "RunRequest":
self.__step.variables.update(variables)
return self
def with_retry(self, retry_times, retry_interval) -> "RunRequest":
self.__step.retry_times = retry_times
self.__step.retry_interval = retry_interval
return self
def setup_hook(self, hook: Text, assign_var_name: Text = None) -> "RunRequest":
if assign_var_name:
self.__step.setup_hooks.append({assign_var_name: hook})
else:
self.__step.setup_hooks.append(hook)
return self
def get(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.GET, url=url)
return RequestWithOptionalArgs(self.__step)
def post(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.POST, url=url)
return RequestWithOptionalArgs(self.__step)
def put(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.PUT, url=url)
return RequestWithOptionalArgs(self.__step)
def head(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.HEAD, url=url)
return RequestWithOptionalArgs(self.__step)
def delete(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.DELETE, url=url)
return RequestWithOptionalArgs(self.__step)
def options(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.OPTIONS, url=url)
return RequestWithOptionalArgs(self.__step)
def patch(self, url: Text) -> RequestWithOptionalArgs:
self.__step.request = TRequest(method=MethodEnum.PATCH, url=url)
return RequestWithOptionalArgs(self.__step)
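The classes above form a fluent builder: every method mutates the shared `TStep` and returns an object exposing the next legal calls, so validators accumulate as `{comparator: [jmes_path, expected_value, message]}` entries. A minimal self-contained sketch of that pattern (illustrative stand-ins, not the real httprunner classes):

```python
class MiniStep:
    """Tiny stand-in for TStep holding just the fields this sketch needs."""
    def __init__(self, name):
        self.name = name
        self.request = None
        self.validators = []

class MiniValidation:
    def __init__(self, step):
        self._step = step
    def assert_equal(self, jmes_path, expected, message=""):
        # each call appends {comparator: [jmes_path, expected, message]}
        self._step.validators.append({"equal": [jmes_path, expected, message]})
        return self  # returning self keeps the chain going
    def struct(self):
        return self._step

class MiniRunRequest:
    def __init__(self, name):
        self._step = MiniStep(name)
    def get(self, url):
        self._step.request = ("GET", url)
        return MiniValidation(self._step)

step = (
    MiniRunRequest("get with params")
    .get("/get")
    .assert_equal("status_code", 200)
    .assert_equal('body.args."foo1"', "bar1")
    .struct()
)
```

Because every method returns the next builder stage over the same underlying step, the chain order also documents the step's lifecycle: request shape first, then validations.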


@@ -0,0 +1,17 @@
import unittest
from examples.postman_echo.request_methods.request_with_functions_test import (
TestCaseRequestWithFunctions,
)
class TestRunRequest(unittest.TestCase):
def test_run_request(self):
runner = TestCaseRequestWithFunctions().test_start()
summary = runner.get_summary()
self.assertTrue(summary.success)
self.assertEqual(summary.name, "request methods testcase with functions")
self.assertEqual(len(summary.step_results), 3)
self.assertEqual(summary.step_results[0].name, "get with params")
self.assertEqual(summary.step_results[1].name, "post raw text")
self.assertEqual(summary.step_results[2].name, "post form data")


@@ -0,0 +1,317 @@
# -*- coding: utf-8 -*-
import sys
import time
from typing import Text
from loguru import logger
from httprunner import utils
from httprunner.exceptions import SqlMethodNotSupport, ValidationFailure
from httprunner.models import IStep, SqlMethodEnum, StepResult, TSqlRequest, TStep
from httprunner.response import SqlResponseObject
from httprunner.runner import ALLURE, HttpRunner
from httprunner.step_request import (
StepRequestExtraction,
StepRequestValidation,
call_hooks,
)
try:
import pymysql
import sqlalchemy
SQL_READY = True
except ModuleNotFoundError:
SQL_READY = False
def ensure_sql_ready():
if SQL_READY:
return
msg = """
    SQL extension dependencies are not installed; install them first and try again.
    install with pip:
    $ pip install sqlalchemy pymysql
    or install httprunner with the optional sql extra:
$ pip install "httprunner[sql]"
"""
logger.error(msg)
sys.exit(1)
def run_step_sql_request(runner: HttpRunner, step: TStep) -> StepResult:
"""run teststep:sql request"""
start_time = time.time()
step_result = StepResult(
name=step.name,
step_type="sql",
success=False,
)
step_variables = runner.merge_step_variables(step.variables)
# parse
request_dict = step.sql_request.dict()
parsed_request_dict = runner.parser.parse_data(request_dict, step_variables)
config = runner.get_config()
parsed_request_dict["db_config"]["psm"] = (
parsed_request_dict["db_config"]["psm"] or config.db.psm
)
parsed_request_dict["db_config"]["user"] = (
parsed_request_dict["db_config"]["user"] or config.db.user
)
parsed_request_dict["db_config"]["password"] = (
parsed_request_dict["db_config"]["password"] or config.db.password
)
parsed_request_dict["db_config"]["ip"] = (
parsed_request_dict["db_config"]["ip"] or config.db.ip
)
parsed_request_dict["db_config"]["port"] = (
parsed_request_dict["db_config"]["port"] or config.db.port
)
parsed_request_dict["db_config"]["database"] = (
parsed_request_dict["db_config"]["database"] or config.db.database
)
if not runner.db_engine:
ensure_sql_ready()
from httprunner.database.engine import DBEngine
runner.db_engine = DBEngine(
f'mysql+pymysql://{parsed_request_dict["db_config"]["user"]}:'
f'{parsed_request_dict["db_config"]["password"]}@{parsed_request_dict["db_config"]["ip"]}:'
f'{parsed_request_dict["db_config"]["port"]}/{parsed_request_dict["db_config"]["database"]}'
f"?charset=utf8mb4"
)
# parsed_request_dict["headers"].setdefault(
# "HRUN-Request-ID",
# f"HRUN-{self.__case_id}-{str(int(time.time() * 1000))[-6:]}",
# )
# setup hooks
if step.setup_hooks:
call_hooks(runner, step.setup_hooks, step_variables, "setup request")
# log request
sql_request_print = "====== sql request details ======\n"
sql_request_print += f"sql: {step.sql_request.sql}\n"
for k, v in parsed_request_dict.items():
v = utils.omit_long_data(v)
sql_request_print += f"{k}: {repr(v)}\n"
sql_request_print += "\n"
if ALLURE is not None:
ALLURE.attach(
sql_request_print,
name="sql request details",
attachment_type=ALLURE.attachment_type.TEXT,
)
logger.info(f"Executing SQL: {parsed_request_dict['sql']}")
if step.sql_request.method == SqlMethodEnum.FETCHONE:
sql_resp = runner.db_engine.fetchone(parsed_request_dict["sql"])
elif step.sql_request.method == SqlMethodEnum.INSERT:
sql_resp = runner.db_engine.insert(parsed_request_dict["sql"])
elif step.sql_request.method == SqlMethodEnum.FETCHMANY:
sql_resp = runner.db_engine.fetchmany(
parsed_request_dict["sql"], parsed_request_dict["size"]
)
elif step.sql_request.method == SqlMethodEnum.FETCHALL:
sql_resp = runner.db_engine.fetchall(parsed_request_dict["sql"])
elif step.sql_request.method == SqlMethodEnum.UPDATE:
sql_resp = runner.db_engine.update(parsed_request_dict["sql"])
elif step.sql_request.method == SqlMethodEnum.DELETE:
sql_resp = runner.db_engine.delete(parsed_request_dict["sql"])
else:
raise SqlMethodNotSupport(
            f"step.sql_request.method {parsed_request_dict['method']} is not supported"
)
# log response
sql_response_print = "====== sql response details ======\n"
if isinstance(sql_resp, dict):
for k, v in sql_resp.items():
v = utils.omit_long_data(v)
sql_response_print += f"{k}: {repr(v)}\n"
elif isinstance(sql_resp, list):
sql_response_print += f"count: {len(sql_resp)}\n"
sql_response_print += "-" * 34 + "\n"
for el in sql_resp:
for k, v in el.items():
v = utils.omit_long_data(v)
sql_response_print += f"{k}: {repr(v)}\n"
sql_response_print += "-" * 34 + "\n"
elif sql_resp is None:
sql_response_print += "None\n"
if ALLURE is not None:
ALLURE.attach(
sql_response_print,
name="sql response details",
attachment_type=ALLURE.attachment_type.TEXT,
)
resp_obj = SqlResponseObject(sql_resp, parser=runner.parser)
step_variables["sql_response"] = resp_obj
# teardown hooks
if step.teardown_hooks:
call_hooks(runner, step.teardown_hooks, step_variables, "teardown request")
def log_sql_req_resp_details():
err_msg = "\n{} SQL DETAILED REQUEST & RESPONSE {}\n".format("*" * 32, "*" * 32)
err_msg += sql_request_print + sql_response_print
logger.error(err_msg)
# extract
extractors = step.extract
extract_mapping = resp_obj.extract(extractors)
step_result.export_vars = extract_mapping
variables_mapping = step_variables
variables_mapping.update(extract_mapping)
# validate
validators = step.validators
try:
resp_obj.validate(validators, variables_mapping)
step_result.success = True
except ValidationFailure:
log_sql_req_resp_details()
raise
finally:
session_data = runner.session.data
session_data.success = step_result.success
session_data.validators = resp_obj.validation_results
# save step data
step_result.data = session_data
step_result.elapsed = time.time() - start_time
return step_result
class StepSqlRequestValidation(StepRequestValidation):
def __init__(self, step: TStep):
self.__step = step
super().__init__(step)
def run(self, runner: HttpRunner):
return run_step_sql_request(runner, self.__step)
class StepSqlRequestExtraction(StepRequestExtraction):
def __init__(self, step: TStep):
self.__step = step
super().__init__(step)
def run(self, runner: HttpRunner):
return run_step_sql_request(runner, self.__step)
def validate(self) -> StepSqlRequestValidation:
return StepSqlRequestValidation(self.__step)
class RunSqlRequest(IStep):
def __init__(self, name: Text):
self.__step = TStep(name=name)
self.__step.sql_request = TSqlRequest()
def with_variables(self, **variables) -> "RunSqlRequest":
self.__step.variables.update(variables)
return self
def with_db_config(
self, user=None, password=None, ip=None, port=None, database=None, psm=None
):
if user:
self.__step.sql_request.db_config.user = user
if password:
self.__step.sql_request.db_config.password = password
if ip:
self.__step.sql_request.db_config.ip = ip
if port:
self.__step.sql_request.db_config.port = port
if database:
self.__step.sql_request.db_config.database = database
if psm:
self.__step.sql_request.db_config.psm = psm
return self
def fetchone(self, sql) -> "RunSqlRequest":
self.__step.sql_request.method = SqlMethodEnum.FETCHONE
self.__step.sql_request.sql = sql
return self
def fetchmany(self, sql, size) -> "RunSqlRequest":
self.__step.sql_request.method = SqlMethodEnum.FETCHMANY
self.__step.sql_request.sql = sql
self.__step.sql_request.size = size
return self
def fetchall(self, sql) -> "RunSqlRequest":
self.__step.sql_request.method = SqlMethodEnum.FETCHALL
self.__step.sql_request.sql = sql
return self
def update(self, sql) -> "RunSqlRequest":
self.__step.sql_request.method = SqlMethodEnum.UPDATE
self.__step.sql_request.sql = sql
return self
def delete(self, sql) -> "RunSqlRequest":
self.__step.sql_request.method = SqlMethodEnum.DELETE
self.__step.sql_request.sql = sql
return self
def insert(self, sql) -> "RunSqlRequest":
self.__step.sql_request.method = SqlMethodEnum.INSERT
self.__step.sql_request.sql = sql
return self
def with_retry(self, retry_times, retry_interval) -> "RunSqlRequest":
self.__step.retry_times = retry_times
self.__step.retry_interval = retry_interval
return self
def teardown_hook(
self, hook: Text, assign_var_name: Text = None
) -> "RunSqlRequest":
if assign_var_name:
self.__step.teardown_hooks.append({assign_var_name: hook})
else:
self.__step.teardown_hooks.append(hook)
return self
def setup_hook(self, hook: Text, assign_var_name: Text = None) -> "RunSqlRequest":
if assign_var_name:
self.__step.setup_hooks.append({assign_var_name: hook})
else:
self.__step.setup_hooks.append(hook)
return self
def struct(self) -> TStep:
return self.__step
def name(self) -> Text:
return self.__step.name
def type(self) -> Text:
return f"sql-request-{self.__step.sql_request.sql}"
def run(self, runner) -> StepResult:
return run_step_sql_request(runner, self.__step)
def extract(self) -> StepSqlRequestExtraction:
return StepSqlRequestExtraction(self.__step)
def validate(self) -> StepSqlRequestValidation:
return StepSqlRequestValidation(self.__step)
def with_jmespath(
self, jmes_path: Text, var_name: Text
) -> "StepSqlRequestExtraction":
self.__step.extract[var_name] = jmes_path
return StepSqlRequestExtraction(self.__step)
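`run_step_sql_request` above merges step-level `db_config` fields with config defaults (each falsy field falls back to the config value) and then assembles a SQLAlchemy connection URL. A hedged standalone sketch of that merge and URL assembly (field names mirror the code above; the concrete values are made up):

```python
def merge_db_config(step_cfg: dict, default_cfg: dict) -> dict:
    # step-level values win; falsy/missing fields fall back to config defaults,
    # mirroring the `parsed_request_dict["db_config"][k] or config.db.k` lines
    return {k: step_cfg.get(k) or default_cfg[k] for k in default_cfg}

def build_mysql_url(cfg: dict) -> str:
    # mirrors the DBEngine URL built in run_step_sql_request above
    return (
        f"mysql+pymysql://{cfg['user']}:{cfg['password']}"
        f"@{cfg['ip']}:{cfg['port']}/{cfg['database']}?charset=utf8mb4"
    )

cfg = merge_db_config(
    {"user": "tester", "password": None, "ip": None, "port": None, "database": "demo"},
    {"user": "root", "password": "secret", "ip": "127.0.0.1", "port": 3306, "database": "app"},
)
url = build_mysql_url(cfg)
```

Note one consequence of the `or`-fallback: a step cannot deliberately set a falsy value (empty password, port 0) because it would be replaced by the config default.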

httprunner/step_testcase.py

@@ -0,0 +1,103 @@
from typing import Callable, Text
from loguru import logger
from httprunner import exceptions
from httprunner.models import IStep, StepResult, TStep, TestCaseSummary
from httprunner.runner import HttpRunner
from httprunner.step_request import call_hooks
def run_step_testcase(runner: HttpRunner, step: TStep) -> StepResult:
"""run teststep: referenced testcase"""
step_result = StepResult(name=step.name, step_type="testcase")
step_variables = runner.merge_step_variables(step.variables)
step_export = step.export
# setup hooks
if step.setup_hooks:
call_hooks(runner, step.setup_hooks, step_variables, "setup testcase")
# TODO: override testcase with current step name/variables/export
# step.testcase is a referenced testcase, e.g. RequestWithFunctions
ref_case_runner = step.testcase()
ref_case_runner.set_referenced().with_session(runner.session).with_case_id(
runner.case_id
).with_variables(step_variables).with_export(step_export).test_start()
# teardown hooks
if step.teardown_hooks:
call_hooks(runner, step.teardown_hooks, step.variables, "teardown testcase")
summary: TestCaseSummary = ref_case_runner.get_summary()
step_result.data = summary.step_results # list of step data
step_result.export_vars = summary.in_out.export_vars
step_result.success = summary.success
if step_result.export_vars:
logger.info(f"export variables: {step_result.export_vars}")
return step_result
class StepRefCase(IStep):
def __init__(self, step: TStep):
self.__step = step
def teardown_hook(self, hook: Text, assign_var_name: Text = None) -> "StepRefCase":
if assign_var_name:
self.__step.teardown_hooks.append({assign_var_name: hook})
else:
self.__step.teardown_hooks.append(hook)
return self
def export(self, *var_name: Text) -> "StepRefCase":
self.__step.export.extend(var_name)
return self
def struct(self) -> TStep:
return self.__step
def name(self) -> Text:
return self.__step.name
    def type(self) -> Text:
        # a referenced-testcase step has no request attribute to inspect
        return "testcase"
def run(self, runner: HttpRunner):
return run_step_testcase(runner, self.__step)
class RunTestCase(object):
def __init__(self, name: Text):
self.__step = TStep(name=name)
def with_variables(self, **variables) -> "RunTestCase":
self.__step.variables.update(variables)
return self
def with_retry(self, retry_times, retry_interval) -> "RunTestCase":
self.__step.retry_times = retry_times
self.__step.retry_interval = retry_interval
return self
def setup_hook(self, hook: Text, assign_var_name: Text = None) -> "RunTestCase":
if assign_var_name:
self.__step.setup_hooks.append({assign_var_name: hook})
else:
self.__step.setup_hooks.append(hook)
return self
def call(self, testcase: Callable) -> StepRefCase:
if issubclass(testcase, HttpRunner):
# referenced testcase object
self.__step.testcase = testcase
else:
raise exceptions.ParamsError(
f"Invalid teststep referenced testcase: {testcase}"
)
return StepRefCase(self.__step)
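`setup_hook`/`teardown_hook` above store a plain hook as a bare string, while a hook whose return value should be captured into a variable is stored as a single-entry dict. A standalone sketch of that registration shape (the `${...}` call syntax follows httprunner's hook convention; the hook names are illustrative):

```python
def register_hook(hooks: list, hook: str, assign_var_name: str = None) -> list:
    # plain hook -> stored as a string; hook with captured result -> {var_name: hook}
    if assign_var_name:
        hooks.append({assign_var_name: hook})
    else:
        hooks.append(hook)
    return hooks

hooks = []
register_hook(hooks, "${setup_env()}")
register_hook(hooks, "${get_token()}", "token")
```

The mixed list shape (strings and dicts) is what `call_hooks` later has to distinguish when deciding whether to assign the hook's return value.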


@@ -0,0 +1,27 @@
import unittest
from httprunner.runner import HttpRunner
from httprunner.step_testcase import RunTestCase
from examples.postman_echo.request_methods.request_with_functions_test import (
TestCaseRequestWithFunctions,
)
class TestRunTestCase(unittest.TestCase):
def setUp(self):
self.runner = TestCaseRequestWithFunctions()
self.runner.test_start()
def test_run_testcase_by_path(self):
step_result = (
RunTestCase("run referenced testcase")
.call(TestCaseRequestWithFunctions)
.run(self.runner)
)
self.assertTrue(step_result.success)
self.assertEqual(step_result.name, "run referenced testcase")
self.assertEqual(len(step_result.data), 3)
self.assertEqual(step_result.data[0].name, "get with params")
self.assertEqual(step_result.data[1].name, "post raw text")
self.assertEqual(step_result.data[2].name, "post form data")


@@ -0,0 +1,309 @@
# -*- coding: utf-8 -*-
import platform
import sys
import time
from typing import Text, Union
from loguru import logger
from httprunner import utils
from httprunner.exceptions import ValidationFailure
from httprunner.models import (
IStep,
ProtoType,
StepResult,
TransType,
TStep,
TThriftRequest,
)
from httprunner.response import ThriftResponseObject
from httprunner.runner import ALLURE, HttpRunner
from httprunner.step_request import (
StepRequestExtraction,
StepRequestValidation,
call_hooks,
)
try:
import thriftpy2
from thrift.Thrift import TType
THRIFT_READY = True
except ModuleNotFoundError:
THRIFT_READY = False
def ensure_thrift_ready():
    assert platform.system() != "Windows", "Sorry, thrift is not supported on Windows for now"
if THRIFT_READY:
return
msg = """
    thrift extension dependencies are not installed; install them first and try again.
    install with pip:
    $ pip install cython thriftpy2 thrift
    or install httprunner with the optional thrift extra:
$ pip install "httprunner[thrift]"
"""
logger.error(msg)
sys.exit(1)
def run_step_thrift_request(runner: HttpRunner, step: TStep) -> StepResult:
"""run teststep:thrift request"""
start_time = time.time()
step_result = StepResult(
name=step.name,
step_type="thrift",
success=False,
)
step_variables = runner.merge_step_variables(step.variables)
# parse
request_dict = step.thrift_request.dict()
parsed_request_dict = runner.parser.parse_data(request_dict, step_variables)
config = runner.get_config()
parsed_request_dict["psm"] = parsed_request_dict["psm"] or config.thrift.psm
parsed_request_dict["env"] = parsed_request_dict["env"] or config.thrift.env
parsed_request_dict["cluster"] = (
parsed_request_dict["cluster"] or config.thrift.cluster
)
parsed_request_dict["idl_path"] = (
parsed_request_dict["idl_path"] or config.thrift.idl_path
)
parsed_request_dict["include_dirs"] = (
parsed_request_dict["include_dirs"] or config.thrift.include_dirs
)
parsed_request_dict["method"] = (
parsed_request_dict["method"] or config.thrift.method
)
parsed_request_dict["service_name"] = (
parsed_request_dict["service_name"] or config.thrift.service_name
)
parsed_request_dict["ip"] = parsed_request_dict["ip"] or config.thrift.ip
parsed_request_dict["port"] = parsed_request_dict["port"] or config.thrift.port
parsed_request_dict["proto_type"] = (
parsed_request_dict["proto_type"] or config.thrift.proto_type
)
parsed_request_dict["trans_port"] = (
parsed_request_dict["trans_type"] or config.thrift.trans_type
)
parsed_request_dict["timeout"] = (
parsed_request_dict["timeout"] or config.thrift.timeout
)
parsed_request_dict["thrift_client"] = parsed_request_dict["thrift_client"]
# parsed_request_dict["headers"].setdefault(
# "HRUN-Request-ID",
# f"HRUN-{self.__case_id}-{str(int(time.time() * 1000))[-6:]}",
# )
step_variables["thrift_request"] = parsed_request_dict
psm = parsed_request_dict["psm"]
if not runner.thrift_client:
runner.thrift_client = parsed_request_dict["thrift_client"]
if not runner.thrift_client:
ensure_thrift_ready()
from httprunner.thrift.thrift_client import ThriftClient
runner.thrift_client = ThriftClient(
thrift_file=parsed_request_dict["idl_path"],
service_name=parsed_request_dict["service_name"],
ip=parsed_request_dict["ip"],
port=parsed_request_dict["port"],
include_dirs=parsed_request_dict["include_dirs"],
timeout=parsed_request_dict["timeout"],
proto_type=parsed_request_dict["proto_type"],
trans_type=parsed_request_dict["trans_port"],
)
# setup hooks
if step.setup_hooks:
call_hooks(runner, step.setup_hooks, step_variables, "setup request")
# log request
thrift_request_print = "====== thrift request details ======\n"
thrift_request_print += f"psm: {psm}\n"
for k, v in parsed_request_dict.items():
v = utils.omit_long_data(v)
thrift_request_print += f"{k}: {repr(v)}\n"
thrift_request_print += "\n"
if ALLURE is not None:
ALLURE.attach(
thrift_request_print,
name="thrift request details",
attachment_type=ALLURE.attachment_type.TEXT,
)
# thrift request
resp = runner.thrift_client.send_request(
parsed_request_dict["params"], parsed_request_dict["method"]
)
resp_obj = ThriftResponseObject(resp, parser=runner.parser)
step_variables["thrift_response"] = resp_obj
# log response
thrift_response_print = "====== thrift response details ======\n"
for k, v in resp.items():
v = utils.omit_long_data(v)
thrift_response_print += f"{k}: {repr(v)}\n"
if ALLURE is not None:
ALLURE.attach(
            thrift_response_print,
name="thrift response details",
attachment_type=ALLURE.attachment_type.TEXT,
)
# teardown hooks
if step.teardown_hooks:
call_hooks(runner, step.teardown_hooks, step_variables, "teardown request")
def log_thrift_req_resp_details():
err_msg = "\n{} THRIFT DETAILED REQUEST & RESPONSE {}\n".format(
"*" * 32, "*" * 32
)
err_msg += thrift_request_print + thrift_response_print
logger.error(err_msg)
# extract
extractors = step.extract
extract_mapping = resp_obj.extract(extractors)
step_result.export_vars = extract_mapping
variables_mapping = step_variables
variables_mapping.update(extract_mapping)
# validate
validators = step.validators
try:
resp_obj.validate(validators, variables_mapping)
step_result.success = True
except ValidationFailure:
log_thrift_req_resp_details()
raise
finally:
session_data = runner.session.data
session_data.success = step_result.success
session_data.validators = resp_obj.validation_results
# save step data
step_result.data = session_data
step_result.elapsed = time.time() - start_time
return step_result
class StepThriftRequestValidation(StepRequestValidation):
def __init__(self, step: TStep):
self.__step = step
super().__init__(step)
def run(self, runner: HttpRunner):
return run_step_thrift_request(runner, self.__step)
class StepThriftRequestExtraction(StepRequestExtraction):
def __init__(self, step: TStep):
self.__step = step
super().__init__(step)
def run(self, runner: HttpRunner):
return run_step_thrift_request(runner, self.__step)
def validate(self) -> StepThriftRequestValidation:
return StepThriftRequestValidation(self.__step)
class RunThriftRequest(IStep):
def __init__(self, name: Text):
self.__step = TStep(name=name)
self.__step.thrift_request = TThriftRequest()
def with_variables(self, **variables) -> "RunThriftRequest":
self.__step.variables.update(variables)
return self
def with_retry(self, retry_times, retry_interval) -> "RunThriftRequest":
self.__step.retry_times = retry_times
self.__step.retry_interval = retry_interval
return self
def teardown_hook(
self, hook: Text, assign_var_name: Text = None
) -> "RunThriftRequest":
if assign_var_name:
self.__step.teardown_hooks.append({assign_var_name: hook})
else:
self.__step.teardown_hooks.append(hook)
return self
def setup_hook(
self, hook: Text, assign_var_name: Text = None
) -> "RunThriftRequest":
if assign_var_name:
self.__step.setup_hooks.append({assign_var_name: hook})
else:
self.__step.setup_hooks.append(hook)
return self
def with_params(self, **params) -> "RunThriftRequest":
self.__step.thrift_request.params.update(params)
return self
def with_method(self, method) -> "RunThriftRequest":
self.__step.thrift_request.method = method
return self
def with_idl_path(self, idl_path, idl_root_path) -> "RunThriftRequest":
self.__step.thrift_request.idl_path = idl_path
self.__step.thrift_request.include_dirs = [idl_root_path]
return self
def with_thrift_client(
self, thrift_client: Union["ThriftClient", str]
) -> "RunThriftRequest":
self.__step.thrift_request.thrift_client = thrift_client
return self
def with_ip(self, ip: str) -> "RunThriftRequest":
self.__step.thrift_request.ip = ip
return self
def with_port(self, port: int) -> "RunThriftRequest":
self.__step.thrift_request.port = port
return self
def with_proto_type(self, proto_type: ProtoType) -> "RunThriftRequest":
self.__step.thrift_request.proto_type = proto_type
return self
def with_trans_type(self, trans_type: TransType) -> "RunThriftRequest":
        self.__step.thrift_request.trans_type = trans_type
return self
def struct(self) -> TStep:
return self.__step
def name(self) -> Text:
return self.__step.name
def type(self) -> Text:
return f"thrift-request-{self.__step.thrift_request.psm}-{self.__step.thrift_request.method}"
def run(self, runner) -> StepResult:
return run_step_thrift_request(runner, self.__step)
def extract(self) -> StepThriftRequestExtraction:
return StepThriftRequestExtraction(self.__step)
def validate(self) -> StepThriftRequestValidation:
return StepThriftRequestValidation(self.__step)
def with_jmespath(
self, jmes_path: Text, var_name: Text
) -> "StepThriftRequestExtraction":
self.__step.extract[var_name] = jmes_path
return StepThriftRequestExtraction(self.__step)
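`ensure_thrift_ready`/`ensure_sql_ready` above follow a common optional-dependency pattern: attempt the import once at module load, record a flag, and fail fast with install instructions on first use. A self-contained sketch of the same guard (the helper names are illustrative, not httprunner APIs; it probes module availability without importing):

```python
import importlib.util

def make_ready_flag(module_name: str) -> bool:
    # equivalent of the try/except ModuleNotFoundError block at import time,
    # but probes availability without actually importing the module
    return importlib.util.find_spec(module_name) is not None

def ensure_ready(ready: bool, extra: str) -> None:
    # fail fast with install instructions, like ensure_sql_ready/ensure_thrift_ready
    if ready:
        return
    raise SystemExit(f'install optional dependencies first: pip install "httprunner[{extra}]"')

MATH_READY = make_ready_flag("math")  # stdlib module: always available
ensure_ready(MATH_READY, "demo")      # no-op because the flag is True
```

Deferring the hard failure to first use keeps `import httprunner` working for users who never touch the SQL or thrift steps.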


@@ -0,0 +1,471 @@
# -*- coding: utf-8 -*-
from __future__ import division
import json
import traceback
import re
import logging
import base64
from thrift.Thrift import TType
try:
from _json import encode_basestring_ascii as c_encode_basestring_ascii
except ImportError:
c_encode_basestring_ascii = None
text_characters = "".join(map(chr, range(32, 127))) + "\n\r\t\b"
_null_trans = str.maketrans("", "")
ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]')
ESCAPE_ASCII = re.compile(r'([\\"]|[^\ -~])')
HAS_UTF8 = re.compile(r"[\x80-\xff]")
ESCAPE_DCT = {
"\\": "\\\\",
'"': '\\"',
"\b": "\\b",
"\f": "\\f",
"\n": "\\n",
"\r": "\\r",
"\t": "\\t",
}
for i in range(0x20):
ESCAPE_DCT.setdefault(chr(i), "\\u{0:04x}".format(i))
# ESCAPE_DCT.setdefault(chr(i), '\\u%04x' % (i,))
INFINITY = float("inf")
FLOAT_REPR = repr
def istext(s_input):
    """Return True if the input can be used directly as a JSON text value.

    Bytes cannot be emitted as JSON text, so anything that is not bytes is
    treated as text.
    """
    return not isinstance(s_input, bytes)
def unicode_2_utf8_keep_native(para):
    # Legacy helper ported from Python 2: on Python 3, native str is kept as-is
    # and containers are converted recursively.
    if type(para) is str:
        return para
    if type(para) is list:
        for i in range(len(para)):
            para[i] = unicode_2_utf8_keep_native(para[i])
        return para
    elif type(para) is dict:
        newpara = {}
        for (key, value) in para.items():
            key = unicode_2_utf8_keep_native(key)
            value = unicode_2_utf8_keep_native(value)
            newpara[key] = value
        return newpara
    elif type(para) is tuple:
        return tuple(unicode_2_utf8_keep_native(list(para)))
    else:
        logging.debug("type======== %s", type(para))
        if isinstance(para, dict):
            # dict subclasses fail the exact-type check above; convert via a plain dict
            logging.debug("type ************in dict: %s", type(para))
            return unicode_2_utf8_keep_native(dict(para))
        else:
            return para
def encode_basestring(s):
"""Return a JSON representation of a Python string"""
def replace(match):
return ESCAPE_DCT[match.group(0)]
return '"' + ESCAPE.sub(replace, s) + '"'
def py_encode_basestring_ascii(s):
"""Return an ASCII-only JSON representation of a Python string"""
    if isinstance(s, bytes):
        # bytes input must be decoded before escaping (Python 3 str has no .decode)
        s = s.decode("utf-8")
def replace(match):
s = match.group(0)
try:
return ESCAPE_DCT[s]
except KeyError:
n = ord(s)
if n < 0x10000:
return "\\u{0:04x}".format(n)
# return '\\u%04x' % (n,)
else:
# surrogate pair
n -= 0x10000
s1 = 0xD800 | ((n >> 10) & 0x3FF)
s2 = 0xDC00 | (n & 0x3FF)
return "\\u{0:04x}\\u{1:04x}".format(s1, s2)
# return '\\u%04x\\u%04x' % (s1, s2)
return '"' + str(ESCAPE_ASCII.sub(replace, s)) + '"'
encode_basestring_ascii = c_encode_basestring_ascii or py_encode_basestring_ascii
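`encode_basestring` above escapes JSON-special characters with a single regex substitution driven by the `ESCAPE_DCT` table. A self-contained demo of that escaping (regex and table reproduced from the module above):

```python
import re

# same pattern and escape table as defined at the top of this module
ESCAPE = re.compile(r'[\x00-\x1f\\"\b\f\n\r\t]')
ESCAPE_DCT = {
    "\\": "\\\\",
    '"': '\\"',
    "\b": "\\b",
    "\f": "\\f",
    "\n": "\\n",
    "\r": "\\r",
    "\t": "\\t",
}
for i in range(0x20):
    # remaining control characters fall back to \uXXXX escapes
    ESCAPE_DCT.setdefault(chr(i), "\\u{0:04x}".format(i))

def encode_basestring(s):
    """Return a JSON representation of a Python string."""
    return '"' + ESCAPE.sub(lambda m: ESCAPE_DCT[m.group(0)], s) + '"'

print(encode_basestring('line1\n"quoted"'))
```

Named escapes win over the generic `\uXXXX` form because `setdefault` only fills entries the table does not already have.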
class ThriftJSONDecoder(json.JSONDecoder):
def __init__(self, *args, **kwargs):
self._thrift_class = kwargs.pop("thrift_class")
super(ThriftJSONDecoder, self).__init__(*args, **kwargs)
def decode(self, json_str):
if isinstance(json_str, dict):
dct = json_str
else:
dct = super(ThriftJSONDecoder, self).decode(json_str)
return self._convert(
dct,
TType.STRUCT,
# (self._thrift_class, self._thrift_class.thrift_spec))
self._thrift_class,
)
def _convert(self, val, ttype, ttype_info):
if ttype == TType.STRUCT:
if val is None:
ret = None
else:
# (thrift_class, thrift_spec) = ttype_info
thrift_class = ttype_info
thrift_spec = ttype_info.thrift_spec
ret = thrift_class()
for tag, field in thrift_spec.items():
if field is None:
continue
# {1: (15, 'ad_ids', 10, False), 255: (12, 'Base', <class 'base.Base'>, False)}
# {1: (15, 'models', (12, <class 'adcommon.Ad'>), False), 255: (12, 'BaseResp', <class 'base.BaseResp'>, False)}
if len(field) <= 3:
(field_ttype, field_name, dummy) = field
field_ttype_info = None
else:
(field_ttype, field_name, field_ttype_info, dummy) = field
if val is None or field_name not in val:
continue
converted_val = self._convert(
val[field_name], field_ttype, field_ttype_info
)
setattr(ret, field_name, converted_val)
elif ttype == TType.LIST:
            if type(ttype_info) != tuple:  # base type, no element type info to unpack
(element_ttype, element_ttype_info) = (ttype_info, None)
else:
(element_ttype, element_ttype_info) = ttype_info
if val is not None:
ret = [self._convert(x, element_ttype, element_ttype_info) for x in val]
else:
ret = None
elif ttype == TType.SET:
            if type(ttype_info) != tuple:  # base type, no element type info to unpack
(element_ttype, element_ttype_info) = (ttype_info, None)
else:
(element_ttype, element_ttype_info) = ttype_info
if val is not None:
ret = set(
[self._convert(x, element_ttype, element_ttype_info) for x in val]
)
else:
ret = None
elif ttype == TType.MAP:
            # handle the key type
if type(ttype_info[0]) == tuple:
key_ttype, key_ttype_info = ttype_info[0]
else:
key_ttype, key_ttype_info = ttype_info[0], None
            # handle the value type
            if type(ttype_info[1]) != tuple:  # value is a base type, no further element info
val_ttype = ttype_info[1]
val_ttype_info = None
else:
val_ttype, val_ttype_info = ttype_info[1]
if val is not None:
ret = dict(
[
(
self._convert(k, key_ttype, key_ttype_info),
self._convert(v, val_ttype, val_ttype_info),
)
for (k, v) in val.items()
]
)
else:
ret = None
elif ttype == TType.STRING:
if isinstance(val, str):
ret = val.encode("utf8")
elif val is None:
ret = None
else:
ret = str(val)
            # If the string field is base64-encoded, it should be b64decoded
            # here to recover the original string.
            # TODO: not implemented yet
elif ttype == TType.DOUBLE:
if val is not None:
ret = float(val)
else:
ret = None
elif ttype == TType.I64:
if val is not None:
ret = int(val)
else:
ret = None
elif ttype == TType.I32 or ttype == TType.I16 or ttype == TType.BYTE:
if val is not None:
ret = int(val)
else:
ret = None
elif ttype == TType.BOOL:
if val is not None:
ret = bool(val)
else:
ret = None
else:
raise TypeError("Unrecognized thrift field type: %s" % ttype)
return ret
def json2thrift(json_str, thrift_class):
logging.debug(json_str)
return json.loads(
json_str, cls=ThriftJSONDecoder, thrift_class=thrift_class, strict=False
)
def dumper(obj):
try:
return json.dumps(obj, default=lambda o: o.__dict__, sort_keys=True, indent=2)
    except Exception:
return obj.__dict__
class MyJSONEncoder(json.JSONEncoder):
def __init__(
self,
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
encoding="utf-8",
default=None,
sort_keys=False,
**kw
):
        super(MyJSONEncoder, self).__init__(
            skipkeys=skipkeys,
            ensure_ascii=ensure_ascii,
            check_circular=check_circular,
            allow_nan=allow_nan,
            indent=indent,
            separators=separators,
            default=default,
            sort_keys=sort_keys,
        )
        # json.JSONEncoder dropped the `encoding` argument in Python 3,
        # so keep it on the instance instead of forwarding it to super().
        self.encoding = encoding
        self.skip_nonutf8_value = kw.get(
            "skip_nonutf8_value", False
        )  # by default, do not skip fields that are not valid UTF-8
def encode(self, o):
"""Return a JSON string representation of a Python data structure.
JSONEncoder().encode({"foo": ["bar", "baz"]})
'{"foo": ["bar", "baz"]}'
"""
        # This is for extremely simple cases and benchmarks.
        if isinstance(o, bytes):
            # only bytes need decoding in Python 3; str has no decode()
            o = o.decode(self.encoding or "utf-8")
        if isinstance(o, str):
            if self.ensure_ascii:
                return encode_basestring_ascii(o)
            else:
                return encode_basestring(o)
# This doesn't pass the iterator directly to ''.join() because the
# exceptions aren't as detailed. The list call should be roughly
# equivalent to the PySequence_Fast that ''.join() would do.
chunks = self.iterencode(o, _one_shot=True)
if not isinstance(chunks, (list, tuple)):
chunks = list(chunks)
# add by braver
# todo: fix 'utf8' codec can't decode byte 0x91 in position 3: invalid start byte"
        if self.skip_nonutf8_value:  # defaults to False
tmp_chunks = []
for chunk in chunks:
try:
tmp_chunks.append(unicode_2_utf8_keep_native(chunk))
except Exception as err:
logging.debug(traceback.format_exc())
return "".join(tmp_chunks)
        # keep the legacy logic; cf. the dumps API in /usr/lib/python2.7/package/json/__init__.py
return "".join(chunks)
class ThriftJSONEncoder(json.JSONEncoder):
"""
add by braver
"""
def __init__(
self,
skipkeys=False,
ensure_ascii=True,
check_circular=True,
allow_nan=True,
indent=None,
separators=None,
default=None,
sort_keys=False,
**kw
):
super(ThriftJSONEncoder, self).__init__(
skipkeys=skipkeys,
ensure_ascii=ensure_ascii,
check_circular=check_circular,
allow_nan=allow_nan,
indent=indent,
separators=separators,
default=default,
sort_keys=sort_keys,
)
        self.skip_nonutf8_value = kw.get(
            "skip_nonutf8_value", False
        )  # by default, do not skip fields that are not valid UTF-8
def encode(self, o):
"""Return a JSON string representation of a Python data structure.
JSONEncoder().encode({"foo": ["bar", "baz"]})
'{"foo": ["bar", "baz"]}'
"""
        # This is for extremely simple cases and benchmarks.
        if isinstance(o, bytes):
            # json.JSONEncoder has no `encoding` attribute in Python 3;
            # assume UTF-8 for bytes input
            o = o.decode("utf-8")
        if isinstance(o, str):
            if self.ensure_ascii:
                return encode_basestring_ascii(o)
            else:
                return encode_basestring(o)
# This doesn't pass the iterator directly to ''.join() because the
# exceptions aren't as detailed. The list call should be roughly
# equivalent to the PySequence_Fast that ''.join() would do.
chunks = self.iterencode(o, _one_shot=True)
if not isinstance(chunks, (list, tuple)):
chunks = list(chunks)
# add by braver
# todo: fix 'utf8' codec can't decode byte 0x91 in position 3: invalid start byte"
        if self.skip_nonutf8_value:  # defaults to False
tmp_chunks = []
for chunk in chunks:
try:
tmp_chunks.append(unicode_2_utf8_keep_native(chunk))
except Exception as err:
logging.debug(traceback.format_exc())
return "".join(tmp_chunks)
        # keep the legacy logic; cf. the dumps API in /usr/lib/python2.7/package/json/__init__.py
return "".join(chunks)
def default(self, o):
if isinstance(o, bytes):
return str(o, encoding="utf-8")
if not hasattr(o, "thrift_spec"):
return super(ThriftJSONEncoder, self).default(o)
spec = getattr(o, "thrift_spec")
ret = {}
for tag, field in spec.items():
if field is None:
continue
# (tag, field_ttype, field_name, field_ttype_info, default) = field
field_name = field[1]
default = field[-1]
field_type = field[0]
field_ttype_info = field[2]
            # if field_type in [TType.STRING, TType.BINARY]:  # plain string or binary
            # if field_type in [TType.STRING, TType.BYTE]:  # plain string or binary
            if field_name in o.__dict__:
                val = o.__dict__[field_name]
                if field_type in [TType.LIST, TType.SET]:  # collection types
                    if val:  # non-empty list/set
                        val = list(val)  # normalize list/set to a plain list
                        is_need_binary_bs64 = False
                        if type(field_ttype_info) != tuple:  # base element type
                            if (
                                field_ttype_info in [TType.BYTE]
                                and type(val[0]) in [str]
                                and not istext(val[0])
                            ):
                                is_need_binary_bs64 = True
                        if is_need_binary_bs64:
                            for index, item in enumerate(val):
                                if item and type(item) in [str] and not istext(item):
                                    # binary string: base64-encode it
                                    val[index] = base64.b64encode(item)
                if field_type in [TType.BYTE] and type(val) in [
                    str
                ]:  # plain string or binary string
                    # Base64-encode binary byte strings so the field becomes an
                    # ASCII-safe base64 plaintext and can be serialized to JSON.
                    if val and not istext(val):  # non-empty binary string
                        val = base64.b64encode(val.encode("utf-8"))
                        # val = base64.b64encode(val)  # base64-encode, otherwise JSON serialization fails
                # if val != default:
                ret[field_name] = val
if "request_id" in o.__dict__:
ret["request_id"] = o.__dict__["request_id"]
if "rpc_latency" in o.__dict__:
ret["rpc_latency"] = o.__dict__["rpc_latency"]
return ret
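The binary-vs-text branches above hinge on `istext`, which is defined elsewhere in this module; a minimal standalone sketch of the idea, using simple UTF-8 decodability as a stand-in check:

```python
import base64

# Stand-in for istext: treat a byte string as text iff it decodes as UTF-8.
def looks_like_text(b):
    try:
        b.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False

raw = b"\x91\x00\xff"  # not valid UTF-8, so it must be base64-wrapped for JSON
encoded = base64.b64encode(raw).decode("ascii") if not looks_like_text(raw) else raw
print(encoded)  # kQD/
```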
def thrift2json(obj, skip_nonutf8_value=False):
return json.dumps(
obj,
cls=ThriftJSONEncoder,
ensure_ascii=False,
skip_nonutf8_value=skip_nonutf8_value,
)
def thrift2dict(obj):
    json_str = thrift2json(obj)  # avoid shadowing the built-in `str`
    return json.loads(json_str)
dict2thrift = json2thrift
if __name__ == "__main__":
print(istext("Всего за {$price$}, а доставка - бесплатно!"))
print(istext(b"\xe4\xb8\xad\xe6\x96\x87"))
print(
istext(
'{"web_uri":"ad-site-i18n-sg/202103185d0d723d88b7f642452dac73","height":336,"width":336,"file_name":""}'
)
)
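How `ThriftJSONDecoder._convert` walks `thrift_spec` can be sketched with a fake struct; the field layout and hard-coded TType codes below are illustrative stand-ins, not real thriftpy2 classes:

```python
# Thrift TType codes used in this sketch (matching Thrift's constants).
I64, STRING, STRUCT, LIST = 10, 11, 12, 15

class FakeReq:
    # {tag: (field_ttype, field_name, field_ttype_info, default)}
    thrift_spec = {
        1: (LIST, "ad_ids", I64, False),
        2: (STRING, "name", None, False),
    }

def convert(val, ttype, info):
    if ttype == STRUCT:
        obj = info()
        for tag, (ft, name, finfo, _default) in info.thrift_spec.items():
            if name in val:
                setattr(obj, name, convert(val[name], ft, finfo))
        return obj
    if ttype == LIST:
        return [convert(x, info, None) for x in val]
    if ttype == I64:
        return int(val)
    return val  # STRING and anything else: passed through

req = convert({"ad_ids": ["1", "2"], "name": "demo"}, STRUCT, FakeReq)
print(req.ad_ids, req.name)  # [1, 2] demo
```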


@@ -0,0 +1,139 @@
# -*- coding: utf-8 -*-
from __future__ import absolute_import
import enum
import json
import thriftpy2
from loguru import logger
from thriftpy2.protocol import (
TBinaryProtocolFactory,
TCompactProtocolFactory,
TCyBinaryProtocolFactory,
TJSONProtocolFactory,
)
from thriftpy2.rpc import make_client
from thriftpy2.transport import (
TBufferedTransportFactory,
TCyBufferedTransportFactory,
TCyFramedTransportFactory,
TFramedTransportFactory,
)
from httprunner.thrift.data_convertor import json2thrift, thrift2dict
class ProtoType(enum.Enum):
Binary = 1
CyBinary = 2
Compact = 3
Json = 4
class TransType(enum.Enum):
Buffered = 1
CyBuffered = 2
Framed = 3
CyFramed = 4
class RequestFormat(enum.Enum):
json = 1
binary = 2
def get_proto_factory(proto_type):
    if proto_type == ProtoType.Binary:
        return TBinaryProtocolFactory()
    if proto_type == ProtoType.CyBinary:
        return TCyBinaryProtocolFactory()
    if proto_type == ProtoType.Compact:
        return TCompactProtocolFactory()
    if proto_type == ProtoType.Json:
        return TJSONProtocolFactory()
    raise ValueError("unsupported proto_type: {!r}".format(proto_type))
def get_trans_factory(trans_type):
    if trans_type == TransType.Buffered:
        return TBufferedTransportFactory()
    if trans_type == TransType.CyBuffered:
        return TCyBufferedTransportFactory()
    if trans_type == TransType.Framed:
        return TFramedTransportFactory()
    if trans_type == TransType.CyFramed:
        return TCyFramedTransportFactory()
    raise ValueError("unsupported trans_type: {!r}".format(trans_type))
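The two if-chains above could equally be table-driven; a self-contained sketch of the pattern (placeholder enum and string "factories", not real thriftpy2 classes):

```python
import enum

class DemoProtoType(enum.Enum):
    Binary = 1
    Compact = 2

# Registry mapping enum members to factory callables (placeholders here).
DEMO_FACTORIES = {
    DemoProtoType.Binary: lambda: "TBinaryProtocolFactory()",
    DemoProtoType.Compact: lambda: "TCompactProtocolFactory()",
}

def get_demo_factory(proto_type):
    try:
        return DEMO_FACTORIES[proto_type]()
    except KeyError:
        raise ValueError("unsupported proto_type: {!r}".format(proto_type))

print(get_demo_factory(DemoProtoType.Binary))  # TBinaryProtocolFactory()
```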
class ThriftClient(object):
def __init__(
self,
thrift_file,
service_name,
ip,
port,
include_dirs=None,
timeout=3000,
proto_type=ProtoType.CyBinary,
trans_type=TransType.CyBuffered,
):
self.thrift_file = thrift_file
self.include_dirs = include_dirs
self.service_name = service_name
self.ip = ip
self.port = port
self.timeout = timeout
self.proto_type = proto_type
self.trans_type = trans_type
try:
            logger.debug(
                "init thrift module: thrift_file={}, module_name={}",
                thrift_file,
                str(self.service_name) + "_thrift",
            )
self.thrift_module = thriftpy2.load(
self.thrift_file,
module_name=str(self.service_name) + "_thrift",
include_dirs=self.include_dirs,
)
self.thrift_service_obj = getattr(self.thrift_module, self.service_name)
            logger.debug(
                "init thrift client: service_name={}, ip={}, port={}",
                self.thrift_service_obj,
                ip,
                port,
            )
self.client = make_client(
self.thrift_service_obj,
self.ip,
int(self.port),
timeout=self.timeout,
proto_factory=get_proto_factory(self.proto_type),
trans_factory=get_trans_factory(self.trans_type),
)
except Exception as e:
self.thrift_module = None
self.thrift_service_obj = None
self.client = None
logger.exception("init thrift module and client failed: {}".format(e))
finally:
thriftpy2.parser.parser.thrift_stack = []
def get_client(self):
return self.client
def send_request(self, request_data, request_method=""):
thrift_req_cls = getattr(
self.thrift_service_obj, request_method + "_args"
).thrift_spec[1][2]
request_obj = json2thrift(json.dumps(request_data), thrift_req_cls)
        logger.debug(
            "send thrift request: request_method={}, request_obj={}",
            request_method,
            request_obj,
        )
response_obj = getattr(self.client, request_method)(request_obj)
        logger.debug("thrift response = {}", response_obj)
return thrift2dict(response_obj)
    def __del__(self):
        # self.client may be None if __init__ failed
        if self.client is not None:
            self.client.close()
