diff --git a/LICENSE b/LICENSE
index fed1ee1a..f63cb656 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,21 +1,201 @@
-MIT License
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
-Copyright (c) 2017 Leo Lee
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-Permission is hereby granted, free of charge, to any person obtaining a copy
-of this software and associated documentation files (the "Software"), to deal
-in the Software without restriction, including without limitation the rights
-to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-copies of the Software, and to permit persons to whom the Software is
-furnished to do so, subject to the following conditions:
+ 1. Definitions.
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-SOFTWARE.
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2017 debugtalk
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/httprunner/__about__.py b/httprunner/__about__.py
index 211c90c2..8cfb52d4 100644
--- a/httprunner/__about__.py
+++ b/httprunner/__about__.py
@@ -1,9 +1,9 @@
__title__ = 'HttpRunner'
__description__ = 'One-stop solution for HTTP(S) testing.'
__url__ = 'https://github.com/HttpRunner/HttpRunner'
-__version__ = '1.5.15'
+__version__ = '2.0.0'
__author__ = 'debugtalk'
__author_email__ = 'mail@debugtalk.com'
-__license__ = 'MIT'
+__license__ = 'Apache-2.0'
__copyright__ = 'Copyright 2017 debugtalk'
__cake__ = u'\u2728 \U0001f370 \u2728'
\ No newline at end of file
diff --git a/httprunner/__init__.py b/httprunner/__init__.py
index 6942cf6c..e69de29b 100644
--- a/httprunner/__init__.py
+++ b/httprunner/__init__.py
@@ -1,9 +0,0 @@
-# encoding: utf-8
-
-try:
- # monkey patch at beginning to avoid RecursionError when running locust.
- from gevent import monkey; monkey.patch_all()
-except ImportError:
- pass
-
-from httprunner.api import HttpRunner
diff --git a/httprunner/api.py b/httprunner/api.py
index e5e5e97d..d0190388 100644
--- a/httprunner/api.py
+++ b/httprunner/api.py
@@ -9,91 +9,83 @@ from httprunner import (exceptions, loader, logger, parser, report, runner,
class HttpRunner(object):
- def __init__(self, **kwargs):
+ def __init__(self, failfast=False, save_tests=False, report_template=None, report_dir=None,
+ log_level="INFO", log_file=None):
""" initialize HttpRunner.
Args:
- kwargs (dict): key-value arguments used to initialize TextTestRunner.
- Commonly used arguments:
-
- resultclass (class): HtmlTestResult or TextTestResult
- failfast (bool): False/True, stop the test run on the first error or failure.
- http_client_session (instance): requests.Session(), or locust.client.Session() instance.
-
- Attributes:
- project_mapping (dict): save project loaded api/testcases, environments and debugtalk.py module.
- {
- "debugtalk": {
- "variables": {},
- "functions": {}
- },
- "env": {},
- "def-api": {},
- "def-testcase": {}
- }
+ failfast (bool): stop the test run on the first error or failure.
+ save_tests (bool): save loaded/parsed tests to JSON file.
+ report_template (str): report template file path, template should be in Jinja2 format.
+ report_dir (str): html report save directory.
+ log_level (str): logging level.
+ log_file (str): log file path.
"""
self.exception_stage = "initialize HttpRunner()"
- self.http_client_session = kwargs.pop("http_client_session", None)
- kwargs.setdefault("resultclass", report.HtmlTestResult)
+ kwargs = {
+ "failfast": failfast,
+ "resultclass": report.HtmlTestResult
+ }
self.unittest_runner = unittest.TextTestRunner(**kwargs)
self.test_loader = unittest.TestLoader()
- self.summary = None
+ self.save_tests = save_tests
+ self.report_template = report_template
+ self.report_dir = report_dir
+ self._summary = None
+ if log_file:
+ logger.setup_logger(log_level, log_file)
- def _add_tests(self, testcases):
+ def _add_tests(self, tests_mapping):
""" initialize testcase with Runner() and add to test suite.
Args:
- testcases (list): parsed testcases list
+ tests_mapping (dict): project info and testcases list.
Returns:
- tuple: unittest.TestSuite()
+ unittest.TestSuite()
"""
- def _add_teststep(test_runner, config, teststep_dict):
- """ add teststep to testcase.
+ def _add_test(test_runner, test_dict):
+ """ add test to testcase.
"""
def test(self):
try:
- test_runner.run_test(teststep_dict)
+ test_runner.run_test(test_dict)
except exceptions.MyBaseFailure as ex:
self.fail(str(ex))
finally:
- if hasattr(test_runner.http_client_session, "meta_data"):
- self.meta_data = test_runner.http_client_session.meta_data
- self.meta_data["validators"] = test_runner.evaluated_validators
- test_runner.http_client_session.init_meta_data()
+ self.meta_datas = test_runner.meta_datas
- try:
- teststep_dict["name"] = parser.parse_data(
- teststep_dict["name"],
- config.get("variables", {}),
- config.get("functions", {})
- )
- except exceptions.VariableNotFound:
- pass
+ if "config" in test_dict:
+ # run nested testcase
+ test.__doc__ = test_dict["config"].get("name")
+ else:
+ # run api test
+ test.__doc__ = test_dict.get("name")
- test.__doc__ = teststep_dict["name"]
return test
test_suite = unittest.TestSuite()
- for testcase in testcases:
+ functions = tests_mapping.get("project_mapping", {}).get("functions", {})
+
+ for testcase in tests_mapping["testcases"]:
config = testcase.get("config", {})
- test_runner = runner.Runner(config, self.http_client_session)
+ test_runner = runner.Runner(config, functions)
TestSequense = type('TestSequense', (unittest.TestCase,), {})
- teststeps = testcase.get("teststeps", [])
- for index, teststep_dict in enumerate(teststeps):
- for times_index in range(int(teststep_dict.get("times", 1))):
+ tests = testcase.get("teststeps", [])
+ for index, test_dict in enumerate(tests):
+ for times_index in range(int(test_dict.get("times", 1))):
# suppose one testcase should not have more than 9999 steps,
# and one step should not run more than 999 times.
test_method_name = 'test_{:04}_{:03}'.format(index, times_index)
- test_method = _add_teststep(test_runner, config, teststep_dict)
+ test_method = _add_test(test_runner, test_dict)
setattr(TestSequense, test_method_name, test_method)
loaded_testcase = self.test_loader.loadTestsFromTestCase(TestSequense)
setattr(loaded_testcase, "config", config)
- setattr(loaded_testcase, "teststeps", testcase.get("teststeps", []))
+ setattr(loaded_testcase, "teststeps", tests)
setattr(loaded_testcase, "runner", test_runner)
test_suite.addTest(loaded_testcase)
@@ -127,9 +119,16 @@ class HttpRunner(object):
tests_results (list): list of (testcase, result)
"""
- self.summary = {
+ summary = {
"success": True,
- "stat": {},
+ "stat": {
+ "testcases": {
+ "total": len(tests_results),
+ "success": 0,
+ "fail": 0
+ },
+ "teststeps": {}
+ },
"time": {},
"platform": report.get_platform(),
"details": []
@@ -139,81 +138,67 @@ class HttpRunner(object):
testcase, result = tests_result
testcase_summary = report.get_summary(result)
- self.summary["success"] &= testcase_summary["success"]
+ if testcase_summary["success"]:
+ summary["stat"]["testcases"]["success"] += 1
+ else:
+ summary["stat"]["testcases"]["fail"] += 1
+
+ summary["success"] &= testcase_summary["success"]
testcase_summary["name"] = testcase.config.get("name")
- testcase_summary["base_url"] = testcase.config.get("request", {}).get("base_url", "")
in_out = utils.get_testcase_io(testcase)
utils.print_io(in_out)
testcase_summary["in_out"] = in_out
- report.aggregate_stat(self.summary["stat"], testcase_summary["stat"])
- report.aggregate_stat(self.summary["time"], testcase_summary["time"])
+ report.aggregate_stat(summary["stat"]["teststeps"], testcase_summary["stat"])
+ report.aggregate_stat(summary["time"], testcase_summary["time"])
- self.summary["details"].append(testcase_summary)
+ summary["details"].append(testcase_summary)
- def _run_tests(self, testcases, mapping=None):
- """ start to run test with variables mapping.
-
- Args:
- testcases (list): list of testcase_dict, each testcase is corresponding to a YAML/JSON file
- [
- { # testcase data structure
- "config": {
- "name": "desc1",
- "path": "testcase1_path",
- "variables": [], # optional
- "request": {} # optional
- "refs": {
- "debugtalk": {
- "variables": {},
- "functions": {}
- },
- "env": {},
- "def-api": {},
- "def-testcase": {}
- }
- },
- "teststeps": [
- # teststep data structure
- {
- 'name': 'test step desc2',
- 'variables': [], # optional
- 'extract': [], # optional
- 'validate': [],
- 'request': {},
- 'function_meta': {}
- },
- teststep2 # another teststep dict
- ]
- },
- testcase_dict_2 # another testcase dict
- ]
- mapping (dict): if mapping is specified, it will override variables in config block.
-
- Returns:
- instance: HttpRunner() instance
+ return summary
+ def run_tests(self, tests_mapping):
+ """ run testcase/testsuite data
"""
+ # parse tests
self.exception_stage = "parse tests"
- parsed_testcases_list = parser.parse_tests(testcases, mapping)
+ parsed_tests_mapping = parser.parse_tests(tests_mapping)
+ if self.save_tests:
+ utils.dump_tests(parsed_tests_mapping, "parsed")
+
+ # add tests to test suite
self.exception_stage = "add tests to test suite"
- test_suite = self._add_tests(parsed_testcases_list)
+ test_suite = self._add_tests(parsed_tests_mapping)
+ # run test suite
self.exception_stage = "run test suite"
results = self._run_suite(test_suite)
+ # aggregate results
self.exception_stage = "aggregate results"
- self._aggregate(results)
+ self._summary = self._aggregate(results)
- return self
+ # generate html report
+ self.exception_stage = "generate html report"
+ report.stringify_summary(self._summary)
- def run(self, path_or_testcases, dot_env_path=None, mapping=None):
- """ main interface, run testcases with variables mapping.
+ if self.save_tests:
+ utils.dump_summary(self._summary, tests_mapping["project_mapping"])
+
+ report_path = report.render_html_report(
+ self._summary,
+ self.report_template,
+ self.report_dir
+ )
+
+ return report_path
+
+ def run_path(self, path, dot_env_path=None, mapping=None):
+ """ run testcase/testsuite file or folder.
Args:
- path_or_testcases (str/list/dict): testcase file/foler path, or valid testcases.
+            path (str): testcase/testsuite file/folder path.
dot_env_path (str): specified .env file path.
mapping (dict): if mapping is specified, it will override variables in config block.
@@ -221,37 +206,70 @@ class HttpRunner(object):
instance: HttpRunner() instance
"""
+ # load tests
self.exception_stage = "load tests"
+ tests_mapping = loader.load_tests(path, dot_env_path)
+ tests_mapping["project_mapping"]["test_path"] = path
- if validator.is_testcases(path_or_testcases):
- if isinstance(path_or_testcases, dict):
- testcases = [path_or_testcases]
- else:
- testcases = path_or_testcases
- elif validator.is_testcase_path(path_or_testcases):
- testcases = loader.load_tests(path_or_testcases, dot_env_path)
+ if mapping:
+ tests_mapping["project_mapping"]["variables"] = mapping
+
+ if self.save_tests:
+ utils.dump_tests(tests_mapping, "loaded")
+
+ return self.run_tests(tests_mapping)
+
+ def run(self, path_or_tests, dot_env_path=None, mapping=None):
+ """ main interface.
+
+ Args:
+ path_or_tests:
+                str: testcase/testsuite file/folder path
+ dict: valid testcase/testsuite data
+
+ """
+ if validator.is_testcase_path(path_or_tests):
+ return self.run_path(path_or_tests, dot_env_path, mapping)
+ elif validator.is_testcases(path_or_tests):
+ return self.run_tests(path_or_tests)
else:
raise exceptions.ParamsError("invalid testcase path or testcases.")
- return self._run_tests(testcases, mapping)
-
- def gen_html_report(self, html_report_name=None, html_report_template=None):
- """ generate html report and return report path.
-
- Args:
- html_report_name (str): output html report file name
- html_report_template (str): report template file path, template should be in Jinja2 format
-
- Returns:
- str: generated html report path
-
+ @property
+ def summary(self):
+ """ get test reuslt summary.
"""
- if not self.summary:
- raise exceptions.MyBaseError("run method should be called before gen_html_report.")
+ return self._summary
- self.exception_stage = "generate report"
- return report.render_html_report(
- self.summary,
- html_report_name,
- html_report_template
- )
+
+def prepare_locust_tests(path):
+ """ prepare locust testcases
+
+ Args:
+ path (str): testcase file path.
+
+ Returns:
+ dict: locust tests data
+
+ {
+ "functions": {},
+ "tests": []
+ }
+
+ """
+ tests_mapping = loader.load_tests(path)
+ parsed_tests_mapping = parser.parse_tests(tests_mapping)
+
+ functions = parsed_tests_mapping.get("project_mapping", {}).get("functions", {})
+
+ tests = []
+
+ for testcase in parsed_tests_mapping["testcases"]:
+ testcase_weight = testcase.get("config", {}).pop("weight", 1)
+ for _ in range(testcase_weight):
+ tests.append(testcase)
+
+ return {
+ "functions": functions,
+ "tests": tests
+ }
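The `prepare_locust_tests()` helper added above expands testcases by their `weight` so locust schedules heavier testcases proportionally more often. A minimal standalone sketch of that expansion (the helper name `expand_by_weight` is illustrative, not part of this diff):

```python
def expand_by_weight(testcases):
    # Sketch of the loop in prepare_locust_tests(): each testcase is
    # appended `weight` times (default 1), so a testcase with weight 3
    # is three times as likely to be picked.
    tests = []
    for testcase in testcases:
        # pop() mirrors the diff: weight is consumed, not kept in config
        weight = testcase.get("config", {}).pop("weight", 1)
        tests.extend([testcase] * weight)  # same dict repeated by reference
    return tests

cases = [
    {"config": {"name": "login", "weight": 3}},
    {"config": {"name": "logout"}},
]
print([t["config"]["name"] for t in expand_by_weight(cases)])
# → ['login', 'login', 'login', 'logout']
```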
diff --git a/httprunner/built_in.py b/httprunner/built_in.py
index e2fb0e69..e3e0d1bd 100644
--- a/httprunner/built_in.py
+++ b/httprunner/built_in.py
@@ -132,17 +132,6 @@ def endswith(check_value, expect_value):
""" built-in hooks
"""
-def setup_hook_prepare_kwargs(request):
- if request["method"] == "POST":
- content_type = request.get("headers", {}).get("content-type")
- if content_type and "data" in request:
- # if request content-type is application/json, request data should be dumped
- if content_type.startswith("application/json") and isinstance(request["data"], (dict, list)):
- request["data"] = json.dumps(request["data"])
-
- if isinstance(request["data"], str):
- request["data"] = request["data"].encode('utf-8')
-
def sleep_N_secs(n_secs):
""" sleep n seconds
"""
diff --git a/httprunner/cli.py b/httprunner/cli.py
index cfbdb0df..5d758e6a 100644
--- a/httprunner/cli.py
+++ b/httprunner/cli.py
@@ -1,22 +1,16 @@
# encoding: utf-8
-import argparse
-import multiprocessing
-import os
-import sys
-import unittest
-
-from httprunner import logger
-from httprunner.__about__ import __description__, __version__
-from httprunner.api import HttpRunner
-from httprunner.compat import is_py2
-from httprunner.utils import (create_scaffold, get_python2_retire_msg,
- prettify_json_file, validate_json_file)
-
-
def main_hrun():
""" API test: parse command line options and run commands.
"""
+ import argparse
+ from httprunner import logger
+ from httprunner.__about__ import __description__, __version__
+ from httprunner.api import HttpRunner
+ from httprunner.compat import is_py2
+ from httprunner.utils import (create_scaffold, get_python2_retire_msg,
+ prettify_json_file, validate_json_file)
+
parser = argparse.ArgumentParser(description=__description__)
parser.add_argument(
'-V', '--version', dest='version', action='store_true',
@@ -24,15 +18,6 @@ def main_hrun():
parser.add_argument(
'testcase_paths', nargs='*',
help="testcase file path")
- parser.add_argument(
- '--no-html-report', action='store_true', default=False,
- help="do not generate html report.")
- parser.add_argument(
- '--html-report-name',
- help="specify html report name, only effective when generating html report.")
- parser.add_argument(
- '--html-report-template',
- help="specify html report template path.")
parser.add_argument(
'--log-level', default='INFO',
help="Specify logging level, default is INFO.")
@@ -42,9 +27,18 @@ def main_hrun():
parser.add_argument(
'--dot-env-path',
help="Specify .env file path, which is useful for keeping sensitive data.")
+ parser.add_argument(
+ '--report-template',
+ help="specify report template path.")
+ parser.add_argument(
+ '--report-dir',
+ help="specify report save directory.")
parser.add_argument(
'--failfast', action='store_true', default=False,
help="Stop the test run on the first error or failure.")
+ parser.add_argument(
+ '--save-tests', action='store_true', default=False,
+ help="Save loaded tests and parsed tests to JSON file.")
parser.add_argument(
'--startproject',
help="Specify new project name.")
@@ -77,30 +71,32 @@ def main_hrun():
create_scaffold(project_name)
exit(0)
+ runner = HttpRunner(
+ failfast=args.failfast,
+ save_tests=args.save_tests,
+ report_template=args.report_template,
+ report_dir=args.report_dir
+ )
try:
- runner = HttpRunner(
- failfast=args.failfast
- )
- runner.run(
- args.testcase_paths,
- dot_env_path=args.dot_env_path
- )
+ for path in args.testcase_paths:
+ runner.run(path, dot_env_path=args.dot_env_path)
except Exception:
logger.log_error("!!!!!!!!!! exception stage: {} !!!!!!!!!!".format(runner.exception_stage))
raise
- if not args.no_html_report:
- runner.gen_html_report(
- html_report_name=args.html_report_name,
- html_report_template=args.html_report_template
- )
+ return 0
- summary = runner.summary
- return 0 if summary["success"] else 1
def main_locust():
""" Performance test with locust: parse command line options and run commands.
"""
+ # monkey patch ssl at beginning to avoid RecursionError when running locust.
+ from gevent import monkey; monkey.patch_ssl()
+
+ import multiprocessing
+ import sys
+ from httprunner import logger
+
try:
from httprunner import locusts
except ImportError:
@@ -114,7 +110,7 @@ def main_locust():
sys.argv.extend(["-h"])
if sys.argv[1] in ["-h", "--help", "-V", "--version"]:
- locusts.main()
+ locusts.start_locust_main()
sys.exit(0)
# set logging level
@@ -129,7 +125,7 @@ def main_locust():
loglevel = sys.argv[loglevel_index]
else:
# default
- loglevel = "INFO"
+ loglevel = "WARNING"
logger.setup_logger(loglevel)
@@ -180,4 +176,4 @@ def main_locust():
sys.argv.pop(processes_index)
locusts.run_locusts_with_processes(sys.argv, processes_count)
else:
- locusts.main()
+ locusts.start_locust_main()
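The hunk above changes locust's default log level from INFO to WARNING when no loglevel flag is passed. A small sketch of that fallback (the `-L`/`--loglevel` flag names are assumed from locust's CLI; the diff itself only shows the `loglevel_index` lookup):

```python
def resolve_loglevel(argv):
    # Mirrors main_locust(): take the value following -L/--loglevel
    # from argv, otherwise fall back to the new WARNING default.
    for flag in ("-L", "--loglevel"):
        if flag in argv:
            return argv[argv.index(flag) + 1]
    return "WARNING"

print(resolve_loglevel(["locusts", "-f", "locustfile.py"]))  # → WARNING
print(resolve_loglevel(["locusts", "--loglevel", "DEBUG"]))  # → DEBUG
```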
diff --git a/httprunner/client.py b/httprunner/client.py
index 0f9e4d9a..a31031c2 100644
--- a/httprunner/client.py
+++ b/httprunner/client.py
@@ -1,20 +1,17 @@
# encoding: utf-8
-import re
import time
import requests
import urllib3
from httprunner import logger
-from httprunner.exceptions import ParamsError
+from httprunner.utils import build_url, lower_dict_keys, omit_long_data
from requests import Request, Response
from requests.exceptions import (InvalidSchema, InvalidURL, MissingSchema,
RequestException)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-absolute_http_url_regexp = re.compile(r"^https?://", re.I)
-
class ApiResponse(Response):
@@ -42,37 +39,96 @@ class HttpSession(requests.Session):
self.base_url = base_url if base_url else ""
self.init_meta_data()
- def _build_url(self, path):
- """ prepend url with hostname unless it's already an absolute URL """
- if absolute_http_url_regexp.match(path):
- return path
- elif self.base_url:
- return "{}/{}".format(self.base_url.rstrip("/"), path.lstrip("/"))
- else:
- raise ParamsError("base url missed!")
-
def init_meta_data(self):
""" initialize meta_data, it will store detail data of request and response
"""
self.meta_data = {
- "request": {
- "url": "N/A",
- "method": "N/A",
- "headers": {},
- "start_timestamp": None
- },
- "response": {
- "status_code": "N/A",
- "headers": {},
+ "name": "",
+ "data": [
+ {
+ "request": {
+ "url": "N/A",
+ "method": "N/A",
+ "headers": {}
+ },
+ "response": {
+ "status_code": "N/A",
+ "headers": {},
+ "encoding": None,
+ "content_type": ""
+ }
+ }
+ ],
+ "stat": {
"content_size": "N/A",
"response_time_ms": "N/A",
"elapsed_ms": "N/A",
- "encoding": None,
- "content": None,
- "content_type": ""
}
}
+ def get_req_resp_record(self, resp_obj):
+ """ get request and response info from Response() object.
+ """
+ def log_print(req_resp_dict, r_type):
+ msg = "\n================== {} details ==================\n".format(r_type)
+ for key, value in req_resp_dict[r_type].items():
+ msg += "{:<16} : {}\n".format(key, repr(value))
+ logger.log_debug(msg)
+
+ req_resp_dict = {
+ "request": {},
+ "response": {}
+ }
+
+ # record actual request info
+ req_resp_dict["request"]["url"] = resp_obj.request.url
+ req_resp_dict["request"]["headers"] = dict(resp_obj.request.headers)
+
+ request_body = resp_obj.request.body
+ if request_body:
+ request_content_type = lower_dict_keys(
+ req_resp_dict["request"]["headers"]
+ ).get("content-type")
+ if request_content_type and "multipart/form-data" in request_content_type:
+ # upload file type
+ req_resp_dict["request"]["body"] = "upload file stream (OMITTED)"
+ else:
+ req_resp_dict["request"]["body"] = request_body
+
+ # log request details in debug mode
+ log_print(req_resp_dict, "request")
+
+ # record response info
+ req_resp_dict["response"]["ok"] = resp_obj.ok
+ req_resp_dict["response"]["url"] = resp_obj.url
+ req_resp_dict["response"]["status_code"] = resp_obj.status_code
+ req_resp_dict["response"]["reason"] = resp_obj.reason
+ req_resp_dict["response"]["cookies"] = resp_obj.cookies or {}
+ req_resp_dict["response"]["encoding"] = resp_obj.encoding
+ resp_headers = dict(resp_obj.headers)
+ req_resp_dict["response"]["headers"] = resp_headers
+
+ lower_resp_headers = lower_dict_keys(resp_headers)
+ content_type = lower_resp_headers.get("content-type", "")
+ req_resp_dict["response"]["content_type"] = content_type
+
+ if "image" in content_type:
+ # response is image type, record bytes content only
+ req_resp_dict["response"]["content"] = resp_obj.content
+ else:
+ try:
+ # try to record json data
+ req_resp_dict["response"]["json"] = resp_obj.json()
+ except ValueError:
+                # only record at most 512 text characters
+ resp_text = resp_obj.text
+ req_resp_dict["response"]["text"] = omit_long_data(resp_text)
+
+ # log response details in debug mode
+ log_print(req_resp_dict, "response")
+
+ return req_resp_dict
+
def request(self, method, url, name=None, **kwargs):
"""
Constructs and sends a :py:class:`requests.Request`.
@@ -112,63 +168,42 @@ class HttpSession(requests.Session):
:param cert: (optional)
if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair.
"""
- def log_print(request_response):
- msg = "\n================== {} details ==================\n".format(request_response)
- for key, value in self.meta_data[request_response].items():
- msg += "{:<16} : {}\n".format(key, repr(value))
- logger.log_debug(msg)
+ # record test name
+ self.meta_data["name"] = name
# record original request info
- self.meta_data["request"]["method"] = method
- self.meta_data["request"]["url"] = url
- self.meta_data["request"].update(kwargs)
- self.meta_data["request"]["start_timestamp"] = time.time()
+ self.meta_data["data"][0]["request"]["method"] = method
+ self.meta_data["data"][0]["request"]["url"] = url
+ kwargs.setdefault("timeout", 120)
+ self.meta_data["data"][0]["request"].update(kwargs)
# prepend url with hostname unless it's already an absolute URL
- url = self._build_url(url)
+ url = build_url(self.base_url, url)
- kwargs.setdefault("timeout", 120)
+ start_timestamp = time.time()
response = self._send_request_safe_mode(method, url, **kwargs)
-
- # record the consumed time
- self.meta_data["response"]["response_time_ms"] = \
- round((time.time() - self.meta_data["request"]["start_timestamp"]) * 1000, 2)
- self.meta_data["response"]["elapsed_ms"] = response.elapsed.microseconds / 1000.0
-
- # record actual request info
- self.meta_data["request"]["url"] = (response.history and response.history[0] or response).request.url
- self.meta_data["request"]["headers"] = dict(response.request.headers)
- self.meta_data["request"]["body"] = response.request.body
-
- # log request details in debug mode
- log_print("request")
-
- # record response info
- self.meta_data["response"]["ok"] = response.ok
- self.meta_data["response"]["url"] = response.url
- self.meta_data["response"]["status_code"] = response.status_code
- self.meta_data["response"]["reason"] = response.reason
- self.meta_data["response"]["headers"] = dict(response.headers)
- self.meta_data["response"]["cookies"] = response.cookies or {}
- self.meta_data["response"]["encoding"] = response.encoding
- self.meta_data["response"]["content"] = response.content
- self.meta_data["response"]["text"] = response.text
- self.meta_data["response"]["content_type"] = response.headers.get("Content-Type", "")
-
- try:
- self.meta_data["response"]["json"] = response.json()
- except ValueError:
- self.meta_data["response"]["json"] = None
+ response_time_ms = round((time.time() - start_timestamp) * 1000, 2)
# get the length of the content, but if the argument stream is set to True, we take
# the size from the content-length header, in order to not trigger fetching of the body
if kwargs.get("stream", False):
- self.meta_data["response"]["content_size"] = int(self.meta_data["response"]["headers"].get("content-length") or 0)
+ content_size = int(dict(response.headers).get("content-length") or 0)
else:
- self.meta_data["response"]["content_size"] = len(response.content or "")
+ content_size = len(response.content or "")
- # log response details in debug mode
- log_print("response")
+ # record the consumed time
+ self.meta_data["stat"] = {
+ "response_time_ms": response_time_ms,
+ "elapsed_ms": response.elapsed.microseconds / 1000.0,
+ "content_size": content_size
+ }
+
+        # record request and response history, including 30X redirections
+ response_list = response.history + [response]
+ self.meta_data["data"] = [
+ self.get_req_resp_record(resp_obj)
+ for resp_obj in response_list
+ ]
try:
response.raise_for_status()
@@ -176,10 +211,10 @@ class HttpSession(requests.Session):
logger.log_error(u"{exception}".format(exception=str(e)))
else:
logger.log_info(
- """status_code: {}, response_time(ms): {} ms, response_length: {} bytes""".format(
- self.meta_data["response"]["status_code"],
- self.meta_data["response"]["response_time_ms"],
- self.meta_data["response"]["content_size"]
+ """status_code: {}, response_time(ms): {} ms, response_length: {} bytes\n""".format(
+ response.status_code,
+ response_time_ms,
+ content_size
)
)
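The record logic above leans on helpers imported from `httprunner.utils` (`lower_dict_keys`, `omit_long_data`). Minimal sketches of their assumed behavior, not the library's exact implementations:

```python
# Hypothetical sketches of the utils helpers used by get_req_resp_record above.

def lower_dict_keys(origin_dict):
    # lowercase header names so "Content-Type" and "content-type" compare equal
    return {key.lower(): value for key, value in origin_dict.items()}

def omit_long_data(body, omit_len=512):
    # keep at most omit_len characters of a long text body
    if not isinstance(body, str):
        return body
    if len(body) <= omit_len:
        return body
    omitted_length = len(body) - omit_len
    return body[:omit_len] + " ... OMITTED {} CHARACTERS ...".format(omitted_length)
```

With these, the content-type check (`"multipart/form-data" in content_type`) works regardless of how the client capitalized the header name.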
diff --git a/httprunner/context.py b/httprunner/context.py
index 7484a503..3fe3d572 100644
--- a/httprunner/context.py
+++ b/httprunner/context.py
@@ -1,80 +1,63 @@
-# encoding: utf-8
-
-import copy
-
from httprunner import exceptions, logger, parser, utils
-from httprunner.compat import OrderedDict
-class Context(object):
- """ Manages context functions and variables.
- context has two levels, testcase and teststep.
+class SessionContext(object):
+ """ HttpRunner session, store runtime variables.
+
+ Examples:
+ >>> functions={...}
+ >>> variables = {"SECRET_KEY": "DebugTalk"}
+ >>> context = SessionContext(functions, variables)
+
+ Equivalent to:
+ >>> context = SessionContext(functions)
+ >>> context.update_session_variables(variables)
+
"""
- def __init__(self, variables=None, functions=None):
- """ init Context with testcase variables and functions.
- """
- # testcase level context
- ## TESTCASE_SHARED_VARIABLES_MAPPING and TESTCASE_SHARED_FUNCTIONS_MAPPING are unchangeable.
- if isinstance(variables, list):
- self.TESTCASE_SHARED_VARIABLES_MAPPING = utils.convert_mappinglist_to_orderdict(variables)
- else:
- # dict
- self.TESTCASE_SHARED_VARIABLES_MAPPING = variables or OrderedDict()
+ def __init__(self, functions, variables=None):
+ self.session_variables_mapping = utils.ensure_mapping_format(variables or {})
+ self.FUNCTIONS_MAPPING = functions
+ self.init_test_variables()
+ self.validation_results = []
- self.TESTCASE_SHARED_FUNCTIONS_MAPPING = functions or OrderedDict()
-
- # testcase level request, will not change
- self.TESTCASE_SHARED_REQUEST_MAPPING = {}
-
- self.evaluated_validators = []
- self.init_context_variables(level="testcase")
-
- def init_context_variables(self, level="testcase"):
- """ initialize testcase/teststep context
+ def init_test_variables(self, variables_mapping=None):
+        """ init test variables; called when each test (api) starts.
+ variables_mapping will be evaluated first.
Args:
- level (enum): "testcase" or "teststep"
-
- """
- if level == "testcase":
- # testcase level runtime context, will be updated with extracted variables in each teststep.
- self.testcase_runtime_variables_mapping = copy.deepcopy(self.TESTCASE_SHARED_VARIABLES_MAPPING)
-
- # teststep level context, will be altered in each teststep.
- # teststep config shall inherit from testcase configs,
- # but can not change testcase configs, that's why we use copy.deepcopy here.
- self.teststep_variables_mapping = copy.deepcopy(self.testcase_runtime_variables_mapping)
-
- def update_context_variables(self, variables, level):
- """ update context variables, with level specified.
-
- Args:
- variables (list/OrderedDict): testcase config block or teststep block
- [
- {"TOKEN": "debugtalk"},
- {"random": "${gen_random_string(5)}"},
- {"json": {'name': 'user', 'password': '123456'}},
- {"md5": "${gen_md5($TOKEN, $json, $random)}"}
- ]
- OrderDict({
- "TOKEN": "debugtalk",
+ variables_mapping (dict)
+ {
"random": "${gen_random_string(5)}",
- "json": {'name': 'user', 'password': '123456'},
- "md5": "${gen_md5($TOKEN, $json, $random)}"
- })
- level (enum): "testcase" or "teststep"
+ "authorization": "${gen_md5($TOKEN, $data, $random)}",
+ "data": '{"name": "user", "password": "123456"}',
+ "TOKEN": "debugtalk",
+ }
"""
- if isinstance(variables, list):
- variables = utils.convert_mappinglist_to_orderdict(variables)
+ variables_mapping = variables_mapping or {}
+ variables_mapping = utils.ensure_mapping_format(variables_mapping)
- for variable_name, variable_value in variables.items():
- variable_eval_value = self.eval_content(variable_value)
+ self.test_variables_mapping = {}
+ # priority: extracted variable > teststep variable
+ self.test_variables_mapping.update(variables_mapping)
+ self.test_variables_mapping.update(self.session_variables_mapping)
- if level == "testcase":
- self.testcase_runtime_variables_mapping[variable_name] = variable_eval_value
+ for variable_name, variable_value in variables_mapping.items():
+ variable_value = self.eval_content(variable_value)
+ self.update_test_variables(variable_name, variable_value)
- self.update_teststep_variables_mapping(variable_name, variable_eval_value)
+ def update_test_variables(self, variable_name, variable_value):
+ """ update test variables, these variables are only valid in the current test.
+ """
+ self.test_variables_mapping[variable_name] = variable_value
+
+ def update_session_variables(self, variables_mapping):
+ """ update session with extracted variables mapping.
+ these variables are valid in the whole running session.
+ """
+ variables_mapping = utils.ensure_mapping_format(variables_mapping)
+ self.session_variables_mapping.update(variables_mapping)
+ self.test_variables_mapping.update(self.session_variables_mapping)
def eval_content(self, content):
""" evaluate content recursively, take effect on each variable and function in content.
@@ -82,51 +65,10 @@ class Context(object):
"""
return parser.parse_data(
content,
- self.teststep_variables_mapping,
- self.TESTCASE_SHARED_FUNCTIONS_MAPPING
+ self.test_variables_mapping,
+ self.FUNCTIONS_MAPPING
)
- def update_testcase_runtime_variables_mapping(self, variables):
- """ update testcase_runtime_variables_mapping with extracted vairables in teststep.
-
- Args:
- variables (OrderDict): extracted variables in teststep
-
- """
- for variable_name, variable_value in variables.items():
- self.testcase_runtime_variables_mapping[variable_name] = variable_value
- self.update_teststep_variables_mapping(variable_name, variable_value)
-
- def update_teststep_variables_mapping(self, variable_name, variable_value):
- """ bind and update testcase variables mapping
- """
- self.teststep_variables_mapping[variable_name] = variable_value
-
- def get_parsed_request(self, request_dict, level="teststep"):
- """ get parsed request with variables and functions.
-
- Args:
- request_dict (dict): request config mapping
- level (enum): "testcase" or "teststep"
-
- Returns:
- dict: parsed request dict
-
- """
- if level == "testcase":
- # testcase config request dict has been parsed in parse_tests
- self.TESTCASE_SHARED_REQUEST_MAPPING = copy.deepcopy(request_dict)
- return self.TESTCASE_SHARED_REQUEST_MAPPING
-
- else:
- # teststep
- return self.eval_content(
- utils.deep_update_dict(
- copy.deepcopy(self.TESTCASE_SHARED_REQUEST_MAPPING),
- request_dict
- )
- )
-
def __eval_check_item(self, validator, resp_obj):
""" evaluate check item in validator.
@@ -188,7 +130,7 @@ class Context(object):
"""
# TODO: move comparator uniform to init_test_suites
comparator = utils.get_uniform_comparator(validator_dict["comparator"])
- validate_func = parser.get_mapping_function(comparator, self.TESTCASE_SHARED_FUNCTIONS_MAPPING)
+ validate_func = parser.get_mapping_function(comparator, self.FUNCTIONS_MAPPING)
check_item = validator_dict["check"]
check_value = validator_dict["check_value"]
@@ -226,11 +168,12 @@ class Context(object):
def validate(self, validators, resp_obj):
""" make validations
"""
- evaluated_validators = []
if not validators:
- return evaluated_validators
+ return
- logger.log_info("start to validate.")
+ logger.log_debug("start to validate.")
+
+ self.validation_results = []
validate_pass = True
failures = []
@@ -247,10 +190,8 @@ class Context(object):
validate_pass = False
failures.append(str(ex))
- evaluated_validators.append(evaluated_validator)
+ self.validation_results.append(evaluated_validator)
if not validate_pass:
failures_string = "\n".join([failure for failure in failures])
raise exceptions.ValidationFailure(failures_string)
-
- return evaluated_validators
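The priority rule in `init_test_variables` above (extracted session variables override per-test variables) comes purely from dict merge order. A standalone sketch of that ordering:

```python
# Sketch of the variable priority implemented by SessionContext:
# session (extracted) variables are merged last, so they win on key conflicts.

session_variables = {"token": "extracted-token"}   # extracted in an earlier step
test_variables = {"token": "step-default", "uid": 1001}  # defined on the teststep

merged = {}
merged.update(test_variables)       # teststep variables first
merged.update(session_variables)    # session variables override on conflict

# merged == {"token": "extracted-token", "uid": 1001}
```

The same ordering explains why `update_session_variables` re-applies `session_variables_mapping` onto `test_variables_mapping` after every extraction.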
diff --git a/httprunner/exceptions.py b/httprunner/exceptions.py
index 036137da..3db701ee 100644
--- a/httprunner/exceptions.py
+++ b/httprunner/exceptions.py
@@ -47,6 +47,9 @@ class FunctionNotFound(NotFoundError):
class VariableNotFound(NotFoundError):
pass
+class EnvNotFound(NotFoundError):
+ pass
+
class ApiNotFound(NotFoundError):
pass
diff --git a/httprunner/loader.py b/httprunner/loader.py
index d8294c40..3a34865f 100644
--- a/httprunner/loader.py
+++ b/httprunner/loader.py
@@ -1,4 +1,5 @@
import collections
+import copy
import csv
import importlib
import io
@@ -7,8 +8,7 @@ import os
import sys
import yaml
-from httprunner import built_in, exceptions, logger, parser, utils, validator
-from httprunner.compat import OrderedDict
+from httprunner import exceptions, logger, parser, utils, validator
###############################################################################
@@ -58,21 +58,27 @@ def load_json_file(json_file):
def load_csv_file(csv_file):
""" load csv file and check file content format
- @param
- csv_file: csv file path
- e.g. csv file content:
- username,password
- test1,111111
- test2,222222
- test3,333333
- @return
- list of parameter, each parameter is in dict format
- e.g.
+
+ Args:
+        csv_file (str): csv file path; see Examples below for the expected content.
+
+ Returns:
+ list: list of parameters, each parameter is in dict format
+
+ Examples:
+ >>> cat csv_file
+ username,password
+ test1,111111
+ test2,222222
+ test3,333333
+
+ >>> load_csv_file(csv_file)
[
{'username': 'test1', 'password': '111111'},
{'username': 'test2', 'password': '222222'},
{'username': 'test3', 'password': '333333'}
]
+
"""
csv_content_list = []
@@ -163,10 +169,11 @@ def load_dot_env_file(dot_env_path):
"""
if not os.path.isfile(dot_env_path):
- raise exceptions.FileNotFound(".env file path is not exist.")
+ return {}
logger.log_info("Loading environment variables from {}".format(dot_env_path))
env_variables_mapping = {}
+
with io.open(dot_env_path, 'r', encoding='utf-8') as fp:
for line in fp:
# maxsplit=1
@@ -184,7 +191,7 @@ def load_dot_env_file(dot_env_path):
def locate_file(start_path, file_name):
- """ locate filename and return file path.
+ """ locate filename and return absolute file path.
searching will be recursive upward until current working directory.
Args:
@@ -206,7 +213,7 @@ def locate_file(start_path, file_name):
file_path = os.path.join(start_dir_path, file_name)
if os.path.isfile(file_path):
- return file_path
+ return os.path.abspath(file_path)
# current working directory
if os.path.abspath(start_dir_path) in [os.getcwd(), os.path.abspath(os.sep)]:
@@ -220,160 +227,166 @@ def locate_file(start_path, file_name):
## debugtalk.py module loader
###############################################################################
-def load_python_module(module):
- """ load python module.
+def load_module_functions(module):
+ """ load python module functions.
Args:
module: python module
Returns:
- dict: variables and functions mapping for specified python module
+ dict: functions mapping for specified python module
{
- "variables": {},
- "functions": {}
+ "func1_name": func1,
+ "func2_name": func2
}
"""
- debugtalk_module = {
- "variables": {},
- "functions": {}
- }
+ module_functions = {}
for name, item in vars(module).items():
- if validator.is_function((name, item)):
- debugtalk_module["functions"][name] = item
- elif validator.is_variable((name, item)):
- if isinstance(item, tuple):
- continue
- debugtalk_module["variables"][name] = item
- else:
- pass
+ if validator.is_function(item):
+ module_functions[name] = item
- return debugtalk_module
+ return module_functions
-def load_builtin_module():
- """ load built_in module
+def load_builtin_functions():
+ """ load built_in module functions
"""
- built_in_module = load_python_module(built_in)
- return built_in_module
+ from httprunner import built_in
+ return load_module_functions(built_in)
-def load_debugtalk_module():
- """ load project debugtalk.py module
+def load_debugtalk_functions():
+ """ load project debugtalk.py module functions
debugtalk.py should be located in project working directory.
Returns:
- dict: debugtalk module mapping
+ dict: debugtalk module functions mapping
{
- "variables": {},
- "functions": {}
+ "func1_name": func1,
+ "func2_name": func2
}
"""
# load debugtalk.py module
imported_module = importlib.import_module("debugtalk")
- debugtalk_module = load_python_module(imported_module)
- return debugtalk_module
-
-
-def get_module_item(module_mapping, item_type, item_name):
- """ get expected function or variable from module mapping.
-
- Args:
- module_mapping(dict): module mapping with variables and functions.
-
- {
- "variables": {},
- "functions": {}
- }
-
- item_type(str): "functions" or "variables"
- item_name(str): function name or variable name
-
- Returns:
- object: specified variable or function object.
-
- Raises:
- exceptions.FunctionNotFound: If specified function not found in module mapping
- exceptions.VariableNotFound: If specified variable not found in module mapping
-
- """
- try:
- return module_mapping[item_type][item_name]
- except KeyError:
- err_msg = "{} not found in debugtalk.py module!\n".format(item_name)
- err_msg += "module mapping: {}".format(module_mapping)
- if item_type == "functions":
- raise exceptions.FunctionNotFound(err_msg)
- else:
- raise exceptions.VariableNotFound(err_msg)
+ return load_module_functions(imported_module)
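`load_module_functions` above collects only the callables from a module's namespace via `vars(module)` and a function check. A usage sketch with a module built on the fly (the real `validator.is_function` check is assumed to behave like an `isinstance` test against `types.FunctionType`):

```python
# Usage sketch of load_module_functions: keep name -> function for every
# plain function in a module, skipping variables and module metadata.
import types

def load_module_functions(module):
    return {
        name: item
        for name, item in vars(module).items()
        if isinstance(item, types.FunctionType)
    }

demo = types.ModuleType("debugtalk_demo")   # hypothetical debugtalk.py stand-in
demo.gen_token = lambda: "abc123"           # a lambda is a FunctionType too
demo.RETRY_COUNT = 3                        # plain variables are filtered out

funcs = load_module_functions(demo)
# funcs contains only "gen_token"; RETRY_COUNT and __name__/__doc__ are excluded
```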
###############################################################################
## testcase loader
###############################################################################
-def _load_teststeps(test_block, project_mapping):
- """ load teststeps with api/testcase references
+project_mapping = {}
+tests_def_mapping = {
+ "PWD": None,
+ "api": {},
+ "testcases": {}
+}
+
+
+def __extend_with_api_ref(raw_testinfo):
+ """ extend with api reference
+
+ Raises:
+ exceptions.ApiNotFound: api not found
+
+ """
+ api_name = raw_testinfo["api"]
+
+    # an api may be defined in one of two ways:
+    # 1. individual file: each file corresponds to one api definition
+    # 2. api set file: one file contains a list of api definitions
+ if not os.path.isabs(api_name):
+ # make compatible with Windows/Linux
+ api_path = os.path.join(tests_def_mapping["PWD"], *api_name.split("/"))
+ if os.path.isfile(api_path):
+ # type 1: api is defined in individual file
+ api_name = api_path
+
+ try:
+ block = tests_def_mapping["api"][api_name]
+        # NOTICE: deepcopy to avoid the stored api definition being changed during iteration.
+ raw_testinfo["api_def"] = utils.deepcopy_dict(block)
+ except KeyError:
+ raise exceptions.ApiNotFound("{} not found!".format(api_name))
+
+
+def __extend_with_testcase_ref(raw_testinfo):
+ """ extend with testcase reference
+ """
+ testcase_path = raw_testinfo["testcase"]
+
+    if testcase_path not in tests_def_mapping["testcases"]:
+        # make compatible with Windows/Linux
+        testcase_path = os.path.join(
+            project_mapping["PWD"],
+            *testcase_path.split("/")
+        )
+        testcase_dict = load_testcase(load_file(testcase_path))
+        # cache under the "testcases" sub-mapping, not at the top level
+        # alongside the "PWD"/"api" keys
+        tests_def_mapping["testcases"][testcase_path] = testcase_dict
+    else:
+        testcase_dict = tests_def_mapping["testcases"][testcase_path]
+
+ raw_testinfo["testcase_def"] = testcase_dict
+
+
+def load_teststep(raw_testinfo):
+ """ load testcase step content.
+    a teststep may be defined directly, or reference an api/testcase.
Args:
- test_block (dict): test block content, maybe in 3 formats.
+ raw_testinfo (dict): test data, maybe in 3 formats.
# api reference
{
"name": "add product to cart",
- "api": "api_add_cart()",
- "validate": []
+ "api": "/path/to/api",
+ "variables": {},
+ "validate": [],
+ "extract": {}
}
# testcase reference
{
"name": "add product to cart",
- "suite": "create_and_check()",
- "validate": []
+ "testcase": "/path/to/testcase",
+ "variables": {}
}
# define directly
{
"name": "checkout cart",
"request": {},
- "validate": []
+ "variables": {},
+ "validate": [],
+ "extract": {}
}
Returns:
- list: loaded teststeps list
+ dict: loaded teststep content
"""
- def extend_api_definition(block):
- ref_call = block["api"]
- def_block = _get_block_by_name(ref_call, "def-api", project_mapping)
- _extend_block(block, def_block)
-
- teststeps = []
-
# reference api
- if "api" in test_block:
- extend_api_definition(test_block)
- teststeps.append(test_block)
+ if "api" in raw_testinfo:
+ __extend_with_api_ref(raw_testinfo)
+
+ # TODO: reference proc functions
+ # elif "func" in raw_testinfo:
+ # pass
# reference testcase
- elif "suite" in test_block: # TODO: replace suite with testcase
- ref_call = test_block["suite"]
- block = _get_block_by_name(ref_call, "def-testcase", project_mapping)
- # TODO: bugfix lost block config variables
- for teststep in block["teststeps"]:
- if "api" in teststep:
- extend_api_definition(teststep)
- teststeps.append(teststep)
+ elif "testcase" in raw_testinfo:
+ __extend_with_testcase_ref(raw_testinfo)
# define directly
else:
- teststeps.append(test_block)
+ pass
- return teststeps
+ return raw_testinfo
-def _load_testcase(raw_testcase, project_mapping):
- """ load testcase/testsuite with api/testcase references
+def load_testcase(raw_testcase):
+ """ load testcase with api/testcase references.
Args:
raw_testcase (list): raw testcase content loaded from JSON/YAML file:
@@ -381,9 +394,8 @@ def _load_testcase(raw_testcase, project_mapping):
# config part
{
"config": {
- "name": "",
- "def": "suite_order()",
- "request": {}
+ "name": "XXXX",
+ "base_url": "https://debugtalk.com"
}
},
# teststeps part
@@ -394,289 +406,138 @@ def _load_testcase(raw_testcase, project_mapping):
"test": {...}
}
]
- project_mapping (dict): project_mapping
Returns:
dict: loaded testcase content
{
"config": {},
- "teststeps": [teststep11, teststep12]
+ "teststeps": [test11, test12]
}
"""
- loaded_testcase = {
- "config": {},
- "teststeps": []
- }
+ config = {}
+ tests = []
for item in raw_testcase:
- # TODO: add json schema validation
- if not isinstance(item, dict) or len(item) != 1:
- raise exceptions.FileFormatError("Testcase format error: {}".format(item))
-
key, test_block = item.popitem()
- if not isinstance(test_block, dict):
- raise exceptions.FileFormatError("Testcase format error: {}".format(item))
-
if key == "config":
- loaded_testcase["config"].update(test_block)
-
+ config.update(test_block)
elif key == "test":
- loaded_testcase["teststeps"].extend(_load_teststeps(test_block, project_mapping))
-
+ tests.append(load_teststep(test_block))
else:
logger.log_warning(
"unexpected block key: {}. block key should only be 'config' or 'test'.".format(key)
)
- return loaded_testcase
+ return {
+ "config": config,
+ "teststeps": tests
+ }
-def _get_block_by_name(ref_call, ref_type, project_mapping):
- """ get test content by reference name.
+def load_testsuite(raw_testsuite):
+ """ load testsuite with testcase references.
Args:
- ref_call (str): call function.
- e.g. api_v1_Account_Login_POST($UserName, $Password)
- ref_type (enum): "def-api" or "def-testcase"
- project_mapping (dict): project_mapping
-
- Returns:
- dict: api/testcase definition.
-
- Raises:
- exceptions.ParamsError: call args number is not equal to defined args number.
-
- """
- function_meta = parser.parse_function(ref_call)
- func_name = function_meta["func_name"]
- call_args = function_meta["args"]
- block = _get_test_definition(func_name, ref_type, project_mapping)
- def_args = block.get("function_meta", {}).get("args", [])
-
- if len(call_args) != len(def_args):
- err_msg = "{}: call args number is not equal to defined args number!\n".format(func_name)
- err_msg += "defined args: {}\n".format(def_args)
- err_msg += "reference args: {}".format(call_args)
- logger.log_error(err_msg)
- raise exceptions.ParamsError(err_msg)
-
- args_mapping = {}
- for index, item in enumerate(def_args):
- if call_args[index] == item:
- continue
-
- args_mapping[item] = call_args[index]
-
- if args_mapping:
- block = parser.substitute_variables(block, args_mapping)
-
- return block
-
-
-def _get_test_definition(name, ref_type, project_mapping):
- """ get expected api or testcase.
-
- Args:
- name (str): api or testcase name
- ref_type (enum): "def-api" or "def-testcase"
- project_mapping (dict): project_mapping
-
- Returns:
- dict: expected api/testcase info if found.
-
- Raises:
- exceptions.ApiNotFound: api not found
- exceptions.TestcaseNotFound: testcase not found
-
- """
- block = project_mapping.get(ref_type, {}).get(name)
-
- if not block:
- err_msg = "{} not found!".format(name)
- if ref_type == "def-api":
- raise exceptions.ApiNotFound(err_msg)
- else:
- # ref_type == "def-testcase":
- raise exceptions.TestcaseNotFound(err_msg)
-
- return block
-
-
-def _extend_block(ref_block, def_block):
- """ extend ref_block with def_block.
-
- Args:
- def_block (dict): api definition dict.
- ref_block (dict): reference block
-
- Returns:
- dict: extended reference block.
-
- Examples:
- >>> def_block = {
- "name": "get token 1",
- "request": {...},
- "validate": [{'eq': ['status_code', 200]}]
- }
- >>> ref_block = {
- "name": "get token 2",
- "extract": [{"token": "content.token"}],
- "validate": [{'eq': ['status_code', 201]}, {'len_eq': ['content.token', 16]}]
- }
- >>> _extend_block(def_block, ref_block)
+ raw_testsuite (dict): raw testsuite content loaded from JSON/YAML file:
{
- "name": "get token 2",
- "request": {...},
- "extract": [{"token": "content.token"}],
- "validate": [{'eq': ['status_code', 201]}, {'len_eq': ['content.token', 16]}]
+ "config": {
+ "name": "",
+ "request": {}
+ }
+ "testcases": {
+ "testcase1": {
+ "testcase": "/path/to/testcase",
+ "variables": {...},
+ "parameters": {...}
+ },
+ "testcase2": {}
+ }
}
- """
- # TODO: override variables
- def_validators = def_block.get("validate") or def_block.get("validators", [])
- ref_validators = ref_block.get("validate") or ref_block.get("validators", [])
-
- def_extrators = def_block.get("extract") \
- or def_block.get("extractors") \
- or def_block.get("extract_binds", [])
- ref_extractors = ref_block.get("extract") \
- or ref_block.get("extractors") \
- or ref_block.get("extract_binds", [])
-
- ref_block.update(def_block)
- ref_block["validate"] = _merge_validator(
- def_validators,
- ref_validators
- )
- ref_block["extract"] = _merge_extractor(
- def_extrators,
- ref_extractors
- )
-
-
-def _convert_validators_to_mapping(validators):
- """ convert validators list to mapping.
-
- Args:
- validators (list): validators in list
-
Returns:
- dict: validators mapping, use (check, comparator) as key.
-
- Examples:
- >>> validators = [
- {"check": "v1", "expect": 201, "comparator": "eq"},
- {"check": {"b": 1}, "expect": 200, "comparator": "eq"}
- ]
- >>> _convert_validators_to_mapping(validators)
+ dict: loaded testsuite content
{
- ("v1", "eq"): {"check": "v1", "expect": 201, "comparator": "eq"},
- ('{"b": 1}', "eq"): {"check": {"b": 1}, "expect": 200, "comparator": "eq"}
+ "config": {},
+ "testcases": [testcase1, testcase2]
}
"""
- validators_mapping = {}
+ testcases = raw_testsuite["testcases"]
+ for name, raw_testcase in testcases.items():
+ __extend_with_testcase_ref(raw_testcase)
+ raw_testcase.setdefault("name", name)
- for validator in validators:
- validator = parser.parse_validator(validator)
-
- if not isinstance(validator["check"], collections.Hashable):
- check = json.dumps(validator["check"])
- else:
- check = validator["check"]
-
- key = (check, validator["comparator"])
- validators_mapping[key] = validator
-
- return validators_mapping
+ return raw_testsuite
-def _merge_validator(def_validators, ref_validators):
- """ merge def_validators with ref_validators.
+def load_test_file(path):
+    """ load test file; the file may be a testcase, testsuite or api.
Args:
- def_validators (list):
- ref_validators (list):
+ path (str): test file path
Returns:
- list: merged validators
+ dict: loaded test content
- Examples:
- >>> def_validators = [{'eq': ['v1', 200]}, {"check": "s2", "expect": 16, "comparator": "len_eq"}]
- >>> ref_validators = [{"check": "v1", "expect": 201}, {'len_eq': ['s3', 12]}]
- >>> _merge_validator(def_validators, ref_validators)
- [
- {"check": "v1", "expect": 201, "comparator": "eq"},
- {"check": "s2", "expect": 16, "comparator": "len_eq"},
- {"check": "s3", "expect": 12, "comparator": "len_eq"}
- ]
+ # api
+ {
+ "path": path,
+ "type": "api",
+ "name": "",
+ "request": {}
+ }
+
+ # testcase
+ {
+ "path": path,
+ "type": "testcase",
+ "config": {},
+ "teststeps": []
+ }
+
+ # testsuite
+ {
+ "path": path,
+ "type": "testsuite",
+ "config": {},
+ "testcases": {}
+ }
"""
- if not def_validators:
- return ref_validators
+ raw_content = load_file(path)
+ loaded_content = None
- elif not ref_validators:
- return def_validators
+ if isinstance(raw_content, dict):
+
+ if "testcases" in raw_content:
+ # file_type: testsuite
+ # TODO: add json schema validation for testsuite
+ loaded_content = load_testsuite(raw_content)
+ loaded_content["path"] = path
+ loaded_content["type"] = "testsuite"
+ elif "request" in raw_content:
+ # file_type: api
+ # TODO: add json schema validation for api
+ loaded_content = raw_content
+ loaded_content["path"] = path
+ loaded_content["type"] = "api"
+ else:
+ # invalid format
+ logger.log_warning("Invalid test file format: {}".format(path))
+
+ elif isinstance(raw_content, list) and len(raw_content) > 0:
+ # file_type: testcase
+ # TODO: add json schema validation for testcase
+ loaded_content = load_testcase(raw_content)
+ loaded_content["path"] = path
+ loaded_content["type"] = "testcase"
else:
- def_validators_mapping = _convert_validators_to_mapping(def_validators)
- ref_validators_mapping = _convert_validators_to_mapping(ref_validators)
+ # invalid format
+ logger.log_warning("Invalid test file format: {}".format(path))
- def_validators_mapping.update(ref_validators_mapping)
- return list(def_validators_mapping.values())
-
-
-def _merge_extractor(def_extrators, ref_extractors):
- """ merge def_extrators with ref_extractors
-
- Args:
- def_extrators (list): [{"var1": "val1"}, {"var2": "val2"}]
- ref_extractors (list): [{"var1": "val111"}, {"var3": "val3"}]
-
- Returns:
- list: merged extractors
-
- Examples:
- >>> def_extrators = [{"var1": "val1"}, {"var2": "val2"}]
- >>> ref_extractors = [{"var1": "val111"}, {"var3": "val3"}]
- >>> _merge_extractor(def_extrators, ref_extractors)
- [
- {"var1": "val111"},
- {"var2": "val2"},
- {"var3": "val3"}
- ]
-
- """
- if not def_extrators:
- return ref_extractors
-
- elif not ref_extractors:
- return def_extrators
-
- else:
- extractor_dict = OrderedDict()
- for api_extrator in def_extrators:
- if len(api_extrator) != 1:
- logger.log_warning("incorrect extractor: {}".format(api_extrator))
- continue
-
- var_name = list(api_extrator.keys())[0]
- extractor_dict[var_name] = api_extrator[var_name]
-
- for test_extrator in ref_extractors:
- if len(test_extrator) != 1:
- logger.log_warning("incorrect extractor: {}".format(test_extrator))
- continue
-
- var_name = list(test_extrator.keys())[0]
- extractor_dict[var_name] = test_extrator[var_name]
-
- extractor_list = []
- for key, value in extractor_dict.items():
- extractor_list.append({key: value})
-
- return extractor_list
+ return loaded_content
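The dispatch in `load_test_file` above decides the file type from the top-level shape of the loaded YAML/JSON. A simplified sketch of just that classification rule (the real loader also normalizes the content and attaches `path`/`type` keys):

```python
# Sketch of load_test_file's type dispatch:
# dict with "testcases"  -> testsuite
# dict with "request"    -> api
# non-empty list         -> testcase
# anything else          -> invalid format

def classify_test_file(raw_content):
    if isinstance(raw_content, dict):
        if "testcases" in raw_content:
            return "testsuite"
        elif "request" in raw_content:
            return "api"
        return None  # invalid format
    elif isinstance(raw_content, list) and raw_content:
        return "testcase"
    return None
```

This ordering matters: a testsuite dict may also contain request-like keys inside its testcases, so the `"testcases"` check must come first.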
def load_folder_content(folder_path):
@@ -749,115 +610,45 @@ def load_api_folder(api_folder_path):
for api_file_path, api_items in api_items_mapping.items():
# TODO: add JSON schema validation
- for api_item in api_items:
- key, api_dict = api_item.popitem()
+ if isinstance(api_items, list):
+ for api_item in api_items:
+ key, api_dict = api_item.popitem()
+ api_id = api_dict.get("id")
+ if api_id in api_definition_mapping:
+ logger.log_warning("API definition duplicated: {}".format(api_id))
- api_def = api_dict.pop("def")
- function_meta = parser.parse_function(api_def)
- func_name = function_meta["func_name"]
+ api_definition_mapping[api_id] = api_dict
- if func_name in api_definition_mapping:
- logger.log_warning("API definition duplicated: {}".format(func_name))
+ elif isinstance(api_items, dict):
+ if api_file_path in api_definition_mapping:
+ logger.log_warning("API definition duplicated: {}".format(api_file_path))
- api_dict["function_meta"] = function_meta
- api_definition_mapping[func_name] = api_dict
+ api_definition_mapping[api_file_path] = api_items
return api_definition_mapping
-def load_test_folder(test_folder_path):
- """ load testcases definitions from folder.
-
- Args:
- test_folder_path (str): testcases files folder.
-
- testcase file should be in the following format:
- [
- {
- "config": {
- "def": "create_and_check",
- "request": {},
- "validate": []
- }
- },
- {
- "test": {
- "api": "get_user",
- "validate": []
- }
- }
- ]
-
- Returns:
- dict: testcases definition mapping.
-
- {
- "create_and_check": [
- {"config": {}},
- {"test": {}},
- {"test": {}}
- ],
- "tests/testcases/create_and_get.yml": [
- {"config": {}},
- {"test": {}},
- {"test": {}}
- ]
- }
-
- """
- test_definition_mapping = {}
-
- test_items_mapping = load_folder_content(test_folder_path)
-
- for test_file_path, items in test_items_mapping.items():
- # TODO: add JSON schema validation
-
- testcase = {
- "config": {},
- "teststeps": []
- }
- for item in items:
- key, block = item.popitem()
-
- if key == "config":
- testcase["config"].update(block)
-
- if "def" not in block:
- test_definition_mapping[test_file_path] = testcase
- continue
-
- testcase_def = block.pop("def")
- function_meta = parser.parse_function(testcase_def)
- func_name = function_meta["func_name"]
-
- if func_name in test_definition_mapping:
- logger.log_warning("API definition duplicated: {}".format(func_name))
-
- testcase["function_meta"] = function_meta
- test_definition_mapping[func_name] = testcase
- else:
- # key == "test":
- testcase["teststeps"].append(block)
-
- return test_definition_mapping
-
-
def locate_debugtalk_py(start_path):
- """ locate debugtalk.py file.
+ """ locate debugtalk.py file
Args:
start_path (str): start locating path, maybe testcase file path or directory path
+ Returns:
+ str: debugtalk.py file path, None if not found
+
"""
try:
+ # locate debugtalk.py file.
debugtalk_path = locate_file(start_path, "debugtalk.py")
- return os.path.abspath(debugtalk_path)
except exceptions.FileNotFound:
- return None
+ debugtalk_path = None
+
+ return debugtalk_path
def load_project_tests(test_path, dot_env_path=None):
- """ load api, testcases, .env, builtin module and debugtalk.py.
+ """ load api, testcases, .env, debugtalk.py functions.
api/testcases folder is relative to project_working_directory
Args:
@@ -865,104 +656,98 @@ def load_project_tests(test_path, dot_env_path=None):
dot_env_path (str): specified .env file path
Returns:
- dict: project loaded api/testcases definitions, environments and debugtalk.py module.
+ dict: project loaded api/testcases definitions, environments and debugtalk.py functions.
"""
- project_mapping = {}
-
+ # locate debugtalk.py file
debugtalk_path = locate_debugtalk_py(test_path)
- # locate PWD with debugtalk.py path
+
if debugtalk_path:
# The folder containing debugtalk.py will be treated as PWD.
project_working_directory = os.path.dirname(debugtalk_path)
else:
- # debugtalk.py is not found, use os.getcwd() as PWD.
+ # debugtalk.py not found, use os.getcwd() as PWD.
project_working_directory = os.getcwd()
# add PWD to sys.path
sys.path.insert(0, project_working_directory)
- # load .env
+ # load .env file
+ # NOTICE:
+ # environment variables may be loaded in debugtalk.py,
+ # thus the .env file should be loaded before debugtalk.py
dot_env_path = dot_env_path or os.path.join(project_working_directory, ".env")
- if os.path.isfile(dot_env_path):
- project_mapping["env"] = load_dot_env_file(dot_env_path)
- else:
- project_mapping["env"] = {}
+ project_mapping["env"] = load_dot_env_file(dot_env_path)
- # load debugtalk.py
if debugtalk_path:
- project_mapping["debugtalk"] = load_debugtalk_module()
+ # load debugtalk.py functions
+ debugtalk_functions = load_debugtalk_functions()
else:
- project_mapping["debugtalk"] = {
- "variables": {},
- "functions": {}
- }
+ debugtalk_functions = {}
- project_mapping["def-api"] = load_api_folder(os.path.join(project_working_directory, "api"))
- # TODO: replace suite with testcases
- project_mapping["def-testcase"] = load_test_folder(os.path.join(project_working_directory, "suite"))
+ # locate PWD and load debugtalk.py functions
- return project_mapping
+ project_mapping["PWD"] = project_working_directory
+ project_mapping["functions"] = debugtalk_functions
+
+ # load api
+ tests_def_mapping["api"] = load_api_folder(os.path.join(project_working_directory, "api"))
+ tests_def_mapping["PWD"] = project_working_directory
def load_tests(path, dot_env_path=None):
""" load testcases from file path, extend and merge with api/testcase definitions.
Args:
- path (str/list): testcase file/foler path.
- path could be in several types:
+ path (str): testcase/testsuite file/folder path.
+ path could be one of 2 types:
- absolute/relative file path
- absolute/relative folder path
- - list/set container with file(s) and/or folder(s)
dot_env_path (str): specified .env file path
Returns:
- list: testcases list, each testcase is corresponding to a file
- [
- { # testcase data structure
- "config": {
- "name": "desc1",
- "path": "testcase1_path",
- "variables": [], # optional
- "request": {} # optional
- "refs": {
- "debugtalk": {
- "variables": {},
- "functions": {}
- },
- "env": {},
- "def-api": {},
- "def-testcase": {}
- }
+ dict: tests mapping, including project_mapping and testcases.
+ each testcase corresponds to a file.
+ {
+ "project_mapping": {
+ "PWD": "XXXXX",
+ "functions": {},
+ "env": {}
},
- "teststeps": [
- # teststep data structure
- {
- 'name': 'test step desc1',
- 'variables': [], # optional
- 'extract': [], # optional
- 'validate': [],
- 'request': {},
- 'function_meta': {}
+ "testcases": [
+ { # testcase data structure
+ "config": {
+ "name": "desc1",
+ "path": "testcase1_path",
+ "variables": [], # optional
+ },
+ "teststeps": [
+ # test data structure
+ {
+ 'name': 'test desc1',
+ 'variables': [], # optional
+ 'extract': [], # optional
+ 'validate': [],
+ 'request': {}
+ },
+ test_dict_2 # another test dict
+ ]
},
- teststep2 # another teststep dict
+ testcase_2_dict # another testcase dict
+ ],
+ "testsuites": [
+ { # testsuite data structure
+ "config": {},
+ "testcases": {
+ "testcase1": {},
+ "testcase2": {},
+ }
+ },
+ testsuite_2_dict
]
- },
- testcase_dict_2 # another testcase dict
- ]
+ }
"""
- if isinstance(path, (list, set)):
- testcases_list = []
-
- for file_path in set(path):
- testcases = load_tests(file_path, dot_env_path)
- if not testcases:
- continue
- testcases_list.extend(testcases)
-
- return testcases_list
-
if not os.path.exists(path):
err_msg = "path not exist: {}".format(path)
logger.log_error(err_msg)
@@ -971,66 +756,28 @@ def load_tests(path, dot_env_path=None):
if not os.path.isabs(path):
path = os.path.join(os.getcwd(), path)
+ load_project_tests(path, dot_env_path)
+ tests_mapping = {
+ "project_mapping": project_mapping
+ }
+
+ def __load_file_content(path):
+ loaded_content = load_test_file(path)
+ if not loaded_content:
+ pass
+ elif loaded_content["type"] == "testsuite":
+ tests_mapping.setdefault("testsuites", []).append(loaded_content)
+ elif loaded_content["type"] == "testcase":
+ tests_mapping.setdefault("testcases", []).append(loaded_content)
+ elif loaded_content["type"] == "api":
+ tests_mapping.setdefault("apis", []).append(loaded_content)
+
if os.path.isdir(path):
files_list = load_folder_files(path)
- testcases_list = load_tests(files_list, dot_env_path)
+ for path in files_list:
+ __load_file_content(path)
elif os.path.isfile(path):
- try:
- raw_testcase = load_file(path)
- project_mapping = load_project_tests(path, dot_env_path)
- testcase = _load_testcase(raw_testcase, project_mapping)
- testcase["config"]["path"] = path
- testcase["config"]["refs"] = project_mapping
- testcases_list = [testcase]
- except exceptions.FileFormatError:
- testcases_list = []
+ __load_file_content(path)
- return testcases_list
-
-
-def load_locust_tests(path, dot_env_path=None):
- """ load locust testcases
-
- Args:
- path (str): testcase/testsuite file path.
- dot_env_path (str): specified .env file path
-
- Returns:
- dict: locust testcases with weight
- {
- "config": {...},
- "tests": [
- # weight 3
- [teststep11],
- [teststep11],
- [teststep11],
- # weight 2
- [teststep21, teststep22],
- [teststep21, teststep22]
- ]
- }
-
- """
- raw_testcase = load_file(path)
- project_mapping = load_project_tests(path, dot_env_path)
-
- config = {
- "refs": project_mapping
- }
- tests = []
- for item in raw_testcase:
- key, test_block = item.popitem()
-
- if key == "config":
- config.update(test_block)
- elif key == "test":
- teststeps = _load_teststeps(test_block, project_mapping)
- weight = test_block.pop("weight", 1)
- for _ in range(weight):
- tests.append(teststeps)
-
- return {
- "config": config,
- "tests": tests
- }
+ return tests_mapping
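The `__load_file_content` dispatch in the hunk above buckets each loaded file by its `"type"` field. The same logic, extracted into a standalone sketch (function name illustrative):

```python
def group_loaded_contents(loaded_contents):
    """Sketch of the __load_file_content dispatch: bucket each loaded
    file by its "type" field (testsuite/testcase/api)."""
    tests_mapping = {}
    type_to_key = {
        "testsuite": "testsuites",
        "testcase": "testcases",
        "api": "apis",
    }
    for loaded_content in loaded_contents:
        if not loaded_content:
            # unrecognized files load to None/empty and are skipped
            continue
        key = type_to_key.get(loaded_content["type"])
        if key:
            tests_mapping.setdefault(key, []).append(loaded_content)
    return tests_mapping
```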
diff --git a/httprunner/locusts.py b/httprunner/locusts.py
index 3411bfd5..38f6b7fa 100644
--- a/httprunner/locusts.py
+++ b/httprunner/locusts.py
@@ -7,7 +7,6 @@ import sys
from httprunner.logger import color_print
from httprunner import loader
-from locust.main import main
def parse_locustfile(file_path):
@@ -31,6 +30,7 @@ def parse_locustfile(file_path):
return locustfile_path
+
def gen_locustfile(testcase_file_path):
""" generate locustfile from template.
"""
@@ -49,17 +49,25 @@ def gen_locustfile(testcase_file_path):
return locustfile_path
+
+def start_locust_main():
+ from locust.main import main
+ main()
+
+
def start_master(sys_argv):
sys_argv.append("--master")
sys.argv = sys_argv
- main()
+ start_locust_main()
+
def start_slave(sys_argv):
if "--slave" not in sys_argv:
sys_argv.extend(["--slave"])
sys.argv = sys_argv
- main()
+ start_locust_main()
+
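Moving `from locust.main import main` out of the module header and into `start_locust_main` (see the hunks above) is the deferred-import pattern: the optional dependency is resolved only when a locust command actually runs, so merely importing this module never fails when locust is absent. A generic, testable sketch of the pattern using `importlib` (helper name and error message are illustrative, not part of httprunner):

```python
import importlib

def resolve_optional_dependency(module_name, attr_name):
    """Resolve attr_name from an optional dependency at call time,
    mirroring how start_locust_main defers its locust import."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        raise RuntimeError(
            "{} is required for this command, please install it first".format(module_name)
        )
    return getattr(module, attr_name)
```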
def run_locusts_with_processes(sys_argv, processes_count):
processes = []
diff --git a/httprunner/parser.py b/httprunner/parser.py
index 6c22c7e3..27c80ee7 100644
--- a/httprunner/parser.py
+++ b/httprunner/parser.py
@@ -149,20 +149,27 @@ def parse_function(content):
def parse_validator(validator):
- """ parse validator, validator maybe in two format
- @param (dict) validator
- format1: this is kept for compatiblity with the previous versions.
- {"check": "status_code", "comparator": "eq", "expect": 201}
- {"check": "$resp_body_success", "comparator": "eq", "expect": True}
- format2: recommended new version
- {'eq': ['status_code', 201]}
- {'eq': ['$resp_body_success', True]}
- @return (dict) validator info
- {
- "check": "status_code",
- "expect": 201,
- "comparator": "eq"
- }
+ """ parse validator
+
+ Args:
+ validator (dict): validator may be in one of two formats:
+
+ format1: this is kept for compatibility with the previous versions.
+ {"check": "status_code", "comparator": "eq", "expect": 201}
+ {"check": "$resp_body_success", "comparator": "eq", "expect": True}
+ format2: recommended new version
+ {'eq': ['status_code', 201]}
+ {'eq': ['$resp_body_success', True]}
+
+ Returns:
+ dict: validator info
+
+ {
+ "check": "status_code",
+ "expect": 201,
+ "comparator": "eq"
+ }
+
"""
if not isinstance(validator, dict):
raise exceptions.ParamsError("invalid validator: {}".format(validator))
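The normalization performed on the two formats can be sketched as follows (an illustrative re-implementation mirroring the docstring above; error handling simplified to `ValueError`, and the real parser may accept additional key aliases):

```python
def parse_validator(validator):
    """Normalize both validator formats into one canonical dict."""
    if not isinstance(validator, dict):
        raise ValueError("invalid validator: {}".format(validator))

    if "check" in validator and len(validator) > 1:
        # format1: {"check": "status_code", "comparator": "eq", "expect": 201}
        check_item = validator["check"]
        expect_value = validator["expect"]
        comparator = validator.get("comparator", "eq")
    elif len(validator) == 1:
        # format2: {"eq": ["status_code", 201]}
        comparator = list(validator.keys())[0]
        check_item, expect_value = validator[comparator]
    else:
        raise ValueError("invalid validator: {}".format(validator))

    return {
        "check": check_item,
        "expect": expect_value,
        "comparator": comparator
    }
```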
@@ -255,7 +262,7 @@ def substitute_variables(content, variables_mapping):
return content
-def parse_parameters(parameters, variables_mapping, functions_mapping):
+def parse_parameters(parameters, variables_mapping=None, functions_mapping=None):
""" parse parameters and generate cartesian product.
Args:
@@ -265,7 +272,7 @@ def parse_parameters(parameters, variables_mapping, functions_mapping):
(2) call built-in parameterize function, "${parameterize(account.csv)}"
(3) call custom function in debugtalk.py, "${gen_app_version()}"
- variables_mapping (dict): variables mapping loaded from debugtalk.py
+ variables_mapping (dict): variables mapping loaded from testcase config
functions_mapping (dict): functions mapping loaded from debugtalk.py
Returns:
@@ -280,9 +287,12 @@ def parse_parameters(parameters, variables_mapping, functions_mapping):
>>> parse_parameters(parameters)
"""
+ variables_mapping = variables_mapping or {}
+ functions_mapping = functions_mapping or {}
parsed_parameters_list = []
- for parameter in parameters:
- parameter_name, parameter_content = list(parameter.items())[0]
+
+ parameters = utils.ensure_mapping_format(parameters)
+ for parameter_name, parameter_content in parameters.items():
parameter_name_list = parameter_name.split("-")
if isinstance(parameter_content, list):
@@ -305,16 +315,33 @@ def parse_parameters(parameters, variables_mapping, functions_mapping):
else:
# (2) & (3)
parsed_parameter_content = parse_data(parameter_content, variables_mapping, functions_mapping)
- # e.g. [{'app_version': '2.8.5'}, {'app_version': '2.8.6'}]
- # e.g. [{"username": "user1", "password": "111111"}, {"username": "user2", "password": "222222"}]
if not isinstance(parsed_parameter_content, list):
raise exceptions.ParamsError("parameters syntax error!")
- parameter_content_list = [
- # get subset by parameter name
- {key: parameter_item[key] for key in parameter_name_list}
- for parameter_item in parsed_parameter_content
- ]
+ parameter_content_list = []
+ for parameter_item in parsed_parameter_content:
+ if isinstance(parameter_item, dict):
+ # get subset by parameter name
+ # {"app_version": "${gen_app_version()}"}
+ # gen_app_version() => [{'app_version': '2.8.5'}, {'app_version': '2.8.6'}]
+ # {"username-password": "${get_account()}"}
+ # get_account() => [
+ # {"username": "user1", "password": "111111"},
+ # {"username": "user2", "password": "222222"}
+ # ]
+ parameter_dict = {key: parameter_item[key] for key in parameter_name_list}
+ elif isinstance(parameter_item, (list, tuple)):
+ # {"username-password": "${get_account()}"}
+ # get_account() => [("user1", "111111"), ("user2", "222222")]
+ parameter_dict = dict(zip(parameter_name_list, parameter_item))
+ elif len(parameter_name_list) == 1:
+ # {"user_agent": "${get_user_agent()}"}
+ # get_user_agent() => ["iOS/10.1", "iOS/10.2"]
+ parameter_dict = {
+ parameter_name_list[0]: parameter_item
+ }
+ else:
+ # scalar item with multiple parameter names: parameter_dict
+ # would be unbound, so fail fast instead
+ raise exceptions.ParamsError("parameters syntax error!")
+
+ parameter_content_list.append(parameter_dict)
parsed_parameters_list.append(parameter_content_list)
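Each parameter therefore contributes one list of name-to-value dicts to `parsed_parameters_list`; the final step (outside this hunk) combines those lists into a cartesian product. A sketch of that combination, matching the shape of the data built above:

```python
import itertools

def gen_cartesian_product(*args):
    """Combine per-parameter lists of dicts into one list of merged
    parameter mappings (illustrative sketch)."""
    if not args:
        return []
    product_list = []
    for product_item_tuple in itertools.product(*args):
        product_item = {}
        for item in product_item_tuple:
            product_item.update(item)
        product_list.append(product_item)
    return product_list
```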
@@ -325,34 +352,6 @@ def parse_parameters(parameters, variables_mapping, functions_mapping):
## parse content with variables and functions mapping
###############################################################################
-def get_builtin_item(item_type, item_name):
- """
-
- Args:
- item_type (enum): "variables" or "functions"
- item_name (str): variable name or function name
-
- Returns:
- variable or function with the name of item_name
-
- """
- # override built_in module with debugtalk.py module
- from httprunner import loader
- built_in_module = loader.load_builtin_module()
-
- if item_type == "variables":
- try:
- return built_in_module["variables"][item_name]
- except KeyError:
- raise exceptions.VariableNotFound("{} is not found.".format(item_name))
- else:
- # item_type == "functions":
- try:
- return built_in_module["functions"][item_name]
- except KeyError:
- raise exceptions.FunctionNotFound("{} is not found.".format(item_name))
-
-
def get_mapping_variable(variable_name, variables_mapping):
""" get variable from variables_mapping.
@@ -367,10 +366,10 @@ def get_mapping_variable(variable_name, variables_mapping):
exceptions.VariableNotFound: variable is not found.
"""
- if variable_name in variables_mapping:
+ try:
return variables_mapping[variable_name]
- else:
- return get_builtin_item("variables", variable_name)
+ except KeyError:
+ raise exceptions.VariableNotFound("{} is not found.".format(variable_name))
def get_mapping_function(function_name, functions_mapping):
@@ -392,12 +391,15 @@ def get_mapping_function(function_name, functions_mapping):
return functions_mapping[function_name]
try:
- return get_builtin_item("functions", function_name)
- except exceptions.FunctionNotFound:
+ # check if HttpRunner builtin functions
+ from httprunner import loader
+ built_in_functions = loader.load_builtin_functions()
+ return built_in_functions[function_name]
+ except KeyError:
pass
try:
- # check if builtin functions
+ # check if Python builtin functions
item_func = eval(function_name)
if callable(item_func):
# is builtin function
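The revised lookup order in this hunk is: project functions from debugtalk.py, then HttpRunner built-in functions, then Python built-ins. A self-contained sketch of that three-tier chain, with the HttpRunner built-ins passed in as a parameter (so no loader import is needed) and `getattr(builtins, ...)` standing in for the `eval` call:

```python
import builtins

def get_mapping_function(function_name, functions_mapping, hr_builtin_functions=None):
    """Three-tier function lookup sketch; hr_builtin_functions stands
    in for loader.load_builtin_functions()."""
    # 1. project functions defined in debugtalk.py
    if function_name in functions_mapping:
        return functions_mapping[function_name]

    # 2. HttpRunner builtin functions
    hr_builtin_functions = hr_builtin_functions or {}
    if function_name in hr_builtin_functions:
        return hr_builtin_functions[function_name]

    # 3. Python builtin functions, e.g. len/str/sum
    func = getattr(builtins, function_name, None)
    if callable(func):
        return func

    raise LookupError("{} is not found.".format(function_name))
```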
@@ -436,8 +438,14 @@ def parse_string_functions(content, variables_mapping, functions_mapping):
kwargs = parse_data(kwargs, variables_mapping, functions_mapping)
if func_name in ["parameterize", "P"]:
+ if len(args) != 1 or kwargs:
+ raise exceptions.ParamsError("P() should only pass in one argument!")
from httprunner import loader
- eval_value = loader.load_csv_file(*args, **kwargs)
+ eval_value = loader.load_csv_file(args[0])
+ elif func_name in ["environ", "ENV"]:
+ if len(args) != 1 or kwargs:
+ raise exceptions.ParamsError("ENV() should only pass in one argument!")
+ eval_value = utils.get_os_environ(args[0])
else:
func = get_mapping_function(func_name, functions_mapping)
eval_value = func(*args, **kwargs)
@@ -456,7 +464,7 @@ def parse_string_functions(content, variables_mapping, functions_mapping):
return content
-def parse_string_variables(content, variables_mapping):
+def parse_string_variables(content, variables_mapping, functions_mapping):
""" parse string content with variables mapping.
Args:
@@ -469,7 +477,7 @@ def parse_string_variables(content, variables_mapping):
Examples:
>>> content = "/api/users/$uid"
>>> variables_mapping = {"$uid": 1000}
- >>> parse_string_variables(content, variables_mapping)
+ >>> parse_string_variables(content, variables_mapping, {})
"/api/users/1000"
"""
@@ -477,30 +485,52 @@ def parse_string_variables(content, variables_mapping):
for variable_name in variables_list:
variable_value = get_mapping_variable(variable_name, variables_mapping)
+ if variable_name == "request" and isinstance(variable_value, dict) \
+ and "url" in variable_value and "method" in variable_value:
+ # call setup_hooks action with $request
+ for key, value in variable_value.items():
+ variable_value[key] = parse_data(
+ value,
+ variables_mapping,
+ functions_mapping
+ )
+ parsed_variable_value = variable_value
+ elif "${}".format(variable_name) == variable_value:
+ # variable_name = "token"
+ # variables_mapping = {"token": "$token"}
+ parsed_variable_value = variable_value
+ else:
+ parsed_variable_value = parse_data(
+ variable_value,
+ variables_mapping,
+ functions_mapping
+ )
+
# TODO: replace variable label from $var to {{var}}
if "${}".format(variable_name) == content:
# content is a variable
- content = variable_value
+ content = parsed_variable_value
else:
# content contains one or several variables
- if not isinstance(variable_value, str):
- variable_value = builtin_str(variable_value)
+ if not isinstance(parsed_variable_value, str):
+ parsed_variable_value = builtin_str(parsed_variable_value)
content = content.replace(
"${}".format(variable_name),
- variable_value, 1
+ parsed_variable_value, 1
)
return content
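Stripped of the `$request` hook handling and the self-reference guard added above, the core substitution behaviour is: a variable that spans the whole string keeps its native type, otherwise its value is stringified and spliced into the text. A simplified sketch (the regex is an assumption; the real variable-extraction helper is not shown in this diff):

```python
import re

def substitute_string_variables(content, variables_mapping):
    """Simplified sketch of the $var substitution performed above."""
    for variable_name in re.findall(r"\$(\w+)", content):
        variable_value = variables_mapping[variable_name]
        if "${}".format(variable_name) == content:
            # content is exactly one variable: keep its native type
            return variable_value
        # otherwise splice the stringified value into the text
        content = content.replace(
            "${}".format(variable_name), str(variable_value), 1
        )
    return content
```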
-def parse_data(content, variables_mapping=None, functions_mapping=None):
+def parse_data(content, variables_mapping=None, functions_mapping=None, raise_if_variable_not_found=True):
""" parse content with variables mapping
Args:
content (str/dict/list/numeric/bool/type): content to be parsed
variables_mapping (dict): variables mapping.
functions_mapping (dict): functions mapping.
+ raise_if_variable_not_found (bool): if set to False, no exception will be raised when VariableNotFound occurs.
Returns:
parsed content.
@@ -528,144 +558,556 @@ def parse_data(content, variables_mapping=None, functions_mapping=None):
if isinstance(content, (list, set, tuple)):
return [
- parse_data(item, variables_mapping, functions_mapping)
+ parse_data(
+ item,
+ variables_mapping,
+ functions_mapping,
+ raise_if_variable_not_found
+ )
for item in content
]
if isinstance(content, dict):
parsed_content = {}
for key, value in content.items():
- parsed_key = parse_data(key, variables_mapping, functions_mapping)
- parsed_value = parse_data(value, variables_mapping, functions_mapping)
+ parsed_key = parse_data(
+ key,
+ variables_mapping,
+ functions_mapping,
+ raise_if_variable_not_found
+ )
+ parsed_value = parse_data(
+ value,
+ variables_mapping,
+ functions_mapping,
+ raise_if_variable_not_found
+ )
parsed_content[parsed_key] = parsed_value
return parsed_content
if isinstance(content, basestring):
# content is in string format here
- variables_mapping = variables_mapping or {}
+ variables_mapping = utils.ensure_mapping_format(variables_mapping or {})
functions_mapping = functions_mapping or {}
content = content.strip()
- # replace functions with evaluated value
- # Notice: _eval_content_functions must be called before _eval_content_variables
- content = parse_string_functions(content, variables_mapping, functions_mapping)
-
- # replace variables with binding value
- content = parse_string_variables(content, variables_mapping)
+ try:
+ # replace functions with evaluated value
+ # Notice: parse_string_functions must be called before parse_string_variables
+ content = parse_string_functions(
+ content,
+ variables_mapping,
+ functions_mapping
+ )
+ # replace variables with binding value
+ content = parse_string_variables(
+ content,
+ variables_mapping,
+ functions_mapping
+ )
+ except exceptions.VariableNotFound:
+ if raise_if_variable_not_found:
+ raise
return content
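The container handling in `parse_data` is plain structural recursion: lists/sets/tuples and dicts are walked element by element, and only leaf strings go through the string parsers. A minimal sketch with the string parsing abstracted into a callback:

```python
def parse_container(content, parse_string):
    """Structural recursion mirroring parse_data: only leaf strings
    are handed to the parser; other scalars pass through unchanged."""
    if isinstance(content, (list, set, tuple)):
        return [parse_container(item, parse_string) for item in content]
    if isinstance(content, dict):
        return {
            parse_container(key, parse_string): parse_container(value, parse_string)
            for key, value in content.items()
        }
    if isinstance(content, str):
        return parse_string(content)
    return content
```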
-def parse_tests(testcases, variables_mapping=None):
- """ parse testcases configs, including variables/parameters/name/request.
+def _extend_with_api(test_dict, api_def_dict):
+ """ extend test with api definition, test will merge and override api definition.
Args:
- testcases (list): testcase list, with config unparsed.
- [
- { # testcase data structure
- "config": {
- "name": "desc1",
- "path": "testcase1_path",
- "variables": [], # optional
- "request": {} # optional
- "refs": {
- "debugtalk": {
- "variables": {},
- "functions": {}
- },
- "env": {},
- "def-api": {},
- "def-testcase": {}
- }
- },
- "teststeps": [
- # teststep data structure
- {
- 'name': 'test step desc2',
- 'variables': [], # optional
- 'extract': [], # optional
- 'validate': [],
- 'request': {},
- 'function_meta': {}
- },
- teststep2 # another teststep dict
- ]
- },
- testcase_dict_2 # another testcase dict
- ]
- variables_mapping (dict): if variables_mapping is specified, it will override variables in config block.
+ test_dict (dict): test block
+ api_def_dict (dict): api definition
Returns:
- list: parsed testcases list, with config variables/parameters/name/request parsed.
+ dict: extended test dict.
+
+ Examples:
+ >>> api_def_dict = {
+ "name": "get token 1",
+ "request": {...},
+ "validate": [{'eq': ['status_code', 200]}]
+ }
+ >>> test_dict = {
+ "name": "get token 2",
+ "extract": {"token": "content.token"},
+ "validate": [{'eq': ['status_code', 201]}, {'len_eq': ['content.token', 16]}]
+ }
+ >>> _extend_with_api(test_dict, api_def_dict)
+ {
+ "name": "get token 2",
+ "request": {...},
+ "extract": {"token": "content.token"},
+ "validate": [{'eq': ['status_code', 201]}, {'len_eq': ['content.token', 16]}]
+ }
"""
- variables_mapping = variables_mapping or {}
- parsed_testcases_list = []
+ # override name
+ api_def_name = api_def_dict.pop("name", "")
+ test_dict["name"] = test_dict.get("name") or api_def_name
- for testcase in testcases:
- testcase_config = testcase.setdefault("config", {})
- project_mapping = testcase_config.pop(
- "refs",
- {
- "debugtalk": {
- "variables": {},
- "functions": {}
- },
- "env": {},
- "def-api": {},
- "def-testcase": {}
- }
+ # override variables
+ def_variables = api_def_dict.pop("variables", [])
+ test_dict["variables"] = utils.extend_variables(
+ def_variables,
+ test_dict.get("variables", {})
+ )
+
+ # merge & override validators TODO: relocate
+ def_raw_validators = api_def_dict.pop("validate", [])
+ ref_raw_validators = test_dict.get("validate", [])
+ def_validators = [
+ parse_validator(validator)
+ for validator in def_raw_validators
+ ]
+ ref_validators = [
+ parse_validator(validator)
+ for validator in ref_raw_validators
+ ]
+ test_dict["validate"] = utils.extend_validators(
+ def_validators,
+ ref_validators
+ )
+
+ # merge & override extractors
+ def_extractors = api_def_dict.pop("extract", {})
+ test_dict["extract"] = utils.extend_variables(
+ def_extractors,
+ test_dict.get("extract", {})
+ )
+
+ # TODO: merge & override request
+ test_dict["request"] = api_def_dict.pop("request", {})
+
+ # base_url & verify: priority api_def_dict > test_dict
+ if api_def_dict.get("base_url"):
+ test_dict["base_url"] = api_def_dict["base_url"]
+
+ if "verify" in api_def_dict:
+ test_dict["request"]["verify"] = api_def_dict["verify"]
+
+ # merge & override setup_hooks
+ def_setup_hooks = api_def_dict.pop("setup_hooks", [])
+ ref_setup_hooks = test_dict.get("setup_hooks", [])
+ extended_setup_hooks = list(set(def_setup_hooks + ref_setup_hooks))
+ if extended_setup_hooks:
+ test_dict["setup_hooks"] = extended_setup_hooks
+ # merge & override teardown_hooks
+ def_teardown_hooks = api_def_dict.pop("teardown_hooks", [])
+ ref_teardown_hooks = test_dict.get("teardown_hooks", [])
+ extended_teardown_hooks = list(set(def_teardown_hooks + ref_teardown_hooks))
+ if extended_teardown_hooks:
+ test_dict["teardown_hooks"] = extended_teardown_hooks
+
+ # TODO: extend with other api definition items, e.g. times
+ test_dict.update(api_def_dict)
+
+ return test_dict
+
+
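`_extend_with_api` leans on `utils.extend_variables` for the variables and extract merges. Its assumed behaviour, sketched below, is that definition values form the base and reference values override them (the real helper may also normalize list-style variable blocks first):

```python
def extend_variables(def_variables, ref_variables):
    """Assumed merge semantics of utils.extend_variables:
    def_variables is the base, ref_variables overrides."""
    merged = dict(def_variables)
    merged.update(ref_variables)
    return merged
```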
+def _extend_with_testcase(test_dict, testcase_def_dict):
+ """ extend test with testcase definition
+ test will merge and override testcase config definition.
+
+ Args:
+ test_dict (dict): test block
+ testcase_def_dict (dict): testcase definition
+
+ Returns:
+ dict: extended test dict.
+
+ """
+ # override testcase config variables
+ testcase_def_dict["config"].setdefault("variables", {})
+ testcase_def_variables = utils.ensure_mapping_format(testcase_def_dict["config"].get("variables", {}))
+ testcase_def_variables.update(test_dict.pop("variables", {}))
+ testcase_def_dict["config"]["variables"] = testcase_def_variables
+
+ # override base_url, verify
+ # priority: testcase config > testsuite tests
+ test_base_url = test_dict.pop("base_url", "")
+ if not testcase_def_dict["config"].get("base_url"):
+ testcase_def_dict["config"]["base_url"] = test_base_url
+
+ test_verify = test_dict.pop("verify", True)
+ testcase_def_dict["config"].setdefault("verify", test_verify)
+
+ # override testcase config name, output, etc.
+ testcase_def_dict["config"].update(test_dict)
+
+ test_dict.clear()
+ test_dict.update(testcase_def_dict)
+
+
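Note that `_extend_with_testcase` returns nothing: the `clear()` + `update()` pair mutates `test_dict` in place, which matters because callers hold references to the same dict object and rebinding the name would not reach them. A sketch of the idiom (hypothetical helper name):

```python
def replace_contents_in_place(test_dict, testcase_def_dict):
    """Replace a dict's contents without rebinding the name, so that
    every existing reference observes the new contents."""
    test_dict.clear()
    test_dict.update(testcase_def_dict)
```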
+def __parse_config(config, project_mapping):
+ """ parse testcase/testsuite config, include variables and name.
+ """
+ # get config variables
+ raw_config_variables = config.pop("variables", {})
+ raw_config_variables_mapping = utils.ensure_mapping_format(raw_config_variables)
+ override_variables = utils.deepcopy_dict(project_mapping.get("variables", {}))
+ functions = project_mapping.get("functions", {})
+
+ # override config variables with passed in variables
+ raw_config_variables_mapping.update(override_variables)
+
+ # parse config variables
+ parsed_config_variables = {}
+ for key, value in raw_config_variables_mapping.items():
+ parsed_value = parse_data(
+ value,
+ raw_config_variables_mapping,
+ functions,
+ raise_if_variable_not_found=False
+ )
+ parsed_config_variables[key] = parsed_value
+
+ if parsed_config_variables:
+ config["variables"] = parsed_config_variables
+
+ # parse config name
+ config["name"] = parse_data(
+ config.get("name", ""),
+ parsed_config_variables,
+ functions
+ )
+
+ # parse config base_url
+ if "base_url" in config:
+ config["base_url"] = parse_data(
+ config["base_url"],
+ parsed_config_variables,
+ functions
)
- # parse config parameters
- config_parameters = testcase_config.pop("parameters", [])
- cartesian_product_parameters_list = parse_parameters(
- config_parameters,
- project_mapping["debugtalk"]["variables"],
- project_mapping["debugtalk"]["functions"]
- ) or [{}]
- for parameter_mapping in cartesian_product_parameters_list:
- testcase_dict = utils.deepcopy_dict(testcase)
- config = testcase_dict.get("config")
+def __parse_testcase_tests(tests, config, project_mapping):
+ """ override tests with testcase config variables, base_url and verify.
+ test may be a nested testcase.
- # parse config variables
- raw_config_variables = config.get("variables", [])
- parsed_config_variables = parse_data(
- raw_config_variables,
- project_mapping["debugtalk"]["variables"],
- project_mapping["debugtalk"]["functions"]
+ variables priority:
+ testcase config > testcase test > testcase_def config > testcase_def test > api
+
+ base_url/verify priority:
+ testcase test > testcase config > testsuite test > testsuite config > api
+
+ Args:
+ tests (list):
+ config (dict):
+ project_mapping (dict):
+
+ """
+ config_variables = config.pop("variables", {})
+ config_base_url = config.pop("base_url", "")
+ config_verify = config.pop("verify", True)
+ functions = project_mapping.get("functions", {})
+
+ for test_dict in tests:
+
+ # base_url & verify: priority test_dict > config
+ if (not test_dict.get("base_url")) and config_base_url:
+ test_dict["base_url"] = config_base_url
+
+ test_dict.setdefault("verify", config_verify)
+
+ # 1, testcase config => testcase tests
+ # override test_dict variables
+ test_dict["variables"] = utils.extend_variables(
+ test_dict.pop("variables", {}),
+ config_variables
+ )
+ test_dict["variables"] = parse_data(
+ test_dict["variables"],
+ test_dict["variables"],
+ functions,
+ raise_if_variable_not_found=False
+ )
+
+ # parse test_dict name
+ test_dict["name"] = parse_data(
+ test_dict.pop("name", ""),
+ test_dict["variables"],
+ functions,
+ raise_if_variable_not_found=False
+ )
+
+ if "testcase_def" in test_dict:
+ # test_dict is nested testcase
+
+ # 2, testcase test_dict => testcase_def config
+ testcase_def = test_dict.pop("testcase_def")
+ _extend_with_testcase(test_dict, testcase_def)
+
+ # 3, testcase_def config => testcase_def test_dict
+ _parse_testcase(test_dict, project_mapping)
+
+ else:
+ if "api_def" in test_dict:
+ # test_dict has API reference
+ # 2, test_dict => api
+ api_def_dict = test_dict.pop("api_def")
+ _extend_with_api(test_dict, api_def_dict)
+
+ if test_dict.get("base_url"):
+ # parse base_url
+ base_url = parse_data(
+ test_dict.pop("base_url"),
+ test_dict["variables"],
+ functions
+ )
+
+ # build path with base_url
+ # variable in current url maybe extracted from former api
+ request_url = parse_data(
+ test_dict["request"]["url"],
+ test_dict["variables"],
+ functions,
+ raise_if_variable_not_found=False
+ )
+ test_dict["request"]["url"] = utils.build_url(
+ base_url,
+ request_url
+ )
+
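`utils.build_url` is assumed here to pass absolute URLs through untouched and otherwise join the relative path onto `base_url`. A sketch under that assumption (error type simplified):

```python
def build_url(base_url, path):
    """Assumed behaviour of utils.build_url, sketched for review."""
    if path.startswith(("http://", "https://")):
        # already an absolute URL, keep as-is
        return path
    if not base_url:
        raise ValueError("base url missed!")
    return "/".join([base_url.rstrip("/"), path.lstrip("/")])
```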
+
+def _parse_testcase(testcase, project_mapping):
+ """ parse testcase
+
+ Args:
+ testcase (dict):
+ {
+ "config": {},
+ "teststeps": []
+ }
+
+ """
+ testcase.setdefault("config", {})
+ __parse_config(testcase["config"], project_mapping)
+ __parse_testcase_tests(testcase["teststeps"], testcase["config"], project_mapping)
+
+
+def __get_parsed_testsuite_testcases(testcases, testsuite_config, project_mapping):
+ """ override testscases with testsuite config variables, base_url and verify.
+
+ variables priority:
+ parameters > testsuite config > testcase config > testcase_def config > testcase_def tests > api
+
+ base_url priority:
+ testcase_def tests > testcase_def config > testcase config > testsuite config
+
+ Args:
+ testcases (dict):
+ {
+ "testcase1 name": {
+ "testcase": "testcases/create_and_check.yml",
+ "weight": 2,
+ "variables": {
+ "uid": 1000
+ },
+ "parameters": {
+ "uid": [100, 101, 102]
+ },
+ "testcase_def": {
+ "config": {},
+ "teststeps": []
+ }
+ },
+ "testcase2 name": {}
+ }
+ testsuite_config (dict):
+ {
+ "name": "testsuite name",
+ "variables": {
+ "device_sn": "${gen_random_string(15)}"
+ },
+ "base_url": "http://127.0.0.1:5000"
+ }
+ project_mapping (dict):
+ {
+ "env": {},
+ "functions": {}
+ }
+
+ """
+ testsuite_base_url = testsuite_config.get("base_url")
+ testsuite_config_variables = testsuite_config.get("variables", {})
+ functions = project_mapping.get("functions", {})
+ parsed_testcase_list = []
+
+ for testcase_name, testcase in testcases.items():
+
+ parsed_testcase = testcase.pop("testcase_def")
+ parsed_testcase.setdefault("config", {})
+ parsed_testcase["path"] = testcase["testcase"]
+ parsed_testcase["config"]["name"] = testcase_name
+
+ if "weight" in testcase:
+ parsed_testcase["config"]["weight"] = testcase["weight"]
+
+ # base_url priority: testcase config > testsuite config
+ parsed_testcase["config"].setdefault("base_url", testsuite_base_url)
+
+ # 1, testsuite config => testcase config
+ # override test_dict variables
+ testcase_config_variables = utils.extend_variables(
+ testcase.pop("variables", {}),
+ testsuite_config_variables
+ )
+
+ # 2, testcase config > testcase_def config
+ # override testcase_def config variables
+ parsed_testcase_config_variables = utils.extend_variables(
+ parsed_testcase["config"].pop("variables", {}),
+ testcase_config_variables
+ )
+
+ # parse config variables
+ parsed_config_variables = {}
+ for key, value in parsed_testcase_config_variables.items():
+ try:
+ parsed_value = parse_data(
+ value,
+ parsed_testcase_config_variables,
+ functions
+ )
+ except exceptions.VariableNotFound:
+ # keep the raw value; it may be resolved later at runtime
+ parsed_value = value
+ parsed_config_variables[key] = parsed_value
+
+ if parsed_config_variables:
+ parsed_testcase["config"]["variables"] = parsed_config_variables
+
+ # parse parameters
+ if "parameters" in testcase and testcase["parameters"]:
+ cartesian_product_parameters = parse_parameters(
+ testcase["parameters"],
+ parsed_config_variables,
+ functions
)
- # priority: passed in > debugtalk.py > parameters > variables
- # override variables mapping with parameters mapping
- config_variables = utils.override_mapping_list(
- parsed_config_variables, parameter_mapping)
- # merge debugtalk.py module variables
- config_variables.update(project_mapping["debugtalk"]["variables"])
- # override variables mapping with passed in variables_mapping
- config_variables = utils.override_mapping_list(
- config_variables, variables_mapping)
+ for parameter_variables in cartesian_product_parameters:
+ # deepcopy to avoid influence between parameters
+ parsed_testcase_copied = utils.deepcopy_dict(parsed_testcase)
+ parsed_config_variables_copied = utils.deepcopy_dict(parsed_config_variables)
+ parsed_testcase_copied["config"]["variables"] = utils.extend_variables(
+ parsed_config_variables_copied,
+ parameter_variables
+ )
+ _parse_testcase(parsed_testcase_copied, project_mapping)
+ parsed_testcase_list.append(parsed_testcase_copied)
- testcase_dict["config"]["variables"] = config_variables
+ else:
+ _parse_testcase(parsed_testcase, project_mapping)
+ parsed_testcase_list.append(parsed_testcase)
- # parse config name
- testcase_dict["config"]["name"] = parse_data(
- testcase_dict["config"].get("name", ""),
- config_variables,
- project_mapping["debugtalk"]["functions"]
- )
+ return parsed_testcase_list
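parse_parameters above also resolves ${...} expressions against config variables and functions; setting that aside, the cartesian expansion it produces can be sketched with a hypothetical standalone helper:

```python
import itertools

def cartesian_product(parameters):
    # {"uid": [100, 101], "os": ["ios", "android"]} expands to one
    # variables mapping per combination, 2 x 2 = 4 mappings here
    keys = list(parameters)
    return [dict(zip(keys, combo))
            for combo in itertools.product(*(parameters[k] for k in keys))]
```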
- # parse config request
- testcase_dict["config"]["request"] = parse_data(
- testcase_dict["config"].get("request", {}),
- config_variables,
- project_mapping["debugtalk"]["functions"]
- )
- # put loaded project functions to config
- testcase_dict["config"]["functions"] = project_mapping["debugtalk"]["functions"]
- parsed_testcases_list.append(testcase_dict)
+def _parse_testsuite(testsuite, project_mapping):
+ testsuite.setdefault("config", {})
+ __parse_config(testsuite["config"], project_mapping)
+ parsed_testcase_list = __get_parsed_testsuite_testcases(
+ testsuite["testcases"],
+ testsuite["config"],
+ project_mapping
+ )
+ return parsed_testcase_list
- return parsed_testcases_list
+
+def parse_tests(tests_mapping):
+ """ parse tests and load them as parsed testcases.
+ tests include apis, testcases and testsuites.
+
+ Args:
+ tests_mapping (dict): project info and testcases list.
+
+ {
+ "project_mapping": {
+ "PWD": "XXXXX",
+ "functions": {},
+ "variables": {}, # optional, priority 1
+ "env": {}
+ },
+ "testsuites": [
+ { # testsuite data structure
+ "config": {},
+ "testcases": {
+ "testcase1 name": {
+ "variables": {
+ "uid": 1000
+ },
+ "parameters": {
+ "uid": [100, 101, 102]
+ },
+ "testcase_def": {
+ "config": {},
+ "teststeps": []
+ }
+ },
+ "testcase2 name": {}
+ }
+ }
+ ],
+ "testcases": [
+ { # testcase data structure
+ "config": {
+ "name": "desc1",
+ "path": "testcase1_path",
+ "variables": {}, # optional, priority 2
+ },
+ "teststeps": [
+ # test data structure
+ {
+ 'name': 'test step desc1',
+ 'variables': [], # optional, priority 3
+ 'extract': [],
+ 'validate': [],
+ 'api_def': {
+ "variables": {}, # optional, priority 4
+ 'request': {},
+ }
+ },
+ test_dict_2 # another test dict
+ ]
+ },
+ testcase_dict_2 # another testcase dict
+ ],
+ "apis": [
+ {
+ "variables": {},
+ "request": {}
+ }
+ ]
+ }
+
+ """
+ project_mapping = tests_mapping.get("project_mapping", {})
+ parsed_tests_mapping = {
+ "project_mapping": project_mapping,
+ "testcases": []
+ }
+
+ for test_type in tests_mapping:
+
+ if test_type == "testsuites":
+ # load testcases of testsuite
+ testsuites = tests_mapping["testsuites"]
+ for testsuite in testsuites:
+ parsed_testcases = _parse_testsuite(testsuite, project_mapping)
+ for parsed_testcase in parsed_testcases:
+ parsed_tests_mapping["testcases"].append(parsed_testcase)
+
+ elif test_type == "testcases":
+ for testcase in tests_mapping["testcases"]:
+ _parse_testcase(testcase, project_mapping)
+ parsed_tests_mapping["testcases"].append(testcase)
+
+ elif test_type == "apis":
+ # encapsulate api as a testcase
+ for api_content in tests_mapping["apis"]:
+ testcase = {
+ "teststeps": [api_content]
+ }
+ _parse_testcase(testcase, project_mapping)
+ parsed_tests_mapping["testcases"].append(testcase)
+
+ return parsed_tests_mapping
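The "apis" branch above wraps each api as a one-step testcase before handing it to _parse_testcase. The wrapping itself reduces to a trivial transform (hypothetical helper name, extracted here for illustration):

```python
def wrap_apis_as_testcases(apis):
    # mirror of the "apis" branch: each api dict becomes a testcase
    # with a single teststep, which then goes through _parse_testcase
    return [{"teststeps": [api_content]} for api_content in apis]
```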
diff --git a/httprunner/report.py b/httprunner/report.py
index b3b3300d..19d1581a 100644
--- a/httprunner/report.py
+++ b/httprunner/report.py
@@ -9,6 +9,7 @@ from base64 import b64encode
from collections import Iterable
from datetime import datetime
+import requests
from httprunner import loader, logger
from httprunner.__about__ import __version__
from httprunner.compat import basestring, bytes, json, numeric_types
@@ -28,11 +29,25 @@ def get_platform():
def get_summary(result):
""" get summary from test result
+
+ Args:
+ result (instance): HtmlTestResult() instance
+
+ Returns:
+ dict: summary extracted from result.
+
+ {
+ "success": True,
+ "stat": {},
+ "time": {},
+ "records": []
+ }
+
"""
summary = {
"success": result.wasSuccessful(),
"stat": {
- 'testsRun': result.testsRun,
+ 'total': result.testsRun,
'failures': len(result.failures),
'errors': len(result.errors),
'skipped': len(result.skipped),
@@ -40,21 +55,18 @@ def get_summary(result):
'unexpectedSuccesses': len(result.unexpectedSuccesses)
}
}
- summary["stat"]["successes"] = summary["stat"]["testsRun"] \
+ summary["stat"]["successes"] = summary["stat"]["total"] \
- summary["stat"]["failures"] \
- summary["stat"]["errors"] \
- summary["stat"]["skipped"] \
- summary["stat"]["expectedFailures"] \
- summary["stat"]["unexpectedSuccesses"]
- if getattr(result, "records", None):
- summary["time"] = {
- 'start_at': result.start_at,
- 'duration': result.duration
- }
- summary["records"] = result.records
- else:
- summary["records"] = []
+ summary["time"] = {
+ 'start_at': result.start_at,
+ 'duration': result.duration
+ }
+ summary["records"] = result.records
return summary
@@ -77,49 +89,220 @@ def aggregate_stat(origin_stat, new_stat):
origin_stat[key] += new_stat[key]
-def render_html_report(summary, html_report_name=None, html_report_template=None):
- """ render html report with specified report name and template
- if html_report_name is not specified, use current datetime
- if html_report_template is not specified, use default report template
+def stringify_summary(summary):
+ """ stringify summary, so that it can be dumped to a json file and rendered into the html report.
"""
- if not html_report_template:
- html_report_template = os.path.join(
+ for index, suite_summary in enumerate(summary["details"]):
+
+ if not suite_summary.get("name"):
+ suite_summary["name"] = "testcase {}".format(index)
+
+ for record in suite_summary.get("records"):
+ meta_datas = record['meta_datas']
+ __stringify_meta_datas(meta_datas)
+ meta_datas_expanded = []
+ __expand_meta_datas(meta_datas, meta_datas_expanded)
+ record["meta_datas_expanded"] = meta_datas_expanded
+ record["response_time"] = __get_total_response_time(meta_datas_expanded)
+
+
+def __stringify_request(request_data):
+ """ stringify HTTP request data
+
+ Args:
+ request_data (dict): HTTP request data in dict.
+
+ {
+ "url": "http://127.0.0.1:5000/api/get-token",
+ "method": "POST",
+ "headers": {
+ "User-Agent": "python-requests/2.20.0",
+ "Accept-Encoding": "gzip, deflate",
+ "Accept": "*/*",
+ "Connection": "keep-alive",
+ "user_agent": "iOS/10.3",
+ "device_sn": "TESTCASE_CREATE_XXX",
+ "os_platform": "ios",
+ "app_version": "2.8.6",
+ "Content-Type": "application/json",
+ "Content-Length": "52"
+ },
+ "json": {
+ "sign": "cb9d60acd09080ea66c8e63a1c78c6459ea00168"
+ },
+ "verify": false
+ }
+
+ """
+ for key, value in request_data.items():
+
+ if isinstance(value, list):
+ value = json.dumps(value, indent=2, ensure_ascii=False)
+
+ elif isinstance(value, bytes):
+ try:
+ encoding = "utf-8"
+ value = escape(value.decode(encoding))
+ except UnicodeDecodeError:
+ pass
+
+ elif not isinstance(value, (basestring, numeric_types, Iterable)):
+ # class instance, e.g. MultipartEncoder()
+ value = repr(value)
+
+ elif isinstance(value, requests.cookies.RequestsCookieJar):
+ value = value.get_dict()
+
+ request_data[key] = value
+
+
+def __stringify_response(response_data):
+ """ stringify HTTP response data
+
+ Args:
+ response_data (dict):
+
+ {
+ "status_code": 404,
+ "headers": {
+ "Content-Type": "application/json",
+ "Content-Length": "30",
+ "Server": "Werkzeug/0.14.1 Python/3.7.0",
+ "Date": "Tue, 27 Nov 2018 06:19:27 GMT"
+ },
+ "encoding": "None",
+ "content_type": "application/json",
+ "ok": false,
+ "url": "http://127.0.0.1:5000/api/users/9001",
+ "reason": "NOT FOUND",
+ "cookies": {},
+ "json": {
+ "success": false,
+ "data": {}
+ }
+ }
+
+ """
+ for key, value in response_data.items():
+
+ if isinstance(value, list):
+ value = json.dumps(value, indent=2, ensure_ascii=False)
+
+ elif isinstance(value, bytes):
+ try:
+ encoding = response_data.get("encoding")
+ if not encoding or encoding == "None":
+ encoding = "utf-8"
+
+ if key == "content" and "image" in response_data["content_type"]:
+ # display image
+ value = "data:{};base64,{}".format(
+ response_data["content_type"],
+ b64encode(value).decode(encoding)
+ )
+ else:
+ value = escape(value.decode(encoding))
+ except UnicodeDecodeError:
+ pass
+
+ elif not isinstance(value, (basestring, numeric_types, Iterable)):
+ # class instance, e.g. MultipartEncoder()
+ value = repr(value)
+
+ elif isinstance(value, requests.cookies.RequestsCookieJar):
+ value = value.get_dict()
+
+ response_data[key] = value
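The image branch above embeds binary response bodies as data URIs so the html report can render them inline. The construction in isolation (a sketch; the real code pulls content_type from response_data):

```python
from base64 import b64encode

def to_data_uri(content, content_type):
    # binary response content becomes an <img src=...>-compatible data URI
    return "data:{};base64,{}".format(
        content_type,
        b64encode(content).decode("utf-8")
    )
```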
+
+
+def __expand_meta_datas(meta_datas, meta_datas_expanded):
+ """ expand meta_datas to one level
+
+ Args:
+ meta_datas (dict/list): may be in nested format
+ meta_datas_expanded (list): expanded items are appended to this list in place
+
+ Examples:
+ >>> meta_datas = [
+ [
+ dict1,
+ dict2
+ ],
+ dict3
+ ]
+ >>> meta_datas_expanded = []
+ >>> __expand_meta_datas(meta_datas, meta_datas_expanded)
+ >>> print(meta_datas_expanded)
+ [dict1, dict2, dict3]
+
+ """
+ if isinstance(meta_datas, dict):
+ meta_datas_expanded.append(meta_datas)
+ elif isinstance(meta_datas, list):
+ for meta_data in meta_datas:
+ __expand_meta_datas(meta_data, meta_datas_expanded)
+
+
+def __get_total_response_time(meta_datas_expanded):
+ """ calculate total response time of all meta_datas
+ """
+ try:
+ response_time = 0
+ for meta_data in meta_datas_expanded:
+ response_time += meta_data["stat"]["response_time_ms"]
+
+ return "{:.2f}".format(response_time)
+
+ except TypeError:
+ # failure exists
+ return "N/A"
+
+
+def __stringify_meta_datas(meta_datas):
+ """ stringify request and response data in meta_datas, recursively.
+ """
+ if isinstance(meta_datas, list):
+ for _meta_data in meta_datas:
+ __stringify_meta_datas(_meta_data)
+ elif isinstance(meta_datas, dict):
+ data_list = meta_datas["data"]
+ for data in data_list:
+ __stringify_request(data["request"])
+ __stringify_response(data["response"])
+
+
+def render_html_report(summary, report_template=None, report_dir=None):
+ """ render html report with specified report name and template
+
+ Args:
+ report_template (str): specify html report template path
+ report_dir (str): specify html report save directory
+
+ """
+ if not report_template:
+ report_template = os.path.join(
os.path.abspath(os.path.dirname(__file__)),
"templates",
"report_template.html"
)
logger.log_debug("No html report template specified, use default.")
else:
- logger.log_info("render with html report template: {}".format(html_report_template))
+ logger.log_info("render with html report template: {}".format(report_template))
logger.log_info("Start to render Html report ...")
- logger.log_debug("render data: {}".format(summary))
- report_dir_path = os.path.join(os.getcwd(), "reports")
+ report_dir = report_dir or os.path.join(os.getcwd(), "reports")
+ if not os.path.isdir(report_dir):
+ os.makedirs(report_dir)
+
start_at_timestamp = int(summary["time"]["start_at"])
summary["time"]["start_datetime"] = datetime.fromtimestamp(start_at_timestamp).strftime('%Y-%m-%d %H:%M:%S')
- if html_report_name:
- summary["html_report_name"] = html_report_name
- report_dir_path = os.path.join(report_dir_path, html_report_name)
- html_report_name += "-{}.html".format(start_at_timestamp)
- else:
- summary["html_report_name"] = ""
- html_report_name = "{}.html".format(start_at_timestamp)
- if not os.path.isdir(report_dir_path):
- os.makedirs(report_dir_path)
+ report_path = os.path.join(report_dir, "{}.html".format(start_at_timestamp))
- for index, suite_summary in enumerate(summary["details"]):
- if not suite_summary.get("name"):
- suite_summary["name"] = "test suite {}".format(index)
- for record in suite_summary.get("records"):
- meta_data = record['meta_data']
- stringify_data(meta_data, 'request')
- stringify_data(meta_data, 'response')
-
- with io.open(html_report_template, "r", encoding='utf-8') as fp_r:
+ with io.open(report_template, "r", encoding='utf-8') as fp_r:
template_content = fp_r.read()
- report_path = os.path.join(report_dir_path, html_report_name)
with io.open(report_path, 'w', encoding='utf-8') as fp_w:
rendered_content = Template(
template_content,
@@ -132,50 +315,9 @@ def render_html_report(summary, html_report_name=None, html_report_template=None
return report_path
-def stringify_data(meta_data, request_or_response):
- """
- meta_data = {
- "request": {},
- "response": {}
- }
- """
- headers = meta_data[request_or_response]["headers"]
- request_or_response_dict = meta_data[request_or_response]
-
- for key, value in request_or_response_dict.items():
-
- if isinstance(value, list):
- value = json.dumps(value, indent=2, ensure_ascii=False)
-
- elif isinstance(value, bytes):
- try:
- encoding = meta_data["response"].get("encoding")
- if not encoding or encoding == "None":
- encoding = "utf-8"
-
- if request_or_response == "response" and key == "content" \
- and "image" in meta_data["response"]["content_type"]:
- # display image
- value = "data:{};base64,{}".format(
- meta_data["response"]["content_type"],
- b64encode(value).decode(encoding)
- )
- else:
- value = escape(value.decode(encoding))
- except UnicodeDecodeError:
- pass
-
- elif not isinstance(value, (basestring, numeric_types, Iterable)):
- # class instance, e.g. MultipartEncoder()
- value = repr(value)
-
- meta_data[request_or_response][key] = value
-
-
class HtmlTestResult(unittest.TextTestResult):
- """A html result class that can generate formatted html results.
-
- Used by TextTestRunner.
+ """ A html result class that can generate formatted html results.
+ Used by TextTestRunner.
"""
def __init__(self, stream, descriptions, verbosity):
super(HtmlTestResult, self).__init__(stream, descriptions, verbosity)
@@ -186,11 +328,8 @@ class HtmlTestResult(unittest.TextTestResult):
'name': test.shortDescription(),
'status': status,
'attachment': attachment,
- "meta_data": {}
+ "meta_datas": test.meta_datas
}
- if hasattr(test, "meta_data"):
- data["meta_data"] = test.meta_data
-
self.records.append(data)
def startTestRun(self):
diff --git a/httprunner/response.py b/httprunner/response.py
index 821181ba..aef4e9a1 100644
--- a/httprunner/response.py
+++ b/httprunner/response.py
@@ -15,7 +15,10 @@ class ResponseObject(object):
def __init__(self, resp_obj):
""" initialize with a requests.Response object
- @param (requests.Response instance) resp_obj
+
+ Args:
+ resp_obj (instance): requests.Response instance
+
"""
self.resp_obj = resp_obj
@@ -23,6 +26,8 @@ class ResponseObject(object):
try:
if key == "json":
value = self.resp_obj.json()
+ elif key == "cookies":
+ value = self.resp_obj.cookies.get_dict()
else:
value = getattr(self.resp_obj, key)
@@ -36,11 +41,22 @@ class ResponseObject(object):
def _extract_field_with_regex(self, field):
""" extract field from response content with regex.
requests.Response body could be json or html text.
- @param (str) field should only be regex string that matched r".*\(.*\).*"
- e.g.
- self.text: "LB123abcRB789"
- field: "LB[\d]*(.*)RB[\d]*"
- return: abc
+
+ Args:
+ field (str): regex string that matches r".*\(.*\).*"
+
+ Returns:
+ str: matched content.
+
+ Raises:
+ exceptions.ExtractFailure: If no content matched with regex.
+
+ Examples:
+ >>> # self.text: "LB123abcRB789"
+ >>> field = "LB[\d]*(.*)RB[\d]*"
+ >>> _extract_field_with_regex(field)
+ abc
+
"""
matched = re.search(field, self.text)
if not matched:
@@ -53,14 +69,17 @@ class ResponseObject(object):
def _extract_field_with_delimiter(self, field):
""" response content could be json or html text.
- @param (str) field should be string joined by delimiter.
- e.g.
- "status_code"
- "headers"
- "cookies"
- "content"
- "headers.content-type"
- "content.person.name.first_name"
+
+ Args:
+ field (str): string joined by delimiter.
+ e.g.
+ "status_code"
+ "headers"
+ "cookies"
+ "content"
+ "headers.content-type"
+ "content.person.name.first_name"
+
"""
# string.split(sep=None, maxsplit=-1) -> list of strings
# e.g. "content.person.name" => ["content", "person.name"]
@@ -82,7 +101,7 @@ class ResponseObject(object):
# cookies
elif top_query == "cookies":
- cookies = self.cookies.get_dict()
+ cookies = self.cookies
if not sub_query:
# extract cookies
return cookies
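The delimiter lookup described in the docstring splits the field on the first "." into a top-level query and an optional sub-query. Over a plain dict it reduces to the following sketch (not the full ResponseObject logic, which also handles json bodies and regex fallbacks):

```python
def extract_with_delimiter(resp, field):
    # "headers.content-type" resolves to resp["headers"]["content-type"];
    # a field without "." returns the top-level value directly
    top_query, _, sub_query = field.partition(".")
    value = resp[top_query]
    if not sub_query:
        return value
    for part in sub_query.split("."):
        value = value[part]
    return value
```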
@@ -207,21 +226,27 @@ class ResponseObject(object):
def extract_response(self, extractors):
""" extract value from requests.Response and store in OrderedDict.
- @param (list) extractors
- [
- {"resp_status_code": "status_code"},
- {"resp_headers_content_type": "headers.content-type"},
- {"resp_content": "content"},
- {"resp_content_person_first_name": "content.person.name.first_name"}
- ]
- @return (OrderDict) variable binds ordered dict
+
+ Args:
+ extractors (list):
+
+ [
+ {"resp_status_code": "status_code"},
+ {"resp_headers_content_type": "headers.content-type"},
+ {"resp_content": "content"},
+ {"resp_content_person_first_name": "content.person.name.first_name"}
+ ]
+
+ Returns:
+ OrderedDict: variable binds ordered dict
+
"""
if not extractors:
return {}
- logger.log_info("start to extract from response object.")
+ logger.log_debug("start to extract from response object.")
extracted_variables_mapping = OrderedDict()
- extract_binds_order_dict = utils.convert_mappinglist_to_orderdict(extractors)
+ extract_binds_order_dict = utils.ensure_mapping_format(extractors)
for key, field in extract_binds_order_dict.items():
extracted_variables_mapping[key] = self.extract_field(field)
diff --git a/httprunner/runner.py b/httprunner/runner.py
index 89fb657f..3b5ca787 100644
--- a/httprunner/runner.py
+++ b/httprunner/runner.py
@@ -4,138 +4,163 @@ from unittest.case import SkipTest
from httprunner import exceptions, logger, response, utils
from httprunner.client import HttpSession
-from httprunner.compat import OrderedDict
-from httprunner.context import Context
+from httprunner.context import SessionContext
class Runner(object):
+ """ Running testcases.
- def __init__(self, config_dict=None, http_client_session=None):
- """
- """
- self.http_client_session = http_client_session
- config_dict = config_dict or {}
- self.evaluated_validators = []
+ Examples:
+ >>> functions={...}
+ >>> config = {
+ "name": "XXXX",
+ "base_url": "http://127.0.0.1",
+ "verify": False
+ }
+ >>> runner = Runner(config, functions)
+
+ >>> test_dict = {
+ "name": "test description",
+ "variables": [], # optional
+ "request": {
+ "url": "http://127.0.0.1:5000/api/users/1000",
+ "method": "GET"
+ }
+ }
+ >>> runner.run_test(test_dict)
+
+ """
+
+ def __init__(self, config, functions, http_client_session=None):
+ """ run testcase or testsuite.
+
+ Args:
+ config (dict): testcase/testsuite config dict
+
+ {
+ "name": "ABC",
+ "variables": {},
+ "setup_hooks": [],
+ "teardown_hooks": []
+ }
+
+ http_client_session (instance): requests.Session() or locust.client.Session() instance.
+
+ """
+ base_url = config.get("base_url")
+ self.verify = config.get("verify", True)
+ self.output = config.get("output", [])
+ self.functions = functions
+ self.validation_results = []
- # testcase variables
- config_variables = config_dict.get("variables", {})
- # testcase functions
- config_functions = config_dict.get("functions", {})
# testcase setup hooks
- testcase_setup_hooks = config_dict.pop("setup_hooks", [])
+ testcase_setup_hooks = config.get("setup_hooks", [])
# testcase teardown hooks
- self.testcase_teardown_hooks = config_dict.pop("teardown_hooks", [])
+ self.testcase_teardown_hooks = config.get("teardown_hooks", [])
- self.context = Context(config_variables, config_functions)
- self.init_test(config_dict, "testcase")
+ self.http_client_session = http_client_session or HttpSession(base_url)
+ self.session_context = SessionContext(self.functions)
if testcase_setup_hooks:
- self.do_hook_actions(testcase_setup_hooks)
+ self.do_hook_actions(testcase_setup_hooks, "setup")
def __del__(self):
if self.testcase_teardown_hooks:
- self.do_hook_actions(self.testcase_teardown_hooks)
-
- def init_test(self, test_dict, level):
- """ create/update context variables binds
-
- Args:
- test_dict (dict):
- level (enum): "testcase" or "teststep"
- testcase:
- {
- "name": "testcase description",
- "variables": [], # optional
- "request": {
- "base_url": "http://127.0.0.1:5000",
- "headers": {
- "User-Agent": "iOS/2.8.3"
- }
- }
- }
- teststep:
- {
- "name": "teststep description",
- "variables": [], # optional
- "request": {
- "url": "/api/get-token",
- "method": "POST",
- "headers": {
- "Content-Type": "application/json"
- }
- },
- "json": {
- "sign": "f1219719911caae89ccc301679857ebfda115ca2"
- }
- }
-
- Returns:
- dict: parsed request dict
+ self.do_hook_actions(self.testcase_teardown_hooks, "teardown")
+ def __clear_test_data(self):
+ """ clear request and response data
"""
- test_dict = utils.lower_test_dict_keys(test_dict)
+ if not isinstance(self.http_client_session, HttpSession):
+ return
- self.context.init_context_variables(level)
- variables = test_dict.get('variables') \
- or test_dict.get('variable_binds', OrderedDict())
- self.context.update_context_variables(variables, level)
+ self.validation_results = []
+ self.http_client_session.init_meta_data()
- request_config = test_dict.get('request', {})
- parsed_request = self.context.get_parsed_request(request_config, level)
+ def __get_test_data(self):
+ """ get request/response data and validate results
+ """
+ if not isinstance(self.http_client_session, HttpSession):
+ return
- base_url = parsed_request.pop("base_url", None)
- self.http_client_session = self.http_client_session or HttpSession(base_url)
+ meta_data = self.http_client_session.meta_data
+ meta_data["validators"] = self.validation_results
+ return meta_data
- return parsed_request
-
- def _handle_skip_feature(self, teststep_dict):
- """ handle skip feature for teststep
+ def _handle_skip_feature(self, test_dict):
+ """ handle skip feature for test
- skip: skip current test unconditionally
- skipIf: skip current test if condition is true
- skipUnless: skip current test unless condition is true
Args:
- teststep_dict (dict): teststep info
+ test_dict (dict): test info
Raises:
- SkipTest: skip teststep
+ SkipTest: skip test
"""
# TODO: move skip to initialize
skip_reason = None
- if "skip" in teststep_dict:
- skip_reason = teststep_dict["skip"]
+ if "skip" in test_dict:
+ skip_reason = test_dict["skip"]
- elif "skipIf" in teststep_dict:
- skip_if_condition = teststep_dict["skipIf"]
- if self.context.eval_content(skip_if_condition):
+ elif "skipIf" in test_dict:
+ skip_if_condition = test_dict["skipIf"]
+ if self.session_context.eval_content(skip_if_condition):
skip_reason = "{} evaluate to True".format(skip_if_condition)
- elif "skipUnless" in teststep_dict:
- skip_unless_condition = teststep_dict["skipUnless"]
- if not self.context.eval_content(skip_unless_condition):
+ elif "skipUnless" in test_dict:
+ skip_unless_condition = test_dict["skipUnless"]
+ if not self.session_context.eval_content(skip_unless_condition):
skip_reason = "{} evaluate to False".format(skip_unless_condition)
if skip_reason:
raise SkipTest(skip_reason)
- def do_hook_actions(self, actions):
- for action in actions:
- logger.log_debug("call hook: {}".format(action))
- # TODO: check hook function if valid
- self.context.eval_content(action)
+ def do_hook_actions(self, actions, hook_type):
+ """ call hook actions.
- def run_test(self, teststep_dict):
+ Args:
+ actions (list): each action in the actions list may be in one of two formats.
+
+ format1 (dict): assignment, the value returned by hook function will be assigned to variable.
+ {"var": "${func()}"}
+ format2 (str): only call hook functions.
+ ${func()}
+
+ hook_type (enum): setup/teardown
+
+ """
+ logger.log_debug("call {} hook actions.".format(hook_type))
+ for action in actions:
+
+ if isinstance(action, dict) and len(action) == 1:
+ # format 1
+ # {"var": "${func()}"}
+ var_name, hook_content = list(action.items())[0]
+ logger.log_debug("assignment with hook: {} = {}".format(var_name, hook_content))
+ self.session_context.update_test_variables(
+ var_name,
+ self.session_context.eval_content(hook_content)
+ )
+ else:
+ # format 2
+ logger.log_debug("call hook function: {}".format(action))
+ # TODO: check hook function if valid
+ self.session_context.eval_content(action)
+
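The two hook formats described above can be told apart before evaluation. A sketch of the dispatch alone (hypothetical helper; the actual evaluation via eval_content is omitted):

```python
def classify_hook_action(action):
    # format 1: {"var": "${func()}"} -> assign the evaluated result to var
    # format 2: "${func()}"          -> just call the hook function
    if isinstance(action, dict) and len(action) == 1:
        var_name, hook_content = list(action.items())[0]
        return ("assign", var_name, hook_content)
    return ("call", action)
```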
+ def _run_test(self, test_dict):
""" run single teststep.
Args:
- teststep_dict (dict): teststep info
+ test_dict (dict): teststep info
{
"name": "teststep description",
"skip": "skip this test unconditionally",
"times": 3,
- "variables": [], # optional, override
+ "variables": [], # optional, override
"request": {
"url": "http://127.0.0.1:5000/api/users/1000",
"method": "POST",
@@ -144,9 +169,9 @@ class Runner(object):
"authorization": "$authorization",
"random": "$random"
},
- "body": '{"name": "user", "password": "123456"}'
+ "json": {"name": "user", "password": "123456"}
},
- "extract": [], # optional
+ "extract": {}, # optional
"validate": [], # optional
"setup_hooks": [], # optional
"teardown_hooks": [] # optional
@@ -158,24 +183,35 @@ class Runner(object):
exceptions.ExtractFailure
"""
+ # clear meta data first to ensure independence for each test
+ self.__clear_test_data()
+
# check skip
- self._handle_skip_feature(teststep_dict)
+ self._handle_skip_feature(test_dict)
# prepare
- extractors = teststep_dict.get("extract", []) or teststep_dict.get("extractors", [])
- validators = teststep_dict.get("validate", []) or teststep_dict.get("validators", [])
- parsed_request = self.init_test(teststep_dict, level="teststep")
- self.context.update_teststep_variables_mapping("request", parsed_request)
+ test_dict = utils.lower_test_dict_keys(test_dict)
+ test_variables = test_dict.get("variables", {})
+ self.session_context.init_test_variables(test_variables)
+
+ # teststep name
+ test_name = test_dict.get("name", "")
+
+ # parse test request
+ raw_request = test_dict.get('request', {})
+ parsed_test_request = self.session_context.eval_content(raw_request)
+ self.session_context.update_test_variables("request", parsed_test_request)
# setup hooks
- setup_hooks = teststep_dict.get("setup_hooks", [])
- setup_hooks.insert(0, "${setup_hook_prepare_kwargs($request)}")
- self.do_hook_actions(setup_hooks)
+ setup_hooks = test_dict.get("setup_hooks", [])
+ if setup_hooks:
+ self.do_hook_actions(setup_hooks, "setup")
try:
- url = parsed_request.pop('url')
- method = parsed_request.pop('method')
- group_name = parsed_request.pop("group", None)
+ url = parsed_test_request.pop('url')
+ method = parsed_test_request.pop('method')
+ parsed_test_request.setdefault("verify", self.verify)
+ group_name = parsed_test_request.pop("group", None)
except KeyError:
raise exceptions.ParamsError("URL or METHOD missed!")
@@ -188,52 +224,144 @@ class Runner(object):
raise exceptions.ParamsError(err_msg)
logger.log_info("{method} {url}".format(method=method, url=url))
- logger.log_debug("request kwargs(raw): {kwargs}".format(kwargs=parsed_request))
+ logger.log_debug("request kwargs(raw): {kwargs}".format(kwargs=parsed_test_request))
# request
resp = self.http_client_session.request(
method,
url,
- name=group_name,
- **parsed_request
+ name=(group_name or test_name),
+ **parsed_test_request
)
resp_obj = response.ResponseObject(resp)
# teardown hooks
- teardown_hooks = teststep_dict.get("teardown_hooks", [])
+ teardown_hooks = test_dict.get("teardown_hooks", [])
if teardown_hooks:
- logger.log_info("start to run teardown hooks")
- self.context.update_teststep_variables_mapping("response", resp_obj)
- self.do_hook_actions(teardown_hooks)
+ self.session_context.update_test_variables("response", resp_obj)
+ self.do_hook_actions(teardown_hooks, "teardown")
# extract
+ extractors = test_dict.get("extract", {})
extracted_variables_mapping = resp_obj.extract_response(extractors)
- self.context.update_testcase_runtime_variables_mapping(extracted_variables_mapping)
+ self.session_context.update_session_variables(extracted_variables_mapping)
# validate
+ validators = test_dict.get("validate", [])
try:
- self.evaluated_validators = self.context.validate(validators, resp_obj)
+ self.session_context.validate(validators, resp_obj)
+
except (exceptions.ParamsError, exceptions.ValidationFailure, exceptions.ExtractFailure):
+ err_msg = "{} DETAILED REQUEST & RESPONSE {}\n".format("*" * 32, "*" * 32)
+
# log request
- err_req_msg = "request: \n"
- err_req_msg += "headers: {}\n".format(parsed_request.pop("headers", {}))
- for k, v in parsed_request.items():
- err_req_msg += "{}: {}\n".format(k, repr(v))
- logger.log_error(err_req_msg)
+ err_msg += "====== request details ======\n"
+ err_msg += "url: {}\n".format(url)
+ err_msg += "method: {}\n".format(method)
+ err_msg += "headers: {}\n".format(parsed_test_request.pop("headers", {}))
+ for k, v in parsed_test_request.items():
+ v = utils.omit_long_data(v)
+ err_msg += "{}: {}\n".format(k, repr(v))
+
+ err_msg += "\n"
# log response
- err_resp_msg = "response: \n"
- err_resp_msg += "status_code: {}\n".format(resp_obj.status_code)
- err_resp_msg += "headers: {}\n".format(resp_obj.headers)
- err_resp_msg += "body: {}\n".format(repr(resp_obj.text))
- logger.log_error(err_resp_msg)
+ err_msg += "====== response details ======\n"
+ err_msg += "status_code: {}\n".format(resp_obj.status_code)
+ err_msg += "headers: {}\n".format(resp_obj.headers)
+ err_msg += "body: {}\n".format(repr(resp_obj.text))
+ logger.log_error(err_msg)
raise
+ finally:
+ self.validation_results = self.session_context.validation_results
+
+ def _run_testcase(self, testcase_dict):
+ """ run single testcase.
+ """
+ self.meta_datas = []
+ config = testcase_dict.get("config", {})
+ base_url = config.get("base_url")
+
+ # each testcase should have an individual session.
+ http_client_session = self.http_client_session.__class__(base_url)
+ test_runner = Runner(config, self.functions, http_client_session)
+
+ tests = testcase_dict.get("teststeps", [])
+
+ for index, test_dict in enumerate(tests):
+ try:
+ test_runner.run_test(test_dict)
+ except Exception:
+ # log exception request_type and name for locust stat
+ self.exception_request_type = test_runner.exception_request_type
+ self.exception_name = test_runner.exception_name
+ raise
+ finally:
+ _meta_datas = test_runner.meta_datas
+ self.meta_datas.append(_meta_datas)
+
+ self.session_context.update_session_variables(test_runner.extract_sessions())
+
+ def run_test(self, test_dict):
+ """ run single teststep of a testcase.
+ test_dict may take one of 3 forms.
+
+ Args:
+ test_dict (dict):
+
+ # teststep
+ {
+ "name": "teststep description",
+ "variables": [], # optional
+ "request": {
+ "url": "http://127.0.0.1:5000/api/users/1000",
+ "method": "GET"
+ }
+ }
+
+ # nested testcase
+ {
+ "config": {...},
+ "teststeps": [
+ {...},
+ {...}
+ ]
+ }
+
+ # TODO: function
+ {
+ "name": "exec function",
+ "function": "${func()}"
+ }
+
+ """
+ self.meta_datas = None
+ if "teststeps" in test_dict:
+ # nested testcase
+ self._run_testcase(test_dict)
+ else:
+ # single teststep
+ try:
+ self._run_test(test_dict)
+ except Exception:
+ # log exception request_type and name for locust stat
+ self.exception_request_type = test_dict["request"]["method"]
+ self.exception_name = test_dict.get("name")
+ raise
+ finally:
+ self.meta_datas = self.__get_test_data()
+
+ def extract_sessions(self):
+ """ extract session variables specified in the output list.
+ """
+ return self.extract_output(self.output)
+
def extract_output(self, output_variables_list):
""" extract output variables
"""
- variables_mapping = self.context.teststep_variables_mapping
+ variables_mapping = self.session_context.session_variables_mapping
output = {}
for variable in output_variables_list:
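
The three `test_dict` shapes documented in the new `run_test` docstring reduce to a simple key check. A standalone sketch of that dispatch, for reviewers (the `dispatch` helper and its return labels are illustrative, not part of the patch):

```python
def dispatch(test_dict):
    """Mirror the branching added in Runner.run_test: a dict with a
    "teststeps" key is a nested testcase, one with a "request" key is a
    single teststep, and the remaining form is the TODO function case."""
    if "teststeps" in test_dict:
        return "nested testcase"   # delegated to _run_testcase
    if "request" in test_dict:
        return "teststep"          # delegated to _run_test
    return "function"              # the TODO branch in the docstring

print(dispatch({"config": {}, "teststeps": [{}, {}]}))
print(dispatch({"name": "get user",
                "request": {"url": "/api/users/1000", "method": "GET"}}))
```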
diff --git a/httprunner/templates/locustfile_template b/httprunner/templates/locustfile_template
index 84381083..c7582549 100644
--- a/httprunner/templates/locustfile_template
+++ b/httprunner/templates/locustfile_template
@@ -3,7 +3,7 @@ import random
import zmq
from httprunner.exceptions import MyBaseError, MyBaseFailure
-from httprunner.loader import load_locust_tests
+from httprunner.api import prepare_locust_tests
from httprunner.runner import Runner
from locust import HttpLocust, TaskSet, task
from locust.events import request_failure
@@ -15,22 +15,20 @@ logging.getLogger('locust.runners').setLevel(logging.INFO)
class WebPageTasks(TaskSet):
def on_start(self):
- self.test_runner = Runner(self.locust.config, self.client)
- self.testcases = load_locust_tests(self.locust.file_path)
+ self.test_runner = Runner(self.locust.config, self.locust.functions, self.client)
- @task(weight=1)
+ @task
def test_any(self):
- teststeps = random.choice(self.locust.tests)
- for teststep in teststeps:
- try:
- self.test_runner.run_test(teststep)
- except (MyBaseError, MyBaseFailure) as ex:
- request_failure.fire(
- request_type=teststep.get("request", {}).get("method"),
- name=teststep.get("name"),
- response_time=0,
- exception=ex
- )
+ test_dict = random.choice(self.locust.tests)
+ try:
+ self.test_runner.run_test(test_dict)
+ except (AssertionError, MyBaseError, MyBaseFailure) as ex:
+ request_failure.fire(
+ request_type=self.test_runner.exception_request_type,
+ name=self.test_runner.exception_name,
+ response_time=0,
+ exception=ex
+ )
class WebPageUser(HttpLocust):
@@ -39,8 +37,9 @@ class WebPageUser(HttpLocust):
max_wait = 30
file_path = "$TESTCASE_FILE"
- locust_tests = load_locust_tests(file_path)
- config = locust_tests["config"]
+ locust_tests = prepare_locust_tests(file_path)
+ functions = locust_tests["functions"]
tests = locust_tests["tests"]
+ config = {}
- host = config.get('request', {}).get('base_url', '')
+ host = config.get('base_url', '')
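
The rewritten template assumes `prepare_locust_tests` returns a mapping with `functions` and `tests` keys, and that each task picks one prepared test at random. A minimal sketch of that consumption pattern (the dict contents here are illustrative, inferred from the template variables, not from `prepare_locust_tests` itself):

```python
import random

# Assumed shape of the prepare_locust_tests() result, based on the
# "functions" and "tests" keys read in the template above.
locust_tests = {
    "functions": {},
    "tests": [
        {"name": "get user", "request": {"url": "/api/users/1000", "method": "GET"}},
        {"name": "list users", "request": {"url": "/api/users", "method": "GET"}},
    ],
}

# Each locust task picks one prepared test at random, as @task test_any does,
# then hands it to Runner.run_test.
test_dict = random.choice(locust_tests["tests"])
print(test_dict["request"]["method"])
```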
diff --git a/httprunner/templates/report_template.html b/httprunner/templates/report_template.html
index 7cca19cb..36b11bfd 100644
--- a/httprunner/templates/report_template.html
+++ b/httprunner/templates/report_template.html
@@ -47,7 +47,8 @@
background-color: lightgrey;
font-size: smaller;
padding: 5px 10px;
- text-align: center;
+ line-height: 20px;
+ text-align: left;
}
.details .success {
background-color: greenyellow;
@@ -75,6 +76,7 @@
a.button{
color: gray;
text-decoration: none;
+ display: inline-block;
}
.button:hover {
background: #2cffbd;
@@ -90,6 +92,7 @@
transition: opacity 500ms;
visibility: hidden;
opacity: 0;
+ line-height: 25px;
}
.overlay:target {
visibility: visible;
@@ -129,6 +132,9 @@
overflow: auto;
text-align: left;
}
+ .popup .separator {
+ color:royalblue
+ }
@media screen and (max-width: 700px) {
.box {
@@ -147,7 +153,6 @@
Summary
-
<th>START AT</th>
<td>{{time.start_datetime}}</td>
@@ -163,22 +168,14 @@
<td>{{ platform.platform }}</td>
- <th>TOTAL</th>
- <th>SUCCESS</th>
- <th>FAILED</th>
- <th>ERROR</th>
- <th>SKIPPED</th>
-
+ <th>STAT</th>
+ <th>TESTCASES (success/fail)</th>
+ <th>TESTSTEPS (success/fail/error/skip)</th>
- <td>{{stat.testsRun}}</td>
- <td>{{stat.successes}}</td>
- <td>{{stat.failures}}</td>
- <td>{{stat.errors}}</td>
- <td>{{stat.skipped}}</td>
-
+ <td>total (details) =></td>
+ <td>{{stat.testcases.total}} ({{stat.testcases.success}}/{{stat.testcases.fail}})</td>
+ <td>{{stat.teststeps.total}} ({{stat.teststeps.successes}}/{{stat.teststeps.failures}}/{{stat.teststeps.errors}}/{{stat.teststeps.skipped}})</td>
@@ -189,36 +186,7 @@
{{test_suite_summary.name}}
- | base_url |
- {{test_suite_summary.base_url}} |
-
- parameters & output
-
-
-
-
- |
-
- <td>TOTAL: {{test_suite_summary.stat.testsRun}}</td>
+ <td>TOTAL: {{test_suite_summary.stat.total}}</td>
<td>SUCCESS: {{test_suite_summary.stat.successes}}</td>
<td>FAILED: {{test_suite_summary.stat.failures}}</td>
<td>ERROR: {{test_suite_summary.stat.errors}}</td>
@@ -233,28 +201,39 @@
{% for record in test_suite_summary.records %}
{% set record_index = "{}_{}".format(suite_index, loop.index) %}
+ {% set record_meta_datas = record.meta_datas_expanded %}
- | {{record.status}}
+ | {{record.status}} |
{{record.name}} |
- {{ record.meta_data.response.response_time_ms }} ms |
+ {{ record.response_time }} ms |
- log
- |
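
The report change above replaces the flat `stat.*` counters with nested `stat.testcases.*` and `stat.teststeps.*` groups. A sketch of the summary structure the new template variables imply, and of how one table cell is rendered (the numbers are made up; only the key names come from the template):

```python
# Hypothetical summary data matching the new template variables.
stat = {
    "testcases": {"total": 2, "success": 1, "fail": 1},
    "teststeps": {"total": 10, "successes": 8, "failures": 1,
                  "errors": 1, "skipped": 0},
}

# Render the "TESTCASES (success/fail)" cell the same way the template does.
testcase_cell = "{} ({}/{})".format(
    stat["testcases"]["total"],
    stat["testcases"]["success"],
    stat["testcases"]["fail"],
)
print(testcase_cell)
```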