Basic patterns and examples
How to change command line options defaults
It can be tedious to type the same series of command line options every time you use pytest. For example, if you always want to see detailed info on skipped and xfailed tests, as well as have terser "dot" progress output, you can write it into a configuration file:
# content of pytest.ini
[pytest]
addopts = -ra -q
Alternatively, you can set a PYTEST_ADDOPTS environment variable to add command line options while the environment is in use:
export PYTEST_ADDOPTS="-v"
Here is how the command line is built up in the presence of addopts or the environment variable:
<pytest.ini:addopts> $PYTEST_ADDOPTS <extra command-line arguments>
So if the user executes in the command line:
pytest -m slow
the actual command line executed is:
pytest -ra -q -v -m slow
Note that, as usual for other command-line applications, in case of conflicting options the last one wins, so the example above will show verbose output because -v overrides -q.
Pass different values to a test function, depending on command line options
Suppose we want to write a test that depends on a command line option. Here is a basic pattern to achieve this:
# content of test_sample.py


def test_answer(cmdopt):
    if cmdopt == "type1":
        print("first")
    elif cmdopt == "type2":
        print("second")
    assert 0  # to see what was printed
For this to work we need to add a command line option and provide the cmdopt through a fixture function:
# content of conftest.py
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--cmdopt", action="store", default="type1", help="my option: type1 or type2"
    )


@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
Let's run this without supplying our new option:
$ pytest -q test_sample.py
F                                                                    [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
>       assert 0  # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s
And now with supplying a command line option:
$ pytest -q --cmdopt=type2
F                                                                    [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type2'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
>       assert 0  # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s
You can see that the command line option arrived in our test. This completes the basic pattern. However, one often rather wants to process command line options outside of the test and pass in different or more complex objects.
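For instance, the conftest.py could turn the raw option string into a richer object once and hand that to the tests. The following is only a minimal sketch of that idea; the RunConfig class, the --env option and the URLs are hypothetical:
# content of conftest.py (hypothetical sketch)
import pytest


class RunConfig:
    def __init__(self, env):
        self.env = env
        # derive more complex settings from the plain option value
        self.base_url = "https://staging.example.com" if env == "staging" else "http://localhost"


def pytest_addoption(parser):
    parser.addoption("--env", action="store", default="local", help="target environment")


@pytest.fixture
def run_config(request):
    # tests receive a RunConfig object instead of the raw option string
    return RunConfig(request.config.getoption("--env"))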
Dynamically adding command line options
Through addopts you can statically add command line options for your project. You can also dynamically modify the command line arguments before they get processed:
# setuptools plugin
import sys


def pytest_load_initial_conftests(args):
    if "xdist" in sys.modules:  # pytest-xdist plugin
        import multiprocessing

        # use integer division so "-n" receives a whole number of workers
        num = max(multiprocessing.cpu_count() // 2, 1)
        args[:] = ["-n", str(num)] + args
If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to your CPU count. Running in an empty directory with the above conftest.py:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 0 items

========================== no tests ran in 0.12s ===========================
Control skipping of tests according to command line option
Here is a conftest.py file adding a --runslow command line option to control skipping of pytest.mark.slow marked tests:
# content of conftest.py

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--runslow", action="store_true", default=False, help="run slow tests"
    )


def pytest_configure(config):
    config.addinivalue_line("markers", "slow: mark test as slow to run")


def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given in cli: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)
We can now write a test module like this:
# content of test_module.py
import pytest


def test_func_fast():
    pass


@pytest.mark.slow
def test_func_slow():
    pass
and when running it will see a skipped "slow" test:
$ pytest -rs    # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items

test_module.py .s                                                    [100%]

========================= short test summary info ==========================
SKIPPED [1] test_module.py:8: need --runslow option to run
======================= 1 passed, 1 skipped in 0.12s =======================
Or run it including the slow marked test:
$ pytest --runslow
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items

test_module.py ..                                                    [100%]

============================ 2 passed in 0.12s =============================
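If you only want the slow subset, the new option can be combined with the usual marker expression. This is just a usage sketch based on the files above (output omitted); with the conftest.py shown, --runslow disables the skip and -m slow deselects the fast test:
$ pytest --runslow -m slow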
Writing well integrated assertion helpers
If you have a test helper function called from a test you can use the pytest.fail marker to fail a test with a certain message. The test support function will not show up in the traceback if you set the __tracebackhide__ option somewhere in the helper function. Example:
# content of test_checkconfig.py
import pytest


def checkconfig(x):
    __tracebackhide__ = True
    if not hasattr(x, "config"):
        pytest.fail("not configured: {}".format(x))


def test_something():
    checkconfig(42)
The __tracebackhide__ setting influences pytest's traceback display: the checkconfig function will not be shown unless the --full-trace command line option is specified. Let's run our little function:
$ pytest -q test_checkconfig.py
F                                                                    [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________

    def test_something():
>       checkconfig(42)
E       Failed: not configured: 42

test_checkconfig.py:11: Failed
========================= short test summary info ==========================
FAILED test_checkconfig.py::test_something - Failed: not configured: 42
1 failed in 0.12s
If you only want to hide certain exceptions, you can set __tracebackhide__ to a callable which gets the ExceptionInfo object. You could for example use this to make sure unexpected exception types aren't hidden:
import operator

import pytest


class ConfigException(Exception):
    pass


def checkconfig(x):
    __tracebackhide__ = operator.methodcaller("errisinstance", ConfigException)
    if not hasattr(x, "config"):
        raise ConfigException("not configured: {}".format(x))


def test_something():
    checkconfig(42)
This will avoid hiding the exception traceback on unrelated exceptions (i.e. bugs in the assertion helper itself).
Detect if running from within a pytest run
Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must find out if your application code is running from a test you can do something like this:
# content of your_module.py


_called_from_test = False
# content of conftest.py

import your_module


def pytest_configure(config):
    your_module._called_from_test = True
and then check for the your_module._called_from_test flag:
if your_module._called_from_test:
    # called from within a test run
    ...
else:
    # called "normally"
    ...
in your own application code.
Adding info to test report header
It's easy to present extra information in a pytest run:
# content of conftest.py


def pytest_report_header(config):
    return "project deps: mylib-1.1"
which will add the string to the test header accordingly:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR
collected 0 items

========================== no tests ran in 0.12s ===========================
It is also possible to return a list of strings which will be considered as several lines of information. You may consider config.getoption('verbose') in order to display more information if applicable:
# content of conftest.py


def pytest_report_header(config):
    if config.getoption("verbose") > 0:
        return ["info1: did you know that ...", "did you?"]
which will add info only when run with "-v":
$ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
info1: did you know that ...
did you?
rootdir: $REGENDOC_TMPDIR
collecting ... collected 0 items

========================== no tests ran in 0.12s ===========================
and nothing when run plainly:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 0 items

========================== no tests ran in 0.12s ===========================
Profiling test durations
If you have a slow running large test suite you might want to find out which tests are the slowest. Let's make an artificial test suite:
# content of test_some_are_slow.py
import time


def test_funcfast():
    time.sleep(0.1)


def test_funcslow1():
    time.sleep(0.2)


def test_funcslow2():
    time.sleep(0.3)
Now we can profile which test functions execute the slowest:
$ pytest --durations=3
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items

test_some_are_slow.py ...                                            [100%]

=========================== slowest 3 durations ============================
0.30s call     test_some_are_slow.py::test_funcslow2
0.20s call     test_some_are_slow.py::test_funcslow1
0.10s call     test_some_are_slow.py::test_funcfast
============================ 3 passed in 0.12s =============================
Incremental testing - test steps
Sometimes you may have a testing situation which consists of a series of test steps. If one step fails it makes no sense to execute further steps, as they are all expected to fail anyway and their tracebacks add no insight. Here is a simple conftest.py file which introduces an incremental marker which is to be used on classes:
# content of conftest.py

from typing import Dict, Tuple

import pytest

# store history of failures per test class name and per index in parametrize (if parametrize used)
_test_failed_incremental: Dict[str, Dict[Tuple[int, ...], str]] = {}


def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        # incremental marker is used
        if call.excinfo is not None:
            # the test has failed
            # retrieve the class name of the test
            cls_name = str(item.cls)
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the test function
            test_name = item.originalname or item.name
            # store in _test_failed_incremental the original name of the failed test
            _test_failed_incremental.setdefault(cls_name, {}).setdefault(
                parametrize_index, test_name
            )


def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        # retrieve the class name of the test
        cls_name = str(item.cls)
        # check if a previous test has failed for this class
        if cls_name in _test_failed_incremental:
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the first test function to fail for this class name and index
            test_name = _test_failed_incremental[cls_name].get(parametrize_index, None)
            # if name found, test has failed for the combination of class name & test name
            if test_name is not None:
                pytest.xfail("previous test failed ({})".format(test_name))
These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module example:
# content of test_step.py

import pytest


@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        pass

    def test_modification(self):
        assert 0

    def test_deletion(self):
        pass


def test_normal():
    pass
If we run this:
$ pytest -rx
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items

test_step.py .Fx.                                                    [100%]

================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:11: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion
  reason: previous test failed (test_modification)
================== 1 failed, 2 passed, 1 xfailed in 0.12s ==================
We'll see that test_deletion was not executed because test_modification failed. It is reported as an "expected failure".
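Because the conftest.py above keys recorded failures by the parametrize indices as well, the pattern also works for parametrized incremental classes: a failing step only aborts the later steps of the same parameter set. The following module is a hypothetical sketch of that (the TestUserFlow class and the user parameter are made up for illustration):
# content of test_step_parametrized.py (hypothetical sketch)
import pytest


@pytest.mark.incremental
@pytest.mark.parametrize("user", ["alice", "bob"])
class TestUserFlow:
    def test_create(self, user):
        assert user != "bob"  # fails only for the "bob" parameter set

    def test_update(self, user):
        # xfailed for "bob" (its test_create failed), still executed for "alice"
        pass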
Package/directory-level fixtures (setups)
If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a conftest.py file in that directory. You can use all types of fixtures including autouse fixtures, which are the equivalent of xUnit's setup/teardown concept. It's however recommended to have explicit fixture references in your tests or test classes rather than relying on implicitly executing setup/teardown functions, especially if they are far away from the actual tests.
Here is an example for making a db fixture available in a directory:
# content of a/conftest.py
import pytest


class DB:
    pass


@pytest.fixture(scope="session")
def db():
    return DB()
and then a test module in that directory:
# content of a/test_db.py
def test_a1(db):
    assert 0, db  # to show value
another test module:
# content of a/test_db2.py
def test_a2(db):
    assert 0, db  # to show value
and then a module in a sister directory which will not see the db fixture:
# content of b/test_error.py
def test_root(db):  # no db here, will error out
    pass
We can run this:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 7 items

test_step.py .Fx.                                                    [ 57%]
a/test_db.py F                                                       [ 71%]
a/test_db2.py F                                                      [ 85%]
b/test_error.py E                                                    [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1
  def test_root(db):  # no db here, will error out
E       fixture 'db' not found
>       available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory
>       use 'pytest --fixtures [testpath]' for help on them.

$REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

    def test_modification(self):
>       assert 0
E       assert 0

test_step.py:11: AssertionError
_________________________________ test_a1 __________________________________

db = <conftest.DB object at 0xdeadbeef>

    def test_a1(db):
>       assert 0, db  # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef>
E       assert 0

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

db = <conftest.DB object at 0xdeadbeef>

    def test_a2(db):
>       assert 0, db  # to show value
E       AssertionError: <conftest.DB object at 0xdeadbeef>
E       assert 0

a/test_db2.py:2: AssertionError
========================= short test summary info ==========================
FAILED test_step.py::TestUserHandling::test_modification - assert 0
FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
ERROR b/test_error.py::test_root
============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============
The two test modules in the a directory see the same db fixture instance, while the one test in the sister directory b doesn't see it. We could of course also define a db fixture in that sister directory's conftest.py file. Note that each fixture is only instantiated if there is a test actually needing it (unless you use autouse fixtures, which are always executed ahead of the first test executing).
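For example, the sister directory could ship its own conftest.py. This is only a minimal sketch of that alternative, reusing the DB class from above purely for illustration:
# content of b/conftest.py (minimal sketch of the alternative mentioned above)
import pytest


class DB:
    pass


@pytest.fixture(scope="session")
def db():
    # a separate instance, visible only to tests under b/
    return DB()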
Post-process test reports / failures
If you want to postprocess test reports and need access to the executing environment, you can implement a hook that gets called when the test "report" object is about to be created. Here we write out all failing test calls and also access a fixture (if it was used by the test) in case you want to query/look at it during post-processing. In our case we just write some information out to a failures file:
# content of conftest.py

import os.path

import pytest


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    # we only look at actual failing test calls, not setup/teardown
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode) as f:
            # let's also access a fixture for the fun of it
            if "tmpdir" in item.fixturenames:
                extra = " ({})".format(item.funcargs["tmpdir"])
            else:
                extra = ""

            f.write(rep.nodeid + extra + "\n")
if you then have failing tests:
# content of test_module.py
def test_fail1(tmpdir):
    assert 0


def test_fail2():
    assert 0
and run them:
$ pytest test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items

test_module.py FF                                                    [100%]

================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmpdir = local('PYTEST_TMPDIR/test_fail10')

    def test_fail1(tmpdir):
>       assert 0
E       assert 0

test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
>       assert 0
E       assert 0

test_module.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_fail1 - assert 0
FAILED test_module.py::test_fail2 - assert 0
============================ 2 failed in 0.12s =============================
you will have a failures file which contains the failing test ids:
$ cat failures
test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)
test_module.py::test_fail2
Making test result information available in fixtures
If you want to make test result reports available in fixture finalizers, here is a little example implemented via a local plugin:
# content of conftest.py

import pytest


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # execute all other hooks to obtain the report object
    outcome = yield
    rep = outcome.get_result()

    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)


@pytest.fixture
def something(request):
    yield
    # request.node is an "item" because we use the default
    # "function" scope
    if request.node.rep_setup.failed:
        print("setting up a test failed!", request.node.nodeid)
    elif request.node.rep_setup.passed:
        if request.node.rep_call.failed:
            print("executing test failed", request.node.nodeid)
if you then have failing tests:
# content of test_module.py

import pytest


@pytest.fixture
def other():
    assert 0


def test_setup_fails(something, other):
    pass


def test_call_fails(something):
    assert 0


def test_fail2():
    assert 0
and run it:
$ pytest -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items

test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F

================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________

    @pytest.fixture
    def other():
>       assert 0
E       assert 0

test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________

something = None

    def test_call_fails(something):
>       assert 0
E       assert 0

test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________

    def test_fail2():
>       assert 0
E       assert 0

test_module.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_call_fails - assert 0
FAILED test_module.py::test_fail2 - assert 0
ERROR test_module.py::test_setup_fails - assert 0
======================== 2 failed, 1 error in 0.12s ========================
You'll see that the fixture finalizers could use the precise reporting information.
PYTEST_CURRENT_TEST environment variable
Sometimes a test session might get stuck and there might be no easy way to figure out which test got stuck, for example if pytest was run in quiet mode (-q) or you don't have access to the console output. This is a particular problem if the issue happens only sporadically, the famous "flaky" kind of tests.
pytest sets the PYTEST_CURRENT_TEST environment variable when running tests, which can be inspected by process monitoring utilities or libraries like psutil to discover which test got stuck if necessary:
import psutil

for pid in psutil.pids():
    environ = psutil.Process(pid).environ()
    if "PYTEST_CURRENT_TEST" in environ:
        print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')
During the test session pytest will set PYTEST_CURRENT_TEST to the current test nodeid and the current stage, which can be setup, call or teardown.
For example, when running a single test function named test_foo from foo_module.py, PYTEST_CURRENT_TEST will be set to:
foo_module.py::test_foo (setup)
foo_module.py::test_foo (call)
foo_module.py::test_foo (teardown)
in that order.
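Code running in the test process itself (or in a child process it spawns, which inherits the environment) can also read the variable directly from os.environ. This is only a small diagnostic sketch; the value is merely printed:
import os

# e.g. "foo_module.py::test_foo (call)" while a test is running
print(os.environ.get("PYTEST_CURRENT_TEST", "<not running under pytest>"))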
Note
The contents of PYTEST_CURRENT_TEST are meant to be human readable and the actual format can be changed between releases (even bug fixes), so it shouldn't be relied on for scripting or automation.
Freezing pytest
If you freeze your application using a tool like PyInstaller in order to distribute it to your end-users, it is a good idea to also package your test runner and run your tests using the frozen application. This way packaging errors such as dependencies not being included into the executable can be detected early, while also allowing you to send test files to users so they can run them on their machines, which can be useful to obtain more information about a hard to reproduce bug.
Fortunately recent PyInstaller releases already have a custom hook for pytest, but if you are using another tool to freeze executables, such as cx_freeze or py2exe, you can use pytest.freeze_includes() to obtain the full list of internal pytest modules. How to configure the tools to find the internal modules varies from tool to tool, however.
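With cx_Freeze, for instance, that list can be fed into the build options. The setup script below is only a minimal, assumed sketch (the app_main.py name matches the example further down; adapt names and options to your project):
# content of setup.py (minimal cx_Freeze sketch; names are assumptions)
import pytest
from cx_Freeze import Executable, setup

setup(
    name="app_main",
    executables=[Executable("app_main.py")],
    options={
        # make pytest's internal modules available inside the frozen executable
        "build_exe": {"includes": pytest.freeze_includes()}
    },
)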
Instead of freezing the pytest runner as a separate executable, you can make your frozen program work as the pytest runner through some clever argument handling during program startup. This allows you to have a single executable, which is usually more convenient. Please note that the mechanism for plugin discovery used by pytest (setuptools entry points) doesn't work with frozen executables, so pytest can't find any third party plugins automatically. To include third party plugins like pytest-timeout they must be imported explicitly and passed on to pytest.main.
# contents of app_main.py
import sys

import pytest_timeout  # Third party plugin

if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
    import pytest

    sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
else:
    # normal application execution: at this point argv can be parsed
    # by your argument-parsing library of choice as usual
    ...
This allows you to execute tests using the frozen application with standard pytest command line options:
./app_main --pytest --verbose --tb=long --junitxml=results.xml test-suite/