pytest Documentation
Release 6.2
Contents
4.6 Assertion introspection details
10 Warnings Capture
10.1 @pytest.mark.filterwarnings
10.2 Disabling warnings summary
10.3 Disabling warning capture entirely
10.4 DeprecationWarning and PendingDeprecationWarning
10.5 Ensuring code triggers a deprecation warning
10.6 Asserting warnings with the warns function
10.7 Recording warnings
10.8 Custom failure messages
10.9 Internal pytest warnings
12 Skip and xfail: dealing with tests that cannot succeed
12.1 Skipping test functions
12.2 XFail: mark test functions as expected to fail
12.3 Skip/xfail with parametrize
19 Writing plugins
19.1 Plugin discovery order at tool startup
19.2 conftest.py: local per-directory plugins
19.3 Writing your own plugin
19.4 Making your plugin installable by others
19.5 Assertion Rewriting
19.6 Requiring/Loading plugins in a test module or conftest file
19.7 Accessing another plugin by name
19.8 Registering custom markers
19.9 Testing plugins
21 Logging
21.1 caplog fixture
21.2 Live Logs
21.3 Release notes
21.4 Incompatible changes in pytest 3.4
25.2 prepend and append import modes scenarios
25.3 Invoking pytest versus python -m pytest
26 Configuration
26.1 Command line options and configuration file settings
26.2 Configuration file formats
26.3 Initialization: determining rootdir and configfile
26.4 Builtin configuration file options
30 History
30.1 Focus primary on smooth transition - stance (pre 6.0)
35 Sponsor
35.1 OpenCollective
37 License
39 Historical Notes
39.1 Marker revamp and iteration
39.2 cache plugin integrated into the core
39.3 funcargs and pytest_funcarg__
39.4 @pytest.yield_fixture decorator
39.5 [pytest] header in setup.cfg
39.6 Applying marks to @pytest.mark.parametrize parameters
39.7 @pytest.mark.parametrize argument names as a tuple
39.8 setup: is now an "autouse fixture"
39.9 Conditions as strings instead of booleans
39.10 pytest.set_trace()
39.11 "compat" properties
Index
CHAPTER ONE
INSTALLATION AND GETTING STARTED
Install pytest in your environment (for example with pip install -U pytest), then check that you installed a working version:
$ pytest --version
pytest 6.2.5
Create a simple test file with a small function and a test for it:

# content of test_sample.py
def func(x):
    return x + 1

def test_answer():
    assert func(3) == 5
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
test_sample.py F [100%]
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 4 == 5
============================ 1 failed in 0.12s =============================
The [100%] refers to the overall progress of running all test cases. After it finishes, pytest then shows a failure report
because func(3) does not return 5.
Note: You can use the assert statement to verify test expectations. pytest’s Advanced assertion introspection will
intelligently report intermediate values of the assert expression so you can avoid the many names of JUnit legacy
methods.
pytest will run all files of the form test_*.py or *_test.py in the current directory and its subdirectories. More
generally, it follows standard test discovery rules.
Use the raises helper to assert that some code raises an exception:
# content of test_sysexit.py
import pytest

def f():
    raise SystemExit(1)

def test_mytest():
    with pytest.raises(SystemExit):
        f()
$ pytest -q test_sysexit.py
. [100%]
1 passed in 0.12s
Note: The -q/--quiet flag keeps the output brief in this and following examples.
Once you develop multiple tests, you may want to group them into a class. pytest makes it easy to create a class
containing more than one test:
# content of test_class.py
class TestClass:
    def test_one(self):
        x = "this"
        assert "h" in x

    def test_two(self):
        x = "hello"
        assert hasattr(x, "check")
pytest discovers all tests following its Conventions for Python test discovery, so it finds both test_ prefixed
functions. There is no need to subclass anything, but make sure to prefix your class with Test otherwise the class
will be skipped. We can simply run the module by passing its filename:
$ pytest -q test_class.py
.F [100%]
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
def test_two(self):
x = "hello"
> assert hasattr(x, "check")
E AssertionError: assert False
E + where False = hasattr('hello', 'check')
test_class.py:8: AssertionError
========================= short test summary info ==========================
FAILED test_class.py::TestClass::test_two - AssertionError: assert False
1 failed, 1 passed in 0.12s
The first test passed and the second failed. You can easily see the intermediate values in the assertion to help you
understand the reason for the failure.
Grouping tests in classes can be beneficial for the following reasons:
• Test organization
• Sharing fixtures for tests only in that particular class
• Applying marks at the class level and having them implicitly apply to all tests (a short sketch follows this list)
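For instance, a mark applied at the class level is inherited by every test in that class. A minimal sketch (the slow marker name is illustrative and would normally be registered in your configuration):

import pytest

@pytest.mark.slow  # implicitly applies to test_connect and test_query
class TestDatabase:
    def test_connect(self):
        assert True

    def test_query(self):
        assert True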
Something to be aware of when grouping tests inside classes is that each test has a unique instance of the class.
Having each test share the same class instance would be very detrimental to test isolation and would promote poor test
practices. This is outlined below:
# content of test_class_demo.py
class TestClassDemoInstance:
    def test_one(self):
        assert 0

    def test_two(self):
        assert 0
$ pytest -k TestClassDemoInstance -q
FF [100%]
================================= FAILURES =================================
______________________ TestClassDemoInstance.test_one ______________________
def test_one(self):
> assert 0
E assert 0
test_class_demo.py:3: AssertionError
______________________ TestClassDemoInstance.test_two ______________________
def test_two(self):
> assert 0
E assert 0
test_class_demo.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_class_demo.py::TestClassDemoInstance::test_one - assert 0
FAILED test_class_demo.py::TestClassDemoInstance::test_two - assert 0
2 failed in 0.12s
pytest provides Builtin fixtures/function arguments to request arbitrary resources, like a unique temporary directory:
# content of test_tmpdir.py
def test_needsfiles(tmpdir):
    print(tmpdir)
    assert 0

List the name tmpdir in the test function signature and pytest will look up and call a fixture factory to create the resource before performing the test function call. Before the test runs, pytest creates a unique-per-test-invocation temporary directory:
$ pytest -q test_tmpdir.py
F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')
def test_needsfiles(tmpdir):
print(tmpdir)
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0
========================= short test summary info ==========================
FAILED test_tmpdir.py::test_needsfiles - assert 0
1 failed in 0.12s
You can list the available builtin and custom fixtures with pytest --fixtures. Note that this command omits fixtures with a leading _ unless the -v option is added.
Check out additional pytest resources to help you customize tests for your unique workflow:
• “Calling pytest through python -m pytest” for command line invocation examples
• “Using pytest with an existing test suite” for working with pre-existing tests
• “Marking test functions with attributes” for information on the pytest.mark mechanism
• “pytest fixtures: explicit, modular, scalable” for providing a functional baseline to your tests
• “Writing plugins” for managing and writing plugins
• “Good Integration Practices” for virtualenv and test layouts
CHAPTER TWO
USAGE AND INVOCATIONS
You can invoke testing through the Python interpreter from the command line:

python -m pytest [...]

This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.
Note: If you would like to customize the exit code in some scenarios, especially when no tests are collected, consider using the pytest-custom_exit_code plugin.
Pytest supports several ways to run and select tests from the command-line.
Run tests in a module

pytest test_mod.py

Run tests in a directory

pytest testing/

Run tests by keyword expressions

pytest -k "MyClass and not method"

This will run tests which contain names that match the given string expression (case-insensitive), which can include Python operators that use filenames, class names and function names as variables. The example above will run TestMyClass.test_something but not TestMyClass.test_method_simple.
Run tests by node ids
Each collected test is assigned a unique nodeid which consists of the module filename followed by specifiers like class names, function names and parameters from parametrization, separated by :: characters.
To run a specific test within a module:
pytest test_mod.py::test_func
pytest test_mod.py::TestClass::test_method
Run tests by marker expressions

pytest -m slow

This will run all tests which are decorated with the @pytest.mark.slow decorator.
For more information see marks.
Run tests from packages
pytest --pyargs pkg.testing
This will import pkg.testing and use its filesystem location to find and run tests from.
Examples for modifying traceback printing:

pytest --tb=auto    # (default) 'long' tracebacks for the first and last
                    # entry, but 'short' style for the other entries
pytest --tb=long    # exhaustive, informative traceback formatting
pytest --tb=short   # shorter traceback format
pytest --tb=line    # only one line per failure
pytest --tb=native  # Python standard library formatting
pytest --tb=no      # no traceback at all
The --full-trace option causes very long traces to be printed on error (longer than --tb=long). It also ensures that
a stack trace is printed on KeyboardInterrupt (Ctrl+C). This is very useful if the tests are taking too long and you
interrupt them with Ctrl+C to find out where the tests are hanging. By default no output will be shown (because
KeyboardInterrupt is caught by pytest). By using this option you make sure a trace is shown.
The -r flag can be used to display a “short test summary info” at the end of the test session, making it easy in large
test suites to get a clear picture of all failures, skips, xfails, etc.
It defaults to fE to list failures and errors.
Example:
# content of test_example.py
import pytest

@pytest.fixture
def error_fixture():
    assert 0

def test_ok():
    print("ok")

def test_fail():
    assert 0

def test_error(error_fixture):
    pass

def test_skip():
    pytest.skip("skipping this test")

def test_xfail():
    pytest.xfail("xfailing this test")

@pytest.mark.xfail(reason="always xfail")
def test_xpass():
    pass
$ pytest -ra
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 6 items
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
def test_fail():
> assert 0
E assert 0
test_example.py:14: AssertionError
========================= short test summary info ==========================
SKIPPED [1] test_example.py:22: skipping this test
XFAIL test_example.py::test_xfail
reason: xfailing this test
XPASS test_example.py::test_xpass always xfail
ERROR test_example.py::test_error - assert 0
FAILED test_example.py::test_fail - assert 0
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
The -r option accepts a number of characters after it, with a used above meaning "all except passes".
Here is the full list of available characters that can be used:
• f - failed
• E - error
• s - skipped
• x - xfailed
• X - xpassed
• p - passed
• P - passed with output
Special characters for (de)selection of groups:
• a - all except pP
• A - all
• N - none, this can be used to display nothing (since fE is the default)
More than one character can be used, so for example to only see failed and skipped tests, you can execute:
$ pytest -rfs
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 6 items
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
def test_fail():
> assert 0
E assert 0
test_example.py:14: AssertionError
========================= short test summary info ==========================
FAILED test_example.py::test_fail - assert 0
SKIPPED [1] test_example.py:22: skipping this test
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
Using p lists the passing tests, whilst P adds an extra section “PASSES” with those tests that passed but had captured
output:
$ pytest -rpP
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
@pytest.fixture
def error_fixture():
> assert 0
E assert 0
test_example.py:6: AssertionError
================================= FAILURES =================================
________________________________ test_fail _________________________________
def test_fail():
> assert 0
E assert 0
test_example.py:14: AssertionError
================================== PASSES ==================================
_________________________________ test_ok __________________________________
--------------------------- Captured stdout call ---------------------------
ok
========================= short test summary info ==========================
PASSED test_example.py::test_ok
== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===
Python comes with a builtin Python debugger called PDB. pytest allows one to drop into the PDB prompt via a
command line option:
pytest --pdb
This will invoke the Python debugger on every failure (or KeyboardInterrupt). Often you might only want to do this
for the first failing test to understand a certain failure situation:
pytest -x --pdb # drop to PDB on first failure, then end test session
pytest --pdb --maxfail=3 # drop to PDB for first three failures
Note that on any failure the exception information is stored on sys.last_value, sys.last_type and sys.last_traceback. In interactive use, this allows one to drop into postmortem debugging with any debug tool. One can also manually access the exception information, for example:
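A minimal sketch of such manual access (it assumes a failure has already occurred in the same Python process, for example after a pytest.main() run):

import sys

# pytest stores the most recent failure's exception info on the sys module;
# the attributes only exist after a failure has happened in this process.
exc_type = getattr(sys, "last_type", None)
exc_value = getattr(sys, "last_value", None)
if exc_type is not None:
    print("last failure: {}: {}".format(exc_type.__name__, exc_value))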
pytest allows one to drop into the PDB prompt immediately at the start of each test via a command line option:
pytest --trace
This will invoke the Python debugger at the start of every test.
To set a breakpoint in your code, use the native Python import pdb;pdb.set_trace() call and pytest automatically disables its output capture for that test:
• Output capture in other tests is not affected.
• Any prior test output that has already been captured will be processed as such.
• Output capture gets resumed when ending the debugger session (via the continue command).
Python 3.7 introduces a builtin breakpoint() function. Pytest supports the use of breakpoint() with the
following behaviours:
• When breakpoint() is called and PYTHONBREAKPOINT is set to the default value, pytest will use the
custom internal PDB trace UI instead of the system default Pdb.
• When tests are complete, the system will default back to the system Pdb trace UI.
• With --pdb passed to pytest, the custom internal Pdb trace UI is used with both breakpoint() and failed
tests/unhandled exceptions.
• --pdbcls can be used to specify a custom debugger class.
To get a list of the slowest test durations, use the --durations option (for example pytest --durations=10). By default, pytest will not show test durations that are too small (<0.005s) unless -vv is passed on the command-line.
The faulthandler standard module can be used to dump Python tracebacks on a segfault or after a timeout; it is automatically enabled for pytest runs.
Note: This functionality has been integrated from the external pytest-faulthandler plugin, with two small differences:
• To disable it, use -p no:faulthandler instead of --no-faulthandler: the former can be used with
any plugin, so it saves one option.
• The --faulthandler-timeout command-line option has become the faulthandler_timeout con-
figuration option. It can still be configured from the command-line using -o faulthandler_timeout=X.
Unhandled exceptions are exceptions that are raised in a situation in which they cannot propagate to a caller. The most
common case is an exception raised in a __del__ implementation.
Unhandled thread exceptions are exceptions raised in a Thread but not handled, causing the thread to terminate
uncleanly.
Both types of exceptions are normally considered bugs, but may go unnoticed because they don’t cause the program
itself to crash. Pytest detects these conditions and issues a warning that is visible in the test run summary.
The plugins are automatically enabled for pytest runs, unless the -p no:unraisableexception (for unraisable
exceptions) and -p no:threadexception (for thread exceptions) options are given on the command-line.
The warnings may be silenced selectively using the pytest.mark.filterwarnings mark. The warning categories are pytest.PytestUnraisableExceptionWarning and pytest.PytestUnhandledThreadExceptionWarning.
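For example, a single test can opt out of the unraisable-exception warning with the filterwarnings mark (the test body below is purely illustrative):

import pytest

@pytest.mark.filterwarnings("ignore::pytest.PytestUnraisableExceptionWarning")
def test_create_objects_with_noisy_del():
    # Code that creates objects whose __del__ raises would normally trigger
    # PytestUnraisableExceptionWarning; the mark silences it for this test only.
    ...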
To create result files which can be read by Jenkins or other Continuous integration servers, use this invocation:
pytest --junitxml=path
2.15.1 record_property
If you want to log additional information for a test, you can use the record_property fixture:
def test_function(record_property):
    record_property("example_key", 1)
    assert True

This will add an extra property example_key="1" to the generated testcase tag:

<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
  <properties>
    <property name="example_key" value="1" />
  </properties>
</testcase>
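Marker arguments can also be recorded as properties. A sketch of a conftest.py hook that copies a test_id marker argument into the report (pytest_collection_modifyitems, iter_markers and user_properties are standard pytest APIs; the marker name is just the one used in the example that follows):

# content of conftest.py (sketch)
def pytest_collection_modifyitems(session, config, items):
    for item in items:
        for marker in item.iter_markers(name="test_id"):
            test_id = marker.args[0]
            item.user_properties.append(("test_id", test_id))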
A test marked accordingly:

@pytest.mark.test_id(1501)
def test_function():
    assert True

will then have the marker value recorded as a property in the generated testcase tag:

<properties>
  <property name="test_id" value="1501" />
</properties>
</testcase>
Warning: Please note that using this feature will break schema verifications for the latest JUnitXML schema.
This might be a problem when used with some CI servers.
2.15.2 record_xml_attribute
To add an additional xml attribute to a testcase element, you can use the record_xml_attribute fixture. This can also be used to override existing values:

def test_function(record_xml_attribute):
    record_xml_attribute("assertions", "REQ-1234")
    record_xml_attribute("classname", "custom_classname")
    print("hello world")
    assert True

Unlike record_property, this will not add a new child element. Instead, this will add an attribute assertions="REQ-1234" inside the generated testcase tag and override the default classname with classname="custom_classname":

<testcase classname="custom_classname" file="test_function.py" line="0" name="test_function" time="0.003" assertions="REQ-1234">
  <system-out>
    hello world
  </system-out>
</testcase>
Warning: record_xml_attribute is an experimental feature, and its interface might be replaced by some-
thing more powerful and general in future versions. The functionality per-se will be kept, however.
Using this over record_xml_property can help when using CI tools to parse the XML report. However, some parsers are quite strict about the elements and attributes that are allowed. Many tools use an XSD schema (like the example below) to validate incoming XML. Make sure you are using attribute names that are allowed by your parser.
Below is the schema used by Jenkins to validate the XML report:
<xs:element name="testcase">
    <xs:complexType>
        <xs:sequence>
            <xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
            <xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
Warning: Please note that using this feature will break schema verifications for the latest JUnitXML schema.
This might be a problem when used with some CI servers.
2.15.3 record_testsuite_property
If you want to add a properties node at the test-suite level, which may contain properties that are relevant to all tests, you can use the record_testsuite_property session-scoped fixture:
import pytest

@pytest.fixture(scope="session", autouse=True)
def log_global_env_facts(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")

class TestMe:
    def test_foo(self):
        assert True
The fixture is a callable which receives name and value of a <property> tag added at the test-suite level of the generated xml. name must be a string, value will be converted to a string and properly xml-escaped.
The generated XML is compatible with the latest xunit standard, contrary to record_property and
record_xml_attribute.
To create plain-text machine-readable result files, you can issue:
pytest --resultlog=path
and look at the content at the path location. Such files are used e.g. by the PyPy-test web page to show test results
over several revisions.
Warning: This option is rarely used and is scheduled for removal in pytest 6.0.
If you use this option, consider using the new pytest-reportlog plugin instead.
See the deprecation docs for more information.
pytest --pastebin=failed
This will submit test run information to a remote Paste service and provide a URL for each failure. You may select
tests as usual or add for example -x if you only want to send one particular failure.
Creating a URL for a whole test session log:
pytest --pastebin=all
You can early-load plugins (internal and external) explicitly in the command-line with the -p option:
pytest -p mypluginmodule
pytest -p pytest_cov
To disable loading specific plugins at invocation time, use the -p option together with the prefix no:.
Example: to disable loading the plugin doctest, which is responsible for executing doctest tests from text files,
invoke pytest like this:
pytest -p no:doctest
You can also invoke pytest from Python code directly with pytest.main(); this acts as if you would call "pytest" from the command line. It will not raise SystemExit but return the exit code instead. You can pass in options and arguments:
pytest.main(["-x", "mytestdir"])
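Since pytest.main() returns the exit code rather than raising SystemExit, you can forward it explicitly when embedding pytest in a script. A small sketch (the "mytestdir" path is the placeholder used above):

import sys

import pytest

# Run the test session and hand the resulting exit code back to the shell.
exit_code = pytest.main(["-x", "mytestdir"])
sys.exit(exit_code)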
You can specify additional plugins to pytest.main:

# content of myinvoke.py
import pytest

class MyPlugin:
    def pytest_sessionfinish(self):
        print("*** test run reporting finishing")

pytest.main(["-qq"], plugins=[MyPlugin()])
Running it will show that MyPlugin was added and its hook was invoked:
$ python myinvoke.py
.FEsxX.                                                            [100%]*** test run reporting finishing
Note: Calling pytest.main() will result in importing your tests and any modules that they import. Due to the
caching mechanism of python’s import system, making subsequent calls to pytest.main() from the same process
will not reflect changes to those files between the calls. For this reason, making multiple calls to pytest.main()
from the same process (in order to re-run tests, for example) is not recommended.
CHAPTER THREE
USING PYTEST WITH AN EXISTING TEST SUITE
Pytest can be used with most existing test suites, but its behavior differs from other test runners such as nose or
Python’s default unittest framework.
Before using this section you will want to install pytest.
Say you want to contribute to an existing repository somewhere. After pulling the code into your development space
using some flavor of version control and (optionally) setting up a virtualenv you will want to run:
cd <repository>
pip install -e . # Environment dependent alternatives include
# 'python setup.py develop' and 'conda develop'
in your project root. This will set up a symlink to your code in site-packages, allowing you to edit your code while
your tests run against it as if it were installed.
Setting up your project in development mode lets you avoid having to reinstall every time you want to run your tests,
and is less brittle than mucking about with sys.path to point your tests at local code.
Also consider using tox.
CHAPTER FOUR
THE WRITING AND REPORTING OF ASSERTIONS IN TESTS
pytest allows you to use the standard python assert for verifying expectations and values in Python tests. For
example, you can write the following:
# content of test_assert1.py
def f():
    return 3

def test_function():
    assert f() == 4
to assert that your function returns a certain value. If this assertion fails you will see the return value of the function
call:
$ pytest test_assert1.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_assert1.py F [100%]
def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()
test_assert1.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_assert1.py::test_function - assert 3 == 4
============================ 1 failed in 0.12s =============================
pytest has support for showing the values of the most common subexpressions including calls, attributes, compar-
isons, and binary and unary operators. (See Demo of Python failure reports with pytest). This allows you to use the
idiomatic python constructs without boilerplate code while not losing introspection information.
However, if you specify a message with the assertion like this:

assert a % 2 == 0, "value was odd, should be even"

then no assertion introspection takes place at all and the message will be simply shown in the traceback.
See Assertion introspection details for more information on assertion introspection.
In order to write assertions about raised exceptions, you can use pytest.raises() as a context manager like this:
import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0
and if you need to have access to the actual exception info you may use:
def test_recursion_depth():
    with pytest.raises(RuntimeError) as excinfo:

        def f():
            f()

        f()
    assert "maximum recursion" in str(excinfo.value)
excinfo is an ExceptionInfo instance, which is a wrapper around the actual exception raised. The main at-
tributes of interest are .type, .value and .traceback.
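A short illustration of those attributes (the exception message text is what CPython produces for dividing by zero):

import pytest

def test_zero_division_details():
    with pytest.raises(ZeroDivisionError) as excinfo:
        1 / 0
    # .type is the exception class, .value the exception instance,
    # .traceback a traversable wrapper around the raw traceback.
    assert excinfo.type is ZeroDivisionError
    assert "division by zero" in str(excinfo.value)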
You can pass a match keyword parameter to the context-manager to test that a regular expression matches on the string
representation of an exception (similar to the TestCase.assertRaisesRegexp method from unittest):
import pytest

def myfunc():
    raise ValueError("Exception 123 raised")

def test_match():
    with pytest.raises(ValueError, match=r".* 123 .*"):
        myfunc()
The regexp passed to the match parameter is matched with the re.search function, so in the above example match='123' would have worked as well.
There's an alternate form of the pytest.raises() function where you pass a function that will be executed with the given *args and **kwargs and assert that the given exception is raised:

pytest.raises(ExpectedException, func, *args, **kwargs)
The reporter will provide you with helpful output in case of failures such as no exception or wrong exception.
Note that it is also possible to specify a “raises” argument to pytest.mark.xfail, which checks that the test is
failing in a more specific way than just having any exception raised:
@pytest.mark.xfail(raises=IndexError)
def test_f():
    f()
Using pytest.raises() is likely to be better for cases where you are testing exceptions your own code is delib-
erately raising, whereas using @pytest.mark.xfail with a check function is probably better for something like
documenting unfixed bugs (where the test describes what “should” happen) or bugs in dependencies.
You can check that code raises a particular warning using pytest.warns.
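A brief sketch (the deprecated_add helper is hypothetical):

import warnings

import pytest

def deprecated_add(a, b):
    warnings.warn("deprecated_add is deprecated, use add", DeprecationWarning)
    return a + b

def test_deprecated_add_warns():
    with pytest.warns(DeprecationWarning, match="deprecated"):
        assert deprecated_add(1, 2) == 3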
pytest has rich support for providing context-sensitive information when it encounters comparisons. For example:
# content of test_assert2.py
def test_set_comparison():
    set1 = set("1308")
    set2 = set("8035")
    assert set1 == set2
$ pytest test_assert2.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_assert2.py F [100%]
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
> assert set1 == set2
E AssertionError: assert {'0', '1', '3', '8'} == {'0', '3', '5', '8'}
E Extra items in the left set:
E '1'
E Extra items in the right set:
E '5'
E Use -v to get the full diff
test_assert2.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_assert2.py::test_set_comparison - AssertionError: assert {'0'...
============================ 1 failed in 0.12s =============================
It is possible to add your own detailed explanations by implementing the pytest_assertrepr_compare hook.
pytest_assertrepr_compare(config: Config, op: str, left: object, right: object) → Optional[List[str]]
Return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines, but any newlines in a string will be escaped. Note that all but the first line will be indented slightly; the intention is for the first line to be a summary.
Parameters: config (_pytest.config.Config) – the pytest config object.
As an example consider adding the following hook in a conftest.py file which provides an alternative explanation for
Foo objects:
# content of conftest.py
from test_foocompare import Foo
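# The hook implementation is a sketch here; the message strings are assumptions
# chosen to be consistent with the failure output shown further below.
def pytest_assertrepr_compare(op, left, right):
    if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
        return [
            "Comparing Foo instances:",
            "   vals: {} != {}".format(left.val, right.val),
        ]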
# content of test_foocompare.py
class Foo:
    def __init__(self, val):
        self.val = val

    def __eq__(self, other):
        return self.val == other.val

def test_compare():
    f1 = Foo(1)
    f2 = Foo(2)
    assert f1 == f2
you can run the test module and get the custom output defined in the conftest file:
$ pytest -q test_foocompare.py
F [100%]
================================= FAILURES =================================
def test_compare():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E assert Comparing Foo instances:
E vals: 1 != 2
test_foocompare.py:12: AssertionError
========================= short test summary info ==========================
FAILED test_foocompare.py::test_compare - assert Comparing Foo instances:
1 failed in 0.12s
Reporting details about a failing assertion is achieved by rewriting assert statements before they are run. Rewritten
assert statements put introspection information into the assertion failure message. pytest only rewrites test modules
directly discovered by its test collection process, so asserts in supporting modules which are not themselves test
modules will not be rewritten.
You can manually enable assertion rewriting for an imported module by calling register_assert_rewrite before you
import it (a good place to do that is in your root conftest.py).
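A minimal sketch of that pattern ("mypkg.helpers" is a placeholder for a support module of your own):

# content of the root conftest.py (sketch)
import pytest

# Register before the helper module is imported anywhere, so its asserts
# get rewritten just like asserts in test modules.
pytest.register_assert_rewrite("mypkg.helpers")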
For further information, Benjamin Peterson wrote up Behind the scenes of pytest’s new assertion rewriting.
pytest will write back the rewritten modules to disk for caching. You can disable this behavior (for example to
avoid leaving stale .pyc files around in projects that move files around a lot) by adding this to the top of your
conftest.py file:
import sys
sys.dont_write_bytecode = True
Note that you still get the benefits of assertion introspection, the only change is that the .pyc files won’t be cached
on disk.
Additionally, rewriting will silently skip caching if it cannot write new .pyc files, i.e. in a read-only filesystem or a
zipfile.
pytest rewrites test modules on import by using an import hook to write new pyc files. Most of the time this works
transparently. However, if you are working with the import machinery yourself, the import hook may interfere.
If this is the case you have two options:
• Disable rewriting for a specific module by adding the string PYTEST_DONT_REWRITE to its docstring (a short sketch follows this list).
• Disable rewriting for all modules by using --assert=plain.
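For the first option, the marker is simply part of the module docstring; a sketch:

# content of helpers.py (sketch): asserts in this module are left alone
"""Test helper utilities.

PYTEST_DONT_REWRITE
"""

def check_positive(value):
    assert value > 0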
CHAPTER FIVE
PYTEST FIXTURES: EXPLICIT, MODULAR, SCALABLE
Software test fixtures initialize test functions. They provide a fixed baseline so that tests execute reliably and produce
consistent, repeatable, results. Initialization may setup services, state, or other operating environments. These are
accessed by test functions through arguments; for each fixture used by a test function there is typically a parameter
(named after the fixture) in the test function’s definition.
pytest fixtures offer dramatic improvements over the classic xUnit style of setup/teardown functions:
• fixtures have explicit names and are activated by declaring their use from test functions, modules, classes or
whole projects.
• fixtures are implemented in a modular manner, as each fixture name triggers a fixture function which can itself
use other fixtures.
• fixture management scales from simple unit to complex functional testing, allowing you to parametrize fixtures and tests according to configuration and component options, or to re-use fixtures across function, class, module or whole test session scopes.
• teardown logic can be easily and safely managed, no matter how many fixtures are used, without the need to carefully handle errors by hand or micromanage the order that cleanup steps are added.
In addition, pytest continues to support classic xunit-style setup. You can mix both styles, moving incrementally from
classic to new style, as you prefer. You can also start out from existing unittest.TestCase style or nose based projects.
Fixtures are defined using the @pytest.fixture decorator, described below. Pytest has useful built-in fixtures, listed here
for reference:
capfd Capture, as text, output to file descriptors 1 and 2.
capfdbinary Capture, as bytes, output to file descriptors 1 and 2.
caplog Control logging and access log entries.
capsys Capture, as text, output to sys.stdout and sys.stderr.
capsysbinary Capture, as bytes, output to sys.stdout and sys.stderr.
cache Store and retrieve values across pytest runs.
doctest_namespace Provide a dict injected into the doctests namespace.
monkeypatch Temporarily modify classes, functions, dictionaries, os.environ, and other objects.
pytestconfig Access to configuration values, pluginmanager and plugin hooks.
record_property Add extra properties to the test.
record_testsuite_property Add extra properties to the test suite.
recwarn Record warnings emitted by test functions.
request Provide information on the executing test function.
testdir Provide a temporary test directory to aid in running, and testing, pytest plugins.
tmp_path Provide a pathlib.Path object to a temporary directory which is unique to each test function (a short example follows this list).
tmp_path_factory Make session-scoped temporary directories and return pathlib.Path objects.
tmpdir Provide a py.path.local object to a temporary directory which is unique to each test function; replaced by tmp_path.
tmpdir_factory Make session-scoped temporary directories and return py.path.local objects; replaced by tmp_path_factory.
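As a quick illustration of one of these, the tmp_path fixture hands each test its own fresh pathlib.Path directory:

def test_write_report(tmp_path):
    # tmp_path is a pathlib.Path pointing at a unique per-test directory
    report = tmp_path / "report.txt"
    report.write_text("all good")
    assert report.read_text() == "all good"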
Before we dive into what fixtures are, let’s first look at what a test is.
In the simplest terms, a test is meant to look at the result of a particular behavior, and make sure that result aligns with
what you would expect. Behavior is not something that can be empirically measured, which is why writing tests can
be challenging.
“Behavior” is the way in which some system acts in response to a particular situation and/or stimuli. But exactly how
or why something is done is not quite as important as what was done.
You can think of a test as being broken down into four steps:
1. Arrange
2. Act
3. Assert
4. Cleanup
Arrange is where we prepare everything for our test. This means pretty much everything except for the “act”. It’s
lining up the dominoes so that the act can do its thing in one, state-changing step. This can mean preparing objects,
starting/killing services, entering records into a database, or even things like defining a URL to query, generating some
credentials for a user that doesn’t exist yet, or just waiting for some process to finish.
Act is the singular, state-changing action that kicks off the behavior we want to test. This behavior is what carries
out the changing of the state of the system under test (SUT), and it’s the resulting changed state that we can look at to
make a judgement about the behavior. This typically takes the form of a function/method call.
Assert is where we look at that resulting state and check if it looks how we’d expect after the dust has settled. It’s
where we gather evidence to say the behavior does or does not align with what we expect. The assert in our test is
where we take that measurement/observation and apply our judgement to it. If something should be green, we’d say
assert thing == "green".
Cleanup is where the test picks up after itself, so other tests aren’t being accidentally influenced by it.
At its core, the test is ultimately the act and assert steps, with the arrange step only providing the context. Behavior exists between act and assert.
“Fixtures”, in the literal sense, are each of the arrange steps and data. They’re everything that test needs to do its
thing.
In pytest, “fixtures” are functions you define that serve this purpose. But they don’t have to be limited to just the
arrange steps. They can provide the act step, as well, and this can be a powerful technique for designing more
complex tests, especially given how pytest’s fixture system works. But we’ll get into that further down.
We can tell pytest that a particular function is a fixture by decorating it with @pytest.fixture. Here’s a simple
example of what a fixture in pytest might look like:
import pytest

class Fruit:
    def __init__(self, name):
        self.name = name

@pytest.fixture
def my_fruit():
    return Fruit("apple")

@pytest.fixture
def fruit_basket(my_fruit):
    return [Fruit("banana"), my_fruit]

def test_my_fruit_in_basket(my_fruit, fruit_basket):
    assert my_fruit in fruit_basket
Tests don’t have to be limited to a single fixture, either. They can depend on as many fixtures as you want, and fixtures
can use other fixtures, as well. This is where pytest’s fixture system really shines.
Don’t be afraid to break things up if it makes things cleaner.
So fixtures are how we prepare for a test, but how do we tell pytest what tests and fixtures need which fixtures?
At a basic level, test functions request fixtures by declaring them as arguments, as in the
test_my_fruit_in_basket(my_fruit, fruit_basket): in the previous example.
At a basic level, pytest depends on a test to tell it what fixtures it needs, so we have to build that information into the
test itself. We have to make the test “request” the fixtures it depends on, and to do this, we have to list those fixtures
as parameters in the test function’s “signature” (which is the def test_something(blah, stuff, more):
line).
When pytest goes to run a test, it looks at the parameters in that test function’s signature, and then searches for fixtures
that have the same names as those parameters. Once pytest finds them, it runs those fixtures, captures what they
returned (if anything), and passes those objects into the test function as arguments.
import pytest

class Fruit:
    def __init__(self, name):
        self.name = name
        self.cubed = False

    def cube(self):
        self.cubed = True

class FruitSalad:
    def __init__(self, *fruit_bowl):
        self.fruit = fruit_bowl
        self._cube_fruit()

    def _cube_fruit(self):
        for fruit in self.fruit:
            fruit.cube()

# Arrange
@pytest.fixture
def fruit_bowl():
    return [Fruit("apple"), Fruit("banana")]

def test_fruit_salad(fruit_bowl):
    # Act
    fruit_salad = FruitSalad(*fruit_bowl)

    # Assert
    assert all(fruit.cubed for fruit in fruit_salad.fruit)
One of pytest's greatest strengths is its extremely flexible fixture system. It allows us to boil down complex requirements for tests into simpler and more organized functions, where we only need to have each one describe the things they depend on. We'll get more into this further down, but for now, here's a quick example to demonstrate how
fixtures can use other fixtures:
# contents of test_append.py
import pytest

# Arrange
@pytest.fixture
def first_entry():
    return "a"

# Arrange
@pytest.fixture
def order(first_entry):
    return [first_entry]

def test_string(order):
    # Act
    order.append("b")

    # Assert
    assert order == ["a", "b"]
Notice that this is the same example from above, but very little changed. The fixtures in pytest request fixtures just
like tests. All the same requesting rules apply to fixtures that do for tests. Here’s how this example would work if we
did it by hand:
def first_entry():
    return "a"

def order(first_entry):
    return [first_entry]

def test_string(order):
    # Act
    order.append("b")

    # Assert
    assert order == ["a", "b"]

entry = first_entry()
the_list = order(first_entry=entry)
test_string(order=the_list)
One of the things that makes pytest's fixture system so powerful is that it gives us the ability to define a generic setup step that can be reused over and over, just like a normal function would be used. Two different tests can request the same fixture and have pytest give each test their own result from that fixture.
This is extremely useful for making sure tests aren’t affected by each other. We can use this system to make sure each
test gets its own fresh batch of data and is starting from a clean state so it can provide consistent, repeatable results.
Here’s an example of how this can come in handy:
# contents of test_append.py
import pytest

# Arrange
@pytest.fixture
def first_entry():
    return "a"

# Arrange
@pytest.fixture
def order(first_entry):
    return [first_entry]

def test_string(order):
    # Act
    order.append("b")

    # Assert
    assert order == ["a", "b"]

def test_int(order):
    # Act
    order.append(2)

    # Assert
    assert order == ["a", 2]
Each test here is being given its own copy of that list object, which means the order fixture is getting executed
twice (the same is true for the first_entry fixture). If we were to do this by hand as well, it would look something
like this:
def first_entry():
    return "a"

def order(first_entry):
    return [first_entry]

def test_string(order):
    # Act
    order.append("b")

    # Assert
    assert order == ["a", "b"]

def test_int(order):
    # Act
    order.append(2)

    # Assert
    assert order == ["a", 2]

entry = first_entry()
the_list = order(first_entry=entry)
test_string(order=the_list)

entry = first_entry()
the_list = order(first_entry=entry)
test_int(order=the_list)
Tests and fixtures aren’t limited to requesting a single fixture at a time. They can request as many as they like. Here’s
another quick example to demonstrate:
# contents of test_append.py
import pytest

# Arrange
@pytest.fixture
def first_entry():
    return "a"

# Arrange
@pytest.fixture
def second_entry():
    return 2

# Arrange
@pytest.fixture
def order(first_entry, second_entry):
    return [first_entry, second_entry]

# Arrange
@pytest.fixture
def expected_list():
    return ["a", 2, 3.0]

def test_string(order, expected_list):
    # Act
    order.append(3.0)

    # Assert
    assert order == expected_list
5.2.5 Fixtures can be requested more than once per test (return values are cached)
Fixtures can also be requested more than once during the same test, and pytest won’t execute them again for that test.
This means we can request fixtures in multiple fixtures that are dependent on them (and even again in the test itself)
without those fixtures being executed more than once.
# contents of test_append.py
import pytest

# Arrange
@pytest.fixture
def first_entry():
    return "a"

# Arrange
@pytest.fixture
def order():
    return []

# Act
@pytest.fixture
def append_first(order, first_entry):
    return order.append(first_entry)

def test_string_only(append_first, order, first_entry):
    # Assert
    assert order == [first_entry]
If a requested fixture was executed once for every time it was requested during a test, then this test would fail because
both append_first and test_string_only would see order as an empty list (i.e. []), but since the return
value of order was cached (along with any side effects executing it may have had) after the first time it was called,
both the test and append_first were referencing the same object, and the test saw the effect append_first had
on that object.
Sometimes you may want to have a fixture (or even several) that you know all your tests will depend on. “Autouse”
fixtures are a convenient way to make all tests automatically request them. This can cut out a lot of redundant requests,
and can even provide more advanced fixture usage (more on that further down).
We can make a fixture an autouse fixture by passing in autouse=True to the fixture’s decorator. Here’s a simple
example for how they can be used:
# contents of test_append.py
import pytest

@pytest.fixture
def first_entry():
    return "a"

@pytest.fixture
def order(first_entry):
    return []

@pytest.fixture(autouse=True)
def append_first(order, first_entry):
    return order.append(first_entry)

def test_string_only(order, first_entry):
    assert order == [first_entry]

def test_string_and_int(order, first_entry):
    order.append(2)
    assert order == [first_entry, 2]
In this example, the append_first fixture is an autouse fixture. Because it happens automatically, both tests are
affected by it, even though neither test requested it. That doesn’t mean they can’t be requested though; just that it
isn’t necessary.
Fixtures requiring network access depend on connectivity and are usually time-expensive to create. Extending the previous example, we can add a scope="module" parameter to the @pytest.fixture invocation to cause a smtp_connection fixture function, responsible for creating a connection to a preexisting SMTP server, to only be invoked once per test module (the default is to invoke once per test function). Multiple test functions in a test module will thus each receive the same smtp_connection fixture instance, thus saving time. Possible values for scope are: function, class, module, package or session.
The next example puts the fixture function into a separate conftest.py file so that tests from multiple test modules
in the directory can access the fixture function:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module")
def smtp_connection():
    return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)

# content of test_module.py
def test_ehlo(smtp_connection):
    response, msg = smtp_connection.ehlo()
    assert response == 250
    assert b"smtp.gmail.com" in msg
    assert 0  # for demo purposes

def test_noop(smtp_connection):
    response, msg = smtp_connection.noop()
    assert response == 250
    assert 0  # for demo purposes
Here, the test_ehlo needs the smtp_connection fixture value. pytest will discover and call the @pytest.
fixture marked smtp_connection fixture function. Running the test looks like this:
$ pytest test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
test_module.py FF [100%]
def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
> assert 0 # for demo purposes
E assert 0
test_module.py:7: AssertionError
________________________________ test_noop _________________________________
def test_noop(smtp_connection):
response, msg = smtp_connection.noop()
assert response == 250
> assert 0 # for demo purposes
test_module.py:13: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_ehlo - assert 0
FAILED test_module.py::test_noop - assert 0
============================ 2 failed in 0.12s =============================
You see the two assert 0 failing and, more importantly, you can also see that exactly the same smtp_connection object was passed into the two test functions because pytest shows the incoming argument values in the traceback. As a result, the two test functions using smtp_connection run as quickly as a single one because they reuse the same instance.
If you decide that you rather want to have a session-scoped smtp_connection instance, you can simply declare it:
@pytest.fixture(scope="session")
def smtp_connection():
    # the returned fixture value will be shared for
    # all tests requesting it
    ...
Fixtures are created when first requested by a test, and are destroyed based on their scope:
• function: the default scope, the fixture is destroyed at the end of the test.
• class: the fixture is destroyed during teardown of the last test in the class.
• module: the fixture is destroyed during teardown of the last test in the module.
• package: the fixture is destroyed during teardown of the last test in the package.
• session: the fixture is destroyed at the end of the test session.
Note: Pytest only caches one instance of a fixture at a time, which means that when using a parametrized fixture,
pytest may invoke a fixture more than once in the given scope.
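The scope can also be determined dynamically: instead of a string you may pass a callable to scope. The callable is executed once, during fixture definition, receives the fixture_name and config keyword arguments and must return a valid scope string. A minimal sketch of such a callable (the --keep-containers option is an illustrative assumption and would have to be registered via pytest_addoption):

def determine_scope(fixture_name, config):
    # Reuse one container for the whole session when explicitly requested,
    # otherwise spin up a fresh one per test.
    if config.getoption("--keep-containers", default=False):
        return "session"
    return "function"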
@pytest.fixture(scope=determine_scope)
def docker_container():
    yield spawn_container()
pytest does its best to put all the fixtures for a given test in a linear order so that it can see which fixture happens first,
second, third, and so on. If an earlier fixture has a problem, though, and raises an exception, pytest will stop executing
fixtures for that test and mark the test as having an error.
When a test is marked as having an error, it doesn’t mean the test failed, though. It just means the test couldn’t even
be attempted because one of the things it depends on had a problem.
This is one reason why it’s a good idea to cut out as many unnecessary dependencies as possible for a given test. That
way a problem in something unrelated isn’t causing us to have an incomplete picture of what may or may not have
issues.
Here’s a quick example to help explain:
import pytest

@pytest.fixture
def order():
    return []

@pytest.fixture
def append_first(order):
    order.append(1)

@pytest.fixture
def append_second(order, append_first):
    order.extend([2])

@pytest.fixture(autouse=True)
def append_third(order, append_second):
    order += [3]

def test_order(order):
    assert order == [1, 2, 3]
If, for whatever reason, order.append(1) had a bug and it raises an exception, we wouldn’t be able to know if
order.extend([2]) or order += [3] would also have problems. After append_first throws an excep-
tion, pytest won’t run any more fixtures for test_order, and it won’t even try to run test_order itself. The only
things that would’ve run would be order and append_first.
When we run our tests, we’ll want to make sure they clean up after themselves so they don’t mess with any other tests
(and also so that we don’t leave behind a mountain of test data to bloat the system). Fixtures in pytest offer a very
useful teardown system, which allows us to define the specific steps necessary for each fixture to clean up after itself.
This system can be leveraged in two ways.
“Yield” fixtures yield instead of return. With these fixtures, we can run some code and pass an object back to the
requesting fixture/test, just like with the other fixtures. The only differences are:
1. return is swapped out for yield.
2. Any teardown code for that fixture is placed after the yield.
Once pytest figures out a linear order for the fixtures, it will run each one up until it returns or yields, and then move
on to the next fixture in the list to do the same thing.
Once the test is finished, pytest will go back down the list of fixtures, but in the reverse order, taking each one that
yielded, and running the code inside it that was after the yield statement.
As a simple example, let’s say we want to test sending email from one user to another. We’ll have to first make each
user, then send the email from one user to the other, and finally assert that the other user received that message in their
inbox. If we want to clean up after the test runs, we’ll likely have to make sure the other user’s mailbox is emptied
before deleting that user, otherwise the system may complain.
Here’s what that might look like:
import pytest

@pytest.fixture
def mail_admin():
    return MailAdminClient()

@pytest.fixture
def sending_user(mail_admin):
    user = mail_admin.create_user()
    yield user
    mail_admin.delete_user(user)

@pytest.fixture
def receiving_user(mail_admin):
    user = mail_admin.create_user()
    yield user
    mail_admin.delete_user(user)
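A test using these fixtures might look like the following sketch (Email, send_email and inbox are assumed APIs of the example's hypothetical mail library):

def test_email_received(sending_user, receiving_user):
    # Act: build and send a message between the two freshly created users
    email = Email(subject="Hey!", body="How's it going?")
    sending_user.send_email(email, receiving_user)

    # Assert: the receiving user sees the message in their inbox
    assert email in receiving_user.inbox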
Because receiving_user is the last fixture to run during setup, it’s the first to run during teardown.
There is a risk that even having the order right on the teardown side of things doesn’t guarantee a safe cleanup. That’s
covered in a bit more detail in Safe teardowns.
If a yield fixture raises an exception before yielding, pytest won’t try to run the teardown code after that yield fixture’s
yield statement. But, for every fixture that has already run successfully for that test, pytest will still attempt to tear
them down as it normally would.
While yield fixtures are considered to be the cleaner and more straightforward option, there is another choice, and that is to add "finalizer" functions directly to the test's request-context object. It brings a similar result as yield fixtures, but requires a bit more verbosity.
In order to use this approach, we have to request the request-context object (just like we would request another fix-
ture) in the fixture we need to add teardown code for, and then pass a callable, containing that teardown code, to its
addfinalizer method.
We have to be careful though, because pytest will run that finalizer once it’s been added, even if that fixture raises an
exception after adding the finalizer. So to make sure we don’t run the finalizer code when we wouldn’t need to, we
would only add the finalizer once the fixture would have done something that we’d need to teardown.
Here’s how the previous example would look using the addfinalizer method:
import pytest
@pytest.fixture
def mail_admin():
return MailAdminClient()
@pytest.fixture
def sending_user(mail_admin):
user = mail_admin.create_user()
yield user
mail_admin.delete_user(user)
@pytest.fixture
def receiving_user(mail_admin, request):
user = mail_admin.create_user()
def delete_user():
mail_admin.delete_user(user)
request.addfinalizer(delete_user)
return user
@pytest.fixture
def email(sending_user, receiving_user, request):
    _email = Email(subject="Hey!", body="How's it going?")
    sending_user.send_email(_email, receiving_user)

    def empty_mailbox():
        receiving_user.delete_email(_email)

    request.addfinalizer(empty_mailbox)
    return _email
It’s a bit longer than yield fixtures and a bit more complex, but it does offer some nuances for when you’re in a pinch.
The fixture system of pytest is very powerful, but it’s still being run by a computer, so it isn’t able to figure out how to
safely teardown everything we throw at it. If we aren’t careful, an error in the wrong spot might leave stuff from our
tests behind, and that can cause further issues pretty quickly.
For example, consider the following tests (based off of the mail example from above):
import pytest
@pytest.fixture
def setup():
mail_admin = MailAdminClient()
sending_user = mail_admin.create_user()
receiving_user = mail_admin.create_user()
email = Email(subject="Hey!", body="How's it going?")
sending_user.send_email(email, receiving_user)
yield receiving_user, email
receiving_user.delete_email(email)
mail_admin.delete_user(sending_user)
mail_admin.delete_user(receiving_user)
def test_email_received(setup):
receiving_user, email = setup
assert email in receiving_user.inbox
This version is a lot more compact, but it’s also harder to read, doesn’t have a very descriptive fixture name, and none
of the fixtures can be reused easily.
There’s also a more serious issue, which is that if any of those steps in the setup raise an exception, none of the
teardown code will run.
One option might be to go with the addfinalizer method instead of yield fixtures, but that might get pretty
complex and difficult to maintain (and it wouldn’t be compact anymore).
The safest and simplest fixture structure requires limiting fixtures to only making one state-changing action each, and
then bundling them together with their teardown code, as the email examples above showed.
The chance that a state-changing operation can fail but still modify state is negligible, as most of these operations tend
to be transaction-based (at least at the level of testing where state could be left behind). So if we make sure that any
successful state-changing action gets torn down by moving it to a separate fixture function and separating it from other,
potentially failing state-changing actions, then our tests will stand the best chance at leaving the test environment the
way they found it.
For an example, let’s say we have a website with a login page, and we have access to an admin API where we can
generate users. For our test, we want to:
1. Create a user through that admin API
2. Launch a browser using Selenium
3. Go to the login page of our site
4. Log in as the user we created
5. Assert that their name is in the header of the landing page
We wouldn’t want to leave that user in the system, nor would we want to leave that browser session running, so we’ll
want to make sure the fixtures that create those things clean up after themselves.
Here’s what that might look like:
Note: For this example, certain fixtures (i.e. base_url and admin_credentials) are implied to exist else-
where. So for now, let’s assume they exist, and we’re just not looking at them.
@pytest.fixture
def admin_client(base_url, admin_credentials):
return AdminApiClient(base_url, **admin_credentials)
@pytest.fixture
def user(admin_client):
_user = User(name="Susan", username=f"testuser-{uuid4()}", password="P4$$word")
admin_client.create_user(_user)
yield _user
admin_client.delete_user(_user)
@pytest.fixture
def driver():
    _driver = Chrome()
    yield _driver
    _driver.quit()
@pytest.fixture
def login(driver, base_url, user):
driver.get(urljoin(base_url, "/login"))
page = LoginPage(driver)
page.login(user)
@pytest.fixture
def landing_page(driver, login):
return LandingPage(driver)
The way the dependencies are laid out means it’s unclear if the user fixture would execute before the driver
fixture. But that’s ok, because those are atomic operations, and so it doesn’t matter which one runs first because the
sequence of events for the test is still linearizable. But what does matter is that, no matter which one runs first, if the
one raises an exception while the other would not have, neither will have left anything behind. If driver executes
before user, and user raises an exception, the driver will still quit, and the user was never made. And if driver
was the one to raise the exception, then the driver would never have been started and the user would never have been
made.
Fixture availability is determined from the perspective of the test. A fixture is only available for tests to request if they
are in the scope that fixture is defined in. If a fixture is defined inside a class, it can only be requested by tests inside
that class. But if a fixture is defined inside the global scope of the module, then every test in that module, even if it's
defined inside a class, can request it.
Similarly, a test can also only be affected by an autouse fixture if that test is in the same scope that autouse fixture is
defined in (see Autouse fixtures are executed first within their scope).
A fixture can also request any other fixture, no matter where it’s defined, so long as the test requesting them can see
all fixtures involved.
For example, here’s a test file with a fixture (outer) that requests a fixture (inner) from a scope it wasn’t defined
in:
import pytest
@pytest.fixture
def order():
return []
@pytest.fixture
def outer(order, inner):
    order.append("outer")

class TestOne:
    @pytest.fixture
    def inner(self, order):
        order.append("one")

    def test_order(self, order, outer):
        assert order == ["one", "outer"]

class TestTwo:
    @pytest.fixture
    def inner(self, order):
        order.append("two")

    def test_order(self, order, outer):
        assert order == ["two", "outer"]
From the tests' perspectives, they have no problem seeing each of the fixtures they're dependent on. So when they run, outer will have no problem finding inner, because pytest searched from the tests' perspectives.
Note: The scope a fixture is defined in has no bearing on the order it will be instantiated in: the order is mandated by
the logic described here.
The conftest.py file serves as a means of providing fixtures for an entire directory. Fixtures defined in a
conftest.py can be used by any test in that package without needing to import them (pytest will automatically
discover them).
You can have multiple nested directories/packages containing your tests, and each directory can have its own
conftest.py with its own fixtures, adding on to the ones provided by the conftest.py files in parent directories.
For example, given a test file structure like this:
tests/
__init__.py
conftest.py
# content of tests/conftest.py
import pytest
@pytest.fixture
def order():
return []
@pytest.fixture
def top(order, innermost):
order.append("top")
test_top.py
# content of tests/test_top.py
import pytest

@pytest.fixture
def innermost(order):
    order.append("innermost top")

def test_order(order, top):
    assert order == ["innermost top", "top"]
subpackage/
__init__.py
conftest.py
# content of tests/subpackage/conftest.py
import pytest
@pytest.fixture
def mid(order):
order.append("mid subpackage")
test_subpackage.py
# content of tests/subpackage/test_subpackage.py
import pytest
@pytest.fixture
def innermost(order, mid):
    order.append("innermost subpackage")

def test_order(order, top):
    assert order == ["mid subpackage", "innermost subpackage", "top"]
The directories become their own sort of scope where fixtures that are defined in a conftest.py file in that directory
become available for that whole scope.
Tests are allowed to search upward (stepping outside a circle) for fixtures, but can never go down (stepping inside
a circle) to continue their search. So tests/subpackage/test_subpackage.py::test_order would
be able to find the innermost fixture defined in tests/subpackage/test_subpackage.py, but the one
defined in tests/test_top.py would be unavailable to it because it would have to step down a level (step inside
a circle) to find it.
The first fixture the test finds is the one that will be used, so fixtures can be overridden if you need to change or extend
what one does for a particular scope.
You can also use the conftest.py file to implement local per-directory plugins.
Fixtures don’t have to be defined in this structure to be available for tests, though. They can also be provided by third-
party plugins that are installed, and this is how many pytest plugins operate. As long as those plugins are installed, the
fixtures they provide can be requested from anywhere in your test suite.
Because they’re provided from outside the structure of your test suite, third-party plugins don’t really provide a scope
like conftest.py files and the directories in your test suite do. As a result, pytest will search for fixtures stepping
out through scopes as explained previously, only reaching fixtures defined in plugins last.
For example, given the following file structure:
tests/
__init__.py
conftest.py
# content of tests/conftest.py
import pytest
@pytest.fixture
def order():
return []
subpackage/
__init__.py
conftest.py
# content of tests/subpackage/conftest.py
import pytest
@pytest.fixture(autouse=True)
def mid(order, b_fix):
order.append("mid subpackage")
test_subpackage.py
# content of tests/subpackage/test_subpackage.py
import pytest
@pytest.fixture
def inner(order, mid, a_fix):
order.append("inner subpackage")
If plugin_a is installed and provides the fixture a_fix, and plugin_b is installed and provides the fixture
b_fix, then this is what the test’s search for fixtures would look like:
pytest will only search for a_fix and b_fix in the plugins after searching for them first in the scopes inside tests/.
If you want to make test data from files available to your tests, a good way to do this is to load the data in a fixture for use by your tests. This makes use of the automatic caching mechanisms of pytest.
Another good approach is to add the data files to the tests folder. There are also community plugins available to help manage this aspect of testing, e.g. pytest-datadir and pytest-datafiles.
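As a small sketch of the first approach (the data directory, file name, and key are assumptions for illustration), a fixture can resolve the file relative to the test package and parse it once per test:
# content of conftest.py (sketch)
import json
import pathlib

import pytest

DATA_DIR = pathlib.Path(__file__).parent / "data"

@pytest.fixture
def payload():
    # parse the JSON file that ships alongside the tests
    with open(DATA_DIR / "payload.json") as f:
        return json.load(f)

# content of test_payload.py (sketch)
def test_payload_has_user(payload):
    assert "user" in payload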
When pytest wants to execute a test, once it knows what fixtures will be executed, it has to figure out the order they’ll
be executed in. To do this, it considers 3 factors:
1. scope
2. dependencies
3. autouse
Names of fixtures or tests, where they’re defined, the order they’re defined in, and the order fixtures are requested in
have no bearing on execution order beyond coincidence. While pytest will try to make sure coincidences like these
stay consistent from run to run, it’s not something that should be depended on. If you want to control the order, it’s
safest to rely on these 3 things and make sure dependencies are clearly established.
Within a function request for fixtures, those of higher-scopes (such as session) are executed before lower-scoped
fixtures (such as function or class).
Here’s an example:
import pytest
@pytest.fixture(scope="session")
def order():
return []
@pytest.fixture
def func(order):
order.append("function")
@pytest.fixture(scope="class")
def cls(order):
order.append("class")
@pytest.fixture(scope="module")
def mod(order):
order.append("module")
@pytest.fixture(scope="package")
def pack(order):
    order.append("package")
@pytest.fixture(scope="session")
def sess(order):
order.append("session")
class TestClass:
def test_order(self, func, cls, mod, pack, sess, order):
assert order == ["session", "package", "module", "class", "function"]
The test will pass because the larger-scoped fixtures execute first.
The order breaks down to this:
When a fixture requests another fixture, the other fixture is executed first. So if fixture a requests fixture b, fixture b
will execute first, because a depends on b and can’t operate without it. Even if a doesn’t need the result of b, it can
still request b if it needs to make sure it is executed after b.
For example:
import pytest
@pytest.fixture
def order():
return []
@pytest.fixture
def a(order):
order.append("a")
@pytest.fixture
def b(a, order):
order.append("b")
@pytest.fixture
def c(a, b, order):
order.append("c")
@pytest.fixture
def d(c, b, order):
order.append("d")
@pytest.fixture
def e(d, order):
    order.append("e")
@pytest.fixture
def f(e, order):
order.append("f")
@pytest.fixture
def g(f, c, order):
order.append("g")
If we map out what depends on what, we get something that looks like this:
The rules provided by each fixture (as to what fixture(s) each one has to come after) are comprehensive enough that it
can be flattened to this:
Enough information has to be provided through these requests in order for pytest to be able to figure out a clear,
linear chain of dependencies, and as a result, an order of operations for a given test. If there’s any ambiguity, and the
order of operations can be interpreted more than one way, you should assume pytest could go with any one of those
interpretations at any point.
For example, if d didn't request c, i.e. the graph would look like this:
Because nothing requested c other than g, and g also requests f, it's now unclear if c should go before/after f, e, or d. The only rules that were set for c are that it must execute after b and before g.
pytest doesn't know where c should go in this case, so it should be assumed that it could go anywhere between g and b.
This isn't necessarily bad, but it's something to keep in mind. If the order they execute in could affect the behavior a test is targeting, or could otherwise influence the result of a test, then the order should be defined explicitly in a way that allows pytest to linearize/"flatten" that order.
Autouse fixtures are assumed to apply to every test that could reference them, so they are executed before other fixtures
in that scope. Fixtures that are requested by autouse fixtures effectively become autouse fixtures themselves for the
tests that the real autouse fixture applies to.
So if fixture a is autouse and fixture b is not, but fixture a requests fixture b, then fixture b will effectively be an
autouse fixture as well, but only for the tests that a applies to.
In the last example, the graph became unclear if d didn’t request c. But if c was autouse, then b and a would
effectively also be autouse because c depends on them. As a result, they would all be shifted above non-autouse
fixtures within that scope.
So if the test file looked like this:
import pytest
@pytest.fixture
def order():
return []
@pytest.fixture
def a(order):
order.append("a")
@pytest.fixture
def b(a, order):
order.append("b")
@pytest.fixture(autouse=True)
def c(b, order):
order.append("c")
@pytest.fixture
def d(b, order):
order.append("d")
@pytest.fixture
def e(d, order):
order.append("e")
@pytest.fixture
def f(e, order):
order.append("f")
@pytest.fixture
def g(f, c, order):
order.append("g")
Because c can now be put above d in the graph, pytest can once again linearize the graph to this:
In this example, c makes b and a effectively autouse fixtures as well.
Be careful with autouse, though, as an autouse fixture will automatically execute for every test that can reach it, even
if they don’t request it. For example, consider this file:
import pytest
@pytest.fixture(scope="class")
def order():
return []
@pytest.fixture(scope="class", autouse=True)
def c1(order):
order.append("c1")
@pytest.fixture(scope="class")
def c2(order):
order.append("c2")
@pytest.fixture(scope="class")
def c3(order, c1):
order.append("c3")
class TestClassWithC1Request:
def test_order(self, order, c1, c3):
assert order == ["c1", "c3"]
class TestClassWithoutC1Request:
def test_order(self, order, c2):
assert order == ["c1", "c2"]
Even though nothing in TestClassWithoutC1Request is requesting c1, it still is executed for the tests inside
it anyway:
But just because one autouse fixture requested a non-autouse fixture, that doesn't mean the non-autouse fixture becomes an autouse fixture for all contexts that it can apply to. It only effectively becomes an autouse fixture for the contexts the real autouse fixture (the one that requested the non-autouse fixture) can apply to.
For example, take a look at this test file:
import pytest
@pytest.fixture
def order():
return []
@pytest.fixture
def c1(order):
order.append("c1")
@pytest.fixture
def c2(order):
order.append("c2")
class TestClassWithAutouse:
    @pytest.fixture(autouse=True)
    def c3(self, order, c2):
        order.append("c3")

    def test_req(self, order, c1):
        assert order == ["c2", "c3", "c1"]

    def test_no_req(self, order):
        assert order == ["c2", "c3"]

class TestClassWithoutAutouse:
    def test_req(self, order, c1):
        assert order == ["c1"]

    def test_no_req(self, order):
        assert order == []
For test_req and test_no_req inside TestClassWithAutouse, c3 effectively makes c2 an autouse fix-
ture, which is why c2 and c3 are executed for both tests, despite not being requested, and why c2 and c3 are executed
before c1 for test_req.
If this made c2 an actual autouse fixture, then c2 would also execute for the tests inside
TestClassWithoutAutouse, since they can reference c2 if they wanted to. But it doesn’t, because from
the perspective of the TestClassWithoutAutouse tests, c2 isn’t an autouse fixture, since they can’t see c3.
Sometimes you may want to run multiple asserts after doing all that setup, which makes sense as, in more complex
systems, a single action can kick off multiple behaviors. pytest has a convenient way of handling this and it combines
a bunch of what we’ve gone over so far.
All that's needed is stepping up to a larger scope, then having the act step defined as an autouse fixture, and finally, making sure all the fixtures are targeting that higher-level scope.
Let’s pull an example from above, and tweak it a bit. Let’s say that in addition to checking for a welcome message in
the header, we also want to check for a sign out button, and a link to the user’s profile.
Let’s take a look at how we can structure that so we can run multiple asserts without having to repeat all those steps
again.
Note: For this example, certain fixtures (i.e. base_url and admin_credentials) are implied to exist else-
where. So for now, let’s assume they exist, and we’re just not looking at them.
# contents of tests/end_to_end/test_login.py
from uuid import uuid4
from urllib.parse import urljoin

from selenium.webdriver import Chrome
import pytest
@pytest.fixture(scope="class")
def admin_client(base_url, admin_credentials):
return AdminApiClient(base_url, **admin_credentials)
@pytest.fixture(scope="class")
def user(admin_client):
_user = User(name="Susan", username=f"testuser-{uuid4()}", password="P4$$word")
admin_client.create_user(_user)
yield _user
admin_client.delete_user(_user)
@pytest.fixture(scope="class")
def driver():
_driver = Chrome()
yield _driver
_driver.quit()
@pytest.fixture(scope="class")
def landing_page(driver, login):
return LandingPage(driver)
class TestLandingPageSuccess:
    @pytest.fixture(scope="class", autouse=True)
    def login(self, driver, base_url, user):
        driver.get(urljoin(base_url, "/login"))
        page = LoginPage(driver)
        page.login(user)

    def test_name_in_header(self, landing_page, user):
        assert landing_page.header == f"Welcome, {user.name}!"

    def test_sign_out_button(self, landing_page):
        assert landing_page.sign_out_button.is_displayed()

    def test_profile_link(self, landing_page, user):
        assert landing_page.profile_link.get_attribute("href") == f"/profile?id={user.username}"
Notice that the methods are only referencing self in the signature as a formality. No state is tied to the actual test
class as it might be in the unittest.TestCase framework. Everything is managed by the pytest fixture system.
Each method only has to request the fixtures that it actually needs without worrying about order. This is because the act fixture is an autouse fixture, and it made sure all the other fixtures executed before it. There are no more changes of state that need to take place, so the tests are free to make as many non-state-changing queries as they want without risking stepping on the toes of the other tests.
The login fixture is defined inside the class here because not every test in the module will be expecting a successful login, and the act may need to be handled a little differently for another test class. For example, if we wanted to write another test scenario around submitting bad credentials, we could handle it by adding something like this to the test file:
class TestLandingPageBadCredentials:
@pytest.fixture(scope="class")
def faux_user(self, user):
_user = deepcopy(user)
_user.password = "badpass"
return _user
Fixture functions can accept the request object to introspect the “requesting” test function, class or module context.
Further extending the previous smtp_connection fixture example, let’s read an optional server URL from the test
module which uses our fixture:
# content of conftest.py
import pytest
import smtplib
@pytest.fixture(scope="module")
def smtp_connection(request):
server = getattr(request.module, "smtpserver", "smtp.gmail.com")
smtp_connection = smtplib.SMTP(server, 587, timeout=5)
yield smtp_connection
print("finalizing {} ({})".format(smtp_connection, server))
smtp_connection.close()
We use the request.module attribute to optionally obtain an smtpserver attribute from the test module. If we
just execute again, nothing much has changed:
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
Let’s quickly create another test module that actually sets the server URL in its module namespace:
# content of test_anothersmtp.py

smtpserver = "mail.python.org"  # will be read by smtp fixture

def test_showhelo(smtp_connection):
    assert 0, smtp_connection.helo()
Running it:
voila! The smtp_connection fixture function picked up our mail server name from the module namespace.
Using the request object, a fixture can also access markers which are applied to a test function. This can be useful
to pass data into a fixture from a test:
import pytest
@pytest.fixture
def fixt(request):
    marker = request.node.get_closest_marker("fixt_data")
    if marker is None:
        # Handle missing marker in some way...
        data = None
    else:
        data = marker.args[0]

    # Do something with the data
    return data
@pytest.mark.fixt_data(42)
def test_fixt(fixt):
assert fixt == 42
The “factory as fixture” pattern can help in situations where the result of a fixture is needed multiple times in a single
test. Instead of returning data directly, the fixture instead returns a function which generates the data. This function
can then be called multiple times in the test.
Factories can have parameters as needed:
@pytest.fixture
def make_customer_record():
def _make_customer_record(name):
return {"name": name, "orders": []}
return _make_customer_record
def test_customer_records(make_customer_record):
customer_1 = make_customer_record("Lisa")
customer_2 = make_customer_record("Mike")
customer_3 = make_customer_record("Meredith")
If the data created by the factory requires managing, the fixture can take care of that:
@pytest.fixture
def make_customer_record():
    created_records = []

    def _make_customer_record(name):
        record = models.Customer(name=name, orders=[])
        created_records.append(record)
        return record

    yield _make_customer_record

    for record in created_records:
        record.destroy()
def test_customer_records(make_customer_record):
customer_1 = make_customer_record("Lisa")
customer_2 = make_customer_record("Mike")
customer_3 = make_customer_record("Meredith")
Fixture functions can be parametrized in which case they will be called multiple times, each time executing the set
of dependent tests, i.e. the tests that depend on this fixture. Test functions usually do not need to be aware of their
re-running. Fixture parametrization helps to write exhaustive functional tests for components which themselves can
be configured in multiple ways.
Extending the previous example, we can flag the fixture to create two smtp_connection fixture instances which
will cause all tests using the fixture to run twice. The fixture function gets access to each parameter through the special
request object:
# content of conftest.py
import pytest
import smtplib

@pytest.fixture(scope="module", params=["smtp.gmail.com", "mail.python.org"])
def smtp_connection(request):
    smtp_connection = smtplib.SMTP(request.param, 587, timeout=5)
    yield smtp_connection
    print("finalizing {}".format(smtp_connection))
    smtp_connection.close()
The main change is the declaration of params with @pytest.fixture, a list of values for each of which the
fixture function will execute and can access a value via request.param. No test function code needs to change.
So let’s just do another run:
$ pytest -q test_module.py
FFFF [100%]
================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________
def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
> assert 0 # for demo purposes
E assert 0
test_module.py:7: AssertionError
________________________ test_noop[smtp.gmail.com] _________________________
def test_noop(smtp_connection):
response, msg = smtp_connection.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:13: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
def test_ehlo(smtp_connection):
response, msg = smtp_connection.ehlo()
assert response == 250
> assert b"smtp.gmail.com" in msg
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8\nCHUNKING'
test_module.py:6: AssertionError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
________________________ test_noop[mail.python.org] ________________________
def test_noop(smtp_connection):
response, msg = smtp_connection.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0
test_module.py:13: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
========================= short test summary info ==========================
FAILED test_module.py::test_ehlo[smtp.gmail.com] - assert 0
FAILED test_module.py::test_noop[smtp.gmail.com] - assert 0
FAILED test_module.py::test_ehlo[mail.python.org] - AssertionError: asser...
FAILED test_module.py::test_noop[mail.python.org] - assert 0
4 failed in 0.12s
We see that our two test functions each ran twice, against the different smtp_connection instances. Note also,
that with the mail.python.org connection the second test fails in test_ehlo because a different server string
is expected than what arrived.
pytest will build a string that is the test ID for each fixture value in a parametrized fixture, e.g. test_ehlo[smtp.gmail.com] and test_ehlo[mail.python.org] in the above examples. These IDs can be used with -k
to select specific cases to run, and they will also identify the specific case when one is failing. Running pytest with
--collect-only will show the generated IDs.
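For instance, assuming the parametrized smtp_connection example above, a substring of a generated ID is enough for -k to select only the matching cases (output omitted):
$ pytest -q -k gmail test_module.py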
Numbers, strings, booleans and None will have their usual string representation used in the test ID. For other objects,
pytest will make a string based on the argument name. It is possible to customise the string used in a test ID for a
certain fixture value by using the ids keyword argument:
# content of test_ids.py
import pytest

@pytest.fixture(params=[0, 1], ids=["spam", "ham"])
def a(request):
    return request.param

def test_a(a):
    pass
def idfn(fixture_value):
    if fixture_value == 0:
        return "eggs"
    else:
        return None

@pytest.fixture(params=[0, 1], ids=idfn)
def b(request):
    return request.param

def test_b(b):
    pass
The above shows how ids can be either a list of strings to use or a function which will be called with the fixture value
and then has to return a string to use. In the latter case if the function returns None then pytest’s auto-generated ID
will be used.
Running the above tests results in the following test IDs being used:
$ pytest --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 10 items
<Module test_anothersmtp.py>
<Function test_showhelo[smtp.gmail.com]>
<Function test_showhelo[mail.python.org]>
<Module test_ids.py>
<Function test_a[spam]>
<Function test_a[ham]>
<Function test_b[eggs]>
<Function test_b[1]>
<Module test_module.py>
<Function test_ehlo[smtp.gmail.com]>
<Function test_noop[smtp.gmail.com]>
<Function test_ehlo[mail.python.org]>
<Function test_noop[mail.python.org]>
pytest.param() can be used to apply marks to the value sets of parametrized fixtures in the same way that they can be used with @pytest.mark.parametrize.
Example:
# content of test_fixture_marks.py
import pytest

@pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
def data_set(request):
    return request.param

def test_data(data_set):
    pass
Running this test will skip the invocation of data_set with value 2:
$ pytest test_fixture_marks.py -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 3 items
In addition to using fixtures in test functions, fixture functions can use other fixtures themselves. This contributes
to a modular design of your fixtures and allows re-use of framework-specific fixtures across many projects. As a
simple example, we can extend the previous example and instantiate an object app where we stick the already defined
smtp_connection resource into it:
# content of test_appsetup.py
import pytest
class App:
def __init__(self, smtp_connection):
self.smtp_connection = smtp_connection
@pytest.fixture(scope="module")
def app(smtp_connection):
return App(smtp_connection)
def test_smtp_connection_exists(app):
assert app.smtp_connection
Here we declare an app fixture which receives the previously defined smtp_connection fixture and instantiates
an App object with it. Let’s run it:
$ pytest -v test_appsetup.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 2 items
Due to the parametrization of smtp_connection, the test will run twice with two different App instances and
respective smtp servers. There is no need for the app fixture to be aware of the smtp_connection parametrization
because pytest will fully analyse the fixture dependency graph.
Note that the app fixture has a scope of module and uses a module-scoped smtp_connection fixture. The
example would still work if smtp_connection was cached on a session scope: it is fine for fixtures to use
“broader” scoped fixtures but not the other way round: A session-scoped fixture could not use a module-scoped one
in a meaningful way.
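As a hedged sketch of the disallowed direction (fixture names made up), pytest fails the setup of such a test with a ScopeMismatch error instead of silently reusing the narrower fixture:
import pytest

@pytest.fixture(scope="module")
def db_connection():
    return object()  # stand-in for a per-module resource

@pytest.fixture(scope="session")
def broken(db_connection):  # session scope requesting a module-scoped fixture
    return db_connection

def test_scope_mismatch(broken):
    # setting up 'broken' errors out with a ScopeMismatch naming both fixtures
    pass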
pytest minimizes the number of active fixtures during test runs. If you have a parametrized fixture, then all the tests
using it will first execute with one instance and then finalizers are called before the next fixture instance is created.
Among other things, this eases testing of applications which create and use global state.
The following example uses two parametrized fixtures, one of which is scoped on a per-module basis, and all the
functions perform print calls to show the setup/teardown flow:
# content of test_module.py
import pytest

@pytest.fixture(scope="module", params=["mod1", "mod2"])
def modarg(request):
    param = request.param
    print("  SETUP modarg", param)
    yield param
    print("  TEARDOWN modarg", param)

@pytest.fixture(scope="function", params=[1, 2])
def otherarg(request):
    param = request.param
    print("  SETUP otherarg", param)
    yield param
    print("  TEARDOWN otherarg", param)

def test_0(otherarg):
    print("  RUN test0 with otherarg", otherarg)

def test_1(modarg):
    print("  RUN test1 with modarg", modarg)

def test_2(otherarg, modarg):
    print("  RUN test2 with otherarg {} and modarg {}".format(otherarg, modarg))
Let’s run the tests in verbose mode and with looking at the print-output:
$ pytest -v -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
You can see that the parametrized module-scoped modarg resource caused an ordering of test execution that leads to the fewest possible "active" resources. The finalizer for the mod1 parametrized resource was executed before the mod2 resource was set up.
In particular notice that test_0 is completely independent and finishes first. Then test_1 is executed with mod1, then test_2 with mod1, then test_1 with mod2 and finally test_2 with mod2.
The otherarg parametrized resource (having function scope) was set up before and torn down after every test that used it.
Sometimes test functions do not directly need access to a fixture object. For example, tests may require to operate
with an empty directory as the current working directory but otherwise do not care for the concrete directory. Here is
how you can use the standard tempfile and pytest fixtures to achieve it. We separate the creation of the fixture into a
conftest.py file:
# content of conftest.py
import os
import shutil
import tempfile
import pytest
@pytest.fixture
def cleandir():
old_cwd = os.getcwd()
newpath = tempfile.mkdtemp()
os.chdir(newpath)
yield
os.chdir(old_cwd)
shutil.rmtree(newpath)
# content of test_setenv.py
import os
import pytest
@pytest.mark.usefixtures("cleandir")
class TestDirectoryInit:
def test_cwd_starts_empty(self):
assert os.listdir(os.getcwd()) == []
with open("myfile", "w") as f:
f.write("hello")
def test_cwd_again_starts_empty(self):
assert os.listdir(os.getcwd()) == []
Due to the usefixtures marker, the cleandir fixture will be required for the execution of each test method, just
as if you specified a “cleandir” function argument to each of them. Let’s run it to verify our fixture is activated and the
tests pass:
$ pytest -q
.. [100%]
2 passed in 0.12s
You can specify multiple fixtures like this:
@pytest.mark.usefixtures("cleandir", "anotherfixture")
def test():
...
and you may specify fixture usage at the test module level using pytestmark:
pytestmark = pytest.mark.usefixtures("cleandir")
It is also possible to put fixtures required by all tests in your project into an ini-file:
# content of pytest.ini
[pytest]
usefixtures = cleandir
Warning: Note this mark has no effect in fixture functions. For example, this will not work as expected:
@pytest.mark.usefixtures("my_other_fixture")
@pytest.fixture
def my_fixture_that_sadly_wont_use_my_other_fixture():
...
Currently this will not generate any error or warning, but this is intended to be handled by #3664.
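If one fixture needs another, the supported approach is simply to request it as a parameter, e.g. (reusing the names from the warning above):
@pytest.fixture
def my_fixture_that_uses_my_other_fixture(my_other_fixture):
    ...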
5.20 Override fixtures on various levels
In a relatively large test suite, you most likely need to override a global or root fixture with a locally defined one, keeping the test code readable and maintainable.
5.20.1 Override a fixture on a folder (conftest) level
Given the tests file structure is:
tests/
__init__.py
conftest.py
# content of tests/conftest.py
import pytest
@pytest.fixture
def username():
return 'username'
test_something.py
# content of tests/test_something.py
def test_username(username):
assert username == 'username'
subfolder/
__init__.py
conftest.py
# content of tests/subfolder/conftest.py
import pytest
@pytest.fixture
def username(username):
return 'overridden-' + username
test_something.py
# content of tests/subfolder/test_something.py
def test_username(username):
assert username == 'overridden-username'
As you can see, a fixture with the same name can be overridden for a certain test folder level. Note that the base or super fixture can be accessed from the overriding fixture easily, as done in the example above.
5.20.2 Override a fixture on a test module level
Given the tests file structure is:
tests/
__init__.py
conftest.py
# content of tests/conftest.py
import pytest
@pytest.fixture
def username():
return 'username'
test_something.py
# content of tests/test_something.py
import pytest
@pytest.fixture
def username(username):
return 'overridden-' + username
def test_username(username):
assert username == 'overridden-username'
test_something_else.py
# content of tests/test_something_else.py
import pytest
@pytest.fixture
def username(username):
return 'overridden-else-' + username
def test_username(username):
assert username == 'overridden-else-username'
In the example above, a fixture with the same name can be overridden for a certain test module.
5.20.3 Override a fixture with direct test parametrization
Given the tests file structure is:
tests/
__init__.py
conftest.py
# content of tests/conftest.py
import pytest
@pytest.fixture
def username():
return 'username'
@pytest.fixture
def other_username(username):
return 'other-' + username
test_something.py
# content of tests/test_something.py
import pytest
@pytest.mark.parametrize('username', ['directly-overridden-username'])
def test_username(username):
assert username == 'directly-overridden-username'
@pytest.mark.parametrize('username', ['directly-overridden-username-other'])
def test_username_other(other_username):
assert other_username == 'other-directly-overridden-username-other'
In the example above, a fixture value is overridden by the test parameter value. Note that the value of the fixture can
be overridden this way even if the test doesn’t use it directly (doesn’t mention it in the function prototype).
5.20.4 Override a parametrized fixture with non-parametrized one and vice versa
Given the tests file structure is:
tests/
__init__.py
conftest.py
# content of tests/conftest.py
import pytest
@pytest.fixture(params=['one', 'two', 'three'])
def parametrized_username(request):
    return request.param

@pytest.fixture
def non_parametrized_username(request):
    return 'username'
test_something.py
# content of tests/test_something.py
import pytest
@pytest.fixture
def parametrized_username():
    return 'overridden-username'

@pytest.fixture(params=['one', 'two', 'three'])
def non_parametrized_username(request):
    return request.param

def test_username(parametrized_username):
    assert parametrized_username == 'overridden-username'

def test_parametrized_username(non_parametrized_username):
    assert non_parametrized_username in ['one', 'two', 'three']
test_something_else.py
# content of tests/test_something_else.py
def test_username(parametrized_username):
assert parametrized_username in ['one', 'two', 'three']
def test_username(non_parametrized_username):
assert non_parametrized_username == 'username'
In the example above, a parametrized fixture is overridden with a non-parametrized version, and a non-parametrized fixture is overridden with a parametrized version for a certain test module. The same obviously applies to the test folder level as well.
Usually projects that provide pytest support will use entry points, so just installing those projects into an environment
will make those fixtures available for use.
In case you want to use fixtures from a project that does not use entry points, you can define pytest_plugins in
your top conftest.py file to register that module as a plugin.
Suppose you have some fixtures in mylibrary.fixtures and you want to reuse them in your app/tests directory.
All you need to do is to define pytest_plugins in app/tests/conftest.py pointing to that module.
pytest_plugins = "mylibrary.fixtures"
This effectively registers mylibrary.fixtures as a plugin, making all its fixtures and hooks available to tests in
app/tests.
Note: Sometimes users will import fixtures from other projects for use; however, this is not recommended: importing fixtures into a module will register them in pytest as defined in that module.
This has minor consequences, such as appearing multiple times in pytest --help, but it is not recommended
because this behavior might change/stop working in future versions.
SIX
MARKING TEST FUNCTIONS WITH ATTRIBUTES
By using the pytest.mark helper you can easily set metadata on your test functions. You can find the full list of
builtin markers in the API Reference. Or you can list all the markers, including builtin and custom, using the CLI -
pytest --markers.
Here are some of the builtin markers:
• usefixtures - use fixtures on a test function or class
• filterwarnings - filter certain warnings of a test function
• skip - always skip a test function
• skipif - skip a test function if a certain condition is met
• xfail - produce an “expected failure” outcome if a certain condition is met
• parametrize - perform multiple calls to the same test function.
It’s easy to create custom markers or to apply markers to whole test classes or modules. Those markers can be used
by plugins, and also are commonly used to select tests on the command-line with the -m option.
See Working with custom markers for examples which also serve as documentation.
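For instance, a minimal sketch (the test names are made up) of applying the slow marker that the configuration below registers, so that pytest -m "not slow" deselects it:
# content of test_speed.py (sketch)
import pytest

@pytest.mark.slow
def test_full_reindex():
    ...  # long-running scenario, deselected by -m "not slow"

def test_quick_lookup():
    ...  # always runs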
You can register custom marks in your pytest.ini file like this:
[pytest]
markers =
slow: marks tests as slow (deselect with '-m "not slow"')
serial
or in your pyproject.toml file like this:
[tool.pytest.ini_options]
markers = [
"slow: marks tests as slow (deselect with '-m \"not slow\"')",
"serial",
]
Note that everything past the : after the mark name is an optional description.
Alternatively, you can register new markers programmatically in a pytest_configure hook:
def pytest_configure(config):
config.addinivalue_line(
"markers", "env(name): mark test to run only on named environment"
)
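A hedged sketch of applying that env marker to a test (the environment name is an assumption; any logic that reads the marker, e.g. via item.get_closest_marker("env") in a hook, is up to the plugin or conftest that registers it):
import pytest

@pytest.mark.env("staging")
def test_staging_only_feature():
    ...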
Registered marks appear in pytest’s help text and do not emit warnings (see the next section). It is recommended that
third-party plugins always register their markers.
Unregistered marks applied with the @pytest.mark.name_of_the_mark decorator will always emit a warning
in order to avoid silently doing something surprising due to mistyped names. As described in the previous section,
you can disable the warning for custom marks by registering them in your pytest.ini file or using a custom
pytest_configure hook.
When the --strict-markers command-line flag is passed, any unknown marks applied with the @pytest.
mark.name_of_the_mark decorator will trigger an error. You can enforce this validation in your project by
adding --strict-markers to addopts:
[pytest]
addopts = --strict-markers
markers =
slow: marks tests as slow (deselect with '-m "not slow"')
serial
SEVEN
MONKEYPATCHING/MOCKING MODULES AND ENVIRONMENTS
Sometimes tests need to invoke functionality which depends on global settings or which invokes code which cannot be
easily tested such as network access. The monkeypatch fixture helps you to safely set/delete an attribute, dictionary
item or environment variable, or to modify sys.path for importing.
The monkeypatch fixture provides these helper methods for safely patching and mocking functionality in tests:
• monkeypatch.setattr(obj, name, value, raising=True)
• monkeypatch.delattr(obj, name, raising=True)
• monkeypatch.setitem(mapping, name, value)
• monkeypatch.delitem(obj, name, raising=True)
• monkeypatch.setenv(name, value, prepend=False)
• monkeypatch.delenv(name, raising=True)
• monkeypatch.syspath_prepend(path)
• monkeypatch.chdir(path)
All modifications will be undone after the requesting test function or fixture has finished. The raising parameter
determines if a KeyError or AttributeError will be raised if the target of the set/deletion operation does not
exist.
Consider the following scenarios:
1. Modifying the behavior of a function or the property of a class for a test e.g. there is an API call or database
connection you will not make for a test but you know what the expected output should be. Use monkeypatch.
setattr to patch the function or property with your desired testing behavior. This can include your own functions.
Use monkeypatch.delattr to remove the function or property for the test.
2. Modifying the values of dictionaries e.g. you have a global configuration that you want to modify for certain test
cases. Use monkeypatch.setitem to patch the dictionary for the test. monkeypatch.delitem can be used
to remove items.
3. Modifying environment variables for a test e.g. to test program behavior if an environment variable is missing, or
to set multiple values to a known variable. monkeypatch.setenv and monkeypatch.delenv can be used for
these patches.
4. Use monkeypatch.setenv("PATH", value, prepend=os.pathsep) to modify $PATH, and
monkeypatch.chdir to change the context of the current working directory during a test.
5. Use monkeypatch.syspath_prepend to modify sys.path which will also call pkg_resources.
fixup_namespace_packages and importlib.invalidate_caches().
See the monkeypatch blog post for some introduction material and a discussion of its motivation.
Consider a scenario where you are working with user directories. In the context of testing, you do not want your test
to depend on the running user. monkeypatch can be used to patch functions dependent on the user to always return
a specific value.
In this example, monkeypatch.setattr is used to patch Path.home so that the known testing path Path("/
abc") is always used when the test is run. This removes any dependency on the running user for testing purposes.
monkeypatch.setattr must be called before the function which will use the patched function is called. After
the test function finishes the Path.home modification will be undone.
from pathlib import Path

def getssh():
    """Simple function to return expanded homedir ssh path."""
    return Path.home() / ".ssh"

def test_getssh(monkeypatch):
    # mocked return function to replace Path.home
    # always return '/abc'
    def mockreturn():
        return Path("/abc")

    # apply the monkeypatch to replace Path.home with mockreturn
    monkeypatch.setattr(Path, "home", mockreturn)

    # getssh() now uses mockreturn in place of Path.home for this test
    x = getssh()
    assert x == Path("/abc/.ssh")
monkeypatch.setattr can be used in conjunction with classes to mock returned objects from functions instead
of values. Imagine a simple function to take an API url and return the json response.
def get_json(url):
"""Takes a URL, and returns the JSON."""
r = requests.get(url)
return r.json()
We need to mock r, the returned response object for testing purposes. The mock of r needs a .json() method
which returns a dictionary. This can be done in our test file by defining a class to represent r.
# import requests for the purposes of monkeypatching
import requests

# our app.py that includes the get_json() function
import app

# custom class to be the mock return value
# will override the requests.Response returned from requests.get
class MockResponse:
    # mock json() method always returns a specific testing dictionary
    @staticmethod
    def json():
        return {"mock_key": "mock_response"}

def test_get_json(monkeypatch):
    # Any arguments may be passed and mock_get() will always return our
    # mocked object, which only has the .json() method.
    def mock_get(*args, **kwargs):
        return MockResponse()

    # apply the monkeypatch for requests.get to mock_get
    monkeypatch.setattr(requests, "get", mock_get)

    # app.get_json, which contains requests.get, uses the monkeypatch
    result = app.get_json("https://fakeurl")
    assert result["mock_key"] == "mock_response"
monkeypatch applies the mock for requests.get with our mock_get function. The mock_get function
returns an instance of the MockResponse class, which has a json() method defined to return a known testing
dictionary and does not require any outside API connection.
You can build the MockResponse class with the appropriate degree of complexity for the scenario you are testing.
For instance, it could include an ok property that always returns True, or return different values from the json()
mocked method based on input strings.
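For instance, a slightly richer sketch (the extra attributes are assumptions about what the code under test might need) could mimic requests.Response.ok and carry a configurable status code; the simpler single-method MockResponse above is the one reused in the fixture below:
class RichMockResponse:
    def __init__(self, status_code=200):
        self.status_code = status_code

    @property
    def ok(self):
        # requests.Response.ok is truthy for status codes below 400
        return self.status_code < 400

    @staticmethod
    def json():
        return {"mock_key": "mock_response"}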
This mock can be shared across tests using a fixture:
# monkeypatched requests.get moved to a fixture
@pytest.fixture
def mock_response(monkeypatch):
    """Requests.get() mocked to return {'mock_key': 'mock_response'}."""

    def mock_get(*args, **kwargs):
        return MockResponse()

    monkeypatch.setattr(requests, "get", mock_get)

# notice our test uses the custom fixture instead of monkeypatch directly
def test_get_json(mock_response):
    result = app.get_json("https://fakeurl")
    assert result["mock_key"] == "mock_response"
Furthermore, if the mock was designed to be applied to all tests, the fixture could be moved to a conftest.py file and use the autouse=True option.
If you want to prevent the “requests” library from performing http requests in all your tests, you can do:
# contents of conftest.py
import pytest
@pytest.fixture(autouse=True)
def no_requests(monkeypatch):
"""Remove requests.sessions.Session.request for all tests."""
monkeypatch.delattr("requests.sessions.Session.request")
This autouse fixture will be executed for each test function and it will delete the method requests.sessions.Session.request so that any attempts within tests to create http requests will fail.
Note: Be advised that it is not recommended to patch builtin functions such as open, compile, etc., because it might
break pytest’s internals. If that’s unavoidable, passing --tb=native, --assert=plain and --capture=no
might help although there’s no guarantee.
Note: Mind that patching stdlib functions and some third-party libraries used by pytest might break pytest itself,
therefore in those cases it is recommended to use MonkeyPatch.context() to limit the patching to the block
you want tested:
import functools
def test_partial(monkeypatch):
with monkeypatch.context() as m:
m.setattr(functools, "partial", 3)
assert functools.partial == 3
If you are working with environment variables you often need to safely change the values or delete them from the system for testing purposes. monkeypatch provides a mechanism to do this using the setenv and delenv methods. Our example code to test:
# contents of our original code file e.g. code.py
import os
def get_os_user_lower():
"""Simple retrieval function.
Returns lowercase USER or raises OSError."""
username = os.getenv("USER")
if username is None:
raise OSError("USER environment is not set.")
return username.lower()
There are two potential paths. First, the USER environment variable is set to a value. Second, the USER environ-
ment variable does not exist. Using monkeypatch both paths can be safely tested without impacting the running
environment:
# contents of our test file e.g. test_code.py
import pytest
def test_upper_to_lower(monkeypatch):
"""Set the USER env var to assert the behavior."""
monkeypatch.setenv("USER", "TestingUser")
assert get_os_user_lower() == "testinguser"
def test_raise_exception(monkeypatch):
"""Remove the USER env var and assert OSError is raised."""
monkeypatch.delenv("USER", raising=False)
with pytest.raises(OSError):
_ = get_os_user_lower()
This behavior can be moved into fixture structures and shared across tests:
# contents of our test file e.g. test_code.py
import pytest
@pytest.fixture
def mock_env_user(monkeypatch):
monkeypatch.setenv("USER", "TestingUser")
@pytest.fixture
def mock_env_missing(monkeypatch):
monkeypatch.delenv("USER", raising=False)
def test_upper_to_lower(mock_env_user):
    assert get_os_user_lower() == "testinguser"

def test_raise_exception(mock_env_missing):
    with pytest.raises(OSError):
        _ = get_os_user_lower()
monkeypatch.setitem can be used to safely set the values of dictionaries to specific values during tests. Take
this simplified connection string example:
# contents of app.py to generate a simple connection string
DEFAULT_CONFIG = {"user": "user1", "database": "db1"}
def create_connection_string(config=None):
"""Creates a connection string from input or defaults."""
config = config or DEFAULT_CONFIG
return f"User Id={config['user']}; Location={config['database']};"
For testing purposes we can patch the DEFAULT_CONFIG dictionary to specific values.
# contents of test_app.py
# app.py with the connection string function (prior code block)
import pytest

import app

def test_connection(monkeypatch):
    # patch DEFAULT_CONFIG to specific testing values for this test only
    monkeypatch.setitem(app.DEFAULT_CONFIG, "user", "test_user")
    monkeypatch.setitem(app.DEFAULT_CONFIG, "database", "test_db")

    # expected result based on the mocked values
    expected = "User Id=test_user; Location=test_db;"

    result = app.create_connection_string()
    assert result == expected

def test_missing_user(monkeypatch):
    # patch DEFAULT_CONFIG to be missing the 'user' key
    monkeypatch.delitem(app.DEFAULT_CONFIG, "user", raising=False)

    # KeyError expected, because the default config now lacks 'user'
    with pytest.raises(KeyError):
        _ = app.create_connection_string()
The modularity of fixtures gives you the flexibility to define separate fixtures for each potential mock and reference
them in the needed tests.
# contents of test_app.py
import pytest

import app
@pytest.fixture
def mock_test_database(monkeypatch):
"""Set the DEFAULT_CONFIG database to test_db."""
monkeypatch.setitem(app.DEFAULT_CONFIG, "database", "test_db")
@pytest.fixture
def mock_missing_default_user(monkeypatch):
"""Remove the user key from DEFAULT_CONFIG"""
monkeypatch.delitem(app.DEFAULT_CONFIG, "user", raising=False)
# tests reference only the fixture mocks that are needed
def test_connection(mock_test_database):
    expected = "User Id=user1; Location=test_db;"

    result = app.create_connection_string()
    assert result == expected
def test_missing_user(mock_missing_default_user):
with pytest.raises(KeyError):
_ = app.create_connection_string()
EIGHT
TEMPORARY DIRECTORIES AND FILES
You can use the tmp_path fixture which will provide a temporary directory unique to the test invocation, created in
the base temporary directory.
tmp_path is a pathlib.Path object. Here is an example test usage:
# content of test_tmp_path.py
CONTENT = "content"
def test_create_file(tmp_path):
d = tmp_path / "sub"
d.mkdir()
p = d / "hello.txt"
p.write_text(CONTENT)
assert p.read_text() == CONTENT
assert len(list(tmp_path.iterdir())) == 1
assert 0
Running this would result in a passed test except for the last assert 0 line which we use to look at values:
$ pytest test_tmp_path.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_tmp_path.py F [100%]
tmp_path = PosixPath('PYTEST_TMPDIR/test_create_file0')
def test_create_file(tmp_path):
d = tmp_path / "sub"
d.mkdir()
p = d / "hello.txt"
p.write_text(CONTENT)
assert p.read_text() == CONTENT
assert len(list(tmp_path.iterdir())) == 1
>       assert 0
E       assert 0
test_tmp_path.py:11: AssertionError
========================= short test summary info ==========================
FAILED test_tmp_path.py::test_create_file - assert 0
============================ 1 failed in 0.12s =============================
The tmp_path_factory is a session-scoped fixture which can be used to create arbitrary temporary directories
from any other fixture or test.
It is intended to replace tmpdir_factory, and returns pathlib.Path instances.
See tmp_path_factory API for details.
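A minimal sketch of using it (the fixture and file names are illustrative): a session-scoped fixture can create a directory once and hand back a pathlib.Path for every test to reuse:
# contents of conftest.py (sketch)
import pytest

@pytest.fixture(scope="session")
def shared_datafile(tmp_path_factory):
    # mktemp returns a fresh pathlib.Path below the base temporary directory
    path = tmp_path_factory.mktemp("data") / "shared.txt"
    path.write_text("generated once per session")
    return path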
You can use the tmpdir fixture which will provide a temporary directory unique to the test invocation, created in the
base temporary directory.
tmpdir is a py.path.local object which offers os.path methods and more. Here is an example test usage:
# content of test_tmpdir.py
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
p.write("content")
assert p.read() == "content"
assert len(tmpdir.listdir()) == 1
assert 0
Running this would result in a passed test except for the last assert 0 line which we use to look at values:
$ pytest test_tmpdir.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_tmpdir.py F [100%]
tmpdir = local('PYTEST_TMPDIR/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
p.write("content")
assert p.read() == "content"
assert len(tmpdir.listdir()) == 1
>       assert 0
E       assert 0
test_tmpdir.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_tmpdir.py::test_create_file - assert 0
============================ 1 failed in 0.12s =============================
The tmpdir_factory is a session-scoped fixture which can be used to create arbitrary temporary directories from
any other fixture or test.
For example, suppose your test suite needs a large image on disk, which is generated procedurally. Instead of computing the same image in each test's own tmpdir, you can generate it once per session to save time:
# contents of conftest.py
import pytest
@pytest.fixture(scope="session")
def image_file(tmpdir_factory):
img = compute_expensive_image()
fn = tmpdir_factory.mktemp("data").join("img.png")
img.save(str(fn))
return fn
# contents of test_image.py
def test_histogram(image_file):
img = load_image(image_file)
# compute and test histogram
Temporary directories are by default created as sub-directories of the system temporary directory. The base name
will be pytest-NUM where NUM will be incremented with each test run. Moreover, entries older than 3 temporary
directories will be removed.
You can override the default temporary directory setting like this:
pytest --basetemp=mydir
Warning: The contents of mydir will be completely removed, so make sure to use a directory for that purpose
only.
When distributing tests on the local machine using pytest-xdist, care is taken to automatically configure a
basetemp directory for the sub processes such that all temporary data lands below a single per-test run basetemp
directory.
NINE
CAPTURING OF THE STDOUT/STDERR OUTPUT
During test execution any output sent to stdout and stderr is captured. If a test or a setup method fails, its corresponding captured output will usually be shown along with the failure traceback. (This behavior can be configured by the --show-capture command-line option.)
In addition, stdin is set to a “null” object which will fail on attempts to read from it because it is rarely desired to
wait for interactive input when running automated tests.
By default capturing is done by intercepting writes to low-level file descriptors. This allows capturing output from simple print statements as well as output from a subprocess started by a test.
One primary benefit of the default capturing of stdout/stderr output is that you can use print statements for debugging:
# content of test_module.py
def setup_function(function):
print("setting up", function)
def test_func1():
assert True
def test_func2():
assert False
and running this module will show you precisely the output of the failing function and hide the other one:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
test_module.py .F [100%]
def test_func2():
> assert False
E assert False
test_module.py:12: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
========================= short test summary info ==========================
FAILED test_module.py::test_func2 - assert False
======================= 1 failed, 1 passed in 0.12s ========================
The capsys, capsysbinary, capfd, and capfdbinary fixtures allow access to stdout/stderr output created during test execution. Here is an example test function that performs some output related checks:
import sys

def test_myoutput(capsys):  # or use "capfd" for fd-level
    print("hello")
    sys.stderr.write("world\n")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"
    assert captured.err == "world\n"
    print("next")
    captured = capsys.readouterr()
    assert captured.out == "next\n"
The readouterr() call snapshots the output so far - and capturing will be continued. After the test function finishes
the original streams will be restored. Using capsys this way frees your test from having to care about setting/resetting
output streams and also interacts well with pytest’s own per-test capturing.
If you want to capture on the file descriptor level you can use the capfd fixture, which offers the exact same interface but also allows capturing output from libraries or subprocesses that write directly to operating-system-level output streams (FD1 and FD2).
The return value from readouterr changed to a namedtuple with two attributes, out and err.
If the code under test writes non-textual data, you can capture this using the capsysbinary fixture which instead
returns bytes from the readouterr method.
If the code under test writes non-textual data, you can capture this using the capfdbinary fixture which instead
returns bytes from the readouterr method. The capfdbinary fixture operates on the filedescriptor level.
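As a small sketch, readouterr() on capsysbinary returns bytes rather than str:
def test_binary_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"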
To temporarily disable capture within a test, both capsys and capfd have a disabled() method that can be used
as a context manager, disabling capture inside the with block:
def test_disabling_capturing(capsys):
print("this output is captured")
with capsys.disabled():
print("output not captured, going directly to sys.stdout")
print("this output is also captured")
TEN
WARNINGS CAPTURE
Starting from version 3.1, pytest now automatically catches warnings during test execution and displays them at the
end of the session:
# content of test_show_warnings.py
import warnings
def api_v1():
warnings.warn(UserWarning("api v1, should use functions from v2"))
return 1
def test_one():
assert api_v1() == 1
test_show_warnings.py . [100%]
-- Docs: https://docs.pytest.org/en/stable/warnings.html
======================= 1 passed, 1 warning in 0.12s =======================
The -W flag can be passed to control which warnings will be displayed or even turn them into errors:
$ pytest -q test_show_warnings.py -W error::UserWarning
F [100%]
================================= FAILURES =================================
_________________________________ test_one _________________________________
def test_one():
> assert api_v1() == 1
test_show_warnings.py:10:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def api_v1():
> warnings.warn(UserWarning("api v1, should use functions from v2"))
E UserWarning: api v1, should use functions from v2
test_show_warnings.py:5: UserWarning
========================= short test summary info ==========================
FAILED test_show_warnings.py::test_one - UserWarning: api v1, should use ...
1 failed in 0.12s
The same option can be set in the pytest.ini or pyproject.toml file using the filterwarnings ini option.
For example, the configuration below will ignore all user warnings and specific deprecation warnings matching a regex,
but will transform all other warnings into errors.
# pytest.ini
[pytest]
filterwarnings =
error
ignore::UserWarning
ignore:function ham\(\) is deprecated:DeprecationWarning
# pyproject.toml
[tool.pytest.ini_options]
filterwarnings = [
"error",
"ignore::UserWarning",
# note the use of single quote below to denote "raw" strings in TOML
'ignore:function ham\(\) is deprecated:DeprecationWarning',
]
When a warning matches more than one option in the list, the action for the last matching option is performed.
Both the -W command-line option and the filterwarnings ini option are based on Python's own -W option and
warnings.simplefilter, so please refer to those sections in the Python documentation for other examples and advanced usage.
10.1 @pytest.mark.filterwarnings
You can use the @pytest.mark.filterwarnings mark to add warning filters to specific test items, allowing you to
have finer control over which warnings should be captured at test, class or even module level:
import warnings
def api_v1():
warnings.warn(UserWarning("api v1, should use functions from v2"))
return 1
@pytest.mark.filterwarnings("ignore:api v1")
def test_one():
assert api_v1() == 1
Filters applied using a mark take precedence over filters passed on the command line or configured by the
filterwarnings ini option.
You may apply a filter to all tests of a class by using the filterwarnings mark as a class decorator or to all tests
in a module by setting the pytestmark variable:
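A sketch of both forms (class and test names are illustrative):
import warnings

import pytest


@pytest.mark.filterwarnings("ignore::DeprecationWarning")
class TestDeprecatedFeatures:
    def test_old_api(self):
        warnings.warn("old api", DeprecationWarning)


# apply a filter to every test in this module
pytestmark = pytest.mark.filterwarnings("error")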
Credits go to Florian Schulze for the reference implementation in the pytest-warnings plugin.
Although not recommended, you can use the --disable-warnings command-line option to suppress the warning
summary entirely from the test run output.
This plugin is enabled by default but can be disabled entirely in your pytest.ini file with:
[pytest]
addopts = -p no:warnings
Or pass -p no:warnings on the command line. This might be useful if your test suite handles warnings using
an external system.
By default pytest will display DeprecationWarning and PendingDeprecationWarning warnings from user code and
third-party libraries. To hide a specific category of deprecation warnings, you can add a filter like the following to your ini file:
[pytest]
filterwarnings =
ignore:.*U.*mode is deprecated:DeprecationWarning
This will ignore all warnings of type DeprecationWarning where the start of the message matches the regular
expression ".*U.*mode is deprecated".
Note: If warnings are configured at the interpreter level, using the PYTHONWARNINGS environment variable or
the -W command-line option, pytest will not configure any filters by default.
Also pytest doesn’t follow PEP-0506 suggestion of resetting all warning filters because it might break test suites
that configure warning filters themselves by calling warnings.simplefilter (see issue #2430 for an example
of that).
You can also use pytest.deprecated_call() for checking that a certain function call triggers a
DeprecationWarning or PendingDeprecationWarning:
import pytest
def test_myfunction_deprecated():
with pytest.deprecated_call():
myfunction(17)
This test will fail if myfunction does not issue a deprecation warning when called with a 17 argument.
You can check that code raises a particular warning using pytest.warns(), which works in a similar manner to
raises:
import warnings
import pytest
def test_warning():
with pytest.warns(UserWarning):
warnings.warn("my warning", UserWarning)
The test will fail if the warning in question is not raised. You can also use the keyword argument match to assert that the warning
matches a text or regex:
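For instance (a minimal sketch, with an illustrative message and pattern):
import warnings

import pytest


def test_warning_message_matches():
    with pytest.warns(UserWarning, match=r"must be \d+$"):
        warnings.warn("value must be 42", UserWarning)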
The function also returns a list of all raised warnings (as warnings.WarningMessage objects), which you can
query for additional information:
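For example, assuming code that issues a single RuntimeWarning (names and messages illustrative):
import warnings

import pytest


def test_warning_details():
    with pytest.warns(RuntimeWarning) as record:
        warnings.warn("another warning", RuntimeWarning)

    # only one warning was raised, and its message is what we expect
    assert len(record) == 1
    assert record[0].message.args[0] == "another warning"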
Alternatively, you can examine raised warnings in detail using the recwarn fixture (see below).
The recwarn fixture automatically ensures to reset the warnings filter at the end of the test, so no global state is leaked.
You can record raised warnings either using pytest.warns() or with the recwarn fixture.
To record with pytest.warns() without asserting anything about the warnings, pass None as the expected
warning type:
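The recording part of the example was not preserved above; a minimal completion consistent with the assertions below is:
with pytest.warns(None) as record:
    warnings.warn("user", UserWarning)
    warnings.warn("runtime", RuntimeWarning)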
assert len(record) == 2
assert str(record[0].message) == "user"
assert str(record[1].message) == "runtime"
The recwarn fixture will record warnings for the whole function:
import warnings
def test_hello(recwarn):
warnings.warn("hello", UserWarning)
assert len(recwarn) == 1
w = recwarn.pop(UserWarning)
assert issubclass(w.category, UserWarning)
assert str(w.message) == "hello"
assert w.filename
assert w.lineno
Both recwarn and pytest.warns() return the same interface for recorded warnings: a WarningsRecorder
instance. To view the recorded warnings, you can iterate over this instance, call len on it to get the number of
recorded warnings, or index into it to get a particular recorded warning.
Full API: WarningsRecorder.
Recording warnings provides an opportunity to produce custom test failure messages for when no warnings are issued
or other conditions are met.
def test():
with pytest.warns(Warning) as record:
f()
if not record:
pytest.fail("Expected a warning!")
If no warnings are issued when calling f, then not record will evaluate to True. You can then call
pytest.fail() with a custom error message.
pytest may generate its own warnings in some situations, such as improper usage or deprecated features.
For example, pytest will emit a warning if it encounters a class that matches python_classes but also defines an
__init__ constructor, as this prevents the class from being instantiated:
# content of test_pytest_warnings.py
class Test:
def __init__(self):
pass
def test_foo(self):
assert 1 == 1
$ pytest test_pytest_warnings.py -q
============================= warnings summary =============================
test_pytest_warnings.py:1
  test_pytest_warnings.py:1: PytestCollectionWarning: cannot collect test class 'Test' because it has a __init__ constructor (from: test_pytest_warnings.py)
    class Test:
-- Docs: https://docs.pytest.org/en/stable/warnings.html
1 warning in 0.12s
These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.
Please read our Backwards Compatibility Policy to learn how we proceed about deprecating and eventually removing
features.
The full list of warnings is listed in the reference documentation.
ELEVEN
DOCTEST INTEGRATION FOR MODULES AND TEST FILES
By default, all files matching the test*.txt pattern will be run through the Python standard doctest module.
You can change the pattern by issuing:
pytest --doctest-glob="*.rst"
on the command line. --doctest-glob can be given multiple times in the command-line.
If you then have a text file like this:
# content of test_example.txt
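(Illustrative contents; the original example file was not preserved in this extraction.)
>>> x = 3
>>> x
3
then you can just invoke pytest directly: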
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_example.txt . [100%]
By default, pytest will collect test*.txt files looking for doctest directives, but you can pass additional globs using
the --doctest-glob option (multi-allowed).
In addition to text files, you can also execute doctests directly from docstrings of your classes and functions, including
from test modules:
# content of mymodule.py
def something():
""" a doctest in a docstring
>>> something()
42
"""
return 42
$ pytest --doctest-modules
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
mymodule.py . [ 50%]
test_example.txt . [100%]
You can make these changes permanent in your project by putting them into a pytest.ini file like this:
# content of pytest.ini
[pytest]
addopts = --doctest-modules
11.1 Encoding
The default encoding is UTF-8, but you can specify the encoding that will be used for those doctest files using the
doctest_encoding ini option:
# content of pytest.ini
[pytest]
doctest_encoding = latin1
Python’s standard doctest module provides some options to configure the strictness of doctest tests. In pytest, you
can enable those flags using the configuration file.
For example, to make pytest ignore trailing whitespaces and ignore lengthy exception stack traces you can just write:
[pytest]
doctest_optionflags = NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL
Alternatively, options can be enabled by an inline comment in the doctest itself, for example # doctest: +NORMALIZE_WHITESPACE.
pytest also introduces a NUMBER option: when enabled, floating-point numbers only need to match to the precision you
have written in the expected doctest output. For example, the following expected value only needs to match to 2 decimal places:
>>> math.pi
3.14
If you wrote 3.1416 then the actual output would need to match to 4 decimal places; and so on.
This avoids false positives caused by limited floating-point precision, like this:
Expected:
0.233
Got:
0.23300000000000001
NUMBER also supports lists of floating-point numbers – in fact, it matches floating-point numbers appearing
anywhere in the output, even inside a string! This means that it may not be appropriate to enable globally in
doctest_optionflags in your configuration file.
New in version 5.1.
By default, pytest reports only the first failure for a given doctest. If you want to continue the test even when
you have failures, do:
pytest --doctest-modules --doctest-continue-on-failure
You can change the diff output format shown on doctest failure by selecting one of the standard doctest module report
formats with the --doctest-report command line option, which accepts none, cdiff, ndiff, udiff or only_first_failure
(corresponding to doctest.REPORT_UDIFF, doctest.REPORT_CDIFF, doctest.REPORT_NDIFF and doctest.REPORT_ONLY_FIRST_FAILURE):
pytest --doctest-modules --doctest-report udiff
Some features are provided to make writing doctests easier or with better integration with your existing test suite. Keep
in mind however that by using those features you will make your doctests incompatible with the standard doctest
module.
It is possible to use fixtures in your doctests with the getfixture helper:
# content of example.rst
>>> tmp = getfixture('tmpdir')
>>> ...
>>>
Note that the fixture needs to be defined in a place visible by pytest, for example, a conftest.py file or plu-
gin; normal python files containing docstrings are not normally scanned for fixtures unless explicitly configured by
python_files.
Also, the usefixtures mark and fixtures marked as autouse are supported when executing text doctest files.
The doctest_namespace fixture can be used to inject items into the namespace in which your doctests run. It is
intended to be used within your own fixtures to provide the tests that use them with context.
doctest_namespace is a standard dict object into which you place the objects you want to appear in the doctest
namespace:
# content of conftest.py
import numpy
@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
doctest_namespace["np"] = numpy
# content of numpy.py
def arange():
"""
>>> a = np.arange(10)
>>> len(a)
10
"""
pass
Note that like the normal conftest.py, the fixtures are discovered in the directory tree conftest is in. Meaning that
if you put your doctest with your source code, the relevant conftest.py needs to be in the same directory tree. Fixtures
will not be discovered in a sibling directory tree!
For the same reasons one might want to skip normal tests, it is also possible to skip tests inside doctests.
To skip a single check inside a doctest you can use the standard doctest.SKIP directive:
def test_random(y):
"""
>>> random.random() # doctest: +SKIP
0.156231223
>>> 1 + 1
2
"""
This will skip the first check, but not the second.
pytest also allows using the standard pytest functions pytest.skip() and pytest.xfail() inside doctests,
which might be useful because you can then skip/xfail tests based on external conditions:
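For example (an illustrative sketch; the file name and the imported module are hypothetical):
# content of test_requires_unix.txt
>>> import sys, pytest
>>> if sys.platform.startswith("win"):
...     pytest.skip("this doctest does not work on Windows")
...
>>> import fcntl
>>> hasattr(fcntl, "flock")
True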
However using those functions is discouraged because it reduces the readability of the docstring.
Note: pytest.skip() and pytest.xfail() behave differently depending if the doctests are in a Python file
(in docstrings) or a text file containing doctests intermingled with text:
• Python modules (docstrings): the functions only act in that specific docstring, letting the other docstrings in the
same module execute as normal.
• Text files: the functions will skip/xfail the checks for the rest of the entire file.
11.6 Alternatives
While the built-in pytest support provides a good set of functionalities for using doctests, if you use them extensively
you might be interested in those external packages which add many more features, and include pytest integration:
• pytest-doctestplus: provides advanced doctest support and enables the testing of reStructuredText (“.rst”) files.
• Sybil: provides a way to test examples in your documentation by parsing them from the documentation source
and evaluating the parsed examples as part of your normal test run.
CHAPTER
TWELVE
SKIP AND XFAIL: DEALING WITH TESTS THAT CANNOT SUCCEED
You can mark test functions that cannot be run on certain platforms or that you expect to fail so pytest can deal with
them accordingly and present a summary of the test session, while keeping the test suite green.
A skip means that you expect your test to pass only if some conditions are met, otherwise pytest should skip running
the test altogether. Common examples are skipping windows-only tests on non-windows platforms, or skipping tests
that depend on an external resource which is not available at the moment (for example a database).
An xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet
implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.
xfail), it’s an xpass and will be reported in the test summary.
pytest counts and lists skip and xfail tests separately. Detailed information about skipped/xfailed tests is not shown
by default to avoid cluttering the output. You can use the -r option to see details corresponding to the “short” letters
shown in the test progress:
pytest -rxXs # show extra info on xfailed, xpassed, and skipped tests
The simplest way to skip a test function is to mark it with the skip decorator which may be passed an optional
reason:
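For example (the reason text is illustrative):
import pytest


@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...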
Alternatively, it is also possible to skip imperatively during test execution or setup by calling the pytest.
skip(reason) function:
def test_function():
if not valid_config():
pytest.skip("unsupported configuration")
The imperative method is useful when it is not possible to evaluate the skip condition during import time.
It is also possible to skip the whole module using pytest.skip(reason, allow_module_level=True)
at the module level:
import sys
import pytest
if not sys.platform.startswith("win"):
pytest.skip("skipping windows-only tests", allow_module_level=True)
Reference: pytest.mark.skip
12.1.1 skipif
If you wish to skip something conditionally then you can use skipif instead. Here is an example of marking a test
function to be skipped when run on an interpreter earlier than Python 3.6:
import sys
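# (The rest of the example was not preserved above; a minimal version matching
#  the description -- skip on interpreters earlier than Python 3.6 -- is:)
import pytest


@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires python3.6 or higher")
def test_function():
    ...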
If the condition evaluates to True during collection, the test function will be skipped, with the specified reason
appearing in the summary when using -rs.
You can share skipif markers between modules. Consider this test module:
# content of test_mymodule.py
import pytest
import mymodule
minversion = pytest.mark.skipif(
mymodule.__versioninfo__ < (1, 1), reason="at least mymodule-1.1 required"
)
@minversion
def test_function():
...
You can import the marker and reuse it in another test module:
# test_myothermodule.py
from test_mymodule import minversion
@minversion
def test_anotherfunction():
...
For larger test suites it’s usually a good idea to have one file where you define the markers which you then consistently
apply throughout your test suite.
Alternatively, you can use condition strings instead of booleans, but they can’t be shared between modules easily so
they are supported mainly for backward compatibility reasons.
Reference: pytest.mark.skipif
You can use the skipif marker (as any other marker) on classes:
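A sketch (the platform condition and class name are illustrative):
import sys

import pytest


@pytest.mark.skipif(sys.platform == "win32", reason="does not run on windows")
class TestPosixCalls:
    def test_function(self):
        "will not be setup or run under 'win32' platform"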
If the condition is True, this marker will produce a skip result for each of the test methods of that class.
If you want to skip all test functions of a module, you may use the pytestmark global:
# test_module.py
pytestmark = pytest.mark.skipif(...)
If multiple skipif decorators are applied to a test function, it will be skipped if any of the skip conditions is true.
Sometimes you may need to skip an entire file or directory, for example if the tests rely on Python version-specific
features or contain code that you do not wish pytest to run. In this case, you must exclude the files and directories
from collection. Refer to Customizing test collection for more information.
You can skip tests on a missing import by using pytest.importorskip at module level, within a test, or test setup function.
docutils = pytest.importorskip("docutils")
If docutils cannot be imported here, this will lead to a skip outcome of the test. You can also skip based on the
version number of a library:
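For example (the module name and minimum version are illustrative):
import pytest

docutils = pytest.importorskip("docutils", minversion="0.3")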
The version will be read from the specified module’s __version__ attribute.
12.1.5 Summary
As a quick guide, here is how to skip all tests in a module if some import is missing:
pexpect = pytest.importorskip("pexpect")
You can use the xfail marker to indicate that you expect a test to fail:
@pytest.mark.xfail
def test_function():
...
This test will run but no traceback will be reported when it fails. Instead, terminal reporting will list it in the “expected
to fail” (XFAIL) or “unexpectedly passing” (XPASS) sections.
Alternatively, you can also mark a test as XFAIL from within the test or its setup function imperatively:
def test_function():
if not valid_config():
pytest.xfail("failing configuration (but should work)")
def test_function2():
import slow_module
if slow_module.slow_function():
pytest.xfail("slow_module taking too long")
These two examples illustrate situations where you don’t want to check for a condition at the module level, which is
when a condition would otherwise be evaluated for marks.
This will make test_function XFAIL. Note that no other code is executed after the pytest.xfail() call,
differently from the marker. That’s because it is implemented internally by raising a known exception.
Reference: pytest.mark.xfail
If a test is only expected to fail under a certain condition, you can pass that condition as the first parameter:
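A sketch (the condition and reason are illustrative):
import sys

import pytest


@pytest.mark.xfail(sys.platform == "win32", reason="bug in a 3rd party library")
def test_function():
    ...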
Note that you have to pass a reason as well (see the parameter description at pytest.mark.xfail).
You can specify the motive of an expected failure with the reason parameter:
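For instance (the reason text is illustrative):
import pytest


@pytest.mark.xfail(reason="known parser issue")
def test_function():
    ...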
If you want to be more specific as to why the test is failing, you can specify a single exception, or a tuple of exceptions,
in the raises argument.
@pytest.mark.xfail(raises=RuntimeError)
def test_function():
...
Then the test will be reported as a regular failure if it fails with an exception not mentioned in raises.
If a test should be marked as xfail and reported as such, but should not even be executed, set the run parameter to
False:
@pytest.mark.xfail(run=False)
def test_function():
...
This is especially useful for xfailing tests that are crashing the interpreter and should be investigated later.
Both XFAIL and XPASS don’t fail the test suite by default. You can change this by setting the strict keyword-only
parameter to True:
@pytest.mark.xfail(strict=True)
def test_function():
...
This will make XPASS (“unexpectedly passing”) results from this test fail the test suite.
You can change the default value of the strict parameter using the xfail_strict ini option:
[pytest]
xfail_strict=true
By specifying the following on the command line:
pytest --runxfail
you can force the running and reporting of an xfail marked test as if it weren’t marked at all. This also causes
pytest.xfail() to produce no effect.
12.2.7 Examples
Here is a simple test file with several usages:
import pytest
xfail = pytest.mark.xfail
@xfail
def test_hello():
assert 0
@xfail(run=False)
def test_hello2():
assert 0
@xfail("hasattr(os, 'sep')")
def test_hello3():
assert 0
@xfail(reason="bug 110")
def test_hello4():
assert 0
@xfail('pytest.__version__[0] != "17"')
def test_hello5():
assert 0
def test_hello6():
pytest.xfail("reason")
@xfail(raises=IndexError)
def test_hello7():
x = []
x[1] = 1
It is possible to apply markers like skip and xfail to individual test instances when using parametrize:
import sys
import pytest
@pytest.mark.parametrize(
("n", "expected"),
[
(1, 2),
pytest.param(1, 0, marks=pytest.mark.xfail),
pytest.param(1, 3, marks=pytest.mark.xfail(reason="some bug")),
(2, 3),
(3, 4),
(4, 5),
pytest.param(
10, 11, marks=pytest.mark.skipif(sys.version_info >= (3, 0), reason="py2k")
),
],
)
def test_increment(n, expected):
assert n + 1 == expected
CHAPTER
THIRTEEN
PARAMETRIZING FIXTURES AND TEST FUNCTIONS
The builtin pytest.mark.parametrize decorator enables parametrization of arguments for a test function. Here is a
typical example of a test function that implements checking that a certain input leads to an expected output:
# content of test_expectation.py
import pytest
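# (The decorated function was not preserved above; the form consistent with the
#  failing id "test_eval[6*9-42]" in the output below is:)
@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected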
Here, the @parametrize decorator defines three different (test_input,expected) tuples so that the
test_eval function will run three times using them in turn:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items
test_expectation.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_expectation.py::test_eval[6*9-42] - AssertionError: assert 54...
======================= 1 failed, 2 passed in 0.12s ========================
Note: Parameter values are passed as-is to tests (no copy whatsoever).
For example, if you pass a list or a dict as a parameter value, and the test case code mutates it, the mutations will be
reflected in subsequent test case calls.
Note: pytest by default escapes any non-ascii characters used in unicode strings for the parametrization because it has
several downsides. If however you would like to use unicode strings in parametrization and see them in the terminal
as is (non-escaped), use this option in your pytest.ini:
[pytest]
disable_test_id_escaping_and_forfeit_all_rights_to_community_support = True
Keep in mind however that this might cause unwanted side effects and even bugs depending on the OS used and
plugins currently installed, so use it at your own risk.
As designed in this example, only one pair of input/output values fails the simple test function. And as usual with test
function arguments, you can see the input and output values in the traceback.
Note that you could also use the parametrize marker on a class or a module (see Marking test functions with attributes)
which would invoke several functions with the argument sets.
It is also possible to mark individual test instances within parametrize, for example with the builtin mark.xfail:
# content of test_expectation.py
import pytest
@pytest.mark.parametrize(
"test_input,expected",
[("3+5", 8), ("2+4", 6), pytest.param("6*9", 42, marks=pytest.mark.xfail)],
)
def test_eval(test_input, expected):
assert eval(test_input) == expected
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items
The one parameter set which caused a failure previously now shows up as an “xfailed” (expected to fail) test.
In case the values provided to parametrize result in an empty list - for example, if they’re dynamically generated
by some function - the behaviour of pytest is defined by the empty_parameter_set_mark option.
To get all combinations of multiple parametrized arguments you can stack parametrize decorators:
import pytest
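# (A sketch of the stacked decorators described below; the argument values are
#  chosen to match the x=0/1, y=2/3 combinations in the text:)
@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [2, 3])
def test_foo(x, y):
    pass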
This will run the test with the arguments set to x=0/y=2, x=1/y=2, x=0/y=3, and x=1/y=3, exhausting parameters
in the order of the decorators.
Sometimes you may want to implement your own parametrization scheme or implement some dynamism for deter-
mining the parameters or scope of a fixture. For this, you can use the pytest_generate_tests hook which is
called when collecting a test function. Through the passed in metafunc object you can inspect the requesting test
context and, most importantly, you can call metafunc.parametrize() to cause parametrization.
For example, let’s say we want to run a test taking string inputs which we want to set via a new pytest command
line option. Let’s first write a simple test accepting a stringinput fixture function argument:
# content of test_strings.py
def test_valid_string(stringinput):
assert stringinput.isalpha()
Now we add a conftest.py file containing the addition of a command line option and the parametrization of our
test function:
# content of conftest.py
def pytest_addoption(parser):
parser.addoption(
"--stringinput",
action="append",
default=[],
help="list of stringinputs to pass to test functions",
)
def pytest_generate_tests(metafunc):
if "stringinput" in metafunc.fixturenames:
metafunc.parametrize("stringinput", metafunc.config.getoption("stringinput"))
If we now pass two stringinput values, our test will run twice:
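An illustrative invocation (the input values are arbitrary):
$ pytest -q --stringinput="hello" --stringinput="world" test_strings.py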
Let's also run with a stringinput that will lead to a failing test:
$ pytest -q --stringinput="!" test_strings.py
stringinput = '!'
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E AssertionError: assert False
E + where False = <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.
˓→isalpha
test_strings.py:4: AssertionError
========================= short test summary info ==========================
FAILED test_strings.py::test_valid_string[!] - AssertionError: assert False
1 failed in 0.12s
If you don't specify a stringinput it will be skipped, because metafunc.parametrize() will be called with an empty parameter list:
1 skipped in 0.12s
Note that when calling metafunc.parametrize multiple times with different parameter sets, all parameter names
across those sets cannot be duplicated, otherwise an error will be raised.
For further examples, you might want to look at more parametrization examples.
FOURTEEN
CACHE: WORKING WITH CROSS-TESTRUN STATE
14.1 Usage
The plugin provides two command line options to rerun failures from the last pytest invocation:
• --lf, --last-failed - to only re-run the failures.
• --ff, --failed-first - to run the failures first and then the rest of the tests.
For cleanup (usually not needed), a --cache-clear option allows removing all cross-session cache contents ahead
of a test run.
Other plugins may access the config.cache object to set/get json encodable values between pytest invocations.
Note: This plugin is enabled by default, but can be disabled if needed: see Deactivating / unregistering a plugin by
name (the internal name for this plugin is cacheprovider).
# content of test_50.py
import pytest
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
pytest.fail("bad luck")
If you run this for the first time you will see two failures:
$ pytest -q
.................F.......F........................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
i = 17
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:7: Failed
_______________________________ test_num[25] _______________________________
i = 25
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
2 failed, 48 passed in 0.12s
If you then run it with --lf:
$ pytest --lf
test_50.py FF [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
i = 17
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:7: Failed
_______________________________ test_num[25] _______________________________
i = 25
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
======================= 2 failed, 48 deselected in 0.12s =======================
You have run only the two failing tests from the last run, while the 48 passing tests have not been run (“deselected”).
Now, if you run with the --ff option, all tests will be run but the first previous failures will be executed first (as can
be seen from the series of FF and dots):
$ pytest --ff
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 50 items
run-last-failure: rerun previous 2 failures first
test_50.py FF................................................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________
i = 17
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:7: Failed
_______________________________ test_num[25] _______________________________
i = 25
@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck
test_50.py:7: Failed
========================= short test summary info ==========================
FAILED test_50.py::test_num[17] - Failed: bad luck
FAILED test_50.py::test_num[25] - Failed: bad luck
======================= 2 failed, 48 passed in 0.12s =======================
The new --nf, --new-first options run new tests first, followed by the rest of the tests; in both cases, tests are also
sorted by file modification time, with more recent files coming first.
When no tests failed in the last run, or when no cached lastfailed data was found, pytest can be configured
either to run all of the tests or no tests, using the --last-failed-no-failures option, which takes one of the
following values:
pytest --last-failed --last-failed-no-failures all    # run all tests (default behavior)
pytest --last-failed --last-failed-no-failures none   # run no tests and exit
Plugins or conftest.py support code can get a cached value using the pytest config object. Here is a basic example
plugin which implements a fixture which re-uses previously created state across pytest invocations:
# content of test_caching.py
import pytest
import time
def expensive_computation():
print("running expensive computation...")
@pytest.fixture
def mydata(request):
val = request.config.cache.get("example/value", None)
if val is None:
expensive_computation()
val = 42
request.config.cache.set("example/value", val)
return val
def test_function(mydata):
assert mydata == 23
If you run this command for the first time, you can see the print statement:
$ pytest -q
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
mydata = 42
def test_function(mydata):
> assert mydata == 23
E assert 42 == 23
test_caching.py:20: AssertionError
-------------------------- Captured stdout setup ---------------------------
running expensive computation...
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s
If you run it a second time, the value will be retrieved from the cache and nothing will be printed:
$ pytest -q
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________
mydata = 42
def test_function(mydata):
> assert mydata == 23
E assert 42 == 23
test_caching.py:20: AssertionError
========================= short test summary info ==========================
FAILED test_caching.py::test_function - assert 42 == 23
1 failed in 0.12s
You can always peek at the content of the cache using the --cache-show command line option:
$ pytest --cache-show
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
cachedir: $PYTHON_PREFIX/.pytest_cache
--------------------------- cache values for '*' ---------------------------
cache/lastfailed contains:
{'test_50.py::test_num[17]': True,
'test_50.py::test_num[25]': True,
'test_assert1.py::test_function': True,
'test_assert2.py::test_set_comparison': True,
'test_caching.py::test_function': True,
'test_foocompare.py::test_compare': True}
cache/nodeids contains:
['test_50.py::test_num[0]',
'test_50.py::test_num[10]',
'test_50.py::test_num[11]',
'test_50.py::test_num[12]',
'test_50.py::test_num[13]',
'test_50.py::test_num[14]',
'test_50.py::test_num[15]',
'test_50.py::test_num[16]',
'test_50.py::test_num[17]',
'test_50.py::test_num[18]',
'test_50.py::test_num[19]',
'test_50.py::test_num[1]',
You can instruct pytest to clear all cache files and values by adding the --cache-clear option like this:
pytest --cache-clear
This is recommended for invocations from Continuous Integration servers where isolation and correctness are more
important than speed.
14.7 Stepwise
As an alternative to --lf -x, especially for cases where you expect a large part of the test suite to fail, --sw,
--stepwise allows you to fix them one at a time. The test suite will run until the first failure and then stop. At the
next invocation, tests will continue from the last failing test and then run until the next failing test. You may use the
--stepwise-skip option to ignore one failing test and stop the test execution on the second failing test instead.
This is useful if you get stuck on a failing test and just want to ignore it until later.
FIFTEEN
UNITTEST.TESTCASE SUPPORT
pytest supports running Python unittest-based tests out of the box. It's meant for leveraging existing
unittest-based test suites to use pytest as a test runner, and also allows incrementally adapting the test suite to
take full advantage of pytest's features.
To run an existing unittest-style test suite using pytest, type:
pytest tests
pytest will automatically collect unittest.TestCase subclasses and their test methods in test_*.py or
*_test.py files.
Almost all unittest features are supported:
• @unittest.skip style decorators;
• setUp/tearDown;
• setUpClass/tearDownClass;
• setUpModule/tearDownModule;
Up to this point pytest does not have support for the following features:
• load_tests protocol;
• subtests;
By running your test suite with pytest you can make use of several features, in most cases without having to modify
existing code:
• Obtain more informative tracebacks;
• stdout and stderr capturing;
• Test selection options using -k and -m flags;
• Stopping after the first (or N) failures;
• --pdb command-line option for debugging on test failures (see note below);
• Distribute tests to multiple CPUs using the pytest-xdist plugin;
• Use plain assert-statements instead of self.assert* functions (unittest2pytest is immensely helpful in this);
Running your unittest with pytest allows you to use its fixture mechanism with unittest.TestCase style tests.
Assuming you have at least skimmed the pytest fixture features, let’s jump-start into an example that integrates a pytest
db_class fixture, setting up a class-cached database object, and then reference it from a unittest-style test:
# content of conftest.py
import pytest


@pytest.fixture(scope="class")
def db_class(request):
    class DummyDB:
        pass

    # set a class attribute on the invoking test context
    request.cls.db = DummyDB()
This defines a fixture function db_class which - if used - is called once for each test class and which sets the class-
level db attribute to a DummyDB instance. The fixture function achieves this by receiving a special request object
which gives access to the requesting test context such as the cls attribute, denoting the class from which the fixture is
used. This architecture de-couples fixture writing from actual test code and allows re-use of the fixture by a minimal
reference, the fixture name. So let’s write an actual unittest.TestCase class using our fixture definition:
# content of test_unittest_db.py
import unittest
import pytest
@pytest.mark.usefixtures("db_class")
class MyTest(unittest.TestCase):
    def test_method1(self):
        assert hasattr(self, "db")
        assert 0, self.db  # fail for demo purposes

    def test_method2(self):
        assert 0, self.db  # fail for demo purposes
The @pytest.mark.usefixtures("db_class") class-decorator makes sure that the pytest fixture function
db_class is called once per class. Due to the deliberately failing assert statements, we can take a look at the
self.db values in the traceback:
$ pytest test_unittest_db.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
test_unittest_db.py FF [100%]
def test_method1(self):
assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E assert 0
test_unittest_db.py:10: AssertionError
___________________________ MyTest.test_method2 ____________________________
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
E assert 0
test_unittest_db.py:13: AssertionError
========================= short test summary info ==========================
FAILED test_unittest_db.py::MyTest::test_method1 - AssertionError: <conft...
FAILED test_unittest_db.py::MyTest::test_method2 - AssertionError: <conft...
============================ 2 failed in 0.12s =============================
This default pytest traceback shows that the two test methods share the same self.db instance which was our
intention when writing the class-scoped fixture function above.
Although it’s usually better to explicitly declare use of fixtures you need for a given test, you may sometimes want to
have fixtures that are automatically used in a given context. After all, the traditional style of unittest-setup mandates
the use of this implicit fixture writing and chances are, you are used to it or like it.
You can flag fixture functions with @pytest.fixture(autouse=True) and define the fixture function in the
context where you want it used. Let’s look at an initdir fixture which makes all test methods of a TestCase
class execute in a temporary directory with a pre-initialized samplefile.ini. Our initdir fixture itself uses
the pytest builtin tmpdir fixture to delegate the creation of a per-test temporary directory:
# content of test_unittest_cleandir.py
import pytest
import unittest
class MyTest(unittest.TestCase):
@pytest.fixture(autouse=True)
def initdir(self, tmpdir):
tmpdir.chdir() # change to pytest-provided temporary directory
tmpdir.join("samplefile.ini").write("# testdata")
def test_method(self):
with open("samplefile.ini") as f:
s = f.read()
assert "testdata" in s
Due to the autouse flag the initdir fixture function will be used for all methods of the class where it is de-
fined. This is a shortcut for using a @pytest.mark.usefixtures("initdir") marker on the class like in
the previous example.
Running this test module ...:
$ pytest -q test_unittest_cleandir.py
. [100%]
1 passed in 0.12s
... gives us one passed test because the initdir fixture function was executed ahead of the test_method.
Note: unittest.TestCase methods cannot directly receive fixture arguments as implementing that is likely to
inflict on the ability to run general unittest.TestCase test suites.
The above usefixtures and autouse examples should help to mix in pytest fixtures into unittest suites.
You can also gradually move away from subclassing from unittest.TestCase to plain asserts and then start to
benefit from the full pytest feature set step by step.
Note: Due to architectural differences between the two frameworks, setup and teardown for unittest-based tests
is performed during the call phase of testing instead of in pytest’s standard setup and teardown stages.
This can be important to understand in some situations, particularly when reasoning about errors. For example, if a
unittest-based suite exhibits errors during setup, pytest will report no errors during its setup phase and will
instead raise the error during call.
SIXTEEN
RUNNING TESTS WRITTEN FOR NOSE
pytest has basic support for running tests written for nose.
16.1 Usage
python setup.py develop # make sure tests can import our package
pytest # instead of 'nosetests'
and you should be able to run your nose style tests and make use of pytest’s capabilities.
If you place a conftest.py file in the root directory of your project (as determined by pytest) pytest will run tests
“nose style” against the code below that directory by adding it to your sys.path instead of running against
your installed code.
You may find yourself wanting to do this if you ran python setup.py install to set up your project,
as opposed to python setup.py develop or any of the package manager equivalents. Installing with
develop in a virtual environment like tox is recommended over this pattern.
• nose-style doctests are not collected and executed correctly, also doctest fixtures don’t work.
• no nose-configuration is recognized.
• yield-based methods are unsupported as of pytest 4.1.0. They are fundamentally incompatible with pytest
because they don’t support fixtures properly since collection and test execution are separated.
nose2pytest is a Python script and pytest plugin to help convert Nose-based tests into pytest-based tests. Specifically,
the script transforms nose.tools.assert_* function calls into raw assert statements, while preserving format of original
arguments as much as possible.
SEVENTEEN
CLASSIC XUNIT-STYLE SETUP
This section describes a classic and popular way to implement fixtures (setup and teardown test state) on a
per-module/class/function basis.
Note: While these setup/teardown methods are simple and familiar to those coming from a unittest or nose
background, you may also consider using pytest’s more powerful fixture mechanism which leverages the concept of
dependency injection, allowing for a more modular and more scalable approach for managing test state, especially for
larger projects and for functional testing. You can mix both fixture mechanisms in the same file but test methods of
unittest.TestCase subclasses cannot receive fixture arguments.
If you have multiple test functions and test classes in a single module you can optionally implement the following
fixture methods which will usually be called once for all the functions:
def setup_module(module):
""" setup any state specific to the execution of the given module."""
def teardown_module(module):
""" teardown any state that was previously setup with a setup_module
method.
"""
Similarly, the following methods are called at class level before and after all test methods of the class are called:
@classmethod
def setup_class(cls):
""" setup any state specific to the execution of the given class (which
usually contains tests).
"""
@classmethod
def teardown_class(cls):
""" teardown any state that was previously setup with a call to
setup_class.
"""
Similarly, the following functions are called around each test function invocation when test functions are defined directly at module level:
def setup_function(function):
""" setup any state tied to the execution of the given function.
Invoked for every test function in the module.
"""
def teardown_function(function):
""" teardown any state that was previously setup with a setup_function
call.
"""
EIGHTEEN
INSTALLING AND USING PLUGINS
This section talks about installing and using third party plugins. For writing your own plugins, please refer to Writing
plugins.
Installing a third party plugin can be easily done with pip:
pip install pytest-NAME
pip uninstall pytest-NAME
If a plugin is installed, pytest automatically finds and integrates it; there is no need to activate it.
Here is a little annotated list for some popular plugins:
• pytest-django: write tests for django apps, using pytest integration.
• pytest-twisted: write tests for twisted apps, starting a reactor and processing deferreds from test functions.
• pytest-cov: coverage reporting, compatible with distributed testing
• pytest-xdist: to distribute tests to CPUs and remote hosts, to run in boxed mode which allows surviving segmentation
faults, to run in looponfailing mode, automatically re-running failing tests on file changes.
• pytest-instafail: to report failures while the test run is happening.
• pytest-bdd: to write tests using behaviour-driven testing.
• pytest-timeout: to timeout tests based on function marks or global definitions.
• pytest-pep8: a --pep8 option to enable PEP8 compliance checking.
• pytest-flakes: check source code with pyflakes.
• oejskit: a plugin to run javascript unittests in live browsers.
To see a complete list of all plugins with their latest testing status against different pytest and Python versions, please
visit plugincompat.
You may also discover more plugins by searching for the pytest- prefix on pypi.org.
You can require plugins in a test module or a conftest file using pytest_plugins:
pytest_plugins = ("myapp.testsupport.myplugin",)
When the test module or conftest plugin is loaded the specified plugins will be loaded as well.
Note: Requiring plugins using a pytest_plugins variable in non-root conftest.py files is deprecated. See
full explanation in the Writing plugins section.
Note: The name pytest_plugins is reserved and should not be used as a name for a custom plugin module.
If you want to find out which plugins are active in your environment you can type:
pytest --trace-config
and you will get an extended test header which shows activated plugins and their names. It will also print local plugins aka
conftest.py files when they are loaded.
You can prevent plugins from loading or unregister them:
pytest -p no:NAME
This means that any subsequent try to activate/load the named plugin will not work.
If you want to unconditionally disable a plugin for a project, you can add this option to your pytest.ini file:
[pytest]
addopts = -p no:NAME
Alternatively to disable it only in certain environments (for example in a CI server), you can set PYTEST_ADDOPTS
environment variable to -p no:name.
See Finding out which plugins are active for how to obtain the name of a plugin.
NINETEEN
WRITING PLUGINS
It is easy to implement local conftest plugins for your own project or pip-installable plugins that can be used throughout
many projects, including third party projects. Please refer to Installing and Using plugins if you only want to use but
not write plugins.
A plugin contains one or multiple hook functions. Writing hooks explains the basics and details of how you can write a
hook function yourself. pytest implements all aspects of configuration, collection, running and reporting by calling
well specified hooks of the following plugins:
• builtin plugins: loaded from pytest’s internal _pytest directory.
• external plugins: modules discovered through setuptools entry points
• conftest.py plugins: modules auto-discovered in test directories
In principle, each hook call is a 1:N Python function call where N is the number of registered implementation functions
for a given specification. All specifications and implementations follow the pytest_ prefix naming convention,
making them easy to distinguish and find.
Local conftest.py plugins contain directory-specific hook implementations. Session and test running activities
will invoke all hooks defined in conftest.py files closer to the root of the filesystem. Here is an example of implementing
the pytest_runtest_setup hook so that it is called for tests in the a sub-directory but not for other directories:
a/conftest.py:
def pytest_runtest_setup(item):
# called for running each test in 'a' directory
print("setting up", item)
a/test_sub.py:
def test_sub():
pass
test_flat.py:
def test_flat():
pass
Note: If you have conftest.py files which do not reside in a python package directory (i.e. one containing an
__init__.py) then “import conftest” can be ambiguous because there might be other conftest.py files as well
on your PYTHONPATH or sys.path. It is thus good practice for projects to either put conftest.py under a
package scope or to never import anything from a conftest.py file.
See also: pytest import mechanisms and sys.path/PYTHONPATH.
Note: Some hooks should be implemented only in plugins or conftest.py files situated at the tests root directory due
to how pytest discovers plugins during startup, see the documentation of each hook for details.
If you want to write a plugin, there are many real-life examples you can copy from:
• a custom collection example plugin: A basic example for specifying tests in Yaml files
• builtin plugins which provide pytest’s own functionality
• many external plugins providing additional features
All of these plugins implement hooks and/or fixtures to extend and add functionality.
Note: Make sure to check out the excellent cookiecutter-pytest-plugin project, which is a cookiecutter template for
authoring plugins.
The template provides an excellent starting point with a working plugin, tests running with tox, a comprehensive
README file as well as a pre-configured entry-point.
Also consider contributing your plugin to pytest-dev once it has some happy users other than yourself.
If you want to make your plugin externally available, you may define a so-called entry point for your distribution so
that pytest finds your plugin module. Entry points are a feature that is provided by setuptools. pytest looks up
the pytest11 entrypoint to discover its plugins and you can thus make your plugin available by defining it in your
setuptools-invocation:
setup(
name="myproject",
packages=["myproject"],
# the following makes a plugin available to pytest
entry_points={"pytest11": ["name_of_plugin = myproject.pluginmodule"]},
# custom PyPI classifier for pytest plugins
classifiers=["Framework :: Pytest"],
)
If a package is installed this way, pytest will load myproject.pluginmodule as a plugin which can define
hooks.
Note: Make sure to include Framework :: Pytest in your list of PyPI classifiers to make it easy for users to
find your plugin.
One of the main features of pytest is the use of plain assert statements and the detailed introspection of expressions
upon assertion failures. This is provided by “assertion rewriting” which modifies the parsed AST before it gets com-
piled to bytecode. This is done via a PEP 302 import hook which gets installed early on when pytest starts up and
will perform this rewriting when modules get imported. However, since we do not want to test different bytecode from
what you will run in production, this hook only rewrites test modules themselves (as defined by the python_files
configuration option), and any modules which are part of plugins. Any other imported module will not be rewritten
and normal assertion behaviour will happen.
If you have assertion helpers in other modules where you would need assertion rewriting to be enabled you need to
ask pytest explicitly to rewrite this module before it gets imported.
register_assert_rewrite(*names: str) → None
Register one or more module names to be rewritten on import.
This function will make sure that this module or all modules inside the package will get their assert statements
rewritten. Thus you should make sure to call this before the module is actually imported, usually in your
__init__.py if you are a plugin using a package.
Raises TypeError – If the given module names are not strings.
This is especially important when you write a pytest plugin which is created using a package. The import hook only
treats conftest.py files and any modules which are listed in the pytest11 entrypoint as plugins. As an example
consider the following package:
pytest_foo/__init__.py
pytest_foo/plugin.py
pytest_foo/helper.py
In this case only pytest_foo/plugin.py will be rewritten. If the helper module also contains assert statements
which need to be rewritten it needs to be marked as such, before it gets imported. This is easiest by marking it
for rewriting inside the __init__.py module, which will always be imported first when a module inside a pack-
age is imported. This way plugin.py can still import helper.py normally. The contents of pytest_foo/
__init__.py will then need to look like this:
import pytest
pytest.register_assert_rewrite("pytest_foo.helper")
You can require plugins in a test module or a conftest.py file using pytest_plugins:
When the test module or conftest plugin is loaded the specified plugins will be loaded as well. Any module can be
blessed as a plugin, including internal application modules:
pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins are processed recursively, so note that in the example above if myapp.testsupport.
myplugin also declares pytest_plugins, the contents of the variable will also be loaded as plugins, and so
on.
Note: Requiring plugins using pytest_plugins variable in non-root conftest.py files is deprecated.
This is important because conftest.py files implement per-directory hook implementations, but once a plugin is
imported, it will affect the entire directory tree. In order to avoid confusion, defining pytest_plugins in any
conftest.py file which is not located in the tests root directory is deprecated, and will raise a warning.
This mechanism makes it easy to share fixtures within applications or even external applications without the need to
create external plugins using the setuptools’s entry point technique.
Plugins imported by pytest_plugins will also automatically be marked for assertion rewriting (see pytest.
register_assert_rewrite()). However for this to have any effect the module must not be imported already; if
it was already imported at the time the pytest_plugins statement is processed, a warning will result and assertions
inside the plugin will not be rewritten. To fix this you can either call pytest.register_assert_rewrite()
yourself before the module is imported, or you can arrange the code to delay the importing until after the plugin is
registered.
If a plugin wants to collaborate with code from another plugin it can obtain a reference through the plugin manager
like this:
plugin = config.pluginmanager.get_plugin("name_of_plugin")
If you want to look at the names of existing plugins, use the --trace-config option.
If your plugin uses any markers, you should register them so that they appear in pytest’s help text and do not cause
spurious warnings. For example, the following plugin would register cool_marker and mark_with for all users:
def pytest_configure(config):
config.addinivalue_line("markers", "cool_marker: this one is for cool tests.")
config.addinivalue_line(
"markers", "mark_with(arg, arg2): this marker takes arguments."
)
pytest comes with a plugin named pytester that helps you write tests for your plugin code. The plugin is disabled
by default, so you will have to enable it before you can use it.
You can do so by adding the following line to a conftest.py file in your testing directory:
# content of conftest.py
pytest_plugins = ["pytester"]
Alternatively you can invoke pytest with the -p pytester command line option.
This will allow you to use the testdir fixture for testing your plugin code.
Let’s demonstrate what you can do with the plugin with an example. Imagine we developed a plugin that provides a
fixture hello which yields a function and we can invoke this function with one optional parameter. It will return a
string value of Hello World! if we do not supply a value or Hello {value}! if we do supply a string value.
import pytest
def pytest_addoption(parser):
group = parser.getgroup("helloworld")
group.addoption(
"--name",
action="store",
dest="name",
default="World",
help='Default "name" for hello().',
)
@pytest.fixture
def hello(request):
    def _hello(name=None):
        if not name:
            name = request.config.getoption("name")
        return "Hello {name}!".format(name=name)

    return _hello
Now the testdir fixture provides a convenient API for creating temporary conftest.py files and test files. It
also allows us to run the tests and return a result object, with which we can assert the tests’ outcomes.
def test_hello(testdir):
"""Make sure that our plugin works."""
@pytest.fixture(params=[
"Brianna",
"Andreas",
"Floris",
])
def name(request):
return request.param
"""
)
Additionally it is possible to copy examples from an example folder before running pytest on them.
# content of pytest.ini
[pytest]
pytester_example_dir = .

# content of test_example.py


def test_plugin(testdir):
    testdir.copy_example("test_example.py")
    testdir.runpytest("-k", "test_example")


def test_example():
    pass
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
collected 2 items
test_example.py .. [100%]
For more information about the result object that runpytest() returns, and the methods that it provides please
check out the RunResult documentation.
TWENTY
WRITING HOOK FUNCTIONS
pytest calls hook functions from registered plugins for any given hook specification. Let’s look at a typical hook func-
tion for the pytest_collection_modifyitems(session, config, items) hook which pytest calls
after collection of all test items is completed.
When we implement a pytest_collection_modifyitems function in our plugin, pytest will, during registration,
verify that the argument names match the specification and bail out if they do not.
Let’s look at a possible implementation:
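A minimal sketch of such an implementation (the body is only illustrative):

def pytest_collection_modifyitems(config, items):
    # called after collection has been performed;
    # you can modify the items list in-place, e.g. reorder or deselect items
    ...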
Here, pytest will pass in config (the pytest config object) and items (the list of collected test items) but will
not pass in the session argument because we didn’t list it in the function signature. This dynamic “pruning” of
arguments allows pytest to be “future-compatible”: we can introduce new hook named parameters without breaking
the signatures of existing hook implementations. It is one of the reasons for the general long-lived compatibility of
pytest plugins.
Note that hook functions other than pytest_runtest_* are not allowed to raise exceptions. Doing so will break
the pytest run.
Most calls to pytest hooks result in a list of results which contains all non-None results of the called hook functions.
Some hook specifications use the firstresult=True option so that the hook call only executes until the first of
N registered functions returns a non-None result which is then taken as result of the overall hook call. The remaining
hook functions will not be called in this case.
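For illustration, a sketch of a first-result hook specification; the hook name pytest_ask_for_value is hypothetical, and pytest.hookspec is the marker pytest exposes for declaring hook specifications:

import pytest


@pytest.hookspec(firstresult=True)
def pytest_ask_for_value(config):
    """Return a value for this run, or None to let another plugin answer."""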
pytest plugins can implement hook wrappers which wrap the execution of other hook implementations. A hook
wrapper is a generator function which yields exactly once. When pytest invokes hooks it first executes hook wrappers
and passes the same arguments as to the regular hooks.
At the yield point of the hook wrapper pytest will execute the next hook implementations and return their result to the
yield point in the form of a Result instance which encapsulates a result or exception info. The yield point itself will
thus typically not raise exceptions (unless there are bugs).
Here is an example definition of a hook wrapper:
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_pyfunc_call(pyfuncitem):
    do_something_before_next_hook_executes()

    outcome = yield
    # outcome.excinfo may be None or a (cls, val, tb) tuple
    res = outcome.get_result()  # will raise if the outcome was an exception

    post_process_result(res)
Note that hook wrappers don’t return results themselves, they merely perform tracing or other side effects around the
actual hook implementations. If the result of the underlying hook is a mutable object, they may modify that result but
it’s probably better to avoid it.
For more information, consult the pluggy documentation about hookwrappers.
For any given hook specification there may be more than one implementation and we thus generally view hook
execution as a 1:N function call where N is the number of registered functions. There are ways to influence if a hook
implementation comes before or after others, i.e. the position in the N-sized list of functions:
# Plugin 1
@pytest.hookimpl(tryfirst=True)
def pytest_collection_modifyitems(items):
    # will execute as early as possible
    ...


# Plugin 2
@pytest.hookimpl(trylast=True)
def pytest_collection_modifyitems(items):
    # will execute as late as possible
    ...


# Plugin 3
@pytest.hookimpl(hookwrapper=True)
def pytest_collection_modifyitems(items):
    # will execute even before the tryfirst one above!
    outcome = yield
    # will execute after all non-hookwrapper hooks have run
Note: This is a quick overview on how to add new hooks and how they work in general, but a more complete overview
can be found in the pluggy documentation.
Plugins and conftest.py files may declare new hooks that can then be implemented by other plugins in order to
alter behaviour or interact with the new plugin:
pytest_addhooks(pluginmanager: PytestPluginManager) → None
Called at plugin registration time to allow adding new hooks via a call to pluginmanager.add_hookspecs(module_or_class, prefix).
Parameters pluginmanager (_pytest.config.PytestPluginManager) – The pytest plugin manager.
Hooks are usually declared as do-nothing functions that contain only documentation describing when the hook will
be called and what return values are expected. The names of the functions must start with pytest_ otherwise pytest
won’t recognize them.
Here’s an example. Let’s assume this code is in the sample_hook.py module.
def pytest_my_hook(config):
    """
    Receives the pytest config and does things with it.
    """
To register the hooks with pytest they need to be structured in their own module or class. This class or module can
then be passed to the pluginmanager using the pytest_addhooks function (which itself is a hook exposed by
pytest).
def pytest_addhooks(pluginmanager):
    """This example assumes the hooks are grouped in the 'sample_hook' module."""
    from my_app.tests import sample_hook

    pluginmanager.add_hookspecs(sample_hook)
@pytest.fixture()
def my_fixture(pytestconfig):
    # call the hook called "pytest_my_hook"
    # 'result' will be a list of return values from all registered functions.
    result = pytestconfig.hook.pytest_my_hook(config=pytestconfig)
Now your hook is ready to be used. To register a function at the hook, other plugins or users must now simply define
the function pytest_my_hook with the correct signature in their conftest.py.
Example:
def pytest_my_hook(config):
    """
    Print all active hooks to the screen.
    """
    print(config.hook)
Occasionally, it is necessary to change the way in which command line options are defined by one plugin based on
hooks in another plugin. For example, a plugin may expose a command line option for which another plugin needs
to define the default value. The pluginmanager can be used to install and use hooks to accomplish this. The plugin
would define and add the hooks and use pytest_addoption as follows:
# contents of hooks.py

def pytest_config_file_default_value():
    """Return the default value for the config file command line option."""

# contents of myplugin.py

def pytest_addhooks(pluginmanager):
    """This example assumes the hooks are grouped in the 'hooks' module."""
    from . import hooks

    pluginmanager.add_hookspecs(hooks)
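The pytest_addoption part mentioned above could then look roughly like the following sketch; the option name --config-file is illustrative:

def pytest_addoption(parser, pluginmanager):
    # a plain hook call returns a list of all non-None results;
    # declaring the spec with firstresult=True would return a single value instead
    results = pluginmanager.hook.pytest_config_file_default_value()
    default_value = results[0] if results else None
    parser.addoption(
        "--config-file",
        help="Config file to use.",
        default=default_value,
    )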
The conftest.py that is using myplugin would simply define the hook as follows:
def pytest_config_file_default_value():
    return "config.yaml"
Using new hooks from plugins as explained above might be a little tricky because of the standard validation mecha-
nism: if you depend on a plugin that is not installed, validation will fail and the error message will not make much
sense to your users.
One approach is to defer the hook implementation to a new plugin instead of declaring the hook functions directly in
your plugin module, for example:
# contents of myplugin.py

class DeferPlugin:
    """Simple plugin to defer pytest-xdist hook functions."""

    def pytest_testnodedown(self, node, error):
        """standard xdist hook function."""


def pytest_configure(config):
    if config.pluginmanager.hasplugin("xdist"):
        config.pluginmanager.register(DeferPlugin())
This has the added benefit of allowing you to conditionally install hooks depending on which plugins are installed.
TWENTYONE
LOGGING
pytest captures log messages of level WARNING or above automatically and displays them in their own section for
each failed test in the same manner as captured stdout and stderr.
Running without options:
pytest
By default each captured log message shows the module, line number, log level and message.
If desired the log and date format can be specified to anything that the logging module supports by passing specific
formatting options:
[pytest]
log_format = %(asctime)s %(levelname)s %(message)s
log_date_format = %Y-%m-%d %H:%M:%S
Further it is possible to disable reporting of captured content (stdout, stderr and logs) on failed tests completely with:
pytest --show-capture=no
Inside tests it is possible to change the log level for the captured log messages. This is supported by the caplog
fixture:
def test_foo(caplog):
    caplog.set_level(logging.INFO)
    pass
By default the level is set on the root logger, however as a convenience it is also possible to set the log level of any
logger:
def test_foo(caplog):
    caplog.set_level(logging.CRITICAL, logger="root.baz")
    pass
The log levels set are restored automatically at the end of the test.
It is also possible to use a context manager to temporarily change the log level inside a with block:
def test_bar(caplog):
    with caplog.at_level(logging.INFO):
        pass
Again, by default the level of the root logger is affected but the level of any logger can be changed instead with:
def test_bar(caplog):
    with caplog.at_level(logging.CRITICAL, logger="root.baz"):
        pass
Lastly all the logs sent to the logger during the test run are made available on the fixture in the form of both the
logging.LogRecord instances and the final log text. This is useful for when you want to assert on the contents of
a message:
def test_baz(caplog):
    func_under_test()
    for record in caplog.records:
        assert record.levelname != "CRITICAL"
    assert "wally" not in caplog.text
For all the available attributes of the log records see the logging.LogRecord class.
You can also resort to record_tuples if all you want to do is ensure that certain messages have been logged
under a given logger name with a given severity and message:

def test_foo(caplog):
    logging.getLogger().info("boo %s", "arg")

    assert caplog.record_tuples == [("root", logging.INFO, "boo arg")]
You can call caplog.clear() to reset the captured log records in a test:
def test_something_with_clearing_records(caplog):
    some_method_that_creates_log_records()
    caplog.clear()
    your_test_method()
    assert ["Foo"] == [rec.message for rec in caplog.records]
The caplog.records attribute contains records from the current stage only, so inside the setup phase it contains
only setup logs, same with the call and teardown phases.
To access logs from other stages, use the caplog.get_records(when) method. As an example, if you want to
make sure that tests which use a certain fixture never log any warnings, you can inspect the records for the setup and
call stages during teardown like so:
@pytest.fixture
def window(caplog):
    window = create_window()
    yield window
    for when in ("setup", "call"):
        messages = [
            x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING
        ]
        if messages:
            pytest.fail(
                "warning messages encountered during testing: {}".format(messages)
            )
By setting the log_cli configuration option to true, pytest will output logging records as they are emitted directly
into the console.
You can specify the logging level for which log records with equal or higher level are printed to the console by passing
--log-cli-level. This setting accepts the logging level names as seen in python’s documentation or an integer
as the logging level num.
Additionally, you can also specify --log-cli-format and --log-cli-date-format which mirror and de-
fault to --log-format and --log-date-format if not provided, but are applied only to the console logging
handler.
All of the CLI log options can also be set in the configuration INI file. The option names are:
• log_cli_level
• log_cli_format
• log_cli_date_format
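For example, a configuration that enables live logging at INFO level might look like this (the values shown are illustrative):

[pytest]
log_cli = true
log_cli_level = INFO
log_cli_format = %(asctime)s %(levelname)s %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S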
If you need to record all of the test suite's logging calls to a file, you can pass --log-file=/path/to/log/file.
This log file is opened in write mode, which means that it will be overwritten at each test session run.
You can also specify the logging level for the log file by passing --log-file-level. This setting accepts the
logging level names as seen in python's documentation (i.e. uppercased level names) or an integer as the logging
level num.
Additionally, you can also specify --log-file-format and --log-file-date-format which are equal to
--log-format and --log-date-format but are applied to the log file logging handler.
All of the log file options can also be set in the configuration INI file. The option names are:
• log_file
• log_file_level
• log_file_format
• log_file_date_format
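For example (the file name and values are illustrative):

[pytest]
log_file = pytest.log
log_file_level = DEBUG
log_file_format = %(asctime)s %(levelname)s %(message)s
log_file_date_format = %Y-%m-%d %H:%M:%S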
You can call set_log_path() to customize the log_file path dynamically. This functionality is considered exper-
imental.
This feature was introduced as a drop-in replacement for the pytest-catchlog plugin and they conflict with each other.
The backward compatibility API with pytest-capturelog has been dropped when this feature was introduced, so
if for that reason you still need pytest-catchlog you can disable the internal feature by adding to your pytest.
ini:
[pytest]
addopts=-p no:logging
This feature was introduced in 3.3 and some incompatible changes have been made in 3.4 after community feed-
back:
• Log levels are no longer changed unless explicitly requested by the log_level configuration or
--log-level command-line options. This allows users to configure logger objects themselves. Setting
log_level will set the level that is captured globally so if a specific test requires a lower level than this, use
the caplog.set_level() functionality otherwise that test will be prone to failure.
• Live Logs is now disabled by default and can be enabled setting the log_cli configuration option to true.
When enabled, the verbosity is increased so logging for each test is visible.
• Live Logs are now sent to sys.stdout and no longer require the -s command-line option to work.
If you want to partially restore the logging behavior of version 3.3, you can add these options to your ini file:
[pytest]
log_cli=true
log_level=NOTSET
More details about the discussion that led to these changes can be read in issue #3013.
TWENTYTWO
API REFERENCE
• Functions
– pytest.approx
– pytest.fail
– pytest.skip
– pytest.importorskip
– pytest.xfail
– pytest.exit
– pytest.main
– pytest.param
– pytest.raises
– pytest.deprecated_call
– pytest.register_assert_rewrite
– pytest.warns
– pytest.freeze_includes
• Marks
– pytest.mark.filterwarnings
– pytest.mark.parametrize
– pytest.mark.skip
– pytest.mark.skipif
– pytest.mark.usefixtures
– pytest.mark.xfail
– Custom marks
• Fixtures
– @pytest.fixture
– config.cache
– capsys
– capsysbinary
– capfd
– capfdbinary
– doctest_namespace
– request
– pytestconfig
– record_property
– record_testsuite_property
– caplog
– monkeypatch
– pytester
– testdir
– recwarn
– tmp_path
– tmp_path_factory
– tmpdir
– tmpdir_factory
• Hooks
– Bootstrapping hooks
– Initialization hooks
– Collection hooks
– Test running (runtest) hooks
– Reporting hooks
– Debugging/Interaction hooks
• Objects
– CallInfo
– Class
– Collector
– CollectReport
– Config
– ExceptionInfo
– ExitCode
– File
– FixtureDef
– FSCollector
– Function
– FunctionDefinition
– Item
– MarkDecorator
– MarkGenerator
– Mark
– Metafunc
– Module
– Node
– Parser
– PytestPluginManager
– Session
– TestReport
– _Result
• Global Variables
• Environment Variables
• Exceptions
• Warnings
• Configuration Options
• Command-line Flags
22.1 Functions
22.1.1 pytest.approx
Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not
always so. This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values
are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are
equal to within some appropriate tolerance:
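For instance:

assert abs((0.1 + 0.2) - 0.3) < 1e-6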
However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute compar-
isons like the one above are usually discouraged because there’s no tolerance that works well for all situations.
1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s
better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even
more difficult to write correctly and concisely.
The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:
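For instance, the simplest scalar case reads:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True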
Dictionary values:
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
numpy arrays:
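A sketch, assuming numpy is installed:

>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))
True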
By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its
expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0,
because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also
considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinity and NaN
are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not
considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument
to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)
Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:
If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words,
two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed
the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either
tolerance is met:
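For instance, with illustrative tolerances:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True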
You can also use approx to compare nonnumeric types, or dicts and sequences containing nonnumeric types,
in which case it falls back to strict equality. This can be useful for comparing dicts and sequences that can
contain optional values:
>>> None == approx(None)
True
>>> [None, 1.0000005] == approx([None,1])
True
>>> ["foo", 1.0000005] == approx([None,1])
False
If you’re thinking about using approx, then you might want to know how it compares to other good ways of
comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and
should agree for the most part, but they do have meaningful differences:
• math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met
w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t.
both a and b, this test is symmetric (i.e. neither a nor b is a “reference value”). You have to specify
an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. More
information...
• numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is
less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance
is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support
for comparing sequences is provided by numpy.allclose. More information...
• unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tol-
erance of 1e-7. No relative tolerance is considered and the absolute tolerance cannot be changed, so this
function is not appropriate for very large or very small numbers. Also, it’s only available in subclasses of
unittest.TestCase and it's ugly because it doesn't follow PEP8. More information...
• a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t.
b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is
asymmetric and you can think of b as the reference value. In the special case that you explicitly specify
an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.
In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead,
approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of
rich comparisons follows a fixed behavior. More information...
Changed in version 3.7.1: approx raises TypeError when it encounters a dict value or sequence element of
nonnumeric type.
Changed in version 6.1.0: approx falls back to strict equality for nonnumeric types instead of raising
TypeError.
22.1.2 pytest.fail
Tutorial: Skip and xfail: dealing with tests that cannot succeed
fail(msg: str = '', pytrace: bool = True) → NoReturn
Explicitly fail an executing test with the given message.
Parameters
• msg (str) – The message to show the user as reason for the failure.
• pytrace (bool) – If False, msg represents the full failure information and no python
traceback will be reported.
22.1.3 pytest.skip
skip(msg[, allow_module_level=False])
Skip an executing test with the given message.
This function should be called only during testing (setup, call or teardown) or during collection by using the
allow_module_level flag. This function can be called in doctests as well.
Parameters allow_module_level (bool) – Allows this function to be called at module level,
skipping the rest of the module. Defaults to False.
Note: It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain
conditions like mismatching platforms or dependencies. Similarly, use the # doctest: +SKIP directive
(see doctest.SKIP) to skip a doctest statically.
22.1.4 pytest.importorskip
docutils = pytest.importorskip("docutils")
22.1.5 pytest.xfail
Note: It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain
conditions like known bugs or missing features.
22.1.6 pytest.exit
22.1.7 pytest.main
22.1.8 pytest.param
@pytest.mark.parametrize(
    "test_input,expected",
    [("3+5", 8), pytest.param("6*9", 42, marks=pytest.mark.xfail)],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Parameters
• values – Variable args of the values of the parameter set, in order.
• marks – A single mark or a list of marks to be applied to this parameter set.
• id (str) – The id to attribute to this parameter set.
22.1.9 pytest.raises
If the code block does not raise the expected exception (ZeroDivisionError in the example above), or no
exception at all, the check will fail instead.
You can also use the keyword argument match to assert that the exception matches a text or regex:
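For instance (the message and pattern are illustrative):

>>> with pytest.raises(ValueError, match=r"must be \d+$"):
...     raise ValueError("value must be 42")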
The context manager produces an ExceptionInfo object which can be used to inspect the details of the
captured exception:
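For instance:

>>> with pytest.raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> assert exc_info.value.args[0] == "value must be 42"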
Note: When using pytest.raises as a context manager, it’s worthwhile to note that normal context
manager rules apply and that the exception raised must be the final line in the scope of the context manager.
Lines of code after that, within the scope of the context manager will not be executed. For example:
>>> value = 15
>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # this will not execute
Instead, the following approach must be taken (note the difference in scope):
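In sketch form, the assertion moves outside the with block:

>>> value = 15
>>> with pytest.raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type is ValueError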
A legacy form, pytest.raises(ExpectedException, func, *args, **kwargs), is also fully supported but is
discouraged for new code because the context manager form is regarded as more readable and less error-prone.
Note: Similar to caught exception objects in Python, explicitly clearing local references to returned
ExceptionInfo objects can help the Python interpreter speed up its garbage collection.
Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack
raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python
keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next
cyclic garbage collection run. More detailed information can be found in the official Python documentation for
the try statement.
22.1.10 pytest.deprecated_call
It can also be used by passing a function and *args and **kwargs, in which case it will ensure that calling
func(*args, **kwargs) produces a DeprecationWarning or PendingDeprecationWarning. The return value is
the return value of the function.
In the context manager form you may use the keyword argument match to assert that the warning matches a
text or regex.
The context manager produces a list of warnings.WarningMessage objects, one for each warning raised.
22.1.11 pytest.register_assert_rewrite
22.1.12 pytest.warns
In the context manager form you may use the keyword argument match to assert that the warning matches a
text or regex:
22.1.13 pytest.freeze_includes
22.2 Marks
Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or
plugins.
22.2.1 pytest.mark.filterwarnings
Tutorial: @pytest.mark.filterwarnings.
Add warning filters to marked test items.
pytest.mark.filterwarnings(filter)
Parameters filter (str) – A warning specification string, which is composed of contents of
the tuple (action, message, category, module, lineno) as specified in The
Warnings filter section of the Python documentation, separated by ":". Optional fields can
be omitted. Module names passed for filtering are not regex-escaped.
For example:
@pytest.mark.filterwarnings("ignore::DeprecationWarning")
def test_foo():
    ...
22.2.2 pytest.mark.parametrize
22.2.3 pytest.mark.skip
22.2.4 pytest.mark.skipif
22.2.5 pytest.mark.usefixtures
Note: When using usefixtures in hooks, it can only load fixtures when applied to a test function before test setup
(for example in the pytest_collection_modifyitems hook).
Also note that this mark has no effect when applied to fixtures.
22.2.6 pytest.mark.xfail
• run (bool) – If the test function should actually be executed. If False, the function will
always xfail and will not be executed (useful if a function is segfaulting).
• strict (bool) –
– If False (the default) the function will be shown in the terminal output as xfailed if
it fails and as xpass if it passes. In both cases this will not cause the test suite to fail
as a whole. This is particularly useful to mark flaky tests (tests that fail at random) to be
tackled later.
– If True, the function will be shown in the terminal output as xfailed if it fails, but if it
unexpectedly passes then it will fail the test suite. This is particularly useful to mark func-
tions that are always failing and there should be a clear indication if they unexpectedly
start to pass (for example a new release of a library fixes a known bug).
Marks are created dynamically using the factory object pytest.mark and applied as a decorator.
For example:
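A sketch of stacked marks (the timeout arguments are illustrative):

@pytest.mark.timeout(10, "slow", method="thread")
@pytest.mark.slow
def test_function():
    ...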
Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with
Node.iter_markers. The mark object will have the following attributes:
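For the timeout mark sketched above, those attributes would be:

mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}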
When Node.iter_markers or Node.iter_markers_with_node is used with multiple markers, the marker closest to the
function will be iterated over first. The above example will result in @pytest.mark.slow followed by @pytest.
mark.timeout(...).
22.3 Fixtures
def test_output(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"

@pytest.fixture
def db_session(tmpdir):
    fn = tmpdir / "db.file"
    return connect(str(fn))
22.3.1 @pytest.fixture
22.3.2 config.cache
22.3.3 capsys
def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"
class CaptureFixture
Object returned by the capsys, capsysbinary, capfd and capfdbinary fixtures.
readouterr() → _pytest.capture.CaptureResult
Read and return the captured output so far, resetting the internal buffer.
Returns The captured content as a namedtuple with out and err string attributes.
with disabled() → Generator[None, None, None]
Temporarily disable capturing while inside the with block.
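For instance, a sketch of temporarily disabling capture inside a test:

def test_disabling_capture(capsys):
    print("this output is captured")
    with capsys.disabled():
        print("output not captured, going directly to sys.stdout")
    print("this output is also captured")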
22.3.4 capsysbinary
def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"
22.3.5 capfd
def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"
22.3.6 capfdbinary
def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"
22.3.7 doctest_namespace
@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy
22.3.8 request
Tutorial: Pass different values to a test function, depending on command line options.
The request fixture is a special fixture providing information of the requesting test function.
class FixtureRequest
A request for a fixture from a test or fixture function.
A request object gives access to the requesting test context and has an optional param attribute in case the
fixture is parametrized indirectly.
fixturename: Optional[str]
Fixture for which this request is being performed.
scope: _Scope
Scope string, one of “function”, “class”, “module”, “session”.
fixturenames
Names of all active fixtures in this request.
node
Underlying collection node (depends on current request scope).
config
The pytest config object associated with this request.
function
Test function object if the request has a per-function scope.
cls
Class (can be None) where the test function was collected.
instance
Instance (can be None) on which test function was collected.
module
Python module object where the test function was collected.
fspath
The file system path of the test module which collected this test.
keywords
Keywords/markers dictionary for the underlying node.
session
Pytest session object.
addfinalizer(finalizer: Callable[[], object]) → None
Add finalizer/teardown function to be called after the last test within the requesting test context finished
execution.
applymarker(marker: Union[str, _pytest.mark.structures.MarkDecorator]) → None
Apply a marker to a single test function invocation.
This method is useful if you don’t want to have a keyword/marker on all function invocations.
Parameters marker – A _pytest.mark.MarkDecorator object created by a call to
pytest.mark.NAME(...).
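A minimal sketch of applying a marker from a fixture (the fixture name and reason string are illustrative):

import pytest


@pytest.fixture
def flaky_backend(request):
    # mark every test that uses this fixture as expected to fail
    request.applymarker(pytest.mark.xfail(reason="backend is flaky"))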
raiseerror(msg: Optional[str]) → NoReturn
Raise a FixtureLookupError with the given message.
getfixturevalue(argname: str) → Any
Dynamically run a named fixture function.
Declaring fixtures via function argument is recommended where possible. But if you can only decide
whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or
test function body.
Raises pytest.FixtureLookupError – If the given fixture could not be found.
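A sketch of resolving a fixture by name at setup time (the fixture names are hypothetical):

import pytest


@pytest.fixture
def sqlite_db():
    return "sqlite"


@pytest.fixture
def postgres_db():
    return "postgres"


@pytest.fixture(params=["sqlite_db", "postgres_db"])
def db(request):
    # resolve whichever fixture the current parameter names
    return request.getfixturevalue(request.param)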
22.3.9 pytestconfig
pytestconfig()
Session-scoped fixture that returns the _pytest.config.Config object.
Example:
def test_foo(pytestconfig):
    if pytestconfig.getoption("verbose") > 0:
        ...
22.3.10 record_property
Tutorial: record_property.
record_property()
Add extra properties to the calling test.
User properties become part of the test report and are available to the configured reporters, like JUnit XML.
The fixture is callable with name, value. The value is automatically XML-encoded.
Example:
def test_function(record_property):
    record_property("example_key", 1)
22.3.11 record_testsuite_property
Tutorial: record_testsuite_property.
record_testsuite_property()
Record a new <property> tag as child of the root <testsuite>.
This is suitable for writing global information regarding the entire test suite, and is compatible with the xunit2
JUnit family.
This is a session-scoped fixture which is called with (name, value). Example:

def test_foo(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")

name must be a string, value will be converted to a string and properly xml-escaped.
Warning: Currently this fixture does not work with the pytest-xdist plugin. See issue #7767 for details.
22.3.12 caplog
Tutorial: Logging.
caplog()
Access and control log capturing.
Captured logs are available through the following properties/methods:
handler
Get the logging handler used by the fixture.
Return type LogCaptureHandler
get_records(when: str) → List[logging.LogRecord]
Get the logging records for one of the possible test phases.
Parameters when (str) – Which test phase to obtain the records from. Valid values are:
“setup”, “call” and “teardown”.
Returns The list of captured records at the given stage.
Return type List[logging.LogRecord]
New in version 3.4.
text
The formatted log text.
records
The list of log records.
record_tuples
A list of a stripped down version of log records intended for use in assertion comparison.
The format of the tuple is:
(logger_name, log_level, message)
messages
A list of format-interpolated log messages.
Unlike ‘records’, which contains the format string and parameters for interpolation, log messages in this
list are all interpolated.
Unlike ‘text’, which contains the output from the handler, log messages in this list are unadorned with
levels, timestamps, etc, making exact comparisons more reliable.
Note that traceback or stack info (from logging.exception() or the exc_info or stack_info
arguments to the logging functions) is not included, as this is added by the formatter in the handler.
New in version 3.7.
clear() → None
Reset the list of log records and the captured log text.
set_level(level: Union[int, str], logger: Optional[str] = None) → None
Set the level of a logger for the duration of a test.
Changed in version 3.4: The levels of the loggers changed by this function will be restored to their initial
values at the end of the test.
Parameters
• level (int) – The level.
• logger (str) – The logger to update. If not given, the root logger.
with at_level(level: int, logger: Optional[str] = None) → Generator[None, None, None]
Context manager that sets the level for capturing of logs. After the end of the ‘with’ statement the level is
restored to its original value.
Parameters
• level (int) – The level.
• logger (str) – The logger to update. If not given, the root logger.
22.3.13 monkeypatch
All modifications will be undone after the requesting test function or fixture has finished. The raising pa-
rameter determines if a KeyError or AttributeError will be raised if the set/deletion operation has no target.
Returns a MonkeyPatch instance.
final class MonkeyPatch
Helper to conveniently monkeypatch attributes/items/environment variables/syspath.
Returned by the monkeypatch fixture.
Changed in version 6.2: Can now also be used directly as pytest.MonkeyPatch(), for when the
fixture is not available. In this case, use with MonkeyPatch.context() as mp: or
remember to call undo() explicitly.
classmethod with context() → Generator[_pytest.monkeypatch.MonkeyPatch, None, None]
Context manager that returns a new MonkeyPatch object which undoes any patching done inside the
with block upon exit.
Example:
import functools


def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)

Useful in situations where it is desired to undo some patches before the test ends, such as mocking stdlib
functions that might break pytest itself if mocked (for examples of this see issue #3290).
setattr(target: str, name: object, value: _pytest.monkeypatch.Notset = <notset>, raising: bool = True) → None
setattr(target: object, name: str, value: object, raising: bool = True) → None
Set attribute value on target, memorizing the old value.
For convenience you can specify a string as target which will be interpreted as a dotted import path,
with the last part being the attribute name. For example, monkeypatch.setattr("os.getcwd",
lambda: "/") would set the getcwd function of the os module.
Raises AttributeError if the attribute does not exist, unless raising is set to False.
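A minimal sketch:

import os


def test_cwd(monkeypatch):
    # replace os.getcwd for the duration of this test only
    monkeypatch.setattr("os.getcwd", lambda: "/")
    assert os.getcwd() == "/"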
22.3.14 pytester
pytest_plugins = "pytester"
pytester.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")
def test_something(pytester):
    # Initial file is created test_something.py.
    pytester.makepyfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.makepyfile(custom="foobar")
    # At this point, both 'test_something.py' & 'custom.py' exist in the test directory.


def test_something(pytester):
    # Initial file is created test_something.txt.
    pytester.maketxtfile("foobar")
    # To create multiple files, pass kwargs accordingly.
    pytester.maketxtfile(custom="foobar")
    # At this point, both 'test_something.txt' & 'custom.txt' exist in the test directory.
exception Failed
Signals a stop as failed test run.
exception Interrupted
Signals that the test run was interrupted.
getitem(source, funcname: str = 'test_func') → _pytest.nodes.Item
Writes the source to a python file and runs pytest's collection on the resulting module, returning the test
item for the requested function name.
Parameters
• source – The module source.
• funcname – The name of the test function for which to return a test item.
getitems(source: str) → List[_pytest.nodes.Item]
Return all test items collected from the module.
Writes the source to a Python file and runs pytest’s collection on the resulting module, returning all test
items contained within.
getmodulecol(source: Union[str, pathlib.Path], configargs=(), *, withinit: bool = False)
Return the module collection node for source.
Writes source to a file using makepyfile() and then runs the pytest collection on it, returning the
collection node for the test module.
Parameters
• source – The source code of the module to collect.
• configargs – Any extra arguments to pass to parseconfigure().
• withinit – Whether to also write an __init__.py file to the same directory to ensure
it is a package.
collect_by_name(modcol: _pytest.nodes.Collector, name: str) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
Return the collection node for name from the module collection.
Searches a module collection node for a collection node matching the given name.
Parameters
• modcol – A module collection node; see getmodulecol().
• name – The name of the node to return.
popen(cmdargs, stdout: Union[int, TextIO] = -1, stderr: Union[int, TextIO] = -1, stdin=<class 'object'>, **kw)
Invoke subprocess.Popen.
Calls subprocess.Popen making sure the current working directory is in the PYTHONPATH.
You probably want to use run() instead.
run(*cmdargs: Union[str, os.PathLike[str]], timeout: Optional[float] = None, stdin=<class 'object'>)
→ _pytest.pytester.RunResult
Run a command with arguments.
Run a process using subprocess.Popen saving the stdout and stderr.
Parameters
• cmdargs – The sequence of arguments to pass to subprocess.Popen(), with path-
like objects being converted to str automatically.
• timeout – The period in seconds after which to timeout and raise Pytester.
TimeoutExpired.
• stdin – Optional standard input. If bytes are given, they are written to the pipe, which is then closed;
otherwise the value is passed through to popen. Defaults to CLOSE_STDIN, which translates to using a pipe
(subprocess.PIPE) that gets closed.
The matches and non-matches are also shown as part of the error message.
Parameters
• lines2 – string patterns to match.
• consecutive – match lines consecutively?
no_fnmatch_line(pat: str) → None
Ensure captured lines do not match the given pattern, using fnmatch.fnmatch.
Parameters pat (str) – The pattern to match lines.
no_re_match_line(pat: str) → None
Ensure captured lines do not match the given pattern, using re.match.
Parameters pat (str) – The regular expression to match lines.
str() → str
Return the entire original text.
class HookRecorder
Record all hooks called in a plugin manager.
This wraps all the hook calls in the plugin manager, recording each call before propagating the normal calls.
matchreport(inamepart: str = '', names: Union[str, Iterable[str]] = ('pytest_runtest_logreport', 'pytest_collectreport'), when: Optional[str] = None) → Union[_pytest.reports.CollectReport, _pytest.reports.TestReport]
Return a testreport whose dotted import path matches.
22.3.15 testdir
Identical to pytester, but provides an instance whose methods return legacy py.path.local objects instead
when applicable.
New code should avoid using testdir in favor of pytester.
final class Testdir
Similar to Pytester, but this class works with legacy py.path.local objects instead.
All methods just forward to an internal Pytester instance, converting results to py.path.local objects
as necessary.
CLOSE_STDIN
alias of object
exception TimeoutExpired
class Session(*k, **kw)
exception Failed
Signals a stop as failed test run.
exception Interrupted
Signals that the test run was interrupted.
for ... in collect() → Iterator[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
Return a list of children (items and collectors) for this collection node.
classmethod from_config(config: _pytest.config.Config) → _pytest.main.Session
mkpydir(name) → py._path.local.LocalPath
See Pytester.mkpydir().
copy_example(name=None) → py._path.local.LocalPath
See Pytester.copy_example().
getnode(config: _pytest.config.Config, arg) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
See Pytester.getnode().
getpathnode(path)
See Pytester.getpathnode().
genitems(colitems: List[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]) → List[_pytest.nodes.Item]
See Pytester.genitems().
runitem(source)
See Pytester.runitem().
inline_runsource(source, *cmdlineargs)
See Pytester.inline_runsource().
inline_genitems(*args)
See Pytester.inline_genitems().
inline_run(*args, plugins=(), no_reraise_ctrlc: bool = False)
See Pytester.inline_run().
runpytest_inprocess(*args, **kwargs) → _pytest.pytester.RunResult
See Pytester.runpytest_inprocess().
runpytest(*args, **kwargs) → _pytest.pytester.RunResult
See Pytester.runpytest().
parseconfig(*args) → _pytest.config.Config
See Pytester.parseconfig().
parseconfigure(*args) → _pytest.config.Config
See Pytester.parseconfigure().
getitem(source, funcname='test_func')
See Pytester.getitem().
getitems(source)
See Pytester.getitems().
getmodulecol(source, configargs=(), withinit=False)
See Pytester.getmodulecol().
collect_by_name(modcol: _pytest.nodes.Collector, name: str) → Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
See Pytester.collect_by_name().
popen(cmdargs, stdout: Union[int, TextIO] = -1, stderr: Union[int, TextIO] = -1, stdin=<class 'object'>, **kw)
See Pytester.popen().
run(*cmdargs, timeout=None, stdin=<class 'object'>) → _pytest.pytester.RunResult
See Pytester.run().
runpython(script) → _pytest.pytester.RunResult
See Pytester.runpython().
runpython_c(command)
See Pytester.runpython_c().
runpytest_subprocess(*args, timeout=None) → _pytest.pytester.RunResult
See Pytester.runpytest_subprocess().
spawn_pytest(string: str, expect_timeout: float = 10.0) → pexpect.spawn
See Pytester.spawn_pytest().
spawn(cmd: str, expect_timeout: float = 10.0) → pexpect.spawn
See Pytester.spawn().
22.3.16 recwarn
Note: DeprecationWarning and PendingDeprecationWarning are treated differently; see Ensuring code
triggers a deprecation warning.
22.3.17 tmp_path
22.3.18 tmp_path_factory
22.3.19 tmpdir
22.3.20 tmpdir_factory
22.4 Hooks
Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).
pytest_load_initial_conftests(early_config: Config, parser: Parser, args: List[str]) → None
Called to implement the loading of initial conftest files ahead of command line option parsing.
Note: This hook will not be called for conftest.py files, only for setuptools plugins.
Parameters
• early_config (_pytest.config.Config) – The pytest config object.
• args (List[str]) – Arguments passed on the command line.
pytest_cmdline_preparse(config: Config, args: List[str]) → None
(Deprecated) Modify command line arguments before option parsing.
Note: This hook will not be called for conftest.py files, only for setuptools plugins.
Parameters
• config (_pytest.config.Config) – The pytest config object.
• args (List[str]) – Arguments passed on the command line.
pytest_cmdline_parse(pluginmanager: PytestPluginManager, args: List[str]) → Optional[Config]
Return an initialized config object, parsing the specified args.
Stops at first non-None result, see firstresult: stop at first non-None result.
Note: This hook will only be called for plugin classes passed to the plugins arg when using pytest.main to
perform an in-process test run.
Parameters
• pluginmanager (_pytest.config.PytestPluginManager) – Pytest plugin
manager.
• args (List[str]) – List of arguments passed on the command line.
pytest_cmdline_main(config: Config) → Optional[Union[ExitCode, int]]
Called for performing the main command line action. The default implementation will invoke the configure hooks
and runtest_mainloop.
Note: This hook will not be called for conftest.py files, only for setuptools plugins.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters config (_pytest.config.Config) – The pytest config object.
pytest_addoption(parser: Parser, pluginmanager: PytestPluginManager) → None
Register argparse-style options and ini-style config values, called once at the beginning of a test run.
Note: This function should be implemented only in plugins or conftest.py files situated at the tests root
directory due to how pytest discovers plugins during startup.
Parameters
• parser (_pytest.config.argparsing.Parser) – To add command line options, call parser.addoption(...).
To add ini-file values call parser.addini(...).
• pluginmanager (_pytest.config.PytestPluginManager) – pytest plugin manager.
pytest calls the following hooks for collecting files and directories:
pytest_collection(session: Session) → Optional[object]
Perform the collection phase for the given session.
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only
stops further processing.
The default collection phase is this (see individual hooks for full details):
1. Starting from session as the initial collector:
   1. pytest_collectstart(collector)
   2. report = pytest_make_collect_report(collector)
   3. pytest_exception_interact(collector, call, report) if an interactive exception occurred
   4. For each collected node:
      1. If an item, pytest_itemcollected(item)
      2. If a collector, recurse into it.
   5. pytest_collectreport(report)
2. pytest_collection_modifyitems(session, config, items)
   1. pytest_deselected(items) for any deselected items (may be called multiple times)
3. pytest_collection_finish(session)
4. Set session.items to the list of collected items
5. Set session.testscollected to the number of collected items
You can implement this hook to only perform some action before collection, for example the terminal plugin
uses it to start displaying the collection counter (and returns None).
Parameters session (pytest.Session) – The pytest session object.
pytest_ignore_collect(path: py._path.local.LocalPath, config: Config) → Optional[bool]
Return True to prevent considering this path for collection.
This hook is consulted for all files and directories prior to calling more specific hooks.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters
• path (py.path.local) – The path to analyze.
• config (_pytest.config.Config) – The pytest config object.
pytest_collect_file(path: py._path.local.LocalPath, parent: Collector) → Optional[Collector]
Create a Collector for the given path, or None if not relevant.
The new node needs to have the specified parent as a parent.
Parameters path (py.path.local) – The path to collect.
pytest_pycollect_makemodule(path: py._path.local.LocalPath, parent) → Optional[Module]
Return a Module collector or None for the given path.
This hook will be called for each matching test module path. The pytest_collect_file hook needs to be used if
you want to create test modules for files that do not match as a test module.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters path (py.path.local) – The path of module to collect.
For influencing the collection of objects in Python modules you can use the following hook:
pytest_pycollect_makeitem(collector: PyCollector, name: str, obj: object) → Union[None, Item,
Collector, List[Union[Item, Collector]]]
Return a custom item/collector for a Python object in a module, or None.
Stops at first non-None result, see firstresult: stop at first non-None result.
pytest_generate_tests(metafunc: Metafunc) → None
Generate (multiple) parametrized calls to a test function.
pytest_make_parametrize_id(config: Config, val: object, argname: str) → Optional[str]
Return a user-friendly string representation of the given val that will be used by @pytest.mark.parametrize
calls, or None if the hook doesn’t know about val.
The parameter name is available as argname, if required.
Stops at first non-None result, see firstresult: stop at first non-None result.
Parameters
• config (_pytest.config.Config) – The pytest config object.
• val – The parametrized value.
• argname (str) – The automatic parameter name produced by pytest.
After collection is complete, you can modify the order of items, delete or otherwise amend the test items:
pytest_collection_modifyitems(session: Session, config: Config, items: List[Item]) → None
Called after collection has been performed. May filter or re-order the items in-place.
Parameters
• session (pytest.Session) – The pytest session object.
• config (_pytest.config.Config) – The pytest config object.
• items (List[pytest.Item]) – List of item objects.
Note: If this hook is implemented in conftest.py files, it always receives all collected items, not only those under
the conftest.py where it is implemented.
pytest_runtest_protocol(item: Item, nextitem: Optional[Item]) → Optional[object]
Perform the runtest protocol for a single test item.
Parameters
• item – Test item for which the runtest protocol is performed.
• nextitem – The scheduled-to-be-next test item (or None if this is the end my friend).
Stops at first non-None result, see firstresult: stop at first non-None result. The return value is not used, but only
stops further processing.
pytest_runtest_logstart(nodeid: str, location: Tuple[str, Optional[int], str]) → None
Called at the start of running the runtest protocol for a single item.
See pytest_runtest_protocol() for a description of the runtest protocol.
Parameters
• nodeid (str) – Full node ID of the item.
• location – A tuple of (filename, lineno, testname).
pytest_runtest_logfinish(nodeid: str, location: Tuple[str, Optional[int], str]) → None
Called at the end of running the runtest protocol for a single item.
See pytest_runtest_protocol() for a description of the runtest protocol.
Parameters
• nodeid (str) – Full node ID of the item.
• location – A tuple of (filename, lineno, testname).
pytest_runtest_setup(item: Item) → None
Called to perform the setup phase for a test item.
The default implementation runs setup() on item and all of its parents (which haven’t been setup yet). This
includes obtaining the values of fixtures required by the item (which haven’t been obtained yet).
pytest_runtest_call(item: Item) → None
Called to run the test for test item (the call phase).
The default implementation calls item.runtest().
pytest_runtest_teardown(item: Item, nextitem: Optional[Item]) → None
Called to perform the teardown phase for a test item.
The default implementation runs the finalizers and calls teardown() on item and all of its parents (which
need to be torn down). This includes running the teardown phase of fixtures required by the item (if they go out
of scope).
Parameters nextitem – The scheduled-to-be-next test item (None if no further test item is sched-
uled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers
so that nextitem only needs to call setup-functions.
pytest_runtest_makereport(item: Item, call: CallInfo[None]) → Optional[TestReport]
Called to create a _pytest.reports.TestReport for each of the setup, call and teardown runtest phases
of a test item.
See pytest_runtest_protocol() for a description of the runtest protocol.
Parameters call (CallInfo[None]) – The CallInfo for the phase.
Stops at first non-None result, see firstresult: stop at first non-None result.
For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and
maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to
immediately drop into interactive debugging when a test failure occurs.
pytest_pyfunc_call(pyfuncitem: Function) → Optional[object]
Call underlying test function.
Stops at first non-None result, see firstresult: stop at first non-None result.
Note: Lines returned by a plugin are displayed before those of plugins which ran before it. If you want to have
your line(s) displayed first, use trylast=True.
Note: This function should be implemented only in plugins or conftest.py files situated at the tests root
directory due to how pytest discovers plugins during startup.
• items – List of pytest items that are going to be executed; this list should not be modified.
Note: Lines returned by a plugin are displayed before those of plugins which ran before it. If you want to have
your line(s) displayed first, use trylast=True.
Note: If the fixture function returns None, other implementations of this hook function will continue to be
called, according to the behavior of the firstresult: stop at first non-None result option.
pytest_assertrepr_compare(config: Config, op: str, left: object, right: object) → Optional[List[str]]
Return explanation for comparisons in failing assert expressions.
Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines
but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention
is for the first line to be a summary.
Parameters config (_pytest.config.Config) – The pytest config object.
pytest_assertion_pass(item: Item, lineno: int, orig: str, expl: str) → None
(Experimental) Called whenever an assertion passes.
New in version 5.0.
Use this hook to do some processing after a passing assertion. The original assertion information is available in
the orig string and the pytest introspected assertion information is available in the expl string.
This hook must be explicitly enabled by the enable_assertion_pass_hook ini-file option:
[pytest]
enable_assertion_pass_hook=true
You need to clean the .pyc files in your project directory and interpreter libraries when enabling this option, as
assertions will need to be re-written.
Parameters
• item (pytest.Item) – pytest item object of current test.
• lineno (int) – Line number of the assert statement.
• orig (str) – String with the original assertion.
• expl (str) – String with the assert explanation.
Note: This hook is experimental, so its parameters or even the hook itself might be changed/removed without
warning in any future pytest release.
If you find this hook useful, please share your feedback in an issue.
There are few hooks which can be used for special reporting or interaction with exceptions:
pytest_internalerror(excrepr: ExceptionRepr, excinfo: ExceptionInfo[BaseException]) → Optional[bool]
Called for internal errors.
Return True to suppress the fallback handling of printing an INTERNALERROR message directly to sys.stderr.
pytest_keyboard_interrupt(excinfo: ExceptionInfo[Union[KeyboardInterrupt, Exit]]) → None
Called for keyboard interrupt.
pytest_exception_interact(node: Union[Item, Collector], call: CallInfo[Any], report: Union[CollectReport, TestReport]) → None
Called when an exception was raised which can potentially be interactively handled.
May be called during collection (see pytest_make_collect_report()), in which case report is a
_pytest.reports.CollectReport.
May be called during runtest of an item (see pytest_runtest_protocol()), in which case report is a
_pytest.reports.TestReport.
This hook is not called if the exception that was raised is an internal exception like skip.Exception.
22.5 Objects
22.5.1 CallInfo
22.5.2 Class
class Class
Bases: _pytest.python.PyCollector
Collector for test methods.
classmethod from_parent(parent, *, name, obj=None)
The public constructor.
collect() → Iterable[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
Return a list of children (items and collectors) for this collection node.
22.5.3 Collector
class Collector
Bases: _pytest.nodes.Node
Collector instances create children through collect() and thus iteratively build a tree.
exception CollectError
Bases: Exception
An error during collection, contains a custom message.
collect() → Iterable[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
Return a list of children (items and collectors) for this collection node.
repr_failure(excinfo: _pytest._code.code.ExceptionInfo[BaseException]) → Union[str, _pytest._code.code.TerminalRepr]
Return a representation of a collection failure.
Parameters excinfo – Exception information for the failure.
22.5.4 CollectReport
Note: This function is considered experimental, so beware that it is subject to changes even in patch
releases.
head_line
Experimental The head line shown with longrepr output for this report, more commonly during traceback
representation during failures:
Note: This function is considered experimental, so beware that it is subject to changes even in patch
releases.
longreprtext
Read-only property that returns the full string representation of longrepr.
New in version 3.0.
22.5.5 Config
Note: Note that the environment variable PYTEST_ADDOPTS and the addopts ini option are handled
by pytest, not being included in the args attribute.
Plugins accessing InvocationParams must be aware of that.
args
The command-line arguments as passed to pytest.main().
Type Tuple[str, ...]
plugins
Extra plugins, might be None.
Type Optional[Sequence[Union[str, plugin]]]
dir
The directory from which pytest.main() was invoked.
Type pathlib.Path
option
Access to command line option as attributes.
Type argparse.Namespace
invocation_params
The parameters with which pytest was invoked.
Type InvocationParams
pluginmanager
The plugin manager handles plugin registration and hook invocation.
Type PytestPluginManager
invocation_dir
The directory from which pytest was invoked.
Prefer to use invocation_params.dir, which is a pathlib.Path.
Type py.path.local
rootpath
The path to the rootdir.
Type pathlib.Path
New in version 6.1.
rootdir
The path to the rootdir.
Prefer to use rootpath, which is a pathlib.Path.
Type py.path.local
inipath
The path to the configfile.
Type Optional[pathlib.Path]
New in version 6.1.
inifile
The path to the configfile.
Prefer to use inipath, which is a pathlib.Path.
Type Optional[py.path.local]
add_cleanup(func: Callable[[], None]) → None
Add a function to be called when the config object gets out of use (usually coinciding with
pytest_unconfigure).
classmethod fromdictargs(option_dict, args) → _pytest.config.Config
Constructor usable for subprocesses.
for ... in pytest_collection() → Generator[None, None, None]
Validate invalid ini keys after collection is done so we take into account options added by late-loading conftest files.
issue_config_time_warning(warning: Warning, stacklevel: int) → None
Issue and handle a warning during the “configure” stage.
During pytest_configure we can’t capture warnings using the catch_warnings_for_item
function because it is not possible to have hookwrappers around pytest_configure.
This function is mainly intended for plugins that need to issue warnings during pytest_configure
(or similar stages).
Parameters
• warning – The warning instance.
• stacklevel – stacklevel forwarded to warnings.warn.
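A sketch of a plugin issuing a warning at configure time; the message text is illustrative:

import pytest

def pytest_configure(config):
    config.issue_config_time_warning(
        pytest.PytestWarning("example plugin is running in compatibility mode"),
        stacklevel=2,
    )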
22.5.6 ExceptionInfo
22.5.7 ExitCode
22.5.8 File
class File
Bases: _pytest.nodes.FSCollector
Base class for collecting tests from a file.
Working with non-python tests.
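A sketch of a conftest.py that collects tests from a custom file type by subclassing File and Item; the .simple extension and the pass/fail rule are made up for illustration:

# conftest.py
import pytest

class SimpleItem(pytest.Item):
    def __init__(self, *, spec, **kwargs):
        super().__init__(**kwargs)
        self.spec = spec

    def runtest(self):
        # hypothetical rule: a line reading "fail" is a failing test
        assert self.spec != "fail"

class SimpleFile(pytest.File):
    def collect(self):
        with self.fspath.open() as f:
            for lineno, line in enumerate(f):
                if line.strip():
                    yield SimpleItem.from_parent(self, name=f"line{lineno}", spec=line.strip())

def pytest_collect_file(path, parent):
    if path.ext == ".simple":
        return SimpleFile.from_parent(parent, fspath=path)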
22.5.9 FixtureDef
22.5.10 FSCollector
class FSCollector
Bases: _pytest.nodes.Collector
classmethod from_parent(parent, *, fspath, **kw)
The public constructor.
22.5.11 Function
class Function
Bases: _pytest.python.PyobjMixin, _pytest.nodes.Item
An Item responsible for setting up and executing a Python test function.
param name: The full function name, including any decorations like those added by parametrization
(my_func[my_param]).
param parent: The parent Node.
param config: The pytest Config object.
param callspec: If given, this function has been parametrized and the callspec contains meta information about the parametrization.
param callobj: If given, the object which will be called when the Function is invoked, otherwise the callobj
will be obtained from parent using originalname.
param keywords: Keywords bound to the function object for “-k” matching.
param session: The pytest Session object.
param fixtureinfo: Fixture information already resolved at this fixture node.
param originalname: The attribute name to use for accessing the underlying function object. Defaults to
name. Set this if name is different from the original name, for example when it contains decorations like
those added by parametrization (my_func[my_param]).
originalname
Original function name, without any decorations (for example parametrization adds a "[...]" suffix to
function names), used to access the underlying function object from parent (in case callobj is not
given explicitly).
New in version 3.0.
classmethod from_parent(parent, **kw)
The public constructor.
function
Underlying python ‘function’ object.
runtest() → None
Execute the underlying test function.
repr_failure(excinfo: _pytest._code.code.ExceptionInfo[BaseException]) → Union[str, _pytest._code.code.TerminalRepr]
Return a representation of a collection or test failure.
Parameters excinfo – Exception information for the failure.
22.5.12 FunctionDefinition
class FunctionDefinition
Bases: _pytest.python.Function
This class is a stopgap solution until we evolve to have actual function definition nodes and manage to get rid of metafunc.
runtest() → None
Execute the underlying test function.
setup() → None
Execute the underlying test function.
22.5.13 Item
class Item
Bases: _pytest.nodes.Node
A basic test invocation item.
Note that for a single function there might be multiple test invocation items.
user_properties: List[Tuple[str, object]]
A list of tuples (name, value) that holds user defined properties for this test.
add_report_section(when: str, key: str, content: str) → None
Add a new report section, similar to what’s done internally to add stdout and stderr captured output:
Parameters
• when (str) – One of the possible capture states, "setup", "call", "teardown".
• key (str) – Name of the section, can be customized at will. Pytest uses "stdout" and
"stderr" internally.
• content (str) – The full contents as a string.
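For example, a conftest.py hook could attach an extra section during the setup phase; the key and content below are illustrative:

# conftest.py
def pytest_runtest_setup(item):
    item.add_report_section("setup", "notes", "environment: staging (example)")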
22.5.14 MarkDecorator
@mark2
def test_function():
pass
1. If called with a single class as its only positional argument and no additional keyword arguments, it attaches
the mark to the class so it gets applied automatically to all test cases found in that class.
2. If called with a single function as its only positional argument and no additional keyword arguments, it attaches the mark to the function, containing all the arguments already stored internally in the MarkDecorator.
3. When called in any other case, it returns a new MarkDecorator instance with the original MarkDecorator’s
content updated with the arguments passed to this call.
Note: The rules above prevent MarkDecorators from storing only a single function or class reference as their
positional argument with no additional keyword or positional arguments. You can work around this by using
with_args().
name
Alias for mark.name.
args
Alias for mark.args.
kwargs
Alias for mark.kwargs.
with_args(*args: object, **kwargs: object) → _pytest.mark.structures.MarkDecorator
Return a MarkDecorator with extra arguments added.
Unlike calling the MarkDecorator, with_args() can be used even if the sole argument is a callable/class.
Return type MarkDecorator
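A sketch showing why with_args() is needed when the sole argument is a class; the mark name and payload class are hypothetical (register the mark in the markers ini option to avoid PytestUnknownMarkWarning):

import pytest

class Payload:
    pass

# pytest.mark.record(Payload) would attach the mark to Payload itself (rule 1 above);
# with_args() stores Payload as the mark's positional argument instead.
record_payload = pytest.mark.record.with_args(Payload)

@record_payload
def test_something():
    pass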
22.5.15 MarkGenerator
import pytest
@pytest.mark.slowtest
def test_function():
pass
22.5.16 Mark
final class Mark(name: str, args: Tuple[Any, ...], kwargs: Mapping[str, Any], param_ids_from: Optional[Mark] = None, param_ids_generated: Optional[Sequence[str]] = None)
name
Name of the mark.
args
Positional arguments of the mark decorator.
kwargs
Keyword arguments of the mark decorator.
22.5.17 Metafunc
that it can perform more expensive setups during the setup phase of a test rather than at
collection time.
• ids – Sequence of (or generator for) ids for argvalues, or a callable to return part of
the id for each argvalue.
With sequences (and generators like itertools.count()) the returned ids should be
of type string, int, float, bool, or None. They are mapped to the corresponding
index in argvalues. None means to use the auto-generated id.
If it is a callable it will be called for each entry in argvalues, and the return value is used as part of the auto-generated id for the whole set (where parts are joined with dashes ("-")). This is useful to provide more specific ids for certain items, e.g. dates. Returning None will use an auto-generated id. A short sketch of the callable form follows this parameter list.
If no ids are provided they will be generated automatically from the argvalues.
• scope – If specified, it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing a dynamic scope to be set using test context or configuration.
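The callable form of ids can be sketched like this; the date handling is illustrative:

import datetime
import pytest

def idfn(val):
    if isinstance(val, datetime.date):
        return val.isoformat()  # used as part of the generated id
    return None  # fall back to the auto-generated id

@pytest.mark.parametrize("when", [datetime.date(2021, 1, 1), "later"], ids=idfn)
def test_schedule(when):
    assert when is not None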
22.5.18 Module
class Module
Bases: _pytest.nodes.File, _pytest.python.PyCollector
Collector for test classes and functions.
collect() → Iterable[Union[_pytest.nodes.Item, _pytest.nodes.Collector]]
Return a list of children (items and collectors) for this collection node.
22.5.19 Node
class Node
Base class for Collector and Item, the components of the test collection tree.
Collector subclasses have children; Items are leaf nodes.
name
A unique name within the scope of the parent node.
parent
The parent collector node.
fspath
Filesystem path where this node was collected from (can be None).
keywords
Keywords/markers collected from all scopes.
own_markers: List[_pytest.mark.structures.Mark]
The marker objects belonging to this node.
extra_keyword_matches: Set[str]
Allow adding of extra keywords to use for matching.
classmethod from_parent(parent: _pytest.nodes.Node, **kw)
Public constructor for Nodes.
This indirection got introduced in order to enable removing the fragile logic from the node constructors.
node.warn(PytestWarning("some message"))
node.warn(UserWarning("some message"))
Changed in version 6.2: Any subclass of Warning is now accepted, rather than only PytestWarning
subclasses.
nodeid
A ::-separated string denoting its collection tree address.
listchain() → List[_pytest.nodes.Node]
Return list of all parent collectors up to self, starting from the root of collection tree.
add_marker(marker: Union[str, _pytest.mark.structures.MarkDecorator], append: bool = True) → None
Dynamically add a marker object to the node.
Parameters append – Whether to append the marker, or prepend it.
iter_markers(name: Optional[str] = None) → Iterator[_pytest.mark.structures.Mark]
Iterate over all markers of the node.
Parameters name – If given, filter the results by the name attribute.
for ... in iter_markers_with_node(name: Optional[str] = None) → Iterator[Tuple[_pytest.nodes.Node, _pytest.mark.structures.Mark]]
Iterate over all markers of the node.
Parameters name – If given, filter the results by the name attribute.
Returns An iterator of (node, mark) tuples.
get_closest_marker(name: str) → Optional[_pytest.mark.structures.Mark]
get_closest_marker(name: str, default: _pytest.mark.structures.Mark) → _pytest.mark.structures.Mark
Return the first marker matching the name, from closest (for example function) to farther level (for example
module level).
Parameters
• default – Fallback return value if no marker was found.
• name – Name to filter by.
listextrakeywords() → Set[str]
Return a set of all extra keywords in self and any parents.
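A sketch combining these marker APIs in a conftest.py; the mark names and the skip policy are hypothetical:

# conftest.py
import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        slow = item.get_closest_marker("slow")
        if slow is not None and item.get_closest_marker("nightly") is None:
            item.add_marker(pytest.mark.skip(reason="slow tests only run nightly"))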
22.5.20 Parser
Type Type of the variable, can be string, pathlist, args, linelist or bool. Defaults
to string if None or not passed.
Default Default value if no ini-file option exists but is queried.
The value of ini-variables can be retrieved via a call to config.getini(name).
22.5.21 PytestPluginManager
check_pending()
Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise
PluginValidationError.
enable_tracing()
Enable tracing of hook calls and return an undo function.
get_canonical_name(plugin)
Return the canonical name for a plugin object. Note that a plugin may be registered under a different name which was specified by the caller of register(plugin, name). To obtain the name of a registered plugin use get_name(plugin) instead.
get_hookcallers(plugin)
Get all hook callers for the specified plugin.
get_name(plugin)
Return name for registered plugin or None if not registered.
get_plugin(name)
Return a plugin or None for the given name.
get_plugins()
Return the set of registered plugins.
has_plugin(name)
Return True if a plugin with the given name is registered.
is_blocked(name)
Return True if the given plugin name is blocked.
is_registered(plugin)
Return True if the plugin is already registered.
list_name_plugin()
Return a list of name/plugin pairs.
list_plugin_distinfo()
Return a list of distinfo/plugin tuples for all setuptools-registered plugins.
load_setuptools_entrypoints(group, name=None)
Load modules from querying the specified setuptools group.
Parameters
• group (str) – Entry point group to load plugins from.
• name (str) – If given, loads only plugins with the given name.
Return type int
Returns The number of plugins loaded by this call.
set_blocked(name)
Block registrations of the given name, unregister if already registered.
subset_hook_caller(name, remove_plugins)
Return a new _hooks._HookCaller instance for the named method which manages calls to all regis-
tered plugins except the ones from remove_plugins.
unregister(plugin=None, name=None)
Unregister a plugin object and all of its contained hook implementations from internal data structures.
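Plugins usually reach this object through config.pluginmanager; a sketch with a hypothetical requirement check:

# conftest.py
import pytest

def pytest_configure(config):
    if not config.pluginmanager.has_plugin("xdist"):
        raise pytest.UsageError("this test suite expects pytest-xdist to be installed")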
22.5.22 Session
22.5.23 TestReport
Note: This function is considered experimental, so beware that it is subject to changes even in patch
releases.
head_line
Experimental: The head line shown with longrepr output for this report, most commonly during traceback representation of failures.
Note: This function is considered experimental, so beware that it is subject to changes even in patch
releases.
longreprtext
Read-only property that returns the full string representation of longrepr.
New in version 3.0.
22.5.24 _Result
Result object used within hook wrappers, see _Result in the pluggy documentation for more information.
pytest treats some global variables in a special manner when they are defined in test modules or conftest.py files.
collect_ignore
Tutorial: Customizing test collection
Can be declared in conftest.py files to exclude test directories or modules. Needs to be list[str].
collect_ignore = ["setup.py"]
collect_ignore_glob
Tutorial: Customizing test collection
Can be declared in conftest.py files to exclude test directories or modules with Unix shell-style wildcards. Needs to be
list[str] where str can contain glob patterns.
collect_ignore_glob = ["*_ignore.py"]
pytest_plugins
Tutorial: Requiring/Loading plugins in a test module or conftest file
Can be declared at the global level in test modules and conftest.py files to register additional plugins. Can be either a
str or Sequence[str].
pytest_plugins = "myapp.testsupport.myplugin"
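The Sequence[str] form might look like this (the second plugin name is hypothetical):
pytest_plugins = ("myapp.testsupport.myplugin", "myapp.testsupport.otherplugin")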
pytestmark
Tutorial: Marking whole classes or modules
Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can
be either a single mark or a list of marks (applied in left-to-right order).
import pytest
pytestmark = pytest.mark.webtest
import pytest
pytestmark = [pytest.mark.integration, pytest.mark.slow]
PYTEST_PLUGINS
Contains a comma-separated list of modules that should be loaded as plugins:
export PYTEST_PLUGINS=mymodule.plugin,xdist
PY_COLORS
When set to 1, pytest will use color in terminal output. When set to 0, pytest will not use color. PY_COLORS takes
precedence over NO_COLOR and FORCE_COLOR.
NO_COLOR
When set (regardless of value), pytest will not use color in terminal output. PY_COLORS takes precedence over NO_COLOR, which takes precedence over FORCE_COLOR. See no-color.org for other libraries supporting this community standard.
FORCE_COLOR
When set (regardless of value), pytest will use color in terminal output. PY_COLORS and NO_COLOR take precedence
over FORCE_COLOR.
22.8 Exceptions
22.9 Warnings
Custom warnings generated in some situations such as improper usage or deprecated features.
class PytestWarning
Bases: UserWarning
Base class for all warnings emitted by pytest.
class PytestAssertRewriteWarning
Bases: pytest.PytestWarning
Warning emitted by the pytest assert rewrite module.
class PytestCacheWarning
Bases: pytest.PytestWarning
Warning emitted by the cache plugin in various situations.
class PytestCollectionWarning
Bases: pytest.PytestWarning
Warning emitted when pytest is not able to collect a file or symbol in a module.
class PytestConfigWarning
Bases: pytest.PytestWarning
Warning emitted for configuration issues.
class PytestDeprecationWarning
Bases: pytest.PytestWarning, DeprecationWarning
Warning class for features that will be removed in a future version.
class PytestExperimentalApiWarning
Bases: pytest.PytestWarning, FutureWarning
Warning category used to denote experiments in pytest.
Use sparingly as the API might change or even be removed completely in a future version.
class PytestUnhandledCoroutineWarning
Bases: pytest.PytestWarning
Warning emitted for an unhandled coroutine.
A coroutine was encountered when collecting test functions, but was not handled by any async-aware plugin.
Coroutine test functions are not natively supported.
class PytestUnknownMarkWarning
Bases: pytest.PytestWarning
Warning emitted on use of unknown markers.
See Marking test functions with attributes for details.
class PytestUnraisableExceptionWarning
Bases: pytest.PytestWarning
An unraisable exception was reported.
Unraisable exceptions are exceptions raised in __del__ implementations and similar situations when the exception cannot be raised as normal.
class PytestUnhandledThreadExceptionWarning
Bases: pytest.PytestWarning
An unhandled exception occurred in a Thread.
Such exceptions don’t propagate normally.
Consult the Internal pytest warnings section in the documentation for more information.
Here is a list of builtin configuration options that may be written in a pytest.ini, pyproject.toml, tox.ini or setup.cfg file, usually located at the root of your repository. To see each file format in detail, see Configuration file formats.
Warning: Usage of setup.cfg is not recommended except for very simple use cases. .cfg files use a different
parser than pytest.ini and tox.ini which might cause hard to track down problems. When possible, it is
recommended to use the latter files, or pyproject.toml, to hold your pytest configuration.
Configuration options may be overwritten on the command line by using -o/--override-ini, which can also be passed multiple times. The expected format is name=value. For example:
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
addopts
Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example:
if you have this ini file content:
# content of pytest.ini
[pytest]
addopts = --maxfail=2 -rf # exit after 2 failures, report fail info
• count: like progress, but shows progress as the number of tests completed instead of a percent.
The default is progress, but you can fall back to classic if you prefer it or if the new mode is causing unexpected problems:
# content of pytest.ini
[pytest]
console_output_style = classic
doctest_encoding
Default encoding to use to decode text files with docstrings. See how pytest handles doctests.
doctest_optionflags
One or more doctest flag names from the standard doctest module. See how pytest handles doctests.
empty_parameter_set_mark
Allows picking the action for empty parametersets in parameterization:
• skip skips tests with an empty parameterset (default)
• xfail marks tests with an empty parameterset as xfail(run=False)
• fail_at_collect raises an exception if parametrize collects an empty parameter set
# content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail
Note: The default value of this option is planned to change to xfail in future releases as this is considered
less error prone, see #3155 for more details.
faulthandler_timeout
Dumps the tracebacks of all threads if a test takes longer than X seconds to run (including fixture setup and
teardown). Implemented using the faulthandler.dump_traceback_later function, so all caveats there apply.
# content of pytest.ini
[pytest]
faulthandler_timeout=5
# content of pytest.ini
[pytest]
filterwarnings =
error
ignore::DeprecationWarning
This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information
please refer to Warnings Capture.
junit_duration_report
New in version 4.1.
Configures how durations are recorded into the JUnit XML report:
• total (the default): duration times reported include setup, call, and teardown times.
• call: duration times reported include only call times, excluding setup and teardown.
[pytest]
junit_duration_report = call
junit_family
New in version 4.2.
Changed in version 6.1: Default changed to xunit2.
Configures the format of the generated JUnit XML file. The possible options are:
• xunit1 (or legacy): produces old style output, compatible with the xunit 1.0 format.
• xunit2: produces xunit 2.0 style output, which should be more compatible with latest Jenkins versions.
This is the default.
[pytest]
junit_family = xunit2
junit_logging
New in version 3.5.
Changed in version 5.4: log, all, out-err options added.
Configures if captured output should be written to the JUnit XML file. Valid values are:
• log: write only logging captured output.
• system-out: write captured stdout contents.
• system-err: write captured stderr contents.
• out-err: write both captured stdout and stderr contents.
• all: write captured logging, stdout and stderr contents.
• no (the default): no captured output is written.
[pytest]
junit_logging = system-out
junit_log_passing_tests
New in version 4.6.
If junit_logging != "no", configures if the captured output should be written to the JUnit XML file for
passing tests. Default is True.
[pytest]
junit_log_passing_tests = False
junit_suite_name
To set the name of the root test suite xml item, you can configure the junit_suite_name option in your
config file:
[pytest]
junit_suite_name = my_suite
log_auto_indent
Allow selective auto-indentation of multiline log messages.
Supports command line option --log-auto-indent [value] and config option log_auto_indent
= [value] to set the auto-indentation behavior for all logging.
[value] can be:
• True or “On” - Dynamically auto-indent multiline log messages
• False or “Off” or 0 - Do not auto-indent multiline log messages (the default behavior)
• [positive integer] - auto-indent multiline log messages by [value] spaces
[pytest]
log_auto_indent = False
[pytest]
log_cli = True
log_cli_date_format
Sets a time.strftime()-compatible string that will be used when formatting dates for live logging.
[pytest]
log_cli_date_format = %Y-%m-%d %H:%M:%S
[pytest]
log_cli_format = %(asctime)s %(levelname)s %(message)s
[pytest]
log_cli_level = INFO
[pytest]
log_date_format = %Y-%m-%d %H:%M:%S
[pytest]
log_file = logs/pytest-logs.txt
[pytest]
log_file_date_format = %Y-%m-%d %H:%M:%S
[pytest]
log_file_format = %(asctime)s %(levelname)s %(message)s
[pytest]
log_file_level = INFO
[pytest]
log_format = %(asctime)s %(levelname)s %(message)s
[pytest]
log_level = INFO
[pytest]
addopts = --strict-markers
markers =
slow
serial
Note: The use of --strict-markers is highly preferred. --strict was kept for backward compatibility
only and may be confusing for others as it only applies to markers and not to other options.
minversion
Specifies a minimal pytest version required for running tests.
# content of pytest.ini
[pytest]
minversion = 3.0 # will fail if we run with pytest-2.8
norecursedirs
Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:
* matches everything
? matches any single character
[seq] matches any character in seq
[!seq] matches any char not in seq
Default patterns are '*.egg', '.*', '_darcs', 'build', 'CVS', 'dist', 'node_modules',
'venv', '{arch}'. Setting norecursedirs replaces the default. Here is an example of how to avoid
certain directories:
[pytest]
norecursedirs = .svn _build tmp*
This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed
directory.
Additionally, pytest will attempt to intelligently identify and ignore a virtualenv by the presence of an activa-
tion script. Any directory deemed to be the root of a virtual environment will not be considered during test col-
lection unless --collect-in-virtualenv is given. Note also that norecursedirs takes precedence
over --collect-in-virtualenv; e.g. if you intend to run tests in a virtualenv with a base directory that
matches '.*' you must override norecursedirs in addition to using the --collect-in-virtualenv
flag.
python_classes
One or more name prefixes or glob-style patterns determining which classes are considered for test collection.
Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any class
prefixed with Test as a test collection. Here is an example of how to collect tests from classes that end in
Suite:
[pytest]
python_classes = *Suite
Note that unittest.TestCase derived classes are always collected regardless of this option, as
unittest’s own collection framework is used to collect those tests.
python_files
One or more Glob-style file patterns determining which python files are considered as test modules. Search for
multiple glob patterns by adding a space between patterns:
[pytest]
python_files = test_*.py check_*.py example_*.py
[pytest]
python_files =
test_*.py
check_*.py
example_*.py
By default, files matching test_*.py and *_test.py will be considered test modules.
python_functions
One or more name prefixes or glob-patterns determining which test functions and methods are considered tests.
Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any
function prefixed with test as a test. Here is an example of how to collect test functions and methods that end
in _test:
[pytest]
python_functions = *_test
Note that this has no effect on methods that live on a unittest.TestCase derived class, as unittest’s
own collection framework is used to collect those tests.
See Changing naming conventions for more detailed examples.
required_plugins
A space-separated list of plugins that must be present for pytest to run. Plugins can be listed with or without
version specifiers directly following their name. Whitespace between different version specifiers is not allowed.
If any one of the plugins is not found, emit an error.
[pytest]
required_plugins = pytest-django>=3.0.0,<4.0.0 pytest-html pytest-xdist>=1.0.0
testpaths
Sets list of directories that should be searched for tests when no specific directories, files or test ids are given in
the command line when executing pytest from the rootdir directory. Useful when all project tests are in a known
location to speed up test collection and to avoid picking up undesired tests by accident.
[pytest]
testpaths = testing doc
This tells pytest to only look for tests in testing and doc directories when executing from the root directory.
usefixtures
List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.
[pytest]
usefixtures =
clean_db
xfail_strict
If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test
suite. For more information, see strict parameter.
[pytest]
xfail_strict = True
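With this option enabled, an unexpectedly passing xfail test is reported as a failure ([XPASS(strict)]); a sketch, with a hypothetical bug reference:

import pytest

@pytest.mark.xfail(reason="known bug in frobnicate() (hypothetical)")
def test_frobnicate():
    assert 1 + 1 == 2  # passes, so with xfail_strict = True the suite fails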
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
positional arguments:
file_or_dir
general:
-k EXPRESSION only run tests which match the given substring
expression. An expression is a python evaluatable
expression where all names are substring-matched
against test names and their parent classes.
Example: -k 'test_method or test_other' matches all
test functions and classes whose name contains
'test_method' or 'test_other', while -k 'not
test_method' matches those that don't contain
'test_method' in their names. -k 'not test_method
and not test_other' will eliminate the matches.
Additionally keywords are matched to classes and
functions containing extra names in their
'extra_keyword_matches' set, as well as functions
which have names assigned directly to them. The
matching is case-insensitive.
-m MARKEXPR only run tests matching given mark expression.
For example: -m 'mark1 and not mark2'.
--markers show markers (builtin, plugin and per-project ones).
-x, --exitfirst exit instantly on first error or failed test.
--fixtures, --funcargs
show available fixtures, sorted by plugin appearance
(fixtures with leading '_' are only shown with '-v')
--fixtures-per-test show fixtures per test
--pdb start the interactive Python debugger on errors or
KeyboardInterrupt.
--pdbcls=modulename:classname
start a custom interactive Python debugger on
errors. For example:
--pdbcls=IPython.terminal.debugger:TerminalPdb
--trace Immediately break when running each test.
--capture=method per-test capturing method: one of fd|sys|no|tee-sys.
-s shortcut for --capture=no.
--runxfail report the results of xfail tests as if they were
not marked
--lf, --last-failed rerun only the tests that failed at the last run (or
all if none failed)
--ff, --failed-first run all tests, but run the last failures first.
This may re-order tests and thus lead to repeated
fixture setup/teardown.
--nf, --new-first run tests from new files first, then the rest of the
tests sorted by file mtime
--cache-show=[CACHESHOW]
show cache contents, don't perform collection or
tests. Optional argument: glob (default: '*').
--cache-clear remove all cache contents at start of test run.
--lfnf={all,none}, --last-failed-no-failures={all,none}
reporting:
--durations=N show N slowest setup/test durations (N=0 for all).
--durations-min=N Minimal duration in seconds for inclusion in slowest
list. Default 0.005
-v, --verbose increase verbosity.
--no-header disable header
--no-summary disable summary
-q, --quiet decrease verbosity.
--verbosity=VERBOSE set verbosity. Default is 0.
-r chars show extra test summary info as specified by chars:
(f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed,
(p)assed, (P)assed with output, (a)ll except passed
(p/P), or (A)ll. (w)arnings are enabled by default
(see --disable-warnings), 'N' can be used to reset
the list. (default: 'fE').
--disable-warnings, --disable-pytest-warnings
disable warnings summary
-l, --showlocals show locals in tracebacks (disabled by default).
--tb=style traceback print mode
(auto/long/short/line/native/no).
--show-capture={no,stdout,stderr,log,all}
Controls how captured stdout/stderr/log is shown on
failed tests. Default is 'all'.
--full-trace don't cut any tracebacks (default is to cut).
--color=color color terminal output (yes/no/auto).
--code-highlight={yes,no}
Whether code should be highlighted (only if --color
is also enabled)
--pastebin=mode send failed|all info to bpaste.net pastebin service.
--junit-xml=path create junit-xml style report file at given path.
--junit-prefix=str prepend prefix to classnames in junit-xml output
pytest-warnings:
-W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
set which warnings to report, see -W option of
python itself.
--maxfail=num exit after first num failures or errors.
--strict-config any warnings encountered while parsing the `pytest`
section of the configuration file raise errors.
--strict-markers markers not registered in the `markers` section of
the configuration file raise errors.
--strict (deprecated) alias to --strict-markers.
-c file load configuration from `file` instead of trying to
locate one of the implicit configuration files.
--continue-on-collection-errors
Force test execution even if collection errors
occur.
--rootdir=ROOTDIR Define root directory for tests. Can be relative
path: 'root_dir', './root_dir',
collection:
--collect-only, --co only collect tests, don't execute them.
--pyargs try to interpret all arguments as python packages.
--ignore=path ignore path during collection (multi-allowed).
--ignore-glob=path ignore path pattern during collection (multi-
allowed).
--deselect=nodeid_prefix
deselect item (via node id prefix) during collection
(multi-allowed).
--confcutdir=dir only load conftest.py's relative to specified dir.
--noconftest Don't load any conftest.py files.
--keep-duplicates Keep duplicate tests.
--collect-in-virtualenv
Don't ignore tests in a local virtualenv directory
--import-mode={prepend,append,importlib}
prepend/append to sys.path when importing test
modules and conftest files, default is to prepend.
--doctest-modules run doctests in all .py modules
--doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
choose another output format for diffs on doctest
failure
--doctest-glob=pat doctests file matching pattern, default: test*.txt
--doctest-ignore-import-errors
ignore doctest ImportErrors
--doctest-continue-on-failure
for a given doctest, continue to run after the first
failure
logging:
--log-level=LEVEL level of messages to catch/display.
Not set by default, so it depends on the root/parent
log handler's effective level, where it is "WARNING"
by default.
--log-format=LOG_FORMAT
log format as used by the logging module.
--log-date-format=LOG_DATE_FORMAT
log date format as used by the logging module.
--log-cli-level=LOG_CLI_LEVEL
cli logging level.
--log-cli-format=LOG_CLI_FORMAT
log format as used by the logging module.
--log-cli-date-format=LOG_CLI_DATE_FORMAT
log date format as used by the logging module.
--log-file=LOG_FILE path to a file when logging will be written to.
--log-file-level=LOG_FILE_LEVEL
log file logging level.
--log-file-format=LOG_FILE_FORMAT
log format as used by the logging module.
--log-file-date-format=LOG_FILE_DATE_FORMAT
log date format as used by the logging module.
--log-auto-indent=LOG_AUTO_INDENT
Auto-indent multiline messages passed to the logging
module. Accepts true|on, false|off or an integer.
environment variables:
PYTEST_ADDOPTS extra command line options
PYTEST_PLUGINS comma-separated plugins to load during startup
PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
PYTEST_DEBUG set to enable debug tracing of pytest's internals
TWENTYTHREE
GOOD INTEGRATION PRACTICES
For development, we recommend you use venv for virtual environments and pip for installing your application and
any dependencies, as well as the pytest package itself. This ensures your code and dependencies are isolated from
your system Python installation.
Next, place a setup.py file in the root of your package with the following minimum content:
from setuptools import find_packages, setup
setup(name="PACKAGENAME", packages=find_packages())
Where PACKAGENAME is the name of your package. You can then install your package in “editable” mode by running
from the same directory:
pip install -e .
which lets you change your source code (both tests and application) and rerun tests at will. This is similar to running python setup.py develop or conda develop in that it installs your package using a symlink to your development code.
Putting tests into an extra directory outside your actual application code might be useful if you have many functional
tests or for other reasons want to keep tests separate from actual application code (often a good idea):
setup.py
mypkg/
__init__.py
app.py
view.py
tests/
test_app.py
test_view.py
...
Note: See Invoking pytest versus python -m pytest for more information about the difference between calling pytest
and python -m pytest.
Note that this scheme has a drawback if you are using prepend import mode (which is the default): your test files
must have unique names, because pytest will import them as top-level modules since there are no packages to
derive a full package name from. In other words, the test files in the example above will be imported as test_app
and test_view top-level modules by adding tests/ to sys.path.
If you need to have test modules with the same name, you might add __init__.py files to your tests folder and
subfolders, changing them to packages:
setup.py
mypkg/
...
tests/
__init__.py
foo/
__init__.py
test_view.py
bar/
__init__.py
test_view.py
Now pytest will load the modules as tests.foo.test_view and tests.bar.test_view, allowing you to
have modules with the same name. But now this introduces a subtle problem: in order to load the test modules from
the tests directory, pytest prepends the root of the repository to sys.path, which adds the side-effect that now
mypkg is also importable.
This is problematic if you are using a tool like tox to test your package in a virtual environment, because you want to
test the installed version of your package, not the local code from the repository.
In this situation, it is strongly suggested to use a src layout where the application root package resides in a sub-directory of your root:
setup.py
src/
mypkg/
__init__.py
app.py
view.py
tests/
__init__.py
foo/
__init__.py
test_view.py
bar/
__init__.py
test_view.py
This layout prevents a lot of common pitfalls and has many benefits, which are better explained in this excellent blog post by Ionel Cristian Mărieș.
Note: The new --import-mode=importlib (see Import modes) doesn’t have any of the drawbacks above
because sys.path and sys.modules are not changed when importing test modules, so users that run into this
issue are strongly encouraged to try it and report if the new option works well for them.
The src directory layout is still strongly recommended however.
Inlining test directories into your application package is useful if you have direct relation between tests and application
modules and want to distribute them along with your application:
setup.py
mypkg/
__init__.py
app.py
view.py
test/
__init__.py
test_app.py
test_view.py
...
In this scheme, it is easy to run your tests using the --pyargs option:
pytest --pyargs mypkg
pytest will discover where mypkg is installed and collect tests from there.
Note that this layout also works in conjunction with the src layout mentioned in the previous section.
Note: You can use Python 3 namespace packages (PEP 420) for your application but pytest will still perform test package name discovery based on the presence of __init__.py files. If you use one of the two recommended file system layouts above but leave out the __init__.py files from your directories, it should just work on Python 3.3 and above. For “inlined tests”, however, you will need to use absolute imports to reach your application code.
Note: In prepend and append import modes, if pytest finds an "a/b/test_module.py" test file while recursing into the filesystem, it determines the import name as follows:
• determine basedir: this is the first “upward” (towards the root) directory not containing an __init__.py.
If e.g. both a and b contain an __init__.py file then the parent directory of a will become the basedir.
• perform sys.path.insert(0, basedir) to make the test module importable under the fully qualified
import name.
• import a.b.test_module where the path is determined by converting path separators / into "." characters. This means you must follow the convention of having directory and file names map directly to the import names.
The reason for this somewhat evolved importing technique is that in larger projects multiple test modules might import
from each other and thus deriving a canonical import name helps to avoid surprises such as a test module getting
imported twice.
With --import-mode=importlib things are less convoluted because pytest doesn’t need to change sys.path
or sys.modules, making things much less surprising.
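A rough sketch of the basedir and import-name derivation described in the note above, for the prepend and append modes (illustrative only, not pytest's actual implementation):

import os

def derive_import_name(path):
    # walk upwards while __init__.py files are present to find the basedir
    basedir = os.path.dirname(os.path.abspath(path))
    while os.path.exists(os.path.join(basedir, "__init__.py")):
        basedir = os.path.dirname(basedir)
    relpath = os.path.relpath(os.path.abspath(path), basedir)
    module, _ = os.path.splitext(relpath)
    # pytest would perform sys.path.insert(0, basedir) before importing
    return module.replace(os.sep, ".")

# e.g. derive_import_name("a/b/test_module.py") -> "a.b.test_module"
# when both a/ and b/ contain an __init__.py and their parent does not.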
23.4 tox
Once you are done with your work and want to make sure that your actual package passes all tests, you may want to look into tox, the virtualenv test automation tool and its pytest support. tox helps you set up virtualenv environments with pre-defined dependencies and then execute a pre-configured test command with options. It will run tests against the installed package and not against your source code checkout, helping to detect packaging glitches.
TWENTYFOUR
FLAKY TESTS
A “flaky” test is one that exhibits intermittent or sporadic failure and seems to have non-deterministic behaviour.
Sometimes it passes, sometimes it fails, and it’s not clear why. This page discusses pytest features that can help and
other general strategies for identifying, fixing or mitigating them.
Flaky tests are particularly troublesome when a continuous integration (CI) server is being used, so that all tests must
pass before a new code change can be merged. If the test result is not a reliable signal – that a test failure means
the code change broke the test – developers can become mistrustful of the test results, which can lead to overlooking
genuine failures. It is also a source of wasted time as developers must re-run test suites and investigate spurious
failures.
Broadly speaking, a flaky test indicates that the test relies on some system state that is not being appropriately controlled - the test environment is not sufficiently isolated. Higher level tests are more likely to be flaky as they rely on
more state.
Flaky tests sometimes appear when a test suite is run in parallel (such as use of pytest-xdist). This can indicate a test
is reliant on test ordering.
• Perhaps a different test is failing to clean up after itself and leaving behind data which causes the flaky test to
fail.
• The flaky test is reliant on data from a previous test that doesn’t clean up after itself, and in parallel runs that
previous test is not always present
• Tests that modify global state typically cannot be run in parallel.
Overly strict assertions can cause problems with floating point comparison as well as timing issues. pytest.approx is
useful here.
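For example, a tolerance-based comparison avoids intermittent failures from floating point rounding:

import pytest

def test_sum_of_floats():
    assert 0.1 + 0.2 == pytest.approx(0.3)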
pytest.mark.xfail with strict=False can be used to mark a test so that its failure does not cause the whole build to
break. This could be considered like a manual quarantine, and is rather dangerous to use permanently.
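A sketch of such a manual quarantine; the reason text is illustrative:

import pytest

@pytest.mark.xfail(strict=False, reason="flaky, tracked in the issue tracker")
def test_sometimes_fails():
    ...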
24.3.2 PYTEST_CURRENT_TEST
PYTEST_CURRENT_TEST may be useful for figuring out “which test got stuck”. See PYTEST_CURRENT_TEST
environment variable for more details.
24.3.3 Plugins
Rerunning any failed tests can mitigate the negative effects of flaky tests by giving them additional chances to pass, so
that the overall build does not fail. Several pytest plugins support this:
• flaky
• pytest-flakefinder - blog post
• pytest-rerunfailures
• pytest-replay: This plugin helps to reproduce locally crashes or flaky tests observed during CI runs.
Plugins to deliberately randomize tests can help expose tests with state problems:
• pytest-random-order
• pytest-randomly
It can be common to split a single test suite into two, such as unit vs integration, and only use the unit test suite as a
CI gate. This also helps keep build times manageable, as high level tests tend to be slower. However, it becomes possible for code that breaks the build to be merged, so extra vigilance is needed when monitoring the integration test results.
For UI tests, screenshots and videos are important for understanding what the state of the UI was when the test failed. pytest-splinter can be used with plugins like pytest-bdd and can save a screenshot on test failure, which can help to isolate the cause.
If the functionality is covered by other tests, perhaps the test can be removed. If not, perhaps it can be rewritten at a
lower level which will remove the flakiness or make its source more apparent.
24.4.4 Quarantine
Mark Lapierre discusses the Pros and Cons of Quarantined Tests in a post from 2018.
Azure Pipelines (the Azure cloud CI/CD tool, formerly Visual Studio Team Services or VSTS) has a feature to identify
flaky tests and rerun failed tests.
24.5 Research
This is a limited list, please submit an issue or pull request to expand it!
• Gao, Zebao, Yalan Liang, Myra B. Cohen, Atif M. Memon, and Zhen Wang. “Making system user interactive
tests repeatable: When and what should we control?.” In Software Engineering (ICSE), 2015 IEEE/ACM 37th
IEEE International Conference on, vol. 1, pp. 55-65. IEEE, 2015. PDF
• Palomba, Fabio, and Andy Zaidman. “Does refactoring of test smells induce fixing flaky tests?.” In Software
Maintenance and Evolution (ICSME), 2017 IEEE International Conference on, pp. 1-12. IEEE, 2017. PDF in
Google Drive
• Bell, Jonathan, Owolabi Legunsen, Michael Hilton, Lamyaa Eloussi, Tifany Yung, and Darko Marinov. “DeFlaker: Automatically detecting flaky tests.” In Proceedings of the 2018 International Conference on Software Engineering. 2018. PDF
24.6 Resources
– Flaky Tests at Google and How We Mitigate Them by John Micco, 2016
– Where do Google’s flaky tests come from? by Jeff Listfield, 2017
TWENTYFIVE
PYTEST IMPORT MECHANISMS AND SYS.PATH/PYTHONPATH
pytest as a testing framework needs to import test modules and conftest.py files for execution.
Importing files in Python (at least until recently) is a non-trivial process, often requiring changes to sys.path. Some
aspects of the import process can be controlled through the --import-mode command-line flag, which can assume
these values:
• prepend (default): the directory path containing each module will be inserted into the beginning of sys.
path if not already there, and then imported with the __import__ builtin.
This requires test module names to be unique when the test directory tree is not arranged in packages, because
the modules will be put in sys.modules after importing.
This is the classic mechanism, dating back from the time Python 2 was still supported.
• append: the directory containing each module is appended to the end of sys.path if not already there, and
imported with __import__.
This better allows running test modules against installed versions of a package even if the package under test has
the same import root. For example:
testing/__init__.py
testing/test_pkg_under_test.py
pkg_under_test/
the tests will run against the installed version of pkg_under_test when --import-mode=append is
used whereas with prepend they would pick up the local version. This kind of confusion is why we advocate
for using src layouts.
As with prepend, this requires test module names to be unique when the test directory tree is not arranged in packages, because the modules will be put in sys.modules after importing.
• importlib: new in pytest-6.0, this mode uses importlib to import test modules. This gives full control over
the import process, and doesn’t require changing sys.path or sys.modules at all.
For this reason this doesn’t require test module names to be unique at all, but it also makes test modules non-importable by each other. That was possible in the previous modes, for tests not residing in Python packages, because of the side-effects of changing sys.path and sys.modules mentioned above. Users who require this should turn their tests into proper packages instead.
We intend to make importlib the default in future releases.
Here’s a list of scenarios when using prepend or append import modes where pytest needs to change sys.path
in order to import test modules or conftest.py files, and the issues users might encounter because of that.
root/
|- foo/
|- __init__.py
|- conftest.py
|- bar/
|- __init__.py
|- tests/
|- __init__.py
|- test_foo.py
When executing:
pytest root/
pytest will find foo/bar/tests/test_foo.py and realize it is part of a package given that there’s an
__init__.py file in the same folder. It will then search upwards until it can find the last folder which still contains
an __init__.py file in order to find the package root (in this case foo/). To load the module, it will insert root/
to the front of sys.path (if not there already) in order to load test_foo.py as the module foo.bar.tests.test_foo.
The same logic applies to the conftest.py file: it will be imported as foo.conftest module.
Preserving the full package name is important when tests live in a package to avoid problems and to allow test modules to have duplicated names. This is also discussed in detail in Conventions for Python test discovery.
root/
|- foo/
|- conftest.py
|- bar/
|- tests/
|- test_foo.py
When executing:
pytest root/
pytest will find foo/bar/tests/test_foo.py and realize it is NOT part of a package given that there’s no
__init__.py file in the same folder. It will then add root/foo/bar/tests to sys.path in order to import
test_foo.py as the module test_foo. The same is done with the conftest.py file by adding root/foo to
sys.path to import it as conftest.
For this reason this layout cannot have test modules with the same name, as they all will be imported in the global
import namespace.
This is also discussed in detail in Conventions for Python test discovery.
Running pytest with pytest [...] instead of python -m pytest [...] yields nearly equivalent behaviour,
except that the latter will add the current directory to sys.path, which is standard python behavior.
See also Calling pytest through python -m pytest.
TWENTYSIX
CONFIGURATION
You can get help on command line options and values in INI-style configuration files by using the general help option:
pytest -h # prints options _and_ config file settings
This will display command line and configuration file settings which were registered by installed plugins.
Many pytest settings can be set in a configuration file, which by convention resides in the root of your repository or in your tests folder.
A quick example of the configuration files supported by pytest:
26.2.1 pytest.ini
pytest.ini files take precedence over other files, even when empty.
# pytest.ini
[pytest]
minversion = 6.0
addopts = -ra -q
testpaths =
tests
integration
26.2.2 pyproject.toml
# pyproject.toml
[tool.pytest.ini_options]
minversion = "6.0"
addopts = "-ra -q"
testpaths = [
    "tests",
    "integration",
]
Note: One might wonder why [tool.pytest.ini_options] instead of [tool.pytest] as is the case with
other tools.
The reason is that the pytest team intends to fully utilize the rich TOML data format for configuration in the future,
reserving the [tool.pytest] table for that. The ini_options table is being used, for now, as a bridge between
the existing .ini configuration system and the future configuration format.
26.2.3 tox.ini
tox.ini files are the configuration files of the tox project, and can also be used to hold pytest configuration if they
have a [pytest] section.
# tox.ini
[pytest]
minversion = 6.0
addopts = -ra -q
testpaths =
tests
integration
26.2.4 setup.cfg
setup.cfg files are general purpose configuration files, used originally by distutils, and can also be used to hold
pytest configuration if they have a [tool:pytest] section.
# setup.cfg
[tool:pytest]
minversion = 6.0
addopts = -ra -q
testpaths =
tests
integration
Warning: Usage of setup.cfg is not recommended except for very simple use cases. .cfg files use a different
parser than pytest.ini and tox.ini which might cause hard to track down problems. When possible, it is
recommended to use the latter files, or pyproject.toml, to hold your pytest configuration.
pytest determines a rootdir for each test run which depends on the command line arguments (specified test files,
paths) and on the existence of configuration files. The determined rootdir and configfile are printed as part of
the pytest header during startup.
Here’s a summary of what pytest uses rootdir for:
• Construct nodeids during collection; each test is assigned a unique nodeid which is rooted at the rootdir and
takes into account the full path, class name, function name and parametrization (if any).
• Is used by plugins as a stable location to store project/test run specific information; for example, the internal
cache plugin creates a .pytest_cache subdirectory in rootdir to store its cross-test run state.
rootdir is NOT used to modify sys.path/PYTHONPATH or influence how modules are imported. See pytest
import mechanisms and sys.path/PYTHONPATH for more details.
The --rootdir=path command-line option can be used to force a specific directory. Note that contrary to other
command-line options, --rootdir cannot be used with addopts inside pytest.ini because the rootdir is
used to find pytest.ini already.
• config.inipath: the determined configfile, may be None (it is named inipath for historical reasons).
New in version 6.1: The config.rootpath and config.inipath properties. They are pathlib.Path
versions of the older config.rootdir and config.inifile, which have type py.path.local, and still
exist for backward compatibility.
The rootdir is used as a reference directory for constructing test addresses (“nodeids”) and can be used also by
plugins for storing per-testrun information.
Example:
pytest path/to/testdir path/other/
will determine the common ancestor as path and then check for configuration files as follows:
Warning: Custom pytest plugin commandline arguments may include a path, as in pytest --log-output
../../test.log args. Then args is mandatory, otherwise pytest uses the folder of test.log for rootdir
determination (see also issue 1435). A dot . for referencing the current working directory is also possible.
TWENTYSEVEN
EXAMPLES AND CUSTOMIZATION TRICKS
Here is a (growing) list of examples. Contact us if you need more examples or have questions. Also take a look at
the comprehensive documentation which contains many example snippets as well. Also, pytest on stackoverflow.com
often comes with example answers.
For basic examples, see
• Installation and Getting Started for basic introductory examples
• Asserting with the assert statement for basic assertion examples
• Fixtures for basic fixture/setup examples
• Parametrizing fixtures and test functions for basic test function parametrization
• unittest.TestCase Support for basic unittest integration
• Running tests written for nose for basic nosetests integration
The following examples aim at various use cases you might encounter.
Here is a nice run of several failures and how pytest presents things:
param1 = 3, param2 = 6
failure_demo.py:19: AssertionError
def test_simple(self):
def f():
return 42
def g():
return 43
failure_demo.py:30: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
def test_simple_multiline(self):
> otherfunc_multi(42, 6 * 9)
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
failure_demo.py:14: AssertionError
___________________________ TestFailing.test_not ___________________________
def test_not(self):
def f():
return 42
failure_demo.py:39: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
def test_eq_text(self):
> assert "spam" == "eggs"
E AssertionError: assert 'spam' == 'eggs'
E - eggs
E + spam
def test_eq_similar_text(self):
> assert "foo 1 bar" == "foo 2 bar"
E AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
E - foo 2 bar
E ? ^
E + foo 1 bar
E ? ^
failure_demo.py:47: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
def test_eq_multiline_text(self):
> assert "foo\nspam\nbar" == "foo\neggs\nbar"
E AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E foo
E - eggs
E + spam
E bar
failure_demo.py:50: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
def test_eq_long_text(self):
a = "1" * 100 + "a" + "2" * 100
b = "1" * 100 + "b" + "2" * 100
> assert a == b
E AssertionError: assert '111111111111...2222222222222' == '111111111111...
˓→2222222222222'
failure_demo.py:55: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
def test_eq_long_text_multiline(self):
a = "1\n" * 100 + "a" + "2\n" * 100
b = "1\n" * 100 + "b" + "2\n" * 100
> assert a == b
E AssertionError: assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n...n2\
˓→n2\n2\n2\n'
failure_demo.py:60: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
E assert [0, 1, 2] == [0, 1, 3]
E At index 2 diff: 2 != 3
E Use -v to get the full diff
failure_demo.py:63: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
def test_eq_list_long(self):
a = [0] * 100 + [1] + [3] * 100
b = [0] * 100 + [2] + [3] * 100
> assert a == b
E assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E At index 100 diff: 1 != 2
E Use -v to get the full diff
failure_demo.py:68: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
def test_eq_dict(self):
> assert {"a": 0, "b": 1, "c": 0} == {"a": 0, "b": 2, "d": 0}
E AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E Omitting 1 identical items, use -vv to show
E Differing items:
E {'b': 1} != {'b': 2}
E Left contains 1 more item:
E {'c': 0}
E Right contains 1 more item:
E {'d': 0}...
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:71: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
def test_eq_set(self):
> assert {0, 10, 11, 12} == {0, 20, 21}
failure_demo.py:74: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
def test_eq_longer_list(self):
> assert [1, 2] == [1, 2, 3]
E assert [1, 2] == [1, 2, 3]
E Right contains one more item: 3
E Use -v to get the full diff
failure_demo.py:77: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
failure_demo.py:80: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
def test_not_in_text_multiline(self):
text = "some multiline\ntext\nwhich\nincludes foo\nand a\ntail"
> assert "foo" not in text
E AssertionError: assert 'foo' not in 'some multil...nand a\ntail'
E 'foo' is contained here:
E some multiline
E text
E which
E includes foo
E ? +++
E and a...
E
E ...Full output truncated (2 lines hidden), use '-vv' to show
failure_demo.py:84: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
def test_not_in_text_single(self):
text = "single foo line"
failure_demo.py:88: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
def test_not_in_text_single_long(self):
text = "head " * 50 + "foo " + "tail " * 20
> assert "foo" not in text
E AssertionError: assert 'foo' not in 'head head h...l tail tail '
E 'foo' is contained here:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail
˓→tail tail tail tail tail tail tail tail
E ? +++
failure_demo.py:92: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
def test_not_in_text_single_long_term(self):
text = "head " * 50 + "f" * 70 + "tail " * 20
> assert "f" * 70 not in text
E AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head h...l
˓→tail tail '
˓→tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
˓→tail tail
E ?
˓→++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:96: AssertionError
______________ TestSpecialisedExplanations.test_eq_dataclass _______________
def test_eq_dataclass(self):
from dataclasses import dataclass
@dataclass
class Foo:
a: int
b: str
failure_demo.py:108: AssertionError
________________ TestSpecialisedExplanations.test_eq_attrs _________________
def test_eq_attrs(self):
import attr
@attr.s
class Foo:
a = attr.ib()
b = attr.ib()
failure_demo.py:120: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
class Foo:
b = 1
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.
˓→ b
failure_demo.py:128: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
class Foo:
b = 1
failure_demo.py:135: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
class Foo:
def _get_b(self):
raise Exception("Failed to get attrib")
b = property(_get_b)
i = Foo()
> assert i.b == 2
failure_demo.py:146:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def _get_b(self):
> raise Exception("Failed to get attrib")
E Exception: Failed to get attrib
failure_demo.py:141: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
class Foo:
b = 1
class Bar:
b = 2
failure_demo.py:156: AssertionError
__________________________ TestRaises.test_raises __________________________
def test_raises(self):
s = "qwe"
> raises(TypeError, int, s)
failure_demo.py:166: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
def test_raises_doesnt(self):
> raises(OSError, int, "3")
E Failed: DID NOT RAISE <class 'OSError'>
failure_demo.py:169: Failed
__________________________ TestRaises.test_raise ___________________________
def test_raise(self):
> raise ValueError("demo error")
E ValueError: demo error
failure_demo.py:172: ValueError
________________________ TestRaises.test_tupleerror ________________________
def test_tupleerror(self):
> a, b = [1] # NOQA
E ValueError: not enough values to unpack (expected 2, got 1)
failure_demo.py:175: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
items = [1, 2, 3]
print(f"items is {items!r}")
> a, b = items.pop()
E TypeError: cannot unpack non-iterable int object
failure_demo.py:180: TypeError
--------------------------- Captured stdout call ---------------------------
items is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
def test_some_error(self):
> if namenotexi: # NOQA
E NameError: name 'namenotexi' is not defined
failure_demo.py:183: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
import importlib.util
import sys
failure_demo.py:202:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E AssertionError
abc-123:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
def test_complex_error(self):
def f():
return 44
def g():
return 43
failure_demo.py:213:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:10: in somefunc
otherfunc(x, y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
failure_demo.py:6: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
def test_z1_unpack_error(self):
items = []
> a, b = items
E ValueError: not enough values to unpack (expected 2, got 0)
failure_demo.py:217: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
failure_demo.py:221: TypeError
______________________ TestMoreErrors.test_startswith ______________________
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef>(
˓→'456')
failure_demo.py:226: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
def test_startswith_nested(self):
def f():
return "123"
def g():
return "456"
failure_demo.py:235: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert False
E + where False = isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:238: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
def test_instance(self):
self.x = 6 * 7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
failure_demo.py:242: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
def test_compare(self):
> assert globf(10) < 5
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:245: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
def test_try_finally(self):
x = 1
try:
> assert x == 0
E assert 1 == 0
failure_demo.py:250: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
def test_single_line(self):
class A:
a = 1
b = 2
> assert A.a == b, "A.a appears not to be b"
E AssertionError: A.a appears not to be b
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.
˓→<locals>.A'>.a
failure_demo.py:261: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
def test_multiline(self):
class A:
a = 1
b = 2
> assert (
A.a == b
failure_demo.py:268: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
def test_custom_repr(self):
class JSON:
a = 1
def __repr__(self):
return "This is JSON\n{\n 'foo': 'bar'\n}"
a = JSON()
b = 2
> assert a.a == b, a
E AssertionError: This is JSON
E {
E 'foo': 'bar'
E }
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:281: AssertionError
========================= short test summary info ==========================
FAILED failure_demo.py::test_generative[3-6] - assert (3 * 2) < 6
FAILED failure_demo.py::TestFailing::test_simple - assert 42 == 43
FAILED failure_demo.py::TestFailing::test_simple_multiline - assert 42 == 54
FAILED failure_demo.py::TestFailing::test_not - assert not 42
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_text - Asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_similar_text
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_multiline_text
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text - ...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_long_text_multiline
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list - asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_list_long - ...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dict - Asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_set - Assert...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_longer_list
FAILED failure_demo.py::TestSpecialisedExplanations::test_in_list - asser...
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_multiline
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long
FAILED failure_demo.py::TestSpecialisedExplanations::test_not_in_text_single_long_term
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_dataclass - ...
FAILED failure_demo.py::TestSpecialisedExplanations::test_eq_attrs - Asse...
FAILED failure_demo.py::test_attribute - assert 1 == 2
FAILED failure_demo.py::test_attribute_instance - AssertionError: assert ...
FAILED failure_demo.py::test_attribute_failure - Exception: Failed to get...
FAILED failure_demo.py::test_attribute_multiple - AssertionError: assert ...
It can be tedious to type the same series of command line options every time you use pytest. For example, if you
always want to see detailed info on skipped and xfailed tests, as well as have terser “dot” progress output, you can
write it into a configuration file:
# content of pytest.ini
[pytest]
addopts = -ra -q
Alternatively, you can set a PYTEST_ADDOPTS environment variable to add command line options while the environment is in use:
export PYTEST_ADDOPTS="-v"
Here's how the command line is built in the presence of addopts and the environment variable: the ini-file addopts come first, then the contents of PYTEST_ADDOPTS, then any extra command-line arguments. So if the user executes:
pytest -m slow
the command line that actually runs is:
pytest -ra -q -v -m slow
Note that, as usual for other command-line applications, in case of conflicting options the last one wins, so the example above will show verbose output because -v overrides -q.
27.2.2 Pass different values to a test function, depending on command line options
Suppose we want to write a test that depends on a command line option. Here is a basic pattern to achieve this:
# content of test_sample.py
def test_answer(cmdopt):
if cmdopt == "type1":
print("first")
elif cmdopt == "type2":
print("second")
assert 0 # to see what was printed
For this to work we need to add a command line option and provide the cmdopt through a fixture function:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption(
"--cmdopt", action="store", default="type1", help="my option: type1 or type2"
)
@pytest.fixture
def cmdopt(request):
return request.config.getoption("--cmdopt")
Running the test without supplying the new option uses the default:
cmdopt = 'type1'
def test_answer(cmdopt):
if cmdopt == "type1":
print("first")
elif cmdopt == "type2":
print("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s
And running with --cmdopt=type2:
cmdopt = 'type2'
def test_answer(cmdopt):
if cmdopt == "type1":
print("first")
elif cmdopt == "type2":
print("second")
> assert 0 # to see what was printed
E assert 0
test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
========================= short test summary info ==========================
FAILED test_sample.py::test_answer - assert 0
1 failed in 0.12s
You can see that the command line option arrived in our test. This completes the basic pattern. However, you often want to process command line options outside of the test and instead pass in different or more complex objects.
Through addopts you can statically add command line options for your project. You can also dynamically modify
the command line arguments before they get processed:
# setuptools plugin
import sys
def pytest_load_initial_conftests(args):
if "xdist" in sys.modules: # pytest-xdist plugin
import multiprocessing
num = max(multiprocessing.cpu_count() // 2, 1)  # integer division so -n receives a whole number
args[:] = ["-n", str(num)] + args
If you have the xdist plugin installed you will now always perform test runs using a number of subprocesses close to your number of CPUs. Running in an empty directory with the above conftest.py:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 0 items
Here is a conftest.py file adding a --runslow command line option to control skipping of pytest.mark.
slow marked tests:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption(
"--runslow", action="store_true", default=False, help="run slow tests"
)
def pytest_configure(config):
config.addinivalue_line("markers", "slow: mark test as slow to run")
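The snippet above only registers the option and the marker; the hook that actually skips slow tests unless --runslow is given is not shown here. A minimal sketch using the standard pytest_collection_modifyitems hook:

def pytest_collection_modifyitems(config, items):
    if config.getoption("--runslow"):
        # --runslow given on the command line: do not skip slow tests
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)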
# content of test_module.py
import pytest
def test_func_fast():
pass
@pytest.mark.slow
def test_func_slow():
pass
Running it without the --runslow option skips the slow test:
test_module.py .s [100%]
and with --runslow both tests run:
test_module.py .. [100%]
If you have a test helper function called from a test, you can use pytest.fail to fail a test with a certain message. The test support function will not show up in the traceback if you set the __tracebackhide__ variable somewhere in the helper function. Example:
# content of test_checkconfig.py
import pytest
def checkconfig(x):
__tracebackhide__ = True
if not hasattr(x, "config"):
pytest.fail("not configured: {}".format(x))
def test_something():
checkconfig(42)
The __tracebackhide__ setting influences whether pytest shows the frame in tracebacks: the checkconfig function will not be shown unless the --full-trace command line option is specified. Let's run our little function:
$ pytest -q test_checkconfig.py
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________
def test_something():
> checkconfig(42)
E Failed: not configured: 42
test_checkconfig.py:11: Failed
========================= short test summary info ==========================
FAILED test_checkconfig.py::test_something - Failed: not configured: 42
1 failed in 0.12s
If you only want to hide certain exceptions, you can set __tracebackhide__ to a callable which gets the
ExceptionInfo object. You can for example use this to make sure unexpected exception types aren’t hidden:
import operator

import pytest

class ConfigException(Exception):
    pass

def checkconfig(x):
    __tracebackhide__ = operator.methodcaller("errisinstance", ConfigException)
    if not hasattr(x, "config"):
        raise ConfigException("not configured: {}".format(x))

def test_something():
    checkconfig(42)
This will avoid hiding the exception traceback on unrelated exceptions (i.e. bugs in assertion helpers).
Usually it is a bad idea to make application code behave differently if called from a test. But if you absolutely must
find out if your application code is running from a test you can do something like this:
# content of your_module.py
_called_from_test = False
# content of conftest.py
def pytest_configure(config):
your_module._called_from_test = True
In your application code you can then check the flag:
if your_module._called_from_test:
# called from within a test run
...
else:
# called "normally"
...
You can add extra information to the test report header by implementing the pytest_report_header hook:
# content of conftest.py
def pytest_report_header(config):
return "project deps: mylib-1.1"
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR
collected 0 items
It is also possible to return a list of strings which will be considered as several lines of information. You may consider
config.getoption('verbose') in order to display more information if applicable:
# content of conftest.py
def pytest_report_header(config):
if config.getoption("verbose") > 0:
return ["info1: did you know that ...", "did you?"]
$ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_
˓→PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
info1: did you know that ...
did you?
rootdir: $REGENDOC_TMPDIR
collecting ... collected 0 items
and nothing when run plainly:
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 0 items
If you have a slow running large test suite you might want to find out which tests are the slowest. Let’s make an
artificial test suite:
# content of test_some_are_slow.py
import time
def test_funcfast():
    time.sleep(0.1)
def test_funcslow1():
time.sleep(0.2)
def test_funcslow2():
time.sleep(0.3)
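To actually find the slowest tests you would then run pytest with the --durations option, for example:

$ pytest --durations=3

which reports the three longest test durations (the exact timings depend on your machine).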
Sometimes you may have a testing situation which consists of a series of test steps. If one step fails it makes no sense
to execute further steps as they are all expected to fail anyway and their tracebacks add no insight. Here is a simple
conftest.py file which introduces an incremental marker which is to be used on classes:
# content of conftest.py
import pytest

# store history of failures per test class name and per index in parametrize (if parametrize used)
_test_failed_incremental = {}
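# The first of the two hooks records, per class name and parametrize index, the name
# of the first test that failed; its body is not shown above, so the following is a
# reconstruction/sketch rather than the original wording:
def pytest_runtest_makereport(item, call):
    if "incremental" in item.keywords:
        # incremental marker is used
        if call.excinfo is not None:
            # the test has failed: retrieve the class name of the test
            cls_name = str(item.cls)
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # remember the name of the first failing test for this class/index
            test_name = item.originalname or item.name
            _test_failed_incremental.setdefault(cls_name, {}).setdefault(
                parametrize_index, test_name
            )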
def pytest_runtest_setup(item):
    if "incremental" in item.keywords:
        # retrieve the class name of the test
        cls_name = str(item.cls)
        # check if a previous test has failed for this class
        if cls_name in _test_failed_incremental:
            # retrieve the index of the test (if parametrize is used in combination with incremental)
            parametrize_index = (
                tuple(item.callspec.indices.values())
                if hasattr(item, "callspec")
                else ()
            )
            # retrieve the name of the first test function to fail for this class name and index
            test_name = _test_failed_incremental[cls_name].get(parametrize_index, None)
            # if a name was found, a previous test has failed for this combination: xfail this one
            if test_name is not None:
                pytest.xfail("previous test failed ({})".format(test_name))
These two hook implementations work together to abort incremental-marked tests in a class. Here is a test module
example:
# content of test_step.py
import pytest
@pytest.mark.incremental
class TestUserHandling:
def test_login(self):
pass
def test_modification(self):
assert 0
def test_deletion(self):
pass
def test_normal():
pass
If we run this:
$ pytest -rx
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
def test_modification(self):
> assert 0
E assert 0
test_step.py:11: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::test_deletion
reason: previous test failed (test_modification)
================== 1 failed, 2 passed, 1 xfailed in 0.12s ==================
We’ll see that test_deletion was not executed because test_modification failed. It is reported as an
“expected failure”.
If you have nested test directories, you can have per-directory fixture scopes by placing fixture functions in a conftest.py file in that directory. You can use all types of fixtures, including autouse fixtures, which are the equivalent of xUnit's setup/teardown concept. It's however recommended to have explicit fixture references in your tests or test classes rather than relying on implicitly executing setup/teardown functions, especially if they are far away from the actual tests.
Here is an example for making a db fixture available in a directory:
# content of a/conftest.py
import pytest
class DB:
pass
@pytest.fixture(scope="session")
def db():
return DB()
# content of a/test_db.py
def test_a1(db):
assert 0, db # to show value
# content of a/test_db2.py
def test_a2(db):
assert 0, db # to show value
and then a module in a sister directory which will not see the db fixture:
# content of b/test_error.py
def test_root(db): # no db here, will error out
pass
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 7 items
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
def test_modification(self):
> assert 0
E assert 0
test_step.py:11: AssertionError
_________________________________ test_a1 __________________________________
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef>
E assert 0
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef>
E assert 0
a/test_db2.py:2: AssertionError
========================= short test summary info ==========================
FAILED test_step.py::TestUserHandling::test_modification - assert 0
FAILED a/test_db.py::test_a1 - AssertionError: <conftest.DB object at 0x7...
FAILED a/test_db2.py::test_a2 - AssertionError: <conftest.DB object at 0x...
ERROR b/test_error.py::test_root
============= 3 failed, 2 passed, 1 xfailed, 1 error in 0.12s ==============
The two test modules in the a directory see the same db fixture instance, while the one test in the sister directory b doesn't see it. We could of course also define a db fixture in that sister directory's conftest.py file. Note that each fixture is only instantiated if there is a test actually needing it (unless you use an "autouse" fixture, which is always executed ahead of the first test that runs).
If you want to postprocess test reports and need access to the executing environment you can implement a hook that
gets called when the test “report” object is about to be created. Here we write out all failing test calls and also access
a fixture (if it was used by the test) in case you want to query/look at it during your post processing. In our case we
just write some information out to a failures file:
# content of conftest.py
import pytest
import os.path
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
# execute all other hooks to obtain the report object
outcome = yield
rep = outcome.get_result()
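    # The rest of the hook body is not shown above; a minimal sketch that writes
    # the failing test ids (plus the tmpdir fixture value, if the test used it)
    # to a "failures" file, matching the output shown further below:
    if rep.when == "call" and rep.failed:
        mode = "a" if os.path.exists("failures") else "w"
        with open("failures", mode) as f:
            if "tmpdir" in item.fixturenames:
                extra = " ({})".format(item.funcargs["tmpdir"])
            else:
                extra = ""
            f.write(rep.nodeid + extra + "\n")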
# content of test_module.py
def test_fail1(tmpdir):
assert 0
def test_fail2():
assert 0
$ pytest test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
test_module.py FF [100%]
tmpdir = local('PYTEST_TMPDIR/test_fail10')
def test_fail1(tmpdir):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:6: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_fail1 - assert 0
FAILED test_module.py::test_fail2 - assert 0
============================ 2 failed in 0.12s =============================
you will have a “failures” file which contains the failing test ids:
$ cat failures
test_module.py::test_fail1 (PYTEST_TMPDIR/test_fail10)
test_module.py::test_fail2
If you want to make test result reports available in fixture finalizers here is a little example implemented via a local
plugin:
# content of conftest.py
import pytest
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
# execute all other hooks to obtain the report object
outcome = yield
rep = outcome.get_result()
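    # For the fixture below to work, the report must be attached to the item for each
    # phase ("setup", "call", "teardown"); a minimal sketch:
    setattr(item, "rep_" + rep.when, rep)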
@pytest.fixture
def something(request):
yield
# request.node is an "item" because we use the default
# "function" scope
if request.node.rep_setup.failed:
print("setting up a test failed!", request.node.nodeid)
elif request.node.rep_setup.passed:
if request.node.rep_call.failed:
print("executing test failed", request.node.nodeid)
# content of test_module.py
import pytest
@pytest.fixture
def other():
assert 0
def test_setup_fails(something, other):
    pass
def test_call_fails(something):
    assert 0
def test_fail2():
assert 0
$ pytest -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 3 items
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:7: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:15: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:19: AssertionError
========================= short test summary info ==========================
FAILED test_module.py::test_call_fails - assert 0
FAILED test_module.py::test_fail2 - assert 0
ERROR test_module.py::test_setup_fails - assert 0
======================== 2 failed, 1 error in 0.12s ========================
You’ll see that the fixture finalizers could use the precise reporting information.
Sometimes a test session might get stuck and there might be no easy way to figure out which test got stuck, for example if pytest was run in quiet mode (-q) or you don't have access to the console output. This is particularly troublesome if the problem happens only sporadically, as with the famous "flaky" kind of tests.
pytest sets the PYTEST_CURRENT_TEST environment variable when running tests, which can be inspected by
process monitoring utilities or libraries like psutil to discover which test got stuck if necessary:
import psutil
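# A sketch of how a monitoring script might locate the stuck test (the loop below is
# illustrative; psutil.Process.environ() may require sufficient permissions):
for pid in psutil.pids():
    try:
        environ = psutil.Process(pid).environ()
    except psutil.Error:
        # the process may have exited or may not be accessible
        continue
    if "PYTEST_CURRENT_TEST" in environ:
        print(f'pytest process {pid} running: {environ["PYTEST_CURRENT_TEST"]}')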
During the test session pytest will set PYTEST_CURRENT_TEST to the current test nodeid and the current stage,
which can be setup, call, or teardown.
For example, when running a single test function named test_foo from foo_module.py,
PYTEST_CURRENT_TEST will be set to:
1. foo_module.py::test_foo (setup)
2. foo_module.py::test_foo (call)
3. foo_module.py::test_foo (teardown)
In that order.
Note: The contents of PYTEST_CURRENT_TEST are meant to be human readable and the actual format can be changed between releases (even bug fixes), so it shouldn't be relied on for scripting or automation.
If you freeze your application using a tool like PyInstaller in order to distribute it to your end users, it is a good idea to also package your test runner and run your tests using the frozen application. This way packaging errors, such as dependencies not being included in the executable, can be detected early, while also allowing you to send test files to users so they can run them on their machines, which can be useful to obtain more information about a hard-to-reproduce bug.
Fortunately recent PyInstaller releases already have a custom hook for pytest, but if you are using another tool
to freeze executables such as cx_freeze or py2exe, you can use pytest.freeze_includes() to obtain the
full list of internal pytest modules. How to configure the tools to find the internal modules varies from tool to tool,
however.
Instead of freezing the pytest runner as a separate executable, you can make your frozen program work as the pytest runner by some clever argument handling during program startup. This allows you to have a single executable, which is usually more convenient. Please note that the mechanism for plugin discovery used by pytest (setuptools entry points) doesn't work with frozen executables, so pytest can't find any third party plugins automatically. To include third party plugins like pytest-timeout they must be imported explicitly and passed on to pytest.main.
# contents of app_main.py
import sys

import pytest_timeout  # Third party plugin

if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
    import pytest

    sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
else:
    # normal application execution: at this point argv can be parsed
    # by your argument-parsing library of choice as usual
    ...
This allows you to execute tests using the frozen application with standard pytest command-line options, for example ./app_main --pytest --verbose test-suite/.
pytest makes it easy to parametrize test functions. For basic docs, see Parametrizing fixtures and test functions.
In the following we provide some examples using the builtin mechanisms.
Let’s say we want to execute a test with different computation parameters and the parameter range shall be determined
by a command line argument. Let’s first write a simple (do-nothing) computation test:
# content of test_compute.py
def test_compute(param1):
assert param1 < 4
Now we add a test configuration via a conftest.py file:
# content of conftest.py
def pytest_addoption(parser):
parser.addoption("--all", action="store_true", help="run all combinations")
def pytest_generate_tests(metafunc):
if "param1" in metafunc.fixturenames:
if metafunc.config.getoption("all"):
end = 5
else:
end = 2
metafunc.parametrize("param1", range(end))
We run only two computations by default, so we see two dots. Let's run the full monty:
$ pytest -q --all
....F [100%]
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
param1 = 4
def test_compute(param1):
> assert param1 < 4
E assert 4 < 4
test_compute.py:4: AssertionError
========================= short test summary info ==========================
FAILED test_compute.py::test_compute[4] - assert 4 < 4
1 failed, 4 passed in 0.12s
As expected when running the full range of param1 values we’ll get an error on the last one.
pytest will build a string that is the test ID for each set of values in a parametrized test. These IDs can be used with
-k to select specific cases to run, and they will also identify the specific case when one is failing. Running pytest with
--collect-only will show the generated IDs.
Numbers, strings, booleans and None will have their usual string representation used in the test ID. For other objects,
pytest will make a string based on the argument name:
# content of test_time.py
from datetime import datetime, timedelta

import pytest
testdata = [
(datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
(datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
]
@pytest.mark.parametrize("a,b,expected", testdata)
def test_timedistance_v0(a, b, expected):
diff = a - b
assert diff == expected
def idfn(val):
if isinstance(val, (datetime,)):
# note this wouldn't show any hours/minutes/seconds
return val.strftime("%Y%m%d")
@pytest.mark.parametrize(
    "a,b,expected",
    [
        pytest.param(
            datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1), id="forward"
        ),
        pytest.param(
            datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1), id="backward"
        ),
    ],
)
def test_timedistance_v3(a, b, expected):
diff = a - b
assert diff == expected
Running --collect-only on this module shows the generated IDs:
<Module test_time.py>
<Function test_timedistance_v0[a0-b0-expected0]>
<Function test_timedistance_v0[a1-b1-expected1]>
<Function test_timedistance_v1[forward]>
<Function test_timedistance_v1[backward]>
<Function test_timedistance_v2[20011212-20011211-expected0]>
<Function test_timedistance_v2[20011211-20011212-expected1]>
<Function test_timedistance_v3[forward]>
<Function test_timedistance_v3[backward]>
In test_timedistance_v3, we used pytest.param to specify the test IDs together with the actual data,
instead of listing them separately.
Here is a quick port to run tests configured with test scenarios, an add-on from Robert Collins for the standard unittest framework. We only have to work a bit to construct the correct arguments for pytest's Metafunc.parametrize():
# content of test_scenarios.py
def pytest_generate_tests(metafunc):
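    # The hook body is not shown above; a sketch that builds ids and argument values
    # from the class-level `scenarios` list of (id, {argname: value}) tuples:
    idlist = []
    argvalues = []
    for scenario in metafunc.cls.scenarios:
        idlist.append(scenario[0])
        items = scenario[1].items()
        argnames = [x[0] for x in items]
        argvalues.append([x[1] for x in items])
    metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

# illustrative scenario definitions matching the "basic"/"advanced" ids shown below
scenario1 = ("basic", {"attribute": "value"})
scenario2 = ("advanced", {"attribute": "value2"})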
class TestSampleWithScenarios:
scenarios = [scenario1, scenario2]
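    # the test methods referenced in the collection output below (bodies are illustrative)
    def test_demo1(self, attribute):
        assert isinstance(attribute, str)

    def test_demo2(self, attribute):
        assert isinstance(attribute, str)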
$ pytest test_scenarios.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
If you just collect tests you’ll also nicely see ‘advanced’ and ‘basic’ as variants for the test function:
<Module test_scenarios.py>
<Class TestSampleWithScenarios>
<Function test_demo1[basic]>
<Function test_demo2[basic]>
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>
Note that we told metafunc.parametrize() that your scenario values should be considered class-scoped. With
pytest-2.3 this leads to a resource-based ordering.
The parametrization of test functions happens at collection time. It is a good idea to set up expensive resources like DB connections or subprocesses only when the actual test is run. Here is a simple example of how you can achieve that. This test requires a db object fixture:
# content of test_backends.py
import pytest
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
pytest.fail("deliberately failing for demo purposes")
We can now add a test configuration that generates two invocations of the test_db_initialized function and
also implements a factory that creates a database object for the actual test invocations:
# content of conftest.py
import pytest
def pytest_generate_tests(metafunc):
if "db" in metafunc.fixturenames:
metafunc.parametrize("db", ["d1", "d2"], indirect=True)
class DB1:
"one database object"
class DB2:
"alternative database object"
@pytest.fixture
def db(request):
if request.param == "d1":
return DB1()
elif request.param == "d2":
return DB2()
else:
raise ValueError("invalid internal test config")
Let's first see how it looks at collection time:
<Module test_backends.py>
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
$ pytest -q test_backends.py
.F [100%]
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes
test_backends.py:8: Failed
========================= short test summary info ==========================
FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f...
1 failed, 1 passed in 0.12s
The first invocation with db == "DB1" passed while the second with db == "DB2" failed. Our db fixture function instantiated each of the DB values during the setup phase, while pytest_generate_tests generated the two corresponding calls to test_db_initialized during the collection phase.
Using the indirect=True parameter when parametrizing a test allows parametrizing the test with a fixture that receives the values before passing them on to the test:
import pytest
@pytest.fixture
def fixt(request):
return request.param * 3
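A minimal sketch of a test using this fixture with indirect parametrization (the parameter values and the assertion are illustrative):

@pytest.mark.parametrize("fixt", ["a", "b"], indirect=True)
def test_indirect(fixt):
    assert len(fixt) == 3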
This can be used, for example, to do more expensive setup at test run time in the fixture, rather than having to run
those setup steps at collection time.
Very often parametrization uses more than one argument name. You can apply the indirect parameter to particular arguments only, by passing a list or tuple of argument names to indirect. In the example below there is a function test_indirect which uses two fixtures: x and y. Here we give indirect a list which contains only the name of the fixture x. The indirect parameter will be applied to this argument only, and the value a will be passed to the respective fixture function:
# content of test_indirect_list.py
import pytest
@pytest.fixture(scope="function")
def x(request):
return request.param * 3
@pytest.fixture(scope="function")
def y(request):
return request.param * 2
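The parametrized test itself is not shown above; a sketch consistent with the description (only x is indirect, so its fixture triples the value while y receives the raw value):

@pytest.mark.parametrize("x, y", [("a", "b")], indirect=["x"])
def test_indirect(x, y):
    assert x == "aaa"
    assert y == "b"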
Here is an example pytest_generate_tests function implementing a per-class parametrization scheme:
# content of test_parametrize.py
def pytest_generate_tests(metafunc):
# called once per each test function
funcarglist = metafunc.cls.params[metafunc.function.__name__]
argnames = sorted(funcarglist[0])
metafunc.parametrize(
    argnames, [[funcargs[name] for name in argnames] for funcargs in funcarglist]
)
class TestClass:
# a map specifying multiple argument sets for a test method
params = {
"test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
"test_zerodivision": [dict(a=1, b=0)],
}
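The test methods themselves are not shown above; a minimal sketch (the test_equals assertion matches the failure output below, the zero-division check is an assumption, and pytest is assumed to be imported at module level):

    def test_equals(self, a, b):
        assert a == b

    def test_zerodivision(self, a, b):
        with pytest.raises(ZeroDivisionError):
            a / b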
Our test generator looks up a class-level definition which specifies which argument sets to use for each test function.
Let’s run it:
$ pytest -q
F.. [100%]
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
test_parametrize.py:21: AssertionError
========================= short test summary info ==========================
FAILED test_parametrize.py::TestClass::test_equals[1-2] - assert 1 == 2
1 failed, 2 passed in 0.12s
Here is a stripped down real-life example of using parametrized testing for testing serialization of objects between
different python interpreters. We define a test_basic_objects function which is to be run with different sets of
arguments for its three arguments:
• python1: first python interpreter, run to pickle-dump an object to a file
• python2: second interpreter, run to pickle-load an object from a file
• obj: object to be dumped/loaded
"""
module containing parametrized tests for cross-python
serialization via the pickle module.
"""
import shutil
import subprocess
import textwrap
import pytest
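# pythonlist is referenced by the fixtures below; an illustrative assumption with
# three interpreters, matching the "3 interpreters" remark further down:
pythonlist = ["python3.5", "python3.6", "python3.7"]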
@pytest.fixture(params=pythonlist)
def python1(request, tmpdir):
picklefile = tmpdir.join("data.pickle")
return Python(request.param, picklefile)
@pytest.fixture(params=pythonlist)
def python2(request, python1):
return Python(request.param, python1.picklefile)
class Python:
def __init__(self, version, picklefile):
self.pythonpath = shutil.which(version)
if not self.pythonpath:
pytest.skip(f"{version!r} not found")
self.picklefile = picklefile
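The helpers and the test itself are not shown above; a rough sketch, assuming the Python class also grows dumps() and load_and_is_true() helpers that run the pickling/unpickling code in the respective interpreter:

@pytest.mark.parametrize("obj", [42, {}, {1: 3}])
def test_basic_objects(python1, python2, obj):
    python1.dumps(obj)
    python2.load_and_is_true(f"obj == {obj}")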
Running it results in some skips if we don’t have all the python interpreters installed and otherwise runs all combina-
tions (3 interpreters times 3 interpreters times 3 objects to serialize/deserialize):
If you want to compare the outcomes of several implementations of a given API, you can write test functions that
receive the already imported implementations and get skipped in case the implementation is not importable/available.
Let’s say we have a “base” implementation and the other (possibly optimized ones) need to provide similar results:
# content of conftest.py
import pytest
@pytest.fixture(scope="session")
def basemod(request):
return pytest.importorskip("base")
# content of base.py
def func1():
return 1
# content of opt1.py
def func1():
return 1.0001
# content of test_module.py
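import pytest

# A sketch of the comparison test the notes below refer to (a reconstruction, not
# necessarily the original wording): parametrize over the optional implementation
# modules and skip when one of them cannot be imported.
@pytest.fixture(params=["opt1", "opt2"])
def optmod(request):
    return pytest.importorskip(request.param)

def test_func1(basemod, optmod):
    assert round(basemod.func1(), 3) == round(optmod.func1(), 3)

Running this (there is no opt2 module here) gives: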
test_module.py .s [100%]
You’ll see that we don’t have an opt2 module and thus the second test run of our test_func1 was skipped. A few
notes:
• the fixture functions in the conftest.py file are “session-scoped” because we don’t need to import more
than once
• if you have multiple test functions and a skipped import, you will see the [1] count increasing in the report
• you can put @pytest.mark.parametrize style parametrization on the test functions to parametrize input/output
values as well.
Use pytest.param to apply marks or to set a test ID for an individual parametrized test. For example:
# content of test_pytest_param_example.py
import pytest
@pytest.mark.parametrize(
"test_input,expected",
[
("3+5", 8),
pytest.param("1+7", 8, marks=pytest.mark.basic),
pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
pytest.param(
"6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
),
],
)
def test_eval(test_input, expected):
assert eval(test_input) == expected
In this example, we have 4 parametrized tests. Except for the first test, we mark the other three parametrized tests with the custom marker basic, and for the fourth test we also use the built-in mark xfail to indicate this test is expected to fail. For explicitness, we set test ids for some tests.
Then run pytest with verbose mode and with only the basic marker:
$ pytest -v -m basic
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_
˓→PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 24 items / 21 deselected / 3 selected
As the result:
• Four tests were collected
• One test was deselected because it doesn’t have the basic mark.
• Three tests with the basic mark were selected.
• The test test_eval[1+7-8] passed, but the name is autogenerated and confusing.
• The test test_eval[basic_2+4] passed.
• The test test_eval[basic_6*9] was expected to fail and did fail.
Use pytest.raises() with the pytest.mark.parametrize decorator to write parametrized tests in which some tests
raise exceptions and others do not.
It is helpful to define a no-op context manager does_not_raise to serve as a complement to raises. For example:
from contextlib import contextmanager

@contextmanager
def does_not_raise():
    yield
@pytest.mark.parametrize(
"example_input,expectation",
[
(3, does_not_raise()),
(2, does_not_raise()),
(1, does_not_raise()),
(0, pytest.raises(ZeroDivisionError)),
],
)
def test_division(example_input, expectation):
"""Test how much I know division."""
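    # body sketch: perform the division under the given expectation context, so that
    # example_input == 0 raises ZeroDivisionError
    with expectation:
        assert (6 / example_input) is not None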
In the example above, the first three test cases should run unexceptionally, while the fourth should raise
ZeroDivisionError.
If you’re only supporting Python 3.7+, you can simply use nullcontext to define does_not_raise:
from contextlib import nullcontext as does_not_raise
Here are some examples using the Marking test functions with attributes mechanism.
You can “mark” a test function with custom metadata like this:
# content of test_server.py
import pytest
@pytest.mark.webtest
def test_send_http():
pass # perform some webtest test for your app
def test_something_quick():
pass
def test_another():
pass
class TestClass:
def test_method(self):
pass
You can then restrict a test run to only run tests marked with webtest:
$ pytest -v -m webtest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_
˓→PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 1 deselected / 3 selected
You can provide one or more node IDs as positional arguments to select only specified tests. This makes it easy to
select tests based on their module, class, method, or function name:
$ pytest -v test_server.py::TestClass::test_method
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_
˓→PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 1 item
Note: Node IDs are of the form module.py::class::method or module.py::function. Node IDs
control which tests are collected, so module.py::class will select all test methods on the class. Nodes are also
created for each parameter of a parametrized fixture or test, so selecting a parametrized test must include the parameter
value, e.g. module.py::function[param].
Node IDs for failing tests are displayed in the test summary info when running pytest with the -rf option. You can
also construct Node IDs from the output of pytest --collectonly.
You can use the -k option to select tests whose names match a keyword expression, and you can also run all tests except the ones that match the keyword:
$ pytest -k "not send_http" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_
˓→PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collecting ... collected 4 items / 1 deselected / 3 selected
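Custom markers are registered in the project's configuration file; a minimal sketch matching the markers used in these examples:

# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.
    slow: mark test as slow.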
Multiple custom markers can be registered by defining each one on its own line, as shown in the example above.
You can ask which markers exist for your test suite - the list includes our just defined webtest and slow markers:
$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.
(the rest of the listing shows the other available markers, including the built-in skipif, xfail, parametrize and usefixtures markers, together with their descriptions)
For an example on how to add and work with markers from a plugin, see Custom marker and command line option to
control test runs.
You may use pytest.mark decorators with classes to apply markers to all of its test methods:
# content of test_mark_classlevel.py
import pytest
@pytest.mark.webtest
class TestClass:
def test_startup(self):
pass
def test_startup_and_more(self):
pass
This is equivalent to directly applying the decorator to the two test functions.
To apply marks at the module level, use the pytestmark global variable:
import pytest
pytestmark = pytest.mark.webtest
or multiple markers:
pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
Due to legacy reasons, before class decorators were introduced, it is possible to set the pytestmark attribute on a
test class like this:
import pytest
class TestClass:
pytestmark = pytest.mark.webtest
When using parametrize, applying a mark will make it apply to each individual test. However it is also possible to
apply a marker to an individual test instance:
import pytest
@pytest.mark.foo
@pytest.mark.parametrize(
("n", "expected"), [(1, 2), pytest.param(1, 3, marks=pytest.mark.bar), (2, 3)]
)
def test_increment(n, expected):
assert n + 1 == expected
In this example the mark “foo” will apply to each of the three tests, whereas the “bar” mark is only applied to the
second test. Skip and xfail marks can also be applied in this way, see Skip/xfail with parametrize.
27.4.7 Custom marker and command line option to control test runs
Plugins can provide custom markers and implement specific behaviour based on them. This is a self-contained example which adds a command line option and a parametrized test function marker to run tests specified via named environments:
# content of conftest.py
import pytest
def pytest_addoption(parser):
parser.addoption(
"-E",
action="store",
metavar="NAME",
help="only run tests matching the environment NAME.",
)
def pytest_runtest_setup(item):
envnames = [mark.args[0] for mark in item.iter_markers(name="env")]
if envnames:
if item.config.getoption("-E") not in envnames:
pytest.skip("test requires env in {!r}".format(envnames))
# content of test_someenv.py
import pytest
@pytest.mark.env("stage1")
def test_basic_db_operation():
pass
and an example invocation specifying a different environment than what the test needs:
$ pytest -E stage2
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_someenv.py s [100%]
and here is one that specifies exactly the environment needed:
$ pytest -E stage1
test_someenv.py . [100%]
Below is the config file that will be used in the next examples:
# content of conftest.py
import sys
def pytest_runtest_setup(item):
for marker in item.iter_markers(name="my_marker"):
print(marker)
sys.stdout.flush()
A custom marker can have its argument set, i.e. args and kwargs properties, defined by either invoking it as a
callable or using pytest.mark.MARKER_NAME.with_args. These two methods achieve the same effect most
of the time.
However, if there is a callable as the single positional argument with no keyword arguments, using the pytest.
mark.MARKER_NAME(c) will not pass c as a positional argument but decorate c with the custom marker (see
MarkDecorator). Fortunately, pytest.mark.MARKER_NAME.with_args comes to the rescue:
# content of test_custom_marker.py
import pytest

def hello_world(*args, **kwargs):
    return "Hello World"

@pytest.mark.my_marker.with_args(hello_world)
def test_with_args():
    pass
We can see that the custom marker has its argument set extended with the function hello_world. This is the key
difference between creating a custom marker as a callable, which invokes __call__ behind the scenes, and using
with_args.
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times
to a test function. From plugin code you can read over all such settings. Example:
# content of test_mark_three_times.py
import pytest

pytestmark = pytest.mark.glob("module", x=1)

@pytest.mark.glob("class", x=2)
class TestClass:
    @pytest.mark.glob("function", x=3)
    def test_something(self):
        pass
Here we have the marker “glob” applied three times to the same test function. From a conftest file we can read it like
this:
# content of conftest.py
import sys
def pytest_runtest_setup(item):
for mark in item.iter_markers(name="glob"):
print("glob args={} kwargs={}".format(mark.args, mark.kwargs))
sys.stdout.flush()
Let’s run this without capturing output and see what we get:
$ pytest -q -s
glob args=('function',) kwargs={'x': 3}
Consider you have a test suite which marks tests for particular platforms, namely pytest.mark.darwin,
pytest.mark.win32 etc. and you also have tests that run on all platforms and have no specific marker. If you
now want to have a way to only run the tests for your particular platform, you could use the following plugin:
# content of conftest.py
import sys

import pytest

ALL = set("darwin linux win32".split())

def pytest_runtest_setup(item):
    supported_platforms = ALL.intersection(mark.name for mark in item.iter_markers())
    plat = sys.platform
    if supported_platforms and plat not in supported_platforms:
        pytest.skip("cannot run on platform {}".format(plat))
then tests will be skipped if they were specified for a different platform. Let's write a little test file to show what this looks like:
# content of test_plat.py
import pytest
@pytest.mark.darwin
def test_if_apple_is_evil():
pass
@pytest.mark.linux
def test_if_linux_works():
pass
@pytest.mark.win32
def test_if_win32_crashes():
pass
def test_runs_everywhere():
pass
then you will see two tests skipped and two executed tests as expected:
$ pytest -rs # this option reports skip reasons
=========================== test session starts ============================
Note that if you specify a platform via the -m marker command line option like this:
$ pytest -m linux
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items / 3 deselected / 1 selected
test_plat.py . [100%]
then the unmarked tests will not be run. It is thus a way to restrict the run to the specific tests.
If you have a test suite where test function names indicate a certain type of test, you can implement a hook that
automatically defines markers so that you can use the -m option with it. Let’s look at this test module:
# content of test_module.py
def test_interface_simple():
assert 0
def test_interface_complex():
assert 0
def test_event_simple():
assert 0
def test_something_else():
assert 0
# content of conftest.py
import pytest
def pytest_collection_modifyitems(items):
for item in items:
if "interface" in item.nodeid:
item.add_marker(pytest.mark.interface)
elif "event" in item.nodeid:
item.add_marker(pytest.mark.event)
Running pytest -m interface then selects the two interface tests (which both fail):
test_module.py FF [100%]
A session-scoped fixture effectively has access to all collected test items. Here is an example of a fixture function
which walks all collected tests and looks if their test class defines a callme method and calls it:
# content of conftest.py
import pytest
@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests(request):
print("callattr_ahead_of_alltests called")
seen = {None}
session = request.node
for item in session.items:
cls = item.getparent(pytest.Class)
if cls not in seen:
if hasattr(cls.obj, "callme"):
cls.obj.callme()
seen.add(cls)
Test classes may now define a callme method which will be called ahead of running any tests:
# content of test_module.py
import unittest

class TestHello:
@classmethod
def callme(cls):
print("callme called!")
def test_method1(self):
print("test_method1 called")
def test_method2(self):
print("test_method1 called")
class TestOther:
@classmethod
def callme(cls):
print("callme other called")
def test_other(self):
print("test other")
class SomeTest(unittest.TestCase):
@classmethod
def callme(self):
print("SomeTest callme called")
def test_unit1(self):
print("test_unit1 method called")
$ pytest -q -s test_module.py
callattr_ahead_of_alltests called
callme called!
callme other called
SomeTest callme called
test_method1 called
.test_method1 called
.test other
.test_unit1 method called
.
4 passed in 0.12s
You can easily ignore certain test directories and modules during collection by passing the --ignore=path option on the CLI. pytest allows multiple --ignore options. Example:
tests/
|-- example
| |-- test_example_01.py
| |-- test_example_02.py
| '-- test_example_03.py
|-- foobar
| |-- test_foobar_01.py
| |-- test_foobar_02.py
| '-- test_foobar_03.py
'-- hello
'-- world
|-- test_world_01.py
|-- test_world_02.py
'-- test_world_03.py
If you then run pytest with --ignore=tests/foobar/test_foobar_03.py --ignore=tests/hello/, only five tests are collected:
tests/example/test_example_01.py . [ 20%]
tests/example/test_example_02.py . [ 40%]
tests/example/test_example_03.py . [ 60%]
tests/foobar/test_foobar_01.py . [ 80%]
tests/foobar/test_foobar_02.py . [100%]
The --ignore-glob option allows ignoring test file paths based on Unix shell-style wildcards. If you want to exclude test modules that end with _01.py, execute pytest with --ignore-glob='*_01.py'.
Tests can individually be deselected during collection by passing the --deselect=item option. For exam-
ple, say tests/foobar/test_foobar_01.py contains test_a and test_b. You can run all of the
tests within tests/ except for tests/foobar/test_foobar_01.py::test_a by invoking pytest with
--deselect tests/foobar/test_foobar_01.py::test_a. pytest allows multiple --deselect
options.
The default behavior of pytest is to ignore duplicate paths specified from the command line, so passing the same directory twice collects its tests only once, while the --keep-duplicates option collects them twice. Example:
...
collected 1 item
...
...
collected 2 items
...
As the de-duplication works on directories only, if you specify a single test file twice, pytest will still collect it twice, even if --keep-duplicates is not specified. Example:
...
collected 2 items
...
You can set the norecursedirs option in an ini-file, for example your pytest.ini in the project root directory:
# content of pytest.ini
[pytest]
norecursedirs = .svn _build tmp*
This would tell pytest to not recurse into typical subversion or sphinx-build directories or into any tmp prefixed
directory.
You can configure different naming conventions by setting the python_files, python_classes and
python_functions in your configuration file. Here is an example:
# content of pytest.ini
# Example 1: have pytest look for "check" instead of "test"
[pytest]
python_files = check_*.py
python_classes = Check
python_functions = *_check
This would make pytest look for tests in files that match the check_*.py glob pattern, Check prefixes in classes, and functions and methods that match *_check. For example, if we have:
# content of check_myapp.py
class CheckMyApp:
def simple_check(self):
pass
def complex_check(self):
pass
$ pytest --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
collected 2 items
<Module check_myapp.py>
<Class CheckMyApp>
<Function simple_check>
<Function complex_check>
You can check for multiple glob patterns by adding a space between the patterns:
# Example 2: have pytest look for files with "test" and "example"
# content of pytest.ini
[pytest]
python_files = test_*.py example_*.py
Note: the python_functions and python_classes options have no effect for unittest.TestCase test discovery because pytest delegates discovery of test case methods to unittest code.
You can use the --pyargs option to make pytest try interpreting arguments as python package names, deriving
their file system path and then running the test. For example, if you have unittest2 installed, you can pass one of its test modules by dotted name, which would run the respective test module. Like with other options, through an ini-file and the addopts option you can make this change more permanently:
# content of pytest.ini
[pytest]
addopts = --pyargs
Now a simple invocation of pytest NAME will check if NAME exists as an importable package/module and otherwise treat it as a filesystem path.
You can always peek at the collection tree without running tests like this:
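For example, given a module pythoncollection.py sketched to match the collection tree below:

# content of pythoncollection.py
def test_function():
    pass

class TestClass:
    def test_method(self):
        pass

    def test_anothermethod(self):
        pass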
<Module CWD/pythoncollection.py>
<Function test_function>
<Class TestClass>
<Function test_method>
<Function test_anothermethod>
You can easily instruct pytest to discover tests from every Python file:
# content of pytest.ini
[pytest]
python_files = *.py
However, many projects will have a setup.py which they don’t want to be imported. Moreover, there may be files
only importable by a specific Python version. For such cases you can dynamically define files to be ignored by listing
them in a conftest.py file:
# content of conftest.py
import sys

collect_ignore = ["setup.py"]
if sys.version_info[0] > 2:
    collect_ignore.append("pkg/module_py2.py")
# content of pkg/module_py2.py
def test_only_on_python2():
    try:
        assert 0
    except Exception, e:
        pass
# content of setup.py
0 / 0 # will raise exception if imported
If you run with a Python 2 interpreter, pytest will find the one test and leave out the setup.py file:
#$ pytest --collect-only
====== test session starts ======
platform linux2 -- Python 2.7.10, pytest-2.9.1, py-1.4.31, pluggy-0.3.1
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 items
<Module 'pkg/module_py2.py'>
  <Function 'test_only_on_python2'>
If you run with a Python 3 interpreter both the one test and the setup.py file will be left out:
$ pytest --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR, configfile: pytest.ini
collected 0 items
It’s also possible to ignore files based on Unix shell-style wildcards by adding patterns to collect_ignore_glob.
The following example conftest.py ignores the file setup.py and in addition all files that end with *_py2.py
when executed with a Python 3 interpreter:
# content of conftest.py
import sys

collect_ignore = ["setup.py"]
if sys.version_info[0] > 2:
    collect_ignore_glob = ["*_py2.py"]
Since Pytest 2.6, users can prevent pytest from discovering classes that start with Test by setting a boolean
__test__ attribute to False.
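A minimal sketch (the class name is illustrative):
class TestHelper:
    __test__ = False  # pytest will not collect this class, despite the Test prefix

    def test_not_really_a_test(self):
        pass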
Here is an example conftest.py (extracted from Ali Afshar’s special purpose pytest-yamlwsgi plugin). This
conftest.py will collect test*.yaml files and will execute the yaml-formatted content as custom tests:
# content of conftest.py
import pytest


class YamlFile(pytest.File):
    def collect(self):
        # We need a yaml parser, e.g. PyYAML.
        import yaml

        raw = yaml.safe_load(self.fspath.open())
        for name, spec in sorted(raw.items()):
            yield YamlItem.from_parent(self, name=name, spec=spec)


class YamlItem(pytest.Item):
    def __init__(self, name, parent, spec):
        super().__init__(name, parent)
        self.spec = spec

    def runtest(self):
        for name, value in sorted(self.spec.items()):
            # Some custom test execution (dumb example follows).
            if name != value:
                raise YamlException(self, name, value)

    def reportinfo(self):
        return self.fspath, 0, f"usecase: {self.name}"


class YamlException(Exception):
    """Custom exception for error reporting."""
# content of test_simple.yaml
ok:
    sub1: sub1

hello:
    world: world
    some: other
and if you installed PyYAML or a compatible YAML-parser you can now execute the test specification:
nonpython $ pytest test_simple.yaml
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython
collected 2 items
test_simple.yaml F. [100%]
You get one dot for the passing sub1: sub1 check and one failure. Obviously in the above conftest.py you’ll
want to implement a more interesting interpretation of the yaml-values. You can easily write your own domain specific
testing language this way.
Note: repr_failure(excinfo) is called for representing test failures. If you create custom collection nodes
you can return an error representation string of your choice. It will be reported as a (red) string.
reportinfo() is used for representing the test location and is also consulted when reporting in verbose mode:
nonpython $ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-6.x.y, py-1.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython
collecting ... collected 2 items
While developing your custom test collection and execution it’s also interesting to just look at the collection tree:
<Package nonpython>
  <YamlFile test_simple.yaml>
    <YamlItem hello>
    <YamlItem ok>
TWENTYEIGHT
SETTING UP BASH COMPLETION
When using bash as your shell, pytest can use argcomplete (https://argcomplete.readthedocs.io/) for auto-
completion. For this argcomplete needs to be installed and enabled.
Install argcomplete with pip (pip install argcomplete), then enable global completion for Python applications by running:
sudo activate-global-python-argcomplete
TWENTYNINE
BACKWARDS COMPATIBILITY POLICY
pytest is actively evolving and is a project that has been decades in the making; we keep learning about new and better
structures to express different details about testing.
While we implement those modifications we try to ensure an easy transition and do not want to impose unnecessary
churn on our users and community/plugin authors.
As of now, pytest considers multiple types of backward compatibility transitions:
a) trivial: APIs which trivially translate to the new mechanism, and do not cause problematic changes.
We try to support those indefinitely while encouraging users to switch to newer/better mechanisms through
documentation.
b) transitional: the old and new API don’t conflict and we can help users transition by using warnings, while
supporting both for a prolonged time.
We will only start the removal of deprecated functionality in major releases (e.g. if we deprecate something in
3.0 we will start to remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate
something in 3.9 and 4.0 is the next release, we start to remove it in 5.0, not in 4.0).
When the deprecation expires (e.g. 4.0 is released), we won’t remove the deprecated functionality immediately,
but will use the standard warning filters to turn them into errors by default. This approach makes it explicit that
removal is imminent, and still gives you time to turn the deprecated feature into a warning instead of an error
so it can be dealt with in your own time. In the next minor release (e.g. 4.1), the feature will be effectively
removed.
c) true breakage: should only be considered when normal transition is unreasonably unsustainable and would offset
important development/features by years. In addition, they should be limited to APIs where the number of actual
users is very small (for example only impacting some plugins), and can be coordinated with the community in
advance.
Examples for such upcoming changes:
• removal of pytest_runtest_protocol/nextitem - #895
• rearranging of the node tree to include FunctionDefinition
• rearranging of SetupState #895
True breakages must be announced first in an issue containing:
• Detailed description of the change
• Rationale
• Expected impact on users and plugin authors (example in #895)
After there’s no hard -1 on the issue it should be followed up by an initial proof-of-concept Pull Request.
This POC serves as both a coordination point to assess impact and potential inspiration to come up with a
transitional solution after all.
After a reasonable amount of time the PR can be merged to base a new major release.
For the PR to mature from POC to acceptance, it must contain:
• Setup of deprecation errors/warnings that help users fix and port their code. If it is possible to introduce a deprecation period under the current series, before the true breakage, it should be introduced in a separate PR and be part of the current release stream.
• Detailed description of the rationale and examples on how to port code in doc/en/deprecations.rst.
THIRTY
HISTORY
Keeping backwards compatibility has a very high priority in the pytest project. Although we have deprecated func-
tionality over the years, most of it is still supported. All deprecations in pytest were done because simpler or more
efficient ways of accomplishing the same tasks have emerged, making the old way of doing things unnecessary.
With the pytest 3.0 release we introduced a clear communication scheme for when we will actually remove the old
busted joint and politely ask you to use the new hotness instead, while giving you enough time to adjust your tests or
raise concerns if there are valid reasons to keep deprecated functionality around.
To communicate changes we issue deprecation warnings using a custom warning hierarchy (see Internal pytest warn-
ings). These warnings may be suppressed using the standard means: the -W command-line flag or the filterwarnings
ini option (see Warnings Capture), but we suggest using them sparingly and temporarily, and to heed the warnings
when possible.
We will only start the removal of deprecated functionality in major releases (e.g. if we deprecate something in 3.0 we
will start to remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate something in 3.9
and 4.0 is the next release, we start to remove it in 5.0, not in 4.0).
When the deprecation expires (e.g. 4.0 is released), we won’t remove the deprecated functionality immediately, but
will use the standard warning filters to turn them into errors by default. This approach makes it explicit that removal
is imminent, and still gives you time to turn the deprecated feature into a warning instead of an error so it can be dealt
with in your own time. In the next minor release (e.g. 4.1), the feature will be effectively removed.
Features currently deprecated and removed in previous releases can be found in Deprecations and Removals.
We track future deprecation and removal of features using milestones and the deprecation and removal labels on
GitHub.
THIRTYONE
DEPRECATIONS AND REMOVALS
This page lists all pytest features that are currently deprecated or have been removed in past major releases. The
objective is to give users a clear rationale why a certain feature has been removed, and what alternatives should be
used instead.
• Deprecated Features
– The --strict command-line option
– The yield_fixture function/decorator
– The pytest_warning_captured hook
– The pytest.collect module
– The pytest._fillfuncargs function
• Removed Features
– --no-print-logs command-line option
– Result log (--result-log)
– pytest_collect_directory hook
– TerminalReporter.writer
– junit_family default value change to “xunit2”
– Node Construction changed to Node.from_parent
– pytest.fixture arguments are keyword only
– funcargnames alias for fixturenames
– pytest.config global
– "message" parameter of pytest.raises
– raises / warns with a string as the second argument
– Using Class in custom Collectors
– marks in pytest.mark.parametrize
– pytest_funcarg__ prefix
– [pytest] section in setup.cfg files
– Metafunc.addcall
– cached_setup
Below is a complete list of all pytest features which are considered deprecated. Using those features will issue
PytestWarning or subclasses, which can be filtered using standard warning filters.
As stated in our Backwards Compatibility Policy, deprecated features are removed only in major releases after
an appropriate period of deprecation has passed.
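For instance, such a warning can be silenced for a single test with pytest's standard filtering tools; a minimal sketch (the warning class and test name below are illustrative, use whatever class your run actually reports):
import pytest


@pytest.mark.filterwarnings("ignore::pytest.PytestDeprecationWarning")
def test_quietly_uses_deprecated_feature():
    ...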
31.2.5 junit_family default value change to “xunit2”
[pytest]
junit_family=xunit2
If you discover that your tooling does not support the new format, and want to keep using the legacy version, set the
option to legacy instead:
[pytest]
junit_family=legacy
By using legacy you will keep using the legacy/xunit1 format when upgrading to pytest 6.0, where the default format
will be xunit2.
In order to let users know about the transition, pytest will issue a warning in case the --junitxml option is given in
the command line but junit_family is not explicitly configured in pytest.ini.
Services known to support the xunit2 format:
• Jenkins with the JUnit plugin.
• Azure Pipelines.
Note that from_parent should only be called with keyword arguments for the parameters.
For example, code that used the deprecated message argument of pytest.raises becomes:
with pytest.raises(TimeoutError):
    wait_for(websocket.recv(), 0.5)
    pytest.fail("Client got unexpected message")
If you still have concerns about this deprecation and future removal, please comment on issue #3974.
pytest.warns(DeprecationWarning, "my_function()")
pytest.warns(SyntaxWarning, "assert(1, 2)")
Becomes:
with pytest.raises(ZeroDivisionError):
    1 / 0
with pytest.raises(SyntaxError):
    exec("a $ b")  # exec is required for invalid syntax

with pytest.warns(DeprecationWarning):
    my_function()
with pytest.warns(SyntaxWarning):
    exec("assert(1, 2)")  # exec is used to avoid a top-level warning
@pytest.mark.parametrize(
    "a, b",
    [
        (3, 9),
        pytest.mark.xfail(reason="flaky")(6, 36),
        (10, 100),
        (20, 200),
        (40, 400),
        (50, 500),
    ],
)
def test_foo(a, b):
    ...
This code applies the pytest.mark.xfail(reason="flaky") mark to the (6, 36) value of the above
parametrization call.
This was considered hard to read and understand, and its implementation also presented problems to the code,
preventing further internal improvements in the marks architecture.
To update the code, use pytest.param:
@pytest.mark.parametrize(
    "a, b",
    [
        (3, 9),
        pytest.param(6, 36, marks=pytest.mark.xfail(reason="flaky")),
        (10, 100),
        (20, 200),
        (40, 400),
        (50, 500),
    ],
)
def test_foo(a, b):
    ...
def pytest_funcarg__data():
    return SomeData()

Becomes:

@pytest.fixture
def data():
    return SomeData()
31.2.16 Metafunc.addcall
def pytest_generate_tests(metafunc):
    metafunc.addcall({"i": 1}, id="1")
    metafunc.addcall({"i": 2}, id="2")

Becomes:

def pytest_generate_tests(metafunc):
    metafunc.parametrize("i", [1, 2], ids=["1", "2"])
31.2.17 cached_setup
@pytest.fixture
def db_session():
    return request.cached_setup(
        setup=Session.create, teardown=lambda session: session.close(), scope="module"
    )

Becomes:

@pytest.fixture(scope="module")
def db_session():
    session = Session.create()
    yield session
    session.close()
You can consult the funcarg comparison section in the docs for more information.
Code that previously used the removed config.warn() / Node.warn() APIs to issue warnings becomes:
warnings.warn(pytest.PytestWarning("some warning"))
31.2.20 record_xml_property
def test_foo(record_xml_property):
    ...

Change to:

def test_foo(record_property):
    ...
pytest.main("-v -s")

Becomes:

pytest.main(["-v", "-s"])

By passing a string, users expect that pytest will interpret that command line using the rules of the shell they are
working in (for example bash or PowerShell), but this is very hard/impossible to do in a portable way.
@pytest.fixture
def cell():
    return ...


@pytest.fixture
def full_cell():
    cell = cell()
    cell.make_full()
    return cell
This is a great source of confusion to new users, who will often call the fixture functions and request them from test
functions interchangeably, which breaks the fixture resolution model.
In those cases just request the function directly in the dependent fixture:
@pytest.fixture
def cell():
    return ...


@pytest.fixture
def full_cell(cell):
    cell.make_full()
    return cell
Alternatively if the fixture function is called multiple times inside a test (making it hard to apply the above pattern)
or if you would like to make minimal changes to the code, you can create a fixture which calls the original function
together with the name parameter:
def cell():
    return ...
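# A sketch of the accompanying fixture (the name cell_fixture is illustrative); thanks to
# the name parameter, tests and other fixtures keep requesting it as "cell":
@pytest.fixture(name="cell")
def cell_fixture():
    return cell()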
def test_squared():
    yield check, 2, 4
    yield check, 3, 9
This would result in two actual test functions being generated.
This form of test function doesn’t support fixtures properly, and users should switch to pytest.mark.
parametrize:
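A sketch of an equivalent parametrized test, assuming check asserts that the first value squared equals the second:
import pytest


@pytest.mark.parametrize("x, expected", [(2, 4), (3, 9)])
def test_squared(x, expected):
    assert x ** 2 == expected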
Users should just import pytest and access those objects using the pytest module.
This has been documented as deprecated for years, but only now are we actually emitting deprecation warnings.
31.2.25 Node.get_marker
31.2.26 somefunction.markname
31.2.27 pytest_namespace
class MySymbol:
    ...


def pytest_namespace():
    return {"my_symbol": MySymbol()}
Plugin authors relying on this hook should instead require that users now import the plugin modules directly (with an
appropriate public API).
As a stopgap measure, plugin authors may still inject their names into pytest’s namespace, usually during
pytest_configure:
import pytest


def pytest_configure():
    pytest.my_symbol = MySymbol()
THIRTYTWO
PYTHON 2.7 AND 3.4 SUPPORT
It is demanding on the maintainers of an open source project to support many Python versions, as there is an extra
cost in keeping code compatible between all versions while holding back on features only made possible by newer
Python versions.
In case of Python 2 and 3, the difference between the languages makes it even more prominent, because many new
Python 3 features cannot be used in a Python 2/3 compatible code base.
Python 2.7 reached EOL in 2020, with the last release made in April 2020.
Python 3.4 reached EOL in 2019, with the last release made in March 2019.
For those reasons, in June 2019 it was decided that the pytest 4.6 series will be the last to support Python 2.7 and 3.4.
Thanks to the python_requires setuptools option, Python 2.7 and Python 3.4 users using a modern pip version will
install the last pytest 4.6.X version automatically even if 5.0 or later versions are available on PyPI.
Users should ensure they are using the latest pip and setuptools versions for this to work.
Until January 2020, the pytest core team ported many bug fixes from the main release into the 4.6.x branch, with
several 4.6.X releases being made throughout the year.
From now on, the core team will no longer actively backport patches, but the 4.6.x branch will continue to exist
so the community itself can contribute patches.
The core team will be happy to accept those patches, and make new 4.6.X releases until mid-2020 (but consider that
date as a ballpark, after that date the team might still decide to make new releases for critical bugs).
New 4.6.X releases will happen after we have a few bug fixes in place to release, or if a few weeks have passed (say,
a single bug has been fixed a month after the latest 4.6.X release).
There are no hard rules here, just a ballpark.
We core maintainers expect people who are still using Python 2.7/3.4 and are affected by bugs to step up and provide
patches and/or port bug fixes from the active branches.
We will be happy to guide users interested in doing so, so please don’t hesitate to ask.
Backporting changes into 4.6
Please follow these instructions:
1. git fetch --all --prune
2. git checkout origin/4.6.x -b backport-XXXX # use the PR number here
3. Locate the merge commit on the PR, in the merged message, for example:
nicoddemus merged commit 0f8b462 into pytest-dev:features
4. git cherry-pick -m1 REVISION # use the revision you found above (0f8b462).
5. Open a PR targeting 4.6.x:
• Prefix the message with [4.6] so it is an obvious backport
• Delete the PR body, it usually contains a duplicate commit message.
Providing new PRs to 4.6
Fresh pull requests to 4.6.x will be accepted provided that the equivalent code in the active branches does not contain
that bug (for example, a bug is specific to Python 2 only).
Bug fixes that also happen in the mainstream version should be first fixed there, and then backported as per instructions
above.
THIRTYTHREE
CONTRIBUTION GETTING STARTED
Contributions are highly welcomed and appreciated. Every little bit of help counts, so do not hesitate!
Contents
Do you like pytest? Share some love on Twitter or in your blog posts!
We’d also like to hear about your propositions and suggestions. Feel free to submit them as issues and:
• Explain in detail how they should work.
• Keep the scope as narrow as possible. This will make it easier to implement.
$ tox -e docs
The built documentation should be available in doc/en/_build/html, where ‘en’ refers to the documentation
language.
Pytest has an API reference which in large part is generated automatically from the docstrings of the documented
items. Pytest uses the Sphinx docstring format. For example:
def my_function(arg):
    """Do something useful and return the result.

    More detailed info here, in separate paragraphs from the subject line.
    Use proper sentences -- start sentences with capital letters and end
    with periods.

    .. versionadded:: 6.0
    """
Pytest development of the core, some plugins and support code happens in repositories living under the pytest-dev
organisations:
• pytest-dev on GitHub
All pytest-dev Contributors team members have write access to all contained repositories. Pytest core and plugins are
generally developed using pull requests to respective repositories.
The objectives of the pytest-dev organisation are:
• Having a central location for popular pytest plugins
• Sharing some of the maintenance responsibility (in case a maintainer no longer wishes to maintain a plugin)
You can submit your plugin by subscribing to the pytest-dev mailing list and writing an email pointing to your existing
pytest plugin repository, which must have the following:
• PyPI presence with packaging metadata that contains a pytest- prefixed name, version number, authors, short
and long description.
• a tox configuration for running tests using tox.
• a README describing how to use the plugin and on which platforms it runs.
• a LICENSE file containing the licensing information, with matching info in its packaging metadata.
• an issue tracker for bug reports and enhancement requests.
• a changelog.
If no contributor strongly objects and two agree, the repository can then be transferred to the pytest-dev organisa-
tion.
Here’s a rundown of how a repository transfer usually proceeds (using a repository named joedoe/pytest-xyz
as example):
• joedoe transfers repository ownership to pytest-dev administrator calvin.
• calvin creates pytest-xyz-admin and pytest-xyz-developers teams, inviting joedoe to both
as maintainer.
• calvin transfers repository to pytest-dev and configures team access:
– pytest-xyz-admin admin access;
– pytest-xyz-developers write access;
The pytest-dev/Contributors team has write access to all projects, and every project administrator is in it.
We recommend that each plugin has at least three people who have the right to release to PyPI.
Repository owners can rest assured that no pytest-dev administrator will ever make releases of your repository
or take ownership in any way, except in rare cases where someone becomes unresponsive after months of contact
attempts. As stated, the objective is to share maintenance and avoid “plugin-abandon”.
tox -e linting,py37
The test environments above are usually enough to cover most cases locally.
5. Write a changelog entry: changelog/2574.bugfix.rst, use issue id number and one of feature,
improvement, bugfix, doc, deprecation, breaking, vendor or trivial for the issue type.
6. Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please add
yourself to the AUTHORS file, in alphabetical order.
What is a “pull request”? It informs the project’s core developers about the changes you want reviewed and merged.
Pull requests are stored on GitHub servers. Once you send a pull request, we can discuss its potential modifications
and even add more commits to it later on. There’s an excellent tutorial on how Pull Requests work in the GitHub Help
Center.
Here is a simple overview, with pytest-specific bits:
1. Fork the pytest GitHub repository. It’s fine to use pytest as your fork repository name because it will live
under your user.
2. Clone your fork locally using git and create a branch:
Given we have “major.minor.micro” version numbers, bug fixes will usually be released in micro releases
whereas features will be released in minor releases and incompatible changes in major releases.
If you need some help with Git, follow this quick start guide: https://git.wiki.kernel.org/index.php/QuickStart
3. Install pre-commit and its hook on the pytest repo:
$ tox -e linting,py37
This command will run tests via the “tox” tool against Python 3.7 and also perform “lint” coding-style checks.
6. You can now edit your local working copy and run the tests again as necessary. Please follow PEP-8 for naming.
You can pass different options to tox. For example, to run tests on Python 3.7 and pass options to pytest (e.g.
enter pdb on failure) you can run tox -e py37 -- --pdb.
Afterwards, you can edit the files and run pytest normally:
$ pytest testing/test_config.py
8. Create a new changelog entry in changelog. The file should be named <issueid>.<type>.rst, where
issueid is the number of the issue related to the change and type is one of feature, improvement, bugfix,
doc, deprecation, breaking, vendor or trivial. You may skip creating the changelog entry if the
change doesn’t affect the documented behaviour of pytest.
9. Add yourself to AUTHORS file if not there yet, in alphabetical order.
10. Commit and push once your tests pass and you are happy with your change(s):
11. Finally, submit a pull request through the GitHub website using this data:
head-fork: YOUR_GITHUB_USERNAME/pytest
compare: your-branch-name
base-fork: pytest-dev/pytest
base: main
Writing tests for plugins or for pytest itself is often done using the testdir fixture, as a “black-box” test.
For example, to ensure a simple test passes you can write:
def test_true_assertion(testdir):
    testdir.makepyfile(
        """
        def test_foo():
            assert True
        """
    )
    result = testdir.runpytest()
    result.assert_outcomes(failed=0, passed=1)
Alternatively, it is possible to make checks based on the actual output of the terminal using glob-like expressions:
def test_false_assertion(testdir):
    testdir.makepyfile(
        """
        def test_foo():
            assert False
        """
    )
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*assert False*", "*1 failed*"])
When choosing a file to write a new test into, take a look at the existing files and see if there is one that looks like
a good fit. For example, a regression test about a bug in the --lf option should go into test_cacheprovider.py,
given that this option is implemented in cacheprovider.py. If in doubt, go ahead and open a PR with your
best guess and we can discuss this over the code.
Anyone who has successfully seen through a pull request which did not require any extra work from the development
team to merge will themselves gain commit access if they so wish (if we forget to ask please send a friendly reminder).
This does not mean there is any change in your contribution workflow: everyone goes through the same pull-request-
and-review process and no-one merges their own pull requests unless already approved. It does however mean you can
participate in the development process more fully since you can merge pull requests from other contributors yourself
after having reviewed them.
Pytest makes a feature release every few weeks or months. In between, patch releases are made against the previous
feature release, containing bug fixes only. The bug fixes usually fix regressions, but may be any change that should
reach users before the next feature release.
Suppose for example that the latest release was 1.2.3, and you want to include a bug fix in 1.2.4 (check https://github.
com/pytest-dev/pytest/releases for the actual latest release). The procedure for this is:
1. First, make sure the bug is fixed in the master branch, with a regular pull request, as described above. An
exception to this is if the bug fix is not applicable to master anymore.
2. git checkout origin/1.2.x -b backport-XXXX # use the master PR number here
3. Locate the merge commit on the PR, in the merged message, for example:
nicoddemus merged commit 0f8b462 into pytest-dev:master
4. git cherry-pick -x -m1 REVISION # use the revision you found above (0f8b462).
5. Open a PR targeting 1.2.x:
• Prefix the message with [1.2.x].
• Delete the PR body, it usually contains a duplicate commit message.
As mentioned above, bugs should first be fixed on master (except in rare occasions that a bug only happens in a
previous release). So who should do the backport procedure described above?
1. If the bug was fixed by a core developer, it is the main responsibility of that core developer to do the backport.
2. However, often the merge is done by another maintainer, in which case it is nice of them to do the backport
procedure if they have the time.
3. For bugs submitted by non-maintainers, it is expected that a core developer will do the backport, normally
the one that merged the PR on master.
4. If a non-maintainer notices a bug which is fixed on master but has not been backported (due to maintainers
forgetting to apply the needs backport label, or just plain missing it), they are also welcome to open a PR with
the backport. The procedure is simple and really helps with the maintenance of the project.
All the above are not rules, but merely some guidelines/suggestions on what we should expect about backports.
Stale issues/PRs are those where pytest contributors have asked for questions/changes and the authors didn’t get around
to answer/implement them yet after a somewhat long time, or the discussion simply died because people seemed to
lose interest.
There are many reasons why people don’t answer questions or implement requested changes: they might get busy,
lose interest, or just forget about it, but the fact is that this is very common in open source software.
The pytest team really appreciates every issue and pull request, but being a high-volume project with many issues and
pull requests being submitted daily, we try to reduce the number of stale issues and PRs by regularly closing them.
When an issue/pull request is closed in this manner, it is by no means a dismissal of the topic being tackled by the
issue/pull request, but it is just a way for us to clear up the queue and make the maintainers’ work more manageable.
Submitters can always reopen the issue/pull request in their own time later if it makes sense.
Here are a few general rules the maintainers use to decide when to close issues/PRs because of inactivity:
• Issues labeled question or needs information: closed after 14 days inactive.
• Issues labeled proposal: closed after six months inactive.
• Pull requests: after one month, consider pinging the author, update linked issue, or consider closing. For pull
requests which are nearly finished, the team should consider finishing it up and merging it.
The above are not hard rules, but merely guidelines, and can be (and often are!) reviewed on a case-by-case basis.
When closing a pull request, we need to acknowledge the time, effort, and interest demonstrated by the person who
submitted it. As mentioned previously, it is not the intent of the team to dismiss a stalled pull request entirely but
merely to clear up our queue, so a message like the one below is warranted when closing a pull request that went
stale:
Hi <contributor>,
First of all we would like to thank you for your time and effort on working on this, the pytest team deeply
appreciates it.
We noticed it has been a while since you have updated this PR, however. pytest is a high activity project,
with many issues/PRs being opened daily, so it is hard for us maintainers to track which PRs are ready for
merging, for review, or need more attention.
So for those reasons we think it is best to close the PR for now, but only with the intention of cleaning up
our queue; it is by no means a rejection of your changes. We still encourage you to re-open this PR (it is just
a click of a button away) when you are ready to get back to it.
Again we appreciate your time for working on this, and hope you might get back to this at a later time!
<bye>
When a pull request is submitted to fix an issue, add text like closes #XYZW to the PR description and/or commits
(where XYZW is the issue number). See the GitHub docs for more information.
When an issue is due to user error (e.g. misunderstanding of a functionality), please politely explain to the user why
the issue raised is really a non-issue and ask them to close the issue if they have no further questions. If the original
requestor is unresponsive, the issue will be handled as described in the section Handling stale issues/PRs above.
THIRTYFOUR
DEVELOPMENT GUIDE
The contributing guidelines are to be found here. The release procedure for pytest is documented on GitHub.
THIRTYFIVE
SPONSOR
pytest is maintained by a team of volunteers from all around the world in their free time. While we work on pytest
because we love the project and use it daily in our day jobs, monetary compensation when possible is welcome to
justify time away from friends, family and personal time.
Money is also used to fund local sprints, merchandising (stickers to distribute in conferences for example) and every
few years a large sprint involving all members.
35.1 OpenCollective
Open Collective is an online funding platform for open and transparent communities. It provides tools to raise money
and share your finances in full transparency.
It is the platform of choice for individuals and companies that want to make one-time or monthly donations directly to
the project.
See more details in the pytest collective.
THIRTYSIX
PYTEST FOR ENTERPRISE
Tidelift is working with the maintainers of pytest and thousands of other open source projects to deliver commercial
support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk,
and improve code health, while paying the maintainers of the exact dependencies you use.
Get more details
The Tidelift Subscription is a managed open source subscription for application dependencies covering millions of
open source projects across JavaScript, Python, Java, PHP, Ruby, .NET, and more.
Your subscription includes:
• Security updates
– Tidelift’s security response team coordinates patches for new breaking security vulnerabilities and alerts
immediately through a private channel, so your software supply chain is always secure.
• Licensing verification and indemnification
– Tidelift verifies license information to enable easy policy enforcement and adds intellectual property in-
demnification to cover creators and users in case something goes wrong. You always have a 100% up-to-
date bill of materials for your dependencies to share with your legal team, customers, or partners.
• Maintenance and code improvement
– Tidelift ensures the software you rely on keeps working as long as you need it to work. Your managed
dependencies are actively maintained and we recruit additional maintainers where required.
• Package selection and version guidance
– Tidelift helps you choose the best open source packages from the start, and then guides you through
updates to stay on the best releases as new issues arise.
• Roadmap input
– Take a seat at the table with the creators behind the software you use. Tidelift’s participating maintainers
earn more income as their software is used by more subscribers, so they’re interested in knowing what you
need.
• Tooling and cloud integration
– Tidelift works with GitHub, GitLab, BitBucket, and every cloud platform (and other deployment targets,
too).
The end result? All of the capabilities you expect from commercial-grade software, for the full breadth of open
source you use. That means less time grappling with esoteric open source trivia, and more time building your own
applications—and your business.
Request a demo
THIRTYSEVEN
LICENSE
Distributed under the terms of the MIT license, pytest is free and open source software.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
THIRTYEIGHT
CONTACT CHANNELS
• pytest issue tracker to report bugs or suggest features (for version 2.0 and above).
• pytest discussions at github for general questions.
• pytest on stackoverflow.com to post precise questions with the tag pytest. New questions will usually be
seen by pytest users or developers and answered quickly.
• Testing In Python: a mailing list for Python testing tools and discussion.
• pytest-dev at python.org (mailing list) pytest specific announcements and discussions.
• contribution guide for help on submitting pull requests to GitHub.
• #pytest on irc.libera.chat IRC channel for random questions (using an IRC client, via webchat, or via Matrix).
• private mail to Holger.Krekel at gmail com if you want to communicate sensitive issues
• merlinux.eu offers pytest and tox-related professional teaching and consulting.
THIRTYNINE
HISTORICAL NOTES
This page lists features or behavior from previous versions of pytest which have changed over the years. They are kept
here as a historical note so users looking at old code can find documentation related to them.
The old Node.get_marker(name) function is considered deprecated because it returns an internal MarkerInfo
object which contains the merged name, *args and **kwargs of all the markers which apply to that node.
In general there are two scenarios on how markers should be handled:
1. Marks overwrite each other. Order matters but you only want to think of your mark as a single item. E.g.
log_level('info') at a module level can be overwritten by log_level('debug') for a specific test.
In this case, use Node.get_closest_marker(name):
# replace this:
marker = item.get_marker("log_level")
if marker:
    level = marker.args[0]

# by this:
marker = item.get_closest_marker("log_level")
if marker:
    level = marker.args[0]
2. Marks compose in an additive manner. E.g. skipif(condition) marks mean you just want to evaluate all of
them, order doesn’t even matter. You probably want to think of your marks as a set here.
In this case iterate over each mark and handle their *args and **kwargs individually.
# replace this
skipif = item.get_marker("skipif")
if skipif:
    for condition in skipif.args:
        # eval condition
        ...

# by this:
for skipif in item.iter_markers("skipif"):
    condition = skipif.args[0]
    # eval condition
If you are unsure or have any questions, please consider opening an issue.
Note: in a future major release of pytest we will introduce class based markers, at which point markers will no longer
be limited to instances of Mark.
The functionality of the core cache plugin was previously distributed as a third party plugin named pytest-cache.
The core plugin is compatible regarding command line options and API usage except that you can only store/receive
data between test runs that is json-serializable.
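For example, a minimal sketch of storing JSON-serializable data between runs with the built-in cache fixture (the key and values are illustrative):
def test_remember_value(cache):
    # values survive between test runs as long as they are JSON-serializable
    cache.set("example/answer", {"answer": 42})
    assert cache.get("example/answer", None) == {"answer": 42}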
In versions prior to 2.3 there was no @pytest.fixture marker and you had to use a magic
pytest_funcarg__NAME prefix for the fixture factory. This remains and will remain supported, but it is no longer
advertised as the primary means of declaring fixture functions.
Prior to version 2.10, in order to use a yield statement to execute teardown code one had to mark a fixture using
the yield_fixture marker. From 2.10 onward, normal fixtures can use yield directly so the yield_fixture
decorator is no longer needed and considered deprecated.
Prior to 3.0, the supported section name was [pytest]. Due to how this may collide with some distutils commands,
the recommended section name for setup.cfg files is now [tool:pytest].
Note that for pytest.ini and tox.ini files the section name is [pytest].
Prior to version 3.1 the supported mechanism for marking values used the syntax:
import pytest


@pytest.mark.parametrize(
    "test_input,expected", [("3+5", 8), ("2+4", 6), pytest.mark.xfail(("6*9", 42))]
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
This was an initial hack to support the feature, but it soon proved to be incomplete: it was broken for passing functions
or for applying multiple marks with the same name but different parameters.
The old syntax is planned to be removed in pytest-4.0.
In versions prior to 2.4 one needed to specify the argument names as a tuple. This remains valid but the simpler
"name1,name2,..." comma-separated-string syntax is now advertised first because it’s easier to write and pro-
duces less line noise.
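A minimal sketch contrasting the two spellings (the test bodies are illustrative):
import pytest


# older tuple form -- still valid
@pytest.mark.parametrize(("test_input", "expected"), [("3+5", 8)])
def test_eval_tuple(test_input, expected):
    assert eval(test_input) == expected


# comma-separated string form -- now advertised first
@pytest.mark.parametrize("test_input,expected", [("3+5", 8)])
def test_eval_string(test_input, expected):
    assert eval(test_input) == expected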
During development prior to the pytest-2.3 release the name pytest.setup was used, but before the release it was
renamed and moved to become part of the general fixture mechanism, namely Autouse fixtures (fixtures you don’t
have to request).
Prior to pytest-2.4 the only way to specify skipif/xfail conditions was to use strings:
import sys

import pytest


@pytest.mark.skipif("sys.version_info >= (3, 0)")
def test_function():
    ...

During test function setup the skipif condition is evaluated by calling eval('sys.version_info >= (3,
0)', namespace). The namespace contains all the module globals, and os and sys as a minimum.
Since pytest-2.4 boolean conditions are considered preferable because markers can then be freely imported between
test modules. With strings you need to import not only the marker but all variables used by the marker, which violates
encapsulation.
The reason for specifying the condition as a string was that pytest can report a summary of skip conditions based
purely on the condition string. With conditions as booleans you are required to specify a reason string.
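A minimal sketch of the boolean form with an explicit reason (the condition is illustrative):
import sys

import pytest


@pytest.mark.skipif(sys.version_info < (3, 6), reason="requires Python 3.6 or newer")
def test_function():
    ...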
Note that string conditions will remain fully supported and you are free to use them if you have no need for cross-
importing markers.
The evaluation of a condition string in pytest.mark.skipif(conditionstring) or pytest.mark.
xfail(conditionstring) takes place in a namespace dictionary which is constructed as follows:
• the namespace is initialized by putting the sys and os modules and the pytest config object into it.
• updated with the module globals of the test function for which the expression is applied.
The pytest config object allows you to skip based on a test configuration value which you might have added:
@pytest.mark.skipif("not config.getvalue('db')")
def test_function():
    ...
Note: You cannot use pytest.config.getvalue() in code imported before pytest’s argument parsing
takes place. For example, conftest.py files are imported before command line parsing and thus config.
getvalue() will not execute correctly.
39.10 pytest.set_trace()
Prior to version 2.4, to set a breakpoint in code one needed to use pytest.set_trace():
import pytest


def test_function():
    ...
    pytest.set_trace()  # invoke PDB debugger and tracing

This is no longer needed and one can use the native import pdb; pdb.set_trace() call directly.
For more details see Setting breakpoints.
Access of Module, Function, Class, Instance, File and Item through Node instances has long been
documented as deprecated, but started to emit warnings from pytest 3.9 onward.
Users should just import pytest and access those objects using the pytest module.
FORTY
TALKS AND TUTORIALS
• Webinar: pytest: Test Driven Development für Python (German), Florian Bruhin, via mylearning.ch, 2020
• Webinar: Simplify Your Tests with Fixtures, Oliver Bestwalter, via JetBrains, 2020
• Training: Introduction to pytest - simple, rapid and fun testing with Python, Florian Bruhin, PyConDE 2019
• Abridged metaprogramming classics - this episode: pytest, Oliver Bestwalter, PyConDE 2019 (repository,
recording)
• Testing PySide/PyQt code easily using the pytest framework, Florian Bruhin, Qt World Summit 2019 (slides,
recording)
• pytest: recommendations, basic packages for testing in Python and Django, Andreu Vallbona, PyBCN June
2019.
• pytest: recommendations, basic packages for testing in Python and Django, Andreu Vallbona, PyconES 2017
(slides in english, video in spanish)
• pytest advanced, Andrew Svetlov (Russian, PyCon Russia, 2016).
• Pythonic testing, Igor Starikov (Russian, PyNsk, November 2016).
• pytest - Rapid Simple Testing, Florian Bruhin, Swiss Python Summit 2016.
• Improve your testing with Pytest and Mock, Gabe Hollombe, PyCon SG 2015.
• Introduction to pytest, Andreas Pelme, EuroPython 2014.
• Advanced Uses of py.test Fixtures, Floris Bruynooghe, EuroPython 2014.
• Why i use py.test and maybe you should too, Andy Todd, Pycon AU 2013
• 3-part blog series about pytest from @pydanny alias Daniel Greenfeld (January 2014)
• pytest: helps you write better Django apps, Andreas Pelme, DjangoCon Europe 2014.
• Testing Django Applications with pytest, Andreas Pelme, EuroPython 2013.
• Testes pythonics com py.test, Vinicius Belchior Assef Neto, Plone Conf 2013, Brazil.
FORTYONE
PROJECT EXAMPLES
Here are some examples of projects using pytest (please send notes via Contact channels):
• PyPy, Python with a JIT compiler, running over 21000 tests
• the MoinMoin Wiki Engine
• sentry, realtime app-maintenance and exception tracking
• Astropy and affiliated packages
• tox, virtualenv/Hudson integration tool
• PyPM ActiveState’s package manager
• Fom a fluid object mapper for FluidDB
• applib cross-platform utilities
• six Python 2 and 3 compatibility utilities
• pediapress MediaWiki articles
• mwlib mediawiki parser and utility library
• The Translate Toolkit for localization and conversion
• execnet rapid multi-Python deployment
• pylib cross-platform path, IO, dynamic code library
• bbfreeze create standalone executables from Python scripts
• pdb++ a fancier version of PDB
• pudb full-screen console debugger for python
• py-s3fuse Amazon S3 FUSE based filesystem
• waskr WSGI Stats Middleware
• guachi global persistent configs for Python modules
• Circuits lightweight Event Driven Framework
• pygtk-helpers easy interaction with PyGTK
• QuantumCore statusmessage and repoze openid plugin
• pydataportability libraries for managing the open web
• XIST extensible HTML/XML generator
• tiddlyweb optionally headless, extensible RESTful datastore