6. built-in fixtures
pytest's built-in fixtures can greatly simplify testing work. For example, they can handle temporary files and directories, read command-line options, share data between test sessions, verify output streams, change environment variables, and check warnings. The built-in fixtures are an extension of pytest's core functionality.
6.1 using tmpdir and tmpdir_factory
The built-in tmpdir and tmpdir_factory fixtures are responsible for creating temporary directories before a test runs and cleaning them up afterwards. Their main characteristics are as follows:
- 1. If the test code needs to read and write files, you can use tmpdir or tmpdir_factory to create them. Use tmpdir when a single test needs the directory and tmpdir_factory when multiple tests share it.
- 2. The scope of tmpdir is function level, while the scope of tmpdir_factory is session level.
The example code is as follows:
```python
import pytest


def test_tmpDir(tmpdir):
    tmpfileA=tmpdir.join("testA.txt")
    tmpSubDir=tmpdir.mkdir("subDir")
    tmpfileB=tmpSubDir.join("testB.txt")
    tmpfileA.write("this is pytest tmp file A")
    tmpfileB.write("this is pytest tmp file B")
    assert tmpfileA.read()=="this is pytest tmp file A"
    assert tmpfileB.read()=="this is pytest tmp file B"
```
The scope of tmpdir is function level, so it can only be used to create files or directories for individual test functions. If a fixture's scope is higher than function level (class, module, or session), you need tmpdir_factory instead. tmpdir and tmpdir_factory are similar, but the methods they provide differ, as shown below:
```python
import pytest


def test_tmpDir(tmpdir_factory):
    baseTmpDir=tmpdir_factory.getbasetemp()
    print(f"\nbase temp dir is :{baseTmpDir}")
    tmpDir_factory=tmpdir_factory.mktemp("tempDir")
    tmpfileA=tmpDir_factory.join("testA.txt")
    tmpSubDir=tmpDir_factory.mkdir("subDir")
    tmpfileB=tmpSubDir.join("testB.txt")
    tmpfileA.write("this is pytest tmp file A")
    tmpfileB.write("this is pytest tmp file B")
    assert tmpfileA.read()=="this is pytest tmp file A"
    assert tmpfileB.read()=="this is pytest tmp file B"
```
getbasetemp() returns the base directory used by the current session. The directory is named pytest-NUM, where NUM increases with each new session; pytest keeps the base directories of only the last few sessions and clears earlier ones. You can also specify the base temporary directory on the command line, as follows:
pytest --basetemp=dir
The output is as follows:
```
>>> pytest -s -v .\test_01.py
========================= test session starts ==============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 1 item

test_01.py::test_tmpDir
base temp dir is :C:\Users\Surpass\AppData\Local\Temp\pytest-of-Surpass\pytest-11
PASSED

========================= 1 passed in 0.12s ==================================
```
6.2 using temporary directories in other scopes
The scope of tmpdir_factory is session level, and the scope of tmpdir is function level. What if you need a temporary directory at module or class scope? In that case, you can use tmpdir_factory to build another fixture at the desired scope.
Suppose a test module contains many test cases that need to read the same JSON file. You can configure a module-level fixture either in the module itself or in conftest.py. The example is as follows:
conftest.py
```python
import json

import pytest


@pytest.fixture(scope="module")
def readJson(tmpdir_factory):
    jsonData={
        "name":"Surpass",
        "age":28,
        "locate":"shangahi",
        "loveCity":{"shanghai":"shanghai",
                    "wuhai":"hubei",
                    "shenzheng":"guangdong"
                    }
    }
    file=tmpdir_factory.mktemp("jsonTemp").join("tempJSON.json")
    with open(file,"w",encoding="utf8") as fo:
        json.dump(jsonData,fo,ensure_ascii=False)
    # print(f"base dir is {tmpdir_factory.getbasetemp()}")
    return file
```
test_02.py
```python
import json


def test_getData(readJson):
    with open(readJson,"r",encoding="utf8") as fo:
        data=json.load(fo)
    assert data.get("name")=="Surpass"


def test_getLoveCity(readJson):
    with open(readJson,"r",encoding="utf8") as fo:
        data=json.load(fo)
    getCity=data.get("loveCity")
    for k,v in getCity.items():
        assert len(v)>0
```
The output is as follows:
```
>>> pytest -v .\test_02.py
============================ test session starts ==============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 2 items

test_02.py::test_getData PASSED                                          [ 50%]
test_02.py::test_getLoveCity PASSED                                      [100%]

========================== 2 passed in 0.08s ==================================
```
Because the fixture is module-scoped, the JSON file is created only once for the whole module.
6.3 using pytestconfig
The built-in pytestconfig fixture gives access to how pytest was run: command-line arguments and options, configuration files, plugins, and the invocation directory. pytestconfig is a shortcut to request.config, which is referred to in pytest as the "pytest config object".
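As a quick illustration (a minimal sketch with a made-up test name), both names refer to the same object:

```python
def test_pytestconfig_is_request_config(request, pytestconfig):
    # pytestconfig is simply a shortcut to request.config:
    # both point at the same pytest config object
    assert pytestconfig is request.config
```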
To understand how pytestconfig works, let's add a custom command-line option and then read it in a test case. Custom command-line options can be read directly from pytestconfig, but for pytest to parse them you also need a hook function (hook functions are another way to control pytest and are used frequently in plugins). Examples are as follows:
pytestconfig\conftest.py
```python
def pytest_addoption(parser):
    parser.addoption("--myopt",action="store_true",help="test boolean option")
    parser.addoption("--foo",action="store",default="Surpass",help="test store")
```
The output is as follows:
```
>>> pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
custom options:
  --myopt               test boolean option
  --foo=FOO             test store
```
Let's try to use these options in a test case, as follows:
pytestconfig\test_03.py
```python
import pytest


def test_myOption(pytestconfig):
    print(f"--foo {pytestconfig.getoption('foo')}")
    print(f"--myopt {pytestconfig.getoption('myopt')}")
```
The output is as follows:
```
>>> pytest -s -q .\test_03.py
--foo Surpass
--myopt False
.
1 passed in 0.08s

>>> pytest -s -q --myopt .\test_03.py
--foo Surpass
--myopt True
.
1 passed in 0.02s

>>> pytest -s -q --myopt --foo Surpass .\test_03.py
--foo Surpass
--myopt True
```
Because pytestconfig is itself a fixture, it can also be used by other fixtures, as follows:
```python
import pytest


@pytest.fixture()
def foo(pytestconfig):
    return pytestconfig.option.foo


@pytest.fixture()
def myopt(pytestconfig):
    return pytestconfig.option.myopt


def test_fixtureForAddOption(foo,myopt):
    print(f"\nfoo -- {foo}")
    print(f"\nmyopt -- {myopt}")
```
The output is as follows:
```
>>> pytest -v -s .\test_option.py
======================== test session starts =============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04\pytestconfig
collected 1 item

test_option.py::test_fixtureForAddOption
foo -- Surpass
myopt -- False
PASSED

======================= 1 passed in 0.14s ================================
```
In addition to reading custom options, pytestconfig also exposes built-in options and pytest startup information, such as directories and arguments, as shown below:
```python
def test_pytestconfig(pytestconfig):
    print(f"args : {pytestconfig.args}")
    print(f"ini file is : {pytestconfig.inifile}")
    print(f"root dir is : {pytestconfig.rootdir}")
    print(f"invocation dir is :{pytestconfig.invocation_dir}")
    print(f"-q, --quiet {pytestconfig.getoption('--quiet')}")
    print(f"-l, --showlocals:{pytestconfig.getoption('showlocals')}")
    print(f"--tb=style: {pytestconfig.getoption('tbstyle')}")
```
The output is as follows:
```
>>> pytest -v -s .\test_option.py
========================== test session starts ==========================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04\pytestconfig
collected 1 item

test_option.py::test_pytestconfig
args : ['.\\test_option.py']
ini file is : None
root dir is : C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04\pytestconfig
invocation dir is :C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04\pytestconfig
-q, --quiet 1
-l, --showlocals:False
--tb=style: auto
PASSED

========================== 1 passed in 0.07s =================================
```
6.4 using cache
Generally, test cases are independent and do not affect one another. Sometimes, however, after a test case has run you want to pass its result on to later tests or to the next test session. In this case, you need pytest's built-in cache.
The cache stores information from one test session for use in the next. pytest's built-in --last-failed and --failed-first flags demonstrate the cache well. Examples are as follows:
```python
def test_A():
    assert 1==1


def test_B():
    assert 1==2
```
The output is as follows:
```
>>> pytest -v --tb=no .\test_04.py
=========================== test session starts =========================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 2 items

test_04.py::test_A PASSED                                                [ 50%]
test_04.py::test_B FAILED                                                [100%]

======================== 1 failed, 1 passed in 0.08s ========================
```
One of the above test cases fails. If the tests are run again with --ff or --failed-first, the previously failed test cases are run first, followed by the rest, as shown below:
```
>>> pytest -v --tb=no --ff .\test_04.py
===================== test session starts ===========================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 2 items
run-last-failure: rerun previous 1 failure first

test_04.py::test_B FAILED                                                [ 50%]
test_04.py::test_A PASSED                                                [100%]

======================= 1 failed, 1 passed in 0.14s ===================
```
In addition, you can use --lf or --last-failed to run only the test cases that failed last time, as shown below:
```
>>> pytest -v --tb=no --lf .\test_04.py
=================== test session starts ===============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 1 item
run-last-failure: rerun previous 1 failure

test_04.py::test_B FAILED                                                [100%]

======================== 1 failed in 0.07s =============================
```
How does pytest store and retrieve this information? Let's look at the following test case:
```python
import pytest
from pytest import approx

testData=[
    #x,y,res
    (1,2,3),
    (2,4,6),
    (3,5,8),
    (-1,-2,0)
]


@pytest.mark.parametrize("x,y,expect",testData)
def test_add(x,y,expect):
    res=x+y
    assert res==approx(expect)
```
The output is as follows:
```
>>> pytest -v -q .\test_04.py
================== test session starts =============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 4 items

test_04.py ...F                                                          [100%]

================== FAILURES =======================================
___________________ test_add[-1--2-0] _____________________________

x = -1, y = -2, expect = 0

    @pytest.mark.parametrize("x,y,expect",testData)
    def test_add(x,y,expect):
        res=x+y
>       assert res==approx(expect)
E       assert -3 == 0 ± 1.0e-12
E        + where 0 ± 1.0e-12 = approx(0)

test_04.py:16: AssertionError
=================== short test summary info =======================
FAILED test_04.py::test_add[-1--2-0] - assert -3 == 0 ± 1.0e-12
=================== 1 failed, 3 passed in 0.26s ===================
```
From the error message we can spot the problem at a glance. For failures that are not so easy to locate, we can use --showlocals (abbreviated -l) to debug the failed test cases, as follows:
```
>>> pytest -q --lf -l .\test_04.py
F                                                                        [100%]
========================= FAILURES =============================
________________________ test_add[-1--2-0] _____________________

x = -1, y = -2, expect = 0

    @pytest.mark.parametrize("x,y,expect",testData)
    def test_add(x,y,expect):
        res=x+y
>       assert res==approx(expect)
E       assert -3 == 0 ± 1.0e-12
E        + where 0 ± 1.0e-12 = approx(0)

expect     = 0
res        = -3
x          = -1
y          = -2

test_04.py:16: AssertionError
======================== short test summary info ====================
FAILED test_04.py::test_add[-1--2-0] - assert -3 == 0 ± 1.0e-12
1 failed in 0.17s
```
From the above information you can see directly where the problem lies and what the local variables were. To remember which test cases failed last, pytest stores the failure information from the previous test session. You can use the --cache-show flag to display the stored information.
```
>>> pytest --cache-show
======================= test session starts ============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
cachedir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04\.pytest_cache
----------------------------- cache values for '*' ------------------------
cache\lastfailed contains:
  {'pytestconfig/test_03.py::test_myOption': True,
   'test_04.py::test_B': True,
   'test_04.py::test_add[-1--2-0]': True}
cache\nodeids contains:
  ['test_01.py::test_tmpDir',
   'test_02.py::test_getLoveCity',
   'test_02.py::test_getData',
   'test_04.py::test_A',
   'test_04.py::test_B',
   'pytestconfig/test_03.py::test_myOption',
   'pytestconfig/test_option.py::test_pytestconfig',
   'test_04.py::test_add[1-2-3]',
   'test_04.py::test_add[2-4-6]',
   'test_04.py::test_add[3-5-8]',
   'test_04.py::test_add[-1--2-0]']
cache\stepwise contains:
  []

======================== no tests ran in 0.03s ==============================
```
If you need to clear the cache, pass the --cache-clear flag when starting the test session. Beyond the --lf and --ff flags, the cache also has its own interface, as shown below:
```python
cache.get(key, default)
cache.set(key, value)
```
By convention, key names start with the application or plugin name, followed by /, followed by further key segments. The value can be anything that can be converted to JSON, because it is stored as JSON in the cache directory.
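As a minimal sketch (the key and test names below are made up), the cache fixture can be requested directly in a test, and a value written under a prefixed key can be read back later:

```python
def test_store_result(cache):
    # Key follows the "<name>/<key>" convention; the value must be JSON-serializable
    cache.set("mydemo/last_result", {"count": 3, "status": "ok"})


def test_read_result(cache):
    # get() returns the default (None here) if the key has never been set
    data = cache.get("mydemo/last_result", None)
    assert data is not None
    assert data["status"] == "ok"
```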
Let's create a fixture that records each test's duration and stores it in the cache. If a test takes more than twice as long as it did in the previous run, the fixture raises an error at teardown.
```python
import datetime
import time
import random

import pytest


@pytest.fixture(autouse=True)
def checkDuration(request,cache):
    key="duration/"+request.node.nodeid.replace(":","_")
    startTime=datetime.datetime.now()
    yield
    endTime=datetime.datetime.now()
    duration=(endTime-startTime).total_seconds()
    lastDuration=cache.get(key,None)
    cache.set(key,duration)
    if lastDuration is not None:
        errorString="test duration over twice last duration"
        assert duration <= 2 * lastDuration,errorString


@pytest.mark.parametrize("t",range(5))
def test_duration(t):
    time.sleep(random.randint(0,5))
```
nodeid is a unique identifier that works even with parametrized tests. Run the test cases as follows:
```
>>> pytest -q --cache-clear .\test_04.py
.....                                                                    [100%]
5 passed in 10.14s

>>> pytest -q --tb=line .\test_04.py
.E....E                                                                  [100%]
========================== ERRORS ========================================
_________________ ERROR at teardown of test_duration[0] __________________
assert 5.006229 <= (2 * 1.003045)
E   AssertionError: test duration over twice last duration
_________________ ERROR at teardown of test_duration[4] ___________________
assert 4.149226 <= (2 * 1.005112)
E   AssertionError: test duration over twice last duration
================== short test summary info ================================
ERROR test_04.py::test_duration[0] - AssertionError: test duration over twice last duration
ERROR test_04.py::test_duration[4] - AssertionError: test duration over twice last duration
5 passed, 2 errors in 14.50s

>>> pytest -q --cache-show
cachedir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04\.pytest_cache
----------------------- cache values for '*' -----------------------------------------
cache\lastfailed contains:
  {'test_04.py::test_duration[0]': True,
   'test_04.py::test_duration[4]': True}
cache\nodeids contains:
  ['test_04.py::test_duration[0]',
   'test_04.py::test_duration[1]',
   'test_04.py::test_duration[2]',
   'test_04.py::test_duration[3]',
   'test_04.py::test_duration[4]']
cache\stepwise contains:
  []
duration\test_04.py__test_duration[0] contains:
  5.006229
duration\test_04.py__test_duration[1] contains:
  0.001998
duration\test_04.py__test_duration[2] contains:
  1.006201
duration\test_04.py__test_duration[3] contains:
  4.007687
duration\test_04.py__test_duration[4] contains:
  4.149226

no tests ran in 0.03s
```
Because the cached durations carry the duration/ prefix, they are easy to pick out in the output.
6.5 using capsys
pytest's built-in capsys fixture has two main functions:
- It lets tests read captured stdout and stderr.
- It can temporarily disable output capture.
1. read stdout
```python
def greeting(name):
    print(f"Hello,{name}")


def test_greeting(capsys):
    greeting("Surpass")
    out,err=capsys.readouterr()
    assert "Hello,Surpass" in out
    assert err==""
```
The output is as follows:
```
>>> pytest -v .\test_05.py
========================= test session starts ============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 1 item

test_05.py::test_greeting PASSED                                         [100%]

====================== 1 passed in 0.08s ==================================
```
2. read stderr
```python
import sys


def greeting(name):
    print(f"Hello,{name}",file=sys.stderr)


def test_greeting(capsys):
    greeting("Surpass")
    out,err=capsys.readouterr()
    assert "Hello,Surpass" in err
    assert out==""
```
The output is as follows:
```
>>> pytest -v .\test_05.py
========================== test session starts =============================
platform win32 -- Python 3.7.6, pytest-5.4.2, py-1.8.1, pluggy-0.13.1 -- d:\program files\python\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\Surpass\Documents\PycharmProjects\PytestStudy\Lesson04
collected 1 item

test_05.py::test_greeting PASSED                                         [100%]

============================ 1 passed in 0.11s ===========================
```
pytest normally captures the output of test cases and the code under test. After the session completes, the captured output is shown only for failed tests. The -s flag turns capturing off and sends output straight to stdout while the tests are still running. Sometimes, however, you only want part of the output to bypass capturing; capsys.disabled() temporarily disables the default capture mechanism. An example is as follows:
```python
def test_capsysDisable(capsys):
    with capsys.disabled():
        print("\nalways print this information")
    print("normal print,usually captured")
```
The output is as follows:
```
>>> pytest -q .\test_05.py

always print this information
.                                                                        [100%]
1 passed in 0.02s

>>> pytest -q -s .\test_05.py

always print this information
normal print,usually captured
.
1 passed in 0.02s
```
"always print this information" is displayed whether or not output is being captured, because it is printed inside the capsys.disabled() block. The other print statement is captured as usual and shown only when the -s flag is passed.
The -s flag is shorthand for --capture=no, which turns output capture off.
6.6 using monkeypatch
A monkey patch dynamically modifies a class or module at runtime. In testing, monkey patching is often used to replace part of the environment of the code under test, or to substitute its input or output dependencies with objects or functions that are easier to test. pytest's built-in monkeypatch fixture applies such changes within a single test: after the test ends, whether it passed or failed, all modifications are undone. Common monkeypatch methods are as follows:
```python
setattr(target, name, value=<notset>, raising=True)  # set an attribute
delattr(target, name=<notset>, raising=True)         # delete an attribute
setitem(dic, name, value)                            # set an item in a dictionary
delitem(dic, name, raising=True)                     # delete an item from a dictionary
setenv(name, value, prepend=None)                    # set an environment variable
delenv(name, raising=True)                           # delete an environment variable
syspath_prepend(path)                                # prepend a path to sys.path
chdir(path)                                          # change the current working directory
```
- 1. The raising parameter indicates whether pytest should raise an exception when the target attribute, item, or variable does not exist.
- 2. prepend in setenv() can be a separator character. When it is set, the environment variable becomes value + prepend + <original value>. See the sketch below.
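As an illustration, here is a minimal sketch of setenv() and delenv(); the environment variable names and values are made up for this example:

```python
import os


def test_setenv(monkeypatch):
    # Set (or overwrite) an environment variable for the duration of the test
    monkeypatch.setenv("MY_TOOL_PATH", r"C:\tools")
    assert os.environ["MY_TOOL_PATH"] == r"C:\tools"

    # With prepend=";" the variable becomes value + ";" + <original value>
    monkeypatch.setenv("MY_TOOL_PATH", r"D:\bin", prepend=";")
    assert os.environ["MY_TOOL_PATH"] == r"D:\bin;C:\tools"

    # raising=False means no error is raised if the variable does not exist
    monkeypatch.delenv("NOT_EXISTING_VAR", raising=False)
```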
To better understand how monkeypatch is used in practice, let's look at the following example:
```python
import os
import json

defaulData={
    "name":"Surpass",
    "age":28,
    "locate":"shangahi",
    "loveCity":{"shanghai":"shanghai",
                "wuhai":"hubei",
                "shenzheng":"guangdong"
                }
}


def readJSON():
    path=os.path.join(os.getcwd(),"surpass.json")
    with open(path,"r",encoding="utf8") as fo:
        data=json.load(fo)
    return data


def writeJSON(data:str):
    path = os.path.join(os.getcwd(), "surpass.json")
    with open(path,"w",encoding="utf8") as fo:
        json.dump(data,fo,ensure_ascii=False,indent=4)


def writeDefaultJSON():
    writeJSON(defaulData)
```
writeDefaultJSON() takes no parameters and returns nothing, so how do we test it? Looking at the function, it saves a JSON file in the current directory, so we can test it indirectly: run the code and check the file it generates, as follows:
```python
def test_writeDefaultJSON():
    writeDefaultJSON()
    expectd=defaulData
    actual=readJSON()
    assert expectd==actual
```
Although this works, it overwrites the contents of the original file, because the function writes to the current directory. We can switch the current directory to a temporary one instead, as in the following example:
```python
def test_writeDefaultJSONChangeDir(tmpdir,monkeypatch):
    tmpDir=tmpdir.mkdir("TestDir")
    monkeypatch.chdir(tmpDir)
    writeDefaultJSON()
    expectd=defaulData
    actual=readJSON()
    assert expectd==actual
```
This solves the directory problem, but what if the data needs to be modified during the test? The example is as follows:
```python
from copy import deepcopy


def test_writeDefaultJSONChangeDir(tmpdir,monkeypatch):
    tmpDir=tmpdir.mkdir("TestDir")
    monkeypatch.chdir(tmpDir)
    # Save the default data
    writeDefaultJSON()
    copyData=deepcopy(defaulData)
    # Add items
    monkeypatch.setitem(defaulData,"hometown","hubei")
    monkeypatch.setitem(defaulData,"company",["Surpassme","Surmount"])
    addItemData=defaulData
    # Save the data again
    writeDefaultJSON()
    # Read the saved data back
    actual=readJSON()
    assert addItemData==actual
    assert copyData!=actual
```
Because the default data is a dictionary, setitem can be used to add key-value pairs to it for the duration of the test.
6.7 using recwarn
The built-in recwarn fixture can be used to check warnings generated by the code under test. In Python, warnings are a bit like assertions, but they do not stop the program from running. If you want to announce that an obsolete function will no longer be supported, you can issue a warning in the code, as shown in the following example:
```python
import warnings

import pytest


def depricateFunc():
    warnings.warn("This function is not support after 3.8 version",DeprecationWarning)


def test_depricateFunc(recwarn):
    depricateFunc()
    assert len(recwarn)==1
    warnInfo=recwarn.pop()
    assert warnInfo.category==DeprecationWarning
    assert str(warnInfo.message) == "This function is not support after 3.8 version"
```
recwarn behaves like a list of warning records. Each record has four attributes: category, message, filename, and lineno. Warnings are collected from the moment the test starts, so if the warning you want to check is only issued near the end, you can call recwarn.clear() beforehand to discard the ones you are not interested in.
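Here is a minimal sketch of recwarn.clear(); the helper function and warning messages are made up for this example:

```python
import warnings


def noisySetup():
    warnings.warn("some unrelated warning", UserWarning)


def test_only_last_warning(recwarn):
    noisySetup()
    recwarn.clear()  # discard the warnings collected so far
    warnings.warn("interesting warning", DeprecationWarning)
    assert len(recwarn) == 1
    w = recwarn.pop(DeprecationWarning)
    assert "interesting warning" in str(w.message)
```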
In addition to recwarn, you can also use pytest.warns() to check warnings. An example is as follows:
```python
import warnings

import pytest


def depricateFunc():
    warnings.warn("This function is not support after 3.8 version",DeprecationWarning)


def test_depricateFunc():
    with pytest.warns(None) as warnInfo:
        depricateFunc()
    assert len(warnInfo)==1
    w=warnInfo.pop()
    assert w.category==DeprecationWarning
    assert str(w.message) == "This function is not support after 3.8 version"
```
Original address: https://www.cnblogs.com/surpassme/p/13258526.html
This article is also published on my WeChat subscription account. If you like it, you can follow me there: woaitest.