Testing is an important part of developing PyTorch. Good tests ensure both that code behaves as expected and that regressions are caught as the codebase changes.
This article explains PyTorch's existing testing tools to help you write effective and efficient tests.
PyTorch's test suites are located under pytorch/test.
Code for generating tests and testing helper functions is located under pytorch/torch/testing/_internal.
Directly running python test files (preferred)
Most PyTorch test files can be run using unittest or pytest (used in CI). Both options can use -k <filter> to filter tests by string and -v for verbose output.
To run test_torch.py with unittest, use:

python test_torch.py

Additional unittest-specific arguments can be appended to this command. For example, to run only a specific test:

python test_torch.py <TestClass>.<TestName>

To run test_torch.py with pytest, use:

pytest test_torch.py

or

python -m pytest test_torch.py
Other useful options include:
-x to stop after the first failure
-s to show the output of stdout/stderr
--lf to run only the tests that failed in the last pytest invocation
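For instance, a possible invocation combining these flags (the -k filter string here is only an illustration):

python -m pytest test_torch.py -v -x -k "test_add"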
test/run_test.py
In addition to directly running python test files, PyTorch's Continuous Integration and some specialized test cases can be launched via pytorch/test/run_test.py.
It provides additional features that are not available when directly running python test files, such as sharding the test suite and automatically rerunning flaky tests (see the extra dependencies below).
You can run the test/run_test.py file directly, and it will selectively run all tests available on your current platform:

python test/run_test.py
Alternatively, you can pass in additional arguments to run specific tests; use the help option to find all possible test options:
python test/run_test.py -h
Using test/run_test.py will usually require some extra dependencies, like pytest-rerunfailures and pytest-shard. Running pip install -r .ci/docker/requirements-ci.txt will install all the needed dependencies.
In addition to unittest and pytest options, PyTorch's test suite also understands a number of environment variables. For instance,

PYTORCH_TEST_WITH_SLOW=1 python test_torch.py

will run the tests in test_torch.py, including those decorated with @slowTest.
Use the test case's assertEqual to compare objects for equality.
Prefer make_tensor over tensor creation ops like torch.randn when generating test tensors, as shown in the sketch below.
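A minimal sketch combining both recommendations (the test class and the operation under test are hypothetical; make_tensor, TestCase, and run_tests are the helpers from torch.testing and torch.testing._internal.common_utils):

import torch
from torch.testing import make_tensor
from torch.testing._internal.common_utils import TestCase, run_tests

class TestAddSketch(TestCase):  # hypothetical test class for illustration
    def test_add(self):
        # make_tensor controls shape, dtype, device, and value range in one place.
        a = make_tensor((3, 4), dtype=torch.float32, device="cpu", low=-10, high=10)
        b = make_tensor((3, 4), dtype=torch.float32, device="cpu", low=-10, high=10)
        # assertEqual understands tensors and picks dtype-appropriate tolerances.
        self.assertEqual(a + b, torch.add(a, b))

if __name__ == "__main__":
    run_tests()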
PyTorch's test generation functionality
See this comment for details on writing test templates.
PyTorch's test framework lets you instantiate test templates for different operators, datatypes (dtypes), and devices to improve test coverage. It is recommended that all tests be written as templates, whether or not this is strictly necessary, to make it easier for the test framework to inspect the test's properties.
In general, there are three variants of instantiated tests, whose names are adapted at runtime according to the following scheme:
<TestClass><DEVICE>.<test_name>_<device>
<TestClass><DEVICE>.<test_name>_<device>_<dtype>
<TestClass><DEVICE>.<test_name>_<operator_name>_<device>_<dtype>
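A minimal sketch of such a template, assuming a hypothetical class TestFoo and test test_bar; instantiate_device_type_tests and the dtypes decorator are the helpers from torch.testing._internal.common_device_type:

import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests,
    dtypes,
)

class TestFoo(TestCase):  # hypothetical template class
    # The framework passes the device string and dtype into the template.
    @dtypes(torch.float32, torch.float64)
    def test_bar(self, device, dtype):
        x = torch.ones(3, device=device, dtype=dtype)
        self.assertEqual(x.sum(), torch.tensor(3.0, device=device, dtype=dtype))

# Generates e.g. TestFooCPU.test_bar_cpu_float32 and, if CUDA is available,
# TestFooCUDA.test_bar_cuda_float32.
instantiate_device_type_tests(TestFoo, globals())

if __name__ == "__main__":
    run_tests()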
To use the selection syntax to run only a single test class or test, whether with unittest or pytest, it is important to use the instantiated name rather than the template name. For pytest users there is the pytest-pytorch plugin, which re-enables selecting individual test classes or tests by their template name.
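For example, to select only the CPU/float32 instantiation of the hypothetical template above with pytest's selection syntax (the file name is just an illustration):

python -m pytest test_foo.py::TestFooCPU::test_bar_cpu_float32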
See the "OpInfos" note in torch/testing/_internal/opinfo/core.py for details on adding an OpInfo and how they work.
OpInfos are used to automatically generate a variety of operator tests from metadata. If you're adding a new operator to the torch, torch.nn, torch.special, torch.fft, or torch.linalg namespaces, you should write an OpInfo for it so it's tested properly.
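As a rough sketch of what a minimal OpInfo entry can look like (the sample-input helper below is hypothetical, "sin" merely stands in for the new operator, and real entries in the op database declare many more attributes):

from torch.testing import make_tensor
from torch.testing._internal.common_dtype import floating_types
from torch.testing._internal.opinfo.core import OpInfo, SampleInput

# Hypothetical sample-input generator: yields representative inputs for the op.
def sample_inputs_my_op(op_info, device, dtype, requires_grad, **kwargs):
    yield SampleInput(
        make_tensor((2, 3), device=device, dtype=dtype, requires_grad=requires_grad)
    )

# "sin" stands in for the operator being added; real entries are appended to
# the op database (op_db) so the generated operator tests pick them up.
my_op_info = OpInfo(
    "sin",
    dtypes=floating_types(),
    sample_inputs_func=sample_inputs_my_op,
)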