Who this guide is for
- Learners writing multi-function scripts and small applications
- Developers who want confidence before refactoring or shipping
- Teams that need repeatable verification in local and CI environments
What you’ll learn
- Differences between `unittest`, `doctest`, and `pytest`
- How to write readable tests with assertions and fixtures
- How to run tests across environments with `tox`
- How to measure test coverage with `coverage.py`
- Core TDD mindset for incremental, reliable development
Why this topic matters
Testing protects your code from regressions and gives confidence to improve design over time. Without tests, even small edits can introduce silent bugs that are hard to detect manually.
In Python, you can start simple and scale gradually. A few well-placed tests often catch the most expensive failures early and make collaboration easier.
Core concepts
Test types and frameworks
- `unittest`: built-in and structured
- `pytest`: popular, concise, rich plugin ecosystem
- `doctest`: validates examples in docstrings
`unittest` example (historically also known as PyUnit):

```python
import unittest

def multiply(a, b):
    return a * b

class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        self.assertEqual(multiply(3, 4), 12)

if __name__ == "__main__":
    unittest.main()
```
Simple pytest test:
```python
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
```
Readable tests should describe behavior, not implementation details.
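Fixtures, mentioned earlier, are one way pytest supports readable setup: a test requests shared test data by naming it as a parameter. A minimal sketch, where the `sample_cart` fixture and `cart_total` helper are hypothetical examples:

```python
import pytest

@pytest.fixture
def sample_cart():
    # Hypothetical example data: a fresh cart dict is built for each test
    return {"apple": 2, "banana": 3}

def cart_total(cart):
    # Helper under test: total quantity of items in the cart
    return sum(cart.values())

def test_cart_total(sample_cart):
    # pytest injects the fixture's return value via the matching parameter name
    assert cart_total(sample_cart) == 5
```

Because pytest calls the fixture freshly for each test that requests it, tests never share mutable state.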
Arrange-Act-Assert pattern
Most unit tests follow three steps:
- Arrange input state
- Act by calling the function
- Assert expected output
```python
def test_discount_price():
    # Arrange: set up the input state
    price = 100
    discount = 0.2
    # Act: perform the computation under test
    result = price * (1 - discount)
    # Assert: check the expected output
    assert result == 80
```
This structure keeps tests consistent and easy to read.
Coverage and confidence
Coverage tells you how much code executes during tests. It is a signal, not a guarantee.
```shell
python -m pip install coverage
coverage run -m pytest
coverage report -m
```
High coverage is useful, but meaningful assertions matter more than percentages.
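To see why, consider a sketch (the `apply_discount` function is hypothetical): both tests below execute every line, so they contribute identically to coverage, but only the second would catch a bug in the formula.

```python
def apply_discount(price, rate):
    return price * (1 - rate)

def test_executes_but_asserts_nothing():
    # Counts toward 100% line coverage, yet passes even if the formula is wrong
    apply_discount(100, 0.5)

def test_checks_the_result():
    # Meaningful assertion: pins the expected value, so a regression fails loudly
    assert apply_discount(100, 0.5) == 50
```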
Step-by-step walkthrough
Step 1 — Create a testable module
Create `calculator.py`:

```python
def divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("b must not be zero")
    return a / b
```
Small pure functions are easiest to test.
Step 2 — Add pytest tests
Create `test_calculator.py`:

```python
import pytest
from calculator import divide

def test_divide_success():
    assert divide(10, 2) == 5

def test_divide_by_zero():
    with pytest.raises(ValueError):
        divide(10, 0)
```
Run tests:

```shell
pytest -q
```
Step 3 — Add coverage and cross-env checks
```shell
coverage run -m pytest
coverage report -m
```
Optional tox setup (for multi-version checks):
```ini
[tox]
envlist = py311, py312

[testenv]
deps = pytest
commands = pytest
```
This helps verify behavior across supported Python versions.
Simple pytest project structure:

```text
my_project/
├── src/
│   └── calculator.py
├── tests/
│   └── test_calculator.py
├── pyproject.toml
└── README.md
```
Practical examples
Example 1 — Parameterized tests for multiple inputs
```python
import pytest

@pytest.mark.parametrize(
    "a,b,expected",
    [
        (1, 2, 3),
        (10, 5, 15),
        (-1, 1, 0),
    ],
)
def test_add_cases(a, b, expected):
    assert a + b == expected
```
Expected result:
3 passed
Example 2 — Doctest for executable documentation
```python
def square(value: int) -> int:
    """
    Return the square of a number.

    >>> square(4)
    16
    >>> square(-3)
    9
    """
    return value * value
```
Run doctest:

```shell
python -m doctest -v your_module.py
```
This keeps examples and behavior aligned.
Example 3 — TDD mini cycle (Red -> Green -> Refactor)
Start by writing a failing test (Red):
```python
def test_is_even():
    assert is_even(4) is True
    assert is_even(5) is False
```
Then implement minimal code to pass (Green):
```python
def is_even(value):
    return value % 2 == 0
```
Finally improve naming/structure without breaking tests (Refactor).
Expected result:
- Tests pass after implementation, and continue passing after safe refactoring.
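The Refactor step might look like the sketch below: the type hints and docstring are illustrative additions, and because behavior is unchanged, the test written in the Red step still passes.

```python
def is_even(value: int) -> bool:
    """Return True if value is divisible by 2."""
    return value % 2 == 0

def test_is_even():
    # Same test as before the refactor; it keeps passing
    assert is_even(4) is True
    assert is_even(5) is False
```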
Common mistakes and how to avoid them
- Writing tests only for happy paths -> Add edge cases and failure cases.
- Asserting too much in one test -> Keep tests focused on one behavior.
- Coupling tests to internal implementation details -> Test public behavior, not private internals.
- Treating coverage percentage as absolute quality -> Combine coverage with meaningful assertions.
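The third point can be illustrated with a small sketch (the `Counter` class is hypothetical): the test asserts only through the public API, so renaming or restructuring internals would not break it.

```python
class Counter:
    def __init__(self):
        self._count = 0  # leading underscore: internal detail, free to change

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

def test_increment_public_behavior():
    c = Counter()
    c.increment()
    # Good: asserts via the public value() method
    assert c.value() == 1
    # Brittle (avoid): `assert c._count == 1` would break if the internal
    # storage changed, even though observable behavior stayed the same
```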
Quick practice
- Write tests for a `safe_divide(a, b)` function including zero division handling.
- Convert one repeated test pattern into `pytest.mark.parametrize`.
- Generate a coverage report and identify one untested branch to improve.
Key takeaways
- Tests are safety nets that speed up development and refactoring.
- `pytest` offers a practical default for many Python teams.
- Coverage is useful guidance but should not replace thoughtful test design.
- Small, well-scoped tests provide the highest long-term value.
Next step
Continue to Documentation. In the next guide, you will write maintainable docs with docstrings, Sphinx, and MkDocs workflows.