For most people, the philosophy is that "tests are good", since they give you more confidence in the correctness of your code. Their main drawback is the time required to write and run them.
Inside a summer.py file:
def sum_scores(scores):
    """Calculates the total score based on a list of scores."""
    total = 0
    for score in scores:
        total += score
    return total
The unittest module, part of the Python standard library, can be used to write tests in files separate from the code being tested.
import unittest

from summer import sum_scores

class TestSumScores(unittest.TestCase):
    def test_sum_empty(self):
        self.assertEqual(sum_scores([]), 0)

    def test_sum_numbers(self):
        self.assertEqual(sum_scores([8, 9, 7]), 24)
Tests are methods inside a TestCase subclass that call special assert* methods.
Run a single file:
python3 -m unittest test_sum_scores.py
Run all discoverable tests:
python3 -m unittest
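You can also run a single test class or method by its dotted name (assuming test_sum_scores.py is importable as a module), for example:
python3 -m unittest test_sum_scores.TestSumScores.test_sum_empty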
For more options, read the docs.
The pytest package is a popular third-party alternative for writing tests.
from summer import sum_scores

def test_sum_empty():
    assert sum_scores([]) == 0

def test_sum_numbers():
    assert sum_scores([8, 9, 7]) == 24
Tests are simple functions that use Python's assert statement.
Install the package:
pip3 install pytest
Run a single file:
python3 -m pytest sum_scores_test.py
Run all discoverable tests:
python3 -m pytest
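You can also run a single test function by its node ID, for example:
python3 -m pytest sum_scores_test.py::test_sum_empty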
Pytest can be configured in pyproject.toml:
[tool.pytest.ini_options]
addopts = "-ra"
pythonpath = ['.']
🔗 See all options
Starting from this repo:
github.com/pamelafox/testing-workshop-starter
In tests/texter_test.py, add tests for the src/texter.py functions.
Test coverage measures the percentage of code that is exercised by the tests in a test suite.
Two ways of measuring coverage: line coverage (was each line of code executed?) and branch coverage (was each possible branch taken, e.g. both outcomes of an if condition?).
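For example, here's a minimal sketch (using a hypothetical describe_score function) where one test reaches 100% line coverage but not 100% branch coverage:

def describe_score(score):
    label = "score"
    if score > 90:
        label = "high score"
    return label

def test_describe_score():
    # Every line runs, but the score <= 90 path is never taken,
    # so branch coverage would report a missed branch here.
    assert describe_score(95) == "high score"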
coverage.py is the most popular tool for measuring coverage in Python programs.
Example coverage report for a Python web app:
tests/test_routes.py ................. [ 89%]
tests/test_translations.py .. [100%]
---------- coverage: platform linux, python 3.9.13-final-0 -----------
Name                         Stmts   Miss  Cover   Missing
----------------------------------------------------------
src/__init__.py                 17      0   100%
src/database.py                  4      0   100%
src/models.py                   20      0   100%
src/routes.py                   74      0   100%
src/translations.py             14      0   100%
tests/conftest.py               35      0   100%
tests/test_routes.py           110      0   100%
tests/test_translations.py      16      0   100%
----------------------------------------------------------
TOTAL                          290      0   100%
Install the package:
pip3 install coverage
Run with unittest:
coverage run -m unittest test_sum_scores.py
Run with pytest:
coverage run -m pytest sum_scores_test.py
You can also measure branch coverage by adding the --branch flag:
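coverage run --branch -m pytest sum_scores_test.py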
For a command-line report:
coverage report
For an HTML report:
coverage html
Other reporter types are also available.
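For example, to generate machine-readable reports for CI tools:
coverage xml
coverage json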
The pytest-cov plugin makes it even easier to run coverage with pytest.
Install the package:
pip3 install pytest-cov
Run with pytest:
pytest --cov=myproj tests/
See pytest-cov docs for more options.
Returning to the previous repo, add the following to addopts in pyproject.toml:
--cov src --cov-report term-missing
Then improve the test coverage of the src/ directory, including conditionals.py.
If code uses functionality that's hard to replicate in test environments, you can monkeypatch that functionality.
Consider this function:
def input_number(message):
    user_input = int(input(message))
    return user_input
We can monkeypatch input() to mock it:
def fake_input(msg):
    return '5'

def test_input_int(monkeypatch):
    monkeypatch.setattr('builtins.input', fake_input)
    assert input_number('Enter num') == 5
Pytest fixtures are functions that run before each test that requests them. Fixtures are helpful for setup that's repeated across tests.
Example fixture:
import pytest

@pytest.fixture
def mock_input(monkeypatch):
    def fake_input(msg):
        return '5'
    monkeypatch.setattr('builtins.input', fake_input)

def test_input_number(mock_input):
    assert input_number('Enter num') == 5
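Fixtures can also provide data to tests: whatever a fixture returns (or yields) is passed to the test as the argument with the fixture's name. A minimal sketch, assuming sum_scores is imported from summer:

@pytest.fixture
def sample_scores():
    # Reusable test data for any test that requests this fixture
    return [8, 9, 7]

def test_sum_with_fixture(sample_scores):
    assert sum_scores(sample_scores) == 24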
Most web app frameworks provide some sort of testing client object:
Flask: app.test_client()
FastAPI: fastapi.testclient.TestClient(app)
Django: django.test.Client()
Example Flask tests:
from flaskapp import app

def test_homepage():
    response = app.test_client().get("/")
    assert response.status_code == 200
    assert b"I am a human" in response.data
Consider this FastAPI app with a /generate_name route:
import random

import fastapi

from .data import names

app = fastapi.FastAPI()

@app.get("/generate_name")
async def generate_name(starts_with: str = None):
    name_choices = ["Hassan", "Maria", "Sofia", "Yusuf", "Aisha", "Fatima", "Ahmed"]
    if starts_with:
        name_choices = [name for name in names if name.lower().startswith(starts_with.lower())]
    random_name = random.choice(name_choices)
    return {"name": random_name}
For access to the TestClient, install the httpx module:
pip install httpx
Write tests for each API route:
import random

from fastapi.testclient import TestClient

from .main import app

client = TestClient(app)

def test_generate_name_params():
    random.seed(1)
    response = client.get("/generate_name?starts_with=n")
    assert response.status_code == 200
    assert response.json()["name"] == "Nancy"
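A hypothetical additional test for the same route without the query parameter, checking only that some name comes back (since the choice is random):

def test_generate_name_no_params():
    response = client.get("/generate_name")
    assert response.status_code == 200
    assert len(response.json()["name"]) > 0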
Using this repo:
github.com/pamelafox/simple-fastapi-container/
Run python3 -m pytest to run the current tests. Then add more tests, run python3 -m pytest again, and ensure 100% coverage.
E2E tests are the most realistic tests, since they test the entire program from the user's perspective.
For a web app, an E2E test actually opens up the web app in a browser, interacts with the webpage, and checks the results.
Popular E2E libraries include Selenium and Playwright; the examples below use Playwright.
Install playwright, pytest plugin, and browsers:
pip3 install playwright
pip3 install pytest-playwright
playwright install --with-deps
Write a test:
import pytest
from playwright.sync_api import Page, expect

def test_home(page: Page, live_server):
    page.goto("http://localhost:8000")
    expect(page).to_have_title("ReleCloud - Expand your horizons")
The live_server fixture starts the app in a background process for the E2E tests:
from multiprocessing import Process

import pytest
import uvicorn

from fastapi_app import seed_data
from fastapi_app.app import app

def run_server():
    uvicorn.run(app)

@pytest.fixture(scope="session")
def live_server():
    # Seed the database and start the server once per test session
    seed_data.load_from_json()
    proc = Process(target=run_server, daemon=True)
    proc.start()
    yield
    # Teardown: stop the server process and clear the seeded data
    proc.kill()
    seed_data.drop_all()
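Run the E2E tests like any other pytest tests; the pytest-playwright plugin also supports options such as --headed to watch the browser while the tests run:
python3 -m pytest --headed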
Starting from this repo:
github.com/Azure-Samples/azure-fastapi-postgres-flexible-appservice
pre-commit is a third-party package for running checks (hooks) automatically before each git commit.
Running all tests before a commit can take a long time, however!
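A minimal sketch of a .pre-commit-config.yaml that runs pytest as a local hook (assuming pytest is installed in the project environment):

repos:
  - repo: local
    hooks:
      - id: pytest
        name: pytest
        entry: pytest
        language: system
        pass_filenames: false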
Whenever code is pushed to a repo, a CI server can run a suite of actions which can result in success or failure.
Popular CI options: Jenkins, Travis CI, GitHub Actions
An example GitHub Actions workflow with pytest:
name: Python checks
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python 3
        uses: actions/setup-python@v3
        with:
          python-version: "3.11"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
      - name: Run unit tests
        run: |
          pytest
See it in action.