Testing APIs is a crucial part of software development: it ensures that your code interacts correctly with external services and meets users' expectations. Most often, API tests are part of system or integration testing. API testing is important for several reasons:
- Validation of core functionality - APIs are the backbone of modern applications, enabling communication between different software systems. Testing APIs ensures that the core functionality of your application works as expected.
- Early detection of issues - API testing can be performed early in the development cycle, allowing developers to detect and fix issues before they escalate.
- Security testing - APIs often handle sensitive data and perform critical operations. API testing helps identify security vulnerabilities such as unauthorized access, data breaches, and injection attacks.
- Performance validation - API testing can assess how the backend performs under various conditions, including load testing and stress testing. Performing and automating such tests on an API level is much simpler and more convenient. It enables the detection of potential bottlenecks and areas for improvement.
- Supports continuous integration - API testing is essential for continuous integration and continuous delivery (CI/CD) pipelines. By automating API tests and integrating them into your CI/CD pipeline, you ensure that every change to the codebase is thoroughly tested before it is merged or deployed. This continuous testing process helps to catch issues early, preventing faulty code from being integrated into the main branch and reducing the risk of bugs making it to production.
In this article, we'll explore how to test APIs in Python using the popular PyTest library. We'll also demonstrate how to use mocking to simulate responses from external services, making your tests faster, more reliable, and easier to maintain.
Why PyTest?
PyTest is a robust and flexible testing framework for Python. It supports fixtures, parameterized testing, and a plethora of plugins, making it an excellent choice for API testing.
Here are a few reasons why PyTest is a good choice:
- PyTest's syntax is clean and easy to understand, reducing the learning curve.
- PyTest has a lot of plugins, so it can be extended to meet your specific needs.
- Features like fixtures and parameterization help in writing reusable and maintainable tests that cover more cases without code duplication.
- PyTest provides detailed and easy-to-read output, making it simpler to diagnose test failures and present test results.
- PyTest is one of the most popular testing frameworks for Python.
The high popularity of PyTest makes it easier to learn and troubleshoot: there is plenty of material about the framework, and the community is large, so many questions have already been answered. Many large projects are built on this tool because it supports both simple test cases and much more complex test suites and hierarchies. More information about PyTest can be found in the official documentation.
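As a small taste of such hierarchies, related tests can be grouped into classes, which PyTest collects automatically (the class and test names below are purely illustrative):

```python
# PyTest discovers classes whose names start with "Test"
# and runs every method whose name starts with "test_".
class TestStringMethods:
    def test_upper(self):
        assert "pytest".upper() == "PYTEST"

    def test_split(self):
        assert "a,b".split(",") == ["a", "b"]
```

Classes like this keep related cases together without requiring any unittest boilerplate.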
Using virtual environments
Before we start, the first step is to install Python. You can download it from the official Python website.
When working on a project, it's crucial to manage dependencies to avoid conflicts between different libraries and versions used by different projects. Python's venv module helps create isolated virtual environments for your projects, ensuring that each project has its own dependencies, so let’s start by creating our virtual environment and activating it.
Setting up a virtual environment
To create a virtual environment, run the following command in your project directory:
python -m venv venv
This command creates a new directory called venv that contains an isolated Python environment. To activate the virtual environment, use:
- For Windows:
venv\Scripts\activate
- For macOS and Linux:
source venv/bin/activate
After successful activation, you should see the virtual environment's name at the beginning of the prompt:
(venv) maciej@maciej-laptop:~/sandbox$
Getting started with PyTest
First, we have to install the PyTest library. We can do so using pip:
pip install pytest
In the next step, we can create a trivial test function to ensure everything is set up correctly:
def test_example():
    assert True
Now, we can run this test using the following command in the terminal:
pytest test_example.py
If everything is set up correctly, you should see output indicating that the test passed.
maciej@maciej-laptop:~/sandbox$ pytest test_example.py
=========================== test session starts ===============================
platform linux -- Python 3.8.10, pytest-7.2.0, pluggy-1.3.0
rootdir: /home/sandbox
collected 1 item
test_example.py . [100%]
=========================== 1 passed in 0.00s =================================
Requests library
The Requests library is a powerful and user-friendly HTTP library for Python. It is designed to make HTTP requests simpler and more intuitive, making it a good choice for API testing.
Key features of the Requests library
- Simple and easy to use - the syntax of requests is straightforward and readable. This simplicity makes it easy to send HTTP requests and handle responses.
- HTTP methods support - requests supports all major HTTP methods, including GET, POST, PUT, DELETE, PATCH, and HEAD, so you can interact with any RESTful service effectively.
- Session objects - the library provides session objects, allowing you to persist certain parameters across multiple requests.
- Automatic content decoding - requests automatically decodes content from the server, whether it's JSON, XML, or plain text.
- SSL verification - by default, requests verifies SSL certificates, ensuring secure HTTP connections. You can also customize this behavior to suit your needs.
- File uploads and downloads - handling file uploads and downloads is straightforward.
- Proxies and authentication - requests makes it easy to work with HTTP proxies and handle various authentication methods, including Basic Auth, OAuth, and custom authentication schemes.
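To illustrate the session objects mentioned above, here is a minimal sketch: parameters set on a Session (the header value below is a placeholder) are reused by every request made through it.

```python
import requests

# Parameters set on a Session persist across all requests made through it.
session = requests.Session()
session.headers.update({"Authorization": "Bearer <token>"})  # placeholder token

# Both calls below would carry the Authorization header automatically:
# session.get("http://example.com/api/users")
# session.get("http://example.com/api/orders")
print(session.headers["Authorization"])
```

This avoids repeating common headers, cookies, or auth settings in every single request.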
To install the Requests library we can also use pip:
pip install requests
An example test with PyTest and the Requests libraries might look like this:
import pytest
import requests
def test_example():
    url = 'http://example.com'
    response = requests.get(url)
    print(response.text)
    assert response.status_code == 200
In this test, we execute the GET method on a given URL, print the response body as text, and check that the response status code equals 200. To see the print output in the console, run PyTest with the -s flag:
maciej@maciej-laptop:~/sandbox$ pytest test_example.py -s
In larger projects, we would likely build a hierarchy in the testing framework and create API clients, builders, and other tools to send project-specific data, make assertions, and provide debug logs in a more convenient and readable way.
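As a taste of such a structure, a thin client might centralize the base URL and common request logic. This is only a sketch; the class shape and names are assumptions, not a prescribed design:

```python
import requests

class ProjectAPIClient:
    """Hypothetical thin wrapper around requests for one service."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip('/')

    def _url(self, endpoint):
        # Join the base URL and an endpoint, tolerating stray slashes.
        return f"{self.base_url}/{endpoint.lstrip('/')}"

    def get(self, endpoint, **kwargs):
        # Central place to add auth headers, logging, retries, etc.
        return requests.get(self._url(endpoint), **kwargs)
```

Tests can then call client.get('users') instead of repeating full URLs, and cross-cutting concerns live in one place.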
Mocking with PyTest
Mocking may be essential in API testing, especially when your code interacts with external services. By simulating responses from these services, you can isolate your code and ensure that tests run quickly and consistently. What is more, you are not dependent on bugs and changes in external services. For example: you do not have to wait until a new feature of that service is implemented. You are able to mock this functionality and start testing immediately.
We'll use the unittest.mock library, which is included in the standard Python library, to demonstrate how to mock API responses. Let's assume you have a simple API client that fetches data from an external service:
import requests
class APIClient:
    def get_data(self, url):
        response = requests.get(url)
        return response
Now, we'll write a test for this client. We'll mock the requests.get method to simulate an external API response:
from unittest.mock import patch
from api_client import APIClient
@patch('requests.get')
def test_fetch_data(mock_get):
    # Mock setup
    mock_response = mock_get.return_value
    mock_response.status_code = 200
    mock_response.json.return_value = {'key': 'value'}

    # Test body
    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_data(url)

    # Asserts
    assert result.json() == {'key': 'value'}
    assert result.status_code == 200
    mock_get.assert_called_once_with(url)
In this test, we're using the patch decorator from unittest.mock to replace the requests.get method with a mock object. We then define what this mock object should return when its json method is called. Finally, we assert that our get_data method returns the expected result and status code, and that requests.get was called with the correct URL.
Using fixtures for reusable mocks
PyTest fixtures allow you to create reusable setups for your tests. Let's refactor our test to use a fixture for the mock response:
import pytest
from unittest.mock import patch
from api_client import APIClient
@pytest.fixture
def mock_response():
    with patch('requests.get') as mock_get:
        yield mock_get

def test_get_data(mock_response):
    mock_response.return_value.json.return_value = {'key': 'value'}
    mock_response.return_value.status_code = 200

    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_data(url)

    # Asserts
    assert result.json() == {'key': 'value'}
    assert result.status_code == 200
    mock_response.assert_called_once_with(url)
By using a fixture, we can reuse the mock setup in multiple tests, improving code maintainability. Mocking can get more sophisticated when you need to simulate different behaviors or errors. Let's explore a few advanced techniques.
Mocking different responses
You might want to simulate different responses, status codes, or errors. The tests below show how to do so:
import pytest
from unittest.mock import patch
from api_client import APIClient
from http import HTTPStatus
@pytest.fixture
def mock_response():
    with patch('requests.get') as mock_get:
        yield mock_get

def test_get_data_success(mock_response):
    # Successful response
    mock_response.return_value.json.return_value = {'key': 'value'}
    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_data(url)
    assert result.json() == {'key': 'value'}

def test_get_data_failure(mock_response):
    # Failure response
    mock_response.return_value.status_code = HTTPStatus.NOT_FOUND
    mock_response.return_value.json.return_value = {'error': 'Not found'}
    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_data(url)
    assert result.json() == {'error': 'Not found'}
    assert result.status_code == HTTPStatus.NOT_FOUND
These tests use the HTTPStatus class, an enum containing all standard HTTP status codes, which makes setting and checking statuses more readable.
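Besides fixed return values, a mock's side_effect attribute can raise an exception, which is handy for simulating network failures. A minimal, standard-library-only sketch:

```python
from unittest.mock import MagicMock

# Configure a mock to raise instead of returning, simulating a network failure.
mock_get = MagicMock(side_effect=ConnectionError("service unavailable"))

try:
    mock_get("http://example.com/api")
    failed = False
except ConnectionError:
    failed = True
print(failed)  # prints True
```

With the fixtures above, the equivalent setup is mock_response.side_effect = ConnectionError(...); when patching requests.get specifically, you would typically raise requests.exceptions.ConnectionError instead.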
Mocking external dependencies in methods
If your class depends on other methods or classes, you can mock these dependencies as well. Let’s consider a client that logs the result of an API call:
- api_client.py
import requests
import logging
class APIClient:
    def get_data(self, url):
        response = requests.get(url)
        self.log_result(response)
        return response

    def log_result(self, response):
        logging.info(f"Response: {response.json()}")
You can mock the log_result method to focus your test on get_data. This might be especially useful when a mocked method has some indirect (hard to track) or unwanted side effects - it is slow, depends on external services, etc. Mocking lets you avoid all those difficulties, and only assert that the method was called as expected:
- test_api_client.py
import pytest
from unittest.mock import patch
from api_client import APIClient
@pytest.fixture
def mock_response():
    with patch('requests.get') as mock_get:
        yield mock_get

@patch.object(APIClient, 'log_result')
def test_get_data_with_logging(mock_log_result, mock_response):
    mock_response.return_value.json.return_value = {'key': 'value'}
    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_data(url)
    assert result.json() == {'key': 'value'}
    mock_log_result.assert_called_once()
Use parameters to check multiple cases
PyTest provides the parametrize marker to check multiple cases in one test without additional code overhead or duplication. It is good practice to have one generic test covering multiple cases, as this improves both the readability and maintainability of the codebase.
from http import HTTPStatus
import requests
import pytest
from unittest.mock import patch
class APIClient:
    def get_data(self, url):
        response = requests.get(url)
        return response

@pytest.fixture
def mock_response():
    with patch('requests.get') as mock_get:
        yield mock_get

@pytest.mark.parametrize(
    "status_code",
    [
        HTTPStatus.OK,
        HTTPStatus.NOT_FOUND,
        HTTPStatus.UNPROCESSABLE_ENTITY,
    ]
)
def test_api_example(mock_response, status_code):
    mock_response.return_value.status_code = status_code
    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_data(url)
    assert result.status_code == status_code
    mock_response.assert_called_once()
In this case, we specified three status codes; each is first set on the mocked response and then used in the assertion at the end of the test. There is a single test case, but when we run it, we will see that the test was launched three times with different parameters.
(venv) maciej@maciej-laptop:~/sandbox$ pytest test_api_example.py -v
============================= test session starts =============================
platform linux -- Python 3.8.10, pytest-8.3.2, pluggy-1.5.0 -- /home/maciej/sandbox/venv/bin/python
cachedir: .pytest_cache
rootdir: /home/maciej/sandbox
collected 3 items
test_api_example.py::test_api_example[HTTPStatus.OK] PASSED [ 33%]
test_api_example.py::test_api_example[HTTPStatus.NOT_FOUND] PASSED [ 66%]
test_api_example.py::test_api_example[HTTPStatus.UNPROCESSABLE_ENTITY] PASSED [100%]
============================== 3 passed in 0.05s ==============================
Practical scenario
Note that, for simplicity, the previous examples only showcased what mocks can do: the system under test had no real logic, and the tests merely verified that the mocks behaved as expected. In a real use case, mocking an external service helps verify the internals of the tested service. The example below shows a case where the tested service processes data received from the external (mocked) service.
import requests
import pytest
from unittest.mock import patch
class MissingUsersException(Exception):
    pass

@pytest.fixture
def mock_get():
    with patch('requests.get') as mock_get:
        yield mock_get

class APIClient:
    @staticmethod
    def get_users_data(url):
        response = requests.get(url)
        users_key = "users"
        try:
            data = response.json()
            return data[users_key]
        except KeyError:
            raise MissingUsersException(f'Key: {users_key} does not exist!')

def test_fetch_users_data(mock_get):
    # Mock setup
    mock_response = mock_get.return_value
    mock_response.status_code = 200
    users_list = ['user1', 'user2']
    mock_response.json.return_value = {'users': users_list}

    # Test body
    client = APIClient()
    url = 'http://example.com/api'
    result = client.get_users_data(url)

    # Asserts
    assert result == users_list
    mock_get.assert_called_once_with(url)
In this case, we have implemented the get_users_data method in APIClient, which calls an API and processes the response to return a list of users. In the test, the real APIClient method is called while the external API response is mocked, so only the internal logic of the APIClient is tested, as intended.
Without a mock, a genuine API URL would be queried. When you use mocks, remember that if the actual behavior of the mocked service changes (e.g., the response data structure is modified), the mocks need to be updated to make sure you are still testing real-life scenarios; the test cannot detect such a change on its own.
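It is also worth covering the failure path, i.e. the external service returning a payload without the 'users' key. The extraction logic is repeated below in isolation so the sketch is self-contained; in the real test suite, you would instead call APIClient.get_users_data inside pytest.raises(MissingUsersException):

```python
from unittest.mock import MagicMock

class MissingUsersException(Exception):
    pass

def extract_users(response):
    # Same extraction logic as APIClient.get_users_data above.
    users_key = "users"
    try:
        return response.json()[users_key]
    except KeyError:
        raise MissingUsersException(f'Key: {users_key} does not exist!')

# Simulate the external service returning an unexpected payload:
bad_response = MagicMock()
bad_response.json.return_value = {"unexpected": "shape"}

try:
    extract_users(bad_response)
    raised = False
except MissingUsersException:
    raised = True
print(raised)  # prints True
```

Testing both the happy and the error path ensures the client degrades predictably when the external contract is broken.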
Running and organizing tests
Organizing your tests and running them efficiently is very important for maintaining a robust test suite. Here are some tips:
- Directory structure
Organize your tests in a structured manner, e.g.: tests/test_api_client.py. Create separate directories for tests, test data, and internal libraries. It is a good idea to consider using pydantic to create data models and then use those in tests. More information can be found in the pydantic documentation.
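As an illustration of the pydantic suggestion (a sketch assuming pydantic is installed; the User model and its fields are hypothetical), a data model can validate API payloads before they are used in assertions:

```python
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name: str

# Constructing the model validates the payload: a mismatched structure
# raises a ValidationError, catching contract changes early.
payload = {"id": 1, "name": "user1"}
user = User(**payload)
print(user.name)  # prints user1
```

Models like this give test data a single typed definition instead of loose dictionaries scattered across the suite.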
- Use markers
Use PyTest markers to categorize and run specific sets of tests.
Custom markers can be added in the pytest.ini file:
[pytest]
markers =
    slow: Slow tests
Let’s create a few tests in the example.py file:
import pytest

@pytest.mark.slow
def test_slow_function():
    assert True

def test_example():
    assert True
Now you are able to run (or skip) chosen tests using a specific marker. The following command will run all tests except those marked as slow:
pytest -m "not slow" example.py
- Continuous integration - integrate your tests with CI/CD pipelines to automate the testing process.
Conclusion
Testing APIs with PyTest and mocking external services ensures your tests are fast, reliable, and easy to maintain. PyTest's simplicity and powerful features, combined with the flexibility of mocking, make it an excellent choice for API testing in Python. By isolating your code from external dependencies, you can focus on ensuring that your application behaves correctly under various conditions. You have to remember though, that mocking also has its cons. Being aware of these potential issues can help you use mocks more effectively in your tests.
- Loss of test representativeness - mocks can lead to tests that do not fully represent the actual behavior of the application. If mocks are not properly configured, tests may pass even if the code does not work correctly in real scenarios.
- Increased test complexity - introducing mocks can make tests more complex and harder to understand.
- Challenges with code updates - when the behavior of a mocked entity changes, it might be necessary to update the mocks to reflect these changes.
- Over-mocking risks - excessive use of mocks can lead to tests that are overly focused on implementation details and testing mocks themselves, rather than functional behavior of the system.
- False sense of security - successful tests using mocks might give a false sense of security, as they may not cover all edge cases or real-world scenarios.
By mastering these techniques and remembering the pros and cons of mocks, you can create a robust testing strategy that not only improves code quality but also boosts your confidence in deploying updates to production. Utilizing venv for isolated test environments further enhances your workflow by preventing dependency conflicts and ensuring consistency across different development setups. Happy testing!