Next-Level API Testing: Unlocking Automated Regression with Replay Tests

Konstantin Shefer
7 min read · Apr 5, 2024

Have you ever faced the time-consuming task of manual regression testing before a new release? Ensuring that your application not only introduces new features but also retains its existing functionality is crucial. This process, known as regression testing, can be a daunting task. Traditionally done by hand, it ensures that the application functions correctly and adheres to its requirements. But what if there’s a more efficient way?

In this article, we’ll delve into the transformative approach of automating API regression testing using test generation. Whether you’re new to regression testing or looking to upgrade your manual processes, this guide will equip you with the knowledge to streamline your testing efforts. You’ll discover how to quickly set up automated regression tests, ensuring thorough and efficient checks with every new version of your application.

Automating Regression Testing with Replay-Tests

Let’s explore testing an API, for instance, The Rick and Morty API. This API serves as a gateway to a universe filled with hundreds of characters, images, locations, and episodes, each teeming with details from the beloved TV series.

To illustrate, let’s focus on a specific API method: retrieving character information. The primary method is as follows:

GET https://rickandmortyapi.com/api/character — to get information about all characters.

For more tailored results, filtering is an option. For example:

GET https://rickandmortyapi.com/api/character?name=rick&status=alive — to find all living characters named Rick.

In a real-world setting, each time an application undergoes changes, especially during refactoring, it’s essential to perform regression testing before any new release. This process typically involves manual API regression tests, where a series of requests are sent, followed by meticulous checks.

For instance, consider the following GET request and its associated checks:

Request: GET https://rickandmortyapi.com/api/character

Expected Checks:

  • The response code must be 200, indicating success.
  • The Content-Type header should be ‘application/json’, confirming the data format.
  • The response schema and data must align with the API documentation and expected values, such as those in a database.

Checking in Postman
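For clarity, here is a minimal sketch of those same checks expressed in Python with the requests library. It is purely illustrative and not part of any test project; the asserted fields ("info", "results", "id", "name") come from the public API documentation:

import requests

# The same checks a tester would click through in Postman
response = requests.get("https://rickandmortyapi.com/api/character")

# The response code must be 200
assert response.status_code == 200
# The Content-Type header should indicate JSON
assert response.headers["Content-Type"].startswith("application/json")
# The body should match the documented structure
body = response.json()
assert "info" in body and "results" in body
assert all("id" in character and "name" in character for character in body["results"])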

Performing similar checks across all methods and their parameter combinations provides a comprehensive picture of the application’s functionality and reliability in each new iteration. However, the manual approach comes with notable drawbacks:

  • Time-Consuming: It requires a significant investment of time to cover every aspect thoroughly.
  • Prone to Human Error: The risk of missing critical issues due to oversight or fatigue is ever-present.
  • Monotony: Repeating the same tests can be tedious, reducing overall efficiency and attention to detail.

Automated API regression testing, especially through test generation and replay-tests, marks a significant advancement beyond traditional manual testing methods. The concept of replay-tests is particularly intriguing, offering a structured and automated way to compare responses from a stable application version (the ‘golden’ version) with those from the current test version. This method involves three key steps:

  1. Sending a request to the stable version of the application: This step establishes a baseline for the expected behavior and output of the application.
  2. Sending a similar request to the test version of the application: This is where potential changes, enhancements, or regressions will manifest in the output.
  3. Comparing responses from both versions: By meticulously analyzing differences in response codes, headers, and body content, developers can identify unintended changes or regressions.

How replay-tests work
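Stripped of any framework, the idea fits in a few lines of Python. The sketch below uses the requests library; GOLDEN_API_URL, TESTING_API_URL, and the replay helper are illustrative names, not part of any package:

import requests

GOLDEN_API_URL = "https://rickandmortyapi.com"   # stable ("golden") version
TESTING_API_URL = "https://rickandmortyapi.com"  # version under test


def replay(path: str) -> None:
    # 1. Send the request to the stable version to establish a baseline
    golden = requests.get(f"{GOLDEN_API_URL}{path}")
    # 2. Send the same request to the version under test
    testing = requests.get(f"{TESTING_API_URL}{path}")
    # 3. Compare the responses: status code, headers, and body
    assert testing.status_code == golden.status_code
    assert testing.headers.get("Content-Type") == golden.headers.get("Content-Type")
    assert testing.json() == golden.json()


replay("/api/character?name=rick&status=alive")

Everything that follows is about generating this kind of test automatically instead of writing it by hand.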

Replay-tests provide an efficient pathway for regression testing by automating the process of comparing application behaviors before and after modifications. This technique not only diminishes the resources and time required for thorough testing but also significantly lowers the possibility of human errors and the tedium tied to manual testing efforts.

However, it faces challenges when managing operations that affect data states or when synchronizing data across different application versions and their components, such as databases. The approach also breaks down when backward compatibility is deliberately broken or when functionality is entirely new: because it relies on pre-existing knowledge of the stable version’s behavior, it cannot be used where expected outcomes have not yet been established.

Setting Up Replay-Tests with Vedro-Replay

Implementing replay-tests has been greatly simplified with the Vedro-Replay package, available on PyPI. This practical tool generates replay tests from prepared requests and ships everything needed to run them. Let’s walk through the setup process using The Rick and Morty API as our case study.

Let’s go. In and out, 20-minute adventure

1. Package Installation

To kick things off, the first step involves installing the Vedro-Replay package:

$ pip3 install vedro-replay

2. Generating Tests

Our focus will be on the /api/character method. Start by creating a directory named requests. Inside, create a character.http file for testing this method, filled as follows:


### Getting information about all characters
GET https://{{host}}/api/character

### Search for alive Ricks
GET https://{{host}}/api/character?name=rick&status=alive

The file format is based on the .http format from JetBrains, which lets you describe HTTP requests. These requests can be used not only in replay-tests but also sent directly from the IDE using the built-in HTTP client. In this example, {{host}} is a variable that makes it possible to send the same requests to different hosts via configuration. You can read more about the format in the JetBrains documentation.
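As a side note, if you want to fire these requests straight from the IDE, the JetBrains HTTP client resolves {{host}} from an http-client.env.json file placed next to the .http file. A minimal example follows (the environment name dev is arbitrary and just for illustration):

{
  "dev": {
    "host": "rickandmortyapi.com"
  }
}

The replay-tests themselves substitute {{host}} from environment variables instead, as shown in the launch step below.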

With your requests in place, generate the tests:

$ vedro-replay generate

This command scaffolds a project skeleton, including directories and files for interfaces, contexts, helpers, and scenarios, all built on the Vedro framework.

The project structure post-generation:


-replay-tests # Root directory
|----requests # Requests for tested methods
|----|----character.http # File with API requests for the /api/character method
|----contexts
|----|----api.py # API contexts and states
|----helpers
|----|----helpers.py # Helper methods
|----interfaces
|----|----api.py # API interface description
|----scenarios # Testing scenarios
|----|----character.py # Scenario using requests from the file requests/character.http
|----config.py # Test project config
|----vedro.cfg.py # Framework config

In scenarios/character.py, you’ll find tests that use the prepared requests to compare responses from the API’s golden and test versions.


import vedro  # The framework used to run the generated tests
from contexts.api import golden_response, testing_response
from d42 import from_native
from helpers.helpers import prepare_api_character

from vedro_replay import Request, replay


class Scenario(vedro.Scenario):
    # Subject line displayed in the report for each completed request
    subject = "do request: /api/character (comment='{comment}')"

    # The file with requests used to parameterize this scenario
    @replay("requests/character.http")
    def __init__(self, comment: str, request: Request):
        self.comment = comment
        self.request = request

    # Get a response from the golden version of the application
    async def given_golden_response(self):
        self.golden_response = await golden_response(self.request, prepare_api_character)

    # Get a response from the test version of the application
    async def when_user_sends_request(self):
        self.testing_response = await testing_response(self.request, prepare_api_character)

    # Check the response code
    def then_it_should_return_same_status(self):
        assert self.testing_response.status == self.golden_response.status

    # Check the headers
    def and_it_should_return_same_headers(self):
        assert self.testing_response.headers == from_native(self.golden_response.headers)

    # Check the response body
    def and_it_should_return_same_body(self):
        assert self.testing_response.body == from_native(self.golden_response.body)

The headers and the body of the response in these tests are checked using the d42 package. The d42 package is a comprehensive toolkit for data modeling, which includes functionalities for definition, generation, validation, and substitution of data models using a robust data description language.
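To make those from_native calls less opaque, here is a small, self-contained sketch with illustrative values only: it defines a schema by hand and then derives an equivalent one from a native Python value, which is essentially what the generated scenario does with the golden response body.

from d42 import from_native, schema

# A hand-written d42 schema for a single character record...
character_schema = schema.dict({
    "id": schema.int,
    "name": schema.str,
    "status": schema.str,
})

# ...and a schema derived automatically from a concrete ("golden") value,
# which is what the generated scenario does with golden_response.body
golden_body = {"id": 1, "name": "Rick Sanchez", "status": "Alive"}
derived_schema = from_native(golden_body)

print(character_schema)
print(derived_schema)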

3. Launching Tests

Testing requires two application versions: the stable (golden) version and the test version. We use environment variables for their URLs:

$ GOLDEN_API_URL=https://rickandmortyapi.com TESTING_API_URL=https://rickandmortyapi.com vedro run

Launch result

Occasionally, identical API versions might yield differing responses, causing tests to fail even though nothing is broken. Headers such as “date” and “x-nf-request-id” change from response to response. To mitigate this, modify the helpers/helpers.py file to exclude such headers from the comparison, focusing on the core response elements.


from vedro_replay import JsonResponse, MultipartResponse, Response, filter_response


# Headers that may legitimately differ between identical versions
def basic_headers_exclude():
    return ['x-nf-request-id', 'date']


# Body fields to exclude from the comparison (none by default)
def basic_body_exclude():
    return []


def prepare_api_character(response) -> Response:
    exclude_headers = basic_headers_exclude() + []
    exclude_body = basic_body_exclude() + []
    return filter_response(JsonResponse.from_response(response), exclude_headers, exclude_body)

After adjustment, helper methods will ignore specified headers, avoiding non-essential test failures:

$ GOLDEN_API_URL=https://rickandmortyapi.com TESTING_API_URL=https://rickandmortyapi.com vedro run

Successful launch
Truly a 20-minute adventure

Conclusion

In summary, we have successfully established a foundation for automated testing using the Vedro framework, thereby eliminating the need for manual API regression testing. Our project, which demonstrates the application of replay-tests for The Rick and Morty API, serves as a practical example of this approach.

It’s important to note that new methods and requests can be easily integrated into the finished project. Adding a new method simply requires preparing the relevant requests and executing the vedro-replay generate command. This updates the existing project by adding new scenarios and any necessary components without affecting what has already been created. Additionally, you are encouraged to further develop and customize the generated tests, taking full advantage of the installed packages and the infrastructure already in place.
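For example, to start covering the /api/location method (purely as an illustration), you would add a new request file, say requests/location.http:

### Getting information about all locations
GET https://{{host}}/api/location

Running vedro-replay generate again should then add a matching scenario (scenarios/location.py, following the naming of the existing character tests) without touching what is already there.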

In the next article, we will explore methods for collecting requests for replay-tests, further enhancing our understanding and capabilities in automated API testing.
