Writing Tests That Actually Test 📝🧪

Jahdunsin Osho
13 min read · Jun 20, 2023

In my software engineering journey, I've encountered diverse development environments. Some stressed the importance of testing, while others focused more on just getting the code to work, perceiving tests as mere illusions of flawless code.

I can understand why some developers and managers may not invest heavily in testing, having witnessed instances where a supposedly robust codebase with 100% code coverage still experienced critical failures and bugs in production environments.

Yet, despite these concerns, we can't discount the importance of testing in building reliable software.

It's all about finding balance — knowing what to test and how to test it effectively, rather than just shooting for perfect scores. After all, we're building products for people, not trying to pass an exam.

So how do you write tests that actually test?

In this article, I'll share some best practices we've implemented at Edubaloo, where I work as tech lead. Our backend services are built on Node.js, so expect TypeScript examples with the Jest testing library. Still, the principles we'll examine broadly apply across different languages and technologies.

To start, we'll provide a brief overview of Jest and its syntax. This way, even if you're new to Jest or TypeScript, you'll find the rest of the guide easily digestible.

A brief overview of Jest

Jest, a streamlined JavaScript testing framework developed by Facebook, comes packed with versatile features such as mocking, spies, assertions, parallel testing, and code coverage analysis.

It integrates smoothly with TypeScript, shines in unit testing, and pairs well with other JavaScript libraries for integration testing.

Even if TypeScript or Jest aren't your bread and butter, you'll find Jest's syntax and structure reasonably intuitive. Its easy-to-understand, user-friendly design contributes to its popularity among JavaScript developers.

Next, we'll dive into some of Jest's key components.

Let's delve into Jest Syntax

Test Suite: The describe function in Jest gathers related tests. It accepts a string and an anonymous function. The string typically describes the group of tests, and the function hosts these tests. Here's an example:

describe('A simple test suite', () => {
  // tests go here
});

Test Cases: Individual test cases use the test or it function, which are identical in functionality. Both take a string describing the test and a callback containing the test code. For instance, a test checking the sum of 1 and 2:

test('adds 1 + 2 to equal 3', () => {
  // assertion goes here
});

Bundling the describe and test functions:

describe('Numerical addition', () => {
  test('adds 1 + 2 to equal 3', () => {
    // assertion goes here
  });

  test('adds 2 + 3 to equal 5', () => {
    // assertion goes here
  });
});

Assertions: The expect function makes assertions. It can be chained with matcher methods like toBe to compare values. Here's our previous example with an assertion:

describe('Numerical addition', () => {
  test('adds 1 + 2 to equal 3', () => {
    expect(1 + 2).toBe(3);
  });
});

Setup and Teardown: Jest offers four hooks — beforeEach, afterEach, beforeAll, and afterAll — for setup and cleanup tasks. For this discussion, we'll focus on beforeEach, which runs before every test case in a suite.

Here's a compilation of our syntax examples:

describe('Numerical addition', () => {
  let result: number;

  beforeEach(() => {
    // This code runs before each test case in this describe block
    result = 0;
    console.log('Running beforeEach hook');
  });

  test('adds 1 + 2 to equal 3', () => {
    result = 1 + 2;
    expect(result).toBe(3);
  });

  test('adds 2 + 3 to equal 5', () => {
    result = 2 + 3;
    expect(result).toBe(5);
  });
});

Now that you have a fundamental understanding of Jest syntax, let's get into what you came here for.

Writing Effective Tests

No doubt, seeing all-green checks when executing tests can be satisfying. However, the design of your tests profoundly affects the reliability and maintainability of your test suite, underlining the need for effective tests.

To write effective tests, we'll explore principles such as writing tests that fail, knowing when to use integration tests versus unit tests, avoiding false positives, and writing DRY (Don't Repeat Yourself) tests.

Unit Tests vs Integration Tests

So, here's a question for you: should you attempt to write unit tests for every function in your codebase, isolating every minute function to ensure its functionality and reliability?

Before you answer that question, imagine this scenario. Say you're implementing a user profile service for an app, providing standard user management functionalities like registration and login. Within this service, there's a method, getAllUsers, that fetches a list of all users from your database.

You decide to run a unit test for the getAllUsers method. In your test, you use a mock database, add some mock data, and verify that your repository method is fetching the list of users.

That doesn't sound bad, but has your test added any value to your code? Has it made your service more reliable?

First, the database you're using is a mock; there's no guarantee that the production database will behave identically. Second, the service method simply fetches a list of all users, so ten times out of ten it will return the same predictable result, provided the repository method was implemented correctly.

In this scenario, your unit test doesn't add any value. Instead of a unit test, an integration test would be more effective as it would examine how that method interacts with the entire system, bolstering your confidence in its functionality.
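To make the argument concrete, here is a self-contained sketch (the types and names are illustrative, not from the Edubaloo codebase) of what such a mocked unit test boils down to:

```typescript
// Illustrative shapes for the scenario described above (hypothetical names).
type User = { id: number; name: string };

interface UserRepository {
  getAllUsers(): User[];
}

// The mock simply returns whatever we seeded it with...
const mockRepo: UserRepository = {
  getAllUsers: () => [{ id: 1, name: 'Ada' }],
};

// ...so the "assertion" only verifies that the mock echoes its own data.
// It says nothing about how the real database driver, query, or schema behave.
const users = mockRepo.getAllUsers();
console.log(users.length === 1); // true, but trivially so
```

Whatever the real repository does, this check keeps passing, which is exactly why an integration test against a real (or realistic) database adds more confidence here.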

That being said, unit tests are far from redundant — they hold their own and prove highly effective in numerous cases. For instance, suppose you have a calculateUserAge() method in your UserService class, which takes a user's date of birth and calculates their current age. Here, a unit test can easily validate the correctness of the method without any external dependencies. Writing an integration test for this method would be overkill and could unnecessarily complicate your test suite.
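A minimal sketch of such a method (the implementation below is assumed for illustration; it is not Edubaloo's actual code). Injecting the current date keeps the function deterministic, which is exactly what makes it a good unit-test candidate:

```typescript
// Hypothetical implementation of a calculateUserAge-style helper.
// "now" is injected so tests can pin the clock to a fixed date.
function calculateUserAge(dateOfBirth: Date, now: Date = new Date()): number {
  let age = now.getFullYear() - dateOfBirth.getFullYear();
  // If the birthday hasn't occurred yet this year, subtract one.
  const birthdayPassed =
    now.getMonth() > dateOfBirth.getMonth() ||
    (now.getMonth() === dateOfBirth.getMonth() &&
      now.getDate() >= dateOfBirth.getDate());
  if (!birthdayPassed) age -= 1;
  return age;
}

// In Jest this reduces to one deterministic assertion, e.g.:
// expect(calculateUserAge(new Date('2000-06-15'), new Date('2023-06-20'))).toBe(23);
console.log(calculateUserAge(new Date('2000-06-15'), new Date('2023-06-20'))); // 23
```

No mocks, no database, no network: the whole behaviour is covered by calling the function with fixed inputs.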

So, when should you use unit tests, and when should you lean towards integration tests? Well, it depends!

There is no one-size-fits-all rule for deciding which type of testing to use. To make a decision, you need to understand the purpose and functionality of the method you're testing and determine which type of testing adds the most value to your testing efforts, whether it be unit tests or integration tests.

Write Tests That Fail

Writing tests that fail is a fundamental principle of test-driven development (TDD): you write a test case first, describing an expected outcome, run it to watch it fail, and then write the code that meets the defined requirements.

We developers often have an innate desire to get our hands dirty with coding right away — building that exciting new feature seems more appealing than drafting some test suite. However, this approach may lead us to write tests that fit too neatly within our expected outcomes, potentially glossing over essential edge cases and spawning false positives.

By flipping the process — drafting tests before code — you will write code that successfully passes your tests, rather than tailoring tests to fit preexisting code. This results in a more robust and reliable system, as your code is being crafted specifically to meet the test criteria from the very beginning.
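As a concrete (hypothetical) illustration of that cycle, imagine needing a slugify helper: the assertion is written first, fails while the function is still unimplemented, and the implementation is then written to satisfy it.

```typescript
// Red-green sketch with a hypothetical slugify helper.
// Step 1 (red): the assertion at the bottom is written before slugify exists
// and fails until the body below is filled in.
// Step 2 (green): implement just enough to make it pass.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse runs of non-alphanumerics into dashes
    .replace(/^-|-$/g, '');      // strip leading/trailing dashes
}

// The Jest version would be:
// expect(slugify('  Hello, World! ')).toBe('hello-world');
if (slugify('  Hello, World! ') !== 'hello-world') {
  throw new Error('slugify does not meet the spec yet');
}
```

Because the spec existed before the code, the implementation is shaped by the test rather than the other way around.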

Another great practice when writing tests that fail is to write the breaking cases first. Say you're working on a typical user feature, such as changing a password. You might instinctively want to write the successful test case first, checking whether a logged-in user can successfully update their password.

Your initial test suite might look like this:

describe("change password", () => {
  describe("if user is logged in", () => {
    it("updates the password if the old password is correct", async () => {
      // ... password change scenario ...
    });
    // other test cases
  });
});

When you write test cases that pass first, there's a temptation to be lazy, causing you to be less exhaustive with edge cases.

For instance, in our example, the 'change password' feature might successfully change the user password, but what about the other possibilities? What happens if an unauthorized user tries to change the password? Or if certain required fields are missing?

By writing breaking cases first, you prioritize the unexpected over the expected, focusing on test cases that could potentially break your code before writing tests that confirm the desired functionality.

Consequently, you are better prepared to handle edge cases and unexpected inputs, leading to software that not only meets the desired functionality under ideal circumstances, but also maintains its integrity under less predictable scenarios.

Now, let's tweak our example to focus on the breaking cases first:

describe("change password", () => {
  describe("if user is not logged in", () => {
    it(
      "should respond with a user unauthorized status code",
      testUnauthorizedRequest(server, changePasswordRoute, RequestMethods.POST),
    );
  });

  describe(
    "given field(s) is missing",
    testMissingFields(
      // ... missing fields scenarios ...
    ),
  );

  describe("if user is logged in", () => {
    it("updates the password if the old password is correct", async () => {
      // ... password change scenario ...
    });
  });
});

In this revised test suite, we first check whether the user attempting to change the password is even authorized to do so. Then we verify that all necessary parameters are in place. Only then do we check that the password changes successfully when the old password is correct.

A neat trick for coming up with edge cases is picturing yourself as a user determined to find faults in your code. This mindset pushes you to be more exhaustive, thinking deeply about potential issues and leading to code that's more reliable in production.
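One way to make that hostile-user mindset concrete is a table of edge cases. The validator below is purely hypothetical (the article's change-password feature doesn't show one), but it illustrates how enumerated breaking inputs read; in Jest, such a table maps naturally onto test.each:

```typescript
// Hypothetical password validator used to illustrate table-driven edge cases.
// Returns an error message, or null when the password is acceptable.
function validateNewPassword(pw: string): string | null {
  if (pw.length < 8) return 'too short';
  if (!/[0-9]/.test(pw)) return 'needs a digit';
  if (/\s/.test(pw)) return 'no whitespace allowed';
  return null; // valid
}

// Thinking like a user trying to break things: empty input, all-whitespace
// input, boundary lengths, missing character classes.
const cases: Array<[string, string | null]> = [
  ['', 'too short'],
  ['       ', 'too short'],            // 7 spaces: still below the length floor
  ['abcdefg1', null],                  // exactly 8 chars: boundary case
  ['abcdefgh', 'needs a digit'],
  ['abcd efg1', 'no whitespace allowed'],
];
for (const [input, expected] of cases) {
  if (validateNewPassword(input) !== expected) {
    throw new Error(`case ${JSON.stringify(input)} failed`);
  }
}
```

Each row is one "attack" on the code; adding a new suspicion is just adding a new row.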

Avoid False Positives

Tests that lead to false positives are those that seem to validate functionality but overlook crucial aspects, thereby providing an illusion of correct operation.

These tests can be problematic because they give developers a false sense of security, believing that their code works as intended when hidden issues might exist.

Consider a test case that checks if an admin user was successfully created.

describe("create admin", () => {
  it("creates a user with admin access", async () => {
    // ... call the create admin endpoint ...
    expect(statusCode).toBe(201);
    expect(contentType).toBe("application/json");
  });
});

The test above checks whether the endpoint returns a successful status code and that the content type of the API response is application/json. But does it really verify that the user was successfully created? Not exactly.

Simply checking that an endpoint returns a 201 status code doesn't necessarily mean the admin-creation functionality works correctly: a bug could cause the API to report success even though the operation failed. A 201 indicates that a new resource was created, but it doesn't prove that the user record now exists with the expected attributes.

This kind of shallow testing can let bugs slip through, resulting in an unreliable codebase. To avoid false positives, create tests that fully examine the functionality being tested rather than relying solely on superficial indicators of success.
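The danger can be reproduced in miniature without any HTTP at all (the names below are illustrative): a buggy handler that reports success without performing its side effect will sail past a status-code-only check.

```typescript
// Self-contained illustration of a false positive (hypothetical names).
type AdminUser = { username: string; role: string };
const db: AdminUser[] = []; // stand-in for the users table

function createAdmin(username: string): { statusCode: number } {
  // Bug: we "forget" to persist the user but still report success.
  // db.push({ username, role: 'admin' });
  return { statusCode: 201 };
}

const res = createAdmin('admin');
// Shallow check: passes, giving a false sense of security.
console.log(res.statusCode === 201); // true
// Deeper check: exposes the bug by verifying the side effect.
console.log(db.some((u) => u.username === 'admin')); // false — bug caught
```

The status code said "created"; only the side-effect check noticed that nothing was actually created.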

A more robust approach would be to also check the response body or query the database to ensure the user was actually created.

Let's revise our test to be more thorough.

describe("create admin", () => {
  test("it creates an admin user and responds with JSON", async () => {
    const response = // ... call your create admin endpoint ...

    expect(response.statusCode).toBe(201);
    expect(response.type).toBe("application/json");

    // Assert that the response body contains the created user's details
    expect(response.body.username).toBe("admin");
    expect(response.body.role).toBe("admin");

    // You could also verify the user creation by querying your database
    const dbUser = // ... query your database for the user ...
    expect(dbUser).toBeDefined();
    expect(dbUser.username).toBe("admin");
    expect(dbUser.role).toBe("admin");
  });
});

This test is far more reliable. It not only checks the API response, but also confirms that the expected side effect (the creation of the admin user) has occurred.

Remember, the value of your tests lies in their reliability and their ability to truly reflect the functionality of your system. So, write tests that truly validate your code's functionality and avoid false positives!

Try to be DRY

This is really more about writing clean code than about writing effective tests, but I thought I'd still include it here.

You're probably familiar with the DRY (Don't Repeat Yourself) principle when writing code; the same principle applies when writing tests. When you eliminate redundant code in your tests, you make them more readable and maintainable.

Let's look at a test suite for User API endpoints that's not so DRY, and then see how we can clean it up.

describe('User API endpoint tests', () => {
  test('should login user if user exists', async () => {
    const newUser: IUser = {
      name: 'John Doe',
      email: 'john.doe@example.com',
      password: 'password123',
    };

    await User.create(newUser);

    const response = await request(app)
      .post('/api/user/login')
      .send({
        email: 'john.doe@example.com',
        password: 'password123',
      });

    expect(response.status).toBe(200);
    // other assertions
  });

  test('should update a user', async () => {
    const newUser: IUser = {
      name: 'John Doe',
      email: 'john.doe@example.com',
      password: 'password123',
    };

    const createdUser = await User.create(newUser);

    const response = await request(app)
      .put(`/api/user/${createdUser._id}`)
      .send({ name: 'Updated Name' });

    expect(response.status).toBe(200);
    // other assertions
  });

  // More tests...
});

In this test suite, the code for creating a new user is repeated in both test cases. If this pattern were to continue across ten or more tests, that's a lot of code no one needs to read.

Let's refactor the code to adhere to the DRY principle:

describe('User API endpoint tests', () => {
  let newUser: IUser;
  let createdUser: any;

  beforeEach(async () => {
    newUser = {
      name: 'John Doe',
      email: 'john.doe@example.com',
      password: 'password123',
    };

    createdUser = await User.create(newUser);
  });

  test('should login user if user exists', async () => {
    const response = await request(app)
      .post('/api/user/login')
      .send({
        email: newUser.email,
        password: newUser.password,
      });

    expect(response.status).toBe(200);
    // other assertions
  });

  test('should update a user', async () => {
    const response = await request(app)
      .put(`/api/user/${createdUser._id}`)
      .send({ name: 'Updated Name' });

    expect(response.status).toBe(200);
    // other assertions
  });

  // More tests...
});

In this refactored version, we have extracted the common user creation process into the beforeEach block. This block runs before each individual test, setting up newUser and createdUser for use within each test. This way, we avoid repeating the user creation process and make our tests more readable and maintainable.

We can take it a step further by using DRY assertions. Sometimes you have to check the same requirement across multiple test cases or even multiple test suites. In that scenario, instead of repeating the assertion code, we can extract it into a reusable function — keeping it DRY.

This becomes particularly useful when writing integration tests for APIs. Let's say we want to ensure a group of APIs can only be successfully called if the user is an admin. Instead of repeating the assertion in every test case, we can apply the DRY principle.

Let's examine an example where the assertion code is repeated:

describe('User API endpoint tests', () => {
  // ... setup ...

  describe('User Details Update', () => {
    // ... setup ...

    test('should respond with unauthorized if user is not admin', async () => {
      expect(userToken).toBeDefined();
      const response = await request(server)
        .patch(route)
        .set('Authorization', `Bearer ${userToken}`)
        .send();
      expect(response.statusCode).toBe(401);
    });
  });

  describe('User Deactivation', () => {
    // ... setup ...

    test('should respond with unauthorized if user is not admin', async () => {
      expect(userToken).toBeDefined();
      const response = await request(server)
        .delete(route)
        .set('Authorization', `Bearer ${userToken}`)
        .send();
      expect(response.statusCode).toBe(401);
    });
  });
});

We're looking at the 'User API endpoint tests' in this example. There are two describe blocks, testing the 'User Details Update' and 'User Deactivation' features of the User module. Each block houses its own tests, all of which repeat the same admin authorization check.

So, how do we adhere to the DRY principle here?

To tidy up these tests, we can refactor the authorization assertion into a reusable function and call it in each test case.

Let's introduce a testUnauthorizedAdminRequest function. It takes parameters such as the server, the route, the HTTP method, and a userToken getter. The function makes the request, verifies the token getter is defined, and asserts that the response status code is 401, which corresponds to "unauthorized".

function testUnauthorizedAdminRequest(
  server: express.Express,
  route: string,
  method: RequestMethods,
  userToken: () => string,
): jest.ProvidesCallback {
  return async () => {
    expect(userToken).toBeDefined();
    // requestMethod is a small helper that maps a RequestMethods value
    // to the matching HTTP request call (get, post, patch, delete, ...)
    const { statusCode } = await requestMethod(method)(server, route)
      .set("Authorization", `Bearer ${userToken()}`)
      .send();
    expect(statusCode).toBe(401);
  };
}

describe('User API endpoint tests', () => {
  // ... other tests or setup ...

  describe('User Details Update', () => {
    // ... other tests or setup ...

    it(
      'should respond with unauthorized if user is not admin',
      testUnauthorizedAdminRequest(
        server,
        '/api/admin/userUpdate',
        RequestMethods.PATCH,
        () => userToken,
      ),
    );
  });

  describe('User Deactivation', () => {
    // ... other tests or setup ...

    it(
      'should respond with unauthorized if user is not admin',
      testUnauthorizedAdminRequest(
        server,
        '/api/admin/userDeactivation',
        RequestMethods.DELETE,
        () => userToken,
      ),
    );
  });

  // ... more tests ...
});

In this refactored version, instead of repeating the admin authorization check across test cases, we invoke testUnauthorizedAdminRequest, maintaining the DRY principle while making our tests easier to read.

Adhering to the DRY principle doesn't necessarily need to be your initial focus while writing tests, though. It's perfectly okay to start with tests that contain some redundancy; the key lies in iterative refinement.

In fact, often while starting out, your priority should be to ensure that each test case is thorough and covers all aspects of the functionality being tested.

Once you have a good set of tests that cover your application's functionalities, you can start looking for patterns and repeated blocks of code. This is the stage where you employ the DRY principle — refactoring common code snippets into reusable functions or setup blocks, improving the efficiency of the tests.

Conclusion

In this article, we have discussed essential principles for writing effective tests: knowing when to apply unit tests versus integration tests, the importance of writing tests that fail, avoiding false positives and the value of the DRY principle in tests.

Remember, your tests should serve your codebase and not just exist for the sake of it. They should illuminate the functionality of your software, expose unexpected behaviour, and enable safe and confident code modifications.

Please note that these principles for writing effective tests are nowhere near exhaustive; treat them as a foundation, a stepping stone for you to explore further and tailor to your specific needs.

Now, it's over to you. What testing practices have proven most effective in your software development journey? Do you have any other testing principles or strategies to share? Please feel free to leave your thoughts and experiences in the comments section.


Jahdunsin Osho

Software Engineer | Building edubaloo.com | I do content marketing for SaaS companies.