Write tests that withstand storms and quakes


Introduction

Many of us who write automated tests, whether at the integration or e2e level, know and will mostly agree that the plain and simple part is adding tests for new modules or features to improve an app's overall test coverage. It's obvious, and ironic at the same time, that this is not the end of the story but only the beginning of an engineer's never-ending effort to keep those tests up and running, and relevant, after every offending change.

Practicality

The speed at which one brings these tests up to date determines how effective the testing is, which in turn decides how quickly product changes are shipped and reach customers. This may seem easy in an ideal world; on the ground, however, QA automation comes with hurdles. There are many, but the following are the most prominent ones.

  • Most of the time, you don't have the luxury of time to add tests (of any kind) and/or fix existing tests that were broken by breaking changes.

  • Even a minor product change can have a huge impact on test suites. For instance, suppose a modal is newly introduced across all the screens in the product. From an engineering standpoint, this could be handled with a single component, whereas from a quality-team standpoint it affects many tests, each of which needs to be taken care of individually (see the sketch after this list).

  • In general, the effort required for automated-testing-related activities is heavily underestimated.
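To make the modal example concrete, here is a minimal sketch (my own illustration, not from the example repository) of how a single shared helper can absorb such a product-wide change: the hypothetical dismissAnnouncementModal helper is the only place that knows about the new modal, and every affected test simply calls it.

// helpers/modals.js (hypothetical helper; file name and selector are assumptions)
async function dismissAnnouncementModal(page) {
  const closeButton = page.locator('[data-testid="announcement-modal-close"]');
  try {
    // The modal may or may not appear; don't fail the test when it is absent.
    await closeButton.waitFor({ state: "visible", timeout: 2000 });
    await closeButton.click();
  } catch (e) {
    // The modal never showed up; nothing to dismiss.
  }
}

module.exports = { dismissAnnouncementModal };

If the modal changes or disappears later, only this helper changes; the many tests that call it stay untouched.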

Think Hard

Considering this, it's essential to strike a balance between the aforementioned hurdles and the principles of automated testing, which mainly focus on agility. Keeping these principles in mind, adopting best practices, and removing inefficiencies is how one reaps the benefits of automated tests.

While writing automated tests, think along the lines that they are going to withstand storms and quakes for ages to come.

Principle #1 — Isolation

In principle, it's imperative to keep an automated test as discrete and independent a unit as possible. From my personal experience, a significant share of test failures occur purely because of dependencies (internal, external, or design flaws) even before the test reaches the point it is meant to verify. In the following section, I list the various dependencies that tests may rely on, knowingly or unknowingly.

- The state defined or assumed before the start of tests becomes untenable (or outdated). For instance, while testing an eCommerce website, assuming the existence of a particular product in a test will become untenable sooner or later.

Solution: Look at the test "1. add a new contact — with valid data (+)": a new user is created every time before any tests in the suite run. This ensures that the state is set up right, on time, without worrying about it becoming obsolete or invalid.

beforeAll(async function () {
...
  // Create a fresh user via the API before the suite runs, so the state is never stale.
  let response = await createUserUsingApi({
    firstName: "Veera",
    lastName: "N",
    email: email,
    password: password,
  });
  console.log(`creating a new user : ${response.status}`);
}, timeOut.halfSecond);

- The setup steps for these tests are prone to frequent errors.

Solution: Pay close attention to the setup steps, a.k.a. hooks, by ensuring they are rock solid and never fail under any circumstance. In the example below, userLogin and getBearerToken are functions within the beforeEach hook and are never expected to raise false failures (see the retry sketch after the example).

beforeEach(async function () {
  ...
  // Log in through the UI and fetch an API token; any failure here would surface
  // as a false test failure, so these steps must be reliable.
  await userLogin(pwPage, {
    email: email,
    password: password,
  });
  token = await getBearerToken({
    email: email,
    password: password,
  });
  ...
}, timeOut.halfSecond);
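One way to make such hook steps more forgiving (this is my own illustration, not code from the example repository) is to retry transient operations like login or token retrieval a few times before letting the hook fail:

// Hypothetical retry wrapper to harden hook steps against transient failures.
async function withRetry(fn, { attempts = 3, delayMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn(); // success: return the result immediately
    } catch (error) {
      lastError = error;
      console.log(`attempt ${attempt} failed: ${error.message}`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // give up only after all attempts are exhausted
}

// Usage inside the hook shown above:
// token = await withRetry(() => getBearerToken({ email: email, password: password }));

This keeps a momentary network blip or a slow login page from surfacing as a false failure, while a genuine outage still fails the hook once the retries run out.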

- Letting one test depend on another, so that the state in which the previous test finished becomes the state in which the following test starts.

Solution: It's rather straightforward; as you can see in the illustrative test file, there are two tests, and though they share common before and after hooks, the result of one does not affect the other in any way, and either test can be run individually at any time.
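As a rough sketch of that structure (the assertion helpers verifyContactExists and verifyValidationError are hypothetical, added only for illustration):

// Each test creates and verifies everything it needs; neither reads state left
// behind by the other, so they can run individually or in any order.
test("1. add a new contact — with valid data (+)", async function () {
  const contact = {
    firstName: faker.person.firstName(),
    lastName: faker.person.lastName(),
  };
  await addContact(pwPage, contact);
  await verifyContactExists(pwPage, contact);
});

test("2. add a new contact with an invalid email (-)", async function () {
  await addContact(pwPage, {
    firstName: faker.person.firstName(),
    email: "not-an-email",
  });
  await verifyValidationError(pwPage, "Invalid email");
});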

- Unavailability of dependencies (external or internal) on which tests rely.

Solution: Unlike unit tests, where we can employ mock services in place of dependencies, for integration and e2e tests it usually doesn't add value to skip the dependencies the app under test relies on in normal working conditions. After all, the app is expected to work along with its dependencies in test environments before it is released to production.

In the following illustrative code block, notice that two external dependencies are used: 1. Playwright, to run UI tests using Chromium, and 2. Faker, to generate random data for the tests. The rationale for using them is that they are stable open-source libraries available in the npm registry.

beforeAll(async function () {
  browser = await chromium.launch({ headless: false });
  context = await browser.newContext(devices["Desktop Chrome"]);
  ...
  console.log(`creating a new user : ${response.status}`);
}, timeOut.halfSecond);
test(
  "1. add a new contact — with valid data (+)",
  async function () {
    await addContact(pwPage, {
      firstName: faker.person.firstName(),
      lastName: faker.person.lastName(),
      ...
      postalCode: faker.location.zipCode(),
      country: faker.location.country(),
    });
    ...
  },
  timeOut.halfSecond,
);

On the other hand, suppose you are running tests that verify emails with the help of Gmail, and you notice that some of them are failing because scripted access to Gmail is blocked under Google's recent terms and conditions; the same can happen with any external tool or library. In such cases, I'd recommend running your own version of such a dependency so the tests can run within a controlled environment. For example, if we consider email testing in our scope, Putsbox is one alternative that can be set up on-premise.
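As a rough illustration of that approach (the service URL, endpoint, and response shape below are assumptions, not Putsbox's actual API), a test can poll a self-hosted inbox service for the expected email instead of reaching out to Gmail:

// Hypothetical self-hosted inbox service running inside the test environment.
const INBOX_SERVICE_URL = "http://mail-testing.internal";

async function waitForVerificationEmail(address, { attempts = 10, delayMs = 3000 } = {}) {
  for (let attempt = 0; attempt < attempts; attempt++) {
    const response = await fetch(`${INBOX_SERVICE_URL}/inboxes/${address}/messages`);
    const messages = await response.json();
    const match = messages.find((message) => message.subject.includes("Verify your email"));
    if (match) return match; // the verification email has arrived
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`No verification email received for ${address}`);
}

Because the inbox service runs inside your own environment, the test is unaffected by external policy changes such as the Gmail restriction described above.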

This is part of a series I'm planning to write on various approaches/best practices to write and maintain tests in a stable and scalable manner.

Also, the examples used in this blog are from this repository, and I sincerely hope you found this blog informative and useful.

Thank you for reading!
Veera.