
How to track automation test coverage & statistics

Svitlana Tkachenko · Wix Engineering · Jun 15, 2021

All QA engineers eventually face the need to track automation test coverage and gather statistics. How do you solve this problem with minimal losses and maximum wow effect (and, of course, benefit for yourself)?

I will describe it using an example from my own experience.

Briefly about my projects and situation

For quite a long time, I have been a tester on a team that keeps growing, with an actively increasing number of projects and branches.

Each project has its own mini-team and its own PM.

Projects are relatively small and dynamic: new functionality is regularly added to the existing one, or it changes minimally.

At the moment, I actively work with three projects, each of which has an average of three branches. We support desktop and mobile platforms, as well as about 20 languages, and behavior varies from language to language.

We have visual and integration tests. Some of them are parameterized, some are not.

We also work with A/B testing, so some of the functionality is covered with auto-tests for both versions.

The overall number of test cases we have today is around three thousand.

Where the lack of transparency begins

More projects = more PMs you need to work with.

At some point, management showed a deeper interest in the QA processes. Requests began to come from one PM, then from another. After those conversations, it became clear that the picture of automation was far from transparent for management. A feature came out, QA did their magic, let’s go. We have some automation, and that’s fine, but what exactly is automated and what part of the functionality is covered by tests remained a mystery.

I had to drop out of the workflow for a long time to prepare a complete, full-picture answer for each request.

Also, from the QA point of view: with a large number of projects, focus gets lost. Sometimes it becomes difficult to understand the coverage of one feature or another, to recall the coverage of old functionality, and to answer with precision, at any time of day or night, what our coverage is for a given project. In addition, confidence in quality erodes, which is unacceptable.

Solution

Based on the above, I immediately decided to keep records of projects, auto-tests, and coverage statistics.

This was done using checklists and the percentage of total coverage that is automated, per feature and per project.

Note:

In my example, we are talking about QA auto-tests only, but this approach can be combined with the developers’ tests as well.

Also, you need to understand what percentage of the project’s testing you can realistically automate, so you can judge whether these auto-test statistics are reasonable.

So what was my path?

Here is an example from one of the projects:

1. Create a spreadsheet with a list of all projects and the types of tests for each of them

2. Create a checklist for each feature

3. Add a “Status” column with the options {Drivers (Infrastructure), Not Implemented, Bug, Resolved}

4. Copy the “skeleton” (the list of features) of the project from the checklist to a separate “Coverage Tracking” sheet.

5. Add an “Automation status” column with the options {Automated, Not Automated, Not For Automation}.

Note:

A feature is considered automated when its automated coverage is more than 90%.
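
If you also keep a per-feature automated-percentage column, the status can even be derived by a formula rather than set by hand. Here is a minimal Google Sheets sketch, under the assumption of a hypothetical column E holding each feature’s automated share (left blank for features not planned for automation):

    (assumed layout: E2 = automated share of the feature’s checks, blank = not planned)
    =IF(E2="", "Not For Automation", IF(E2>=0.9, "Automated", "Not Automated"))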

6. Add any other columns you need, depending on the functionality, to use in the formula calculations. In my case, they are the automation task summary (AT Sum), supported platforms, and the number of supported languages.
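
To see why those columns matter: a single parameterized test multiplies quickly across platforms and languages. A hypothetical AT Sum formula, assuming column B holds the number of base scenarios, C the number of supported platforms, and D the number of supported languages:

    (assumed layout: B2 = base scenarios, C2 = platforms, D2 = languages)
    =B2*C2*D2

With 2 base scenarios, 2 platforms (desktop and mobile), and 20 languages, that single row already represents 80 test executions.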

7. In a separate “Coverage Statistics” sheet, place a less detailed “skeleton” and add the formulas.

Note:

The percentage of automation testing coverage is a vague metric. For example, 80% coverage doesn’t mean that automation covers 80% of the feature’s entire testing scope, because non-functional testing isn’t included in this checklist.

It means that 80% of the planned automation coverage is covered with auto-tests: if 40 of a feature’s 50 checks are planned for automation and 32 of those are automated, the metric shows 32/40 = 80%, even though only 64% of all checks are covered. This should be clearly explained to management.

An example of the formula calculations from the table above:
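
The exact layout will differ from team to team, so here is a minimal Google Sheets sketch under an assumed layout: the “Coverage Tracking” sheet holds the “Automation status” in column D, and cells B2 and B3 of the statistics sheet hold the two counts:

    (assumed layout: 'Coverage Tracking'!D:D = "Automation status" per feature)
    Automated:               =COUNTIF('Coverage Tracking'!D:D, "Automated")
    Planned for automation:  =COUNTIF('Coverage Tracking'!D:D, "Automated") + COUNTIF('Coverage Tracking'!D:D, "Not Automated")
    Coverage, %:             =B2/B3

Rows marked “Not For Automation” are deliberately excluded from the denominator, so the percentage reflects the planned automation coverage described in the note above.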

8. Show it to your manager / PM / whoever — up your game.

Use Guide:

  • This doc will cover your back every time a manager appears on the horizon: show them your checklist and statistics, i.e., the total number of automated tests per feature and project
  • Have a free minute? The percentage of tests per feature that remain to be automated will show you where to direct your efforts (don’t forget about priorities)
  • A dev wants to run the tests on their own for a sanity check, and you are not quite confident about the completeness of testing for some feature? Take a deep breath, open your checklist, look at the percentage of automation done, and relax.
  • Note that this can also be used in the early stages of project planning: plan your coverage together with the devs to improve the testability of the project and distribute automation efforts.

Pros & Cons

Pros:

  • Confidence in quality
  • Clear visibility of “gaps” in the coverage
  • Simplifies estimating the effort required to complete the coverage
  • At any moment of the day or night, you have a solid answer for management on the status of the project’s automation test coverage (you can also present this visualization as a diagram or however you like)
  • Full transparency for all stakeholders
  • It is an important indicator of the maturity of your automation testing, and of you as a QA.

Cons:

  • Not suitable for large dynamic projects
  • Time-consuming to maintain relevance

Next Steps:

  • Combine the QA and related dev tests in one doc to plan work more carefully, deepen the coverage, and avoid duplicating tests.

Conclusion

If you have a long-term project that has become “overgrown” with functionality and you need to provide a complete picture to management, then tracking automation testing statistics in a spreadsheet this way is a fantastic way to visualize the situation with minimal effort.

Believe me, your manager(s) will be satisfied.
