Useful Test Automation Metrics You Should Be Tracking

16 Feb 2022 by Mark Mayo

“You can’t improve what you don’t measure.” This quote, often attributed to management thinker Peter Drucker, reminds us that while measuring your progress and quality is indeed essential, measurements that become an end in themselves can also become irrelevant. And when what you’re measuring is itself meant to quickly evaluate the quality of the software under test, choosing the right metrics to track is vital.

With test automation, much of the initiative that drives the development of automation is to increase test execution cycles, identify issues earlier, and (a potentially ill-conceived goal, but worth considering) reduce costs. As a result, the metrics recorded should display historical performance and/or predict future performance. They should provide useful, actionable information rather than mere vanity metrics.

Most software testing metrics fall into one of three categories:

  • Coverage: measuring the scope of the tests against the software under development
  • Progress: measuring things that should improve iteratively over time, such as the time to fix a defect
  • Quality: measuring the quality of the software itself, for example its performance

As with any metric, an automated testing metric should have a useful goal – why is it being measured? We should never measure just for the sake of measuring; and to be meaningful, a metric should relate to the performance of the effort the team puts in.

Before deciding what metrics to measure and keep, as a team, QA engineer, or manager, you’ll want to decide what your goals are. What are you trying to accomplish, and how are you going to track this?  What questions can be asked to determine whether you’re progressing towards said goals?  Some examples:

  • How long does the entire test execution take?
  • How long does analysis of the results take?
  • How many combinations/permutations are executed?
  • How do we define code coverage?

Following SMART goals, a good test automation metric should be specific, measurable, achievable, relevant, and time-bound. It should be objective, easy to gather, and simple.

The right test automation metrics should provide an objective, deeper understanding of your QA system, process, and tests, in order to identify and fix issues while boosting your team’s efficiency and productivity.

As a result, the following metrics are worth considering for your team and software – but always consider that measuring still takes effort, time, and money, so you and your team will want to make them count.

Automation execution time

Agile software development is all about speed, and testing solutions need to run quickly and without delay.  As a result, this is a measurement of how long it takes to complete the automated testing from start to finish.  Important: This is not a measure of quality, just a measure of time.  

Calculation: End time – start time of test execution
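
If your test runner doesn’t report this directly, a minimal sketch in Python might look like the following – the pytest command here is just a stand-in for whatever launches your suite:

    import subprocess
    import time

    start = time.monotonic()
    subprocess.run(["pytest", "tests/"])  # stand-in for your suite's entry point
    elapsed = time.monotonic() - start
    print(f"Automation execution time: {elapsed:.1f} seconds")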

Automation test coverage

This metric indicates how many of your total test cases have been automated, with the remainder still being run by hand. It’s useful for seeing, over time, whether you’re maintaining a target coverage level or increasing it each sprint, for example.

Calculation: # of automated tests / # of total tests
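
As a minimal sketch, the counts would come from wherever you track your test cases; the numbers here are made up:

    # Illustrative counts, e.g. exported from your test management tool
    automated_tests = 320
    total_tests = 500

    coverage = automated_tests / total_tests
    print(f"Automation test coverage: {coverage:.0%}")  # -> 64%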

Automation pass rate

Relatively self-explanatory, this gives you an indication of what proportion of your automated tests are passing. Not only is this useful for finding failures, it also helps you understand the stability of your suite. Flaky tests causing a low pass rate mean time wasted investigating false failures. On the other hand, if a new code commit lands and the pass rate drops, it’s a flag indicating possible degradation of the software.

Calculation: # of tests passed / # of tests executed
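
Many test runners can emit a JUnit-style XML report whose testsuite element carries summary counts. A sketch of computing the pass rate from one, assuming a single testsuite root element and a hypothetical file name:

    import xml.etree.ElementTree as ET

    # Assumes a JUnit-style report with a single <testsuite> root element
    suite = ET.parse("results.xml").getroot()  # hypothetical report path
    executed = int(suite.get("tests", 0))
    failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))

    pass_rate = (executed - failed) / executed if executed else 0.0
    print(f"Automation pass rate: {pass_rate:.0%}")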

Automation stability (flaky failures over time)

Along with the pass rate, this is used to indicate potential problems with your tests. It’s particularly relevant for UI tests, which can be flaky due to timing issues: if your tests keep failing over time but no defects are being found, there’s a problem with the tests themselves. Tracking this over time helps indicate whether the general stability of your tests is increasing or decreasing.

Calculation: # of failures / # of executions (for each test)
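
A sketch of tracking this per test across a history of runs; the run data here is illustrative, and in practice would come from your CI server’s stored results:

    from collections import Counter

    # Illustrative history: each run maps test name -> passed?
    runs = [
        {"test_login": True, "test_search": False},
        {"test_login": True, "test_search": True},
        {"test_login": False, "test_search": False},
    ]

    failures, executions = Counter(), Counter()
    for run in runs:
        for test, passed in run.items():
            executions[test] += 1
            if not passed:
                failures[test] += 1

    for test in executions:
        print(f"{test}: {failures[test] / executions[test]:.0%} failure rate")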

Build stability (% of build failures)

Once automated tests become part of your CI/CD pipeline, they can be executed on each branch, commit, or release. This metric can be used to see how often builds break when commits occur, and could indicate that more developer testing – unit or integration – is required.

Calculation: # of build failures / # of builds
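
A sketch, assuming you can export a list of build outcomes from your CI server:

    # Illustrative build outcomes, one entry per build
    builds = ["success", "success", "failure", "success", "failure"]

    failure_rate = builds.count("failure") / len(builds)
    print(f"Build failure rate: {failure_rate:.0%}")  # -> 40%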

In-sprint automation

If you’re working in an agile workflow, this indicates how many tests are being automated in the current sprint alongside their associated stories, versus in later iterations. The earlier tests are automated, the earlier you get fast, efficient feedback on quality issues.

Calculation: # of tests automated in-sprint / # of tests automated post sprint
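
A sketch with made-up counts from your sprint tracking:

    # Illustrative counts from sprint tracking
    automated_in_sprint = 12
    automated_post_sprint = 4  # automated in a later iteration

    ratio = automated_in_sprint / automated_post_sprint
    print(f"In-sprint automation ratio: {ratio:.1f}")  # -> 3.0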

Automation progress

Some projects grow over time and accumulate a lot of manual tests, which might then be automated as a dedicated project. This metric lets you look at which tests in that set are actually automatable, and how far along you are in achieving that goal.

Calculation: # of tests automated / # of tests automatable
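
A sketch with illustrative counts:

    # Illustrative counts from a backlog-automation project
    automated = 150
    automatable = 400  # manual tests judged worth automating

    progress = automated / automatable
    print(f"Automation progress: {progress:.0%}")  # -> 38%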

Automation script effectiveness

Where and how are your defects being found? If you’re putting a lot of effort into automation, you’ll want to know whether your scripts are effective in each environment.

Calculation: # of defects found by automation / # of defects found (in a period of time)
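
A sketch, assuming your defect tracker can export when each defect was found and whether automation found it – the fields and dates are hypothetical:

    from datetime import date

    # Hypothetical defect-tracker export: (date found, found by automation?)
    defects = [
        (date(2022, 2, 1), True),
        (date(2022, 2, 3), False),
        (date(2022, 2, 10), True),
    ]

    period_start, period_end = date(2022, 2, 1), date(2022, 2, 28)
    in_period = [by_auto for found, by_auto in defects
                 if period_start <= found <= period_end]

    effectiveness = sum(in_period) / len(in_period)
    print(f"Automation script effectiveness: {effectiveness:.0%}")  # -> 67%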

Automation pyramid / trophy

The automation pyramid and testing trophy are guides for how many tests you should have at the static analysis, unit, integration, and UI levels. By looking at the number of automated tests at each level, you can get an idea of whether your focus matches these models.

Calculation: # of tests automated at each level / # of total tests automated
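
A sketch with an illustrative distribution of automated tests by level:

    from collections import Counter

    # Illustrative: the level each automated test sits at
    levels = ["unit"] * 120 + ["integration"] * 40 + ["ui"] * 15

    counts = Counter(levels)
    total = sum(counts.values())
    for level, count in counts.items():
        print(f"{level}: {count / total:.0%} of automated tests")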

Of course, metrics are just that: metrics. They should not be used as performance goals for teams – the point is to measure the tests, not the team. Choose the metrics you use carefully, as teams will often drift toward making them look better, intentionally or otherwise; everyone likes to see a positive trend. However, if those trends aren’t helping with your actual goals, you’ve measured the wrong thing for your business.

You may also find after a time that some metrics just aren’t that useful for your team, or that you can’t get enough insight into another aspect. Adapt as you find these gaps, keep working on improving your automation, and if the metrics meet your goals, you should find the quality of your software releases improving steadily as well.

