Maturity of test automation

Michał Zawieja
3 min read · Jun 7, 2023

When software development teams first start with test automation there is a lot of hype going on — from the best tools to use, through skills to develop, all the way to tracking the progress. After all, who wouldn't want to showcase a great achievement, especially after the many hours and days that went into it? :)

It’s not a surprise that many teams decide to use Code Coverage as one of their metrics of success. It is a (relatively) simple indicator — the number of lines of code covered by automated tests in relation to all the code there is in the repo.
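For illustration, this is roughly what the calculation boils down to (a minimal sketch, the numbers are made up):

```python
# Code coverage as a simple ratio (illustrative numbers only):
covered_lines = 8_000    # lines executed by at least one automated test
total_lines = 10_000     # all executable lines in the repository

coverage = covered_lines / total_lines
print(f"Code coverage: {coverage:.0%}")   # Code coverage: 80%
```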

A high coverage ratio should lead to a low level of regression, since the existing functionality can be tested repeatedly with any new change to the code base.

Sounds neat! :)

And, in all fairness, it is a good introduction to test automation — a clear goal (80% of code covered by automation) and a quick feedback loop (check your Sonar after the PR to get updated metrics).

But does achieving the 80% really mean maturity? Well… things are a bit more complex… but let me tell you a story first…

A long time ago, in a scrum team far, far away… :)

The year was coming to an end. We were working extra hard to get our great app delivered and to start the new year on a good note. With many functionalities planned, we had little time to consider any changes to our delivery model, and automation would be a huge one. But as the year-end holidays were approaching and most of the heavy delivery was behind us, we decided to give it a try… after all, 80% code coverage was one of the targets our management had given us for that year.

Motivated to do something new, we got our overtime pre-approved and dived in… head first…

A lot of work went into setting things up, even more into writing all the automation scripts, but, boy, was it rewarding to see the percentage go up with every new commit :)

Long story short — we reached our target and enjoyed the drinks that followed :) But the really interesting thing happened a few months later.

As the hype settled and delivery came back to its usual routine at the end of January, we had new things to code but also automation scripts to maintain. This became a bit of a nuisance… even a minor change to simple code required an update to the automation scripts.

So… well… soon enough some of the scripts were getting out of date and a new discussion started. What value did we actually add?

Looking at different data, we put the code complexity and code coverage metrics together, and it became clear that our 80% focused on simple code, leaving the more complex parts that could really benefit from good regression testing untouched. This was quite a bucket of cold water… All that effort to test some constructors in POCO classes? :)
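The analysis itself doesn't have to be fancy. Here is a rough sketch of the idea, assuming a per-file export of coverage and complexity (the metrics.csv file, its columns, and the thresholds are all hypothetical; tools like SonarQube report both metrics):

```python
import csv

# Hypothetical per-file metrics export with columns:
# path,coverage,complexity
with open("metrics.csv", newline="") as f:
    files = list(csv.DictReader(f))

# Complex files with poor coverage are where regression risk hides;
# the thresholds (50% coverage, complexity 20) are arbitrary examples.
risky = [
    f for f in files
    if float(f["coverage"]) < 50 and int(f["complexity"]) > 20
]

# List the worst offenders first.
for f in sorted(risky, key=lambda f: int(f["complexity"]), reverse=True):
    print(f'{f["path"]}: complexity={f["complexity"]}, coverage={f["coverage"]}%')
```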

Of course, we adjusted our direction and became more selective in our testing…

This leads me to my conclusion: maturity of quality control cannot be defined by arbitrary numbers like code coverage or the number of bugs in production. Maturity starts when the team discusses the quality of the quality control it has implemented.

Sounds obvious, but it’s very easy to get swept up by numbers. After all, they are more tangible than abstract quality. Such numbers are important to get you going, but don’t forget that at some point you will need to slow down and re-assess… just as we did… and it’s a very good sign when you do :)
