When you spend some time working on test automation, chances are you'll reach a point where your test suite is more of a hassle than a help. It takes longer and longer to execute your tests. The feedback loop between developers committing code and receiving the automated test results grows larger every day. Eventually, your team will begin to ignore your tests because they won't want to wait around.

If you reach this point, your first inclination might be to take the "scorched earth" approach and start again from scratch. I know I've had plenty of days with that thought. However, you can take steps to salvage the remnants of your test suite and get it back to a state where it doesn't block your team.

This article details my recent experience with an end-to-end test suite that slowed down the entire team and how we took a different approach to executing those tests to keep the team's workflow moving along.

The Initial Honeymoon Phase

At the beginning of 2019, the company I worked for at the time started to get serious about automating end-to-end tests for our projects. There had been previous attempts to implement automated UI testing, but those efforts never went far. Most projects in our organization relied on manual testing from the in-house QA team.

The teams on some of those projects had trouble finishing their allocated sprint work on time. The main problem they faced was a slow regression testing phase: a full regression test took the QA team too long, and every bug they found added more delays until the team missed its delivery deadline.

The organization wanted to automate the repetitive work the QA team did for each sprint, giving testers more time to perform higher-value tasks like exploratory testing. We also wanted to get our projects to a point where we could implement continuous delivery. End-to-end testing would help us build confidence in our applications to release new changes to production automatically.

We got to work on the initial implementation for one project. We decided to use the TestCafe testing framework, since the project made extensive use of JavaScript, and we wanted part of the development team to pitch in because they had the product knowledge. The organization also wanted developers to pair with QA team members who wanted to learn more about automated testing.

After a few weeks, the development and testing teams had automated a good chunk of the regression tests, and others helped integrate the suite into the existing workflow. Whenever the team pushed new changes to the code repository, the continuous integration system would execute the end-to-end tests after running the existing automated test suite, including unit and functional tests.
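If you're curious what that kind of ordering looks like in practice, here's a rough sketch in GitHub Actions syntax. It's purely for illustration, since any CI system can express the same idea, and the job names and paths are placeholders:

```yaml
# Illustrative pipeline: the slow end-to-end job only starts
# after the fast unit and functional tests pass.
jobs:
  unit-and-functional:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test                 # existing unit and functional tests
  end-to-end:
    needs: unit-and-functional        # gate the e2e suite on the fast tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx testcafe chrome:headless tests/e2e/
```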

From the start, the team saw how these end-to-end tests would help the project. Instead of wasting time on the mind-numbing, repetitive work that often got pushed to the end of the development cycle, the team had part of it taken care of for them. The testing team was freed up to perform other tasks, while developers got more feedback on their changes.

However, not everything was rosy. We started to feel lots of bumps in this automated testing road.

The Hostility Phase

As the team continued to expand the automated test suite and increase the coverage for the application, an all-too-common issue reared its ugly head: the tests were slowing down the entire team.

End-to-end tests tend to be slow and flaky, and our initial attempts at writing them were no exception. Since this was the first time many on the team had done any test automation, the suite lacked both speed and reliability. Builds ran at least five times slower than before and would fail intermittently for no apparent reason.

One of the mistakes we made as a team was attempting to automate too much, too quickly. In our quest to automate as many of the regression test cases as possible, we built many long, sprawling tests. Each one performed too many steps in an attempt to cover as much functionality as possible, which led to slow performance and high flakiness.

Because of these long, unstable tests, the feedback loop between a developer pushing a code change and receiving the test results grew longer every day. Throw in the rising number of build failures, and you get an unhappy team, and rightfully so.

The development team didn't want the end-to-end tests to run after every code commit they pushed to the repository. We changed the workflow to run these tests only when specific branches were updated, like the release candidate branch or the main branch that we used to deploy to production.
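In most CI systems, this is a small change to the build trigger. As a hypothetical example in GitHub Actions syntax (treat the branch names as placeholders):

```yaml
# Only trigger the end-to-end workflow on pushes to the
# release candidate branch or the production deployment branch.
on:
  push:
    branches:
      - main
      - release-candidate
```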

This move helped minimize build times during development. However, it merely masked the problem, because running the tests infrequently created new issues. Regressions were caught much later in the development cycle, often just before the project's release date. Eventually, the entire team ran into delays similar to those they had faced before implementing the automated test suite.

Some on the team wanted to cut our losses, scrap the end-to-end tests, and get back to manual testing with additional resources. However, we didn't give up and put our heads together to find a way through.

The Adjustment Phase

As mentioned earlier, one of the issues we had with the test suite was that most test cases performed too many steps. We also noticed that some tests executed almost the same steps every time, changing the data slightly or performing different assertions. These tests felt like duplicate work, so we did our best to trim the unnecessary ones.

You can only go so far with this approach, depending on your application. Ours was a rather complex application with many different scenarios. The QA team had a wide variety of tests they felt were necessary to run; even though some felt repetitive, they had experienced problems in the past when skipping those areas.

With developers wanting faster builds and testers wanting thoroughness, we eventually reached a compromise. Instead of deleting tests, the QA team sat down to classify the end-to-end tests by type and priority. The team tagged each test with a label, like "smoke" or "sanity", and a priority of high, medium, or low.
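TestCafe made this tagging straightforward, since the framework lets you attach arbitrary metadata to fixtures and tests with its meta method. Here's a simplified example; the fixture, URL, and selectors are made up for illustration:

```js
import { Selector } from 'testcafe';

fixture('Checkout')
    .page('https://example.com/checkout'); // placeholder URL

// Tag the test with its type and priority so CI can filter on them later.
test
    .meta({ type: 'smoke', priority: 'high' })
    ('a user can complete a purchase', async t => {
        await t
            .click(Selector('#buy-now'))
            .expect(Selector('.confirmation').exists).ok();
    });
```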

With this information in our test code, we could set up our continuous integration system to run only the high-priority smoke tests after code changes. These tests ran in about a quarter of the time the entire suite took, which was acceptable for avoiding long blocks on the development team.
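TestCafe supports this filtering out of the box through its --test-meta command-line flag, so the CI step for regular commits boiled down to a command along these lines (the browser and paths are illustrative):

```sh
# Run only the tests tagged as high-priority smoke tests.
npx testcafe chrome:headless tests/e2e/ --test-meta type=smoke,priority=high
```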

For the remainder of the tests, we configured the continuous integration system to trigger a nightly build that ran the entire automated test suite while most of the team was offline. If the suite failed, the system sent a notification to the project's Slack channel so the team could catch it the next business day.
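Most CI providers support scheduled builds for exactly this purpose. Here's a hypothetical sketch in GitHub Actions syntax; the cron time, secret name, and Slack message are all placeholders:

```yaml
# Nightly job: run the full end-to-end suite with no meta filter,
# and ping Slack only if something fails.
name: nightly-e2e
on:
  schedule:
    - cron: '0 2 * * *'               # every night at 02:00 UTC
jobs:
  full-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx testcafe chrome:headless tests/e2e/
      - name: Notify Slack on failure
        if: failure()
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data '{"text":"Nightly end-to-end suite failed"}' \
            "$SLACK_WEBHOOK_URL"
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```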

I found this split to be the best of both worlds. The development team didn't get stuck waiting for the test suite's results after pushing out new code, while the testing team was able to keep the tests they built without sacrificing thoroughness.

Much Better, but There's Still Room for Improvement

While this worked well, these changes still weren't perfect. We ran into our fair share of issues throughout the project while building the automated test suite.

Despite splitting up the end-to-end tests and running only a subset throughout the day, the team sometimes still waited too long for the tests triggered by their changes. In the days leading up to a release, the team's activity tended to spike, and the continuous integration service had to queue up multiple builds at a time. The solution here is often to throw money at the problem; in other words, pay for more build capacity.

Another issue that surfaced in the run-up to a deadline was an increase in the frequency of regressions. Of course, it's great that the test suite caught these problems before they shipped to production. However, since many of the regressions were found during the nightly builds, dealing with them disrupted the team's next day. Running the full test suite more often can mitigate this, although we struggled to find a good balance between build times and an acceptable feedback loop.

I also noticed that we needed to be extra vigilant about how we classified new tests. As the team built new functionality, they also created new automated test cases. However, many of these new tests were classified as high-priority smoke tests, and it wasn't long before the build times after each commit crept back up to unacceptable levels. It's good to review your existing test suite occasionally to reclassify tests or cull them if they're no longer necessary or useful.

Still, even with these occasional troubles, the automated end-to-end test suite massively improved the testing team's efficiency. A few months after we began implementing test automation on the project, the time spent on regression testing in each sprint had been cut nearly in half, and fewer bugs slipped through the cracks into production.

Summary

Automated testing is an excellent way to speed up your team by freeing them from the repetitive nature of regression testing. Instead of having them walk through the same test cases over and over, automating those steps frees them up for other kinds of work that boost the project's quality.

However, automation is not a silver bullet, and if you're not careful, you may run into plenty of issues. When starting with test automation, teams tend to want to automate everything through the UI and build lots of end-to-end tests. This tactic isn't sustainable. Eventually, you'll end up with a slow and unreliable test suite that no one on the team wants to use.

If you reach this point, you don't have to scrap everything and start again. You can take a few steps to change how your automated test suite behaves and avoid slowing down your project and your team.

A quick thing you can do with your existing test suite is to determine which tests should run frequently and which you can defer until later. Not every test needs to be a high-priority scenario. If you can extract a subset of tests that gives you a high degree of confidence that the application is working well, set up your workflow to execute those first.

With the remainder of the tests, you can take advantage of tools like continuous integration systems to run them when they won't interrupt the team's workday. Scheduling long-running tests or tasks for a time when they block no one helps you avoid bottlenecks during the development and testing cycles.

The key to running end-to-end tests is to automate as much as you can to give you and your team the freedom to focus on the other important issues in your project. Even if it's not perfect, automation will raise the quality of your applications by giving you the time to do your best work.

How do you deal with long-running builds or test suites that the rest of the team doesn't like to execute? Let me know in the comments section below!