Essentials of Test Automation

Derya Oz
Jul 19, 2022

In our fast-paced world, test automation is considered a standard for tech and non-tech companies alike. When I started in the IT business ten years ago, testing was considered a "plus" but not a "must". As software grew more complex, companies gradually came to see the importance of testing. At first, manual testing did the job, because releases happened only every six months or once a year. As releases became more and more frequent, test automation became the standard way to ensure the quality of the product.

Test automation ensures quality and fast releases, but does that mean it is perfect, or that it covers everything once it is written and running? The short answer is no. But there are some tips and tricks I have learned along the way.

Here are some of those:

1. Tests must be independent

Every test, and its test code, must be independent: no test should rely on another test in order to run successfully.

Think about this: say you are testing one new feature in pre-existing software. You have ten tests to run; eight are regression tests, two are new. The two new tests depend on data created by the eight regression tests. In this case, if the regression tests fail, you cannot determine whether the new feature is failing or not.

2. Initialize and Pre-Run

This is tightly connected to the first item. To make sure all tests run independently, a pre-run is a must.

In the pre-run, necessary data is initialized (e.g. users are created, authorization is provided), old data can be removed (ideally this is done at the end of each test execution, but in some cases leftover data remains), and common methods shared by different tests can be called (e.g. logging in with an admin user). Parameterize the initialization code with users, environments, URLs, etc., and make sure to call it before every test.

This also keeps the code clean and is considered a best practice. For example, when the login feature changes, you do not have to update every test one by one; you just modify the initialization method and you are done.
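A sketch of such a single, parameterized initialization routine; the function names and the fields of the returned session are illustrative assumptions, not part of any real framework:

```python
def initialize(base_url, admin_user, seed_users=()):
    """Run before every test: log in and create the data the test needs."""
    session = {"base_url": base_url, "logged_in_as": admin_user}
    session["users"] = {name: {"name": name} for name in seed_users}
    return session

def test_report_page():
    # Each test calls the same initializer with its own parameters,
    # so a change to the login flow is fixed in one place only.
    session = initialize("https://staging.example.test", "admin",
                         seed_users=("alice",))
    assert session["logged_in_as"] == "admin"
    assert "alice" in session["users"]
```

In a real suite, `initialize` would be wired up as a setup hook (e.g. a pytest fixture) so that it runs automatically before every test.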

3. Retry Mechanism

Some behaviors are hard to verify and make tests unreliable due to unpredictable conditions. Suppose you enter www.google.com in your browser and press enter. Can you say that every time you enter this URL, the page loads in exactly the same amount of time? Because of unpredictable conditions, the load time varies, even if only slightly. These conditions include network issues, animations, server availability, API calls, etc.

A retry mechanism ensures that tests are not unduly affected by these unpredictable conditions. It can take the form of waiting before trying to click a button, calling an API until it returns the expected data, waiting for a page to load, and so on.

But a retry mechanism is a bit tricky. How long should you wait? How many times should you call the API before giving up? There is a delicate balance here, because if you over-retry, sporadic bugs can be masked. For example, if you wait 15 minutes for a page to load, that probably means the system is not working well. Or an API call might sporadically return 400 Bad Request at first, and you miss it because the retry keeps the test from failing. Keep the retry bounded, and log every response in the code so these issues can be spotted.
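The balance above can be sketched as a bounded retry helper that records every intermediate response, so sporadic failures stay visible in the log. The `flaky_endpoint` below is a stand-in that simulates a call succeeding on the third attempt:

```python
import time

def retry(call, expected, attempts=5, delay=0.01, log=None):
    """Call `call()` up to `attempts` times until it returns `expected`."""
    log = log if log is not None else []
    for _ in range(attempts):
        result = call()
        log.append(result)          # keep every response, not just the last
        if result == expected:
            return result
        time.sleep(delay)           # back off before the next attempt
    raise AssertionError(f"gave up after {attempts} attempts: {log}")

_calls = {"n": 0}
def flaky_endpoint():
    _calls["n"] += 1
    return 200 if _calls["n"] >= 3 else 400   # sporadic 400 Bad Request

responses = []
status = retry(flaky_endpoint, expected=200, log=responses)
# responses now holds [400, 400, 200]: the test passes,
# but the sporadic 400s are on record for investigation.
```

The bounded `attempts` keeps a genuinely broken system from hiding behind endless retries, while the log preserves the evidence of the intermittent failures.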

4. Post-run

Post-run methods are code that runs after the actual test has executed: typically clean-up, logging out, or shutting down the system if necessary. After it runs, every test must return the system to the state it was in before the run, so that no leftover data is present for other tests. This makes failures easier to attribute and increases the reliability of test results.

5. When to run

Can we over-test? Can testing consume unnecessary time and resources? Yes, it can. This brings us to the question of when to test.

It depends on external factors such as environment availability, resource availability, and release date or time. It also depends on the objective of testing. For example, tests must run at least once whenever new features become available. But if the objective is to catch sporadic issues and test the reliability of the system, tests must run repeatedly: every day, twice a day, twice a week, and so on.

Over time, the frequency of testing must be reconsidered as the software product changes.

6. Parallel Execution

Have you ever been in a situation where your test case fails because, for example, your user was deleted by another process?

In software we have multithreaded applications, and in test code things get even more complicated. Parallel execution saves us a lot of time, but we should also check whether test cases running in parallel affect each other.
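One common way to keep parallel tests from stepping on each other is to have every test work on uniquely named resources, so no test can delete another test's user. A small sketch (the naming scheme is an illustrative assumption):

```python
import uuid

def unique_user(prefix="testuser"):
    """Generate a user name that no concurrently running test will share."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"

# Two tests running in parallel each create their own user and never
# collide on the name, so neither can clean up the other's data.
a = unique_user()
b = unique_user()
assert a != b
```

Combined with the per-test clean-up from the previous sections, this keeps parallel runs isolated without any locking between tests.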

7. Test Code Maintenance

Test maintenance is a very important aspect of test automation. We cannot assume that once we write the test code, it will work perfectly forever. Because the software and the system are changing constantly, our test code must be maintained constantly as well.

We should regularly review our old test code: how to improve it, how to make it more reliable, and whether it is still doing its job. Test cases must also be improved, and tests for deprecated features cleaned up.

So, the next time you write test code, these points can help you see the bigger picture, and maybe your future self will thank you for it.

Happy testing!
