
Technical concepts for automated testers

What every automated tester should know

John Gluck
9 min read · Jun 24, 2023


In my role as Quality Practice Lead and general TestOps developer, I frequently find myself explaining certain technical concepts to other test automators. I hope you will find this advice useful.

Taking on dependencies increases risks

Whenever you add a new dependency to your test harness, be it internal or external, you increase the chances your build will fail. You also increase your future maintenance burden. If you have an external dependency, you might learn that it has a vulnerability, and, if you have a good InfoSec team, they will give you a deadline to fix it that may not be convenient. If you have an internally-maintained dependency, such as a library built by another team in your organization, you may start running into trouble when that team wants to upgrade or replace the library. I have seen entire upgrade efforts quashed because too many testers were depending on a library they didn’t need but were using for convenience. The upshot is that you should always look for ways to avoid taking on a dependency. Sometimes, duplicating library code might be the right answer.

Avoid Enums

Generally, you want to avoid using Enums, if for no other reason than that they will probably force you to take on unnecessary dependencies, given that Enums are usually part of poorly segregated common libraries. Use primitive constants where you can. You can define your own Enums in your own harness; I have no problem with that.
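
To make the distinction concrete, here is a minimal sketch (all names are hypothetical): the constants and the Enum both live inside your own harness, so no shared-library import is needed.

```python
from enum import Enum

# Primitive constants defined in your own harness -- no common-library dependency.
STATUS_ACTIVE = "active"
STATUS_SUSPENDED = "suspended"

# If you want Enum ergonomics, define the Enum locally rather than importing it
# from a shared library you would otherwise have no reason to depend on.
class OrderStatus(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"

def is_shipped(status: str) -> bool:
    # Compare against the local Enum's value, not a shared-library symbol.
    return status == OrderStatus.SHIPPED.value
```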

Tests should generally not catch exceptions

I believe that test automators should run style-checking tools on every merge, but sometimes rules need to be modified. This is one of those rules. It’s generally a bad idea to catch an exception in a test, mostly because a test is not a service; it is a script. It has a point where it stops executing, and if that script encounters an error for any reason, it should fail fast. I see only two reasons for a test to trap exceptions:

  1. The error is cryptic and doesn’t make it obvious why the test fails, in which case you would trap it, add additional messaging, and throw it again.
  2. You are having some problem with asynchronous behavior (like an AJAX spinner or a database connection).
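
The two cases above can be sketched as follows (the session object, error type, and step names are hypothetical):

```python
import time

class CheckoutError(Exception):
    pass

def open_checkout_page(session):
    try:
        session.goto("/checkout")
    except Exception as exc:
        # Case 1: the raw error is cryptic, so add context and re-raise.
        raise CheckoutError(f"Could not reach checkout as user {session.user}") from exc

def wait_until(predicate, timeout=5.0, interval=0.1):
    # Case 2: polling around asynchronous behavior (spinner, DB connection)
    # instead of swallowing the exception and moving on.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

Note that in case 1 the exception is still thrown again; the test fails fast either way.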

Encapsulation is your friend

I have a Rule of Three: If I find myself repeating a sequence of steps three or more times, I move that sequence into a library function for all my tests to use. Also, if I find myself using a particular value more than three times, I declare it as a constant.
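
A small sketch of the Rule of Three in action (the API and step names are made up for illustration):

```python
# A timeout value that appeared in three or more tests, promoted to a constant.
DEFAULT_TIMEOUT = 30

def create_and_activate_user(api, name):
    """A sequence that appeared in three or more tests, moved into one helper."""
    user = api.create_user(name)
    api.send_activation(user)
    api.confirm_activation(user)
    return user
```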

Favor Composition over Inheritance

Particularly for your base classes, this may not always be the easiest thing to do, especially if your test runner has a rigid architecture and forces you to inherit. However, in the long run, it’s easier to maintain a class that is not a child of another class. Learn how to add behaviors to a class without having to use an “is-a” relationship. This is particularly handy for Page Objects. In particular, get familiar with the Strategy pattern to learn how to make your objects more extensible.
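
Here is one way the Strategy pattern can look for a Page Object (the page, driver, and strategy names are hypothetical): the page *has* a login strategy rather than inheriting from a login base class.

```python
class FormLogin:
    def login(self, driver, user):
        driver.fill("#user", user)
        driver.click("#submit")
        return "form"

class SsoLogin:
    def login(self, driver, user):
        driver.click("#sso-button")
        return "sso"

class DashboardPage:
    def __init__(self, driver, login_strategy):
        self.driver = driver
        self.login_strategy = login_strategy  # composition: "has-a", not "is-a"

    def open_as(self, user):
        # Delegate to whichever strategy was injected; no inheritance needed.
        return self.login_strategy.login(self.driver, user)
```

Swapping login behavior is now a constructor argument instead of a new subclass.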

Use Equivalence Partitioning to reduce over-testing

Many testers will use data providers to run through a sequence of steps with multiple inputs or input sets that yield functionally equivalent output at the end of the sequence. That is wasteful. Instead, reduce the number of inputs per equivalent functional output, preferably to one occurrence. If you find that you have numerous cases, such as different kinds of bad input that you know traverse different paths in your code but yield the same output (say, a customer-friendly error page), you are probably conflating separate testing goals.
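
A minimal illustration of equivalence partitioning (the function under test is invented): one representative per partition, plus the boundaries, instead of every value.

```python
def classify_age(age):
    # Hypothetical system under test.
    if age < 0:
        raise ValueError("negative age")
    if age < 18:
        return "minor"
    return "adult"

# One representative per partition plus boundary values -- not dozens of
# functionally equivalent inputs fed through a data provider.
PARTITIONS = [
    (5, "minor"),    # representative of the 0..17 partition
    (17, "minor"),   # upper boundary of "minor"
    (18, "adult"),   # lower boundary of "adult"
    (40, "adult"),   # representative of 18 and up
]
```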

You shouldn’t treat internal dependencies as third parties

Many automated testers in medium and large organizations have trouble distinguishing the boundaries between what they need to test and what other teams are responsible for. My rule is that you should strive for push-button deployment. So it stands to reason that you should never write a test that prevents developers from merging because of failures in dependencies outside of their control. Understand what the testers of the teams maintaining internal dependencies are testing. If there are coverage gaps, file a defect or submit a PR to their test harness. Don’t perpetuate silos.

You can efficiently test third-party service providers

Generally, mocks are not recommended for third-party services because of the maintenance burden those mocks carry. A better approach is to break up the testing:

  1. Pulse check — a simple check (run it in production if you can) to assure that you can always log in.
  2. Characterization test — Design a set of representative client request input and generate output from the third-party. That output becomes “golden” and is used in subsequent comparisons of the run against the original input.
  3. Third party output fixture — The output can also be used as fixtures for tests that determine if your application can process messages from the third-party. That way, you don’t need to run your tests with a live third-party service or a mock. You can just use the fixture.

These three sets of tests are separate and may run at different stages, but if they all pass, you can have a high degree of confidence that the third party is working.
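
The characterization step (2) and the fixture step (3) can share a single golden file. A minimal sketch, assuming the third-party call is wrapped in a function and the golden path is chosen by you (both are hypothetical here):

```python
import json
from pathlib import Path

def characterize(call_third_party, request, golden_path):
    """First run records the third party's output as the golden master;
    subsequent runs compare the live output against it. The same golden
    file can later serve as a fixture for tests of your own message handling."""
    response = call_third_party(request)
    if not golden_path.exists():
        golden_path.parent.mkdir(parents=True, exist_ok=True)
        golden_path.write_text(json.dumps(response, sort_keys=True, indent=2))
        return True  # golden master recorded
    golden = json.loads(golden_path.read_text())
    return response == golden
```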

When it’s okay to test third-party software

In general, it is not a good idea to test third-party software, but there may be situations where you should. If you are relying on a third-party library to do some critical job in your application, you should definitely understand the extent to which those developers test, and discuss with your developers whether they need additional assurances. You should also have tests that assure the library doesn’t get upgraded without being tested.
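
One lightweight way to catch an untested upgrade is a version-pin check. A sketch, where the pinned versions and library name are placeholders:

```python
# Versions your team has actually signed off on (hypothetical pins).
TESTED_VERSIONS = {"requests": "2.31.0"}

def upgraded_without_testing(name, installed_version, tested=TESTED_VERSIONS):
    """True if the installed version differs from the one we signed off on.
    Libraries without a pin are not flagged."""
    return installed_version != tested.get(name, installed_version)
```

In a real suite, the installed version would come from something like `importlib.metadata.version(name)`, and a mismatch would fail the build until someone re-runs the assurance tests and updates the pin.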

There may be cases where management specifically asks you to do testing they know a third party isn’t doing, either because missing coverage in the library caused an escape or because someone in product has a close relationship with the library vendor and the vendor told someone on your team that there was missing coverage. In that case, you don’t have a choice but to write someone else’s tests for them. Sorry.

Service contracts are probably more than you think they are

I say this here because I have, in the past, had to explain it. A service contract does not just describe the structure of the endpoints. It also describes the schema of the request and response payloads for each endpoint, the HTTP method, and the status code, as well as any headers the service requires. If any of those change, the contract has changed and may be broken.
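
To illustrate the scope of a contract, here is a toy check covering method, status, headers, and response schema. The endpoint, fields, and header names are invented; real contract testing would normally use a schema format such as OpenAPI or JSON Schema.

```python
CONTRACT = {  # hypothetical contract for one endpoint
    "path": "/v1/orders",
    "method": "POST",
    "status": 201,
    "required_headers": ("Content-Type", "X-Api-Version"),
    "response_schema": {"id": int, "total": float},
}

def violates_contract(observed):
    """Return a list of contract violations found in an observed exchange."""
    problems = []
    if observed["status"] != CONTRACT["status"]:
        problems.append("status")
    for header in CONTRACT["required_headers"]:
        if header not in observed["headers"]:
            problems.append(f"header:{header}")
    for field, ftype in CONTRACT["response_schema"].items():
        if not isinstance(observed["body"].get(field), ftype):
            problems.append(f"schema:{field}")
    return problems
```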

All the Test Data Management Strategies you know are mostly problematic

You should understand that there are several ways to tackle the problem of setting up data for your tests, some of which are more problematic than others, and you should understand the trade-offs before committing to one.

AAA means Arrange, Act, Assert and it is the foundation of all testing

This is the basic structure of all tests, not including setup and teardown. It’s important to understand that every step of a test is itself a test, even if you don’t assert against it. That you can reach a page that sits behind a login implicitly tests that login works in that situation.
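
A skeletal example of the structure (the steps are stand-ins for real setup and actions):

```python
def test_rename_user():
    # Arrange: build the state the behavior under test needs.
    user = {"name": "old"}          # stand-in for real setup (login, fixtures)

    # Act: perform exactly one behavior. Reaching this point at all
    # implicitly verifies the arrange steps succeeded.
    user["name"] = "new"            # stand-in for the action under test

    # Assert: one check at the end, against one state change.
    assert user["name"] == "new"
```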

Assertions increase risk

Some automated testers take a “journey” approach to writing their tests. They might pepper assertions through the “Arrange” and “Act” phases of their tests, increasing the likelihood of false positives. Some use soft assertions, which give testers an illusion of safety because soft assertions allow a test to continue executing instead of stopping at the point of failure. The risk grows if you don’t take the time to understand why a particular soft assertion failed every time it fails, and most testers don’t take that time, because soft assertions are supposed to save it.

Preferably, you should only have one assertion and it should come at the end of your test. That assertion should validate a single mutation/state change. Sometimes, you’ll need more assertions, so you may need to bunch up a few consecutive assertions. Just make sure that those assertions are aligned with the same test goal. If you can’t describe your test without using the word “flow” or some synonym, you may be testing too much in a single test.

Namespacing can limit you

Always be careful about the names of methods, classes, files, directories, etc. Keep each name limited to only what is necessary to describe the object. Try to account for future scope without forcing future users into unnecessary conventions.

Don’t test implementation instead of behavior

Generally, the more you can approach the system from an abstraction layer, the more assurance you have that you are testing behavior. For example, if you are verifying that data you entered reached the database, writing a direct SQL query tests the implementation. This is generally a bad idea because the schema could change while the behavior remains the same (because the data model used by the service changed along with the schema). The result would be a false positive: you would have to change your test, and, if you didn’t change your approach, you’d have to change it again the next time the implementation changed.
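
The contrast looks roughly like this (the service and its internals are invented): the behavioral test asks the service’s API, so it survives a schema change; a raw SQL check against `_rows` would not.

```python
class OrderService:
    """Hypothetical service; _rows stands in for its database schema."""
    def __init__(self):
        self._rows = {}                       # implementation detail

    def place_order(self, order_id, total):
        self._rows[order_id] = {"t": total}   # internal schema could change

    def get_order_total(self, order_id):
        return self._rows[order_id]["t"]      # public behavior stays stable

def test_order_is_persisted():
    svc = OrderService()
    svc.place_order("o1", 25.0)
    # Behavioral check: ask the service, don't SELECT against its tables.
    assert svc.get_order_total("o1") == 25.0
```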

Testing database persistence is probably not your job

Following from the previous point, many automators think it is their job to make sure that something gets written to the database, so they add dependencies to their harness in order to make ad hoc SQL queries. This can cause no end of havoc when thousands of tests run without managing their connections properly. Typically, the best place in the CI pipeline to test that the application writes to the database is the pre-merge/component-integration phase. If you are trying to validate that the system is functioning properly, ask your developer to help you talk to the service and get the data you are looking for.

Evaluate all application designs for testability

I’m going to write a blog about this. But in short, automated testers need to stop accepting “no” for an answer from developers. That means, in part, that they need to know what to ask for.

It is standard practice in modern companies for the people who own authorization to give testers bypasses for such functionality as SSO, MFA and SSL certs, so testers don’t have to jump through those hoops to test things that are gated by said functionality. If your company doesn’t offer this, file a feature request and get it prioritized. It is also standard practice for front-end engineers to provide a consistent element attribute for testers to use instead of convoluted XPaths. It is nearly standard for developers to provide test endpoints in lower environments that don’t get pushed to prod, so that testers can verify the data is written to the DB correctly instead of having to write an ad hoc query, or even use a persistence API. I can think of many more examples, which I will include in my blog on the topic.

Anonymous functions in applications make your job harder; You should ask developers to avoid them

JavaScript developers love using anonymous functions (including arrow functions passed inline as callbacks). The problem is that an inline anonymous function cannot be referenced or exported, so there is no direct way to unit test it. This tends to be the reason why JavaScript developers want testers to write extensive end-to-end tests. It’s important to understand this and learn to negotiate with your developers. Ask them to extract named functions, and to use promises rather than nested callbacks, which is what they are supposed to be doing now. Some of them are just stuck in their ways.
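
The same principle applies in any language. A Python analog (the sorting example is invented): logic buried in an inline anonymous function versus the same logic extracted into a named, importable, unit-testable function.

```python
# Hard to unit test: the key logic exists only inline, anonymously.
totals_inline = sorted([3, 1, 2], key=lambda n: -n)

# Testable: the same logic extracted into a named function that can be
# imported and exercised on its own.
def descending_key(n):
    return -n

def sort_descending(values):
    return sorted(values, key=descending_key)
```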

Cloud lambda functions make your job harder; You should ask developers to make them more testable at the outset

There’s nothing I’m going to say that isn’t addressed here. Read it and convince your developers to follow this approach. Send them the link.

In order to test lambda functions, you need to break everything up into testable pieces and provide the mechanisms for everyone involved in developing and testing to have their own isolated cluster. Any other approach will result in escapes or, at minimum, low confidence and extra cycles spent fretting and coordinating in order to test.
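
One common way to break a lambda into testable pieces is to keep the cloud handler as a thin adapter over an ordinary function. A sketch, with an invented discount calculation standing in for real business logic:

```python
def compute_discount(order_total, loyalty_years):
    """Pure business logic: unit testable locally, no cloud runtime required."""
    rate = min(0.05 * loyalty_years, 0.25)
    return round(order_total * rate, 2)

def lambda_handler(event, context):
    # Thin adapter: unpack the event, delegate, repack the response.
    # Only this shim needs the isolated cluster to be exercised end to end.
    discount = compute_discount(event["total"], event["loyalty_years"])
    return {"statusCode": 200, "body": {"discount": discount}}
```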

I have personally made recommendations to developers in the process of designing and implementing lambdas, had those recommendations ignored, and then heard the developer admit I was right six months later. It’s tiring and not at all satisfying. You’d think it would give me some charge knowing these guys with CS degrees are admitting that a Theatre major was correct about the design of their software. It’s not. I’d rather see the company make money, because then I make more money. Ugh.

I’m not saying this for my own health

These are concepts I largely learned at the School, nay University, of Hard Knocks, where I have a Ph.D. I’m not telling you this because someone told me. I’m telling you because I’ve been through it, and I want to save you the trouble. Of course, I know that if you are anything like me, you will ignore all this and just be another person to whom I will avoid saying I told you so.


John Gluck

Quality Practice Lead/TestOps Architect, Dad, Husband, blogger, cat herder, dark debt slayer, enjoyer of strange music and art, yoga enthusiast