Photo by Tingey Injury Law Firm on Unsplash

How to advocate for testability

Be argumentative, lazy, and irritable

John Gluck
9 min read · Jul 4, 2023


I often tell testers, automated or otherwise, that they need to speak up about and advocate for testability features in their applications. Until now, I have never specified how, because I guess it was obvious to me. But it isn't obvious if you haven't done it before, so here are some pointers for dealing with this thorny problem.

What is a testability feature?

You probably know this, but for completeness' sake I'm going to define it anyway. A testability feature is an additional feature, not specifically part of the customer requirements, that eases and/or speeds up testing of a given application or a component thereof.

Testers Assess Risk

As an automator, your job is automated validation: you are there to make sure the software does what it is supposed to do. But your job is also to make sure your team understands what you don't know. Which areas of the application weren't tested, or might have regressed? Sometimes we don't know what we don't know, but many times we do know what we don't know. Yet I rarely hear testers tell their team that they don't know something. Admitting to possible coverage gaps might make it appear that you aren't doing your job, but by hiding those gaps, testers do a disservice to their customers and their teams. One of the main factors that determines a tester's ability to assess the risk of a given feature is whether the application is testable in the environments it will run in before it reaches production.

Ultimately, the purpose of automated tests should be to allow for greater risk assessment. Often, testers are unable to assess risk to the desired extent because they don't have enough time. Unfortunately, one of the reasons testers don't have enough time is that they didn't advocate for more of it. Testers frequently neglect to explain how complicated testing is, or may become, for a given feature under current or projected conditions.

Argumentative, lazy, and irritable

I have actually gotten jobs by giving this answer in interviews to the question “What qualities make you a good tester?” It sounds like a joke, and it is, but its essence is true. Obviously, one needs to behave diplomatically. But you also need to advocate for the customer. That's right, I said the customer. That's because increased testability in an application decreases risk to the customers.

I firmly believe that most testers understand when a feature is going to be difficult to test. But rather than point it out in a way that serves the customer, we hold our opinion for political reasons (we don’t want to make waves, we want the team to look good, etc.).

When we tell our team that we need testability features:

  • We think others see us as argumentative because developers will likely not see the immediate value in our requests. If we stick to our request for increasing testability, it can force us into conflict with our team members.
  • We think others see us as lazy because our requests tend to require removing some of our work and asking other team members for help, which potentially slows down the other team members and impacts the delivery schedule.
  • We think others see us as irritable because the tiny things that bother us while we are testing are just that, tiny things. We feel like we should just suck it up and take it for the team. Maybe our teammates feel the same way.

Boiling Frogs

As testers, we tend to overlook obstacles to testing. Something as small as having to log in with our test account before arriving at the homepage becomes more complicated once a multi-factor authentication feature is added. A forced password reset every few months adds further complications and possibilities for false positives. A cookie validation prompt might add another set of risks. Every complication adds more and more risk until we have achieved death by a thousand cuts. And remember, testing log-in was never our goal. Our goal was simply to get to the homepage.

Most developers don’t understand the cost of end-to-end testing

End-to-end testing is difficult both to perform manually and also to automate. However, most developers have an unrealistic idea of how much effort goes into end-to-end testing, particularly when it comes to execution and maintenance. I get it. Unit testing is boring. There are repeated misconceptions about unit-testing as well that take cycles to push back against. Developers frequently default to asking for end-to-end automation tests when there isn’t enough time for them to finish development comfortably and, specifically, finish their unit tests. It’s our job as experienced testers to make sure developers understand that a request for an end-to-end test means adding more to the heap of technical debt that has already been accumulated.

Here are some ways you can do this.

Tell the story

Sometimes it's best to go into the details of how you attempted to solve the problem. However, you will lose your audience if you aren't succinct. Here's an example of how you might present your case.

Request: I would like a way to access pages behind the log-in more easily.
Context: I am testing that the service routes me to the homepage after login.

  1. When I first go to the dashboard page, I instead encounter a log in prompt.
  2. I use the “testuser@mycompany.com” username.
  3. I enter the “myverybadcommonpassword” password.
  4. I am unable to log in so I need to investigate. It turns out, someone changed the password.
  5. I try logging in with the new password, but am prompted for an MFA token.
  6. In order to get the MFA token, for security purposes, I need to log into the common user’s email account.
  7. When I attempt to log in to the common user’s email account, I am again prompted for the password.
  8. I try both the old and the new password. Neither works.
  9. I ask on Slack if anyone has the password for the common email account. 5 minutes later, someone answers me.
  10. I log into the common email account and make several attempts at the CAPTCHA puzzles, which don’t seem to be working.
  11. I go back to Slack and ask who is responsible for CAPTCHA. After being directed to 2 incorrect teams, I manage to get the correct team. They tell me they were releasing a new feature and ask me to wait 10 minutes.
  12. I try again in 10 minutes and I encounter the same problem. I contact the person I spoke to previously. It is now lunch time.

And so on.

A journal of what you have to go through can really help to bring home how painful certain application functionality can be to test. While you are journaling, make sure you detail how long you are spending testing so that in the future, you can reference your journal if anyone asks you to perform that testing again.

Use your friends/rubber duck

Talk to someone, especially someone who doesn’t understand your job, to try to boil your problems down to their essence. If you can explain why a particular test setup is complicated and painful to someone who doesn’t know much about software development, you can more easily explain it to a developer. You can also ask developers, preferably those who don’t know the domain, to help you boil down why a particular problem is complicated. They may stump you, which is likely to help you either solve the problem or figure out how to state it more clearly.

Record yourself

Recording a video of yourself trying to perform a complicated test setup can go a long way toward demonstrating why something is hard to do. You can always edit the video.

For new features, start early

In order to identify testability problems in a given feature, the best place to start is before the feature is built. If you are lucky enough to be invited into requirements discussions, you have already won half the battle. The next stage is to ask yourself how you will test this new feature.

  • Use the 5 Whys. Why will this be difficult? Why? Why?
  • Think about similar functionality and what has gotten in your way before.

Requirements, not implementation

Requirements should not specify technology or specific solutions. Rather, they should explain the problem and describe the end state.

If we take the example of the login page, a requirements approach would be to say, “Testers need a way to get to the dashboard page within 5 seconds of entering the site. The method for doing so should work 100% of the time, with as little risk of false positives as possible.” An implementation approach would be to say, “Testers need to bypass the log-in page.” The second approach is less likely to get you what you need.

Make it incremental

Developers are often rewarded for coming up with the cheapest solutions that solve most of the problem. Don’t expect perfection. You may have to get your team to iterate on the problem, but hopefully you will get at least some partial relief from your testing woes in the near term if you allow your team to develop an incomplete solution.

If, for example, you used the requirements approach above and a developer came back to you and said, “What if we give you 98%?”, you’d probably be pretty happy if your existing test was failing 75% of the time.

Work with product

If you don’t get the testability work prioritized in the backlog, chances are it will keep getting pushed back. Work with your product owners to quantify how much time you spend working around problems. Visuals such as those mentioned above can again be your friend. It’s complicated to calculate how much time you will get back. And remember, any extra cycles you get will simply go back into testing the product more deeply, so don’t ever promise anyone that testability will speed up delivery. It probably won’t. You’ll fill your time with other productive activities.

Expect to be right in retrospect

My wife tells a story about how one day, when she was in grade school, she anticipated a rainstorm later that day by watching the news. Before school, she made sure to bring her rain jacket with her. Later that day, as her fellow students shrieked and panicked when the rain arrived, she put on her jacket, smug and self-satisfied. As she walked onto her school bus, the driver saw her grin and said, “No one likes a smart ass.”

It’s hard to see a failure you predicted unfold and stay silent. But if you plan for the eventuality by making sure you have documented the solutions to the failure, you might be able to take advantage. Also, bring in other team members and let them get some credit. And don’t say “I told you so.”

You are only human

Remember, it’s okay for human needs to get in the way of testing: “A colleague needed my help,” “My boss asked me for some information and I lost focus,” “I put off lunch to try to make this work, but I got too hungry to keep going,” “I had some pressing personal business”; these are all legitimate excuses when time constraints in the application are causing problems. It’s okay to be human.

Make sure to publicize success

Once you have your testability feature, don’t expect that everyone will notice what a great job you are doing now that you are able to assess more risk per feature release. You have to make sure people in and outside of your team understand that the extra space was created by the efforts of you and your team.

Some common examples of testability features you should be asking for

  1. Test attributes in web pages — This has been a very common practice for years and, in general, it’s pretty easy for developers to do once they get in the habit. Such attributes can help you avoid creating convoluted XPath locators (see the sketch after this list).
  2. Login/MFA/SSO bypass — This is also a pretty common practice in most organizations. You can counter the arguments against it by timing how long it takes you to get past these steps and demonstrating how much the time accumulates. For example, if it currently takes 60 seconds to get past all the log-in nonsense and you have 100 tests, a single suite now spends 1 hour and 40 minutes cumulatively just logging in, assuming you aren’t parallelizing.
  3. Test Data — This can be challenging to get developers on board with, mostly because it’s a difficult problem to solve. I’ve written an entry on it. You are likely to get some pushback but keep plugging away. I had a colleague who once went to the architecture meeting and proposed a pretty ugly solution to a test data problem we were having, which prompted the architects to ask, “Why in the world would you want to do that?” When he explained to them his reasoning, they realized that they were the ones who needed to solve the problem by working more closely with their developers to make applications more testable. Don’t be afraid to propose bad ideas as long as you frame them as such. You can say something like, “This seems like a bad idea but I don’t see another way around it that I control.”
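To make the first item concrete, here is a minimal sketch of what a dedicated test attribute buys you. The markup, the data-testid attribute name, the URL, and the Playwright-flavored test are illustrative assumptions rather than a prescription; the point is that a selector keyed to a purpose-built attribute survives the layout and copy changes that break structural XPath locators.

```typescript
// Minimal sketch (assumes Playwright; the markup, attribute name, and URL are hypothetical).
//
// Suppose developers add one attribute to the checkout button, with no behavioral change:
//   <button class="btn btn-primary" data-testid="checkout-submit">Place order</button>

import { test, expect } from '@playwright/test';

test('placing an order shows a confirmation', async ({ page }) => {
  await page.goto('https://example.com/checkout');

  // Brittle: tied to page structure and button text, so it breaks when either changes.
  // await page.locator('//div[3]/form/div[2]/button[contains(., "Place order")]').click();

  // Testable: keyed to an attribute that exists only to support testing.
  await page.locator('[data-testid="checkout-submit"]').click();

  await expect(page.locator('[data-testid="order-confirmation"]')).toBeVisible();
});
```

The same idea holds whether your team uses Playwright, Cypress, Selenium, or anything else; the testability feature is the attribute, not the tool.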

