Is Manual Testing a Dirty Word?

I’ve been part of a few discussions this week, including on LinkedIn, about the term “manual testing”. Within the community, the term generally seems to carry a negative connotation, and I wanted to find out why.

What is manual testing?

As with many terms in testing, there’s no clear answer. If you ask ten different testing experts, you’ll probably get ten different responses for what it means. Generally, though, when we say manual testing we mean “not automation”, with manual testers being testers who don’t write automated tests.

That means manual testing can cover a wide variety of things: running scripts, exploratory testing, use of tools, non-functional testing, technical testing (including API testing) and critical analysis of risks. It’s a big old term that can mean a lot of things, so why is it something we don’t like?

Fig 1. Eric Cartman from South Park saying a dirty (or curse) word.

It’s a term that I avoid…

Here are some of the comments that came from me asking what the term manual testing meant to people:

  • The term frustrates me, it’s just “testing”.
  • A trigger word for many experienced testing professionals.
  • I don’t like using the term “manual testing” at all.
  • A word used to inflame a subject and take focus away from uplifting each other.
  • There is nothing “Manual” about “Testing” period. It’s a terrible and antiquated term that needs to go extinct.
  • I really struggle to hear the term ‘manual testing’ used, the first thing I think is … aaaaarrrrggghhhh! Followed by, “why do people insist on differentiating manual from automated”!
  • I understand why people might use it, but it’s mostly meaningless for me.
  • Reduces people to test executors.
  • You don’t get manual developers.
  • Similar nonsense like principal test anything.

(Apologies for not crediting people; I hadn’t asked if people were cool with being referenced. If you want to be credited, please hit me up.)

As we can see, there’s a lot of negativity around the term and, digging into it, it seems to centre on a few themes: manual testers are seen as less technical, it’s not a useful term to describe what we do, and it’s not marketable.

Less technical

For most people, the term manual testing begins and ends with testers working through test scripts at the business layer. That view limits manual testing to simple checks, never getting under the hood or into the details of anything.

Fig 2. View of manual testing perception vs. technical view.

If we’re doing some of that more technical work, then we wouldn’t want to be perceived as only doing the basics, right? So manual testing becomes a perception of something lesser than what we can do, or a view that it’s less senior, and we distance ourselves from it.

As people start to distance themselves from the term, especially thought leaders and “known faces” in the industry, it creates a feedback loop: people who want to be associated with those thought leaders distance themselves from it too.

The other side of this is that manual testing is not automation, so it’s seen as less technical. A lot of organisations and career paths tell us to “level up” into automation because that’ll make us more technical (as opposed to differently technical). So if saying you do manual testing means saying you don’t do automation, the fear is being seen as not technical at all.

Not a useful term

“Why do we even say manual testing? We don’t say manual development!”

It’s true that we don’t, but isn’t that because all development is manual? Until developers start automating their coding with generative AI, they have to do all the coding themselves. So for developers this isn’t a useful distinction; instead we have front-end/back-end or Java/Python/C++ developers, because that tells us what they do.

So, for testing, is manual tester a useful term that describes what we do? For some, yes, as it describes that they can do not-automation testing. For others it’s less useful, because the absence of something isn’t a description of what they *can* do.

If manual testing (as described above) is overwhelmingly seen as following business scripts, then how do I showcase what I can do? I could call myself a:

  • Exploratory tester
  • Toolsmith
  • Risk analyst
  • Bug archeologist
  • Quality coach
  • Engineer
  • Tester

These all form parts of a skillset, but none of them quite explains fully what someone can do (or, in the case of tester, is just holistic). But they’re all seen as more technical, so maybe the assumption is that because I can do this, I can do the other things too. These other titles become more useful because they don’t come with the baggage of being seen as “less than” or “untechnical”.

Not marketable

Fig 3. The Resident Evil 4 Merchant has a selection of good things on sale… stranger.

Market forces… capitalism… love them or hate them, they shape our careers and what’s seen as valuable in testing. Organisations want technical testers, and because hiring managers aren’t always deep into the nuance of the testing community, that means they want automation engineers or SDETs.

Being seen as a manual tester isn’t helpful because it’s seen as untechnical, lesser than other testers, and not descriptive of what we can do. That means fewer job opportunities in the current market and (potentially) less career advancement. So of course people don’t want to use the term; they want to be able to advance and feel like they can get better jobs.

Plus, in a world where a number of our thought leaders and peers decry manual testing as lesser, we’re conditioned to see it that way too.

The other side

From the conversations, some people are happy with the term and use it to describe human-based testing, exploration and critical thinking:

  • Testing like a user would.
  • Using common sense, influence and critical thinking.
  • Using human instinct and experience to explore and learn.
  • When I am testing I am constantly thinking “What if..” or “Why is that..” or “Can I?” or  “Will it let me?” I am constantly asking questions of the software.
  • An opportunity for a tester to get the “feel” of a feature; to really understand what technically is the feature about by the manual setup

Whilst these were not the majority of responses, there are still people out there who don’t mind the term and actively describe themselves as manual testers. With this in mind, it’s important to recognise that it’s not a dirty word for everyone and that we shouldn’t dismiss these people from the conversation.

Saying “there’s no such thing”, “that would be testing manuals” or the like could alienate and dismiss a number of our peers and colleagues just because of our own distaste for a term. That’s especially true for those who are reclaiming the term, possibly becoming role models or advocates for the many other manual testers out there.

Avoiding the conversation

The manual vs. automation conversation can come across as so condescending that people opt out altogether. Testers end up avoiding engaging with each other because of the negative connotations around their area of expertise. This creates silos, which means we lose out on information sharing and it prevents holistic testing:

  • Duplication of effort, with the same things being tested by both manual and automation testers.
  • Lack of strategic thinking overall, with silos only focusing on themselves and not the bigger picture.
  • No sharing of vital information about opportunities for testing (e.g. exploration leading to automation cases).
  • Individual testers thinking they have to do it all, resulting in too much testing for one person to manage.

If we’re unable to bridge the gap because the community has created a “dirty word”, then testing in organisations suffers as a result.

So is manual testing a dirty word? In some circles, yes, it very much is a word that’s looked on with scepticism (and understandably so). But by making the word dirty, do we lose an important part of the testing conversation?

Response

  1. robertday154

    I’ve always thought of it as “human factors testing”; asking “What would a typical user do?” or “Has the system design taken human behaviour into account?” After all, many systems are designed with a human end user in mind (or should be).

    And yet, we get systems that fail to do this. Perhaps one of the most relevant examples right now is the discredited Post Office Horizon system. Look at some of the bugs cited in the official inquiry, and you’ll find that a significant number of them only manifested when users unused to IT systems tried to get a response from the system when it went unresponsive. For instance, if the system froze during a transaction and the user kept hitting the return key to try to force a response, the system logged each keystroke as a new transaction without updating the display.

    And I once had an interesting conversation with our UI designer about “over-reading”, something my father, who worked on railway signalling systems, had encountered. Sometimes, train drivers focused on the most distant signal and failed to notice a different signal in their foreground. I saw similar effects in UI design when multiple dialogs were displayed and the user’s natural reaction led them to try to interact with a dialog that wasn’t active.

    I’d suggest that you can’t automate this process. If you end up calling it something different, that’s OK by me.
