How should we view AI as testers?

On Wednesday and Thursday last week I was privileged to be at the AI Summit in London with two of my colleagues, Sushmitha Sivan and Bhavana Akula, where we not only heard some really interesting talks about AI but also delivered a one-hour(ish) session to Hackathon participants on testing and quality engineering, and how they could apply it to the challenge they had been given.

AI is fairly new to me (and, I suspect, to many others), so we are all at different stages of a learning journey, but one that is fascinating in its scope and in the impact it can have on society.

I’m not going to tell you how to test AI, or how to refine data models; there are other places to go for that. This is about the more general learnings and ideas that came to me from the different talks I attended. I came away with a number of buzzwords and phrases in my head (in no particular order):

  • Ethics
  • Regulation
  • Resistance
  • Fear of the unknown
  • Job losses
  • Enabling human creativity
  • Removing drudgery
  • Data model bias
  • Training data models
  • Using AI responsibly
  • Deepfakes

It got me thinking about how we should be responding to AI as test professionals, and I deliberately use that phrase because it covers every role within testing.

Most of us have spent our careers testing UIs, backend databases, APIs and so on, which produce specified, known outcomes. We have user stories that tell us what the expected behaviour is so that we can test accordingly. So what do we make of this new world?

The purpose of AI, as I see it, is to enhance the tools we humans have to help us make decisions. Computers can process information much faster than we can, so they can take some of the drudgery away from us, and the idea is that we can use our brains for more creative things.

Of course, the danger is that a reliance on AI could mean that inherent biases are overlooked and the decisions we make are based on flawed data, so we need to be careful about how much trust we put in the outcomes:

  • Can we trust the data set that is used by the AI tool?
  • Could it contain fake data?
  • Does it contain biases?
  • Can we trust that the output covers everything that needs to be considered?
  • Are there missing references that would have affected the results?

We could also lose the ability to think critically for ourselves if machines can do it for us and we rely on them unquestioningly. This happens already; a real-world example is this: how many under-30s can read a map from a book? If their phone or car satnav dies, how many of the younger generation would cope with using a paper map as a backup plan? They have grown up with a reliance on tech, whereas those of us a little older can do both.

As testers we could so easily fall into the trap of opening up something like ChatGPT, asking it to help generate a test plan or a test case based on the information we give it, and then treating the result as though it were the perfect answer. We must exercise caution here. Yes, we will probably get something we can start with rather than a blank sheet of paper, but doing that all the time will mean we lose the ability to start the process from scratch ourselves. Sometimes mind-mapping things out helps us to make the links ourselves; starting mid-way through the process from whatever ideas AI has given us is something we would have to train ourselves to do, and that may or may not work. I’m not saying it’s necessarily a bad thing, but we need to be careful not to lose the ability to think for ourselves.
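To make that concrete, here is a minimal sketch of what "use the AI draft as a starting point, then verify it yourself" might look like in practice. The function, the AI-suggested cases, and the missing boundary cases are all hypothetical examples, not output from any real tool:

```python
# A hypothetical function under test: spec says percent must be 0-100.
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Cases an AI assistant might draft for us (hypothetical output):
ai_suggested_cases = [
    (100.0, 10, 90.0),   # typical case
    (50.0, 0, 50.0),     # zero discount
    (80.0, 100, 0.0),    # full discount
]

# Step 1: run the draft -- useful, but only covers the happy path.
for price, percent, expected in ai_suggested_cases:
    assert apply_discount(price, percent) == expected

# Step 2: apply our own critical thinking. The draft above misses the
# invalid-input boundaries, which we only spot by reading the spec:
for bad_percent in (-1, 101):
    try:
        apply_discount(100.0, bad_percent)
        assert False, "expected a ValueError"
    except ValueError:
        pass  # rejected as the spec requires

print("all cases passed")
```

The point is not the code itself but the workflow: the AI draft saves us the blank page, and our own review supplies the cases it didn't think of.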

There are some great real-world scenarios that we can use AI for:

  • Finding the best mortgage rates and organising them in a table
  • Drafting a letter for an unfamiliar scenario
  • Researching for a quiz you are going to host
  • Preparing for a radio show (something I will be using it for)

As we start to use it in our daily lives, we will come to rely on AI more and more. It isn’t going to go away; it will need to be regulated (humans have the unenviable ability to turn any invention into something that can be used for harmful purposes!), and it needs people to question its usage.

As testers, my advice is to embrace AI with a healthy amount of scepticism. Question the results you are given, and independently verify them. You won’t have access to the data that the outcomes are based on, so exercise caution, and do what testers do best: delve, investigate and ask questions.

And finally, keep hold of your critical thinking skills. They are going to be more important than ever in a world where people come to rely on what they are told as being the truth. Those who can step back and take an objective approach will be the ones who stand out in the future.

Welcome to the brave new world.
