Ben Dowen

Anatomy of test automation

With the rise of low-code and no-code solutions, I've been thinking about the problem they are trying to solve.

This led me to think more widely about the parts of Testing that can be supported by tools and automation.

System under test

In the beginning, you need the correct version of the System Under Test.

This may be as simple as taking the latest build of a single isolated service.

More likely, this will involve getting matching versions of multiple services that work together to form a system.

This task alone may not be trivial, but more than likely it can be automated with the help of a Continuous Integration (CI) platform.

Something like Jenkins can be configured to kick off builds when you commit changes to source control.

All CI systems I know of require at least minimal code, or a configuration file of some sort that gives instructions on how to build the Software.

Likely this is a wider concern than Test Automation, as you won't be the only one who wants software built.

But I've worked in consultancies that still build the odd thing by hand and FTP it to a server.

So if you don't already have a slick answer for this, it's a barrier to automation straight away.

The supporting environment

The SUT will seldom live on its own, unsupported by dependencies. This might mean having the right Node or Java versions available.

It could also mean setting up:

  • Databases
  • Storage
  • Queues
  • Proxies
  • Load balancers
  • Third party API or appropriate mocks

Each of these may also need special configuration or base data seeding that isn't specific to any given scenario.

If you are lucky you can get some or all of your dependencies deployed using some type of automation. Maybe Docker images that can be deployed by your CI system.

Living the dream. More automation, definitely some code, hopefully some Infrastructure as code.
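As a rough sketch of what that can look like, here's how a Python test run could spin up a throwaway database dependency with the testcontainers library, assuming Docker is available on the machine running the tests (the image and the seeding step are just placeholders for illustration):

```python
# Sketch: spin up an isolated PostgreSQL dependency for the duration of a test run.
# Assumes Docker is installed and the `testcontainers` Python package is available.
from testcontainers.postgres import PostgresContainer


def run_tests_against_fresh_database():
    with PostgresContainer("postgres:16") as postgres:
        # The container starts on a random free port; the URL points at it.
        connection_url = postgres.get_connection_url()

        # This is where base data seeding (schemas, reference data) would happen,
        # before handing the URL to the system under test or the test suite.
        print(f"Database ready at {connection_url}")
    # Leaving the `with` block stops and removes the container again.


if __name__ == "__main__":
    run_tests_against_fresh_database()
```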

Failing that, you might have some static servers, hopefully managed by a friendly Ops engineer. Otherwise, that's another job for you.

Again, if this isn't already slick and automated, another barrier to entry.

Test scenario

Okay, the big one. Let's assume we overcame all the hurdles above. Now let's break down our Test cases.

Prerequisites

In addition to all the setup you've done to get this far, you might still need a bit more. Maybe some users with appropriate access.

Test data

If your inputs and outputs are static, this might be trivial.

For me, I often work with non-trivial test data that requires some level of templating.

I often need to make sure dates represent today, even if I wrote the test and captured the data months ago.

In some cases test data can be generated automatically using libraries like Faker. I know some tools, like Mockoon, include Faker out of the box.
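As an example, here's roughly what that templating can look like in Python with the Faker library. The field names are made up for illustration; the point is that the dates are always generated relative to the day the test runs:

```python
from datetime import date, timedelta

from faker import Faker  # third-party library for generating realistic fake data

fake = Faker()

# Hypothetical order payload: names and emails are random, and the dates are
# always "today" relative to when the test runs, not when the data was captured.
order = {
    "customer_name": fake.name(),
    "customer_email": fake.email(),
    "order_date": date.today().isoformat(),
    "delivery_date": (date.today() + timedelta(days=3)).isoformat(),
}

print(order)
```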

System state (Given)

Now you're going to want to get your system into the starting state.

Some examples:

  • User is logged in
  • The right page is loaded
  • An entry doesn't already exist for the new TODO item you are about to create

This is something your automation framework, code or otherwise, really should handle.

Finally, something we can recognise as Test Automation.

Action (When)

This is the core of the test: simulating the user's action, or a sequence of multiple actions, that we expect to make some change to the system state.

Examples:

  • User clicks a button
  • API call is made
  • File is changed

Assertion of expected result (Then)

This is where most of the debate comes in about Automated Checks and Human testing. Ignoring any AI for a moment.

This is where we have a coded assertion. I don't mean we need to be using a programming language, but we need to have an unambiguous way to decide whether the actual results we got are what we expect.

If they match our coded expectations the test passes, otherwise it fails.

While we can make tests smart to an extent, looking for shapes and ranges, ultimately we can only assert on what we can expect.

This can definitely be done by any automation framework, whatever the language, low-code or no-code.
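To make that concrete, here is a minimal sketch of a Given/When/Then check written with pytest and requests, against a hypothetical TODO API. The endpoints and payloads are invented for illustration:

```python
import uuid

import requests

BASE_URL = "http://localhost:8080"  # hypothetical TODO service under test


def test_creating_a_todo_item():
    # Given: a title that doesn't already exist in the system
    title = f"Buy milk {uuid.uuid4()}"
    existing = requests.get(f"{BASE_URL}/todos", params={"title": title})
    assert existing.status_code == 200
    assert existing.json() == []

    # When: the new TODO item is created
    response = requests.post(f"{BASE_URL}/todos", json={"title": title})

    # Then: unambiguous, coded assertions decide pass or fail
    assert response.status_code == 201
    assert response.json()["title"] == title
```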

If we got this far we are winning, provided of course we did our Test Analysis right and we are checking for useful things.

Logging and reporting

Analysing logs and making reports can definitely be fully or partially Automated.

This might take the form of some console output that can be captured by your CI system and linked to a build. It could also take the form of an HTML report, complete with graphs, screenshots, API responses, or even video recordings.

What you will probably miss is the contextual logging: the logs from the dependencies and more distant parts of your SUT. This can also be captured and logged, but takes a fair amount more thought.

Tools and supporting scripts

Of course this maximalist description describes pretty much an end to end or system integration test.

As I'm sure experts like Mark Winteringham, Richard Bradshaw and Alan Richardson would tell you, you can get great value from automation and tools to support your testing well short of an end-to-end test.

I've done this myself on plenty of occasions by creating tools to capture or generate test data, or by mocking APIs to support my Exploratory Testing.
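As an example of the latter, a throwaway mock doesn't have to be sophisticated. Something like this Python standard library sketch, returning a canned response on a local port, is often enough to unblock exploratory testing against a dependency you don't control (the response body and port are placeholders):

```python
# A tiny throwaway mock API: always returns the same canned JSON response,
# so the real third-party service isn't needed while exploring the SUT.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "PAID", "amount": 42.50}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Point the system under test at http://localhost:9000 instead of the real API.
    HTTPServer(("localhost", 9000), MockHandler).serve_forever()
```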

Conclusion and final thoughts

If you were not convinced already that, to succeed, an automation strategy needs support and buy-in from many people in various roles, I hope you are now a convert.

In my opinion, and I am making some assumptions, low-code or no-code only attempts to solve part of the problem.

In fact, I would go further and say Test Automation Frameworks in general only try to solve part of the problem. And that's OK.

Building software is a team game, and Software Quality even more so.

Top comments (12)

Alan Barr

Thought-provoking article, Benjamin. It makes me wonder what the next level for testers would look like. This sounds like one kind where a hyper-empowered QA Engineer can assemble a realistic scenario. Of course, I value people who can achieve a high-quality testing process cobbled together without requiring my whole budget. This might be a tough question, but where do you see a QA Engineer adding value in an enterprise given this maximalist view of testing?

Ben Dowen

Us humble QA Engineers often have to manage being a lot of things to a lot of people.

  • Business analysts
  • Communicators
  • Risk identifiers
  • Model builders
  • Software engineers
  • Customer advocates
  • System administrators
  • Team members
  • Leaders

If we are lucky, we are well supported, respected and don't need to wear all of these hats at the same time.

The larger the organisation and the bigger the teams, the more opportunity there is to specialise and add value doing fewer things well.

But there is great power in small teams. I work in a squad as the sole QA Engineer with 6 Developers, a solutions architect, a DevOps engineer and a product owner. I don't do all the things all the time, and I get the opportunity to pair with others in the team to work together to achieve our goal of delivering high quality solutions.

Like I said, making software is a team sport.

Jesse Phillips

Benjamin and Alan, could you both clarify "maximalist description describes pretty much an end to end or system integration test." and "next level for testers would look like."

I did not view this as end-to-end testing but rather integration testing at all layers.

Alan you've introduced a concept of testing levels, I don't think this article established a leveling system but could be wrong.

Alan Barr

I read into it. I work on Kubernetes infrastructure things now, and problems I would have highlighted do not exist anymore. There are new, different emergent problems, and I wonder if people are aware that the skills in this kind of role might change dramatically in the next few years.

Ben Dowen

I tend to find that some problems I think have been solved by the industry keep recurring. What I mean is, there are lots of solved problems that are unsolved for teams and companies that are not at the same level of maturity.

This isn't a bad thing as such; sometimes by solving the same problems again we come up with better and better solutions.

Jesse Phillips

Well, now I'm curious what you feel is solved and what the new challenges are. I have some thoughts on the subject of Kubernetes, but would like to know where you are going with it.

Ben Dowen

It starts to get a bit complicated to talk in general terms at this point without discussing teams I've worked with and their problems. And I want to avoid naming names.

As a general trend, Docker is making my life steadily easier in terms of setting up isolated test environments that have controlled test data.

But, if I hit something that cannot easily live in Docker for licensing or technical reasons, then I'm back to hosted environments I'm not in control of.

I test a lot of APIs so mocks fill in some of these gaps.

But now I've got a problem that my Integration Tests are not always triggered by builds or before merges, and reporting and monitoring isn't trivial. And if something goes wrong, debugging is harder.

So you make gains in some areas, but end up with gaps that were solved problems with static environments, like the SUT staying around to debug.

All the new problems are solvable. It just all comes down to time, and sometimes licensing or infrastructure costs.

Alan Barr

Most of the software problems stem from human communication issues. Conway's law. It is a belabored topic but ever-present and essential.

Alan Barr

Anything related to server setup, repeatability, scalability: I wouldn't spend a lot of time on it unless I'm aware of non-functional requirements, depending on your environment. I think that in a large enough business with shifting business models it might be nice to have someone thinking through and telling me: "This concept doesn't make sense anymore". That's fanciful; I'm also bored to death with UI automation ;)

Ben Dowen

As our microservices architecture matures and I work with skilled developers, I find fewer straight code bugs.

Problems with configuration and deployment remain plentiful as do bigger picture issues.

Being able to explore, learn and exercise the full stack end to end, from concept to production, is a privilege I enjoy as a quality expert.

I test and automate very few UIs. So there is that at least.

Jesse Phillips

I love the repeatability of containers. Having the ability to stand up something for testing and each tester having their own. So many manual install fails.

I am definitely still very busy though.

Ben Dowen

I was contrasting system integration testing, which I consider End to End testing to be part of, with what I would call partial automation, or using tools and code to assist Exploratory Testing.

I admit I casually threw the term End to End in there without any discussion. Really it's a whole topic of its own.

I discussed End to End Testing over at The Club:

club.ministryoftesting.com/t/where...

And this is a summary of what I found out:

  • End to end testing is typically a system test that follows a user or data journey through the system
  • The scope of the test highly depends on the system under test
  • There will be a vast number of possible paths through the system
  • Judicious selection of as few critical paths as possible is important to avoid high maintenance cost
  • Where possible it is preferable to have multiple smaller tests that cover sub systems and system integration over more E2E tests
  • Context is king and no one size fits all definition of E2E testing covers all usages