

Stimulus, response, check: The core of test automation

Paul Grizzaffi, Principal Automation Architect, Magenic
 

In a 1970s commercial, a boy asked a wise owl, "How many licks does it take to get to the center of a Tootsie Pop?" The owl, who was obviously a tester, decided to find out for himself; in a humorous twist, on the third "lick" he crunched down on the pop, ate the candy, and concluded that the answer was three.

Like the Tootsie Pop, test automation has a core as well, and that core has three parts: stimulus, response, and some number of checks. Here's what you need to understand about each of these parts so that you can create stacks and adjacent stacks that add greater flexibility to your test automation.

Defined: Stimulus, response, and checks

At its simplest, a stimulus is an action that causes a reaction. For example, striking a drum causes a sound to be made; the strike is the stimulus.

In terms of testing and automation, stimuli come from many different sources, such as:

  • Clicking a button 
  • Calling an API 
  • Calling an operating system command

Each action causes something to happen.

A response is a reaction or result from applying a stimulus; it's the thing that a stimulus causes to happen. Following from the drum example above, the sound made by the drum is a reaction to the stimulus of striking that drum.

Based on our stimulus examples above, corresponding responses could include:

  • A page update or navigation caused by a button click
  • A response message returned by an API call
  • An external program returning a result due to calling an operating system command

Note that a single stimulus may cause multiple responses, and a specific response may be caused by more than one stimulus. Both points matter because it may be valuable or necessary to check multiple stimulus/response combinations.

Checks are what you use to determine whether you received an appropriate response to your stimulus. Again, from the drum example, checks for the result of striking a drum could include "Did you hear a sound?" and "Did you feel contact with the drum?" Note that this is also an example of multiple responses to the same stimulus.

For the previous automation and testing examples, checks could include:

  • Did you navigate to an appropriate page following a button click?
  • Did you receive appropriate values in your API call's response message?
  • Did the external program give an appropriate result from your operating system call?

When programmed into automation, these checks are usually implemented as assertions, as opposed to the question style used in the above bullets.
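
For instance, the question-style checks above might look something like this as assertions. This is a minimal JUnit-style sketch; pageTitle, apiResponse, and process are hypothetical stand-ins for values your automation captured earlier in the script:

Assert.assertEquals("Order confirmation", pageTitle);   // did the click navigate to an appropriate page?
Assert.assertEquals(200, apiResponse.statusCode());     // did the API call return an appropriate response?
Assert.assertEquals(0, process.exitValue());            // did the external program give an appropriate result?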

Implementation: Abstract your steps

At the core of the implementation, there are commonalities in how you generate the stimulus, receive the response, and evaluate the checks. The mechanics typically differ per technology, but the abstractions are similar; only the implementation details change.

If you think at a high enough level, you can abstract your automation steps into behaviors, such as "Add an item to cart" or "Perform checkout." Using "Add an item to cart" as an example and assuming your GUI is backed with a web service API, there are at least two different ways that you can accomplish adding an item to a cart.

Conceptually, you could write a test script that looks like the following:

cart.addAnItemToCart(item)
Assert.assertTrue(cart.contains(item))

The interesting part of the above pseudocode is the call to addAnItemToCart. This method can be implemented by interacting with the GUI, or it can be implemented by calling the appropriate API action(s).

Understanding this helps you realize that behaviors can be implemented through different actions, and each of those actions can have a different implementation. (For a more detailed explanation of behaviors and actions, see this article about the automation stack.)

Following the automation stack concept, you can have one stack based on an API raw tool and a second based on a browser-based raw tool. In doing so, you can have different automation approaches for the same behavior.
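
To make that concrete, here is a minimal sketch of one way to structure it, assuming Java with Selenium WebDriver as the browser-based raw tool and the JDK's HttpClient as the API raw tool. The Cart interface, locators, endpoint URLs, and JSON payload are illustrative, not a real application's details:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One behavior, two implementations; a test script sees only the Cart interface.
interface Cart {
    void addAnItemToCart(String itemId);
    boolean contains(String itemId);
}

// Stack 1: the behavior implemented through the browser-based raw tool (Selenium WebDriver).
class GuiCart implements Cart {
    private final WebDriver driver;

    GuiCart(WebDriver driver) { this.driver = driver; }

    @Override
    public void addAnItemToCart(String itemId) {
        // Locator is illustrative; a real page object would supply it.
        driver.findElement(By.id("add-to-cart-" + itemId)).click();
    }

    @Override
    public boolean contains(String itemId) {
        driver.get("https://shop.example.com/cart");
        return !driver.findElements(By.id("cart-item-" + itemId)).isEmpty();
    }
}

// Stack 2: the same behavior implemented through the API raw tool (the JDK's HttpClient).
class ApiCart implements Cart {
    private final HttpClient client = HttpClient.newHttpClient();

    @Override
    public void addAnItemToCart(String itemId) {
        send(HttpRequest.newBuilder(URI.create("https://shop.example.com/api/cart/items"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"itemId\":\"" + itemId + "\"}"))
                .build());
    }

    @Override
    public boolean contains(String itemId) {
        HttpResponse<String> response =
                send(HttpRequest.newBuilder(URI.create("https://shop.example.com/api/cart")).GET().build());
        return response.body().contains("\"" + itemId + "\"");  // crude check; a JSON parser would be better
    }

    private HttpResponse<String> send(HttpRequest request) {
        try {
            return client.send(request, HttpResponse.BodyHandlers.ofString());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

With a structure like this, the earlier test script stays unchanged; which stack gets exercised depends only on which Cart implementation you construct.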

The need for and value of this kind of semi-repeated implementation of a behavior are absolutely context-dependent; some organizations might find great value in it, while others may find it redundant. The concept does, however, lead to the notion of adjacent stacks.

How adjacent stacks can help

Again, building on the automation stack concept, adjacent stacks are exactly that: automation stacks that differ in their specific implementations but have "mostly the same" actions and behaviors, and that can be exercised in a single test script. Here, "mostly the same" is context-dependent as well, but it generally means that if one stack has a behavior or action, the adjacent stack also has that behavior or action.

Why are adjacent stacks valuable? Some organizations may want to exercise the system at different levels for the same function or feature. Perhaps, since API tests execute faster than GUI tests, the API-level test suite is testing deeply for the message, data, and business logic aspects. This allows for fewer of the slower GUI tests, but GUI tests can still provide value even if duplicating a behavior that's previously been tested by an API test.

Duplication is not inherently bad; it's only bad if you are duplicating without a specific value proposition for that duplication. In fact, it could be argued that if the duplication provides value because it gives you additional information, then it isn't really duplication at all.

The real value from adjacent stacks, however, is in cross-technology test scripts—scripts that can use more than one automation technology in the same test script. For example, perhaps you want to test that an update to a profile is correctly saved in a database. This could be automated by driving the GUI to log in and make the update, followed by an API or SQL call to check that the data was stored as expected.
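
A sketch of such a cross-technology script follows, assuming Selenium for the GUI stimulus and JDBC for the database check; the URLs, locators, credentials, and schema are all illustrative assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ProfileUpdateTest {
    @Test
    public void profileUpdateIsPersisted() throws Exception {
        // Stimulus: drive the GUI to log in and change the display name.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://shop.example.com/login");
            driver.findElement(By.id("username")).sendKeys("pat@example.com");
            driver.findElement(By.id("password")).sendKeys("not-a-real-password");
            driver.findElement(By.id("login")).click();
            driver.get("https://shop.example.com/profile");
            driver.findElement(By.id("displayName")).clear();
            driver.findElement(By.id("displayName")).sendKeys("Pat Q. Tester");
            driver.findElement(By.id("save")).click();
        } finally {
            driver.quit();
        }

        // Check: query the database directly for the stored value.
        try (Connection db = DriverManager.getConnection(
                 "jdbc:postgresql://db.example.com/shop", "test_user", "test_password");
             PreparedStatement query = db.prepareStatement(
                 "SELECT display_name FROM profiles WHERE email = ?")) {
            query.setString(1, "pat@example.com");
            try (ResultSet row = query.executeQuery()) {
                Assert.assertTrue("profile row should exist", row.next());
                Assert.assertEquals("Pat Q. Tester", row.getString("display_name"));
            }
        }
    }
}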

Even if cross-technology scripts are not currently useful to you, having all of your scripts use the same logging and execution frameworks can reduce the effort needed to debug scripts and store automation results.

Tying this back to the automation core: these automation stacks encapsulate the implementation specifics while providing the same or similar interfaces across stacks for similar actions. That consistency lets you create general approaches for designing test scripts in which the details of the automation implementation no longer leak into the scripts, which reduces maintenance and, in many cases, increases readability and supportability.

As with most implementations, your mileage may vary depending on your specific needs, implementations, and goals.

How many checks should a script have?

An automation's core contains some number of checks, as stated above, but how many is "some number"?

Some teams follow an automation philosophy of one (explicit) check/assert per script. In concept, this is a great idea. Keeping automated testing scripts small and focused can help an individual script run quickly and reduce the likelihood of "lots" of failures due to the same issue.

This failure reduction is largely due to being able to check Step B of an application without having to pass through Step A first. This means that issues in Step A will cause failures in test scripts for Step A, but are less likely to cause failures in test scripts for Step B because you skip Step A.

This approach works, however, only if you can start testing Step B directly. If the application requires that you perform Step A before starting Step B, you can introduce appreciable repetition across your test scripts.

For example:

  • Test script 1 – perform Step A, check result of A
  • Test script 2 – perform Step A, perform Step B, check result of B
  • Test script 3 – perform Step A, perform Step B, perform Step C, check result of C
  • Etc.
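
In code, that repetition might look like the following sketch, where the performStepX and resultOfStepX helpers are hypothetical methods that drive and query the application; each script still carries exactly one check:

@Test public void stepAProducesExpectedResult() {
    performStepA();
    Assert.assertEquals("expected A result", resultOfStepA());  // the script's single check
}

@Test public void stepBProducesExpectedResult() {
    performStepA();                                             // prerequisite, repeated but unchecked
    performStepB();
    Assert.assertEquals("expected B result", resultOfStepB());
}

@Test public void stepCProducesExpectedResult() {
    performStepA();                                             // prerequisites repeated again
    performStepB();
    performStepC();
    Assert.assertEquals("expected C result", resultOfStepC());
}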

When following this one-check approach for a messaging interface such as a REST endpoint or a telecom interface, the amount of time that each step takes may be sufficiently small that the pass-through time for these prerequisite steps is insignificant.

Sadly, this is not always the case; most of us don't work exclusively at the protocol messaging level of telecom or REST, which, by its very nature, allows endpoints to be poked at will.

When to use multiple checks in one script

Usually, these prerequisite-step scenarios occur when automating via a GUI, and interacting via a GUI is slow. In these cases, having multiple checks or asserts in a single test script may be the most appropriate implementation.

Typically, the tradeoff here is a shorter duration for an automation run versus the risk that a problem in a particular test step prevents testing of later steps in a specific script. Certainly, some of the automation run duration can be shortened by parallelizing automation runs.

Apart from GUI tests, there are often instances in API and messaging tests for which multiple assertions in a single test script are appropriate. Take, for instance, the case where you want to check many fields in an API response message. You could write a test script that checks Field 1, then write a script that checks Field 2, and so on. Since you are testing at the message level, each of these scripts is typically fast to execute, most often sub-second.

If, however, you want to check 60 fields, then at roughly half a second per script you could be adding approximately 30 seconds to testing that response message. You will likely also need to check different configurations of that response, and other response messages as well; that could add multiple minutes to each automation execution's duration. In cases such as this, it can make sense to have multiple asserts or checks per test script.
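
For example, grouping those field checks into one script might look like the sketch below, assuming the response body has already been parsed into a Map named fields; the field names and values are illustrative:

// One API call, one script, many checks against the parsed response.
Assert.assertEquals("USD", fields.get("currency"));
Assert.assertEquals("19.99", fields.get("unitPrice"));
Assert.assertEquals("2", fields.get("quantity"));
// ...and so on for the remaining fields of this response message.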

Checks and determinism

Conventional wisdom says that your checks must be deterministic, i.e., you can always programmatically determine whether an assertion's condition is true or false. After all, if you can't determine whether or not an assertion fires, you don't know if a test script should report a pass or a fail.

If you can't reliably determine pass or fail, you lose trust in your automation and the data it provides to you. Therefore, only deterministic checks are useful, right? Not so fast.

Most of the time, deterministic checks are required to produce trustworthy and valuable results; this is true for traditional automated test scripts, in particular. When you go beyond traditional automation into nontraditional automation or automation assist, non-deterministic assertions can still provide value.

With this approach, automation is not about passing and failing; it's about computers helping testers do their jobs by doing things at which computers excel, namely repetitive operations and data comparisons.

When intentionally allowing non-deterministic checks, you understand that your automation is not living in a pass/fail world but in a world where some unexpected things happen that might indicate an issue, and a human needs to evaluate those results to make that determination.
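
As a sketch of what that might look like, the snippet below records unexpected observations instead of asserting; the thresholds, variable names, and reporting destination are illustrative assumptions:

// Instead of pass/fail assertions, collect anything surprising for a human to review.
List<String> observations = new ArrayList<>();

if (responseTimeMillis > 2000) {
    observations.add("Response took " + responseTimeMillis + " ms; slower than usual.");
}
if (!knownCategories.contains(category)) {
    observations.add("Unrecognized category in response: " + category);
}

// Nothing fires as a failure; the observations go into the run's report,
// and a tester decides whether any of them indicate a real issue.
observations.forEach(System.out::println);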

Want to know more? Come to my talk, "Stacking the Automation Deck," on April 28 at the STAREAST Virtual+ conference. The online event runs April 26-30. For my full schedule of appearances, visit my upcoming events page. 
