I recently wrote an article for TechBeacon about how the core of automation “simply” comprises three parts: a stimulus, a response, and some number of checks (for some definition of “simply”). In that article, I purposely didn’t define “some number”. Different teams and organizations have different philosophies about the appropriate number of checks or assertions in a test script.

But zero is also a number, meaning we can have test scripts with no explicit checks or assertions.

Wait, what?!?! What good is a script without checks? Traditionally, automation scripts without checks or assertions aren’t very valuable. If we have a script that is supposed to place an e-commerce order but we never check whether the order succeeded, we’ve likely missed the point of having the script at all; depending on our intent and context, we probably need to check a few other things as well. Additionally, most of us have heard stories of someone needing to “get the automation to pass”, so they commented out the assertions. I guess that’s one way to do it.

There are, however, some valid cases where not specifying explicit checks or assertions is appropriate. One case is when implicit checks are baked into the lower levels of our automation stack. For example, if our stack provides a method to POST a request to an API, and that method implicitly checks that the POST returns an HTTP 200 (OK) status code, then an explicit check of that status code is redundant and therefore unnecessary. If there is no interesting payload data to check, this script might not need any explicit assertions at all.
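
As a minimal sketch of what such a stack-level helper might look like (the URL, payload, and helper name here are hypothetical, and I’m assuming Python’s requests library):

```python
import requests

def post_json(url, payload):
    """POST a JSON payload with the status check baked in.

    Hypothetical stack-level helper: callers never re-check the status
    code because raise_for_status() fails the script on any 4xx/5xx.
    """
    response = requests.post(url, json=payload, timeout=10)
    response.raise_for_status()  # the implicit check lives down here
    return response

# A test built on this helper needs no explicit status assertion:
def test_create_order():
    post_json("https://api.example.com/orders", {"sku": "ABC-123", "qty": 1})
```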

Another case is when the intent of the automation is not to produce a pass-fail, green-red result. Following from my definition of automation, i.e. the judicious application of technology to help humans do their jobs, automation scripts or programs that turn the metaphorical crank may not need explicit assertions. In some cases, it may be low-effort to create automation that performs the stimulus and receives the response, but high-effort or non-deterministic to perform explicit checks. In those cases, we should consider automating only the stimulus and response portions, leaving the checks to the humans, i.e. the testers, once the automated portion has completed its run. This lets us leverage the strengths of each actor: automation is good at repetitive, algorithmic actions; testers are good at creativity, non-linear thinking, and “noticing things”. How might we accomplish something like this? The automation can apply the stimulus and then store each response in a known location so that the testers can evaluate the responses at the end of the automation run.
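
Here’s one minimal sketch of that “store it for the humans” approach; the directory name, file-naming scheme, and helper are assumptions, not a prescription:

```python
import json
import time
from pathlib import Path

# Assumed drop location, agreed on with the testers ahead of time.
RESULTS_DIR = Path("automation-results")

def record_response(name, response_body):
    """Store a response where testers can evaluate it after the run.

    No pass/fail verdict is produced here; the automation only turns
    the crank and leaves the judgment to the humans.
    """
    RESULTS_DIR.mkdir(exist_ok=True)
    out_file = RESULTS_DIR / f"{name}-{int(time.time())}.json"
    out_file.write_text(json.dumps(response_body, indent=2))
    return out_file
```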

Related to the above, I was once on a project where we could tell whether certain portions of a distributed application were up and communicating just by driving a browser through several paths in the application; if a script made it to the final page in its path, the script reported a successful result. No explicit assertions were used because a script would fail whenever it couldn’t find the next button or link to click; all the checks were implicit in the behavior of each script.
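
A stripped-down sketch of that style of script might look like the following (hypothetical URLs and link text, using Selenium’s Python bindings):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# No assert statements anywhere: if a page fails to render the next link,
# find_element raises NoSuchElementException and the script fails right there.
driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/login")
    driver.find_element(By.LINK_TEXT, "Catalog").click()
    driver.find_element(By.LINK_TEXT, "Checkout").click()
    driver.find_element(By.LINK_TEXT, "Order Confirmation").click()
    print("Reached the final page; the path is up and communicating")
finally:
    driver.quit()
```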

To be clear, I’m fully aware that most test scripts provide limited or no value unless there is at least one explicit check or assertion. The point here is that we must think critically about how we’re using automation to assist with our testing; we can’t let “I don’t have anything to assert on” prevent us from using automation to provide value.


Like this? Catch me at an upcoming event!