I’m not a physicist; in fact, I struggled with physics in college, mostly because of the math. One thing that did stick with me, however, was the concept of potential energy, though, to be fair, I think I learned that concept when I was much younger. In short, potential energy is energy that is available to an entity but whose value has not yet been realized. You can read a much more cogent explanation of potential energy at this location.

Why am I talking about physics and potential energy? Because, much like potential energy, testing in and of itself provides no realized value. None. Nada. Zippo. Now, before you delete all traces of me from your browsing history, read just a bit further…

What I’m saying is that testing, and by extension test automation, only provides information; testing doesn’t fix problems. It tells us what appears to be working and what appears not to be working, and it gives decision-makers the information they need to make decisions. Without action on that information, the information has little if any value. Hence the analogy to potential energy.

What is done with the information provided by testing? Well, that depends. Responsible leaders digest the information and collaborate with the team to understand the risk related to the current version of the product or application.

What do I mean by risk? What do YOU mean by risk? Cavalierness aside, risk can loosely be thought of as jeopardy to the business value your product provides to your company. Risk is context-specific: an issue in an application may be high risk to one organization, while the same issue in a different application may be low risk to another.

Risk comes in many forms, but the ones we tend to see in software testing are:

  • the risk that a client/customer/user encounters a known issue that causes a loss of revenue or an increase in cost to the company;
  • the risk of the unknown from testing that we identified but didn’t or couldn’t complete, i.e., the “known unknowns”;
  • the risk of the unknown from testing that we didn’t think to do, i.e., the “unknown unknowns”.

What does (or should) “product leadership” care about? That team wants (or should want) to know what might go wrong with an application release, how likely it might be, how impactful it might be, and how quickly the team can mitigate an issue. They also want to know what areas of the product have not been sufficiently evaluated and what could happen if there were to be a failure in an under-evaluated area. Again, what is the risk to the company and the organization?

Testing without action on its findings is wasted effort and wasted energy. Testing that provides appropriate but unused information is akin to potential energy: it’s unrealized value. Testing, and by extension automation, delivers no value if no one acts on the information it provides.

What if your testing isn’t producing the kind of information your decision-makers need? If they can’t act on what your testing tells them, perhaps it’s time to rethink your testing approach. If your test automation is not producing actionable results, perhaps you should change your automation approach or your automation logging paradigm. Remember, software changes over time, and your testing approaches need to evolve with it to stay relevant, appropriate, and valuable.
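
As a concrete illustration of what “actionable” might look like, here is a minimal sketch, assuming a hypothetical automation setup that can emit results tagged with the feature area each test covers. None of these names come from any particular framework; the field names, the risk areas, and the summarize function are all illustrative. The idea is simply to report risk by area, including areas that were never exercised, rather than a bare pass/fail count.

```python
# A minimal sketch (hypothetical, not a specific framework's API) of making
# automation output more actionable: tag each result with the feature area it
# covers, then summarize failures and coverage gaps per area so decision-makers
# see risk rather than a raw pass/fail count.
from collections import defaultdict

# Example raw results an automation run might emit; the field names
# ("test", "area", "status") are assumptions for illustration.
raw_results = [
    {"test": "test_checkout_happy_path", "area": "checkout", "status": "passed"},
    {"test": "test_checkout_expired_card", "area": "checkout", "status": "failed"},
    {"test": "test_profile_update", "area": "account", "status": "passed"},
]

# Areas the team agreed matter to the business; anything listed here but not
# exercised by any test shows up as a visible "known unknown" instead of silence.
risk_areas = ["checkout", "account", "reporting"]

def summarize(results, areas):
    """Group results by feature area and flag areas that were never exercised."""
    by_area = defaultdict(lambda: {"passed": 0, "failed": 0})
    for result in results:
        by_area[result["area"]][result["status"]] += 1

    lines = []
    for area in areas:
        counts = by_area.get(area)
        if counts is None:
            lines.append(f"{area}: NOT EXERCISED (known unknown)")
        elif counts["failed"]:
            lines.append(f"{area}: {counts['failed']} failing, {counts['passed']} passing -- needs a decision")
        else:
            lines.append(f"{area}: {counts['passed']} passing")
    return "\n".join(lines)

if __name__ == "__main__":
    print(summarize(raw_results, risk_areas))
```

The exact code matters far less than the shape of the report: a decision-maker can act on “checkout has a failing test and reporting was never exercised” in a way they cannot act on “142 of 143 tests passed.”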

Like this? Catch me at an upcoming event!