
What Actually Is Testing?

One thing that’s often interesting is to define foundational terms within your discipline. It’s often even more interesting when you come across a discipline that seems to struggle with doing so. Is that the case for testing? Well, let’s talk about it.

This is one of those posts that I’m writing as I think through the ideas. Given that, I suspect this post may go on a few tangents. Thus I’ll show my end point now. Here’s my hypothesis of what I’m going to end up with by the end of this article:

Testing is the act of continually reassessing the plausibility of risks based on new information determined through experiments, using confirmation, falsification, and implausification as methods of investigation and discovery.

This article is my experiment in action to see if I do, in fact, end up there.

From Insider to Outsider

As we talk about this, it’s critical to consider this from the insider point of view as well as the outsider point of view. I talked about this a bit — actually more than just a bit! — in my post on the emic and etic in testing.

That idea of “talking past outsiders” is one that I feel is particularly endemic to the test industry as a whole and a lot of that seems to hinge on how people want to define testing.

From Intuition to Formalism

There’s an interesting spectrum if you try to shift people from what they might “intuitively” think testing is to a formalism about what it “actually” is.

In fact, it’s nowhere near that cut and dried because a lot of the intuition isn’t that far off in some cases. But there is a whole lot of nuance that intuition can leave out.

Beyond that concern, another area of importance to this is that defining testing in a clear and comprehensive manner is a crucial step towards formalizing it as a practice or discipline. In just about any arena that requires practitioners actually doing something, a well-defined concept serves as a foundation for developing methodologies, standards, and various practices that we would consider better rather than worse.

Engaging in this kind of definitional activity provides a common understanding and language that practitioners can use to communicate, collaborate, and continually improve their work. This helps communication not only internally among practitioners but also externally among those who are non-practitioners. Again, the emic/etic rears its head.

Another way to put this is that defining testing in a clear and comprehensive manner is a way of framing testing with operational specificity. Operational specificity refers to defining a concept or process in such a way that it can be consistently and effectively implemented in practice. It’s how you know you’re actually doing testing versus doing something else.

If you’ll indulge me, let me frame this from another aspect of my career, albeit one I haven’t been in for quite some time.

Consider quantum theory for just a second. Don’t worry! There won’t be a quiz. It’s generally true that there’s only one mathematical formalism for quantum theory. What that means is your average physicist has no problem with going ahead and using the theory. The crucial point is: physicists do this even though they often don’t agree about what the theory means.

If you work with enough physicists you’ll find that while they seem to agree, at least to the extent to be able to work with each other, they actually have radically different understandings of the meaning of quantum theory. This sometimes comes as quite a shock to them as they learn some of their collaborators and friends think very differently.

Well, if that’s the case, then how does it all work? It works mainly because physicists realize that how they think about the theory actually has remarkably little effect on the calculations they are routinely doing.

Some physicists are actually bothered by that but most find it not terribly concerning. That’s the insider view. But this ability to work while disagreeing about the fundamentals is not a consolation to the average layperson, the outsider.

This is because the average layperson tends to lack the mathematics to fall back on. They don’t have the formalism. With only the concepts and principles to go on — and thus often left only with intuition — it can be very disconcerting to discover that different physicists, in their different books or blog posts or conferences, offer very different versions of the basics of quantum theory. It can leave someone wondering which person is “right.” It can leave someone wondering if any of them really know what they’re talking about.

Fundamental Questions

Let’s stick with our science focus for a bit. In one way — and this is an arguable statement — the progress of science can be measured by sustained work that produces new answers to two key questions.

  • What is time?
  • What is space?

For the past hundred years or so we’ve known that matter is made up of atoms. We learned not too long after that these atoms in turn are composed of electrons, protons and neutrons.

And this has taught us an important lesson. The lesson was that human perception, amazing as it sometimes is, is too coarse to allow us to see the building blocks of nature directly. We need new tools to see some of the very smallest things in nature. At one point we came up with microscopes and those let us see the cells living matter is made up of. But to see the atoms that the cells were made up of, we needed better tools.

We needed to look on scales at least a thousand times smaller and we ended up being able to do that with electron microscopes. Using yet other tools, such as very large particle accelerators, we can see the nucleus of an atom. We’ve even seen the quarks that make up the protons and neutrons. (It’s interesting that to see smaller and smaller, we must build bigger and bigger.)

And what we learned here, as we journeyed into the quantum realm, is that not only is our perception flawed, but our intuition could be as well. In fact, we knew this was the case even without getting into atomic and subatomic layers of matter but the latter certainly drove the point home.

So a formalism was needed. And the path to that formalism leads to a set of pretty fundamental questions:

  • Are the electrons and the quarks the smallest possible things?
  • Or are they themselves made up of still smaller entities?
  • Will we always find smaller things?
  • Or is there a smallest possible entity?

Earlier I mentioned that the key developments come in when asking about space and time. And that’s actually what we just did with the above questions, though it may not seem like it. The above was focused on questions of matter (which operates in space and time) but that allows us to apply those questions to space. Space seems continuous, but we might wonder about that. That leads us to ask:

  • Is space infinitely divisible?
  • Or is there a smallest unit of space?
  • Is there a smallest distance?

We can do something very similar with time.

  • Is time infinitely divisible?
  • Or is there a smallest possible unit of time?
  • Is there a smallest duration?

What’s a key fundamental question that’s lurking here? It’s really this:

  • Is there a simplest thing that can happen?

Wait. What does “simplest” mean here? How about this:

  • Is there a smallest possible thing that can travel the smallest possible distance in the smallest possible time?

Asking questions like this was essentially addressing the idea of granularity in the physical world. Granularity was the unifying concept here between space, time and matter. I’ll spare you all the history but ultimately this led to Planck’s constant which told us that there was a fundamental limit to how finely we could divide energy. Mass and energy are interconnected — that’s what E=mc² tells us — and, so, if there’s a fundamental limit to how finely we can divide energy, it implies a similar limit for matter, which has mass.

But what about space and time? Well, consider two parts of our formalism that were developed independently.

  • Gravitational constant (G): General relativity describes the curvature of spacetime due to mass and energy. So, in this context, G is closely associated with the geometry or shape of space. It tells us how gravity, which is the warping of spacetime, affects the trajectories of objects.
  • Speed of light (c): The speed of light in a vacuum, c, is often referred to as the cosmic speed limit because it represents the maximum speed at which information or matter can travel through spacetime. It’s a fundamental constant related to the speed of time, meaning it sets a universal speed limit for cause and effect.

These constants, when combined and used with Planck’s constant, give us values called the Planck length and Planck time. The Planck length (around 1.6 × 10⁻³⁵ meters) and the Planck time (around 5.4 × 10⁻⁴⁴ seconds) are believed to be the smallest meaningful scales in spacetime. If you attempt to divide space or time into smaller intervals beyond these values, it’s generally suggested that it doesn’t make physical sense.
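Those two values aren’t arbitrary; they fall out of combining the three constants. Here’s a minimal Python sketch, purely my own illustration using the standard formulas, that derives them:

```python
import math

# Fundamental constants in SI units (standard CODATA values).
h_bar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light in a vacuum, m/s

# Planck length: sqrt(h_bar * G / c^3)
planck_length = math.sqrt(h_bar * G / c**3)

# Planck time: sqrt(h_bar * G / c^5), the time light takes to cross the Planck length.
planck_time = math.sqrt(h_bar * G / c**5)

print(f"Planck length ≈ {planck_length:.2e} m")  # ≈ 1.62e-35 m
print(f"Planck time   ≈ {planck_time:.2e} s")    # ≈ 5.39e-44 s
```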

So given what we just talked about, let’s ask the question again: “Is there a smallest possible thing that can travel the smallest possible distance in the smallest possible time?”

Well, we can answer part of that based on our current knowledge. There is presumably some “smallest thing” that could travel the Planck length in the Planck time. The Planck length represents the smallest meaningful distance and the Planck time is the shortest meaningful duration. If we consider the Planck length as the smallest meaningful distance within our current framework of physics, it implies that nothing we know of can be smaller than this distance.

I know this can seem far afield from defining testing. Bear with me just a little longer.

When we consider the “smallest possible distance” and “smallest possible time,” we enter the realm of what’s known as Heisenberg’s uncertainty principle. This principle states that you can never precisely know both the position and momentum of a particle simultaneously. The more accurately you try to measure one, the less accurately you can know the other. This inherent uncertainty is related to the granularity of space and time we just talked about.
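For reference, the usual quantitative form of that trade-off is:

Δx · Δp ≥ ħ / 2

where Δx is the uncertainty in position, Δp is the uncertainty in momentum, and ħ is the reduced Planck constant (Planck’s constant divided by 2π).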

So, to tie it all together, Planck’s constant, with its fixed value, implies a fundamental limit to the precision with which we can measure both time and distance. It’s as if there’s a smallest “quantum” of time and distance beyond which we can’t make meaningful measurements. I’ve always felt this interplay between Planck’s constant, the uncertainty principle, and the granularity of the physical world is an absolutely fascinating connection because they literally define the notion of what we perceive as “reality.” It’s like we have the smallest possible building blocks of the universe in terms of both energy and measurement precision.

And notice where we ended up there: with things we can measure, which means things we can test for, which means experiments we can perform.

That speaks to being able to verify or falsify.

More Fundamental Questions

Let’s consider again the idea of time. We can ask other questions:

  • Does it go on forever?
  • Was there a first moment?
  • Will there be a last moment?
  • If there was a first moment, then how was the universe created?
  • And what happened just a moment before that?

That last question is a primary one because notice how it relies on us asking about the “before” to something we may define as having no “before.” The question can be framed but is it meaningful?

We can ask other questions of space.

  • Does it go on and on forever?
  • If there’s an end to space, what’s just on the other side of it?

Again, note the last question. We’re asking for the “other side” of something we just said doesn’t really have an “other side.” Again, the question can be framed but is it meaningful?

And I bring all this up because here we start to get into the ability to determine if our conceptions are implausible, even if we can’t quite measure or test or experiment to verify or falsify.

Notice how all of this is getting into operational specifics about what something is. And those operational specifics suggest experiments: figure out how to measure the things we are experiencing. There’s a quantitative aspect to this. But also remember that our senses cannot reveal how exactly things are (or at least seem to be) and our intuition can absolutely lead us astray. So there’s a qualitative aspect as well.

As we know from our careers, or just by being consumers of basically anything, quality also has qualitative and quantitative aspects. Thus it has elements of the subjective as well as the objective.

This all hinges on that distinction around perception on the one hand and specification on the other.

I bring that up because there are very vocal test practitioners who suggest that testing has nothing to do with quality at all. Now, if you know the history of science, you know that’s entirely untrue. Even if you know the history of just software, you also know that’s untrue. But understanding why helps when you have an operational definition of testing.

Defining Testing

So let’s get into defining testing. Now, obviously, what I say here is my opinion. I don’t claim to speak for an industry. I don’t claim to be more right than someone else. I speak as someone who has an informed opinion based on a whole lot of experience. Which can mean as much or as little as you want.

I’m not going to cover all the possible ways that people define testing here. I’m going to focus on one particular definitional element and build on that. Sometimes we say testing is about risk. When framed as such, testing gets defined as an approach to identify risks and bring them to the notice of people who need to be aware of those risks.

And that’s true but, as I indicated above, testing is also about experimentation. I talked before about how testing can act as an experiment around project forces.

As such, testing is rooted in the scientific method. I know some folks claim that testing is more art than science. Or at least a mix of both. And that actually leads me to remind folks of when I wrote “Testing is the art of …”. There I did talk about a possible definition of testing already, in fact.

But let’s keep with the science theme here. I talked a whole lot about the pedigree of this idea in my history and science series of posts. But even if that didn’t win you over, consider that we’re often dealing with technology. And, as Mustafa Suleyman defined it in his excellent book The Coming Wave, technology is:

The application of scientific knowledge (in the broadest possible sense) to produce tools or practical outcomes.

Beyond working with technology, this rooting in the scientific method is crucial because testing is not just about identifying something but about manipulating something in order to see how, or whether, and to what extent, a given outcome results from the manipulation.

This involves forming a hypothesis (predicting the outcome of a test), conducting an experiment (executing the test), and then analyzing the results to confirm or refute the hypothesis.
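To make that shape concrete in software terms, here’s a deliberately tiny sketch. The function and the numbers are hypothetical, invented purely for illustration; what matters is the hypothesis, experiment, analysis structure:

```python
# A hypothetical system under test: reduce a cart total by a percentage.
def apply_discount(total, percent):
    return round(total * (1 - percent / 100), 2)

def test_ten_percent_discount():
    # Hypothesis: if we apply a 10% discount to a 50.00 total,
    # then the result will be 45.00.
    expected = 45.00

    # Experiment: execute the behavior under those conditions.
    actual = apply_discount(50.00, 10)

    # Analysis: the outcome either confirms the hypothesis or refutes it.
    # A refutation is new information about a risk, not just a "failure."
    assert actual == expected, f"Refuted: got {actual}, expected {expected}"
```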

So what about risks here?

Well, in terms of identifying risks, this scientific approach is very applicable. When you form a hypothesis, you’re essentially identifying a potential risk — “If we do X, then Y might happen.” As in, if we add this feature, we might break existing functionality. Or: if we add this feature, we might confuse users. Or: if we add this feature, we’ll make things easier for users.

Wait, is that last one a “risk”? Well, hold on to that thought, but this is why the notion of risk is not quite enough and why the focus needs to be more on the idea of experimenting.

So we formed our hypothesis. The experiment — the testing — then either validates this risk (if the outcome is as predicted) or refutes it (if the outcome is not as predicted).

So, in essence, testing is about both identifying risks and experimenting to validate or refute them. These two aspects are intertwined and both are crucial to the overall process of testing.

In the case of science, however, the “risk” may not be anything harmful, right? For example, Galileo tested balls rolling down an inclined plane. In the context of scientific experiments, the term “risk” doesn’t necessarily refer to something harmful or negative. Instead, it can be seen as an unknown outcome or a variable that could affect the results of the experiment. That applies to testing as well.

In Galileo’s case, the “risk” could be any factor that might affect the balls’ speed or trajectory, such as the angle of the incline, the smoothness of the surface, the weight of the balls, and so on. By conducting tests, he was able to identify these variables, understand their effects, and thereby increase his knowledge of the physical laws governing motion.

So, while in software testing “risks” often refer to potential issues or problems, in a broader scientific context, “risks” can simply mean unknowns or variables that need to be understood and accounted for.

And this takes us a bit into the idea of implausifying. “Implausifying,” as a term, can be used to express a state of skepticism or doubt about a particular claim or hypothesis, but without outright denying its possibility. Thus if a claim or hypothesis is deemed implausible, it doesn’t necessarily mean it’s false or should be dismissed outright. Instead, it indicates that more evidence or experimentation is needed to shift it towards plausibility.

In this context, the goal of the experiments would be to gather more data and insights that could either support the claim (making it more plausible) or contradict it (making it even more implausible). Throughout this process, it’s important to maintain an open mind and not rush to judgment until the evidence clearly points in one direction or the other. This approach aligns well with the principles of scientific inquiry and skepticism. And, as I hope you can see, it aligns extremely well with the idea of testing.

And, I might add, this does still align well with the concept of identifying risk. In risk identification, one of the key steps is to assess the likelihood or plausibility of a particular risk event occurring. If a risk is deemed implausible, it might not be prioritized for mitigation. However, if new information or results from further testing make that risk more plausible, then it would need to be reassessed and potentially addressed.
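One way to picture that continual reassessment, if you’ll forgive a toy model of my own rather than anyone’s formal method, is as a running plausibility score that each new experiment nudges one way or the other:

```python
def update_plausibility(prior, likelihood_if_risk, likelihood_if_no_risk):
    """Toy Bayesian update: revise how plausible a risk is after one observation."""
    numerator = likelihood_if_risk * prior
    evidence = numerator + likelihood_if_no_risk * (1 - prior)
    return numerator / evidence

# Start with a risk we currently judge fairly implausible.
plausibility = 0.10

# Suppose each experiment yields an observation far more likely
# if the risk is real (0.8) than if it is not (0.2).
for experiment in range(1, 4):
    plausibility = update_plausibility(plausibility, 0.8, 0.2)
    print(f"After experiment {experiment}: plausibility ≈ {plausibility:.2f}")
# Output climbs from about 0.31 to 0.64 to 0.88: the risk shifts
# from implausible toward plausible as evidence accumulates.
```

The numbers are arbitrary; the point is that each experiment shifts the risk along the implausible-to-plausible spectrum rather than settling the matter once and for all.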

So what is this leading us towards? It leads me toward this:

Testing: continually reassessing the plausibility of risks based on new information.

But when I broaden that out a bit, I find I frame testing like this:

Testing is the act of continually reassessing the plausibility of risks based on new information determined through experiments, using confirmation, falsification, and implausification as methods of investigation and discovery.

That isn’t all that testing is. But I think that statement is a crucial part of defining testing.


