Why meaningful testing matters

Marty Hrášek
Dr.Max IT Development Blog
5 min read · Dec 29, 2021


Recently I was asked by my colleague Markéta Čonka to give a talk at a [pro:]TEST! community meetup about my view on testing within the software delivery chain. I'd like to recap the important points of that talk in the following article.

When we look at the term “meaningful testing”, it sounds like a weird abstraction spreading across the whole software delivery chain, from requirements to deployment. So what does the word meaningful mean in this context?

To cut a long story short, we should focus only on the testing which brings value to the final product and its end users. There are several approaches to this, so I'll try to point out some practices that software development teams should follow naturally to bring value to the product and the end users through meaningful testing.

The testing pyramid

I think all of you know the testing pyramid; there is a nice explanation by Ham Vocke on Martin Fowler's blog. If so, let me point out one sentence from the prologue of that article.

“Although the concept of the Test Pyramid has been around for a while, teams still struggle to put it into practice properly.”

So why are teams still struggling with the practical implementation of a QA strategy? The answer seems easy: you can't test everything, and that is exactly where meaningfulness comes in. Isn't it?

When you ask Google for “testing pyramid” pictures, you will probably see something like this.

Google search results for the “testing pyramid” term

It's pretty clear that we can look at this topic from different levels of granularity, but in the end this is the moment when you start turning the abstract idea of a QA strategy into your own particular implementation, and some specific decisions inevitably take place. In my eyes we can split the whole QA strategy into three main blocks, as described in the following drawing.

Abstract Testing Pyramid

Let me describe how I think you should implement “meaningful testing” in each of the categories described above. I'll try to mention all the practices which together should, in my eyes, lead to a transparent and meaningful QA strategy implementation, from the bottom to the top.

Code, stylistic and programmatic testing

This category is the base of the pyramid, where no compromises should take place. As having coding standards and naming conventions is a well-known practice, we should start building our pyramid from here.

We should always try to cover testing of naming conventions, coding style, and code smells as early in the process as possible, followed by unit test execution and other isolated testing approaches which can be executed on the developer's machine, ideally before the source code is added to the version control repository.
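As a minimal sketch, assuming a TypeScript project tested with Jest (the applyDiscount function here is hypothetical), this is the kind of fast, isolated unit test that belongs at this level and can run on a developer machine before every commit:

```typescript
import { describe, expect, it } from '@jest/globals';

// Hypothetical pure function: cheap to test in isolation, no I/O involved
function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError('percent must be between 0 and 100');
  }
  return price * (1 - percent / 100);
}

describe('applyDiscount', () => {
  it('reduces the price by the given percentage', () => {
    expect(applyDiscount(200, 25)).toBe(150);
  });

  it('rejects percentages outside 0-100', () => {
    expect(() => applyDiscount(100, 120)).toThrow(RangeError);
  });
});
```

Wired into a pre-commit hook (for example via Husky or lint-staged, together with a linter such as ESLint), checks like this catch problems before the code ever reaches the repository.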

But let me repeat it again: it must be reasonable and meaningful in your implementation context. There is no reason to try to reach 100% coverage or the strictest rules possible if it doesn't make any sense for you, i.e. if it is not bringing any value.
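Most tools let you encode such a pragmatic decision directly. Jest, for instance, can enforce an agreed coverage floor instead of an absolute one; a sketch of such a jest.config.js, where the numbers are purely illustrative, not a recommendation:

```javascript
// jest.config.js — fail the build only when coverage drops below the agreed floor
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      branches: 70,
      functions: 75,
      lines: 80,
      statements: 80,
    },
  },
};
```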

I have seen many implementations where, in the end, those tests were blocking delivery instead of helping it run smoothly. I see this kind of testing as the fundamental prerequisite for the next stages of testing.

Another discipline which I consider part of this category is the code review. Code reviews should be treated as part of the daily software testing culture: a chance to take a break from your own tasks and focus, look around at what is going on, and assist your teammates by consulting with them and helping them with their challenges.

Doing it on a daily basis, as a routine, should bring a lot of benefits and lead to cleaner, more maintainable, and more understandable code. But reality shows us that without the strong basic code testing mentioned above, we spend most of our code review time correcting violations of already defined rules. Again, we are back at the question of why meaningful comes into the game.

Integration testing

As we step higher up the pyramid, we move from isolated tests to more complex ones. Integration testing, in my eyes, has always been grounded in proper architecture and development planning. If prerequisites such as clear interfaces, contracts, master data models, and APIs are implemented properly, there is no obstacle to defining a strong integration testing layer with a lot of mocking and simulation.
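As a minimal sketch of what that looks like in practice, assuming a TypeScript codebase tested with Jest (the PricingClient contract and totalPrice function are hypothetical), a clear interface makes the dependency trivial to mock:

```typescript
import { describe, expect, it } from '@jest/globals';

// Hypothetical contract for a downstream pricing service
interface PricingClient {
  getPrice(sku: string): Promise<number>;
}

// The code under test depends only on the contract, never on the real service
async function totalPrice(client: PricingClient, skus: string[]): Promise<number> {
  const prices = await Promise.all(skus.map((sku) => client.getPrice(sku)));
  return prices.reduce((sum, price) => sum + price, 0);
}

describe('totalPrice', () => {
  it('sums the prices returned by the pricing service', async () => {
    // A fake that honours the contract — no network, no test environment needed
    const fakeClient: PricingClient = {
      getPrice: async (sku) => (sku === 'A' ? 10 : 5),
    };
    await expect(totalPrice(fakeClient, ['A', 'B'])).resolves.toBe(15);
  });
});
```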

These days, a lot of tools are on offer which make building your custom integration layer stack much easier than in the past. Let me point out the evolution of Swagger and Postman, or promising cloud products such as Azure API Management. Last but not least, I'd like to mention well-known libraries and tools like Faker, etc.
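To give one hedged example, assuming a recent version of the @faker-js/faker package and a hypothetical Customer shape, generating realistic test data for such an integration layer takes only a few lines:

```typescript
import { faker } from '@faker-js/faker';

// Hypothetical shape of the record our integration tests exchange
interface Customer {
  id: string;
  name: string;
  email: string;
}

// Builds a realistic, randomized customer; overrides pin down the fields a test cares about
function makeCustomer(overrides: Partial<Customer> = {}): Customer {
  return {
    id: faker.string.uuid(),
    name: faker.person.fullName(),
    email: faker.internet.email(),
    ...overrides,
  };
}

const customer = makeCustomer({ email: 'fixed@example.com' });
```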

Security vulnerability and dependency scans should take place at this stage as well; it doesn't even matter whether this is done as a service from the VCS provider, managed by an external SaaS service, or handled by an open source tool such as OWASP Dependency-Track.
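If your stack happens to be Node-based, even the package manager's built-in scanner can act as a first, cheap gate in the pipeline; for example:

```
# Fail the CI step when high-severity (or worse) vulnerabilities are found
npm audit --audit-level=high
```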

As always, let me point out the main message: you don't have to cover 100% just because some guy wrote so on medium.com or elsewhere; you should do it only if it brings you some value and has a reasonable meaning for your project, i.e. your company.

E2E testing

The hardest discipline, always dependent on many buts, or maybe butts? ;-) It is no piece of cake, but if implemented correctly it can help the whole chain deliver faster and react to potential production incidents up front.
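As a minimal sketch of the style, assuming Playwright as the E2E driver (the URL, roles, and flow are hypothetical placeholders), a single meaningful happy-path journey can stand in for dozens of low-value click-through tests:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical happy path: one meaningful journey through the shop
test('customer can reach the checkout page', async ({ page }) => {
  await page.goto('https://shop.example.com');
  await page.getByRole('link', { name: 'Cart' }).click();
  await page.getByRole('button', { name: 'Checkout' }).click();
  await expect(page).toHaveURL(/\/checkout/);
});
```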

Load and performance tests fit this category perfectly and can close the whole chain by answering all the operational questions that need to be resolved before a production release takes place.
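A hedged sketch of such a load test, using k6 (whose scripts are plain JavaScript; the endpoint and numbers are hypothetical):

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

// 20 virtual users hit the endpoint continuously for one minute
export const options = { vus: 20, duration: '1m' };

export default function () {
  const res = http.get('https://shop.example.com/api/products');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```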

Easy implementation of an E2E testing layer into your project or company workflow is strictly dependent on the previous points, just as the roof of a pyramid depends on its base.

Here you can forget about test isolation, and implementing only the necessary, meaningful tests is heavily suggested; otherwise you'll end up in an infinite loop of maintaining something that is not usable for your product, i.e. its end users.

Expect true to be “truthy”

As a conclusion, I'd like to explain the meaning of this chapter's title: “Expect true to be truthy”. I think all of us are doing what we can to have as strong a QA process as possible, but we are still more focused on abstract best practices than on meaningful implementation, and that's the reason why we keep wasting time on a lot of obstacles which, in the end, force us to do things that are not bringing any value at all.
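The title alludes to the kind of assertion that can never fail and therefore verifies nothing; in Jest it would literally read:

```typescript
import { expect, test } from '@jest/globals';

// This test always passes, so it tells us nothing about the product
test('expect true to be truthy', () => {
  expect(true).toBeTruthy();
});
```

Every test in our pyramid should be the opposite of this one: able to fail, and worth something when it does.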

We are still expecting that following common patterns will help us build better software. But that's not true. We shouldn't expect, we should implement; that's the difference. We should stop expecting that what works in one implementation will work in another. We should prove it by implementation, and we should always be able to verify that an extension of our current solution will also work in all the cases where it brings some value to the product itself and the end users.
