How to sell excellent software testing?

Ingo Philipp
22 min read · Mar 31, 2022

8 principles for selling the value of excellent testing.

This article shares 8 principles for selling the value of testing to make people perceive excellent testing as what it is: a value center, not a cost center.

Even though software development depends heavily on software testing, hardly anyone in management understands much about it.

Testing is often either willfully ignored, treated as something that could be replaced by machines, or seen as the number one bottleneck.

On top of that, excellence in testing is often confused with excellence in automation. What many managers picture as testing bears little resemblance to what testing actually is.

This lack of understanding and the resulting misconceptions make it hard to sell the value of excellent software testing.

Before you start selling your testing excellence, you should know what it is. Excellent testing is hard to describe.

You only find out what excellent testing is when you see it.

My article on excellent software testing outlines what I have seen. It boils the testing excellence of a tester called Alice down to 18 characteristics.

Here’s a teeny-weeny summary of her testing excellence.

Alice was our yardstick to assess software quality. She was our microscope for the actual problems in our software.

She was our telescope for the potential problems in our software. She was our wake-up call for our unawareness about numerous things.

She was our stop sign for our overfocus on even more things. And she was our alarm device for all the problems in our thinking.

This list could go on and on. It’s not complete. You get the point.

Alice was, and most likely still is, a cocktail of testing excellence. Her testing was fast, inexpensive, credible, and accountable. It was excellent.

This article continues the story of Alice.

It outlines 8 principles that aim to help you to sell testing in a strategic way. These principles will help you to change people’s perception of testing in such a way that testing is no longer seen as a cost center but as a value center.

Motivation

Working with Alice has been a double-edged experience.

One, it has been a humbling experience. I have learned that I am not just short of being an excellent tester but a world away from it.

Two, it has been a sobering experience. I have learned that testers doing excellent testing aren’t necessarily excellent in selling testing.

Selling testing means communicating the value of testing in a strategic way so that testing is no longer perceived as a cost center but as a value center by the people who matter.

Alice was brilliant in what she did. She quickly got promoted. She became the first test lead in her company. In this role, she was responsible for mentoring and coaching more than 20 testers worldwide.

Alice was used to doing testing, not talking about testing.

She was used to showing the value of her work to people inside her home territory (e.g., UI/UX experts, developers, product owners) by doing it.

A week after her promotion, she was asked to communicate the value of testing to people outside her home territory. People from the operational and strategic levels of Alice’s organization were invited to this session.

This included people from c-level management (e.g., CPO, CTO, CXO) and various directors and (senior) vice presidents.

Alice understood that testing isn’t something simple but rather something complex because of its multifaceted nature. She knew that she had to sell something complex. Alice lived and breathed testing, but she didn’t realize that her audience didn’t. With a glimmer of sarcasm, you could say she decided to make it even more complicated by not talking in the language of her audience.

Alice fanned the flames.

She talked about functional testing, security testing, usability testing, load testing, stress testing, and performance testing.

She philosophized about the relation between testing and checking. She deep-dived into exploratory testing and session-based test management.

She spoke about unit testing, integration testing, system testing, and user acceptance testing. She gave a lecture on oracles, heuristics, risk, coverage, metrics, and software quality.

She highlighted the differences between black-box testing, white-box testing, and gray-box testing. She outlined regression testing and progression testing.

She touched upon test-driven development and behavior-driven development and their relation to software testing.

She discussed the negative consequences of false positives and false negatives and pondered on ensemble testing, tour-based testing, crowd-testing, context-driven testing, and testing in production in the context of DevOps and Agile.

She delivered all that in about 60 minutes. It was a firework of testing.

The good news is that Alice brilliantly demonstrated the complex nature of software testing. She showed that software testing is not something just anyone can do on the fly. Her performance made clear that professional testing requires special skills.

However, this wasn’t what the audience was looking for. She sleepwalked into leaving the audience in the dark about the value of testing. She failed to read the room. She failed to tailor her message to the audience.

She talked about how testing is done, not why testing needs to be done. She overfocused on the practice of testing, not on the value of testing.

In her own overdramatic words: “The pitch failed.”

To put it mildly, Alice felt miserable after this session. However, she quickly started to take action. She drummed up her fellow testers and organized brainstorming sessions to work out a set of basic principles for selling testing.

These principles became the general beliefs that guided our selling behavior. They shaped the way we communicated the value of testing in Alice’s organization.

Principle of Selling

The common perception of selling and salespeople is all too often linked with used cars and timeshare apartments: Hit them hard and fast, and get out of town with your commission without a long-term thought for the customer.

Therefore, the buying process is often one of mistrust and skepticism.

In our brainstorming sessions, we have learned that this distorted perception of selling made some testers feel icky, cheesy, or sometimes even sleazy when selling appeared on the agenda.

Alice’s fellow testers often started painting the stereotypical picture of an egocentric, dodgy, dishonest, and money-grubbing person when the term selling was vocalized in our sessions.

Their impression was that selling means convincing people in a superficial and pushy way to do something that isn’t worth doing.

That’s not sales, that’s bad sales.

Selling isn’t telling. It means exchanging values.

A sales transaction is an exchange of values between a buyer and a seller. The seller gives something of value (e.g., product, service) to the buyer, and the buyer, in return, gives something of value (e.g., money, data) to the seller.

For example, the way the seller can create value for the buyer is by providing a solution to the buyer’s problems. We consider a problem as a gap between what is perceived and what is desired.

So, there’s a problem when there’s something blocking the buyer from getting closer to their desired states (e.g., needs, goals, objectives, wants, desires).

This principle reminded us of the following:

  1. We need to know our buyers. We need to know to whom we are selling.
  2. We need to understand the desired states of our buyers.
  3. We need to know the value we create through testing.
  4. We need to know how to tie this value to the buyer’s desired states.
  5. We need to know how and when to articulate this value exchange.

This principle is a foundational one.

It guided our discussions and helped us to understand that selling means demonstrating the way we create value for others through testing.

Principle of Targeting

We decided to put our buyers, our targets, into two categories: People inside and people outside software development.

Inside software development, we had three groups. These groups reflected different levels of decision-making power in Alice’s organization.

The tactical level included roles in our natural environment such as software developers, product owners, support engineers, and UI/UX experts.

These were our primary targets.

The operational level included roles in middle management such as vice presidents and directors of the tactical disciplines (e.g., engineering).

These were our secondary targets.

The strategic level included roles in upper management (e.g., senior vice presidents) and the c-level management (e.g., CPO, CTO, CXO).

These were our tertiary targets.

We finally lumped together all the people outside the software development regime. This included roles in customer success, marketing, sales, training, community and partner management, legal, finance. You name it.

These people were our optional targets.

The term management referred to our secondary and tertiary targets. These people were the “money people” in Alice’s organization. They controlled the budget of the organization, including the budget spent on software testing.

You might now wonder why the managers, with their high decision-making power, weren’t our primary targets. The reason is twofold.

One, these people were hard to reach. Two, their decisions about software testing were highly influenced by the people on the tactical level.

Our approach was to primarily, not exclusively, sell the value of testing through the people in our natural environment.

In sales jargon, our goal was to grow the people on the tactical level to our sales champions who sell the value of testing on our behalf to the operational and strategic levels. This was important, especially when we weren’t there.

We first grouped our targets according to their decision-making power. Then we developed user personas for our targets to better understand their habits, objectives, needs, and goals. Then we prioritized our targets since we cannot please all our targets all the time.

We then decided what type of content (e.g., factsheets, whitepapers, success stories, infographics) we needed to create to attract each target persona.

Then we defined which mediums (e.g., webinars, blogs, podcasts, workshops, meetups) to use to communicate the content to the targets.

Then we defined which channels to utilize (e.g., Slack, Confluence, Microsoft Teams, email) to distribute the content to the targets.

Finally, we decided how often to reach out to our targets.

In a sense, we conducted internal marketing campaigns to communicate the value of testing. In doing so, we defined metrics such as content engagements (e.g., likes, comments, shares), impressions, and audience growth to later be able to assess the success of our campaigns.

This was our basic scheme for selling testing. The challenge was not so much to create this content but to ensure that it was used and consumed consistently throughout the entire organization.

This leads us to our principle of consistency.

Principle of Consistency

We are stronger together than we are alone. Alice understood this very well. She started to unite her power with the power of her fellow testers. She knew that the best teams not only have chemistry but also consistency.

Alice wasn’t alone. Many of her fellow testers struggled to sell the value of testing well. Some testers were simply better at hiding this problem than others. So, Alice quickly turned the brainstorming sessions into a regular series of meetups. Our so-called Action Lab was born.

You can think of it as a community of practice.

Our Action Lab was a group of people (e.g., testers, developers) who came together on a regular basis (e.g., bi-weekly) to share their passion for testing by exchanging ideas, sharing experiences, and spreading knowledge.

Everyone was welcome but no one was safe to just lean back and relax. It was a working group. The focus was on doing, not on talking.

The hands-on character of these sessions was key to keeping the participants engaged. The individual sessions had no fixed duration but usually lasted no more than two hours. Each session always focused on one specific topic that was neither too big nor too small.

The topics were phrased as questions, not as statements because questions require answers, statements do not. In our first session, we tackled the question “What characterizes testing excellence?”. From this, we derived the 18 characteristics of excellent testing. In another session, we discussed “How to structure note-taking during exploratory testing?”.

Ergo, the topics were not only focused on the practice of testing but also on the value of testing. The overall goal was twofold. One, to improve our daily testing practice. Two, to improve the way we communicate the value of testing. The content resulting from these sessions was then used for our internal marketing campaigns.

Each session was moderated. The moderator prepared the agenda, introduced the topic, moderated the discussion, collected action items, and summarized the lessons we had learned after each session.

That’s, in a nutshell, how we rolled our Action Lab.

Our Action Lab not only helped us to close skill gaps and knowledge gaps among testers in different teams and different departments but also enabled us to understand that unity is a strength and division is a weakness.

Our main learning was that, even in one single organization, many testers often have different, and sometimes even contrary, views on testing.

So, by getting together we were able to develop a consistent view on testing. This enabled us to develop a consistent way of messaging the value of testing.

This was crucial. Otherwise, our message would have signified nothing right from the beginning. To conclude, your superpower is your team’s consistency, not your individual brilliance.

Principle of Practicing

In addition to our Action Lab, we also organized practice sessions to train the way we pitch the value of testing.

We decided to train because we have learned that the delivery of our message is every bit as important as its content.

This gave rise to our Sales Lab. We did that for three reasons. First, we wanted to perfect the way we pitch the value of testing in 2, 5, 15, and 30 minutes. We mainly focused on making our pitch succinct because we usually only had a few minutes to get our point across.

Ergo, focus and momentum were our bosom buddies in pitching testing.

Secondly, we have seen that some testers were deeply convinced that one must be born with the magic ability to sell well. Well, that’s simply not true. Anyone can learn how to sell. You just need to be willing to invest effort.

Unsurprisingly, practicing the act of selling is a great way to break free from this popular misconception. The more often you do it, the easier it gets. Selling isn’t talent. Just like testing, selling is a set of skills that can be learned.

Thirdly, we have learned that some testers were frightened of failing at selling. They were scared of giving the rest of the organization the final proof that they don’t understand their profession or that they aren’t made for success. This fear often created a mental block.

So, we practiced. We practiced a lot. We practiced together to release our fellow testers from this anxiety. These sessions boosted our confidence.

We didn’t just share our success stories but also our failure stories to help our fellow testers realize that failing is not bad, not learning from failure is. This then stimulated them to speak about their failures more openly.

Ergo, we learned from each other and realized that the most frightening monster isn’t the one you are selling to, it’s the one that exists in your mind.

All this takes effort. But remember, without your individual commitment you’ll never start selling well. And without your team’s consistency, you’ll never finish selling well.

We have learned that consistency comes from collaboration. Therefore, we decided to foster collaboration through sharing knowledge.

This leads us to our principle of collaboration.

Principle of Collaboration

The true power of becoming and remaining excellent in doing and selling testing comes from sharing knowledge, not withholding it. So, we created a knowledge base for testing.

In this knowledge base, we shared the lessons we have learned from our internal seminars, workshops, and meetups (e.g., action lab, sales lab).

We shared testing practices that turned out to be valuable in our projects (e.g., heuristics, charters, mindmaps, models, tools).

We created a glossary to make testing-specific terminology (e.g., context-driven testing, exploratory testing) and testing-related terminology (e.g., quality, coverage, risk) more consistent within the organization.

We shared what we have learned from reading books, articles, papers, and blog posts. We shared answers to questions we frequently discussed (e.g., “How to decide what to automate?”).

We shared educational material on testing such as blogs, books, magazines, courses, podcasts, reports, and conference talks.

This list could go on and on. You get the point.

This knowledge base was our one-stop shop for all content related to software testing. Therefore, we called it the Testing Shop.

This knowledge base wasn’t just a storage platform but a collaboration platform that enabled us to gather, incorporate, and share feedback in the blink of an eye.

This platform was based on SharePoint and Confluence to make the information accessible for everyone in the organization.

This allowed us to easily involve other people too, e.g., professionals in branding, marketing, and sales. These people initially supported us in structuring our positioning material in a more professional form.

We created a variety of collaterals such as factsheets, one-pagers, value cards, battle cards, and storybooks. This enabled us to share our message in a more concise, to-the-point, and engaging way.

So, just like great testing, great selling comes from great collaboration.

In the course of creating and sharing this content, we not only sharpened our understanding of the practical dimension of software testing.

We also sharpened our understanding of the value we create for other people through excellent software testing.

Principle of Connectivity

Having a clear understanding of the value of testing was important since we usually received two types of questions from management. The first one was: “Why do we need testing at all?”.

We addressed this question in the following way.

First, we made it clear to management that the value we provide through testing is largely intangible. You cannot touch it. Through testing, we collect quality-related information (e.g., risks) about the software to enable other people (e.g., developers, product owners) to make better-informed decisions (e.g., shipping decisions, fixing decisions). We inform people.

Next, we let our management know that problems are the quality-related information we are primarily looking for in the software. Making these people aware of problems means enabling them to address these problems. And being able to address these problems means being able to prevent them from turning into bigger problems.

So, giving people the ability to avoid problems means giving them the ability to mitigate potential damage, e.g., in terms of financial loss, loss of reputation, or the loss of faith of clients due to poor user experience.

Simply put, through testing, we help other people to mitigate risk by making them aware of the risks (i.e., potential problems).

We help other people to mitigate the potential of losing something of value. Helping people in your company mitigate potential damage means giving the entire company the ability to progress toward strategic company goals at a sustainable pace, not at a reckless pace.

These goals are manifold (e.g., increase revenue, improve user/customer experience). We only indirectly influence these goals through testing, but indirect doesn’t mean unimportant. With this oversimplified example, we just want to illustrate what we were trying to achieve.

We were trying to find a trading zone with our internal stakeholders (e.g., managers). We did that by tying testing to their objectives.

So, we made testing legible to them by adapting our testing terminology while being careful not to oversimplify the complex nature of testing.

In a nutshell, we made our story their story. The reason is simple: If our story isn’t their story, they wouldn’t give it a chance.

All in all, we were trying to convey that we create value through testing by helping other people to create value.

Principle of Differentiation

Our principle of differentiation addresses the second question we usually received from management: “Why do we hire professional testers at all?”.

Management tends to believe that anyone can test. Well, it’s true: Anyone can test, but not everyone can test well.

In other words, management usually underestimates the difficulty of testing well. We addressed this question in the following way.

First, we distinguished between professional testing and amateur testing. Amateur testing is just code for testing that can be performed by anyone.

Amateur testing has a low probability of finding rare, hidden, and subtle problems that matter. It’s shallow testing.

On the other hand, professional testing has a high probability of finding rare, hidden, and subtle problems that matter. It’s the ability to test deep.

Deep professional testing leads to comprehensive product knowledge and risk coverage, shallow amateur testing doesn’t.

Additionally, amateur testing is a tactical activity that is largely driven by the gut feeling, talent, and intuition of amateur testers (e.g., developers).

Professional testing is a strategic activity that isn’t driven by gut feeling, talent, and intuition alone. It’s an exploratory enterprise backed by a colorful mix of systematic strategies, approaches, and techniques. For example, the ability to design and discuss test strategies is a hallmark of professional testing.

All in all, amateur testing is low-skilled testing. Here, randomness rules. Here, the act of finding problems that matter is like playing the lottery. Luck plays a major role in amateur testing.

Professional testing is high-skilled testing. It’s a fast, reliable, and consistent activity where luck only plays a minor role in finding problems that matter.

Don’t get me wrong.

Shallow amateur testing isn’t bad, far from it. It can be important. Among other things, it makes deep professional testing possible (Michael Bolton).

How do I know? Well, I (an amateur tester) have worked with Alice (a professional tester). That’s how I realized that I am not just short of being a professional tester but a world away from it.

In short, (excellent) testing makes people aware of problems (that matter), and problems that matter usually cause damage (e.g., loss of reputation).

This then begs the question: “Do you really want to rely on luck when your reputation is at stake?”. Well, we don’t. We often left management with these types of questions to drive home two points.

First, we emphasized the necessity and importance of professional testing to the business. Secondly, we highlighted that professional testing isn’t natural talent only. It’s a set of skills that must be learned. Simply put, professional testing matters, and since professional testing requires professional testers, professional testers matter too.

Here’s a little exercise you can do. Think about your differentiators. Think about them in three ways. First, think about your unique differentiators. These are the capabilities that make you unique. These are the capabilities that only you possess as a professional tester. In business jargon, that’s your unfair advantage. The ability to test deep is one example.

Secondly, think about your comparative differentiators. These are your capabilities that exceed the capabilities of other people (e.g., developers). These are the things you can do better. Typical examples are assessing risk, analyzing coverage, and telling the testing story.

Thirdly, think about your holistic differentiators. These are the capabilities that make you credible. For example, here you could highlight that you regularly speak at testing-related conferences, publish testing-related articles, or actively participate in testing communities to advance the testing craft.

In short, think about where you begin and others end.

It’s important that these differentiators are true, important to your targets, and provable. Otherwise, it’s just cheap talk. So, also think about how you can defend them. In doing so, remember that less is more.

The amount of information your management can absorb is usually limited. So, keep your list of differentiators concise but precise.

Principle of Debunking

Our principle of debunking is about anticipating, exposing, and proactively addressing toxic thoughts about testing.

Think about all the things your management should stop buying, here and now. Make a list. Our non-exhaustive list is shown below.

For us, this list included, and still includes, the illusion of bug-free software, the fantasy that all software testing can be automated, and the fallacy that 100% coverage is a meaningful practical concept.

It included the popular misconception that we can verify software or that everyone, including cavemen, can test well.

This list included the illusion that quality can be assured and quantified, and that testing is all about creating, automating, and executing test cases.

Whenever we pitched testing to management, we came prepared. We wanted to be quick on the trigger. We did that by having responses ready for these toxic thoughts. Remember, by failing to prepare, you are preparing to fail.

For example, here is what we did to address the false belief that “testing is test cases”. We reminded management that a recipe is not cooking. We reminded them that a sheet of music is not a musical performance. We also reminded them that a PowerPoint file is not a conference talk.

In the same way, a test case is not testing.

A test case is an artifact; testing is a human performance. So, just as a PowerPoint file is not central to a conference talk, a test case is not central to testing. Ergo, a test case is not a test. And likewise, the number of recipes (test cases) you have doesn’t tell us anything about your cooking (testing) skills.

This argument, among many others, has been stolen from the rich (Michael Bolton) and given to the poor (Ingo Philipp).

You get the point. We adapted our testing terminology and talked in simple terms to debunk these common misconceptions about testing.

We used metaphors and analogies. These are powerful cognitive tools to compare software testing to something that’s familiar to your managers.

These tools helped us to clarify our idea of testing by turning the abstract discipline of testing into something concrete. They helped us to stop this spiral of toxic thoughts about testing. They helped us to separate myth from reality.

Conclusion

We discussed two types of principles: action-related and process-related principles. In this article, you have seen a colorful mix of both types.

The action-related principles were the general guidelines we followed in the act of selling testing, i.e., when we pitched the value of testing to other people (e.g., management). We also called them tactical principles.

The process-related principles were our strategic principles. They directed our processes that, in turn, helped us to plan, maintain, and scale the way we communicated the value of testing across the entire organization.

This was crucial because the challenge was not to do that just for 1, 2, or 3 testers. The challenge was to do that for 40, 50, 60, or more testers.

I am highlighting this because I have seen too many testing teams that pay little to no attention to the process-related principles. I have seen too many teams overfocusing on action-related principles.

This then often sets these teams up to fail at scaling. Their action-related principles often turn into tiny little drops in a vast ocean.

Therefore, act strategically, not only tactically, since tactics without strategy is the noise before defeat.

Here are the three main reasons why following these principles pays off. First, think about what happens when a company restructures, the c-suite changes, or employees must be laid off.

In these situations, positions are questioned. Their relevance to the business is challenged. In the engineering department, people typically start questioning positions related to testing because its value add is tough to understand.

In these cases, you come prepared. You hit these questions hard and fast to take these concerns off the table quickly.

Secondly, through continuous and consistent evangelism centered around the value of testing, you probably won’t receive these tough questions anymore.

You already answered these questions since you never stopped addressing them. So, in these cases, it’s already crystal clear that professional testers significantly contribute to positive business outcomes.

Thirdly, think about what usually happens when new (agile) development teams are being formed. In the worst case, there’s no discussion on whether professional testers should be part of these teams. Then there are situations where people discuss whether professional testers are required but decide against them. Again, through evangelizing the value of professional testing consistently and continuously, you’ll see these discussions go away.

You will be included in these teams.

People will understand that it’s close to negligent to not include you. So, evangelism is not an option but a necessity to get your seat at the table.

In hindsight, the journey of developing these principles was much more rewarding than the principles themselves.

The real value of that experience was in all the things we’ve learned and unlearned in our heated discussions and lively debates.

The process of overcoming failures and finding new strategies in selling the value of software testing is what is valuable.

Through this process, we didn’t just become better sellers of testing but also better doers of testing. Again, this takes effort but it’s worth doing.

So, take the effort and accept the trouble that comes with it. The reason is simple: If you aren’t taking care of how software testing is perceived by the people in your organization, others will. Remember, you cannot “correct” what you aren’t willing to confront.

Namasté, my software testing friends.
