Why you might be wasting time in your testing

Software testing is important. We already know that.

But time dedicated to testing is also a critical commodity, so you don’t want to be guilty of wasting it on activities that aren’t helping to enable a successful testing process.

You probably know about the five Ws (who, what, when, where, and why), the questions asked to gather information or to work through a problem.

These sorts of activities are great for testers to carry out before designing their tests, to ensure that their testing is targeted and effective.

I do believe that experience can be the best teacher: if you try an activity that doesn’t yield “good” results, you quickly learn not to do it next time, and if you miss something from your planning, you make sure to include it next time.

But why make those mistakes if you don’t have to?

Below are the top things that you might (or might not) be doing that perpetuate the problem of not using your testing time effectively.

You don’t know what your goal is

Not having a plan for your testing is like setting out on a journey without a map, a compass, or a destination in mind.

While there is nothing wrong with testing in this way (also known as exploratory testing), most of the time that you sit down to test a piece of software you should clearly know what you are going to test, how you are going to do it, and so on.

Maybe your activities are only going to be limited to verifying bug fixes. But without taking the time to plan out your testing beforehand, you are in danger of wasting your time on efforts that do not add value to your overall testing goal.

You’re only focusing on positive test cases

The value of testing that software works shouldn’t be understated. But as one of the principles of software testing notes, testing shows the presence of defects, not their absence. So while testing that you are able to log in to a restricted area is certainly useful, it doesn’t prove that the login system works completely.

What happens if you try invalid details, illegal characters, or unexpected content?

Positive and negative testing are opposite sides of the same coin. While positive testing allows us to ensure the software is meeting the business use case, negative testing allows us to understand any flaws in the system, as well as giving us the ability to exercise more parts of the software.

If you only focus on the positives when conducting your testing, you are missing out on discovering what the negative consequences are for your users.
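To make the distinction concrete, here is a minimal sketch in pytest of how a positive login check might sit alongside a handful of negative cases. The authenticate function, the stored credentials, and the test inputs are all hypothetical stand-ins for whatever your application actually exposes.

```python
# A minimal sketch of pairing positive and negative test cases.
# authenticate() is a hypothetical stub standing in for real login logic.
import pytest

VALID_USERS = {"alice": "s3cret!"}

def authenticate(username: str, password: str) -> bool:
    """Hypothetical login check: accepts only known username/password pairs."""
    if not username or not password:
        return False
    return VALID_USERS.get(username) == password

def test_login_with_valid_credentials():
    # Positive case: confirms the business use case works.
    assert authenticate("alice", "s3cret!") is True

@pytest.mark.parametrize("username,password", [
    ("alice", "wrong-password"),                  # wrong password
    ("mallory", "s3cret!"),                       # unknown user
    ("", ""),                                     # missing details
    ("alice'; DROP TABLE users;--", "s3cret!"),   # illegal characters
])
def test_login_rejects_invalid_credentials(username, password):
    # Negative cases: probe how the system behaves with bad input.
    assert authenticate(username, password) is False
```

Running pytest against a file like this exercises both the intended business use case and a few of the ways a real user might break it.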

You’re lacking an understanding of why you are testing

Do you remember being at the school disco when you were in your early teens?

Trying to dance but not really knowing how? That is kinda what it’s like to test without knowing your purpose.

One problem that contributes to this is not taking the time to understand what the end user wants to do with a piece of software. By not putting yourself in the shoes of a user, and failing to target your testing efforts around the user’s needs, you leave yourself open to wasting significant amounts of time trying to test everything.

Which, of course, is not only impractical but also a huge waste of time, and contributes to numerous testing mistakes being made.

The other problem originates from testers who are either introduced to an existing team, or part of a group of testers relying on legacy testing solutions.

Part of the testing process may be to run a script that nobody knows anything about. It’s just written down in the test case, and it’s the way the test has always been run.

While the mandatory step may help you successfully complete your test process and acquire the results you need, it is possible that the step isn’t even needed anymore, or that it isn’t the most efficient or effective way of doing things.

As I detailed in this post, these sorts of steps should always be questioned, interrogated, and updated where possible.

You do not hold review/feedback sessions with your peers

There’s a method in software development used when a programmer is faced with a problem in their code. They turn to a model of a rubber duck on their desk and explain the problem to it, remembering to cover what the code is doing, what each line does, and the expected outcome.

In my own experience, the process of verbally going through logic that has previously existed only in the mind quickly enables the solution to materialise.

This method is known as rubber duck debugging.

I first discovered the technique at a developer conference in 2008 and have been using it extensively ever since, not only to solve my own coding problems, but also as a way to solve any sort of technical problem I face when another person isn’t around.

When a developer writes something, or a tester creates a test case, we tend to have a bias that it is the best it can possibly be, not seeing its flaws and glossing over any criticisms that others might have.

Showing your creation to a peer and explaining what you are trying to achieve not only allows another set of eyes to take a look at your work, but also sets in motion a process of improvement that would otherwise not occur if you kept it to yourself.

By taking the time to highlight and address flaws in your testing strategy, you fix problems early and increase the chances that your future testing efforts are a success.

Posted by Kevin Tuck

Kevin Tuck is an ISTQB qualified software tester with nearly a decade of professional experience. Well versed in creating versatile and effective testing strategies, he offers a variety of collaborative software testing services, from managing your testing strategy and creating valuable automation assets to serving as an additional resource.
