How to test when you know a bug won't be fixed

Jenny Bramble, Software Test Engineer, Willowtree Apps

When you're writing automated tests and run up against a defect or unexpected behavior, it can feel like hitting a brick wall—as if all your testing is screeching to a halt.

It's even more deflating when you bring the defect to your team and it decides not to fix it before release. Or when you're working with a third-party API and it's returning an unexpected response. 

When you're tasked with writing automation in these settings, what do you do? Do you avoid writing the test? Let it fail all the time? Work harder to get the defect resolved? 

Here are my suggestions about how to approach testing in this situation.

Write your tests to pass, but design them to fail

At the end of the run, automated tests provide information back to testers. Removing the phrase "automation finds bugs" from your lexicon is the first step to a successful, meaningful automation suite. 

Automation returns information about the expected behavior of the system—it's up to the humans interpreting that information to determine if there's a defect. 

How many times have you dismissed a test as flaky or rerun a suite because something failed when you didn't expect it? Automation didn't find a defect; it returned information that you then used in the context of your suite to determine your next actions.

This means that when you have a defect that won't be fixed, you accept that your team has determined that this is the expected behavior of the system. This behavior might be the result of a defect, but within the context of your system, it's now expected. 

Designing tests to fail means that you design them to pass on this expected behavior. When that test fails, you’ll have a new piece of information to determine whether or not your system still meets expectations. Is the third-party API returning a different value? Has a developer picked up the bug ticket and resolved the defect? Has anything else changed that affects this functionality?
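Here's a minimal sketch of that idea in Kotlin with JUnit 5. The third-party client, the misspelled status value, and the scenario itself are hypothetical; the point is that the assertion pins the behavior your team has accepted, not the behavior the spec originally asked for.

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Hypothetical stand-in for a third-party integration point.
class OrderApiClient {
    // The vendor currently returns the misspelled status "SUCESS" and has said
    // it won't change this release, so the team treats it as expected behavior.
    fun submitOrder(): String = "SUCESS"
}

class OrderApiContractTest {
    @Test
    fun `submitOrder returns the status the team has accepted`() {
        // Passes on today's expected behavior. A failure here is information,
        // not automatically a defect: maybe the vendor fixed the spelling, and
        // the code that parses this status (and this test) must change with it.
        assertEquals("SUCESS", OrderApiClient().submitOrder())
    }
}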

By designing tests to fail, you're providing meaningful, actionable information without increasing failure fatigue: the more failed tests you see, the more likely you are to ignore failed tests. This is something automation architect Paul Grizzaffi calls "hunting Sasquatch."

Keep your team informed

Designing tests to fail should not be a quiet endeavor. To ensure that the information your suite returns is meaningful and actionable, it needs to have a context. You can create context through Jira stories, TODOs in the code, and READMEs.

Imagine that your test will not fail for several months. By then will you still be on the project—or even with the company? Will anyone know to ask you about this particular test? Will someone else have to do hours of research to figure out what's going on? You need to minimize this type of swirl for this method to be effective.

Start by documenting your tests. If you're using custom failure messages, create something similar to "This may be failing because defect 1414 has been resolved" or "This test may be failing due to a change in the API’s response."  
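In a Kotlin suite using JUnit 5, for instance, that message can live right in the assertion. The test name, the helper function, and the ticket number below are placeholders that echo the password example later in this article.

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class PasswordRulesTest {
    @Test
    fun `minimum password length matches current behavior`() {
        // Six characters is what the team agreed to ship for now. The message
        // appears only on failure and tells the reader where to look first.
        assertEquals(
            6,
            passwordMinimumLength(),
            "This may be failing because defect 1414 has been resolved; " +
                "the requirement calls for a minimum of 8 characters."
        )
    }

    // Hypothetical stand-in for the production rule under test.
    private fun passwordMinimumLength(): Int = 6
}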

TODOs in your test code are lifesavers here as well. Many IDEs can show a summary view of the TODOs in the code, so adding them will help any new testers on your team get up to speed on the context of your tests quickly. At a minimum, a good TODO should describe the desired behavior and link to the defect ticket.
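Building on the hypothetical test above, such a TODO might read:

// TODO: Defect 1414 (ticket number is a placeholder): minimum password length
//  should be 8 characters, but this release ships with 6. When the ticket is
//  closed, update the expected value in
//  `minimum password length matches current behavior` and delete this TODO.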

In addition, note in your tracking system that there is a test associated with the defect. Describe where to find the test, the current behavior, and how you imagine the test working once the defect is resolved.

This can be as simple as: "The test is in the login group and currently verifies that the new password is six characters. When defect 1414 is resolved, the test will need to be updated to the correct value of eight characters."

Don't abandon your tests

More than any other method of test curation, designing tests to fail requires maintenance. Because the TODOs are in place, these tests should be a little more visible to anyone working on your automation or codebase at large.

When working with a codebase that has several TODOs of any sort, it's important to check those regularly to ensure that they're still valid. One of the best ways to do this is to tag these tests, if your framework allows for it, or use the tools built into your IDE.
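If your framework supports it, tagging the hypothetical test from above can look like this Kotlin and JUnit 5 sketch; the tag names are just a convention assumed here, not anything built into the framework.

import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Tag
import org.junit.jupiter.api.Test

class PasswordRulesTest {
    @Test
    @Tag("known-defect")  // shared tag for every test designed around a known defect
    @Tag("defect-1414")   // one tag per ticket makes a specific test easy to locate
    fun `minimum password length matches current behavior`() {
        assertEquals(6, passwordMinimumLength())
    }

    // Hypothetical stand-in for the production rule under test.
    private fun passwordMinimumLength(): Int = 6
}

With tags in place, a Gradle build can run or report these tests on their own, for example with useJUnitPlatform { includeTags("known-defect") } in the test task, which gives you a ready-made review list.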

In Xcode, for example, this comment:

//TODO: revisit when defect 1414 is resolved

will add the item to a list that you can view, while this directive:

#warning revisit when defect 1414 is resolved

will create a more intrusive compiler warning. In Android Studio, the same comment:

//TODO: revisit when defect 1414 is resolved

will add the item to an easy-to-read list in the bottom pane.

Having this list in an easy-to-access place will let you quickly revisit these tests every couple of sprints, or at least every quarter.

Finally, remember that with some defects, the application or APIs may change in such a way that the defect is no longer relevant. Make sure you don't have any TODOs hanging around for defects that no longer apply.

Now go ahead: Design your tests to fail

Designing tests to fail is a great method for making sure that your applications display expected behavior while reducing failure fatigue.

Start by identifying the behavior that you expect from your application or API and then categorize any defects that your team won't fix. Once you have a list of defects that you can consider expected behavior, you can write your tests to pass on this behavior.

Finally, carefully document them so anyone coming after you will have context about the tests when they start failing.

Come to my TSQA 2020 conference presentation, "Setting Your Automated Tests Up To Fail," where I'll talk about when to design tests to fail and how to use TODOs and other indicators to let the rest of your test team know what's going on. We'll have a frank discussion of automation as information, not as a bug-detection system. The conference runs February 26-27, 2020, in Durham, N.C.
