Matti Bar-Zeev

Why Testing After Is a Bad Practice

In this post I will try and give you my 2 cents on why writing tests after you have so-called “working” code is a bad practice, and why you should avoid it and strive to go tests-first (also known as TDD).

Some background

There you have it, your completed feature which actually works!

Well, at least as far as you’ve manually checked it. It involves dozens of files, several new ones and several modified old ones, but you’re pretty proud of what you’ve accomplished.

You look at your Jira board (or whatever your poison is these days) and see that ticket saying “Write tests for it”, and so you grab your coffee mug and set off to get your hands dirty with some assertions.

You don’t understand the big fuss about TDD and writing tests first. You’ve been coding like this for a few years now and everything seems to be working fine. I mean, all code has its bugs, right? And your code is sometimes hard to introduce new features to… or perhaps that happens too often? And what about those tests which are cluttered with mocks and are too complex for no real reason… hmmm.

That unsettling feeling you have creeping down your spine is rightfully there.

Here are a few points on how writing tests-after contributes to the symptoms I’ve mentioned above, among others:

Unconsciously complying with a given “reality”

It is within our human nature: when faced with a given situation, we try to comply with it, even to the point of justifying wrongs.

Our code is a given “reality”. It’s there and it works (for all we know). The tests that you write now will tend to establish and support the existing reality you’ve created.
They will do so by going through the paths you covered in your code; they will focus on the happy paths more often and will be biased. It is not surprising that tests-after usually produce poorer code coverage. It becomes easier for us to overlook certain cases, since we wish to comply with what’s already there.

For instance, let’s take the common “add” function. When you write a test for it after the code is implemented, you will attempt to add a few numbers and see that it works the way you remember your code working. But if you wrote a test first, you would start to think about how the function handles a situation where it does not get the arguments it expects, whether in number or in type.
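
To make this concrete, here is a minimal sketch (in Jest syntax, with a hypothetical add module; not code from any real project) of the questions a test-first approach raises before a single line of implementation exists:

```javascript
// add.test.js - written before add.js exists (hypothetical example)
const add = require('./add');

describe('add', () => {
  test('adds two numbers', () => {
    expect(add(2, 3)).toBe(5);
  });

  // Test-first forces these decisions before the implementation is written:
  test('throws when an argument is missing', () => {
    expect(() => add(2)).toThrow();
  });

  test('throws when an argument is not a number', () => {
    expect(() => add('2', 3)).toThrow();
  });
});
```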

In this sense, TDD kinda forces you to think of edge cases prior to writing the actual code. In many cases, this has proven to produce much more resilient code.

Emotional attachment to our code

We get emotionally attached to our work. You can see it in every PR you’ve submitted, when requested to modify something. It takes a lot of self-discipline to acknowledge that something you’ve made requires a change, and in many cases people will go to great lengths debating minor issues. Search your feelings, you will know it to be true.

“What does testing after have to do with it?” you might ask -
Tests have a tendency to expose your code’s design weaknesses. When your code is too complex or too tightly coupled, writing tests after will surface the bad design.
Although the tests indicate that the design is wrong, you will find that many choose to ignore the red lights and, rather than refactor the code, somehow make the tests suffer for its lack of testability.

This can manifest itself in a lack of tests, in overlooked use cases, and in excessive mocking.
Practicing TDD helps avoid such cases and helps us better design our code. TDD increases your code’s testability by default, and code with good testability is also flexible code which can be modified with greater ease, as the sketch below illustrates.
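
As a rough illustration (hypothetical names, just a sketch): code that reaches for its dependencies directly can only be tested by mocking those dependencies, while the same logic with the dependency injected can be tested with a plain stub:

```javascript
// Tightly coupled: testing this forces you to mock the whole http module.
const http = require('./httpClient');

async function getUserName(id) {
  const user = await http.get(`/users/${id}`);
  return user.name.toUpperCase();
}

// Decoupled: the logic is a pure function, testable with a plain object.
function formatUserName(user) {
  return user.name.toUpperCase();
}

async function getUserNameDecoupled(id, fetchUser) {
  return formatUserName(await fetchUser(id));
}
```

In the second version a test can pass a fake fetchUser, or assert on formatUserName directly, with no mocking framework involved.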

Excessive mocking

Practicing test-after usually produces tests which require a lot more mocking. When this happens, it implies that your code is tightly coupled to modules it probably shouldn’t be, and/or that the code’s separation of concerns (SoC) is lacking. When you wrote the code, nothing stopped you from coupling it tightly, but now the tests surface it.

Excessive mocking means that your tests become more complex and less readable. Moreover, in some testing frameworks it may add overhead to the test runner.

As you probably know, mocks also require maintenance: you need to clean them, restore them, apply them, and it can be so frustrating later on when you’re trying to figure out why a certain test is failing, only to find out that you forgot to restore a mock.
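
In Jest, for instance, that maintenance cycle looks roughly like this (a sketch; the afterEach restore is the part that is so easy to forget):

```javascript
describe('discount calculation', () => {
  let nowSpy;

  beforeEach(() => {
    // Apply: freeze the clock so results are deterministic.
    nowSpy = jest.spyOn(Date, 'now').mockReturnValue(1600000000000);
  });

  afterEach(() => {
    // Restore: forget this and the frozen clock leaks into other tests,
    // which will then fail somewhere far away from the real cause.
    nowSpy.mockRestore();
  });

  test('applies the midnight discount', () => {
    // ...assertions that rely on the frozen clock
  });
});
```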

Gets neglected at the end

At the beginning I wrote that you have that Jira ticket for “Write tests for it”. I don’t know why you didn’t stop me there and then :D

This is the place to say you should not have such a ticket. Writing tests is not an additional task; it is an inseparable part of the development task for your feature. What’s more, when you leave it for the end, it becomes the easiest task to postpone to “never” in the eyes of your product team - after all, as they see it, the feature is “working” and done.

Sometimes developers will just write dummy tests which have no value but somehow increase the code coverage, and that’s even worse than not writing tests at all, since it gives a false feeling that the code is well covered and protected when it actually is not.
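
Such a dummy test can be as simple as this contrived sketch (processOrder is a hypothetical function): it executes the code, so the coverage number goes up, but it asserts almost nothing about the behavior:

```javascript
// Runs every line of processOrder, so coverage tools are satisfied...
test('processOrder works', () => {
  const result = processOrder({ items: [{ price: 10 }] });
  // ...but this assertion would pass for almost any result,
  // including a completely wrong one.
  expect(result).toBeDefined();
});
```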

Wrapping up

Many of the coding issues we experience on a daily basis can be avoided if we practice TDD more. I’m not saying that the transition should be binary, this or that, but I hope that what I’ve written here will help you insist a bit more (even in that inner debate you’re having with yourself) on the quality you would like your code to have.

I know that reality sometimes demands we spit out the code as fast as we can, but we, as professionals, should always strive to make our work better and improve as we go.

Do you agree? Share your thoughts with the rest of us :)

Hey! If you liked what you've just read, check out @mattibarzeev on Twitter 🍻

Photo by Jennifer Bedoya on Unsplash

Top comments (28)

Charlie J Smotherman

TDD is like any other tool we use in development: it depends on your use case.

On one hand I can see it being a huge waste of time and on the other I can see how it can be a life saver. For me it depends on the project whether or not to use TDD.

Happy coding

M. Gr.

Whenever I developed with TDD, I had more refactorings and thought more deeply about corner cases. I also had a reliable automated test suite right at the end of development.
But I agree that TDD requires discipline.
I would, however, not agree that TDD is a waste of time. As Uncle Bob says, the developer is to choose the tools of development and is responsible for good automated tests. If your manager decides that you should not do TDD, then you should question his competence.

Raí B. Toffoletto

Totally agree! As someone who works for a very small company that has a practice of not writing any kind of tests, I have fallen into the habit of writing my unit tests when I have a bit of free time, way after I wrote the code. I can feel I'm biased towards the code already written and have to make twice the effort to imagine edge cases or scenarios I hadn't thought about before. Hope this will change in the future.
Nice article 🎉

Stephen Dicks

I think it's a bit sad that you say 'you have a practice of not writing tests' in a small company. You could change that with your next PR - try writing the tests first. You might just find that the job is quicker (yes really!) because you get fast feedback on whether your code solves the problem or not, rather than testing everything manually.

Raí B. Toffoletto

It's VERY sad... They basically told me not to 'lose time' with it... I'm trying, bit by bit, to show them other ways...

Matti Bar-Zeev

And if that does not work out, I think you should maybe reconsider continuing your professional career path with them. If you consider testing and TDD must-have tools for doing your work well, you should look for an employer who acknowledges and respects that.

Raí B. Toffoletto

Yes, that's the plan. 😉

Matti Bar-Zeev

Thanks!

Jan Küster

TDD is a must when requirements are clear and distinct, but a waste of time when they are fuzzy. First get all the requirements together, then do the TDD - that's at least my premise.

Matti Bar-Zeev

For sure. At the very least, you get a clear indication very early in the process that the spec you got is not complete.

Jan Küster

That's the case nearly every time. Even with true domain experts, we often face vague or contradictory requirements.

Fred

Awesome article, I have seen the exact situations you describe.
For me TDD is a mindset; it takes a lot of practice to own it, but I would never want to go back.
I dream that one day we will just write the tests and the computer will implement the code :)

Matti Bar-Zeev

Thanks mate! I think such tools are emerging as we speak, more in the sense of writing your requirements and watching the code get generated.

Ashley Sheridan

I think there are a few things conflated here. Writing tests after the code doesn't necessarily mean they're an afterthought, or considered a separate task (e.g. via a separate ticket). Sometimes that can be the case, but the majority of the time when I've written tests, or seen other developers writing them, they're part of the task, just written after the code is at a working point.

Tests should be written as cleanly as any other code, although obviously allowing for any nuances of the test framework being used. I've often written what I thought was clean code for the non-test part, only to find it could be improved because my tests couldn't be written well. I'd suspect the same could be found if tests were written first.

I also don't believe that just because tests are written after, they're immediately written lazily or begrudgingly, and aren't testing the happy and unhappy paths in the code. Where that does happen, I'd heavily suspect that the developer doing so is just as likely to be writing the rest of the code lazily too, not only the tests.

Finally, I think if you're in a team that would consider functioning code without tests to be ready for production, then writing tests beforehand (which would presumably be taking up the same overall amount of time) would probably have said team questioning why a task was taking so long, and likely prompt them to ask that the tests be left until the end anyway. However, that's just generally a bit of a red flag anyway, and if you're a developer who cares about testing in a team that doesn't, you need to find a better team.

I think it's also worth mentioning that a lot of testing frameworks out there do have code coverage capabilities. I wouldn't rely on only these though, as it's not too difficult to trick them into the appearance of coverage when it's not the case, and this can lead to abuse by those developers who would take the lazy route.

Matti Bar-Zeev

Writing code without having a clear functional spec of what the code should do is hard. Very hard. I need to know what I want the code to do before I jump in and write it. I've found that the best way of creating this functional spec is with testing before the code.

Ashley Sheridan

How do you begin writing a test for anything if the functional spec of that thing isn't clear?

Personally, I find it easiest to break the business requirements into the smallest unit of work that effectively results in a complete item. Then it becomes pretty simple to break that down across the various code layers as required to ensure an approach that follows clean, SOLID principles. Tests are more easily written for such code.

Matti Bar-Zeev

How do you begin writing a test for anything if the functional spec of that thing isn't clear?

I don't. Or at least, I strive not to.
I'm saying the same thing, but the flow is test first 🤓

GPWebDeveloper

Thanks for sharing.
I understand and agree with all the arguments listed. However, I have a hard time imagining and writing a test for code that doesn't yet exist. I imagine the task specification has to be very detailed.

Matti Bar-Zeev

Indeed it should. The truth is that starting the actual coding without a detailed specification can result in a great waste of time.

Russell Standish

TDD works well when the behaviour is specified up front. The problem is that not a lot of my code is like that. The process of writing the code involves discovering what is required (which is very agile: write the code, get it into the hands of the stakeholder ASAP, revise the code according to feedback).

As much as possible, I do testing at the same time as coding. You need to test the code anyway to verify your solution is doing what it is supposed to do, so why not make a little extra effort and automate those tests? So yeah - having a separate ticket item "Write tests for ..." is a bad move.

I do agree that this process does tend to miss edge cases. Hence, I also try to factor in a period of what I call "coverage testing" at some point during an iteration. Using a code coverage tool (e.g. gcov), take a close look at the lines of code not covered by the regression suite. Then think about what is involved in exercising those lines of code, and create tests where relevant. It's a great way of catching bugs. Not all lines of code need to be exercised by a test (that way lie the "dummy tests" mentioned above), so focusing on a coverage percentage is not really worthwhile. Focus on the lines not covered, and decide then whether testing is appropriate.

Matti Bar-Zeev

I agree with most of what you wrote here. As for the "coverage testing" period, this sort of thing does not apply when practicing TDD, for the obvious reason that there is not a line of code which did not start with a test.
What you mentioned about checking out the coverage and writing tests accordingly is also my way when I need to write tests after. How else would you do that, right? ;)

Juan Labrada

Actually, you cannot test your app until it is finished. What you mean is designing the tests based on the app's expectations, and that's why many agile practitioners consider TDD a design technique rather than a testing technique.
It has additional emotional benefits, for example reducing anxiety, since you quickly get feedback about the code you are writing, in contrast to testing at the end, when you are anxious, expecting that your job is done and that you've met the deadline, only to realize that the code is full of bugs.

Jacek Andrzejewski

I disagree that practicing more TDD will help avoid many of these issues. Far more issues arise because people use TDD the wrong way, like writing tests that give nothing in return (an example is testing what happens if you pass an incorrect type to a function with typed arguments), or tests that are "tautologies" (use a setter to set a value, get the value, and compare).
Too much mocking happens when your code is overcomplicated, and it can be overcomplicated because you TDD everything, write too many tests, and try to make things easily testable and extendable where they don't need to be. An example is creating an interface for a single class where there is a really low chance you will ever need a different class implementing that interface.
Another problem happens because TDD asks you to test units, and every single person defines "unit" differently in their head. I usually don't do a ton of unit tests, mostly because I usually create APIs, so I just write tests for the API, and any mistake in a unit will come up there too.

TL;DR
Most problems happen because people don't understand what they read or don't think while writing. Using TDD and other practices won't help with those two problems and may make things worse.

Matt Trachsel

Nothing better than fully testing a piece of code which is likely to need refactoring after a PR review, so that the 5x as much code in the tests you wrote needs redoing as well. Then, assuming that step only happens once, you ship the code with a bunch of low-value tech-debt tests on a piece of code that will very likely change or have a short lifespan. Great practice we're encouraging devs to do.

Ravi Suresh Mashru

I totally get what you're saying, and I also faced the same problem when starting out with TDD. However, it took me a few years to understand that good tests are tied to behavior. As a result, if something has to be refactored because of a PR comment, it doesn't break a test, because refactoring is all about changing implementation details; it doesn't change the behavior you expect from a class/function.

nrcaz

I think you should have emphasized this more in your post; I personally believe it is one of the most important aspects of TDD and how it becomes an agile practice. Tests written after will be tied to your implementation 100% of the time. Just try it: even if you know TDD, you will have a hard time writing a test that is not tied to your implementation after coding it. You are biased, and it will be a really hard mental exercise.
Writing the test first challenges the design and the user needs. If you can't write the test because there are too many aspects you don't understand, review the design; don't start an implementation that will probably fail to answer the need. "Never implement without a test" really means never implement when your design is not thought out.
As you mentioned, since tests should be tied not to your implementation but to the users' needs, you can change your implementation any time you want, and your tests will even help you do it faster without breaking anything for your users.

Prafful Lachhwani

Completely agree. How do you overcome the urge to skip tests for now because you have lots of extra things to do?

At the start of a project we have enthusiasm, but when you have to meet deadlines you feel like skipping unit tests for now.

Michael Powe

I'm bemused by the implication that people who don't TDD start out writing code without having thought out the design. Is that really a thing? "By the time I learned to talk, I forgot what to say."