The risk of changing your tests

Roshan Manjushree Adhikari · Published in Level Up Coding · 6 min read · Feb 22, 2024


Image from https://www.pinterest.com/pin/fixing-unit-tests--484207397435294509/

As programmers, we are always changing our code, whether for refactoring, bug fixes, improving readability, adding new features, or removing outdated code. As change is such an inevitable part of any system’s journey, the importance of having tests for that code really shines. In his famous book Working Effectively with Legacy Code, Michael Feathers tells us that there are two ways to change our code: Edit and Pray — make the change and hope it did not break anything — or Cover and Modify — cover your code with the necessary tests and then edit as much as you like with confidence. By embracing the latter approach, we can iterate and refine our codebase with assurance, knowing that our tests protect our code against unintended consequences.
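The Cover and Modify idea can be sketched in a few lines. The function and test below are invented for illustration: the test pins the current observable behavior, so the internals can then be rewritten freely.

```python
def full_price(unit_price, quantity):
    # Original implementation we might want to refactor later
    # (e.g., into a simple unit_price * quantity).
    total = 0
    for _ in range(quantity):
        total += unit_price
    return total

def test_full_price_covers_current_behavior():
    # As long as this test passes, we can rewrite the internals
    # of full_price with confidence.
    assert full_price(10, 3) == 30
    assert full_price(10, 0) == 0

test_full_price_covers_current_behavior()
```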

But what about changing your tests? Changing a test can be risky, because any change to a test immediately calls its reliability into question. When production code changes, tests are there to verify its behavior. But when a test changes, how do we verify its continued reliability? For example, when you refactor your test code, how do you verify that the test behaves the same before and after the change? Do we write tests for the tests? And tests for those tests too? That would be a never-ending cycle.

Let’s take a closer look at when you would want to change your tests, the risks associated with doing so, and how we can write our tests to lessen those risks.

When would you change your tests?

  • Behavior change of production code — A change in the behavior of the production code is one obvious case where we expect the existing tests to change as well. This typically happens when a requirement of the system has changed, so the old tests must change with it.
  • Refactoring test code itself — You might refactor your tests to improve their readability, maintainability, or effectiveness. Some refactorings, such as renaming test methods or variables inside a test, pose little risk. On the other hand, changing the arrange and assert sections of your test code carries a higher degree of risk because you are directly modifying the actual behavior of the test.
  • Bug fixes — Tests are what make our systems reliable, but if a test itself contains a bug, you need to change the test code. The most common situation here is a false positive: a test that incorrectly reports a condition as present when it actually is not.
  • Performance improvements — Here, you might edit your tests to optimize the test code, reduce test scope (for example, by deleting a redundant test case), parallelize test execution for faster runs, or optimize the test environment.
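One common shape a test bug takes is a test that passes without actually checking anything. The sketch below is hypothetical: the assertion sits inside a loop, so with an empty input the loop body never runs and the test passes vacuously.

```python
def is_even(n):
    return n % 2 == 0

def test_is_even_buggy(values=()):
    # Bug: with the default empty input the loop body never runs,
    # so the test "passes" without asserting anything.
    for v in values:
        assert is_even(v)

def test_is_even_fixed():
    values = [2, 4, 6]
    assert values, "guard: the data set must not be empty"
    for v in values:
        assert is_even(v)

test_is_even_buggy()   # passes, but verified nothing
test_is_even_fixed()   # actually exercises the assertions
```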

When should you not change your tests?

  • Refactoring production code — Refactoring production code should change the internals of a system without modifying its interface. In cases like this, your tests should not change. If you need to change the test code every time you refactor the production code, your tests are clearly dependent on the implementation details of the code.
  • Adding new features — When new code is added to the system, the existing tests should not change, because the system’s existing behavior should remain unaffected. Instead, you must write new tests to cover the new behavior.
  • Bugfix — The presence of an existing bug suggests that the test suite missed a test case, or that an existing test case itself has a bug (see above). You are not expected to edit existing test cases in scenarios like these; rather, you should add a new test case covering the bugfix.
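The difference between a test coupled to implementation details and one that goes through the interface can be sketched as follows. The Cart class and both tests are invented for this example.

```python
class Cart:
    def __init__(self):
        self._items = []            # internal detail, free to change

    def add(self, price):
        self._items.append(price)

    def total(self):
        return sum(self._items)

def test_cart_internals():
    # Fragile: breaks if _items is renamed or replaced with,
    # say, a dict of item -> quantity, even though behavior is unchanged.
    cart = Cart()
    cart.add(5)
    assert cart._items == [5]

def test_cart_behavior():
    # Robust: survives any refactoring that preserves the public interface.
    cart = Cart()
    cart.add(5)
    cart.add(7)
    assert cart.total() == 12
```

If a refactoring forces you to rewrite the first test but not the second, that is a signal the first one was testing the wrong thing.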

How to make the change less risky?

Even if changing tests is sometimes inevitable, some good practices can make the change less risky for you.

  • Incremental change — This applies to both your code and your tests. It is better to make small, incremental changes to tests rather than attempting one large update, allowing quicker feedback and a less risky process. This holds even more for tests than for production code: production code is covered by your tests, but tests have no such coverage to preserve their behavior.
  • Code reviews — Changes to test code should always go through proper peer code review to catch potential issues. Keep in mind that, unlike a change in production code, the only way to verify a change to test code is code review, because you are not going to write tests for your tests.
  • Test coverage analysis — Monitor the coverage metrics before and after your change to ensure that it did not reduce the test coverage of the codebase.
  • Proper communication in the team — Maintain a culture of communication and collaboration within your team to facilitate discussion and feedback on changes to the test code.

How to write tests that are easier to change?

When you start writing tests following some sound guidelines, many of the risks involved in changing them are mitigated to quite an extent. Let’s discuss some of them.

  • Write simple and short tests — Tests should be simple and brief. It’s generally recommended to structure them in the AAA format (Arrange, Act, Assert). Simpler tests have less complexity, making them easier to modify.
  • Aim for readability — Readability is as important in test code as it is in production code. Readable tests are easier to understand, making it quicker to grasp the intent of the test, which in turn reveals the behavior of the system. Changing such tests is far easier than changing tests whose purpose you cannot determine.
  • Avoid code duplication, use utility methods — Often, many related tests need to share the same set of data. We can define it in a separate utility method, or in predefined methods like setUp() or tearDown(). Code duplication between tests results in longer, more complex tests, which hinders their readability, their simplicity, and ultimately their intent.
  • Adopt a good naming convention — Adopt a readable and consistent naming format for your tests. Practices like this help convey the intent of a test from its name alone.
  • Isolation — Make sure each test is independent and does not rely on the outcome of other tests. Interdependent tests are among the worst ways to write tests: editing one test might affect the result of another, making your test suite completely unreliable.
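These guidelines can be pulled together in one small sketch. The Account class and the test names below are invented for illustration: AAA structure, shared fixture in setUp(), descriptive names, and a fresh object per test so the tests stay isolated.

```python
import unittest

class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

class AccountTest(unittest.TestCase):
    def setUp(self):
        # A fresh fixture per test keeps tests isolated from each other.
        self.account = Account(balance=100)

    def test_deposit_increases_balance(self):
        # Arrange is done in setUp(); Act:
        self.account.deposit(50)
        # Assert:
        self.assertEqual(self.account.balance, 150)

    def test_deposit_rejects_non_positive_amount(self):
        with self.assertRaises(ValueError):
            self.account.deposit(0)

# Run with: python -m unittest <this_file>
```

Note how each test name states the expected behavior, so a failure report alone tells you what broke.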

The takeaway is that while tests are essential, when you don’t write them well, you do more harm than good to the codebase and the project timeline. Keep in mind that once you write a test, editing it again may harm its reliability. If you need to edit your tests, make sure you do so for a very good reason, and that you are not harming the behavior of the test. Always follow clean practices while writing your tests, so that they stay clean, simple, readable, reliable, and isolated from each other. The ideal is to write tests that never need to be edited. That might sound pretentious, but when tests are written with a little care, their stability increases.

References:

[1] Winters, T., Manshreck, T., & Wright, H. (2020). Software Engineering at Google: Lessons Learned from Programming Over Time. O’Reilly Media.

[2] Feathers, M. (2004). Working Effectively with Legacy Code. Prentice Hall.
