by Leon Rosenshein

Testing Is More Than Preventing Breakage

Here’s something I ran across on the interwebs the other day. It’s about the reason for testing. I don’t fully agree with the first point, but I definitely agree with the rest.

If your tests only tell you when something breaks, you’re missing the point.

Great tests accelerate learning. They guide design, expose bad assumptions, and make change safe.

Testing isn’t cleanup. It’s engineering. Still shipping without fast feedback? That’s not speed. That’s risk.

– Dave Farley

Using tests to tell you when something breaks is absolutely part of why we have tests. Whether the tests are written first or after, once you have them¹ they’ll let you know when a change you made breaks something. That’s real value right there. It might not be the whole point, but it’s part of it.

Dave is right, though, that tests do more than just detect breakage. They help you clarify things. They help make things more concrete. They help you make your interfaces clearer. They validate what you think you know. And they point out the things you know that just ain’t so.

Especially if you write your tests first. You write the tests that expect the code to work correctly. To behave the way you want it to behave. Tests that use an abstraction and mental model that is consistent. Tests that show how the thing is going to be used.

That’s where the learning comes from. Each test builds upon the earlier ones. They let YOU test your mental model of the system before you write any of the code. If the mental model you’re using doesn’t allow you to write a test that has the behavior you want, it’s the wrong mental model. If the behavior you want is hard to get then your model needs more thought. Remember that you are not only allowed, but required to think about the whole system before you write that first test.
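As a sketch of what that looks like in practice, here’s a hypothetical Python example (the `ShoppingCart` class and its methods are invented for illustration, not from the original post). The test is written against the interface you *want* to exist, which forces concrete decisions before any implementation: items are added one at a time, totals are in cents, and an empty cart owes nothing.

```python
# Hypothetical test-first example. The test below was drafted before the
# class existed; writing it forced the interface decisions described above.

class ShoppingCart:
    """Minimal implementation, written after the test was drafted."""

    def __init__(self):
        self._items = []

    def add(self, name, price_cents):
        # One item at a time, price in integer cents -- decisions the
        # test forced before any code was written.
        self._items.append((name, price_cents))

    def total_cents(self):
        return sum(price for _, price in self._items)


def test_cart_totals():
    cart = ShoppingCart()
    assert cart.total_cents() == 0       # empty cart: nothing owed
    cart.add("apple", 150)
    cart.add("bread", 325)
    assert cart.total_cents() == 475     # totals accumulate in cents


test_cart_totals()
```

If that test had been awkward to write, say, if you couldn’t decide whether `add` should take a price or look one up, that friction is the mental model telling you it needs more thought.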

Once you have the tests written, all you need to do is write the code. When the tests pass, you’re done.

Yes, it’s not that simple in practice. You probably won’t write all the tests before you write any code. In fact, you shouldn’t. You should write your list of tests first. Then you should write one test². Once you have a test, write the code needed to make that test pass³. Once the test passes, look at the code and fix it⁴. Then you’ll move on to the next test and its code. Repeat until you run out of tests to write.
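One iteration of that loop can be sketched as follows. This is a hypothetical example; the `leading_zeros` helper and its test list are invented for illustration, not from the post.

```python
# A sketch of one RED -> GREEN -> REFACTOR iteration.
#
# Test list, written first:
#   - pads short strings with zeros
#   - leaves long-enough strings alone
#   - handles the empty string

# RED: this test fails at first, because leading_zeros doesn't exist yet.
def test_pads_short_strings():
    assert leading_zeros("42", width=5) == "00042"


# GREEN: the simplest code that makes the test pass.
def leading_zeros(s, width):
    return s.rjust(width, "0")


# REFACTOR: reading the code, you learn that str.rjust already leaves
# longer strings alone, so the next test on the list is cheap to add.
def test_long_strings_unchanged():
    assert leading_zeros("123456", width=5) == "123456"


test_pads_short_strings()
test_long_strings_unchanged()
```

The point isn’t the helper itself; it’s the rhythm: a failing test, the minimum code to pass it, then a look back at what you learned before picking the next test off the list.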

As you add more tests and more code you’ll be learning about the code. You’ll find things that need to be refactored and combined. You’ll realize that some of what you thought you wanted was wrong and you’ll adapt. You’ll optimize for multiple things at once. You’ll make compromises. You’ll do the thinking needed to solve the business problem you need to solve. You know what else you’ve done? You’ve done Test Driven Development.

To reiterate, as Dave says, writing tests isn’t the cleanup you do at the end. It’s not just coding to a spec and checking a box marked ‘Write Tests’. Done right, writing the tests is engineering the system. And when you’re done you’ve not only solved the current problem, you’ve built a system that will reduce the risk of making changes when you learn something new.


  1. Of course, this assumes your tests are well written and validate behavior, not implementation, but that’s a topic for another time. ↩︎

  2. In TDD terms, this is the RED step. At least one test is failing. ↩︎

  3. In TDD terms, this is the GREEN step. All the tests pass. ↩︎

  4. In TDD terms, this is the REFACTOR step. You learned something about the code, so apply that learning. This is also when you’ll add more tests to your list of tests to write. ↩︎