by Leon Rosenshein


Test After Development (TAD) or Test Driven Development (TDD)? What’s the difference? They sound pretty similar, and the Levenshtein distance between them is only 5, but the difference is pretty profound.

In TAD you write some code, try it out with some test cases, and write some tests to cover the complicated or confusing parts. You write those tests to give yourself some confidence that the code does what you want it to. Maybe you write some other tests to make sure some of the likely errors are handled. Simple and straightforward.

In TDD, on the other hand, you know what you want to do. But instead of figuring out how to do it, you write tests. You write tests that are examples of doing it. You write tests that demonstrate how you want people to use it. You write some showing how you think they’ll try to use it. You write tests with the kinds of mistakes you think they’ll make. Of course, none of those tests pass. How could they? There’s no code to do it yet.

Then you look at all of those tests. And the APIs and function calls you’ll need to make it happen. And you think about them. Individually and as a group. How they fit together. If they’re consistent, in their inputs, outputs, error reporting, and “style”. If they solve someone’s problem, or are just a bunch of building blocks.

Then you go back and adjust the API. You make it consistent. Logically and stylistically. You make sure it fits together. You make sure that it can be used to solve the problem you’re trying to solve. And you re-write the tests to show it. But they still fail, since you haven’t written the code yet.

After all your tests are written and your APIs updated, and all the tests compile but fail, you start to write the code. As you write more code, more of the tests pass. Finally all the tests pass. Then you realize you’ve got an internal design problem. Something is in the wrong domain or you find some coupling you don’t want. So you refactor it. Some of the tests fail, but that’s OK. You find and fix the problem. All the tests pass and you’re done. Simple.
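The end state of that loop can be sketched with a hypothetical example (none of these names come from the post): the tests that were written first, now sitting on top of an implementation that has been written and refactored until they all pass.

```python
import re

# Hypothetical API under test: turn a title into a URL slug.
def slugify(title):
    if not isinstance(title, str):
        raise TypeError("slugify expects a string")
    # Lowercase, then collapse runs of non-alphanumerics into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The tests that were written first; the implementation grew to meet them.
def test_basic_usage():
    assert slugify("Hello World") == "hello-world"

def test_collapses_extra_whitespace():
    assert slugify("  Hello   World  ") == "hello-world"

def test_rejects_non_string():
    try:
        slugify(42)
        assert False, "expected a TypeError"
    except TypeError:
        pass

if __name__ == "__main__":
    test_basic_usage()
    test_collapses_extra_whitespace()
    test_rejects_non_string()
    print("all tests pass")
```

Because the tests only describe observable behavior, the body of `slugify` can now be refactored freely; the suite tells you immediately if a refactor changed something a user could see.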

It took 1 paragraph to describe TAD and 4 to describe TDD. Based on that, TAD is better, right? Maybe in some cases, but generally, no. And here are some of the reasons why.

Looking at things from the outside

With TAD you generally don’t get a chance to look at your API as a whole. You don’t look for the inconsistencies, you just look at functionality. You might not upset your users, but you’re unlikely to delight them. It works, but it doesn’t feel smooth.

With TDD you have a ready-made source of documentation. Your tests are a great way to show your users how you expect the API to be used. And when your users tell you how they’re really using your API you can easily add that to your behavioral test suite to make sure your APIs are for life.
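For instance (a hypothetical helper, not from the post), when a user tells you they rely on behavior you never documented, that report becomes one more test in the behavioral suite:

```python
# Hypothetical example: a user reported calling parse_version with a
# leading "v" ("v1.2.3"), which happened to work. Pinning that usage
# down as a behavioral test keeps it working from now on.

def parse_version(text):
    # Accepts "1.2.3" and, as users discovered, a leading "v".
    return tuple(int(part) for part in text.lstrip("v").split("."))

def test_documented_usage():
    assert parse_version("1.2.3") == (1, 2, 3)

def test_usage_reported_by_users():
    # Added after a user told us they depend on this.
    assert parse_version("v1.2.3") == (1, 2, 3)

if __name__ == "__main__":
    test_documented_usage()
    test_usage_reported_by_users()
    print("behavioral suite passes")
```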

Looking at things from the inside

You will need to make changes. Your changes need to fit in with the existing system. With TDD you can see how the changes fit with the existing API. You can see if they are (or aren’t) working well together. You can ensure that your API stays logically consistent.

You’re always learning about the domain you’re modeling. As your understanding grows the boundaries between the domains will shift. Functionality and data will want to move between domains. There will be refactoring. TDD helps you know that what you’re changing doesn’t change the observable behavior. Because that’s what your users care about.

That’s why TDD generally leads to better results than TAD.


One final caveat. While TDD makes your code and APIs better, it’s not the entire solution. No one methodology is. TDD is a great foundation, but for your own peace of mind you’ll also want other methodologies. Like unit tests for specific functionality and validating complex internal logic that doesn’t directly map to externally visible behavior. Or integration tests. Or testing on large data sets. Or whatever else your specific context demands.