I’ve talked about the difference between Test After Design and Test Driven Design before. I’m a big proponent of writing tests that prove your design before you write code. Not all of the tests up front, because as your knowledge of what the design should be improves, you’ll discover new tests that need to be written. So, as you learn, you write more tests to drive the design.
Regardless of when you write the tests, before or after the code, there are a bunch of test anti-patterns to watch out for. Anti-patterns, or code smells, as they’re sometimes known, aren’t necessarily wrong. They are, however, signals that you should look a little bit deeper and check your assumptions.
Check out the list yourself for the full details, but here are a few of the ones that I see implemented pretty often:
The Liar

The liar is a test that doesn’t do what its name says it does. It is testing something, and it fails when that something breaks, but that something isn’t what the name claims. An example would be a test called “ValidateInput” that, instead of checking that the method properly validates its input parameters, actually checks that the result is correct.
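A minimal sketch of a liar in Python (the `parse_age` function and the test names are hypothetical): the first test’s name promises input validation, but its body only checks the happy path, so the validation logic could break without this test ever failing.

```python
def parse_age(raw: str) -> int:
    """Parse a user-supplied age string, rejecting non-numeric input."""
    if not raw.isdigit():
        raise ValueError(f"not a valid age: {raw!r}")
    return int(raw)

# The liar: the name promises validation, but the body only checks
# the happy path. Delete the isdigit() check and this still passes.
def test_validates_input():
    assert parse_age("42") == 42

# What the name actually promised: invalid input is rejected.
def test_validates_input_honest():
    try:
        parse_age("forty-two")
    except ValueError:
        return
    raise AssertionError("parse_age accepted invalid input")
```

Renaming the first test to something like `test_happy_path` would also fix the lie; the point is that the name and the assertion have to agree.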
The Giant

The giant is a test that does too much. It doesn’t just test that invalid input parameters are properly handled; it also checks that valid input gives the correct answer. Or it might be an integration test: something that requires too many pieces to be working properly for the result of the test to tell you anything. One important note here: a table-driven test that covers all combinations of invalid inputs in a single test function is not a giant, as long as each entry in the table runs as its own test. It’s just a better way to write small tests.
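To make the table-driven point concrete, here’s a sketch in Python (the `parse_age` function is a hypothetical stand-in), using `unittest`’s `subTest` so each table entry reports as its own small test rather than one giant one:

```python
import unittest

def parse_age(raw: str) -> int:
    """Parse a user-supplied age string, rejecting non-numeric input."""
    if not raw.isdigit():
        raise ValueError(f"not a valid age: {raw!r}")
    return int(raw)

class TestParseAgeRejectsBadInput(unittest.TestCase):
    def test_invalid_inputs(self):
        # One small test per table entry: subTest reports each failing
        # case separately, so a single failure doesn't hide the rest.
        cases = ["", "  ", "-1", "forty", "4.2"]
        for raw in cases:
            with self.subTest(raw=raw):
                with self.assertRaises(ValueError):
                    parse_age(raw)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note that only invalid inputs live in this table; checking that valid input produces the right answer belongs in a separate test, which is exactly what keeps this from being a giant.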
The Slow Poke
The slow poke is just that: slow. It takes a long time to run. It might be waiting for the OS to do something, or for a real-world timeout, or it might just be doing too much. Tests need to be fast because entire test suites need to be fast. If your test suite takes minutes to complete, it’s not going to be run. And a test that doesn’t get run might as well not have been written.
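One common cure for the real-world-timeout flavor of slow poke, sketched in Python with hypothetical names: make the waiting injectable, so the test substitutes a fake clock and five “seconds” pass in microseconds.

```python
import time

def wait_for(condition, timeout=5.0, interval=0.1,
             sleep=time.sleep, clock=time.monotonic):
    """Poll condition() until it returns True or timeout elapses.

    sleep and clock are injectable so tests can pass fakes
    instead of waiting in real time.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return condition()

class FakeClock:
    """A clock whose sleep() jumps time forward instead of blocking."""
    def __init__(self):
        self.now = 0.0
    def monotonic(self):
        return self.now
    def sleep(self, seconds):
        self.now += seconds

def test_wait_for_times_out_without_real_waiting():
    fake = FakeClock()
    result = wait_for(lambda: False, timeout=5.0,
                      sleep=fake.sleep, clock=fake.monotonic)
    assert result is False
    assert fake.now >= 5.0  # five fake seconds elapsed instantly
```

The production code still sleeps for real; only the test swaps in the fake, so the test runs as fast as the CPU allows.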
Success Against All Odds
The success against all odds test is the worst of all. It takes time to write. It takes time to maintain. It executes your code, and then it says all tests pass, regardless of what happens. Maybe you’re testing your mock, or, through a series of unfortunate events, comparing the expected output to itself, or, even worse, you forgot to compare the output to the expected output at all. Regardless of the reason, you end up with Success Against All Odds: you feel good, but you haven’t actually tested anything. You haven’t increased anyone’s confidence, which is Why We Test in the first place.
So regardless of when you write your tests, make sure you’re not falling into any of the anti-patterns by mistake or by not paying attention.