In the past I’ve talked about Why We Test. We test to increase our stakeholders’ confidence. So let’s add all the confidence we can. If a little confidence is good, more confidence is better, right?
I think Charity Majors said it really well.
Or you could think of it like running tests is the equivalent of eating well, not smoking, and wearing sunscreen. You get diminishing returns after that.
And if you’re too paranoid to go outside or see friends, it becomes counterproductive to shipping a good healthy life. ☺️🌴
How much confidence do you need? What’s the cost of something going badly? What are your operational metrics (MTTD, MTTF, MTBF, MTTR — mean time to detect, to failure, between failures, and to recovery)? Do you have leading metrics that let you know a problem is coming before customers/users notice?
The more warning you have before your customers notice, the faster you can detect and recover from a problem, and the rarer problems occur, the less pre-release confidence you need. If you have a system that detects a problem starting and auto-recovers in 200 milliseconds, and your customers never notice, the required confidence is low.
On the other hand, if you’re going to lose $10K/second, a 2-minute outage has a direct cost of $1.2M. And that’s without including the intangible expense of the hit to your brand. You’re going to want a bit more confidence when making that change.
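That arithmetic is worth making concrete. A minimal sketch — the `outage_cost` helper is mine for illustration, the dollar figures and durations are the ones from above:

```python
def outage_cost(cost_per_second: float, outage_seconds: float) -> float:
    """Direct revenue lost during an outage (ignores brand damage)."""
    return cost_per_second * outage_seconds

# $10K/second for a 2-minute outage
print(f"${outage_cost(10_000, 2 * 60):,.0f}")  # → $1,200,000

# The same rate with 200 ms auto-recovery
print(f"${outage_cost(10_000, 0.2):,.0f}")  # → $2,000
```

Linear in downtime, so every second you shave off detection and recovery pays for itself directly.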
So do your unit testing. Do your integration testing. Track your code coverage. Run in your test environment. Shadow your production environment. Until you have the appropriate amount of confidence.
Because all the time you’re building confidence, you’re not getting feedback. You might have confidence that it’s going to work, but you have no confidence that it’s the right thing to do.