by Leon Rosenshein

Speed

It's a truism that going too fast is dangerous. It's human nature to slow down in the face of danger or uncertainty. When you walk into a dark room for the first time, you don't run. You walk in slowly, possibly feeling around for obstacles. There's even a Facebook group telling our local community to stfdlouisville.

On the other hand, when I talk to my fighter pilot friends, I get a different answer. Speed is life. Even Maverick got it. In that situation they say speed, but what they really mean is energy: trading potential and kinetic energy as the situation demands. Because when you're out of speed and out of altitude, you're out of options.

And at an even more basic level, that's what we're looking for. Options. And the ability to choose between them as information is uncovered and understanding changes. The ability to detect an obstacle while you still have time to react to it.

In software, there are a bunch of ways to approach this. One is Big Design Up Front (BDUF), the traditional waterfall approach, which seeks to know everything at the beginning so that there are no obstacles to overcome. You've either designed around them or done other work to remove them. This can work. Provided you know exactly what you are going to need to do. And the order to do it in. And when it will be done (which is often different from when it needs to be done). Given sufficient time to build the BDUF, you can do a pretty good job of determining what will be built and when it will be ready. And since you know what needs to be done, you can keep everyone working, with more people doing more things in parallel and getting done sooner.

Another option is a much more iterative approach. Start with a team and a rough definition of the problem space. You try something. If it gets you closer to a solution, you do more of it. If not, you try something else. You're constantly testing how things work. Of course, this requires you to have a definition of what makes a solution valid. And a way to see if you have a valid solution. And you'll need to take very small steps, checking all the time to see if you're going in the right direction. The faster you can adjust, the less work you'll do in a non-optimal direction. And even with small steps, there will be rework and changes. As the understanding of the problem grows and the success criteria mature, earlier decisions will turn out to be, at best, less than perfect, or maybe even completely wrong. So you fix them and move on. Eventually you'll get to a valid solution. You won't know what it will look like, or when you'll get there, until you're almost done, but, as long as your definition of done is correct, it will converge.
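That loop is simple enough to sketch in code. This is purely illustrative, not any real framework or method; every name in it (`try_step`, `distance_to_done`, and so on) is a placeholder I'm assuming for the sake of the example:

```python
# A minimal sketch of the iterative approach described above.
# Every name here is a hypothetical placeholder, not a real library or API.

def iterate(state, is_done, try_step, distance_to_done, max_steps=1000):
    """Take small steps; keep the ones that move closer to done, discard the rest."""
    best = distance_to_done(state)
    for _ in range(max_steps):
        if is_done(state):                   # a valid solution, by our own definition
            return state
        candidate = try_step(state)          # try something small
        score = distance_to_done(candidate)  # measure against the definition of done
        if score < best:                     # did it get us closer?
            state, best = candidate, score   # yes: do more of it
        # no: discard the step and try something else on the next pass
    return state                             # best we found within the budget
```

The code isn't the point; the shape is. The cost of one trip around that loop is what determines how fast you can adjust, which is exactly the cycle time that matters below.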

On the surface, BDUF seems like the right answer. Less rework. More parallelism. More predictability. But… reality tells us that we never know everything up front. If we just do what's in the plan, then at best we've only solved part of the problem, since we didn't fully understand it. More likely, we missed some interaction or condition along the way and the solution doesn't work. So we end up doing the rework anyway. At the end, when it's harder. And more error-prone. And after the scheduled completion date. Which leads to overwork, burnout, and the mythical man-month. And no one wants that.

So we try the iterative approach. And that's where speed really matters. Not the speed of the solution. That's an outcome, a result, not an input parameter. The speed that matters, the speed that's the input to the system, is, as Charity Majors tells us, the speed of the decision cycle. How quickly can a developer/team go from recognizing a problem to getting real-world feedback on whether the proposed solution makes things better or worse? Because that's the time that drives how fast you can react to new information.

After all, you wouldn't drive a car by looking out the window every 6 months and deciding how to adjust the steering wheel, throttle, and brake for the next 6 months, so why would you build software for one that way?