Release stabilization is traditionally the period at the end of a development cycle when the team minimizes change and spends time fixing things to make sure the result is stable. That sounds like a good thing, doesn’t it? And compared to its most obvious opposite, release de-stabilization, it’s definitely a good thing. Before you release any software, whether an executable, a library, a website, whatever, you want to know that it’s good, that it’s stable, and that it’s resilient to whatever the real world will throw at it. Thought of that way, it’s something we should always do.
On the other hand, there’s a different opposite state implied by that term, one that makes me think really hard about how we develop software in general. That other state is development instability. If we need to take some time at the end to make the software work, what the hell were we doing all the time before that? Working on job security? Of course not. But…
“Release stabilization?” I don’t understand. Why did you choose to make it unstable? In what world does that make sense? - Kent Beck
I know at least one reason. The drive to get things done. Or at least call things done. And be able to close the Jira ticket. Because often, in the short term, that’s how we’re measured. And as Eli Goldratt said,
Tell me how you will measure me, and then I will tell you how I will behave. If you measure me in an illogical way, don’t complain about illogical behavior.
That doesn’t make it the right thing to do though. It’s a classic local maximum issue. The fastest thing I can do right now is the least amount of work needed to be able to close the ticket. Even if that means I leave a bunch of work for tomorrow (the release stabilization period). If it were as simple as delaying some work until later, we’d be OK, and it wouldn’t be a local maximum.
Unfortunately, it’s not just delaying some work. The work we’re delaying isn’t simply moved to the end of the project; it also slows down progress for the rest of the project. Every time we delay work, we make things a little slower in the future. If you think of it as technical debt, which isn’t a bad metaphor, eventually the interest becomes too high and you go bankrupt.
You can hit a local maximum the other way too. You can spend too much time making some small thing perfect. You think that by doing so you’ve eliminated the need for time at the end, and avoided the problem of slowing yourself down in the future. It’s a good thought, but again, there’s a problem. You spend all that time making the thing perfect, given what you know about the rest of the system. Then you learn something new about the system, and what was perfect becomes not perfect. So you perfect it again. Then you learn something new, and the same thing happens. It turns out there’s a name for this, Rework Avoidance Theory, and it doesn’t work either. You end up slowing yourself down because you need to keep changing things as you learn more about what you’re doing.
We know these things. We struggle against doing them. Like everything else in software (and life), the right choice at any specific time depends on the context. I can’t tell you what you should do in any specific context, but I can tell you that the right way to approach the problem is to be aware of your context. To think about which work to do now based on what you know, and which work to leave for later, when you’ll know more.
I can’t be sure, but experience tells me that you should probably be doing a little more work after you think you’re done, not less. There might be some extra conditions/experiments you run when you get closer to the end, but overall, doing more along the way and less stabilization will probably get you to the finish line sooner.