by Leon Rosenshein

Prediction

As engineers we make lots of predictions. Some of them are about what our customers/users want and need, some are about how the things we build will interact, and some are about ourselves: what we are going to do and when we are going to do it. If you were to order our predictions by how accurate they are, that's probably the order to put them in. Which is kind of odd. We're best at predicting the people we know the least about, while the people we know the most about (ourselves) give us the most trouble. There are all sorts of reasons, but I think some of the big ones are sample size, using data, and optimism.

Consider our customers. As a company we have millions of them, and it's much easier to predict the average of a million things than to guess about a specific one. And we do experiments on subsets of them. We control for different situations and run the experiments different ways, adjusting the experiment and the sample until we're confident in our predictions. Then we make the change, and if our predictions were wrong we figure out why and use that knowledge for the next predictions.
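To put a number on that intuition, here's a tiny sketch (the distribution and numbers are completely made up, just for illustration): any single customer can land anywhere, but the average of a million of them barely moves.

    import random

    random.seed(42)

    def user_response():
        # Each simulated user reacts somewhere between 0 and 100 (made-up scale).
        return random.uniform(0, 100)

    # One user is anybody's guess; the average of a million is very close to 50.
    single = user_response()
    average_of_million = sum(user_response() for _ in range(1_000_000)) / 1_000_000

    print(f"one user:             {single:.1f}")
    print(f"average of a million: {average_of_million:.1f}")

That's the whole trick: the more independent samples you average over, the less any one oddball matters.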

For our system interactions we have unit tests, integration tests, simulations, and even track tests. Again, all designed and constrained to explore the boundaries of the system interactions and help us predict how things will behave "in the wild". Here too, if we find something we missed, we update or extend the situations we use to make predictions so that next time we're more sure of ourselves.

On the other hand, do we apply that kind of rigor to our development estimates? We have sprints, we have backlogs, we have story points. We do retrospectives. Then we miss our dates and do it all again. Does your team adjust the number of points in a sprint based on your historical completion rate? Do you have a way of correcting for new kinds of work or new areas? Do you account for the fact that not everyone is the same in all areas? What about dependency management?

That's not to say that it's terrible and we shouldn't try. Things are a lot more predictable now than they were when I started doing this. But we can do even better. As a small team we've seen good results from adjusting the number of story points we accept into a sprint based on our past performance. And we don't tie any of it to time. There is no conversion between story points and hours. We don't explicitly account for on-call work or KTLO, but that's built into our history of how many points we were able to close. It's not perfect. Our numbers don't directly apply to other teams. When we dive into a new area it gets a little rough, but it's a lot better than it was. And it helps our customers. They might not like it when we say we can't get something done by a certain date, but they can at least plan around it. And that's a really big win.
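The mechanics don't have to be fancy. Here's a rough sketch of the idea in code (the numbers are invented and it's a simplification of any real process): cap what you accept into the next sprint at roughly what history says you actually finish.

    from statistics import mean

    # Hypothetical history: points actually completed in the last few sprints.
    # On-call and KTLO aren't listed separately; they're already baked in,
    # because they reduced what we managed to close.
    completed_points = [21, 18, 24, 19, 22]

    def next_sprint_capacity(history, window=5):
        """Accept roughly what we've historically completed, not what we hope to."""
        recent = history[-window:]
        return round(mean(recent))

    print(next_sprint_capacity(completed_points))  # ~21 points, with no mention of hours

Notice there's nothing about hours anywhere in there. The history is in points, the plan is in points, and time only shows up when someone asks which sprint a thing will land in.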