Recent Posts (page 2 / 66)

by Leon Rosenshein

Review Your Own Code Review

I’ve talked about code reviews before. There are things you should do when you review code, and there are things you shouldn’t. Most of those are about reviewing someone else’s code. The rest are about what goes into preparing a good code review.

I stand by those ideas. But there’s one important thing that goes into preparing a good code review that I haven’t mentioned in the past. That’s reviewing your own code reviews before asking anyone else to look at them.

One of the biggest reasons is to know exactly what is going into the review. In order to write a good title and summary, you need to know what you’re summarizing. You want to be clear about what’s included, and for that you need to know what’s there.

It gives you a chance to see how many “ands” you put into your summary. Very often you find that to do the thing you set out to do you need to make a bunch of other changes to make it possible. Since each review should do one thing, the self-review is a good time to catch that you should split the review up into multiple reviews. NB: Your reviewers will thank you.

Since you’re looking at the entire thing at once, are you being consistent? Are you being readable? Do things visually flow? Not just formatting (you have rules and formatters for that, don’t you?), but structure and naming. Do you call something inventory in one place and stock in another? Make them the same. Did your method grow to 300+ lines? Break it up. Are you calling methods from hither and yon? Do you need to jump around/across files to follow what’s happening? Consider bringing things physically closer together to make reading/debugging easier.

Did you leave yourself a note to look at something related later? I do that all the time since I don’t want to change contexts and track something down. I also often forget that I wanted to track that thing down, so seeing the note during the review gives me another opportunity to follow through.

Another thing you get from self-review is the chance to notice any debug helpers you left in the code. Things like extra print or log statements, parameter overrides, or changed defaults. Once you find them you’ve got an opportunity to change things. Should those prints be log messages? Should the logs be at a different level? Are you changing or adding a default that needs to be documented?
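As a sketch of that kind of cleanup, here’s the sort of thing a self-review often catches. The function and names here are made up for illustration, not from any real codebase:

```python
import logging

logger = logging.getLogger("checkout")  # hypothetical module name


def apply_discount(price: float, rate: float) -> float:
    """Apply a percentage discount to a price."""
    # Before self-review: a stray debug print left over from development.
    # print(f"DEBUG: price={price} rate={rate}")

    # After self-review: a log message at a deliberately chosen level,
    # so it can be filtered in production instead of always printing.
    logger.debug("applying discount: price=%s rate=%s", price, rate)
    return price * (1 - rate)
```

The point isn’t the logging call itself; it’s that the self-review is where you decide whether that output should exist at all, and if so, at what level.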

Speaking of documentation, this is also a good time to check and make sure that there is appropriate documentation for what you did, why you did it, and why you didn’t do something else. You’ve already done it once, but this is your chance to look at the changes in totality and make sure.

What about unit tests and/or code coverage? Do you have enough? Neither is a guarantee, and you hopefully have automated systems that will check this for you, but being a good neighbor means the code under review is done. If you have folks do a review, it’s better if you’re sure all the existing tests still pass and any needed tests have been added. If not, you’re going to have to go back to your reviewers and ask them to review things all over again once you fix those problems. That is not the way to make friends.

What it comes down to, and what all these benefits add up to, is the chance to make sure your change is done and really ready for review before you send it on to others. Between context switches and the time taken to do the review, you’re about to cause multiple man hours of work across multiple people. It’s on you to do your homework first.

by Leon Rosenshein

Demands ...

What if I told you that your time was limited? That the number of things you could get done in any given period was limited? You’d probably say “I know that.” And you do. You know you can’t do everything, and you know you can’t do everything at once. However, you’re not in control of what you need to do. Hopefully you’re involved in the decision, but while the how is often fully on the development team, the what and the why usually aren’t.

But what if I told you that you are in complete control of a big part of what you need to do? Broadly speaking, you can split what you need to do into two categories. The things you need to do to add value, and the things you need to do because of things you did to yourself.

A Morpheus meme: “But what if I told you that you have control.”

Adding customer value is pretty straightforward. A new feature. A new capability. Automating the manual. That’s the value demand. The demand on your time/capability where the result is customer value. And providing customer value is what we’re all here for.

The other category, things you need to do because of things you did to yourself, is the set of demands on your time that includes things like keeping the lights on, responding to customer issues, bug fixes, outage response, and other similar “customer service” demands.

That’s what Vanguard calls Failure Demand. They’re even more prescriptive about what Failure Demand is. To them:

It is demand caused by a failure to do something or do something right for the customer. Customers come back, making further demands, unnecessarily consuming the organization’s resources because the service they receive is ineffective.

Put that way, it’s simple. Everything is relative to the customer (internal or external). Add up the two demands and that’s the total demand. The amount you can do is constrained, so for a given amount of capacity, to maximize the value demand you meet, you minimize the failure demand you have to meet.

Equation showing that the sum of value demand and failure demand equals system capacity.

Sure, you could just not do the failure demand, and for a short time you can get away with it. But do it for too long and you lose customers because no matter how much value you think you’re adding, they aren’t getting enough value because of the failure demand they’re living with. A much more sustainable way of dealing with it is to minimize the amount of failure demand that your customers are generating. You do that by moving a little slower. By being more careful to finish what you’re doing. To not release those issues in the first place.

You can’t know everything up front, and you will learn things from releasing, but by paying attention to what you’re doing, by having sufficient testing, by making things resilient, by improving quality overall, you reduce failure demand.

Which naturally gives you more capacity to respond to value demand.

by Leon Rosenshein

The Tao Of Pooh

I’ve always liked Winnie the Pooh. He may say he’s a bear of very little brain, but I think he’s got a lot of deep understanding that we could all benefit from.

Winnie the Pooh talking to piglet, saying To know the way, we go the way, we do the way. The way we do, the things we do, it's all there in front of you.

While I’m reasonably sure Pooh was not an extreme programmer, that’s not a bad paraphrase of what extreme programming and the agile manifesto are getting at. Do the work and the work will show you what needs to be done. You do what you can do, and you find out how to do the next thing.

To know the way,
we go the way,
we do the way.

The way we do,
the things we do,
it’s all there in front of you.

But if you try too hard to see it,
you’ll only become confused.

I am me and you are you.
As you can see;
but when you do
the things that you can do,
you will find the way.

The way will follow you.

For example, I used to fight with my code sometimes. Or more accurately, I fought against it, trying to make it do what I wanted, not what it wanted. Then I realized I was wrong. And not just wrong to be fighting against my code, but wrong about code ownership and wrong about what I was fighting against.

First, and in the long term probably the more important, it wasn’t really my code. I might have written it, and I might have been the person that knew the most about it, but it wasn’t “mine”. Code has its own existence and its own purpose. The work I’m doing with the code is designed to get something done. To add value to the code, to the system. Not to make me more valuable by owning more code. It’s not about me, and while Imposter Syndrome is real, beating your head against some code is not the way to approach it.

Second, and the more tactical part, is that you can’t fight against code. You can’t make it do anything it doesn’t know how to do. You can use it different ways, and for different things. You can use it in ways that it was expected to be used, you can find new ways to use it in new situations, and you can use it in ways that are different and the opposite of how the original writers intended it to be used, but you (largely) can’t make it do something it doesn’t know how to do.

Instead, what you’re really fighting against is yourself. Your understanding (or lack of understanding) of what the code is doing and is supposed to do. How it works, and what its side effects are. The way to approach that is not through fighting or struggling, but through education. Reading documentation. Reading code. Exercising code to characterize how it actually responds (because documentation isn’t always correct). When you understand yourself and your biases, when you understand the code, its strengths, weaknesses, abilities, and constraints, you find that you’re working together, with the code to meet your goals, instead of against it. You find you’ll go farther and you’ll get there faster.

So like Pooh says, it’s all there for you to see. You just need to not try so hard. Let yourself see how things are and how they should be. And that will lead you to the way.

by Leon Rosenshein

Primitive Obsession and Boolean Blindness

George Boole brought us Boolean algebra. There’s tremendous benefit in using it. Boolean algebra is one of the foundations of computer science and factors into a lot of what we do as developers. But sometimes, it can also blind you to a deeper truth.

I’ve talked about primitive obsession before. It’s where you use a base type to represent a specific domain type. Like storing a URI as a string. It works, and it’s faster at first, but it’s also very limiting. Instead of building an object that understands itself, you build the functionality to use the primitive type as if it were the domain type. It works, but future you is going to be unhappy.
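To make the URI example concrete, here’s a minimal sketch of a domain type wrapping the primitive. The class and its validation rules are mine, invented for illustration; a real one would validate far more thoroughly:

```python
from urllib.parse import urlparse


class Uri:
    """A small domain type for URIs, instead of passing bare strings around.

    The object understands itself: it validates on construction, so code
    that receives a Uri never has to re-check that the string is sane.
    """

    def __init__(self, raw: str):
        parsed = urlparse(raw)
        if not parsed.scheme or not parsed.netloc:
            raise ValueError(f"not a valid URI: {raw!r}")
        self._parsed = parsed

    @property
    def scheme(self) -> str:
        return self._parsed.scheme

    @property
    def host(self) -> str:
        return self._parsed.netloc


uri = Uri("https://example.com/path")
# uri.scheme == "https", uri.host == "example.com"
# Uri("not a uri") raises ValueError at the boundary, once,
# instead of every caller defensively re-parsing a string.
```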

Boolean blindness is a specific kind of primitive obsession. That’s where you use a Boolean value to store the value of something that is really more nuanced than that. There are two common ways this happens.

One is where things don’t map neatly to True/False. Instead, you have some possible list of values, and some of them would be considered True and others would be considered False. This happens a lot in state machines, like bug databases. You often want to know if a particular entry is open, active, or closed. That entry, however, can have many more states than that. You could have a Boolean for every possible state and some complex logic that keeps them all updated at the same time, but that’s brittle and error-prone. A better choice might be to have an enum for the possible states and a way to determine if a given state IsOpen or IsClosed.
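A sketch of that enum approach (the states here are made up, but the shape is the point: one place decides which states count as open):

```python
from enum import Enum


class BugState(Enum):
    NEW = "new"
    TRIAGED = "triaged"
    IN_PROGRESS = "in_progress"
    FIXED = "fixed"
    WONT_FIX = "wont_fix"

    def is_open(self) -> bool:
        # The single source of truth for "openness". Adding a state means
        # updating this one set, not a constellation of Booleans.
        return self in {BugState.NEW, BugState.TRIAGED, BugState.IN_PROGRESS}

    def is_closed(self) -> bool:
        return not self.is_open()
```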

The other case is where there is a very clear definition of True/False, but there’s more data you want to store. In my Wikipedia entry (if I had one), there would be a birth date, but no date of death. I could have two variables, a Boolean called isDead and a Date called diedOn. The second would only be meaningful if the first were true. That works, but again, that’s very brittle. You need to make sure that diedOn is only used when isDead is True, and that any change to diedOn changes isDead appropriately. A better choice here would be some kind of nullable or optional date, diedOn. If diedOn has a value then I’m dead and the value is when I died. If it doesn’t, I’m still alive. It might be wrong, but it can’t be inconsistent.1 It’s easy to imagine a much more complex class where isTrue is a complicated function of a bunch of internal state. Most languages won’t let you define the truthiness of a class, but you can imitate it by having a method on the class, IsTrue(), that handles that for you.
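That optional-date idea, sketched out (the names mirror the example above; the class itself is invented for illustration):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class Person:
    name: str
    born_on: date
    died_on: Optional[date] = None  # None means still alive

    @property
    def is_dead(self) -> bool:
        # Derived from died_on, so the Boolean and the date can never disagree.
        return self.died_on is not None


alive = Person("Leon", date(1970, 1, 1))
# alive.is_dead is False; setting died_on makes is_dead True automatically
```

There is no isDead field to keep in sync; the single Optional value is both the flag and the data.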

In other words, sometimes Boolean Blindness hides information by turning a multi-state thing into a two-state thing. In other cases you have the information and you just end up repeating yourself (and often getting out of sync). And sometimes that Boolean really is what you want. The trick is to actually make the choice knowingly, not by default or by mistake.


  1. There’s a whole different article around consistency vs correctness, but that’s something for the future. ↩︎

by Leon Rosenshein

Deploying vs. Releasing

Today’s thought is courtesy of the inimitable @mipsytipsy (Charity Majors).

Deploys and releases are two different things:

DEPLOY – building, testing, and rolling out new code changes; hopefully small, incremental ones, very often

RELEASE – changing user experience in some meaningful way (not just minor bug fixes)

Sounds simple, no? In reality we confuse the two all the time. Or at least conflate them. Every deployment changes how things behave. And the only way to change it is to deploy again. This might not be a big deal if your change/build/validate/deploy cycle is on the order of minutes, but when it’s on the order of hours (or days or weeks) that’s a real problem. You become afraid to deploy, so the cycle time gets longer, more things end up in each deploy, and things get even longer.

It’s the exact opposite of a virtuous cycle. It’s a death spiral that ends up with huge deployment/releases that don’t do what you want, don’t let you respond to feedback, slow down the development cycle, and upset users. I’m pretty sure that’s not how we want things to go. I know I’d much rather be able to release a small change to my users and have them see a small change that I can iterate on. And undo if they don’t like it. Or it has some unintended side effect. Something I can be comfortable releasing now, whenever now is, knowing that if there’s a problem I can quickly undo it.

That gives me confidence to try things. Things that I expect make things better for users, but I need more feedback on. Things that a focus group liked but might not be broadly applicable. Things that change internal data flow but have different operational characteristics than how things are now. Or any other change, really. The easier it is to go back through that two-way door, the more doors I can go through.

Of course, making that happen isn’t easy. You need a rock-solid build/validate/deploy system, which is not a simple thing to build. You need a robust system to distribute and use feature flags (or the equivalent). Also, a non-trivial solution.
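The feature-flag half of that can be sketched in a few lines. The flag store here is a hypothetical in-memory dict; a real system would distribute flags from a config service:

```python
# Hypothetical in-memory flag store. The names (new_checkout_flow,
# checkout, etc.) are invented for this example.
FLAGS = {
    "new_checkout_flow": False,  # code is deployed, but released to nobody
}


def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)


def old_checkout(prices):
    return ("old", sum(prices))


def new_checkout(prices):
    return ("new", sum(prices))


def checkout(prices):
    # Releasing is now just flipping the flag. No new deploy required,
    # and undoing the release is flipping it back.
    if is_enabled("new_checkout_flow"):
        return new_checkout(prices)
    return old_checkout(prices)
```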

You also need a way to clean up after yourself. You can’t leave everything behind a feature flag forever. If nothing else, the combinatorial explosion of possibilities will make it impossible to validate even a majority of the possible configurations. Some of them will be contradictory. Some will just break. The code gets too ugly and hard to understand. You end up with nested configs and then you can’t turn one thing on/off by itself, which defeats the purpose of doing this in the first place.

Now not every change should have deployment separated from release. It depends. Sometimes there are underlying changes that must happen at the same time as the code change. Sometimes the change is so pervasive that making it optional doesn’t make sense. But those kinds of things are much rarer than you think.

So instead of just assuming Deploy == Release, think about what it would take to make deployment and release two entirely separate actions.

by Leon Rosenshein

Let It Flow

Like many things, I’ve talked about Flow and WIP before. The idea that what you want to optimize for is getting things done, not doing things. That’s a pretty subtle difference, but it’s an important one.

Or as it was said in The Principles of Product Development Flow,

In product development, our problem is virtually never motionless engineers. It is almost always motionless work products.

– Donald Reinertsen

So much of how we work is designed to keep us busy. Blocked on waiting for someone else to do something? Start another task. Waiting for some tests to run? Start refactoring your code. Whatever you do, don’t just do nothing.

Now to be clear, I’m NOT recommending you sit around and do nothing if you can’t work directly on whatever it is you’re doing. That is not going to help.

But instead of doing nothing or starting the next task, maybe help the person you’re waiting for. They might get done sooner, which helps them get back to what they were supposed to be doing. Even if it makes things take a bit longer, the next time you’re in that situation you won’t need to bother them. You won’t be blocked, and they won’t be interrupted. Better for everyone.

Or another typical case. You need to run an integration test. It takes 20 minutes, so as soon as you start it, you go do something else. It takes you 15 minutes to context switch and get back up to speed. Just as you start getting somewhere, the test finishes. But you don’t notice because you’re busy. 10 minutes later you notice and go back to the first task. After another 15-minute context switch. You’re back to where you were before the test started. Think about it. You just spent 40+ minutes (2 context switches and a bit of time actually working) for 10 minutes of progress. Do that 3 or 4 times a day and you’ve wasted way more time than you’ve been productive.
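The arithmetic in that scenario is worth making explicit (the numbers are the ones from the story above):

```python
def time_spent(context_switch_min: int, switches: int, useful_min: int) -> int:
    """Total wall-clock minutes burned for a given amount of useful progress."""
    return context_switch_min * switches + useful_min


# Two 15-minute context switches wrapped around ~10 minutes of actual
# progress on the second task.
total = time_spent(context_switch_min=15, switches=2, useful_min=10)
# total == 40 minutes spent for 10 minutes of progress
```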

Another quote from the book is

Since high capacity utilization simultaneously raises efficiency and increases delay cost, we need to look at the combined impact of these two factors. We can only do so if we express both factors in the same unit of measure, life-cycle profits. If we do this, we will always conclude that operating a product development process near full utilization is an economic disaster.

I know there are a lot of management buzzwords in there, and SAFe has picked up on this quote in particular, but that doesn’t mean it’s wrong. Like most things in engineering, it’s a trade-off, and the answer always starts with “It depends …”. And with all trade-offs, the way to make it is to understand what you’re trading off against something else, and what your priorities are. It comes back to: are you trying to optimize for doing things, or for getting things done?

Here are some more interesting quotes from the book. There’s a lot to think about there.

by Leon Rosenshein

K. I. S. S.

This is something I keep coming back to. Whether it’s talking about The Unix Way or the difference between complicated and complex, we seem to like complexity.

Developers are drawn to complexity like moths to a flame, often with the same outcome

– Neal Ford

It’s not surprising. It’s in the nature of the problems we’re trying to solve. We’re often dealing with large problems in real-world situations. People are involved, and people are complex. While they may be somewhat predictable in an aggregate sense, even Hari Seldon’s psychohistory didn’t (and couldn’t) predict the Mule. Or as Ian Malcolm said, life finds a way. Complexity is all around us, and we often feel like the only way to tame it is to build something even more complex.

But what if it’s not? What if the best way to deal with complexity is to make things simpler? To build things that are in and of themselves very simple. They do one thing and do it well. They do it in response to an input. You can reason about them. You can make predictions about them. They don’t surprise you. And when they fail, they fail in very specific, predictable, handleable ways. Which means if one does something unexpected, it’s easy to figure out why. Then you can figure out how to keep it from doing that. And make it even more predictable.

If you build something that way, something that has very narrow inputs and only a few outputs, the space it operates in, its operational domain, is very small. The smaller the operational domain, the less complex the behavior. And if one part is less complex, the rest of the system can be less complex. The more things you can make less complex the more you can make other parts less complex. It’s a virtuous cycle.

I can hear you saying that making things less complex is great and all, but the problems we’re looking at are complex, and we need to deal with that complexity. That’s true. Things are complex. We need to deal with them. The thing is, we need to deal with them at the system level, not the component level. We all want to build the big, new, shiny, complex, solution to all the world’s problems in one fell swoop, but that’s unlikely to be the best answer.

You can combine all those simple components in complicated ways. Consider the computer I’m using to create this entry. At its core, it’s a collection of gates. 1’s and 0’s. There are lots of them, and they’re connected in very complicated ways. And I can do almost anything with them. But complication is something we can deal with. Complicated things are knowable. And that’s the key.

We can almost always handle complex behavior with a complicated arrangement of simple, straightforward steps. It’s only for the rest of the cases that you need to build a complex solution.
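As a toy illustration (everything here is invented for the example), each step below is trivially simple and easy to reason about on its own; all the complication lives in how they’re arranged:

```python
# Three simple, predictable steps. Each does one thing well.
def strip_whitespace(s: str) -> str:
    return s.strip()


def lowercase(s: str) -> str:
    return s.lower()


def collapse_spaces(s: str) -> str:
    return " ".join(s.split())


def pipeline(*steps):
    """Combine simple steps into a complicated, but knowable, whole."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run


normalize = pipeline(strip_whitespace, lowercase, collapse_spaces)
# normalize("  Hello   World  ") == "hello world"
```

Debugging the combined behavior is just inspecting the value between knowable steps, which is exactly the property the paragraph above is after.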

So is your complex problem one that can be solved with a complicated arrangement of simple things, or do you need a complex solution? The answer to that question is “It Depends”.

by Leon Rosenshein

Adding Estimates

I’ve talked about estimation many times before. I’ve talked about Kent Beck, and I’ve talked about Hillel Wayne. Today I’m going to talk about all three of them at once. I’m not sure how directly this applies to development, but I’m pretty sure there’s something there. Maybe I’ll have figured it out by the end of the post.

Beck recently wrote about private estimates and public progress. What it basically came down to was that the entire team gets together on Monday and decides what they think they can get done that week, given all they know of the situation, the priorities, and the goals. Then, at the end of the week, the team gets together again, looks at what was done, then answers the question “How did we do?”.1 It’s a simple idea. He then goes on to give a list of pros you get from doing that and some things to avoid.

You get limits. You know what you’re NOT doing. You know what done looks like. You get focus and alignment.

On the other hand, you can’t add weeks. Or compare them. Or compare teams. It just doesn’t work.

But it also got me to thinking about something Hillel said. You can’t add times or temperatures and get something meaningful, but you can subtract them and average them. Huh? On the surface, that doesn’t make any sense. I mean it’s true, and it feels right, but huh? Subtraction is just the inverse of addition. Why can you do one and not the other? Averaging is adding things up then dividing. So why can you add and divide, but not just add?

As he explains, the reason it works is that what you’re really doing is not staying in the domain of time or temperature. What you really end up doing is working in the domain of deltas in time or temperature. And that works because you can add and subtract deltas. When you add two times the result isn’t a time. I’m not sure what it is, but it’s not a point in time. When you subtract two times you get the difference between them. The delta. And I know what that is. It’s a time period. That’s a real thing.

Taking the average of a group of times (or temperatures) is really taking the delta between each time and a common arbitrary time point, then adding the deltas, dividing that time period by the number of times you’re averaging, then adding that time period back to the same arbitrary time point. All of those are valid operations, and you end up with a single point in time that we all agree is the average of that collection of time points.
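That procedure maps directly onto code. Python’s datetime arithmetic happens to mirror the point/delta distinction: subtracting two datetimes gives a timedelta, and only timedeltas can be summed and divided:

```python
from datetime import datetime, timedelta


def average_time(times: list[datetime]) -> datetime:
    """Average points in time by working in the domain of deltas."""
    epoch = times[0]  # any arbitrary reference point works
    # datetime - datetime -> timedelta; deltas are addable and divisible.
    total = sum((t - epoch for t in times), timedelta())
    # Add the averaged delta back to the reference point to get a point
    # in time again.
    return epoch + total / len(times)


times = [datetime(2024, 1, 1, 12, 0), datetime(2024, 1, 1, 14, 0)]
# average_time(times) == datetime(2024, 1, 1, 13, 0)
```

Note that the language itself enforces the argument above: `datetime + datetime` raises a TypeError, because adding two points in time isn’t a meaningful operation.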

Now that I’m getting towards the end, here’s what I think. Estimates are kind of like times and temperatures in that they’re based on a known starting point. If you could somehow compare them all with the same starting condition then, just like time or temperature, you could find the delta and do your math in that space. Unfortunately, the starting point for an estimate isn’t an estimate, or a point in time, or anything else easily measurable. Instead, it’s the sum total of what you know about a situation. How things are, how you want them to be, the environment around you, and the expected interruptions among other things.

When you make an estimate for this week you know everything you know about the current state. How correct you are is a different question, but you know all you can know right now. If you could know more then you would. However, the estimate you want to make for next week relies on having that same level of knowledge. You might have that knowledge now, but when you get to the beginning of next week, when you should be making that estimate, the current state will be different. How different? I don’t know. You don’t know.

For next week, it will probably be something like what you estimate for this week, but not exactly. And over time the difference between your expectation and reality will grow, to the point where you have no idea what the starting condition is.

Which means that you can’t pick a starting condition to subtract from all of the estimates, so you can’t generate a relative estimate delta. And if you can’t turn your estimate into a delta, then there’s nothing you can add, or even compare.2

The same thing applies to comparing teams. Or doesn’t apply. Because the teams are different there can’t be any shared arbitrary point to use to create a delta from. And again, without that delta, there’s nothing to compare.

And that’s how Kent Beck, Hillel Wayne, and estimates come together to explain why you can’t just add up a bunch of weekly estimates and come up with a 12-month plan.


  1. You know what, that’s a pretty clean definition of XP/agile. ↩︎

  2. I leave it to someone with more math than I have to explain that in formal logic. ↩︎

by Leon Rosenshein

You Broke What?

They say that with motorcycles there are riders who have laid their bike down at least once, and riders who will lay their bike down in the future, and that there are no other kinds of riders. I’m not a motorcycle rider, so I can’t comment on the accuracy of that statement.

I am, however, a software developer, and I can say that there are 10 kinds of developers. Those that understand binary and those that don’t. And there are 2 other kinds of developers. Those that have broken production software at least once, and those that will.

Web comic from workchronicles.com. One developer worrying about being fired for breaking production and another telling him not to worry. If developers were fired for breaking production everyone would be fired.

There are two important things to take away from that. First, you will break production at some point. Don’t let that paralyze you. Second, don’t just YOLO it over the wall and walk away. Pay attention to what happens. Make sure things are doing what you expect, and not doing what you don’t expect. And be prepared to fix things.

Actually, there are a lot of things you should take from that besides those two. And the biggest of them revolve around two areas: preventing issues and recovering from issues.

You want to do everything you can to prevent issues from happening. The things you’ve seen happen. The things you’ve heard about happening. The things you can think of going wrong. The things others can think of going wrong.

Things like running unit and acceptance tests first. If possible, shadowing production. Then sending a percentage of production work to the new version. And things like making sure the sequencing is in the right order. You add things before you reference them. You stop referencing things before you delete them. Making sure everything is done before flipping to new versions. Those sorts of things. And don’t forget Hyrum’s Law. Somewhere, someone is using your system in ways you didn’t expect, and probably relying on something you didn’t realize was happening. When you find it you’ll need to deal with it.
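The “sending a percentage of production work to the new version” part is often done by hashing a stable identifier. This is a sketch of one common approach, not anything from a specific tool:

```python
import zlib


def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically route a percentage of traffic to the new version.

    Hashing the user id (rather than sampling randomly per request) keeps
    each user on the same side of the rollout across requests, which makes
    problems reproducible and roll-back behavior predictable.
    """
    bucket = zlib.crc32(user_id.encode()) % 100
    return bucket < percent


# At 0% nobody gets the new version; at 100% everybody does, and any
# given user always gets the same answer for a given percentage.
```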

Bottom line? You don’t want to keep getting bitten by the same kinds of problems when you can find and prevent them.

The other thing to take away is that you need to be prepared to deal with problems. The simplest one to be prepared for is the roll-back. Something bad happened? Putting it back the way it was should be a fast, one-click, operation. If you can’t do that, work towards it. Because it will happen. And don’t forget to periodically test the roll-back. There are much better times than when you’re under pressure to find out that you have a write-only backup.

And after the fact, don’t forget the blameless review. You might call it a post-mortem, an incident review, lessons learned, or something else entirely. Amazon calls them COEs. Some parts of Microsoft call them Root Cause Analysis. Regardless of what you call them, you should do them. And pay attention to the results. Check out what the Pragmatic Engineer has to say about them.

How careful should you be? It Depends. It depends on how bad things can go and how fast you can recover. So think about what you’re doing. Think about how you can recover. Then make the change. And eventually, you’ll join the club of those who have broken production. I look forward to welcoming you.

by Leon Rosenshein

The Dude Abides

The Big Lebowski is a cult classic from the late 90s. It’s all about Jeff “The Dude” Lebowski and a series of events and (mis)adventures that happen to and around him. There are many quotable lines, but one of the most quoted, one that shows up on posters, shirts, memes everywhere, and is the last line The Dude says in the movie, is “The Dude abides.” It’s a paraphrase of a line in Ecclesiastes

One generation goeth, and another generation cometh; but the earth abideth for ever.

Things change. Generations come and go. But the Earth abides. It doesn’t go away. The same with The Dude. Despite everything that happens in the movie, The Dude goes on as he is. Accepting life. Calmly enjoying it. Going bowling and drinking white russians.

Which brings me to today’s topic, Dude’s Law.

Value = Why / How

A picture based on DaVinci’s Vitruvian Man, with the man replaced by The Dude from The Big Lebowski. On both sides of the picture is Dude’s law V = W/H

Dude’s law was coined by David Hussman, and he explains it in this short video. Dude’s law is simple. The value of doing something is proportional to why you’re doing it, and inversely proportional to how you’re doing it. For any given why/how pair, if you double the why you get twice the value. If, on the other hand, you double the how, you get half the value.

That’s pretty non-denominational, so let’s try to be a little more specific. Why measures the benefit. The thing you want and get more of. It’s based on the reason you’re doing something. It helps you to quantify the good things that happen if you do whatever it is. How measures the cost of doing it. That includes time, effort, money, and any other resource you put into doing the thing. Or talking about doing the thing.

It’s still pretty non-denominational, but at least now, value is benefit / cost. That’s a fairly common definition, and something that’s easy to get behind.

The important thing to remember here is that value is inversely proportional to the how, and the how in Hussman’s explanation includes time spent discussing how. If you spend too much time talking and thinking about how and don’t improve the how by enough, you’ve actually reduced value. An obvious example of this is XKCD’s Is It Worth The Time. Automation can help, but if you spend more time automating than the automation will save you, then you haven’t added value.
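The break-even arithmetic behind that XKCD chart is simple enough to write down (the function name and numbers are mine, made up for the example):

```python
def automation_payoff_min(time_saved_per_use_min: float,
                          uses: int,
                          automation_cost_min: float) -> float:
    """Net minutes gained. Negative means the automation cost more time
    than it will ever save: the how went up more than the why."""
    return time_saved_per_use_min * uses - automation_cost_min


# Saving 2 minutes, 50 times, for an hour of automation work: worth it.
# automation_payoff_min(2, 50, 60) == 40
# Saving 2 minutes, 20 times, for two hours of work: value went down.
# automation_payoff_min(2, 20, 120) == -80
```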

It’s another local vs global maximization tradeoff. Consider making a technology choice for the next feature. You’ve got a system that was built using Go. There’s a cool new feature in Rust that you could use that would make the task easier. But the team doesn’t know Rust and the development environment doesn’t support Rust. The higher the how, the less value you create. So without changing the why you probably shouldn’t choose Rust in this case.

Unless… you also change the why. If there’s a good reason to move the entire system to Rust things are different. If your why goes up at the same rate as the how, then the value is not reduced. In that case, the higher how is justified by the higher why.

So yet again, we come back to it depends. Which is just another way of expressing Dude’s Law. Value is benefit / cost. That doesn’t change. It’s up to you to define why and how.