Recent Posts (page 1 / 66)

by Leon Rosenshein

Testing Schedules

Yesterday I talked about different kinds of tests, [unit, integration, and system](/posts/2025/04/02). I mentioned that not only are there different kinds of tests, but those tests have different characteristics. In addition to the differences in what you can learn from tests by classification, there are also differences in execution times and execution costs.

Venn diagram for different test types

NB: This is semi-orthogonal to test driven development. When you run the tests is not the same as when you write the tests.

These differences often lead to different schedules for running these tests. Like any dynamic system, one important tool to maintain stability is to have a short feedback loop. The faster the tests run, and the more often you run them, the shorter you can make your feedback loop. The shorter your feedback loop, the faster you can get to the result you want.

Luckily, unit tests are both cheap and fast. That means you can run them a lot. And get results quickly. The questions are, which ones do you run, and what does “a lot” mean? If you’ve got a small system, and running all the tests takes a few seconds, run them all. While building and running any individual test is fast, in a more realistic setting, building and running them all can take minutes or hours. And that’s just not practical. You need to do something else. This is where a dependency management system, like Make or Bazel, can help. You can set them up to only run the tests that are directly dependent on the code that changed. Combine that with some thoughtful code layout and you can relatively easily keep the time it takes to run the relevant (directly impacted) tests down.
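A toy sketch of the idea, assuming a hand-written module dependency map (real tools like Make or Bazel derive this from build files automatically): given a changed module, walk the reverse dependency graph to find every module whose tests need to run.

```python
from collections import defaultdict

# Hypothetical module -> direct dependencies map, purely for illustration.
DEPS = {
    "billing": ["db", "auth"],
    "auth": ["db"],
    "api": ["billing", "auth"],
    "db": [],
}

def impacted(changed):
    """Return the changed module plus everything that transitively
    depends on it -- the set whose tests we need to run."""
    # Invert the graph: module -> modules that depend on it directly.
    rdeps = defaultdict(set)
    for mod, deps in DEPS.items():
        for d in deps:
            rdeps[d].add(mod)
    result, stack = set(), [changed]
    while stack:
        mod = stack.pop()
        if mod not in result:
            result.add(mod)
            stack.extend(rdeps[mod])
    return result

print(sorted(impacted("auth")))  # → ['api', 'auth', 'billing']
```

Change a leaf like `db` and everything is impacted; change `api` and only its own tests need to run. That asymmetry is where the thoughtful code layout pays off.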

Running quickly is important, but when do you run them? Recommendations vary from “every time a file is changed/saved” to every time changes are stored in your shared version control system. Personally, I think it’s every time there’s a logical change made locally. Sometimes logical changes span files, so testing based on any given file doesn’t make sense. You want to run the tests before you make a change to make sure the tests are still valid, you want to run the tests after each logical step in the overall change to make sure your changes haven’t broken anything, and you want to run the tests when you’re done to make sure that everything still works. That’s a good start, but it’s not enough. In a perfect world your unit tests would cover every possible combination of use cases for the system under test (SUT). But we don’t live in a perfect world, and as Hyrum’s Law tells us, someone, somewhere, is making use of some capability you don’t know you’ve exposed. A capability you don’t have a unit test for. So even when all your unit tests pass, you can still break something downstream. At some point you need to run all the unit tests for all the code that depends on the change. Ideally before anyone else sees the change. You run all those tests just before you push your changes to the shared version control system.

Unfortunately, unit tests aren’t enough. Everything can work properly on its own, but things also must work well together. That’s why we have integration tests in the first place. When do you run them? They cost more and take longer than unit tests, but the same basic rule applies. You should run them when there’s a complete logical change. That means when any component of any integration test changes, run the integration test. And again, just running the directly impacted integration tests isn’t enough. There will be integration tests that depend on the things that depend on the integrations you’re testing. You need to test them as well. Again, ideally before anyone else sees the change.

Then we have system level, or end-to-end tests. Those tests are almost always slow, expensive, and require real hardware. Every change impacts the system, but it’s just not practical to run them for every change. And even if you did, given the time it takes to run those tests (hours or days if there’s real hardware involved), running them for every change would slow you down so much you’d never get anything done. Of course, you need to run your system level tests for every release, or you don’t know what you’re shipping, but that’s not enough. You need to run the system tests, or at least the relevant system tests, often enough that you’ve got a fighting chance to figure out which change made the test fail. That’s dependent on the rate of change of your system. For systems under active development that might be every day or at least multiple times per week; for systems that rarely change, it might be much less frequent.

There you have it. Unit tests on changed code run locally on every logical change, before sharing, and centrally on everything impacted by the change after/as part of sharing with the rest of the team. Integration tests run on component level changes locally before sharing, and centrally on everything impacted by the change after/as part of sharing with the rest of the team. System level tests run on releases and on a schedule that makes sense based on the rate of change of the system.

Bonus points for allowing people to trigger system tests when they know there’s a high likelihood of emergent behavior to check for.

by Leon Rosenshein

Test Classification

Whether you are thinking about unit vs integration vs system tests, or build (or save) time vs check in time vs release time tests, what you’re really thinking about is test classification and test hierarchy. Or put another way, you’re thinking about why you’re running that test and what the goal of the test is.

Of course you want the test to pass. And you want that pass to mean something. Even if the result you’re looking for is a failure in the system under test (SUT), you want to see that failure so your test passes. But I’m not talking about what makes a good test. That’s a different topic for a different time.

The topic for today is, instead, what is the purpose of the test. What level of functionality are you trying to test? Knowing the purpose of the test can help you figure out how to classify it. That can then help you figure out how and when to run it.

First, some basics on test classification. There are many types of tests, but in broad strokes, you can think of them applying at three levels: unit, integration, and end-to-end or system. To make things more real, let’s consider a clock application.

The test pyramid. Slow, expensive end to end tests on top, integration tests in the middle, and fast, cheap unit tests at the bottom

Unit tests are tests that validate things at the functional level. They typically live and execute at the function/class level. Do these handful of things work well in isolation? Do they do what they say they will do, and handle failure gracefully? They typically take less than a second to set up, run, and tear down. Things at the unit level might include configuration, storage, or accessing the host’s notion of date and time.
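For the clock app, a unit test at this level might look something like the following sketch. The `format_time` function and its bounds are hypothetical, invented purely for illustration:

```python
import unittest

def format_time(hour, minute):
    """Format a 24-hour time as HH:MM, rejecting out-of-range values."""
    if not (0 <= hour <= 23 and 0 <= minute <= 59):
        raise ValueError(f"invalid time {hour}:{minute}")
    return f"{hour:02d}:{minute:02d}"

class FormatTimeTest(unittest.TestCase):
    # No I/O, no network, no shared state: milliseconds to run.
    def test_pads_single_digits(self):
        self.assertEqual(format_time(9, 5), "09:05")

    def test_handles_midnight(self):
        self.assertEqual(format_time(0, 0), "00:00")

    def test_rejects_invalid_hour(self):
        with self.assertRaises(ValueError):
            format_time(24, 0)
```

Run with `python -m unittest`. Note that each test checks one behavior, including graceful failure, in isolation.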

Integration tests are the tests that validate things at the boundaries of domains, functions, or classes. How does class A work with class B? How does your CRUD layer work with the underlying database? How does your logging/timing/resource management system work with its consumers? These tests might take a couple of seconds, and might require access to some host system. For the clock app, you might test reading and writing configuration with the storage system.
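A sketch of that configuration-plus-storage integration test, assuming a hypothetical `ConfigStore` that persists settings to a JSON file. The point is that it exercises the boundary with the real filesystem rather than a mock:

```python
import json
import tempfile
from pathlib import Path

class ConfigStore:
    """Hypothetical config layer for the clock app, backed by a JSON file."""
    def __init__(self, path):
        self.path = Path(path)

    def save(self, settings):
        self.path.write_text(json.dumps(settings))

    def load(self):
        return json.loads(self.path.read_text())

# Integration test: write through the config layer, read it back through
# the real filesystem, and verify nothing was lost at the boundary.
with tempfile.TemporaryDirectory() as tmp:
    store = ConfigStore(Path(tmp) / "clock.json")
    store.save({"alarm": "07:30", "snooze_minutes": 9})
    assert store.load() == {"alarm": "07:30", "snooze_minutes": 9}
```

The temporary directory is what makes this take seconds instead of milliseconds, and what makes it need a host system.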

End-to-end or System tests are the tests that validate how the system as a whole works. Does it meet the end user’s expectations, or at least not surprise them greatly when it doesn’t? System tests are the ones that validate that even though a bunch of things failed along the way, the system managed to do the right thing, or at least avoided doing the wrong thing. This is where you’ll test emergent behavior as the different parts of the system interact. It’s often only at the system level that you can test what happens when 4 different things fail in a specific way, because the system level test is the only one where those 4 different components are working together. These tests can take much longer, and often require the real system, or at least a trusted emulator. For that clock, it might be setting up an alarm and making sure it sounds at the appropriate time.
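Waiting for a real 7:00 AM would make that alarm test unbearably slow, so one common trick is to inject a fake clock as the “trusted emulator.” A minimal sketch, with every class and method name invented for illustration:

```python
class FakeClock:
    """Stand-in for the host clock; the test can fast-forward time."""
    def __init__(self, start_minutes):
        self.now = start_minutes  # minutes since midnight

    def advance(self, minutes):
        self.now += minutes

class AlarmSystem:
    """Hypothetical end-to-end alarm flow: set, tick, ring."""
    def __init__(self, clock):
        self.clock = clock
        self.alarm_at = None
        self.rang = False

    def set_alarm(self, minutes_since_midnight):
        self.alarm_at = minutes_since_midnight

    def tick(self):
        if self.alarm_at is not None and self.clock.now >= self.alarm_at:
            self.rang = True

# Drive the whole flow end to end against the emulated clock.
clock = FakeClock(start_minutes=6 * 60)   # 06:00
system = AlarmSystem(clock)
system.set_alarm(7 * 60)                  # alarm set for 07:00
system.tick()
assert not system.rang                    # too early; nothing sounds
clock.advance(61)                         # jump past 07:00
system.tick()
assert system.rang                        # the alarm fired
```

Against the real system that same scenario takes hardware and wall-clock time; the emulator buys you a fast approximation, not a replacement for the real run.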

I’ve mentioned the time it takes for these various test types, but it’s not just time that changes. Cost also changes. Running unit tests is almost free. Just a few CPU cycles and that’s it. System tests, on the other hand, can be very expensive. You need to build the entire system and deploy it. To not just the appropriate hardware, but special hardware that you have extra access to for debugging. That all takes time. And money. Unless you’re doing manual testing. Which takes even more time and money.

Most tests fit reasonably well into one of these three buckets. If one of your tests doesn’t, think about breaking the test up into multiple tests so that it does. Once you know which bucket to put your tests in you can move on to the next step, figuring out when you should be running it. I’ll cover that in a different post.

On the other hand, if most of your tests don’t, think about your test design. If your test design seems reasonable, but your tests themselves don’t fit into those three buckets, think about the underlying system design. If your system is untestable at the unit level, that’s not a testing problem, that’s a design/architecture problem. Fix that first. Then recognize that you’re practicing Test Driven Development.

And that’s a good thing.

by Leon Rosenshein

I'm Back

It’s been a while since I’ve published a new entry. Not because I haven’t thought of things, but because I got sidetracked with life and work for a bit, then I got out of the habit of writing. Which is a great topic to write about. So here I go. Talking about writing. And habits. And personal lessons.

We all have free will. We get to decide what we want to do. Not in a vacuum of course. There is always an impact to our choices. You need to balance the costs and benefits of a choice. The better visibility you have into those costs and benefits, the better decision you can make. Just remember that making a good decision is not the same as having a good outcome of the decision.

In my case, back in the middle of last year I got busy. Busy at work and busy outside work. As people familiar with Spoon Theory know, when you run out of spoons you need to stop, so you need to use your spoons thoughtfully. Looking at the things I needed to do, the things I wanted to do, and the things I could do, I decided to stop working on this blog.

And in retrospect, it was a good decision. The things that needed to be done got done. I was able to do the things I wanted to do that were most important to me, and I was able to put enough effort into them to do them well. I’m happy with the choice I made, and in the same situation, I’ll go through the same process.

However, in retrospect, one thing I missed in my decision was how much momentum and habit play into things. One of the reasons I was writing so much was that I was in the habit of writing. I had some momentum, and that kept me going. When I stopped, I got out of the habit and lost the momentum. Even worse, I got in the habit of saying “I’ll get back to it soon”. And that’s a dangerous habit to have.

What I should have done was extend my decision with some exit criteria. That would have helped me not get into the habit of not writing. Instead of realizing it’s been 9 months since I posted a new entry, I would have had both reminders and a reason to get back to it. Because I do like writing. And sharing. And hopefully others are getting something out of it as well. So here we are. I’m writing blog posts again, and working on building back that habit.

And to bring this back to helping you, my Friendgineers, it’s something that we need to remember as software developers. When we write, whether it’s emails, docs, blog posts, or code, we have habits. Generally, our habits help us by keeping us from having to decide every little detail. One space or two between sentences? (One) Oxford comma or not? (Yes) Indent or blank line between paragraphs? (Blank line) Those habits are useful.

But sometimes, when we make a decision, like deciding to move quickly to get something working right now, that should have exit criteria but doesn’t, we end up without an important habit, or possibly worse, with a new habit, that gets in our way later. Like the habit of not thinking about forward or backward compatibility, not worrying about separation of concerns, or not writing unit tests. Or maybe hard-coding configurations, or choices? Sometimes you do those things for speed, or expediency, but those are not things you want to make a habit of.

So when you do make those decisions, know your exit criteria, and follow them. If you don’t have them, create them. And above all, be careful what habits you pick up. Or lose.

by Leon Rosenshein

On Busyness

As I’ve mentioned previously, Winnie the Pooh makes a great coach for folks interested in extreme programming. Things are the way they are and we have to live with what is. We can learn from it. We can change it. But we have to deal with the reality of what is.

There’s another part of the Tao Of Pooh we can learn from. It’s the Bisy Backson.

Gon Out. Backson. Bisy. Backson, C.R.

In the book, Rabbit is looking for Christopher Robin, but instead of finding him, Rabbit finds the note above. He can’t figure out exactly what it means and becomes a “Bisy Backson” trying to find Christopher Robin.

In the story, the Bisy Backson has to always be moving. Always doing something, going somewhere, full of sound and fury, signifying nothing. Just to prove, to themselves and others, that they’re important. Because if you’re doing something important, you must be important.

Unfortunately, that’s one of those things we all know that just ain’t so. And ain’t so on multiple levels. First, and most straightforward, there’s no transitive relationship between the importance of the work and the importance of who’s doing it. The work is important, and getting it done is important, but in most cases, it doesn’t matter who does it. A better way to look at this is through the value lens. Not “I am doing important things so I am important”, but “I am valuable because I am doing things that add value.” A subtle, but important, difference.

Second, and probably the most insidious thing, is that the Bisy Backson is a great example of the difference between outputs and outcomes. They do lots of things and there’s lots of activity, but not a lot of results. And from the outside it looks like progress. That’s the sound and fury part. To make matters worse, that appearance of progress is often incentivized by the systems we work in. This one is hard to manage because it requires self-awareness. Again, the value lens is a good way to combat this. Is what you’re doing high value or not? It doesn’t matter if it’s high output if the value is low.

Third, and hardest to see, is the opportunity cost of being busy. I’ve talked about the importance of slack time, and this is still a great explanation of how busyness and parallelization can work against reducing overall time. The Bisy Backson doesn’t see this. They’re too busy doing things to see that doing less might be faster. And it’s certainly faster than doing something that’s just going to sit in unfinished inventory for a while, or worse, doing the wrong thing because we don’t know what the right thing is yet. The value lens helps here, as it usually does, but it’s not enough. One of the things that traps the Bisy Backson is the local maximum (or minimum) problem. If you don’t take the time to look at the bigger picture the Bisy Backson will quickly find themselves on a peak looking across the valley at the higher peak they should have been moving towards. The antidote here is to step back and look at the bigger picture and understand what it is you’re really trying to do.


On a personal note, there’s another kind of time when I deal with the Bisy Backson inside myself. That’s when something significant enough happens that I need to take time to process it, but I’m not ready to process it headfirst in real time. At times like that I’ll often choose to be the Bisy Backson to engage the high-order processing nodes in my head and let the issue rattle around and clarify itself. That’s where I’ve been the past week. Someone at work passed away unexpectedly. Someone I’ve worked closely with for over 3 years. There are lots of little reminders of the loss, and each one is a distraction. I’ve been using my inner Bisy Backson to give me the time and space to work through it at my own pace.

So while busyness for its own sake might not be the best thing, busyness as a tool can be useful. The hard part is knowing which situation you’re in and making the appropriate choice.

by Leon Rosenshein

That's The Way It Is

I’ve said before that It Depends is just E_INSUFFICIENT_CONTEXT written so humans can understand it. There’s another common phrase that often hides a much deeper meaning.

That’s Just How It Is

The thing about that sentence is how passive and accepting it is. Particularly in the word just1. Without just it’s a description of the current state. Adding just adds another whole dimension. It changes the sentence from a description of what is to a comment on what is.

And implicit in that comment is context. The context that says not only are things the way they are, but that you’re powerless to do anything about it. I assert that that last part is untrue.

There may be limits on how much you can do, but it’s not nothing. At the very least, if you know that things are that way, you can expect it. And plan for it. Since I’m a software developer I’ll use a car analogy. Say you’re on a road trip and a road you want to use is closed.

You can drive right up to the sign, then stop and wait for someone to open the road, tell you to turn around and go home, or provide a detour. Or, depending on when you find out, you can plan a different route, decide not to go or to go somewhere else instead, or maybe decide a phone call gets you enough of what you want, and do that instead.

The difference is agency. If that’s just the way it is, you have no agency. On the other hand, if that’s the way it is, you have some control over your destiny. You can do something.

Coming back to software development, the same thing applies. There are events that happen that are outside your control. You do have to accept them. Requirements change. Hardware fails. You get bad input. What you do about it is up to you.

Depending on how much control you have, what you do is different. Sometimes you have enough control to prevent the problem. Or at least prevent the problem from impacting you. Ensure there are redundant systems to mitigate hardware issues. Sanitize your inputs when you get them, and if possible, where they are generated. Knowing that requirements change, leave some slack in the schedule. You’ll still run out of time (Hofstadter’s Law), but it won’t be as bad as it might have been.

Or maybe all you can do is add a bit of resilience to the system. Knowing that your inputs are unreliable, even after doing some sanitizing, reject them. Instead of crashing or passing on the problem to someone else, stop what you’re doing and return some kind of error to someone who can do something about it. If you can’t do that, at least log enough information so that you know what happened. And automate the recovery process. Or if you can’t do that, script it. There have been many times where I wasn’t in a position to prevent a problem from happening, but once I knew it could happen, I can’t think of a single time where there was nothing I could do to make things easier to diagnose and/or recover from the situation.
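A small sketch of that reject-and-log pattern, with the `parse_reading` function and its sanity bounds entirely made up for illustration: bad input gets rejected at the boundary, with enough logged to diagnose what happened, and the caller gets a signal it can act on instead of a crash.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("ingest")

def parse_reading(raw):
    """Parse a hypothetical sensor reading. Instead of crashing or
    passing the problem downstream, reject bad input: log enough to
    diagnose it, and return None so the caller can decide what to do."""
    try:
        value = float(raw)
    except (TypeError, ValueError):
        log.warning("rejecting unparseable reading: %r", raw)
        return None
    if not (-50.0 <= value <= 150.0):  # sanity bounds, an assumption
        log.warning("rejecting out-of-range reading: %r", raw)
        return None
    return value

assert parse_reading("21.5") == 21.5
assert parse_reading("banana") is None   # unparseable: logged and rejected
assert parse_reading("9999") is None     # parseable but implausible
```

Returning `None` here is the simplest possible error channel; in a real system you might return a richer error type, but the principle is the same: know it can happen, and leave yourself something to diagnose with.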

What makes it possible is the mindset change that comes from dropping the just. From changing a comment that makes you powerless to a statement of reality that you can do something about.

That’s just the way it is


That’s the way it is

Come to think of it, that’s good advice not just for software development, but for life in general.


  1. Just and But as modifiers, the difference between them, and how different people use them is a whole separate topic for another day. ↩︎

by Leon Rosenshein

The Power Of No

A long time ago I read a story by Eric Frank Russell, And Then There Were None. On the surface it’s a typical re-contact story. There was a great diaspora during which hundreds or thousands of interstellar ships left Earth to start new colonies. Shortly afterward, something happened, and contact was lost with all of them. Now, four hundred years later, the people of Earth are trying to reconnect with those colonies.

Without going into too much detail, the ship (it’s never named) lands on planet K22g and the ambassador on board tries to find the folks in charge. Unfortunately, he can’t seem to do that. It turns out that the people who colonized the planet are followers of Gandhi’s approach to society and have turned civil disobedience and barter into their social system. In the story, two phrases that keep coming up and seem to define their society are F:IW and myob. It takes a while for the new arrivals to figure out that they are shorthand for “Freedom: I Won’t” and “mind your own business”. Those two phrases have a pretty strong impact on the more authoritarian, hierarchical, society represented by the ship’s crew.

There are lots of reasons why that social and economic system might not work at scale, and the story uses a bit of hyperbole to make its point, but who knows. It could work at small scale, like within a small community. It’s a pretty short story and it’s interesting to think about how such a society might actually work in the larger context.

You might be wondering how this relates to software development. After all, you can’t just tell your boss to “myob” or just say “I won’t” when you get a request. Or at least you can’t do that and expect there to not be any consequences.

It does, however, point out the power of politely saying “No” or “I’m responsible for that, and I think …”, and that’s where the connection is.

Just because someone asks you to do something, it doesn’t mean you should just drop everything and do it. Or try to do it in addition to whatever else you’re trying to do. That doesn’t work. It’s not scalable, it hurts quality. It slows things down and it keeps things from getting done. That doesn’t mean you should just say no and ignore the request. It just means you need to think about it and make sure you understand the impact of fulfilling the request.

Similarly, just saying “myob” when someone makes a request or suggestion isn’t the way to work with others. But you also shouldn’t just blindly take someone’s advice or let them set priorities or importance. You need to understand the “why” behind it. Once you understand the why, you can evaluate the statement and decide if it makes sense, would make things worse, or if there’s a better way to get the desired result. And again, you can’t do everything, so you should have the discussion if it’s the right thing to do.

Really, what they both come down to is realizing you’re responsible for what you do (and don’t do) and that you need to think about the value you’re providing. Not just immediately, but medium and long term as well. Saying “I Won’t”, when it’s appropriate, and with the proper understanding and justification, can give you the freedom to do the things that need to be done.

Saying “myob” or something like “Thanks for the advice. As the person/team responsible for doing X, here are a few things you may not have thought of.” is also very powerful. It can start the conversation around what’s important, what’s not, and why. When everyone understands that, the group makes better decisions together.

I’m not suggesting you respond to every request with “F:IW” or “myob”, but make sure you think about what the end result of giving a respectful version of those responses would be.

by Leon Rosenshein

What Are You Waiting For?

I’ve talked about the Eisenhower matrix before. Two orthogonal axes, importance, and urgency. It’s a good way to prioritize. The higher and further to the right, the higher the priority.

Another way to look at things is task priority: Leverage, Neutral, or Overhead. They all need to be done, but things with leverage are higher priority.

Or maybe you prefer the MoSCoW method. Must, Should, Could, and just as important, Won’t. That’s the priority order.

Three ways to prioritize. All you need to do is pick one, implement it, then reap the rewards, right? Well, that’s mostly right.

The thing that none of those schemes mention is that you have to not just do the work, you have to deliver the work. If you don’t deliver, then you haven’t added any value and you haven’t actually done the thing that you decided is most important.

Now here’s where it gets interesting. Delivering is work. So, it’s got a priority. Depending on your exact situation, that might be a lot of work, or it might be just a small amount of work. You can’t forget about it though.

Which leads to some interesting conflicts. You’ve done the work for the top 3 priorities on your list. They got done at about the same time. You could release (deliver) an update, but you could also wait for the next thing to be ready. There would be more value in your delivery, but it’s going to be later. Which do you choose?

It depends. If you decide to wait, some say you’re letting a lower priority (lower value) thing hold your high value thing hostage. Others say that you’re delivering more value, so the delay is worth it. Either might be true.

What lets you determine if you’re being held hostage or delivering more value over time is an understanding of the cost of delivery. The more delivery costs, the longer it takes, the more you end up packing things into each delivery. Instead of paying the cost of delivery for each item, you amortize it over multiple items. So, the delivery cost per item is lower. Standard unit economics, right?

Maybe, but maybe not. It depends on your perspective. Sure, the cost to you, as the producer is lower, but that doesn’t mean the value to the user is higher. It completely ignores the difference in value of having something right now compared to having something else later. The longer you wait to deliver something that could add value, the harder it is to make up the user’s lost value from not having it in hand now.

To make the calculation even worse, since you’ve done a good job figuring out the priority and value of the things to work on, by definition, the thing you’re waiting for has less value than what you’ve already done. If the difference in value is large enough, your user will never make up the difference.

So rather than cram more and more things into a delivery, which ends up delaying user value, and can make delivery take even longer, make delivery cheaper and faster. So you can put fewer things in each delivery. And deliver sooner.

And remember, this applies at all levels of development. Add the highest value thing now. Deliver it. Add the next highest value thing. Deliver that. Repeat. Or to put it in GeePaw Hill’s terms, take many more much smaller steps.

by Leon Rosenshein

You ARE Allowed To Think Before You Type

I’m a big proponent of agile (lower case a) development practices and Test Driven Development (TDD) in particular. In my experience it conforms to the reality of changing understanding and moving requirements much better than Big Design Up Front (BDUF). Like many things, there’s a spectrum ranging from BDUF to “Start typing and see what you come up with”. And in some specific scenarios, either one of those might be the right choice. But for the vast majority of us, in the vast majority of cases, the best choice is somewhere in the middle.

Unfortunately, the best sound bites are at the extremes. Things are so clear there. There’s no nuance or reality to get in your way. You end up with everything from “The product and architecture teams have specified every function/message/API call, so just go implement this skeleton.” to “Build me a website that tracks all of my activity. I’ll be back on Friday for a demo.” It’s easy to look at either of those and say that there’s no way that they’ll work. And they don’t.

So here’s another sound bite to get you thinking.

Continuous design is an interesting thing; it can be continuously coming up with a design (with some reversals) or it could be continuously testing pre-thought design ideas by implementing them and assessing.

Both are valid. You are allowed to think before typing.

     Tim Ottinger

I say nay, not just allowed. I would say you are required to think before you type.

It’s the messy middle, the land of “It Depends” where software engineering lives. The place where we need to understand not just what we’re doing, but why. To choose what to do for a valid reason, and just as importantly, choose what not to do. The way to do that is to think. Think about what your goals are. Think about what your constraints are. Think about the path, not just from 0 to 1, but from 1 to 10 as well.

I’ve talked about only designing until you start speculating. But it goes even deeper than that. Until you’re done, you’re speculating. Even then, you might be wrong. You can (and should) design for the known knowns. You should have an approach for known unknowns. But by definition, you don’t know the unknown unknowns, so you can’t design for them.

You might not be able to design for them, but you can keep from painting yourself into a corner. Especially if you’re using TDD to guide you, you have to think first. You can’t write the initial list of failing tests, let alone the tests themselves, unless you have some idea of how things are going to work. How the events and data, the information is going to flow. How it’s going to be transformed.
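One way to capture that initial list of failing tests, before any implementation exists, is a suite of skipped placeholders. Every name here is hypothetical; the point is that writing the list forces you to think through behaviors, including the unhappy paths, first:

```python
import unittest

class EventPipelineTests(unittest.TestCase):
    """An initial TDD test list for a hypothetical event pipeline,
    sketched before any implementation exists. Each test names a
    behavior we've thought through; the skips disappear as code appears."""

    @unittest.skip("not implemented yet")
    def test_event_is_transformed_before_storage(self):
        pass

    @unittest.skip("not implemented yet")
    def test_malformed_event_is_rejected_not_dropped_silently(self):
        pass

    @unittest.skip("not implemented yet")
    def test_duplicate_events_are_idempotent(self):
        pass
```

Running this suite reports three skips and no failures, which is itself useful: the list is visible, versioned, and nags you until each placeholder becomes a real test.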

To break things down into domains, you have to have some understanding of the situation. And not just the happiest path. You have to think about the edges. The failure cases. The places and ways things can go wrong. It’s only after you’ve thought about those things that you can create your domains and use tests to guide your development.

And all of that is speculation. You don’t know until you’ve tried.

So think. Then type. Then think some more. Then learn. Then design. And know that even though those sound like separate things, they’re all happening somewhat concurrently. And that’s OK.

by Leon Rosenshein

10 Commandments of Code Review

Programming is a social activity. We work with other people, not in a vacuum. We share code with other people. We share ownership of code with other people. We share responsibility for code with other people. The sharing goes the other way as well. Others share their use cases, their requirements, and their experiences. One of the places where all of that comes together is in a code review.

In some cases, such as pair or mob programming, code review is a synchronous, ongoing activity, but that’s not what I’m going to talk about now. For most of us, code review is an asynchronous event. Someone changes some code, throws it over a wall for review, then waits, more or less patiently, for a response. Eventually some sort of agreement is reached, and the code gets merged into the codebase. I have a lot of issues with the whole “throw it over the wall” approach, which is another thing to talk about at a later time. Meanwhile, here are 10 important things to keep in mind when writing or reviewing a code review.

Mel Brooks as Moses holding the 10 commandments.
  1. Thou shalt treat a code review request as an important task. Of course, you should give a thorough review when you give one, but there’s more to considering it important than that. It means declining quickly if you know you’re not going to be able to do the review. It means not putting it off for days. Different teams have different benchmarks, but 1 working day is a good rule of thumb.

  2. Thou shalt create code reviews knowing that others will read them. As I said earlier, always review your code review before sharing it. Make sure it has what it needs for the reviewer. The right files. The right explanation of why the change is being made. A description of the change that someone who didn’t make the change can understand. A list of tests to validate the change.

  3. Thou shalt not take comments personally. When reading comments on your review, remember, the comments are about the code, not you the author. Don’t get angry and ignore the comments. Take them seriously. Be willing to accept feedback and be willing to push back if you have well thought out reasons. Nor shalt thou make comments personal. Comment on the code, not the author.

  4. Thou shalt keep your review to a single topic. A code review should be about a single logical change. If you find you need to keep using *and* in your description, consider splitting the code review into multiple reviews.

  5. Thou shalt consider all feedback. Not just consider and ignore, but consider and respond. The response might be changing the code, it might be an explanation of why you’re not changing the code, or it might be agreement and a promise of a follow-on change.

  6. Thou shalt say why. Whether it’s the code review description, a code review comment, or a reply to a comment, always explain yourself. It can be as simple as “I agree” if there’s no disagreement, but while “No” is not acceptable, “No, because …” can be.

  7. Thou shalt not be absolutist. Writer or reviewer, be willing to compromise. It’s good to have a strong opinion. It’s good to explain yourself. Like everything else in development, the right solution for a specific review depends on the context. Be willing to listen to the other people and work together to come up with a solution you all can move forward with. Again, remember that it’s OK to say there is more work to come, either right away or later when you know more about the situation.

  8. Thou shalt not argue with the style guide. One thing I like about Go is that most of the style guide is built into the language and there aren’t any options. The writer and the reviewer might not agree with it, but it is what it is, and we all go with it. For other languages you might have more choices, but the review is not the place to argue them. If you need to argue about the style guide, do it outside the review.

  9. Thou shalt remember review feedback. The first time you receive some feedback it’s a learning experience. The second time you get the same feedback it’s helping you build a habit. If you get the same feedback a third or fourth time, stop for a minute and think about why. Understand where the feedback is coming from and incorporate the learning into your future reviews. If you’re not learning from your reviews, you’re being disrespectful to your reviewers.

  10. Thou shalt treat your review partners as you want to be treated. Just like that other golden rule, whether you’re the writer or reviewer, think about how your partner will feel. Think about how you would feel if you were on the other side of the equation. Then act the way you would want yourself to act.

Code reviews, regardless of which side you’re on, are there to help produce the most value you can. But they can be so much more. They can be teaching and learning tools. They can be vehicles for sharing. They can be a gift that you give to yourself and that teams give to themselves.

If you let them. So take advantage of your opportunities when you can.

by Leon Rosenshein

Optionality

There are lots of reasons to have high internal software quality (ISQ). Minimizing WTFs/min is only one of them. One of the biggest reasons is not what ISQ does for you today, but what it will do for you tomorrow. The optionality it gives you for the future.

So what is optionality, and why should we care?

optionality (countable and uncountable, plural optionalities)

  1. (finance, business) The value of additional optional investment opportunities available only after having made an initial investment.
        The short-term payoff for this is modest, but the optionality value is enormous.
  2. Quality or state in which choice or discretion is allowed.
        Some offices do not follow the corporate procedure, due to a culture of optionality.

We care about optionality because software is all about change. In this case we care about both senses of the word. Sure, a particular release is a snapshot of a moment in time, and unchanging, but even in the days of shrink-wrapped software that you got off a shelf, there were often updates and patches, to say nothing of next year’s version. Now, with Software as a Service, web execution, and downloadable content, it’s not unusual to have multiple releases on any given day. So software changes, and optionality is important.

Some of you might be saying “But I write embedded code for an unconnected device. My code can’t be updated. I don’t care about change.” It’s true that once released that code can’t change. But right up until release the requirements can change, and they often do. So even in that case, writing software is about change.

Even more fundamentally, every character you add, remove, or change (there’s that word again) is a change to the software. The progression of a product, from idea, to design, to code, to release, is about change. Again, development, of any kind, including software, is about change.

Given that you know you’ll need to make changes, even if you can’t know what some of those changes are, giving yourself the optionality to go in whatever direction you need to go in is important. And that’s the biggest benefit of ISQ. Knowing that you can tackle, with confidence, the unknown unknowns that you know are coming.

ISQ is things like low coupling. That’s what lets you adjust the internals of one part of your code without having to worry about how it will impact other parts. As long as you don’t break your interface contract, you can do what you need to safely. Your unit tests, which are a part of high ISQ, let you know that you’re maintaining your contract.

ISQ is things like readability. I’ve talked about Coding for the Maintainer many times. Doing that is part of having high ISQ. If it’s easy to tell what’s happening, why choices were made, and more importantly, why other choices weren’t made, you have more options available to you. You can change your mind with more understanding, which leads to better decisions.

ISQ is things like designing for test and using well written tests. If you have designed your components to be tested, you have lower coupling (see above) and you can test things without worrying about the testability of a dependency. It’s making sure your tests are tied to the outcomes of the things being tested, not the process of the thing being tested. It’s using fakes, mocks, stubs, and the real thing for dependencies at the right time1. If you do those things, you can make changes with confidence.

There are lots of other things that go into ISQ, but just doing those things isn’t the goal. And just having higher ISQ isn’t the goal. The goal is to give yourself more optionality. Higher ISQ is just one way to get that optionality. To have confidence that you’ve changed what you wanted to change, and just as importantly, that you haven’t changed anything you didn’t want to change.


  1. When to use fakes, mocks, stubs, and the real thing is a whole different topic for another time. ↩︎