Recent Posts (page 1 / 68)

by Leon Rosenshein

Slow is Smooth, Smooth is Fast

Move fast and break things. That’s the tech mantra, right? Do something. Might be right, might be wrong. Just do something and see what happens. Things will break. That’s OK. Just fix it later. As the Dothraki say, It is known.

There’s another saying. Slow is Smooth, Smooth is Fast. This one is courtesy of the Navy SEALs. It says the opposite. Slow down. Think about what you’re doing. Make deliberate choices. Every step will be a little slower, but overall things will get done faster. Again, it is known.

And just as with the Dothraki, just because it is known, it’s not necessarily true. Maybe they’re both true. It’s your classic dialectic thinking. It Depends on the context.

Or maybe, thinking about it with the dialectic lens, they’re really saying the same thing, but from different perspectives, so of course they’re both true. We just need to think about them the right way. A way that honors both sayings and leads us to the deeper truth.

From an outside-in perspective, move fast and break things is saying that you should perturb the system and see how it responds. Then, with that new knowledge, you make another change. Do that fast enough and often enough and you end up changing the entire paradigm. You will have broken the old system and replaced it with a new one. Quickly.

From an inside-out perspective, you want to be deliberate. You want to slow down just a bit and consider what you’re about to do. Then do something deliberately. Which leaves you well positioned to make the next deliberate step towards your goal. Do that deliberately enough and it looks like you’re moving smoothly. If you keep doing that, you’ll find that you’ve actually moved faster than if you had rushed each step, but spent more time between steps.

Bringing this back to software development, here’s something to keep in mind as you do your work. Neither of those say you should take shortcuts or write bad code. When you move fast and break things, the thing that you’re breaking isn’t your code. You’re changing your code, but you don’t break it. You break the outside paradigm.

When you’re moving slowly and smoothly, you are always being careful to not break your code. You keep things smooth so you can keep taking the next step. You don’t need to take time to throw out your code and start again because it can change with you. You don’t need to take an extended period of time to figure out why your code has collapsed under its own weight. You use your understanding of the system to keep it the best simple system for now.

In both cases you might need to back-track a bit occasionally because you’ve chosen to move and break some paradigm, which has taught you that something you’ve done needs to change. That’s expected and it’s fine. Since you’ve done things deliberately, maintaining your optionality, it’s easy to smoothly make that change and move forward.

Which brings us right back to the dialectic. Move fast and break things. Slow is Smooth, Smooth is Fast. Statements that sound like they contradict each other. But are both true. By moving slowly and smoothly, you’re able to move fast and break the paradigm. There’s even a study showing this is true[1].


  1. Code Red: The Business Impact of Code Quality – A Quantitative Study of 39 Proprietary Production Codebases. Details are a story for another blog. ↩︎

by Leon Rosenshein

Government Digital Services

A long time ago, in a country far away, the government released guidelines. Nothing unusual about that. It happens all the time. Usually, when I hear about that, I think of things that are well known, well understood, generally accepted, and now written down in obtuse language with lots of buzzwords and details. Enough fluff to make it largely incomprehensible. You know, standard bureaucratic language.

When I think about the government that did this, I think of powdered wigs, stiff upper lips, and traditions that date back hundreds, if not thousands of years. Very much rooted in what worked before, with only a passing nod to the current.

Image of the UK House of Lords

And then there’s this. The opposite of stuffy, hidebound, traditional, bureaucratic guidelines. From Government Digital Services in the UK, the Government Design Principles. First published in 2012. Largely unchanged since then. Very forward looking at the time. And still forward looking.

Before I get too far into this, I do want to acknowledge that the design they’re talking about is software design, not interface design. There are some principles that touch on interface design, but it’s about software design and the software design process more than anything.

It might not be quite as pithy as the Agile Manifesto, but it’s close. Remarkably close for a government publication. If nothing else, look where it starts. With the user’s needs. It includes talking to users and recognizing that what they ask for isn’t always what they need. That’s a great place to start for design.

There were 10 points in the original version, and all of them still apply. From doing only what is needed to making things open and interoperable. Because context matters and we don’t know what we don’t know.

I believe all of these principles are good principles, and I would never use an appeal to authority, but it’s nice when others agree with you.

by Leon Rosenshein

Best Simple System For Now

When you’re writing code you have lots of choices. Even when working with 20-year-old legacy code, you have options. Not all of those options are equal though. Some are cheap and fast now, but may have a large cost later. Others are expensive and slow now, but might make things easier in the future. Your job as a software engineer is to choose the right one.

A system without feedback and a system with a feedback loop

Which one is right? You can probably guess what my answer is. It Depends. Of course it does. It always does. Without the context, there is no up-front answer. In fact, both are usually wrong. You don’t want to choose the cheapest/fastest option, and you don’t want to choose the one that gives you the most options in the future.

Instead, you want to choose the one that gives you a good balance of things. You want what Dan North calls the best simple system for now. It’s a very deliberate phrase. There’s a lot to think about in there.

For Now

One of the most important parts of the phrase is at the end. For Now. Given what you know at the current moment, about where you are, what the immediate goal is, what stands between you and that goal, and what you think the long-term goals are, what can you do right now? It’s going to change. You know that. You just don’t know how it’s going to change. So you want to maintain your options, not make more decisions than you need to.

Simple

One of the best ways to maintain that optionality is to keep things simple. Simple is easy to understand. It’s easy to reason about. And most importantly, it’s easy to change. But remember, simple doesn’t mean you get to ignore things. It still needs to work. It still needs to work at the scale you’re operating at. It still needs to work when the inputs change. Or at least it needs to work well enough to tell you that it can’t work in the new situation. Remember KISS: the simpler it is, the easier it is to get right and the harder it is to get wrong.

System

Another thing to keep in mind is that it’s a system. Even the simplest program is a system. And the important thing about systems is that the parts of a system interact with each other. Often in strange and unexpected ways. You need to remember, and minimize, emergent behavior. By keeping things simple. By remembering that you’re building a system for now.

You need to remember that systems have feedback loops. So you need to identify and understand those loops. So you can work with those loops, instead of against them. When you work against the feedback loops in a system you’re working against the entire system. If you keep trying to do that, you either change the entire system or you end up not changing anything. As John Gall said:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.

Best

Finally, we get to best. How can you make something the best? By ensuring that what you’re building is for now. By keeping it simple. And by working with the system. If you do all of those things, you’ve got a very good chance of ending up with the best simple system for now.

by Leon Rosenshein

Respect The Problem

The other day I ran across a really interesting quote.

The bottom line here is that you have to respect the problem. … There’s no silver bullet solution to just linearise them and wish them away.

Dan Davies

Now, that was in the context of government regulation and the environment, but the quote can be applied just as well to many different environments.

Such as software development. When you’re trying to solve a user’s problem, you have to understand their problem. Not just the surface level request, but the real problem behind it. If you’ve ever looked at what your doctor and nurse do when you have an appointment, you can see that many years ago someone said “We need to turn this pile of paper records into an electronic record”. So they took the exact forms that were in use and implemented them on the computer as a scrolling screen with lots of tiny checkboxes. It did solve the problem of moving to electronic records, but it didn’t really solve the problem of “How do we efficiently track patients and their conditions, knowing that we have a new input modality?”

That’s not Domain Driven Design (DDD). That’s not respecting the problem. That’s a feature factory: blindly implementing the request without thinking about the why.

Respecting the problem means understanding not just the surface “what” of the ask, but the “why” behind it. I’m not going to pretend to understand the medical data entry domain, nor do I know the details of the regulations behind them. But I do know that if you really look at the domain and think about the goals, you’d come up with a different solution than taking the old pen and paper forms and replicating them on the computer.

The constraints are different, so the solution should be different. If nothing else, most laptops are built in landscape mode, and the paper forms were designed in portrait mode. The form factor is different, so the solution should be different. Today’s tablet displays might have sufficient resolution to use the same size and layout as the paper forms, but they sure didn’t when the online systems were developed. Yet that’s the way they were built.

Here’s another constraint. There’s no natural physical way to bookmark 3 different pages. Back when there were paper forms, it was easy and common to have fingers on the different pages you needed to flip between. There’s no physical way to do that with a laptop or tablet, so that capability just went away.

On the flip side, there are new capabilities that come with the online form. You can make a decision tree and have the form follow it. You can hide things that don’t matter. You can group things based on what you’ve already done. A simple example would be a field where you can select one or more things, and there’s an “other” option. Instead of always having a text box there taking up space, you could show it only when the user picks “other”. Another thing you could do would be to allow things to get bigger when you need to interact with them. You can imagine lots of others.
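The “other” example is easy to sketch in code. This is purely illustrative; the field names are hypothetical and not from any real medical system.

```python
# Hypothetical sketch: only render the free-text box for a multi-select
# question when the user has actually picked "other".

def visible_fields(selected_options):
    """Return the list of fields to render for a symptom question."""
    fields = ["symptom_choices"]
    if "other" in selected_options:
        # The text box appears (and takes up space) only when it's needed.
        fields.append("symptom_other_text")
    return fields

print(visible_fields(["cough"]))           # ['symptom_choices']
print(visible_fields(["cough", "other"]))  # ['symptom_choices', 'symptom_other_text']
```

The same shape generalizes to the decision-tree idea: the set of visible fields is a function of what’s already been entered, something paper forms simply can’t do.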

But those things didn’t happen either. They just replicated the old paper system. And called it done.

That’s not DDD. That’s not good software engineering. That’s not respecting the problem. And it’s not solving the user’s problem.

by Leon Rosenshein

Dijkstra On Bugs

Unsurprisingly, there are hundreds of quotes about computers and programming by Edsger Dijkstra, and almost all of them are worthy of a post (or two). His work is foundational to much of what we do as software engineers. He was also a prolific, excellent, and memorable communicator. After all, he was the one who came up with Goto Considered Harmful and that one is certainly well known, almost dogma.

Image of Edsger Wybe Dijkstra

Edsger W. Dijkstra
Attr: Hamilton Richards

But today I’m going to talk about one of his lesser known statements. A statement about how we view program correctness and debugging.

Let me start with a well-established fact: by and large the programming community displays a very ambivalent attitude towards the problem of program correctness. A major part of the average programmer’s activity is devoted to debugging, and from this observation we may conclude that the correctness of his programs —or should we say: their patent incorrectness?— is for him a matter of considerable concern. I claim that a programmer has only done a decent job when his program is flawless and not when his program is functioning properly only most of the time. But I have had plenty of opportunity to observe that this suggestion is repulsive to many professional programmers: they object to it violently! Apparently, many programmers derive the major part of their intellectual satisfaction and professional excitement from not quite understanding what they are doing. In this streamlined age, one of our most under-nourished psychological needs is the craving for Black Magic, and apparently the automatic computer can satisfy this need for the professional software engineers, who are secretly enthralled by the gigantic risks they take in their daring irresponsibility. They revel in the puzzles posed by the task of debugging. They defend —by appealing to all sorts of supposed Laws of Nature— the right of existence of their program bugs, because they are so attached to them: without the bugs, they feel, programming would no longer be what it used to be! (In the latter feeling I think —if I may say so— that they are quite correct.)

July 1970
prof.dr.Edsger W.Dijkstra
Department of Mathematics
Technological University
EINDHOVEN, the Netherlands
EWD288

That’s not quite as pithy as Simplicity is prerequisite for reliability, and there’s a lot to unpack there. Go read it again.

To me, the first and most important thing he’s saying is that, as a profession, we don’t just accept the existence of bugs, we defend it. That’s a pretty damning accusation: that the profession of software engineering feels that all programs should have bugs.

Second is that debugging is the fun part. That we need the opportunity to debug. That without that part it’s boring.

Third, that we somehow need the Black Magic of the computer to fill some psychological need.

That’s not how I see it, but it does give you something to think about. Take the first part. That we defend the existence of bugs. There’s some truth to that. For all but the most trivial of programs running in a constrained domain, I would assert that it’s impossible to ensure that future changes do not cause improper operation. Or at least impossible in practice. But that doesn’t mean we should ignore the possibility of bugs, or that we shouldn’t be as defensive as we can be. And we should maintain Zero Bugs. Prevent what you can, then fix what is exposed as fast as possible.

Personally, I don’t find debugging fun. I think that conflates the feeling of accomplishment we get from finding and fixing an issue with enjoyment. There have been many occasions where I’ve been proud of myself for doing the work, and I’ve definitely felt the ease and fulfillment of getting into a flow state while tracking down an issue, but I wouldn’t call it fun. And I don’t know many people who would.

As to needing the Black Magic of computers, that’s not something I experience, but it might be true for others. As a description of how people approach things, maybe? Regardless, I don’t think it’s a good reason to accept issues.

Having said that about the individual points, his meta-point that we don’t do enough to ensure that issues don’t end up in the hands of our users/customers, is valid. I think we can, should, and must, do better. In this age of fast and easy updates, I think we, as a profession, have somewhat forgotten the value of shipping good software in favor of shipping flashy software. And that reflects badly on us.

As software engineers, our goal should be to solve our users’ problems by balancing their needs and the system’s capabilities. Most of the time that’s by using more software. But sometimes it’s by using less software. And in both cases, it’s by delivering software that does the right thing. All of the time, not just most of the time.

That’s how we can honor our responsibilities as software engineers and respond to Dijkstra’s message.

by Leon Rosenshein

Zero Bugs

Back when I worked on boxed products at Microsoft, we had 2-year release cycles. And towards the end of each one was a milestone called Feature Complete. That was the point in the project where all features we expected when we did planning 18 months earlier were done. Or at least the ones that we hadn’t decided to cut because we ran out of time. You would think that after feature complete, we’d be ready to ship. But that wasn’t the case.

Instead, the next big milestone was Zero Bug Bounce (ZBB). That was the second time in the history of the project that there were zero active bugs in our tracking system. The first was before we wrote any code. After that, the number of bugs climbed until shortly after Feature Complete. For ¾ of the project or more, the incoming bug rate was higher than the fix rate.

That wasn’t just our project. That was the way most software was written. You built it, then you tried to test quality in. It worked, after a fashion, but let’s not fool ourselves. It wasn’t very efficient, and it wasn’t a lot of fun. From the beginning of the project until some time after feature complete the backlog of work kept getting bigger.

At the same time, in the early 2000s, extreme programming and the agile movement were getting started, borrowing concepts from lean manufacturing, including the idea of building quality in instead of testing it in.

One of the ways that expressed itself was the idea of a Zero Bug Policy (ZBP). The idea that your software should have 0 bugs. At the time, most folks looked at that and said it was impossible. Of course, there were already examples of bug free software, but people still thought it was impossible to write bug free code.

And those folks are right. Even with Test Driven Development (TDD), and a full suite of unit, integration, and system tests, you can’t guarantee bug-free software. But that’s not what a ZBP is about. It’s not that you never make a mistake, or a bug never gets shipped to a customer. Instead, a ZBP is really about not having a bug tracking system.

While ZBB and ZBP have a Levenshtein distance of only 4[1], they’re completely different things. A ZBP means that instead of keeping track of your bugs and fixing them later, when you’re not so busy adding more bugs, you fix them now, for some reasonable value of now. You don’t drop everything and fix it[2], but as soon as you finish what you’re working on, you fix the problem before you start something new. That means that every day is potentially a ZBB.
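The distance claim is easy to check with the classic dynamic-programming edit distance. A minimal sketch (not from the original post):

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("Zero Bug Bounce", "Zero Bug Policy"))  # 4
print(levenshtein("ZBB", "ZBP"))                          # 1
```

Four substitutions turn “Bounce” into “Policy” (the shared “o” and “c” stay put), and the acronyms really are only one edit apart.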

That’s a very different way to build software. It’s hard to do. You need to build the muscles for TDD and unit testing. You need to build the muscle to say “No” when schedule pressure pushes you to move on to the next feature even though there are still issues with the current task. You need to build the deployment muscle so it’s easy to ship the fix. All of these things and more are hard to do, and they don’t show any immediate benefit[3]. It takes discipline and commitment.

Another benefit of ZBP is that you’re always ready to ship. You might not have the feature set you originally planned, and it might not be as pretty as you might have made things, but if you need to do a demo, you can demo everything you’ve done. If something happens and the release date moves forward, you have something to release. You can sleep at night and not have to worry about having the rug pulled out from under you.

Remember, even if you’re living in a ZBB world, you don’t have to stay there. You can bias your choice of work slightly so that your rate of finding issues is lower than the rate at which you fix them. Even if this doesn’t get you to ZBB before feature complete, the wall you hit at feature complete will be shorter.

And finally, you need to differentiate between planned features, feature requests, learning more about the domain you’re operating in, and software bugs. The first two have nothing to do with a ZBP. You can have as many of them as you see fit, and you can track them however you want. The key is that they are NOT bugs. That’s just future work you need to do.

New learning about the domain might or might not be a bug. Learning there’s a better way to do something, or an abstraction you should be using is not a bug. Finding your domain model doesn’t match the system you’re trying to model IS a bug and needs to be fixed ASAP.

Simple coding errors are also bugs. First, write a test that fails because of the bug. Then fix the code so that test, and all other existing tests, pass. Again, don’t add those issues to a long-term tracker and wait to fix those issues. Just fix them now.
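Here’s what that test-first fix can look like in miniature. The bug and the function are hypothetical, but the rhythm is the point: the failing test comes first, then the fix, and the old tests keep passing.

```python
# Hypothetical bug report: pages(0) returned 1 instead of 0.

def pages(item_count, per_page=10):
    """Number of pages needed to display item_count items."""
    if item_count == 0:
        return 0  # the fix: an empty list needs no pages at all
    # Ceiling division for everything else.
    return (item_count + per_page - 1) // per_page

# The test written first, which failed before the fix above existed:
assert pages(0) == 0
# The existing tests, which must still pass after the fix:
assert pages(1) == 1
assert pages(10) == 1
assert pages(11) == 2
```

The new assertion pins the bug down so it can never silently return, which is exactly why it goes in before the fix rather than into a tracker.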


  1. Only one for the acronyms, but that’s cheating ↩︎

  2. Sometimes you do need to drop everything and fix the problem. Or at least part of the team does. If something changes and your production system goes down, you mitigate it immediately. Similarly, if the bug found is blocking a large portion of the dev team, you might choose to fix it immediately. In most cases however, you can work the fix in as the next thing. ↩︎

  3. In the long run, putting more effort into how you write code will pay you back, but you can always rent time by taking on technical debt. You just have to pay it back later. ↩︎

by Leon Rosenshein

Emergency Procedures

The other day I ran into a quote on the internet about the problem with emergency procedures. I generally agree with it. The quote went like this:

If you wouldn’t use your emergency process to deliver normal changes because it’s too risky, why the hell would you use it in an emergency?

But, as always, It Depends. It’s about the risk/reward ratio. You want the ratio to be low. If the system is down, the risk of breaking the system is low. If it’s down, you can’t crash it.

In general, you want one, and only one, deployment process. You want it to be easy, automated, idempotent, and recoverable. You want it to be well exercised, well documented, well tested, and fail safe. And in almost all cases, you should use it. All of the checks, validations, and audit trails are there for a reason (see Chesterton’s Fence).

The main goal of any deployment process is to make sure the user experience is not degraded. Or at least only temporarily degraded by a very small, broadly agreed upon amount. That means making sure that nothing happens by mistake and without an appropriate amount of validation. There can be tests (unit, integration, or system) that need to pass. There can be configuration validators. There can be business, legal, and communications sign-off. All in service of making sure no-one has a bad interaction. Actually deploying the new thing is often just a tiny part of the actual process. There’s a high risk of something going wrong, so you need to be careful to keep the overall risk/reward ratio down.

In an emergency situation though, the constraints are different. If the system is down, you can’t crash it. You can’t reduce the throughput rate. You can’t make the experience worse for users[1]. The risk of making things worse is low, so the risk/reward ratio is biased lower.

In fact, many things you normally do to make sure you don’t have an outage are unneeded. You don’t need to keep in-flight operations going (because there are no in-flight operations). Instead, you can skip the step of your process that drains running instances. You don’t need to do a phased update to maintain your throughput. When nothing is happening, getting anything running is a step forward. Because nothing is running, you don’t need to do a phased roll-out to check for performance deltas or emergent behavior or edge cases. After all, things can’t get much worse. Those are just a few of the things you don’t have to worry about when you’re trying to mitigate an outage.
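One way to express that as one shared process rather than two: the outage path is the normal path minus the steps that exist only to protect live traffic. This is a sketch with made-up step names, not a real pipeline.

```python
# Illustrative only: a single deploy pipeline where an outage skips the
# traffic-protection steps. All step names here are hypothetical.

def deploy_plan(version, outage=False):
    """Return the ordered list of deployment steps for this situation."""
    steps = []
    if not outage:
        steps.append("drain")   # no in-flight work to drain during an outage
        steps.append("canary")  # nothing to compare against when nothing runs
    steps.append("rollout")
    steps.append("verify")      # always confirm the system actually came back
    return steps

print(deploy_plan("1.2.3"))               # ['drain', 'canary', 'rollout', 'verify']
print(deploy_plan("1.2.3", outage=True))  # ['rollout', 'verify']
```

Keeping both paths in one definition means the emergency path is exercised and reviewed alongside the normal one, instead of being a dusty break-glass script.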

Magic wand behind glass labeled 'In case of emergency, break glass'

Or to put it more simply, outage recovery is a different operation than system upgrade. When dealing with an outage, the first step is to mitigate the problem. When doing an upgrade, the most important goal is to have customers/users only see the good changes. There should be no negative changes to any user. Many steps can (and should) be shared between the two processes. But the goals are different, so the process is going to be different.


  1. Ok, there are things you can do to make it worse. Like losing data. Or exposing personal data. But generally speaking, if your system is down, you can’t make the user experience worse. ↩︎

by Leon Rosenshein

Careers are Non-Linear

Hiring has been on my mind lately. I’ve been looking for an entry level developer. Someone just starting out in their career. I’ve described the arc of my career before. In fact, I came up with what I think is a pretty novel (and useful) way to describe the arc of a career. It’s also good for helping you visualize where you are at any given point compared to your company’s (or more specifically, your manager’s) expectations. Anything that helps you do that gap analysis with your manager and guide the discussion on how you’re going to close those gaps is good for your career.

One important thing to keep in mind, though, is that while we think of careers as always being “up and to the right”[1], that’s not really the case. Especially as it’s perceived by the person living it. In fact, careers, as experienced by the person having the career, are very non-linear. The slope changes. It can even be negative. Particularly in one aspect or another. Even when the overall arc of the career trends towards more scope of influence.

Every career change, whether it’s role on a team, changing teams, promotion to a new level, or changing companies, changes your context. Everything you learned about where you were is still true, in that context. And much of it is still true in your new context. But not all of it.

That’s why your career is non-linear. What you’ve done got you to where you are. It was the right thing at the right time, in the right place. And while I don’t believe that the Peter Principle is generally true, when you start that new role, you definitely know less about it than you did about the role you just left. It’s not that you get promoted until you can’t do the job. You get promoted until you can’t learn and grow enough to do the next job. And you didn’t get the role change you just made because you can’t do the job. You got it because they think you can do the job.

Think about it. If you really were ready for that next role (and people knew, and it was available), you would have gotten it. Since you didn’t, you’ve gone from being at the top of your old role, one of the best around, to being OK at your new role. Not bad, but nowhere near ready for promotion. So compared to other folks in the new role, you’re closer to the bottom than you are to the top. And that can feel like a move backward[2].

All of that is when you’re staying in the same basic job/role. Moving from IC to Manager has all of those issues, and a whole set of its own. The same applies when transitioning between Product/Project/Program Management and Development, or really any other “discipline” (using the term very loosely here).

The important thing to remember is that all of these steps are advances in your career. Even if they don’t feel like it to you at the time. Even (especially?) if they feel like more work. When you’re challenged and succeed, you grow.

As has been said, “If you rest, you rust”.


  1. Progress being up and to the right is a metaphor. Knowing how we use metaphors, where they come from, and how they can subtly influence things even when you’re not trying, is an important topic, but for another day. Meanwhile, consider Metaphors We Live By and Darmok as tokens of the importance of metaphor in our lives ↩︎

  2. Back in the day at Microsoft, the Principal band was levels 65-67. A three level span doesn’t seem like that big a span, but in fact it was huge. L64, senior engineer, was considered a terminal level. If you reached level 64 there was no longer any expectation that you would get promoted or eventually be asked to leave. L59-L63 was considered up or out, the only difference was the time. Moving from L64 to L65 (Senior to Principal) was a big deal. It was an inflection point in your career. From L65 on, even if you had no direct reports, you were expected to show results through others. You still had to do your work, but the big expectation was around how you impacted others. That’s fine and makes sense. The problem was that back then everyone in the Principal band was compared to everyone else in the same band. And newly promoted folks at L65 were being compared to folks at L67 who were being considered for promotion. L68 was Vice President. So the first review cycle as a new L65 you were suddenly compared to someone about to be one of the Vice Presidents. Unsurprisingly, L65s didn’t come out well in that comparison. It certainly felt like a step backwards. Talk about imposter syndrome. ↩︎

by Leon Rosenshein

People Over Process

As seen on the internet

People over process.

Why?

Because systems can’t fix problems with people, but people can fix problems with systems.

People Over Process is from the Agile Manifesto. There’s a lot to unpack there. It starts by acknowledging that software development is a socio-technical endeavor. There are people (that’s the socio part). But there are also tools and rules and processes, which makes it technical.

First, and foremost, it’s over, not or. It’s not a Boolean choice. You get to have some of each. If you choose to only focus on the people, making them safe and happy, you can’t organize. You can’t even self-organize. Because without some norms, some process, you can’t communicate. And if you can’t communicate, you can’t coordinate. Not because no one cares, and not because no one wants to listen, but because anarchy is the opposite of coordination. Even the most libertarian knows that there needs to be some structure. Or you end up with the tragedy of the commons.

And if you choose process only, the first time something happens that your process doesn’t cover then you get stuck. Unless/until you can come up with a new process. Which takes a while, because there’s a well-developed process for changing the process. You did remember to add that to your set of processes, right?

So don’t ever let yourself be tricked into turning an analog choice into a Boolean one. Trust me, it won’t end well.

Second, it’s not quite accurate. Systems, particularly feedback systems, can fix, or at least minimize, problems with people. Processes, just like those warning signs on ladders, are there for a reason. They’re there because at some point in the past not just one person, but enough people did things in a way they thought was right, but was actually dangerous, and got hurt or killed. Processes are institutional scar tissue. Something bad happened, and the process is there to make sure it never happens again. The process is there for a reason, and that reason is so that the system can heal from a person’s mistake.

The trick is to have the right balance between the two. The agile manifesto says people over process, so at least 51%/49%, and less than 100%/0%, but that’s a pretty big range. Where you land in that range depends on lots of things. The context. The people. The familiarity of the people with the context. And some trial and error, because you’re unlikely to get the balance right the first time.

And there you have it. People over process. Because people can fix issues in your system. And your system needs to have processes to protect it from the people.

by Leon Rosenshein

The Power Of Examples

I’ve subscribed to Kent Beck’s Tidy First substack, and there’s lots of useful info there. He just posted a piece on Why TDD doesn’t Lead to Dumb Code. As usual, it’s a really good entry.

But what really stood out to me in that post was not what he was saying, but how he was saying it. In particular, his use of an example. Beck is trying to answer why TDD doesn’t lead to overly specific code. The task at hand is to use TDD to write a function called factorial. As a software developer, figuring out the factorial of a number is something I’m very familiar with. So the amount of cognitive overhead to understand the problem space was approximately zero. This left me all of my bandwidth to understand the message about TDD and generalization that he was really trying to get across.
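To make the example concrete, here’s roughly how that TDD progression plays out for factorial. This is my own sketch of the rhythm, not Beck’s code: each new test forces the implementation to generalize a little more.

```python
# A sketch of TDD-driven generalization for factorial.

def factorial(n):
    # After only the first test (factorial(0) == 1), a hard-coded
    # `return 1` would pass. The later tests force the general form.
    if n == 0:
        return 1
    return n * factorial(n - 1)

assert factorial(0) == 1  # first test: `return 1` would suffice
assert factorial(1) == 1  # still passable with the hard-coded version
assert factorial(3) == 6  # this one forces the real recursion
```

Because factorial is so familiar, none of your attention goes to the problem, and all of it goes to watching the tests pull the code from specific to general.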

That’s the beauty of good examples. They help the reader/listener understand the problem and the solution. And good examples don’t burden them with additional things they need to learn before they can get to the information you’re trying to impart.

One good way to do that is by knowing your audience and understanding their context. It’s great if you share the same context, but the key is speaking in their context. As the presenter of information, it’s on you to find the right example for the group you’re speaking to.

That’s why a lot of software stories use car analogies. Cars are ubiquitous. The classic agile incremental build image

works so well because you don’t need to think very hard to understand that, if you value easier transportation, you get nothing until the end in the top half, while there’s value added at each step of the bottom half. That’s a great example.

Besides tailoring your example to your audience, Hillel Wayne goes a step further and talks about the difference between instructive and persuasive examples. More importantly, he notes that while an example might be good at one or the other, you still need to use the right example depending on what you’re trying to do. A good instructive example is often not persuasive, and an example that’s very persuasive might not be good at teaching something. Like everything else about software, and engineering in general, It Depends.

All of this is just to say that good examples are hard to find. And they’re also very important. And worth the effort to find.

Because if you do, you’re much more likely to get your point across. Which is your goal in any communication. Hopefully my examples here have helped me do the same.