Recent Posts (page 9 / 65)

by Leon Rosenshein

Strange Loop

Strange Loop Conference logo with date and location

Strange Loop is, according to its own website, a multi-disciplinary conference that brings together the developers and thinkers building tomorrow’s technology in fields such as emerging languages, alternative databases, concurrency, distributed systems, security, and the web. It’s been on my list of things to do one of these years for a while. I’ve heard a rumor that next year is going to be the last one, so I guess I know what I’ll be doing in late September next year.

That said, this year’s conference is over, and according to the posts on Twitter it lived up to expectations yet again. The thing that has always interested me about Strange Loop is that it manages to be both practical and big picture. Like any conference there are plenty of companies that use it as a chance to market their products, or at least toot their own horns. And that’s fine. There’s value in advertising for companies, and there’s value for attendees and viewers in those talks.

If you look at the conference schedule there are plenty of talks that describe how a person or company solved a gnarly problem they faced. There are plenty of learnings in there, as long as you keep the context in mind. Talks about handling concurrency, distributed systems problems, observability, security, and testing. Things that I deal with every day at work and that are important to me. Learning from others’ mistakes and lessons, even if they’re in a very different field.

But it’s more than that. There are also talks like It Will Never Work In Theory. Talks about the difference between theory and practice and how to bridge the gap. Or a keynote on Expert Software Developers’ Approach to Error, looking into how bugs happen, how to fix them, which is critical, and, better still, how to prevent them from happening in the first place.

Then there’s what’s fascinating to me. How this stuff gets used, and how it ends up impacting people’s day to day lives. How it makes their lives easier, better, or just more interesting. How we use it to understand and experience the world around us and its history. How Live Music Is Evolving In A Post Pandemic World, and The Vera C. Rubin Observatory Legacy Survey of Space And Time. Sure, those will touch on computers and technology, but that’s not what they’re about.

Eventually the talks will be available online. There are a bunch I want to watch. And hopefully next year I can go in person, before it’s too late.

by Leon Rosenshein

Tensegrity

It’s been a while since I posted anything. Sometimes life intrudes on posting, but I’m back to posting again. Today’s question is, can you push something with a rope? Sure, you can do it if the rope is short enough that it effectively can’t bend, but what about a longer rope? You can’t. But you can make it look like you’re pushing with a rope. You do it with tensegrity. Tensegrity is not new. The first examples were shown in the 1920s, and Buckminster Fuller (of geodesic dome fame) coined the term in the early 1960s. It’s the idea that by properly pre-loading the elements of a structure, some in tension, some in compression, you can build a stable structure. From an architectural standpoint it lets you right-size different components based on the load each component will actually be carrying. It also lets you build apparently impossible figures where chains push things up.

Small LEGO figure using tensegrity that appears to show chains pushing up

That’s really cool, and I need to build one of them, but what in the wide, wide world of sports has that got to do with software?

Simply this. When building complex systems, you have multiple competing forces (requirements) that you need to handle. You can do it in multiple ways. The easiest (from a design standpoint) is to make sure that each individual component can carry the maximum load of the entire system. Now building each component that way might not be so easy, or might be so expensive as to be impractical.

That’s where tensegrity comes into play. The forces are in balance. Putting one piece into compression puts other pieces into tension. That pulls things back to equilibrium. It’s a feedback system. It’s systems thinking. This applies to software as much as it applies to architecture.

It’s not that you can keep everything the same in the face of increasing stress (load), but that you spread the load so that the system doesn’t fail catastrophically. The load on one system influences a different system, which in turn impacts the load on the first system. Consider a simple retry system. If something fails, instead of just retrying immediately, add an increasing backoff time. That reduces the load on the downstream system by spreading the retries out over time. Good for the downstream system, but it does increase the load on the sender, since now it needs to keep track of retries.
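Here’s a minimal sketch of that kind of increasing backoff in Go, as a short aside. The function names and delay numbers are invented for illustration; the point is the shape of the feedback, where each failure pushes the next attempt further out, trading a little extra bookkeeping on the sender for less pressure on the downstream system.

package main

import (
	"errors"
	"fmt"
	"time"
)

// callDownstream is a stand-in for whatever request the sender makes.
func callDownstream() error {
	return errors.New("downstream unavailable")
}

// retryWithBackoff spreads retries out over time. Each failure doubles the
// wait, so a struggling downstream system sees less load, while the sender
// takes on the extra work of tracking attempts and delays.
func retryWithBackoff(maxAttempts int) error {
	delay := 100 * time.Millisecond
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = callDownstream(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; waiting %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	if err := retryWithBackoff(3); err != nil {
		fmt.Println("giving up:", err)
	}
}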

It’s by no means a magic solution to all your problems. As you can see in the GIF above, as long as the system is in balanced tensegrity, it stands. Add too much load in the standard direction and something will break. Add too much load in an unexpected direction and the system can’t compensate. Instead, it fails in a new and exciting way.

Lego tensegrity model failing under a side load

All the parts are fine, but the system isn’t holding. Like all other solutions, the answer to the question of whether it’s the right solution for a given situation or not is “It Depends”. And like every other solution, it’s one to keep in your toolbox for when the time is right.

Plus, it’s an excuse to show a LEGO build video.

by Leon Rosenshein

More Broken Windows

I’ve talked about the Broken Windows theory before. The idea that visible signs of problems in a community, like persistently broken windows, are a leading indicator of further problems in that community. And I said that in software development, problems like frequent pages, technical debt, and failing to code for the maintainer set precedents and lead to more of the same.

Since then, I’ve seen multiple cases where, anecdotally, that has proved true. It starts with one little thing. Then another. And another. Pretty soon, intermittent failures are ignored. Incorrect comments are left in the code. Instead of simplifying a complicated conditional, more specific conditions get added.

Unfortunately, as important as individual cases and anecdotes are to insights and understanding, they are stories. And while a story might be a singular datum, the plural of anecdote is not data. The thing is, just because there are stories and anecdotes, it doesn’t mean there’s no data. I just didn’t know of any. Until now.

According to a recently published paper, The Broken Windows Theory Applies to Technical Debt, there is now proof.

DAG showing the causal chain from paper goals to questions

We now have data. Not just data, but also, rigorous analysis of the data. Sure, it was a small study. 29 participants and 51 submissions. All from a small geographic region. It’s not incontrovertible, but it’s a start.

According to the paper,

The analysis revealed significant effects of TD level on the subjects’ tendency to re-implement (rather than reuse) functionality, choose non-descriptive variable names, and introduce other code smells identified by the software tool SonarQube, all with at least 95% credible intervals. Additionally, the developers appeared to be, at least partially, aware of when they had introduced TD.

Not only did existing technical debt lead to more technical debt being added, but the participants at least partially recognized that they were adding technical debt to the code.

The study didn’t say it, but if existing tech debt is strongly correlated with more debt being added, it stands to reason that reducing tech debt will cause a reduction in the amount of new tech debt added.

Something to think about. And act on.

by Leon Rosenshein

Readability vs Idioms

We’re all here trying to add value. Trying to solve the problems we’ve seen. The problems that keep us up at night. Coming up with solutions that make a difference to something that’s important to us.

The question though, is not how to add the most value in the next instant, but how to add the most value over time. That’s an important distinction. Because what adds the most value right now might be different from what will add the most cumulative value tomorrow, next week, or next year.

You can see that in many different places. Helping someone else solve their problem instead of focusing on your own all the time. Taking many more much smaller steps in the right general direction as you figure out the correct path. Instead of just fixing a failure/outage/mistake, using it as an opportunity to make sure that a whole class of problems doesn’t happen again, or at least is much easier to isolate and recover from. It happens when you write new code and when you modify existing code.

Readability is a big part of that. You can solve almost any programming problem in almost any language. And you can make it look the same in all those languages. But should you? Is it really the most readable, most understandable, most maintainable way if you do it that way?

I’d argue that it’s not. Consider Go’s structs vs Java’s classes vs Python’s classes. At a high level they’re all about encapsulation. Keeping things in a domain together and separating domains. But they’re also very different. Java loves to inherit, but can compose. Go loves to compose and doesn’t have inheritance. Python has inheritance and composition, but doesn’t care which you use. Java has a strict type hierarchy, which must be obeyed. Go’s interfaces are satisfied structurally, and Python is duck typed. If you write the same code in all three languages it’s going to look odd in at least one (and probably all) of those languages to people familiar with that language.
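As a rough illustration of the composition side of that, here’s what it tends to look like in Go. The types are invented for the example; the point is that the relationship is built by embedding, not by inheriting.

package main

import "fmt"

// Engine is a small, self-contained component.
type Engine struct {
	Horsepower int
}

func (e Engine) Start() string {
	return fmt.Sprintf("engine started (%d hp)", e.Horsepower)
}

// Car doesn't inherit from Engine; it embeds one. Engine's methods are
// promoted onto Car, but the relationship is "has-a", not "is-a".
type Car struct {
	Engine
	Model string
}

func main() {
	c := Car{Engine: Engine{Horsepower: 300}, Model: "roadster"}
	fmt.Println(c.Model, "-", c.Start())
}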

That sameness is not good. No matter how readable code is in one language, if you transfer how you write in one of those languages to another language, it will NOT be readable. It might be decipherable, but we’re not looking for decipherable, we’re looking for readable, easy to understand code.

Of course, readability is one of those ilities. The non-functional requirements that creep into your requirements and guidelines. And like most of the ilities, there’s no good, clear, objective metric for how readable your code is. Given that, you’re never sure how readable your code is. So how can you know if it’s readable enough?

You can’t. So maybe there’s a better thing to focus on. Like guardrails. Guardrails that make it easy to do the right thing. Guardrails that encourage working with the system. Guardrails that make it obvious what is happening, and what isn’t.

And that’s where libraries and idiomatic code come into play. An example in Java is using try/finally. In Go it’s checking the returned error value. In Python it’s truthiness and comprehensions. If you write with those idioms (and others), other people familiar with the language will see what you’re trying to do. They’ll know what you mean, what you’re trying to do, and the benefits and limitations of the idioms, libraries, and language features you’re using. By working with your languages and libraries you help yourself and anyone else who works on the code do the right thing.
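For Go, that idiom is the explicit error check right at the call site. A minimal sketch (the file path is hypothetical):

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// The idiomatic pattern: call, check the error immediately, handle or
	// return it, and only then use the result. Readers of Go expect exactly
	// this shape and notice when it's missing.
	data, err := os.ReadFile("/tmp/example.txt") // hypothetical path
	if err != nil {
		log.Fatalf("reading file: %v", err)
	}
	fmt.Printf("read %d bytes\n", len(data))
}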

Saying you’re doing something to make it more readable is arbitrary and subjective. Making it easy to do the right thing and hard to do the wrong thing is clear and objective. Don’t just ask people to be careful, make their lives easier.

by Leon Rosenshein

The Developer's Journey

I’ve been a developer for a long time now. I’ve seen others start down the path. I’ve seen people make mistakes, and I’ve picked them up and helped them continue on their journey. I’ve seen their trials and tribulations. I’ve seen their highs and lows. And everyone’s journey is different. But there are some pretty common things I’ve seen along the way, especially amongst the most successful.

Starting Out

In which the person sees an opportunity. It could be as simple as wanting to change the color of a texture or an operating parameter of an item in a game. It might be more complex, like automating some calculation in a spreadsheet, or it might even be some “advanced” thing like parsing some data out of a set of text files and then doing something with the data. But the developer rejects the opportunity and decides it’s easier to do it by hand or just live with things the way they are. After living with the problem for a while it turns out that someone the person knows has some experience and somehow, with a few deft keystrokes and function calls, solves the problem, almost magically. Armed with that new knowledge the person tries to do the same thing themselves.

Learning and Growing

Of course, doing it themselves is hard. At first it’s simply struggling with the language itself. What are the keywords? What kinds of loops and collections are there and which ones should be used when? Slowly it starts to make sense and they’re thinking less about how the language works and more about the problem they’re trying to solve. This kind of thinking expresses itself in levels of abstraction and in where the boundaries between things are. At first everything is ints, floats, and strings in a single giant method. As they progress in their ability and understanding, those start to coalesce into types, libraries, and executables.

Systems, data, and their interactions start to be the driving factor instead of what a method or library does. They start talking about the problems in the problem domain, not the code domain. Who took what? What’s the most efficient way to get that person from home to a hotel in a different city? How can we ensure that all of the food we need is made, everything we make is used, and leftovers are used to help those in need?

Sharing

At this point in their career, the developer is deep in the code. They understand it. They understand its place in the world. They know how a change in one place will have some unusual impact on something else. Not because of a problem, but because they understand the underlying connection between those two things. They’re usually very happy in that place. Often, they never leave.

But sometimes they do. Either they recognize themselves, or someone (a mentor or manager) points out to them that they could do more, for themselves and others, if they want to. They can teach others. They can help connect people and things. They can expand their scope and purview to include more and different things. They not only solve problems themselves; they help others solve the problems they have.

They recognize that being a developer is not just about code. It’s also (mostly?) about people, businesses, problems, and how they are all interrelated. Connection, influence, and passing on the learnings about how to avoid problems become the primary goals.

Or, as I’ve put it before, a developer’s career progression is about scope. Are you just looking at yourself, your team, your company, or your world? The broader your scope, the further you are in your journey.

Of course, I’m not the only one to look at the developer’s journey. According to Joseph Campbell, the developer’s journey goes through 17 distinct phases, often broken down into 3 acts. To wit:

  1. Departure
  2. Initiation
  3. Return

Wait a minute. That’s not the developer’s journey, that’s the Hero’s Journey.

This image outlines the basic path of the monomyth, or Hero's Journey. The diagram is loosely based on Campbell (1949) and (more directly?) on Christopher Vogler, 'A Practical Guide to Joseph Campbell’s The Hero with a Thousand Faces' (seven-page-memo 1985). Campbell's original diagram was labelled 'The adventure can be summarised in the following diagram:' and had the following items: Call to Adventure, Helper, Threshold of Adventure: Threshold crossing; Brother-battle; Dragon-battle; Dismemberment; Crucifixion; Abduction; Night-sea journey; Wonder journey; Whale's belly

They’re two different things, aren’t they? Consider Bilbo Baggins. He didn’t ask to go There and Back Again. He didn’t even know that such a thing was possible when the story started. But he went. He learned. He understood. He had a wizard and good friends to help him along the way. Then he came home. And made the Shire a better place. For himself and everyone else.

Maybe the developer’s journey and the hero’s journey aren’t that different after all. Something to think about.

by Leon Rosenshein

One Thing At A Time

Doing one thing and doing it well is The Unix Way. As I said in that article, tools should do one thing and do it well. If you need to do two things with two tools then connect them with a data pipe. It’s a great tenet for tools and applies to any number of systems that I’ve worked on. From text extraction to image processing to 3D model generation to an entire micro-service network.
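In Go terms, wiring two small tools together with a data pipe might look something like this sketch. The commands are just placeholders for the idea of one tool feeding the next.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Two small tools, one job each, connected by a pipe.
	list := exec.Command("ls", "-1")
	count := exec.Command("wc", "-l")

	pipe, err := list.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	count.Stdin = pipe
	count.Stdout = os.Stdout

	if err := list.Start(); err != nil {
		log.Fatal(err)
	}
	if err := count.Start(); err != nil {
		log.Fatal(err)
	}
	if err := list.Wait(); err != nil {
		log.Fatal(err)
	}
	if err := count.Wait(); err != nil {
		log.Fatal(err)
	}
}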

It’s a great tenet in other areas as well. It lies at the heart of an agile development processes. Do one thing and finish it. See how it works. Get some feedback. Figure out what the next thing to do is. Do that. Lather, Rinse, Repeat. Take many more much smaller steps. You know where you are. You know where you want to be. The exact path between those two (and probably the exact destination) will change as you learn along the way. Uncovering better ways of developing software by doing it.

Another place it applies is change management. How you structure your checkins, code reviews (CRs), pull requests (PRs), or whatever you call them. Every commit or CR should have one logical purpose. Sometimes that means that any given CR doesn’t add customer value. Sometimes, before you make a change that adds value, you need to make a change (or multiple changes) that makes the change you need to make easier. And that’s OK. Because just like you should code for the maintainer, you should make CRs for the reviewer.

The question is, why is this important? After all, isn’t it faster to have one change, one big optimal step, that just works? In theory that might be the case. In practice it isn’t, for multiple reasons. Most importantly, ensuring that one big step ends up in the right place is probably impossible. We don’t know where that place is exactly, so there’s no way we can be sure we’ll hit it.

There’s another reason. A reason that has to do with combinatorics. Let’s say you have two changes you’re making. Let’s make it really simple by saying that each change either works or it doesn’t, and that determining which is trivial. In this situation there are 4 possible outcomes. They both work, they both fail, or one works and the other fails. That means 75% of the possible outcomes are failures. Then, after you determine that the experiment is a failure, you need to figure out which of the three possible failures it was. Then you need to fix it. The more things you try at once, the worse it gets. With 3 changes, 88% of the outcomes are failures. Only one path, the one where every change works (all heads, if you think of each change as a coin flip), is a success.

With 4 changes, 94% of the possible outcomes are failure cases. Any savings you get by taking that big step are going to be eaten up by dealing with all of the possible failures. You might get lucky once or twice, but over the long term you’re much better off making one change at a time.
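A back-of-the-envelope check of that math, as a short Go sketch:

package main

import "fmt"

func main() {
	// With n independent changes that each either work or fail, only one of
	// the 2^n possible outcomes is "everything worked". The rest need
	// debugging to figure out which change(s) failed.
	for n := 1; n <= 4; n++ {
		outcomes := 1 << n
		failures := outcomes - 1
		fmt.Printf("%d changes: %d outcomes, %d (%.0f%%) are failure cases\n",
			n, outcomes, failures, 100*float64(failures)/float64(outcomes))
	}
}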

It doesn’t matter if the changes are in a single CR to be reviewed, a data processing pipeline, a microservice network, or the architecture of something you haven’t built yet. The more you change at once, the harder it is to know if the changes made things better or worse. So take the time to decompose your changes into atomic, testable, reversible steps and then make the changes, doing one thing at a time. You’ll be happier. Your customers will be happier.

And surprisingly, you’ll move faster as well.

by Leon Rosenshein

What's an Internal Customer?

I’m a platform and tool builder. I’ve spent most of my career building platforms and tools. Tools that others, inside and outside the company, use to do whatever it is they do to add value to their respective businesses. Even when the “tool” is something like Flight Simulator, built as a game to provide entertainment. Many, probably most, people who used Flight Sim used it as shipped. They might buy an airplane or some scenery, but basically they used it as shipped. But even those people also used it as a platform.

With platforms, I’ve talked about the difference between customers and partners before. It’s a big difference. With Flight Sim we had both. Some people who bought Flight Sim were clearly customers. They bought it, used it, and never talked to us about it. That’s a customer. Others, the people who built the add-ons, were partners. We worked with them. We made changes for them that made their jobs easier which made it possible for them to build more add-ons and make money doing it. Then, they built add-ons that we didn’t have time or resources to make. Those add-ons increased the demand for Flight Sim. So we did better. And the more copies of Flight Sim that sold the bigger the installed base they could sell to. So we treated our customers and partners differently.

And nowhere is the difference bigger than when you’re talking about the difference between an internal customer and an internal partner. With Flight Sim our customers and partners were clearly external. With other platforms, such as the various versions of distributed processing platforms I’ve built, the customers were very much internal. We were building tools and platforms for other people in the company to do their work to build whatever product they were building. Sometimes it was maps, sometimes it was image processing. Sometimes it was large scale ETL jobs. Regardless of what they were doing, they needed our platform to do their job. So were they our customers or partners? They needed what we were building, but if they didn’t need it, we didn’t need to build it. We needed each other.

Or at least we did at the beginning. As John Cutler put it, that internal team is only your customer if the team providing the product can

  1. walk away from the “deal”
  2. charge their “customers”
  3. sign contracts
  4. pursue work outside the company with other “customers”
  5. manage their own budgets
  6. hire their own teams

You know, the kinds of things you can do when you’re a company trying to sell something to someone outside the company. Of course, you can’t arbitrarily do any of those things without consequences, but there’s lots of choice on both sides. If that isn’t the case, for whatever structural, organizational, or financial reasons, it’s not a seller/customer relationship. It’s a partner relationship.

When we started building those platforms we had nothing to sell to our customer, and there was nothing they could “build/buy”, internally or externally. Neither side could walk away. We didn’t have individual budgets and we couldn’t just decide to go do something else. The situation was what it was, and we had to work with it. We had to work together to build the product(s) our customers wanted. We had to be partners in creating both the “product” our team was building and the maps/imagery/data sets that the other team needed. So that’s how we started out.

That doesn’t mean it had to stay that way. We aspired to have products that our customers wanted to buy. They aspired to have products to “buy” and that they could make feature requests on. And we eventually got there. By working together in partnership to build those first versions. And once we had products, as William Gibson said, the street finds its own uses for things. Once those other use cases were found, we could (but didn’t) walk away from one of them because there were other customers. We could build a chargeback model. We had contracts (SLAs, usage commitments, etc.). We looked for (and found) other customers and related work. We got a budget and managed both it and our own time. In short, our partners had become customers.

That’s how you get from internal partners to internal customers. And give both sides the autonomy they want (need?) to get their jobs done and feel good about it.

by Leon Rosenshein

WIP and Queuing Theory

A distributed processing network with queues.

You never know where things will back up.

I’ve talked about flow a few times now. It’s a great state to be in and you can be very productive. On the other hand, having too much WIP inhibits flow and slows you down. And it slows you down by more than the context switching time (although that is a big issue itself). A common refrain I hear though goes something like “I need to be working on so many things at the same time otherwise I’m sitting around doing nothing while I wait for someone else.”

On the surface that seems like a reasonable concern. After all, isn’t it more efficient to be doing something rather than not doing anything? As they say, it depends. It depends on how you’re measuring efficiency. As an individual, if you don’t wait then you’re clearly busier. Your utilization is up, and if you think utilization is the same thing as efficiency then yes, the individual efficiency is higher.

On the other hand, if you look at how much is getting finished (not started), you’ll see that staying busy will reduce how much gets finished, not increase it. It’s because of queuing theory. Instead of waiting for someone to finish their part of a task before you get to your part, you start something else. Then, when the other person finishes their part, the work sits idle while you finish whatever thing you just started. Since the other person is waiting for you to do your part, they start something else. Eventually you get to that shared thing and do your part. But now the other person is busy doing something new, so they don’t get to it until they finish. So instead of you originally waiting for someone else to finish, the work ends up waiting. Waiting at each transition. The more transitions, the more delay you’ve added to the elapsed time. Everyone can do every task in the optimum amount of time, but you’ve still added lots of delay by having the work sit idle.

Explaining dynamic things with text is hard. Luckily there are other options. Like this video by Michel Grootjans where he shows a bunch of simulations of how limiting WIP (and swarming) can dramatically improve throughput and reduce cycle time. Check it out. I’ll wait.

What really stands out is that the queues that appear between each phase in a task’s timeline are what cause the delays. With 3 phases there are 2 queues. In this case there’s only one bottleneck, so only one queue ever got very deep, but you can imagine what would happen if there were more phases and transitions. Whenever a downstream phase takes longer than its predecessor, the queue will grow. If there’s no limit, it ends up with most of the work in it. Adding a WIP limit doesn’t appreciably change the total time, since the bottleneck still sets the pace, but it does reduce the cycle time for a given task. Each task spends much less time sitting in a queue.
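The relationship behind that is roughly Little’s Law: average cycle time is WIP divided by throughput. A quick sketch, with numbers invented purely to show the shape of the effect:

package main

import "fmt"

func main() {
	// Little's Law: average cycle time = WIP / throughput.
	// Throughput is set by the bottleneck and stays the same in both cases;
	// only the amount of work in progress changes.
	throughput := 2.0 // tasks finished per day (made-up number)

	for _, wip := range []float64{20, 4} {
		cycleTime := wip / throughput
		fmt.Printf("WIP %2.0f, throughput %.0f/day -> average cycle time %.0f days\n",
			wip, throughput, cycleTime)
	}
}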

And that cycle time is the real win. Unless you’ve done a perfect job up front of defining the tasks, limiting WIP gives you the opportunity to learn from the work you’ve done. In Michel’s example, if you learned you needed to make a UX change to something you could do it before you’ve finished the UX. You’d still have the UX person around and they could incorporate those learnings into future tasks. You’ve actually eliminated a bunch of rework by simply not doing the work until you know exactly what it is.

Of course, that was a simple simulation where each task of a given type takes, on average, the same amount of time. In reality there’s probably more variance on task length than shown. It also assumes the length of time doesn’t depend on which worker gets the task. Again, not quite correct, but things average out.

Even with those caveats, the two big learnings are very apparent. Limit WIP and share the work. Eliminate the queues and reduce specialization and bottlenecks. Everyone will be happier and you can release something better sooner. Without doing more work. And being able to stay in flow.

by Leon Rosenshein

Built-In Functionality

A pocket knife with multiple tools available.

You can use all the tools, not just the large blade.

Most languages have a way to start an external process. It’s usually called some version of exec, as in execute this process for me please. There are generally lots of ways to call it. Synchronous and asynchronous. Capturing the output, stdout and stderr. Passing arguments or not, or even piping data in via stdin. Capturing the exit code.

All those options are needed when you’re running external applications/executables. If you’re calling a 3rd party program to do some heavy lifting, you’ll probably want that level of control over what goes into the executable. You’ll want to know exactly what comes out: stdout, stderr, and any data persisted. If you then need to do something with the output data, you’ll want to wait for it to finish so you know it’s done and whether it succeeded, so you’ll want to be synchronous. On the other hand, if it’s a best effort, you might just want to know that it started successfully and have it keep running after you’re done. For all those reasons, and others, there are very good times and reasons to use the exec family of functions.
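In Go, for example, exercising that kind of control might look like this sketch. The command and its arguments are placeholders; the point is capturing stdout and stderr separately and pulling out the exit code.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Run synchronously, capture stdout and stderr separately, and pull out
	// the exit code. The command here is just a placeholder.
	cmd := exec.Command("some-external-tool", "--flag", "value")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	err := cmd.Run()
	fmt.Printf("stdout: %q\n", stdout.String())
	fmt.Printf("stderr: %q\n", stderr.String())
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("exited with code %d\n", exitErr.ExitCode())
		} else {
			fmt.Printf("failed to run: %v\n", err)
		}
	}
}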

On the other hand, they’re also very easy to mis-use. In many (most?) languages it’s pretty trivial to run a shell command, pipe its output to a file, then read the file. If that’s all you do you’ve opened yourself up to a whole raft of potential issues.

The biggest is that if you’re exec’ing to a shell, like bash or zsh, you never know what you’re going to get. You’re at the mercy of the version of the shell that’s deployed on the node/container you’re running in. You can hope that the version you want is in the place you want, but unless you’ve made sure it’s there yourself, you don’t know. Sure, you could write your shell script to use sh v1.0 and be pretty sure it will work, but that’s really going to limit you. The same goes for relying on the standard unix tools in a distro. That works fine until someone sticks the thing you’ve written into a distroless container (or tries to build/run it on a Windows box) and suddenly things stop working. That’s why most languages have packages/modules/libraries built into them that provide the same kind of functionality you would get from those tools.

Second, consider this little Go example. It’s much easier to just call

out, err := exec.Command("ls", "-l", "/tmp/mydir").Output()
fmt.Println(string(out))

than

infos, err := os.ReadDir("/tmp/mydir")
if err != nil {
	log.Fatal(err)
}

for _, info := range infos {
	entryType := "file"
	if info.IsDir() {
		entryType = "directory"
	}
	fmt.Printf("Found %s, which is a %s\n", info.Name(), entryType)
}

and have the output right there on the screen. And that’s how it’s often done. But that ease leads to some big gaps where problems can sneak in. There’s no input validation or error checking. In Go at least you have to capture any error in err, but you never have to use it. And that snippet ignores stderr entirely.

At the same time, you have to properly escape your input. With ls it’s not too bad, but you have to handle spaces, special characters, delimiters, and everything else your users might throw at you. Add in calling a shell script and it gets worse. The more interpreters between the thing you type and the thing that gets executed the more likely you are to miss escaping something so it gets to the next level as intended.
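One way to sidestep most of that escaping in Go is to skip the shell entirely and pass arguments directly. A sketch, with a deliberately hostile-looking, hypothetical input:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	userSuppliedDir := "/tmp/my dir; rm -rf /" // hostile-looking input, for illustration only

	// Risky: splicing the value into a shell command line means spaces,
	// semicolons, and quotes all need careful escaping.
	// exec.Command("sh", "-c", "ls -l "+userSuppliedDir)

	// Safer: no shell, no re-parsing. The value reaches ls as a single
	// argument, exactly as given.
	out, err := exec.Command("ls", "-l", userSuppliedDir).CombinedOutput()
	fmt.Println(string(out), err)
}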

Finally, if you’re calling a shell script, how robust is it really? Code Golf might be a game, but it’s a lousy way to write reliable, resilient code. Even if the correct version of bash is used, and you get the argument parsing and escaping right, executing a script becomes an undebuggable, fragile, black box. And no one wants that.

So next time you think “I’ll just do a quick exec to get something done,” think again. Use the tools of your language and check your work.

by Leon Rosenshein

Consensus vs. Consent

Consent and Consensus. Two very similar words. The first 6 letters are the same. The Levenshtein distance is only 3. In general terms they both mean the same thing. If you have consensus you also have consent. The converse, however, is not true. In detail, they’re very different.

Consensus:

  • general agreement : UNANIMITY

Consent:

  • compliance in or approval of what is done or proposed by another : ACQUIESCENCE

It’s that last word in each definition that drives the difference. To get consent you need to make sure that no one is completely against the idea. That there’s no one who says, “You can do that, but you’re doing it without me. I will always argue against that action or point of view.” If you have consent everyone will go along with the decision. It might not be their first choice. It might not be the 10th. It might even be their last choice, but they’re OK with it. They will acquiesce to the decision.

Consensus on the other hand, means everyone thinks the plan/point of view is the best choice. No one has any doubts or thinks there might be a better way. Everyone is 100% on board and wondering why you haven’t started yet. This is a wonderful thing when it happens.

Think of it this way. For every idea/plan/proposal you have all of the people who get to weigh in get a vote. They can vote in one of 4 ways:

  • Yes: I think this is a great idea and we should do it now

  • OK: I’m willing to go along and support this idea. I don’t see any problems, so let’s do it.

  • No: I have a specific problem that needs to be addressed. Address my issue and I’m a Yes, or at least an OK.

  • Absolutely Not: I completely refuse to be involved. I will not be part of a group that does this.

To get consent you need to get everyone to Yes or OK. If you have people in the “No” camp you need to address their concerns. You need to address their issue, but you don’t need to get them to think it’s the greatest idea ever. Those in the “Absolutely Not” camp should be expected to provide an alternative. Since they think everything you’ve proposed is wrong, it’s on them to replace it all. In reality you’ll sometimes find someone who feels that way, but far more often, when someone says “Absolutely Not” they’re really just a “No”, with more emphasis. There’s a specific problem they see that they feel you’ve ignored. Address that issue and they become an OK. Getting everyone to “Yes” or “OK” can be hard. You’ll probably need to change the plan and there will be compromises, but it’s doable, and when you’ve decided, you have solid support behind you.

To get consensus, on the other hand, you need to get everyone into the “Yes” category. And that’s orders of magnitude harder. You have to get everyone to agree that the current idea is the best idea possible. That there’s no point thinking about it more.

Sometimes doing that is the right thing to do. If you’re on a road trip and you have time to make one stop for food, you better make sure you have consensus. That everyone can get something to eat at the place you stop. If your group has 85% BBQ connoisseurs, 15% omnivores, and one grain-free vegan (for medical reasons), you can’t stop at the BBQ joint that only serves brisket, pulled pork, buttermilk biscuits, and mac and cheese. It doesn’t matter how enthusiastic the BBQ experts are. The grain-free vegan can’t eat there. It’s not that they don’t want to or that they’re being difficult. Eating there is physically bad for them, and if they ate that food you’d be days late since they’d be in the hospital. You need to go to the all-night diner down the road a little, since everyone can get something there. That’s consensus.

On the other hand, if that grain-free vegan says something like “I can’t eat at the BBQ place. It’s a physical impossibility. But there’s a market a couple of doors down. While you’re getting your food I’ll run over to the market and get something I can eat,” suddenly you’ve got consent. You can’t get consensus, but you’ve changed things so that you can get consent. And often, consent is all you need to move forward.

So next time you’re trying to build consensus make sure that’s really what you need. If you don’t need it and consent is enough, just go for that.