Recent Posts

by Leon Rosenshein

Design By Counter-Example

Every once in a while I come across a coworker who knows exactly what they don’t want. Whether it’s a design doc, slide deck, email, error message, or icon, whatever you’ve come up with isn’t it. They can tell you that you were close, but it’s not what they’re looking for. The level of detail wasn’t right, or the constraints were at the wrong level. The color or saturation was off. They could be very clear that there was a problem.

Unfortunately, they weren’t nearly as clear about the converse. They couldn’t tell you what they did want. Now I understand that not everyone can describe colors, write the perfect error message, or come up with the exact sound bite that everyone will remember for years, but knowing that didn’t make my job easier. I wanted the right feedback at the right time. Unfortunately, all I got was “That’s not what I’m looking for.”

Which left me in the position of needing to do a binary search for the right answer. Pick a solution. Get mostly non-directional feedback, with just the tiniest amount of direction. Then I’d take a medium-sized step in that direction. And get the same kind of feedback. And then take step N+1. Either further in the direction I was going, or half-way back toward step N-1. Lather. Rinse. Repeat. Until I got close enough to the solution, or we both got tired of trying and accepted where it was.

While it worked, after a fashion, it wasn’t efficient. Or fun. Or scalable. So how can we do better? Even in the case where we don’t know the right answer. Where you have to creep up on it because you really don’t know where it is yourself, so you can’t describe it to anyone. You just know what it isn’t.

First, and foremost, when giving feedback, it’s ok to start with what needs to be changed. But instead of stopping at “That’s not it”, continue on with what it is like, or more like. And add why you want the change. Instead of saying “There are too many words on the slide” and leaving it at that, add the goal. Something like “I want the audience to be able to focus on the one important thing on the slide and not be distracted.” Now that’s the kind of actionable feedback I like.

Which, when you think about it, is the kind of goal/value oriented management you want in general.

by Leon Rosenshein

Complex Vs. Complicated

According to dictionary.com, complex is:

adjective

  • composed of many interconnected parts; compound; composite:
    a complex highway system.
  • characterized by a very complicated or involved arrangement of parts, units, etc.:
    complex machinery.
  • so complicated or intricate as to be hard to understand or deal with:
    a complex problem.

While complicated is:

adjective

  • composed of elaborately interconnected parts; complex:
    complicated apparatus for measuring brain functions.
  • difficult to analyze, understand, explain, etc.:
    a complicated problem.

Seems pretty similar, right? In fact, 2 of the 3 definitions of complex include the word complicated. And the thesaurus lists them as synonyms for each other. So they’re interchangeable, right?

Sometimes, maybe even often, but not always. One of the biggest areas where they’re not interchangeable is in mapping management processes to use cases. And that’s where the Cynefin framework comes in.

Cynefin matrix

As you can see right there in the chart, the top two use cases are Complex and Complicated. Because how you manage them is very different.

Building a car is complicated. It doesn’t involve a lot of trade-offs and decision making (that’s designing a car), but it has lots and lots of moving parts (literally). And critical paths. Deep dependency chains that might not be obvious at first glance. Getting them out of order means lots of rework and delays. Because we’ve done it so many times, and the interdependencies (not dependencies) between steps are minimal, you can write down the steps, in a specified order, and expect that the process will pretty much follow them.

On the other hand, software development is often complex. It’s the addition of the first part of the definition, the interconnectedness, that makes it complex. The fact that it’s recursive. And that you don’t have full knowledge going in. Your choice of domains informs the design of each domain’s API, and the act of building that API informs your choice of domains, which changes your API.

Or, working with your customer on what they’re really trying to do, you build a ubiquitous language to describe the work and the expected value. Then you start to implement things based on that understanding. But your language is not perfect, so you learn more about what they really mean as you work. So what you’re building changes. Which clarifies the language. Which changes what you’re building some more.

So the key is to pick the right tool for the job. Using the wrong strategy isn’t just sub-optimal, it can make things worse.

And to make software development more challenging, different parts of the solution you’re building land in different quadrants, so it’s not one size fits all. You need to find the right strategy for the different parts of the job and make them work together.

by Leon Rosenshein

More Than Cruft

I’ve talked about Tech Debt before. The idea that, as with a loan, you can trade future work and understanding for earlier delivery of a product, then pay it back later. It made sense when I first heard about it, and it makes sense now.

What’s important though, is to do both parts. Not just pushing work or learning into the future to speed up delivery. But what does that really mean? It means being an owner and thinking not just of delivery dates, but of future developers, which includes you. You’re not just making life harder for some faceless future person, you’re making life harder for yourself. So it’s in your own best interest to do it knowingly.

It means doing things with the best information and understanding you have at the time, then, as you learn, do the important things, the critical things, that you put off. Making sure the code reflects your current understanding, at all times. You made the choice and bought the time. Maybe it was for a demo or a holiday ship date, but you made the choice. Now live up to the promise you made to yourself and your customers. Take the thing you got by releasing sooner, the capital (feedback, experience, brand recognition, investment, whatever) and put it to use. Update the user interface after watching your customers use it. Refactor your domains now that you know what they really are. Add error handling and the operational domain outside the narrow happy path you just implemented.

I’ve long known that Ward Cunningham came up with the idea, and I recently came across this description of the situation where he first started using the term. At the time he was using Smalltalk and talking about objects and hierarchies. We don’t use Smalltalk, and where he talks about objects he’s really talking about domains and domain driven design, not objects as defined by a particular language. And doing things that way, keeping your domains clear and bounded, is what keeps your debt from burying you. It’s what allows you to keep the promise you made to yourself when you took on the debt in the first place. It’s about keeping things clean enough that you can be agile enough to use the learnings you took on the debt for in the first place.

And if that isn’t clear enough, here’s Ward himself talking about it and a transcript if you prefer.

by Leon Rosenshein

Blink Estimation

Speaking of Tacit Knowledge, have you heard of Blink Estimation? Take a medium sized group of people with significant tacit knowledge in a domain. Explain a problem/opportunity and give them some time to ask questions and explore the shape of the problem. Then, ask them for a rough order of magnitude of the level of effort required to implement the solution. If your group comes up with similar answers then you’ve got your estimate. If they don’t, work with the outliers to understand why their answers were so different. At least one group missed something, possibly both.
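To make the “similar answers” check concrete, here’s a minimal sketch in Go. The function name and the one-order-of-magnitude threshold are my own choices for illustration, not part of any formal method:

```go
package main

import (
	"fmt"
	"math"
)

// blinkSpread returns the difference, in orders of magnitude,
// between the largest and smallest estimate. Estimates can be in
// any consistent unit (days, person-weeks, dollars).
func blinkSpread(estimates []float64) float64 {
	min, max := math.Inf(1), math.Inf(-1)
	for _, e := range estimates {
		l := math.Log10(e)
		min = math.Min(min, l)
		max = math.Max(max, l)
	}
	return max - min
}

func main() {
	// Four experts, estimates in person-weeks.
	estimates := []float64{12, 15, 9, 14}
	if spread := blinkSpread(estimates); spread < 1 {
		fmt.Printf("rough consensus (spread of %.2f orders of magnitude)\n", spread)
	} else {
		fmt.Println("talk to the outliers; at least one group missed something")
	}
}
```

The point isn’t the arithmetic, which is trivial; it’s that agreement within an order of magnitude is the signal, and disagreement is a prompt for conversation, not averaging.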

It’s surprising how often this works. There are lots of caveats and biases to watch out for, but if you can manage those you can get a good answer. The biggest of those caveats is that tacit knowledge can be incredibly domain specific. Which means the folks you ask need to be well versed in something close enough to what you’re asking them about.

And not just on the surface. All software development is not the same. Asking someone with deep tacit knowledge about iOS apps for an estimate on distributed parallel processing is unlikely to give you a good answer. Both problems involve computers, but they’re not the same. Consider bicycles and unicycles. In the US somewhere around 6% of people say they don’t know how to ride a bike. Unicycles are half the complexity (1 wheel vs 2), so even fewer people don’t know how to ride a unicycle, right? Of course not. On the surface they’re similar, but in reality they’re mostly unrelated.

Then there are the cognitive biases. Not surprisingly, if the experts have been working on the project and know it was supposed to take a year and they’re 6 months in, that fact will color their answer. So will how long it took them to finish the last task they worked on. Just mentioning an expected duration will bias the results. Even a bad commute into the office that morning can bias the results.

And it won’t work if the problem space is new. We can estimate how long it will take to build a single family ranch house in the suburbs because we’ve got thousands (millions) of examples to look at and get that tacit knowledge from. Building the first habitat on Mars, from local material, will not be nearly as predictable.

How can such a shallow evaluation be better than a deeper one? Of course having the right tacit knowledge is key. But there’s more to it than that. Here’s a question for you. How long is the coastline of England? It depends on how finely you break the problem down. I suppose if you get down to Planck units there’s a limit, but the closer you look the longer the coastline gets. Software can be like that too. The deeper you design, the more work you find. And of course, no task takes less than a day, so the estimate blooms the harder you look. Which seems counterintuitive, but is reality.

We’re doing lots of new things, so it may not be very applicable, but just for grins, next time you start a project, get a few folks together, think about the size and shape of the problem (but not too deeply), and make an estimate. Then see how close you come. Do it a few times. Keep track of the results. Regardless of the outcome it will likely make you blink.

by Leon Rosenshein

Tacit Knowledge

When I first heard about tacit knowledge and started looking into it, I thought it was the same thing as tribal knowledge. You know, those things that aren’t written down, never get explained, and you just need to know to be successful. Turns out that’s not the case. On the surface they seem similar in that in both cases you aren’t told the knowledge you’re looking for. The difference is that tacit knowledge is the kind of knowledge that can’t be explained with words (at least in an amount of time that is useful), not just something that hasn’t been explained.

One classic example is riding a bicycle. Think about it. How would you explain to someone how to ride a bike? You could tell them all about the physics of flywheels and precession, but understanding the physics won’t teach you the muscle and balance feedback you need. Comparing it to standing on one foot or slack-lining doesn’t really help either, but those are comparisons, not explanations or instructions.

The next thing I thought was that ok, this is just experiential learning. You know, learning by doing instead of learning by rote or by watching. But again, not really. I’ll be the first to admit that I’m an experiential learner. I learn best by trying things and watching what happens when I interact with something. But I can learn by listening to others, or just watching them do it. Learning styles are a property of the learner, not the subject.

So what is tacit knowledge? To me, it’s the knowledge you get by doing something, then having to live with the results. In software it’s the kind of knowledge that leads to a feeling that something is the right (or wrong) design. A code or design smell. You look at it and say “that’s not right.” Then you think about it and you come up with some reasons. But the reasons come after the feeling and justify it, not precede it. And that’s what makes teaching and learning tacit knowledge so hard.

Consider the expert system (really a knowledge based system). Find a few experts, look at a bunch of their decisions, and ask them why. Keep asking why. Build a sufficiently complex tree that you can use to arrive at the same conclusions. Sounds simple. People tried it and it’s not. Take a simple case. Is that function too long? It should fit on a page. It should encapsulate a single action. It should be replaceable without changing the results. It shouldn’t have too many branches. Unless it's a complicated topic. Unless it needs that many steps. Unless the combination of possible conditions is that large. Unless there are no other functions. Unless breaking it up makes it harder to reason about.
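As a toy illustration of why that tree never converges, here’s a sketch. The rules and names are entirely mine, chosen only to show how every codified answer sprouts an “unless”:

```go
package main

import "fmt"

// tooLong is a naive expert-system rule for "is this function too
// long?", built by asking an expert why and codifying the answers.
// The exception branches are exactly the part that refuses to fit.
func tooLong(lines, branches int, isStateMachine bool) bool {
	if isStateMachine {
		// Unless the combination of possible conditions really is that large...
		return false
	}
	if branches > 10 {
		// Too many branches to reason about.
		return true
	}
	// "It should fit on a page."
	return lines > 50
}

func main() {
	fmt.Println(tooLong(200, 3, false)) // true: over a page
	fmt.Println(tooLong(200, 3, true))  // false: the exception fires
	// And tomorrow the expert adds "unless breaking it up makes it
	// harder to reason about", and the tree grows again.
}
```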

So how do you learn tacit knowledge? It’s not just repetition. Repeating the same thing over and over again will make you faster at it, but not better, and how do you know if you’re correct anyway? So you do it and look at the results. Repetition and introspection. That helps. Now you know what you did, and you can hopefully see the flaws, but that doesn’t help you get better.

For that you need to add something else. Feedback. The combination of repetition, introspection, and feedback is one of the best ways to build tacit knowledge. So how do you get that in software? Of course you have to do it. You need to write the code, or do the design. Over and over again. That’s the repetition.

You need to know if it works. You test what you’ve done, and see the results. And you change things until you get the answers you think you want. That’s the introspection.

And finally, you need to share what you’ve done and get feedback. It could be a diff/PR. It could be a design doc. It could be an architectural kata. Whatever the mechanism, you need that external feedback to make sure your biases, conscious and unconscious, don’t get in the way.

by Leon Rosenshein

Multiple Projects

Work in progress and I have an interesting relationship. I understand, both intellectually and viscerally, that regularly working on multiple things at once takes longer than doing the first things first. Motion is not the same as progress, and just because you’re busy doesn’t mean you’re adding value. On the other hand, sometimes there are external events you need to wait for. Doing nothing in those cases doesn’t help anyone. While not all motion is progress, lack of motion is lack of progress. Sometimes working on multiple things is the right thing to do, but when?

The first thing to think about is to compare the time spent on a project to the time spent switching between projects, the context switch time. If you’re talking about writing a design document in the morning, responding to email right after lunch, and then debugging an outage in the afternoon you may or may not be switching projects. Over the day you’ve done at least 3 things, but you were probably working on the same overall project. On the other hand, while you’re doing that afternoon debugging session a co-worker comes over and asks why the class structure is the way it is, so you’re looking at one piece of code and talking about a second one. Then your phone rings and it’s a call from your SO. Now all of a sudden you’re doing 3 completely different things at once. I don’t know about you, but I can’t do three things as well as I can do any of them.

One of the reasons comes from systems thinking. When you think about things as interrelated systems you recognize that there is some level of interconnection between things, even if you can’t see it on the surface. And for true multi-tasking, the cost of that interconnectedness looks something like

Time to context switch

One estimate is that each project beyond the first costs 20% of a day’s time in context switching. That means that if you’re working on 5 different projects (not tasks in a project) you might be losing 80% of your time to context switches. That’s a lot of time. To a first approximation, that’s about 1½ hours per switch. And the same amount of time to switch back.
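A quick back-of-the-envelope sketch of that rule of thumb. The 20%-per-additional-project figure is the estimate above, not a measured constant, and the function name is mine:

```go
package main

import "fmt"

// lostToSwitching applies the rule of thumb that each project beyond
// the first costs roughly 20% of your day in context switching,
// capped at losing the whole day.
func lostToSwitching(projects int) float64 {
	if projects <= 1 {
		return 0
	}
	lost := 0.20 * float64(projects-1)
	if lost > 1 {
		lost = 1
	}
	return lost
}

func main() {
	for p := 1; p <= 5; p++ {
		fmt.Printf("%d projects: %.0f%% of the day lost\n", p, lostToSwitching(p)*100)
	}
	// At 5 projects that's 80% gone: out of an 8 hour day, roughly
	// 1.5 hours per switch, and the same again to switch back.
}
```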

That’s a lot of time, and I generally don’t have that much extra time in my day. So again, to a first approximation, unless you expect that external delay to take over 3 hours, you’re much better off finding something else in the same project to work on. Do a code review. Answer some related emails (leave the unrelated ones for later). Work on that design doc. There will still be a context switch, but it’s much smaller and will take much less time.

by Leon Rosenshein

Gift Giving

I’ve talked about the gift of feedback before. How important it is, and how useful it is. One thing I didn’t talk about then though is what kind of feedback you give when. Gifts should match the occasion. You wouldn’t give a housewarming gift for a 75th anniversary, and feedback is just the same.

Let’s say you want your house painted, so you look around at newly painted houses and find that the ones you like have the same painter. You call the painter and start to discuss things. The painter puts a swatch on the side of the house for you to look at. You look at the swatch, tell the painter you don’t like that the edges of the swatch aren’t crisp and they should do a better job of edges and finishing things and head to work. You come back at the end of the day and see that your house is almost fully painted. The only thing left is the door and the window frames. The edges are perfect. Just what you want. But not the color. So you call the painter and tell them it’s 2 shades too dark and they need to redo it.

That doesn't make a lot of sense, does it? The wrong feedback at the wrong time. It’s understandable though. We want orderliness and predictability. We want to close out issues. So instead of opening a discussion on the things that are unclear we focus on things we can close out. But we should fight that urge and give the right feedback at the right time.

That’s the 1/50/90% feedback idea. When things are just starting out, when there’s 1% of the project done and most of the work is in the future, feedback is about the big things. Is the direction or vision correct? Are the right upstream and downstream dependencies identified and accounted for? Does this fit into the bigger picture you’re going for? In the house painting example, when looking at the swatch on the side of the house, the feedback should be about the color. How it fits in with your overall vision for the house. How it works with the landscaping and the surrounding neighborhood. Not if the edges are crisp.

Some time later, when things are well underway, say 50% done, feedback is about how it’s going and how well things are matching with expectations and vision. Are there any new learnings/understandings from doing the work that far that might cause a change in the plan? If so, have they been raised, answered, and incorporated? Again, back to the house painting, when one or two walls have the base color applied, does the color still work with the surroundings? When you look at it from farther away, from a perspective you couldn’t see with just a swatch, does it make the statement you want? Now might be the time to think about the trim color for the eaves and windows.

Finally, when the project is 90+% done, look at the external details. The fit and finish. It’s (mostly) too late to change the big details, but what are the smaller things that you can only see now in the full context that might need some adjustment. Back at the house, the painting is almost finished. You’re doing a walk-around with the painter. Now is the time to talk about the edges between the walls and windows. Look for paint splatters. What they call in the trade the punch list. The things that need to happen before it’s truly done.

And that applies to software just as well as it does to house painting. When you see a proposal or vision doc, that’s not the time to say there needs to be an API for X and it needs to include parameter Y, or to write the exact error text for an invalid parameter. Instead you should make sure the domains are properly separated, that the use case for the X API is covered, and that, as a principle, error messages are provided and include enough context for the user to understand how to fix the problem.

At an interim review, make sure the domain boundaries still make sense as things have evolved. Make sure use cases are covered, or at least not precluded. Think about new requirements that might have come up, or changes to upstream and downstream dependencies that might impact things. How has the overall environment changed, and what are the effects of that? It’s still not the time to correct the grammar on the error messages, but it is the time to make sure the principles are being followed and the info is there.

Finally, as the project is nearing completion, feedback gets very specific. Now is the time to check the grammar in the error messages. Make sure that it’s solving the problem completely, not leaving off some edge cases. Feedback about things that might make it easier to live with over the long term. And of course, if you do see a big problem, say something. It’s better to fix things late in the dev cycle than to put something out that makes things worse.

So continue to give the gift of feedback. Just make sure the gift matches the occasion.

by Leon Rosenshein

Things We Know (or think we know)

There are lots of things we know. Or at least assume to be true. But how many of them really are? Consider RFC 1925, which lays out a set of “truths” for the internet. And since it can be found on the Internet, it must be true.

Of course, sometimes the easiest way to define truth is to define what it isn’t. That’s where the Fallacies of distributed computing come in. We’re all developing distributed systems. We all “know” these things are fallacies. But sometimes, when writing code, we forget. And we forget at our own peril.

Consider the first fallacy: The network is reliable. We all know that it’s not. Certainly not in the small. Any given packet of information could be lost, delivered late, or slightly garbled by the time it arrives. And yet we rely on the unreliable network. You’re reading this over a network. Builds require access to remote information. Our phones use a network. In the large we can (usually) rely on the network because in the small there are things like retries, checksums, sequence numbers, and time stamps. So we rely on the network because the alternative is to be physically touching the same hardware at the same time. And of course that’s really a network as well, and not 100% reliable. But often it’s close enough to being reliable in the large that we can just ignore those error cases.

Or more likely, we decide that adding sufficient defense in depth, whether it’s via hardware redundancy, redundant routes, caching, store and forward, or any of a dozen other work-arounds, costs more, not just in raw dollar terms, but also in terms of added complexity, loss of agility, and increased cognitive load, to not be worth handling. So where do you draw the line? Well, it depends. It depends on the cost of all those changes.

Or maybe you’re not worried about the network. Your problem is simpler. You just need to deal with people’s names. That’s simple, right? 3, maybe 4 strings. Easy to store and display. Or maybe not. Some folks have 5+ strings in their name. I only have two. The artist formerly known as Prince had mostly one, then for a while it was just a custom glyph. He broke so many of the things we know about names in just a few short years. Or more recently a couple couldn’t use the baby name they wanted because it didn’t follow the rules.

All of which is to say, examine your assumptions. They may be correct. They may not be. The cost of dealing with outliers might be more than you’re currently willing to pay. That’s a fine answer. Just make it knowingly, not by mistake/default.

And whatever decision you make, write down why so you understand why you did what you did when you did it.

by Leon Rosenshein

Eliza

And I don’t mean Eliza Doolittle. No, in this case I’m talking about one of the first, if not the first, natural language processing artificial intelligence agents. I’m talking about the Eliza that came out of MIT’s AI lab in the mid 60s.

Eliza and I are about the same age, and I first ran into it in my early teens. And like any typical teen, I spent a bunch of time getting Eliza wrapped up in knots and saying things that made no sense at all. But in fact, if you took it seriously, you could have what appeared to be a real conversation. Could it pass the Turing test? No, but in a limited domain it could feel real.

There are a bunch of things to be learned from Eliza. Not the least of which is that much conversation is shallow, and doesn’t require much, if any, subject matter knowledge. The second is around systems thinking (more on that at a later date) and emergent behavior. Eliza used a very simple script, with some pattern matching and lookback, to generate its responses. Remember, this was almost 60 years ago, and there wasn’t any semantic understanding of the words. But still, some people thought they were talking to a real therapist and said it helped.
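A toy sketch of the technique, not the original MAD-SLIP code: a few rules that match a pattern and reflect the captured text back, with zero semantic understanding. The patterns here are illustrative inventions, not from the real DOCTOR script:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// respond is a DOCTOR-flavored response generator: match a pattern,
// echo the captured text back in a template, understand nothing.
func respond(input string) string {
	rules := []struct {
		pattern *regexp.Regexp
		reply   string
	}{
		{regexp.MustCompile(`(?i)i am (.*)`), "How long have you been %s?"},
		{regexp.MustCompile(`(?i)i feel (.*)`), "Why do you feel %s?"},
		{regexp.MustCompile(`(?i)my (.*)`), "Tell me more about your %s."},
	}
	for _, r := range rules {
		if m := r.pattern.FindStringSubmatch(input); m != nil {
			return fmt.Sprintf(r.reply, strings.TrimRight(m[1], "."))
		}
	}
	// No rule matched; fall back to a content-free prompt.
	return "Please go on."
}

func main() {
	fmt.Println(respond("I am tired of estimates.")) // How long have you been tired of estimates?
	fmt.Println(respond("The build is broken."))     // Please go on.
}
```

Even this handful of rules produces the emergent feeling of being listened to, which was exactly Eliza’s trick.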

It turns out that recently someone found the original ELIZA code and the DOCTOR script it used to act like a therapist. That website has links to the code and lots of interesting info about how Eliza’s original author felt about how Eliza was received.

And if you want to try it out yourself and don’t have an IBM 7094 running MAD-SLIP, check out this JavaScript version. I’d be interested to hear how your conversations went.

by Leon Rosenshein

Proverbially Speaking

Go is a pretty minimalist language. When you get down to it, there’s not much there. There are only 25 keywords. Types come in 3 flavors, simple types, aggregate types, and reference types, plus the magic interface. Simple types are numbers (ints/uints/floats in various precisions), strings, and booleans. Aggregates include arrays and structs, while reference types are slices, pointers, funcs, and a Go-ism, channels. That’s it.
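For the flavor of it, here’s a small sampler touching each of those families (the names are mine):

```go
package main

import "fmt"

// Aggregate type: a struct.
type point struct{ x, y int }

// func values and slices are both reference types; doubleAll takes a
// slice and returns a new one.
func doubleAll(in []int) []int {
	out := make([]int, 0, len(in))
	for _, v := range in {
		out = append(out, v*2)
	}
	return out
}

func main() {
	// Simple types: a number, a string, a boolean.
	n, s, ok := 42, "hello", true

	// Aggregate: a fixed-size array; slicing it yields a reference type.
	arr := [3]int{1, 2, 3}
	doubled := doubleAll(arr[:])

	// Reference types: a pointer and a channel (the Go-ism).
	p := &n
	ch := make(chan int, 1)
	ch <- *p

	// And the magic interface: anything satisfies interface{}.
	var anything interface{} = point{1, 2}

	fmt.Println(s, ok, doubled, <-ch, anything)
}
```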

And yet, you can, and people have, written significant system/back-end/control-plane tools and systems with it. Things like Kubernetes, Prometheus, Docker, and Terraform, for example. We have too. BatchAPI is Go.

Another thing about Go is that the community is more than a little bit opinionated. That’s why when you look at Go code, even poorly written Go code, it looks like Go code. The biggest reason for that, of course, is gofmt. While you can write Go with your own style, there is one true style, and gofmt will quickly and easily apply it for you.

The other way it does that is through what’s colloquially known as idiomatic Go. The kind of Go that matches the way other Go developers think. And a lot of that comes from the Go Proverbs, a few sentence fragments from a talk by Rob Pike, one of Go’s developers. They capture not how the language works, but how you should use it. Ideas designed to help developers write code in a way that uses the language the way its designers thought it should be used. There’s even a nifty little website and set of icons/images.

Some are very Go specific, like Cgo is not Go or interface{} says nothing, but others have broader reach, like A little copying is better than a little dependency or Documentation is for users. Of all of them, the one that speaks most to me is Clear is better than clever.

Go is not about writing as few lines of code as possible. It’s not about packing as much logic into a single line as you can. It’s not about implicit handling of errors somewhere at the top of the chain. It’s very much about doing something and responding to what happened, even (especially?) if it wasn’t what you wanted to happen. Because it’s not about just writing the code. It’s about maintaining it. Supporting it. Extending it. Basically keeping the cognitive load as low as possible when interpreting the code itself so you can focus on the user’s problem and not have to worry about understanding what the code is doing behind the scenes.
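As a sketch of “clear is better than clever” in practice, here’s a hypothetical port parser (the names are mine): do something, look at what happened, and respond to it explicitly, rather than burying errors somewhere up the chain:

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort is the "clear" version: every failure is checked where it
// happens and the error says what was wrong and why. The clever
// version would one-line it and let a panic sort it out at the top.
func parsePort(s string) (int, error) {
	p, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("port %q is not a number: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, fmt.Errorf("port %d is out of range (1-65535)", p)
	}
	return p, nil
}

func main() {
	for _, in := range []string{"8080", "http", "70000"} {
		p, err := parsePort(in)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println("listening on", p)
	}
}
```

More lines than the clever version, but anyone maintaining it can see every path the code can take, which is the point of the proverb.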

What do you like about the proverbs, and what other ones should there be?