Recent Posts (page 11 / 67)

by Leon Rosenshein

What's an Internal Customer?

I’m a platform and tool builder. I’ve spent most of my career building platforms and tools. Tools that others, inside and outside the company, use to do whatever it is they do to add value to their respective businesses. Even when the “tool” is something like Flight Simulator, built as a game to provide entertainment. Many, probably most, people who used Flight Sim used it as shipped. They might buy an airplane or some scenery, but basically they used what came in the box. But even those people also used it as a platform.

I’ve talked about the difference between customers and partners before. It’s a big difference. With Flight Sim we had both. Some people who bought Flight Sim were clearly customers. They bought it, used it, and never talked to us about it. That’s a customer. Others, the people who built the add-ons, were partners. We worked with them. We made changes that made their jobs easier, which made it possible for them to build more add-ons and make money doing it. In turn, they built add-ons that we didn’t have the time or resources to make. Those add-ons increased the demand for Flight Sim, so we did better. And the more copies of Flight Sim that sold, the bigger the installed base they could sell to. So we treated our customers and partners differently.

And nowhere is the difference bigger than when you’re talking about internal customers and partners. With Flight Sim our customers and partners were clearly external. With other platforms, such as the various versions of distributed processing platforms I’ve built, the customers were very much internal. We were building tools and platforms that other people in the company used to do their work and build whatever product they were building. Sometimes it was maps, sometimes it was image processing. Sometimes it was large scale ETL jobs. Regardless of what they were doing, they needed our platform to do their job. So were they our customers or partners? They needed what we were building, and if they didn’t need it, we didn’t need to build it. We needed each other.

Or at least we did at the beginning. As John Cutler put it, that internal team is your customer if they can:

  1. walk away from the “deal”
  2. charge their “customers”
  3. sign contracts
  4. pursue work outside the company with other “customers”
  5. manage their own budgets
  6. hire their own teams

You know, the kinds of things you can do when you’re a company trying to sell something to someone outside the company. Of course, you can’t arbitrarily do any of those things without consequences, but there’s lots of choice on both sides. If that isn’t the case, for whatever structural, organizational, or financial reasons, it’s not a seller/customer relationship. It’s a partner relationship.

When we started building those platforms we had nothing to sell to our customers, and there was nothing they could “build/buy”, internally or externally. Neither side could walk away. We didn’t have individual budgets and we couldn’t just decide to go do something else. The situation was what it was, and we had to work with it. We had to work together to build the product(s) our customers wanted. We had to be partners in creating both the “product” our team was building and the maps/imagery/data sets that the other team needed. So that’s how we started out.

That doesn’t mean it had to stay that way. We aspired to have products that our customers wanted to buy. They aspired to have products to “buy” and that they could make feature requests on. And we eventually got there. By working together in partnership to build those first versions. And once we had products, as William Gibson said, the street finds its own uses for things. Once those other use cases were found, we could have (but didn’t) walked away from any one customer, because there were other customers. We could build a chargeback model. We had contracts (SLAs, usage commitments, etc.). We looked for (and found) other customers and related work. We got a budget and managed both its size and our own time. In short, our partners had become customers.

That’s how you get from internal partners to internal customers. And give both sides the autonomy they want (need?) to get their jobs done and feel good about it.

by Leon Rosenshein

WIP and Queuing Theory

A distributed processing network with queues.

You never know where things will back up.

I’ve talked about flow a few times now. It’s a great state to be in and you can be very productive. On the other hand, having too much WIP inhibits flow and slows you down. And it slows you down by more than the context switching time (although that is a big issue itself). A common refrain I hear though goes something like “I need to be working on so many things at the same time otherwise I’m sitting around doing nothing while I wait for someone else.”

On the surface that seems like a reasonable concern. After all, isn’t it more efficient to be doing something rather than not doing anything? As they say, it depends. It depends on how you’re measuring efficiency. As an individual, if you don’t wait then you’re clearly busier. Your utilization is up, and if you think utilization is the same thing as efficiency then yes, the individual efficiency is higher.

On the other hand, if you look at how much is getting finished (not started), you’ll see that staying busy will reduce how much gets finished, not increase it. It’s because of queuing theory. Instead of waiting for someone to finish their part of a task before you get to your part, you start something else. Then, when the other person finishes their part, the work sits idle while you finish whatever thing you just started. Since the other person is waiting for you to do your part, they start something else. Eventually you get to that shared thing and do your part. But now the other person is busy doing something new, so they don’t get to it until they finish. So instead of you originally waiting for someone else to finish, the work ends up waiting. Waiting at each transition. The more transitions, the more delay you’ve added to the elapsed time. Everyone can do every task in the optimum amount of time, but you’ve still added lots of delay by having the work sit idle.
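You can put rough numbers on that with Little’s Law: on average, cycle time = WIP / throughput. Here’s a minimal Go sketch (the numbers are invented for illustration):

package main

import "fmt"

func main() {
    throughput := 2.0 // tasks the team finishes per day, which doesn't change

    // Little's Law: average cycle time = WIP / throughput. The same
    // team, finishing at the same rate, sees each task take longer
    // the more tasks are in flight at once.
    for _, wip := range []float64{2, 6, 12} {
        fmt.Printf("WIP %2.0f -> average cycle time %.1f days\n", wip, wip/throughput)
    }
}

Starting more work doesn’t change the finish rate. It just makes every individual task spend longer waiting.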

Explaining dynamic things with text is hard. Luckily there are other options. Like this video by Michel Grootjans where he shows a bunch of simulations of how limiting WIP (and swarming) can dramatically improve throughput and reduce cycle time. Check it out. I’ll wait.

What really stands out is that the queues that appear between each phase in a task’s timeline are what cause the delays. With 3 phases there are 2 queues. In this case there’s only one bottleneck, so only one queue ever got very deep, but you can imagine what would happen if there were more phases/transitions. Whenever a downstream phase takes longer than its predecessor, the queue between them will grow. If there’s no limit, that queue ends up holding most of the work. Adding a WIP limit doesn’t appreciably change the total time, since an unlimited queue just lets the work sit there, but it does reduce the cycle time for a given task. Each task spends much less time in a queue.

And that cycle time is the real win. Unless you’ve done a perfect job up front of defining the tasks, limiting WIP gives you the opportunity to learn from the work you’ve done. In Michel’s example, if you learned you needed to make a UX change to something you could do it before you’ve finished the UX. You’d still have the UX person around and they could incorporate those learnings into future tasks. You’ve actually eliminated a bunch of rework by simply not doing the work until you know exactly what it is.

Of course, that was a simple simulation where each task of a given type takes, on average, the same amount of time. In reality there’s probably more variance on task length than shown. It also assumes the length of time doesn’t depend on which worker gets the task. Again, not quite correct, but things average out.

Even with those caveats, the two big learnings are very apparent. Limit WIP and share the work. Eliminate the queues and reduce specialization and bottlenecks. Everyone will be happier and you can release something better sooner. Without doing more work. And being able to stay in flow.

by Leon Rosenshein

Built-In Functionality

A pocket knife with multiple tools available.

You can use all the tools, not just the large blade.

Most languages have a way to start an external process. It’s usually called some version of exec, as in execute this process for me please. There are generally lots of ways to call it. Synchronous and asynchronous. Capturing the output, stdout and stderr. Passing arguments or not, or even piping data in via stdin. Capturing the exit code.

All those options are needed when you’re running external applications/executables. If you’re calling a 3rd party program to do some heavy lifting, you’ll probably want that level of control over what goes into the executable. You’ll want to know exactly what comes out: stdout, stderr, and any data persisted. If you then need to do something with the output data, you’ll want to wait for it to finish so you know it’s done and whether it succeeded, so you’ll want to be synchronous. On the other hand, if it’s a best effort you might just want to know that it started successfully and have it keep running after you’re done. For all those reasons, and others, there are very good times and reasons to use the exec family of functions.
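In Go, for example, the synchronous and best-effort cases are just Run versus Start. A quick sketch (the tool and its flags are made up):

package main

import (
    "log"
    "os/exec"
)

func main() {
    // Synchronous: Run starts the process and waits for it to exit,
    // so on the next line you know whether it succeeded.
    sync := exec.Command("some-tool", "--input", "data.json")
    if err := sync.Run(); err != nil {
        log.Printf("some-tool failed: %v", err)
    }

    // Best effort: Start returns as soon as the process is launched,
    // and it keeps running while you move on. Call Wait later if you
    // ever need the exit status.
    background := exec.Command("some-tool", "--fire-and-forget")
    if err := background.Start(); err != nil {
        log.Printf("some-tool never started: %v", err)
    }
}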

On the other hand, they’re also very easy to misuse. In many (most?) languages it’s pretty trivial to run a shell command, pipe its output to a file, then read the file. If that’s all you do, you’ve opened yourself up to a whole raft of potential issues.

The biggest is that if you’re exec’ing to a shell, like bash or zsh, you never know what you’re going to get. You’re at the mercy of whatever version of the shell is deployed on the node/container you’re running in. You can hope that the version you want is in the place you want, but unless you’ve made sure it’s there yourself, you don’t know. Sure, you could write your shell script to use sh v1.0 and be pretty sure it will work, but that’s really going to limit you. The same goes for relying on the standard unix tools in a distro. That works fine until someone sticks the thing you’ve written into a distroless container (or tries to build/run it on a Windows box) and suddenly things stop working. That’s why most languages have packages/modules/libraries built into them that provide the same kind of functionality you would get from those tools.

Second, consider this little Go example. It’s much easier to just call

out, _ := exec.Command("ls", "-l", "/tmp/mydir").Output()
fmt.Println(string(out))

than

infos, err := os.ReadDir("/tmp/mydir")
if err != nil {
    log.Fatal(err)
}

for _, info := range infos {
    entryType := "file"
    if info.IsDir() {
        entryType = "directory"
    }
    fmt.Printf("Found %s, which is a %s\n", info.Name(), entryType)
}

and have the output right there on the screen. And that’s how it’s often done. But that ease leads to some big gaps where problems can sneak in. There’s no input validation or error checking. In Go you at least have to explicitly acknowledge the error (that’s the _), but nothing makes you actually handle it. And that snippet ignores stderr.

At the same time, you have to properly escape your input. With ls it’s not too bad, but you have to handle spaces, special characters, delimiters, and everything else your users might throw at you. Add in calling a shell script and it gets worse. The more interpreters there are between the thing you type and the thing that gets executed, the more likely you are to miss escaping something, and then what reaches the next level isn’t what you intended.
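This is where passing arguments directly, instead of building a command line for a shell to re-parse, pays off. A sketch (the user input here is invented to show the problem):

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Pretend this came from a user. It contains a space and a
    // shell metacharacter.
    userInput := "my dir; rm -rf ~"

    // Risky: the shell re-parses the whole command line, so the ;
    // becomes a command separator unless you escaped it perfectly.
    shelled := exec.Command("sh", "-c", "ls -l "+userInput)

    // Safer: no shell involved. userInput reaches ls as a single
    // argument, spaces and semicolons included.
    direct := exec.Command("ls", "-l", userInput)

    fmt.Println(shelled.String())
    fmt.Println(direct.String())
}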

Finally, if you’re calling a shell script, how robust is it really? Code Golf might be a game, but it’s a lousy way to write reliable, resilient code. Even if the correct version of bash is used, and you get the argument parsing and escaping right, executing a script becomes an undebuggable, fragile, black box. And no one wants that.

So next time you think “I’ll just do a quick exec to get something done,” think again. Use the tools of your language and check your work.

by Leon Rosenshein

Consensus vs. Consent

Consent and Consensus. Two very similar words. The first 6 letters are the same. The Levenshtein distance is only 3. In general terms they both mean the same thing. If you have consensus you also have consent. The converse, however, is not true. In detail, they’re very different.

Consensus:

  • general agreement : UNANIMITY

Consent:

  • compliance in or approval of what is done or proposed by another : ACQUIESCENCE

It’s that last word in each definition that drives the difference. To get consent you need to make sure that no one is completely against the idea. That there’s no one who says, “You can do that, but you’re doing it without me. I will always argue against that action or point of view.” If you have consent everyone will go along with the decision. It might not be their first choice. It might not be their 10th. It might even be their last choice, but they’re OK with it. They will acquiesce to the decision.

Consensus on the other hand, means everyone thinks the plan/point of view is the best choice. No one has any doubts or thinks there might be a better way. Everyone is 100% on board and wondering why you haven’t started yet. This is a wonderful thing when it happens.

Think of it this way. For every idea/plan/proposal you have all of the people who get to weigh in get a vote. They can vote in one of 4 ways:

  • Yes: I think this is a great idea and we should do it now

  • OK: I’m willing to go along and support this idea. I don’t see any problems, so let’s do it.

  • No: I have a specific problem that needs to be addressed. Address my issue and I’m a Yes or at least OK

  • Absolutely Not: I completely refuse to be involved. I will not be part of a group that does this.

To get consent you need to get everyone to Yes or OK. If you have people in the “No” camp you need to address their concerns. You need to address their issue, but you don’t need to get them to think it’s the greatest idea ever. Those in the “Absolutely Not” camp should be expected to provide an alternative. Since they think everything you’ve proposed is wrong, it’s on them to replace it all. In reality you’ll sometimes find someone who feels that way, but far more often, when someone says “Absolutely Not” they’re really just a “No”, with more emphasis. There’s a specific problem they see that they feel you’ve ignored. Address that issue and they become an OK. Getting everyone to “Yes” or “OK” can be hard. You’ll probably need to change the plan and there will be compromises, but it’s doable, and when you’ve decided, you have solid support behind you.

To get consensus, on the other hand, you need to get everyone into the “Yes” category. And that’s orders of magnitude harder. You have to get everyone to agree that the current idea is the best idea possible. That there’s no point thinking about it more.

Sometimes doing that is the right thing to do. If you’re on a road trip and you have time to make one stop for food you better make sure you have consensus. That everyone can get something to eat at the place you stop. If your group is 85% BBQ connoisseurs, 15% omnivores, and one grain-free vegan (for medical reasons), you can’t stop at the BBQ joint that only serves brisket, pulled pork, buttermilk biscuits, and mac and cheese. It doesn’t matter how enthusiastic the BBQ experts are. The grain-free vegan can’t eat there. It’s not that they don’t want to or they’re being difficult. Eating there is physically bad for them, and if they ate that food you’d be days late since they’d be in the hospital. You need to go to the all-night diner down the road a little, since everyone can get something there. That’s consensus.

On the other hand, if that grain-free vegan says something like “I can’t eat at the BBQ place. It’s a physical impossibility. But there’s a market a couple of doors down. While you’re getting your food I’ll run over to the market and get something I can eat.” suddenly you’ve got consent. You can’t get consensus, but you’ve changed things so that you can get consent. And often, consent is all you need to move forward.

So next time you’re trying to build consensus make sure that’s really what you need. If you don’t need it and consent is enough, just go for that.

by Leon Rosenshein

Hey, That's Pretty Clever

A dungeon master with unruly hair and d20.
The Dungeon Master of Engineering has been on Twitter for just over 4 years now. There have been lots of snarky (but accurate) tweets about life as a developer. Recently there was a whole thread contrasting the viewpoint of someone new to tech with that of a tech veteran. Some are whimsical, some are political, and some are learnings about things developers deal with every day. There are lots of really good learnings in there when you look at them.

One of my favorites is

New to tech:
That's really clever, ship it.

Tech Veteran:
That's really clever, fix it.

I really like that one. Because I used to do clever things. Call functions and rely on their side effects to save a few lines of code. Use Duff’s Device because it’s interesting and maybe faster, even when the speed wasn’t needed, because Speed is Life. Or simple things, like reusing a variable that wasn’t needed anymore to save a little stack space. Or, in C++, using a comma as a sequence point instead of just making a new statement.
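A tiny Go illustration of the difference (my example, not one from the thread):

package clever

// Clever: branch-free, and completely opaque unless you already
// know the bit-twiddling idiom.
func isPowerOfTwoClever(n uint) bool {
    return n != 0 && n&(n-1) == 0
}

// Clear: a few lines longer and a touch slower, but the next
// reader sees exactly what it means without decoding a trick.
func isPowerOfTwoClear(n uint) bool {
    if n == 0 {
        return false
    }
    for n%2 == 0 {
        n /= 2
    }
    return n == 1
}

If profiling says you really do need the clever version, keep it, give it a good name, and leave a comment explaining the trick.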

Clever is nice. Clever is fun. Clever makes you feel smart. And we all like that. It’s great. Until it’s not.

Because the failure mode of clever is jerk. That’s true when speaking or writing. Not just when writing comments or tweets, but also when writing code.

Clever code often works at first. It might keep working through a couple of requirement changes and refactors. And it might even work after that. But its value goes down fast. The code was written once. It will be read many times. Now, every time someone needs to read the code to understand what it does, whether to extend it, refactor it, fix a bug, or just avoid adding a bug, that person will need to figure out what happens in that bit of clever code. That takes time. That takes effort. That increases cognitive load. Which makes everything harder.

And no one wants that. Software engineering, the balancing of conflicting goals and requirements to solve a user’s problems, is hard enough. There’s no good reason to make things harder on ourselves when we don’t have to.

I will acknowledge that sometimes you have to. If you’re writing an embedded controller and need to save every byte. If you’re working on the inner loop of a complex, time-consuming renderer and your profiling has told you that this is the function that’s blowing your time budget. If you’ve found something new and novel in the domain that means your clever solution is actually the right one in this domain’s context. But those times are relatively rare.

So when you run across clever code, code that could be rewritten in a slightly more verbose or slower but more maintainable way, consider fixing it. Make it less clever. You’ll be thanked by your peers and by future you. They’ll think you’re pretty smart for not subjecting them to clever code.

And that’s the best kind of thanks.

by Leon Rosenshein

Thinking Rocks, Magic, Intent, and TDD

A rock with eyes that thinks.

Can this rock really think?

A computer chip rendered useless after the magic smoke escaped.

Who let the magic blue smoke out?

Some have said that computers are just rocks we’ve taught to think. Others think computers run on magic blue smoke, and once you let the magic smoke out they’ll never work again. The truth, as usual, is somewhere between the two extremes. It’s not magic; arcing 120 VAC to ground across a chip will make a cloud of blue smoke and ensure the chip never works again, but that’s not magic either. And no matter how many MFLOPS a chip can execute, it’s not really doing math. It just lets the electrons flow one way or another through a series of adjustable switches. From the outside, though, it does seem like someone cast a spell on some tiny grains of sand (silicon) and now the sand is doing math.

Whether it’s magic or good teaching, what does this have to do with Intent, let alone Test Driven Development? The connection is that intent is what drives both. The teaching was driven by the intent to build a machine that can do math quickly and reliably. Over and over again. And of course one of the primary rules of magic is that you have to keep the intent of the spell in mind when you cast it. Whether it’s Harry Potter’s “Alohomora”, a djinn’s three wishes, or almost any other example of magic in the literature, it’s the intent behind the spell, not just the words, that defines what the spell operates on and how it works.

And it’s Intent that connects us to TDD. The intent of the tests in TDD is to express what should and should not happen. They’re an explicit expression of our intent for how the API should be used. They’re an explicit expression of what the limits and boundaries of the code are. They express what will work, what won’t, and how you know if it worked or not. And explicit is always better than implicit.
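In Go that intent often reads most clearly as a table-driven test. A minimal sketch (ParseAge and its rules are invented for illustration):

package age

import (
    "fmt"
    "strconv"
    "testing"
)

// ParseAge is the hypothetical function under test.
func ParseAge(s string) (int, error) {
    n, err := strconv.Atoi(s)
    if err != nil || n < 0 {
        return 0, fmt.Errorf("age must be a non-negative integer, got %q", s)
    }
    return n, nil
}

// The table spells out the intent: what's in the domain, what's
// out, and how callers find out.
func TestParseAge(t *testing.T) {
    cases := []struct {
        name    string
        input   string
        wantErr bool
    }{
        {"a normal age parses", "42", false},
        {"zero is allowed", "0", false},
        {"negative is out of bounds", "-1", true},
        {"not a number is rejected", "abc", true},
    }

    for _, c := range cases {
        if _, err := ParseAge(c.input); (err != nil) != c.wantErr {
            t.Errorf("%s: ParseAge(%q) error = %v, wantErr %v",
                c.name, c.input, err, c.wantErr)
        }
    }
}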

Leaving it implicitly expressed by the definition of the API and hoping users intuit your intent will only cause problems in the end. Hyrum’s Law tells us that, over time, anything users can do, they will do. That turns implicit requirements into explicit requirements as you work to avoid any breaking changes. Flight Simulator was like that. We needed to ensure all of the 3rd party tools and content worked, and with each new version it got a little more difficult to maintain compatibility with all those things that leaked through our interfaces.

Now you know how thinking rocks and the intent of magic are related to software development in general and TDD specifically. But magic has a lot more in common with development than that. After all, according to the literature, with magic, unless you follow the rules exactly, things don’t turn out the way you expected. At best nothing happens at all. At worst, something terrible happens. For more discussion of how the rules of magic also apply to software development, check out this thread from @bethcodes.

And beware the wily fae.

by Leon Rosenshein

Something Smells ... Primitive

I like types. I like typed languages. I find they prevent me from making some simple mistakes. The simplest example is that if you have something like int cookiesAvailableToSell you can’t do cookiesAvailableToSell = 2.5. You either have 2 or 3 cookies to sell. If you can sell the half cookie as a whole one you have 3. If you can’t then you have 2 cookies to sell and a little snack.

Picture of primitive tools
image source

I like domains and bounded contexts. They’re great at helping you keep separate things separate and related things together. Domains and bounded contexts also help you stay flexible. They give you clear boundaries to work with so you know what not to mix. They make responding to business and operational changes easier by localizing the contact points between components.

You’re probably wondering what types and domains have in common. It’s that a type is a domain. A byte (depending on language, obviously) is the set of all integers x such that -128 <= x <= 127. That’s a pretty specific domain. A character is also a domain. It’s very similar to a byte in that it takes up one byte and can have a numeric value just like a byte, but it’s actually a very different domain, and represents a single character. They may have the same in-memory representation, but operationally they’re very different. If you try to add (+) an int and a char in a typed language, you’ll get some kind of error at compile time.

In an untyped language you never know what will happen. On the other hand, if you try to + a string and a char the result is generally the string with the character appended. That works because in the domain of text that makes sense. In the mixed domain of integers and text it doesn’t.
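Go, for instance, keeps those domains apart at compile time. A minimal sketch:

package main

import "fmt"

func main() {
    var n int = 1
    var c byte = 'a'

    // fmt.Println(n + c) // compile error: mismatched types int and byte

    s := "alph"
    s += string(c) // in the text domain, appending a character makes sense
    fmt.Println(s, n) // prints: alpha 1
}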

Which brings me to the code smell known as Primitive Obsession. It’s pretty straightforward. It’s using the primitive, built-in types in your typed language to represent a value in a specific domain. Using an int to represent a unique identifier. A string to represent a Universally Unique ID. Or a string to represent an email address. Or even an int to represent one value out of a defined (enumerated) set of possible values. I’ve done all of those things. I’ve seen others do all of those things. And I’ve seen it work. So why not do it that way?

The most obvious reason is that you often end up with code duplication. Consider the case where a string represents an email address. Every public function that takes an email address now needs to validate it. Hopefully there’s a method to do that, but even if there is, you (actually, all of the developers on the team) need to remember to call that method every time the user of the method passes in a string for the email. You also need to handle the failure mode of the string not being a valid email address, so that code gets duplicated as well.

Another problem is what happens if the domain of the thing you’re representing changes? You’ve got something represented with a byte, but now you need to handle a larger domain of values. Instead of changing the type in one place and possibly updating some constructors/factories, you’re now on a search for all of the places you used byte instead of int for this use case. And you’re looking not just in your code, but in all code that uses your code. That’s a long, complicated, error-prone search. And you probably won’t find all of them at first. Someone, somewhere, is using your code without your knowledge. Next time they do an update they’re going to find out that what they have doesn’t work anymore. And they’re going to find out the hard way.

Those are two very real problems. They make life harder on you and your customers/users. But they’re not, in my opinion, the most important reasons. There’s a much more important one. Still thinking about that email address as a string, what if you have an API that sends an email? It’s going to need, at a minimum, the user name, domain, subject, and body. If you have all of them as type string, you make it easy for your user to get the order of the parameters wrong and not know until some kind of runtime error happens.

How else could it be done?

A better choice is to create a new type. A new type that is specific to your domain. That enforces the limits of your domain. That collects all of the logic that belongs to that domain into one bounded context. That abstracts the implementation of the domain away from the user and focuses on the functionality.
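Here’s a minimal Go sketch of the idea (EmailAddress and NewEmailAddress are illustrative names, and net/mail’s parser stands in for whatever validation you actually need):

package email

import (
    "fmt"
    "net/mail"
)

// EmailAddress can only be built through NewEmailAddress, so any
// value a caller holds has already been validated.
type EmailAddress struct {
    value string
}

// NewEmailAddress is the single place validation happens.
func NewEmailAddress(s string) (EmailAddress, error) {
    addr, err := mail.ParseAddress(s)
    if err != nil {
        return EmailAddress{}, fmt.Errorf("invalid email address %q: %w", s, err)
    }
    return EmailAddress{value: addr.Address}, nil
}

func (e EmailAddress) String() string {
    return e.value
}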

Sticking with the string/email, changing your APIs to take an email address instead of a string solves all of the issues above. Instead of getting an InvalidEmailAddress error from the SendEmail function the user gets an error when they try to create an email address. The problem is very localized. It’s a problem creating the address, not one of 12 possible errors when sending the email.

You never need to remember to check if the input string is a valid email address. You know it is when you get it because every email address created has been validated. Do the construction right and they can’t even send in an uninitialized email address.

If for some reason later you want/need to change from taking a single string to creating an email address from a username and domain you just do it. You can create a new constructor that does whatever you want with whatever validation you think is appropriate. All without impacting your users.

And best of all, this happens at compile time. Get the order of the parameters wrong and the types are wrong. A whole class of possible errors is avoided by ensuring it fails long before it gets deployed.

Because the best way to fix an error is to make sure it doesn’t happen in the first place.

by Leon Rosenshein

What Is Technical Debt Anyway?

Inigo Montoya saying Technical Debt. You keep using that word. I do not think it means what you think it means.

Technical debt has been on my mind a bunch the last few weeks. The system I’m working on has been around for a few years. It works, and it works successfully. However, since Day 1 we’ve learned a lot about what we want the system to do, what we don’t want it to do, and the environment it will be operating in. Some of those things fit into the original design, some didn’t.

According to Ward Cunningham, who coined the term, technical debt is not building something you know is wrong with the intent of fixing it later. You always build things the best way you can, given what you know at the time. Technical debt happens when you learn something new. Instead of refactoring the code to make it match the new knowledge, you make the minimal change to the code to get the right answer, usually in the interest of time.

Two things to keep in mind here. First, when he coined the term, Ward was talking to financial analysts. People who were extremely familiar with the concept of debt and taking on debt to meet a short term need. They also understood the imperative of paying off that debt and the fact that if you didn’t pay off the debt you would eventually go bankrupt. They understood the context. That you can’t just keep increasing your debt and expect there to be no consequences.

Second, technical debt is NOT doing things badly, worse than you could, ignoring your principles and patterns, with the idea that you’ll do it right later. It’s not building a big ball of mud, without clearly separating your domains. It’s not hard-coding your strings everywhere because it’s easier or using exception handling for standard flow control. That’s just bad design and something that we should avoid.

Rather, Technical Debt is choosing to not refactor when you learn something new. You avoid going into “technical debt” by doing whatever refactoring is needed to ensure that that code models what you know about the system/domain. Doing anything else is considered tech debt. Once you have some tech debt you have to pay interest on it. That interest comes in the form of overhead, making it more difficult to make the next change when you learn something else. Eventually you end up in a situation where it’s almost impossible to make the change because the interest on the debt is so high.

There’s a nuance there that needs to be called out. Technical debt is not what happens when you do the wrong thing. It’s what happens when you don’t do the right thing. It’s what happens when you’re doing the best you can, learn something new, and then don’t incorporate it.

There’s a time to take on debt. Just like a business, sometimes you take on debt to do something new. To open a store, take on a new line of merchandise, or just run a new advertising campaign. You take on the debt, see the benefit, then pay off the debt.

Whatever you do, don’t use technical debt as an excuse to do less than the best you know how to do.

by Leon Rosenshein

Starting vs. Finishing

Picture of Kanban board
image source

What’s more important, starting or finishing? Being done is great, but you can’t finish something you haven’t started. To me, finishing is more important. Because if you don’t finish, all you’ve done is waste your time (modulo any learning along the way). Of course, to finish you need to define what “finishing” means. This is critical because, especially in development, while finishing does NOT always mean delivered, it usually does, and that’s a place to start with the definition. You always need to be explicit about what done means. If it doesn’t mean delivered to customers then you need to be even more explicit. And be clear that done means you don’t already know there’s more work you need to do.

As I’ve said, finishing is more important. However, since you can’t finish what you don’t start, starting must be at least as important, right? And if you want to finish as much as possible, it stands to reason that you want to start as many things as possible so you always have something to finish, right? Wrong.

That’s why managing your Work in Progress (WIP) is so important. Contrary to expectations, the less you’re working on, the more you can finish. There are lots of reasons for this. The first is time lost to context switching. As I noted before, every context switch can take up to 20% of your time. It doesn’t take many context switches to run out of working time. Second, the more things you’re working on, the more opportunities you have for interruption. When you’re working on one thing there’s one group of people who might interrupt you. It might be as simple as needing a status update, or as complicated as a change in dates and requirements. Third is increased cognitive load. It’s related to context switching, but even if you’re not switching, you’re carrying around all that extra context, which means you have less “space” to focus on what you’re currently working on.

Add to that a very human tendency to want to start things and you can easily end up with lots of WIP. I’m very guilty of this. It’s often easier and more fun to start a new task. Especially if it’s a completely new thing. Greenfield development is easier and lots of fun. You start out with learning and exploring and you don’t need to worry (too much) about what’s been done before. And even if you’re not doing greenfield work, you still get to learn and explore. Starting out is generally much less constrained. You have more freedom. Conversely, finishing something is all about constraints. Have you met all the constraints? Have you done all of the niggly little bits that are needed? Have you dug deep enough to finish up and get to done?

Sometimes our tools don’t help. If you’re using a Scrum-like or Kanban-like process you want to see motion on the board. The easiest thing to do is move something from not started to in-progress. You get motion. The counter for time in state resets. The more things you have on the board at any given time, the more things can move around. You get the appearance of progress.

But it’s not real progress. Real progress is moving things to done. Getting them off the board. That frees up time, capacity, and cognitive load. It reduces context switches. It improves flow. It gives you more real progress.

So next time you’re at a point where you could either start working on something new or help someone else move something to done, consider helping someone on the team get to done. You might be surprised at the overall result.

by Leon Rosenshein

What happens when you can't even Tidy First?

Picture of Test Driven Development
image source

I had a different topic on my mind for today, but life and the internet have conspired to change it. Today seems to be about refactoring instead. I’m trying to upgrade a Docker image to use some newer libraries, and the definitions of which library versions are used/depended upon are scattered hither and yon. Where they’re defined at all, that is, and not just picked by a happy accident at the time things were set up. At the same time I got the latest pre-release part of Kent Beck’s Tidy First? on why you might want to do your tidying at different times in the lifecycle, and saw the Code Whisperer’s article on What is refactoring?, so I guess I’ll talk about refactoring instead.

Most of what I’m doing down in the depths of Docker base images is refactoring. It’s Tidy FIRST. Moving definitions around and collecting them into fewer places. Using those definitions instead of specifying directly in all of the individual use cases. Making sure things still work. Adding some tests that pass before any changes and making sure they still pass after the refactoring. When I get done there are no observable changes. Or at least that’s the goal.

Turns out there are some observable changes. Things didn’t actually work when I started, and it does me no good if it doesn’t work. So even before tidying comes making it work. The world isn’t as static as some code might like. Some code isn’t as backward compatible as other bits of code would like. Some security systems have been updated and require a different set of keys than they used to. Some things have just moved. All of that needs to be handled. For example, what does RUN pip3 install --no-cache --upgrade nvidia-ml-py do in a Dockerfile? It installs the latest version of nvidia-ml-py, that’s what it does. It did that yesterday, last month, last year, and probably will next year. It’s good that it always does the same semantic thing. Unfortunately, the specific version it installs is going to be different in some of those cases. Which means a Docker image built today, using the same version of Docker and the same Dockerfile, doesn’t always give you the same image. There’s an implicit external dependency in that line. A better choice would be something like RUN pip3 install --no-cache -r requirements.txt, where requirements.txt specifies which versions of the libraries you want.
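A sketch of what that looks like in the Dockerfile (the layout and the pinned version are illustrative, not from the image I was fixing):

# requirements.txt, alongside the Dockerfile, pins what you tested:
#   nvidia-ml-py==12.535.133

# Copy the pinned list in, then install exactly those versions, so
# the same Dockerfile keeps producing the same image.
COPY requirements.txt .
RUN pip3 install --no-cache -r requirements.txt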

Which gets us to when to tidy. When you’re building that Dockerfile you don’t know what versions of which libraries you want, and getting the latest versions is probably a good place to start. Once it’s working, you can lean on the fact that Docker images are immutable, so you know the image won’t change. (NB: While the image might be immutable, if you’re using tags and expecting consistency, think again.) So this could be an opportunity for Tidy NEVER. The code won’t change. The image won’t change. Don’t spend more time on it than needed. There’s always something else to do, so why tidy?

In this case, it’s because it was reasonable to think that someone might need to update the image in the future. Which means making the process more repeatable would have been a good choice. Which moves us to the realm of Tidy AFTER. In this case you move faster by not locking the versions until you have things working. Once you have things working, that’s the time to tidy. To use pinned versions. Leaving things in good shape for the maintainer.

But that wasn’t done. So here I am now, doing Tidy FIRST. Not just tidying. Not just the classic refactor of making the change easy, then making the change. First I have to make it work. Figure out the right versions. Make sure they’re being used. Get it back to working. Then I can do some tidying. Then do some more tidying, making sure things still work. Then, and only then, make the change.

Because, as the Code Whisperer said,

a refactoring is a code transformation that preserves enough observable behavior to satisfy the project community

That’s what a refactoring is. Sometimes though, you have to make it work before you can do one.