
by Leon Rosenshein

Safety Nets and Guardrails

Safety nets and guardrails sound like the same thing, but they’re not. They are very similar though. They both help prevent bad things from happening. Where they differ is in how and when they operate.

Safety nets help after something bad has happened. It’s what you do when things go wrong. Your traditional safety net catches something that has fallen. It could be a person off a roof, or a trapeze artist that missed a catch. It could also be a process that helps recover from a negative event. Like insurance (be it life, health, auto, home, or unemployment). It doesn’t mean the bad thing can’t happen, or that there will be no consequences, but it minimizes the negative impact/damage caused by the bad thing happening.

Safety net at the edge of a building

Or in the software world it could be a top-level error handling routine or executing a series of SQL statements inside a transaction so you can safely roll things back if there’s an error. Put another way, it’s using both a belt and suspenders even when your pants fit. Normally, the pants stay in place by themselves, but if they don’t for some reason, you’ve got the belt to hold your pants up. And if the belt snaps, there’s still the set of suspenders to hold them up. In terms of ilities, it’s resilience. 1
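That transaction-as-safety-net pattern is only a few lines of code. Here's a minimal sketch using Python's sqlite3 module and a made-up accounts table: if anything goes wrong partway through the transfer, the whole change rolls back and the data is no worse off than before.

```python
import sqlite3

def transfer(conn, src, dst, amount):
    # The transaction is the safety net: either every statement takes
    # effect, or none of them do.
    with conn:  # commits on success, rolls back if anything raises
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?", (amount, src))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?", (amount, dst))
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

try:
    transfer(conn, "alice", "bob", 500)  # would overdraw, so everything rolls back
except ValueError:
    pass

(alice_balance,) = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
print(alice_balance)  # 100; the failed transfer left no trace
```

The bad thing (a failed transfer) still happened, but the safety net limited the damage: no half-applied update to clean up afterward.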

Guardrails, on the other hand, help prevent something bad from happening. Like the guardrails along the highway. They work to keep cars on the road and heading in the right general direction. It doesn’t mean you can’t force your way off the road, or that everything will still be perfect if you end up needing the guardrail, but things will be much better with the guardrail, and you’ll probably still get to your destination. It’s Poka Yoke, which I’ve talked about before. And just like you can have multiple levels of safety nets, you can have multiple guardrails. Like the rumble strips on a highway that tell you you’re drifting, before the guardrail pushes you back on track, both of them help you do the right thing.

Guardrail along a road

In software, guardrails come in multiple flavors. It’s using types instead of primitive obsession. Sure, you could use a string to store a value that is one of a limited set, but it can also store many invalid strings. If you instead use an ENUM that only supports the limited set, the user simply can’t set the value to something invalid. Another guardrail is using a builder to initialize something so that it either works or tells you immediately that it can’t be initialized instead of leaving you with something that won’t work. There are lots of other guardrails you can add to your software.
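Both of those guardrails are cheap to build. Here's a minimal Python sketch of each; `Status` and `ServerBuilder` are made-up names for illustration.

```python
from enum import Enum

# Guardrail 1: a closed type instead of a stringly-typed status field.
class Status(Enum):
    PENDING = "pending"
    ACTIVE = "active"
    CLOSED = "closed"

# Status("pending") gives Status.PENDING; Status("actve") raises ValueError
# right at the boundary, instead of letting a typo flow deeper into the system.

# Guardrail 2: a builder that refuses to hand back a half-configured object.
class ServerBuilder:
    def __init__(self):
        self._host = None
        self._port = None

    def host(self, host):
        self._host = host
        return self

    def port(self, port):
        self._port = port
        return self

    def build(self):
        if self._host is None or self._port is None:
            raise ValueError("server needs both a host and a port")
        return (self._host, self._port)

server = ServerBuilder().host("localhost").port(8080).build()
print(server)  # ('localhost', 8080)
```

In both cases the invalid state is unrepresentable, or at least loudly rejected at the moment it's created, which is exactly what a guardrail is for.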

And remember, while safety nets and guardrails have the same basic goal, keep something terrible from happening, they are, in fact, orthogonal ideas. Which means you can (and should) use them both. Use guardrails that make it easy to use your API/functions the right way and hard to use them incorrectly. But recognize that it can still happen. So also include safety nets, so that when something is wrong, it gets handled the best way possible.


  1. Yes, I know resilience doesn’t end in ility, but it’s almost always in a list of -ilities ↩︎

by Leon Rosenshein

With Apologies to Uncle Bob

From the design by counter-example file, this might be the most practical definition of “unclean” code I’ve ever seen.

In the kitchen, the stuff I left on the counter is fine, I know why it’s there. Everything my family leaves on the counter is mess.

In our own software, we don’t trip over the rough edges, we can fix those later. For everyone else, our software is rough.

Jessica Joy Kerr

In your own kitchen, you don’t see your own clutter. It’s not bad, but there’s not a lot of available workspace. Similarly, that’s why everyone else’s code is not clean, but your code is. At least in your eyes.

A cluttered kitchen. It's not bad, but there's not a lot of available workspace

Clean code is a good goal. And there are lots of heuristics and rules of thumb to help you write clean code. You should always be thinking about them. Not blindly following them, but thinking about them. And you need to be aware of your biases and blind spots.

And one of the biggest, the one that makes it hard to identify your own unclean code, is the same one that makes it hard to accurately and effectively edit your own writing. It’s a problem of context. When you’re writing, whether it’s code, a novel, an email, or a text message, you have an immense amount of context. When you go back to review/edit that text, unless it’s been a long time, you still have all that context. And even if it has been a long time, that context will come back pretty quickly. That means you don’t see the missing or doubled words. You don’t see the misspellings. You don’t notice that your functions are long, that you have complicated conditionals, or that function and variable names no longer match what they actually do.

Unfortunately, unless you’re the copy editor for someone, everyone else has much less context. They don’t know what you know and they see those things immediately. It makes it hard for them to understand your code. Just like it makes it hard for you to understand their code.

Or to work in someone else’s kitchen.

So the next time you’re reviewing your own code prior to getting someone else to review it, make sure you’re looking at it not just with your own context, but also with the context of someone who hasn’t seen it before.

by Leon Rosenshein

Testing Schedules

Yesterday I talked about different kinds of tests, unit, integration, and system. I mentioned that not only are there different kinds of tests, but those tests have different characteristics. In addition to the differences in what you can learn from tests by classification, there are also differences in execution times and execution costs.

Venn diagram for different test types

NB: This is semi-orthogonal to test driven development. When you run the tests is not the same as when you write the tests.

These differences often lead to different schedules for running these tests. Like any dynamic system, one important tool to maintain stability is to have a short feedback loop. The faster the tests run, and the more often you run them, the shorter you can make your feedback loop. The shorter your feedback loop, the faster you can get to the result you want.

Luckily, unit tests are both cheap and fast. That means you can run them a lot. And get results quickly. The questions are, which ones do you run, and what does “a lot” mean? If you’ve got a small system, and running all the tests takes a few seconds, run them all. While building and running any individual test is fast, in a more realistic setting, building and running them all can take minutes or hours. And that’s just not practical. You need to do something else. This is where a dependency management system, like Make or Bazel, can help. You can set them up to only run the tests that are directly dependent on the code that changed. Combine that with some thoughtful code layout and you can relatively easily keep the time it takes to run the relevant (directly impacted) tests down.
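The reverse-dependency walk those tools do is simple to sketch. Here's a toy Python version with a made-up dependency graph; it's the same idea as asking Bazel for the reverse dependencies of a changed target and keeping only the test targets.

```python
# deps maps each target to the targets it depends on (all names invented).
deps = {
    "app":       ["lib_a", "lib_b"],
    "lib_a":     ["lib_core"],
    "lib_b":     [],
    "lib_core":  [],
    "test_app":  ["app"],
    "test_a":    ["lib_a"],
    "test_b":    ["lib_b"],
    "test_core": ["lib_core"],
}

def affected_tests(changed, deps):
    """Walk the reverse dependency graph from the changed targets,
    then keep only the test targets that were reached."""
    rdeps = {}
    for target, uses in deps.items():
        for used in uses:
            rdeps.setdefault(used, set()).add(target)
    seen, stack = set(changed), list(changed)
    while stack:
        for dependent in rdeps.get(stack.pop(), ()):
            if dependent not in seen:
                seen.add(dependent)
                stack.append(dependent)
    return sorted(t for t in seen if t.startswith("test_"))

print(affected_tests({"lib_core"}, deps))  # ['test_a', 'test_app', 'test_core']
```

Changing `lib_core` means running its own tests plus the tests of everything that (transitively) depends on it, but `test_b` stays untouched. That's the whole trick for keeping the feedback loop short.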

Running quickly is important, but when do you run them? Recommendations vary from “every time a file is changed/saved” to every time changes are stored in your shared version control system. Personally, I think it’s every time there’s a logical change made locally. Sometimes logical changes span files, so testing based on any given file doesn’t make sense. You want to run the tests before you make a change to make sure the tests are still valid, you want to run the tests after each logical step in the overall change to make sure your changes haven’t broken anything, and you want to run the tests when you’re done to make sure that everything still works. That’s a good start, but it’s not enough. In a perfect world your unit tests would cover every possible combination of use cases for the SUT. But we don’t live in a perfect world, and as Hyrum’s Law tells us, someone, somewhere, is making use of some capability you don’t know you’ve exposed. A capability you don’t have a unit test for. So even when all your unit tests pass, you can still break something downstream. At some point you need to run all the unit tests for all the code that depends on the change. Ideally before anyone else sees the change. You run all those tests just before you push your changes to the shared version control system.

Unfortunately, unit tests aren’t enough. Everything can work properly on its own, but things also must work well together. That’s why we have integration tests in the first place. When do you run them? They cost more and take longer than unit tests, but the same basic rule applies. You should run them when there’s a complete logical change. That means when any component of any integration test changes, run the integration test. And again, just running the directly impacted integration tests isn’t enough. There will be integration tests that depend on the things that depend on the integrations you’re testing. You need to test them as well. Again, ideally before anyone else sees the change.

Then we have system level, or end-to-end tests. Those tests are almost always slow, expensive, and take real hardware. Every change impacts the system, but it’s just not practical to run them for every change. And even if you did, given the time it takes to run those tests (hours or days if there’s real hardware involved), running them for every change would slow you down so much you’d never get anything done. Of course, you need to run your system level tests for every release, or you don’t know what you’re shipping, but that’s not enough. You need to run the system tests, or at least the relevant system tests, often enough that you’ve got a fighting chance to figure out which change made the test fail. That’s dependent on the rate of change of your system. For systems under active development that might be every day or at least multiple times per week, for systems that rarely change, it might be much less frequently.

There you have it. Unit tests on changed code run locally on every logical change, before sharing, and centrally on everything impacted by the change after/as part of sharing with the rest of the team. Integration tests run on component level changes locally before sharing, and centrally on everything impacted by the change after/as part of sharing with the rest of the team. System level tests run on releases and on a schedule that makes sense based on the rate of change of the system.

Bonus points for allowing people to trigger system tests when they know there’s a high likelihood of emergent behavior to check for.

by Leon Rosenshein

Test Classification

Whether you are thinking about unit vs integration vs system tests, or build (or save) time vs check in time vs release time tests, what you’re really thinking about is test classification and test hierarchy. Or put another way, you’re thinking about why you’re running that test and what the goal of the test is.

Of course you want the test to pass. And you want that pass to mean something. Even if the result you’re looking for is a failure in the system under test (SUT), you want to see that failure so your test passes. But I’m not talking about what makes a good test. That’s a different topic for a different time.

The topic for today is, instead, what is the purpose of the test. What level of functionality are you trying to test? Knowing the purpose of the test can help you figure out how to classify it. That can then help you figure out how and when to run it.

First, some basics on test classification. There are many types of tests, but in broad strokes, you can think of them as applying at three levels: unit, integration, and end-to-end or system. To make things more real, let’s consider a clock application.

The test pyramid. Slow, expensive end to end tests on top, integration tests in the middle, and fast, cheap unit tests at the bottom

Unit tests are tests that validate things at the functional level. They typically live and execute at the function/class level. Do these handful of things work well in isolation? Do they do what they say they will do, and handle failure gracefully? They typically take less than a second to set up, run, and tear down. Things at the unit level might include configuration, storage, or accessing the host’s notion of date and time.

Integration tests are the tests that validate things at the boundaries of domains, functions, or classes. How does class A work with class B? How does your CRUD layer work with the underlying database? How does your logging/timing/resource management system work with its consumers? These tests might take a couple of seconds, and might require access to some host system. For the clock app, you might test reading and writing configuration with the storage system.

End-to-end or System tests are the tests that validate how the system as a whole works. Does it meet the end user’s expectations, or at least not surprise them greatly when it doesn’t? System tests are the ones that validate that even though a bunch of things failed along the way, the system managed to do the right thing, or at least avoided doing the wrong thing. This is where you’ll test emergent behavior as the different parts of the system interact. It’s often only at the system level that you can test what happens when 4 different things fail in a specific way. Because the system level test is the only one where those 4 different components are working together. These tests can take much longer, and often require the real system, or at least a trusted emulator. For that clock, it might be setting up an alarm and making sure it sounds at the appropriate time.
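For the unit level of that clock app, a test might look like this sketch. `next_alarm` is a made-up function for illustration; the point is that the test injects “now” instead of reading the host clock, so it runs in milliseconds with no setup.

```python
import unittest
from datetime import datetime, timedelta

def next_alarm(now, hour, minute):
    """Return the next time a daily alarm should fire, given 'now'."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)  # already passed today; fire tomorrow
    return candidate

class NextAlarmTest(unittest.TestCase):
    # Unit level: pure scheduling logic, exercised in isolation.
    def test_alarm_still_ahead_today(self):
        now = datetime(2024, 3, 1, 6, 0)
        self.assertEqual(next_alarm(now, 7, 30), datetime(2024, 3, 1, 7, 30))

    def test_alarm_already_passed_rolls_to_tomorrow(self):
        now = datetime(2024, 3, 1, 8, 0)
        self.assertEqual(next_alarm(now, 7, 30), datetime(2024, 3, 2, 7, 30))

unittest.main(argv=["next_alarm_test"], exit=False)
```

The integration-level version of this would wire `next_alarm` to the real host clock and the notification system, and the system-level version would actually wait for the alarm to sound.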

I’ve mentioned the time it takes for these various test types, but it’s not just time that changes. Cost also changes. Running unit tests is almost free. Just a few CPU cycles and that’s it. System tests, on the other hand, can be very expensive. You need to build the entire system and deploy it. To not just the appropriate hardware, but special hardware that you have extra access to for debugging. That all takes time. And money. Unless you’re doing manual testing. Which takes even more time and money.

Most tests fit reasonably well into one of these three buckets. If one of your tests doesn’t, think about breaking the test up into multiple tests so that it does. Once you know which bucket to put your tests in, you can move on to the next step: figuring out when you should be running them. I’ll cover that in a different post.

On the other hand, if most of your tests don’t, think about your test design. If your test design seems reasonable, but your tests themselves don’t fit into those three buckets, think about the underlying system design. If your system is untestable at the unit level, that’s not a testing problem, that’s a design/architecture problem. Fix that first. Then recognize that you’re practicing Test Driven Development.

And that’s a good thing.

by Leon Rosenshein

I'm Back

It’s been a while since I’ve published a new entry. Not because I haven’t thought of things, but because I got sidetracked with life and work for a bit, then I got out of the habit of writing. Which is a great topic to write about. So here I go. Talking about writing. And habits. And personal lessons.

We all have free will. We get to decide what we want to do. Not in a vacuum of course. There is always an impact to our choices. You need to balance the costs and benefits of a choice. The better visibility you have into those costs and benefits, the better decision you can make. Just remember that making a good decision is not the same as having a good outcome of the decision.

In my case, back in the middle of last year I got busy. Busy at work and busy outside work. As people familiar with Spoon Theory know, when you run out of spoons you need to stop, so you need to use your spoons thoughtfully. Looking at the things I needed to do, the things I wanted to do, and the things I could do, I decided to stop working on this blog.

And in retrospect, it was a good decision. The things that needed to be done got done. I was able to do the things I wanted to do that were most important to me, and I was able to put enough effort into them to do them well. I’m happy with the choice I made, and in the same situation, I’ll go through the same process.

However, in retrospect, one thing I missed in my decision was how much momentum and habit play into things. One of the reasons I was writing so much was that I was in the habit of writing. I had some momentum, and that kept me going. When I stopped, I got out of the habit and lost the momentum. Even worse, I got in the habit of saying “I’ll get back to it soon”. And that’s a dangerous habit to have.

What I should have done was extend my decision with some exit criteria. That would have helped me not get into the habit of not writing. Instead of realizing it’s been 9 months since I posted a new entry, I would have had both reminders and a reason to get back to it. Because I do like writing. And sharing. And hopefully others are getting something out of it as well. So here we are. I’m writing blog posts again, and working on building back that habit.

And to bring this back to helping you, my Friendgineers, it’s something that we need to remember as software developers. When we write, whether it’s emails, docs, blog posts, or code, we have habits. Generally, our habits help us by keeping us from having to decide every little detail. One space or two between sentences? (One) Oxford comma or not? (Yes) Indent or blank line between paragraphs? (Blank line) Those habits are useful.

But sometimes, when we make a decision, like deciding to move quickly to get something working right now, that should have exit criteria but doesn’t, we end up without an important habit, or possibly worse, with a new habit, that gets in our way later. Like the habit of not thinking about forward or backward compatibility, not worrying about separation of concerns, or not writing unit tests. Or maybe hard-coding configurations, or choices? Sometimes you do those things for speed, or expediency, but those are not things you want to make a habit of.

So when you do make those decisions, know your exit criteria, and follow them. If you don’t have them, create them. And above all, be careful what habits you pick up. Or lose.

by Leon Rosenshein

On Busyness

As I’ve mentioned previously, Winnie the Pooh makes a great coach for folks interested in extreme programming. Things are the way they are and we have to live with what is. We can learn from it. We can change it. But we have to deal with the reality of what is.

There’s another part of the Tao Of Pooh we can learn from. It’s the Bisy Backson.

Gon Out. Backson. Bisy. Backson, C.R.

In the book, Rabbit is looking for Christopher Robin, but instead of finding him, Rabbit finds the note above. He can’t figure out exactly what it means and becomes a “Bisy Backson” trying to find Christopher Robin.

In the story, the Bisy Backson has to always be moving. Always doing something, going somewhere, full of sound and fury, signifying nothing. Just to prove, to themselves and others, that they’re important. Because if you’re doing something important, you must be important.

Unfortunately, that’s one of those things we all know that just ain’t so. And ain’t so on multiple levels. First, and most straightforward, there’s no transitive relationship between the importance of the work and the importance of who’s doing it. The work is important, and getting it done is important, but in most cases, it doesn’t matter who does it. A better way to look at this is through the value lens. Not “I am doing important things so I am important”, but “I am valuable because I am doing things that add value.” A subtle, but important, difference.

Second, and probably the most insidious thing, is that the Bisy Backson is a great example of the difference between outputs and outcomes. They do lots of things and there’s lots of activity, but not a lot of results. And from the outside it looks like progress. That’s the sound and fury part. To make matters worse, that appearance of progress is often incentivized by the systems we work in. This one is hard to manage because it requires self-awareness. Again, the value lens is a good way to combat this. Is what you’re doing high value or not? It doesn’t matter if it’s high output if the value is low.

Third, and hardest to see, is the opportunity cost of being busy. I’ve talked about the importance of slack time, and this is still a great explanation of how busyness and parallelization can work against reducing overall time. The Bisy Backson doesn’t see this. They’re too busy doing things to see that doing less might be faster. And it’s certainly faster than doing something that’s just going to sit in unfinished inventory for a while, or worse, doing the wrong thing because we don’t know what the right thing is yet. The value lens helps here, as it usually does, but it’s not enough. One of the things that traps the Bisy Backson is the local maximum (or minimum) problem. If you don’t take the time to look at the bigger picture the Bisy Backson will quickly find themselves on a peak looking across the valley at the higher peak they should have been moving towards. The antidote here is to step back and look at the bigger picture and understand what it is you’re really trying to do.


On a personal note, there’s another kind of time when I deal with the Bisy Backson inside myself. That’s when something significant enough happens that I need to take time to process it, but I’m not ready to process it headfirst in real time. At times like that I’ll often choose to be the Bisy Backson to engage the high-order processing nodes in my head and let the issue rattle around and clarify itself. That’s where I’ve been the past week. Someone at work passed away unexpectedly. Someone I’ve worked closely with for over 3 years. There are lots of little reminders of the loss, and each one is a distraction. I’ve been using my inner Bisy Backson to give me the time and space to work through it at my own pace.

So while busyness for its own sake might not be the best thing, busyness as a tool can be useful. The hard part is knowing which situation you’re in and making the appropriate choice.

by Leon Rosenshein

That's The Way It Is

I’ve said before that It Depends is just E_INSUFFICIENT_CONTEXT written so humans can understand it. There’s another common phrase that often hides a much deeper meaning.

That’s Just How It Is

The thing about that sentence is how passive and accepting it is. Particularly in the word just1. Without just it’s a description of the current state. Adding just adds another whole dimension. It changes the sentence from a description of what is to a comment on what is.

And implicit in that comment is context. The context that says not only are things the way they are, but that you’re powerless to do anything about it. I assert that that last part is untrue.

There may be limits on how much you can do, but it’s not nothing. At the very least, if you know that things are that way, you can expect it. And plan for it. Since I’m a software developer I’ll use a car analogy. Say you’re on a road trip and a road you want to use is closed.

You can drive right up to the sign, then stop and wait for someone to open the road, tell you to turn around and go home, or provide a detour. Or, depending on when you find out, you can plan a different route, decide not to go or to go somewhere else instead, or maybe decide a phone call gets you enough of what you want, and do that instead.

The difference is agency. If that’s just the way it is, you have no agency. On the other hand, if that’s the way it is, you have some control over your destiny. You can do something.

Coming back to software development, the same thing applies. There are events that happen that are outside your control. You do have to accept them. Requirements change. Hardware fails. You get bad input. What you do about it is up to you.

Depending on how much control you have, what you do is different. Sometimes you have enough control to prevent the problem. Or at least prevent the problem from impacting you. Ensure there are redundant systems to mitigate hardware issues. Sanitize your inputs when you get them, and if possible, where they are generated. Knowing that requirements change, leave some slack in the schedule. You’ll still run out of time (Hofstadter’s Law), but it won’t be as bad as it might have been.

Or maybe all you can do is add a bit of resilience to the system. Knowing that your inputs are unreliable, even after doing some sanitizing, reject them. Instead of crashing or passing on the problem to someone else, stop what you’re doing and return some kind of error to someone who can do something about it. If you can’t do that, at least log enough information so that you know what happened. And automate the recovery process. Or if you can’t do that, script it. There have been many times where I wasn’t in a position to prevent a problem from happening, but once I knew it could happen, I can’t think of a single time where there was nothing I could do to make things easier to diagnose and/or recover from the situation.
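A tiny sketch of that kind of resilience in Python (`parse_reading` and the sensor range are invented for illustration): reject bad input at the boundary, log enough context to diagnose it later, and hand back an error the caller can act on instead of crashing or passing the problem downstream.

```python
import logging

logger = logging.getLogger("ingest")

def parse_reading(raw):
    """Validate one raw sensor reading at the system boundary.

    Returns (value, None) on success or (None, error_message) on failure,
    logging enough context to diagnose the bad record later.
    """
    try:
        value = float(raw)
    except (TypeError, ValueError):
        logger.warning("unparseable reading: %r", raw)
        return None, f"not a number: {raw!r}"
    if not (0.0 <= value <= 150.0):  # plausible range for this (hypothetical) sensor
        logger.warning("out-of-range reading: %r", raw)
        return None, f"out of range: {value}"
    return value, None

print(parse_reading("72.5"))    # (72.5, None)
print(parse_reading("banana"))  # (None, "not a number: 'banana'")
```

Nothing here prevents bad input from arriving. But once it does, the error is caught where there's still context, recorded, and returned to someone who can do something about it.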

What makes it possible is the mindset change that comes from dropping the just. From changing a comment that makes you powerless to a statement of reality that you can do something about.

That’s just the way it is


That’s the way it is

Come to think of it, that’s good advice not just for software development, but for life in general.


  1. Just and But as modifiers, the difference between them, and how different people use them is a whole separate topic for another day. ↩︎

by Leon Rosenshein

The Power Of No

A long time ago I read a story by Eric Frank Russell, And Then There Were None. On the surface it’s a typical re-contact story. There was a great diaspora during which hundreds or thousands of interstellar ships left Earth to start new colonies. Shortly afterward, something happened, and contact was lost with all of them. Now, four hundred years later, the people of Earth are trying to reconnect with those colonies.

Without going into too much detail, the ship (it’s never named) lands on planet K22g and the ambassador on board tries to find the folks in charge. Unfortunately, he can’t seem to do that. It turns out that the people who colonized the planet are followers of Gandhi’s approach to society and have turned civil disobedience and barter into their social system. In the story, two phrases that keep coming up and seem to define their society are F:IW and myob. It takes a while for the new arrivals to figure out that they are shorthand for “Freedom: I Won’t” and “mind your own business”. Those two phrases have a pretty strong impact on the more authoritarian, hierarchical society represented by the ship’s crew.

There are lots of reasons why that social and economic system might not work at scale, and the story uses a bit of hyperbole to make its point, but who knows. It could work at small scale, like within a small community. It’s a pretty short story and it’s interesting to think about how such a society might actually work in the larger context.

You might be wondering how this relates to software development. After all, you can’t just tell your boss to “myob” or just say “I won’t” when you get a request. Or at least you can’t do that and expect there to not be any consequences.

It does, however, point out the power of politely saying “No” or “I’m responsible for that, and I think …”, and that’s where the connection is.

Just because someone asks you to do something, it doesn’t mean you should just drop everything and do it. Or try to do it in addition to whatever else you’re trying to do. That doesn’t work. It’s not scalable, it hurts quality. It slows things down and it keeps things from getting done. That doesn’t mean you should just say no and ignore the request. It just means you need to think about it and make sure you understand the impact of fulfilling the request.

Similarly, just saying “myob” when someone makes a request or suggestion isn’t the way to work with others. But you also shouldn’t just blindly take someone’s advice or let them set priorities or importance. You need to understand the “why” behind it. Once you understand the why, you can evaluate the statement and decide if it makes sense, would make things worse, or if there’s a better way to get the desired result. And again, you can’t do everything, so you should have the discussion about whether it’s the right thing to do.

Really, what they both come down to is realizing you’re responsible for what you do (and don’t do) and that you need to think about the value you’re providing. Not just immediately, but medium and long term as well. Saying “I Won’t”, when it’s appropriate, and with the proper understanding and justification, can give you the freedom to do the things that need to be done.

Saying “myob” or something like “Thanks for the advice. As the person/team responsible for doing X, here are a few things you may not have thought of.” is also very powerful. It can start the conversation around what’s important, what’s not, and why. When everyone understands that, the group makes better decisions together.

I’m not suggesting you respond to every request with “F:IW” or “myob”, but make sure you think about what the end result of giving a respectful version of those responses would be.

by Leon Rosenshein

What Are You Waiting For?

I’ve talked about the Eisenhower matrix before. Two orthogonal axes: importance and urgency. It’s a good way to prioritize. The higher and further to the right, the higher the priority.

Another way to look at things is task priority, Leverage, Neutral, or Overhead. They all need to be done, but things with leverage are higher priority.

Or maybe you prefer the MoSCoW method. Must, Should, Could, and just as important, Won’t. That’s the priority order.

Three ways to prioritize. All you need to do is pick one, implement it, then reap the rewards, right? Well, that’s mostly right.
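Picking one of these is mostly a sorting problem. As a toy illustration of the Eisenhower matrix (the tasks and their labels are made up), sorting on the two axes gives the priority order directly:

```python
# Each task is tagged on the two Eisenhower axes: importance and urgency.
tasks = [
    ("sort old screenshots", {"important": False, "urgent": False}),
    ("write design doc",     {"important": True,  "urgent": False}),
    ("answer routine email", {"important": False, "urgent": True}),
    ("fix prod outage",      {"important": True,  "urgent": True}),
]

# Important-and-urgent first, then important, then urgent, then neither.
ranked = sorted(tasks,
                key=lambda t: (t[1]["important"], t[1]["urgent"]),
                reverse=True)
print([name for name, _ in ranked])
```

The same shape works for Leverage/Neutral/Overhead or MoSCoW; only the key function changes.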

The thing that none of those schemes mention is that you have to not just do the work, you have to deliver the work. If you don’t deliver, then you haven’t added any value and you haven’t actually done the thing that you decided is most important.

Now here’s where it gets interesting. Delivering is work. So, it’s got a priority. Depending on your exact situation, that might be a lot of work, or it might be just a small amount of work. You can’t forget about it though.

Which leads to some interesting conflicts. You’ve done the work for the top 3 priorities on your list. They got done at about the same time. You could release (deliver) an update, but you could also wait for the next thing to be ready. There would be more value in your delivery, but it’s going to be later. Which do you choose?

It depends. If you decide to wait, some say you’re letting a lower priority (lower value) thing hold your high value thing hostage. Others say that you’re delivering more value, so the delay is worth it. Either might be true.

What lets you determine if you’re being held hostage or delivering more value over time is an understanding of the cost of delivery. The more delivery costs, the longer it takes, the more you end up packing things into each delivery. Instead of paying the cost of delivery for each item, you amortize it over multiple items. So, the delivery cost per item is lower. Standard unit economics, right?

Maybe, but maybe not. It depends on your perspective. Sure, the cost to you, as the producer is lower, but that doesn’t mean the value to the user is higher. It completely ignores the difference in value of having something right now compared to having something else later. The longer you wait to deliver something that could add value, the harder it is to make up the user’s lost value from not having it in hand now.

To make the calculation even worse, since you’ve done a good job figuring out the priority and value of the things to work on, by definition, the thing you’re waiting for has less value than what you’ve already done. If the difference in value is large enough, your user will never make up the difference.
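That trade-off is easy to see with a toy model. Every number below is invented purely for illustration, but the shape of the result is the point: value delivered now compounds, while a saved release cost is a one-time win.

```python
# A toy model of the batching trade-off. Assumptions (all hypothetical):
# each feature adds `value_per_week` units of user value every week
# after it ships, and every release costs the same flat fee no matter
# how many features it carries.

def cumulative_value(ship_weeks, value_per_week, horizon):
    """Total value accrued by week `horizon` for features shipping at `ship_weeks`."""
    return sum(max(0, horizon - week) * value_per_week for week in ship_weeks)

HORIZON = 12          # weeks we measure over
VALUE_PER_WEEK = 10   # value each feature delivers per week once shipped
RELEASE_COST = 5      # flat cost of doing one release

# Ship each of three features as soon as it's done (weeks 2, 4, 6),
# paying the release cost three times.
small_batches = cumulative_value([2, 4, 6], VALUE_PER_WEEK, HORIZON) - 3 * RELEASE_COST

# Batch all three into one release at week 6, paying the cost once.
big_batch = cumulative_value([6, 6, 6], VALUE_PER_WEEK, HORIZON) - 1 * RELEASE_COST

print(small_batches)  # 225: shipping earlier wins...
print(big_batch)      # 175: ...even though we paid for three releases
```

With these numbers, batching only wins if the release cost grows very large relative to the value lost to waiting, which is exactly the lever to attack.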

So rather than cram more and more things into a delivery, which ends up delaying user value, and can make delivery take even longer, make delivery cheaper and faster. So you can put fewer things in each delivery. And deliver sooner.

And remember, this applies at all levels of development. Add the highest value thing now. Deliver it. Add the next highest value thing. Deliver that. Repeat. Or to put it in GeePaw Hill’s terms, take many more much smaller steps.

by Leon Rosenshein

You ARE Allowed To Think Before You Type

I’m a big proponent of agile (lower case a) development practices and Test Driven Development (TDD) in particular. In my experience it conforms to the reality of changing understanding and moving requirements much better than Big Design Up Front (BDUF). Like many things, there’s a spectrum ranging from BDUF to “Start typing and see what you come up with”. And in some specific scenarios, either one of those might be the right choice. But for the vast majority of us, in the vast majority of cases, the best choice is somewhere in the middle.

Unfortunately, the best sound bites are at the extremes. Things are so clear there. There’s no nuance or reality to get in your way. You end up with everything from “The product and architecture teams have specified every function/message/API call, so just go implement this skeleton.” to “Build me a website that tracks all of my activity. I’ll be back on Friday for a demo.” It’s easy to look at either of those and say that there’s no way that they’ll work. And they don’t.

So here’s another sound bite to get you thinking.

Continuous design is an interesting thing; it can be continuously coming up with a design (with some reversals) or it could be continuously testing pre-thought design ideas by implementing them and assessing.

Both are valid. You are allowed to think before typing.

     Tim Ottinger

I say nay, not just allowed. I would say you are required to think before you type.

It’s the messy middle, the land of “It Depends”, where software engineering lives. The place where we need to understand not just what we’re doing, but why. To choose what to do for a valid reason, and just as importantly, choose what not to do. The way to do that is to think. Think about what your goals are. Think about what your constraints are. Think about the path, not just from 0 to 1, but from 1 to 10 as well.

I’ve talked about only designing until you start speculating. But it goes even deeper than that. Until you’re done, you’re speculating. Even then, you might be wrong. You can (and should) design for the known knowns. You should have an approach for known unknowns. But by definition, you don’t know the unknown unknowns, so you can’t design for them.

You might not be able to design for them, but you can keep from painting yourself into a corner. Especially if you’re using TDD to guide you, you have to think first. You can’t write the initial list of failing tests, let alone the tests themselves, unless you have some idea of how things are going to work. How the events and data, the information, is going to flow. How it’s going to be transformed.

To break things down into domains, you have to have some understanding of the situation. And not just the happiest path. You have to think about the edges. The failure cases. The places and ways things can go wrong. It’s only after you’ve thought about those things that you can create your domains and use tests to guide your development.
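As a tiny sketch of what that thinking looks like in practice (the slugify function here is entirely hypothetical), the first tests can only be written once you’ve decided how both the happy path and the edges should behave:

```python
import unittest

# A minimal sketch of thinking before typing: tests like these get
# written against a hypothetical slugify() before its body exists,
# which forces decisions about data flow and edge cases up front.

def slugify(title):
    """Turn a post title into a URL slug (filled in after the tests)."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(slugify("Think Before You Type"),
                         "think-before-you-type")

    def test_empty_title(self):
        # An edge case decided up front: an empty title yields an
        # empty slug rather than raising an error.
        self.assertEqual(slugify(""), "")

if __name__ == "__main__":
    unittest.main(exit=False)  # exit=False so the script can continue
```

Even this toy example bakes in decisions (lowercasing, whitespace handling, empty input) that had to be thought through before any implementation was typed.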

And all of that is speculation. You don’t know until you’ve tried.

So think. Then type. Then think some more. Then learn. Then design. And know that even though those sound like separate things, they’re all happening somewhat concurrently. And that’s OK.