
by Leon Rosenshein

Single Responsibility Principle

The Single Responsibility Principle (SRP) is a follow-on/extension to Don’t Repeat Yourself (DRY). It basically says that any given module should be responsible for one part of a program’s functionality.

Or does it? That’s the common understanding, but if you go back to the author and the original text that’s not quite it. In Robert Martin’s clarifying blog post you find something a little bit different. Instead of being based on functionality, it’s based on reasons for change.

Gather together the things that change for the same reasons. Separate those things that change for different reasons. -- Uncle Bob Martin

Which, while related to function, isn’t really about function. Consider HTML. There are multiple ways to style an element. If you want purple text you could put the declaration on every element. That would work, but it wouldn’t be DRY. You could define it in a div, then put everything in the div, but that adds an unneeded element. Or, you could put your style(s) into a CSS file. Not only do you get DRY, but you also separate responsibility. You’ve split the “look” of the site from the functionality of the site. The CSS changes when you want the color or padding or some other style element to change, but, in general, the functionality, both the JavaScript and the HTML, doesn’t.

Conversely, if you need to change what a button does, or change the new validation to a form then you don’t need to change the style. In fact, two people could make the changes in parallel and not have a merge conflict. That’s always a nice thing.

It also means that when the business needs change, all of the things related to that change are together. Changing the way the system responds to, say, a pedestrian could be handled individually by all of the things that notice pedestrians, or you could gather all of the pedestrian-related decisions together, making it easier to find and understand the interactions.

Of course, as with all architectural decisions, the specific answer depends on the specific case, but it’s definitely another way to think about how to break things up.
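The same split shows up in code. Here’s a minimal Go sketch (the Invoice and InvoiceFormatter names are invented for illustration) where the business rules and the presentation change for different reasons, so they live in different places:

```go
package main

import "fmt"

// Invoice holds the business data. It changes when billing rules change.
type Invoice struct {
	Customer string
	Amount   float64
}

// TaxedTotal is business logic: it changes when tax rules change.
func (i Invoice) TaxedTotal(rate float64) float64 {
	return i.Amount * (1 + rate)
}

// InvoiceFormatter is presentation: it changes when the "look" changes,
// without touching the billing logic above.
type InvoiceFormatter struct {
	Currency string
}

// Format renders an invoice. A style change edits this file;
// a tax change never does.
func (f InvoiceFormatter) Format(i Invoice, rate float64) string {
	return fmt.Sprintf("%s owes %s%.2f", i.Customer, f.Currency, i.TaxedTotal(rate))
}

func main() {
	inv := Invoice{Customer: "Pat", Amount: 100}
	f := InvoiceFormatter{Currency: "$"}
	fmt.Println(f.Format(inv, 0.1))
}
```

Two people can now change the tax rules and the display format in parallel without a merge conflict, just like the CSS example above.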

by Leon Rosenshein

Tick-Tock

Commits and Pull Requests (diffs) are very similar, but they’re not the same thing. The basic difference is that git commits are local (ish) and PRs/Diffs are public. And there’s a lot of value in understanding the difference between your private and public record of changes. But what’s that got to do with tick-tock?

One way they’re related is refactoring. There are lots of reasons to refactor code, but they mostly come down to not knowing what you didn’t know. Yes, sometimes you just get the design wrong with full knowledge, but much more often either you didn't know what you were going to need at first, or the requirements changed after the fact. Regardless of the reason, there will come a time when you need to refactor to make a business (functional) change.

Another way to know if you should have more than one PR is to think about your PR message. The subject should be short, imperative, and have one goal. If your subject reads like Conjunction Junction, you should probably split the PR.

And that’s where tick-tock comes in. You could do them both at once. A single moment in time that changes everything. But is that really the best way?

Maybe, but probably not. A better way would be to refactor first (tick), then make the business change (tock) as two completely separate PRs. There are lots of good reasons. Here’s some of the big ones:

  1. Your automated tests ensure that nothing changed with the refactor. You shouldn’t need to touch most of your tests. They should be testing the functionality, not the structure of your code. Of course if the refactor includes an API change then you’ll need to do some modifications, but the changes should be simple.
  2. You spread knowledge of the refactor by itself. If code is truth and documentation is context, the refactor speaks for itself, and others are aware of it and can take advantage of it.
  3. It’s easier for the reviewer. Each PR has one goal, so there’s less cognitive load when trying to understand it. The refactor should have no functional changes, and the functional change should contain only that change.

by Leon Rosenshein

A Question Of Balance

And it’s not just an underrated Moody Blues album

It’s been said that it’s important to write code for the maintainer, code that’s easy for a person to understand. It’s also been said that code should be decoupled and easy to change as requirements change. The problem is that these two goals are often in conflict.

One of the easiest ways to make code easy to understand is to make it very explicit. Make every decision and assignment inline. Maybe add some functions for readability, and so you can collapse the code in the IDE to its well-chosen name. Don’t rely on anything else. And don’t use abstractions like interfaces and factories. Those just make it harder to find the code that’s running.

One of the easiest ways to make code easy to change is to abstract all the messy implementation details behind an interface. Then, when you need to support some new variant, you just update the factory and everything else is the same. Or use dependency injection so that your code doesn’t even know about the factory. The thing you need is just magically delivered on startup.
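For instance, here’s a minimal Go sketch of that kind of abstraction (Store, memStore, and NewStore are names invented for illustration):

```go
package main

import "fmt"

// Store abstracts the messy implementation details behind an interface.
// Callers never see the concrete type.
type Store interface {
	Get(key string) (string, bool)
}

// memStore is one variant, hidden behind the interface.
type memStore struct{ data map[string]string }

func (m memStore) Get(key string) (string, bool) {
	v, ok := m.data[key]
	return v, ok
}

// NewStore is the factory. Supporting a new variant means updating
// only this function; every call site keeps using the Store interface.
func NewStore(kind string) Store {
	switch kind {
	default: // only one variant so far; a new one would add a case here
		return memStore{data: map[string]string{"greeting": "hello"}}
	}
}

func main() {
	s := NewStore("memory")
	if v, ok := s.Get("greeting"); ok {
		fmt.Println(v)
	}
}
```

Adding a disk-backed variant later means touching NewStore, not the call sites. That’s the flexibility, and also the indirection the previous paragraph warns about: to find the code that actually runs, you have to go through the factory.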

As you can see, there might be some tension between those two things. So how do you know what to do? As usual, it depends. It depends on where you are in the product cycle. It depends on what you know you’re going to need versus what you think you’re going to need. Because YAGNI.

by Leon Rosenshein

Experimental Results

We’re always doing experiments. Right now we’re probably doing more formal experiments than normal, but whenever you’re doing something new it could be considered an experiment. Depending on your knowledge and experience, it might be one with a much higher expectation of success than a traditional experiment, but writing code is just an experiment to validate the hypothesis that is the design.

Coding, like all experiments, will have a result. Since you’re writing the code to add value, you have a vested interest in a specific outcome. Sometimes though, the outcome isn’t what you hoped for. So how do you move forward at that point? One thing is for sure, don’t keep digging.

But even before that, what do you call that result? Remember, naming things is one of the hard problems, and that doesn’t just apply to methods and variables. It’s not a mistake or a failure. Assuming you were thoughtful in your choices, they might have been incorrect, but they weren’t a mistake. While the code might not work as intended, the experiment isn’t a failure.

One way to think about it is with this picture:

[Image: Comparing outcomes to behaviors]

And in this case experiments, regardless of the outcome, offer the most learning, if you take the time to learn from them.

So what do you call it when an experiment has an unexpected outcome?

by Leon Rosenshein

You're Sunk

Ever been in one of those situations where you know you’ve almost got something figured out so you keep trying? You dig and dig, making small steps until you finally reach a solution. That can feel really rewarding. But sometimes you look around afterwards and realize that while you might have ended up in the right place, the route you took to get there was suboptimal.

There are many potential reasons for that, and one of the more common is the sunk cost fallacy. The idea that you’re close to a solution and the time/money/effort spent on the current approach makes you feel like the best answer is to keep going on the same path. It’s certainly easier to just keep going. You don’t need to change direction. You don’t need to admit to yourself (or others) that you made an incorrect choice. And anyway, plugging along has worked in the past, so you expect it to work again.

The first thing you need to do is recognize the situation. And that can be hard (see above). One good way I’ve found is the WTF rate. If it starts going up, you might be in a hole. And like that digger, the first thing to do is stop digging.

The next thing to do is reevaluate. What were the assumptions going in? What have you learned since then? Are you really closer to a solution? What are you not doing because you’re so focused? Who should you be asking for help/advice?

It might be that staying the course is the right answer. You might be working on an onion problem. Getting configuration in a complex system correct the first time is like that. You don’t know what you don’t know, and the only way to find it is ask someone who’s done it or get to the problem and fix it yourself.

Or, more often, it’s an XY problem, and the best thing you can do is get out your rubber duck.

by Leon Rosenshein

Lumpers and Splitters

Are you a lumper or a splitter? I like to think of myself as a splitter, finding boundaries and cleaning architecture as I go, but I’m not sure that’s always true.

Because to be a splitter, you need to have a deep(ish) understanding of the problem. And you can’t have a deep understanding of the problem, solution, and its internal boundaries until you’ve lived with both the problem and a solution for a while. Instead, the best you can do is put things that seem to go together, or at least get used together, in one place so you can find them next time you need them. That’s called v1 or the MVP.

Ideally you’ve done a good enough job on v1 and learned enough that it makes sense to continue. So you add to it. And as a new, successful product, you have some momentum and good will. You want to take advantage of that, so you make the small additions you need and put things where they seem to fit best. You still haven’t lived with it, so right next to that other similar thing seems like a good idea.

Lather, Rinse, Repeat. Suddenly you find yourself struggling to make the next change. Things aren’t fitting together well, and now that you’ve lived with it for a while, you recognize that the internal boundaries of your solution aren’t quite right. That’s OK. You’ve not only figured out the problem, you’ve got a good idea of what a better solution looks like.

So you dive in. Breaking things along clear boundaries. Tightening the bounded contexts and firming up the APIs between them. You find it’s much easier to make changes again. You’re a splitter, and you feel good about the code. Out of curiosity you check history to see who lumped it together that way. And it turns out that the lumper was you.

Lather, Rinse, Repeat

by Leon Rosenshein

Functional Options

I’ve written about the Builder pattern before. It’s a way to give folks a set of default options for a constructor along with a way to set all of the options, and then validate things before actually creating the object. It’s nice because it lets you set all the parameters, in whatever order you see fit, and only validate them at the end. This can be important if setting parameters one at a time can put things in an invalid state temporarily.

On the other hand, it requires that the initial developer come up with all of the possible combinations of builder parameters. And that might not include the way one of your customers wants to use it. So what can you do in that case?

One variation is the Functional Options pattern. It relies on a variadic “constructor” function. Something along the lines of 

func MakeThing(required1 string, required2 int, options ...func(*Thing)) (*Thing, error)

Then, the user provides one or more functions that take a *Thing and do whatever magic they need to modify it. Those functions might take arguments. They might do validation. They can do anything the language allows. That’s pretty extensible.

Then, inside MakeThing, you add a loop that calls all of those functions, which modify the thing as desired.

func MakeThing(required1 string, required2 int, options ...func(*Thing)) (*Thing, error) {
    thing := &Thing{
        Name:  required1,
        Value: required2,
    }

    for _, opt := range options {
        opt(thing)
    }

    return thing, nil
}
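And an option that takes arguments is just a function that returns the closure. A minimal, self-contained sketch, with WithPrefix as an invented option and a pared-down Thing:

```go
package main

import "fmt"

type Thing struct {
	Name  string
	Value int
}

// WithPrefix is an invented option that takes an argument: calling it
// returns the closure that MakeThing will apply to the Thing being built.
func WithPrefix(prefix string) func(*Thing) {
	return func(t *Thing) { t.Name = prefix + t.Name }
}

func MakeThing(required1 string, required2 int, options ...func(*Thing)) (*Thing, error) {
	thing := &Thing{Name: required1, Value: required2}
	for _, opt := range options {
		opt(thing)
	}
	return thing, nil
}

func main() {
	// The caller mixes and matches options, in any order.
	thing, _ := MakeThing("widget", 7, WithPrefix("test-"))
	fmt.Println(thing.Name) // test-widget
}
```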

That gives the user all the control. There are two things I don’t like about it, though. The first is that there’s no final validation at the end. I have yet to see an example/tutorial that has one. It’s trivial to do, and I’d certainly add one.
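A sketch of what that final validation might look like (the validate method and its specific checks are invented for illustration, not part of the canonical pattern):

```go
package main

import (
	"errors"
	"fmt"
)

type Thing struct {
	Name  string
	Value int
}

// validate is illustrative; the real checks depend on what makes a Thing valid.
func (t *Thing) validate() error {
	if t.Name == "" {
		return errors.New("thing: name must not be empty")
	}
	if t.Value < 0 {
		return errors.New("thing: value must be non-negative")
	}
	return nil
}

func MakeThing(required1 string, required2 int, options ...func(*Thing)) (*Thing, error) {
	thing := &Thing{Name: required1, Value: required2}

	for _, opt := range options {
		opt(thing)
	}

	// Final validation: the options may have put the Thing into any
	// state, so check it once, after they've all run.
	if err := thing.validate(); err != nil {
		return nil, err
	}
	return thing, nil
}

func main() {
	_, err := MakeThing("widget", 1, func(t *Thing) { t.Value = -5 })
	fmt.Println(err)
}
```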

The other is a bigger issue. The functional options pattern requires your users to have full knowledge of the object’s internals, while the Builder pattern hides those details. If you’re making a public API you probably want to hide those details, and the validator becomes crucial.

Should you use it? Well, it depends, but it’s an option to consider.

by Leon Rosenshein

The Way

I recently stumbled back across an article titled Ron Jeffries Says Developers Should Abandon "Agile". And it’s strictly true. Jeffries did say that. Unfortunately it’s not the whole story. That’s much more nuanced, and won’t fit in a headline.

What he said was that many organizations are imposing processes and systems with “Agile” in their name. Those systems use many of the same words and descriptions from the original Agile Manifesto. And there might even be some short-term benefits to the organization, but long term, especially for the developers, it makes things worse. He calls this Faux or Dark Agile and says those systems should be abandoned.

Which leads me to another article that upset me the other day. 5 Things That Killed Software Development for Me. Again, it’s this person’s lived experience, and therefore true. But is that really the story? I think the story behind the story is really about forgetting my favorite question. What are you really trying to do here, and why? Because what upset me about the article is that there’s no attempt to understand the why or to really achieve those goals.

It’s done because This is the way. And the way is all that matters. Or is it? Is the result the important part? As the Mandalorian learns, there is the way, but the way is there for a reason, and the reason is what’s important, not just keeping your helmet on.

So too with Agile:

  • Individuals and interactions over processes and tools -- Scrum rituals are an outcome, not the goal
  • Working software over comprehensive documentation -- Add value with incremental change.
  • Customer collaboration over contract negotiation -- Add value together, not as adversaries
  • Responding to change over following a plan -- Start with a plan, then adjust as details become clear

by Leon Rosenshein

Optimizing Tests

Like most things engineering, the answer to the question “Is this a good test?” is it depends. Tests have properties and the different kinds of tests make different tradeoffs between those properties.

While there are a lot of different properties a test could have, some of the most important are:

Deterministic: A good test gives the same result every time. Which usually means it has no external dependencies.

Fast: Tests should be fast. How fast? It depends. Programmer/Unit tests should be single digit seconds. Integration/acceptance tests should be single digit minutes. Bake/Soak tests might take hours or days. And of course this is for individual tests. The sum total of all tests of a given type, even with parallelization, might be longer.

Independent: Your tests shouldn’t have side effects. You should be able to run them in any order and get the same results. Adding or removing a test shouldn’t change any other results. This means that you need a good way to set up all the preconditions for a test.

Specific: If a test fails, knowing which test failed and how should be enough to isolate the problem to a handful of methods. If your test that includes generating a value, storing it, and retrieving it fails, you don’t know which part failed and you have to examine the entire system to understand why. Much better to have tests for each part so you know where the problem is when the test fails.

Two-Sided: Of course you want to test that valid inputs give the correct results. But you also want to test that invalid inputs give the expected failure.
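For example, here’s a two-sided check in Go against a made-up ParseAge function: the valid input must give the correct result, and each invalid input must be rejected:

```go
package main

import (
	"fmt"
	"strconv"
)

// ParseAge is an invented function under test: it parses an age in [0, 150].
func ParseAge(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("age %q: not a number", s)
	}
	if n < 0 || n > 150 {
		return 0, fmt.Errorf("age %d: out of range", n)
	}
	return n, nil
}

func main() {
	// Valid input should give the correct result...
	if n, err := ParseAge("42"); err != nil || n != 42 {
		fmt.Println("FAIL: valid input")
		return
	}
	// ...and invalid inputs should give the expected failure.
	for _, bad := range []string{"-1", "151", "abc"} {
		if _, err := ParseAge(bad); err == nil {
			fmt.Println("FAIL: accepted", bad)
			return
		}
	}
	fmt.Println("PASS")
}
```

In a real suite these would be table-driven cases in a _test.go file, but the shape is the same: one table covers both sides.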

Uncoupled: Tests shouldn’t be concerned with the implementation of the solution. Ideally you would mock out an entire system and have it be functional and inspectable. We’ve done that for our in-memory file system we use for testing things on the infra team. We can preload the system, read/write arbitrary things, and then see what happened. On the other hand, for some things, like network calls, our mocking system looks for certain patterns and responds in certain ways. Not ideal, but a compromise. And avoid a mocking system that just returns a canned set of responses in a specific order. That’s both brittle and not representative of the thing you’re mocking.

Finally, going back to the classes of tests and their different tradeoffs: unit tests are run frequently, so you might trade off testing how things work together for speed of testing. On the other hand, an integration test might have a more involved setup so you can test the interaction between components.

So what’s important to you when writing tests?