Recent Posts (page 9 / 67)

by Leon Rosenshein

The Law Of Instruments

If all you have is a hammer, all you see are nails.

Kaplan, Maslow, et al.

That’s not exactly what Maslow and the others said, but that’s a pretty common paraphrase. And it’s true. If the only tool you have is a hammer, you tend to treat every problem as a nail and pound on it until the problem is gone. Not necessarily solved, but at least it’s gone. Because of that, you need to know your tools. When to use them, and when not to use them.

However, using the right tool for the right job is only part of the issue. You don’t just have a bias to use the tool at hand. A perhaps bigger problem is that you also have a bias around how you see the situation. Hat tip to GeePaw Hill, who helped me put this into perspective. It’s not really about hammers or nails. It’s not even about tools and uses.

The deeper bias is about the frame you view things in/through. And that’s a very subtle bias. It’s subtle because it operates before perception and cognition. You see things in terms of the frame of reference you’re viewing them from. Back to the hammer and nail, it’s not that you see a screw and decide that the easiest way to make it not stick up is to apply percussive maintenance. It’s that you see a nail, then you think about how to fix the problem of a nail sticking up, and whack it with a hammer.

The situation you’re in, the frame, makes you see things not the way they are, but in a way that fits your ability to fix them. Then you think about it and apply a reasonable solution to the problem, as you see it. The key to countering the bias is to recognize that you might not be seeing what you think you’re seeing.

Unfortunately, this isn’t the world of Dungeons and Dragons, and you can’t attempt to disbelieve. At least not by just rolling some dice. Instead, you have to do what the roll of the dice simulates. You need to:

  1. Recognize there’s a chance you might be fooling yourself
  2. Examine the situation closely
  3. Decide if you’re seeing what you think you’re seeing
  4. Do something about it.

Of course, the hardest part is the first part. You probably don’t have the time or energy to assume everything you perceive is wrong. After all, the vast majority of it is correct, so questioning everything would just slow you down. What you can do, though, is identify some of your common responses that might come from misperceptions. Once again, with the hammer and nail. If you see something sticking up that shouldn’t be and you reach for your hammer, look again. If you’re doing rough framing of a wall, it very well might be a nail. If, on the other hand, there’s a lump under your carpet, it’s probably not a nail, so check first.

If you’re building a distributed system and there’s a latency problem, your first response might be to use a cache. That might be the answer. But before you go build/install that cache, look at the system again. Maybe you don’t even need to do the thing that’s taking a long time. Maybe the thing you’re looking for changes so rapidly that a cache is the wrong thing to do and you need to fix the response time itself. Or maybe the problem is that your SLO is just too aggressive.

So next time you find yourself swinging that metaphorical hammer, before you make contact, take another look and make sure you’re seeing what you think you’re seeing.

by Leon Rosenshein

Effective Software Engineers

What do you think of when you think of effective software engineers? Do you think of things like knowledge of frameworks, libraries, and technologies, an understanding of Big O notation and data structures, or maybe the ability to write code fast? I know a lot of people go in that direction when they think of effective engineers. While I agree that those things are important to being a good programmer, I think being a good engineer, software or otherwise, requires an entirely different set of skills.

As I said just last week, and 6 months ago, software engineering is a social activity. Not just in how you debug and ask questions, but how you work with those around you. Your peers. Your stakeholders. Your customers. Your downstream dependencies. The people who will be maintaining what you’re writing. Your future self.

Which leads me to this article on What Makes An Effective Software Engineer. It goes through a list of things to do (and not do).

10 traits of effective engineers:

  1. Cares about the user.
  2. Great problem solver.
  3. Keeps things simple.
  4. Communicates well.
  5. Builds trust, autonomy, and social capital.
  6. Understands team strategy.
  7. Prioritizes well and executes independently.
  8. Thinks long term.
  9. Leaves projects better than found them.
  10. Comfortable taking on new challenges.

Look at that list. None of them are specific to writing code, or even software development in general. Instead, they’re about the traits needed to work well with all of the people involved.

I’m not going to talk about all of them, but here’s a little something about the ones that really resonate with me.

It all starts with the customer. Caring about the customer. Knowing what problem you’re solving and why. It’s about adding value for the user, whoever that is.

Keeping things simple goes along with thinking about the long term. Don’t do things you don’t need yet. Make it simple and easy to change. It’s an iterative process and you learn what you need to do by doing it, so keep it simple, allowing for the long term things when you’re ready for them.

Leave things better than you found them. The Boy Scout Rule. Learn as you go. Make it simpler as you understand more about the domain. Refactor to make the change easy (this can be hard), then make the easy change. Make things easier for the maintainer.

And finally, you need to communicate well. You don’t ever want to surprise someone. New things are explained well. Every change, every decision, internally and externally visible, has not just a what, but a why. You won’t always have agreement, but you should have consensus, and you DO make sure everyone is on the same page.

Check out the article. 10 things you need to be an effective software engineer, but (almost) nothing at all about writing code. In fact, the only place there’s mention of coding and code practices is in the anti-patterns.

by Leon Rosenshein

Breaker Breaker Rubber Duck

When I think about rubber ducks I think of two things. I think about Ernie singing in his bathtub and I think about the old C. W. McCall song, Convoy. And they’re both relevant here. Ernie is talking to his rubber duck, talking through his thoughts and plans. In Convoy, anti-establishment as it was, Rubber Duck, the lead driver in the convoy, recognizes that driving, as solitary as you are in the cab of your truck traveling down the interstate, is something you do with others. You’re part of a group, and the group does better together.

Ernie from Sesame Street singing to his rubber duck
A line of trucks coming around a curve, with Rubber Duck in the lead

I’ve talked about Rubber Duck Debugging before. It’s the idea that a good way to start a debugging session is by explaining the situation to an inanimate object. You describe what the problem is, what you know, what you think, and the assumptions you’ve made. It gets you to be very explicit about those things, and being explicit often uncovers what you missed. It might have been something you didn’t know and needed to find out. It might have been a misconception/bad assumption. It might have been a misunderstanding of what the requirements are. Or it might not help you figure out the solution, but at least it helped you clearly articulate the problem.

Which brings me to this Medium article. It says that rubber ducking is bad and you shouldn’t talk to that inanimate object. Instead, you should talk to a real person. That’s not a bad idea. After all, software development is a social activity. You’re almost certainly working with others. Even if you’re the only person writing code, there are others involved. Design. Sales. Marketing. Support. Even if you’re doing all of that yourself, there will be customers. Or at least users. What you’re writing and their feedback is a slow-speed interactive discussion. Software development IS social.

If development is social, is it bad to talk to others on your team? No. Of course not. You should talk to the other people involved. You should talk to them frequently. About what you’re doing and why. What you need help with and what you can help others with. You should talk about the whys and the why nots.

What you shouldn’t do is substitute talking to others for thinking for yourself. Don’t bring all your problems to the team before you try to solve them. That’s what you’re there for. To solve problems. So try to do it.

That doesn’t mean you rubber duck until you solve the problem. It doesn’t mean you beat your head against the wall until one of them breaks. It doesn’t mean you have to solve every problem on your own or have all the answers right away, every time. It doesn’t mean you don’t need to put in any effort.

It means you need to try to solve the problem. It means you need to understand the problem. It means you have an obligation to bring a good question to the team when you do. And that often means rubber ducking.

How much rubber ducking is enough? Like many things in development, it depends. But you always need to put in enough effort to ask a good question.

by Leon Rosenshein

Forward Compatibility

Before I get into today’s topic, a little flashback. Since December 29th, we’ve had 4 days with measurable snow. The first, falling over 2 days, was a wet, heavy (for Colorado) foot of snow; the other two were more typical Front Range snows, 2 inches of light powder. But it never got that warm and sunny, so it’s been building up. It’s also been melting and refreezing. Which reminded me of this post about YAGNI. You ain’t gonna need it. Until you do, so be prepared.

Interestingly enough, this kind of relates to today’s topic as well. Backward compatibility is pretty easy to understand, and relatively easy to do. Essentially it means using a new format for your input, while retaining a way to recognize that your input is in some older format, then handling it correctly. The reason it’s easy is that you’re in control of all the variables. You get to define the new format and how to handle it. You get to define how you’ll distinguish between the two formats. You get to define how to convert the old format into the new one. And the old format can’t change. Easy peasy.
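
If it helps to see it, here’s a minimal Go sketch of the idea, with hypothetical types and JSON as the format. Everything, including how you recognize and upgrade the old format, is under your control:

package config

import "encoding/json"

// ConfigV1 is the old input format. It has no version field, and it
// can never change.
type ConfigV1 struct {
    Name string `json:"name"`
}

// ConfigV2 is the new format, with an explicit version.
type ConfigV2 struct {
    Version int    `json:"version"`
    Name    string `json:"name"`
    Region  string `json:"region"`
}

// parseConfig accepts both formats and always returns the new one.
func parseConfig(data []byte) (ConfigV2, error) {
    var v2 ConfigV2
    if err := json.Unmarshal(data, &v2); err != nil {
        return ConfigV2{}, err
    }
    if v2.Version == 0 {
        // A missing version field decodes as 0, so this is the old
        // format. Convert it, filling in a default for the new field.
        var v1 ConfigV1
        if err := json.Unmarshal(data, &v1); err != nil {
            return ConfigV2{}, err
        }
        return ConfigV2{Version: 2, Name: v1.Name, Region: "default"}, nil
    }
    return v2, nil
}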

Forward compatibility, on the other hand, means defining your output in such a way that it works with code that hasn’t been written yet. That’s just a little bit harder. It’s not just that you’re not in control of the variables, it’s that the variables haven’t been created yet. How can you be compatible with something that doesn’t even exist yet?

In the strictest sense, you can’t. There’s no way to be forward compatible in all cases. What forward compatibility really means is that you make some design choices up front that don’t constrain you (much). Instead, they put some very light requirements on the future. As long as future implementations follow those simple constraints, they’re backward compatible.

Consider HTML. The design of HTML, and what browsers do with tags they don’t understand, makes it forward compatible. The result may not be exactly what the author hoped for, but older browsers can parse and display newer HTML documents, even if they include things like the <article> or <section> tags.

Communication protocols are often like that. Since you don’t know what you don’t know at the beginning you make room for learning and growing. If you’ve done any Windows system level programming, you’re probably familiar with APIs that take a structure that includes the size of the structure. That’s there for forward compatibility. It lets the receiver of the structure know what version it’s getting. While it might have been possible to determine it by inspection, adding the size of the structure makes it clear.

Another common way of making things forward compatible is by nesting things. Sure, you could have an API that takes 5 parameters, but if you add a sixth one, then not only do you need to rebuild everything, in many languages you also need to change every call site to add that sixth parameter, even if you’re going to go with the default value. You can make that forward compatible by passing that info as a unit (think C structure). If the definition of the structure changes, you might need to recompile the code, but you don’t need to change anything. That’s what the folks building the Windows API did, and it certainly worked for them.
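
Go doesn’t have the Windows cbSize convention, but the nesting idea translates directly. A sketch, with hypothetical names:

package render

// DrawOptions bundles what could have been five (then six) parameters.
// Adding a field later doesn’t change a single call site; callers that
// don’t set it get the zero value, which Draw treats as the default.
type DrawOptions struct {
    Width, Height int
    Color         string
    Antialias     bool
    Scale         float64 // added later; old callers compile unchanged
}

func Draw(opts DrawOptions) {
    if opts.Scale == 0 {
        opts.Scale = 1.0 // default for callers that predate the field
    }
    // ... the actual drawing goes here ...
}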

It’s also important when you have a distributed system and need to transfer data from one computer to another over a wire. In many cases you have no influence, let alone control, over the computer that is sending the data. You need to come up with something that is extensible and forward compatible. Enter Protocol Buffers. Designed from the ground up to be both forward compatible and extensible. Follow the guidelines and you can add lots of extensions without impacting existing users. Send a newer version of data to something built with the old definition of the schema and it will happily ignore data it doesn’t understand.
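
You don’t even need protobuf to see the behavior. Here’s the same idea with plain JSON in Go, using a made-up schema. A reader built against the old definition just drops the field it doesn’t know about:

package main

import (
    "encoding/json"
    "fmt"
)

// User is the old schema, written before anyone thought of nicknames.
type User struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

func main() {
    // Data from a newer writer, with a field the old reader has
    // never heard of.
    newer := []byte(`{"id": 7, "name": "Leon", "nickname": "duck"}`)

    var u User
    if err := json.Unmarshal(newer, &u); err != nil {
        panic(err)
    }
    fmt.Printf("%+v\n", u) // prints {ID:7 Name:Leon}
}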

That doesn’t mean you can ignore the issue. You still have to make sure that you don’t make breaking changes. You still have to make sure that ignoring the new data makes sense and gives reasonable answers. But you can do all that in the new code, where you have control and the ability to change.

You just have to think ahead and make sure you can accept those changes. Just like you need to plow a little bit wider than you need now, because something is going to change in the future.

by Leon Rosenshein

Another Year In Review

snowman with falling snow animation
It’s the last day of the year and time for my annual look back. It’s been a busy year. At home, at work, and on the blog.

On the work front, at the beginning of the year I had just started working from home again because of the Marshall Fire. A few weeks later cleaning was finished and I started working from the office again. There weren’t many people there, but it was good to not work alone. Two weeks later I started a new job. Luckily it was just down the street, and I could still walk to work. There was (and still is) a lot to learn, but I’ve come up to speed and I’m enjoying the work. About a month ago I got a new manager. Same role, same job, same everything, just a new manager. And now, at the end of the year, our local office is being closed. We all still have jobs. Nothing changed with that. I’ll just be working from home until the powers that be decide which of the other 2 nearby offices I’ll be working from.

On the home front, we were temporarily living with our daughter because of the Marshall Fire. Luckily all we had to do was spend a few nights away from home. We did do some smoke remediation, but that didn’t have much impact. On the other hand, we did a bunch of renovations this year. New master bathroom. Hardwood floors all around. New coat of paint everywhere, except the kitchen/family room, which are still to be done. That meant living with my daughter again, this time for a month. But we’re back in the house now, and loving what we did, so it was worth the trouble.

On the blog front, it was a mixed bag. At the beginning of this year my blog was a set of Confluence pages on Aurora’s internal network. It was easy to publish to, but not very visible, and I was about to lose access to it, so something had to change. I looked around at options and settled on Hugo as the site generator, with S3 as the backing store and CloudFront as the web server. The raw data is stored in GitHub and I use GitHub actions to build/copy to S3 when things change. Once I got it set up it’s just worked. Since you’re reading this, it’s working, and at only ~$0.25/month, I’d say it’s worth it. Along the way I made some updates to the theme I’m using to add support for the Open Graph protocol and Mastodon as a social media provider. If you’re interested in learning how it all works let me know.

From a content production standpoint though, it was not as big a year as some. I only added 90 entries this year, so my average (~1.7/week) is a little lower than the 2/week I wanted. On the other hand, I did get the Engineering Stories part of the site up and running, with some archetypes and my own story there for viewing. That’s something I’m pretty proud of.

Speaking of Engineering Stories and the related articles, they’re right there at the top of this year’s most viewed articles. For your viewing pleasure, here’s the top 10 articles for this year:

And a few others I really liked:

Hopefully you’ve found those, and other, articles interesting and useful. I know I found writing them helpful in clarifying and cementing my own understanding of the topics. Until next year, Happy New Year, and remember to enjoy whatever you’re doing. We only get one shot at this life, so make it worthwhile.

by Leon Rosenshein

Beware Of Bugs

“Beware of bugs in the above code; I have only proved it correct, not tried it.” – Donald Knuth

What does that statement even mean? Assuming the proof is correct, and I have no reason to doubt it, how can something be proven correct and still have bugs? There’s a lot to unpack in that statement.

First of all, how do you prove something is correct? There are formal/logical proofs that start with axioms and then prove a statement based on those axioms. You can sort of prove something that way, but really you’re only proving the statement is correct if the axioms are true. Or, you could use symbolic logic and a set of requirements/constraints and something like Leslie Lamport’s TLA+ to prove that your logic and sequences are correct. Those are at least two ways to make a formal proof, and there are other similar tools/mechanisms.
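
To make the first kind concrete, here’s a toy proof in Lean (a deliberately trivial example, not anything from the quote). It proves exactly what it says, from the axioms behind Nat and its addition, and nothing more:

-- Addition on natural numbers commutes. The proof appeals to a lemma
-- that is itself derived from the axioms defining Nat.
theorem add_commutes (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b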

But even if you have that formal proof, have you really proved there are no bugs? No, you haven’t. For even more reasons than there are ways to formally prove something.

The biggest reason is that any proof you have is comparing some set of requirements to some specific logical representation of an algorithm. Once you have that you make two huge assumptions. First, you assume that the requirements you’re using for the proof are an exact and complete representation of the problem at hand. Second, you assume that the logical representation of the algorithm is a complete and accurate representation of the code that is/was/will be written. Both of those assumptions are fraught with peril.

Reality has shown us, time and time again, that our initial understanding of a problem is at best incomplete, if not wrong. There are always things we learn. As the Agile Maxims tell us,

It is in the doing of the work that we discover the work that we must do. Doing exposes reality.

We don’t know what we don’t know until we find out we don’t know it. Which means any proof of correctness you’ve done is incomplete. So your proof doesn’t prove anything, or at least doesn’t prove that your code does what it needs to.

You also don’t know that any code written is functionally equivalent to the logic in the proof you used. It’s translation on top of translation. Language translations are notoriously flaky. So we write unit tests to help increase our confidence that we’ve matched the requirements and expectations. And they do. But they can’t prove there are no bugs. All they can do is help us get our confidence high enough.

But even that isn’t enough. You could have perfect knowledge of the requirements. Create a perfect logical representation of the code. Perfectly translate that model into code. And then a stray neutrino hits the CPU at just the right (wrong?) time and the answer comes out wrong. Or you could have one of the Intel chips that we had in our Bing Maps datacenter. It couldn’t take the inverse cosecant of an angle. Well, it could, in that it gave you an answer, but it was wrong. Eventually we got a firmware update and at least it stopped lying to us and just gave an error if you tried to use the broken function.

As bad as that is, it could be even worse. Almost 40 years ago, in 1984, Ken Thompson gave a talk on trusting trust. There’s a long chain of hardware, tools, and libraries between the code you write and the program a user eventually runs. He talked about a compiler that looked at what it was compiling. If it was a compiler it put a copy of itself inside the new compiler. If it was an operating system, it added a hidden login for him in the operating system itself. With that compiler you’d never know what it was really doing. It was mostly doing what you want, but sometimes it did more. That’s a pretty benign hidden ability, but if it can do that, what else can it do? Introducing bugs would be easy to do if you’ve already gotten that far.

Which brings us back to the opening quote. You can prove something is correct, but you can’t be sure that it will work and that there are no bugs. But that doesn’t mean things are hopeless. We shouldn’t just give up. There are lots of things we can and should do, from software supply chain management to hermetic builds to exploratory testing. All of these things increase your confidence that things are correct. And we should never stop doing that.

by Leon Rosenshein

Happy Winter Solstice

Today is the Winter Solstice in the northern hemisphere. The shortest day of the year. And here in Colorado, if the weather prognosticators can be trusted, both a warm and pretty cold day. The high is listed as 51°F and the low as -10°F. Last year on this day it was warmer (63°F), but it hasn’t been that cold since 1998, when it was -17°F. That’s also the biggest swing between high and low temps I could find in the records, which go back to 1928.

The thing is, all of this involves time. And the English language. English, however, is slippery and imprecise. As Humpty Dumpty said,

“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”

Time is hard. The winter solstice isn’t really a day. It’s an instant in time. We just decided to label the day by that instant. We say the winter solstice is the shortest day of the year, but it’s not. It’s (almost) the same as every other day. What we really mean is that it’s the shortest time between sunrise and sunset, which means it’s related to the solar day. Unless you’re close to the north pole, in which case the sun set in early October and won’t rise again until early March, so the “day” on the solstice is no shorter than it has been for a while.

Dates are hard too. The high for the day is predicted to be 51°F, between 1300 and 1400 local time. That makes sense. The low temperature, on the other hand, is expected somewhere between 0800 and 0900 tomorrow, Dec. 22nd. That’s probably somehow historically related to the solar day, from one local noon to the next. Unfortunately, it’s completely disconnected from our calendar, which marks days as being from 0000 hours on the clock to 2359.99999 on the same clock. That’s not the solar day or the sidereal day. Which leads to even more confusion.

All of that is why computers are bad at handling dates and times. Durations between a start tick and an end tick aren’t too bad. We can measure the number of seconds pretty accurately between them. But try to give the start or end tick an unambiguous label, or try to find the duration between two arbitrary labels, and all you find are edge cases. So unless you’re one of the maintainers of the time zone database, leave it to the experts and use the official library in your language of choice. If you try to roll your own you WILL get it wrong at the edges.
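
Go’s standard library wraps the time zone database, and even a tiny example (the dates are just illustrative) shows why you shouldn’t roll your own. One elapsed hour across a DST transition moves the wall-clock label by two:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Lean on the tz database via the standard library.
    denver, err := time.LoadLocation("America/Denver")
    if err != nil {
        panic(err)
    }
    // US "spring forward" in 2022: 2:00 AM local time never happened.
    before := time.Date(2022, time.March, 13, 1, 30, 0, 0, denver)
    after := before.Add(time.Hour) // exactly one elapsed hour

    fmt.Println(before.Format(time.RFC3339)) // 2022-03-13T01:30:00-07:00
    fmt.Println(after.Format(time.RFC3339))  // 2022-03-13T03:30:00-06:00
}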

Regardless of the difficulty in knowing what day/time it is, Happy Solstice everyone.

by Leon Rosenshein

Latkes

A plate of latkes with sour cream

Yesterday was our 14th annual Latke Fry. Not just any latkes, but vegan, gluten free, Rosenshein latkes. 30+ hours after the cooking began, and I still smell a little like the short order cook I was one summer many years ago.

After last year’s disappointing 25 pounds of potatoes, this year’s 9 hours of cooking and 40 pounds of potatoes helped me realize something. The limiting factor in how many latkes get eaten and how many pounds of potatoes get used isn’t how hungry folks are, the number of guests, how hard I work, or how pipelined the latke assembly line is. It’s the size of my frying pans.

Or really, the size of my stove/counter. My stove and counter are big enough for me to have two 18-inch frying pans going at once. That means that at any given moment I have 8 latkes frying. If a batch of latkes takes 10 minutes (round numbers), I can do 50 latkes an hour. 50 latkes an hour for 9 hours is ~450 latkes. That works out to be 80-100 happy people, but it also means that somewhere around 2:00 I have folks lined up by the stove, taking the latkes as soon as they’re ready.

And there’s not much I can do about that with my current setup. I’m already starting to cook early so I have a cached supply of latkes warming in the oven to help with peak demand. Outside of that, there’s not much more I can do other than starting to cook even earlier and increasing the size of the cache.

Shredding the potatoes and onions faster won’t help. Other folks are already doing that and there are always shredded potatoes ready for me. They’re already sitting in bowls waiting, and I want the potatoes to be as freshly shredded as possible. Pre-mixing the dry ingredients (salt, pepper, flour) might save a little time, like 10 seconds a batch, maybe 10 minutes over the whole day at best. Plus, the mixing of a batch happens while there are latkes frying, when I’m mostly idle (other than schmoozing with my friends), so that’s not going to help with peak demand. I can’t just cook them faster by turning up the heat. The oil starts to burn and smoke, which leads to latkes with burnt outsides and raw insides. It also sets off the smoke detectors. And no-one wants that.

I could invite fewer folks over and reduce demand, but that goes directly against the vision of having as many happy guests get as many latkes as possible. That translates into more latkes. Which means higher throughput. So what’s a latke cook to do? My current plan is that next year I’ll try adding a third, smaller, frying pan, to increase batch size.

Latke Frys are great and we all had a good time, but what has any of this got to do with software? It’s the classic struggle of a batch system. How do you increase throughput when you can’t do anything about latency? It takes 10 minutes to cook a latke, whether you’re making 1 or 10 at a time. All you can do is do more at once. Which means finding the bottleneck and working on that. In my case it’s cooking area.

I know I can’t get my throughput up to meet peak demand. Latency (cooking time) is too high, and bandwidth (cooking surface area) is too limited. Instead, I maximize use of available bandwidth (adding another frying pan) to increase my throughput from the beginning, which lets me increase the size of my cache (more latkes in the oven staying warm). The increased cache size then lets me reduce apparent latency for the customer (latke eaters) by letting them pull from the cache (pre-cooked latkes), which has almost zero latency.

Just like any other distributed, pipelined, batch processing system.
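
If you’d rather see the whole thing as code, here’s a toy Go model, with milliseconds standing in for minutes and the round numbers from above. The pans are workers, the warm oven is a buffered channel acting as the cache, and guests pull from the cache with near-zero apparent latency:

package main

import (
    "fmt"
    "time"
)

// pan fries batchSize latkes every batchTime, forever. Latency per
// batch is fixed; throughput only grows by adding or enlarging pans.
func pan(name string, batchSize int, batchTime time.Duration, oven chan<- string) {
    for {
        time.Sleep(batchTime)
        for i := 0; i < batchSize; i++ {
            oven <- name
        }
    }
}

func main() {
    oven := make(chan string, 50) // the cache: cooked latkes staying warm

    go pan("pan-1", 4, 10*time.Millisecond, oven)
    go pan("pan-2", 4, 10*time.Millisecond, oven)
    // next year's experiment:
    // go pan("pan-3", 2, 10*time.Millisecond, oven)

    // A hungry guest sees only the cache, not the 10-"minute" cook time.
    for i := 0; i < 8; i++ {
        fmt.Println("latke from", <-oven)
    }
}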

by Leon Rosenshein

What Are You Testing?

When writing tests, whether they’re TDD tests, unit tests, integration tests, or canary tests, the first thing you need to be sure of is what you’re testing. If you don’t know what you’re testing, then how can you know if your test has succeeded? The next thing to do is make sure that what you’re testing is really the thing you want to test. I’ve seen way too many tests that say they’re testing some business logic, but what they really end up testing is either the storage system or some cache along the way.

Tests typically follow the Arrange, Act, Assert pattern. That means you set things up, you try the thing you want to test, then you assert that the response is what you expect. It sounds simple, and conceptually, it is. It looks something like this:

Arrange

The arrange step is pretty straightforward. Create the thing you want to test. Create any needed resources. Create your inputs. Identify the expected result. It sounds simple, and often is, but there are hidden details you need to be sure of.

Act

The act step is the easiest. Call the method. Capture the result(s). Simple. As long as your code is testable. As long as the method does one thing. One logical thing. The more logical things the method does the harder it is to make it do the act, the whole act, and nothing but the act.

Assert

The assert step is more subtle. Conventional wisdom says one assert per test. At its simplest, sure. If you have a method that adds two integers, your test should assert that the result you get is equal to the sum you’ve calculated. If, on the other hand, you’re doing some kind of merge of two complex objects, then a single assert statement is probably not enough. But logically it’s asserting that the merged object is what you expect, and that’s one logical assert.
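
As a concrete, if trivial, Go sketch (Add here is a stand-in for whatever you’re actually testing):

package calc

import "testing"

// Add is the hypothetical thing under test.
func Add(a, b int) int { return a + b }

func TestAddTwoIntegers(t *testing.T) {
    // Arrange: the inputs and the expected result.
    a, b := 2, 3
    want := 5

    // Act: call the thing under test and capture the result.
    got := Add(a, b)

    // Assert: one logical assertion.
    if got != want {
        t.Errorf("Add(%d, %d) = %d, want %d", a, b, got, want)
    }
}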

In theory it’s easy to Arrange/Act/Assert. In practice though, it can be hard. For a lot of different reasons. Reasons you need to understand not just when writing the tests, but when writing the code too.

In the arrange step it means setting things up so there’s no variability. Does the thing you’re testing have any dependencies? How do you know they’re working? Are you sure you’re controlling all of the inputs? Not just the function parameters, but also any calls the thing you’re testing might make to get external data. Random numbers. Date/time. Environmental information. All of these things (and more) need to be controlled. If they’re not, you’re probably testing the wrong thing. At best you’re writing a flaky test that will fail at the worst possible moment. Dependency injection and mocks/fakes are your friends here. Don’t leave the response of a dependency to chance. Making something testable often means adding interfaces and using them so you can replace the implementation behind the interface in your tests without changing any other operational characteristics.
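
Here’s what that seam might look like in Go, with hypothetical names. Production code uses the real clock; the test injects a fixed one, so the arrange step has no variability:

package greet

import "time"

// Clock is the interface the code under test depends on.
type Clock interface {
    Now() time.Time
}

// realClock is the production implementation.
type realClock struct{}

func (realClock) Now() time.Time { return time.Now() }

// fixedClock is the test fake. Its response is never left to chance.
type fixedClock struct{ t time.Time }

func (c fixedClock) Now() time.Time { return c.t }

// Greeting gets the time through the interface, not the time package,
// so a test controls it completely.
func Greeting(c Clock) string {
    if c.Now().Hour() < 12 {
        return "Good morning"
    }
    return "Good afternoon"
}

A test then arranges a fixedClock set to 9:00, acts by calling Greeting, and asserts it got “Good morning”. No flakiness at noon, no failures in other time zones.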

In the act step it means arranging things so you can actually call the thing you want to test. Is it a method? Is it public? Is it just a branch in some other method, so you need to carefully craft the setup to force that branch to be called? If it’s hard to run the thing you want to test, then you’ve probably got some refactoring to do. Make the code in the branch a method, then call it. Another common one is separating the call to some external thing from the error handling. A common code pattern looks like this:

retval, err := dbManager.Insert(newUser)
if err != nil {
    switch err {
    case ErrAlreadyExists:
        // specific error handling here

    case ErrInvalidObject:
        // other specific error handling here
    }
    return
}

// happy path code here

It can be hard to get an external reference (or its mock) to return a specific error just so you can test the error handling. A better choice might be to write it something like this:

retval, err := dbManager.Insert(newUser)
if err != nil {
    HandleError(err)
} else {
    HandleInsert(retval)
}

That way, HandleError and HandleInsert are both easy to test. Arrange the one input variable, call the method, assert what you want. And the wrapper is trivially inspectable by observation. Add some dependency injection and it’s trivial to test too.
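
For instance, assuming (just for this sketch) that HandleError returns the user-facing message, and reusing the hypothetical names from the snippets above, each error case gets its own trivially arranged test:

func TestHandleErrorAlreadyExists(t *testing.T) {
    // Arrange: one input variable. No database, no mocks.
    err := ErrAlreadyExists

    // Act.
    msg := HandleError(err)

    // Assert: one logical assertion, matching the test name.
    if msg != "user already exists" {
        t.Errorf("HandleError(ErrAlreadyExists) = %q, want %q",
            msg, "user already exists")
    }
}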

Which brings us to the assert step. There are those that say one assert per test, and that might be the implementation, but the important thing is one logical test per test. And it should match the name. If there are 20 different types of invalid input, then there should be 20 tests, each for one type of invalid input. You don’t want one test with 20 different ways to fail. That adds too much cognitive load when you need to figure out why your tests failed. What failed and why should be obvious from the name of the failed test.

And again, if the thing you’re testing does so much or has so many side effects that your test name is “validateInputsAndComputeResult” it’s probably time to refactor. In fact, any time your test name has been to Conjunction Junction, it’s probably time to refactor your code to do less.

So when you write your tests (and your code), think about what you’re testing.

by Leon Rosenshein

FFS

Once upon a time, in the before times, when people worked in offices and went to conferences in person, there was a conference called the FFS Tech Conference. The format was simple. Every talk’s title started with FFS, and every talk started with a 15-minute rant by the speaker followed by 15 minutes of questions/discussion. Unfortunately, the conference series never took off, but there are some real gems in that first one.

Let’s start with the idea behind the conference. It’s about clarity and transparency. It’s about taking what many folks say quietly to themselves and saying it out loud in front of others, then discussing it. Which brings me to the conference title. There are lots of possibilities. Full Flight Simulator. Fee For Service. Film Forming Substances. Field Fiscal Services. But none of those are right. In this case it’s For F@$k’s Sake. And in this case, Urban Dictionary actually has the right answer (potentially NSFW, you decide). It’s an expression of disappointment. It’s a cry of disbelief. And it’s a plea to change.

Take a look at the titles of some of the talks. You’ve probably all felt the frustration that led to them. I know I have.

  • FFS: Get Outside
  • FFS: Fix the small things. Some are bigger than they appear.
  • FFS: FFS just f’ing talk to each other!

Consider Kent’s Fix the Small Things talk. It’s entirely possible, likely even, that you aren’t going to be able to convince management (whatever that means) that you should take 4 weeks and refactor the code you’re working on before you fix a single bug or add a feature. So don’t. Fix the most important bug. And do a little tidying of the code to make fixing the bug easier. Make a variable or function name match what it represents. Group things together for easier readability. If you’re adding some functionality, then find related functionality and group them together. At least physically. And if you can do some semantic reshuffling to make things more coherent and easier to change, do that too. In fact, do it first. Don’t tell anyone, but that’s refactoring and you just got it done.

It’s a pretty amazing group of talks, and I wish I had been there. I’ve wanted to say almost every one of those things to multiple groups multiple times. Just knowing that someone else has said them out loud is comforting and affirming.

FFS