Recent Posts (page 56 / 65)

by Leon Rosenshein

You Are Not Paid To Write Code

How’s that for an attention grabber? If we’re not here to write code (or design hardware, or manage projects/products, or whatever it is you spend your time doing), then why are we here?

We’re here to increase business value by making cars that drive themselves. Very often that means writing code, or designing or managing, but those are the means to an end, not the end in itself.

I’m a software engineer, so I’m going to phrase this in software engineering terms, but it applies to all roles. Sometimes the best code you write is the code you don’t write. Yes, we all like to start from scratch and build bespoke systems, and sometimes that’s the right answer, but before we go and do that, we really need to think about why we’re doing it.

New systems are fun and easy to build. They don’t have (as many) constraints. They can be exactly what we want, and require nothing else to change, but as was pointed out last week, another layer of indirection is just another place for problems. Every system has its functionality, but it brings with it complexity and unexpected interactions. It brings with it opportunity costs when you’re building it and ongoing maintenance costs afterward. It might very well be that the right answer is to build something new, but take the time and be sure before you get started.

by Leon Rosenshein

The Maintainer

Last week I talked about some principles for developers. Today I want to drill down into the Code For the Maintainer idea. As developers we really like to work on green-field projects. There’s room to explore and innovate, and you don’t need to worry about pesky things like backward compatibility and existing user expectations. However, most of the work we do isn’t green-field. We work inside existing systems, adding functionality and fixing bugs. We extend them into new operational domains. In those cases we act as a maintainer of code, rewriting/refactoring existing code. That’s where this principle becomes really important, because the first thing you need to do, before you even think about doing what you came for, is to understand not only what the code actually does, but what the author’s intent was. So one of your jobs, as the writer of code, is to make the job of the maintainer easier. Make sure things are named well (they do what they’re called and nothing else). Make sure the flow is broken down into manageable chunks. Avoid side-effects. Even more than documenting the “What”, make sure to document the “Why” and the constraints/expectations.
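As a small, purely hypothetical illustration of those points (the names and the constraint are invented), the goal is that a maintainer can tell what a function does from its name and why it exists from a comment:

```python
# Hypothetical example: the name says what the code does, the comment says why.
MAX_UPLOAD_BYTES = 50 * 1024 * 1024

def reject_oversized_upload(payload: bytes) -> bool:
    """Return True if this payload should be rejected before processing."""
    # Why: the (made-up) ingestion service times out on anything much bigger
    # than 50 MiB, and a clear rejection here beats a mysterious timeout
    # two services downstream.
    return len(payload) > MAX_UPLOAD_BYTES
```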

And please, please, please, document how to set up/install and use any additional tools/permissions/preconditions necessary to build and work with the code. Even better, write a script and put it in the README.md. You do have one of those, don’t you?

You never know who’s going to look at the code 6 months from now. How many times have you looked at some code and asked what noob wrote it, only to run git blame and find out it was you? Do everyone a favor and write for the maintainer. I, and future you, thank you for it. As Ward Cunningham said:

“Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.”

And his follow-up:

“I usually maintain my own code, so the ‘as if’ is true.”

https://www.joelonsoftware.com/2005/05/11/making-wrong-code-look-wrong/

by Leon Rosenshein

Kilobytes, Megabytes, Gigabytes and Copy-Pasta

We’ve all looked online for solutions to problems, and that’s a good thing. Learning from others gives you time to focus on what’s important for you and your team, adding value for the company instead of reinventing the wheel. Whether it’s personal blogs, public Stack Overflow, internal Stack Overflow, or a generic Google search, it’s easy to find code snippets that purport to solve your problem, or at least something not entirely unlike your problem. Once you’ve found it, the question then is, what do you do with it?

It’s tempting to just copy/paste it into your code and move on, but that’s probably not the best choice. If nothing else, there’s no guarantee that the code you’re looking at actually works the way you want it to. Even if it’s got lots of up-votes, it could still be wrong. Or the constraints on the code might not quite match the ones you’re dealing with. Or there might be some edge case you or the author haven’t thought of. Or it might have some kind of license on it. You definitely want to check up on that.

Bottom line, internet searches can be really helpful, but they’re not without problems/risks. Here’s the story of one of the most copied pieces of code from StackOverflow, and how it’s wrong.
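For a sense of the kind of snippet we’re talking about, here’s my own rough sketch of a human-readable byte formatter, which is what the post title is hinting at. It is not the Stack Overflow code from that story, and the names and unit choices (SI, base 1000) are mine:

```python
# Illustrative sketch only -- not the snippet from the linked story.
# Formats a byte count using SI (base-1000) units.
def human_readable_bytes(num_bytes: int) -> str:
    units = ["B", "kB", "MB", "GB", "TB", "PB", "EB"]
    value = float(num_bytes)
    for unit in units:
        if abs(value) < 1000 or unit == units[-1]:
            return f"{int(value)} B" if unit == "B" else f"{value:.1f} {unit}"
        value /= 1000

print(human_readable_bytes(999))        # 999 B
print(human_readable_bytes(999_949))    # 999.9 kB
print(human_readable_bytes(1_000_000))  # 1.0 MB
```

Even something this small has plenty of places to go subtly wrong (rounding near unit boundaries, negative values, binary vs. decimal units), which is exactly the point of the story.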

by Leon Rosenshein

Cautionary Tales

In the face of ballooning system requirements or the need to future-proof your code, you may be tempted to add another layer of indirection. All problems in computer science can be solved this way, right? Not so fast. Overly complex designs can be difficult to comprehend and maintain, and when they go wrong, they can go really wrong. A wonderful new podcast from Tim Harford highlights the risks, particularly for safety systems like nuclear power plants, of layering on complexity without enough thought. It’s a delightful romp from the 2017 Academy Awards to a 1638 post mortem by Galileo.


Cautionary Tales podcast
Benefits and risks of indirection

All problems in computer science can be solved by another level of indirection.
    —Butler Lampson

But that usually will create another problem.
    —David Wheeler

by Leon Rosenshein

Kissinger, Schmidt, and Huttenlocher, and AI

What do a former Secretary of State, the ex-CEO of Google, and the current Dean of the MIT College of Computing have in common? Not their age, not their education, and certainly not their jobs. But they all, through direct actions and through the people and companies they advised, led, and taught, have had big impacts on your lives and the world in general. And what do they have in common around AI? Two of them have an obvious connection, but a 96-year-old historian isn't someone you usually think about when you think of AI. For today I give you a couple of articles on AI and society. Something to think about.

by Leon Rosenshein

Engineering Principles

Here are a few of my favorites: BSR, CFtM, DRY, KISS, PLA, SOC, YAGNI

According to Wikipedia, a principle is a proposition or value that is a guide for behavior or evaluation. Following your principles is a good thing, both at work and outside work. They help you make decisions and figure out what to do. Good principles help you make good designs. Good designs lead to good products and good outcomes. So what principles should we be following?

You've probably heard of those acronyms, but just in case:


Boy Scout Rule - Leave things better than you found them
Code For the Maintainer - which is most likely you, so do yourself a favor
Don't Repeat Yourself - If you need to do something twice, use the same thing, don't build a new one (see the sketch after this list)
Keep It Simple, Stupid - Make things as simple as possible, but no simpler
Principle of Least Astonishment - Don't surprise your user. Do what they expect
Separation Of Concerns - The old Unix philosophy: do one thing, and do it well
You Ain't Gonna Need It - Don't create things you don't need now. Wait until you need it. You may never need it, and if you do, you'll know much more by then.
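To make the DRY item concrete, here's a minimal, made-up sketch; the rule lives in exactly one place and everything else reuses it:

```python
# Made-up DRY example: one definition of the rule, reused everywhere it's needed.
def is_business_day(day):
    """Single source of truth for what counts as a business day."""
    return day.weekday() < 5  # Monday-Friday; holidays ignored for brevity

def next_invoice_date(candidate_days):
    # Reuses the shared rule instead of re-implementing it.
    return next(d for d in candidate_days if is_business_day(d))

def count_business_days(days):
    # Same rule again; change it once and both callers stay in agreement.
    return sum(1 for d in days if is_business_day(d))
```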

Following these principles helps at whatever scale you're working at, whether it's a function, feature, service, platform, or product. What principles speak directly to you?

by Leon Rosenshein

PM Is More Than Provider Of Meals

I've been working with PMs of various breeds at multiple companies for years now. Many years ago at Microsoft there was a class called "PM is more than provider of meals". Yes, at MS the PM was often worrying about team morale and making sure we were fed and clothed (lots of t-shirts), but that's not really why they existed. They were the voice of the customer (whoever that was, internal or external), worked with partner teams, vendors, and generally filled all the gaps that no-one else was filling. They did things like communicating up, down, and across, providing product vision, leadership (but no authority), problem solving, and prioritizing. Not in a vacuum, and they were by no means the ones doing all of any of that, but they were a backstop to make sure things got done.

At ATG we have, at least, Program Managers, Product Managers, Project Managers, and Technical Program Managers. Different roles with different skills and expectations, but it's not always clear which one you have, let alone which one you want or need. In most cases you need parts of all of them, and the best PMs (regardless of title) combine all of the different skills, to varying degrees, and just help you get things done. Unfortunately, our PMs cover very wide areas and we almost never have one of each working directly with each team/group, so how can we move forward most efficiently?

Most importantly, we all need to be clear about who's responsible for what, and that's going to vary by team. In the end, all of the bases have to be covered regardless of the mix of titles involved in any particular effort. So whether it's the PM, EM, TLM, TL, or an IC, we need to make sure things are covered. And we need to make sure things aren't covered multiple times so there's no confusion about who's responsible for what. How are you and your team handling it? What best practices have you come up with?

by Leon Rosenshein

Prediction

As engineers we make lots of predictions. Some of them are about what our customers/users want and need, some are about how the things we build will interact, and some are about ourselves, what we are going to do and when we are going to do it. If you were to order our predictions by how accurate they are, that's probably the order to put them in. Which is kind of odd. The people we know the least about are the ones we predict best, while the people we know the most about (ourselves) give us the most trouble. There are all sorts of reasons, but I think some of the big ones are sample size, using data, and optimism.

Consider our customers. As a company we have millions of them, and it's much easier to predict the average of a million things than to guess a specific one. And we do experiments on subsets of them. We control for different situations, and run the experiments different ways, adjusting the experiment and the sample until we're confident in our predictions. Then we make the change, and if our predictions were wrong we figure out why and use that knowledge for the next predictions.

For our system interactions we have unit tests, integration tests, simulations, and even track tests. Again, all designed and constrained to explore the boundaries of the system interactions and help us predict how things will happen "in the wild". Here too, if we find something that was missed we update/extend the situations we use to help us make predictions so that next time we're more sure of ourselves.

On the other hand, do we apply that kind of rigor to our development estimates? We have sprints, we have backlogs, we have story points. We do retrospectives. Then we miss our dates and do it all again. Does your team adjust the number of points in a sprint based on your historical completion rate? Do you have a way of correcting for new kinds of work or new areas? Do you account for the fact that not everyone is the same in all areas? What about dependency management?

That's not to say that it's terrible and we shouldn't try. Things are a lot more predictable now than they were when I started doing this. But we can do even better. As a small team we've seen good results from adjusting the number of story points we accept into a sprint based on our past performance. And we don't say anything about time. There is no conversion between story points and hours. We don't explicitly account for on-call work or KTLO, but that is built into our history of how many points we were able to close. It's not perfect. Our numbers don't directly apply to other teams. When we dive into a new area it gets a little rough, but it's a lot better than it was. And, it helps our customers. They might not like it when we say we can't get something done by a certain date, but they can at least plan around it. And that's a really big win.
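As a rough, hypothetical sketch of what "adjusting based on past performance" can mean in practice (the window size and numbers below are invented, not our actual data):

```python
# Hypothetical sketch: set next sprint's point budget from recent history
# rather than from optimism. Window size and sample data are made up.
from statistics import mean

def next_sprint_capacity(completed_points, window=6):
    """Suggest a point budget based on what the team actually closed recently."""
    recent = completed_points[-window:]
    if not recent:
        return 0
    # Plan to demonstrated throughput, not to hoped-for throughput.
    return int(mean(recent))

history = [21, 18, 25, 17, 20, 19]     # points actually closed, per sprint
print(next_sprint_capacity(history))   # -> 20
```

The mechanism matters less than the habit: feed what actually happened back into the next estimate.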

by Leon Rosenshein

Makefiles

Relevant or not? What's the point of a Makefile in a bazel world?

bazel is a Turing-complete, extensible build system, so if your core build is done with bazel, is there any reason to use anything else? In theory, no. You can do anything with a collection of bazel rules and targets. In practice though, bazel isn't the simplest of systems to use, and the best tool around to understand what bazel is doing is bazel query, so there's a bit of a chicken-and-egg problem there. Your IDE isn't much help either. Many of them can do syntax highlighting, but they don't tell you why something was built or which rule(s) got fired.

On the other hand, Makefiles can't handle the depth/complexity of the trees we build, or at least not simply, and they just don't provide the power/flexibility to only do exactly the things you want done. They're also strangely prescriptive about the format of the Makefile itself. Not quite COBOL, but leading whitespace is important (and mostly invisible). But they are pretty simple to understand since the basic rules are simple.

So what's an engineer to do? For your actual build, bazel is a good choice. If you're in a world with lots of other bazel things (NA, Sim, atg-services, etc.), then do your build in bazel. If you're not, consider it. After all, we're heading for a monorepo with the majority of the build being managed by bazel.

But there's another use case for Makefiles: simple scripting to make your life easier and reduce typing. That's the Makefile of phony targets. It's a simple way to build/share a workflow and integrate 3rd-party tools into it, possibly including a bazel build <big long string of characters denoting the target>. You can even pass in arguments to help specify what you want to do. In that case Makefiles make sense, at least for a while. If you find yourself building/sharing complex Makefiles, that's probably an indication that you should be talking to the DevX team about better shared tooling.
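As a minimal, hypothetical sketch (every target and path name below is invented), that kind of phony-target Makefile is little more than named shortcuts around the real build; note the recipe lines start with a tab, the leading-whitespace quirk mentioned above:

```make
# Hypothetical workflow wrapper -- all names here are made up for illustration.
.PHONY: build test clean

TARGET ?= //projects/my_service/...   # override: make build TARGET=//other/...

build:
	bazel build $(TARGET)

test:
	bazel test $(TARGET)

clean:
	bazel clean
```

Nothing here that a shell alias couldn't do, but it's checked in next to the code, so the whole team shares the same shortcuts.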

So what do you think? Makefiles, bazel, or some combination? Share in the thread.

by Leon Rosenshein

Imposter Syndrome, Dunning–Kruger, or Just Do It

There are new things for us to learn all the time. Some of them are just new coats of paint on the same old thing, some of them build on existing tech, some of them are really new. Sometimes you end up working on something tangentially related to your area of expertise that's new to you. How do you approach it? Is it a problem or an opportunity? Generally, I choose to see them as opportunities. I get to learn something new. I get to expand my repertoire. I get to show my boss I'm flexible. All good things.

But that doesn't make it easy. Sometimes convincing yourself is the hardest part. Then you've got to actually get started. I've found the easiest way to handle both is to ask questions. Find the person who understands the problem. Ask them to explain it. Figure out exactly what part you don't know and ask some simple but well-formed questions. Use those answers to ask some more questions. Then try some things. See what works and what doesn't, and build up your own approach. Then draw the rest of the chart. You'll be amazed at what you can accomplish.