
by Leon Rosenshein

Crawl, Walk, Run

Or, as Kent Beck said, "Make it work. Make it right. Make it fast." But what does that mean? What's the difference between crawling, walking, and running, or between working, right, and fast? What goes in each phase?

Crawling, or making it work, is the happy-path proof of concept. As long as the inputs make sense (whatever that means in a given context), the output is good enough to use. Bad input might crash or give undefined results. Error handling and reporting is minimal to non-existent. The idea is to get enough signal to know if you really want to invest. Back in the aerial image processing days we used to have visible seams where two images touched, and we had artists and modelers fix things by hand. We started to use Poisson blending for the textures and some line-following heuristics to hide the seams along natural image boundaries. In many cases it worked well, so we rolled it out and made accepting a seam or sending it off for manual alteration part of the QC process. At that point we got to find more edge cases and understood the system we were working with much better, so we could do a better job going forward.

Walking (making it right) is productionizing. Hardening the system so that if there are problems you know early and can deal with them. It's handling all of the weird edge cases that come up. It's degrading gracefully. It's making sure you have the right answer (or know that you don't) all the time. It also means making sure the code itself is well written. That it's "clean" and extensible (if needed). That it's well factored. For that seam line processing it meant handling fields and forests and cities. It meant high-confidence annotations on which seams still needed manual work and which didn't. It meant not blending colors across seam lines that ran down the side of a road. By this time, even with the much longer automated processing, we had higher overall throughput and more consistent quality.

Finally, running, or making it fast. For me, this part is about making it literally fast and making it scale out. Being able to get more done with less. Eliminating bottlenecks. And not just in the code. Making it easier and faster to build, test, and deploy. Adding automation. Extending the use cases. For the seam lines most of it was really making it faster. Getting the process down from days to hours. We were able to reduce memory usage (like many such jobs, we were memory bound) so we could do more in parallel. And because we had it "right" from phase 2, it was easy to make sure we didn't have any regressions.

So take things in steps, checking along the way to make sure you're still on track. You, and your customers, will be happier.

by Leon Rosenshein

Eventual Consistency

Consistency, availability, and partition tolerance. The CAP theorem. If you're working on a distributed database then a lot of what you do is guided by the CAP theorem. The world is an imperfect place, and networks break. So you have to deal with network partitions. And in the face of a network partition you can't guarantee both consistency and availability. That's the CAP theorem in a nutshell. But what if there were a way to, if not break the rules, at least ignore them in some cases?

Consider the lowly status API. Is this flag set? Has this job finished? Is the car door open? Things like that. Assuming good intent and a working network, _at best_ those answers are just snapshots in time. The answer you got was true when it was sent, but by the time you got it, it might not be anymore. So you make the call again, looking to see if anything changed, and behold, something did. You get the response you're looking for and get to move on.
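What does "make the call again" look like in practice? Here's a minimal polling sketch in Java; `StatusSource`, the wanted status string, and the timeouts are all hypothetical stand-ins for whatever API and service level you're actually working with.

```java
import java.time.Duration;
import java.time.Instant;

public class StatusPoller {
    // Hypothetical status API; each call returns a snapshot that may already be stale.
    interface StatusSource {
        String fetchStatus() throws Exception;
    }

    // Poll until the source reports the status we want, or give up at the deadline.
    // Each answer is only a snapshot in time; asking again is how we converge on the truth.
    static boolean waitFor(StatusSource source, String wanted,
                           Duration timeout, Duration interval) throws Exception {
        Instant deadline = Instant.now().plus(timeout);
        while (Instant.now().isBefore(deadline)) {
            if (wanted.equals(source.fetchStatus())) {
                return true;                        // saw the answer we were waiting for
            }
            Thread.sleep(interval.toMillis());      // back off and ask again later
        }
        return false;                               // still not there within the timeout
    }
}
```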

That's eventual consistency in action. Ask later and eventually you'll get the correct answer. And you can take advantage of that to make the issues of CAP theorem less of a burden. Because different operations have different requirements and make different promises to the user.

For database writes you might reasonably decide to require consistency at all times. If you can't be sure everyone knows, then don't accept the change. On the other hand, in many (most?) situations availability is key. Getting the most recent answer you can is better than not getting any answer. Recognizing this opens the door to lots of opportunities, from working through network partitions to read-only status copies of your database.

So when you're thinking about the requirements of your distributed database, remember that the requirements for data accessibility and consistency are not consistent for all accesses.

Side note: Both ACID and CAP have consistency in them, but they're not the same thing. That's a topic for another time.

by Leon Rosenshein

TIL ...

TIL what TIL means. Not really, but it is one of those TLAs I seem to have trouble remembering. Kind of like I know that PCMCIA doesn't mean People Can't Memorize Computer Industry Acronyms, but I can never remember what it really means.

Actually, what I really learned was more about Optional vs Nullable. I gained a deeper understanding of the advantages of Optional types. Not having to worry about null pointers makes things cleaner and easier to comprehend, and, of course, it avoids NPEs, which is always a good thing. Back when I was using Scala everything turned into a chain of operations on collections, and without Optional that would have been a nightmare. With Optional things just seemed to flow.
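Here's a minimal Java sketch of why the chaining feels so much better; the `User`/`Address` types and the lookup are made up for illustration. "Not found" is part of the signature, and a missing value just flows through the chain instead of blowing up.

```java
import java.util.Map;
import java.util.Optional;

public class OptionalDemo {
    record Address(String city) {}
    record User(String name, Address address) {}

    static final Map<String, User> USERS =
            Map.of("leon", new User("leon", new Address("Boulder")));

    // Optional makes "not found" explicit instead of returning null.
    static Optional<User> findUser(String name) {
        return Optional.ofNullable(USERS.get(name));
    }

    public static void main(String[] args) {
        // The chain flows through the Optional; no null checks, no NPEs.
        String city = findUser("nobody")
                .map(User::address)
                .map(Address::city)
                .orElse("unknown");
        System.out.println(city);   // prints "unknown"
    }
}
```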

by Leon Rosenshein

Abstract vs Interface

Whatever your language of choice, at some point you're going to need to think about inheritance, composition, and hierarchy. That's when you start thinking about abstract classes vs interfaces. Regardless of what your language calls them, pretty much all modern languages (C++, Golang, JavaScript, Python, etc.) include the functionality directly or indirectly.

The difference between the two is kind of subtle, but for the sake of simplicity let's say an interface is the definition of the contract between caller and provider, but has no implementation, while an abstract class provides at least one overrideable concrete implementation of part of that contract. In neither case can you use them without providing your own implementation. In both cases though, you can just act against the contract and not care about the implementation.
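Here's one way that distinction looks in Java; the `Storage` contract is invented for illustration, but the shape is the point: the interface is pure contract, while the abstract class fills in one overrideable piece of it.

```java
// Interface: the contract only, no implementation.
interface Storage {
    void put(String key, byte[] value);
    byte[] get(String key);
    boolean exists(String key);
}

// Abstract class: part of the contract already implemented, and overrideable.
// Subclasses still have to supply put() and get(), but inherit a working exists().
abstract class BaseStorage implements Storage {
    @Override
    public boolean exists(String key) {
        return get(key) != null;   // a concrete default built on the abstract parts
    }
}

// Callers act against the contract and don't care which concrete class they got.
class StorageReport {
    static String describe(Storage s, String key) {
        return key + (s.exists(key) ? " is present" : " is missing");
    }
}
```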

The place where it gets interesting is when you start talking about composition vs inheritance, IsA vs HasA, and the future. Generally speaking, you can inherit (IsA) one thing, but you can implement (HasA) many. So what happens when you want to add some new functionality or capabilities? What if you only want to add it to some things and not others?

A great example is Comparable. You could put it on the most basic element in your hierarchy. You'd have to give it a default implementation that failed with NotImplemented, or everything would have to provide its own implementation that did that, but that would break the DRY rule. Then you'd have to implement it on everything that actually wanted to be Comparable. That's basically the same amount of work, so no loss there. What would be a loss though is that now your compiler can't know whether Compare actually works or just fails, so you're forced to try it out, hope it works, and then deal with the fallout. On the other hand, you could make Comparable an interface, define it rigorously, and then add it to the things that you plan on making comparable. Then when some developer writes `instanceOfThing.Compare(otherInstance)` they get a warning up front that the capability doesn't exist. At that point the developer can implement it or do something else. And if you inherit from something that implements an interface you get that as well.
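A small Java sketch of that trade-off, with made-up types: implement Comparable only where comparison actually means something, and the compiler flags the missing capability before anyone has to run the code and deal with the fallout.

```java
// Comparable only where comparing actually makes sense.
class Money implements Comparable<Money> {
    final long cents;
    Money(long cents) { this.cents = cents; }

    @Override
    public int compareTo(Money other) {
        return Long.compare(cents, other.cents);
    }
}

// No Comparable here; ordering two sensor frames by "less than" has no meaning.
class SensorFrame {
    final byte[] pixels = new byte[0];
}

class CompareDemo {
    public static void main(String[] args) {
        Money a = new Money(100);
        Money b = new Money(250);
        System.out.println(a.compareTo(b) < 0);   // fine: the contract exists

        SensorFrame f = new SensorFrame();
        // f.compareTo(new SensorFrame());        // does not compile: the compiler
        //                                        // warns up front that the capability is missing
    }
}
```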

So, is an SDV a Car, which is a MotorizedVehicle, which is a Transport, or is an SDV a thing which implements the interfaces of a MovingPlatform, SensorPackage, and DecisionMaker? Discuss in the thread.

by Leon Rosenshein

Toward A Better PM

I'm not talking about Nyquil or Tylenol PM or [Program|Project|Product] Manager. I'm talking about Post-Mortems, or Post-Incident Reviews as they're starting to be known. Whatever you call them, they're critical to improving the system. And that's the goal. Make the system better. Make it more reliable. More self-healing. To do that you need to figure out two things. What happened, both leading up to and during the incident, and why those things happened. You don't need to figure out where to place blame, and trying to do that makes it less likely that you'll be able to find out the important things.

What and why leading up to the incident. How we got into this situation. The 5 whys. It's so important to know how we got there that it's baked right into the incident response template. But here's the thing. We call it the 5 whys, but it often ends up being the 5 whats.

  1. The server was overloaded.
  2. The server was overloaded because there were 3 instances instead of 5.
  3. There were 3 instances instead of 5 because the autoscaler didn't scale the service up.
  4. The autoscaler didn't scale up because there was no more capacity.
  5. There was no more capacity because growth wasn't in the plan.

That's what happened, not why. Why didn't we know the service was getting close to overloading? Why did we get down to 3 instances without knowing about it? Why didn't we know the autoscaler wasn't able to autoscale? Why didn't we know we were close to capacity? Why didn't we anticipate more growth? You need to answer those questions if you want to make sure to fix that class of problem. Each of those "whys" has a solution that would have prevented the problem, or at least let you know before it got that bad.

A good PM goes over what happened during the incident. And that's good. Understanding the steps is important if you want to make an SOP in case something like that happens again. But there's more to it than that. Unless your incident/solution is such a well-worn path that you know exactly what to do and just do it (in which case, why isn't it automated?), chances are there was some uncertainty at the beginning. Those 6 stages of debugging. So instead of just recording/reporting the steps that were taken to mitigate the problem, think about that uncertainty. Why did it happen? Was there an observability gap? A documentation gap? A knowledge gap? Maybe it was permissions, or too many conflicting signals. Ask questions about why there was uncertainty and what could be done to reduce the time spent clearing it up.

We naturally gravitate to the what questions. They typically have very concrete answers and it's easy to point to those answers and the process and say that's why there was an incident. And while we're focusing on the what we tend to ignore the why. However, if you really want to make the system better, to prevent things from happening in the first place, you need to spend less time on what happened, and more time on why it happened.

by Leon Rosenshein

Logging

Ever run into an error dialog on a web page that said something like "An error has occurred. Hit ok to continue"? Not very helpful, was it? What about "An error has occurred and has been logged. Please try again later."? Still not really helpful, but at least you feel better because someone knows about it now.

Well, this isn't about that error dialog (although some of it applies). This is about the log that got mentioned. We've got a decently sized team, and every 7 weeks I'm on call, so I get to deal with those log files. We (and our customers) use lots of 3rd party libraries and tools, so the logs I get to wade through are of varying quality. Lots of information. Lots of status. Some indication of what's happening, and the occasional bit of info about something that's gone wrong. Spark and HDFS in particular are verbose, but not all that informative. There are lots of messages about calls that succeeded and the number of things that are happening. The occasional message about a retry that succeeded. And then the log ends with "The process terminated with a non-zero exit code". Thus starts the search for the error. Somewhere a few pages (or more) up in the log is the call that failed.

So what can we do to make it better? First and foremost, log less. Or at least make it possible to log less by setting verbosity levels. In most cases logging the start, middle, and end of every loop doesn't help, so only do that if someone sets a flag. All that logging does is add noise and hide the signal.
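As a sketch of what "make it possible to log less" can look like, here's a level-based version using Java's built-in java.util.logging; the job, the flag name, and the counts are invented.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class BatchJob {
    private static final Logger LOG = Logger.getLogger(BatchJob.class.getName());

    public static void main(String[] args) {
        // Chatty per-item messages go to FINE, so they're off unless someone opts in.
        // (To actually see FINE records, the handler's level has to be raised too,
        // e.g. via a logging.properties file.)
        boolean verbose = args.length > 0 && "--verbose".equals(args[0]);
        LOG.setLevel(verbose ? Level.FINE : Level.INFO);

        for (int i = 0; i < 1_000; i++) {
            LOG.fine("processing item " + i);   // noise: suppressed by default
        }
        LOG.info("processed 1000 items");       // signal: always visible
    }
}
```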

Second, don't go it alone. Use existing structured logging where possible. Your primary consumer is going to be a human, so log messages need to be human readable, but the sheer size of our logs means we need some automated help to winnow them down to a manageable size, so add some machine-parsable structure.

Third, when you do detect an error, log as much info as possible. What were you doing? What was the actual error/return code? What was the call stack? What were the inputs? The more specific you can be the better. The one thing you want to be careful about is logging PII or financial info. If you're processing a credit card payment and it fails, logging the order id and the error code is good. Logging the user name, address, credit card number, and security code is too much.
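Here's a hedged sketch of that, continuing with java.util.logging and the machine-parsable key=value style from the previous point; the order, gateway call, and error codes are all hypothetical. Note what's in the message (operation, order id, error code, stack trace) and what isn't (name, address, card number).

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class PaymentWorker {
    private static final Logger LOG = Logger.getLogger(PaymentWorker.class.getName());

    record Order(String id, long amountCents) {}

    static class PaymentException extends RuntimeException {
        final String code;
        PaymentException(String code, String message) { super(message); this.code = code; }
    }

    // Stand-in for the real gateway call, which can fail.
    static void chargeGateway(Order order) {
        throw new PaymentException("card_declined", "issuer declined the charge");
    }

    static void charge(Order order) {
        try {
            chargeGateway(order);
        } catch (PaymentException e) {
            // What we were doing, which order, and what came back; no card number, no name.
            LOG.log(Level.SEVERE,
                    "payment failed op=charge order_id=" + order.id()
                            + " error_code=" + e.code,
                    e);   // the exception carries the call stack
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            charge(new Order("o-1234", 4200));
        } catch (PaymentException e) {
            // already logged above
        }
    }
}
```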

Finally, think about the person who's going to be looking at it cold months from now. Is the information actionable? Will the person looking at it know where to start? There's a high probability that the person dealing with it will be you, so instead of setting your future self up for a long search for context, provide it up front. You'll thank yourself later.

by Leon Rosenshein

Avoiding The Wat

In the beginning was the command line. And the command line is still there. Whether you're using sh, bash, csh, tcsh, zsh, fish, or something else, you end up running the same commands. Today I want to talk about those commands. Or at least about those commands and the principle of least surprise.

Let's start with a simple one. `-?` should print out help for your command. I know someone can come up with a valid case where that doesn't make sense, but unless you've got a really good reason, go with that. Bonus points for also giving the same output for `--help`.

Another easy one. Have a way for the user to get the version of your command. If you have subcommands, use a `version` subcommand; if not, use `-v`.

You should be able to type each flag by itself (in any order), or collect flags into a series of characters preceded by a `-`. If any of those flags need an option and don't get it then that's an error.

Having more than one instance of a flag that can't be repeated should be an error. If you don't like that, take the last value.

Simple, common options should have simple one-letter ways to specify them. They should also have more verbose spellings. And it's OK to only have the verbose spelling for some things. Always use `--` for your verbose options.

Speaking of `--help`, if you have sub-commands then `-?` or `--help` should work at any level, giving more detailed info about the sub-command when known.

Have sensible defaults. That's going to be very context sensitive, but if you have defaults they need to make sense. An obvious example would be rm using the current directory as a base unless given a fully qualified path.

Use stderr and stdout appropriately. Put the pipeable output of your command into stdout, and send errors to stderr. And once you've released an output format think 3 or 4 times about changing it. Someone has parsed it and is using it as the input to something else.

Command completion is your user's friend, so support it. Especially if you have lots of sub-commands and options. Bonus points here for being able to complete options and for providing alternatives when the user spells something wrong (like git does).
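To pull a few of these conventions together, here's a minimal hand-rolled sketch in Java; the tool, flags, and messages are invented. It shows short and verbose spellings, `-?`/`--help`, an error when an option is missing its value, and last-value-wins for a repeated option. A real tool would likely lean on an argument-parsing library and add things like combined short flags and completion.

```java
import java.util.ArrayList;
import java.util.List;

public class FlagSketch {
    public static void main(String[] args) {
        boolean verbose = false;
        String output = null;                  // repeated flag: last value wins
        List<String> files = new ArrayList<>();

        for (int i = 0; i < args.length; i++) {
            switch (args[i]) {
                case "-?", "--help" -> { printHelp(); return; }
                case "-v", "--verbose" -> verbose = true;
                case "-o", "--output" -> {
                    if (i + 1 >= args.length) {
                        System.err.println("error: " + args[i] + " requires a value");
                        System.exit(2);        // errors go to stderr, non-zero exit code
                    }
                    output = args[++i];
                }
                default -> files.add(args[i]);
            }
        }
        // Pipeable output goes to stdout.
        System.out.println("verbose=" + verbose + " output=" + output + " files=" + files);
    }

    static void printHelp() {
        System.out.println(
            "usage: tool [-v|--verbose] [-o FILE|--output FILE] [-?|--help] [file...]");
    }
}
```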

And above all, never make them say "WAT".

by Leon Rosenshein

6 Stages Of Debugging

  1. That can't happen.
  2. That doesn't happen on my machine.
  3. That shouldn't happen.
  4. Why is that happening?
  5. Oh, I see.
  6. How did that ever work?

Not quite the 5 stages of grief, but surprisingly similar.

First, Denial. It's impossible. The user did something wrong. It's not my code. The sunspots did it. Come back when you've got real proof that this is my bug and there's something for me to fix. To get through it, trust that the bug report is identifying an issue. It might not be what the user thinks, but there is a problem. The user was surprised, which means you need to do a better job of expectation management.

Then on through Anger and Bargaining. My testing didn't show that problem. Whose fault is it? Just fix DNS. Get the user to be more careful. Get someone else to do a better job of scrubbing the inputs. Yes, the system you're in needs to be taken into consideration. The problem could be elsewhere, so it makes sense to step back and get a slightly wider perspective. Is the problem related to other, similar issues? Is there something more systemic that could/should be done? Make sure you're putting the fix in the right place.

Depression. How did we get here? What did I do wrong? How could I have prevented this? It's something that needs to be fixed. So what can you learn from this? Keep track of where you missed the mark or made assumptions about things. Identify what could be different in the future.

Acceptance. It's your problem to fix, so make it better. Take everything you've learned about the issue to this point and apply it. Own the issue, and share the learnings. Help others not miss the mark the same way.

The last stage of debugging, "How did that ever work?", has gone beyond grief. Now you're looking at the entire system and wondering what other bugs/features are lurking about. At this point you're an explorer, learning new and exciting things about your system. When you get to that point the best thing you can do is tell us all what you learned.

by Leon Rosenshein

Just Keep Learning

The only constant is change. That's true for many things, including being a developer and a developer's career. Lots of things change. The computers we use. The tools that we use. The language(s) we write in. The business environment. The goals we work toward. And on top of all of those external changes, we change too. While there's an unbroken line of "me" all the way back to when I got that first part-time developer job in school to now, I'm not the same person I was then.

I started my career writing Fortran 4 on an Apollo DN3000 computer and C89 on an SGI 310 GTX on a 5 computer network using 10Base5 Ethernet. You won't find anyone using any of those outside of a museum these days. Today we're running a 2000 node cluster using Kubernetes, C++, Golang, and Python. Those are a lot of changes to the fundamentals of what I do for a living.

Back then I was a Mechanical and Aerospace Engineer that used computers to simulate aircraft and missile flight. Since then I've done air combat simulation for the USAF and for games, pilot AI, visualization, 2D and 3D modeling and map building, fleet management, project management, data center management, and large scale parallel processing.

That's a lot of changes to go through, and what I'm doing now doesn't look that much like what I went to school for. So how did I get from then to now? By continuing to learn. And not learning in one way, but in many ways. Some was formal education. I got a master's in Aerospace Engineering early on, and then about 15 years later I got an MBA, but that is only a small part of what I learned. There were also conferences, vendor classes, company-sponsored and online classes, and probably the most important, my coworkers.

My coworkers and I learned from experience. Doing things that needed to be done. Fixing bugs, writing tools, adding functionality, and learning along the way. I learned from them and they learned from me. Languages, IDEs, operating systems, open source and in-house frameworks. We learned the basics together and helped each other out along the way. And I asked questions. Because pretty much no matter what I was doing, there was someone who knew more about it or had more experience or had solved similar problems. So I took advantage of that and learned. I still do that. When there are things that need to be done around deep learning or security I work with people who know and learn the details.

Another way to keep learning is to teach. Sharing what you've learned forces you to really think about what you've learned and what the essence is. You need to understand something deeply to explain it clearly and succinctly. Writing these little articles has helped me understand things better, but you can get the same benefits from a tech sharing session in your team where you explain the latest feature you've added.

Because the world is changing, and you need to keep up.

by Leon Rosenshein

In Defense Of Laziness

The parentage of invention is something of an open question. It's often been said that necessity is the mother of invention, and that seems reasonable, but who's the father? I've heard a bunch of different answers, including opportunity and ability, but my money's on laziness.

But it's a special kind of laziness. Not the kind that sweeps the dust under the rug instead of cleaning up, or the kind that finishes a project and leaves the tools wherever they lie. I'm talking about the kind of laziness that knows the priorities. The kind that realizes that doing things in the right order minimizes rework. The kind that, when digging a ditch, makes it big enough to handle the expected flow and some more, but doesn't build one big enough to handle the Mississippi River.

And that goes for developers as well. Good developers have that special kind of laziness. They know that if your method/library/application doesn't do what it's supposed to, it doesn't matter how clean the interface is. They know that when you're dealing with terabyte datasets, a few kilobytes of local data isn't something to worry about.

YAGNI (You Ain't Gonna Need It) is laziness. It's not doing work before you need to. You might never need it, so be lazy.

DRY (Don't Repeat Yourself) is laziness. Whether it's using someone else's library/package/service/tool, or refactoring to get rid of duplication, or writing and maintaining less code. Be lazy and free up the time to do something more important.

DevOps is all about laziness. Automate all the things. Make them redundant. Make them resilient. Be lazy and build tools and systems to manage the day-to-day so you don't have to.

To be clear, this doesn't mean skip the important things. Don't skip the unit tests or the code review. Don't skimp on requirements or on talking to customers. Don't skimp on being thorough.

So be lazy, but only at the right times.