Recent Posts (page 55 / 70)

by Leon Rosenshein

Cycles Of History

Or, `The more things change, the more they stay the same`. A long long time ago computers were big. They took up entire rooms and required cadres of experts who would care for them, take your punch cards, and then some time later tell you that you punched the wrong hole on card 97. So you'd redo card 97, resubmit your cards, and get the next error. Rinse and repeat until lo, an answer appeared.

Then computers got smaller. You could put two or three in a room. Someone added a keyboard and a roll of paper, and you could type at it. And get an answer. And be miles away connected by a phone line. Time and Moore's law marched on, and computers got smaller and smaller. Teletypes turned into VT100 terminals, then vector displays, and finally mice and bitmap displays. Thus was born the Mother of all Demos. And it was good. Altair, then Sinclair, Atari, Commodore, and IBM started making personal computers. They got even smaller, and Osborne gave us suitcase-sized luggables. IBM and Apple made them easier to use. Then the Macintosh made them "cool". And all the power was on the desktop. And you could run what you wanted, when you wanted to.

Then Sun came along, and the network was the computer. At least for the enterprise. Centralized services and data, with lots of thin clients on desktops. Solaris. SPARC. X Windows. Display PostScript. Control. The cadre in the datacenter told you what you could run and when. They controlled access to everything.

But what about Microsoft? A computer on every desktop (running Microsoft software). And it became a thing. NAS, SAN, and Samba. File servers were around to share data, but they were just storage. Processing moved back out to the edge. All the pixels and FPS you could ask for. One computer had more memory than the entire world did 20 years earlier. We got Doom, Quake, and MS Flight Simulator. But all those computers were pretty isolated. LAN parties required rubber chickens and special incantations to get 4 computers in the same room to talk to each other.

Meanwhile, over in the corner, DARPA and BBN had built MILNET, universities joined BITNET, and computers started talking to each other, almost reliably. Email, Usenet, and maybe, distributed computing. TCP/IP for reliable routing. Retries. Store and forward. EasySabre. CompuServe. GeoCities. Angelfire. AOL, and the September that never ended. The internet was a thing, and the network was the computer again.

And now? The speed of light is a limit. You need things close by. Akamai isn't enough. Just having the data local doesn't cut it when you need to crunch it. And the big new thing is born: edge computing. Little pockets of local data that do what you need, and occasionally share data back with the home office and get updates. Hybrid cloud and on-prem systems. It's new. It's cool. It's now.

It's the same thing we used to do, writ large. Keep your hot data close to where it's going to be processed. It doesn't matter if it's RAM vs. drum memory, L1 cache on chip vs. SSD, or SAN vs. cloud. Or I could tell you the same story about graphics cards: bandwidth limited, geometry transform limited, fill limited, power limited. Or disk drives: RPM, density, number of tracks, bandwidth, total storage. Or display systems: raster, vector, refresh rate, pixel density. Or, for a car analogy, internal combustion engines.

In 1968 Douglas Engelbart showed us the future. It took 50+ years and some large number of cycles to get here, and we're still cycling. There are plenty of good lessons to learn from those cycles, so let's not forget about them while we build the future.

by Leon Rosenshein

Committed

What do you do more of, read PR commit messages or write them? Like almost everyone, I read a lot more PR messages than I write. And even more than the code itself, which expresses your intent to the compiler or interpreter, a PR message is about expressing your intent to the reader. Like good code, the purpose of your PR message is to meet a business need. In this case, the business need is to tell the reader (which is often future you) what the PR does and why it exists in the first place.

So how do you achieve that goal? By remembering it while you're writing the message. It starts with the title, which needs to be short and imperative. It should say what the PR does. And it probably shouldn't have the word "and" in it. In general, if the title is a compound statement you should have split the change into multiple PRs, each with a single focus.

Second, no more than three paragraphs that explain, at a high level, what the PR does, why it's needed, and why it does it that way. It's not a design doc, but you can link to one if needed. It's not a trade study, but it should capture the reasoning. This is the place where you would touch on what you didn't do, and why you didn't do it. If there's a requirements doc, work item, task, or bug that the PR is in service of, this is where you link to it.

Third, describe how you know it works. Again in a paragraph or two, not a full test matrix, explain how you tested it. It could be unit tests, integration tests, A/B tests, or something else. And if there's something you didn't test, note what it is and why you didn't test it.

Finally, remember that we do squashed PRs, so all of the individual commit messages you write prior to submitting the PR won't be seen after your PR lands. I don't put nearly as much effort into them as I do the final PR commit message. I view them as short-term notes to myself to help me write the final PR message. For the interim commits short is good, and unless there's something novel in a commit, one or two lines is enough.
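Put together, a final PR message following that recipe might look something like this (the title, ticket, links, and details here are all placeholders, not a real change):

```
Add retry with backoff to the upload client

Uploads to the blob store fail transiently under load, and today one
failed request fails the whole job. This change wraps the upload call
in a retry loop with exponential backoff and jitter. A circuit breaker
was considered and rejected as overkill for a single call site; the
tradeoffs are captured in the linked design note.

Task: PROJ-1234 (placeholder)
Design note: link-to-design-note (placeholder)

Testing: unit tests cover the backoff math and the give-up path, and
the change was verified end-to-end against a staging bucket. Behavior
under sustained load wasn't tested because we don't have a load rig
for this service.
```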

So what do you think is important in a PR message?

by Leon Rosenshein

Stable Code

Stable code is a good thing, right? Write it, build it, deploy it, and it just keeps working. It just runs and runs. You don't need to worry about it anymore. What more could you ask for? How about repeatability? There's more to a stable service than the first release. There's all the subsequent releases as well. You need to keep doing it. New requirements and requests come in, you write some more code, build it again, then deploy it. In most cases those releases come on a fairly frequent cadence, and you get pretty good at it. And if you're not that good at it you get better. You practice. You write runbooks. You automate things. You evolve with the rest of your environment.

But sometimes it doesn't work that way. Maybe it was the first release, or maybe it was the 10th, but eventually it happens. You wrote the code so well, with so much forethought and future-proofing, that months go by and you don't have to touch the system. Your monitoring and your customers are telling you everything is going great, so you stop thinking about it and move on to other things. And inevitably a customer comes to you with a new requirement.

This happened to us the other day. We had a website that had been working great for months. A stateless website that was adaptive enough to handle all the changes in upstream services we threw at it without changes. Then the upstream service we knew would always be there went away, and we needed to change one hard-coded value. So what do you do in that case? First, you remember where you put the code and the runbooks. Then you build it so you can make a simple change and test it. Simple, right?

In this case, not really. Turns out the world had moved on. The runbooks didn't work. Our laptops had been upgraded to a new set of tools. The build and deployment systems had changed. A testing dependency wasn't available. But we needed to deploy, so we adapted. We removed the missing dependency. We turned off a bunch of validation tools. We got some bigger hammers and made it work, at the expense of repeatability. Luckily for us we're decommissioning the tool, so hopefully we won't ever need to update it again.

Bottom line: because of our apparent success we took shortcuts and didn't make sure our build system was not only stable and repeatable, but also hermetic enough to stand up to the passage of time. We were offline for about 6 hours because our builds relied on external dependencies we had no control over, and we had to adjust to work around them.

The lesson? Be hermetic. Own your dependencies. Exercise your systems even if you haven't needed them.
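What that can look like in practice, sketched as a hypothetical Dockerfile (the digest, paths, and dependency layout are placeholders, not our actual setup):

```dockerfile
# Pin the base image by digest; a tag like :3.11-slim can silently move.
FROM python:3.11-slim@sha256:<digest-recorded-when-you-last-built>

# Vendor dependencies into the repo so the build never reaches the network.
COPY vendor/ /app/vendor/
COPY src/ /app/src/
RUN pip install --no-index --find-links=/app/vendor \
    -r /app/src/requirements.txt

CMD ["python", "/app/src/main.py"]
```

The details matter less than the property: a build years from now sees exactly the same inputs as a build today.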

by Leon Rosenshein

Architect's Newsletter

Monoliths, Microservices, Distributed Monoliths, Domain Driven Design (DDD), Service Meshes, Data Modeling, and more. All in the latest Software Architect's Newsletter.

Personally, I like DDD, as it helps with system decomposition, isolation, reducing coupling and cognitive load, and generally gives developers more freedom inside their domain because the boundaries are clearly defined. This is especially important when you're out on a sea of uncertainty. Anything you can do to bound the context you're working in lets you focus more on what you're trying to get done and not have to worry about what others are doing.
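As a toy sketch of why bounded contexts help (all names here are hypothetical): each context keeps its own model of the "same" thing, and the only coupling is an explicit translation at the boundary.

```python
from dataclasses import dataclass

# Billing context: models only what billing cares about.
@dataclass
class BillingAccount:
    account_id: str
    balance_cents: int

# Support context: a different model of the "same" customer.
@dataclass
class SupportContact:
    account_id: str
    display_name: str
    open_tickets: int

def to_support_contact(account: BillingAccount, name: str) -> SupportContact:
    """The explicit translation at the context boundary.

    Each side can evolve freely; only this function has to change
    when the contract between the contexts does.
    """
    return SupportContact(account_id=account.account_id,
                          display_name=name,
                          open_tickets=0)

print(to_support_contact(BillingAccount("acct-42", 1999), "Ada"))
```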

by Leon Rosenshein

Foundations

I recently ran across an article on "The 10 things every developer should learn", which is a great idea, but kind of hard to specify in practice. I mean, every developer should understand loops, conditionals, and data structures, right? How about discrete math and formal logic? Set theory? Big O notation is something every developer should have a basic understanding of too. It would be good to know TCP/IP at a high level, and to know what 3NF (third normal form) is. Then there's the toolset. Every developer should know source control, compilers, deployment tools, operating systems, and IDEs. Wait, that's more than 10 things.

Or, you could go the other way. There's the LAMP stack. If you know that you know everything you need to know, right? And that's only 4 things. Simple. Or maybe the list is Linux and C++. It's all just 1's and 0's, and short of assembly language or machine code you don't get much closer to the 1's and 0's than that. At the other end of that spectrum is Lisp and Smalltalk. Those are the best languages ever designed, with rich semantics. Or Ada. If you can get it to compile it's almost certainly correct. Good luck getting much more than "Hello World" to compile though.

So having said all that, I do believe there are things every developer should know. But they're not very concrete. They're more a set of heuristics and guidelines. And it's not a short, ordered list. The things on the list are context sensitive and depend on the problem you're trying to solve. So what's on my list? In no particular order:

  • Make sure you're solving the problem you are supposed to be solving
  • You don't know everything
  • You can't retrofit security securely
  • Build for the right scale
  • Use the right tool for the job
  • Text is hard
  • Write for the reader, not the compiler
  • Design for flexibility, but remember, you ain't gonna need it (yet)
  • Having a way to change things without a full build/test/deploy cycle is important to flexibility
  • Save early, save often (both to disk and source control)
  • Don't go alone
  • Don't blindly follow top 10 lists

So what's on your list? Share in the thread.

by Leon Rosenshein

Onboarding

How do people join your team? How long until they can do a build and run some tests? I'm not talking about understanding things well enough to submit a bug-fix PR or add a new feature, just building the code as-is and running all the unit tests.

Our repo is hermetic, which is good: it doesn't rely on having the right versions of the compilers and libraries installed on dev machines. But what about the rest of the dev tooling? IDEs are pretty common these days. Does your team have a supported one or two? Is there a set of plugins/aliases that are commonly used to improve your daily work? How are they shared/made available? What security groups/lists/permissions are needed? What websites does your team use to share things? What's your on-call policy, if you have one, and how do folks get added?
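One partial answer I've seen work (a sketch with made-up tool names, not our actual tooling) is a small "doctor" script checked into the repo that tells a newcomer, or anyone on a fresh laptop, what's missing:

```python
#!/usr/bin/env python3
"""Hypothetical onboarding check: does this machine have the team's tools?"""
import shutil

# Tool -> where to get it. These entries are placeholders.
EXPECTED_TOOLS = {
    "git":     "https://git-scm.com/downloads",
    "bazel":   "https://bazel.build/install",
    "kubectl": "https://kubernetes.io/docs/tasks/tools/",
}

def main() -> int:
    missing = {tool: url for tool, url in EXPECTED_TOOLS.items()
               if shutil.which(tool) is None}
    for tool, url in missing.items():
        print(f"missing: {tool:10} install from {url}")
    if not missing:
        print("all expected tools found")
    return 1 if missing else 0

if __name__ == "__main__":
    raise SystemExit(main())
```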

And this doesn't just apply to new hires or people onboarding to your team. What happens when you get a new laptop or need to re-image your current one? Get pushed to a new version of the OS? Find yourself helping out a coworker on their machine and try to use one of those tools?

And what if something changes? How do you tell the rest of your team? How do you add new capabilities without breaking things?

None of these things are insurmountable, and individually they might not take that much time, but overall they're things that slow you down, take you out of the flow, and generally detract from happiness. So think about how you can make things better for everyone.

by Leon Rosenshein

Incompatible Markdown

Quick question. Are these two HTML blocks equivalent?

<table>
  <tr>
    <th>Product</th>
    <th>What it's good for</th>
    <th>When to avoid/limitations</th>
  </tr>
  <tr>
    <td>Spinnaker<br>[Docs](http://docs/for/spinnaker)</td>
    <td>Services on K8s</td>
    <td></td>
  </tr>
  <tr>
    <td>Batch API<br>[Docs](https://docs/for/batch.html)</td>
    <td>Batch Jobs</td>
    <td>Not SparkSQL</td>
  </tr>
</table>

and

<table>
  <tr>
    <th>Product</th>
    <th>What it's good for</th>
    <th>When to avoid/limitations</th>
  </tr>
  <tr>
    <td>
Spinnaker<br>[Docs](http://docs/for/spinnaker)</td>
    <td>Services on K8s</td>
    <td></td>
  </tr>
  <tr>
    <td>
Batch API<br>[Docs](https://docs/for/batch.html)<br></td>
    <td>Batch Jobs</td>
    <td>Not SparkSQL</td>
  </tr>
</table>

According to the browsers I’ve tried, they are. But put them in a markdown file and things get interesting. Throw doxygen into the mix and it gets even weirder.

According to the W3C, whitespace inside tags is generally collapsed. LF and CRLF are mostly ignored, and browsers (pretty much) all do the same thing. That's because it's a standard. Yes, Microsoft did embrace, extend, extinguish, and IE6 became a standard unto itself, and now Chromium is the standard, but at least you know what to expect.

In the case of Markdown, not so much. First, it's not a standard, in any of the standard senses. There's no governing body, there are no validation tests, and there are lots of flavors. There was (is?) an attempt to create one (CommonMark), but I think it's a little too late for that.

With markdown, # is the big heading. Then you can have some number (depending on the implementation) of smaller headings. Surrounding your text with backticks (`) generally gets you inline code, _ gets italics, ** gets bold, and sometimes ~~ gets you strikethrough. There's a way to do links. You can use * or - to make bullet lists, and putting three backticks (```) before and after lines of code makes a code block. And there are usually tables, generally defined by an arcane series of -s and |'s, with some :s thrown in to make justification suggestions.
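Pulling those pieces together, a small example (how it renders will vary by flavor):

````markdown
# Big heading
## Smaller heading

Some `inline code`, _italics_, **bold**, and ~~strikethrough~~.

* A bullet item
* [A link](https://example.com)

```python
print("a fenced code block, sometimes with syntax highlighting")
```

| Left-justified | Right-justified |
|:---------------|----------------:|
| an arcane      | series of -s    |
````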

Beyond that it gets weird. Sometimes you can tell the code block what language you're writing and it will do some syntax highlighting. Some of the format specifiers can be nested (italic strikethrough), but others can't. Then there are flavors of Markdown. Phab adds icons, images, and internal references. GitHub has its version, as does Doxygen.

One thing I've never been able to figure out is lists in table cells in pure Markdown. I've been able to do it using embedded HTML in markdown, which both doxygen and GitHub support. And that's what led to the problem at the start of this. Both blocks work in doxygen and give the result I expected, but in GitHub Markdown the first didn't have named hyperlinks. It wasn't until I added the spaces and blank lines between the td and the text of the cell that the links worked. It took some experimenting to get it to work in both cases. At least I was able to in both, so we've got that going for us.
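For reference, the embedded-HTML approach for a list in a table cell looks something like this (cell contents borrowed from the tables above):

```html
<table>
  <tr>
    <th>Product</th>
    <th>What it's good for</th>
  </tr>
  <tr>
    <td>Batch API</td>
    <td>
      <ul>
        <li>Batch jobs</li>
        <li>Not SparkSQL</li>
      </ul>
    </td>
  </tr>
</table>
```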

by Leon Rosenshein

WIP It Good

As I talked about in DevOps Book Club, reducing work in progress (WIP) is generally a good thing. It lets you focus, makes things more predictable, reduces time lost to context switches, decreases time to value, and in my case, makes my boss happy. The place where I have some disagreement with the idea of minimizing WIP is around bigger things, things that mostly come under the category of Internal Projects, particularly in a larger organization. Things that require coordination of lots of teams across an org, or that inherently have large blocks of "waiting for things to settle". If that's all you're doing, you're squandering your time.

Yes, you could break things down into tiny little pieces like "Send an email to team A", "Respond to email from Team A", and "Share results with Team B", but really, the task is "Coordinate with Teams A & B". And that task inherently has a bunch of built-in delays and downtime. So you can have a single piece of WIP, or you can have a couple of those coordination tasks, which become basically interrupts to the main task you're working on.

You do need to be careful to not have too many of those interrupt tasks. The whole point of minimizing WIP is to focus and keep things from becoming interrupt driven, with nothing getting done. If you're not careful that's what happens.

Back when I was doing near-real-time game coding on Windows we used the Windows Multimedia Timer to get things to happen at something approaching a steady rate, and QueryPerformanceCounter to figure out how much time had actually elapsed between calls so we'd know how much time to simulate. And it worked really well for a while. Then we started to work on some networking and remote coordination. We'd send a message to a remote node, but we couldn't wait for the response because it took longer than our frame time. And we couldn't schedule lots of little tasks because they weren't periodic or something we could plan for. Our solution was to create a comms state machine that generally did nothing, but when there were messages to handle it handled them. In between those times it existed, but didn't take up usable time and cycles. By scheduling event handlers we were able to get lots of useful work done, stay within our time budget, and minimize overhead.
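In modern terms the pattern might look something like this minimal asyncio sketch (not the original Windows code, and the names are made up):

```python
import asyncio

def handle(message: str) -> None:
    """Hypothetical handler for a single remote message."""
    print(f"handled: {message}")

async def comms_state_machine(inbox: "asyncio.Queue[str]") -> None:
    # Sits idle between messages; the scheduler only wakes it
    # when there's actually work, so it costs no frame time.
    while True:
        message = await inbox.get()
        handle(message)

async def main() -> None:
    inbox: "asyncio.Queue[str]" = asyncio.Queue()
    worker = asyncio.create_task(comms_state_machine(inbox))

    # The "main loop" keeps its steady cadence; comms is an interrupt.
    for i in range(3):
        await inbox.put(f"remote update {i}")
        await asyncio.sleep(0.1)  # stand-in for a frame's worth of real work

    worker.cancel()
    try:
        await worker
    except asyncio.CancelledError:
        pass

asyncio.run(main())
```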

At the more macro level of project tasks and work items the same thing can be done. Most things are individually scheduled, but you carve out some time for things with lots of wall-clock time but only a little processing time, and let them coordinate themselves. And that's why I have more WIP than my boss likes. Because I'm coordinating a lot of "remote" work that needs to get done, and the coordination takes some time, but the time isn't regular or predictable on a short scale. The total work is, but not the work in the next few hours.

Net-net, minimize your WIP. Let it increase only as much as it takes to keep making forward progress on something without negatively impacting the high-priority items, thereby reducing "Time to Value".

by Leon Rosenshein

Scope Of Influence

Perf season is over (for now). Time to think about your career. And not just think about it, but write it down and share it with your manager. What do you want from your career? That's a personal decision and there are lots of things to think about. Manager vs. IC? Technology Driven? Product or infrastructure focus? Customer facing? Data science vs PM vs Engineering (or some combination of the 3)? Only you can make the call on what you want to do, and it's ok to be unsure or change your mind along the way as you learn more.

But even with all those choices, there is one thing that stays pretty consistent as you advance through your career. And that's scope of influence. Whether you're a manager or an IC, as your level goes up, so does your scope of influence.

Generally speaking, an L3 engineer, right out of school, has a small scope of influence. They're expected to be able to manage themselves and, given a framework (either in code or documentation), turn a small set of requirements into business value. They should know to ask questions if they get stuck, and to think a little about what their work means for the future.

At L4 your scope of influence increases. You might be a mentor, you might own a feature and have to work with other people on your team to get something implemented. You're expected to be able to estimate your work and think about how it will impact people who have to use it.

As a senior engineer (L5A) or EM1 your decisions impact the entire team. You're talking about what gets implemented, not just how. And the how part changes too. You're thinking about how your designs and changes impact the future, not just over the current release cycle, but for the next year or more. And you're not just thinking about technology, you're thinking about process and efficiency and communications.

From there the scope just gets bigger. It could be owning the technology roadmap, or being the deep owner of a specific technology. It could be being the director who decides which of the competing business goals gets prioritized over the others. All three of those roles can impact not just what hundreds of people are doing, but how they're doing it.

And of course, none of this is to say that everyone can't or shouldn't be thinking about these kinds of things. We all should. When it comes to your career it's the consistent demonstration of your scope of influence that drives things.

by Leon Rosenshein

Breaking New Ground

Ever have to work on a new technology? Something new to you? Maybe building a gRPC client/server and sending the first gRPC message to it. Or deploying your first service on Spinnaker. Takes a long time, doesn't it? Then you look back and wonder what took so long. You've been writing code for a while. You've sent messages before. You've deployed code. This should be old hat, right? Wrong. 

Think back to high school physics and consider the lowly ice cube. Ignoring altitude, impurities, and other non-ideal situations, it takes ~2.1 Joules of energy to raise a gram of ice 1°C. That's not too bad. Take an ice cube out of the freezer and put it on the counter. The ice's temperature rises steadily, from ~-20°C (depending on your freezer) to 0°C. Then the temperature stops going up. The room is still the same temperature, so you're pouring energy into the ice at the same rate, but the temperature is steady. That's because a state change takes a lot more energy than just heating the ice. It takes almost 80x as much energy (333.6 J/g) to turn a gram of ice into liquid as it does to raise a gram of liquid water one degree. The water is still at 0°C. No temperature change, just a state change. So lots of energy was transferred, but as far as the thermometer is concerned, nothing happened. After that the temperature goes up steadily again (~4.2 J/g per °C) until you hit 100°C, at which point it takes a whopping 2256 J/g to turn water into steam. (Side note: that's why steam burns are so bad. All of that energy is transferred to your skin as soon as the steam condenses.)
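Running those numbers for a single gram makes the step changes obvious:

```python
# Energy to take 1 g of water from -20 °C ice to 100 °C steam,
# using the per-gram figures above.
warm_ice   = 2.1 * 20     # 2.1 J/g·°C over 20 degrees   ->   42.0 J
melt       = 333.6        # latent heat of fusion        ->  333.6 J
warm_water = 4.2 * 100    # 4.2 J/g·°C over 100 degrees  ->  420.0 J
boil       = 2256.0       # latent heat of vaporization  -> 2256.0 J

for label, joules in [("warm the ice", warm_ice), ("melt it", melt),
                      ("warm the water", warm_water), ("boil it", boil)]:
    print(f"{label:15} {joules:7.1f} J")
# Each state change dwarfs the steady warming on either side of it.
```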

Doing things the first time is a lot like that. It is *NOT* incremental work. It's a step change. It's a state change. From "never did" to "have done". So you need to put in a lot of "work" to make the change. After that you'll go back to your more normal pace of advancement.

So when you hit that "state change" wall, don't get discouraged. Try some things. Read some books/articles. Ask for help. Once you get over the state change things will be better.