Worse is Better, Perfect is the Enemy of Good, and The Cathedral and the Bazaar. Lots of different ways to think about the same thing. There's the "Right Thing"™️ and the New Jersey approach. Is there a correct answer? If so, which is correct? Is it situational? If it's situational, what are the situations? This one is so deep that the author of the original Worse is Better paper spent the next 10+ years developing a second persona and arguing with himself about it and still hasn't come to a conclusion.
Tech debt: my personal bête noire. I know we're going to create some. That's fine, even a good thing, and we should do it deliberately.
That's not the question. The real questions are how much should we take on, and when/how should we pay it off? As we
transition from On-Prem (IRN) to Cloud (EKS/EC2) and HDFS/Posix to HDFS/S3 we need to balance today's business needs
and developer velocity against tomorrow's. To help frame the problem I offer
these links and a reminder that
there are lots of resources available in the O'Reilly library, including this book.
There's been an interesting conversation on the gophers and python-dev mailing lists over the last few days, so I
figured I'd send some links on ORMs for you all to peruse. It's another thing to think about as services and processing move
and DB choices are upon us yet again.
Today's gem is from Raymond Chen's The Old New Thing. Raymond is an old Microsoftie, and among other things he was an SRE back before there were such things. Of course then we called them Sustainability Engineers, and their job was to make sure everything stayed working and existing functionality didn't break with releases and updates. This was especially important back in the day because releases happened every few years and quarterly updates were considered quick. Many of Raymond's postings explain why Windows is the way it is and how things that look like poor decisions now were actually the best decision possible at the time and for a long time after. Others talk about Tech in general, the human condition, and how the two interrelate. This one is about tech and engineering and sustainability in the face of time constraints.
In honor of the completion of the Apollo 11 mission, I give you Margaret Hamilton, lead developer for the Apollo Guidance Computer software, and the AGC software itself. It's old (1969) and in assembly, but here it is.
Two things in computing are hard. Concurrency, Distributed Systems, and Off-By-One errors. You'd think reliably writing a file wouldn't be one of them, but you'd be wrong. Luckily "the system" hides much of the complexity from most of us, but you'd be surprised how much of it leaks through to user code.
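As a taste of what leaks through, here's a minimal sketch of the write-a-file-durably dance in Python. This is my own illustration, not from the linked material, and the exact durability guarantees vary by OS and filesystem; the fsync-the-directory step in particular is a POSIX-ism.

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data so readers see either the old or the new contents,
    never a partial file, and the result survives a crash.
    (POSIX semantics assumed; guarantees vary by OS/filesystem.)"""
    dirname = os.path.dirname(os.path.abspath(path))
    # Write to a temp file in the same directory, so the rename below
    # stays on one filesystem and therefore atomic.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()              # flush Python's buffers to the OS
            os.fsync(f.fileno())   # push file data to stable storage
        os.replace(tmp, path)      # atomic rename over the target
        # fsync the directory too, or the rename itself may be lost
        # in a crash (POSIX-specific step).
        dfd = os.open(dirname, os.O_RDONLY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)
    except BaseException:
        try:
            os.unlink(tmp)         # clean up if we died before the rename
        except FileNotFoundError:
            pass
        raise
```

Even this sketch glosses over plenty: error handling on close, filesystems that reorder metadata, and the fact that fsync failure semantics differ across kernels.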
Here's a topic for discussion. There was a recent Twitter thread about the "10x
Engineer". Does that person exist? Would you want to work with them? Is there a time and place for such a
person?
Founders, if you ever come across this rare breed of engineers, grab them. If you have a 10x engineer as part of
your first few engineers, you increase the odds of your startup success significantly.
OK, here is a tough question.
How do you spot a 10x engineer?
Here are some even tougher questions. Are they really that good? Do they make everyone better, or do they get things done
and leave behind a trail of barely working code and burned-out engineers supporting it? Do you really want
someone like that on your team?