Recent Posts

by Leon Rosenshein

Visibility

I don’t often give career advice. Or rather, while I often talk about what good engineers and developers do, the importance of adding value, and the importance of managing your own career, I rarely talk about specific things you need to do to get a promotion. I’m of the opinion that if you grow and improve yourself, add value, and create and take advantage of opportunities along the way, your work will be recognized and valued.

That said, knowing how to manage your career is advice on how to get promoted. The difference though, and it’s an important one, is the motivation. Managing your career in a way that lets you learn and grow and expand your scope of influence means doing things that will help to get you promoted. When you do those things to learn and grow, they satisfy internal needs. They satisfy the need for Autonomy, Mastery, and Purpose. It’s good for you. It’s likely good for your team. It’s likely good for your company.

On the other hand, when you do things with the goal of getting promoted, you might learn and grow, but those are collateral benefits. Checking boxes on a level rubric can get you promoted. Unfortunately, when your focus is on checking those boxes, you end up not helping yourself, not helping your team, and not helping the company you’re working for reach its goals. Worst case you end up with what Charity Majors described as roving bands of skilled, restless engineers competing for vanity projects. And when you get the promotion, you’re likely to find that while it’s satisfying for a little while, it didn’t really satisfy that internal need, and instead of enjoying things, you just keep struggling for the next promotion.

The thing is, regardless of your motivation, you’ll do many of the same things. One of those things is ensuring you have visibility. Which brings me to this image, from workchronicles.

I’ve mentioned workchronicles and shared one of their comics before. Like many of those comics, it’s more than a bit cynical, but there’s a kernel of truth hiding in there. You need to get past the tyranny of the OR. You do need to talk about what you’re doing and what you’ve done, but NOT at the expense of doing the work. Advancing your career requires a bunch of things. Of course, you need the ability. You need to demonstrate that you can do the work, but ability alone isn’t enough.

After all, your coworkers, your peers, have roughly the same ability as you do. They might have deeper knowledge of something specific, or a broader knowledge base, but they’re your peers. Assuming everything else is working the way it should (yes, I know what happens when you assume) if they’re your peers, your abilities are roughly equal.

Since ability isn’t enough, what’s the next thing? The next thing is that others need to know that you have the ability. That’s where visibility comes in. To manage your career, to set yourself up for growth, to be in a position where you can see opportunities, you need to make sure that your ability is visible. Or, as I said last week, you need to radiate information. The key is to make your ability (and the value you’ve added by using that ability) visible to people without bragging or forcing it down others’ throats.

You do that by informing people of what you’ve done, and how it helps them and makes their lives easier. You do it by asking relevant questions. Not to show how smart or observant you are, but to help others make the right decisions and to make sure they haven’t overlooked any potential problems. You do it by helping others and sharing your ability when it’s needed, and you can help. You do it by documenting and sharing your learnings. You do it by not being indispensable in an area, but by helping others learn that area themselves. You do it by doing the work that’s important and valuable and moves things forward for everyone, because it’s important and valuable to everyone, not because it’s flashy, or “level appropriate”.

If you do those things your visibility will go up. Your scope of influence will go up. You’ll see bigger pictures and see more opportunities. Your autonomy, mastery, and purpose will go up. You’ll be more satisfied. And, as a side effect, you might get a promotion.

by Leon Rosenshein

Radiating Information

According to the Hacker Ethic, per Steven Levy in Hackers, all information should be free. Stewart Brand anthropomorphized it and noted the tension between the cost of distributing information going down and the value of having the right information at the right time going up. He said

It seems like there’s a couple of interesting paradoxes we’re working with here. That’s why I’m especially interested in what Bob Wallace has done with PC-WRITE and what Andrew Flugelman did before that with PC-TALK. On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.

So now you know where the idea that information should be free comes from.

The thing is, the information might want to be free, and there are those that will seek out the information when they feel they need it, but that doesn’t say anything about people having the information they need when they need it. Because regardless of what the information might want, it doesn’t have the ability to go out and find people.

On the other hand, we, as developers of systems and platforms, do have access to lots of information that wants to be free. And it’s up to us to make sure that information gets where it wants to be.

You’ve probably seen an error message like this

That’s not very helpful, is it. And the worst part is that the person who made that dialog box show up almost certainly knew more about the problem. They just chose to hide the information.
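As a sketch of what passing the information along looks like in code (the function and path here are made up for illustration), compare swallowing the cause with surfacing it:

```python
# Hypothetical sketch: instead of hiding what we know behind a generic
# "An error occurred" dialog, pass along the details we actually have.
def save_config(path, data):
    try:
        with open(path, "w") as f:
            f.write(data)
    except OSError as e:
        # Bad:    raise RuntimeError("An error occurred.")
        # Better: include what failed, where, and what the user can do.
        raise RuntimeError(
            f"Couldn't save settings to {path}: {e.strerror}. "
            "Check that the directory exists and is writable."
        ) from e
```

The `from e` keeps the original exception chained, so the person debugging later still has the full story.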

That’s a pretty obvious example of information being hidden by not being shown. Another way important information gets hidden is by being drowned out by less important information. Let’s say you’ve got a service (like a Wordpress blog with a security buddy plugin) that sends you the status of a security scan every day at 2:00 AM. And every day it sends you the same list of 7 files that have been replaced by the overnight backup. How long do you think it will take before you either just ignore that email, come up with a rule in your email client to delete it, or at least shuffle it off to an ignored folder? Then, when one day the email says 10 files have been changed, you never see it, let alone do anything about it. In that case, the information is free and available, but it’s just sitting there waiting for someone (you) to notice and care.
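One way out of that trap is to filter the daily report against what’s expected and only send mail when something deviates. A minimal sketch, assuming a known-good baseline (the file names are invented for illustration):

```python
# Hypothetical sketch: suppress the routine nightly report and only alert
# when the scan finds something the backup doesn't explain.
BASELINE = {"backup.log", "cache.db", "index.dat"}  # files the backup always touches

def unexpected_changes(changed_files):
    """Return only the changes NOT explained by the nightly backup."""
    return set(changed_files) - BASELINE

def should_alert(changed_files):
    """Send mail only when something unexpected changed."""
    return bool(unexpected_changes(changed_files))
```

Now the silent days stay silent, and the one email you do get actually means something.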

What that’s really pointing at is the difference between data, information, knowledge, and wisdom. We’re drowning in data. The key is to take that data, distill out the information in it, and present it as knowledge to the user. I don’t need 100 emails from all of the services I care about every day telling me they’re working fine. An email that one of them crashed and has restarted is useful information, but that can happen for lots of transient reasons out of my control, like a power outage, network drop, or hardware failure. That’s useful knowledge and I’ll care about it soon.

(Almost) Nobody reads automated status emails. What I really want is the knowledge that the service has tried to restart multiple times in the past 5 minutes, and hasn’t been able to reach a healthy state. That’s knowledge I need to respond to NOW.
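That restart-clustering rule is simple enough to sketch. Here’s one way it might look (the class and thresholds are my own invention, not any particular monitoring tool’s API):

```python
# Hypothetical sketch: turn restart *data* into actionable *knowledge* by
# paging only when restarts cluster within a short window.
from collections import deque

class RestartMonitor:
    def __init__(self, max_restarts=3, window_seconds=300):
        self.max_restarts = max_restarts
        self.window = window_seconds
        self.restarts = deque()

    def record_restart(self, timestamp):
        """Record a restart; return True when it's time to page a human."""
        self.restarts.append(timestamp)
        # Drop restarts that have aged out of the window.
        while self.restarts and timestamp - self.restarts[0] > self.window:
            self.restarts.popleft()
        return len(self.restarts) >= self.max_restarts
```

A single restart after a power blip returns False and stays quiet; three restarts inside five minutes returns True and wakes someone up.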

The same applies to letting my users/customers know how things are going. They don’t want daily emails that everything is OK. But a red banner that shows up on every page of the website telling them about current problems/outages is knowledge they can use.

Or as my GPM Brad told me many moons ago, “Nobody reads the status email. I need to be alerted to the problem, so do that instead.” It doesn’t matter how free the information is or wants to be. What matters is that the people who need to know it do know it when they need to. The value of having the right information at the right time is incalculable. Even if the cost to deliver a banner or not deliver it, over the lifetime of a website, is essentially zero.

by Leon Rosenshein

Perspective

Perspective is important. Where you see things from changes what you see. Some things are visible from one perspective, but not visible from another. This applies not just to your physical location, but also to your mental location. Your experiences, your history, your expectations, and your biases all have a huge impact on not just what you see, but how you interpret what you see.

Consider this video of Saturn’s small army of moons moving around it. In the video the point of view (POV) is always focused on Saturn, and as you can tell from the fixed stars in the background, it’s always looking perpendicular to the plane of Saturn’s orbit around the Sun. In this case the moons move mostly across the image, and slightly above/below Saturn, which always remains centered in the image.

Saturn’s Moons

In contrast, this video of the planets in our solar system moving around the Sun starts out with the same POV, centered on the Sun, looking down from a point perpendicular to the ecliptic, the plane of the orbits around the Sun. It stays there until about 5 seconds into the video. Then the camera starts to move, staying at the same distance from the Sun. It moves down towards the ecliptic, goes through it, moves along it, then ends up doing some kind of rotation in the same plane as the ecliptic.

Solar System

That looks like a very different kind of motion, doesn’t it? The planets are moving around, above, below, and sideways. Or at least they seem to.

In fact, the motion is effectively the same in both videos. And it’s the same as the motion in the first 5 seconds of the solar system video. The smaller objects revolve around the much more massive object in the center of the video. What makes the second one so complex looking is the changing perspective of the viewer.

It’s the same thing in software development. You see things with your perspective. If you’re responsible for the storage of data, you see things as rows in a table or key/value pairs. If you’re responsible for dataflows you might see things as pipeline or processing nodes with branches, tees, and connectors. From a user interface perspective, you might see views (input forms and controls) and the models (the hidden processing), and the resulting views (graphs, tables, gauges, animations, etc). All of those views are correct, but they’re not the whole thing.

Just as in the parable of the Blind Men and an Elephant, we often see things from our limited perspective, and miss the bigger picture. The trick is to overcome the bias of our perspective without losing our perspective. To take advantage of what we see from our individual POVs and combine that with other people’s POVs. Like everything else in development, how much to focus on one POV or another is a balance.

And just as important as balancing the different technical perspectives, is ensuring that we include other perspectives. There’s the customer perspective. There’s the short term business perspective. There’s the long term perspective. Which all gets back to understanding why you’re doing whatever it is you’re doing. Which is the most important perspective of all.

by Leon Rosenshein

Three More Tells

A while back I talked about The Three Tells. They weren’t poker tells. Instead, it was tell them what you’re going to tell them, tell it to them, then tell them what you told them. I thought it was a good way to approach a presentation then, and I still think it is today. But that’s old news. Today I’m going to tell you about three different tells. The three tells of Test Driven Development (TDD).

You might be wondering how TDD is like a presentation. I’m going to tell you how TDD is just a different implementation of the same formula. Let me explain.

The 1st tell – Tell them what you’re going to tell them

The first thing you do in TDD is write a test. You expect it to fail, and it does. Nothing surprising there. But what you’re really doing is telling yourself, your team, and the compiler, how you expect things to work. You’re telling them what the code you’re going to write is going to do. You’re telling them how you expect the code to be used. How and when you expect it to fail. And if you do it really well, you’re telling the people who will eventually use the code why they want to use the code and how it will make things better for them.

The 2nd tell – Tell it to them

The next thing you do is write the code. You tell yourself, your team, and the compiler exactly how to do what you want to do. You make the tests pass. You go into as much detail as needed. You might outline it then fill in the details. You might take the most important path and write that first. You might do the obvious parts first since you know what you want to say. You might dive deep in one area first because that’s where you’re least clear. And just like working on a presentation, you do some refactoring and moving things around to make it flow better. Until you’ve got it down to where it does everything you want it to and nothing more.
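The first two tells can be sketched in a few lines. This is a made-up example (the `parse_port` function is mine, not from any particular codebase): the test, written first, states how the code is expected to behave, and the implementation then makes it pass.

```python
# Hypothetical sketch of the first two tells of TDD.
def parse_port(value):
    """Parse a TCP port from a string, rejecting anything out of range."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_parse_port():
    # The first tell: this test existed (and failed) before parse_port did.
    # It tells everyone how the code should work, and how it should fail.
    assert parse_port("8080") == 8080
    try:
        parse_port("99999")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range port")
```

The third tell is just keeping `test_parse_port` running on every change, forever.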

The 3rd Tell – Tell them what you told them

Once you’ve got the code written and the tests you wrote originally passing, you keep running the tests. As your understanding of the domain grows and you make changes. As others make changes to the same domain and other domains. As your understanding of the input data grows. You keep running the tests and reminding yourself, your team, and the compiler what you wanted the code to do. Reminding yourself why the code does what it does and why you wanted it to do it that way.

There you have it. TDD is just another presentation. Where you tell them what you’re going to tell them through your tests. Where you tell them what you want to tell them by writing the code that makes the tests pass. And where, finally, you tell them what you told them by running the tests over and over again on each change, making sure that they still pass.

by Leon Rosenshein

Games And Agency

I recently finished listening to the first chapter of Worlds Beyond Number. It’s a podcast that uses D&D rules to help define a world built for storytelling. Brennan Mulligan is the Dungeon Master (DM) and world builder. He’s responsible for (almost) all of the backstory, pacing, and events. He does an amazing job of not just building a world and making it feel alive, he also provides the context that the three live characters act within. He does it in a way that makes the people playing the characters, and the folks listening in along the way, feel like they have control of their destiny, even while knowing there is a lot they don’t know about the world and its limitations.

Another podcast I regularly listen to is Brian Marick’s Oddly Influenced. It’s all about how folks have taken the teachings and writings of other fields and applied them to software development. A recent episode was an interview with Jessica Kerr, where they discussed the book Games: Agency as Art. Or at least that was the reason they did the interview. The interview itself was about far more than games, agency, or art.

One of the things they talked about was, of course, agency. What it was, what it wasn’t, and where it came from. Although similar to what Daniel Pink talked about in Drive, it’s a slightly different take on the idea. It approaches it from how you craft or adjust the environment to help the player have agency. In games, the boundaries of the player’s agency come from the game designer (and the developer’s implementation of the designer’s vision). The designer provides the goals, which set the direction (purpose in Pink’s description), the player’s capabilities (Pink’s mastery), which define what the player can do, and the rules, which define what the player can’t do (the boundaries of Pink’s Autonomy). The thing about the kind of agency that a game designer can provide though, is that it has to be completely defined up front. The game has a beginning, a middle, and an end, and downloadable content aside, the designer gets no realtime feedback from the player, and has no ability to change the game after it ships.

A DM, on the other hand, such as Brennan in Worlds Beyond Number, has to do much of the same work as the game designer up front, but after that, the DM is right there in the world with the player, getting feedback and making adjustments (staying within the defined framework) that adapt and redefine both the world and the player’s agency. There’s a natural tension there, between maintaining the status quo and keeping the world operating per its rules, and ensuring that the players that inhabit the world are having a good time. Because if they’re not having a good time they’ll pick up their dice and go home. If they do that then the game is over. Even though it’s not technically player vs DM, if the players quit, the DM has clearly lost (whatever that means).

Which gets us to the Oddly Influenced part. It’s not a big stretch to think of an engineering manager (EM)/development lead as the DM in a world-building game. The EM has an initial set of goals that they want to see met. They have tools and capabilities they can provide to the team, compilers, platforms, compute and storage resources, consultants, and other teams (non-player characters in the D&D world). They also provide a set of constraints (rules) the team needs to work within. Deadlines and schedules. External regulatory requirements and internal processes that must be followed. Networks and the laws of physics. Just like a game designer or DM, they define the Goals, Capabilities, and Rules. They define the boundaries of the agency that the members of the team have.

And just like a designer or DM, a good EM uses those things as levers to provide not just agency, but purpose and fulfillment. Make the goals too difficult, or just arbitrarily add a rule that makes using a capability impossible and the team (or player) gets frustrated. Conversely, make the goals too easy or provide rewards arbitrarily, and there’s no challenge or growth. The team (or player) gets bored and finds something else to do.

Also like a DM, the EM has the immediate feedback from the team. Are goals being met? Is the team “enjoying” the journey? Are they getting ahead of the goals, or keeping the goals from being met? How can the environment (capabilities and rules) be changed to provide more fulfillment for the team while still incentivizing moving towards the goals?

The environment, however, is where things get more complicated for the EM. The game designer and the DM have complete control over the environment. That, unfortunately, isn’t so for the EM. The overall goals are given to the EM. And there’s not just one EM. Typically, not even one EM for any particular goal. The EM has to work with their partner EMs to reach the goals. Or adjust them so they can be met. Meanwhile the group of EMs is getting feedback from their managers and customers/users on the validity of what’s being built. You might even say the EMs are a team with some level of agency working within a framework of goals, capabilities, and rules.

Once again, it’s turtles all the way down. But at least at any given level, you’ve got another frame to view the situation through and to use to help make decisions so that, at that level, the designer/DM/EM knows what levers there are and the players/team knows what their level of agency is. Or, you could look at it the other way around. We’re all playing a game together. Someone else has defined the goals, capabilities, and rules. We can work with them, and each other, within and across the levels of the stack, to provide feedback to each other to jointly maximize goals met and enjoyment/fulfillment. Which is really the meta-goal.

by Leon Rosenshein

Shallow Hurry

I ran across the term shallow hurry the other day and it resonated deeply (no pun intended) with me. Shallow hurry means doing just what you’ve been doing, only doing it faster. The expectation is that you’ll get done sooner. And that might even be true. In the short term.

Typing faster will get your code written a little bit sooner, but there’s a natural limit on that. While there may be times when we’re actually limited by typing speed, that’s not usually the case. Not refactoring the code when you see a need/opportunity or not writing/running tests, on the other hand, will often get your code into production sooner. The first time. Sometimes the second time. And occasionally the third time. After that, not so much. You find you’re fighting the code. You’re spending a lot of time dealing with edge cases and weird constraints. You’re working harder, typing more and faster, and moving slower.

That’s an example of shallow hurry. It makes you faster in the moment, but long term, you’re slower. After the initial speedup, you spend more time avoiding problems than you do making forward progress. All the problems you pushed off until tomorrow are still there, and the shortcuts you took have added to that burden and made the problems interact in new, exciting, and damaging ways. So to make progress you need to keep finding corners to push the problems into. Or you bury them in another layer of abstraction, leaving the problem hidden under the covers to bite some unsuspecting maintainer in a few weeks/months.

There are lots of reasons why this might happen. Some are even existential. Back in the days of selling games in boxes on shelves in brick and mortar stores, 80%+ of sales happened between Thanksgiving and Christmas. If your box wasn’t on the shelves, you didn’t make the sale. If you missed too many sales you ran out of money, and that was the end. So, in that case, you take the chance, do the shallow hurry, and hope you get the chance to fix it.

On the other hand, most of the reasons aren’t nearly that existential. Instead, the drive for Shallow Hurry often comes from internal biases and misaligned incentives. Biases like sunk cost, anchoring, and overconfidence. You’ve made a choice and put some effort into it. You don’t want to admit you might have made a mistake. Others around you fixate on the proposed solution, and of course, you know you’ve got to solve one more small problem or write one simple function, and you’ll reach the goal. Just move a little faster because you’re almost there.

Add to that a typical incentive system. Heroes are rewarded for putting in extra effort, for executing on the plan and rescuing the project. At the same time, questioning the plan is seen as not being a team player, and discouraged. Include a deadline coming up that you’ve promised to meet, and you find folks doubling down on what they’re already doing. Do it more. Do it faster. Get to the end and get the prize. Just before disaster strikes.

Because what often happens is that you’ve pushed a mountain of problems out in front of you. You’ve managed to reach the goal, but as is often the case, what you’ve reached is an intermediate goal. So you look to take the next step, and find there isn’t one. You’ve backed yourself into a corner, and before you can move forward, you need to figure out a path forward. You might even need to change your goal, just like Mike Mulligan and His Steam Shovel.

Luckily, as easy as it is to slip into shallow hurry, it’s just as easy to recognize. When you find yourself avoiding even looking at options, you might be dealing with shallow hurry. When you start thinking about ways to spend a few more hours just trying random things to see how it works, you might be dealing with shallow hurry. And when you’re spending more and more time working on the same things, but the results aren’t changing, you’re probably dealing with shallow hurry.

And that’s the time to take a step back, look at what you’re doing, why you’re doing it, and ask yourself my favorite question. “What are you really trying to do here?”

by Leon Rosenshein

Lead With the Why, Not the Way

Taking a break from the book reviews, but sticking with the theme of software development being a social endeavor, there are many ways to get teams working on things together and doing them in a similar fashion. Some work better than others.

One of the best ways to make sure that everyone is working together and in a similar fashion is to work ensemble style. If the team is sitting together talking about and editing the same code at the same time then, by definition they’re working together and in a similar fashion. I’ve had really good experiences with this with small groups and small tasks, but folks I generally trust and respect have reported good results with larger groups and longer term projects. Seems like a good goal to strive for.

That said, that’s not always going to be possible. For any number of organizational and structural reasons, it often doesn’t make sense for an entire team to be working on the exact same thing at the exact same time. So how can you get everyone working together?

One way is by fiat. Lay down the law and demand that people do what you tell them, exactly when you tell them, and in the precise fashion you have ordained. That might work. Once or twice. As long as everything goes exactly as you expected. Sometimes that’s the right approach, but only sometimes, and not over the long term. If you stick with that method then pretty soon a situation will arise where your exact instructions cause things to stop, or perhaps make things worse. That’s probably not your desired outcome, so that’s not a good approach.

Or, you could go to the other extreme. Tell people that they should all work together and get things done, then walk away. Again, that might work a time or two, but pretty soon everyone is going to have their own interpretation of what it means and how they should be working. That ends up in one of two places. Chaos, with everyone doing what they think you mean, or someone deciding and enforcing, again by fiat, their will. That might be what you want, and result in a success or two, but long-term it still doesn’t work.

Which leads us back to autonomy, alignment, purpose, and urgency. You’ve got autonomy and urgency. How can you manage alignment and purpose? You can do that with what Alexis de Tocqueville called enlightened self-interest. The idea that you get the best results when people are working for a desired common goal, not just because it’s the goal, but also because it’s good for them. Said another way, like this entry’s title, Lead With the Why, Not the Way.

The best way to get a team working together, towards the same goal, in a similar fashion, is to help them understand why they want to do that. How it helps the team reach its goal and how it helps them reach their goals. It aligns intrinsic and extrinsic goals. It sets shared purpose. It gives people a reason to want to achieve the goals. It’s even more powerful when your why has multiple levels. It’s what the customer wants. It increases sales. It reduces support burden. It makes it easier to add the feature everyone wants to build.

And now that I think of it, I am talking about Governing the Commons. That’s all about how to set up a system. To give the people in the system the appropriate why’s so that they do what’s best for the system. Because in the long run, that’s also what’s best for themselves.

by Leon Rosenshein

How Buildings Learn

Now that I’ve written about Seeing Like A State, I want to talk about How Buildings Learn. How Buildings Learn is, in many ways, a counterpoint to Seeing Like a State. It also has a lot of relevance to software design.

How Buildings Learn starts from the premise that over-architecting is bad. That the best way to ensure longevity is to architect not only for the now, but also for the future. And then, when you get feedback, listen to it and adapt. It’s a very agile way of approaching building architecture.

Brand goes into some detail about how designing for specific constraints is limiting. Of course, that makes sense. Every time you optimize for one thing, you’re not optimizing for something else. Instead, what Brand recommends is simple designs that you can adjust as you learn the true usage patterns.

There are multiple examples where premature optimization of architecture has caused problems. Consider the Fuller Dome. If all you’re trying to do is minimize resource usage it’s a great idea. Or if you’re building in a zero gravity environment. If you’re not, then you end up with a lot of wasted space in the top/center of the dome. Other examples are Falling Water and Villa Savoye. Both are examples of form trumping function and causing problems later on.

Both construction and software use the term architecture, but does Brand’s approach really apply? After all, the Unix Way is all about being specific. Doing one thing and doing it well. Which is the opposite of what Brand proposes. Or is it? The Unix way is not just about doing one thing. It’s also about composability. Which is really what Brand is getting at. Build something that is easy to subdivide into parts that are composable. Build the parts that meet your current need. When things change or you learn you need more, adjust the parts to match the new understanding. That’s evolutionary architecture.

And that’s in direct contrast to pre-defined legibility. A state (or organization) is often looking for control and predictability. So instead of building something that could work and then adjusting it to fit the exact needs, it asks for detailed, involved, plans. And then it sticks with them, even if the reality on the ground shows problems (see Villa Savoye above).

Another, more software based approach, but still with an architectural basis, is The Cathedral and the Bazaar. In it, Raymond describes the differences between working from a centrally defined/controlled plan and working from a set of common goals. According to Raymond, the Bazaar will get you a superior result. He’s got more than a little evidence to prove it.

However, the model of starting with something adaptable and a set of common goals and then building the perfect building (or piece of software or really any other shared resource) comes with its own problems. Not the least of which is diffusion of responsibility. How you handle that issue is critical to having a good outcome when buildings (or code) learn. Anarchy is not the way to reach a solution that optimizes for what everyone is trying to get done.

Which leads right to Ostrom’s Governing the Commons. But that’s a topic for another day.

by Leon Rosenshein

Seeing Like a State

I was going to compare and contrast Scott’s Seeing Like A State with Brand’s How Buildings Learn, but when I went to find the link to what I wrote, I realized that How Buildings Learn is going to have to wait, because, somehow, I haven’t directly talked about Seeing Like A State. I have mentioned legibility though, which is directionally similar.

In Seeing Like a State, Scott talks about the tendency of the state, really any large organization, to want to be able to measure, record, and control a system. Making it measurable means making it possible to record it in a ledger. Also, the organization (state) has a model that is used to predict the future. If you combine the record of how things were, with the model of how things will be, it’s not a big leap to believing you can control the future by controlling the measurements. And if you’ve made that leap you get to feel good about things. You have predictability. The model tells you what to expect. You have agency. Your results are the inputs to the model, so you have direct control over the results.

Unfortunately, things almost never work out that way. Models are, at best, approximations, so their results are at best approximations of the real world. The measurements that go into the model are often approximations as well. And when they’re not, they’re samples taken at a specific point in time, with a specific context. You can guess what happens when you use approximations as inputs to a model that is itself an approximation. You get a prediction that sometimes bears some similarity to reality, but very often doesn’t. You often run into the cobra effect.
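A tiny numeric sketch, with entirely made-up numbers, shows how an approximate measurement fed into an approximate model can compound into a much larger error than either one alone:

```python
# Illustrative only: the process, slope, and error sizes are invented.
# Suppose the true process is y = 2.0 * x, but the model's slope
# estimate is 1.9 (a 5% model error), and the measurement of x
# comes in 5% low (a measurement error).
true_x = 100.0
measured_x = true_x * 0.95      # approximate measurement
predicted = 1.9 * measured_x    # approximate model
actual = 2.0 * true_x

error_pct = abs(predicted - actual) / actual * 100
print(f"prediction is {error_pct:.2f}% off")  # two 5% errors -> ~9.75% off
```

Two small, individually reasonable-looking approximations, and the prediction is already almost 10% wrong. Real systems stack many more layers of approximation than that.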

This applies to software development as much as it applies to government. As much as software development is about making complex systems out of tiny parts that each do one thing, it’s also a social activity. Just like organizations and states, you can’t predict the output of software development without recognizing that there are people involved, each with their own internal thoughts and motivations. And while those things are generally qualitatively knowable, until someone like Hari Seldon arrives and gives us psychohistory, it’s not going to be legible.

Which means that the key takeaway from Seeing Like A State is not that you can measure and predict the future, but that you can’t. Or at least, you can’t predict to the level of precision and accuracy that you think you can. But that doesn’t mean you shouldn’t measure, or that you shouldn’t use models to predict. It just means you need to be much more thoughtful about it. You need to work with the system, from the inside. It’s much more about Governing the Commons, than seeing like a state. But that, like How Buildings Learn, is a topic for another day.

by Leon Rosenshein

Code Coverage Is NOT useless

Mini rant today. There are lots of teams across the software industry that are called some variation of “Software Quality”. That’s a lovely term. It means different things to different people. There are (at least) two kinds of quality at play here. Internal software quality (ISQ) and external software quality (ESQ). ESQ is about correctness and suitability for the task at hand. ISQ is about the code itself, not whether or not it works as specified. Not all quality teams are responsible for both kinds of quality.

Furthermore, as much as people want it to mean that the team called “Software Quality” is responsible for ensuring that the entire org is building software with both internal and external quality, that isn’t the case. Those teams are not, and cannot be, responsible for what others do. After all, they’re not the ones writing the code. What they can, and generally do, do is define and promote good practices and, especially, point out places in the codebase where the code misses the mark.

There are two very important points in that last sentence. The first is that the quality team’s job is to identify where the code misses the mark, NOT which developers did. Code ownership is important, and people write the code, but it’s important to distinguish between problems with code and process and problems with people. That, however, is a topic for another time.

The other point, and where I’m going with today’s post, is the pointing out part. The quality team’s job is to point out, with comparable, if not truly objective values, how much ISQ the code has. There are lots of ways to do that. Things like cyclomatic complexity, lint/static analysis warnings, code sanitizer checks, or code coverage percentages. Those measures are very objective. There are X lint errors. Your tests execute Y percent of your codebase and cover Z percent of the branch decisions. And you can track those numbers over time. Are they getting closer to your goal or further? You can argue the value of all of those metrics, but they’re (relatively) easy to calculate, so they’re easy to report and track.
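To illustrate the “track those numbers over time” point, here’s a minimal sketch, with invented coverage snapshots and an invented 80% goal, of asking whether a metric is trending toward where you want it:

```python
# Hypothetical weekly branch-coverage snapshots (week -> percent).
# Both the history and the goal are made up for illustration.
history = {1: 62.0, 2: 64.5, 3: 63.8, 4: 67.1}
goal = 80.0

latest = history[max(history)]          # most recent snapshot
trend = latest - history[min(history)]  # change since the first snapshot
print(f"latest={latest}%  gap to goal={goal - latest:.1f}  trend={trend:+.1f}")
```

The individual numbers matter less than the direction: a positive trend closing the gap tells you the practices are working, even before you hit the goal.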

Which, finally, gets us to today’s rant. I ran across this article that says code coverage is a useless metric. I have a real problem with that. I’m more than happy to discuss the value of code coverage metrics with anyone. I know that you can have 100% code coverage and still have bugs. It’s easy to get to a fairly high percentage of code coverage that says nothing about correctness. In complex systems with significant amounts of emergent behavior it’s even harder to get correctness from low level unit tests. Just look at that article.
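To make the “100% coverage and still have bugs” point concrete, here’s a small sketch (the function and test are hypothetical) where a passing test executes every line, yet a real bug remains:

```python
def average(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

def test_average():
    # This single test executes every line of average(): 100% line coverage.
    assert average([2, 4]) == 3.0

test_average()
# And yet average([]) raises ZeroDivisionError.
# Full coverage, passing tests, real bug.
```

Coverage told us the code was exercised; it never claimed the code was correct. That’s exactly why it’s a signal, not a verdict.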

What bothers me most about that article is the click-baity title and the initial premise. It starts from “Because it’s possible for a bad (or at least uncaring) actor to get great coverage and not find bugs, coverage metrics are useless.” If you have that approach to management, you’re going to get what you measure. To me, code coverage is a signal. A signal you need to balance with all of the other signals. Letting one signal overpower all the others is hiding the truth. And like any useful signal, its absence is just as enlightening as its presence. If you have a test suite that you think fully exercises your API and there are large areas of code without coverage, why do you even have that code? If you really don’t need it, remove it. Maybe your domain breakdown is wrong and it belongs somewhere else? Should it be moved? If you find that there are swaths of code that are untestable because you can’t craft inputs that exercise them, do you need a refactor? Is this an opportunity for dependency injection?
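As a sketch of that last question, here’s a hypothetical before-and-after where injecting a dependency (the clock) turns a branch no unit test can reliably reach into one a test drives directly:

```python
import time

# Hard to cover: which branch runs depends on the real wall clock.
def is_expired_hardwired(deadline):
    return time.time() > deadline

# Testable: the clock is a parameter, with the real clock as the default.
def is_expired(deadline, now=time.time):
    return now() > deadline

# Tests can now exercise both outcomes deterministically.
assert is_expired(100, now=lambda: 200) is True
assert is_expired(200, now=lambda: 100) is False
```

Production code calls `is_expired(deadline)` and behaves exactly as before; only the tests pass a fake clock. The coverage gap didn’t lie, it pointed at a design problem.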

So the next time someone tells you that code coverage is a useless metric, maybe the problem isn’t the metric, it’s how they’re using code coverage. That’s an opportunity for education, and that’s always a good thing.