I was going to compare and contrast Scott’s Seeing Like A State with Brand’s How Buildings Learn, but when I went to find the link to what I wrote, I realized that How Buildings Learn is going to have to wait, because, somehow, I haven’t directly talked about Seeing Like A State. I have mentioned legibility though, which is directionally similar.
In Seeing Like a State, Scott describes the tendency of the state (really, any large organization) to want to measure, record, and control a system. Making it measurable means making it possible to record in a ledger. The organization also has a model it uses to predict the future. Combine the record of how things were with the model of how things will be, and it’s not a big leap to believing you can control the future by controlling the measurements. And once you’ve made that leap, you get to feel good about things. You have predictability: the model tells you what to expect. You have agency: your measurements are the inputs to the model, so you have direct control over its results.
Unfortunately, things almost never work out that way. Models are, at best, approximations, so their results are at best approximations of the real world. The measurements that go into the model are often approximations as well. And when they’re not, they’re samples taken at a specific point in time, in a specific context. You can guess what happens when you use approximations as inputs to a model that is itself an approximation: you get a prediction that sometimes bears some resemblance to reality, but very often doesn’t. You often run into the cobra effect.
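To make the compounding concrete, here is a toy numerical sketch (my own invention, not from Scott’s book): a “true” process, a simplified model of it, and a slightly mismeasured input. Both the process and the 5% measurement error are hypothetical, chosen only to show how the two approximations stack.

```python
def real_world(x):
    # The "true" process being predicted (hypothetical, for illustration).
    return x ** 1.1 + 3

def model(x):
    # The organization's simplified, linear approximation of that process.
    return 1.2 * x + 2

true_input = 10
measured = true_input * 0.95  # measurement off by 5% (assumed error)

prediction = model(measured)   # what the ledger-plus-model says will happen
truth = real_world(true_input) # what actually happens

print(f"prediction: {prediction:.2f}")
print(f"reality:    {truth:.2f}")
```

Even in this tiny example, the model’s structural error and the measurement error each shift the prediction, and neither shift is visible from inside the model itself.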
This applies to software development as much as it applies to government. As much as software development is about building complex systems out of tiny parts that each do one thing, it’s also a social activity. Just as with organizations and states, you can’t predict the output of software development without accounting for the people involved, with their own internal thoughts and motivations. And while those things are generally qualitatively knowable, until someone like Hari Seldon arrives and gives us psychohistory, they’re not going to be legible.
Which means that the key takeaway from Seeing Like A State is not that you can measure and predict the future, but that you can’t. Or at least, you can’t predict with the precision and accuracy you think you can. But that doesn’t mean you shouldn’t measure, or that you shouldn’t use models to predict. It just means you need to be much more thoughtful about it. You need to work with the system, from the inside. It’s much more about Governing the Commons than about seeing like a state. But that, like How Buildings Learn, is a topic for another day.