Beware Of Bugs
“Beware of bugs in the above code; I have only proved it correct, not tried it.” – Donald Knuth
What does that statement even mean? Assuming the proof is correct, and I have no reason to doubt it, how can something be proven correct and still have bugs? There’s a lot to unpack in that statement.
First of all, how do you prove something is correct? There are formal/logical proofs that start with axioms and then prove a statement based on those axioms. That sort of works, but you've only proved the statement is correct if the axioms themselves are true. Or you could use symbolic logic and a set of requirements/constraints and something like Leslie Lamport's TLA+ to prove that your logic and sequences are correct. Those are at least two ways to make a formal proof, and there are other similar tools and mechanisms.
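To make "prove it from axioms" concrete, here's a minimal sketch in Lean 4 (a made-up toy function, not anything from TLA+; it assumes a recent Lean with the `omega` tactic). The theorems really do guarantee that `myMax` meets these two properties, but notice what they don't cover: whether these properties are the right spec, and whether the compiled binary behaves like the source.

```lean
-- A toy function we want to "prove correct".
def myMax (a b : Nat) : Nat := if a ≤ b then b else a

-- The "spec": the result is at least as large as each input.
-- `split` cases on the `if`; `omega` closes each arithmetic goal.
theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax; split <;> omega

theorem myMax_ge_right (a b : Nat) : b ≤ myMax a b := by
  unfold myMax; split <;> omega
```

Even here the proof is only as good as the spec: a `myMax` that always returned `a + b` would also satisfy both theorems.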
But even if you have that formal proof, have you really proved there are no bugs? No, you haven't. And there are even more reasons why not than there are ways to make a formal proof.
The biggest reason is that any proof you have compares some set of requirements to some specific logical representation of an algorithm. Once you have that, you make two huge assumptions. First, you assume that the requirements you're using for the proof are an exact and complete representation of the problem at hand. Second, you assume that the logical representation of the algorithm is a complete and accurate representation of the code that is, was, or will be written. Both of those assumptions are fraught with peril.
Reality has shown us, time and time again, that our initial understanding of a problem is at best incomplete, if not wrong. There are always things we learn. As the Agile Maxims tell us,
It is in the doing of the work that we discover the work that we must do. Doing exposes reality.
We don't know what we don't know until we find out we don't know it. Which means any proof of correctness you've done is incomplete. So your proof doesn't prove anything, or at least doesn't prove that your code does what it needs to do.
You also don't know that any code written is functionally equivalent to the logic in the proof you used. It's translation on top of translation, and language translations are notoriously flaky. So we write unit tests to help increase our confidence that we've matched the requirements and expectations. And they do. But they can't prove there are no bugs. All they can do is help us get our confidence high enough.
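A small, hypothetical example of why tests raise confidence without proving anything. This `median` has a bug the test suite happens never to reach:

```python
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    # Bug: for an even-length list this returns the upper-middle
    # element instead of the mean of the two middle elements.
    return ordered[len(ordered) // 2]

# These tests all pass, so our confidence goes up...
assert median([5]) == 5
assert median([3, 1, 2]) == 2
assert median([9, 7, 1, 3, 5]) == 5

# ...but no finite set of tests proves there are no bugs:
# median([1, 2, 3, 4]) returns 3, not the expected 2.5.
```

Every green test run tells you only that the inputs you tried behaved; the untried inputs are where the bugs live.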
But even that isn't enough. You could have perfect knowledge of the requirements. Create a perfect logical representation of the code. Perfectly translate that model into code. And then a stray neutrino hits the CPU at just the right (wrong?) time and the answer comes out wrong. Or you could have one of the Intel chips we had in our Bing Maps datacenter. It couldn't compute the inverse cosecant. Well, it could, in that it gave you an answer, but the answer was wrong. Eventually we got a firmware update, and at least then it stopped lying to us and just gave an error if you tried to use the broken function.
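The only defense against hardware that lies is checking it from the outside. A hypothetical round-trip check of the kind that would catch a broken inverse cosecant (this isn't the actual Bing Maps diagnostic, just a sketch using the identity acsc(x) = asin(1/x)):

```python
import math

def acsc(x):
    """Inverse cosecant via acsc(x) = asin(1/x), valid for |x| >= 1."""
    return math.asin(1.0 / x)

# Round-trip sanity check: csc(acsc(x)) should give x back.
# A chip returning a plausible but wrong angle fails this immediately.
for x in (1.0, 2.0, 10.0, -3.0):
    angle = acsc(x)
    csc_of_angle = 1.0 / math.sin(angle)
    assert math.isclose(csc_of_angle, x, rel_tol=1e-9)
```

Of course, this check runs on the same CPU it's auditing, which is exactly the trusting-trust problem in miniature.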
As bad as that is, it could be even worse. Almost 40 years ago, in 1984, Ken Thompson published his Turing Award lecture, "Reflections on Trusting Trust." There's a long chain of hardware, tools, and libraries between the code you write and the program a user eventually runs. He described a compiler that looked at what it was compiling. If it was compiling the compiler, it put a copy of the trick into the new compiler. If it was compiling the login program, it added a hidden way for him to log in. With that compiler you'd never know what it was really doing. It was mostly doing what you wanted, but sometimes it did more. A hidden login is relatively benign, but if a compiler can do that, what else can it do? Introducing bugs would be easy if you've already gotten that far.
Which brings us back to the opening quote. You can prove something is correct, but you can't be sure that it will work and that there are no bugs. But that doesn't mean things are hopeless, and we shouldn't just give up. There are lots of things we can and should do, from software supply chain management to hermetic builds to exploratory testing. All of them increase our confidence that things are correct. And we should never stop doing that.