Free will and program execution
A computational metaphor for free will in a deterministic universe
I have a subjective experience of free will, as do most people I have asked about it. We seem to generally experience our choices as voluntary and under our control, not as somehow determined by the universe. However, that subjective experience appears to contradict what we know from physics. There are whole books that explain why, but here’s a summary of the reasoning.
Each person’s subjective experience appears to be produced by the activity of that person’s brain. Although we don’t yet have a complete understanding of the connections, in principle all of our brain activity is explainable as a combination of electrical and chemical interactions. That explanation suggests that all of our actions are completely determined, and appears to leave no room for free will.
Even more weirdly, experiments with brain probes suggest that neuron firings related to our actions – voluntary muscle movements and the like – precede neuron firings related to our consciousness. Although my subjective experience is that I decide to (say) raise a finger, the electrical activity in the brain instead suggests that I raise my finger and then, slightly afterward, construct an explanatory mental illusion of choosing to act. Is free will only an illusion?
Taking a computational perspective
Looking at computation offers a novel perspective on this question. Although no one would describe an executing program as having free will, it nevertheless has aspects of choice and unpredictability. Those aspects coexist with the relentlessly predictable behavior of the underlying computer.
The execution of a program is a completely rule-driven, mechanical, step-taking process. We don’t expect the step-taking machinery to improvise or engage in artistic excursions. Indeed, for very small, simple programs, we can predict exactly what they will do when they execute. However, it doesn’t require a very large or very complex program before we lose that predictive ability.
It may be surprising to learn that we can’t predict what a program does, but a simple example suffices. Consider a program that reads the current date and prints “Hello” on even-numbered days and “Goodbye” on odd-numbered days. Although at one level the program’s behavior is easy to understand and completely constrained, it is already unpredictable. Suppose that someone demands to know in advance exactly what the program will print when it runs. We can only characterize it in terms of the two mutually exclusive possible results, unless we know something more about exactly when the program will be run. Our best available answer is the wishy-washy, “it either prints ‘Hello’ or it prints ‘Goodbye.’” To anyone who might be frustrated by that summary, we can only say that there is no better answer available. The problem isn’t that we’re uncooperative – it’s that we just don’t know in advance.
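The hello/goodbye program can be sketched in a few lines of Python (the language choice and the particular day-of-month check are mine, not anything specified above):

```python
from datetime import date

def greet(today=None):
    """Return "Hello" on even-numbered days, "Goodbye" on odd ones."""
    day = (today or date.today()).day
    if day % 2 == 0:
        return "Hello"
    else:
        return "Goodbye"

# Until we know exactly when the program will run, the best available
# prediction is the two-way summary: it prints either "Hello" or "Goodbye".
print(greet())
```

Every step here is completely rule-driven, yet the output on some future run is still unknowable without knowing the run date.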
But wait, it gets worse!
Depending on the nature of the program, sometimes our ability to predict is even weaker. In some situations, the only way we can determine the result is by executing the program. With the hello/goodbye program, we saw that we have to know the input data to make a prediction; in other cases, either it’s impossible to know the input data in advance, or knowing it still doesn’t let us predict accurately.
Some computations are hard to predict because they depend on real-time data – data that is just becoming available at the same time as it is consumed by the computation. If we want to, we can think of the date in the hello/goodbye program as a simple and slow-changing kind of real-time data – but in that specific case, it’s not very challenging to figure out what the input data will be at any given instant. In contrast, there are programs that deal with measurements of complex ongoing processes (weather, rivers, car engines, assembly lines) where it’s literally impossible to know what the next set of readings will be.
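As a minimal sketch of such a program (the scenario and function names are mine, with a random-number generator standing in for a real sensor feed), consider monitoring a river level:

```python
import random

def river_level_alarm(readings, threshold=5.0):
    """Return "ALARM" if any reading exceeds the threshold, else "OK"."""
    # In a real system, each reading would arrive from a sensor only at
    # the instant it is consumed -- there is no way to know it earlier.
    for level in readings:
        if level > threshold:
            return "ALARM"
    return "OK"

# Simulated real-time feed: ten readings that don't exist until drawn.
simulated_feed = (random.uniform(0.0, 6.0) for _ in range(10))
print(river_level_alarm(simulated_feed))
```

The logic is trivial and fully determined, but because the readings come into existence only as the computation consumes them, no one can say in advance which of the two results will occur.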
There are also programs that have much more complex decision structures than the hello/goodbye program. Sometimes that sequencing and decision structure (which computer scientists refer to as control flow) is in itself complex enough that the program’s result is hard to predict, even if the data is all simple and known in advance.
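A classic illustration of this (my choice of example, not one mentioned above) is the Collatz procedure: the input is a single known number and the rules fit in a few lines, yet no one knows how to predict in general how many steps it takes to reach 1, short of actually running it:

```python
def collatz_steps(n):
    """Count steps of the 3n+1 rule until n reaches 1."""
    steps = 0
    while n != 1:
        # Halve even numbers; triple-and-add-one for odd numbers.
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# All the input data is known in advance, yet the step counts jump
# around erratically: 26 takes 10 steps, while 27 takes 111.
print(collatz_steps(27))  # prints 111
```

Here the unpredictability lives entirely in the control flow: the data is a small integer, fully known before execution begins.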
For any of these complex programs, whether the complexity is in the control flow or in the data or both, there is basically no way to predict the computation’s result. We could try to run an additional copy of the computation alongside the “real” one, but that gives no advantage: we still won’t get the answer any sooner, and we’ll just use twice as many computing resources.
What if we have other kinds of knowledge about the program? For example, perhaps we have a collection of actually-performed executions that have happened in the past. Or perhaps we can characterize the set of possible executions that will happen in the future. Even if we have one or both kinds of information available to us, that may not help us predict exactly what will happen on the next execution of the program.
After all, whatever is genuinely predictable is typically redundant. For example, if we can tell that our program will always add 2 and 2, we don’t need to actually perform that computation each time the program runs – we can just substitute the value 4 as the result. As a bonus, such a substitution will also save time compared to the prior implementation. To invert this observation, whatever can’t be substituted away is not predictable.
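Compilers apply exactly this reasoning under the name “constant folding.” A small sketch (the function names are mine) shows the substitution:

```python
def greeting_v1():
    # The expression 2 + 2 produces the same value on every run...
    count = 2 + 2
    return f"{count} greetings"

def greeting_v2():
    # ...so the known result can be substituted directly, skipping
    # the computation entirely. This is constant folding.
    return "4 greetings"

# The two versions are indistinguishable to any caller.
print(greeting_v1() == greeting_v2())  # prints True
```

Whatever survives this kind of substitution – whatever must genuinely be computed at run time – is exactly the part we cannot predict.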
Unpredictable, but without free will
So we can see that a program is an intriguing combination. It completely determines the steps and choices that will be taken in execution, and yet its behavior when executed is unpredictable in pretty much every case beyond the trivial. There are multiple sources of unpredictability in the execution of the program: the program may be affected by its environment (data read or written), or the program might even be computing its future control structures by reading and writing that data.
Despite all of that potential unpredictability, there is no free will anywhere. Whether we look at the underlying machinery or at the program itself, there is no freedom, no willfulness, no idiosyncratic choice to be found anywhere.
Few, if any, people would think of themselves as being programs. But we can ask what it is like to be an executing program: does the program “know” its eventual conclusion? Surely not, because there would be no point in executing if the result were known in advance. The program’s experience of execution must instead be that each step’s result is revealed only as a result of taking that step.
Likewise, for each data item read, for each decision made, does the program “know” in advance what will happen? Surely not, because again the program would not need to bother with actually carrying it out if it were known in advance. From our broader perspective, we can see the program as completely determined by circumstances. However, if we take the point of view of this imaginary program, it seems plausible that the program “feels” that it is making each of its choices via an exercise of will.
But wait, it gets worse (again)!
Curiously, a program is even less predictable than this description would suggest. There are subtle theoretical limits on what we can know about a program’s future behavior. Our simple hello/goodbye program is only a short distance away from another kind of two-way choice program. Instead of a program that chooses to say hello or goodbye, we can instead have a program that tells us whether another program halts or runs forever. That gives us the ingredients for a provably-unsolvable construction that computer scientists call the halting problem, which is sketched in chapter 7 of Bits to Bitcoin. Rather than repeat that sketch here, we’ll just note the deeply surprising result. Although it’s easy to imagine a program that can tell you what another program does, this particular kind of program is impossible to construct: when you work the logic through, it embodies an inherent contradiction.
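The shape of that contradiction can be sketched in Python. Assume, for the sake of argument, a halts(prog, data) oracle; the stand-in body below is deliberately fake, since the whole point is that no correct body can exist:

```python
def halts(prog, data):
    # Hypothetical oracle: claims to report whether prog(data) would
    # eventually halt. Any concrete implementation must be wrong;
    # this stand-in simply answers True for everything.
    return True

def troublemaker(prog):
    # Do the opposite of whatever the oracle predicts about
    # running prog on its own text.
    if halts(prog, prog):
        while True:   # oracle said "halts" -- so loop forever
            pass
    return            # oracle said "loops" -- so halt immediately

# Feed troublemaker to itself. If the oracle answers True, troublemaker
# loops forever; if it answers False, troublemaker halts. Either way the
# oracle is wrong, so no correct halts() can ever be constructed.
print(halts(troublemaker, troublemaker))
```

Note that the code only asks the oracle about troublemaker; actually calling troublemaker(troublemaker) with a True-answering oracle would, fittingly, never return.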
Why does that matter? If the halting problem is unsolvable, there are questions in computing that can be clearly expressed but which are nevertheless inherently unanswerable. This discovery is sobering. In computing, we are not constrained by the many ways in which we don’t understand brains and minds; instead, we have constructed all the layers of our computing universe to be well-behaved, well-understood, and orderly. Nevertheless, we discover that there are areas that are off-limits – not because they are too complex for us to understand, but essentially because the universe happens to work in a way that we find surprising.
Summing up
Programs make a thought-provoking metaphor for the apparent paradox of free will in a deterministic universe. Programs are both completely determined and largely unpredictable, just as our actions are both apparently determined by physics and also apparently subject to our will. Perhaps each of us is like the imaginary executing program, taking the steps of our life. We see each such step as subject to our personal determination. Nevertheless, the physics of the universe operates as relentless rule-driven machinery, with no concern for our local perspective. And seen from that perspective of the physical universe, there is no free will.