The Measurement Problem

OK, this is going to be a very long post. About something I don’t pretend to be expert in. But it is science, at least.

A couple of weeks ago, Radio 4’s highbrow “In Our Time” tackled the so-called “Measurement Problem”. That is: quantum mechanics predicts probabilities, not definite outcomes. And yet we see a definite world. Whenever we look, a particle is in a particular place. A cat is either alive or dead, in Schrödinger’s infamous example. So, lots to explain in just setting up the problem, and even more in the various attempts so far to solve it (none quite satisfactory). This is especially difficult because the measurement problem is, I think, unique in physics: quantum mechanics appears to be completely true and experimentally verified, without contradiction so far. And yet it seems incomplete: the “problem” arises because the equations of quantum mechanics only provide a recipe for the calculation of probabilities, but don’t seem to explain what’s going on underneath. For that, we need to add a layer of interpretation on top. Melvyn Bragg had three physicists down to the BBC studios, each with his own idea of what that layer might look like.
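(For concreteness, here is a minimal sketch, with amplitudes made up purely for illustration, of what that recipe amounts to: a two-state wavefunction, the Born rule turning its amplitudes into probabilities, and individual measurements that nonetheless each come out one definite way.)

    import numpy as np

    # A toy two-state "cat": a superposition of |alive> and |dead>.
    # The amplitudes are arbitrary illustrative numbers, normalized below.
    psi = np.array([1.0 + 0.0j, 0.5j])      # amplitudes for [alive, dead]
    psi = psi / np.linalg.norm(psi)          # normalize so probabilities sum to 1

    # Born rule: the probability of each outcome is |amplitude|^2.
    probs = np.abs(psi) ** 2
    print("P(alive), P(dead):", probs)       # ~0.8 and ~0.2

    # Yet any single look at the cat gives one definite answer:
    rng = np.random.default_rng(0)
    print(rng.choice(["alive", "dead"], size=10, p=probs))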

Unfortunately, the broadcast seemed to me a bit of a shambles: the first long explanation of quantum mechanics, by Basil Hiley of Birkbeck, used the terms “wavefunction” and “linear superposition” without even an attempt at a definition. Things got a bit better as Bragg tried to tease things out, but I can’t imagine the non-physicists who were still listening got much out of it. Hiley himself worked with David Bohm on one possible solution to the measurement problem, the so-called “Pilot Wave Theory” (another term which was used a few times without definition), in which quantum mechanics is actually a deterministic theory: the probabilities come about because there is information about the locations and trajectories of particles to which we do not, and in principle cannot, have access.
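(A toy illustration of that idea, not anything from the broadcast: in the one textbook case where the pilot-wave trajectories have a simple closed form, a free Gaussian wave packet with ħ = m = 1, each particle follows a perfectly deterministic path, and the quantum statistics emerge only because we don’t know which initial position it started from.)

    import numpy as np

    # Free Gaussian wave packet, initially at rest with width sigma0 (hbar = m = 1).
    # In this special case the Bohmian (pilot-wave) trajectories are known exactly:
    #     x(t) = x(0) * sigma(t) / sigma0,  with  sigma(t) = sigma0 * sqrt(1 + (t / (2*sigma0**2))**2)
    sigma0, t = 1.0, 3.0
    sigma_t = sigma0 * np.sqrt(1.0 + (t / (2.0 * sigma0**2))**2)

    # The "hidden" information: initial positions, distributed as |psi(x,0)|^2.
    rng = np.random.default_rng(1)
    x0 = rng.normal(0.0, sigma0, size=100_000)

    # Each particle then evolves deterministically along its guided trajectory...
    xt = x0 * sigma_t / sigma0

    # ...and the ensemble at time t reproduces |psi(x,t)|^2, a Gaussian of width sigma(t).
    print("predicted spread:", sigma_t, "  empirical spread:", xt.std())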

Roger Penrose proved to be remarkably positivist in his outlook: he didn’t like the other interpretations on offer simply because they make no predictions beyond standard quantum mechanics and are therefore untestable. (Others see this as a selling point for these interpretations, however — there is no contradiction with experiment!) To the extent I understand his position, Penrose himself prefers the idea that quantum mechanics is actually incomplete, and that when it is finally reconciled with General Relativity (in a Theory of Everything or otherwise), we will find that it actually does make specific, testable predictions.

There was a long discussion by Simon Saunders of that sexiest of interpretations of quantum mechanics, the Many Worlds Interpretation. The latest incarnation of Many-Worlds theory is centered around workers in or near Oxford: Saunders himself, David Wallace and most famously David Deutsch. The Many-Worlds interpretation (also known as the Everett Interpretation after its initial proponent) attempts to solve the problem by saying that there is nothing special about measurement at all: the simple equations of quantum mechanics always obtain. For this to work, all possible outcomes of any experiment must be actualized: that is, there must be a world for each outcome. And we’re not just talking about the outcomes of science experiments here, but about any time that quantum mechanics could have predicted something other than what (seemingly) actually happened. Which is all the time, for all of the particles in the Universe, everywhere. This is, to say the least, “ontologically extravagant”. Moreover, it has always been plagued by at least one fundamental problem: what, exactly, is the status of probability in the many-worlds view? When more than one quantum-mechanical possibility presents itself, each splits into its own world, with a probability related to the aforementioned wavefunction. But what beyond this does it mean for one branch to have a higher probability? The Oxonian many-worlders have tried to use decision theory to reconcile this with the prescriptions of quantum mechanics: from very minimal requirements of rationality alone, can we derive the probability rule? They claim to have done so, and they further claim that their proof only makes sense in the Many-Worlds picture. This is, roughly, because only in the Everett picture is there no “fact of the matter” at all about what actually happens in a quantum outcome; in all other interpretations the very existence of a single actual outcome is enough to scupper the proof. (I’m not so sure I buy this: surely we are allowed to base rational decisions on only the information at hand, as opposed to all of the information potentially available?)
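(A toy numerical illustration of why probability is puzzling here, with amplitudes chosen arbitrarily, and nothing like the Deutsch-Wallace argument itself: after an unequal split both branches simply exist, so naive branch counting says 50/50, whereas the Born rule, the very thing the decision-theoretic programme is meant to recover, weights each branch by the square of its amplitude.)

    import numpy as np

    # One "measurement" splits the world into two branches, with amplitudes
    # chosen arbitrarily here: sqrt(2/3) for outcome A, sqrt(1/3) for outcome B.
    amps = np.array([np.sqrt(2/3), np.sqrt(1/3)])

    # Naive branch counting: both branches exist, so each counts equally.
    counting = np.full(len(amps), 1 / len(amps))

    # Born rule: weight each branch by |amplitude|^2 (the rule the
    # decision-theoretic argument aims to derive from rationality alone).
    born = np.abs(amps) ** 2

    print("branch counting:", counting)   # [0.5   0.5  ]
    print("Born weights:   ", born)       # [0.667 0.333]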

At bottom, these interpretations of quantum mechanics (aka solutions to the measurement problem) are trying to come to grips with the fact that quantum mechanics seems to be fundamentally about probability, rather than the way things actually are. And, as I’ve discussed elsewhere, time and time again, probability is about our states of knowledge, not the world. But we are justly uncomfortable with 70s-style “Tao-of-Physics” ideas that make silly links between consciousness and the world at large.

But there is an interpretation that takes subjective probability seriously without resorting to the extravagance of many (very, very many) worlds. Chris Fuchs, along with his collaborators Carlton Caves and Ruediger Schack, has pursued this idea with some success. Whereas the many-worlds interpretation requires a universe that seems far too full for me, the Bayesian interpretation is somewhat underdetermined: there is a level of being that is, literally, unspeakable; there is no information to be had about the quantum realm beyond our experimental results. This is, as Fuchs points out, a very strong restriction on how we can assign probabilities to events in the world. But I admit some dissatisfaction with the explanatory power of the underlying physics at this point (discussed in some technical detail in a review by yet another Oxford philosopher of science, Christopher Timpson).

In both the Bayesian and Many Worlds interpretations (at least in the modern versions of the latter), probability is supposed to be completely subjective, as it should be. But something still seems to be missing: probability assignments are, in fact, testable, using techniques such as Bayesian model selection. What does it mean, in the purely subjective interpretation, to be correct, or at least more correct? Sometimes, this is couched in terms of David Lewis’ “principal principle” (it’s very hard to find a good distillation of this on the web, but here’s a try): there is something out there called “objective chance”, and our subjective probabilities are meant to track it. (I am not sure this is coherent, and even Lewis himself usually gave the example of a coin toss, in which there is nothing objective at all about the chance involved: if you know the initial conditions of the coin and the way it is flipped and caught, you can predict the outcome with certainty.) But something at least vaguely objective seems to be going on in quantum mechanics: more probable outcomes happen more often, at least for the probability assignments that physicists make given what we know about our experiments. This isn’t quite “objective chance” perhaps, but it’s not clear that there isn’t another layer of physics still to be understood.
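(What “testable” means here, in a minimal sketch with numbers invented for the purpose: two rival subjective assignments for the same outcome, repeated trials, and a Bayes factor telling you which assignment the data favour.)

    import numpy as np

    # Two rival probability assignments for the same measurement outcome
    # (numbers invented for illustration): one agent says p = 1/2, the other p = 2/3.
    p_a, p_b = 0.5, 2.0 / 3.0

    # Simulate repeated runs; suppose the outcome frequencies actually track p = 2/3.
    rng = np.random.default_rng(42)
    n = 500
    k = rng.binomial(n, 2.0 / 3.0)

    # Compare the assignments by how well each predicts the data:
    # the log-likelihood ratio, i.e. a Bayes factor between two point hypotheses.
    def log_like(p):
        return k * np.log(p) + (n - k) * np.log(1 - p)

    print(f"{k}/{n} successes; log Bayes factor (2/3 vs 1/2): {log_like(p_b) - log_like(p_a):.2f}")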