Foundations of randomness, remainder of Day 1

Today I will cover the remaining talks from the first day of the workshop. Valerio’s opening talk was followed by a stimulating blackboard talk by Renato Renner, who asked the question “Is the existence of randomness an axiom of quantum physics?” Randomness is usually viewed as entering quantum mechanics through the Born rule, which is stated as an axiom describing the outcome distribution of any given measurement on a quantum state; there is no standard derivation of the rule from other axioms. This is deeply unsatisfying, as the rule appears completely arbitrary, and indeed many attempts have been made at deriving it from more basic structural constraints, such as the Hilbert space formalism underlying Gleason’s theorem, or assumptions based on decision theory (Deutsch) or logic (Finkelstein).

Renato and his student Daniela Frauchiger have a proposal whose main thrust is to demarcate what could be called the “physical substrate” of a given theory (such as quantum mechanics) from its “information-theoretic substrate”. In short, the physical substrate of a theory governs which experiments are “physical”, or possible; it is a completely deterministic set of rules. The information-theoretic substrate provides an additional layer which considers assignments of probabilities to the different experiments allowed by the physical substrate, and specifies rules that these probabilities should obey.

Let me try to give a little more detail, as much as I can recall from the talk. Let’s call the framework suggested by Renato the “FR proposal”. The first component in the proposal is a definition of the physical substrate of a theory. In the FR proposal a theory is concerned with objects called stories: a story is any finite statement describing an experiment, where “experiment” should be interpreted very loosely: essentially, any sequence of events with a set-up, evolution, and outcome. For instance, a story could be that a pendulum is initialized in a certain position, left to swing freely, and after time {t} is observed and found to be in a certain position. Or it could specify that a quantum particle is initialized in a certain state, say {\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})}, then measured in the computational basis {\{\ket{0},\ket{1}\}}, and that the outcome {\ket{0}} is observed. A story thus refers to a physical system and makes claims about observable quantities of the system.

The physical substrate of the theory, then, is a set of rules that completely determine which stories are “physical”, i.e. allowed by the theory, and which are not. For instance, the second story described above is a valid story of quantum mechanics: it is certainly possible that the outcome {\ket{0}} is obtained when the measurement described is performed. On the other hand quantum mechanics would rule out a story such as “the state {\ket{0}} is measured in the computational basis and the outcome {\ket{1}} is obtained”: this is simply not allowed. Note that up to this point the theory makes no claim at all on the “probability” of a story occurring; it just states whether the story is possible or not.

So much for the physical substrate. How do probabilities come in? This is the goal of the information-theoretic substrate of the theory. To introduce probabilities we consider mappings from stories to {[0,1]}. Any such mapping can be thought of as a measure of an observer’s state of knowledge, or uncertainty, about any particular story (much more on this and other interpretations of probabilities will be discussed in later talks, such as Carl Hoefer’s coming up next). A priori the mapping could be arbitrary, and the role of the information-theoretic substrate of the theory is to specify which mappings are allowed, or “reasonable”. For instance, in the measurement example from earlier most observers would assign that particular story a probability of {50\%}, which is what the Born rule predicts. But without taking the rule for granted, on what basis could we assert that {50\%} is the only “reasonable” probability that can be assigned to the story — why not {10\%}, or even {100\%}?
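As a quick numerical aside (my own sketch, not part of the talk), the probability assignment that the Born rule makes for the measurement story above can be computed directly from the amplitudes:

```python
import math

# The state |+> = (|0> + |1>)/sqrt(2), as amplitudes in the computational basis
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Born rule: Pr(outcome k) = |<k|psi>|^2 for the basis states |0> and |1>
probs = [abs(a) ** 2 for a in plus]
# Both entries equal 1/2 (up to rounding): the 50% most observers would assign
```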

The main claim that Renato put forward is that the Born rule can be derived as a consequence of the following two broad postulates: (I apologize for the description of the postulates being rather vague; there are points about the formulation that I am unclear about myself!)

  1. The repetition postulate: experiments can be repeated, and if so lead to the same outcomes.
  2. The symmetry postulate: certain experiments have symmetries under which the outcome of the experiment is invariant. For example, an experiment which involves flipping three different coins in sequence and reporting the majority outcome is not affected by the order in which the coins are flipped.
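The coin example in the symmetry postulate can be checked mechanically; here is a toy sketch (my own, with a hypothetical `majority` helper) verifying that the reported outcome is invariant under reordering the flips:

```python
from itertools import permutations

def majority(flips):
    # Report the majority outcome of three coin flips (1 = heads, 0 = tails)
    return 1 if sum(flips) >= 2 else 0

# Reordering the flips never changes the reported outcome
flips = (1, 0, 1)
outcomes = {majority(p) for p in permutations(flips)}
print(outcomes)  # a single outcome, whatever the order
```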

The idea is that these postulates should be much easier to accept than the Born rule: certainly, if we want to say anything meaningful about certain probabilities being reasonable or not we ought to enforce some form of compatibility rules, and the two above sound rather reasonable themselves. The first seems a prerequisite to even make sense of probabilities, and the second follows the tradition started by de Finetti of basing the emergence of probability on indistinguishability of the outcomes of an experiment under certain symmetries of the experiment.

Unfortunately there does not yet seem to be a preprint available that describes these ideas in detail; hopefully it is coming soon. I’m quite curious to see how the proposal, which I find rather appealing, can be formalized.

 

After Renato’s talk — and an excellent lunch — we were treated to an intriguing talk on “Chance laws and quantum randomness” by Carl Hoefer. The talk addressed questions that we computer scientists or physicists rarely discuss openly (and indeed many of us might reserve for dreamy Sunday nights or inebriated Friday evenings). In Carl’s own words: “When we ascribe a primitive (irreducible) chance propensity, or postulate intrinsically random/chancy fundamental laws of nature, do we know what we’re saying?”. Or, less poetically put (but still in Carl’s words — Carl, thanks for making it accessible to us!): What does a claim such as “{\Pr(\text{spin-z-up} | \text{spin-x-up-earlier}) = 0.5}” mean, and how is it different from “{\Pr(\text{spin-z-up} | \text{spin-x-up-earlier}) = 0.7}”?

Give the question a chance to sink in — a good answer is not that obvious! It already arose in Renato’s talk, where probabilities were introduced as the reflection of an agent’s subjective belief. Well, this is the first of a number of possible answers to the question that Carl took head on and debunked, one slide at a time. (I am not sure how much I agree with Carl’s arguments here, but at least he showed us that there was certainly room for debate!) Other possible answers considered by Carl include probability as the reflection of an objective frequency for the occurrence of a certain event, or simply a “primitive fact” that would require no explanation. Needless to say, he found none of these interpretations satisfactory. It’s a good exercise to find arguments against each of them; see Carl’s slides for elements of an answer (see also the article on Interpretations of probability in the Stanford Encyclopedia of Philosophy for an extensive discussion).

[Figure: A Galton board]

So what is the speaker’s suggestion? I doubt my summary will do it much justice, but here goes: according to Carl probabilities, or rather “chance laws”, arise as a form of statistical averaging; in particular probabilistic statements can be meaningful irrespective of whether nature is intrinsically deterministic or irreducibly probabilistic. Specifically, Carl observes that many physical processes tend to generate a distribution on their observable outcomes that is robust to initial conditions. For instance, a Galton board will lead to the same Gaussian distribution for the location of the ball, irrespective of the way it is thrown in (except perhaps for an exceptional set of initial conditions of measure zero). Whether the initial conditions are chosen deterministically or probabilistically we can meaningfully state that “there is {x\%} chance that the ball will exit the board at position {z}”.
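A toy simulation (my own sketch, not from the talk) makes the robustness concrete: model the peg-by-peg dynamics by a deterministic chaotic map of the ball’s initial condition, and observe that two very different families of initial conditions produce essentially the same bell-shaped exit distribution.

```python
def exit_position(x0, pegs=20, burn_in=15):
    """A fully deterministic Galton board: the initial condition x0 evolves
    under the chaotic doubling map x -> 2x mod 1, and at each peg the ball
    steps right iff x >= 1/2 (a crude stand-in for sensitive peg dynamics)."""
    x = x0
    for _ in range(burn_in):      # let the map scramble the initial condition
        x = (2 * x) % 1.0
    pos = 0
    for _ in range(pegs):         # one left/right decision per row of pegs
        x = (2 * x) % 1.0
        pos += 1 if x >= 0.5 else 0
    return pos

# Two very different deterministic families of initial conditions...
mean_a = sum(exit_position(0.1 + i * 1e-7) for i in range(10000)) / 10000
mean_b = sum(exit_position(0.7 + i * 1e-7) for i in range(10000)) / 10000
# ...yield essentially the same distribution, concentrated around pegs/2 = 10
```

No randomness enters anywhere: every trajectory is deterministic, yet the statistics over initial conditions are stable, which is exactly the kind of averaging Carl appeals to.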

Carl concluded his talk by tying it to the Bohmian interpretation of quantum mechanics — almost, though perhaps not quite, going as far as to frame deterministic Bohmian mechanics as the most reasonable way to interpret probability in quantum mechanics. I found the discussion fascinating: I had never considered Bohmian mechanics (or, for that matter, the question of the origin of probabilities in quantum mechanics) seriously, and Carl made it sound, if not natural, at least not wholly unreasonable! In particular he made a convincing case for the possibility of ascribing meaning to “probability” in a fully deterministic world, without having to resort to “subjective beliefs” or related devices such as high sensitivity to initial conditions — indeed, here it is quite the opposite.

 

The last talk of the day was delivered on the whiteboard, by Marcin Pawlowski. Marcin gave an intuitive derivation of the quantum bound on device-independent randomness generation from the CHSH inequality, the famous bound (at least for those of us versed in these things… see Appendix A.1 here for a derivation)

\displaystyle H_\infty(A|X)\geq 1-\log_2\Big(1+\sqrt{2-\frac{I^2}{4}}\Big) ,

based solely on a monogamy bound due to Toner. Toner introduced a “two-out-of-three” variant of the CHSH game in which there are three players Alice, Bob and Charlie, but the game is played either between Alice and Bob, or Alice and Charlie, chosen uniformly at random by the referee. Although in principle the maximum success probability of the players in this game could be as high as that of the two-player CHSH (since in the end only one instance of the game is played, the third player being ignored), Toner showed that the fact that Alice did not know a priori which of Bob and Charlie she was going to play with places strong constraints on her ability to generate non-local correlations with at least one of them. Quantitatively, he showed that the players’ maximum success probability in this game is precisely {3/4}, whether the players are allowed entanglement or not: the presence of the third player completely eliminates the quantum advantage that can be gained in the usual two-player CHSH game!
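For comparison (my own aside, not part of Toner’s argument), {3/4} is also the classical value of the ordinary two-player CHSH game, which a short brute force over deterministic strategies confirms; Toner’s result says that, with a third player in the picture, even entangled strategies cannot beat this classical value.

```python
from itertools import product

# Brute force over all deterministic strategies for two-player CHSH:
# the referee picks x, y uniformly in {0,1}; players win iff a XOR b = x AND y.
best = 0.0
for fa in product([0, 1], repeat=2):        # Alice answers a = fa[x]
    for fb in product([0, 1], repeat=2):    # Bob answers b = fb[y]
        wins = sum((fa[x] ^ fb[y]) == (x & y)
                   for x, y in product([0, 1], repeat=2))
        best = max(best, wins / 4)
print(best)  # 0.75
```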

The derivation presented by Marcin is simple but enlightening (good exercise!). It could be considered the “proof from the book” for this problem, as it makes the intuition fully explicit: the reason Alice’s outcomes are unpredictable, from the point of view of Eve (=Charlie), is that they are strongly correlated with Bob’s outcomes (up to the Tsirelson bound); monogamy implies that correlations with Eve’s outcomes could not simultaneously be as strong.
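To get a feel for the min-entropy bound displayed above, one can evaluate it at the two extremes of the CHSH value {I} (a quick sketch of mine; the clamp guards against floating-point rounding at the Tsirelson point):

```python
import math

def min_entropy_bound(I):
    """Lower bound on H_inf(A|X) as a function of the CHSH value I, for
    I in [2, 2*sqrt(2)]; max(..., 0) absorbs tiny negative arguments
    caused by floating-point rounding when I is the Tsirelson value."""
    return 1 - math.log2(1 + math.sqrt(max(2 - I ** 2 / 4, 0.0)))

print(min_entropy_bound(2.0))               # 0.0: no certified randomness
                                            # at the classical value
print(min_entropy_bound(2 * math.sqrt(2)))  # 1.0: a full bit at the
                                            # Tsirelson bound
```

Anything strictly between the two extremes certifies a positive amount of randomness, which is what makes device-independent randomness generation possible from any Bell violation.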

[Figure: Typical view from a Stellenbosch winery]

That’s it for Day 1 – time to relax, enjoy a good bottle of local wine, and get ready for the second day!


