Foundations of randomness, Day 2

Back to the fundamentals of randomness and our workshop in Stellenbosch – I want to discuss the remaining two days of talks!

The second day started off with a whirlwind tour of the TCS approach to pseudo-randomness by David Zuckerman. David surveyed known constructions of pseudo-random generators and extractors; he also gave a very comprehensive overview of the different types of sources that have been considered in this area and what kind of extraction (deterministic, seeded, two-source) is known to be possible (or not) for each type of source.

I’ll refer you to David’s slides for details. Two particular sources caught my attention. First, David mentioned small-space sources, for which good deterministic extractors are known, but so far only in the regime of high (polynomial) min-entropy. This type of source seems quite relevant in practice, and it is natural to ask whether one can do better using device-independent quantum extraction: how could we leverage the fact that the source is generated using bounded space? A second interesting category is that of non-oblivious bit-fixing sources, in which some of the bits are uniformly random and the others are fixed adversarially, with the adversary allowed to base its choices on the values taken by the bits already produced. Together with Santha-Vazirani sources, these are the two examples David gave of natural sources from which deterministic extraction is not possible. But we do know that it is possible to extract from a single Santha-Vazirani source, of bias arbitrarily close to {1/2}, using a device-independent quantum procedure. So what about non-oblivious bit-fixing sources — can it be done there as well?
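
To make these two types of sources concrete, here is a toy Python sketch of samplers for a Santha-Vazirani source and a non-oblivious bit-fixing source. It is only an illustration of the definitions: the adversarial strategies (`adversary_bias`, `fix_bit`) and all names and parameters are placeholders of my own, not anything from David’s talk.

    import random

    def santha_vazirani(n, delta, adversary_bias):
        """Toy Santha-Vazirani source: each bit equals 1 with probability in
        [1/2 - delta, 1/2 + delta], where the bias of the next bit may be chosen
        adversarially as a function of the bits produced so far."""
        bits = []
        for _ in range(n):
            bias = adversary_bias(bits)             # adversary proposes a bias...
            bias = max(-delta, min(delta, bias))    # ...clipped to [-delta, delta]
            bits.append(1 if random.random() < 0.5 + bias else 0)
        return bits

    def non_oblivious_bit_fixing(n, good_positions, fix_bit):
        """Toy non-oblivious bit-fixing source: bits at `good_positions` are
        uniformly random; the other bits are set by an adversary who is allowed
        to look at everything produced so far."""
        bits = []
        for i in range(n):
            if i in good_positions:
                bits.append(random.randint(0, 1))   # honest uniform bit
            else:
                bits.append(fix_bit(i, bits))       # adversarial bit, may depend on the past
        return bits

    # Purely illustrative adversaries:
    print(santha_vazirani(8, 0.2, lambda prev: 0.2 if sum(prev) % 2 == 0 else -0.2))
    print(non_oblivious_bit_fixing(8, {0, 2, 5}, lambda i, prev: prev[-1] if prev else 0))

The impossibility statement above is, of course, a statement about all adversaries of this kind at once, not about the particular toy adversaries used here.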

Work on device-independent randomness extraction in the quantum information community has so far focused on a small set of sources — uniform seeds of limited length, arbitrary Santha-Vazirani sources, and more recently general min-entropy sources — but I see no reason why these are the only cases of interest; the restricted focus seems to be mostly for historical reasons. In pseudo-randomness, different types of sources are often motivated by applications to derandomization, lower bounds, or combinatorial constructions. In quantum information, rather than “beating the classical”, we should focus on the scenarios that are best motivated by the relevant applications, which so far come mostly from cryptography. Here device-independent randomness certification procedures seem most relevant in their role as “decouplers” (see the discussion on “random to whom” from Valerio’s talk). What kind of structure are we willing to assume on the information an adversary may have kept about a particular source? This question seems particularly relevant in light of recent works on device-independent two-source extraction.

David’s talk was followed by a talk by one of his students, Eshan Chattopadhyay, on their recent breakthrough construction of a two-source extractor for poly-logarithmic min-entropy. It is hard to do justice to such an intricate (and beautiful!) result, so instead I will point you to Eshan’s slides, or to the great talk that David gave on the same topic on TCS+ just a few weeks ago.

For the last talk of the morning session, Henry Yuen had an (arguably :-)) even more formidable treat in store for us: infinite randomness expansion! Henry presented the main steps that led him and Matthew Coudron to their beautiful result showing how, starting from just a few uniformly random bits, one can bootstrap a quantum device-independent expansion procedure and generate as many uniformly random bits as one desires — yes, that’s an {\infty}! (I realize a bit late that I never properly explained what “device-independent” even means… if I have any “classical” readers left: for the background needed for Henry’s talk, see a blog post of Henry’s; for the quantum crypto motivation, see the viewpoint by Roger Colbeck; for the computer scientist, perhaps the introduction to this paper is readable.)

In Henry and Matt’s protocol the length of the initial seed only needs to be at least some universal constant. The number of random bits available to start with will govern the security parameter (the distance from uniform) of the bits produced, but aside from that any small initial seed will do. The protocol uses two pairs of devices (the pairs can be arbitrarily correlated, e.g. share randomness or quantum entanglement, but are not allowed to communicate directly) which take turns expanding the current string of bits by an exponential factor. The key observation needed to argue that the protocol works is that each expansion step not only increases the number of bits available, but simultaneously “purifies” them, in the sense that the bits produced have high entropy even conditioned on any information, classical or quantum, in the possession of the other pair of devices. Given that this other pair generated the input bits used for the expansion in the first place, this is not a trivial observation; it is essentially the content of the powerful “equivalence lemma” of Chung, Shi and Wu discussed in the post describing Valerio Scarani’s talk. The lemma guarantees that the bits produced by a device-independent procedure are random even from the point of view of an adversary who shares arbitrary correlations with the seed (including knowing it perfectly).
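
Here is a heavily simplified sketch, in Python, of the cross-feeding structure just described. Everything in it is a placeholder of my own: in particular `expand` stands in for a robust exponential device-independent expansion sub-protocol run with one pair of devices, and none of the names or parameters come from the actual Coudron-Yuen protocol.

    def infinite_expansion(seed, rounds, expand):
        """Toy sketch of the cross-feeding structure: two pairs of devices, A and B,
        take turns; in each round one pair expands the current string by an
        exponential factor, and its output becomes the seed fed to the other pair.
        `expand(pair, seed)` stands in for a robust device-independent
        exponential-expansion sub-protocol."""
        current = seed
        for r in range(rounds):
            pair = "A" if r % 2 == 0 else "B"   # alternate between the two pairs of devices
            current = expand(pair, current)     # exponential expansion + "purification"
        return current

    # Stand-in that merely squares the length of the string at every call.
    fake_expand = lambda pair, s: s * len(s)
    print(len(infinite_expansion("0110", 3, fake_expand)))   # 4 -> 16 -> 256 -> 65536

The whole difficulty, of course, is in showing that the output of each call remains random even from the point of view of the pair of devices that will consume it next; this is exactly where the equivalence lemma enters.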

The result is quite impressive: if a few uniformly random bits are available, then it is automatically guaranteed that infinitely many just-as-good (in some respects even better) random bits can be generated! So much for our derandomization problems… The next step is to ask if uniformly random bits are even needed: how about weaker sources of randomness, such as Santha-Vazirani sources or general min-entropy sources? The task of device-independent extraction from the former was taken on by Colbeck and Renner (with much follow-up work); for the latter there is the work of Chung, Shi and Wu, which unfortunately still requires a number of devices that scales with the length of the source as well as the security parameter. More to be done!

The afternoon brought two more talks, with a very different focus but equally stimulating. The first was delivered by Ruediger Schack, who presented “The QBist perspective on randomness and laws of nature”. Ruediger’s talk followed the one by Carl Hoefer on the previous day in challenging our attitudes towards randomness and the meaning of probabilities, but Ruediger took us in quite a different direction. One of the points he made is that Bell’s theorem can be interpreted as giving us the following mutually exclusive alternatives about the world: either locality does not hold, and we must accept that far-away objects can have a nonlocal (instantaneous) influence on one another, or we insist on keeping locality — as Einstein did — but ascribe a different “meaning” to the probabilistic statements, or predictions, that arise from the Born rule. (Note that in the first option, by “nonlocal influence” I also mean the kind of non-signaling “influence” which arises from measuring half of an entangled pair.)

This is where QBism — for “Quantum Bayesianism” — comes in. According to Ruediger, the solution it offers to this conundrum goes through the assertion that, contrary to e.g. the Copenhagen interpretation, “probabilities are not determined by real properties”. This allows one to “restore locality” without having recourse to local hidden variables (as Bohmian mechanics would) either. Thus in QBism “there is no Born rule”, in the sense that the rule does not describe an intrinsic probabilistic fact about the world; rather, probabilities are seen as the reflection of an agent’s subjective belief and have no objective existence. The fact that, if Alice measures her half of an EPR pair in the computational basis and obtains a {|0\rangle}, then Bob will with certainty observe a {|0\rangle} as well if he measures in the same basis, is not a “fact” about their systems; rather, it is a prescription for how a rational Bob should update his belief as to the outcomes of an experiment he could perform, were he to be informed of the situation at Alice’s side.
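
For concreteness, the correlation in question is just the following elementary calculation: writing the EPR pair as {\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)}, if Alice measures in the computational basis and obtains {|0\rangle}, then the post-measurement state of Bob’s half is {|0\rangle}, so the Born rule assigns probability {1} to Bob also obtaining {|0\rangle} in the same basis. The QBist claim concerns how this “{1}” should be read: as a normative degree of belief for Bob rather than as a pre-existing property of the systems.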

The main conceptual tool used to formulate this interpretation of probabilities is the Dutch book argument, used as a guide to how an agent (observer) should set and update their probabilities. I admit I find this interpretation a bit hard to digest — I cannot help but think of betting as a highly irrational procedure, and in general I would be hard-pressed to bet on any precise number or probability; the Born rule makes very precise predictions, and ascribing them to an agent’s subjective belief feels like a bit of a stretch. Quite possibly it is also a problem of language, and I am not an expert on QBism (even the “usual” Bayesianism is far from obvious to me). Ruediger’s slides are very clear and give a good introduction for anyone interested in digging further.

The last talk of the day was given by Carl Miller, on “The extremes of quantum random number generation”. Carl explained how two very different principles enabled him and Yaoyun Shi to derive two independent lower bounds, each of interest in a separate regime, on the amount of randomness that can be generated from a pair of quantum devices in the sequential scenario: the devices are used repeatedly in sequence, and one is interested in the min-entropy rate of one device’s outputs as a function of the average observed violation of (in this case) the CHSH inequality. As the violation increases to the maximum the rate improves to 1, and the question is how high a rate can be maintained when the violation is bounded away from the optimum.

Carl’s first principle, measurement disturbance, leads to a good rate in the high-error regime, i.e. when the observed violation deviates significantly from the optimum, say it is around {0.8} (for an optimum of {\approx 0.85}). The general idea is that a violation of the CHSH inequality necessarily implies that Alice’s device uses “non-classical” measurements, which do not commute (in fact a maximal violation requires anti-commuting measurements). Two such measurements do not have a common eigenstate, hence whatever the state of the device, at least one of Alice’s measurements will perturb it — the outcome of that measurement on the state cannot be deterministic, and randomness is produced.
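
As a toy numerical check of this intuition (my own illustration, not something from Carl’s talk), the short sketch below verifies that the Pauli observables {X} and {Z} anti-commute, and that measuring {X} on an eigenstate of {Z} yields a uniformly random outcome, i.e. a full bit of randomness.

    import numpy as np

    X = np.array([[0, 1], [1, 0]])    # Pauli X
    Z = np.array([[1, 0], [0, -1]])   # Pauli Z

    # Anti-commutation: XZ + ZX = 0, so in particular X and Z do not commute.
    print(np.allclose(X @ Z + Z @ X, 0))                   # True

    # Measure X on |0>, an eigenstate of Z: both outcomes are equally likely.
    ket0 = np.array([1.0, 0.0])
    plus = np.array([1.0, 1.0]) / np.sqrt(2)               # X eigenvector, eigenvalue +1
    minus = np.array([1.0, -1.0]) / np.sqrt(2)             # X eigenvector, eigenvalue -1
    print(abs(plus @ ket0) ** 2, abs(minus @ ket0) ** 2)   # 0.5 0.5 -- one full random bit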

Of course this is just the intuition, and making it quantitative is more challenging. In their paper Carl and Yaoyun formalize it by introducing a new uncertainty relation (see this very recent survey on the topic for much more), expressed in terms of the Rényi {\alpha}-entropy for {\alpha > 1}. The proof relies on strong convexity of the underlying Schatten-normed space of matrices, and Carl gave some geometric intuition for the uncertainty relation. The use of the {\alpha}-Rényi entropy is important for the inductive step; it allows one to bypass the lack of good chain-rule-like arguments for the min-entropy.
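
For readers who have not met it, the classical Rényi entropy of order {\alpha} of a distribution {p} is {H_\alpha(p) = \frac{1}{1-\alpha}\log \sum_x p(x)^\alpha}; it recovers the Shannon entropy in the limit {\alpha \to 1} and the min-entropy {H_\infty(p) = -\log \max_x p(x)} as {\alpha \to \infty}. (The quantum versions are more delicate to define, but the role played here is the same: a fixed {\alpha > 1} sits “between” the two and, as noted above, is easier to track through the inductive argument than the min-entropy itself.)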

The second principle introduced in Carl’s talk is self-testing. This is the idea that a device demonstrating a close-to-optimal violation of the CHSH inequality must have a certain rigid structure: not only does it generate random bits, but its outcomes must be produced in a very specific way. Specifically, the device’s state and measurements must be equivalent, up to local isometries, to those used in the “canonical” optimal strategy for the CHSH game. This principle is well known (see e.g. the paper by McKague for a proof), but usually one only expects it to be of use in the “tiny error” regime, i.e. when the observed violation is of order {opt - \varepsilon} for small {\varepsilon}. Carl manages to squeeze the techniques further, and by carefully fine-tuning the argument he is able to get good bounds on the randomness produced when the violation is large but still a constant away from the optimum, say {0.83} or so. In that regime the bound is better than the one obtained from the first principle; I’ll refer you to Carl’s slides for the precise curves.
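
To put the numbers {0.8}, {0.83} and {\approx 0.85} in context, here is a quick computation (my own, purely for illustration) of the winning probability of the canonical optimal CHSH strategy: the devices share an EPR pair, Alice measures {Z} or {X}, Bob measures {(Z+X)/\sqrt{2}} or {(Z-X)/\sqrt{2}}, and the strategy achieves {\cos^2(\pi/8) \approx 0.854}.

    import numpy as np

    # EPR pair (|00> + |11>)/sqrt(2) and the canonical CHSH observables.
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])
    epr = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

    A = [Z, X]                                        # Alice's two measurements
    B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]  # Bob's two measurements

    def correlation(a_obs, b_obs):
        """<EPR| A (x) B |EPR> for +/-1-valued observables A and B."""
        return epr @ np.kron(a_obs, b_obs) @ epr

    # CHSH game: on questions (x, y) the players win iff a XOR b = x AND y.
    # With +/-1-valued outcomes, Pr[win | x, y] = (1 + (-1)^(x*y) <A_x B_y>) / 2.
    win = sum((1 + (-1) ** (x * y) * correlation(A[x], B[y])) / 2
              for x in (0, 1) for y in (0, 1)) / 4
    print(win, np.cos(np.pi / 8) ** 2)                # both ~0.8536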

After Carl’s talk, our minds full of new ideas, we left for a little walk to the Lanzerac wine estate, where we were shown around the wine cellar — including the mandatory wine tasting of course, with the highlight being the local pinotage — and finished off the day with a nice dinner on the terrace: it’s late October in South Africa, summer is coming!
