Randomness and interaction? Entanglement ups the game!

[05/25/19 Update: Kevin Hartnett has a nice article at Quanta explaining Natarajan & Wright’s result in slightly more layman terms than I’d be able to…see here: Computer Scientists Expand the Frontier of Verifiable Knowledge]

The study of entanglement through the lens of interactive proof systems has been one of the most productive applications of complexity theory to the physical sciences that I know of. Last week Anand Natarajan and John Wright, postdoctoral scholars at Caltech and MIT respectively, added a major stone to this line of work. Natarajan and Wright (hereafter “NW”) establish the following wild claim: it is possible for a classical polynomial-time verifier to decide membership in any language in non-deterministic doubly exponential time by asking questions to two infinitely powerful, but untrusted, provers sharing entanglement. In symbols, NEEXP {\subseteq} MIP{^\star}! (The exclamation mark is for emphasis — no, we don’t have an MIP{^\star}! class — yet.)

What is amazing about this result is the formidable gap between the complexity of the verifier and the complexity of the language being verified. We have known since the 1990s that the use of interaction and randomness can greatly expand the power of polynomial-time verifiers, from NP to PSPACE (with a single prover) and NEXP (with two provers). As a result of the work of Natarajan and Wright, we now know that yet another ingredient, the use of entanglement between the provers, can be leveraged by the verifier — the same verifier as in the previous results, a classical randomized polynomial-time machine — to obtain an exponential increase in its verification power. Randomness and interaction brought us one exponential; entanglement gives us another.

To gain intuition for the result, consider first the structure of a classical two-prover one-round interactive proof system for non-deterministic doubly exponential time, with an exponential-time verifier. Cutting some corners, such a protocol can be obtained by “scaling up” a standard two-prover protocol for non-deterministic singly exponential time. In the protocol, the verifier would sample a pair of exponential-length questions {(X,Y)}, send {X} to one prover and {Y} to the other, receive answers {A} and {B}, and perform an exponential-time computation that verifies some predicate about {(X,Y,A,B)}.
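To fix ideas, here is a rough schematic of the verifier’s side of such a protocol. This is a sketch only: sample_question_pair and check_predicate are hypothetical placeholders for the exponential-time sampling and verification procedures, not names from any actual construction.

```python
# Schematic verifier for a two-prover one-round protocol, as described above.
# sample_question_pair and check_predicate stand in for the exponential-time
# sampling and verification procedures; X, Y, A, B are exponentially long strings.

def verifier(instance, prover1, prover2, rng):
    X, Y = sample_question_pair(instance, rng)    # correlated question pair
    A = prover1.answer(X)                         # provers answer in isolation,
    B = prover2.answer(Y)                         # without seeing the other's question
    return check_predicate(instance, X, Y, A, B)  # accept iff the predicate holds
```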

How can entanglement help design an exponentially more efficient protocol? At first it may seem like a polynomial-time verifier has no way to even get started: if it can only communicate polynomial-length messages with the provers, how can it leverage their power? And indeed, if the provers are classical, it can’t: it is known that even with a polynomial number of provers, and polynomially many rounds of interaction, a polynomial-time verifier cannot decide any language beyond NEXP.

But the provers in the NW protocol are not classical. They can share entanglement. How can the verifier exploit this to its advantage? The key property that is needed is known as the rigidity of entanglement. In words, rigidity is the idea that by verifying the presence of certain statistical correlations between the provers’ questions and answers, the verifier can determine precisely (up to a local change of basis) the quantum state and measurements that the provers must have been using to generate their answers. The most famous example of rigidity is the CHSH game: as already shown by Summers and Werner in the 1980s, the CHSH game can only be won optimally, or even near-optimally, by measuring a maximally entangled state using two mutually unbiased bases for each player. No other state or measurements will do, unless they trivially embed an EPR pair measured in mutually unbiased bases (such as a state that is the tensor product of an EPR pair with an additional entangled state).
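To make the CHSH example concrete, here is a small numerical check (a numpy sketch of the textbook strategy, not taken from the NW paper): an EPR pair, measured by each player in one of two mutually unbiased bases, wins the game with the optimal probability {\cos^2(\pi/8) \approx 0.854}.

```python
import numpy as np

# Pauli observables and the EPR pair (maximally entangled state on two qubits).
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
epr = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

# Alice's observables (computational and Hadamard bases, mutually unbiased),
# Bob's observables (the two "diagonal" bases, also mutually unbiased).
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def correlation(Ax, By):
    """Expectation of the product of the +/-1-valued outcomes on the EPR pair."""
    return epr @ np.kron(Ax, By) @ epr

# CHSH winning probability: questions (x, y) uniform in {0,1}^2, win iff
# a XOR b = x AND y.  For +/-1 observables this equals
# 1/2 + (1/8) * sum over (x, y) of (-1)^(x*y) * <A_x B_y>.
win = 0.5 + sum((-1) ** (x * y) * correlation(A[x], B[y])
                for x in (0, 1) for y in (0, 1)) / 8
print(win, np.cos(np.pi / 8) ** 2)   # both ~0.8536, the optimal quantum value
```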

Rigidity gives the verifier control over the provers’ use of their entanglement. The simplest use of this is for the verifier to force the provers to share a certain number {N} of EPR pairs and measure them to obtain identical uniformly distributed {N}-bit strings. Such a test for {N} EPR pairs can be constructed from {N} CHSH games. In a paper with Natarajan we give a more efficient test that only requires questions and answers of length that is poly-logarithmic in {N}. Interestingly, the test is built on classical machinery — the low-degree test — that plays a central role in the analysis of some classical multi-prover proof systems for NEXP.

At this point we have made an inch of progress: it is possible for a polynomial-time (in {n=\log N}) verifier to “command” two quantum provers sharing entanglement to share {N=2^n} EPR pairs, and measure them in identical bases to obtain identical uniformly random {N}-bit strings. What is this useful for? Not much — yet. But here comes the main insight in NW: suppose we could similarly force the provers to generate, not identical uniformly random strings, but a pair of {N}-bit strings {(X,Y)} that is distributed as a pair of questions from the verifier in the aforementioned interactive proof system for NEEXP with exponential-time (in {n}) verifier. Then we could use a polynomial-time (in {n}) verifier to “command” the provers to generate their exponentially long questions {(X,Y)} by themselves. The provers would then compute answers {(A,B)} as in the NEEXP protocol. Finally, they would prove to the verifier, using a polynomial-time interaction, that {(A,B)} is a valid pair of answers to the pair of questions {(X,Y)} — indeed, the latter verification is an NEXP problem, hence can be verified using a protocol with a polynomial-time verifier.
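For intuition about the first step, here is a toy numpy simulation (my own sketch, not part of the protocol) of both provers measuring {N} EPR pairs in the same computational basis and obtaining identical, uniformly random {N}-bit strings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8   # number of EPR pairs (think N = 2^n in the protocol)

# One EPR pair measured by both provers in the computational basis:
# the joint outcome distribution is P(a, b) = 1/2 if a == b, and 0 otherwise.
epr = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = epr ** 2
probs /= probs.sum()            # guard against floating-point round-off

# Sample the provers' outcomes, one EPR pair at a time.
outcomes = rng.choice(4, size=N, p=probs)
prover1_bits, prover2_bits = outcomes // 2, outcomes % 2

assert np.array_equal(prover1_bits, prover2_bits)  # identical strings...
print(prover1_bits)                                # ...uniformly random over {0,1}^N
```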

Sounds crazy? Yes. But they did it! Of course there are many issues with the brief summary above — for example, how does the verifier even know the questions {X,Y} sampled by the provers? The answer is that it doesn’t need to know the entire question; only that it was sampled correctly, and that the quadruple {(X,Y,A,B)} satisfies the verification predicate of the exponential-time verifier. This can be verified using a polynomial-time interactive proof.

Diving in, the most interesting insight in the NW construction is what they call “introspection”. What makes multi-prover proof systems powerful is the verifier’s ability to send correlated questions to the provers, in a way such that each prover has only partial information about the other’s question — informally, the verifier plays a variant of the prisoner’s dilemma with the provers. In particular, any interesting distribution {(X,Y)} will have the property that {X} and {Y} are not fully correlated. For a concrete example think of the “planes-vs-lines” distribution, where {X} is a uniformly random plane and {Y} a uniformly random line in {X}. The aforementioned test for {N} EPR pairs can be used to force both provers to sample the same uniformly random plane {X}. But how does the verifier ensure that one of the provers “forgets” part of the plane, so as to only remember a uniformly random line {Y} that is contained in it? NW’s insight is that the information present in a quantum state — such as the prover’s halves of the shared EPR pairs — can be “erased” by commanding the prover to perform a measurement in the wrong basis — a basis that is mutually unbiased with the basis used by the other prover to obtain its share of the query. Building on this idea, NW develop a battery of delicate tests that give the verifier the ability to control precisely what information gets distributed to each prover. This allows a polynomial-time verifier to perfectly simulate the local environment that the exponential-time verifier would have created for the provers in a protocol for NEEXP, thus simulating the latter protocol with exponentially fewer resources.
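Here is a toy numpy illustration of this “erasure” effect on a single EPR pair (again my own sketch, not the NW tests themselves): when the second prover measures its half in a basis mutually unbiased with the first prover’s, its outcome is uniformly random and independent of the first prover’s outcome.

```python
import numpy as np

# "Erasure by measuring in the wrong basis" on one EPR pair.
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)
ket = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
ket['+'] = (ket['0'] + ket['1']) / np.sqrt(2)   # Hadamard basis, mutually
ket['-'] = (ket['0'] - ket['1']) / np.sqrt(2)   # unbiased with computational

def joint(basis1, basis2):
    """Joint outcome distribution when the provers measure in the given bases."""
    return np.array([[abs(np.kron(ket[a], ket[b]) @ epr) ** 2
                      for b in basis2] for a in basis1])

print(joint('01', '01'))   # same basis: perfectly correlated (diagonal 1/2, 1/2)
print(joint('01', '+-'))   # mutually unbiased bases: all entries 1/4, so the
                           # second outcome reveals nothing about the first
```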

One of the aspects of the NW result I like best is that they showed how the “history state barrier” could be overcome. Previous works attempting to establish strong lower bounds on the class MIP{^\star}, such as the paper by Yuen et al., rely on a compression technique that requires the provers to share a history state of the computation performed by a larger protocol. Unfortunately, history states are very non-robust, and as a result such works only succeeded in developing protocols with a vanishing completeness-soundness gap. NW entirely bypass the use of history states, and this allows them to maintain a constant gap.

Seven years ago Tsuyoshi Ito and I showed that MIP{^\star} contains NEXP. At the time, we thought this might be the end of the story — although it seemed challenging, surely someone would eventually prove a matching upper bound. Natarajan and Wright have defeated this expectation by showing that MIP{^\star} contains NEEXP. What next? NEEEXP? The halting problem? I hope to make this the topic of a future post.

About Thomas

I am a professor in the department of Computing and Mathematical Sciences (CMS) at the California Institute of Technology, where I am also a member of the Institute for Quantum Information and Matter (IQIM). My research is in quantum complexity theory and cryptography.

8 Responses to Randomness and interaction? Entanglement ups the game!

  1. RandomOracle says:

    In the first paragraph, I guess it should be NEEXP instead of NEXP 🙂

  2. Attila Pereszlényi says:

    I would like to know what you think this result implies for the gap amplification version of the quantum PCP conjecture (i.e., for the inapproximability of the local Hamiltonian problem)? More precisely, one could have hoped that a QMA lower bound on a certain (possibly restricted) subclass of MIP* with log-length questions would imply the QMA-hardness of the local Hamiltonian problem with constant relative gap. This result exceeds the expected QMA lower bound and went all the way up to NEEXP. This may indicate that the connection either does not exist (contrary to the classical case) or one would need to consider a very (artificially) restricted subclass of MIP*_log.

    • Thomas says:

      Yes, that’s a very good point. I think of it optimistically: as you write, what the result indicates is that there may exist restricted classes of MIP*_log protocols, such that their complexity would still equal QMA, but such that one would be able to transfer hardness from them to the local Hamiltonian problem. So you could say that the result is encouraging in giving a justification for our inability to relate the complexity of MIP* to that of the local Hamiltonian problem; indeed the former can contain much harder problems than the latter. It is now a very interesting question whether you can put a useful restriction on MIP* protocols that would allow a gap-preserving transfer of hardness to take place. An obvious one is limiting the entanglement to polynomially many qubits, but that doesn’t seem to be sufficient.

  3. Pingback: Shtetl-Optimized » Blog Archive » Not yet retired from research

  4. Pingback: Shtetl-Optimized » Blog Archive » John Wright joins UT Austin
