Quid qPCP?

This blog has already seen three posts on the quantum PCP conjecture: in February 2013 to highlight several talks at the Simons Institute in Berkeley, in October 2013 to promote a survey on the topic I wrote with Dorit Aharonov and Itai Arad, and in October 2014 to introduce the two main variants, “constraint satisfaction” (CSP) and “multiplayer games” (MIP), of the quantum PCP (qPCP) conjecture. Such a high rate of posting (compared to the average frequency of posts on this blog) might indicate a slight obsession. But you may also notice it’s been…two years! Has no result worthy of note been established since? Certainly not, and although the conjecture still stands strong, there have been a few interesting developments on both variants of the conjecture. In this post I’ll discuss a couple of results on the CSP-qPCP. In a follow-up post I’ll describe progress on the MIP-qPCP.

When we wrote the survey three summers ago, the latest word on the CSP-qPCP (see Conjecture 1.3 here for a precise formulation) had been given in a paper by Brandao and Harrow (BH). Using information-theoretic arguments, BH showed that the constraint graphs associated with constant-gap QMA-hard instances of the local Hamiltonian problem must satisfy “non-expansion” requirements seemingly at odds with the expansion properties of graphs associated with what are often considered the hardest instances of classical CSPs. Intuitively, their argument uses the monogamy of quantum correlations to show that highly expanding constraint graphs place such strong demands on entanglement that there is always a product state whose energy is not far from the minimum. Although not strictly a no-go result, their theorem indicates that QMA-hard instances must be based on constraint graphs with markedly different spectral properties from those associated with the hardest instances of classical CSPs.

For the time being it seems like any proof, or disproof, of the conjecture remains out of reach. Instead of focusing directly on qPCP, it may be more fruitful to develop the objects that are expected to play an important role in the proof, such as (quantum) low-density parity check codes (qLDPC) and (quantum) locally testable codes (qLTC). Two recent works make progress on this front.

The NLETS conjecture

The no low-energy trivial states (NLTS) conjecture was proposed by Freedman and Hastings as a “complexity-free” analogue of the CSP-qPCP. The NLTS conjecture states that there exist local Hamiltonians such that all low-energy states (within an additive constant, times the norm of the Hamiltonian, from the minimum) are “non-trivial”, in the sense that they cannot be generated by a constant-depth quantum circuit applied to a product state. Equivalently, all states that are the output of a constant-depth quantum circuit must have energy a constant (times the norm) above the minimum. NLTS Hamiltonians are good candidates for qPCP as they provide local Hamiltonians for which many obvious classical certificates for the minimal energy (such as the description of a small circuit which generates a low-energy state) are essentially ruled out.

An earlier version of the Eldar-Harrow (EH) manuscript claimed a construction of NLTS Hamiltonians, but the paper was recently updated, and the claim retracted. The current manuscript establishes a moderately weaker (though strictly speaking incomparable) result, which the authors call NLETS, for “no low-error trivial states”. The main result of EH is a relatively simple, explicit construction of a family of local Hamiltonians that have no non-trivial “ground state {\varepsilon}-impostor”. An {\varepsilon}-impostor is a state that has the same reduced density matrix as a ground state on a fraction {(1-\varepsilon)} of the qubits, but may differ arbitrarily on the remaining {\varepsilon} fraction. Since the Hamiltonian is local, impostors necessarily have low energy, but the converse is not true, so NLETS rules out non-triviality for a more restricted class of states than NLTS. For that restricted class of states, however, the non-triviality established by EH is stronger than required by NLTS: they show that no {\varepsilon}-impostor can even be well-approximated (within inverse-polynomial trace distance) by logarithmic-depth, instead of just constant-depth, quantum circuits.

Let’s see if I can give some basic intuition on their construction; for anything substantial see the paper, which gives many angles on the result. Consider first a classical repetition code encoding {1} bit into {n} bits. This can be made into a locally testable code by enforcing pairwise equality of bits along the edges of a constant-degree expander graph on vertex set {\{1,\ldots,n\}}. Now allow me a little leap of faith: imagine there existed a magic quantum analogue of this classical repetition code, where equality between pairs of qubits is enforced not only in the {Z} (computational) basis, but also in the {X} (Hadamard) basis. Of course such a thing does not exist: the constraints would force any pair of qubits (linked by the expander) to form an EPR pair, a requirement that strongly violates monogamy. But let’s imagine. Then I claim that we would essentially be done. Why? We need two more observations.
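
To make the classical half of this concrete, here is a minimal Python sketch (my own toy illustration, not code from the paper) of the expander-based repetition code. It uses networkx’s random regular graph as a stand-in for an explicit expander, and compares the number of violated equality checks to the distance from the nearest codeword; for a good expander the two grow proportionally, which is exactly local testability.

```python
# A toy version of the expander-based repetition code: bits sit on the
# vertices of a constant-degree graph and each edge enforces equality.
# The random regular graph below is only a stand-in for an explicit expander.
import random
import networkx as nx

n, d = 1000, 6
G = nx.random_regular_graph(d, n, seed=0)

def syndrome_weight(x):
    """Number of violated edge constraints x_u == x_v."""
    return sum(1 for u, v in G.edges() if x[u] != x[v])

def distance_to_code(x):
    """Hamming distance to the nearest codeword (all-0 or all-1)."""
    w = sum(x)
    return min(w, n - w)

# Corrupt an eps-fraction of the all-0 codeword: for an expander the
# syndrome weight scales linearly with the distance to the code.
random.seed(0)
for eps in (0.01, 0.05, 0.10):
    x = [0] * n
    for i in random.sample(range(n), int(eps * n)):
        x[i] = 1
    print(eps, distance_to_code(x), syndrome_weight(x))
```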

The first key observation made by EH is that any ground state of this imaginary code would have the following property: if you measure all qubits of the state in the same basis, either {X} or {Z}, then for at least one of the two choices the measurement outcomes will be distributed according to a distribution on {n}-bit strings that places a large (constant) weight on at least two well-isolated (separated by at least the minimum distance) subsets of the Hamming cube. Note that this does not hold for the classical repetition code: the distribution associated with the all-{0} codeword is, well, concentrated. But if we were to measure the associated quantum state {|0\cdots 0 \rangle \simeq \frac{1}{\sqrt{2}}( |+\cdots+\rangle+|-\cdots-\rangle)} in the Hadamard basis, we would get a very spread-out distribution, with constant mass on two sets that are at distance {n} apart (I realize the equation I wrote is not quite correct! Don’t think too hard about it; obviously my “magical quantum repetition code” does not exist). The reason the distribution obtained in at least one of the two bases must be spread out is the uncertainty principle: if the distribution is localized in the {X} basis it must be delocalized in the {Z} basis, and vice-versa. And the reason it should be concentrated on isolated clumps is that we are measuring a codeword, which, for our magic example, can only lead to outcomes that are supported on the set {\{|0\rangle^{\otimes n},|1\rangle^{\otimes n},|+\rangle^{\otimes n},|-\rangle^{\otimes n}\}}.
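
Since the magic code does not exist, let me illustrate the clumped-versus-spread dichotomy with a small numpy computation (mine, purely illustrative), using the honest cat state {\frac{1}{\sqrt{2}}(|0\cdots0\rangle+|1\cdots1\rangle)} as a stand-in for a codeword: measured in the {Z} basis its outcomes form two isolated clumps at Hamming distance {n}, while in the {X} basis they spread over exponentially many strings.

```python
# Measure an n-qubit cat state in the Z and X bases and compare supports.
import numpy as np

n = 8
psi = np.zeros(2**n)
psi[0] = psi[-1] = 1 / np.sqrt(2)   # (|0...0> + |1...1>)/sqrt(2)

# Z-basis outcome distribution: squared amplitudes.
pZ = psi**2

# X-basis outcome distribution: apply H^{tensor n}, then square.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H = np.array([[1.0]])
for _ in range(n):
    H = np.kron(H, H1)
pX = (H @ psi)**2

suppZ = np.nonzero(pZ > 1e-12)[0]
suppX = np.nonzero(pX > 1e-12)[0]
print(len(suppZ), len(suppX))  # 2 outcomes vs 128: clumped vs spread out
# The two Z-outcomes are 0...0 and 1...1, at Hamming distance n from each
# other -- two well-isolated clumps, as in the text.
```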

To conclude we need the second observation: trivial states do not have this property. Measuring a trivial state in any product basis will always lead to a highly expanding distribution, which in particular cannot have large mass on well-isolated subsets. This is obviously true for product states, and it takes some work to carry the argument through logarithmically many layers of a quantum circuit; indeed this is where the main technical work of the paper lies.


So the argument is complete…except for the fact that the required magic quantum repetition code does not exist! Instead, EH find a good substitute by employing a beautiful construction of quantum LDPC codes due to Tillich and Zemor (TZ), the “hypergraph product”. The hypergraph product takes as input any pair of classical linear codes and returns a quantum “product” CSS code whose locality, distance and rate can be related to the corresponding properties of the original codes. The toric code can be cast as an example of a hypergraph product code; see Section 3 in the paper for explanations. Unfortunately, the way the distance of the product code scales with the other parameters prevents TZ from obtaining good enough qLDPC for the CSP-qPCP; they can “only” obtain codes with constant stabilizer weight and constant rate, but distance {O(\sqrt{n})}.
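
For concreteness, here is a short numpy sketch of the hypergraph product as I understand it (a toy illustration; see the TZ paper for the real thing): it takes the parity-check matrices of two classical codes and produces the {X}- and {Z}-check matrices of a CSS code, whose commutation is easy to verify.

```python
# Toy implementation of the Tillich-Zemor hypergraph product over GF(2).
import numpy as np

def hypergraph_product(H1, H2):
    """Return the (HX, HZ) check matrices of the product CSS code."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return HX, HZ

# Product of two length-3 repetition codes (circulant check matrices would
# give the toric code mentioned above).
H_rep = np.array([[1, 1, 0],
                  [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)
assert ((HX @ HZ.T) % 2 == 0).all()   # CSS condition: X and Z checks commute
print(HX.shape, HZ.shape)             # checks x physical qubits: (6, 13)
```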

In the context of NL(E)TS, and even more so qPCP, however, distance may not be the most relevant parameter. EH’s main construction is obtained as the hypergraph product of two expander-based repetition codes, which as a code only has logarithmic distance; nevertheless they are able to show that the robustness derived from the repetition code, together with the logarithmic distance, is enough to separate {\varepsilon}-impostors from logarithmic-depth trivial states.

Quantum LDPC & LTC

Quantum low-density parity-check codes (qLDPC) already made an appearance in the previous sections. These families of codes are of much broader interest than their possible role in a forthcoming proof of qPCP, and constructions are being actively pursued. For classical codes the situation is largely satisfactory: there are constructions that simultaneously achieve constant rate and linear distance with constant-weight parity checks. For quantum codes much less is known. If we insist on constant-weight stabilizers then the best known distance is {\Omega(\sqrt{n}\log^{1/4} n)} (e.g. Freedman et al.), a notch above the TZ construction mentioned earlier. The most local construction that achieves linear distance requires stabilizers of weight {O(\sqrt{n})} (e.g. Bravyi and Hastings).

A recent paper by Hastings makes progress on constructions of qLDPC – assuming a geometrical conjecture on the volume of certain surfaces defined from lattices in {{\mathbb R}^n}. Assuming the conjecture, Hastings shows the existence of qLDPC with {n^{1-\varepsilon}} distance and logarithmic-weight stabilizers, a marked improvement over the state of the art. Although, as discussed earlier, even linear-distance constant-weight qLDPC would imply neither the CSP-qPCP nor NLTS (the resulting Hamiltonian may still have low-energy eigenstates that are not at a small distance from codewords), by analogy with the classical case (and basic intuition!) constructions of such objects should certainly facilitate any attempt at a proof of the conjectures. Moreover, qLDPC suffice for the weaker NLETS introduced by EH, as the latter only makes a statement about {\varepsilon}-impostors, i.e. states that are at a small constant distance from codewords. To obtain the stronger implication to NLTS, the proper notion is that of local testability: errors should be detected by a fraction of the parity checks proportional to the distance of the error from the closest codeword (and not just by some parity check).

Hastings’ construction follows the topological approach to quantum error-correcting codes pioneered by Freedman and Kitaev. Although they introduced codes whose properties depend on the surface in which they are embedded, as best I could tell the formal connection between homology and error correction is made in a comprehensive paper by Bombin and Martin-Delgado. The advantage of this approach is that properties of the code, including rate and distance, can be tied to geometric properties of the underlying manifold, reducing the construction of good codes to that of manifolds with the right properties.


In addition to the (conjectural) construction of good qLDPC, almost as an afterthought Hastings provides an unconditional construction of a quantum locally testable code (qLTC), albeit one which encodes only two qubits. Let’s try to visualize this, starting from the helpful warm-up provided by Hastings: a high-dimensional, entangled, locally testable code…which encodes zero qubits (the code space is one-dimensional). Of course this is trivial, but it’s a warm-up!

The simplest instance to visualize has six physical qubits. To follow the forthcoming paragraphs, take a piece of paper and draw a large tetrahedron. If you didn’t mess up, your tetrahedron should have six edges: these are your qubits. Now the parity checks are as follows. Each of the four faces specifies an {X}-stabilizer which acts on the three edges forming the face. Each of the four vertices specifies a {Z}-stabilizer which acts on the three edges that touch the vertex. The resulting eight operators pairwise commute, and they specify a unique (entangled) state in the {2^6}-dimensional physical space.

Next we’d like to understand “local” testability. This means that if we fix a set {O} of edges and act on each of them with an {X} error, then the resulting operator should violate (anti-commute with) a fraction of {Z}-stabilizers that is proportional to the reduced weight of the error, i.e. its distance to the closest operator which commutes with all {Z}-stabilizers. To see which stabilizers “detect” the error {O}, recall that {Z}- and {X}-operators which overlap in an even number of locations commute. Therefore a {Z}-stabilizer detects {O} if and only if its vertex lies in the boundary {\partial O}: the set of vertices which touch an odd number of edges in {O}. This is our syndrome; it has a certain cardinality. To conclude we need to argue that {O} can be modified into a set {O+P} with no boundary, {\partial(O+P)=\emptyset}, and such that {P} is as small as possible – ideally, it should involve at most as many edges as the size of the boundary {|\partial O|}. Here is how Hastings does it: for each vertex in the boundary, introduce an edge that links it to some fixed vertex – say the top-most one in your tetrahedron. Let {P} be the resulting set of edges. Then you can check (on the picture!) that {O+P} is boundary-less. Since we added at most as many edges as there are vertices in the boundary (if the top-most vertex is part of the boundary it doesn’t contribute an edge), we have proven local testability with respect to {X} errors; the case of {Z} errors is similar.
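
If you’d rather check the picture in code, here is a small Python sketch (my own, following the description above): it builds the six-qubit tetrahedron code, verifies that all stabilizers pairwise commute, and runs the boundary-repair argument on a sample {X} error.

```python
# The six-qubit tetrahedron code: qubits on edges, X-stabilizers on faces,
# Z-stabilizers on vertices.
from itertools import combinations

V = range(4)
edges = list(combinations(V, 2))                  # six qubits

x_stab = [{e for e in edges if set(e) <= set(f)}  # three edges per face
          for f in combinations(V, 3)]
z_stab = [{e for e in edges if v in e}            # three edges per vertex
          for v in V]

# Stabilizers commute: every X/Z pair overlaps on an even number of edges.
assert all(len(sx & sz) % 2 == 0 for sx in x_stab for sz in z_stab)

def boundary(O):
    """Vertices touching an odd number of edges of O (the syndrome)."""
    return {v for v in V if sum(v in e for e in O) % 2 == 1}

# Hastings' repair: link each boundary vertex to a fixed vertex (here 0).
O = {(0, 1), (1, 2)}                              # a sample X error
P = {tuple(sorted((0, v))) for v in boundary(O) if v != 0}
assert boundary(O ^ P) == set()                   # O + P is boundary-less
print(len(boundary(O)), len(P))                   # syndrome vs repair size
```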

This was all in three dimensions. The wonderful thing is that the construction generalizes in a “straightforward” way to {n} dimensions. Consider an {(n+1)}-element universe {U=\{1,\ldots,n+1\}}. Qubits are all subsets of {U} of size {q=(n+1)/2}; there are exponentially many of these. {Z}-stabilizers are defined for each {(q-1)}-element subset; each acts on all {(q+1)} of its {q}-element supersets. Symmetrically, {X}-stabilizers are defined for each {(q+1)}-element set; each acts on all {(q+1)} of its {q}-element subsets. Thus the code is local: each stabilizer has weight {(q+1)}, which is logarithmic in the number of qubits. It remains to check local testability; this follows using precisely the same argument as above (minus the picture…).
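
The same few lines of Python generalize directly (again my own toy check, not Hastings’ code): qubits are the {q}-element subsets, stabilizers come from the {(q-1)}- and {(q+1)}-element subsets, and the even-overlap (commutation) condition can be verified for small {n}.

```python
# The n-dimensional generalization: qubits are q-subsets of an (n+1)-element
# universe; stabilizers come from (q-1)- and (q+1)-subsets.
from itertools import combinations

n = 7                                  # n+1 even, so q is an integer
U = range(n + 1)
q = (n + 1) // 2

qubits = [frozenset(c) for c in combinations(U, q)]
z_stab = [{c for c in qubits if set(a) < c}        # q-supersets of a
          for a in combinations(U, q - 1)]
x_stab = [{c for c in qubits if c < frozenset(b)}  # q-subsets of b
          for b in combinations(U, q + 1)]

# Each stabilizer has weight q+1, logarithmic in the number of qubits.
print(len(qubits), len(z_stab[0]), len(x_stab[0]))  # 70 5 5 for n = 7
# Commutation: every Z/X pair overlaps on an even number of qubits.
assert all(len(sz & sx) % 2 == 0 for sz in z_stab for sx in x_stab)
```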

This first construction encodes zero qubits. How about getting a couple? Hastings gives a construction achieving this which remains (poly-logarithmically) locally testable. The idea, very roughly, is to make a toric code by combining two copies of the code described above. The number of encoded qubits becomes non-trivial and local testability is retained. Unfortunately, just as for the toric code, the distance of the resulting code only scales as {\sqrt{n}}. To construct his code Hastings uses a slightly different cellulation from the one described above. I am not sure precisely why the change is needed, and I defer to the paper for details. (Leverrier, Tillich and Zemor had earlier provided a construction, based on the TZ hypergraph product, with linear rate, square-root distance, and local testability up to the minimum distance, i.e. for all errors of reduced weight at most {O(\sqrt{n})}.)


Although the geometric picture takes some effort to grasp, I find these constructions fascinating. Given the Brandao-Harrow objections to using the most “straightforward” expander constructions to achieve the CSP-qPCP, or even NLTS, it seems logical to start looking for combinatorial structures that have more subtle properties and lie at the delicate boundary where both robustness (in terms of testability) and entanglement (non-triviality of ground states) can co-exist without challenging monogamy.


2 Responses to Quid qPCP?

  1. anthony says:

    Hi Thomas,
    thanks for this very interesting summary!

    Two minor corrections:
    – the Tillich-Zémor construction really takes 2 classical codes and turns them into a CSS code. The construction that takes 2 CSS codes to get a new one, as you mention, would be the homological product of Bravyi-Hastings, I think.
    – concerning the Freedman et al construction, the minimum distance should be $\Omega(n^{1/2} \log^{1/4} n)$. I think there’s a typo in the last equation of their paper.

    • Thomas says:

      Thanks for the corrections! I updated the post. (PS: I wish I had more time to discuss the TZ construction, and the LTC analysis in your paper…)
