The first day of the Simons workshop on quantum Hamiltonian complexity brought us two talks on the quantum PCP conjecture. In the first, Dorit Aharonov gave an overview of the conjecture and its relevance for physics, and she discussed recent work (by Aharonov&Eldar, Freedman&Hastings, and others) delimiting the possible range of correctness of the conjecture in the case of commuting Hamiltonians. The second talk was given by Fernando Brandao, who presented recent results (with Aram Harrow) showing that if the conjecture were true it would have to be so in a relatively tight range of parameters, which are *not* those for which the classical PCP theorem is known to hold. (For instance, QMA-hard instances of the local Hamiltonian problem necessarily have a low degree.)

As I started writing this post, the workshop went on and the conjecture kept being brought up over and over again in different contexts. So rather than discuss the technical content, as I had originally planned, I decided to write a more free-flowing discussion of the conjecture, its meaning, and its validity. A lot of the arguments made below are borrowed from the lively discussions that took place during the workshop; if I do not attribute them directly it is only for fear of placing the wrong words in an august speaker’s mouth. I only claim ownership of excessively naive statements and misuses of physical terminology.

**1. Statement of the quantum PCP conjecture (qPCP)**

Let’s first see a formal statement of the qPCP; although there are variations, luckily this is the one aspect of the conjecture on which there is at least some kind of rough agreement. The conjecture makes a statement about the computational difficulty of a particular problem: approximating the ground state energy of a local Hamiltonian acting on $n$ particles. Formally, the input is given by operators $H_1,\ldots,H_m$, each acting on $k$ particles of dimension $d$: each $H_i$ is a Hermitian matrix of norm at most $1$ acting on $(\mathbb{C}^d)^{\otimes k}$. Here $k$ and $d$ are treated as small constants, though it may be worthwhile allowing $k$ or $d$ to grow with $n$; in particular considering $k, d = O(\log n)$ leads to interesting intermediate cases. One also often imposes that the $H_i$ are non-negative, and it is not hard to see this can be assumed to hold without loss of generality by adding a small energy penalty to each particle.

The qPCP asserts that, given the $H_i$, it is QMA-hard to approximate the smallest eigenvalue of the total Hamiltonian $H = \sum_{i=1}^m H_i$. What this means is that there exists an efficient procedure which transforms any quantum circuit $Q$ into a local Hamiltonian $H$ such that, if there exists an input state that is accepted with probability at least $2/3$ by $Q$ then the smallest eigenvalue of $H$ is small, whereas if $Q$ rejects every input with probability at least $2/3$ then all eigenvalues of $H$ are above some threshold, larger by an amount proportional to the number of constraints $m$. Informally, the conjecture thus states that approximating the energy of a local Hamiltonian is as hard, in the worst case, as deciding whether any given quantum circuit has an accepting input. Here is a formal statement.

**qPCP Conjecture.** There exist constants $\gamma > 0$, $k$ and $d$ such that the following holds. Given integers $n$, $m$ and $k$-local Hamiltonians $H_1,\ldots,H_m$, each of norm at most $1$ and acting on $k$ out of $n$ particles of dimension $d$, it is QMA-hard to find an additive $\gamma\, m$ approximation to the quantity

$$E_0 \,=\, \min_{|\psi\rangle}\, \langle \psi |\, \sum_{i=1}^m H_i\, |\psi\rangle,$$

where the minimum is taken over all unit vectors $|\psi\rangle \in (\mathbb{C}^d)^{\otimes n}$.
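To make the object in question concrete, here is a small numpy sketch (my own toy code, not from the post) that builds a random $k$-local Hamiltonian out of norm-at-most-1 Hermitian terms acting on contiguous qubits, and computes the smallest eigenvalue by brute-force diagonalization. The exponential cost of this computation in the number of particles is exactly what makes the complexity question interesting.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(0)
n, k, d = 6, 2, 2          # n particles of dimension d; k-local terms

def random_term(start):
    """A random Hermitian h with operator norm at most 1, acting on k
    contiguous particles beginning at `start` (identity elsewhere)."""
    A = rng.normal(size=(d**k, d**k)) + 1j * rng.normal(size=(d**k, d**k))
    h = (A + A.conj().T) / 2                  # Hermitian
    h = h / np.linalg.norm(h, ord=2)          # operator norm exactly 1
    mats = [np.eye(d)] * start + [h] + [np.eye(d)] * (n - start - k)
    return reduce(np.kron, mats)

terms = [random_term(i) for i in range(n - k + 1)]   # m = n - k + 1 terms
H = sum(terms)
# Smallest eigenvalue E0: exact diagonalization of a d^n-dimensional matrix,
# i.e. time (and memory) exponential in n.
E0 = np.linalg.eigvalsh(H)[0]
print(E0)
```

Since each term has norm at most 1, the ground energy always lies in $[-m, m]$; the conjecture is about approximating it to accuracy proportional to $m$ without paying this exponential cost.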

What is known about the conjecture? The basic result, due to Kitaev, is that qPCP is true for $\gamma = 1/\mathrm{poly}(n)$, $k = 5$, and $d = 2$. However, as stated above, the conjecture does not allow $\gamma$ to depend on $n$, and that is the whole challenge! The same is true of $k$ or $d$: if we allow either of them to be $\mathrm{poly}(n)$ then the conjecture is trivial (with constant $\gamma$). I believe the conjecture is open even if one were allowed to take $k = \mathrm{poly}\log(n)$ and $d = 2$. It is also known to be *false* if the $H_i$ commute and $k$ or $d$ are too small: for instance if $k = 2$ (Bravyi & Vyalyi) or $k = 3$ and $d = 2$ (Aharonov & Eldar). Certain classes of interaction graphs (the hyper-graph with a vertex associated to each particle, and a hyper-edge for each local Hamiltonian term) have also been ruled out. For instance, Brandao & Harrow showed that instances in which the interaction graph has a high degree are in NP, which follows from exploiting a phenomenon known as *monogamy of entanglement*. Monogamy is a strong limitation on the correlations that can be generated from entanglement; intuitively these correlations weaken as the number of parties between which they are shared increases. In practice, entanglement tends not to be robust to complex systems of overlapping constraints, which are precisely the type of constraints that arise in known proofs of the classical PCP theorem. One of the essential difficulties in proving the qPCP is to overcome this limitation: how to make a (quantum) constraint satisfaction problem robust while still allowing for highly entangled solutions.

**2. Why should we care?**

Before diving in and trying to prove (or disprove) the quantum PCP conjecture, it might be worth pausing and asking why we should spend any effort thinking about it in the first place. This may sound unnecessary — *clearly*, if so many clever researchers think it is such an important conjecture, it *must* be worth studying! All too often though the perceived importance of a research topic snowballs and can easily become disproportionate…see here for a (somewhat politically incorrect) discussion of what might be a good contemporary example in quantum information theory. The issue is a particularly thorny one for quantum computing: since our field does not (yet!) have many “tangible” outputs, we must take special care in choosing the problems on which we spend our efforts. Indeed, there is no lack of questions, nor even of beautiful or exciting or deep questions out there, but there are far fewer *good* questions: questions whose resolution or even exploration will advance the state of our knowledge in a meaningful direction. Is the qPCP such a question?

**(a) The wrong reasons**

The motivation given for introducing the conjecture in the first place is often that it is a natural analogue of the classical PCP theorem; since the PCP theorem was such an astounding success in theoretical computer science it must obviously be worth studying its “quantum” counterpart. (One way to think of the classical PCP theorem is as asserting qPCP for the case of local Hamiltonians that are all diagonal in the computational basis, with QMA-hardness replaced by NP-hardness.)

This is a dangerous argument. There is no a priori reason for the factors that made the PCP theorem a success to translate to its quantum counterpart. The strongest motivation for the classical PCP came from applications to hardness of approximation; as I understand it the very statement of the theorem was driven by the quest for such applications (see the excellent note by Ryan O’Donnell for a more detailed history of the PCP theorem). The origins of the theorem can be traced back to Babai, Fortnow, and Lund’s result characterizing the power of multi-prover interactive proof systems as MIP=NEXP. Although now considered a major landmark, when it appeared that result may have been thought of as fairly esoteric: researchers were still grappling with the very recent IP = PSPACE and the question of the power of multiple provers probably didn’t seem so essential. It is only later, driven by a quest for tighter applications to hardness of approximation, that specific parameters (number of queries, answer length, number of random bits/proof length) were identified as being more relevant than others. Successive attempts to drive down these parameters as much as possible eventually led to the PCP theorem.

The key in obtaining a powerful and *useful* result was thus to identify the appropriate resource parameters, and examine to what extent proof verification procedures could be made efficient with respect to these parameters. The statement of the qPCP as above emphasizes exactly the same parameters as the classical PCP: the importance is placed first on the constant gap $\gamma$, then on the constant locality $k$, and finally on the constant alphabet size $d$. What if there were other parameters of much higher relevance in the quantum setting? For instance, is a strict constant locality crucial, or would relaxing it to some kind of smoothly decaying action of each of the local Hamiltonians make the conjecture much easier to prove, without losing physical relevance? What if the most important parameter was the structure of the underlying constraint graph? Identifying the key parameters requires a good understanding of what we hope to get out of the qPCP conjecture: what would its consequences be, were it proven true? What would it teach us about physics? I don’t think we have good answers to these questions yet, but let me still go out on a limb and explore some of the basic physical underpinnings of the qPCP.

**(b) A better motivation**

The evolution of a quantum mechanical system is governed by Schrödinger’s equation, $i\hbar\, \frac{d}{dt}|\psi(t)\rangle = H\, |\psi(t)\rangle$. The variable in that equation is the wavefunction, or state, $|\psi\rangle$. The parameters are encapsulated in the Hamiltonian $H$, a Hermitian operator describing the total energy of the system. Eigenvalues of $H$ are the different values that the energy of the system can take, and its eigenvectors the corresponding states.

The object of condensed matter physics is the study of properties of condensed phases of matter, such as solids or liquids. Different approaches can be taken for this study: based on the laws of classical physics of course, but also for instance statistical mechanics. Starting in the 1930s an approach now known as quantum many-body physics has been increasingly successful. The idea here is to model the material as a lattice of interacting particles, each governed by the laws of quantum mechanics. Interactions between the particles are typically nearest-neighbor; in addition there might be some global constraints, such as a magnetic field, that act independently and identically on each of the particles. For example, in the Heisenberg antiferromagnet model neighboring particles tend to like having anti-aligned spins. Each of these constraints is modeled by a local Hamiltonian. Since the goal of condensed-matter physics is to study emergent properties of the lattice as the number of particles scales ($10^{23}$ qubits, anyone?), the big challenge faced by the field is the following: can one extract global properties of the material from such a fine-grained model in a computationally tractable way?
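As an illustration of this modeling (my own toy sketch, not from the post), here is the one-dimensional spin-1/2 antiferromagnetic Heisenberg chain written as a sum of nearest-neighbor Pauli terms, with its ground energy obtained by exact diagonalization for a short open chain:

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1j], [1j, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def bond(n, i, P):
    """P acting on sites i and i+1 of an n-qubit chain, identity elsewhere."""
    mats = [I2] * n
    mats[i] = P
    mats[i + 1] = P
    return reduce(np.kron, mats)

def heisenberg(n):
    """Open antiferromagnetic chain: H = sum_i X_iX_{i+1} + Y_iY_{i+1} + Z_iZ_{i+1}."""
    return sum(bond(n, i, P) for i in range(n - 1) for P in (X, Y, Z))

n = 8
E0 = np.linalg.eigvalsh(heisenberg(n)).min().real
# Per-bond ground energy; tends to 1 - 4*ln(2) ~ -1.77 in the thermodynamic
# limit (a Bethe-ansatz result).
print(E0 / (n - 1))
```

Note, in passing, an instance of the monogamy phenomenon mentioned earlier: each bond term on its own is minimized at energy $-3$ by a singlet of the two qubits it touches, but a qubit cannot be in a singlet with both of its neighbors, so the chain’s per-bond energy stays well above $-3$.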

This clearly identifies local Hamiltonians as the basic object of study in quantum many-body physics; describing properties of the quantum states that represent systems subject to such Hamiltonians is the goal of condensed-matter physics. The quantum PCP conjecture poses a specific question about local Hamiltonians: it asks about the computational difficulty of approximating their minimal energy. Why is that the right question?

One may first wonder, why the energy? The total energy represents the frustration of the system; the higher the energy, the more constraints (as enforced by the local Hamiltonians) are unsatisfied. A state with zero energy is called *frustration-free*: it simultaneously satisfies all constraints. A Hamiltonian is frustrated if the constraints it imposes are necessarily incompatible, so that it is impossible to satisfy all of them simultaneously. So the energy is a natural canonical parameter that can be associated to any physical system: this makes it into a natural “benchmark”, a quantity that, even though it does not by itself tell us much, we *ought to* be able to compute. More generally, one is often interested in computing local properties of the ground state, such as the detailed state of a small subset of particles (most of these systems have a lot of symmetries, including translation invariance, so which particles often doesn’t matter much). In general there is no direct relationship between the computational hardness of approximating the energy and that of computing a generic local property, but in practice the existence of an efficient algorithm for the one would be a good indication that there also exists an efficient algorithm for the other.
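A minimal concrete example of frustration (again my own toy code): three pairwise “anti-alignment” constraints around a triangle of qubits cannot all be satisfied at once, so the ground energy is strictly positive and no frustration-free state exists.

```python
import numpy as np
from functools import reduce

Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def anti_align(n, i, j):
    """Penalty term for qubits i and j of an n-qubit system: the projector
    onto their 'aligned' subspace, so energy 0 iff the two qubits anti-align."""
    mats = [I2] * n
    mats[i] = Z
    mats[j] = Z
    return (reduce(np.kron, mats) + np.eye(2**n)) / 2

# Three anti-alignment constraints around a triangle: any assignment of the
# three spins must align at least one pair, so the Hamiltonian is frustrated.
H = anti_align(3, 0, 1) + anti_align(3, 1, 2) + anti_align(3, 2, 0)
E0 = np.linalg.eigvalsh(H).min()
print(E0)   # 1.0 > 0: exactly one constraint must always be violated
```

Since this $H$ is diagonal, superpositions cannot help: the quantum minimum coincides with the best classical spin assignment, which violates exactly one of the three constraints.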

Alright, so the energy is a natural quantity to be looking at. But now, why specifically the *minimal* energy? The minimal energy is the energy of the system in its most stable state: its *ground state*, the state of the system at zero temperature. This state is well-defined mathematically, but it does not have any physical existence: any system always has a finite positive temperature. In fact, in quantum field theory the ground state is called the “vacuum”, and Frank Verstraete made the argument that there was nothing interesting to observe about it — indeed, it is just a pure vacuum. Who cares about the energy of the vacuum?

This is where the approximation allowed by the qPCP comes in: the conjecture does not require one to precisely compute the energy of the ground state, but rather that of any state whose energy is close enough to it. As one cools down the system its state will slowly evolve towards the ground state, and even if that state is never reached the system’s total energy will approximate that of its ground state. As such, and Dorit Aharonov repeatedly made this point, the qPCP is really asking about the computational difficulty of approximating the energy of any “low-lying state”, or of the “Gibbs state at room temperature”: a mixture of states that is stable at low but positive temperature (so-called “room temperature”: 20 or so Kelvin, room temperature you bet :-)).

Given these basic considerations, we could rephrase the qPCP as follows: given a local Hamiltonian describing a quantum many-body system, what is the total energy that the system has as its temperature is driven to zero? Or, how much frustration does the system of local Hamiltonians necessarily induce? This is slowly starting to look more like a real, physically motivated problem than the mathematical statement I gave at first, good! But maybe it doesn’t sound too exciting yet…let’s see if we can make the conjecture talk some more 🙂

**3. So, what is the qPCP conjecture really about?**

We just discussed the quantity that the qPCP is concerned with: the ground state energy of a local Hamiltonian. The real content of the conjecture is in the statement that finding even a rough approximation to that quantity is computationally hard — it is QMA-hard. The relevance of the conjecture for physics is often phrased as follows:

As a physical system is cooled down, its state slowly evolves towards the ground state of the Hamiltonian that describes the system. Hence nature is by itself “computing” the ground state, or at least a mixture of low-energy states, of that Hamiltonian. But the qPCP asserts that approximating the energy of any such state is computationally hard, even for quantum computers — how can this computational hardness be reconciled with the fact that nature apparently solves the same problem on a day-to-day basis?

As you read this paragraph you probably immediately found a number of problems with the statement that it makes (if not, read it again!). This is precisely why the qPCP conjecture is so interesting: although it sounds like a pretty straightforward statement, once one starts digging a little bit it is not so easy to wrap one’s head around it. I’ll discuss a couple issues one might raise, but there is certainly much more to be said!

**(a) Is local local enough?**

For the sake of the argument, let’s first assume that condensed-matter systems can indeed be driven to their low-temperature Gibbs state (a probabilistic mixture representing the state of the system at a given temperature) with a relatively small “mechanical” effort. In what sense does the qPCP imply that nature would be “solving” a computationally hard problem?

We should first check that, provided we are given access to such a state, its energy can indeed be estimated to the accuracy demanded by the qPCP. This, I believe, is in most cases not too hard to do: simply pick a local term at random and measure the frustration of the particles on which it acts; provided the Hamiltonian did describe a physical system in the first place there ought to be a way to perform the corresponding measurement. To obtain a good accuracy, though, we’ll have to prepare the system in the same state many times over ($O(1/\gamma^2)$ or so, with $\gamma$ the gap constant from the conjecture) and repeatedly perform the measurement. We could start debating whether nature is actually able to do this, but I think it’s fair to say that this part sounds fine.
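The estimator just described can be sketched as follows (my own toy simulation, with an artificial 2-qubit example): sample a term uniformly at random, simulate a measurement of it in its eigenbasis, and rescale the empirical mean by the number of terms. An accuracy of $\gamma \cdot m$ then requires on the order of $1/\gamma^2$ repetitions.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_energy(psi, terms, samples):
    """Estimate <psi| sum_i H_i |psi> by sampling: pick a term uniformly
    at random, simulate a measurement of it in its eigenbasis, and
    rescale the empirical mean of the outcomes by the number of terms m."""
    m = len(terms)
    outcomes = []
    for _ in range(samples):
        h = terms[rng.integers(m)]
        evals, evecs = np.linalg.eigh(h)
        probs = np.abs(evecs.conj().T @ psi) ** 2   # Born rule
        probs = probs / probs.sum()                 # guard against rounding
        outcomes.append(rng.choice(evals, p=probs))
    return m * np.mean(outcomes)                    # std error ~ m / sqrt(samples)

# Toy check on 2 qubits with three norm-1 terms.
Z, I2, X = np.diag([1.0, -1.0]), np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])
terms = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]
H = sum(terms)
exact = np.linalg.eigvalsh(H)[0]
psi = np.linalg.eigh(H)[1][:, 0]                    # exact ground state
est = estimate_energy(psi, terms, samples=2000)
print(est, exact)
```

The estimator is unbiased, and since each term has norm at most 1 each outcome lies in $[-1, 1]$, which gives the stated standard error.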

Now, as we already discussed, QMA-hardness means that any polynomial-size quantum circuit can efficiently be mapped to a local Hamiltonian whose ground state energy approximately encodes whether there exists an input that is accepted by the circuit with high probability. While qPCP conjectures the existence of such an efficient reduction, it places very few constraints on the resulting Hamiltonian. Can it actually be implemented? Here we are hanging on to the “locality” constraint as a measure of “physicality”, but it might not be such a convincing measure. For instance, it is often pointed out that some form of *geometric* locality in low dimensions is much more important than just any generic form of locality: it is not enough that the individual Hamiltonians act on few particles, but it should be possible to arrange the particles in, say, two or three dimensions so that no Hamiltonian acts simultaneously on particles that are far apart.

In addition, the local Hamiltonian could be highly inhomogeneous: each of the $H_i$ could have a different flavor. How can one ever hope to implement such constraints on a large scale? There certainly are natural inhomogeneous Hamiltonians, but it would be quite a coincidence if those were of the type output by our reduction! It seems like we should be taking into account an additional transformation: given a description of the local Hamiltonian, find a physical way to implement it, using the equipment at one’s disposal (the possibility of enforcing magnetic fields of different strengths, the use of particles with various properties, etc.). Admittedly, though, even if this were a hard problem it should still be in NP, so the QMA-hardness conjectured by the qPCP must lie elsewhere.

(Interestingly enough, I should point out that we do know that approximating the energy to within high (inverse-polynomial) accuracy of very simple nearest-neighbor translation-invariant Hamiltonians is QMA-hard (up to some minor caveats; see the paper by Gottesman and Irani). So we have a good starting point, and while the qPCP conjecture only asks for a generic form of locality, if it were proven I expect that further work would immediately go into finding out which are the simplest systems that are hard.)

**(b) Why is NP not hard enough?**

As we already saw the content of the qPCP is really in asserting QMA-hardness; the same statement for NP-hardness is known (and is given by the PCP *theorem*…). In making arguments about nature solving hard problems, does it make any difference at all that the problem be QMA-hard instead of NP-hard? Shouldn’t both types of problems be just as impossible for nature to solve? But the PCP theorem *is* true — doesn’t that mean we already *know* that Nature solves hard problems? Either that or the arguments outlined in the previous paragraph apply to instances that arise from the proof of the classical PCP theorem: they do not correspond to systems of constraints that can be implemented as the Hamiltonian of physical systems.

John Preskill asked a good question: what, if any, is the physical intuition for the PCP theorem being true? Here by “intuition” I think he meant not the intuition for the mathematical statement or its proof, but really whether one could have a physical way of thinking about the theorem. There was no answer to this question — I don’t have one — and maybe there is none, maybe the PCP theorem does *not* have any physical relevance?

We are thus faced with a number of possibilities. First, it could be that the qPCP is indeed making an absurdly strong statement, and cannot possibly be true. The fact that the classical PCP theorem sounds in most respects just as surprising makes that argument rather unconvincing. Second, it could be that qPCP is true, but has little or no physical relevance (in addition to whatever the classical PCP theorem may already be saying). If so, it will be interesting to understand why: what is the main difference between “natural” Hamiltonians, implementable in nature, and “hard” Hamiltonians, coming out of the qPCP reduction? Is it the geometry? The existence of a spectral gap?

Finally, it could also be that the conjecture *is* true and that it *is* making a statement about nature 🙂 What would that statement be? Let’s see how QMA-hardness differs from NP-hardness. Basically, QMA-hardness means that a certain quantity — the ground state energy — could be efficiently computed (on a quantum computer) given the help of a quantum “advice” state, but no *classical* advice would help (assuming the problem is not in NP). The most natural advice is the ground state itself, from which we can, at least in principle, efficiently compute the energy using a quantum computer. The fact that there would not be any classical polynomial-size witness suggests that the ground state should be *highly entangled*: if it were, say, a tensor product of independent states for every particle then we could easily represent it efficiently. So if we allow ourselves a few leaps of faith we can assert that the qPCP, if true, would imply that the ground state (or, and that is much stronger, that every low-temperature state) necessarily contains large amounts of entanglement: that state must be “irreducibly complex”. (One reason I am invoking leaps of faith here is that the ground state could have an efficient representation — and it has one, I just gave it — that does not allow a direct efficient computation of the energy.)
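To make the product-state part of the argument concrete, here is a sketch (my own code; the names and the example are mine) of why a tensor-product witness would put the problem in NP: the energy of a product state under a $k$-local Hamiltonian can be evaluated classically term by term, touching only the $k$ particles each term acts on, at cost polynomial in the number of particles.

```python
import numpy as np
from functools import reduce

def product_state_energy(states, terms):
    """Energy <psi|H|psi> of the product state psi = states[0] x ... x states[n-1]
    under H = sum of k-local terms. Each term is a (matrix, sites) pair and only
    sees its own k particles, so the cost is polynomial in n even though the
    full state lives in a d^n-dimensional space."""
    total = 0.0
    for h, sites in terms:
        local = reduce(np.kron, [states[i] for i in sites])
        total += float(np.real(local.conj() @ h @ local))
    return total

n = 40                                   # full Hilbert space: 2^40 dimensions
plus = np.array([1.0, 1.0]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
terms = [(np.kron(X, X), (i, i + 1)) for i in range(n - 1)]
E = product_state_energy([plus] * n, terms)
print(E)   # 39.0: each of the n-1 terms contributes <+|X|+>^2 = 1
```

The point of the qPCP is precisely that, for hard instances, no witness with this kind of “evaluate locally, sum up” structure should exist.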

This last, very appealing interpretation of the conjecture brings us dangerously close to another conjecture that was hotly discussed on the second day of the Simons workshop: the existence of so-called “area laws” for ground states of local Hamiltonians. This conjecture asserts almost the opposite of qPCP: that ground states of 2D gapped local Hamiltonians have an entanglement entropy, for any region, that scales as the boundary area of the region rather than its volume, i.e. they have “low entanglement”. There are important caveats in comparing that statement to qPCP: the area law “conjecture” (it is probably too debated to be granted the full status of conjecture; at present I would say it is more of a “question” than anything else) applies to Hamiltonians that are *geometrically local* and *gapped*, two restrictions that the qPCP does not enforce. Moreover, an area law for entanglement entropy does not immediately imply the existence of classical witnesses from which one could efficiently estimate the energy: while it is reasonable to think that it does imply the existence of an efficient classical representation of the ground state (such as PEPS), we don’t yet know if such representations allow for efficient computation of the energy.
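For intuition about the quantity an area law constrains, here is a small self-contained sketch (mine, not the post’s) computing the entanglement entropy of a pure state across a cut via its Schmidt decomposition, i.e. the singular values of the state reshaped as a matrix:

```python
import numpy as np

def entanglement_entropy(psi, n_left, n_right, d=2):
    """Von Neumann entropy (in bits) of the left block of a pure state on
    n_left + n_right qudits, via the Schmidt (singular-value) decomposition
    across the cut."""
    M = psi.reshape(d**n_left, d**n_right)
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2                             # Schmidt weights
    p = p[p > 1e-12]                     # drop numerically-zero weights
    return float(-(p * np.log2(p)).sum())

# A product state has no entanglement across the cut; a GHZ state has one bit.
prod = np.zeros(2**4); prod[0] = 1.0                      # |0000>
ghz = np.zeros(2**4); ghz[0] = ghz[-1] = 1 / np.sqrt(2)   # (|0000>+|1111>)/sqrt(2)
print(entanglement_entropy(prod, 2, 2))   # 0.0
print(entanglement_entropy(ghz, 2, 2))    # 1.0
```

An area law in 1D says this quantity stays bounded by a constant as the blocks grow; a volume law would let it grow linearly with the smaller block, which is the “irreducibly complex” regime the qPCP interpretation above points towards.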

**More to come…**

This discussion ended up being much lengthier (and somewhat more elementary) than I expected — apologies for the digressions! Not to mention that this is only the tip of the iceberg (well, even “tip” is generous here — let’s say the melting droplet on top of the tip of the iceberg :-)) All in all, there is quite an exciting world of possibilities to explore, and I am very much looking forward to next Spring’s semester.

Thomas, this outstanding essay is (as it seems to me) among the most lucid, informative, and interesting essays regarding quantum PCP theorems (that I have ever seen) in the quantum blogosphere. Please keep posting!

Perhaps in a day or two I will post more, but for now, the main message that you deserve to hear, by virtue of the hard work and careful thought that are so evident in your post, is appreciation and thanks.

Thanks, John, for the very warm comment 🙂

Thank you for writing this exciting article. I enjoyed reading it very much.

–“Frank Verstraete made the argument that there was nothing interesting to observe about it — indeed, it is just a pure vacuum. Who cares about the energy of the vacuum?”–

It is really interesting, and I do care, however, about spatially-entangled fluctuations of the local Hamiltonians in the pure vacuum state. The local energy fluctuation can indeed be measured. By using it, protocols of quantum energy teleportation (QET) have been proposed:

http://www.tuhep.phys.tohoku.ac.jp/~hotta/extended-version-qet-review.pdf .

The relation between the qPCP and QET is not yet clear to me and deserves further research, I think.

Hi MH — Pardon my ignorance, but what do you mean by “local energy fluctuation”: are you measuring the energy of small excitations above the ground state? Is this done for very particular types of Hamiltonians (i.e., can you reconcile your comment with MJK’s comment below…)?

Two arxiv preprints this week that consider “interesting” aspects of the vacuum are O’Connell’s *Two oscillators in a common heat bath* (arXiv:1302.5921) and Hastings & Freedman’s *Obstructions to classically simulating the quantum adiabatic algorithm* (arXiv:1302.5733).

Thank you for this lovely essay.

I had one question regarding your informal claim that:

“But the qPCP asserts that approximating the energy of any such state is computationally hard, even for quantum computers — how can this computational hardness be reconciled with the fact that nature apparently solves the same problem on a day-to-day basis?”

Is this meant only as being thought-provoking, or does some fraction of the community actually think that “nature solves the same problem on a day-to-day basis”? It seems quite clear to me that nature does NOT solve this problem generally. Markov chain mixing analysis of Metropolis/Glauber-type algorithms, as well as spin glasses, already provide a pretty strong intuition that nature sometimes (often?) takes a very long time to cool down to low temperature.

In fact, I am wondering whether this would not be one possible interpretation of the (q)PCP theorem: that it is not possible to build a generally efficient refrigerator. This seems to be suggested throughout your article, but never spelled out explicitly.

Thanks for your comment — I think there is a fine line there, and to be fair I am not experienced enough to tread it carefully. To answer your first question, as long as you interpret “the same problem” loosely enough, i.e. you allow a little flexibility with the issues regarding the nature of “hard instances” for qPCP that I discussed, then I think that, yes, this is a view that can be taken seriously in the community.

You make a very valid point though, and related issues were repeatedly raised during the workshop: in practice, reaching the ground states of many systems does indeed seem hard. This is why the constant approximation factor $\gamma$ that appears in the qPCP is crucial: it *seems* to imply that even computing properties of states of low — but not too low — energy, such as “Gibbs states at room temperature”, may be computationally hard. (Recall, qPCP is a *proven theorem* for small enough $\gamma$, scaling inverse polynomially with the system size.)

Because of that flexibility in the approximation, the conjecture is not quite going all the way towards proving the “hardness of refrigerator” problem that you are mentioning. My impression is that the temperatures required for the qPCP *can* be achieved; the question is, for what systems? I would be very happy to hear any further input on that question!

Hi Thomas, thank you for your response. By “local energy fluctuation” I mean, for example, two-point correlation functions of a local POVM at point A and a local Hamiltonian at point B for a many-body system in the ground state (not in any excited state with small energy). We are able to measure the two-point correlation functions in the ground state, at least in principle, with arbitrary precision. Interestingly, it is proven that the two-point function is related to the amount of `teleported energy’ at B, which can indeed be extracted out of the subsystem at B in the ground state by utilizing a local operation dependent on the results of a local measurement of another subsystem at A, corresponding to the above POVM at A. You can find a brief review of the argument of QET in section 2 of the paper: http://arxiv.org/pdf/1301.0485.pdf . Eq. (16) on page 7 is the formula for the teleported energy. QET itself can be implemented for various kinds of physical systems with a rather general class of Hamiltonian forms. (Please refer to http://www.tuhep.phys.tohoku.ac.jp/~hotta/extended-version-qet-review.pdf for the general argument.)

I do not think I’m able to directly reconcile my comment with MJK’s comment, but from a QET viewpoint, a “local cooling problem” can be posed. Indeed, you can find the argument on p. 47 of the review: http://www.tuhep.phys.tohoku.ac.jp/~hotta/extended-version-qet-review.pdf .

On that page, “the hardness-of-*local*-refrigerator problem” is solved by using a notion of ground-state entanglement breaking.


Re the question of physical intuition for the PCP theorem itself, perhaps some insight can be obtained by looking at random 3SAT formulas.

The PCP theorem (Håstad’s version) says that it is not feasible (for a classical or quantum computer, assuming BQP doesn’t contain NP) to distinguish between the case that a 3SAT formula is satisfiable and the case that one can satisfy at most a 7/8 + o(1) fraction of the constraints.

Feige hypothesized that even though in a random 3SAT formula one can satisfy at most a 7/8 + o(1) fraction of the constraints (i.e. the ground state has large energy), it is infeasible to rule out that in fact one can satisfy all of them (i.e. there is a frustration-free state).

So I guess random 3SAT instances (or their planted variants, if you plant a solution in a smart way) could give fairly simple, if not “natural”, examples of systems that take exponentially long to reach their ground state.