ITCS 2015

It might’ve been there for a while (it definitely should have been :-)), but I only just noticed: the list of papers accepted to ITCS 2015 is out! There were 45 accepts out of 159 submissions, a record rate for ITCS and a level of selectivity, 28%, comparable to that of STOC or FOCS (which typically accept slightly under twice as many papers, and have less than twice as many submissions). I had the pleasure and honor of serving on this year’s program committee (how else do you think I’d get a paper in?), and I was truly impressed by the overall quality of submissions. We could easily have accepted twice as many papers and still claimed a spot for ITCS as a top-tier conference in the theory of computation. So why didn’t we? Answering that requires thinking through what purpose we want ITCS, and more generally our constellation of theory (or theory-friendly) conferences, to serve. The question is being debated increasingly actively, though it has been on the table for a while (and has even featured on this blog…); I think it is an important one, and I’d like to share some thoughts.

STOC and FOCS have set the rhythm of any decent theorist’s academic year since time immemorial (well, almost — as Dick Lipton reminded us in his Knuth Prize lecture at FOCS, some of us do remember the time when the first editions of theory conferences were making their appearance). Their respective Spring and Fall deadlines are so entrenched that I was recently asked by a graduate student how one could come up with a new 6-month project every 6 months, and make sure to wrap it up by the next deadline. Of course in practice most people work on many projects in parallel, and decide which have a decent chance of being brought to a (reasonable?) state of completion by deadline time a few weeks (hours?) before the deadline. Nevertheless, these “flagship” conferences are there to constantly remind us of our bi-yearly sacrificial duty. The longer one has been going through the process, the harder it is to imagine any changes to the model: Wouldn’t any reduction in the rhythm affect our productivity and the speed at which ideas spread in the community? Conversely, wouldn’t any increase in rhythm lead to a serious risk of exhaustion? Isn’t the current system just right?

Of course not — indeed, I don’t know anyone who’d claim the current model is good. But I don’t know any two researchers who’d agree on how to change it either. Any substantial proposal seems to immediately run into such a barrage of resistance — “what about…?”, “what if…?” — that it is clear any change will have to be extremely gradual. The difficulty is exacerbated by the additional level of inertia imposed by the professional societies, ACM and IEEE, that administer STOC and FOCS respectively. In this respect the recent proposal by Boaz Barak and Omer Reingold had a lot going for it: Boaz and Omer put forward a very concrete proposal that carefully avoided any changes to the backbone structure of STOC and FOCS, preserving submission deadlines, independent committee evaluation, rough numbers of accepted papers, etc., and instead focused on the format of the actual event, in substance proposing a single yearly week-long “theoryfest” to replace the current bi-yearly 3-day gatherings. Unfortunately, from the echoes I’ve had of Saturday’s meeting at FOCS, it looks like even this moderate change is proving too much for the community to take, and the background grumble is likely to drown out the energetic spike proposed by Boaz and Omer.

ITCS, in contrast, should be largely free of such accumulated inertia. The conference series is still in its blooming youth: the coming edition, to be held in January at the Weizmann Institute in Israel, will be the 6th in the series. It is not attached to any professional society (though it does seem to be loosely affiliated with ACM’s SIGACT), and is instead run by a steering committee made up of prominent members of the theory community. Since its inception ITCS has been in permanent soul-searching mode: is there space for yet another theory conference? Wouldn’t the debate over a possible merger of STOC and FOCS indicate quite the opposite, that we theorists already have too many venues in which to express ourselves?

I would argue that there is, and that searching for this space can play an important role in defining the future of theory. Irrespective of changes in the format of STOC or FOCS, ITCS can continue to occupy a special niche: the “conceptual” niche (I don’t like the word — it is too overloaded — but I can’t think of a better one). ITCS is not there to mop up the excess production that didn’t make it into its big sisters. It is different, not only in its selection process but also, and just as importantly, in the format of the event itself. It should affirm this difference by continuing to build the reputation it has started to earn as a small, friendly, innovative conference, with a light, single-track program and presentations that lead to lively exchanges among the participants. Not everyone needs to go to ITCS every year — but, given the chance, most would go, and have a pleasant experience. You might not get to learn about all the recent hot results: theoryfest is made for that. But you’ll feel the pulse of the community and encounter emerging research directions you might not have been exposed to otherwise (for instance, because you went to the other parallel session, the one with talks on your favorite area…).

There is a broader picture to keep in mind with respect to the place of theory within the CS landscape at large. By its nature our field is bound to have its relevance regularly questioned: are we looking at the “right” problems? And even if we are, are the kinds of answers we bring relevant? Or has theory lost all contact with applications? ITCS could play an important role in addressing these questions. As a “conceptual” conference it is there precisely to bring to light promising avenues and novel directions in which our theoretical ideas and concepts may find stimulation and application. The conference’s vitality will be a sign that theory is able to continually re-invent itself. Many of us strongly believe that the language we speak is universal; the question then is only whether we’re able to use that language effectively and bring it to bear on the problems that our peers across the sciences care about.

If I am to judge ITCS’s vitality by the quality of this year’s submissions, I have no reason to be worried about the future of theory. As already mentioned, the crop was excellent, and decisions were difficult. I was also impressed by how seriously the “mission” of ITCS was taken into account during the reviewing process. The PC chair, Tim Roughgarden, put a strong emphasis on selecting papers that fit the single sentence in the ITCS call for papers that seeks to distinguish it from other conferences:

ITCS (previously known as ICS) seeks to promote research that carries a strong conceptual message (e.g., introducing a new concept or model, opening a new line of inquiry within traditional or cross-interdisciplinary areas, or introducing new techniques or new applications of known techniques).

This is not to say that strong submissions were rejected simply because their “conceptual message” was not apparent. That would be a mistake, for who are we to judge the presence, and even more so the future importance, of a new concept? Only time will tell. But given the large number of strong submissions, difficult decisions had to be made, and in many cases the discussion did seriously take into account what we perceived as the conceptual novelty, or appeal, of a submission. The debate was often extensive; I can assure you that most submissions received very careful attention. I wish the authors, or even the community at large, could get precise feedback on the discussions, which had the benefit of raising awareness, at least among PC members, of the above quote from the CFP.

We also had some discussions on the format of the conference; because these discussions happened after the formal publication of the CFP I’m not sure we can expect any big changes to the format this year, but I hope that they may lead to improvements in later years. Some of the suggestions that were floated around included having invited talks, varying the length of contributed talks, and having a poster session, possibly preceded by short 3-4 minute “advertisements” of the posters by their authors. Personally I think this is an excellent idea, and I hope it gets implemented in a later edition.

To get back to my opening comment — why did we accept so few papers, if the submissions were so good? Well, if you’re angry your paper got rejected, I can only promise that the accepted ones were at least as good — come and check them out! Indeed, in my mind (I am not pretending to speak for the whole PC here!) the main reason is this: ITCS will be single-track, it will be relaxed and friendly, and all presentations will be of high quality. We had to make decisions; some of them were right, some were wrong, and many could have gone either way. But we had to make them, and we made them for you. So we hope to see you in January 2015 in Israel, and that you’ll enjoy hearing the talks as much as we enjoyed reading and debating the papers!


Quantum PCP conjectures

Last Spring I took part in the Simons Institute’s semester on Quantum Hamiltonian Complexity. The semester was a great success, with an excellent batch of long-term participants and many fruitful interactions. The Institute asked me to write a short “Research Vignette” presenting, to a broad audience, an example of a scientific outcome of the programme. You can probably guess what topic I chose for my vignette… I am reproducing it below (the specific project I described led to this preprint). See the Simons’ website for the full newsletter, and in particular the other vignette by Adi Livnat, a participant in the concurrent programme on Evolutionary Biology and the Theory of Computing, suggestively titled “On Sex, Math, and the Origin of Life”.


 

The macroscopic properties of materials arise from the physical interactions between their constituent particles. For example, the use of the Pauli exclusion principle explains how the conductivity properties of a material follow from the energy band structure of its atoms. Techniques for performing the extrapolation from microscopic laws to macroscopic predictions form the basis of statistical mechanics. The implied statistical averaging, however, is insufficient to predict certain properties, such as superconductivity, for which subtle quantum mechanical effects need to be taken into account. Without the reduction in complexity granted by the application of the law of large numbers, the study of novel phases of matter quickly runs into a fundamental computational bottleneck. The complete description of an n-particle quantum system requires exponentially many parameters: How can one hope to make meaningful predictions if one cannot even write down a full description of the state of the system? Is materials science about to hit a computational barrier?

The Simons semester on Quantum Hamiltonian Complexity brought together theoretical physicists and computer scientists motivated by this problem. The goal of the semester was to formulate, and begin to address, the key complexity-theoretic issues that arise from the study of many-body systems in condensed-matter physics. A central conjecture, the so-called quantum PCP conjecture, crystallizes many of these issues, and the conjecture was hotly debated throughout the semester. During an open problems session held as part of one of the week-long workshops, I was prompted to formulate a new variant of the conjecture whose study led to a fruitful collaboration with Joseph Fitzsimons, another program participant. Before describing this new variant, I will introduce the central computational problem in quantum Hamiltonian complexity and formulate the quantum PCP conjecture.

The local Hamiltonian problem

The computational hardness of making predictions about the global properties of quantum many-body systems is embodied in a single computational problem, the local Hamiltonian problem. This problem, introduced by Kitaev as a direct quantum analogue of classical constraint satisfaction problems, asks whether a given Hamiltonian (i.e., a collection of physical constraints—magnetic field, two-body interactions, etc.) acting on n particles has a lowest-energy state whose energy is below or above a certain threshold. Formally, each constraint is specified by a positive semidefinite matrix {H_j} acting on a constant-size subset of the n particles. The energy of a quantum state {|\psi\rangle} is its overlap with the total Hamiltonian:

{E(|\psi\rangle) = \langle \psi | H | \psi \rangle, \qquad H = \sum_j H_j.}
Kitaev showed that the local Hamiltonian problem is complete for the complexity class QMA, the quantum analogue of NP: QMA is the class of decision problems that can be solved by a polynomial-time quantum computer given access to a quantum proof. Kitaev’s seminal result shows that a fundamental problem in physics, estimating the minimal energy of a local Hamiltonian, is computationally intractable. Assuming, as is widely believed, that QMA is a strictly larger class than NP, it also implies that the lowest-energy state does not have an efficient classical description from which its energy can be estimated. In particular, the state must be highly entangled; in other words it must possess complex quantum correlations spread across all its particles. Kitaev’s purely theoretical result thus has direct consequences for the properties of physical states: even the simplest such states (the lowest-energy state is the equilibrium state of the system as the temperature is driven to zero) display the most complex quantum phenomenon, many-body entanglement.
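To make the exponential bottleneck concrete, here is a minimal numpy sketch (my own toy example; the choice of nearest-neighbour ZZ couplings and single-site X fields is purely illustrative and not tied to any particular model). It builds a random 2-local Hamiltonian on n qubits and computes its ground energy by brute-force diagonalization of a 2^n-by-2^n matrix, exactly the kind of computation that becomes infeasible beyond a few dozen particles.

```python
import numpy as np
from functools import reduce

# Pauli matrices and the single-qubit identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(n, sites, ops):
    """Tensor the given single-qubit operators onto the listed sites of an n-qubit system."""
    factors = [I2] * n
    for site, op in zip(sites, ops):
        factors[site] = op
    return reduce(np.kron, factors)

def random_2local_hamiltonian(n, rng):
    """A toy 2-local Hamiltonian: nearest-neighbour ZZ couplings plus single-site X fields,
    all with random strengths (purely illustrative)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        H += rng.uniform(-1, 1) * embed(n, [i, i + 1], [Z, Z])
    for i in range(n):
        H += rng.uniform(-1, 1) * embed(n, [i], [X])
    return H

rng = np.random.default_rng(0)
n = 10  # already a 1024 x 1024 matrix; each additional qubit doubles the dimension
H = random_2local_hamiltonian(n, rng)
ground_energy = np.linalg.eigvalsh(H).min()
print(f"n = {n} qubits, ground energy = {ground_energy:.4f}")
```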

The quantum PCP conjecture

Kitaev’s hardness result applies to inverse-polynomial approximations to the minimal energy. What about weaker approximations? In cases where the energy scales extensively with the system size, it is natural to only require an approximation whose accuracy is proportional to the number of constraints acting on the system. Do such approximations remain hard to obtain, or could they be amenable to efficient, possibly quantum, algorithms?

Asking this question amounts to seeking a quantum analogue of the PCP theorem from classical complexity theory. The PCP theorem, a major breakthrough of the early 90s, asserts that constraint satisfaction problems such as 3SAT remain hard even under very strong promises on the satisfiability of the instance: Given a 3SAT formula, it is NP-hard to estimate the maximum number of clauses that can be simultaneously satisfied up to an accuracy that scales proportionally with the total number of constraints. Since the local Hamiltonian problem contains classical constraint satisfaction problems as a special case, the PCP theorem implies that constant-factor approximations to the minimal energy are NP-hard. The quantum PCP conjecture asks whether the problem is even harder: Could it be QMA-hard?
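For concreteness, here is one standard way the conjecture is stated (my paraphrase, not a quote from any particular source): there is a universal constant {\varepsilon > 0} such that it is QMA-hard, given a local Hamiltonian {H = \sum_{j=1}^m H_j} on {n} qubits in which each term acts on {O(1)} qubits and satisfies {0 \preceq H_j \preceq \mathbb{1}}, to decide whether {\lambda_{\min}(H) \le a} or {\lambda_{\min}(H) \ge a + \varepsilon m}, promised that one of the two holds. Kitaev’s result gives the same hardness with a promise gap of order {1/\mathrm{poly}(n)} instead of {\varepsilon m}.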

The status of the conjecture remains completely open. A positive answer would have important implications for the presence of entanglement in so-called Gibbs states, which are equilibrium states of the Hamiltonian at low but positive temperature. Indeed, interest in the conjecture partly stems from its apparent contradiction with the common physical intuition that, as temperature is increased above zero, disorder in the system tends to destroy the coherence required for maintaining entanglement. If this intuition were correct, then low-energy states would have classical descriptions that could be used to contradict the conjecture.

A new variant

Much of the strength of the PCP theorem comes from its versatility, and many equivalent formulations are known. One such formulation, closely tied to the theorem’s origins, uses the language of interactive proof systems. Here one thinks of the verification procedure (for, say, the satisfiability of a 3SAT formula given as input) as an interactive protocol between a polynomial-time verifier and all-powerful but untrusted provers. The PCP theorem implies that any language in NP can be verified through a single round of interaction with two non-communicating provers, to which the verifier sends logarithmic-length messages and receives a constant number of bits as answers. The possibility for such highly efficient proof checking (compared to reading off the linear number of bits describing a satisfying assignment) remains a wonder of modern complexity theory.

Interestingly, this formulation of the PCP theorem gives rise to a quantum analogue whose relation to the quantum PCP conjecture is a priori unclear. The new variant asks the following: Do all languages in QMA admit an efficient verification procedure of the above form, in which the verifier and communication may be quantum? Here it is important to also allow the provers to be initialized in an arbitrary entangled state, as this may be necessary to encode the QMA witness whose existence the provers are supposed to convey to the verifier.
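Schematically, and with the parameters stated loosely (my shorthand, not a statement from the vignette), the classical formulation reads {\mathrm{NP} = \mathrm{MIP}[2\text{ provers}, 1\text{ round}, O(\log n)\text{-bit questions}, O(1)\text{-bit answers}]}, while the new variant asks whether every language in QMA admits a one-round interactive proof with a constant number of provers, a polynomial-time quantum verifier, quantum communication, and provers allowed to share an arbitrary entangled state.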

Prompted by program participants Dorit Aharonov and Umesh Vazirani, I formulated this new variant, and the question of its relation to the traditional quantum PCP conjecture, during an open problems session that we had organized as part of the first workshop of the semester. The question immediately generated many ideas and suggestions from the audience. The problem is a tantalizing one. How can the existence of a global, n-qubit state be verified solely through the local snapshots obtained by the verifier in the interactive proof system? In the classical setting, consistency between the provers’ answers and a global proof is ensured by cross-checking the provers’ answers against each other through a complexity-theoretic implementation of the classic prisoner’s dilemma interrogation technique. In the case of a quantum proof, however, the presence of entanglement renders such cross-checking all but impossible, as the no-cloning principle prevents even honest provers from sharing “consistent” copies of the same quantum state; with a unique copy, the question of who owns which part of the proof becomes essential.

Joseph Fitzsimons, a participant from the Centre for Quantum Technologies in Singapore, suggested a scheme based on encoding the quantum proof using secret-sharing, and distributing the shares equally among the provers. I was excited by his idea, and we attempted to flesh out the details during the semester, continuing by Skype after Joe had left for Singapore. We were recently able to complete the soundness analysis of Joe’s suggested protocol, and our findings establish the second variant of the quantum PCP conjecture as a promising target for future work. Extending the naïve scheme that we have so far been able to analyze in order to obtain a protocol with better soundness guarantees is an exciting open problem. I expect its resolution to require the development of quantum codes with new properties, akin to the locally testable codes used in proofs of the PCP theorem, but possibly with an even more local twist. This would be an exciting outcome, independently of the eventual validity of the conjecture.

These explorations were made possible by the diverse mix of participants in the semester and their willingness to engage with one another. As I was stating my question in the open problems session, encouraged to do so by other participants, I more or less firmly believed that it could only lead to a dead end—but this did not take into account the generous creativity of my fellow participants!


QIP 2015 papers

The list of papers accepted to QIP 2015 has just been posted. In an attempt to make sense of the program I scouted for the papers on the arXiv and list what I found below, loosely organized according to an arbitrary categorization. (Please don’t take this too seriously…clearly many papers would have fit in more than one section, in which case I made an arbitrary decision; I also had to stretch some categories a bit in order to accommodate a fairly diverse program.)

As you can see, this year’s edition, at a standard 39 accepts by my count (including two merges), promises a healthy showing of algorithms, and even more so of quantum channel theory. We’ll also be treated to a good dose of (computational and topological aspects of) many-body physics, together with some more exotic (and as yet undetermined… a few submissions are still missing arXiv links; shame on you!) phases.

We’re promised that the technical abstracts will be made available online. For some reason the PC chair gave us a month to provide updated versions, so we’ll have to wait a bit before we learn what some of the hidden gems of the program are about…

Quantum algorithms and query complexity:

5. Ashley Montanaro. Quantum pattern matching fast on average

12. Francois Le Gall. Improved Quantum Algorithm for Triangle Finding via Combinatorial Arguments

31. Ryan O’Donnell and John Wright. Quantum Spectrum Testing

71. Han-Hsuan Lin and Cedric Yen-Yu Lin. Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester

147. Kirsten Eisentraeger, Sean Hallgren, Alexei Kitaev and Fang Song. A quantum algorithm for computing the unit group of an arbitrary degree number field

Error correction:

9. Fernando Pastawski and Beni Yoshida. Fault-tolerant logical gates in quantum error-correcting codes

89. Hector Bombin. Gauge colour codes and 90. Hector Bombin. Single-shot fault-tolerant quantum error correction

151. Courtney Brell. Self-correcting stabilizer quantum memories in 3 dimensions or (slightly) less

Thermodynamics:

99. Henrik Wilming, Rodrigo Gallego and Jens Eisert. Universal operations in resource theories and local thermodynamics

Pseudorandomness and Cryptography:

103. Mario Berta, Omar Fawzi and Volkher Scholz. Quantum-proof randomness extractors via operator space theory

154. Richard Cleve, Debbie Leung, Li Liu and Chunhao Wang. Near-linear construction of exact unitary 2-designs

174. Carl Miller and Yaoyun Shi. Universal security for quantum contextual devices

Communication and channel theory:

28. Stefan Baeuml, Matthias Christandl, Karol Horodecki and Andreas Winter. Limitations on Quantum Key Repeaters

43. Toby Cubitt, David Elkouss, William Matthews, Maris Ozols, David Perez-Garcia and Sergii Strelchuk. Unbounded number of channel uses are required to see quantum capacity

45. Dave Touchette. Direct Sum Theorem for Bounded Round Quantum Communication Complexity and a New, Fully Quantum Notion of Information Complexity

47. Marco Piani and John Watrous. Einstein-Podolsky-Rosen steering provides the advantage in entanglement-assisted subchannel discrimination with one-way measurements

106. William Matthews and Debbie Leung. On the power of PPT-preserving and non-signalling codes

115. Runyao Duan and Andreas Winter. Zero-Error Classical Channel Capacity and Simulation Cost Assisted by Quantum No-Signalling Correlations

Information theory:

54. Isaac Kim. On the informational completeness of local observables

49. Andrea Mari, Vittorio Giovannetti and Alexander S. Holevo. Quantum state majorization at the output of bosonic Gaussian channels

193. Bartek Czech, Patrick Hayden, Nima Lashkari and Brian Swingle. The information theoretic interpretation of the length of a curve

Complexity theory:

63. Joseph Fitzsimons and Thomas Vidick. A multiprover interactive proof system for the local Hamiltonian problem

189. Sergey Bravyi and Matthew Hastings. On complexity of the quantum Ising model

Architectures and implementations:

69. Neil J. Ross and Peter Selinger. Optimal ancilla-free Clifford+T approximation of z-rotations

132. Adam Bouland and Scott Aaronson. Generation of Universal Linear Optics by Any Beamsplitter

191. Joel Wallman and Steve Flammia. Randomized Benchmarking with Confidence

Simulation:

78. Dominic Berry, Andrew Childs and Robin Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters

Foundations:

74. Nicolas Delfosse, Jacob Bian, Philippe Guerin and Robert Raussendorf. Wigner function negativity and contextuality in quantum computation on rebits

77. Salman Beigi and Amin Gohari. Wiring of No-Signaling Boxes Expands the Hypercontractivity Ribbon

111. Rafael Chaves, Christian Majenz, Lukas Luft, Thiago O. Maciel, Dominik Janzing, Bernhard Schölkopf and David Gross. Information-Theoretic Implications of Classical and Quantum Causal Structures

Local Hamiltonians and many-body states:

76. Michael Kastoryano and Fernando Brandao. Quantum Gibbs Samplers: the commuting case

86. Xiaotong Ni, Oliver Buerschaper and Maarten Van Den Nest. A non-commuting Stabilizer Formalism

125. Fernando Brandao and Marcus Cramer. A Berry-Esseen Theorem for Quantum Lattice Systems and the Equivalence of Statistical Mechanical Ensembles

137. Mehmet Burak Şahinoğlu, Dominic Williamson, Nick Bultinck, Michael Marien, Jutho Haegeman, Norbert Schuch and Frank Verstraete. Characterizing Topological Order with Matrix Product Operators and 176. Oliver Buerschaper. Matrix Product Operators: Local Equivalence and Topological Order

139. Ramis Movassagh and Peter Shor. Power law violation of the area law in quantum spin chains

204. Dorit Aharonov, Aram Harrow, Zeph Landau, Daniel Nagaj, Mario Szegedy and Umesh Vazirani. Local tests of global entanglement and a counterexample to the generalized area law

Misc:

15. Masahito Hayashi. Estimation of group action with energy constraint


Fall teaching

Caltech’s academic year started this week, with Monday being the first day of instruction for the Fall quarter (Caltech’s quarter-based calendar, as opposed to the more usual division into semesters, explains why we’re starting a month later than many others). So even though I moved to Pasadena in early June, it’s only now that I’m getting to experience the buzz of a surprisingly lively campus (given the small undergraduate population of fewer than 1,000), as opposed to the relative quiet of the summer. Having moved in the early summer helped a lot with settling in, and I now have a functional office (got the last piece – the chair – today), a studio (one of the perks of being at Caltech!), and a stream of students coming in to ask for signatures :-)

Now, the first week of the quarter means, well, let’s get back to business! A lot has happened since the last post I wrote on this blog (including many, many aborted drafts: when I saw what the last thing I wrote was, I couldn’t believe I hadn’t brought myself to post anything in the meantime), but I’ll have to settle for the status quo and look forward, rather than backward, for regular postings. Caltech’s IQIM already maintains an excellent blog, to which I’ll attempt to contribute. But this blog will live on for less scientific, or less directly quantum-related, posts, so stay tuned! Here’s some brief news on the start of the quarter:

If only I could get inspiration from my predecessors!

So, the first week of instruction means, well, the first week of instruction. Someone has to teach something to the hungry crowds, and this quarter I’ll make my own modest contribution by teaching CS286: Topics in Computer Science, which I bravely (foolishly?) decided to entitle “Around the Quantum PCP Conjecture”. The goal of the class is to use the conjecture as a backdrop, a motivating theme: not one to be approached directly, but one that suggests interesting questions that can be studied for their own sake.

My plan is to start by reviewing the classical PCP theorem, in enough depth to give a reasonable idea of the kind of ingredients that go into both known proofs of the theorem, though clearly without having the time to cover either of them completely. This week we (almost) covered NP ⊆ PCP(poly,1). Next week we’ll see a few of the ideas that go into Dinur’s proof. The next step will be to give an introduction to the area of quantum Hamiltonian complexity, including Kitaev’s QMA-completeness result. We’ll then be in a position to state the conjecture and discuss some of its implications.
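As a taste of what goes into NP ⊆ PCP(poly,1): the exponential-size PCP is based on the Hadamard (linear-function) encoding, and its workhorse is the Blum-Luby-Rubinfeld linearity test. Here is a minimal sketch of that three-query test (toy parameters and a made-up corrupted function, purely for illustration):

```python
import random

def blr_test(f, n, trials=200):
    """Blum-Luby-Rubinfeld linearity test over GF(2)^n: pick random x, y and check
    f(x) + f(y) = f(x + y) (mod 2). Three queries to f per trial."""
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        xy = [(a + b) % 2 for a, b in zip(x, y)]
        if (f(x) + f(y)) % 2 != f(xy) % 2:
            return False
    return True

n = 8
a = [random.randint(0, 1) for _ in range(n)]  # hidden coefficients of a linear function <a, x>

def linear(x):
    return sum(ai * xi for ai, xi in zip(a, x)) % 2

def corrupted(x):
    # a randomly corrupted version of the linear function, flipped 30% of the time
    return (linear(x) + (1 if random.random() < 0.3 else 0)) % 2

print("linear function passes:   ", blr_test(linear, n))     # True
print("corrupted function passes:", blr_test(corrupted, n))  # almost certainly False
```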

Once the background is properly set, I’d like to spend the six remaining weeks of the quarter diving deep into two or three current research topics that are associated with the conjecture but that we’ll study independently. The three I have in mind are quantum multiprover interactive proofs, area laws, and de Finetti theorems in quantum information theory. Each of these topics would be well worth a full semester course by itself. Instead, I’d like to pick just one particular aspect, one result, and describe the simplest possible non-trivial instantiation of that component or result. The hope is that such a depth-first, rather than breadth-first, exploration will still give the students some interesting insight into the area and motivate them to continue the exploration on their own.

We’ll have to see how this plays out. The first two lectures had a very mixed audience, from first-year undergraduates (courageous!) to full professors, by way of graduate students and postdocs in both CS and physics; no two of them have matching backgrounds. This was a reason for choosing the “quantum PCP” topic in the first place: my hope is that both CS and physics students can find some angle of interest in the theme, drawing both crowds around the same preoccupation. It’s going to be challenging to keep everyone together, but it could also be a very fun and rewarding experience… I’m looking forward to it!


Science funding


Hard at work in Puerto Rico

Barely back from a workshop organized in Puerto Rico by Ryan O’Donnell, Krzysztof Oleszkiewicz and Elchanan Mossel, and lavishly funded by the Simons Foundation, I am greeted on my BART ride home (thanks, wifirail) by a NYTimes piece, Billionaires With Big Ideas Are Privatizing American Science, featuring Jim Simons among many others. I had planned to blog about the excellent workshop, but this task has already been pre-empted by Li-Yang Tan’s comprehensive posts over at Ryan O’Donnell’s blog. So instead I will pick up on the NYTimes piece, which, although lacking in verifiable facts and hard data, raises some interesting issues. Being entirely funded by the Simons Foundation through the Simons Institute in Berkeley (though I should say I don’t know how much of my salary comes directly from the foundation, and how much is contributed by UC Berkeley or other sources), I am, whether I like it or not, an integral part of the debate, and I find it worthwhile to give it some thought.

In fact, I do take to heart the question of where my funding comes from. Perhaps this is due to the somewhat unusual circumstances of the first funding I ever received, from the French government. I was fortunate to do my undergraduate studies at the École Normale Supérieure in Paris, one of France’s (in)famous “Grandes Écoles”. One of the perks of ENS is that all students receive a modest monthly salary, the result of our being made “civil servants” upon entering the school. As a foreign national (I am Belgian) I found this “unreserved confidence” granted me by the French state (a job for life solely based on my results in some math exam!) somewhat humbling. Paid to study! Did they really think I would actually produce something useful, or was the French government just indulging its characteristic largesse? (Obviously, the latter :-)) The funding did come with some conditions: in exchange for it I promised ten years of work to the government (this includes working as a teacher, or researcher, for any French public organization). I still owe them six years (the four undergraduate years count towards the total), but, shhhhht, so far they haven’t bothered to come after me…

Since those years I have been continuously supported by public funds, mostly from the French, Canadian, Singaporean and American governments, under what I can only describe as wildly unrestrictive conditions. What justifies such unreserved trust? Behind the governments are taxpayers, some of whom voted for, others against, devoting a small percentage of their income to supporting researchers like myself. Although it may sound a little naïve, I am proud of being awarded such confidence, and believe it is important to understand why and for what purpose it was given, and to do my best to deliver on this purpose. (Note that here I am taking “purpose” in a strongly idealized sense, setting aside the cynical politics of science funding and instead focusing on what I perceive as the “right” motivations for funding science.) I owe a lot to public funding, and aside from the Simons Foundation this semester I have not yet received much private support, although this may change in the future.

Back to the NYTimes piece. Are big private donors’ efforts to “leave their mark” by directing huge amounts of money into research on one particular area with large potential impact — curing a specific disease, saving a species, flying civilians into space, etc. — a threat to the “purity” and “independence” of scientific research? Are such efforts anything more than a spoiled child’s whim? As a scientist on whom relatively small but nevertheless impressive amounts of one such donor’s money have recently been expended with virtually no direct oversight (witness the Puerto Rican resort for last week’s workshop, which I won’t dare link to), it would perhaps be disingenuous of me to be dismissive of the enterprise. But still… what do they know about fundamental research? What makes them qualified to influence the future of science? The billionaires cited in the NYTimes article have made their fortunes in a variety of industries: software, oil, housing, pharmaceuticals, investment. Aren’t they doing more harm than good in throwing their checks around, paying others to achieve feats they will boast about at cocktail parties? Should we, scientists armed with the purest ideals, fall prey to the billionaires’ fancy?

I don’t think it’s that simple. The NYTimes piece presents some of the drawbacks of private funding, such as skewing research toward trendy areas or diminishing public support for federal science funding. These are real risks, and identifying them is the first step in limiting their effect. The potential benefits, however, may far outweigh them. I am going to take a very idealistic (who said provocative?) perspective: private philanthropy at its best is the way science (and more generally the arts) should be funded. What is the purpose of science? It is a means to advance human knowledge. (There are others, such as philosophy or the arts, that are just as important or successful, but here I will focus on science.) Given such a grand quest and humanity’s finite resources, the important questions are: which directions, which challenges, should be privileged, and who should be chosen to tackle them? Perhaps just as important, who should be empowered to decide on the answers to those questions: an all-powerful government (and its bureaucrats), or the scientists themselves?

Jim Simons

Neither. The decision-making process of any big government is unfathomable; there is no reason to think most bureaucrats are particularly qualified to make the funding decisions their appointment entitles them to make. Paradoxically, their job might even make them less qualified than average at making those decisions (here I am thinking of high-level decisions on research directions or paradigms to pursue, rather than low-level decisions as to which particular grant applications should be funded). Indeed, it is hard to imagine that even the best-intentioned employee of any big governmental agency would not eventually fall prey to a strong bias in favor of a very specific kind of proposal — say, one that emphasizes “impact”, or “public outreach”, or whatever the agency’s current doctrine is. Scientists suffer from a similar bias, though it is generated by the antipodal situation: they are too qualified to be the ones making decisions about the future of science. Sitting right at the heart of the scientific process itself, and in spite of our grand claims to the contrary, our perspective is inevitably near-sighted. We are strongly biased in favor of directions and ideas that, although they may appear very bold, are rooted in current techniques and scientific understanding, and we are less open to the kind of bold, wild projects that only the more “ignorant” could propose.

Instead, I would like to argue that those best placed to make paradigmatic decisions about fundamental research are strong-willed individuals, outsiders, who have a high but non-vital stake in its outcome. Note how this brings us close to, but not quite at, the claim that science funding should be decided directly by popular vote. I don’t place much trust in democratic voting; it is too subject to manipulation and places too important a stake in the hands of the many who simply do not care, or are easily led to care for the wrong reasons. Hence the insistence on “individuals” — decisions should be made by a few responsible persons rather than an amorphous group. I further strongly believe decisions should be taken by people who care about them; there is no reason to make those who don’t care a part of the process. I insist on “outsiders” simply because they are the only ones really free to make the kind of non-obvious choices necessary for scientific innovation. Finally, by “having a stake in the outcome” I mean that self-appointed decision-makers should be allowed only if they have a strong personal incentive to take the matter seriously; no “strong-willed dilettantes” please!

So, private donors anyone? There are some clear risks, many of them exposed in the NYTimes article (and in a Nature editorial linked to from the article), and I am certainly not advocating a take-over of the whole decision process by private individuals. Private donors at first sound particularly undesirable, as they may have a very specific kind of stake in the process, emphasizing “return on investment”, as well as a directive approach more appropriate to a business environment. It is nevertheless interesting that the NYTimes piece makes it clear private donors often have very personal motivations for investing in a particular kind of science: they (or their close relatives) suffer from a certain rare disease, or they have a lifelong dream of achieving a particularly challenging technical feat (e.g. flying into space), or maybe they had a life-changing personal encounter with this or that scientist, etc. These are all great reasons to promote a particular kind of science. In general I disagree with the image of scientific progress as an objective path towards a greater good. Science has always been rooted in a particular social and political context; there is no point in uprooting it, quite the opposite. Giving some amount of decision-making power back to individuals who care may be one of the best ways to ensure science accomplishes what it’s best at: helping human beings discover themselves and their world.

I think some amount of private decision-making (rather than just funding) may also help us scientists stay on our toes and remember why we are here, why we are given such trust; trust should never be unconditional. I don’t think it does us a favor to be awarded the comfort of almost-guaranteed support, in the form of “small grants”, for almost any research proposal we may come up with. Instead I believe we should be ready to fight for, and defend, our ideas and research proposals. Comfort is never helpful. If this means having to pitch our ideas to individuals, then so be it. At least they actually take the issues to heart; much better than a double-blind interaction with some anonymous committee. As for scientific freedom and integrity, I trust us scientists to continue making it clear that we ultimately are the ones in charge; as long as we keep our reputation up — admittedly a perpetual fight — I don’t see the balance tipping in the wrong direction.

Alright, so perhaps this was a little provocative :-) What do you think?


Workshop on “Quantum games and Protocols” at the Simons

** Update: videos are now up! **

This past week saw the first of three week-long workshops that will punctuate the semester on quantum Hamiltonian complexity at the Simons Institute (I’ll count the boot camp as the zeroth such workshop). Dorit Aharonov, John Watrous and I co-organized the workshop around the theme of “quantum games and protocols”. The general idea was to investigate the strengths and limitations of quantum machines — depending on the context, quantum computers, quantum provers, quantum adversaries, etc. — in interactive scenarios, both quantum and classical. This included questions such as the possibility of delegating quantum computations through classical means, the problem of verifying quantum mechanics, the power of multi-prover interactive proofs with entangled provers, the quantum PCP theorem, two-party cryptography, device independence, and more!


The Simons auditorium (courtesy D. Nagaj)

We were extremely fortunate to have an absolutely excellent line-up of speakers for the workshop. Each of them put a huge amount of effort into preparing talks that would be accessible to a relatively broad audience — complexity theorists, experimentalists, mathematicians, cryptographers — while still managing to distill and communicate the “expert’s nugget”. Like all talks given in the Simons’ main auditorium, they were expertly recorded, and I can only highly recommend watching the videos. Here is a partial set of highlights from the workshop. (Right now the links only point to each talk’s abstract, where the recorded videos will appear in about a week; in the meantime the talks can be viewed on the Simons’ ustream channel by navigating to the correct date and time, inferred from the workshop schedule.)

Umesh Vazirani gave a very nice, focused introduction to the problem of testing for “quantumness”. Many explanations are often put forward to justify the power of quantum computing: exponential-sized vector spaces, complex amplitudes, the measurement rule, etc. Umesh argued that none of these ingredients by itself is sufficient to distinguish quantum states from probability distributions; rather, he pointed to the subtle combination of entanglement, the observers’ freedom in choosing a measurement basis, and the quadratic nature of the measurement rule — {\sin^2\theta} vs {\theta} — as the “magic combination” that underlies the irreducible “quantumness” of our world. Umesh then broadened the scope significantly by outlining some of the far-reaching implications of this simple difference: by allowing for the rigid testing of quantum states and measurements through the CHSH game, this particular form of “quantumness” leads to a host of applications, from randomness certification to device-independent key distribution and delegated computation.
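As a small illustration of the quadratic {\sin^2\theta}-vs-{\theta} effect (my own toy computation, not taken from the talk): one can enumerate all deterministic classical strategies for the CHSH game and compare with the Tsirelson bound achieved by measuring a shared EPR pair in bases rotated by {\pi/8}.

```python
import itertools
import numpy as np

# CHSH game: the referee sends bits x, y; the players answer a, b and win iff a XOR b = x AND y.

# Best classical (deterministic) strategy: enumerate all functions a(x), b(y).
best_classical = max(
    sum(1 for x in (0, 1) for y in (0, 1) if a_strat[x] ^ b_strat[y] == (x & y)) / 4
    for a_strat in itertools.product((0, 1), repeat=2)
    for b_strat in itertools.product((0, 1), repeat=2)
)

# Quantum value (Tsirelson's bound), achieved by measuring a shared EPR pair in bases
# rotated by pi/8: the quadratic measurement rule turns an angle of pi/8 into cos^2(pi/8).
quantum_value = np.cos(np.pi / 8) ** 2

print(f"best classical winning probability: {best_classical:.4f}")  # 0.7500
print(f"quantum winning probability:        {quantum_value:.4f}")   # ~0.8536
```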

Daniel Nagaj delivered a fast-paced and beautifully illustrated talk on the diverse clock constructions used in proofs of QMA-hardness for restricted classes of local Hamiltonians. Daniel argued that all known (and foreseeable) constructions of clocks necessarily have a spectral gap that closes as {O(L^{-2})}, where {L} is the number of gates in the encoded computation. Although the exact scaling could possibly be fine-tuned for each specific construction, it places a strong limitation on any attempt at constructing a local Hamiltonian that would satisfy the quantum PCP desiderata. I found Daniel’s talk very helpful in clarifying why known QMA-hard instances have a closing promise gap. The current reductions all go through the analysis of the low-lying eigenspaces of sums of local Hamiltonians, where we often think of one of the Hamiltonians as a small perturbation of the other. The resolution to which this analysis can be done depends on the spectral gap of the larger Hamiltonian; the smaller this spectral gap, the smaller the promise gap one will be able to maintain. Since the Hamiltonians that are manipulated, such as the clock Hamiltonian, tend to have vanishing spectral gaps, this is reflected in the final promise gap (watch Daniel’s talk — in slow motion — for more details!). As we know, large spectral gaps tend to impose strong structure, and this appears to be a significant difficulty on the way to quantum PCP.
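To see where the {O(L^{-2})} comes from in the very simplest case, here is a toy computation of my own, under the drastic simplification that all gates are identities, so that the Feynman-Kitaev propagation term reduces, on the clock register, to half the Laplacian of a path graph:

```python
import numpy as np

def propagation_gap(L):
    """Spectral gap of the Feynman-Kitaev propagation term for a 'circuit' of L identity
    gates, restricted to the clock register: one half of the Laplacian of a path graph
    on L + 1 clock states. The uniform superposition over clock states has energy zero."""
    dim = L + 1
    H = np.zeros((dim, dim))
    for t in range(1, dim):
        H[t, t] += 0.5
        H[t - 1, t - 1] += 0.5
        H[t, t - 1] -= 0.5
        H[t - 1, t] -= 0.5
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

for L in (10, 20, 40, 80):
    gap = propagation_gap(L)
    print(f"L = {L:3d}: gap = {gap:.5f}, gap * L^2 = {gap * L**2:.3f}")
```

The printed product of the gap with {L^2} stays roughly constant (it tends to {\pi^2/2} for this toy matrix), consistent with the {O(L^{-2})} closing of the gap that Daniel described.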

Fernando Brandao described a striking implication of his recent de Finetti theorem with Aram Harrow. Their result essentially implies that “naïve gap amplification” is impossible in the quantum setting, in the following sense: any transformation of a local Hamiltonian that simultaneously (i) at least squares the degree of the constraint graph and (ii) at least squares the local dimension of the qudits must necessarily reduce the (relative) promise gap. This is highly counter-intuitive because classically, the operation which takes a constraint satisfaction problem and maps it to its “square” through parallel repetition achieves (i) and (ii) and furthermore increases the promise gap by a factor of roughly {2}. Indeed, this is exactly the goal of parallel repetition; the Brandao-Harrow result shows that this most direct method of amplifying the gap will not work in the quantum setting.
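The classical factor of roughly {2} is just the arithmetic of squaring a small failure probability: if the best assignment violates an {\varepsilon} fraction of the constraints of {G}, then heuristically a pair of independent constraints is violated with probability {1-(1-\varepsilon)^2 = 2\varepsilon - \varepsilon^2 \approx 2\varepsilon} for small {\varepsilon}. (This back-of-the-envelope glosses over the fact that assignments to the repeated instance need not be product assignments, which is precisely what makes rigorous gap amplification non-trivial.)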

Tuesday morning featured two talks exploring the hardness of multi-prover interactive proof systems. The first, by Dana Moshkovitz, presented recent progress on the “sliding scale conjecture”, which aims to devise PCPs with an optimal trade-off between the verifier’s randomness, the alphabet size, and the soundness. Dana hasn’t resolved the conjecture (yet), but she presented strong progress by describing a low-degree test with very good soundness parameters. The test is obtained through the parallel repetition of existing low-degree tests, but the analysis goes through a new way of thinking about the resulting test, as a “curve vs curve” (instead of “line vs line”) test, where the curves intersect in multiple points. The second talk was given by Zhengfeng Ji. Zhengfeng presented a comprehensive overview of the few hardness results that are known for the case of entangled provers, including a nice description of the key techniques used in the proofs. These include the design and analysis of various small “sub-games”, such as the consistency game, the confusion game, etc., which I hope will find further applications (see also a more technical whiteboard talk I gave on this topic at the boot camp). Zhengfeng’s talk was followed by a lively discussion on the relationship between inequivalent quantum versions of the classical PCP theorem (video link to come?).

Carl Miller and Henry Yuen presented exciting recent work on device-independent randomness certification and randomness amplification. Carl described how a very recent result, joint with Yaoyun Shi, can be understood as a way to constrain a device to perform anti-commuting measurements, thereby generating randomness irrespective of the underlying state. Henry showed that existing randomness expansion and certification protocols can be composed with themselves in a way that allows for unbounded randomness expansion — the procedure can run indefinitely, with the same constant number of devices, while producing an ever-increasing supply of trusted random bits!

Ran Raz gave a talk on recent joint work with Yael Kalai and Ron Rothblum that I really like. They reduce the task of devising a computationally secure one-round delegated computation scheme for languages computable in time {t(n)} to that of creating a multi-prover interactive proof system for the same class of languages with the strong property of being sound against non-signaling provers. This reduction is based on previous work, and the meat of their new result lies in devising such an MIP for all languages in EXPTIME: the MIP involves polynomially many provers running in exponential time, while the verifier itself runs in polynomial time. Their construction is very involved and combines a number of new ideas to control the soundness error in the protocol; perhaps some of these ideas could be helpful in analyzing entangled-prover interactive proof systems.

David Perez-Garcia gave a very clear talk on entanglement in many-body systems; more specifically, on the problem of classifying phases of matter and how the discovery of phases with topological order provides new motivation, as well as new difficulties, for this task. David’s talk gave the best explanation of the problem I have ever heard (of course, I am quite new to this), and I highly recommend it to everyone interested in a smooth introduction to one of the hottest topics in many-body physics.

Thursday morning saw a wonderful trio of talks. The first, by Pablo Parrilo, presented semidefinite hierarchies and their use in quantum information (more specifically, separability testing and the quantum moment problem). Pablo gave a very clear picture of both the primal (sums of squares) and dual (moments) views of these hierarchies. Next, Aram Harrow talked about ways to quantify, and use, the monogamy of general non-signalling correlations. He presented a few variants of a recent de Finetti result for such correlations that he obtained with Fernando Brandao. Aram also gave a range of applications of the de Finetti results (for instance, to proving hardness results for free games with entangled players), together with a host of open questions. In the last talk of the morning, Patrick Hayden gave us an enjoyable tour of the “fundamental laws of quantum entanglement”. One such law is that entanglement can never be created out of thin air. However, in Patrick’s words, it can sometimes be surreptitiously “stolen”, by using embezzling quantum states. These are special “catalyst” entangled states from which any other entangled state can be generated by local operations alone (up to precision), while leaving the catalyst essentially unchanged… a fascinating topic!

Iordanis Kerenidis gave an hour-long marathon reporting on four years of (team)work (now available on arXiv!) dissecting, understanding and improving Carlos Mochon’s breakthrough showing the existence of quantum protocols for weak coin flipping with arbitrarily small bias. I found Iordanis’ talk very inspiring — as I told him, the first half really made me want to work on coin flipping. It is quite a basic primitive, but besides QKD it is one of the rare fundamental primitives for which there is a strong quantum advantage. And its analysis gives rise to such a beautiful, and apparently powerful, formalism for the study of quantum interactions (point games, anyone?) that one feels one ought to lay one’s hands on it. The second half of Iordanis’ talk, however, involuntarily served as a strong caution from someone who has invested years of effort in the topic: although big rewards may be awaiting discovery, do not expect them to come easily! Indeed, Iordanis described some of the more technical (and subtle) aspects of the SDP–point game correspondence, and it is not for the faint-hearted… check out the talk!

These are only a few of the many enjoyable talks that punctuated our week — others included two presentations by Ben Reichardt on his powerful rigidity results for the CHSH game and their use in delegated computation; Joe Fitzsimons’s talk on measurement-based delegated quantum computation; talks by Philip Walther and Eleni Diamanti reporting on challenges in the experimental implementation of quantum cryptographic protocols; a survey of quantum zero-knowledge proofs and quantum proofs of knowledge by Dominique Unruh, who emphasized the key role played by quantum rewinding techniques; a beautiful talk by Amit Sahai on his breakthrough result on program obfuscation; a follow-up talk by Paul Christiano describing a tentative protocol for quantum obfuscation of classical circuits; a playful but dense introduction to a very fruitful connection between the theory of quantum games and tensor norms in operator spaces by Marius Junge; a survey on the problem of parallel repetition and direct product testing by Irit Dinur, including her recent work with David Steurer proposing a new, algebraic (they call it analytical) framework for proving such results; two back-to-back presentations on the first comprehensive entangled-prover parallel repetition results by André Chailloux and Attila Pereszlényi; a talk by Jamie Sikora presenting an improved understanding of a certain class of coin-flipping protocols; a beautiful talk by Zeph Landau giving a tensor-network view of coin-flipping protocols (directly inspired by the Gutoski-Watrous framework for the description of two-player quantum games) that leads to a simple proof of Kitaev’s lower bound; and, last but certainly not least, a presentation by Scott Aaronson on his work with Alex Arkhipov promoting Boson Sampling as an experimentally realizable, and verifiable, demonstration of “quantum supremacy”.

All in all this turned out to be a very intense, but also very gratifying, week. As an organizer I feel we were extremely lucky to have not only an excellent line-up of speakers but also a formidable bunch of enthusiastic participants. The whole workshop was very interactive, with lots of interruptions, questions, and small-group discussions taking place during and after the talks. The building in which the Simons Institute is hosted made this possible by providing us with a beautiful auditorium, with windows letting the natural light in and a fairly flat structure (only five rows) that encourages participation, as well as ample (and pleasant) discussion space on the upper floors. If anything, I wish we had managed to keep a slightly lighter schedule, to allow even more time for discussion. All the talks were attractive and well attended, and by Friday afternoon we were all pretty tired… I was very impressed that many participants still had the energy for the lively open problems session that ended the week on a high note. Once again, a very heartfelt thanks to all the participants; I am looking forward to the next two workshops of what has so far been quite an exciting semester.


Simons bootcamps

I guess it wasn’t that bad…

Each semester the newly founded Simons Institute, hosted on the UC Berkeley campus, simultaneously runs two thematic programs. The first two programs, which took place in the Fall, were on Theoretical Foundations of Big Data Analysis and Real Analysis in Computer Science. This Spring two new programs have just gotten started, on Quantum Hamiltonian Complexity and on Evolutionary Biology and the Theory of Computing (I’ll let you guess which one I’m participating in!)

You’ll notice that both programs are strictly in line with the Institute’s stated goal of “expand[ing] the horizons of [computer science] by exploring other scientific disciplines through a computational lens”. But there is a subtle difference in their titles, which was partially reflected in their respective “boot camps”, the introductory workshops held over the past couple of weeks. While the quantum program has a self-assured title, the other program’s reads more like an explicitly forced conjunction, “biology and computing”. Hamiltonian complexity is a fast-maturing field, initiated in great part by Alexei Kitaev in the early 2000s. “Computational evolutionary theory”, in contrast, is a field in waiting. One of the goals of the program on evolution will likely be to set the foundations, and perhaps more importantly the expectations, for this field, laying out its ground questions. (Or, as Christos put it in answer to a question about which open problems he could conceivably see solved during the semester, perhaps the program’s most ambitious goal is simply to “forge life-long ties”.)

If one is to judge by the talks given in the boot camps, this is likely to be an exciting but challenging task. The idea of these boot camps is to bring all participants “onto the same page”. I won’t comment much on the QHC boot camp here, although I’d like to come back to it in a separate post in which I will elaborate a little on a talk I gave about hardness results for quantum multiplayer games. For now I just want to point to the talks given by Christos Papadimitriou in the Evolutionary Biology boot camp. I highly recommend watching both hour-long talks, including (especially) the Q&A that took place throughout, and at the end of, each one. I always enjoy the density of Christos’ talks: on the surface they may appear quite light, and the material covered limited, but Christos always manages to raise an impressive number of key conceptual (and often controversial!) issues using few but well-chosen words.

About 27 minutes into his first talk, Christos raises the following question: if protein folding is an NP-hard problem (as it is in general), then how come proteins fold by themselves? Apparently this is known as Levinthal’s paradox and has been discussed since the late 60s. The reason I bring it up here is that the question sounds oddly familiar. It is the same question that was discussed at length in the QHC boot camp, and also on this blog: if finding ground states of local Hamiltonians is a QMA-hard problem, then how can physical systems ever reach equilibrium? Is nature solving a QMA-hard problem? If so, how?

It was interesting to see the same fundamental question about the physical bearing of results in computational complexity arise in both programs. The audience in Christos’ talk immediately raised some of the same objections that are often discussed regarding the QMA-hardness of the local Hamiltonian problem (and the possible physical bearing of the quantum PCP conjecture), and more:

  • What if nature doesn’t find the true optimum? Do we know that the actual protein foldings that occur are optimal? How could we if it’s a hard problem? (Quantum version: do complex systems really thermalize?)
  • NP-hardness is a theory of the worst case; maybe the hard instances are not physical. In Christos’ words: “Life is hard but natural selection can favor easy inputs”. (Quantum version: hard instances of LH may not be physical; for instance the interaction graph may not be geometrically local.)
  • Maybe the proteins that occur in nature have been selected precisely for their ability to fold easily? Could evolution be such that the only surviving proteins are those that can find their optimum folding configuration by some weird indirect method? (Quantum version: what if the only quantum systems stable enough that we can study them are precisely those for which finding the ground state is an “easy” problem?)
  • NP-hardness is an asymptotic theory. It doesn’t forbid the existence of exponential-time algorithms with very good constants. Maybe Nature is a skilled optimizer.
  • Perhaps the needed advice was planted at Creation?

There are many more, and once again I encourage you to watch and enjoy the talks.
