Fall teaching

Caltech’s academic year started this week, with Monday being the first day of instruction for the Fall quarter (Caltech’s quarter-based division, as opposed to the more usual division in semesters, explains why we’re starting a month later than many others). So even though I moved to Pasadena in early June, it’s only now that I’m getting to experience the buzz of a surprisingly lively campus (given the small undergraduate population, fewer than 1,000), as opposed to the relative quiet of the summer. Having moved in the early summer helped a lot with settling in, and I now have a functional office (got the last piece, the chair, today), studio (one of the perks of being at Caltech!), and stream of students coming in to ask for signatures :-)

Now, first week of the quarter means, well, let’s get back to business! A lot has happened since the last post I wrote on this blog (including many, many aborted drafts: when I saw what the last thing I wrote was, I couldn’t believe I hadn’t brought myself to post anything in the meantime), but I’ll have to settle for the status quo and look forwards, rather than backwards, for regular postings. Caltech’s IQIM already maintains an excellent blog, to which I’ll attempt to contribute. But this blog will continue to live on for less scientific, or less directly quantum-related, posts, so stay tuned! Here’s some brief news on the start of the quarter:

If only I could get inspiration from my predecessors!

So, first week of instruction means, well, first week of instruction. Someone has to teach something to the hungry crowds, and this quarter I’ll bring my own modest contribution by teaching CS286: Topics in Computer Science, which I bravely (foolishly?) decided to entitle “Around the Quantum PCP Conjecture”. The goal of the class is to use the conjecture as a backdrop, a motivating theme; not one to be approached directly but one that suggests interesting questions which can be studied for their own sake.

My plan is to start by reviewing the classical PCP theorem, in enough depth to give a reasonable idea of the kind of ingredients that go into both known proofs of the theorem, though clearly without having the time to cover either of them completely. This week we (almost) covered the inclusion NP ⊆ PCP(poly, 1). Next week we’ll see a few of the ideas that go into Dinur’s proof. The next step will be to give an introduction to the area of quantum Hamiltonian complexity, including Kitaev’s QMA-completeness result. We’ll then be in a position to state the conjecture and discuss some of its implications.

Once the background is properly set, I’d like to spend the six remaining weeks of the quarter diving deep into two or three current topics of research, associated with the conjecture but studied independently of it. The three I have in mind are quantum multiprover interactive proofs, area laws, and de Finetti theorems in quantum information theory. Each of these topics would be well worth a full semester course by itself. But instead, I’d like to pick just one particular aspect, one result, and describe the simplest possible non-trivial instantiation of that component or result. The hope is that such a depth-first, rather than breadth-first, exploration will still give the students some interesting insight into the area and motivate them to continue exploring on their own.

We’ll have to see how this plays out. The first two lectures had a very mixed audience, from a first-year undergraduate (courageous!) to a full professor, by way of undergraduates, graduate students and postdocs in CS and physics; not a single one of them has a matching background. This was a reason for choosing the “quantum PCP” topic in the first place: my hope is that both CS and physics students can find some angle of interest on that theme, drawing both crowds around the same preoccupation. It’s going to be challenging to keep everyone together, but it could also be a very fun and rewarding experience… I’m looking forward to it!

Posted in Caltech, teaching | Leave a comment

Science funding


Hard at work in Puerto Rico

Barely back from a workshop organized in Puerto Rico by Ryan O’Donnell, Krzysztof Oleszkiewicz and Elchanan Mossel, and lavishly funded by the Simons Foundation, I am greeted on my BART ride home (thanks wifirail) by a NYTimes piece, “Billionaires With Big Ideas Are Privatizing American Science”, featuring Jim Simons among many others. I had planned to blog about the excellent workshop, but this task has already been pre-empted by Li-Yang Tan’s comprehensive posts over at Ryan O’Donnell’s blog. So instead I will pick up on the NYTimes piece, which, although lacking in verifiable facts and hard data, raises some interesting issues. Being entirely funded by the Simons Foundation through the Simons Institute in Berkeley (though I should say I don’t know how much of my salary comes directly from the foundation, and how much is contributed by UC Berkeley or other sources), whether I like it or not I am an integral part of the debate, and I find it worthwhile to give it some thought.

In fact, I do take to heart the question of where my funding comes from. Perhaps this is due to the somewhat unusual circumstances of the first funding I ever received, from the French government. I was fortunate to undertake my undergraduate studies at École Normale Supérieure in Paris, one of France’s (in)famous “Grandes Écoles”. One of the perks of ENS is that all students receive a modest monthly salary, the result of our being made “civil servants” upon entrance into the school. As a foreign national (I am Belgian) I found this “unreserved confidence” granted me by the French state (a job for life solely based on my results in some math exam!) somewhat humbling. Paid to study! Did they really think I would actually produce something useful, or was the French government just living up to its characteristic largesse? (Obviously, the latter :-)) The funding did come with some conditions: in exchange for it I promised 10 years of work to the government (this includes working as a teacher, or researcher, for any of the French public organizations). I still owe them six years (the four undergraduate years count towards the total), but, shhhhht, so far they haven’t bothered to come after me…

Since those years I have been continuously supported by public funds, mostly from the French, Canadian, Singaporean and American governments, under what I can only describe as wildly unrestrictive conditions. What justifies such unreserved trust? Behind the governments are taxpayers, some of whom voted for, others against, devoting a small percentage of their income to supporting researchers like myself. Although it may sound a little naïve, I am proud of being awarded such confidence, and believe it is important to understand why and for what purpose it was given, and to do my best to deliver on this purpose. (Note that here I am taking “purpose” in a strongly ideal sense, setting aside the cynical politics of science funding and instead focusing on what I perceive as the “right” motivations for funding science.) I owe a lot to public funding, and aside from the Simons Foundation this semester I have not yet received much private support, although this may change in the future.

Back to the NYTimes piece. Are big private donors’ efforts to “leave their mark” by directing huge amounts of money into research on one particular area with large potential impact — curing a specific disease, saving a species, flying civilians into space, etc. — a threat to the “purity” and “independence” of scientific research? Are such efforts anything more than a spoiled child’s whim? As a scientist on whom relatively small but nevertheless impressive amounts of one such donor’s money have recently been expended with virtually no direct oversight (witness the Puerto Rican resort for last week’s workshop, which I won’t dare link to), it would perhaps be disingenuous of me to be dismissive of the enterprise. But still… what do they know about fundamental research? What makes them qualified to influence the future of science? The billionaires cited in the NYTimes article have made their fortunes in a variety of industries: software, oil, housing, pharmaceuticals, investment. Aren’t they doing more harm than good in throwing their checks around, paying others to achieve feats they will boast about at cocktail parties? Should we, scientists armed with the purest ideals, fall prey to the billionaires’ fancy?

I don’t think it’s that simple. The NYTimes piece presents some of the drawbacks of private funding, such as skewing research toward trendy areas or diminishing public support for federal science funding. These are real risks, and identifying them is the first step in limiting their effect. The potential benefits, however, may far outweigh these risks. I am going to take a very idealistic (who said provocative?) perspective: private philanthropy at its best is the way science (and more generally the arts) should be funded. What is the purpose of science? It is a means to advance human knowledge. (There are others, such as philosophy or the arts, that are just as important or successful, but here I will focus just on science.) Given such a grand quest and humanity’s finite resources, the important questions are: which directions, which challenges, should be privileged, and who should be elected to tackle them? Perhaps just as important, who should be empowered to decide on the answers to those questions: an all-powerful government (and its bureaucrats), or the scientists themselves?

Jim Simons

Neither. The decision-making process of any big government is unfathomable; there is no reason to think most bureaucrats are particularly qualified to make the funding decisions their appointment entitles them to. Paradoxically, their job might even make them less qualified than average at making those decisions (here I am thinking about high-level decisions on research directions or paradigms to pursue, rather than low-level decisions as to which particular grant applications should be funded). Indeed, it is hard to imagine that even the best-intentioned employee of any big governmental agency would not eventually fall prey to a strong bias in favor of a very specific kind of proposal — say, one that emphasizes “impact”, or “public outreach”, or whatever the agency’s current doctrine is. Scientists suffer from a similar bias, though it is generated by an antipodal situation: they are too qualified to be the ones making decisions as to the future of science. Being right at the heart of the scientific process itself, and in spite of our grand claims to the contrary, our perspective is inevitably near-sighted. We are strongly biased in favor of directions and ideas that, although they may appear very bold, are rooted in current techniques and scientific understanding, and we are less prone to the kind of bold, wild projects that only the more “ignorant” could conceive.

Instead, I would like to argue that the best-placed to make paradigmatic decisions about fundamental research are strong-willed individuals, outsiders, who have a high but non-vital stake in its outcome. Note how this brings us close to, but not quite at, the claim that science funding should be directly decided by popular vote. I don’t place much trust in democratic voting; it is too subject to manipulation and places too important a stake in the hands of the many who simply do not care, or are easily led to care for the wrong reasons. Hence the insistence on “individuals”: decisions should be made by a few responsible persons rather than an amorphous group. I further strongly believe decisions should be taken by people who care about them; there is no reason to make those who don’t care a part of the process. I insist on “outsiders” simply because they are the only ones really free to make the kind of non-obvious choices necessary for scientific innovation. Finally, by “having a stake in the outcome” I mean that self-appointed decision-makers should be allowed only if they have a strong personal incentive to take the matter seriously; no “strong-willed dilettantes” please!

So, private donors anyone? There are some clear risks, many of them exposed in the NYTimes article (and a Nature editorial linked to from the article), and I am certainly not advocating a take-over of the whole decision process by private individuals. Private donors at first sound particularly undesirable, as they may have a very specific kind of stake in the process, emphasizing “return on investment”, as well as a particularly directive approach more appropriate to a business-like environment. It is nevertheless interesting that the NYTimes piece makes it clear private donors often have very personal motivations for investing in a particular kind of science: either they (or their close relatives) suffer from a certain rare disease, or they have a lifelong dream of achieving a particularly challenging technical feat (e.g. flying into space), or maybe they had a life-changing personal encounter with this or that scientist, etc. These are all great reasons to promote a particular kind of science. In general I disagree with the image of scientific progress as an objective path towards a greater good. Science has always been rooted in a particular social and political context; there is no point in uprooting it, quite the opposite. Giving some amount of decision-making power back to individuals who care may be one of the best ways to ensure science accomplishes what it’s best at: helping human beings discover themselves and their world.

I think some amount of private decision-making (rather than just funding) may also help us, scientists, stay on our toes and remember why we are here, why we are given such trust; trust should never be unconditional. I don’t think it is doing us a favor to be awarded the comfort of almost-guaranteed support in the form of “small grants” for almost any research proposal we may come up with. Instead I believe we should be ready to fight for, and defend, our ideas and research proposals. Comfort is never helpful. If this means having to pitch our ideas to individuals, then so be it. At least they actually take the issues to heart; much better than a double-blind interaction with some anonymous committee. As for scientific freedom and integrity, I trust us scientists to continue making it clear that we ultimately are the ones in charge; as long as we keep our reputation up — admittedly a perpetual fight — I don’t see the balance tipping in the wrong direction.

Alright, so perhaps this was a little provocative :-) What do you think?

Posted in Science, Simons | 8 Comments

Workshop on “Quantum games and Protocols” at the Simons

** Update: videos are now up! **

This past week saw the first of three week-long workshops that will punctuate the semester on quantum Hamiltonian complexity at the Simons Institute (I’ll count the boot camp as the zeroth such workshop). Dorit Aharonov, John Watrous and myself co-organized the workshop around the theme of “quantum games and protocols”. The general idea was to investigate the strengths and limitations of quantum machines — depending on the context, quantum computers, quantum provers, quantum adversaries, etc. — in interactive scenarios, both quantum and classical. This included questions such as the possibility of delegating quantum computations through classical means, the problem of verifying quantum mechanics, the power of multi-prover interactive proofs with entangled provers, the quantum PCP theorem, two-party cryptography, device-independence, and more!


The Simons auditorium (courtesy D. Nagaj)

We were extremely fortunate to have an absolutely excellent line-up of speakers for the workshop. Each of them put a huge amount of effort into preparing talks that would be accessible to the relatively broad audience — complexity theorists, experimentalists, mathematicians, cryptographers — while still managing to distill and communicate the “expert’s nugget”. Like all talks given in the Simons’ main auditorium, they were expertly recorded, and I can only highly recommend watching the videos. Here is a partial set of highlights from the workshop. (Right now the links only point to each talk’s abstract, where the recorded videos will appear in about a week; in the meantime the talks can be viewed on the Simons’ ustream channel, by navigating to the correct date and time as inferred from the workshop schedule.)

Umesh Vazirani gave a very nice, focused introduction to the problem of “testing for quantumness”. Many explanations are often put forward to justify the power of quantum computing: exponential-sized vector spaces, complex amplitudes, the measurement rule, etc. Umesh argued that none of these ingredients by itself is sufficient to distinguish quantum states from probability distributions; rather he pointed to the subtle combination of entanglement, the observers’ freedom in choosing a measurement basis, and the quadratic nature of the measurement rule — {\sin^2\theta} vs {\theta} — as the “magic combination” that underlies the irreducible “quantumness” of our world. Umesh then broadened the scope significantly by outlining some of the far-reaching implications of this simple difference: by allowing for the rigid testing of quantum states and measurements through the CHSH game, this particular form of “quantumness” leads to a host of applications, from randomness certification to device-independent key distribution and delegated computation.
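The CHSH game makes this concrete. Here is a minimal sketch — using the standard optimal measurement angles from the textbook analysis, not anything specific to the talk — comparing the best classical winning probability, 3/4, with the quantum value cos²(π/8) ≈ 0.854. The quadratic dependence of the agreement probability on the angle between measurement bases is exactly the sin²θ-vs-θ point above:

```python
import itertools
import math

# CHSH game: Alice and Bob receive uniform bits x, y and, without
# communicating, must output bits a, b satisfying a XOR b = x AND y.

def classical_value():
    """Best winning probability over all deterministic classical strategies."""
    best = 0.0
    for a in itertools.product((0, 1), repeat=2):      # Alice's answer a[x]
        for b in itertools.product((0, 1), repeat=2):  # Bob's answer b[y]
            wins = sum((a[x] ^ b[y]) == (x & y) for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

def quantum_value():
    """Winning probability of the standard optimal strategy on an EPR pair.

    Measuring the two halves of an EPR pair in bases rotated by angles
    s and t yields equal outcomes with probability cos^2(s - t): the
    quadratic, rather than linear, dependence on the angle is the source
    of the quantum advantage.
    """
    alice = (0.0, math.pi / 4)          # Alice's measurement angle on question x
    bob = (math.pi / 8, -math.pi / 8)   # Bob's measurement angle on question y
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            p_equal = math.cos(alice[x] - bob[y]) ** 2
            # The players want equal outputs unless x = y = 1.
            total += p_equal if x & y == 0 else 1 - p_equal
    return total / 4

print(classical_value())  # 0.75
print(quantum_value())    # cos^2(pi/8), about 0.8536
```

Every question pair in the quantum strategy is won with the same probability cos²(π/8); a classical strategy, by contrast, can only win 3 of the 4 question pairs, and it is this gap that rigidity results and device-independent protocols exploit.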

Daniel Nagaj delivered a fast-paced and beautifully illustrated talk on the diverse clock constructions used in proofs of QMA-hardness results for restricted classes of local Hamiltonians. Daniel argued that all known (and foreseeable) constructions of clocks necessarily have a spectral gap that closes as {O(L^{-2})}, where {L} is the number of gates in the encoded computation. Although the exact scaling could possibly be fine-tuned for each specific construction, it places a strong limitation on any attempt at constructing a local Hamiltonian that would satisfy the quantum PCP desiderata. I found Daniel’s talk very helpful in clarifying why known QMA-hard instances have a closing promise gap. The current reductions all go through the analysis of the low-lying eigenspaces of sums of local Hamiltonians, where we often think of one of the Hamiltonians as a small perturbation of the other. The resolution to which this analysis can be done depends on the spectral gap of the larger Hamiltonian; the smaller this spectral gap, the smaller the promise gap one will be able to maintain. Since the Hamiltonians that are manipulated, such as the clock Hamiltonian, tend to have vanishing spectral gaps, this is reflected in the final promise gap (watch Daniel’s talk — in slow motion — for more details!). As we know, large spectral gaps tend to impose strong structure, and this appears to be a significant difficulty on the way to quantum PCP.

Fernando Brandao described a striking implication of his recent de Finetti theorem with Aram Harrow. Their result essentially implies that “naïve gap amplification” is impossible in the quantum setting, in the following sense: any transformation of a local Hamiltonian that simultaneously at least (i) squares the degree of the constraint graph, and (ii) squares the local dimension of the qudits, must necessarily reduce the (relative) promise gap. This is highly counter-intuitive because classically, the operation which takes a constraint satisfaction problem and maps it to its “square” through parallel repetition achieves (i) and (ii) and furthermore has the effect of increasing the promise gap by a factor of roughly {2}. Indeed, this is exactly the goal of parallel repetition; the Brandao-Harrow result shows that this most direct method of amplifying the gap will not work in the quantum setting.

Tuesday morning featured two talks exploring the hardness of multi-prover interactive proof systems. The first, by Dana Moshkovitz, presented recent progress on the “sliding scale conjecture”, which aims to devise PCPs with an optimal trade-off between the verifier’s randomness, the alphabet size, and the soundness. Dana hasn’t resolved the conjecture (yet), but she presented strong progress by describing a low-degree test with very good soundness parameters. The test is obtained through the parallel repetition of existing low-degree tests, but the analysis goes through a new way of thinking about the resulting test, as a “curve vs curve” (instead of “line vs line”) test, where the curves intersect in multiple points. The second talk was given by Zhengfeng Ji. Zhengfeng presented a comprehensive overview of the few hardness results that are known for the case of entangled provers, including a nice description of the key techniques used in the proofs. These include the design and analysis of various small “sub-games”, such as the consistency game, the confusion game, etc., which I hope will find further applications (see also a more technical whiteboard talk I gave on this topic at the boot camp). Zhengfeng’s talk was followed by a lively discussion on the relationship between inequivalent quantum versions of the classical PCP theorem (video link to come?).

Carl Miller and Henry Yuen presented exciting recent works on device-independent randomness certification and randomness amplification. Carl described how a very recent result joint with Yaoyun Shi could be understood as a way to constrain a device to perform anti-commuting measurements, thereby generating randomness irrespective of the underlying state. Henry showed that existing randomness expansion and certification protocols could be composed with themselves in a way that allows for unbounded randomness expansion — the procedure can run indefinitely, with the same constant number of devices, while producing infinitely increasing quantities of trusted random bits!

Ran Raz gave a talk on recent joint work with Yael Kalai and Ron Rothblum that I really like. They reduce the task of devising a computationally secure one-round delegated computation scheme for languages in time {t(n)} to that of creating a multi-prover interactive proof system for the same class of languages that has the strong property of being sound against non-signaling provers. This reduction is based on previous work, and the meat of their new result lies in devising such a MIP for all languages in EXPTIME, such that the MIP involves polynomially many provers running in exponential time, while the verifier itself runs in polynomial time. Their construction is very involved and combines a number of new ideas to control the soundness error in the protocol; perhaps some of these ideas could be helpful in analyzing entangled-prover interactive proof systems.

David Perez-Garcia gave a very clear talk on entanglement in many-body systems; more specifically, on the problem of classifying phases of matter and how the discovery of phases with topological order provides new motivation, as well as new difficulties, for this task. David’s talk gave the best explanation I have ever heard of the problem (of course, I am quite new to it), and I highly recommend watching it to everyone interested in getting a smooth introduction to one of the hottest topics in many-body physics.

Thursday morning saw a wonderful trio of talks. The first, by Pablo Parrilo, presented semidefinite hierarchies and their use in quantum information (more specifically, separability testing and the quantum moment problem). Pablo gave a very clear picture of both primal (through sums of squares) and dual (moments) views of these hierarchies. Next Aram Harrow talked about ways to quantify, and use, the monogamy of general non-signalling correlations. He presented a few variants of a recent de Finetti result for such correlations that he obtained with Fernando Brandao. Aram also gave a range of applications for the de Finetti results (for instance, to proving hardness results for free games with entangled players) together with a host of open questions. In the last talk of the morning Patrick Hayden gave us an enjoyable tour of the “fundamental laws of quantum entanglement”. One such law is that entanglement can never be created out of thin air. However, in Patrick’s words, it can sometimes be surreptitiously “stolen”, by using embezzling quantum states. Those are special “catalyst” entangled states from which any other entangled state can be generated by local operations alone (up to precision), while leaving the catalyst essentially unchanged…a fascinating topic!

Iordanis Kerenidis gave an hour-long marathon reporting on four years of (team)work (now available on arXiv!) dissecting, understanding and improving Carlos Mochon’s breakthrough showing the existence of quantum protocols for weak coin flipping with arbitrarily small bias. I found Iordanis’ talk very inspiring — as I told him, the first half of the talk really made me want to work on coin-flipping. It is quite a basic primitive, but besides QKD it is one of the rare fundamental primitives for which there is a strong quantum advantage. And its analysis gives rise to such a beautiful, and apparently powerful, formalism for the study of quantum interactions (point games, anyone?) that one feels one ought to lay one’s hands on it. The second half of Iordanis’ talk, however, involuntarily served as a strong caution from someone who has invested years of effort on the topic: although big rewards may be awaiting discovery, do not expect them to come easily! Indeed, Iordanis described some of the more technical (and subtle) aspects of the SDP–point game correspondence, and it is not for the faint-hearted… check out the talk!

These are only a few of the many enjoyable talks that punctuated our week — others included two presentations by Ben Reichardt on his powerful rigidity results for the CHSH game and their use in delegated computation; Joe Fitzsimons’ talk on measurement-based delegated quantum computation; talks by Philip Walther and Eleni Diamanti reporting on challenges in the experimental implementation of quantum cryptographic protocols; a survey of quantum zero-knowledge proofs and quantum proofs of knowledge by Dominique Unruh, who emphasized the key role played by quantum rewinding techniques; a beautiful talk by Amit Sahai on his breakthrough result on program obfuscation; a follow-up talk by Paul Christiano describing a tentative protocol for quantum obfuscation of classical circuits; a playful but dense introduction to a very fruitful connection between the theory of quantum games and tensor norms in operator spaces by Marius Junge; a survey on the problem of parallel repetition and direct product testing by Irit Dinur, including her recent work with David Steurer proposing a new, algebraic (they call it analytical) framework for proving such results; two back-to-back presentations on the first comprehensive entangled-prover parallel repetition results by André Chailloux and Attila Pereszlényi; a talk by Jamie Sikora presenting an improved understanding of a certain class of coin-flipping protocols; a beautiful talk by Zeph Landau giving a tensor-network view of coin-flipping protocols (directly inspired by the Gutoski-Watrous framework for the description of two-player quantum games) that leads to a simple proof of Kitaev’s lower bound; and, last but certainly not least, a presentation by Scott Aaronson on his work with Alex Arkhipov promoting Boson Sampling as an experimentally realizable, and verifiable, demonstration of “quantum supremacy”.

All in all this turned out to be a very intense, but also very gratifying, week. As an organizer I feel we were extremely lucky to have not only an excellent line-up of speakers but also a formidable bunch of enthusiastic participants. The whole workshop was very interactive, with lots of interruptions, questions, and small-group discussions taking place during and after the talks. The building in which the Simons Institute is hosted made this possible by providing us with a beautiful auditorium, with windows letting the natural light in and a fairly flat structure (only five rows) that encourages participation, as well as ample (and pleasant) discussion space on the upper floors. If anything, I wish we had managed to keep a slightly lighter schedule, to allow even more time for discussion. All the talks were attractive and well-attended, and by Friday afternoon we were all pretty tired… I was very impressed that many participants still had the energy for the lively open problems session that ended the week on a high note. Once again, a very heartfelt thanks to all the participants; I am looking forward to the next two workshops of what has so far been quite an exciting semester.

Posted in Conferences, Simons | 2 Comments

Simons bootcamps

I guess it wasn’t that bad…

Each semester the newly founded Simons Institute, hosted on the UC Berkeley campus, simultaneously runs two thematic programs. The first two programs, which took place in the Fall, were on Theoretical Foundations of Big Data Analysis and Real Analysis in Computer Science. This Spring two new programs have just gotten started, on Quantum Hamiltonian Complexity and Evolutionary Biology and the Theory of Computing (I’ll let you guess which one I’m participating in!).

You’ll notice that both programs are strictly in line with the Institute’s stated goal of “expand[ing] the horizons of [computer science] by exploring other scientific disciplines through a computational lens”. But there is a subtle difference in their titles, which was partially reflected in their respective “boot camps”, the introductory workshops held over the past couple of weeks. While the quantum program has a self-assured title, the other program’s reads more like an explicitly forced conjunction, “biology and computing”. Hamiltonian complexity is a fast-maturing field, initiated in great part by Alexei Kitaev in the early 2000s. “Computational evolutionary theory”, in contrast, is a field in waiting. One of the goals of the program on evolution will likely be to set the foundations, and perhaps more importantly the expectations, for this field, laying out its ground questions. (Or, as Christos put it in answer to a question about which open problems he could conceivably see solved during the semester, perhaps the program’s most ambitious goal is simply to “forge life-long ties”.)

If one is to judge by the talks given in the boot camps, this is likely to be an exciting but challenging task. The idea of these boot camps is to bring all participants “on the same page”. I won’t comment much on the QHC boot camp, although I’d like to come back to it in a separate post in which I will elaborate a little on a talk I gave about hardness results for quantum multiplayer games. For now I just want to point to the two talks given by Christos Papadimitriou in the Evolutionary Biology boot camp. I highly recommend watching both hour-long talks, including (especially) the Q&A that took place throughout, and at the end of, each of them. I always enjoy the density of Christos’ talks: on the surface they may appear quite light, and the material covered limited. But Christos always manages to raise an impressive number of key conceptual (and often controversial!) issues using few but well-chosen words.

About 27 minutes into his first talk Christos raises the following question: if protein folding is an NP-hard problem (as it is in general), then how come proteins fold by themselves? Apparently this is known as Levinthal’s paradox and has been discussed since the late 60s. The reason I am bringing it up here is that the question sounds oddly familiar. It is the same question that was discussed at length in the QHC boot camp, and also on this blog: if finding ground states of local Hamiltonians is a QMA-hard problem, then how can physical systems ever reach equilibrium? Is nature solving a QMA-hard problem? If so, how?

It was interesting to see how the same fundamental question on the physical bearing of results in computational complexity arose in both programs. The audience in Christos’ talk immediately raised some of the same objections that are often discussed regarding the QMA-hardness of the local Hamiltonian problem (and the possible physical bearings of the quantum PCP conjecture), and more:

  • What if nature doesn’t find the true optimum? Do we know that the actual protein foldings that occur are optimal? How could we if it’s a hard problem? (Quantum version: do complex systems really thermalize?)
  • NP-hardness is a theory of the worst case; maybe the hard instances are not physical. In Christos’ words: “Life is hard but natural selection can favor easy inputs”. (Quantum version: hard instances of LH may not be physical; for instance the interaction graph may not be geometrically local.)
  • Maybe the proteins that occur in nature have been selected precisely for their ability to fold easily? Could evolution be such that the only surviving proteins are those that can find their optimum folding configuration by some weird indirect method? (Quantum version: what if the only quantum systems stable enough that we can study them are precisely those for which finding the ground state is an “easy” problem?)
  • NP-hardness is an asymptotic theory. It doesn’t forbid the existence of exponential-time algorithms with very good constants. Maybe Nature is a skilled optimizer.
  • Perhaps the needed advice was planted at Creation?
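The first of these objections can be made concrete with a toy experiment (my own illustration, not from either talk: the spin-glass instance, the greedy dynamics, and all names below are invented for this sketch). A “physical” dynamics that only ever lowers the energy stops at a local minimum, which need not be the global optimum found by brute force:

```python
import itertools
import random

random.seed(0)
n = 10  # small enough to brute-force all 2^n spin configurations

# Random all-to-all couplings: a tiny spin-glass toy standing in for a
# hard energy landscape (protein folding, local Hamiltonians, ...).
J = {(i, j): random.gauss(0, 1) for i in range(n) for j in range(i + 1, n)}

def energy(s):
    # E(s) = -sum_{i<j} J_ij s_i s_j, with spins s_i in {-1, +1}
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def greedy_descent(s):
    # "Nature" as a local optimizer: flip one spin at a time whenever the
    # flip lowers the energy; stop once no single flip helps, i.e. at a
    # local minimum of the landscape.
    s, e = list(s), energy(s)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            s[i] = -s[i]
            e2 = energy(s)
            if e2 < e:
                e, improved = e2, True
            else:
                s[i] = -s[i]  # undo the unhelpful flip
    return e

start = [random.choice((-1, 1)) for _ in range(n)]
local_e = greedy_descent(start)
true_e = min(energy(s) for s in itertools.product((-1, 1), repeat=n))
# A local minimum can never beat the brute-force optimum:
assert local_e >= true_e
print(local_e, true_e)
```

On such a small instance brute force is feasible; the point is only that the configuration the local dynamics settles into need not be the global one — exactly the gap between what nature reaches and what NP- or QMA-hardness is a statement about.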

There is much more, and once again I encourage you to watch & enjoy the talk.


Quid ITCS?

Earlier this week I attended the fifth edition of the ITCS (“Innovations in Theoretical Computer Science”) conference, which took place over the course of three days in a beautiful, stately auditorium inside the Friend Center on the Princeton University campus. This was my first time attending the conference and, given the noise that surrounded the series’ launch just a few years ago, I was quite eager to finally experience it first-hand.

ITCS was created with great fanfare just a few years ago: initially named ICS, the first two installments, in 2010 and 2011, were held at Beijing’s famous Tsinghua University; one of their noted features was that an author of each accepted paper got a free ticket to the conference, including airfare, hotel, etc. The first ICS had an ambitious call for papers, emphasizing “research that carries a strong conceptual message (e.g., introducing a new concept or model, opening a new line of inquiry within traditional or crossdisciplinary areas, or introducing novel techniques or novel applications of known techniques)”, together with a substantial steering committee to back it up. Much was made at the time of ICS’s purposefully distinctive focus (as opposed to STOC, FOCS, or other TCS conferences sometimes decried as “too technical” or “narrow-minded”), and I remember heated debates on the blogosphere (see e.g. here, here or here for a sample and further links). Given such high expectations, I was curious: had the conference lived up to its promise?


A digression on conflicting publication venues

The reason for my attendance this year could be taken as a positive sign that it had. While the two papers I presented at the conference fall squarely within the realm of TCS, my co-authors and I strongly felt that the results should be publicized to a wider audience as well (in our case to theoretical physicists). ITCS, as opposed to STOC or FOCS, allowed for this possibility. Before explaining this, let me briefly describe the papers in a way that I hope makes it clear why we think it worthwhile to make the results accessible, not only to the TCS community, but also to researchers in physics:

The first paper provides the first complete proof of security for a cryptographic protocol for key distribution with the distinctive property that, although the protocol relies on quantum mechanics to achieve unconditional security (this would be impossible classically), the proof of security holds even in a setting where all the quantum devices used in the protocol are untrusted (they could even have been designed by the adversary!). The proof is based on ideas from computer science (such as randomness extractors or techniques borrowed from the analysis of parallel repetition of multiplayer games), but the end result has a strong bearing on our understanding of a phenomenon known as monogamy of entanglement. The latter has received a lot of interest recently, and plays a role in areas ranging from the study of phase transitions in condensed-matter physics to the theory of black holes.

The second paper introduces a polynomial-time algorithm for a problem of interest in many-body physics: computing the ground state of a 1D gapped local Hamiltonian. The algorithm builds upon widely used techniques in computer science (dynamic programming; convex optimization and semidefinite programming; matrix-valued concentration bounds) but the end result, beyond its (probably impractical) use as an algorithm for the simulation of physical systems, demonstrates important properties of gapped systems that go beyond the celebrated “area law” that they have been shown to exhibit.

My co-authors and I thus strongly felt that, although the results should be published in a TCS venue (we are computer scientists, and we — though I should really only speak for myself — view these results as results in, made possible by, and of interest to computer science), they should also be publicized to the appropriate audience in physics. How could this be done? In computer science we are used to publishing an initial version of our results in conference proceedings, with the infamous “full version” eventually submitted to a journal (I will not get into the debate of whether this traditional distinction is still relevant…see this post by Sanjeev Arora for a recent discussion). In physics, papers are published directly in a journal; conferences are reserved for short, often invited, presentations, and do not have published proceedings. These differences lead to differences in copyright and priority handling: TCS conferences strongly insist that the submitted material has not appeared (nor even been submitted) elsewhere, while TCS journals are willing to publish minor updates on conference papers. In contrast, physics journals tend to have extremely restrictive copyright policies. To make a long story short (see this post by Umesh Vazirani for more), we were stuck: publication in a TCS conference is all but incompatible with publication in a Physics journal (such as Science or Nature), irrespective of how you do it (believe me, we tried!). Unless…one would treat the TCS conference as what it is, a conference, and not a publication, by submitting (and presenting) the full paper, but only publishing a very short one-page abstract in the proceedings (pointing to a full version of the paper from the arXiv). While STOC and FOCS do not currently allow this, ITCS has been explicitly permitting the publication of very short abstracts since last year. 
So we submitted to ITCS…and, even though the process of unentangling ACM’s copyright policies was by no means straightforward, we eventually found a workable solution — and I got to attend the conference!


Impressions from this year’s ITCS

1. The format

The first thing that struck me, as a presenter, is that the program seemed quite dense: there were four 90-minute sessions per day, each consisting of four 20-minute talks together with a 10-minute introduction by the session chair. This meant, first, that the schedule went on from 8:30am till 5:30pm each of the three days (including a welcome 2-hour lunch break), and second, that each talk was allotted 20 minutes, including questions and switching of speakers. In other words, I had better prepare for a 15-minute talk — and still somehow manage to carry the “conceptual message” across!

Why such a crammed schedule? The CFP for the first ICS explicitly stated that “the total number of papers will be around 30, so as to allow for ample time for open and one-on-one discussions and exchanges of ideas.” In fact, the first PC barely kept in line with this requirement, as 39 papers were accepted to the conference. Since then the sentence has disappeared from the CFP, and this year there were 48 accepted papers (roughly the same count as last year). Although this is only about 10 more papers than the first edition accepted, it is nearly 20 more than the initial CFP called for. (Admittedly, the original ICS CFP also called for a conference spread over two and a half days, a requirement which seems never to have been implemented.) For comparison, STOC and FOCS typically accept around 70-80 papers, and have two tracks; ITCS has a single track. So the average load is similar — but STOC and FOCS don’t have an explicitly stated goal of “allowing for ample time for open and one-on-one discussions and exchanges of ideas”.

The space inside the Friend Center

In spite of the overloaded schedule, the fact that there was a single track did give ITCS a more relaxed feel than multiple-track conferences. It’s hard to pin down why exactly, but the difference is significant. Maybe it’s simply that it is so much more pleasant to be able to walk into the designated conference room trusting (hoping) that the PC has done a good job and one is about to hear interesting work. No worrying about what is going on in the other room, whether I should switch, and the risk of being caught up in one of the incessant hallway discussions — discussions which are certainly an integral part of the conference, but maybe there can be time for both! Even though total participation was fairly low (around 80 registrants or so), the talks were well-attended. The long coffee and lunch breaks allowed for ample discussion time; it helped that they were held in a set of 3-4 very nice, loosely divided rooms (together with a smallish “library”) with chairs, couches, and tables organized in various configurations, which made it possible to comfortably accommodate all kinds of conversations and group sizes. Compared to the generic STOC ballroom-, or worse, corridor-style break area, this was a big plus of the conference.

Another distinctive feature of ITCS is the 10-minute introduction given by the session chair at the beginning of each four-paper session. The chairs allocated those 10 minutes in more or less creative ways: while some simply distributed their minutes to the speakers, allowing a little extra (much needed) time for questions, others attempted a more in-depth introduction, including slides, to place the session’s papers in context. Some of these presentations were good, but not all were successful. It is not always easy to warm up the audience to yet another topic in such a short time; sitting in the audience I also felt some impatience to hear the results themselves, and found it difficult not to let my mind wander during the introduction. Maybe the introductions could be kept shorter, to just 2-3 minutes presenting the topic of the session; instead of a lengthy introduction it might be more beneficial if the chair put his or her effort into coming up with good questions to stimulate discussion after the presentations. (I’m not blaming the session chairs…some of the introductions were quite helpful; it’s not an easy task.)

2. The talks

Now, this is all good, but what about the talks themselves? ITCS’s main claim to fame lies in the following two sentences of its CFP:

“ITCS (previously known as ICS) seeks to promote research that carries a strong conceptual message (e.g., introducing a new concept or model, opening a new line of inquiry within traditional or cross-interdisciplinary areas, or introducing new techniques or new applications of known techniques). ITCS welcomes all submissions, whether aligned with current theory of computation research directions or deviating from them.”

The above statement is the only part of the CFP that relates to the content of submissions (as opposed to formatting guidelines). Since the second sentence has essentially zero content (either x or ¬x…), we are left hanging on the first. I wonder if the statement is purposefully elliptical; note that it does not even state what is desired of submissions, but only what the conference “seeks” to achieve. If this happened to be the first time I encountered ITCS, I could be forgiven for being puzzled at the conference’s scope, and for hesitating over whether I should submit my results there. In contrast, the CFP for this year’s STOC clearly states “submissions presenting original research on the theory of computation are sought. Topics of interest include but are not limited to:”, followed by a long list of topics.

Now for the real question: did the program of this year’s ITCS achieve its stated goal of “promoting research that carries a strong conceptual message”? I’ll let you have a look at the program and judge for yourself. I have mixed feelings. On the positive side, a solid fraction of the results did have a twist to them that is less often seen in STOC/FOCS papers. Some papers, such as “Cryptogenography” by Brody et al. or “Building one-time memories from isolated qubits” by Liu, introduced a new model (the setting for the former paper is that one among a group of cooperating players holds a secret bit, and the goal is for the group to openly communicate in a way that any eavesdropper can learn the secret bit, but not the identity of its holder; the latter paper introduces a model of security in which the eavesdropper is not allowed to make entangling operations — a requirement presumably enforced physically by choosing an implementation for the OTM scheme that makes such operations highly impractical). Others, such as “Non-commutative arithmetic circuits with division” by Hrubes and Wigderson, obtained interesting results by studying an unconventional extension of a well-studied problem; such “outside-the-box” approaches can help put current efforts in perspective. Yet others suggested new approaches to hard problems; for instance “Tree Codes and a Conjecture on Exponential Sums” by Moore and Schulman suggests a path towards solving an important open question in interactive coding, but provides no “concrete” results. There was even a session devoted to results with applications to, or motivated by, the physical sciences, including “Attribute-Efficient Evolvability of Sparse Linear Functions” by Angelino and Kanade, or “Rate-independent computation in mass-action chemical reaction networks” by Chen et al.

These are all results that I believe would easily fit in STOC or FOCS in terms of quality, but might not eventually make it there for lack of either concrete technical progress on a well-known problem deemed important by the community (as with some of the papers introducing novel frameworks whose long-lasting impact is hard to measure), or direct relevance to a well-circumscribed area (as with the papers inspired by the physical sciences). These are the papers that, if one is to follow the CFP, should make up the meat of ITCS. The corresponding talks were among the best; it was sometimes (though not always) apparent that the speaker knew very well that he or she had to “defend his or her turf”, i.e. find the right words to promote the conceptual message of the paper. This led to interesting, fresh talks, and lively discussion sessions.

Those only accounted for about half the papers (by my count…) presented at the conference, though. The other half struck me as consisting of much more conventional results, which would more naturally find their place at STOC, FOCS or SODA. I don’t want to point to any specific papers, and in any case I have nothing against them; most were very good results anyway. They would easily fit in any top TCS conference…any but ITCS? I was not on the PC, and unfortunately there was no (publicly announced) business meeting, so I don’t know how the requirement for a “conceptual message” was taken into account when making acceptance decisions; it would be interesting to have some inside information about this. Was there a target number of accepted papers, and if so what was it — did it get exceeded? How much weight was conceptual novelty given in the decisions? If it was taken into account, what was the rationale behind accepting this “second batch” of papers? Do they have some novelty that I am simply not seeing? The point here is not to criticize the work of the PC or the papers themselves, but to pose the question of “how far” ITCS is willing to go in sticking to its stated goals. Was there a compromise, and if so why? Could the program not be made much shorter, while still keeping a high-quality conference that many would willingly attend (perhaps even more than are attracted by accepting yet more papers)?


Why do we go to conferences?


Thinking through my experience with ITCS, the question I am brought back to is a most basic one: why do I go to conferences? What do I expect out of them? After all, it takes quite some effort to attend: the travel, the costs, etc. Here are some seemingly obvious answers: 1. to hear about exciting new research in TCS that I may not have heard about otherwise; 2. to have the ideas driving this research explained to me in a clear and insightful way by select speakers; 3. to catch up with the remainder of the community over pleasant social events such as a dinner, coffee breaks, rump session, etc.; and 4. as a result of 1.-3., to emerge from the conference with i) the refreshing feeling of belonging to a dynamic research group that is constantly re-inventing itself, and ii) research ideas!

What is unfortunately much less obvious is whether any of these goals are kept in mind when shaping our conferences; for that matter it is unclear, at least to me, what their actual goals are (if any!). This is at least one merit of the ITCS CFP: it states an explicit goal for the conference, “ITCS seeks to…”. I’ll end this post by describing some thoughts, or suggestions, as to how one could make progress towards achieving the four goals stated above. For the fun of it I’ll phrase my suggestions in the present tense, as if I were describing an already existing “dream” conference, which I’ll call “XTCS”.

1. Single track.

One of the distinctive features of XTCS is its relatively small program, organized around a single track of talks. The CFP makes it clear that this small number of accepted talks is the result of a design decision, rather than of necessity (e.g. too few good submitted papers). There are many benefits to having a small program. One of the main ones is the indirect pressure it puts on the PC: fewer accepted talks means a higher strain on “getting it right”. As an attendee, it means I can trust that the SC and PC have put together a strong, pointed program and divided it into coherent sessions; the speakers, assured of a broad audience, have carefully prepared their talks; their public is curious and focused. Even if I can’t immediately parse the titles of the talks I still attend most of them. At XTCS the chance that I will discover a new topic, a new conceptual idea, a new bridge between areas, is much higher — and this is precisely because there are fewer, better-selected talks.

In addition, XTCS has a small number of specially selected longer talks (either taken from the best submissions or by invitation), perhaps 45 minutes or an hour long. These talks are one of the main reasons I attend the conference: they are guaranteed to be of high quality and to present novel ideas; they are the kind of talks that no paper-reading can replace. Similar to some of the “prize talks” at STOC or FOCS (indeed I was surprised to discover that ITCS did not have a single such broader talk, apart from the after-dinner talk by Peter Winkler), these can be given by reputable speakers from our field, as well as by speakers from neighboring fields who have something to tell us. Selecting such talks is the business of the SC; in fact it is one of their most important tasks in putting together a successful conference: identifying novel ideas and promising directions that will shape the future of the field.

2. Computation as a lens

XTCS has a purposefully broader scope than other conferences in theoretical computer science. This is in keeping with the “computation as a lens on the sciences” motto that is increasingly adopted, in various guises, by TCS communities (see e.g. Berkeley’s Simons institute mission statement). If anything, this motto best embodies the future of our field; XTCS is its conference. Which doesn’t mean that “purely TCS” papers are excluded; only that papers accepted to the conference are required to have a bearing outside of themselves. For instance, a paper whose main result is “we prove that the permanent does not have polynomial-size circuits” might not be accepted, whereas a paper claiming “we show that not all efficiently verifiable assertions can be efficiently decided” might be. The example is a bit extreme and I don’t mean it to be taken literally, but I chose it for two reasons: first, to emphasize that an “application to the physical sciences” is by no means necessary, and second, to point out that even though the difference is only one of formulation, formulation matters (after all, a conference is all about presentation!). Here the second formulation emphasizes the significance of the result; the first focuses only on the technique used to obtain it.

Another way XTCS encourages inter-disciplinarity and avoids being overly self-centered is by remaining as international as possible. XTCS strives to avoid the kind of “inner circle” complaints that are sometimes made about STOC or FOCS (be they justified or not), and keeping its audience as geographically diverse as possible helps. The reason I have most often heard for locating a conference in the US is travel costs, but I don’t think this is really justified: accommodation can be cheaper in Europe (depending on location), and the extra cost in terms of airfare remains reasonable; compared to the total cost of attending it represents perhaps a 20 or 30% increase. Financial support for students is a priority: registration costs can be increased for faculty, and decreased for students and postdocs.

3. Community building

In addition to the importance of the wording used in the CFP, which is given more prominence than at “traditional” conferences, XTCS takes a few small but concrete steps towards reinforcing the sense of a “TCS community”. One such step is the organization of the sessions, in which the chair is given a slightly more important role than usual: she is responsible for saying a few words, either about the importance of each work or about the session’s topic as a whole, and for asking pointed questions that will encourage discussion. There may even be a “rump session” in which students are encouraged to present their latest results, and throughout which everyone is encouraged to discuss half-baked ideas. The rump session can be paired with a poster session around which more serious discussions can be pursued.

Ample time is given to coffee breaks and lunches. The conference location is chosen carefully so as to allow for interactions ranging from casual discussion to focused research discussions. Large hotels in city centers are avoided (to avoid the crowd dispersing too easily); university halls are preferred (this can also help keep the costs down).

4. Submission format and proceedings

Submissions to XTCS are short: perhaps three to five pages. This forces the authors to present the main point of their research in an accessible way. The short abstract should be accompanied by a more technical version of the paper, which can be a pointer to the arXiv. Accepted papers are required to have their full version made available online by the time of the conference. XTCS has no published proceedings. Instead, it has a well-maintained website which lists each year’s accepted papers, together with their short abstracts and pointers to the online versions.

Will you come to XTCS?


Where on earth?

Tricky one!



Prospective students & postdocs: apply to Caltech!

It’s job application season again! While I happily suffered through the application process in Fall 2006, Fall 2010 and Fall 2012, this year I will be participating in the festivities from the other side of the divide. My message is simple: all of you brilliant (aspiring to brilliance is also good), young (of mind) computer scientists, mathematicians, physicists or otherwise, interested in theoretical computer science, discrete math, quantum information and related topics, apply to Caltech — you couldn’t dream of a better place in which to set up and develop your research! (Not to mention the beach, mountains, movies, and weather of course.)

I will be extremely fortunate to be one of two new faculty members joining the world’s no. 1 university starting June 2014, as an assistant professor of computer science in the Department of Computing + Mathematical Sciences (CMS), and I am looking forward to some great applicants this year, at both the Ph.D. and postdoc levels. I encourage anyone interested to navigate through the websites of Caltech’s CMS, Center for the Mathematics of Information (CMI), Institute for Quantum Information and Matter (IQIM), and Physics department. (You should also check out IQIM’s excellent blog for a lively snapshot of what its members are up to.) You’ll find that, in spite of its small size, the interests of Caltech faculty span an impressive range, and I hope that, just like I did, you’ll suddenly realize that you need to get over here right away and start talking to these people!

Concretely, here is what you can do:

Undergraduate students applying for Ph.D. Apply directly on the Caltech admissions website. The deadline for applications is Dec. 15th.

Ph.D. students or postdocs applying for postdoctoral positions. Both CMI and IQIM, institutes in which CMS faculty take part, have openings for postdoc positions next year. For CMI, applications are due Dec. 20th here. For IQIM, applications are due Dec. 6th here.

I hope you’ll consider us, and look forward to welcoming some of you next Fall!

