A beginner’s guide to PC chairing

I recently had the privilege to serve as program committee (PC) chair for the yearly conference on quantum cryptography, QCRYPT’17 (note: for obvious public-relations reasons all names of conferences, PC members and authors in this post have been replaced by entirely fictional aliases). Although I had served on the PC of QCRYPT (and other conferences and workshops in quantum information and theoretical computer science) before, this was my first experience as chair. Since I am not aware of any publicly available resources that would help prepare one for this task, I thought I would share selected aspects of my experience here.

It is easiest to organize the discussion in chronological order: I will go through all the steps (and missteps), from the initial invitation email to the final notification to authors (probably safer to stop there – the next step would drag me into a discussion of the authors’ reaction, the consequences of which even the use of aliases may not save me from).

Lights:

You just received that glowing email — “Dear XX, would you be interested to serve as PC chair for ConfY’18?”. Followed by the obligatory series of flattering comments. Such an honor… who would refuse? But don’t jump on that reply-send button just now. Here are a few points to take into consideration before making the decision.

First off, carefully consider the reviewing schedule. The dates of the conference are likely decided already, giving you a good sense of when the submission and notification deadlines will fall. The period in-between represents two to four months of your working life. Are you ready to give them up? I estimate that most days within that period you will have to allocate one to two hours’ work to the PC reviewing process (the load is not evenly spread: during the reviewing phase, it depends how many papers you assign yourself to review; during the discussion phase, it depends on how active you are, whether there is an in-person meeting, etc.). This is a serious commitment, comparable in load to taking on an additional 12-week teaching job. So if you’re already planning on teaching two courses during the same period – think twice.

A second point to consider discussing upfront with the steering committee (SC) is your “mission”. The SC probably has its own idea of the scope of the conference (there might even be a charter), how many papers they would like to be accepted, what justifies a “ConfY-worthy paper”, etc. How rigid are they going to be regarding these points? How much interference can you expect — do you have full latitude in deciding final acceptances (should be)? How flexible is the final number of accepts?

Last but not least, make sure this is something you want to do. How good is ConfY? Does it serve a specific purpose that you value? How often have you attended, or served on the PC? Do you feel competent to make decisions across all areas covered by the conference? Check the past couple years’ accepts. Many conferences are broader than we think, just because when we attend we tend to unconsciously apply a selective bias towards those talks for which we can at least parse the title. This time you’ll have to understand the contents of every single one of the submitted (let alone accepted) papers. So again, is this something you {\textit{want}} to do?

Camera,

Selecting the PC. Now that the fatal decision has been made, my first piece of advice is all too simple: seek advice. Your first task is to form a PC. This is clearly the most influential decision you will make, both in terms of the quality and content of the final program, as well as the ease with which you and your “team” will get there. Choosing a PC is a delicate balancing act. A good mix of seniority and young blood is needed: seniority for the experience, the perspective, and the credibility; young blood for the energy, the taste for novelty, the muscle power. It is a good idea to involve a PC member from the previous installment of the conference; this may in particular help with the more difficult cases of resubmission.

I was fortunate to receive multiple recommendations from the SC, past conference chairs, and colleagues. While you obviously want to favor diversity and broad representation of topic areas, I also recommend selecting PC members with whom one has a personal connection. My experience has been that the amount of effort any one person is willing to put into the PC process varies hugely. It is inevitable that some PC members will eventually drift away. The more connection you have to them the easier it will be to handle unresponsiveness or divergences of opinion.

The most important comment I will make, one which I wish I had been more keenly aware of, is to know your PC. You will eventually select a team of researchers with complementary qualities, not only in terms of the areas that they are familiar with but also in more human terms: some will be good at responding to “quick opinion” calls on difficult papers, while others will have insightful comments about the overall balance of papers in the emerging list of accepted papers, or generate thoughtful advice on the handling of the more tricky cases, etc. At multiple points in the process you will need help; it is crucial to know the right person to turn to, lest you waste precious days or make ill-informed decisions.

With a list of names in hand, you are ready to send out invitations. (Before doing so, consider forming a rough schedule for the reviewing process. This will be needed for PC members to decide whether they will be sufficiently available during the requisite periods.) In my experience this part went smoothly. About {75\%} of those on my initial list accepted the invitation (thanks!!). Filling in the remaining slots took a little more time. A small tip: if a researcher does not respond to your invitation within a reasonable time, or is slow to decide whether to join or not, don’t push too hard: while you need a strong PC, you also need a responsive PC. It is not a good idea to start off in a situation where someone is “doing you a favor” by accepting the invitation as a result of some heavy arm-twisting.

Drafting a CFP. The second main item on your pre-submission agenda is the drafting of a call for papers (CFP). This may be done in cooperation with the SC. CFPs from previous years can serve as a starting point. Check with last year’s PC chair whether they were happy with the wording used, or whether they have a posteriori recommendations: did they omit an important area of interest? Were the submission instructions, including formatting guidelines, clear?

A good CFP balances out two conflicting desiderata: first, it should make your life easier by ensuring that submissions follow a reasonably homogeneous format, and are presented in a way that facilitates the reviewing process; second, it should not place an unreasonable burden on the authors who, as we all know, have better things to do (and will read the instructions, if they ever read them, no earlier than 23:59 in any timezone – making an overly precise CFP a sure recipe for disaster).

One place where precision is needed is in the formulation of the requirements for rigor and completeness. Are full proofs expected, or will a short 3-page abstract suffice? Or should it be both – a short abstract clearly presenting the main ideas, together with a full paper providing precise complete proofs? Be warned that, whatever the guidelines, they will be stretched, forcing you into quick judgment calls as to whether a submission fits the CFP guidelines.

You should also pay attention to the part of the CFP that concerns the scope of the conference: although for all I know this is all but ignored by most authors, and varies little from year to year, it does play an important role in carving out an inch of originality and specificity for the conference.

Another item on the CFP is the “key dates” that will bound the time available for the reviewing process: the submission deadline and the notification date. Here again there are conflicting requirements: the submission date should be as late as possible (to ensure accepted papers are as fresh as possible by the time the conference takes place), the reviewing phase as long as possible (you’re going to need it…), and the notification as early as possible (so there is time to compile proceedings, when they exist, and for authors to make travel arrangements). In my experience as PC member the time allocated for reviewing almost invariably felt too long – yes, I did write too long. However much time is allocated for the reviewing phase, it invariably ends up divided into {\sim 90\%} procrastination and {\sim 20\%} actual reviewing effort (obviously the actual reviewing gets under way too late for it to be completed by the reviewing deadline, which typically gets overstretched by some {\sim 10\%}). I suggest that a good calendar should allocate a month for collecting reviews, and a month for discussion. This is tight but sufficient, and will ensure that everyone remains engaged throughout. A month for reviewing allows a week for going through papers and identifying those for which external help should be sought; 2-3 weeks for actual reviewing; and a week for collecting reviews, putting the scores together, and scrambling through the last-minute calls for help. Similarly, a month of discussion would allow a week for score homogenization, two weeks to narrow down on the {20\%} (say) borderline papers, and a final week to make those tough ultimate decisions. Tight, but feasible. Remember: however much time you allocate will be taken up!

Now, as good a calendar as you may have come up with, plan for delays. In my case I typically informed PC members that “reviews have to be completed by April 29th” and “the discussion phase will start on May 1st”. The “hidden” three days in-between the two dates were more than needed to track down missing reviews. Don’t ask PC members to complete a task the day you need the task completed, as it simply won’t happen: people have busy schedules, operate in different (and sometimes shifting) timezones, and have other deadlines to deal with. To respect your PC you ought to give them a precise calendar that you will follow, so they are able to plan ahead; but you also need to allow for the unavoidable time conflicts, last-minute no-shows, and other unpredictable events.

One last item before you break off. To set up the submissions webpage you’ll need to decide on reviewing management software. I (quite mistakenly) didn’t give much thought to this. As PC member I had had a decent experience with easychair, and was under the impression that it was the most commonly used software – and would therefore be easiest to work with for the PC. Even though things went, on the whole, fairly smoothly, I had more than one occasion to regret the decision. The topic would deserve a blog post in itself, and I won’t expand here. Just make sure you carefully consider how easy the software will make different parts of the reviewing process, such as computing statistics, tracking missing reviews, ordering papers based on various criteria, allowing an efficient tagging system to keep track of memos or tentative decisions, handling communication with authors (including possibly the compilation of proceedings), etc.

Action!

Alright, so you went for a stroll and enjoyed your most leisurely conference submission deadline ever – as PC chair, you’re probably not allowed to submit – but the bell has rung, the submission server closed… now it’s your turn!

The last few hours. Actually, maybe this wasn’t quite your most leisurely submission deadline after all. I was advised to elect a “midnight anywhere on earth” deadline, as this supposedly made the guideline easier to comprehend for everyone. Not only do I now have strong evidence that I am not the only one to find this denomination absurdly confusing – where on earth is this place anyways, anywhere on earth?? – but I would in any case strongly suggest setting a deadline that falls at a reasonable time in the PC chair’s timezone. You will get emails from people unable to access the submission server (for whatever reason), unsure whether their submission fits the guidelines, asking whether they can get an extension, etc. It is more helpful if you can deal with such emails as they arrive, rather than the next day.

Paper bidding. Before reviewing can get under way you need to assign papers to PC members. And before you can assign papers, PC members need to express their preferences. The resulting allocation is critical. It determines how many headaches you will face later on: how many papers will have low confidence reviews, how many closely related papers will have been reviewed by disjoint sets of PC members, how many papers will live on the enthusiastic scores of expert subreviewers. I found this phase challenging. An automatic assignment can be completed in milliseconds, but doesn’t take into account related submissions or expertise of PC members aside from their declared preference, which is a single noisy bit of information. I highly recommend (I realize I am “highly recommending” a lot of things for a first-timer – I only wish I had been told some of these ahead of time!) taking the process very seriously, and spending enough time to review, and tweak, the automatic assignment before it is made final.

Refereeing. Each PC member now has a healthy batch of papers assigned, and a deadline by which to submit reviews. What kind of guidelines can you give to make the process as smooth as possible? Discrepancies in scores are always an issue: whichever reviewing software you use, it is bound to produce some kind of score-based ranking; this initial ranking, although it will change during the discussion phase, induces a huge bias in final decisions (this effect is exacerbated for conferences, such as QCRYPT, where there is no in-person meeting). I don’t have a magic solution to this, but establishing clear guidelines in terms of the significance and expected proportion for each numerical score helps. I eventually found it necessary to prod outliers to modify their scores. This is one of the things easychair did not make particularly easy, forcing me to download data in Excel format and run some basic home-made scripts on the spreadsheet.

Aside from scoring, it is useful to include precise guidelines on the use of sub-referees and conflicts of interest (COIs). I allowed sub-refereeing but insisted that the final opinion should be the PC member’s. (It is not ok to copy-paste a sub-review while barely having gone through it!) Unfortunately sub-reviewers tend to be experts, and experts tend to be overly enthusiastic: watch out for papers that received three high scores, each with high confidence: easychair will rank those right at the top, but they may well be worth a second look.

Regarding COIs, I did not set overly strict rules (with the idea that “everyone knows when it is appropriate to declare a COI”), and regretted it. It is simply too uncomfortable to realize at a late stage that this very enthusiastic review was written by a PC member who happens to be a close collaborator of one of the authors, but chose not to disclose the COI. Do you discard the review? I don’t know. It depends: maybe the source of the COI played a role in the PC member’s vocal defense of the paper, and maybe not. Better not let it happen. It is not that even a weak COI should necessarily forbid reviewing, but rather that COIs should be made explicit. As long as everyone states their position, things are in the open and can be taken into account.

Discussion. With all the reviews in (dream on… some reasonable fraction of the reviews in) begins the second phase of the reviewing process, the discussion phase. Success of this phase rests almost entirely on engagement of the PC chair and a few dedicated, dynamic PC members. Among PCs I have sat on the most satisfying were ones where the chair visibly spent large amounts of energy in the stimulation of online discussion. This is no trivial task: we all lead busy lives, and it is easy to let things slip; papers with high scores get in, low scores get out; a few days to discuss the few in the middle and we’ll be done…not so! Unfortunately, the initial ranking is bound to be abysmal. It is necessary to work to straighten things up. Some basic tricks apply: search for papers with high discrepancies in scores, low confidence, missing, very short, or uninformative reviews, etc. It is useful to individually prod PC members to keep the discussion going. This is a place where the “know your PC” recommendation comes in: for each submission, you need to be able to identify who will be able to clarify the arguments in favor and against the paper; who will have the technical expertise to clarify the relationship between papers X and Y, etc. It’s an exhausting, but highly rewarding process: I learned a lot by listening to my colleagues and trying to grasp at times rather subtle – and opinionated – arguments that could reach quite far from my expertise.

Decisions! The discussion has been going on for a couple weeks, and you already have only a little time left: it is time to start making decisions. Proceeding in phases seems popular, and effective. It helps to progressively sharpen the acceptance threshold. As long as there are too many papers in play it is very hard to get a sense of where the boundary will lie; typically, far more papers will have positive scores and enthusiastic proponents than can ultimately be accepted.

However much ahead of time you get started, the real decisions will take place in the last few days. I found it helpful to set a clear calendar for the process, marking days when decisions would be made, identifying clear categories (accept, accept?, discuss!, etc.), and setting explicit targets for each phase (accept X/reject Y many more papers, etc.), even if I wasn’t always able to meet them. It is also important that the PC as a whole be aware of the target number of papers that is to be accepted. I have frequently been on PCs where the chair gave us the information that “we will accept all great papers”, only to learn later that a hard limit had (of course) been set. Conversely, I’ve also been extremely annoyed at last-minute decisions along the lines of, well, we accepted about as many as we could, but there are 4 undecided cases left, and, well, they’re all really good, so why don’t we just stretch the program a bit and accept all 4 at the last minute. To me this is the PC not doing its job… be prepared to make difficult decisions! Make it clear to the PC (and to yourself) what your goal is. Is it to serve the authors, the conference attendees, the advancement of science – all of the above (good luck)?

Post-mortem

This was fun. Exhausting, but fun. Of course not all authors (or PC members) were happy. There will be complaints. And some of them will be justified: there is no perfect allocation. Mistakes happen. We did our best!

Some tasks lie down the road. Put a program together. Help select a best (student) paper. Gather statistics for the business meeting. But first things first: take a deep breath. This was fun.


Pauli braiding

[7/9/17 Update: Following a suggestion by Oded Regev I upgraded Section 1 from “probabilistic functions” to “matrix-valued functions”. This hopefully makes it a more useful, and interesting, mid-point between the classical analysis of BLR and the non-abelian extension discussed afterwards. I also fixed a bunch of typos — I apologize for the many remaining ones. The pdf has also been fixed.]

Last week Anand Natarajan from MIT presented our joint work on “A Quantum Linearity Test for Robustly Verifying Entanglement” at the STOC’17 conference in Montreal. Since we first posted our paper on the quant-ph arXiv, Anand and I discovered that the test and its analysis could be reformulated in a more general framework of tests for group relations, and rounding of approximate group representations to exact group representations. This reformulation is stimulated by a beautiful paper by Gowers and Hatami on “Inverse and stability theorems for approximate representations of finite groups”, which was first pointed out to me by William Slofstra. The purpose of this post is to present the Gowers-Hatami result as a natural extension of the Blum-Luby-Rubinfeld linearity test to the non-abelian setting, with application to entanglement testing. (Of course Gowers and Hatami are well aware of this — though maybe not of the application to entanglement tests!) My hope in doing so is to make our result more accessible, and hopefully draw some of my readers from theoretical computer science into a beautiful area.

I will strive to make the post self-contained and accessible, with no quantum information background required — indeed, most of the content is purely — dare I say elegantly — mathematical. In the interest of being precise (and working out better parameters for our result than appear in our paper) I include essentially full proofs, though I may allow myself to skip a line or two in some of the calculations.

Given the post remains rather equation-heavy, here is a pdf with the same contents; it may be more convenient to read.

I am grateful to Anand, and Oded Regev and John Wright, for helpful comments on a preliminary version of this post.

1. Linearity testing

The Blum-Luby-Rubinfeld linearity test provides a means to certify that a function {f:{\mathbb Z}_2^n\rightarrow\{\pm1 \}} is close to a linear function. The test can be formulated as a two-player game:

BLR linearity test:

  • (a) The referee selects {a,b\in{\mathbb Z}_2^n} uniformly at random. He sends the pair {(a,b)} to one player, and either {a}, {b}, or {a+b} (chosen uniformly at random) to the other.
  • (b) The first player replies with two bits, and the second player with a single bit. The referee accepts if and only if the players’ answers satisfy the natural consistency constraint (a small simulation of this check follows below).
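
To make the consistency check in step (b) concrete, here is a minimal simulation of the test for an honest deterministic strategy (the choice of {n} and all function names are mine, for illustration only; answers are written as {\pm 1} values, following the convention {f:{\mathbb Z}_2^n\rightarrow\{\pm 1\}}).

```python
import random

n = 4

def referee_round(f, f2):
    """One round of the BLR test against a deterministic strategy.
    f : the single-query player, mapping an n-bit tuple to +/-1
    f2: the pair-query player, mapping (a, b) to a pair of +/-1 answers"""
    a = tuple(random.randint(0, 1) for _ in range(n))
    b = tuple(random.randint(0, 1) for _ in range(n))
    s = tuple((x + y) % 2 for x, y in zip(a, b))
    ans_a, ans_b = f2(a, b)
    which = random.choice(["a", "b", "a+b"])
    if which == "a":
        return f(a) == ans_a
    if which == "b":
        return f(b) == ans_b
    # if the second player received a+b, his answer should equal the product
    # of the first player's two answers (in +/-1 notation)
    return f(s) == ans_a * ans_b

# honest linear strategy chi_S(x) = (-1)^{S.x}, here with S = (1, 0, 1, 0)
S = (1, 0, 1, 0)
chi = lambda x: (-1) ** sum(si * xi for si, xi in zip(S, x))
f2 = lambda a, b: (chi(a), chi(b))
wins = sum(referee_round(chi, f2) for _ in range(10000))
print(wins / 10000)   # 1.0: a linear strategy passes with certainty
```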

This test, as all others considered here, treats both players symmetrically. This allows us to restrict our attention to the case of players who both apply the same strategy, an assumption I will systematically make from now on.

Blum et al.’s result states that any strategy for the players in the linearity test must provide answers chosen according to a function that is close to linear. In this section I will provide a slight “matrix-valued” extension of the BLR result, that follows almost directly from the usual Fourier-analytic proof but will help clarify the extension to the non-abelian case.

1.1. Matrix-valued strategies

The “classical” analysis of the BLR test starts by modeling an arbitrary strategy for the players as a pair of functions {f:{\mathbb Z}_2^n\rightarrow \{\pm 1\}} (for the second player, who receives a single string as query) and {f':{\mathbb Z}_2^n \times {\mathbb Z}_2^n \rightarrow \{\pm 1\}\times\{\pm 1\}} (for the first player, who receives a pair of strings as query). In doing so we are making an assumption: that the players are deterministic. More generally, we should allow “probabilistic strategies”, which can be modeled via “probabilistic functions” {f:{\mathbb Z}_2^n \times \Omega \rightarrow \{\pm 1\}} and {f':{\mathbb Z}_2^n \times {\mathbb Z}_2^n \times\Omega\rightarrow \{\pm 1\}\times\{\pm 1\}} respectively, where {(\Omega,\mu)} is an arbitrary probability space which plays the role of shared randomness between the players. Note that the usual claim that “probabilistic strategies are irrelevant because they can succeed no better than deterministic strategies” is somewhat moot here: the point is not to investigate success probabilities — it is easy to pass the BLR test with probability {1} — but rather derive structural consequences from the assumption that a certain strategy passes the test. In this respect, enlarging the kinds of strategies we consider valid can shed new light on the strengths, and weaknesses, of the test.

Thus, and with an eye towards the “quantum” analysis to come, let us consider an even broader set of strategies, which I’ll refer to as “matrix-valued” strategies. A natural matrix-valued analogue of a function {f:{\mathbb Z}_2^n \rightarrow \{\pm 1\}} is {F:{\mathbb Z}_2^n \rightarrow O_d({\mathbb C})}, where {O_d({\mathbb C})} is the set of {d\times d} Hermitian matrices that square to identity (equivalently, have all eigenvalues in {\{\pm 1\}}); these matrices are called “observables” in quantum mechanics. Similarly, we may generalize a function {f':{\mathbb Z}_2^n \times {\mathbb Z}_2^n \rightarrow \{\pm 1 \} \times \{\pm 1\}} to a function {F':{\mathbb Z}_2^n \times {\mathbb Z}_2^n \rightarrow O_d({\mathbb C}) \times O_d({\mathbb C})}. Here we’ll impose an additional requirement: any pair {(B,C)} in the range of {F'} should be such that {B} and {C} commute. The latter condition is important so that we can make sense of the function as a strategy for the provers: we should be able to ascribe a probability distribution on outcomes {(a,(b,c))} to any query {(x,(y,z))} sent to the players. This is achieved by defining

\displaystyle \Pr\big((F(x), F'(y,z))=(a,(b,c))\big)\,=\,\frac{1}{d}\,\mathrm{Tr}\big( F(x)^aF'(y,z)_1^b F'(y,z)_2^c\big), \ \ \ \ \ (1)

where for any observable {O} we denote {O^{+1}} and {O^{-1}} the projections on the {+1} and {-1} eigenspaces of {O}, respectively (so {O=O^{+1}-O^{-1}} and {O^{+1}+O^{-1}=I}). The condition that {F'(y,z)_1} and {F'(y,z)_2} commute ensures that this expression is always non-negative; moreover it is easy to check that for all {(x,(y,z))} it specifies a well-defined probability distribution on {\{\pm 1\}\times (\{\pm1\}\times \{\pm1\})} . Observe also that in case {d=1} we recover the classical deterministic case, for which with our notation {f(x)^a = 1_{f(x)=a}}. If all {F(x)} and {F'(y,z)} are simultaneously diagonal matrices we recover the probabilistic case, with the role of {\Omega} (the shared randomness) played by the rows of the matrices (hence the normalization of {1/d}; we will see later how to incorporate the use of non-uniform weights).
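
As a quick sanity check on (1), the following numpy snippet (an illustration of mine, not taken from the paper) builds a triple of observables in which the last two commute, and verifies that (1) defines a genuine probability distribution: each value is non-negative and they sum to {1}.

```python
import numpy as np

d = 4
rng = np.random.default_rng(0)

def random_observable(U=None):
    """A Hermitian matrix squaring to identity: U diag(+/-1) U^*."""
    if U is None:
        U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return U @ np.diag(rng.choice([1.0, -1.0], size=d)) @ U.conj().T

def eig_projector(O, s):
    """Projector O^s onto the eigenvalue-s eigenspace of the observable O."""
    return (np.eye(d) + s * O) / 2

F_x = random_observable()
# F'(y,z)_1 and F'(y,z)_2 are made to commute by choosing a common eigenbasis
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
Fp1, Fp2 = random_observable(U), random_observable(U)

total = 0.0
for a in (+1, -1):
    for b in (+1, -1):
        for c in (+1, -1):
            p = np.trace(eig_projector(F_x, a) @ eig_projector(Fp1, b)
                         @ eig_projector(Fp2, c)).real / d
            assert p >= -1e-12        # non-negativity uses [F'_1, F'_2] = 0
            total += p
print(round(total, 6))                # 1.0: a well-defined distribution
```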

With these notions in place we establish the following simple lemma, which states the only consequence of the BLR test we will need.

Lemma 1 Let {n} be an integer, {\varepsilon\geq 0}, and {F:{\mathbb Z}_2^n\rightarrow O_d({\mathbb C})} and {F':{\mathbb Z}_2^n \times {\mathbb Z}_2^n \rightarrow O_d({\mathbb C})\times O_d({\mathbb C})} a matrix strategy for the BLR test, such that players determining their answers according to this strategy (specifically, according to (1)) succeed in the test with probability at least {1-\varepsilon}. Then

\displaystyle \mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n}\,\frac{1}{d}\, \Re\,\mathrm{Tr}\big( F(x)F(y)F(x+y)\big) \,\geq\, 1-O(\varepsilon).

Introducing a normalized inner product {\langle A,B\rangle_f = d^{-1} \mathrm{Tr}(AB^*)} on the space of {d\times d} matrices with complex entries (the {^*} designates the conjugate-transpose), the conclusion of the lemma is that {\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \Re\,\langle F(x)F(y),\,F(x+y)\rangle_f \,=\, 1-O(\varepsilon)}.

Proof: Success with probability {1-\varepsilon} in the test implies the three conditions

\displaystyle \begin{array}{rcl} &&\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \langle F'(x,y)_1,F(x)\rangle_f \,\geq\, 1-3\varepsilon,\\ &&\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \langle F'(x,y)_2,F(y)\rangle_f \,\geq\, 1-3\varepsilon,\\ &&\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \langle F'(x,y)_1F'(x,y)_2,F(x+y)\rangle_f \,\geq\, 1-3\varepsilon. \end{array}

To conclude, use the triangle inequality as

\displaystyle \begin{array}{rcl} &\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} & \big\|F(x)F(y)-F(x+y) \big\|_f^2 \\ & &\qquad\leq \,3\Big(\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \big\|(F(x)-F'(x,y)_1)F(y) \big\|_f^2\\ && \qquad\qquad + \mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \big\|(F(y)-F'(x,y)_2)F'(x,y)_1 \big\|_f^2\\ &&\qquad\qquad+\mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \big\|F'(x,y)_1F'(x,y)_2-F(x+y) \big\|_f^2\Big), \end{array}

where {\|A\|_f^2 = \langle A,A\rangle_f} denotes the dimension-normalized Frobenius norm. Expanding each squared norm and using the preceding conditions and {F(x)^2=1} for all {x} proves the lemma. \Box

1.2. The BLR theorem for matrix-valued strategies

Before stating a BLR theorem for matrix-valued strategies we need to define what it means for such a function {G: {\mathbb Z}_2^n \rightarrow O_d({\mathbb C})} to be linear. Consider first the case of probabilistic functions, i.e. {G} such that all {G(x)} are diagonal, in the same basis. Any such {G} whose every diagonal entry is of the form {\chi_{S}(x) = (-1)^{S \cdot x}} for some {S\in\{0,1\}^n} which may depend on the row/column number will pass the BLR test. This shows that we cannot hope to force {G} to be a single linear function; we must allow “mixtures”. Formally, call {G} linear if {G(x) = \sum_S \chi_S(x) P_S} for some decomposition {\{P_S\}} of the identity, i.e. the {P_S} are pairwise orthogonal projections such that {\sum_S P_S=I}. Note that this indeed captures the probabilistic case; in fact, up to a basis change it is essentially equivalent to it. Thus the following may come as a surprise.

Theorem 2 Let {n} be an integer, {\varepsilon\geq 0}, and {F:{\mathbb Z}_2^n \rightarrow O_d({\mathbb C})} such that

\displaystyle \mathop{\mathbb E}_{x,y\in {\mathbb Z}_2^n} \, \Re\,\langle F(x)F(y),F(x+y)\rangle_f \,\geq\, 1-\varepsilon. \ \ \ \ \ (2)

Then there exists a {d'\geq d}, an isometry {V:{\mathbb C}^d\rightarrow{\mathbb C}^{d'}}, and a linear {G:{\mathbb Z}_2^n \rightarrow O_{d'}({\mathbb C})} such that

\displaystyle \mathop{\mathbb E}_{x\in{\mathbb Z}_2^n} \,\big\| F(x) - V^* G(x)V\big\|_f^2 \,\leq\, 2\,\varepsilon.

Note the role of {V} here, and the lack of control on {d'} (more on both aspects later). Even if {F} is a deterministic function {f}, i.e. {d=1}, the function {G} returned by the theorem may be matrix-valued. In this case the isometry {V} is simply a unit vector {v\in {\mathbb C}^{d'}}, and expanding out the squared norm in the conclusion of the theorem yields the equivalent conclusion

\displaystyle \sum_S (v^* P_S v)\,\Big(\mathop{\mathbb E}_{x} f(x)\, \chi_S(x) \Big) \,\geq\, 1-\varepsilon,

where we expanded {G(x) = \sum_S \chi_S(x) P_S} using our definition of a linear matrix-valued function. Note that {\{ v^* P_S v\}} defines a probability distribution on {\{0,1\}^n}. Thus by an averaging argument there must exist an {S} such that {f(x)=\chi_S(x)} for a fraction at least {1-\varepsilon/2} of all {x\in{\mathbb Z}_2^n}: the usual conclusion of the BLR theorem is recovered.
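
To spell out the averaging step (a small addition of mine, for completeness): the weights {v^* P_S v} are non-negative and sum to {1}, so there is an {S} with {\mathop{\mathbb E}_{x} f(x)\chi_S(x) \geq 1-\varepsilon}, and since {f} and {\chi_S} take values in {\{\pm 1\}},

\displaystyle \mathop{\mathbb E}_{x} f(x)\,\chi_S(x) \,=\, \Pr_x\big[f(x)=\chi_S(x)\big] - \Pr_x\big[f(x)\neq\chi_S(x)\big] \,=\, 1 - 2\,\Pr_x\big[f(x)\neq\chi_S(x)\big],

so that {\Pr_x[f(x)\neq\chi_S(x)]\leq \varepsilon/2}, i.e. {f} agrees with {\chi_S} on a fraction at least {1-\varepsilon/2} of the inputs.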

Proof: The proof of the theorem follows the classic Fourier-analytic proof of Bellare et al. Our first step is to define the isometry {V}. For a vector {u\in {\mathbb C}^d}, define

\displaystyle V u = \sum_S \hat{F}(S) u \otimes e_S \in {\mathbb C}^d \otimes {\mathbb C}^{2^n},

where {\hat{F}(S) = \mathop{\mathbb E}_{x} \chi_S(x) F(x)} is the matrix-valued Fourier coefficient of {F} at {S} and {\{e_S\}_{S\in\{0,1\}^n}} an arbitrary orthonormal basis of {{\mathbb C}^{2^n}}. An easily verified extension of Parseval’s formula shows {\sum_S \hat{F}(S)^2 = I} (recall {F(x)^2=I} for all {x}), so that {V^*V = I}: {V} is indeed an isometry.

Next, define the linear probabilistic function {G} by {G(x) = \sum_S \chi_S(x) P_S}, where {P_S = I \otimes e_Se_S^*} forms a partition of identity. We can evaluate

\displaystyle \begin{array}{rcl} &\mathop{\mathbb E}_{x} \,\langle F(x),V^*G(x)V \rangle_f &= \mathop{\mathbb E}_{x} \sum_{S}\,\langle F(x),\, \chi_S(x) \hat{F}(S)^2 \rangle_f \\ &&= \mathop{\mathbb E}_{x,y} \,\langle F(x+y),\,F(x)F(y) \rangle_f, \end{array}

where the last equality follows by expanding the Fourier coefficients and noticing the appropriate cancellation. Together with (2), this proves the theorem. \Box
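
For readers who like to see the matrix-valued Parseval identity in action, here is a small numerical check (my own illustration, with an arbitrary random {F} and arbitrary small dimensions):

```python
import numpy as np
from itertools import product

n, d = 3, 2
rng = np.random.default_rng(1)

def random_observable():
    U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return U @ np.diag(rng.choice([1.0, -1.0], size=d)) @ U.conj().T

# an arbitrary matrix-valued function F: Z_2^n -> O_d(C)
points = list(product((0, 1), repeat=n))
F = {x: random_observable() for x in points}

def chi(S, x):
    return (-1) ** sum(s * xi for s, xi in zip(S, x))

# matrix-valued Fourier coefficients: hat F(S) = E_x chi_S(x) F(x)
Fhat = {S: sum(chi(S, x) * F[x] for x in points) / 2 ** n for S in points}

# Parseval: sum_S hat F(S)^2 = I   (uses F(x)^2 = I for every x)
parseval = sum(Fhat[S] @ Fhat[S] for S in points)
print(np.allclose(parseval, np.eye(d)))   # True
```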

At the risk of sounding yet more pedantic, it might be useful to comment on the relation between this proof and the usual argument. The main observation in Bellare et al.’s proof is that approximate linearity, expressed by (2), implies a lower bound on the sum of the cubes of the Fourier coefficients of {f}. Together with Parseval’s formula, this bound implies the existence of a large Fourier coefficient, which identifies a close-by linear function.

The proof I gave decouples the argument. Its first step, the construction of the isometry {V} depends on {F}, but does not use anything regarding approximate linearity. It only uses Parseval’s formula to argue that the isometry is well-defined. A noteworthy feature of this step is that the function {G} on the extended space is always well-defined as well: given a function {F}, it is always possible to consider the linear matrix-valued function which “samples {S} according to {\hat{F}(S)^2}” and then returns {\chi_S(x)}. The second step of the proof evaluates the correlation of {F} with the “pull-back” of {G}, and observes that this correlation is precisely our measure of “approximate linearity” of {F}, concluding the proof without having had to explicitly notice that there existed a large Fourier coefficient.

1.3. The group-theoretic perspective

Let’s re-interpret the proof we just gave using group-theoretic language. A linear function {g: {\mathbb Z}_2^n\rightarrow\{\pm 1\}} is, by definition, a mapping which respects the additive group structure on {{\mathbb Z}_2^n}, namely it is a representation. Since {G=({\mathbb Z}_2^n,+)} is an abelian group, it has {|G|=2^n} irreducible {1}-dimensional representations, given by the characters {\chi_S}. As such, the linear function defined in the proof of Theorem 2 is nothing but a list of all irreducible representations of {G}.

The condition (2), which was derived in Lemma 1 as a consequence of success in the BLR test, can be interpreted as the condition that {F} is an “approximate representation” of {G}. Let’s make this a general definition. For {d}-dimensional matrices {A,B} and {\sigma} such that {\sigma} is positive semidefinite, write

\displaystyle \langle A,B\rangle_\sigma = \mathrm{Tr}(AB^* \sigma),

where we use {B^*} to denote the conjugate-transpose. The following definition considers arbitrary finite groups (not necessarily abelian).

Definition 3 Given a finite group {G}, an integer {d\geq 1}, {\varepsilon\geq 0}, and a {d}-dimensional positive semidefinite matrix {\sigma} with trace {1}, an {(\varepsilon,\sigma)}-representation of {G} is a function {f: G \rightarrow U_d({\mathbb C})}, the unitary group of {d\times d} matrices, such that

\displaystyle \mathop{\mathbb E}_{x,y\in G} \,\Re\big(\big\langle f(x)^*f(y) ,f(x^{-1}y) \big\rangle_\sigma\big) \,\geq\, 1-\varepsilon, \ \ \ \ \ (3)

where the expectation is taken under the uniform distribution over {G}.

The condition (3) in the definition is very closely related to Gowers’s {U^2} norm

\displaystyle \|f\|_{U^2}^4 \,=\, \mathop{\mathbb E}_{xy^{-1}=zw^{-1}}\, \big\langle f(x)f(y)^* ,f(z)f(w)^* \big\rangle_\sigma.

While a large Gowers norm implies closeness to an affine function, we are interested in testing linear functions, and the condition (3) will arise naturally from our calculations in the next section.

If {G=({\mathbb Z}_2^n,+)}, the product {xy^{-1}} should be written additively as {x-y=x+y}, so that the condition (2) is precisely that {F} is an {(\varepsilon,\sigma)}-representation of {G}, where {\sigma = d^{-1}I}. Theorem 2 can thus be reformulated as stating that for any {(\varepsilon,\sigma)}-approximate representation of the abelian group {G=({\mathbb Z}_2^n,+)} there exists an isometry {V:{\mathbb C}^d \rightarrow {\mathbb C}^d\otimes {\mathbb C}^{2^n}} and an exact representation {g} of {G} on {{\mathbb C}^d \otimes {\mathbb C}^{2^n}} such that {F} is well-approximated by the “pull-back” {V^*gV} of {g} to {{\mathbb C}^d}. In the next section I will make the words in quotes precise and generalize the result to the case of arbitrary finite groups.

2. Approximate representations of non-abelian groups

2.1. The Gowers-Hatami theorem

In their paper Gowers and Hatami consider the problem of “rounding” approximate group representations to exact representations. I highly recommend the paper, which gives a thorough introduction to the topic, including multiple motivations. Here I will state and prove a slightly more general, but quantitatively weaker, variant of their result inspired by the somewhat convoluted analysis of the BLR test given in the previous section.

Theorem 4 (Gowers-Hatami) Let {G} be a finite group, {\varepsilon\geq 0}, and {f:G\rightarrow U_d({\mathbb C})} an {(\varepsilon,\sigma)}-representation of {G}. Then there exists a {d'\geq d}, an isometry {V:{\mathbb C}^d\rightarrow {\mathbb C}^{d'}}, and a representation {g:G\rightarrow U_{d'}({\mathbb C})} such that

\displaystyle \mathop{\mathbb E}_{x\in G}\, \big\| f(x) - V^*g(x)V \big\|_\sigma^2\, \leq\, 2\,\varepsilon.

Gowers and Hatami limit themselves to the case of {\sigma = d^{-1}I_d}, which corresponds to the dimension-normalized Frobenius norm. In this scenario they in addition obtain a tight control of the dimension {d'}, and show that one can always take {d' = (1+O(\varepsilon))d} in the theorem. I will give a much shorter proof than theirs (the proof is implicit in their argument) that does not seem to allow one to recover this estimate. (It is possible to adapt their proof to keep a control of {d'} even in the case of general {\sigma}, but I will not explain this here.) Essentially the same proof as the one sketched below has been extended to some classes of infinite groups by De Chiffre, Ozawa and Thom in a recent preprint.

Note that, contrary to the BLR theorem, where the “embedding” is not strictly necessary (if {\varepsilon} is small enough we can identify a single close-by linear function), as noted by Gowers and Hatami, Theorem 4 does not in general hold with {d'=d}. The reason is that it is possible for {G} to have an approximate representation in some dimension {d}, but no exact representation of the same dimension: to obtain an example of this, take any group {G} that has all non-trivial irreducible representations of large enough dimension, and create an approximate representation in e.g. dimension one less by “cutting off” one row and column from an exact representation. The dimension normalization induced by the norm {\|\cdot\|_\sigma} will barely notice this, but it will be impossible to “round” the approximate representation obtained to an exact one without modifying the dimension.

The necessity for the embedding helps distinguish the Gowers-Hatami result from other extensions of the linearity test to the non-abelian setting, such as the work by Ben-Or et al. on non-Abelian homomorphism testing (I thank Oded Regev for pointing me to the paper). In that paper the authors show that a function {f:G\rightarrow H}, where {G} and {H} are finite non-abelian groups, which satisfies {\Pr( f(x)f(y)=f(xy) ) \geq 1-\varepsilon}, is {O(\varepsilon)}-close to a homomorphism {g:G\rightarrow H}. The main difference with the setting for the Gowers-Hatami result is that since {H} is finite, Ben-Or et al. use the Kronecker {\delta} function as distance on {H}. This allows them to employ combinatorial arguments, and provide a rounding procedure that does not need to modify the range space ({H}). In contrast, here the unitary group is infinite.

The main ingredient needed to extend the analysis of the BLR test is an appropriate notion of Fourier transform over non-abelian groups. Given an irreducible representation {\rho: G \rightarrow U_{d_\rho}({\mathbb C})}, define

\displaystyle \hat{f}(\rho) \,=\, \mathop{\mathbb E}_{x\in G} \,f(x) \otimes \overline{\rho(x)}. \ \ \ \ \ (4)

In case {G} is abelian, we always have {d_\rho=1}, the tensor product is a product, and (4) reduces to the usual definition of Fourier coefficient. The only property we will need of irreducible representations is that they satisfy the relation

\displaystyle \sum_\rho \,d_\rho\,\mathrm{Tr}(\rho(x)) \,=\, |G|\delta_{xe}\;, \ \ \ \ \ (5)

for any {x\in G}. Note that plugging in {x=e} (the identity element in {G}) yields {\sum_\rho d_\rho^2= |G|}.

Proof: As in the proof of Theorem 2 our first step is to define an isometry {V:{\mathbb C}^d \rightarrow {\mathbb C}^d \otimes (\oplus_\rho {\mathbb C}^{d_\rho} \otimes {\mathbb C}^{d_\rho})} by

\displaystyle V :\;u \in {\mathbb C}^d \,\mapsto\, \bigoplus_\rho \,d_\rho^{1/2} \sum_{i=1}^{d_\rho} \,\big(\hat{f}(\rho) (u\otimes e_i)\big) \otimes e_i,

where the direct sum ranges over all irreducible representations {\rho} of {G} and {\{e_i\}} is the canonical basis. Note what {V} does: it “embeds” any vector {u\in {\mathbb C}^d} into a direct sum, over irreducible representations {\rho}, of a {d}-dimensional vector of {d_\rho\times d_\rho} matrices. Each (matrix) entry of this vector can be thought of as the Fourier coefficient of the corresponding entry of the vector {f(x)u} associated with {\rho}. If {G={\mathbb Z}_2^n} and {f} takes values in {O_d({\mathbb C})} this recovers the isometry defined in the proof of Theorem 2. And indeed, the fact that {V} is an isometry again follows from the appropriate extension of Parseval’s formula:

\displaystyle \begin{array}{rcl} & V^* V &= \sum_\rho d_\rho \sum_i (I\otimes e_i^*) \hat{f}(\rho)^*\hat{f}(\rho) (I\otimes e_i)\\ &&= \mathop{\mathbb E}_{x,y}\, f(x)^*f(y) \sum_\rho d_\rho \sum_i (e_i^* \rho(x)^T \overline{\rho(y)} e_i)\\ &&= \sum_\rho \frac{d_\rho^2}{|G|}I = I, \end{array}

where for the second line we used the definition (4) of {\hat{f}(\rho)} and for the third we used (5) and the fact that {f} takes values in the unitary group.

Following the same steps as in the proof of Theorem 2, we next define

\displaystyle g(x) = \bigoplus_\rho \,\big(I_d \otimes I_{d_\rho} \otimes \rho(x)\big),

a direct sum over all irreducible representations of {G} (hence itself a representation). Let’s first compute the “pull-back” of {g} by {V}: following a similar calculation to the one above, for any {x\in G},

\displaystyle \begin{array}{rcl} & V^*g(x) V &= \sum_{\rho} d_\rho \sum_{i,j} (I\otimes e_i^*)\hat{f}(\rho)^* \hat{f}(\rho)(I\otimes e_j)\, \big( e_i^* \rho(x) e_j \big) \\ && = \mathop{\mathbb E}_{z,y}\, f(z)^*f(y) \sum_{\rho} d_\rho \sum_{i,j} (e_i^* \rho(z)^T \overline{\rho(y)} e_j) \big( e_i^* \rho(x) e_j \big) \\ && = \mathop{\mathbb E}_{z,y}\, f(z)^*f(y) \sum_{\rho} d_\rho \mathrm{Tr}\big( \rho(z)^T \overline{\rho(y)} {\rho(x)^T} \big) \\ && = \mathop{\mathbb E}_{z,y}\, f(z)^*f(y) \sum_{\rho} d_\rho \mathrm{Tr}\big( \rho(z^{-1}y x^{-1}) \big) \\ && = \mathop{\mathbb E}_{z}\, f(z)^*f(zx) , \end{array}

where the last equality uses (5). It then follows that

\displaystyle \begin{array}{rcl} &\mathop{\mathbb E}_{x}\, \big\langle f(x), V^*g(x) V \big\rangle_\sigma &= \mathop{\mathbb E}_{x,z} \mathrm{Tr}\big( f(x) f(zx)^* f(z)\sigma\big). \end{array}

This relates correlation of {f} with {V^*gV} to the quality of {f} as an approximate representation and proves the theorem. \Box

2.2. Application: the Weyl-Heisenberg group

In quantum information we care a lot about the Pauli group. For our purposes it will be sufficient (and much more convenient, allowing us to avoid some trouble with complex conjugation) to consider the Weyl-Heisenberg group {H}, or “Pauli group modulo complex conjugation”, which is the {8}-element group {\{\pm \sigma_I,\pm \sigma_X,\pm \sigma_Z,\pm \sigma_W\}} whose multiplication table matches that of the {2\times 2} matrices

\displaystyle \sigma_X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\qquad \sigma_Z= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \ \ \ \ \ (6)

{\sigma_I = \sigma_X^2 = \sigma_Z^2} and {\sigma_W=\sigma_X\sigma_Z=-\sigma_Z\sigma_X}. This group has four {1}-dimensional representations, uniquely specified by the image of {\sigma_X} and {\sigma_Z} in {\{\pm 1\}}, and a single irreducible {2}-dimensional representation, given by the matrices defined above. We can also consider the “{n}-qubit Weyl-Heisenberg group” {H^{(n)}}, the matrix group generated by {n}-fold tensor products of the {8} matrices identified above. The irreducible representations of {H^{(n)}} are easily computed from those of {H}; for us the only thing that matters is that the only irreducible representation which satisfies {\rho(-I)=-\rho(I)} has dimension {2^n} and is given by the defining matrix representation (in fact, it is the only irreducible representation in dimension larger than {1}).
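
To make the representation theory of {H} concrete, here is a short numerical check (an illustration of mine, not part of the argument) that the four {1}-dimensional representations together with the defining {2}-dimensional one satisfy the relation (5):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
W = X @ Z                      # sigma_W = sigma_X sigma_Z = -sigma_Z sigma_X

paulis = {"I": I2, "X": X, "Z": Z, "W": W}
# the 8 group elements, labelled by (sign, name); the defining representation
# sends (s, name) to the matrix s * paulis[name]
elements = [(s, name) for s in (1, -1) for name in paulis]

def two_dim(g):
    s, name = g
    return s * paulis[name]

def one_dim(eps_x, eps_z):
    # 1-dimensional irreps: the sign is sent to +1, sigma_X to eps_x, sigma_Z to eps_z
    values = {"I": 1, "X": eps_x, "Z": eps_z, "W": eps_x * eps_z}
    return lambda g: np.array([[float(values[g[1]])]])

irreps = [(1, one_dim(ex, ez)) for ex in (1, -1) for ez in (1, -1)] + [(2, two_dim)]

# relation (5): sum_rho d_rho Tr(rho(x)) equals |G| = 8 for x = e and 0 otherwise
for g in elements:
    total = sum(d_rho * np.trace(rho(g)) for d_rho, rho in irreps)
    expected = len(elements) if g == (1, "I") else 0
    assert np.isclose(total, expected)
print("relation (5) holds for all 8 elements of H")
```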

With the upcoming application to entanglement testing in mind, I will state a version of Theorem 4 tailored to the group {H^{(n)}} and a specific choice of presentation for the group relations. Towards this we first need to recall the notion of Schmidt decomposition of a bipartite state (i.e. unit vector) {\psi \in {\mathbb C}^d \otimes {\mathbb C}^d}. The Schmidt decomposition states that any such vector can be written as

\displaystyle \psi \,=\, \sum_i \,\sqrt{\lambda_i}\, u_i \otimes v_i, \ \ \ \ \ (7)

for some orthonormal bases {\{u_i\}} and {\{v_i\}} of {{\mathbb C}^d} (the “Schmidt vectors”) and non-negative coefficients {\sqrt{\lambda_i}} (the “Schmidt coefficients”). The decomposition can be obtained by “reshaping” {\psi = \sum_{i,j} \psi_{i,j} e_i \otimes e_j} into a {d\times d} matrix {K=(\psi_{i,j})_{1\leq i,j\leq d}} and performing the singular value decomposition. To {\psi} we associate the (uniquely defined) positive semidefinite matrix

\displaystyle \sigma \,=\, KK^* \,=\, \sum_i \lambda_i\,u_iu_i^*\; ; \ \ \ \ \ (8)

note that {\sigma} has trace {1}. The matrix {\sigma} is called the reduced density matrix of {\psi} (on the first system).
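
As a quick illustration of the reshaping trick (a sketch of mine, using an arbitrary random state):

```python
import numpy as np

d = 3
rng = np.random.default_rng(2)

# a random unit vector psi in C^d (x) C^d
psi = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)
psi /= np.linalg.norm(psi)

# reshape psi into the d x d matrix K = (psi_{ij}) and take its SVD:
# the singular values are the Schmidt coefficients sqrt(lambda_i)
K = psi.reshape(d, d)
schmidt_coeffs = np.linalg.svd(K, compute_uv=False)

# reduced density matrix sigma = K K^* = sum_i lambda_i u_i u_i^*
sigma = K @ K.conj().T
print(np.isclose(np.trace(sigma).real, 1.0))                              # trace one
print(np.allclose(np.linalg.eigvalsh(sigma)[::-1], schmidt_coeffs ** 2))  # eigenvalues are the lambda_i
```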

Corollary 5 Let {n,d} be integers, {\varepsilon \geq 0}, {\psi \in {\mathbb C}^d \otimes {\mathbb C}^d} a unit vector, {\sigma} the positive semidefinite matrix associated to {\psi} as in (8), and {f: \{X,Z\}\times \{0,1\}^n \rightarrow U({\mathbb C}^d)}. For {a,b\in\{0,1\}^n} let {X(a)=f(X,a)}, {Z(b)=f(Z,b)}, and assume {X(a)^2=Z(b)^2=I_d} for all {a,b} (we call such operators, unitaries with eigenvalues in {\{\pm 1\}}, observables). Suppose that the following inequalities hold: consistency

\displaystyle \mathop{\mathbb E}_a \, \psi^* \big(X(a) \otimes X(a)\big) \psi \,\geq\,1-\varepsilon,\qquad \mathop{\mathbb E}_b \, \psi^* \big(Z(b) \otimes Z(b) \big)\psi\,\geq\,1-\varepsilon, \ \ \ \ \ (9)

linearity

\displaystyle \mathop{\mathbb E}_{a,a'} \,\big\|X(a)X(a')-X(a+a')\big\|_\sigma^2 \leq \varepsilon,\qquad\mathop{\mathbb E}_{b,b'}\, \big\|Z(b)Z(b')-Z(b+b')\big\|_\sigma^2 \leq \varepsilon, \ \ \ \ \ (10)

and anti-commutation

\displaystyle \mathop{\mathbb E}_{a,b} \,\big\| X(a)Z(b)-(-1)^{a\cdot b} Z(b)X(a)\big\|_\sigma^2\,\leq\,\varepsilon. \ \ \ \ \ (11)

Then there exists a {d'\geq d}, an isometry {V:{\mathbb C}^d\rightarrow {\mathbb C}^{d'}}, and a representation {g:H^{(n)}\rightarrow U_{d'}({\mathbb C})} such that {g(-I)=-I_{d'}} and

\displaystyle \mathop{\mathbb E}_{a,b}\, \big\| X(a)Z(b) - V^*g(\sigma_X(a)\sigma_Z(b))V \big\|_\sigma^2 \,=\, O(\varepsilon).

Note that the conditions (10) and (11) in the corollary are very similar to the conditions required of an approximate representation of the group {H^{(n)}}; in fact it is easy to convince oneself that their exact analogues suffice to imply all the group relations. The reason for including only those relations is that they are the ones that it will be possible to test; see the next section for this. Condition (9) is necessary to derive the conditions of Theorem 4 from (10) and (11), and is also testable; see the proof.
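
It may be reassuring to check that the “honest” strategy ({X(a)=\sigma_X(a)}, {Z(b)=\sigma_Z(b)} and {\psi=\phi_{2^n}}, so that {\sigma = 2^{-n}I}) satisfies (9), (10) and (11) with {\varepsilon=0}. The snippet below (an illustration of mine) does this for {n=2}.

```python
import numpy as np
from functools import reduce
from itertools import product

n = 2
d = 2 ** n
I1 = np.eye(2)
X1 = np.array([[0., 1.], [1., 0.]])
Z1 = np.diag([1., -1.])

def pauli(P, bits):
    """sigma_P(bits): tensor product with P on the positions where bits is 1."""
    return reduce(np.kron, [P if b else I1 for b in bits])

phi = np.eye(d).reshape(-1) / np.sqrt(d)   # maximally entangled state phi_{2^n}
sigma = np.eye(d) / d                      # its reduced density matrix

def norm_sigma_sq(A):
    """||A||_sigma^2 = Tr(A A^* sigma)."""
    return np.trace(A @ A.conj().T @ sigma).real

ok = True
strings = list(product((0, 1), repeat=n))
for a, b in product(strings, repeat=2):
    Xa, Xb, Zb = pauli(X1, a), pauli(X1, b), pauli(Z1, b)
    # consistency (9), here for the X observables (the Z case is identical)
    ok &= np.isclose((phi.conj() @ np.kron(Xa, Xa) @ phi).real, 1.0)
    # linearity (10): sigma_X(a) sigma_X(b) = sigma_X(a+b), and similarly for Z
    Xsum = pauli(X1, tuple((u + v) % 2 for u, v in zip(a, b)))
    ok &= np.isclose(norm_sigma_sq(Xa @ Xb - Xsum), 0.0)
    # anti-commutation (11): sigma_X(a) sigma_Z(b) = (-1)^{a.b} sigma_Z(b) sigma_X(a)
    sign = (-1) ** (sum(u * v for u, v in zip(a, b)) % 2)
    ok &= np.isclose(norm_sigma_sq(Xa @ Zb - sign * Zb @ Xa), 0.0)
print(bool(ok))   # True
```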

Proof: To apply Theorem 4 we need to construct an {(\varepsilon,\sigma)}-representation {f} of the group {H^{(n)}}. Using that any element of {H^{(n)}} has a unique representative of the form {\pm \sigma_X(a)\sigma_Z(b)} for {a,b\in\{0,1\}^n}, we define {f(\pm \sigma_X(a)\sigma_Z(b)) = \pm X(a)Z(b)}. Next we need to verify (3). Let {x,y\in H^{(n)}} be such that {x=\sigma_X(a_x)\sigma_Z(b_x)} and {y=\sigma_X(a_y)\sigma_Z(b_y)} for {n}-bit strings {(a_x,b_x)} and {(a_y,b_y)} respectively. Up to phase, we can exploit successive cancellations to decompose {(f(x)f(y)^*-f(xy^{-1}))\otimes I} as

\displaystyle \begin{array}{rcl} &&\big( X(a_x)Z(b_x)X(a_y)Z(b_y) -(-1)^{a_y\cdot b_x} X(a_x+a_y) Z(b_x+b_y)\big)\otimes I \\ &&\qquad = X(a_x)Z(b_x)X(a_y)\big (Z(b_y)\otimes I - I\otimes Z(b_y)\big)\\ && \qquad\qquad+ X(a_x)\big(Z(b_x)X(a_y) - (-1)^{a_y\cdot b_x} X(a_y)Z(b_x)\big)\otimes Z(b_y)\\ && \qquad\qquad+(-1)^{a_y\cdot b_x} \big( X(a_x)X(a_y)\otimes Z(b_y)\big) \big( Z(b_x)\otimes I - I\otimes Z(b_x)\big)\\ && \qquad\qquad+ (-1)^{a_y\cdot b_x} \big( X(a_x)X(a_y)\otimes Z(b_y)Z(b_x) - X(a_x+a_y)\otimes Z(b_x+b_y)\big)\\ && \qquad\qquad+ (-1)^{a_y\cdot b_x} \big( X(a_x+a_y)\otimes I \big)\big(I\otimes Z(b_x+b_y) - Z(b_x+b_y)\otimes I\big). \end{array}

(It is worth staring at this sequence of equations for a little bit. In particular, note the “player-switching” that takes place in the 2nd, 4th and 6th lines; this is used as a means to “commute” the appropriate unitaries, and is the reason for including (9) among the assumptions of the corollary.) Evaluating each term on the vector {\psi}, taking the squared Euclidean norm, and then the expectation over uniformly random {a_x,a_y,b_x,b_y}, the inequality {\| AB\psi\| \leq \|A\|\|B\psi\|} and the assumptions of the theorem let us bound the overlap of each term in the resulting summation by {O({\varepsilon})}. Using {\| (A\otimes I) \psi\| = \|A\|_\sigma} by definition, we obtain the bound

\displaystyle \mathop{\mathbb E}_{x,y}\,\big\|f(x)f(y)^* - f(xy^{-1})\big\|_\sigma^2 \,=\, O({\varepsilon}).

We are thus in a position to apply Theorem 4, which gives an isometry {V} and exact representation {g} such that

\displaystyle \mathop{\mathbb E}_{a,b}\,\Big\| X(a)Z(b)- \frac{1}{2}V^*\big( g(\sigma_X(a)\sigma_Z(b)) - g(-\sigma_X(a)\sigma_Z(b))\big)V\Big\|_\sigma^2 \,=\, O({\varepsilon}). \ \ \ \ \ (12)

Using that {g} is a representation, {g(-\sigma_X(a)\sigma_Z(b)) = g(-I)g(\sigma_X(a)\sigma_Z(b))}. It follows from (12) that {\|g(-I) + I \|_\sigma^2 = O({\varepsilon})}, so we may restrict the range of {V} to the subspace where {g(-I)=-I} without introducing much additional error. \Box

3. Entanglement testing

Our discussion so far has barely touched upon the notion of entanglement. Recall the Schmidt decomposition (7) of a unit vector {\psi \in {\mathbb C}^d\otimes {\mathbb C}^d}, and the associated reduced density matrix {\sigma} defined in (8). The state {\psi} is called entangled if this matrix has rank larger than {1}; equivalently, if there is more than one non-zero coefficient {\lambda_i} in (7). The Schmidt rank of {\psi} is the rank of {\sigma}, the number of non-zero terms in (7). It is a crude, but convenient, measure of entanglement; in particular it provides a lower bound on the local dimension {d}. A useful observation is that the Schmidt rank is invariant under local unitary operations: these may affect the Schmidt vectors {\{u_i\}} and {\{v_i\}}, but not the number of non-zero terms.

3.1. A certificate for high-dimensional entanglement

Among all entangled states in dimension {d}, the maximally entangled state {\phi_d} is the one which maximizes entanglement entropy, defined as the Shannon entropy of the distribution induced by the squares of the Schmidt coefficients:

\displaystyle \phi_d \,=\, \frac{1}{\sqrt{d}} \sum_{i=1}^d\, e_i\otimes e_i,

with entropy {\log d}. The following lemma gives a “robust” characterization of the maximally entangled state in dimension {d=2^n} as the unique common eigenvalue-{1} eigenvector of all operators of the form {\sigma_P \otimes \sigma_P}, where {\sigma_P} ranges over the elements of the unique {2^n}-dimensional irreducible representation of the Weyl-Heisenberg group {H^{(n)}}, i.e. the Pauli matrices (taken modulo {\sqrt{-1}}).

Lemma 6 Let {\varepsilon\geq 0}, {n} an integer, {d=2^n}, and {\psi\in {\mathbb C}^d \otimes {\mathbb C}^d} a unit vector such that

\displaystyle \mathop{\mathbb E}_{a,b}\, \psi^* \big(\sigma_X(a) \sigma_Z(b)\otimes \sigma_X(a) \sigma_Z(b) \big) \psi \geq 1-\varepsilon. \ \ \ \ \ (13)

Then {|\psi^*\phi_{2^n}|^2 \geq 1-\varepsilon}. In particular, {\psi} has Schmidt rank at least {(1-\varepsilon) 2^n}.

Proof: Consider the case {n=1}. The “swap” matrix

\displaystyle S = \frac{1}{4}\big(\sigma_I \otimes \sigma_I + \sigma_X \otimes \sigma_X + \sigma_Z \otimes \sigma_Z + \sigma_W \otimes \sigma_W\big)

is the orthogonal projection onto the vector {\phi_2 = (e_1\otimes e_1 + e_2\otimes e_2)/\sqrt{2}} (a.k.a. “EPR pair”); in particular {\phi_2} is its unique eigenvalue-{1} eigenvector, and {\psi^* S \psi = |\psi^*\phi_2|^2} for any unit vector {\psi}. Since the left-hand side of (13) for {n=1} is exactly {\psi^* S \psi}, the assumption implies {|\psi^* \phi_2|^2 \geq 1-\varepsilon}. The same argument for general {n} shows {|\psi^* \phi_{2^n}|^2 \geq 1-\varepsilon}. Any unit vector {u} of Schmidt rank at most {r} satisfies {|u^* \phi_{2^n}|^2 \leq r2^{-n}}, concluding the proof. \Box
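
A quick numerical confirmation (mine) of the structure of {S}: its eigenvalues are {(1,0,0,0)}, and the eigenvalue-{1} eigenvector is {\phi_2}.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
W = X @ Z

S = sum(np.kron(P, P) for P in (I2, X, Z, W)) / 4
phi2 = np.array([1., 0., 0., 1.]) / np.sqrt(2)

print(np.round(np.linalg.eigvalsh(S), 6))      # [0. 0. 0. 1.]
print(np.allclose(S, np.outer(phi2, phi2)))    # True: S is the projection onto phi_2
```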

Lemma 6 provides an “experimental road-map” for establishing that a bipartite system is in a highly entangled state:

  • (i) Select a random {\sigma_P = \pm\sigma_X(a)\sigma_Z(b) \in H^{(n)}};
  • (ii) Measure both halves of {\psi} using {\sigma_P};
  • (iii) Check that the outcomes agree.

To explain the connection between the above “operational test” and the lemma I should review what a measurement in quantum mechanics is. For our purposes it is enough to talk about binary measurements (i.e. measurements with two outcomes, {+1} and {-1}). Any such measurement is specified by a pair of orthogonal projections, {M_+} and {M_-}, on {{\mathbb C}^d} such that {M_++M_- = I_d}. The probability of obtaining outcome {\pm} when measuring {\psi} is {\|M_\pm \psi\|^2}. We can represent a binary measurement succinctly through the observable {M=M_+-M_-}. (In general, an observable is a Hermitian matrix which squares to identity.) It is then the case that if an observable {M} is applied on the first half of a state {\psi\in{\mathbb C}^d\otimes{\mathbb C}^d}, and another observable {N} is applied on the second half, then the probability of agreement, minus the probability of disagreement, between the outcomes obtained is precisely {\psi^*(M\otimes N)\psi}, a number which lies in {[-1,1]}. Thus the condition that the test described above accepts with probability {1-\varepsilon} when performed on a state {\psi} is precisely equivalent to the assumption (13) of Lemma 6.
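
The identification of {\psi^*(M\otimes N)\psi} with the difference between the agreement and disagreement probabilities is easy to verify numerically; here is a small sketch (mine, using random observables and a random state):

```python
import numpy as np

d = 2
rng = np.random.default_rng(3)

def random_observable():
    U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return U @ np.diag(rng.choice([1.0, -1.0], size=d)) @ U.conj().T

M, N = random_observable(), random_observable()
psi = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)
psi /= np.linalg.norm(psi)

def prob(s, t):
    """Probability of outcomes (s, t) when measuring M on the first half
    of psi and N on the second half."""
    proj = np.kron((np.eye(d) + s * M) / 2, (np.eye(d) + t * N) / 2)
    return np.linalg.norm(proj @ psi) ** 2

agreement = prob(+1, +1) + prob(-1, -1)
disagreement = prob(+1, -1) + prob(-1, +1)
correlation = (psi.conj() @ np.kron(M, N) @ psi).real
print(np.isclose(agreement - disagreement, correlation))   # True
```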

Even though this provides a perfectly fine test for entanglement in principle, practitioners in the foundations of quantum mechanics know all too well that their opponents — e.g. “quantum-skeptics” — will not be satisfied with such an experiment. In particular, who is to guarantee that the measurement performed in step (ii) is really {\sigma_P\otimes\sigma_P}, as claimed? At the very least, doesn’t this already implicitly assume that the measured system has dimension {2^n}?

This is where the notion of device independence comes in. Briefly, in this context the idea is to obtain the same conclusion (a certificate of high-dimensional entanglement) without any assumption on the measurement performed: the only information to be trusted is classical data (statistics generated by the experiment), but not the operational details of the experiment itself.

This is where Corollary 5 enters the picture. Reformulated in the present context, the corollary provides a means to verify that arbitrary measurements “all but behave” as Pauli measurements, provided they generate the right statistics. To explain how this can be done we need to provide additional “operational tests” that can be used to certify the assumptions of the corollary.

3.2. Testing the Weyl-Heisenberg group relations

Corollary 5 makes three assumptions about the observables {X(a)} and {Z(b)}: that they satisfy approximate consistency (9), linearity (10), and anti-commutation (11). In this section I will describe two (somewhat well-known) tests that make it possible to certify these relations based only on the fact that the measurements generate statistics which pass the tests.

Linearity test:

  • (a) The referee selects {W\in\{X,Z\}} and {a,a'\in\{0,1\}^n} uniformly at random. He sends {(W,a,a')} to one player and {(W,a)}, {(W,a')}, or {(W,a+a')} to the other.
  • (b) The first player replies with two bits, and the second with a single bit. The referee accepts if and only if the players’ answers are consistent.

As always in this note, the test treats both players symmetrically. As a result we can (and will) assume that the players’ strategy is symmetric, and is specified by a permutation-invariant state {\psi\in {\mathbb C}^d \otimes {\mathbb C}^d} and a measurement for each question: an observable {W(a)} associated to questions of the form {(W,a)}, and a more complicated four-outcome measurement {\{W^{a,a'}\}} associated with questions of the form {(W,a,a')}. (It will not be necessary to go into the details of the formalism for such measurements.)

The linearity test described above is identical to the BLR linearity test described earlier, except for the use of the basis label {W\in\{X,Z\}}. The lemma below is a direct analogue of Lemma 1, extending the analysis to the setting of players sharing entanglement. It first appeared in a joint paper with Ito, where we used an extension of the linearity test, Babai et al.’s multilinearity test, to show the inclusion of complexity classes NEXP{\subseteq}MIP{^*}.

Lemma 7 Suppose that a family of observables {\{W(a)\}} for {W\in\{X,Z\}} and {a\in\{0,1\}^n}, generates outcomes that succeed in the linearity test with probability {1-\varepsilon}, when applied on a bipartite state {\psi\in{\mathbb C}^d\otimes {\mathbb C}^d}. Then the following hold: approximate consistency

\displaystyle \mathop{\mathbb E}_a \, \psi^* \big(X(a) \otimes X(a)\big) \psi \,\geq\,1-O(\varepsilon),\qquad \mathop{\mathbb E}_b \, \psi^* \big(Z(b) \otimes Z(b) \big)\psi\,\geq\,1-O(\varepsilon),

and linearity

\displaystyle \mathop{\mathbb E}_{a,a'} \,\big\|X(a)X(a')-X(a+a')\big\|_\sigma^2 = O(\varepsilon),\qquad\mathop{\mathbb E}_{b,b'}\, \big\|Z(b)Z(b')-Z(b+b')\big\|_\sigma^2 \,=\, O({\varepsilon}).

Testing anti-commutation is slightly more involved. We will achieve this by using a two-player game called the Magic Square game. This is a fascinating game, but just as for the linearity test I will treat it superficially and only recall the part of the analysis that is useful for us (see e.g. the paper by Wu et al. for a description of the game and a proof of Lemma 8 below).

Lemma 8 (Magic Square) The Magic Square game is a two-player game with nine possible questions (with binary answers) for one player and six possible questions (with two-bit answers) for the other player which has the following properties. The distribution on questions in the game is uniform. Two of the questions to the first player are labelled {X} and {Z} respectively. For any strategy for the players that succeeds in the game with probability at least {1-\varepsilon} using a bipartite state {\psi\in{\mathbb C}^d\otimes {\mathbb C}^d} and observables {X} and {Z} for questions {X} and {Z} respectively, it holds that

\displaystyle \big\|\big( (XZ+ZX)\otimes I_d \big)\psi\big\|^2 \,=\, O\big(\sqrt{\varepsilon}\big). \ \ \ \ \ (14)

Moreover, there exists a strategy which succeeds with probability {1} in the game, using {\psi=\phi_4} and Pauli observables {\sigma_X \otimes I_2} and {\sigma_Z\otimes I_2} for questions {X} and {Z} respectively.
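As a quick sanity check of (14) for the ideal strategy (a numpy sketch of my own, with {\phi_4} the maximally entangled state on {{\mathbb C}^4\otimes{\mathbb C}^4}): the anti-commutator of the two ideal observables vanishes identically, so the left-hand side of (14) is exactly zero.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2, id4 = np.eye(2, dtype=complex), np.eye(4, dtype=complex)

# Ideal observables for questions X and Z: sigma_X (x) I_2 and sigma_Z (x) I_2 on one player's two qubits.
X = np.kron(sx, id2)
Z = np.kron(sz, id2)

# phi_4: maximally entangled state on C^4 (x) C^4 (equivalently, two shared EPR pairs).
phi4 = np.eye(4, dtype=complex).reshape(-1) / 2.0

# sigma_X and sigma_Z anti-commute, so XZ + ZX = 0 and the quantity in (14) is exactly 0.
print(np.linalg.norm(np.kron(X @ Z + Z @ X, id4) @ phi4) ** 2)  # 0.0
```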

Based on the Magic Square game we devise the following “anti-commutation test”.

Anti-commutation test:

  • (a) The referee selects {a,b\in\{0,1\}^n} uniformly at random under the condition that {a\cdot b=1}. He plays the Magic Square game with both players, with the following modifications: if the question to the first player is {X} or {Z} he sends {(X,a)} or {(Z,b)} instead; in all other cases he sends the original label of the question in the Magic Square game together with both strings {a} and {b}.
  • (b) Each player provides answers as in the Magic Square game. The referee accepts if and only if the players’ answers would have been accepted in the game.

Using Lemma 8 it is straightforward to show the following.

Lemma 9 Suppose a strategy for the players succeeds in the anti-commutation test with probability at least {1-\varepsilon}, when performed on a bipartite state {\psi \in {\mathbb C}^d \otimes {\mathbb C}^d}. Then the observables {X(a)} and {Z(b)} applied by the player upon receipt of questions {(X,a)} and {(Z,b)} respectively satisfy

\displaystyle \mathop{\mathbb E}_{a,b:\,a\cdot b=1} \,\big\| X(a)Z(b)-(-1)^{a\cdot b} Z(b)X(a)\big\|_\sigma^2\,=\,O\big(\sqrt{\varepsilon}\big). \ \ \ \ \ (15)

3.3. A robust test for high-dimensional entangled states

We are ready to state, and prove, our main theorem: a test for high-dimensional entanglement that is “robust”, meaning that a success probability within a constant of the optimal value suffices to certify that the underlying state is within a constant distance from the target state — in this case, a tensor product of {n} EPR pairs. Although arguably a direct “quantization” of the BLR result, this is the first known test which achieves constant robustness — all previous {n}-qubit tests required a success probability inverse-polynomially (in {n}) close to the optimum in order to provide any meaningful conclusion.

{n}-qubit Pauli braiding test: With probability {1/2} each,

  • (a) Execute the linearity test.
  • (b) Execute the anti-commutation test.

Theorem 10 Suppose that a family of observables {W(a)}, for {W\in\{X,Z\}} and {a\in\{0,1\}^n}, and a state {\psi\in{\mathbb C}^d\otimes {\mathbb C}^d}, generate outcomes that pass the {n}-qubit Pauli braiding test with probability at least {1-\varepsilon}. Then {d\geq (1-O(\sqrt{\varepsilon}))2^n}.

As should be apparent from the proof, it is possible to state a stronger conclusion for the theorem, which includes a characterization of the observables {W(a)} and the state {\psi} up to local isometries. For simplicity I only recorded the consequence for the dimension of {\psi}.
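Before going through the proof, here is a small numerical sanity check (my own sketch, assuming numpy) that the ideal strategy, the maximally entangled state in dimension {2^n} together with the Pauli observables {\sigma_X(a)} and {\sigma_Z(b)}, satisfies the consistency, linearity and (anti-)commutation relations that the test is designed to certify, with no error at all.

```python
import numpy as np
from functools import reduce
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def pauli(W, a):
    """sigma_W(a): tensor product over the qubits, with sigma_W on the positions where a_i = 1."""
    return reduce(np.kron, [W if bit else id2 for bit in a])

n = 3
D = 2 ** n
# Maximally entangled state on C^D (x) C^D (n EPR pairs, up to reordering the qubits).
psi = np.eye(D, dtype=complex).reshape(-1) / np.sqrt(D)

def overlap(M, N):
    return np.real(psi.conj() @ np.kron(M, N) @ psi)

for a in product([0, 1], repeat=n):
    Xa, Za = pauli(sx, a), pauli(sz, a)
    # Consistency (9): the same Pauli measured on both halves gives perfectly correlated outcomes.
    assert abs(overlap(Xa, Xa) - 1) < 1e-9 and abs(overlap(Za, Za) - 1) < 1e-9
    for b in product([0, 1], repeat=n):
        Zb = pauli(sz, b)
        ab = tuple((x + y) % 2 for x, y in zip(a, b))
        # Linearity (10): X(a) X(b) = X(a+b) holds exactly for the ideal observables.
        assert np.allclose(Xa @ pauli(sx, b), pauli(sx, ab))
        # Weyl-Heisenberg relation (11): X(a) Z(b) = (-1)^{a.b} Z(b) X(a).
        sign = (-1) ** (sum(x * y for x, y in zip(a, b)) % 2)
        assert np.allclose(Xa @ Zb, sign * Zb @ Xa)
print("ideal Pauli strategy satisfies (9), (10) and (11) exactly")
```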

Proof: Using Lemma 7 and Lemma 9, success with probability {1-\varepsilon} in the test implies that conditions (9), (10) and (11) in Corollary 5 are all satisfied, up to error {O(\sqrt{\varepsilon})}. (In fact, Lemma 9 only implies (11) for strings {a,b} such that {a\cdot b=1}. The condition for strings such that {a\cdot b=0} follows from the other conditions.) The conclusion of the corollary is that there exists an isometry {V} such that the observables {X(a)} and {Z(b)} satisfy

\displaystyle \mathop{\mathbb E}_{a,b}\, \big\| X(a)Z(b) - V^*g(\sigma_X(a)\sigma_Z(b))V \big\|_\sigma^2 \,=\, O(\sqrt{\varepsilon}).

Using again the consistency relations (9) that follow from part (a) of the test together with the above we get

\displaystyle \mathop{\mathbb E}_{a,b}\, \psi^* (V\otimes V)^* \big( \sigma_X(a)\sigma_Z(b)\otimes \sigma_X(a)\sigma_Z(b)\big)(V\otimes V)\psi \,=\, 1-O(\sqrt{\varepsilon}).

Applying Lemma 6, {(V\otimes V)\psi} has Schmidt rank at least {(1-O(\sqrt{\varepsilon}))2^n}. But {V} is a local isometry, which cannot increase the Schmidt rank. \Box


Unitary Correlation Matrices

Today I’d like to sketch a question that’s been pushing me in a lot of different directions over the past few years — some sane, others less so; few fruitful, but all instructive. The question is motivated by the problem of placing upper bounds on the amount of entanglement needed to play a two-player non-local game (near-)optimally. But it can also be stated as a natural mathematical question in itself, so this is how I’ll present it first, and then only briefly discuss some motivation. (I wish I could write I’ll also present results, but these will be quite sparse.) All that is to come is based on discussions with Oded Regev, though all inaccuracies and naïvetés are mine.

Prelude: Vector Correlation Matrices

Before jumping to unitary correlation matrices, let’s — rather pedantically — introduce vector correlation matrices. Most of you are already familiar with this simple object: a vector correlation matrix is an {n\times n} Hermitian matrix {C} with complex entries such that there exists an integer {d} and unit vectors {u_1,\ldots,u_n\in {\mathbb C}^d} such that {C_{i,j} = \langle u_i,u_j\rangle} for all {(i,j)\in\{1,\ldots,n\}^2}. In other words: a Gram matrix with diagonal entries equal to {1}.
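In code this is a one-liner (a small numpy sketch of my own, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3

# n random unit vectors in C^d, stored as the columns of U.
U = rng.standard_normal((d, n)) + 1j * rng.standard_normal((d, n))
U /= np.linalg.norm(U, axis=0)

# The Gram matrix C_{ij} = <u_i, u_j>: Hermitian, positive semidefinite, with unit diagonal.
C = U.conj().T @ U
assert np.allclose(C, C.conj().T)
assert np.allclose(np.diag(C), 1)
assert np.linalg.eigvalsh(C).min() >= -1e-9
```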

A natural question is, given a vector correlation matrix {C}, what is the minimal dimension {d} in which there exist vectors achieving the specified correlations? Clearly {d\leq n}, the dimension of the span of the {n} vectors; moreover, taking {C} to be the identity matrix shows that {d=n} is sometimes necessary.

If we allow {\varepsilon}-approximations, we can do better: the Johnson-Lindenstrauss lemma implies that {d=O(\varepsilon^{-2}\log n)} is sufficient (and necessary) to find unit vectors such that {|\langle u_i,u_j\rangle-C_{i,j}|\leq\varepsilon} for each {i,j}. And if we only require the approximation to hold on the average over the choice of {i} and {j}, then no dependence on {n} is necessary: {d = O(\epsilon^{-2})} suffices.
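Here is roughly what the approximate statement looks like in practice (again a sketch of my own; the map is a random Gaussian projection followed by renormalization, and the constant in the target dimension is chosen loosely, not optimized):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, eps = 300, 3000, 0.3
d_target = int(np.ceil(8 * np.log(n) / eps ** 2))   # O(eps^-2 log n)

# n random unit vectors in R^d.
U = rng.standard_normal((d, n))
U /= np.linalg.norm(U, axis=0)

# Johnson-Lindenstrauss: project with a random Gaussian map, then renormalize each image.
G = rng.standard_normal((d_target, d)) / np.sqrt(d_target)
V = G @ U
V /= np.linalg.norm(V, axis=0)

# With high probability every pairwise inner product is preserved up to eps.
print(d_target, np.abs(V.T @ V - U.T @ U).max())   # typically at most eps for these parameters
```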

This is all well and good. Now onto the interesting stuff!

Theme: Unitary Correlation Matrices

Define a unitary correlation matrix to be an {n\times n} Hermitian matrix {C} with complex entries such that there exists an integer {d} and unitary matrices {U_1,\ldots,U_n\in {\mathbb C}^{d\times d}} such that {C_{i,j} = d^{-1}\textrm{Tr}(U_i U_j^\dagger)} for all {(i,j)\in\{1,\ldots,n\}^2}. Considering block-diagonal matrices (direct sums) shows that the set of unitary correlation matrices is convex.
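Here too a quick numpy sketch (my own) may help fix ideas; the last check illustrates the convexity remark, since a block-diagonal direct sum of two families of unitaries realizes the average of their correlation matrices:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unitary(d):
    """Haar-ish random unitary: QR of a complex Gaussian matrix, with the phases of R fixed."""
    Q, R = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

def corr(Us):
    d = Us[0].shape[0]
    return np.array([[np.trace(Ui @ Uj.conj().T) / d for Uj in Us] for Ui in Us])

n, d = 4, 8
Us = [random_unitary(d) for _ in range(n)]
C = corr(Us)
assert np.allclose(np.diag(C), 1)                                   # unit diagonal
assert np.linalg.eigvalsh((C + C.conj().T) / 2).min() >= -1e-9      # positive semidefinite

# Convexity with weight 1/2: V_i = diag(U_i, U'_i) is a (2d)-dimensional unitary, and its
# correlation matrix is the average of the two original correlation matrices.
Us2 = [random_unitary(d) for _ in range(n)]
Vs = [np.block([[Us[i], np.zeros((d, d))], [np.zeros((d, d)), Us2[i]]]) for i in range(n)]
assert np.allclose(corr(Vs), (C + corr(Us2)) / 2)
```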

By forgetting the unitary structure of the {U_i} we see that a unitary correlation matrix is automatically a vector correlation matrix; in particular it is positive semidefinite with all diagonal entries equal to {1}. While these two properties characterize vector correlation matrices, as soon as {n\geq 4} (and not before) there exist vector correlation matrices that are not unitary correlation matrices. This is not completely trivial to see, and appears in a paper by Dykema and Juschenko; it is a nice exercise to work out. Now for the main question:

({\mathfrak{D}}): Dimension reduction for unitaries. Let {n\in{\mathbb N}} and {\epsilon > 0 } be given. Does there exist an explicit {d'=d'(n;\epsilon)} such that for every {n\times n} unitary correlation matrix {C} there are {d'}-dimensional unitaries {V_1,\ldots,V_n} such that

\displaystyle \Big|\frac{1}{d'}\textrm{Tr}\big(V_i\,V_j^\dagger\big) - C_{i,j} \Big| \leq \varepsilon \qquad \forall (i,j)\in\{1,\ldots,n\}^2\; .

While the analogous question for vectors is trivial for {\epsilon=0}, and a fundamental result in geometry for {\epsilon>0}, extremely little is known about the question for unitaries. Virtually the only general statement that can be made is that, at least, some bound {d'} exists. This follows by a simple compactness argument, but does not yield any meaningful bound on the growth of {d'} as a function of {n} and {\epsilon}. In fact no explicit bound, however large, is known to hold in general. Let’s explore the problem a bit.

Variatio: Equivalent formulations

A nice feature of question ({\mathfrak{D}}) is that it is reasonably robust, in the sense that different natural formulations of the question can be shown equivalent, up to simple variations on the precise scaling of {d'}. For example, one can relax the constraint of being unitary to the sole requirement that the matrices have all singular values at most {1}. At the opposite end of the spectrum one can consider a more structured problem which considers correlations between projection matrices (so all eigenvalues are {0} or {1}). Both these variants can be shown equivalent to the unitary case via some simple reductions.

The one variant which makes a substantial difference is the case of correlation matrices with real entries. A beautiful result of Tsirelson shows that any extremal real correlation matrix can be realized exactly, by Hermitian matrices having all eigenvalues {\{-1,1\}},  in dimension {d' = 2^{\sqrt{\lfloor n/2\rfloor}}}, and this bound is tight; relatively precise bounds of the form {d'=2^{\Theta(\epsilon^{-1})}} are known for small enough {\epsilon>0}. (Note that even though projection matrices are Hermitian, and thus give rise to real correlations, Tsirelson’s result does not imply a positive answer for the case of projections as the dimension-{d'} matrices recovered via Tsirelson’s construction will in general be Hermitian, but not projectors, even when the original matrices were.)

Interlude: Motivation

Quantum games. One can arrive at question ({\mathfrak{D}}) by asking about the minimal dimension of near-optimal strategies in a quantum two-player game. Experts will immediately see the connection, and I will not elaborate on this. Roughly, the easy observation is that correlations that are achievable by entangled players in a nonlocal game take the form

\displaystyle \langle \psi| A \otimes B |\psi\rangle = \textrm{Tr}(AKB^TK^\dagger),

where {|\psi\rangle} is a unit vector in {{\mathbb C}^d \otimes {\mathbb C}^d} (the entanglement), {K} is a complex {d\times d} matrix that can be computed from {|\psi\rangle}, and {A,B} are “observables”, i.e. Hermitian operators that square to identity and describe the players’ measurements. (A more general formulation considers projections, rather than observables.) In case {|\psi\rangle} is the so-called “maximally entangled state”, {K = d^{-1/2} I} and we recover precisely an entry of a correlation matrix. (The case of a general state gives rise to a slight variant of question {(\mathfrak{D})}; I am not sure whether it is equivalent or not.)
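The identity above is easy to verify numerically (a sketch of my own, assuming numpy): writing {|\psi\rangle = \sum_{ij} K_{ij} |i\rangle|j\rangle} with {\|K\|_F=1},

```python
import numpy as np

rng = np.random.default_rng(3)
d = 6

def random_observable(d):
    """A random Hermitian matrix squaring to the identity (random +/-1 eigenvalues)."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d)))
    return Q @ np.diag(rng.choice([-1.0, 1.0], size=d)) @ Q.conj().T

A, B = random_observable(d), random_observable(d)

# A random normalized state |psi> = sum_{ij} K_{ij} |i>|j>.
K = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
K /= np.linalg.norm(K)
psi = K.reshape(-1)

lhs = psi.conj() @ np.kron(A, B) @ psi
rhs = np.trace(A @ K @ B.T @ K.conj().T)
assert np.allclose(lhs, rhs)

# Maximally entangled state: K = I/sqrt(d), and the correlation becomes Tr(A B^T)/d.
me = (np.eye(d) / np.sqrt(d)).reshape(-1)
assert np.allclose(me.conj() @ np.kron(A, B) @ me, np.trace(A @ B.T) / d)
```
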
Arriving at the question from this “physical” angle, it seems like it “ought” to have a reasonable answer: certainly, if one fixes the size of the game, and an approximation error {\epsilon>0}, then there must exist some dimension that suffices to implement an {\epsilon}-optimal strategy. No such result is known. If anything, existing signs seem to point in the negative direction: for instance, Slofstra very recently showed that there exists a fixed, constant-sized game such that the optimal winning probability of {1} can only be achieved in the limit of infinite dimension (but it does seem to be the case that, for this game, {\epsilon}-optimal strategies can be found in dimension {\textrm{poly}(\epsilon^{-1})}). Note that this result implies that the set of correlation matrices of projections is not closed.

Connes’ conjecture. A different, though related, way to arrive at question ({\mathfrak{D}}) is via the famous “Connes embedding conjecture” in the theory of {C^*} algebras. Connes’ embedding conjecture states, rather informally, that any separable {II_1} factor (i.e. a von Neumann algebra with trivial center that is infinite-dimensional as a vector space, but has a finite faithful trace) embeds into a suitable ultrapower of the hyperfinite factor {\mathcal{R}}. Kirchberg showed that the conjecture is equivalent to the following statement.

Theorem. The validity of Connes’ conjecture for some factor {M} is equivalent to the following: For all {\epsilon>0}, {n,k\in{\mathbb N}} and unitaries {U_1,\ldots,U_n\in M} there is a {d'\in{\mathbb N}} and unitaries {V_1,\ldots,V_n\in M_{d'}({\mathbb C})}, such that

\displaystyle \Big|\tau\big(U_i\,U_j^*\big)-\frac{1}{d'}\textrm{Tr}\big(V_i\,V_j^\dagger\big)\Big|\leq \epsilon,

where {\tau} is the trace on {M}.
This formulation is close to question ({\mathfrak{D}}), except for two important differences: first, we assume that the target correlations are achievable in finite dimension {d}. This makes the problem easier, and would make it trivial if we were not to introduce a second important difference, which is that we ask for explicit bounds on {d'}. As a result I do not know of any formal implication between ({\mathfrak{D}}) and Connes’ conjecture, in either direction.

Graph limits. Finally, for the combinatorialist let me mention an analogous (though, as far as I can tell, not directly related) question, formulated by Aldous and Lyons in the context of their study of limits of bounded-degree graphs. The distance between two finite graphs of the same constant degree (but not necessarily the same number of vertices) can be measured via the sampling distance {\delta}: {\delta(G,G') = \sum_{r=1}^\infty 2^{-r}\delta_r(G,G')}, where {\delta_r(G,G')} denotes the total variation distance between the distributions on rooted {r}-neighborhoods obtained by sampling a random vertex from {G} (resp. {G'}) and considering the sub-graph induced on all vertices at distance at most {r} from the sampled vertex. With this notion in place, Question 10.1 in Aldous and Lyons’ paper on unimodular random networks asks the following:

(Aldous-Lyons:) For every {\epsilon>0} there is an integer {d} such that for every (finite) graph {G} there is a graph {G'} on {d} vertices such that {\delta(G,G')\leq \epsilon}.

On page 1458 the authors mention that the validity of their conjecture for the special class of Cayley graphs would imply that all finitely generated groups are sofic (very roughly, can be embedded into finite-dimensional permutation groups). Even though we do not know of an example of a group that is not sofic, this would be a very surprising result. In particular, it would imply Connes’ Embedding Conjecture for group von Neumann algebras, since the latter is known to hold for sofic groups.

Development: Results

Unfortunately this is going to be one of the shortest, most boring developments in musical history: there is too little to say! I could describe multiple failed attempts. In particular, naïve attempts at dimension reduction, inspired by Johnson-Lindenstrauss or other standard techniques, or incremental “gradient-descent” type of progressive block diagonalization procedures, all seem doomed to fail.

Aside from Tsirelson’s result for real correlation matrices, the one case for which we were able to find a cute proof is the case of permutation correlation matrices, where each {U_i} is assumed to be a permutation matrix. The fact that permutations are sparse seems to make it easier to operate on them by “shifting entries around”; unitaries have a more rigid structure. The proof uses a simple combinatorial argument, with the heaviest hammer being Hall’s theorem guaranteeing the existence of a perfect matching, which is used to simultaneously re-organize the “{1}” entries in a subset of the permutation matrices while preserving all correlations. The upper bound on {d'} we obtain is of order {2^n/\varepsilon^2}, which may be the right order.

More is known in terms of negative results, i.e. lower bounds on {d'}. Such bounds abound in the theory of nonlocal games, where they go by the name of “dimension witness”. The best results I am aware of imply that {d'} should grow at least like {\min(2^{\Omega(\epsilon^{-1})},2^n)}, which is good for very small {\epsilon}, and also {d'=\Omega(n^c)}, which holds for {\epsilon} smaller than a universal constant (the two bounds are obtained from different families of correlations; see here for the former and here for the latter). An interesting consequence of (the proof of) the second bound, which appears in joint work with Natarajan, is that even an {\varepsilon}-approximation on average (over the entries of {C}) requires large dimension. This implies that no “oblivious” rounding technique, as in the Johnson-Lindenstrauss lemma, will work: such a technique would guarantee small approximation error on average independently of {n}.

Coda

There has been a lot of progress recently on lower bounds, stimulated by works on quantum non-local games. This includes a beautiful framework of games for checking “non-commutative” analogues of linear equations over {\mathbb{F}_2}, developed by Cleve and Mittal and Ji; extensions of the framework to testing finitely presented groups by Slofstra; a development of approaches based on operator systems by Paulsen and co-authors, and many others. But no upper bounds! Get to work: things can’t remain this way.


Quid qPCP?

This blog has already seen three posts on the quantum PCP conjecture: in February 2013 to highlight several talks at the Simons Institute in Berkeley, in October 2013 to promote a survey on the topic I wrote with Dorit Aharonov and Itai Arad, and in October 2014 to introduce the two main variants, “constraint satisfaction” (CSP) and “multiplayer games” (MIP), of the quantum PCP (qPCP) conjecture. Such a high rate of posting (compared to the average frequency of posts on this blog) might indicate a slight obsession. But you may also notice it’s been…two years! Has no result worthy of note been established since? Certainly not, and although the conjecture still stands strong, there have been a few interesting developments on both variants of the conjecture. In this post I’ll discuss a couple of results on the CSP-qPCP. In a follow-up post I’ll describe progress on the MIP-qPCP.

When we wrote the survey three summers ago, the latest word on the CSP-qPCP (see Conjecture 1.3 here for a precise formulation) had been given in a paper by Brandao and Harrow. BH showed, using information-theoretic arguments, that the constraint graphs associated with constant-gap QMA-hard instances of the local Hamiltonian problem had to satisfy “non-expansion” requirements seemingly at odds with the expansion properties of graphs associated with what are often considered the hardest instances of classical CSPs. Intuitively, their argument uses the monogamy of quantum correlations to argue that highly expanding constraint graphs place such strong demands on entanglement that there is always a product state whose energy is not far from the minimum. Although not strictly a no-go result, their theorem indicates that QMA-hard instances must be based on constraint graphs with markedly different spectral properties than those associated with the hardest instances of classical CSP.

For the time being it seems like any proof, or disproof, of the conjecture remains out of reach. Instead of focusing directly on qPCP, it may be more fruitful to develop the objects that are expected to play an important role in the proof, such as (quantum) low-density parity check codes (qLDPC) and (quantum) locally testable codes (qLTC). Two recent works make progress on this front.

The NLETS conjecture

The no low-energy trivial states (NLTS) conjecture was proposed by Freedman and Hastings as a “complexity-free” analogue of CSP-qPCP. The NLTS conjecture states that there exist local Hamiltonians such that all low-energy (within an additive constant, times the norm of the Hamiltonian, from the minimum) states are “non-trivial”, in the sense that they cannot be generated by a constant-depth quantum circuit applied on a product state. Equivalently, all states that are the output of a constant-depth quantum circuit must have energy at least a constant above the minimum. NLTS Hamiltonians are good candidates for qPCP as they provide local Hamiltonians for which many obvious classical certificates for the minimal energy of the Hamiltonian (such as the description of a small circuit which generates a low-energy state) are essentially ruled out.

An earlier version of the Eldar-Harrow manuscript claimed a construction of NLTS Hamiltonians, but the paper was recently updated, and the claim retracted. The current manuscript establishes a moderately weaker (though strictly incomparable) result, that the authors call NLETS, for “no low-error trivial states”. The main result of EH is a relatively simple, explicit construction of a family of local Hamiltonians that have no trivial “ground state {\varepsilon}-impostor”. An {\varepsilon}-impostor is a state that has the same reduced density matrix as a ground state on a fraction {(1-\varepsilon)} of the qubits, but may differ arbitrarily on the remaining {\varepsilon} fraction. Using that the Hamiltonian is local, impostors necessarily have low energy, but the converse is not true, so that NLETS rules out triviality for a more restricted class of states than NLTS. For that restricted class of states, however, the non-triviality established by EH is stronger than required by NLTS: they show that no {\varepsilon}-impostor can even be well-approximated (within inverse-polynomial trace distance) by logarithmic-depth, instead of just constant-depth, quantum circuits.

Let’s see if I can give some basic intuition on their construction; for anything substantial see the paper, which gives many angles on the result. Consider first a classical repetition code encoding {1} bit into {n} bits. This can be made into a locally testable code by enforcing pairwise equality of bits along the edges of a constant-degree expanding graph on vertex set {\{1,\ldots,n\}}. Now allow me a little leap of faith: imagine there existed a magic quantum analogue of this classical repetition code, where equality between pairs of qubits is enforced not only in the {Z} (computational) basis, but also in the {X} (Hadamard) basis. Of course such a thing does not exist: the constraints would force any pair of qubits (linked by the expander) to form an EPR pair, a requirement that strongly violates monogamy. But let’s imagine. Then I claim that we would essentially be done. Why? We need two more observations.
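The classical half of this picture is easy to play with (a toy sketch of mine, not from the paper; the “expander” below is just a union of random perfect matchings, a cheap stand-in for one): the test checks equality of the two endpoint bits of a random edge, so the fraction of violated edges serves as a proxy for the distance to the code {\{0^n,1^n\}}.

```python
import numpy as np

rng = np.random.default_rng(4)
n, deg = 1000, 3

# Constraint graph: union of `deg` random perfect matchings on n vertices (n even).
edges = []
for _ in range(deg):
    perm = rng.permutation(n)
    edges += [(perm[2 * i], perm[2 * i + 1]) for i in range(n // 2)]
edges = np.array(edges)

def violated_fraction(x):
    """Fraction of equality constraints x_u = x_v violated along the edges."""
    return np.mean(x[edges[:, 0]] != x[edges[:, 1]])

codeword = np.zeros(n, dtype=int)
print(violated_fraction(codeword))   # 0.0: codewords pass every check

corrupted = codeword.copy()
corrupted[rng.choice(n, size=n // 10, replace=False)] = 1   # flip 10% of the bits
print(violated_fraction(corrupted))  # a constant fraction of the checks now fail
```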

The first key observation made by EH is that any ground state of this imaginary code would have the following property: if you measure all qubits of the state in the same basis, either {X} or {Z}, then for at least one of the two possible choices the measurement outcomes will be distributed according to a distribution on {n}-bit strings that places a large (constant) weight on at least two well-isolated (separated by at least the minimum distance) subsets of the Hamming cube. Note that this does not hold for the classical repetition code: the distribution associated with the all-{0} codeword is, well, concentrated. But if we were to measure the associated quantum state {|0\cdots 0 \rangle \simeq \frac{1}{\sqrt{2}}( |+\cdots+\rangle+|-\cdots-\rangle)} in the Hadamard basis, we would get a very spread-out distribution, with constant mass on two sets that are at distance {n} apart (I realize the equation I wrote is not quite correct! Don’t think too hard about it; obviously my “magical quantum repetition code” does not exist). The reason the distribution obtained in at least one of the two bases must be spread out is due to the uncertainty principle: if the distribution is localized in the {X} basis it must be delocalized in the {Z} basis, and vice-versa. And the reason it should be concentrated on isolated clumps is that we are measuring a codeword, which, for our magic example, can only lead to outcomes that are supported on the set {\{|0\rangle^{\otimes n},|1\rangle^{\otimes n},|+\rangle^{\otimes n},|-\rangle^{\otimes n}\}}.

To conclude we need the second observation, which is that trivial states do not have this property: measuring a trivial state in any product basis will always lead to a highly expanding distribution, which in particular cannot have large mass on well-isolated subsets. This is obviously true for product states, and requires a bit of work to be carried through logarithmically many layers of a quantum circuit; indeed this is where the main technical work of the paper lies.

 

So the argument is complete…except for the fact that the required magic quantum repetition code does not exist! Instead, EH find a good substitute by employing a beautiful construction of quantum LDPC codes due to Tillich and Zemor, the “hypergraph product”. The hypergraph product takes as input any pair of classical linear codes and returns a quantum “product” CSS code whose locality, distance and rate properties can be related to those of the original codes. The toric code can be cast as an example of a hypergraph product code; see Section 3 in the paper for explanations. Unfortunately, the way the distance of the product code scales with other parameters prevents TZ from obtaining good enough qLDPC for the CSP-qPCP; they can “only” obtain codes with constant weight and constant rate, but distance {O(\sqrt{n})}.
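For the curious, here is one standard way of writing the hypergraph product down (a numpy sketch of my own; I believe it matches the Tillich-Zemor convention up to relabeling, but treat it as illustrative): given two classical parity-check matrices, the {X}- and {Z}-stabilizer matrices of the product code commute by construction.

```python
import numpy as np

def hypergraph_product(H1, H2):
    """CSS stabilizer matrices (mod 2) of the hypergraph product of two classical codes."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)), np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2), np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return HX, HZ

# Example: the parity checks of a 3-bit repetition code, used for both input codes.
H_rep = np.array([[1, 1, 0], [0, 1, 1]])
HX, HZ = hypergraph_product(H_rep, H_rep)

# CSS condition: every X-stabilizer commutes with every Z-stabilizer.
assert np.all((HX @ HZ.T) % 2 == 0)
print(HX.shape, HZ.shape)   # checks x qubits, with n1*n2 + m1*m2 qubits in total
```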

In the context of NL(E)TS, and even more so qPCP, however, distance may not be the most relevant parameter. EH’s main construction is obtained as the hypergraph product of two expander-based repetition codes, which as a code only has logarithmic distance; nevertheless they are able to show that the robustness derived from the repetition code, together with the logarithmic distance, are enough to separate {\varepsilon}-impostors from logarithmic-depth trivial states.

Quantum LDPC & LTC

Quantum low-density parity-check codes (qLDPC) already made a showing in the previous sections. These families of codes are of much broader interest than their possible role in a forthcoming proof of qPCP, and constructions are being actively pursued. For classical codes the situation is largely satisfactory, and there are constructions that simultaneously achieve constant rate and linear distance with constant-weight parity checks. For quantum codes less is known. If we insist on constant-weight stabilizers then the best distance is {\Omega(\sqrt{n}\log^{1/4} n)} (e.g. Freedman et al.), a notch above the TZ construction mentioned earlier. The most local construction that achieves linear distance requires stabilizers of weight {O(\sqrt{n})} (e.g. Bravyi and Hastings).

A recent paper by Hastings makes progress on constructions of qLDPC – assuming a geometrical conjecture on the volume of certain surfaces defined from lattices in {{\mathbb R}^n}. Assuming the conjecture, Hastings shows the existence of qLDPC with {n^{1-\varepsilon}} distance and logarithmic-weight stabilizers, a marked improvement over the state of the art. Although as discussed earlier even linear-distance, constant-weight qLDPC would imply neither the CSP-qPCP nor NLTS (the resulting Hamiltonian may still have low-energy eigenstates that are not at a small distance from codewords), by analogy with the classical case (and basic intuition!), constructions of such objects should certainly facilitate any attempt at a proof of the conjectures. Moreover, qLDPC suffice for the weaker NLETS introduced by EH, as the latter only makes a statement about {\varepsilon}-impostors, i.e. states that are at a constant distance from codewords. To obtain the stronger implication to NLTS, the proper notion is that of local testability: errors should be detected by a fraction of parity checks proportional to the distance of the error from the closest codeword (and not just some parity check).

Hastings’ construction follows the topological approach to quantum error correcting codes pioneered by Freedman and Kitaev. Although the latter introduced codes whose properties depend on the surface they are embedded in, as best I can tell the formal connection between homology and error correction is made in a comprehensive paper by Bombin and Martin-Delgado. The advantage of this approach is that properties of the code, including rate and distance, can be tied to geometric properties of the underlying homology, reducing the construction of good codes to that of manifolds with the right properties.

 

In addition to the (conjectural) construction of good qLDPC, almost as an afterthought Hastings provides an unconditional construction of a quantum locally testable code (qLTC), albeit one which encodes two qubits only. Let’s try to visualize this, starting from the helpful warm-up provided by Hastings, a high-dimensional, entangled, locally-testable code…which encodes zero qubits (the code space is one-dimensional). Of course this is trivial, but it’s a warm-up!

The simplest instance to visualize has six physical qubits. To follow the forthcoming paragraphs, take a piece of paper and draw a large tetrahedron. If you didn’t mess up, your tetrahedron should have six edges: these are your qubits. Now the parity checks are as follows. Each of the four faces specifies an {X}-stabilizer which acts on the three edges forming the face. Each of the four vertices specifies a {Z}-stabilizer which acts on the three edges that touch the vertex. The resulting eight operators pairwise commute, and they specify a unique (entangled) state in the {2^6}-dimensional physical space.

Next we’d like to understand “local” testability. This means that if we fix a set {O} of edges, and act on each of them using an {X} error, then the resulting operator should violate (anti-commute with) a fraction of the {Z}-stabilizers that is proportional to the reduced weight of the error, i.e. its distance to the closest operator which commutes with all {Z}-stabilizers. To see which stabilizers “detect” the error {O}, we recall that {Z}- and {X}-operators which overlap in an even number of locations commute. Therefore a {Z}-stabilizer will detect {O} if and only if its vertex lies in the boundary {\partial O}: the set of vertices which touch an odd number of edges in {O}. This is our syndrome; it has a certain cardinality. To conclude we need to argue that {O} can be modified into a set {O+P} with no boundary, {\partial(O+P)=\emptyset}, and such that {P} is as small as possible – ideally, it should involve at most as many edges as the size of the boundary {|\partial O|}. Here is how Hastings does it: for each vertex in the boundary, introduce an edge that links it to some fixed vertex – say the top-most one in your tetrahedron. Let {P} be the resulting set of edges. Then you can check (on the picture!) that {O+P} is boundary-less. Since we added at most as many edges as vertices in the boundary (if the top-most vertex was part of the boundary it doesn’t contribute any edge), we have proven local testability with respect to {X} errors; {Z} errors are similar.
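The tetrahedron is small enough that all of this can be checked exhaustively (a Python sketch of my own): qubits on the six edges, {X}-stabilizers on the four faces, {Z}-stabilizers on the four vertices; every face/vertex pair overlaps on an even number of edges, and the “connect each boundary vertex to a fixed vertex” repair always kills the boundary.

```python
from itertools import combinations

vertices = range(4)
edges = [frozenset(e) for e in combinations(vertices, 2)]   # 6 qubits
faces = [frozenset(f) for f in combinations(vertices, 3)]   # 4 faces

x_stab = {f: {e for e in edges if e <= f} for f in faces}       # edges of each face
z_stab = {v: {e for e in edges if v in e} for v in vertices}    # edges touching each vertex

# Commutation: every X-stabilizer overlaps every Z-stabilizer on an even number of edges.
assert all(len(x_stab[f] & z_stab[v]) % 2 == 0 for f in faces for v in vertices)

def boundary(O):
    """Vertices touching an odd number of edges of O."""
    return {v for v in vertices if sum(v in e for e in O) % 2 == 1}

def repair(O, anchor=0):
    """Connect every boundary vertex (other than the anchor) to the anchor, mod 2."""
    P = {frozenset({v, anchor}) for v in boundary(O) if v != anchor}
    return O ^ P   # symmetric difference = addition mod 2

# Every X-error pattern O becomes boundary-less after toggling at most |boundary(O)| edges.
for r in range(len(edges) + 1):
    for O in map(set, combinations(edges, r)):
        assert not boundary(repair(O))
print("all", 2 ** len(edges), "edge sets repaired to boundary-less sets")
```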

This was all in three dimensions. The wonderful thing is that the construction generalizes in a “straightforward” way to {n} dimensions. Consider an {(n+1)}-element universe {U=\{1,\ldots,n+1\}}. Qubits are all subsets of {U} of size {q=(n+1)/2}; there are exponentially many of these. {Z}-stabilizers are defined for each {(q-1)}-element subset; each acts on all {(q+1)} of its {q}-element supersets. Symmetrically, {X}-stabilizers are defined for each {(q+1)}-element set; each acts on all {(q+1)} of its {q}-element subsets. Thus the code is local: each stabilizer has weight {(q+1)}, which is logarithmic in the number of qubits. It remains to check local testability; this follows using precisely the same argument as above (minus the picture…).
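The commutation count in the general construction is just as easy to verify by brute force for small {n} (again my own sketch): an {X}-stabilizer and a {Z}-stabilizer act on a common {q}-element qubit only if the {(q-1)}-set is contained in the {(q+1)}-set, in which case they share exactly two qubits.

```python
from itertools import combinations

n = 5                       # "dimension"; the universe has n+1 elements
U = range(n + 1)
q = (n + 1) // 2

qubits = [frozenset(s) for s in combinations(U, q)]         # one qubit per q-element subset
z_stabs = [frozenset(s) for s in combinations(U, q - 1)]    # a Z-stabilizer per (q-1)-element subset
x_stabs = [frozenset(s) for s in combinations(U, q + 1)]    # an X-stabilizer per (q+1)-element subset

def support_z(S):
    return {Q for Q in qubits if S < Q}   # q-element supersets of S

def support_x(T):
    return {Q for Q in qubits if Q < T}   # q-element subsets of T

# Each stabilizer has weight q+1, and every X/Z pair overlaps on an even number of qubits.
assert all(len(support_z(S)) == q + 1 for S in z_stabs)
assert all(len(support_x(T)) == q + 1 for T in x_stabs)
assert all(len(support_z(S) & support_x(T)) % 2 == 0 for S in z_stabs for T in x_stabs)
print(len(qubits), "qubits; all X/Z stabilizer pairs commute")
```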

This first construction encodes zero qubits. How about getting a couple? Hastings gives a construction achieving this which remains (poly-logarithmically) locally testable. The idea, very roughly, is to make a toric code by combining together two copies of the code described above. The number of encoded qubits will become non-trivial and local testability will remain. Unfortunately, just as for the toric code, the distance of the resulting code only scales as {\sqrt{n}}. To construct his code Hastings uses a slightly different cellulation than the above-described one. I am not sure precisely why the change is needed, and I defer to the paper for more details. (Leverrier, Tillich and Zemor had earlier provided a construction, based on the TZ hypergraph product, with linear rate, square root distance, and local testability up to the minimal distance, i.e. for all errors of reduced weight at most {O(\sqrt{n})}.)

 

Although the geometric picture takes some effort to grasp, I find these constructions fascinating. Given the Brandao-Harrow objections to using the most “straightforward” expander constructions to achieve CSP-qPCP, or even NLTS, it seems logical to start looking for combinatorial structures that have more subtle properties and lie at the delicate boundary where both robustness (in terms of testability) and entanglement (non-triviality of ground states) can co-exist without challenging monogamy.


QCryptoX: post-mortem

Our EdX/Delft/Caltech course on quantum cryptography officially ended on Dec. 20th, and is now archived (all material should remain available in the foreseeable future; we will also prepare a self-contained version of the lecture notes). At Caltech, the “flipped classroom” ended a couple weeks earlier; by now all grades are in and the students may even be beginning to recover. How did it go?

1. In the classroom

Fifteen Caltech students, with a roughly equal mix of physics/CS/EE backgrounds, followed the course till the end (we started at ~20). We had a great time, but integration with the online course proved more challenging than I anticipated. Let me say why, in the hope that my experience could be useful to others (including myself, if I repeat the course).

The EdX content was released in 10 weekly updates, on Tuesdays. Since on-campus classes took place Tuesdays and Thursdays, I asked Caltech students to review the material (videos+lecture notes+quizzes) made available online on a given Tuesday by the following Tuesday’s class. I would then be able to structure the class under the assumption that the students had at least some minimal familiarity with the week’s concepts. This would allow for a more relaxed, “conversational” mode: I would be able to react to difficulties encountered by the students, and engage them in the exploration of more advanced topics. That was the theory. Some of it worked out, but not quite as well as I had hoped, and this for a variety of reasons:

  1. There was a large discrepancy in the students’ level of preparation. Some had gone through lecture notes in detail, watched all videos, and completed all quizzes. Although some aspects of the week’s material might still puzzle them, they had a good understanding of the basics. But other students had barely pulled up the website, so that they didn’t even really know what topics were covered in a given week. This meant that, if I worked under the assumption that students already had a reasonable grasp of the material, I would lose half the class; whereas if I assumed they had not seen it at all I would put half the class to sleep. As an attempted remedy I enforced some minimal familiarity with the online content by requiring that weekly EdX quizzes be turned in each Tuesday before class. But these quizzes were not hard, and the students could (and did) get away with a very quick scan through the material.
  2. Like all students, but, I hear, even more so here, Caltech undergraduates generally (i) do not show up in class, and (ii) if perchance they happen to land in the right classroom, they certainly won’t participate. In an attempt to encourage attendance I made homeworks due right before the Tuesday 10:30am class, the idea being that students would probably turn in homeworks at the last minute, but then they would at least be ready for class. Bad idea: as a result, students ended up spending the night on the homework, dropping it off at 10:29:59… only to skip class so as to catch up on sleep! Slightly over half of the registered students attended any given class, a small group of 8-10 on average. This made it harder to keep participation up. On the whole it still went pretty well, and with a little patience, and insistence, I think I eventually managed to establish a reasonably relaxed atmosphere, where students would naturally raise questions, submit suggestions, etc. But we did not reach the stage of all-out participation I had envisioned.
  3. The material was not easy. This is partially a result of my inexperience in teaching quantum information; as all bad teachers do I had under-estimated the effort it takes to learn the basics of kets, bras and other “elementary” manipulations, especially when one has but an introductory course in undergraduate linear algebra as background. Given this, I am truly amazed that the 15 students actually survived the class; they had to put in a lot of work. Lucky for me there are bright undergraduates around! We ended the course with projects, on which the students did amazingly well. In groups of 2-3 they read one or more papers in quantum cryptography, all on fairly advanced topics we had not covered in class (such as relativistic bit commitment, quantum homomorphic encryption, quantum bitcoin, and more!), wrote up a short survey paper outlining some criticisms and ideas they had about what they had read, and gave an invariably excellent course presentation. From my perspective, this was certainly a highlight of the course.

Given these observations on what went wrong (or at least sub-optimally), here are a few thoughts on how the course could be improved, mostly for my own benefit (I hope to put some of these to good practice in a year or two!). This should be obvious, but: the main hurdle in designing a “flipped classroom” is to figure out how to work with the online content:

  • First there is a scheduling difficulty. Some students complained that by having to go through the weekly videos and lecture notes prior to the discussion of the material in class they simultaneously had to face two weeks’ worth of content at any given time. Scheduling of online material was decided based on other constraints, and turned out to be highly sub-optimal: each week was released on a Tuesday, which was also the first day of class, so that it was unreasonable to ask the students to review the material before that week’s classes… pushing it to the next week, and resulting in the aforementioned overlap. A much better schedule would have been to e.g. release material online on Friday, and then have class on Tuesdays and Thursdays. This would have led to a better alignment between the online release and the in-class discussion, and less schizophrenia.
  • Then comes the problem of “complementarity”. What can be done in class that does not replicate, but instead enriches, the online material? This is made all the more difficult by the inevitable heterogeneity in the students’ preparation. An effort has to be made to limit this by finding ways to enforce the students’ learning of the material. For instance, each class could be kick-started by a small presentation by one of the students, based on one of the online problems, or even by re-explaining (or, explaining better!) one of the week’s more challenging videos. This should be done in a way that the students find valuable, both for the presenter and the listeners; I don’t want the outcome to be that no one shows up for class.
  • Student-led discussions usually work best. They love to expose their ideas to each other, and improve upon them. This forces them to be active, and creative. The best moments in the class were when the discussion really picked up and the students bounced suggestions off each other. The existence of the online material should facilitate this, by providing a common basis for the discussion. My approach this time wasn’t optimal, but based on the experience I think it is possible to do something truly engaging. But it won’t work by itself; one really has to design incentive-based schemes to get the process going.

2. Online

Success of the online course is rather hard to judge. At the end of the course there were about 8000 officially registered students. Of these, EdX identified ~500 as “active learners” over the last few weeks (dropping from ~1500 over the first few weeks, as is to be expected). I think an active learner is roughly someone who has at least watched some parts of a video, answered a quiz or problem, participated in a forum, etc.
About 100 students pursued an official certificate, which means that they paid ~$50 to have their success in the course officially registered. I couldn’t figure out how many students had actually “passed” the class, but I expect the number to be around 200: most of the certified students plus a few others who didn’t want to pay for the certificate but still turned in most homeworks. This is a fair number for a challenging specialized course; I am pretty happy with it. The high initial enrollment numbers, together with anecdotal evidence from people who got in touch directly, indicate that there certainly is demand for the topic. The most active students in the course definitely “wanted in”, and we had lots of good questions on the forum. And, many, many typos were fixed!

How satisfied were the students with the course? We ran an “exit survey”, but I don’t have the results yet; I can write about them later (hoping that a significant enough number of students bother to fill in the survey). We also had pre- and mid-course surveys. Some of the more interesting questions had to do with how students learn. In my opinion this is the main challenge in designing a MOOC: how to make it useful? Will the students learn anything by watching videos? Anecdotal evidence (but also serious research, I hear) suggests not. Reading the lecture notes? Maybe, but that requires time and dedication – basically, to be an assiduous learner already. Just as with “in-the-classroom” learning, it is the problem-solving that students are brought to engage in that can make a difference. Students like to be challenged; they need to be given an active role. In the mid-course survey many of the positive comments had to do with “Julia lab” assignments that were part of the course, and for which the students had to do some simple coding that let them experiment with properties of qubits, incompatible measurements, etc. In the pre-course survey students also indicated a marked preference for learning via solving problems rather than by watching videos.

So a good online MOOC should be one that actively engages the students’ problem-solving skills. But this is not easy! Much harder than recording a video in front of a tablet & webcam. Even though I was repeatedly told about it beforehand, I learned the lesson the hard way: homework questions have to be vetted very thoroughly. There is no end to a student’s creativity in misinterpreting a statement – let alone 1000 students’. Multiple-choice questions may sound straightforward, but they’re not: one has to be very careful that there is exactly one unambiguously correct answer, while at the same time not making it too obvious which is that answer; when one has a solution in mind it is easy not to realize that other, supposedly wrong, proposed answers could in fact be interpreted as correct. The topic of cryptography makes this particularly tricky: we want the students to reason, be creative, devise attacks, but the multiple-choice format limits us in this ability. Luckily we had a very dedicated, and creative, team of TAs, both in Delft and at Caltech, and by working together they compiled quite a nice set of problems; I hope they get used and re-used.

3. Conclusions

It’s too early (or too late) for conclusions. This was a first, and I hope there’ll be a second. The medium is a challenge, but it’s worth reaching out: we teach elite topics to elite students at elite institutions, but so many more have the drive, interest, and ability to learn the material that it would be irresponsible to leave them out. MOOCs may not be the best way to expand the reach of our work, but it is one way…to be improved!

It was certainly a lot of fun. I owe a huge thank you to all the students, in the classroom and online, who suffered through the course. I hope you learned a lot! Second in line were the TAs, at Caltech as well as Delft, who did impressive work, coping simultaneously with the heavy online and offline duties. They came up with a great set of resources. Last but not least, behind the scenes, the video production and online learning teams, from Delft and Caltech, without whose support none of this would have been possible. Thanks!


Two weeks in

Our course on quantum cryptography will soon enter its third week of instruction (out of ten weeks planned). My parallel “in-class” course at Caltech has already gone through four weeks of lectures. How is it going so far?

There are many possible measures against which to evaluate the experience. An easy one is raw numbers. Online there are a bit over 7,200 students enrolled. But how many are “active”? The statistics tools provided by EdX report 1995 “active” students last week – under what definition of “active”? EdX also reports that 1003 students “watched a video”, and 861 “tried a problem”. What is an active student who neither watched a video nor tried a problem – they clicked on a link? In any case, the proportion seems high; from what I heard a typical experience is that about 2-5% of registered students will complete any given online course. Out of 7,000, this would bring the number of active students by the end of the course to a couple hundred, a number I would certainly consider a marked success, given the specialized topic and technically challenging material.

At Caltech there are 20 students enrolled in CS/Phys 120. Given the size of our undergraduate population I also consider this to be a rather high number (but the hard drop deadline has not passed yet!). It’s always a pleasure to see our undergraduates’ eagerness to dive into any exciting topic of research that is brought to their attention. I don’t know the enrollment for TU Delft, but they have a large program in quantum information so the numbers are probably at least twice as high.

Numbers are numbers. How about enthusiasm? You saw the word cloud we collected in Week 0. Here is one from Week 2 (“What does “entanglement” evoke in you right now?”; spot the “love” and “love story”; unfortunately only 1% of responses for either!). Some of the students speak up when prompted for simple feedback such as this, but the vast majority remain otherwise silent, so that involvement is hard to measure. We do have a few rather active participants in the discussion forums, and it’s been a pleasure to read and try to answer their creative questions each day – dear online learners, if you read this, thanks for your extremely valuable comments and feedback, which help make the course better for everyone! It’s amazing how even as we were learning qubits some rather insightful questions, and objections, were raised. It’s clear that people are coming to the course from a huge range of backgrounds, prompting the most unexpected reactions.

A similar challenge arises in the classroom. Students range from the freshmen with no background in quantum information (obviously), nor in quantum mechanics or computer science, to more advanced seniors (who form the bulk of the class) to graduate students in Caltech’s Institute for Quantum Information and Matter (IQIM). How to capture everyone’s attention, interest, imagination? The topic of cryptography helps – there is so much to be fascinated with. I started the course by discussing the problem of quantum money, which has the advantage of easily capturing one’s imagination, and for which there is a simple quantum scheme with a clear advantage over classical schemes (cherry on top, the scheme is based on the famous “BB84 states” that will play a major role in the class via their use for quantum key distribution). So newcomers to quantum information could learn about qubits, kets and bras, while others could fend off their impatience by imagining new schemes for public-coin quantum money.

This is not an easy line to tread, however, and given the composition of the class I eventually decided to err on the side of caution. Don’t repeat it, but this is my first time even teaching a full class on quantum information, and the basic concepts, not to mention the formalism, can be quite tricky to pick up. So we’re going to take it slow, and we’ll see how far we get. My hope is that the “flipped classroom” format should help needy but motivated students keep afloat by making all the online material available before it is discussed in class. Since the online course has only been going on for a couple weeks I can’t yet report on how well this will work out; my initial impression is that it is not a given that the in-class students actually do spend enough time with the online material. I have yet to find the proper way to incentivize this: quizzes? rewards? The best reward should be that they manage to follow the course 😉

In the coming weeks we’ll start making our way towards quantum key distribution and its analysis. Entanglement, measuring uncertainty, privacy amplification, BB84 and Ekert, and device independence. Quite a program, and it’s certainly nice to attempt it in such pleasant company!


One week later…

…and 736 words of expectation for the class:

[word cloud of the students’ expectations for the class]

Note the top contender: let’s see if we live up to their expectations!

It’s been a fun first week. We released “Week 0” of the material a month ahead of the official start date, so that those students not yet familiar with the basics of quantum information (what is a qubit?) would have enough time to digest the fundamental notions we’ll be using throughout. (Out of the ~5500 registered students, ~1250 are currently marked by EdX as “active” and 560 have watched at least one video. Keep it coming!)

An unexpected benefit of opening up the platform ahead of time is that it is giving us the time to experiment with (read: debug) some of the tools we plan to use throughout. A first is EdX’s own possibilities for interaction with the students, an example of which is pictured above (“Use the space below to enter a few words that best characterize your expectations for this class”). But we’re also using a couple of add-ons:

The first is a system replacing EdX’s discussion forums, called Askalot. Cute name – how can you resist? The main benefit of Askalot is that it provides more structure to the discussions, which can be characterized as question/bug report/social discussion/etc, can be up/down-voted, marked as correct/invalidated by the instructors, etc. The students are having fun already, introducing themselves, complaining about bugs in the quizzes, and, of course, about Askalot itself! (Thanks go to Ivan Srba, one of the creators of Askalot, for being extremely responsive and fixing a host of minor bugs overnight – not to mention throwing in the extra feature requested by the students.)

A second is called DALITE. The idea of DALITE is to encourage students to provide an explanation justifying their answer to a question. Indeed, one of the drawbacks of the online platform is the need for automatic grading of assignments, which greatly limits how the student can be engaged in the problem-solving exercise, mostly limited to multiple-choice or simple numeric answers. DALITE (which grew out of, and is still, a serious research project in online learning) introduces a fun twist: the student is asked to type in a “rationale” for her choice. Of course there is no way we could grade such rationales. But here is the idea: once the student has entered her explanation, she is shown the rationale provided by another student (or the instructor) for a different answer, and asked whether she would like to reconsider her decision. The student can choose to change her mind, or stick with her initial choice; she is asked to explain why. It’s really fun to watch the answers provided (“What would happen if quantum mechanics allowed cloning of arbitrary states?”), the change of minds that take place, and the rationale that incentivized said change of mind. (Thanks to Sameer Bhatagnar for helping us set up the tool and explaining its many possibilities, and to Joseph Williams for suggesting its use in the first place.)

We’ll see how these pan out in the longer run. I’m definitely eager to experiment with ways to make the MOOC experience a better learning experience for the students. I’ll let you know how it goes. Suggestions welcome!

PS: 47% of students aged 25 and under and 78% from outside the US is good, but 16% female is not… come on!

 
