This year QCRYPT had four special sessions called “focus tutorials” (see here for a list, as well as links to slides of the talks). The goal of these 90-minute tutorials was to introduce a particular aspect of quantum crypto, as developed in one of the sub-communities, to the others. For instance, Juerg Wullschleger talked about “Cryptographic primitives” such as bit commitment or oblivious transfer, while Gregor Weihs talked about “Optics”, giving us an overview of the theory as well as the kind of experiments that are currently feasible using linear optics. The other two tutorials were on “Smooth min/max entropies” by Marco Tomamichel and “Authentication” by Ueli Maurer.

Given the range of subjects and the fact that the tutorials were scheduled at 9 in the morning, they were surprisingly well-attended! In fact I think they were some of the best talks of the conference, generating a lot of interest and serving as a great introduction to the shorter, more technical contributed talks.

In the past year or two the STOC and FOCS conferences have also been experimenting with “tutorial”, or “workshop”, sessions on the day before the conference. These are typically longer: each tutorial lasts for the whole day, and 3 or 4 tutorials are run in parallel. The advantage is that there is time to go much more in-depth and cover both basics and more advanced topics; an obvious downside is that it is only possible to attend one of the tutorials. While the QCRYPT 90-minute tutorial is just enough to gain exposure to the language and basic concepts used in an area, the day-long workshop has the more ambitious goal of bringing its attendees up to date on the latest developments in a particular area.

One benefit I see to the shorter QCRYPT-style tutorial is that the barrier (or self-evaluated “a priori risk”) to entry is much lower: while it is unlikely that I will attend the “Bayesian Mechanism Design” tutorial at FOCS (because it takes up the whole day and conflicts with another tutorial that I am more interested in), I *did* attend the “Optics” tutorial at QCRYPT. So the shorter tutorials are better for the *mildly curious*, who want to get the basics in order to be able to follow the technical talks in an area outside theirs, while the longer workshops are for the *seriously curious*, who may be considering learning a new area, or at least feel that it is close enough to their interests that they would like to (or should!) keep abreast of progress there; in any case they are willing to invest a whole day in the topic. And so I *will* (probably) attend the “Randomized Numerical Algebra” workshop at FOCS 2012…

I certainly wish that STOC and FOCS had more hour-long presentations, be they tutorials, “featured talks”, or whatever the community deems interesting. The 20-minute talks from outside my area are usually so short that I do not get much out of them; they seem to serve more as a showcase of each and every one’s new result rather than as a platform to get good ideas out (with notable exceptions of course: some talks *are* excellent, and there are usually enough of those to make attendance more than worthwhile!). There has been much debate recently on changing the format of these conferences, and varying the format of the talks should be one thing to keep in mind.

**1. Security proofs in cryptography**

Enough digressing: back to the tutorials! I’d like to discuss just one of them, by Ueli Maurer. Although Ueli’s tutorial was titled “Authentication”, the main focus of the talk was on a framework of “Abstract cryptography” that Ueli has been developing in recent years, some of it in collaboration with Renato Renner. (See this paper and this one for much more.)

Ueli’s pitch is that current security proofs for cryptographic primitives (think public-key encryption, but also two-party primitives such as bit commitment, etc.) can be incredibly tedious and messy: they usually involve sequences of *games* played by *simulators*, attempting to convince us through a series of elaborate *reductions* that no malicious third party will be able to do anything that wasn’t planned. The main idea is to show that the effect of any action performed by the “adversary” in the “real world” could in principle be reproduced in the “ideal world” by the “simulator”; since the ideal world is, well, ideal, nothing harmful can happen there, so by the simulation property nothing harmful could happen in the real world either.
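To make the real/ideal idea concrete, here is a minimal sketch of my own (not from the talk) for the simplest possible case, the one-time pad: the “simulator” can reproduce everything Eve sees in the real world without ever knowing the message, because a fresh uniform key makes the ciphertext uniformly distributed.

```python
import secrets
from collections import Counter

def real_world(msg: bytes) -> bytes:
    """Eve's view in the real world: the one-time-pad ciphertext."""
    key = secrets.token_bytes(len(msg))            # fresh uniform key
    return bytes(m ^ k for m, k in zip(msg, key))

def ideal_world_simulator(msg_len: int) -> bytes:
    """The simulator's output in the ideal world: uniform noise, no message needed."""
    return secrets.token_bytes(msg_len)

# Empirically, the two views of a one-byte message have the same (uniform)
# distribution, so no distinguisher can tell the worlds apart.
real = Counter(real_world(b"\x42")[0] for _ in range(200_000))
ideal = Counter(ideal_world_simulator(1)[0] for _ in range(200_000))
print(len(real), len(ideal))  # 256 256: every byte value occurs in both views
```

Of course, the whole difficulty in actual proofs is that the simulator must handle *interactive* adversaries, not just passive eavesdroppers; that is where the elaborate reductions come in.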

Carrying out this program requires explicitly modeling not only the adversary and the access that it has to the resources, but also the kinds of things we think we should be worried the adversary may be attempting (if you think this sentence is convoluted, just think about what the proof will look like…). For instance, in public-key cryptography so-called IND-CPA security is defined through a “game” in which the adversary gets to submit two plaintexts of her choice, and is returned the encryption of one of the two chosen at random; if she can tell which one was encrypted, then the cryptosystem is not secure.
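A toy sketch of the IND-CPA game may help; all the names and the deliberately broken “cipher” below are my own hypothetical illustrations, not anything from the talk.

```python
import secrets

def ind_cpa_game(encrypt, adversary_choose, adversary_guess) -> bool:
    """One round of the IND-CPA game. Returns True if the adversary wins."""
    m0, m1 = adversary_choose()                    # adversary picks two plaintexts
    b = secrets.randbits(1)                        # challenger's secret bit
    challenge = encrypt(m0 if b == 0 else m1)
    return adversary_guess(challenge) == b

# A deliberately broken scheme: deterministic XOR with a fixed key.
# Determinism is fatal under chosen-plaintext attacks.
KEY = bytes(range(16))
def bad_encrypt(m: bytes) -> bytes:
    return bytes(x ^ k for x, k in zip(m, KEY))

def choose():
    return b"0" * 16, b"1" * 16

def guess(c: bytes) -> int:
    # CPA access lets the adversary encrypt the candidates herself and compare.
    return 0 if c == bad_encrypt(b"0" * 16) else 1

wins = sum(ind_cpa_game(bad_encrypt, choose, guess) for _ in range(1000))
print(wins)  # 1000: the adversary always wins, so the scheme is not IND-CPA secure
```

A secure scheme would make the adversary’s winning probability only negligibly better than the 1/2 she gets by guessing blindly.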

Ueli’s main critique is that such proofs are (1) messy, and (2) require us to make many implicit assumptions. How can we trust security proved in such a way? Surely there must be some eventuality that was not considered. Note that this is not everyone’s opinion of course, and when Ueli gave a talk on a similar subject at MIT a few months ago, he had a (friendly but heated) debate with Shafi Goldwasser, who contended that the current model for proving security has the one essential feature that we need, which is that it enables us to carry out our *intuition* as to what should represent security. In other words, the “games/simulators” model is used because it at least gives us the *possibility* to do security proofs. If one had to worry about every single eventuality, one would end up entangled in such a web of abstract constructs that one would not be able to prove anything at all. Or would one?

**2. Abstract cryptography**

Ueli disagrees, and his work attempts to develop an abstract framework in which one can give security proofs that are *intuitive*, *rigorous* and *bullet-proof*. Too good to be true? Well, if you want to see how security proofs look in that framework, you can check out Ueli’s slides at QCRYPT (starting at slide 15). They definitely *look* simple, and Ueli tried to convince us that not only *are* they simple, but they also capture everything that needs to be captured in order to establish security.

One of the selling points of abstract cryptography is that there is no need to model the “adversary” explicitly. I find the idea appealing, as any such modeling necessarily puts implicit restrictions on the adversary’s behavior. Instead, abstract cryptography is based on a heavy use of composition: cryptographic primitives, such as a digital signature scheme, are built by composing other primitives together (such as an authenticated public channel). Each available primitive is thought of as a resource, and the goal is to combine different resources together in order to build more complex primitives. Simple rules govern which resources can be combined, and how. Just like a LEGO game…
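The LEGO analogy can be sketched in code; this is only my own loose illustration of the resource idea (the class names and interfaces are hypothetical, not the framework’s actual formalism): a shared secret key plus an authenticated channel are combined, via the one-time pad, into a secure channel.

```python
import secrets

class SharedKey:
    """Resource: a fresh uniformly random key shared by both parties."""
    def __init__(self, n: int):
        self.key = secrets.token_bytes(n)

class AuthenticatedChannel:
    """Resource: delivers messages unmodified, but the eavesdropper sees them."""
    def __init__(self):
        self.leaked = []
    def send(self, msg: bytes) -> bytes:
        self.leaked.append(msg)   # Eve observes the transmission...
        return msg                # ...but cannot tamper with it

def secure_channel(key: SharedKey, channel: AuthenticatedChannel, msg: bytes) -> bytes:
    """Compose the two resources into a secure channel via the one-time pad."""
    ciphertext = bytes(m ^ k for m, k in zip(msg, key.key))
    return bytes(c ^ k for c, k in zip(channel.send(ciphertext), key.key))

key = SharedKey(5)
chan = AuthenticatedChannel()
out = secure_channel(key, chan, b"hello")
print(out)  # b'hello': the message gets through intact, while Eve only
            # ever sees the pad-masked ciphertext in chan.leaked
```

The point of the composition rules is that the constructed secure channel can itself be plugged in as a resource in a larger construction, without re-proving anything.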

I am sure Ueli faces the same difficulty every time he gives his talk: how to convince the audience that his framework can actually be used to complete a nontrivial security proof? Imagine trying to demonstrate the usefulness of coding every mathematical proof in first-order logic to a crowd of algebraists… not an easy task. Ueli did a great job by providing concrete examples, but in my mind the question remains: how expressive can such an abstract framework be? The little diagrams reminded me of category theory: beautiful, deep abstract objects, stunning the first time you see them; but are they the right tool?

To me, these seem like two complementary approaches to proving security. The games and simulators approach makes subtle security proofs *possible*. The Lego brick composition approach makes straightforward security proofs *easy*.

Or, would this opinion have gotten me into a heated debate with *both* Shafi and Ueli?

I think Ueli’s (and Renato’s) main goal is to obtain fully transparent proofs that make the fewest possible (I guess Ueli would say “no”) assumptions on the adversary, or the “real world”. One of their main selling points is the history of security proofs for quantum key distribution: the first complete security proofs were perfectly correct, except that the security criterion they used (low mutual information between the key and the outcome of any measurement on the adversary’s side information) was almost useless, due to locking: the “secure” key could be completely insecure if used as a one-time pad!

The “abstract cryptography” framework is supposed to preempt any such issue. Admittedly, the only examples I have seen so far are ones such as the one-time pad, which seem easy enough to formalize, so it’s hard to get a feeling for how complex an object can be proven secure in this way. So I would agree with you, and qualify further: simulators make proofs “possible, but possibly faulty”, legos make proofs that are “necessarily sound, but impossible to follow”. Though if this approach were to lead to automatic security provers, just as there are automatic proof verifiers/discoverers such as Coq, it wouldn’t be a bad outcome!

Ah, I think I understand better what they are getting at: the “abstract cryptography” framework is not just a tool for proving security, it’s also supposed to help you solve the meta-problem of how to define security in the first place. I guess there is also an opinion buried in there: that definitions of security that involve detailed models of the adversary are just bad.