This year, grad students in MIT’s theory of computing group organized their first *theory retreat*. The idea is that the students go on a trip together over a long weekend, have some fun, maybe even discuss some research, and, most importantly, get to know each other.

Apparently I can pride myself on having helped plant the germ of the retreat idea in some MIT students’ minds by praising Berkeley’s own theory retreats during one of the Monday teas. The Berkeley group started organizing retreats the year before I arrived there as a grad student, and they were always great successes (Lake Tahoe, Big Sur, Sequoia National Park, and the Northern California coast: it can’t go wrong).

I should say that these are not your usual resort-style “all-inclusive” retreats, though. Rather, one key aspect of their success is their chaotic nature: the students rent a big house, split up the available beds (and, as houses for 20 are not always easy to find, carpet space), (attempt to) cook together, play games, (try to) wake each other up in the morning, and discuss a few papers and open problems on a research topic chosen before the retreat.

Sharing the same house for three successive days and nights has in my experience always been a great way to get to know each other, much better than a more organized, semi-formal environment. So I tried to push for the same at MIT… but of course they couldn’t do things like everyone else: whereas at Berkeley we were typically 16–18 in a “sleeps 8” house, the MIT folks rented *three houses*, with a total sleeping capacity of *26*, for a total of *22 students*! Which is how I came to be invited to crash the party… I can’t complain 🙂

So we all headed to the White Mountains on what is supposed to be the best weekend of the year, as Fall colors light up the forest in a display worthy of the best fireworks. We spread our 22 selves across the three huge houses, cooked breakfast (pancakes!), lunch and dinner (stuffed peppers!), played games, and hiked. In spite of a very rainy Saturday, the trails chosen by our “outdoors activities organizer” were absolutely magnificent: I am pleased to offer my highest possible recommendation to a Fall trip to New Hampshire.

On top of all this, thanks to the tight schedule dutifully maintained by the expert organizers we even managed to discuss a little research. The topic chosen for this year was communication complexity, and more specifically recent work on information complexity. I got to present the paper *Lower bounds on information complexity via zero-communication protocols and applications* by Kerenidis, Laplante, Lerays, Roland and Xiao, which will appear in the upcoming FOCS. I highly recommend the paper, as it contains great insights on communication complexity and information complexity.

To state the result of theirs that I found most striking, I need to define the notion of a *zero-communication* protocol. Informally, a zero-communication protocol for a function $f$ is a protocol in which Alice receives $x$, Bob receives $y$, and each of them needs to produce an output in $\{0,1,\perp\}$, without communicating at all. Of course, there is a trick: the output symbol $\perp$ indicates that Alice or Bob decides to abort the protocol. To make the problem non-trivial, we require that the probability that at least one of them produces $\perp$ as his answer be (almost) a constant $1-\eta$ independent of the inputs. The cost of the protocol is then defined as $\log(1/\eta)$. And of course, it should be the case that, if both Alice and Bob report a bit as their answer, this bit should equal $f(x,y)$ with high probability. (To understand the definition, a good exercise is to devise a zero-communication protocol for the equality function that is correct with probability $1$, conditioned on not aborting, and has cost $O(n)$.)
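If you want to spoil the exercise for yourself, here is a small simulation of one possible such protocol (a sketch of my own construction, not necessarily the one the authors have in mind): using shared randomness, the players draw a uniform string $z$ and a bit $b$ selecting one of two tests, and each player aborts unless their input passes their side of the test.

```python
import random

def zero_comm_equality(x, y, n):
    """One round of a zero-communication protocol for equality on
    n-bit strings (inputs given as integers in [0, 2**n)).
    Shared randomness: a uniform string z and a uniform bit b.
    Returns (alice_out, bob_out), with None playing the role of
    the abort symbol.  No messages are ever exchanged."""
    z = random.getrandbits(n)  # shared uniform n-bit string
    b = random.getrandbits(1)  # shared uniform test-selector bit
    if b == 0:
        # "equal" test: both survive only if their input matches z,
        # in which case x = z = y and outputting 1 is always correct.
        alice = 1 if x == z else None
        bob = 1 if y == z else None
    else:
        # "unequal" test: Alice survives iff x = z, Bob iff y != z,
        # so joint survival forces x = z != y and 0 is always correct.
        alice = 0 if x == z else None
        bob = 0 if y != z else None
    return alice, bob
```

A quick check shows that on every input pair, the probability that neither player aborts is exactly $2^{-(n+1)}$, so $\eta$ really is a constant independent of the inputs, the cost is $n+1 = O(n)$, and conditioned on not aborting the common output is always correct.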

One of the most surprising results of the paper by Kerenidis et al. is that essentially all known lower bound methods on randomized *communication* complexity are also lower bounds on the cost of *zero-communication* protocols! To put it bluntly, to me this indicates that all our efforts to strengthen the simplest lower bound of all, the rectangle bound, in order to take into account constraints such as the fact that the rectangles should form a partition of the function’s truth table, come to naught: indeed, it is not hard to see that zero-communication protocols, contrary to communication protocols, are just about (distributions over) rectangles. It will be interesting to see if this new perspective brings about better lower bound methods, able to overcome the *zero-communication barrier*. (Note that strictly speaking, the authors do not show that Jain and Klauck’s partition bound is also a lower bound on zero-communication protocols — this is the only bound escaping their result. Reading the paper, this seems like more of a technicality than anything else, though I would be happy to be proven wrong.)