I apologize for being so slow to post these days… but my attempt at blogging is not dead (yet)! Things have been piling up, with an extended trip, a visitor, and now some urgent deadlines… I expect to get back to more serious topics in a week or so.

As my most loyal readers will undoubtedly have noticed, my recent trip included a two-week stay in Berkeley. I don't need to comment on the effect that coming back to this wonderful place had on me; others have done so eloquently before. The UC Berkeley campus area is the single place where I have spent the longest (almost) uninterrupted stretch of my life (four years!), and it felt like the closest thing to "home" I have.

One of the best things about the theory group in Berkeley is its size. Depending on how you count, there are about 6-8 faculty and roughly three times as many students. This is small enough to know everyone well, and large enough for there to be the right amount of diversity. Most importantly, the relatively small size means that students get exposed to each other's research areas, and will take classes that do not necessarily match their immediate research interests precisely. I believe this "right" size plays an important role in making Berkeley students among those with the broadest outlook on theory. Of course the specific perspective taken by the faculty on TCS also plays an important role, and in spite of the relaxed and "hands-off" atmosphere the faculty take very good care of their pool of students!

**The Berkeley theory retreat**

Part of the attention paid to the students' well-being is, guess what, the theory retreat — yes, the famous Berkeley theory retreat, emulated by theory students at the best institutions worldwide! The first retreat happened in 2006, the year before I joined for my PhD. It's been held every year since, in places such as Lake Tahoe, Yosemite National Park, Sequoia National Park, and Big Sur… can't say we're lacking options in the area!

This year I managed to time my trip so well that it coincided with the students' planned dates for the retreat… so, well, I crashed it again 🙂 We headed back down to beautiful Big Sur, just a three-hour drive south along the coast, to almost the exact same house we had rented a few years back (I remember because I was in charge of housing that year). This time, though, the students (inspired by the MIT extravaganza?) had rented *three* huge neighboring villas, just across from the beach.

Our two-day menu included some hiking, various more or less exotic forms of chess and other board-game-playing, an epic game of mafia (won, for the first time in retreat history, by the villagers), some beach-frisbee (man that water was cold!), and, well, pesto-filled pork chops & other delicacies masterfully prepared by our resident chef.

**Rounding the Lasserre hierarchy**

In addition to all the fun, the retreat also involves presentations and discussions on a research topic chosen (and hopefully prepared) ahead of time. This year's was a promising but difficult one: the relation between SDP hierarchies and quantum information. Of the three talks given by the students, I particularly enjoyed Ning's, which was crystal clear. Ning explained what seems to be the only rounding technique we have that takes advantage of higher levels of the Lasserre hierarchy.

I'll briefly record the main idea as I understood it, since I found it so beautiful. When is it hard to round an SDP solution? When the SDP value is high, meaning the SDP vectors tend to be locally very "correlated", but they fall short of giving away a deterministic assignment to the original CSP. (Of course, one has to define what "correlation" means here… simply think of it as a natural measure of mutual information adapted to SDP vectors.)

Such a situation occurs when the large correlations do not come from vectors that are either of large norm and well aligned with a particular direction, or of small norm (in which case the Goemans-Williamson random projection technique would already lead to a good assignment). Rather, the vectors are all of roughly equal norm yet still somewhat aligned, corresponding to very mixed local distributions on satisfying assignments that a global rounding procedure will not recover easily.
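As an aside, the random projection step just mentioned is easy to sketch. The toy instance, names, and code below are my own illustration (not from the talk): each variable has a unit vector, and we project all of them onto a random direction, keeping only the signs.

```python
import random

def gw_round(vectors, rng):
    """Round unit vectors in R^2 to +/-1 labels via a random hyperplane."""
    g = (rng.gauss(0, 1), rng.gauss(0, 1))  # random direction
    return [1 if v[0] * g[0] + v[1] * g[1] >= 0 else -1 for v in vectors]

def cut_value(labels, edges):
    """Number of edges cut by a +/-1 labeling (Max-Cut objective)."""
    return sum(1 for (u, v) in edges if labels[u] != labels[v])

# Toy instance: a 4-cycle, whose maximum cut contains all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
# An (idealized) optimal SDP solution: alternating antipodal unit vectors.
vectors = [(1.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]

labels = gw_round(vectors, random.Random(0))
print(cut_value(labels, edges))  # 4: antipodal vectors always land on opposite sides
```

When the vectors are antipodal and well aligned, as here, any random direction separates them perfectly; the hard case described above is precisely when no such alignment structure is present.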

Our goal is to use the high correlation between the SDP vectors to extract a classical assignment to the CSP variables. Suppose we are given a highly correlated solution to the *k*-th level of the Lasserre hierarchy: there is a vector variable for every assignment to every set of at most *k* variables, together with matching distributions on all such subsets of variables. The idea is to perform *conditioning*: select a particular variable, assign it a fixed value (sampled from the distribution implied by the SDP vectors), and restrict the SDP vectors to those that are consistent with this assignment. This brings us down one level in the hierarchy (we only have vectors for sets of at most *k* − 1 variables), but the correlation has also gone down by some amount: any variable that was highly correlated with the one we fixed has itself become all but fixed. If we can repeat this operation enough times (which requires the original solution to come from a high enough level of the hierarchy), we end up with a solution with no correlation at all: a classical, deterministic assignment.
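To make the conditioning step concrete, here is a minimal sketch of my own (names and the toy input are assumptions, not from the talk), treating the local distributions as explicit pairwise tables: fixing one variable to a value replaces every other variable's marginal by the corresponding conditional distribution.

```python
def condition(pairwise, var, value):
    """Conditional marginals Pr[x_j = b | x_var = value] for every j != var.

    pairwise[(i, j)][(a, b)] = Pr[x_i = a, x_j = b], with i < j.
    """
    out = {}
    for (i, j), table in pairwise.items():
        if var not in (i, j):
            continue
        other = j if i == var else i
        # Collect the joint mass Pr[x_var = value, x_other = b].
        joint = {}
        for (a, b), p in table.items():
            a_var, a_other = (a, b) if i == var else (b, a)
            if a_var == value:
                joint[a_other] = joint.get(a_other, 0.0) + p
        total = sum(joint.values())
        out[other] = {b: p / total for b, p in joint.items()}
    return out

# Two perfectly correlated bits: fixing the first one also fixes the second,
# so conditioning removes all the correlation in a single step.
pairwise = {(0, 1): {(0, 0): 0.5, (1, 1): 0.5}}
print(condition(pairwise, 0, 1))  # {1: {1: 1.0}}
```

The point of the toy example is exactly the phenomenon described above: a variable that was highly correlated with the one we fixed becomes all but fixed itself, so each round of conditioning eats away at the remaining correlation.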

A difficulty in analyzing this procedure is to understand when high *local* correlation (measured across edges/constraints) implies high *global* correlation (measured across all pairs of variables). Indeed, our goal is to decrease global correlations, but we only know that conditioning has an effect commensurate with the local correlation. This suggests that the rounding procedure outlined above performs better on graphs that are closer to expanders. Indeed, I believe the number of hierarchy levels required can be related to the number of large eigenvalues (close to the maximum) of the constraint graph, though I don't know the details.
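The quantity "number of large eigenvalues" is sometimes called the threshold rank, and it is trivial to compute from a spectrum. The sketch below is mine; the two spectra are standard facts about the normalized adjacency matrices of small graphs, supplied purely for illustration.

```python
def threshold_rank(eigenvalues, tau):
    """Count eigenvalues of a normalized adjacency matrix that are >= tau."""
    return sum(1 for lam in eigenvalues if lam >= tau)

# Normalized adjacency spectrum of the 4-cycle: {1, 0, 0, -1}.
cycle4 = [1.0, 0.0, 0.0, -1.0]
print(threshold_rank(cycle4, 0.9))  # 1: only the trivial top eigenvalue

# Disjoint union of two 4-cycles: two eigenvalues equal to 1,
# reflecting the two connected components.
two_cycles = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, -1.0, -1.0]
print(threshold_rank(two_cycles, 0.9))  # 2
```

A good expander has threshold rank 1 (every non-trivial eigenvalue is bounded away from the maximum), which matches the intuition above: on expanders, local correlation forces global correlation, so fewer levels of conditioning should be needed.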

If you followed me all the way here, you probably noticed how simple the rounding algorithm resulting from the procedure I outlined is: it simply selects a certain subset of the vectors obtained from the SDP, and eventually performs standard random-projection rounding on the selected vectors. The other vectors, corresponding to assignments incompatible with the conditioning performed, are simply discarded. Is there no better way of using the rich structure of a level-*k* solution to the Lasserre hierarchy? Maybe thinking of such a solution in the language of quantum states, or entangled-player strategies, could give us a hint… This was the theme of the retreat, and I can't wait to see what the Berkeley theory students will come up with 🙂

[I wish I could point my readers to a reference where the rounding procedure is explained in more detail, while still being accessible: is there a good paper for this?]

This paper, arXiv:1104.4680, by Barak, Raghavendra, and Steurer indeed shows that the number of rounds needed is related to the threshold rank of the constraint graph. Together with the paper by Raghavendra and Tan, arXiv:1110.1064, I think they do a good job of explaining the rounding procedure and other details of the framework.

I am becoming a loyal reader!