A couple weeks ago I attended the workshop on “Hypercontractivity and Log Sobolev Inequalities in Quantum Information Theory” organized by Patrick Hayden, Christopher King, Ashley Montanaro and Mary Beth Ruskai at the Banff International Research Station (BIRS).

BIRS holds 5-day workshops in mathematics and its applications throughout the year; it is itself a small component of the Banff Centre, which describes itself as the “largest arts and creativity incubator on the planet”. Sound attractive? The centre appears to be remarkably successful, managing to attract not only the brightest mathematicians like us(!), but also a range of talented artists from all over the world, reputable international conferences, and more. The setting certainly doesn’t hurt (see the pictures; that view, by the way, is from the centre’s beautiful library!), nor does the Canadian hospitality, with hearty food provided daily in a cafeteria with impressive views of the surrounding mountains. It was my first visit, and in spite of the somewhat chilly weather (“much warmer than usual”, as they say… you bet! When “usual” means -20 Celsius, it’s still not quite clear what improvement you’re getting) I enjoyed the place, which occupies a sort of middle ground between a secluded retreat in the mountains (think Dagstuhl) and a bustling research centre (think the Simons Institute in Berkeley). Good balance.

The goal of this post is to give a brief introduction to the (first half of the) topic of the workshop, “Hypercontractivity and Log Sobolev Inequalities”. (I hope to talk a bit about the second half, “in Quantum Information theory”, in the next post.) Log-Sobolev inequalities, be they quantum or classical, meant next to nothing to me before attending — I had heard of both, and was somewhat familiar with the uses of hypercontractivity in Boolean function analysis, but log-Sobolev inequalities were much more mysterious. The workshop was successful in bringing different perspectives to bear on these inequalities, their relationship, and applications (videos available online!). For my own benefit I’ll write down some very basic facts on these inequalities, where they come from, and what is the motivation for studying them. What I write will undoubtedly appear extremely naive to even moderate experts, and for much more on the subject I highly recommend a survey by Diaconis and Saloff-Coste, the lecture notes by Guionnet and Zegarlinski, and the (quals-fueled) survey by Biswal. In particular I refer the reader to these resources for proofs of the unproven assertions made in this post (any errors introduced being mine of course), as well as appropriate credit and references for the results mentioned below.

**Markov semi-groups.** The fundamental objects underlying both the hypercontractive and log-Sobolev inequalities are a space of functions $\mathcal{F}$ (think continuous real-valued functions defined on a nice topological space, $\mathbb{R}^n$, or your favorite graph) together with, first, a Markov semi-group $(P_t)_{t\geq 0}$ defined on that space, and second, a measure $\mu$ on the same space that is invariant under $(P_t)$, i.e.

$$\int P_t f \,\mathrm{d}\mu \,=\, \int f \,\mathrm{d}\mu \qquad\qquad (1)$$

for all $f\in\mathcal{F}$ and all $t\geq 0$. Now, even the notion of a “Markov semi-group” can be intimidating (to me), but it really corresponds to the most simple process. A Markov semi-group is a family of maps $(P_t)_{t\geq 0}$ indexed by the positive reals that transforms functions in such a way that applying the map for time $s$, and then for time $t$, is equivalent to applying it for time $s+t$: $P_t \circ P_s = P_{s+t}$. Simple examples are e.g. any random walk on the domain of the functions in $\mathcal{F}$, with $P_t f$ denoting the averaging of $f$ over inputs obtained by taking a “$t$-step” walk, or a diffusion process on real functions, such as convolving with a Gaussian of variance $t$ or evolving under the heat equation for time $t$. Some concrete examples that I’ll refer to throughout:

**Examples**

*Random walks on regular graphs.* Let $G=(V,E)$ be a regular undirected graph with degree $d$, and $A$ its normalized adjacency matrix: $A_{uv} = 1/d$ if $\{u,v\}\in E$ and $A_{uv}=0$ otherwise. Let $\mathcal{F} = \{f:V\to\mathbb{R}\}$. If $f\in\mathcal{F}$ and $u\in V$, then $(Af)(u) = \mathrm{E}_v[f(v)]$, where the expectation is taken over a random neighbor $v$ of $u$ in $G$. If $\pi$ is a distribution over $V$, then after $k$ steps of the discrete-time random walk on $G$ the distribution is given by $A^k\pi$. The standard way to make this into a continuous-time random walk is to wait at a vertex for an amount of time distributed as an exponential with mean $1$, and then move to a random neighbor. In other words, the number of steps taken in time $t$ is distributed as a Poisson point process with rate $1$. Starting in an initial distribution $\pi_0$, the distribution after time $t$ is given by

$$\pi_t \,=\, e^{-t} \sum_{k\geq 0} \frac{t^k}{k!}\, A^k \pi_0 \,=\, e^{-t(I-A)}\,\pi_0.$$
The same construction extends to any irreducible Markov chain with transition matrix $K$, for which the associated semigroup is $P_t = e^{-t(I-K)}$.
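As a quick sanity check (a minimal numpy sketch of my own, for the smallest interesting example), one can build $P_t = e^{-t(I-A)}$ for the walk on a 4-cycle and verify the semigroup property as well as convergence to the uniform distribution:

```python
import numpy as np

# Continuous-time random walk on the 4-cycle: P_t = exp(-t (I - A)),
# with A the normalized adjacency matrix of the cycle (degree d = 2).
n = 4
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 0.5

w, V = np.linalg.eigh(np.eye(n) - A)      # L = I - A is symmetric

def P(t):
    # matrix exponential e^{-tL} via the eigendecomposition of L
    return (V * np.exp(-t * w)) @ V.T

pi0 = np.array([1.0, 0.0, 0.0, 0.0])      # start at vertex 0
print(np.allclose(P(1.0) @ P(2.0), P(3.0)))                    # P_1 P_2 = P_3
print(np.allclose(pi0 @ P(10.0), np.ones(n) / n, atol=1e-3))   # ~ uniform
```

The exponential decay of the off-uniform components is visible in the second check: at time $t=10$ the distribution is uniform up to terms of order $e^{-10}$.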

*Ornstein-Uhlenbeck process.* Here we take for $\mathcal{F}$ the set of continuous functions from $\mathbb{R}$ to itself, and

$$(P_t f)(x) \,=\, \mathrm{E}_g\Big[f\big(e^{-t}x + \sqrt{1-e^{-2t}}\,g\big)\Big],$$

where $g$ is a standard Gaussian.
This describes the evolution of a particle submitted to Brownian motion under the influence of friction. It is a continuous, Gaussian analogue of the Bonami-Beckner noise operator $T_\rho$ on the hypercube that crops up in computer science applications, where $(T_\rho f)(x) = \mathrm{E}_y[f(y)]$ with $y$ obtained from $x$ by flipping each of its coordinates independently with probability $(1-\rho)/2$.

*Time-independent evolution.* Schrödinger’s equation dictates that a quantum state $\rho$ subject to a fixed Hamiltonian $H$ will evolve as $\rho_t = e^{-iHt}\rho\,e^{iHt}$ (where I set $\hbar=1$ for simplicity). It is easy to check that $P_t:\rho\mapsto e^{-iHt}\rho\,e^{iHt}$ is a Markov semi-group. Note however this semi-group acts on a different kind of space than the previous examples: $\rho$ is an operator on a Hilbert space $\mathcal{H}$, rather than a function on a certain domain. Dealing with such examples requires non-commutative analogues of the classical theory, to which I hope to return in the next post.
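To see the Ornstein-Uhlenbeck semigroup in action, here is a small numerical check of my own, using the formula $(P_t f)(x) = \mathrm{E}[f(e^{-t}x + \sqrt{1-e^{-2t}}\,g)]$ with the Gaussian expectation evaluated by Gauss-Hermite quadrature. It verifies that the Hermite polynomial $x^2-1$ is an eigenfunction of $P_t$ with eigenvalue $e^{-2t}$:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Quadrature nodes/weights for the standard Gaussian measure.
nodes, weights = hermegauss(60)
weights = weights / weights.sum()          # normalize to a probability measure

def P(t, f, x):
    # Ornstein-Uhlenbeck semigroup: (P_t f)(x) = E_g f(e^-t x + sqrt(1-e^-2t) g)
    return np.sum(weights * f(np.exp(-t) * x + np.sqrt(1 - np.exp(-2 * t)) * nodes))

He2 = lambda x: x**2 - 1                   # degree-2 Hermite polynomial
x, t = 1.3, 0.7
print(np.isclose(P(t, He2, x), np.exp(-2 * t) * He2(x)))  # eigenvalue e^{-2t}
```

This mirrors the discrete picture: on the hypercube, $T_\rho$ shrinks the degree-$k$ Fourier level by $\rho^k$; here $P_t$ shrinks the $k$-th Hermite level by $e^{-kt}$.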

**Ergodicity.** Now that we have a cast of players, let’s see what kind of questions we’d like to answer about them. The most basic question is that of convergence: given a Markov semi-group $(P_t)$ defined on a function space $\mathcal{F}$, does $P_t f$ converge as $t\to\infty$, and if so at what rate? The latter part of this question requires one to specify a distance measure on $\mathcal{F}$. For this we will use the $L_p$ norms, defined as $\|f\|_p = \big(\int |f|^p\,\mathrm{d}\mu\big)^{1/p}$ for some measure $\mu$ on the domain of the functions in $\mathcal{F}$. Important special cases are $p=2$ and $p=\infty$. Here the measure $\mu$ will always be an *invariant measure* for $(P_t)$, i.e. it satisfies (1). There does not always exist an invariant measure, and there can be more than one; in practice though $(P_t)$ and $\mu$ are often defined together, and for the purposes of this post I will assume there is a unique invariant $\mu$ (for the example of a semi-group defined from a Markov chain, this corresponds to the chain being irreducible, and $\mu$ is its unique stationary measure). The question then becomes, does $P_t f$ converge towards its expectation $\mathrm{E}_\mu[f]$ (which is independent of $t$, by invariance), and if so at what speed? This is precisely the question that log-Sobolev inequalities will let us answer.

One says that $(P_t)$ is $p$-ergodic if $\|P_t f - \mathrm{E}_\mu[f]\|_p \to 0$ as $t\to\infty$ for all $f\in\mathcal{F}$, i.e. convergence holds in the $L_p$-norm. Note this is equivalent to requiring $\|P_t - \mathrm{E}_\mu\|_{p\to p}\to 0$, where $\|\cdot\|_{p\to p}$ denotes the operator norm from $L_p(\mu)$ to $L_p(\mu)$. Since $\|f\|_p \leq \|f\|_q$ for all $p\leq q$ and any $f$, the notion of $p$-ergodicity becomes stronger as $p$ increases. We will see three different ways to obtain bounds on the rate of convergence: the spectral gap inequality, equivalent to $2$-ergodicity; Sobolev inequalities and ultracontractivity, which imply $\infty$-ergodicity; and log-Sobolev inequalities and hypercontractivity, which are related to $p$-ergodicity for intermediate $2<p<\infty$.

Before getting started there is one last quantity associated with any Markov semi-group that we need to introduce, the *Liouvillian* $\mathcal{L}$. The Liouvillian is defined through its action on functions $f\in\mathcal{F}$ as

$$\mathcal{L} f \,=\, \lim_{t\to 0^+} \frac{f - P_t f}{t}.$$

Equivalently, the Liouvillian can be defined through the functional equation $P_t = e^{-t\mathcal{L}}$: it plays the role of infinitesimal generator for the semi-group $(P_t)$. For the example of the random walk the Liouvillian is simply $\mathcal{L} = I - A$. For the Ornstein-Uhlenbeck process one finds $(\mathcal{L}f)(x) = -f''(x) + x f'(x)$, and for the Bonami-Beckner noise operator $\mathcal{L} = \frac{1}{2}(nI - A)$, where $A$ is the adjacency matrix of the hypercube. Finally in the example of (time-independent) Schrödinger evolution we have $\frac{\mathrm{d}}{\mathrm{d}t}\rho_t = -i[H,\rho_t]$, so that the Liouvillian is $\mathcal{L}:\rho\mapsto i[H,\rho]$.
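The defining limit can be checked numerically; in the following sketch (my own, instantiating the random-walk example on a 4-cycle) $(f - P_t f)/t$ for small $t$ is compared against $(I-A)f$, the Liouvillian of the walk:

```python
import numpy as np

# Liouvillian of the continuous-time walk on the 4-cycle: L = I - A.
# Check the defining limit  L f = lim_{t->0} (f - P_t f)/t.
n = 4
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 0.5
L = np.eye(n) - A

w, V = np.linalg.eigh(L)

def P(t):
    return (V * np.exp(-t * w)) @ V.T      # P_t = e^{-tL}

f = np.array([3.0, -1.0, 0.0, 2.0])
t = 1e-6
print(np.allclose((f - P(t) @ f) / t, L @ f, atol=1e-4))
```

The error in the finite-difference quotient is of order $t\|\mathcal{L}^2 f\|/2$, so shrinking $t$ further improves the agreement (up to floating-point precision).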

**Spectral gap inequality and $2$-ergodicity.** A first family of inequalities are spectral gap inequalities, which turn out to be equivalent to $2$-ergodicity. The spectral gap of an invariant measure $\mu$ for the semigroup generated by $\mathcal{L}$ is defined as the largest $\lambda$ (if it exists) such that

$$\lambda\, \|f\|_2^2 \,\leq\, \langle f, \mathcal{L} f\rangle \qquad\qquad (2)$$

holds for all $f$ such that $\mathrm{E}_\mu[f]=0$. In the example of the Markov chain, $\langle f,\mathcal{L}f\rangle = \frac{1}{2}\mathrm{E}\big[(f(u)-f(v))^2\big]$ and the inequality can be rewritten as $\mathrm{E}\big[(f(u)-f(v))^2\big] \geq \lambda\,\mathrm{E}\big[(f(u)-f(v'))^2\big]$, where the first expectation is taken over $u\sim\mu$ and $v$ a single step of the chain from $u$, and the second expectation is over independent $u,v'\sim\mu$. In particular for random walks on graphs we recover the standard definition of the Laplacian $\mathcal{L} = I - A$.

Applying (2) to $P_t f$ we see that it is equivalent to requiring $\frac{\mathrm{d}}{\mathrm{d}t}\|P_t f\|_2^2 \leq -2\lambda\,\|P_t f\|_2^2$, which using Gronwall’s lemma implies the bound $\|P_t f\|_2 \leq e^{-\lambda t}\|f\|_2$ for all $f$ such that $\mathrm{E}_\mu[f]=0$. Applying the appropriate translation the same inequality can be rewritten, for arbitrary $f$, as

$$\big\|P_t f - \mathrm{E}_\mu[f]\big\|_2 \,\leq\, e^{-\lambda t}\,\big\|f - \mathrm{E}_\mu[f]\big\|_2,$$

which is precisely a convergence bound. Conversely, by differentiating the above inequality at $t=0$ we recover (2), so that the two are equivalent.
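For a concrete illustration (again a numpy sketch of my own), one can compute the spectral gap of the walk on the $n$-cycle, which equals $1-\cos(2\pi/n)$, and confirm the $L_2$ convergence bound on a random function:

```python
import numpy as np

# Spectral gap of the continuous-time walk on the n-cycle, and the L2 bound.
n = 8
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 0.5
L = np.eye(n) - A
w, V = np.linalg.eigh(L)
lam = np.sort(w)[1]                        # smallest nonzero eigenvalue of L
print(np.isclose(lam, 1 - np.cos(2 * np.pi / n)))   # known value for the cycle

def P(t):
    return (V * np.exp(-t * w)) @ V.T

mu = np.ones(n) / n                        # uniform invariant measure
f = np.random.default_rng(0).normal(size=n)
g = f - mu @ f                             # center so that E_mu[g] = 0
for t in [0.5, 2.0, 5.0]:
    lhs = np.sqrt(mu @ (P(t) @ g) ** 2)    # || P_t g ||_2
    rhs = np.exp(-lam * t) * np.sqrt(mu @ g ** 2)
    print(bool(lhs <= rhs + 1e-12))
```

By the spectral decomposition the bound is tight exactly when $g$ is an eigenfunction with eigenvalue $\lambda$.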

Rather than $L_2$ convergence one would often like to derive bounds on convergence in $L_\infty$, or total variation, distance. One way to do this is to use Hölder’s inequality as

$$\big\|P_t f - \mathrm{E}_\mu[f]\big\|_\infty \,\leq\, \mu_\star^{-1/2}\,\big\|P_t f - \mathrm{E}_\mu[f]\big\|_2 \,\leq\, \mu_\star^{-1/2}\, e^{-\lambda t}\,\big\|f - \mathrm{E}_\mu[f]\big\|_2,$$

where $\mu_\star$ denotes the smallest probability assigned by $\mu$ to any point of the domain. Thus we get an exponentially decaying bound, but the prefactor could be very large; for instance in the case of a random walk on a graph it would typically grow polynomially with the number of vertices. Our goal is to do better by using log-Sobolev inequalities. But before investigating those, let’s first see what a Sobolev (without the log!) inequality is.

**Sobolev inequalities and $\infty$-ergodicity.** Sobolev inequalities are the “high-$p$” generalizations of the spectral gap inequality. $(P_t)$ satisfies a Sobolev inequality if there is a $p>2$ and constants $a,b\geq 0$, such that for all $f$ with $\|f\|_2 < \infty$,

$$\|f\|_p^2 \,\leq\, a\,\langle f, \mathcal{L} f\rangle \,+\, b\,\|f\|_2^2.$$
Sobolev inequalities imply the following form of ergodicity:

$$\big\|P_t - \mathrm{E}_\mu\big\|_{1\to\infty} \,\leq\, c\, t^{-\gamma} \qquad \text{for all } t>0,$$

where $c,\gamma$ are constants depending on $a$, $b$ and $p$ only (and $\gamma\to\infty$ as $p\to 2$). If one can take $b=0$ (which is often the case) this directly gives a polynomial bound on $\|P_t f - \mathrm{E}_\mu[f]\|_\infty$ which is incomparable to the one we derived from the spectral gap inequality: on the one hand it does not have the large prefactor, but on the other it only gives polynomial decay in $t$. This form of contractivity, in the $1\to\infty$ operator norm, is called “ultracontractivity”.

An important drawback of Sobolev inequalities is that they do not “tensor” well, leading to dimension-dependent bounds. What this means is that if $P_t = e^{-t\mathcal{L}}$ acts on functions of a single variable in a certain way, and we let $P_t^{\otimes n} = e^{-t\mathcal{L}_n}$, with $\mathcal{L}_n = \sum_{i=1}^n \mathcal{L}^{(i)}$ (where $\mathcal{L}^{(i)}$ denotes $\mathcal{L}$ applied to the $i$-th variable), act on $n$-variable functions in the natural way, then the resulting semigroup will in general not satisfy a Sobolev inequality with constants independent of $n$, even if $P_t$ itself did. This contrasts with the spectral gap inequality: it is not hard to see that the spectral gap of $\mathcal{L}_n$ is the same as that of $\mathcal{L}$. This makes Sobolev inequalities particularly hard to derive in the infinite-dimensional settings that originally motivated the introduction of log-Sobolev inequalities (I hope to return to this in the next post).
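The claim about the spectral gap can be verified directly in a small example (another sketch of my own): for two coordinates the generator is $\mathcal{L}\otimes I + I\otimes \mathcal{L}$, whose eigenvalues are all pairwise sums of eigenvalues of $\mathcal{L}$, so its gap coincides with that of $\mathcal{L}$:

```python
import numpy as np

# Spectral gap tensorization: gap(L x I + I x L) = gap(L),
# illustrated for the walk on the 5-cycle.
n = 5
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[u, (u - 1) % n] = 0.5
L = np.eye(n) - A
I = np.eye(n)
L2 = np.kron(L, I) + np.kron(I, L)         # generator of the product walk

gap = lambda M: np.sort(np.linalg.eigvalsh(M))[1]
print(np.isclose(gap(L2), gap(L)))
```

The reason is transparent in the eigenvalue picture: the smallest nonzero sum $\lambda_i + \lambda_j$ is achieved by pairing the zero eigenvalue with the gap itself.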

**Log-Sobolev inequalities and $p$-ergodicity.** We finally arrive at log-Sobolev inequalities. We’ll say that $(P_t)$ satisfies a log-Sobolev inequality if there exist constants $c,d\geq 0$ such that for all non-negative $f$,

$$\int f^2 \ln\frac{f^2}{\|f\|_2^2}\,\mathrm{d}\mu \,\leq\, c\,\langle f, \mathcal{L} f\rangle \,+\, d\,\|f\|_2^2. \qquad\qquad (3)$$

Often it is required that $d=0$, and the log-Sobolev constant is $\alpha = 1/c$, where $c$ is the smallest positive real for which (3) holds with $d=0$ for all $f$. A log-Sobolev inequality can be understood as a “weakest” Sobolev inequality, obtained as $p\to 2$; indeed, one can show that if $(P_t)$ satisfies a Sobolev inequality for some $p>2$ then it also satisfies a log-Sobolev inequality. Gross related the log-Sobolev inequality to hypercontractivity by showing that it is equivalent to the statement that for all $q\geq 2$ and all $t \geq \frac{1}{4\alpha}\ln(q-1)$ the bound

$$\|P_t f\|_q \,\leq\, \|f\|_2$$

holds: $(P_t)$ is hypercontractive for the $2\to q$ operator norm if and only if the log-Sobolev constant satisfies $4\alpha t \geq \ln(q-1)$. One can also show that $2\alpha\leq\lambda$ always holds. But a direct use of the log-Sobolev inequality can give us a better convergence bound than the one we derived from the spectral gap: a more careful use of Hölder’s inequality than the one we made earlier easily leads to the bound

$$\big\|P_t f - \mathrm{E}_\mu[f]\big\|_\infty \,\leq\, e\,\Big(\ln\frac{1}{\mu_\star}\Big)^{\lambda/(4\alpha)}\, e^{-\lambda t}\,\big\|f - \mathrm{E}_\mu[f]\big\|_2 \qquad \text{for } t\geq \frac{1}{4\alpha}\ln\ln\frac{1}{\mu_\star},$$

a marked improvement over the prefactor in the bound derived from $\lambda$ alone (provided $\alpha$ and $\lambda$ are of the same order, which turns out to often be the case; here $\mu_\star$ is again the smallest probability assigned by $\mu$ to any point).

Aside from this improvement, a key point making log-Sobolev inequalities so useful is that, contrary to Sobolev inequalities, they tensor perfectly: if $\mathcal{L}^{(i)}$ satisfies a log-Sobolev inequality with coefficients $(c_i,d_i)$ for $i=1,\ldots,n$ then $\mathcal{L}_n = \sum_i \mathcal{L}^{(i)}$ satisfies one with coefficients $c = \max_i c_i$ and $d = \sum_i d_i$. Thus for example the standard approach for proving hypercontractivity for the Bonami noise operator $T_\rho$ on the $n$-dimensional hypercube: first, prove the “two-point inequality” $\|T_\rho f\|_q \leq \|f\|_2$ for the single-variable case, for any $f:\{-1,1\}\to\mathbb{R}$ and $\rho \leq (q-1)^{-1/2}$. This by itself is a nontrivial statement, but it boils down to a direct calculation. Then use Gross’ equivalence to deduce that the one-variable noise operator satisfies a log-Sobolev inequality with the appropriate constant $\alpha$. Deduce that the $n$-dimensional noise operator satisfies a log-Sobolev inequality with the same value of $\alpha$, and conclude hypercontractivity of $T_\rho$ for general $n$: $\|T_\rho f\|_q \leq \|f\|_2$ for all $f:\{-1,1\}^n\to\mathbb{R}$ and $\rho\leq (q-1)^{-1/2}$.
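As a last sanity check (a brute-force sketch of my own, not the proof), one can verify the hypercontractive bound $\|T_\rho f\|_q \leq \|f\|_2$ at $\rho = (q-1)^{-1/2}$ on random functions over the $3$-dimensional hypercube:

```python
import numpy as np
from itertools import product

# Brute-force check of ||T_rho f||_q <= ||f||_2 for rho = 1/sqrt(q-1)
# on random functions f : {-1,1}^3 -> R.
rng = np.random.default_rng(1)
n, q = 3, 4
rho = 1 / np.sqrt(q - 1)
pts = np.array(list(product([-1, 1], repeat=n)))   # the 2^n hypercube points

def T(rho, f):
    # (T_rho f)(x) = E f(y), y_i flipped from x_i independently w.p. (1-rho)/2
    out = np.zeros(len(pts))
    for i, x in enumerate(pts):
        agree = (pts * x).sum(axis=1)               # = n - 2 * (#flipped coords)
        probs = (((1 + rho) / 2) ** ((n + agree) // 2)
                 * ((1 - rho) / 2) ** ((n - agree) // 2))
        out[i] = probs @ f
    return out

norm = lambda g, p: np.mean(np.abs(g) ** p) ** (1 / p)   # L_p, uniform measure
ok = all(norm(T(rho, f), q) <= norm(f, 2) + 1e-12
         for f in rng.normal(size=(20, len(pts))))
print(ok)
```

Of course a finite random test proves nothing, but it is a useful guard against getting the direction of the inequality, or the critical value of $\rho$, wrong.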

I hope this got you interested in the topic. I wish I had thought of browsing through the surveys mentioned at the start of the post before the BIRS workshop… Have a look, they’re quite interesting. Next time I’ll try to explain the second part of the workshop title, “in Quantum Information Theory”…

Nice stuff! I had no idea that Sobolev inequalities do not tensorize. This can potentially save me from a deadly mistake at some point in life. (A small point: the equation before the paragraph on Sobolev inequalities has not compiled.)

Thanks! Equation fixed.

Re: tensoring, there is an explicit counter-example in the Guionnet-Zegarlinski notes.