FOCS 2012 had a special session today dedicated to the award of the Knuth prize to Leonid Levin, in recognition of four decades of visionary research in complexity, cryptography, and information theory.
Levin was introduced by Joan Feigenbaum, who honored him by keeping it short and relating a few stories about Levin’s famously sharp and succinct writing. She saved the best one for last. Levin had submitted a manuscript to a journal, but the editor sent the paper back with two major complaints: (1) the paper was too short, and (2) some terms were used before they were defined. In response, Levin took two copies of his paper, stapled them one after the other, and sent them back to the editor with a short note saying that he had thus addressed both of the editor’s concerns! (She didn’t say whether the paper was accepted the second time.)
Levin’s lecture was entitled “Turing’s Password: What the Internet Cannot Leak”. I won’t try to summarize it here as it was very dense (especially the slides! He probably had a dozen full papers in there — one per slide :-)), but I enjoyed it very much so I’ll just record what I understood as its main theme.
Levin’s main thesis, if I may say so, is that as a result of the evolving nature of computation, the equivalence stated in the Church-Turing thesis, between algorithmically computable sequences and sequences that can be computed by a Turing machine, has become obsolete. As a result, we should update our fundamental model of computation, just as physicists routinely update their models of nature to take new discoveries into account: some fundamentals stay, others need to be refined or replaced altogether. He used the internet as a central illustration of how our notion of computation has changed, or should have: almost all computations today are done over the internet, but one cannot reasonably argue that sequences that are “produced” by the internet can be generated by a Turing machine in the traditional sense (if only because of the presence of randomness, interaction with humans, etc.).
The ultimate Internet Church-Turing thesis (Levin’s wording!)
I found Levin’s proposed extension of the Church-Turing thesis very appealing. He stated it as follows:
No sequence that exists physically can have significant information about any mathematically definable sequence.
What does that mean? Precise definitions are given in Levin’s (brief!) paper Forbidden information (see in particular the appendix), but let me make an attempt here. A sequence that exists physically is one that can be found, or observed, in the real world by finite means: any sequence that I will be able to observe by following a short list of instructions, e.g. “point the telescope at this angle and measure intensity of light at some given frequency”, or “register network traffic coming out of this IP address”. A mathematically definable sequence is a sequence that can be succinctly defined using any reasonable set of axioms, such as Peano’s arithmetic. Finally, information is defined in the sense of Kolmogorov, through compressibility.
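Kolmogorov complexity itself is uncomputable, but a standard illustrative trick (my own toy sketch, not anything from the talk or the paper) is to use a general-purpose compressor as a crude, computable upper bound on it: a sequence with a short description compresses well, while a random one essentially does not.

```python
import os
import zlib

def approx_K(data: bytes) -> int:
    # Length of the zlib-compressed data: a crude, computable
    # upper bound on the Kolmogorov complexity of `data`.
    return len(zlib.compress(data, 9))

structured = b"ab" * 5000        # short description: "repeat 'ab' 5000 times"
random_ish = os.urandom(10000)   # incompressible with overwhelming probability

print(approx_K(structured))      # tiny compared to 10000
print(approx_K(random_ish))      # roughly 10000, or slightly more
```

The gap between the two outputs is the whole point: the first sequence carries almost no information in Kolmogorov’s sense, the second carries close to the maximum possible.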
Maybe a few examples will help. Avi Wigderson suggested a good one: does the sequence of digits of π exist physically? The answer was a clear “yes”: nature contains many instantiations of π; there are abundant circles around. But the digits of π, as a sequence, contain very little information anyway (in the sense of Kolmogorov), as there is a very simple algorithm that computes them one after the other. Levin gave another example: the sequence corresponding to the halting problem contains a lot of information (it is highly incompressible) and has a simple mathematical definition, but it does not exist in nature. A final example would be a sequence generated randomly: it exists in nature and contains a lot of information in the sense of being highly incompressible, but contains negligible information about any fixed, mathematically definable sequence.
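To make the π example concrete: the entire infinite sequence of its decimal digits is generated by a program of a few lines, so the Kolmogorov complexity of any prefix is essentially constant. Here is one such short program, using Gibbons’ unbounded spigot algorithm (my choice of algorithm for illustration, not one mentioned in the talk):

```python
from itertools import islice

def pi_digits():
    # Gibbons' unbounded spigot algorithm: streams the decimal
    # digits of pi one at a time, using only integer arithmetic.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n
            q, r, n = 10*q, 10*(r - n*t), 10*(3*q + r)//t - 10*n
        else:
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l)//(t*l), l + 2)

print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

A dozen lines suffice to describe the whole sequence, which is exactly why its digits carry so little information per symbol.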
Hence one way to understand the thesis is as saying that the only sequences that exist in nature are those that can be found through a set of instructions that is almost as short as their simplest high-level mathematical description. Indeed, if the mathematical description were much shorter, the “mutual information” — in this case, I believe, the difference in the lengths of the descriptions — would be very large. Levin’s thesis extends the notion of “algorithmically computable” to “has a short mathematical description”, and of “Turing computable” to “exists in nature”, and states a formal equivalence between the two notions using Kolmogorov complexity.
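The same compression trick gives a rough, purely illustrative proxy for mutual information between two finite sequences (again a toy sketch of mine, not a construction from the talk): y carries information about x roughly to the extent that x and y compress better together than separately.

```python
import os
import zlib

def C(data: bytes) -> int:
    # Compressed length: crude upper bound on Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def approx_mutual_info(x: bytes, y: bytes) -> int:
    # I(x:y) is roughly K(x) + K(y) - K(x,y); approximate each term
    # by compressed length (the idea behind "normalized compression
    # distance").
    return C(x) + C(y) - C(x + y)

a = bytes(range(256)) * 20   # a structured sequence
b = os.urandom(len(a))       # an independent random sequence

print(approx_mutual_info(a, a))  # large: a is maximally informative about itself
print(approx_mutual_info(a, b))  # near zero: b says nothing about a
```

A random sequence can thus be maximally incompressible on its own while, as in Levin’s thesis, sharing almost no information with any fixed sequence.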
The lecture was a welcome change from the usual STOC/FOCS talk, more conceptual and less flashy…but then it was Levin 🙂 I hope his proposition of a “UICT thesis” gains momentum and stimulates discussion in the community: basing computation on a more physically motivated footing than the original CT thesis did is one of the current challenges of the field. As computer science becomes ever more relevant to fields such as statistical or quantum physics, evolution, neurology, and others, it seems like a re-examination of our “fundamental object of study”, which by definition delineates the scope that we wish to give to our results, is in order.