Levin and the Church-Turing thesis

FOCS 2012 had a special session today dedicated to the award of the Knuth prize to Leonid Levin, in recognition of four decades of visionary research in complexity, cryptography, and information theory.

Levin was introduced by Joan Feigenbaum, who honored him by keeping it short and relating a few stories about Levin’s famously sharp and succinct writing. She kept the best one for last. Levin submitted a manuscript to a journal, but the editor sent the paper back with two major complaints: (1) the paper was too short, and (2) some terms were used before they were even defined. In response, Levin took two copies of his paper, stapled them one after the other, and sent them back to the editor with a short note saying that he had thus addressed both of the editor’s concerns! (She didn’t say whether the paper was accepted the second time.)

Levin’s lecture was entitled “Turing’s Password: What the Internet Cannot Leak”. I won’t try to summarize it here as it was very dense (especially the slides! He probably had a dozen full papers in there — one per slide :-)), but I enjoyed it very much so I’ll just record what I understood as its main theme.

Levin’s main thesis, if I may say so, is that the equivalence stated in the Church-Turing thesis, between algorithmically computable sequences and sequences that can be computed by a Turing machine, has become obsolete as a result of the evolving nature of computation. We should therefore update our fundamental model of computation, just as physicists routinely update their models of nature to take new discoveries into account: some fundamentals stay, others need to be refined or replaced altogether. He used the internet as his central illustration of how our notion of computation has, or should have, changed: almost all computation today is done over the internet, yet one cannot reasonably argue that sequences “produced” by the internet can be generated by a Turing machine in the traditional sense (if only because of the presence of randomness, interaction with humans, and so on).

The ultimate Internet Church-Turing thesis (Levin’s wording!)

I found Levin’s proposed extension of the Church-Turing thesis very appealing. He stated it as follows:

No sequence that exists physically can have significant information about any mathematically definable sequence.

What does that mean? Precise definitions are given in Levin’s (brief!) paper Forbidden information (see in particular the appendix), but let me make an attempt here. A sequence that exists physically is one that can be found, or observed, in the real world by finite means: any sequence one could observe by following a short list of instructions, e.g. “point the telescope at this angle and measure the intensity of light at a given frequency”, or “record the network traffic coming out of this IP address”. A mathematically definable sequence is one that can be succinctly defined using any reasonable set of axioms, such as Peano arithmetic. Finally, information is measured in the sense of Kolmogorov, through compressibility.
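To fix notation (this is one standard convention; Levin’s paper should be consulted for his exact definitions, and for the extension from finite strings to infinite sequences): fix a universal machine U and let

K(x) = \min \{ |p| : U(p) = x \}

denote the Kolmogorov complexity of a string x, with K(x|y) the conditional version in which U also receives y as auxiliary input. The algorithmic mutual information between x and y is then

I(x:y) = K(x) + K(y) - K(x,y),

which, up to logarithmic additive terms, equals K(y) - K(y|x): the number of bits that knowing x saves when describing y.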

Maybe a few examples will help. Avi Wigderson suggested a good one: does the sequence of digits of \pi exist physically? The answer was a clear “yes”: nature contains many instantiations of \pi; circles abound. But the digits of \pi, as a sequence, contain very little information anyway (in the sense of Kolmogorov), since there is a very simple algorithm that computes them one after the other. Levin gave another example: the sequence corresponding to the halting problem contains a lot of information (it is highly incompressible) and has a simple mathematical definition, but it does not exist in nature. A final example is a sequence generated at random: it exists in nature and is highly incompressible, but it contains negligible information about any fixed, mathematically definable sequence.
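To make the \pi example concrete, here is a short Python sketch (Gibbons’ unbounded spigot algorithm, not anything from Levin’s talk) that streams the decimal digits of \pi forever. The entire program is a handful of lines, which is precisely the sense in which the infinite digit sequence carries almost no Kolmogorov information.

    from itertools import islice

    def pi_digits():
        # Gibbons' unbounded spigot: the state (q, r, t, k, n, l) encodes a
        # linear fractional transformation; a decimal digit of pi is emitted
        # whenever the next digit is fully determined by the state.
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n  # digit settled: shift it out of the state
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                # otherwise absorb one more term of the underlying series
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

A fixed-size program generating an infinite sequence is the extreme case of compressibility: every length-n prefix of the digits of \pi has complexity O(\log n), since it suffices to specify how many digits to print.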

Hence one way to understand the thesis is as saying that the only sequences that exist in nature are those that can be found through a set of instructions almost as short as their simplest high-level mathematical description. Indeed, if the mathematical description were much shorter, the “mutual information” (here, I believe, the difference in the lengths of the two descriptions) would be very large. Levin’s thesis extends the notion of “algorithmically computable” to “has a short mathematical description”, and of “Turing computable” to “exists in nature”, and states a formal equivalence between the two notions using Kolmogorov complexity.
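In the notation introduced above, and with the caveat that this is my paraphrase rather than the precise statement of Forbidden information, the thesis reads: for every sequence x that exists physically and every mathematically definable sequence y,

I(x:y) is negligible

(Levin’s paper quantifies “negligible”). Note that for a computable y the statement is nearly vacuous, since every length-n prefix of a computable sequence has complexity O(\log n) and thus carries little information about anything; the content of the thesis lies entirely in definable but uncomputable sequences, such as the one encoding the halting problem.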

The lecture was a welcome change from the usual STOC/FOCS talk, more conceptual and less flashy…but then, it was Levin 🙂 I hope his proposal of a “UICT thesis” gains momentum and stimulates discussion in the community: putting computation on a more physically motivated footing than the original CT thesis did is one of the current challenges of the field. As computer science becomes ever more relevant to fields such as statistical or quantum physics, evolution, and neurology, a re-examination of our “fundamental object of study”, which by definition delineates the scope that we wish to give to our results, seems to be in order.


3 Responses to Levin and the Church-Turing thesis

  1. aram says:

    The conjecture is consistent with my prejudices, but is pretty unsatisfactory in that we wouldn’t be able to recognize an uncomputable number even if we found it. This point is discussed in a much more amusing way here.

  2. Thomas says:

    Hi aram,

    Thanks for the reference! It is indeed a very nice introduction, though I don’t think it really is saying anything surprising.

    I am not sure why you find the unrecognizability of uncomputable numbers unsatisfactory — it seems like such a fact must be part of any reasonable definition of computability. Do you mean that it implies the practical impossibility of proving the “conjecture”? If so I agree, but then I would not call UICT (or the CT) a conjecture either, but rather a thesis, or a “postulate”, as Levin states it in his paper: it is not something that you expect to ever be able to verify or “prove”, so there is no point in calling it a conjecture. Rather, it is a statement that you can believe in or not, debate, etc.

    In that respect I think the main merits of Levin’s UICT are in (1) broadening the concept of computability to include the physical, and (2) making a more precise connection between “physical computability” and “mathematical computability” through the use of mutual information.

    For instance, the UICT gives much more depth to Bennett’s statement “\Omega [Chaitin’s number] is in many senses a Cabalistic number. It can be known of, but not known, through human reason” than does the CT.
First, Bennett is (involuntarily?) broadening the CT himself by equating “known through human reason” with “computable”: this identification makes sense in the context of the UICT, but less so with the CT, as it would force us to restrict human reason to Turing machines.
    Second, the UICT says more: it implies that the sequence of digits of \Omega contains only negligible information about any sequence that exists physically. (For, if it did contain much information about such a sequence, since \Omega has a short mathematical description it would provide a way to compress the physical sequence.) The CT does not seem to imply such a strong statement.
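    (To spell out the parenthetical argument in symbols, using the conventions from the post, and hedging that this is my reading rather than Bennett’s or Levin’s: “\Omega contains much information about a physical sequence x” means I(x:\Omega) is large, which by symmetry of information, up to logarithmic terms, means K(x) - K(x|\Omega) is large. In other words, access to \Omega would compress x well below its plain complexity, and since \Omega is mathematically definable, that is exactly the kind of leak the UICT forbids.)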

  3. Pingback: Short and Tweet « Gödel’s Lost Letter and P=NP
