Today was the first talk in a seminar series organized by MIT’s theory of computation group and entitled “Theory and beyond”. The talk was given by Jeff Lichtman from Harvard, and it was absolutely excellent. So, although I was going to write more about Tsirelson’s proof of his bound on the CHSH game (what else!), I can’t resist jotting down a few notes (and stunning pictures!) from Lichtman’s talk instead.
On the right is a picture of the cover of a very recent issue of the journal Neuron. The illustration refers to a paper by Lichtman and co-authors (first author: Tapia. In the following I will always refer to “Lichtman”, but of course he has co-authors and colleagues, and I apologize for not giving proper credit…this is really not my area!). The image, adapted from a Japanese painting, served as a beautiful illustration of the punchline of Lichtman’s talk.
To give it away, the main biological discovery that Lichtman motivated, and then described, in his talk is the following: neural connections in a mammalian brain start out few (the first tree on the left), quickly grow to be extremely dense (the big middle tree represents the situation at birth), but soon become much sparser (the last tree on the right is two weeks after birth). There are initially few cells to connect to; as the brain grows quickly, neurons stretch their axons to connect to any synapse or other cell they might encounter, resulting in a very dense map of connections; competition between different axons for the same connections eventually results in a much sparser forest. How can this seemingly random process result in an organ as complex, capable and plastic as our brain? What are the principles that govern the initiation and selection of neuronal connections? Assuming the process is not completely genetically determined, what other factors affect it, and how much influence do these factors have on the brain's functioning?
Structure and function
Backing off a little bit, Lichtman presented his research as broadly fitting in the area of “structural biology”. The goal of structural biology is to study biological structure at the molecular level, and understand how that structure relates to function. For most organs the situation is relatively well understood: structure affects function in a clear, one-way manner; an organ’s functioning can be studied by observing its structure at the appropriate scale. For instance, in the lungs the structure is that air canals (bronchioles and alveoli) are in very close contact with blood vessels, enabling the transmission of oxygen from one to the other. From this structure the main principle governing the lungs’ functioning follows relatively straightforwardly. Understanding this picture can also help diagnose and treat diseases, by relating an observed malfunction to the underlying problem in structure.
As Lichtman argued, however, there is one organ for which the situation is very different: the brain. First of all, the brain is incomparably more complex than any other organ. The number of different cell types that constitute it is not known — it can barely be estimated. (For instance, according to Lichtman the retina could have anywhere between 40 and 800 cell types!). Moreover, a key challenge in understanding the brain is the different size scales at which important structure appears. From the atomic level at which synapses need to be studied, to the different structural parts of the brain observable with the naked eye, there are at least six orders of magnitude, and six corresponding levels of structure: how can one build an understanding of the brain by integrating information from all levels?
An even more striking difference is that, in the brain, not only does structure influence function, but events resulting from the brain’s functioning can also affect its structure in return. This is evident once one realizes which mechanisms must be at play when the brain learns. Anyone who has attempted to learn how to ride a bike after the age of 30 is well aware that it is no trivial task. However, the brain of a 7-year-old handles it quite well, and never forgets the new skill!
This apparent plasticity raises a fundamental question: how does information about the world get instantiated in the brain? What are the mechanisms that govern the “backwards” flow of information from function to structure?
The wiring patterns of neurons
Lichtman’s approach to this question consists in studying how the pattern of connections between neurons in the brain is formed. Our understanding of this “wiring” has barely evolved since neurons and their axons were first made observable by something called the “Golgi technique”, back in 1873. This technique enabled the creation of images, such as the one on the right, from which our current picture of the brain’s functioning was formed. As we all know, the basic idea is that neurons connect to each other and exchange information in the form of electrical signals, which propagate along axons, and chemical signals, which cross synapses. This picture leads to a map of the brain such as the ones in the famous drawings by Cajal: a fixed, semi-organized network of connections through which information flows. A standard analogy would be with an electrical circuit, which follows a particular design and in which the connections are typically fixed (only the intensities and their excitatory or inhibitory nature vary).
Lichtman made the point that the real situation is much more complicated. Connections are far denser and more complex than can be represented in a drawing such as Cajal’s (with all due respect!). Maybe more importantly, when one looks closely enough one sees that in practice most wirings seem completely arbitrary and wasteful. One might naturally expect that the structure of the wirings is at least somewhat influenced by genetic material, and hence has some similarity from one individual of the same species to the next — or at least, from the same individual’s left ear neurons to her right ear neurons. And indeed, this is the case for many species: for instance, in insects or fish the pattern of connections is mostly governed by genetic information. However, in more complex animals, such as mammals, no such patterns can reliably be found! In fact, not only do the wirings appear arbitrary, but axons frequently run around in energy-wasting loops for no apparent reason. How could such a picture have emerged through evolution? What mechanisms govern the forming of neural connections in the brain?
Mapping the connectome
Lichtman is part of a major effort to understand these wiring patterns by developing techniques to efficiently and quickly map the connections between neurons in any given brain — a “genome project” of sorts, for the brain. These techniques rely on the combination of detailed neuro-imagery techniques with advanced computerized image recognition. A scanner will provide detailed images of extremely thin 2D slices of a particular 3D region of the brain, and the challenge is to connect these pictures together into a reliable 3D image from which the wirings between different neurons can be inferred (if you think this should be easy, check out the images…).
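The hard step Lichtman described is the linking: deciding which bits of tissue in consecutive 2D slices belong to the same neuron, so that a “wire” can be traced through the 3D stack. The real pipeline does this with machine-learning segmentation on nanometre-scale images; as a purely illustrative sketch (my own toy example, not Lichtman’s method), here is the linking step reduced to its combinatorial core, with tiny hand-made binary slices standing in for segmented images:

```python
from collections import deque

# Three consecutive 2D "slices" (tiny binary segmentations); a 1 marks tissue
# belonging to some axon cross-section. Real slices are nanometre-scale images.
slices = [
    [[1, 1, 0, 0],
     [0, 0, 0, 1]],
    [[0, 1, 0, 0],
     [0, 0, 1, 1]],
    [[0, 1, 1, 0],
     [0, 0, 0, 1]],
]

def connected_components_3d(vol):
    """Group foreground voxels that touch within a slice or line up across
    adjacent slices -- the 3D linking that turns a stack of 2D images
    into candidate "wires"."""
    voxels = {(z, r, c) for z, sl in enumerate(vol)
              for r, row in enumerate(sl)
              for c, v in enumerate(row) if v}
    seen, components = set(), []
    for start in sorted(voxels):
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:  # breadth-first search over 6-connected neighbours
            z, r, c = queue.popleft()
            comp.append((z, r, c))
            for dz, dr, dc in [(0, 0, 1), (0, 0, -1), (0, 1, 0),
                               (0, -1, 0), (1, 0, 0), (-1, 0, 0)]:
                nb = (z + dz, r + dr, c + dc)
                if nb in voxels and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        components.append(comp)
    return components

print(len(connected_components_3d(slices)))  # → 2 distinct "wires" in the stack
```

Even in this toy, ambiguity lurks: shift one slice by a pixel and the components merge or split, which hints at why reconstructing real tissue, with thousands of tangled processes per slice, is so hard to automate.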
The sheer quantity of data to be handled is mind-boggling; maybe this is why Lichtman seemed pretty happy to be giving a talk in front of a packed audience of computer scientists! He admitted that progress so far has been slow, and only very small amounts of data collected. However, he thinks we are on the verge of a revolution: as both acquisition and data processing techniques quickly improve, it should become possible to handle more and more data far more quickly and at a far lower cost (less work for the undergrads…). Nevertheless, the main question remains: how to make sense of the data? How can one extract meaningful information from a picture such as the one on the left (multiplied a billion times)?
Lichtman’s key insight so far comes from looking at how the connections develop in the first few months of the life of an embryo, and this is how he was led to the picture appearing at the start of the post. By studying the formation of neuronal connections, and more importantly their destruction, he arrived at the following first hint of a picture of how function influences structure in the brain. The idea would be that, initially, neurons throw out their axons all around, attempting to connect to whatever other neuron or muscle cell they can attach to. This results in a very dense network of connections. A few weeks after birth, however, many of these connections have disappeared. Lichtman’s conviction is that which connections remain and which disappear is at least in part governed by competition and experience. By looking at small clusters of cells Lichtman observed that the density of connections could be explained by a simple linear arrangement: stronger connections exist between cells that were simultaneously created or stimulated. This suggests the possibility of a process of reinforcement — maybe the germ of a structural explanation for the ease of learning experienced by children, in spite of the obviously monstrous difficulty of the task that we painfully discover as adults?
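The dense-then-sparse dynamic is easy to caricature in code. The following is my own minimal sketch, assuming a deliberately crude rule (not Lichtman’s actual model): every axon initially contacts every target, and at each target the strongest input is reinforced while the weaker ones decay until they are eliminated — competition pruning a dense forest down to a sparse one:

```python
import random

random.seed(0)
N_AXONS, N_TARGETS = 10, 10

# Dense initial wiring: every axon contacts every target with a random strength,
# mimicking the embryonic "connect to whatever you can reach" phase.
weights = {(a, t): random.random()
           for a in range(N_AXONS) for t in range(N_TARGETS)}

def prune_step(weights, rate=0.2, death=0.05):
    """One round of competition: at each target the strongest input is
    reinforced, all weaker inputs decay, and synapses below `death` die."""
    new = {}
    for t in range(N_TARGETS):
        inputs = {a: w for (a, tt), w in weights.items() if tt == t}
        if not inputs:
            continue
        winner = max(inputs, key=inputs.get)
        for a, w in inputs.items():
            w = w * (1 + rate) if a == winner else w * (1 - rate)
            if w > death:
                new[(a, t)] = w
    return new

for _ in range(30):
    weights = prune_step(weights)

# The 100 initial connections collapse to one surviving axon per target.
print(len(weights))  # → 10
```

Under this rule the winner at each target only ever grows while its rivals only ever shrink, so the outcome is deterministic sparsification; the interesting (and open) biological question is what plays the role of the reinforcement signal — simultaneous stimulation, in Lichtman’s observations.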
These are extrapolations though, and the research is only in its beginnings (see this nice piece from the Harvard news office, which does much better justice to Lichtman’s idea than I could, for more). The idea of studying the factors influencing the formation, and destruction, of neuronal connections in the brain is compelling: as I see it, this seems to be one of many cases in which biologists are now trying to move beyond the “genetic determinism” paradigm that dominated the past two decades. With the realization that genes, and evolution over large time-scales, only determine our body’s functioning to a limited extent comes the question of what other mechanisms influence this function. This is probably one of the biggest challenges currently facing biology, and it is quite exciting to see that researchers are taking it head on!