Connectomics: mapping the brain

Today was the first talk in a seminar series organized by MIT’s theory of computation group and entitled “Theory and beyond”. The talk was given by Jeff Lichtman from Harvard, and it was absolutely excellent. So, although I was going to write more about Tsirelson’s proof of his bound on the CHSH game (what else!), I can’t resist jotting down a few notes (and stunning pictures!) from Lichtman’s talk instead.

On the right is a picture of the cover of a very recent issue of the journal Neuron. The illustration refers to a paper by Lichtman and co-authors (first author: Tapia; in the following I will always refer to “Lichtman”, but of course he has co-authors and colleagues, and I apologize for not giving proper credit… this is really not my area!). The image, adapted from a Japanese painting, served as a beautiful illustration of the punchline of Lichtman’s talk.

To give it away, the main biological discovery that Lichtman motivated, and then described, in his talk is the following: neural connections in a mammalian brain start out few (the first tree on the left), quickly grow to be extremely dense (the big middle tree represents the situation at birth), but soon become much sparser (the last tree on the right is two weeks after birth). There are initially few cells to connect to; as the brain grows quickly, neurons stretch their axons to form synapses with whatever other cells they encounter, resulting in a very dense map of connections; competition between different axons for the same targets eventually leaves a much sparser forest. How can this seemingly random process result in an organ as complex, capable and plastic as our brain? What are the principles that govern the initiation and selection of neuronal connections? Assuming the process is not completely genetically determined, what other factors affect it, and how much influence do these factors have on the brain’s functioning?

Structure and function

Backing up a little, Lichtman presented his research as broadly fitting in the area of “structural biology”. The goal of structural biology is to study biological structure at the molecular level, and understand how that structure relates to function. For most organs the situation is relatively well understood: structure determines function in a clear, one-way manner, and an organ’s functioning can be studied by observing its structure at the appropriate scale. For instance, in the lungs the air canals (bronchioles and alveoli) are in very close contact with blood vessels, enabling the transmission of oxygen from one to the other. From this structure the main principle governing the lungs’ functioning follows relatively straightforwardly. Understanding this picture can also help diagnose and treat diseases, by relating an observed malfunction to the underlying problem in structure.

As Lichtman argued, however, there is one organ for which the situation is very different: the brain. First of all, the brain is incomparably more complex than any other organ. The number of different cell types that constitute it is not known — it can barely be estimated. (For instance, according to Lichtman the retina could have anywhere between 40 and 800 cell types!) Moreover, a key challenge in understanding the brain is the range of size scales at which important structure appears. From the atomic level at which synapses need to be studied, to the structural parts of the brain observable with the naked eye, there are at least six orders of magnitude, and six corresponding levels of structure: how can one build an understanding of the brain by integrating information from all these levels?

An even more striking difference is that, in the brain, not only does structure influence function, but events resulting from the brain’s functioning can also affect its structure in return. This is evident once one realizes which mechanisms must be at play when the brain learns. Anyone who has attempted to learn to ride a bike after the age of 30 is well aware that it is no trivial task. However, the brain of a 7-year-old handles it quite well — and never forgets the new skill!

This apparent plasticity raises a fundamental question: how does information about the world get instantiated in the brain? What are the mechanisms that govern the “backwards” flow of information from function to structure?

The wiring patterns of neurons

Lichtman’s approach to this question consists in studying how the pattern of connections between neurons in the brain is formed. Our understanding of this “wiring” has barely evolved since neurons and their axons were first made observable by the “Golgi technique”, back in 1873. This technique enabled the creation of images, such as the one on the right, from which our current picture of the brain’s functioning was formed. As we all know, the basic idea is that neurons connect to each other and exchange information in the form of electrical signals, which propagate along axons, and chemical signals, which cross synapses. This picture leads to a map of the brain such as the ones in the famous drawings by Cajal: a fixed, semi-organized network of connections through which information flows. A standard analogy would be with an electrical circuit, which follows a particular design and in which the connections are typically fixed (only the signal intensities and their excitatory or inhibitory nature vary).

A drawing of a neuron network by Cajal

Lichtman’s point was that the real situation is much more complicated. Connections are far denser and more complex than can be represented in a drawing such as Cajal’s (with all due respect!). Maybe more importantly, when one looks closely enough one sees that in practice most wirings seem completely arbitrary and wasteful. One might naturally expect the structure of the wiring to be at least somewhat influenced by genetic material, and hence to show some similarity from one individual of a species to the next — or at least, from the same individual’s left-ear neurons to her right-ear neurons. And indeed, this is the case for many species: in insects or fish, for instance, the pattern of connections is mostly governed by genetic information. However, in more complex animals, such as mammals, no such patterns can reliably be found! In fact, not only do the wirings appear arbitrary, but axons frequently run around in energy-wasting loops for no apparent reason. How could such a picture have emerged through evolution? What mechanisms govern the forming of neural connections in the brain?

Mapping the connectome

A colored 2D slice. Small roundish regions are axon cross-sections

Lichtman is part of a major effort to understand these wiring patterns by developing techniques to efficiently and quickly map the connections between neurons in any given brain — a “genome project” of sorts, for the brain. These techniques combine detailed neuroimaging with advanced computerized image recognition. A scanner provides detailed images of extremely thin 2D slices of a particular 3D region of the brain, and the challenge is to connect these pictures together into a reliable 3D image from which the wiring between different neurons can be inferred (if you think this should be easy, check out the images…).

A 3D map of neurons and their axons reconstructed from 2D slices such as the one above
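
To caricature just the last step of such a pipeline (and sweeping the genuinely hard segmentation and alignment problems under the rug): once every 2D slice has been segmented, reconstructing 3D structures amounts to tracking components across consecutive slices. A minimal sketch on synthetic data, using numpy and scipy; everything here, sizes included, is made up:

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for a stack of already-segmented 2D slices: a 3D boolean
# array in which True marks voxels classified as "axon". In reality each
# slice is a huge electron-microscopy image, and the segmentation and
# alignment steps are the hard, error-prone parts.
rng = np.random.default_rng(0)
stack = rng.random((50, 200, 200)) < 0.01

# Voxels that touch across neighbouring slices are assumed to belong to
# the same structure: 3D connected components give candidate axons.
labels, n_structures = ndimage.label(stack)
volumes = ndimage.sum(stack, labels, index=range(1, n_structures + 1))
print(f"{n_structures} candidate 3D structures,",
      f"largest spans {int(volumes.max())} voxels")
```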

The sheer quantity of data to be handled is mind-boggling; maybe this is why Lichtman seemed pretty happy to be giving a talk in front of a packed audience of computer scientists! He admitted that progress so far has been slow, and that only very small amounts of data have been collected. However, he thinks we are on the verge of a revolution: as both acquisition and data-processing techniques quickly improve, it should become possible to handle more and more data, far more quickly and at far lower cost (less work for the undergrads…). Nevertheless, the main question remains: how to make sense of the data? How can one extract meaningful information from a picture such as the one on the left (multiplied a billion times)?

Lichtman’s key insight so far comes from looking at how the connections develop in the first few months of an embryo’s life, and this is how he was led to the picture appearing at the start of the post. By studying the formation of neuronal connections, and more importantly their destruction, he arrived at the following first hint of a picture of how function influences structure in the brain. The idea is that, initially, neurons throw out their axons all around, attempting to connect to whatever other neuron or muscle cell they can attach to. This results in a very dense network of connections. A few weeks after birth, however, many of these connections have disappeared. Lichtman’s conviction is that which connections remain and which disappear is at least in part governed by competition and experience. By looking at small clusters of cells, Lichtman observed that the density of connections could be explained by a simple linear arrangement: stronger connections exist between cells that were simultaneously created or stimulated. This suggests the possibility of a process of reinforcement — maybe the germ of a structural explanation for the ease of learning experienced by children, in spite of the obviously monstrous difficulty of the task that we painfully discover as adults?
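
To illustrate the flavor of this mechanism, here is a toy simulation (my own caricature, to be clear, not Lichtman’s model): connections start out dense, pairs of cells that are stimulated together are reinforced, every connection slowly decays, and whatever ends up weak is pruned. The surviving wiring reflects the pattern of stimulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_groups, steps = 30, 5, 1500

# Cells that were "simultaneously created or stimulated" share a group.
axon_group = rng.integers(n_groups, size=n)
target_group = rng.integers(n_groups, size=n)

# Start dense: every axon weakly connected to every target.
w = np.full((n, n), 0.5)

for _ in range(steps):
    g = rng.integers(n_groups)  # one group of cells fires together
    co_active = np.outer(axon_group == g, target_group == g)
    w += 0.02 * co_active       # reinforce co-active connections...
    w -= 0.002                  # ...while every connection slowly decays
    np.clip(w, 0.0, 1.0, out=w)

surviving = w > 0.5             # "competition": weak connections are pruned
within = axon_group[:, None] == target_group[None, :]
print(f"dense start: {w.size} connections, sparse end: {surviving.sum()}")
print("survivors linking co-stimulated cells:",
      f"{(surviving & within).sum() / surviving.sum():.0%}")
```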

These are extrapolations though, and the research is only beginning (see this nice piece from the Harvard news office, which does much better justice to Lichtman’s ideas than I could, for more). The idea of studying the factors influencing the formation, and destruction, of neuronal connections in the brain is compelling: as I see it, this seems to be one of many cases in which biologists are now trying to move beyond the “genetic determinism” paradigm that dominated the past two decades. With the realization that genes, and evolution over large time-scales, only determine our body’s functioning to a limited extent comes the question of what other mechanisms influence this functioning. This is probably one of the biggest challenges currently facing biology, and it is quite exciting to see that researchers are taking it head-on!


6 Responses to Connectomics: mapping the brain

  1. Henry Yuen says:

    Great post! Although I couldn’t make it to the talk, your post is making me desperately wish I had.

    Now the real question is… what can theoretical computer science say about the brain? Is there a model of computation that: 1) somehow captures the nature of computation in the brain, 2) is tractable to analyze, yet 3) gives rise to non-trivial insights into how our brains learn and process information?

    Besides the computing-model question, there are obvious algorithmic aspects to processing the “Big Data”, if you will, coming out of connectomics. It is said that Kolmogorov came up with a notion of expander graphs when thinking about brain structure. Now that we might have these detailed connectome maps, it’d be fun to study their graph properties.

    • Thomas says:

      I guess the difficulty is that we’re not quite close to understanding what goes on in the brain yet… The first problem is to make sense of the massive amount of data that Lichtman and co-authors are just beginning to collect (see here to get started!). Different individuals of the same species will give rise to completely different maps, or so it seems at least. How to extract a simple set of mechanisms that could explain the maps obtained, both in their diversity and in their common features? Run PCA??

      Maybe this raises an interesting theoretical question though. Suppose we are given many families of graphs G^i = \{G^i_t\}_{t>0}, where t is a “time” parameter; for simplicity let’s say it is discrete. Suppose there exists a (possibly probabilistic) evolution procedure f such that for every i,t, G^i_t = f(G^i_{t-1}). Can you find (an approximation of) f?

      We probably need to put restrictions on f. For instance, instead of having many G^i we could imagine being given a single infinite graph G_t (for all t>0), and assume that f is local in some sense… I can’t make this quite precise right now, but one can imagine many possibilities. Presumably if such an f exists, and G is infinite, the problem is overdetermined and we should be able to find it, even assuming noise. Of course this is way too computationally intensive, so what would you do?
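
      One naive way to make this concrete: if we assume f acts pair-by-pair and depends only on local statistics of each pair, then recovering an approximation of f becomes a plain supervised-learning problem on consecutive snapshots. A quick sketch, with a made-up pruning rule as ground truth (an edge survives only if its endpoints share a common neighbour):

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def pair_features(adj, i, j):
          # Local statistics of the pair (i, j): endpoint degrees,
          # number of common neighbours, and current edge status.
          return [adj[i].sum(), adj[j].sum(), int(adj[i] @ adj[j]), adj[i, j]]

      def transitions(snapshots):
          # One labelled example per pair, per consecutive snapshot pair.
          for prev, curr in zip(snapshots, snapshots[1:]):
              n = len(prev)
              for i in range(n):
                  for j in range(i + 1, n):
                      yield pair_features(prev, i, j), curr[i, j]

      # Synthetic ground truth: prune edges whose endpoints share no common neighbour.
      rng = np.random.default_rng(0)
      n = 40
      adj = np.triu((rng.random((n, n)) < 0.15).astype(int), 1)
      snapshots = [adj | adj.T]
      for _ in range(4):
          prev = snapshots[-1]
          snapshots.append(prev & ((prev @ prev) > 0))

      # Fit on the first transitions, test on the held-out last one.
      X, y = map(list, zip(*transitions(snapshots[:-1])))
      model = LogisticRegression(max_iter=1000).fit(X, y)
      X_test, y_test = map(list, zip(*transitions(snapshots[-2:])))
      print("accuracy on held-out transition:", model.score(X_test, y_test))
      ```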

  2. Anthony says:

    Great post!

    A natural approach to this problem is certainly to represent this network of neurons as a graph. I guess people must be trying to model the brain as a random graph. Did Lichtman mention anything along these lines?

    • Thomas says:

      He didn’t explicitly mention any modeling by random graphs, but it’s an interesting suggestion. The question is how to model the evolution of the connectome through a mammalian embryo’s developmental stages, so the mechanisms that govern connections and disconnections should take into account some kind of feedback from the environment, as well as genetic or other internal mechanisms: a “preferential dis-attachment” model :-)?

  3. Anthony says:

    I would be very surprised if nobody had looked at this before. Surely not every graph theorist is trying to model social networks?

    • Thomas says:

      Here is a decent Nature survey I found: http://ccn.ucla.edu/wiki/images/6/6a/Bullmore_Sporns_NatRevNeuro_2009.pdf. It is maybe a bit old (2009), but it contains a ton of references. It describes the structure of large graphs obtained from brain networks in various ways, and discusses whether some of the typical properties we expect to observe in large random graphs (small worlds, power-law distribution of the degrees, etc.) also appear in these graphs.
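
      Checking those properties on a given graph is straightforward once the data is in hand; here is a minimal networkx sketch, run on a synthetic Watts–Strogatz graph as a stand-in for a real connectome:

      ```python
      import networkx as nx

      # Stand-in for a real connectome; actual data would be loaded from an edge list.
      G = nx.connected_watts_strogatz_graph(n=500, k=10, p=0.1, seed=3)

      # Small-world signature: clustering much higher than in a random graph
      # of the same density, with comparably short average path lengths.
      R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=3)
      print("clustering:", nx.average_clustering(G),
            "vs random:", nx.average_clustering(R))
      print("average shortest path:", nx.average_shortest_path_length(G))

      # Degree distribution: a heavy (power-law-ish) tail would show up as a
      # maximum degree far above the median; a Watts-Strogatz graph has none.
      degrees = sorted((d for _, d in G.degree()), reverse=True)
      print("max degree:", degrees[0], "median degree:", degrees[len(degrees) // 2])
      ```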

      The main focus, though, is on how structure in the brain affects function, and not the other way round, which is what Lichtman argued was the most interesting direction… Some of the comments in the survey are even at odds with Lichtman’s talk: for instance, the survey mentions studies arguing that axons are organized so as to minimize energy use; Lichtman clearly argued this was (surprisingly) *not* the case. I think that as imaging techniques progress rapidly, a lot of our assumptions about the brain’s organization are going to be challenged.

      The survey does have a nice paragraph or two (the second column of p. 192, and also the next page) in which the authors discuss the possibility of a dynamic interplay of structure and function, with a reference to this paper: http://arxiv.org/pdf/0706.2602.pdf, which does seem to study a model for the evolution of a neuronal network. Looking at citations to the latter gives more… So, indeed, there is work on the topic! I am quite curious to read more about it now 🙂
