Robots are helping us to understand the brain…

A neurone in action. Thanks to a new protein attached to its membrane, the neurone becomes fluorescent when it is active. Credit: Edward Boyden, MIT

We are still in the dark when it comes to really understanding what goes on in our brain that results in perception, awareness … Yes, we have learnt and understood several basic circuits, but the formation of thoughts and consciousness still eludes us.

Progress has been made by observing how neurones are connected with one another and how they are activated. The problem is that so far we have only been able to catch glimpses of what is really going on.

fMRI gives a very broad idea of the brain areas activated by a stimulus, areas that comprise billions of neurones! Implanted electrodes provide much finer granularity, detecting the activity of a few thousand neurones, but they miss completely what is going on in the rest of the brain; and we know very well that most higher-level processing (leading to consciousness, awareness, thinking) is actually the result of many neurones working together across different areas.

Optogenetics has provided researchers with a new tool, making it possible to tag single neurones with light-sensitive proteins so that one can influence neuronal activity with a beam of light and study the outcome.

In a paper published in Nature Chemical Biology, a team of MIT researchers shares the results obtained by using a robot to identify a protein that can attach to the neurone's membrane and fluoresce when the neurone is active. The level of fluorescence is proportional to the voltage across the neurone's membrane, so it is also possible to gauge the "level" of activity. With this technology it becomes possible to observe the parallel activity of many neurones in many parts of the brain at the same time (some animals, like the zebrafish, have transparent tissue covering their brain, so it is actually possible to look at the brain and detect the fluorescence). Most importantly, the protein identified has a switch-on/switch-off time on a par with the neurone's own dynamics, so that it fluoresces if and only if the neurone's membrane is at a certain voltage (i.e. the neurone is active). Previously used proteins had a much longer switch-on/switch-off time, so what you saw was a huge number of neurones apparently active over a long period, on the order of a second, whilst the actual window to track neuronal activity is on the order of a millisecond. This is clear when you look at the clip of a zebrafish brain, where one sees activity across large parts of the brain with no spatial or temporal resolution, since all the events get superimposed on one another (see clip).
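To see why the indicator's speed matters so much, here is a minimal sketch (the numbers are illustrative, not from the paper): it models the fluorescence trace as a spike train where each spike triggers an exponential decay with the indicator's time constant. With a second-scale decay, nearby spikes smear into one long blob; with a millisecond-scale decay, each spike stays distinct.

```python
import numpy as np

def fluorescence(spike_times_ms, tau_ms, t_ms):
    """Fluorescence as a sum of exponential decays triggered by spikes.

    spike_times_ms: when the neurone fires (ms)
    tau_ms: indicator switch-off time constant (ms)
    t_ms: time axis (ms)
    """
    f = np.zeros_like(t_ms, dtype=float)
    for s in spike_times_ms:
        active = t_ms >= s
        f[active] += np.exp(-(t_ms[active] - s) / tau_ms)
    return f

t = np.arange(0, 500, 0.1)       # 500 ms of recording, 0.1 ms resolution
spikes = [100, 120, 140, 300]    # three spikes 20 ms apart, then one isolated

slow = fluorescence(spikes, tau_ms=1000, t_ms=t)  # second-scale indicator
fast = fluorescence(spikes, tau_ms=1, t_ms=t)     # millisecond-scale indicator

# The slow trace never returns near zero between events: the first three
# spikes merge into one plateau. The fast trace shows four clean peaks.
print("slow trace, minimum between spike groups:", slow[(t > 150) & (t < 290)].min())
print("fast trace, minimum between spike groups:", fast[(t > 150) & (t < 290)].min())
```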

Discovering the right protein to use is a gigantic task. There are millions of proteins, and checking them out one by one would be a never-ending story. The researchers sought the help of a robot to look for the right protein, and indeed the robot delivered. This is actually what attracted my attention, and the reason for this post.

We are starting to use robots in research for … doing research. The robot evaluated hundreds of thousands of proteins in just a few hours, weighing their different characteristics and choosing the best fit for the goal.
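The actual screening procedure is in the paper; as a rough illustration only (the names and thresholds below are mine, not the authors'), the selection logic amounts to scoring each candidate on several properties at once and keeping only those that are good on all of them, rather than optimising a single property:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    brightness: float    # how strong the fluorescence signal is
    speed_ms: float      # switch-on/switch-off time (lower is better)
    localization: float  # how well the protein sits on the membrane (0..1)

def screen(candidates, min_brightness, max_speed_ms, min_localization):
    """Keep only candidates that pass every criterion at once.

    A single-property screen would keep bright-but-slow proteins;
    a multidimensional screen rejects them.
    """
    return [c for c in candidates
            if c.brightness >= min_brightness
            and c.speed_ms <= max_speed_ms
            and c.localization >= min_localization]

pool = [
    Candidate("A", brightness=0.9, speed_ms=800.0, localization=0.8),  # bright but slow
    Candidate("B", brightness=0.7, speed_ms=1.2, localization=0.9),    # fast and usable
    Candidate("C", brightness=0.2, speed_ms=0.9, localization=0.9),    # fast but too dim
]

survivors = screen(pool, min_brightness=0.5, max_speed_ms=2.0, min_localization=0.7)
print([c.name for c in survivors])  # ['B']
```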

The possibility of looking at neuronal activity at circuit level (which also means discovering the circuits involved in specific activities) is a big step forward in trying to understand what really goes on in our brain (notice that the understanding we can derive from a zebrafish brain applies to our brain as well…).

An illustration of a multi-compartment neural network model for deep learning. Left: reconstruction of pyramidal neurons from mouse primary visual cortex, the most prevalent cell type in the cortex. The tree-like form separates the "roots" at the bottom of the cortical neurons, located just where they need to be to receive signals about sensory input, from the "branches" at the top, which are well placed to receive feedback error signals. Right: illustration of simplified pyramidal neuron models. Credit: CIFAR

This is also fuelling other research, such as that carried out at the Canadian Institute For Advanced Research (CIFAR), where a team is comparing the deep learning processes used in Artificial Intelligence with learning activity at the neuronal level.

CIFAR researchers have noticed similarities (but also significant differences) between what goes on in an Artificial Intelligence engine using deep learning techniques and what seems to go on in a pyramidal neurone. The latter combines stimuli arriving from deeper in the brain with feedback received from the cortical areas where it connects to other, higher-level neurones.
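As a toy illustration of that idea (a minimal sketch of a segregated-compartment unit under my own simplifying assumptions, not the CIFAR team's actual model), one can give each unit two compartments: a basal one integrating bottom-up sensory input and an apical one integrating top-down feedback, with the feedback steering how the feedforward weights change:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoCompartmentUnit:
    """Toy pyramidal-like unit with segregated compartments.

    Basal compartment: integrates bottom-up (sensory) input.
    Apical compartment: integrates top-down feedback, used here
    as an error-like signal that gates the weight update.
    """
    def __init__(self, n_inputs, n_feedback, lr=0.1):
        self.w_basal = rng.normal(0, 0.1, n_inputs)
        self.w_apical = rng.normal(0, 0.1, n_feedback)
        self.lr = lr

    def forward(self, x):
        self.x = x
        self.out = np.tanh(self.w_basal @ x)  # basal drive -> output firing
        return self.out

    def feedback(self, fb):
        # The apical potential plays the role of a locally available
        # error signal, replacing an explicit backpropagated gradient.
        apical = self.w_apical @ fb
        # Hebbian-style update of the feedforward weights, gated by
        # the apical signal and the output's local slope.
        self.w_basal += self.lr * apical * (1 - self.out**2) * self.x

unit = TwoCompartmentUnit(n_inputs=4, n_feedback=2)
y = unit.forward(rng.normal(size=4))       # bottom-up sensory drive
unit.feedback(np.array([0.5, -0.2]))       # top-down feedback from higher areas
```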

So far it has been a little like trying to catch a moving target in the dark. With the new technology developed at MIT allowing the detection of neural activity within circuits all over the brain, the light has been turned on. Catching the moving target remains a problem, but at least now researchers are able to see it.
