Tech for Autonomous Systems – Advanced Interaction Capabilities VIII

Will our brains be directly connected to the internet? And will we realise it? Cartoon Credit: Jef Mallet

Brain Computer Interfaces

Our brain, like all brains, processes data arriving from sensors spread throughout its owner's body. The processing may result in the activation of parts of the body (such as muscles, to talk or to flee) or may simply result in a "change of state" of the brain (memories, learning; explicit and implicit, at macro or micro level).

Until recently, the only way to get data into the brain was to stimulate the senses and let them relay the information.

In the last fifty years, and even more so in the last ten, researchers have found ways to read the brain and to stimulate it directly. This is the result of a growing understanding of the workings of the brain at the physical level (electrical and chemical interactions among neurons) and at the architectural/functional level (neural circuits associated with specific processing activities). It has been made possible by technology evolution in the areas of:

  • Neuroimaging (fMRI – functional Magnetic Resonance Imaging, PET – Positron Emission Tomography, CT – Computed Tomography, MEG – Magnetoencephalography, EEG – Electroencephalography, NIRS – Near Infrared Spectroscopy)
  • Sensor array probes
  • Signal processing (including Deep Learning)
  • Optogenetics
  • Neuromorphic Computing
A brain-implantable electrode array integrating ultrathin, flexible silicon transistors, capable of sampling large areas of the brain with minimal wiring. Credit: Jonathan Viventi, Polytechnic Institute at New York University

Significant progress is expected in the coming decades, both in the understanding of brain circuitry and its dynamics (part of a virtuous circle: better technology yields better understanding, which in turn leads to better use of technology and further technology evolution) and in the creation of seamless interfaces.

Today, the more accurately we can read what is going on in the brain, the better we can interface with it; in turn, this requires more cumbersome (and often invasive) physical interfaces.

As discussed in the sensors evolution post, there is no silver bullet in sight, even from a theoretical point of view: invasive probes, such as sensor arrays, eventually fail because of the reorganisation taking place in the brain. Any fixed positioning becomes useless over time, since processing shifts from place to place, and it is obviously impossible to trace the activity of each and every neuron.

Neuromorphic chip designed by the Heidelberg group of Karlheinz Meier. It features 384 neurons and 100,000 synapses, and operates at approximately 100,000 times biological real time. Credit: Kirchhoff Institute for Physics
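
To give a sense of what the "neurons" on such a chip compute, here is a minimal leaky integrate-and-fire neuron in software. This is purely illustrative: the constants are textbook-style values I have chosen for the sketch, not parameters of the Heidelberg chip, which implements this kind of dynamics in analogue hardware rather than in a simulation loop.

```python
# Minimal leaky integrate-and-fire neuron, the class of model that
# neuromorphic chips emulate. All constants are illustrative.
dt = 1e-4          # time step: 0.1 ms of biological time
tau = 20e-3        # membrane time constant (s)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
i_input = 60.0     # constant input current (arbitrary units)

v = v_rest
spikes = []
for step in range(int(0.1 / dt)):           # simulate 100 ms
    # leak toward rest, plus the driving input
    v += dt / tau * (v_rest - v) + dt * i_input
    if v >= v_thresh:                       # threshold crossed: emit a spike
        spikes.append(step * dt)
        v = v_reset                         # and reset the membrane

print(f"{len(spikes)} spikes in 100 ms of biological time")
```

On a chip running 100,000 times faster than biology, those 100 ms of neural activity would unfold in about a microsecond of wall-clock time, which is what makes such hardware attractive for exploring brain dynamics.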

Much hope is placed on signal processing to filter interesting activity from noise or from irrelevant signals. This, however, requires a huge amount of data, which in turn means huge processing capability. Neuromorphic computing may be an interesting possibility to explore. At the same time, collecting the data remains tricky, even if one can do without accurate positioning of the probes.
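The kind of filtering involved can be illustrated with a small sketch: a weak 10 Hz (alpha-band) oscillation, standing in for the "interesting activity", is buried in noise and recovered with a band-pass filter. The data is synthetic and all parameters (sampling rate, band edges, filter order) are illustrative choices, not a real BCI pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

# Synthetic "EEG": a weak 10 Hz alpha-band oscillation buried in noise.
fs = 250                      # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)  # 10 seconds of signal
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + 5.0 * rng.standard_normal(t.size)

# Band-pass filter for the alpha band (8-12 Hz), a common first step
# before feature extraction in EEG-based interfaces.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)

# Compare how much of the signal power sits in the alpha band
# before and after filtering.
f, p_raw = welch(eeg, fs=fs, nperseg=fs)
_, p_alpha = welch(alpha, fs=fs, nperseg=fs)
band = (f >= 8) & (f <= 12)
print(f"alpha-band power fraction, raw:      {p_raw[band].sum() / p_raw.sum():.2f}")
print(f"alpha-band power fraction, filtered: {p_alpha[band].sum() / p_alpha.sum():.2f}")
```

Real scalp recordings are far messier (artifacts from eye blinks, muscle activity, electrode drift), which is why the post stresses both the data volumes and the processing capability required.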

Invasive probes, although much more accurate, are not viable in the mass market; only people with specific needs, such as some forms of locked-in syndrome, would be willing to accept them. External probes, like electrodes on the skull, are more viable, although much less informative.

Evolution is being driven by three classes of demand:

  • Medical (for patients with various degrees of disabilities)
  • Military (to maximise efficiency in communications)
  • Gaming

As has happened in other areas, military-driven evolution may spill over into the civil "market", first in highly demanding areas and then into the mass market.

BCI is an area where technology evolution still depends on scientific progress (in understanding the brain), and this is now actively pursued by the Human Brain Project, funded by the EU, and in the US by the NIH-funded BRAIN Initiative and the Human Connectome Project.

Industry is also looking at BCI as something that is maturing to the point of generating business, and a few initiatives have been announced in the private sector, such as Neuralink, an initiative launched by Elon Musk to develop commercially viable BCIs in the coming years.

BCI can become an important means of communication in symbiotic autonomous systems involving humans, but this is unlikely to happen at a mass-market level in the next 20 years.

What can be expected is the adoption of more and more sophisticated technologies in gaming, spilling over into entertainment and then into augmented reality. At that point, the step toward more widespread adoption for interaction with a variety of systems, including appliances, will be more a matter of cultural acceptance.

Clearly, the availability of a seamless BCI would make communications with the Internet and with devices much more effective. At the same time, it will create significant ethical and social issues, not to mention security concerns over potential hacking of the interface.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and he was then head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.