Disruptive Technologies in human augmentation impacting beyond 2040 III

Dream reading and recording

Sleep waking robot, designed to enact your dreams. It captures signs of what’s going on in your brain, together with your eye movements, as you sleep, makes some guesses and re-enacts them. Credit: Fernando Orellana and Brendan Burns, Union College, Schenectady, NY

If you google “dream reading” you’ll get plenty of pointers to sites that supposedly help in understanding the “magic” revealed by dreams. That goes back centuries, indeed millennia, to the times of soothsayers, who are still flourishing today. Freud moved dream interpretation to a new level with psychology. Both Freud and dream readers, however, relied, and still rely, on you telling them what you dreamt.

Some scientists are looking at dreams in a different way: as brain activity that can be tracked and eventually understood, in principle recording dreams that you will never remember or be aware of. This falls into the more general endeavour of detecting brain activity and making sense of it. Ever more powerful technologies are -and will be- able to capture brain activity, mostly in terms of electrical activity but also in terms of blood perfusion, chemical reactions and gene activation. All these growing data are now processed by artificial intelligence algorithms that take advantage of deep learning approaches to correlate -and learn from- previous observations. This is crucial since it is now an accepted fact that although in a broad sense all brains (even those of other animals) are alike in the way they work, each one is different. Hence seeing a cat results in a specific distribution of activity in my brain that differs from the one activated in your brain when seeing the very same cat.
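To make that last point concrete, here is a toy sketch. Everything in it is synthetic -random vectors stand in for brain activity, and the “decoder” is a deliberately naive nearest-centroid classifier, not any real brain-reading method- but it shows why a decoder must learn each brain individually:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_brain(n_channels=64):
    """Each 'brain' encodes the same concepts with its own random activity pattern."""
    return {concept: rng.normal(size=n_channels) for concept in ("cat", "dog")}

def record(brain, concept, noise=0.3):
    """A noisy 'recording' of the activity evoked by seeing a concept."""
    return brain[concept] + rng.normal(scale=noise, size=len(brain[concept]))

def train_decoder(brain, trials=20):
    """Learn a per-concept average pattern (centroid) from ONE brain's recordings."""
    return {c: np.mean([record(brain, c) for _ in range(trials)], axis=0)
            for c in brain}

def decode(decoder, activity):
    """Report the concept whose learned centroid is nearest to the activity."""
    return min(decoder, key=lambda c: float(np.linalg.norm(activity - decoder[c])))

my_brain, your_brain = make_brain(), make_brain()
my_decoder = train_decoder(my_brain)

print(decode(my_decoder, record(my_brain, "cat")))    # reads my brain: 'cat'
# The very same cat evokes an unrelated pattern in your brain, so a decoder
# trained on my recordings gives essentially random answers on yours:
print(decode(my_decoder, record(your_brain, "cat")))
```

The trained centroids separate my “cat” from my “dog” reliably, yet carry no information about your brain: that is the correlation-and-learning step the deep learning approaches above have to redo for every individual.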

I am seeing continuous progress in this area of decoding brain activity into meanings, although we are still quite far from pinpointing what is going on. According to the Imperial College foresight study, by 2040 we might have a machine able to read our dreams and record them.

As shown in the photo, two researchers at Union College, NY, have created a robot that is fed with data captured by sensors on a sleeping person. These data are processed by the robotic brain, which guides the robot to enact those (supposed) dreams. Something like replicating the rapid eye movements occurring in dreams is easy; other, more semantically connected dreams are more difficult and beyond current technological possibility.

This connection between ourselves and a robot, with the robot digesting and mimicking our dreams, reminds me of DeepDream, a program developed by Google to look inside the neural networks of a computer as it processes images. The program (see the clip) visualises the processing, giving rise to images that look somewhat like a robot dream! Will a robot be able to dream? Possibly. Will it be aware that it is dreaming? Maybe. Will it enjoy the dreaming? Take your pick!

Cognitive prosthetics

Graphic rendering of the DARPA challenge: to connect 1 million neurones to an external computer monitoring their activities. Image credit: Paradromics

Significant progress has been made in this last decade in interfacing prosthetics to the brain, using a few electrodes (on the skull or implanted) to pick up the brain’s electrical activity and use it to control an external prosthetic, like a robotic arm or a robotic wheelchair. What usually happens is that the person “learns” to control the prosthetic by engaging in some specific thought activity. Hence it is not the prosthetic that learns to read that person’s thoughts.
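That asymmetry -a fixed machine, an adapting user- can be sketched in a few lines. All the names and numbers below are made up for illustration; no real BCI pipeline is implied:

```python
def decoder(band_power, threshold=0.5):
    """Fixed mapping from a single EEG-like signal feature to a command.
    The machine never changes: it is the user who must learn to produce
    activity that this rule responds to."""
    return "move arm" if band_power > threshold else "rest"

def practice(sessions):
    """Crude model of the user's learning: with practice the user evokes
    a stronger, more repeatable signal (capped at 1.0)."""
    return min(1.0, 0.1 + 0.1 * sessions)

print(decoder(practice(1)))   # early on the evoked signal is too weak: 'rest'
print(decoder(practice(10)))  # after training it clears the threshold: 'move arm'
```

The decoder here is the same before and after training; only the user’s side of the loop has improved, which is exactly how current prosthetic control is usually achieved.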

The more electrical activity can be picked up, and the more precise its location, the easier it is to control the prosthetic, and the more complex the activities that can be orchestrated. That is the reason for the DARPA challenge, Neural Engineering System Design, aiming at a brain implant able to pick up electrical activity from a million neurones.

Researchers are at work to win the challenge and I am quite sure they will. However, creating a seamless cognitive prosthetic is another story altogether.

Here the crucial point is “seamless”. In a way we already have cognitive prosthetics today: our smartphone is an extremely effective cognitive prosthetic. If I do not know something, a few clicks on my smartphone and the world’s knowledge is at my fingertips; similarly for performing a variety of tasks: translating into another language, navigating a foreign city, doing math…

The Imperial College foresight study predicts the existence of seamless cognitive prosthetics by 2040, and I am not sure this can really be achieved in its full form, that is, increasing our brain’s cognitive capabilities through some prosthetic: I do not see it as feasible, in this century, to plug a chip into a brain and boost its cognitive capability at will. However, it is not black or white; there is plenty of grey in between, and that is the area where improvements in cognitive prosthetics are likely to take place in the coming 20 years.

As I noted, we already have a cognitive prosthetic in our smartphone (though that is not fundamentally different from using a book in a library, just a billion times more efficient). We can easily foresee the smartphone shrinking to the point of becoming embedded in an electronic contact lens… That would provide a cognitive boost, but it would still just be stretching, in terms of effectiveness, what we have been doing for centuries: accessing a knowledge repository through our senses.

There are some experiments being done in actually “boosting” the cognitive capability of the brain. I recently posted about the trials performed at the Wake Forest Baptist Medical Centre indicating that it is possible to increase, through electrical stimulation, the learning capabilities of a rat’s brain. Other trials are also under way aiming at discovering ways to boost our brain’s cognitive capabilities.

Most of these trials are exploring the use of Deep Brain Stimulation, and it is likely that the increased understanding we will gain in the next two decades from flagship projects like the Human Brain Project and the Human Connectome Project will result in better ways to boost brain capabilities. At the same time, the increase of knowledge is likely to widen the gap between what we can learn (or even understand!), even by boosting our brain through stimulation and genetic modification, and what becomes available.

It becomes more and more likely that in the future we will have to rely on a distributed intelligence, and that we will leverage it through a symbiotic relationship with machines and, ultimately, with the environment. This might seem like science fiction, but to me it is the only way to cope with the avalanche of knowledge being created. This has been pointed out as the way forward by the World Economic Forum at its 2018 meeting.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.