Transhumanism: Increasing Human Thought Capabilities V

Our brain is a massively distributed system -sort of- processing sensory data as well as data generated autonomously (thoughts, dreams…). The image shows the hypothetical involvement of different brain areas in some processing activities, in R the actual activities in the brain, and in Z the difference. Notice that these are very “gross” approximations of the parallel computations engaged in an activity. What if we could defer one or more of these brain computations to an external processing device (an implanted chip, a processing service in the Cloud or in a robot) to increase the overall efficiency? Image credit: The Brain as a Distributed Intelligent Processing System: An EEG Study. Armando Freitas da Rocha, Fábio Theoto Rocha, Eduardo Massad

Using the brain as a co-processor 

As soon as we started having computers, researchers tried to connect them together to harvest more processing power. The Internet is the result of this idea: connecting computers (with their processing and data) in a seamless web that can be experienced as a single entity.

Actually, the idea of distributed processing also took the other direction! Within a single computer several processing chips, each with specific processing prowess, can be tied together to perform a computation. Nowadays computers have GPUs alongside CPUs, supercomputers have hundreds of thousands of CPUs/GPUs (the latest supercomputer, Summit, has 101,376 IBM processors and 24,576 NVIDIA accelerators), and within a single processing chip there are now many “processing cores” – hundreds of them and growing (Adapteva is proposing a multi-core architecture supporting a billion processing units!).
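As a toy illustration of this idea (not of the specific architectures named above), the sketch below splits a single computation across a handful of processing cores and then merges the partial results; the data, chunking scheme and worker count are arbitrary assumptions made only for the example.

```python
# Toy sketch: split one computation across several processing units, then merge the results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each worker processes its own slice of the data."""
    return sum(x * x for x in chunk)

def distributed_sum_of_squares(data, workers=4):
    # Split the input into one chunk per worker, compute in parallel, combine at the end.
    chunks = [data[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(distributed_sum_of_squares(list(range(1_000_000))))
```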

Our brain, too, is a distributed processing system. At the macro level scientists have identified several areas, each dominant in the processing of certain signals: visual signal processing, for example, takes place in areas V1, V2, V3, V4 and MT along the dorsal pathway and in TEO and TE along the ventral pathway, after an initial relay through the LGN.

It is actually way more complicated than this. The brain does not work in macro areas with clearly defined interfaces; it works through trillions of synapses affecting one another and being affected by chemical compounds percolating through the brain as a result of brain activity and of the body's state. This makes the brain a much better distributed system, one that basically avoids the current downfall of computer-based distributed systems, where the challenge is how to split the computation across different nodes. In the brain everything goes on basically in parallel, and every part influences the others both spatially (because of the trillions of connections) and temporally (because synapses are affected by what they did before: they have a memory of previous computations).
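To make the temporal point concrete, here is a purely illustrative sketch (not a biological model; the decay constant and update rule are invented for the example) of a synapse-like unit that keeps a trace of its own past activity, so the same input produces a different output depending on its history.

```python
# Purely illustrative: a synapse-like unit whose response depends on its own history.
class ToySynapse:
    def __init__(self, weight=1.0, decay=0.9):
        self.weight = weight
        self.decay = decay
        self.trace = 0.0  # "memory" of previous activity

    def transmit(self, signal):
        # Output depends on the current input *and* on what the synapse did before.
        output = self.weight * signal * (1.0 + self.trace)
        self.trace = self.decay * self.trace + signal  # update the stored trace
        return output

s = ToySynapse()
print(s.transmit(1.0))  # first call
print(s.transmit(1.0))  # same input, different output because of the stored trace
```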

However, from a conceptual point of view one could imagine that part of the brain's computation could be enhanced by some form of external processing. In a way we have done it already, as we got used to processing data and using the results to take decisions, fine-tune our understanding and so on. So far, however, this has been done through the mediation of our senses (we “look” at a printout or a video screen and our eyes bring the result to the brain…). With the evolution of BCIs, discussed in a previous post, we might expect seamless connectivity between brain processing and external processing taking place in an implanted chip, in a wearable device, in an assistant robot or in the web. Take a look at the clip showing how Artificial Intelligence now allows a computer to detect the shapes we see in our thoughts by looking at the brain's electrical activity.
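As a minimal sketch of the kind of decoding involved, the example below assumes we already have EEG epochs summarised as feature vectors plus a label saying which shape the person was viewing. Everything here, including the synthetic data and the three hypothetical shape classes, is an assumption made for illustration; real EEG decoding pipelines use careful filtering, artifact removal and far more elaborate models.

```python
# Sketch of decoding "which shape is the person looking at" from brain-signal features.
# Synthetic placeholder data; a real pipeline would start from recorded EEG epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features = 200, 64           # e.g. 64 band-power features per EEG epoch
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 3, size=n_trials)    # 3 hypothetical shapes: circle / square / triangle
X[y == 1] += 0.5                         # inject a weak class-dependent signal
X[y == 2] -= 0.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("decoding accuracy:", clf.score(X_test, y_test))
```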

Interestingly, one could also imagine coupling BCI with CBI, hence connecting one brain with another brain to leverage the knowledge and skills of both. This is what we are doing every day, working in teams, but again the availability of seamless BCI could make this cooperative working happen at a completely different level.

Notice that in this last case, as in the previous one involving an external device, what would become possible -differently from today- is an interaction below the threshold of perception. Today this is not possible: any information we exchange with a computer, through our senses and actions, necessarily flows through the perception layer (yes, we can be absentminded and not take notice of what we are doing, but we are still operating at the perception level).

This is a major constraint, since most brain processing happens below the perception level (including, it seems, some decisional processes). The possibility of engaging additional processing capabilities operating at the “unconscious” level would radically change our thinking processes, since it would act on their substrata. It would be a completely new world, fraught with ethical issues (who is responsible for a decision and for its effects?). Symbiotic Systems will have this kind of shared “intelligence” and thought forming.

At the same time, it is clear that there is the potential for a tremendous boost of human thinking capabilities, one that would shift our species into transhumanism.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.