Augmented Machines and Augmented Humans are converging II

CeNSE, the Central Nervous System for the Earth, was a 2010 HP project with a vision of providing an overlay of sensors to monitor the planet. The project initially focused on the needs of Shell to detect oil reservoirs. It has not progressed, but it remains important as a vision of the whole planet being monitored by sensors feeding data to a platform that could be exploited to develop environmental services. Today there are several sensor networks, on land and in the oceans, monitoring the planet, but so far they are independent of one another. Image credit: HP

Awareness

You can’t be smart without understanding the context you operate in, and the very first step is becoming aware of what is going on. You need to sense your environment.

The vision of a world that can be understood by scattering sensors all around was articulated by HP in the first decade of this century with the project CeNSE, the Central Nervous System for the Earth. The project had Shell as its first customer, interested in using sensors to detect oil reservoirs by measuring the vibration patterns induced by micro explosions. HP foresaw a world where every object had embedded sensors, and these sensors formed a network, a nervous system, collecting data that could be processed centrally.

Every single nut and bolt on a bridge can embed a sensor, and all together they can monitor the structure.

Every bridge, every road would be part of it. The nuts and bolts connecting the various parts of a bridge would embed sensors communicating to one another the local stress and pressure; these data would be used to monitor the bridge and the movement of the connected banks. Sensors embedded in the tarmac would capture the vibrations created by vehicles, and signal processing would be able to infer traffic patterns and even differentiate distinct types of vehicles: micro sensing providing the data for a macro assessment of the environment.
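As a toy illustration of the kind of signal processing involved, here is a minimal Python sketch (the data, sample rate and frequency thresholds are invented for illustration, not the CeNSE pipeline) that guesses a vehicle type from the dominant frequency of a tarmac accelerometer trace:

```python
import numpy as np

def classify_vehicle(vibration, sample_rate=1000):
    """Toy classifier: infer a vehicle type from the dominant
    vibration frequency of an accelerometer trace.
    Thresholds are illustrative, not calibrated values."""
    spectrum = np.abs(np.fft.rfft(vibration))
    freqs = np.fft.rfftfreq(len(vibration), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
    if dominant < 5:
        return "heavy truck"   # slow, heavy oscillations
    elif dominant < 20:
        return "car"
    return "motorbike / light vehicle"

# One second of synthetic 12 Hz vibration as a stand-in for sensor data
t = np.linspace(0, 1, 1000, endpoint=False)
print(classify_vehicle(np.sin(2 * np.pi * 12 * t)))  # -> "car"
```

A real deployment would of course use trained models and calibrated signatures, but the principle is the same: a local spectrum tells a macro story.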

A different approach to sensing is the one we use every day, one that has been perfected through millions of years of evolution: sight. Image detectors have become extremely effective (high resolution and low cost) and computer vision has progressed enormously in recent years. It leverages image processing (detecting edges, sorting out shadows, …) and machine learning to understand what is there, and it is now used in machine vision (determining what is there, like a rusted pole needing maintenance, or identifying a vehicle’s plate number) and in robot vision (e.g. to move around in an environment).
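To make the image-processing step concrete, here is a minimal sketch using the OpenCV library (the file name and threshold values are illustrative): it detects edges, the typical first stage before a machine-learning model decides what is actually in the scene.

```python
import cv2

# Load an inspection photo (path is illustrative)
image = cv2.imread("pole.jpg", cv2.IMREAD_GRAYSCALE)

# Reduce noise, then run the Canny edge detector;
# the two thresholds control edge sensitivity
blurred = cv2.GaussianBlur(image, (5, 5), 0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

cv2.imwrite("pole_edges.jpg", edges)
```

In a full pipeline the edge map, or the raw image itself, would then feed a trained classifier that labels the content, e.g. "rusted pole, needs maintenance".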

Lately, and more so in the future, smart materials have acquired sensing capabilities, so that any object will embed sensing in a native way. Smart materials can sense and react to a variety of stimuli, with piezoelectricity taking the lion’s share. This means sensing pressure, including touch, and releasing an electrical charge that is proportional to the pressure applied, hence measuring that pressure.
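As a sketch of the measurement principle (the sensor constants below are made-up placeholders; real figures come from a datasheet): a piezoelectric element produces a charge roughly proportional to the applied force, so reading the voltage across a known capacitance yields an estimate of that force.

```python
# Toy piezoelectric force reading.
# D33 (charge sensitivity) and CAP (capacitance) are placeholder values.
D33 = 500e-12   # charge per unit force, C/N (illustrative)
CAP = 10e-9     # sensor + cable capacitance, F (illustrative)

def force_from_voltage(voltage):
    """Invert Q = d33 * F and Q = C * V to estimate force in newtons."""
    charge = CAP * voltage
    return charge / D33

print(force_from_voltage(0.5))  # 0.5 V reading -> ~10 N
```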

All this enhanced and distributed sensing creates data that can be processed both locally and at various hierarchical stages. And processing is what delivers value!
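As a minimal sketch of such hierarchical processing (node roles and field names are invented for illustration): each edge node reduces its raw stream to a compact summary, and higher tiers aggregate those summaries rather than raw data.

```python
def local_summary(readings):
    """Edge node: reduce a raw stream to a compact summary
    before sending it upstream (fields are illustrative)."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }

def regional_aggregate(summaries):
    """Mid-tier node: combine edge summaries into a regional view."""
    total = sum(s["count"] for s in summaries)
    mean = sum(s["mean"] * s["count"] for s in summaries) / total
    return {"sensors": len(summaries), "mean": mean,
            "max": max(s["max"] for s in summaries)}

edge1 = local_summary([0.1, 0.3, 0.2])
edge2 = local_summary([0.5, 0.4])
print(regional_aggregate([edge1, edge2]))
```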

By mirroring the physical world into a digital copy, we can analyse the digital copy as it is (mirror), compare it with what it was (thread), and keep it in synch as the physical entity evolves (shadow).

Through data analytics on the digital copy, and on other digital entities loosely connected with it in the physical space, we can assess what is going on and find out why. Then we can infer what might happen next and work out ways to steer the evolution in a more desirable direction (or at least limit the negative aspects).

The sequence:

  • what is going on,
  • why,
  • what will happen,
  • how to change the predicted evolution

is what creates value.
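These four steps correspond to what analysts commonly call descriptive, diagnostic, predictive and prescriptive analytics. A skeletal Python sketch of the ladder, with every function body reduced to an illustrative stub:

```python
def describe(data):            # what is going on
    return {"anomaly": max(data) > 0.8}

def diagnose(data, finding):   # why
    return "overload" if finding["anomaly"] else "nominal"

def predict(data, cause):      # what will happen
    return "failure likely" if cause == "overload" else "stable"

def prescribe(forecast):       # how to change the predicted evolution
    return "reduce load" if forecast == "failure likely" else "no action"

readings = [0.2, 0.5, 0.9]     # illustrative sensor stream
finding = describe(readings)
cause = diagnose(readings, finding)
forecast = predict(readings, cause)
print(prescribe(forecast))     # -> "reduce load"
```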

Making sense of data is crucial; it also opens the door to having data represent reality, creating a digital model of reality. Reality exists only in the present; data, however, can represent both the present and the past, developing a thread (and can be used to forecast the future!). This is what digital twins are all about. Of course, once you mirror reality into a digital representation you are creating a snapshot that has a very short life. In order to keep the mirror faithful to reality you need to make sure that it stays in synch with it: you need to create a digital shadow that keeps the digital model up to date. This is what makes the digital twin useful, since you can trust that it keeps representing the physical reality.

Digital twins have been evolving over the last ten years from being a simple representation of a real entity at a certain point in time (like the design stage) to becoming a shadow of the physical entity, kept in synch through IoT sensors.
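To make the mirror / thread / shadow distinction concrete, here is a minimal Python sketch (class and method names are invented for illustration, not a product API): the current state acts as the mirror, the recorded history as the thread, and an update method fed by IoT readings plays the role of the shadow.

```python
import datetime

class DigitalTwin:
    """Toy digital twin of a physical asset (illustrative only)."""

    def __init__(self, asset_id, state):
        self.asset_id = asset_id
        self.state = state          # the mirror: current digital copy
        self.history = []           # the thread: past states over time

    def shadow_update(self, sensor_reading):
        """The shadow: keep the mirror in synch with IoT sensor data."""
        self.history.append((datetime.datetime.now(), dict(self.state)))
        self.state.update(sensor_reading)

bridge = DigitalTwin("bridge-42", {"bolt_stress": 0.3})
bridge.shadow_update({"bolt_stress": 0.7})   # new IoT reading arrives
print(bridge.state, len(bridge.history))     # updated mirror, 1 past state
```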

In the coming years we may expect a (partial) fusion between a digital twin and its physical twin. The resulting reality, the one we will perceive, will exist partly in the physical world and partly in cyberspace, but the dividing line will get fuzzy and most likely will not be perceived by us. To us, cyberspace and physical reality will merge into a single perceived reality.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master’s course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.

3 comments

  1. Dear Roberto,

    thanks for this article. Project CeNSE is fascinating, as it is reminiscent of James Lovelock’s Gaia theory. Roughly, he described the whole planet as something like a single living organism, including all creatures: animals, plants and humans.

    Much earlier, in 1926, the Serbian-American inventor Nikola Tesla said: “When wireless is perfectly applied, the whole earth will be converted into a huge brain…” With this he predicted today’s reality. It is important to note, though, that brain is not a synonym for intelligence.

    • I like the point you are making. Indeed, one can look at the CeNSE vision as a way to make us aware of our planet in the short term, rather than observing its macro evolution when it might be too late and finding the root cause may be impossible.

  2. Back in 1990 programmers took the Gaia philosophy and tried to translate it into a model. The result was “SimEarth” ( https://en.wikipedia.org/wiki/SimEarth ). In contrast to today’s Digital Twins, this model did not include data shadowing. If we connect the whole world via sensors, we could have such a Digital Twin of our planet. On the other hand, to make the results of such a sophisticated digital twin interpretable for humans, the algorithms would have to reduce the amount of Big Data into interpretable Smart Data. As the Earth is such a complex system, the algorithms would strongly depend on the opinions of their programmers. The risk of bias would be very high.