The fading boundary between Atoms and Bits

The separation between atoms and bits is getting fuzzier as the IoT gets smarter, and that includes human beings.

Digitalisation has now been progressing for more than 60 years. We have been living in a world of atoms, but with the advent of computers and their ability to work on bits at very low cost we have started transposing atoms into bits. This is made possible through the use of sensors. In the very beginning we used “our own” sensors, our sight and hearing, plus the processing in our brain, to convert atoms into bits (initially via punched tape and punched cards). Then we created sensors to convert atoms into bits, and since then the variety of sensors and their capabilities have kept increasing. Among these capabilities is the possibility to process data locally and to communicate abstracted data with a richer information content.
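To make this last point concrete, here is a minimal sketch of a sensor node that digitises a physical quantity and transmits only an abstracted summary rather than the raw samples. The names, resolutions and thresholds are made up for illustration, not taken from any specific device:

```python
# A hypothetical sensor node: it converts atoms (temperature) into bits and
# reports an abstracted, information-rich message instead of raw samples.

from statistics import mean

ADC_RESOLUTION = 4096   # assumed 12-bit analog-to-digital converter
FULL_SCALE_C = 125.0    # assumed full-scale temperature in Celsius

def quantise(raw_adc_value: int) -> float:
    """Convert a raw ADC count into a temperature in Celsius."""
    return raw_adc_value / ADC_RESOLUTION * FULL_SCALE_C

def summarise(samples: list[float]) -> dict:
    """Reduce a window of samples to a compact summary for transmission."""
    return {
        "avg_c": round(mean(samples), 2),
        "max_c": round(max(samples), 2),
        "alarm": max(samples) > 100.0,   # threshold chosen only for the example
    }

if __name__ == "__main__":
    raw_counts = [1310, 1325, 1402, 3500, 1330]       # pretend ADC readings
    window = [quantise(c) for c in raw_counts]
    print(summarise(window))   # only this summary would be sent over the network
```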

The separation between atoms and bits has remained quite clear. We shifted our attention to bits, to data, and we are now using a variety of technologies to exploit these data, correlating them and extracting meaning out of these correlations (big data analyses). We are using data, and their variation over time, to learn and infer patterns and rules that help us (and our programs) get smarter in the analyses (deep learning).
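As a toy illustration of extracting meaning from correlations, the sketch below computes the Pearson correlation between two made-up time series, say outdoor temperature and energy demand; the data are invented for the example:

```python
# Toy "meaning from correlation" example: Pearson correlation between two
# invented time series (temperature and energy demand).

def pearson(x: list[float], y: list[float]) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

temperature = [21.0, 24.5, 28.0, 31.5, 35.0]
energy_demand = [180, 210, 260, 320, 400]   # cooling load grows with the heat

print(round(pearson(temperature, energy_demand), 3))  # close to 1: strong link
```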

Of course one of the goals of analysing data is to set up actions that can influence the world of atoms, steering it in a desirable direction. For this we use actuators. These may generate direct commands to machines, or influence the behaviour of the world of atoms by providing information, like warning of a traffic jam building up through an SMS broadcast to drivers in the vicinity of the problematic area, prompting them to take alternative routes.
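The actuation side can be as simple as a rule that turns an analysis result into a message. The sketch below is purely illustrative: the threshold and the notification function are assumptions standing in for a real traffic-management or messaging gateway:

```python
# Hypothetical actuation rule: when congestion exceeds a threshold,
# broadcast an advisory to drivers near the affected road segment.

from dataclasses import dataclass

@dataclass
class RoadSegment:
    name: str
    vehicles_per_km: float
    subscribed_drivers: list[str]

JAM_THRESHOLD = 60.0  # vehicles per km, illustrative value only

def broadcast_advisory(segment: RoadSegment) -> None:
    """Stand-in for an SMS or push-notification gateway."""
    for driver in segment.subscribed_drivers:
        print(f"to {driver}: congestion on {segment.name}, consider an alternative route")

def actuate(segment: RoadSegment) -> None:
    if segment.vehicles_per_km > JAM_THRESHOLD:
        broadcast_advisory(segment)

actuate(RoadSegment("A22 north", 75.0, ["driver-001", "driver-002"]))
```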

What we are seeing happening, right now and even more so in the coming decade, is a blurring of this separation between atoms and bits. The first sign of this blurring is the uptake of augmented reality.

Devices integrating a screen and a camera, connected to the web and endowed with processing capability, can merge the world of atoms with the world of bits.

Think about using your smartphone camera to look at a road sign in a foreign country. An app can translate the wording on the sign into your language, keeping all the rest of the image unchanged (Word Lens did exactly this: an application running on smartphones, developed by QuestVisual and bought by Google in 2014).

The smartphone is a good example of a device that can support Augmented Reality, merging the world of bits with the world of atoms, and indeed there are plenty of apps available and under development targeting this platform.

A more “seamless” device, like the one Google Glass promised to be (although it did not manage to capture the market as much as expected), would be an even better platform for making AR ubiquitous.

I feel it is just a matter of a few more years before we see AR becoming as commonplace as text messages are today. We will be using it without noticing, taking it for granted, as part of our daily life. Today we are already consciously connecting the world of atoms with the world of bits by using our cellphone to search the web for information relevant to the specific situation we are facing; tomorrow this will take place seamlessly.

Imagine a time when BCIs (Brain Computer Interfaces) will be widespread and just “wondering” about something will bring the answer to us. You see a bifurcation in the road and a prompt will come indicating which way to go. It would be like having a navigator plugged into your brain. Or looking at a couch in a department store and seeing it with your mind’s eye fitting in your living room, taking up the exact space it would take, given its dimensions.

These examples may look like science fiction in their seamless occurrence, but they are clearly feasible today if we accept some (sometimes cumbersome) interaction. The point is that evolution will, step by step, make the perception of that interaction slowly vanish, to the point where the connection will be a matter of fact.

There is another development that is going to make the separation between atoms and bits fade away. Sensors and actuators are becoming richer and richer in terms of processing and storage capabilities. This leads to an increased capability to take decisions locally. This is what is meant by “Smart IoT”.
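What “deciding locally” could look like is sketched below: a hypothetical sensor node builds up statistics of what it observes and flags anomalies by itself, without shipping raw data to a remote server. The three-sigma rule and the numbers are illustrative assumptions, not a prescription:

```python
# A hypothetical smart sensor node that learns from its own observations
# (running mean and variance via Welford's algorithm) and decides locally
# whether a new reading is anomalous.

class SmartNode:
    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the mean

    def observe(self, value: float) -> bool:
        """Update local statistics and return True if the value looks anomalous."""
        anomalous = False
        if self.n >= 10:  # only judge after some experience has accumulated
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = abs(value - self.mean) > 3 * std
        # Welford's online update of mean and variance
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

node = SmartNode()
readings = [20.1, 20.3, 19.9, 20.0, 20.2, 20.1, 19.8, 20.4, 20.0, 20.2, 35.0]
print([node.observe(r) for r in readings])  # the last reading is flagged locally
```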

Reality is no longer just getting “augmented” by overlaying bits on atoms. It becomes “mixed”, with a co-presence of bits and atoms.

Smart IoT devices will be context aware and will evolve in their behaviour, because they will learn through experience. At that point it will be difficult, and artificial, to separate bits from atoms, as artificial as separating the brain from the mind.

It will not be an evolution confined to technology; it will have an impact on economics and on ethics. The shift to a “mixed reality” is a bigger one than the advent of Augmented Reality, since the concept of objective reality gets fuzzier. What is the real reality, once the perceived one depends on the specific capabilities available here and now (to me or to you)? What is the “reality” in the case of symbiotic autonomous systems? Is it the one emerging out of the local realities of each system component? Who is in charge of percolating that emerging reality back to each system component so that they share a common view (assuming this is even possible)?

Is machine learning, leveraging processing capabilities that far exceed our human ones, leading us into a forced trust of the machine (which is already the case when a pilot flies blind in the fog towards a runway…), relieving us of responsibility?

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.