Awareness, Intention, Sentiment technologies in SAS – IV

We can appreciate what our fellow humans feel thanks to the presence of mirror neurons in our brain. These neurons, first discovered in monkeys in the 1990s, mimic the other person and put us in their shoes. The image shows the similarities in brain activity between a person performing an action and one observing it, in this instance playing the piano. Notice that a pianist listening to another pianist playing shows a much closer reaction than a non-pianist listening to a piano player. The model of external reality we develop in our brain, the virtual environment, is highly dependent on our past experience. Image credit: Gazzola et al.

Following on from the previous posts in this series, a third area of awareness, covered from the complementary, mirror-image perspective in the next subsection, relates to how the context and the actions carried out by the “aware” entity may be perceived by other entities. This is, by far, a higher level of awareness and, as far as we can tell, it is found only in a few mammal species. It implies the capability to imagine what other entities can perceive and feel. Human and primate brains have been found to contain “mirror neurons” that serve this specific purpose.

Humans for sure, and possibly other creatures, can imagine how others would feel and react when confronted with our actions. This is a fundamental characteristic of social behavior. Most of the time we act in a way that we feel is acceptable to those around us.

Social robots have become a significant area of study, where the focus is on facilitating their interaction with us, human beings.

In symbiotic systems where one component is a human being, the other components may get hints on how the human is perceiving their behavior by looking at telltale signs in her expression. There is already technology to evaluate the feelings of a single person as well as those of a group of people (sentiment analysis, discussed later on). Digital cameras are already equipped with software that detects a smile so as to take the snapshot at the right time. Much more sophisticated software can, by analyzing a number of traits including posture, movement and tone of voice, extract quite precise information on the “feeling” of a person. Simple camera sensors coupled with software can accomplish this feat.
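As a toy illustration of the simplest end of this spectrum, the sketch below uses OpenCV's bundled Haar cascades to detect a smiling face in a camera frame and trigger a snapshot. The cascade files are the ones shipped with OpenCV; the threshold parameters and the single-frame flow are illustrative choices, not the pipeline of any particular camera vendor.

```python
import cv2

# Haar cascades bundled with opencv-python: detect a face first,
# then look for a smile inside the face region.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def detect_smile(frame):
    """Return True if at least one detected face contains a smile."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        # A high minNeighbors keeps the smile detector from firing on noise.
        smiles = smile_cascade.detectMultiScale(face_roi, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) > 0:
            return True
    return False

# Example: grab one frame from the default camera and "take the snapshot"
# only when a smile is present.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok and detect_smile(frame):
    cv2.imwrite("snapshot.jpg", frame)
cap.release()
```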

However, it would be better to foresee the feelings of a human (or of another component in the symbiotic system or in the environment) before executing an action. The point is to make decisions based on the possible ways those decisions would affect the others.
Technology in this area is also progressing through the creation of virtual twins. Notice that a virtual twin differs from the digital twin associated with an entity. A digital twin is explicitly coupled with its real twin; a virtual twin is created on the spot by modeling the perceived behavior of an entity and is used by the one that created it. In other words, a digital twin is associated with the real twin, whereas a virtual twin is associated with the entity using it (and different entities would each generate their own virtual twin to “understand” the world around them). The recent approach based on Generative Adversarial Networks (GANs) can be used to test the potential effects of decisions on the virtual twin.
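To make the GAN idea concrete, here is a minimal, hypothetical sketch in PyTorch of a conditional generator/discriminator pair: the generator proposes plausible responses of the virtual twin to a candidate action, while the discriminator learns to tell generated (action, response) pairs from pairs actually observed. The dimensions, network sizes and the encoding of actions and responses as feature vectors are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: an "action" and a "response" are encoded
# as small feature vectors.
ACTION_DIM, RESPONSE_DIM, NOISE_DIM = 8, 4, 16

# Generator: given a candidate action (plus noise), propose a plausible
# response of the virtual twin to that action.
generator = nn.Sequential(
    nn.Linear(ACTION_DIM + NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, RESPONSE_DIM),
)

# Discriminator: judge whether an (action, response) pair looks like one
# actually observed from the real twin.
discriminator = nn.Sequential(
    nn.Linear(ACTION_DIM + RESPONSE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def train_step(actions, observed_responses):
    """One adversarial update on a batch of observed (action, response) pairs."""
    batch = actions.size(0)
    noise = torch.randn(batch, NOISE_DIM)
    fake_responses = generator(torch.cat([actions, noise], dim=1))

    # Discriminator: observed pairs -> 1, generated pairs -> 0.
    d_real = discriminator(torch.cat([actions, observed_responses], dim=1))
    d_fake = discriminator(torch.cat([actions, fake_responses.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make generated pairs indistinguishable from observed ones.
    g_score = discriminator(torch.cat([actions, fake_responses], dim=1))
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```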

The concept of “virtual twin” can be applied to humans as well as to machines. It is created, and refined, through the observation of the behavior manifested by the “real twin” in response to specific stimuli. Deep learning technologies are useful in developing the virtual twin and refining it.
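A minimal sketch of this refinement loop, assuming stimuli and observed responses can be encoded as fixed-size feature vectors, might look like the following; the network size, loss and optimizer are purely illustrative choices.

```python
import torch
import torch.nn as nn

# Hypothetical encoding: both stimuli and responses are fixed-size feature vectors.
STIMULUS_DIM, RESPONSE_DIM = 8, 4

# The "virtual twin" as a learned stimulus -> response model.
twin = nn.Sequential(
    nn.Linear(STIMULUS_DIM, 32), nn.ReLU(),
    nn.Linear(32, RESPONSE_DIM),
)
optimizer = torch.optim.Adam(twin.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def refine(stimuli, observed_responses):
    """Refine the twin from a batch of observed (stimulus, response) pairs."""
    predicted = twin(stimuli)
    loss = loss_fn(predicted, observed_responses)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```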

The virtual twin will be used to test (in the blink of an eye) the possible responses to an interaction, and decisions will be taken based on the desired response. It is interesting to note that these approaches, and technologies, are already used to model the possible responses of an audience, or of voters during an election campaign, where the candidate tailors what to say (in form and content) to the expected reaction of the audience.
This becomes part of the way interactions are constructed, with continuous refinement needed not just to create a more accurate virtual twin, but also to take into account how the responses to a given interaction change over time (what works now may not work tomorrow).
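Using a virtual twin built along these lines, the decision step itself can be as simple as simulating each candidate interaction on the twin and picking the one whose predicted response is closest to the desired one. The helper below is a hypothetical sketch of that idea, reusing the `twin` model from the previous snippet.

```python
import torch

def choose_interaction(twin, candidate_stimuli, desired_response):
    """Simulate each candidate interaction on the virtual twin and return the
    one whose predicted response is closest to the desired response."""
    with torch.no_grad():
        predicted = twin(candidate_stimuli)                     # (N, RESPONSE_DIM)
        distances = (predicted - desired_response).norm(dim=1)  # (N,)
    return candidate_stimuli[distances.argmin()]
```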

This is an area that connects on one side to sociology and psychology (if we are creating virtual twins of people) and on the other to game theory (if applied to machines).

Symbiotic autonomous systems will probably become able to mirror their environment, as well as the awareness of their individual constituents, within the next twenty years (with some aspects already addressed today).

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.