Tech for Autonomous Systems – Self-Evolving Capabilities I

Self-driving cars need more than the ability to see what is in their environment. They need to become aware of what may happen, given the kinds of entities present, including the ones that are not visible but might be there. Image credit: Inhabitat

Awareness technologies and sentiment analysis

Autonomous systems need situational awareness: they need to be aware of their surroundings and of how those surroundings may evolve. An autonomous vehicle needs to identify the objects around it and understand their characteristics. A bench on a sidewalk and a light pole are not going to move around, but the boy sitting on the bench may move all of a sudden and jump into the street. If the boy has a cast on his leg, his movements will be much slower; if he is bouncing a ball, on the contrary, he is more likely to jump off the bench. Context awareness also requires the ability to imagine what may happen even when there are no immediate signs. As an example, an autonomous vehicle approaching a blind crossing cannot detect any incoming vehicles, yet it needs to be aware that there might be incoming vehicles nonetheless; if visibility is good and no other vehicle is nearby, then there will be no vehicles contending for the crossing.
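The reasoning above can be sketched very crudely in code: each detected entity class gets a prior expectation of sudden movement, which context cues then raise or lower. All class names, cue names and numeric values below are illustrative assumptions, not taken from any real perception stack.

```python
# Base likelihood (0..1, illustrative) that an entity suddenly enters the roadway.
BASE_MOTION_PRIOR = {
    "bench": 0.0,        # fixed street furniture, will not move
    "light_pole": 0.0,
    "parked_car": 0.05,
    "pedestrian": 0.30,  # a boy on a bench may jump up at any moment
}

# Context cues that modulate the base prior (again, made-up factors).
CONTEXT_MODIFIERS = {
    "leg_cast": 0.3,       # a cast makes sudden movement much less likely
    "bouncing_ball": 2.0,  # a ball makes a dash into the street more likely
}

def motion_risk(entity_class, cues=()):
    """Estimate how likely an entity is to move into the vehicle's path."""
    risk = BASE_MOTION_PRIOR.get(entity_class, 0.2)  # unknown class -> cautious default
    for cue in cues:
        risk *= CONTEXT_MODIFIERS.get(cue, 1.0)
    return min(risk, 1.0)  # keep the score in the 0..1 range

print(motion_risk("bench"))                          # 0.0
print(motion_risk("pedestrian", ["leg_cast"]))       # lower than the base prior
print(motion_risk("pedestrian", ["bouncing_ball"]))  # higher than the base prior
```

A real system would of course learn such priors from data rather than hard-code them; the table form only makes the bench/boy/cast/ball reasoning explicit.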

Even these few examples make clear the complexity faced by awareness technologies and how, sometimes, particularly when confronted with living organisms (like us, dogs, cats...), they have to venture into sentiment analysis, imagining what a sentient being might be after.
Interestingly, in a world populated by autonomous systems, this "sentiment analysis" will have to be applied to them as well. Not all autonomous systems will be alike: each will have its own character and predisposition to act in a certain way, and this may change depending on the situation and over time, just like a sentient being. Indeed, the more sophisticated the autonomous system, the more likely the need for some sort of sentiment analysis. This is a new area of science that will need to be developed. From the point of view of an autonomous system, every object has to be regarded with some suspicion, so to speak, about its possible behaviour.

These technologies are based on computer vision and its sub-areas of image processing, image analysis, machine vision and pattern recognition.

The challenges are quite similar to those faced by living organisms, and studies have been, and continue to be, made of visual processing in living organisms and of the subsequent decision-making strategies. Out of these studies, usually carried out under the label of neurobiology, the scientific branches of neural networks and deep learning have emerged.

The study of biological vision also brings in the study of consciousness and its relation to the interpretation of visual stimuli and to decision-making processes.

Sensors usually provide 2D images, and these images need to be converted into 3D models. Some sensors can provide depth information, thus helping to single out objects in an image. Analysis of colours can distinguish differences in hue from shadows, and shadows can help in gauging distances.
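The 2D-to-3D step mentioned above is, at its simplest, a back-projection through the camera model. A minimal sketch with the standard pinhole model follows; the focal length and principal point are made-up values, since a real sensor would supply its own calibrated intrinsics.

```python
import numpy as np

def depth_to_points(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Back-project each pixel (u, v) with depth z to a 3D point (x, y, z)."""
    h, w = depth.shape
    cx = w / 2 if cx is None else cx  # principal point defaults to image centre
    cy = h / 2 if cy is None else cy
    v, u = np.indices(depth.shape)    # pixel coordinate grids
    z = depth
    x = (u - cx) * z / fx             # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy 2x2 depth map: a "near" row of pixels against a "far" row.
depth = np.array([[1.0, 1.0],
                  [4.0, 4.0]])
points = depth_to_points(depth)
print(points.shape)  # (2, 2, 3)
```

Pixels that are adjacent in the image but far apart in depth end up far apart in 3D, which is exactly what lets depth information single out objects.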

Notice the importance of context and experience in extracting meaning from shadows. Our brain processes shadows assuming that light comes from above. Under this assumption it can tell a bas-relief from an engraving. By changing the direction of the light it is possible to trick our brain into seeing as a bas-relief what is actually an engraving, and vice versa. The same issue is faced by image recognition algorithms, which first need to determine the direction of the light illuminating the subject. In the case of machine vision, illumination can be active or passive, i.e. the light can be generated by the machine itself or it can be ambient light.
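This ambiguity can be shown with a toy numerical example under simple Lambertian shading (brightness proportional to the dot product of surface normal and light direction). A bump lit from the upper right produces exactly the same shading as a hollow lit from the upper left, which is why an algorithm (or a brain) must commit to a light direction before deciding bas-relief versus engraving. All numbers below are illustrative.

```python
import numpy as np

# 1D surface profiles described by their unit normals (nx, ny):
# a bump (slopes facing outward) and an engraved hollow (x-slopes flipped).
bump   = np.array([[-0.6, 0.8], [0.0, 1.0], [ 0.6, 0.8]])
hollow = np.array([[ 0.6, 0.8], [0.0, 1.0], [-0.6, 0.8]])

# Two unit light directions, both "from above" but tilted left or right.
upper_left  = np.array([-1.0, 1.0]) / np.sqrt(2.0)
upper_right = np.array([ 1.0, 1.0]) / np.sqrt(2.0)

# Lambertian shading: clip negative dot products (surface in shadow) to 0.
shade_bump   = np.clip(bump   @ upper_left,  0.0, 1.0)
shade_hollow = np.clip(hollow @ upper_right, 0.0, 1.0)

print(shade_bump)    # bright on the left flank of the bump
print(shade_hollow)  # identical values: the hollow looks like the bump
```

Since the two shaded profiles are identical, no algorithm can tell them apart from the image alone; it must first estimate (or, with active illumination, control) the light direction.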

A growing area of interest is the exploitation of communications among autonomous systems to create a global awareness. Autonomous vehicles may interact with one another to reach a shared understanding of the situation.

Military applications are fast advancing along this approach, with the creation of swarm awareness.
Drones can collaborate to identify objects and to understand what is going on (and hence what to do...).
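A very small sketch of how such a shared picture might be assembled: detections reported by different agents are pooled, and detections of the same class that lie close together are assumed to be the same object. The agent names, positions and distance threshold are illustrative assumptions.

```python
import math

def fuse(detections, radius=2.0):
    """Greedy fusion: merge same-class detections within `radius` metres."""
    fused = []  # list of (label, x, y)
    for label, x, y in detections:
        for i, (flabel, fx, fy) in enumerate(fused):
            if flabel == label and math.hypot(x - fx, y - fy) <= radius:
                # Same object seen by two agents: average the positions.
                fused[i] = (label, (x + fx) / 2, (y + fy) / 2)
                break
        else:
            fused.append((label, x, y))  # a genuinely new object
    return fused

# Two vehicles report overlapping views of the same crossing.
vehicle_a = [("pedestrian", 10.0, 5.0), ("cyclist", 30.0, 2.0)]
vehicle_b = [("pedestrian", 10.5, 5.2), ("truck", 50.0, 0.0)]

picture = fuse(vehicle_a + vehicle_b)
print(len(picture))  # 3 objects in the shared picture, not 4
```

Each vehicle alone sees only two objects; the fused picture covers three, including the truck that vehicle A cannot see at all, which is the whole point of swarm awareness.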

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.