Awareness technologies and sentiment analysis
Autonomous systems need situational awareness: they need to be aware of their surroundings and of how those surroundings may evolve. An autonomous vehicle needs to identify the objects around it and understand their characteristics. A bench on a sidewalk and a light pole are not going to move around, but the boy sitting on the bench may move all of a sudden and jump into the street; if the boy has a cast on his leg, his movements will be much slower, whilst if he is bouncing a ball he is more likely to jump off the bench. Context awareness requires the ability to imagine what may happen even when there are no immediate signs. As an example, an autonomous vehicle approaching a blind crossing cannot detect any incoming vehicles, yet it needs to be aware that there might be incoming vehicles; if, on the other hand, there is good visibility and no other vehicle is nearby, then there will be no vehicles contending for the crossing.
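The kind of context-dependent reasoning described above can be sketched, in a highly simplified way, as a risk score that depends on an object's class and observed attributes. The classes, attributes and weights below are purely illustrative assumptions, not a real perception model:

```python
# Hypothetical sketch: a context-dependent motion-risk score for detected
# objects. Classes, attributes and weights are illustrative assumptions.

STATIC_CLASSES = {"bench", "light_pole"}

def motion_risk(obj_class, attributes):
    """Return a 0..1 score for how likely the object is to move suddenly."""
    if obj_class in STATIC_CLASSES:
        return 0.0           # inanimate street furniture will not move
    risk = 0.5               # baseline for a living being
    if "leg_cast" in attributes:
        risk -= 0.3          # slower, less likely to dart into the street
    if "bouncing_ball" in attributes:
        risk += 0.4          # play often precedes a sudden jump
    return max(0.0, min(1.0, risk))

print(motion_risk("light_pole", set()))       # 0.0
print(motion_risk("boy", {"bouncing_ball"}))  # close to 0.9
```

The point of the sketch is only that the same object class yields different expectations in different contexts, which is exactly the boy-on-the-bench example above.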
Even these few examples show the complexity faced by awareness technologies and how, sometimes (particularly when confronted with living organisms, like us, dogs, cats…), they have to enter into sentiment analysis, imagining what a sentient being might be after.
Interestingly, in a world populated by autonomous systems this sentiment analysis will have to be applied to them as well. Not all autonomous systems will be alike: each one will have its own character and predisposition to act in a certain way, and this may change depending on the situation and over time, just like a sentient being. Indeed, the more sophisticated the autonomous system, the more likely the need for some sort of sentiment analysis. This is a new area of science that will need to be developed. From the point of view of an autonomous system, every object has to be looked upon with some suspicion, so to speak, about its possible behaviour.
These technologies are based on computer vision and its sub-areas of image processing, image analysis, machine vision and pattern recognition.
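As a minimal illustration of the image-processing layer, the sketch below applies a Sobel filter (a standard edge-detection kernel) to a tiny synthetic grayscale image. The naive nested-loop convolution is written for clarity, not performance:

```python
# Minimal image-processing sketch: detect vertical edges in a tiny
# grayscale image with the vertical Sobel kernel (valid convolution, no padding).
import numpy as np

def sobel_vertical(img):
    """Correlate img with the vertical-edge Sobel kernel."""
    k = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * k)
    return out

# A dark-to-bright vertical step: the filter responds at the boundary columns.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_vertical(img)
```

Real systems use optimised library routines for this, but the principle, local filters that turn raw pixels into structure such as edges, is the same.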
The challenges are quite similar to the ones faced by living organisms, and studies have been, and are being, made on the visual processing of living organisms and on the subsequent decision-making strategies. Out of these studies, usually under the label of neurobiology, the scientific branches of neural networks and deep learning have emerged.
The study of biological vision brings in studies of consciousness and of its relation to the interpretation of visual stimuli and to decision-making processes.
Sensors usually provide 2D images, and these images need to be converted into 3D models. Some sensors can also provide depth information, thus helping in singling out objects in an image. Analysis of colours can distinguish differences in hue from shadows, and shadows can help in gauging distances.
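How a depth reading turns a 2D pixel into a 3D point can be shown with the standard pinhole camera model. The intrinsic parameters used below (focal lengths and principal point) are assumed values for illustration:

```python
# Sketch: back-projecting a 2D pixel plus a depth reading into a 3D point
# with the pinhole camera model. Intrinsics (fx, fy, cx, cy) are assumed values.

def pixel_to_3d(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with depth (metres) into camera coordinates."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    return (x, y, z)

# A pixel at the image centre maps to a point straight ahead of the camera.
print(pixel_to_3d(320, 240, 2.0))  # (0.0, 0.0, 2.0)
```

Repeating this for every pixel of a depth image yields a point cloud, the basic 3D model from which objects can be singled out.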
Notice the importance of context and experience in extracting meaning from shadows. Our brain processes shadows assuming that light comes from above; under this assumption it distinguishes bas-relief from engraving. By changing the direction of the light it is possible to trick our brain into seeing as bas-relief what is actually an engraving, and vice versa. This kind of issue is faced by image recognition algorithms, which first need to determine the direction of the light illuminating the subject. In the case of machine vision the illumination can be active or passive, i.e. the illuminating light can be generated by the machine or can be ambient light.
A growing area of interest is the exploitation of communications among autonomous systems to create a global awareness. Autonomous vehicles may interact with one another to reach a shared understanding of the situation.
Military applications are advancing fast on this approach: the creation of swarm awareness.
Drones can collaborate to identify objects and to understand what is going on (to know what to do…).
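Swarm awareness of this kind can be sketched as the fusion of independent detection reports. The sketch below combines each drone's confidence in having spotted an object using the standard independent-evidence (noisy-OR) rule; the message format is an assumption:

```python
# Illustrative sketch of swarm awareness: several drones report independent
# confidences that the same object is present; the swarm fuses them with the
# noisy-OR rule (the object is "absent" only if every drone missed it).

def fuse_confidences(confidences):
    """Fused probability that the object is present, given independent reports."""
    p_absent = 1.0
    for c in confidences:
        p_absent *= (1.0 - c)
    return 1.0 - p_absent

# Three drones, each only 60% sure on its own; together the swarm is far
# more certain than any single member.
swarm_view = fuse_confidences([0.6, 0.6, 0.6])
```

The design choice here is independence: each drone observes from a different vantage point, so the swarm's combined confidence exceeds that of any individual, which is precisely the appeal of collaborative identification.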