Awareness, Intention, Sentiment technologies in SAS – V

Pedestrians are detected in high-resolution images. Based on part-wise detection, the gaze direction of a pedestrian is estimated using machine learning techniques. Other relevant features, like motion direction and speed, are determined using tracking. The extracted features are used as input to a prediction stage. In this step, the history of the tracked features as well as information on the surroundings is used to determine possible future trajectories. Credit: KIT MRT

Intention recognition has developed significantly in fields like security and military applications. It has also been considered in health care to help patients with communication disabilities. The aim is to decode people's intentions by observing their behavior and analyzing their interactions. Our brain has become quite good at reading "between the lines", and it is on the spot most of the time. It is also fairly good at recognizing the intentions of living creatures it is familiar with, like dogs and cats, and, by extension, of similar animals, using stereotypes to detect aggressiveness, social behavior and so on.

Animals' brains, at least in a number of species, are also equipped with this capability (notice that it is different from the one addressed in the previous section, which related to imagining the impact of one's action on another entity).

In the area of symbiotic autonomous systems, the interest in intention recognition extends to recognizing the intentions of any kind of interacting entity, both living beings and machines.

Cars have to predict the possible movements of people in their environment (does that pedestrian "intend" to cross the road? Will that cyclist turn right?) as well as the movements of other vehicles. Obviously, newer cars may communicate with one another, but they will still have to discover the intentions of older vehicles that are not equipped with communication capabilities.

Looking at the turn signal is not, per se, an assessment of an intention; it is a way of communicating a plan. However, the turn signal may sometimes have been activated erroneously, so intention assessment can still be important (a driver has indicated the intention to turn right, but the car keeps going straight or, worse, turns left). Notice that our brain, through experience, is pretty good at evaluating a variety of signs and hints and working out a high-probability intention recognition. From many signs that would be difficult to spell out, a driver may detect that the car he is tailing intends to turn, even if no turn signal has been activated. This is sometimes referred to as a "sixth sense": indeed, intention recognition technologies are asked to provide a sixth sense to a symbiotic autonomous system and, more generally, to smart autonomous systems.
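One way such weak, hard-to-articulate signs can be combined by a machine is a naive Bayes update over the odds of a turn. The sketch below is purely illustrative: the cue names and likelihood ratios are invented for the example, not measured values from any real driving dataset.

```python
import math

# Illustrative sketch: combining weak, independently observed cues into a
# turn-intention probability via a naive Bayes log-odds update.
# All likelihood ratios below are invented for illustration.

# log( P(cue | turning) / P(cue | going straight) ) for each cue
cue_llr = {
    "drifting_toward_lane_edge": math.log(4.0),
    "decelerating":              math.log(2.5),
    "brake_lights_flicker":      math.log(1.8),
    "turn_signal_on":            math.log(20.0),
}

def p_turn(observed_cues, prior=0.1):
    """Posterior probability of a turn, given the observed cues."""
    log_odds = math.log(prior / (1 - prior))   # start from the prior odds
    for cue in observed_cues:
        log_odds += cue_llr[cue]               # each cue shifts the odds
    return 1 / (1 + math.exp(-log_odds))

# No turn signal, but the car drifts and slows: the intention is still readable
print(round(p_turn(["drifting_toward_lane_edge", "decelerating"]), 3))
```

Even without the (strong) turn-signal cue, the two weak cues together push the posterior past the 50% mark, which is the machine analogue of the driver's "sixth sense" described above.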

Pedestrian intention recognition is being assessed using Hidden Markov Models. Looking at a face for tiny reactions can also provide data for intention recognition, by comparing those reactions (using deep neural network analyses) to a virtual model of that person, which takes into account gender, age, culture and so on, enriched with historical data on that person if available. Notice that digital sensors, like the one in a digital camera, can pick up variations in the heartbeat by detecting the subtle changes in the color of the face as the heart pumps blood, thus deriving data pointing to excitement, fear, interest and so on. Similarly, the detection of eyelid, iris and pupil movements provides additional data useful for intention recognition.
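The Hidden Markov Model idea can be sketched in a few lines: a hidden "intention" state is filtered from a sequence of observable cues with the forward algorithm. All states, cues and probabilities below are invented for illustration, not values from any real pedestrian dataset.

```python
import numpy as np

# Minimal HMM forward filter over a pedestrian's hidden intention.
# Probabilities are illustrative assumptions, not measured values.

states = ["walking_along", "intends_to_cross"]                     # hidden
obs_names = ["head_forward", "head_toward_road", "stops_at_curb"]  # observed

# P(next_state | current_state): intentions tend to persist
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# P(observation | state): a crossing pedestrian watches traffic more often
B = np.array([[0.7, 0.2, 0.1],    # walking_along
              [0.1, 0.5, 0.4]])   # intends_to_cross

pi = np.array([0.8, 0.2])         # prior belief over the intentions

def filter_intention(obs_indices):
    """Forward algorithm: belief over the hidden state after each observation."""
    belief = pi * B[:, obs_indices[0]]
    belief /= belief.sum()
    for o in obs_indices[1:]:
        belief = (A.T @ belief) * B[:, o]   # predict, then weight by evidence
        belief /= belief.sum()
    return belief

# The pedestrian glances at the road twice, then stops at the curb
belief = filter_intention([1, 1, 2])
print(dict(zip(states, belief.round(3))))
```

After those three observations the filter assigns most of the probability mass to "intends_to_cross", which is exactly the kind of early warning an autonomous vehicle needs.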

Systems have been developed to assess the physical fitness of a driver (increasing drowsiness, as an example) based on head movements.
A large body of social and psychological studies has created significant knowledge for modeling human behavior and "expected" behavior. This can be used in the evaluation of data collected by sensors (mostly visual sensors, although a growing body of knowledge is becoming available for assessing aural cues). Robots designed to work in open environments are progressively being equipped with intention recognition capabilities.
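In the spirit of the head-movement drowsiness systems mentioned above, a minimal monitor can count head-drop events in a sliding window of pitch samples. The thresholds, window size and class name here are hypothetical choices for the sketch, not parameters of any deployed system.

```python
from collections import deque

# Hedged sketch of drowsiness scoring from head-pitch samples.
# All thresholds are illustrative assumptions.

class DrowsinessMonitor:
    def __init__(self, drop_threshold=15.0, window=50, alert_nods=3):
        self.drop_threshold = drop_threshold   # degrees of downward pitch = a "nod"
        self.recent = deque(maxlen=window)     # sliding window of nod flags
        self.alert_nods = alert_nods           # nods in window that trigger an alert

    def update(self, pitch_deg):
        """Feed one head-pitch sample (0 = upright, positive = head dropping).

        Returns True when enough nods accumulated in the window to raise an alert.
        """
        self.recent.append(pitch_deg > self.drop_threshold)
        return sum(self.recent) >= self.alert_nods

monitor = DrowsinessMonitor()
samples = [2, 3, 1, 18, 20, 2, 19, 4, 22, 3]   # several head drops appear
alerts = [monitor.update(p) for p in samples]
print(alerts)
```

A real system would of course fuse this with eyelid closure, steering behavior and the behavioral models discussed above, rather than rely on a single cue.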

Brain-computer interfaces have been demonstrated to be capable of detecting intentions before they are actually turned into activity. In fact, recent studies have shown that an intention may be present in brain processing even before it is perceived by the brain's owner. Hence, in the long term, once seamless BCIs become feasible, it might become possible to "mine" the brain for intentions. Clearly, this opens up a Pandora's box in terms of privacy and ethics.

In certain complex settings, intention recognition can become important for a machine to "understand" the command received from its human operator. Teleoperation of drones may be one example, as pointed out in a recent paper proposing the use of Convolutional Neural Networks (CNNs).
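The core idea behind applying convolutions to operator input can be illustrated with a single hand-tuned 1-D kernel sliding over a joystick time series: a full CNN would learn many such kernels from labeled data. The kernel and signals below are invented for the example and do not come from the cited paper.

```python
import numpy as np

# Toy sketch: a 1-D convolution responds to a characteristic motion pattern
# in an operator's joystick trace. A real CNN learns many such kernels;
# here one is hand-tuned purely for illustration.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (really cross-correlation, as in CNNs)."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

# Kernel tuned (by hand, here) to respond to a sharp push-and-release
kernel = np.array([-1.0, 2.0, -1.0])

push_release = np.array([0, 0, 1, 0, 0], dtype=float)        # a quick "nudge"
steady_drift = np.array([0, .25, .5, .75, 1], dtype=float)   # a sustained "hold"

print(conv1d(push_release, kernel).max())   # strong response to the nudge
print(conv1d(steady_drift, kernel).max())   # no response to the linear drift
```

The kernel fires on the transient and stays silent on the ramp, which is the kind of discriminative feature a trained network would use to classify what the operator intends.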

Machines can also show signs that help in intention recognition; e.g., deceleration by a preceding vehicle in certain situations may suggest an intention to turn.

Discovering the intention of a machine requires a similar process of evaluating data against a virtual model of that machine's behavior. If no human is involved (unlike, say, a vehicle whose actual behavior is the result of its driver's behavior), the point is to understand the decision process guiding the behavior of that machine. However, in the future, as machines act on the basis of their embedded artificial intelligence, machine intention recognition will become, in a way, more similar to human intention recognition.

In the case of complex systems whose behavior emerges from the loose interactions among their components, intention recognition takes a different spin. Observing a flock of starlings, we can see it continually changing its shape and direction. This is the result of an emergent behavior arising out of a multitude of (predictable) individual behaviors. In these cases, intention recognition is played on a different level, since there is no "intention", just an effect. Complexity science, including the study of small-world networks, is the discipline to be used in this area.
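The flock-of-starlings point can be made concrete with a toy simulation: each "bird" follows one simple rule, steering slightly toward the flock's centroid (a simplified stand-in for the local cohesion rule of a real boids model), yet the flock as a whole contracts in a way no individual bird "intended". The rule and parameters are invented for illustration.

```python
import random

# Toy emergent behavior: simple per-bird rules produce a flock-level effect
# that exists only at the collective level. Parameters are illustrative.

random.seed(0)
birds = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(30)]

def spread(flock):
    """Mean squared distance of the birds from the flock's centroid."""
    cx = sum(b[0] for b in flock) / len(flock)
    cy = sum(b[1] for b in flock) / len(flock)
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in flock) / len(flock)

def step(flock, cohesion=0.05):
    """Each bird nudges itself toward the centroid; nothing more."""
    cx = sum(b[0] for b in flock) / len(flock)
    cy = sum(b[1] for b in flock) / len(flock)
    return [[x + cohesion * (cx - x), y + cohesion * (cy - y)]
            for x, y in flock]

before = spread(birds)
for _ in range(50):
    birds = step(birds)
after = spread(birds)
print(after < before)   # the flock tightens, though no bird planned it
```

Asking "what does the flock intend?" has no answer at the level of any single bird; the contraction is an effect of the interaction rules, which is exactly why intention recognition for emergent systems must operate at the collective level.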

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines, and 14 books.