SAS Delphi results – Self-Aware Machines

Machine awareness does not necessarily imply Machine Consciousness, although it would not be possible to achieve consciousness without awareness. Credit: Futurizon, Pearson.

Area 6 – Self-Aware Machines

Technology makes it possible to create smaller and smaller objects that can self-coordinate to achieve complex goals. Each component is an autonomous system in its own right and is relatively cheap to manufacture and deploy. When clustered with many other similar (or identical) components, context awareness along with (flexible or rigid) engagement rules gives rise to an emergent behaviour, similar to what is seen in insect swarms or bird flocks. The overall swarm is cheaper to create, more resilient to component malfunction and can generate complex behaviours, as the sketch below illustrates.
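
To make the idea of emergent behaviour from simple engagement rules concrete, here is a minimal, boids-style sketch. It is purely illustrative and not taken from the survey: the `Agent` class, the rule weights and the neighbourhood radius are all hypothetical choices, but they show how flock-like motion can emerge from local rules with no central controller.

```python
import random

# Illustrative sketch: each agent follows simple local "engagement rules"
# (cohesion, alignment, separation) and flock-like behaviour emerges from
# the swarm without any central coordination.

NEIGHBOUR_RADIUS = 5.0   # how far an agent "senses" its local context
STEP = 0.1               # how strongly the rules pull at each update

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 20), random.uniform(0, 20)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def neighbours(agent, swarm):
    # Other agents close enough to influence this one.
    return [o for o in swarm if o is not agent
            and (o.x - agent.x) ** 2 + (o.y - agent.y) ** 2 < NEIGHBOUR_RADIUS ** 2]

def update(agent, swarm):
    near = neighbours(agent, swarm)
    if near:
        # Cohesion: steer toward the local centre of mass.
        cx = sum(o.x for o in near) / len(near)
        cy = sum(o.y for o in near) / len(near)
        # Alignment: match the neighbours' average velocity.
        avx = sum(o.vx for o in near) / len(near)
        avy = sum(o.vy for o in near) / len(near)
        # Separation: move away from neighbours that are too close.
        sx = sum(agent.x - o.x for o in near)
        sy = sum(agent.y - o.y for o in near)
        agent.vx += STEP * ((cx - agent.x) + (avx - agent.vx) + sx)
        agent.vy += STEP * ((cy - agent.y) + (avy - agent.vy) + sy)
    # Keep the speed bounded so the simulation stays stable.
    speed = (agent.vx ** 2 + agent.vy ** 2) ** 0.5
    if speed > 1.0:
        agent.vx, agent.vy = agent.vx / speed, agent.vy / speed
    agent.x += agent.vx
    agent.y += agent.vy

if __name__ == "__main__":
    swarm = [Agent() for _ in range(30)]
    for _ in range(100):          # the flocking pattern emerges over time
        for a in swarm:
            update(a, swarm)
    print(swarm[0].x, swarm[0].y)
```

The point of the sketch is that no single agent "knows" the flock exists; the collective pattern is a by-product of each cheap component applying the same local rules.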

Q 6.1

Will machines ever become self-aware (i.e., will they perceive themselves as an entity)?

The experts unanimously agree that this will be achieved.

Q 6.2

Will machines ever become aware of why something is happening in an environment that includes humans (i.e., will machines become aware of deep human intention, not just of their probable behaviour)?

The experts split roughly in half: one half foresees a future where a machine will be able to internalise deeper human intentions and their possible motivations; the other half does not believe that a machine can feel what a human feels, only evaluate that human's probable behaviour.

The line between probable behaviour and human intention is very blurred. Is it possible that machines will understand us better than we understand ourselves most of the time? Absolutely. But that has more to do with the human ability to self-delude than with the abilities of the machine.

In general, this is part of an ineluctable drive of intelligent technologies. The good news is that humans will be in a position to design new types of self-awareness with varied and measured degrees of psycho-technical autonomy.

Q 6.3

Will machines become aware of other machines’ awareness?

The experts unanimously agree that this will be achieved.

Notice that this is the same, or a similar, computational sense in which machines appreciate human awareness (via AR or other means of conveyance or internal representation).

Q 6.4

Will other-aware machines acquire a fully-developed theory of mind (the ability to recognize and attribute mental states—thoughts, perceptions, desires, intentions, feelings, emotions—to oneself and to others, and to understand how these mental states might affect behaviour) equal or superior to that of humans?

The experts unanimously agree that this will be achieved.

However, this will be achieved with limited accuracy, in the form of educated guesses. Given workable representations and data, the reasoning and thinking should all be possible to some degree, but should be thought of as useful information for behavioural guidance rather than accurate information, as is the case for humans.

Q 6.5

What will likely be the roadmap towards full machine awareness (time and quality)?

Most experts foresee this happening beyond the end of this century, hence beyond the horizon of the SAS Initiative.

A gradual growth of machine awareness could unfold as follows:

  • 2020s: understanding of the ambient environment (driven by self-driving vehicles and robots in industrial settings);
  • 2030s: understanding of motivation, i.e., support for predicting other entities' behaviour;
  • 2040s: understanding of feelings;
  • 2050s: empathic relations.

Once the right path is found, evolution will be exceptionally rapid:

  • sensing
  • representations
  • shared mental models
  • cognitive maps
  • autonomous learning
  • high-level reasoning

Q 6.6

Will self-aware machines self-create their own goals (e.g., remain healthy, reproduce, interact with other self-aware machines, interact with humans) and will they “cheat” to achieve them?

A majority of experts foresee a time when machines will be able to, and actually will, create their own goals, whilst a minority see this as a possibility only within a predefined framework. Clearly, this second view, if true, would create fewer issues in terms of controlling the motivation of machines (in a way, machines could evolve while still abiding by Asimov's three laws of robotics).

By 2050, machines will be very efficient at ambient awareness, but still in the early stages of self-awareness and of handling its consequences.

Machines would self-create only sub-goals, in pursuit of the goals humans impose on them. Responsible development should avoid enabling the self-creation of a machine's own goals independent of human goals.

Cheating is a very human notion based on perceived rules and etiquette. It seems likely that a machine might do something a human would consider cheating purely as a more efficient way to achieve a goal. Whether that actually constitutes cheating is a different matter entirely…

Q 6.7

Following on 6.6, will machines experience pain by not reaching their goal and elation by succeeding (in other words, can sensations be used as motivators for machines)?

The majority of experts foresee machines having sensations of pain and of elation/joy similar to what humans feel, and acting on them as a consequence.

This is already the case for some autonomous mobile robots that "feel" frustration when encountering motion limit cycling during navigation and break out of it with momentary random motions. Such motivations have been tools in autonomous systems for some time now.
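
The frustration-and-escape mechanism can be sketched in a few lines of code. The example below is a toy illustration, not a description of any specific robot: the `FrustratedNavigator` class, its thresholds and the "elation" reset are hypothetical, but they capture the idea of a sensation-like signal that accumulates when progress stalls and triggers a burst of random motion to break a limit cycle.

```python
import math
import random

FRUSTRATION_THRESHOLD = 5.0   # hypothetical tuning constant
PROGRESS_EPSILON = 0.05       # minimum useful progress per step (metres)

class FrustratedNavigator:
    """Reactive controller using 'frustration' as a motivator."""

    def __init__(self, goal):
        self.goal = goal
        self.frustration = 0.0
        self.prev_distance = None

    def step(self, position):
        """Return a (heading, speed) command for the current position."""
        distance = math.dist(position, self.goal)

        # Grow frustration when progress stalls, relax it otherwise.
        if self.prev_distance is not None:
            progress = self.prev_distance - distance
            if progress < PROGRESS_EPSILON:
                self.frustration += 1.0
            else:
                self.frustration = max(0.0, self.frustration - 0.5)
        self.prev_distance = distance

        if self.frustration > FRUSTRATION_THRESHOLD:
            # "Elation" of escaping: reset frustration and wander randomly
            # for one step to perturb the robot out of the limit cycle.
            self.frustration = 0.0
            return (random.uniform(-math.pi, math.pi), 0.5)

        # Normal behaviour: head straight for the goal.
        heading = math.atan2(self.goal[1] - position[1],
                             self.goal[0] - position[0])
        return (heading, 1.0)

if __name__ == "__main__":
    nav = FrustratedNavigator(goal=(10.0, 0.0))
    # Simulate a robot stuck oscillating around the same spot.
    for pos in [(1.0, 0.0), (1.1, 0.0), (1.0, 0.0)] * 3:
        print(nav.step(pos))
```

Whether one calls the internal variable "frustration" or simply an error counter is exactly the point of the debate reported below: the behavioural effect is the same, the interpretation differs.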

A minority argues that such sensations can be programmed into a machine, even with great sophistication leading to the appreciation of nuances, and will result in a change of behaviour aimed at decreasing pain and increasing elation/joy, but that this cannot be considered equivalent to having human feelings. Notice, however, that this is basically the Chinese Room objection raised when discussing the Turing test.

Q 6.8

What will the human–machine relation be once the latter becomes aware?

The majority of experts do not foresee a change in the human–machine relationship as machines become more sophisticated, even when they acquire human-like behaviour influenced by human-like feelings. A minority of experts foresee a point when it will be difficult for humans not to equate a machine with a living being, with associated empathy and the need to have a machine "feel better". This will also raise the issue of machine rights and their protection.

Humans will be likely to consider machines as living entities, although they will remain at a lower level than humans themselves, more like the way we tend to consider animals (and, at a lower level, plants). Some machines, sharing their life with us, will enter into an empathy space and will be seen as pets of a sort.

Q 6.9

Will humans “humanize” self-aware machines?

The experts unanimously agree that this will happen.

Some humans will try to "humanise" machines. Note that if machines develop emotions and self-awareness, doing so will not be at all like reprogramming them, but more like trying to change another human, and so may fail or have negative consequences.

Q 6.10

How might we learn from self-aware machines?

The experts split roughly in half: one part foresees that once such a point is reached humans will apply to machines the same paradigm applied to humans, hence will try to learn from a machine as we try to learn from other humans; the other half foresees the opening of new learning paradigms (e.g., establishing a relationship that leads to exploiting machine capabilities in the service of humans without the need to learn from the machine).

Machines will likely develop their own sense of the world and of the relations existing within it. Some of these may come as unexpected to us, and we can learn from them.

Interaction with self-aware machines could make us more human, by fostering greater respect for life and self-awareness. In reality, we might also learn many negative traits, such as perfecting the art of bullying a machine into subservience.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.