Awareness, Intention, Sentiment technologies in SAS – III

From the present exploitation of Artificial Intelligence in many areas, the expectation is that machines will reach a level of human intelligence before 2050, the so-called Artificial General Intelligence (AGI). From then on, Artificial Intelligence will keep growing, exceeding our human intelligence: Artificial SuperIntelligence (ASI). That is the point where machines and swarms may take the upper hand in defining goals. Some expect this to happen around 2075. Graph credit: ATKearney. Source: Nick Bostrom – Superintelligence: Paths, Dangers, Strategies

A second area of awareness relates to Goal Awareness.

The statement I made in the previous post in this series, “… and select the most appropriate one”, implies that there is a metric, a framework, to identify the good one. This has been the case in autonomous systems, where the criterion of “good” was hard-wired into the system, like “take the option that reduces fuel consumption”. In symbiotic autonomous systems there are several independent systems, and it may not be straightforward to hard-wire a metric into each of them and have them all make sense as a whole, since in most cases they have been designed independently of one another. Moreover, the overall goal may require some adjustment to the individual goals.
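To make the difficulty concrete, here is a minimal sketch of what reconciling independently designed metrics might look like. All the system names, features, and weights below are invented for illustration: each component system scores an option against its own hard-wired criterion, and a coordinator combines the scores under weights that encode the overall goal.

```python
# A minimal sketch (system names, features, and weights are hypothetical):
# each independently designed system scores an option against its own
# hard-wired metric, and a coordinator must reconcile them into one choice.

from typing import Callable, Dict, List

Option = Dict[str, float]  # e.g. {"fuel": 3.2, "time": 40.0, "risk": 0.1}

# Each system's metric: lower is better for that system.
system_metrics: Dict[str, Callable[[Option], float]] = {
    "propulsion": lambda o: o["fuel"],          # minimize fuel consumption
    "scheduler":  lambda o: o["time"],          # minimize mission time
    "safety":     lambda o: o["risk"] * 100.0,  # penalize risk heavily
}

def select_option(options: List[Option], weights: Dict[str, float]) -> Option:
    """Pick the option with the lowest weighted sum of the individual metrics.

    The weights encode the overall goal; changing them adjusts how much each
    individual goal counts, as the text suggests may be necessary.
    """
    def combined_cost(o: Option) -> float:
        return sum(w * system_metrics[name](o) for name, w in weights.items())
    return min(options, key=combined_cost)

options = [
    {"fuel": 3.2, "time": 40.0, "risk": 0.10},
    {"fuel": 4.0, "time": 25.0, "risk": 0.05},
]
print(select_option(options, weights={"propulsion": 1.0, "scheduler": 0.5, "safety": 2.0}))
```

The point of the sketch is that the weights live outside any single system: adjusting the overall goal means re-weighting, not re-wiring, the individual metrics.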

As artificial intelligence, and in particular AGI, takes over, the system can learn not just the most effective strategies but can also start to create its own framework upon which to take decisions. It will, in a way, develop its own goals.
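A loose, concrete analogy for “learning a framework rather than being programmed with one” is tabular Q-learning. The sketch below is invented for illustration (a four-state corridor with a reward at one end): the designer supplies only the reward signal and the update rule; the table of action preferences that actually drives decisions is built by the agent from its own experience.

```python
import random

random.seed(0)
states, actions = range(4), (0, 1)  # a 4-state corridor; action 0 = left, 1 = right
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

def step(s, a):
    """Move along the corridor; reaching state 3 pays reward 1 and restarts."""
    s2 = max(0, min(3, s + (1 if a == 1 else -1)))
    return (0, 1.0) if s2 == 3 else (s2, 0.0)

s = 0
for _ in range(10_000):
    # Epsilon-greedy: mostly follow current preferences, sometimes explore.
    a = random.choice(actions) if random.random() < epsilon else max(actions, key=lambda a: Q[(s, a)])
    s2, r = step(s, a)
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
    s = s2

# The learned "decision framework": it should prefer action 1 (right) everywhere.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(3)})
```

The analogy has a clear limit: Q-learning still optimizes a reward a human chose. The concern raised in this post begins where the system starts shaping that criterion itself.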

As observed in the book “Life 3.0: Being Human in the Age of Artificial Intelligence”, and as discussed by the Future of Life Institute, the problem is not (so much) the emergence of a malicious AI whose goals oppose our human goals, but rather the prevalence of AGI (Artificial General Intelligence) competence leading it to create goals that are not compatible with ours. How could this be?

It is the same situation we face when deciding to build a dam to create hydroelectric power. By flooding the area with a lake we kill anthills, to mention just one side effect, and yet we do not give it a second thought. The benefit of having electric power far outweighs, in our framework, the loss of millions of ants. Were we able to ask the ants involved, we might get a different perspective on the matter! So, what if, for a superior benefit, AGI sets itself a goal whose side effects include the loss of human lives? Notice that it could be a perfectly “good” goal, like recovering from an epidemic or a famine. Imagine a swarm of drones engaged to help solve the problem in a remote region, carrying drugs and food. Once on the spot, their collective intelligence works out that by killing a certain number of the elderly, the combined effect of drugs and food will defeat the epidemic or famine. It is unlikely that most people would accept that kind of solution as ethically viable.

This is both an ethical question, and as such it will be addressed in the relevant section of this White Paper, and a technology question: how can we define (and possibly control the outcome of) autonomous goal setting such as the one that might become viable in the context of symbiotic autonomous systems? How can we implement a system of shared intelligence that leads to an overall emerging intelligence that is still under our control?

It doesn’t stop here. In the future, possibly before the end of this century (with some betting on a date around 2075), AGI will be superseded by ASI, Artificial SuperIntelligence: an intelligence far beyond the human one, and one that will have “embedded” in it the capability to set its own goals.

This is an open area of research that involves:

  • Transferring our goals to AI – notice this is more about an AI system “learning” our goals than about programming them in. By definition, if the system is autonomous you cannot “program” it; you “interact” with it. Learning here means that the AI cannot stop at learning “what” we do; rather, it should understand “why” we do such things.
  • Having AI adopt our goals – it is notoriously difficult to get other people to adopt our goals; just imagine a machine. To adopt our goal, a human needs to find it compatible with his own framework, be open to adopting a goal he does not already have, and not have already committed to a different goal (possibly self-generated). With machine intelligence the situation is similar, just trickier: the machine should be smart enough to understand our goal and ready to adopt it, yet not so smart as to consider that the only goals that really matter are the ones it can generate itself. Researchers are studying this aspect using inverse reinforcement learning (see the sketch after this list).
  • Having AI retain our goals – this is probably the trickiest of the three. Again, looking at humans: as we grow we (generally) get smarter, more intelligent, and we change our goals. We have a few goals that are hard-wired in our genes, like “reproduce!”. Yet, once we understand how “it works”, we change the goal, keeping the fun part and dropping the reproduction part. The same may apply to a superintelligent system: as it considers its goals, it may reflect on them and eventually decide to change them.
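Inverse reinforcement learning, mentioned in the second point above, deserves a small illustration. What follows is a toy sketch, not the algorithms actually used in research: the scenario, the two features, and the numbers are all invented, loosely following the feature-matching intuition of Abbeel and Ng. The idea it demonstrates is that the learner is never told the reward function; it infers reward weights under which the human demonstrations come out best.

```python
import numpy as np

# Feature vectors phi(s) for three candidate behaviors:
# (goal_achieved, people_harmed) -- both feature names invented for illustration.
features = {
    "deliver_aid":  np.array([1.0, 0.0]),
    "do_nothing":   np.array([0.0, 0.0]),
    "harmful_plan": np.array([1.0, 1.0]),  # achieves the goal, but harms people
}

# Human demonstrations: the expert always delivers aid harmlessly.
expert_mu = features["deliver_aid"]  # the expert's average feature vector

w = np.array([1.0, 1.0])  # initial guess: assume every feature is rewarding
for _ in range(100):
    # Behavior the learner currently believes is best, under r(s) = w . phi(s)
    best = max(features, key=lambda s: w @ features[s])
    # Nudge the weights so the expert's features score higher than the learner's
    w += 0.1 * (expert_mu - features[best])

print("learned reward weights:", w)
# The weight on "people_harmed" ends up negative: without ever being told so,
# the learner infers that the demonstrations implicitly avoid harming people.
```

In a real system the demonstrations would be whole trajectories and the learner’s best response a full policy; the sketch only shows the direction of inference, which is what matters here: behavior in, values out.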
