Looking ahead to 2050 – Artificial Intelligence IV

To close this brief set of speculations on Artificial Intelligence in 2050, let’s consider the implications of a world populated by intelligent (autonomous) entities. In the previous post I stressed that there is no consensus that a machine passing the Turing test (behaving in a way that is indistinguishable from a human) would have "feelings". What about "free will"? What about "character" or "mood"? Would these machines be sociable? Would they compete with one another or cooperate?
The point is really tricky. You cannot simply say: "No problem, we will program them so that they will be 'good'".
The essence of creating artificial intelligence, as with human intelligence, is to have an entity with its own open space of decision. In fact, the most recent approaches to making machines learn and get smarter let them work out the "best" strategy by themselves, improving continuously. The problem, of course, is that "better" can mean different things to different people, communities, cultures, societies and… machines!
In this respect it is very interesting to look at studies, like the one recently published by Google researchers, investigating how "intelligent agents" behave when they confront one another. It turns out that, depending on the resources available to them (that is, the power of their "brain", the processing power available), they may lean towards either a cooperative or a competitive strategy. Notably, if one agent has more processing power than the others, it tends to drift into a competitive attitude, whilst in the presence of equivalent processing power the agents tend to lean towards a cooperative attitude. It looks like the "law of the strongest" is at play in the machine kingdom, as it has been throughout the history of human civilisation.
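The cooperate-or-compete trade-off the study touches on can be illustrated with a classic toy model, the iterated prisoner's dilemma. The sketch below is purely illustrative: the payoff values and the two strategies are my own assumptions, not the setup of the Google study. It shows why mutual cooperation pays off over repeated encounters, while a purely competitive agent gains only a short-lived edge.

```python
# Toy iterated prisoner's dilemma (illustrative assumptions, not the
# actual experiment from the Google study discussed above).

# Standard payoff matrix: (my payoff, their payoff) for each move pair,
# where "C" = cooperate and "D" = defect (compete).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(strategy_a, strategy_b, rounds=100):
    """Run a repeated game; each strategy sees the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = "C"  # assume both open by cooperating
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opp_last: opp_last  # cooperative: mirror the opponent
always_defect = lambda opp_last: "D"     # competitive: never cooperate

coop = play(tit_for_tat, tit_for_tat)      # mutual cooperation: (300, 300)
comp = play(always_defect, always_defect)  # mutual defection: (100, 100)
mixed = play(always_defect, tit_for_tat)   # defector wins round 1, then both lose
```

Over 100 rounds the two cooperators each earn 300 points, while the two competitors earn only 100 each; the defector facing a cooperator exploits just the first round before being punished. The interesting question the study raises is how an agent's available processing power shifts it between these strategies.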
Over the millennia, with difficulty and several relapses, humankind has built a social ethic and has striven to cooperate rather than compete (though the widespread occurrence of wars and conflicts reminds us this is not entirely so…). In business too, particularly in today’s world, competition is the name of the game.
This is a big issue. True, one can assume that the open space of decision for AI machines will be bounded, but those boundaries will get broader and fuzzier over time. Besides, who is going to set those boundaries? Who is programming the programmer?
Again, we are in uncharted territory. This points to the need for broad cooperation among scientists, researchers, sociologists, politicians and regulators. I have a bad feeling about just letting the market evolve and hoping for the best.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.