How does a machine think?

Through its sensors a self-driving car sees the road ahead. But seeing is not enough: it needs to understand what it is seeing, and that is not a trivial feat. In the image on the left the car perceives the road turning to the left and steers correctly. On the right the photo is darker, the navigation system gets it wrong, and the car crashes into the guardrail. Credit: Columbia University

We are so used to doing most of our daily activities that we seldom stop to reflect on what goes on in our head. Our senses feed our brain with data, and it is up to the brain to make "sense" of it and take action (actually, this is not always true: our spinal ganglia are the ones taking action when you touch a burning stove…). And our brain is really good at making sense of the world, as long as it pertains to our usual daily life. So when we drive, it can seamlessly spot that the road turns left and send the appropriate commands to our arms to turn the steering wheel just … right! The light may be very different: noon sunshine, sunset, dusk… Yet the brain compensates for the differences, homing in on the meaning of what is "out there".

Self-driving cars have been made possible by huge progress in understanding images. This understanding comes from analyses performed by software using deep learning (DL) and convolutional neural network (CNN) technologies. What is fascinating, but also scary, about these technologies is that we do not know how they work! It is not like coding an algorithm, where you can follow step by step what is going on. Here the processing keeps changing and is elusive. With this technology the machine learns over time, so in principle it gets better. On the other hand, one cannot be sure of the outcome, because one does not know what has been learnt, and what has not been learnt cannot be inferred from what has been learnt so far.

In a machine working through an algorithm you can program the concept: "you can pull a rope but you cannot push it". Using CNN or DL, a machine can come across a situation where a rope is being pulled, but it is unlikely to come across a situation where a rope is being pushed. On the other hand, it may well come across situations where poles are both pulled and pushed. Will it associate a rope with a pole and hence "believe" that one can push a rope? This is just an example to give you a feeling for the difference between an algorithm-based decision and an artificial-intelligence-based decision.

Clearly this difficulty in understanding how a machine thinks is a huge problem when you need to assure that its reasoning is correct.

Today testing is done by submitting different situations to the machine for analysis and checking its response against the expected, correct outcome. The problem is that as the machine responds, it is also changing its "experience", and that might result in a different outcome next time. Do you see the problem?

Researchers at Columbia and Lehigh universities have created a tool, DeepXplore, designed to test deep learning systems. The tool creates a variety of situations, using real-world test images, to expose logical errors in the neural networks. The figure shows an example: the same image is presented under two different illuminations. The system should derive the same understanding and therefore take the same decision, but in this case it does not. The darker image is interpreted differently, resulting in the car crashing into the guardrail.

DeepXplore generates thousands of images simulating partial darkening, such as would result from dust on the camera lens or from an object partially blocking the view. It has been able to spot a number of bugs that were not caught by conventional testing systems.
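The idea can be sketched in a few lines of code. This is not DeepXplore itself, just a toy illustration of two of its ingredients under simplifying assumptions: perturbing an image (darkening, occlusion) and "differential testing", where several models analyse the same input and any disagreement flags a potential bug, with no hand-labelled ground truth needed. The two stand-in "models" below are hypothetical lambdas, not real neural networks.

```python
import numpy as np

def darken(image, factor=0.5):
    """Simulate reduced illumination (e.g. dust on the lens) by scaling pixels."""
    return np.clip(image * factor, 0, 255).astype(image.dtype)

def occlude(image, top, left, size):
    """Simulate an object partially blocking the view by zeroing a square patch."""
    out = image.copy()
    out[top:top + size, left:left + size] = 0
    return out

def differential_test(models, images):
    """Return indices of images on which the models disagree --
    a DeepXplore-style oracle that needs no expected-output labels."""
    disagreements = []
    for i, img in enumerate(images):
        predictions = {model(img) for model in models}
        if len(predictions) > 1:
            disagreements.append(i)
    return disagreements

# Hypothetical stand-ins for two trained steering models:
# both steer left on bright images, but model_b fails on dark ones.
model_a = lambda img: "left"
model_b = lambda img: "left" if img.mean() > 60 else "straight"

bright = np.full((8, 8), 200, dtype=np.uint8)   # well-lit road scene
dark = darken(bright, factor=0.2)                # same scene, dim light

print(differential_test([model_a, model_b], [bright, dark]))  # → [1]
```

The bright image produces agreement, so it tells the tester nothing; the darkened copy exposes a disagreement, which is exactly the kind of input a human should then inspect and add to the training or test set.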

The big question, how does a machine think, remains. And it remains because machines are now mimicking, in a way, more and more of our brain, and the truth is that we do not yet know how we think!

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines, and 14 books.