Learning like a child

Children learn to understand the world largely by themselves, through repetition and association. What if a robot could do the same, only faster? Image credit: MIT

Machine learning is making significant progress: software agents are duplicating themselves to explore different strategies and learn in the process, and researchers are finding new ways to tag reality (movie clips, for example) so that machines can learn by capturing those tags.

Children do not need any of that to learn. They experience life and build associations that progressively give meaning to their perceptions.

Researchers at MIT wondered whether it would be possible to equip robots with a child-like inquisitive mind and let them learn by themselves. Of course, a robot that learns at a child's pace would not be much good in this fast-paced world: you wouldn't wait 3, 6, or 12 years for the robot to learn… However, robots can experience and process data far faster than a child (or a grown-up…), so it might be possible to use the same learning paradigm while compressing it in time.

That is what they did, and the first results indicate that it may indeed work!

Although the objective is to learn as a child does, it is not child's play at all. The researchers had to manage multiple streams of sensory input and build a framework that constantly updates itself, attributing a probability (a confidence value) to each object singled out in a scene and to the relations among objects. Once the confidence value for an object or relation becomes satisfactory, it can be used as a stepping stone to understand other objects.
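
To make the idea concrete, here is a minimal Python sketch of what such a confidence-updating loop might look like. The class name ConfidenceTracker, the running-average update rule, and the 0.9 "stepping stone" threshold are illustrative assumptions, not details taken from the MIT work.

```python
# A minimal sketch of the bootstrapping idea described above. All names
# and parameters here are hypothetical, not from the MIT research.

class ConfidenceTracker:
    """Tracks a confidence value in [0, 1] for each hypothesis
    (an object singled out in a scene, or a relation between objects)."""

    def __init__(self, threshold=0.9, rate=0.2):
        self.confidence = {}        # hypothesis -> current confidence
        self.threshold = threshold  # above this, a hypothesis is "anchored"
        self.rate = rate            # how strongly one observation moves us

    def observe(self, hypothesis, evidence):
        """Nudge the confidence toward 1.0 (evidence confirms the
        hypothesis) or toward 0.0 (evidence contradicts it)."""
        current = self.confidence.get(hypothesis, 0.5)  # start undecided
        target = 1.0 if evidence else 0.0
        self.confidence[hypothesis] = current + self.rate * (target - current)

    def anchors(self):
        """Hypotheses confident enough to serve as stepping stones
        for interpreting new objects and relations."""
        return {h for h, c in self.confidence.items() if c >= self.threshold}


if __name__ == "__main__":
    tracker = ConfidenceTracker()
    # Repeated consistent observations (the same object keeps showing up
    # across the sensory streams) drive the confidence up...
    for _ in range(12):
        tracker.observe("object:cup", evidence=True)
    # ...while contradictory observations keep a hypothesis undecided.
    for seen in (True, False, True, False):
        tracker.observe("relation:cup-on-table", evidence=seen)
    print(tracker.anchors())    # {'object:cup'}
    print(tracker.confidence)
```

The key property this toy model shares with the description above is that nothing is labeled in advance: confidence simply accumulates from repeated, consistent experience, and only well-established hypotheses are used to interpret new ones.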

In this way a software agent can learn to understand, and speak, any language. Notice that even a single language is spoken, and understood, in many different ways depending on the environment. When we talk we use partial sentences and take for granted a shared understanding of the context: "if you go out, it is cold" really means "get a sweater before going out!". The same effect might result from saying "hey, be careful, don't catch a cold!". Here the word "cold" has the same syntax as before, but the semantics is different. Yet the effect of both sentences would be the same: urging the person to wear a sweater.
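
A toy illustration of that point, with hypothetical data rather than anything from the research: a learner that keys on the observed effect of an utterance, rather than its surface words, would group the two "cold" sentences together despite their different semantics.

```python
# Hypothetical observations pairing utterances with their observed effects.
observations = [
    ("if you go out, it is cold", "listener puts on a sweater"),
    ("hey, be careful, don't catch a cold!", "listener puts on a sweater"),
    ("this soup is cold", "listener reheats the soup"),
]

# Group utterances by the effect they produce: the first two sentences,
# though syntactically different, end up associated with the same intent.
intents = {}
for utterance, effect in observations:
    intents.setdefault(effect, []).append(utterance)

for effect, utterances in intents.items():
    print(f"{effect!r} <- {utterances}")
```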

If this research results in effective learning, we might have robots that learn to speak a dialect in a short time, just by listening to us speaking it. That would make machines much more integrated with humans.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011 he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.