They look alike …

Using a photo as a “style” model, the application can apply this blueprint to another photo, creating an image that is somewhat similar to both. Credit: Cornell University

One of the things that distinguishes human beings from computers is the capability of the former (us) to detect similarities. Computers, by contrast, are extremely good at telling differences. I remember, as a boy, playing a game in a magazine that involved finding 20 tiny differences between two images. It was quite challenging!

At the same time, think about the everyday situation of meeting a person you know. You immediately recognise her. It doesn’t matter that she is dressed differently from the last time you saw her: she might be wearing a dress you have never seen before, with a different hairstyle and hair dye, yet it is definitely “her”. It might also be ten years since you last saw her, and yet you still recognise her, going as far as to say “you look just the same” (which is not true…).

We go even further in our capacity to recognise similarities. We look at the sky and feel that a cloud has the shape of a wolf, or looks like the face of a pirate… We look at a painting and feel it is in the same style as a famous painter…

Indeed, we are great at finding similarities. Computers are not.

Well, this may no longer be true. Google provides an image search service that can retrieve images “similar” to the one you submit as a model; hence, its computers are able to find similarities.

A team of researchers at Cornell University, in collaboration with an Adobe team, has created an application that can extract a “style” from an image and then apply the same style to another image.

The application is based on deep learning algorithms that extract the “style” and reapply it in such a way as to recreate it. As you can see in the image, “Deep Photo Style Transfer” (this is the name of the application) recognises that there are a house and trees in the first (reference) photo and applies the style correctly, identifying the house and trees in the second (target) photo. Similarly, in the second example, the algorithms recognise the clouds and the water and transpose the reflection effect correctly. You can find more examples here.
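For readers curious about the mechanics, below is a minimal sketch of the Gram-matrix style loss at the heart of neural style transfer (Gatys et al.), the family of techniques Deep Photo Style Transfer builds on; the Cornell/Adobe work adds a photorealism regularisation term and semantic segmentation masks that are omitted here. The layer indices, weights, and step counts are illustrative assumptions, not the paper’s actual settings.

```python
# Sketch of Gram-matrix style transfer (Gatys et al.) in PyTorch.
# Deep Photo Style Transfer extends this with a photorealism regulariser
# and per-segment style losses, both omitted here for brevity.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained VGG19 used as a fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

# Illustrative layer choices: conv4_2 for content, conv1_1..conv5_1 for style.
CONTENT_LAYERS = {21}
STYLE_LAYERS = {0, 5, 10, 19, 28}

def extract(img):
    """Collect activations at the chosen VGG layers."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS | STYLE_LAYERS:
            feats[i] = x
    return feats

def gram(feat):
    """Gram matrix: channel-wise feature correlations, the 'style' statistic."""
    b, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def transfer(content_img, style_img, steps=300, style_weight=1e6):
    """Optimise a copy of the content image to match the style statistics.

    Both inputs are assumed to be (1, 3, H, W) tensors already normalised
    with the usual ImageNet mean/std.
    """
    c_feats = extract(content_img)
    s_grams = {i: gram(f) for i, f in extract(style_img).items() if i in STYLE_LAYERS}
    target = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        t_feats = extract(target)
        c_loss = sum(F.mse_loss(t_feats[i], c_feats[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram(t_feats[i]), s_grams[i]) for i in STYLE_LAYERS)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach()
```

The key design choice is that “style” is represented not by raw pixels but by correlations between feature channels (the Gram matrices), which capture textures and colour statistics while discarding spatial layout; the content loss preserves the layout. It is the extra photorealism constraint in the Cornell/Adobe paper that keeps the output looking like a photograph rather than a painting.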

This result is quite amazing because it looks so straightforward to us, whilst it was beyond the reach of computers just a few years ago.

The scary side is that computers can emulate our capabilities more and more; I would say they are going beyond the requirements of the Turing test. Just a few days ago I posted news of computers able to mimic a specific human voice …

The singularity, indeed, is on the horizon.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.