AI can slow you down

On the left, a frame captured by a digital movie camera; on the right, a frame generated using AI to artificially slow down the footage. Credit: NVIDIA

Artificial Intelligence is usually associated with improving performance, with making things “faster”. Well, here is an application of AI that slows things down!

NVIDIA researchers have used AI to transform a standard video into a slow-motion one while preserving its quality (watch the clip).

Of course, one could in principle have the AI look at the slow-motion video and create an even slower one… The problem is that with every step the AI has to get more creative in imagining the missing parts in a way that both makes sense and corresponds to reality.

Graphic editing programs, like Photoshop, have offered interpolation functions to increase the resolution of photos for many years. Increasing the resolution of a photo is similar to the process of creating a slow-motion movie: in both cases you need to add something that is not there.

To increase the resolution you take adjacent pixels and spread them apart, inserting extra pixels between them. The strategy for deciding how the extra pixels should look is based on a mathematical algorithm (interpolation) that takes into account a radius of several pixels and, based on the trend of change as you move away from the centre, decides how the new pixels should look. Lately, interpolation algorithms have started to make use of artificial intelligence to understand what the image is about. As an example, if you have the edge of an object, that edge has to be kept as it is: you don’t want to expand the edge itself, but rather the inner and outer parts of the object. This preserves contrast and edge sharpness.
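To make the idea concrete, here is a minimal sketch of classic (non-AI) spatial interpolation: bilinear upscaling, where each inserted pixel is a distance-weighted blend of the four original pixels around it. The function name and the tiny example image are my own illustration, not anything from the NVIDIA work.

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Upscale a 2-D grayscale image by bilinear interpolation.

    Each output pixel is mapped back to a fractional position in the
    source image, and its value is a weighted average of the four
    surrounding source pixels.
    """
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    out = np.empty((new_h, new_w), dtype=float)
    for i in range(new_h):
        for j in range(new_w):
            # Map the output coordinate back into the source grid.
            y = i * (h - 1) / (new_h - 1)
            x = j * (w - 1) / (new_w - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # Blend the four neighbours according to distance.
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out

# A tiny 2x2 image with a sharp vertical edge, upscaled to 4x4.
small = np.array([[0.0, 1.0],
                  [0.0, 1.0]])
print(upscale_bilinear(small, 2))
```

Note how the sharp 0-to-1 edge becomes a gradual ramp (0, 1/3, 2/3, 1): exactly the smearing of edges that the AI-aware interpolators described above try to avoid.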

To create a slow-motion video the interpolation is done temporally rather than spatially. The algorithm considers two consecutive frames and inserts new frames between them. If a pixel at a certain location is the same in both frames, the inserted frames keep that pixel unchanged. If, on the other hand, the pixel at that location differs between the two consecutive frames, something has changed, and the value of that pixel in the inserted frames needs to be interpolated. The difficulty is in making an interpolation that leads to credible results: the resulting slow-motion clip needs to be smooth, and this is where the AI steps in. The AI has to understand what is going on by looking at several frames and, based on that, create a temporal interpolation.
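The simplest possible version of this temporal interpolation is a per-pixel linear blend (a cross-fade) between the two consecutive frames. This is a sketch of that naive baseline, not NVIDIA’s method: their system uses deep networks that reason about motion, precisely because a plain blend produces ghosting instead of smooth movement.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_new):
    """Insert n_new intermediate frames between two consecutive frames
    by per-pixel linear interpolation.

    A pixel that is identical in both frames stays unchanged; a pixel
    that differs blends gradually from its first value to its second.
    """
    frames = []
    for k in range(1, n_new + 1):
        t = k / (n_new + 1)  # fractional position in time between a and b
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Two 1x3 "frames": only the middle pixel changes (0 -> 90).
a = np.array([[10.0, 0.0, 5.0]])
b = np.array([[10.0, 90.0, 5.0]])
for f in interpolate_frames(a, b, 2):
    print(f)
# prints [[10. 30.  5.]] then [[10. 60.  5.]]
```

The static pixels (10 and 5) pass through untouched, while the changing pixel moves through the intermediate values 30 and 60. For a real moving object this blend would fade the object in and out rather than move it, which is why the AI has to figure out *where* things are going, not just *that* they changed.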

Look at the clip offered by NVIDIA showing several examples. They all looked really convincing to me, i.e. I did not detect any AI intrusion, which I guess is what you should expect when AI works well!

This is all good. Yet, one has to wonder where this creation of an artificial reality, one that we can no longer distinguish from the original, will take us.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.