From Photography to … Photography, but it is quite another thing

With computational photography it is possible to create any sort of image. Here is an example: multiple shots taken with an intervalometer and composited manually in Photoshop. The manual composition can be replaced by an automatic one. Image credit: Fredo Durand

Once upon a time there was photography. It used chemicals on film. Film got perfected, it was shipped in rolls, and that is probably what most of us called photography.

Less than 20 years ago digital photography started to appear. It was called digital photography to distinguish it from "real" photography, which at the time offered much better tonal resolution and colour depth.
How many people today still call it "digital photography"? Very few, if any.

Digital photography has become … photography. And it now has amazing resolution and very good tonal and colour depth.

Today we are starting to talk about “computational photography”.

To understand the upcoming shift (it is already upon us) we need to go back to digital photography.

Most people, when they dropped film and picked up a digital camera, kept doing exactly what they used to do with their film camera: look through the viewfinder and click! Of course, the possibility of using a digital camera as if it were a film camera was a plus, since no re-adjustment was required on the human side.

Yet a few stopped looking through the viewfinder and started looking at the (digital) screen. More specifically, they looked at the graphic overlaid on the screen, the histogram, and learnt to "expose to the right". What does that mean? It means those people were no longer trying to take an image as similar as possible to what was actually in front of them. What they were doing was exploiting the capability of the camera's sensor to capture as much data as possible (and that requires exposing to the right). Once as much data as possible has been collected, one can process it with special software to create a slate of images: one pretty close to reality, others that may be a distortion of reality, and a few that may represent a more accurate reality than the one our eyes can perceive.
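As an illustration, here is a minimal sketch of the kind of histogram check that "exposing to the right" amounts to, assuming an 8-bit luminance array and numpy; the function name and the thresholds are mine, purely illustrative:

```python
import numpy as np

def ettr_hint(img: np.ndarray, clip_limit: float = 0.001) -> str:
    """Suggest an exposure adjustment so the histogram hugs the right edge."""
    hist, _ = np.histogram(img, bins=256, range=(0, 255))
    total = hist.sum()
    clipped = hist[-1] / total             # fraction of fully saturated pixels
    right_mass = hist[224:].sum() / total  # mass near the right edge of the scale
    if clipped > clip_limit:
        return "reduce exposure: highlights are clipping"
    if right_mass < 0.01:
        return "increase exposure: the histogram has room on the right"
    return "exposure is close to ETTR"
```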

This processing of data to create an image is what we call “computational photography”.

Smartphones are rapidly shifting to computational photography, and the reason is obvious: it is no longer possible to improve the sensor (increasing resolution while keeping its size fixed would result in smaller pixels, which in turn capture less light), nor the optics (lenses are already a marvel of optical engineering, and increasing their size is not an option since you want a thin phone). The only way to improve quality and performance is to turn to the bits. You can use the sensor to take not one but several (up to 100) photos. Actually, you harvest data, not images; then you use those data to work out a much better image, one that spans as many "stops" of dynamic range as our eye can see. Each stop represents a doubling, or halving, of the light: film had 9-10 stops, and top-of-the-line digital cameras today can reach 10-11 stops (I know I am simplifying a bit, but you don't want me to go into ISO, bit depth and A/D converters, do you?). The human eye has a theoretical dynamic range equivalent to 20 stops, way better than a digital sensor: 64 to 512 times as much, the range being due to the difference between practice (64) and theory (512).
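To make the stop arithmetic concrete, here it is in a few lines of Python. The 20-stop theoretical figure and the 64-512 range come from the text above; the ~17-stop "practical" figure for the eye is an assumption of mine, chosen simply because it reproduces the quoted factor of 64:

```python
# Each stop is a doubling of light, so a gap of n stops is a factor of 2**n.
eye_theory = 20    # theoretical dynamic range of the eye, in stops (from the text)
eye_practice = 17  # assumed practical figure, chosen to reproduce the factor of 64
sensor = 11        # top-of-the-line digital camera, in stops

print(2 ** (eye_theory - sensor))    # 512: the theoretical ratio
print(2 ** (eye_practice - sensor))  # 64: the practical ratio
```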

By taking several pictures (HDR) you can extend the dynamic range to any number of stops. In theory, at least: in practice there are limitations, because of electronic and photonic noise and because all the snapshots have to be taken in a very brief period of time to freeze the image. There is also another problem that makes increasing the stops pointless beyond a certain point. You can create a "file" containing information equivalent to any number of stops, but when you want to "see" it you have to use some sort of display, either printing the image or showing it on a screen, and then you are constrained by the limitations of the chosen medium (a print on glossy paper has a dynamic range in the order of 6 stops, an LCD screen the equivalent of 9 stops).
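For the curious, here is a minimal sketch of how a bracketed burst might be merged into one high-dynamic-range estimate, assuming linear frames with a known white level and known exposure times; the names and the weighting scheme are illustrative, not any particular camera's pipeline:

```python
import numpy as np

def merge_hdr(frames: list[np.ndarray],
              exposure_times: list[float],
              white_level: float = 255.0) -> np.ndarray:
    """Weighted average of linear frames, each scaled back to radiance."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    weights = np.zeros_like(acc)
    for frame, t in zip(frames, exposure_times):
        x = frame.astype(np.float64) / white_level  # normalise to [0, 1]
        w = 1.0 - np.abs(2.0 * x - 1.0)             # trust mid-tones, distrust clipped pixels
        acc += w * (x / t)                          # divide by exposure time -> radiance
        weights += w
    return acc / np.maximum(weights, 1e-6)
```

The merged result still has to be tone-mapped down to the 6-9 stops a print or a screen can actually show, which is exactly the display limitation mentioned above.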

It is not just about dynamic range. With software you can artificially de-focus parts of the image (bokeh), you can highlight certain objects… and much more.
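Here, as a toy illustration, is how a software "portrait mode" might composite a synthetic bokeh, assuming a single-channel image and a depth map of the same shape; real pipelines estimate depth from dual pixels, stereo or machine learning, and use disc-shaped kernels rather than this Gaussian stand-in:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_bokeh(img: np.ndarray, depth: np.ndarray, threshold: float) -> np.ndarray:
    """Keep pixels nearer than `threshold` sharp, blur the rest."""
    img = img.astype(np.float64)
    blurred = gaussian_filter(img, sigma=8)        # stand-in for a real lens-blur kernel
    mask = (depth < threshold).astype(np.float64)  # 1 = subject (near), 0 = background
    return mask * img + (1.0 - mask) * blurred
```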

There is basically no limit to what computational photography can do; it is about software and creativity.

Of course, this flexibility and potential have a dark side: you may be looking at photos that seem real but are actually fake!

It is more than likely that in 5 years' time all photography will be computational photography; it will be the way of taking pictures, so everybody will call it just photography. But, of course, it is quite another thing!

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.