Digital Transformation – Disruptions III

The L16 has 16 lenses; each time you take a photo several of them are used, and software takes care of fusing the individual lens outputs into a single photo. Image credit: Macfilos

From Digital Photography to Computational Photography

The next disruption in photography will be computational photography, once again the result of technology evolution.

To take better quality photographs you needed better equipment, better atoms: a good camera, a good film roll. This did not change with the shift to digital photography, where you still need a good digital camera (with a good sensor and electronics) and a good lens.

By using post-processing software (bits) you can improve your photos, and post-processing software has become more and more powerful. Now part of this post-processing can take place in the camera itself, making your life easier.

We are already starting to see more advanced features that can be performed inside the camera using software (and a lot of processing power), features that would not be possible outside of the camera.

Take the L16 camera. It was a few years in the making (more than its makers expected) and it hit the market in 2018. It is a completely new camera, based on computational photography. It leverages bits much more than atoms (although making the atoms comply was really tricky!). As shown in the first image it has 16 lenses (hence the name!) and these lenses come in 3 focal lengths: 5 at 28mm, 5 at 70mm and 6 at 150mm (equivalent). Yet when you see the image on its screen (which will correspond to the final photo) you can swipe your finger and decide the focal length you actually want, anything between 28 and 150mm! Notice that the result will be an image taken at the focal length you select, even though there is no optical lens delivering that focal length (unless, of course, you select 28, 70 or 150mm).

Do not confuse this with the electronic zoom you have in your digital camera. That one just crops the area of the sensor being used, making objects appear closer as you zoom further and further. With a camera that has a real optical zoom, however, you will notice that the depth of field (and bokeh) changes as you change the focal length; not so if you use the electronic zoom. Also, using an optical zoom (or switching to a lens with a longer focal length) compresses perspective; not so with an electronic zoom.

The difference is that with an electronic zoom you are still relying on atoms (your camera's optical lens and its sensor), whilst in computational photography you are relying on bits.
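To make the idea a bit more concrete, here is a minimal Python sketch (using OpenCV) of how an intermediate focal length could be synthesised from fixed-focal-length captures. The file names and the simple crop-and-resize strategy are my own illustrative assumptions; the real L16 pipeline registers the 16 lens outputs and fuses their detail, it does not merely crop one of them.

```python
# A minimal sketch of how a multi-lens camera like the L16 could synthesise an
# intermediate focal length. The capture file names and the crop-and-resize
# strategy are illustrative assumptions, not Light's actual fusion pipeline.
import cv2
import numpy as np

# Hypothetical captures from the three lens sets (28, 70, 150mm equivalent).
LENS_CAPTURES = {28: "capture_28mm.jpg", 70: "capture_70mm.jpg", 150: "capture_150mm.jpg"}

def synthesise_focal_length(target_mm: float) -> np.ndarray:
    """Approximate a photo at target_mm (between 28 and 150) by cropping the
    widest capture that still covers the requested field of view."""
    # Pick the longest focal length that does not exceed the target.
    base_mm = max(f for f in LENS_CAPTURES if f <= target_mm)
    base = cv2.imread(LENS_CAPTURES[base_mm])
    h, w = base.shape[:2]

    # Field of view shrinks roughly in proportion to focal length, so
    # simulating 100mm from a 70mm capture means keeping a 70/100 central crop.
    scale = base_mm / target_mm
    ch, cw = int(h * scale), int(w * scale)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = base[y0:y0 + ch, x0:x0 + cw]

    # Upsample back to full resolution. A real pipeline would instead register
    # the longer-focal-length captures and fuse their finer detail in here.
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LANCZOS4)

if __name__ == "__main__":
    cv2.imwrite("synth_100mm.jpg", synthesise_focal_length(100))
```

The point of the sketch is only to show where the bits take over: once the captures exist, choosing any focal length between 28 and 150mm becomes a software decision, not an optical one.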

The Doi Inthanon Waterfalls, photo taken with my iPhone.
The Doi Inthanon Waterfalls, photo created through computational photography by my iPhone. In this case the computational photography was used to simulate a long exposure time.

Modern smartphones have started to use computational photography to provide enhanced capabilities. As an example (see the waterfall photo I took), they can take several snapshots (automatically, when you push the shutter) and combine them to create the same result you would get by using a long exposure time. Notice that in most cases, with your digital camera, you won’t be able to use a long exposure time because there is too much light around and you would end up with an over-exposed photo. You would have to use filters, but again this is not a solution that works in many situations.

Take the picture my iPhone generated simulating a long exposure time. Nice effect, isn’t it? Well, I could have used dark filters (not really practical on an iPhone) to get the same effect by exposing for 2 seconds. The problems, however, would have been a sharp increase in noise (because of the long exposure time), the need for a tripod to keep the phone absolutely still, and, moreover, if there had been a person in the frame that person would have had to remain completely still for those two seconds.

With computational photography the solution is all in the bits (and in the application managing them).
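As an illustration, here is a minimal Python sketch (using OpenCV) of the frame-averaging idea behind that simulated long exposure. The burst file names and the frame count are assumptions, and a real smartphone pipeline would also align the frames before merging them. Note that averaging suppresses noise rather than increasing it, which is part of what makes the computational approach preferable to a real 2-second exposure.

```python
# A minimal sketch of simulating a long exposure by averaging a burst of
# frames. File names and the 10-frame burst are illustrative assumptions;
# real pipelines also align the frames before merging.
import cv2
import numpy as np

def simulate_long_exposure(frame_paths):
    """Average a burst of frames: moving water blurs into the silky
    long-exposure look, static parts stay sharp, and noise is averaged down."""
    acc = None
    for path in frame_paths:
        frame = cv2.imread(path).astype(np.float32)
        acc = frame if acc is None else acc + frame
    return (acc / len(frame_paths)).astype(np.uint8)

if __name__ == "__main__":
    burst = [f"burst_{i:02d}.jpg" for i in range(10)]  # snapshots from one shutter press
    cv2.imwrite("long_exposure_sim.jpg", simulate_long_exposure(burst))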

There are many things that computational photography can do today (look at the clip), like decreasing noise and increasing sharpness. However, there are even more things it will be able to do in the coming decade, and these will disrupt photography and the related value chains.

First, the requirement of having good atoms to get good photos will no longer be there (having a good photographer will still be a pre-requisite, though!). This will disrupt the companies whose selling point has been delivering good lenses and ever better sensors. Of course atoms will remain a factor (embedding the sensor on the processing chip will make computation faster, for example) but they will no longer be a competitive edge. The edge will shift to companies developing software (signal processing): this requires a different set of skills and, being based on bits, it will no longer require massive capital. A small company in India may become a leader in some computational photography features, whilst today it would be impossible for a small company to become a leader in digital imaging chips.

Second, by using computational photography the requirements placed on atoms decrease, hence their price will also decrease (and that is a very interesting proposition for winning the consumer market) and their bulkiness will likely decrease too, letting any object or product become a potential camera. Any product will be able to “see” how you use it, how you like it and … report back. Progress in this area may be driven by self-driving cars, which in the next decade will rely more and more on image processing to become aware of their environment.

Third, bits can be analysed to detect objects, deriving semantics. This is likely to open up a new market for companies like Amazon, which might be willing to provide you with the very best computational photography features completely free, aiming to make money from understanding what you like (usually you photograph what you like) and offering you related goods (“name your pleasure I can sell” becomes “let me see what you see, I can sell”).

We are seeing the first steps being taken today with the increased number of cameras on smartphones: there used to be 2, one in the front and one in the back; now we are seeing several phones with 3 cameras and a few with 4. Also notice that it won’t be long before companies start to use the front and back cameras together, with the one facing you helping to determine your mood as you take a picture (or, the other way round, detecting the environment in which you are taking a selfie).

I have no doubt that the next decade will see the complete shift to computational photography and that this will become an opportunity for several new businesses (and a reinforcement for several existing ones). At the same time, as value shifts from atoms to bits (but remember, the value of bits tends to approach zero; the value lies in what you can leverage out of those bits), several companies will need to reinvent themselves as their business fades away.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.