Taking photos one photon at a time…

Figure: the image-formation process, with a sample image from the 1Mjot QIS prototype chip. (a) Magnified area in one field of binary single-photon data. (b) The same area of binary field data at lower magnification. (c) Raw binary QIS output images, including eight consecutive frames. (d) Grey-scale image after processing. Credit: Jiaju Ma, Dartmouth Thayer School of Engineering

The retina of our eyes has the incredible capability of detecting a single photon (a single photon is actually sufficient to activate a rhodopsin molecule, which in turn signals the activation by decreasing the activity of the rod/cone; that said, it would be an extremely unlikely situation for a single photon to make it through the eye and hit a rhodopsin molecule). My camera (an “old” Nikon D600) in a daylight situation captures on average some 400,000+ photons (for an interesting insight on how this number is calculated, look here), and to start perceiving a “grey” as distinct from a total “black” you can say you need some 4,000 photons hitting the sensor (a photon hitting the sensor has a probability of about 18% of being converted into an electrical signal that the camera chip can process, a figure that follows from the quantum efficiency of the sensor).
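To make the arithmetic concrete, here is a minimal Python sketch of the photon-to-electron conversion implied by the numbers above. The 18% quantum efficiency and the photon counts come from the post; the little function itself is just an illustration, not any camera’s actual calibration.

```python
# Back-of-the-envelope conversion from incident photons to photo-electrons.
# QE and photon counts are the figures quoted in the post; the multiplication
# is only an illustration of the idea.

QUANTUM_EFFICIENCY = 0.18  # probability an incident photon yields a photo-electron

def detected_electrons(incident_photons: float, qe: float = QUANTUM_EFFICIENCY) -> float:
    """Expected number of photo-electrons produced from the incident photons."""
    return incident_photons * qe

print(detected_electrons(400_000))  # bright daylight example: ~72,000 electrons
print(detected_electrons(4_000))    # barely-"grey" threshold: ~720 electrons
```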

All of this is to say that our retina is really “sensitive”. It does not mean that a single photon will raise your awareness, but … it can. However, it is impossible for both our eyes and our digital cameras to capture a picture one photon at a time. Both need quite a lot of them.

Researchers at the Dartmouth Thayer School of Engineering have managed to develop a chip, a Quanta Image Sensor (QIS), and a software algorithm that can use single photons to create an image. Take a look at the picture shown. The leopard face has been recreated (rendered) using a sequence of 8 frames taken in such low light that only single photons could be captured at each of the sensor’s 1 million pixels.
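To give an idea of the principle, here is a minimal Python sketch (not the Dartmouth team’s actual algorithm) of how a grey-scale image can emerge from a stack of binary single-photon frames: each pixel reports a 0 or a 1 per frame, and accumulating several frames yields an intensity estimate. The scene, resolution and frame count below are illustrative placeholders.

```python
import numpy as np

# Sketch of grey-scale reconstruction from binary single-photon frames.
# Each pixel fires (1) or not (0) in every frame; summing frames estimates intensity.
rng = np.random.default_rng(0)

# Hypothetical "true" scene: per-pixel photon detection probability in [0, 1].
height, width = 256, 256
scene = np.linspace(0.05, 0.6, width)[None, :].repeat(height, axis=0)

# Capture 8 binary frames, as in the leopard example.
num_frames = 8
binary_frames = rng.random((num_frames, height, width)) < scene

# Accumulate the frames and rescale to an 8-bit grey-scale image.
grayscale = binary_frames.sum(axis=0) / num_frames
image_8bit = (grayscale * 255).astype(np.uint8)
```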

The capability to capture single photons and reconstruct an image has applications in very low light conditions (when a flash is not an option, like in astronomy, where you cannot “flash” the universe…) as well as in very fast image capture (the faster your shutter speed, the fewer photons can make it to the sensor).

A landscape under dim moonlight requires exposure times of the order of tens of seconds, even minutes. With this chip it would be possible to set the shutter speed at 1/1000 of a second, thus capturing moving objects, like nocturnal animals.
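A rough sketch of why shutter speed matters so much: photon arrivals behave, to a good approximation, like a Poisson process, so the expected photon count scales linearly with exposure time. The photon-flux value below is a made-up placeholder, not a measured moonlight figure.

```python
# Expected photon count versus exposure time (linear scaling of a Poisson rate).
PHOTON_FLUX = 50_000.0  # hypothetical photons per pixel per second in the scene

def expected_photons(exposure_seconds: float, flux: float = PHOTON_FLUX) -> float:
    """Expected number of photons reaching a pixel during the exposure."""
    return flux * exposure_seconds

print(expected_photons(30))        # tens-of-seconds exposure: ~1.5 million photons
print(expected_photons(1 / 1000))  # 1/1000 s exposure: ~50 photons, single-photon territory
```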

This technology can also be applied to microscope photography, e.g. to take images of cells to determine the effectiveness of drugs.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master course on Technology Forecasting and Market impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.