
Human eye resolution

Canon has announced, at the CP+ event in Yokohama, the development of a 120-megapixel sensor for full-frame digital cameras. It is not yet clear when such a sensor will power a Canon camera. In the announcement Canon hints at possible uses in space, video production and aviation applications.
Interestingly, they claim that the new sensor has a resolution equivalent to that of our eyes, which to a certain extent is true. Our retina has about 120 million rods, cells that sense light intensity (black and white), plus 6-7 million cones (divided into cells sensitive to red, to blue and to green, the latter forming the majority). Hence, if you compare one pixel to one light-sensing cell in our retina, you are basically even in number.
However, the resolution we perceive as we look around has roots both in the retina's sensor cells and in the way the brain computes the signals those cells generate. These signals are actually pre-processed by the retina itself: the optic nerve's bandwidth (here I am talking like an engineer…) would not be sufficient to carry all the data generated, so the brain receives less data, but more information… like where edges are, where vertical and horizontal lines run…
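As a loose engineering analogy (not a model of the retina), the idea of sending "less data, but more information" can be sketched by extracting edges from an image: the raw pixel values are discarded, and only the structurally important points survive.

```python
import numpy as np

# Illustrative analogy only: edge extraction compresses raw intensity
# data into a much smaller set of "where the edges are" points.
rng = np.random.default_rng(0)

# Synthetic 100x100 "image": a bright square on a dark background, plus noise.
img = np.zeros((100, 100))
img[30:70, 30:70] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

# Simple finite-difference gradients (a crude stand-in for edge detection).
gx = np.abs(np.diff(img, axis=1))  # changes along rows
gy = np.abs(np.diff(img, axis=0))  # changes along columns

edges_x = gx > 0.5  # strong vertical edges
edges_y = gy > 0.5  # strong horizontal edges

raw_values = img.size
edge_values = int(edges_x.sum() + edges_y.sum())
print(f"raw pixels: {raw_values}, edge points: {edge_values}")
# Far fewer edge points than raw pixels: less data to transmit,
# but the informative structure (the square's outline) is preserved.
```

The numbers are only indicative, of course; the point is that a sparse, pre-processed description can fit through a narrow channel where the raw data could not.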
True vision, however, occurs in the brain. This has been amazingly demonstrated by studies showing that people who have lost retinal function can be induced to see by processing sounds (which is not that surprising, after all: bats do it naturally). Also, information and vision are not a one-to-one match. A pathology called blindsight, resulting from a stroke that blocks data transfer inside the brain from the optic chiasm to the visual cortex, leaves the person blind to images, yet they still perceive movements and potential threats, since these are processed in a different part of the brain (the amygdala).
The difference between resolution (which at first glance would seem to mean better, richer image perception) and image quality is further emphasised by the fact that an image with more contrast is usually perceived as a better image, with more detail, whilst in a pure information sense it is the opposite: as you increase contrast, you lose detail.
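A minimal sketch of this trade-off, assuming a simple linear contrast stretch on 8-bit gray levels: boosting contrast (with the clipping any display must apply) reduces the number of distinct intensity levels the image can still express.

```python
import numpy as np

# Minimal sketch: raising contrast with clipping discards intensity
# levels, i.e. loses detail in the information sense.
levels = np.arange(256, dtype=np.float64)  # all 8-bit gray levels

def boost_contrast(x, gain=2.0, pivot=128.0):
    """Stretch intensities around a mid-gray pivot, then clip to 8 bits."""
    return np.clip((x - pivot) * gain + pivot, 0, 255).round()

stretched = boost_contrast(levels)

print("distinct levels before:", np.unique(levels).size)  # 256
print("distinct levels after: ", np.unique(stretched).size)
# Shadows and highlights are clipped to 0 and 255, and the surviving
# mid-tones are spread apart: fewer distinct values remain overall.
```

The image "pops" more after the stretch, yet it can distinguish fewer shades than before, which is exactly the perception-versus-information gap the paragraph describes.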
The sensor, therefore, is an essential component in vision but is just part of it. The "biggest part" is the processing of sensor data (sensors in the plural, because we form our perception of an image from several senses, including proprioceptors for the position of the eyes, smell… and from experience). It is ultimately more about ICT than about sensors. And ICT, thanks to improved storage, processing and computational algorithms, has made amazing progress in the last 10 years, with more evolution to come. This is probably an area that EIT ICT Labs should be looking at…

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. He's currently the Chair of the Symbiotic Autonomous Systems Initiative of IEEE-FDC. Until April 2017 he led the EIT Digital Italian Node, and up to September 2018 he was the Head of the EIT Digital Industrial Doctoral School. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the Industry Advisory Board within the Future Directions Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books. He writes a daily blog, http://sites.ieee.org/futuredirections/category/blog/, with commentary on innovation in various technology and market areas.