Detecting fake images using AI

Examples of tampered images that have undergone different tampering techniques. From top to bottom, the examples show manipulations of splicing, copy-move and removal. Credit: Adobe

Adobe Photoshop (like several other image processing applications) makes it easy, and easier with every new release, to tamper with the images captured by digital cameras.

Bits are bits, and you cannot distinguish one from another. This has stimulated tampering with digital images using tools that can identify an object in the image, change its shape or colours, or delete it and replace it with something else. In the last few years Artificial Intelligence has come to help in this process, from identifying the object to guessing what an appropriate replacement would be.
Of course there is a downside: it is becoming more and more difficult to tell whether an image is an original or has been tampered with. Sometimes the tampering is done to improve the look of the image (in a sense, all images are tampered with: when you take a shot with your digital camera, including the one inside your smartphone, the pixels are processed and "doctored" to produce an image that is more pleasing according to the digital imaging software), but sometimes it is done with the intention to mislead the viewer, like adding or removing a person.

The problem is that the editing tools have become so good that it is more and more difficult to tell a real image (well, an almost real one) from a fake.

Adobe is clearly partly responsible for this situation, and it has been working, in cooperation with the University of Maryland, on ways to flag fake images automatically, again using Artificial Intelligence. The result was presented at the 2018 Computer Vision and Pattern Recognition (CVPR) conference.

Using CNNs, Convolutional Neural Networks, the application examines the image, looking first for abnormal areas of contrast (in an untampered image the contrast among neighbouring pixels tends to change gradually, whilst in a cut-and-paste the contrast changes are sharper) and then at the random noise present in the image. This noise is generated by the camera sensor and tends to be "uniformly random", creating a sort of digital signature for that sensor. Clearly a cut-and-paste will alter the uniformity of the noise, and this can be detected by the CNN.
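To make the noise-uniformity idea concrete, here is a minimal sketch, in Python with numpy, of a crude splice detector. It is not Adobe's method (which uses trained CNNs): it simply estimates a noise residual by subtracting a local mean, computes the residual's variance block by block, and flags blocks whose variance deviates strongly from the image-wide median. All function names, the block size and the threshold are illustrative assumptions.

```python
import numpy as np

def noise_residual(img, k=3):
    """Estimate the noise residual of a grayscale image by
    subtracting a k x k local mean (a simple denoising pass)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    mean = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean /= k * k
    return img - mean

def noise_variance_map(img, block=16):
    """Local variance of the noise residual, block by block.
    A spliced region often shows a variance that differs from the
    rest of the image (different sensor, different ISO setting)."""
    res = noise_residual(img)
    h, w = img.shape
    vm = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            patch = res[i * block:(i + 1) * block,
                        j * block:(j + 1) * block]
            vm[i, j] = patch.var()
    return vm

def flag_suspicious_blocks(img, z=3.0):
    """Flag blocks whose noise variance deviates from the
    image-wide median by more than z robust deviations (MAD)."""
    vm = noise_variance_map(img)
    med = np.median(vm)
    mad = np.median(np.abs(vm - med)) + 1e-9
    return np.abs(vm - med) / mad > z
```

A real detector replaces the hand-crafted residual with learned filters and a CNN classifier, but the principle is the same: spliced-in pixels carry the noise fingerprint of a different sensor, and that inconsistency is statistically visible.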

However, notice that this approach will mostly identify the "amateur" faker. A professional one, once these fake-detection technologies are deployed, will use those same technologies to create undetectable fakes (e.g. recreating a credible noise background in the retouched image…). My personal feeling is that in this area, as in many aspects of security, it will be a never-ending story of fixing a hole with a tool, only to realise that new ways have been found to circumvent the fix…

Take a look at the clip for more information and, if you are interested in the technical details, read the article presented at the conference.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node and then was head of the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a Master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.