Adobe Photoshop (like several other image-processing applications) makes it easy, and easier with every new release, to tamper with images captured by a digital camera.
Bits are bits, and you cannot distinguish one from another. This has stimulated tampering with digital images using tools that can identify an object in an image, change its shape or colours, or delete it and replace it with something else. In the last few years Artificial Intelligence has come to assist in the process, from identifying the object to guessing what an appropriate replacement would be.
Of course there is a downside: it is becoming more and more difficult to tell whether an image is an original or has been tampered with. Sometimes the tampering is done to improve the look of the image (all images are tampered with! When you take a shot with your digital camera, including the one inside your smartphone, the pixels are processed and “doctored” to produce an image that is more pleasing, according to the digital imaging software), but sometimes the tampering is done with the intention to mislead the viewer, like adding or removing a person.
The problem is that the editing tools are becoming so good that it is harder and harder to tell a real image (well, almost real) from a fake.
Adobe is clearly partly responsible for this situation, and it has been working, in cooperation with the University of Maryland, on ways to flag fake images automatically, again using Artificial Intelligence. The result was presented at the 2018 Computer Vision and Pattern Recognition conference.
Using Convolutional Neural Networks (CNNs), the application examines the image, looking first for abnormal areas of contrast (in an untampered image the contrast between neighbouring pixels tends to change gradually, whilst in the case of a cut and paste the contrast changes are sharper) and then at the random noise present in the image. This noise is generated by the camera sensor and tends to be “uniformly random”, creating a sort of digital signature for that sensor. Clearly a cut and paste will alter the uniformity of the noise, and this can be detected by the CNN.
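The noise-uniformity idea can be illustrated with a minimal sketch (this is my own toy illustration, not the method presented at the conference, which uses a trained CNN): extract a high-pass noise residual from the image and compare its variance across blocks. A pasted region whose sensor noise differs from the rest of the image stands out as a block with anomalous residual variance. The function names and the synthetic test image below are invented for the example.

```python
import numpy as np

def noise_residual(image):
    """High-pass residual: each pixel minus the average of its four
    neighbours. Sensor noise survives this filter, while smooth image
    content is largely removed."""
    padded = np.pad(image.astype(float), 1, mode="edge")
    neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return image.astype(float) - neighbours

def block_noise_variance(image, block=8):
    """Variance of the noise residual in non-overlapping blocks.
    A spliced patch from another camera (or a denoised one) tends to
    show a variance inconsistent with the rest of the image."""
    r = noise_residual(image)
    h, w = r.shape
    h, w = h - h % block, w - w % block
    blocks = r[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.var(axis=(1, 3))

# Toy example: a uniformly noisy "photo" with a noise-free pasted patch.
rng = np.random.default_rng(0)
img = 128 + rng.normal(0, 5, size=(64, 64))  # uniform sensor noise
img[16:32, 16:32] = 128                      # spliced, noise-free region
vmap = block_noise_variance(img)             # 8x8 map of block variances
```

The blocks covering the pasted region show a residual variance far below the rest of the map, which is exactly the kind of non-uniformity the detector looks for; a real detector learns far subtler statistics than this hand-built filter.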
However, notice that this approach would only identify the “amateur” faker. A professional one, once these fake-detection technologies are deployed, will use those same technologies to create undetectable fakes (e.g. recreating a credible noise background in the retouched image…). My personal feeling is that in this area, as in many aspects of security, it will be a never-ending story of trying to fix a hole with a tool, only to realise that new ways have been found to circumvent the fix…
Take a look at the clip for more information and, if you are interested in the technical details, read the article presented at the conference.