June 21, 2019

Facing Up to Image Fakery

Source: Adobe

The folks who transformed digital imagery nearly three decades ago with the introduction of Adobe Photoshop are now using deep neural networks to help detect doctored faces and other faked imagery.

Add newer social media technologies to the mix, the company notes, “and those falsehoods fly faster than ever.”

Adobe researchers are working with counterparts at the University of California at Berkeley on a method for detecting doctored images created with a popular Photoshop tool that allows users to adjust and exaggerate facial features.

With the rise of “Fake News” and social media disinformation campaigns, Adobe (NASDAQ: ADBE) has been focusing its image manipulation detection efforts on basic tactics that include image splicing, cloning and removal.

“Every image has its own imperceptible noise statistics,” said Vlad Morariu, a senior research scientist at Adobe. “When you manipulate an image you actually move the noise statistics along with the content. We can actually identify these very small differences.”
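
As a rough illustration of the idea Morariu describes, the sketch below estimates a per-block noise level from a high-pass residual and flags blocks whose statistics deviate from the rest of the image. The box-blur filter and the 3x-median threshold are simplifying assumptions for demonstration, not Adobe's actual forensic method.

```python
# Minimal sketch of noise-statistics forensics (illustrative only; not
# Adobe's method). Idea: spliced content carries the noise of its source
# image, so a high-pass "noise residual" differs across tampered regions.
import numpy as np

def noise_residual(gray: np.ndarray) -> np.ndarray:
    """Crude high-pass filter: image minus a 3x3 box blur leaves mostly noise."""
    padded = np.pad(gray, 1, mode="edge")
    blur = sum(
        padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return gray - blur

def blockwise_noise_std(gray: np.ndarray, block: int = 32) -> np.ndarray:
    """Standard deviation of the noise residual per non-overlapping block."""
    res = noise_residual(gray.astype(np.float64))
    h = (gray.shape[0] // block) * block
    w = (gray.shape[1] // block) * block
    blocks = res[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(128, 2.0, (256, 256))               # uniform low noise
    img[64:128, 64:128] += rng.normal(0, 8.0, (64, 64))  # "spliced" noisy patch
    stds = blockwise_noise_std(img)
    # Blocks whose noise level far exceeds the image-wide median are suspect.
    suspect = stds > 3 * np.median(stds)
    print("flagged blocks:\n", suspect.astype(int))
```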

Since Photoshop pioneered digital image editing, the company asserts it is uniquely positioned to provide tools that identify doctored images.

The company has been using deep neural network technology to help detect image manipulation. “Once you design the architecture, you can provide large amounts of data and the network is able to learn whatever it needs in order to solve the task,” Morariu said.

The collaboration between Adobe and UC-Berkeley is funded under a Defense Advanced Research Projects Agency program called MediFor, as in “media forensics.” Among the DARPA program’s goals is identifying faked images that can be used for “adversarial purposes” such as disinformation campaigns.

The researchers are focusing on images edited with an Adobe feature called Face Aware Liquify, which can adjust facial expressions. They are training a convolutional neural network to recognize altered face images. Training data was generated by applying the Liquify feature to thousands of face images scraped from the web, and a subset of those pairs of original and altered images was randomly chosen for model training.

An artist then altered images that were mixed into the training data set, thereby adding a human element to the range of Photoshop tricks likely to be used by actual image manipulators.
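
To make the setup concrete, here is a minimal training-step sketch in PyTorch for a binary original-versus-warped classifier. The small architecture and the random placeholder batch are illustrative assumptions, not the Adobe/UC-Berkeley model; in the real pipeline each example would be a face crop labeled as original or Liquify-warped.

```python
# Sketch of a binary "original vs. warped face" classifier, assuming a
# setup like the one described above (not the published model itself).
import torch
import torch.nn as nn

class WarpDetector(nn.Module):
    """Small CNN: three conv blocks, global pooling, then a single logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = WarpDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Placeholder batch: in practice each item would be a face crop labeled
# 0 (original) or 1 (Liquify-warped counterpart).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```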

The researchers then tested the model against human observers who were told that some of the faces had been altered. Human eyes spotted the altered faces only 53 percent of the time, little better than chance. By contrast, the neural network tool achieved accuracy as high as 99 percent.

“Because deep learning can look at a combination of low-level image data, such as warping artifacts, as well as higher level cues such as layout, it seems to work,” Alexei Efros, a UC-Berkeley researcher, said in a recent Adobe blog post.

The code for detecting Photoshopped faces will soon be available on GitHub.
