U.S. Eyes Tools to Spot Faked Images, Video
University researchers are using machine learning along with signal- and image-processing tools in a military-funded effort to detect doctored images and video and to determine how they were manipulated.
As groups like the Islamic State, also known as ISIS, increasingly turn to social media to recruit, the U.S. military relies heavily on online images and video to monitor the activities of terror groups. Because readily available software tools can be used to manipulate visual media, a defense agency is funding a multinational effort to develop better algorithms for spotting fake images. Those tools would then allow analysts to conduct forensic investigations to determine precisely how and why images were manipulated.
That capability could ultimately provide insights into the “digital lineage” of doctored images and video, a field known as “multimedia phylogeny.”
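One common formulation of multimedia phylogeny in the research literature treats near-duplicate images as nodes and builds a spanning tree over pairwise dissimilarities, so that edges approximate the chain of edits that produced each variant. The sketch below illustrates that idea in miniature; the `dissimilarity` measure (plain mean squared difference) and the toy 2x2 "images" are simplifying assumptions, not the project's actual method, which the article does not detail.

```python
from itertools import combinations

def dissimilarity(a, b):
    """Mean squared difference between two equally sized grayscale 'images'
    (a hypothetical stand-in for a real reconstruction-error measure)."""
    n = len(a) * len(a[0])
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b)
               for pa, pb in zip(ra, rb)) / n

def phylogeny_edges(images):
    """Build a minimum spanning tree over image variants with Kruskal's
    algorithm; in multimedia phylogeny such a tree approximates the
    'digital lineage' linking an original to its edited descendants."""
    edges = sorted((dissimilarity(images[i], images[j]), i, j)
                   for i, j in combinations(range(len(images)), 2))
    parent = list(range(len(images)))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# Toy 2x2 "images": an original, a lightly edited copy, a heavily edited copy.
original = [[10, 10], [10, 10]]
light    = [[10, 10], [10, 12]]
heavy    = [[10, 10], [30, 40]]
print(phylogeny_edges([original, light, heavy]))  # original links to light, light to heavy
```

Because the lightly edited copy sits between the original and the heavily edited one, the tree recovers the plausible edit chain rather than connecting every variant directly to the original.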
The four-year, $4.4 million program is being funded by the Defense Advanced Research Projects Agency. The image research has been divided among several U.S. universities along with investigators in Brazil and Italy. The multidisciplinary team includes the University of Notre Dame, New York University, Purdue University and the University of Southern California.
“A key aspect of this project is its focus on gleaning useful information from massive troves of data by means of data-driven techniques instead of just developing small laboratory solutions for a handful of cases,” Walter Scheirer, a principal investigator at Notre Dame, noted in a statement.
Tools already exist to scan Internet images, but not at the scale required by U.S. intelligence agencies. Researchers noted that such a capability would require specialized machine-learning platforms designed to automatically perform the processes needed to verify the authenticity of millions of videos and images.
“You would like to be able to have a system that will take the images, perform a series of tests to see whether they are authentic and then produce a result,” explained Edward Delp, director of Purdue’s Video and Image Processing Laboratory. “Right now you have little pieces that perform different aspects of this task, but plugging them all together and integrating them into a single system is a real problem.”
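The integration problem Delp describes is essentially one of composing independent forensic tests behind a common interface and fusing their outputs into a single verdict. A minimal sketch of that architecture follows; the detector names, the score-in-[0, 1] convention, and the maximum-score fusion rule are all illustrative assumptions, since the article does not specify how the researchers' system will combine its components.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    scores: dict   # per-test tampering scores in [0, 1]
    authentic: bool

def run_pipeline(image, detectors, threshold=0.5):
    """Run every registered test on the image and fuse the results.
    Fusion here is a simple maximum over per-test scores; a real system
    would likely learn the fusion rule from labeled data."""
    scores = {name: fn(image) for name, fn in detectors.items()}
    return Verdict(scores=scores, authentic=max(scores.values()) < threshold)

# A toy detector standing in for real forensic tests (noise-residue
# analysis, compression-artifact checks, copy-move search, ...).
def flat_region_test(image):
    vals = [p for row in image for p in row]
    return 0.9 if max(vals) - min(vals) > 50 else 0.1

detectors = {"flat_region": flat_region_test}
print(run_pipeline([[10, 10], [10, 90]], detectors))  # flagged as not authentic
```

The point of the common interface is that new "little pieces" plug in as additional entries in the `detectors` dictionary without changing the pipeline itself.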
Hence, investigators will attempt to piece together a complete system capable of handling the massive volumes of visual media and other unstructured data uploaded to the Internet each day. That will require deep-learning tools capable of churning through millions of images, detecting doctored images and producing a digital lineage that might shed light on the motivation of terror groups.
Researchers also stressed potential commercial applications in fields such as medical forensics. News and social media websites, for example, could use the platform to authenticate images and video before posting.
Purdue’s piece of the project focuses on using tools like image analysis to determine whether media has been faked, what tools were used and what portions of an image or video were actually modified. “The biggest challenge is going to be the scalability, to go from a sort of theoretical academic tool to something that can actually be used,” Delp added.
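Localizing which portions of an image were modified is often approached by dividing the image into blocks and looking for blocks whose statistics are inconsistent with the rest, since splicing tends to disturb local noise characteristics. The toy sketch below illustrates that idea using per-block variance as a crude proxy; the block size, the median baseline, and the `factor` threshold are arbitrary assumptions, not the techniques Purdue actually uses.

```python
def block_stats(image, bs=2):
    """Split a grayscale image (list of rows) into bs x bs blocks and
    return the variance of each block, keyed by (row, col) block index."""
    stats = {}
    for bi in range(0, len(image), bs):
        for bj in range(0, len(image[0]), bs):
            vals = [image[i][j]
                    for i in range(bi, bi + bs)
                    for j in range(bj, bj + bs)]
            mean = sum(vals) / len(vals)
            stats[(bi // bs, bj // bs)] = sum((v - mean) ** 2 for v in vals) / len(vals)
    return stats

def flag_spliced_blocks(image, bs=2, factor=4.0):
    """Flag blocks whose variance deviates strongly from the image-wide
    median, a crude proxy for the noise-inconsistency cues that real
    forensic detectors exploit."""
    stats = block_stats(image, bs)
    med = sorted(stats.values())[len(stats) // 2]
    return [blk for blk, v in stats.items() if v > factor * max(med, 1e-9)]

# A mostly flat 4x4 image with one noisy quadrant, mimicking a spliced patch.
img = [[10, 10, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 80, 10],
       [10, 10, 10, 60]]
print(flag_spliced_blocks(img))  # the lower-right block stands out
```

Scaling this kind of per-block analysis from a 4x4 toy to millions of full-resolution images is precisely the jump from "theoretical academic tool" to deployable system that Delp identifies as the hard part.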