November 6, 2017

Algorithms Go to War

There are few enterprises accumulating more sensor and video data than the U.S. Department of Defense. For analysts, that means sitting for hours each day in front of screens observing full-motion video and other reconnaissance data to spot threats.

That, military leaders have concluded, is a poor use of analysts’ skills.

In response, the Pentagon launched an AI effort in April dubbed Project Maven to accelerate DoD’s integration of big data and machine learning into its intelligence operations. The first computer vision algorithms focused on parsing full-motion video are scheduled for release by the end of the year, according to Lt. Gen. John Shanahan, DoD’s director of defense intelligence.

“We have analysts looking at full-motion video, staring at screens [6 to 11] hours at a time,” Shanahan explained last week during Nvidia’s (NASDAQ: NVDA) GPU roadshow in Washington, D.C. “They’re doing the same thing photographic interpreters were doing in World War II.”

Project Maven aims to “let the machines do what machines do well, and let humans do what only humans can do,” namely the cognitive, analytical portion of video interpretation, Shanahan added.

Also known by the decidedly bureaucratic name Algorithmic Warfare Cross-Functional Team, DoD’s AI effort represents what Shanahan called “prototype warfare”: releasing prototype algorithms by the end of the year, obtaining user feedback and repeating the process if the prototypes don’t fly.

The Pentagon’s leadership concluded earlier this year that throwing more analysts at the problem of scanning reconnaissance video was not the answer. “We can’t possibly exploit the data,” Shanahan noted, adding that automation in the form of algorithms and other tools is the only way to keep up with the haul of sensor data. Hence, Project Maven initially focuses on machine and deep learning algorithms that would be trained to extract objects from reconnaissance imagery.

The AI push is part of a wider DoD effort designed to expand real-time situational awareness capabilities by making greater use of machine automation. “Speed, the tempo of decision and information, is the problem because our adversaries have figured out how to move inside our military decision loop,” Pamela Melroy, former deputy director of DARPA’s Tactical Technology Office, told the National Space Council last month.

Melroy and other military experts also backed greater use of advances in automation tools that can augment real-time situational awareness. “When you get a new piece of information, you need to be able to tell immediately whether something actually just happened … or if one of your distributed sensors has been spoofed,” Melroy explained. “It is possible to automate that.”

Project Maven is starting small with the goal of scaling the project as more algorithms pass muster. (Marine Corps Col. Drew Cukor heads what is by Pentagon standards a small team of about a dozen personnel.) The effort initially focuses on extracting objects from full-motion video. The first algorithms delivered in December will be integrated with unmanned surveillance platforms to perform image processing and exploitation, Shanahan said.
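To make the object-extraction idea concrete, the following is a minimal, purely illustrative sketch of frame-by-frame object detection on ordinary video using an off-the-shelf pretrained model (torchvision’s Faster R-CNN) and OpenCV. The model, confidence threshold, and input file are assumptions chosen for illustration; nothing here describes the algorithms or platforms Project Maven actually uses.

    # Illustrative sketch only: generic frame-by-frame object detection with an
    # off-the-shelf pretrained model. The model, threshold, and video file are
    # assumptions for illustration, not a description of Project Maven's tooling.
    import cv2
    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    # Assumes a recent torchvision release with the "weights" argument.
    detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    cap = cv2.VideoCapture("surveillance_clip.mp4")  # hypothetical input clip
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV delivers BGR frames; the detector expects an RGB float tensor.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        with torch.no_grad():
            result = detector([to_tensor(rgb)])[0]
        # Surface only confident detections for a human analyst to review.
        for box, score, label in zip(result["boxes"], result["scores"], result["labels"]):
            if score >= 0.8:
                print(f"label={int(label)} score={score:.2f} box={[round(v, 1) for v in box.tolist()]}")
    cap.release()

In practice a system like the one described would also need model training on mission-specific imagery, tracking across frames, and integration with the sensor platforms, but the per-frame detect-and-filter loop above is the basic pattern an object-extraction pipeline follows.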

Ultimately, Project Maven aims to get AI and machine learning tools “up and running” across DoD intelligence operations over the next three years, he added.

“People and computers will work symbiotically to increase the ability of weapon systems to detect objects,” Cukor told an industry summit in July. “Eventually we hope that one analyst will be able to do twice as much work, potentially three times as much, as they’re doing now. That’s our goal.”

The project also seeks to tap into steady advances in GPU processing power while advancing DoD’s cloud operations beyond storage to cloud platforms “optimized for AI and machine learning,” Shanahan said.
