September 12, 2016

Pentagon Eyes AI on the Battlefield

AI and machine learning are all the rage across a range of technology and industrial sectors. Now, the U.S. military is gauging the prospects for leveraging machine learning and other tools in autonomous weapons via emerging human-machine interfaces.

In describing the latest version of the Pentagon’s “offset strategy,” senior U.S. defense officials have in recent months highlighted the military’s desire to leverage AI to develop autonomous weapons. That, Robert Work, deputy U.S. defense secretary, noted in a recent address, “is going to lead to a new era of human-machine collaboration and combat teaming.”

Added Work: “Collaboration is using the tactical acuity of a computer to help a human make better decisions and human-machine combat teaming is using manned and unmanned platforms.”

DoD’s “offset strategy” refers to a blueprint designed to maintain technological superiority. Early examples included the miniaturization of nuclear weapons in the 1950s and the integration of microprocessors into conventional weapons such as fighter aircraft in the early 1970s.

“What is it that really is going to make human-machine collaboration and combat teaming a reality?” Work asked in a speech last December. “That is going to be advances in artificial intelligence and autonomy that we see around us every day.”

Among the early “building blocks,” the DoD official explained, are deep learning systems already being used to analyze intelligence data. Work cited a National Geospatial-Intelligence Agency program called “Coherence Out of Chaos,” designed to crunch satellite intelligence data, “making sense of it, and cueing human analysts to really take a look at certain things.”

Most of that “chaos” comes in the form of unstructured data gathered by satellites and other sensors, along with the social media chatter that intelligence analysts are paying increasing attention to.
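The program’s internals are not public, but the workflow Work describes (a model scores incoming imagery and cues an analyst only when something crosses a threshold) is easy to sketch. The scoring function, threshold, and review queue below are purely illustrative placeholders, not anything drawn from the NGA system.

```python
# Toy sketch of "cueing" analysts: score each image tile with some model and
# route only the high-scoring tiles to a human review queue. The score function
# is a stand-in for whatever deep-learning detector a real pipeline would use.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class ReviewQueue:
    """Items cued for a human analyst, highest score first."""
    items: List[Tuple[float, str]] = field(default_factory=list)

    def cue(self, image_id: str, score: float) -> None:
        self.items.append((score, image_id))
        self.items.sort(reverse=True)

def triage(images: Dict[str, bytes], score_fn: Callable[[bytes], float],
           threshold: float = 0.8) -> ReviewQueue:
    """Score each image; anything above the threshold is cued for review."""
    queue = ReviewQueue()
    for image_id, pixels in images.items():
        score = score_fn(pixels)          # e.g., a deep-learning object detector
        if score >= threshold:
            queue.cue(image_id, score)
    return queue

# Usage with a stand-in scoring function: only the flagged tile reaches the analyst.
flagged = triage({"tile_001": b"water", "tile_002": b"airfield"},
                 score_fn=lambda pixels: 0.95 if pixels == b"airfield" else 0.1)
print(flagged.items)   # [(0.95, 'tile_002')]
```

The point of the pattern is exactly what Work describes: the machine reads the whole feed, the human sees only what the machine cues.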

For now, U.S. military officials are quick to stress that only human operators will decide when and where lethal force is used. “But when you’re under attack, especially at machine speeds, we want to have a machine that can protect us,” Work hedged.

Cyber defense and electronic warfare are likely to be among the first military uses for AI, officials added. In the case of cyber security, Work stressed that humans cannot respond in real time to a concerted attack: “You’re going to have to have a learning machine that does that.”
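To make the “machine speed” argument concrete, here is a generic sketch of the kind of automation Work is alluding to: an anomaly detector trained on normal network-flow features flags and blocks suspect traffic without waiting on a human. It uses a stock scikit-learn model and invented features, and is not a description of any DoD system.

```python
# Minimal machine-speed defense sketch: train an anomaly detector on normal
# traffic, then let it flag and block outliers automatically. Features and
# numbers are invented for illustration only.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flow features: [bytes sent, packets/sec, distinct ports touched]
normal_traffic = rng.normal(loc=[500, 50, 3], scale=[100, 10, 1], size=(5000, 3))
live_traffic = np.vstack([normal_traffic[:10],
                          [[50_000, 900, 120]]])    # one injected outlier

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

for flow, label in zip(live_traffic, detector.predict(live_traffic)):
    if label == -1:                       # -1 means the model calls it anomalous
        print("blocking suspect flow:", flow)   # automated response, no human in the loop
```

A human analyst could never score thousands of flows per second this way; that gap is the argument Work is making.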

Meanwhile, several electronic warfare efforts spearheaded by the Defense Advanced Research Projects Agency are attempting to leverage machine learning algorithms and techniques to spot and counter threats like communications jammers. One DARPA effort, Adaptive Radar Countermeasures, would use these algorithms to automate the process of isolating threatening radar waveforms and then devising countermeasures.

“Through learning, machines will be able to figure out how to take care of that waveform in the mission while it’s happening,” Work asserted.
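Adaptive Radar Countermeasures is not specified publicly at this level of detail, but the two-step idea in the article (isolate a waveform that does not match the known threat library, then generate a response) can be sketched with placeholder pulse descriptors. Everything below, from the library entries to the stubbed countermeasure step, is hypothetical.

```python
# Toy sketch of isolate-then-counter: compare an observed pulse descriptor to a
# known threat library; anything that doesn't match closely is treated as a novel
# emitter and handed to a (stubbed) countermeasure generator.

import numpy as np

# Hypothetical threat library: rows of [frequency_GHz, pulse_width_us, PRI_us]
KNOWN_WAVEFORMS = np.array([
    [9.4, 1.0, 1000.0],
    [5.6, 0.5,  250.0],
])

def classify_waveform(observed: np.ndarray, tolerance: float = 0.1) -> str:
    """Return 'known' if the observation sits close to a library entry, else 'novel'."""
    dists = np.linalg.norm((KNOWN_WAVEFORMS - observed) / KNOWN_WAVEFORMS, axis=1)
    return "known" if dists.min() < tolerance else "novel"

def respond(observed: np.ndarray) -> None:
    if classify_waveform(observed) == "novel":
        # In ARC this step would be learned in-mission; here it is only a stub.
        print("novel emitter detected, synthesizing countermeasure for", observed)
    else:
        print("known emitter, applying library countermeasure")

respond(np.array([9.5, 1.05, 1010.0]))   # close to the first library entry -> known
respond(np.array([16.2, 0.2, 80.0]))     # unfamiliar descriptor -> novel
```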

Much of DoD’s work on human-machine interfaces is justified by pointing to rising research spending in China, which is said to be investing in robotics and autonomous weapons. Pentagon officials also point to a recent pronouncement by Russian military leaders that a “roboticized” battlefield is possible in the near future.

Recent items:

How Analytics is Driving Military Intelligence

DARPA Launches ‘Big Code’ Initiative
