The Pain of Watching AI Become A Pawn in the Geopolitical Fray
Artificial intelligence (AI) has been a predominantly transnational phenomenon for most of this decade.
Radical cross-border openness has been a central driver of the AI revolution. One of the primary factors stimulating advances in AI research, commercialization, and adoption has been the growing open-source stack of data analytics platforms and tools. Practically every AI practice in every country has been built, partially or entirely, on an open-source stack consisting of Hadoop, Spark, TensorFlow, and the like. In the same vein, open-source communities have accelerated the cross-border sharing of data, libraries, models, research findings, and expertise.
However, a nationalist perspective has begun to assume greater prominence in the world’s understanding of AI’s disruptive potential. More nations are putting a high priority on stimulating the development of their domestic AI expertise. What’s driving this trend is a growing fear that falling behind in domestic AI capabilities may relegate some nations to second-tier status in the new world economic order.
There is some validity to this viewpoint, considering that many of the most disruptive applications these days incorporate machine learning (ML), deep learning (DL), and other forms of AI. Companies that aggressively commercialize these technologies may dominate their industries for decades to come, and the countries where they’re based may become extravagantly wealthy in the process. Likewise, nations that invest in pure and applied AI research may become world leaders in many scientific and engineering disciplines, becoming magnets for the smartest people from every nation and also boosting their countries’ innovative capacity over the long term.
Even more worrisome is the concern that lack of world-class domestic AI expertise may expose some countries to national security threats in a world where the most powerful new weapon systems are built on AI capabilities of astonishing sophistication. The battle for AI supremacy is increasingly being portrayed as a do-or-die struggle in a world of existential threats. Once again, this concern cannot be taken lightly. AI is enabling new modes of automated, augmented, and otherwise algorithmically powered warfare. The new AI-stoked weaponry is being deployed both in cyberspace and in the flesh-and-blood physical world of swarming drones, autonomous tanks, and AI-guided intercontinental ballistic missiles.
Whenever the world’s militaries get involved in any sector of the economy, transnational openness inevitably suffers. The AI community is in danger of being forcibly fragmented along national lines or transnational military alliances such as NATO. We’re starting to see a protectionist mentality creep into popular discussions of AI’s role in the geopolitical balance of power and prosperity. This is no idle threat in an era in which a lunatic US president capriciously erects trade barriers against our closest allies. Considering how much he hates Silicon Valley, one can easily imagine Donald Trump trying to stop Google, Facebook, Microsoft, IBM, and other US-based AI powerhouses from engaging in cross-border collaboration with partners in other nations, invoking some bogus national security concerns as the pretext.
If the private sector isn’t careful, it might inadvertently create a climate of opinion that gives Trump and his ilk a political mandate to isolate the US AI community from its peers elsewhere. What I’m referring to is the troubling notion that there’s an “arms race” (literal or metaphorical) in the development of nations’ respective AI capabilities. For example, consider the sentiments expressed in a recent article by Horacio Rozanski, the CEO of Booz Allen Hamilton Inc., a major US defense contractor. This is just one of many recent instances in which someone prominent in the business world refers to a “close race” between the United States and China in developing and exploiting AI.
This Cold War mentality is out of step with the thoroughly transnational trend in world culture in the 21st century. If you’re even vaguely familiar with the hysteria that ensued in the US after the Soviet Union launched the first satellite, you’ll hear echoes in Rozanski’s concerns about our side not funding AI sufficiently at the federal level, not coordinating public and private sector AI initiatives, not educating enough AI practitioners, not being vigilant to infiltration of our AI infrastructure by foreign nations, and not having a strategy to match the Chinese leader’s call for his nation to achieve a “10-fold increase in AI output … by 2030.”
If the post-Sputnik period is prelude, it’s easy to see what’s coming next in the US and other countries. As in China, we’re likely to see current and future administrations place the highest national-security priority on AI. We’re sure to see greater funding for research hubs for the most critical AI technologies, especially those with clear military and scientific applications. We may see export controls on some bleeding-edge crypto technologies that are especially useful for secure communications among distributed AI components and microservices. And we are 100% sure to see changes in the federal tax code to encourage massive education and training of AI professionals, as well as to encourage private-sector investments in AI-enabling tools and platforms that are essential for national competitiveness in this new global economy.
Everywhere you look, people will be calling for the US and allied nations to boost their “AI output” or AI as a share of gross domestic product. This raises the issue of how one would scope “AI output,” as opposed to the AI capabilities that are woven into every type of product and service in the new world economy. This is not equivalent to, say, one nation monopolizing the world’s supply of some rare-earth element whose only known exploitable deposits lie within its territory.
How, for example, can any nation hope to monopolize some futuristic capability as vague and broadly scoped as “artificial general intelligence”? If and when that emerges, it will be a technique worked out globally by many researchers throughout the AI community, including many of the smartest researchers within your borders.
You won’t need to send your country’s spies to gain access to this or any other advance in fundamental AI techniques, because it will have been discussed ad nauseam within research that’s been published openly and freely every step of the way. Even if your country’s AI researchers are briefly unaware of some important advance, there’s a high probability that they’ve been working on the same challenges and will catch up quickly.
Obviously, the above-referenced CEO was alluding primarily to AI’s weaponization use cases. I doubt that he and other defense contractors are worried about China’s Baidu building a better AI-powered chatbot than Google Assistant. Weaponization is also the unspoken focus of national AI policy in China and many other nations, including such US allies as the U.K. and France. Weaponization was also the flashpoint in the recent controversy in which some Google employees balked at participating in a Pentagon AI program.
If you think that only traditional DoD contractors are exhorting the US to beef up defense-related spending on AI, you’re wrong. Consider that Google chairman Eric Schmidt sits on a DoD advisory board and has openly referred to the present period as a “Sputnik moment” for the U.S. and its allies to accelerate these sorts of initiatives before AI-savvy adversaries seize the upper hand. And let’s note that Google’s having withdrawn from the controversial Pentagon program will not stop the Pentagon in its tracks. As everybody has freely acknowledged, the DoD is primarily using open-source AI tools—most notably, the Google-developed TensorFlow—in its work.
Considering how widely used that tool and other AI-enabling open-source technologies are, no one doubts that the DoD will quickly find other contractors to do the work as well as or better than Google. Keep in mind that China and every other nation on Earth have access to all of those same resources—and a fair number of smart AI professionals within their borders—with which to concurrently develop equivalent weaponry.
This brings us full circle to the issue of how feasible it can be for nations to erect protectionist barriers to secure their respective AI futures. In an era where everybody has free and open access to all the tools, data, and expertise needed to build high-quality AI for every conceivable application, how effective can trade barriers be?
Keep in mind that even with the most stringent protections over US nuclear secrets after World War II, every nation that was determined to build atomic bombs eventually did. In addition to the US, the world’s “nuclear club” now includes Russia, China, North Korea, India, Pakistan, Israel, France, and the United Kingdom.
There is no AI equivalent of weapons-grade plutonium that we dare not let fall into the wrong hands. Every single AI technology that would go into a next-generation intelligent weapon is harmless in isolation and has myriad peaceable applications. In fact, the more you break down the AI ecosystem into platforms, libraries, algorithms, tools, applications, and techniques, the more futile these protectionist concerns become.
For example, let’s look at AI-powered intelligent robotics, which is a key enabler for autonomous weapon systems. If we break AI-driven robotics down into their functional subsystems, it’s clear that these capabilities are already available and under development everywhere, and that protectionist measures would also stymie their many sophisticated non-military applications:
- AI-powered robotic locomotion: The robotics revolution has spawned a cyber-Cambrian explosion of intelligent robots that can walk, pounce, flap, flutter, hover, trot, creep, gallop, swim, burrow, crawl, slither, and otherwise move around as an actual organism might.
- AI-powered robotic sensation: Robotics researchers are leveraging a growing pool of AI research into every perception modality, including vision, hearing, smell, and touch.
- AI-powered robotic cognitive and affective processing: Researchers continue to push the envelope in cognitive computing, as well as into algorithmic emulation of such associated faculties as emotion, imagination, and communication.
- AI-powered robotic manipulation: Researchers continue to refine innovative AI approaches that give robots the ability to grasp, grapple, process, and otherwise manipulate objects, fluids, and every other aspect of the myriad physical environments into which they’re deployed.
Given the global nature of research into these enabling AI and robotics technologies, no nation can count on having within its borders the world’s best implementations in all of these areas at any one time—or, even if they somehow did, realistically hope to retain that strategic advantage for long.
Let’s just assume that AI researchers everywhere will have attained roughly consistent high accuracy in their respective geospatial navigation, video processing, image classification, speech recognition, object detection, machine translation, and other ML, DL, and natural language processing algorithms.
Likewise, every nation will continue to have access to the latest advances in neurorobotics, drones, swarm intelligence, reinforcement learning, embodied cognition, multi-objective decision making, evolutionary computing, and master learning algorithms.
And everybody will have access to the latest and greatest GPUs, TPUs, ASICs, FPGAs, systems on a chip, and other chipsets available for training and inferencing of all of this fancy AI in every conceivable deployment and application scenario.
And let’s take for granted that no one nation’s strengths in any of these areas will last long or make its AI-infused autonomous weapon systems substantially more effective than anybody else’s for very long.
That’s where benchmarking one nation’s AI capability against another’s begins to fall apart. It’s not enough to boast that you have more skilled professionals working on military AI projects if what they’ve built, trained, and deployed is not substantially more fit for purpose than the adversary’s equivalent systems. Whether it be for military or civilian applications, we can validly benchmark one nation’s AI against another with regard to specific AI uses where one side has achieved some tangible advantage.
If your opponent has spent a fraction of the resources that you have, but has developed far more accurate, efficient, flexible, low-cost, and low-power AI assets than you, that’s the ultimate weapon.
But even where those technological advantages emerge, they’re sure to be transient in a world where open sharing of AI expertise is the world’s default norm. Everybody will have your innovative AI very soon, if for no other reason than that your country will want to export it and thereby dominate some hot new AI-disruptive growth sector.
And that’s because every nation wants to reinvent the US’s most strategic weapon in the AI wars: Silicon Valley. Putting a homegrown export dynamo of this caliber behind protectionist walls would strangle it in its cradle.
About the author: James Kobielus is SiliconANGLE Wikibon’s lead analyst for Data Science, Deep Learning, and Application Development.