Filling Policy Void, U.S. AI Report Outlines Stakes
Who’s ahead in the global AI race is the subject of intense debate, much of it centered on China’s all-out effort to dominate AI development and whether the U.S. can get its act together in time to keep pace.
Few would dispute Beijing’s commitment to AI development; many point to its aggressive strategy of catching up with the U.S. and the rest of the world by 2020 and achieving dominance by 2030. That has prompted policy makers to worry about the ability of U.S. companies to keep pace and how “general” AI will be used in the future.
Despite huge federal investments in R&D, totaling nearly $500 billion in 2015, “the United States’ leadership in AI is no longer guaranteed,” warns a report released in September by the House Oversight and Government Reform IT subcommittee.
The study is based on a series of hearings held earlier this year that included AI experts from industry, academia and federal research agencies. The AI hearings and resulting report provide a framework for boosting U.S. R&D spending on AI. In that respect, the congressional panel has taken the lead in forging a U.S. strategy for AI development. Aside from a few preliminary meetings, the Trump administration has said little about its AI plans.
Meanwhile, the House report seeks to make the case for boosting U.S. AI investments as a way of confronting the Chinese challenge, approaching the issue from economic and national security perspectives. While concluding that current AI technology is “immature,” the House report emphasizes the profound effect the technology could have on the American workforce.
Then there is the issue of big data, and how it is used while preserving privacy and avoiding bias in machine learning models. “AI requires massive amounts of data, which may invade privacy or perpetuate bias, even when using data for good purposes,” the report warns.
“AI has the potential to disrupt every sector of society in both anticipated and unanticipated ways,” the congressional study concludes. “In light of that potential for disruption, it’s critical that the federal government address the different challenges posed by AI, including its current and future applications.”
The AI hearings, spearheaded by Rep. Will Hurd (R-Texas), chairman of the IT panel, also focused on how government agencies can promote development and adoption of “game changing” AI technologies. Officials from U.S. research agencies echoed the call for greater R&D investments, broad access to government data and expanding the AI workforce through computer science and STEM education.
Among the issues pursued by the panel was ensuring “we’re using it in the right way,” Hurd said.
Data access has become a key issue in the global AI race, with some experts noting that the U.S. retains a competitive advantage over China in terms of open source data. For example, Xiaomeng Lu, international public policy manager for technology consultancy Access Partnership, notes that institutional roadblocks remain in China, especially strict controls on access to government data that severely limit AI researchers who need large volumes of local data for model training and other development steps.
Calling China’s AI goals “aspirational,” Lu noted that Beijing may be in the best position to establish technical standards for AI development. Indeed, Chinese AI leaders such as Alibaba (NYSE: BABA) and Baidu (NASDAQ: BIDU) have been steadily increasing their participation in open source development efforts while partnering with Nvidia (NASDAQ: NVDA) and other U.S. chip makers.
For its part, the congressional panel concludes that AI will play a key role in maintaining both U.S. national and economic security. “AI is likely to have a significant impact in cybersecurity, and American competitiveness in AI will be critical to ensuring the United States does not lose any decisive cybersecurity advantage to other nation-states,” the report concludes.
To account for potential biases in AI systems, the panel also recommended that government agencies using AI systems “to make consequential decisions about people should ensure that the algorithms that support these systems are accountable and inspectable.”