
October 30, 2023

Biden’s Executive Order on AI and Data Privacy Gets Mostly Favorable Reactions

(Paper piper/Shutterstock)

As expected, President Joe Biden today issued an executive order on the adoption of artificial intelligence (AI) and the protection of sensitive data. Technology leaders reacted mostly favorably to the order, although some wanted the president to go further and others demanded that Congress pass new laws.

The executive order, which you can read about here, has several provisions, including requiring the National Institute of Standards and Technology to establish standards for “red team safety tests,” and requiring developers of foundation models to notify the government when training those models and to share the results of the tests.

It also requires the Department of Commerce to develop standards for detecting AI-generated content; requires the establishment of a program to use AI to “find and fix” security vulnerabilities; and requires the National Security Council and White House Chief of Staff to further research AI and security.

A section on privacy directs more investment into the development of privacy-preserving techniques, while another on “advancing equity and civil rights” mandates the creation of “clear guidance” covering the use of algorithms by landlords, federal contractors, and federal benefits programs. It also calls for more training to track down “algorithmic discrimination” and for an end to “civil rights violations related to AI” in the justice system.

Other sections cover consumer protections in an age of AI and “mitigating the harm” of job displacement through AI. Biden also calls for “catalyzing AI research,” “promoting a fair, open and competitive AI ecosystem,” and “advancing American leadership abroad.”

IBM chairman and CEO Arvind Krishna says the executive order will help the country move forward.

“This Executive Order sends a critical message: that AI used by the United States government will be responsible AI,” Krishna says. “IBM proudly supports the White House voluntary commitments on AI which align with our own long-standing practices to promote trust in this powerful technology.”

Miriam Vogel, the president and CEO of EqualAI, says the executive order is an important step in preventing bias in AI and developing responsible governance.

“Actions mandated in [the executive order] will be instrumental steps in maximizing benefits and reducing harms from AI in key areas impacting Americans’ daily lives and wellbeing, including housing, health, employment, and education through a whole of government approach to its safe use and equitable application,” she says.

President Biden issued his executive order on AI on October 30, 2023 (Consolidated News Photos/Shutterstock)

Biden has struck a balance with this executive order, says Timothy Young, the CEO of Jasper.

“With this executive order, the White House has taken essential steps to protect public safety and national security without shutting down innovation or creating an unbalanced competitive playing field for the wide range of companies developing this technology,” Young says. “While it’s still early, this order lays the foundation well to strike the right balance for safety, innovation and a competitive field.”

Alexandra Reeve Givens, the president and CEO of the Center for Democracy & Technology (CDT), applauded the “whole-of-government” impact the order will have.

“It’s notable to see the Administration focus on both the emergent risks of sophisticated foundation models and the many ways in which AI systems are already impacting people’s rights,” she says. “The Administration rightly underscores that U.S. innovation must also include pioneering safeguards to deploy technology responsibly.”

Caitlin Seeley George, the managing director of Fight for the Future, said the executive order was “good, not great.”

“It’s far from breaking news that Artificial Intelligence is exacerbating discrimination and bias, but it’s a positive step for the Biden Administration to acknowledge these harms and direct agencies to address them in this Executive Order,” she says. “However, it’s hard to say that this document, on its own, represents much progress.”

Michael Leach, the compliance manager at security vendor Forcepoint, appreciates many aspects of the order, in particular the focus on safeguarding personal data when using AI.

“The new Executive Order provides valuable insight for the areas that the U.S. government views as critical when it comes to the development and use of AI, and what the cybersecurity industry should be focused on moving forward when developing, releasing and using AI such as standardized safety and security testing, the detection and repair of network and software security vulnerabilities, identifying and labeling AI-generated content, and last, but not least, the protection of an individual’s privacy by ensuring the safeguarding of their personal data when using AI,” he says.

The National Science Foundation (NSF) is ready to implement the order, says NSF Director Sethuraman Panchanathan.

“As the primary federal investor in non-defense artificial intelligence research, the U.S. National Science Foundation is driving cutting-edge research that is expanding our understanding of AI concepts and techniques, accelerating trustworthy AI innovation, and preparing the next-generation AI workforce,” Panchanathan says. “NSF stands ready to fully implement the actions outlined in today’s Executive Order as well as the eight guiding principles and priorities it lays out.”

(MUNGKHOOD-STUDIO/Shutterstock)

Once data makes its way into a model, it can never be pulled out, Rehan Jalil, the president and CEO of Securiti, reminds us.

“AI has transformed the data landscape,” Jalil writes. “However, careful examination of data sharing practices and recipients is much needed. A responsible data-centric approach should underpin every AI implementation, and the standards being set for government agencies should also be upheld across private and public organizations as well. It is crucial to acknowledge that once AI systems gain access to data, that information becomes an integral part of the system permanently; we cannot afford to trust AI technology without the appropriate controls and set responsibilities.”

Robert Weissman, the president of Public Citizen, liked some of what he saw in the executive order, but not everything.

“Consumers and workers deserve guardrails that will protect the public interest as AI continues to be developed and utilized in new contexts. Today’s executive order is a vital step by the Biden administration to begin the long process of regulating rapidly advancing AI technology – but it’s only a first step,” he wrote. “However, as much as the White House can do on its own, those measures are no substitute for agency regulation and legislative action. Preventing the foreseeable and unforeseeable threats from AI requires agencies and Congress take the baton from the White House and act now to shape the future of AI – rather than letting a handful of corporations determine our future, at potentially great peril.”

Another call for Congressional action came from Paul Barrett, the deputy director of the NYU Stern Center for Business and Human Rights.

“President Biden deserves credit for taking important initial steps to safeguard AI,” he writes. “But there are many additional issues that AI companies and Congress need to address to assure that powerful artificial intelligence systems help people, rather than exacerbate the spread of misinformation, cyberattacks, fraud, invasion of privacy, and other serious societal threats…The administration is moving in the right direction. But today is just the beginning of a regulatory process that will be long and arduous–and ultimately must require that the companies profiting from AI bear the burden of proving that their products are safe and effective, just as the manufacturers of pharmaceuticals or industrial chemicals or airplanes must demonstrate that their products are safe and effective.”

Related Items:

White House Issues Executive Order for AI

Americans Don’t Trust AI and Want It Regulated: Poll

NIST Puts AI Risk Management on the Map with New Framework
