January 26, 2023

NIST Puts AI Risk Management on the Map with New Framework

(Image courtesy NIST)

The National Institute of Standards and Technology (NIST) today published the AI Risk Management Framework, a document intended to help organizations voluntarily develop and deploy AI systems without bias or other negative outcomes. The document has a very good shot at defining the standard legal approach that organizations will use to mitigate the risks of AI in the future, says Andrew Burt, founder of AI law firm BNH.ai.

As the pace of AI development accelerates, so too do the potential harms from using AI. NIST, at the request of the United States Congress, developed the AI Risk Management Framework (RMF) to provide a repeatable approach to creating responsible AI systems.

“Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities,” states the RMF executive summary. “With proper controls, AI systems can mitigate and manage inequitable outcomes.”

The 48-page document, which you can access here, seeks to help organizations approach AI risk management in four ways, dubbed the RMF Core functions: Map, Measure, Manage, and Govern.

First, it encourages users to map out the AI system in its entirety, including its intended business purpose and the potential harms that can result from using AI. Imagining the different ways that AI systems can have positive and negative outcomes is essential to the whole process. Business context is critical here, as is the organization’s tolerance for risk.

Map, measure, manage, and govern (NIST AI RMF)

Second, the RMF asks the ethical AI practitioner to use the maps created in the first step to determine how to measure the impacts that AI systems are having, in both a quantitative and a qualitative manner. The measurements should be conducted regularly, should cover the AI system’s functionality, examinability, and trustworthiness (including avoidance of bias), and should be compared against benchmarks, the RMF states.

Third, organizations will use the measurements from step two to help them manage the AI system in an ongoing fashion. The framework gives users the tools to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks, the RMF says.

The map, measure, and manage functions come together under an overarching governance framework, which gives the user the policies and procedures to implement all the necessary components of a risk mitigation strategy.
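To make the four functions concrete, here is a minimal, hypothetical sketch of how a team might organize its own risk register around them. The RMF is a process document, not software, so nothing here comes from NIST; every class, field, and scoring rule below is an illustrative assumption.

```python
# Hypothetical sketch only: a toy risk register loosely organized around
# the RMF Core functions. Names and scoring are illustrative assumptions,
# not anything defined by NIST.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str      # harm identified during Map
    severity: int         # 1 (low) to 5 (high), assigned during Measure
    likelihood: int       # 1 (rare) to 5 (frequent), assigned during Measure
    mitigation: str = ""  # action recorded during Manage

    @property
    def priority(self) -> int:
        # Simple severity-times-likelihood score for ranking risks
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    system_name: str
    business_purpose: str  # Map: the intended context of use
    risks: list[Risk] = field(default_factory=list)
    policies: list[str] = field(default_factory=list)  # Govern: org-wide procedures

    def top_risks(self, n: int = 3) -> list[Risk]:
        # Manage: allocate attention based on assessed, prioritized risks
        return sorted(self.risks, key=lambda r: r.priority, reverse=True)[:n]

register = RiskRegister(
    system_name="loan-approval-model",
    business_purpose="score consumer credit applications",
    policies=["quarterly bias audit", "human review of all denials"],
)
register.risks.append(Risk("disparate impact on protected groups", severity=5, likelihood=3))
register.risks.append(Risk("model drift after economic shifts", severity=3, likelihood=4))

for risk in register.top_risks():
    print(risk.priority, risk.description)
```

In this toy version, harms identified under Map become Risk entries, Measure assigns their severity and likelihood scores, Manage ranks them by the combined score to prioritize mitigation, and Govern lives in the organization-wide policies list.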

The RMF doesn’t have the force of law, and likely never will. But it does lay out a workable approach to managing risk in AI, says Burt, who co-founded BNH.ai in 2019 after working as Immuta’s chief legal counsel.

“Part of the advantage of the NIST framework is that it’s voluntary, not regulatory,” Burt tells Datanami in an interview today. “That being said, I think it’s going to set the standard of care.”

The current state of American law when it comes to AI is “the Wild West,” Burt says. There are no clear legal standards, which is a concern both to the companies looking to adopt AI and to the citizens hoping not to be harmed by it.

The NIST RMF has the potential to become “a concrete, specific standard” that everybody in the U.S. can agree on, Burt says.

“From a legal perspective, if people have practices in place that are wildly divergent from the NIST RMF, I think it will be easy for a plaintiff to say, ‘Hey, what you’re doing is negligent or irresponsible,’ or ‘Why didn’t you do this?’” he says. “This is a clear standard, a clear best practice.”

(Lightspring/Shutterstock)

BNH.ai conducts AI audits for a number of clients, and Burt foresees the RMF approach becoming the standard way to conduct AI audits in the future. Companies are quickly awakening to the fact that they need to audit their AI systems to ensure that they’re not harming users or perpetuating bias in a damaging way. In many ways, the AI cart is getting way out in front of the horse.

“The market is adopting these technologies a lot faster than they can mitigate their harm,” Burt says. “That’s where we come in as a law firm. That’s where regulations are starting to come in. That’s where the NIST framework comes in. There are all sorts of sticks and carrots that are going to, I think, help to correct this imbalance. But right now, I’d say there’s a pretty severe imbalance between the value that people are getting out of these tools and the actual risk that they pose.”

Much of the risk stems from the rapid adoption of tools like ChatGPT and other large language and generative AI models. Since these systems are trained on corpora scraped from vast swaths of the Internet, the amount of bias and hate speech contained in the training data is potentially staggering.

“In the last three months, the big, big change for the potential of AI to inflict harm relates to how many people are using these systems,” Burt says. “I don’t know the numbers for ChatGPT and others, but they’re skyrocketing. These systems are starting to be deployed outside of laboratory environments in ways that are really significant. And that’s where the law comes in. That’s where risk comes in, and that’s where real harms start to be generated.”

The RMF in some ways will become the American counterpart to the European Union’s AI Act. First proposed in 2021, the EU’s AI Act is likely to become law this year, and, with its graduated levels of acceptable risk, will have a dramatic impact on companies’ ability to deploy AI systems.

(Drozd Irina/Shutterstock)

There are big differences between the two approaches, however. For starters, the AI Act will have the force of law and will impose fines for transgressions. The RMF, on the other hand, is completely voluntary, and will drive change by becoming the industry standard that attorneys can cite in civil court.

The RMF is also general and flexible enough to adapt to the fast-changing AI landscape, which further sets it apart from the AI Act, Burt says.

“I would say [the EU’s] approach tends to be pretty systematic and pretty inflexible, similar to the GDPR,” Burt says. “They’re trying to really tackle everything all at once. It’s a valiant effort, but the NIST RMF is a lot more flexible. Smaller organizations with minimal resources can apply it. Large organizations with a huge amount of resources can apply it. I would say it’s a lot more of a risk-based, context-specific, flexible approach.”

You can access more information about the RMF at www.nist.gov/itl/ai-risk-management-framework.

Related Items:

Europe’s New AI Act Puts Ethics In the Spotlight

Organizations Struggle with AI Bias

New Law Firm Tackles AI Liability

 
