August 18, 2020

NIST Launches Colloquy on Explainable AI


Among the best ways to create stable technologies are standards and specifications, which provide a template for building trust and often seed new technological ecosystems. That's especially true for AI, where a lack of trust and an inability to explain decisions have hindered innovation and wider enterprise adoption of AI platforms.

Indeed, early corporate AI deployments have underscored users’ unwillingness to base critical decisions on opaque machine reasoning.

A new initiative launched by the U.S. National Institute of Standards and Technology (NIST) proposes four principles for judging how well AI-based decisions can be explained. A draft publication released this week seeks public comments on the proposed AI explainability principles. The comment period extends through Oct. 15, 2020.

The draft “is intended to stimulate a conversation about what we should expect of our decision-making devices,” the agency said Tuesday (Aug. 18), encouraging feedback from engineers, computer scientists, social scientists and legal experts.

“AI is becoming involved in high-stakes decisions, and no one wants machines to make them without an understanding of why,” said Jonathon Phillips, a NIST electronic engineer and report co-author. “But an explanation that would satisfy an engineer might not work for someone with a different background.”

Hence, NIST is casting a wider net to collect a range of views from a diverse list of stakeholders.

The proposed AI principles take a systems approach, emphasizing explanation, meaning, accuracy and “knowledge limits”:

  • AI systems should deliver accompanying evidence or reasons for all their outputs.
  • Systems should provide explanations that are meaningful or understandable to individual users.
  • Explanations should correctly reflect the system’s process for generating the output.
  • The system only operates under conditions for which it was designed, or when the system achieves sufficient confidence in its output.

Expanding on the final principle, NIST said AI-based systems lacking sufficient confidence in a decision should therefore refrain from supplying a decision to the user.
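
As a rough illustration of how that final principle might look in practice, the sketch below (not from the NIST draft; the class names, the scikit-learn model, and the 0.8 threshold are all illustrative assumptions) wraps a classifier so that each output carries supporting evidence, and the system abstains rather than guess when its confidence falls below the threshold.

```python
# Minimal sketch of the "knowledge limits" idea: return a decision with
# accompanying evidence, or abstain when confidence is too low.
# All names and the 0.8 threshold are illustrative, not from the NIST draft.
from dataclasses import dataclass
from typing import Dict, List, Optional

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class ExplainedDecision:
    label: Optional[int]      # None means the system abstained
    confidence: float         # estimated probability of the chosen label
    evidence: Dict[str, float]  # per-feature contribution to the score


class AbstainingClassifier:
    """Wraps a linear classifier; declines to decide below a confidence threshold."""

    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.model = LogisticRegression(max_iter=1000)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "AbstainingClassifier":
        self.model.fit(X, y)
        return self

    def decide(self, x: np.ndarray, feature_names: List[str]) -> ExplainedDecision:
        probs = self.model.predict_proba(x.reshape(1, -1))[0]
        label = int(np.argmax(probs))
        confidence = float(probs[label])

        # Evidence: weight * value per feature -- a simple, faithful account
        # of a linear model's score (for the positive class when binary,
        # or the predicted class when multiclass).
        weights = self.model.coef_[0] if len(probs) == 2 else self.model.coef_[label]
        evidence = {n: float(w * v) for n, w, v in zip(feature_names, weights, x)}

        if confidence < self.threshold:
            # Knowledge limits: refrain from supplying a decision to the user.
            return ExplainedDecision(label=None, confidence=confidence, evidence=evidence)
        return ExplainedDecision(label=label, confidence=confidence, evidence=evidence)


if __name__ == "__main__":
    # Toy usage: a borderline input near the decision boundary should abstain.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    clf = AbstainingClassifier(threshold=0.8).fit(X, y)
    print(clf.decide(np.array([0.05, -0.02, 0.1]), ["f1", "f2", "f3"]))
```
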

Read the full story here on sister website EnterpriseAI.com.
