With AI RMF, NIST addresses artificial intelligence risks

News Analysis
Apr 11, 2022 | 7 mins
Artificial Intelligence | Risk Management

The new framework could have wide-ranging implications for the private and public sectors. NIST is seeking comments on the current draft by April 29, 2022.


Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities to function more efficiently, reshaping shopping recommendations, credit approval, image processing, predictive policing, and much more.

Like any digital technology, AI can suffer from a range of traditional security weaknesses and other emerging concerns such as privacy, bias, inequality, and safety issues. The National Institute of Standards and Technology (NIST) is developing a voluntary framework to better manage risks associated with AI called the Artificial Intelligence Risk Management Framework (AI RMF). The framework’s goal is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

The initial draft of the framework builds on a concept paper released by NIST in December 2021. NIST hopes the AI RMF will describe how the risks from AI-based systems differ from other domains and encourage and equip many different stakeholders in AI to address those risks purposefully. NIST said it can be used to map compliance considerations beyond those addressed in the framework, including existing regulations, laws, or other mandatory guidance.

Although AI is subject to the same risks covered by other NIST frameworks, some risk “gaps” or concerns are unique to AI. Those gaps are what the AI RMF aims to address.

AI stakeholder groups and technical characteristics

NIST has identified four stakeholder groups as intended audiences of the framework: AI system stakeholders; operators and evaluators; external stakeholders; and the general public. NIST uses a three-class taxonomy of characteristics that should be considered in comprehensive approaches for identifying and managing risk related to AI systems: technical characteristics, socio-technical characteristics, and guiding principles.

Technical characteristics refer to factors under the direct control of AI system designers and developers, which may be measured using standard evaluation criteria, such as accuracy, reliability, and resilience. Socio-technical characteristics refer to how AI systems are used and perceived in individual, group, and societal contexts, such as “explainability,” privacy, safety, and managing bias. In the AI RMF taxonomy, guiding principles refer to broader societal norms and values that indicate social priorities such as fairness, accountability, and transparency.
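To make the distinction concrete, here is a minimal sketch, assuming a hypothetical scikit-learn classifier and synthetic data, of measuring two technical characteristics: accuracy against a held-out test set and a crude resilience probe under small input noise. The AI RMF itself prescribes no code or specific metrics.

```python
# Illustrative only: the AI RMF does not prescribe code or specific metrics.
# A hypothetical classifier is scored on accuracy (a standard evaluation
# criterion) and on a crude resilience probe (accuracy under small input noise).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Accuracy: directly measurable with standard evaluation criteria.
acc = accuracy_score(y_test, model.predict(X_test))

# Resilience probe: does accuracy hold up when inputs are slightly perturbed?
noise = rng.normal(scale=0.1, size=X_test.shape)
acc_noisy = accuracy_score(y_test, model.predict(X_test + noise))

print(f"accuracy={acc:.3f}, accuracy under noise={acc_noisy:.3f}")
```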

Like other NIST Frameworks, the AI RMF core contains three elements that organize AI risk management activities: functions, categories, and subcategories. The functions are organized to map, measure, manage, and govern AI risks. Although the AI RMF anticipates providing context for specific use cases via profiles, that task, along with a planned practice guide, has been deferred until later drafts.
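As a rough sketch of how an organization might mirror that core structure in its own tooling, the snippet below nests subcategories under categories under the four named functions. The function names come from the draft; every category and subcategory entry is a hypothetical placeholder, not text from the framework.

```python
# Illustrative only: a sketch of how an organization might record AI RMF core
# elements (functions -> categories -> subcategories) in its own risk register.
# The four function names come from the draft; every category and subcategory
# below is a hypothetical placeholder, not text from the framework.
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    description: str
    status: str = "not started"  # e.g., "not started", "in progress", "done"

@dataclass
class Category:
    name: str
    subcategories: list = field(default_factory=list)

rmf_core = {
    "map": [Category("Context established", [Subcategory("Intended use and deployment setting documented")])],
    "measure": [Category("Risks assessed", [Subcategory("Accuracy and bias metrics tracked per release")])],
    "manage": [Category("Risks prioritized", [Subcategory("Mitigations assigned owners and deadlines")])],
    "govern": [Category("Policies in place", [Subcategory("Accountable executive reviews AI risk quarterly")])],
}

for function, categories in rmf_core.items():
    print(function, "->", [c.name for c in categories])
```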

Following the release of the draft framework in mid-March, NIST held a three-day workshop to discuss all aspects of the AI RMF, including a deeper dive into mitigating harmful bias in AI technologies.

Mapping AI risk: Context matters

When it comes to mapping AI risk, “We still have to figure out the context, the use case, and the deployment scenario,” Rayid Ghani of Carnegie Mellon University said at the workshop. “I think in the ideal world, all of those things should have happened when you were building the system.”

Marilyn Zigmund Luke, vice president of America’s Health Insurance Plans, told attendees, “Given the variety of the different contexts and constructs, the risk will be different, of course, to the individual and the organization. I think understanding all of that in terms of evaluating the risk, you’ve got to start at the beginning and then build out some different parameters.”

Measuring AI activities: New techniques needed

Measurement of AI-related activities is still in its infancy because of the complexity of the socio-political ethics and mores inherent in AI systems. David Danks of the University of California, San Diego, said, “There’s a lot in the measure function that right now is essentially being delegated to the human to know. What does it mean for something to be biased in this particular context? What are the relevant values? Because of course, risk is fundamentally about threats to the values of the humans or the organizations, and values are difficult to specify formally.”

Jack Clark, co-founder of AI safety and research company Anthropic, said that the advent of AI has created a need for new metrics and measures, ideally baked into the creation of the AI technology itself. “One of the challenging things about some of the modern AI stuff, [we] need to design new measurement techniques in co-development with the technology itself,” Clark said.
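To give one concrete example of the kind of measurement the panelists are describing, the sketch below computes demographic parity difference, a simple group-fairness metric, over hypothetical predictions for two groups. Which metric is relevant depends on context and values, and the AI RMF does not mandate any particular measure.

```python
# Illustrative only: demographic parity difference is one simple group-fairness
# metric an organization might track under the "measure" function. The AI RMF
# does not mandate any particular metric; predictions and groups are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # hypothetical model decisions
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # hypothetical group membership

print(f"demographic parity difference: {demographic_parity_difference(y_pred, group):.2f}")
```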

Managing AI risk: Training data needs an upgrade

The management function of the AI RMF addresses the risks that have been mapped and measured to maximize benefits and minimize adverse impacts. But data quality issues can hinder the management of AI risks, Jiahao Chen, chief technology officer of Parity AI, said. “The availability of data being put in front of us for training models doesn’t necessarily generalize to the real world because it could be several years out of date. You have to worry about whether or not the training data actually reflects the state of the world as it is today.”
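A minimal sketch of the kind of check Chen’s concern points toward might compare a feature’s training-time distribution against fresh production data using a two-sample Kolmogorov-Smirnov test; the feature, synthetic data, and significance threshold below are all hypothetical.

```python
# Illustrative only: comparing a feature's training-time distribution against
# fresh production data to flag drift, per Chen's concern that training data
# may be years out of date. The feature, data, and threshold are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_income = rng.lognormal(mean=10.8, sigma=0.5, size=5000)  # collected years ago
current_income = rng.lognormal(mean=11.0, sigma=0.5, size=5000)   # today's applicants

stat, p_value = ks_2samp(training_income, current_income)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); consider refreshing the training data.")
else:
    print("No significant drift detected for this feature.")
```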

Grace Yee, director of ethical innovation at Adobe, said, “It’s no longer sufficient for us to deliver the world’s best technologies for creating digital experiences. We want to ensure that our technology is designed for inclusiveness and respects our customers, communities, and Adobe values. Specifically, we’re developing new systems and processes to evaluate if our AI is creating harmful bias.”

Vincent Southerland of the New York University School of Law raised the use of predictive policing tools as an object lesson in what can go wrong in managing AI. “They are deployed all across the criminal system,” he said, from identifying the perpetrator of a crime to deciding when offenders should be released from custody. But until recently, “There wasn’t this fundamental recognition that the data that these tools rely upon and how these tools operate actually help to exacerbate racial inequality and help to exacerbate the harms in the criminal system itself.”

AI governance: Few organizations do it

When it comes to AI governance policies, few organizations have them in place. Patrick Hall, scientist at bnh.ai, said that outside of large consumer finance organizations and a few other highly regulated spaces, AI is being used without formal governance guidance, leaving companies to sort out these stormy governance issues on their own.

Natasha Crampton, chief responsible AI officer at Microsoft, said, “Failure mode arises when your approach to governance is overly decentralized. This is a situation where teams want to deploy AI models into production, and they’re just adopting their own processes and structures, and there’s little coordination.”

Agus Sudjianto, executive vice president and head of corporate model risk at Wells Fargo, also stressed the role of top-level management in governing AI risk. “It will not work if the head of responsible AI or the head of management doesn’t have the stature, ear, and support from the top of the house.”

Teresa Tung, cloud-first chief technologist at Accenture, emphasized that every business needs to pay attention to AI. “About half of the Global 2000 companies reported about AI in their earnings call. This is something that every business needs to be aware of.”

As with other risk management frameworks developed by NIST, such as the Cybersecurity Framework, the final AI RMF could have wide-ranging implications for the private and public sectors. NIST is seeking comments on the current draft of the AI RMF by April 29, 2022.