The National Institute of Standards and Technology publishes the draft AI Risk Management Framework for Feedback


Author(s): Simon Hodgett, Sam Ip, Sam Dobbin

November 24, 2022

On September 29, 2022, the US National Institute of Standards and Technology (NIST) published the second draft of the NIST AI Risk Management Framework [PDF] (AI RMF) and a draft of an accompanying NIST AI RMF Playbook for feedback. Subsequently, on October 18-19, 2022, NIST held a live online workshop for industry-leading AI professionals to discuss the community feedback on the second draft and next steps for the AI RMF.

AI risk management framework

The AI RMF is being developed to better manage risks for individuals, organizations and society impacted by AI. The AI RMF is voluntary and is intended to improve the incorporation of trustworthiness into the design, development, deployment and evaluation of AI products, systems and services.

Any software or information-based system carries general risks, including cybersecurity and privacy risks, but AI also poses specific risks that are difficult to predict and manage. This is the purpose of the AI RMF: to consider and manage complex AI risks such as the amplification of unfair outcomes and unintended consequences for individuals and communities. The AI RMF is not a checklist to be applied case by case; instead, it should be fully incorporated into decision-making and used alongside existing regulations and laws, not as a tool to replace them.

The AI RMF organizes AI risk management into four functions: govern, map, measure and manage AI risks:

  • Govern: This function is intended to cultivate a culture of AI risk management in organizations that develop and deploy AI systems. It ensures that risks and potential impacts are identified, measured and managed effectively and consistently.
  • Map: This function establishes the context of an AI system and identifies the risks related to that context. After completing the map function, users should have sufficient contextual knowledge to make a go/no-go decision on whether to design, develop or deploy an AI system.
  • Measure: This function measures, analyses, assesses and tracks AI risk and its associated impacts through the use of quantitative, qualitative or mixed methods and tools.
  • Manage: This function allocates risk management resources to mapped and measured risks. These risks are then prioritized and addressed based on their projected impact.
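The four functions above can be pictured as a simple workflow: map the context, make a go/no-go decision, then measure and manage the risks under an overarching governance function. The sketch below is purely illustrative — the function names come from the AI RMF, but the data structures, thresholds and the `assess_system` helper are invented for this example and are not part of the framework.

```python
from enum import Enum

class RmfFunction(Enum):
    """The four AI RMF functions (names from the framework; summaries paraphrased)."""
    GOVERN = "cultivate a culture of AI risk management"
    MAP = "establish context and identify risks"
    MEASURE = "analyze, assess and track risks"
    MANAGE = "prioritize and address risks"

def assess_system(mapped_risks: dict, risk_tolerance: float) -> bool:
    """Toy go/no-go decision after the map function: proceed only if
    every mapped risk score falls within the organization's tolerance.
    (Risk names and scores below are invented for illustration.)"""
    return all(score <= risk_tolerance for score in mapped_risks.values())

risks = {"bias": 0.4, "privacy": 0.2, "robustness": 0.7}
print(assess_system(risks, risk_tolerance=0.8))  # True: proceed to design/deploy
```

In practice the decision would rest on qualitative judgment as much as scores; the point here is only that the map function's output feeds a go/no-go gate before measure and manage are engaged.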

The AI RMF also introduces the concept of profiles, which are used to illustrate and analyze AI risk management processes for specific scenarios. Two notable profiles are mentioned:

  • Use-case profiles, which can be used to provide insights into risk management at different stages of an AI lifecycle in a specific sector, e.g., in hiring or fair housing.
  • Temporal profiles, which can be used to describe either the current or the desired state of certain AI risk management activities within a sector or industry, revealing the gaps between current and targeted risk management processes.
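The gap analysis that temporal profiles enable can be sketched as a comparison between a "current" and a "target" profile. The activity names and maturity labels below are invented for illustration; the AI RMF does not prescribe this representation.

```python
# Hypothetical sketch: comparing a current and a target temporal profile
# to surface gaps in AI risk-management activities.
current_profile = {
    "bias testing": "ad hoc",
    "incident response": "none",
    "model documentation": "partial",
}
target_profile = {
    "bias testing": "systematic",
    "incident response": "defined",
    "model documentation": "complete",
}

# A gap exists wherever the current state differs from the desired state.
gaps = {
    activity: (current_profile.get(activity, "none"), desired)
    for activity, desired in target_profile.items()
    if current_profile.get(activity, "none") != desired
}

for activity, (now, desired) in gaps.items():
    print(f"{activity}: {now} -> {desired}")
```

Each gap then becomes a candidate input to the manage function, where resources are prioritized against it.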

Next Steps

Feedback on the second draft of the AI RMF and comments gathered at the workshop will be reviewed and incorporated. NIST plans to release the final version of the AI RMF in January 2023. Although the AI RMF is voluntary, we anticipate that, once released, it will be extensively referenced across various industries as AI systems are designed, developed and deployed.
