
AI Risk Management Frameworks Overview

Introduction

“Intelligence” is generally defined as the ability to learn or understand, or to deal with new or trying situations.1 Although intelligence is normally associated with biological beings such as chimpanzees, dolphins, and, of course, humans, recent scientific and engineering developments have enabled computers to exercise “artificial intelligence,” or AI. Artificial intelligence is the simulation of human intelligence processes (especially learning and adaptive behavior) by machines.2

Artificial intelligence is powering a wide variety of business and consumer applications, such as sifting through mountains of big data to extract precious business knowledge, or permitting a vehicle to drive itself. Although popular fiction is filled with nightmare scenarios in which AI applications achieve human-like consciousness and decide to dispose of their flawed human creators, many, if not most, experts believe the danger is either negligible or something to be addressed in a distant future.

For now, artificial intelligence – especially machine learning – is enabling enterprise adopters to increase operational efficiency, decrease operational expenses such as personnel costs, and improve overall competitiveness. As a result, AI capabilities are being integrated into virtually everything.

The situation is somewhat analogous to the emergence of the Internet and electronic commerce (or e-commerce) in the 1990s, when “brick-and-mortar” retail establishments scrambled to create online stores, often without realizing the risks involved. The legacy of that rapid – and generally unplanned – transformation lingers today in the form of computer viruses (including ransomware), identity theft, and phishing attacks.

To avoid a repeat of today’s Internet nightmares, risk professionals have begun developing artificial intelligence risk management frameworks – frameworks inclusive enough to encompass the evolving role of AI, including AI and cloud computing, AI and edge computing, AI and the Internet of Things (IoT), as well as AI and finance, medicine, transportation, and so-called “knowledge work.”

Not surprisingly, two preeminent standards bodies – the US National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) – have weighed in, not with any final prescription for managing AI risk, but as a means of opening a proverbial dialogue on the subject.

NIST AI 100-1

Published in January 2023, the NIST framework is NIST AI 100-1: “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” The framework is designed to equip organizations and individuals – referred to as AI actors – with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time. AI actors are defined by the Organization for Economic Co-operation and Development (OECD) as “those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI.”3

NIST AI 100-1 is available for free download from the NIST website.

ISO/IEC 23894:2023

Released a month after NIST AI 100-1, the ISO framework is ISO/IEC 23894:2023: “Information technology – Artificial intelligence – Guidance on risk management.” This document provides guidance on how organizations that develop, produce, deploy, or use products, systems, and services that utilize artificial intelligence (AI) can manage risk specifically related to AI. The guidance also aims to assist organizations to integrate risk management into their AI-related activities and functions. It moreover describes processes for the effective implementation and integration of AI risk management.4

AI Risk and Trust

In an enterprise context, uncontrolled – or worse, uncontrollable – AI has the potential to harm core assets and interests, including:

Harm to an individual’s civil liberties, rights, physical or psychological safety, or economic opportunity.

Harm to an enterprise’s business operations.

Harm to an enterprise’s supply chain or other interconnected and interdependent financial or operational resources.

Harm to the environment and the planet.5

A potential early warning sign is the rapid evolution and deployment of “generative AI.” Generative AI, like the wildly popular ChatGPT application, refers to programs that can generate novel and unique content. Writing in The Atlantic, Matteo Wong warns that these “powerful and easy-to-use programs that produce synthetic text, images, video, and audio … can be used by bad actors to fabricate events, people, speeches, and news reports to sow disinformation.”6

More alarmingly, “good actors,” like most employees, could unknowingly use generative AI to produce enterprise documents of questionable provenance and reliability – documents which, for example, may contain errors or unsubstantiated claims. In addition, the scale and influence of generative AI may not be fully known or appreciated by enterprise officials, as it often exists in the murky environs of “shadow IT.”

Another concern for enterprise managers is what might be termed “AI creep,” a phenomenon whereby software developers rush to introduce artificial intelligence into their products and services. For example, Danielle Abril of the Washington Post reports that Microsoft “unveiled Microsoft 365 Copilot, which embeds artificial intelligence into apps like Word, Outlook, Teams and Excel. The AI assistant combines natural language processing, based on OpenAI’s ChatGPT technology, with Microsoft Office tools and capabilities to help workers automate or accelerate some of their more mundane work.”7 While the end results might be greater efficiency and effectiveness, enterprise executives need to establish AI “acceptable use” policies to help control AI proliferation in the workplace.
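
One way to put such an acceptable-use policy into practice is to express it as machine-readable data that internal tooling can check against. The following Python sketch is purely illustrative; the tool names, data classifications, and policy fields are assumptions, not requirements drawn from NIST, ISO, or any vendor.

# Hypothetical sketch: an AI "acceptable use" policy expressed as data so that
# internal tooling (for example, a proxy or help-desk checklist) can apply it.
# Tool names, data classes, and fields are illustrative assumptions.

ACCEPTABLE_USE_POLICY = {
    "approved_tools": {
        # tool name -> highest data classification permitted as input
        "Microsoft 365 Copilot": "internal",
        "Public chatbot (e.g., ChatGPT)": "public",
    },
    "prohibited_inputs": ["customer_pii", "source_code", "trade_secrets"],
    "review_required_outputs": ["external_publications", "legal_documents"],
}

DATA_CLASSES = ["public", "internal", "confidential", "restricted"]

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Return True if the tool is approved and the data class is within its limit."""
    limit = ACCEPTABLE_USE_POLICY["approved_tools"].get(tool)
    if limit is None:
        return False  # unapproved tools are denied by default
    return DATA_CLASSES.index(data_classification) <= DATA_CLASSES.index(limit)

print(is_use_permitted("Public chatbot (e.g., ChatGPT)", "confidential"))  # False
print(is_use_permitted("Microsoft 365 Copilot", "internal"))               # True

Expressing the policy as data keeps the rules auditable and lets multiple enforcement points share a single source of truth.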

Characteristics of Trustworthy AI Systems

As a foundation for building an Artificial Intelligence Risk Management Framework (AI RMF), NIST identified six characteristics of trustworthy AI systems.8 Such systems should be (a minimal assessment sketch follows the list):

Valid & Reliable – Validation is the “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled.”9 Reliability is defined as the “ability of an item to perform as required, without failure, for a given time interval, under given conditions.”10

Safe – AI systems should “not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.”11

Secure & Resilient – AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary.12 Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints.

Explainable & Interpretable – Explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functional purposes. Together, explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs.

Privacy-Enhanced – Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).

Fair – with Harmful Bias Managed – Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application.
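
The six characteristics above can also serve as the backbone of a per-system assessment record. The following Python sketch shows one minimal way to track such an assessment; the field names, rating scale, and example entries are assumptions, not a format prescribed by NIST AI 100-1.

# Minimal sketch (not a NIST-prescribed format): recording how one AI system
# measures up against the trustworthiness characteristics listed above.
from dataclasses import dataclass, field

@dataclass
class TrustworthinessAssessment:
    system_name: str
    # one entry per characteristic: a rating ("satisfied", "partial", "gap")
    # plus a free-text pointer to the supporting evidence
    ratings: dict = field(default_factory=dict)

    def record(self, characteristic: str, rating: str, evidence: str) -> None:
        self.ratings[characteristic] = {"rating": rating, "evidence": evidence}

    def open_gaps(self) -> list:
        return [c for c, r in self.ratings.items() if r["rating"] != "satisfied"]

assessment = TrustworthinessAssessment("invoice-classifier-v2")
assessment.record("valid_and_reliable", "satisfied", "validation report, 2024 Q4")
assessment.record("safe", "partial", "hazard analysis pending sign-off")
assessment.record("secure_and_resilient", "gap", "no adversarial testing yet")
print(assessment.open_gaps())  # ['safe', 'secure_and_resilient']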

Explainable & Interpretable

Arguably the most important trust characteristic is “explainable & interpretable,” because it relates directly to the unique nature of artificial intelligence. AI applications often rely on derived logic, which may be difficult for humans to discern and understand; thus, they may be inscrutable in the traditional sense of “show me your work.” To help establish essential trust in AI operations, NIST has developed four principles of explainable artificial intelligence – principles to which AI systems should adhere.

Principle 1. Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.

The Explanation principle obligates AI systems to supply evidence, support, or reasoning for each output.

Principle 2. Meaningful: Systems provide explanations that are understandable to individual users.

A system fulfills the Meaningful principle if the recipient understands the system’s explanations.

Principle 3. Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.

Together, the Explanation and Meaningful principles only call for a system to produce explanations that are meaningful to a user community. These two principles do not require that a system deliver an explanation that correctly reflects the system’s process for generating its output. The Explanation Accuracy principle imposes accuracy on a system’s explanations.

Principle 4. Knowledge Limits: The system only operates under conditions for which it was designed, or when the system reaches a sufficient confidence in its output.

The previous principles implicitly assume that a system is operating within its knowledge limits. The Knowledge Limits principle states that systems should identify cases they were not designed or approved to operate in, or cases in which their answers are not reliable. By identifying and declaring knowledge limits, this practice safeguards answers so that a judgment is not provided when it may be inappropriate to do so. The Knowledge Limits principle can increase trust in a system by preventing misleading, dangerous, or unjust decisions or outputs.13
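
The Knowledge Limits principle in particular lends itself to a concrete guardrail: decline to produce an answer when an input falls outside the system’s design envelope or when output confidence is too low. The Python sketch below illustrates the idea; the predict_with_confidence function, the supported languages, and the confidence threshold are hypothetical assumptions, not elements of NIST’s draft.

# Illustrative sketch of the Knowledge Limits principle: decline to answer when
# the input is outside the system's designed scope or confidence is too low.
# The stand-in model call, supported languages, and threshold are assumptions.

SUPPORTED_LANGUAGES = {"en", "es"}   # conditions the system was designed for
CONFIDENCE_THRESHOLD = 0.85

def predict_with_confidence(text: str) -> tuple:
    """Hypothetical stand-in for a real model call returning (label, confidence)."""
    return "approve", 0.72

def answer(text: str, language: str) -> dict:
    if language not in SUPPORTED_LANGUAGES:
        return {"output": None,
                "explanation": f"Out of scope: language '{language}' is not supported."}
    label, confidence = predict_with_confidence(text)
    if confidence < CONFIDENCE_THRESHOLD:
        return {"output": None,
                "explanation": f"Confidence {confidence:.2f} is below threshold; deferring to human review."}
    return {"output": label,
            "explanation": f"Predicted '{label}' with confidence {confidence:.2f}."}

print(answer("Solicitud de crédito ...", "es"))
print(answer("Loan application text ...", "fr"))

Note that every branch returns an explanation, so the same wrapper also serves the Explanation principle.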

AI RMF 1.0

The NIST Artificial Intelligence Risk Management Framework (AI RMF) is designed to help public sector agencies and private sector companies measure and manage the risks associated with their use of artificial intelligence. More aspirationally, the AI RMF is intended to start a dialogue about how, when, and under what circumstances AI is incorporated into enterprise operations.

The heart of the AI RMF is the AI RMF Core. As illustrated in Figure 1, the Core is composed of four functions: Govern, Map, Measure, and Manage.

Figure 1. AI RMF Core. Source: NIST14

Govern

The Govern function:

Cultivates and implements a culture of risk management within organizations designing, developing, deploying, evaluating, or acquiring AI systems;

Outlines processes, documents, and organizational schemes that anticipate, identify, and manage the risks a system can pose, including to users and others across society – and procedures to achieve those outcomes;

Incorporates processes to assess potential impacts;

Provides a structure by which AI risk management functions can align with organizational principles, policies, and strategic priorities;

Connects technical aspects of AI system design and development to organizational values and principles, and enables organizational practices and competencies for the individuals involved in acquiring, training, deploying, and monitoring such systems; and

Addresses full product lifecycle and associated processes, including legal and other issues concerning use of third-party software or hardware systems and data.15

Map

The Map function establishes the context to frame risks related to an AI system. The AI lifecycle consists of many interdependent activities involving a diverse set of actors. In practice, AI actors in charge of one part of the process often do not have full visibility or control over other parts and their associated contexts. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform.16

Measure

The Measure function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the Map function and informs the Manage function. AI systems should be tested before their deployment and regularly while in operation. AI risk measurements include documenting aspects of systems’ functionality and trustworthiness.17
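
In practice, the Measure function often amounts to comparing documented metrics against agreed thresholds, both before deployment and on a recurring schedule. The Python sketch below shows one minimal form this can take; the metric names and threshold values are illustrative assumptions, not figures taken from the framework.

# Minimal sketch of the Measure function: compare current measurements of a
# deployed AI system against documented limits and flag exceedances.
# Metric names and limits are illustrative assumptions.

LIMITS = {
    "accuracy": ("min", 0.90),                 # minimum acceptable accuracy
    "false_positive_rate": ("max", 0.05),      # maximum acceptable false-positive rate
    "demographic_parity_gap": ("max", 0.10),   # maximum gap between subgroups
}

def measure(current_metrics: dict) -> list:
    """Return findings for metrics that are missing or outside documented limits."""
    findings = []
    for metric, (kind, limit) in LIMITS.items():
        value = current_metrics.get(metric)
        if value is None:
            findings.append(f"{metric}: not measured")
        elif kind == "min" and value < limit:
            findings.append(f"{metric}: {value:.2f} below minimum {limit:.2f}")
        elif kind == "max" and value > limit:
            findings.append(f"{metric}: {value:.2f} above maximum {limit:.2f}")
    return findings

print(measure({"accuracy": 0.87, "false_positive_rate": 0.03}))
# ['accuracy: 0.87 below minimum 0.90', 'demographic_parity_gap: not measured']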

Manage

The Manage function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the Govern function. These risk resources facilitate risk treatment operations comprising plans to respond to, recover from, and communicate about incidents or events.18
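
Taken together, Map, Measure, and Manage suggest a simple risk-register workflow in which each mapped risk carries its supporting measurements, a treatment plan, and an accountable owner. The Python sketch below is one possible shape for such a register; the field names and the example entry are assumptions rather than structures defined in AI RMF 1.0.

# Illustrative risk-register sketch tying the Core functions together: Map
# supplies the risk and its context, Measure supplies the evidence, Manage
# records the treatment plan, and Govern assigns accountability.
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str                                   # Map: the risk in its context of use
    measurements: dict = field(default_factory=dict)   # Measure: supporting evidence
    treatment: str = ""                                 # Manage: respond/recover/communicate plan
    owner: str = ""                                     # Govern: accountable role

register = [
    AIRisk(
        description="Resume screener may disadvantage non-native speakers",
        measurements={"demographic_parity_gap": 0.14},
        treatment="Retrain on balanced data; add human review for rejections",
        owner="HR analytics lead",
    ),
]

for risk in register:
    print(f"{risk.owner}: {risk.description} -> {risk.treatment}")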

Using the AI RMF 1.0

According to NIST, the framework is designed to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time.19

Enterprise executives concerned with the impact of artificial intelligence – a group that should include everyone – should establish an AI Risk Management Program, using the NIST framework or another risk management framework as a foundation.

Since AI, unlike other technologies, has the capacity to advance itself, the time to take meaningful action may be short, especially as employees and business partners begin to embrace AI systems, like ChatGPT and its successors, for their own purposes.

From a positive perspective, an AI Risk Management Program can help enterprise managers:

Expedite the development of AI-empowered automation initiatives, containing costs and improving productivity;

Incorporate AI into enterprise products and services, adding or enhancing functionality, reducing defects, and increasing sales; and

Prepare enterprise executives – to the extent possible – for next generation AI risks and opportunities.


References

    • 1 Webster’s Dictionary.
    • 2 TechTarget.
    • 3 NIST AI 100-1: “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” US National Institute of Standards and Technology. January 2023:2.
    • 4 ISO/IEC 23894:2023: “Information technology — Artificial intelligence — Guidance on risk management.” International Organization for Standardization. February 2023.
    • 5 NIST AI 100-1: “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” US National Institute of Standards and Technology. January 2023:5.
    • 6 Matteo Wong. “Conspiracy Theories Have a New Best Friend.” The Atlantic | The Atlantic Monthly Group. March 2, 2023.
    • 7 Danielle Abril. “You May Soon Be Able to Use AI in Microsoft Word, Outlook.” Washington Post. March 16, 2023.
    • 8 NIST AI 100-1: “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” US National Institute of Standards and Technology. January 2023:13-18.
    • 9 ISO 9000:2015.
    • 10 ISO/IEC TS 5723:2022.
    • 11 Ibid.
    • 12 Ibid.
    • 13 P. Jonathon Phillips, Carina A. Hahn, Peter C. Fontana, David A. Broniatowski, and Mark A. Przybocki. Draft NISTIR 8312 “Four Principles of Explainable Artificial Intelligence.” US National Institute of Standards and Technology. August 2020:2-4.
    • 14 NIST AI 100-1: “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” US National Institute of Standards and Technology. January 2023:2.
    • 15 Ibid. p.21.
    • 16 Ibid. pp.24-25.
    • 17 Ibid. p.28.
    • 18 Ibid. p.31.
    • 19 Ibid. p.2.