
AI Risk Management

Introduction

"Intelligence" is generally defined as the ability to learn or understand, or to deal with new or trying situations.1 A trait normally associated with biological beings such as chimpanzees, dolphins, and, of course, humans, recent scientific and engineering developments have enabled computers to exercise "artificial intelligence" or AI. Artificial intelligence is "the simulation of human intelligence processes [especially learning and adaptive behavior] by machines."2

Also known as "cognitive computing," artificial intelligence is powering a wide variety of business and consumer applications, such as sifting through mountains of Big Data to extract precious business intelligence, or permitting a vehicle to drive itself.

The Perils of AI

While the proponents of AI have a positive story to tell, they must also acknowledge that AI arrives with certain risks, even existential risks. Prominent AI critics, including Elon Musk and the late Stephen Hawking, allege two main problems with artificial intelligence:

  • First, we are starting to create machines that think like humans but have no morality to guide their actions.
  • Second, in the future, these intelligent machines will be able to procreate, producing even smarter machines – a process often referred to as "superintelligence." Colonies of smart machines could grow at an exponential rate – a phenomenon against which mere people could not erect sufficient safeguards.3

"We humans steer the future not because we’re the strongest beings on the planet, or the fastest, but because we are the smartest," said James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era. "So when there is something smarter than us on the planet, it will rule over us on the planet."4

While the future that Barrat envisions might seem fantastic, even impossible, today’s technologists – like today’s business leaders – should start practicing artificial intelligence risk management. Consider, for example, that when e-commerce began to blossom in the mid-1990s, IT departments were not fully prepared to manage the onslaught of new risks in the form of "malware," nor were they prepared for related future risks like "identity theft" and "ransomware."

Like e-commerce, smartphones, and cloud computing, new technology will often present novel and, in some cases, serious risks. In response to the coming artificial intelligence revolution, enterprise security departments should commit to:

  • Identifying – and mitigating – known AI risks; and
  • Anticipating emerging risks based on business, societal, and technological trends.

AI Risks

Artificial intelligence presents an abundance of risks. Some are near-term, like job displacement as AI accelerates automation initiatives. Some are longer-term, like "smart soldiers," where AI-enhanced autonomous robots replace human infantry, lowering casualty totals but also lowering inhibitions against military engagement.

From a forecasting perspective, the adjective "longer-term" is preferred over "long-term," since AI research is often opaque and time horizons can shift unexpectedly. While today AI features in more applications than experts might have predicted, the pace of change can also be slower than expected. AI-based autonomous vehicles, for instance, are proving more difficult to produce than anticipated, owing in part to the need to resolve certain driving-related legal and ethical quandaries. In such cases, promising AI advances may be delayed or even suspended until we, as human beings, can assess their full import.

Overall, AI risks fall into three broad categories: national security, enterprise or business, and personal or individual.

National Security Risks

The fiscal year 2019 National Defense Authorization Act (NDAA) established the National Security Commission on Artificial Intelligence to "consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies by the United States to comprehensively address the national security and defense needs of the United States."

In an "Interim Report"5 released in November 2019, the Commission members revealed a number of key AI-related concerns. Principal among these were:

  • Erosion of US Military Advantage – Strategic competitors, led by China and Russia, want to use AI-enabled autonomous systems in their military strategies, operations, and capabilities to undermine US military superiority.
  • Strategic Stability at Risk – Global stability and nuclear deterrence could be undermined if AI-enabled systems enable the tracking and targeting of previously invulnerable military assets.
  • The Diffusion of AI Capabilities – The likelihood of reckless and unethical uses of AI-enabled technologies by rogue states or non-state actors is increasing as AI applications become more readily available.

Other risks include:

  • Disinformation and the threat to our democratic system, which is ironic given that certain US politicians have been active in promoting disinformation even without AI-aided software.
  • Erosion of individual privacy and civil liberties, as new AI tools provide greater capabilities to monitor citizen movement and actions.
  • Accelerated cyber attacks, featuring intelligent malware that discovers and automatically exploits software vulnerabilities "at superhuman speed."

As we shall see, certain of these national security concerns also affect the other domains, such as accelerated cyber attacks (Enterprise) and erosion of individual privacy and civil liberties (Personal).

Enterprise Risks

Artificial intelligence can help mitigate enterprise risk or amplify it.

Pointing to the positive, "machine learning algorithms that flag fraudulent credit card transactions are in widespread use … and have led to the development of AI systems that readily enable both consumers and vendors to act when a potential issue arises."6 Such applications are not only welcome but essential to thwarting cyber crime, especially as criminals employ AI to enhance the effectiveness of their financial exploits.
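
The kind of anomaly flagging described above can be sketched in a few lines. The following is a minimal, illustrative example using scikit-learn's IsolationForest on synthetic transaction data; the feature set, contamination rate, and data are assumptions, not a production fraud model.

```python
# Minimal sketch of anomaly-based transaction flagging (illustrative only).
# Assumes scikit-learn; the features and contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount_usd, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[60, 14, 0.2], scale=[30, 4, 0.1], size=(1000, 3))
fraud = rng.normal(loc=[900, 3, 0.8], scale=[200, 2, 0.1], size=(10, 3))
transactions = np.vstack([normal, fraud])

# Fit an isolation forest and flag the most anomalous transactions for review.
model = IsolationForest(contamination=0.01, random_state=42)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous

print(f"Flagged {(labels == -1).sum()} of {len(transactions)} transactions")
```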

Although AI is a critical tool for enterprise risk managers, AI can introduce or accentuate enterprise risks. Consider the more problematic consequences of artificial intelligence.

AI Is Still a Relatively Unknown or Misunderstood Technology – As Kristin Broughton reports in the Wall Street Journal, "AI hasn’t been adopted at a large scale and the unintended consequences aren’t fully documented." Or, as Steve Culp, senior managing director for finance and risk at Accenture, puts it pointedly, "Before there were ships, we never had shipwrecks."

Broughton reveals that, according to an Accenture survey of 683 risk managers in nine countries, only "11 percent of risk managers in banking, capital markets and insurance say they are fully capable of assessing AI-related risks." Perhaps just as ominously, the "respondents expressed a similar comfort level with assessing the possible downsides associated with blockchain technology, quantum computing, and other emerging areas of technology."7

McKinsey analysts believe that "Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils … or overestimate an organization’s risk-mitigation capabilities. It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations (‘I trust my technical team; they’re doing everything possible to protect our customers and our company’)."8

The Trajectory and Velocity of AI Development Is – and Will Likely Remain – Unclear – Predicting the future is always difficult, particularly when it involves something of unlimited potential, as with artificial intelligence. Analyst Ron Schmelzer illustrates this dilemma by reminding us how little we knew – or imagined – about the possibilities surrounding portable phones when they were introduced. "In the 1980s the emergence of portable phones made it pretty obvious that they would allow us to make phone calls wherever we are, but who could have predicted the use of mobile phones as portable computing gadgets with apps, access to worldwide information, cameras, GPS, and the wide range of things we now take for granted as mobile, ubiquitous computing. Likewise, the future world of AI will most likely have much greater impact in a much different way than what we might be assuming today."9

AI Disrupts Human Resources – Let’s face it. Enterprise interest in artificial intelligence is driven in large measure by a desire to decrease enterprise dependence on human intelligence. The goal is automation, eliminating not only blue-collar but white-collar workers, and their associated salaries and benefits. Workforce reductions, however, can produce risks, such as:

  • Dismissing workers prematurely, i.e., before AI-inspired replacement mechanisms are proven safe and effective;
  • Generating resentment against AI initiatives, especially where AI is designed to complement human endeavors and man-machine partnerships are required; and
  • Prompting AI-related sabotage of systems and operations.

AI Is Becoming Less Transparent – Test-taking students are frequently admonished to "show your work" as a way of proving that their answers were formulated through logical means. Unfortunately, one of the troubling aspects of machine learning – and AI in general – is that human oversight is often impossible, particularly as the process of ML decision-making becomes more sophisticated and less grounded in traditional data analysis techniques. The rationale for ML conclusions may, in some cases, be unexplained and unexplainable, creating a potential crisis in confidence.
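
For simpler models, one crude way to "show the work" remains available: inspecting which inputs drove the model's predictions. The sketch below, assuming scikit-learn and synthetic data with hypothetical feature names, illustrates that check; the more sophisticated the model, the less available this kind of inspection becomes.

```python
# Minimal transparency check: inspect the feature importances of a simple
# model. Illustrative only; the feature names are hypothetical, and complex
# deep-learning models generally do not expose their reasoning this directly.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "tenure_months", "txn_count", "region_code"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# For tree ensembles, importances offer a rough "show your work."
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name:>14}: {importance:.3f}")
```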

AI Is Susceptible to Cyber Attacks – As with other information technologies, artificial intelligence – and machine learning in particular – is vulnerable to malware attacks. Exposures exist on two levels:

  • First, compromised data can result in ML applications "learning the wrong lessons" (see the sketch after this list).
  • Second, compromised applications can produce erroneous data interpretations.

Either result can adversely affect enterprise operations.
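
The first exposure can be demonstrated with a toy label-flipping experiment: corrupt a fraction of the training labels and watch test accuracy fall. The sketch below assumes scikit-learn and synthetic data; real poisoning attacks are subtler and harder to detect.

```python
# Toy illustration of data poisoning via label flipping (assumes scikit-learn).
# Real attacks are far subtler; this only shows the "wrong lessons" effect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels, then measure test accuracy."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = np.random.default_rng(0).choice(len(y_poisoned), n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:.0%} labels flipped -> accuracy {accuracy_with_poisoning(frac):.3f}")
```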

AI Systems Face Safety & Security Challenges – As itemized by the White House’s National Science & Technology Council, AI systems face important safety and security challenges due to:

  • Complex and uncertain environments – In many cases, AI systems are designed to operate in complex environments, with a large number of potential states that cannot be exhaustively examined or tested. A system may confront conditions that were never considered during its design.
  • Emergent behavior – For AI systems that learn after deployment, a system’s behavior may be determined largely by periods of learning under unsupervised conditions. Under such conditions, it may be difficult to predict a system’s behavior.
  • Goal misspecification – Due to the difficulty of translating human goals into computer instructions, the goals that are programmed for an AI system may not match the goals that were intended by the programmer (a toy sketch of this mismatch follows the list).
  • Human-machine interactions – In many cases, the performance of an AI system is substantially affected by human interactions. In these cases, variation in human responses may affect the safety of the system.10
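
The goal-misspecification item lends itself to a toy illustration. In the sketch below (an invented scenario, not drawn from the Council's report), the programmed objective omits a safety constraint the designer intended, so an agent optimizing the stated goal violates the intended one.

```python
# Toy goal misspecification: the programmed reward omits an implicit safety
# constraint. The scenario and scoring are invented for illustration only.
items = [
    {"name": "coin", "reward": 1, "hazardous": False},
    {"name": "coin", "reward": 1, "hazardous": False},
    {"name": "toxin", "reward": 1, "hazardous": True},  # implicitly off-limits
]

def programmed_objective(picked):
    """What was written: maximize total reward collected."""
    return sum(item["reward"] for item in picked)

def intended_objective(picked):
    """What was meant: maximize reward, but never touch hazards."""
    if any(item["hazardous"] for item in picked):
        return float("-inf")
    return sum(item["reward"] for item in picked)

# A greedy agent optimizing the programmed goal simply takes everything.
picked = list(items)
print("Programmed score:", programmed_objective(picked))  # 3 (looks optimal)
print("Intended score:", intended_objective(picked))      # -inf (a failure)
```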

AI Can Evolve in a Rapid and Uncontrolled Manner – Artificial intelligence is evolving at an exciting – and, to some, an alarming – rate. Even former US Secretary of State Henry Kissinger is asking questions. "The implications of this evolution are shown by a recently designed program, AlphaZero, which plays chess at a level superior to chess masters and in a style not previously seen in chess history. On its own, in just a few hours of self-play, it achieved a level of skill that took human beings 1,500 years to attain."11

Risk Managers Are Under-Informed and Under-Prepared – Analyst Daniel Wagner observes that while "Risk managers are becoming more accustomed to integrating unknown unknowns into their risk calculations, … this presumes that they have a firm grounding in the subject matter from which risk is derived. The truth is, given how new the [AI] industry is, most risk managers and decision makers have relatively little knowledge about what AI and machine learning are, how they function, how the sector is advancing, or what impact all this is likely to have on their ability to protect their organizations against the threats that naturally emanate from AI and machine learning."12

Personal Risks

AI Can Erode Individual Privacy and Civil Liberties – The European Union contends that "The use of AI can … lead to breaches of fundamental rights," including the rights to:

  • Freedom of expression
  • Freedom of assembly
  • Human dignity
  • Non-discrimination
  • A fair trial or "effective judicial remedy"
  • Consumer protection

"These risks might result from flaws in the overall design of AI systems (including … human oversight) or from the use of data without correcting possible bias (e.g., the system is trained using only or mainly data from men leading to suboptimal results in relation to women)."13

AI Can Compromise Personal Safety – The EU also asserts that "AI technologies may present new safety risks for users when they are embedded in products and services. For example, as a result of a flaw in the object recognition technology, an autonomous car can wrongly identify an object on the road and cause an accident involving injuries and material damage.

"As with the risks to fundamental rights, these risks can be caused by flaws in the design of the AI technology, be related to problems with the availability and quality of data, or to other problems stemming from machine learning. While some of these risks are not limited to products and services that rely on AI , the use of AI may increase or aggravate the risks."14

Managing AI Risks

For enterprise officials, the art and science of managing AI risks begins with a few basic steps.

Form an AI Risk Management Practice

Proper governance is essential. Form an enterprise AI Risk Management practice by:

  • Declaring who is in charge of AI Risk Management
  • Developing AI Risk Management policies, protocols, and procedures

Inventory Present and Planned AI Applications

Establish the enterprise’s current commitment to artificial intelligence. One place to start is by reviewing employee use of popular AI developer platforms, such as Amazon SageMaker, Azure Machine Learning, and Google AI Platform.
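
For the AWS portion of such a review, the platform's own APIs can enumerate what is already deployed. The following is a minimal sketch using boto3, assuming AWS credentials and a region are configured; comparable listing APIs exist for Azure Machine Learning and Google AI Platform.

```python
# Minimal sketch of an AWS-side AI inventory pass using boto3.
# Assumes AWS credentials and region are configured. Only the first page of
# results is shown; paginate with NextToken for a complete inventory.
import boto3

sagemaker = boto3.client("sagemaker")

# Enumerate registered models and live endpoints for the risk register.
models = sagemaker.list_models()["Models"]
endpoints = sagemaker.list_endpoints()["Endpoints"]

print(f"SageMaker models: {len(models)}")
for m in models:
    print("  -", m["ModelName"])

print(f"SageMaker endpoints: {len(endpoints)}")
for e in endpoints:
    print("  -", e["EndpointName"], e["EndpointStatus"])
```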

Beyond that, a comprehensive survey of present and planned AI applications should be conducted. Learn as much as possible about present AI activity and attempt to bring that activity under AI Risk Management supervision.

Proceed with AI Applications with Caution

Emerging information technologies can "sneak up" on an enterprise, gaining an operational foothold before enterprise officials can systematically assess the risks and rewards. Recent examples include virtualization, cloud computing, converged infrastructure, and edge computing. Artificial intelligence fits that pattern, except that it has a "mind of its own" and may unduly endanger enterprise operations – even enterprise employees.

Before approving any AI application (either new or expanded), enterprise risk officials, as part of the newly created AI Risk Management practice, should engage developers in a dialogue to identify and reduce risks. Questions include:

  • How does the AI contribute to the application?
  • Are there mechanisms that ensure the AI is functioning properly? How do these mechanisms work, and how do they report their findings? (A minimal monitoring sketch follows this list.)
  • Are there mechanisms that prevent the AI from "going rogue?" How do these mechanisms work, and how do they report their findings?
  • Are there mechanisms to ensure the AI is not compromising enterprise data?
  • Is the AI delivering on its promise? How is AI performance measured and reported?
  • Is the AI compliant with all relevant regulatory regimes like HIPAA? How is compliance proven?
  • Can the AI decision-making processes be understood by human analysts? If not, how is oversight conducted?
  • Is the AI suitable for other applications? If so, which ones?
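
As one answer to the question about mechanisms that ensure the AI is functioning properly, a simple monitoring job can compare live model accuracy against the accuracy measured at approval time and escalate when it degrades. The sketch below is a framework-free illustration; the baseline, threshold, and alert channel are assumed values.

```python
# Minimal model-performance monitor with a degradation alert.
# The baseline, threshold, and alert channel are hypothetical choices.
from statistics import mean

BASELINE_ACCURACY = 0.92   # measured when the model was approved
ALERT_THRESHOLD = 0.05     # tolerated absolute drop before escalation

def check_model_health(recent_outcomes: list) -> None:
    """recent_outcomes: True where the model's prediction proved correct."""
    live_accuracy = mean(recent_outcomes)
    drop = BASELINE_ACCURACY - live_accuracy
    if drop > ALERT_THRESHOLD:
        # In practice this would notify the AI Risk Management practice.
        print(f"ALERT: accuracy {live_accuracy:.3f} is {drop:.3f} below baseline")
    else:
        print(f"OK: accuracy {live_accuracy:.3f} within tolerance")

# Example: 100 recent predictions, 84 of them correct.
check_model_health([True] * 84 + [False] * 16)
```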

Review Emerging AI Risk Management Standards

To keep current on AI Risk Management best practices, monitor emerging international standards. For example, the International Organization for Standardization (ISO) is beginning to address AI risk.

In May 2020, the ISO issued ISO/IEC TR 24028:2020: Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence. This document surveys topics related to trustworthiness in AI systems, including the following:

  • Approaches to establish trust in AI systems through transparency, explainability, controllability, etc.
  • Engineering pitfalls and typical associated threats and risks to AI systems, along with possible mitigation techniques and methods
  • Approaches to assess and achieve availability, resiliency, reliability, accuracy, safety, security, and privacy of AI systems

Still under development is ISO/IEC CD 23894: Information Technology – Artificial Intelligence – Risk Management.

Weblinks

ASIS International: http://www.asisonline.org/
Continuity Central: http://www.continuitycentral.com/
International Organization for Standardization: http://www.iso.org/
SANS Institute: http://www.sans.org/
US National Institute of Standards and Technology: http://www.nist.gov/

References

1 Webster’s Dictionary.

2 TechTarget.

3 Nick Bilton. "Artificial Intelligence As a Threat." The New York Times. November 5, 2014.

4 Ibid.

5 "Interim Report." National Security Commission on Artificial Intelligence. November 2019:11-13.

6 James Kulich. "AI Is Giving Enterprise Risk Management a Boost." Elmhurst University. 2020.

7 Kristin Broughton. "Risk Managers Grapple with Potential Downsides of AI." The Wall Street Journal. December 2, 2019.

8 Benjamin Cheatham, Kia Javanmardian, and Hamid Samandari. "Confronting the Risks of Artificial Intelligence." McKinsey & Company. April 26, 2019.

9 Ron Schmelzer. "The AI-Enabled Future." Forbes. October 17, 2019.

10 "The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update." The White House: National Science & Technology Council. June 2019:24-25.

11 Henry A. Kissinger. "How the Enlightenment Ends." The Atlantic. June 2018:14.

12 Daniel Wagner. "Artificial Intelligence and Risk Management." Risk Management magazine. September 17, 2018.

13 "On Artificial Intelligence – A European Approach to Excellence and Trust." European Commission. February 19, 2020:10-12.

14 Ibid.
