
Generative AI Risk Management: Frameworks and Best Practices

Every enterprise, whether large or small, public or private, for-profit or non-profit, generates content in the form of original text, images, audio, video, and computer code. Normally, content creation and curation involve the combined efforts of researchers, writers, editors, proofreaders, photographers, artists, audio and video engineers and editors, and other content professionals. Over the past several years, however, the world of content generation and management has experienced a new and revolutionary dynamic: generative artificial intelligence (generative AI or gen AI).

Today, the most prominent – and controversial – example of generative AI is an OpenAI text generation application called ChatGPT. In a business context, ChatGPT and other gen AI platforms can be employed to produce:

  • Press or news releases
  • Financial statements, including annual reports
  • Sales and marketing collateral
  • Speeches and other special communications
  • Operational plans, schedules, product specifications, and proprietary process descriptions
  • Website pages, blog posts, and social media posts
  • Various memoranda, articles, reports, documents, and white papers

As is well chronicled, however, generative AI is risky, raising concerns about:

  • Quality – Gen AI systems are “trained” – at least in part – on data scraped from the Internet. In most cases, the provenance and accuracy of such data are unknown or in question.
  • Intellectual property – Gen AI systems do not respect copyright or other assertions of intellectual property ownership.
  • Transparency – Gen AI systems do not cite sources, rendering most of their insights unverifiable.1

Despite the risks, many enterprise leaders are eager to enjoy the benefits of generative AI, which promises to improve productivity and profitability. As reported by the Harvard Business Review:

  • “New research shows 67 percent of senior IT leaders are prioritizing generative AI for their business within the next 18 months, with one-third (33 percent) naming it as a top priority.
  • “Companies are exploring how it could impact every part of the business, including sales, customer service, marketing, commerce, IT, legal, HR, and others.”2

As a necessary counterbalance, many enterprise risk leaders are busy developing generative AI risk regimes: policies, protocols, and procedures designed to ensure that gen AI implementations are safe, secure, and effective.

Risk Management Best Practices

A number of best practices have emerged for managing the risks associated with generative AI.

Implement a Temporary Gen AI Moratorium

Riskonnect, a leading risk management software solution provider, reports that “organizations, like JPMorgan, prohibit the use of ChatGPT in the workplace, while others, like Amazon and Walmart, have urged staff to exercise caution while using AI.”3

While it may present a short-term competitive disadvantage, enterprise leaders should consider restricting the use of generative AI programs and platforms, except for research purposes. Be aware, however, that even well-meaning employees may circumvent a gen AI moratorium by accessing software directly from third-party sites like OpenAI’s – a troublesome phenomenon referred to as “shadow IT.” To help prevent such “back door” usage, an enterprise moratorium should be enforced through appropriate sanctions, including the suspension or dismissal of offending personnel. A simple monitoring aid is sketched below.
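One practical enforcement aid is to scan outbound proxy or DNS logs for connections to known gen AI services. The Python sketch below is a minimal illustration; the CSV log format (user, timestamp, and host columns) and the domain list are assumptions, not an authoritative blocklist, and a real deployment would hook into the enterprise proxy or DNS layer.

    import csv

    # Hypothetical blocklist of well-known gen AI hosts; an assumption for
    # illustration, not an authoritative list.
    GEN_AI_DOMAINS = {
        "chat.openai.com",
        "api.openai.com",
        "gemini.google.com",
        "claude.ai",
    }

    def flag_gen_ai_access(log_path):
        """Return proxy-log rows whose destination matches a known gen AI host.

        Expects a CSV with 'user', 'timestamp', and 'host' columns (an assumed
        export format for the enterprise web proxy).
        """
        hits = []
        with open(log_path, newline="") as f:
            for row in csv.DictReader(f):
                host = row.get("host", "").lower()
                if any(host == d or host.endswith("." + d) for d in GEN_AI_DOMAINS):
                    hits.append(row)
        return hits

    if __name__ == "__main__":
        for hit in flag_gen_ai_access("proxy_log.csv"):  # hypothetical log file
            print(f"{hit['timestamp']}  {hit['user']} -> {hit['host']}")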

Monitor Gen AI Public Rulemaking

Lawmakers at the state, national, and international levels have been busy enacting legislation governing the use of artificial intelligence. One of the more anticipated measures is the European Union (EU) Artificial Intelligence Act, which analyst Savannah Fortis predicts “will be one of the world’s premier regulation packages for AI technologies.”4

Even private-sector entities, the putative targets of AI legislation, are embracing regulation. Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to eight voluntary commitments proposed by the Biden Administration around the use and oversight of generative AI.5

The companies committed to:

  1. Internal and external security testing of their AI systems before their release.
  2. Sharing information across the industry and with governments, civil society, and academia on managing AI risks.
  3. Investing in cybersecurity and insider threat safeguards to protect proprietary data.
  4. Facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
  5. Developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system (one simple provenance approach is sketched after this list).
  6. Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
  7. Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy.
  8. Developing and deploying advanced AI systems to help address society’s greatest challenges.
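Regarding commitment 5, “robust technical mechanisms” can take many forms. Below is a deliberately simple provenance sketch in Python that tags generated text with a verifiable HMAC footer. It is an illustrative assumption, not how the signatory companies implement watermarking; production schemes rely on statistical token watermarks or signed metadata standards such as C2PA, and the hard-coded key is a placeholder.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-managed-secret"  # placeholder; use a managed key

    def tag_ai_content(text):
        """Append a provenance footer marking the text as AI generated."""
        sig = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
        return f"{text}\n[AI-GENERATED sig={sig}]"

    def verify_ai_tag(tagged):
        """Check that the footer's signature matches the body above it."""
        body, sep, footer = tagged.rpartition("\n[AI-GENERATED sig=")
        if not sep or not footer.endswith("]"):
            return False
        expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(footer[:-1], expected)

    labeled = tag_ai_content("Draft press release produced by a gen AI tool.")
    print(verify_ai_tag(labeled))  # True; altering the text flips this to False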

By tracking developments in the AI legal and regulatory domains, enterprise leaders can:

  • Devise a workable compliance strategy; and
  • Determine how best to approach AI and gen AI governance.

Establish Gen AI Roles & Responsibilities

PricewaterhouseCoopers (PwC) advises that effective gen AI risk management begins with the formation of an inclusive enterprise governance structure. Key contributors are the:

Chief Information Security Officer (CISO) – “The most immediate risk to worry about? More sophisticated phishing.”

Chief Data Officer (CDO) and Chief Privacy Officer (CPO) – “Gen AI applications could exacerbate data and privacy risks; after all, the promise of large language models is that they use a massive amount of data and create even more new data, which are vulnerable to bias, poor quality, unauthorized access and loss.”

Chief Compliance Officer (CCO) – “A nimble, collaborative, regulatory-and-response approach is emerging with generative AI, requiring, perhaps, a major adjustment for compliance officers. Keep up with new regulations and stronger enforcement of existing regulations that apply to generative AI.”

Chief Legal Officer (CLO) or General Counsel – “Without proper governance and supervision, [an enterprise’s] use of generative AI can create or exacerbate legal risks. Lax data security measures, for example, can publicly expose the company’s trade secrets and other proprietary information as well as customer data.”

Chief Financial Officer (CFO) or Controller – “Without proper governance and supervision, a company’s use of GenAI can create or exacerbate financial risks. If not used properly, it opens the company to ‘hallucination’ risk on financial facts, errors in reasoning and over-reliance on outputs requiring numerical computation.”6

Know Your Training Data

Many, if not most, gen AI applications train on data “scraped” from the Internet or other suspect sources, in the process accumulating:

  • Misinformation
  • Disinformation
  • Inadequate information
  • Inflammatory information

To avoid a “garbage in, garbage out” scenario, analysts Kathy Baxter and Yoav Schlesinger assert that enterprises “need to be able to train AI models on their own data to deliver verifiable results that balance accuracy, precision, and recall (the model’s ability to correctly identify positive cases within a given dataset).” They add: “It’s important to communicate when there is uncertainty regarding generative AI responses and enable people to validate them. This can be done by citing the sources where the model is pulling information from in order to create content, [and] explaining why the AI gave the response it did.”
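For readers unfamiliar with the metrics Baxter and Schlesinger invoke, the sketch below computes precision and recall from first principles; the label lists are hypothetical validation annotations, not data from the article.

    # 1 = positive case, 0 = negative case.
    def precision_recall(y_true, y_pred):
        """Return (precision, recall) for binary labels."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        precision = tp / (tp + fp) if (tp + fp) else 0.0  # flagged positives that were right
        recall = tp / (tp + fn) if (tp + fn) else 0.0     # true positives that were found
        return precision, recall

    # 3 true positives, 1 false negative, 1 false positive:
    print(precision_recall([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0]))  # -> (0.75, 0.75)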

Also, for maximum utility, training data should be as fresh and relevant as possible.7

Fact-Check All Gen AI Findings

At present, gen AI cannot be fully trusted. As Lyle Moran reports for Legal Dive, in a highly publicized case, a New York lawyer “cited fake cases generated by ChatGPT in a legal brief filed in federal court.” More worrying than gen AI’s flirtation with fiction is that the lawyer, Steven A. Schwartz of Levidow, Levidow & Oberman, “said that ChatGPT not only provided the legal sources, but assured him of the reliability of the opinions and [citations] that the court has called into question.”8
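An automated first pass can catch exactly this failure mode: require every authority a gen AI draft cites to resolve against a trusted index before the draft reaches human review. The sketch below is a minimal illustration; the trusted set stands in for a real legal or bibliographic database lookup, and all citation strings are hypothetical.

    # TRUSTED_INDEX is a placeholder for a real database query; every entry
    # here is invented for illustration.
    TRUSTED_INDEX = {
        "Smith v. Jones, 123 F.3d 456",
        "Doe v. Acme Corp., 789 F.2d 1012",
    }

    def unverified_citations(cited):
        """Return every cited authority absent from the trusted index."""
        return [c for c in cited if c not in TRUSTED_INDEX]

    draft_citations = ["Smith v. Jones, 123 F.3d 456", "Made-Up v. Fictional, 1 F.4th 1"]
    problems = unverified_citations(draft_citations)
    if problems:
        print("Hold the draft; could not verify:", problems)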

Keep a Human or Two “In the Loop”

As analysts Baxter and Schlesinger remind us: “Just because something can be automated doesn’t mean it should be. Generative AI tools aren’t always capable of understanding emotional or business context, or knowing when they’re wrong or damaging.

“Humans need to be involved to review outputs for accuracy, suss out bias, and ensure models are operating as intended. More broadly, generative AI should be seen as a way to augment human capabilities and empower communities, not replace or displace them.”9
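In pipeline terms, keeping a human “in the loop” means no generated draft reaches publication without an explicit, recorded sign-off. The sketch below illustrates such a gate; the Draft shape, queue, and publish step are assumptions for the example, not a prescribed architecture.

    from dataclasses import dataclass

    @dataclass
    class Draft:
        text: str
        approved_by: str | None = None  # name of the human reviewer, once signed off

    class ReviewQueue:
        def __init__(self):
            self.pending = []

        def submit(self, text):
            draft = Draft(text)
            self.pending.append(draft)
            return draft

        def approve(self, draft, reviewer):
            draft.approved_by = reviewer  # human sign-off, on the record
            self.pending.remove(draft)

    def publish(draft):
        """Refuse to publish anything that lacks a recorded human approval."""
        if draft.approved_by is None:
            raise PermissionError("draft has not been human-reviewed")
        print(f"Published (approved by {draft.approved_by}): {draft.text[:40]}")

    queue = ReviewQueue()
    d = queue.submit("Gen AI draft of the Q3 press release ...")
    queue.approve(d, reviewer="j.smith")
    publish(d)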

Recommendation

Along with adopting – and adhering to – a set of generative AI risk management best practices, enterprise leaders should craft an Artificial Intelligence Plan, detailing:

  • How the enterprise will incorporate artificial intelligence (including gen AI) into its business and administrative functions, and for what purposes.
  • Whether the enterprise will be an AI user, or produce products and services featuring AI functionality. In the latter case, how will enterprise customers be protected against potential AI abuses?
  • Whether the enterprise will develop its own in-house AI expertise, or rely on third-party providers.
  • How the enterprise will manage an “AI incident” affecting either employees or customers.
  • How the enterprise will manage the union of AI and cloud computing, edge computing, the Internet of Things, and other technological trends.

References

  • 1 “AI Risk Management: How to Use Generative AI Responsibly.” Riskonnect. July 10, 2023.
  • 2 Kathy Baxter and Yoav Schlesinger. “Managing the Risks of Generative AI.” Harvard Business Review | Harvard Business School Publishing. June 6, 2023.
  • 3 “AI Risk Management: How to Use Generative AI Responsibly.” Riskonnect. July 10, 2023.
  • 4 Savannah Fortis. “OpenAI Warns European Officials Over Upcoming AI Regulations.” Cointelegraph. May 25, 2023.
  • 5 Megan Crouse. “OpenAI, Google and More Agree to White House List of Eight AI Safety Assurances.” TechRepublic | TechnologyAdvice. July 24, 2023.
  • 6 Sean Joyce, Mir Kashifuddin, Jennifer Kosar, Tim Persons, Vikas Agarwal, and Bret Greenstein. “Managing the Risks of Generative AI.” PwC. 2023.
  • 7 Kathy Baxter and Yoav Schlesinger. “Managing the Risks of Generative AI.” Harvard Business Review | Harvard Business School Publishing. June 6, 2023.
  • 8 Lyle Moran. “Lawyer Cites Fake Cases Generated by ChatGPT in Legal Brief.” Legal Dive | Industry Dive. May 30, 2023.
  • 9 Kathy Baxter and Yoav Schlesinger. “Managing the Risks of Generative AI.” Harvard Business Review | Harvard Business School Publishing. June 6, 2023.