
Generative AI and the Law

A subset of artificial intelligence (or, perhaps more precisely, machine learning), “generative AI” (GAI) refers to programs that can generate unique business, literary, and artistic content, creating brand new digital images, video, audio, text, and even computer code.

Generative AI in general – and its principal exemplar, ChatGPT, in particular – has come under intense scrutiny owing to GAI’s ability to disrupt or even displace the human element in the creative arts, robbing writers, artists, programmers, and other “creators” of their livelihood and denying audiences the opportunity to experience human-generated content.

In addition to their adverse effect on employment, generative AI programs have been charged with:

  • Copyright infringement, owing to their use of proprietary materials for training purposes;
  • A propensity to lie or suffer “hallucinations”; and
  • A general disregard for personal privacy.

While offering a valuable tool set for both individuals and enterprise clients, generative AI in its present form poses both legal and ethical concerns. For example, when invited to describe the relationship between “generative AI and the law,” ChatGPT1 itself cited the following issues:  

Intellectual Property and Creativity – Generative AI systems, particularly those using techniques like Generative Adversarial Networks (GANs) and deep learning … [present] a complex challenge regarding intellectual property rights. Who owns the creations generated by AI? Is it the developer of the AI model, the user, or the AI system itself?

Liability and Accountability – As AI systems become increasingly autonomous and capable of decision-making, questions of liability and accountability become paramount. When AI systems make errors or cause harm, who is responsible? Is it the AI developers, the users, or the AI itself? This issue is especially pertinent in fields like autonomous vehicles and healthcare, where AI systems can have life-altering consequences.

Privacy and Data Protection – Generative AI systems often rely on vast amounts of data to operate effectively. Concerns about data privacy and protection have become central in discussions about AI and the law. The use of personal data to train AI models raises ethical and legal questions about consent, transparency, and the right to be forgotten.

Ethical and Bias Considerations – Generative AI systems are not immune to biases inherent in their training data. These biases can perpetuate social inequalities and discrimination, leading to legal and ethical dilemmas. Discriminatory outcomes in areas such as hiring, lending, and criminal justice can result in legal challenges and public backlash.

The Regulation of GAI

Because generative AI has so many problematic elements, one can anticipate the imposition of a steadily accumulating set of regulations, contracts, and service level agreements designed to address the legal and ethical issues surrounding GAI and:

  • Job security
  • Personal privacy
  • Intellectual property protection

For instance, after months on the picket line, Hollywood writers scored a major victory when the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers (AMPTP) agreed that:

“AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the [Minimum Basic Agreement (MBA)], meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights.

“A writer can choose to use AI when performing writing services, if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services.

“The Company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.

“The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by MBA or other law.”2

Hot GAI Legal Topics

As indicated by recent articles and posts, the hot GAI legal topic:

  • For clients, is how to preserve intellectual property rights, i.e., copyright; and
  • For attorneys, is how to employ generative AI to improve their practice.

Generative AI and Copyright

Generative AI Is an Intellectual Property Violator (Source: Pix4Free.org)

As defined by the US Copyright Office, “Copyright is a type of intellectual property that protects original works of authorship.” This includes:

  • Paintings
  • Photographs
  • Illustrations
  • Musical compositions
  • Sound recordings
  • Computer programs
  • Books
  • Poems
  • Blog posts
  • Movies
  • Architectural works
  • Plays

Since generative AI programs are capable of producing many of these IP types, GAI developers and users are asking: Who owns the copyright on an AI-generated work?

The answer, at least for now, is no one. As analyst Jon Gold reports, “The US Copyright Office has taken the position that human authorship is required for copyright to exist” and “neither the creator of [an] AI nor the provider of the prompt used to generate a particular work ‘owns’ that output.” According to Gold, “The larger issue for AI in terms of copyright law is likely to be the concept of fair use, particularly as it applies to the training data used to create the large language models (LLMs) underpinning generative AI.

“Fair use, in brief, is a defense to copyright claims written into federal law. The four factors that courts have to consider when deciding whether a particular use of copyright material without permission is ‘fair use,’ are the character and purpose of the use (educational or other not-for-profit use is much more likely to be deemed fair than commercial use), the nature of the original work, the amount of the original work used, and the market effect on the original work.”3

Unfortunately, many large language models abuse the fair use standard, as they are trained on a combination of:

  • Copyrighted books
  • Blogs
  • News sites
  • Wikipedia articles
  • Proprietary information
  • Personal data
  • Material literally “scraped” from the Internet

To protect their intellectual property from unauthorized use (and potential misuse), content producers have begun suing Gen AI companies for copyright violation. To date, the highest-profile action has come from comedian and writer Sarah Silverman who, along with two other authors, sued OpenAI and Meta over the use of their books to train ChatGPT.

Comedian Sarah Silverman Is Taking GAI Copyright Infringement Seriously (Source: Wikimedia Commons)

As Gold explains, “The core issue in that lawsuit is the use of a data set called ‘BookCorpus,’ which, the plaintiffs say, contained their copyright material. OpenAI and Meta are likely to argue that the market effect on Silverman’s and others’ works is negligible, and that the ‘character and purpose’ of the use is different than that which prompted the writing of the books in the first place, while the plaintiffs are likely to highlight the for-profit nature of Meta and OpenAI’s use, as well as the use of entire works in training data.”4

Although the case filed by Silverman et al. appears to be strong, analyst Victoria Bekiempis cautions that “there’s some doubt that the case is going to be a slam dunk … due to a landmark case involving Google Books nearly a decade ago. The US Supreme Court determined in 2016 that Google Books’ practice of summarizing texts – and showing excerpts to users – didn’t violate copyright law. Deven Desai, a professor of business law and ethics at Georgia Institute of Technology, says that [the] law presently permits the use of books to train software. Desai notes that the Google Books case resulted in ‘the ability to use books in transformative ways, including creating snippets and training software in that sense, for machine learning,’ so machines can use books to learn under the law.”5

Regardless of the outcome of the Silverman suit (which seeks class action status, damages, and injunctive relief), the courts – or, more likely, the US Congress – will feel pressure to:

  • Restrain AI companies from any unauthorized – and uncompensated – use of copyrighted materials;
  • Establish, definitively, ownership of generative AI works; and
  • Protect – to the extent possible – the ability of writers, artists, and other content producers to earn a living in a GAI world.

Generative AI and Lawyering

“AI won’t replace lawyers, but lawyers who use AI will replace lawyers who don’t.” – Popular catchphrase6

While many in the legal community will be contesting the use of generative AI in court, an even larger percentage will be leveraging the technology to facilitate their day-to-day functions. As Andrew Perlman, dean of the Suffolk University Law School, observes:

“A significant part of lawyers’ work takes the form of written words – in e-mails, memos, motions, briefs, complaints, discovery requests and responses, transactional documents of all kinds, and so forth. Although existing technology has made the generation of these words easier in some respects, such as by allowing us to use templates and automated document assembly tools, these tools have changed most lawyers’ work in relatively modest ways. In contrast, AI tools like ChatGPT hold the promise of altering how we generate a much wider range of legal documents and information. In fact, within a few months of ChatGPT’s release, law firms and legal tech companies are already announcing new ways of using generative AI tools.”7

Among these new (and future) developments, Perlman points out that:

Law firms may develop their own GAI tool, “[using] their own legal documents to train a proprietary instance of an AI tool [like ChatGPT].”

Law schools will incorporate GAI into their curriculum, informing aspiring attorneys about generative AI issues and use cases.

Lawyers who eschew the use of GAI may be subject to malpractice claims, by failing to “satisfy their duty of competence.”

Lawyers who use GAI will be better equipped to serve marginalized clients. “Nearly 90 percent of people living below the poverty line and a majority of middle-income Americans receive no meaningful assistance when facing important civil legal issues, such as child custody, debt collection, eviction, and foreclosure. Many factors contribute to these and related problems, but the cumulative effect is a legal system that is among the most costly and inaccessible in the world.” GAI can help overcome these obstacles.8

GAI Penetration

Where are we now in terms of GAI and the law? A recent survey from Wolters Kluwer and Above the Law, titled “Generative AI in the Law: Where Could This All Be Headed?,” reveals the following:

“More than 80 percent of all respondents agree that generative AI will create transformative efficiencies for research and routine tasks.

“Sixty-two (62) percent of respondents believe that effective use of generative AI will separate successful law firms from unsuccessful firms within the next five years.

“Respondents are less convinced that AI will transform high-level legal work: 31 percent agree that this will happen, while 50 percent disagree.”9

GAI Hallucinations

As analyst Suzanne McGee wryly suggests, one of the major downsides of GAI – hopefully just a temporary “teething” problem – is that generative AI “has an unfortunate habit of making stuff up,” in at least one highly publicized incident citing cases that don’t exist.10

Just as a senior partner reviews a new associate’s work product, a law firm must double-check GAI output before relying on it.
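As one minimal illustration of that double-checking – a sketch, not a substitute for attorney review – a firm could mechanically pull reporter-style citations out of model output and flag any that do not appear in a vetted list. The regex, the vetted set, and the sample draft below are all hypothetical assumptions for illustration; real citation checking would use a proper citator service.

```python
import re

# Matches reporter-style citations such as "410 U.S. 113" (volume, reporter, page).
# This pattern and the vetted list below are illustrative only, not a complete
# citator; anything flagged (or not flagged) still requires human verification.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|S\. Ct\.)\s+\d{1,4}\b")

def flag_unverified_citations(model_output: str, vetted: set[str]) -> list[str]:
    """Return citations found in the model's text that are absent from a vetted set."""
    found = CITATION_RE.findall(model_output)
    return [c for c in found if c not in vetted]

# Hypothetical example: one citation is in the vetted set, one is not.
draft = "See Roe v. Wade, 410 U.S. 113, and Smith v. Jones, 999 U.S. 999."
vetted_set = {"410 U.S. 113"}
print(flag_unverified_citations(draft, vetted_set))  # ['999 U.S. 999']
```

A script like this only narrows the review burden; a hallucinated case can carry a plausible-looking citation, so flagged and unflagged items alike must be confirmed against an authoritative source.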

Recommendations

For any enterprise using – or just experimenting with – generative AI (and that will be most enterprises), risk management, particularly legal risk management, should be a top priority. All Gen AI activities – from product/service selection to implementation to application to promotion – should be supervised by the Security and Risk departments, supplemented by in-house legal counsel.

As analyst Dylan Walsh advises, “Companies should be active in their due diligence, taking actions such as monitoring AI systems and getting adequate assurances from service and data providers. Contracts with service and data providers should include indemnification – a mechanism for ensuring that if a company uses a product or technology in accordance with an agreement, that company is protected from legal liability.”11

In addition, enterprise officials should:

  • Fashion a “Generative AI Acceptable Use Policy”;
  • Include a discussion of generative AI in Security Awareness training, particularly the detection of Gen AI hallucinations; and
  • Resist any attempts at Gen AI bragging, i.e., falsely claiming that generative AI is featured in enterprise processes and products when it is not.

Web Links

International Organization for Standardization: https://www.iso.org/
OpenAI: https://www.openai.com/
US National Institute of Standards and Technology: https://www.nist.gov/

References

1 ChatGPT. October 9, 2023.

2 “Summary of the 2023 WGA MBA.” Writers Guild of America. 2023.

3 Jon Gold. “Generative AI and US copyright law are on a collision course.” Computerworld | IDG Communications, Inc. September 22, 2023.

4 Ibid.

5 Victoria Bekiempis. “Can Sarah Silverman’s AI Lawsuit Save Us From Robot Overlords?” Vulture | Vox Media, LLC. August 11, 2023.

6 Suzanne McGee. “Generative AI and the Law.” LexisNexis. 2023.

7 Andrew Perlman. “The Implications of ChatGPT for Legal Services and Society.” “The Practice” | Harvard Law School Center on the Legal Profession. March/April 2023.

8 Ibid.

9 “Three Things Legal Professionals Told Us About Generative AI.” Wolters Kluwer N.V. August 7, 2023.

10 Suzanne McGee. “Generative AI and the Law.” LexisNexis. 2023.

11 Dylan Walsh. “The Legal Issues Presented by Generative AI.” MIT Sloan School of Management. August 28, 2023.
