
The Coming AI Regulations


Executive Summary

Artificial intelligence (AI) is “the simulation of human intelligence processes by machines, especially computer systems.”1

More than most technologies, AI offers both promise and peril:

Promise, particularly in the form of generative AI programs like ChatGPT, Midjourney, and others, which can generate novel and unique content, literally creating new digital images, video, audio, text, and code.

Peril, particularly in the form of artificial general intelligence (AGI) in which AI exceeds human intelligence, potentially posing a “Skynet”-style risk to human existence. In May 2023, 350 executives, researchers, and engineers working in AI signed an open letter asserting that “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”2

In addition to the threat of an AI apocalypse, the US National Institute of Standards and Technology (NIST) has cataloged some of the near-term risks of AI, as shown in Figure 1.

Figure 1. Artificial Intelligence Risks
Source: NIST3

The Altman Manifesto

In May 2023, Sam Altman, CEO of OpenAI, the developer of the wildly popular AI chatbot and text generation tool ChatGPT, addressed the US Senate Subcommittee on Privacy, Technology, and the Law. In a candid and dramatic presentation, Altman told lawmakers that:

“OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits. It is also essential that a technology as powerful as AI is developed with democratic values in mind. OpenAI is committed to working with US policymakers to maintain US leadership in key areas of AI and to ensuring that the benefits of AI are available to as many Americans as possible.

“We are actively engaging with policymakers around the world to help them understand our tools and discuss regulatory options. For example, we appreciate the work [the] National Institute of Standards and Technology has done on its risk management framework, and are currently researching how to specifically apply it to the type of models we develop. Earlier this month, we discussed AI with the President, Vice President, and senior White House officials, and we look forward to working with the Administration to announce meaningful steps to help protect against risks while ensuring that the United States continues to benefit from AI and stays in the lead on AI.

“To that end, there are several areas I would like to flag where I believe that AI companies and governments can partner productively.

“First, it is vital that AI companies – especially those working on the most powerful models – adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the US government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.

“Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The US government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, that can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.

“Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard-setting.”

Future of Regulations

Given the level of concern about AI – even among its most ardent advocates like Altman – there have been widespread calls for AI regulations, with numerous jurisdictions having already implemented AI laws and guidelines.

The coming months and years should produce an avalanche of AI legislation, involving issues such as:

  • Job security
  • Personal privacy
  • Intellectual property protection

Current Regulations

Blueprint for an AI Bill of Rights

As a precursor to possible federal legislation and regulation, the Biden Administration has produced a “Blueprint for an AI Bill of Rights.” According to a summary statement, the Blueprint “[lays out] five common sense protections to which everyone in America should be entitled:

  1. Safe and Effective Systems – You should be protected from unsafe or ineffective systems.
  2. Algorithmic Discrimination Protections – You should not face discrimination by algorithms and systems should be used and designed in an equitable way.
  3. Data Privacy – You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  4. Notice and Explanation – You should know when an automated system is being used and understand how and why it contributes to outcomes that impact you.
  5. Human Alternatives, Consideration, and Fallback – You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”4

State and Local Laws

A number of US states and localities have passed laws regulating AI. Among these are:

Alabama – Act No. 2021-344 – To establish the Alabama Council on Advanced Technology and Artificial Intelligence to review and advise the Governor, the Legislature, and other interested parties on the use and development of advanced technology and artificial intelligence in this state.

Colorado – SB 22-113 – Concerning the use of personal identifying data, and, in connection therewith, creating a task force for the consideration of facial recognition services, restricting the use of facial recognition services by state and local government agencies, temporarily prohibiting public schools from executing new contracts for facial recognition services, and making an appropriation.

Illinois – Public Act 102-0047 – Artificial Intelligence Video Interview Act. Requires employers who rely solely on AI analysis of video interviews to determine whether an applicant will be selected for an in-person interview to collect and report demographic data about the race and ethnicity of applicants who are not selected for in-person interviews and of those who are hired.

Mississippi – HB 633 – Mississippi Computer Science and Cyber Education Equality Act. The State Department of Education is directed to implement a K-12 computer science curriculum including instruction in artificial intelligence and machine learning.

Vermont – H 410 – An act relating to the use and oversight of artificial intelligence in State government. Creates the Division of Artificial Intelligence within the Agency of Digital Services to review all aspects of artificial intelligence developed, employed, or procured by State government.5,6

New York City – Local Law 144 (the AI Law) – “The AI Law makes it an unlawful employment practice for employers to use automated employment decision tools (AEDTs) to screen candidates and employees within New York City unless certain bias audit and notice requirements are met.”7
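The bias audits required under Local Law 144 center on comparing selection rates across demographic categories, with each category's rate divided by the rate of the most-selected category to produce an "impact ratio." A minimal Python sketch of that arithmetic follows; the candidate counts, category names, and the 0.8 review threshold are illustrative assumptions, not terms of the law itself:

```python
# Sketch of the impact-ratio arithmetic behind an AEDT bias audit.
# The sample data and 0.8 threshold below are hypothetical illustrations.

def impact_ratios(outcomes):
    """Map each category to its impact ratio.

    outcomes: dict of category -> (number selected, total candidates).
    Impact ratio = category selection rate / highest category selection rate.
    """
    rates = {cat: sel / tot for cat, (sel, tot) in outcomes.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data for two demographic categories.
audit = impact_ratios({
    "Category A": (48, 100),  # 48% selection rate
    "Category B": (30, 100),  # 30% selection rate
})

# Ratios below a chosen threshold (0.8 is a common rule-of-thumb borrowed
# from the EEOC "four-fifths" guideline) might be flagged for review.
flagged = [cat for cat, ratio in audit.items() if ratio < 0.8]
print(audit)    # {'Category A': 1.0, 'Category B': 0.625}
print(flagged)  # ['Category B']
```

The actual rule implementing the law spells out how categories are defined and how historical data may be used; this sketch only shows the shape of the core calculation.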

China

Analyst Charlotte Trueman reports that “so far, China is the only country that has passed laws and launched prosecutions relating to generative AI – in May, Chinese authorities detained a man in Northern China for allegedly using ChatGPT to write fake news articles.”8

Coming Regulations

Regulatory Considerations

The enactment of future AI regulations will be influenced by several factors.

A Plethora of Regulations – Whether for economic, political, jurisdictional, or other purposes, every nation, state, and municipality will be inclined to develop its own AI rules and regulations.

A Desire to Rationalize Regulations – Based on tradition, shared values, and the desire to facilitate compliance initiatives, western nations will seek to “harmonize” (or render compatible and interoperable) their various regulations. This is particularly important for multinational corporations and other organizations conducting cross-border business.9

US vs. EU – Notwithstanding the desire for harmonization, national instincts will likely prevail:

  • The US, as always, will be reflexively reluctant to regulate AI, or to exact penalties for regulatory violations.
  • Also as usual, Europe, specifically the European Union (EU), will approach regulation and enforcement aggressively, even enthusiastically.

The Reality That Regulation Is Hard – As analyst Joshua P. Meltzer observes, “The processes for developing AI regulation increasingly stand in contrast to the current zeitgeist – where AI systems are becoming increasingly powerful and having impact much faster than government can react. This raises the question as to whether the government is even capable of regulating AI effectively.”10

US Regulations

In the US, expect the pace of AI regulation to accelerate, particularly at the state level and in response to issues related to job security and personal privacy.

Certain legislation will reflect enterprise anxiety over intellectual property rights.

At the Executive Branch level, one should anticipate periodic rounds of AI-related rule-making, for example, at the:

  • Equal Employment Opportunity Commission (EEOC)
  • Federal Trade Commission (FTC)
  • Federal Communications Commission (FCC)

EU AI Act

Analyst Savannah Fortis predicts that the EU Artificial Intelligence Act, which is still in development, “will be one of the world’s premier regulation packages for AI technologies.”11 Such confidence may be justified given the EU’s track record in successfully regulating data protection. The EU’s General Data Protection Regulation (GDPR) is widely viewed as the standard for data security and data privacy management. In fact, achieving GDPR compliance normally ensures that an enterprise is also adhering to any and all local data protection mandates.

As previewed in a recent proposal, the EU AI Act (formally, the “Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts”) will consist of twelve titles, as described in Table 1. [Note: Title I consists of an overview of the Act and is omitted from the table.]

Recommendations

Don’t Expect Too Much from AI Regulations

As analysts Blair Levin and Larry Downes write in the Harvard Business Review:

“It’s far from clear that any combination of government action – legislative, regulatory, or judicial – can really achieve the balancing act of maximizing the value of AI while minimizing its potential harm to the economy or society more broadly.

“As with all revolutionary technologies, the ability of governments to effectively regulate [large language models (LLMs)] will almost certainly fall short. This is no criticism of lawmakers and regulators, but a side effect of the basic fact that law advances incrementally while technology evolves exponentially.”12

Nonetheless, Prepare for the Coming Regulatory Wave

Analysts Daniel J. Felz, Kimberly Kiefer Peretti, and Alysa Austin advise enterprise planners to take a number of crucial steps. These include:

Lay the Groundwork for AI Adoption – [Enterprises] should craft policies that govern how AI will be used in their organization.

Design Governance and Accountability Structures – AI dependencies cut across [enterprise] functions and will increasingly do so in the future as AI is integrated into more business processes.

Risk Assessments – [Enterprises] deciding whether to implement a new AI system should consider a risk assessment (particularly if there could be a “heightened risk” to consumers) and cost-benefit analysis to determine whether the system is worth implementing, noting that bigger risks may justify a formal impact assessment.13

Plan to Comply with the EU AI Act

While not yet formalized, plan to adhere to the provisions of the European Union Artificial Intelligence Act:

  • First, because enterprises doing business in Europe – and that’s most – will have to; and
  • Second, by complying with the EU AI Act, an enterprise will most likely satisfy any and all requirements imposed by lesser local, state, and national AI laws and regulations.

References

1 Ed Burns, Nicole Laskowski, Linda Tucci, and George Lawton. “Artificial Intelligence (AI).” TechTarget. March 2023.

2 Kevin Roose. “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn.” The New York Times. May 30, 2023.

3 NIST AI 100-1: “Artificial Intelligence Risk Management Framework (AI RMF 1.0).” US National Institute of Standards and Technology. January 2023: 5.

4 Dr. Alondra Nelson, Dr. Sorelle Friedler, and Ami Fields-Meyer. “Blueprint for an AI Bill of Rights: A Vision for Protecting Our Civil Rights in the Algorithmic Age.” The White House. October 4, 2022.

5 Andrew J. Gray IV and Kimberley E. Lunetta. “AI in the Workforce: Hiring Considerations and the Benefits and Pitfalls of Generative AI.” Morgan, Lewis & Bockius LLP. May 30, 2023.

6 Caroline Kraczon. “The State of State AI Policy (2021-22 Legislative Session).” EPIC. August 8, 2022.

7 Sharon Perley Masling, W. John Lee, Daniel A. Kadish, and Carolyn M. Corcoran. “New York City Issues Final Rule on AI Bias Law and Postpones Enforcement to July 2023.” Morgan, Lewis & Bockius LLP. April 19, 2023.

8 Charlotte Trueman. “Governments Worldwide Grapple with Regulation to Rein in AI Dangers.” Computerworld | IDG Communications, Inc. June 5, 2023.

9 Ibid.

10 Joshua P. Meltzer. “The US Government Should Regulate AI If It Wants to Lead on International AI Governance.” The Brookings Institution. May 22, 2023.

11 Savannah Fortis. “OpenAI Warns European Officials Over Upcoming AI Regulations.” Cointelegraph. May 25, 2023.

12 Blair Levin and Larry Downes. “Who Is Going to Regulate AI?” Harvard Business Review | Harvard Business School Publishing. May 19, 2023.

13 Daniel J. Felz, Kimberly Kiefer Peretti, and Alysa Austin. “Privacy, Cyber & Data Strategy Advisory: AI Regulation in the U.S.: What’s Coming, and What Companies Need to Do in 2023.” Alston & Bird LLP. December 9, 2022.