
ChatGPT Cyber Risks

Introduction

ChatGPT is a digital phenomenon. Developed by OpenAI and released in November 2022, this AI-infused chatbot is revolutionizing how we write and code, providing a friendly and effortless vehicle for:

  • Students to write essays
  • Lawyers to write briefs
  • Marketing professionals to write ads
  • Researchers to outline scholarly works
  • Programmers to write software

In six short months, ChatGPT has become a pervasive influence in business communications. Analyst Arvind Raman reports that, according to a Fishbowl survey, “43 percent of working professionals have used AI tools like ChatGPT to complete tasks at work. More than two-thirds of respondents hadn’t told their bosses they were doing so.”1

What is ChatGPT?

ChatGPT, as shown in Figure 1, is a general-purpose chatbot that leverages artificial intelligence to generate text in response to a human prompt. (A brief programmatic sketch of this prompt-and-response pattern follows the name breakdown below.)

Figure 1. ChatGPT User Interface
Source: OpenAI

The name “ChatGPT” combines:

  • “Chat,” referring to the program’s chatbot functionality; and
  • “GPT,” which stands for Generative Pre-Trained Transformer, “a large language model that [utilizes] deep learning [techniques] to produce human-like text.”2 
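
For readers who want to see that prompt-and-response pattern concretely, here is a minimal sketch using OpenAI’s official Python client. The model name and the prompt are illustrative assumptions, not a prescription:

    # Minimal prompt-and-response sketch; requires "pip install openai"
    # and an OPENAI_API_KEY environment variable. Model and prompt are
    # illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Explain phishing in one sentence."}],
    )
    print(response.choices[0].message.content)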

What’s Wrong with ChatGPT?

Critics of ChatGPT – including some university professors, business leaders, intellectual property attorneys, and privacy advocates – complain that the program:

  • Is a poor steward of personal privacy
  • Is regularly guilty of plagiarism, copyright violations, and other IP sins
  • Is capable of spreading misinformation (ChatGPT, like other AI systems, can suffer from “hallucinations”)
  • Is encouraging young people to commit academic fraud
  • Is creating conditions conducive to cyber exploits

As ChatGPT does not enjoy any Fifth Amendment privileges against self-incrimination, this writer queried the program as follows: “Does ChatGPT pose any cyber risks?” Here’s how ChatGPT responded:

As an AI language model, ChatGPT itself doesn’t pose direct cyber risks. However, it’s important to consider potential risks associated with its usage. Here are a few considerations:

Privacy: When interacting with ChatGPT or any AI system, be cautious about sharing personal, sensitive, or confidential information. Although efforts are made to ensure data privacy, there’s always a small possibility of data breaches or leaks.

Malicious Use: ChatGPT can generate human-like text, which means it could be potentially misused to generate harmful content, spread misinformation, or engage in phishing attempts. OpenAI has implemented safeguards to mitigate such risks, but there’s always a chance of circumvention or misuse.

Dependency on Input: ChatGPT responds based on the information provided to it. If the input contains biased or inappropriate content, ChatGPT may inadvertently amplify or reinforce such biases. It’s crucial to provide responsible and unbiased inputs to ensure ethical usage.

Manipulation and Social Engineering: AI language models like ChatGPT can be used to impersonate individuals or manipulate others by generating convincing text. This can pose risks in areas such as social engineering, online scams, or spreading disinformation.

To mitigate these risks, OpenAI and other organizations are actively researching and implementing measures like content filtering, user feedback systems, and guidelines for responsible AI use. It’s important to continue advancing safety measures, user awareness, and regulatory frameworks to address potential cyber risks associated with AI systems.

While we wait for various legislatures and standards bodies to sort out the issues related to intellectual property and personal privacy violations, IT and security personnel should attempt to address ChatGPT’s more significant cybersecurity risks.

ChatGPT Cyber Risks

Generating Dangerous Malware

Analysts Alissa Irei and Ashwin Krishnan warn that ChatGPT could serve as an unwitting accomplice in malware attacks. “Generative AI that writes ethical code can also write malware. And, while ChatGPT rejects prompts it recognizes as explicitly illegal or nefarious, users have found they can evade its guardrails fairly easily. For example, malicious hackers might ask ChatGPT to generate code for penetration testing, only to then tweak and repurpose it for use in cyber attacks.”3

Exposing Sensitive Data

ChatGPT does not keep secrets well. Because interactions with ChatGPT feel productive, even personal, many users, especially enterprise users, assume a level of confidentiality that does not exist. As Irei and Krishnan report, “Over the course of a single week in early 2023, employees at the average 100,000-person company entered confidential business data into ChatGPT 199 times, according to research from data security vendor Cyberhaven.”4

Data submitted to ChatGPT becomes part of the program’s general training data, with the effect that Enterprise A data could appear in a response to an Enterprise B query. This could be particularly problematic if the two firms are rivals, or if the data encompassed any or all of the following (a simple screening sketch appears after this list):

  • Personally identifiable information (PII) belonging to employees or customers
  • Trade secrets or other intellectual property
  • Enterprise plans
  • Enterprise financial or operational data
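
As a simple illustration of pre-submission screening, the Python sketch below checks a prompt against a few regular-expression patterns before it leaves the enterprise. The patterns and the screen_prompt helper are hypothetical examples; a production data loss prevention (DLP) control would be far more thorough:

    import re

    # Simplistic, illustrative patterns -- a real DLP control would go much further.
    SENSITIVE_PATTERNS = {
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "e-mail address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns detected in a prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    hits = screen_prompt("Customer SSN is 123-45-6789; draft an apology letter.")
    if hits:
        print("Blocked: prompt appears to contain", ", ".join(hits))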

Aiding Phishing Attacks

Phishing is a type of social engineering attack in which the attacker attempts to trick an individual – often an enterprise employee – into revealing sensitive information. Phishing attacks are normally conducted via e-mail and typically involve convincing an e-mail recipient to:

  • Reply with sensitive information
  • Click on a link to a malicious website
  • Open a malicious e-mail attachment

Despite numerous warnings, employees – and, of course, their enterprise employers – are still victimized in large numbers. ChatGPT can make the situation worse by helping phishers craft more appealing phishing messages. As analyst Jim Chilton explains, “ChatGPT’s ability to converse so seamlessly with users without spelling, grammatical, and verb tense mistakes makes it seem like there could very well be a real person on the other side of the chat window. From a hacker’s perspective, ChatGPT is a game changer.”

Also, since many phishers are not native English speakers, “ChatGPT will afford hackers from all over the globe a near fluency in English to bolster their phishing campaigns.”5

Being Hacked Itself

A lesser, but still real, threat is that ChatGPT itself could be hacked.6 This could result in bad actors disseminating misinformation, either:

  • Universally, to all program users; or
  • Selectively, to specifically targeted enterprise users.

Recommendations

In evaluating the cyber risks presented by ChatGPT, it’s important to remember that ChatGPT and its successors are here to stay.

ChatGPT is not only a writing tool but also an automation tool and, as such, will be favored by cost-conscious enterprise executives. While some professional writers may argue that their work product is superior to ChatGPT’s, consider that ChatGPT is in its technological infancy and is rapidly maturing. Also, most enterprise writing assignments are not that challenging and are well within the capabilities of ChatGPT, even today.

As a partial reality check, analyst Rashi Shrivastava relates the experience of businessperson Melissa Shea. “[Ms.] Shea hires freelancers to take on most of the basic tasks for her fashion-focused tech startup, paying $22 per hour on average for them to develop websites, transcribe audio, and write marketing copy. In January 2023, she welcomed a new member to her team: ChatGPT. At $0 an hour, the chatbot can crank out more content much faster than freelancers and has replaced three content writers she would have otherwise hired.

“‘I’m really frankly worried that millions of people are going to be without a job by the end of this year,’ says Shea, co-founder of New York-based Fashion Mingle, a networking and marketing platform for fashion professionals. ‘I’ve never hired a writer better than ChatGPT.’”7

While Ms. Shea does not speak for all enterprise executives, it seems certain that ChatGPT (and other generative AI programs) will feature prominently in many, if not most, enterprise toolboxes. Informed by this realization, enterprise CSOs should endeavor to mitigate any harm that generative AI programs can do. Their top priorities should include:

Surfacing ChatGPT Users

Right now, ChatGPT exists mostly in the realm of “shadow IT.” Employees who use ChatGPT should be encouraged to inform their managers, and thus be subject to any ChatGPT “acceptable use” standards the enterprise may institute.
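
One practical starting point is the web-proxy or DNS logs most enterprises already collect. The Python sketch below tallies ChatGPT-bound requests per user; the hostnames are real, but the CSV log layout is a hypothetical export format that would need to match your proxy’s actual schema:

    import csv
    from collections import Counter

    # Hostnames are real; the CSV layout (columns: timestamp, user, host)
    # is a hypothetical export format -- adapt it to your proxy's schema.
    CHATGPT_HOSTS = {"chat.openai.com", "api.openai.com"}

    def chatgpt_users(log_path: str) -> Counter:
        """Tally ChatGPT-bound requests per user from a proxy-log CSV."""
        counts = Counter()
        with open(log_path, newline="") as fh:
            for row in csv.DictReader(fh):
                if row["host"] in CHATGPT_HOSTS:
                    counts[row["user"]] += 1
        return counts

    for user, requests in chatgpt_users("proxy_export.csv").most_common():
        print(f"{user}: {requests} ChatGPT request(s)")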

Defining Acceptable Use

Specifically (a minimal enforcement sketch follows this list):

  • No use of ChatGPT without proper authorization.
  • No use of sensitive, confidential, or proprietary enterprise data when communicating with ChatGPT.
  • No use of ChatGPT-generated text without basic fact-checking.
  • All use of ChatGPT-generated text should be acknowledged – with proper citations – at least internally.
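
As a rough illustration of how the first and last rules might be automated, here is a hypothetical Python wrapper that gates requests to authorized users and logs each prompt for later attribution. The allowlist and the request_generation helper are invented for the example:

    import logging
    from datetime import datetime, timezone

    AUTHORIZED_USERS = {"avasquez", "jchen"}  # hypothetical allowlist

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("chatgpt-usage")

    def request_generation(user: str, prompt: str) -> None:
        """Gate a ChatGPT request and record it for later attribution."""
        if user not in AUTHORIZED_USERS:
            raise PermissionError(f"{user} is not authorized to use ChatGPT")
        # Record who asked for what, so generated text can be cited internally.
        log.info("%s user=%s prompt=%r",
                 datetime.now(timezone.utc).isoformat(), user, prompt)
        # ...screen the prompt for sensitive data, then call the API...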

Lobbying for Standards

ChatGPT – indeed, AI in general – should be regulated. In May 2023, Sam Altman, CEO of OpenAI, told a US Senate Committee that:

“OpenAI believes that regulation of AI is essential, and we’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits. It is also essential that a technology as powerful as AI is developed with democratic values in mind. OpenAI is committed to working with US policymakers to maintain US leadership in key areas of AI and to ensuring that the benefits of AI are available to as many Americans as possible.”

Enterprise CEOs and CSOs should join Altman and other AI leaders to effect reasonable regulations and standards for the safe and secure use of artificial intelligence, including ChatGPT.

Resource File

References

    1 Arvind Raman. “ChatGPT at Work: What’s the Cyber Risk for Employers?” Cybersecurity Dive | Industry Dive. April 11, 2023.

    2 Kyle Wiggers and Alyssa Stringer. “ChatGPT: Everything You Need to Know About the AI-Powered Chatbot.” TechCrunch | Yahoo. May 31, 2023.

    3 Alissa Irei and Ashwin Krishnan. “Five ChatGPT Security Risks in the Enterprise.” TechTarget. April 2023.

    4 Ibid.

    5 Jim Chilton. “The New Risks ChatGPT Poses to Cybersecurity.” Harvard Business Review | Harvard Business School Publishing. April 21, 2023.

    6 Ibid.

    7 Rashi Shrivastava. “‘I’ve Never Hired a Writer Better Than ChatGPT’: How AI Is Upending the Freelance World.” Forbes.com. April 20, 2023.
