
Deepfakes: Overview and How to Spot Them

Introduction

A deepfake is a form of synthetic content, such as images, audio or video, that purports to be real but is actually false and misleading. The word itself is a portmanteau of deep learning, a form of AI, and fake. In addition to manipulating existing content, the purveyors of deepfakes can create entirely new content in which an individual is represented in a false manner, portrayed as doing or saying something that he or she never did or said. An example is a deepfake video from 2022 that appeared to depict President Zelenskyy of Ukraine instructing Ukrainian troops to surrender to Russian aggressors. The primary aim of deepfakes is to spread false information that appears to come from trusted sources.

The core technology behind deepfakes, the generative adversarial network (GAN), was invented in 2014 by Ian Goodfellow, who later became director of machine learning at Apple's Special Projects Group. It works by studying photographs and videos of a targeted person, capturing them from multiple angles for greater accuracy. The resulting videos or other output mimic the behavior and speech patterns of victims. The technology's proliferation has been driven by the availability of cheap computing power, large data sets and deep learning techniques; in combination, these have vastly improved the sophistication of the resulting fakes.
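
To make this concrete, the following is a minimal sketch of the adversarial training loop behind most deepfake generators, written in PyTorch on toy data. The layer sizes, data and training schedule are illustrative assumptions, not any production pipeline; real systems train on large collections of images of the target.

```python
# Minimal GAN training loop (illustrative): a generator learns to produce
# "fake" samples that a discriminator cannot tell apart from real ones.
# Toy 1-D vectors stand in for face images here.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 16, 64

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA_DIM)   # stand-in for real images of the target
    noise = torch.randn(32, LATENT_DIM)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks compete, the generator's output becomes progressively harder to distinguish from the real data, which is why more data and more compute translate directly into more convincing fakes.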

According to Truepic, 2.8 billion people regularly use image editing apps such as Photoshop, resulting in 4.7 billion images being created every day. It estimates that 51.1 percent of all online misinformation comes from manipulated images. A renowned expert in this field, Nina Schick, estimates that 90 percent of digital content that appears online will be synthetic or generated by AI by 2025.

Uses of Deepfakes

There are some uses of deepfakes that can be beneficial, such as in the art and entertainment fields.

In art, deepfakes can be used to generate new content, such as music, building on the output of existing artists, though this becomes controversial when the generated music or artwork plagiarizes their work. In the entertainment world, movies and video games can clone and manipulate the voices of actors in order to save time, when an actor is no longer available, or for scenes that are problematic to shoot. Deepfakes have also been used for satire or parody intended to be humorous; in these cases, it is generally obvious to the audience that the scenes are deepfakes.

Deepfakes can also be used in customer service, where synthetic agents are seen as cheaper and more efficient than real people. Services can use them to provide individuals with personalized responses, such as call forwarding or other receptionist services, or to handle tasks such as checking an account balance or filing a complaint.

However, the majority of deepfakes are used for illicit purposes.

In court cases where hearings are conducted over video rather than face-to-face, falsified images or audio can be substituted and presented as evidence of guilt or innocence.

Deepfakes can be used for fraud by impersonating particular individuals in order to trick a person into handing over personal information, such as financial details, that can then be exploited. This is a major cybersecurity threat for organizations: employees or executives can be tricked into giving away credentials that enable access to sensitive information, seriously harming the business. An organization can also be impacted by forged materials designed to move its stock price. Examples include a fake video of a CEO making damaging statements about the organization, which could cause the stock price to fall, or fake videos announcing a technological breakthrough or product launch, which could elevate it.

Deepfakes can also be used for blackmail, for example by creating material that purports to show targeted individuals in illegal, inappropriate or compromising situations. This can include a public figure seemingly lying to the public, engaging in sexual acts or taking drugs. Such deepfakes can be used for extortion or to ruin a person's reputation. In deepfake pornography, the likenesses of female celebrities are superimposed on the bodies of others.

Political manipulation is a particular problem, with deepfakes used to sway public opinion or spread false news. Several high-profile politicians have had their images transformed by deepfakes in order to ridicule them or spread false information. In 2019, the US Senate Select Committee on Intelligence held hearings regarding the potential for deepfakes to be used maliciously to sway elections.

Implications for Organizations

As has been shown, AI-generated media can be a force for good, such as in customer service operations for organizations. It can also be used for customizable HR avatars or for computer-generated faces for advertising campaigns. However, propagating fake personas can lead to problems for an organization, the worst of which is probably reputational damage. All decisions regarding the use of deepfake technology should be made by business leaders and should not be left to the IT department.

According to NTT, it will not be long before deepfake attacks begin to have a real impact on businesses. It identifies three deepfake-based attacks as the most likely to affect organizations:

C-level fraud – This is the most prominent method. Rather than sending a spoofed email to persuade an employee to transfer money, attackers can use deepfake technology to place a phone call that mimics the voice of a known caller, such as the CFO or CEO, making the request far more convincing and harder to tell apart from one made by the real person.

Extorting companies or individuals – Faces and voices can be attached to media files that show people making false statements. For example, a video could be generated showing the organization's CEO announcing that customer information has been lost on a large scale, which could result in bankruptcy. By threatening to send the video to press agencies or post it on social networks, the attacker can blackmail the organization.

Manipulation of authentication methods – Deepfake technology could be used to circumvent camera-based authentication mechanisms.

Given these dangers, there are certain defensive processes that organizations can put in place. In terms of authentication, stronger factors should be used, requiring individuals to authenticate their devices using biometric identifiers or PINs in conjunction with a FIDO2-conformant passkey. This matters because the FBI has warned that criminals are using deepfakes to apply for remote-work positions under fabricated identities in order to gain access to corporate information.

The Guardian cautions that AI already in use could financially devastate organizations and individuals. The Federal Trade Commission warned about one particular scam in March 2023, in which a loved one, doctor or attorney is impersonated through deepfake technology and begs the recipient to send money. In such cases, it is important to contact the person supposedly behind the call to check whether the request is real. All a scammer needs is a short audio clip of the person being impersonated, which can be gleaned from an online post such as on social media, and a voice-cloning program.

There should also be a process whereby no financial manager is allowed to transfer cash on the basis of a phone call alone; instead, they should be required to call the requester back, even if the request appears to come from the CEO. In addition, no transaction over a predetermined amount should be authorized without prior written approval from multiple executives, and every transaction request should be backed by a signed request or written contract.
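
These controls are simple enough to encode directly in a payment workflow. The following is a minimal sketch, assuming a hypothetical PaymentRequest record and an illustrative $50,000 threshold; real thresholds and fields would be set by the finance function.

```python
# Illustrative payment-control check: no transfer goes through on the
# strength of a phone call alone, and large transfers need written,
# multi-executive approval. All names and values here are hypothetical.
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 50_000  # example policy cut-off

@dataclass
class PaymentRequest:
    amount: float
    callback_verified: bool        # requester was called back on a known number
    written_request_on_file: bool  # signed request or underlying contract exists
    executive_approvals: set[str] = field(default_factory=set)

def may_execute(req: PaymentRequest) -> bool:
    """Return True only if the request satisfies every control."""
    if not req.callback_verified:
        return False               # a phone call alone is never sufficient
    if not req.written_request_on_file:
        return False
    if req.amount > APPROVAL_THRESHOLD and len(req.executive_approvals) < 2:
        return False               # large transfers need multiple executives
    return True

# Example: a convincing "CEO" phone call with no callback is rejected.
print(may_execute(PaymentRequest(amount=250_000, callback_verified=False,
                                 written_request_on_file=False)))  # False
```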

The Guardian reports that $11 million has been extorted from unsuspecting victims using this type of scam; as such scams become more widespread, that figure is likely just the tip of the iceberg.

How to Discern a Deepfake

Authentic and fake content are becoming harder to tell apart, especially as the technology is constantly evolving and improving.

However, there are some obvious giveaways that something is not quite right, in both visual and textual deepfakes.

Many visual deepfakes have problems recreating body parts realistically. Sometimes a person's head will look as though it has been superimposed on someone else's body. Hands are a particular problem for deepfake technology, often appearing with too many fingers; feet are a known problem as well, and a deepfake image will often show more teeth in a person's mouth than is realistic. Unusual skin tones and blurred features are also common.

The backgrounds of images are also problematic. Items in the background often appear blurred, and people in the background may be looking in a completely different direction than expected, or may not appear human at all.

According to TechTarget, the following are telltale signs that can help detect deepfake attacks (a rough code sketch of the last one, blink detection, follows the list):

  • Unusual or awkward facial positioning.
  • Unnatural facial or body movement.
  • Unnatural coloring.
  • Videos that look odd when zoomed or magnified.
  • Inconsistent audio.
  • People who don't blink.
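
The blinking cue can be checked mechanically. The sketch below computes the eye aspect ratio (EAR), a standard measure of eye openness, and counts blinks across a video. It assumes a separate face-landmark model (such as dlib or MediaPipe) has already supplied six (x, y) landmarks per eye per frame; this is an illustration of the technique, not a complete detector.

```python
# Blink detection via the eye aspect ratio (EAR): the ratio drops sharply
# when the eye closes. A clip in which EAR never drops (no blinks) is a
# classic deepfake warning sign.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6x2 array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

EAR_BLINK_THRESHOLD = 0.2  # below this, treat the eye as closed

def blink_count(ear_per_frame: list[float]) -> int:
    """Count closed-to-open transitions; zero blinks in a long clip is suspect."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < EAR_BLINK_THRESHOLD:
            closed = True
        elif closed:
            blinks += 1
            closed = False
    return blinks
```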

Where textual deepfakes are concerned, the following are some indicators to look for (a simple screening sketch follows the list):

  • Misspellings.
  • Sentences that don’t flow naturally.
  • Suspicious source email addresses.
  • Phrasing that doesn’t match the supposed sender.
  • Out-of-context messages that aren’t relevant to any discussion, event or issue.
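
A few of these indicators can be turned into rough mechanical checks, as in the sketch below. The trusted-domain list, keyword set and heuristics are illustrative assumptions, not a real filter; production systems use trained classifiers.

```python
# Naive screening sketch for textual red flags: suspicious sender domains,
# pressure phrasing, and simple machine-generation artifacts.
import re

TRUSTED_DOMAINS = {"example-corp.com"}  # hypothetical allow-list
URGENCY_WORDS = {"urgent", "immediately", "wire", "gift card"}

def red_flags(sender: str, body: str) -> list[str]:
    flags = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        flags.append(f"suspicious sender domain: {domain}")
    if any(word in body.lower() for word in URGENCY_WORDS):
        flags.append("pressure/urgency phrasing")
    if re.search(r"\b(\w+) \1\b", body.lower()):
        flags.append("repeated word (possible machine artifact)")
    return flags

# A look-alike domain plus urgent wire request trips two flags at once.
print(red_flags("ceo@examp1e-corp.com", "Urgent: wire funds immediately."))
```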

Legality and Digital Content Provenance

At present, deepfakes are generally legal, and there is little that law enforcement can do about them; they are only illegal where they transgress existing laws covering, for example, child pornography, defamation or hate crimes. In the US, three states have laws concerning deepfakes: Texas bans deepfakes that aim to influence elections; Virginia bans the dissemination of deepfake pornography; and California has laws regarding the use of political deepfakes within 60 days of an election, as well as the dissemination of non-consensual deepfake pornography.

The lack of laws is partly a consequence of the emerging nature of the technology, of which many people are still unaware. As a result, few individuals who are targeted by deepfakes are protected under the law.

There are attempts underway to require that the provenance of all digital content can be proved. The Coalition for Content Provenance and Authenticity (C2PA), which currently counts 750 organizations as members, has developed a provenance-based standard that aims to address issues of trust and authenticity. It is an open technical standard that gives publishers, content creators and consumers an opt-in way to document and trace the origin and authenticity of original and synthetic content.
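
The tamper-evidence at the heart of such provenance schemes rests on a familiar cryptographic pattern: bind a hash of the content to a signature, so that any later edit invalidates it. The sketch below illustrates that core idea with an Ed25519 key from the Python cryptography package; it is a conceptual illustration only, not the C2PA manifest format itself.

```python
# Conceptual tamper-evident provenance: sign a hash of the content at
# creation time, then verify later. C2PA manifests are richer (assertions,
# certificate chains, embedding in the asset) but rest on the same idea.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()

content = b"...image bytes..."                 # stand-in for an asset
signature = creator_key.sign(hashlib.sha256(content).digest())

def is_untampered(content: bytes, signature: bytes, public_key) -> bool:
    """Recompute the hash and check the creator's signature over it."""
    try:
        public_key.verify(signature, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

pub = creator_key.public_key()
print(is_untampered(content, signature, pub))            # True
print(is_untampered(content + b"edit", signature, pub))  # False: edit detected
```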

In late 2021, Adobe, a founding member of the C2PA, released Content Credentials, which are available to Creative Cloud subscribers, predominantly as an opt-in feature in Photoshop. These tools aim to establish authorship for creators, foster transparency within digital media and bolster trust in content by adding robust, tamper-evident provenance data recording how a piece of content was created, edited and published.

Microsoft has also developed deepfake detection software to analyze videos and photos in order to show whether or not the media has been manipulated. Operation Minerva uses catalogues of previously discovered deepfakes to ascertain whether or not a new video is a modification of an existing fake that has previously been seen and for which a digital fingerprint has been established. Sensity offers a platform that deploys deep learning in a similar manner to the way that antivirus tools look for virus and malware signatures, alerting users when a deepfake is uncovered.
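
Fingerprint matching of the kind Operation Minerva describes can be approximated with perceptual hashes, which change little under re-encoding or light edits. The sketch below uses the Python imagehash package and hypothetical file names; it illustrates the signature-matching idea rather than Minerva's or Sensity's actual systems.

```python
# Signature-style matching against a catalogue of known fakes using
# perceptual hashes: small pixel changes barely move the hash, so
# re-encoded or lightly edited copies of a known fake still match.
from PIL import Image
import imagehash

MATCH_DISTANCE = 8  # max Hamming distance to call a match

# Catalogue of fingerprints from previously identified deepfakes
# (file names here are hypothetical).
known_fakes = {
    "fake_clip_0042": imagehash.phash(Image.open("known_fake_frame.png")),
}

def matches_known_fake(frame_path: str) -> list[str]:
    h = imagehash.phash(Image.open(frame_path))
    return [name for name, sig in known_fakes.items()
            if h - sig <= MATCH_DISTANCE]  # '-' is Hamming distance in imagehash
```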

The US Department of Defense's Defense Advanced Research Projects Agency (DARPA) is developing technology that aims to identify and block deepfakes. Some social media companies are using blockchain technology to verify the source of videos and images before they can be posted, to ensure that only trusted sources are used and fakes are kept out. Facebook and Twitter have banned malicious deepfakes.

Conclusion

Deepfakes are an emerging problem and are only now really coming into the public’s consciousness. However, it seems likely that they will present a very real threat in the future, especially as the technology is constantly being refined. At present, the focus is largely on individuals, but there are likely to be major consequences for businesses and other organizations in coming years.
