
Know Your Enemy: The Difficulty of Defining Deepfakes

Facebook recently promised that it would increase efforts to remove so-called “deepfake” videos, including content containing “misleading manipulated media.”

In addition to fears that deepfakes — altered videos that appear to be authentic — could sway the upcoming 2020 general election in the United States, there are growing concerns that they could ruin reputations and harm businesses.

A manipulated video that looks real could convince viewers to believe that the subjects in the video said things they didn’t say or did things they didn’t do.

Deepfakes have become more sophisticated and easier to produce, thanks to artificial intelligence and machine learning. These techniques can be applied to existing videos quickly and easily, achieving results that once took professional special effects teams and digital artists hours or days.

“Deepfake technology is being weaponized for political misinformation and cybercrime,” said Robert Prigge, CEO of Jumio.

In one high-profile case, criminals last year used AI-based software to impersonate a chief executive’s voice and demand a fraudulent transfer of US$243,000, Prigge told TechNewsWorld.

“Unfortunately, deepfakes can also be used to bypass many biometric-based identity verification systems, which have rapidly grown in popularity in response to impersonation attacks, identity theft, and social engineering,” he added.

Facing Off Against Deepfakes

Given the potential for damage, both Facebook and Twitter have banned such content. However, it’s not clear what the bans cover. For its part, Facebook will utilize third-party fact-checkers, reportedly including more than 50 partners working worldwide in more than 40 languages.

“They’re banning videos created through machine learning that are intended to be deceptive,” explained Paul Bischoff, privacy advocate at Comparitech.

“The videos must have both audio and video that the average person wouldn’t reasonably assume is fake,” he told TechNewsWorld.

“A deepfake superimposes existing video footage of a face onto a source head and body using advanced neural network-powered AI to create increasingly realistic doctored videos,” noted Prigge. “In other words, a deepfake looks to be a real person’s recorded face and voice, but the words they appear to be speaking were never really uttered by them.”
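For readers curious about the mechanics, the sketch below is a minimal, illustrative take on the shared-encoder, per-identity-decoder autoencoder design behind many face-swap tools. The PyTorch framing and all layer sizes are assumptions for illustration, not the internals of any particular deepfake application.

```python
# Minimal sketch (illustrative assumptions throughout): many face-swap
# pipelines train one shared encoder plus one decoder per identity, then
# swap faces by routing person A's encoding through person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent code; one per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # trained on faces of A and B

face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real face crop of A
swapped = decoder_b(encoder(face_a))   # A's pose and expression, B's face
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

In a real pipeline, both decoders would be trained at length on aligned face crops; the swap at the end is the only change made at inference time.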

Defining Deepfakes

One troubling issue with deepfakes is simply determining what is a deepfake and what is just an edited video. In many cases, deepfakes are built by utilizing the latest technology to edit or manipulate video. News outlets regularly edit interviews, press conferences, and other events when crafting news stories as a way to highlight certain elements and get juicy sound bites.

Of course, there have been plenty of criticisms of mainstream news media for manipulating video footage to change the context without AI or machine learning, simply using the tools of the editing suite.

Deepfakes generally are viewed as far more dangerous because it isn’t just context that is altered.

“At its heart, a deepfake is when someone uses sophisticated technology — artificial intelligence — to blend multiple images or audio together in order to change its original meaning and convey something that is not true or valid,” said Chris Olson, CEO of The Media Trust.

“From manipulating audio to creating misleading images, deepfakes foster the spread of disinformation as the end user typically doesn’t know that the content or message is not real,” he told TechNewsWorld.

“To varying degrees, social platforms have issued policies prohibiting the posting of highly manipulated videos that are not clearly labeled or readily apparent to consumers as fake,” added Olson.

Still, “while these policies are a step in the right direction, they do not explicitly ban manipulated video or audio,” he pointed out. “Having your account blocked isn’t much of a deterrent.”

Manipulation Without Malice

Facebook’s ban and other efforts to curb deepfakes do not apply to political speech or parodies.

Consent may be another issue that needs to be addressed.

“This is a great point — fake videos and images can be defined broadly — for example, anything that is manipulated,” said Shuman Ghosemajumder, CTO of Shape Security and former fraud czar at Google.

“But most media created is, to some extent, manipulated,” he told TechNewsWorld.

Manipulations include automatic digital enhancements to photos taken with modern cameras — those equipped with HDR settings or other AI-based enhancement — as well as filters and aesthetic editing and retouching, noted Ghosemajumder.

“If most media is thus automatically marked on a platform as ‘synthetic’ or ‘manipulated,’ this will reduce the benefit of such a tag,” he remarked.

The next step will be to figure out objective criteria to exclude that type of editing and focus on “maliciously manipulated” media, which could be an inherently subjective standard.

However, “it can’t be a question of individuals consenting to be in videos, because no such consent is generally required of public figures or of videos and images that are taken in public places,” observed Ghosemajumder, “and public figures are the ones that are most likely to be targeted by malicious users of these technologies.”

AI Tools Singled Out

Facebook’s deepfakes ban singles out videos that use AI technology or machine learning to manipulate the content.

“This is an incomplete approach, since most fake content, including the misleading videos posted today, is not created with such technology,” said Ghosemajumder.

The now-famous Nancy Pelosi video “could have been created with technology from 40-plus years ago since it was just simple video editing,” he added.

More importantly, “maliciousness cannot be defined based only on the technology used,” said Ghosemajumder, “since much of the same technology used to create a malicious deepfake is already being used to create legitimate works of art, such as the de-aging technology used in The Irishman.”

Viewer Perception

As the Facebook policy stands, satire and parody would be exempt, but what falls into those categories isn’t always clear. Viewer reactions don’t always align with what the content maker may have had in mind. A joke video that falls flat might not be viewed as satire.

“The standards for judging satire or fan films are also subjective — it may be possible to determine what is or is not intended as satire in a court of law to society’s satisfaction in an individual instance, but it is much more difficult to make such determinations automatically for millions of pieces of content in a social media platform,” warned Ghosemajumder.

In addition, even in cases when a video is created with obviously satirical intent, that intent can get lost if the video is shortened, taken out of context, or even just reposted by someone who didn’t understand the original intent.

“There are many examples of satirical content fooling people who didn’t understand the humor,” said Ghosemajumder.

“It’s more about how the audience perceives it. Satire doesn’t fall into the ban. Nor does parody, and if the video is clearly labeled as fiction, it should be fine,” countered Bischoff.

“It is our understanding that Facebook and Twitter are not banning satire or parody — intention is the key differentiator,” added Alexey Khitrov, CEO of ID R&D.

“Satire by definition is the use of humor, exaggeration or irony, whereas the intention of a deepfake is to pass off altered or synthetic video or speech as authentic,” he told TechNewsWorld. “Deepfakes are used to trick a viewer and spread misinformation. While a deepfake aims to deceive the average user, satire is apparent.”

Legal Efforts

There have been legal efforts to stop the proliferation of deepfakes, but the government might not be the best entity to tackle this high-tech problem.

“Over the past two years, several U.S. states introduced legislation to govern deepfakes, with the Malicious Deep Fake Prohibition Act and DEEPFAKES Accountability Act introduced to the U.S. Senate and House of Representatives respectively,” said The Media Trust’s Olson.

Both bills stalled with lawmakers, and neither proposed much change beyond introducing penalties. Even if laws were passed, it is unlikely a legislative approach can keep up with technological advancements.

“It’s very difficult to effectively legislate against a moving target like emerging technology,” warned Olson.

“Until there is perfect recognition that content is a deepfake, platforms and media outlets need to disclose to consumers the source of the content,” he suggested.

“Deepfake videos cannot be stopped, just like photoshopped photos cannot be stopped,” said Josh Bohls, CEO of Inkscreen.

“Libel laws can be expanded to include altered videos that might misrepresent a public figure in a harmful way, providing the subject some kind of recourse,” he told TechNewsWorld. “It would also be prudent to pass laws requiring the labeling of certain categories of videos — political ads, for example — so that the viewer is aware that the content has been altered.”

Tech to Fight Tech

Rather than relying on government to create new laws, the tech industry could tackle the deepfakes problem, even if the definition remains fuzzy. Providing access to technology that can determine whether a video has been manipulated would be a good first step.
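As a rough sketch of what such access could look like, the example below samples frames from a clip and averages the scores of a binary real/fake classifier. The untrained ResNet-18 is a placeholder for a model trained on a labeled corpus; nothing here reflects any platform's actual detector.

```python
# Illustrative frame-level detection sketch; the untrained model stands
# in for one trained on labeled real/fake footage.
import torch
import torchvision.models as models

classifier = models.resnet18(weights=None, num_classes=2)  # placeholder weights
classifier.eval()

def score_clip(frames: torch.Tensor) -> float:
    """frames: (N, 3, 224, 224) sampled from a video; returns mean P(fake)."""
    with torch.no_grad():
        probs = classifier(frames).softmax(dim=1)[:, 1]  # class 1 = "fake"
    return probs.mean().item()

clip = torch.rand(8, 3, 224, 224)  # stand-in for 8 decoded, resized frames
print(f"illustrative fake score: {score_clip(clip):.3f}")
```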

“Several social platforms have taken steps to detect and remove deepfake videos, with limited success, as detection lags behind the speed at which new technology emerges to create better, more realistic deepfakes in an ever-diminishing period of time,” said Olson.

“The challenge remains the difficult process of identifying and removing deepfakes before they spread to the general public,” he said.

Social media is where these videos are spreading and where removal is crucial. These platforms are in a good position to roll out new technology.

“Overall, Twitter and Facebook announcing plans to take action against malicious fake content is an excellent first step and will increase scrutiny and skepticism of media uploaded to the Internet, especially by anonymous or unknown sources,” noted Ghosemajumder.

However, this is absolutely not a ‘silver bullet’ solution to this problem for many reasons, warned Ghosemajumder.

“The detection of fake media is a cat-and-mouse game. If manipulated content is immediately flagged, then malicious actors will experiment against the system with variations of their content until they can pass undetected,” he explained.

“On the other hand, if manipulated content is not immediately flagged or removed, it can be spread — and will quickly morph — causing damage in whatever period exists between creation and uploading and flagging, in a way that may not be possible to easily contain at that point,” Ghosemajumder suggested.

“Finally, the use of fake accounts and automated fraud and abuse is the primary mechanism that malicious actors use to spread disinformation,” he said. “This is one of the key areas social networks need to address with the most sophisticated technology available, not just home-built solutions.”

Anti-Deepfake Tools

Several companies are exploring methods of combating deepfakes. Facebook, Microsoft and AWS launched the Deepfake Detection Challenge to encourage the development of open-source detection tools. [*Correction – Jan. 22, 2020]

“Without consistently flagging suspect digital content and labeling the source of the doctored video or audio, technology will have little impact on the issue,” said Olson.

“Providing this context will help the consumer better understand the veracity of the message. Was it sent to me from an unknown third party, or did I find it on a brand website during product research? This attribution information is what’s needed to counter deepfakes,” he said.
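One way to picture that attribution idea: a publisher could cryptographically tag media at the source so that any later alteration breaks verification. The sketch below, a plain HMAC over the file bytes with a hypothetical publisher key, is an assumption about how disclosure might work technically, not a scheme any platform has announced.

```python
# Hypothetical source-attribution sketch: the publisher key and the
# workflow are assumptions for illustration only.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-secret"  # hypothetical key held by the publisher

def sign_media(data: bytes) -> str:
    """Tag media bytes so that any later edit breaks verification."""
    return hmac.new(PUBLISHER_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Check that the bytes still match the publisher's tag."""
    return hmac.compare_digest(sign_media(data), tag)

original = b"...video bytes..."                # stand-in for a media file
tag = sign_media(original)
print(verify_media(original, tag))             # True: untouched
print(verify_media(original + b"edit", tag))   # False: manipulated
```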

However, with a lot of manipulated content, it is often about misdirection — and in this case, too much focus on the video itself could be problematic.

“It’s not just the visual stream that’s vulnerable to deepfakes, but also the voice stream. Both can be altered or manipulated as part of a deepfake,” said ID R&D’s Khitrov.

There is technology to help detect a deepfake’s manipulated audio, he noted.

“Liveness detection capabilities can identify artifacts that aren’t audible to the human ear but are present in synthesized, recorded, or computer-altered voice,” explained Khitrov. “We can detect over 99 percent of audio deepfakes, but only where that technology is deployed.”
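As a simplified illustration of that idea, and emphatically not ID R&D's product, the sketch below computes a log-magnitude spectrogram, the kind of representation where synthesis artifacts tend to surface, and scores it with an untrained placeholder classifier.

```python
# Illustrative audio-artifact sketch: the scoring weights are untrained
# placeholders; real systems learn them from labeled bona-fide vs.
# spoofed speech.
import numpy as np

def log_spectrogram(signal, frame_len=512, hop=256):
    """Frame the signal, apply a Hann window, take log-magnitude FFTs."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    return np.log1p(np.abs(np.fft.rfft(np.stack(frames), axis=1)))

rng = np.random.default_rng(0)
audio = rng.standard_normal(16000)               # stand-in for 1 s at 16 kHz
features = log_spectrogram(audio).mean(axis=0)   # average spectrum per bin
weights = rng.standard_normal(features.shape)    # untrained placeholder
score = 1.0 / (1.0 + np.exp(-(features @ weights)))  # pseudo spoof score
print(f"spoof score (illustrative only): {score:.3f}")
```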

The Deep View

Those with a pessimistic take believe the technology to create convincing deepfakes simply will outpace the technology to stop it.

“Our analogy is that there are viruses and there is antivirus technology, and staying ahead of the bad guys requires constant iteration, and the same holds true for deepfake detection,” said Khitrov.

However, “the AI-based technologies that the bad guys are using are very similar to the technologies the good guys are using. So the breakthroughs that are available to the bad guys are also being used by the good guys to stop them,” he added.

A bigger threat is that “deepfake software is already freely available on the Web, although it’s not that great yet,” said Comparitech’s Bischoff. “We will have to learn to be vigilant and skeptical.”

*ECT News Network editor’s note – Jan. 22, 2020: Our original published version of this story incorrectly stated that the Deepfake Detection Challenge was a project launched by The Media Trust. In fact, it is a joint effort of Facebook, Microsoft, and AWS. We regret the error.

Peter Suciu

Peter Suciu has been an ECT News Network reporter since 2012. His areas of focus include cybersecurity, mobile phones, displays, streaming media, pay TV and autonomous vehicles. He has written and edited for numerous publications and websites, including Newsweek, Wired and FoxNews.com.
