
The impact of deepfakes: How do you know when a video is real?


In a world where seeing is increasingly no longer believing, experts are warning that society must take a multi-pronged approach to combat the potential harms of computer-generated media.

As Bill Whitaker reports this week on 60 Minutes, artificial intelligence can manipulate faces and voices to make it look like someone said something they never said. The result is videos of things that never happened, called "deepfakes." Often, they look so real, people watching can't tell. Just this month, Justin Bieber was tricked by a series of deepfake videos on the social media video platform TikTok that appeared to be of Tom Cruise.

These fabricated videos, named for a combination of the computer science practice known as "deep learning" and "fake," first arrived on the internet near the end of 2017. The sophistication of deepfakes has advanced rapidly in the ensuing four years, along with the availability of the tools needed to make them.

But beyond entertaining social media users and tricking unsuspecting pop singers, deepfakes can also pose a serious threat.

In a 2018 California Law Review paper, legal scholars Bobby Chesney and Danielle Citron outlined the potential harms deepfakes pose to individuals and society. Deepfakes, they wrote, can potentially distort democratic discourse; manipulate elections; erode trust in institutions; jeopardize public safety and national security; damage reputations; and undermine journalism.

The primary threat of deepfakes stems from people becoming convinced that something fictional really occurred. But deepfakes can distort the truth in another insidious way. As manipulated videos pervade the internet, it may become progressively harder to separate fact from fiction. So, if any video or audio can be faked, then anyone can dismiss the truth by claiming it is synthetic media.

It is a paradox Chesney and Citron call the "Liar's Dividend."

"As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deepfakes," they wrote. "Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence."

As the public learns more about the threats posed by deepfakes, efforts to debunk lies can instead seem to legitimize disinformation, as a portion of the audience believes there must be some truth to the fraudulent claim. That is the so-called "dividend" paid out to the liar.


One public example of this occurred last year, when Winnie Heartstrong, a Republican congressional candidate, released a 23-page report claiming that the video of George Floyd's murder was actually a deepfake. The false report alleged that Floyd had died years earlier.

"We conclude that no one in the video is really one person but rather they are all digital composites of two or more real people to form completely new digital persons using deepfake technology," Heartstrong wrote in the report, according to the Daily Beast.

Nina Schick, a political scientist and technology consultant, wrote the book Deepfakes. She told 60 Minutes this "liar's dividend" concept carries the potential to erode the information ecosystem.

Videos, she pointed out, are currently compelling evidence in a court of law. But if jurors cannot agree on their authenticity, the same video could exonerate someone — or send them to prison for years.

"We really have to think about how do we inbuild some kind of security so that we can ensure that there is some degree of trust with all the digital content that we interact with on a day-to-day basis," Schick said. "Because if we don't, then any idea of a shared reality or a shared objective reality is absolutely going to disappear."

Looking for truth — how to authenticate real videos

But how can people determine if a video has been faked? Schick said there are two ways of approaching the problem. The first is to build technology that can determine if a video has been manipulated — a task that is harder than it seems.

That is because the technology that makes deepfakes possible is a type of deep learning called generative adversarial networks (GANs). A GAN consists of two neural networks: algorithms that learn relationships in a data set, such as a collection of photos of faces. The two networks, one a "generator" and the other a "discriminator," are pitted against each other.

The generator attempts to perfect an output, images of faces, for example, while the discriminator tries to determine whether those images were created artificially. As the two networks work against each other in a sort of competition, they hone each other's capabilities, and the output improves over time.
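
To make the adversarial setup concrete, here is a minimal sketch in Python using PyTorch (an assumed framework; the article names none). Toy one-dimensional samples stand in for the face images described above:

    # A minimal GAN training loop (illustrative sketch, not a production model).
    import torch
    import torch.nn as nn

    latent_dim = 8

    generator = nn.Sequential(            # turns random noise into a fake sample
        nn.Linear(latent_dim, 32), nn.ReLU(),
        nn.Linear(32, 1),
    )
    discriminator = nn.Sequential(        # scores how "real" a sample looks
        nn.Linear(1, 32), nn.ReLU(),
        nn.Linear(32, 1), nn.Sigmoid(),
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 3.0            # "real" data: N(3, 0.5)
        fake = generator(torch.randn(64, latent_dim))    # generated data

        # The discriminator learns to label real samples 1 and fakes 0.
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
                  + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # The generator learns to make the discriminator output 1 on its fakes.
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

As training alternates, each network's improvement forces the other to improve, which is the competition described above.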

"This is always going to be a game of cat and mouse," Schick said. "Because just as soon as you build a detection model that can detect one kind of deepfake, there will be a generator that will be able to beat that detector."

Schick likened it to antivirus software that must be continually updated because viruses advance more quickly than the software that finds them.

Rather than attempting to detect videos that have been faked, Schick said, the answer may lie in validating real videos. It's an approach known as "media provenance."

To do this, technology will need to be embedded in hardware and software. For a video, the technology would indicate where the video was shot and keep a record of how it had been manipulated. Think of it, in a way, like a digital watermark imprinted every time a video is edited.

For it to work, the watermark must be permanent, unable to be changed by outside parties. Then there would be no disputing, for example, that the video of George Floyd's murder had been shot outside 3759 Chicago Ave. in Minneapolis on May 25, 2020, and never modified.
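
As an illustration of the idea only (not the format of any real standard such as C2PA), a capture-time record might bind the device, location, and time to a cryptographic fingerprint of the video's bytes:

    # Hypothetical capture-time provenance record (illustrative only).
    # The SHA-256 hash binds the metadata to the exact bytes of the file:
    # change one frame and the check below fails.
    import hashlib
    import json

    def fingerprint(media: bytes) -> str:
        return hashlib.sha256(media).hexdigest()

    def capture_record(media: bytes, device: str, location: str, shot_at: str) -> dict:
        return {
            "device": device,
            "location": location,
            "shot_at": shot_at,
            "content_hash": fingerprint(media),
        }

    video = b"\x00\x00\x00\x18ftyp..."   # stand-in for raw video bytes
    record = capture_record(video, "phone-1234", "Minneapolis, MN",
                            "2020-05-25T20:08:00Z")

    # Anyone holding the record can later confirm the file is unmodified.
    assert record["content_hash"] == fingerprint(video)
    print(json.dumps(record, indent=2))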

"That's exactly what the kind of broader idea is for media provenance," Schick said, "that you have a chain, almost like a ledger, immutable, in the DNA of that content that nobody can tamper with, nobody can edit to show you not only that this is real, but where it was taken, when it was taken."

Today, the most popular form of immutable ledger is the blockchain. A blockchain functions as a secure, public electronic database, the same underlying approach that allows cryptocurrencies such as Bitcoin to record transactions.
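
The chaining idea itself is simple to sketch: each ledger entry includes a hash of the previous entry, so altering any past record breaks every link after it. A minimal, purely illustrative version:

    # Tamper-evident edit ledger: each entry stores the hash of the previous
    # entry, so rewriting history invalidates every later link. This is an
    # illustration of the chaining idea, not any real blockchain's design.
    import hashlib
    import json

    def entry_hash(entry: dict) -> str:
        # Canonical JSON so the same entry always hashes the same way.
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append(ledger: list, action: str, content_hash: str) -> None:
        prev = entry_hash(ledger[-1]) if ledger else "genesis"
        ledger.append({"action": action, "content_hash": content_hash, "prev": prev})

    def verify(ledger: list) -> bool:
        prev = "genesis"
        for entry in ledger:
            if entry["prev"] != prev:
                return False
            prev = entry_hash(entry)
        return True

    ledger = []
    append(ledger, "captured", "ab12...")   # hash of the original footage
    append(ledger, "trimmed", "cd34...")    # hash after an edit
    assert verify(ledger)

    ledger[0]["content_hash"] = "ee99..."   # tamper with history...
    assert not verify(ledger)               # ...and verification now fails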

If a mobile phone operating system manufacturer were to adopt this approach, for example, then any photo or video taken by a smartphone could be authenticated through this chain of provenance.

But individual companies using authenticating technology is not enough, Schick said. Instead, society will need to have a multi-faceted approach, with policymakers working with technologists and companies.

Others agree. Adobe, in partnership with Twitter and the New York Times, in 2019 announced the Content Authenticity Initiative, a coalition of journalists, technologists, creators, and leaders "who seek to address misinformation and content authenticity at scale." Earlier this year, Adobe, Microsoft, the BBC and others formed the Coalition for Content Provenance and Authenticity, a consortium working on common standards for provenance of digital media.

"You're really talking about trying to figure out a way to build safeguards and resilience up to this rapidly changing information ecosystem," Schick said, "which has become increasingly corrupt, in which deepfakes are just really the latest emerging and evolving threat."

