December 1, 2020

Detecting deepfakes

Scientists use machine learning to expose deceptive, sometimes dangerous, videos.

By Octavio Ramos, Writer
Deepfakes use artificial intelligence to create convincing image, audio, and video hoaxes. Credit: Los Alamos National Laboratory

Since the 1950s, filmmakers have used computer-generated imagery (CGI) to produce breathtaking special effects for blockbuster films. Over time, CGI has become more sophisticated and easier to produce, creating fantastic creatures like the dragon in The Hobbit trilogy and crafting realistic models of actual human beings.

Today, what used to take months of intense labor, multiple computing systems, and millions of dollars to produce can now be done on a home computer in a matter of hours. Thanks to advances in artificial intelligence technology, anyone can create startling videos by using sophisticated but surprisingly accessible and cheap software programs. These programs have led to a phenomenon known as a “deepfake.”

“A deepfake is a manipulated video recording, either doctored footage or completely fabricated performances,” explains Juston Moore of the Advanced Research in Cyber Systems group at Los Alamos National Laboratory.

The most common type of deepfake is a video portrait. “A source actor is filmed speaking, and then special software transfers the target’s (say Barack Obama’s or Donald Trump’s) facial mannerisms—including head position and rotation, eye gaze, and lip movement—over the source actor,” Moore explains. New audio is provided by an actor capable of mimicking voices. The end result is a video of a target saying something they never actually said.

“The sophistication of this technology continues to evolve quickly,” Moore says. “It’s getting to the point that we will no longer be able to trust our own eyes.”

Deepfake technology has been used to create amusing videos, such as one with Gene Kelly’s head replaced by that of Nicolas Cage for a “Singin’ in the Rain” dance sequence. But deepfakes can also be insidious, posing a threat to national security.

Imagine a convincing deepfake of a world leader declaring war or a well-liked actress making a terrorist threat. To demonstrate quickly that such videos are frauds, a team of Los Alamos researchers is exploring several machine-learning methods that identify and thus counter deepfakes.

Garrett Kenyon, a member of Moore’s team, is working on an approach inspired by models of the brain’s visual cortex. In other words, Kenyon's models recognize images much like the brain does.

“Our detection technology consists of cortically inspired algorithms,” Kenyon explains. “Think of these cortical representations as pieces of a jigsaw puzzle. Our algorithms are so powerful that they can reconstruct the same jigsaw puzzle—the video portrait—in an infinite number of ways.”

The team discovered that the jigsaw pieces used to reconstruct real video portraits are different from the ones used to reconstruct deepfakes. The disparities are what enable the software under development to tell the difference between a real video portrait and a deepfake one.
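The core idea above can be illustrated with a toy sketch. The following is not the Lab’s actual detector; it is a minimal stand-in that uses greedy sparse coding (matching pursuit) over a hypothetical dictionary of visual features. Patches that are well explained by the dictionary (the “real” jigsaw pieces) reconstruct with low residual energy, while patches carrying off-dictionary artifacts (a crude proxy for generator fingerprints) reconstruct worse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "cortical" dictionary: random unit-norm atoms standing in
# for learned visual features. (An assumption for illustration; real
# detectors would learn these features from genuine video portraits.)
n_pixels, n_atoms = 64, 128
D = rng.normal(size=(n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)

def sparse_code(x, D, k=8):
    """Greedy matching pursuit: approximate x with at most k atoms."""
    recon = np.zeros_like(x)
    for _ in range(k):
        residual = x - recon
        scores = D.T @ residual            # correlation with each atom
        j = np.argmax(np.abs(scores))      # best-matching atom
        recon += scores[j] * D[:, j]       # add its contribution
    return recon

def make_patch(fake=False):
    """'Real' patches lie in the span of a few atoms; 'fake' patches
    carry off-dictionary noise, a toy stand-in for synthesis artifacts."""
    idx = rng.choice(n_atoms, size=4, replace=False)
    x = D[:, idx] @ rng.normal(size=4)
    if fake:
        x += 0.5 * rng.normal(size=n_pixels)
    return x

def residual_energy(x):
    r = x - sparse_code(x, D)
    return float(r @ r)

real_scores = [residual_energy(make_patch(False)) for _ in range(50)]
fake_scores = [residual_energy(make_patch(True)) for _ in range(50)]

# Fakes reconstruct worse on average: that gap is the detection signal.
print(np.mean(real_scores), np.mean(fake_scores))
```

A practical system would learn the dictionary from data and feed the reconstruction statistics to a classifier, but the separation shown here, real footage fitting the learned pieces and fakes not quite fitting, is the disparity the article describes.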

Kenyon notes that better, more realistic deepfakes are under constant development. For example, one new target for deepfakes is body manipulation—videos that show, for example, couch potatoes playing professional sports, performing advanced martial arts, or executing 100 chin-ups with ease. Although full-body manipulation is still in its infancy, the age of the “digital puppeteer” is here.

“We are up against a rapidly moving target,” Kenyon says. “Thus, we are constantly working on speeding up and improving our algorithms. Advanced deepfakes may fool the brain, but we’re working to ensure that they don’t fool our algorithms.”
