Monday, March 1, 2021

Is It Real or Is It "Deepfake"? Only Your AI Engineer Will Know



You have just watched a “deepfake.” Zach Math, a film producer and director, had been hired by Mischief USA, a creative agency, to direct a pair of ads for a voting rights campaign. The ads would feature deepfaked versions of North Korean leader Kim Jong-un and Russian president Vladimir Putin.

In consultation with a deepfake artist, John Lee, the team had chosen to go the face-swapping route with the open-source software DeepFaceLab. It meant the final ad would include the actors’ bodies, so they needed to cast believable body doubles.

The team wanted the deepfake leaders to speak in English, though with authentic North Korean and Russian accents. So the casting director went hunting for male actors who resembled each leader in build and facial structure, matched their ethnicity, and could do convincing voice impersonations.

(Karen Hao. “Inside the strange new world of being a deepfake actor.” MIT Technology Review. October 09, 2020.)


What Is a Deepfake?

First coined in late 2017 by a Reddit user of the same name, the term “deepfake” is blended from “deep learning” and “fake.” A deepfake refers to a specific kind of synthetic media where a person in an image or video is swapped with another person's likeness.

The term has since expanded to include “synthetic media applications” that existed before the Reddit page and new creations like StyleGAN — “realistic-looking still images of people that don’t exist,” said Henry Ajder, head of threat intelligence at deepfake detection company Deeptrace.

“The term understandably has a negative connotation, but there are a number of potentially beneficial use cases for businesses, specifically applications in marketing and advertising that are already being utilized by well-known brands,” Ajder said.

(Meredith Somers. “Deepfakes, explained.” MIT Sloan School of Management. July 21, 2020.)

Deepfakes are convincing content depicting artificially constructed events. That content could be a video, photo, or audio recording. Whatever the medium, its content has undergone a change via artificial rendering. (Think changes to a face, a voice, or any element of a person’s movements, speech, or actions.)

The 21st century’s answer to Photoshopping, deepfakes use a form of artificial intelligence called “deep learning” to make images of fake events, hence the name “deepfake.”

The technology can seamlessly stitch anyone in the world into a video or photo they never actually participated in. Such capabilities have existed for decades—that’s how the late actor Paul Walker was resurrected for Fast & Furious 7. But it used to take entire studios full of experts a year to create these effects. Now, deepfake technologies – new automatic computer-graphics or machine-learning systems – can synthesize images and videos much more quickly.

In the article from MIT, Meredith Somers explains how a deepfake is made …

To make a deepfake video, a creator swaps one person’s face and replaces it with another, using a facial recognition algorithm and a deep learning computer network called a variational auto-encoder [VAE], said Matt Groh, a research assistant with the Affective Computing Group at the MIT Media Lab.

VAEs are trained to encode images into low-dimensional representations and then decode those representations back into images.

For example, if you wanted to transform any video into a deepfake with Oscar-winning movie star Nicolas Cage, you’d need two auto-encoders — one trained on images of the actor’s face, and one trained on images of a wide diversity of faces.

The images of faces used for both training sets can be curated by applying a facial recognition algorithm to video frames to capture different poses and lighting conditions that naturally occur.

Once this training is done, you combine the encoder trained on the diverse faces with the decoder trained on Nicolas Cage’s faces, resulting in the actor’s face on someone else’s body.

(Meredith Somers. “Deepfakes, explained.” MIT Sloan School of Management. July 21, 2020.)
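The encode-and-recombine pipeline Groh describes can be sketched in a few lines of code. This is an illustrative toy only, not the actual architecture of DeepFaceLab or any real deepfake tool: production systems use deep convolutional networks trained on thousands of frames, and every dimension and variable name below is a made-up placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64     # a tiny flattened "face image" (placeholder size)
LATENT_DIM = 8   # the low-dimensional representation

# Encoder trained on a wide diversity of faces: maps any face image
# to a compact latent code. (Here just a random linear map.)
encoder = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)

# Decoder trained only on the target actor's faces (e.g. Nicolas Cage):
# reconstructs images in the target's likeness from latent codes.
decoder_target = rng.standard_normal((IMG_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def face_swap(source_face: np.ndarray) -> np.ndarray:
    """Encode the source actor's frame, then decode it with the
    target's decoder -- the recombination step Groh describes."""
    latent = encoder @ source_face   # compress pose and expression
    return decoder_target @ latent   # render it as the target's face

source_face = rng.standard_normal(IMG_DIM)  # stand-in for one video frame
swapped = face_swap(source_face)            # same shape as the input image
```

The design point worth noticing: because the encoder is shared across many faces, its latent code captures pose, expression, and lighting rather than identity, so swapping in a different decoder swaps the identity while keeping the performance.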


“An easy way to think of a deepfake is like Photoshop on steroids. (And powered by artificial intelligence.)”

Deepfake technology holds positive potential for education. It could revolutionise our history lessons with interactivity. It could preserve stories and help capture attention. How? With deepfake examples of historical figures.

For instance, in 2018 the Illinois Holocaust Museum and Education Center created hologrammatic interviews so visitors could talk to and interact with Holocaust survivors. They could ask questions and hear their stories.

(“Yes, positive deepfake examples exist.” Think Automation. https://www.thinkautomation.com/bots-and-ai/yes-positive-deepfake-examples-exist/.)

However, deepfakes also present disturbing possibilities. For example, in 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the internet. A Reddit user was posting nonconsensual fake porn videos using an AI algorithm to swap celebrities’ faces into real porn. Cole sounded the alarm on the phenomenon, right as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could “strip” clothes off any woman photographed.

There’s also the risk that political deepfakes will generate convincing fake news that could wreak havoc in unstable political environments. Deepfakes could also create powerful alternative histories.


In July 2020, two MIT researchers, Francesca Panetta and Halsey Burgund, released a project to create an alternative history of the 1969 Apollo moon landing. Called “In Event of Moon Disaster,” it uses the speech that President Richard Nixon would have delivered had the momentous occasion not gone according to plan. The researchers partnered with two separate companies for deepfake audio and video, and hired an actor to provide the “base” performance. They then ran his voice and face through the two types of software, and stitched them together into a final deepfake Nixon.

MIT Tech reports, “In February, Time magazine re-created Martin Luther King Jr.’s March on Washington for virtual reality to immerse viewers in the scene. The project didn’t use deepfake technology, but Chinese tech giant Tencent later cited it in a white paper about its plans for AI, saying deepfakes could be used for similar purposes in the future.”

    (Karen Hao and Will Douglas Heaven. “The year deepfakes went mainstream.” MIT Technology Review. December 24, 2020.)

How can you spot a deepfake? Matt Groh, a research assistant with the Affective Computing Group at the MIT Media Lab, advised to pay attention to the:

  • Face – Is someone blinking too much or too little? Do their eyebrows fit their face? Is someone’s hair in the wrong spot? Does their skin look airbrushed or, conversely, are there too many wrinkles?

  • Audio – Does someone’s voice not match their appearance (e.g., a heavyset man with a higher-pitched feminine voice)?

  • Lighting – What sort of reflection, if any, are a person’s glasses giving under a light? (Deepfakes often fail to fully represent the natural physics of lighting.)
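The first cue on the checklist, blink frequency, is one that can even be checked programmatically. The sketch below is a hedged illustration, not a real detector: it assumes some face-landmark tool has already produced a per-frame “eye openness” score, and both the closed-eye threshold and the “typical” blinks-per-minute range are rough assumptions for demonstration only.

```python
def count_blinks(eye_openness, closed_threshold=0.2):
    """Count open-to-closed transitions in a per-frame openness sequence."""
    blinks, closed = 0, False
    for score in eye_openness:
        if score < closed_threshold and not closed:
            blinks += 1      # eyes just closed: one blink begins
            closed = True
        elif score >= closed_threshold:
            closed = False   # eyes reopened: ready for the next blink
    return blinks

def blink_rate_suspicious(eye_openness, fps=30, lo=5, hi=30):
    """Flag clips whose blinks-per-minute fall outside an assumed
    typical range -- i.e. someone blinking too much or too little."""
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return rate < lo or rate > hi

# Demo on a synthetic one-minute clip at 30 fps with a blink
# (3 closed frames) every 4 seconds, i.e. 15 blinks per minute:
frames = [0.1 if i % 120 < 3 else 0.8 for i in range(1800)]
print(count_blinks(frames))                 # 15
print(blink_rate_suspicious(frames))        # False: within typical range
print(blink_rate_suspicious([0.8] * 1800))  # True: no blinks at all
```

A real detector would of course combine many such cues and learn its thresholds from data, but the principle is the same: deepfakes often get small physiological details subtly wrong.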

“It doesn’t have to be a politician to be a deepfake. It even might be your friend. It could be you that’s targeted.”

    Francesca Panetta, XR creative director at the MIT Center for Advanced Virtuality


Check out these interactive Deepfake sites:

12 deepfake examples that terrified and amused the internet. Click here: https://www.creativebloq.com/features/deepfake-examples

Watch: In Event of Moon Disaster. Click here: https://moondisaster.org/

Can you spot the DeepFake video? Click here: https://detectfakes.media.mit.edu/

