The Best Deepfake You’ve Ever Seen, and It Should Worry You

By: Nick Gambino

If you keep even a peripheral eye on the tech world, or if you just enjoy scrolling through Facebook, there’s a good chance you’ve seen some examples of deepfake technology. The tech, which keeps getting more believable, takes the image of one person’s face and maps it onto another’s. That video where Sylvester Stallone appears as the Terminator in T2 instead of Arnold? That’s a deepfake.

It’s pretty frightening, but it hasn’t been “good enough” to really fool anyone – until now. Zao is a new app for iOS devices that lets you paste your face over an actor’s in a movie with scary accuracy.

As you can see in the video, the guy replaces Leonardo DiCaprio’s face with his own, and you can hardly tell the difference.

Beyond the uncanny switch, what’s really disturbing is how easily the app lets you place yourself in a scene. Until now, deepfakes required a ton of data processing and multiple scans of your face from different angles, making different expressions. Even then the result wasn’t perfect. Zao does it all with just one image: a simple selfie, and seconds later you’ve taken on someone else’s identity.

While the app was going viral in China over the weekend, its privacy policy started raising red flags among more careful observers. The policy is in Chinese, so I have to rely on the reporting of others, who say it placed close to zero restrictions on how the company could use your image, including sharing it with the Chinese government. That’s worrisome, but luckily, Zao has since changed its privacy agreement in response to the backlash to state that it will only use your image to improve the app.

The implications of this level of fakery should concern even those who wouldn’t be caught dead believing in a conspiracy theory. If someone is able to put your face in any video, there’s no limit to the craziness that could ensue. If people are willing to tell lies now, wait until they have video footage to “prove it.”

I don’t see any scenario where brilliant minds don’t keep working on this technology until it’s near-perfect. What we need are equally brilliant minds to come up with a way to verify whether deepfake tech has been used on a particular video. One approach would be for cameras and editing tools to embed a hidden watermark in every video they create, one that verification software could later detect and that tampering would destroy.
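To make the watermark idea concrete, here’s a minimal sketch in Python using least-significant-bit steganography on a single frame. This is a deliberately simple stand-in for the robust watermarking a real system would need (this fragile version wouldn’t survive video compression), and every name in it, from the `WATERMARK` tag to the helper functions, is a hypothetical illustration rather than any real verification tool.

```python
import numpy as np

# Hypothetical provenance tag a camera might stamp into each frame.
WATERMARK = "AUTHENTIC-CAM-0001"

def embed_watermark(frame: np.ndarray, tag: str = WATERMARK) -> np.ndarray:
    """Hide `tag` in the least significant bits of the first row's blue channel."""
    bits = np.unpackbits(np.frombuffer(tag.encode("ascii"), dtype=np.uint8))
    out = frame.copy()
    # Clear each pixel's lowest bit, then write one bit of the tag into it.
    out[0, :bits.size, 0] = (out[0, :bits.size, 0] & 0xFE) | bits
    return out

def extract_watermark(frame: np.ndarray, length: int = len(WATERMARK)) -> str:
    """Read `length` bytes back out of the same least significant bits."""
    bits = frame[0, :length * 8, 0] & 1
    return np.packbits(bits).tobytes().decode("ascii", errors="replace")

def looks_authentic(frame: np.ndarray) -> bool:
    """A face swap that rewrites the watermarked pixels destroys the tag."""
    return extract_watermark(frame) == WATERMARK

# Usage on a stand-in 720p frame:
frame = np.random.randint(0, 256, size=(720, 1280, 3), dtype=np.uint8)
marked = embed_watermark(frame)
print(looks_authentic(marked))       # True: tag intact
tampered = marked.copy()
tampered[0:100, 0:400] = 0           # simulate pixels rewritten by a face swap
print(looks_authentic(tampered))     # False: tag destroyed
```

The point of the sketch is the failure mode: the verifier doesn’t detect the fake directly, it detects the absence of the mark that an untouched video would still carry.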

Whatever the solution looks like, it’s clear that we need some moral checks and balances here before this gets out of control.