By: Nick Gambino
If you keep even a peripheral eye on the tech world, or if you enjoy scrolling through Facebook, there’s a good chance you’ve seen examples of deepfake technology. The tech, which keeps getting more believable, takes one person’s face and maps it onto another’s. That video where Sylvester Stallone appears as the Terminator in T2 instead of Arnold? That’s a deepfake.
It’s pretty frightening, but it hasn’t been “good enough” to really fool anyone – until now. Zao is a new app for iOS devices that lets you stick your face over an actor’s face in a movie with scary accuracy.
In case you haven’t heard, #ZAO is a Chinese app which completely blew up since Friday. Best application of ‘Deepfake’-style AI facial replacement I’ve ever seen.
Here’s an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail) 🤯 pic.twitter.com/1RpnJJ3wgT
— Allan Xia (@AllanXia) September 1, 2019
As you can see in the video, Xia is able to replace Leonardo DiCaprio’s face with his own, and you can hardly tell the difference.
Beyond the uncanny switch, what’s really disturbing is how easily the app lets you place yourself in the scene. Until now, creating a deepfake required a ton of data processing and multiple scans of your face from different angles, making different expressions. Even then it wasn’t perfect. Zao does it all with just one image. A simple selfie, and seconds later you’ve taken on someone else’s identity.
The implications of this level of fakery should concern even those who wouldn’t be caught dead believing in a conspiracy theory. If someone is able to put your face in any video, there’s no limit to the craziness that could ensue. If people are willing to tell lies now, wait until they have video footage to “prove it.”
I don’t see any scenario where brilliant minds don’t keep working on this technology until it’s near-perfect. What we need are equally brilliant minds to come up with a way to verify whether deepfake tech has been used on a particular video. Another approach would be for every video, at the moment of creation, to carry a hidden watermark of some sort that software could later detect.
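To make the watermark idea a little more concrete, here’s a minimal sketch of one classic technique, least-significant-bit watermarking, assuming a frame is just a flat list of 8-bit pixel values. Everything here (the `SIGNATURE`, `embed`, and `detect` names) is hypothetical illustration, not how any real camera or app works – and a production watermark would need to survive compression, cropping, and re-encoding, which this toy version would not.

```python
# Toy least-significant-bit (LSB) watermark: hide a signature's bits in
# the lowest bit of successive pixel values, where the change is invisible.

SIGNATURE = b"CAM1"  # hypothetical per-device signature


def embed(pixels, signature=SIGNATURE):
    """Return a copy of pixels with the signature hidden in the LSBs."""
    bits = [(byte >> i) & 1 for byte in signature for i in range(8)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the lowest bit
    return out


def detect(pixels, signature=SIGNATURE):
    """Return True if the signature's bits are present in the LSBs."""
    bits = [(byte >> i) & 1 for byte in signature for i in range(8)]
    return all((pixels[idx] & 1) == bit for idx, bit in enumerate(bits))


frame = [128] * 64          # dummy 8x8 grayscale frame
marked = embed(frame)
print(detect(marked))       # True: watermark is intact
print(detect(frame))        # False: this unmarked frame has all-zero LSBs
```

Each pixel changes by at most 1 out of 255, so the mark is imperceptible – which is exactly why tampering with the footage (and destroying the mark) would be detectable by software, while viewers see nothing.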
Whatever they do, it’s clear that we need some moral checks and balances here before this gets out of control.