What if you could alter a video of anyone to emulate facial and mouth movements that never existed in the source video—by yourself, at home, using a cheap webcam?
Meet Face2Face. Using ordinary RGB webcam footage of a source actor, the system transfers that actor's facial and mouth movements onto a second, target video, making convincing face manipulation remarkably easy. A team of researchers recently released a video demonstrating the technique in real time. While the method is still imperfect, it has major implications for the future of online content.
According to the team’s publication: “Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.”
Using YouTube videos of well-known political figures as targets, the team overlays real-time facial mapping from a single camera directly onto the target video, with little to no visual distortion. Even the interior of the mouth is seamlessly recreated, allowing for accurate real-time lip motion.
Some proposed applications for this technology include teleconferencing, on-the-fly visual dubbing of facial movements for audio translation, and more robust integration into the worlds of gaming, CGI, AR, and VR.
Facial capture and re-mapping is nothing new in these fields. But until now, extensive hardware, software, and specialized skills have been required to participate. If this software is released to the public, anyone will be able to do this from home with minimal resources, computing power, or expertise.
That said, it isn’t perfect just yet. Head rotation beyond roughly 30 degrees won’t translate. Fingers, beards, and long hair also present challenges of their own. Because the software relies so heavily on facial mapping, any object that obscures the face or warps its apparent structure limits how well the transfer works. For now, turning one’s head or covering the face is the surest way to defeat the facial mapping.
As this technology becomes more refined, especially if it starts seeping into consumer applications, video content may enter an unprecedented era of manipulation. Some of this will naturally flow into teleconferencing, more realistic CGI characters in films, or VR avatars that can perfectly mimic a person’s expressions.
But not all applications will be innocuous. Combined with advances in voice manipulation, AI-based vocal interfaces, and robotics, it may soon be possible to fabricate video of world leaders, celebrities, and anyone else doing things they would never dream of. Authentic-looking personalization has arrived, and with it the potential for inauthentic trickery.
How will advanced video manipulation affect credibility? By demonstrating live facial re-enactment on political figures, the tool’s creators have shown just how much is at stake for trust in online video, especially where sensitive personalities and ideas are concerned.
If you aren’t already skeptical of what you see on the internet, you’ll soon need to be.