Facebook, a company with a fabulous track record in using artificial intelligence to make the world a better place, have figured out how to turn telly people into videogame people. The researchers’ Vid2Game project lets them create a controllable avatar from live-action footage, as demonstrated below with a tennis player.
The resulting animations aren’t super convincing, at least at this stage. That doesn’t stop Facebook boasting that their work “paves the way for new types of realistic and personalized games, which can be casually created from everyday videos.”
As well as using a joystick to control the character’s movements, the software also allows the background to be swapped. Again, this is all rather limited – we’re a long way from plunging such avatars into 3D worlds.
It works like this:
“The method is based on two networks. The first network maps a current pose, and a single-instance control signal to the next pose. The second network maps the current pose, the new pose, and a given background, to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.”
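The two-network split above can be sketched as a simple data-flow loop. This is a minimal, heavily stubbed illustration, not Facebook's implementation: the names `pose2pose` and `pose2frame` and all the toy dimensions are assumptions for clarity, standing in for what are actually deep convolutional networks.

```python
# Schematic sketch of the two-network pipeline quoted above.
# Both "networks" are stubs: the real models are trained deep nets.

POSE_DIM = 4   # toy pose: a handful of joint coordinates
FRAME_DIM = 8  # toy frame: a flat list of pixel values

def pose2pose(pose, control):
    """Network 1 (stubbed): current pose + control signal -> next pose."""
    # A trained network would predict plausible motion; we just nudge
    # each joint by the control signal to show the data flow.
    return [p + control for p in pose]

def pose2frame(pose, next_pose, background):
    """Network 2 (stubbed): current pose, next pose, background -> frame."""
    frame = list(background)
    # A real network composites a rendered character; we just mark the
    # pixels indexed by the (rounded) new pose values.
    for p in next_pose:
        frame[int(p) % FRAME_DIM] = 1.0
    return frame

def step(pose, control, background):
    """One interactive step: joystick input in, rendered frame out."""
    next_pose = pose2pose(pose, control)
    frame = pose2frame(pose, next_pose, background)
    return next_pose, frame

pose = [0.0, 1.0, 2.0, 3.0]
background = [0.0] * FRAME_DIM  # swapping this list swaps the backdrop
pose, frame = step(pose, control=1.0, background=background)
print(pose)   # -> [1.0, 2.0, 3.0, 4.0]
print(frame)  # -> [0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```

The point of the split is that the background is only an input to the second network, which is why it can be swapped out without retraining the motion model.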
And looks like this:
Anyone with basic software skills has been able to get up to video trickery like this for a while now, creating fake videos from footage of actual people. They’ve never been able to puppet them in real time, though.
If you really want to know how it works and aren’t intimidated by phrases like “autoregressive models” and “ReLU activations”, the full research paper is here.
I’ve been trying to think about exactly what the implications of this are, beyond the obviously horrific upcoming prevalence of playable YouTube influencers. I do think any progress towards making easily doctored video is cause for concern. I’m not saying we should attempt to curtail such research, as that’s likely to be counterproductive – and hey, I’m enough of a narcissist to find the idea of playing as my literal self in a videogame appealing. But it seems to me that a well-placed fake video at a critical juncture, like, say, an upcoming election or referendum, could easily contribute to steering society in an unwelcome direction.
It’s worth thinking about the impact of widely spread fake videos more broadly, too. You’ll never be able to fully trust your eyes, and people might always be able to claim real footage of them is merely a fiction. What do we do when video evidence doesn’t stand up in court?