This article was published on February 7, 2020

Reuters built a prototype for automated news videos using Deepfakes tech

Coming to you live from the inside of an artificial neural network


Reuters and an AI startup named Synthesia today unveiled a joint project that uses Deepfakes-style technology to generate automated news reports in real time.

Designed as a proof-of-concept, the system takes real-time scoring data from football matches and generates news reports complete with photographs and a script. Synthesia and Reuters then use a neural network similar to Deepfakes and prerecorded footage of a real news anchor to turn the script into a “live” video of the news anchor giving up-to-the-second scoring updates.
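To make that flow concrete, here is a minimal sketch of the kind of pipeline described above: structured scoring data becomes a script, and the script is handed to a model that renders it as anchor video. Every name in it (fetch_live_scores, build_script, AnchorRenderer) is a hypothetical stand-in, not Reuters' or Synthesia's actual API.

```python
# Hypothetical sketch of a scores-to-anchor-video pipeline.
# None of these components are the real Reuters/Synthesia system.
import time
from dataclasses import dataclass


@dataclass
class MatchUpdate:
    home: str
    away: str
    home_score: int
    away_score: int
    minute: int


def fetch_live_scores(match_id: str) -> MatchUpdate:
    """Placeholder for a real-time sports scoring feed."""
    # In the prototype this data would come from Reuters' live feeds.
    return MatchUpdate("Team A", "Team B", 1, 0, 37)


def build_script(update: MatchUpdate) -> str:
    """Turn structured scoring data into an anchor-ready script."""
    return (
        f"After {update.minute} minutes, {update.home} lead {update.away} "
        f"{update.home_score}-{update.away_score}."
    )


class AnchorRenderer:
    """Stand-in for a neural model trained on prerecorded anchor footage."""

    def render(self, script: str) -> bytes:
        # A real system would synthesize lip-synced video of the anchor
        # reading the script; here we just return placeholder bytes.
        return script.encode("utf-8")


if __name__ == "__main__":
    renderer = AnchorRenderer()
    for _ in range(3):  # a live service would loop indefinitely
        update = fetch_live_scores("match-123")
        clip = renderer.render(build_script(update))
        # push `clip` to the live stream, then poll for the next update
        time.sleep(1)
```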

Credit: Reuters

The big idea here is that you could have, for example, ten or twenty different “live” video streams running simultaneously, each showing the same “person” announcing real-time scoring updates for a different sporting event.

Reuters was quick to point out that this is just a prototype and not necessarily a feature it intends to implement. The company’s global head of product and core news services, Nick Cohen, said in a statement:

Reuters has long been at the forefront of exploring the potential of new technologies to deliver news and information. This kind of prototyping is helping us to understand how AI and synthetic media can be combined with our real-time feeds of photography and reporting to create whole new kinds of products and services.

But the implications for the tech – assuming it can overcome the tell-tale artifacting that plagues most common Deepfakes-style use-cases – could be huge. Aside from news coverage, it’s easy to imagine airports full of monitors showing the same face giving updates for flights from different airlines, or any number of on-demand video updating services tailored to specific geographic areas.

On the flip side, we could be headed towards a dystopia where the clever use of AI and a face that people can trust becomes the primary deciding factor in whether the general public considers something fake news or not – “The AI reporter said it” will become the future’s version of “I read it on Facebook.”

That might sound far-fetched, but let’s not forget that Hollywood spent decades paying one person to do the vast majority of voice-over work for its movie trailers because consumers consistently showed they were more likely to get excited for a film if they heard Don “In a world…” LaFontaine’s voice.

Abject horror at the prospect of living in a future where humanity and AI are indistinguishable aside, this represents a positive use-case for Deepfakes tech. Paired with the right translation services, this kind of tool could be used to deliver emergency video reports to places where language barriers in both spoken and written communication might otherwise complicate the quick dissemination of information.
