
This article was published on September 2, 2020

Microsoft’s new deepfake detection tool rates bogus videos with a confidence score


Deepfakes are now widely used by a range of people across the world, including news agencies, politicians, and even meme-makers. So it's equally important for organizations to step up and build technology that can detect these AI-powered bogus videos, which are manipulated to look realistic.

Just ahead of the US presidential election, Microsoft has introduced new tools to detect manipulated videos, aimed at helping political campaigns and media organizations. The detection tool, called Microsoft Video Authenticator, analyzes a video frame by frame and produces a confidence score indicating whether the video has been modified.
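To illustrate the general idea of frame-by-frame scoring, here is a minimal sketch in Python. The function names (`score_frame`, `video_confidence`) and the toy heuristic are purely hypothetical stand-ins — Microsoft has not published Video Authenticator's API — but the shape is the same: classify each frame for a fake probability, then aggregate into an overall confidence score.

```python
# Hypothetical sketch of per-frame deepfake scoring and aggregation.
# `score_frame` stands in for a trained classifier; the heuristic here
# (average pixel intensity) is a placeholder, not a real detector.
from statistics import mean

def score_frame(frame_pixels):
    # Return a fake-probability in [0, 1] for a single frame.
    avg = sum(frame_pixels) / len(frame_pixels)
    return min(1.0, avg / 255.0)

def video_confidence(frames):
    """Aggregate per-frame fake probabilities into one confidence score."""
    per_frame = [score_frame(f) for f in frames]
    return {
        "per_frame": per_frame,          # score for each frame
        "overall": mean(per_frame),      # simple mean; a real tool may weight frames
    }

# Two toy "frames" represented as flat pixel-intensity lists.
frames = [[10, 20, 30], [200, 210, 220]]
result = video_confidence(frames)
```

A real detector would of course operate on decoded video frames and a trained neural network, but the aggregation step — many per-frame scores reduced to one confidence value — is what lets the tool report a score both per frame and for the whole clip.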

The algorithm was developed with the help of Microsoft's Responsible AI team and the Microsoft AI, Ethics, and Effects in Engineering and Research (AETHER) Committee, and it's designed to identify manipulated elements that are not easily caught by the human eye. The team trained the AI on publicly available datasets such as FaceForensics++ and tested it on the DeepFake Detection Challenge dataset.

Microsoft Video Authenticator

The company is also releasing a tool powered by Microsoft’s Azure cloud infrastructure to let creators sign a piece of content with a certificate. There’s also a reader that can scan that certificate to verify that the content is authentic. However, we don’t yet know if Microsoft plans to make either of these tools public.
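The signing-and-verification flow described above can be sketched with standard hashing primitives. The sketch below is an assumption-heavy illustration — Microsoft's actual tool uses Azure infrastructure and certificate chains, none of which is public — so an HMAC over a content hash stands in for a real certificate signature:

```python
# Illustrative sketch of content signing and verification.
# SECRET_KEY is a stand-in for a creator's real signing key/certificate;
# in practice an asymmetric signature and certificate chain would be used.
import hashlib
import hmac

SECRET_KEY = b"creator-private-key"

def sign_content(content: bytes) -> str:
    """Hash the content, then sign the hash (the 'certificate' step)."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """The 'reader' step: recompute and compare in constant time."""
    return hmac.compare_digest(sign_content(content), signature)

video = b"raw video bytes"
sig = sign_content(video)
ok = verify_content(video, sig)            # matches: content is unchanged
tampered = verify_content(video + b"x", sig)  # any edit breaks the signature
```

The key property is the same as in Microsoft's design: any modification to the content after signing invalidates the signature, so a reader can confirm the media is exactly what the creator published.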


Currently, the company is working with the AI Foundation's Reality Defender 2020 (RD2020), a US-based dual commercial and nonprofit enterprise, to make Video Authenticator available to electoral organizations. It's also partnering with media organizations such as the BBC and The New York Times to detect manipulated media.

Microsoft acknowledged that detection tools are not perfect and that generative technologies might always stay a step ahead of them. As a result, some videos might slip through, and the company will have to keep working to make the algorithm more robust. Right now, Microsoft's focus is on preventing deepfakes from disrupting elections. However, it could open-source this technology in the future so more researchers can contribute to it.

