Can you detect these deepfakes? 99.9% can’t, claims biometrics leader iProov

Fraudsters have progressed from simple cheapfakes to realistic synthetics


Deepfakes have become alarmingly difficult to detect. So difficult, in fact, that only 0.1% of people today can identify them.

That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content.

Sadly, the budding sleuths overwhelmingly failed in their investigations.

A woeful 99.9% of them couldn’t distinguish the real content from the deepfakes. Think you can do better, Sherlock? You’re not the only one.

In iProov’s study, over 60% of the participants were confident in their AI detection skills — regardless of the accuracy of their guesses. Still trust your nose for digital clues? Well, you can test it for yourself in a deepfake quiz released alongside the study results.

The quiz arrives amid a surge in headline-grabbing deepfake attacks.

In January, for instance, the tabloids were enraptured by one that targeted a French woman called Anne.

Scammers swindled her out of €830,000 by using deepfakes of the actor to pose as Brad Pitt. The fraudsters also sent her footage of an AI-generated TV anchor revealing the Hollywood star’s “exclusive relationship with one special individual… who goes by the name of Anne.”

Poor Anne was roundly mocked for her naivety, but she’s far from alone in falling for a deepfake.

Deepfakes on the rise

Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido.

The content is frequently weaponised for fraud. A recent study estimated that AI drives almost half (43%) of all fraud attempts.

Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:

  1. The rapid evolution of AI and its ability to produce realistic deepfakes
  2. The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built attack technologies
  3. The vulnerability of traditional ID verification practices

Bud also pointed to the lower barriers to entry for creating deepfakes. Attackers have progressed from simple “cheapfakes” to powerful tools that can produce convincing synthetic media within minutes.

“Deepfaking has become commoditised,” Bud told TNW via email. “The tools to create deepfake content are widely accessible, very affordable, and produce results undetectable to the human eye. It’s creating a perfect storm of cybercrime, as most organisations lack adequate defences to counter these attacks.

“Traditional solutions and manual processes like video identification simply can’t keep up. Organisations must adopt science-based biometric systems combined with AI-powered defences that can detect, evolve with, and prevent these attacks.”

AI will take centre stage at TNW Conference, which takes place on June 19-20 in Amsterdam. Tickets for the event are now on sale. Use the code TNWXMEDIA2025 at checkout to get 30% off the price tag.
