
This article was published on November 19, 2021

MIT research shows sad reason why deepfakes pose little threat to US politics

Donkeys and elephants are both stubborn



MIT researchers, working with funding from Google’s Jigsaw group, recently conducted a pair of studies to determine what impact, if any, political ads made with deepfake AI technology could have on US voters.

In total, more than 7,500 people in the US participated in the paired studies – as far as we’re able to determine, that makes this the largest set of experiments on the subject.

Participants were split into three groups: one watched a video, the second read a text transcript of the video, and the third acted as a control group, receiving no media prompts.

In the next phase, participants from all three groups were asked questions to determine whether they believed the media they’d seen or read, and whether they agreed with certain statements.


The findings suggest a bad news, worse news scenario. Let’s start with the bad news.

Per the paper:

Overall, we find that individuals are more likely to believe an event occurred when it is presented in video versus textual form.

That might not blow your socks off, but it turns out that people are more likely to believe what they see than what they read. Obviously that’s a bad thing in a world where deepfakes are so easy to create.

But it gets so much worse. Again, per the paper:

Moreover, when it comes to attitudes and engagement, the difference between the video and text conditions is comparable to, if not smaller than, the difference between the text and control conditions. Taken together, these results call into question widely held assumptions about the unique persuasive power of political video over text.

In other words: people in the US are more likely to believe a deepfake than fake news in text form, but video does little more than text to change their political opinions.

The researchers are quick to caution against drawing too many conclusions from this data. They warn that the conditions under which the studies were conducted don’t necessarily reflect those in which US voters are likely to be duped by deepfakes.

According to the paper:

It should be noted, however, that although we observe only small differences in the persuasiveness of video versus text across our two studies, the effects of these two modalities may diverge more sharply outside an experimental context.

In particular, it is possible that video is more attention grabbing than text, such that people scrolling on social media are more likely to attend to and therefore be exposed to video versus text.

As a result, even if video has only a limited persuasive advantage over text within a controlled, forced-choice setting, it could still exert an outsized effect on attitudes and behavior in an environment where it receives disproportionate attention.

Okay, so it’s possible deepfakes could be way more effective in the wild when it comes to influencing people to change their political opinions.

But this particular research provides evidence to the contrary. And from where we’re standing, that makes perfect sense.

More US citizens voted in the 2020 election than in any other in US history. Yet the margins were so close that one side is still (idiotically) claiming the election was rigged. In fact, two of the last four US presidents lost the popular vote. That indicates US voters are anything but fickle: most have already picked a side, and it takes far more than a fake video to move them.

It’s obvious that deepfakes are pretty low on the list of problems plaguing US politics. However, it’s a bit sad to see our country’s partisanship so neatly quantified by MIT research.
