
This article was published on December 9, 2021

Codifying humanity: Why AI sucks at content moderation

Systems that are designed to fail tend to, uh, fail.



Welcome to “Codifying Humanity,” a new Neural series that analyzes the machine learning world’s attempts at creating human-level AI. Previous entries include “Can humor be reduced to an algorithm?” and “Why robots should fear death as much as we do.”

Meta (the company your grandparents still call Facebook) just unveiled a new AI system it calls “an important milestone that signals a shift toward more intelligent, generalized AI systems.”

It’s called the “Meta AI Few-Shot Learner,” presumably because the person who names stuff at Meta was on vacation when it was time to come up with a title. It doesn’t even work as an acronym. MAIFSL? Nah.

The big idea

Content moderation is hard. Humans don’t get along. Whether we’re talking about the wide-open worlds of Facebook and Twitter, tiny targeted communities such as knitting forums, or just about any site with a comments section, if you give humans a modicum of anonymity and a platform, they’re going to be shitty to each other. It never fails.


The problem is simple. Content that crosses the line – whatever a given platform’s definition of that may be – needs to be moderated. But finding a solution is incredibly difficult.

Even platforms boasting a lack of censorship and complete respect for the right to free speech need to have some form of content moderation. We don’t want child and animal abuse videos circulating unchecked or mass murderers using social media platforms to stream their crimes.

A couple of decades ago, the answer was simple. Online communities appointed human moderators to oversee forums. The more users a site had, the more human moderators it needed to keep the discourse civil.

Unfortunately, that system can’t function at global scale. We’d need an entire planet full of moderators to moderate all the people on this one. And then another to moderate the people on the second planet, and so on.

That leaves social media companies with two choices: either cap the number of users at something human moderators can actually handle, or invent a way to automate content moderation.

More efficient failure

Big tech chose “none of the above.” All of the popular social media platforms have grown far too large for human moderation to have much of an impact, and there is currently no artificial intelligence system robust enough to function with even a fraction of the efficacy of a human moderator.

In other words: AI sucks at content moderation. And there’s no reason to believe that’s going to change until it achieves human-level performance in language and social understanding.

That’s not what’s happening with this new system. It will still fail to moderate the vast majority of user content on the platform. It’ll just fail more efficiently.

Meta’s new AI is a few-shot learner. That means it can pick up a new task – flagging a freshly banned category of content, say – from just a handful of labeled examples, making it a smaller, easier-to-train model that can be modified and updated more quickly than larger, slower ones.
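To make that concrete, here’s a deliberately toy sketch of few-shot classification in Python. This is not Meta’s system: the bag-of-words “embedding” below is a stand-in for the large pretrained language model that does the real work, and the example posts and labels are invented. The point is the shape of the approach – a new policy is defined by a handful of labeled examples, and incoming posts are compared against them instead of retraining a model on millions of samples.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector.
    (A real few-shot system would use a large pretrained language model.)"""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def few_shot_classify(post, examples):
    """Label a post by its most similar labeled example.
    The few-shot idea: adapting to a new policy takes a handful of
    examples, not a full retraining run."""
    scores = {}
    for text, label in examples:
        score = cosine(embed(post), embed(text))
        scores[label] = max(scores.get(label, 0.0), score)
    return max(scores, key=scores.get)

# A "new policy" defined by just four invented labeled examples.
examples = [
    ("buy cheap followers now limited offer", "violating"),
    ("click this link to claim your free prize", "violating"),
    ("great meetup last night, thanks everyone", "benign"),
    ("here are my vacation photos from italy", "benign"),
]

print(few_shot_classify("claim your free followers at this link", examples))  # -> violating
print(few_shot_classify("thanks for the great photos everyone", examples))    # -> benign
```

Even in this toy version, the fragility is obvious: reword a post slightly and the label can flip. That’s a microcosm of why a more efficient few-shot model is still a long way from human-level moderation.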

The research is fascinating, and the team’s ability to get AI to do so much with so little data is incredible. But the assertion that this “signals a shift toward more intelligent, generalized AI systems” is laughable at best.

It’s also ironic. The term “artificial general intelligence” (AGI) describes an AI capable of performing any feat a human could, given the same access. And while that’s arguably what it would take for AI to successfully moderate content at the scale of social media, it’s clear that making a language model a bit more efficient isn’t the same thing as solving AGI.
