This article was published on May 26, 2020

YouTube removed phrase critical of Chinese government due to AI error


For the last few weeks, users have noticed that particular Chinese expressions have been automatically removed from YouTube comments. After rampant speculation about why Google would want these phrases — both of which are critical of the Chinese government — to be banned, it now claims it was an error by its machine learning software.

Users noticed that any YouTube comment containing the phrase “共匪” or “五毛” would be purged within seconds of being submitted. One of my colleagues tested this and found it to be true. The former is an insult directed at the Chinese Communist government (it translates as “Communist bandit,” according to activist Jennifer Zeng). The latter is a slang term for online commenters who are paid to deflect criticism of the Communist Party.

As you might expect, users immediately suspected there might be an ulterior motive for Google banning the phrases. YouTube is blocked in China, so why would its parent company care if anyone criticized the CCP? These phrases had triggered this reaction for months, which is an awfully long time for an error to persist.

Perhaps Google was prompted to look into the matter when the phrases’ mysterious removal was pointed out by Oculus founder Palmer Luckey. Either way, the company has finally spoken about it, and it claims it’s not banning the phrases out of some hidden sympathy for the Chinese government, but rather because of an error.

According to a statement given to TechCrunch, the banned phrases were added to YouTube’s hate speech filters, which automatically remove comments containing offensive content. That would explain why using the phrases, even in a positive way, instantly brought the hammer down.
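YouTube hasn’t detailed how its filters work, but the behavior users describe is consistent with a simple keyword blocklist: any comment containing a flagged phrase gets removed, no matter the context. The Python sketch below is a hypothetical illustration of that kind of context-blind filtering (the phrase list and function are invented for this example), not a description of YouTube’s actual system.

# Purely illustrative sketch of a naive blocklist-style comment filter.
# This is NOT YouTube's actual system; the phrase list and function name
# are assumptions used to show how context-blind filtering removes
# comments regardless of the commenter's intent.

BLOCKED_PHRASES = ["共匪", "五毛"]  # the phrases users reported being removed

def should_remove(comment: str) -> bool:
    """Return True if the comment contains any blocked phrase,
    ignoring the context or sentiment in which it appears."""
    return any(phrase in comment for phrase in BLOCKED_PHRASES)

# Even a comment that merely mentions a phrase neutrally gets flagged:
print(should_remove("I'm not a 五毛, I just have a question"))  # True
print(should_remove("Great video, thanks!"))                    # False

Because a filter like this matches strings rather than intent, quoting, criticizing, or even mocking a banned phrase triggers the same removal as using it as an insult.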

The question now is why those phrases were added to the filters. All Google would say is that it’s relying more heavily on AI-based moderation while its employees are out of the office due to the coronavirus pandemic. A YouTube blog post from March foreshadows the problem:

Machine learning helps detect potentially harmful content and then sends it to human reviewers for assessment. As a result of the new measures we’re taking, we will temporarily start relying more on technology to help with some of the work normally done by reviewers. This means automated systems will start removing some content without human review, so we can continue to act quickly to remove violative content and protect our ecosystem, while we have workplace protections in place.

It wouldn’t be the first time a major tech company ran into issues with machine learning moderation thanks to the coronavirus. Facebook had a similar problem when its AI blocked posts about making face masks.

YouTube claims it is still investigating the error. It invites anyone to “report suspected issues to troubleshoot errors and help us make product improvements.”

via The Verge
