
This article was published on August 11, 2020

Facebook blames COVID-19 for reduced action on suicide, self-injury, and child exploitation content

Sending content reviewers home forced Facebook to rely more on AI


Image credit: Pixabay

Facebook says that COVID-19 has hindered its ability to remove posts about suicide, self-injury, and child nudity and sexual exploitation.

The social media giant said the decision to send content reviewers home in March had forced it to rely more heavily on tech to remove violating content.

As a result, the firm says it took action on 911,000 pieces of content related to suicide and self-injury in the second quarter of this year — just over half the number of the previous quarter.

On Instagram, the number dropped even further, from 1.3 million pieces of content in Q1 to 275,000 in Q2. Meanwhile, action on Instagram content that sexually exploits or endangers children decreased from 1 million to 479,400.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram,” said Guy Rosen, Facebook’s VP of Integrity, in a blog post today.



Facebook said that stretched human resources had also reduced the number of appeals it could offer. In addition, the firm claimed that its focus on removing harmful content meant it couldn’t calculate the prevalence of violent and graphic content in its latest community standards report.

More human moderation needed

Facebook did report some improvements in its AI moderation efforts. The company said the proactive detection rate for hate speech on Facebook had increased from 89% to 95%. This led it to take action on 22.5 million pieces of violating content, up from 9.6 million in the previous quarter.

Instagram’s hate speech detection rate climbed even further, from 45% to 84%, while actioned content rose from 808,900 to 3.3 million.

Rosen said the results show the importance of human moderators:

Today’s report shows the impact of COVID-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology.

In other Facebook news, the company today announced new measures to stop publishers backed by political organizations from running ads disguised as news. Under the new policy, news Pages with these affiliations will be banned from Facebook News. They’ll also lose access to news messaging on the Messenger Business Platform or the WhatsApp business API.

With the US election season approaching, it’s gonna be a busy few months for Facebook’s content moderation team.


