Facebook says coronavirus restricted efforts to remove suicide and self-harm posts on its platform

Fewer posts were removed due to a reduction in workers during the pandemic. Credit: PA

Efforts to remove suicide and self-injury posts were hindered by the coronavirus pandemic, social media giant Facebook has revealed.

The social network took action on significantly fewer posts containing such content between April and June because fewer reviewers were working during the pandemic, the company said.

The firm sent moderators home in March in a bid to reduce the spread of the virus, but boss Mark Zuckerberg warned that enforcement requiring human intervention could be hit.

The firm says it has since brought “many reviewers back online from home” and, where it is safe, a “smaller number into the office”.

Facebook’s community standards report revealed that action was taken on 911,000 pieces of content related to suicide and self-injury in the three-month period, compared with 1.7 million in the previous quarter.

Meanwhile on Instagram, action was taken against 275,000 posts, compared with 1.3 million in the previous quarter.

Action on media featuring child nudity and sexual exploitation also fell on Instagram, from one million posts to 479,400.

Mark Zuckerberg warned enforcement requiring human intervention could be hit during the pandemic. Credit: AP

"The impact of Covid-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” the company said.

It added: “With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.

“Despite these decreases, we prioritised and took action on the most harmful content within these categories.

“Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.”

Facebook estimates that less than 0.05% of views were of content that violated its standards against suicide and self-injury.

Automated technology also continued to remove other harmful posts, such as hate speech, action on which rose from 9.6 million pieces of content on Facebook in the previous quarter to 22.5 million in this one, the report said.

Much of that material, 94.5%, was detected by artificial intelligence before a user had a chance to report it.

Proactive detection for hate speech on Instagram increased from 45% to 84%.

There were also improvements in tackling terrorism content, with action taken against 8.7 million pieces on Facebook this quarter compared with 6.3 million previously.

Only 0.4% of that content was reported by a user, while the vast bulk was picked up and removed automatically by the firm’s detection systems.

“We’ve made progress in combating hate on our apps, but we know we have more to do to ensure everyone feels comfortable using our services,” Facebook said.

It said it had established new teams and task forces to help build products that are "fair and inclusive".

The firm added: “We’re also updating our policies to more specifically account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people controlling the world.”

  • If you are in distress or need some support, the Samaritans are available 24 hours a day on 116 123 or visit their website.