Facebook labelled 167 million user posts for Covid misinformation

Yolanda Curtis
November 21, 2020

Facebook is often criticized for its handling of hate speech and other offensive content, but the company said it removed or labeled 22.1 million pieces of hate speech in the third quarter, and detected roughly 95% of those posts before users reported them.

Facebook, which has 1.82 billion daily users globally, has drawn flak in the past for its handling of hate speech on the platform in India, which is among its biggest markets.

Facebook's VP of Integrity Guy Rosen shared the stats during a call with reporters Thursday.

Rosen said the company specifically measures how much harmful content may be seen on Facebook and Instagram because the number of times a piece of content is seen is not evenly distributed.

Although the improvements in detection rates are impressive, the idea of AI doing the majority of the detection and enforcement work on the platform raises concerns about posts and accounts being taken down due to false positives, as AI is susceptible to error.

In addition, the site took action on 12.4 million pieces of content relating to child nudity and sexual exploitation, a rise of around three million from the previous reporting period.

"This is really sensitive content."

"Without our work, Facebook is unusable," they wrote. Facebook agreed this year to pay out $52 million to contractors who suffered from PTSD and other mental health problems deemed work-related.

The letter said: "Facebook needs us." Memes are typically clever or amusing combinations of text and imagery, and only in the combination of the two is the toxic message revealed, he said.

Facebook just offered its first look at the scale of its fight against election misinformation. For example, the company banned political ads in the week before and after the election, and recently announced that it would continue the ban on those ads until further notice. On Instagram, it took action on 4.1 million pieces of violent and graphic content, up from 3.1 million in the second quarter.

Earlier this week, Zuckerberg and Twitter Inc CEO Jack Dorsey were grilled by Congress on their companies' content moderation practices, from Republican allegations of political bias to decisions about violent speech.

Moderators are now demanding that Facebook allow those who are at high risk, or who live with someone at high risk, of a severe case of COVID-19 to work from home indefinitely. Among the moderators' demands of Facebook and the contracted companies that employ them: hazard pay, more flexibility to work from home and access to better mental healthcare.

Now, on top of work that is psychologically toxic, holding onto the job means walking into a hot zone.

"Workers have asked Facebook leadership, and the leadership of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work."

On Tuesday, Mark Zuckerberg appeared before Congress to discuss Facebook's response to misinformation published on its platform before and after the election. Zuckerberg again called for more government involvement in the development and enforcement of content moderation and transparency standards. Much of the hearing was used by Republican senators to allege that Facebook and Twitter systematically treat conservative content differently than liberal content.
