Removing violent content and hate speech: an ongoing challenge


A recent TV documentary investigation has revealed how Facebook decides what its users can and cannot see on its platform. The investigation covered topics including how the company deals with violent content, reported posts, and hate speech.

Violent Content and Hate Speech Remain on the Site

According to the investigation, violent content, including videos of assaults on children and other graphic images, remained on the site even after users flagged it as inappropriate and requested its removal.

During filming, thousands of reported posts remained on the site unmoderated. Reported posts relating to self-harm and suicide threats also stayed up beyond the 24-hour turnaround that Facebook had stated as its goal for removal. In addition, pages belonging to far-right groups with many followers were treated differently from those of news organizations and governments, and were allowed to exceed the deletion time limits.

Facebook has a policy of not allowing children under the age of 13 to have accounts, but a company trainer told an undercover reporter not to take any action against underage users unless they admit to being underage. For example, if an image shows someone who looks underage and contains content relating to self-harm, then instead of the account being reported as underage, the account owner is treated as an adult and is sent information about organizations that help with self-harm issues.

Facebook’s take on hate speech

The undercover reporter was told that content racially abusing protected religious or ethnic groups violates Facebook's guidelines, but that posts racially abusing immigrants from those same groups are permitted. Training for moderators included an example of a cartoon comment describing "drowning a girl if her first boyfriend is a negro"; the cartoon was deemed "permitted."

In a statement to the television studio, Facebook acknowledged that the cartoon does violate its guidelines and hate speech standards, and said it is reviewing its processes to prevent this from happening again.
