A recent undercover TV documentary has revealed how Facebook decides what its users can and cannot see on its platform. The investigation covered topics including how the company handles violent content, reported posts, and hate speech.
Violent Content and Hate Speech Remain on the Site
According to the investigation, violent content, including videos of assaults on children and other graphic images, remained on the site even after users flagged it as inappropriate and requested its removal.
During filming, thousands of reported posts sat unmoderated. Reported posts relating to self-harm and suicide threats remained on the site beyond the 24-hour turnaround that Facebook had stated as its goal for removal. Pages belonging to far-right groups with large followings were also treated differently from those of news organizations and governments, and were allowed to exceed the deletion time limits.
Facebook has a policy of not allowing children under the age of 13 to have accounts, but a company trainer told an undercover reporter not to take any action against underage users unless they admit to being underage. For example, if an image appears to show an underage user and contains self-harm content, the account is not reported as underage; instead, the account owner is treated as an adult and sent information about organizations that help with self-harm issues.
Facebook’s Take on Hate Speech
The undercover reporter was told that content racially abusing protected religious or ethnic groups violates Facebook’s guidelines, but that posts racially abusing immigrants from those same groups are permitted. Training for moderators included an example of a cartoon comment describing “drowning a girl if her first boyfriend is a negro”; the cartoon was deemed “permitted.”
In a statement to the broadcaster, Facebook said that the cartoon does violate its guidelines and hate speech standards, and that it is reviewing its training materials to prevent similar errors from happening again.