A recent undercover TV documentary has revealed how Facebook decides what its users can and cannot see on its platform. The investigation covered topics including how the company deals with violent content, reported posts, and hate speech.
Violent Content and Hate Speech Remain on the Site
According to the investigation, violent content such as videos of assaults on children and other graphic images remained on the site even after users flagged it as inappropriate and requested its removal.
During filming, thousands of reported posts remained on the site unmoderated. Reported posts relating to self-harm and suicide threats also stayed up beyond the 24-hour turnaround that Facebook had stated as its goal for removal. In addition, pages belonging to far-right groups with many followers were treated differently from those belonging to news organizations and governments, and were allowed to exceed the deletion time limits.
Facebook has a policy of not allowing children under the age of 13 to have accounts, but a company trainer told an undercover reporter not to take any action against underage users unless they admit to being underage. For example, if an image shows someone who looks underage and contains self-harm content, then instead of the account being reported as underage, the account owner is treated like an adult and sent information about organizations that help with self-harm issues.
Facebook’s Take on Hate Speech
The undercover reporter was told that content racially abusing protected religious or ethnic groups violates Facebook’s guidelines, but that posts racially abusing immigrants from those same groups are permitted. Training for moderators included an example of a cartoon comment that described “drowning a girl if her first boyfriend is a negro”; the cartoon was deemed “permitted.”
In a statement to the television studio, Facebook said that the cartoon does in fact violate its guidelines and hate speech standards, and that it is reviewing its settings to prevent this from happening again.