A recent TV documentary investigation has disclosed how Facebook decides what its users can and cannot see on its platform. The investigation covered topics including how the company deals with violent content, reported posts, and hate speech.
Violent Content and Hate Speech Remain on the Site
According to the investigation, violent content such as videos of assaults on children and other graphic imagery remained on the site even after users flagged it as inappropriate and requested its removal.
During filming, thousands of reported posts sat unmoderated. Reported posts relating to self-harm and suicide threats also remained on the site beyond the 24-hour turnaround that Facebook had stated as its goal for review. In addition, pages belonging to far-right groups with large followings were treated differently from those of news organizations and governments, and were allowed to exceed the deletion time limits.
Facebook has a policy of not allowing children under the age of 13 to have accounts, but a company trainer told an undercover reporter not to take any action against underage users unless they admit to being underage. For example, if an image appears to show someone underage and contains self-harm content, then instead of flagging the account as underage, moderators treat the account owner as an adult and send information about organizations that help with self-harm issues.
Facebook’s Take on Hate Speech
The undercover reporter was told that content that racially abuses protected religious or ethnic groups violates Facebook’s guidelines, but that posts directing the same abuse at immigrants from those groups are permitted. Moderator training included an example of a cartoon comment describing “drowning a girl if her first boyfriend is a negro”; the cartoon was deemed “permitted.”
In a statement to the broadcaster, Facebook said that the cartoon does violate its guidelines and hate speech standards, and that it is reviewing its training materials to prevent such errors from happening again.