Social Networking

Leaked Docs Spotlight Complexity of Moderating Facebook Content

A report published Sunday gave the public a rare view into how Facebook tries to keep offensive and dangerous content off its site.

Confidential documents leaked to The Guardian exposed the secret rules by which Facebook polices postings on issues such as violence, hate speech, terrorism, pornography, racism and self-harm, as well as such subjects as sports fixing and cannibalism.

After reviewing more than 100 internal training manuals, spreadsheets and flowcharts, The Guardian found Facebook’s moderation policies often puzzling.

For example, threats against a head of state are automatically removed, but threats against other people are left untouched unless they’re considered “credible.”

Photos of nonsexual physical abuse and bullying of children do not have to be deleted unless they include a sadistic or celebratory element. Photos of animal abuse are allowed, although if the abuse is extremely upsetting, they must be marked “disturbing.”

Facebook will allow people to live-stream attempts to harm themselves because it “doesn’t want to censor or punish people in distress.”

Any Facebook member with more than 100,000 followers is considered a public figure and is given fewer protections than other members.
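To make those reported rules concrete, the following is a minimal, purely illustrative sketch of the triage logic as The Guardian describes it. The Post fields, decision labels and function names are invented for this example and are not drawn from Facebook’s actual systems or from the leaked documents themselves.

```python
# Illustrative sketch only -- not Facebook's implementation. It encodes the
# moderation rules as described in The Guardian's report; all names here are
# hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    is_threat: bool = False
    target_is_head_of_state: bool = False
    threat_is_credible: bool = False
    is_child_abuse_imagery: bool = False          # nonsexual abuse or bullying
    has_sadistic_or_celebratory_element: bool = False
    is_animal_abuse_imagery: bool = False
    is_extremely_upsetting: bool = False
    author_follower_count: int = 0


def moderate(post: Post) -> str:
    """Return a coarse decision: 'remove', 'mark_disturbing', or 'allow'."""
    if post.is_threat:
        # Threats against heads of state are removed automatically;
        # other threats are removed only if judged credible.
        if post.target_is_head_of_state or post.threat_is_credible:
            return "remove"
        return "allow"
    if post.is_child_abuse_imagery:
        # Nonsexual child abuse or bullying imagery stays up unless it
        # includes a sadistic or celebratory element.
        return "remove" if post.has_sadistic_or_celebratory_element else "allow"
    if post.is_animal_abuse_imagery:
        # Animal abuse imagery is allowed, but extremely upsetting material
        # is marked "disturbing."
        return "mark_disturbing" if post.is_extremely_upsetting else "allow"
    return "allow"


def is_public_figure(post: Post) -> bool:
    # Members with more than 100,000 followers are treated as public figures
    # and receive fewer protections.
    return post.author_follower_count > 100_000
```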

Keeping People Safe

In response to questions from The Guardian, Facebook defended its moderation efforts.

“Keeping people on Facebook safe is the most important thing we do,” said Monika Bickert, Facebook’s head of global policy management.

“We work hard to make Facebook as safe as possible while enabling free speech,” she told TechNewsWorld. “This requires a lot of thought into detailed and often difficult questions, and getting it right is something we take very seriously.”

As part of its efforts to “get it right,” the company recently announced it would be adding 3,000 people to its global community operations team over the next year, to review the millions of reports of content abuse Facebook receives on a daily basis.

“In addition to investing in more people, we’re also building better tools to keep our community safe,” Bickert said. “We’re going to make it simpler to report problems to us, faster for our reviewers to determine which posts violate our standards, and easier for them to contact law enforcement if someone needs help.”

Soul-Destroying Work

If The Guardian’s report revealed anything, it is how complex moderating content on the social network has become.

“It highlights just how challenging policing content on a site like Facebook, with its enormous scale, is,” noted Jan Dawson, chief analyst at Jackdaw Research, in an online post.

Moderators have to walk a fine line between censorship and protecting users, he pointed out.

“It also highlights the tensions between those who want Facebook to do more to police inappropriate and unpleasant content, and those who feel it already censors too much,” Dawson continued.

Neither the people writing the policies nor those enforcing them have an enviable job, he said, and in the case of the content moderators, that job can be soul-destroying.

Still, “as we’ve also seen with regard to live video recently,” Dawson said, “it’s incredibly important and going to be an increasingly expensive area of investment for companies like Facebook and Google.”

‘No Transparency Whatsoever’

Facebook has shied away from releasing many details about the rules its moderators use to act on content reported to them.

“They say they don’t want to publish that type of thing because it enables bad guys to game the system,” said Rebecca MacKinnon, director of the Ranking Digital Rights program at the Open Technology Institute.

“Nevertheless, there’s too little transparency now, which is why this thing was leaked,” she told TechNewsWorld.

The Ranking Digital Rights program assesses the transparency of companies on a range of policies related to freedom of expression and privacy, MacKinnon explained. It questions companies and seeks information about their rules for content moderation, how they implement those rules, and how much content is deleted or restricted.

“With Facebook, there’s no transparency whatsoever,” MacKinnon said. “Such a low level of transparency isn’t serving their users or their company very well.”

Death by Publisher

As the amount of content on social media sites has grown, some corners of the Internet have clamored for the sites to be treated as publishers. Currently, they are treated only as distributors, which are not responsible for what their users post.

“Saying companies are liable for everything their users do is not going to solve the problem,” MacKinnon said. “It will probably kill a lot of what’s good about social media.”

Treating Facebook as a publisher not only would strip its protected status as a third-party platform, but also might destroy the company, noted Karen North, director of the Annenberg Program on Online Communities at the University of Southern California.

“When you make subjective editorial decisions, you’re like a newspaper where the content is the responsibility of the management,” she told TechNewsWorld. “They could never mount a team big enough to make decisions about everything that’s posted on Facebook. It would be the end of Facebook.”

John P. Mello Jr.

John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.
