Facebook on Monday issued a new set of community standards, including provisions designed to clamp down on revenge porn, bullying, threats and other forms of online harassment.
“We have zero tolerance for any behavior that puts people in danger, whether someone is organizing or advocating real-world violence or bullying other people,” wrote Monika Bickert, head of global product policy, and Justin Osofsky, vice president of global operations, in a letter posted online.
Limp Enforcement
Facebook will remove content that appears to purposefully target private individuals with the intent of degrading or shaming them, or that threatens or promotes sexual violence or exploitation. It will remove credible threats to public figures as well as hate speech directed at them.
Facebook will work with law enforcement when it believes there’s a genuine risk of physical harm or direct threats to public safety, or when there’s an offer of sexual services.
However, there’s a catch: Facebook will not proactively search its network for content that breaks the rules; instead, users will have to report possible violations and tell Facebook why they think specific content should be removed. Facebook specialists then will investigate and decide whether the flagged posts should be removed.
Facebook may decline to remove posts, regardless of the number of complaints lodged against them.
More Tangles Than a Fishing Line
“There are few real specifics here — just a general sense that [Facebook is] saying it wants to do the right thing,” said Rob Enderle, principal analyst at the Enderle Group.
“Your and my view on what the right thing is and their view may differ a lot,” he told TechNewsWorld.
The zero-tolerance stance might be a tad harsh, Enderle said, because “it’s more likely to punish users for innocent acts taken out of context.”
Facebook does face a dilemma, he observed, because “if any service gets the reputation of regularly reporting users to law enforcement, it’s likely to bleed a lot of suddenly concerned users.”
On the other hand, any threat that’s followed by a physical attack “could be a huge potential liability problem for Facebook if they choose to selectively not report the threat,” Enderle pointed out.
Sliding Around Facebook’s Rules?
The safety standards prohibit the publication of posts that harass or degrade others, or that express support for terrorist or organized criminal activity. However, users may repost that same material if they hold it up as an example of objectionable content in order to condemn it or raise awareness.
That might seem to offer an easy end run around Facebook’s rules.
However, “it should be pretty obvious to Facebook’s community standard employees when somebody is disingenuously defending their hateful rhetoric as not being hate speech but being about hate speech,” said attorney Carrie A. Goldberg.
The Wisdom of the Crowd
The company’s declaration of intent is a step in the right direction, Goldberg told TechNewsWorld, pointing out that the sheer number of posts Facebook receives makes scanning them difficult.
Every minute, about 300,000 statuses are updated and 136,000 photos are uploaded, according to Zephoria’s February 2015 statistics.
An automated system for removing verbal content suspected of harassment could lead to the inadvertent censorship of “a lot of additional, nonabusive content,” Goldberg pointed out.
The Power and the Glory
Facebook, like other online platforms, is “a tiny government that gets to promulgate new rules, change them, end them, and enforce them,” and how it does so will influence its bottom line, Goldberg noted.
If the Facebook team “creates a service where their own kids would be safe,” then it’s likely they’ll succeed in providing a safe environment for users, Enderle suggested. However, “they’ll get into trouble if they try to offset safety with maximizing ad revenue.”