Following the recent terrorist attacks in Paris and San Bernardino, Calif., social media companies are coming under increased pressure, both internally and externally, to protect their networks from being used as platforms for operational planning and propaganda.
There needs to be a greater balance between promoting free expression on the Web and allowing social media to become a tool to spread hatred or violence, Google Executive Chairman Eric Schmidt said in an op-ed published Monday in The New York Times.
“We should build tools to help de-escalate tensions on social media — sort of like spell checkers — but for hate and harassment,” he wrote.
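To make the analogy concrete, a "spell checker" for hate might, at its simplest, scan a draft post against a list of flagged terms before publication. The short Python sketch below is purely illustrative of that idea; the term list, function name and matching logic are invented for this example and do not reflect any actual product from Google or other companies.

```python
# Illustrative sketch only: a naive keyword-based flagger, loosely in the
# spirit of a "spell checker for hate." Real moderation systems rely on
# trained classifiers and human review; the terms below are placeholders.
FLAGGED_TERMS = {"exampleslur", "threatword"}  # hypothetical placeholder terms

def flag_post(text: str) -> bool:
    """Return True if the draft post contains any flagged term (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

if __name__ == "__main__":
    print(flag_post("This post contains a threatword."))   # True
    print(flag_post("An ordinary post about the weather.")) # False
```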
Networks should target accounts from groups like the Islamic State, he recommended, and either take down their videos or assist efforts to present messages countering the terrorist groups’ activities.
ISIS Propaganda
The full extent of terrorist activity on social media is unclear, but ISIS supporters used at least 46,000 Twitter accounts — possibly as many as 70,000 — between September and December of last year, according to a report the Brookings Institution’s Center for Middle East Policy released earlier this year. Most of the accounts were located in Syria, Iraq, or other regions where ISIS is active.
About three-quarters of those supporters chose Arabic as their main language when posting, while 20 percent posted in English. Twitter suspended at least 1,000 accounts during that period, the report showed.
Facebook “shares the government’s goal” of keeping terrorist activity off its site, according to spokesperson Jodi Seth.
“Facebook has zero tolerance for terrorists, terror propaganda or the praising of terror activity, and we work aggressively to remove it as soon as we become aware of it,” she told TechNewsWorld.
Facebook’s policy is to pass information to law enforcement as soon as it becomes aware of any planned attack or threat of imminent harm, Seth said, and it regularly follows through on that policy.
The tech community is facing a much more difficult conversation than it did immediately after the Snowden revelations in 2013 and before the terrorist attacks in Paris in November and San Bernardino earlier this month, said Susan Schreiner, an analyst at C4 Trends.
“This is a developing cancer, and there’s a good chance that the answers today may need to be transformed as the nature of these exploits, propaganda techniques and heinous actions evolve,” she told TechNewsWorld.
The nexus of international terrorism and social media is not an entirely new subject, and civil liberties groups long have held that censoring this type of speech sets a dangerous precedent that could lead to government control over political speech.
Sen. Dianne Feinstein, D-Calif., on Tuesday revived a bill that would require social media companies to alert federal officials to any terrorist-related activity on their networks.
Sen. Ron Wyden, D-Ore., earlier this year thwarted her effort to add the bill to the Senate Intelligence Authorization Act, citing vague language.
When Safety Impacts Free Speech
“Social media companies shouldn’t take on the job of censoring speech on behalf of any government, and they certainly shouldn’t do so voluntarily,” said Danny O’Brien, international director of the Electronic Frontier Foundation.
Numerous circumstances would be problematic, to say the least, he told TechNewsWorld. For example, would Facebook take down a post from a group that the Russian, Saudi, Syrian or Israeli government claimed was a terrorist organization?
An issue of transparency remains on the table, O’Brien said. Some social media companies have been more transparent than others about government requests to take down an account or remove content.