Facebook has teamed up with Twitter, YouTube and Microsoft to fight the proliferation of terrorist content on the Web. The tech giants will create a shared industry database of hashes of the violent terrorist images and terrorist recruitment videos they have removed from their services.
The companies may use the shared hashes to help identify potential terrorist content on their own platforms. Only hashes of content most likely to violate all of the companies' content policies will be shared.
“Each one of the companies that is part of this agreement has its own specific definitions, practices and processes in place for governments to make requests to them for user data and to remove content,” YouTube explained in policy notes provided to TechNewsWorld by company rep Stephanie Shih. “Any such requests for information will be routed through each company to handle as they normally do per its individual policies and procedures.”
No personally identifiable information will be shared. There will be no automated takedowns of terrorism-related content. Each company will retain its own process for dealing with appeals against its removal of content.
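The mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not the companies' actual implementation: the announcement does not specify the hash algorithm (production systems often use perceptual hashing, which matches visually similar media), so this sketch uses SHA-256, which only matches byte-identical files. All function and variable names are invented for the example.

```python
import hashlib

# Hypothetical shared store of hashes of removed terrorist content.
# In practice this would be a jointly maintained database, not an
# in-memory set.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Hash the raw bytes of a piece of content (SHA-256 here for
    simplicity; it matches only exact copies)."""
    return hashlib.sha256(content).hexdigest()

def report_removed(content: bytes) -> None:
    """A participating company contributes the hash of content it has
    already removed under its own policies."""
    shared_hash_db.add(fingerprint(content))

def flag_for_review(content: bytes) -> bool:
    """Another company checks an upload against the shared hashes.
    A match only flags the content for that company's own review
    process -- per the agreement, there are no automated takedowns."""
    return fingerprint(content) in shared_hash_db

# Company A removes a video and shares its hash; company B later sees
# the same file uploaded to its platform.
report_removed(b"example-video-bytes")
print(flag_for_review(b"example-video-bytes"))  # True
print(flag_for_review(b"different-content"))    # False
```

Note that only hashes cross company boundaries; the content itself, and any user data associated with it, stays within each service.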
The four will apply their own transparency and review practices when responding to any government requests.
Magnitude of the Problem
ISIS, or ISIL, has used the Web to great effect to broadcast its ideology and recruit fighters, the UN Security Council's Counter-Terrorism Committee said last year, noting that the group then had 30,000 fighters drawn from more than 100 countries.
All four participants in the latest initiative already have launched separate efforts to counter terrorist activity online, in some cases through other partnerships.
“Our existing efforts to counter extremism and terrorist content will continue,” Facebook said in comments provided to TechNewsWorld by spokesperson Alec Gerlach. “This agreement means that there will be more operational efficiency as we try and stop terrorist content from easily migrating between platforms.”
Each Player’s Battle
Twitter earlier this year outlined its policy, which includes deactivating accounts linked to terrorism groups, cooperating with law enforcement entities when appropriate, and partnering with organizations working to counter extremist content online.
Facebook earlier this year began offering advertising credits to some users combating terrorism online, and it began collaborating with the U.S. State Department on a program in which college students develop antiterrorist messaging.
YouTube’s content policies strictly prohibit terrorist recruitment and content intending to incite violence, the company said. YouTube terminates any account if it has reason to believe that the account holder is an agent of a foreign terrorist organization.
Google parent company Alphabet this summer partnered with Facebook and Twitter to sponsor three experiments using videos to combat the spread of terrorist propaganda on their sites.
Google think tank Jigsaw this summer launched Redirect, a pilot project that aims to redirect people searching for jihadist information online toward counterterrorism content. Project Redirect is not involved in YouTube’s partnership with Microsoft, Facebook and Twitter.
Microsoft this spring outlined its two-pronged approach to the online terrorism problem: addressing the appearance of related content on its services; and partnering with others to tackle the issue more broadly.