European Commission Demands Social Media Tackle Illegal Content
The European Commission is calling upon social media companies including Facebook Inc. and Alphabet Inc. to develop a common set of tools to detect, block and remove terrorist propaganda and hate speech.
In guidelines issued Thursday, the commission asked the online platforms to appoint contact persons who can be reached quickly with requests to remove illegal content. It asked them to lean more heavily on networks of “trusted flaggers” — experts in what constitutes illegal content — and to make it easier for average users to flag and report possible extremist content.
“Illegal content should be removed as fast as possible, and can be subject to specific timeframes, where serious harm is at stake, for instance in cases of incitement to terrorist acts,” the commission said.
The commission did not specify exactly how quickly social media companies should take down content, saying it would analyze the issue further. In May 2016, a number of social media companies, including Facebook Inc., Twitter Inc. and Google’s YouTube, voluntarily committed to trying to take down illegal content within 24 hours. Under this program, the share of flagged content removed within the 24-hour window has risen from 30 percent to 60 percent, the EU said Thursday.
Since then, Germany has passed a law requiring hate speech to be removed within 24 hours of it being flagged, with penalties of up to 50 million euros ($58.8 million) for repeated failures to comply. British Prime Minister Theresa May earlier this month proposed new rules that would require internet companies to take down extremist content within two hours.
At the same time, the commission said online platforms should “introduce safeguards to prevent the risk of over-removal.” It did not specify what these safeguards should be.
In arguing against the new German law, Facebook said that the large fines and tight deadlines for content removal only served to encourage it to err on the side of taking down questionable content, potentially harming free speech.
The commission also said the internet companies should take steps to dissuade users from repeatedly uploading illegal content, and it encouraged them to develop more automated tools to prevent the reappearance of content that had previously been removed.
Facebook, Google, Twitter and Microsoft Corp. teamed up in December 2016 to create a shared database of “digital fingerprints” for videos that any of the companies remove for violating their policies on extremist content. If someone tries to upload the same video to a different social media platform, it is automatically flagged for review.
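The basic mechanics of such a shared fingerprint database can be sketched in a few lines of code. The example below is an illustration of the general idea only, not the companies’ actual system: it uses a plain cryptographic hash over the file bytes for simplicity, whereas production systems rely on perceptual fingerprints that still match after a video has been re-encoded or trimmed, and the function names and in-memory store are assumptions made for the sketch.

```python
import hashlib

# Illustrative only: a shared store of fingerprints for videos that
# any participating platform has removed under its extremist-content
# policies. A real deployment would use perceptual hashing and a
# shared service rather than an in-memory set.
shared_hash_database = set()

def fingerprint(video_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded video file."""
    return hashlib.sha256(video_bytes).hexdigest()

def register_removed_video(video_bytes: bytes) -> None:
    """Record a removed video so other platforms can match it later."""
    shared_hash_database.add(fingerprint(video_bytes))

def check_upload(video_bytes: bytes) -> bool:
    """Return True if an upload matches a known fingerprint and
    should be flagged for human review."""
    return fingerprint(video_bytes) in shared_hash_database

# Example: platform A removes a video; the same file uploaded to
# platform B is then flagged automatically for review.
removed_video = b"...video file contents..."
register_removed_video(removed_video)
print(check_upload(removed_video))  # True -> flag for review
```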
The guidelines the commission issued Thursday are non-binding recommendations. But it held out the prospect of future legislation if the companies have not taken additional steps along those lines by May 2018.
For years, sites like Facebook and YouTube have largely relied on hundreds of contractors and employees to manually review posts that users flag for violating their terms of service. While this process was far from perfect, company executives long insisted that automated systems — which rely on artificial intelligence — were not yet sophisticated enough to handle this process.
In the past year, as these companies have come under greater political and legal pressure to do more to address terrorist propaganda, hate speech and fake news, they have begun leaning more heavily on automated systems.