More worrisome news for free speech advocates, as government agencies continue pushing for what could inevitably result in online censorship. There is, of course, no place for hate on the internet, and it is absolutely unfair to discriminate against religious or ethnic minorities. But if the government gets to control expressions of hate and anger on the biggest platforms connecting the world, what could this mean for freedom of expression in general – a core European value for ages? The European Commission begs to differ:
This does not affect the right to freedom of expression; rather, it refers to conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.
European Commission via LinkedIn
Yesterday, Facebook Inc, Google’s YouTube, Twitter Inc and Microsoft signed an agreement to block illegal hate speech from their services in Europe within 24 hours. This is yet another move that shows the mounting pressure on tech companies to monitor and control content.
The new European Union “code of conduct on illegal online hate speech” states that the above-mentioned tech giants will review reports of hate speech and remove or disable access to the content if necessary.
European governments were acting in response to a surge in antisemitic, anti-immigrant and pro-Islamic State commentary on social media.
The companies downplayed the deal, saying it was a simple extension of what they already do. Unlike in the US, many forms of hate speech, such as pro-Nazi propaganda, are illegal in some or all European countries, and the major internet companies have the technical ability to block content on a country-by-country basis.
But people familiar with the complicated world of internet content filtering say the EU agreement is part of a broad and worrisome trend toward more government restrictions.
“Other countries will look at this and say, ‘This looks like a good idea, let’s see what leverage I have to get similar agreements,'” said Daphne Keller, former associate general counsel at Google and director of intermediary liability at the Stanford Center for Internet and Society.
“Anybody with an interest in getting certain types of content removed is going to find this interesting.”
European authorities have been putting tremendous pressure on social media companies to be more aggressive in targeting hate speech online. For example, the celebratory hashtag #Brusselsisonfire surfaced after the March bombings in Belgium, and in February Twitter said it had deleted more than 125,000 ISIS-affiliated accounts.
The Code of Conduct is a “self-regulatory” measure, which means it is not legally binding. It enumerates a few specific commitments to address the problem:
- The companies must have clear and accessible processes for identifying and removing hateful content, and must review the majority of reported content within 24 hours.
- They have to work more with “civil society organizations” (nonprofits, advocacy organizations, etc.) to target such content.
- They must train their staff “on current societal developments and to exchange views on the potential for further improvement.”
Despite the agreement signed yesterday, these companies have a pretty tense relationship with European regulators, especially Google and Facebook. The EU is currently leveling antitrust charges against Google, and German regulators have begun looking into Facebook’s practices.
You can read more about the code of conduct here. Leave us a comment and let us know what you think!