When should hate speech be regulated? Context matters, says UN expert

Rappler.com


According to United Nations Special Rapporteur David Kaye, hate speech should be regulated when it incites violence or when it undermines other people's right to express themselves freely

MANILA, Philippines – What is hate speech, and how can platforms manage it without infringing on the basic right to free expression?

United Nations Special Rapporteur David Kaye tried to answer these questions in an interview with Rappler on Thursday, May 24, 2018. Kaye recently submitted a report to the UN Human Rights Council in which he proposes a human rights-based approach to regulating user-generated content online.

As platforms increasingly become the public spaces within which democracies operate, governments now share with tech companies the responsibility of regulating expression around the world. This includes regulating hateful rhetoric online.

This is no easy task, however, as hate speech, in practice, can be hard to define. According to Kaye, the International Covenant on Civil and Political Rights requires states to “prohibit national, racial or religious hatred that constitutes incitement to discrimination, hostility, or violence,” but it also protects our “right to seek, receive, impart information and ideas of all kinds regardless of frontiers.” 

The role of governments and platforms, therefore, is to strike a balance between regulating hate speech and promoting free speech.

“So the question for governments, but also for companies is to define when does hatred cross the line from being merely [an] expression to being something that’s inciting hostility or violence against individuals,” Kaye explained. “That’s the human rights standard.”  

Context matters

According to Kaye, the answer lies in the context in which something is said.

For example, a person can post or express all the hate he has, but that does not automatically make it something that either governments or companies should prohibit.

“The question is whether it’s hate speech that should be prohibited by say, a platform or regulated by the government, and it always requires some sense of context,” Kaye said.

Kaye said there are two instances when hate speech should be regulated: when the expression calls on the audience to act violently, or when the comments specifically undermine other people's right to express themselves freely.

Even if a call to violence is not made directly, Kaye said, it should still be regulated if, in context, it can be understood as an incitement to violence.

For example, derogatory posts calling Rohingyas subhuman can be seen, in the context of what's happening in Myanmar, as a call to violence. The same applies, Kaye said, to misogynistic comments against women and members of the LGBTQI community.

SEEKING HELP. In this file photo, Rohingya refugees from Myanmar's Rakhine state wait for aid at Kutupalong refugee camp in the Bangladeshi town of Teknaf on September 5, 2017. Photo by K M Asad/AFP

“Incitement to violence can have real harmful effects. They can lead to harms such as killings and massacres as we’ve seen in places like Myanmar over the past year,” Kaye said. 

Given the tech companies’ clear role in moderating public life, Kaye said they need to be more transparent about how they moderate hate speech and give examples of how they handle different cases.

“They should provide that kind of disclosure so that people like you and me and all of their users can understand the rules better,” Kaye said. “In order to hold the platforms accountable for their regulation of expression they should really be providing remedies to people when their content is taken down.”

Action from Facebook

In an email to members of the Global South Coalition, Monika Bickert, Facebook’s Vice President for Global Policy Management, and Guy Rosen, Facebook’s Vice President for Product Management, admitted that the company has been “slow to take action” but is “investing heavily in people and technology so that we do better going forward.”

Facebook said in the same statement that they are working hard to apply their Community Standards consistently and fairly. The challenge, however, is doing so consistently and at scale across multiple countries and languages. To do this, they are increasing the number of people and teams working on enforcement, and are building new tools to more quickly and effectively detect abusive, hateful, or false content.

In terms of transparency, Facebook published its internal guidelines on how it enforces its Community Standards, and provided a way for people to judge Facebook’s performance and track its progress. They have also created an appeals process so people can let Facebook know if it has made a mistake on individual content decisions.

Facebook also said that they “respect the human rights of everyone who uses Facebook and routinely conduct human rights impact assessments of product and policy decisions across our apps, as part of our membership in the Global Network Initiative.” – With reports from Annabella Garcia/Rappler.com

Annabella Garcia is a Rappler intern.
