Jillian C. York: "Line between moderation and censorship increasingly blurry"


Photo: Nadine Barišić
 
In the early days of social media, content moderation was typically conducted by communities, volunteers, or in-house by the companies themselves. Nowadays, freedom of speech on social media is subject to a complex system of governance, and a great deal of public conversation is regulated by privately owned corporations. 
 
We talked about this complex issue with Jillian C. York, Director for International Freedom of Expression at the Electronic Frontier Foundation (EFF), who also spoke on this topic in a webinar on trending topics in media literacy organized within the SEENPM members’ “Media for Citizens – Citizens for Media” project in the Western Balkans. 
 
What is content moderation nowadays? 
 
Content moderation is the method that tech companies use to manage the content—posts, images, videos, etc.—that users post to their platforms. This encompasses a number of things, from the use of human moderators to review content that users report for violating a platform’s rules, to the use of automation to proactively restrict certain types of content, such as pornography or terrorist material. Additionally, governments submit requests through formal or informal (extrajudicial) channels—these are sometimes handled through the aforementioned processes, but can also be managed separately by teams that review court and law enforcement documents.
 
Could you tell us about examples of models from the US, EU, and other countries?
 
Different countries manage restrictions on content in different ways. For instance, Germany has in place a law—the Network Enforcement Act or NetzDG—that requires companies to restrict access to unlawful content within the country’s borders within 24 hours of it being reported. Other countries, including less democratic nations such as Turkey, have implemented similar laws in recent years.
 
In the United States, the government can restrict access to far less content, but a law implemented in 2018, SESTA-FOSTA, has had a chilling effect on what users can say online. Intended to stop sex trafficking activities online, the law has resulted in companies increasingly banning sexual content and nudity for fear of running afoul of it.
 
Other countries, particularly more authoritarian ones, tend to simply demand that companies take down or locally restrict access to content. While some countries do this by submitting orders from courts or law enforcement, others simply use backchannels with the companies to pressure them to take action, sometimes by threatening to block them locally.
 
What can you tell us about contemporary content moderation when it comes to hate speech, extremism, harassment, or indeed disinformation? Where is the line between moderation and censorship? 
 
The line between moderation and censorship is increasingly blurry. While US companies have the right, under the First Amendment of the Constitution, to restrict various types of expression as they see fit, they have moved over the years from promoting free expression to placing more and more restrictions on what people can say and do. Some of this is the result of changing speech norms, while other rules have been put in place as a result of external pressure from the public or governments.
 
In addition, US companies are protected from liability for most of these choices by a law commonly referred to as “Section 230” or “CDA 230.” They are, however, required to remove certain unlawful content, such as child sexual abuse imagery.
 
In the United States, the concept of “hate speech” does not exist under the law. Therefore, companies are left to determine for themselves what constitutes hate speech, and these policy decisions seem to be made in an ad hoc manner, with the rules frequently changing. The difficulty in moderating hate speech is that, even when the definitions are clear, it is extremely difficult to enforce them accurately at scale, and automation is not up to the task of detecting the level of nuance required. As a result, hate speech restrictions can lead to non-hateful content—particularly satire, comedy, and counterspeech—being removed.
 
The case of extremism is a bit different, but also problematic. In the United States, there are laws that restrict platforms from hosting content from certain foreign terrorist groups, as designated by the State Department. When it comes to domestic extremism, however, no such restrictions exist, so again the decision is left to companies to determine which groups are extremist in nature. Here too, pressure from various external (and sometimes internal) actors can result in inconsistent decisions about this type of speech.
 
What modalities of transparency could, in your view, work well when it comes to content moderation?
 
There are a number of things that companies should be doing but aren’t. First, they should publish their content moderation error rates, so that users know how many mistakes are being made across various categories. They should also give users detailed notice about which rules they violated and what the consequences are—as well as how to appeal in the event that a mistake is made.
 
There are also increasing demands for companies to be transparent about things like the number of moderators working in a given language, country, or region.
 
All of these things are important, and I believe that companies should be putting more resources into making them happen.
 
What current efforts are underway to regulate the big international platforms? 
 
There are a number of efforts underway globally to regulate the platforms, many of which are quite troubling (such as the new laws in Turkey and India), but there are two key efforts I would like to highlight.
 
The first is that in the United States, there are a number of bills pending that seek to roll back Section 230 liability protections. These all take different shapes, but are aimed at holding companies accountable, or liable, for the content posted by users. Notably, Facebook supports some of these efforts, which raises concerns—namely, because a company like Facebook has the resources to moderate however the government requires, while smaller companies may not. This will have a negative impact on competition, giving users less choice in where to participate online.
 
The second set of efforts I’d like to highlight is at the European Union level. First, the Digital Services Act: this act has several components, but the ones I’m most excited about would force companies to be more accountable through transparency and appeals, as I mentioned above. More troubling, however, is the Terrorism Regulation, which is set to be voted on next week. This regulation would require companies to respond to reports of terrorist content on their platforms within just one hour. I believe that such a tight timeframe will have a chilling effect on speech, including counterspeech, efforts to document human rights violations, and artistic expression such as satire.