Un-Breaking Democracy: Online Platform Regulation and Free Speech

Online hate speech, disinformation, and strategic media manipulation are seen as threats to democracy. But is regulation the solution?

The pressure is on. Online hate speech, disinformation, and strategic media manipulation have alerted activists, policy makers, and media platforms alike to an impending threat against democracy, especially during upcoming elections. Last year, Facebook, for example, became known in news articles and social media circles as the platform that “broke” democracy, due to its role in the turnout, agenda, and outcome of the United States elections. Recent disclosures also reveal how the data analytics firm Cambridge Analytica harvested the data of Facebook users to build profiles for microtargeting. Twitter suffers from botnets (networks of countless unidentified strategic political bot accounts) that amplify political messages online to set agendas and steer the discourse. Google has served anti-Semitic search results and placed ads on anti-Semitic sites, and YouTube’s algorithms may have radicalizing effects. The threat, illustrated here by these and other examples, has bred a need to simply do something about it.

 

“In fact, 2017 may well be remembered as the year when platform harms, and the demands to address them, became widely understood in the mainstream as a leading problem of public policy in digital space”  

David Kaye, UN Special Rapporteur on the Right to Freedom of Opinion and Expression, in Platform Regulations: How Platforms are Regulated and How They Regulate Us, the Official Outcome of the UN IGF Dynamic Coalition on Platform Responsibility

 

A few different approaches to doing something about this threat against democracy have been offered. In an earlier blog post (in Swedish), following IGF2017, I described a rather established consensus among a broad array of actors that education is the favoured long-term solution. Education, such as the Internet Foundation’s initiative Internetkunskap, will raise the level of internet literacy by equipping people with the knowledge to navigate the vast information jungle. An immediate short-term solution, however, would be regulation. Some tech companies are actively working to counter this threat, but a number of actors are calling for increased platform regulation. Sadiq Khan, Mayor of London, for example, wants tech companies to take more responsibility and speed up their removal of content. Germany has already put in place new, much-criticized regulation that obliges tech companies to remove content that is considered hate speech. As of now, there is no consensus on which approach to regulation we should take.

The current debate about platform regulation, in all its complexity, bears on how democratic traditions vary in their approaches to the limits (or non-limits) of freedom of speech. When does online content become illegal? In the early internet days, dreams of a virtual world, a cyberspace, free from any regulation dominated the shared imaginary of its English-speaking inhabitants, as symbolized by A Declaration of the Independence of Cyberspace (R.I.P. JPB). As with many technological developments, the infrastructure and its usage were far ahead of policy. The physical world, the meatspace, however, governed the development of the internet, and cyberspace quickly filled up with restrictions.

Much internet history lies between then and now. Much internet access and content too. Free speech has been central to many legal battles over internet-specific technologies and their regulation, and it remains so today. The idea of free speech discussed in the early internet days originated in an Anglo-American context, whereas the free speech debated today comes in different forms: many countries offer stronger protection to those subjected to hate speech than the United States Constitution does. Which democratic norms and traditions will be at play in our contemporary debate about platform regulation?

This blog post cannot do justice to the nuances of all the separate issues of platform functionality and approaches to free speech, but it will briefly address some of the difficulties attached to state regulation, private regulation, and no regulation, as well as how these issues matter to internet freedom.

State regulation

One approach to platform regulation is to push for the state to regulate online platforms in the name of public safety and security. This would allow the state to require platforms to adhere to national law and policy to a greater extent, to be accountable to the state, and to protect national consumer data, among other things. Including the government in the regulation of online content would presumably put a legitimate, democratically negotiated judicial system in place for administering complaints. Another argument, as pointed out by Yevhen Fedchenko, Director of the Mohyla School of Journalism, in an IGF2017 session on Fighting Fake News, Protecting Free Speech, is that without state regulation, our capacity to act on or counter fake news and misinformation is limited. These examples are, of course, based on the assumption that the government in question is benevolent.

If you remove the assumption that your government is benevolent, the above arguments fail.

Dunja Mijatović, an expert on media regulation and former OSCE Representative on Freedom of the Media, emphasized during the same IGF session why giving states the responsibility to regulate content would be problematic. It would give governments the authority to define what is true or false online. In that position, they would hold considerable power to create narratives intended to achieve political goals and could make people distrust everything they used to believe about established institutions, values, and realities. We now know that this does not apply only to governments, although it certainly may help bring about a change in government.

A state may also have the best intentions but still end up compromising free speech. Germany, as mentioned above, threatens tech companies that fail to remove illegal content with considerable fines. In some cases, companies now cite the German penal code instead of their own community standards or codes of conduct when removing content. Free speech activists have already objected to this law, arguing that it will harm free speech as companies remove legitimate speech too hastily for fear of facing the fines. Political actors from both left and right have also criticized the law, either for being too regulatory or for handing too much responsibility to private actors.

Private regulation

Civil society members and academics alike have pointed to a displacement of responsibility from the state and the judiciary towards corporations. The responsibility has shifted towards internet intermediaries in particular, online platforms such as social media services and search engines, which are now expected to do the work that states and judiciaries used to be in charge of. On March 1, 2018, the European Commission put forward recommendations on measures to effectively tackle illegal content online, largely urging ‘hosting service providers’ (i.e. providers of information society services) to take greater responsibility.

From a human rights perspective, this shift in responsibility (also described as the “privatisation of regulation and police” in the Official Outcome of the UN IGF Dynamic Coalition on Platform Responsibility) is problematic for several reasons. With regard to content regulation, for example, the Terms of Service (ToS) function as a form of non-negotiated law on a platform, one that does not have to undergo public scrutiny. Interpreting online content in relation to community guidelines is not an easy task, and content reviewers make mistakes, such as those Facebook has admitted to. What, then, can the large numbers of users across regions with varying regulations of hate speech expect from platforms when content reviewers ignore hate speech or take down legitimate speech? In Facebook’s case, users can report the content or provide feedback. In other words, urging for private regulation may give private companies too much responsibility to determine what can and cannot be said and done online, when these same companies do not offer any formal appeal process.

No regulation

Finally, there is the possibility of no regulation. Online platforms, such as social media services or search engines, are not content providers. They merely direct their users to content and, the argument goes, should perhaps not be held responsible for it.

This last category stands out, as it can appear blind to the very issues that regulation, as a response to a threat to democracy, is trying to address. Granting platforms greater responsibility allows them to swiftly remove illegal content, while no regulation may result in harmful content remaining online for longer. Jillian C. York, Director for International Freedom of Expression at the Electronic Frontier Foundation, is not, however, blind to these issues. While acknowledging the hostile online climate that has given rise to the debate about regulation, York argues that the costs of content regulation are simply too high and lists examples of how it has primarily affected the less powerful. Instead, York lists a number of other steps available to counter online hate speech while preserving the sanctity of free speech, including increased transparency.

Moving forward

Navigating the available regulatory options is a thorny and complex task. There are several reasons why increased state regulation could improve accountability, as a judicial system based on democratic processes would be in place for the public, in contrast to arbitrary takedowns by private companies. There are also great risks in investing trust in governments to regulate online speech, especially if we look to the rise of increasingly authoritarian rule around the world. Similarly, there are many reasons to urge online platforms to take greater responsibility for online content. There are also reasons why we should not.

The emphasis on platform and content regulation as a response to the various issues broadly referred to as a threat to democracy may, in fact, be misplaced. The debate about platform regulation does not directly address the design of the rules that govern the order in which internet users are presented with content, namely algorithms. Platform and content regulation is likely not the regulatory move that will prove most effective in countering fake news and the spread of disinformation online. Instead, algorithmic transparency would allow institutions and the public to scrutinize the rules governing the circulation of content online. Many actors, including the high-level group of experts (“the HLEG”) advising the European Commission, are indeed focusing on transparency as a central measure to counter threats to democracy. Similarly, the debate easily sidesteps questions of data and content ownership, which are especially pertinent with regard to political microtargeting, lawful or not.

Focusing on issues such as the need for algorithmic transparency should not relieve the actors involved of the responsibility to acknowledge the threat that strategic media manipulation poses to democracy as we know it. On the contrary, governments, companies, and users all have a role to play in addressing what is at stake here. In doing so, we need to keep in mind that all of the approaches to regulation mentioned here, including non-regulation, are defined in direct relation to the understanding of the limits (or non-limits) of free speech that they represent. With this in mind, we can try to move the conversation forward in a transparent manner.

Photo: Justitia by Markus Daams (CC BY)

About the blogger

Isadora Hellegren is currently leading the work of Goto 10, a space for internet-related knowledge exchange and innovation at the Internet Foundation in Sweden. Her interest lies in various aspects of knowledge sharing of and through internet-specific technologies, culture, and governance. In her graduate research, she has focused in particular on these aspects of encryption software in relation to internet freedom. Isadora holds a research-oriented M.A. in Communication Studies from McGill University and is the elected Chair of the Communication Committee at the Global Internet Governance Academic Network (GigaNet).