Platforms and Fundamental Rights by Bernát Török

Vol. I, No. 2

Platforms and Fundamental Rights

The Case of Social Media and Freedom of Speech

The debate over the role of social media in public life is intensifying worldwide. There is no disagreement, however, that this role has changed decisively in recent years. Everyone can see that these new online platforms have fundamentally redrawn the structure of the public sphere, with an enormous impact on the development of social dialogue. So far, the focus has mostly been on observing and understanding how these processes are unfolding, but now is the time for a social response to the new situation. Put simply, the debate is about what to expect from social media companies in their new central role. Many would seize the opportunity these platforms offer for highly effective regulation of social discourse, and urge them to intervene as actively as possible, while others are convinced that community deliberation should not be subject to the omnipotent decrees of one (or at best two) technology giants. This article, which takes the latter position, addresses some of the fundamentals of the relationship between platforms and freedom of speech. It argues that, despite serious challenges, social media does have democratic potential, but that to preserve this potential, we cannot allow it to become the arbiter of social debate.

That the operation of these platforms is now closely linked to freedom of expression is evident. When speaking about this connection, however, it is important to clarify the nature of the aspects and arguments in question. Freedom of expression has long been more than a legal concept in the narrow sense,1 and this has only become truer in recent times. Western constitutions and international conventions enshrine freedom of expression as a fundamental right, but in public and even jurisprudential debates it is often invoked primarily as a political, rather than a strictly legal, requirement. While both legal and political arguments are legitimate and important, it is worth separating them for the sake of clarity, in order to have a clear view of the scope of their validity and of the tools available to enforce them. Although in the long run political considerations may determine the validity of a constitutional provision, the force of law can be used to steer processes in a desired direction even in the shorter term.

This article deals with constitutional considerations arising from the doctrine of freedom of expression. It therefore sets out requirements for social media that can even be enforced through legislation—albeit not necessarily internationally. Regarding the issues analysed below, North American and European constitutional doctrines differ significantly, and the arguments put forward lead, above all, to conclusions valid in a European context. The essence of these conclusions is that with regard to social media, respect for the freedom of speech of others can be viewed not only as a political, but also as a constitutional requirement.

A public sphere without gatekeepers

One of the crucial innovations of social media in terms of freedom of speech is its ability to provide anyone with a real opportunity to be heard in the public sphere. Of course, this possibility has, at least in principle, been available wherever the constitutional right to freedom of expression is guaranteed. However, the right to speak has not always been accompanied by the kind of informational infrastructure that would really allow anyone to express themselves, perhaps on a daily basis, to many others. In the case of traditional media—that is, newspapers, radio, and television—there are technical and economic barriers to participatory communication, and the Internet, though it may have inaugurated a world of informational abundance, did not in itself make as much of a breakthrough as the social media revolution. These service providers have developed—not in a spirit of social responsibility, but according to their business interests—the structure and network which has made it realistic for many to speak to many, with previously unimaginable speed.

The democratic benefit of these platforms, in terms of freedom of expression, is that they allow anyone to make their voice heard within a broader social dialogue. There is more to this than the suitability of the technology and the network. Social media, in its first period of operation, created a sphere of social dialogue without gatekeepers, where it was not up to newspaper or TV owners, editorial staff, or even journalists, to decide whether a given speaker was heard, but instead depended primarily on the speaker. It would be pointless to deny that this also entails certain risks. Yet whatever our views on the possibility of many people speaking out and the risks involved—so long as we see free speech as the most important basis for legitimacy and participation in democracy—we must not destroy that opportunity, but instead work to develop good practices.

The public sphere without gatekeepers has, of course, also become an attractive opportunity for politicians.

The consequence, and additional driving force, of social media becoming a public forum is that these platforms are now indispensable in the communication toolbox of politicians. It is also not uncommon for important government announcements to appear first on Facebook rather than in public service media broadcasts. Public figures usually have a significantly larger circle of followers than the average, which, according to one possible argument, could even raise the prospect of greater restrictions on their social media utterances. However, the constitutional logic of freedom of speech supports a completely different argument. According to the doctrine of freedom of expression which holds sway across the Western world, the deeper social deliberation penetrates into the political debate on public affairs, the less any substantive intervention can be justified. The argument that politicians, more than anyone, must be thick-skinned enough to endure manifestations of public sentiment which are often expressed in less than polite terms is also valid in the other direction: in that case, correction can best be entrusted to social exchange itself. The language of politicians is particularly multifaceted, their misleading statements are challenged by a multitude of public figures, and in the context of a democratic, multi-party public sphere, the audience may well be aware of the often exaggerated and even manipulative nature of party-political communication.

Fundamental rights versus platforms

Once the structure had been established whereby any one person could address many, the extent to which service providers could interfere with the content published on their interfaces immediately became a pressing question. The point of departure was already clear years ago: in principle, they can do anything, as they are private companies that can shape the dialogue on their platform according to their own views and rules. To this day, that is the prevailing legal attitude toward the operation of social media platforms in the United States. According to the American doctrine of constitutional law, such fundamental rights can be enforced against the state—and only against the state. In this sense, it is not the users who can invoke freedom of expression against service providers, but service providers who can invoke it against regulatory interference: they are free to shape content as they wish on their own interfaces.

The situation in Europe is more complex. Although, as befits their purpose, constitutional rights primarily oblige the state to respect civil liberties, in most European states, including Hungary, the protection of fundamental rights cannot be granted only to the citizen against the state. It is an integral part of European constitutional thinking that, in some well-defined cases where private actors find themselves in a situation that significantly affects others in the exercise of their fundamental rights, constitutional provisions also impose certain obligations on them. That is precisely the situation we currently face. Over the years, these platforms have become such a significant medium for social dialogue that to a certain extent their position is already fixed. Whereas in the past the issue of freedom of speech could be interpreted almost exclusively as a relationship between the citizen and the state, today social media, which broadly determines the discursive infrastructure, must also be considered a third actor. All the more so because the activities of this third actor are akin to the criteria of ‘governance’ in terms of their impact on citizens’ actions.

It is not that the system of requirements placed upon states should be transferred wholesale to social media platforms. Firstly, the bearer of obligations with regard to fundamental rights remains, first and foremost, the state, so it follows that it is states which are most restricted by precepts arising from freedom of expression. Secondly, the enforcement of constitutional rights against private actors always takes into account the specific, legitimate interests of the obligated party. In spite of this, the emergence of a fundamental rights aspect hinges precisely on the fact that these interests cannot be invoked without restriction. Although the platform provider may impose special restrictions based on the objectives of its network, it must respect the essential aspects of the fundamental rights thus affected. One such criterion, which follows from the principle of freedom of expression, is that everyone should be free, above all, to express and publish their views in the debate on public affairs. The more heated and current the debate on social issues, the narrower the opportunity to intervene with regard to the expression of opinion, and the less the service provider can deviate from the already established constitutional standards.

Of course, part of the problem is that, though the situation differs from country to country, in the best case a few, and in the worst case only one social media platform holds a dominant position. In Hungary, for example, it cannot be ignored that Facebook has a remarkably dominant role, even in comparison with its neighbours: the vast majority of adult Internet users, 80 per cent, use it regularly, while the other platforms combined have only been able to capture 25 per cent of the market.

Passivity in moderation

Moderation is usually the focus of the most heated debates. Social media platforms are expected—and in Europe sometimes strictly obliged through regulation—to restrict dialogue on their sites within certain bounds. The level of moderation that aims to remove manifestly illegal content—such as hate speech, pro-terrorist content, and even profiles created only for deception—is an integral part of the established framework of the democratic public sphere. Similarly, if there is a real risk of violence in a given situation, action may be warranted until the danger has been averted. However, the situation is different when it comes to other interventions, and the more directly related they are to our socio-political issues, the less justified it becomes to silence certain voices.

The practice whereby social media platforms, by deleting posts or profiles, actively interfere in the content of the public debate according to their own standards is contrary to the right to freedom of expression, and undermines the role of these platforms as public spaces. It is indefensible for platforms to enter the public debate as the new arbiters of social dialogue, relying on perspectives they themselves define and apply. Extensive moderation, which does not shy away from deletion, will carry platforms far from their initial democratic achievement, making them gatekeepers that can be manipulated more easily than in any previous medium of communication. No comparable mechanism has ever existed to remove a person’s voice from such a significant domain of social dialogue with a single click. There is no more effective control than when we enforce our requirements in the online space through codes and algorithms. The situation is clearly exacerbated by the fact that platforms are visibly becoming more systematic in their moderation policies and acting more uniformly against the content they are concerned about. Recent waves of bans have also revealed the cornerstone of the system: if one service provider does not act in lockstep with others, and does not remove opinions criticized by the majority, key players which are largely unavoidable in accessing apps (Google, Apple) simply remove that service provider from view.

There are essentially two ways to resolve the contradiction between the constitutional framework and the practices of service providers: either we modify the obligations arising from freedom of expression, or we change the practice of the platforms. The question is certainly an open one, inasmuch as many would prefer the first option. Many would argue that the precepts governing the expression of opinion have thus far been tailored to fewer speakers, a slower flow of communication, and more considered public statements, therefore we must also make effective use of platforms’ means of intervention when setting new standards. The fact is, however, that these constitutional aspects were not first and foremost tailored to particular circumstances, but to the democratic rule of law. It is an undeniable fact that the conditions of our social relations and democratic exchange of views have changed significantly over recent decades. However, the theoretical point of departure is not a contingent one, but stems rather from the essence of democracy, the idea of democratic participation, and so long as there is a common belief that we wish to conduct our public affairs democratically, our social practices must be adapted to that principle, not the other way around.

Transparency in navigation

The activities of social media platforms are not a homogeneous mass: they have different effects on social communication through different functions, which in turn justifies a differentiated approach. The essence of the ‘democratic gain’ is that anyone, whether a private individual or a politician, can share his or her opinions and reach the public without anyone interfering with the content. On the other hand, it raises different liability issues when a post can reach millions of people in the blink of an eye through the functions operated by the platform, especially through a chain of algorithmic information-distributing systems (feeds) or instant shares. No one has a fundamental right to be able to communicate with the widest possible public and at lightning speed.

At the same time, in addition to moderation, social media companies also affect important societal interests by influencing the discursive structure on their platforms, in particular by directing our attention. The general characteristic of the information society in the twenty-first century is also true of the functioning of the wider public sphere: today it is no longer information or opinions that are scarce, but the attention that people use to absorb some of the abundance of information and opinions that overwhelm them. Social media can thus have an extremely important impact on the development of our social debate, not only through content intervention, but also by influencing where we direct our attention. Even if there is no space in this context to raise the subjective fundamental rights arguments relevant to moderation, constitutional law cannot be ignored on navigation either, due to its impact on the development of public opinion. It is unsustainable that we gain at most a vague insight into how this influence on social dialogue works, on the basis of arbitrary information provided by service providers. Navigation alone can even advance important social goals, without the platforms becoming stand-alone content editors. For example, they could play a meaningful role in shaping the structure of discourse on their platforms through their algorithms, so that their millions of users encounter more diverse content, rather than deepening their isolation in private echo chambers. However, we only have a chance of a meaningful discussion about these matters when we obtain a somewhat clearer picture of the way navigation works, and the system of criteria it uses.

Regulation and beyond

It can be stated that while in recent years the focus has been on unpacking the public role of social media and understanding it, now is the time for a social response to the new situation. Legislation in the EU has a particularly important role to play, as the power of joint action may run counter to the interests of global business giants, but national specificities are also unavoidable in regulating social media use, which continues to be predominantly national.

Europe has taken the first steps, and a text has been proposed for a draft EU regulation governing the operation of online platforms. The concept is an important step forward, and provides adequate answers to some of the questions raised above. Clear rules seem set to promote transparency in the operation of the platforms. In the scope of activities referred to above as navigation, this may be the primary regulatory goal: the service provider should be accountable for the ways in which it shapes the structure of communication on its interface, and how it navigates us in the necessarily limited reception of overabundant content. In the area of moderation transparency, the draft could also make progress by imposing an obligation on service providers to provide reasoning and complaint handling. For the time being, however, the regulatory concept lacks a substantive framework for moderation, that is, one limiting the power of the platforms to shape the content of public debate. It follows from the above that the right to freedom of speech entails, in addition to setting the framework for democratic publicity on the basis of a broad consensus, that social media must lose the power to silence opinions expressed in social dialogue.

So for now the task is to work out the legal rules, but once we are done with that, we can turn to the even more important question: how are we to deal with the functioning of our public sphere? Social media and the new risks that come with it certainly did not arise out of a vacuum. Some of the problems we blame on social media actually point to serious anomalies that already existed in the way our public sphere operates. The irrationality that is increasingly manifested in public discourse, the confinement to echo chambers, and the lack of credible organs that are commonly accepted and considered trustworthy by a critical mass of society are not merely the products of the social media age. Even if resolving all these issues seems a utopian dream, we cannot simply abandon the vision of a healthier democratic public sphere. Those who would seek to solve the problems that undoubtedly exist and are amplified by social media through a more effective gatekeeping role are on the wrong path. That is because a pluralistic public is inherently too complex for a single arbiter to filter all our substantive concerns. Democratic political debate is a risky mechanism: if a society does not work together to develop and maintain its healthy functioning, it will face the consequences of its distortion together. The justifying principles of freedom of speech, which are still valid today, warn us that if we try to remedy the risks of community dialogue by limiting democratic debate, we are not achieving a goal, but merely adding fuel to the fire. The ideal of lively public discourse, which is the constitutional basis for the regulation of democratic publicity throughout the Western world, cannot tolerate an omnipotent, central mediator.


Bernát Török graduated in law from the Pázmány Péter Catholic University, Budapest, in 2003. He worked as a lawyer at the National Radio and Television Committee, and later became chief counselor at the Constitutional Court of Hungary. He is an associate professor of constitutional law and head of the Institute of the Information Society at the University of Public Service—Ludovika (Budapest). His research interests include freedom of speech and fundamental rights in the information society.

