What do platform standards protect us from?

In recent years, platform regulation projects have required companies to actively combat what regulators call "harmful content". This is the case of the Digital Services Act, an ambitious regulation implemented in the European Union that has inspired similar initiatives, such as the UK's Online Safety Bill or Brazil's so-called "fake news bill".

However, the category of harmful content has major gaps due to the lack of agreement on its definition: it is not, as one might think, exclusively a matter of illegal content. Although the concept covers conduct prohibited by the laws of different countries, such as harassment, discrimination or threats, it also includes other behaviors that can be problematic in the digital sphere, such as the promotion of eating disorders or dangerous activities.

In the absence of an established, cross-border legal definition for an issue that does not respect geographic boundaries, the platforms' community standards (the policies that establish what can or cannot be said on a social network) serve as a guide to which behaviors are undesirable and which can cause harm in offline life. Although these rules are imposed unilaterally by social networking companies, their drafting often involves security and public policy experts as well as members of civil society and academia. Their updating is also subject to press and public pressure during emergencies or political scandals, as happened during the pandemic and the last U.S. presidential election.

While platforms have developed policies covering broad areas of unwanted online behavior, these efforts have left out other types of content that can affect users. In this post, we look at three policies aimed at sanctioning content that can cause harm in offline life, and review some of their limitations and gray areas:

Incitement to hatred

Platforms design their hate speech policies to prohibit offensive content based on a person's inherent characteristics. While each company writes its rules according to its own conceptions, all of them treat nationality, religion, ethnicity, sexual orientation, gender, and disability or serious illness as protected characteristics.

These policies sanction not only explicitly discriminatory comments, but also the reproduction of stigmatizing stereotypes, epithets, tropes and degrading expressions, as well as publications that unjustifiably link a population to criminal activities or to criminal or terrorist groups.

As community standards are global in scope, their design sometimes loses sight of local nuances that, for social or political reasons, leave specific groups especially vulnerable. This is the case, for example, of professions exceptionally exposed to risks and threats, such as journalists or human rights defenders in some Latin American countries.

The same is true of another type of discrimination: discrimination based on a person's social class. With the exception of Meta, none of the major social networking platforms includes class as a protected characteristic, despite the exclusionary effect it can have within a society and the well-studied relationship between classism and other forms of discrimination, such as racial discrimination.


Harassment

This is probably the type of harmful behavior most familiar to the average user, ranging from mocking or denying a tragedy to insults and obscene language in general.

It is not always easy to determine to what extent an aggressive comment on social networks harms the user it is addressed to, or whether it is a joke, a consensual exchange, or an angry but harmless reply between participants in a digital conversation. For this reason, it is essential to pay attention to the context in which these publications appear, as some community standards provide.

Sometimes platforms extend this type of policy to protect victims of sexual or domestic violence, or of violent events such as shootings or massacres, who, as has happened with some conspiracy theories in the United States, are exposed to attacks, mockery, or denial of their testimonies.

However, the platforms' rules leave a gray area around other kinds of problematic content, such as posts that blame the victims themselves for what happened. This can occur with gender-based violence, where some online comments go so far as to suggest or affirm that a person is to blame for her own sexual assault or even her femicide.

This happened, for example, with Valentina Trespalacios, a DJ who was found dead, with signs of torture, in Bogota in January of this year. Her case, in which the main suspect is her partner, has been widely commented on social networks, where some users have engaged in this kind of revictimization, for which the platforms have no clear rules.

Incitement to violence

Social networks also prohibit incitement to violence, that is, content that glorifies violent events, makes statements in their favor, wishes harm on others, or makes direct threats.

There are, however, some exceptions to these rules. In critical social and political contexts, certain expressions of anger or indignation against particular people or situations may be covered by freedom of expression. Last year, for example, when the war in Ukraine began, Meta allowed users of its platforms in that country to wish Vladimir Putin dead. The same happened in January of this year, when the company, on the recommendation of its Oversight Board, allowed users to wish for the death of Ayatollah Khamenei in Iran, in light of the protests that had been taking place in that country for months.

Although platform rules tend to sanction a broad spectrum of problematic content and conduct in the digital sphere, some of their loopholes may leave certain discriminatory or revictimizing content in circulation. Moreover, the complexity of some conversations, as well as each country's political, social and cultural particularities, underscores the importance of taking context into account when applying any content moderation measure that seeks to balance safety with users' freedom of expression.