Electoral integrity: challenges of content moderation in election season

For years, social networks have been an essential arena for political candidates. In these spaces, electoral battles are fought, alliances and candidacies are defined, and messages are positioned that ultimately seek to sway the votes of those who participate in the democratic exercise. This dynamic has led platforms to develop electoral integrity standards for moderating political discourse in these contexts. In this post we explore the evolution of these rules and some important factors to take into account for the regional elections in Colombia and those scheduled for this year in countries such as Ecuador and Argentina.

To some extent, the platforms' rules respond to lessons learned from the 2016 and 2020 U.S. elections regarding fraud narratives. An analysis of the policies of platforms such as Meta, YouTube, Twitter and TikTok allows us to identify five types of prohibited online behavior:

1. Electoral disinformation: the propagation of false or misleading content about essential aspects of an electoral process and the candidates participating in it.

2. Electoral interference: content that seeks to alter the normal course of a process by dissuading citizens from participating or misleading them about key aspects, such as voting sites or dates.

3. Election violence and intimidation: hate speech and threats directed at election officials, or the incitement of violence against them. The eradication of this rhetoric seeks to maintain a safe and respectful debate environment.

4. Delegitimization: false narratives about the results of an election or the legality of the processes. 

5. Electoral fraud: content that promotes the commission of electoral crimes, such as vote buying.

Despite their impact on public debate, platforms have sometimes avoided assuming their role as arenas for electoral campaigns. This is the case of TikTok, which for a long time presented itself exclusively as an entertainment space, shying away from any additional responsibility. Until April of this year, its community standards were limited to prohibiting "content that sought to mislead the community about elections or other civic processes," a rule too broad for the problem it sought to prevent. However, just before the company's CEO appeared before the U.S. Congress, TikTok completely reformulated its policies to safeguard electoral integrity and adopted more specific rules, such as a ban on making erroneous claims about polling places or on disputing the outcome of an electoral process without evidence.

It is foreseeable that these new rules will be put to the test in the elections scheduled for this year. This is especially true in Colombia, given the proven success of Rodolfo Hernández's TikTok campaign last year, a strategy that gave him national visibility and made him the 'king candidate' of that social network.

In any case, election-specific policies are not the only front in this battle in Colombia. Community standards are designed to be applied globally, so their design may overlook aspects specific to local contexts and allow certain potentially harmful content to remain online.

A monitoring exercise conducted by Linterna Verde earlier this year found gray areas in these policies regarding protection against certain problematic content, such as incitement to violence in response to social protest, or class-based discrimination directed at Vice President Francia Márquez.

These gaps may prove especially relevant in the context of Colombia's regional elections. On the one hand, the vote is expected to become a showdown with the national government; on the other, social protest is likely to be on the agenda in areas such as Valle del Cauca, where the events of the 2021 National Strike are still fresh.

Beyond the rules themselves, during election seasons platforms implement measures to provide users with greater clarity and context. One example is interstitial messages that link to information resource centers, such as fact-checker pages or official sites. However, these measures may be insufficient. Last year, for example, during the presidential campaign in Colombia, Twitter searches related to the process offered generic links to the pages of the Registraduría and the Consejo Nacional Electoral, where there was no information to counter the misleading narratives circulating during that period.

It is also important to stress that online disinformation and its impact on elections should not be viewed through the lens of a single social network. It is a complex phenomenon that unfolds across an information ecosystem encompassing other platforms, messaging services, and both traditional and independent media. The formula also includes influence operations and platform manipulation that can shape users' perceptions and the course of an election, as shown by the investigation "Digital Mercenaries," recently published by the Latin American Center for Investigative Journalism (CLIP).

Electoral disinformation on social networks involves many and varied battle fronts. Analyzing it requires a broad perspective: one that includes the platforms' responsibility to craft standards that address local needs, but also a critical look at the infrastructure of these spaces and the risks of manipulation by political actors. With regional elections on the horizon, the development of content standards and the introduction of other containment measures face a new test in protecting the integrity of digital public debate.