June 20, 2023

Three perspectives on digital rights: lessons from the epicenter of research and activism.

A new edition of RightsCon took place in Costa Rica in early June. The event, organized by Access Now, has in recent years become the world's most important gathering in the field of digital rights. With the participation of experts, public officials, academics, members of civil society, and some technology companies, RightsCon hosted nearly 600 in-person, virtual, and hybrid sessions.

An event of this size is, naturally, an explosion of ideas, testimonies, and projects (and some clickbait). Still, amid the overexposure it is possible to spot new trends and discussions around digital research, online activism, and the exercise of human rights on the Internet. In this post, we look at three key takeaways from the latest edition of RightsCon.

The place of grassroots organizations in Internet governance

Traditionally, digital rights organizations have defended freedom of expression, equality, and privacy in digital spaces. Because the debates and research involved can be technical or inward-looking, organizations that do not specialize in information technology and that pursue their agendas in the offline world are left out of a conversation that also affects them: they are often exposed to online attacks or targeted by other harmful behavior on the Internet.

That was the experience described by Juan Francisco Sandoval, former head of the Special Prosecutor's Office against Impunity in Guatemala, and Ulises Sánchez Morales, a member of the Mexican organization Unasse, who, at an event organized by Cejil and Article 19 Mexico and Central America, recounted how they had been subjected to disinformation campaigns and online threats.

The conversation not only exposed how human rights defenders are attacked on social media, sometimes with messages that include elements of hate speech, but also how certain product decisions have amplified the reach of these attacks. According to Sandoval, since Twitter changed its verification system, many of the accounts behind the attacks carry the blue badge that was once reserved for high-profile accounts and is now available to anyone who pays for a subscription. "With Twitter's new rules, the attacks are already certified," he said at the event.

The push for cross-sector dialogue about the role of platforms and for improvements to their governance mechanisms must therefore also include grassroots organizations dedicated to defending human rights. A genuine call for plurality means giving these actors a platform to take part in a broad conversation about the concerns and needs of their communities, as Lesly Guerrero, a member of Cejil, pointed out during that session.

Content moderation: a problem beyond technology

While in one RightsCon panel a group of experts from the Oversight Board (an independent Meta body that acts as a kind of Supreme Court on moderation issues) explained how they select cases, evaluate them in light of international human rights treaties, and deliberate for months before finally reaching a decision, in another, Daniel Motaung, a former Facebook moderator in Kenya, recounted how on the job he had just 25 seconds to decide whether or not a post should be removed from the platform.

The contrast between the two sessions exposes the huge gap between the daily work of moderators, who are outsourced to call centers in developing countries, and the activity of the Oversight Board, a body that in two years has resolved only 36 cases.

The harmful effects moderators suffer from their work have been widely documented, and Motaung himself attested to them at the event. Hours of exposure to the Internet's most harmful content (graphic violent images, child sexual abuse, and hate speech) have left many with post-traumatic stress disorder and sleep disorders. Despite investment in automated systems and in worthwhile experiments like the Oversight Board, content moderators remain the last link in the chain.

Beyond the well-known flaws of moderation systems, such as errors in detecting dangerous posts or in applying sanctions, there is another problem that does not depend on technology: the working conditions of content moderators around the world, the first line of defense for the protection of users on social media.

Although this is a labor law matter, the platform regulation bills now being discussed or implemented around the world have neglected it, even though it is a cornerstone of the fight against harmful online content and of protecting the human rights of those who work for Internet companies.

Artificial intelligence is not the end of humanity

Predictably, the rise of artificial intelligence tools and the debates over how to regulate them placed the topic at the center of the RightsCon agenda. In contrast to the alarmist discourse that permeates conversations about AI, which anticipates machines dominating humanity and millions of jobs disappearing, several of the sessions were a call for calm.

For some, it is suspicious that much of the alarm about the risks of AI comes from those who are themselves entering the AI market, such as some of the signatories of a letter released in March calling for a halt to the development of more powerful models. According to Frederike Kaltheuner of Human Rights Watch, instead of imagining catastrophic scenarios, the discussion should focus on the risks that already exist and already undermine AI, such as misinformation, biased data, and the impact on users' rights.

As Tate Ryan-Mosley of MIT Technology Review highlighted, the point is not to downplay the risks but to focus on the most relevant ones: those that are far from driving humanity to extinction or obsolescence, but close to widening the existing gaps for historically marginalized communities, which are already beginning to suffer harm from these technologies. This is the case, for example, for migrant populations and for speakers of languages that artificial intelligence models cannot recognize.