11/2/2025

Anyone want to think about the Oversight Board?

At the end of January, it became known that Meta had reached a settlement with Donald Trump's lawyers in a lawsuit filed against the company over the suspension of his accounts in 2021, after the assault on the Capitol. Under the agreement, Meta will pay 25 million dollars to have the legal action dismissed; the suit had sought financial damages on the grounds that the suspension was an unjustified act of censorship.

The episode adds to the string of overtures Mark Zuckerberg has made toward Trump in recent months, and it is also a fresh gesture of disregard for the company's content advisory body, the Oversight Board. This body, which has received more than 280 million dollars from Meta for its operation, has been the great outcast of the company's new era of moderation, in which it decided to align itself with Trump's political project.

As early as 2018, Zuckerberg himself floated the idea of forming a consultative body that would also function as a kind of court of final appeal for the content moderation cases the company was grappling with. The idea materialized in 2020 with a global freedom-of-expression dream team of 20 people, among them the former prime minister of Denmark, Helle Thorning-Schmidt; the former special rapporteur for freedom of expression of the Inter-American Commission on Human Rights, Catalina Botero; and the Nobel Peace Prize laureate Tawakkol Karman.

During this time, the Council recommended that the company make its content standards clearer and observe international human rights principles; pushed for impact assessments on moderation in contexts such as the war in Palestine; and analyzed the package of standards Meta implemented to contain disinformation during the pandemic.

It also reviewed thorny cases, such as that of Donald Trump himself, ruling that the suspension of his accounts - given the president's reach on social networks, the context, and the imminence of harm - had been legitimate. At the time, the Council recommended that Meta conduct a comprehensive review of Facebook's potential contribution to the voter fraud narrative that preceded the assault on the Capitol, and that it devote more resources to assessing the risk of harm from influential accounts.

The Council remains active and continues to issue decisions, but its ability to influence the design of Meta's policies and its moderation processes seems increasingly diminished. One sign of this is that the body had no involvement in Mark Zuckerberg's change of course earlier this year, when he announced the end of Meta's fact-checking program and the "simplification" of its hate speech policies on issues such as migration and gender.

That day, the Council reacted with a statement expressing its interest in continuing to work closely with the company to review its anti-disinformation programs. The statement, however, omitted other core Meta announcements, such as the shift in focus of its automated moderation systems. According to Zuckerberg, these systems will henceforth concentrate on removing illegal content and serious policy violations, while other prohibited conduct will be reviewed only if another user reports it.

Beyond the statement, some board members have voiced concern in the media about the company's decisions. Thorning-Schmidt, who co-chairs the body, said in an interview with the BBC that the policy changes could harm the LGBTIQ+ community. Michael McConnell, who is also a professor at Stanford, said the announcements had taken the Council by surprise and suggested that Meta was making decisions with only its relationship with the U.S. government in mind, to the detriment of the rest of the world - where more than 90% of its users are located.

The flurry of changes to Meta's leadership also left the Council without one of its key allies inside the company. Earlier this year it emerged that Nick Clegg, who helped establish the Council, had left Meta, where he served as vice president of global affairs. He was replaced by Joel Kaplan, a former White House official in the George W. Bush administration, whose appointment has been read as another overture by Zuckerberg to the Republican Party.

At its inception, the Council was presented as an experiment in digital constitutionalism: a way to grapple with the complexity of arbitrating user content and to strike a balance between safety and freedom of expression. Over this period, the body has been criticized for the slowness of its decisions: in its first year it resolved only 20 cases, a figure that fell to 12 the following year. In 2023, the Council amended its statutes to establish procedures allowing it to act more quickly.

Despite these limitations, the exercise has provided a glimpse into the underbelly of Meta's moderation processes and has offered valuable perspectives for designing social network regulation that brings these practices in line with human rights. Key elements of this tension, however - such as the amplification of problematic content, coordinated actions, and the role of the algorithm and AI - have largely escaped its reach.

Having an independent mechanism has been favorable for Meta, lending credibility and legitimacy to certain decisions over the years. But the context has changed dramatically, and quickly.

A body like the Council was useful in 2021, when Meta was aligned with Joe Biden's White House and was willing to bear the political cost of confronting problematic narratives, such as those promoted by Trump and his supporters. But today, with Zuckerberg on board the MAGA project and Trump allies aboard the Meta ship - Dana White, a key figure in the Republican campaign, joined the board in January - content moderation aligned with human rights does not appear to be a corporate priority.

The Board is an independent entity, managed by a trust whose directors are appointed directly by Meta. In time, the company could use this channel to constrain the Board's operation, for example through the approval of its budget.

Although it continues to select cases and publish decisions, the Board appears to have run out of political backing. It is also unclear whether its recommendations on community standards and content moderation will be taken into account. According to its statutes, its mandate runs for five more years. Yet it faces the latent risk of becoming irrelevant: a costly operation with few decisions and no impact.

*This text was originally published in issue #72 of Circuito, our information and analysis newsletter on democracy and technology. See the complete newsletter here.