9/2/2023

Fact-checking in the age of AI: risks and opportunities

2022 ended with news that would be talked about throughout the following year: the public release of ChatGPT, an advanced artificial intelligence (AI) model capable of generating text and solving complex problems in natural, articulate language. In the wake of the fascination triggered by its launch, all sorts of concerns were raised, ranging from its impact on education to the dawn of a new era of online misinformation.

According to a report by the Stanford Internet Observatory, Georgetown University's Center for Security and Emerging Technology and OpenAI, the company that developed ChatGPT and DALL-E, the advance of artificial intelligence risks facilitating the spread of influence operations and poses new challenges for the control of disinformation and the work of fact-checkers.

Gradually, the digital environment will fill with content whose origin, human or artificial, will not be entirely clear, nor will its motivations. With these models, which can produce natural and eloquent text in a matter of seconds, launching and automating misleading and propagandistic information campaigns becomes cheaper.

Digital deception: risks of the malicious use of AI

Although ChatGPT has safeguards in place to prevent the dissemination of false information and can recognize some misleading requests, these models still have many limitations and can be manipulated into producing false outputs. Despite their advances, these technologies are not trained to distinguish fact from falsehood. Moreover, because they are trained on existing online data, they can perpetuate the misinformation and biases present in those sources.

With a few minutes in ChatGPT and the right instructions, it is possible not only to create misleading texts but also to build a fake news environment from scratch and fill non-existent media pages with content that appears entirely legitimate, as has already started to happen.

The speed with which these large language models respond to human requests highlights three specific risks:

  1. Increased reach. The ability of generative AI to compete with human-written content at low cost lowers the barrier to entry and allows a greater number of actors to create disinformation campaigns and influence operations and to scale them quickly, whether for political, ideological, financial or social motivations. In addition, because of the volume of false information and bias circulating on the web and feeding large language models, users with low media literacy are not only liable to consume and spread problematic information unintentionally, but can also be easily persuaded to act in certain ways.
  2. Greater effectiveness. The naturalness and eloquence of AI-generated texts make disinformation more convincing and more personalized, harder for even the most experienced fact-checkers to detect, and corrosive to trust in information ecosystems. All of this can serve actors seeking to stoke conspiracy theories, interfere in democratic processes or promote adversarial narratives.
  3. Greater sophistication. Although AI cannot distinguish facts or verify them as true, its ability to lend false information an air of authority and pass it off as fact opens the way to new and more sophisticated disinformation tactics.

This new scenario forces fact-checkers to adopt strategies that can keep pace with misleading information and influence operations designed with artificial intelligence.

How to mitigate the damage? 

First of all, it is important to use tools that allow fact-checkers to verify the origin of a text. Alarm in various sectors about the impact of artificial intelligence has given rise to a number of programs that try to identify whether a piece of content came from these generators. OpenAI itself recently launched the AI Text Classifier in an effort to offer a technology capable of this kind of detection, but the company warns that the tool has limitations and that its verdicts are not always accurate.
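To give a sense of what this kind of detection looks like in practice, the sketch below runs a short text through a publicly available detector (the older roberta-base-openai-detector model on Hugging Face, used here only as a stand-in; it is not the AI Text Classifier mentioned above, and its labels are probabilistic hints, not proof of origin).

```python
# Minimal sketch: scoring a text with a public AI-text detector.
# Assumes the `transformers` library is installed; the model named here is an
# older GPT-2 output detector used purely for illustration, and its verdicts
# should never be treated as definitive.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

text = "Scientists announced today a breakthrough that will change everything."
result = detector(text)[0]

# The pipeline returns a predicted label and a confidence score.
print(f"label={result['label']}  score={result['score']:.2f}")
```

In practice, a fact-checker would treat such a verdict as one signal among many, to be weighed alongside provenance, sourcing and context checks.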

Beyond these detection tools, it is also possible to train artificial intelligence models that specialize in finding false statements in a text. Initiatives of this kind have already been implemented by organizations such as Newtral, which has fact-checking as one of its areas of work, and shared at fact-checkers' conferences in the region.
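As a rough illustration of the first step in that kind of pipeline, the sketch below uses a general-purpose zero-shot classifier to separate check-worthy factual claims from opinions. It is a toy example under stated assumptions, not Newtral's system: the model, labels and sentences are chosen here only for demonstration.

```python
# Minimal claim-detection sketch: flag which sentences contain verifiable
# factual claims so a fact-checker can triage them. Illustrative only; the
# model, labels and examples below are assumptions, not a production system.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentences = [
    "Unemployment fell by 12% last year.",         # checkable factual claim
    "The new policy seems like a terrible idea.",  # opinion, not checkable
]

labels = ["verifiable factual claim", "personal opinion"]

for sentence in sentences:
    result = classifier(sentence, candidate_labels=labels)
    top = result["labels"][0]  # highest-scoring label for this sentence
    print(f"{top:>25}  <-  {sentence}")
```

Flagged claims would then be passed to human fact-checkers or matched against a database of previously verified statements.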

It is also important to promote education and awareness among the general public about misinformation and the manipulation of artificial intelligence tools, including strategies for identifying misleading information and methods for protecting against it.

Regulating these kinds of tools is another fundamental measure. Mira Murati, CTO of OpenAI, has said that legislators need to be involved in the development of these technologies to prevent them from being used for malicious purposes.

Although misinformation is a global problem, there are still insufficient incentives to work towards coordinated solutions. As the aforementioned Stanford, Georgetown and OpenAI report points out, technology companies, public authorities and researchers work independently, with different, though sometimes complementary, definitions and resources, and without the capacity to agree on a comprehensive way to address the phenomenon. Regulation, for example, could harmonize the work of the actors involved and help them find comprehensive solutions and strategies in the fight against disinformation and influence operations.

The report also offers a series of recommendations that can be applied at each stage of intervention, from the design and construction of the models, through access, to content dissemination and audience development.

References