Online video viewing has increased significantly in recent years, becoming one of the main reasons people use the web. According to the Digital 2023 report, which provides a global overview of Internet, social network and mobile device use, watching videos ranks fourth among the most common online activities worldwide.
The shift toward video that conventional platforms such as Instagram and Facebook are attempting, coupled with the success of emerging networks such as TikTok, where users spend close to 24 hours per month, has not only increased the demand for video data but also created new obstacles to collecting and systematizing information from these sources.
Those seeking to understand digital environments face several difficulties: the enormous amount of information generated through video sources, the scarcity of tools to extract and process relevant data, and the lack of standardization in how this data is stored and presented.
In our blog post 'TikTok: A mine of information waiting to be explored', we addressed this and other challenges that the social network and the video format pose for digital research. In this post we focus on the initiatives that seek to overcome this barrier to information access and on the advances made in the analysis of moving images.
Information gathering and databases
In addition to the content itself, digital research relies on metrics that measure reach, level of influence and the way information spreads on social networks. Over time, under pressure from academia, the media and civil society, platforms have ended up offering access to their APIs (application programming interfaces), through which some data relevant to this purpose can be obtained.
However, this information is not always complete, nor does it always provide in-depth knowledge of social network dynamics. In the case of TikTok, a platform increasingly important for understanding the digital debate, the limitations of its API are even more marked: it does not allow downloading metadata in bulk or accessing the flow of relevant content from the "For You" page, one of the most important sections of this social network's infrastructure.
Moreover, building metadata databases poses additional difficulties when trying to understand audiovisual content. Relevant information such as likes, views, comments and hashtags, among other records, has been beyond the reach of researchers, who, when they wanted to collect it, have had to do so manually. These are some of the tools that have been implemented for this purpose:
Zeeschuimer is a browser extension that monitors and collects information about the interface elements of a platform or social network, such as navigation data, menus, templates, sounds and, in general, the channels through which the user interacts with the content and the application. This monitoring, in conjunction with other tools, allows for subsequent systematic analysis. In the case of TikTok, for example, it can export a list of all posts viewed, in playback order, for later analysis and storage in tools such as 4CAT.
4CAT is a research tool for processing online social network data. It aims to make data capture and analysis accessible through a web interface, without requiring programming skills. 4CAT creates a platform dataset from user-determined parameters; linked with Zeeschuimer, it facilitates the reading of data from social networks such as TikTok.
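Zeeschuimer saves the posts it captures as newline-delimited JSON (.ndjson), the format 4CAT then ingests. As a minimal sketch of how such a file can be inspected outside 4CAT, the snippet below reads an export line by line; the exact fields in any given export depend on the platform being captured, so none are assumed here.

```python
import json

def load_ndjson(path):
    """Read an .ndjson export (one JSON object per line) into a list of dicts."""
    items = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines
                items.append(json.loads(line))
    return items
```

From there, the resulting list of dictionaries can be filtered or counted with ordinary Python before, or instead of, uploading the file to a 4CAT instance.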
YouTube Data API
The YouTube Data API, launched in 2007 for programmers and researchers and now in version 3, allows searching YouTube by user-determined topics and accessing metadata such as content interaction figures and audience readings within the application. It has been widely used to bring YouTube, one of the least studied streaming platforms, into research.
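As an illustration, a v3 search request is just an HTTPS URL. The endpoint and parameter names below follow the public API reference; the key is a placeholder that would be obtained from the Google Cloud console.

```python
from urllib.parse import urlencode

API_KEY = "YOUR_API_KEY"  # placeholder; issued via the Google Cloud console

def build_search_url(query, max_results=25):
    """Assemble a YouTube Data API v3 search request for videos on a topic."""
    params = {
        "part": "snippet",        # return title, channel, publish date, etc.
        "q": query,               # free-text topic to search for
        "type": "video",          # restrict results to videos
        "maxResults": max_results,
        "key": API_KEY,
    }
    return "https://www.googleapis.com/youtube/v3/search?" + urlencode(params)
```

Fetching that URL (with a valid key) returns JSON whose `items` each carry a video ID and snippet metadata, which can then be passed to the `videos` endpoint for view and interaction counts.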
Video and image formats contain other types of features that, broken down and grouped together, can be read by artificial intelligence. This field of AI research is called computer vision. The technique turns objects, colors, bodies, gestures, representations, sounds and actions into data useful for analysis. Its tools can also detect formal aspects such as speed and video composition, or track matching images, opening the possibility of cross-referencing with other sources. Access to these tools has long been an obstacle because of their high cost and lack of accessibility for organizations and independent researchers. Recently, however, freely available tools have been adapted to read video and complement computer vision functions. These are some of them:
Google Video Intelligence API
The Google Video Intelligence API is a freely available tool that extracts and annotates the central frame of each video shot based on its visual content; that is, it identifies elements such as bodies, shapes and colors. This is very useful when dealing with narrative arcs and multiple shot transitions, because it breaks videos down into shots and allows the information to be coded quantitatively and qualitatively. It can also be used for audio analysis, identifying whether a sound is original or comes from the social network's sound library, a very pertinent function for research on TikTok.
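To give an idea of what the annotation output looks like, the sketch below filters shot-level labels from a response in the REST JSON shape the API documents for label detection; the sample values are invented for illustration.

```python
def extract_shot_labels(response, min_confidence=0.5):
    """Collect (label, confidence) pairs from a Video Intelligence
    label-detection response (REST JSON shape), keeping only segments
    at or above a confidence threshold."""
    labels = []
    for result in response.get("annotationResults", []):
        for annotation in result.get("shotLabelAnnotations", []):
            description = annotation["entity"]["description"]
            for segment in annotation.get("segments", []):
                conf = segment.get("confidence", 0.0)
                if conf >= min_confidence:
                    labels.append((description, conf))
    return labels

# Illustrative response fragment: structure follows the REST API,
# but the label and confidence values here are invented.
sample = {
    "annotationResults": [{
        "shotLabelAnnotations": [{
            "entity": {"description": "dancing"},
            "segments": [{"confidence": 0.92}, {"confidence": 0.31}],
        }]
    }]
}
```

Filtering like this turns the raw per-shot annotations into a flat list that can be counted or joined against other metadata, which is where the quantitative coding mentioned above begins.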
Memespector-GUI is a cross-platform application that reads data produced by computer vision tools such as Google Video Intelligence. With it, it is possible to identify nodes of interest within video data: images can be detected and classified by parameters such as recognized text, reference sites, logos or people's faces, among others. These functions make it possible to place content in a broader context that can contribute to the research.
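Tools of this kind typically export their results as tabular files. As a hypothetical example (the column names here are assumptions for illustration, not Memespector-GUI's actual schema), one could tally how often a detected feature, such as a logo, recurs across a set of images:

```python
import csv
from collections import Counter

def count_feature(csv_path, column):
    """Tally the values in one feature column of a computer-vision CSV export.
    Column names are assumptions, not a specific tool's schema."""
    counts = Counter()
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            value = row.get(column, "").strip()
            if value:  # ignore images where the feature was not detected
                counts[value] += 1
    return counts
```

A recurring logo or text string surfacing across many videos is exactly the kind of "node of interest" that can anchor further qualitative analysis.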
The high consumption of video content has made it a key object of interest for digital research. At Linterna Verde we are constantly looking for new strategies to complement our social media research methodologies. Combining different techniques and tools will allow academia and civil society to keep pace with the constant advances in social media, to understand how the flow of information operates online, and to gauge its impact on digital public debate.
Capturing TikTok Data with Zeeschuimer and 4CAT, Digital Methods Initiative
DeepTikTok: Three Methods for tracing video memes, Elena Pilipets