
PhD project: Social Media, Misinformation, and AI

The online spread of misinformation can have negative societal consequences, lowering trust in institutions and delegitimizing factual sources of information. Recent developments within artificial intelligence (AI) have made it possible to produce and disseminate misinformation far faster. Anton Elias Holt’s PhD project ‘Patterns of Misinformation at scale and the Rising Prevalence of AI Generated Content’ seeks to advance understanding of who views and shares misinformation, as well as how AI impacts the online flow of misinformation. His project forms part of the Social Media INFLUENCE project led by Anja Bechmann.

What is your PhD project about?

“My project has three main components: social media, misinformation, and AI. Currently, I’m working mainly with Facebook data. Facebook is particularly interesting because it’s the largest social media platform and the one used most for news in the US and the EU. The dataset I’m working with allows me to see all the links that have been shared on Facebook over 100 times. I’m specifically looking at this in a European context, investigating the links that have been reported or flagged by users as misinformation. Based on this, I can investigate the demographics of people in Europe who are more likely to encounter misinformation online.”
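The kind of demographic analysis described above can be illustrated with a minimal sketch. The field names and data below are invented for illustration and do not reflect the actual schema of the Facebook URL-shares dataset; the idea is simply to count views of user-flagged links per demographic bucket.

```python
from collections import Counter

# Hypothetical view records: (link_id, was_flagged, viewer_age_group, viewer_gender).
# Invented data for illustration, not the real dataset.
views = [
    ("link_a", True,  "35-44", "F"),
    ("link_a", True,  "35-44", "F"),
    ("link_b", False, "18-24", "M"),
    ("link_c", True,  "55-64", "M"),
    ("link_c", True,  "35-44", "F"),
]

# Count views of user-flagged links per (age group, gender) bucket.
flagged_exposure = Counter(
    (age, gender) for _, flagged, age, gender in views if flagged
)

# The bucket with the most flagged-link views.
print(flagged_exposure.most_common(1))
```

On this toy data, the most-exposed bucket would be women aged 35–44, echoing the European pattern discussed later in the interview.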


How did you get interested in this subject?

“Since my BA, I’ve been interested in how technology affects society. I started at DATALAB as a research assistant associated with the Nordis project, which aimed to better understand misinformation spread in the Nordic region. Here I got to work with large-scale trace data, applying theoretical knowledge from my BA at RUC and my MA in Cognitive Semiotics at AU. I really enjoyed the research environment at DATALAB and the empirical grounding, so I decided to apply for a PhD.”


What are you working on at the moment?

“I’m currently analyzing user-reported content within the Facebook dataset, i.e., all posts that have been flagged as misinformation, hate speech, or spam. I’ve found that some of the reported posts come from what could be considered reputable sources, for example posts from UNICEF promoting vaccines. I want to investigate whether political motivations could be affecting what users report as misinformation, and whether these reports can be used to investigate forms of misinformation other than those rated by fact-checkers. If we can find methods for filtering out politically motivated reports, then user reports could potentially be an effective tool for finding larger amounts of misinformation and harmful content online. This in turn could lead to more in-depth studies of misinformation and of the demographics of groups that are likely to encounter specific types of content.”

“I’m also currently teaching students from our Media Studies MA program about AI. For this, I have set up a website where the students can engage with OpenAI’s Large Language Models in a secure way. The students will have access to this chatbot and get instructions on how to use it as a tool for coding, brainstorming, and writing. They’ll be given guidance on how to write good prompts during the course, and the aim is to examine whether good prompts can effectively counter the common problem of AI hallucinations, where the AI will essentially just make something up. I hope the students will give their consent for me to look at their chat histories. From this, I will be able to see whether the AI still sometimes generates false information, even when prompted properly. This could indicate how prone AI tools like ChatGPT are to generating misinformation, and what new forms of misinformation we might encounter when using these technologies.”
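The prompt guidance mentioned above might look something like the sketch below. This is a hypothetical illustration of one common anti-hallucination pattern (grounding the model in a supplied source and allowing it to say "I don't know"); the wording is invented and is not the course material or any specific API.

```python
def grounded_prompt(question: str, source_text: str) -> str:
    """Wrap a question in instructions intended to reduce hallucination:
    answer only from the supplied source, and admit uncertainty otherwise.
    Illustrative wording only."""
    return (
        "Answer using ONLY the source below. "
        "If the source does not contain the answer, reply 'I don't know.'\n\n"
        f"Source:\n{source_text}\n\n"
        f"Question: {question}"
    )

# Example: the assembled prompt that would be sent to a language model.
print(grounded_prompt(
    "Who leads the Social Media INFLUENCE project?",
    "The Social Media INFLUENCE project is led by Anja Bechmann.",
))
```

Comparing chat histories with and without such grounding instructions is one way to test whether careful prompting actually reduces fabricated answers.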


What findings from your project do you find the most interesting or surprising so far?

“One thing I found surprising is what characterizes the people who are most likely to encounter misinformation on Facebook. Studies conducted in the US have created this narrative that those most exposed to misinformation are older men. But from my research, at least in a European context, this isn’t the case. In Europe, those who encounter misinformation the most are women between 35 and 45. This makes sense when you consider that this group also uses Facebook the most.”

“Another interesting finding is the difference between the types of misinformation that different age groups are most likely to encounter on Facebook. Older people are more likely to encounter political misinformation, as well as wide-ranging conspiracy theories about secret groups of elites controlling the world, or economic scams, such as ads falsely claiming that the user can get a 50-euro gift card to Aldi by clicking on a suspicious link. Younger people are more likely to encounter more ‘sensationalistic’ misinformation, such as stories about Tom Hanks being killed in a car crash, or stories about a pedophile priest having been bitten by a bulldog. There are of course also overlaps and commonalities, for example when it comes to misinformation about the climate crisis.”


What do you look forward to working with in the future?

“I’m looking forward to working more with the AI aspect of my project. Because AI is developing so quickly, it is really exciting to investigate how it changes the information environment, although the pace also makes it difficult to say anything definitive. I am very interested in examining how AI can change the scale, quality, and type of misinformation that is spread online. Similarly, I am excited about studying how malicious actors that spread misinformation will adopt AI in their operations. Before the mainstreaming of AI, bots that spread misinformation on, for example, X (formerly known as Twitter) would simply copy and paste the same text and post it from different accounts. For researchers, it was possible to map out these networks of bots, because you could search for any account that had posted the same text. Now people can use AI to alter the text and instantly generate different versions of it, making it harder for researchers to trace. In general, I am just excited to be working on new problems in such an interesting and fast-moving field.”
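The bot-mapping technique described above — searching for accounts that posted byte-identical text — can be sketched in a few lines. The account names and post texts here are invented for illustration; real detection pipelines are of course far more involved.

```python
from collections import defaultdict

# Hypothetical posts: (account, text). Before generative AI, coordinated bot
# accounts often posted byte-identical text, so grouping posts by exact text
# surfaces candidate networks. Invented data for illustration.
posts = [
    ("acct_1", "Breaking: shocking claim!"),
    ("acct_2", "Breaking: shocking claim!"),
    ("acct_3", "Breaking: shocking claim!"),
    ("acct_4", "Unrelated holiday photo caption"),
]

# Group accounts by the exact text they posted.
accounts_by_text = defaultdict(set)
for account, text in posts:
    accounts_by_text[text].add(account)

# Any text posted verbatim by multiple accounts flags a candidate bot cluster.
clusters = {t: a for t, a in accounts_by_text.items() if len(a) > 1}
print(clusters)
```

As the interview notes, AI-paraphrased variants defeat this exact-match approach, which is precisely why generated misinformation is harder to trace.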