Crisis, Control, and Content Moderation: Democratic Oversight of Social Media Platforms
Research project
Social media platforms are central arenas for democratic debate, but they can also amplify democratic risks, particularly around elections or in times of crisis. Within the EU, legal frameworks seek to address these risks by reshaping how content is governed, raising questions of control, accountability, and the protection of freedom of expression. Who has the power to define and mitigate democratic risks online, and how far can legal intervention go without undermining the values it seeks to protect?
This research project examines the legal framework – primarily the Digital Services Act – governing content moderation, with a particular focus on the regulation of mis- and disinformation, political interference, and other misleading content on platforms, especially in times of crisis or around elections. It analyses the distribution of power between the EU, social media platforms, and civil society actors, as well as the implications for fundamental rights and democratic resilience in the EU.
This project examines how the EU’s new legislation – primarily the Digital Services Act (DSA) – seeks to address democratic risks that are spread and amplified through large social media platforms, so-called Very Large Online Platforms (VLOPs). Through algorithms and automated accounts, harmful content such as disinformation can spread and be amplified more easily, and cause greater societal impact, than before. This can affect elections, public debate, and ultimately trust in democratic institutions.
Under the DSA, platforms with 45 million or more average monthly active users in the EU are required to assess and mitigate risks to democratic processes, such as electoral interference and threats to the public sphere. In situations of serious crisis, such as war or large-scale disinformation campaigns, the European Commission may require these platforms to act swiftly, for example by adapting their terms of service or removing harmful content. This so-called crisis response mechanism was introduced in haste following Russia’s invasion of Ukraine in 2022.
The project examines how this legal regulation affects the balance between protecting freedom of expression and reducing the spread of harmful content. It also studies how power and responsibility are distributed among the EU, platforms, and civil society, including actors such as fact-checkers and “trusted flaggers”, whose reports of content are given priority treatment. The aim is to gain a deeper understanding of how these new rules may strengthen democratic resilience – that is, the ability to resist, adapt to, and recover from crises – while also identifying the risks they pose to individuals’ right to freedom of expression.
The project builds on previous research on social media, freedom of expression, and democratic resilience, and connects legal analysis with theories of democratic defence in times of crisis. Its goal is to contribute new knowledge on how the EU’s regulation of digital platforms may affect democracy, both positively and negatively, in an era of increasing global crises.