Public discourse was once driven by a mix of journalistic integrity, lived experience, and interpersonal communication. With the advent of machine learning-driven recommendation systems (Bakshy et al., 2015), however, AI now largely determines what people see and read - and, by extension, shapes what they believe. Platforms such as Facebook, X (formerly Twitter), and TikTok use AI to amplify content that maximises engagement, often at the cost of truth, nuance, and objectivity (Zuboff, 2019). This algorithmic bias creates a distorted version of public opinion, in which the most sensational, emotionally charged narratives rise to the top - not necessarily the most accurate or representative ones. AI-driven "echo chambers" reinforce these biases, ensuring that people are exposed to views that confirm rather than challenge their existing beliefs (Pariser, 2011).
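To make the mechanism concrete, consider a minimal sketch of engagement-driven ranking. Everything here - the posts, the signals, the weights - is hypothetical and vastly simpler than any real platform's system; the point is only that when a scoring function rewards clicks, shares, and outrage while ignoring accuracy, sensational content wins by construction.

```python
# Minimal sketch of engagement-driven feed ranking. The posts, signals,
# and weights are hypothetical - no real platform is this simple - but
# note that accuracy is not an input to the score at all.

posts = [
    {"title": "Measured analysis of the new policy", "clicks": 120, "shares": 10, "outrage": 0.1},
    {"title": "SHOCKING scandal they don't want you to see", "clicks": 900, "shares": 400, "outrage": 0.9},
]

def engagement_score(post):
    # Weighted sum of engagement signals; outrage-heavy content dominates.
    return 1.0 * post["clicks"] + 5.0 * post["shares"] + 300.0 * post["outrage"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f"{engagement_score(post):7.1f}  {post['title']}")
```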
For instance, during the COVID-19 pandemic, AI algorithms heavily promoted anti-vaccine content and conspiracy theories over factual public health information, leading to widespread misinformation (Cinelli et al., 2020). Similarly, AI-powered recommendation engines have exacerbated political polarisation by creating information silos that reinforce partisan divisions (Guess et al., 2018).
Perhaps the most dangerous manifestation of AI-driven misinformation is in election manipulation and political discourse. Deepfake technology - highly convincing AI-generated videos - has been used to spread false statements and fabricated scandals (Chesney & Citron, 2019). The ability to engineer reality has led to an erosion of trust in political figures and democratic processes, as citizens struggle to differentiate truth from AI-generated deception.
- During the 2016 and 2020 US elections, AI-powered bots flooded social media with targeted disinformation campaigns, influencing millions of voters by presenting manufactured consensus as genuine public sentiment (Benkler et al., 2018).
- In India’s 2019 elections, deepfake videos were used to misrepresent political candidates, while in Myanmar, AI-driven misinformation contributed to ethnic violence against the Rohingya population (Mozur, 2018).
- Similar tactics were deployed during the Brexit referendum campaign and the Russia-Ukraine conflict, demonstrating the global reach of AI in shaping national and international political landscapes.
AI doesn’t just distort existing viewpoints - it can also fabricate them. AI-generated comments, fake reviews, and synthetic grassroots movements (known as "astroturfing") create an illusion of mass support or dissent (Ferrara et al., 2016). When people see what they believe to be the majority opinion, they often adjust their own views accordingly - the so-called bandwagon effect (Sunstein, 2001).
For example, Amazon and TripAdvisor have struggled with AI-generated fake reviews, which artificially boost or damage businesses, misleading consumers in the process. In politics, AI-driven bots have been used to simulate public support for policies that lack genuine grassroots backing, skewing legislative priorities and media narratives (Howard et al., 2018).
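One way such coordination can betray itself is through near-duplicate messaging. Here is a toy detector that flags almost-identical texts posted by different accounts; the accounts, messages, and the 0.9 similarity threshold are invented for demonstration, and real astroturf detection relies on far richer behavioural and network signals.

```python
# Toy detector for coordinated posting: near-identical messages from
# different accounts. Accounts, texts, and the 0.9 threshold are invented
# for demonstration; real systems use behavioural and network signals.

from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("account_a", "This policy is a disaster for hardworking families!"),
    ("account_b", "This policy is a disaster for hard-working families!!"),
    ("account_c", "I read the draft and honestly have mixed feelings about it."),
]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Near-identical messages from distinct accounts suggest coordination.
for (acct1, text1), (acct2, text2) in combinations(posts, 2):
    if similarity(text1, text2) > 0.9:
        print(f"possible coordination: {acct1} <-> {acct2}")
```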
What happens when the "voice of the people" is no longer a collective expression but a product of AI-generated narratives?
The consequences are profound: societies fragment, trust in traditional institutions erodes, and real grassroots movements struggle to gain traction amid a sea of artificial noise. With AI increasingly curating public discourse, a new form of AI-driven populism has emerged. Leaders and movements that align with AI-optimised engagement strategies - those that thrive on divisiveness, emotional extremity, and oversimplification - gain a structural advantage (Tufekci, 2018). The ability of AI to amplify outrage and suppress complexity ensures that nuanced, fact-based discussions struggle to survive, leaving a public sphere where extreme views dominate.
This AI-engineered populism has redefined "Leitkultur" - a German term meaning "leading culture" that once described the shared cultural values of a society. Instead of organic social values, today's Leitkultur is increasingly dictated by algorithms that prioritise engagement over enlightenment. In this reality, truth becomes secondary to emotional impact, and rational discourse is replaced by AI-optimised outrage cycles (Lanier, 2018).
In Brazil, the rise of Jair Bolsonaro was heavily supported by AI-driven WhatsApp disinformation campaigns that disseminated false narratives to millions of voters (Mello, 2019). Similarly, the French "Yellow Vest" protests were inflamed by AI-generated narratives, exacerbating public dissent through viral misinformation loops (Marwick & Lewis, 2017).
AI-driven misdirection isn't just a passive phenomenon - it actively exacerbates divisions within societies. Through predictive analytics and psychographic profiling, AI can tailor content to manipulate emotions, reinforcing fears, anxieties, and hostilities (Cadwalladr, 2019). AI-powered systems in social media have been found to prioritise conflict-driven content, deepening political and social fragmentation: a 2018 study found that false news reaches people roughly six times faster than factual news on social platforms (Vosoughi et al., 2018). This creates a self-perpetuating cycle of division, as individuals become more entrenched in their perspectives, convinced that their views are the dominant or "correct" ones.
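The dynamic behind that statistic can be illustrated with a toy branching model: if each newly exposed user passes a story on to slightly more people per step, the gap in reach compounds quickly. The per-step spread rates below are assumptions chosen for illustration, not figures from Vosoughi et al. (2018).

```python
# Toy branching model of story spread. The spread rates are assumptions
# for illustration, not data from Vosoughi et al. (2018).

def cumulative_reach(spread_rate: float, steps: int, seed: int = 1) -> list[int]:
    """Cumulative audience if each newly exposed user passes the story
    on to `spread_rate` further users per step, on average."""
    total, frontier = float(seed), float(seed)
    history = []
    for _ in range(steps):
        frontier *= spread_rate
        total += frontier
        history.append(round(total))
    return history

print("true story :", cumulative_reach(spread_rate=1.3, steps=10))
print("false story:", cumulative_reach(spread_rate=1.8, steps=10))  # novelty/outrage boost
```

Even this modest difference in per-step spread produces more than a tenfold gap in cumulative reach after ten steps, which is why small algorithmic boosts to "engaging" falsehoods have outsized effects.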
While AI has been used to manipulate vox populi, it also holds potential as a force for good. Fact-checking algorithms, AI-driven media literacy programmes, and transparent algorithmic auditing could serve as countermeasures to AI-generated disinformation. One approach is Explainable AI (XAI), which enhances transparency by making algorithmic decisions interpretable and accountable (Doshi-Velez & Kim, 2017). Another involves democratising AI - giving independent researchers, civic groups, and policymakers access to AI tools to analyse and detect disinformation campaigns in real time. Ultimately, societies must shift towards critical digital literacy, equipping individuals with the skills to identify AI-driven manipulation and to engage with digital content more sceptically. Without intervention, the "voice of the people" risks becoming little more than an echo of AI's programmed priorities. If AI is being used to manipulate public opinion, then the solution must lie in reclaiming authentic discourse. Here are some strategies:
- Algorithmic Transparency: Demanding transparency in AI recommendation systems can help mitigate bias and reduce misinformation.
- Digital Literacy: Educating the public on how AI-driven systems manipulate content can create a more discerning audience.
- AI for Truth: Developing fact-checking AI that counters disinformation can help neutralise the impact of AI-generated falsehoods (a minimal sketch of such a flagger follows this list).
- Decentralised Platforms: Moving towards decentralised social networks where AI has less power over content prioritisation can restore authentic discourse.
- Human-Curated Content: Prioritising human editors over AI-driven algorithms in news and information platforms can help filter out synthetic engagement-driven misinformation.
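To illustrate what "AI for Truth" combined with explainability might look like, here is a minimal sketch of an interpretable misinformation flagger. The features, weights, and threshold are hypothetical, not a production system; the point is that a simple linear model can report why it flagged a post, which is exactly the kind of auditability XAI calls for.

```python
# Minimal sketch of an interpretable misinformation flagger. Features,
# weights, and threshold are hypothetical; the point is that the model
# reports *why* it flagged a post, making the decision auditable.

FEATURE_WEIGHTS = {
    "all_caps_ratio": 2.0,         # shouting correlates with sensationalism
    "unverified_source": 1.5,      # outlet has no accuracy track record
    "emotional_words": 1.2,        # outrage-laden vocabulary
    "cites_primary_source": -2.5,  # links to checkable evidence
}

def flag_post(features: dict, threshold: float = 1.0):
    """Return (flagged, score, per-feature contributions)."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in features.items()
        if name in FEATURE_WEIGHTS
    }
    score = sum(contributions.values())
    return score > threshold, score, contributions

post = {"all_caps_ratio": 0.6, "unverified_source": 1.0,
        "emotional_words": 0.8, "cites_primary_source": 0.0}
flagged, score, why = flag_post(post)
print(f"flagged={flagged}  score={score:.2f}")
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:22s} {contribution:+.2f}")
```

Because every contribution is visible, a reviewer can contest an individual weight rather than having to trust an opaque verdict - the transparency property the bulleted strategies above demand.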
In practice, the manipulation is almost undetectable: when some factual information is mixed with a large amount of false information and unscientific deduction, checking the finer details takes real effort. That effort is rarely made, because this form of digital literacy rests on scientific literacy - a weak point of many societies. The result is that human actors and AI have fundamentally altered the nature of "vox populi," shifting it from an organic representation of public sentiment to a manufactured and algorithmically curated construct. Most often, this is sold as "free speech." Yet many do not realise that the content selection arriving in the social media feed is adulterated free speech at best - modified and repackaged by algorithms that gamify the online experience of billions of users.
While AI offers unprecedented tools for knowledge-sharing and connectivity, its unchecked influence over public discourse poses a significant threat to democracy, truth, and social cohesion. If societies wish to reclaim their collective voice, the focus must shift towards technological transparency, ethical AI development, and a renewed emphasis on genuine human dialogue.
If the "voice of the people" can be simulated, manipulated, and amplified by artificial intelligence, one question stands above all: is it truly the vox populi, or merely the echo of an algorithm?
References
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Benkler, Y., Faris, R., & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
Cadwalladr, C. (2019). The great British Brexit robbery: How our democracy was hijacked. The Guardian.
Chesney, R., & Citron, D. K. (2019). Deepfakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753.
Cinelli, M., Quattrociocchi, W., Galeazzi, A., Valensise, C. M., Brugnoli, E., Schmidt, A. L., ... & Scala, A. (2020). The COVID-19 social media infodemic. Scientific Reports, 10(1), 1–10.
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104.
Lanier, J. (2018). Ten arguments for deleting your social media accounts right now. Henry Holt and Company.
Mozur, P. (2018). A genocide incited on Facebook, with posts from Myanmar’s military. The New York Times.
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin UK.
Sunstein, C. R. (2001). Republic.com. Princeton University Press.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151.
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.