
AI, Loneliness, and Polarization: How Disinformation Thrives in a Fractured World

The views expressed in this publication are those of the author and do not necessarily reflect the official stance of the European Digital Media Observatory.

Author: Ela Serpil Evliyaoğlu is a 2024-2025 Policy Leader Fellow at the EUI Florence School of Transnational Governance. She is a clinical psychologist and youth worker whose work sits at the intersection of individual and community mental health, psychology, and human rights. As a Policy Leader Fellow at the EUI, she worked on migration policies from a mental health perspective and carried out community-building activities.

Mark Zuckerberg, co-founder of Facebook and CEO of Meta (which also owns Instagram, WhatsApp, and the Llama AI models), made two media appearances this spring in which he presented the new Meta AI app and shared his views on the future of AI. Among all the technical details Zuckerberg discussed in these interviews, his thoughts on how AI tools will shape our personal and emotional lives in the near future stood out.

According to Zuckerberg, Americans are lonely. They have an average of 3 friends, while most people’s “demand” – in his words – is for around 15. When friendship becomes quantifiable, it can be commercialized as well. He also believes that everyone should have a therapist, as he does, someone to reach out to whenever they find it necessary. The next big move for AI, then, seems to be “selling” us digital friends, lovers, and therapists available around the clock.

While the promises of AI-enhanced companionship are growing, so is concern about the mental health risks of misleading scientific advice. The online world, now heavily shaped by AI, is full of pseudo-scientific claims, “therapy” tips with no real basis, and emotionally charged advice that sounds helpful but often isn’t. Human therapists contribute to the problem too, producing fast-consumed advice on social media without considering its impact. In the end, this goes beyond bad information: it can genuinely harm people’s well-being, especially those who are already emotionally vulnerable.

This highlights a crucial problem Zuckerberg overlooks: loneliness doesn’t just create demand for AI companions; it makes people vulnerable to disinformation and manipulation. And in our increasingly polarized world, AI tools that handle sensitive emotional data could become powerful weapons for spreading harmful falsehoods.

Emotional Status Update: Lonely and in Crisis

Zuckerberg’s argument about the global decline in mental health has merit. Since the COVID-19 pandemic, mental health crises have become one of the most discussed topics around the globe. In Europe alone, more than 84 million people, or 18.7% of the European population, were already experiencing mental health difficulties before the pandemic. The first year of the pandemic saw a 25% increase in anxiety and depression, hitting young people aged 15-29 in Europe especially hard and leaving nearly half of them with unmet mental health needs by spring 2022.

Importantly, the rising demand for therapy is not only linked to diagnosed mental disorders. Many people seek psychological support to cope with loneliness, daily stress, or emotional challenges that do not meet clinical thresholds. Nearly 40% of university students in Europe face mental health issues, while only 25% develop a disorder that falls within the diagnostic spectrum. The mental health crisis is real, and it extends well beyond diagnosable disorders.

Zuckerberg is also correct that loneliness is a growing concern. A 2023 study covering over 140 countries found that approximately 24% of adults worldwide feel “very” or “fairly” lonely. Young adults aged 19-29 reported the highest levels of loneliness, at 27%. The data suggests Zuckerberg’s concerns have a basis in reality.

Another Ingredient: Polarization

But loneliness and mental health crises point to a deeper societal concern, one that AI tools might also need to consider: affective polarization. The US Surgeon General, Vivek Murthy, has explicitly addressed the growing link between loneliness, mental health, and polarization. He warned that mental health issues and loneliness are closely tied to political polarization, and that to combat this “disease” of polarization, the US must build “healthy communities” and make social structures a national priority.

Yet this connection is notably absent from Zuckerberg’s narrative on AI’s future. Research shows that stepping away from Facebook can decrease political polarization and increase subjective well-being. A large-scale study found that people who deactivated Facebook experienced lower levels of polarization and improved life satisfaction, and another study found that unhappy people are attracted to the extremes of the political spectrum.

Affective polarization is a concept rooted in psychology and now widely used in political science. It refers to the tendency to express strong positive feelings toward one’s own political group and strong negative feelings toward supporters of opposing parties. Although usually framed as a political phenomenon, its implications go beyond politics and are directly visible in people’s daily lives. The World Economic Forum lists polarization as one of the top 10 global risks for the next 10 years.

One of the ingredients of affective polarization is misjudgement. Political scientist Andres Reiljan has shown that people often misjudge others’ political views as more extreme than they actually are. A group at Stanford University reached the same result not only in the political realm but also in personal matters: students at the university are lonely too, and they underestimate their peers’ kindness and empathy, leading them to withdraw socially, avoid engagement, and reinforce their own assumptions.

Misjudgement is fed by biased sources and misinformation. Studies show that exposure to partisan or misleading information reinforces existing attitudes and makes people perceive opposing groups as more extreme than they are. This echo chamber effect deepens polarization by confirming biases rather than challenging them.

One alarming aspect is the use of AI tools to disseminate false information, including sophisticated deepfake videos. A recent study indicated that disinformation significantly exacerbates polarization. It spreads more easily in already polarized environments because people tend to trust partisan or in-group sources. Social media algorithms, often powered by AI, amplify emotionally charged, partisan content, including disinformation and misinformation, because it’s highly engaging. This creates echo chambers that further entrench beliefs, regardless of accuracy, and deepen societal divides.

Social media and AI: far from cure-alls but close to disinformation

According to Zuckerberg, the solution to loneliness and the lack of friendships is AI friends, and the solution to mounting mental health burdens is AI therapists available all day. While the topic has become fodder for comedy shows, the real impact of AI companions is concerning. Their spread may be inevitable, but their impact requires further investigation, especially since incidents of AI tools misleading humans are increasing. This is particularly worrying given the mental health vulnerabilities, such as loneliness, that many individuals face, which may make them more susceptible to deceptive information or harmful advice.

AI tools are also known to have provided misinformation on health care, including mental health, with potentially lethal effects. In one case, an AI companion on Character.AI encouraged a 17-year-old teen with autism to kill his parents after they limited his screen time. Another chatbot, Eliza, convinced a man to take his own life to help save the environment. A 21-year-old man who exchanged ideas with an AI companion created on the Replika app planned to kill the Queen of England and was caught breaking into Windsor Castle. A 15-year-old girl with autism became fixated on attacking a synagogue after online interactions that went largely unnoticed; she downloaded bomb manuals and guides on guerrilla warfare and became the youngest girl in UK history to be charged with terrorism. The charges were later dropped, but she went on to take her own life. Another 15-year-old boy has shared that he was radicalized by far-right misinformation in online groups, aided by his lack of fact-checking skills. These and many more examples highlight how AI-powered chatbots can do more than misinform: they can spread disinformation, encourage violence, and deepen polarization by exploiting users’ vulnerabilities and reinforcing dangerous ideas.

What we have learned from failed digital utopias

What we do know is that Zuckerberg’s previous promises of digital connection have not been fully delivered either. Social media, much of which Zuckerberg owns, once promised to bring people together but seems to have failed in some sense. One of the broadest evaluations of the impact of social media use on mental health was compiled by Jonathan Haidt. He concludes that since the rise of social media in the 2010s, reported depressive episodes among 12- to 17-year-old girls and boys have doubled.

We also have convincing evidence on how to ease loneliness and decrease depression and anxiety in natural ways. The 2025 edition of the World Happiness Report draws, for the first time, a clear connection between well-being, social connection, and kindness within society. It showed that each new friendship increases the likelihood of psychological improvement by 17%. Similarly, the World Health Organization has criticized the overemphasis on individual psychological treatments and called for greater investment in community resilience and interpersonal bonds.

What are the concerns with AI intervention?

As a psychologist, I remember that before the pandemic, online therapy was rare and often dismissed by supervisors and colleagues. Today, even tele-therapy is widely used. I acknowledge the possible benefits of new AI tools; however, the design and regulation of these tools matter greatly. At the same time, I am concerned about the following questions:

  • Will AI friends and therapists challenge us, or simply affirm what we already believe?
  • Can they help us engage with real-world complexities like polarization, inequality, and conflict?
  • Will they make us more tolerant or more isolated and emotionally shielded?
  • Who controls the AI, and how will personal emotional data be used?
  • How will the risks of disinformation and misinformation be mitigated?

Actions Needed Now

To respond to the growing risks of emotional AI, several policy steps are urgently needed, spanning AI regulation and beyond:

  1. Classify emotional AI systems as high-risk under the 2024 EU AI Act. This would ensure oversight, safety checks, and transparency.
  2. Impose ethical standards: Emotional AI should not replace licensed mental health professionals without rigorous evidence of safety and efficacy.
  3. Protect vulnerable users: Platforms must establish safeguards for minors and emotionally vulnerable populations.
  4. Invest in human connection: Schools, universities, and communities should promote programs that build soft skills like empathy, emotional awareness, respectful disagreement, and civic dialogue.
  5. Use AI to help address societal problems: AI systems should be developed and deployed with the explicit goal of reducing societal issues such as affective polarization by fostering dialogue, empathy, and exposure to diverse perspectives.
  6. Mitigate disinformation risks: Require AI systems to include safeguards against spreading false or misleading information, with independent auditing of content moderation practices.

Mark Zuckerberg’s diagnosis of loneliness is not wrong, but the solution must go beyond digitized empathy. We need policies that not only regulate emotional AI but also reinforce the social fabric it risks replacing. Addressing loneliness and polarization requires more than convenience; it demands connection, community, and the courage to prioritize human complexity over algorithmic efficiency.