(C) BoingBoing
This story was originally published by BoingBoing and is unaltered.
Digital delusions: The rising phenomenon of AI-induced psychological distress [1]
['Séamus Bellamy']
Date: 2025-07-25
As a fella with mental health issues of his own, I'm not about to make fun of anyone brave enough to work through their problems. But holy crap, it's come to this: there's a support group for individuals suffering from AI Psychosis.
While it hasn't been recognized by any medical body as a formal diagnosis, AI psychosis, which arises from spending too much time engrossed in a relationship with an anthropomorphic AI chatbot, is a growing concern. Because chatbots like ChatGPT tend to mirror the needs and beliefs of their users, it's easy to get sucked into the narratives and statements the AI puts forward, especially if the user is already prone to paranoia or delusions, or has other pre-existing mental health issues.
As Søren Dinesen Østergaard noted back in 2023, talking to a chatbot can feel so real that "…one easily gets the impression that there is a real person at the other end — while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity towards psychosis … the inner workings of generative AI also leave ample room for speculation/paranoia."
Futurism has a fascinating feature on how this still-fresh mental health issue has led a group of individuals to find one another online in search of support, answers, and understanding. The example that stood out for me in the story, as telling of how terrifying this condition can be, focuses on an individual from Toronto:
One early connection included another Canadian, a Toronto man in his late 40s who, after asking ChatGPT a simple question about the number pi, tumbled into a three-week delusional spiral in which the bot convinced him he'd cracked previously unbroken cryptographic secrets and invented mathematical equations that solved longstanding world problems and scientific riddles. These discoveries, ChatGPT told him, made him a national security risk, and the bot directed him to contact security agencies in the US and Canada, ranging from the Central Intelligence Agency to the National Security Agency. Paranoid that he was a risk to global security systems, he did.
Can you imagine living with that? Hopefully, folks can find some relief with the support of their peers.
[END]
---
[1] Url:
https://boingboing.net/2025/07/25/digital-delusions-the-rising-phenomenon-of-ai-induced-psychological-distress.html
Published and (C) by BoingBoing
Content appears here under this condition or license: Creative Commons BY-NC-SA 3.0.