🚨 Did you know that people are being hospitalized due to AI-induced psychosis? Here's why the problem is much worse than most people think:

For every case of full-blown AI-induced psychosis where people lose touch with reality, there are likely hundreds or thousands of "milder" psychologically problematic cases, where people:

- speak for 12 hours a day with their "AI friends," often disrupting their routines and work commitments;
- reject real-world interactions and relationships for not being as satisfying as the AI ones;
- feel like they are in a romantic relationship with an AI chatbot, treating it as a partner and holding high emotional expectations of its outputs;
- start to hold distorted perceptions about themselves (e.g., feeling that they are the next messiah or prophet), fostered by the ongoing endorsement, confirmation, and support of an AI chatbot;
- and so on.

All the examples I cited above would likely raise concern among psychiatrists and psychologists, and the person would likely be advised to seek counseling or a more thorough mental health evaluation.

There are MANY such cases, and they often go unnoticed. I would go as far as to say that they are being normalized: someone with these usage patterns would frame themselves (or be framed by friends and family) as an "AI native" or someone who is just "really into AI."

At this point, it may still feel like a Reddit anecdote. However, usage is growing rapidly (ChatGPT alone has over 700 million weekly active users), and these usage patterns will become a defining element of the new AI-native generation. They will shape how that generation experiences life struggles, self-awareness, friendship, love, etc.

This is a psychological, cultural, and social bomb waiting to explode, and I don't think current AI policy approaches are capable of dealing with the root causes of this type of misalignment.
I will write about the topic in this week's edition of my newsletter, covering the Illinois law banning AI therapy, Sam Altman's recent statements, and potential mechanisms to help mitigate the negative consequences. 👉 To receive this and all my upcoming essays on AI, join my newsletter's 72,500+ subscribers using the link below. 👉 To learn more about AI's legal and ethical challenges, join the 24th cohort of my AI Governance Training in October (link below).
These examples don't just show that an AI conversation can, for some people, drift toward psychosis. They also reveal that many people no longer feel safe being truthful anywhere, sometimes not even at home. Talking to an AI often "feels better" because social and communal dishonesty, along with constant self-censorship, has become the norm. It's the same underlying pattern we see in the onset of psychosis: the tension between the desire for truth and a reality that suppresses it. The question isn't only "what does AI do?" The question is: why are we living in a society where, for so many, a machine is the only space where they can speak their mind without risk? If that's not a warning sign for REAL-WORLD psychosis triggers on a societal scale, I don't know what is.
At what point will it be acknowledged that these chatbots are a public health risk? Right, it won't, since there's no public health in capitalism.
I'm not surprised.
Just a thought: could people have an underlying health condition that just got exacerbated through the use of AI?
In public health terms, abuse leads to addiction, and addiction can create psychosis. Hence, there is a need for advocacy on the ethical use of AI tools. While AI tools are here to stay, it is pertinent to help users avoid AI-tool-acquired addictions that exacerbate mental health problems.
Is it completely AI-induced, or an AI-based exacerbation of an undiagnosed condition or a subclinical prior state or susceptibility?
LLM platforms are basically a mirror, and the way they are used brings out who someone truly is at heart. It's just that now people are really starting to see who is actually who at the level of intent, as opposed to just their words and actions. All it takes is one domino, and everything else reveals itself.
I absolutely agree that the extreme cases that come to the attention of the medical profession and end in hospitalization, etc., will be the tip of a much larger iceberg. And you rightly put quote marks around the "milder" problematic circumstances, as they are clearly very serious in their own right.