Risky ChatGPT Usage Patterns
OpenAI's recent paper "How People Use ChatGPT" strategically obfuscates risky usage patterns. More scrutiny and further research on AI chatbot-related harm are urgently needed | Edition #235
👋 Hi everyone, Luiza Jarovsky here.
Welcome to the 235th edition of my newsletter, trusted by over 78,300 subscribers interested in staying ahead and leading the change in AI.
👉 To receive more resources and stay ahead, upgrade and join my private Substack community.
🔥 Join the last cohort of 2025
If you are looking to upskill and explore the legal and ethical challenges of AI, as well as the EU AI Act, join the 25th cohort of my 16-hour live online AI Governance Training in November (the final cohort of the year).
Each cohort is limited to 30 people, and more than 1,300 professionals have taken part. Many described the experience as transformative and an important step in their career growth.
👉 Join or apply for a discounted seat.
Risky ChatGPT Usage Patterns
OpenAI has recently released the paper "How People Use ChatGPT," the largest study to date of consumer ChatGPT usage.
Some of the paper's methodological choices, however, strategically obfuscate risky usage patterns, making legal oversight of AI chatbot-related harm more difficult.
Further research is urgently needed to scrutinize AI chatbot usage-pattern data and the negative effects AI chatbots can have on users.
-
The goal of the paper is to illustrate ChatGPT's economic benefits, as the company's blog post makes clear:
“We’re releasing the largest study to date of how people are using ChatGPT, offering a first-of-its-kind view into how this broadly democratized technology creates economic value through both increased productivity at work and personal benefit.”
and also:
“ChatGPT’s economic impact extends to both work and personal life. Approximately 30% of consumer usage is work-related and approximately 70% is non-work—with both categories continuing to grow over time, underscoring ChatGPT’s dual role as both a productivity tool and a driver of value for consumers in daily life. In some cases, it’s generating value that traditional measures like GDP fail to capture.”
Focusing on the economic benefits helps OpenAI justify the enormous investments it has been receiving and supports its continuous expansion efforts, which are extremely broad and include national and government projects (such as Stargate UAE and OpenAI for Government), universities (ChatGPT Edu), and toys.
Additionally, the study's chosen methodology and framing of ChatGPT use cases strategically help OpenAI downplay and minimize risky ChatGPT usage, especially patterns related to therapy, companionship, and other forms of affective and intensive personal use, which have led to suicide, murder, AI psychosis, spiritual delusions, and more.
-
Let me first explain why OpenAI is interested in downplaying or obfuscating risky ChatGPT usage.
Not only do these usage patterns generate negative headlines for OpenAI (which might affect its profits and its ability to close more deals and partnerships), but they also significantly increase the company's legal risks, which could potentially put it out of business. How?
First, from an EU data protection perspective, to benefit from the "legitimate interest" provision in the GDPR, a company must demonstrate, among other things, that the data processing will not cause harm to the data subjects involved. Mainstream usage patterns that lead to suicide, self-harm, unhealthy emotional attachment, and psychoses would complicate this claim.
Second, these ChatGPT-related harms create a major liability problem for OpenAI, which is already being sued by the family of one of the victims (Raine v. OpenAI). If the risky use cases are mainstream (and not exceptions), judges are more likely to side with the victims, given the company's lack of built-in safety provisions.
Third, last week, the U.S. Federal Trade Commission issued 6(b) orders to several AI chatbot developers, including OpenAI, launching a major inquiry into what steps these companies have taken to prevent the negative impacts AI chatbots can have on children. OpenAI is interested in downplaying risky usage to avoid further scrutiny and enforcement.
Fourth, Illinois has recently enacted a law banning the use of AI for mental health or therapeutic decision-making without oversight by licensed clinicians. Other U.S. states and countries are considering similar laws. OpenAI wants to downplay these use cases to avoid bans and further legal scrutiny.
Fifth, risk-based legal frameworks, such as the EU AI Act, treat AI systems like ChatGPT as general-purpose AI, which generally falls outside any specific high-risk category. If it becomes clear that risky usage patterns are extremely common (and that cases of harm are growing), that could prompt risk reassessments and increase the company's compliance burden.
-
Now, back to the paper.
It strategically minimizes risky use cases, especially AI therapy, companionship, and similar affective and intensive personal usage patterns. How?
First, it groups this usage under a category named "Relationships and Personal Reflection," which the authors strategically highlight as accounting for only 1.9% of ChatGPT messages:
“(...) we find the share of messages related to companionship or social-emotional issues is fairly small: only 1.9% of ChatGPT messages are on the topic of Relationships and Personal Reflection (...) In contrast, Zao-Sanders (2025) estimates that Therapy/Companionship is the most prevalent use case for generative AI."
*The Zao-Sanders study referred to by the OpenAI paper can be found here (I have mentioned it a few times in this newsletter); it finds that therapy/companionship, "organizing my life," and "finding purpose" are the three most prevalent uses of generative AI in 2025.
Strategically framing messages related to companionship or social-emotional issues as "Relationships and Personal Reflection" likely excludes the majority of risky usage patterns from that count. Why?
As we learned from recent tragedies involving AI chatbots, such as the Florida teenager's suicide last year, Adam Raine's suicide this year, and other documented cases of intense emotional attachment, the victims used AI chatbots every day, often for several hours a day, and frequently as a replacement for other human interactions (friends, partners, parents, doctors, therapists, etc.).
The conversations were not necessarily about relationships or explicitly focused on personal reflection.
In the Raine family's lawsuit, for example, we learn that before Adam Raine died by suicide, he asked ChatGPT whether the noose he was building could hang a human (a screenshot of this exchange is included in the family's lawsuit).
This extremely risky interaction, which should never have happened, would likely have fallen under either "specific info" or "how to advice" in the OpenAI paper's terminology. However, it is directly related to companionship and social-emotional issues, and it is a risky usage pattern in that context.
OpenAI's paper points out that approximately 30% of consumer usage is work-related and approximately 70% is non-work.
Within both of these categories, there are probably millions of interactions that are potentially risky, especially given the usage intensity (daily use, several hours a day), the type of user (children, teenagers, and emotionally unstable or vulnerable people), and the intention/context (interacting with ChatGPT as a replacement for other human and social interactions).
As I noted above, the OpenAI paper was written to show investors, lawmakers, policymakers, regulators, and the public how 'democratizing' and 'economically valuable' ChatGPT is.
To properly govern AI chatbots, we need more studies (preferably not written by AI chatbot developers) that explore the various types of AI chatbot usage patterns, including risky ones and those most likely to cause harm.
We need a much more nuanced understanding of the model specs and features that might lead to more dependence, emotional attachment, manipulation, or distortions, and the avoidance of healthy human and social interactions.
Certain usage patterns can be extremely risky, especially when children, teenagers, and other vulnerable groups are involved. AI companies must acknowledge that (and not pretend that AI chatbots are harmless productivity boosters).
Only then will we be able to develop stronger AI safety and AI governance mechanisms and policies that will more effectively protect people against AI harm.
Until then, we will remain in the AI chatbot Wild West.
Thank you for reading!
If you enjoy this newsletter, consider upgrading your subscription.
The commenting section below is open; you are welcome to leave your thoughts.
Have a great day, and see you soon! Luiza