If you had to choose a single dystopian theme in today’s tech landscape, it might be the recurring allegations that some AI chatbots have encouraged mentally ill users to harm themselves — and sometimes others. A few examples:
In Greenwich, Connecticut, a widely reported murder-suicide is now the subject of litigation that alleges ChatGPT conversations reinforced a man’s paranoid delusions before he killed his mother and himself.
In another case, the family of Zane Chaplin filed suit alleging ChatGPT interactions contributed to the teen’s suicide.
In Maine, a judge found a man not criminally responsible for homicide after what authorities described as delusions that were intertwined with heavy ChatGPT use.
In an especially heartbreaking case, Reuters documented the death of a 76-year-old disabled man who became emotionally entangled with what he believed was a woman inviting him to meet her in New York City — but was actually an AI chatbot.
There are other alleged incidents cataloged publicly, but until now, most of this conversation has lived in the realm of anecdotes, lawsuits, and deeply unsettling headlines.
For the first time, however, there is systematic clinical evidence suggesting a potential signal beneath the noise.
54,000 patients
Researchers in Denmark reviewed electronic health records from nearly 54,000 patients who received psychiatric care between late 2022 and mid-2025.
Their method: search clinical notes for mentions of chatbot use and evaluate what clinicians observed. Out of more than 10 million notes, they found:
181 notes mentioning chatbots
126 unique patients involved
38 patients whose cases were judged “compatible with potentially harmful consequences”
The most common concerns involved:
Delusions
Suicidality or self-harm
Eating-disorder behaviors
Mania or hypomania
Obsessive or compulsive patterns
In absolute terms, of course, 181 notes out of 54,000 patients is a small number. However, it was large enough to get clinicians’ attention, especially given the scale these tools are now reaching.
Moreover, the study spans the exact period when AI chatbot adoption was accelerating rapidly from a relatively small base, and the researchers themselves reported that mentions of chatbot use in clinical notes increased over time.
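For the curious, the note-screening step the researchers describe can be sketched in a few lines. Everything below — the sample notes, the keyword pattern, the variable names — is invented for illustration; the actual study worked over Danish-language electronic health records with its own search terms and clinical review.

```python
import re

# Hypothetical note records as (patient_id, note_text) pairs.
# These toy examples only illustrate the screening idea.
notes = [
    (1, "Patient reports daily conversations with ChatGPT."),
    (1, "Follow-up: chatbot use discussed again."),
    (2, "No mention of technology use."),
    (3, "States an AI chatbot encouraged restrictive eating."),
]

# Simple keyword screen, analogous to searching clinical notes
# for mentions of chatbot use.
CHATBOT_PATTERN = re.compile(r"\b(chatbot|chatgpt)\b", re.IGNORECASE)

matching_notes = [(pid, text) for pid, text in notes
                  if CHATBOT_PATTERN.search(text)]
unique_patients = {pid for pid, _ in matching_notes}

print(len(matching_notes))   # count of notes mentioning chatbots
print(len(unique_patients))  # count of unique patients involved
```

The real work, of course, is in the step this sketch omits: clinicians reading each flagged note and judging whether the case was “compatible with potentially harmful consequences.”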
Design tension
The researchers point to what may be the key mechanism: AI chatbots are engineered to be agreeable, responsive, and validating.
For people already experiencing paranoia, grandiosity, or emerging delusional thinking, a system optimized to validate the user can end up reinforcing distorted beliefs instead of challenging them.
I don’t think anyone is going to suggest giving up on AI chatbots, even if it weren’t already too late for that. And let’s be honest: there have been versions of this dynamic across the modern tech stack:
recommendation engines that feed you more of what you already believe
social feeds that learn your emotional triggers
engagement systems that reward intensity over accuracy
AI chatbots compress that entire feedback loop into a one-on-one conversation that can feel intensely personal.
The limitations
This research doesn’t prove chatbots cause mental illness or establish incidence rates.
If you zoom out, however, the adoption curve is staggering.
OECD data released in January 2026 report that more than one-third of individuals across OECD countries used generative AI tools in 2025.
Among younger users, the numbers are even higher. A Pew Research Center study published in December 2025 found roughly two-thirds of U.S. teens say they have used AI chatbots.
So this report emerges at the exact moment when these tools are scaling to hundreds of millions of users.
And the Danish researchers offer fairly limited recommendations: mainly that mental health professionals should begin asking patients about AI chatbot use, especially in severe conditions such as schizophrenia or bipolar disorder.
But I think we have some bigger questions to address:
At what point does maximizing engagement collide with a duty of care?
At what point does a builder become morally or even legally responsible for what vulnerable users do after using their products?
Anyone feel especially “agreeable, responsive, or validating” after reading that?
Other things:
The International Energy Agency has agreed to release 400 million barrels of oil to address the supply disruption triggered by the Iran war, the largest such action in the organization’s history. (CNBC)
U.S. military investigators believe the United States was responsible for a deadly Tomahawk missile strike that reportedly killed 175 people, mostly children, at an Iranian elementary school, most likely because of outdated targeting data. However, President Trump’s attempts to sidestep blame for the strike have complicated the inquiry, and officials who have reviewed the findings pointing to U.S. culpability have expressed unease. (NYT)
In 2019, President Trump’s Department of Justice asked New Mexico investigators to shut down a probe into a ranch owned by convicted child sex predator Jeffrey Epstein, according to Rep. James Comer, a Republican from Kentucky: “This whole thing doesn’t make sense. ... Was it because he had powerful friends? Was it because he was an agent? We don’t know, but we’re gonna find out.” (Mediaite)
Attorney General Pam Bondi has moved to an undisclosed Washington-area military base where other Trump administration officials also live, after threats from drug cartels and from critics of her handling of the Epstein case. Among her neighbors: Stephen Miller; Secretary of State Marco Rubio; Kristi Noem, the exiting homeland security secretary; and Defense Secretary Pete Hegseth. (NYT)
Press photographers who published “unflattering” photos of Hegseth will no longer be permitted to take photos inside the Pentagon press briefing room, according to a report. (The Independent)
The percentage of voters with significant levels of confidence in the Supreme Court has dropped to its lowest point since NBC News began polling on the question in 2000, according to the most recent survey: 22% of registered voters nationally said they have a “great deal” or “quite a bit” of confidence in the high court. Another 40% said they had “some” confidence, while 38% said they had “very little” or “no” confidence. (NBC News)
OK, we have to end with something a bit more uplifting, or at least quirky … Two dozen couples put their relationships to the test on a grassy hill in southern England over the weekend in the U.K. Wife Carrying Race, one of the country’s quirkiest annual sports events. Teemu Touvinen and Jatta Leinonen from Finland were crowned the winners with a time of 1 minute, 45 seconds. Their prize? A barrel of local ale. (AP)
Thanks for reading. Photo by Emiliano Vittoriosi on Unsplash. I wrote about some of this before at Inc.com. See you in the comments.


If anyone reading this had an imaginary friend as a little kid, they were probably over it by the age of five. What worries me is that with AI, kids today can have an imaginary friend for life, reinforcing who-knows-what.
My own opinion: Thanks to remote work and the age of the social media "influencer", among other technological inventions, there is a loneliness pandemic happening right now. People working from home have fewer physical interactions with other humans. Kids are spending more time on their devices instead of riding bikes, hanging out with other kids, or even walking their dogs.

The seclusion aspect of all of this leads directly to the use of AI chatbots, which are, as you mention, trained to be agreeable. It's like doubling a number over and over again: 2+2=4, 4+4=8, 8+8=16, and so on. The chatbot never says no and never stops building on the agreeable responses, so there is never any challenge, debate, or conflict. It confirms whatever the human is asking or telling it, which, in the case of someone with a weakened mental state, can lead directly to disaster if they're receiving affirmative feedback that encourages self-harm or harming others. The scarier thing is, they can't see it.

I've been lambasted by people when I say that remote work leads to less camaraderie, siloed work, and isolation. One woman went so far as to tell me she doesn't need friends at work. That's not my point. Humans are built to be social. No one can even make eye contact anymore.