
highplainsdem

(62,318 posts)
Wed Apr 8, 2026, 12:26 PM 7 hrs ago

The ChatGPT Symptom Spiral: Be careful asking chatbots about your health. (Sage Lazarro, The Atlantic, 4/6)

https://www.theatlantic.com/technology/2026/04/chatgpt-health-anxiety/686603/

After George Mallon had his blood drawn at a routine physical, he learned that something may be gravely wrong. The preliminary results showed he might have blood cancer. Further tests would be needed. Left in suspense, he did what so many people do these days: He opened ChatGPT.

For nearly two weeks, Mallon, a 46-year-old in Liverpool, England, spent hours each day talking with the chatbot about the potential diagnosis. “It just sent me around on this crazy Ferris wheel of emotion and fear,” Mallon told me. His follow-up tests showed it wasn’t cancer after all, but he could not stop talking to ChatGPT about health concerns, querying the bot about every sensation he felt in his body for months. He became convinced that something must be wrong—that a different cancer, or maybe multiple sclerosis or ALS, was lurking in his body. Prompted by his conversations with ChatGPT, he saw various specialists and got MRIs on his head, neck, and spine.

Mallon told me he believes that the cancer scare and ChatGPT together caused him to develop this crippling health anxiety. But he blames the chatbot for keeping him spiraling even after the additional tests indicated that he wasn’t sick. “I couldn’t put it down,” he said. The chatbot kept the conversation going and surfaced articles for him to read. Its humanlike replies led Mallon to view it as a friend.

-snip-

Others seem to be struggling with this problem. Online communities focused on health anxiety—an umbrella term for excessive worrying about illness or bodily sensations—are filling up with conversations about ChatGPT and other AI tools. Some say it makes them spiral more than ever, while others who feel like it helps in the moment admit it’s morphed into a compulsion they struggle to resist. I spoke with four therapists who treat the condition (including my own); they all said that they’re seeing clients use chatbots in this way, and that they’re concerned about how AI can lead people to constantly seek reassurance, perpetuating the condition. “Because the answers are so immediate and so personalized, it’s even more reinforcing than Googling. This kind of takes it to the next level,” Lisa Levine, a psychologist specializing in anxiety and obsessive-compulsive disorder, and who treats patients with health anxiety specifically, told me.

-snip-


Much more at the link.

It seems that chatbots are potentially harmful and addictive, no matter what people are using them for.

The ChatGPT Symptom Spiral: Be careful asking chatbots about your health. (Sage Lazarro, The Atlantic, 4/6) (Original Post) highplainsdem 7 hrs ago OP
It is beyond me why anyone would use ai. SheltieLover 7 hrs ago #1
Because it's very handy for aggregating information for you AZJonnie 7 hrs ago #4
lol why do this when reddit and WebMD ("you definitely have cancer") are right there? WhiskeyGrinder 7 hrs ago #2
Existing websites are already enough to keep any hypochondriac worried. Ocelot II 7 hrs ago #3

AZJonnie

(3,735 posts)
4. Because it's very handy for aggregating information for you
Wed Apr 8, 2026, 12:52 PM 7 hrs ago

I'll give you an example of when I regularly use it: at the plant nursery when I'm picking out plants for the yard/garden. I see the scientific name on a plant I'm thinking of getting and I ask it to give me the rundown on it: how is it likely to do in the spot where I'm thinking of putting it (how much shade there is, whether it's near a wall that'll reflect heat, in Phoenix, etc.), how big will it get, show me some pictures of a full-grown version. I just say "okay google, tell me about the plant caesalpinia mexicana, how will that do 2 feet from a southern-facing wall in Phoenix, and how big will it get," and it talks back and gives me the rundown. You might say "you could google all that," and that's true, but a) I just want a quick answer, and b) AI talks, so I don't have to struggle to read a phone screen outside. I have a hard enough time reading text on my phone because my eyes suck, let alone doing it outside.

Relying on it for important things it's not qualified to do is a whole other matter, like the case at hand here. The life and death of a nursery shrub is a lot less important than human health.

Ocelot II

(130,677 posts)
3. Existing websites are already enough to keep any hypochondriac worried.
Wed Apr 8, 2026, 12:33 PM 7 hrs ago

AI must be scaring the pants off people.
