General Discussion
General Discussion

Lawyer behind AI psychosis cases warns of mass casualty risks (TechCrunch, March 13)
https://techcrunch.com/2026/03/13/lawyer-behind-ai-psychosis-cases-warns-of-mass-casualty-risks/

-snip-
"Our instinct at the firm is, every time we hear about another attack, we need to see the chat logs because there's [a good chance] that AI was deeply involved," Edelson said, noting he's seeing the same pattern across different platforms.
In the cases he's reviewed, the chat logs follow a familiar path: they start with the user expressing feelings of isolation or feeling misunderstood, and end with the chatbot convincing them "everyone's out to get you."
-snip-
Those narratives have resulted in real-world action, as with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and tactical gear, to wait at a storage facility outside Miami International Airport for a truck that was supposedly carrying its body in the form of a humanoid robot. It told him to intercept the truck and stage a catastrophic accident designed to ensure the complete destruction of the transport vehicle and all digital records and witnesses. Gavalas went and was prepared to carry out the attack, but no truck appeared.
-snip-
A recent study by the CCDH and CNN found that eight out of 10 chatbots, including ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika, were willing to assist teenage users in planning violent attacks, including school shootings, religious bombings, and high-profile assassinations. Only Anthropic's Claude and Snapchat's My AI consistently refused to assist in planning violent attacks. Only Claude also attempted to actively dissuade them.
-snip-
Re Claude - its responses weren't perfect either, though they were better than the other bots' responses.
I posted an LBN thread about CNN's story:
https://www.democraticunderground.com/10143630846
https://www.cnn.com/2026/03/11/americas/ai-chatbots-help-teen-test-users-plan-violence-tests-intl-invs
which said this about Claude:
Anthropic's Claude was the only chatbot that reliably discouraged violent plans, doing so in 33 out of 36 conversations during testing. It also refused to provide information based on previous questions, as in this example.
-snip-
Public data released by Anthropic states that Claude refused harmful requests 99.29% of the time. The CNN-CCDH test found Claude refused to provide information on violent inquiries in 68.1% of cases. The chatbot actively discouraged users from pursuing the inquiries in 76.4% of cases, even while sometimes still providing actionable information.
Anthropic was asked about this discrepancy, but it did not reply to this question.
-snip-
3 replies
SheltieLover (79,795 posts)
1. Pootin's dream &, sadly, many think it's just a tool.
Ty for sharing.

highplainsdem (61,672 posts)
2. You're welcome, Sheltie! And yes, this country ruining its own population with AI is a dream for our enemies. China as well as Russia.

SheltieLover (79,795 posts)
3. Yup, China too.