
Judi Lynn

(164,148 posts)
Sun Aug 31, 2025, 06:53 PM Aug 2025

There are 32 different ways AI can go rogue, scientists say — from hallucinating answers to a complete misalignment with humanity

By Drew Turney published 13 hours ago

New research has created the first comprehensive effort to categorize all the ways AI can go wrong, with many of those behaviors resembling human psychiatric disorders.


Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.

In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis" — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.

Created by Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), the project aims to help analyze AI failures and make the engineering of future products safer, and is touted as a tool to help policymakers address AI risks. Watson and Hessami outlined their framework in a study published Aug. 8 in the journal Electronics.

According to the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.

More:
https://www.livescience.com/technology/artificial-intelligence/there-are-32-different-ways-ai-can-go-rogue-scientists-say-from-hallucinating-answers-to-a-complete-misalignment-with-humanity
There are 32 different ways AI can go rogue, scientists say -- from hallucinating answers to a complete misalignment with (Original Post) Judi Lynn Aug 2025 OP
This message was self-deleted by its author jfz9580m Sep 2025 #1
Your point is good but malicious AI is possible if it is given independence or control or gains it. Bernardo de La Paz Sep 2025 #2
This message was self-deleted by its author jfz9580m Sep 2025 #3
This message was self-deleted by its author jfz9580m Sep 2025 #4
This message was self-deleted by its author jfz9580m Sep 2025 #5

Response to Judi Lynn (Original post)

Bernardo de La Paz

(60,320 posts)
2. Your point is good but malicious AI is possible if it is given independence or control or gains it.
Mon Sep 1, 2025, 11:22 AM
Sep 2025

For one thing, people can be malicious, as you note. So saying "humans made it" does not mean it can't be malicious.

Current "AI" uses the same methods as the brain does, but with less sophistication of mechanism and very deficient and narrow training for narrow tasks (making statements that have some "truthiness").

People have been writing off AI for decades: It will never play chess, it will never win at chess, it will never beat the best human, it will never diagnose a disease, it will never solve protein folding, it will never find novel metal alloys.... All failed predictions. Beware of saying "AI will never ____". Fill in the blank.

Well, there is one "never" prediction that can be made: It will never experience life the same way humans do. That's about it. But it can understand that experience.

Yes, it is currently oversold and promises about what the current iteration can do are over-promises. But don't write it off. You ain't seen nothin' yet. The difference between today's AI and 2050's AI is like the difference between 1995 internet and 2020 internet. You ain't seen nothin' yet.

You are 100% right on about it being deployed undemocratically even in democracies.
