Science
There are 32 different ways AI can go rogue, scientists say, from hallucinating answers to a complete misalignment with humanity
By Drew Turney, published 13 hours ago
New research has created the first comprehensive effort to categorize all the ways AI can go wrong, with many of those behaviors resembling human psychiatric disorders.
Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.
In new research, the scientists set out to categorize the ways AI can stray from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis," a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.
Created by Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), the project aims to help analyze AI failures and make the engineering of future products safer, and is touted as a tool to help policymakers address AI risks. Watson and Hessami outlined their framework in a study published Aug. 8 in the journal Electronics.
According to the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.
More:
https://www.livescience.com/technology/artificial-intelligence/there-are-32-different-ways-ai-can-go-rogue-scientists-say-from-hallucinating-answers-to-a-complete-misalignment-with-humanity
Response to Judi Lynn (Original post)
jfz9580m This message was self-deleted by its author.
Bernardo de La Paz
(60,320 posts)
For one thing, people can be malicious, as you note. So saying "humans made it" does not mean it can't be malicious.
Current "AI" uses the same methods as the brain does, but with less sophistication of mechanism and very deficient and narrow training for narrow tasks (making statements that have some "truthiness" ).
People have been writing off AI for decades: it will never play chess, it will never win at chess, it will never beat the best human, it will never diagnose a disease, it will never predict protein folds, it will never find novel metal alloys.... All failed predictions. Beware of saying "AI will never ____". Fill in the blank.
Well, there is one "never" prediction that can be made: It will never experience life the same way humans do. That's about it. But it can understand that experience.
Yes, it is currently oversold and promises about what the current iteration can do are over-promises. But don't write it off. You ain't seen nothin' yet. The difference between today's AI and 2050's AI is like the difference between 1995 internet and 2020 internet. You ain't seen nothin' yet.
You are 100% right on about it being deployed undemocratically even in democracies.
Response to Bernardo de La Paz (Reply #2)
jfz9580m This message was self-deleted by its author.