'Cognitive Surrender' Is a New and Useful Term for How AI Melts Brains (Gizmodo, 4/5) [View all]
https://gizmodo.com/cognitive-surrender-is-a-new-and-useful-term-for-how-ai-melts-brains-2000742595
Kyle Orland of Ars Technica wrote a blog post about the term cognitive surrender on April 3. Maybe I should have noticed it sooner, since it's been floating around since at least January, when it was, it appears, coined in this context by the Wharton Business School marketing researchers Steven Shaw and Gideon Nave. Their paper is incredibly troubling, and once you read about these findings, the term cognitive surrender will be stuck in your head too.
What Shaw and Nave did was give 1,372 people a test, and access to an AI chatbot for help, with the twist that the chatbot sometimes gave wrong answers. The test was an adapted version of something called a Cognitive Reflection Test, meaning every question was a certain type of brain-buster you've seen before:
-snip-
At any rate, in the part of the study where the subjects were allowed to consult the chatbot, they did so about half the time. When it gave correct answers, they accepted them 93 percent of the time. Unfortunately, when it was wrong, they still accepted its answers 80 percent of the time. And keep in mind, they didn't have to use it at all. They let the bad advice trump their own brains. Even worse, those who used the AI rated their confidence 11.7 percent higher than those who didn't, even when the chatbot was wrong.
-snip-
This isn't the first time the phrase cognitive surrender has existed. The theologian Peter Berger used it in a religious context in the 1990s, but it meant something more like surrendering faith in God to relieve cognitive dissonance. And if you're like me, you've probably noticed that AI-assisted cognitive surrender looks like older forms of mental laziness.
-snip-
I found that on Bluesky, in a message Gizmodo posted an hour ago, when I checked the platform again. But I'd mentioned a similar problem, AI interfering with people's judgment, earlier today in a reply about radiologists and AI, in this thread in LBN:
https://www.democraticunderground.com/10143644307
In reply 31 there, I'd quoted a Forbes article from several weeks ago:
https://www.forbes.com/sites/jessepines/2026/02/23/will-ai-de-skill-doctors-evidence-is-starting-to-trickle-in/
A study found that radiologists' ability to catch AI-generated errors in mammograms correlated strongly with experience. In a simulated scenario where an AI system provided an incorrect suggestion, the rate of correctly read mammograms was 20% for inexperienced radiologists, 25% for the moderately experienced, and 46% for the very experienced.
This raises the specter of what is called never-skilling. If medical trainees rely on AI-generated differentials before wrestling with clinical ambiguity themselves, the scaffolding of diagnostic reasoning that typically emerges during the years of residency training may never fully develop.
That seems like cognitive surrender to AI, too.
'Cognitive Surrender' Is a New and Useful Term for How AI Melts Brains https://gizmodo.com/cognitive-surrender-is-a-new-and-useful-term-for-how-ai-melts-brains-2000742595
— Gizmodo (@gizmodo.com) 2026-04-05T21:45:04.497Z