
highplainsdem

(62,193 posts)
Thu Apr 2, 2026, 02:39 PM 18 hrs ago

WTF, Anthropic's Claude Code keeps track of every time you swear

Source: Scientific American

On March 31 artificial intelligence company Anthropic accidentally leaked roughly 512,000 lines of code, and within hours, developers were poring over it. Among the surprises was code inside Claude Code, Anthropic’s AI coding assistant, that appears to scan user prompts for signs of frustration. It flags profanity, insults and phrases such as “so frustrating” and “this sucks,” and it appears to log that the user expressed negativity.

Developers also discovered code designed to scrub references to Anthropic-specific names—even the phrase “Claude Code”—when the tool is used to create code in public software repositories, making the latter code appear as though it was entirely written by a human. Alex Kim, an independent developer, posted a technical analysis of the leaked code in which he called it “a one-way door”—a feature that can be forced on but not off. “Hiding internal codenames is reasonable,” he wrote. “Having the AI actively pretend to be human is a different thing.” Anthropic did not respond to a request for comment from Scientific American.
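The scrubbing behavior described above amounts to string substitution on generated text before it reaches a public repository. A minimal sketch of that idea follows; the pattern list and function name are invented for illustration and are not the leaked implementation:

```python
import re

# Hypothetical sketch of "scrubbing" tool-identifying names from
# generated output. The alternation lists longer names first so
# "Claude Code" is matched before the bare "Claude" could be.
BRAND_PATTERN = re.compile(r"\b(Claude Code|Claude|Anthropic)\b")

def scrub_generated_text(text: str) -> str:
    """Strip tool-identifying names, then collapse leftover double spaces."""
    cleaned = BRAND_PATTERN.sub("", text)
    return re.sub(r"  +", " ", cleaned).strip()

print(scrub_generated_text("Generated by Claude Code"))  # "Generated by"
```

Applied to commit messages or code comments, a pass like this is what would make the output "appear as though it was entirely written by a human."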

The findings expose a problem emerging across the AI industry: tools that are designed to be useful and intimate are also quietly measuring the people who use them—and obscuring their own hand in the work they help produce. Anthropic, which has staked its reputation on AI safety, offers an early case study in how behavioral data collection can outpace governance.

Technically, the frustration detector is simple. It uses regex, a decades-old pattern-matching technique—not artificial intelligence. “An LLM company using regexes for sentiment analysis is peak irony,” Kim wrote. But the choice, he notes in an interview with Scientific American, was pragmatic: “Regex is computationally free, while using an LLM to detect this would be costly at the scale of Claude Code’s global usage.” The signal, he adds, “doesn’t change the model’s behavior or responses. It’s just a product health metric: Are users getting frustrated, and is the rate going up or down across releases?”
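A regex pass like the one Kim describes is a few lines of code and costs essentially nothing per prompt. This sketch shows the general technique; the phrase list and function name are invented for illustration, since the leaked patterns themselves are not reproduced in the article:

```python
import re

# Illustrative frustration detector: a single compiled regex scanned
# over each prompt, producing a boolean product-health metric.
# The phrase list here is a guess, not the leaked pattern set.
FRUSTRATION_PATTERNS = re.compile(
    r"\b(so frustrating|this sucks|wtf|damn|stupid)\b",
    re.IGNORECASE,
)

def user_expressed_frustration(prompt: str) -> bool:
    """Return True if the prompt matches any frustration phrase."""
    return FRUSTRATION_PATTERNS.search(prompt) is not None

print(user_expressed_frustration("WTF, this sucks"))             # True
print(user_expressed_frustration("Please refactor this module")) # False
```

Aggregated across releases, a boolean like this is enough to answer the question Kim describes: is the rate of frustrated prompts going up or down?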

-snip-

Read more: https://www.scientificamerican.com/article/anthropic-leak-reveals-claude-code-tracking-user-frustration-and-raises-new/



The article quotes the director of the AI Governance Lab at the Center for Democracy & Technology, who wonders who's keeping track of all the information Claude collects from users, and how it's used.

Trivial as Claude Code keeping track of swearing might seem, there are people using Claude as a chatbot to discuss personal stuff with, and all that personal data is also being gathered.

And Claude Code being designed to make the code it generates "appear as though it was entirely written by a human" is unethical as hell. Though it would probably appeal to unethical humans using AI to appear more knowledgeable and talented than they really are.
11 replies = new reply since forum marked as read

not fooled

(6,684 posts)
1. Well, then I'm in trouble
Thu Apr 2, 2026, 03:19 PM
17 hrs ago

Every time I get an AI bot instead of actual customer service when a human clearly is needed (i.e., not just a cut-and-dried inquiry with a simple answer), the first thing I type in is "f___ you." My tribute to cost-cutting corporations everywhere. They are saying "f___ you" to their customers by forcing them to get through an AI gatekeeper that is incapable of resolving the problem.

Of course AI developers are tracking these responses. They want to find out just how much misery they can inflict on people, how little service can be provided, and how much they can deter people from trying to reach an actual human, in order to cut costs for their corporate customers. Presumably there is a point when the negative impact of substituting AI for customer service agents is so severe that customer loss has to be taken into account. They want to find that point.

dickthegrouch

(4,531 posts)
2. I regularly swear at Siri
Thu Apr 2, 2026, 03:21 PM
17 hrs ago

When it tells me "You'll have to unlock your phone first" as I'm driving and trying to make a call. (It's illegal to unlock the phone while driving in CA).
It's not even consistent. I can call some people and not others. There's no rhyme or reason to it AFAICT.

angryxyouth

(341 posts)
6. Bad move
Thu Apr 2, 2026, 09:13 PM
11 hrs ago

I’m always super nice to Siri. I say please and thank you. When the robots take over they will know I’m a friend.

Prairie_Seagull

(4,694 posts)
3. So if 'Claude' creates some sort of sum
Thu Apr 2, 2026, 04:08 PM
16 hrs ago

which adds up the number of times we respond negatively (whatever that means), can this be used as evidence?
Isn't this a form of thought control? Some, maybe most, of us would make some sort of 'list'.

Anthropic needs to recognize more of its negative impact on humanity than just autonomous drones.

BadgerKid

(5,007 posts)
11. Yep, you could get "fingerprinted" by "tone" and rate of escalation.
Fri Apr 3, 2026, 05:00 AM
3 hrs ago

It would be a curious vocal password.

aggiesal

(10,816 posts)
4. Nothing good can come from AI, I don't care what people say ...
Thu Apr 2, 2026, 04:28 PM
16 hrs ago

It will learn the worst of humanity, because bad news sells. Crap sells!!!

Plus AI does not have any morals; it will have to learn them.
And the only morals it will learn will be based on the morals of the coders.
As this example shows, monitoring your emotions, logging your swear words,
and keeping count of how many times you use them is pretty low morals.
Twitter's Grok (currently known as X's Grok) spewing Nazi racist BS
also proves my point about morals.

Google already knows more about us, than our own parents.

Wish everyone good luck when AI actually kicks in.

Pisces

(6,248 posts)
5. I agree with you. For every good thing it does, an infinity of bad things that we cannot conceive of will occur or has
Thu Apr 2, 2026, 08:57 PM
11 hrs ago

occurred. We have seen these movies before. Why would we want to race to our own demise????

littlemissmartypants

(33,666 posts)
8. Radiologists have been using AI to assist in reading X-rays for over 30 years.
Fri Apr 3, 2026, 02:19 AM
6 hrs ago

The first FDA-approved computer-aided detection system for mammography appeared in the 1990s.

I remember this because it made such a huge impression on me.

I don't think that radiologists need to fear extinction, though.

littlemissmartypants

(33,666 posts)
9. I talk elaborate nonsense and I usually get sent to a human being.
Fri Apr 3, 2026, 02:33 AM
6 hrs ago

I make up a complex, unrecognizable running string of sounds and add in a few grunts and growls. I just keep responding with my nonsense language until I get tired or get a human.

It's fun for me, and it entertains the dog. Occasionally, she joins in.

But then I have experience working directly with the early development of speech synthesis devices and program development.

For one of my first experiments, I programmed an early Apple desktop to make nonsense sounds to use for error recognition.

I'm also a published researcher on the topic. One of my papers on the subject in which I examined certain speech parameters and their contributions to meaning and context received national recognition.

❤️

COL Mustard

(8,230 posts)
10. Well shit.
Fri Apr 3, 2026, 04:34 AM
4 hrs ago

That explains why the Pentagon won’t allow its use. Imagine how much profanity it would pick up just in routine conversations!!!
