ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues [View all]
https://arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/
STILL LEAKY AFTER ALL THESE YEARS
ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
Will LLMs ever be able to stamp out the root cause of these attacks? Possibly not.
Dan Goodin Jan 8, 2026 8:00 AM
Theres a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working. Then, researchers devise a simple tweak that once again imperils chatbot users.
The reason more often than not is that AI is so inherently designed to comply with user requests that the guardrails are reactive and ad hoc, meaning they are built to foreclose a specific attack technique rather than the broader class of vulnerabilities that make it possible. Its tantamount to putting a new highway guardrail in place in response to a recent crash of a compact car but failing to safeguard larger types of vehicles.
-snip-
As is the case with a vast number of other LLM vulnerabilities, the root cause is the inability to distinguish valid instructions in prompts from users and those embedded into emails or other documents that anyoneincluding attackerscan send to the target. When the user configures the AI agent to summarize an email, the LLM interprets instructions incorporated into a message as a valid prompt.
AI developers have so far been unable to devise a means for LLMs to distinguish between the sources of the directives. As a result, platforms must resort to blocking specific attacks. Developers remain unable to reliably close this class of vulnerability, known as indirect prompt injection, or simply prompt injection.
-snip-
Much more at the link. No paywall.
One of many reminders over the years that LLMs aren't very secure.
And this latest news about that is especially timely, given that OpenAI is now urging people to trust them with their confidential medical data:
https://www.democraticunderground.com/100220920563