A Family Feud at an Oregon Winery Turns to Vinegar Over A.I. Slop (NYT, 4/17)
https://www.nytimes.com/2026/04/17/us/oregon-winery-ai-legal-fight.html
A Family Feud at an Oregon Winery Turns to Vinegar Over A.I. Slop
She wanted to pry her late mother's vineyard from two of her brothers. Instead, her lawyers were fined nearly $110,000 for citing bogus case law generated by artificial intelligence.
By Evan Gorelick and Anna Griffin
Anna Griffin reported from Ruch, Ore.
April 17, 2026, 5:02 a.m. ET
-snip-
Ms. Couvrette hired a lawyer in California, Steve Brigandi, who agreed to help represent her for free, since Ms. Couvrette's daughter was dating Mr. Brigandi's son, according to a voice mail message that Robert Wisnovsky left his brother Michael.
-snip-
Ms. Couvrette got what she paid for: bogus, A.I.-generated citations started pouring in. Two appeared in a January 2025 filing, then seven in April and 16 more in May, even after the opposing lawyers had pointed out the previous ones.
-snip-
Because of Ms. Couvrette's shared responsibility for the bogus citations, the judge permanently dismissed her case against her brothers. He also fined Mr. Brigandi almost $100,000.
-snip-
Timothy Murphy, an Oregon lawyer hired by Ms. Couvrette to ensure Mr. Brigandi followed local court rules, also faces more than $14,000 in fines for failing to meaningfully participate in the case. It never occurred to him that Mr. Brigandi might have stamped his own name on briefs written by Ms. Couvrette, he said, but it seemed like that's what was happening.
-snip-
According to the article, this might have been the largest monetary penalty yet for AI legal misuse.
Such misuse is on the rise, despite news stories that should have been adequate warning. The NYT mentions a database tracking judges' reprimands for AI misuse, now at more than 1,300 cases, nearly three times the number from five months ago.
And in this case, the AI-using fools kept using AI and citing nonexistent or irrelevant cases even months after their misuse of AI was first pointed out.
I'd guess there's something more going on here than mere stupidity or naivete, especially when the people involved are supposedly well educated.
I think it's likely at least some of these people are addicted to AI use, maybe even to particular chatbots. To them, the chatbot's mistakes really don't matter - especially if they're offered typically contrite chatbot apologies for errors - and they'll continue asking the AI for advice, even as the errors continue, until they slam into a judicial wall or something else goes drastically wrong because of what the AI has told them.
They LIKE the AI. They think it WANTS to be helpful. And it's probably telling its human users just how smart and perceptive they are for using AI this way. If someone criticizes the AI they're using, especially for its errors, they'll probably come up with excuses for its mistakes, even taking the blame themselves. If they hear of anyone else getting wrong answers from that AI, they'll suggest it was that user's fault.
All of which is irrational, but those users are no longer rational. They're AI addicts. Probably unaware they're addicts.
But if anyone is using AI daily and finding excuses for its mistakes, they probably should step away from the chatbot before it gets them into real trouble.