CEO Asks ChatGPT How to Void $250 Million Contract, Ignores His Lawyers, Loses Terribly in Court
Source: 404 Media
A judge ordered the reinstatement of a video game developer after he was fired as part of a scheme cooked up by a CEO using ChatGPT. Facing the possibility of paying out a massive bonus to the developer of Subnautica 2, the CEO of publisher Krafton used ChatGPT to create a plan to take over the development studio and force out its founder, according to court records.
The Monday ruling details the bizarre story. Unknown Worlds Entertainment is the studio behind the 2018 underwater survival game Subnautica. The company has since been working on the sequel, Subnautica 2. In 2021, South Korean publisher Krafton bought Unknown Worlds Entertainment for $500 million and promised to pay out another $250 million if Subnautica 2 sold well enough.
Krafton's internal sales projections for Subnautica 2 looked great, meaning the company would likely be on the hook for the additional $250 million. To avoid paying the developers that bonus, Krafton CEO Changhan Kim turned to ChatGPT for a plan. "As Unknown Worlds prepared to release its hotly anticipated sequel, Subnautica 2, the parties' relationship fractured," the court decision said. "Fearing he had agreed to a pushover contract, Krafton's CEO consulted an artificial intelligence chatbot to contrive a corporate takeover strategy."
-snip-
Kim pressed the chatbot for an answer. At ChatGPT's suggestion, Kim formed an internal task force, dubbed "Project X." The task force's mandate was to either negotiate a deal on the earnout or execute a "Take Over" of Unknown Worlds. "They looked to buy time," court records said. Kim sought ChatGPT's counsel on how to proceed if Krafton failed to reach a deal with Unknown Worlds on the earnout. The AI chatbot prepared a "Response Strategy to a No-Deal Scenario."
-snip-
Read more: https://www.404media.co/ceo-ignores-lawyers-asks-chatgpt-how-to-void-250-million-contract-loses-terribly-in-court/
Some people have to learn the hard way that AI tools like ChatGPT aren't intelligent.
Skittles
(171,372 posts)NEVER a good move
FakeNoose
(41,329 posts)That is no way to run a $500 million company.
displacedvermoter
(4,308 posts)while another guy takes the advice from a virtual wife to kill himself!
We very likely are at the end of times...
highplainsdem
(61,751 posts)do so was was using Gemini, Google's chatbot.
I don't think we're at the end times, but we have greedy and amoral AI bros using constant hype to get gullible people to become addicted to AI tools, thinking they're harmless and fun and helpful.
They're not.
At the very least, they dumb people down. The chatbots are sycophantic and tell users how brilliant they are, including for using AI. They can quickly make gullible users dependent, leaving them feeling they can accomplish little without the chatbot's assistance.
displacedvermoter
(4,308 posts)highplainsdem
(61,751 posts)that effect on AI users first. It was pretty inevitable, with people deciding to let chatbots think for them.
displacedvermoter
(4,308 posts)Talks about a "Dismal Tide" that is overwhelming us, and sees " signs and wonders" pointing to cataclysmic times he feels powerless to stop.
He was talking about violence and crime and kids with green hair who don't say sir or ma'am.
I feel the same way about things like these stupid people with virtual wives and AI confidants.
GenThePerservering
(3,284 posts)about not knowing what that is.
Tim S
(207 posts)highplainsdem
(61,751 posts)many of their workers, so the CEO can save money. Convince a CEO of that, and he's likely to think ChatGPT can replace his legal team, too.
I posted an article a few months back about accountants in the UK now spending much of their time undoing problems business owners created using ChatGPT and other chatbots for accounting.