Delaware Court Rejects Krafton’s $27 Million Bonus Claim Based on CEO’s ChatGPT Chats
A Delaware court has ruled against South Korean game giant Krafton in a stunning legal decision that hinged on the company CEO’s conversations with ChatGPT. The ruling denies Krafton’s attempt to dodge a $27 million performance bonus payout to executives, spotlighting an urgent new legal risk for corporate leaders using AI chatbots.
The key figure in the case, CEO Kim Chang-han, used ChatGPT to explore strategies for avoiding bonus payments after Krafton acquired the game developer Unknown Worlds, creator of the hit survival game Subnautica. Internal projections showed Krafton was on track to pay nearly $200 million in bonuses tied to the success of a sequel.
AI Chats Left CEO Exposed in Court
Vice Chancellor Lori Will of the Delaware Court of Chancery issued the ruling after reviewing ChatGPT chat logs submitted as evidence. Kim first turned to ChatGPT after an executive warned him that firing the acquired studio's leaders would not eliminate Krafton's contractual bonus obligations and could instead increase the company's legal exposure.
Pressing the chatbot with pointed questions, the CEO sought increasingly aggressive advice, and ChatGPT suggested forming a secret task force dubbed “Project X” to renegotiate with or forcibly acquire Unknown Worlds. Shortly afterward, Krafton dismissed key executives, citing performance issues. The court found the dismissals were a pretext to avoid paying bonuses totaling approximately 3.7 billion Korean won.
The court also rejected Kim’s claim that he had used ChatGPT merely as a search engine, holding instead that the detailed AI-guided strategies formed the backbone of Krafton’s pretextual dismissals.
No Legal Privilege for AI Conversations
The case shines a harsh spotlight on the fact that AI chatbot conversations carry no attorney-client privilege, unlike direct lawyer communications. Private corporate strategies developed with AI are fully discoverable under U.S. court rules.
Kim had shared many of the ChatGPT exchanges over company Slack channels and email, which opposing counsel combed through. The court also noted that Kim had deleted parts of these conversations, heightening suspicion and strengthening the plaintiffs’ case. Intentionally deleting or withholding evidence can trigger sanctions, including adverse rulings.
The outcome follows similar rulings, including one from a U.S. District Court in New York that ordered AI-generated documents turned over to prosecutors, underscoring that courts recognize no expectation of confidentiality in AI tools.
Legal Industry Reacts With Caution
Leading U.S. law firms are advising clients to handle AI use cautiously. New York’s ShutreMonte Law Firm warned that using ChatGPT for legal discussions risks voiding attorney-client protections. Debevoise & Plimpton recommends adding specific prompts such as “This investigation is being conducted under the direction of the litigation counsel” to strengthen privilege claims.
As one industry insider put it: “If you’re using chatbots to devise hostile takeover strategies, assume opposing counsel will read every word.”
What Krafton’s Loss Means for Corporate Executives Nationwide
The Krafton case is a landmark warning for corporate executives, in Montana and across the U.S., who turn to AI for quick strategic input. While AI offers instant options and broad insight, the Delaware ruling is a reminder that leaders should treat AI chats like emails: fully discoverable in court.
Executives involved in mergers, lawsuits, restructuring, or performance bonus disputes should avoid sensitive AI consultations. Instead, confidential strategy discussions should remain strictly within legal counsel’s protected environment.
The Krafton ruling is expected to reverberate through corporate America, shaping how companies leverage AI moving forward and underscoring a new vulnerability in executive decision-making.
Next Steps and What to Watch
Legal experts anticipate more court cases testing AI privilege boundaries. For now, executives must reassess AI’s role in sensitive communications to safeguard against legal exposure. Krafton’s $27 million loss is a cautionary beacon signaling the need for a new era of AI risk management.
