Legal Risk

Krafton CEO's Deleted ChatGPT Logs Surface in $250M Lawsuit

What started as using ChatGPT "just like everyone else" became courtroom evidence. Your AI conversations aren't private—they're discoverable, subpoenable, and potentially devastating.

November 24, 2025 · 8 min read · DataFence Team

The Legal Reality:

Krafton CEO Changhan Kim used ChatGPT to "brainstorm ways to avoid paying" a $250 million earnout to Subnautica developers. Despite deleting the conversations, they became central evidence in the ongoing lawsuit—proving that AI chat logs are discoverable, subpoenable, and never truly private.

When Your AI Confidant Becomes a Witness Against You

On the surface, it seemed like a routine business decision. Krafton, the South Korean publisher behind PUBG, had acquired Unknown Worlds Entertainment—the studio behind the popular Subnautica series—in 2021. The acquisition included a $250 million "earnout" bonus tied to the successful release of Subnautica 2.

But when the relationship between Krafton and Unknown Worlds' leadership deteriorated, that earnout became the center of a legal battle that would expose a critical truth about AI tools: your conversations with ChatGPT are not confidential, they're evidence.

The $250 Million ChatGPT Consultation

According to pre-trial briefs filed by lawyers representing Unknown Worlds' three former leaders—co-founders Charlie Cleveland and Max McGuire, and former CEO Ted Gill—Krafton CEO Changhan Kim turned to ChatGPT for strategic advice on how to avoid the earnout payment.

The Allegations

  • Kim used ChatGPT to "brainstorm ways to avoid paying the earnout" after acquiring Unknown Worlds in 2021
  • The AI advised that it would be "difficult to cancel the earn-out"
  • The former executives claim Krafton then fired them and delayed Subnautica 2's release to bypass the contract's earnout clause

The Cover-Up That Failed

Initially, Krafton denied the allegations entirely. But court transcripts tell a different story. When pressed under oath, CEO Changhan Kim admitted to using ChatGPT to discuss the earnout situation.

"Just like everyone else, I am using ChatGPT to get faster answers or responses."

— Changhan Kim, Krafton CEO (Court Transcript)

But here's where the story takes a critical turn: The ChatGPT conversations were not produced during legal discovery. A Krafton representative confirmed they no longer exist. Kim claimed he deleted the logs due to "confidentiality concerns."

Despite the deletion, the fact that these conversations occurred—and their general content—became part of the legal record anyway. Why? Because in modern litigation, AI chat logs are treated like any other business communication: discoverable, subpoenable, and admissible as evidence.

Why AI Conversations Are Legal Liabilities

The Krafton case exposes three critical misconceptions executives have about AI tools:

Misconception 1: "It's Just Between Me and the AI"

ChatGPT conversations are stored on OpenAI's servers. Even if you delete your chat history from your account, OpenAI retains logs for a period for security and compliance purposes, and a legal hold can require it to preserve them indefinitely. Those logs are subject to legal discovery and subpoena.

Misconception 2: "I Deleted It, So It's Gone"

Deleting your local chat history does not erase the conversation from OpenAI's systems. In the Krafton case, even though Kim claimed to have deleted the logs, the fact that the conversations occurred became evidence. In a full-scale legal discovery process, those logs could have been subpoenaed directly from OpenAI.

Misconception 3: "It Was Just Brainstorming"

Intent matters in litigation. Using AI to "brainstorm ways to avoid" contractual obligations demonstrates premeditation. Whether you acted on the AI's advice is irrelevant—the fact that you sought it is evidence of intent.

The Broader Context: Krafton's "AI-First" Strategy

The irony of this situation is striking. Just months before this lawsuit gained attention, Krafton had publicly announced its intention to become an "AI-first" company, investing heavily in artificial intelligence across its operations.

But like most organizations rushing to adopt AI, Krafton appears to have overlooked a fundamental security principle: AI tools are not private workspaces—they're third-party platforms subject to the same legal and regulatory scrutiny as email, Slack, or any other business communication channel.

What Types of Conversations Create Legal Risk?

The Krafton case involves contractual disputes, but AI-related legal exposure extends far beyond employment agreements. Consider what employees across your organization might be discussing with ChatGPT right now:

Legal & Compliance

  • Contract interpretation and loopholes
  • Regulatory compliance strategies
  • Litigation strategy discussions

Financial & Strategic

  • M&A strategy and valuations
  • Pricing strategies and negotiations
  • Confidential financial projections

HR & Employment

  • Termination planning and documentation
  • Performance management strategies
  • Discrimination or harassment issues

IP & Trade Secrets

  • Proprietary algorithms and code
  • Product roadmaps and features
  • Competitive intelligence analysis

Every one of these conversations creates a permanent record that can be subpoenaed, analyzed by opposing counsel, and presented in court. And unlike email or Slack, where you might have retention policies and legal holds in place, most organizations have zero visibility into what employees are sharing with AI tools.

The Discovery Challenge

Here's the legal nightmare scenario: Your organization gets sued. During discovery, opposing counsel requests "all communications related to [topic], including those with AI assistants like ChatGPT, Claude, or Gemini."

Can you produce those records? Do you even know they exist? Most organizations cannot answer these questions because:

  • Employees use personal ChatGPT accounts for work purposes
  • There's no centralized logging of AI interactions
  • IT departments have no visibility into browser-based AI tools
  • Legal teams don't know to request preservation of AI chat logs

The result? Spoliation of evidence claims, adverse inference instructions, and potentially catastrophic legal consequences—all because you didn't know what your employees were sharing with AI tools.

How Organizations Should Respond

The solution isn't to ban AI tools—that ship has sailed. According to recent research, 77% of enterprise employees have copied internal data into generative AI tools, and 82% of AI activity happens through unmanaged personal accounts.

Instead, organizations need a three-pronged approach:

1. Real-Time Monitoring and Prevention

Deploy solutions that monitor what employees upload, type, and paste into AI tools. DataFence provides real-time visibility into AI interactions across all browser-based platforms, allowing you to detect and prevent sensitive data exposure before it occurs.
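To make the mechanics concrete, here is a minimal sketch of how browser-level paste monitoring can work in principle. This is an illustrative example, not DataFence's actual implementation; the host list and the `inspect` policy stub are assumptions invented for the sketch.

```typescript
// Illustrative content-script sketch of paste monitoring on AI chat pages.
// Not DataFence's actual implementation; hosts and policy are invented.

const AI_HOSTS = ["chatgpt.com", "claude.ai", "gemini.google.com"];

type Verdict = "allow" | "block";

// Stand-in policy check; a real deployment would call a policy engine.
function inspect(text: string): Verdict {
  return /\b\d{3}-\d{2}-\d{4}\b/.test(text) ? "block" : "allow"; // e.g. SSN-shaped data
}

document.addEventListener(
  "paste",
  (event: ClipboardEvent) => {
    const host = window.location.hostname;
    if (!AI_HOSTS.some((h) => host === h || host.endsWith("." + h))) return;

    const pasted = event.clipboardData?.getData("text") ?? "";
    if (inspect(pasted) === "block") {
      event.preventDefault();   // keep the text out of the chat input
      event.stopPropagation();
    }
  },
  true // capture phase: run before the page's own handlers
);
```

The same interception point can cover file uploads and typed input via `drop` and `beforeinput` events; the key design choice is inspecting content client-side, before it ever leaves the browser.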

2. Policy-Based Controls

Implement granular policies that warn users when they're about to share sensitive information, or block uploads entirely for highly confidential data. This allows you to balance AI productivity benefits with legal risk management.
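As a rough illustration of what "granular policies" can mean in practice, the sketch below models a policy as an ordered rule list where the first matching detector decides the action. The rule names and patterns are invented for the example.

```typescript
// Illustrative policy engine: ordered rules, first match wins.
// Rule names and patterns are invented for this example.

type Action = "allow" | "warn" | "block";

interface Rule {
  name: string;
  pattern: RegExp;
  action: Action;
}

const RULES: Rule[] = [
  { name: "api-key", pattern: /\b(sk|pk)-[A-Za-z0-9]{20,}\b/, action: "block" },
  { name: "ssn", pattern: /\b\d{3}-\d{2}-\d{4}\b/, action: "block" },
  { name: "confidential-label", pattern: /\bCONFIDENTIAL\b/i, action: "warn" },
];

function evaluate(text: string): { action: Action; rule?: string } {
  for (const rule of RULES) {
    if (rule.pattern.test(text)) return { action: rule.action, rule: rule.name };
  }
  return { action: "allow" };
}

// A labeled document triggers a warning; a credential is blocked outright.
console.log(evaluate("CONFIDENTIAL - Q3 projections attached"));
console.log(evaluate("here is my key: sk-abcdefghijklmnopqrstuvwxyz"));
```

Ordering rules from most to least severe keeps the behavior predictable: hard blocks for data that should never leave the organization, warnings where user judgment is acceptable.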

3. Complete Audit Trail

Maintain comprehensive logs of all AI interactions for legal discovery purposes. When opposing counsel requests AI communications, you need to be able to produce them—or demonstrate that you have controls in place to prevent sensitive information from reaching AI platforms in the first place.
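For a sense of what a discovery-ready log entry needs to capture, here is a hypothetical record format. The field names are invented, but timestamp, corporate identity, platform, the action taken, and a content hash are the properties that make the trail usable by a legal team.

```typescript
// Hypothetical audit record for one intercepted AI interaction.
// Field names are invented; a real schema would be vendor-specific.
import { createHash } from "node:crypto";

interface AuditRecord {
  timestamp: string;      // ISO 8601, for timeline reconstruction
  user: string;           // corporate identity, not the AI account
  platform: string;       // e.g. "chatgpt.com"
  eventType: "paste" | "upload" | "prompt";
  action: "allow" | "warn" | "block";
  contentSha256: string;  // hash, so the log itself doesn't re-expose data
}

function record(
  user: string,
  platform: string,
  eventType: AuditRecord["eventType"],
  action: AuditRecord["action"],
  content: string,
): AuditRecord {
  return {
    timestamp: new Date().toISOString(),
    user,
    platform,
    eventType,
    action,
    contentSha256: createHash("sha256").update(content).digest("hex"),
  };
}

// In practice, records would ship to append-only storage under legal hold.
console.log(record("j.doe@corp.example", "chatgpt.com", "paste", "warn",
  "CONFIDENTIAL - Q3 projections"));
```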

The Krafton Case: Still Unfolding

The lawsuit between Krafton and the former Unknown Worlds executives is ongoing, with a trial expected to proceed. Krafton has framed the ChatGPT allegations as a "distraction" from the former executives' alleged misconduct, which the publisher claims includes "stealing of confidential information and attempts to pursue personal financial gain."

Regardless of the outcome, the case has already made one thing clear: AI conversations are business records, subject to discovery, and potentially admissible as evidence of intent, knowledge, and premeditation.

The Bottom Line:

When your CEO, legal team, or any employee consults ChatGPT about sensitive business matters, they're creating a permanent record that can and will be used against your organization in litigation. The question isn't whether this will happen—it's whether you'll know about it before it's too late.

Protecting Your Organization

Most organizations are flying blind when it comes to AI usage. Employees across every department are uploading files, pasting confidential information, and having detailed conversations with AI tools—creating massive legal exposure that IT and legal teams don't even know exists.

DataFence changes that equation by providing real-time visibility and control over AI interactions across your organization. Our platform:

  • Monitors every file upload, text input, and paste operation to AI platforms
  • Warns users when they're about to share sensitive information
  • Blocks uploads containing confidential data based on your policies
  • Logs all interactions for legal discovery and compliance purposes

The Krafton case proves that AI conversations aren't private—they're evidence. Don't let your organization learn this lesson the hard way.

Protect Your Organization from AI-Related Legal Risk

Schedule a demo to see how DataFence provides real-time monitoring, policy enforcement, and complete audit trails for all AI interactions across your organization. We'll show you how $5 can prevent multi-million-dollar legal exposure from AI conversations becoming courtroom evidence.

About DataFence: DataFence is the leading browser-based data loss prevention solution, protecting Fortune 500 companies from insider threats and data exfiltration. Our AI-powered platform provides real-time visibility and control over AI tool usage, preventing sensitive data exposure and maintaining compliance with legal discovery requirements.

Sources: Information about the Krafton lawsuit is based on publicly available court documents and reporting from PC Gamer. Statistics on AI usage are from recent enterprise security research reports.