The Legal Reality:
Krafton CEO Changhan Kim used ChatGPT to "brainstorm ways to avoid paying" a $250 million earnout to Subnautica's developers. Although Kim deleted the conversations, the fact that they took place became central evidence in the ongoing lawsuit, a stark reminder that AI chat logs are discoverable, subpoenable, and never truly private.
When Your AI Confidant Becomes a Witness Against You
On the surface, it seemed like a routine business decision. Krafton, the South Korean publisher behind PUBG, had acquired Unknown Worlds Entertainment—the studio behind the popular Subnautica series—in 2021. The acquisition included a $250 million "earnout" bonus tied to the successful release of Subnautica 2.
But when the relationship between Krafton and Unknown Worlds' leadership deteriorated, that earnout became the center of a legal battle that would expose a critical truth about AI tools: your conversations with ChatGPT are not confidential; they're evidence.
The $250 Million ChatGPT Consultation
According to pre-trial briefs filed by lawyers representing Unknown Worlds' three former leaders—co-founders Charlie Cleveland and Max McGuire, and former CEO Ted Gill—Krafton CEO Changhan Kim turned to ChatGPT for strategic advice on how to avoid the earnout payment.
The Allegations
- Kim used ChatGPT to "brainstorm ways to avoid paying the earnout" after acquiring Unknown Worlds in 2021
- The AI advised that it would be "difficult to cancel the earn-out"
- The former executives claim Krafton then fired them and delayed Subnautica 2's release to bypass the contract's earnout clause
The Cover-Up That Failed
Initially, Krafton denied the allegations entirely. But court transcripts tell a different story. When pressed under oath, CEO Changhan Kim admitted to using ChatGPT to discuss the earnout situation.
"Just like everyone else, I am using ChatGPT to get faster answers or responses."
— Changhan Kim, Krafton CEO (Court Transcript)
But here's where the story takes a critical turn: The ChatGPT conversations were not produced during legal discovery. A Krafton representative confirmed they no longer exist. Kim claimed he deleted the logs due to "confidentiality concerns."
Despite the deletion, the fact that these conversations occurred—and their general content—became part of the legal record anyway. Why? Because in modern litigation, AI chat logs are treated like any other business communication: discoverable, subpoenable, and admissible as evidence.
Why AI Conversations Are Legal Liabilities
The Krafton case exposes three critical misconceptions executives have about AI tools:
Misconception 1: "It's Just Between Me and the AI"
ChatGPT conversations are stored on OpenAI's servers. Even if you delete your chat history from your account, OpenAI may retain logs for a period for security and compliance purposes, and any retained logs are subject to legal discovery and subpoena.
Misconception 2: "I Deleted It, So It's Gone"
Deleting your local chat history does not immediately erase the conversation from OpenAI's systems. In the Krafton case, even though Kim claimed to have deleted the logs, the fact that the conversations occurred became evidence. In a full-scale legal discovery process, any retained logs could have been subpoenaed directly from OpenAI.
Misconception 3: "It Was Just Brainstorming"
Intent matters in litigation. Using AI to "brainstorm ways to avoid" contractual obligations demonstrates premeditation. Whether you acted on the AI's advice is irrelevant—the fact that you sought it is evidence of intent.
The Broader Context: Krafton's "AI-First" Strategy
The irony of this situation is striking. Just months before this lawsuit gained attention, Krafton had publicly announced its intention to become an "AI-first" company, investing heavily in artificial intelligence across its operations.
But like most organizations rushing to adopt AI, Krafton appears to have overlooked a fundamental security principle: AI tools are not private workspaces—they're third-party platforms subject to the same legal and regulatory scrutiny as email, Slack, or any other business communication channel.
What Types of Conversations Create Legal Risk?
The Krafton case involves contractual disputes, but AI-related legal exposure extends far beyond employment agreements. Consider what employees across your organization might be discussing with ChatGPT right now:
Legal & Compliance
- Contract interpretation and loopholes
- Regulatory compliance strategies
- Litigation strategy discussions
Financial & Strategic
- M&A strategy and valuations
- Pricing strategies and negotiations
- Confidential financial projections
HR & Employment
- Termination planning and documentation
- Performance management strategies
- Discrimination or harassment issues
IP & Trade Secrets
- Proprietary algorithms and code
- Product roadmaps and features
- Competitive intelligence analysis
Every one of these conversations creates a permanent record that can be subpoenaed, analyzed by opposing counsel, and presented in court. And unlike email or Slack, where you might have retention policies and legal holds in place, most organizations have zero visibility into what employees are sharing with AI tools.
The Discovery Challenge
Here's the legal nightmare scenario: Your organization gets sued. During discovery, opposing counsel requests "all communications related to [topic], including those with AI assistants like ChatGPT, Claude, or Gemini."
Can you produce those records? Do you even know they exist? Most organizations cannot answer these questions because:
- Employees use personal ChatGPT accounts for work purposes
- There's no centralized logging of AI interactions
- IT departments have no visibility into browser-based AI tools
- Legal teams don't know to request preservation of AI chat logs
The result? Spoliation of evidence claims, adverse inference instructions, and potentially catastrophic legal consequences—all because you didn't know what your employees were sharing with AI tools.
How Organizations Should Respond
The solution isn't to ban AI tools—that ship has sailed. According to recent research, 77% of enterprise employees have copied internal data into generative AI tools, and 82% of AI activity happens through unmanaged personal accounts.
Instead, organizations need a three-pronged approach:
1. Real-Time Monitoring and Prevention
Deploy solutions that monitor what employees upload, type, and paste into AI tools. DataFence provides real-time visibility into AI interactions across all browser-based platforms, allowing you to detect and prevent sensitive data exposure before it occurs.
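To make the idea concrete, here is a minimal, illustrative sketch of the kind of content scan such a tool runs on intercepted text before it leaves the browser. The detector names and patterns below are hypothetical examples, not DataFence's actual detection logic; a production engine would use far richer, context-aware analysis.

```python
import re

# Hypothetical detectors for a few sensitive-data categories.
# These names and patterns are illustrative only.
DETECTORS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan(text: str) -> set:
    """Return the set of sensitive-data categories detected in the text."""
    return {name for name, pattern in DETECTORS.items() if pattern.search(text)}
```

A browser-level tool would run a check like this on every typed prompt, paste event, and file upload, so sensitive content is flagged before the request ever reaches the AI platform.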
2. Policy-Based Controls
Implement granular policies that warn users when they're about to share sensitive information, or block uploads entirely for highly confidential data. This allows you to balance AI productivity benefits with legal risk management.
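A policy layer of this kind can be sketched as a rule table mapping detected content to an action. The categories, patterns, and actions below are hypothetical illustrations under the assumption of a simple warn/block/allow model, not DataFence's actual rule set.

```python
import re

# Hypothetical policy rules mapping content patterns to an action.
# Categories and patterns are illustrative only.
POLICIES = [
    (re.compile(r"(?i)\b(earnout|term sheet|valuation)\b"), "block"),  # deal terms
    (re.compile(r"(?i)\b(terminat\w+|severance)\b"), "warn"),          # HR topics
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block"),                   # SSN-like IDs
]

def evaluate(text: str) -> str:
    """Return the most restrictive action any rule triggers on the text."""
    actions = {action for pattern, action in POLICIES if pattern.search(text)}
    if "block" in actions:
        return "block"
    if "warn" in actions:
        return "warn"
    return "allow"
```

The design choice here is that block always wins over warn: when a prompt triggers both a high-risk and a medium-risk rule, the user sees the stricter outcome.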
3. Complete Audit Trail
Maintain comprehensive logs of all AI interactions for legal discovery purposes. When opposing counsel requests AI communications, you need to be able to produce them—or demonstrate that you have controls in place to prevent sensitive information from reaching AI platforms in the first place.
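One common way to make such an audit trail defensible is an append-only log in which each record carries a hash of the previous record, so after-the-fact tampering is detectable. This is a minimal sketch under that assumption; the field names are illustrative, not a description of DataFence's log format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_record(log: list, user: str, platform: str,
                        action: str, summary: str) -> dict:
    """Append a tamper-evident record: each entry hashes the previous one."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "platform": platform,
        "action": action,     # e.g. "blocked", "warned", "allowed"
        "summary": summary,   # category of content detected, not the content itself
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record
```

Logging the detected category rather than the raw content keeps the audit trail useful for discovery without the trail itself becoming a second copy of the sensitive data.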
The Krafton Case: Still Unfolding
The lawsuit between Krafton and the former Unknown Worlds executives is ongoing, with a trial expected to proceed. Krafton has framed the ChatGPT allegations as a "distraction" from the former executives' alleged misconduct, which the publisher claims includes "stealing of confidential information and attempts to pursue personal financial gain."
Regardless of the outcome, the case has already made one thing clear: AI conversations are business records, subject to discovery, and potentially admissible as evidence of intent, knowledge, and premeditation.
The Bottom Line:
When your CEO, legal team, or any employee consults ChatGPT about sensitive business matters, they're creating a permanent record that can and will be used against your organization in litigation. The question isn't whether this will happen—it's whether you'll know about it before it's too late.
Protecting Your Organization
Most organizations are flying blind when it comes to AI usage. Employees across every department are uploading files, pasting confidential information, and having detailed conversations with AI tools—creating massive legal exposure that IT and legal teams don't even know exists.
DataFence changes that equation by providing real-time visibility and control over AI interactions across your organization. Our platform:
- Monitors every file upload, text input, and paste operation to AI platforms
- Warns users when they're about to share sensitive information
- Blocks uploads containing confidential data based on your policies
- Logs all interactions for legal discovery and compliance purposes
The Krafton case proves that AI conversations aren't private—they're evidence. Don't let your organization learn this lesson the hard way.
Protect Your Organization from AI-Related Legal Risk
Schedule a demo to see how DataFence provides real-time monitoring, policy enforcement, and complete audit trails for all AI interactions across your organization. We'll show you how $5 can prevent multi-million dollar legal exposure from AI conversations becoming courtroom evidence.
About DataFence: DataFence is the leading browser-based data loss prevention solution, protecting Fortune 500 companies from insider threats and data exfiltration. Our AI-powered platform provides real-time visibility and control over AI tool usage, preventing sensitive data exposure and maintaining compliance with legal discovery requirements.
Sources: Information about the Krafton lawsuit is based on publicly available court documents and reporting from PC Gamer. Statistics on AI usage are from recent enterprise security research reports.
Frequently Asked Questions
What is cloud data loss prevention for AI chatbots like ChatGPT?
Cloud data loss prevention for AI chatbots is a security technology that monitors and blocks sensitive data from being uploaded to cloud-based AI services like ChatGPT, Claude, or Gemini. Cloud data loss prevention works at the browser level, intercepting text inputs, file uploads, and paste operations before they reach AI platforms. This technology is critical because once data is submitted to ChatGPT or similar cloud AI tools, it becomes part of OpenAI's systems and may be retained for security, compliance, and training purposes. Cloud data loss prevention for AI chatbots addresses the unique challenge that AI conversations create permanent records that are discoverable in litigation, as the Krafton CEO case demonstrates. Effective cloud data loss prevention solutions scan content in real-time for sensitive patterns like customer data, financial information, legal strategies, trade secrets, and source code, blocking or warning users before data leaves the organization's control and becomes evidence in potential lawsuits.
How does cloud data loss prevention prevent ChatGPT legal risks?
Cloud data loss prevention prevents ChatGPT legal risks by providing real-time monitoring and blocking of sensitive conversations before they create discoverable evidence. When executives or employees use ChatGPT to discuss contractual strategies, M&A planning, termination decisions, or other sensitive business matters, cloud data loss prevention intercepts those conversations and either blocks them entirely or warns users about the legal exposure they're creating. The Krafton case illustrates why cloud data loss prevention is essential: CEO Changhan Kim's deleted ChatGPT conversations about avoiding a $250 million earnout payment became central evidence in litigation, despite his attempts to delete them. Cloud data loss prevention would have prevented those conversations from ever reaching OpenAI's servers, eliminating the legal risk entirely. Cloud data loss prevention also maintains audit trails showing what was blocked and when, providing legal teams with documentation that the organization has controls in place to prevent sensitive information from reaching third-party AI platforms. This creates a defensible position in litigation: either the sensitive conversations never happened (because they were blocked), or if they did happen, the organization can produce complete records for legal discovery.
What is data leakage protection and how does it differ from traditional DLP?
Data leakage protection is a proactive security approach focused on preventing sensitive information from escaping organizational control through any channel, including modern cloud services that traditional DLP tools miss. Data leakage protection differs from traditional data loss prevention in scope and methodology: traditional DLP typically monitors email and file transfers through corporate networks, while modern data leakage protection operates at the browser level to catch data leaving through web-based applications, AI chatbots, personal cloud storage, and SaaS platforms. Data leakage protection is critical for AI tools because 77% of enterprise employees copy internal data into generative AI using personal accounts that bypass traditional DLP. The Krafton CEO case illustrates a classic data leakage scenario: sensitive contractual strategy discussions leaked to ChatGPT through a browser interaction that traditional network-based DLP would never see. Effective data leakage protection solutions like DataFence monitor all browser-based data transmission regardless of destination, identifying sensitive content through contextual analysis rather than just keyword matching, and providing policy-based controls that balance productivity with protection.
Are ChatGPT conversations discoverable in legal proceedings?
Yes, ChatGPT conversations are discoverable in legal proceedings and treated as business records subject to the same discovery rules as emails or Slack messages. The Krafton CEO case makes this concrete: Changhan Kim's deleted ChatGPT conversations about avoiding a $250 million earnout payment became central evidence despite his attempts to erase them. ChatGPT conversations are discoverable because: (1) OpenAI retains chat logs on its servers for security and compliance purposes, potentially even after users delete their local chat history, (2) these logs are subject to legal subpoenas just like any third-party business records, (3) the content of AI conversations can demonstrate intent, knowledge, and premeditation in litigation, and (4) courts increasingly recognize AI chat logs as relevant evidence in contract disputes, employment cases, and intellectual property litigation. Organizations must understand that every ChatGPT conversation creates a record that can be requested during discovery with language like 'all communications related to [topic], including those with AI assistants.' Failing to preserve or produce these records can result in spoliation of evidence claims, adverse inference instructions, and potentially catastrophic legal consequences.
Can deleted ChatGPT conversations be recovered for legal discovery?
Deleted ChatGPT conversations can often still be reached for legal discovery, because deleting chat history from your account does not immediately purge it from OpenAI's systems. The Krafton CEO attempted to delete his ChatGPT conversations about the earnout strategy 'due to confidentiality concerns,' yet the conversations still became evidence in the lawsuit. Deleted ChatGPT conversations remain a discovery risk because: (1) deleting chat history from your ChatGPT account removes it from your view, not necessarily from OpenAI's servers, (2) OpenAI typically retains deleted conversation logs for a period for security monitoring and abuse prevention, and potentially longer when a legal hold applies, (3) legal subpoenas can compel OpenAI to produce any retained logs during discovery, and (4) even if the exact chat logs are no longer available, testimony about their existence and content (as in the Krafton case) can still become evidence. Cloud data loss prevention prevents this scenario entirely by blocking sensitive conversations before they reach OpenAI's servers. Once data is submitted to ChatGPT, organizations lose control over it: deletion is not true erasure, and legal discovery can resurrect conversations users believed were gone. The only reliable protection is prevention through real-time monitoring and blocking.
What types of AI conversations create the most legal risk?
AI conversations create the most legal risk when they involve contractual strategy, employment decisions, litigation planning, or confidential business information that demonstrates intent or knowledge. The Krafton case exemplifies high-risk conversations: using ChatGPT to 'brainstorm ways to avoid' a contractual obligation showed premeditation that became central evidence in litigation. The highest-risk AI conversation categories include: (1) Contract interpretation and strategies to avoid obligations or find loopholes, creating evidence of bad faith, (2) Employment termination planning or performance management strategies, which can become evidence in wrongful termination or discrimination cases, (3) M&A strategy and valuation discussions, exposing sensitive financial information and negotiation positions to discovery, (4) Litigation strategy or evidence evaluation, which may waive attorney-client privilege if done outside proper legal channels, (5) Regulatory compliance workarounds or risk assessment of non-compliant actions, demonstrating knowledge of violations, and (6) Intellectual property discussions including trade secrets, proprietary algorithms, or confidential product plans. Any AI conversation where an employee is essentially asking 'how can we get away with this' or 'what are the loopholes in this requirement' creates discoverable evidence of intent that prosecutors and opposing counsel will use to establish bad faith, knowledge of wrongdoing, or premeditation.
How does DataFence prevent Krafton-style ChatGPT legal exposure?
DataFence prevents Krafton-style ChatGPT legal exposure by intercepting sensitive conversations before they reach AI platforms, eliminating the creation of discoverable evidence. When a CEO or executive attempts to use ChatGPT to discuss contractual strategies, M&A planning, termination decisions, or other legally sensitive topics, DataFence detects the sensitive content and blocks the transmission before it leaves the browser. This approach addresses the exact scenario that created legal problems for Krafton CEO Changhan Kim: his ChatGPT conversations about avoiding the $250 million earnout would have been blocked by DataFence before ever reaching OpenAI's servers, preventing the creation of discoverable evidence entirely. DataFence provides layered protection: (1) Real-time content analysis that identifies legally sensitive discussions based on context and keywords, (2) Policy-based blocking that prevents transmission of contractual information, financial strategies, or termination planning, (3) User warnings that explain why the action is being blocked and suggest approved alternatives, (4) Complete audit trails showing what was blocked and when, demonstrating to courts that the organization has controls preventing sensitive AI usage, and (5) Legal team visibility into attempted high-risk AI usage, enabling proactive intervention. The key insight from the Krafton case is that deletion doesn't protect you—prevention does. DataFence ensures sensitive conversations never happen in discoverable channels in the first place.
Should organizations ban ChatGPT to prevent legal risks?
No, banning ChatGPT is neither effective nor necessary to prevent legal risks like those in the Krafton case. Organizations should not ban ChatGPT because: (1) Bans are unenforceable—77% of enterprise employees already use AI tools through personal accounts that bypass corporate restrictions, (2) Blanket bans eliminate legitimate productivity benefits while forcing usage underground where it's completely invisible to security teams, (3) Employees will find alternative AI tools that are equally unmonitored, playing whack-a-mole rather than solving the underlying problem, and (4) The business reality is that AI tools provide competitive advantages and banning them puts organizations at a disadvantage. Instead of banning ChatGPT, organizations should implement cloud data loss prevention that allows safe usage while preventing legal exposure. DataFence enables organizations to: allow ChatGPT for general business use like writing assistance, research, and productivity, while automatically blocking conversations about contracts, M&A, terminations, litigation, or other legally sensitive topics; provide user education at the point of action, explaining why certain content creates legal risk; maintain compliance and audit trails for legal discovery purposes; and balance AI productivity benefits with legal risk management. The Krafton CEO could have used ChatGPT safely with proper controls—the problem wasn't the tool, it was the unmonitored, uncontrolled usage that created discoverable evidence.