AI chatbot Lee Luda popular for sounding natural, until users realized it used data from private chats
Published: 17 Jul. 2025, 07:00
Lee Luda, an AI chatbot developed by Scatter Lab, has the persona of a woman born in 2002, according to its developers. [SCATTER LAB]
If an AI chatbot uses your chat history without consent, can you make it pay up?
You can in Korea, for now. A district court has ordered a startup to pay damages to victims whose private messages were used without proper consent to train an AI chatbot, signaling that data privacy could become a critical fault line in the burgeoning AI sector.
What happened?
On June 12, the Seoul Eastern District Court’s Civil Division partially ruled in favor of 246 plaintiffs who sued Scatter Lab, the developer of the AI chatbot Lee Luda, over personal data leaks. The court awarded damages ranging from 100,000 won ($72) to 400,000 won each, depending on the severity of the privacy violations.
The court found that 26 victims whose personal information had been exposed were entitled to 100,000 won for mental distress. Another 23 people whose sensitive personal information was leaked were granted 300,000 won. The court ordered 40 victims who suffered both types of breaches to receive 400,000 won each.
Under Korean law, sensitive personal information refers to data that could cause significant invasions of privacy.
Scatter Lab announced on Oct. 25, 2022, that Lee Luda 2.0 would officially launch on its platform Nutty on Oct. 27, 2022. [SCATTER LAB]
What did Lee Luda do?
Lee Luda, an AI-powered chatbot launched by Scatter Lab in 2020, gained rapid popularity for its seemingly natural conversations with users. But it quickly came under fire after users found that the company had trained the bot on data drawn from its other services, including “Science of Love” and “Text At,” apps that analyze users’ KakaoTalk conversations to offer psychological insights.
Users argued that Scatter Lab never informed them properly that their private chat logs would be repurposed to develop AI models, and pointed to indications that personal and sensitive information had been leaked both inside and outside the company.
In April 2021, they filed a lawsuit seeking compensation, claiming the company mishandled both personal and sensitive information.
'No clear consent given'
The judges concluded that Scatter Lab used portions of data from around 600,000 users, including roughly 9.4 billion lines of messenger conversations, to train Lee Luda. They said the company failed to clearly inform users or secure valid consent for such usage, violating the Personal Information Protection Act in effect at the time.
“It is reasonable to conclude that Scatter Lab did not provide these victims with a clear explanation of consent requirements, nor did it secure actual consent for processing their data,” the court stated in its ruling.
Ha Jung-rim, an attorney at Law Firm Taelim who represented the plaintiffs, said, “This is the first damages ruling in Korea against an AI developer that used personal data for machine learning without explicit consent.
"It’s also meaningful because the court distinguished between sensitive information and general personal data, awarding up to 400,000 won for compounded harm,” she added.
Tech firm Scatter Lab's logo [SCATTER LAB]
Scatter Lab tried to argue that the data had been sufficiently pseudonymized and that its AI training qualified as “scientific research,” which would exempt it from the need for explicit consent under Korean privacy laws.
The court rejected this, finding that the data was not properly anonymized and that Lee Luda’s development could not be classified as scientific research.
Kim Borami, an attorney specializing in data protection, noted that “whether a project counts as scientific research and how explicitly a company secures consent will become key standards in AI development."
"Since this case involved sensitive data processed without consent, it could even trigger criminal penalties,” Kim added.
What’s next?
Scatter Lab has appealed the ruling, so a final decision will rest with a higher court. Even so, many in the tech industry expect that the case will pressure AI companies to be far more rigorous about securing user permission.
An official at a domestic AI firm said the industry would likely have to embrace stricter ethical norms, adding, “Getting explicit consent from data owners will become more critical, inevitably adding operational burdens for AI companies.”
Scatter Lab's AI chatbot Lee Luda is shown in this post uploaded to an Instagram page for the Lee Luda persona. [SCREEN CAPTURE]
Translated from the JoongAng Ilbo using generative AI and edited by Korea JoongAng Daily staff.
BY KIM NAM-YOUNG, KIM SEONG-JIN [[email protected]]