Could you be fined for AI content? What to know about Korea's latest technology law.



An AI warning sign [GETTY IMAGES BANK]




[EXPLAINER]
 
Korea is now the first country to enforce a comprehensive, nationwide artificial intelligence law. The measures took effect on Thursday, but uncertainty remains over how the law's provisions will be interpreted and applied.

 
The law, formally titled the Framework Act on the Development of Artificial Intelligence and the Establishment of Trust, was passed by the National Assembly in December 2024. It aims to protect human rights and dignity by regulating AI-generated content while promoting what the government calls the sound development of the industry.
 

The legislation includes a one-year grace period intended to give companies time to adjust before penalties are imposed.
 
The move comes as governments around the world race to regulate the rapid advance of AI. The European Union adopted its own AI Act in 2024, and the law entered into force on Aug. 1 of that year, though most of its provisions will be phased in gradually through 2027.
 
Why was the law adopted?
 
Korean officials say the law is meant to strike a balance between fostering innovation and establishing safeguards at a moment of explosive growth in the AI sector. The act lays out principles for AI-related policymaking and imposes obligations on businesses that develop or deploy AI systems.
 
The law also comes as public concern rises over crimes involving manipulated digital content, especially deepfake content. During a nationwide police crackdown from November 2024 to October 2025, authorities recorded 3,411 cyber sexual crime cases, of which 35.2 percent involved deepfake material.
 
A passerby walks past a poster warning of the harms of deepfake content in Daejeon on Aug. 30, 2024. [NEWS1]




What are the main provisions?
 
The law introduces several core elements: transparency requirements for AI-generated content and for so-called high-impact AI systems; penalties for violations; and a requirement that foreign AI companies meeting certain benchmarks designate a domestic representative in Korea to serve as a point of contact with regulators.
 
That last requirement applies only to companies that meet at least one of three thresholds: global revenue of 1 trillion won ($680 million), domestic revenue of 10 billion won or an average of more than one million daily users in Korea. In practice, officials acknowledge that the provision is likely to apply only to a handful of global technology companies, such as Google and OpenAI.
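The three thresholds above can be sketched as a simple check. This is an illustrative outline only, assuming the figures are compared in won as stated; the function name and inputs are hypothetical, and the law's actual applicability test rests with regulators, not with code like this.

```python
# Illustrative sketch of the domestic-representative thresholds.
# Whether the boundaries are inclusive is an assumption here.
TRILLION = 1_000_000_000_000
BILLION = 1_000_000_000

def needs_domestic_representative(
    global_revenue_won: int,
    domestic_revenue_won: int,
    avg_daily_users_kr: int,
) -> bool:
    """A company is covered if it crosses any ONE of the three thresholds."""
    return (
        global_revenue_won >= 1 * TRILLION        # global revenue of 1 trillion won
        or domestic_revenue_won >= 10 * BILLION   # domestic revenue of 10 billion won
        or avg_daily_users_kr > 1_000_000         # more than 1 million daily users in Korea
    )
```

Because the conditions are joined by "or," a company with modest revenue but a very large Korean user base would still be covered.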
 
On the industrial policy side, the law provides a legal basis for government support for AI research and development, assistance for AI adoption and commercialization, support for startups, the promotion of AI convergence across industries, the cultivation of specialized talent and the development of AI data centers.
 
Notably, however, the law does not include provisions specifically addressing the protection of minors who use AI services.
 
Key provision of the AI Basic Act [YUN YOUNG]


 
Who is penalized for failing to disclose AI-generated content?
 
Under the law, penalties apply to providers of AI services rather than end users. This includes foreign companies that offer AI products or services to users in Korea. Individual consumers are not subject to punishment.
 
Using AI tools is not in itself grounds for liability; responsibility rests with AI developers and service operators, not with private users, broadcasters or publishers that rely on those tools.
 
What transparency requirements apply?
 
According to the Ministry of Science and ICT, transparency obligations vary depending on the service environment and the type of AI-generated output.
 
For content delivered within a service platform or used inside a service's user interface, providers are required to let users know that AI was used to produce the content, with both general AI-generated material and deepfake content clearly identified. One example is the Gemini logo displayed on Google's website while users enter prompts. When gamers use an AI-based chat service on a gaming platform, the provider must clearly note that responses are generated by AI. Disclosure may take the form of a logo or explanatory text accompanying the content, advance notice to users that generative AI is being used, or a direct watermark on the output.
 
A screencapture of Google’s Gemini shows its logo and an explicit disclosure of AI use, cited by the Ministry of Science and ICT as an example under the AI Basic Act. [SCREEN CAPTURE]


 
For content distributed outside a platform, meaning content that can be downloaded and shared externally, the requirements differ. General AI-generated material must be disclosed in a way that is perceptible to users, either through visible watermarks or through nonvisual methods such as audio notices or disclosure messages displayed during downloads. For example, if an AI-generated text file from a service such as ChatGPT is shared, the file must explicitly state that the text came from the service, or its metadata must contain a disclaimer. If a nonvisual approach such as metadata is used, the provider should notify users by text or audio during the download that the file was generated by AI.
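The two nonvisual options described above, a statement inside the file itself and accompanying metadata, can be illustrated with a minimal sketch. The function, the metadata field names and the disclosure wording are all hypothetical; the law does not prescribe a specific format.

```python
import json
import pathlib

# Hypothetical disclosure wording; the act does not mandate exact phrasing.
DISCLOSURE = "This text was generated by an AI service."

def export_with_disclosure(text: str, path: pathlib.Path) -> None:
    """Write an AI-generated text file with both disclosure mechanisms."""
    # Option 1: the file itself explicitly states the text came from an AI service.
    path.write_text(DISCLOSURE + "\n\n" + text, encoding="utf-8")
    # Option 2: a sidecar metadata file flags the content as AI-generated
    # (field names here are illustrative, not prescribed by the law).
    meta = {"ai_generated": True, "generator": "example-ai-service"}
    path.with_suffix(".meta.json").write_text(json.dumps(meta), encoding="utf-8")
```

Under the ministry's guidance as described above, a provider relying only on the metadata route would additionally need to tell the user, by text or audio at download time, that the file was AI-generated.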
 
Deepfake content distributed externally must be labeled in a manner that is clearly recognizable to humans. In cases involving artistic or creative works, the law allows for alternative methods that do not interfere with exhibition or appreciation.
 
For deepfake content distributed externally, rules are medium-specific. Audio content must include an announcement at the beginning indicating that it was generated by AI. Images must display a visible watermark, such as a logo. Video content must carry a watermark throughout its entire playback.
 
What qualifies as 'high-impact AI'?
 
The law defines high-impact AI as systems that are likely to have a significant effect on human life, physical safety or fundamental rights. It applies to systems used in at least 10 designated areas, including health care, energy, drinking water, nuclear power, criminal investigations, hiring, loan screening, transportation, public services and education.
 
Examples cited by the government include vehicles equipped with Level 4 fully autonomous driving technology or higher.
 
As debate has grown over how the designation should be applied, government officials have emphasized that the definition does not extend to AI systems in which humans retain meaningful control over final decisions. At the current level of technological development, this means that only systems such as fully autonomous driving technologies and certain hyperscale AI models would be classified as high-impact, while AI tools used to support decision-making, such as in loan approvals, would generally fall outside the scope if a human makes the final determination.
 
To address uncertainty among businesses, the government has said it plans to establish an “AI Basic Act support desk” to provide consultations and guidance on whether systems qualify as high-impact AI and on how companies should meet their compliance obligations.
 
Under Article 33 of the new law, business operators are required to review in advance whether their AI systems fall under the high-impact category and, if necessary, to ask the ministry for confirmation.
 
What obligations do operators of high-impact AI face?
 
Operators of high-impact AI systems are required to establish and operate risk management plans and implement measures to protect users. They must also develop explanations that describe, to the extent technically feasible, the final outcomes produced by AI systems, the main parameters used to generate those outcomes and an overview of the training data involved.
 
In addition, companies must assign human personnel to oversee and manage high-impact AI systems, prepare and retain documentation verifying the safety and reliability measures they have taken and comply with any additional requirements deliberated and approved by the government’s AI oversight committee.
 
Lawmakers pass the revised AI law at the National Assembly in Yeouido, western Seoul, on Dec. 30, 2025. [YONHAP]




How will the government enforce the law?
 
The Ministry of Science and ICT is authorized to conduct fact-finding investigations if violations are reported or suspected. These may include requests for documents or on-site inspections by public officials. Companies that refuse to cooperate may face administrative fines.
 
To minimize confusion during the initial rollout, the government has said that such investigations will be suspended during the one-year grace period. Even after that period ends, officials say enforcement will be kept to a minimum and will be reserved for exceptional cases involving serious social harm, such as loss of life or major human rights violations.
 
Is the law ready to work as intended?
 
Amid lingering confusion over how the law will be implemented, experts say the framework will require further refinement.
 
“It is true that multiple revisions are needed,” said a source familiar with the discussions who wished to remain anonymous, noting that the presidential National AI Strategy Committee has already proposed a series of improvements.
 
Those recommendations include easing certain obligations on AI service operators, particularly during the operational phase, and further differentiating compliance requirements based on how AI systems are used in practice.
 
Business preparedness for the AI law [YUN YOUNG]


 
The committee has also called for the creation of a cross-ministerial coordination mechanism to encourage ministries overseeing sectors designated as high-impact AI areas to revise their existing laws in line with the AI transition. Once those revisions are completed, the Ministry of Science and ICT would review additional enforcement decrees under the AI Basic Act to eliminate overlapping regulations, according to the proposal.
 
Industry preparedness remains limited. Only 2 percent of 101 Korean AI startups surveyed last month said they had established a substantive compliance system to respond to the new law, according to a report by the nonprofit Startup Alliance.
 
What penalties apply?
 
Violations of transparency requirements, failures to designate a domestic representative when required or noncompliance with government investigations may result in administrative fines of up to 30 million won. No penalties will be imposed during the grace period.

BY CHO JUNG-WOO [[email protected]]