Korea must lead in setting global AI norms

Kim Sok-chul
 
The author is a former president of the Korea Institute of Nuclear Safety (KINS).
 
 
The Korean government has begun in earnest to develop its own foundation model, referred to as “sovereign AI” — a domestically built large-scale artificial intelligence system intended to reduce reliance on foreign platforms. The Ministry of Science and ICT recently selected five “national teams” out of 15 applicants to lead the project. Once complete, the technology will be released as open source for public use. The plan envisions securing AI sovereignty and positioning Korea among the world's top three AI powers.
 
Yet technology alone cannot guarantee AI sovereignty. As artificial intelligence exerts an ever more disruptive influence, it inevitably reshapes the international order, and participation in that order requires adherence to shared norms. No matter how advanced a system may be, its value is diminished if it falls outside the framework of global rules.
 
Deputy Prime Minister and Minister of Economy and Finance Koo Yun-cheol announces the 15 leading projects for national AI transformation during a joint briefing with related ministries at the government complex in Jongno District, central Seoul, on Aug. 22. [NEWS1]

 
Rules often lag behind technology, but they define the boundaries of power. Nuclear technology, developed more than 70 years ago, remains governed by the Nuclear Nonproliferation Treaty (NPT). The reason the United States and a few other powers continue to hold nuclear dominance today lies in their grip on that regime.
 
AI shares the dual nature of nuclear energy — an innovation that also poses threats. AI-driven weapons are changing battlefields, while large-scale disinformation campaigns exploit its reach. Even designs for biochemical weapons can be generated with a few lines of code. Despite these dangers, an international framework to regulate AI remains stalled due to conflicting interests among major powers.
 
For Korea, this paralysis could be an opening. If the country advances not only its technology but also an international regime that embeds safety, ethics, nonproliferation and export controls, it could become a rule setter in the AI domain.
 
The United States and the European Union, considered AI leaders, have so far tailored rules to their own interests. The EU has legislated risk-based assessments, transparency requirements, human oversight and data governance under its Artificial Intelligence Act. It has also classified AI software and specialized semiconductors with military potential as strategic goods for export control. The underlying principle is that risks must be managed at the design stage, not after deployment. Without similar steps, Korea could find itself squeezed between U.S. export controls, EU high-risk certification and China’s data sovereignty rules — reduced to a passive rule-taker.
 


 
The government must therefore complement technology development with strategic regulation. Alongside work on Korean AI algorithms and chips, a multidisciplinary approach integrating ethics, humanities and security from the outset is essential. Legal frameworks should institutionalize safety and ethics, prevent military misuse and oversee strategic materials. Establishing a dedicated regulatory body should also be considered.
 
Korea could further enhance its position by leading global partnerships on AI rules and joint guidelines. A “Seoul AI Safety Declaration” addressing security, misuse and nonproliferation would send a strong signal. Hosting an international body to oversee AI risk management, headquartered in Korea, would elevate the country as a supplier of norms and a provider of a trusted “K-AI Safe” label.
 
Norm leadership should be seen not as a cost but as an investment and insurance for national security. A Korean framework that incorporates risk management from the research and development stage would create the rare global value of “trustworthy AI.” This aligns with the contemporary trend in which regulation becomes the market standard, as seen in carbon neutrality and advanced biotechnology.
 
This illustration photograph shows screens displaying the logo of DeepSeek, a Chinese AI company that develops open-source large language models, and the logo of OpenAI's artificial intelligence chatbot ChatGPT on Jan. 29. [AP/YONHAP]

 
The global consensus is moving toward the view that “unsafe AI is not AI.” Just as the five official nuclear powers under the NPT control the fuel cycle and international order, those who design and lead AI ethics, safety and transparency will dominate its future. Korea’s competitiveness will hinge on whether it sets the rules or follows those drawn by others.
 
Imagination advances technology, but rules connect that technology to people and markets. Korea must now carve its name into the coordinates of an “innovation power that manages risk.” With the world worried about gaps in AI governance, now is the time to act.


This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.