LG AI Research unveils EXAONE 4.5 AI model to compete with ChatGPT, Claude
Published: 09 Apr. 2026, 18:12
Lim Woo-hyung, the co-head of LG AI Research, delivers a presentation on his company's EXAONE model during a press conference in Barcelona, Spain, on March 2. [JOINT PRESS CORPS]
LG AI Research unveiled a new artificial intelligence model on Thursday designed to understand and reason across both text and images as competition in multimodal AI intensifies across the world.
The model, EXAONE 4.5, builds on the institute’s work on the series since the release of EXAONE 1.0 in 2021. It can read documents that combine written and visual elements — such as design blueprints, financial statements and contracts — and interpret them in context.
That capability could prove particularly useful in industrial settings, where complex documents often require manual review. By analyzing such materials quickly and holistically, the model may help improve efficiency in tasks that have long depended on human oversight.
LG AI Research said EXAONE 4.5 performed competitively against leading global models across a range of benchmarks.
In tests of visual processing and reasoning, it outperformed systems such as GPT-5 mini and Claude Sonnet 4.5. It scored 77.3 on evaluations in science, technology, engineering and mathematics, and surpassed Google’s latest model in coding and complex chart analysis, according to the institute.
The release comes as global competition in artificial intelligence accelerates, with analysts saying the results suggest Korean researchers are gaining ground on key performance measures.
A chart released by LG AI Research comparing the company's EXAONE 4.5 model against other models [LG AI RESEARCH]
EXAONE 4.5 has 33 billion parameters — about one-seventh the size of LG’s earlier K-EXAONE model — while maintaining comparable performance in text understanding and reasoning. A smaller model that delivers similar results could reduce both costs and computing demands.
In addition to Korean and English, the model supports languages including Spanish, German, Japanese and Vietnamese, broadening its potential for global use.
LG AI Research said it plans to expand EXAONE’s capabilities to include voice, video and real-world environments, part of a broader push toward what it describes as “physical intelligence.”
The institute also said it is incorporating data from the Northeast Asian History Foundation to better reflect Korean historical and cultural contexts, while applying its own standards to improve reliability.
EXAONE 4.5 has been released on the AI model-sharing platform Hugging Face for research, academic and educational use, expanding access for developers.
“Going forward, we will extend the model’s capabilities to understand voice, video and physical environments, developing it into an AI that can make decisions and act in real industrial settings,” said Lee Jin-sik, head of the EXAONE Lab at LG AI Research.
This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.
BY PARK YOUNG-WOO [[email protected]]