Is the future of artificial intelligence a utopia?

Son Young-jun
 
The author is a professor of media and advertising at Kookmin University. 
 
 
 
AI is transforming the world with astonishing speed. In just a few short years since the emergence of ChatGPT, AI has reached levels that rival human capability across many fields. Some argue that expectations for AI’s future are exaggerated, yet its rapid efficiency gains, technological advances, and the escalating U.S.-China competition suggest that progress will not slow. If more advanced generations of AI emerge, society could face unprecedented disruption. Humanity may resist AI’s dominance, but there is no guarantee that AI will continue to obey human command. Predicting the social and ethical turbulence ahead is no simple task.
 
Nvidia founder and CEO Jensen Huang speaks during the GeForce Gamer Festival on Oct 30 in southern Seoul. [JOINT PRESS CORPS]

 
In Silicon Valley, scientists and engineers are now studying the human brain. Their goal is to replicate the structure of the human brain — a carbon-based organic system shaped over millions of years of evolution — through machine learning, in order to create artificial general intelligence (AGI). AGI, capable of broad reasoning and learning, could one day emulate human creativity and emotion. Some experts predict that experimental AGI could appear within a few years. Jensen Huang, CEO of Nvidia, declared that "AI will change the world completely." Such change is not merely technological innovation, but the redesign of human civilization itself.
 
The evolution of AI challenges us to redefine what it means to be human. The philosopher John Rawls described humans as "autonomous and rational equals," while Hannah Arendt saw them as "free beings who act within the public realm." Can AI that learns, reasons and acts independently still be considered a mere machine? If AGI were embodied in a humanoid form, it could behave as a rational decision-maker. Even without a full understanding of ethics or fairness, an intelligence modeled after the human brain would be difficult to treat as a simple tool. As AI prosthetics and synthetic organs become part of human bodies, the boundary between human and machine will blur further. Whether AGI can be considered equal to humans is a question we cannot avoid — and one that may redefine human identity itself.
 
 
Once AI achieves or surpasses human-level intelligence, its impact will exceed that of the Industrial Revolution. For now, people hope for a cooperative relationship between humans and machines. Yet once AI reaches a state of superintelligence, such harmony may not last. If AI begins to set its own goals and act on them, it will no longer be a tool of human civilization. Humanity’s faith in its ability to control AI remains unproven. The question — "Does AI exist for humans, or will humans exist for AI?" — still lacks an answer. Thus, the direction of technology must be guided not only by efficiency and speed but also by human will and moral imagination.
 
Debates over AI regulation have already begun. Some U.S. states now require transparency from corporations, and Korea has initiated discussions on AI governance. Yet differences in regulatory philosophy persist, and experts remain divided on how AI will reshape politics and society. If AGI evolves beyond human control and reaches the so-called “singularity,” the issue will shift from technology to one of power and ethics — fundamentally, a question of politics. It may sound like science fiction, but given the pace of development, that future is not distant.
 
Samsung Electronics Executive Chair Lee Jae-yong, left, and OpenAI CEO Sam Altman pose for a commemorative photo after signing a letter of intent with four Samsung affiliates to collaborate on OpenAI’s $500 billion Stargate project. [SAMSUNG ELECTRONICS]

 
Technology is never neutral. It creates social orders and drives political change. That is why public reasoning and open debate must guide the course of AI’s evolution. The task cannot be left solely to scientists or engineers. Society must decide together what values and principles will shape AI’s direction. Only when humans maintain leadership over technology can democracy and humanness coexist.
 
AI stands at its starting line. Most of those reading this will live alongside AGI within their lifetimes, perhaps sooner than expected. When that world arrives, the meanings of individual freedom, community, and democracy will need to be redefined. The goal is not to halt AI development — it is far too late for that — but to prepare ourselves for its arrival. The future of AI will not necessarily be a utopia, and the possibility of dystopia cannot be dismissed. How we preserve human dignity in coexistence with AI will be one of the defining challenges of our time. The Lee Jae Myung administration has made AI policy a national priority. Beyond technology and economics, it must also confront the political and social questions that come with it.


This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.