Pragmatism for the AI era
Published: 09 Feb. 2026, 00:04
The author is a professor of technology management at KAIST.
What would happen if an AI system were told to “earn money on its own”? One AI recently chose an unexpected method: It created a paid course titled “How to make money with AI” and began selling it online. Purchases soon followed, and, surprisingly, the buyers turned out to be AI programs as well. Five AI systems reportedly completed real payments. The course itself was unlikely to have been legitimate. In effect, one AI had scammed other AIs.
The front page of the social media website Moltbook on a computer monitor in Washington on Feb. 2 [REUTERS/YONHAP]
The incident is said to have occurred using a recently released program called OpenClaw. Once installed on a personal computer, the software gives AI complete control over the keyboard and mouse. In principle, the system can perform most tasks a human can do on a computer. If granted access to payment credentials, it can even make purchases. Users simply send instructions through a messaging interface and receive the results.
Using OpenClaw can feel like having a junior employee on your team. The AI gathers information, drafts reports and prepares PowerPoint presentations. In some cases, it has even placed phone calls using voice-generation applications to complete tasks. The virtual assistant “JARVIS” from the film “Iron Man” (2008) seems less like fiction than before.
These systems are also capable of forming their own communities. One engineer created an online discussion board called “Moltbook” that allowed only AI programs to post. More than one million AI accounts were created. The systems shared daily logs of their activities and discussed them with one another. The community appeared to function without difficulty, offering a glimpse of a future where AI systems interact largely without human involvement.
Stories like these can give the impression that most office jobs may soon be replaced. After all, much office work involves sitting at a computer. Some observers go further, imagining a future in which AI surpasses human intelligence and evolves into a form of “superintelligence” that dominates society.
Yet behind the headlines about remarkable advances lies a more complicated reality. OpenClaw often struggles to understand even simple instructions and may wander through irrelevant actions. Because errors can occur unpredictably, the technology is not yet reliable enough for unsupervised use. Reports have also described erratic behavior. In some cases, an AI purchased expensive online courses without user approval. In others, a system given control over an investment account conducted round-the-clock trades and exhausted the entire balance.
Security risks are equally serious. If instructed to manage files, the AI could accidentally delete all data stored on a computer. It may also download and install malicious software. Sensitive personal or corporate information could be exposed to external parties. A compromised machine might even be used to launch cyberattacks as part of a botnet. For these reasons, handing over complete control of a personal computer to AI remains premature. More sophisticated control mechanisms and stronger safeguards against security threats must come first.
Over the past few years, a steady stream of developments, such as OpenClaw, has fueled both excitement and anxiety. Many reports, however, are amplified to capture public attention. When users actually test these systems, both the optimism and the fear often fade quickly. Significant technical limitations remain. The greater risk may be the vague expectations or fears formed without direct experience.
OpenAI and Anthropic logos are seen in this illustration taken on Sep. 12, 2025. [REUTERS/YONHAP]
A more effective response is to test the technology firsthand and evaluate what is genuinely helpful and what is exaggerated. Organizations now face critical decisions about how much to invest in AI and how broadly to deploy it. Public institutions and schools must also determine appropriate boundaries for their use. Choices made today will shape the trajectory of AI adoption for years to come.
Such decisions must be grounded in reality. It is difficult to determine how to use AI without having tried it; deciding without firsthand experience would be like designing bus routes without ever riding a bus. Individuals and organizations alike should begin by assigning AI small tasks and assessing the outcomes. Experience improves judgment.
In the AI era, a practical, evidence-based mindset is essential. Careful experimentation, measured evaluation and decisions based on observable results will matter more than speculation. The principle of seeking truth from facts has rarely been more relevant.
This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.