The butterfly effect of the Anthropic contract termination
Published: 04 Mar. 2026, 00:01
Ahn Hai-ri
The author is an editorial writer at the JoongAng Ilbo.
A stock I bought late in the rally, stretching my finances on the strength of optimistic forecasts, plunged nearly 7 percent on Tuesday. A planned trip to Europe via Dubai was disrupted when flights at Dubai International Airport were suspended. These are the ripple effects ordinary Koreans are experiencing after the U.S. operation to eliminate Iran's leader, Ayatollah Ali Khamenei, code-named "Epic Fury." What seemed like a distant conflict has entered daily life. The concern is not that the fallout ends here, but that it may mark the beginning of deeper structural consequences.
Left: Anthropic CEO Dario Amodei looks on during a meeting with France's President Emmanuel Macron on the sidelines of the AI Impact Summit in New Delhi on February 19. Right: U.S. Defense Secretary Pete Hegseth speaks during a press conference on U.S. military action in Iran at the Pentagon in Washington on March 2. [AFP/YONHAP]
Over time, financial markets may stabilize, and Middle Eastern hubs such as Dubai and Doha may reopen. Short-term volatility often finds a floor. Yet another issue is less likely to fade: renewed concern over technological dependence on AI, highlighted by a clash between the Donald Trump administration and the U.S. AI firm Anthropic before and after the Iran strikes. It would be a mistake to dismiss this as a dispute confined to Washington and Silicon Valley. Just as “Epic Fury” has affected companies and individuals here, the consequences of that dispute could extend far beyond the United States.
The 300 Howard St. building that Anthropic is in talks to lease is seen in San Francisco on Feb. 25. [EPA/YONHAP]
On Friday, Trump designated Anthropic, founded by former OpenAI executive Dario Amodei, a “supply chain risk to national security,” a classification previously applied to Chinese or Russian firms. Federal agencies were ordered to halt use of its products immediately. The administration also unilaterally terminated a U.S. Department of Defense contract signed last July worth up to $200 million.
Earlier this year, during an operation targeting Venezuelan President Nicolás Maduro, U.S. special forces reportedly used Anthropic’s Claude model for target identification and battle scenario simulations. However, when Anthropic maintained that mass domestic surveillance and fully autonomous weapons crossed ethical boundaries, the administration imposed sweeping restrictions. The episode underscored a growing tension between national security demands and corporate principles in advanced technology sectors.
The episode took another turn within hours. Despite the ban, the U.S. military proceeded with "Epic Fury" using Claude. Although the Pentagon had signed comparable agreements last July with OpenAI, Google and xAI, the classified military network built on the Palantir platform relied exclusively on Claude. When Anthropic was placed on a government blacklist, OpenAI quickly reached a new agreement with the Defense Department, but its tools were not yet ready for deployment in live operations. The administration could not afford to wait months during an ongoing conflict.
The contradiction was stark. However forceful the rhetoric, cutting-edge U.S. military operations were, at that moment, dependent on Claude. From one perspective, the episode underscores how a private AI company’s principles can shape outcomes more decisively than government directives. Even the United States, which sells weapons to more than half the world’s countries, faced technological constraints in executing its own operations. In Washington, debate erupted over whether a private firm’s usage policies could override military imperatives and whether contractual terms could effectively circumscribe sovereign authority.
British Prime Minister Keir Starmer, center, and Palantir Technologies CEO Alex Karp, right, tour Palantir headquarters in Washington on Feb. 27, 2025. [AP/YONHAP]
The episode carries sobering implications for Korea. The country ranks among the world’s top 10 arms exporters and cooperates closely with U.S. defense contractors. Yet the analytical “brain” that determines how and where such systems are used may increasingly depend on foreign AI platforms. That reliance presents a structural vulnerability that goes beyond economics. If an overseas AI provider were suddenly excluded from government supply chains due to policy disputes, as happened with Anthropic, the disruption could be immediate and far-reaching for allied states that rely on interoperable systems.
Dependence on foreign AI is not merely a commercial issue but a national security risk. Opinions differ on whether Anthropic acted responsibly by upholding its principles at a significant cost. For Korea, however, the more urgent question is whether national security can be safeguarded without sovereign AI capabilities. And if private firms that develop such systems wield influence exceeding that of governments, what options remain for states seeking to preserve strategic autonomy in an era defined by algorithmic power?
This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.