Shock Decision Before Strike
On 27 February, the Pentagon placed Anthropic on a blacklist. The decision came just one day before the United States and Israel launched military strikes on Iran. Anthropic is one of America’s leading AI companies and the creator of the AI model Claude. The timing surprised many observers.

According to a video report by Al Jazeera, the blacklist followed months of tension between defense officials and the company. The dispute focused on how Claude should operate in military systems. Many experts began asking why such a strong move happened right before a major military action.
Fight Over Guardrails
The main disagreement centered on Claude’s safety guardrails. The Pentagon wanted broader access so it could use the AI in any lawful military application, including sensitive defense projects. Officials argued that modern warfare demands powerful digital tools. Anthropic, however, refused to remove two key limits.

The company said Claude would not support mass domestic surveillance of Americans. It also rejected the use of fully autonomous weapons without human control. According to the report, the Department of Defense gave Anthropic a deadline of 27 February to change these rules. The company refused.
Supply Chain Risk Label
After the deadline passed, the U.S. government categorized Anthropic as a supply chain risk. The label requires companies that work with the U.S. military to cut ties with the firm. Officials usually reserve the term for foreign companies linked to national security threats, so applying it to a Silicon Valley technology company raised eyebrows in both political and tech circles. The move showed how serious the conflict between the Pentagon and the AI developer had become.
Claude Still in Use
Despite the public blacklist, reports suggest the U.S. military continued using Claude during operations connected to Iran. Sources say the AI helped with intelligence analysis and simulated battle planning.

Claude is already deeply integrated into classified systems. Replacing it would require time, testing, and technical adjustments. This situation created a clear gap between public policy and operational reality inside defense networks.
Growing AI Warfare Debate
The Pentagon-Anthropic dispute highlights larger questions about AI warfare. Military leaders want speed and data power, while AI companies demand ethical limits and human oversight. The case shows how difficult it is to balance national security objectives with the responsible use of technology. As artificial intelligence becomes central to modern defense strategy, debates over control and ethics are expected to grow.
Watch the video here:
@aljazeeraenglish A day before the US attacked Iran, it had blacklisted the AI company Anthropic. Turns out, it reportedly used Anthropic’s technology anyway in its military operation in Iran. Al Jazeera’s @linhhh_n explains the dispute between the Pentagon and Anthropic over how AI should be used in warfare.
If you have a story or a tip-off, email admin@wakeup.sg or get in touch via Whatsapp at 8882 5913.
Interested in advertising on our media channels? Reach out to us at admin@wakeup.sg!
Since you have made it to the end of the article, follow Wake Up Singapore on Telegram and X!
Wake Up Singapore is a volunteer-run site that covers alternative views in Singapore. If you want to volunteer with us, sign up here!