In the rapidly unfolding conflict with Iran (known in the US as Epic Fury and in Israel as Roaring Lion), artificial intelligence has ceased to be a back-office analytical tool and has become operationally embedded in battlefield decision-making and war planning. 

Reports indicate that the US military deployed AI systems provided by the start-up Anthropic – specifically its large language model “Claude” – to support intelligence analysis, target identification, and operational simulations during recent strikes on Iranian targets, even hours after US President Donald Trump ordered a federal ban on the technology. 

This extraordinary sequence of events – in which AI’s role in kinetic operations outpaced public policy – reflects both the deep integration of advanced models into combat systems and the Pentagon’s urgent push to field AI across its mission sets.

From intelligence support to operational acceleration

According to reports from The Wall Street Journal and other outlets, US Central Command utilized Claude in conjunction with conventional assets – including Tomahawk missiles, stealth aircraft, and AI-driven drones – to process vast quantities of battlefield and sensor data in real time. The AI model assisted commanders by synthesizing intelligence, prioritizing high-value targets, and running “what-if” scenarios that had traditionally taken hours of human analysis.

Even as the Trump administration publicly denounced Anthropic’s technology and gave federal agencies six months to phase it out, the reality of its use in an actual war zone underscores the operational value military planners see in these models.

The market reaction to the DoW deal (credit: Courtesy)

Defense officials and war planners reportedly resisted an immediate cutoff because Claude was already deeply embedded in mission-critical workflows, including through partnerships with firms such as Palantir that integrate commercial AI into secure military systems.

The tension between technological utility and political leadership is stark. While commanders in the theater of war rely on AI’s ability to collapse sensor-to-commander timelines, civilian leadership is still grappling with the authority and ethics of accelerating such integration without clear oversight.

The Pentagon’s ‘AI-first’ directive

The US Department of War (DoW) – the modern name for the Pentagon’s operational arm – has formally embraced an ‘AI-first’ strategy, a blueprint to make AI foundational to how the US armed forces fight, gather intelligence, and organize operations across domains. 

The strategy memo directs the DoW to become an “AI-first warfighting force” that accelerates experimentation with frontier models, removes bureaucratic barriers to AI deployment, prioritizes asymmetric advantage in compute and data, and incorporates AI into core decision loops. 

Seven “pace-setting projects” highlighted in the strategy’s roadmap span disciplines from tactical swarm coordination to AI-augmented battle management agents – signaling that AI isn’t only for intelligence support but is being woven into how campaigns are planned and executed.

In practical terms, the strategy is not an abstract wish list. The Department has already rolled out GenAI.mil, a secure AI platform designed to bring generative models and analytics into both classified and unclassified networks, expanding AI access to millions of service members and civilian personnel.

Silicon Valley meets the war machine

Defense’s rapid adoption of AI has provoked significant industry debate. Anthropic, initially an approved provider of AI models for classified missions, has resisted Pentagon demands to remove safeguards – particularly regarding autonomous weapons and mass surveillance – arguing that such uses exceed current safe boundaries for the technology.

Defense officials, meanwhile, have threatened contract cancellation and even labeling the company a “supply chain risk” to compel broader access, injecting political pressure into what was once a technical negotiation.

These clashes have triggered internal tech industry pushback, including employee petitions opposing military AI use in certain domains, reflecting broader tensions over ethics, governance, and national security.

The new ‘rules’ of war

The US experience in the Iran conflict highlights a transformative moment in modern warfare: AI models are no longer confined to predictive maintenance or administrative support but are actively employed as force multipliers in combat scenarios. This shift carries profound implications for how wars are planned, fought, and governed – from tactical autonomy to strategic escalation.

At the same time, scholars and policymakers caution that the rush to embed AI into lethal operations must be paired with robust ethical and legal frameworks, lest the technology outpace the norms that govern its use.

The evolution of international law, rules of engagement, and accountability mechanisms will be tested as AI systems influence decisions once exclusively in human hands.

The AI arms race is on

The US military’s deployment of AI in the Iran conflict, in the face of a political ban and amid an AI-first institutional strategy, reveals both the strategic imperatives and the dilemmas that advanced technology introduces into contemporary warfare.

As AI becomes deeply woven into command cycles, intelligence synthesis, and operational planning, the United States is effectively pioneering a future where the boundary between human judgment and algorithmic decision support is continually renegotiated.

The outcome of this negotiation among military planners, policymakers, industry partners, and international audiences will shape the rules of war in the AI era.


The writer is the head of the Institute for Applied Research in Responsible AI at HIT and of the Deep-Tech & National Security Project at the Institute for National Security Studies (INSS). She is also a former senior director at the National Security Council (NSC).