© Turkuvaz Haberleşme ve Yayıncılık 2026

Daily Sabah - Latest & Breaking News from Turkey | Istanbul


The Iran case and evolution of AI in battlefield operations

by Gloria Shkurti Özdemir

Apr 16, 2026 - 12:05 am GMT+3
A poster with U.S. President Donald Trump asking how to blow up a school, referring to OpenAI/ChatGPT having signed a deal with the U.S. Department of War, London, U.K., March 13, 2026. (Getty Images Photo)

AI is reshaping war from decision-making to execution, raising urgent ethical risks

The recent U.S. and Israel-led operations against Iran were quickly labelled as “the first AI war.” At first glance, this framing may appear powerful, attention-grabbing and even symbolically compelling. Analytically, however, it does not fully capture reality. What we are witnessing today is not Artificial Intelligence entering warfare for the first time; rather, it is the more visible and more intensive manifestation of patterns of use that have been gradually tested, refined and institutionalized across different fronts for some time.

In fact, the traces of this process were already visible much earlier. The war in Ukraine was one of the first major examples demonstrating how AI could become functional in data processing, target identification and intelligence analysis. The war in Gaza revealed how these technologies had moved beyond a merely supporting role and had become embedded directly within the kill chain. The most recent operation against Iran represents a stage built upon those two experiences: more coherent, more systematic and larger in scale. It is therefore more accurate to speak not of a “first,” but of a new threshold in the transformation of warfare.

For this reason, wars such as those in Ukraine, Gaza and Iran should not be read as disconnected events but as parts of the same whole. Each functions as a laboratory, demonstrating in which domains AI can be employed in combat, at what speeds it can drive decision-making, where it produces errors and what kinds of advantages it provides to actors on the ground. More importantly, these wars are not merely testing technology; they are simultaneously building the experience of the actors who wield it. This is why the more frequently actors such as the U.S. and Israel conduct this kind of warfare, the more they gain: they are not simply accumulating data but becoming more experienced, more adaptive and more effective for the next war. Today, the question is no longer solely who possesses the superior AI system, but who is accumulating a learning advantage by deploying these systems more extensively in combat.

New pattern of strikes

Another noteworthy development in recent years is that large-scale attacks have begun to follow a distinct pattern. Wars no longer typically open with direct missile barrages. Instead, the adversary’s capacity to see, hear, communicate and respond is targeted first. Cyber operations, electronic warfare and space-enabled interventions have thus become the opening phase that paves the way for kinetic strikes.

In an unspecified place, an instructor operates a simulator that imitates combat in confined spaces during basic military training at a training center of the Ukrainian Land Forces, March 25, 2026. (Getty Images Photo)

The Iran operation followed precisely this sequence in its first stage. By neutralizing communication networks, sensor systems and command-and-control elements, the adversary's situational awareness was paralyzed, and only then did physical strikes commence. This model had already been observed in Ukraine. What this tells us is that in contemporary warfare, the primary target is no longer simply military units but the adversary's capacity to perceive and to decide. AI plays a decisive role here: by processing vast datasets in real time, it reveals far more rapidly at which points the adversary can be weakened, which networks must be taken offline first and which targets are to be prioritized.

Put differently, the opening move of war is no longer merely about establishing physical superiority; it is about establishing cognitive and digital superiority. Whoever sees faster, makes sense of information faster and decides faster will, to a very great extent, determine kinetic superiority on the ground as well.

AI usage in war

The operations against Iran demonstrated how deeply AI has become embedded across different phases of warfare. On the U.S. side, the primary platform to emerge was the Maven Smart System, developed by Palantir and incorporating Anthropic technology. This system integrates satellite imagery, drone feeds, signals intelligence, radar data, human intelligence and even certain data drawn from civilian infrastructure into a single digital pool, enabling target identification and prioritization. Beyond that, it generates recommendations on which weapon is appropriate for a given target, how the legal justification for a strike should be constructed and what the probable secondary effects might be.

It is also noteworthy that the use of this system in Iran was not a “first.” The same platform is known to have been employed previously in the operation aimed at capturing Nicolas Maduro in Venezuela. This demonstrates that AI systems are being field-tested and incrementally scaled, and that the Iran operation represents a continuation of that process.

The most striking consequence of these systems is the radical compression of the decision cycle in warfare. Target lists that once required hundreds of analysts working for days can now be produced with far fewer people in a fraction of the time. The fact that approximately 1,000 targets were struck within the first 24 hours of the war is a concrete illustration of this acceleration, one that exceeds human capacity alone. This does not simply mean hitting more targets; it means that the tempo of war has reached a level that the human mind cannot sustain on its own.

On the Israeli side, systems such as Gospel, Lavender and Where's Daddy? came to the fore. Previously used widely in Gaza with a reported 10% error rate, Gospel primarily assists in targeting buildings, facilities and infrastructure elements, while Lavender drew attention as a system capable of flagging individuals as potential targets based on data patterns. Where's Daddy? enabled the tracking and striking of targeted individuals within their private spaces, specifically at the moment they returned home. These systems show that war is now conducted not only between forces meeting on a front line but through human lives that have been rendered into data.

One important illustration of the usage of these systems is the assassination of Iran’s Supreme Leader Ali Khamenei. The process of identifying and striking the target was completed within an extraordinarily short timeframe, approximately 60 seconds. The operation was made possible through the simultaneous AI processing of hacked traffic cameras, signals intelligence and large data streams.

Misjudgement, automation bias

Yet the speed that AI brings to warfare does not make it reliable or infallible. On the contrary, when speed and error combine, the consequences can be far more devastating. One of the most striking examples of this in the Iran war was the attack on a school in Minab. Assessed to have been based on outdated, unrefreshed intelligence data, the strike resulted in the deaths of hundreds of civilians. The problem here is not simply that the wrong target was hit but that AI-assisted systems can process stale data as though it reflects current reality, with lethal results.

Another notable case is the “Police Park” incident in Tehran. Assessments suggesting that a public park may have ended up on the target list because its name contained the word “Police” illustrate how fragile AI systems can be in their grasp of context. The machine sees the word but cannot comprehend its meaning. It selects the pattern but cannot read the social and spatial context. In warfare, this kind of contextual blindness is not a routine technical error; it is a directly lethal risk. This raises an unavoidable question: would the same mistake have been made if a human had been behind the data labeling and processing?

There is also a deeper problem: human oversight may often be far less robust than is claimed. Official discourse consistently emphasizes that the final decision always rests with a human. In practice, however, human operators facing AI-generated target lists and strike options are frequently making decisions within extremely narrow time windows, sometimes just 10 seconds per target. This reduces the human from an independent decision-maker exercising autonomous judgment to a role of merely validating the machine’s recommendation. Furthermore, human operators are widely observed to defer to the AI system’s judgment, accepting its recommendations quickly and without much scrutiny, a phenomenon known in the literature as “automation bias.” Taken together, these dynamics pose serious ethical problems for modern warfare.

Future that awaits us

All of this raises a disquieting but important question about the future: as AI proliferates in warfare, will it become correspondingly harder to constrain at the international level? In large part, yes, because no technology that confers battlefield advantage is easily or voluntarily constrained, especially when that technology accelerates decision-making, expands targeting capacity and provides a clear edge over the adversary.

For this reason, the more extensively AI is used in war, the weaker the prospects for global governance may become. States can simultaneously maintain the rhetoric of "human oversight" while, in practice, transitioning to far deeper levels of automation. This creates the appearance that humans remain at the center of the system, when in reality it is the algorithmic architecture that does the actual directing. The human ceases to be the genuine author of the decision and begins instead to function as a façade that lends it legitimacy.

The geography of war is also shifting. Iran's targeting of cloud infrastructures and data centers demonstrates that it is no longer only military bases but digital infrastructure and the AI ecosystem itself that have become direct components of warfare. Data centers, cloud systems and computational capacity are no longer technical elements outside the scope of conflict; they are strategic targets. This confirms that the wars of the future will be fought not only over land, sea and air but across servers, networks and data flows as well.

Based on all these discussions, the fundamental issue is not that this transformation has begun but how far it will go and on what political and ethical foundations it will proceed. If AI-assisted warfare practices become incrementally normalized with each new conflict, we may find ourselves confronting wars in which humans no longer make decisions but merely assume responsibility for them. At that point, the humanitarian, legal and moral limits of war will not simply have eroded. They may have been redefined in ways that cannot be reversed.

About the author
Director of Emerging Technologies and Artificial Intelligence Research Center (ETAI) at Khazar University, researcher at SETA Foundation
The views and opinions expressed in this article are solely those of the author. They do not necessarily reflect the editorial stance, values or position of Daily Sabah. The newspaper provides space for diverse perspectives as part of its commitment to open and informed public discussion.