The U.S.-Israel-Iran war demonstrates that artificial intelligence (AI) is increasingly shaping military decision-support and targeting processes. In the conflict, the United States has reportedly employed AI tools that analyze large volumes of satellite, drone, and intelligence data to assist in strike planning. These tools appear to include decision support systems (DSS), such as the Maven Smart System, which process data from various sources for use in operational planning and target identification.
Reports indicate that DSS have significantly accelerated U.S. operational tempo: In the first four days of the war, the U.S. military struck 2,000 targets. In comparison, during earlier operations, such as the campaign against the Islamic State, that scale of targeting would have required around six months. This difference equates to a remarkable 45-fold increase in operational tempo, making clear the appeal of such systems to defense planners. Similar AI tools have reportedly featured prominently in other recent conflicts, including the Russia-Ukraine war and the Israel-Gaza conflict.
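To make the arithmetic behind the 45-fold figure explicit, the back-of-the-envelope sketch below approximates "around six months" as 180 days; the figures are those from public reporting, not independent estimates.

```python
# Back-of-the-envelope arithmetic behind the reported 45-fold tempo figure.
# Approximates "around six months" as 180 days; figures come from public
# reporting, not independent estimates.
targets_struck = 2000
days_with_ai = 4
days_historical = 6 * 30  # ~180 days for a comparable target count

tempo_multiple = days_historical / days_with_ai
print(f"{targets_struck} targets in {days_with_ai} days vs. "
      f"~{days_historical} days historically: ~{tempo_multiple:.0f}x faster")
```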
These developments merit closer attention in South Asia. India has already used similar AI-enabled tools to process historical intelligence alongside real-time surveillance data in support of targeting decisions during Operation Sindoor in May 2025. Pakistan’s military exercises also point to growing interest in AI-enabled warfare, though public reporting has provided fewer operational details. Both countries are likely to continue integrating AI into military operations moving forward.
Both sides must remain cognizant of the significant risks posed by such systems. First, in the South Asian context, where military infrastructure can be dual-use or dual-capable, target misidentification can transform a simple data error into a potentially deadly escalation trigger. Second, automation bias, whereby operators place excessive confidence in AI outputs, and cognitive offloading, whereby sustained reliance on AI systems reduces human scrutiny, pose serious risks in high-stakes conflict scenarios. Third, both the challenge of AI interpretability and the misaligned incentives around transparency complicate confidence-building measures.
In a crisis between India and Pakistan, these risks could jeopardize escalation control, especially given intense political pressures and the fraught information environment. Both countries should commit to managing these risks through stronger data governance, human oversight, and crisis communication mechanisms to prevent technical errors from precipitating strategic crises.
“Automation bias, cognitive offloading, and interpretability and transparency challenges further heighten the risk presented by AI-enabled targeting systems.”
Risks of AI-Enabled Targeting in South Asia
AI tools already play a role in the South Asian security environment. An Indian military officer has acknowledged the use of an AI system for targeting during Operation Sindoor in May 2025 with an “accuracy rate of 94 percent.” According to the officer, the system can analyze real-time data from drones, radars, and satellite feeds, collated with twenty-six years of archival intelligence data. This archival data included records of radio frequency emissions, sensor signatures, equipment locations, and movement patterns of Pakistani army units. Following the May 2025 conflict with Pakistan, the Indian military adopted an “AI roadmap” to better integrate such tools into operations over the next year, demonstrating the growing role of AI-enabled DSS in Indian defense planning.
Pakistan also appears to be integrating AI into military operations. The Pakistan Air Force inaugurated the Centre for Artificial Intelligence and Computing in 2022, and the Pakistan Army’s cyber command has reportedly included AI among its key focus areas. Furthermore, according to official communications, the Pakistan Air Force exercise “Gold Eagle” in February 2026 centered in particular on AI-enabled and net-centric operations. These developments highlight that Pakistan, too, is moving toward military applications of AI, even if fewer specific details of operational use are publicly available.
The growing military adoption of AI in South Asia poses significant challenges. Although the use of real-time data in the Indian system could be presented as a corrective to the limitations of archived intelligence, AI tools trained on historical data and built to prioritize speed could still identify outdated targets in practice. Even at low error rates, the probability of mistakes grows as the scale of targeting increases during a protracted or high-intensity conflict. For instance, even taken at face value, the reported 94 percent accuracy of the Indian targeting system implies a 6 percent error rate; applied across hundreds or thousands of targets, that margin leaves ample room for catastrophic mistakes.
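To illustrate how even a modest error rate compounds with scale, the back-of-the-envelope sketch below assumes the reported 94 percent accuracy applies independently to each target, a simplification that real systems, where errors may be correlated, would not necessarily satisfy.

```python
# Back-of-the-envelope: how a 6 percent per-target error rate compounds at
# scale. Assumes the reported 94 percent accuracy applies independently to
# each target -- a simplification; correlated errors could be worse.
accuracy = 0.94
error_rate = 1 - accuracy  # 0.06

for n_targets in (10, 100, 2000):
    expected_errors = n_targets * error_rate
    p_at_least_one = 1 - accuracy ** n_targets
    print(f"{n_targets:>5} targets: ~{expected_errors:g} expected "
          f"misidentifications, P(at least one error) = {p_at_least_one:.3f}")
```

At the 2,000-target scale reported in the U.S. campaign, a 6 percent error rate would translate into roughly 120 misidentified targets, and the probability of at least one error effectively reaches certainty.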
Automation bias, cognitive offloading, and interpretability and transparency challenges further heighten the risk presented by AI-enabled targeting systems. Automation bias occurs when military decisionmakers perceive AI outputs as inherently accurate, leading to overreliance. In fast-paced conflict scenarios, the potential for such bias becomes even more acute. Israeli military personnel, for instance, reportedly take just “20 seconds” to verify targets identified by the AI-enabled Lavender system, acting more as a “rubber stamp” than a genuine check. Cognitive offloading compounds this challenge: Recent studies indicate that the use of AI tools can weaken human memory and critical thinking. In a military targeting context, this phenomenon could gradually erode human oversight and increase error rates. Lastly, limited interpretability and transparency remain key roadblocks to the more responsible use of AI-enabled targeting. AI systems retain a “black box” quality, meaning it remains difficult to understand how models reach particular outputs. Furthermore, states are incentivized to maintain secrecy around AI tools, especially for intelligence and national security applications. This dynamic impedes external technical audits and weakens accountability.

Managing India-Pakistan Escalation Risks
These risks appear far more alarming when viewed through the lens of the India-Pakistan strategic rivalry, where operational mistakes can rapidly escalate into political and military crises. In this politically charged relationship, an AI targeting failure could spiral into a standoff between two nuclear neighbors, exacerbated by hyperactive media, nationalist sentiment, and domestic political pressure.
Importantly, India and Pakistan are already operating in an environment of deep mutual distrust. A high-casualty incident triggered by an AI-enabled targeting error could lead to reciprocal military signaling, counterstrikes, and broader coercive escalation. Such escalation might occur not because either side actively seeks large-scale war, but because neither leadership would want to appear weak under intense public scrutiny.
The risk of targeting dual-capable systems also looms large. Given that rocket, artillery, and missile systems can have similar signatures, data errors compounded by target ambiguity could turn a misidentification into escalation. In a worst-case scenario, an AI-enabled targeting error against a conventional military asset could be misinterpreted as an attempt to degrade nuclear capabilities, potentially provoking an existential crisis.
“[B]oth countries must invest in human oversight, data governance, and crisis management mechanisms to ensure that operational speed does not come at the expense of strategic stability.”
Taken together, these risks point to the need for a more deliberate framework to manage AI-enabled targeting systems in the India-Pakistan context. First, both countries must take steps to provide political leaders and military decisionmakers alike with a full risk profile of AI-enabled targeting systems: Stakeholders must understand that, despite the operational speed such systems can deliver, they can also become a source of unintended escalation.
Second, developing stronger data governance should be a paramount priority. Vulnerabilities lie not only in the models of AI-enabled targeting systems, but also in their data inputs. The U.S. strike on the Minab school, which reportedly relied on outdated data and killed nearly 170 civilians, demonstrates that incorrect targeting data can lead to tragic outcomes, with or without AI tools. To minimize the risk of errors, militaries should implement regular auditing, cleansing, and revalidation of archived targeting data so that legacy errors do not remain embedded in AI-supported strike systems.
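As a purely illustrative sketch of what such auditing could involve, the snippet below flags archived records whose last verification falls outside a freshness window; the record structure, field names, and 180-day threshold are hypothetical assumptions, not a description of any fielded system.

```python
from datetime import date, timedelta

# Hypothetical staleness audit for archived targeting records. The schema,
# field names, and 180-day revalidation window are illustrative assumptions,
# not a description of any fielded system.
FRESHNESS_LIMIT = timedelta(days=180)

records = [
    {"target_id": "T-001", "last_verified": date(2025, 4, 30)},
    {"target_id": "T-002", "last_verified": date(2019, 8, 12)},
]

def stale_records(records, today):
    """Return records whose last verification exceeds the freshness limit."""
    return [r for r in records if today - r["last_verified"] > FRESHNESS_LIMIT]

for record in stale_records(records, today=date(2025, 6, 15)):
    print(f"{record['target_id']} requires revalidation "
          f"(last verified {record['last_verified']})")
```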
Third, human oversight must be made substantive rather than symbolic. Operators should be required to conduct multi-source verification, especially for civilian, dual-use, or dual-capable targets.
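One way such a requirement could be made substantive is to encode it as a hard gate: a candidate target becomes actionable only once a minimum number of independent sources corroborate it, with stricter thresholds for sensitive categories. The sketch below is hypothetical; the categories, source names, and thresholds are illustrative assumptions, not a proposal for any specific system.

```python
# Illustrative verification gate: a candidate target becomes actionable only
# when enough independent sources corroborate it. Categories, source names,
# and thresholds are assumptions for illustration only.
REQUIRED_SOURCES = {"standard": 2, "dual_capable": 3, "civilian_adjacent": 3}

def is_actionable(category: str, corroborating_sources: set) -> bool:
    """Apply the strictest threshold when a category is unrecognized."""
    needed = REQUIRED_SOURCES.get(category, max(REQUIRED_SOURCES.values()))
    return len(corroborating_sources) >= needed

# A dual-capable target confirmed by only two sources is held for human review.
print(is_actionable("dual_capable", {"satellite", "signals"}))         # False
print(is_actionable("dual_capable", {"satellite", "signals", "uav"}))  # True
```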
Fourth, even if full transparency is impossible because of secrecy surrounding intelligence data and military systems, both states should still pursue limited confidence-building measures. These measures could include a rapid clarification mechanism through existing military hotlines, the development of mutual understanding on especially sensitive targeting categories, and the establishment of norms around immediate political-level communication when incidents create ambiguity about intent. The objective of such measures should be to signal red lines, clarify the sensitivity of dual-capable targets, and reinforce political control over escalation decisions.
Going forward, the deployment of AI tools will likely encourage militaries to act with greater speed and confidence, but this combination presents serious risks, particularly in tense security environments with ever-present uncertainty, ambiguity, and political pressure. In South Asia, as India and Pakistan expand the use of AI in the military domain, technical failures, flawed data, or overreliance could lead to escalation. With the security rivalry already shaped by complex deterrence dynamics, mutual suspicion, and compressed decision-making timelines, both countries must invest in human oversight, data governance, and crisis management mechanisms to ensure that operational speed does not come at the expense of strategic stability.
Views expressed are the author’s own and do not necessarily reflect the positions of South Asian Voices, the Stimson Center, or our supporters.
Also Read: Cyber Quicksand? Uncharted Risks and Escalatory Dynamics in a Future India-Pakistan Crisis
***
Image 1: Rajnath Singh via X
Image 2: DGPR Air Force via X