AI warfare
Use of AI in war
From Wikipedia, the free encyclopedia
AI warfare refers to the use of artificial intelligence technologies to automate military operations and to enhance or bypass human decision-making in armed conflicts. AI is used to rapidly analyze large volumes of military intelligence data,[1] including making recommendations or decisions on whom and what to target.[2] Abdul-Rahman al-Rawi, a 20-year-old student, was the first civilian killed in an AI-assisted US airstrike, in Iraq in 2024.[3] In 2026, the U.S. declared it would become an "AI-first" warfighting force.[4]
Husain et al. (2018) coined the term hyperwar for warfare that is algorithmic or controlled by artificial intelligence, with little to no human decision-making.[5][6]
2026 Iran war
The 2026 Iran war has been described as the "first AI war", although the U.S. and Israel had previously used AI to identify targets during the Gaza war.[7] The United States has used AI tools to attack Iran,[8] employing them for military intelligence, targeting, and damage assessment.[9] Using the Maven Smart System, the U.S. struck 1,000 targets in the first 24 hours of the war and 5,000 targets over the course of 10 days.[10] The U.S. had earlier used Maven to share targeting information with Ukraine in 2022 and to conduct strikes in Iraq, Syria, and against the Houthis in 2024, but the campaign against Iran is its largest use to date.[10] Authorities are investigating whether artificial intelligence was involved in the airstrike on an Iranian girls' school that killed 170 civilians, the majority of them female students.[11] The United States Central Command emphasizes that humans make the final decisions on what to strike.[12]
Involved companies
The Maven Smart System is developed by Palantir. It integrates Anthropic's Claude as its large language model and uses Amazon's AWS servers as its cloud infrastructure.[13]
Involved state actors
In 2024, the United States Department of Defense had more than 800 active AI-related projects and requested $1.8 billion in AI funding, the most prominent being Project Maven and Project Artemis (AI-resistant drones developed jointly with Ukraine).[14] The technology has been used to identify targets in Iran, Iraq, Syria and Yemen.[13]
Israel deployed the AI systems Gospel (which targets buildings) and Lavender (which identifies individuals) extensively during the Gaza war.[15] Both were developed by Unit 8200.
China is pursuing "intelligentized warfare", integrating AI across all combat domains (land, sea, air, space, and cyber), with military AI spending exceeding $1.6 billion annually.[16]
International regulation
Since 2014, states meeting within the framework of the Convention on Certain Conventional Weapons have discussed lethal autonomous weapon systems. In 2016, the treaty's states parties established an open-ended Group of Governmental Experts on Lethal Autonomous Weapons Systems to continue those discussions.[17] The discussions have addressed international humanitarian law, accountability, possible prohibitions and regulations, and the extent of human control required over AI-enabled weapons.[18]