As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies
U.S. tech giants have quietly empowered Israel to track and kill many more alleged militants more quickly in Gaza and Lebanon through a sharp spike in artificial intelligence and computing services. But the number of civilians killed has also soared, fueling fears that these tools are contributing to the deaths of innocent people.
Militaries have for years hired private companies to build custom autonomous weapons. However, Israel’s recent wars mark a leading instance in which commercial AI models made in the United States have been used in active warfare, despite concerns that they were not originally developed to help decide who lives and who dies.
The Israeli military uses AI to sift through vast troves of intelligence, intercepted communications and surveillance to find suspicious speech or behavior and learn the movements of its enemies. After a deadly surprise attack by Hamas militants on Oct. 7, 2023, its use of Microsoft and OpenAI technology skyrocketed, an Associated Press investigation found. The investigation also revealed new details of how AI systems select targets and ways they can go wrong, including faulty data or flawed algorithms. It was based on internal documents, data and exclusive interviews with current and former Israeli officials and company employees.
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
As U.S. tech titans ascend to prominent roles under President Donald Trump, the AP’s findings raise questions about Silicon Valley’s role in the future of automated warfare. Microsoft expects its partnership with the Israeli military to grow, and what happens with Israel may help determine the use of these emerging technologies around the world.
The Israeli military’s usage of Microsoft and OpenAI artificial intelligence spiked last March to nearly 200 times the level recorded in the week before the Oct. 7 attack, the AP found in reviewing internal company information. The amount of data the military stored on Microsoft servers doubled between that time and July 2024 to more than 13.6 petabytes — roughly 350 times the digital memory needed to store every book in the Library of Congress. The military’s usage of Microsoft’s huge banks of computer servers also rose by almost two-thirds in the first two months of the war alone.
Israel’s goal after the attack, in which about 1,200 people were killed and more than 250 taken hostage, was to eradicate Hamas, and its military has called AI a “game changer” in yielding targets more swiftly. Since the war started, more than 50,000 people have died in Gaza and Lebanon and nearly 70% of the buildings in Gaza have been devastated, according to health ministries in Gaza and Lebanon.
The AP’s investigation drew on interviews with six current and former members of the Israeli army, including three reserve intelligence officers. Most spoke on condition of anonymity because they were not authorized to discuss sensitive military operations.
The AP also interviewed 14 current and former employees inside Microsoft, OpenAI, Google and Amazon, most of whom also spoke anonymously for fear of retribution. Journalists reviewed internal company data and documents, including one detailing the terms of a $133 million contract between Microsoft and Israel’s Ministry of Defense.
The Israeli military says its analysts use AI-enabled systems to help identify targets but then examine them together with high-ranking officers to ensure they meet international law, weighing the military advantage against the anticipated collateral damage. A senior Israeli intelligence official authorized to speak to the AP said lawful military targets may include combatants fighting against Israel, wherever they are, and buildings used by militants. Officials insist that even when AI plays a role, there are always several layers of humans in the loop.
“These AI tools make the intelligence process more accurate and more effective,” said an Israeli military statement to the AP. “They make more targets faster, but not at the expense of accuracy, and many times in this war they’ve been able to minimize civilian casualties.”
The Israeli military declined to answer detailed written questions from the AP about its use of commercial AI products from American tech companies.
Microsoft declined to comment for this story and did not respond to a detailed list of written questions about cloud and AI services provided to the Israeli military. In a statement on its website, the company says it is committed “to champion the positive role of technology across the globe.” In its 40-page Responsible AI Transparency Report for 2024, Microsoft pledges to manage the risks of AI throughout development “to reduce the risk of harm,” and does not mention its lucrative military contracts.
Advanced AI models from OpenAI, the maker of ChatGPT, are purchased by the Israeli military through Microsoft’s Azure cloud platform, the documents and data show. Microsoft has been OpenAI’s largest investor. OpenAI said it does not have a partnership with Israel’s military, and its usage policies say its customers should not use its products to develop weapons, destroy property or harm people. About a year ago, however, OpenAI changed its terms of use from barring military use to allowing for “national security use cases that align with our mission.”