
AI on the battlefield

Dawn
2026/04/12 - 07:15

The prominent use of artificial intelligence in warfare of late will remind many of us in Pakistan that this is not new.

The ‘signature strikes’ conducted via drones in Pakistan by the Obama-led US administration treated any ‘military-age male’ as a legitimate combatant target, a practice that led to a very high civilian casualty rate, unaccounted for by the US and criticised by rights groups.

The processing of footage captured by surveillance drones that constantly buzzed above Waziristan and adjoining areas to deduce targets, as early as 2008, was among the earliest uses of AI in warfare, and it made Pakistan a testing laboratory for the tech-enabled warfare that would follow.

As a technology that processes large swathes of data, through systems such as large language models, to make deductions for varied purposes, AI has come to be used across sectors, including in the information ecosystem. The ‘fog of war’ created by the use of generative AI in information operations is another example of its deployment in warfare, but that deserves a dedicated discussion. The lesser-known targeting of people through AI certainly merits attention.

AI has several functions in warfare today, such as finding and classifying targets; fusing data for command and control; helping plan operations with generative AI and simulation; enabling drones and other unmanned systems to operate; and supporting cyberwarfare, maintenance and logistics.

Reports of AI company Anthropic refusing to accede to terms demanded by the Pentagon in a military contract highlighted the tensions, as well as the collaborations, that exist between technology companies and governments. However, the Anthropic case was an outlier, as evident from OpenAI’s eager signing of a contract with the Pentagon. It signified that in a competitive market, an ethical stand taken by one company can quickly be undercut by another.


In the Israeli genocide of Palestinians, the collusion of tech companies with the Israeli military has demonstrated the extent to which they are willing to go for profit, positioning them effectively as defence companies. The Israeli military has used AI for target generation and ranking, surveillance and signal analysis, translation/transcription of intercepted communications, and cloud infrastructure to store and process huge datasets for the genocide.

The best-known AI-enabled warfare systems used by Israel are Gospel and Lavender. Gospel suggests strike targets using technology by Palantir, whereas Lavender uses machine learning to create a list of potential human targets, scoring people from zero to 100 based on data such as family ties and intercepted calls.
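To make concrete the kind of pipeline such reports describe, and why it is so dangerous, consider a minimal, entirely hypothetical Python sketch. Lavender’s internals are not public; the feature names, weights and threshold below are invented purely for illustration.

```python
# Illustrative sketch only: the real Lavender system is not public.
# This hypothetical scorer mimics the *kind* of pipeline reports describe:
# surveillance-derived features are collapsed into a single 0-100 score,
# and a fixed threshold turns that score into a target list.
import math
from dataclasses import dataclass

@dataclass
class Profile:
    # Hypothetical features of the sort reportedly used (family ties,
    # intercepted communications); names and values are invented.
    flagged_contacts: int    # contacts already on a watchlist
    intercepted_calls: int   # calls matched by keyword spotting
    shared_household: bool   # lives with a flagged individual

# Invented weights; a real system would learn these from labelled data.
WEIGHTS = {"flagged_contacts": 0.9, "intercepted_calls": 0.4, "shared_household": 1.5}
BIAS = -3.0

def score(p: Profile) -> float:
    """Map a profile to a 0-100 'risk' score via a logistic function."""
    z = (WEIGHTS["flagged_contacts"] * p.flagged_contacts
         + WEIGHTS["intercepted_calls"] * p.intercepted_calls
         + WEIGHTS["shared_household"] * p.shared_household
         + BIAS)
    return 100.0 / (1.0 + math.exp(-z))

def target_list(profiles: list[Profile], threshold: float = 80.0) -> list[int]:
    """Return indices of profiles whose score crosses the threshold."""
    return [i for i, p in enumerate(profiles) if score(p) >= threshold]

# A few mistranscribed calls plus a shared flat can push a civilian
# over the threshold: one opaque number stands in for a life.
civilian = Profile(flagged_contacts=1, intercepted_calls=5, shared_household=True)
print(round(score(civilian), 1))  # ~80.2, above the 80.0 cut-off
```

The point of the sketch is structural: once noisy surveillance-derived features are reduced to a single opaque number, a fixed threshold converts every data error, a mistranslated call or a shared flat, directly into a targeting error.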

Microsoft, Google, OpenAI and Palantir, among others, have been supplying advanced technology that enables Israel to carry out the Palestinian genocide. Microsoft provides the Israeli defence ministry with software, Azure cloud services to store data, and Azure AI services to process it, including language translation, in addition to access to OpenAI models. The news agency AP’s investigations have found a direct link between AI-based targeting and “the wrongful killing of civilians, including a Lebanese family with children”.

After quietly removing from its policy the restriction on the use of its AI products for military purposes, Google, along with Amazon, has a $1.2 billion contract with Israel, called Project Nimbus, for the provision of cloud computing and AI services to the Israeli military.

Just as everyday use of AI results in errors and hallucinations, AI use in warfare comes with a critical risk of automation bias, where military personnel come to rely on the system without regard for mistranslations, faulty pattern-matching and other data problems. Further, there are few mechanisms for companies to withdraw their technology and services when humanitarian and human rights laws are violated, assuming that is even a priority for companies beyond profit.

The mistreatment of whistle-blowers, and of workers protesting the rights violations their labour was contributing to within these companies, is indicative of their policies and priorities. After media reports exposed Microsoft’s dangerous partnership with the Israeli government, Microsoft ended only one of many contracts. This shows the lack of adequate and meaningful human rights due diligence in the practices of these companies, and how little oversight there is of the human rights risks and impact their technology has on civilian populations.

It is not rare for companies to end their presence in a country, and their work with its government, over a questionable human rights record. Google shelved its Dragonfly project in China after concerns were raised about the censorship it would build into its search engine. Tech companies also withdrew from Russia after its invasion of Ukraine, and banned Russian media pages.

In the case of Israel, however, tech companies have raced to partner with the government despite credible determinations that it is carrying out a genocide. This points to a clear inconsistency, and blatant double standards, in the policies of US tech companies.

Like governments, companies cannot function with impunity. They are bound by international human rights law and international humanitarian law. The UN Guiding Principles on Business and Human Rights outline the responsibilities of companies, including human rights due diligence, as well as states’ responsibility to ensure that companies functioning from their territories are not complicit in human rights violations.

The Global Network Initiative principles also bind companies to carry out human rights due diligence and to protect freedom of expression and privacy rights in their business practices. But regulations like the EU AI Act exempt military uses of AI from regulation and oversight, and this is a problem.

AI enabled America’s killing of ‘military-age men’ in Pakistan in 2008, and the killing of more than 200 schoolgirls in Iran in 2026. Governments, regional and international bodies, and companies must ensure that AI and associated technologies are not complicit in taking civilian lives.

Any regulation of AI and its oversight must include human rights and humanitarian protections. Companies must not value profits over human lives and rights, or they must be prepared to face the consequences.

The writer is director of Bolo Bhi, an advocacy forum for digital rights.

X: @UsamaKhilji

Published in Dawn, April 12th, 2026
