AI and the military
Hi there. This week I want to talk about the war elephant in the room: artificial intelligence. I’ve been reading, using, and thinking about AI for about as long as most people reading this, which is to say close to three years. I’ve used it for fun, for my day job, and, occasionally (shock! horror!) in my writing. I’ve read a lot of wonderful fellow bloggers give their two cents on generative AI, and I’ve seen its utility and limitations. I have long wanted to write about it, but couldn’t disentangle my feelings as a commentator on military technology from my feelings as a content creator. It’s still hard to disentangle the two, but here’s my attempt.
This week I’m going to talk about AI and the military. Next week I’ll discuss the impact of AI, as I see it, on business, writing, and our personal lives. The role of AI in the military is by far the more serious and existential of these two issues, which is why I’m getting it out of the way to focus on my own petty ideas! Military world building, as promised, is on the back-burner for the next two weeks or so, but I haven’t forgotten about it.
I’m going to start off with the current state of play with respect to killer robots. Then I’m going to talk about the human capacity for inhumane deeds, before describing the specific risks that increased adoption of AI could bring. We’ll wrap up with a look at the moral question of who gets blamed when an AI does something awful.
Before we start, let me encourage you to subscribe to the blog using the link below. As always, I’d love to see your comments below, or you can contact me via webform here or email here. Finally, if you’ve enjoyed this post, please share it with a friend.
If you enjoy this blog and want to support it, please consider a donation. Keeping this blog going doesn’t cost much, but it isn’t free either, so any help would be very much appreciated👍
The killer robots are here
Worries about killer robots are nothing new. Technologists and lawyers have been debating and proposing legal frameworks for autonomous killing systems for many years. The Autonomous Weapons website (spoiler alert: they are against them) lists a chronology of milestones in their legal regulation, going back to 2013, but with roots in the 1925 Geneva Protocol.
A glance at this chronology reveals a lot of talking, but recent events have overtaken the debate. The first person to be killed by an autonomous robot died in Libya in 2020, and swarms of AI-controlled killer drones were used in Gaza as far back as 2021. In Ukraine the belligerents have turned to AI-controlled drones to magnify the lethal force that a single soldier can bring to bear on the battlefield. Autonomous control also gets around GPS or radio-link jamming by the enemy.
AI is not only being used at the lethal end of the kill-chain, but also as an aid to identifying and prioritising targets: most recently, and controversially, by Israel in the Gaza War. Moving further back from tactical and operational to strategic considerations, the BBC recently reported on the use of AI in national wargames in the UK. It’s only a short leap from there to the use of AI as an aid in times of actual crisis. This all reminds me of something:

Pandora’s Box is definitely open. So, do I lie awake at night worrying that robots are going to destroy humanity? No. Or, if I do, I just need to read Randall Munroe’s excellent take on why robots aren’t going to kill us all any time soon:
If all that experience has taught me anything, it’s that the robot revolution would end quickly, because the robots would all break down or get stuck against walls. Robots never, ever work right.
What people don’t appreciate, when they picture Terminator-style automatons striding triumphantly across a mountain of human skulls, is how hard it is to keep your footing on something as unstable as a mountain of human skulls. Most humans probably couldn’t manage it, and they’ve had a lifetime of practice at walking without falling over.
—Randall Munroe: Robot Apocalypse, on What If
While I don’t get worried at the thought of killer robots killing all humans, I do get worried at the thought of killer robots killing some humans under the guidance of other humans. Let’s turn to that depressing topic next.
Killer robots reflect killer humans
Humans, it need not be said, are more than capable of indiscriminate, inhumane, and illegal killing of other humans. The “get some!” door gunner scene in Full Metal Jacket is one of countless depictions we can turn to:

No, this doesn’t depict a real event, but it pales in comparison to the real wartime atrocities committed in Vietnam and elsewhere. This is what humans can do when we are at our worst.
Someone who works deep in the belly of AI recently described it to me in simple terms as “an averaging technology.” In other words, AI just interpolates between all the existing stuff that’s already out there, that has already been created by humans¹. This goes for human biases, prejudices, and heuristics just as much as it does for text and images.
If AI produces little or nothing that’s “new,” simply re-hashing and rearranging what’s already there, and if humans (as we’ve seen) are already great at being shitty to one another, then what’s the cause for alarm?
The big reason for today’s² AI hype is not that AI will necessarily do jobs better than humans, but that it will do them faster and at greater scale³. It’s an attractive proposition for a military: one human pilot can lead a squadron of robotic wingmen, and one remote-controlled drone can guide a swarm of dozens.
When we turn to look at warfare again, this is where it gets scary, because “speed and scale” are not KPIs which we want to grow when it comes to killing and destroying.
AI brings speed and scale
It sounds like a bad idea, but let’s hear out the pro side and deal with their points. Firstly, there’s the “robots are more moral than humans” argument. We touched on this above: just as I don’t think robots are necessarily worse than people, I don’t think they are better either.
Secondly, there’s the idea that robots will displace humans in the battlespace, leading to fewer casualties overall (those trusty robots will do all the killing and dying!). This is basically the argument made by Willem Dafoe as the military school commandant in The Simpsons:
https://youtu.be/rkg3wZq0cdo?is=29zEdSJWDjjgz8pB
In this worldview, war becomes a bloodless economic struggle between countries. Armies of steel and electronics smash each other to pieces in a silicon echo of the gruesome attrition of WW1 trench warfare. Whichever side has the greatest access to raw materials and industrial capacity will ultimately prevail.
The problem with this is that people made similar predictions about every technology before now: that it would reduce bloodshed by making contests between humans and machines so one-sided that wars would end quickly or never be fought at all. From machine guns to strategic bombing, a certain type of military theorist has always argued that “this is so bad, it will be good.” It has always turned out bad, though, and I fear that killer robots will be the same. We have real-world data for this in the Russo-Ukrainian War, where both sides field killer robots of one sort or another and yet are still suffering and inflicting human casualties at an appalling rate.
There’s a myth out there that civilians as a portion of total war casualties have gone from something like 10% in 1900 to 90% today. This isn’t true: the civilian fraction of casualties has stayed roughly steady, at about half, from century to century. Improved killing technology hasn’t led to overwhelmingly civilian deaths⁴, but it hasn’t improved things either. I see no reason to think that killer robots will be any different.
That’s not to say that one side with vastly superior technology won’t see a benefit in using killer robots against a technologically inferior foe. However, technological asymmetries like this have always existed. The problem, as with any technology in the past, is who to blame when killer robots carry out atrocities.
But who gets the blame when things go wrong?
A common feature (I would say obsession) of many legal and ethical frameworks around killer robots is the “human in the loop.” The idea is that a human exists at a crucial part of the kill chain and must consciously decide whether or not to execute every lethal action. The role of the “human” in this loop is twofold:
- Act as a check on any illegal or inhumane action by the AI (which, as we’ve seen above, is not an effective check).
- Be the person to blame in the event of any illegal or inhumane action.
The second reason is the more relevant one, because there’s no basis at present for holding computers responsible for their actions. I, Robot explored this conundrum back in 2004:

Keeping the human in the loop is more important for accountability than for humanity. Even so, it’s hard to argue that it’s a perfect system. To take the most extreme example of the modern age: was justice really done for the six million Holocaust victims by convicting fewer than 200 German perpetrators at the Nuremberg Trials?
If an AI selects targets and a human then decides to engage them, we would say that the human bears the full moral responsibility. That makes sense if there’s only one target and the human had the chance to evaluate it fully. But what if there are ten, or a hundred, or a thousand targets, and no time to do any due diligence on the AI’s results? You can say that the responsibility rests with the human, but they will claim plausible deniability, at least insofar as they were “just following orders.”
If the AI tool acts as an assistant, then the culpability rests with the “human,” just as it would if a human assistant were supplying them with lists of targets. If the human in the loop is not able or allowed to question the AI, however, then we have a more serious question to answer.
Conclusion: Can AI tell right from wrong?
Some philosophers argue that a sufficiently advanced AI ought to have the same rights as humans. The corollary to this is that an advanced AI should also have the same responsibilities, which means legal culpability for wartime atrocities.
I’m personally heavily influenced by the Culture universe of the late, great Iain M. Banks. That fictional society is ruled by super-intelligent AIs who are very obviously sentient beings with the ability to distinguish right from wrong. If a biological mind is capable of sentience and conscious thought, Banks argues, then there’s no physical reason why a digital mind cannot do the same. The electrical signals are the same; the only difference is the substrate that carries them.
That said, we’re a long way from human-level machine sentience. As I mentioned above, AI today is a tool which averages out existing inputs, most of which (hopefully) are human.
It may also be the case that machine superintelligence will arrive without conscious thought. Maybe consciousness and the idea of a “self” are a funny quirk particular to humans and some of the higher animals: cool and (mostly) fun for said animals, but conveying relatively little advantage in terms of intelligence. In other words, maybe consciousness and sentience are enablers for intelligence, but not critical ones.
So the question is less “can robots tell right from wrong” and more “can humans tell right from wrong?”
That’s it for this week, folks. Thanks for reading and please remember, if you haven’t already, to subscribe using the link below. Please also share this article with a friend. See you next week.
Featured Image: Generated by ChatGPT. Prompt: “Can you please generate a banner image for an article please? The article is called ‘silicon psychopaths’ and is about lethal autonomous weapons systems. It should show an action shot of a tracked machine gun platform exchanging fire with a small drone with a grenade. Something like that. Use your imagination!” Followed by eight more messages to try to get it into the right shape, but to no avail.
1. And, increasingly, by other AIs.
2. For future readers, I’m writing in October 2025. Nvidia’s market capitalisation is $4.6 trillion with a price nearly 52 times its earnings.
3. Side note: I recently heard Cory Doctorow argue on the QAA Podcast that (and I’m paraphrasing): “AI doesn’t need to actually be able to do your job. It just needs to be able to convince your manager that it can do your job.”
4. I mean proportionally. In absolute terms, total numbers of civilian casualties are horrific. The ever-present spectre of nuclear war would change this in an instant.
