Over the past three years, autonomous weapon systems have become increasingly prevalent. Unmanned and remotely piloted aircraft, both autonomous and semi-autonomous, are now central to modern battlefields. More than 70% of casualties in the Russia-Ukraine war have been attributed to unmanned aircraft, both at the borders and deep within the country, chiefly Shahed-136-type UAVs, which strike targets with high precision, sometimes civilian targets as well.
From the densely packed urban combat zones of Gaza to the battlefields of Ukraine, aircraft navigate autonomously and can even track targets and strike with speed and accuracy that humans cannot achieve.
Yet a question remains constantly in the background, one that may shape the future of warfare no less than the technology itself: Who bears responsibility when an autonomous lethal weapon makes the wrong decision?
Automation vs. autonomy - not just semantics
It’s easy to confuse automation with autonomy, but the difference is fundamental.
Automated systems, such as Israel’s Iron Dome, manufactured by Rafael, respond to preprogrammed triggers according to human instructions. They intercept rockets when radar detects them, and the algorithm “understands” that the trajectory threatens a protected area.
The system executes the defensive action and launches the interceptor missile (or recommends launch) – but it’s the human who approved the rules of engagement and defined what constitutes a threatening target and when a missile should be fired to intercept it.
In fact, many systems we use in daily life have an automated component, such as cars, vacuum cleaners, and dishwashers. An autonomous system is something entirely different. An autonomous weapon has the built-in capability to select the suspected target and decide to execute the lethal action without human involvement. This is the boundary between autonomy and automation – the choice and decision to attack.
The autonomous weapon can identify suspicious movement, cross-reference all data in its database, and decide to strike – without asking permission.
It doesn’t just execute; it chooses. The choice is made by artificial intelligence (AI) tools, though in practice we are talking about lines of code that decide matters of life and death.
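To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not modeled on any real weapon system; the names, fields, and threshold are hypothetical. The point is structural: in the automated case a human has pre-approved the rule that triggers the action, while in the autonomous case the software itself selects the target and decides to act.

```python
# Purely illustrative sketch; no real system, interface, or threshold is represented.

from dataclasses import dataclass


@dataclass
class Track:
    """A hypothetical sensor track."""
    trajectory_hits_protected_area: bool  # computed upstream from radar data
    classifier_score: float               # a model's confidence that this is a valid target
    location: str


# --- Automation: the machine applies a rule a human wrote and approved ---
def automated_intercept(track: Track) -> bool:
    # The trigger condition is fixed in advance by humans (rules of engagement);
    # the system only executes, or recommends, the pre-approved response.
    return track.trajectory_hits_protected_area


# --- Autonomy: the machine itself selects the target and decides to act ---
def autonomous_engage(track: Track, threshold: float = 0.9) -> bool:
    # Here the selection comes from a learned model's judgment, not from a
    # human-authored trigger; no human approves this specific engagement.
    return track.classifier_score >= threshold
```

In both functions a line of code returns the decision; the difference is who authored the condition that pulls the trigger, and whether any specific human approved that specific engagement.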
Man in the loop - are humans still in control?
Military doctrines around the world emphasize that “meaningful human control” is required or, as the US Department of Defense defines it in Directive 3000.09, “appropriate human judgment.”
But what does this mean in practice? If a single soldier supervises five drones and approves each strike, the human is clearly in meaningful control. But if the same soldier supervises 50 drones? Or 100? It’s clear that at this stage the operator’s role is more symbolic than real. The human is supposedly “in the loop” but is not really deciding.
Defensive systems already pose the same dilemma. If hundreds of missiles are fired simultaneously, no human can realistically approve each interception. It is the system that makes the decision. And when a defensive interception system misses, the consequences can still be devastating, as we saw about two months ago, when a Russian missile, fired in response to a Ukrainian drone attack, passed dangerously close to a civilian passenger aircraft.
The responsibility gap
HERE, THE ethical debate becomes legal. International Humanitarian Law (IHL) is built, among other things, on accountability. Commanders are obligated to distinguish between combatants and civilians, and they bear responsibility for the consequences of their decisions. But what happens when a machine makes the choice and errs?
There are three clear candidates for responsibility in such a case:
• The commander who gave the order and was in the field, even if he did not understand the system from an engineering perspective and even if he had no meaningful control over it.
• The developer or engineer who designed the algorithm, sometimes decades before the incident, much as with Israel’s “Arrow” system by Israel Aerospace Industries (IAI), where 35 years passed between the beginning of its development and its first operational use.
• The state that approved the use of the autonomous system and is obligated under Article 36 of Additional Protocol I to the Geneva Conventions, an article adopted by most Western nations, to examine the legality of a weapons system before beginning to use it.
Every answer and every option raises problems. Commanders cannot and should not be blamed for mistakes they could not have anticipated or prevented. Engineers cannot be held criminally responsible for a system that left their hands decades earlier; meanwhile, states only rarely volunteer to take legal responsibility, especially amid the fog of war.
This is the responsibility gap, a dangerous and problematic legal void, since humans will always seek someone to hold responsible. In such cases, the concern is that the fighter in the field, the last person in the chain, will become a “moral crumple zone” and be made to bear responsibility, even though this is the morally wrong choice.
The UN: A lot of talk, yet no decisions and no action
The United Nations has been discussing lethal autonomous weapons for more than a decade within the framework of the Convention on Certain Conventional Weapons (CCW), formally the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects.
Despite numerous meetings, including the establishment of a group of governmental experts, the forum has repeatedly failed to reach a binding decision. Why? Because geopolitical interests are too strong.
Western nations want to keep the advantage of advanced technology for themselves.
China and Russia, on the other hand, are accelerating lethal-autonomy programs with very little transparency, publicly speaking out against autonomy while developing it vigorously and defining the term as narrowly as possible so that development can continue in practice.
Smaller and less technologically developed nations, by contrast, demand a complete ban, primarily due to their technological inferiority, but the more advanced nations oppose the ban and effectively torpedo it.
In the absence of international consensus, each nation sets its own rules. Some require a human in the loop for every lethal action; others, under the radar, are already on the threshold of lethal autonomous weapons systems, with minimal human oversight, if any.
The danger of an accountability gap and the absence of responsibility
THE MOST dangerous outcome is not the autonomous weapon itself, which could actually reduce casualties on the battlefield for both sides. It would be more precise, would not harm innocents, would feel no need for revenge, would not tire or come under stress, and would always obey the rules of engagement.
The problem lies in a world where responsibility becomes blurred or erased, where no one can be held accountable under international law, and the laws of war lose their meaning. The protection of civilians, the cornerstone of international humanitarian law, becomes unenforceable.
It bears remembering that technology always advances faster than legislation. Nuclear weapons were developed many years before arms control agreements and were only later brought under a series of treaties and accords.
The concern is that lethal autonomy will never reach similar arrangements and will remain in legal ambiguity. And once responsibility disappears and these tools begin operating, it will be impossible to turn back the clock, leaving major powers and states unaccountable, and many fighters in the field finding themselves on the defensive.
Restoring the red lines
What can and should be done? The answer is not to ban autonomous systems. This cannot be done and is also inadvisable. Lethal autonomous systems will bring clear advantages to the battlefield – faster response times, protection of soldiers, and far greater precision than humans.
But clear boundaries must be established: preliminary decision-making must be carried out by humans. The ability to intervene must be preserved. The capability to abort the mission must exist. States must accept responsibility and examine lethal tools before they enter the battlefield, according to Article 36.
Thus, even if a lethal autonomous drone operated ostensibly “independently,” its actions must be attributed first and foremost to the state, and only then to the manufacturer and the fighters in the field, who will naturally also bear responsibility if they sent a lethal weapon to an inappropriate area saturated with civilians, or if they defined rules of engagement too broadly.
Additionally, built-in precautionary measures are required, such as a clear user interface, the ability to track the system’s decisions from the moment the human exits the operational loop, and the capability to investigate every action. These and other practical steps can prevent catastrophic errors and ensure that, despite the introduction of autonomy, there will still be accountability for the operation, and not just for the fighter in the field.
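As a sketch of what such precautionary measures might look like in software (illustrative only, with hypothetical names and no real interface assumed), the core elements are an append-only decision log that records every step the system takes after the human leaves the loop, and an abort channel the operator can trigger at any time:

```python
# Illustrative sketch of built-in precautions: decision logging and a human abort switch.
# All names are hypothetical; this mirrors no real system.

import json
import threading
import time
from typing import Any, Iterable


class DecisionRecorder:
    """Append-only log so every machine decision can be investigated later."""

    def __init__(self, path: str):
        self._path = path
        self._lock = threading.Lock()

    def record(self, event: str, **details: Any) -> None:
        entry = {"time": time.time(), "event": event, "details": details}
        with self._lock:
            with open(self._path, "a", encoding="utf-8") as f:
                f.write(json.dumps(entry) + "\n")


class AbortSwitch:
    """Operator-controlled kill switch, checked before any irreversible step."""

    def __init__(self):
        self._aborted = threading.Event()

    def abort(self) -> None:
        self._aborted.set()

    def is_aborted(self) -> bool:
        return self._aborted.is_set()


def engagement_loop(recorder: DecisionRecorder, abort: AbortSwitch,
                    candidate_targets: Iterable[Any]) -> None:
    recorder.record("human_exited_loop")
    for target in candidate_targets:
        if abort.is_aborted():
            recorder.record("mission_aborted_by_operator")
            return
        recorder.record("target_evaluated", target=repr(target))
        # ... evaluation and any further action would happen here, each step logged ...
    recorder.record("mission_complete")
```

The design choice that matters is that the log is written before and independently of any action, so accountability does not depend on the system's own success, and that the abort check sits ahead of every irreversible step.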
Summary - responsibility cannot be outsourced
The future battlefield will include autonomy on a vast scale. Robots will fly, drive, patrol at sea, and dive underwater.
They will operate with minimal human guidance, and at times without any guidance at all. Technology is advancing too rapidly to stop it, and the advantages of autonomous weapons on the battlefield are so pronounced that everyone will want to use these tools.
However, responsibility cannot be outsourced to an algorithm. The laws of war demand accountability, and states must ensure that even in the era of lethal autonomous weapons, a human – not a machine – is the one who bears the burden of responsibility. Because when the trigger is pulled, it is not just about winning a battle but about defending the principles that separate man from machine, and lawful warfare from uncontrolled, prohibited killing.