Lethal autonomous weapons systems & artificial intelligence: Trends, challenges, and policies

We see and expect increased global proliferation of lethal
autonomous weapons. Global coordination is needed to
control and regulate these weapons.


Global trends
In recent years, technological advances in hardware, from electro-optical, infrared, and sonar systems to synthetic aperture radar, and in AI/robotics, from better 3D and visual perception to motion forecasting and planning, have enabled the rapid development of autonomous systems [2]. These advances drive lower costs, greater accessibility, less human error, and accelerated reaction times, expanding the opportunities for use in intelligence, surveillance, and reconnaissance (ISR), navigation, detection, and beyond. AI-enabled devices give unmanned vehicles greater speed and the ability to operate in environments without data links, e.g., underwater or near adversary jamming devices, increasing opportunities to outmaneuver enemy systems [3]. Recent trends include a wider set of nations actively developing LAWS, increasingly offensive applications, positioning for urban conflict rather than open battlefields, and swarming capabilities. Additionally, the capabilities of unmanned aerial vehicles (UAVs) are expanding in their duration of deployment, geographical areas of operation, range of identifiable objects, and ability to coordinate among themselves [2].
The international debate has intensified correspondingly. Nation states through the United Nations, together with individual citizens through international advocacy groups, apply increasing pressure to impose legally binding international treaties regulating such weapons; yet the largest military actors, including the United States, have repeatedly refrained from any such commitments. Consequently, the technological advancement and military adoption of a wide variety of LAWS with increasingly autonomous functions continues.
Timeline & examples of LAWS: LAWS have been evolving for decades. In prior decades, Semi-Automatic Ground Environment (SAGE) air defense systems searched for hostile jets, and warships employed close-in weapon systems (CIWS) to automatically detect, track, and eliminate incoming missiles [4,5]. Perhaps the most famous example of human intervention in automated warfare was the 1983 Soviet nuclear false alarm incident, triggered by automatic target detection. Stanislav Petrov, a Soviet Air Defense lieutenant colonel, chose not to invoke the Soviet policy of compulsory nuclear counter-attack when early warning satellites incorrectly identified high-altitude clouds as intercontinental ballistic missiles traveling from the United States.
The 1980s saw the development and manufacture of third-generation Anti-Tank Guided Weapons (ATGWs), which were designed to be fired upwards into the air and to acquire targets independently using infrared. The European PARS 3 LR [6] and Israeli Spike [7] ATGWs are examples of this type of homing missile. The modern Javelin ATGW, sent by the United States and other nations to the Ukrainian army in 2022, incorporates a control system called the electronic safe arm and fire (ESAF), which directs the missile towards the target after launch [8].
The U.S. has been a prominent innovator in the field of autonomous weapons, pioneering a target tracking and acquisition unit named the Phalanx CIWS, first produced in the 1970s [9]. In the early 2000s, U.S. Patriot missile computers misidentified friendly jets on two separate occasions, leading to friendly fire and deaths [10]. Flawed procedures had not properly accounted for automation error.
In the last decade, South Korea and Israel have built sentry guns capable of recognizing and firing on humans with complete autonomy [10].
In recent years, Russia and Israel have also developed unmanned surface vehicles (USVs) with autonomous navigation and targeting capabilities, and China has developed an autonomous helicopter [5,11]. Lethal UASs and loitering munitions, now widely developed and relatively cheap and accessible, are perhaps the most dangerous LAWS [11]. Previously used for reconnaissance, these aerial systems are designed to autonomously patrol regions, search for enemy radar, aircraft, or people, and intercept them, often with a built-in warhead. Numerous examples are shown in Table I. On 8 March 2021, the Panel of Experts on Libya submitted UN Letter S/2021/229 [12] to the United Nations Security Council. According to the report, on 27 March 2020, the forces of Khalifa Haftar were attacked by at least one Kargu-2 LAWS, documenting the first likely fully autonomous fire-and-forget use of a lethal autonomous weapon. There may have been similar attacks not yet known to the public, given the difficulty of confirming whether such a weapon was truly acting autonomously.
Russia's invasion of Ukraine in 2022 saw the widespread use of the TB2 (Table I) and the Javelin ATGW. It is not clear whether the TB2's autonomous takeoff and cruise functionalities have contributed to the war, but the Javelin missile's "fire-and-forget" capability has enabled small counterattacking forces to rapidly strike and retreat at distance [13].

Dual-use technology:
Ref. [14] describes the "dual-use dilemma" of artificial intelligence: the same technology offers both critical civilian and military applications. The same visual perception, human identification, and tracking tools that self-driving cars use to steer clear of pedestrians are easily re-tasked with finding and detonating on military targets.
Autonomous systems also extend many positive benefits, from clearing land mines and supplying contested territories to identifying and safeguarding non-combatants and limiting collateral damage. These applications rarely require automated targeting or firing. DART, the Dynamic Analysis and Replanning Tool, which used AI to optimize logistics and scheduling during Operation Desert Storm, is reported to have offset the expense of all DARPA funding for AI research over the prior 30 years [15]. Given the ever-increasing accessibility of open-source AI tools and technologies, disentangling harmful from beneficial applications may be more challenging than for nuclear, chemical, or biological weapons.

Shades of autonomy:
The simplest form of autonomy is to enact decisions in an environment without human instruction, such as a land mine automatically triggering on contact. Ref. [16] discusses the dimensions of machine autonomy, including the human-machine command-control relationship, the sophistication of decision-making, and the autonomous function. In many command-control relationships, autonomous functions are largely assistive to humans: alerting them to detected signals, identifying objects, suggesting schedules or routes, or aiding in targeting. Increasingly, though, this relationship is inverted, and the human merely assists the machine, with veto or override capacity [17]. Prior work reports that human supervision over machines is rife with unsolved challenges [18]. In fully autonomous systems, the machine conducts its own actions based either on pre-programmed human instructions or on objective(s) with a set of constraints rather than explicit instructions, a setting prone to "specification gaming" as described in Ref. [19].
In terms of the autonomous function, many UASs and USVs incorporate some autonomy of navigation: traveling a route or arriving at a destination without human piloting. Navigation usually requires the ability to identify environmental conditions (obstacles, humans, other aircraft or vehicles) to navigate around. With autonomy of identification, the machine detects and decides the identity or composition of its environment, which is required to some extent in navigation to avoid obstacles. Autonomy of target selection ("targeting") and autonomy of firing are the most dangerous, allowing machines to choose their own targets and then initiate fire of their own accord. Only systems with these autonomous target selection and attack capabilities (critical functions) are considered LAWS.
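To make the taxonomy concrete, the following minimal Python sketch (all class and function names here are our own illustrative inventions, not drawn from any cited system) encodes the dimensions above and the definitional point that only autonomous target selection and firing qualify a system as a LAWS:

```python
from dataclasses import dataclass
from enum import Enum, auto

class ControlRelationship(Enum):
    """Human-machine command-control relationship."""
    HUMAN_IN_THE_LOOP = auto()      # human selects or confirms each target
    HUMAN_ON_THE_LOOP = auto()      # human retains veto/override capacity
    HUMAN_OUT_OF_THE_LOOP = auto()  # no human input from identification to engagement

@dataclass
class WeaponSystem:
    name: str
    autonomous_navigation: bool      # travels a route without human piloting
    autonomous_identification: bool  # decides the identity of objects around it
    autonomous_targeting: bool       # selects its own targets
    autonomous_firing: bool          # initiates fire of its own accord
    control: ControlRelationship

def is_laws(system: WeaponSystem) -> bool:
    """Only systems whose critical functions (target selection and attack)
    are autonomous are considered LAWS; autonomy of navigation or
    identification alone does not qualify."""
    return system.autonomous_targeting and system.autonomous_firing

# A hypothetical loitering munition that patrols and engages on its own:
munition = WeaponSystem("hypothetical loitering munition",
                        True, True, True, True,
                        ControlRelationship.HUMAN_OUT_OF_THE_LOOP)
assert is_laws(munition)

# An assistive ISR drone with autonomous navigation and identification only:
isr_drone = WeaponSystem("hypothetical ISR drone",
                         True, True, False, False,
                         ControlRelationship.HUMAN_IN_THE_LOOP)
assert not is_laws(isr_drone)
```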
Accessibility: The costs of lethal autonomous weaponry have been driven down by technological advances. Modern loitering munitions are also reusable if they have not detonated.
Each modern FGM-148 Javelin ATGW costs the U.S. Department of Defense $175k [20], and is large and difficult to assemble quickly and stealthily. In contrast, open-source image recognition and navigation software attached to cheap drones and homemade explosives suddenly makes similar weapons, or fleets of these weapons, more accessible to actors with fewer resources.
Interpretability: Among modern machine-learned systems, there exists an implicit trade-off between accuracy and the interpretability of a model's decisions. Large artificial neural networks offer the best performance on most sophisticated tasks, including visual perception, forecasting, and motion planning, but most architectures currently provide limited human-understandable explanation for a model's output, effectively becoming a "black box". Additionally, neural network models are often poorly calibrated, particularly in out-of-distribution environments, meaning their generated confidence scores can overstate or understate their chance of error. In certain medical applications, by contrast, automated systems are required to give some interpretable explanation of their decisions.
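Calibration itself can be quantified. As a rough illustration, the sketch below computes the standard expected calibration error (ECE), the confidence-weighted gap between a model's stated confidence and its observed accuracy; the data here are synthetic stand-ins for a detector's outputs, not from any cited system:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: the confidence-weighted gap between stated confidence and
    empirical accuracy, binned by confidence. Near 0 for a well-calibrated
    model; large when the model over- or understates its chance of error."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Synthetic stand-in for an overconfident detector: it reports 99%
# confidence on every decision but is right only 80% of the time.
conf = np.full(1000, 0.99)
hits = np.random.default_rng(0).random(1000) < 0.80
print(round(expected_calibration_error(conf, hits), 3))  # ~0.19
```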
As an alternative to relying on such opaque automated judgement, the United States requires a human-in-the-loop for any lethal systems [1]. However, it is unclear whether the speed and information available to these human agents is always sufficient to rectify automated mistakes. The documented phenomenon of "automation bias" shows that humans begin to trust automated systems excessively after extended use, biasing their judgement [21].

Accountability:
The NGO Human Rights Watch affirms "an accountability gap" where "neither criminal law nor civil law guarantees adequate accountability" for actors involved in the chain of autonomous system design or command [22]. Ultimately, experts agree that fully autonomous systems cannot feasibly inherit liability from their designers, despite replacing some human decision-making. In the event of catastrophic errors in machine judgement, it is therefore uncertain whether engineers, product designers, users, or leadership teams are to be held responsible. Breaching the Geneva Convention's Law of Armed Conflict requires some evidence that unlawful acts due to AI were foreseeable [5]. The lack of model interpretability may complicate such a verdict, given inadequate explanations for the cause of events.
Attribution & enforcement: Related to the accountability gap, attacks carried out by LAWS complicate attribution and enforcement. Duplicitous actors may use autonomous weapons to reduce the traceability of their attacks, or blame autonomous system errors to disguise their intentions. Responses to seemingly unprovoked attacks will have to contend with the possibility of misrepresentation or misdirection, which autonomous systems conceal more easily. This poses a problem for deterrence; system hacks or denial-of-service attacks that cannot always be successfully traced serve as a cyberwarfare analog. Others have warned that without traceability, and particularly because LAWS are cheap and replaceable, robot warfare may engender more rapid escalation in future confrontations [23,24].
Additionally, a broader problem for compliance is that, without data access, it is difficult to prove whether a weapon was operating autonomously. There are few extrinsic factors that separate an autonomously acting weapon from a human-operated one.
Political stances for the regulation of LAWS: Over 70 nations have called for a fundamental ban on fully autonomous weapons, together with regulation of autonomy in weapons to ensure compliance with legal and moral standards. These nations include Argentina, Austria, Brazil, Egypt, New Zealand, Norway, Pakistan, and Switzerland (see Ref. [25] for the full list). China has called for a treaty while also investing heavily in LAWS development.
Prominent humanitarian figures, NGOs, and advocacy groups also demand regulation of autonomous weapons, especially with regard to targeting and firing. They include United Nations Secretary-General António Guterres; Amnesty International; Human Rights Watch; the Campaign to Stop Killer Robots, itself a coalition of hundreds of organizations; and most recently the International Committee of the Red Cross (ICRC). Secretary-General Guterres has said that "machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant" [26]. In lieu of full autonomy, the UN and other organizations have advocated for "meaningful human control" within the Convention on Certain Conventional Weapons (CCW) [27]. Similarly, the ICRC "recommends that states adopt new, legally binding rules to regulate autonomous weapon systems to ensure that sufficient human control and judgement is retained in the use of force. It is the ICRC's view that this will require prohibiting certain types of autonomous weapon systems and strictly regulating all others" [28]. Experts note that the eventual objective is a legally binding treaty or, absent full agreement, the stigmatization of these weapons, creating an international norm against their use.

Political stances against the regulation of LAWS:
Notably absent among the nations calling for regulation of LAWS are eight powerful and globally engaged militaries: Australia, India, Israel, Russia, South Korea, Turkey, the United Kingdom, and the United States [29]. These nations actively invest in a growing arms race for technological superiority [25].
The U.S. military currently requires a human-in-the-loop for any lethal autonomous weaponry [1]; however, active development areas seem to defy this stipulation. For example, there is funding available for swarming technology, such as a 2017 Pentagon research proposal requesting a "Cluster UAS Smart Munition for Missile Deployment" [30]. Ref. [10] argues that a large swarm of UASs designed to rapidly find and destroy targets could not feasibly be lethal with meaningful human control.
Military experts also argue that a treaty or ban would be futile given the benefits of speed and protection conveyed by LAWS [31]. The Final Report of the National Security Commission on Artificial Intelligence, commissioned by Congress, even argues that a ban would complicate enforcement for weapons already in the U.S. arsenal, and therefore recommends against one [32]. However, members of this commission have noted conflicts of interest [33].

Autonomous capabilities and intelligence
System design (hardware): LAWS are increasingly outfitted with more capable sensor hardware, improving their autonomous perception functions. Aside from arrays of optical cameras, USVs are frequently paired with 3D LiDAR, and UASs with 2D laser range finders. Newer candidates for inclusion are depth, light-field, and event-based cameras, as well as magnetic, olfaction, and thermal sensors [34]. The Kargu-2, a small UAS, is just 70x70x40 cm and weighs 7 kg. It has carbon-fibre blades, a small computer chip, and a portable controller, and houses swappable explosives inside its main frame [35].
System design (algorithms for autonomy): Autonomous navigation remains a challenging and active area of development, propelled by the commercial prospect of self-driving cars. Typically, the objective is divided into four underlying tasks for artificial systems [36]-[38], each a critical element of navigation, targeting, tracking, and other autonomous functions:
1) Perception. Also referred to as "detection", this step predicts the presence of surrounding physical objects and their 3D bounding boxes from visual and range sensory signals.
2) Object tracking. This step links detected objects temporally, over past frames. Linking detections over time enables modeling of the trajectory, velocity, and, if applicable, the supposed intent of other mobile physical objects.
3) Motion forecasting. This predicts the future positions of the objects tracked and resolved from previous frames.
4) Motion planning. Given a representation of the scene and the likely distribution of object motions, the vehicle plans its own motion in accordance with short- and long-term objectives: safely avoiding collisions, and reaching its destination.
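As a rough sketch of how these four stages hand off to one another, consider the toy Python pipeline below. The interfaces and the trivial constant-velocity forecaster are illustrative assumptions, not taken from any cited system; in a real stack each stage would be a learned model over raw sensor data:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:            # 1) Perception output: an object and its position
    object_id: int
    category: str
    position: tuple         # (x, y, z); a full system would also carry box extents

@dataclass
class Track:                # 2) Tracking output: detections linked over time
    object_id: int
    positions: list = field(default_factory=list)

def perceive(frame):
    """Perception: in practice a learned detector over camera/LiDAR signals.
    Here the 'sensor frame' is already a list of labeled objects (toy stand-in)."""
    return [Detection(o["id"], o["category"], o["pos"]) for o in frame]

def track(detections, tracks):
    """Object tracking: link new detections to existing tracks by object id."""
    by_id = {t.object_id: t for t in tracks}
    for d in detections:
        by_id.setdefault(d.object_id, Track(d.object_id)).positions.append(d.position)
    return list(by_id.values())

def forecast(tracks, steps=3):
    """Motion forecasting: constant-velocity extrapolation from the last two frames."""
    out = {}
    for t in tracks:
        p = t.positions
        v = tuple(b - a for a, b in zip(p[-2], p[-1])) if len(p) > 1 else (0.0, 0.0, 0.0)
        out[t.object_id] = [tuple(c + vi * k for c, vi in zip(p[-1], v))
                            for k in range(1, steps + 1)]
    return out

def plan(forecasts, goal, clearance=2.0):
    """Motion planning: proceed toward the goal unless any object is forecast
    to come within `clearance` of it; a real planner would re-route instead."""
    for positions in forecasts.values():
        if any(sum((g - c) ** 2 for g, c in zip(goal, pos)) ** 0.5 < clearance
               for pos in positions):
            return "hold"
    return "proceed"

def autonomy_step(frame, prior_tracks, goal):
    """One tick of the cascade; every intermediate output is inspectable."""
    tracks = track(perceive(frame), prior_tracks)
    return plan(forecast(tracks), goal), tracks

# Two ticks with one toy object moving toward the goal at (5, 0, 0):
tracks = []
for frame in ([{"id": 1, "category": "vehicle", "pos": (1.0, 0.0, 0.0)}],
              [{"id": 1, "category": "vehicle", "pos": (2.0, 0.0, 0.0)}]):
    action, tracks = autonomy_step(frame, tracks, goal=(5.0, 0.0, 0.0))
print(action)  # "hold": the object is forecast to reach the goal region
```

The design point of the modular cascade is that a human engineer can inspect or intervene at each hand-off, which is precisely what is lost in the end-to-end models discussed next.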
Advances in hardware and machine learning have led to end-to-end neural network models, rather than cascades of specialist models, gaining greater performance at the expense of modularity, interpretability, and a human engineer's ability to impose intermediate interventions.

Perceptual generalization & robustness:
There remain a few technological obstacles to more reliable or fully autonomous systems. The first is the lack of robustness in perception systems, which regularly fail to detect a real object, predict an object that is not there, or incorrectly identify an object's type. Perception often fails when confronted with environments dissimilar from those it was trained on, a problem known as poor generalization [39,40]. If the lighting conditions, the distribution and appearance of surrounding objects (e.g., plants, buildings, people), or the behaviour of those objects differ from training, catastrophic errors may regularly follow. The presence of snow, fog, mirrors, birds, unfamiliar urban planning, or other new features of the environment is often sufficient to trigger errors [40]. Accurate perception, localization, and scene understanding are critical to robust, collision-free motion planning in diverse conditions. Consequently, engineers prefer to train autonomous systems in the particular environments in which they will operate, though this may not always be possible for real and unanticipated conflict zones, nor is it a guaranteed solution, as environments change during conflict: human behavior shifts, barriers are erected, and buildings are destroyed. As a result, the data autonomous systems are trained on are almost never sufficiently complete, representative, and of high enough quality.
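The effect of distribution shift is easy to reproduce even in a toy setting. The sketch below (entirely synthetic, for illustration only) trains a nearest-centroid classifier on two Gaussian classes, then evaluates it on data whose distribution has shifted; accuracy collapses even though the task is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    """Two Gaussian classes; `shift` translates the whole test environment,
    standing in for changed lighting, terrain, or object appearance."""
    X0 = rng.normal([0.0, 0.0], 1.0, (n, 2)) + shift
    X1 = rng.normal([3.0, 3.0], 1.0, (n, 2)) + shift
    return np.vstack([X0, X1]), np.repeat([0, 1], n)

# "Train" a nearest-centroid classifier on the original environment.
X_train, y_train = sample(500)
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def accuracy(X, y):
    pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
    return (pred == y).mean()

X_test, y_test = sample(500)               # same distribution as training
X_shift, y_shift = sample(500, shift=2.0)  # the environment has changed
print(f"in-distribution: {accuracy(X_test, y_test):.2f}, "
      f"shifted: {accuracy(X_shift, y_shift):.2f}")  # e.g. ~0.98 vs ~0.5
```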
Further, operationalizing these systems outside of test environments comes with many unforeseen socio-political risks, aside from the technological ones described above [41]. And the identification or targeting of humans is itself a poorly defined task [42].

Vulnerability to adversarial examples:
In addition to natural errors, artificial neural networks are known to be susceptible to intentionally "adversarial" examples: minor input perturbations, which may be imperceptible to a human, crafted to induce erroneous outputs [43,44].
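As an illustration of how simple such attacks can be, the sketch below implements the widely known fast gradient sign method (FGSM), a single bounded gradient step that increases a classifier's loss. The toy untrained model stands in for a real perception network and is our own placeholder, not a system from the cited literature:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Fast gradient sign method: one gradient step of size eps in the
    direction that increases the classifier's loss. The perturbation is
    bounded per-pixel by eps, so for small eps it is typically
    imperceptible to a human observer."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demonstration: an untrained linear "classifier" (a placeholder for
# a real perception model) on random image-like inputs.
torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
label = model(x).argmax(dim=1)   # treat the current predictions as "truth"
x_adv = fgsm(model, x, label)
flipped = (model(x_adv).argmax(dim=1) != label).sum().item()
print(f"{flipped} of 4 predictions flipped by an imperceptible perturbation")
```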
Prior literature has extensively demonstrated the dangers of visual and non-visual adversarial examples [45]-[47], and recently this has been extended to physical adversarial examples [48]-[50]. The authors of Ref. [48] successfully construct perturbations on physical objects that fool image classifiers under various real-world conditions. These physical perturbations can effectively fool state-of-the-art perception systems from as close as 30 feet, posing reliability challenges to autonomous systems in dense, human-populated environments.
Where visual perception vulnerabilities exist, 3D LiDAR sensors are often seen as a defensive solution. However, Ref. [51] demonstrates a physically realizable method to fool LiDAR detectors as well, generating a 3D object that, when mounted on a vehicle, reliably renders it invisible to LiDAR detection systems.
So far we have discussed the robustness of visual and LiDAR-based perception in isolation. The authors of Ref. [52] successfully demonstrate attacks against end-to-end autonomous systems that produce final physical consequences, in this case lane violations or crashes by autonomous vehicles. These attacks are physically realizable, simple to implement, and appear inconspicuous to humans.
Altogether, autonomous systems face numerous unsolved challenges in robustness, generalization, and vulnerability to attack. Though much literature is devoted to defenses, the unbounded nature of possible attacks means there has been no panacea. In many cases, only human control, slowing down the decision loop, can reliably diminish the potential for catastrophic error, e.g., target mis-identification or the unrecognized presence of civilians.
Decision making: Aside from machine perception, LAWS are forced to make nuanced judgements on proportionality and distinction. Given these limitations, commanders must pre-program such decisions in advance of deployment. The greater the distance between the codified decisions and parameters and the actual attack, the greater the hazard of unintended consequences.
Facial recognition: Manufacturers of military drones now offer integrated facial recognition software for automated target identification [35]. State-of-the-art facial recognition systems are built with the same basic ingredients as the perception/detection systems described above, and are therefore equally susceptible to their risks and challenges: lack of robustness, poor generalization, and adversarial vulnerabilities.
The task of precisely identifying human facial features must contend with uncontrolled illumination, occlusion, pose variation, variability in facial expressions, makeup, facial hair, and clothing [53,54]. In uncontrolled settings, over multiple frames of video footage, this task is highly error-prone. Refs. [55,56] highlight severe inequity in state-of-the-art commercial facial analysis systems from companies such as IBM, Microsoft, and Amazon, in a highly controlled setting, for a simple task: gender identification. As late as 2018, this simple task exhibited an error rate of less than 1% for light-skinned males, compared to 35% for darker-skinned females.
Combined with documented abuses and unethical applications of facial surveillance systems domestically and internationally, which threaten rights to privacy, freedom of expression, freedom of association, and due process [57,58], these results urge swift action to regulate, and possibly prohibit, the use of facial recognition software for automated, online, or lethal decisions.
Table II provides a summary of the key concepts, risks, and challenges for LAWS described in the previous sections.

Autonomous weaponry policy
International law: Of all areas of international law, international humanitarian law (IHL) is the most specific and developed, largely out of necessity. Products of the Hague Conferences and Geneva Conventions have codified many of the customs that govern conduct during war, as well as laws governing the declaration of war. However, as in previous periods of transformation in military technology, international law lags behind the development and use of new classes of weaponry.
To better assess the development of international policy on LAWS, past revolutions in military weaponry can provide a framework for analysis. Specifically, emerging nuclear and cyber technologies each created their own strategic environments. While a great many similarities exist between LAWS and these past technologies, the incorporation of autonomy into weapon system targeting and engagement can be applied to all existing kinetic systems, rather than creating its own class of weaponry or domain. LAWS intersect with these strategic environments rather than constituting a distinct environment of their own. The incorporation of autonomy into existing weapons systems influences how states operate in strategic environments, e.g., conventional, nuclear, and cyber, but it does not create a novel set of conditions affecting security behaviors [59]. As such, as international law continues to adapt to LAWS, policy makers will have to consider how autonomy impacts each strategic environment individually, as well as the impact of autonomous weapons systems on broader trends in warfare. While the current state of IHL imposes varying structure on existing strategic environments, its applicability to autonomous systems is ambiguous.
United States policy: U.S. Department of Defense Directive 3000.09, Autonomy in Weapon Systems, defines two echelons of weapons with integrated autonomous systems: autonomous weapons and semi-autonomous weapons. The key differentiating factor between the two is human influence in a weapon system's kill chain, with autonomous weapons requiring no human input from target identification to engagement (sometimes called "human out of the loop" systems). The class of semi-autonomous weapons is further broken down into "human on the loop" systems, where a human can intervene between target identification and engagement, and "human in the loop" systems, where a human is tasked with selecting or confirming a specific target [1,60].
This framework provides greater clarity regarding the current state of U.S. policy. According to a report from the Congressional Research Service, the United States does not currently have any lethal autonomous weapons in its inventory. However, the same report identified that no laws prohibit the United States from developing or employing LAWS [60]. Further, DOD Directive 3000.09 imposes the ambiguous restriction of "appropriate levels of human judgment over the use of force" on weapons to be developed and employed, with no clarification of what is "appropriate". Despite the lack of clarity in official U.S. policy, the U.S. Department of Defense's modernization priorities, which include autonomy as a well-funded line of effort, shed light on the practical efforts of the U.S. national security apparatus [61,62].

Policy development challenges: In March 2019, the United States submitted a report to the UN Convention on Certain Conventional Weapons (CCW) on the application of IHL to the integration of autonomy into weapon systems [63]. The report specifically identified the three requirements of IHL that are most ambiguous when applied to LAWS [63,64]: distinction between combatants and civilians, proportionality of attacks relative to the military advantage gained, and precaution in attacks, when feasible, to reduce the risk of civilian casualties.
Since IHL applies to combatants rather than weapon systems, there is a gap in how kill chain decisions are governed for LAWS. To provide more insight, the report enumerated three generalized scenarios for autonomy in the employment of a weapon system [63]:
1) An autonomous function of a weapon system could be used to more accurately engage the already-intended target of a commander.
2) An autonomous function could provide information to a commander to inform target selection.
3) An autonomous function of a weapon system could allow for the selection and engagement of targets that were unknown to a commander prior to the function's output.
The CCW report clarifies how IHL can be applied to LAWS in each of these situations. In the first scenario, the target remains consistent, so the use of autonomy is not considered divergent from the use of non-autonomous weaponry. Furthermore, weapons systems with non-autonomous targeting functions in their kill chain are largely not considered within the scope of LAWS. Even so, the justification of the first scenario as within the bounds of IHL assumes autonomous systems are more reliable than humans in the engagement stage of the kill chain. Autonomous driving has shown this is not always true, with inherent weaknesses and biases in autonomous targeting and tracking [65].
The second scenario describes an autonomous function that supplements the targeting process and can contribute to more informed targeting. In its analysis, the report identifies prerequisites for this type of autonomy to accord with IHL: a commander's understanding of the system's accuracy, appropriate synthesis of other relevant information, and an urgency to make a decision. The first requirement, however, raises questions about practical implementation: it requires that tacticians be sufficiently versed in autonomous systems to understand their accuracy and limitations, and that they not fall prey to automation bias, at a time when the human interpretability of autonomous systems is often sacrificed for greater accuracy [21].
The final scenario posed by the report departs furthest from the current standard practices of warfare. In analyzing this scenario, however, the report draws a strong comparison to the use of anti-tank mines, which are used in accordance with IHL without "an express intention at the time of emplacing or activating" [63]. In this comparison, the principles of distinction and proportionality in autonomous weapons are deemed not to diverge significantly enough from conventional equivalents for additional restrictions to be necessary. It should be noted, though, that the static and predictable nature of anti-tank mines makes for a dubious comparison with loitering munitions, which operate over larger geographic and temporal ranges and engage their environment in more complex and potentially unpredictable ways.
As efforts continue to reconcile existing legal standards with developments in autonomous weaponry, implementation remains a central challenge. Any restriction on LAWS short of a complete ban will need to incorporate validation, enforcement, and accountability processes to assess compliance. These methods raise open questions, including how to prevent actors from skirting enforcement and how to integrate accountability methods into the development pipelines of highly complex weapon systems [66].
Future steps: The U.S. government's opposition to increased regulation of autonomous weaponry was laid out in a 2018 white paper to the UN CCW [67]. Its argument centered on the potential benefits of autonomous functions, both predictable and unforeseeable. But while the paper cites efforts to stigmatize or ban autonomous weapon systems as contrary to humanitarian innovation, it could be seen to present a false dichotomy.
The report details benefits that might arise from weapon system autonomy, but fails to consider alternatives short of an outright ban of LAWS that could reduce collateral damage. Although regulation would likely require a more technically fluent and finely tuned approach, the absence of U.S. commitment, or leadership, in creating nuanced regulation of the development and employment of LAWS may discourage serious international engagement. A committed effort at regulation would not only avoid the growing risks of LAWS, but could also encourage more scoped innovation toward the safer development of lethal autonomous weapons.

Conclusions
The wide and rapid development of lethal autonomous weapons heralds a new and perilous era of technological warfare. This work emphasizes the unaccounted-for risks: the inevitability of development, accessibility, ambiguous attribution and enforcement, and the deployment of unreliable and uninterpretable lethal weapons. The nature and limitations of these weapon systems indicate a high likelihood that they will contravene international law, failing to recognize surrendering soldiers or accidentally causing mass civilian casualties. Current policy remains unequipped to handle such risks, and while international pressure to limit their use grows, commitment to regulation may not acquire sufficient momentum until after more serious catastrophes.


TABLE I: Selected Aerial Military Systems with Autonomous Capabilities. Sourced from [2] and publicly available knowledge.

TABLE II: Risks and Challenges to the development and regulation of LAWS and semi-autonomous ISR systems.

Dual-Use Technology: Technologies for object identification, tracking, and navigation, which are critical for civilian applications, are often adapted for lethal military applications.
Accessibility: The hardware and software components for LAWS are increasingly affordable and accessible.
Accountability: Legal ambiguity and technological limitations create an accountability gap for the actions of autonomous agents.
Attribution & Compliance: Reduced traceability and interpretability of LAWS complicate attribution, regulation, and enforcement of the laws governing conflict.
Ethics: The automation of violence, especially in targeting humans, is ethically dubious.
Political Fragmentation: The United States and other prominent political players refuse to engage in negotiating international treaties regulating LAWS.
Interpretability: State-of-the-art machine-learned systems offer little justification for, and few diagnostic tools to probe, their decisions.
Generalization & Robustness: Machine perception, tracking, and navigation adapt poorly to unseen environments or circumstances.
Decision Making: The gap in time between pre-programming decision criteria and the autonomous attack escalates the risks of unintended consequences.
Adversarial Vulnerability: Machine-learned systems are extremely vulnerable to intentional perturbations in physical environments.
Facial Recognition: Facial recognition, gait recognition, and phone sensing technologies are increasingly considered for automated identification and targeting in LAWS, as well as for automated surveillance.