We have long warned against digital dehumanisation in warfare, and we are present in Geneva to call on states to always require meaningful human intervention in the deployment of weapons systems.
The growing use of artificial intelligence and increasingly autonomous weapons systems are deeply worrying developments; we are seeing their consequences in Gaza and Ukraine.
Decisions over life and death must not be left to machines. After more than ten years of talks on autonomous weapons systems, states must start negotiations on a treaty that makes human control over the use of force mandatory.
On 3 September, PAX staff member Roos Boer delivered a statement on this on behalf of the Stop Killer Robots campaign. You can watch and read the statement below.
"[Human] control as a principle in international law is crucial for drawing red lines to protect our humanity, and for establishing clearly what we find fundamentally ethically unacceptable as an int'l community." – @rboer.bsky.social delivers #StopKillerRobots statement at #CCWUN @paxvoorvrede.nl
— Stop Killer Robots (@stopkillerrobots.bsky.social) 2 September 2025 at 16:48
Statement on behalf of the Campaign to Stop Killer Robots to the CCW GGE LAWS on Box III, 3 September 2025.
Delivered by Roos Boer (PAX).
Thank you Chair.
We welcome the progress states have made through the rolling text in their common understanding of some of the basic components of meaningful human control (or context-appropriate human control and judgement). Many of the elements in box III, paragraph 5 align with some of the basic parameters that we believe must now be fully developed into rules in a legal instrument, in order to uphold basic legal and ethical principles and ensure responsibility and accountability.
In box III, these elements include the consideration of predictability, reliability, traceability and explainability; the importance of ethical as well as legal assessments; and the placing of limitations on the types of targets, duration, geographical scope and scale of operation of weapons systems.
We recommend that all the elements in box III are retained in the text at this point, as a basis for states’ subsequent and more detailed negotiations on a legally binding instrument. This particularly applies to the elements in paragraphs 5 and 6 – with one exception that we will raise shortly.
We welcome the recognition by states in the rolling text of the fact that ‘context-appropriate human control and judgment’ is needed to uphold international law: the linkage between such control and fulfilling basic legal principles is important, and must be preserved in states’ discussions and negotiations.
However, we do not believe that ensuring compliance with existing law is the sole purpose of establishing international rules to ensure meaningful human control in the use of force.
Rather, enshrining such control as a principle in international law is crucial for drawing red lines to protect our humanity, and for establishing clearly what we find fundamentally ethically unacceptable as an international community when it comes to states and companies pursuing increasing autonomy in weapons systems. The pursuit of increasing autonomy represents a fundamental shift in the use of force: it raises much bigger questions than IHL compliance alone.
For this reason, we were disappointed to see the changes to paragraphs 5 and 6 in box III that remove the concept of a general prohibition on the use of systems without meaningful human control. We also regret that the current text more explicitly restricts the purpose of box III’s elements to the goal of upholding compliance with IHL.
We were also deeply concerned to see the addition of paragraph 6.D.v.
Merely limiting “real-time machine learning with regard to target selection and engagement functions” will not be sufficient to prevent a loss of sufficient understanding and meaningful control over systems.
We continue to urge states to fully consider the issues raised by anti-personnel autonomous weapons systems and how these must be dealt with. We believe that a specific prohibition on systems that target people, within a wider legal instrument on autonomous weapons systems, is the only ethically and legally viable response.
Thank you Chair.