10 Amendments of Lukas MANDL related to 2020/2013(INI)
Amendment 2 #
Draft opinion
Paragraph 1
1. Highlights that the security and defence policies of the European Union and its Member States are guided by the principles of the UN Charter and international law, by the principles of human rights and respect for human dignity, and by a common understanding of the universal values of the inviolable and inalienable rights of the human person, of freedom, of democracy, of equality and of the rule of law; highlights that all defence-related efforts within the Union framework must respect these universal values while promoting peace, security and progress in Europe and in the world;
Amendment 9 #
Draft opinion
Paragraph 2
2. Calls on the UN and the wider international community to undertake all necessary efforts to ensure that the application of Artificial Intelligence (AI) in military affairs and the study, development and use of AI-enabled systems by the military stay within the strict limits set by international law and international humanitarian law (IHL);
Amendment 16 #
Draft opinion
Paragraph 3
3. Considers in particular that AI-enabled systems must be designed in a way that abides by the principles of the Martens Clause, and must never breach or be permitted to breach the dictates of the public conscience and humanity; considers that this is the ultimate test for the admissibility of an AI-enabled system in warfare; calls on the AI research community to integrate this principle in all AI-enabled systems intended to be used in warfare; considers that no authority can issue a derogation from those principles or certify an AI-enabled system;
Amendment 25 #
Draft opinion
Paragraph 4
4. Stresses that states, parties to a conflict and individuals, when employing AI-enabled systems in warfare, must at all times adhere to their obligations under the applicable international law and must remain accountable for actions resulting from the use of such systems; recalls that AI machines can under no circumstances be held accountable for intended, unintended or undesirable effects caused by AI-enabled systems on the battlefield;
Amendment 28 #
Draft opinion
Paragraph 5
5. Highlights the need to take duly into account, during the design, development, testing and deployment phases of an AI-enabled system, potential risks as regards, in particular, civilian casualties and injury, accidental loss of life, and damage to civilian infrastructure, but also risks related to unintended engagement, manipulation, proliferation, cyber-attack or interference and acquisition by terrorist groups, as well as to escalatory and destabilising effects;
Amendment 36 #
Draft opinion
Paragraph 6
6. Stresses the need for robust testing and evaluation systems based on clear legal norms to ensure that during the entire lifecycle of AI-enabled systems in the military domain, in particular during the phases of human-machine interaction, machine learning and adjusting and adapting to new circumstances, the systems do not go beyond the intended limits and will at all times be usable in a way that complies with the applicable international law;
Amendment 44 #
Draft opinion
Paragraph 7
7. Highlights that any AI-enabled system used in the military domain must, as a minimum set of requirements, be able to distinguish between combatants and non-combatants on the battlefield, and between civilian and military objectives, not have indiscriminate effects, not cause unnecessary suffering to persons, not be biased or be trained on biased data, and be in compliance with the IHL principles of humanity, proportionality in the use of force, military necessity, and precaution in attack;
Amendment 47 #
Draft opinion
Paragraph 8
8. Stresses that in the use of AI-enabled systems in security and defence, full situational understanding of the human operator, as well as the human operator's ability to detect possible changes in circumstances and to discontinue an attack, are needed to ensure that IHL principles, in particular distinction, proportionality and precaution in attack, are fully applied across the entire chain of command and control; stresses that AI-enabled systems must allow the military leadership to exert control and to assume its full responsibility at all times;
Amendment 53 #
Draft opinion
Paragraph 9
9. Encourages states to carry out an assessment of how autonomous military devices have contributed to their national security and what their national security can gain from AI-enabled weapon systems, in particular as regards the potential of such technologies to reduce human error, thus enhancing the implementation of IHL and its principles;
Amendment 59 #
Draft opinion
Paragraph 10
10. Calls on the HR/VP, in the framework of the ongoing discussions on the international regulation of lethal autonomous weapon systems by states parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW), to help streamline the global debate on core issues and definitions where consensus has not been reached, in particular as regards emerging technologies in the area of lethal autonomous weapons, ethical and legal questions of human control, in particular with regard to critical functions such as target selection and engagement, retention of human responsibility and accountability, and the necessary degree of human/machine interaction, including the concept of human control and human judgment, during the different stages of the lifecycle of an AI-enabled weapon, with a view to agreeing tangible recommendations on the clarification, consideration and development of aspects of the normative and operational framework on emerging technologies in the area of lethal autonomous weapons systems.