
6 Amendments of Eva KAILI related to 2020/2016(INI)

Amendment 86 #
Motion for a resolution
Paragraph 2
2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non-discrimination, freedom of movement, the presumption of innocence and the right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law, and the right to an effective remedy and a fair trial; any artificial intelligence, robotics and related technologies shall be developed, deployed or used in a manner that prevents the possible identification of individuals from data that were previously processed on the basis of anonymity or pseudonymity, and prevents the generation of new, inferred, potentially sensitive data and forms of categorisation through automated means;
2020/07/20
Committee: LIBE
Amendment 117 #
Motion for a resolution
Paragraph 4 a (new)
4 a. Suggests that special attention be paid to the technological advancement of drones used in police and military operations; urges the Commission to create a code of conduct on their use, given the severe harm they could cause to human life if weaponised in the future;
2020/07/20
Committee: LIBE
Amendment 121 #
Motion for a resolution
Paragraph 5
5. Stresses the potential for bias and discrimination arising from the use of machine learning and AI applications; notes that biases can be inherent in the underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real-world settings; underlines that any software, algorithm or data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall protect the human rights of individuals against violations by AI actors throughout the entire lifecycle of AI systems; a description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data gathering process (an illustrative sketch of such a record follows this amendment);
2020/07/20
Committee: LIBE
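For illustration only, and not part of the amendment text: a minimal Python sketch, under stated assumptions, of how the builders of an algorithm might maintain a record of how training data was collected together with a crude exploration of representation bias. The names TrainingDataRecord and representation_report are hypothetical and are not prescribed by the amendment or by any existing standard.

```python
# Hypothetical sketch: a "datasheet" record for a training set, pairing a
# description of how the data was collected with a simple representation check.
from dataclasses import dataclass, field
from collections import Counter
from typing import Dict, List


@dataclass
class TrainingDataRecord:
    dataset_name: str
    collection_method: str            # how the data was gathered (human or algorithmic)
    collection_period: str            # e.g. "2015-2019 court registry extracts"
    known_limitations: List[str] = field(default_factory=list)


def representation_report(samples: List[Dict[str, str]], attribute: str) -> Dict[str, float]:
    """Share of each group for a given attribute, as one crude bias indicator."""
    counts = Counter(sample[attribute] for sample in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}


if __name__ == "__main__":
    record = TrainingDataRecord(
        dataset_name="historical_case_outcomes",
        collection_method="manual extraction from court registries",
        collection_period="2015-2019",
        known_limitations=["over-represents urban districts"],
    )
    samples = [{"gender": "f"}, {"gender": "m"}, {"gender": "m"}, {"gender": "m"}]
    print(record)
    print(representation_report(samples, "gender"))  # {'f': 0.25, 'm': 0.75}
```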
Amendment 126 #
Motion for a resolution
Paragraph 5 a (new)
5 a. Stresses that the developer or deployer shall carry out ethical impact assessments of AI systems that have the potential to cause harm in the form of bias, discrimination or breaches of privacy; these assessments shall anticipate the ethical risks that could result from the implementation of the AI/machine learning (ML) application in question and shall be publicly released; proposes also that all public and government organisations using AI systems be required to conduct an ethical technology assessment prior to the deployment of the AI system (an illustrative sketch of such an assessment record follows this amendment);
2020/07/20
Committee: LIBE
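For illustration only, and not part of the amendment text: a minimal Python sketch of one way a deployer might structure a publicly releasable ethical impact assessment prior to deployment. The class EthicalImpactAssessment and its fields are assumptions made for this example, not a format required by the resolution.

```python
# Hypothetical sketch: a structured, publicly releasable ethical impact assessment.
import json
from dataclasses import dataclass, asdict, field
from typing import List


@dataclass
class EthicalImpactAssessment:
    system_name: str
    deployer: str
    intended_use: str
    identified_risks: List[str] = field(default_factory=list)   # bias, discrimination, privacy ...
    mitigations: List[str] = field(default_factory=list)
    approved_for_deployment: bool = False

    def to_public_json(self) -> str:
        """Serialise the assessment so it can be published alongside the system."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    eia = EthicalImpactAssessment(
        system_name="recidivism_risk_scoring",
        deployer="example public authority",
        intended_use="decision support only, never automated sentencing",
        identified_risks=["historical bias in arrest data", "re-identification of individuals"],
        mitigations=["periodic bias audit", "human review of every score"],
    )
    print(eia.to_public_json())
```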
Amendment 142 #
Motion for a resolution
Paragraph 8 a (new)
8 a. Stresses the importance of ensuring that weaponised AI products produced in the EU have advanced software security provisions, in accordance with the "security by design" approach, which would make them difficult for third parties or terrorists to hack and would require specific human oversight before they operate in the event of their being hacked and activated by an unknown source;
2020/07/20
Committee: LIBE
Amendment 150 #
Motion for a resolution
Paragraph 9 a (new)
9 a. Highlights that it must always be possible to reduce the AI system's computations to a form comprehensible to humans; considers that AI products used by police and judicial authorities should record data on every transaction carried out by the machine, including the logic that contributed to its decisions, and should be equipped with a "switch-off" button that instantly deactivates the AI system when requested by a human (an illustrative sketch of such logging and switch-off follows this amendment);
2020/07/20
Committee: LIBE
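For illustration only, and not part of the amendment text: a minimal Python sketch of a wrapper that records every decision made by an AI component, including the inputs that contributed to it, and exposes a "switch-off" a human operator can trigger to deactivate the system instantly. The class AuditedAISystem and its methods are hypothetical names chosen for this example.

```python
# Hypothetical sketch: per-decision audit logging plus a human-triggered switch-off.
from datetime import datetime, timezone
from typing import Any, Callable, Dict, List


class AuditedAISystem:
    def __init__(self, model: Callable[[Dict[str, Any]], Any]):
        self.model = model
        self.active = True
        self.audit_log: List[Dict[str, Any]] = []

    def decide(self, features: Dict[str, Any]) -> Any:
        if not self.active:
            raise RuntimeError("System has been deactivated by a human operator")
        decision = self.model(features)
        # Record the transaction together with the inputs that drove the decision,
        # so the reasoning can later be reduced to a human-comprehensible form.
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": features,
            "decision": decision,
        })
        return decision

    def switch_off(self, operator: str) -> None:
        """Instant deactivation on human request."""
        self.active = False
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": f"switched off by {operator}",
        })


if __name__ == "__main__":
    system = AuditedAISystem(lambda f: "flag for review" if f["risk_score"] > 0.8 else "no action")
    print(system.decide({"risk_score": 0.9}))
    system.switch_off(operator="duty officer")
    print(len(system.audit_log), "entries recorded")
```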