9 Amendments of Elena YONCHEVA related to 2020/2012(INL)
Amendment 24 #
Draft opinion
Paragraph 1
1. Believes that there is a difference between ethics and law and the role they play in our societies; any framework of ethical principles for the development, deployment and use of Artificial Intelligence (AI), robotics and related technologies should complement the EU Charter of Fundamental Rights and thereby seek to respect human dignity and autonomy, prevent harm, promote fairness and transparency, respect the principle of explicability of technologies, and guarantee that the technologies are there to serve people, with the ultimate aim of increasing human well-being for everybody;
Amendment 39 #
Draft opinion
Paragraph 2
2. Highlights the power asymmetry between those who employ AI technologies and those who interact with and are subject to them; in this context, stresses the importance of developing an “ethics-by-default and by design” framework which fully respects the Charter of Fundamental Rights of the European Union, Union law and the Treaties;
Amendment 44 #
Draft opinion
Paragraph 3
3. Considers that the current Union legislative framework on the protection of privacy and personal data fully applies to AI, robotics and related technologies, but could benefit from being supplemented with robust ethical guidelines; points out that, where it would be premature to adopt legal acts, a soft law framework should be used;
Amendment 66 #
Draft opinion
Paragraph 5 a (new)
5a. Promotes a European Agency for Artificial Intelligence to ensure European coordination of AI standards and regulations; this centralised agency should develop common criteria for a European certificate of ethical compliance that also takes into account the data used in algorithmic processes;
Amendment 68 #
Draft opinion
Paragraph 5 b (new)
5b. Promotes Corporate Digital Responsibility on a voluntary basis; the EU should support corporations that choose to use digital technologies and AI ethically within their companies, and should encourage them to become proactive by establishing a platform for companies to share their experiences with ethical digitalisation, as well as by coordinating the actions and strategies of participating companies;
Amendment 76 #
Draft opinion
Paragraph 6
6. Stresses that the protection of networks of interconnected AI and robotics is important, and strong measures must be taken to prevent security breaches, cyber-attacks and the misuse of personal data;
Amendment 78 #
Draft opinion
Paragraph 6 a (new)
6a. Calls for a comprehensive risk assessment of AI, robotics and related technologies, in addition to the impact assessment provided for in Article 35 of the GDPR (Article 27 of Directive (EU) 2016/680 and Article 39 of Regulation (EU) 2018/1725); the more impact an algorithm has, the more transparency, auditability, accountability and regulation are needed; where an algorithmic decision leads to a limitation of fundamental rights, a very robust assessment needs to be in place; in highly critical fields, where health, freedom or human autonomy are directly endangered, the implementation of AI should be prohibited;
Amendment 91 #
Draft opinion
Paragraph 7
7. Notes that AI and robotic technology are used more and more in the area of law enforcement and border control, often with adverse effects on individuals when it comes to their rights to privacy, data protection and non-discrimination; stresses that the deployment and use of these technologies must respect the principles of proportionality and necessity, the Charter of Fundamental Rights, in particular the rights to data protection, privacy and non-discrimination, as well as the relevant secondary Union law, such as EU data protection rules;
Amendment 98 #
Draft opinion
Paragraph 8
8. Stresses that AI and robotics are not immune from making mistakes and can easily have inherent bias; notes that biases can be inherent in the underlying datasets, especially when historical data are used, introduced by the developers of the algorithms, or generated when the systems are implemented in a real-world setting; considers that legislators need to reflect upon the complex issue of liability in the context of criminal justice.