22 Amendments of Anne-Sophie PELLETIER related to 2020/2012(INL)
Amendment 23 #
Draft opinion
Paragraph 1
1. Believes that any ethical framework should seek to ensure respect for human autonomy, ensure benefits for all, prevent harm, promote fairness and respect the principles of explicability of technologies, equality and transparency; notes that the potential for artificial intelligence (AI), robotics and related technologies to be truly ethical will inevitably conflict with the profit orientation of private companies and interests; stresses therefore that an ethical framework for AI, robotics and related technologies is no substitute for wide-ranging and binding legal regulation of those technologies; calls for the project of full and binding legal regulation of AI, robotics and related technologies by the European Union to be moved forward without any delay;
Amendment 34 #
Draft opinion
Paragraph 1 b (new)
1b. Stresses that the development of AI, robotics and related technologies poses risks for human rights - namely privacy, data protection, and freedom of expression and information - and that in the future it may pose further risks that are still unknown; calls for the precautionary principle to be at the heart of both ethical and legal frameworks for AI;
Amendment 36 #
Draft opinion
Paragraph 4 a (new)
4a. Highlights that user safety, data security, the protection of personal data and ethical concerns will together determine public acceptance and the consequent market penetration of automated systems; highlights that public authorities and private stakeholders will need to provide credible answers to all these concerns, as well as prove the environmental, economic, social and safety benefits of AI, in order to gain public trust;
Amendment 39 #
Draft opinion
Paragraph 4 b (new)
4b. Notes that data security and privacy will be accompanied by ethical concerns regarding the definition of the data to be collected, as well as their ownership, sharing, storage and purpose; notes, additionally, that ethics will play a key role in the definition of the legislative framework regulating the use and management of such data;
Amendment 40 #
Draft opinion
Paragraph 4 c (new)
4c. Reiterates the European principles of individuals' ownership of their own personal data and of the explicit, informed consent that is required before personal data may be used, as enshrined in the GDPR; points out that consent implies that individuals understand for which purpose their data will be used, and that entities using personal data in algorithms have a responsibility for ensuring this understanding;
Amendment 45 #
Draft opinion
Paragraph 3
3. Considers that the Union legal framework may need to be complemented with guiding ethical principles; points out that, where it would be premature to adopt legal acts, a soft law framework should be used;
Amendment 48 #
Draft opinion
Paragraph 3 a (new)
3a. Recalls that the lack of transparency of AI systems makes it difficult to identify and prove possible breaches of laws, including legal provisions that protect fundamental rights; believes that an examination of, and guidelines on, how the Union’s human rights frameworks and the obligations that flow therefrom can protect citizens in the context of the widespread use of AI, robotics and related technologies are urgently needed; stresses the need to assess whether the EU’s human rights framework will need to be updated to meet the challenge posed to rights by these complex and emergent technologies;
Amendment 51 #
Draft opinion
Paragraph 3 b (new)
3b. Stresses the need to assess how existing EU rules, in particular data protection rules, apply to AI and how proper enforcement of these rules in this field can be assured; calls on the Commission, the Member States and the data protection authorities to identify and take any possible measures to minimise algorithmic discrimination and bias and to develop a strong and common ethical framework for the transparent processing of personal data and automated decision-making that can guide data usage and the ongoing enforcement of Union law;
Amendment 52 #
Draft opinion
Paragraph 5 a (new)
5a. Highlights the need to pay particular attention to situations involving more vulnerable groups such as children, persons with disabilities, elderly people and others who have historically been disadvantaged or are at risk of exclusion, and to situations which are characterised by asymmetries of power or information, such as between employers and workers, or between businesses and consumers;
Amendment 53 #
Draft opinion
Paragraph 3 c (new)
3c. Stresses that the data sets and algorithmic systems used when making classifications, assessments and predictions at the different stages of data processing in the development of AI, robotics and related technologies may result not only in infringements of the fundamental rights of individuals, but also in differential treatment of and indirect discrimination against groups of people with similar characteristics; calls for a rigorous examination of AI’s politics and consequences, including close attention to AI’s classification practices and harms; emphasises that ethical AI, robotics and related technologies require that the field centre non-technical disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines attuned to social context, including how difference is constructed, the work of classification, and its consequences; stresses the need therefore to systematically and immediately invest in integrating these disciplines into AI study and research at all levels;
Amendment 54 #
Draft opinion
Paragraph 3 d (new)
3d. Notes that the field of AI, robotics and related technologies is strikingly homogeneous and lacking in diversity; recognises the need to ensure that the teams that design, develop, test, maintain, deploy and procure these systems reflect the diversity of their uses and of society in general, in order to ensure that bias is not unwittingly 'built into' these technologies;
Amendment 55 #
Draft opinion
Paragraph 5 b (new)
5b. Believes that explicability is crucial for building and maintaining users’ trust in AI systems; notes that this calls for processes to be transparent, for the capabilities and purpose of AI systems to be openly communicated, and for decisions to be explainable to those directly and indirectly affected; believes that, where ‘black box’ algorithms do not provide such information, it must be demonstrated by other means that the system as a whole respects fundamental rights;
Amendment 57 #
Draft opinion
Paragraph 6
6. Recalls the importance of ensuring the availability of effective remedies for consumers and calls on the Member States to ensure that accessible, affordable, independent and effective procedures are available to guarantee an impartial review of all claims of violations of consumer rights through the use of algorithmic systems, whether stemming from public or private sector actors; urges Member States to ensure consumer organisations have sufficient funding to assist consumers to exercise their right to remedy;
Amendment 60 #
Draft opinion
Paragraph 5
5. Calls for a horizontal approach, including technology-neutral standards that apply to all sectors in which AI could be employed; calls on the Union to promote a debate on how best the public and private sectors may cooperate and share knowledge to create best practices; recalls that artificial intelligence technologies would not exist without training data sets populated with data harvested from citizens and from public sources, and calls for the Union to urgently explore mechanisms for making privately-held data sets publicly and freely available, without prejudice to applicable data protection rules;
Amendment 68 #
Draft opinion
Paragraph 7 a (new)
7a. Stresses that the data sets and the processes that yield the AI system’s decision, including those of data gathering and data labelling as well as the algorithms used, should be documented to the best possible standard to allow for traceability and an increase in transparency; stresses that this also applies to the decisions made by the AI system;
Amendment 73 #
Draft opinion
Paragraph 7 b (new)
7b. Underlines that data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models; stresses that the continuation of such biases could lead to unintended (in)direct prejudice and discrimination against certain groups or people, potentially exacerbating prejudice and marginalisation; notes that harm can also result from the intentional exploitation of (consumer) biases or by engaging in unfair competition, such as the homogenisation of prices by means of collusion or a non-transparent market; stresses that identifiable and discriminatory bias should be removed in the collection phase where possible; notes that the way in which AI systems are developed (e.g. algorithms’ programming) may also suffer from unfair bias; stresses that this could be counteracted by putting in place oversight processes to analyse and address the system’s purpose, constraints, requirements and decisions in a clear and transparent manner; notes that hiring from diverse backgrounds should be encouraged;
Amendment 89 #
Draft opinion
Paragraph 7
7. Believes that certain uses of AI cannot be considered as ethical as such, and that there are areas where any legal and ethical framework would not prevent risks of fundamental rights violations; recalls that the use of AI, robotics and related technologies in the area of law enforcement and border control poses extremely serious risks to fundamental rights; calls for a complete ban on the use of AI, robotics and related technologies in this arena; calls also for a ban on the use of facial recognition technology in public areas and a ban on affect recognition AI in any arena;
Amendment 95 #
Draft opinion
Paragraph 9 b (new)
9b. Notes that particular attention in AI literacy programmes must also be paid to situations where AI systems can cause or exacerbate adverse impacts due to asymmetries of power or information, such as between employers and employees, businesses and consumers or governments and citizens;
Amendment 95 #
Draft opinion
Paragraph 7 a (new)
7a. Notes the increasing use of AI- enabled labour-management systems; emphasises that the introduction of such systems raises significant questions about worker rights and safety; notes that AI systems used for worker control and management are inevitably optimised to produce benefits for employers, often at great cost to workers; recalls that Article 22 GDPR is not sufficient to adequately protect workers in the context of AI- enabled management systems; calls for urgent and specific regulation in this arena;
Amendment 101 #
Draft opinion
Paragraph 8
8. Stresses that AI and robotics are not immune from making mistakes; considers it necessary for legislators to reflect upon the complex issue of liability in the context of both civil and criminal justice.
Amendment 108 #
Draft opinion
Paragraph 8 a (new)
8a. Reiterates the call for the establishment of a European Agency for Artificial Intelligence, and emphasises the importance of having national supervisory authorities in each Member State responsible for ensuring, assessing and monitoring compliance with ethical principles and legal obligations pertaining to the development, deployment and use of artificial intelligence, robotics and related technologies.
Amendment 134 #
Draft opinion
Paragraph 13 d (new) (after Subheading 5 a new)
13d. Highlights the need to ensure equal respect for the moral worth and dignity of all human beings; notes that this goes beyond non-discrimination, which tolerates the drawing of distinctions between dissimilar situations based on objective justifications; stresses that, in an AI context, equality entails that the system’s operations cannot generate unfairly biased outputs (e.g. the data used to train AI systems should be as inclusive as possible, representing different population groups); calls for adequate protection for potentially vulnerable persons and groups, such as workers, women, persons with disabilities, ethnic minorities, elderly people, children, consumers or others at risk of exclusion;