Activities of Ibán GARCÍA DEL BLANCO related to 2021/0106(COD)
Plenary speeches (1)
Artificial Intelligence Act (debate)
Shadow opinions (2)
OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts
OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts
Amendments (257)
Amendment 56 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the development, deployment and use of artificial intelligence in conformity with Union values. Therefore, this Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights and values including democracy and the rule of law, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, deployment and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 58 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross-border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for developers, deployers and users and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 62 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities if developed in accordance with ethical principles. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, culture, infrastructure, management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
Amendment 75 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
Amendment 84 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, the environment and fundamental rights, and values such as democracy and the rule of law, a set of ethical principles and common normative standards for all high-risk AI systems should be established. Those principles and standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 86 #
Proposal for a regulation
Recital 14 a (new)
(14 a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are, in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
Amendment 87 #
Proposal for a regulation
Recital 14 b (new)
(14 b) ‘AI literacy’ refers to the skills, knowledge and understanding that allow both citizens and operators, in the context of the obligations set out in this Regulation, to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally, and operators in the context of the obligations set out in this Regulation, with the critical thinking skills required to identify harmful or manipulative uses, as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as operators of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
Amendment 89 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
Amendment 90 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 106 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, right to education, consumer protection, workers’ rights. Special attention should be paid to gender equality, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration, protection of intellectual property rights and ensuring cultural diversity. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
Amendment 107 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre- defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Amendment 115 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the rights to gender equality and not to be discriminated against, and may perpetuate historical patterns of discrimination. Finally, education is also a social learning process; therefore, the use of artificial intelligence systems must not replace the fundamental role of teachers in education.
Amendment 118 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces, as well as the future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
Amendment 127 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the design, development, deployment, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 129 #
(70) Certain AI systems used to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications, which should include a disclaimer, should be provided in accessible formats for children, the elderly, migrants and persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio, text, scripts or video content that appreciably resembles existing persons, places, texts, scripts or events and would falsely appear to a person to be authentic, should appropriately disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin, namely the name of the person or entity that created it. AI systems used to recommend, disseminate and order news or cultural and creative content displayed to users should include an explanation of the parameters used for the moderation of content and personalised suggestions, which should be easily accessible and understandable to the users.
Amendment 132 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 133 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 134 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this and other Regulations, a European Agency for Data and Artificial Intelligence should be established. The Agency should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation and other present or future legislation, including on technical specifications or existing standards regarding the requirements established in this Regulation, and providing advice to and assisting the Commission on specific questions related to artificial intelligence. The Agency should establish a Permanent Stakeholders' Group composed of experts representing the relevant stakeholders, such as representatives of developers, deployers and users of AI systems, including SMEs and start-ups, consumer groups, trade unions, fundamental rights organisations and academic experts, and it should communicate its activities to citizens as appropriate.
Amendment 136 #
Proposal for a regulation
Recital 79
(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. Where appropriate, national authorities or bodies which supervise the application of Union law, or of national law compatible with Union law, establishing rules regulating health, safety, security and the environment at work, should also have access to any documentation created under this Regulation.
Amendment 137 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Developers and deployers of all AI systems should also draw up codes of conduct in order to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI. The Commission and the European Agency for Data and Artificial Intelligence may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 145 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, the deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 146 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
Amendment 151 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, in an automated manner, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 152 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge, or that adapts a general purpose AI system to a specific purpose and use;
Amendment 153 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2 a (new)
(2 a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
Amendment 154 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 155 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under the authority of a deployer, except where the AI system is used in the course of a personal non-professional activity;
Amendment 157 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
Amendment 158 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk- based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. With regard to transparency and human oversight obligations, Member States should be able to adopt further national measures to complement them without changing their harmonising nature.
Amendment 161 #
Proposal for a regulation
Recital 14 a (new)
(14a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
Amendment 161 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights or health, to property or the environment, or to democracy or the democratic rule of law,
Amendment 162 #
Proposal for a regulation
Recital 14 b (new)
(14b) ‘AI literacy’ refers to the skills, knowledge and understanding that allow both citizens more generally and developers, deployers and users, in the context of the obligations set out in this Regulation, to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally, and developers, deployers and users in the context of the obligations set out in this Regulation, with the critical thinking skills required to identify harmful or manipulative uses, as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as developers and deployers of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
Amendment 163 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
Amendment 165 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘AI literacy’ means the skills, knowledge and understanding regarding AI systems;
Amendment 166 #
Proposal for a regulation
Article 4
Amendments to Annex I

The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.

Article 4 deleted
Amendment 168 #
Proposal for a regulation
Article 4 a (new)
Article 4 a (new)
Amendment 169 #
Proposal for a regulation
Article 4 b (new)
Article 4 b
AI literacy
1. When implementing this Regulation, the Union and the Member States shall promote measures and tools for the development of a sufficient level of AI literacy, across sectors and groups of operators concerned, including through education and training, skilling and reskilling programmes and while ensuring a proper gender and age balance, in view of allowing a democratic control of AI systems.
2. Developers and deployers of AI systems shall promote tools and take measures to ensure a sufficient level of AI literacy of their staff and any other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the environment the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.
3. Such literacy tools and measures shall consist, in particular, of the teaching and learning of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits and the severity of the possible harm they can cause and its probability of occurrence.
4. A sufficient level of AI literacy is one that contributes to the ability of operators to fully comply with and benefit from trustworthy AI, and in particular with the requirements laid down in Articles 13, 14, 29, 52 and 69 of this Regulation.
Amendment 170 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 191 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be developed and deployed if they comply with certain mandatory requirements based on ethical principles. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
Amendment 194 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, gender equality, education, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, and right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
Amendment 200 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the right to gender equality and not to be discriminated against, and may perpetuate historical patterns of discrimination.
Amendment 201 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces and future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
Amendment 201 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk due to their risk to cause harm to health, safety, the environment, fundamental rights or to democracy and the rule of law.
Amendment 202 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after ensuring adequate consultation with relevant stakeholders and the European Agency for Data and AI, to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
Amendment 203 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
Amendment 204 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, democracy and rule of law that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 206 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the environment, health and safety or a risk of adverse impact on fundamental rights, democracy and the rule of law, that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
Amendment 214 #
Proposal for a regulation
Recital 46
(46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation and to allow users to make informed and autonomous decisions about their use. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
Amendment 214 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c a (new)
(c a) provision of a sufficient level of AI literacy
Amendment 215 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a sufficient degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. The same applies to AI systems with general purposes that may have high-risk uses that are not forbidden by their developer. In such cases, sufficient information should be made available allowing deployers to carry out tests and analysis on performance, data and usage. The systems and information should also be registered in the EU database for stand-alone high-risk AI systems foreseen in Article 60 of this Regulation.
Amendment 216 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children, the elderly, migrants or other vulnerable groups.
Amendment 218 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons can have agency over them by being able to oversee and control their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate and at the very least where decisions based solely on the automated processing enabled by such systems produce legal or otherwise significant effects, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 221 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated in an intelligible manner to the deployers and users.
Amendment 221 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
(g a) the purpose and the environment in which the system is to be used;
Amendment 224 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable developers, deployers, users and other relevant stakeholders to easily interpret the system’s functioning and output and use it appropriately. An appropriate type and degree of transparency shall be ensured on the basis of informed decisions, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.
Amendment 225 #
Proposal for a regulation
Article 13 – paragraph 3 a (new)
3 a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with new Article 4b.
Amendment 228 #
Proposal for a regulation
Article 14 – paragraph 5 a (new)
5 a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with new Article 4b.
Amendment 229 #
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional and ethically justified reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
Amendment 232 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
1 a. In order to comply with the obligations established in this Article, as well as to be able to justify their possible non-compliance, deployers of high-risk AI systems shall ensure a sufficient level of AI literacy in line with new Article 4b;
Amendment 235 #
Proposal for a regulation
Article 52 – paragraph 1
1. Developers and deployers shall ensure that AI systems used to interact with natural persons are designed and developed in such a way that natural persons are informed, in a timely, clear and intelligible manner, that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This information shall also include, as appropriate, the functions that are AI enabled, and the rights and processes to allow natural persons to appeal against the application of such AI systems to them. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 236 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto, in a timely, clear and intelligible manner, of the operation of the system. This information shall also include, as appropriate, the rights and processes to allow natural persons to appeal against the application of such AI system to them. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
Amendment 237 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate and ethically justified safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
Amendment 237 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Deployers and users of an AI system that generates or manipulates image, audio, text, scripts or video content that appreciably resembles existing persons, objects, places, text, scripts or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner, that the content has been artificially generated or manipulated, as well as the name of the person or entity that generated or manipulated it.
Amendment 242 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups; to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems, in line with the ethical principles outlined in this Regulation. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 242 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use forms part of an evidently artistic, creative or fictional cinematographic or analogous work or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 243 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 a (new)
Developers and deployers of AI systems that recommend, disseminate and order news or creative and cultural content shall disclose, in an appropriate, easily accessible, clear and visible manner, the parameters used for the moderation of content and personalised suggestions. This information shall include a disclaimer.
Amendment 244 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 b (new)
The information referred to in the previous paragraphs shall be provided to the natural persons in a timely, clear and visible manner, at the latest at the time of the first interaction or exposure. Such information shall be made accessible when the exposed natural person is a person with disabilities, a child or from a vulnerable group. It shall be complemented, where possible, with intervention or flagging procedures for the exposed natural person, taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications.
Amendment 245 #
Proposal for a regulation
Article 52 – paragraph 4 a (new)
4 a. In order to comply with the obligations established in this Article, a sufficient level of AI literacy shall be ensured.
Amendment 246 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 251 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility for persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 253 #
Proposal for a regulation
Article 69 – paragraph 1
1. The Commission and the Member States shall support the mandatory drawing up of codes of conduct intended to demonstrate compliance with the ethical principles underpinning trustworthy AI set out in new Article 4a and to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
Amendment 254 #
Proposal for a regulation
Article 69 – paragraph 2
2. In the drawing up of codes of conduct intended to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI set out in Article 4a, developers and deployers shall, in particular:
(a) consider whether there is a sufficient level of AI literacy among their staff and any other persons dealing with the operation and use of AI systems in order to observe such principles;
(b) assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities, or whether any measures could be put in place in order to support such persons or groups of persons;
(c) pay attention to the way in which the use of their AI systems may have an impact on gender balance and equality;
(d) have especial regard to whether their AI systems can be used in a way that, directly or indirectly, may residually or significantly reinforce existing biases or inequalities;
(e) reflect on the need and relevance of having in place diverse development teams in view of securing an inclusive design of their systems;
(f) give careful consideration to whether their systems can have a negative societal impact, notably concerning political institutions and democratic processes;
(g) evaluate the extent to which the operation of their AI systems would allow them to fully comply with the obligation to provide an explanation laid down in new Article 71 of this Regulation;
(h) take stock of the Union’s commitments under the European Green Deal and the European Declaration on Digital Rights and Principles;
(i) state their commitment to privileging, where reasonable and feasible, the common specifications to be drafted by the Commission pursuant to Article 41 rather than their own individual technical solutions.
Amendment 255 #
Proposal for a regulation
Article 69 – paragraph 3
3. Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations, including in particular trade unions and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
Amendment 256 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 256 #
Proposal for a regulation
Article 69 – paragraph 3 a (new)
3 a. Developers and deployers shall designate at least one natural person that is responsible for the internal monitoring of the drawing up of their code of conduct and for verifying compliance with that code of conduct in the course of their activities. That person shall serve as a contact point for users, stakeholders, national competent authorities, the Commission and the European Agency for Data and AI on all matters concerning the code of conduct.
Amendment 257 #
Proposal for a regulation
Article 69 – paragraph 3 b (new)
3 b. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 259 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
(a) ‘developers’ placing on the market or putting into service AI systems in the Union, irrespective of whether those developers are established within the Union or in a third country, or that adapt a general purpose AI system to a specific purpose and use;
Amendment 260 #
Proposal for a regulation
Annex I
Amendment 262 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating, telecommunications, and electricity.
Amendment 264 #
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions or of determining the study program or areas of study to be followed by students;
Amendment 266 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 a (new)
3 a. AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests at education and training institutions
Amendment 270 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on establishment, promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
Amendment 286 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
Amendment 287 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
(8a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
Amendment 301 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights, health, to property or the environment, to democracy or the democratic rule of law,
Amendment 303 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a a (new)
(aa) ‘AI literacy’ means the skills, knowledge and understanding regarding AI systems that are necessary for compliance with and enforcement of this Regulation;
Amendment 310 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the development, deployment and use of artificial intelligence in conformity with Union values. Therefore, this Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights and values including democracy and rule of law, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, deployment and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 319 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for developers, deployers and users and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 320 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities if developed in accordance with relevant ethical principles. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
Amendment 325 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial and might affect one or more persons, a groups of persons or society as a whole.
Amendment 326 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence based on ethical principles is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, the environment and the protection of fundamental rights and values, including democracy and the rule of law, as recognised and protected by Union law. To achieve that objective, rules regulating the development, the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 327 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand- alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to–date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
Amendment 335 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety, the environment and fundamental rights, and values such as democracy and the rule of law, a set of ethical principles and common normative standards for all high-risk AI systems should be established. Those principles and standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (The Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 339 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules based on ethical principles for AI systems, a clearly defined risk- based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. With regard to transparency and human oversight obligations, Member States should be able to adopt further national measures to complement them without changing their harmonising nature.
Amendment 340 #
Proposal for a regulation
Recital 14 a (new)
Amendment 341 #
Proposal for a regulation
Recital 14 b (new)
(14b) ‘AI literacy’ refers to skills, knowledge and understanding that allow both citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation with the critical thinking skills required to identify harmful or manipulative uses as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as developers and deployers of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy, in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
Amendment 343 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
Amendment 344 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 364 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be developed and deployed if they comply with certain mandatory requirements based on ethical principles. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests, democracy and the rule of law, as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety, the environment, and fundamental rights of persons, democracy and the rule of law in the Union.
Amendment 365 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, education, consumer protection, workers’ rights, gender equality, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration, right to protection of intellectual property, cultural diversity. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
Amendment 368 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre- defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Amendment 375 #
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the rights to gender equality and not to be discriminated against and perpetuate historical patterns of discrimination.
Amendment 377 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self- employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces and future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
Amendment 391 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high- risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 393 #
Proposal for a regulation
Recital 46
(46) Having comprehensible information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation and to allow users to make informed and autonomous decisions about their use. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, namely with regard to the extraction and consumption of natural resources, algorithms and any pre-determined changes on it and its performance, data, training, testing and validation processes used as well as documentation on the relevant risk management system and on the entity that carried out the conformity assessment. The technical documentation should be kept up to date.
Amendment 395 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a sufficient degree of transparency should be required for high-risk AI systems. Users should be able to easily interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. The same applies to AI systems with general purposes that may have high-risk uses that are not forbidden by their developer. In such cases, sufficient information should be made available allowing deployers to carry out tests and analysis on performance, data and usage. The systems and information should also be registered in the EU database for stand-alone high-risk AI systems foreseen in Article 60 of this Regulation.
Amendment 398 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons can have agency over them by being able to oversee and control their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate and at the very least where decisions based solely on the automated processing enabled by such systems produce legal or otherwise significant effects, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 401 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated in an intelligible manner to the deployers and users.
Amendment 409 #
Proposal for a regulation
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, during the first year of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
Amendment 413 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional and ethically justified reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
Amendment 415 #
Proposal for a regulation
Recital 70
(70) Certain AI systems used to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications, which should include a disclaimer, should be provided in accessible formats for children, the elderly, migrants and persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin, namely the name of the person or entity that created it.
Amendment 416 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate ethical safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
Amendment 417 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups; to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems, in line with the ethical principles outlined in this Regulation. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 419 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 420 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this and other Regulations, a European Agency for Data and Artificial Intelligence should be established. The Agency should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation and other present or future legislation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence.
Amendment 423 #
Proposal for a regulation
Recital 79
(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. Where appropriate, national authorities or bodies, which supervise the application of Union law or national law compatible with union law establishing rules regulating the health, safety, security and environment at work, should also have access to any documentation created under this Regulation.
Amendment 425 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Developers and deployers of all AI systems should also draw up codes of conduct in order to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI as outlined in paragraph 2 of Article 4a. The Commission and the European Agency for Data and Artificial Intelligence may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 427 #
Proposal for a regulation
Recital 83
(83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality and property of information and data obtained in carrying out their tasks.
Amendment 428 #
Proposal for a regulation
Recital 84
(84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Agency for Data and Artificial Intelligence should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.
Amendment 434 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 436 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for certain AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
Amendment 439 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
(ea) rules on governance;
Amendment 440 #
Proposal for a regulation
Article 1 – paragraph 1 – point e b (new)
(eb) rules for the establishment of a European Agency for Data and Artificial Intelligence.
Amendment 441 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
In order to protect public interests such as health, safety, the environment, fundamental rights, democracy and the rule of law, Member States may establish national provisions focusing on certain aspects of use of AI systems that build upon and complement but do not replace, circumvent or contradict the harmonised framework laid down by this Regulation.
Amendment 443 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
(a) developers and deployers placing on the market or putting into service AI systems in the Union, irrespective of whether those developers and deployers are established within the Union or in a third country;
Amendment 444 #
Proposal for a regulation
Article 2 – paragraph 1 – point a a (new)
(aa) developers and deployers established or located within the Union for the placing on the market or putting into service of AI systems or when the output produced by the system is used in a third country;
Amendment 446 #
Proposal for a regulation
Article 2 – paragraph 1 – point c
(c) developers, deployers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;
Amendment 454 #
Proposal for a regulation
Article 2 – paragraph 4
4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States. In the framework of those agreements, no EU public authority nor any Member State shall obtain, or otherwise make use of, any AI system that is prohibited or limited under this Regulation, unless safeguards similar to the ones established in this provision are adopted by those authorities or organisations.
Amendment 459 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5a. This Regulation shall be without prejudice to Union and national laws on social policies.
Amendment 463 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, in an automated manner, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 467 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge, or that adapts a general purpose AI system to a specific purpose and use;
Amendment 469 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3 a (new)
(3a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
Amendment 470 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under the authority of a deployer, except where the AI system is used in the course of a personal non-professional activity;
Amendment 476 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
Amendment 478 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12
(12) ‘intended purpose’ means the use for which an AI system is used by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;
Amendment 490 #
Proposal for a regulation
Article 3 – paragraph 1 – point 39
(39) ‘publicly accessible space’ means any physical or virtual place accessible to the public, regardless of whether certain conditions for access may apply;
Amendment 491 #
Proposal for a regulation
Article 3 – paragraph 1 – point 39 a (new)
(39a) 'social scoring' means the evaluation or categorisation of citizens based on their behaviour or personal characteristics;
Amendment 493 #
Proposal for a regulation
Article 3 – paragraph 1 – point 42
(42) ‘national supervisory authority’ means the authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Agency for Data and AI (EADA);
Amendment 495 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights or health, to property or the environment, or to democracy or the rule of law,
Amendment 497 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) 'AI literacy' means the skills, knowledge and understanding regarding AI systems that are necessary for compliance with and enforcement of this Regulation;
Amendment 500 #
Proposal for a regulation
Article 4 – paragraph 1
Amendment 502 #
Proposal for a regulation
Article 4 a (new)
Amendment 505 #
Proposal for a regulation
Article 4 b (new)
Article 4b
AI literacy
1. When implementing this Regulation, the Union and the Member States shall promote measures and tools for the development of a sufficient level of AI literacy, across sectors and groups of developers, deployers and users concerned, including through education and training, skilling and reskilling programmes and while ensuring a proper gender and age balance, in view of allowing a democratic control of AI systems.
2. Developers and deployers of AI systems shall promote tools and take measures to ensure a sufficient level of AI literacy of their staff and any other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the environment the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.
3. Such literacy tools and measures shall consist, in particular, of the teaching and learning of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits and the severity of the possible harm they can cause and its probability of occurrence.
4. A sufficient level of AI literacy is one that contributes to the ability of developers, deployers and users to fully comply with and benefit from trustworthy AI, and in particular with the requirements laid down in this Regulation in Articles 13, 14, 29, 52 and 69.
Amendment 558 #
Proposal for a regulation
Article 5 – paragraph 1 a (new)
1a. The prohibitions under this Article are without prejudice to other prohibitions that may apply where an artificial intelligence practice violates Union and national laws, including data protection law, non-discrimination law, consumer protection law, and competition law.
Amendment 573 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk due to their risk of causing harm to health, safety, the environment, fundamental rights or to democracy and the rule of law.
Amendment 576 #
Proposal for a regulation
Article 6 a (new)
Amendment 579 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after ensuring adequate consultation with relevant stakeholders and the European Agency for Data and AI, to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
Amendment 584 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
(a) the AI systems are intended to be used in any of the areas listed in Annex III;
Amendment 585 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the environment, health and safety, or a risk of adverse impact on fundamental rights, democracy and the rule of law, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 588 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the environment, health and safety or a risk of adverse impact on fundamental rights or democracy and rule of law, that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
Amendment 590 #
Proposal for a regulation
Article 56 – paragraph 2 – point a
(a) promote and support the effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
Amendment 591 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
(a) the intended purpose of the AI system;
Amendment 591 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) assist developers, deployers and users of AI systems, in particular SMEs and start-ups, to meet the requirements of this Regulation, including those set out in present and future Union legislation.
Amendment 592 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the environment, health and safety or adverse impact on the fundamental rights or democracy and rule of law or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
Amendment 602 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating, and in any event when the high-risk AI system is subject to significant changes in its design or purpose. It shall comprise the following steps:
Amendment 605 #
Proposal for a regulation
Article 9 – paragraph 2 – point d a (new)
(da) drawing up of the mandatory Codes of Conduct referred to in Article 69 taking into account the ethical principles laid down in new Article 4a.
Amendment 609 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c a (new)
(ca) the provision of a sufficient level of AI literacy as outlined in new Article 4b to deployers and users.
Amendment 613 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children, the elderly, migrants or other vulnerable groups.
Amendment 614 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality and fairness criteria referred to in paragraphs 2 to 5.
Amendment 619 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, including where data outputs are used as an input for future operations;
Amendment 622 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed, as well as any other relevant variables.
Amendment 624 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
(ga) the purpose and the environment in which the system is to be used;
Amendment 626 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative and, to the best extent possible, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof. If occasional inaccuracies cannot be avoided, the system shall indicate, to the best extent possible, the likeliness of errors and inaccuracies to deployers and users through appropriate means.
Amendment 630 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 635 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up, without unduly compromising intellectual property rights or trade secrets, in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV.
Amendment 636 #
Proposal for a regulation
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) throughout the AI system’s lifecycle. Those logging capabilities shall conform to recognised standards or common specifications.
Amendment 638 #
Proposal for a regulation
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system.
Amendment 639 #
Proposal for a regulation
Article 12 – paragraph 3
3. In particular, logging capabilities shall enable the monitoring of the operation of the high-risk AI system with respect to the occurrence of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or lead to a substantial modification, and facilitate the post-market monitoring referred to in Article 61 and the monitoring of the operation of high-risk AI systems referred to in Article 29 (4).
Amendment 643 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable developers, deployers, users and other relevant stakeholders to easily interpret the system’s functioning and output and use it appropriately. An appropriate type and degree of transparency shall be ensured on the basis of informed decisions, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.
Amendment 646 #
1a. Any person or groups of persons subject to a decision taken by a deployer or user on the basis of output from an AI System shall be informed where such decision produces legal or otherwise significant effects, including when their health and safety or the respect for their fundamental rights is affected.
Amendment 647 #
Proposal for a regulation
Article 13 – paragraph 1 b (new)
1b. In the cases referred to in paragraph 1, the persons or groups of person affected shall have the right to request an explanation in line with New Article 71.
Amendment 653 #
Proposal for a regulation
Article 13 – paragraph 3 – point a a (new)
(aa) where it is not the same as the deployer, the identity and the contact details of the entity that carried out the conformity assessment and, where applicable, of its authorised representative;
Amendment 656 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point i
(i) its intended purpose;
Amendment 660 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to unethical risks to the health and safety, environment, fundamental rights or democracy and the rule of law;
Amendment 664 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iv
(iv) its performance as regards the persons or groups of persons on which the system is intended to be used;
Amendment 666 #
(v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system.
Amendment 667 #
Proposal for a regulation
Article 13 – paragraph 3 – point c
(c) the changes to the high-risk AI system and its performance, including its algorithms, which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;
Amendment 668 #
Proposal for a regulation
Article 13 – paragraph 3 – point e
(e) the expected lifetime of the high-risk AI system, its level of extraction and consumption of natural resources, and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.
Amendment 670 #
Proposal for a regulation
Article 13 – paragraph 3 a (new)
3a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 671 #
Proposal for a regulation
Article 13 – paragraph 3 b (new)
3b. Member States may adopt measures beyond those listed in this Article insofar as they are not in contradiction with, result in the circumvention of or otherwise jeopardize the harmonised application of the requirements laid out in this Regulation, irrespective of whether they would apply to high-risk AI systems or all AI systems.
Amendment 672 #
Proposal for a regulation
Article 14 – title
14 Human agency and oversight
Amendment 674 #
Proposal for a regulation
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can at all times be effectively overseen with agency by natural persons during the period in which the AI system is in use and irrespective of their specific characteristics.
Amendment 676 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising unethical risks to the environment, health, safety, fundamental rights and democracy or the rule of law that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter and where decisions based solely on automated processing by AI systems produce legal or otherwise significant effects on the persons or groups of persons on which the system is to be used.
Amendment 685 #
Proposal for a regulation
Article 14 – paragraph 5
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons with the necessary competence, training and authority.
Amendment 688 #
Proposal for a regulation
Article 14 – paragraph 5 a (new)
5a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 710 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
1a. In order to comply with the obligations established in this Article, as well as to be able to justify their possible non-compliance, deployers of high-risk AI systems shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 741 #
Proposal for a regulation
Title IV
TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS
Amendment 744 #
Proposal for a regulation
Article 52 – title
Transparency obligations for certain AI systems
Amendment 746 #
Proposal for a regulation
Article 52 – paragraph 1
1. Developers and deployers shall ensure that AI systems used to interact with natural persons are designed and developed in such a way that natural persons are informed, in a timely, clear and intelligible manner, that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This information shall also include, as appropriate, the functions that are AI enabled, and the rights and processes to allow natural persons to appeal against the application of such AI systems to them. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 752 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio, text, scripts or video content that appreciably resembles existing persons, objects, places, text, scripts or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose, in an appropriate, clear and visible manner, that the content has been artificially generated or manipulated, as well as the name of the natural or legal person that generated or manipulated it.
Amendment 758 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work, or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 759 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 a (new)
The information referred to in paragraphs 1 to 3 shall be provided to the natural persons in a timely, clear and visible manner, at the latest at the time of the first interaction or exposure. Such information shall be made accessible when the exposed natural person is a person with disabilities, a child or from a vulnerable group. It shall be complemented, where possible, with intervention or flagging procedures for the exposed natural person, taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications.
Amendment 760 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 b (new)
Developers of AI systems with general purposes that are not listed as high-risk in Annex III shall provide relevant information allowing deployers and users to comply with the requirements and obligations set out in Title III of this Regulation. Such systems shall be registered in the EU database set out in Article 60.
Amendment 761 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 c (new)
In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 769 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Agency for Data and Artificial Intelligence. They shall submit annual reports to the Agency and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 779 #
Proposal for a regulation
Title VI – Chapter 1 – title
1 European Agency for Data and Artificial Intelligence (‘EADA’)
Amendment 780 #
Proposal for a regulation
Article 56 – title
Establishment of the European Agency for Data and Artificial Intelligence (‘EADA’)
Amendment 781 #
Proposal for a regulation
Article 56 – paragraph 1
1. A ‘European Agency for Data and Artificial Intelligence’ (the ‘Agency’) is established to promote a trustworthy, effective and competitive internal market for the data and artificial intelligence sectors.
Amendment 782 #
Proposal for a regulation
Article 56 – paragraph 2 – introductory part
2. The Agency shall provide advice and assistance to the Commission and the Member States, when implementing Union law related to data and artificial intelligence. It shall cooperate with the developers and deployers of AI systems, in order to:
Amendment 783 #
Proposal for a regulation
Article 56 – paragraph 2 – point a
(a) promote and support the effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
Amendment 785 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) assist developers, deployers and users of AI systems, in particular SMEs and start-ups, to meet the requirements of this Regulation, including those set out in present and future Union legislation.
Amendment 787 #
Proposal for a regulation
Article 56 – paragraph 2 – point c b (new)
(cb) issue recommendations and carry out assessments of the compliance by developers and deployers and the enforcement by national supervisory authorities of Articles 70 to 74.
Amendment 788 #
Proposal for a regulation
Article 56 – paragraph 2 a (new)
2a. The Agency shall act as a reference point for advice and expertise for Union institutions, bodies, offices and agencies as well as for other relevant stakeholders on matters related to data and artificial intelligence.
Amendment 791 #
Proposal for a regulation
Article 56 – paragraph 2 b (new)
Article 56 – paragraph 2 b (new)
2b. The Agency shall act as a contact point for persons or groups of persons affected by AI systems when there has been no national enforcement of their rights under Articles 70a to 74 or when the AI system affecting or harming them is deployed and used in more than one Member State.
Amendment 792 #
Proposal for a regulation
Article 57 – title
Article 57 – title
Amendment 793 #
Proposal for a regulation
Article 57 – paragraph -1 (new)
Article 57 – paragraph -1 (new)
-1. The Agency shall have a Chair elected by qualified majority among the members of its board. It shall carry out its tasks independently, impartially, transparently and in a timely manner. It shall have a strong mandate, a secretariat as well as sufficient resources and skilled personnel at its disposal for the proper performance of its tasks. The mandate of the Agency shall contain the operational aspects related to the execution of the Agency’s tasks as listed in Article 58.
Amendment 795 #
Proposal for a regulation
Article 57 – paragraph 1
Article 57 – paragraph 1
1. The Agency shall establish a board. The board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, representatives of the European Commission, as well as high-level representatives from the European Data Protection Supervisor, the EU Agency for Fundamental Rights and the EU Agency for Cybersecurity. Other national authorities, as well as other Union bodies, offices, agencies and advisory groups, shall be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 798 #
Proposal for a regulation
Article 57 – paragraph 2
Article 57 – paragraph 2
2. The Agency’s board shall adopt its rules of procedure, namely with regard to the election of its Chair, by a simple majority of its members, with the assistance of the Agency’s secretariat. The Agency’s secretariat shall convene the meetings and prepare the agenda in accordance with the tasks of the Agency’s board pursuant to its rules of procedure. The Agency’s secretariat shall provide administrative and analytical support for the activities of the board pursuant to this Regulation.
Amendment 801 #
Proposal for a regulation
Article 57 – paragraph 3
Article 57 – paragraph 3
3. The Agency shall establish a Permanent Stakeholders’ Group composed of experts representing the relevant stakeholders, such as representatives of developers, deployers and users of AI systems, including SMEs and start-ups, consumer groups, trade unions, fundamental rights organisations and academic experts.
Amendment 803 #
Proposal for a regulation
Article 57 – paragraph 4
Article 57 – paragraph 4
4. The Agency shall also inform interested third parties and citizens on its activities to an appropriate extent.
Amendment 805 #
Proposal for a regulation
Article 58 – title
Article 58 – title
Tasks of the Agency
Amendment 806 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
Article 58 – paragraph 1 – introductory part
When providing advice and assistance to the Commission and the Member States, and in cooperation with the developers, deployers and users of AI systems with regard to the application of this Regulation, the Agency shall:
Amendment 807 #
Proposal for a regulation
Article 58 – paragraph 1 – point a a (new)
Article 58 – paragraph 1 – point a a (new)
(aa) promote and support the cooperation among national supervisory authorities and the Commission, and ensure the Union safeguard procedure referred to in Article 66;
Amendment 808 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – introductory part
Article 58 – paragraph 1 – point c – introductory part
(c) issue guidelines, opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in particular
Amendment 809 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point ii a (new)
Article 58 – paragraph 1 – point c – point ii a (new)
(iia) on the provisions related to post market monitoring as referred to in Article 61,
Amendment 811 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii a (new)
Article 58 – paragraph 1 – point c – point iii a (new)
(iiia) on the need for the amendment of each of the Annexes as referred to in Article 73,
Amendment 814 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
Article 58 – paragraph 1 – point c a (new)
(ca) to establish and maintain the EU database for stand-alone high-risk AI systems, referred to in Article 60;
Amendment 815 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
Article 58 – paragraph 1 – point c b (new)
(cb) to carry out annual reviews and analyses of the complaints sent to, and the findings made by, the national competent authorities on the serious incident reports referred to in Article 62;
Amendment 816 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
Article 58 – paragraph 1 – point c c (new)
(cc) to act as the market surveillance authority where Union institutions, agencies and bodies fall within the scope of this Regulation, as referred to in paragraph 6 of Article 63 and Article 72;
Amendment 817 #
Proposal for a regulation
Article 58 – paragraph 1 – point c d (new)
Article 58 – paragraph 1 – point c d (new)
(cd) to provide guidance material to developers, deployers and users regarding compliance with the requirements set out in this Regulation. In particular, it shall issue guidelines: i) for the trustworthy AI technical assessment referred to in paragraph 6 of new Article 4a; ii) for the preliminary risk self-assessment referred to in new Article 5a; iii) for the methods for performing the conformity assessment based on internal control referred to in Article 43; iv) to facilitate compliance with the reporting of serious incidents and of malfunctioning referred to in Article 62; v) to facilitate the drawing up of the mandatory Codes of Conduct referred to in Article 69; vi) on any other concrete procedures to be performed by developers, deployers and users when complying with this Regulation, in particular those regarding the documentation to be delivered to notified bodies and methods to provide authorities with other relevant information.
Amendment 818 #
Proposal for a regulation
Article 58 – paragraph 1 – point c e (new)
Article 58 – paragraph 1 – point c e (new)
(ce) to provide specific guidance to SMEs, start-ups and small-scale operators, in order to help them comply with, and alleviate the burden of, the obligations set out in this Regulation;
Amendment 819 #
Proposal for a regulation
Article 58 – paragraph 1 – point c f (new)
Article 58 – paragraph 1 – point c f (new)
(cf) to raise awareness and provide guidance material to developers and deployers regarding compliance with the requirement to put in place tools and measures to ensure a sufficient level of AI literacy in line with new Article 6.
Amendment 820 #
Proposal for a regulation
Article 58 – paragraph 1 – point c g (new)
Article 58 – paragraph 1 – point c g (new)
(cg) to contribute to the Union efforts to cooperate with third countries and international organisations in view of promoting a common global approach towards trustworthy AI;
Amendment 822 #
Proposal for a regulation
Article 59 – paragraph 3
Article 59 – paragraph 3
3. Member States shall inform the Commission and the Agency of their designation or designations and, where applicable, the reasons for designating more than one authority.
Amendment 826 #
Proposal for a regulation
Article 59 – paragraph 5
Article 59 – paragraph 5
5. Member States shall report to the Commission on an annual basis on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Agency for discussion and possible recommendations.
Amendment 827 #
Proposal for a regulation
Article 59 – paragraph 6
Article 59 – paragraph 6
6. The Agency shall facilitate the exchange of experience between national competent authorities.
Amendment 830 #
Proposal for a regulation
Article 59 – paragraph 8
Article 59 – paragraph 8
8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the Agency shall act as the competent authority for their supervision.
Amendment 831 #
Proposal for a regulation
Article 60 – paragraph 1
Article 60 – paragraph 1
1. The Agency shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51, as well as the information referred to in new paragraph 3x of Article 52.
Amendment 833 #
2. The data listed in Annex VIII shall be entered into the EU database by the providers. The Agency shall provide them with technical and administrative support.
Amendment 837 #
Proposal for a regulation
Article 60 – paragraph 5
Article 60 – paragraph 5
5. The Agency shall be the controller of the EU database. It shall also ensure adequate technical and administrative support to providers.
Amendment 846 #
Proposal for a regulation
Article 63 – paragraph 6
Article 63 – paragraph 6
6. Where Union institutions, agencies and bodies fall within the scope of this Regulation, the Agency shall act as their market surveillance authority.
Amendment 850 #
Proposal for a regulation
Article 65 – paragraph 3
Article 65 – paragraph 3
3. Where the market surveillance authority considers that non-compliance is not restricted to its national territory, it shall inform the Agency, the Commission and the other Member States of the results of the evaluation and of the actions which it has required the operator to take.
Amendment 851 #
Proposal for a regulation
Article 65 – paragraph 5
Article 65 – paragraph 5
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market, to withdraw the product from that market or to recall it. That authority shall inform the Agency, the Commission and the other Member States, without delay, of those measures.
Amendment 852 #
Proposal for a regulation
Article 65 – paragraph 6 – point -a (new)
Article 65 – paragraph 6 – point -a (new)
(-a) the non-compliance with new Article 4a;
Amendment 853 #
Proposal for a regulation
Article 65 – paragraph 7
Article 65 – paragraph 7
7. The market surveillance authorities of the Member States other than the market surveillance authority of the Member State initiating the procedure shall without delay inform the Agency, the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non- compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.
Amendment 854 #
Proposal for a regulation
Article 66 – paragraph 1
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by a Member State against a measure taken by another Member State, or where the Agency or the Commission considers the measure to be contrary to Union law, the Agency shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Agency shall decide whether the national measure is justified or not within 6 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.
Amendment 855 #
Proposal for a regulation
Article 66 – paragraph 2
Article 66 – paragraph 2
2. If the national measure is considered justified, all Member States shall take the measures necessary to ensure that the non-compliant AI system is withdrawn from their market, and shall inform the Agency accordingly. If the national measure is considered unjustified, the Member State concerned shall withdraw the measure.
Amendment 856 #
Proposal for a regulation
Article 67 – paragraph 3
Article 67 – paragraph 3
3. The Member State shall immediately inform the Agency, the Commission and the other Member States. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.
Amendment 857 #
Proposal for a regulation
Article 67 – paragraph 5
Article 67 – paragraph 5
5. The Commission shall address its decision to the Agency and the Member States.
Amendment 859 #
Proposal for a regulation
Article 68 a (new)
Article 68 a (new)
Article 68 a Reporting of breaches and protection of reporting persons Directive (EU) 2019/1937 of the European Parliament and of the Council1a shall apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches. _________________ 1a Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report breaches of Union law (OJ L 305, 26.11.2019, p. 17).
Amendment 864 #
Proposal for a regulation
Article 69 – paragraph 1
Article 69 – paragraph 1
1. The Commission and the Member States shall support the mandatory drawing up of codes of conduct intended to demonstrate compliance with the ethical principles underpinning trustworthy AI set out in Article 4a and to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
Amendment 866 #
Proposal for a regulation
Article 69 – paragraph 2
Article 69 – paragraph 2
2. In drawing up codes of conduct intended to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI set out in Article 4a, developers and deployers shall, in particular: (a) consider whether there is a sufficient level of AI literacy among their staff and any other persons dealing with the operation and use of AI systems in order to observe such principles; (b) assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities, or whether any measures could be put in place in order to support such persons or groups of persons; (c) pay attention to the way in which the use of their AI systems may have an impact on gender balance and equality; (d) have especial regard to whether their AI systems can be used in a way that, directly or indirectly, may residually or significantly reinforce existing biases or inequalities; (e) reflect on the need and relevance of having in place diverse development teams in view of securing an inclusive design of their systems; (f) give careful consideration to whether their systems can have a negative societal impact, notably concerning political institutions and democratic processes; (g) evaluate the extent to which the operation of their AI systems would allow them to fully comply with the obligation to provide an explanation laid down in Article New 71 of this Regulation; (h) take stock of the Union’s commitments under the European Green Deal and the European Declaration on Digital Rights and Principles; (i) state their commitment to privileging, where reasonable and feasible, the common specifications to be drafted by the Commission pursuant to Article 41 rather than their own individual technical solutions.
Amendment 868 #
Proposal for a regulation
Article 69 – paragraph 3
Article 69 – paragraph 3
3. Codes of conduct may be drawn up by individual developers and deployers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations, in particular trade unions and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
Amendment 871 #
Proposal for a regulation
Article 69 – paragraph 3 a (new)
Article 69 – paragraph 3 a (new)
3a. Developers and deployers shall designate at least one natural person that is responsible for the internal monitoring of the drawing up of their code of conduct and for verifying compliance with that code of conduct in the course of their activities. That person shall serve as a contact point for users, stakeholders, national competent authorities, the Commission and the European Agency for Data and AI on all matters concerning the code of conduct.
Amendment 872 #
Proposal for a regulation
Article 69 – paragraph 4
Article 69 – paragraph 4
4. The Commission and the European Agency for Data and AI shall take into account the specific interests and needs of the small-scale providers and start-ups when supporting the drawing up of codes of conduct.
Amendment 876 #
Proposal for a regulation
Article 69 – paragraph 4 a (new)
Article 69 – paragraph 4 a (new)
4a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 877 #
Proposal for a regulation
Title X
Title X
CONFIDENTIALITY, REMEDIES AND PENALTIES
Amendment 883 #
Proposal for a regulation
Article 70 a (new)
Article 70 a (new)
Article 70 a Right to an explanation 1. Any persons or groups of persons subject to a decision taken by a deployer or user on the basis of output from an AI system which produces legal effects, or which significantly affects them, shall have the right to receive from the deployer, upon request and, where concerning AI systems other than high- risk that are not subject to the requirements of Article 13 of this Regulation, at the time when the decision is communicated, a clear and meaningful explanation of: (a) the logic involved, the main parameters of decision-making and their relative weight; (b) the input data relating to the affected person or groups of persons and each of the main parameters on which the decision was made, including an easily understandable description of inferences drawn from other data if it is the inference that relates to a main parameter. 2. Paragraph 1 shall not apply to the use of AI systems: (a) that are authorised by law to detect, prevent, investigate and prosecute criminal offences or other unlawful behaviour; (b) for which exceptions from, or restrictions to, the obligation under paragraph 1 follow from Union or national law, which lays down other appropriate safeguards for the affected person or groups of persons’ rights and freedoms and legitimate interests; or (c) where the affected person has given free, explicit, specific and informed consent not to receive an explanation.
Amendment 884 #
Proposal for a regulation
Article 70 b (new)
Article 70 b (new)
Amendment 885 #
Proposal for a regulation
Article 70 c (new)
Article 70 c (new)
Amendment 886 #
Proposal for a regulation
Article 70 d (new)
Article 70 d (new)
Article 70 d Representation of affected persons or groups of persons 1. Without prejudice to Directive 2020/1828/EC, the person or groups of persons harmed by AI systems shall have the right to mandate a not-for-profit body, organisation or association which has been properly constituted in accordance with the law of a Member State, has statutory objectives which are in the public interest, and is active in the field of the protection of rights and freedoms impacted by AI to lodge the complaint on his, her or their behalf, to exercise the rights referred to in Articles New 71, New 72 and New 73 on his or her behalf. 2. Without prejudice to Directive 2020/1828/EC, the body, organisation or association referred to in paragraph 1 shall have the right to exercise the rights established in Articles New 72 and New 73 independently of a mandate by a person or groups of person if it considers that a developer or a deployer has infringed any of the rights or obligations set out in this Regulation.
Amendment 887 #
Proposal for a regulation
Article 70 e (new)
Article 70 e (new)
Article 70 e Representative actions 1. The following is added to Annex I of Directive 2020/1828/EC on Representative actions for the protection of the collective interests of consumers: “Regulation xxxx/xxxx of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts”.
Amendment 895 #
Proposal for a regulation
Article 72 – paragraph 1 – introductory part
Article 72 – paragraph 1 – introductory part
1. The Agency may impose administrative fines on Union institutions, agencies and bodies falling within the scope of this Regulation. When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:
Amendment 899 #
Proposal for a regulation
Article 72 – paragraph 1 – point b
Article 72 – paragraph 1 – point b
(b) the cooperation with the Agency in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the Agency against the Union institution or agency or body concerned with regard to the same subject matter;
Amendment 903 #
Proposal for a regulation
Article 72 – paragraph 4
Article 72 – paragraph 4
4. Before taking decisions pursuant to this Article, the Agency shall give the Union institution, agency or body which is the subject of the proceedings conducted by the Agency the opportunity of being heard on the matter regarding the possible infringement. The Agency shall base its decisions only on elements and circumstances on which the parties concerned have been able to comment. Complainants, if any, shall be associated closely with the proceedings.
Amendment 904 #
Proposal for a regulation
Article 72 – paragraph 5
Article 72 – paragraph 5
5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the Agency’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.
Amendment 910 #
Proposal for a regulation
Article 83 – paragraph 1 – introductory part
Article 83 – paragraph 1 – introductory part
1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.