211 Amendments of Adriana MALDONADO LÓPEZ related to 2021/0106(COD)
Amendment 127 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework based on ethical principles in particular for the design, development, deployment, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, environment and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 133 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 154 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter), the European Green Deal (the Green Deal) and the Joint Declaration on Digital Rights of the Union (the Declaration) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 158 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk- based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. With regard to transparency and human oversight obligations, Member States should be able to adopt further national measures to complement them without changing their harmonising nature.
Amendment 161 #
Proposal for a regulation
Recital 14 a (new)
(14a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are, in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
Amendment 162 #
Proposal for a regulation
Recital 14 b (new)
(14b) ‘AI literacy’ refers to skills, knowledge and understanding that allow both citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally and developers, deployers and users in the context of the obligations set out in this Regulation with the critical thinking skills required to identify harmful or manipulative uses as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as developers and deployers of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy, in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
Amendment 163 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
Amendment 170 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 191 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be developed and deployed if they comply with certain mandatory requirements based on ethical principles. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
Amendment 194 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, gender equality, education, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
Amendment 200 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the right to gender equality and not to be discriminated against, and perpetuate historical patterns of discrimination.
Amendment 201 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces, as well as the future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
Amendment 214 #
Proposal for a regulation
Recital 46
(46) Having comprehensible information on how high- risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation and to allow users to make informed and autonomous decisions about their use. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
Amendment 215 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a sufficient degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. The same applies to AI systems with general purposes that may have high-risk uses that are not forbidden by their developer. In such cases, sufficient information should be made available allowing deployers to carry out tests and analysis on performance, data and usage. The systems and information should also be registered in the EU database for stand-alone high-risk AI systems foreseen in Article 60 of this Regulation.
Amendment 218 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons can have agency over them by being able to oversee and control their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate and at the very least where decisions based solely on the automated processing enabled by such systems produce legal or otherwise significant effects, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 221 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated in an intelligible manner to the deployers and users.
Amendment 229 #
Proposal for a regulation
Recital 68
(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional and ethically justified reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
Amendment 237 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate and ethically justified safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
Amendment 242 #
Proposal for a regulation
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups; to contribute to the development of ethical, socially responsible and environmentally sustainable AI systems, in line with the ethical principles outlined in this Regulation. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 246 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 251 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 256 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 259 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
(a) ‘developer’ placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country, or that adapts a general purpose AI system to a specific purpose and use;
Amendment 286 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
Amendment 287 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
(8a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
Amendment 301 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s fundamental rights, health, to property or the environment, to democracy or the democratic rule of law,
Amendment 303 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a a (new)
(aa) 'AI literacy' means the skills, knowledge and understanding regarding AI systems that are necessary for compliance with and enforcement of this Regulation;
Amendment 316 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety, fundamental rights, the environment and the Union values enshrined in Article 2 of the Treaty on European Union (TEU), and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 325 #
Proposal for a regulation
Recital 2 a (new)
(2 a) However, in line with Article 114(2) TFEU, this Regulation does not affect the rights and interests of employed persons. This Regulation should therefore not affect Community law on social policy and national labour law and practice, that is, any legal and contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation. This Regulation should not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor should it affect concertation practices, the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance with national law and/or practice. It should in any case not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems.
Amendment 338 #
Proposal for a regulation
Recital 4 a (new)
(4 a) In order to ensure the dual green and digital transition, and secure the technological resilience of the EU, to reduce the carbon footprint of artificial intelligence and achieve the objectives of the new European Green Deal, this Regulation should contribute to the promotion of a green and sustainable artificial intelligence and to the consideration of the environmental impact of AI systems throughout their lifecycle. Sustainability should be at the core of the European artificial intelligence framework to guarantee that the development of artificial intelligence is compatible with sustainable development of environmental resources for current and future generations, at all stages of the lifecycle of artificial intelligence products. Sustainability of artificial intelligence should encompass sustainable data sources, data centres, resource use, power supplies and infrastructure.
Amendment 342 #
Proposal for a regulation
Recital 4 b (new)
(4 b) Despite the high potential of solutions to the environmental and climate crisis offered by artificial intelligence, the design, training and execution of algorithms imply a high energy consumption and, consequently, high levels of carbon emissions. Artificial intelligence technologies and data centres have a high carbon footprint due to increased computational energy consumption, and high energy costs due to the volume of data stored and the amount of heat, electrical and electronic waste generated, thus resulting in increased pollution. These environmental and carbon footprints are expected to increase over time as the volume of data transferred and stored and the increasing development of artificial intelligence applications will continue to grow exponentially in the years to come. It is therefore important to minimise the climate and environmental footprint of artificial intelligence and related technologies and that AI systems and associated machinery are designed sustainably to reduce resource usage and energy consumption, thereby limiting the risks to the environment.
Amendment 343 #
Proposal for a regulation
Recital 4 c (new)
(4 c) To promote the sustainable development of AI systems and in particular to prioritise the need for sustainable, energy efficient data centres, requirements for efficient heating and cooling of data centres should be consistent with the long-term climate and environmental standards and priorities of the Union and comply with the principle of 'do no significant harm' within the meaning of Article 17 of Regulation (EU) 2020/852 on the establishment of a framework to facilitate sustainable investment, and should be fully decarbonised by January 2050. In this regard, Member States and telecommunications providers should collect and publish information relating to the energy performance and environmental footprint of artificial intelligence technologies and data centres, including information on the energy efficiency of algorithms, to establish a sustainability indicator for artificial intelligence technologies. A European code of conduct for data centre energy efficiency can establish key sustainability indicators to measure four basic dimensions of a sustainable data centre, namely, how efficiently it uses energy, the proportion of energy generated from renewable energy sources, the reuse of any waste and heat, and the usage of fresh water.
Amendment 345 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety, the protection of fundamental rights, as recognised and protected by Union law, the environment and the Union values enshrined in Article 2 TEU. To achieve that objective, rules regulating the development, the placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34. _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 360 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serve the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. AI systems can be developed through various techniques using learning, reasoning or modelling, such as: machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; statistical approaches, Bayesian estimation, search and optimization methods.
Amendment 372 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
Amendment 382 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Amendment 390 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
Amendment 395 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
Amendment 402 #
Proposal for a regulation
Recital 12 a (new)
(12 a) AI systems developed or used exclusively for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V TEU. However, AI systems which are developed or used for military purposes but can also be used for civil purposes, falling under the definition of “dual use items” pursuant to Regulation (EU) 2021/821 of the European Parliament and of the Council1a should fall within the scope of this Regulation. _________________ 1a Regulation (EU) 2021/821 of the European Parliament and of the Council of 20 May 2021 setting up a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items (OJ L 206, 11.6.2021, p. 1).
Amendment 405 #
Proposal for a regulation
Recital 12 b (new)
(12 b) This Regulation should not affect the provisions aimed at improving working conditions in platform work set out in Directive 2021/762/EC.
Amendment 409 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, the environment and the Union values enshrined in Article 2 TEU, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 414 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk- based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
Amendment 421 #
Proposal for a regulation
Recital 15 a (new)
Amendment 427 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the effect or likely effect of distorting human behaviour, whereby material or non-material harm, including physical, psychological or economic harms are likely to occur, should be forbidden. This limitation should be understood to include neuro-technologies assisted by AI systems that are used to monitor, use, or influence neural data gathered through brain-computer interfaces. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the effect of materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 433 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by private or public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
Amendment 442 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems used by law enforcement authorities or on their behalf to predict the probability of a natural person to offend or to reoffend, based on profiling and individual or place-based risk assessment, hold a particular risk of discrimination against certain persons or groups of persons, as they violate human dignity as well as the key legal principle of presumption of innocence. Such AI systems should therefore be prohibited.
Amendment 516 #
Proposal for a regulation
Recital 25
(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.
Amendment 517 #
Proposal for a regulation
Recital 26
(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.
Amendment 518 #
Proposal for a regulation
Recital 26 a (new)
(26 a) AI systems capable of reading facial expressions to infer emotional states have no scientific basis, while at the same time running a high risk of inaccuracy, in particular for certain groups of individuals whose facial traits are not easily readable by such systems, as several examples have shown. Therefore, due to the particular risk of discrimination, these systems should be prohibited.
Amendment 525 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service or used if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law and do not contravene the Union values enshrined in Article 2 TEU. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and the fundamental rights of persons in the Union or the environment and such limitation minimises any potential restriction to international trade, if any.
Amendment 559 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate or monitor persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.
Amendment 581 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities or on their behalf to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
Amendment 588 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status; and for verifying the authenticity of the relevant documents of natural persons. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
Amendment 590 #
Proposal for a regulation
Article 56 – paragraph 2 – point a
(a) promote and support effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
Amendment 591 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) assist developers, deployers and users of AI systems to meet the requirements of this Regulation, including those set out in present and future Union legislation, in particular SMEs and start-ups.
Amendment 601 #
Proposal for a regulation
Recital 40 a (new)
(40 a) Certain AI systems should at the same time be subject to transparency requirements and be classified as high- risk AI systems, given their potential to deceive and cause both individual and societal harm. In particular, AI systems that generate deep fakes representing existing persons have the potential to both manipulate the natural persons that are exposed to those deep fakes and harm the persons they are representing or misrepresenting, while AI systems that, based on limited human input, generate complex text such as news articles, opinion articles, novels, scripts and scientific articles have the potential to manipulate, to deceive, or to expose natural persons to built-in biases or inaccuracies. These should not include AI systems intended to translate text, or cases where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work.
Amendment 609 #
Proposal for a regulation
Recital 41
(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.
Amendment 619 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety, fundamental rights, the environment and the Union values enshrined in Article 2 TEU, as applicable in the light of the intended purpose or reasonably foreseeable use of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 644 #
Proposal for a regulation
Recital 48 a (new)
Amendment 666 #
Proposal for a regulation
Recital 58 a (new)
(58 a) Whilst risks related to AI systems can arise from the way such systems are designed, risks can as well stem from how such AI systems are used. Users of high-risk AI systems therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Users are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential risks that were not foreseen in the development phase, thanks to a more precise knowledge of the context of use and of the people or groups of people likely to be affected, including marginalised and vulnerable groups. In order to efficiently ensure that fundamental rights are protected, the user of high-risk AI systems should therefore carry out a fundamental rights impact assessment on how it intends to use such AI systems, prior to putting them into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help mitigate the risks to fundamental rights identified. When performing this impact assessment, the user should notify the national supervisory authority, the market surveillance authority as well as relevant stakeholders. It should also involve representatives of groups of persons likely to be affected by the AI system in order to collect relevant information which is deemed necessary to perform the impact assessment.
Amendment 681 #
Proposal for a regulation
Recital 64
Amendment 689 #
Proposal for a regulation
Recital 65
(65) In order to carry out third-party conformity assessment for AI systems intended to be used for any of the use-cases listed in Annex III, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests.
Amendment 714 #
Proposal for a regulation
Recital 70
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
Amendment 726 #
Proposal for a regulation
Recital 72
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
Amendment 728 #
Proposal for a regulation
Recital 72 a (new)
Recital 72 a (new)
(72 a) To ensure that Artificial Intelligence leads to socially and environmentally beneficial outcomes, Member States should support and promote research and development of AI in support of socially and environmentally beneficial outcomes by allocating sufficient resources, including public and Union funding, and giving priority access to regulatory sandboxes to projects led by civil society. Such projects should be based on the principle of interdisciplinary cooperation between AI developers, experts on inequality and non- discrimination, accessibility, consumer, environmental, and digital rights, as well as academics.
Amendment 756 #
Proposal for a regulation
Recital 80 a (new)
Recital 80 a (new)
(80 a) Where the national market surveillance authority has not taken measures against an infringement of this Regulation, the Commission should be in possession of all the necessary resources, in terms of staffing, expertise, and financial means, to perform the tasks under this Regulation in place of the national market surveillance authority. In order to ensure the availability of the resources necessary for the adequate investigation and enforcement measures that the Commission could undertake under this Regulation, the Commission should charge fees on national market surveillance authorities, the level of which should be established on a case-by-case basis. The overall amount of fees charged should be established on the basis of the overall amount of the costs incurred by the Commission to exercise its investigation and enforcement powers under this Regulation. Such an amount should include costs relating to the exercise of the specific powers and tasks connected to Chapter 4 of Title VIII of this Regulation. The external assigned revenues resulting from the fees could be used to finance additional human resources, such as contractual agents and seconded national experts, and other expenditure related to the fulfilment of these tasks entrusted to the Commission by this Regulation.
Amendment 765 #
Proposal for a regulation
Recital 84 a (new)
Recital 84 a (new)
(84 a) An affected person should also have the right to mandate a not-for-profit body, organisation or association that has been properly constituted in accordance with the law of a Member State, to lodge the complaint on their behalf. To this end, Directive 2020/1828/EC on Representative Actions for the Protection of the Collective Interests of Consumers should be amended to include this Regulation among the provisions of Union law falling under its scope.
Amendment 772 #
Proposal for a regulation
Recital 85
Recital 85
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, and the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58 . In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. _________________ 58 OJ L 123, 12.5.2016, p. 1.
Amendment 780 #
Proposal for a regulation
Article 1 – paragraph -1 (new)
Article 1 – paragraph -1 (new)
-1 The purpose of this Regulation is to ensure a high level of protection of health, safety, fundamental rights, the environment and the Union values enshrined in Article 2 TEU from harmful effects of artificial intelligence systems in the Union while promoting innovation.
Amendment 790 #
Proposal for a regulation
Article 1 – paragraph 1 – point a a (new)
Article 1 – paragraph 1 – point a a (new)
(a a) principles applicable to all AI systems;
Amendment 792 #
Proposal for a regulation
Article 1 – paragraph 1 – point c a (new)
Article 1 – paragraph 1 – point c a (new)
(c a) harmonised rules on high-risk AI systems to ensure a high level of trustworthiness and protection of fundamental rights, health and safety, the Union values enshrined in Article 2 TEU and the environment;
Amendment 811 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
Article 1 – paragraph 1 a (new)
This Regulation shall be applied taking due account of the precautionary principle.
Amendment 831 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
Article 2 – paragraph 1 – point c a (new)
(c a) natural persons, affected by the use of an AI system, who are in the Union;
Amendment 838 #
Proposal for a regulation
Article 2 – paragraph 1 a (new)
Article 2 – paragraph 1 a (new)
1 a. providers placing on the market or putting into service AI systems in a third country where the provider or distributor of such AI systems originates from the Union;
Amendment 867 #
Proposal for a regulation
Article 2 – paragraph 3
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes. However, this Regulation shall apply to AI systems which are developed or used as dual-use items, as defined in Article 2, point (1) of Regulation (EU) 2021/821 of the European Parliament and of the Council1a. _________________ 1a Regulation (EU) 2021/821 of the European Parliament and of the Council of 20 May 2021 setting up a Union regime for the control of exports, brokering, technical assistance, transit and transfer of dual-use items (OJ L 206, 11.6.2021, p. 1).
Amendment 876 #
Proposal for a regulation
Article 2 – paragraph 3 a (new)
Article 2 – paragraph 3 a (new)
Amendment 878 #
Proposal for a regulation
Article 2 – paragraph 4
Article 2 – paragraph 4
Amendment 890 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not affect Community law on social policy.
Amendment 891 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not affect national labour law and practice or collective agreements, and it shall not preclude national legislation to ensure the protection of workers’ rights in respect of the use of AI systems by employers, including where this implies introducing more stringent obligations than those laid down in this Regulation.
Amendment 897 #
Proposal for a regulation
Article 2 – paragraph 5 c (new)
Article 2 – paragraph 5 c (new)
5 c. This Regulation is without prejudice to the rules laid down by other Union legal acts regulating other aspects of AI systems as well as the national rules aimed at enforcing or, as the case may be, implementing these acts, in particular Union law on consumer protection and product safety, including Regulation (EU) 2017/2394, Regulation (EU) 2019/1020, Directive 2001/95/EC on general product safety and Directive 2013/11/EU.
Amendment 920 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with can perceive, learn, reasone or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives,del based on machine and/or human based inputs, to generate outputs such as content, hypotheses, predictions, recommendations, or decisions influencing the real or virtual environments they interact with;
Amendment 961 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8 a (new)
Article 3 – paragraph 1 – point 8 a (new)
(8 a) ‘affected person’ means any natural person or group of persons who are subject to or affected by an AI system;
Amendment 984 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety or security function for that product or system or the failure or malfunctioning of which endangers the health and, safety of persons or property, fundamental rights of persons or which damages property, or the environment;
Amendment 999 #
Proposal for a regulation
Article 3 – paragraph 1 – point 20
Article 3 – paragraph 1 – point 20
(20) ‘conformity assessment’ means the process of verifydemonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;
Amendment 1019 #
Proposal for a regulation
Article 3 – paragraph 1 – point 30
Article 3 – paragraph 1 – point 30
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent underfitting or overfitting; whereas the validation dataset can beis a separate dataset or part of the training dataset, either as a fixed or variable split;
Amendment 1036 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural personthoughts, states of mind or intentions of individuals or groups on the basis of their biometric and biometric-based data;
Amendment 1039 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as gender, sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data; social origin, health, mental or physical ability, behavioural or personality traits, language, religion, or membership of a national minority, or sexual or political orientation, on the basis of their biometric or biometric-based data, or which can be reasonably inferred from such data.
Amendment 1089 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s health, to property or the environment,
Amendment 1095 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
Article 3 – paragraph 1 – point 44 – point b a (new)
(b a) a breach of obligations under Union law intended to protect fundamental rights;
Amendment 1098 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘AI systems presenting a risk’ means an AI system having the potential to affect adversely fundamental rights, health and safety of persons in general, including in the workplace, protection of consumers, the environment, public security, the values enshrined in Article 2 TEU and other public interests, that are protected by the applicable Union harmonisation legislation, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the system concerned, including the duration of use and, where applicable, its putting into service, installation and maintenance requirements.
Amendment 1114 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘child’ means any person below the age of 18 years.
Amendment 1132 #
Proposal for a regulation
Article 4
Article 4
Amendments to Annex I The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.rticle 4 deleted
Amendment 1143 #
Proposal for a regulation
Article 4 a (new)
Article 4 a (new)
Amendment 1148 #
Proposal for a regulation
Article 4 b (new)
Article 4 b (new)
Amendment 1153 #
Proposal for a regulation
Article 4 c (new)
Article 4 c (new)
Article 4 c Right to receive an explanation of individual decision-making 1. A decision which is taken by the user on the basis of the output from an AI system and which produces legal effects on an affected person, or which similarly significantly affects that person, shall be accompanied by a meaningful explanation of (a) the role of the AI system in the decision-making process; (b) the logic involved, the main parameters of the decision-making, and their relative weight; and (c) the input data relating to the affected person and each of the main parameters on the basis of which the decision was made. For information on input data under point (c) to be meaningful, it must include an easily understandable description of inferences drawn from other data, if it is the inference that relates to the main parameter. 2. For the purpose of paragraph 1, it shall be prohibited for the law enforcement authorities or the judiciary in the Union to use AI systems that are considered closed or labelled as proprietary by the providers or the distributors. 3. The explanation within the meaning of paragraph 1 shall be provided at the time when the decision is communicated to the affected person.
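As a non-authoritative illustration of the explanation described in points (a) to (c) of paragraph 1 above, the following Python sketch assembles a simple explanation record; the names (ExplanationRecord, build_explanation) and the example credit decision are hypothetical assumptions, not anything prescribed by the Regulation.

```python
# Hypothetical sketch only: one possible way a user could assemble the
# explanation in points (a) to (c); nothing here is prescribed by the Regulation.
from dataclasses import dataclass
from typing import Dict

@dataclass
class ExplanationRecord:
    system_role: str                   # point (a): role of the AI system in the decision
    logic_summary: str                 # point (b): the logic involved
    main_parameters: Dict[str, float]  # point (b): main parameters and relative weights
    input_data: Dict[str, str]         # point (c): input data per main parameter,
                                       # with plain-language descriptions of inferences

def build_explanation(weights: Dict[str, float], inputs: Dict[str, str],
                      role: str, logic: str) -> ExplanationRecord:
    # Normalise the parameter weights so the relative weights sum to 1.
    total = sum(abs(w) for w in weights.values()) or 1.0
    relative = {name: abs(w) / total for name, w in weights.items()}
    return ExplanationRecord(role, logic, relative, inputs)

# Hypothetical usage for a credit decision supported by an AI system:
record = build_explanation(
    weights={"income_stability": 0.6, "existing_debt": 0.3, "payment_history": 0.1},
    inputs={"income_stability": "12 months of salary data",
            "existing_debt": "inferred from declared loans (inference described to the person)",
            "payment_history": "3 late payments in the last 24 months"},
    role="advisory score reviewed by a human decision-maker",
    logic="weighted scoring of three financial indicators",
)
print(record)
```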
Amendment 1154 #
Proposal for a regulation
Article 4 d (new)
Article 4 d (new)
Article 4 d Right not to be subject to non-compliant AI systems Natural persons shall have the right not to be subject to AI systems that: (a) pose an unacceptable risk pursuant to Article 5, or (b) otherwise do not comply with the requirements of this Regulation.
Amendment 1157 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviourtechniques with the effect or likely effect of materially distorting a person’s behaviour by appreciably impairing the persons’ ability to make an informed decision, thereby causing the person to take a decision that they would not have taken otherwise, in a manner that causes or is likely to cause that person or another person, or a group of persons material or non-material harm, including physical or, psychological or economic harm;
Amendment 1173 #
Proposal for a regulation
Article 5 – paragraph 1 – point a a (new)
Article 5 – paragraph 1 – point a a (new)
(a a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques.
Amendment 1176 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities ofor may be reasonably foreseen to exploit vulnerabilities of children or characteristics of a person or a specific group of persons due to their age, physical or mental disability, in order togender, sexual orientation, ethnicity, race, origin, and religion or social or economic situation, with the effect or likely effect of materially distorting the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person material or non-material harm, including physical or, psychological or economic harm;
Amendment 1202 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
Article 5 – paragraph 1 – point c – point i
Amendment 1217 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
Article 5 – paragraph 1 – point c – point ii
Amendment 1222 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
Article 5 – paragraph 1 – point c a (new)
(c a) the placing on the market, putting into service or use of an AI system for making individual or place-based risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of a natural person or on assessing personality traits and characteristics or past criminal behaviour of natural persons or groups of natural persons;
Amendment 1252 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
Article 5 – paragraph 1 – point d – point i
Amendment 1265 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
Article 5 – paragraph 1 – point d – point ii
Amendment 1277 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
Amendment 1395 #
Proposal for a regulation
Article 5 – paragraph 4 a (new)
Article 5 – paragraph 4 a (new)
4 a. The placing on the market, putting into service or use of AI systems intended to be used as polygraphs, emotion recognition systems or similar tools to detect the emotional state, trustworthiness or related characteristics of a natural person.
Amendment 1398 #
Proposal for a regulation
Article 5 – paragraph 4 b (new)
Article 5 – paragraph 4 b (new)
4 b. Member States may, by law or collective agreements, decide to prohibit or to limit the use of AI systems to ensure the protection of the rights of workers in the employment context, in particular for the purposes of the recruitment, the performance of the contract of employment, including discharge obligations laid down by law or by collective agreements, management, planning and organisation of work, equality and diversity at the workplace, health and safety at work, protection of employers' or customers' property and for the purposes of the exercise and enjoyment, on an individual or collective basis, of rights and benefits related to employment, and for the purpose of the termination of the employment relationship.
Amendment 1399 #
Proposal for a regulation
Article 5 – paragraph 4 c (new)
Article 5 – paragraph 4 c (new)
4 c. the placing on the market, putting into service or the use of AI systems by or on behalf of competent authorities in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered the territory of a Member State, on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
Amendment 1400 #
Proposal for a regulation
Article 5 – paragraph 4 d (new)
Article 5 – paragraph 4 d (new)
4 d. the placing on the market, putting into service or use of AI systems by competent authorities or on their behalf in migration, asylum and border control management, to forecast or predict individual or collective movement for the purpose of, or in any way reasonably foreseeably leading to, the prohibiting, curtailing or preventing migration or border crossings;
Amendment 1401 #
Proposal for a regulation
Article 5 – paragraph 4 e (new)
Article 5 – paragraph 4 e (new)
4 e. the placing on the market, putting into service or the use of AI systems intended to assist competent authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status;
Amendment 1402 #
Proposal for a regulation
Article 5 – paragraph 4 f (new)
Article 5 – paragraph 4 f (new)
4 f. the placing on the market, putting into service, or use of an AI system for the specific technical processing of brain or brain-generated data in order to access, infer, influence, or manipulate a person's thoughts, emotions, memories, intentions, beliefs, or other mental states against that person's will or in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1413 #
Proposal for a regulation
Article 6 – paragraph -1 (new)
Article 6 – paragraph -1 (new)
-1. AI systems referred to in Annex III shall be considered high-risk for the purposes of this Regulation.
Amendment 1433 #
Proposal for a regulation
Article 6 – paragraph 2
Article 6 – paragraph 2
Amendment 1462 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by addingAnnex III, including by adding new areas of high-risk AI systems, where both of the following conditions are fulfilled: a type of AI system poses a risk of harm to the health and safety, a risk of adverse impact on fundamental rights, on climate change mitigation and adaptation, the environment, or a risk of contravention of the Union values enshrined in Article 2 TEU, and that risk is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems in use in the areas listed in Annex III.
Amendment 1473 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
Article 7 – paragraph 1 – point a
Amendment 1481 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
Article 7 – paragraph 1 – point b
Amendment 1507 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to natural persons, has contravened the Union values enshrined in Article 2 TEU, has caused harm to the health and safety or has had an adverse impact on the fundamental rights, on the environment or society, or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities, to the Commission, to the Board, to the EDPS or to the European Union Agency for Fundamental Rights (FRA);
Amendment 1526 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons, the fundamental rights of persons, the environment or society, or on the Union values enshrined in Article 2 TEU shall not be considered as easily reversible;
Amendment 1566 #
Proposal for a regulation
Article 8 – paragraph 2
Article 8 – paragraph 2
2. The intended purpose, reasonably foreseeable uses and foreseeable misuses of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
Amendment 1594 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose or reasonably foreseeable use and under conditions of reasonably foreseeable misuse;
Amendment 1615 #
Proposal for a regulation
Article 9 – paragraph 4
Article 9 – paragraph 4
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable use or misuse. Those residual risks shall be communicated to the user.
Amendment 1633 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
Article 9 – paragraph 4 – subparagraph 2
In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended or reasonably foreseeable to be used.
Amendment 1678 #
Proposal for a regulation
Article 10 – paragraph 1 a (new)
Article 10 – paragraph 1 a (new)
1 a. Validation datasets shall be separate datasets from both the testing and the training datasets, in order for the evaluation to be unbiased. If only one dataset is available, it shall be divided into three parts: a training set, a validation set, and a testing set. Each set shall comply with paragraph 3 of this Article.
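A minimal sketch of the three-way division described above, assuming a single available dataset; the 80/10/10 split ratios and the function name are illustrative assumptions, not requirements of the Regulation.

```python
# Illustrative only: divide one dataset into training, validation and testing sets.
import random

def split_dataset(records, train_share=0.8, validation_share=0.1, seed=42):
    # Shuffle deterministically, then cut into three non-overlapping parts so that
    # the validation set stays separate from training and testing data.
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_share)
    n_val = int(n * validation_share)
    return (shuffled[:n_train],                 # training set
            shuffled[n_train:n_train + n_val],  # validation set
            shuffled[n_train + n_val:])         # testing set

train_set, val_set, test_set = split_dataset(list(range(1000)))
print(len(train_set), len(val_set), len(test_set))  # 800 100 100
```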
Amendment 1680 #
Proposal for a regulation
Article 10 – paragraph 1 b (new)
Article 10 – paragraph 1 b (new)
1 b. Techniques such as unsupervised learning and reinforcement learning that do not use validation and testing datasets shall be developed on the basis of training datasets that meet the quality criteria referred to in paragraphs 2 to 4.
Amendment 1719 #
Proposal for a regulation
Article 10 – paragraph 3
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, up-to-date, and to the best extent possible, taking into account the state of the art, free of errors and be as complete as possible. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets mayshall be met at the level of each individual data sets or a combination thereof.
Amendment 1732 #
Proposal for a regulation
Article 10 – paragraph 4
Article 10 – paragraph 4
4. Training, validation and testing dData sets shall take into account, to the extent required by the intended purpose, the reasonably foreseeable uses and misuses of AI systems, the characteristics or elements that are particular to the specific geographical, cultural, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 1764 #
Proposal for a regulation
Article 11 – paragraph 3 a (new)
Article 11 – paragraph 3 a (new)
3 a. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the technical documentation as part of the documentation concerning internal governance, arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
Amendment 1773 #
Proposal for a regulation
Article 12 – paragraph 2
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose or reasonably foreseeable use of the system.
Amendment 1782 #
Proposal for a regulation
Article 12 – paragraph 4 – point a
Article 12 – paragraph 4 – point a
(a) recording of the period of each use of the system (start date and time and end date and time of each use);
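For illustration, point (a) could be satisfied by logging capabilities along the following lines; the UsageLog class and its methods are a hypothetical sketch, not a prescribed interface.

```python
# Hypothetical sketch: automatically record the start and end date and time
# (UTC) of each use of the system, as described in point (a).
from datetime import datetime, timezone

class UsageLog:
    def __init__(self):
        self.entries = []

    def start_use(self):
        # Open a new log entry with the start timestamp of this use.
        entry = {"start": datetime.now(timezone.utc), "end": None}
        self.entries.append(entry)
        return entry

    def end_use(self, entry):
        # Close the entry with the end timestamp once the use has finished.
        entry["end"] = datetime.now(timezone.utc)

log = UsageLog()
session = log.start_use()
# ... the AI system is used here ...
log.end_use(session)
print(log.entries)
```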
Amendment 1783 #
Proposal for a regulation
Article 12 – paragraph 4 – point c
Article 12 – paragraph 4 – point c
Amendment 1873 #
Proposal for a regulation
Article 15 a (new)
Article 15 a (new)
Article 15 a Sustainable AI systems reporting 1. Providers of high-risk AI systems shall make publicly available information on the energy consumption of the AI system, in particular its carbon footprint with regard to the development of hardware, computational resources, as well as algorithm design and the training, testing and validation processes of the high-risk AI system. The provider shall include this information in the technical documentation referred to in Article 11. 2. The Commission shall develop, by means of an implementing act, a standardised document to facilitate the disclosure of information on the energy used in the training and execution of AI systems and their carbon intensity.
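As one possible shape for the standardised disclosure document referred to in paragraph 2, the following sketch shows the kind of fields it might contain; the field names, units and figures are assumptions made for illustration only.

```python
# Illustrative sketch of an energy and carbon disclosure record; field names,
# units and figures are assumptions, not the Commission's standardised document.
from dataclasses import dataclass, asdict
import json

@dataclass
class EnergyDisclosure:
    system_name: str
    training_energy_kwh: float               # energy used for training, testing, validation
    inference_energy_kwh_per_1k: float       # energy per 1 000 executions of the system
    grid_carbon_intensity_g_per_kwh: float   # carbon intensity of the electricity used

    def training_co2_kg(self) -> float:
        # Footprint of the training phase: kWh x gCO2/kWh, converted to kg.
        return self.training_energy_kwh * self.grid_carbon_intensity_g_per_kwh / 1000.0

disclosure = EnergyDisclosure("example-system", 12000.0, 0.4, 250.0)
report = asdict(disclosure)
report["training_co2_kg"] = disclosure.training_co2_kg()
print(json.dumps(report, indent=2))
```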
Amendment 1882 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
Article 16 – paragraph 1 – point a a (new)
(a a) indicate their name, registered trade name or registered trade mark, and their address on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as appropriate;
Amendment 1945 #
Proposal for a regulation
Article 18
Article 18
Amendment 2060 #
Prior to putting into service or using an AI system at the workplace, users shall consult workers' representatives, inform the affected employees that they will be subject to the system and obtain their consent.
Amendment 2079 #
Proposal for a regulation
Article 29 a (new)
Article 29 a (new)
Amendment 2092 #
Proposal for a regulation
Article 30 – paragraph 8
Article 30 – paragraph 8
8. Notifying authorities shall make sure that conformity assessments are carried out in a proportionate and timely manner, avoiding unnecessary burdens for providers and that notified bodies perform their activities taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the AI system in question.
Amendment 2094 #
Proposal for a regulation
Article 31 – paragraph 3
Article 31 – paragraph 3
3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 33. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate.
Amendment 2096 #
Proposal for a regulation
Article 32 – paragraph 3
Article 32 – paragraph 3
3. The notification referred to in paragraph 2 shall include full details of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies concerned, as well as the relevant attestation of competence.
Amendment 2098 #
Proposal for a regulation
Article 32 – paragraph 4
Article 32 – paragraph 4
4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States. within onetwo weeks of the validation of the notification where it includes an accreditation certificate referred to in Article 31(2), or within two months of athe notification where it includes documentary evidence referred to in Article 31(3).
Amendment 2100 #
Proposal for a regulation
Article 32 – paragraph 4 a (new)
Article 32 – paragraph 4 a (new)
4 a. Where objections are raised, the Commission shall without delay enter into consultation with the relevant Member States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is justified or not. The Commission shall address its decision to the Member State concerned and the relevant conformity assessment body.
Amendment 2104 #
Proposal for a regulation
Article 33 – paragraph 4
Article 33 – paragraph 4
4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider. This shall not preclude the use of assessed AI systems that are necessary for the operations of the conformity assessment body or the use of such systems for personal purposes.
Amendment 2110 #
Proposal for a regulation
Article 36 – paragraph 1
Article 36 – paragraph 1
1. Where a notifying authority has suspicions or has been informed that a notified body no longer meets the requirements laid down in Article 33, or that it is failing to fulfil its obligations, that authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body investigation no longer meets the requirements laid down in Article 33 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the notification as appropriate, depending on the seriousness of the failure. It shall also immediately inform the Commission and the other Member States accordingly.
Amendment 2112 #
Proposal for a regulation
Article 37 – paragraph 3
Article 37 – paragraph 3
3. The Commission shall ensure that all confidentialsensitive information obtained in the course of its investigations pursuant to this Article is treated confidentially.
Amendment 2119 #
Proposal for a regulation
Article 39 – paragraph 1
Article 39 – paragraph 1
Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement in this respect may be authorised to carry out the activities of notified bodies under this Regulation.
Amendment 2127 #
Proposal for a regulation
Article 40 – paragraph 1 a (new)
Article 40 – paragraph 1 a (new)
When AI systems are intended to be deployed at the workplace, harmonised standards shall be limited to technical specifications and procedures.
Amendment 2159 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high- risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
Amendment 2164 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
Article 43 – paragraph 1 – point a
Amendment 2168 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
Article 43 – paragraph 1 – point b
Amendment 2173 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 1
Article 43 – paragraph 1 – subparagraph 1
Amendment 2176 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 2
Article 43 – paragraph 1 – subparagraph 2
Amendment 2178 #
Proposal for a regulation
Article 43 – paragraph 1 a (new)
Article 43 – paragraph 1 a (new)
1 a. Without prejudice to paragraph 1, if the provider has applied the harmonised standards referred to in Article 40 or, where applicable, the common specifications referred to in Article 41, it shall follow the conformity assessment procedure based on internal control referred to in Annex VI.
Amendment 2179 #
Proposal for a regulation
Article 43 – paragraph 1 b (new)
Article 43 – paragraph 1 b (new)
1 b. In the following cases, the compliance of the high-risk AI system with requirements laid down in Chapter 2 of this Title shall be assessed following the conformity assessment procedure based on the assessment of the quality management system and the assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII: (a) where harmonised standards, the reference number of which has been published in the Official Journal of the European Union, covering all relevant safety requirements for the AI system, do not exist; (b) where the harmonised standards referred to in point (a) exist but the manufacturer has not applied them or has applied them only in part; (c) where one or more of the harmonised standards referred to in point (a) has been published with a restriction; (d) when the provider considers that the nature, design, construction or purpose of the AI system necessitate third party verification.
Amendment 2182 #
Proposal for a regulation
Article 43 – paragraph 2
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to101 of that Directive.
Amendment 2197 #
Proposal for a regulation
Article 43 – paragraph 4 a (new)
Article 43 – paragraph 4 a (new)
4 a. The specific interests and needs of the small-scale providers shall be taken into account when setting the fees for third-party conformity assessment under this Article, reducing those fees proportionately to their size and market size.
Amendment 2205 #
Proposal for a regulation
Article 43 – paragraph 6
Article 43 – paragraph 6
Amendment 2230 #
Proposal for a regulation
Article 49 – paragraph 1
Article 49 – paragraph 1
1. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems before the high-risk AI system is placed on the market. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate. It may be followed by a pictogram or any other marking indicating a special risk or use.
Amendment 2236 #
Proposal for a regulation
Article 49 – paragraph 3 a (new)
Article 49 – paragraph 3 a (new)
3 a. Where high-risk AI systems are subject to other Union legislation which also provides for the affixing of the CE marking, the CE marking shall indicate that the high-risk AI system also fulfils the requirements of that other legislation.
Amendment 2238 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
Article 50 – paragraph 1 – introductory part
The provider shall, for the entire lifecycle of the AI system or for a period ending 10 years after the AI system has been placed on the market or put into service, whichever is the longest, keep at the disposal of the national competent authorities:
Amendment 2317 #
Proposal for a regulation
Article 53 – paragraph 3
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and, fundamental rights and the environment identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension ofand adequate mitigation. Where such mitigation proves to be ineffective, the development and testing process shall be suspended without delay until such mitigation takes place.
Amendment 2343 #
Proposal for a regulation
Article 54
Article 54
Amendment 2369 #
Proposal for a regulation
Article 54 a (new)
Article 54 a (new)
Amendment 2504 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – introductory part
Article 58 – paragraph 1 – point c – introductory part
(c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, after consulting relevant stakeholders, in particular
Amendment 2651 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
Article 62 – paragraph 1 – introductory part
1. Providers and, where users have identified a serious incident or malfunctioning, users of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred and to the affected persons and, where the incident or breach occurs or is likely to occur in at least two Member States, to the Commission.
Amendment 2667 #
Proposal for a regulation
Article 62 – paragraph 2 a (new)
Article 62 – paragraph 2 a (new)
2 a. The market surveillance authorities shall take appropriate measures within 7 days from the date on which they received the notification referred to in paragraph 1. Where the infringement takes place or is likely to take place in other Member States, the market surveillance authority shall notify the Commission, the Board and the relevant national competent authorities of those Member States.
Amendment 2670 #
Proposal for a regulation
Article 62 – paragraph 3
Article 62 – paragraph 3
3. For high-risk AI systems referred to in point 5(b) of Annex III which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU and for high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents or malfunctioning for the purposes of this Regulation shall be limited to those that that constitute a breach of obligations under Union law intended to protect fundamental rights and the environment.
Amendment 2677 #
Proposal for a regulation
Article 63 – paragraph 5
Article 63 – paragraph 5
5. For AI systems listed in point 1(a) in so far as the systemsthat are used for law enforcement purposes, points 6 and 7 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Directive (EU) 2016/680, or Regulation 2016/679 or the national competent authorities supervising the activities of the law enforcement, immigration or asylum authorities putting into service or using those systems.
Amendment 2705 #
Proposal for a regulation
Article 65 – paragraph 1
Article 65 – paragraph 1
1. AI systems presenting a risk shall be understood as a product presenting a risk defined in Article 3, point 19 of Regulation (EU) 2019/1020 insofar as risks to the health or safety or to the protection of fundamental rights of persmeans an AI system having the potential to affect adversely fundamental rights, health and safety of persons in general, including in the workplace, protection of consumers, the environment, public security, the values enshrined in Article 2 TEU and other public interests, that are protected by the applicable Union harmonisation legislation, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the system concerned, including the duration of use and, where applicable, its putting into service, installations are concernednd maintenance requirements.
Amendment 2717 #
Proposal for a regulation
Article 65 – paragraph 2 – subparagraph 1
Article 65 – paragraph 2 – subparagraph 1
Where, in the course of that evaluation, the market surveillance authority or, where relevant, the national public authority referred to in Article 64(3) finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe, and in any case no later than 15 working days.
Amendment 2725 #
Proposal for a regulation
Article 65 – paragraph 5
Article 65 – paragraph 5
5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market or put into service, to withdraw the productAI system from that market or to recall it. That authority shall immediately inform the Commission, the Board and the other Member States, without delay, of those measures.
Amendment 2728 #
Proposal for a regulation
Article 65 – paragraph 6 – point a
Article 65 – paragraph 6 – point a
(a) a failure of the AI system to meet requirements set out in Title III, Chapter 2and obligations set out in this Regulation;
Amendment 2745 #
Proposal for a regulation
Article 66 a (new)
Article 66 a (new)
Amendment 2749 #
Proposal for a regulation
Article 67 – paragraph 1
Article 67 – paragraph 1
1. Where, having performed an evaluation under Article 65, in full cooperation with the relevant national public authority referred to in Article 64(3),the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights, environment, European values as enshrined in Article 2 TEU or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
Amendment 2775 #
Proposal for a regulation
Article 68 a (new)
Article 68 a (new)
Amendment 2781 #
Proposal for a regulation
Article 68 b (new)
Article 68 b (new)
Article 68 b Representation of affected persons or groups of persons 1. Without prejudice to Directive 2020/1828/EC, the person or groups of persons harmed by AI systems shall have the right to mandate a not-for-profit body, organisation or association which has been properly constituted in accordance with the law of a Member State, has statutory objectives which are in the public interest, and is active in the field of the protection of rights and freedoms impacted by AI, to lodge the complaint and to exercise the rights referred to in this Regulation on his, her or their behalf. 2. Without prejudice to Directive 2020/1828/EC, the body, organisation or association referred to in paragraph 1 shall have the right to exercise the rights established in this Regulation independently of a mandate by a person or groups of persons if it considers that a provider or a user has infringed any of the rights or obligations set out in this Regulation.
Amendment 2783 #
Proposal for a regulation
Article 68 c (new)
Article 68 c (new)
Article 68 c Amendment to Directive 2020/1828/EC on Representative Actions for the Protection of the Collective Interests of Consumers The following is added to Annex I of Directive 2020/1828/EC on Representative actions for the protection of the collective interests of consumers: “Regulation xxxx/xxxx of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts”.
Amendment 2785 #
Proposal for a regulation
Article 68 d (new)
Article 68 d (new)
Article 68 d Reporting of breaches and protection of reporting persons Directive (EU) 2019/1937 of the European Parliament and of the Council shall apply to the reporting of breaches of this Regulation and the protection of persons reporting such breaches.
Amendment 2825 #
Proposal for a regulation
Article 71 – paragraph 2
Article 71 – paragraph 2
2. TWithin [three months following the entry into force of this Regulation], the Member States shall notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
Amendment 2828 #
Proposal for a regulation
Article 71 – paragraph 2 a (new)
Article 71 – paragraph 2 a (new)
2 a. The non-compliance of the AI system with the prohibition of the practices referred to in Article 5 shall be subject to administrative fines of up to 50 000 000 EUR or, if the offender is a company, up to 10% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
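A worked example of the "whichever is higher" ceiling in the paragraph above: a minimal sketch assuming illustrative turnover figures; the amounts printed are examples, not determinations of any actual fine.

```python
# Illustrative arithmetic for the ceiling above: the higher of a fixed amount
# and a share of worldwide annual turnover (for companies).
from typing import Optional

def fine_ceiling_eur(turnover_eur: Optional[float],
                     fixed_cap: float = 50_000_000.0,
                     turnover_share: float = 0.10) -> float:
    if turnover_eur is None:          # offender is not a company
        return fixed_cap
    return max(fixed_cap, turnover_share * turnover_eur)

print(fine_ceiling_eur(800_000_000))  # 10% of 800m = 80m > 50m -> ceiling 80,000,000
print(fine_ceiling_eur(200_000_000))  # 10% of 200m = 20m < 50m -> ceiling 50,000,000
```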
Amendment 2832 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
Article 71 – paragraph 3 – introductory part
3. The following infringementsnon-compliance of the AI system with the requirements laid down in Article 10 shall be subject to administrative fines of up to 340 000 000 EUR or, if the offender is a company, up to 68 % of its total worldwide annual turnover for the preceding financial year, whichever is higher: .
Amendment 2836 #
Proposal for a regulation
Article 71 – paragraph 3 – point a
Article 71 – paragraph 3 – point a
Amendment 2844 #
Proposal for a regulation
Article 71 – paragraph 3 – point b
Article 71 – paragraph 3 – point b
Amendment 2849 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 230 000 000 EUR or, if the offender is a company, up to 46 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2858 #
Proposal for a regulation
Article 71 – paragraph 5
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 120 000 000 EUR or, if the offender is a company, up to 24 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2865 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
Article 71 – paragraph 6 – point c
Amendment 2894 #
Proposal for a regulation
Article 72 – paragraph 2 – introductory part
Article 72 – paragraph 2 – introductory part
2. The following infringementsnon-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 1 000 000 EUR; 2a. The non-compliance of the AI system with the requirements laid down in Article 10 shall be subject to administrative fines of up to 5700 000 EUR: .
Amendment 2899 #
Proposal for a regulation
Article 72 – paragraph 2 – point a
Article 72 – paragraph 2 – point a
Amendment 2903 #
Proposal for a regulation
Article 72 – paragraph 2 – point b
Article 72 – paragraph 2 – point b
Amendment 2911 #
Proposal for a regulation
Article 72 – paragraph 3
Article 72 – paragraph 3
3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 2500 000 EUR.
Amendment 2913 #
Proposal for a regulation
Article 72 – paragraph 5
Article 72 – paragraph 5
5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.
Amendment 2917 #
Proposal for a regulation
Article 73 – paragraph 2
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6, Article 48(5) and Article 48(5)68a shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
Amendment 2921 #
Proposal for a regulation
Article 73 – paragraph 3
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6, Article 48(5) and Article 48(5)68a may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
Amendment 2932 #
Proposal for a regulation
Article 73 – paragraph 5
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and, Article 48(5) and 68d shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
Amendment 2960 #
Proposal for a regulation
Article 83 – paragraph 2
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.
Amendment 2965 #
Proposal for a regulation
Article 84 – paragraph 1
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or addition of new area headings, the list of prohibited practices in Article 5, and the list of AI systems requiring additional transparency measures, once a year following the entry into force of this Regulation.
Amendment 2985 #
Proposal for a regulation
Article 84 – paragraph 6
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of equality bodies and other relevant bodies or sources, and shall consult relevant external stakeholders, in particular those potentially affected by the AI system, as well as stakeholders from academia and civil society.
Amendment 2990 #
Proposal for a regulation
Article 84 – paragraph 7
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology, the effect of AI systems on health and safety, fundamental rights, the environment, equality, and accessibility for persons with disabilities, and in the light of the state of progress in the information society.
Amendment 2997 #
Proposal for a regulation
Article 84 – paragraph 7 a (new)
Article 84 – paragraph 7 a (new)
7 a. To guide the evaluations and reviews referred to in paragraphs 1 to 4, the Board shall undertake to develop an objective and participative methodology for the evaluation of risk levels, based on the criteria outlined in the relevant articles, and for the inclusion of new systems in: the list in Annex III, including the extension of existing area headings or addition of new area headings; the list of prohibited practices in Article 5; and the list of AI systems requiring additional transparency measures.
Amendment 3010 #
Proposal for a regulation
Annex I
Annex I
Amendment 3099 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
Annex III – paragraph 1 – point 3 – point b
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions, or for monitoring of students during exams, for determining learning objectives, and for allocating personalised learning tasks to students;
Amendment 3115 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions affecting the initiation, establishment, implementation, promotion and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, particularly for task allocation and for monitoring and evaluating performance and behaviour of persons or in matters of training or further education in such relationships.
Amendment 3157 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Annex III – paragraph 1 – point 6 – point b
Amendment 3165 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
Annex III – paragraph 1 – point 6 – point c
(c) AI systems intended to be used by law enforcement authorities or on their behalf to detect deep fakes as referred to in Article 52(3) and in point 8a(a) and (b) of this Annex;
Amendment 3170 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
Annex III – paragraph 1 – point 6 – point d
(d) AI systems intended to be used by law enforcement authorities or on their behalf for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
Amendment 3193 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Annex III – paragraph 1 – point 7 – point a
Amendment 3200 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
Annex III – paragraph 1 – point 7 – point b
Amendment 3209 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
Annex III – paragraph 1 – point 7 – point d
Amendment 3238 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
Annex III – paragraph 1 – point 8 a (new)
8 a. Other applications:
(a) AI systems intended to be used to generate, on the basis of limited human input, complex text content that would falsely appear to a person to be human-generated and authentic, such as news articles, opinion articles, novels, scripts, and scientific articles, except where the content forms part of an evidently artistic, creative or fictional and analogous work;
(b) AI systems intended to be used to generate or manipulate audio or video content that appreciably resembles existing natural persons, in a manner that significantly distorts or fabricates the original situation, meaning, content, or context and would falsely appear to a person to be authentic, except where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work.
Amendment 3246 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose or reasonably foreseeable use, the person/s developing the system, the date and the version of the system;
Amendment 3274 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 a (new)
Annex IV – paragraph 1 – point 3 a (new)
3 a. A description of the appropriateness of the performance metrics for the specific AI system.
Amendment 3275 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 b (new)
Annex IV – paragraph 1 – point 3 b (new)
3 b. Detailed information about the carbon footprint and the energy efficiency of the AI system, in particular with regard to the development of hardware, computational resources, and algorithm design and training processes;
Amendment 3276 #
Proposal for a regulation
Annex IV – paragraph 1 – point 3 c (new)
Annex IV – paragraph 1 – point 3 c (new)
3 c. Information about the computational resources required for the functioning of the AI system and its expected energy consumption during its use;