452 Amendments of Kim VAN SPARRENTAK related to 2021/0106(COD)
Amendment 93 #
Proposal for a regulation
Recital 17 a (new)
(17 a) The placing on the market, putting into service or use of certain AI systems that can be used or foreseeably misused for intrusive monitoring and flagging to identify or deter rule-breaking or fraud should be forbidden. The use of such intrusive monitoring and flagging, such as e-proctoring software, in a relationship of power, for example where education institutions have a relationship of power over their students and pupils, poses an unacceptable risk to the fundamental rights of students and pupils, including minors. Notably, these practices affect the private life, data protection and human dignity of students and pupils, including minors.
Amendment 193 #
Proposal for a regulation
Article 5 – paragraph 1 a (new)
1 a. the placing on the market, putting into service or use of an AI system that can be used for intrusive monitoring and flagging to identify or deter rule-breaking or fraud
Amendment 313 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform minimum legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, as well as the environment, society, rule of law and democracy, economic interests and consumer protection. It also ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation or justified by the need to ensure the protection of the rights and freedoms of natural persons, or by the ethical principles advocated by this Regulation.
Amendment 319 #
Proposal for a regulation
Recital 1 a (new)
(1 a) The term “artificial intelligence” (AI) refers to systems developed by humans that can, using different techniques and approaches, generate outputs such as content, predictions, recommendations and decisions. The context they are used in is decisive for how much and what kind of influence they can have, and whether they are perceived by an observer as “intelligent”. The term “automated decision-making” (ADM) has been proposed as it could avoid the possible ambiguity of the term AI. ADM involves a user initially delegating a decision, partly or completely, to an entity by way of using a system or a service. That entity then uses automatically executed decision-making models to perform an action on behalf of a user, or to inform the user’s decisions in performing an action.
Amendment 320 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 334 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law, whether individual, societal, environmental, economic, or to the rule of law and democracy. Such harm might be material or immaterial. Harm should be understood as injury or damage to the life, health, physical integrity and property of a natural or legal person, economic harm to individuals, damage to their environment, security and other aspects defined in the scope of New Approach directives, complemented by collective harms such as harm to society, the democratic process and the environment, or going against core ethical principles. Immaterial harm should be understood as harm as a result of which the affected person suffers considerable detriment, an objective and demonstrable impairment of his or her personal interests, and an economic loss calculated having regard, for example, to annual average figures of past revenues and other relevant circumstances. Such immaterial harm can therefore consist of psychological harm, reputational harm or a change in legal status. Harm can be caused (i) by single events, (ii) through exposure over time to harmful algorithmic practices, (iii) through action distributed among a number of actors, where the entity causing the harm is not necessarily that which uses the AI, or (iv) through uses of AI which are different than intended for the given system.
Amendment 347 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time guarantees a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law, as well as the environment, society, rule of law and democracy, economic interests and consumer protection. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 362 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the system, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
Amendment 370 #
Proposal for a regulation
Recital 7
(7) The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council37 . The notion of “biometrics-based data” is broader, covering situations where the data in question may not, of itself, confirm the unique identification of an individual. _________________ 35 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 36 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39) 37 Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA (Law Enforcement Directive) (OJ L 119, 4.5.2016, p. 89).
Amendment 378 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. The notion of ‘at a distance’ in Remote Biometric Identification (RBI) means the use of RBI systems as described in Article 3(36), at a distance great enough that the system has the capacity to scan multiple persons in its field of view (or the equivalent generalised scanning of online / virtual spaces), which would mean that the identification could happen without one or more of the data subjects’ knowledge. Because the use of RBI relates to how a system is designed and installed, and not solely to whether or not data subjects have consented, this definition applies even when warning notices are placed in the location that is under the surveillance of the RBI system, and is not de facto annulled by pre-enrolment.
Amendment 380 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible physical or virtual space should be understood as referring to any physical or virtual place that is accessible to the public, on a temporary or permanent basis, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are both private in nature, used for private purposes only, accessed completely voluntarily and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes and private clubs. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, sports grounds, virtual gaming environments, schools, universities, hospitals, amusement parks, festivals, shops and shopping centres, offices, warehouses and factories are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Amendment 385 #
Proposal for a regulation
Recital 9 a (new)
(9 a) In order to ensure the rights of individuals and groups, and the growth of trustworthy AI, certain principles should be guaranteed across all AI systems, such as transparency, the right to an explanation and the right to object to a decision. This requires that discrimination and detrimental power and information imbalances be prevented, that control and oversight be guaranteed, and that compliance be demonstrable and subject to ongoing monitoring. Decision-making by, or supported by, AI systems should be subject to specific transparency rules as regards the logic and parameters on which decisions are made.
Amendment 386 #
Proposal for a regulation
Recital 9 b (new)
(9 b) Requirements on transparency and on the explicability of AI decision-making should contribute to countering the deterrent effects of digital asymmetry, power and information imbalance, and so-called ‘dark patterns’ targeting individuals and their informed consent.
Amendment 387 #
Proposal for a regulation
Recital 10
(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union. This Regulation and the rules it establishes should take into account different development and business models and the fact that standard implementations, or Free and Open Source software development and licensing models, might entail less knowledge about and little to no control over further use, modification, and deployment within an AI system.
Amendment 392 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union or affects people in the Union.
Amendment 397 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or deployer of an AI system. This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
Amendment 408 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, as well as the environment, society, rule of law and democracy, economic interests and consumer protection, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 411 #
Proposal for a regulation
Recital 13 a (new)
(13 a) AI systems and related ICT technology require significant natural resources, contribute to waste production, and have a significant overall impact on the environment. It is appropriate to design and develop in particular high-risk AI systems with methods and capabilities that measure, record, and reduce resource use and waste production, as well as energy use, and that increase their overall efficiency throughout their entire lifecycle. The Commission, the Member States and the European AI Board should contribute to these efforts by issuing guidelines and providing support to providers and deployers.
Amendment 415 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, AI systems can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child. All uses of AI systems which interfere with the essence of the fundamental rights of individuals should in any case be prohibited. The prohibitions listed in this Regulation should apply notwithstanding existing Union law and do not provide a new legal basis for the development, placing on the market, deployment or use of AI systems. To keep up with rapid technological development and to ensure future-proof regulation, the Commission should keep the list of prohibited and high-risk AI systems under constant review.
Amendment 420 #
Proposal for a regulation
Recital 15 a (new)
(15 a) The European Union and its Member States as signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD) are obliged to protect persons with disabilities from discrimination and to promote their equality. They are obliged to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems and to ensure respect for the fundamental rights, including that of privacy, of persons with disabilities.
Amendment 423 #
Proposal for a regulation
Recital 15 b (new)
(15 b) Providers of AI systems should ensure that these systems are designed in accordance with the accessibility requirements set out in Directive (EU) 2019/882 and guarantee full, equal, and unrestricted access for everyone potentially affected by or using AI systems, including persons with disabilities.
Amendment 425 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the effect or likely effect of distorting human behaviour, whereby physical, economic or psychological harms to individuals or society are likely to occur, should be forbidden. This includes AI systems that deploy subliminal components that individuals may not be able to perceive or understand, or exploit vulnerabilities of individuals. They materially distort the behaviour of a person, including in a manner that causes or is likely to cause physical, psychological or economic harm to that or another person, or to society, or lead them to make decisions they would not otherwise have taken. Manipulation may not be presumed if the distortion of human behaviour clearly results from factors external to the AI system which are outside of the control of the provider or the user and are not reasonably foreseeable at or during the deployment of the AI system. Research for legitimate purposes in relation to such AI systems should not be unduly limited by the prohibition, if such research does not amount to use of the AI system in non-supervised human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research. If necessary, further flexibilities in order to foster research, and thereby European innovation capacities, should be introduced by Member States under controlled circumstances only and with all relevant safeguards to protect health and safety, fundamental rights, the environment, society, rule of law and democracy.
Amendment 435 #
Proposal for a regulation
Recital 17
(17) AI systems that evaluate, classify, rate or score the trustworthiness or social standing of natural persons may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness or social standing of natural persons based on multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts which are unrelated to the context in which the data was originally generated or collected, or to detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should therefore be prohibited.
Amendment 438 #
Proposal for a regulation
Recital 17 a (new)
(17 a) The placing on the market, putting into service or use of certain AI systems that can be used or foreseeably misused for intrusive monitoring and flagging to identify or deter rule-breaking or fraud should be forbidden. The use of such intrusive monitoring and flagging in a relationship of power, such as the use of e-proctoring software by education institutions to monitor students and pupils, or the use of surveillance or monitoring software by employers on workers, poses an unacceptable risk to the fundamental rights of workers, students and pupils, including minors. Notably, these practices affect the right to private life, data protection and human dignity of students and pupils, including minors.
Amendment 444 #
Proposal for a regulation
Recital 17 b (new)
Amendment 445 #
Proposal for a regulation
Recital 17 c (new)
(17 c) Similarly, ostensible truth-detection technologies, such as polygraphs, have a long and unsuccessful history of abuse, misselling, miscarriages of justice and failure. The problems underlying these failures are exacerbated in the field of migration, which thus far has been tarnished by new failings due, inter alia, to incorrect cultural assumptions. Such technologies therefore cannot be used while protecting the essence of all relevant fundamental rights.
Amendment 449 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is particularly corrosive to the rights and freedoms of the concerned persons, and can ultimately affect the private life of a large part of the population, leave society with a justifiable feeling of constant surveillance, give parties deploying biometric identification in publicly accessible spaces a position of uncontrollable power and indirectly dissuade individuals from the exercise of their freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities at the core of the Rule of Law. Biometric identification not carried out in real time carries different but equally problematic risks. Due to the increase in pervasiveness, functionality and memory capacities of relevant devices, this would amount to a “surveillance time machine”, which could be used to track movements and social interactions stretching back an indeterminate period into the past.
Amendment 459 #
Proposal for a regulation
Recital 18 a (new)
(18 a) The use of data collected or generated by practices prohibited under this Regulation should also be prohibited. Within the framework of judicial and administrative proceedings, the responsible authorities should establish that data collected or generated by practices prohibited under this Regulation is not admissible.
Amendment 470 #
Proposal for a regulation
Recital 19
(19) The use of AI systems for remote biometric identification of individuals should therefore be prohibited.
Amendment 476 #
Proposal for a regulation
Recital 20
Amendment 484 #
Proposal for a regulation
Recital 21
Amendment 492 #
Proposal for a regulation
Recital 22
Amendment 502 #
Proposal for a regulation
Recital 23
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric and biometrics-based data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680 and Article 9 of Regulation 2016/679, thus regulating such use and the processing of biometric data involved in an exhaustive manner.
Amendment 513 #
Proposal for a regulation
Recital 24
(24) Any processing of biometric data, biometrics-based data and other personal data involved in the use of AI systems for biometric identification, as regulated by this Regulation, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
Amendment 519 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be classified as such when they have a significant harmful impact on the health, safety, economic status and fundamental rights of individuals in the Union, and also on the environment, society, rule of law, democracy or consumer protection. Given the rapid pace of technological development, but also given the potential changes in the use and the aim of authorised AI systems, regardless of whether they are high-risk or lower risk, the limited list of high-risk systems and areas of high-risk systems in Annex III should nonetheless be subject to permanent review through the exercise of regular assessment as provided in Title III of this Regulation.
Amendment 530 #
Proposal for a regulation
Recital 28
(28) AI systems could have an adverse impact on persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, and right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.
Amendment 531 #
Proposal for a regulation
Recital 28 a (new)
(28 a) The risk-assessment of AI systems as regards their environmental impact and use of resources should not only focus on sectors related to the protection of the environment, but be common to all sectors, as environmental impacts can stem from any kind of AI systems, including those not originally directly related to the protection of the environment, in terms of energy production and distribution, waste management and emissions control.
Amendment 539 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons, as well as to the environment, society, rule of law, democracy, economic interests and consumer protection, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems. Such classification should take place before the placing on the market but also during the life-cycle of an AI system.
Amendment 551 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies, as well as conscious or subconscious design decisions and the use of training data which codify and reinforce structural inequalities, mean that AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. As a result, ‘real-time’ and ‘post’ remote biometric identification systems undermine the essence of fundamental rights and must therefore be prohibited.
Amendment 552 #
Proposal for a regulation
Recital 33 a (new)
(33 a) Human oversight should target high-risk AI systems as a priority, with the aim of serving human-centric objectives. The individuals to whom human oversight is assigned shall be provided with adequate education and training on the functioning of the application, its capabilities to influence or make decisions, and to have harmful effects, notably on fundamental rights. The persons in charge of the assignment of these individuals shall provide them with relevant staff and psychological support.
Amendment 557 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. AI systems that are designed to constantly monitor individuals are particularly intrusive, violate the right to education and training as well as the right not to be discriminated against, and perpetuate historical patterns of discrimination; they should therefore be prohibited.
Amendment 560 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably those affecting the initiation, establishment, implementation and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, should be high-risk. Particularly, AI affecting the recruitment and selection of persons, the making of decisions on promotion and termination, task allocation, the measuring and monitoring of performance, or the evaluation of persons in work-related contractual relationships should be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. AI systems used for constant monitoring of workers pose an unacceptable risk to their fundamental rights and should therefore be prohibited. Relevant work-related contractual relationships should meaningfully involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also undermine the essence of their fundamental rights to data protection and privacy. This Regulation applies without prejudice to Union and Member State competences to provide for more specific rules for the use of AI systems in the employment context.
Amendment 571 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be prohibited, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to an unacceptably high risk of discrimination against persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age or sexual orientation, or create new forms of discriminatory impacts. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be prohibited. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 578 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. In addition, some applications, such as those making predictions, profiles or risk assessments based on data analysis or profiling of groups or individuals for the purpose of predicting the occurrence or recurrence of actual or potential offences or rule-breaking, undermine the essence of fundamental rights and should be prohibited. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as prohibited a number of AI systems intended to be used in the law enforcement context as well as for crime analytics regarding natural persons.
Amendment 589 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non- discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
Amendment 591 #
Proposal for a regulation
Recital 39 a (new)
(39 a) AI systems in migration, asylum and border control management should in no circumstances be used by Member States or European Union institutions as a means to circumvent their international obligations under the Convention of 28 July 1951 relating to the Status of Refugees as amended by the Protocol of 31 January 1967, nor should they be used in any way to infringe on the principle of non-refoulement or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.
Amendment 596 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law and individual freedoms, as well as the right to an effective remedy and to a fair trial. The use of Artificial Intelligence tools can support, but should not interfere with, the decision-making power of judges or judicial independence, as the final decision-making must remain a human-driven activity and decision. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
Amendment 603 #
Proposal for a regulation
Recital 40 a (new)
(40 a) Certain AI systems used in the area of healthcare that are not covered by Regulation (EU) 2017/745 (Regulation on Medical Devices) should be high-risk. Uses such as software impacting diagnostics, treatments or medical prescriptions and access to health insurance can clearly impact health and safety, but can also obstruct access to health services, impact the right to health care and cause physical harm in the long run.
Amendment 607 #
Proposal for a regulation
Recital 40 b (new)
(40 b) Certain AI systems used in the area of media, particularly social media, should be high-risk due to their potentially large reach and the specific risk of large-scale spread of disinformation and exacerbation of societal polarisation, given their potential impact on individuals’ rights, but also on society and democracy at large.
Amendment 610 #
Proposal for a regulation
Recital 41
(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.
Amendment 615 #
Proposal for a regulation
Recital 42
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for deployers and AI subjects, certain mandatory requirements should apply, taking into account the intended purpose and the potential or reasonably foreseeable use or misuse of the system, and should be in accordance with the risk management system to be established by the provider.
Amendment 618 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high-risk AI systems as regards the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as well as the environment, society, rule of law, democracy, economic interests and consumer protection, as applicable in the light of the intended purpose and the potential or reasonably foreseeable use or misuse of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 628 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become a source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and statistically complete in view of the intended purpose of the system and the context of its use. They should also have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent necessary in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. Solely in order to protect the right of others from the discrimination that might result from bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 635 #
Proposal for a regulation
Recital 46
Recital 46
(46) Having information on how high- risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date throughout the entire lifecycle of the AI system.
Amendment 637 #
Proposal for a regulation
Recital 47
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Deployers should be able to interpret the system’s goals, priorities and output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. Where individuals are passively subject to AI systems (AI subjects), information to ensure an appropriate type and degree of transparency should be made publicly available, with full respect to the privacy, personality, and related rights of subjects.
Amendment 643 #
Proposal for a regulation
Recital 48
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons can meaningfully oversee and regulate their functioning or investigate in case of an accident. For this purpose, appropriate human oversight measures should be ensured by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
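As a non-authoritative sketch of the oversight measures described above, the following Python fragment wraps a hypothetical decision function with an in-built constraint the wrapped system cannot override and a halt hook reserved for a human operator. All names and the confidence threshold are invented for illustration.

```python
# Minimal sketch, assuming a hypothetical decision function: an oversight
# wrapper with an in-built operational constraint fixed at deployment and
# a hook for a human operator to halt the system.
class HumanOversightWrapper:
    def __init__(self, model, confidence_floor=0.9):
        self.model = model                        # callable returning (label, confidence)
        self.confidence_floor = confidence_floor  # in-built constraint, fixed at deployment
        self.halted = False                       # set only via the operator hook

    def operator_halt(self):
        self.halted = True                        # responsive to the human operator

    def decide(self, x):
        if self.halted:
            return ("REFERRED_TO_HUMAN", "system halted by operator")
        label, confidence = self.model(x)
        if confidence < self.confidence_floor:    # constraint cannot be bypassed in-band
            return ("REFERRED_TO_HUMAN", f"low confidence {confidence:.2f}")
        return (label, confidence)

toy_model = lambda x: ("approve", 0.75)           # hypothetical underlying system
wrapper = HumanOversightWrapper(toy_model)
print(wrapper.decide({"applicant": 1}))           # -> referred to a human
```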
Amendment 647 #
Proposal for a regulation
Recital 49
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness, reliability and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the deployers.
Amendment 648 #
Proposal for a regulation
Recital 50
Recital 50
(50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as adequately protected against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.
Amendment 653 #
Proposal for a regulation
Recital 51
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can target AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
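To make the notion of an adversarial attack on a trained model concrete, here is a toy Python sketch of a fast-gradient-sign style perturbation against a linear logistic scorer. The weights and inputs are fabricated; this illustrates the attack class the recital names, not any real system.

```python
# Illustration only: a sign-of-gradient input perturbation against a toy
# logistic model, of the kind recital 51 calls an "adversarial attack".
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))    # P(class = 1)

x = np.array([0.4, -0.3, 0.8])
# For a linear model the gradient of the logit w.r.t. the input is w itself;
# stepping against sign(w) pushes the score toward the opposite class.
epsilon = 0.25
x_adv = x - epsilon * np.sign(w)

print(f"clean score:     {predict(x):.3f}")      # ~0.85
print(f"perturbed score: {predict(x_adv):.3f}")  # noticeably lower
```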
Amendment 656 #
Proposal for a regulation
Recital 53
Recital 53
(53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market, putting into service or deployment of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.
Amendment 657 #
Proposal for a regulation
Recital 54
Recital 54
(54) The provider and, where applicable, deployer should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question. Deployers should have strategies in place to ensure that the data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data during the deployment lifetime of high-risk AI systems, complies with applicable rules and ensures regulatory compliance, in particular regarding modifications to the high-risk AI systems.
Amendment 665 #
Proposal for a regulation
Recital 58
Recital 58
(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for deployers. Deployers should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping and quality management, as appropriate.
Amendment 667 #
Proposal for a regulation
Recital 58 a (new)
Recital 58 a (new)
(58 a) To ensure that fundamental rights, the environment and the public interest are effectively protected where an AI system is classified as high-risk under Annex III, both providers and deployers should, before each deployment, perform a fundamental rights impact assessment of the system’s impact in the context of use throughout the entire lifecycle and include measures to mitigate any impact on fundamental rights, the environment or the public interest. The fundamental rights impact assessment should be registered in the public EU database for stand-alone high-risk AI systems and be publicly accessible. The supervisory authority should have the power to review these fundamental rights impact assessments.
Amendment 670 #
Proposal for a regulation
Recital 59
Recital 59
(59) It is appropriate to envisage that the deployer of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity.
Amendment 672 #
Proposal for a regulation
Recital 60
Recital 60
(60) In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, as appropriate, with providers and deployers to enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation.
Amendment 676 #
Proposal for a regulation
Recital 61 a (new)
Recital 61 a (new)
(61 a) As part of the new legal framework on corporate sustainable reporting and due diligence, minimum common standards for the reporting of businesses on the societal and environmental impacts of the AI systems that they develop, sell or distribute should be established and used at an early stage of the development and life-cycle of AI systems. Such common standard obligations should notably consist of mandatory human rights due diligence rules, thus enabling a level playing field among European businesses and non-European businesses operating in the EU.
Amendment 679 #
Proposal for a regulation
Recital 62
Recital 62
(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a third party conformity assessment prior to their placing on the market or putting into service.
Amendment 684 #
Proposal for a regulation
Recital 64
Recital 64
(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is essential to ensure, particularly in the period before application of this Regulation, the development of adequate capacity for the application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
Amendment 686 #
Proposal for a regulation
Recital 65
Recital 65
Amendment 690 #
Proposal for a regulation
Recital 65 a (new)
Recital 65 a (new)
(65 a) Third party conformity assessments for products listed in Annex III are essential as a precautionary measure and to ensure that trust is not lost in AI products, to the detriment of innovation, competition and growth. Due to the particularly sensitive nature of the tasks at hand, third party conformity assessments in the fields of law enforcement, asylum and immigration should be carried out by the market surveillance authority.
Amendment 694 #
Proposal for a regulation
Recital 66
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new third party conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
Amendment 699 #
Proposal for a regulation
Recital 68
Recital 68
Amendment 705 #
Proposal for a regulation
Recital 69
Recital 69
(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers and deployers of high-risk AI systems should be required to register their high-risk AI system in an EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council55 . In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report. _________________ 55 Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
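As an illustrative sketch only, the following Python fragment shows the kind of structured record a registration in such a database might carry. The field names are invented for this example; the information actually required is a matter for the Regulation's annexes, not this sketch.

```python
# Hypothetical sketch of a provider/deployer registration record for an
# EU database of high-risk AI systems. All field names are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class HighRiskRegistration:
    provider_name: str
    system_name: str
    intended_purpose: str
    conformity_status: str   # e.g. "third-party assessment pending"
    fria_reference: str      # link to a fundamental rights impact assessment

entry = HighRiskRegistration(
    provider_name="Example Provider BV",
    system_name="credit-scoring-v2",
    intended_purpose="creditworthiness assessment of natural persons",
    conformity_status="third-party assessment pending",
    fria_reference="https://example.invalid/fria/credit-scoring-v2",
)
print(json.dumps(asdict(entry), indent=2))
```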
Amendment 711 #
Proposal for a regulation
Recital 70
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, deployers who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin. Additionally, the use of an AI system to generate or manipulate image, audio or video content that appreciably resembles a natural person should be permitted only when used for freedom of expression and artistic purposes and while respecting the limits of these purposes, or with the explicit consent of that person.
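A minimal sketch of one possible labelling approach follows, assuming a hypothetical sidecar-metadata format: a machine-readable disclosure bound to the content by a hash. This is not a standardised labelling scheme, only an illustration of the disclosure obligation described above.

```python
# Minimal sketch: attach a machine-readable disclosure that a piece of
# media was artificially generated or manipulated. Field names invented.
import json, hashlib, datetime

def disclosure_label(media_bytes: bytes, generator: str) -> str:
    return json.dumps({
        "artificially_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds label to content
        "labelled_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }, indent=2)

fake_image = b"\x89PNG...synthetic pixels..."               # placeholder bytes
print(disclosure_label(fake_image, generator="example-diffusion-v1"))
```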
Amendment 719 #
Proposal for a regulation
Recital 71
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that benefits from clear rules and legal certainty, and requires regulatory oversight. In order to fulfil its potential to benefit society, a safe space for controlled experimentation, while ensuring respect for Union law and the protection of fundamental rights, can help foster responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that promotes sustainable innovation, is future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to cooperate in establishing artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
Amendment 724 #
Proposal for a regulation
Recital 72
Recital 72
(72) The objectives of the regulatory sandboxes should be to foster AI innovation for the benefit of society by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring respect for and protection of fundamental rights, compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. Personal data that had originally been collected for different purposes should be processed in a sandbox only under specified conditions and within the limits of Regulation (EU) 2016/679. Such further processing should be considered as processing for statistical purposes within the meaning of Article 5(1)(b) of that Regulation. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to suspend or ban them from participating in the sandbox, or whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680. This Regulation should also provide the legal basis for the use of data protected by intellectual property or trade secrets for developing certain AI systems in the public interest within the AI regulatory sandbox, without prejudice to Directive (EU) 2019/790 and to Directive (EU) 2016/943. The authorised use of data protected by intellectual property or trade secrets under Article 54 of this Regulation should be covered by Article 4 of Directive (EU) 2019/790.
Amendment 732 #
Proposal for a regulation
Recital 73
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and deployers of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication, and including the cooperation across borders. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border deployers.
Amendment 737 #
Proposal for a regulation
Recital 74
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI- on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
Amendment 740 #
Proposal for a regulation
Recital 76
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be independent and responsible for a number of advisory and enforcement tasks, including issuing decisions, opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence. In order to ensure a consistent and appropriate enforcement vis-à-vis very large undertakings, the Board should be the supervisory authority for undertakings that meet the criteria of ‘community dimension’ as defined in Article 1(3) of Regulation (EC) No 139/2004 (Merger Regulation). The Board should have a secretariat with sufficient resources and expertise to be able to fulfil its role. In this respect, the secretariat should establish a European Centre of Excellence for Artificial Intelligence (ECE-AI).
Amendment 744 #
Proposal for a regulation
Recital 77
Recital 77
(77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, in each Member State one national authority should be designated as national supervisory authority. In order to avoid duplication and combine expertise and competences, this should be a supervisory authority established under Regulation (EU) 2016/679 (General Data Protection Regulation). The supervisory authorities should have sufficient investigative and corrective powers.
Amendment 748 #
Proposal for a regulation
Recital 78
Recital 78
(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches of national and Union law, including those protecting fundamental rights and consumer rights, resulting from the use of their AI systems.
Amendment 750 #
Proposal for a regulation
Recital 79
Recital 79
(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. A reasonable suspicion of breach of fundamental rights, which may arise from a complaint from an individual or a notification of a breach submitted by a civil society organisation, should be deemed as a sufficient reason for the commencement of an evaluation of an AI system at national level.
Amendment 751 #
Proposal for a regulation
Recital 79 a (new)
Recital 79 a (new)
(79 a) As the rights and freedoms of individuals can be seriously undermined by AI systems, it is essential that affected individuals have meaningful access to reporting and redress mechanisms. They should be able to report infringements of this Regulation to their national supervisory authority and have the right to be heard and to be informed about the outcome of their complaint and the right to a timely decision. In addition, they should have the right to an effective remedy against competent authorities who fail to enforce these rights and the right to redress. Where applicable, deployers should provide internal complaints mechanisms to be used by affected individuals and should be liable for pecuniary and non-pecuniary damages in cases of breaches of individuals’ or groups’ rights. Collective representation of affected individuals must be possible.
Amendment 754 #
Proposal for a regulation
Recital 80
Recital 80
(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. _________________ 56 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
Amendment 760 #
Proposal for a regulation
Recital 81
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to energy efficiency, resource use and waste production, and environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity, equal representation and gender-balance of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 761 #
Proposal for a regulation
Recital 82
Recital 82
(82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements for high-risk AI systems are nevertheless safe when placed on the market or put into service. To contribute to this objective, Directive 2001/95/EC of the European Parliament and of the Council57 would apply as a safety net. _________________ 57 Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety (OJ L 11, 15.1.2002, p. 4).
Amendment 762 #
Proposal for a regulation
Recital 83
Recital 83
(83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should aim for transparency and openness. Where necessary for individual cases and internal deliberations, they should also respect the confidentiality of information and data obtained in carrying out their tasks.
Amendment 771 #
Proposal for a regulation
Recital 85
Recital 85
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annexes VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58 . These consultations should involve the participation of a balanced selection of stakeholders, including consumer organisations, associations representing affected persons, business representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. _________________ 58 OJ L 123, 12.5.2016, p. 1.
Amendment 774 #
Proposal for a regulation
Recital 86 a (new)
Recital 86 a (new)
(86 a) Given the rapid technological developments and the required technical expertise in conducting the assessment of high-risk AI systems, the Commission should regularly review Annex III, at least every six months, while consulting with the relevant stakeholders, including ethics experts and anthropologists, sociologists, mental health specialists and any relevant scientists and researchers.
Amendment 776 #
Proposal for a regulation
Recital 86 b (new)
Recital 86 b (new)
(86 b) When adopting delegated or implementing acts concerning high-risk sectors of AI development, notably those raising concerns with respect to ethical principles or entailing risks to the health or safety of humans, animals or plants, or the protection of the environment, Member States should also assume greater responsibility in the decision-making process. In particular, the abstentions of Member States’ representatives should be counted within a qualified majority, each Member State representative should give substantive reasons for votes and abstentions, and each vote and abstention should be accompanied by a detailed justification, on the basis of Regulation XX/XX amending Regulation (EU) No 182/2011.
Amendment 777 #
Proposal for a regulation
Recital 87 a (new)
Recital 87 a (new)
(87 a) As reliable information on the resource and energy use, waste production and other environmental impacts of AI systems and related ICT technology, including software, hardware and in particular data centres, is limited, the Commission should evaluate the impact and effectiveness of this Regulation regarding these criteria and further evaluate bringing forward legislation for the sector to contribute to the EU climate strategy and targets.
Amendment 778 #
Proposal for a regulation
Recital 89
Recital 89
(89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on 18.6.2021.
Amendment 783 #
Proposal for a regulation
Article 1 – paragraph 1 – introductory part
Article 1 – paragraph 1 – introductory part
1. The purpose of this Regulation is to ensure a high level of protection of public interests, such as health, safety, fundamental rights, the environment and democracy from harmful effects of artificial intelligence systems ("AI systems") in the Union, whether individual, societal or environmental, while enhancing innovation. Its provisions are underpinned by the precautionary principle. This Regulation lays down:
Amendment 786 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, the placing on the market, the putting into service, the deployment and the use of human-centric and trustworthy artificial intelligence systems in the Union;
Amendment 795 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
Amendment 798 #
Proposal for a regulation
Article 1 – paragraph 1 – point e
Article 1 – paragraph 1 – point e
(e) rules on market monitoring, market surveillance and enforcement.
Amendment 808 #
Proposal for a regulation
Article 1 – paragraph 1 a (new)
Article 1 – paragraph 1 a (new)
When justified by significant risks to fundamental rights of persons, including the protection of consumer rights, Member States may introduce regulatory solutions ensuring a higher level of protection of persons than offered in this Regulation.
Amendment 815 #
Proposal for a regulation
Article 2 – paragraph 1 – point a
Article 2 – paragraph 1 – point a
(a) providers developing, placing on the market, putting into service or deploying AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
Amendment 819 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
Article 2 – paragraph 1 – point b
(b) deployers of AI systems located or established within the Union;
Amendment 826 #
Proposal for a regulation
Article 2 – paragraph 1 – point c
Article 2 – paragraph 1 – point c
(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union or has effects in the Union;
Amendment 829 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
Article 2 – paragraph 1 – point c a (new)
(c a) importers, distributors, and authorised representatives of providers of AI systems;
Amendment 836 #
Proposal for a regulation
Article 2 – paragraph 1 – point c b (new)
Article 2 – paragraph 1 – point c b (new)
(c b) AI systems as a product, service or practice, or as part of a product, service or practice.
Amendment 864 #
Proposal for a regulation
Article 2 – paragraph 3
Article 2 – paragraph 3
Amendment 879 #
Proposal for a regulation
Article 2 – paragraph 4
Article 2 – paragraph 4
Amendment 889 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not provide a legal basis for the development, deployment or use of AI systems that is unlawful under Union or national law;
Amendment 893 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
Article 2 – paragraph 5 b (new)
5 b. This Regulation is without prejudice to the rules laid down by other Union legal acts regulating the protection of personal data, in particular Regulation (EU) 2016/679, Directive (EU) 2016/680, Regulation (EU) 2018/1725 and Directive 2002/58/EC;
Amendment 898 #
Proposal for a regulation
Article 2 – paragraph 5 c (new)
Article 2 – paragraph 5 c (new)
5 c. This Regulation is without prejudice to the rules laid down by other Union legal acts relating to consumer protection and product safety, including Regulation (EU) 2017/2394, Regulation (EU) 2019/1020 and Directive 2001/95/EC on general product safety and Directive 2013/11/EU.
Amendment 917 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of inputs and objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 946 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
Article 3 – paragraph 1 – point 4
(4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Amendment 948 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
Article 3 – paragraph 1 – point 4 a (new)
(4 a) ‘AI subject’ means any natural or legal person that is subject to a decision based on or assisted by an AI system, or subject to interaction with an AI system or treatment of data relating to them by an AI system, or otherwise subjected to analysis by an AI system, or otherwise impacted or affected by an AI system;
Amendment 958 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the provider, the deployer, the authorised representative, the importer and the distributor;
Amendment 963 #
Proposal for a regulation
Article 3 – paragraph 1 – point 11
Article 3 – paragraph 1 – point 11
(11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use on the Union market for its intended purpose;
Amendment 976 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems;
Amendment 989 #
Proposal for a regulation
Article 3 – paragraph 1 – point 15
Article 3 – paragraph 1 – point 15
(15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used;
Amendment 992 #
Proposal for a regulation
Article 3 – paragraph 1 – point 16
Article 3 – paragraph 1 – point 16
(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system made available to deployers;
Amendment 997 #
Proposal for a regulation
Article 3 – paragraph 1 – point 20
Article 3 – paragraph 1 – point 20
(20) ‘conformity assessment’ means the verification by an independent third party of whether the principles and requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;
Amendment 1003 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or its performance, including modifications of the intended purpose of an AI system which is not classified as high-risk and is already placed on the market or put into service;
Amendment 1015 #
Proposal for a regulation
Article 3 – paragraph 1 – point 29
Article 3 – paragraph 1 – point 29
(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters, including the weights of a neural network;
Amendment 1017 #
Proposal for a regulation
Article 3 – paragraph 1 – point 30
Article 3 – paragraph 1 – point 30
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split;
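As a toy illustration of the fixed versus variable split and the overfitting guard this definition alludes to, the following Python sketch carves a validation set out of a data set and applies a simple early-stopping rule. All data, fractions and thresholds are invented for this example.

```python
# Toy sketch: fixed vs. variable validation split, plus early stopping as
# one way validation data is used to prevent overfitting.
import random

def split(data, val_fraction=0.2, variable=True, seed=0):
    """Return (train, validation); a variable split reshuffles the data."""
    items = list(data)
    if variable:
        random.Random(seed).shuffle(items)   # seed would vary per run in practice
    cut = int(len(items) * (1 - val_fraction))
    return items[:cut], items[cut:]

train, val = split(list(range(100)))
print(len(train), len(val))                  # 80 20

# Early stopping: halt when validation loss stops improving.
val_losses = [0.9, 0.7, 0.6, 0.58, 0.59, 0.63]   # hypothetical per-epoch losses
best, patience, stalled = float("inf"), 2, 0
for epoch, loss in enumerate(val_losses):
    if loss < best:
        best, stalled = loss, 0
    else:
        stalled += 1
    if stalled >= patience:
        print(f"stop at epoch {epoch}, best validation loss {best}")
        break
```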
Amendment 1023 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33
Article 3 – paragraph 1 – point 33
(33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;
Amendment 1027 #
Proposal for a regulation
Article 3 – paragraph 1 – point 33 a (new)
Article 3 – paragraph 1 – point 33 a (new)
(33 a) ‘biometrics-based data’ means data resulting from specific technical processing relating to physical, physiological, or behavioural features, signals, or characteristics of a natural person;
Amendment 1038 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions, thoughts, states of mind or intentions of natural persons on the basis of their biometric data;
Amendment 1041 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, health, mental ability, personality traits, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data or biometrics-based data;
Amendment 1055 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system capable of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
Amendment 1064 #
Proposal for a regulation
Article 3 – paragraph 1 – point 37
Article 3 – paragraph 1 – point 37
(37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification occur on a continuous or large-scale basis over a period of time and without limitation to a particular past incident.
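The mechanics these definitions describe can be sketched as follows: a captured biometric template is compared against a reference database without prior knowledge of who will appear. The embeddings, identities and threshold below are purely hypothetical illustrations, not any real system.

```python
# Illustration only: matching a captured biometric template against a
# reference database by cosine similarity. All values are invented.
import numpy as np

reference_db = {                               # hypothetical enrolled templates
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}

def identify(probe, threshold=0.95):
    """Return the best-matching identity above threshold, else None."""
    best_id, best_score = None, -1.0
    for identity, template in reference_db.items():
        score = float(probe @ template /
                      (np.linalg.norm(probe) * np.linalg.norm(template)))
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

print(identify(np.array([0.88, 0.12, 0.31])))  # likely matches person_a
```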
Amendment 1065 #
Amendment 1069 #
Proposal for a regulation
Article 3 – paragraph 1 – point 39
Article 3 – paragraph 1 – point 39
(39) ‘publicly accessible space’ means any physical place accessible to the public, or fulfilling a public function, regardless of whether certain conditions for access may apply;
Amendment 1081 #
Proposal for a regulation
Article 3 – paragraph 1 – point 43
Article 3 – paragraph 1 – point 43
(43) ‘national competent authority’ means the EDPS, the national supervisory authority, the notifying authority and the market surveillance authority;
Amendment 1097 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
Article 3 – paragraph 1 – point 44 – point b a (new)
(b a) a serious violation of an individual’s fundamental rights;
Amendment 1100 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘Recommender system’ means a fully or partially automated system used by an online platform to suggest or prioritise in its online interface specific information to recipients of the service, including as a result of a search initiated by the recipient of the service or otherwise determining the relative order or prominence of information displayed.
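To illustrate the behaviour this definition captures, a bare-bones Python sketch follows in which a blended score determines the relative order of items shown to a recipient. The weighting scheme and item data are invented assumptions for the example.

```python
# Bare-bones sketch of a recommender: a partially automated system that
# determines the relative order or prominence of items shown to a user.
def rank(items, engagement_weight=0.7, recency_weight=0.3):
    """Order items by a blended score; higher scores are shown first."""
    return sorted(
        items,
        key=lambda it: (engagement_weight * it["engagement"]
                        + recency_weight * it["recency"]),
        reverse=True,
    )

feed = [
    {"id": "post-1", "engagement": 0.9, "recency": 0.2},
    {"id": "post-2", "engagement": 0.4, "recency": 0.9},
    {"id": "post-3", "engagement": 0.7, "recency": 0.7},
]
print([it["id"] for it in rank(feed)])   # -> ['post-3', 'post-1', 'post-2']
```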
Amendment 1146 #
Proposal for a regulation
Article 4 a (new)
Article 4 a (new)
Article 4 a
Transparency Rights
1. Providers and deployers of AI systems which affect natural persons, in particular by evaluating or assessing them, making predictions about them, recommending information, goods or services to them or determining or influencing their access to goods and services, shall inform the natural persons that they are subject to the use of such an AI system.
2. The information referred to in paragraph 1 shall include a clear and concise indication of the provider or deployer and the purpose of the AI system, information about the rights of the natural person conferred under this Regulation, and a reference to a publicly available resource where more information about the AI system can be found, in particular the relevant entry in the EU database referred to in Article 60, if applicable.
3. This information shall be presented in a concise, intelligible and easily accessible form, including for persons with disabilities.
4. This obligation shall be without prejudice to other Union or Member State laws, in particular Regulation 2016/679 [GDPR], Directive 2016/680 [LED] and Regulation 2022/XXX [DSA].
5. AI subjects shall have the right not to be subject to a high-risk AI system.
Amendment 1150 #
Proposal for a regulation
Article 4 b (new)
Article 4 b (new)
Article 4 b
Principles applicable to all AI systems
1. Providers and deployers of AI systems shall respect the following principles:
(a) AI systems must be used in a fair and transparent manner in relation to AI subjects;
(b) AI subjects shall have the right to automatically receive an explanation in accordance with Article 4c;
(c) AI subjects shall have the right to object to a decision taken solely by an AI system, or relying to a significant degree on the output of an AI system, which produces legal effects concerning him or her, or similarly significantly affects him or her. This point is without prejudice to Article 22 of Regulation 2016/679;
(d) AI systems shall not be used to exploit power and information asymmetries to the detriment of AI subjects, regardless of whether such asymmetries already exist or may be created or aggravated by the use of AI systems themselves. In particular, AI systems may not be used to discriminate against AI subjects on the basis of the characteristics listed in Article 21 of the European Charter of Fundamental Rights, on the basis of biometrics-based data, as well as on the basis of economic factors;
(e) AI systems must be safe and secure, ensuring a performance that is reliable, accurate and robust throughout their lifecycle;
(f) AI systems intended to interact with AI subjects shall be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, especially where its outputs or behaviour may reasonably be mistaken for those of a human being.
2. Providers of AI systems shall be responsible for, and be able to demonstrate compliance with, the principles established in paragraph 1. This requirement shall apply accordingly to deployers where they have substantially influenced the intended purpose or the manner of operation of the AI system.
3. The functioning of AI systems shall be regularly monitored and assessed to ensure they respect the rights and obligations set out in Union law.
4. These principles shall apply without prejudice to existing obligations relating to transparency, explanation or motivation of decision-making under Member State or Union law.
Amendment 1152 #
Proposal for a regulation
Article 4 c (new)
Article 4 c (new)
Amendment 1155 #
Proposal for a regulation
Article 5 – title
Article 5 – title
Article 5
-1. Any practices related to artificial intelligence and AI systems whose development, deployment or use, or reasonably foreseeable misuse, adversely affect, or are likely to adversely affect, the essence of any fundamental right shall be prohibited.
Amendment 1156 #
Proposal for a regulation
Article 5 – paragraph 1 – introductory part
Article 5 – paragraph 1 – introductory part
1. In addition to paragraph -1, the following artificial intelligence practices shall be prohibited:
Amendment 1158 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
Article 5 – paragraph 1 – point a
(a) the development, the placing on the market, putting into service, deployment or use of an AI system that deploys techniques with the effect or likely effect of materially distorting a person’s or a group’s behaviour, including by impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that they would not otherwise have taken, in a manner that causes or is likely to cause any person or society at large physical, economic or psychological harm;
Amendment 1174 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
Article 5 – paragraph 1 – point b
(b) the development, placing on the market, putting into service, deployment or use of an AI system that exploits, or may be reasonably foreseen to exploit, any of the characteristics of one or more individuals or of a specific group of persons, including those characteristic of known, inferred or predicted personality traits, orientations, or social or economic situation, with the effect or likely effect of materially distorting the behaviour of one or more persons that are part of that group in a manner that causes or is likely to cause any person material or non-material harm, including physical, economic or psychological harm, or affecting democracy or society at large;
Amendment 1195 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
Article 5 – paragraph 1 – point c – introductory part
(c) the development, placing on the market, putting into service, deployment or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness or social standing of natural persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score potentially leading to detrimental or unfavourable treatment of persons or whole groups;
Amendment 1203 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
Article 5 – paragraph 1 – point c – point i
Amendment 1214 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
Article 5 – paragraph 1 – point c – point ii
Amendment 1242 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
Article 5 – paragraph 1 – point d – introductory part
(d) the development, placing on the market, putting into service, deployment or use of remote biometric identification systems or biometrics-based systems in publicly accessible spaces, including online accessible spaces;
Amendment 1248 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
Article 5 – paragraph 1 – point d – point i
Amendment 1259 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
Article 5 – paragraph 1 – point d – point ii
Amendment 1279 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
Amendment 1289 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
Article 5 – paragraph 1 – point d a (new)
(d a) the development, placing on the market, putting into service, deployment or use of biometric categorisation systems;
Amendment 1295 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
Article 5 – paragraph 1 – point d b (new)
(d b) the placing on the market, putting into service, deployment or use of emotion recognition systems other than for the personal use of natural persons as an assistive technology;
Amendment 1304 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
Article 5 – paragraph 1 – point d c (new)
(d c) the development, placing on the market, putting into service, deployment or use of AI systems for automated monitoring and analysis of human behaviour in publicly accessible spaces, including online;
Amendment 1306 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
Article 5 – paragraph 1 – point d d (new)
(d d) the development, placing on the market, putting into service, deployment or use of an AI system that can reasonably foreseeably be used for constant monitoring of an individual’s behaviour to identify, predict or deter rule-breaking or fraud in a relationship of power, such as at work or in education, in particular where this constant monitoring has potential punitive or detrimental consequences for individuals;
Amendment 1315 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
Article 5 – paragraph 1 – point d e (new)
(d e) the placing on the market, putting into service, deployment or use of recommender systems aimed at generating interaction that systematically suggest disinformation or illegal content;
Amendment 1318 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
Article 5 – paragraph 1 – point d f (new)
(d f) the use of AI systems by law enforcement authorities, criminal justice authorities, migration, asylum and border-control authorities, or other public authorities to make predictions, profiles or risk assessments based on data analysis or profiling of natural persons as referred to in Article 3(4) of Directive EU 2016/680, groups or locations, for the purpose of predicting the occurrence or recurrence of an actual or potential criminal offence(s) or other offences, or rule-breaking;
Amendment 1323 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
Article 5 – paragraph 1 – point d g (new)
(d g) the use of AI systems by or on behalf of competent authorities, or third parties acting on their behalf, in migration, asylum or border control management, to profile an individual or assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person on the basis of personal or sensitive data, known or predicted, except for the sole purpose of identifying specific care and support needs;
Amendment 1329 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
Article 5 – paragraph 1 – point d h (new)
(d h) the placing on the market, putting into service, or use of AI systems by law enforcement authorities, or by competent authorities in migration, asylum and border control management, as polygraphs and similar tools to detect deception, trustworthiness or related characteristics;
Amendment 1334 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
Article 5 – paragraph 1 – point d i (new)
(d i) the development of private facial recognition or other private biometric databases and the use of such databases for the purpose of law enforcement;
Amendment 1336 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
Article 5 – paragraph 1 – point d j (new)
Amendment 1340 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
Article 5 – paragraph 1 – point d k (new)
(d k) The use of remote biometric identification for the purpose of migration management, border surveillance and humanitarian aid;
Amendment 1342 #
Proposal for a regulation
Article 5 – paragraph 1 – point d l (new)
Article 5 – paragraph 1 – point d l (new)
(d l) the use of AI systems for indiscriminate surveillance applied in a generalised manner to a large number of natural persons without differentiation;
Amendment 1343 #
Proposal for a regulation
Article 5 – paragraph 1 – point d m (new)
Article 5 – paragraph 1 – point d m (new)
(d m) The collection or generation of data for practices and AI systems listed in paragraphs -1 and 1 shall also be prohibited throughout their lifecycle, including training, validation and testing;
Amendment 1344 #
Proposal for a regulation
Article 5 – paragraph 1 – point d n (new)
Article 5 – paragraph 1 – point d n (new)
(d n) The placing on the market, putting into use or deployment of AI systems built on, designed, trained, validated or tested with data that was collected, processed or generated illegally;
Amendment 1345 #
Proposal for a regulation
Article 5 – paragraph 1 – point d o (new)
Article 5 – paragraph 1 – point d o (new)
Amendment 1346 #
Proposal for a regulation
Article 5 – paragraph 1 a (new)
Article 5 – paragraph 1 a (new)
1 a. In accordance with Article 73, the Commission is empowered to amend paragraph 1 of this Article by means of a delegated act by adding systems that adversely affect, or are likely to adversely affect, the essence of fundamental rights. In doing so the Commission shall consult civil society and human rights experts annually to reflect state-of-the-art knowledge regarding the potential impacts of technology on fundamental rights.
Amendment 1349 #
Proposal for a regulation
Article 5 – paragraph 2
Article 5 – paragraph 2
Amendment 1366 #
Proposal for a regulation
Article 5 – paragraph 3
Article 5 – paragraph 3
Amendment 1377 #
Proposal for a regulation
Article 5 – paragraph 3 – subparagraph 1
Article 5 – paragraph 3 – subparagraph 1
Amendment 1388 #
Proposal for a regulation
Article 5 – paragraph 4
Article 5 – paragraph 4
Amendment 1410 #
Proposal for a regulation
Article 6 – title
Article 6 – title
Amendment 1415 #
1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:
Amendment 1418 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, or it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II;
Amendment 1424 #
Proposal for a regulation
Article 6 – paragraph 1 – point a a (new)
Article 6 – paragraph 1 – point a a (new)
(a a) its uses are undetermined or indeterminate;
Amendment 1425 #
Proposal for a regulation
Article 6 – paragraph 1 – point a b (new)
Article 6 – paragraph 1 – point a b (new)
(a b) in the course of the self- assessment pursuant to Article 6 a of this Regulation, the AI system or its operation is found to result in a high risk to the rights and freedoms of natural persons; or
Amendment 1426 #
Proposal for a regulation
Article 6 – paragraph 1 – point a c (new)
Article 6 – paragraph 1 – point a c (new)
(a c) it is listed in Annex III.
Amendment 1427 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
Article 6 – paragraph 1 – point b
Amendment 1446 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
Article 6 – paragraph 2 a (new)
2 a. The assessment referred to in paragraph 2 shall be conducted by the Commission annually and under the consultation conditions laid down in this regulation, notably in Article 73;
Amendment 1449 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
Article 6 – paragraph 2 b (new)
2 b. Where the Commission finds in the course of the assessment pursuant to paragraphs 1 and 2 that an AI system or an area of AI systems must be considered "high risk" or cannot or can no longer be considered "high risk", including due to improvements in technology or to social or legal safeguards put in place, it is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding or removing AI systems and areas of AI systems.
Amendment 1457 #
Proposal for a regulation
Article 6 a (new)
Article 6 a (new)
Amendment 1464 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems and areas of high-risk systems that pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights, the environment, society, the rule of law or democracy, a risk of economic harm or a risk to consumer protection, in respect of its severity or probability of occurrence;
Amendment 1474 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
Article 7 – paragraph 1 – point a
Amendment 1480 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
Article 7 – paragraph 1 – point b
Amendment 1489 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following non-cumulative criteria:
Amendment 1495 #
Proposal for a regulation
Article 7 – paragraph 2 – point a
Article 7 – paragraph 2 – point a
(a) the intended purpose of the AI system, potential use, or reasonably foreseeable misuse;
Amendment 1508 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to health and safety or adversely impacted fundamental rights, environment, society, rule of law or democracy, consumer protection or caused economic harm or has given rise to reasonable concerns in relation to the likelihood of such harm or adverse impact;
Amendment 1511 #
Proposal for a regulation
Article 7 – paragraph 2 – point c a (new)
Article 7 – paragraph 2 – point c a (new)
(c a) the AI systems pose a risk of harm to occupational health and safety, including psychosocial risks and mental health;
Amendment 1514 #
Proposal for a regulation
Article 7 – paragraph 2 – point d
Article 7 – paragraph 2 – point d
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;
Amendment 1518 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome involving an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
Amendment 1519 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
Amendment 1523 #
(f) the extent to which there is an imbalance of power, or the potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to status, authority, knowledge, economic or social circumstances, or age;
Amendment 1527 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome involving an AI system is easily reversible and can effectively be appealed by AI subjects. Outcomes having an impact on the fundamental rights or health or safety of persons shall not be considered as easily reversible;
Amendment 1542 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – point i
Article 7 – paragraph 2 – point h – point i
(i) effective measures of redress in relation to the damage caused by an AI system, with the exclusion of claims for direct or indirect damages;
Amendment 1553 #
Proposal for a regulation
Article 8 – paragraph 1
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter throughout the entire lifecycle of the AI system. This includes their placing on the market as well as their deployment and use. Providers and deployers of AI systems shall ensure compliance by establishing technical and operational measures in line with this Chapter.
Amendment 1560 #
Proposal for a regulation
Article 8 – paragraph 1 a (new)
Article 8 – paragraph 1 a (new)
1 a. Where a deployer discovers non- compliance of a high-risk AI system with this regulation during reasonably foreseeable use, the deployer shall have the right to obtain the necessary modifications from the provider to the high-risk AI system.
Amendment 1561 #
Proposal for a regulation
Article 8 – paragraph 1 b (new)
Article 8 – paragraph 1 b (new)
1 b. Prospective deployers of high-risk AI systems shall have certified third parties assess and confirm the conformity of the AI system and its use with this Regulation and relevant applicable Union legislation before putting it into use. The conformity certificate shall be uploaded to the database pursuant to Article 60.
Amendment 1562 #
Proposal for a regulation
Article 8 – paragraph 1 c (new)
Article 8 – paragraph 1 c (new)
1 c. Where personal data is processed or is expected to be processed in the use of a high-risk AI system, this shall be understood as constituting a high risk in the meaning of Article 35 of Regulation (EU) 2016/679.
Amendment 1565 #
Proposal for a regulation
Article 8 – paragraph 2
Article 8 – paragraph 2
2. The intended purpose, the potential or reasonably foreseeable use or misuse of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
Amendment 1578 #
Proposal for a regulation
Article 9 – paragraph 1
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems throughout the entire lifecycle of the AI system.
Amendment 1581 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:
Amendment 1586 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system, including by means of a fundamental rights impact assessment as provided for in Article 9a;
Amendment 1596 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable use or misuse;
Amendment 1606 #
Proposal for a regulation
Article 9 – paragraph 3
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
Amendment 1610 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks and the reasoned judgements made shall be communicated to the deployer and made available to AI subjects.
Amendment 1625 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point b
Article 9 – paragraph 4 – subparagraph 1 – point b
(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;
Amendment 1630 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to deployers. (This amendment applies throughout the text. Adopting it will necessitate corresponding changes throughout.)
Amendment 1631 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c a (new)
Article 9 – paragraph 4 – subparagraph 1 – point c a (new)
(c a) the governance structures to mitigate risks.
Amendment 1632 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
Article 9 – paragraph 4 – subparagraph 2
In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training to be expected by the deployer, to the socio-technical context in which the system is intended to be used, and to reasonably foreseeable use or misuse.
Amendment 1642 #
Proposal for a regulation
Article 9 – paragraph 5
Article 9 – paragraph 5
5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently and safely during reasonably foreseeable conditions of use or misuse, and that they are in compliance with the requirements set out in this Chapter.
Amendment 1645 #
Proposal for a regulation
Article 9 – paragraph 6
Article 9 – paragraph 6
Amendment 1656 #
Proposal for a regulation
Article 9 – paragraph 7
Article 9 – paragraph 7
7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose or reasonably foreseeable misuse of the high-risk AI system.
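For illustration, a minimal sketch of what testing against preliminarily defined metrics and probabilistic thresholds could look like in practice, assuming a Python test harness; the metric names, threshold values and the measured results are illustrative assumptions, not terms derived from the Regulation.

# Illustrative pre-release test gate: a candidate high-risk AI system is
# evaluated against preliminarily defined metrics and thresholds before
# placing on the market or putting into service.
from dataclasses import dataclass

@dataclass
class MetricGate:
    name: str                     # e.g. "accuracy", "false_positive_rate"
    threshold: float              # preliminarily defined acceptable bound
    higher_is_better: bool = True

def passes_release_gate(measured: dict[str, float], gates: list[MetricGate]) -> bool:
    """Return True only if every predefined metric meets its threshold."""
    for gate in gates:
        value = measured.get(gate.name)
        if value is None:
            return False  # an unmeasured metric is treated as a failure
        ok = value >= gate.threshold if gate.higher_is_better else value <= gate.threshold
        if not ok:
            return False
    return True

# Hypothetical usage with thresholds appropriate to the intended purpose:
gates = [MetricGate("accuracy", 0.95),
         MetricGate("false_positive_rate", 0.01, higher_is_better=False)]
measured = {"accuracy": 0.97, "false_positive_rate": 0.008}  # from testing
assert passes_release_gate(measured, gates)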
Amendment 1660 #
Proposal for a regulation
Article 9 – paragraph 8
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to:
Amendment 1662 #
Proposal for a regulation
Article 9 – paragraph 8 – point a (new)
Article 9 – paragraph 8 – point a (new)
(a) adversely affect specific groups of people, in particular on the basis of gender, sexual orientation, age, ethnicity, disability, religion, socio-economic standing or origin, including migrants, refugees and asylum seekers;
Amendment 1663 #
Proposal for a regulation
Article 9 – paragraph 8 – point b (new)
Article 9 – paragraph 8 – point b (new)
(b) have an adverse impact on the environment, or;
Amendment 1664 #
Proposal for a regulation
Article 9 – paragraph 8 – point c (new)
Article 9 – paragraph 8 – point c (new)
(c) be implemented on children;
Amendment 1665 #
Proposal for a regulation
Article 9 – paragraph 8 – point d (new)
Article 9 – paragraph 8 – point d (new)
(d) have an adverse effect on mental health or individuals’ behaviour;
Amendment 1666 #
Proposal for a regulation
Article 9 – paragraph 8 – point e (new)
Article 9 – paragraph 8 – point e (new)
(e) amplify the spread of disinformation and amplify polarisation;
Amendment 1667 #
Proposal for a regulation
Article 9 – paragraph 8 – point f (new)
Article 9 – paragraph 8 – point f (new)
(f) amplify the spread of disinformation and amplify polarisation;
Amendment 1671 #
Proposal for a regulation
Article 9 a (new)
Article 9 a (new)
Amendment 1684 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices throughout the entire lifecycle of the AI system. Those practices shall concern in particular,
Amendment 1710 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
Article 10 – paragraph 2 – point g a (new)
(g a) verification of the legality of the sources of the data.
Amendment 1722 #
Proposal for a regulation
Article 10 – paragraph 3
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, free of errors and statistically complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1735 #
Proposal for a regulation
Article 10 – paragraph 5
Article 10 – paragraph 5
Amendment 1746 #
Proposal for a regulation
Article 10 a (new)
Article 10 a (new)
Article 10 a
Environmental Impact of high-risk AI systems
1. High-risk AI systems shall be designed and developed making use of state-of-the-art methods to reduce energy use, resource use and waste, as well as to increase energy efficiency and the overall efficiency of the system. They shall be designed, developed and set up with capabilities enabling the measurement and logging of the consumption of energy and resources, and of other environmental impact the deployment and use of the systems may have over their entire lifecycle.
2. Member States shall ensure that relevant national authorities issue guidelines and provide support to providers and deployers in their efforts to reduce the environmental impact and resource use of high-risk AI systems.
3. The Commission shall be empowered to adopt delegated acts in accordance with Article 73 to detail the measurement and logging procedures, taking into account state-of-the-art methods, in particular to enable the comparability of the environmental impact of systems, and taking into account economies of scale.
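For illustration, a minimal sketch of the measurement-and-logging capability paragraph 1 describes, assuming the host platform exposes an energy counter; read_energy_joules() is a hypothetical stand-in for a platform-specific interface such as an RAPL counter or a power-distribution-unit API, and the log format is an illustrative assumption.

# Illustrative energy and resource logging around a single inference call.
import json
import time

def read_energy_joules() -> float:
    """Hypothetical platform hook; replace with e.g. an RAPL or PDU reading."""
    raise NotImplementedError

def logged_inference(model_call, payload, log_path="energy_log.jsonl"):
    e0, t0 = read_energy_joules(), time.time()
    result = model_call(payload)
    record = {
        "timestamp": t0,
        "duration_s": time.time() - t0,
        "energy_j": read_energy_joules() - e0,  # consumption of this call
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")      # append-only lifecycle log
    return result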
Amendment 1749 #
Proposal for a regulation
Article 11 – paragraph 1 – introductory part
Article 11 – paragraph 1 – introductory part
1. The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date throughout its entire lifecycle and, where appropriate, beyond.
Amendment 1768 #
Proposal for a regulation
Article 12 – paragraph 1
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI system is operating. Those logging capabilities shall conform to the state of the art and recognised standards or common specifications.
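For illustration, a minimal sketch of automatic event recording during operation, using Python’s standard logging module; the event fields shown are illustrative assumptions rather than a standardised log schema.

# Illustrative automatic recording of operating events ('logs').
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_system_events")
logging.basicConfig(filename="events.log", level=logging.INFO, format="%(message)s")

def record_event(event_type: str, detail: dict) -> None:
    """Append one structured, timestamped event per operation of the system."""
    logger.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "inference", "override", "error"
        "detail": detail,
    }))

record_event("inference", {"model_version": "1.2.0", "input_len": 42})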
Amendment 1788 #
Proposal for a regulation
Article 13 – title
Article 13 – title
Transparency and provision of information to deployers and AI subjects
Amendment 1791 #
Proposal for a regulation
Article 13 – paragraph 1
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the deployer and of the provider set out in Chapter 3 of this Title. Where individuals are passively subject to AI systems (AI subjects), information to ensure an appropriate type and degree of transparency shall be made publicly available, with full respect to the privacy, personality, and related rights of subjects.
Amendment 1794 #
Proposal for a regulation
Article 13 – paragraph 2
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, statistically complete, correct and clear information that is relevant, accessible and comprehensible to deployers.
Amendment 1803 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
Article 13 – paragraph 3 – point b – point iii
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable use or misuse, which may lead to risks to health, safety, fundamental rights, the environment, or democracy;
Amendment 1806 #
Proposal for a regulation
Article 13 – paragraph 3 – point d
Article 13 – paragraph 3 – point d
(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the deployers;
Amendment 1809 #
Proposal for a regulation
Article 13 – paragraph 3 – point e a (new)
Article 13 – paragraph 3 – point e a (new)
(e a) the level of extraction and consumption of natural resources.
Amendment 1814 #
Proposal for a regulation
Article 14 – paragraph 1
Article 14 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use, and to allow for thorough investigation after an incident.
Amendment 1817 #
Proposal for a regulation
Article 14 – paragraph 2
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety, fundamental rights, democracy, or the environment that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable use or misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
Amendment 1821 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
Article 14 – paragraph 3 – introductory part
3. Human oversight shall be ensured through either one or both of the following measures:
Amendment 1822 #
Proposal for a regulation
Article 14 – paragraph 3 – point a
Article 14 – paragraph 3 – point a
(a) measures by the provider building human oversight, when technically feasible, into the high-risk AI system before it is placed on the market or put into service;
Amendment 1824 #
Proposal for a regulation
Article 14 – paragraph 3 – point b
Article 14 – paragraph 3 – point b
(b) other measures identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the deployer, such as user guides.
Amendment 1835 #
Proposal for a regulation
Article 14 – paragraph 4 – point b
Article 14 – paragraph 4 – point b
(b) mitigate the risk of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
Amendment 1839 #
Proposal for a regulation
Article 14 – paragraph 4 – point d
Article 14 – paragraph 4 – point d
(d) be free to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
Amendment 1842 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
Article 14 – paragraph 4 – point e
(e) be able to intervene in the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state.
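For illustration, a minimal sketch, assuming a Python processing loop, of a “stop” control that lets an overseer interrupt the system and bring it to a halt in a safe state; the flush_state() step is a hypothetical placeholder for whatever safe-state actions a concrete system would need.

# Illustrative human-oversight stop control for a long-running AI loop.
import threading

stop_requested = threading.Event()  # set by the overseer's "stop" button

def flush_state() -> None:
    """Hypothetical safe-state action: persist buffers, release actuators."""
    pass

def run(tasks, process):
    for task in tasks:
        if stop_requested.is_set():
            flush_state()           # come to a halt in a safe state
            break
        process(task)

# An oversight interface running in another thread would call
# stop_requested.set() to interrupt the system.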
Amendment 1845 #
Proposal for a regulation
Article 14 – paragraph 5
Article 14 – paragraph 5
5. For high-risk AI systems referred to in point 1(a) and 1(b) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the output from the system unless this has been verified and confirmed by at least two natural persons.
Amendment 1851 #
Proposal for a regulation
Article 15 – paragraph 1
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve security by design and by default, in the light of their intended purpose, an appropriate level of accuracy, reliability, robustness, resilience, safety and cybersecurity throughout their lifecycle.
Amendment 1853 #
Proposal for a regulation
Article 15 – paragraph 2
Article 15 – paragraph 2
2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be assessed by an independent entity and declared in the accompanying instructions of use. The language used shall be clear, free of misunderstandings or misleading statements.
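For illustration, a minimal sketch of how declared accuracy metrics might be computed and exported for the instructions of use, assuming scikit-learn is available; the choice of metrics and the sample labels are illustrative assumptions, and the independent assessment itself is outside the sketch.

# Illustrative computation of accuracy metrics to be declared in the
# accompanying instructions of use (after independent assessment).
import json
from sklearn.metrics import accuracy_score, f1_score

def metrics_declaration(y_true, y_pred) -> str:
    """Produce a plain, unambiguous metrics statement as JSON."""
    decl = {
        "accuracy": round(accuracy_score(y_true, y_pred), 4),
        "f1_macro": round(f1_score(y_true, y_pred, average="macro"), 4),
    }
    return json.dumps(decl, indent=2)

print(metrics_declaration([0, 1, 1, 0], [0, 1, 0, 0]))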
Amendment 1859 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
Article 15 – paragraph 3 – introductory part
3. High-risk AI systems shall be robust as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
Amendment 1861 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2
Article 15 – paragraph 3 – subparagraph 2
High-risk AI systems that continue to learn after being placed on the market or put into service shall ensure that ‘feedback loops’ caused by biased outputs are adequately addressed with appropriate mitigation measures.
Amendment 1866 #
Proposal for a regulation
Article 15 – paragraph 4 – introductory part
Article 15 – paragraph 4 – introductory part
4. High-risk AI systems shall be adequately protected against attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities.
Amendment 1869 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 1
Article 15 – paragraph 4 – subparagraph 1
The technical and organisational measures aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
Amendment 1871 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 2
Article 15 – paragraph 4 – subparagraph 2
The technical and organisational measures to address AI specific vulnerabilities shall include at least, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws.
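For illustration, a minimal sketch of one possible control against training-data manipulation (‘data poisoning’), assuming numerical feature vectors: a z-score screen that quarantines anomalous training rows for human review. This is a single illustrative measure under those assumptions, not an exhaustive defence against poisoning or adversarial examples.

# Illustrative data-poisoning control: screen incoming training rows
# against reference statistics and quarantine outliers for review.
import numpy as np

def screen_training_batch(X: np.ndarray, ref_mean: np.ndarray,
                          ref_std: np.ndarray, z_max: float = 6.0):
    """Return (accepted_rows, quarantined_rows) based on feature z-scores."""
    z = np.abs((X - ref_mean) / (ref_std + 1e-12))
    suspicious = (z > z_max).any(axis=1)   # any feature far outside reference
    return X[~suspicious], X[suspicious]

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(100, 4))
poisoned = np.vstack([clean, np.full((1, 4), 50.0)])  # injected outlier row
ok, flagged = screen_training_batch(poisoned, clean.mean(0), clean.std(0))
assert len(flagged) == 1  # the injected row is quarantined for review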
Amendment 1872 #
Proposal for a regulation
Article 15 – paragraph 4 a (new)
Article 15 – paragraph 4 a (new)
4 a. High-risk AI systems shall be accompanied by security solutions and patches for the lifetime of the product they are embedded in or, where they do not depend on a specific product, for a period that shall be stated by the manufacturer and cannot be less than 10 years.
Amendment 1874 #
Proposal for a regulation
Title III – Chapter 3 – title
Title III – Chapter 3 – title
3 OBLIGATIONS OF PROVIDERS AND DEPLOYERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES
Amendment 1875 #
Proposal for a regulation
Article 16 – title
Article 16 – title
Obligations of providers and deployers of high-risk AI systems
Amendment 1877 #
Proposal for a regulation
Article 16 – paragraph 1 – introductory part
Article 16 – paragraph 1 – introductory part
Providers and, where applicable, deployers of high-risk AI systems shall:
Amendment 1885 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
Article 16 – paragraph 1 – point a a (new)
(a a) include name and contact information;
Amendment 1889 #
Proposal for a regulation
Article 16 – paragraph 1 – point d
Article 16 – paragraph 1 – point d
(d) when under their control, keep the logs automatically generated by their high-risk AI systems for a period of at least two years, or as long as is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law;
Amendment 1895 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
Article 16 – paragraph 1 – point e
(e) ensure that the high-risk AI system undergoes the relevant third party conformity assessment procedure, prior to its placing on the market or putting into service;
Amendment 1912 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
Article 17 – paragraph 1 – introductory part
1. Providers and, where applicable, deployers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
Amendment 1923 #
Proposal for a regulation
Article 17 – paragraph 1 – point e
Article 17 – paragraph 1 – point e
(e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, or do not cover all of the relevant requirements, the means to be used to ensure that the high-risk AI system complies with the requirements set out in Chapter 2 of this Title;
Amendment 1925 #
Proposal for a regulation
Article 17 – paragraph 1 – point f
Article 17 – paragraph 1 – point f
(f) systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market, putting into service, and deployment of high-risk AI systems;
Amendment 1939 #
Proposal for a regulation
Article 17 – paragraph 2
Article 17 – paragraph 2
Amendment 1942 #
Proposal for a regulation
Article 17 – paragraph 3
Article 17 – paragraph 3
3. This Article applies without prejudice to the obligations for providers that are credit institutions regulated by Directive 2013/36/EU.
Amendment 1948 #
Proposal for a regulation
Article 18 – paragraph 1
Article 18 – paragraph 1
1. Providers of high-risk AI systems shall draw up the technical documentation referred to in Article 11 in accordance with Annex IV and make it available at the request of a national competent authority.
Amendment 1957 #
Proposal for a regulation
Article 20 – paragraph 1
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the deployer or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
Amendment 1960 #
Proposal for a regulation
Article 21 – paragraph 1
Article 21 – paragraph 1
Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately inform the competent authorities and take the necessary corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall inform the distributors and deployers of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
Amendment 1966 #
Proposal for a regulation
Article 22 – paragraph 1
Article 22 – paragraph 1
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and the provider of the system becomes aware of that risk, that provider shall immediately inform the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
Amendment 1972 #
Proposal for a regulation
Article 23 – paragraph 1
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
Amendment 1988 #
Proposal for a regulation
Article 25 – paragraph 2 – point b
Article 25 – paragraph 2 – point b
(b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law;
Amendment 2002 #
Proposal for a regulation
Article 26 – paragraph 3
Article 26 – paragraph 3
3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and, on its packaging or its accompanying documentation, where applicable.
Amendment 2005 #
Proposal for a regulation
Article 26 – paragraph 5
Article 26 – paragraph 5
5. Importers shall provide national competent authorities, upon a reasoned request, with all necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by that national competent authority, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action national competent authority takes in relation to that system.
Amendment 2013 #
Proposal for a regulation
Article 27 – paragraph 2
Article 27 – paragraph 2
2. Where a distributor considers or has reason to consider that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the competent authorities and the provider or the importer of the system, as applicable, to that effect.
Amendment 2017 #
Proposal for a regulation
Article 27 – paragraph 4
Article 27 – paragraph 4
4. A distributor that considers or has reason to consider that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
Amendment 2021 #
Proposal for a regulation
Article 27 – paragraph 5
Article 27 – paragraph 5
5. Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority on any action taken by that authority.
Amendment 2025 #
Proposal for a regulation
Article 28 – title
Article 28 – title
Obligations of distributors, importers, deployers or any other third-party
Amendment 2027 #
Proposal for a regulation
Article 28 – paragraph 1 – introductory part
Article 28 – paragraph 1 – introductory part
1. Any distributor, importer, deployer or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
Amendment 2030 #
Proposal for a regulation
Article 28 – paragraph 1 – point b a (new)
Article 28 – paragraph 1 – point b a (new)
(b a) they deploy a high-risk system for a purpose other than the intended purpose;
Amendment 2032 #
Proposal for a regulation
Article 28 – paragraph 1 – point c a (new)
Article 28 – paragraph 1 – point c a (new)
(c a) they modify the intended purpose of an AI system which is not high-risk and is already placed on the market or put into service, in a way which makes the modified system a high-risk AI system.
Amendment 2035 #
Proposal for a regulation
Article 29 – title
Article 29 – title
29 Obligations of deployers of high-risk AI systems (This amendment applies throughout the text. Adopting it will necessitate corresponding changes throughout.)
Amendment 2038 #
Proposal for a regulation
Article 29 – paragraph 1
Article 29 – paragraph 1
1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures and ensure that the use of such systems is in accordance with the instructions of use accompanying the systems and enables human oversight and decision-making, pursuant to paragraphs 2 and 5.
Amendment 2043 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
Article 29 – paragraph 1 a (new)
1 a. Deployers shall identify the categories of natural persons and groups likely to be affected by the system before putting it into use.
Amendment 2045 #
Proposal for a regulation
Article 29 – paragraph 1 b (new)
Article 29 – paragraph 1 b (new)
1 b. Human oversight following paragraph 1 shall be carried out by natural persons having the necessary competences, training, authority and independence.
Amendment 2047 #
Proposal for a regulation
Article 29 – paragraph 2
Article 29 – paragraph 2
2. The obligations in paragraph 1 are without prejudice to other deployer obligations under Union or national law and shall take due account of the deployer’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
Amendment 2089 #
Proposal for a regulation
Article 30 – paragraph 7
Article 30 – paragraph 7
7. Notifying authorities shall have a sufficient number of competent personnel at their disposal for the proper performance of their tasks. Where applicable, competent personnel shall have the necessary expertise in the supervision of fundamental rights.
Amendment 2103 #
Proposal for a regulation
Article 33 – paragraph 4
Article 33 – paragraph 4
4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider. Notified bodies and their employees should not have provided any service to the provider of a high-risk system for 12 months before the assessment. They should also commit not to work for the provider of a high-risk system assessed or a professional organisation or business association of which the provider of a high-risk system is a member for 12 months after their position in the auditing organisation has ended.
Amendment 2111 #
Proposal for a regulation
Article 37 – paragraph 1
Article 37 – paragraph 1
1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt whether a notified body complies with the requirements laid down in Article 33.
Amendment 2113 #
Proposal for a regulation
Article 37 – paragraph 4
Article 37 – paragraph 4
4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements laid down in Article 33, it shall adopt a reasoned decision requesting the notifying Member State to take the necessary corrective measures, including withdrawal of notification if applicable. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2136 #
Proposal for a regulation
Article 41 – paragraph 1
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety, accessibility, or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2152 #
Proposal for a regulation
Article 42 – paragraph 1
Article 42 – paragraph 1
1. Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used or are reasonably foreseeable to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
Amendment 2156 #
Proposal for a regulation
Article 43 – title
Article 43 – title
Amendment 2160 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in Annex III, the provider shall have a conformity assessment carried out by an independent third party, following the conformity assessment procedure set out in Annex VII.
Amendment 2165 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
Article 43 – paragraph 1 – point a
Amendment 2169 #
Amendment 2172 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 1
Article 43 – paragraph 1 – subparagraph 1
Amendment 2177 #
Proposal for a regulation
Article 43 – paragraph 1 – subparagraph 2
Article 43 – paragraph 1 – subparagraph 2
For the purpose of carrying out the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.
Amendment 2181 #
Proposal for a regulation
Article 43 – paragraph 2
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
Amendment 2186 #
Proposal for a regulation
Article 43 – paragraph 3 – subparagraph 2
Article 43 – paragraph 3 – subparagraph 2
Amendment 2192 #
Proposal for a regulation
Article 43 – paragraph 4 – introductory part
Article 43 – paragraph 4 – introductory part
4. High-risk AI systems shall undergo a new third party conformity assessment procedure whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer.
Amendment 2204 #
Proposal for a regulation
Article 43 – paragraph 6
Article 43 – paragraph 6
Amendment 2210 #
Proposal for a regulation
Article 44 – paragraph 3
Article 44 – paragraph 3
3. Where a notified body finds that an AI system no longer meets the requirements set out in Chapter 2 of this Title, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose any restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.
Amendment 2213 #
Proposal for a regulation
Article 46 – paragraph 3
Article 46 – paragraph 3
3. Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same artificial intelligence systems with relevant information on issues relating to negative and, on request, positive conformity assessment results.
Amendment 2216 #
Proposal for a regulation
Article 47 – paragraph 1
Article 47 – paragraph 1
1. By way of derogation from Article 43, any market surveillance authority may request a judicial authority to authorise the placing on the market or putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection and the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period of time, while the necessary conformity assessment procedures are being carried out, and shall terminate once those procedures have been completed. The completion of those procedures shall be undertaken without undue delay.
Amendment 2218 #
Proposal for a regulation
Article 47 – paragraph 2
Article 47 – paragraph 2
2. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority and the judicial authority conclude that the high-risk AI system complies with the requirements of Chapter 2 of this Title. The market surveillance authority shall inform the Commission and the other Member States of any request made and any subsequent authorisation issued pursuant to paragraph 1.
Amendment 2219 #
Proposal for a regulation
Article 47 – paragraph 3
Article 47 – paragraph 3
3. Where, within 15 calendar days of receipt of the information referred to in paragraph 2, no objection has been raised by either a Member State or the Commission to the request of the market surveillance authority of a Member State for an authorisation in accordance with paragraph 1, that request shall be deemed justified.
Amendment 2221 #
Proposal for a regulation
Article 47 – paragraph 4
Article 47 – paragraph 4
4. Where, within 15 calendar days of receipt of the notification referred to in paragraph 2, objections are raised by a Member State against a request issued by a market surveillance authority of another Member State, or where the Commission considers the request to be contrary to Union law or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 2 to be unfounded, the Commission shall without delay enter into consultation with the relevant Member State; the operator(s) concerned shall be consulted and have the possibility to present their views. In view thereof, the Commission shall decide whether the request is justified or not. The Commission shall address its decision to the Member State concerned and the relevant operator or operators.
Amendment 2222 #
Proposal for a regulation
Article 47 – paragraph 5
Article 47 – paragraph 5
5. If the request is considered unjustified, it shall be withdrawn by the market surveillance authority of the Member State concerned.
Amendment 2223 #
Proposal for a regulation
Article 48 – paragraph 1
Article 48 – paragraph 1
1. The notifying authority shall, after third-party conformity assessment, draw up a written physical and machine-readable electronic EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 5 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities upon request.
Amendment 2227 #
Proposal for a regulation
Article 48 – paragraph 4
Article 48 – paragraph 4
4. After receiving the EU declaration of conformity, the provider shall assume responsibility for continuous compliance with the requirements set out in Chapter 2 of this Title. The provider shall keep the EU declaration of conformity up to date throughout the entire lifecycle.
Amendment 2240 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
Article 50 – paragraph 1 – introductory part
The provider shall, for a period ending 5 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:
Amendment 2247 #
Proposal for a regulation
Article 51 – paragraph 1
Article 51 – paragraph 1
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 2253 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Article 51 – paragraph 1 a (new)
Before each deployment of, or substantial modification to, a high-risk AI system referred to in Article 6, the deployer or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 2257 #
Proposal for a regulation
Article 51 – paragraph 1 b (new)
Article 51 – paragraph 1 b (new)
Where the provider or deployer is a public authority, they shall register both high-risk AI systems and all other AI systems.
Amendment 2259 #
Proposal for a regulation
Title IV
Title IV
TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS
Amendment 2260 #
Proposal for a regulation
Article 52 – title
Article 52 – title
Transparency obligations for certain AI systems
Amendment 2264 #
Proposal for a regulation
Article 52 – paragraph 1
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed without delay that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall also include information on which components and functions are supported through AI, information on which main parameters the AI system takes into account, and information on human oversight and which person is responsible for decisions made or influenced by the system, as well as information on rectification, redress rights and options.
Amendment 2266 #
Proposal for a regulation
Article 52 – paragraph 2
Article 52 – paragraph 2
2. Deployers of a remote biometric recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall also include information on which components and functions are supported through AI, information on which main parameters the AI system takes into account, and information on human oversight and which person is responsible for decisions made or influenced by the system, as well as information on rectification, redress rights and options.
Amendment 2272 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
Article 52 – paragraph 3 – introductory part
3. Deployers of an AI system other than those in paragraphs 1 or 2, that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
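For illustration, a minimal sketch of a machine-readable form such a disclosure could take for generated images, assuming the Pillow library; the metadata key is a hypothetical assumption, and a real deployment would likely pair it with a visible notice and a robust watermark.

# Illustrative disclosure: embed an "artificially generated or manipulated"
# notice in the PNG metadata of generated image content.
from PIL import Image, PngImagePlugin

def save_with_disclosure(img: Image.Image, path: str) -> None:
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai_disclosure",  # hypothetical, non-standard tag name
                  "This content has been artificially generated or manipulated.")
    img.save(path, "PNG", pnginfo=meta)

img = Image.new("RGB", (64, 64))          # stand-in for generated content
save_with_disclosure(img, "generated.png")
print(Image.open("generated.png").text["ai_disclosure"])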
Amendment 2273 #
Amendment 2280 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
Article 52 – paragraph 3 a (new)
3 a. The obligations in paragraphs 1, 2 and 3 shall be without prejudice to Union law on delaying information of subjects in ongoing criminal investigations, and be without prejudice to the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 2282 #
Proposal for a regulation
Article 52 – paragraph 4
Article 52 – paragraph 4
4. The information in paragraphs 1, 2 and 3 shall be provided in an accessible, easy to understand, yet comprehensive manner, at least in one of the languages of the Member State in which the system was made available, and shall not affect the requirements and obligations set out in Title III of this Regulation.
Amendment 2285 #
Proposal for a regulation
Article 52 a (new)
Article 52 a (new)
Article 52 a
Limitations for deep fakes of persons
Notwithstanding Article 52 and subject to appropriate safeguards for the rights and freedoms of third parties, the use of AI systems that generate or manipulate image, audio or video content that appreciably resembles existing persons and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall be permitted only
(a) when used for the exercise of the rights to freedom of expression and to artistic expression, or
(b) with the explicit consent of the affected persons.
Amendment 2289 #
Proposal for a regulation
Article 53 – paragraph 1
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States’ competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. Following a fundamental rights impact assessment, as laid out in Article 9a, this shall take place under the direct supervision and guidance by the competent authorities with a view to identifying risks in particular to the environment, health and safety, and fundamental rights, ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. Access to the regulatory sandboxes shall require providers to apply for participation. Supervising authorities shall inform applicants of their decision within 3 months of the application, or, in justified cases, of an extension of this deadline by at most another 3 months. The supervising authority shall inform the European Artificial Intelligence Board of the provision of regulatory sandboxes.
Amendment 2308 #
Proposal for a regulation
Article 53 – paragraph 2
Article 53 – paragraph 2
2. Member States shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated with the operation of the AI regulatory sandbox and involved in the control of those aspects of the sandbox that they supervise, to the full extent of their respective powers.
Amendment 2313 #
Proposal for a regulation
Article 53 – paragraph 3
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to democracy, the environment, health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place. Where mitigating measures that stop and remedy such significant risk or harm cannot be identified, Member States shall ensure that the competent authorities have the power to permanently suspend the development and testing process. In the case of abuse, competent authorities shall have the power to ban providers from applying for and participating in the regulatory sandbox for a limited amount of time or indefinitely. Decisions to suspend or ban providers from participating in regulatory sandboxes shall be submitted without delay to the European Artificial Intelligence Board. Applicants shall have access to remedies.
Amendment 2327 #
Proposal for a regulation
Article 53 – paragraph 5
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application and possible revision of this Regulation and other Union legislation supervised within the sandbox, in particular with regard to easing burdens and introducing further regulation where additional risks and potential harms are identified.
Amendment 2335 #
Proposal for a regulation
Article 53 – paragraph 6
Article 53 – paragraph 6
6. The modalities and the conditions of the operation of the AI regulatory sandboxes, including the eligibility criteria and the procedure for the application, selection, participation and exiting from the sandbox, and the rights and obligations of the participants shall be set out by the European Artificial Intelligence Board in close cooperation with the Member States’ competent authorities. A list of planned and current sandboxes, including the modalities, conditions, eligibility criteria and the application, selection and participation procedure, shall be made publicly available by the European Artificial Intelligence Board.
Amendment 2346 #
Proposal for a regulation
Article 54 – paragraph 1 – introductory part
Article 54 – paragraph 1 – introductory part
1. In the AI regulatory sandbox personal data and data protected by intellectual property rights or trade secrets lawfully collected for other purposes shall be processed solely for the purposes of developing and testing certain innovative AI systems in the sandbox under the following conditions:
Amendment 2349 #
Proposal for a regulation
Article 54 – paragraph 1 – point a – introductory part
Article 54 – paragraph 1 – point a – introductory part
(a) the innovative AI systems shall be developed for safeguarding substantial public interest in one or more of the following areas:
Amendment 2351 #
Proposal for a regulation
Article 54 – paragraph 1 – point a – point i
Article 54 – paragraph 1 – point a – point i
Amendment 2353 #
Proposal for a regulation
Article 54 – paragraph 1 – point a – point iii
Article 54 – paragraph 1 – point a – point iii
(iii) a high level of protection and improvement of the quality of the environment, and to counter and remedy the climate crisis;
Amendment 2356 #
Proposal for a regulation
Article 54 – paragraph 1 – point c
Article 54 – paragraph 1 – point c
(c) there are effective monitoring mechanisms to identify if any high risks to the fundamental rights of the data subjects and holders of intellectual property rights or trade secrets may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;
Amendment 2357 #
Proposal for a regulation
Article 54 – paragraph 1 – point d
Article 54 – paragraph 1 – point d
(d) any personal data or data protected by intellectual property rights or trade secrets to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the participants and only authorised persons have access to those data;
Amendment 2359 #
Proposal for a regulation
Article 54 – paragraph 1 – point e
Article 54 – paragraph 1 – point e
(e) any personal data or data protected by intellectual property rights or trade secrets processed are not transmitted, transferred or otherwise accessed by other parties;
Amendment 2362 #
Proposal for a regulation
Article 54 – paragraph 1 – point g
Article 54 – paragraph 1 – point g
(g) any personal data or data protected by intellectual property rights or trade secrets processed in the context of the sandbox are deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;
Amendment 2364 #
Proposal for a regulation
Article 54 – paragraph 1 – point h
Article 54 – paragraph 1 – point h
(h) the logs of the processing of personal data or data protected by intellectual property rights or trade secrets in the context of the sandbox are kept for the duration of the participation in the sandbox and 1 year after its termination, solely for the purpose of and only as long as necessary for fulfilling accountability and documentation obligations under this Article or other applicable Union or Member States legislation;
Amendment 2366 #
Proposal for a regulation
Article 54 – paragraph 1 – point j
Article 54 – paragraph 1 – point j
(j) a short summary of the AI system developed in the sandbox, its objectives and expected results published on the website of the competent authorities.
Amendment 2368 #
Proposal for a regulation
Article 54 – paragraph 2
Article 54 – paragraph 2
2. Paragraph 1 further specifies Article 89 of Regulation (EU) 2016/679 and is without prejudice to Union or Member States legislation excluding processing for other purposes than those explicitly mentioned in that legislation or to Union or Member States legislation excluding the use of data protected by intellectual property or trade secrets under the conditions covered by Paragraph 1.
Amendment 2371 #
Proposal for a regulation
Article 55 – title
Article 55 – title
Measures for small-scale providers and deployers (This amendment applies throughout the text. Adopting it will necessitate corresponding changes throughout.)
Amendment 2415 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
Article 56 – paragraph 2 – point c a (new)
(c a) propose amendments to Annexes I and III to the Commission.
Amendment 2430 #
Proposal for a regulation
Article 57 – paragraph 1
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, the European Data Protection Supervisor, the Chair of the European Data Protection Board, the Director of the Fundamental Rights Agency and the Executive Director of the European Union Agency for Cybersecurity, or their respective representatives. Other national authorities or Union agencies and bodies may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 2445 #
Proposal for a regulation
Article 57 – paragraph 2
Article 57 – paragraph 2
2. The Board shall adopt its rules of procedure by a two-thirds majority and shall take decisions by a simple majority of its members. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questions.
Amendment 2456 #
Proposal for a regulation
Article 57 – paragraph 3
Article 57 – paragraph 3
3. The Board shall elect a chair and two deputy chairs from among its members. Their term of office shall be five years and be renewable once. The Chair shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 2463 #
Proposal for a regulation
Article 57 – paragraph 4
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Chair shall facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups. The Board shall ensure a balanced representation of stakeholders from academia, research, industry and civil society when it invites external experts and observers, and actively stimulate participation from underrepresented categories.
Amendment 2470 #
Proposal for a regulation
Article 57 a (new)
Article 57 a (new)
Amendment 2478 #
Proposal for a regulation
Article 58 – paragraph -1 (new)
Article 58 – paragraph -1 (new)
-1 The Board shall ensure the consistent application of this Regulation and shall be the competent supervisory authority to enforce this Regulation where one of the following criteria is met:
(a) the aggregate worldwide turnover of an undertaking or the undertaking to which another undertaking belongs is more than EUR 2 500 million;
(b) in each of at least three Member States, the aggregate turnover of an undertaking or the undertaking to which another undertaking belongs is more than EUR 100 million;
(c) in each of at least three Member States included for the purpose of point (b), the aggregate turnover of an undertaking or the undertaking to which another undertaking belongs is more than EUR 25 million; and
(d) the aggregate Union-wide turnover of an undertaking or the undertaking to which another undertaking belongs is more than EUR 100 million, unless each of the undertakings concerned achieves more than two-thirds of its aggregate Community-wide turnover within one and the same Member State.
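For illustration only, the turnover test above reduces to a handful of numeric comparisons. The sketch below is a minimal editorial reading of points (a) to (d): the function name and example figures are hypothetical, the reading of the ‘;and’ between points (c) and (d) as making those two points cumulative is an assumption, and the distinction between an undertaking and the group it belongs to is collapsed into a single set of figures.

```python
# Minimal sketch of the turnover test in Article 58(-1); figures in EUR million.
# Editorial assumptions: points (a) and (b) trigger alone, points (c) and (d)
# together (per the ';and'), and undertaking/group turnover is not distinguished.

def board_is_competent(worldwide: float, union_wide: float,
                       per_member_state: dict[str, float]) -> bool:
    """True where the Board would be the competent supervisory authority."""
    point_a = worldwide > 2_500                                           # (a)
    point_b = sum(1 for t in per_member_state.values() if t > 100) >= 3   # (b)
    point_c = sum(1 for t in per_member_state.values() if t > 25) >= 3    # (c)
    # (d): Union-wide turnover above EUR 100 million, unless more than
    # two-thirds of it is achieved within one and the same Member State.
    two_thirds_in_one = any(t > union_wide * 2 / 3
                            for t in per_member_state.values())
    point_d = union_wide > 100 and not two_thirds_in_one
    return point_a or point_b or (point_c and point_d)

# Hypothetical undertaking active in four Member States.
print(board_is_competent(worldwide=3_000, union_wide=800,
                         per_member_state={"DE": 300, "FR": 200,
                                           "NL": 150, "ES": 150}))  # True
```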
Amendment 2479 #
Proposal for a regulation
Article 58 – paragraph -1 a (new)
Article 58 – paragraph -1 a (new)
-1 a In order to ensure consistent application of this Regulation, the Board shall, on its own initiative or, where relevant, at the request of the Commission, in particular:
(a) monitor and ensure the correct application of Title III of this Regulation without prejudice to the tasks of national supervisory authorities;
(b) advise the Commission on any issue related to the development and use of artificial intelligence in the Union, including on any proposed amendment of this Regulation;
(c) issue guidelines, recommendations, and best practices on procedures, information and documentation as referred to in Titles III and VIII;
(d) examine, on its own initiative, on request of one of its members or on request of the Commission, any question covering the application of this Regulation and issue guidelines, recommendations and best practices in order to encourage consistent application of this Regulation;
(e) draw up guidelines for supervisory authorities concerning the application of this Regulation;
(f) draw up guidelines for supervisory authorities concerning the setting of administrative fines pursuant to Article 72;
(g) review the practical application of the guidelines, recommendations and best practices referred to in points (e) and (f);
(h) encourage the drawing-up of codes of conduct pursuant to Article 69;
(i) issue opinions on codes of conduct drawn up at Union level pursuant to Article 69(3a);
(j) issue decisions pursuant to Articles 66 and 67;
(k) promote the cooperation and the effective bilateral and multilateral exchange of information and best practices between the supervisory authorities;
(l) promote common training programmes and facilitate personnel exchanges between the supervisory authorities and, where appropriate, with the supervisory authorities of third countries or with international organisations;
(m) promote the exchange of knowledge and documentation on relevant legislation and practice with supervisory authorities whose scope includes artificial intelligence worldwide;
(n) maintain a publicly accessible electronic register of decisions taken by supervisory authorities and courts on issues handled pursuant to Chapter 3 of Title VIII.
Amendment 2480 #
Proposal for a regulation
Article 58 – paragraph -1 b (new)
Article 58 – paragraph -1 b (new)
-1 b Where the Commission requests advice from the Board, it may indicate a time limit, taking into account the urgency of the matter.
Amendment 2481 #
Proposal for a regulation
Article 58 – paragraph -1 c (new)
Article 58 – paragraph -1 c (new)
-1 c The Board shall forward its opinions, guidelines, recommendations, and best practices to the Commission and to the committee referred to in Article 73 and make them public.
Amendment 2482 #
Proposal for a regulation
Article 58 – paragraph -1 d (new)
Article 58 – paragraph -1 d (new)
-1 d The Board shall, where appropriate, consult interested parties and give them the opportunity to comment within a reasonable period. The Board shall make the results of the consultation procedure publicly available.
Amendment 2483 #
Proposal for a regulation
Article 58 – paragraph -1 e (new)
Article 58 – paragraph -1 e (new)
Amendment 2487 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
Article 58 – paragraph 1 – introductory part
1. When providing advice and assistance to the Commission in the context of Article 56(2), the Board shall in particular:
Amendment 2554 #
Proposal for a regulation
Article 58 a (new)
Article 58 a (new)
Article 58 a
Independence of the Board
1. The Board shall act with complete independence in performing its tasks and exercising its powers in accordance with this Regulation.
2. The members of the Board shall, in the performance of their tasks and exercise of their powers in accordance with this Regulation, remain free from external influence, whether direct or indirect, and shall neither seek nor take instructions from anybody.
3. The members of the Board shall refrain from any action incompatible with their duties and shall not, during their term of office, engage in any incompatible occupation, whether gainful or not.
Amendment 2560 #
Proposal for a regulation
Article 59 – paragraph 1
Article 59 – paragraph 1
1. National competent authorities shall be established or designated by each Member State for the purpose of ensuring the application, implementation and enforcement of this Regulation. National competent authorities shall be organised so as to safeguard the objectivity and impartiality of their activities and tasks.
Amendment 2564 #
Proposal for a regulation
Article 59 – paragraph 2
Article 59 – paragraph 2
2. Each Member State shall designate the national data protection authority as the national supervisory authority among the national competent authorities. The national supervisory authority shall act as notifying authority and market surveillance authority unless a Member State has organisational and administrative reasons to designate more than one authority.
Amendment 2571 #
Proposal for a regulation
Article 59 – paragraph 4
Article 59 – paragraph 4
4. Member States shall ensure that national competent authorities are provided with adequate financial, human and technical resources to fulfil their tasks effectively under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, competition law, health and safety risks and knowledge of existing standards and other legal requirements.
Amendment 2577 #
Proposal for a regulation
Article 59 – paragraph 5
Article 59 – paragraph 5
5. Member States shall report to the Commission on an annual basis on the status of the financial and human resources of the national competent authorities, with a qualified assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations and formally accept or reject the assessments. Where an assessment is rejected, a new assessment shall be requested.
Amendment 2586 #
Proposal for a regulation
Article 59 – paragraph 6
Article 59 – paragraph 6
6. The Board shall facilitate the exchange of experience between national competent authorities.
Amendment 2590 #
Proposal for a regulation
Article 59 – paragraph 7
Article 59 – paragraph 7
7. The Board may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever the Board intends to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.
Amendment 2598 #
Proposal for a regulation
Article 59 a (new)
Article 59 a (new)
Article 59 a
Independence
1. Each supervisory authority shall act with complete independence in performing its tasks and exercising its powers in accordance with this Regulation.
2. The member or members of each supervisory authority shall, in the performance of their tasks and exercise of their powers in accordance with this Regulation, remain free from external influence, whether direct or indirect, and shall neither seek nor take instructions from anybody.
3. The member or members of each supervisory authority shall refrain from any action incompatible with their duties and shall not, during their term of office, engage in any incompatible occupation, whether gainful or not.
4. Each Member State shall ensure that each supervisory authority chooses and has its own staff which shall be subject to the exclusive direction of the member or members of the supervisory authority concerned.
5. Each Member State shall ensure that each supervisory authority is subject to financial control which does not affect its independence and that it has separate, public annual budgets, which may be part of the overall state or national budget.
Amendment 2599 #
Proposal for a regulation
Article 59 b (new)
Article 59 b (new)
Amendment 2607 #
Proposal for a regulation
Title VII
Title VII
EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS
Amendment 2611 #
Proposal for a regulation
Article 60 – title
Article 60 – title
EU database for stand-alone high-risk AI systems
Amendment 2618 #
Proposal for a regulation
Article 60 – paragraph 1
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and maintain a public EU database containing information referred to in paragraph 2 concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51.
Amendment 2621 #
Proposal for a regulation
Article 60 – paragraph 2
Article 60 – paragraph 2
2. The data listed in Annex VIII shall be entered into the EU database by the providers, and, where relevant, deployers. The Commission shall provide them with technical and administrative support.
Amendment 2623 #
Proposal for a regulation
Article 60 – paragraph 3
Article 60 – paragraph 3
3. Information contained in the EU database shall be freely available and accessible to the public, comply with the accessibility requirements of Annex I to Directive (EU) 2019/882, and be user-friendly, navigable, and machine-readable, containing structured digital data based on a standardised protocol.
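To make the ‘machine-readable, structured digital data’ requirement concrete, a minimal sketch of what a database entry could look like follows; the field names and values are hypothetical illustrations, not a schema defined by this Regulation or by Annex VIII.

```python
import json

# Hypothetical EU-database entry under Article 60(3); all field names and
# values are illustrative assumptions, not the Annex VIII data items.
entry = {
    "system_name": "Example credit scorer",
    "provider": "Example Provider B.V.",
    "annex_iii_area": "5(b) - creditworthiness of natural persons",
    "registration_date": "2025-01-15",
    "status": "placed on the market",
}

# Structured digital data serialised in one standardised, machine-readable
# format (JSON is used here purely as an example of such a protocol).
print(json.dumps(entry, indent=2))
```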
Amendment 2634 #
Proposal for a regulation
Article 60 – paragraph 5
Article 60 – paragraph 5
5. The Commission shall be the controller of the EU database. It shall also ensure to providers and, where relevant, deployers, adequate technical and administrative support.
Amendment 2642 #
Proposal for a regulation
Article 61 – paragraph 2
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by deployers or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2. Post-market monitoring shall include continuous analysis of the AI environment, including other devices, software, and other AI systems that interact with the AI system.
Amendment 2656 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
Article 62 – paragraph 1 – introductory part
1. Providers and deployers of AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law or of fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred.
Amendment 2658 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider becomes aware of the serious incident or of the malfunctioning.
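As a minimal illustration of the 72-hour window, assuming the notification clock runs from the moment the provider becomes aware of the incident (the timestamp below is hypothetical):

```python
from datetime import datetime, timedelta

# Hypothetical timestamp at which the provider becomes aware of a serious
# incident; the reporting deadline under Article 62(1) falls 72 hours later.
aware_at = datetime(2025, 3, 3, 14, 30)
deadline = aware_at + timedelta(hours=72)
print(deadline)  # 2025-03-06 14:30:00
```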
Amendment 2666 #
Proposal for a regulation
Article 62 – paragraph 2
Article 62 – paragraph 2
2. Upon receiving a notification related to a breach of obligations under Union law or of fundamental rights, the market surveillance authority shall inform the national public authorities or bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. That guidance shall be issued 3 months after the entry into force of this Regulation, at the latest.
Amendment 2671 #
Proposal for a regulation
Article 62 – paragraph 3
Article 62 – paragraph 3
3. For high-risk AI systems referred to in point 5(b) of Annex III which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU and for high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents or malfunctioning shall be limited to those that constitute a breach of obligations under Union law or fundamental rights.
Amendment 2685 #
Proposal for a regulation
Article 64 – paragraph 1
Article 64 – paragraph 1
1. In the context of their activities, the market surveillance authorities shall be granted full access to the comprehensive training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
Amendment 2695 #
Proposal for a regulation
Article 64 – paragraph 2
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system.
Amendment 2701 #
Proposal for a regulation
Article 64 – paragraph 5
Article 64 – paragraph 5
5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law or fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within a reasonable time following the request.
Amendment 2741 #
Proposal for a regulation
Article 66 – paragraph 1
Article 66 – paragraph 1
1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by a Member State against a measure taken by another Member State, or where the Board considers the measure to be contrary to Union law, the Board shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Board shall decide whether the national measure is justified or not within 9 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.
Amendment 2742 #
Proposal for a regulation
Article 66 – paragraph 2
Article 66 – paragraph 2
2. If the national measure is considered justified, all Member States shall take the measures necessary to ensure that the non-compliant AI system is withdrawn from their market, and shall inform the Board accordingly. If the national measure is considered unjustified, the Member State concerned shall withdraw the measure.
Amendment 2756 #
Proposal for a regulation
Article 67 – paragraph 3
Article 67 – paragraph 3
3. The Member State shall immediately inform the Board and the other Member States. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.
Amendment 2760 #
Proposal for a regulation
Article 67 – paragraph 4
Article 67 – paragraph 4
4. The Board shall without delay enter into consultation with the Member States and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Board shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.
Amendment 2763 #
Proposal for a regulation
Article 67 – paragraph 5
Article 67 – paragraph 5
5. The Board shall address its decision to the Member States.
Amendment 2764 #
Proposal for a regulation
Article 67 – paragraph 5 a (new)
Article 67 – paragraph 5 a (new)
5 a. The Board shall adopt guidelines to help national competent authorities to identify and rectify, where necessary, similar problems arising in other AI systems.
Amendment 2771 #
Proposal for a regulation
Article 68 a (new)
Article 68 a (new)
Article 68 a
Right to lodge a complaint with a supervisory authority
1. Without prejudice to any other administrative or judicial remedy, AI subjects and any natural or legal person affected by an AI system shall have the right to lodge a complaint with a supervisory authority, in particular in the Member State of his or her habitual residence, place of work or place of the alleged infringement, if the subject considers that the use of a particular AI system by which he or she is affected infringes this Regulation. Such a complaint may be lodged through a representative action for the protection of the collective interests of consumers as provided under Directive (EU) 2020/1828.
2. Complainants shall have a right to be heard in the complaint handling procedure and in the context of any investigations or deliberations conducted by the competent authority as a result of their complaint.
3. Supervisory authorities shall inform complainants or their representatives about the progress and outcome of their complaints. In particular, supervisory authorities shall take all the necessary actions to follow up on the complaints they receive and, within three months of the reception of a complaint, give the complainants a preliminary response indicating the measures they intend to take and the next steps in the procedure, if any.
4. The supervisory authority shall take a decision on the complaint, including the possibility of a judicial remedy pursuant to Article 68b, without delay and no later than six months after the date on which the complaint was lodged.
Amendment 2780 #
Proposal for a regulation
Article 68 b (new)
Article 68 b (new)
Article 68 b
Right to an effective judicial remedy against an authority
1. Without prejudice to any other administrative or non-judicial remedy, individuals and their representatives shall have the right to an effective judicial remedy against any legally binding decision concerning them, whether by a market surveillance authority or a supervisory authority.
2. Without prejudice to any other administrative or non-judicial remedy, individuals shall have the right to an effective judicial remedy where the authority which is competent does not handle a complaint, does not inform the individual on the progress or preliminary outcome of the complaint lodged within three months pursuant to Article 68a(3), does not comply with its obligation to reach a final decision on the complaint within six months pursuant to Article 68a(4) or does not comply with its obligations under Article 65.
3. Proceedings against a market surveillance authority shall be brought before the courts of the Member State where the authority is established.
Amendment 2782 #
Proposal for a regulation
Article 68 c (new)
Article 68 c (new)
Article 68 c
Remedies
1. Without prejudice to any available administrative or non-judicial remedy and the right to lodge a complaint with a supervisory authority pursuant to Article 68a, any natural person shall have the right to an effective judicial remedy against a provider or deployer where they consider that their rights under this Regulation have been infringed or that they have been subject to an AI system otherwise in non-compliance with this Regulation.
2. Any person who has suffered material or non-material harm as a result of an infringement of this Regulation shall have the right to receive compensation from the provider or deployer for the damage suffered. Individuals and their representatives shall be able to seek judicial and non-judicial remedies against providers or deployers of AI systems, including repair, replacement, price reduction, contract termination, reimbursement of the price paid or compensation for material and immaterial damages, for breaches of the rights and obligations set out in this Regulation.
3. Providers and deployers of AI systems which may affect individuals, including AI subjects, or consumers must provide an effective complaint handling system which enables complaints to be lodged electronically and free of charge, and ensure that complaints submitted through this system are dealt with in an efficient and expedient manner.
4. Providers and deployers of AI systems shall ensure that their internal complaint-handling systems are easy to access, user-friendly and enable and facilitate the submission of sufficiently precise and adequately substantiated complaints.
5. Where an AI system infringes this Regulation, any natural or legal person affected by said AI system may ask the supervisory authority or judicial authorities to stop the use of this system.
6. Member States shall ensure that where infringements by an AI system are imminent or likely, any affected natural or legal person may seek a prohibitory injunction under national law.
Amendment 2784 #
Proposal for a regulation
Article 68 d (new)
Article 68 d (new)
Amendment 2801 #
Proposal for a regulation
Article 70 – paragraph 1 – point a
Article 70 – paragraph 1 – point a
(a) intellectual property rights, and confidential business information or trade secrets of a natural or legal person, including source code, except where the cases referred to in Article 5 of Directive (EU) 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure apply.
Amendment 2808 #
Proposal for a regulation
Article 70 – paragraph 2 – introductory part
Article 70 – paragraph 2 – introductory part
2. Without prejudice to paragraph 1, information exchanged on a confidential basis between the national competent authorities and between national competent authorities and the Commission shall not be disclosed without the prior consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, immigration or asylum authorities, when such disclosure would jeopardise public or national security interests.
Amendment 2816 #
Proposal for a regulation
Article 71 – paragraph 1
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of small-scale providers and start-ups and their economic viability.
Amendment 2851 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2857 #
Proposal for a regulation
Article 71 – paragraph 5
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
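The ‘whichever is higher’ ceilings in paragraphs 4 and 5 reduce to a single comparison; a minimal sketch follows, with a hypothetical preceding-year turnover figure.

```python
# Illustrative fine ceilings under Article 71(4) and 71(5); amounts in EUR,
# computed for a hypothetical company offender.

def fine_ceiling(fixed_cap: float, turnover_share: float,
                 worldwide_turnover: float) -> float:
    """'Whichever is higher' ceiling for a company offender."""
    return max(fixed_cap, turnover_share * worldwide_turnover)

turnover = 1_000_000_000  # hypothetical preceding-year worldwide turnover

print(fine_ceiling(20_000_000, 0.06, turnover))  # Article 71(4): 60000000.0
print(fine_ceiling(10_000_000, 0.04, turnover))  # Article 71(5): 40000000.0
```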
Amendment 2860 #
Proposal for a regulation
Article 71 – paragraph 5 a (new)
Article 71 – paragraph 5 a (new)
5 a. Where trade secrets, intellectual property rights or data protection rights have been infringed in the development of an AI system, competent authorities may order the definitive deletion of that system and all associated training data and outputs.
Amendment 2924 #
Proposal for a regulation
Article 73 – paragraph 3 a (new)
Article 73 – paragraph 3 a (new)
3 a. Before adopting a delegated act, the Commission shall consult with the relevant institutions and stakeholders in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making.
Amendment 2927 #
Proposal for a regulation
Article 73 – paragraph 4
Article 73 – paragraph 4
4. Once the Commission decides to draft a delegated act, it shall notify the European Parliament of this fact. This notification does not place an obligation on the Commission to adopt the said act. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
Amendment 2937 #
Proposal for a regulation
Article 81 a (new)
Article 81 a (new)
Amendment 2948 #
Proposal for a regulation
Article 83 – paragraph 1 – introductory part
Article 83 – paragraph 1 – introductory part
1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [the date of application of this Regulation referred to in Article 85(2)], or as soon as there is a significant change in the design or intended purpose of the AI system or AI systems concerned, in which case it shall apply from [the date of application of this Regulation].
Amendment 2954 #
Proposal for a regulation
Article 83 – paragraph 1 – subparagraph 1
Article 83 – paragraph 1 – subparagraph 1
The requirements laid down in this Regulation shall be taken into account, where applicable, in the evaluation of each large-scale IT system established by the legal acts listed in Annex IX to be undertaken as provided for in those respective acts.
Amendment 2959 #
Proposal for a regulation
Article 83 – paragraph 2
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service from [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.
Amendment 2963 #
Proposal for a regulation
Article 83 a (new)
Article 83 a (new)
Article 83 a AI systems deployed in the context of employment Member States may, by law or by collective agreements, decide to prohibit or limit the use of certain AI systems in the employment context or provide for more specific rules for AI systems in employment, in particular for the purposes of the recruitment, the performance of the contract of employment, including discharge of obligations laid down by law or by collective agreements, management, planning and organisation of work, equality and diversity in the workplace, health and safety at work, protection of employer's or customer's property and for the purposes of the exercise and enjoyment, on an individual or collective basis, of rights and benefits related to employment, and for the purpose of the termination of the employment relationship.
Amendment 2969 #
Proposal for a regulation
Article 84 – paragraph 1
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or addition of new area headings, once a year following the entry into force of this Regulation.
Amendment 2980 #
Proposal for a regulation
Article 84 – paragraph 3 a (new)
Article 84 – paragraph 3 a (new)
3 a. Within [two years after the date of application of this Regulation referred to in Article 85(2)] and every two years thereafter, the Commission shall evaluate the impact and effectiveness of the Regulation with regard to the resource and energy use, waste production and other environmental impact of AI systems, and evaluate the need for proposing legislation to regulate the resource and energy efficiency of AI systems and related ICT systems in order for the sector to contribute to the EU climate strategy and targets.
Amendment 2987 #
Proposal for a regulation
Article 84 – paragraph 6
Article 84 – paragraph 6
6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources, including from academia and civil society.
Amendment 2992 #
Proposal for a regulation
Article 84 – paragraph 7
Article 84 – paragraph 7
7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account the effect of AI systems on fundamental rights, equality, and accessibility for persons with disabilities, developments in technology and in the light of the state of progress in the information society.
Amendment 2995 #
Proposal for a regulation
Article 84 – paragraph 7 a (new)
Article 84 – paragraph 7 a (new)
7 a. By three years from the date of application of this Regulation at the latest, the Commission shall carry out an assessment of the enforcement of this Regulation and shall report it to the European Parliament, the Council and the European Economic and Social Committee, taking into account the first years of application of the Regulation. On the basis of the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation with regard to the structure of enforcement and the need for an EU agency to resolve any identified shortcomings.
Amendment 3003 #
Proposal for a regulation
Article 85 – paragraph 2
Article 85 – paragraph 2
2. This Regulation shall apply from [6 months following the entering into force of the Regulation].
Amendment 3035 #
Proposal for a regulation
Annex II – Part A – point 12 a (new)
Annex II – Part A – point 12 a (new)
12 a. Directive 2014/35/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of electrical equipment designed for use within certain voltage limits (OJ L 96, 29.3.2014, p. 357).
Amendment 3040 #
Proposal for a regulation
Annex II – Part B – point 7 a (new)
Annex II – Part B – point 7 a (new)
7 a. Directive 2009/125/EC of the European Parliament and of the Council of 21 October 2009 establishing a framework for the setting of ecodesign requirements for energy-related products.
Amendment 3050 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric identification, biometrics-based data and categorisation of natural persons:
Amendment 3057 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
Annex III – paragraph 1 – point 1 – point a
Amendment 3069 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
Annex III – paragraph 1 – point 1 – point a a (new)
(a a) AI systems that may be or are intended to be used for the ‘real-time’ and ‘post’ non-remote biometric identification of natural persons in publicly accessible spaces, as well as in workplaces, in educational settings and in border surveillance;
Amendment 3074 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a b (new)
Annex III – paragraph 1 – point 1 – point a b (new)
(a b) AI systems that may be or are intended to be used for the ‘real-time’ and ‘post’ non-remote biometric identification of natural persons in publicly accessible spaces, as well as in workplaces, in educational settings and in border surveillance;
Amendment 3079 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a c (new)
Annex III – paragraph 1 – point 1 – point a c (new)
(a c) AI systems that are or may be used for ‘real-time’ and ‘post’ biometric verification in publicly accessible spaces, as well as in workplaces and in educational settings;
Amendment 3083 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a d (new)
Annex III – paragraph 1 – point 1 – point a d (new)
(a d) AI systems that are or may be used for the ‘real-time’ and ‘post’ detection of a person’s presence, in workplaces, in educational settings, and in border surveillance, including in the virtual or online version of these spaces, on the basis of their physical, physiological or behavioural data, including biometric data;
Amendment 3085 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a e (new)
Annex III – paragraph 1 – point 1 – point a e (new)
(a e) AI systems intended to be used by or on behalf of competent authorities in ‘real-time’ and ‘post’ migration, asylum and border control management for the forecasting or prediction of trends related to migration, movement and border crossings.
Amendment 3091 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
Annex III – paragraph 1 – point 2 – point a
(a) AI systems that may be or are intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity and entities falling under [Directive XXXX/XXX/EU (‘NIS 2 Directive’)].
Amendment 3098 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point a
Annex III – paragraph 1 – point 3 – point a
(a) AI systems that may be or are intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
Amendment 3100 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
Annex III – paragraph 1 – point 3 – point b
(b) AI systems that may be or are intended to be used for the purpose of assessing students in educational and vocational training institutions or for assessing participants in tests commonly required for admission to educational institutions.
Amendment 3103 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b a (new)
Annex III – paragraph 1 – point 3 – point b a (new)
(b a) AI systems that may be or are intended to be used for the purpose of assessing the appropriate level of education for an individual, with potential effects on the methods or level of education that individual will receive or will be able to access.
Amendment 3109 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
Annex III – paragraph 1 – point 4 – point a
(a) AI systems that may be or are intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 3114 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
(b) AI that may be or are intended to be used to assist decision-making affecting the initiation, establishment, implementation and termination of an employment relationship, including AI systems intended to support collective legal and regulatory matters, particularly work-related relationships, for task allocation and for monitoring, measuring and evaluating performance and behaviour of persons in such relationships.
Amendment 3123 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point a
Annex III – paragraph 1 – point 5 – point a
(a) AI systems that may be or are intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
Amendment 3133 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
(b) AI systems that may be or are intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
Amendment 3134 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
(b) AI systems that may be or are intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
Amendment 3142 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c
Annex III – paragraph 1 – point 5 – point c
(c) AI systems that may be or are intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
Amendment 3143 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c a (new)
Annex III – paragraph 1 – point 5 – point c a (new)
(c a) AI systems that may be used or are intended to be used for making individual risk assessments of natural persons in the context of access to private and public services, including determining the amounts of insurance premiums.
Amendment 3146 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c b (new)
Annex III – paragraph 1 – point 5 – point c b (new)
(c b) AI systems that may be used or are intended to be used in the context of payment and debt collection services.
Amendment 3147 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 a (new)
Annex III – paragraph 1 – point 5 a (new)
5 a. Use by vulnerable groups or in situations that imply vulnerability
(a) AI systems intended to be used by children in a way that may seriously affect a child’s personal development, such as by educating the child in a broad range of areas not limited to areas which parents or guardians can reasonably foresee at the time of the purchase;
(b) AI systems, such as virtual assistants, intended to be used by natural persons for taking decisions with regard to their private lives that have legal effects or similarly significantly affect the natural persons;
(c) AI systems intended to be used for personalised pricing within the meaning of Article 6(1)(ea) of Directive 2011/83/EU.
Amendment 3150 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Annex III – paragraph 1 – point 6 – point a
Amendment 3162 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Annex III – paragraph 1 – point 6 – point b
Amendment 3167 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point c
Annex III – paragraph 1 – point 6 – point c
(c) AI systems that may be or are intended to be used by law enforcement authorities to detect deep fakes as referred to in Article 52(3);
Amendment 3171 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point d
Annex III – paragraph 1 – point 6 – point d
(d) AI systems that may be or are intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
Amendment 3176 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Annex III – paragraph 1 – point 6 – point e
Amendment 3183 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
Annex III – paragraph 1 – point 6 – point f
(f) AI systems that may be or are intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
Amendment 3187 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
Annex III – paragraph 1 – point 6 – point g
Amendment 3189 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Annex III – paragraph 1 – point 7 – point a
Amendment 3201 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
Annex III – paragraph 1 – point 7 – point b
(b) AI systems that may be or are intended to be used by competent public authorities, or third parties on their behalf, to assess a risk, including, but not limited to, a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 3206 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point c
Annex III – paragraph 1 – point 7 – point c
(c) AI systems that may be or are intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
Amendment 3215 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
Annex III – paragraph 1 – point 7 – point d
(d) AI systems that may be or are intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 3217 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d a (new)
Annex III – paragraph 1 – point 7 – point d a (new)
(d a) AI systems that may be or are intended to be used by competent public authorities for border management and immigration authorities to monitor, surveil or process data for the purpose of detecting, verifying or identifying natural persons.
Amendment 3222 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d b (new)
Annex III – paragraph 1 – point 7 – point d b (new)
(d b) AI systems that may be or are intended to be used for migration analytics regarding natural persons or groups, allowing immigration authorities or related entities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
Amendment 3231 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a
Annex III – paragraph 1 – point 8 – point a
(a) AI systems which may be or are intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or used in a similar way in alternative dispute resolution.
Amendment 3236 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a a (new)
Annex III – paragraph 1 – point 8 – point a a (new)
(a a) AI systems that may be or are intended to assist in democratic processes, such as the casting or counting of votes in elections.
Amendment 3240 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
Annex III – paragraph 1 – point 8 a (new)
8 a. Media
(a) Recommender systems, meaning AI systems used by an online platform to suggest in its online interface specific information to recipients of the service, including as a result of a search initiated by the recipient or otherwise determining the relative order or prominence of information displayed.
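The ordering behaviour this definition turns on can be reduced to a scoring function over candidate items. The Python sketch below is a minimal, hypothetical illustration; the feature names and weights are invented and do not come from the amendment.

```python
# Minimal sketch of the mechanism the definition describes: scoring candidate
# items and ordering them, so the score determines the relative order and
# prominence of the information shown. Feature names and weights are invented.
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    topical_match: float    # match to the query or profile, in [0, 1]
    engagement_rate: float  # historical engagement signal, in [0, 1]
    recency: float          # freshness signal, in [0, 1]


def score(item: Item) -> float:
    # Real platforms use learned models; a weighted sum shows the effect.
    return 0.5 * item.topical_match + 0.3 * item.engagement_rate + 0.2 * item.recency


def rank(items: list[Item]) -> list[Item]:
    # Highest-scored items are displayed first, i.e. most prominently.
    return sorted(items, key=score, reverse=True)


feed = [
    Item("a", topical_match=0.9, engagement_rate=0.2, recency=0.4),
    Item("b", topical_match=0.4, engagement_rate=0.9, recency=0.9),
]
print([item.item_id for item in rank(feed)])  # -> ['b', 'a']
```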
Amendment 3242 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 b (new)
Annex III – paragraph 1 – point 8 b (new)
8 b. Health and Healthcare
(a) AI systems intended to be used inside or outside of the national healthcare system the outputs of which can influence individuals’ health, for example through impacting health diagnostics, treatments or medical prescriptions.
(b) AI systems intended to be used to facilitate administrative, planning, and health insurance processes within the healthcare system which could influence the distribution of healthcare resources, health insurance or access to healthcare.
(c) AI systems intended to be used by pharmaceutical companies and medical technology companies to facilitate research and development, as well as for pharmacovigilance, market optimisation and pharmaceutical marketing.
Amendment 3243 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose, the person/s developing the system, the date and the version of the system, reflecting its relation to previous and, where applicable, more recent, versions in the succession of revisions;
Amendment 3248 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a a (new)
Annex IV – paragraph 1 – point 1 – point a a (new)
(a a) the categories of natural persons and groups likely or foreseen to be affected;
Amendment 3249 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a b (new)
Annex IV – paragraph 1 – point 1 – point a b (new)
(a b) the categories and nature of data likely or foreseen to be processed;
Amendment 3250 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point b
Annex IV – paragraph 1 – point 1 – point b
(b) how the AI system interacts or can be used to interact with hardware or software, including other AI systems that are not part of the AI system itself, where applicable;
Amendment 3253 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point c
Annex IV – paragraph 1 – point 1 – point c
(c) the versions of relevant software or firmware and any requirement related to development, maintenance and version update;
Amendment 3257 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point g
Annex IV – paragraph 1 – point 1 – point g
(g) instructions of use for the deployer and, where applicable, installation instructions;
Amendment 3258 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point g a (new)
Annex IV – paragraph 1 – point 1 – point g a (new)
(g a) instructions on the intervention in case of emergency, interrupting the system through a “stop” button or a similar procedure that allows the system to come to a halt in a safe state;
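One common way such a procedure is implemented is a stop flag checked between units of work, so the system finishes or abandons the current step and then settles into a defined safe state. The Python sketch below is a minimal illustration under that assumption, not a prescribed design; the step functions are hypothetical placeholders.

```python
# Minimal sketch of an emergency-stop procedure: a flag checked between
# units of work so the system halts in a defined safe state instead of
# stopping mid-operation. The step functions are hypothetical placeholders.
import threading

stop_requested = threading.Event()


def emergency_stop() -> None:
    # Wired to the "stop" button or an equivalent supervisory control.
    stop_requested.set()


def enter_safe_state() -> None:
    # Whatever "safe" means for the system: withhold outputs, release
    # actuators, persist state for later inspection.
    print("system halted in safe state")


def run(steps) -> None:
    for step in steps:
        if stop_requested.is_set():
            break  # never begin new work once a stop has been requested
        step()
    enter_safe_state()


run([lambda: print("step 1"), lambda: print("step 2")])
```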
Amendment 3263 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
Annex IV – paragraph 1 – point 2 – point b
(b) the design specifications of the system, namely the general logic of the AI system, of the algorithms and of data structures; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
Amendment 3266 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point d
Annex IV – paragraph 1 – point 2 – point d
(d) where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained, selected and prepared; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outlier detection), and methods applied to prevent bias;
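What such a datasheet can look like in practice is illustrated below as a simple structured record; every concrete value is invented, and the field names merely mirror the items this point enumerates.

```python
# Hypothetical example of the datasheet fields the point enumerates:
# provenance, scope and characteristics, how data was obtained, selected
# and prepared, labelling, cleaning, and bias prevention. All concrete
# values below are invented for illustration.
datasheet = {
    "dataset": "loan-applications-v3",
    "provenance": "exports from two partner institutions, 2018-2021",
    "scope_and_characteristics": "412,000 rows, 37 features, EU residents",
    "obtained_and_selected": "random sample stratified by year",
    "preparation": "amounts normalised to EUR; categorical encoding",
    "labelling": "repayment outcome after 24 months (supervised learning)",
    "cleaning": "outlier detection via interquartile-range filter",
    "bias_prevention": "reweighting across protected groups; "
                       "disparity metrics audited before each release",
}

for field, value in datasheet.items():
    print(f"{field}: {value}")
```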
Amendment 3267 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point e
Annex IV – paragraph 1 – point 2 – point e
(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers, in accordance with Article 13(3)(d);
Amendment 3277 #
Proposal for a regulation
Annex IV – paragraph 1 – point 4 a (new)
Annex IV – paragraph 1 – point 4 a (new)
4 a. A detailed description of the system’s environmental impact in accordance with Article 10a.
Amendment 3291 #
Proposal for a regulation
Annex VIII – paragraph 1
Annex VIII – paragraph 1
The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 51.
Amendment 3296 #
Proposal for a regulation
Annex VIII – point 1
Annex VIII – point 1
1. Name, address and contact details of the provider or deployer;
Amendment 3297 #
Proposal for a regulation
Annex VIII – point 2
Annex VIII – point 2
2. Where submission of information is carried out by another person on behalf of the provider or deployer, the name, address and contact details of that person;
Amendment 3299 #
Proposal for a regulation
Annex VIII – point 5
Annex VIII – point 5
5. Descriptions of:
(a) the intended purpose of the AI system;
(b) the components and functions supported through AI;
(c) the main parameters the AI system takes into account;
(d) arrangements for human oversight and the natural persons responsible for decisions made or influenced by the AI system;
Amendment 3301 #
Proposal for a regulation
Annex VIII – point 5 a (new)
Annex VIII – point 5 a (new)
5 a. Where applicable, the categories of natural persons and groups likely or foreseen to be affected;
Amendment 3302 #
Proposal for a regulation
Annex VIII – point 5 b (new)
Annex VIII – point 5 b (new)
5 b. Where applicable, the categories and nature of data likely or foreseen to be processed by the AI system;
Amendment 3303 #
Proposal for a regulation
Annex VIII – point 5 c (new)
Annex VIII – point 5 c (new)
5 c. For each deployment, the deployer’s assessment of the system’s impact in the context of use throughout the entire lifecycle, as conducted under Article 9a;
Amendment 3306 #
Proposal for a regulation
Annex VIII – point 11
Annex VIII – point 11
11. Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law enforcement and migration, asylum and border control management referred to in Annex III, points 1, 6 and 7.