Activities of Axel VOSS related to 2021/0106(COD)
Legal basis opinions (0)
Amendments (491)
Amendment 310 #
Proposal for a regulation
Citation 5 a (new)
Having regard to the opinion of the European Central Bank,
Amendment 312 #
Proposal for a regulation
Citation 5 b (new)
Having regard to the joint opinion of the European Data Protection Board and the European Data Protection Supervisor,
Amendment 331 #
Proposal for a regulation
Recital 3 a (new)
(3 a) In order for Member States to reach the carbon neutrality targets, European companies should seek to utilise all available technological advancements that can assist in realising this goal. AI is a well-developed and ready-to-use technology that can be used to process the ever-growing amount of data created during industrial, environmental, health and other processes. To facilitate investments in AI-based analysis and optimisation solutions, this Regulation should provide a predictable and proportionate environment for low-risk industrial solutions.
Amendment 346 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. Furthermore, clear rules supporting the application and design of AI systems should be laid down, thus enabling a European ecosystem of public and private actors creating AI systems in line with European values. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament34 . _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 353 #
Proposal for a regulation
Recital 5 a (new)
(5 a) Furthermore, in order to foster the development of artificial intelligence in line with Union values, the Union needs to address the main gaps and barriers blocking the potential of the digital transformation, including the shortage of digitally skilled workers, cybersecurity concerns, lack of investment and access to investment, and existing and potential gaps between large companies, SMEs and start-ups. Special attention should be paid to ensuring that the benefits of AI and innovation in new technologies are felt across all regions of the Union and that sufficient investment and resources are provided especially to those regions that may be lagging behind in some digital indicators.
Amendment 356 #
Proposal for a regulation
Recital 5 b (new)
(5 b) To ensure the development of secure, trustworthy and ethical AI, the European Commission established the High-Level Expert Group on Artificial Intelligence. In formulating both the Ethics Guidelines for Trustworthy AI and a corresponding Assessment List for Trustworthy Artificial Intelligence, this independent group solidified the foundational ambition for ‘Trustworthy AI’. As noted by the group, trustworthiness is a prerequisite for people, societies and companies to develop, deploy and use AI systems. Without AI systems – and the human beings behind them – being demonstrably worthy of trust, serious and unwanted consequences may ensue and the uptake of AI might be hindered, preventing the realisation of the potentially vast social and economic benefits that trustworthy AI systems can bring. This approach should be seen as the basis of a European approach to both ensure and scale AI that is innovative and ethical.
Amendment 357 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. This definition should be in line with definitions that have found international acceptance. Moreover, it should be based on the key functional characteristics of artificial intelligence distinguishing it from more classic software systems and modelling approaches such as logistic regression and other techniques that are similarly transparent, explainable and interpretable. For the purposes of this Regulation, the definition should be based on the key functional characteristics of the AI system, in particular its ability, for a given set of human-defined objectives, to make predictions, recommendations, or decisions that influence real or virtual environments, whereby it uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g. with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy and can be used on a stand-alone basis or as a component of a software system, integrated into a physical product (embedded), used to serve the functionality of a physical product without being integrated therein (non-embedded), or used as a subsystem of a software/physical/hybrid system of systems. If an AI system is used as a subsystem of a system of systems, then all parts, including their interfaces to other parts of the system of systems, that would be obsolete if the AI functionality were turned off or removed are essential parts of the AI system and thus fall directly under this Regulation. Any parts of the system of systems to which this does not hold true are not covered by this Regulation, and the obligations listed in this Regulation do not apply to them. This is to ensure that the integration of AI systems into existing systems is not blocked by this Regulation.
Amendment 365 #
Proposal for a regulation
Recital 6 a (new)
(6 a) Defining AI systems is an ongoing process that should take into account the context in which AI operates, keep pace with societal developments in this field and not lose sight of the link between the ecosystem of excellence and the ecosystem of trust. The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. In the drafting process of these delegated acts, the Commission shall ensure the input of all relevant stakeholders, including the technical experts and developers of AI systems. This consultation can take place through existing bodies such as the High-Level Expert Group on AI or a newly established similar advisory body that is closely included in the work of the European Artificial Intelligence Board. Should the definition of ‘AI system’ from the OECD be adjusted in the coming years, the European Commission should engage in dialogue with that organisation to ensure alignment between the two definitions. Should the AI Act still be undergoing the legislative procedure, the co-legislators should consider these latest developments during the legislative process, so as to ensure alignment, legal clarity and broad international acceptance of the AI Act definition of ‘AI systems’.
Amendment 366 #
Proposal for a regulation
Recital 6 b (new)
(6 b) Taking into account the work of international standardisation organisations, it is important to highlight the differences as well as the connection between automation, heteronomy and autonomy. Experts speak of an automated system with different levels of automation instead of levels of autonomy. Autonomy is understood as the highest level of automation. An autonomous AI system would be capable of changing its scope or its goals independently. However, today's AI technologies do not allow full autonomy yet and are not self-governing. Instead, they operate based on algorithms and otherwise obey the commands of operators. A fully autonomous AI system would be a genuine General or Super AI. Despite these restrictions, this Regulation will use the term “autonomy” as it is a key element of internationally accepted definitions.
Amendment 379 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database repository, and without prior knowledge whether the targeted person will be present and can be identified, irrespective of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
Amendment 394 #
Proposal for a regulation
Recital 11
(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is intended for use in the Union. Nonetheless, to take into account existing arrangements and special needs for future cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
Amendment 401 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should also ensure consistency in definitions and terminology, as biometric techniques can, in the light of their primary function, be divided into techniques of biometric identification, authentication and verification. Biometric authentication means the process of matching an identifier to a specific stored identifier in order to grant access to a device or service, whilst biometric verification refers to the process of confirming that an individual is who they claim to be. As they do not involve any “one-to-many” comparison of biometric data that is the distinctive trait of identification, both biometric verification and authentication should be excluded from the scope of this Regulation.
Amendment 417 #
Proposal for a regulation
Recital 15 a (new)
(15 a) As signatories to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), the European Union and all Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, to ensure that persons with disabilities have access, on an equal basis with others, to information and communications technologies and systems, and to ensure respect for privacy of persons with disabilities. Given the growing importance and use of AI systems, the strict application of universal design principles to all new technologies and services should ensure full, equal, and unrestricted access for everyone potentially affected by or using AI technologies, including persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is essential to ensure that providers of AI systems design them, and users use them, in accordance with the accessibility requirements set out in Directive (EU) 2019/882.
Amendment 424 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems materially distorting human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components that persons cannot perceive, or those systems otherwise exploit vulnerabilities of a specific group of persons due to their age, disability within the meaning of Directive (EU) 2019/882, or social or economic situation. Such systems can be placed on the market, put into service or used with the objective to or the effect of materially distorting the behaviour of a person or groups of persons, including harms that may be accumulated over time. The intention to distort the behaviour may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user, meaning factors that may not be reasonably foreseen and mitigated by the provider or the user of the AI system. In any case, it is not necessary for the provider or the user to have the intention to cause physical or psychological harm, as long as such harm results from the manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the provisions contained in Directive 2005/29/EC [Unfair Commercial Practices Directive, as amended by Directive (EU) 2019/2161], notably that unfair commercial practices leading to economic or financial harms to consumers are prohibited under all circumstances, irrespective of whether they are put in place through AI systems or otherwise.
Amendment 439 #
Proposal for a regulation
Recital 17 a (new)
(17 a) AI systems that are intended for use to protect consumers and prevent fraudulent activities should not necessarily be considered high-risk under this Regulation. As set out in Article 94 of Directive (EU) 2015/2366, payment systems and payment service providers should be allowed to process data to safeguard the prevention, investigation and detection of payment fraud. Therefore, AI systems used to process data to safeguard the prevention, investigation and detection of fraud may not be considered as high-risk AI systems for the purpose of this Regulation.
Amendment 521 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. To ensure alignment with sectoral legislation, requirements for certain high-risk AI systems and uses will take account of sectoral legislation which already lays out sufficient requirements for high-risk AI systems included within this Act, such as Regulation (EU) 2017/745 on Medical Devices, Regulation (EU) 2017/746 on In Vitro Diagnostic Devices and Directive 2006/42/EC on Machinery. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union, and such limitation minimises any potential restriction to international trade, if any.
Amendment 522 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. To ensure alignment with sectoral legislation, requirements for certain high-risk AI systems and uses will take account of sectoral legislation which already lays out sufficient requirements for high-risk AI systems included within this Act, such as Regulation (EU) 2017/745 on Medical Devices, Regulation (EU) 2017/746 on In Vitro Diagnostic Devices and Directive 2006/42/EC on Machinery. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union, and such limitation minimises any potential restriction to international trade, if any.
Amendment 532 #
Proposal for a regulation
Recital 29
(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , Regulation (EU) 2017/745 of the European Parliament and of the Council, and Regulation (EU) 2017/746 of the European Parliament and of the Council, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment, market surveillance and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. _________________ 39 Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 40 Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1). 41 Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52). 42 Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146). 43 Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44). 44 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 45 Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1). 
46 Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
Amendment 533 #
Proposal for a regulation
Recital 29
(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council41 , Directive 2014/90/EU of the European Parliament and of the Council42 , Directive (EU) 2016/797 of the European Parliament and of the Council43 , Regulation (EU) 2018/858 of the European Parliament and of the Council44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council46 , Regulation (EU) 2017/745 of the European Parliament and of the Council, and Regulation (EU) 2017/746 of the European Parliament and of the Council, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts. _________________ 39 Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72). 40 Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1). 41 Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52). 42 Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146). 43 Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44). 44 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 45 Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1).
46 Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).
Amendment 535 #
Proposal for a regulation
Recital 30
(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation (as specified in Annex II), it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
Amendment 537 #
Proposal for a regulation
Recital 31
(31) The classification of an AI system as high-risk pursuant to this Regulation shall not mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council47 and Regulation (EU) 2017/746 of the European Parliament and of the Council48 , where a third-party conformity assessment is provided for medium-risk and high-risk products. _________________ 47 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1). 48 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Amendment 547 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk, except for the purpose of remote client on-boarding or verification of a user through a device. In view of the risks that they may pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and, when appropriate and justified by a proven added value to the protection of health, safety and fundamental rights, human oversight.
Amendment 553 #
Proposal for a regulation
Recital 34
(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety or security components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may infringe the security and integrity of such critical infrastructure and thus put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.
Amendment 565 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self- employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
Amendment 611 #
Proposal for a regulation
Recital 41
(41) The fact that an AI system is compliant with the requirements for high-risk AI under this Regulation should not be interpreted as indicating that the use of the system is necessarily unlawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. As far as is applicable and proportionate, this Regulation may, where duly justified, be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.
Amendment 612 #
Proposal for a regulation
Recital 41 a (new)
(41 a) AI systems do not operate in a lawless world. A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today. Legal sources include, but are not limited to, EU primary law (the Treaties of the European Union and its Charter of Fundamental Rights), EU secondary law (such as the General Data Protection Regulation, the Product Liability Directive, the Regulation on the Free Flow of Non-Personal Data, anti-discrimination Directives, consumer law and Safety and Health at Work Directives), the UN Human Rights treaties and the Council of Europe conventions (such as the European Convention on Human Rights), and numerous EU Member State laws. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications (such as the Medical Device Regulation in the healthcare sector).
Amendment 614 #
Proposal for a regulation
Recital 42
(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system, level of reliance of the user or business user on the output of the AI system for the final decision or outcome and according to the risk management system to be established by the provider.
Amendment 623 #
Proposal for a regulation
Recital 44
(44) High data quality and simple and accessible data play a vital role in providing structure and ground truth for AI and are essential for purpose-ready data analytics and the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and does not become the source of discrimination prohibited by Union law. To achieve simple access to and usability of high quality data for AI, the Commission should examine ways to facilitate the lawful processing of personal data to train legitimate AI systems by appropriate amendments to applicable laws. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, machine learning validation and testing data sets should be sufficiently relevant and representative, and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, machine learning validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. If it is necessary for the aforementioned purpose to use existing sets of data that include personal data originally collected and stored for a different purpose, their use for the aforementioned purpose should be deemed compatible with the original purpose so long as the personal data is not transferred to any third party. In order to protect the right of others from the discrimination that might result from bias in AI systems, providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 633 #
Proposal for a regulation
Recital 45
(45) For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non- discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.
Amendment 636 #
Proposal for a regulation
Recital 46
(46) Having information on how high-risk AI systems have been developed and how they perform throughout their lifetime is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements, while preserving trade secrets. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
Amendment 639 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons may, when appropriate, oversee their functioning. For this purpose, when it brings proven added value to the protection of health, safety and fundamental rights, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and are responsive to the human operator during the expected lifetime of the device where necessary to reduce risks as far as possible and achieve performance in consideration of the generally acknowledged state-of-the-art and technological and scientific progress, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. By way of derogation regarding high-risk AI systems within the scope of Regulation (EU) 2017/745 and Regulation (EU) 2017/746 of the European Parliament and of the Council, the established benefit-risk ratio requirements under the sectoral medical device legislation should apply.
Amendment 640 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons may, when appropriate, oversee their functioning. For this purpose, when it brings proven added value to the protection of health, safety and fundamental rights, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and are responsive to the human operator during the expected lifetime of the device where necessary to reduce risks as far as possible and achieve performance in consideration of the generally acknowledged state-of-the-art technological and scientific progress, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role. By way of derogation regarding high-risk AI systems within the scope of Regulation (EU) 2017/745 and Regulation (EU) 2017/746 of the European Parliament and of the Council, the established benefit-risk ratio requirements under the sectoral medical device legislation should apply.
Amendment 645 #
Proposal for a regulation
Recital 49
(49) High-risk AI systems should perform consistently throughout their lifetime and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users. While standardisation organisations exist to establish standards, coordination on benchmarking is needed to establish how these standards should be met and measured. The European Artificial Intelligence Board should bring together national metrology and benchmarking authorities and provide guidance to address the technical aspects of how to measure the appropriate levels of accuracy and robustness. Their work should not be seen as a replacement of the standardisation organisations, but as a complementary function to provide specific technical expertise on measurement.
Amendment 649 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, state-of-the-art measures should therefore be taken by the providers of high-risk AI systems, but also by the national competent authorities, market surveillance authorities and notified bodies that are accessing the data of providers of high-risk AI systems, next to the appropriate underlying ICT infrastructure. It should be further taken into account that AI in the form of machine learning is a critical defence against malware, representing a legitimate interest of the AI user.
Amendment 671 #
Proposal for a regulation
Recital 60
(60) In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, as appropriate, with providers and users to enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation. This provision shall qualify as a legal obligation in the context of the processing of personal data where necessary for the cooperation between the relevant providers.
Amendment 678 #
Proposal for a regulation
Recital 62
(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service. AI systems, including general purpose AI systems, that may not necessarily be high-risk, are frequently used as components of other AI or non-AI software systems. In order to increase trust in the value chain and to give certainty to businesses about the performance of their systems, providers may voluntarily apply for a third-party conformity assessment.
Amendment 687 #
Proposal for a regulation
Recital 65
(65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence, absence of conflicts of interests and minimum cybersecurity requirements.
Amendment 691 #
Proposal for a regulation
Recital 66
(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may create a new or increased risk and significantly affect the compliance of the system with this Regulation or when the intended purpose of the system changes. If such a case materialises, the provider should follow a clear procedure with fixed deadlines, transparency requirements and reporting duties involving, where appropriate and applicable, external oversight by notified bodies or, where it is covered already under the relevant sectoral legislation, post-market monitoring if that is needed. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been considered by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification. In addition, it should not be considered a substantial modification if the user trains an AI system. In this situation, the user should clearly delimit the effects that the learning can have for the AI system. The notion of substantial modification should be assessed in light of the essential requirements set in this Regulation and be left to the manufacturer to determine if a modification is deemed to be substantial.
Amendment 696 #
Proposal for a regulation
Recital 66 a (new)
(66 a) To prevent any deterioration in the expected safety of the algorithm subject to significant changes independent of the provider's control, a clearly developed plan to address such significant changes should be subject to oversight by the relevant competent authorities or notified bodies when it is already addressed in principle in the respective sectoral Union harmonisation legislation regarding post-market monitoring.
Amendment 710 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use or where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video game visuals or analogous work. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose in an appropriate, clear and visible manner that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
Amendment 716 #
Proposal for a regulation
Recital 70 a (new)
(70 a) In light of the nature and complexity of the value chain for AI systems, it is essential to clarify the role of humans who may contribute to the development of AI systems covered by this Regulation, without being providers, no longer being providers or when other natural or legal persons have also become providers. Therefore, it is particularly important to clarify the legal situation when it comes to general purpose AI systems. Those AI systems are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering or translation in a plurality of contexts. Every natural or legal person can become a new provider by adapting a general purpose AI system, already placed on the market or put into service, to a specific intended purpose. Due to their peculiar nature and in order to ensure a fair sharing of responsibilities along the AI value chain, such general purpose AI systems should however already be subject to proportionate and tailored requirements and obligations under this Regulation even before being placed on the Union market or put into service. The original provider of a general purpose AI system should furthermore cooperate, as appropriate, with the new provider to enable its compliance with the relevant obligations under this Regulation.
Amendment 729 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of SME providers and users of AI systems are taken into particular account. To this objective, AI solutions and services designed to combat fraud and protect consumers against fraudulent activities should not be considered high-risk, nor be prohibited. As a matter of substantial public interest, it is vital that this Regulation does not undermine the incentive of industry to create and roll out solutions designed to combat fraud across the Union. Furthermore, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of SME providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users. Member States should also be encouraged to do the same for small and medium enterprises, which may sometimes lack the requisite administrative and legal resources to ensure proper understanding of and compliance with the provisions under this act. In the event that Member States request it, the Commission may also provide assistance in this regard.
Amendment 735 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, Member States should utilise existing dedicated channels for communication with SMEs and start-ups. Such existing channels could include but are not limited to ENISA’s Computer Security Incident Response Teams, national data protection agencies, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level, which should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
Amendment 743 #
Proposal for a regulation
Recital 76 a (new)
(76 a) The Commission should re-establish the High-Level Expert Group or a similar body with a new and balanced membership comprising an equal number of experts from SMEs and start-ups, large enterprises, academia and research, and civil society. This new High-Level Expert Group should not only act as an advisory body to the Commission but also to the Board. At least every quarter, the new High-Level Expert Group must have the chance to share its practical and technical expertise in a special meeting with the Board.
Amendment 755 #
Proposal for a regulation
Recital 80
(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the competent authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable competent authorities as defined in Directive 2013/36/EU of the European Parliament and of the Council, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, excluding market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council56 , it is also appropriate to integrate certain aspects of the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. _________________ 56 Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).
Amendment 763 #
Proposal for a regulation
Recital 84
Recital 84
(84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation. The penalties and litigation costs under this Regulation should not be subject to contractual clauses or any other arrangements.
Amendment 799 #
Proposal for a regulation
Article 1 – paragraph 1 – point e
Article 1 – paragraph 1 – point e
(e) rules on market monitoring, market surveillance and governance;
Amendment 800 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
Article 1 – paragraph 1 – point e a (new)
(e a) measures in support of innovation with a particular focus on SMEs and start-ups, including but not limited to setting up regulatory sandboxes and targeted measures to reduce the compliance burden on SMEs and start-ups;
Amendment 806 #
Proposal for a regulation
Article 1 – paragraph 1 – point e b (new)
Article 1 – paragraph 1 – point e b (new)
(e b) the establishment of an independent ‘European Artificial Intelligence Board’ and its activities supporting the enforcement of this Regulation.
Amendment 818 #
Proposal for a regulation
Article 2 – paragraph 1 – point b
Article 2 – paragraph 1 – point b
(b) users of AI systems who are physically present or established within the Union;
Amendment 824 #
Proposal for a regulation
Article 2 – paragraph 1 – point c
Article 2 – paragraph 1 – point c
(c) providers and users of AI systems that are located in a third country, where the output, meaning predictions, recommendations or decisions, produced by the AI system and influencing the environment it interacts with, is intended for use in the Union and puts at risk the health, safety or fundamental rights of natural persons physically present in the Union, insofar as the provider has permitted, is aware or can reasonably expect such use;
Amendment 830 #
Proposal for a regulation
Article 2 – paragraph 1 – point c a (new)
Article 2 – paragraph 1 – point c a (new)
(c a) importers, distributors and authorised representatives of providers of AI systems.
Amendment 843 #
Proposal for a regulation
Article 2 – paragraph 2 – introductory part
Article 2 – paragraph 2 – introductory part
2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the Acts listed in Annex II - Section B, only Article 84 of this Regulation shall apply.
Amendment 846 #
Proposal for a regulation
Article 2 – paragraph 2 – point a
Article 2 – paragraph 2 – point a
Amendment 848 #
Proposal for a regulation
Article 2 – paragraph 2 – point b
Article 2 – paragraph 2 – point b
Amendment 850 #
Amendment 852 #
Proposal for a regulation
Article 2 – paragraph 2 – point d
Article 2 – paragraph 2 – point d
Amendment 854 #
Proposal for a regulation
Article 2 – paragraph 2 – point e
Article 2 – paragraph 2 – point e
Amendment 855 #
Proposal for a regulation
Article 2 – paragraph 2 – point f
Article 2 – paragraph 2 – point f
Amendment 858 #
Proposal for a regulation
Article 2 – paragraph 2 – point g
Article 2 – paragraph 2 – point g
Amendment 859 #
Proposal for a regulation
Article 2 – paragraph 2 – point h
Article 2 – paragraph 2 – point h
Amendment 869 #
Proposal for a regulation
Article 2 – paragraph 3
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems developed or used exclusively for military or national security purposes.
Amendment 875 #
Proposal for a regulation
Article 2 – paragraph 3 a (new)
Article 2 – paragraph 3 a (new)
3 a. Title III of this Regulation shall not apply to AI systems that are used in a business-to-business environment and do not directly impact natural persons.
Amendment 888 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not affect any research, testing and development activity regarding an AI system prior to this system being placed on the market or put into service.
Amendment 892 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research, testing and development. The Commission may adopt delegated acts that clarify this exemption.
Amendment 906 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations, or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g. with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy;
Amendment 924 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 a (new)
Article 3 – paragraph 1 – point 1 a (new)
(1 a) ‘machine learning’ means an AI system that gives computers the ability to find patterns in data without being explicitly programmed for a given task;
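For illustration only, the following minimal Python sketch shows the behaviour this definition describes: the program is never given explicit rules for the task, it derives them from data. It assumes scikit-learn is available; the data, model choice and values are purely illustrative and not part of the amendment text.

from sklearn.linear_model import LogisticRegression

# Illustrative examples: feature vectors and labels; no task-specific rules are coded.
X = [[0.1, 1.2], [0.4, 0.9], [2.1, 0.3], [1.8, 0.2]]
y = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X, y)  # the model finds a pattern (a decision boundary) in the data
print(model.predict([[1.9, 0.25]]))  # pattern-based output for an unseen input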
Amendment 925 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 b (new)
Article 3 – paragraph 1 – point 1 b (new)
(1 b) 'general purpose AI system' means an AI system that - irrespective of the modality in which it is placed on the market or put into service including as open source software - is able to perform generally applicable functions such as image or speech recognition, audio or video generation, pattern detection, question answering, translation or others; a general purpose AI system may be used in a plurality of contexts and may be integrated in a plurality of other AI systems;
Amendment 927 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 c (new)
Article 3 – paragraph 1 – point 1 c (new)
(1 c) ‘autonomous’ means an AI system that operates by interpreting certain input and results and by using a set of pre-determined objectives, without being limited to such instructions, despite the system’s behaviour being constrained by, and targeted at, fulfilling the goal it was given and other relevant design choices made by its provider;
Amendment 928 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 d (new)
Article 3 – paragraph 1 – point 1 d (new)
(1 d) ‘risk’ means the combination of the probability of occurrence of a harm and the severity of that harm;
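The definition leaves open how probability and severity are combined. A common convention, shown here in a minimal Python sketch as an assumption rather than anything prescribed by the text, is a multiplicative risk score:

def risk_score(probability: float, severity: float) -> float:
    """Combine probability of occurrence (0..1) with severity of harm (e.g. a 1..5 scale)."""
    return probability * severity

# An unlikely but severe harm still yields a non-trivial risk score.
print(risk_score(0.2, 4))  # 0.8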
Amendment 929 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1 e (new)
Article 3 – paragraph 1 – point 1 e (new)
(1 e) ‘harm’ means an adverse impact affecting the health, safety or fundamental rights of a natural person;
Amendment 931 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that places an AI system on the market or puts it into service under its own name or trademark, whether for payment or free of charge, or that adapts general purpose AI systems to an intended purpose;
Amendment 934 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2 a (new)
Article 3 – paragraph 1 – point 2 a (new)
(2 a) ‘new provider’ means a natural or legal person that becomes a provider for the purposes of this Regulation due to one of the circumstances referred to in Article 23a(1).
Amendment 935 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2 b (new)
Article 3 – paragraph 1 – point 2 b (new)
Amendment 936 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2 c (new)
Article 3 – paragraph 1 – point 2 c (new)
(2 c) ‘original provider’ means a provider of a general purpose AI system, who has made available the AI system to a natural or legal person that itself became a provider by giving an intended purpose to the general purpose AI system;
Amendment 938 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3
Article 3 – paragraph 1 – point 3
Amendment 945 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
Article 3 – paragraph 1 – point 4
(4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Amendment 952 #
Proposal for a regulation
Article 3 – paragraph 1 – point 5
Article 3 – paragraph 1 – point 5
(5) ‘authorised representative’ means any natural or legal person physically present or established in the Union who has received and accepted a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;
Amendment 954 #
Proposal for a regulation
Article 3 – paragraph 1 – point 5 a (new)
Article 3 – paragraph 1 – point 5 a (new)
(5 a) ‘product manufacturer’ means a manufacturer within the meaning of any of the Union harmonisation legislation listed in Annex II;
Amendment 955 #
Proposal for a regulation
Article 3 – paragraph 1 – point 6
Article 3 – paragraph 1 – point 6
(6) ‘importer’ means any natural or legal person physically present or established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union;
Amendment 956 #
Proposal for a regulation
Article 3 – paragraph 1 – point 7 a (new)
Article 3 – paragraph 1 – point 7 a (new)
(7 a) ‘economic operator’ means the provider, the authorised representative, the importer and the distributor;
Amendment 957 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the economic operator and the user;
Amendment 973 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose and with the specific context and conditions of use established by the provider, but which may result from reasonably foreseeable human behaviour or interaction with other systems;
Amendment 982 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means, in line with the relevant Union harmonisation legislation listed in Annex II, a component of a product or of a system which fulfils a direct and critical safety function for that product or system so that its malfunction endangers the health and safety of persons or property;
Amendment 987 #
Proposal for a regulation
Article 3 – paragraph 1 – point 15
Article 3 – paragraph 1 – point 15
(15) ‘instructions for use’ means the information provided by the provider to inform the user in particular of an AI system’s intended purpose and proper use, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used;
Amendment 991 #
Proposal for a regulation
Article 3 – paragraph 1 – point 16
Article 3 – paragraph 1 – point 16
(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider, the taking out of service or the disabling of the use of an AI system made available to users;
Amendment 993 #
Proposal for a regulation
Article 3 – paragraph 1 – point 17
Article 3 – paragraph 1 – point 17
(17) ‘withdrawal of an AI system’ means any measure aimed at preventing an AI system in the supply chain being made available on the market;
Amendment 998 #
Proposal for a regulation
Article 3 – paragraph 1 – point 20
Article 3 – paragraph 1 – point 20
(20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;
Amendment 1000 #
Proposal for a regulation
Article 3 – paragraph 1 – point 22
Article 3 – paragraph 1 – point 22
(22) ‘notified body’ means a conformity assessment body notified in accordance with Article 32 of this Regulation and with other relevant Union harmonisation legislation;
Amendment 1007 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service, not foreseen by the provider, which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed;
Amendment 1010 #
Proposal for a regulation
Article 3 – paragraph 1 – point 24
Article 3 – paragraph 1 – point 24
(24) ‘CE marking of conformity’ (CE marking) means a physical or electronic marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing, as well as with the GDPR;
Amendment 1012 #
Proposal for a regulation
Article 3 – paragraph 1 – point 28
Article 3 – paragraph 1 – point 28
(28) ‘common specifications’ means a document comprising a set of technical specifications, other than a standard, providing a means to comply with certain requirements and obligations established under this Regulation;
Amendment 1018 #
Proposal for a regulation
Article 3 – paragraph 1 – point 30
Article 3 – paragraph 1 – point 30
(30) ‘machine learning validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split;
Amendment 1021 #
Proposal for a regulation
Article 3 – paragraph 1 – point 31
Article 3 – paragraph 1 – point 31
(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service. The testing data must be a separate dataset;
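The two preceding definitions correspond to the conventional training/validation/testing split. A minimal Python sketch, assuming scikit-learn is available; the split ratios are illustrative, and the testing set is held out as a fully separate dataset, as point (31) requires:

from sklearn.model_selection import train_test_split

data = [[float(i)] for i in range(100)]  # stand-in feature vectors
labels = [i % 2 for i in range(100)]     # stand-in labels

# Hold out the testing data first, kept separate for the final independent evaluation.
X_rest, X_test, y_rest, y_test = train_test_split(data, labels, test_size=0.2, random_state=0)

# Validation data: here a fixed split of the training data, used to tune
# non-learnable parameters and to detect overfitting.
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20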
Amendment 1035 #
Proposal for a regulation
Article 3 – paragraph 1 – point 34
Article 3 – paragraph 1 – point 34
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data or other data obtained, read or interpreted from an individual;
Amendment 1059 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a physical distance through a “one to many” comparison, where the persons identified do not claim to have a particular identity but where the identity is otherwise established - without the conscious cooperation of these persons - by matching live templates with templates stored in a template database;
Amendment 1062 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36 a (new)
Article 3 – paragraph 1 – point 36 a (new)
(36 a) ‘at a distance’ means the process of identification, verification or authentication at a physical distance, with or without indirect interaction with the data subject;
Amendment 1068 #
Proposal for a regulation
Article 3 – paragraph 1 – point 39
Article 3 – paragraph 1 – point 39
(39) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions or circumstances for access have been predetermined, and regardless of the potential capacity restrictions;
Amendment 1075 #
Proposal for a regulation
Article 3 – paragraph 1 – point 41
Article 3 – paragraph 1 – point 41
(41) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
Amendment 1079 #
Proposal for a regulation
Article 3 – paragraph 1 – point 43
Article 3 – paragraph 1 – point 43
Amendment 1083 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident that directly or indirectly leads, might have led or might lead to any of the following:
Amendment 1088 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point a
Article 3 – paragraph 1 – point 44 – point a
(a) the death of a person or serious damage to a person’s health, to property or the environment,
Amendment 1099 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘regulatory sandbox’ means a framework which, by providing a structured context for experimentation, enables, where appropriate in a real-world or digital environment, the testing of innovative technologies, products, services or approaches for a limited time and in a limited part of a sector or area under regulatory supervision, ensuring that appropriate safeguards are in place;
Amendment 1110 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘deep fake’ means manipulated or synthetic audio, image or video content that would falsely appear to be authentic or truthful, and which features depictions of persons appearing to say or do things they did not say or do, without their consent, produced using AI techniques, including machine learning and deep learning;
Amendment 1118 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ‘incident’ means a faulty operation of an AI system;
Amendment 1121 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 d (new)
Article 3 – paragraph 1 – point 44 d (new)
(44 d) ‘personal data’ means data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
Amendment 1123 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 e (new)
Article 3 – paragraph 1 – point 44 e (new)
(44 e) ‘non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
Amendment 1124 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 f (new)
Article 3 – paragraph 1 – point 44 f (new)
(44 f) ‘critical infrastructure’ means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of the Directive of the European Parliament and of the Council on the resilience of critical entities (2020/0365(COD));
Amendment 1126 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 g (new)
Article 3 – paragraph 1 – point 44 g (new)
(44 g) ‘harmful subliminal technique’ means a measure whose existence and operation are entirely imperceptible to a natural person on whom it is used, and which has the purpose and direct effect of inducing actions leading to that person’s physical or psychological harm;
Amendment 1127 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 h (new)
Article 3 – paragraph 1 – point 44 h (new)
(44 h) 'unfair bias' means an inclination of prejudice towards or against a natural person that can result in discriminatory and/or unfair treatment of some natural persons with respect to others.
Amendment 1135 #
Proposal for a regulation
Article 4 – paragraph 1
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73, after ensuring adequate consultation with relevant stakeholders, to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of transparent criteria. Every time the list of techniques and approaches listed in Annex I is amended, providers and users of AI systems which come within the scope of the Regulation shall have 24 months to apply the relevant requirements and obligations. Article 83 shall apply to AI systems already placed on the market before delegated acts are published.
Amendment 1138 #
Proposal for a regulation
Article 4 – paragraph 1
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein. As an adequate transitional period, two years shall apply to each amendment.
Amendment 1144 #
Proposal for a regulation
Article 4 a (new)
Article 4 a (new)
Article 4 a
Trustworthy AI systems
1. The principles set out in this Article establish a high-level framework for a coherent and coordinated human-centric European approach on trustworthy AI systems that respect and promote the values on which the Union is founded. This Regulation takes those principles into account by establishing certain requirements for high-risk AI systems listed in Articles 8 to 15.
• ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that functions in a way that can be controlled and overseen by humans in a manner that is appropriate to the circumstances of the case.
• ‘technical robustness and safety’ means that AI systems shall be developed and used in a way that minimises unintended and unexpected harm, is robust in case of problems and is resilient against attempts to alter the use or performance of the AI system by malicious third parties.
• ‘privacy and data governance’ means that AI systems shall be developed and used in compliance with existing privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.
• ‘transparency’ means that AI systems shall be developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system as well as duly informing users of the capabilities and limitations of that AI system.
• ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, while avoiding discriminatory impacts that are prohibited by Union or Member State law.
• ‘social and environmental well-being’ means that AI systems shall be developed and used in a sustainable and environmentally friendly manner as well as in a way that benefits all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy.
• ‘accountability’ means that AI systems shall be developed or used in a way that facilitates auditability and accountability pursuant to applicable Union and Member State law, while making clear who is legally responsible in case the AI system causes negative impacts.
2. Paragraph 1 is without prejudice to obligations set up by existing Union and Member State legislation and does not create any additional obligations for providers or users.
3. European Standardisation Organisations shall understand the principles referred to in paragraph 1 as outcome-based objectives when developing the appropriate harmonised standards for high-risk AI systems as referred to in Article 40(2b). For all other AI systems, voluntary application on the basis of harmonised standards, technical specifications and codes of conduct as referred to in Article 69(1a) is encouraged.
Amendment 1166 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to significantly and materially distort a person’s behaviour in a manner that directly causes that person or another person significant harm;
Amendment 1184 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability, with the objective to or the effect of materially distorting the behaviour of a person pertaining to that group in a manner that directly causes that person or another person significant harm;
Amendment 1209 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point i
Article 5 – paragraph 1 – point c – point i
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
Amendment 1219 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – point ii
Article 5 – paragraph 1 – point c – point ii
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
Amendment 1240 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
Article 5 – paragraph 1 – point d – introductory part
(d) the use of a ‘real-time’ remote biometric identification function of an AI system in publicly accessible spaces by law enforcement or on their behalf, unless and in as far as such use is strictly necessary for one of the following objectives:
Amendment 1266 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
Article 5 – paragraph 1 – point d – point ii
(ii) the prevention of a specific and imminent threat to critical infrastructure, to the life, health or physical safety of natural persons, or of a terrorist attack;
Amendment 1280 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Article 5 – paragraph 1 – point d – point iii
(iii) the detection, localisation, identification or prosecution of a natural person for the purpose of conducting a criminal investigation, prosecution or executing a criminal penalty for offences referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, or for other specific offences punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least five years, as determined by the law of that Member State. _________________ 62 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
Amendment 1373 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation if such authorisation is requested without undue delay, and, if such authorisation is rejected, the system’s use is stopped with immediate effect.
Amendment 1391 #
Proposal for a regulation
Article 5 – paragraph 4
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
Amendment 1414 #
Proposal for a regulation
Article 6 – paragraph 1 – introductory part
Article 6 – paragraph 1 – introductory part
1. An AI system that is itself a product shall be considered a high-risk AI system if, under the applicable Union harmonisation legislation listed in Annex II, it is classified as high-risk or an equivalent thereof and has to undergo a third-party conformity assessment for meeting essential safety requirements prior to placing it on the market or putting it into service. An AI system intended to be used as a core and essential safety component of a product under the applicable Union harmonisation legislation listed in Annex II shall be considered high-risk if such Union harmonisation legislation classifies it as high-risk or an equivalent thereof and requires it to undergo a third-party conformity assessment for meeting essential safety requirements with a view to placing it on the market or putting it into service. The high-risk classification set in this paragraph shall not impact or determine the outcome of other risk classification procedures established in the Union harmonisation legislation listed in Annex II.
Amendment 1417 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
Article 6 – paragraph 1 – point a
Amendment 1428 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
Article 6 – paragraph 1 – point b
Amendment 1436 #
Proposal for a regulation
Article 6 – paragraph 2
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, an AI system with an intended purpose - as specified in its instructions for use in accordance with Article 3(12) and Article 13(2) - that means that it will be deployed in a way that falls under one of the critical use cases referred to in Annex III shall also be considered high-risk if that AI system will make a final decision that puts significantly at risk the health, safety or fundamental rights of natural persons.
Amendment 1445 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
Article 6 – paragraph 2 a (new)
Amendment 1467 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after ensuring adequate consultation with relevant stakeholders, to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
Amendment 1484 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
Article 7 – paragraph 1 – point b
(b) the AI systems pose a serious risk of harm to the health and safety, or a serious risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 1497 #
Proposal for a regulation
Article 7 – paragraph 2 – point a a (new)
Article 7 – paragraph 2 – point a a (new)
(a a) the general capabilities and functionalities of the AI system independent of its intended purpose;
Amendment 1501 #
Proposal for a regulation
Article 7 – paragraph 2 – point b a (new)
Article 7 – paragraph 2 – point b a (new)
(b a) the extent to which the AI system acts with a certain level of autonomy;
Amendment 1516 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system with a distinction to be made between an AI system used in an advisory capacity or one used directly to make a decision, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
Amendment 1521 #
Proposal for a regulation
Article 7 – paragraph 2 – point e a (new)
Article 7 – paragraph 2 – point e a (new)
(e a) the potential misuse and malicious use of the AI system and of the technology underpinning it;
Amendment 1529 #
Proposal for a regulation
Article 7 – paragraph 2 – point g
Article 7 – paragraph 2 – point g
(g) the extent to which the outcome produced with an AI system is not easily reversible or remedied, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;
Amendment 1530 #
Proposal for a regulation
Article 7 – paragraph 2 – point g a (new)
Article 7 – paragraph 2 – point g a (new)
(g a) the extent of the availability and use of demonstrated technical solutions and mechanisms for the control, reliability and corrigibility of the AI system;
Amendment 1532 #
Proposal for a regulation
Article 7 – paragraph 2 – point g b (new)
Article 7 – paragraph 2 – point g b (new)
(g b) the extent of human oversight and the possibility for a human to intercede in order to override a decision or recommendations that may lead to potential harm;
Amendment 1533 #
Proposal for a regulation
Article 7 – paragraph 2 – point g c (new)
Article 7 – paragraph 2 – point g c (new)
(g c) the magnitude and likelihood of benefit of the deployment of the AI system for industry, individuals, or society at large;
Amendment 1534 #
Proposal for a regulation
Article 7 – paragraph 2 – point g d (new)
Article 7 – paragraph 2 – point g d (new)
(g d) the reticence risk and/or opportunity costs of not using the AI system for industry, individuals, or society at large;
Amendment 1535 #
Proposal for a regulation
Article 7 – paragraph 2 – point g e (new)
Article 7 – paragraph 2 – point g e (new)
(g e) the amount and nature of data processed;
Amendment 1536 #
Proposal for a regulation
Article 7 – paragraph 2 – point g f (new)
Article 7 – paragraph 2 – point g f (new)
(g f) the benefits provided by the use of the AI system, including making products safer;
Amendment 1539 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – introductory part
Article 7 – paragraph 2 – point h – introductory part
(h) the extent to which existing Union legislation, in particular GDPR, provides for:
Amendment 1548 #
Proposal for a regulation
Article 7 – paragraph 2 a (new)
Article 7 – paragraph 2 a (new)
2 a. The Commission may remove AI systems from the list in Annex III if the conditions referred to in paragraph 1 are no longer met.
Amendment 1550 #
Proposal for a regulation
Article 7 – paragraph 2 b (new)
Article 7 – paragraph 2 b (new)
2 b. The Board, notified bodies and other actors may request the Commission to reassess an AI system. The AI system shall then be reviewed for reassessment and may be re-categorized. The Commission shall give reasons for its decision and publish the reasons. The details of the application procedure shall be laid down by the Commission by means of delegated acts in accordance with Article 73.
Amendment 1557 #
Proposal for a regulation
Article 8 – paragraph 1
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the essential requirements established in this Chapter, taking into account the generally acknowledged state of the art, including as reflected in relevant industry and harmonised standards.
Amendment 1569 #
Proposal for a regulation
Article 8 – paragraph 2 a (new)
Article 8 – paragraph 2 a (new)
2 a. AI systems referred to in Article 6 may be wholly or partially exempted from fulfilling the requirements referred to in Articles 8-15 if risks posed by the AI systems are sufficiently eliminated or mitigated through appropriate operational countermeasures or built-in fail-safe systems.
Amendment 1574 #
Proposal for a regulation
Article 9 – paragraph 1
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems if this system poses a risk of harm to health and safety or a risk of adverse impacts on fundamental rights.
Amendment 1579 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifetime of a high-risk AI system, requiring regular review of the suitability of the risk management process to ensure its continuing effectiveness, and documentation of any decisions and actions taken. It shall comprise the following steps, all of which shall be integrated into already existing risk management procedures relating to the relevant Union sectoral legislation in order to avoid unnecessary bureaucracy:
Amendment 1585 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and reasonably foreseeable risks of harms most likely to occur to the health, safety or fundamental rights in view of the intended purpose of the high-risk AI system;
Amendment 1592 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
Article 9 – paragraph 2 – point b
Amendment 1597 #
Proposal for a regulation
Article 9 – paragraph 2 – point c
Article 9 – paragraph 2 – point c
(c) evaluation of new risks consistent with those described in paragraph 2, point (a) of this Article and identified based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
Amendment 1600 #
Proposal for a regulation
Article 9 – paragraph 2 – point d
Article 9 – paragraph 2 – point d
(d) adoption of appropriate and targeted risk management measures designed to address identified known and foreseeable risks to health and safety or fundamental rights, in accordance with the provisions of the following paragraphs.
Amendment 1603 #
Proposal for a regulation
Article 9 – paragraph 3
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2, with a view to treating risks effectively while ensuring an appropriate and proportionate implementation of the requirements. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
Amendment 1611 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that the overall residual risk of the high-risk AI system is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI system is reasonably expected to deliver, and provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Significant residual risks shall be communicated to the user.
Amendment 1616 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – introductory part
Article 9 – paragraph 4 – subparagraph 1 – introductory part
In identifying the most appropriate risk management measures, the following shall be taken into account:
Amendment 1618 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) reduction of identified and evaluated risks as far as proportionate and technologically possible in light of the generally acknowledged state of the art and industry standards, through adequate design and development of the high-risk AI system in question;
Amendment 1628 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users.
Amendment 1634 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
Article 9 – paragraph 4 – subparagraph 2
In seeking to reduce risks related to the use of the high-risk AI system, providers shall take into due consideration the technical knowledge, experience, education and training the user may need, including in relation to the environment in which the system is intended to be used.
Amendment 1641 #
Proposal for a regulation
Article 9 – paragraph 5
Article 9 – paragraph 5
5. High-risk AI systems shall be evaluated for the purposes of identifying the most appropriate and targeted risk management measures and of weighing any such measures against the potential benefits and intended goals of the system. Evaluations shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the relevant requirements set out in this Chapter.
Amendment 1648 #
Proposal for a regulation
Article 9 – paragraph 6
Article 9 – paragraph 6
6. Evaluation or testing procedures shall be suitable to achieve the intended purpose of the AI system and do not need to go beyond what is necessary to achieve that purpose.
Amendment 1657 #
Proposal for a regulation
Article 9 – paragraph 7
Article 9 – paragraph 7
7. The testing of high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against prior defined metrics, such as probabilistic thresholds, that are appropriate to the intended purpose of the high-risk AI system.
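By way of illustration, ‘prior defined metrics, such as probabilistic thresholds’ can be read as a release gate fixed before testing begins. A minimal Python sketch; the metric names and threshold values are assumptions, not taken from the text:

# Thresholds fixed before testing starts; values are illustrative only.
PRIOR_DEFINED_THRESHOLDS = {"accuracy": 0.95, "false_positive_rate": 0.02}

def passes_release_gate(measured: dict) -> bool:
    """Return True only if the measured test results meet every prior defined metric."""
    return (measured["accuracy"] >= PRIOR_DEFINED_THRESHOLDS["accuracy"]
            and measured["false_positive_rate"] <= PRIOR_DEFINED_THRESHOLDS["false_positive_rate"])

print(passes_release_gate({"accuracy": 0.97, "false_positive_rate": 0.01}))  # True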
Amendment 1661 #
Proposal for a regulation
Article 9 – paragraph 8
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.
Amendment 1670 #
Proposal for a regulation
Article 9 – paragraph 9
Article 9 – paragraph 9
9. For AI systems already covered by Union law that requires them to carry out specific risk assessments, the aspects described in paragraphs 1 to 8 shall be combined with the risk assessment procedures established by that Union law or deemed to be covered as part of it.
Amendment 1674 #
Proposal for a regulation
Article 10 – paragraph 1
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall, as far as this can be reasonably expected and is feasible from a technical point of view, be developed with the best efforts to ensure training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
Amendment 1685 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
Article 10 – paragraph 2 – introductory part
2. Training, machine-learning validation and testing data sets shall be subject to appropriate data governance and management practices during the expected lifetime. Those practices shall concern in particular, where relevant:
Amendment 1688 #
Proposal for a regulation
Article 10 – paragraph 2 – point a
Article 10 – paragraph 2 – point a
(a) the relevant design choices for training and machine learning validation;
Amendment 1691 #
Proposal for a regulation
Article 10 – paragraph 2 – point b
Article 10 – paragraph 2 – point b
(b) data collection processes;
Amendment 1692 #
Proposal for a regulation
Article 10 – paragraph 2 – point c
Article 10 – paragraph 2 – point c
(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;
Amendment 1696 #
Proposal for a regulation
Article 10 – paragraph 2 – point e
Article 10 – paragraph 2 – point e
Amendment 1699 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
Article 10 – paragraph 2 – point f
(f) examination in view of possible unfair biases that are likely to affect the health and safety of persons or lead to discrimination prohibited under Union law;
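One elementary form such an examination can take, sketched here in Python with a hypothetical protected attribute and toy records, is comparing label rates across groups in the data set before training:

from collections import Counter

# Toy records; "group" is a hypothetical protected attribute, "label" a training label.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

counts = Counter((r["group"], r["label"]) for r in records)
for group in ("A", "B"):
    total = counts[(group, 0)] + counts[(group, 1)]
    print(group, counts[(group, 1)] / total)  # positive-label rate per group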
Amendment 1705 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
Article 10 – paragraph 2 – point g
(g) the identification of significant and consequential data gaps or shortcomings, and how those gaps and shortcomings can be addressed;
Amendment 1709 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
Article 10 – paragraph 2 – point g a (new)
(g a) the presumable context of the use as well as the intended purpose of the AI system.
Amendment 1725 #
Proposal for a regulation
Article 10 – paragraph 3
Article 10 – paragraph 3
3. High-risk AI systems shall be designed and developed with the best efforts to ensure that, where appropriate, training data sets, machine-learning validation data sets and testing data sets are sufficiently accurate, relevant and representative in view of the intended purpose of the AI system. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1726 #
Proposal for a regulation
Article 10 – paragraph 3 a (new)
Article 10 – paragraph 3 a (new)
3 a. In assessing the quality of a data set, account shall be taken of the extent to which the data set is constructed with a view to fulfilling in particular the following aspects:
a) it provides a similar output for relevant demographic groups impacted by the system;
b) it minimizes disparities in outcomes for relevant demographic groups impacted by the system, in cases where the system allocates resources or opportunities to natural persons;
c) it minimizes the potential for stereotyping, demeaning, or erasing relevant demographic groups impacted by the system where the system describes, depicts, or otherwise represents people, cultures, or society.
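Aspects a) and b) can be screened for with a simple disparity check on the system’s outputs per demographic group. A Python sketch; the outputs, group labels and the four-fifths ratio used as a flag are common illustrative conventions, not something the text mandates:

def outcome_rates(outputs, groups):
    """Positive-outcome rate of the system per demographic group."""
    rates = {}
    for g in set(groups):
        selected = [o for o, gg in zip(outputs, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    return rates

outputs = [1, 0, 1, 1, 0, 0, 1, 0]                 # illustrative binary decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical group labels
rates = outcome_rates(outputs, groups)
disparity = min(rates.values()) / max(rates.values())
print(rates, "disparity ratio:", disparity)  # e.g. flag for review if below 0.8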
Amendment 1727 #
Proposal for a regulation
Article 10 – paragraph 4
Article 10 – paragraph 4
Amendment 1733 #
Proposal for a regulation
Article 10 – paragraph 4 a (new)
Article 10 – paragraph 4 a (new)
4 a. The processing of personal data to train, validate and test data sets of an AI system in order to meet the requirements of this Regulation shall be lawful for the purpose of the legitimate interest of the provider as referred to in Article 6(1)(f) GDPR or in accordance with Article 6(4) GDPR, subject to appropriate safeguards in line with Article 89 GDPR, for ensuring, to the extent necessary and proportionate, one or more of the following objectives:
a) national and common security;
b) functioning of the internal market;
c) prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
d) exercise of public authorities’ official mission, such as tax and customs authorities, financial investigation units, independent administrative authorities, or financial market authorities responsible for the regulation and supervision of securities markets, which should not be regarded as recipients if they process personal data to train, validate and test an AI system which are necessary to carry out a particular inquiry in the general interest, in accordance with Union or Member State law;
e) network and information security, to the extent necessary and proportionate for this purpose;
f) protection of an interest which is essential for the life of the data subject or that of another natural person, in particular where it is necessary for reasons of public interest in the areas of public health.
Amendment 1738 #
Proposal for a regulation
Article 10 – paragraph 5
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to high-risk AI systems, the providers of such systems will have a legal basis and the necessary exception to process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use, and the use of:
(i) state-of-the-art security and privacy-preserving measures, such as data minimisation, pseudonymisation and encryption, where anonymisation may significantly affect the purpose pursued;
(ii) measures ensuring availability and resilience of processing systems and services, and the ability to restore the availability of and access to special category personal data in a timely manner in the event of a physical or technical incident;
(iii) processes for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures in order to ensure the security of the processing;
(iv) measures for user identification, authorisation, protection of data during transmission, protection of data during storage, ensuring physical security of locations at which personal data are processed, internal IT and IT security governance and management, and certification/assurance of processes and products;
(v) measures for ensuring data minimisation, data quality, limited data retention, data portability and ensuring erasure.
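Pseudonymisation, named in point (i), can be as simple as replacing a direct identifier with a keyed hash so that records remain linkable without storing the raw identifier. A minimal Python sketch using the standard library; the key value and field names are illustrative, and real deployments would need proper key management:

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-properly-managed-secret"  # illustrative only

def pseudonymise(identifier: str) -> str:
    """Derive a stable pseudonym from a direct identifier using a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "feature": 0.7}
record["subject_id"] = pseudonymise(record.pop("name"))  # the raw name is no longer stored
print(record)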
Amendment 1743 #
Proposal for a regulation
Article 10 – paragraph 6 a (new)
Article 10 – paragraph 6 a (new)
6 a. Providers and users may comply with the obligations set out in this Article through the use of third parties that offer certified compliance services, including verification of data governance, data set integrity, and data training, validation and testing practices.
Amendment 1745 #
Proposal for a regulation
Article 10 – paragraph 6 b (new)
Article 10 – paragraph 6 b (new)
Amendment 1748 #
Proposal for a regulation
Article 11 – paragraph 1 – introductory part
Article 11 – paragraph 1 – introductory part
1. The technical documentation of a high-risk AI system shall be drawn up, where possible and relevant, and without compromising intellectual property rights or trade secrets, before that system is placed on the market or put into service and shall be kept up to date.
Amendment 1750 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up, where possible and relevant, and without compromising intellectual property rights or trade secrets, in such a way as to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and to provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent national authority.
Amendment 1758 #
Proposal for a regulation
Article 11 – paragraph 2
Article 11 – paragraph 2
2. Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service only one single and appropriate technical documentation shall be drawn up for each product, containing all the information set out in Annex IV as well as the information required under those legal acts.
Amendment 1761 #
Proposal for a regulation
Article 11 – paragraph 2 a (new)
Article 11 – paragraph 2 a (new)
2 a. To ensure that a single technical documentation is possible, terms and definitions related to this required documentation and any required documentation in the appropriate Union sectoral legislation shall be aligned as much as possible.
Amendment 1767 #
Proposal for a regulation
Article 12 – paragraph 1
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities that technically allow the automatic recording of events (‘logs’) over the duration of the lifetime of the system.
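What ‘automatic recording of events’ might look like at the implementation level, sketched with the Python standard library; the file name, event types and fields are assumptions, not requirements of the text:

import json
import logging
import time

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)

def log_event(event_type: str, payload: dict) -> None:
    """Automatically append one structured event record to the system's log."""
    logging.info(json.dumps({
        "timestamp": time.time(),  # when the event occurred
        "event": event_type,       # e.g. "inference" or "model_update"
        "payload": payload,
    }))

log_event("inference", {"input_id": "42", "output": "approved", "confidence": 0.91})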
Amendment 1771 #
Proposal for a regulation
Article 12 – paragraph 2
Article 12 – paragraph 2
2. In order to ensure a level of traceability of the AI system’s functioning throughout its lifecycle which is appropriate to the intended purpose of the system, the logging capabilities shall enable the recording of events relevant for the identification of situations that may: (i) result in the AI system presenting a risk within the meaning of Article 65(1); or (ii) lead to a substantial modification that facilitates the post-market monitoring referred to in Article 61.
Amendment 1776 #
Proposal for a regulation
Article 12 – paragraph 3
Article 12 – paragraph 3
Amendment 1779 #
Proposal for a regulation
Article 12 – paragraph 4
Article 12 – paragraph 4
Amendment 1789 #
Proposal for a regulation
Article 13 – paragraph 1
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to reasonably understand the system’s functioning. An appropriate type and degree of transparency shall be ensured, depending on the intended purpose of the system, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Article 16 and Article 29 of this Title. The explanation shall be provided at least in the language of the country where the AI system is deployed. Transparency shall thereby mean that, to the extent that can be reasonably expected and is feasible in technical terms at the time when the AI system is placed on the market, the AI system is interpretable to the provider, in that the provider can understand the rationale of decisions taken by the high-risk AI system, while enabling the user to understand and use the AI system appropriately, by generally knowing how the AI system works and what data it processes.
Amendment 1792 #
Proposal for a regulation
Article 13 – paragraph 2
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by comprehensible instructions for use in an appropriate digital format or made otherwise available that include concise, complete, correct and clear information that helps support informed decision-making by users and is reasonably relevant, accessible and comprehensible to users.
Amendment 1795 #
Proposal for a regulation
Article 13 – paragraph 3 – introductory part
Article 13 – paragraph 3 – introductory part
3. To the extent necessary to achieve the outcomes referred to in paragraph 1, the information referred to in paragraph 2 shall specify:
Amendment 1796 #
Proposal for a regulation
Article 13 – paragraph 3 – point a
Article 13 – paragraph 3 – point a
(a) the identity and the contact details of the provider and, where applicable, of their authorised representative;
Amendment 1797 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – introductory part
Article 13 – paragraph 3 – point b – introductory part
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system that are relevant to the material risks associated with the intended purpose, including where appropriate:
Amendment 1798 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point ii
Article 13 – paragraph 3 – point b – point ii
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity, including an overview of the capabilities and performance metrics of the AI system, and of representative use cases based on the intended purpose;
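The point does not prescribe how the declared level of accuracy is produced. A minimal sketch, assuming the figure is derived from a held-out test set (all names below are hypothetical):

# Deriving the tested-and-validated accuracy figure that the
# instructions for use would declare, from a held-out test set.
def declared_metrics(y_true, y_pred):
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return {
        "tested_accuracy": round(correct / len(y_true), 3),
        "test_set_size": len(y_true),
    }

# Illustrative call with toy labels and predictions.
print(declared_metrics([1, 0, 1, 1], [1, 0, 0, 1]))
# -> {'tested_accuracy': 0.75, 'test_set_size': 4}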
Amendment 1802 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
Article 13 – paragraph 3 – point b – point iii
(iii) the known or foreseeable circumstances, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to health and safety or fundamental rights, including, where appropriate, illustrative examples of such limitations and of scenarios for which the system should not be used;
Amendment 1804 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point v
Article 13 – paragraph 3 – point b – point v
(v) when appropriate, relevant information about user actions that may influence system performance, including type or quality of input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system.
Amendment 1807 #
Proposal for a regulation
Article 13 – paragraph 3 – point e a (new)
Article 13 – paragraph 3 – point e a (new)
(e a) a description of the mechanisms included within the AI system that allow users to properly collect, store and interpret the logs in accordance with Article 12(1), where relevant.
Amendment 1811 #
Proposal for a regulation
Article 14 – paragraph 1
Article 14 – paragraph 1
1. Where proportionate to the risks associated with the high-risk system and where technical safeguards are not sufficient, high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they allow informed oversight by natural persons during the expected lifetime of the device. Oversight capabilities should be tailored to the AI system’s intended purpose and the context of use and take into account cases where human oversight may compromise the correct and safe functioning of the AI system.
Amendment 1819 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
Article 14 – paragraph 3 – introductory part
3. The degree of human oversight shall be adapted to the specific risks, the level of automation, and context of the AI system and shall be ensured through either one or all of the following types of measures:
Amendment 1823 #
Proposal for a regulation
Article 14 – paragraph 3 – point a
Article 14 – paragraph 3 – point a
(a) identified and built, when technically feasible and appropriate, into the high-risk AI system by the provider before it is placed on the market or put into service;
Amendment 1825 #
Proposal for a regulation
Article 14 – paragraph 3 – point b
Article 14 – paragraph 3 – point b
(b) operationalised before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user;
Amendment 1826 #
Proposal for a regulation
Article 14 – paragraph 3 – point b a (new)
Article 14 – paragraph 3 – point b a (new)
(b a) required of the user, if appropriate, for their implementation;
Amendment 1827 #
Proposal for a regulation
Article 14 – paragraph 3 – point b b (new)
Article 14 – paragraph 3 – point b b (new)
(b b) included during the development, testing, or monitoring processes.
Amendment 1828 #
Proposal for a regulation
Article 14 – paragraph 3 a (new)
Article 14 – paragraph 3 a (new)
Amendment 1829 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
Article 14 – paragraph 4 – introductory part
4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that natural persons to whom human oversight is assigned can do the following, as appropriate and proportionate to the circumstances and instructions for use and in accordance with industry standards:
Amendment 1831 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
Article 14 – paragraph 4 – point a
(a) be aware of and sufficiently understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
Amendment 1834 #
Proposal for a regulation
Article 14 – paragraph 4 – point b
Article 14 – paragraph 4 – point b
(b) remain aware of the possible tendency of automatically relying or over- relying on the output produced by a high- risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
Amendment 1837 #
Proposal for a regulation
Article 14 – paragraph 4 – point c
Article 14 – paragraph 4 – point c
(c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and, for example, the interpretation tools and methods available;
Amendment 1840 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
Article 14 – paragraph 4 – point e
(e) be able to intervene on the operation of the high-risk AI system or interrupt, where reasonable and technically feasible, the system through a “stop” button or a similar procedure, except if the human interference increases the risk or would negatively impact the performance in consideration of the generally acknowledged state of the art.
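The amendment leaves the form of the “stop” procedure entirely open. A minimal sketch, assuming a cooperative interrupt in which the halt is deferred while stopping would itself increase risk, as the carve-out in point (e) contemplates (all names hypothetical):

import threading

class OverseeableProcess:
    def __init__(self):
        self._stop_requested = threading.Event()

    def request_stop(self):
        # Called by the natural person overseeing the system
        # (the "stop" button).
        self._stop_requested.set()

    def safe_to_stop(self, step):
        # Assumption: halting during a step marked critical would itself
        # increase risk, so the interrupt is deferred until it completes.
        return step.get("critical") is not True

    def run(self, steps):
        for step in steps:
            if self._stop_requested.is_set() and self.safe_to_stop(step):
                print("Halted by human oversight before:", step["name"])
                return
            print("Executing:", step["name"])

# Illustrative use: the stop request is honoured at the first safe point.
proc = OverseeableProcess()
proc.request_stop()
proc.run([{"name": "actuate", "critical": True}, {"name": "report"}])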
Amendment 1843 #
Proposal for a regulation
Article 14 – paragraph 5
Article 14 – paragraph 5
5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been separately verified and confirmed by at least two natural persons on-site or remotely, except for temporary actions or decisions which cannot be delayed due to safety or security reasons for the purpose of law enforcement.
Amendment 1846 #
Proposal for a regulation
Article 14 – paragraph 5 a (new)
Article 14 – paragraph 5 a (new)
5 a. For the purpose of implementing paragraph 2, in the case where the result of an identification is inconclusive, the human oversight requirements from paragraphs 3 to 5 shall be performed directly internally by the closest entity to the user in the supply chain of the high- risk AI system.
Amendment 1847 #
Proposal for a regulation
Article 14 – paragraph 5 b (new)
Article 14 – paragraph 5 b (new)
5 b. With the exception of high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall not be interpreted as requiring a human to review every action or decision taken by the AI system. Full automation of such systems shall be possible provided that technical measures are put in place to comply with provisions in paragraphs 1 to 4.
Amendment 1848 #
Proposal for a regulation
Article 15 – paragraph 1
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose and to the extent that can be reasonably expected and is in accordance with relevant industry standards, an appropriate level of accuracy, reliability, robustness and cybersecurity, and the basic pillars of information security and protection, such as confidentiality, integrity and availability, as well as to perform consistently in those respects throughout their lifetime while taking their evolving nature into account.
Amendment 1852 #
Proposal for a regulation
Article 15 – paragraph 1 a (new)
Article 15 – paragraph 1 a (new)
1 a. To address the technical aspects of how to measure the appropriate levels of accuracy and robustness in paragraph 1, the European Artificial Intelligence Board shall bring together national metrology and benchmarking authorities and provide non-binding guidance on the matter as per Article 56(2a) of this Regulation.
Amendment 1855 #
Proposal for a regulation
Article 15 – paragraph 2
Article 15 – paragraph 2
2. The range of expected performance and the operational factors that affect that performance shall be declared, where possible, in the accompanying instructions of use.
Amendment 1857 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
Article 15 – paragraph 3 – introductory part
3. High-risk AI systems shall be designed and developed with safety and security by design mechanisms by default, so that they achieve, in the light of their intended purpose, an appropriate level of cyber resilience as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
Amendment 1860 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 1
Article 15 – paragraph 3 – subparagraph 1
The robustness of high-risk AI systems may be achieved through diverse technical redundancy solutions, which may include reasonably designed backup or fail-safe plans by the appropriate provider or user or as mutually agreed by the provider and the user.
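As a purely illustrative sketch of one such technical redundancy solution, assuming a simple primary/backup arrangement agreed between provider and user (all names hypothetical):

# If the primary model fails or yields no result, fall back to a simpler
# backup model (a 'fail-safe plan' in the sense of the subparagraph).
def predict_with_fallback(primary, backup, features):
    try:
        result = primary(features)
        if result is not None:
            return result
    except Exception:
        pass  # primary model failed; fall through to the backup
    return backup(features)

# Illustrative use: the backup is a conservative rule-based default.
primary_model = lambda x: None            # simulates a failing primary
backup_model = lambda x: "safe_default"
print(predict_with_fallback(primary_model, backup_model, [1, 2, 3]))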
Amendment 1862 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2
Article 15 – paragraph 3 – subparagraph 2
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs influencing an input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.
Amendment 1865 #
Proposal for a regulation
Article 15 – paragraph 3 a (new)
Article 15 – paragraph 3 a (new)
3 a. In accordance with Article 42(2), compliance with Article 15 for high-risk AI systems that have already been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 shall be assumed.
Amendment 1868 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 1
Article 15 – paragraph 4 – subparagraph 1
The technical and organisational measures designed to uphold the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
Amendment 1870 #
Proposal for a regulation
Article 15 – paragraph 4 – subparagraph 2
Article 15 – paragraph 4 – subparagraph 2
The technical solutions to address AI-specific vulnerabilities may include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), model flaws, or exploratory attacks that may aim to extract knowledge, algorithms, trade secrets or training information from the AI.
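As an illustrative sketch only, two of the named controls could be approximated as follows, assuming a checksum recorded at data-governance time and a simple range-based input gate; real systems would use far stronger out-of-distribution detection, and all names are hypothetical:

import hashlib
import numpy as np

def dataset_fingerprint(data: np.ndarray) -> str:
    # SHA-256 over the raw bytes of the training set, recorded when the
    # data is approved, to detect later tampering ('data poisoning').
    return hashlib.sha256(data.tobytes()).hexdigest()

def verify_dataset(data: np.ndarray, expected_fingerprint: str) -> bool:
    return dataset_fingerprint(data) == expected_fingerprint

def plausible_input(x: np.ndarray, train_min: float, train_max: float) -> bool:
    # Crude gate against inputs far outside the range seen in training,
    # one class of crafted 'adversarial examples'.
    margin = 0.1 * (train_max - train_min)
    return bool(x.min() >= train_min - margin and x.max() <= train_max + margin)

train = np.array([[0.1, 0.4], [0.2, 0.8]])
fp = dataset_fingerprint(train)
assert verify_dataset(train, fp)
assert not plausible_input(np.array([5.0, -7.0]), train.min(), train.max())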
Amendment 1879 #
Proposal for a regulation
Article 16 – paragraph 1 – point a
Article 16 – paragraph 1 – point a
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service;
Amendment 1881 #
Proposal for a regulation
Article 16 – paragraph 1 – point a a (new)
Article 16 – paragraph 1 – point a a (new)
(a a) indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable;
Amendment 1888 #
Proposal for a regulation
Article 16 – paragraph 1 – point c
Article 16 – paragraph 1 – point c
(c) keep the documentation referred to in Article 18;
Amendment 1890 #
Proposal for a regulation
Article 16 – paragraph 1 – point d
Article 16 – paragraph 1 – point d
(d) when under their control, keep the logs automatically generated by their high- risk AI systems, in accordance with Article 20;
Amendment 1897 #
Proposal for a regulation
Article 16 – paragraph 1 – point e
Article 16 – paragraph 1 – point e
(e) carry out the relevant conformity assessment procedure, as provided for in Article 19, prior to its placing on the market or putting into service;
Amendment 1900 #
Proposal for a regulation
Article 16 – paragraph 1 – point g
Article 16 – paragraph 1 – point g
(g) take the necessary corrective actions as referred to in Article 21, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title;
Amendment 1901 #
Proposal for a regulation
Article 16 – paragraph 1 – point i
Article 16 – paragraph 1 – point i
(i) affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49;
Amendment 1904 #
Proposal for a regulation
Article 16 – paragraph 1 – point j
Article 16 – paragraph 1 – point j
(j) upon reasoned request of a national competent authority, provide the relevant information and documentation to demonstrate the conformity of the high-risk AI system.
Amendment 1915 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
Article 17 – paragraph 1 – introductory part
1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall be incorporated as part of an existing quality management system under sectoral legislation or as provided by the International Organisation for Standardization.
Amendment 1917 #
Proposal for a regulation
Article 17 – paragraph 1 – point a
Article 17 – paragraph 1 – point a
Amendment 1918 #
Proposal for a regulation
Article 17 – paragraph 1 – point b
Article 17 – paragraph 1 – point b
Amendment 1919 #
Proposal for a regulation
Article 17 – paragraph 1 – point c
Article 17 – paragraph 1 – point c
Amendment 1920 #
Proposal for a regulation
Article 17 – paragraph 1 – point d
Article 17 – paragraph 1 – point d
Amendment 1922 #
Proposal for a regulation
Article 17 – paragraph 1 – point e
Article 17 – paragraph 1 – point e
Amendment 1924 #
Proposal for a regulation
Article 17 – paragraph 1 – point f
Article 17 – paragraph 1 – point f
Amendment 1928 #
Proposal for a regulation
Article 17 – paragraph 1 – point g
Article 17 – paragraph 1 – point g
Amendment 1929 #
Proposal for a regulation
Article 17 – paragraph 1 – point h
Article 17 – paragraph 1 – point h
Amendment 1930 #
Proposal for a regulation
Article 17 – paragraph 1 – point i
Article 17 – paragraph 1 – point i
Amendment 1933 #
Proposal for a regulation
Article 17 – paragraph 1 – point j
Article 17 – paragraph 1 – point j
Amendment 1936 #
Proposal for a regulation
Article 17 – paragraph 1 – point k
Article 17 – paragraph 1 – point k
Amendment 1937 #
Proposal for a regulation
Article 17 – paragraph 1 – point l
Article 17 – paragraph 1 – point l
Amendment 1938 #
Proposal for a regulation
Article 17 – paragraph 1 – point m
Article 17 – paragraph 1 – point m
Amendment 1946 #
Proposal for a regulation
Article 18 – paragraph 1
Article 18 – paragraph 1
1. The providers of high-risk AI systems shall, for a period of 3 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:
(a) the technical documentation referred to in Article 11 and Annex IV;
(b) the documentation concerning the quality management system referred to in Article 17;
(c) the documentation concerning the changes approved by notified bodies, where applicable;
(d) the decisions and other documents issued by the notified bodies, where applicable;
(e) the EU declaration of conformity referred to in Article 48.
Amendment 1958 #
Proposal for a regulation
Article 20 – paragraph 1
Article 20 – paragraph 1
1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law, as well as under their factual control and to the extent that it is technically feasible. They shall keep them for a period of at least six months, unless provided otherwise in applicable Union or national law.
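A minimal sketch of that six-month floor, assuming UTC timestamps and treating six months as 183 days (both assumptions, since the amendment fixes neither):

from datetime import datetime, timedelta, timezone

# Logs only become eligible for deletion once they are older than the
# minimum period, or a longer period where other law requires it.
MINIMUM_RETENTION = timedelta(days=183)  # at least six months

def deletable(log_timestamp: datetime,
              extra_retention: timedelta = timedelta(0)) -> bool:
    retention = max(MINIMUM_RETENTION, extra_retention)
    return datetime.now(timezone.utc) - log_timestamp > retention

old_log = datetime.now(timezone.utc) - timedelta(days=200)
new_log = datetime.now(timezone.utc) - timedelta(days=30)
assert deletable(old_log) and not deletable(new_log)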
Amendment 1959 #
Proposal for a regulation
Article 21 – paragraph 1
Article 21 – paragraph 1
Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately, where applicable, investigate the causes in collaboration with the user and take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
Amendment 1964 #
Proposal for a regulation
Article 22 – paragraph 1
Article 22 – paragraph 1
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the market surveillance authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the nature of the non-compliance and of any relevant corrective actions taken by the provider.
Amendment 1968 #
Proposal for a regulation
Article 23 – paragraph 1
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon a reasoned request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in a language that can be easily understood by that national competent authority. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. Any information submitted in accordance with the provisions of this Article shall be considered by the national competent authority a trade secret of the company that is submitting such information and kept strictly confidential.
Amendment 1976 #
Proposal for a regulation
Article 23 a (new)
Article 23 a (new)
Amendment 1979 #
Proposal for a regulation
Article 24
Article 24
Amendment 1982 #
Proposal for a regulation
Article 25 – paragraph 1
Article 25 – paragraph 1
1. Prior to making their systems available on the Union market, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.
Amendment 1984 #
Proposal for a regulation
Article 25 – paragraph 2 – introductory part
Article 25 – paragraph 2 – introductory part
2. The authorised representative shall perform the tasks specified in the mandate received from the provider. For the purpose of this Regulation, the mandate shall empower the authorised representative to carry out only the following tasks:
Amendment 1986 #
Proposal for a regulation
Article 25 – paragraph 2 – point a
Article 25 – paragraph 2 – point a
(a) ensure that the EU declaration of conformity and the technical documentation have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider;
Amendment 1989 #
Proposal for a regulation
Article 25 – paragraph 2 – point b a (new)
Article 25 – paragraph 2 – point b a (new)
(b a) keep at the disposal of the national competent authorities and national authorities referred to in Article 63(7), for a period ending 3 years after the high-risk AI system has been placed on the market or put into service, a copy of the EU declaration of conformity, the technical documentation and, if applicable, the certificate issued by the notified body;
Amendment 1992 #
Proposal for a regulation
Article 25 – paragraph 2 – point c
Article 25 – paragraph 2 – point c
(c) cooperate with national supervisory authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system;
Amendment 1993 #
Proposal for a regulation
Article 25 – paragraph 2 – point c a (new)
Article 25 – paragraph 2 – point c a (new)
(c a) comply with the registration obligations referred to in Article 51 or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Annex VIII is correct.
Amendment 1995 #
Proposal for a regulation
Article 25 – paragraph 2 – subparagraph 1 (new)
Article 25 – paragraph 2 – subparagraph 1 (new)
The authorised representative shall terminate the mandate if it considers or has reason to consider that the provider acts contrary to its obligations under this Regulation. In such a case, it shall also immediately inform the market surveillance authority of the Member State in which it is established, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons thereof.
Amendment 1996 #
Proposal for a regulation
Article 26 – paragraph 1 – introductory part
Article 26 – paragraph 1 – introductory part
1. Before placing a high-risk AI system on the market, importers of such a system shall ensure that such a system is in conformity with this Regulation by ensuring that:
Amendment 1998 #
Proposal for a regulation
Article 26 – paragraph 1 – point a
Article 26 – paragraph 1 – point a
(a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of that AI system;
Amendment 1999 #
Proposal for a regulation
Article 26 – paragraph 1 – point c
Article 26 – paragraph 1 – point c
(c) the system bears the required conformity marking and is accompanied by the required documentation and instructions of use;
Amendment 2000 #
Proposal for a regulation
Article 26 – paragraph 1 – point c a (new)
Article 26 – paragraph 1 – point c a (new)
(c a) the authorised representative referred to in Article 25 has been established by the provider.
Amendment 2001 #
Proposal for a regulation
Article 26 – paragraph 2
Article 26 – paragraph 2
2. Where an importer considers or has reason to consider that a high-risk AI system is not in conformity with this Regulation, or is falsified, or accompanied by falsified documentation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the importer shall inform the provider of the AI system and the market surveillance authorities to that effect.
Amendment 2003 #
Proposal for a regulation
Article 26 – paragraph 4
Article 26 – paragraph 4
4. Importers shall keep, for a period ending 3 years after the AI system has been placed on the market or put into service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use and of the EU declaration of conformity.
Amendment 2006 #
Proposal for a regulation
Article 26 – paragraph 5
Article 26 – paragraph 5
5. Where no authorised representative has been established, importers shall provide national competent authorities, upon a reasoned request, with all necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by that national competent authority, including access to the logs automatically generated by the high-risk AI system, to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action a national competent authority takes in relation to that system. To this purpose, they shall also ensure that the technical documentation can be made available to those authorities.
Amendment 2008 #
Proposal for a regulation
Article 26 – paragraph 5 a (new)
Article 26 – paragraph 5 a (new)
5 a. Importers shall cooperate with national competent authorities on any action those authorities take in relation to an AI system.
Amendment 2009 #
Proposal for a regulation
Article 27 – paragraph 1
Article 27 – paragraph 1
1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions of use, and that the provider and the importer of the system, as applicable, have complied with their obligations set out in Article 16 and Article 26(3), respectively.
Amendment 2010 #
Proposal for a regulation
Article 27 – paragraph 2
Article 27 – paragraph 2
2. Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high- risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect, and the market surveillance authorities.
Amendment 2014 #
Proposal for a regulation
Article 27 – paragraph 4
Article 27 – paragraph 4
4. A distributor that considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the provider or importer of the system and the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
Amendment 2020 #
Proposal for a regulation
Article 27 – paragraph 5
Article 27 – paragraph 5
5. Upon a reasoned request from a national competent authority and where no authorised representative has been appointed, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority regarding its activities as described in paragraphs 1 to 4.
Amendment 2022 #
Proposal for a regulation
Article 27 – paragraph 5 a (new)
Article 27 – paragraph 5 a (new)
5 a. Distributors shall cooperate with national competent authorities on any action those authorities take in relation to an AI system.
Amendment 2023 #
Proposal for a regulation
Article 28
Article 28
Amendment 2037 #
Proposal for a regulation
Article 29 – paragraph 1
Article 29 – paragraph 1
1. Users of high-risk AI systems shall use such systems and implement human oversight in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article. Users shall bear sole responsibility in case of any use of the AI system that is not in accordance with the instructions of use accompanying the systems.
Amendment 2042 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
Article 29 – paragraph 1 a (new)
1 a. To the extent the user exercises control over the high-risk AI system, that user shall only assign human oversight to natural persons who have the necessary competence, training and authority as well as ensure that relevant and appropriate robustness and cybersecurity measures are in place and are regularly adjusted or updated.
Amendment 2048 #
Proposal for a regulation
Article 29 – paragraph 2
Article 29 – paragraph 2
2. The obligations in paragraphs 1 and 1a are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
Amendment 2051 #
Proposal for a regulation
Article 29 – paragraph 3
Article 29 – paragraph 3
3. Without prejudice to paragraph 1, to the extent the user exercises control over the input data, that user shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
Amendment 2053 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use and, when relevant, inform providers in accordance with Article 61. To the extent the user exercises control over the high-risk AI system, users shall also perform risk assessments in line with Article 9, but limited to the potential adverse effects of using the high-risk AI system and the respective mitigation measures. When they have reason to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1), they shall inform the provider or distributor and the relevant regulatory authority and suspend the use of the system. They shall also inform the provider or distributor and the relevant regulatory authority when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, importer or distributor, Article 62 shall apply mutatis mutandis.
Amendment 2058 #
Proposal for a regulation
Article 29 – paragraph 5 – introductory part
Article 29 – paragraph 5 – introductory part
5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. They shall keep them for a period of at least six months, unless provided otherwise in applicable Union or national law.
Amendment 2065 #
Proposal for a regulation
Article 29 – paragraph 6
Article 29 – paragraph 6
6. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where applicable, and may revert in part to those data protection impact assessments for fulfilling the obligations set out in this Article.
Amendment 2074 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
Article 29 – paragraph 6 a (new)
6 a. The provider shall be obliged to cooperate closely with the user and in particular provide the user with the necessary information to allow the fulfilment of the obligations set out in this Article.
Amendment 2077 #
Proposal for a regulation
Article 29 – paragraph 6 b (new)
Article 29 – paragraph 6 b (new)
6 b. Users shall cooperate with national competent authorities on any action those authorities take in relation to an AI system.
Amendment 2087 #
Proposal for a regulation
Article 30 – paragraph 1
Article 30 – paragraph 1
1. Each Member State shall designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring. To this end, Member States shall ensure a sufficient number of conformity assessment bodies, in order to make the certification feasible in a timely manner.
Amendment 2093 #
Proposal for a regulation
Article 31 – paragraph 2
Article 31 – paragraph 2
2. The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 33. Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added.
Amendment 2095 #
Proposal for a regulation
Article 32 – paragraph 1
Article 32 – paragraph 1
1. Notifying authorities shall notify only conformity assessment bodies which have satisfied the requirements laid down in Article 33.
Amendment 2097 #
Proposal for a regulation
Article 32 – paragraph 3
Article 32 – paragraph 3
3. The notification shall include full details of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies concerned.
Amendment 2102 #
Proposal for a regulation
Article 33 – paragraph 2 a (new)
Article 33 – paragraph 2 a (new)
2 a. Notified bodies shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive XXXX/XX on measures for a high common level of cybersecurity across the Union (NIS 2), repealing Directive (EU) 2016/1148.
Amendment 2107 #
Proposal for a regulation
Article 33 – paragraph 10
Article 33 – paragraph 10
10. Notified bodies shall have sufficient internal competences to be able to effectively evaluate the tasks conducted by external parties on their behalf. To that end, at all times and for each conformity assessment procedure and each type of high-risk AI system in relation to which they have been designated, the notified body shall have permanent availability of sufficient administrative, technical and scientific personnel who possess experience and knowledge relating to AI, data and data computing and to the requirements set out in Chapter 2 of this Title.
Amendment 2109 #
Proposal for a regulation
Article 34 – paragraph 4
Article 34 – paragraph 4
4. Notified bodies shall keep at the disposal of the notifying authority the relevant documents concerning the verification of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation.
Amendment 2116 #
Proposal for a regulation
Article 38 – paragraph 2 a (new)
Article 38 – paragraph 2 a (new)
2 a. The Commission shall provide for the exchange of knowledge and best practices between the Member States' national authorities responsible for notification policy.
Amendment 2118 #
Proposal for a regulation
Article 39 – paragraph 1
Article 39 – paragraph 1
Amendment 2120 #
Proposal for a regulation
Article 39 – paragraph 1 a (new)
Article 39 – paragraph 1 a (new)
2. Conformity assessment bodies established under the law of a third country may carry out the activities of notified bodies under this regulation where they have been accredited as competent by an accreditation body, whether established in the territory of the EU or a third country, that is a signatory of an international accreditation or conformity assessment scheme based on rigorous peer-review processes, such as the International Laboratory Accreditation Collaboration (ILAC) Mutual Recognition Arrangement (MRA) and International Accreditation Forum (IAF) Multilateral Recognition Arrangement (MLA).
Amendment 2121 #
Proposal for a regulation
Article 39 – paragraph 1 b (new)
Article 39 – paragraph 1 b (new)
3. In addition, where conformity assessment bodies established under the law of a third country have not been accredited by signatory bodies of such international accreditation or conformity assessment schemes, third-country conformity assessment bodies may carry out the activities of notified bodies where international mutual recognition arrangements, conformity assessment protocols, or other agreements exist between the EU and the country in which the conformity assessment body is established.
Amendment 2122 #
Proposal for a regulation
Article 40 – paragraph 1
Article 40 – paragraph 1
1. High-risk AI systems which are in conformity with harmonised standards developed in accordance with Regulation (EU) No 1025/2012, or parts thereof, the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.
Amendment 2124 #
Proposal for a regulation
Article 40 – paragraph 1 a (new)
Article 40 – paragraph 1 a (new)
Amendment 2128 #
Proposal for a regulation
Article 40 – paragraph 1 b (new)
Article 40 – paragraph 1 b (new)
The Commission shall issue standardisation requests covering all essential requirements of the Regulation in accordance with Article 10 of Regulation (EU) No 1025/2012 no later than 6 months after the date of entry into force of the Regulation.
Amendment 2130 #
Proposal for a regulation
Article 41 – paragraph 1
Article 41 – paragraph 1
1. The Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title for the essential requirements where health and safety, the protection of consumers or of the environment, other aspects of public interest, or clarity and practicability so require, after consulting the Board, the Committee referred to in Article 22 of Regulation (EU) No 1025/2012 as well as the relevant stakeholders, and where the following conditions have been fulfilled:
(a) the Commission has concluded that, contrary to Article 10(6) of Regulation (EU) No 1025/2012, a harmonised standard does not satisfy the requirements which it aims to cover and which are set out in the corresponding Union harmonisation legislation, and has therefore not published a reference of such harmonised standard in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012;
(b) the Commission has requested one or more European standardisation organisations to draft a harmonised standard for the essential health and safety requirements and there are undue delays in the standardisation procedure;
(c) the request has, without reason, not been accepted by the European standardisation organisations concerned.
Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2140 #
Proposal for a regulation
Article 41 – paragraph 2
Article 41 – paragraph 2
2. When preparing the common specifications referred to in paragraph 1, the Commission shall fulfil the objectives of Article 40(2) and gather the views of relevant bodies or expert groups established under relevant sectorial Union law.
Amendment 2146 #
Proposal for a regulation
Article 41 – paragraph 3
Article 41 – paragraph 3
3. High-risk AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those common specifications cover those requirements, and as long as those requirements are not covered by harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012.
Amendment 2148 #
Proposal for a regulation
Article 41 – paragraph 4
Article 41 – paragraph 4
4. Where providers do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the requirements referred to in Chapter 2 to a level at least equivalent thereto.
Amendment 2154 #
Proposal for a regulation
Article 42 – paragraph 1
Article 42 – paragraph 1
1. High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the respective requirements set out in Article 10(4).
Amendment 2155 #
Proposal for a regulation
Article 42 – paragraph 2
Article 42 – paragraph 2
2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council63 or pursuant to other harmonization legislation in the field of security of network and information systems and electronic communications networks and services and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. _________________ 63 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 1).
Amendment 2161 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following procedures:
Amendment 2166 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
Article 43 – paragraph 1 – point a
(a) the conformity assessment procedure based on internal control referred to in Annex VI; or
Amendment 2171 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
Article 43 – paragraph 1 – point b
(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
Amendment 2183 #
Proposal for a regulation
Article 43 – paragraph 2
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment based on internal control shall be verified by means of an ex-post assessment and carried out as part of the procedure referred to in Articles 97 to 101 of that Directive, but only to the extent that prudential risks and related requirements are concerned.
Amendment 2185 #
Proposal for a regulation
Article 43 – paragraph 3 – introductory part
Article 43 – paragraph 3 – introductory part
3. For high-risk AI systems to which the legal acts listed in Annex II, section A, apply, and which are subject to points 1 and 2 of Article 6, the provider shall follow the relevant conformity assessment as required under those legal acts. The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.
Amendment 2188 #
Proposal for a regulation
Article 43 – paragraph 4 – introductory part
Article 43 – paragraph 4 – introductory part
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure, in line with the provisions foreseen by the legal acts listed in Annex II, section A, whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current user.
Amendment 2196 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1 a (new)
Article 43 – paragraph 4 – subparagraph 1 a (new)
The same should apply to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system. This paragraph only applies if the Member State has established a legal framework which allows the provider of a high-risk AI system that autonomously makes substantial modifications to itself to regularly perform an automated real-time conformity assessment procedure.
Amendment 2198 #
Proposal for a regulation
Article 43 – paragraph 4 a (new)
Article 43 – paragraph 4 a (new)
4 a. Any provider may voluntarily apply for a third-party conformity assessment regardless of the risk level of their AI system.
Amendment 2200 #
Proposal for a regulation
Article 43 – paragraph 5
Article 43 – paragraph 5
5. After consulting the AI Board referred to in Article 56 and after providing substantial evidence, followed by thorough consultation and the involvement of the affected stakeholders, the Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and VII in order to amend elements of the conformity assessment procedures that become necessary or unnecessary in light of technical progress.
Amendment 2207 #
Proposal for a regulation
Article 43 – paragraph 6
Article 43 – paragraph 6
6. After consulting the AI Board referred to in Article 56 and after providing substantial evidence, followed by thorough consultation and the involvement of the affected stakeholders, the Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well as the availability of adequate capacities and resources among notified bodies.
Amendment 2212 #
Proposal for a regulation
Article 46 – paragraph 3
Article 46 – paragraph 3
3. Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same artificial intelligence technologies with relevant information on issues relating to negative and, on request, positive conformity assessment results.
Amendment 2217 #
Proposal for a regulation
Article 47 – paragraph 1 a (new)
Article 47 – paragraph 1 a (new)
1 a. In a duly justified situation of urgency for exceptional reasons of public security or in case of specific, substantial and imminent threat to the life or physical safety of natural persons, law enforcement authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1 provided that such authorisation is requested during or after the use without undue delay, and if such authorisation is rejected, its use shall be stopped with immediate effect.
Amendment 2220 #
Proposal for a regulation
Article 47 – paragraph 4
Article 47 – paragraph 4
4. Where, within 15 calendar days of receipt of the notification referred to in paragraph 2, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 2 to be unfounded, the Commission shall without delay enter into consultation with the relevant Member State; the operator(s) concerned shall be consulted and have the possibility to present their views. In view thereof, the Commission shall decide whether the authorisation is justified or not. The Commission shall address its decision to the Member State concerned and the relevant operator or operators.
Amendment 2225 #
Proposal for a regulation
Article 48 – paragraph 1
Article 48 – paragraph 1
1. The provider shall draw up a written or electronically signed EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the relevant national competent authorities upon request.
Amendment 2228 #
Proposal for a regulation
Article 48 – paragraph 5
Article 48 – paragraph 5
5. After consulting the Board, the Commission shall be empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating the content of the EU declaration of conformity set out in Annex V in order to introduce elements that become necessary in light of technical progress.
Amendment 2233 #
Proposal for a regulation
Article 49 – paragraph 1
Article 49 – paragraph 1
1. The physical CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
Amendment 2235 #
Proposal for a regulation
Article 49 – paragraph 1 a (new)
Article 49 – paragraph 1 a (new)
1 a. An electronic CE marking may replace the physical marking if it can be accessed via the display of the product or via a machine-readable code.
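As a hypothetical sketch of such a machine-readable electronic marking, assuming the third-party qrcode package (which relies on Pillow) and an entirely illustrative payload:

import json
import qrcode  # third-party package 'qrcode'; pip install qrcode

# Serialise illustrative conformity data and encode it as a QR image
# that could be shown on the product display or packaging.
marking = {
    "marking": "CE",
    "regulation": "2021/0106(COD)",
    "provider": "Example Provider GmbH",        # illustrative value
    "declaration_of_conformity": "https://example.org/doc/123",
}
img = qrcode.make(json.dumps(marking))
img.save("ce_marking.png")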
Amendment 2237 #
Proposal for a regulation
Article 50
Article 50
Amendment 2249 #
Proposal for a regulation
Article 51 – paragraph 1
Article 51 – paragraph 1
Before placing on the market or putting into service a high-risk AI system listed in Annex III, the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Amendment 2261 #
Proposal for a regulation
Article 52 – paragraph 1
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user can inform the natural person exposed to an AI system that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Where relevant, this information shall also include which functions are AI enabled, if there is human oversight, and who is responsible for the decision-making process. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 2269 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications.
Amendment 2275 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use of an AI system that generates or manipulates audio or visual content is authorised by law to detect, prevent, investigate and prosecute criminal offences, or where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic work, video game visuals or analogous work, or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 2281 #
Proposal for a regulation
Article 52 – paragraph 3 a (new)
Article 52 – paragraph 3 a (new)
3 a. The information referred to in paragraphs 1 to 3 shall be provided to natural persons in a clear and visible manner at the latest at the time of the first interaction or exposure.
Amendment 2288 #
Proposal for a regulation
Article 53 – paragraph 1
Article 53 – paragraph 1
1. The competent authorities of the Member States shall establish several physical and digital AI regulatory sandboxes six months prior to the entry into application of this Regulation, based on well-established criteria, that provide a controlled environment facilitating the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. SMEs, start-ups, enterprises, innovators or other relevant actors could be included as partners in the regulatory sandboxes. This shall take place under the direct supervision and guidance by the respective national competent authorities, or by the European Data Protection Supervisor in relation to AI systems provided by the EU institutions, bodies and agencies, with a view to identifying risks to health and safety and fundamental rights, testing mitigation measures for identified risks, demonstrating prevention of these risks and otherwise ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. The Commission shall play a complementary role, allowing those Member States with demonstrated experience with sandboxing to build on their expertise and, on the other hand, assisting and providing technical understanding and resources to those Member States that seek guidance on the set-up and running of these regulatory sandboxes.
Amendment 2299 #
Proposal for a regulation
Article 53 – paragraph 1 a (new)
Article 53 – paragraph 1 a (new)
1 a. This article shall also apply to AI systems for which full compliance with the requirements of Title III Chapter 2 requires an initial phase of placing the systems on the market or putting them into service and using the experiences gained in such initial phase to further develop the AI system so as to fully fulfil the requirements of Title III Chapter 2, particularly in the case of general purpose AI systems.
Amendment 2302 #
Proposal for a regulation
Article 53 – paragraph 1 b (new)
Article 53 – paragraph 1 b (new)
1 b. The national competent authority or the European Data Protection Supervisor, as appropriate, may also supervise testing in real world conditions upon the request of participants in the sandbox.
Amendment 2303 #
Proposal for a regulation
Article 53 – paragraph 1 c (new)
Article 53 – paragraph 1 c (new)
Amendment 2305 #
Proposal for a regulation
Article 53 – paragraph 2
Article 53 – paragraph 2
2. Member States in collaboration with the Commission shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox. As appropriate, national competent authorities may allow for the involvement in the AI regulatory sandbox of other actors within the AI ecosystem such as national or European standardisation organisations, notified bodies, testing and experimentation facilities, research and experimentation labs and innovation hubs.
Amendment 2312 #
Proposal for a regulation
Article 53 – paragraph 2 a (new)
Article 53 – paragraph 2 a (new)
2 a. Access to the AI regulatory sandboxes and supervision and guidance by the relevant authorities shall be free of charge, without prejudice to exceptional costs that national competent authorities may recover in a fair and proportionate manner. It shall be open to any provider or prospective provider of an AI system who fulfils national eligibility and selection criteria and who has been selected by the national competent authorities or by the European Data Protection Supervisor. Participation in the AI regulatory sandbox shall be limited to a period that is appropriate to the complexity and scale of the project, in any case not longer than a maximum period of 2 years, starting upon the notification of the selection decision. The participation may be extended for up to 1 more year.
Amendment 2315 #
Proposal for a regulation
Article 53 – paragraph 3
Article 53 – paragraph 3
3. The participation in the AI regulatory sandboxes shall not affect the supervisory and corrective powers of the authorities supervising the sandbox. However, provided that the participant(s) respect the sandbox plan and the terms and conditions for their participation and follow in good faith the guidance given by the authorities, no administrative enforcement action shall be taken by the authorities for infringement of applicable Union or Member State legislation.
Amendment 2320 #
Proposal for a regulation
Article 53 – paragraph 4
Article 53 – paragraph 4
4. Participants in the AI regulatory sandbox shall remain liable under applicable Union and Member States liability legislation for any harm intentionally inflicted on third parties as a result of the experimentation taking place in the sandbox, which was known or reasonably foreseeable at the time of experimentation and the risk of which the sandbox participants were not made aware of.
Amendment 2322 #
Proposal for a regulation
Article 53 – paragraph 4 a (new)
Article 53 – paragraph 4 a (new)
4 a. The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate cross-border cooperation between national competent authorities and synergies with relevant sectoral regulatory sandboxes. Cooperation may also be envisaged with third countries outside the Union establishing mechanisms to support AI innovation.
Amendment 2323 #
Proposal for a regulation
Article 53 – paragraph 5
Article 53 – paragraph 5
5. Member States’ competent authorities, in collaboration with the Commission, shall establish AI regulatory sandboxes, as much as possible through national and regional initiatives, in particular through European digital innovation hubs, and shall closely coordinate their activities as well as cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox. The annual reports or abstracts shall be made available to the public, online, in order to further enable innovation within the Union. Outcomes and learnings of the sandbox should be leveraged when monitoring the effectiveness and enforcement of this Regulation and taken into account when proceeding to amend it. The annual reports shall also be submitted to the AI Board, which shall publish on its website a summary of all good practices, lessons learnt and recommendations.
Amendment 2337 #
Proposal for a regulation
Article 53 – paragraph 6
Article 53 – paragraph 6
6. The modalities and the conditions of the operation of the AI regulatory sandboxes, including the eligibility criteria and the procedure for the application, selection, participation and exiting from the sandbox, and the rights and obligations of the participants shall be set out in implementing acts in accordance with the Council’s communication (11/2020) and in strong cooperation with relevant stakeholders. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2339 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
Article 53 – paragraph 6 a (new)
6 a. Notwithstanding the modalities and conditions outlined in paragraph 6, Member States shall design regulatory sandboxes to provide access to as many providers as possible. There shall be a particular focus on the use and application of general purpose AI systems. Member States may establish virtual sandboxing environments to ensure that sandboxes can meet the demand.
Amendment 2342 #
Proposal for a regulation
Article 53 – paragraph 6 b (new)
Article 53 – paragraph 6 b (new)
6 b. The Commission shall establish an EU AI Regulatory Sandboxing Work Programme whose modalities referred to in Article 53(6) shall cover the elements set out in Annex IXa. The Commission shall proactively coordinate with national and local authorities, where relevant.
Amendment 2348 #
Proposal for a regulation
Article 54 – paragraph 1 – introductory part
Article 54 – paragraph 1 – introductory part
1. In the AI regulatory sandbox personal data lawfully collected for other purposes may be processed for the purposes of developing and testing certain innovative AI systems in the sandbox under the following conditions:
Amendment 2350 #
Proposal for a regulation
Article 54 – paragraph 1 – point a – introductory part
Article 54 – paragraph 1 – point a – introductory part
(a) the innovative AI systems shall be developed for safeguarding substantial public interest in one or more of the following areas:
Amendment 2355 #
Proposal for a regulation
Article 54 – paragraph 1 – point c
Article 54 – paragraph 1 – point c
(c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as referred to in Article 35 of Regulation (EU) 2016/679 and in Article 35 of Regulation (EU) 2018/1725, may arise during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where necessary, stop the processing;
Amendment 2358 #
(e) any personal data processed are not transmitted, transferred or otherwise accessed by other parties that are not participants in the sandbox, nor transferred to a third country outside the Union or to an international organisation;
Amendment 2360 #
Proposal for a regulation
Article 54 – paragraph 1 – point f
Article 54 – paragraph 1 – point f
(f) any processing of personal data in the context of the sandbox shall not affect the application of the rights of the data subjects as provided for under Union law on the protection of personal data, in particular in Article 22 of Regulation (EU) 2016/679 and Article 24 of Regulation (EU) 2018/1725;
Amendment 2361 #
Proposal for a regulation
Article 54 – paragraph 1 – point g
Article 54 – paragraph 1 – point g
(g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and organisational measures and deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;
Amendment 2363 #
Proposal for a regulation
Article 54 – paragraph 1 – point h
Article 54 – paragraph 1 – point h
(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox and for 1 year after its termination, solely for the purpose of, and only as long as necessary for, fulfilling accountability and documentation obligations under this Article or other applicable Union or Member States legislation;
Amendment 2367 #
Proposal for a regulation
Article 54 – paragraph 1 a (new)
Article 54 – paragraph 1 a (new)
1 a. Provided that the conditions of paragraph 1 are met, personal data processed for developing and testing innovative AI systems in the sandbox shall be considered compatible for the purposes of Article 6(4) GDPR.
Amendment 2370 #
Proposal for a regulation
Article 55 – title
Article 55 – title
Measures for providers and users that are SMEs or start-ups
Amendment 2374 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
Article 55 – paragraph 1 – point a
(a) provide SMEs and start-ups with priority access to the AI regulatory sandboxes and make AI regulatory sandboxes reusable as well as affordable, to the extent that SMEs and start-ups fulfil the eligibility conditions;
Amendment 2376 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
Article 55 – paragraph 1 – point b
(b) organise specific awareness raising and training activities about the application of this Regulation tailored to the needs of SMEs and start-ups;
Amendment 2378 #
Proposal for a regulation
Article 55 – paragraph 1 – point c
Article 55 – paragraph 1 – point c
(c) where appropriate, establish a dedicated channel for communication with SMEs, start-ups and other innovators to provide guidance and respond to queries about the implementation of this Regulation;
Amendment 2380 #
Proposal for a regulation
Article 55 – paragraph 1 – point c a (new)
Article 55 – paragraph 1 – point c a (new)
(c a) consult representative organisations of SMEs and start-ups and involve them in the development of relevant standards;
Amendment 2382 #
Proposal for a regulation
Article 55 – paragraph 1 – point c b (new)
Article 55 – paragraph 1 – point c b (new)
(c b) create development paths and services for SMEs and start-ups, ensuring that government support is provided at all stages of their development, in particular by promoting digital tools and developing AI transition plans;
Amendment 2383 #
Proposal for a regulation
Article 55 – paragraph 1 – point c c (new)
Article 55 – paragraph 1 – point c c (new)
(c c) promote industry best practices and responsible approaches to AI development and use self-regulatory commitments as a criterion for public procurement projects or as a factor that allows more opportunities to use and share data responsibly;
Amendment 2384 #
Proposal for a regulation
Article 55 – paragraph 1 – point c d (new)
Article 55 – paragraph 1 – point c d (new)
Amendment 2385 #
Proposal for a regulation
Article 55 – paragraph 1 – point c e (new)
Article 55 – paragraph 1 – point c e (new)
(c e) reduce extensive reporting, information or documentation obligations, establish a single EU online portal in different languages concerning all necessary procedures and formalities to operate in another EU country, a single point of contact in the home country that can certify the company’s eligibility to provide services in another EU country as well as a standardized EU-wide VAT declaration in the respective native language.
Amendment 2386 #
Proposal for a regulation
Article 55 – paragraph 2
Article 55 – paragraph 2
2. The specific interests and needs of SMEs and start-ups shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size, by granting subsidies or even exempting SMEs and start-ups from paying.
Amendment 2398 #
Proposal for a regulation
Article 56 – paragraph 1
Article 56 – paragraph 1
1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established as an independent body with its own legal personality. The Board shall have a Secretariat, a strong mandate as well as sufficient resources and skilled personnel at its disposal for the assistance in the performance of its tasks laid down in Article 58.
Amendment 2407 #
Proposal for a regulation
Article 56 – paragraph 2 – point a
Article 56 – paragraph 2 – point a
(a) contribute to the effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
Amendment 2409 #
Proposal for a regulation
Article 56 – paragraph 2 – point c
Article 56 – paragraph 2 – point c
(c) assist the Commission, national supervisory authorities and other competent authorities in ensuring the consistent application of this Regulation, in particular in line with the consistency mechanism referred to in Article 59a(3);
Amendment 2412 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
Article 56 – paragraph 2 – point c a (new)
(c a) provide particular oversight, monitoring and regular dialogue with the providers of general purpose AI systems about their compliance with the Regulation. Any such meeting shall be open to national supervisory authorities, notified bodies and market surveillance authorities to attend and contribute;
Amendment 2416 #
Proposal for a regulation
Article 56 – paragraph 2 – point c b (new)
Article 56 – paragraph 2 – point c b (new)
(c b) bring together national metrology and benchmarking authorities to provide guidance to address the technical aspects of how to measure appropriate levels of accuracy and robustness.
Amendment 2429 #
Proposal for a regulation
Article 57 – paragraph 1
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority. Other national authorities may also be invited to the meetings, where the issues discussed are of relevance for them. The European Data Protection Supervisor, the Chairperson of the EU Agency for Fundamental Rights, the Executive Director of the EU Agency for Cybersecurity, the Chair of the High Level Expert Group on AI, the Director-General of the Joint Research Centre, and the presidents of the European Committee for Standardization, the European Committee for Electrotechnical Standardization, and the European Telecommunications Standards Institute shall be invited as permanent observers with the right to speak but without voting rights.
Amendment 2447 #
Proposal for a regulation
Article 57 – paragraph 2
Article 57 – paragraph 2
2. The Board shall adopt its rules of procedure by a simple majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish standing or temporary sub-groups as appropriate for the purpose of examining specific questions.
Amendment 2455 #
Proposal for a regulation
Article 57 – paragraph 3
Article 57 – paragraph 3
3. The Board shall be chaired by the Commission. The Board’s Secretariat shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Board’s Secretariat shall also provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 2462 #
Proposal for a regulation
Article 57 – paragraph 4
Article 57 – paragraph 4
4. The Board shall regularly invite external experts and observers, in particular from organisations representing the interests of the providers and users of AI systems, SMEs and start-ups, civil society organisations, representatives of affected persons, researchers, standardisation organisations, and testing and experimentation facilities, to attend its meetings in order to ensure accountability and appropriate participation of external actors. The Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
Amendment 2467 #
Proposal for a regulation
Article 57 – paragraph 4 a (new)
Article 57 – paragraph 4 a (new)
4 a. Without prejudice to paragraph 4, the Board’s Secretariat shall organise four additional meetings per year, one every quarter, between the Board and the High Level Expert Group on AI to allow them to share their practical and technical expertise.
Amendment 2491 #
Proposal for a regulation
Article 58 – paragraph 1 – point a
Article 58 – paragraph 1 – point a
(a) collect and share expertise and best practices among Member States, including on the promotion of awareness raising initiatives on Artificial Intelligence and the Regulation;
Amendment 2499 #
Proposal for a regulation
Article 58 – paragraph 1 – point b
Article 58 – paragraph 1 – point b
(b) contribute to uniform administrative practices in the Member States, including for the assessment, establishment, management (in the sense of fostering cooperation and guaranteeing consistency among regulatory sandboxes) and functioning of the regulatory sandboxes referred to in Article 53;
Amendment 2508 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii a (new)
Article 58 – paragraph 1 – point c – point iii a (new)
(iii a) on the need for the amendment of each of the Annexes as referred to in Article 73 as well as all other provisions in this Regulation that the Commission can amend, in light of the available evidence.
Amendment 2509 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii b (new)
Article 58 – paragraph 1 – point c – point iii b (new)
(iii b) on activities and decisions of Member States regarding post-market monitoring, information sharing, market surveillance referred to in Title VIII;
Amendment 2510 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii c (new)
Article 58 – paragraph 1 – point c – point iii c (new)
(iii c) on developing common criteria for market operators and competent authorities to have the same understanding of concepts such as the ‘generally acknowledged state of the art’ referred to in Article 9(3), ‘foreseeable risks’ referred to in Article 9(2)(a), ‘foreseeable misuse’ referred to in Article 3(13), Article 9(2)(b), Article 9(4), Article 13(3)(b)(iii) and Article 14(2), and the ‘type and degree of transparency’ referred to in Article 13(1);
Amendment 2511 #
Proposal for a regulation
Article 58 – paragraph 1 – point c – point iii d (new)
Article 58 – paragraph 1 – point c – point iii d (new)
(iii d) on verifying alignment with the legal acts listed in Annex II, including implementation matters related to those acts.
Amendment 2512 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
Article 58 – paragraph 1 – point c a (new)
(c a) carry out annual reviews and analyses of the complaints sent to and findings made by national supervisory authorities, of the serious incident reports referred to in Article 62, and of the new registrations in the EU database referred to in Article 60, in order to identify trends and potential emerging issues threatening the future health and safety and fundamental rights of citizens that are not adequately addressed by this Regulation;
Amendment 2522 #
Proposal for a regulation
Article 58 – paragraph 1 – point c b (new)
Article 58 – paragraph 1 – point c b (new)
(c b) carry out biannual horizon scanning and foresight exercises to extrapolate the impact the trends and emerging issues can have on the Union;
Amendment 2525 #
Proposal for a regulation
Article 58 – paragraph 1 – point c c (new)
Article 58 – paragraph 1 – point c c (new)
(c c) annually publish recommendations to the Commission, in particular on the categorization of prohibited practices, high-risk systems, and codes of conduct for AI systems that are not classified as high-risk;
Amendment 2531 #
Proposal for a regulation
Article 58 – paragraph 1 – point c d (new)
Article 58 – paragraph 1 – point c d (new)
(c d) encourage and facilitate the drawing up of codes of conduct as referred to in Article 69;
Amendment 2535 #
Proposal for a regulation
Article 58 – paragraph 1 – point c e (new)
Article 58 – paragraph 1 – point c e (new)
(c e) coordinate among national supervisory authorities and make sure that the consistency mechanism in Article 59a(3) is observed;
Amendment 2536 #
Proposal for a regulation
Article 58 – paragraph 1 – point c f (new)
Article 58 – paragraph 1 – point c f (new)
(c f) adopt binding decisions for national supervisory authorities in case the consistency mechanism is not able to solve the conflict among national supervisory authorities as it is clarified in Article 59a(6);
Amendment 2541 #
Proposal for a regulation
Article 58 – paragraph 1 – point c g (new)
Article 58 – paragraph 1 – point c g (new)
(c g) issue yearly reports on the implementation of the Regulation, including an assessment of the impact of the Regulation on economic operators.
Amendment 2553 #
Proposal for a regulation
Article 58 a (new)
Article 58 a (new)
Article 58 a
Guidelines from the Commission on the implementation of this Regulation
Upon the request of the Member States or the Board, or on its own initiative, the Commission shall issue guidelines on the practical implementation of this Regulation and in particular on:
(i) the application of the requirements referred to in Articles 8 - 15;
(ii) the prohibited practices referred to in Article 5;
(iii) the practical implementation of the provisions related to substantial modification;
(iv) the identification and application of criteria and use cases related to high-risk AI systems referred to in Annex III;
(v) the practical implementation of transparency obligations laid down in Article 52;
(vi) the relationship of this Regulation with other relevant Union legislation.
When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs and start-ups as well as sectors most likely to be affected by this Regulation.
Amendment 2557 #
Proposal for a regulation
Title VI – Chapter 2 – title
Title VI – Chapter 2 – title
2 National supervisory authorities
Amendment 2558 #
Proposal for a regulation
Article 59 – title
Article 59 – title
Designation of national supervisory authorities
Amendment 2559 #
Proposal for a regulation
Article 59 – paragraph 1
Article 59 – paragraph 1
1. Each Member State shall establish or designate, for the purpose of ensuring the application and implementation of this Regulation, one national supervisory authority, which shall be organised so as to safeguard the objectivity and impartiality of its activities and tasks.
Amendment 2562 #
Proposal for a regulation
Article 59 – paragraph 2
Article 59 – paragraph 2
2. The national supervisory authority shall be in charge of ensuring the application and implementation of this Regulation. With regard to high-risk AI systems related to products to which the legal acts listed in Annex II apply, the competent authorities designated under those legal acts shall continue to lead the administrative procedures. However, to the extent a case involves aspects covered by this Regulation, those competent authorities shall be bound by the measures issued by the national supervisory authority designated under this Regulation. The national supervisory authority shall also act as notifying authority and market surveillance authority.
Amendment 2566 #
Proposal for a regulation
Article 59 – paragraph 3
Article 59 – paragraph 3
3. The national supervisory authority in each Member State shall be the lead authority, ensure adequate coordination and act as single point of contact for this Regulation. Member States shall inform the Commission of their designations.
Amendment 2570 #
Proposal for a regulation
Article 59 – paragraph 4
Article 59 – paragraph 4
4. Member States shall ensure that the national supervisory authority is provided with adequate financial and human resources to fulfil its tasks under this Regulation. In particular, national supervisory authorities shall have a sufficient number of permanently available personnel, whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data, data protection and data computing, cybersecurity, competition law, fundamental rights, and health and safety risks, as well as knowledge of existing standards and legal requirements.
Amendment 2573 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
Article 59 – paragraph 4 a (new)
4 a. National supervisory authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive XXXX/XX on measures for a high common level of cybersecurity across the Union (NIS 2), repealing Directive (EU) 2016/1148.
Amendment 2576 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
Article 59 – paragraph 4 b (new)
4 b. Any information and documentation obtained by the national supervisory authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2582 #
Proposal for a regulation
Article 59 – paragraph 5
Article 59 – paragraph 5
5. Member States shall report to the Commission on an annual basis on the status of the financial and human resources of the national supervisory authority, with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.
Amendment 2585 #
Proposal for a regulation
Article 59 – paragraph 6
Article 59 – paragraph 6
6. The Commission and the Board shall facilitate the exchange of experience between national supervisory authorities.
Amendment 2592 #
Proposal for a regulation
Article 59 – paragraph 7
Article 59 – paragraph 7
7. National supervisory authorities may provide guidance and advice on the implementation of this Regulation, including to SMEs and start-ups, as long as it is not in contradiction with the Board’s or the Commission’s guidance and advice. Whenever national supervisory authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.
Amendment 2596 #
Proposal for a regulation
Article 59 a (new)
Article 59 a (new)
Amendment 2616 #
Proposal for a regulation
Article 60 – paragraph 1
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States and by building on the existing Business Registries in line with Directive 2012/17/EU, set up and maintain an EU database containing the information referred to in paragraph 2 concerning high-risk AI systems listed in Annex III which are registered in accordance with Article 51.
Amendment 2619 #
Proposal for a regulation
Article 60 – paragraph 2
Article 60 – paragraph 2
2. The data listed in Annex VIII shall be entered into the EU database by the providers. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources, to the extent such data are readily accessible to the provider and taking into account the limits resulting from data protection, copyright and competition law, on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
Amendment 2631 #
Proposal for a regulation
Article 60 – paragraph 4 a (new)
Article 60 – paragraph 4 a (new)
4 a. The EU database shall not contain any confidential business information or trade secrets of a natural or legal person, including source code.
Amendment 2636 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
Article 60 – paragraph 5 a (new)
5 a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2639 #
1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.
Amendment 2644 #
Proposal for a regulation
Article 61 – paragraph 2
Article 61 – paragraph 2
2. In order to allow the provider to evaluate the compliance of AI systems with the requirements set out in Title III, Chapter 2 throughout their lifetime, the post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources, to the extent such data are readily accessible to the provider and taking into account the limits resulting from data protection, copyright and competition law, on the performance of high-risk AI systems.
Amendment 2649 #
Proposal for a regulation
Title VIII – Chapter 2 – title
Title VIII – Chapter 2 – title
2 Sharing of information on incidents and malfunctioning
Amendment 2650 #
Proposal for a regulation
Article 62 – title
Article 62 – title
Reporting of serious incidents and of malfunctioning
Amendment 2653 #
Proposal for a regulation
Article 62 – paragraph 1 – introductory part
Article 62 – paragraph 1 – introductory part
1. Providers of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred.
Amendment 2662 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made without undue delay after the provider has established a causal link between the AI system and the serious incident or malfunctioning, or the reasonable likelihood of such a link, and, in any event, not later than 72 hours after the provider becomes aware of the serious incident or of the malfunctioning.
Amendment 2663 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1 a (new)
Article 62 – paragraph 1 – subparagraph 1 a (new)
No report under this Article is required if the serious incident also leads to reporting requirements under other laws. In that case, the authorities competent under those laws shall forward the received report to the national competent authority.
Amendment 2665 #
Proposal for a regulation
Article 62 – paragraph 2
Article 62 – paragraph 2
Amendment 2669 #
Proposal for a regulation
Article 62 – paragraph 3
Article 62 – paragraph 3
3. For high-risk AI systems referred to in point 5(b) of Annex III which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU, and for high-risk AI systems which are safety components of devices, or are themselves devices, subject to regulations that require solutions equivalent to those set out in this Regulation, the notification of serious incidents or malfunctioning shall be limited to those referred to in Article 3(44).
Amendment 2675 #
Proposal for a regulation
Article 63 – paragraph 3 a (new)
Article 63 – paragraph 3 a (new)
Amendment 2680 #
Proposal for a regulation
Article 64 – paragraph 1
Article 64 – paragraph 1
1. When appropriate and proportionate, market surveillance authorities may request access to data and documentation in the context of their activities. The market surveillance authorities shall only be granted access to those training, machine-learning validation and testing datasets used by the provider that are relevant and strictly necessary for the purpose of its request, after it has been clearly demonstrated that the data and documentation provided under paragraph 1 was not sufficient to assess conformity.
Amendment 2686 #
Proposal for a regulation
Article 64 – paragraph 1 a (new)
Article 64 – paragraph 1 a (new)
1 a. Providers may challenge requests through an appeal procedure made available by Member States.
Amendment 2687 #
Proposal for a regulation
Article 64 – paragraph 2
Article 64 – paragraph 2
Amendment 2699 #
Proposal for a regulation
Article 64 – paragraph 4
Article 64 – paragraph 4
4. By 3 months after the entry into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. Member States shall notify the list to the Commission and all other Member States and keep the list up to date. The European Commission shall publish on a dedicated website the list of all the competent authorities designated by the Member States in accordance with this Article.
Amendment 2746 #
Proposal for a regulation
Article 67
Article 67
Amendment 2766 #
Proposal for a regulation
Article 68 – paragraph 1 – point b
Article 68 – paragraph 1 – point b
(b) the CE marking has not been affixed;
Amendment 2768 #
Proposal for a regulation
Article 68 – paragraph 2
Article 68 – paragraph 2
2. Where the non-compliance referred to in paragraph 1 persists, the Member State concerned shall take all proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market.
Amendment 2777 #
Proposal for a regulation
Article 68 a (new)
Article 68 a (new)
Article 68 a
Right to lodge a complaint with a supervisory authority
1. Every citizen who considers that his or her right to protection of personal data has been infringed by the use of a prohibited AI system or a high-risk AI system shall have the right to lodge a complaint with the authority in charge of handling complaints under Article 77 of Regulation (EU) 2016/679 in the Member State of his or her habitual residence, place of work or place of the alleged infringement.
2. The supervisory authority with which the complaint has been lodged shall inform the complainant of the progress and the outcome of the complaint.
Amendment 2788 #
Proposal for a regulation
Article 69 – paragraph 1 a (new)
Article 69 – paragraph 1 a (new)
1 a. The Commission and the Board shall encourage and facilitate the drawing up of Codes of Conduct intended to foster the voluntary application of the concept of trustworthy AI set out in Article 4(a) to AI systems other than high-risk AI systems on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the system.
Amendment 2794 #
Proposal for a regulation
Article 69 – paragraph 4
Article 69 – paragraph 4
4. The Commission and the Board shall take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct.
Amendment 2795 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
Article 70 – paragraph 1 – introductory part
1. National competent authorities, market surveillance authorities and notified bodies involved in the application of this Regulation shall put effective cybersecurity, technical and organisational measures in place to ensure the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
Amendment 2799 #
Proposal for a regulation
Article 70 – paragraph 1 – point a
Article 70 – paragraph 1 – point a
(a) intellectual property rights, and confidential business information or trade secrets of a natural or legal person in line with the 2016 EU Trade Secrets Directive (Directive 2016/943) as well as the 2004 Directive on the enforcement of intellectual property rights (Directive 2004/48/EC), including source code, except where the cases referred to in Article 5 of Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure apply.
Amendment 2802 #
Proposal for a regulation
Article 70 – paragraph 1 – point c a (new)
Article 70 – paragraph 1 – point c a (new)
(c a) the principles of purpose limitation and data minimization, meaning that national competent authorities minimize the quantity of data requested for disclosure in line with what is absolutely necessary for the perceived risk and its assessment, and they must not keep the data for any longer than absolutely necessary.
Amendment 2804 #
Proposal for a regulation
Article 70 – paragraph 1 a (new)
Article 70 – paragraph 1 a (new)
1 a. In cases where the activity of national competent authorities, market surveillance authorities and notified bodies pursuant to the provisions of this Article results in a breach of intellectual property rights, Member States shall provide for the measures, procedures and remedies necessary to ensure the enforcement of the intellectual property rights in full application of Directive 2004/48/EC on the enforcement of intellectual property rights.
Amendment 2811 #
Proposal for a regulation
Article 70 – paragraph 4
Article 70 – paragraph 4
4. The Commission and Member States may, if consistent with the provisions contained in EU trade agreements with third countries, exchange, where necessary, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.
Amendment 2818 #
Proposal for a regulation
Article 71 – paragraph 1
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented and aligned with the guidelines issued by the Board, as referred to in Article 58 (c) (iii). The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of SMEs and start-ups and their economic viability.
Amendment 2826 #
Proposal for a regulation
Article 71 – paragraph 2
Article 71 – paragraph 2
2. The Member States shall notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
Amendment 2833 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
Article 71 – paragraph 3 – introductory part
3. Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2837 #
Proposal for a regulation
Article 71 – paragraph 3 – point a
Article 71 – paragraph 3 – point a
Amendment 2843 #
Amendment 2847 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The grossly negligent non-compliance by the provider or user of the AI system with the respective requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, and, in the case of SMEs and start-ups, up to 1 % of its worldwide annual turnover for the preceding financial year.
Amendment 2855 #
Proposal for a regulation
Article 71 – paragraph 5
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher, and, in the case of SMEs and start-ups, up to 1 % of its worldwide annual turnover for the preceding financial year.
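For orientation only, and not part of any amendment text: the ‘whichever is higher’ fine ceilings proposed in Amendments 2833, 2847 and 2855 can be made concrete with a minimal sketch. The function name, the example turnover and the reading of the SME and start-up carve-out (a 1 % ceiling replacing both the fixed amount and the 2 % ceiling) are assumptions for illustration.

def max_fine_eur(turnover_eur: float, article_5_infringement: bool,
                 is_sme_or_startup: bool) -> float:
    # Upper bound of the administrative fine only; the actual amount must
    # weigh all circumstances listed in Article 71(6).
    if article_5_infringement:
        # Paragraph 3: up to 20 000 000 EUR or 4 % of total worldwide
        # annual turnover, whichever is higher.
        return max(20_000_000, 0.04 * turnover_eur)
    if is_sme_or_startup:
        # Assumed reading of paragraphs 4 and 5: a 1 % ceiling for SMEs
        # and start-ups.
        return 0.01 * turnover_eur
    # Paragraphs 4 and 5: up to 10 000 000 EUR or 2 %, whichever is higher.
    return max(10_000_000, 0.02 * turnover_eur)

# Example: a company with 2 bn EUR worldwide annual turnover infringing
# Article 5 faces a ceiling of max(20 000 000, 80 000 000) = 80 000 000 EUR.
print(max_fine_eur(2_000_000_000, True, False))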
Amendment 2862 #
Proposal for a regulation
Article 71 – paragraph 6 – introductory part
Article 71 – paragraph 6 – introductory part
6. Fines may be imposed in addition to or instead of non-monetary measures such as orders or warnings. When deciding on whether to impose a fine or on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:
Amendment 2863 #
Proposal for a regulation
Article 71 – paragraph 6 – point a
Article 71 – paragraph 6 – point a
(a) the nature, gravity and duration of the infringement and of its consequences taking into account the nature, scope or purpose of the AI system concerned, as well as the number of individuals affected, and the level of damage suffered by them;
Amendment 2867 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
Article 71 – paragraph 6 – point c
(c) the size, the annual turnover and market share of the operator committing the infringement;
Amendment 2868 #
Proposal for a regulation
Article 71 – paragraph 6 – point c a (new)
Article 71 – paragraph 6 – point c a (new)
(c a) any action taken by the provider to mitigate the harm or damage suffered by the affected persons;
Amendment 2869 #
Proposal for a regulation
Article 71 – paragraph 6 – point c a (new)
Article 71 – paragraph 6 – point c a (new)
(c a) the intentional or negligent character of the infringement;
Amendment 2870 #
Proposal for a regulation
Article 71 – paragraph 6 – point c c (new)
Article 71 – paragraph 6 – point c c (new)
(c c) the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the possible adverse effects of the infringement;
Amendment 2871 #
Proposal for a regulation
Article 71 – paragraph 6 – point c c (new)
Article 71 – paragraph 6 – point c c (new)
(c c) any relevant previous infringements by the provider;
Amendment 2872 #
Proposal for a regulation
Article 71 – paragraph 6 – point c e (new)
Article 71 – paragraph 6 – point c e (new)
(c e) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement;
Amendment 2873 #
Proposal for a regulation
Article 71 – paragraph 6 – point c e (new)
Article 71 – paragraph 6 – point c e (new)
(c e) the manner in which the infringement became known to the national competent authority, in particular whether, and if so to what extent, the provider notified the infringement;
Amendment 2874 #
Proposal for a regulation
Article 71 – paragraph 6 – point c g (new)
Article 71 – paragraph 6 – point c g (new)
(c g) in the context of paragraph 5 of this Article, the intentional or unintentional nature of the infringement.
Amendment 2880 #
Proposal for a regulation
Article 71 – paragraph 8 a (new)
Article 71 – paragraph 8 a (new)
8 a. Administrative fines shall not be applied to a participant in a regulatory sandbox who was acting in line with the recommendations issued by the supervisory authority.
Amendment 2882 #
Proposal for a regulation
Article 71 – paragraph 8 b (new)
Article 71 – paragraph 8 b (new)
8 b. The penalties referred to in this Article as well as the associated litigation costs and indemnification claims may not be the subject of contractual clauses or other forms of burden-sharing agreements between the providers and distributors, importers, users, or any other third parties.
Amendment 2883 #
Proposal for a regulation
Article 71 – paragraph 8 c (new)
Article 71 – paragraph 8 c (new)
8 c. The exercise by the market surveillance authority of its powers under this Article shall be subject to appropriate procedural safeguards in accordance with Union and Member State law, including effective judicial remedy and due process.
Amendment 2920 #
Proposal for a regulation
Article 73 – paragraph 2 a (new)
Article 73 – paragraph 2 a (new)
2 a. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall undergo due process, be proportionate and be based on a permanent and institutionalised exchange with the relevant stakeholders as well as the Board and the High Level Expert Group on AI.
Amendment 2934 #
Proposal for a regulation
Article 80 – paragraph 1 – introductory part
Article 80 – paragraph 1 – introductory part
In Article 5 of Regulation (EU) 2018/858 the following paragraphs are added:
Amendment 2936 #
Proposal for a regulation
Article 80 – paragraph 1
Article 80 – paragraph 1
Regulation (EU) 2018/858
Article 5
Article 5
4 a. The Commission shall, prior to fulfilling the obligation pursuant to paragraph 4, provide a reasonable explanation based on a gap analysis of existing sectoral legislation in the automotive sector to determine the existence of potential gaps relating to Artificial Intelligence therein, and consult relevant stakeholders, in order to avoid duplication and overregulation, in line with the Better Regulation principle.
Amendment 2938 #
Proposal for a regulation
Article 82 – paragraph 1 – introductory part
Article 82 – paragraph 1 – introductory part
In Article 11 of Regulation (EU) 2019/2144, the following paragraphs are added:
Amendment 2941 #
Proposal for a regulation
Article 82 – paragraph 1
Article 82 – paragraph 1
Regulation (EU) 2019/2144
Article 11
Article 11
3 a. The Commission shall, prior to fulfilling the obligation pursuant to paragraph 3, provide a reasonable explanation based on a gap analysis of existing sectoral legislation in the automotive sector to determine the existence of potential gaps relating to Artificial Intelligence therein, and consult relevant stakeholders, in order to avoid duplication and overregulation, in line with the Better Regulation principle.
Amendment 2955 #
Proposal for a regulation
Article 83 – paragraph 2
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes as defined in Article 3(23) in their design or intended purpose, and those changes are not needed to comply with applicable existing or new legislation, or to provide security fixes.
Amendment 2998 #
Proposal for a regulation
Article 84 – paragraph 7 a (new)
Article 84 – paragraph 7 a (new)
7 a. Any amendment to this Regulation pursuant to paragraph 7, or relevant future delegated or implementing acts, which concerns sectoral legislation listed in Annex II, Section B, shall take into account the regulatory specificities of each sector and should not interfere with existing governance, conformity assessment and enforcement mechanisms and authorities established therein.
Amendment 3002 #
Proposal for a regulation
Article 85 – paragraph 2
Article 85 – paragraph 2
2. This Regulation shall apply from [48 months following the entry into force of the Regulation].
Amendment 3005 #
Proposal for a regulation
Article 85 – paragraph 3 – point b a (new)
Article 85 – paragraph 3 – point b a (new)
(b a) Title II shall apply from [24 months following the entry into force of this Regulation].
Amendment 3006 #
Proposal for a regulation
Article 85 – paragraph 3 a (new)
Article 85 – paragraph 3 a (new)
3 a. Until ... [24 months after the date of application of this Regulation], Member States shall not impede the making available of AI systems and products which were placed on the market in conformity with Union harmonisation legislation before [the date of application of this Regulation].
Amendment 3009 #
Proposal for a regulation
Article 85 – paragraph 3 b (new)
Article 85 – paragraph 3 b (new)
3 b. At the latest six months after the entry into force of this Regulation, the European Commission shall submit a standardisation request to the European Standardisation Organisations in order to ensure the timely provision of all relevant harmonised standards that cover the essential requirements of this Regulation. Any delay in submitting the standardisation request shall add to the transitional period of 24 months as stipulated in paragraph 4.
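For orientation only, outside the amendment text: the interaction between the six-month standardisation deadline and the transitional period in this paragraph is simple date arithmetic. The dates and the helper below are illustrative assumptions only (note that this paragraph refers to a 24-month transitional period, while Amendment 3002 proposes 48 months).

from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date by whole months; the day is clamped to 28 so the
    # result is valid in every month (illustrative helper only).
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, min(d.day, 28))

# Hypothetical dates, purely for illustration.
entry_into_force = date(2024, 8, 1)
request_submitted = date(2025, 4, 1)  # two months past the deadline

deadline = add_months(entry_into_force, 6)
delay_months = max(0, (request_submitted.year - deadline.year) * 12
                   + request_submitted.month - deadline.month)
# Each month of delay extends the 24-month transitional period.
application_date = add_months(entry_into_force, 24 + delay_months)
print(delay_months, application_date)  # 2 2026-10-01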
Amendment 3015 #
Proposal for a regulation
Annex I – point b
Annex I – point b
Amendment 3020 #
Proposal for a regulation
Annex I – point c
Annex I – point c
Amendment 3031 #
Proposal for a regulation
Annex II – Part A – point 6
Annex II – Part A – point 6
Amendment 3032 #
Proposal for a regulation
Annex II – Part A – point 11
Annex II – Part A – point 11
Amendment 3033 #
Proposal for a regulation
Annex II – Part A – point 11
Annex II – Part A – point 11
Amendment 3034 #
Proposal for a regulation
Annex II – Part A – point 12
Annex II – Part A – point 12
Amendment 3038 #
Proposal for a regulation
Annex II – Part B – point 7 a (new)
Annex II – Part B – point 7 a (new)
7 a. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
Amendment 3039 #
Proposal for a regulation
Annex II – Part B – point 7 a (new)
Annex II – Part B – point 7 a (new)
7 a. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
Amendment 3041 #
Proposal for a regulation
Annex II – Part B – point 7 b (new)
Annex II – Part B – point 7 b (new)
7 b. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Amendment 3044 #
Proposal for a regulation
Annex III – title
Annex III – title
Amendment 3046 #
Proposal for a regulation
Annex III – paragraph 1 – introductory part
Annex III – paragraph 1 – introductory part
Amendment 3048 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric systems, excluding biometric authentication or verification, intended to be used for the ‘real-time’ and ‘post’ remote biometric identification or categorisation of natural persons (i.e., revealing their identity or tracking their behaviour) without their expressed or implied consent and causing legal effects or discrimination against the affected person;
Amendment 3055 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
Annex III – paragraph 1 – point 1 – point a
Amendment 3094 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety or security components in the management and operation of road traffic, to the extent that they are not embedded in a vehicle;
Amendment 3095 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a a (new)
Annex III – paragraph 1 – point 2 – point a a (new)
(a a) AI systems intended to be used as safety or security components in the management and operation of the supply of water, gas, heating and electricity, provided the failure of the AI system is highly likely to lead to an imminent threat to such supply.
Amendment 3097 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point a
Annex III – paragraph 1 – point 3 – point a
(a) AI systems intended to be used for the purpose of determining access or materially influencing decisions on the admission of natural persons to educational and vocational training institutions;
Amendment 3101 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
Annex III – paragraph 1 – point 3 – point b
(b) AI systems intended to be used for the purpose of assessing the learning outcome of students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to these institutions.
Amendment 3105 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – introductory part
Annex III – paragraph 1 – point 4 – introductory part
Amendment 3106 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
Annex III – paragraph 1 – point 4 – point a
Amendment 3112 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
Amendment 3124 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point a
Annex III – paragraph 1 – point 5 – point a
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate and decide on the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
Amendment 3126 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
Amendment 3153 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Annex III – paragraph 1 – point 6 – point a
(a) AI systems intended to be used by law enforcement authorities or on their behalf for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for a natural person to become a potential victim of criminal offences;
Amendment 3179 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Annex III – paragraph 1 – point 6 – point e
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, with the exception of AI systems used for compliance with applicable counterterrorism and anti-money laundering legislation;
Amendment 3186 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point g
Annex III – paragraph 1 – point 6 – point g
Amendment 3229 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – introductory part
Annex III – paragraph 1 – point 8 – introductory part
8. Administration of justice and democratic processes:
Amendment 3232 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 – point a
Annex III – paragraph 1 – point 8 – point a
(a) AI systems intended to be used by a judicial authority, administrative body or on their behalf for researching and interpreting facts and the law and for applying the law to a concrete set of facts.
Amendment 3247 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point a
Annex IV – paragraph 1 – point 1 – point a
(a) its intended purpose, the name of the provider and the version of the system;
Amendment 3252 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point b
Annex IV – paragraph 1 – point 1 – point b
(b) how the AI system interacts or is intended to be used with hardware or software that is not part of the AI system itself, where applicable;
Amendment 3254 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point c
Annex IV – paragraph 1 – point 1 – point c
(c) the versions of relevant software or firmware and version update information for the user, where applicable;
Amendment 3255 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point d
Annex IV – paragraph 1 – point 1 – point d
(d) the description or list of the various configurations and variants of the AI system which are intended to be made available on the market or put into service;
Amendment 3256 #
Proposal for a regulation
Annex IV – paragraph 1 – point 1 – point f
Annex IV – paragraph 1 – point 1 – point f
(f) descriptions and, if applicable, photographs or illustrations of the user interface;
Amendment 3259 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – introductory part
Annex IV – paragraph 1 – point 2 – introductory part
2. Provided that no confidential information or trade secrets are disclosed, a detailed description of the AI system and of the process for its development, including:
Amendment 3261 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
Annex IV – paragraph 1 – point 2 – point b
(b) the architecture and design specifications: a description of the AI system, namely the general logic of the AI system and of the architecture, with a decomposition of its components and interfaces, how they relate to one another and how they provide for the overall processing or logic of the AI system; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
Amendment 3264 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point c
Annex IV – paragraph 1 – point 2 – point c
Amendment 3265 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point d
Annex IV – paragraph 1 – point 2 – point d
Amendment 3268 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point g
Annex IV – paragraph 1 – point 2 – point g
(g) the validation and testing procedures used, including information about the machine-learning validation and testing data used and their main characteristics; information used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f);
Amendment 3271 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point g a (new)
Annex IV – paragraph 1 – point 2 – point g a (new)
(g a) cybersecurity measures put in place.
Amendment 3278 #
Proposal for a regulation
Annex IV – paragraph 1 – point 5
Annex IV – paragraph 1 – point 5
Amendment 3284 #
Proposal for a regulation
Annex VII – point 4 – point 4.3
Annex VII – point 4 – point 4.3
4.3. The technical documentation shall be examined by the notified body. To this purpose, the notified body shall be granted full access to the training and testing datasets used by the provider, including through application programming interfaces (API) or other appropriate means and tools enabling remote access.
Amendment 3285 #
Proposal for a regulation
Annex VII – point 4 – point 4.4
Annex VII – point 4 – point 4.4
4.4. In examining the technical documentation, the notified body may require that the provider supplies further evidence or carries out further tests so as to enable a proper assessment of conformity of the AI system with the requirements set out in Title III, Chapter 2. Whenever the notified body is not satisfied with the tests carried out by the provider, the notified body shall directly carry out adequate tests, as appropriate.
Amendment 3305 #
Proposal for a regulation
Annex VIII – point 11
Annex VIII – point 11
Amendment 3311 #
Proposal for a regulation
Annex IX – title
Annex IX – title