148 Amendments of Ondřej KOVAŘÍK related to 2021/0106(COD)
Amendment 77 #
Proposal for a regulation
Recital 5 a (new)
(5 a) Welcomes the regulation on artificial intelligence, which aims to create legal certainty and coherence across the EU. Notes, however, that the transport and tourism sectors are already regulated by sector-specific rules, and recalls the need to ensure coherence and complementarity with the existing legislation. To avoid unnecessary overlap and double regulation, this Regulation should only apply when sector-specific legislation imposing equal or stricter rules is not already in place.
Amendment 103 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative, up-to-date, free of errors to the best extent possible and as complete as possible in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, sectorial, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection, update and correction in relation to high-risk AI systems.
Amendment 111 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. It is especially important to ensure that SMEs and start-ups can easily access these sandboxes, are actively involved and participate in the development and testing of innovative AI systems, in order to be able to contribute with their knowhow and experience. Their participation should be supported and facilitated.
Amendment 115 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation, a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence. In order to ensure a common and consistent approach to the development of AI and ensure good cooperation and exchange of views, the Board should regularly consult other EU institutions as well as all relevant sector-specific stakeholders.
Amendment 116 #
Proposal for a regulation
Recital 77 a (new)
(77 a) To encourage knowledge sharing from best practices, the Commission should organise regular consultative meetings for knowhow exchange between different Member States' national authorities responsible for notification policy.
Amendment 202 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, representative, up-to-date, free of errors to the best extent possible and as complete as possible. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 236 #
Proposal for a regulation
Article 39 a (new)
Article 39 a
Exchange of knowhow and best practices
The Commission shall facilitate regular consultative meetings for the exchange of knowhow and best practices between the Member States' national authorities responsible for notification policy.
Amendment 239 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies or expert groups established under relevant sectorial Union law, as well as relevant sector-specific stakeholders.
Amendment 253 #
Proposal for a regulation
Article 53 – paragraph 1 a (new)
1 a. The organisers of AI regulatory sandboxes shall ensure easy access for SMEs and start-ups by facilitating and supporting their participation and by mitigating the administrative burden that might arise from joining.
Amendment 263 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor. Other national, regional and local authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 265 #
Proposal for a regulation
Article 57 – paragraph 3 a (new)
3 a. The Board shall organise consultations with stakeholders at least twice a year. Such stakeholders shall include representatives from industry, SMEs and start-ups, civil society organisations such as NGOs, consumer associations, the social partners and academia, to assess the evolution of trends in technology, issues related to the implementation and the effectiveness of this Regulation, and regulatory gaps or loopholes observed in practice.
Amendment 271 #
Proposal for a regulation
Article 60 – paragraph 3
3. Information contained in the EU database shall be accessible to the public, user-friendly, easily navigable and machine-readable.
Amendment 280 #
Proposal for a regulation
Article 69 – paragraph 3
3. Codes of conduct may be drawn up by national, regional or local authorities, by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
Amendment 348 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules as well as measures in support of innovation with a particular focus on SMEs and start-ups, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34.
_________________
33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6.
34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 358 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. Therefore, the term AI system should be defined in line with internationally accepted definitions. The definition should be based on the key functional characteristics of AI systems, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital environment. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. In order to ensure alignment of definitions on an international level, the European Commission should engage in a dialogue with international organisations such as the Organisation for Economic Co-operation and Development (OECD), should their definitions of the term ‘AI system’ be adjusted.
Amendment 374 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespective of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned. The notion of remote biometric identification system shall not include verification or authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises.
Amendment 399 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should not undermine research and development activity and should respect freedom of science. It is therefore necessary to exclude from its scope AI systems specifically developed and put into service for the sole purpose of scientific research and development and to ensure that the Regulation does not otherwise affect scientific research and development activity on AI systems. As regards product-oriented research activity by providers, the provisions of this Regulation should apply insofar as such research leads to or entails the placing of an AI system on the market or putting it into service. Under all circumstances, any research and development activity should be carried out in accordance with recognised ethical standards for scientific research.
Amendment 404 #
Proposal for a regulation
Recital 12 b (new)
(12 b) Given the complexity of the value chain for AI systems, it is essential to clarify the role of persons who may contribute to the development of AI systems covered by this Regulation, without being providers and thus being obliged to comply with the obligations and requirements established herein. It is necessary to clarify that general purpose AI systems - understood as AI systems that are able to perform generally applicable functions such as image/speech recognition, audio/video generation, pattern detection, question answering, translation etc. - should not be considered as having an intended purpose within the meaning of this Regulation, unless those systems have been adapted to a specific intended purpose that falls within the scope of this Regulation. Initial providers of general purpose AI systems should therefore only have to comply with the provisions on accuracy, robustness and cybersecurity as laid down in Article 15 of this Regulation. If a person adapts a general purpose AI application to a specific intended purpose and places it on the market or puts it into service, it shall be considered the provider and be subject to the obligations laid down in this Regulation. The initial provider of a general purpose AI application shall, after placing it on the market or putting it into service, and without compromising its own intellectual property rights or trade secrets, provide the new provider with all essential, relevant and reasonably expected information that is necessary to comply with the obligations set out in this Regulation.
Amendment 430 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of distorting human behaviour, whereby physical or psychological harms are reasonably likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of specific groups of persons due to their age, disabilities, social or economic situation. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 534 #
Proposal for a regulation
Recital 30
(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body, in order to ensure compliance with essential safety requirements, pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
Amendment 546 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk, except for verification or authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
Amendment 563 #
Proposal for a regulation
Recital 36
(36) AI systems used for making autonomous decisions or materially influencing decisions in employment, workers management and access to self- employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
Amendment 576 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by SMEs and start-ups for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 674 #
Proposal for a regulation
Recital 61
(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist and are not expected to be published within a reasonable period or where they are insufficient, only after consulting the Artificial Intelligence Board, the European standardisation organisations as well as the relevant stakeholders. The Commission should duly justify why it decided not to use harmonised standards.
_________________
54 Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).
Amendment 713 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use or where the content is part of an obviously artistic, creative or fictional cinematographic work. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic should disclose, in an appropriate, clear and visible manner, that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
Amendment 733 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of start-ups and SME providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of SMEs and start-ups shall be taken into account when notified bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 796 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for certain AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
Amendment 797 #
Proposal for a regulation
Article 1 – paragraph 1 – point e
(e) rules on market monitoring, market surveillance and governance;
Amendment 802 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
(e a) measures in support of innovation with a particular focus on SMEs and start-ups, including the setting up of regulatory sandboxes and the reduction of regulatory burdens.
Amendment 861 #
Proposal for a regulation
Article 2 – paragraph 2 a (new)
2 a. This Regulation shall not apply to AI systems, including their output, specifically developed and put into service for the sole purpose of scientific research and development.
Amendment 863 #
Proposal for a regulation
Article 2 – paragraph 2 b (new)
2 b. This Regulation shall not apply to any research and development activity regarding AI systems in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
Amendment 912 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments; AI systems can be designed to operate with varying levels of autonomy and can be developed with one or more of the techniques and approaches listed in Annex I;
Amendment 930 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge, or that adapts general purpose AI systems to a specific intended purpose;
Amendment 947 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Amendment 1002 #
Proposal for a regulation
Article 3 – paragraph 1 – point 23
(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which is not foreseen or planned by the provider and as a result of which the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation is affected or which results in a modification to the intended purpose for which the AI system has been assessed. A substantial modification is given if the remaining risk is increased by the modification of the AI system under the application of all necessary protective measures;
Amendment 1009 #
Proposal for a regulation
Article 3 – paragraph 1 – point 24
(24) ‘CE marking of conformity’ (CE marking) means a physical or digital marking by which a provider indicates that an AI system or a product with an embedded AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;
Amendment 1044 #
Proposal for a regulation
Article 3 – paragraph 1 – point 35
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, or inferring their characteristics and attributes on the basis of their biometric or biometrics-based data;
Amendment 1052 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
Amendment 1103 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) ‘regulatory sandbox’ means a facility that provides a controlled environment that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan;
Amendment 1111 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) ‘deep fake’ means an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful.
Amendment 1129 #
Proposal for a regulation
Article 3 a (new)
Amendment 1136 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73, after an adequate and transparent consultation process involving the relevant stakeholders, to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of transparent characteristics that are similar to the techniques and approaches listed therein. Providers and users of AI systems should be given 24 months to comply with any amendment to Annex I.
Amendment 1169 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness with the objective to or the effect of materially distorting a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
Amendment 1181 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of an individual, including characteristics of such individual’s known or predicted personality or social or economic situation, or of a specific group of persons due to their age or disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1423 #
Proposal for a regulation
Article 6 – paragraph 1 – point a
(a) the AI system is intended to be used as a main safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
Amendment 1429 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
(b) the product whose main safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment in order to ensure compliance with essential safety requirements with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
Amendment 1456 #
Proposal for a regulation
Article 6 a (new)
Amendment 1466 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after an adequate and transparent consultation process involving the relevant stakeholders, to update the list in Annex III by withdrawing areas from that list or by adding critical areas. For additions both of the following conditions need to be fulfilled:
Amendment 1503 #
Proposal for a regulation
Article 7 – paragraph 2 – point b a (new)
(b a) the extent to which the AI system acts autonomously;
Amendment 1520 #
Proposal for a regulation
Article 7 – paragraph 2 – point e a (new)
(e a) the potential misuse and malicious use of the AI system and of the technology underpinning it;
Amendment 1531 #
Proposal for a regulation
Article 7 – paragraph 2 – point g a (new)
(g a) the magnitude and likelihood of the benefit of the deployment of the AI system for individuals, groups, or society at large;
Amendment 1538 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – introductory part
(h) the extent to which existing Union legislation, in particular the GDPR, provides for:
Amendment 1549 #
Proposal for a regulation
Article 7 – paragraph 2 a (new)
2 a. The Commission shall provide a transitional period of at least 24 months following each update of Annex III.
Amendment 1555 #
Proposal for a regulation
Article 8 – paragraph 1
1. High-risk AI systems shall comply with the requirements established in this Chapter, taking into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
Amendment 1575 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in appropriate relation to high-risk AI systems and the risks identified in the risk assessment referred to in Article 6 a.
Amendment 1587 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and foreseeable risks most likely to occur to health, safety and fundamental rights in view of the intended purpose of the high-risk AI system;
Amendment 1591 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
Amendment 1598 #
Proposal for a regulation
Article 9 – paragraph 2 – point c
(c) evaluation of newly arising significant risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
Amendment 1602 #
Proposal for a regulation
Article 9 – paragraph 2 a (new)
2 a. The risks referred to in paragraph 2 shall concern only those risks which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
Amendment 1609 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual significant risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI system is reasonably expected to deliver and provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual significant risks shall be communicated to the user.
Amendment 1621 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of identified and evaluated risks as far as economically and technologically feasible through adequate design and development of the high-risk AI system;
Amendment 1624 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point b
(b) where appropriate, implementation of adequate mitigation and control measures in relation to significant risks that cannot be eliminated;
Amendment 1639 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be evaluated for the purposes of identifying the most appropriate and targeted risk management measures, weighing any such measures against the potential benefits and intended goals of the system. Evaluations shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the relevant requirements set out in this Chapter.
Amendment 1669 #
Proposal for a regulation
Article 9 – paragraph 9
9. For providers and AI systems already covered by Union law that requires them to establish a specific risk management system, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established pursuant to that Union law or deemed to be covered as part of it.
Amendment 1673 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be, as far as this can be reasonably expected and is feasible from a technical and economic point of view, developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
Amendment 1683 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to data governance and management practices appropriate for the context of the use as well as the intended purpose of the AI system. Those practices shall concern in particular,
Amendment 1702 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect the output of the AI system;
Amendment 1707 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any significant data gaps or shortcomings, and how those gaps and shortcomings can be addressed.
Amendment 1715 #
Proposal for a regulation
Article 10 – paragraph 3
3. High-risk AI systems shall be designed and developed with the best efforts to ensure that training, validation and testing data sets are relevant, representative, and, to the best extent possible, free of errors and complete in accordance with industry standards. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1742 #
Proposal for a regulation
Article 10 – paragraph 6
6. For the development of high-risk AI systems not using techniques involving the training of models, paragraphs 2 to 5 shall apply only to the testing data sets.
Amendment 1753 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV or, in the case of SMEs and start-ups, any equivalent documentation meeting the same objectives, subject to approval of the competent authority.
Amendment 1778 #
Proposal for a regulation
Article 12 – paragraph 4
Amendment 1790 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title. Transparency shall thereby mean that, to the extent that can be reasonably expected and is feasible in technical terms, the AI system’s output is interpretable by the user and the user is able to understand the general functionality of the AI system and its use of data.
Amendment 1793 #
Proposal for a regulation
Article 13 – paragraph 2
2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that helps support informed decision-making by users and is relevant, accessible and comprehensible to users.
Amendment 1801 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
Amendment 1808 #
Proposal for a regulation
Article 13 – paragraph 3 – point e a (new)
(e a) a description of the mechanisms included within the AI system that allow users to properly collect, store and interpret the logs in accordance with Article 12(1).
Amendment 1812 #
Proposal for a regulation
Article 14 – paragraph 1
1. Where proportionate to the risks associated with the high-risk AI system and where technical safeguards are not sufficient, high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
Amendment 1830 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. For the purpose of implementing paragraphs 1 to 3, the high-risk AI system shall be provided to the user in such a way that the individuals to whom human oversight is assigned are enabled, as appropriate and proportionate to the circumstances and in accordance with industry standards:
Amendment 1832 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) to be aware of and sufficiently understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
Amendment 1841 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
(e) to be able to intervene on the operation of the high-risk AI system or halt the system, where reasonable and technically feasible and except if the human interference increases the risks or would negatively impact the performance in consideration of the generally acknowledged state of the art.
Amendment 1850 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose and to the extent that can be reasonably expected and is in accordance with relevant industry standards, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
Amendment 1856 #
Proposal for a regulation
Article 15 – paragraph 2
2. The range of expected performance and the operational factors that affect that performance shall be declared in the accompanying instructions of use.
Amendment 1858 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. High-risk AI systems shall be designed and developed with safety and security-by-design mechanisms so that they achieve, in the light of their intended purpose, an appropriate level of cyber resilience as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
Amendment 1863 #
Proposal for a regulation
Article 15 – paragraph 3 – subparagraph 2
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs influencing input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.
Amendment 1902 #
Proposal for a regulation
Article 16 – paragraph 1 – point j
(j) upon reasoned request of a national competent authority, provide the relevant information and documentation to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
Amendment 1914 #
Proposal for a regulation
Article 17 – paragraph 1 – introductory part
1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures or instructions, and shall include at least the following aspects:
Amendment 1916 #
Proposal for a regulation
Article 17 – paragraph 1 – point a
Amendment 1921 #
Proposal for a regulation
Article 17 – paragraph 1 – point e
Amendment 1934 #
Proposal for a regulation
Article 17 – paragraph 1 – point j
(j) the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;
Amendment 1935 #
Proposal for a regulation
Article 17 – paragraph 1 – point k
Amendment 1969 #
Proposal for a regulation
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. Any information submitted in accordance with the provisions of this Article shall be considered by the national competent authority a trade secret of the company that is submitting such information and kept strictly confidential.
Amendment 2018 #
Proposal for a regulation
Article 27 – paragraph 5
5. Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority regarding its activities pursuant to paragraphs 1 to 4.
Amendment 2059 #
Proposal for a regulation
Article 29 – paragraph 5 – introductory part
5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of industry standards, the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
Amendment 2076 #
Proposal for a regulation
Article 29 – paragraph 6 b (new)
6 b. The obligations established by this Article shall not apply to users who use the AI system in the course of a personal non-professional activity.
Amendment 2133 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist and are not expected to be published within a reasonable period or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2138 #
Proposal for a regulation
Article 41 – paragraph 1 a (new)
1 a. When deciding to draft and adopt common specifications, the Commission shall consult the Board, the European standardisation organisations as well as the relevant stakeholders, and duly justify why it decided not to use harmonised standards. The abovementioned organisations shall be regularly consulted while the Commission is in the process of drafting the common specifications.
Amendment 2141 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of stakeholders, including SMEs and start-ups, relevant bodies or expert groups established under relevant sectorial Union law.
Amendment 2149 #
Proposal for a regulation
Article 41 – paragraph 4
4. Where providers of high-risk AI systems do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that are at least equivalent thereto.
Amendment 2150 #
Proposal for a regulation
Article 41 – paragraph 4 a (new)
4 a. If harmonised standards referred to in Article 40 are developed and the references to them are published in the Official Journal of the European Union in accordance with Regulation (EU) No 1025/2012 in the future, the relevant common specifications shall no longer apply.
Amendment 2191 #
Proposal for a regulation
Article 43 – paragraph 4 – introductory part
4. High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new conformity assessment procedure whenever they are substantially modified, if the modified system is intended to be further distributed or continues to be used by the current user.
Amendment 2193 #
Proposal for a regulation
Article 43 – paragraph 4 – subparagraph 1
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification. The same should apply to updates of the AI system for security reasons in general and to protect against evolving threats of manipulation of the system as long as the update does not include significant changes to the functionality of the system.
Amendment 2201 #
Proposal for a regulation
Article 43 – paragraph 5
5. After consulting the AI Board referred to in Article 56 and after providing substantial evidence, followed by thorough consultation and the involvement of the affected stakeholders, the Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.
Amendment 2208 #
Proposal for a regulation
Article 43 – paragraph 6
Amendment 2232 #
Proposal for a regulation
Article 49 – paragraph 1
1. The physical CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
Amendment 2234 #
Proposal for a regulation
Article 49 – paragraph 1 a (new)
Article 49 – paragraph 1 a (new)
1 a. A digital CE marking may be used instead of, or in addition to, the physical marking if it can be accessed via the display of the product or via a machine-readable code or other electronic means.
Amendment 2271 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose, in an appropriate, clear and visible manner, that the content has been artificially generated or manipulated.
Amendment 2294 #
Proposal for a regulation
Article 53 – paragraph 1
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the European Commission, one or more Member States, or other competent entities shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place in collaboration with and guidance by the European Commission or the competent authorities in order to identify risks to health and safety and fundamental rights, test mitigation measures for identified risks, demonstrate prevention of these risks and otherwise ensure compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Amendment 2329 #
Proposal for a regulation
Article 53 – paragraph 5
Article 53 – paragraph 5
5. The European Commission, Member States’ competent authorities and other entities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the Commission’s AI Regulatory Sandboxing programme. The European Commission shall submit annual reports to the European Artificial Intelligence Board on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 2340 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
Article 53 – paragraph 6 a (new)
6 a. The Commission shall establish an EU AI Regulatory Sandboxing Programme, whose modalities referred to in Article 53(6) shall cover the elements set out in Annex IX a. The Commission shall proactively coordinate with national, regional and local authorities, as relevant.
Amendment 2372 #
Proposal for a regulation
Article 55 – title
Article 55 – title
Measures for SMEs, start-ups and users
Amendment 2375 #
Proposal for a regulation
Article 55 – paragraph 1 – point a
Article 55 – paragraph 1 – point a
(a) provide SMEs and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
Amendment 2377 #
Proposal for a regulation
Article 55 – paragraph 1 – point b
Article 55 – paragraph 1 – point b
(b) organise specific awareness-raising activities about the application of this Regulation tailored to the needs of SMEs, start-ups and users;
Amendment 2379 #
Proposal for a regulation
Article 55 – paragraph 1 – point c
Article 55 – paragraph 1 – point c
(c) where appropriate, establish a dedicated channel for communication with SMEs, start-ups, users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
Amendment 2381 #
Proposal for a regulation
Article 55 – paragraph 1 – point c a (new)
Article 55 – paragraph 1 – point c a (new)
(c a) support SMEs’ increased participation in the standardisation development process;
Amendment 2387 #
Proposal for a regulation
Article 55 – paragraph 2
Article 55 – paragraph 2
2. The specific interests and needs of SMEs and start-ups shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size.
Amendment 2405 #
Proposal for a regulation
Article 56 – paragraph 2 – introductory part
Article 56 – paragraph 2 – introductory part
2. The Board shall provide advice and assistance to the Commission and to the national supervisory authorities in order to:
Amendment 2414 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
Article 56 – paragraph 2 – point c a (new)
(c a) contribute to the effective cooperation with the competent authorities of third countries and with international organisations.
Amendment 2464 #
Proposal for a regulation
Article 57 – paragraph 4
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent, and hold consultations with relevant stakeholders and ensure appropriate participation. The Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
Amendment 2484 #
Proposal for a regulation
Article 58 – paragraph 1 – introductory part
Article 58 – paragraph 1 – introductory part
When providing advice and assistance to the Commission and to the national supervisory authorities in the context of Article 56(2), the Board shall in particular:
Amendment 2589 #
Proposal for a regulation
Article 59 – paragraph 7
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to SMEs and start-ups. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States shall also establish one central contact point for communication with operators and other stakeholders.
Amendment 2593 #
Proposal for a regulation
Article 59 – paragraph 8
Article 59 – paragraph 8
8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision and coordination.
Amendment 2643 #
Proposal for a regulation
Article 61 – paragraph 2
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources, to the extent such data are readily accessible to the provider and taking into account the limits resulting from data protection, copyright and competition law, on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
Amendment 2648 #
Proposal for a regulation
Article 61 – paragraph 3
Article 61 – paragraph 3
3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan by ... [12 months following the entry into force of this Regulation].
Amendment 2664 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1 a (new)
Article 62 – paragraph 1 – subparagraph 1 a (new)
No report under this Article is required if the serious incident also leads to reporting requirements under other laws. In that case, the authorities competent under those laws shall forward the received report to the national competent authority.
Amendment 2689 #
Proposal for a regulation
Article 64 – paragraph 2
Article 64 – paragraph 2
2. Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned request and only when the following cumulative conditions are fulfilled:
(a) access to the source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in Title III, Chapter 2; and
(b) testing/auditing procedures and verifications based on the data and documentation provided by the provider have been exhausted or proved insufficient.
Amendment 2729 #
Proposal for a regulation
Article 65 – paragraph 6 – point a
Article 65 – paragraph 6 – point a
(a) a failure of the high-risk AI system to meet requirements set out in Title III, Chapter 2;
Amendment 2769 #
Proposal for a regulation
Article 68 – paragraph 2
Article 68 – paragraph 2
2. Where the non-compliance referred to in paragraph 1 persists, the Member State concerned shall take all appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market.
Amendment 2772 #
Proposal for a regulation
Article 68 a (new)
Article 68 a (new)
Amendment 2779 #
Proposal for a regulation
Article 68 b (new)
Article 68 b (new)
Article 68 b
Right to an effective judicial remedy against a national supervisory authority
1. Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them.
2. Without prejudice to any other administrative or non-judicial remedy, each data subject shall have the right to an effective judicial remedy where the national supervisory authority does not handle a complaint, does not inform the complainant on the progress or preliminary outcome of the complaint lodged within three months pursuant to Article 68a(3) or does not comply with its obligation to reach a final decision on the complaint within six months pursuant to Article 68a(4) or its obligations under Article 65.
3. Proceedings against a supervisory authority shall be brought before the courts of the Member State where the national supervisory authority is established.
Amendment 2793 #
Proposal for a regulation
Article 69 – paragraph 4
Article 69 – paragraph 4
4. The Commission and the Board shall take into account the specific interests and needs of SMEs and start-ups when encouraging and facilitating the drawing up of codes of conduct.
Amendment 2796 #
Proposal for a regulation
Article 70 – paragraph 1 – introductory part
Article 70 – paragraph 1 – introductory part
1. National competent authorities, notified bodies, the Commission, the Board, and any other natural or legal person involved in the application of this Regulation shall, in accordance with Union or national law, put appropriate technical and organisational measures in place to ensure the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:
Amendment 2803 #
Proposal for a regulation
Article 70 – paragraph 1 – point c a (new)
Article 70 – paragraph 1 – point c a (new)
Amendment 2821 #
Proposal for a regulation
Article 71 – paragraph 1
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the size and interests of SMEs and start-ups and their economic viability.
Amendment 2830 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
Article 71 – paragraph 3 – introductory part
3. Non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, and in the case of SMEs and start-ups, up to 3 % of its worldwide annual turnover for the preceding financial year, whichever is higher.
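As a minimal worked illustration of the ‘whichever is higher’ mechanism in this paragraph (hypothetical turnover figures, and one possible reading of how the SME track interacts with the general cap): a company with a total worldwide annual turnover of EUR 600 000 000 would face a ceiling of max(EUR 20 000 000; 4 % × EUR 600 000 000) = EUR 24 000 000, since the turnover-based ceiling exceeds the fixed one; an SME or start-up with a turnover of EUR 10 000 000 would, under the dedicated 3 % track, face a ceiling of 3 % × EUR 10 000 000 = EUR 300 000.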
Amendment 2848 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The grossly negligent non-compliance by the provider or the user of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 2866 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
Article 71 – paragraph 6 – point c
(c) the size, the annual turnover and market share of the operator committing the infringement;
Amendment 2881 #
Proposal for a regulation
Article 71 – paragraph 8 a (new)
Article 71 – paragraph 8 a (new)
Amendment 2962 #
Proposal for a regulation
Article 83 – paragraph 2
Article 83 – paragraph 2
2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to substantial modification in their design or intended purpose as defined in Article 3(23).
Amendment 3001 #
Proposal for a regulation
Article 85 – paragraph 2
Article 85 – paragraph 2
2. This Regulation shall apply from [48 months following the entering into force of the Regulation].
Amendment 3007 #
Proposal for a regulation
Article 85 – paragraph 3 a (new)
Article 85 – paragraph 3 a (new)
3 a. Member States shall not, until ... [24 months after the date of application of this Regulation], impede the making available of AI systems and products which were placed on the market in conformity with Union harmonisation legislation before [the date of application of this Regulation].
Amendment 3008 #
Proposal for a regulation
Article 85 – paragraph 3 b (new)
Article 85 – paragraph 3 b (new)
Amendment 3045 #
Proposal for a regulation
Annex III – title
Annex III – title
Amendment 3059 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
Annex III – paragraph 1 – point 1 – point a
(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises;
Amendment 3108 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
Annex III – paragraph 1 – point 4 – point a
(a) AI systems intended to make autonomous decisions or materially influence decisions about recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 3118 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to make autonomous decisions or materially influence decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
Amendment 3136 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by SMEs and start-ups for their own use;
Amendment 3279 #
Proposal for a regulation
Annex IV – paragraph 1 – point 5
Annex IV – paragraph 1 – point 5
5. A description of relevant changes made by providers to the system through its lifecycle;
Amendment 3312 #
Proposal for a regulation
Annex IX a (new)
Annex IX a (new)