Activities of Emmanuel MAUREL related to 2021/0106(COD)
Shadow opinions (1)
OPINION on the proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
Amendments (134)
Amendment 314 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, the environment, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 321 #
Proposal for a regulation
Recital 3 a (new)
(3a) Use of artificial intelligence systems by states or public authorities, or on their behalf, must allow an improvement in access to social benefits and social rights. This technology must be used to combat the major problem of low take-up and to improve living conditions and access to public services. Use of AI systems must be assessed in relation to these effects on social rights. The Member States must not use them in any way that would jeopardise access to social rights or lead to a deterioration of the social safety net for citizens.
Amendment 323 #
Proposal for a regulation
Recital 4
(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial and affect people and the environment, especially groups that are marginalised and already vulnerable. Under the guise of mitigating climate change through efficient use of resources and energy, AI risks aggravating the situation instead, as additional usage could cancel out any energy savings if usage is not prioritised.
Amendment 329 #
Proposal for a regulation
Recital 6 a (new)
(6a) It is important to note that AI systems should respect fundamental principles: non-maleficence, protection of fundamental rights, the trust bestowed on them by end users and system durability. One of the seven key requirements set out by the High-Level Expert Group on Artificial Intelligence is ‘societal and environmental wellbeing’. It is therefore crucial to constantly question and evaluate the social and environmental added value of each new technology developed.
Amendment 330 #
Proposal for a regulation
Recital 6 b (new)
(6b) For AI systems to guarantee a high level of protection of fundamental rights, it is essential to address the issue of the digital divide. This Regulation will only be effective if it is accompanied by a policy of education, training and awareness as regards these technologies, the biases they entail and the remedies available in the case of errors.
Amendment 334 #
Proposal for a regulation
Recital 12
(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
Amendment 338 #
Proposal for a regulation
Recital 13
(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of Fundamental Rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
Amendment 348 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts which are unrelated to the context in which the data was originally generated or collected, or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should therefore be prohibited.
Amendment 350 #
Proposal for a regulation
Recital 18
(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
Amendment 354 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. In these specific cases, the authorities responsible for using AI systems must ensure that their use does not adversely affect fundamental rights in the field of justice, notably access to justice, the right to a fair trial, the right to an effective remedy and the presumption of innocence. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
Amendment 356 #
Proposal for a regulation
Recital 20
(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above. The reference databases must be strictly proportionate and must respect the principle of data minimisation, as provided for in Regulation (EU) 2016/679. Under no circumstances should they be fed with images gathered on a large scale, for example using a large number of images available on social networks.
Amendment 363 #
Proposal for a regulation
Recital 27
(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the environment and the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
Amendment 371 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. The use of ‘real-time’ remote biometric identification systems should be restricted to certain specific cases laid down in this Regulation, should be strictly proportionate and should be subject to prior authorisation by the national competent authorities. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
Amendment 379 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably, but not exclusively, for the recruitment and selection of persons and for task allocation in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Use of AI systems for making decisions on promotion and termination and for organising monitoring and monitoring performance and personal behaviour should be classified as a prohibited practice. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons should be prohibited as they impact their rights to data protection and privacy.
Amendment 380 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be prohibited, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be prohibited. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 387 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as prohibited AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
Amendment 390 #
Proposal for a regulation
Recital 43
(43) Requirements should apply to high- risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users and final beneficiaries, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
Amendment 392 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should also be able to process special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 397 #
Proposal for a regulation
Recital 47
(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a high level of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.
Amendment 406 #
Proposal for a regulation
Recital 59 a (new)
(59a) In certain cases, AI systems are intended for final beneficiaries rather than users. It is important to guarantee protection of fundamental rights and information for the final beneficiaries, such as healthcare patients, students, consumers, etc. This Regulation should ensure a high level of transparency and respect for the right to information of final beneficiaries, where they differ from users.
Amendment 408 #
Proposal for a regulation
Recital 64
(64) The assessment of the conformity of high-risk AI systems with this Regulation should be carried out by a notified body.
Amendment 410 #
Proposal for a regulation
Recital 65
(65) In order to carry out third-party conformity assessment for high-risk AI systems and those intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests.
Amendment 411 #
Proposal for a regulation
Recital 67
(67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.
Amendment 414 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. The use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities and those who are least familiar with digital technologies. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
Amendment 421 #
Proposal for a regulation
Recital 77
(77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, in each Member State the national data protection authority should be designated as national supervisory authority.
Amendment 426 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply additional requirements related, for example, to environmental sustainability, accessibility to persons with disability and those least familiar with digital technologies, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 435 #
Proposal for a regulation
Article 1 – paragraph 1 – point c
(c) specific requirements for high-risk AI systems and obligations for operators of such systems;
Amendment 450 #
Proposal for a regulation
Article 2 – paragraph 3
Amendment 471 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a strictly personal non-professional activity;
Amendment 473 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4a) ‘final beneficiary’ means any natural or legal person, other than an operator, for whom the output of an AI system is intended or to whom it is provided;
Amendment 492 #
Proposal for a regulation
Article 3 – paragraph 1 – point 42
(42) ‘national supervisory authority’ means the authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board. That authority is the Member State’s data protection authority;
Amendment 494 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident or malfunctioning that directly or indirectly leads, might have led or might lead to any of the following:
Amendment 509 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys techniques in order to materially distort, voluntarily or for a reasonably foreseeable misuse, a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 513 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person economic, physical or psychological harm;
Amendment 514 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf, or by private actors, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, such as preferences, emotions, health or intelligence, with the social score leading to either or both of the following:
Amendment 521 #
Proposal for a regulation
Article 5 – paragraph 1 – point c a (new)
(ca) the placing on the market, putting into service or use of an AI system in a business or public authority for making decisions on promotion and termination or for organising monitoring and monitoring the performance and behaviour of an employee;
Amendment 522 #
Proposal for a regulation
Article 5 – paragraph 1 – point c b (new)
(cb) the placing on the market, putting into service or use of an AI system designed to detect the emotional state of a natural person, except for specific health reasons, or to classify individuals in groups based on assumed ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union;
Amendment 523 #
Proposal for a regulation
Article 5 – paragraph 1 – point c c (new)
(cc) the placing on the market, putting into service or use of an AI system for assessing the creditworthiness of natural persons or establishing their credit score;
Amendment 524 #
Proposal for a regulation
Article 5 – paragraph 1 – point c d (new)
(cd) the placing on the market, putting into service or use, by public authorities or on their behalf, of biometric identification systems that determine allocation of social rights and social benefits;
Amendment 525 #
Proposal for a regulation
Article 5 – paragraph 1 – point c e (new)
(ce) the placing on the market, putting into service or use of an AI system to be used by law enforcement to make predictions, profile natural persons or assess risks with the end goal of predicting criminal offences;
Amendment 526 #
Proposal for a regulation
Article 5 – paragraph 1 – point c f (new)
(cf) the placing on the market, putting into service or use of an AI system for migration, asylum and border control management to carry out profiling or risk assessment of natural persons or groups in a manner that risks infringing the right of asylum or jeopardising the fairness of migration procedures;
Amendment 527 #
Proposal for a regulation
Article 5 – paragraph 1 – point c f (new)
(cf) the placing on the market, putting into service or use of an AI system to influence consumers’ choices for commercial purposes;
Amendment 528 #
Proposal for a regulation
Article 5 – paragraph 1 – point c h (new)
(ch) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person offending or reoffending or the risk for potential victims of criminal offences;
Amendment 529 #
Proposal for a regulation
Article 5 – paragraph 1 – point c i (new)
(ci) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
Amendment 530 #
Proposal for a regulation
Article 5 – paragraph 1 – point c j (new)
(cj) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
Amendment 531 #
Proposal for a regulation
Article 5 – paragraph 1 – point c k (new)
(ck) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 532 #
Proposal for a regulation
Article 5 – paragraph 1 – point c l (new)
(cl) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 540 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 560 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations. This use shall be strictly proportionate and shall not result in any unjustified infringement of the protection of privacy and the fundamental rights protected by Union law.
Amendment 561 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1 a (new)
Reference databases used by the authorities as part of ‘real-time’ remote biometric identification systems in publicly accessible spaces shall be strictly limited and proportionate to the objective of the search. Those databases must respect the principle of data minimisation, as provided for in Regulation (EU) 2016/679. Databases containing a large volume of data without any distinction in terms of relevance to the objective are strictly prohibited. Large-scale use of publicly available data to establish huge databases is strictly prohibited. The authorities shall refrain from using reference databases that would infringe fundamental rights, especially the right to privacy.
Amendment 562 #
Proposal for a regulation
Article 5 – paragraph 2 – subparagraph 1 b (new)
The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1, point d) shall under no circumstances infringe the freedom of assembly and association or political pluralism. Such systems shall not be used to identify individuals who are exercising those rights.
Amendment 564 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use. Such situations must continue to be exceptions. Existing national law must be duly applied in these exceptional cases in order to guarantee respect for fundamental rights and freedoms. Where prior authorisation was not granted, the national competent authorities shall subsequently assess whether use of a ‘real-time’ biometric identification system in publicly accessible spaces was justified.
Amendment 565 #
Proposal for a regulation
Article 5 – paragraph 3 – introductory part
3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the strictly proportionate use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.
Amendment 567 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement. The Member States shall put in place the safeguards needed to ensure respect for fundamental rights.
Amendment 574 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
(2a) In addition to the high-risk AI systems referred to in paragraphs 1 and 2, AI systems shall be considered high-risk if their final beneficiaries total more than 20 million citizens across the EU or 50 % of the population of a given Member State, or if their users have more than 20 million customers or beneficiaries in the EU who are affected by the system.
Amendment 587 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health and safety, an economic risk or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 594 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to health, safety and the environment or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
Amendment 595 #
Proposal for a regulation
Article 7 – paragraph 2 – point h – point i
(i) effective measures of redress in relation to the risks posed by an AI system, including claims for material or non-material damages;
Amendment 597 #
Proposal for a regulation
Article 7 – paragraph 2 – point h a (new)
(ha) the general capabilities and the functionalities of the AI system independent of its intended purpose;
Amendment 598 #
Proposal for a regulation
Article 7 – paragraph 2 – point h b (new)
(hb) the extent of the availability and use of proven technical solutions and mechanisms for the monitoring, reliability and ‘correctability’ of the AI system;
Amendment 599 #
Proposal for a regulation
Article 7 – paragraph 2 – point h c (new)
(hc) the potential for misuse and malicious use of the AI system and the technology underpinning it;
Amendment 620 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases;
Amendment 625 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, sufficiently representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These data sets shall consist of sufficiently large volumes of data, and they shall take account of all relevant aspects of gender, social, geographical and ethnic groups, and other grounds for discrimination prohibited under Union law. They shall cover all the necessary relevant scenarios in order to avoid hazardous situations. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 645 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and high degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.
Amendment 652 #
Proposal for a regulation
Article 13 – paragraph 2 a (new)
(2a) The providers shall be available to users and authorities to answer their questions and provide any clarifications they might seek, in particular to ensure that use of the AI system respects the fundamental rights and law of the Union.
Amendment 661 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point iii
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety, the protection of personal data or fundamental rights;
Amendment 679 #
Proposal for a regulation
Article 14 – paragraph 2
2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
Amendment 680 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
3. Human oversight shall be ensured through either one or all of the following measures:
Amendment 698 #
Proposal for a regulation
Article 19 – paragraph 1
1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49. The conformity assessment must be published.
Amendment 700 #
Proposal for a regulation
Article 20 – paragraph 2
Amendment 703 #
Proposal for a regulation
Article 21 – paragraph 1
Providers of high-risk AI systems which consider, have reason to consider or have been notified by a supervisory authority that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
Amendment 714 #
Proposal for a regulation
Article 29 – paragraph 4 – introductory part
4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall inform the national competent authorities, provider or distributor and suspend the use of the system. They shall also inform the national competent authorities, provider or distributor when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
Amendment 716 #
Proposal for a regulation
Article 29 – paragraph 5 – subparagraph 1
Amendment 724 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies or expert groups established under relevant sectorial Union law. It shall also consult the European Artificial Intelligence Board.
Amendment 726 #
Proposal for a regulation
Article 43 – paragraph 1 – introductory part
1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high- risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:
Amendment 727 #
Proposal for a regulation
Article 43 – paragraph 1 – point a
Amendment 730 #
Proposal for a regulation
Article 43 – paragraph 1 – point b
(b) the conformity assessment procedure is based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
Amendment 733 #
Proposal for a regulation
Article 43 – paragraph 2
2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
Amendment 735 #
Proposal for a regulation
Article 43 – paragraph 6
Amendment 737 #
Proposal for a regulation
Article 47 – paragraph 2
2. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Chapter 2 of this Title. The market surveillance authority shall inform the Commission, the European Data Protection Supervisor, the national data protection authorities as defined by Article 51 of Regulation (EU) 2016/679 and the other Member States of any authorisation issued pursuant to paragraph 1.
Amendment 738 #
Proposal for a regulation
Article 47 – paragraph 3
3. Where, within 15 calendar days of receipt of the information referred to in paragraph 2, no objection has been raised by a Member State, the Commission, the European Data Protection Supervisor or a national data protection authority as defined by Article 51 of Regulation (EU) 2016/679 in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.
Amendment 739 #
Proposal for a regulation
Article 50 – paragraph 1 – introductory part
The provider shall, for an unlimited period after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:
Amendment 740 #
Proposal for a regulation
Title IV
TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS
Amendment 743 #
Proposal for a regulation
Article 52 – title
Transparency obligations for all AI systems
Amendment 745 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons, especially those who are least familiar with digital technologies, are informed that they are interacting with an AI system.
Amendment 749 #
Proposal for a regulation
Article 52 – paragraph 2
Amendment 755 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
Amendment 764 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States’ competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to identifying risks to health and safety and fundamental rights, testing mitigation measures for identified risks and demonstrating prevention of these risks in order to ensure compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Amendment 767 #
Proposal for a regulation
Article 53 – paragraph 3
3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate rectification and, failing that, in the suspension of the development and testing process until such rectification takes place.
Amendment 768 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board, in particular with the European Data Protection Supervisor. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 771 #
Proposal for a regulation
Article 53 – paragraph 6 a (new)
6a. When the sandboxes use the data of natural or legal persons, or when the AI system put in place is used to provide persons with results, the latter’s consent must be obtained in advance. The body or company participating in the regulatory sandbox must justify to the final beneficiaries the reasons for its approach. Those persons may refuse to participate.
Amendment 772 #
Proposal for a regulation
Article 54
Amendment 784 #
Proposal for a regulation
Article 56 – paragraph 2 – point c a (new)
(ca) carry out annual reviews and analyses of the complaints sent to and findings made by the national competent authorities, of the reports of serious incidents and malfunctioning referred to in Article 62, and of new registrations in the EU database referred to in Article 60 in order to identify trends and potential emerging issues threatening the future health and safety and fundamental rights of citizens that are not adequately addressed by this Regulation; carry out biannual prospective analyses in order to extrapolate the possible impact of these trends and emerging issues on the Union; and publish annual recommendations to the Commission, including, but not limited to, recommendations on the categorisation of prohibited practices, high-risk systems, and codes of conduct for AI systems that are not classified as high-risk.
Amendment 789 #
Proposal for a regulation
Article 56 – paragraph 2 a (new)
2a. The Board shall have a sufficient number of competent personnel at its disposal to assist it in the proper performance of its tasks.
Amendment 790 #
Proposal for a regulation
Article 56 – paragraph 2 b (new)
2b. The Board shall be organised and operated so as to safeguard the independence, objectivity and impartiality of its activities. It shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality in all its activities.
Amendment 794 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, representatives of the ethics committees of the Member States or, for those countries that have no such committees, ethics academics, researchers or experts, and the European Data Protection Supervisor. Other national authorities shall be invited to the meetings, where the issues discussed are of relevance for them. The Board shall cooperate closely with the national data protection authorities as defined by Article 51 of Regulation (EU) 2016/679.
Amendment 799 #
Proposal for a regulation
Article 57 – paragraph 3
3. The Board shall be chaired by the Commission. The Board may be convened by the Commission, on its own initiative or at the request of a Member State, a national authority responsible for the protection of fundamental rights or personal data or a national ethics committee. The Commission shall prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 804 #
Proposal for a regulation
Article 57 – paragraph 4
Article 57 – paragraph 4
4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end, the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and specialised groups. The composition of the specialised body shall guarantee fair representation of consumer organisations, civil society organisations and academics specialising in AI and in ethics. Its meetings and minutes shall be published online.
Amendment 810 #
(iiia) on the adaptation of this Regulation to technological, social and scientific developments, and on the need to revise this Regulation.
Amendment 813 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
Article 58 – paragraph 1 – point c a (new)
(ca) be able to ask the Commission to revise Annex III to this Regulation on high-risk AI systems. In such a case, the Board shall draw up precise recommendations for the revision. The Commission shall take those recommendations into consideration and shall publish a comparative report, containing specific justifications, which makes it possible to assess how it has followed up on the recommendations.
Amendment 821 #
Proposal for a regulation
Article 59 – paragraph 1
Article 59 – paragraph 1
1. National competent authorities shall be established or designated by each Member State for the purpose of ensuring the application and implementation of this Regulation. National competent authorities shall be organised so as to safeguard the objectivity and impartiality of their activities and tasks. They shall ensure a high level of harmonisation in the application of this Regulation. They shall put in place all of the resources needed to achieve this and shall strive for uniform application of this Regulation in the Union.
Amendment 823 #
Proposal for a regulation
Article 59 – paragraph 3 a (new)
Article 59 – paragraph 3 a (new)
3a. The Commission shall ensure that this Regulation is applied uniformly in the Union. The Member States and all interested parties may notify the Commission of any cases where the national authorities are not fulfilling their obligations. With the support of the European Artificial Intelligence Board, the Commission may carry out an investigation and, if necessary, ask the national authorities to adapt their practices in order to ensure application of this Regulation. The national authorities shall take due account of the Commission’s recommendations and adapt their practices accordingly.
Amendment 832 #
Proposal for a regulation
Article 60 – paragraph 1
Article 60 – paragraph 1
1. The Commission shall, in collaboration with the Member States, set up and regularly maintain an EU database containing information referred to in paragraph 2 concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51.
Amendment 834 #
Proposal for a regulation
Article 60 – paragraph 3
Article 60 – paragraph 3
3. Information contained in the EU database shall be easily accessible to the public in the official languages of the Union.
Amendment 835 #
Proposal for a regulation
Article 60 – paragraph 3 a (new)
Article 60 – paragraph 3 a (new)
3a. Users must register the deployment of high-risk AI systems in the EU database before putting them into service. Users must enter the relevant information in the database, notably the identity of the provider and of the user, the purpose and context of the deployment, the designation of the persons affected and the results of the impact assessment.
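By way of illustration only, the registration entry described above could be modelled as follows. This is a minimal sketch in Python; the amendment prescribes no schema, and every class and field name below is invented for the example.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class HighRiskDeploymentEntry:
        """Hypothetical EU-database record mirroring the fields listed in
        the proposed paragraph 3a; not an official schema."""
        provider_identity: str                 # identity of the provider
        user_identity: str                     # identity of the deploying user
        purpose: str                           # objective pursued with the system
        deployment_context: str                # context of the deployment
        persons_affected: List[str] = field(default_factory=list)
        impact_assessment_results: str = ""    # summary of the impact assessment

        def ready_for_service(self) -> bool:
            # Registration must be complete before the system is put into service.
            return all([self.provider_identity, self.user_identity, self.purpose,
                        self.deployment_context, self.persons_affected,
                        self.impact_assessment_results])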
Amendment 839 #
Proposal for a regulation
Article 61 – paragraph 1
Article 61 – paragraph 1
1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies, the use and the risks of the high-risk AI system.
Amendment 840 #
Proposal for a regulation
Article 61 a (new)
Article 61 a (new)
Article 61a
Establishment by providers of a reporting system
AI system providers shall make available to users, final beneficiaries, national authorities and all interested parties a reporting system that can be used to flag any problem involving the functioning of the AI system or its compliance with this Regulation, Union law or national law in force. Providers shall examine these notifications diligently, respond to them within a reasonable period of time and report the problems to the national authorities.
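By way of illustration only, such a reporting channel could be modelled as a simple intake that records each notification and tracks whether it was answered in time. This Python sketch is hypothetical; the 30-day response period is an arbitrary placeholder, since the amendment does not quantify the "reasonable period of time".

    from datetime import datetime, timedelta
    from typing import List, Tuple

    # Arbitrary placeholder for the "reasonable period of time".
    RESPONSE_PERIOD = timedelta(days=30)

    class ReportingSystem:
        """Hypothetical intake for problem notifications from users,
        final beneficiaries, national authorities and other parties."""

        def __init__(self) -> None:
            # each entry: (received_at, source, description, answered)
            self._reports: List[Tuple[datetime, str, str, bool]] = []

        def flag_problem(self, source: str, description: str) -> int:
            """Record a notification and return a reference number."""
            self._reports.append((datetime.now(), source, description, False))
            return len(self._reports) - 1

        def overdue(self) -> List[int]:
            """Unanswered notifications past the response period; under the
            proposed article these would be reported to national authorities."""
            now = datetime.now()
            return [i for i, (received, _, _, answered) in enumerate(self._reports)
                    if not answered and now - received > RESPONSE_PERIOD]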
Amendment 841 #
Proposal for a regulation
Article 61 b (new)
Article 61 b (new)
Article 61b
Tasks of the public authorities in the monitoring of AI systems
1. If AI systems are used by public authorities or on their behalf, users shall put in place a monitoring system to detect any problems or shortcomings in the AI system that might result in a violation of fundamental rights, notably the principle of non-discrimination.
2. Users shall inform the providers of any problem caused by use of the AI system via the reporting system provided for in Article 61a.
3. Users shall put in place all of the necessary measures to detect any harmful effects that using an AI system could have on the right of users to access public services.
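The proposed article does not say how non-discrimination monitoring should be performed. One common screening heuristic, shown here purely as an illustration and not as a test mandated by the text, is the "four-fifths" rule, which flags any group whose favourable-outcome rate falls below 80 % of the best-served group's rate:

    from typing import Dict, List, Tuple

    def disparate_impact_alert(outcomes: Dict[str, Tuple[int, int]],
                               threshold: float = 0.8) -> List[str]:
        """outcomes maps a group label to (favourable decisions, total decisions).

        Flags groups whose favourable-outcome rate falls below `threshold`
        times the best-served group's rate -- a screening heuristic only.
        """
        rates = {g: fav / total for g, (fav, total) in outcomes.items() if total}
        best = max(rates.values())
        return [g for g, rate in rates.items() if rate < threshold * best]

    # Example: a benefit-allocation system serving two groups of applicants.
    print(disparate_impact_alert({"group_a": (80, 100), "group_b": (50, 100)}))
    # ['group_b'] -- 0.50 < 0.8 * 0.80, so group_b would be flagged for review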
Amendment 843 #
Proposal for a regulation
Article 62 – paragraph 1 – subparagraph 1
Article 62 – paragraph 1 – subparagraph 1
Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a link, and, in any event, not later than 7 days after the provider becomes aware of the serious incident or of the malfunctioning.
Amendment 863 #
Proposal for a regulation
Title VIII – Chapter 3 a (new)
Title VIII – Chapter 3 a (new)
Amendment 870 #
Proposal for a regulation
Article 69 – paragraph 3
Article 69 – paragraph 3
3. Codes of conduct shall be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
Amendment 873 #
Proposal for a regulation
Article 69 – paragraph 4
Article 69 – paragraph 4
4. The Commission and the Board may take into account the specific interests and needs of the small-scale providers and start-ups when encouraging and facilitating the drawing up of codes of conduct.
Amendment 881 #
Proposal for a regulation
Article 70 – paragraph 4
Article 70 – paragraph 4
Amendment 889 #
Proposal for a regulation
Article 71 – paragraph 3 – introductory part
Article 71 – paragraph 3 – introductory part
3. The following infringements shall be subject to administrative fines of up to 50 000 000 EUR or, if the offender is a company, up to 10 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
Amendment 892 #
Proposal for a regulation
Article 71 – paragraph 4
Article 71 – paragraph 4
4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 40 000 000 EUR or, if the offender is a company, up to 8 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Amendment 893 #
Proposal for a regulation
Article 71 – paragraph 5
Article 71 – paragraph 5
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
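To illustrate the "whichever is higher" mechanics of the three tiers as amended, here is a minimal Python sketch; the tier amounts are those proposed in Amendments 889, 892 and 893, and the turnover figure is invented.

    from typing import Optional

    # Fine ceilings per tier as proposed in Amendments 889, 892 and 893:
    # (fixed cap in EUR, share of total worldwide annual turnover)
    TIERS = {
        "art_71_3": (50_000_000, 0.10),  # infringements under Articles 5 and 10
        "art_71_4": (40_000_000, 0.08),  # other non-compliance with the Regulation
        "art_71_5": (20_000_000, 0.04),  # incorrect or misleading information
    }

    def fine_ceiling(tier: str, turnover: Optional[float] = None) -> float:
        """Upper bound of the administrative fine: for a company, the higher
        of the fixed cap and the turnover share; otherwise the fixed cap."""
        cap, share = TIERS[tier]
        return max(cap, share * turnover) if turnover is not None else cap

    # Hypothetical company with EUR 2 bn total worldwide annual turnover:
    print(fine_ceiling("art_71_3", 2_000_000_000))  # 200000000.0 -> the 10 % share applies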
Amendment 909 #
Proposal for a regulation
Article 83 – paragraph 1
Article 83 – paragraph 1
Amendment 915 #
Proposal for a regulation
Article 84 – paragraph 1
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation. These assessments shall be accessible to the public and forwarded to the relevant national authorities. They shall take into account the criteria set out in Article 7(2).
Amendment 929 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a a (new)
Annex III – paragraph 1 – point 1 – point a a (new)
(aa) AI systems that use physical, physiological or behavioural data and biometric data including, but not limited to, biometric identification, categorisation, detection and verification.
Amendment 935 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for task allocation in work-related contractual relationships.
Amendment 937 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Annex III – paragraph 1 – point 5 – point b
Amendment 940 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b a (new)
Annex III – paragraph 1 – point 5 – point b a (new)
(ba) AI systems intended to be used to assess insurance premiums and claims;
Amendment 942 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point c a (new)
Annex III – paragraph 1 – point 5 – point c a (new)
Amendment 945 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
Annex III – paragraph 1 – point 6 – point a
Amendment 947 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
Annex III – paragraph 1 – point 6 – point b
Amendment 949 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
Annex III – paragraph 1 – point 6 – point e
Amendment 952 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
Annex III – paragraph 1 – point 7 – point a
Amendment 954 #
Amendment 956 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
Annex III – paragraph 1 – point 7 – point d
Amendment 958 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 a (new)
Annex III – paragraph 1 – point 8 a (new)
8a. AI systems used to filter the content generated by users on social media and social networks;
Amendment 959 #
Proposal for a regulation
Annex III – paragraph 1 – point 8 b (new)
Annex III – paragraph 1 – point 8 b (new)
8b. AI systems developed or used exclusively for military purposes.
Amendment 963 #
Proposal for a regulation
Annex VI
Annex VI
Amendment 964 #
Proposal for a regulation
Annex VIII – point 6 a (new)
Annex VIII – point 6 a (new)
6a. Where the AI system is used by a public authority or on its behalf, the AI system used, the dates of its use and its purpose;
Amendment 965 #
Proposal for a regulation
Annex VIII – point 11
Annex VIII – point 11