51 Amendments of Jana TOOM related to 2021/0106(COD)
Amendment 346 #
Proposal for a regulation
Recital 16
(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour without the affected persons' knowledge should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of persons or groups of persons with protected characteristics. They do so with the intention to materially distort the behaviour of a person. Such distortions are likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 347 #
Proposal for a regulation
Recital 17
(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
Amendment 352 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
Amendment 353 #
Proposal for a regulation
Recital 19
(19) The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences. _________________ 38 Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures between Member States (OJ L 190, 18.7.2002, p. 1).
Amendment 376 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be prohibited, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. Due to the reproduction of the inherent biases of our societies, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.
Amendment 378 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be prohibited, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
Amendment 381 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be banned. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail an unacceptable risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 382 #
Proposal for a regulation
Recital 38
(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to prohibit some AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those prohibited AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, and for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be included in such a ban.
Amendment 385 #
Proposal for a regulation
Recital 39
(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to prohibit AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; and for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. Other AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council49, the Regulation (EC) No 810/2009 of the European Parliament and of the Council50 and other relevant legislation. _________________ 49 Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60). 50 Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).
Amendment 508 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness that materially distorts a person’s behaviour without their knowledge;
Amendment 512 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, or sexual orientation, in order to materially distort the behaviour of a person pertaining to that group;
Amendment 515 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
Amendment 533 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – introductory part
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement.
Amendment 535 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
deleted
Amendment 539 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point ii
deleted
Amendment 543 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
deleted
Amendment 546 #
Proposal for a regulation
Article 5 – paragraph 1 – point d a (new)
(da) practices listed in Annex IIIa;
Amendment 547 #
Proposal for a regulation
Article 5 – paragraph 1 – point d b (new)
(db) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
Amendment 548 #
Proposal for a regulation
Article 5 – paragraph 1 – point d c (new)
(dc) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
Amendment 549 #
Proposal for a regulation
Article 5 – paragraph 1 – point d d (new)
(dd) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships;
Amendment 550 #
Proposal for a regulation
Article 5 – paragraph 1 – point d e (new)
(de) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
Amendment 551 #
Proposal for a regulation
Article 5 – paragraph 1 – point d f (new)
(df) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
Amendment 552 #
Proposal for a regulation
Article 5 – paragraph 1 – point d g (new)
(dg) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
Amendment 553 #
Proposal for a regulation
Article 5 – paragraph 1 – point d h (new)
(dh) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
Amendment 554 #
Proposal for a regulation
Article 5 – paragraph 1 – point d i (new)
(di) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
Amendment 555 #
Proposal for a regulation
Article 5 – paragraph 1 – point d j (new)
(dj) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
Amendment 556 #
Proposal for a regulation
Article 5 – paragraph 1 – point d k (new)
(dk) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
Amendment 568 #
Proposal for a regulation
Article 5 a (new)
Amendment 582 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
Amendment 583 #
Proposal for a regulation
Article 7 – paragraph 1 – point a
deleted
Amendment 657 #
Proposal for a regulation
Article 13 – paragraph 3 – point b – point i a (new)
(ia) an overview of different inputs taken into account by the Artificial Intelligence solution when making decisions;
Amendment 718 #
Proposal for a regulation
Article 29 a (new)
Article 29a
Recourse for parties affected by decisions of high-risk Artificial Intelligence systems
1. Where the decision of a high-risk Artificial Intelligence system directly affects a natural person, that person is entitled to an explanation of the decision, including but not limited to:
(a) the inputs taken into account by the Artificial Intelligence solution in decision-making;
(b) where feasible, the inputs that had the strongest influence on the decision.
2. Where the decision of a high-risk Artificial Intelligence system directly affects a natural person’s economic or social prospects (for instance, job or educational opportunities, access to benefits, public services or credit), and without prejudice to existing sectoral legislation, that person may request that the decision be re-evaluated by a human being. This re-evaluation must take place within a reasonable time following the request.
Amendment 750 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
Amendment 757 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 905 #
Proposal for a regulation
Article 73 – paragraph 2
2. The delegation of power referred to in Article 4, Article 5a(1), Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
Amendment 906 #
Proposal for a regulation
Article 73 – paragraph 3
3. The delegation of power referred to in Article 4, Article 5a(1), Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
Amendment 908 #
Proposal for a regulation
Article 73 – paragraph 5
5. Any delegated act adopted pursuant to Article 4, Article 5a(1), Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
Amendment 931 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point a
deleted
Amendment 932 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 – point b
deleted
Amendment 933 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point a
deleted
Amendment 934 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
deleted
Amendment 936 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point a
deleted
Amendment 938 #
Proposal for a regulation
Annex III – paragraph 1 – point 5 – point b
Amendment 946 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point a
deleted
Amendment 948 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point b
deleted
Amendment 950 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point e
deleted
Amendment 951 #
Proposal for a regulation
Annex III – paragraph 1 – point 6 – point f
deleted
Amendment 953 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point a
deleted
Amendment 955 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point b
deleted
Amendment 957 #
Proposal for a regulation
Annex III – paragraph 1 – point 7 – point d
deleted
Amendment 960 #
Proposal for a regulation
Annex III a (new)
ANNEX IIIa
ADDITIONAL PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES REFERRED TO IN ARTICLE 5(1)
1. Additional Prohibited Artificial Intelligence Practices pursuant to Article 5(1)(da) are:
(a) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.