57 Amendments of Victor NEGRESCU related to 2021/0106(COD)
Amendment 57 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values, while minimising any risk of adverse and discriminatory impact on people. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 58 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is trustworthy and safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for developers, deployers and users and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 62 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities if developed in accordance with ethical principles. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, culture, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
Amendment 65 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, culture, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
Amendment 86 #
Proposal for a regulation
Recital 14 a (new)
(14 a) Without prejudice to tailoring rules to the intensity and scope of the risks that AI systems can generate, or to the specific requirements laid down for high-risk AI systems, all AI systems developed, deployed or used in the Union should respect not only Union and national law but also a specific set of ethical principles that are aligned with the values enshrined in Union law and that are, in part, concretely reflected in the specific requirements to be complied with by high-risk AI systems. That set of principles should, inter alia, also be reflected in codes of conduct that should be mandatory for the development, deployment and use of all AI systems. Accordingly, any research carried out with the purpose of attaining AI-based solutions that strengthen the respect for those principles, in particular those of social responsibility and environmental sustainability, should be encouraged by the Commission and the Member States.
Amendment 87 #
Proposal for a regulation
Recital 14 b (new)
(14 b) ‘AI literacy’ refers to the skills, knowledge and understanding that allow both citizens and operators, in the context of the obligations set out in this Regulation, to make an informed deployment and use of AI systems, as well as to gain awareness about the opportunities and risks of AI and thereby promote its democratic control. AI literacy should not be limited to learning about tools and technologies, but should also aim to equip citizens more generally, and operators in the context of the obligations set out in this Regulation, with the critical thinking skills required to identify harmful or manipulative uses as well as to improve their agency and their ability to fully comply with and benefit from trustworthy AI. It is therefore necessary that the Commission, the Member States as well as operators of AI systems, in cooperation with all relevant stakeholders, promote the development of AI literacy, in all sectors of society, for citizens of all ages, including women and girls, and that progress in that regard is closely followed.
Amendment 89 #
Proposal for a regulation
Recital 15
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy, gender equality and the rights of the child.
Amendment 90 #
Proposal for a regulation
Recital 16
(16) The development, deployment or use of certain AI systems used to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so by materially distorting the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
Amendment 106 #
Proposal for a regulation
Recital 28
(28) AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, right to education, consumer protection, workers’ rights. Special attention should be paid to gender equality, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration, protection of intellectual property rights and ensuring cultural diversity. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons or to the environment, due to the extraction and consumption of natural resources, waste and the carbon footprint.
Amendment 107 #
Proposal for a regulation
Recital 32
(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Amendment 115 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed, developed and used, such systems may violate the right to education and training as well as the rights to gender equality and not to be discriminated against and perpetuate historical patterns of discrimination. Finally, education is also a social learning process; therefore, the use of artificial intelligence systems must not replace the fundamental role of teachers in education.
Amendment 118 #
Proposal for a regulation
Recital 36
(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact the health, safety and security rules applicable in their work and at their workplaces and the future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy. In this regard, specific requirements on transparency, information and human oversight should apply. Trade unions and workers’ representatives should be informed and they should have access to any documentation created under this Regulation for any AI system deployed or used in their work or at their workplace.
Amendment 129 #
(70) Certain AI systems used to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications, which should include a disclaimer, should be provided in accessible formats for children, the elderly, migrants and persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio, text, scripts or video content that appreciably resembles existing persons, places, text, scripts or events and would falsely appear to a person to be authentic, should appropriately disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin, namely the name of the person or entity that created it. AI systems used to recommend, disseminate and order news or cultural and creative content displayed to users should include an explanation of the parameters used for the moderation of content and personalised suggestions which should be easily accessible and understandable to the users.
Amendment 131 #
Proposal for a regulation
Recital 70
(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats also for children, old people and persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
Amendment 132 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 133 #
Proposal for a regulation
Recital 74
(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies. The Commission should also create pan-European university and research networks focused on AI for enhanced studying and research on the impact of AI and to update the Digital Education Action Plan in order to integrate AI and robotics innovation in education.
Amendment 134 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this and other Regulations, a European Agency for Data and Artificial Intelligence should be established. The Agency should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation and other present or future legislation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence. The Agency should establish a Permanent Stakeholders' Group composed of experts representing the relevant stakeholders, such as representatives of developers, deployers and users of AI systems, including SMEs and start-ups, consumer groups, trade unions, fundamental rights organisations and academic experts, and it should communicate its activities to citizens as appropriate.
Amendment 135 #
Proposal for a regulation
Recital 76
(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing expert advice to and assisting the Commission on specific questions related to artificial intelligence and to address the challenges arising from the fast-evolving development of AI technologies.
Amendment 136 #
Proposal for a regulation
Recital 79
(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation. Where appropriate, national authorities or bodies, which supervise the application of Union law or national law compatible with Union law establishing rules regulating the health, safety, security and environment at work, should also have access to any documentation created under this Regulation.
Amendment 137 #
Proposal for a regulation
Recital 81
(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy, socially responsible and environmentally sustainable artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Developers and deployers of all AI systems should also draw up codes of conduct in order to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI. The Commission and the European Agency for Data and Artificial Intelligence may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
Amendment 138 #
Proposal for a regulation
Recital 83
(83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks. A new set of common European guidelines and standards should be set up in order to protect privacy while making an effective use of the data available.
Amendment 145 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the development, deployment and the use of artificial intelligence systems (‘AI systems’) in the Union;
Amendment 146 #
Proposal for a regulation
Article 1 – paragraph 1 – point d
(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
Amendment 151 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that can, in an automated manner, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 152 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘developer’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge, or that adapts a general purpose AI system to a specific purpose and use;
Amendment 153 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2 a (new)
(2 a) ‘deployer’ means any natural or legal person, public authority, agency or other body putting into service an AI system developed by another entity without substantial modification, or using an AI system under its authority;
Amendment 157 #
Proposal for a regulation
Article 3 – paragraph 1 – point 8
(8) ‘operator’ means the developer, the deployer, the user, the authorised representative, the importer and the distributor;
Amendment 165 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'AI literacy' means the skills, knowledge and understanding regarding AI systems;
Amendment 166 #
Proposal for a regulation
Article 4
Article 4 deleted
Amendment 168 #
Proposal for a regulation
Article 4 a (new)
Amendment 169 #
Proposal for a regulation
Article 4 b (new)
Article 4 b AI literacy 1. When implementing this Regulation, the Union and the Member States shall promote measures and tools for the development of a sufficient level of AI literacy, across sectors and groups of operators concerned, including through education and training, skilling and reskilling programmes, while ensuring a proper gender and age balance, in view of allowing a democratic control of AI systems. 2. Developers and deployers of AI systems shall promote tools and take measures to ensure a sufficient level of AI literacy of their staff and any other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the environment the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used. 3. Such literacy tools and measures shall consist, in particular, of the teaching and learning of basic notions and skills about AI systems and their functioning, including the different types of products and uses, their risks and benefits and the severity of the possible harm they can cause and its probability of occurrence. 4. A sufficient level of AI literacy is one that contributes to the ability of operators to fully comply with and benefit from trustworthy AI, and in particular with the requirements laid down in this Regulation in Articles 13, 14, 29, 52 and 69.
Amendment 202 #
Proposal for a regulation
Article 7 – paragraph 1 – introductory part
1. The Commission is empowered to adopt delegated acts in accordance with Article 73, after ensuring adequate consultation with relevant stakeholders and the European Agency for Data and AI, to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:
Amendment 214 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c a (new)
(c a) provision of a sufficient level of AI literacy;
Amendment 216 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children, the elderly, migrants or other vulnerable groups.
Amendment 217 #
Proposal for a regulation
Article 9 – paragraph 8
8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children and people from vulnerable groups.
Amendment 221 #
Proposal for a regulation
Article 10 – paragraph 2 – point g a (new)
(g a) the purpose and the environment in which the system is to be used;
Amendment 224 #
Proposal for a regulation
Article 13 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable developers, deployers, users and other relevant stakeholders to easily interpret the system’s functioning and output and use it appropriately. An appropriate type and degree of transparency shall be ensured on the basis of informed decisions, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.
Amendment 225 #
Proposal for a regulation
Article 13 – paragraph 3 a (new)
3 a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 4b.
Amendment 228 #
Proposal for a regulation
Article 14 – paragraph 5 a (new)
5 a. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with new Article 4b.
Amendment 232 #
Proposal for a regulation
Article 29 – paragraph 1 a (new)
1 a. In order to comply with the obligations established in this Article, as well as to be able to justify their possible non-compliance, deployers of high-risk AI systems shall ensure a sufficient level of AI literacy in line with new Article 4b;
Amendment 235 #
Proposal for a regulation
Article 52 – paragraph 1
1. Developers and deployers shall ensure that AI systems used to interact with natural persons are designed and developed in such a way that natural persons are informed, in a timely, clear and intelligible manner, that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This information shall also include, as appropriate, the functions that are AI enabled, and the rights and processes to allow natural persons to appeal against the application of such AI systems to them. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Amendment 236 #
Proposal for a regulation
Article 52 – paragraph 2
2. Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto, in a timely, clear and intelligible manner, of the operation of the system. This information shall also include, as appropriate, the rights and processes to allow natural persons to appeal against the application of such AI system to them. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
Amendment 237 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Deployers and users of an AI system that generates or manipulates image, audio, text, scripts or video content that appreciably resembles existing persons, objects, places, text, scripts or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner, that the content has been artificially generated or manipulated, as well as the name of the person or entity that generated or manipulated it.
Amendment 242 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use forms part of an evidently artistic, creative or fictional cinematographic or analogous work or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 243 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 a (new)
Developers and deployers of AI systems that recommend, disseminate and order news or creative and cultural content shall disclose in an appropriate, easily accessible, clear and visible manner, the parameters used for the moderation of content and personalised suggestions. This information shall include a disclaimer.
Amendment 244 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1 b (new)
The information referred to in the previous paragraphs shall be provided to the natural persons in a timely, clear and visible manner, at the latest at the time of the first interaction or exposure. Such information shall be made accessible when the exposed natural person is a person with disabilities, a child or from a vulnerable group. It shall be completed, where possible, with intervention or flagging procedures for the exposed natural person, taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications.
Amendment 245 #
Proposal for a regulation
Article 52 – paragraph 4 a (new)
4 a. In order to comply with the obligations established in this Article, a sufficient level of AI literacy shall be ensured.
Amendment 253 #
Proposal for a regulation
Article 69 – paragraph 1
1. The Commission and the Member States shall support the mandatory drawing up of codes of conduct intended to demonstrate compliance with the ethical principles underpinning trustworthy AI set out in new Article 4a and to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
Amendment 254 #
Proposal for a regulation
Article 69 – paragraph 2
2. In the drawing up of codes of conduct intended to ensure and demonstrate compliance with the ethical principles underpinning trustworthy AI set out in Article 4a, developers and deployers shall, in particular: (a) consider whether there is a sufficient level of AI literacy among their staff and any other persons dealing with the operation and use of AI systems in order to observe such principles; (b) assess to what extent their AI systems may affect vulnerable persons or groups of persons, including children, the elderly, migrants and persons with disabilities, or whether any measures could be put in place in order to support such persons or groups of persons; (c) pay attention to the way in which the use of their AI systems may have an impact on gender balance and equality; (d) have especial regard to whether their AI systems can be used in a way that, directly or indirectly, may residually or significantly reinforce existing biases or inequalities; (e) reflect on the need and relevance of having in place diverse development teams in view of securing an inclusive design of their systems; (f) give careful consideration to whether their systems can have a negative societal impact, notably concerning political institutions and democratic processes; (g) evaluate the extent to which the operation of their AI systems would allow them to fully comply with the obligation to provide an explanation laid down in the new Article 71 of this Regulation; (h) take stock of the Union’s commitments under the European Green Deal and the European Declaration on Digital Rights and Principles; (i) state their commitment to privileging, where reasonable and feasible, the common specifications to be drafted by the Commission pursuant to Article 41 rather than their own individual technical solutions.
Amendment 255 #
Proposal for a regulation
Article 69 – paragraph 3
3. Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations, including in particular trade unions and consumer organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
Amendment 256 #
Proposal for a regulation
Article 69 – paragraph 3 a (new)
3 a. Developers and deployers shall designate at least one natural person that is responsible for the internal monitoring of the drawing up of their code of conduct and for verifying compliance with that code of conduct in the course of their activities. That person shall serve as a contact point for users, stakeholders, national competent authorities, the Commission and the European Agency for Data and AI on all matters concerning the code of conduct.
Amendment 257 #
Proposal for a regulation
Article 69 – paragraph 3 b (new)
3 b. In order to comply with the obligations established in this Article, developers and deployers shall ensure a sufficient level of AI literacy in line with New Article 6.
Amendment 260 #
Proposal for a regulation
Annex I
Amendment 262 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating, telecommunications, and electricity.
Amendment 264 #
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions or of determining the study program or areas of study to be followed by students;
Amendment 266 #
Proposal for a regulation
Annex III – paragraph 1 – point 3 a (new)
3 a. AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests at education and training institutions;
Amendment 270 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on establishment, promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.