32 Amendments of Vincenzo SOFO related to 2021/0106(COD)
Amendment 315 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 321 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU and to align it with relevant EU legislation such as the GDPR and the EUDPR. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board and to take into consideration the EDPB-EDPS Joint Opinion 5/2021.
Amendment 354 #
Proposal for a regulation
Recital 5 a (new)
(5 a) The regulatory framework addressing artificial intelligence should be without prejudice to existing and future Union laws concerning data protection, privacy, and protection of fundamental rights. In this regard, requirements of this Regulation should be consistent with the aims and objectives of, among others, the GDPR and the EUDPR. Where this Regulation addresses automated processing within the context of article 22 of the GDPR, the requirements contained in that article should continue to apply, ensuring the highest levels of protection for European citizens over the use of their personal data.
Amendment 597 #
Proposal for a regulation
Recital 40
(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
Amendment 650 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 700 #
Proposal for a regulation
Recital 68
Amendment 870 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems designed, modified, developed or used exclusively for military purposes.
Amendment 887 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not apply to AI systems, including their output, specifically developed or used exclusively for scientific research and development purposes.
Amendment 905 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that displays intelligent behaviour by analysing its environment and taking actions – with some degree of autonomy – to achieve specific goals, which:
(a) receives machine and/or human-based data and inputs;
(b) infers how to achieve a given set of human-defined objectives using data-driven models created through learning or reasoning implemented with the techniques and approaches listed in Annex I, and
(c) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, influencing the environments it interacts with;
Amendment 1147 #
Article 4 a
Notification about the use of an AI system
1. Users of AI systems which affect natural persons, in particular by evaluating or assessing them, making predictions about them, recommending information, goods or services to them or determining or influencing their access to goods and services, shall inform the natural persons that they are subject to the use of such an AI system.
2. The information referred to in paragraph 1 shall include a clear and concise indication of the user and the purpose of the AI system, information about the rights of the natural person conferred under this Regulation, and a reference to a publicly available resource where more information about the AI system can be found, in particular the relevant entry in the EU database referred to in Article 60, if applicable.
3. This information shall be presented in a concise, intelligible and easily accessible form, including for persons with disabilities.
4. This obligation shall be without prejudice to other Union or Member State laws, in particular Regulation 2016/679, Directive 2016/680 and Regulation 2022/XXX.
Amendment 1189 #
Proposal for a regulation
Article 5 – paragraph 1 – point c – introductory part
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf as well as private companies, including social media and cloud service providers, for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
Amendment 1275 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 1360 #
Proposal for a regulation
Article 5 – paragraph 2 – point b a (new)
(b a) the full respect of fundamental rights and freedoms in conformity with Union values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU.
Amendment 1389 #
Proposal for a regulation
Article 5 – paragraph 4
4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall fully comply with EU values, the Universal Declaration of Human Rights, the European Convention on Human Rights and the Charter of Fundamental Rights of the EU and shall specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.
Amendment 1441 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk, if they pose a risk of harm to either physical health and safety or human rights, or both.
Amendment 1483 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a risk of harm to the health, natural environment and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
Amendment 1492 #
Proposal for a regulation
Article 7 – paragraph 2 – introductory part
2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health, natural environment and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:
Amendment 1500 #
(b) the extent to which an AI system has been used or is likely to be used and misused;
Amendment 1509 #
Proposal for a regulation
Article 7 – paragraph 2 – point c
(c) the extent to which the use of an AI system has already caused harm to the health, natural environment and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
Amendment 1607 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that the overall residual risk of the high-risk AI systems is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI systems are reasonably expected to deliver, and provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, subject to terms and conditions as made available by the provider, and contractual and license restrictions. Those residual risks shall be communicated to the user.
Amendment 1626 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point c
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, relevant information on necessary competence, training and authority for natural persons exercising such oversight.
Amendment 1700 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect the health and safety of persons or lead to discrimination prohibited by Union law;
Amendment 1704 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any other data gaps or shortcomings that materially increase the risks of harm to the health, natural environment and safety or the fundamental rights of persons, and how those gaps and shortcomings can be addressed.
Amendment 1908 #
Proposal for a regulation
Article 16 – paragraph 1 a (new)
The obligations contained in paragraph 1 shall be without prejudice to obligations applicable to providers of high-risk AI systems arising from Regulation (EU) 2016/679 of the European Parliament and of the Council and Regulation (EU) 2018/1725 of the European Parliament and of the Council.
Amendment 2069 #
Proposal for a regulation
Article 29 – paragraph 6 a (new)
6 a. Users of high-risk systems involving an emotion recognition system or a biometric categorisation system in accordance with Article 52 shall implement suitable measures to safeguard the natural person's rights and freedoms and legitimate interests in such a system, including providing the natural person with the ability to express his or her point of view on the resulting categorisation and to contest the decision.
Amendment 2081 #
Proposal for a regulation
Article 29 a (new)
Amendment 2105 #
Proposal for a regulation
Article 33 – paragraph 6
6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. Any information and documentation obtained by notified bodies pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2574 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
4 a. National competent authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
Amendment 2575 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
4 b. Any information and documentation obtained by the national competent authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2635 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
5 a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 3090 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, whose failure or malfunctioning would directly cause significant harm to the health, natural environment or safety of natural persons.
Amendment 3113 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI systems intended to make decisions on promotion and termination of work-related contractual relationships based on individual behaviour or personal traits or characteristics, and for monitoring and evaluating performance and behaviour of persons in such relationships, that have a likelihood of causing harm to physical health and safety or of adversely impacting fundamental rights, or that have given rise to significant concerns in relation to the materialisation of such harm or adverse impact.