45 Amendments of Andrey SLABAKOV related to 2021/0106(COD)
Amendment 54 #
Proposal for a regulation
Recital 1
(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values without hampering innovation, deployment and uptake of Artificial Intelligence and the beneficial contributions the technology can bring to individuals, businesses as well as society and economy at large. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Amendment 59 #
Proposal for a regulation
Recital 2
(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation, innovation, deployment and uptake of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Amendment 61 #
Proposal for a regulation
Recital 3
(3) Artificial intelligence is a fast evolving family of technologies that can contribute and is already contributing to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By establishing an accommodative framework which entails improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and entire industries, and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, media, sports, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
Amendment 72 #
Proposal for a regulation
Recital 5 a (new)
(5 a) In order to help promote the development, uptake and understanding of AI, the Union needs to put further effort into education and training, thus, inter alia, addressing the shortage of ICT professionals and AI undergraduate courses and of digitally skilled workers, as well as the lack of even basic digital skills amongst a significant share of the EU population;
Amendment 73 #
Proposal for a regulation
Recital 5 b (new)
(5 b) Moreover, a lack of both public and private investment is currently undermining the development and use of AI systems across the Union, especially when compared to other major industrial economies. Special attention, incentives and support should be devised to promote AI uptake amongst SMEs, including those in education and the cultural and creative sectors and industries;
Amendment 76 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty and commercial certainty, and be in line with internationally accepted definitions, while providing the flexibility to accommodate future technological developments. The Commission should pursue dialogue with key international organisations so as to ensure that there is alignment and common understanding of precisely what AI systems entail. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
Amendment 80 #
Proposal for a regulation
Recital 9
(9) For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses, factories and other private spaces. Online spaces, whether publicly accessible or not, are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
Amendment 85 #
Proposal for a regulation
Recital 14
(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk- based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems. However, it is important to distinguish between the parties who develop and make the system and those who promote or market the product.
Amendment 116 #
Proposal for a regulation
Recital 35
(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since poorly designed AI systems may negatively determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination. All other applications of AI systems in education and training, such as systems used to monitor students during tests, should by default be considered minimal risk.
Amendment 117 #
Proposal for a regulation
Recital 35 a (new)
(35 a) The application of AI systems in news media is growing, helping to automate mundane tasks, raise efficiency and improve the quality of the offer. To raise competitiveness and embrace innovation, it is vital that AI-aided automation efforts, such as automatically written articles, can be deployed by newsrooms. As such, relevant AI applications for which there is editorial oversight should be considered minimal risk.
Amendment 139 #
Proposal for a regulation
Recital 85
(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annexes VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making58. Such consultations may involve qualified specialists, including from the private sector and industries, with skills and knowledge relevant to the task. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts. _________________ 58 OJ L 123, 12.5.2016, p. 1.
Amendment 142 #
Proposal for a regulation
Recital 86 a (new)
(86 a) Given the rapid technological developments and the required technical expertise in conducting the assessment of high-risk AI systems, the delegation of powers and the implementing powers of the Commission should be exercised with as much flexibility as possible. The Commission should regularly review Annex III, while consulting with the relevant stakeholders.
Amendment 150 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not affect or undermine research and development activities related to AI systems and their output.
Amendment 154 #
Proposal for a regulation
Article 3 – paragraph 1 – point 3 a (new)
(3 a) 'deployer' means an entity that puts into service an AI system developed by another entity without modification;
Amendment 164 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'deep fake' means manipulated or synthetic audio or visual media which feature persons purported to be authentic and truthful;
Amendment 167 #
Proposal for a regulation
Article 4 – paragraph 1 a (new)
In the drafting process of the relevant delegated acts, the Commission shall seek input from all relevant stakeholders, including the European Artificial Intelligence Board as well as developers of AI systems and industry experts.
Amendment 176 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that deliberately exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 205 #
Proposal for a regulation
Article 7 – paragraph 1 a (new)
1 a. When adopting a delegated act, the Commission shall seek input from all relevant stakeholders, including the European Artificial Intelligence Board as well as developers of AI systems and industry experts.
Amendment 209 #
Proposal for a regulation
Article 7 – paragraph 2 – point e
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, with a distinction to be made between an AI system used in an advisory capacity and one used to directly inform the decision-making process, in particular because for practical or legal reasons it is not reasonably possible to opt out from that outcome;
Amendment 210 #
Proposal for a regulation
Article 7 – paragraph 2 – point g a (new)
(g a) the extent to which the relevant AI systems benefit individuals and society at large;
Amendment 211 #
Proposal for a regulation
Article 7 – paragraph 2 – point g b (new)
(g b) the extent to which the AI system acts autonomously;
Amendment 212 #
Proposal for a regulation
Article 7 – paragraph 2 – point g c (new)
(g c) general capabilities and functionalities of the AI system independent of its intended purpose;
Amendment 218 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed, to the extent technically feasible, on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
Amendment 219 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. To the extent technically feasible, training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
Amendment 220 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases, in particular deviations that could affect health and safety of people or lead to discrimination;
Amendment 222 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing data sets shall take into account, to the extent technically feasible and required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 223 #
Proposal for a regulation
Article 12 – paragraph 1
1. High-risk AI systems shall be designed and developed with capabilities enabling the technical possibility for recording of events (‘logs’) while the high-risk AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications.
Amendment 226 #
Proposal for a regulation
Article 14 – paragraph 3 – introductory part
3. The degree of human oversight shall be proportionate to the relevant risks, the level of automation and the intended purpose of the AI system. The relevant oversight shall be ensured through either one or all of the following measures:
Amendment 227 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) sufficiently understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
Amendment 229 #
Proposal for a regulation
Article 15 – paragraph 1
1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, a reasonably expected level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
Amendment 230 #
Proposal for a regulation
Article 15 – paragraph 3 – introductory part
3. Sufficient and technically feasible measures shall be taken to ensure that high-risk AI systems are resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
Amendment 231 #
Proposal for a regulation
Article 23 – paragraph 1
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. In accordance with Article 70(2), the national competent authorities shall not disclose, and shall keep confidential, all trade secrets or otherwise commercially sensitive information contained in the information received.
Amendment 233 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of applicable stakeholders, including industry representatives, SMEs as well as other relevant bodies or expert groups established under relevant sectorial Union law.
Amendment 234 #
Proposal for a regulation
Article 42 – paragraph 2
2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council63 and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15, where applicable, of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements. _________________ 63 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 1).
Amendment 238 #
Proposal for a regulation
Article 52 – paragraph 3 – introductory part
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. Users shall be able to opt out of such disclosure notifications.
Amendment 240 #
Proposal for a regulation
Article 52 – paragraph 3 – subparagraph 1
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or where the content forms part of an evidently artistic, creative or fictional cinematographic and analogous work, or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
Amendment 246 #
Proposal for a regulation
Article 55 – paragraph 2 a (new)
2 a. The Commission shall regularly assess certification and compliance costs for small-scale providers and, where merited, take reasonable steps to minimise the compliance costs for those providers.
Amendment 247 #
Proposal for a regulation
Article 56 – paragraph 2 – point a a (new)
(a a) work towards promoting uptake of AI within the EU, especially amongst SMEs;
Amendment 248 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor. Other national or international authorities and relevant stakeholders, including from the private sector, shall be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 250 #
Proposal for a regulation
Article 58 – paragraph 1 – point b
(b) contribute to uniform administrative practices in the Member States, including for the functioning of regulatory sandboxes referred to in Article 53 so as to help promote and unleash the full potential of AI;
Amendment 251 #
Proposal for a regulation
Article 58 – paragraph 1 – point c a (new)
(c a) identify and help address existing bottlenecks;
Amendment 252 #
Proposal for a regulation
Article 64 – paragraph 1
1. Upon reasoned request access to data and documentation in the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
Amendment 258 #
Proposal for a regulation
Article 71 – paragraph 1
1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests and market position of small-scale providers and start-ups and their economic viability.
Amendment 259 #
Proposal for a regulation
Article 71 – paragraph 6 – point c
(c) the size and market share of the operator committing the infringement, while also taking into consideration the size of the operator;
Amendment 261 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric identification, unless for private use, and categorisation of natural persons: