Activities of Eugen JURZYCA related to 2021/0106(COD)
Plenary speeches (1)
Artificial Intelligence Act (debate)
Amendments (81)
Amendment 377 #
Proposal for a regulation
Recital 8
(8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference data repository, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
Amendment 400 #
Proposal for a regulation
Recital 12 a (new)
(12 a) This Regulation should also ensure harmonisation and consistency in definitions and terminology as biometric techniques can, in the light of their primary function, be divided into techniques of biometric identification, authentication and verification. Biometric authentication means the process of matching an identifier to a specific stored identifier in order to grant access to a device or service, whilst biometric verification refers to the process of confirming that an individual is who they claim to be. As they do not involve any “one-to-many” comparison of biometric data that is the distinctive trait of identification, both biometric verification and authentication should be excluded from the scope of this Regulation.
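To make the “one-to-many” versus one-to-one distinction concrete, the following is a minimal Python sketch contrasting verification (matching a probe against a single claimed template) with identification (searching a whole reference repository). The similarity function, template format and the 0.8 threshold are illustrative assumptions only, not anything prescribed by the amendment.

# Sketch of the one-to-one vs one-to-many distinction; all values are toy assumptions.
def match_score(probe, template):
    # Toy similarity: 1 minus the mean absolute difference of feature vectors.
    diffs = [abs(p - t) for p, t in zip(probe, template)]
    return 1.0 - sum(diffs) / len(diffs)

def verify(probe, claimed_template, threshold=0.8):
    # One-to-one: confirm the person is who they claim to be (excluded from scope).
    return match_score(probe, claimed_template) >= threshold

def identify(probe, reference_templates, threshold=0.8):
    # One-to-many: search every enrolled template for a match (identification).
    scores = {name: match_score(probe, tpl) for name, tpl in reference_templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

probe = [0.9, 0.1, 0.8]
db = {"alice": [0.9, 0.1, 0.8], "bob": [0.2, 0.7, 0.3]}
print(verify(probe, db["alice"]))  # True: a single claimed identity is confirmed
print(identify(probe, db))         # "alice": the probe is compared against everyone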
Amendment 548 #
Proposal for a regulation
Recital 33
(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they may pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and, when appropriate and justified by a proven added value to the protection of health, safety and fundamental rights, human oversight.
Amendment 574 #
Proposal for a regulation
Recital 37
(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Because AI systems related to low-value credits for the purchase of moveables do not cause high risk, it is proposed to exclude this category from the scope of the high-risk AI category as well. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Amendment 641 #
Proposal for a regulation
Recital 48
(48) High-risk AI systems should be designed and developed in such a way that natural persons may, when appropriate, oversee their functioning. For this purpose, when it brings a proven added value to the protection of health, safety and fundamental rights, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
Amendment 650 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the notified bodies, competent national authorities and market surveillance authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 658 #
Proposal for a regulation
Recital 54
(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation in the language of the Member State concerned and establish a robust post-market monitoring system. All elements, from design to future development, must be transparent for the user. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
Amendment 717 #
Proposal for a regulation
Recital 70 a (new)
(70 a) Suppliers of general purpose AI systems and, as relevant, other third parties that may supply other software tools and components, including pre-trained models and data, should cooperate, as appropriate, with providers that use such systems or components for an intended purpose under this Regulation in order to enable their compliance with applicable obligations under this Regulation and their cooperation, as appropriate, with the competent authorities established under this Regulation. In such cases, the provider may, by written agreement, specify the information or other assistance that such supplier will furnish in order to enable the provider to comply with its obligations herein.
Amendment 870 #
Proposal for a regulation
Article 2 – paragraph 3
3. This Regulation shall not apply to AI systems designed, modified, developed or used exclusively for military purposes.
Amendment 887 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not apply to AI systems, including their output, specifically developed or used exclusively for scientific research and development purposes.
Amendment 895 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not affect any research and development activity regarding AI systems in so far as such activity does not lead to placing an AI system on the market or putting it into service.
Amendment 905 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that displays intelligent behaviour by analysing its environment and taking actions – with some degree of autonomy – to achieve specific goals, which:
(a) receives machine and/or human-based data and inputs;
(b) infers how to achieve a given set of human-defined objectives using data-driven models created through learning or reasoning implemented with the techniques and approaches listed in Annex I, and
(c) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments it interacts with;
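Read as a pipeline, the three limbs of this definition correspond to an input step, an inference step toward human-defined objectives, and an output step. A minimal sketch follows; the linear model, the weights and the 0.5 threshold are toy assumptions, not part of the amendment text.

# Sketch mapping the three-limb definition onto an input/inference/output pipeline.
class ToyAISystem:
    def __init__(self, weights):
        # (b) a data-driven model, here toy learned weights (an assumption).
        self.weights = weights

    def receive(self, inputs):
        # (a) receives machine and/or human-based data and inputs.
        return [float(x) for x in inputs]

    def infer(self, features):
        # (b) infers how to achieve a human-defined objective from the data.
        return sum(w * x for w, x in zip(self.weights, features))

    def generate(self, score):
        # (c) generates an output (here a recommendation) that influences its environment.
        return "recommend" if score > 0.5 else "reject"

system = ToyAISystem(weights=[0.4, 0.6])
print(system.generate(system.infer(system.receive([1, 0.5]))))  # prints "recommend"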
Amendment 932 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge;
Amendment 950 #
Proposal for a regulation
Article 3 – paragraph 1 – point 4 a (new)
(4 a) ‘end-user’ means any natural person who, in the framework of employment, contract or agreement with the deployer, uses the AI system under the authority of the deployer;
Amendment 975 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose as indicated in the instructions for use or technical specification, but which may result from reasonably foreseeable human behaviour or interaction with other systems;
Amendment 1050 #
Proposal for a regulation
Article 3 – paragraph 1 – point 36
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a physical distance through the comparison of a person’s biometric data with the biometric data contained in a reference data repository, excluding verification/authentication systems whose sole purpose is to confirm that a specific natural person is the person he or she claims to be, and systems that are used to confirm the identity of a natural person for the sole purpose of having access to a service, a device or premises, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
Amendment 1101 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'critical infrastructure' means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive (…) on the resilience of critical entities;
Amendment 1168 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner intended to cause, or likely to cause, that person or another person physical or psychological harm;
Amendment 1255 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point i
(i) the targeted search for specific potential victims of crime, including missing children;
Amendment 1271 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii
Amendment 1282 #
Proposal for a regulation
Article 5 – paragraph 1 – point d – point iii a (new)
(iii a) searching for missing persons, especially those who are minors or have medical conditions that affect memory, communication, or independent decision-making skills;
Amendment 1431 #
Proposal for a regulation
Article 6 – paragraph 1 – point b
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment related to safety with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
Amendment 1441 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk, if they pose a risk of harm to either physical health and safety or human rights, or both.
Amendment 1444 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. The classification as high-risk as a consequence of Article 6(1) and 6(2) shall be disregarded for AI systems whose intended purpose demonstrates that the generated output is a recommendation requiring a human intervention to convert this recommendation into a decision and for AI systems which do not lead to autonomous decisions or actions of the overall system.
Amendment 1451 #
Proposal for a regulation
Article 6 – paragraph 2 b (new)
2 b. When assessing an AI system for the purposes of paragraph 1 of Article 6, a safety component shall be assessed against the essential health and safety requirements of the relevant EU harmonisation legislation listed in Annex II.
Amendment 1607 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that the overall residual risk of the high-risk AI system is reasonably judged to be acceptable, having regard to the benefits that the high-risk AI system is reasonably expected to deliver, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, and subject to terms and conditions made available by the provider and to contractual and license restrictions. Those residual risks shall be communicated to the user.
Amendment 1617 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – introductory part
In identifying the most appropriate risk management measures, the following outcomes shall be pursued:
Amendment 1620 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 1 – point a
(a) elimination or reduction of risks as far as commercially reasonable and technologically feasible in light of the generally acknowledged state of the art, through appropriate design and development measures;
Amendment 1635 #
Proposal for a regulation
Article 9 – paragraph 4 – subparagraph 2
In seeking to eliminate or reduce risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training to be expected of the user and the environment in which the system is intended to be used.
Amendment 1640 #
Proposal for a regulation
Article 9 – paragraph 5
5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures for the specific scenario in which the system will be operating and to ensure that a system is performing appropriately for a given use case. Testing shall ensure that high-risk AI systems perform in a manner that is consistent with their intended purpose and that they are in compliance with the requirements set out in this Chapter.
Amendment 1682 #
Proposal for a regulation
Article 10 – paragraph 2 – introductory part
2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices for the entire lifecycle of data processing. Where relevant to appropriate risk management measures, those practices shall concern in particular,
Amendment 1697 #
Proposal for a regulation
Article 10 – paragraph 2 – point e
(e) an assessment of the availability, quantity and suitability of the data sets that are needed;
Amendment 1700 #
Proposal for a regulation
Article 10 – paragraph 2 – point f
(f) examination in view of possible biases that are likely to affect the health and safety of persons or lead to discrimination prohibited by Union law;
Amendment 1704 #
Proposal for a regulation
Article 10 – paragraph 2 – point g
(g) the identification of any other data gaps or shortcomings that materially increase the risks of harm to the health, natural environment and safety or the fundamental rights of persons, and how those gaps and shortcomings can be addressed.
Amendment 1720 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant, sufficiently diverse to mitigate bias, and, to the best extent possible, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1731 #
Proposal for a regulation
Article 10 – paragraph 4
4. Training, validation and testing data sets shall be sufficiently diverse to accurately capture, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
Amendment 1740 #
Proposal for a regulation
Article 10 – paragraph 5
5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and the use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, encryption or biometric template protection technologies where anonymisation may significantly affect the purpose pursued.
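Of the safeguards named here, pseudonymisation is the most mechanical: direct identifiers are replaced with stable keyed pseudonyms before the special-category data are analysed for bias. A minimal Python sketch follows, in which the key handling, the field names and the SHA-256 choice are illustrative assumptions only.

# Sketch of keyed pseudonymisation of a direct identifier before bias analysis.
import hmac, hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: kept outside the dataset

def pseudonymise(identifier):
    # Replace a direct identifier with a stable keyed pseudonym (HMAC-SHA256).
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "ethnicity": "X", "outcome": "denied"}
safe_record = {**record, "name": pseudonymise(record["name"])}
print(safe_record)  # bias monitoring can proceed without the direct identifier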
Amendment 1775 #
Proposal for a regulation
Article 12 – paragraph 2
2. The logging capabilities shall ensure a level of traceability of the AI system’s functioning, while the AI system is used within its lifecycle, that is appropriate to the intended purpose of the system.
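In practice, such logging capabilities amount to the system automatically writing one traceable record per inference while in use. A minimal sketch follows; the JSON-lines format, the file name and the field names are assumptions for illustration, not requirements of the provision.

# Sketch of per-inference audit logging for traceability.
import json, time

def log_event(logfile, inputs, output, model_version):
    # Append one traceable record per inference to an append-only audit log.
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    logfile.write(json.dumps(entry) + "\n")

with open("ai_audit.log", "a") as f:
    log_event(f, inputs=[1, 0.5], output="recommend", model_version="1.0.0")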
Amendment 1777 #
Proposal for a regulation
Article 12 – paragraph 3 a (new)
3 a. For records constituting trade secrets as defined in Article 2 of Directive (EU) 2016/943, the provider may elect to confidentially provide such trade secrets only to relevant public authorities to the extent necessary for such authorities to perform their obligations hereunder.
Amendment 1878 #
Proposal for a regulation
Article 16 – paragraph 1 – point a
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service, and shall be responsible for compliance of these systems after that point only to the extent that they exercise actual control over relevant aspects of the system;
Amendment 2026 #
Proposal for a regulation
Article 28 – paragraph 1 – introductory part
1. Any distributor, importer, user or other third party shall be considered a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
Amendment 2031 #
Proposal for a regulation
Article 28 – paragraph 1 – point c a (new)
(c a) they modify the intended purpose of an AI system which is not high-risk and is already placed on the market or put into service, in a way which makes the modified system a high-risk AI system.
Amendment 2041 #
Proposal for a regulation
Article 29 – paragraph 1
1. Users of high-risk AI systems shall bear sole responsibility in case of any use of the AI system that is not in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5.
Amendment 2101 #
Proposal for a regulation
Article 33 – paragraph 2
2. Notified bodies shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
Amendment 2105 #
Proposal for a regulation
Article 33 – paragraph 6
6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out. Any information and documentation obtained by notified bodies pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2129 #
Proposal for a regulation
Article 41
Amendment 2254 #
Proposal for a regulation
Article 51 – paragraph 1 a (new)
Before using an AI system, public authorities shall register the uses of that system in the EU database referred to in Article 60. A new registration entry must be completed by the user for each use of an AI system.
Amendment 2284 #
Proposal for a regulation
Article 52 a (new)
Article 52 a
General purpose AI systems
1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
Amendment 2297 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States’ competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States’ legislation supervised within the sandbox.
Amendment 2332 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
Amendment 2434 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor, AI ethics experts and industry representatives. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
Amendment 2453 #
Proposal for a regulation
Article 57 – paragraph 3
3. The Board shall be co-chaired by the Commission and a representative chosen from among the delegates of the Member States. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
Amendment 2574 #
Proposal for a regulation
Article 59 – paragraph 4 a (new)
4 a. National competent authorities shall satisfy the minimum cybersecurity requirements set out for public administration entities identified as operators of essential services pursuant to Directive (…) on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148.
Amendment 2575 #
Proposal for a regulation
Article 59 – paragraph 4 b (new)
4 b. Any information and documentation obtained by the national competent authorities pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2587 #
Proposal for a regulation
Article 59 – paragraph 7
7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States shall also establish one central contact point for communication with operators. In addition, the central contact point of each Member State should be contactable through electronic communications means.
Amendment 2630 #
Proposal for a regulation
Article 60 – paragraph 4 a (new)
4 a. The EU database shall not contain any confidential business information or trade secrets of a natural or legal person, including source code.
Amendment 2635 #
Proposal for a regulation
Article 60 – paragraph 5 a (new)
5 a. Any information and documentation obtained by the Commission and Member States pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2646 #
Proposal for a regulation
Article 61 – paragraph 2
2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users and end-users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
Amendment 2681 #
Proposal for a regulation
Article 64 – paragraph 1
1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted sufficient access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access, taking into account the scope of access agreed with the relevant data subjects or data holders.
Amendment 2691 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system. AI providers or deployers shall support market surveillance authorities with the necessary facilities to carry out testing to confirm compliance.
Amendment 2805 #
Proposal for a regulation
Article 70 – paragraph 1 a (new)
1 a. Where the activities of national competent authorities and bodies notified under the provisions of this Article infringe intellectual property rights, Member States shall provide for the measures, procedures and remedies necessary to ensure the enforcement of intellectual property rights in full application of Directive 2004/48/EC on the enforcement of intellectual property rights.
Amendment 2807 #
Proposal for a regulation
Article 70 – paragraph 1 b (new)
1 b. Information and data collected by national competent authorities and notified bodies and referred to in paragraph 1 shall be:
a) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; further processing for archiving purposes in the public interest, for scientific or historical research purposes or for statistical purposes shall not be considered incompatible with the original purposes (‘purpose limitation’);
b) adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’);
Amendment 2822 #
Proposal for a regulation
Article 71 – paragraph 1 a (new)
1 a. In cases where administrative fines have been imposed under Article 83 of Regulation (EU) 2016/679, no further penalties shall be imposed on operators under the AI Act.
Amendment 2887 #
Proposal for a regulation
Article 72 – paragraph 1 – point a a (new)
(a a) the intentional or negligent character of the infringement;
Amendment 2888 #
Proposal for a regulation
Article 72 – paragraph 1 – point a b (new)
(a b) any relevant previous infringement;
Amendment 2890 #
Proposal for a regulation
Article 72 – paragraph 1 – point b a (new)
(b a) the degree of cooperation with the supervisory authority, in order to remedy the infringement and mitigate the possible adverse effects of the infringement;
Amendment 2891 #
Proposal for a regulation
Article 72 – paragraph 1 – point b b (new)
(b b) any action taken by the provider to mitigate the damage suffered by subjects;
Amendment 2893 #
Proposal for a regulation
Article 72 – paragraph 1 – point c a (new)
(c a) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, or losses avoided, directly or indirectly, from the infringement.
Amendment 2933 #
Proposal for a regulation
Article 80 – paragraph 1 – introductory part
In Article 5 of Regulation (EU) 2018/858 the following paragraphs are added:
Amendment 2935 #
Proposal for a regulation
Article 80 – paragraph 1
Regulation (EU) 2018/858
Article 5
4 a. The Commission shall, prior to fulfilling the obligation pursuant to paragraph 4, provide a reasonable explanation based on a gap analysis of existing sectoral legislation in the automotive sector to determine the existence of potential gaps relating to Artificial Intelligence therein, and consult relevant stakeholders, in order to avoid duplications and overregulation, in line with the Better Regulation principles.
Amendment 2939 #
Proposal for a regulation
Article 82 – paragraph 1 – introductory part
In Article 11 of Regulation (EU) 2019/2144, the following paragraphs are added:
Amendment 2940 #
Proposal for a regulation
Article 82 – paragraph 1
Regulation (EU) 2019/2144
Article 11
3 a. The Commission shall, prior to fulfilling the obligation pursuant to paragraph 3, provide a reasonable explanation based on a gap analysis of existing sectoral legislation in the automotive sector to determine the existence of potential gaps relating to Artificial Intelligence therein, and consult relevant stakeholders, in order to avoid duplications and overregulation, in line with the Better Regulation principles.
Amendment 2966 #
Proposal for a regulation
Article 84 – paragraph 1
1. The Commission shall assess the need for amendment of the list in Annex III every 24 months following the entry into force of this Regulation and until the end of the period of the delegation of power. The findings of that assessment shall be presented to the European Parliament and the Council.
Amendment 2973 #
Proposal for a regulation
Article 84 – paragraph 2
2. By [two years after the date of application of this Regulation referred to in Article 85(2)] and every three years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
Amendment 3018 #
Proposal for a regulation
Annex I – point b
(b) Other data-driven approaches, including search and optimization methods.
Amendment 3025 #
Proposal for a regulation
Annex I – point c
(c) Statistical approaches, Bayesian estimation, search and optimization methods, if they are used to extract decisions from data in an automated way.
Amendment 3051 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – introductory part
1. Biometric systems:
Amendment 3063 #
Proposal for a regulation
Annex III – paragraph 1 – point 1 – point a
(a) AI biometric identification systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons without their agreement;
Amendment 3090 #
Proposal for a regulation
Annex III – paragraph 1 – point 2 – point a
(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, whose failure or malfunctioning would directly cause significant harm to the health, natural environment or safety of natural persons.
Amendment 3260 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point a
(a) provided that no confidential information or trade secrets are disclosed, the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
Amendment 3262 #
Proposal for a regulation
Annex IV – paragraph 1 – point 2 – point b
(b) provided that no confidential information or trade secrets are disclosed, the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;