74 Amendments of Henna VIRKKUNEN related to 2021/0106(COD)

Amendment 102 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high- risk AI systems. In practice, a sufficient solution for bias monitoring could be achieved by abiding by state-of-the-art security and privacy- preserving standards with regards to data management.
2022/05/04
Committee: TRAN
Amendment 112 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes and make such regulatory sandboxes widely available throughout the Union, in order to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
2022/05/04
Committee: TRAN
Amendment 121 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5 a. This Regulation shall not apply to AI systems, including their output, that are specifically developed and put into service for the sole purpose of scientific research and development.
2022/05/04
Committee: TRAN
Amendment 125 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5 b. This Regulation shall not affect any research and development activity regarding AI systems, in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
2022/05/04
Committee: TRAN
Amendment 128 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system that: (i) receives machine and/or human-based data and inputs, (ii) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and (iii) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments they interact with;
2022/05/04
Committee: TRAN
Amendment 130 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge;
2022/05/04
Committee: TRAN
Amendment 136 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation. General purpose AI systems shall not be considered as having an intended purpose within the meaning of this Regulation;
2022/05/04
Committee: TRAN
Amendment 142 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following:
2022/05/04
Committee: TRAN
Amendment 143 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(b a) breach of obligations under Union law intended to protect fundamental rights;
2022/05/04
Committee: TRAN
Amendment 144 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b b (new)
(b b) serious damage to property or the environment;
2022/05/04
Committee: TRAN
Amendment 145 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44 a) 'critical infrastructure' means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive ____ on the resilience of critical entities
2022/05/04
Committee: TRAN
Amendment 147 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44 b) 'personal data' means data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
2022/05/04
Committee: TRAN
Amendment 148 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44 c) ’non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679.
2022/05/04
Committee: TRAN
Amendment 150 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
2022/05/04
Committee: TRAN
Amendment 162 #
Proposal for a regulation
Article 6 – paragraph 1 – introductory part
1. An AI system that is itself a product covered by the Union harmonisation legislation listed in Annex II shall be considered as high-risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation.
2022/05/04
Committee: TRAN
Amendment 165 #
Proposal for a regulation
Article 6 – paragraph 2
2. An AI system intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 shall be considered as high-risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product.
2022/05/04
Committee: TRAN
Amendment 168 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2 a. AI systems referred to in Annex III shall be considered high-risk.
2022/05/04
Committee: TRAN
Amendment 175 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a serious risk of harm to the health and safety, or a serious risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/05/04
Committee: TRAN
Amendment 186 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2022/05/04
Committee: TRAN
Amendment 187 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps:
2022/05/04
Committee: TRAN
Amendment 189 #
Proposal for a regulation
Article 9 – paragraph 2 – point c
(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
2022/05/04
Committee: TRAN
Amendment 190 #
Proposal for a regulation
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
2022/05/04
Committee: TRAN
Amendment 192 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
2022/05/04
Committee: TRAN
Amendment 203 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets should be sufficiently relevant, representative, and free of errors and complete in view of the intended purpose of the system. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/05/04
Committee: TRAN
Amendment 212 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high- risk AI systems. In practice, a sufficient solution for bias monitoring could be achieved by abiding by state-of-the-art security and privacy- preserving standards with regards to data management.
2022/03/31
Committee: ITRE
Amendment 221 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, where necessary and as appropriate to the circumstances:
2022/05/04
Committee: TRAN
Amendment 222 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) have an appropriate understanding of the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
2022/05/04
Committee: TRAN
Amendment 225 #
Proposal for a regulation
Article 14 – paragraph 4 – point d
(d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system, unless the AI system is considered state-of-the-art and such human intervention is deemed to increase risks or otherwise negatively impact the system’s performance;
2022/05/04
Committee: TRAN
Amendment 226 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
(e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure unless the AI system is considered state-of-the-art and such human intervention is deemed to increase risks or otherwise negatively impact the system’s performance.
2022/05/04
Committee: TRAN
Amendment 236 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes and make such regulatory sandboxes widely available throughout the Union, in order to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
2022/03/31
Committee: ITRE
Amendment 238 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are significantly insufficient or that there is a need to address a specific and pressing safety or fundamental rights concern that cannot be sufficiently settled by development of harmonised standards, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/05/04
Committee: TRAN
Amendment 240 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of the developers and providers of high-risk AI systems and relevant bodies or expert groups established under relevant sectorial Union law.
2022/05/04
Committee: TRAN
Amendment 245 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems whose primary function is to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/05/04
Committee: TRAN
Amendment 249 #
Proposal for a regulation
Article 52 a (new)
Article 52 a
General purpose AI systems
1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
2022/05/04
Committee: TRAN
Amendment 250 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems and secure processing of personal data for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
2022/05/04
Committee: TRAN
Amendment 252 #
Proposal for a regulation
Article 53 – paragraph 1 a (new)
1 a. The controllers of personal data referred to in Article 4(7) of Regulation (EU) 2016/679 may further process personal data in an AI regulatory sandbox to the extent that it is necessary for the purposes of development, testing and validation of AI systems. This right of processing shall be subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. Such processing shall not be considered incompatible with the initial purposes.
2022/05/04
Committee: TRAN
Amendment 256 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities shall coordinate their activities with regards to AI regulatory sandboxes and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
2022/05/04
Committee: TRAN
Amendment 264 #
Proposal for a regulation
Article 2 – paragraph 5 a (new)
5a. This Regulation shall not apply to AI systems, including their output, that are specifically developed and put into service for the sole purpose of scientific research and development.
2022/03/31
Committee: ITRE
Amendment 266 #
Proposal for a regulation
Article 2 – paragraph 5 b (new)
5b. This Regulation shall not affect any research and development activity regarding AI systems, in so far as such activity does not lead to or entail placing an AI system on the market or putting it into service.
2022/03/31
Committee: ITRE
Amendment 272 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a system that: (i) receives machine and/or human-based data and inputs, (ii) infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and (iii) generates outputs in the form of content (generative AI systems), predictions, recommendations, or decisions, which influence the environments they interact with;
2022/03/31
Committee: ITRE
Amendment 281 #
Proposal for a regulation
Article 3 – paragraph 1 – point 2
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge;
2022/03/31
Committee: ITRE
Amendment 288 #
Proposal for a regulation
Article 3 – paragraph 1 – point 12
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation. General purpose AI systems shall not be considered as having an intended purpose within the meaning of this Regulation;
2022/03/31
Committee: ITRE
Amendment 300 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following:
2022/03/31
Committee: ITRE
Amendment 301 #
Proposal for a regulation
Annex I – point b
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems.
2022/05/04
Committee: TRAN
Amendment 304 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b a (new)
(ba) breach of obligations under Union law intended to protect fundamental rights;
2022/03/31
Committee: ITRE
Amendment 307 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – point b b (new)
(bb) serious damage to property or the environment;
2022/03/31
Committee: ITRE
Amendment 308 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation based on individual behaviour or personal traits or characteristics and for monitoring and evaluating performance and behaviour of persons in such relationships.
2022/05/04
Committee: TRAN
Amendment 311 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 a (new)
(44a) ‘critical infrastructure’ means an asset, system or part thereof which is necessary for the delivery of a service that is essential for the maintenance of vital societal functions or economic activities within the meaning of Article 2(4) and (5) of Directive … on the resilience of critical entities;
2022/03/31
Committee: ITRE
Amendment 313 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 b (new)
(44b) ‘personal data’ means data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
2022/03/31
Committee: ITRE
Amendment 315 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 c (new)
(44c) ‘non-personal data’ means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679;
2022/03/31
Committee: ITRE
Amendment 321 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I within the scope of the definition of an AI system as provided for in Article 3(1), in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
2022/03/31
Committee: ITRE
Amendment 352 #
Proposal for a regulation
Article 6 – paragraph 1 – introductory part
1. An AI system that is itself a product covered by the Union harmonisation legislation listed in Annex II shall be considered as high-risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation.
2022/03/31
Committee: ITRE
Amendment 354 #
Proposal for a regulation
Article 6 – paragraph 2
2. An AI system intended to be used as a safety component of a product covered by the legislation referred to in paragraph 1 shall be considered as high-risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above mentioned legislation. This provision shall apply irrespective of whether the AI system is placed on the market or put into service independently from the product.
2022/03/31
Committee: ITRE
Amendment 358 #
Proposal for a regulation
Article 6 – paragraph 2 a (new)
2a. AI systems referred to in Annex III shall be considered high-risk.
2022/03/31
Committee: ITRE
Amendment 364 #
Proposal for a regulation
Article 7 – paragraph 1 – point b
(b) the AI systems pose a serious risk of harm to the health and safety, or a serious risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
2022/03/31
Committee: ITRE
Amendment 382 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2022/03/31
Committee: ITRE
Amendment 383 #
Proposal for a regulation
Article 9 – paragraph 2 – introductory part
2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps:
2022/03/31
Committee: ITRE
Amendment 387 #
Proposal for a regulation
Article 9 – paragraph 2 – point c
(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
2022/03/31
Committee: ITRE
Amendment 390 #
Proposal for a regulation
Article 9 – paragraph 3
3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
2022/03/31
Committee: ITRE
Amendment 392 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
2022/03/31
Committee: ITRE
Amendment 421 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets should be sufficiently relevant, representative, and free of errors and complete in view of the intended purpose of the system. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
2022/03/31
Committee: ITRE
Amendment 445 #
Proposal for a regulation
Article 14 – paragraph 4 – introductory part
4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, where necessary and as appropriate to the circumstances:
2022/03/31
Committee: ITRE
Amendment 447 #
Proposal for a regulation
Article 14 – paragraph 4 – point a
(a) have an appropriate understanding of the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
2022/03/31
Committee: ITRE
Amendment 450 #
Proposal for a regulation
Article 14 – paragraph 4 – point d
(d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system unless the AI system is considered state-of-the-art and such human intervention is deemed to increase risks or otherwise negatively impact the system’s performance;
2022/03/31
Committee: ITRE
Amendment 452 #
Proposal for a regulation
Article 14 – paragraph 4 – point e
(e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure unless the AI system is considered state-of-the-art and such human intervention is deemed to increase risks or otherwise negatively impact the system’s performance.
2022/03/31
Committee: ITRE
Amendment 506 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are significantly insufficient or that there is a need to address a specific and pressing safety or fundamental rights concern that cannot be sufficiently settled by development of harmonised standards, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
2022/03/31
Committee: ITRE
Amendment 508 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of the developers and providers of high-risk AI systems and relevant bodies or expert groups established under relevant sectorial Union law.
2022/03/31
Committee: ITRE
Amendment 532 #
Proposal for a regulation
Article 52 – paragraph 1
1. Providers shall ensure that AI systems whose primary function is to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2022/03/31
Committee: ITRE
Amendment 545 #
Proposal for a regulation
Article 52 a (new)
Article 52 a
General purpose AI systems
1. The placing on the market, putting into service or use of general purpose AI systems shall not, by themselves only, make those systems subject to the provisions of this Regulation.
2. Any person who places on the market or puts into service under its own name or trademark or uses a general purpose AI system made available on the market or put into service for an intended purpose that makes it subject to the provisions of this Regulation shall be considered the provider of the AI system subject to the provisions of this Regulation.
3. Paragraph 2 shall apply, mutatis mutandis, to any person who integrates a general purpose AI system made available on the market, with or without modifying it, into an AI system whose intended purpose makes it subject to the provisions of this Regulation.
4. The provisions of this Article shall apply irrespective of whether the general purpose AI system is open source software or not.
2022/03/31
Committee: ITRE
Amendment 556 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by the competent authorities of one or more Member States or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems and the secure processing of personal data for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance of the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
2022/03/31
Committee: ITRE
Amendment 557 #
Proposal for a regulation
Article 53 – paragraph 1 a (new)
1a. The controllers of personal data referred to in Article 4(7) of Regulation (EU) 2016/679 may further process personal data in an AI regulatory sandbox to the extent that it is necessary for the purposes of development, testing and validation of AI systems. This right of processing shall be subject to appropriate safeguards for the fundamental rights and freedoms of natural persons. Such processing shall not be considered incompatible with the initial purposes.
2022/03/31
Committee: ITRE
Amendment 567 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities shall coordinate their activities with regard to AI regulatory sandboxes and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
2022/03/31
Committee: ITRE
Amendment 632 #
Proposal for a regulation
Annex I – point b
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems.
2022/03/31
Committee: ITRE
Amendment 640 #
Proposal for a regulation
Annex III – paragraph 1 – point 4 – point b
(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation based on individual behaviour or personal traits or characteristics and for monitoring and evaluating performance and behaviour of persons in such relationships.
2022/03/31
Committee: ITRE