32 Amendments of Marion WALSMANN related to 2021/0106(COD)
Amendment 349 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of promoting "AI made in Europe" and being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34. _________________ 33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6. 34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 355 #
Proposal for a regulation
Recital 5 a (new)
(5 a) The Union legal framework for AI should respect existing sector-specific legislation and create legal certainty by avoiding duplication and additional administrative burden.
Amendment 363 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate existing harmless applications and future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
Amendment 388 #
Proposal for a regulation
Recital 10
(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union and at international level, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.
Amendment 630 #
Proposal for a regulation
Recital 44
(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from bias in AI systems, the providers should also be able to process special categories of personal data, as a matter of substantial public interest, in order to ensure bias monitoring, detection and correction in relation to high-risk AI systems.
Amendment 730 #
Proposal for a regulation
Recital 73
(73) In order to promote and protect innovation, it is important that the interests of small-scale providers, such as SMEs and micro-enterprises, and of users of AI systems are taken into particular account. SMEs are the backbone of the European economy and face greater challenges in adapting to new legislation; measures should therefore be foreseen to support them in coping with the new obligations or to exclude them from certain requirements. To this end, Member States should develop initiatives targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when notified bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
Amendment 787 #
Proposal for a regulation
Article 1 – paragraph 1 – point a
(a) harmonised rules for the placing on the market, the putting into service and the use of safe and trustworthy artificial intelligence systems (‘AI systems’) in the Union;
Amendment 801 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
(e a) measures to support innovation and provide for a level playing field for European providers of AI systems at international level, in particular for small-scale providers such as SMEs.
Amendment 842 #
Proposal for a regulation
Article 2 – paragraph 2 – introductory part
2. In order to ensure legal certainty, preserve the existing legislation and avoid duplication, only Article 84 of this Regulation shall apply for high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the following acts:
Amendment 916 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives and with varying levels of autonomy, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Amendment 972 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13
Amendment 986 #
Proposal for a regulation
Article 3 – paragraph 1 – point 14
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system, the failure or malfunctioning of which endangers the health and safety of persons or property;
Amendment 1084 #
Proposal for a regulation
Article 3 – paragraph 1 – point 44 – introductory part
(44) ‘serious incident’ means any incident that directly or indirectly leads, might have led or might lead to any of the following:
Amendment 1140 #
Proposal for a regulation
Article 4 – paragraph 1
The Commission is empowered to adopt delegated acts in accordance with Article 73 after consulting relevant stakeholders to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
Amendment 1171 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1186 #
Proposal for a regulation
Article 5 – paragraph 1 – point b
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
Amendment 1438 #
Proposal for a regulation
Article 6 – paragraph 2
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk if they pose a risk of harm to the health and safety or a risk of adverse impact on fundamental rights.
Amendment 1576 #
Proposal for a regulation
Article 9 – paragraph 1
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems or be included in existing risk management procedures.
Amendment 1588 #
Proposal for a regulation
Article 9 – paragraph 2 – point a
(a) identification and analysis of the known and foreseeable risks to the health and safety or fundamental rights of a person associated with each high-risk AI system;
Amendment 1595 #
Proposal for a regulation
Article 9 – paragraph 2 – point b
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;
Amendment 1613 #
Proposal for a regulation
Article 9 – paragraph 4 – introductory part
4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high- risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
Amendment 1723 #
Proposal for a regulation
Article 10 – paragraph 3
3. Training, validation and testing data sets shall be relevant and representative. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Amendment 1756 #
Proposal for a regulation
Article 11 – paragraph 1 – subparagraph 1
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV.
Amendment 1940 #
Proposal for a regulation
Article 17 – paragraph 2
2. The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation and can be fulfilled by further elaborating existing quality management systems.
Amendment 2137 #
Proposal for a regulation
Article 41 – paragraph 1
1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient because there is a need to address specific safety or fundamental rights concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Amendment 2143 #
Proposal for a regulation
Article 41 – paragraph 2
2. The Commission shall, before preparing the common specifications referred to in paragraph 1, consult relevant bodies, expert groups and other relevant stakeholders established under relevant sectorial Union law.
Amendment 2436 #
Proposal for a regulation
Article 57 – paragraph 1
1. The Board shall be composed of the national supervisory authorities, which shall be represented by the head or an equivalent high-level official of that authority, the European Data Protection Supervisor and relevant stakeholders, including SMEs. Other national authorities may be invited to the meetings where the issues discussed are of relevance to them.
Amendment 2683 #
Proposal for a regulation
Article 64 – paragraph 1
1. Upon a reasoned request, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.
Amendment 2694 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2, and upon a reasoned request, the market surveillance authorities shall be granted access to other data if no confidential business information is at risk.
Amendment 2702 #
Proposal for a regulation
Article 64 – paragraph 6
6. Any information and documentation obtained by the market surveillance authorities or the national public authorities or bodies referred to in paragraph 1, 2 and 3 pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
Amendment 2800 #
Proposal for a regulation
Article 70 – paragraph 1 – point a
(a) intellectual property rights, confidential business information, professional secrecy or trade secrets of a natural or legal person, including source code, except where the cases referred to in Article 5 of Directive (EU) 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure apply;
Amendment 3000 #
Proposal for a regulation
Article 85 – paragraph 2
2. This Regulation shall apply from [36 months following the entering into force of the Regulation].