15 Amendments of Karlo RESSLER related to 2021/0106(COD)
Amendment 344 #
Proposal for a regulation
Recital 5
(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. These rules should be supportive to new innovative solutions and robust in protecting the fundamental rights of all the actors. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council33, and it ensures the protection of ethical principles, as specifically requested by the European Parliament34. One of the fundamental principles of this legislative framework is that there is no choice to be made between the protection of fundamental rights and the support of innovation, since this Regulation provides rules that adequately address both priorities.
_________________
33 European Council, Special meeting of the European Council (1 and 2 October 2020) – Conclusions, EUCO 13/20, 2020, p. 6.
34 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012(INL).
Amendment 364 #
Proposal for a regulation
Recital 6
(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be aligned with the internationally accepted approach. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list. The Commission should engage in dialogue with key international organisations, so that common international standards can be achieved to the highest possible extent.
Amendment 652 #
Proposal for a regulation
Recital 51
(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, as well as the competent public authorities accessing the data of providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
Amendment 721 #
Proposal for a regulation
Recital 71
(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service. All other relevant actors should be encouraged to do so as well.
Amendment 804 #
Proposal for a regulation
Article 1 – paragraph 1 – point e a (new)
(e a) measures in support of innovation, particularly focusing on SMEs and start-ups.
Amendment 918 #
Proposal for a regulation
Article 3 – paragraph 1 – point 1
(1) ‘artificial intelligence system’ (AI system) means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments and is designed to operate with varying levels of autonomy;
Amendment 980 #
Proposal for a regulation
Article 3 – paragraph 1 – point 13 a (new)
(13 a) ‘harmful subliminal technique’ means a measure whose existence and operation are entirely imperceptible to those on whom it is used, and which has the purpose and direct effect of inducing actions leading to that person’s physical or psychological harm.
Amendment 1167 #
Proposal for a regulation
Article 5 – paragraph 1 – point a
(a) the placing on the market, putting into service, or use of an AI system that deploys harmful subliminal techniques with the objective to materially distort a person’s behaviour in a manner that foreseeably may cause that person or another person material, physical or psychological harm;
Amendment 1675 #
Proposal for a regulation
Article 10 – paragraph 1
1. High-risk AI systems which make use of techniques involving the training of models with data shall, with reasonable expectations and in accordance with the state of the art, be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5;
Amendment 2292 #
Proposal for a regulation
Article 53 – paragraph 1
1. AI regulatory sandboxes established by SMEs, start-ups, enterprises and other innovators, by one or more Member States’ competent authorities or by the European Data Protection Supervisor shall provide a controlled environment that facilitates the safe development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. For Member States’ competent authorities or the European Data Protection Supervisor, this shall take place under the direct supervision and guidance of the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox. For SMEs, start-ups, enterprises and other innovators, this shall take place independently of supervising authorities, while following rules and regulations established in close cooperation with Member State competent authorities.
Amendment 2306 #
Proposal for a regulation
Article 53 – paragraph 2
2. Member States shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to personal data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox established by one or more Member States’ competent authorities or the European Data Protection Supervisor. Start-ups, SMEs, enterprises and other innovators may request access to personal data from relevant national authorities to be used in their AI sandbox under the guidelines defined through Member State rules and regulations.
Amendment 2325 #
Proposal for a regulation
Article 53 – paragraph 5
5. Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox. SMEs, start-ups, enterprises and other innovators shall submit annual reports to Member States’ competent authorities and share their good practices, lessons learnt and recommendations on their AI sandboxes.
Amendment 2692 #
Proposal for a regulation
Article 64 – paragraph 2
2. Where necessary to assess the conformity of the high-risk uses of AI systems with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall ask for the explainability of the functioning of algorithms and criteria used by an AI system.
Amendment 3016 #
Proposal for a regulation
Annex I – point b
Amendment 3022 #
Proposal for a regulation
Annex I – point c