4 Amendments of Hilde VAUTMANS related to 2021/0106(COD)
Amendment 862 #
Proposal for a regulation
Article 2 – paragraph 2 a (new)
2 a. AI systems likely to interact with or impact on children shall be considered high-risk for this group.
Amendment 1747 #
Proposal for a regulation
Article 10 a (new)
Article 10 a
Risk management system for AI systems likely to interact with children
AI systems likely to interact with or impact on children shall implement a risk management system addressing content, contact, conduct and contract risks to children.
Amendment 2710 #
Proposal for a regulation
Article 65 – paragraph 1 a (new)
1 a. When AI systems are likely to interact with or impact on children, the precautionary principle shall apply.
Amendment 2712 #
Proposal for a regulation
Article 65 – paragraph 2 – introductory part
2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).

Where there is sufficient reason to consider that an AI system exploits the vulnerabilities of children or violates their rights, intentionally or unintentionally, the market surveillance authority shall have the duty to investigate the design goals, data inputs, model selection, implementation and outcomes of the AI system, and the burden of proof shall be on the operator or operators of that system to demonstrate compliance with the provisions of this Regulation. The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3), including by providing access to personnel, documents, internal communications, code, data samples and on-platform testing as necessary.

Where, in the course of its evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
The corrective action may also be applied to AI systems in other products or services judged to be similar in their objectives, design or impact.