Activities of Emmanuel MAUREL related to 2020/2014(INL)
Shadow reports (1)
REPORT with recommendations to the Commission on a civil liability regime for artificial intelligence
Amendments (37)
Amendment 15 #
Motion for a resolution
Recital B
B. whereas the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the affected person or any other third party; whereas that framework cannot be subject to strictly profit-making considerations and whereas its objectives include the limitation of the hazardous nature of the products made available to the public;
Amendment 22 #
Motion for a resolution
Recital D
D. whereas the legal system of a Member State can adjust liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals;
Amendment 34 #
Motion for a resolution
Recital G
G. whereas sound ethical standards for AI-systems combined with solid and fair compensation procedures can help to address those legal challenges; whereas fair liability procedures mean that each person who suffers tangible or intangible harm caused by AI-systems should have the same level of protection compared to cases without involvement of an AI-system;
Amendment 41 #
Motion for a resolution
Paragraph 1
1. Considers that the challenge related to the introduction of AI-systems into society and the economy, particularly the public sphere; digital, judicial and financial interactions; the workplace or the medical sphere, is one of the most important questions on the current political agenda; whereas technologies based on AI could improve our lives in almost every sector, from the personal sphere (e.g. personalised education, fitness programs) and the professional sphere (by eliminating repetitive tasks) to global challenges (e.g. climate change, hunger and nutrition);
Amendment 46 #
Motion for a resolution
Paragraph 2
2. Firmly believes that in order to prevent potential misuses of AI systems, such systems must be governed across the EU by legislation which is future-proof and based on principles such as those set out in the EU Charter of Fundamental Rights and the report with recommendations to the Commission on an ethical framework for artificial intelligence, robotics and related technologies1a; is of the opinion that a horizontal legal framework based on these common principles is necessary to establish equal standards across the Union and effectively protect our European values;
_____________
1a 2020/2012 (INL)
Amendment 55 #
Motion for a resolution
Paragraph 4
4. Firmly believes that the new common rules for AI-systems should only take the form of a regulation; considers that the question of liability in cases of harm or damage caused by an AI-system is a key aspect of this harmonisation and that it must therefore be dealt with in the framework of a regulation;
Amendment 61 #
Motion for a resolution
Paragraph 5
5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer physical, psychological or mental harm or whose property is damaged end up without compensation;
Amendment 88 #
Motion for a resolution
Paragraph 10
10. Opines that liability rules involving the deployer should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the deployer; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the deployer of the AI-system;
Amendment 91 #
Motion for a resolution
Paragraph 11
11. Considers it appropriate to define the deployer as the natural or legal person involved in placing on the market the AI-system or making it available to users, safeguarding its availability or management and monitoring the system; considers that exercising control means any action of the deployer that affects the manner of the operation from start to finish or that changes specific functions or processes within the AI-system;
Amendment 109 #
Motion for a resolution
Paragraph 14
14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause physical, psychological or mental harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 110 #
Motion for a resolution
Paragraph 15
15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological change and the required technical expertise, it should be up to the Commission to review that Annex every six months and if necessary, amend it through a delegated act; believes that the Commission should closely collaborate with the newly-established European Agency for Artificial Intelligence;
Amendment 118 #
Motion for a resolution
Paragraph 15 a (new)
15a. Acknowledges that the list thus created could not claim to be exhaustive, particularly with regard to final court rulings which may in the meantime have identified AI systems that do not appear on it;
Amendment 120 #
Motion for a resolution
Paragraph 16
16. Believes that in line with strict liability systems of the Member States, the proposed Regulation should only cover tangible and intangible harm to the most important legally protected rights such as life, health, physical, psychological and mental integrity and property, and should set out indicative amounts and the extent of compensation as well as the limitation period;
Amendment 133 #
Motion for a resolution
Paragraph 18
18. Considers that this liability regime, which guarantees full compensation for any harm caused by AI systems, means that their developers and producers must pay greater attention to the security of those systems; points out that full risk coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons;
Amendment 141 #
Motion for a resolution
Paragraph 20
20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that it is up to insurers to adjust existing products or create new insurance cover for the numerous sectors and various technologies, products and services that involve AI-systems;
Amendment 149 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 3
Amendment 153 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 4
- Instead of replacing the well-functioning existing liability regimes, we should make a few specific adjustments by introducing new and future-orientated ideas which are legally operational and adapted to the specificities of AI systems;
Amendment 161 #
Motion for a resolution
Annex I – part B – recital 1
(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible.
Amendment 176 #
Motion for a resolution
Annex I – part B – recital 4
(4) It seems reasonable to expect that the advantages of deploying AI-systems will by far outweigh their disadvantages. They will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, sound ethical standards combined with solid and fair compensation are the best way forward.
Amendment 182 #
Motion for a resolution
Annex I – part B – recital 5
(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for society. There are however also AI-systems that are deployed in a critical manner and are based on neural networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.
Amendment 206 #
Motion for a resolution
Annex I – part B – recital 10
(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI- system and will not have any contractual or legal relationship towards the deployer. Consequently, the operation of the AI- system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the deployer of the AI- system, while facing severe difficulties to prove fault on the part of the deployer.
Amendment 215 #
Motion for a resolution
Annex I – part B – recital 12
(12) All AI-systems with a high risk should be listed, in a way which does not claim to be exhaustive, in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every 12 months. Developers are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify for a high risk AI-system.
Amendment 220 #
Motion for a resolution
Annex I – part B – recital 13
(13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. Within the newly-established European Agency for Artificial Intelligence, a standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should contribute to the review provided for in this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, business representatives from different sectors and of different sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as of the standing TCRAI committee when dealing with the preparation of delegated acts.
_________________
4 OJ L 123, 12.5.2016, p. 1.
Amendment 221 #
Motion for a resolution
Annex I – part B – recital 14
(14) In line with strict liability systems of the Member States, this Regulation should cover only harm or damage to life, health, physical, psychological and mental integrity and property. For the same reason, it should determine the amount and extent of compensation, as well as the limitation period for bringing forward liability claims.
Amendment 228 #
Motion for a resolution
Annex I – part B – recital 15
(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as high-risk AI-systems in the Annex to this Regulation should remain subject to fault-based liability, unless the Court of Justice of the European Union decides otherwise. The national laws of the Member States, including their relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period, should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.
Amendment 236 #
Motion for a resolution
Annex I – part B – recital 18
(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options of small and medium-sized enterprises (SMEs) as well as of start-ups to insure and finance their projects based on new technologies. Liability, therefore, not only safeguards important legally protected rights of individuals but is also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether customers are willing to use such products and services despite the potential risks and legal claims being brought against them.
Amendment 240 #
Motion for a resolution
Annex I – part B – recital 19
Amendment 246 #
Motion for a resolution
Annex I – part B – recital 20
(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover- by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.
Amendment 255 #
Motion for a resolution
Annex I – part B – recital 22
(22) Since the objectives of this Regulation, namely to create a unified approach grounded in the principle of product safety and reliability, which sets common European standards for our citizens and businesses, and to ensure the consistency of rights and legal certainty throughout the Union, in order to avoid fragmentation of the Digital Single Market, which would hamper the goal of maintaining digital sovereignty and of fostering digital innovation in Europe, require that the liability regimes for AI-systems be fully harmonised, and since this cannot be sufficiently achieved by the Member States, owing to the rapid technological change, the cross-border development and usage of AI-systems and the conflicting legislative approaches across the Union, but can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve those objectives,
Amendment 262 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical, mental or moral integrity or the property of a natural or legal person.
Amendment 281 #
Motion for a resolution
Annex I – part B – Article 3 – point d
(d) ‘deployer’ means the natural or legal person involved in placing the AI-system on the market or making it available to users as well as managing and using it;
Amendment 292 #
Motion for a resolution
Annex I – part B – Article 3 – point f
(f) ‘harm or damage’ means an adverse impact affecting the life, health, physical, mental or moral integrity or property of a natural or legal person, with the exception of non-material harm;
Amendment 307 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – introductory part
2. The high-risk AI-systems as well as the critical sectors where they are used shall be listed in the Annex to this Regulation. The Commission is empowered to adopt delegated acts in accordance with Article 13, to amend the exhaustive list in the Annex, by:
Amendment 343 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point a
(a) up to a maximum total amount of EUR ten million in the event of death or of harm caused to the health or physical or mental integrity of one or several persons as the result of the same operation of the same high-risk AI-system;
Amendment 349 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point b
(b) up to a maximum total amount of EUR two million in the event of damage – or moral prejudice – caused to property, including when several items of property of one or several persons were damaged as a result of the same operation of the same high-risk AI-system; where the affected person also holds a contractual liability claim against the deployer, no compensation shall be paid under this Regulation if the total amount of the damage to property is of a value that falls below EUR 500.
Amendment 353 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 363 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 2
2. Within the amount set out in Article 5(1)(a), compensation to be paid by the deployer held liable in the event of harm to the health or the physical or mental integrity of the affected person shall include the reimbursement of the costs of the related medical treatment as well as the payment for any pecuniary prejudice sustained by the affected person as a result of the temporary suspension, reduction or permanent cessation of his or her earning capacity or the consequent, medically certified increase in his or her needs.