Activities of Evelyne GEBHARDT related to 2020/2014(INL)
Shadow reports (1)
REPORT with recommendations to the Commission on a civil liability regime for artificial intelligence
Amendments (121)
Amendment 3 #
Draft opinion
Recital A
A. whereas the use of Artificial Intelligence (AI) plays an increasing role in our everyday lives and has the potential to contribute to the deployment and development of innovations in many sectors, offering benefits for consumers through innovative products and services and, for businesses, in particular micro, small and medium enterprises (SMEs), through optimised performance;
Amendment 6 #
Draft opinion
Recital A a (new)
Aa. whereas for the framework to be appropriate, it must cover all AI-based products and their components, including algorithms, software, and data used or produced by them;
Amendment 7 #
Draft opinion
Recital A b (new)
Ab. whereas a common framework for the development, deployment and use of artificial intelligence, robotics and related technologies within the Union should both protect consumers from their potential risks and promote the trustworthiness of such technologies;
Amendment 11 #
Motion for a resolution
Recital A
A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place and discourages irresponsible behaviour;
Amendment 13 #
Draft opinion
Recital B
B. whereas the use, deployment and development of AI applications in products might also present challenges to the existing legal framework on products and reduce the protection of consumers, thus potentially undermining consumer trust and welfare due to their specific characteristics;
Amendment 16 #
Draft opinion
Recital C
C. whereas robust liability mechanisms remedying damage contribute to better protection of consumers, the creation of trust in new technologies integrated in products and acceptance of innovation, while ensuring legal certainty for businesses, in particular micro, small and medium enterprises;
Amendment 16 #
Motion for a resolution
Recital B
B. whereas any future-orientated liability framework has to ensure that affected persons are appropriately protected against damage and that they are able to claim for compensation in all cases where this seems justified; whereas the goal of any liability framework should be to provide legal certainty for all parties, whether it be the manufacturer, the developer, the programmer, the backend operator, the frontend operator, the affected person or any other third party;
Amendment 18 #
Draft opinion
Recital C a (new)
Ca. whereas the Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics (COM(2020) 64) and the White Paper on Artificial Intelligence – A European approach to excellence and trust (COM(2020) 65) should be considered as the basis of the future European legislation;
Amendment 20 #
Draft opinion
Recital C b (new)
Cb. whereas the Product Liability Directive is the existing regulatory framework on the responsibility for the final product;
Amendment 21 #
Motion for a resolution
Recital D
D. whereas the legal system of a Member State can exclude liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals; whereas strict liability lies on the person that has control over the risks of the operation or is responsible for them;
Amendment 22 #
Draft opinion
Paragraph 1
1. Welcomes the Commission’s aim, which is to make the Union legal framework fit the new technological uses, deployments and developments, ensuring a high level of protection for consumers from harm caused by new technologies based on artificial intelligence, robotics and related technologies while maintaining the balance with the needs of technological innovation;
Amendment 24 #
Motion for a resolution
Recital E
E. whereas Artificial Intelligence (AI)-systems and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, present significant legal challenges for the existing liability framework and could lead to situations in which their opacity, complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats make it extremely difficult or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;
Amendment 28 #
Draft opinion
Paragraph 2
2. Calls on the Commission to update the existing liability framework, and in particular Council Directive 85/374/EEC1 (the Product Liability Directive), in order to guarantee highly effective consumer protection and legal clarity for businesses, while avoiding high costs and risks especially for small and medium enterprises and start-ups; __________________ 1 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).
Amendment 29 #
Draft opinion
Paragraph 2 a (new)
2a. Recognises the challenge of determining liability where consumer harm results from autonomous decision-making processes; calls on the Commission to review that directive and consider adapting concepts such as ‘product’, ‘damage’ and ‘defect’ in a way that is coherent with product safety and liability legislation, as well as adapting the rules governing the burden of proof, while stressing that the burden of proof shall by no means lie on the consumer;
Amendment 31 #
Draft opinion
Paragraph 2 b (new)
2b. Further stresses the need to reassess the timeframe during which the producer is held liable for defects of the product, as AI-driven products can become unsafe during their lifecycle due to a software update or the lack thereof; simultaneously, in cases where the supplier cannot be held liable, it might be justified to hold the producer liable for the non-supply of a software update that could fix the safety hazard;
Amendment 33 #
Draft opinion
Paragraph 2 c (new)
2c. Points out that the revision of the Product Liability Directive should be aligned with and built on the EU General Data Protection Regulation (GDPR);
Amendment 33 #
Motion for a resolution
Recital G
G. whereas AI-systems need to comply with current applicable laws and, additionally, Union and national liability regimes need to be adjusted, where necessary, in order to guarantee solid and fair compensation for affected persons; whereas fair liability procedures mean that each person who suffers harm caused by AI-systems, or whose property is damaged by AI-systems, should have the same level of protection as in cases without involvement of an AI-system.
Amendment 39 #
Draft opinion
Paragraph 3
3. Emphasises that any revision of the existing liability framework should aim to further harmonise liability rules in order to ensure a level playing field and to avoid inequalities in consumer protection, as each Member State has its own regulation and this could create unnecessary fragmentation of the single market;
Amendment 43 #
Draft opinion
Paragraph 4
4. Calls on the Commission to update the product liability framework in order to consider the specific characteristics of AI applications, such as complexity, autonomy, opacity and unpredictability;
Amendment 44 #
Draft opinion
Paragraph 5
5. Urges the Commission to scrutinise whether it is necessary to include software in the definition of ‘products’ under the Product Liability Directive in line with the spirit of the current Consumer acquis, namely the definition of ‘goods with digital elements’ under Article 2(3) of Directive (EU) 2019/770 (the Digital Content Directive) and ‘goods’ under Article 2(5)(b) of Directive (EU) 2019/771 (the Sale of Goods Directive); urges the Commission to revise the product liability framework in order to protect injured parties efficiently as regards products that are purchased as a bundle with related services, particularly as the Product Liability Directive only covers personal injury and damage to consumer property, while non-material damage, damage to data or other digital assets remain currently uncovered;
Amendment 54 #
Draft opinion
Paragraph 5 a (new)
5a. Calls on the Commission to clarify that the scope of the new legislation or the update of the Product Liability Directive should apply to all tangible and intangible goods, including digital services;
Amendment 58 #
Draft opinion
Paragraph 6
6. Highlights that, due to the complexity, connectivity and opacity of products based on AI and new technologies, it could be difficult for consumers to prove what defect in a product caused damage, as it cannot be assumed that consumers have all necessary information or specific technical knowledge; therefore, as part of the revision of the Product Liability Directive, it should be sufficient for the consumer to demonstrate that there has been damage, even if third party software is involved or the cause of a defect is hard to trace;
Amendment 58 #
5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability, modification through updates, self-learning and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer material or non-material harm or damage end up without compensation;
Amendment 63 #
Draft opinion
Paragraph 7
7. Calls on the Commission to reverse the burden of proof to prevent it from being placed on the consumer, in order to empower harmed consumers while preventing abuse and providing legal clarity for businesses, in particular micro, small and medium enterprises;
Amendment 64 #
Motion for a resolution
Paragraph 6
6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that this constraint has an even greater impact on the affected person, for whom it is impossible to establish causality between the damage and a prior act or omission; stresses that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system accountable;
Amendment 72 #
Motion for a resolution
Paragraph 7
7. Considers that the Product Liability Directive (PLD) has for over 30 years proven to be an effective means of getting compensation for harm triggered by a defective product; notes that the Directive needs to be updated to include AI-systems; notes that the ‘backend operator’ does not necessarily coincide with the producer, as it can also be the developer or programmer, and, therefore, the ‘backend operator’ can be justifiably covered by a different liability regime than that provided by the PLD;
Amendment 76 #
Draft opinion
Paragraph 8
8. Highlights that in the liability stage a risk-based approach to AI within the existing liability framework is not appropriate, as the damage has occurred and the product has proven to be a risk product;
Amendment 76 #
Motion for a resolution
Paragraph 8
8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker, or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that, due to these characteristics of AI-systems, additional liability rules seem necessary;
Amendment 77 #
Motion for a resolution
Paragraph 9
9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the frontend operator and backend operator of an AI-system; affirms that the frontend operator’s liability is justified by the fact that he or she is benefitting from the use of the AI-system, comparable to an owner of a car or pet; considers that, due to the AI-system’s complexity and connectivity, the frontend operator will in many cases be the first visible contact point for the affected person; conversely, believes that the liability of the backend operator under this Regulation is based on the fact that he or she is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation;
Amendment 78 #
Draft opinion
Paragraph 8 a (new)
8a. Calls on the Commission to remove notions such as ‘time at which a product is put on the market’, which are no longer relevant given the dynamic features of digital goods; points out that currently the producer continues to have control over the product for a long time after having put it onto the market; urges the Commission to review the timelines for bringing a claim under the Product Liability Directive;
Amendment 79 #
Draft opinion
Paragraph 8 b (new)
8b. Stresses that the producer shall bear the liability for products from the EU and for products from outside the EU that are sold through an online marketplace; stresses that, when the producer cannot be identified, the online marketplace shall be liable as a supplier, as online marketplaces are no longer passive intermediaries;
Amendment 80 #
Draft opinion
Paragraph 9
9. Calls on the Commission to address the liability model for products containing AI applications in a two-step process: firstly, providing a fault-based liability of the deployer, against which the affected person should have the right to bring the claim for damages; in the event where no fault of the deployer can be established, the producer or the backend operator should be held strictly liable; considers that the two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems;
Amendment 80 #
Motion for a resolution
Subheading 3
Liability of the frontend and backend operator
Amendment 84 #
Draft opinion
Paragraph 9 a (new)
9a. Notes that the new legislation about product liability should also address the challenges algorithms present in terms of ensuring non-discrimination, transparency and explainability, as well as liability; points out the need to monitor algorithms and to assess associated risks, to use high quality and unbiased datasets, as well as to help individuals acquire access to high quality products;
Amendment 86 #
Motion for a resolution
Paragraph 10
10. Opines that liability rules involving the frontend and backend operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the frontend and backend operators; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the frontend or backend operator of the AI-system;
Amendment 88 #
Draft opinion
Paragraph 9 b (new)
9b. Calls on the Commission to propose concrete measures (such as a registry of product liability cases) to enhance transparency and to monitor defective products circulating in the EU; considers it essential to ensure high consumer protection and a high degree of information about the products that could be purchased.
Amendment 92 #
Motion for a resolution
Paragraph 11
11. Considers it appropriate to define the frontend operator as the person who benefits from the use of the AI-system and enjoys its features and functions; considers that the backend operator is defined as the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation;
Amendment 97 #
Motion for a resolution
Paragraph 12
12. Notes that there could be situations in which there are several frontend and backend operators; considers that in that event, all of them should be jointly and severally liable, while having the right to recourse proportionally against each other;
Amendment 99 #
Motion for a resolution
Subheading 4
Amendment 100 #
Motion for a resolution
Paragraph 13
13. Recalls that in the liability stage a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product; notes that low-risk applications may equally cause severe harm or damage;
Amendment 105 #
Motion for a resolution
Paragraph 14
14. Stresses that the liability model for products containing AI-systems has to be approached in a two-step process: firstly, providing a fault-based liability of the frontend operator, against which the affected person should have the right to bring the claim for damages, with the possibility for the frontend operator to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates; if this obligation is fulfilled, due diligence is presumed; secondly, in the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable; notes that such a two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems;
Amendment 113 #
Motion for a resolution
Paragraph 15
15. Points out that a risk-based approach to AI within the existing liability framework might create unnecessary fragmentation across the EU, creating legal uncertainty, interpretative issues and confusion amongst users, who would face different levels of protection depending on whether the AI-system is classified as high- or low-risk, which is something users cannot assess on their own; considers that, once the damage has occurred, it is irrelevant whether an AI-system has been classified as high- or low-risk, and that what matters is that affected persons can obtain full compensation for the harm regardless of the risk category;
Amendment 121 #
Motion for a resolution
Paragraph 16
16. Believes that, due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property and data, such as loss or leak of data, and should ensure that damage is always fully compensated, in compliance with the fundamental right of redress for damage suffered;
Amendment 124 #
Motion for a resolution
Paragraph 17
Amendment 132 #
Motion for a resolution
Paragraph 18
18. Considers liability coverage to be one of the key factors that define the success of new technologies, products and services; observes that proper liability coverage is also essential for assuring the public that it can trust the new technology, despite the potential for suffering harm or for facing legal claims by affected persons;
Amendment 135 #
Motion for a resolution
Paragraph 19
19. Is of the opinion that a mandatory insurance regime for all AI-systems is not the right approach;
Amendment 139 #
Motion for a resolution
Paragraph 20
Amendment 147 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2
- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the frontend operator, the backend operator, the affected person and any other third party.
Amendment 154 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 4
- Instead of replacing the well-functioning existing liability regimes, we should make some necessary adjustments by introducing new and future-orientated ideas.
Amendment 163 #
Motion for a resolution
Annex I – part B – recital 1
(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and, at the same time, providing enough leeway to make the development of new technologies, products or services possible, ensuring that victims are able to claim compensation in all cases where this seems justified.
Amendment 167 #
Motion for a resolution
Annex I – part B – recital 2
(2) Not only at the beginning of the life cycle of new products and services, but also at later stages, due to modifications through updates, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.
Amendment 169 #
Motion for a resolution
Annex I – part B – recital 3
(3) The rise of Artificial Intelligence (AI) and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element), complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats make it extremely difficult or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is compounded by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.
Amendment 174 #
Motion for a resolution
Annex I – part B – recital 4
(4) At this point, it is important to point out that, to ensure that the advantages of deploying AI-systems will by far outweigh the disadvantages, certain adjustments need to be made to Union law. AI-systems will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, it is essential to ensure that AI-systems comply with applicable laws and to adjust Union and national liability regimes in order to guarantee solid and fair compensation for affected persons.
Amendment 179 #
Motion for a resolution
Annex I – part B – recital 5
(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A frontend operator of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, the backend operator, an interfering third party or the frontend operator, and ends up without compensation. Furthermore, the allocation of liability could be unfair or inefficient. To prevent such scenarios, certain adjustments need to be made to Union and national liability regimes.
Amendment 186 #
Motion for a resolution
Annex I – part B – recital 6
(6) Thus, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system should be accountable for the harm or damage that the activity, device or process causes. Additionally, strict liability should lie with the person that has more control over the risks of the operation. This follows from general and widely accepted liability concepts of justice, according to which the person that creates a risk for the public is accountable if that risk materialises. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and the necessary new provisions would be sufficient to accommodate the AI-related challenges.
Amendment 194 #
Motion for a resolution
Annex I – part B – recital 7
(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has, for over 30 years, provided a valuable safety net to protect consumers from harm caused by defective products and needs to be updated to take account of civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. All necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damage caused by an interfering third person, but does not necessarily take account of technological developments. Consequently, this Regulation should focus on claims against the frontend operator and backend operator of an AI-system. _________________ 3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 195 #
Motion for a resolution
Annex I – part B – recital 8
(8) The liability of the frontend operator under this Regulation is based on the fact that he or she is the person primarily deciding on and benefitting from the use of the relevant technology. Benefitting from the use thereby should be understood as enjoying the features and functions of the AI-system. Conversely, the liability of the backend operator under this Regulation is based on the fact that he or she is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation.
Amendment 202 #
Motion for a resolution
Annex I – part B – recital 9
(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a frontend operator. It is appropriate to note that the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, does not necessarily coincide with the producer, as it could also inter alia be the developer or the programmer.
Amendment 205 #
Motion for a resolution
Annex I – part B – recital 10
(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. This Regulation should also, but not exclusively, address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the frontend operator. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they face severe difficulties to prove fault on the part of the frontend operator.
Amendment 209 #
Motion for a resolution
Annex I – part B – recital 11
(11) In the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product. It should be noted that so-called low-risk applications may very well cause severe harm or damage. Thus, the liability model for products containing AI applications has to be approached in a two-step process: firstly, providing a fault-based liability of the frontend operator, against which the affected person should have the right to bring the claim for damages, with the possibility for the frontend operator to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates. If this obligation is fulfilled, due diligence is presumed. Secondly, in the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable. A two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems.
Amendment 213 #
Motion for a resolution
Annex I – part B – recital 12
(12) A risk-based approach to AI within the existing liability framework might create unnecessary fragmentation across the Union, creating legal uncertainty, interpretative issues and confusion amongst users, who would face different levels of protection depending on whether the AI-system is classified as high- or low-risk, which is something users cannot assess on their own. Once the damage has occurred, it is irrelevant whether an AI-system has been classified as high- or low-risk; what matters is that affected persons can obtain full compensation for the harm regardless of the risk category.
Amendment 218 #
Motion for a resolution
Annex I – part B – recital 13
Amendment 224 #
Motion for a resolution
Annex I – part B – recital 14
(14) Due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property and data, such as loss or leak of data, and should ensure that damage is fully compensated in compliance with the fundamental right of redress for damage suffered.
Amendment 226 #
Motion for a resolution
Annex I – part B – recital 15
Amendment 230 #
Motion for a resolution
Annex I – part B – recital 16
(16) The diligence which can be expected from an operator should be commensurate with (i) the nature of the AI-system, (ii) the information on the nature of the AI-system provided to the operator and to the public, (iii) the legally protected right potentially affected, (iv) the potential harm or damage the AI-system could cause and (v) the likelihood of such damage. Thereby, it should be taken into account that the operator might have limited knowledge of the algorithms and data used in the AI-system, even though a sufficient level of information should be ensured, providing for the relevant documentation on the use and design instructions, including the source code and the data used by the AI-system, made easily accessible through a mandatory legal deposit. It should be presumed that the operator has observed due care in selecting a suitable AI-system, if the operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the operator has observed due care during the operation of the AI-system, if the operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the operator has observed due care as regards maintaining the operational reliability, if the operator installed all available updates provided by the producer of the AI-system according to the conditions laid down in Directive (EU) 2019/770.
Due account should be given to the possibility for the largely volunteer- based free software community to produce software for the general public to use which can be integrated, in whole or in part, into AI systems, without automatically becoming subject to obligations designed for businesses providing digital content in a professional capacity.
Amendment 231 #
Motion for a resolution
Annex I – part B – recital 16
(16) The diligence which can be expected from a frontend operator should be commensurate with (i) the nature of the AI-system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the frontend operator might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the frontend operator has observed due care in selecting a suitable AI-system, if the frontend operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the frontend operator has observed due care during the operation of the AI-system, if the frontend operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the backend operator about potential irregularities during the operation. It should be presumed that the frontend operator has observed due care as regards maintaining the operational reliability, if the frontend operator installed all available updates provided by the backend operator of the AI-system.
Amendment 235 #
Motion for a resolution
Annex I – part B – recital 17
(17) In order to enable the frontend operator to prove that he or she was not at fault, the backend operators should have the duty to collaborate with the frontend operator. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from frontend operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account. _________________ 5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).
Amendment 239 #
Motion for a resolution
Annex I – part B – recital 18
(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.
Amendment 241 #
Motion for a resolution
Annex I – part B – recital 19
Amendment 243 #
Motion for a resolution
Annex I – part B – recital 20
Amendment 250 #
Motion for a resolution
Annex I – part B – recital 21
(21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD. The introduction of a new liability regime for the frontend operator and backend operator of AI-systems requires that the provisions of this Regulation and the review of the PLD should be closely coordinated in terms of substance as well as approach so that they together constitute a consistent liability framework for AI-systems, balancing the interests of producer, deployer and the affected person, as regards the liability risk. Adapting and streamlining the definitions of AI-system, deployer, producer, developer, defect, product and service throughout all pieces of legislation is therefore necessary.
Amendment 253 #
Motion for a resolution
Annex I – part B – recital 21 a (new)
(21a) In the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product. The so- called low-risk applications could equally cause severe harm or damage. Thus, the liability model for products containing AI applications should be approached in a two-step process. Firstly, providing a fault-based liability of the frontend operator against which the affected person should have the right to bring the claim for damages. The frontend operator should be able to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates. If this obligation is fulfilled, due diligence should be presumed. Secondly, in the event where no fault of the frontend operator can be established, the producer or the backend operator should be held strictly liable. Such a two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems.
Amendment 256 #
Motion for a resolution
Annex I – part B – Article 1 – paragraph 1
This Regulation sets out rules for the civil liability claims of natural and legal persons against the frontend operator and against the backend operator of AI-systems.
Amendment 261 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
1. This Regulation applies on the territory of the Union where a physical or virtual activity, connected and self-learning device, digital content, digital good and/or service has caused a typical harm or damage to a natural or legal person.
Amendment 264 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 2
2. Any agreement between a frontend or backend operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, whether concluded before or after the harm or damage has been caused, shall be deemed null and void.
Amendment 273 #
Motion for a resolution
Annex I – part B – Article 3 – point a a (new)
(aa) ‘automated decision-making (ADM), decision-support or decision- informing system’ means the procedure in which decisions are initially, partly or completely, delegated to an operator by way of using a software or a service, who then in turn uses automatically executed decision-making models to perform an action;
Amendment 275 #
Motion for a resolution
Annex I – part B – Article 3 – point c
Amendment 283 #
Motion for a resolution
Annex I – part B – Article 3 – point d
(d) ‘frontend operator’ means the person primarily deciding on and benefitting from the use of the relevant technology;
Amendment 285 #
Motion for a resolution
Annex I – part B – Article 3 – point d a (new)
(da) ‘backend operator’ means the person continuously defining the features of the relevant technology and providing essential and ongoing backend support;
Amendment 286 #
Motion for a resolution
Annex I – part B – Article 3 – point e
(e) ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its frontend operator;
Amendment 291 #
Motion for a resolution
Annex I – part B – Article 3 – point f
(f) ‘harm or damage’ means any material or non-material typical harm or damage, meaning those where the special risk of AI-systems or connected devices materialises;
Amendment 293 #
Motion for a resolution
Annex I – part B – Article 3 – point g
Amendment 298 #
Motion for a resolution
Annex I – part B – chapter 2 – title
Amendment 299 #
Motion for a resolution
Annex I – part B – Article 4 – title
Amendment 300 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1
1. The frontend operator of an AI-system shall be liable at fault for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system. The frontend operator has to comply with the duty of care consisting in the regular installation of all available updates. If that obligation is fulfilled, due diligence shall be presumed.
Amendment 305 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – introductory part
2. In the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable.
Amendment 310 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point a
Amendment 312 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point b
Amendment 316 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point c
Amendment 319 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – subparagraph 2
Amendment 320 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 3
Amendment 326 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 4
Amendment 330 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 4 a (new)
4a. The liability insurance system shall be supplemented by a fund in order to ensure that damages can be compensated for in cases where no insurance cover exists.
Amendment 331 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 5
Amendment 335 #
Motion for a resolution
Annex I – part B – Article 5 – title
Amendment 338 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – introductory part
1. A frontend or backend operator of an AI-system that has been held liable for harm or damage under this Regulation shall fully compensate for any harm or damage.
Amendment 340 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point a
Amendment 348 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point b
Amendment 355 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 360 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1 – introductory part
1. Within the amount set out in Article 5(1)(a), compensation to be paid by the party held liable in the event of physical harm followed by the death of the affected person shall be calculated based on the costs of medical treatment that the affected person underwent prior to his or her death, and of the pecuniary prejudice sustained prior to death caused by the cessation or reduction of the earning capacity or the increase in his or her needs for the duration of the harm prior to death. The operator held liable shall furthermore reimburse the funeral costs for the deceased affected person to the party who is responsible for defraying those expenses.
Amendment 362 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1 – paragraph 1
If at the time of the incident that caused the harm leading to his or her death, the affected person was in a relationship with a third party and had a legal obligation to support that third party, the operator held liable shall indemnify the third party by paying maintenance to the extent to which the affected person would have been obliged to pay, for the period corresponding to an average life expectancy for a person of his or her age and general description. The operator shall also indemnify the third party if, at the time of the incident that caused the death, the third party had been conceived but had not yet been born.
Amendment 364 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 2
2. Within the amount set out in Article 5(1)(b), compensation to be paid by the operator held liable in the event of harm to the health or the physical integrity of the affected person shall include the reimbursement of the costs of the related medical treatment as well as the payment for any pecuniary prejudice sustained by the affected person, as a result of the temporary suspension, reduction or permanent cessation of his or her earning capacity or the consequent, medically certified increase in his or her needs.
Amendment 366 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 1
1. Civil liability claims, brought in accordance with Article 4(1), concerning harm to life, health or physical integrity, shall be subject to a special limitation period of 30 years from the date on which the harm occurred.
Amendment 368 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2
Amendment 378 #
Motion for a resolution
Annex I – part B – chapter 3 – title
Amendment 379 #
Motion for a resolution
Annex I – part B – Article 8 – title
Amendment 380 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1
1. The frontend operator of an AI-system shall be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system.
Amendment 384 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – introductory part
2. The frontend operator shall not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds:
Amendment 391 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – subparagraph 2
The frontend operator shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The frontend operator shall not be liable if the harm or damage was caused by force majeure.
Amendment 394 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 3
3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning, the frontend operator, if at fault, shall nonetheless be liable for the payment of compensation if such third party is untraceable or impecunious.
Amendment 397 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 4
4. The backend operator of an AI-system shall have the duty of collaborating with the frontend operator or the affected person, to the extent warranted by the significance of the claim, in order to allow the frontend operator or the affected person to prove that he or she acted without fault.
Amendment 400 #
Motion for a resolution
Annex I – part B – Article 9
Amendment 402 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 1
1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the frontend operator’s extent of liability under this Regulation shall be reduced accordingly. The frontend operator shall not be liable if the affected person or the person for whom he or she is responsible is solely or predominantly accountable for the harm or damage caused.
Amendment 406 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2
2. A frontend operator held liable may use the data generated by the AI-system to prove contributory negligence on the part of the affected person. The same right shall apply for the affected person. The use of such data shall comply with the data protection legislation.
Amendment 409 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 1
If there is more than one frontend and backend operator of an AI-system, they shall be jointly and severally liable. If any of the operators is also the producer of the AI-system, this Regulation shall prevail over the Product Liability Directive.
Amendment 411 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 1
1. The frontend operator and the backend operator shall not be entitled to pursue a recourse action unless the affected person, who is entitled to receive compensation under this Regulation, has been paid in full.
Amendment 414 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 2
2. In the event that the frontend operator and the backend operator are held jointly and severally liable with other operators in respect of an affected person and have fully compensated that affected person, in accordance with Article 4(1) or 8(1), that operator may recover part of the compensation from the other operators, in proportion to his or her liability. Operators that are jointly and severally liable shall be obliged in equal proportions in relation to one another, unless otherwise determined. If the contribution attributable to a jointly and severally liable operator cannot be obtained from him or her, the shortfall shall be borne by the other operators. To the extent that a jointly and severally liable operator compensates the affected person and demands adjustment of advancements from the other liable operators, the claim of the affected person against the other operators shall be subrogated to him or her. The subrogation of claims shall not be asserted to the disadvantage of the original claim.
Amendment 417 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 3
3. In the event that the frontend operator of a defective AI-system fully indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), he or she may take action for redress against the backend operator of the defective AI-system according to Directive 85/374/EEC and to national provisions concerning liability for defective products.
Amendment 420 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 4
4. In the event that the insurer of the operator indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), any civil liability claim of the affected person against another person for the same damage shall be subrogated to the insurer of the operator to the amount the insurer of the operator has compensated the affected person.
Amendment 423 #
Motion for a resolution
Annex I – part B – Article 13
Amendment 430 #
Motion for a resolution
Annex I – part B – Annex