
97 Amendments of Brando BENIFEI related to 2020/2014(INL)

Amendment 11 #
Motion for a resolution
Recital A
A. whereas the concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for natural and legal persons to avoid causing harm or damage in the first place and discourages irresponsible behaviour;
2020/05/28
Committee: JURI
Amendment 16 #
Motion for a resolution
Recital B
B. whereas any future-orientated liability framework has to ensure that affected persons are appropriately protected against damage and that they are able to claim for compensation in all cases where this seems justified; whereas the goal of any liability framework should be to provide legal certainty for all parties, whether it be the manufacturer, the developer, the programmer, the backend operator, the frontend operator, the affected person or any other third party;
2020/05/28
Committee: JURI
Amendment 21 #
Motion for a resolution
Recital D
D. whereas the legal system of a Member State can exclude liability for certain actors or can make it stricter for certain activities; whereas strict liability means that a party can be liable despite the absence of fault; whereas in many national tort laws, the defendant is held strictly liable if a risk materializes which that defendant has created for the public, such as in the form of cars or hazardous activities, or which he cannot control, like animals; whereas strict liability lies on the person that has control over the risks of the operation or is responsible for them;
2020/05/28
Committee: JURI
Amendment 24 #
Motion for a resolution
Recital E
E. whereas Artificial Intelligence (AI)-systems and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, present significant legal challenges for the existing liability framework and could lead to situations in which their opacity, complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats make it extremely difficult or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;
2020/05/28
Committee: JURI
Amendment 33 #
Motion for a resolution
Recital G
G. whereas AI-systems need to comply with current applicable laws and, additionally, Union and national liability regimes need to be adjusted, where necessary, in order to guarantee solid and fair compensation for affected persons; whereas fair liability procedures mean that each person who suffers harm caused by AI-systems, or whose property damage is caused by AI-systems, should have the same level of protection as in cases without involvement of an AI-system.
2020/05/28
Committee: JURI
Amendment 58 #
Motion for a resolution
Paragraph 5
5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability, modification through updates, self-learning and autonomy of AI-systems nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer material or non-material harm or damage end up without compensation;
2020/05/28
Committee: JURI
Amendment 64 #
Motion for a resolution
Paragraph 6
6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that this constraint has an even greater impact on the affected person, for whom it is impossible to establish causality between the damage and a prior act or omission; stresses that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system accountable;
2020/05/28
Committee: JURI
Amendment 72 #
Motion for a resolution
Paragraph 7
7. Considers that the Product Liability Directive (PLD) has for over 30 years proven to be an effective means of getting compensation for harm triggered by a defective product; notes that, because it should also be used with regard to civil liability claims against the producer of a defective AI-system, the Directive needs to be updated to include AI-systems; underlines that the necessary legislative adjustments to the PLD should be discussed during a review of that Directive; notes that the ‘backend operator’ does not necessarily coincide with the producer, as it can also be the developer or programmer and, therefore, the ‘backend operator’ can be justifiably covered by a different liability regime than that provided by the PLD;
2020/05/28
Committee: JURI
Amendment 76 #
Motion for a resolution
Paragraph 8
8. Considers that the existing fault-based tort law of the Member States offers in most cases a sufficient level of protection for persons that suffer harm caused by an interfering third person like a hacker, or whose property is damaged by such a third person, as the interference regularly constitutes a fault-based action; notes that due to these characteristics of AI-systems, additional liability rules seem necessary;
2020/05/28
Committee: JURI
Amendment 77 #
Motion for a resolution
Paragraph 9
9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the frontend operator and backend operator of an AI-system; affirms that the frontend operator’s liability is justified by the fact that he or she is benefitting from the use of the AI-system; considers that due to the AI-system’s complexity and connectivity, the frontend operator will be in many cases the first visible contact point for the affected person; conversely, believes that the liability of the backend operator under this Regulation is based on the fact that he or she is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation;
2020/05/28
Committee: JURI
Amendment 80 #
Motion for a resolution
Subheading 3
Liability of the frontend and backend operator
2020/05/28
Committee: JURI
Amendment 86 #
Motion for a resolution
Paragraph 10
10. Opines that liability rules involving the frontend and backend operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the frontend and backend operators; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the frontend or backend operator of the AI-system;
2020/05/28
Committee: JURI
Amendment 92 #
Motion for a resolution
Paragraph 11
11. Considers it appropriate to define the frontend operator as the person who benefits from the use of the AI-system and who enjoys its features and functions; considers that the backend operator is defined as the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation;
2020/05/28
Committee: JURI
Amendment 97 #
Motion for a resolution
Paragraph 12
12. Notes that there could be situations in which there are several frontend and backend operators; considers that in that event, all of them should be jointly and severally liable while having the right to recourse proportionally against each other;
2020/05/28
Committee: JURI
Amendment 99 #
Motion for a resolution
Subheading 4
No risk-based approach in the liability phase
2020/05/28
Committee: JURI
Amendment 100 #
Motion for a resolution
Paragraph 13
13. Recalls that in the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product; notes that so-called low-risk applications may equally cause severe harm or damage;
2020/05/28
Committee: JURI
Amendment 105 #
Motion for a resolution
Paragraph 14
14. Stresses that the liability model for products containing AI-systems has to be approached in a two-step process: firstly, providing a fault-based liability of the frontend operator, against which the affected person should have the right to bring the claim for damages, with the possibility for the frontend operator to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates; if this obligation is fulfilled, due diligence is presumed; secondly, in the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable; notes that such a two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems;
2020/05/28
Committee: JURI
Amendment 113 #
Motion for a resolution
Paragraph 15
15. Points out that a risk-based approach to AI within the existing liability framework might create unnecessary fragmentation across the EU, creating legal uncertainty, interpretative issues and confusion amongst users, who would face different levels of protection depending on whether the AI-system is classified as high- or low-risk, which is something users cannot assess on their own; considers that, once the damage has occurred, it is irrelevant whether an AI-system has been classified as high- or low-risk, and that what matters is that affected persons can obtain full compensation for the harm regardless of the risk category;
2020/05/28
Committee: JURI
Amendment 121 #
Motion for a resolution
Paragraph 16
16. Believes that due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property and data, such as loss or leak of data, and should ensure that damage is always fully compensated, in compliance with the fundamental right of redress for damage suffered;
2020/05/28
Committee: JURI
Amendment 124 #
Motion for a resolution
Paragraph 17
deleted
2020/05/28
Committee: JURI
Amendment 132 #
Motion for a resolution
Paragraph 18
18. Considers liability coverage to be one of the key factors that define the success of new technologies, products and services; observes that proper liability coverage is also essential for assuring the public that it can trust the new technology despite the potential for suffering harm or for facing legal claims by affected persons;
2020/05/28
Committee: JURI
Amendment 135 #
Motion for a resolution
Paragraph 19
19. Considers that a mandatory insurance regime for all AI-systems is not the right approach;
2020/05/28
Committee: JURI
Amendment 139 #
Motion for a resolution
Paragraph 20
deleted
2020/05/28
Committee: JURI
Amendment 142 #
Motion for a resolution
Paragraph 20
20. Underlines that the sole grounds that harm or damage is caused by a non-human agent should not limit the extent of the damages which may be recovered, nor should it limit the forms of compensation which may be offered to the party suffering said harm or damage;
2020/05/28
Committee: JURI
Amendment 147 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2
- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the frontend operator, the backend operator, the affected person and any other third party.
2020/05/28
Committee: JURI
Amendment 154 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 4
- Instead of replacing the well-functioning existing liability regimes, we should make some necessary adjustments by introducing new and future-orientated ideas.
2020/05/28
Committee: JURI
Amendment 163 #
Motion for a resolution
Annex I – part B – recital 1
(1) The concept of ‘liability’ plays an important double role in our daily life: on the one hand, it ensures that a person who has suffered harm or damage is entitled to claim compensation from the party proven to be liable for that harm or damage, and on the other hand, it provides the economic incentives for persons to avoid causing harm or damage in the first place. Any liability framework should strive to strike a balance between efficiently protecting potential victims of damage and, at the same time, providing enough leeway to make the development of new technologies, products or services possible, ensuring that victims are able to claim for compensation in all cases where this seems justified.
2020/05/28
Committee: JURI
Amendment 167 #
Motion for a resolution
Annex I – part B – recital 2
(2) Not only at the beginning of the life cycle of new products and services, but also at later stages, due to modifications through updates, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules.
2020/05/28
Committee: JURI
Amendment 169 #
Motion for a resolution
Annex I – part B – recital 3
(3) The rise of Artificial Intelligence (AI) and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element), complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats make it extremely difficult or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is compounded even further by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches, as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.
2020/05/28
Committee: JURI
Amendment 174 #
Motion for a resolution
Annex I – part B – recital 4
(4) At this point, it is important to point out that, to ensure that the advantages of deploying AI-systems will by far outweigh the disadvantages, certain adjustments need to be made to Union law. AI-systems will help to fight climate change more effectively, to improve medical examinations, to better integrate disabled persons into society and to provide tailor-made education courses to all types of students. To exploit the various technological opportunities and to boost people’s trust in the use of AI-systems, while at the same time preventing harmful scenarios, it is essential to ensure that AI-systems comply with applicable laws and to adjust Union and national liability regimes in order to guarantee solid and fair compensation for affected persons.
2020/05/28
Committee: JURI
Amendment 179 #
Motion for a resolution
Annex I – part B – recital 5
(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for the society. There are however also AI-systems that are deployed in a critical manner and are based on neuronal networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. A frontend operator of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of the backend operator, of an interfering third party or of the frontend operator and ends up without compensation. Furthermore, the allocation of liability could be unfair or inefficient. To prevent such scenarios, certain adjustments need to be made to Union and national liability regimes.
2020/05/28
Committee: JURI
Amendment 181 #
Motion for a resolution
Annex I – part B – recital 5
(5) As a result of major technological advances of the last years, AI-systems have become able to perform activities which used to be exclusively human, but the development of certain autonomous and cognitive features, like the ability to learn from data and experience and take decisions, has made them more and more similar to agents that interact with their environment and are able to alter it significantly. In such a context, the legal responsibility arising through a robot’s harmful action becomes a crucial issue. This, in turn, puts into question the current rules on liability, making it necessary for new principles and rules to provide clarity on the legal liability of various actors concerning responsibility for the acts and omissions of AI-systems, as their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation, and to assess whether their acts or omissions causing harm or damage could have been avoided. A deployer of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim.
As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the deployer and ends up without compensation.
2020/05/28
Committee: JURI
Amendment 186 #
Motion for a resolution
Annex I – part B – recital 6
(6) Thus, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system should be accountable for the harm or damage that the activity, device or process causes. Additionally, strict liability should lie with the person that has more control over the risks of the operation. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and the necessary new provisions would be sufficient to accommodate the AI-related challenges.
2020/05/28
Committee: JURI
Amendment 189 #
Motion for a resolution
Annex I – part B – recital 6
(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems poses a need for a complete revision of liability rules throughout the Union. Adjustments of the existing legislation and new provisions are necessary to adapt it to AI-related challenges.
2020/05/28
Committee: JURI
Amendment 194 #
Motion for a resolution
Annex I – part B – recital 7
(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has, for over 30 years, provided a valuable safety net to protect consumers from harm caused by a defective product, and needs to be updated to take account of civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. All necessary legislative adjustments should be discussed during a review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damage caused by an interfering third person, but does not necessarily take account of technological developments. Consequently, this Regulation should focus on claims against the frontend operator and backend operator of an AI-system.
_________________
3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
2020/05/28
Committee: JURI
Amendment 195 #
Motion for a resolution
Annex I – part B – recital 8
(8) The liability of the frontend operator under this Regulation is based on the fact that he or she is the person primarily deciding on and benefitting from the use of the relevant technology. Benefitting from the use thereby should be understood as enjoying the features and functions of the AI-system. Conversely, the liability of the backend operator under this Regulation is based on the fact that he or she is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, therefore holding the actual control over the risks of the operation.
2020/05/28
Committee: JURI
Amendment 202 #
Motion for a resolution
Annex I – part B – recital 9
(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as a frontend operator. It is appropriate to note that the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, does not necessarily coincide with the producer, as it could also inter alia be the developer or the programmer.
2020/05/28
Committee: JURI
Amendment 205 #
Motion for a resolution
Annex I – part B – recital 10
(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. This Regulation should also, but not exclusively, address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the frontend operator. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they face severe difficulties to prove fault on the part of the frontend operator.
2020/05/28
Committee: JURI
Amendment 209 #
Motion for a resolution
Annex I – part B – recital 11
(11) In the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product. It should be noted that so-called low-risk applications may very well cause severe harm or damage. Thus, the liability model for products containing AI applications has to be approached in a two-step process: firstly, providing a fault-based liability of the frontend operator, against which the affected person should have the right to bring the claim for damages, with the possibility for the frontend operator to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates. If this obligation is fulfilled, due diligence is presumed. Secondly, in the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable. A two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems.
2020/05/28
Committee: JURI
Amendment 213 #
Motion for a resolution
Annex I – part B – recital 12
(12) A risk-based approach to AI within the existing liability framework might create unnecessary fragmentation across the Union, creating legal uncertainty, interpretative issues and confusion amongst users, who would face different levels of protection depending on whether the AI-system is classified as high- or low-risk, which is something users cannot assess on their own. Once the damage has occurred, it is irrelevant whether an AI-system has been classified as high- or low-risk; what matters is that affected persons can obtain full compensation for the harm regardless of the risk category.
2020/05/28
Committee: JURI
Amendment 218 #
Motion for a resolution
Annex I – part B – recital 13
(13) It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making4. A standing committee called 'Technical Committee – high-risk AI-systems' (TCRAI) should support the Commission in its review under this Regulation. That standing committee should comprise representatives of the Member States as well as a balanced selection of stakeholders, including consumer organisations, businesses representatives from different sectors and sizes, as well as researchers and scientists. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States' experts, and their experts systematically have access to meetings of Commission expert groups as well as the standing TCRAI-committee, when dealing with the preparation of delegated acts. _________________ 4 OJ L 123, 12.5.2016, p. 1.deleted
2020/05/28
Committee: JURI
Amendment 224 #
Motion for a resolution
Annex I – part B – recital 14
(14) Due to the special characteristics of AI-systems, the proposed Regulation should cover material as well as non-material harm, including damage to intangible property, and data, such as loss or leak of data, and should ensure that damage is fully compensated in compliance with the fundamental right of redress for the damage suffered.
2020/05/28
Committee: JURI
Amendment 226 #
Motion for a resolution
Annex I – part B – recital 15
(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the deployer.deleted
2020/05/28
Committee: JURI
Amendment 231 #
Motion for a resolution
Annex I – part B – recital 16
(16) The diligence which can be expected from a frontend operator should be commensurate with (i) the nature of the AI-system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the frontend operator might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the frontend operator has observed due care in selecting a suitable AI-system, if the frontend operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the frontend operator has observed due care during the operation of the AI-system, if the frontend operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the backend operator about potential irregularities during the operation. It should be presumed that the frontend operator has observed due care as regards maintaining the operational reliability, if the frontend operator installed all available updates provided by the backend operator of the AI-system.
2020/05/28
Committee: JURI
Amendment 235 #
Motion for a resolution
Annex I – part B – recital 17
(17) In order to enable the frontend operator to prove that he or she was not at fault, the backend operators should have the duty to collaborate with the frontend operator. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from frontend operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation 2019/1020 of the European Parliament and of the Council6 (manufacturer's representative) into account. _________________ 5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1). 6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).
2020/05/28
Committee: JURI
Amendment 239 #
Motion for a resolution
Annex I – part B – recital 18
(18) The legislator has to consider the liability risks connected to AI-systems during their whole lifecycle, from development to usage to end of life. The inclusion of AI-systems in a product or service represents a financial risk for businesses and consequently will have a heavy impact on the ability and options for small and medium-sized enterprises (SME) as well as for start-ups in relation to insuring and financing their projects based on new technologies. The purpose of liability is, therefore, not only to safeguard important legally protected rights of individuals but also a factor which determines whether businesses, especially SMEs and start-ups, are able to raise capital, innovate and ultimately offer new products and services, as well as whether the customers are willing to use such products and services despite the potential risks and legal claims being brought against them.
2020/05/28
Committee: JURI
Amendment 241 #
Motion for a resolution
Annex I – part B – recital 19
(19) Insurance can help to ensure that victims can receive effective compensation as well as to pool the risks of all insured persons. One of the factors on which insurance companies base their offer of insurance products and services is risk assessment based on access to sufficient historical claim data. A lack of access to, or an insufficient quantity of high quality data could be a reason why creating insurance products for new and emerging technologies is difficult at the beginning. However, greater access to and optimising the use of data generated by new technologies will enhance insurers’ ability to model emerging risk and to foster the development of more innovative cover.deleted
2020/05/28
Committee: JURI
Amendment 243 #
Motion for a resolution
Annex I – part B – recital 20
(20) Despite missing historical claim data, there are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. If a new type of insurance is needed, the insurance market will develop and offer a fitting solution and thus, will close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.deleted
2020/05/28
Committee: JURI
Amendment 250 #
Motion for a resolution
Annex I – part B – recital 21
(21) It is of utmost importance that any future changes to this text go hand in hand with a necessary review of the PLD. The introduction of a new liability regime for the frontend operator and backend operator of AI-systems requires that the provisions of this Regulation and the review of the PLD should be closely coordinated in terms of substance as well as approach so that they together constitute a consistent liability framework for AI-systems, balancing the interests of producer, deployer and the affected person, as regards the liability risk. Adapting and streamlining the definitions of AI-system, deployer, producer, developer, defect, product and service throughout all pieces of legislation is therefore necessary.
2020/05/28
Committee: JURI
Amendment 253 #
Motion for a resolution
Annex I – part B – recital 21 a (new)
(21a) In the liability stage, a risk-based approach to AI is not appropriate, since the damage has occurred and the product has proven to be a risk product. The so- called low-risk applications could equally cause severe harm or damage. Thus, the liability model for products containing AI applications should be approached in a two-step process. Firstly, providing a fault-based liability of the frontend operator against which the affected person should have the right to bring the claim for damages. The frontend operator should be able to prove his lack of fault by complying with the duty of care consisting in the regular installation of all available updates. If this obligation is fulfilled, due diligence should be presumed. Secondly, in the event where no fault of the frontend operator can be established, the producer or the backend operator should be held strictly liable. Such a two-step process is essential in order to ensure that victims are effectively compensated for damages caused by AI-driven systems.
2020/05/28
Committee: JURI
Amendment 256 #
Motion for a resolution
Annex I – part B – Article 1 – paragraph 1
This Regulation sets out rules for the civil liability claims of natural and legal persons against the frontend operator and against the backend operator of AI-systems.
2020/05/28
Committee: JURI
Amendment 261 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
1. This Regulation applies on the territory of the Union where a connected and self-learning device, digital content, digital good and/or service has caused a typical harm or damage to a natural or legal person.
2020/05/28
Committee: JURI
Amendment 264 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 2
2. Any agreement between a frontend or backend operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, whether concluded before or after the harm or damage has been caused, shall be deemed null and void.
2020/05/28
Committee: JURI
Amendment 275 #
Motion for a resolution
Annex I – part B – Article 3 – point c
(c) ‘high risk’ means a significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that is random and impossible to predict in advance; the significance of the potential depends on the interplay between the severity of possible harm or damage, the likelihood that the risk materializes and the manner in which the AI-system is being used;deleted
2020/05/28
Committee: JURI
Amendment 283 #
Motion for a resolution
Annex I – part B – Article 3 – point d
(d) ‘frontend operator’ means the person primarily deciding on and benefiting from the use of the relevant technology;
2020/05/28
Committee: JURI
Amendment 285 #
Motion for a resolution
Annex I – part B – Article 3 – point d a (new)
(da) ‘backend operator’ means the person continuously defining the features of the relevant technology and providing essential and ongoing backend support;
2020/05/28
Committee: JURI
Amendment 286 #
Motion for a resolution
Annex I – part B – Article 3 – point e
(e) ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its frontend operator;
2020/05/28
Committee: JURI
Amendment 291 #
Motion for a resolution
Annex I – part B – Article 3 – point f
(f) ‘harm or damage’ means any material or non-material typical harm or damage, meaning those where the special risk of AI-systems or connected devices materialises;
2020/05/28
Committee: JURI
Amendment 293 #
Motion for a resolution
Annex I – part B – Article 3 – point g
(g) ‘producer’ means the developer or the backend operator of an AI-system, or the producer as defined in Article 3 of Council Directive 85/374/EEC7 . _________________ 7 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.deleted
2020/05/28
Committee: JURI
Amendment 298 #
Motion for a resolution
Annex I – part B – chapter 2 – title
Liability regimes
2020/05/28
Committee: JURI
Amendment 299 #
Motion for a resolution
Annex I – part B – Article 4 – title
Fault liability and strict liability
2020/05/28
Committee: JURI
Amendment 300 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1
1. The frontend operator of an AI-system shall be liable at fault for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system. The frontend operator has to comply with the duty of care consisting in the regular installation of all available updates. If that obligation is fulfilled, due diligence shall be presumed.
2020/05/28
Committee: JURI
Amendment 305 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – introductory part
2. In the event where no fault of the frontend operator can be established, the backend operator should be held strictly liable.
2020/05/28
Committee: JURI
Amendment 310 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point a
(a) including new types of high-risk AI-systems and critical sectors in which they are deployed;deleted
2020/05/28
Committee: JURI
Amendment 312 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point b
(b) deleting types of AI-systems that can no longer be considered to pose a high risk; and/ordeleted
2020/05/28
Committee: JURI
Amendment 316 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – point c
(c) changing the critical sectors for existing high-risk AI-systems.deleted
2020/05/28
Committee: JURI
Amendment 319 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 2 – subparagraph 2
Any delegated act amending the Annex shall come into force six months after its adoption. When determining new critical sectors and/or high-risk AI-systems to be inserted by means of delegated acts in the Annex, the Commission shall take full account of the criteria set out in this Regulation, in particular those set out in Article 3(c).
2020/05/28
Committee: JURI
Amendment 320 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 3
3. The deployer of a high-risk AI- system shall not be able to exonerate himself or herself by arguing that he or she acted with due diligence or that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The deployer shall not be held liable if the harm or damage was caused by force majeure.deleted
2020/05/28
Committee: JURI
Amendment 326 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 4
4. The deployer of a high-risk AI- system shall ensure they have liability insurance cover that is adequate in relation to the amounts and extent of compensation provided for in Article 5 and 6 of this Regulation. If compulsory insurance regimes already in force pursuant to other Union or national law are considered to cover the operation of the AI-system, the obligation to take out insurance for the AI-system pursuant to this Regulation shall be deemed fulfilled, as long as the relevant existing compulsory insurance covers the amounts and the extent of compensation provided for in Articles 5 and 6 of this Regulation.deleted
2020/05/28
Committee: JURI
Amendment 331 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 5
5. This Regulation shall prevail over national liability regimes in the event of conflicting strict liability classification of AI-systems.deleted
2020/05/28
Committee: JURI
Amendment 335 #
Motion for a resolution
Annex I – part B – Article 5 – title
Full compensation for damages
2020/05/28
Committee: JURI
Amendment 338 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – introductory part
1. A frontend or backend operator of an AI-system that has been held liable for harm or damage under this Regulation shall fully compensate for any harm or damage.
2020/05/28
Committee: JURI
Amendment 340 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point a
(a) up to a maximum total amount of EUR ten million in the event of death or of harm caused to the health or physical integrity of one or several persons as the result of the same operation of the same high-risk AI-system;deleted
2020/05/28
Committee: JURI
Amendment 348 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point b
(b) up to a maximum total amount of EUR two million in the event of damage caused to property, including when several items of property of one or several persons were damaged as a result of the same operation of the same high-risk AI- system; where the affected person also holds a contractual liability claim against the deployer, no compensation shall be paid under this Regulation if the total amount of the damage to property is of a value that falls below EUR 500.deleted
2020/05/28
Committee: JURI
Amendment 355 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point 2
2. Where the combined compensation to be paid to several persons who suffer harm or damage caused by the same operation of the same high-risk AI-system exceeds the maximum total amounts provided for in paragraph 1, the amounts to be paid to each person shall be reduced pro-rata so that the combined compensation does not exceed the maximum amounts set out in paragraph 1.deleted
2020/05/28
Committee: JURI
Amendment 360 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1 – introductory part
1. Within the amount set out in Article 5(1)(a), compensation to be paid by the party held liable in the event of physical harm followed by the death of the affected person, shall be calculated based on the costs of medical treatment that the affected person underwent prior to his or her death, and of the pecuniary prejudice sustained prior to death caused by the cessation or reduction of the earning capacity or the increase in his or her needs for the duration of the harm prior to death. The operator held liable shall furthermore reimburse the funeral costs for the deceased affected person to the party who is responsible for defraying those expenses.
2020/05/28
Committee: JURI
Amendment 362 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1 – paragraph 1
If at the time of the incident that caused the harm leading to his or her death, the affected person was in a relationship with a third party and had a legal obligation to support that third party, the operator held liable shall indemnify the third party by paying maintenance to the extent to which the affected person would have been obliged to pay, for the period corresponding to an average life expectancy for a person of his or her age and general description. The operator shall also indemnify the third party if, at the time of the incident that caused the death, the third party had been conceived but had not yet been born.
2020/05/28
Committee: JURI
Amendment 364 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 2
2. Within the amount set out in Article 5(1)(b), compensation to be paid by the operator held liable in the event of harm to the health or the physical integrity of the affected person shall include the reimbursement of the costs of the related medical treatment as well as the payment for any pecuniary prejudice sustained by the affected person, as a result of the temporary suspension, reduction or permanent cessation of his or her earning capacity or the consequent, medically certified increase in his or her needs.
2020/05/28
Committee: JURI
Amendment 366 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 1
1. Civil liability claims, brought in accordance with Article 4(1), concerning harm to life, health or physical integrity, shall be subject to a special limitation period of 30 years from the date on which the harm occurred.
2020/05/28
Committee: JURI
Amendment 368 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2
2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property shall be subject to a special limitation period of: (a) 10 years from the date when the property damage occurred, or (b) 30 years from the date on which the operation of the high-risk AI-system that subsequently caused the property damage took place. Of the periods referred to in the first subparagraph, the period that ends first shall be applicable.deleted
2020/05/28
Committee: JURI
Amendment 378 #
Motion for a resolution
Annex I – part B – chapter 3 – title
Duty of Care
2020/05/28
Committee: JURI
Amendment 379 #
Motion for a resolution
Annex I – part B – Article 8 – title
Duties of care of the frontend operator
2020/05/28
Committee: JURI
Amendment 380 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1
1. The frontend operator of an AI-system shall be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system.
2020/05/28
Committee: JURI
Amendment 384 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – introductory part
2. The frontend operator shall not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds:
2020/05/28
Committee: JURI
Amendment 391 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – subparagraph 2
The frontend operator shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The deployer shall not be liable if the harm or damage was caused by force majeure.
2020/05/28
Committee: JURI
Amendment 394 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 3
3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning, the frontend operator, if at fault, shall nonetheless be liable for the payment of compensation if such third party is untraceable or impecunious.
2020/05/28
Committee: JURI
Amendment 397 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 4
4. The backend operator of an AI-system shall have the duty of collaborating with the frontend operator or the affected person to the extent warranted by the significance of the claim in order to allow the frontend operator or the affected person to prove that he or she acted without fault.
2020/05/28
Committee: JURI
Amendment 400 #
Motion for a resolution
Annex I – part B – Article 9
Article 9 National provisions on compensation and limitation period Civil liability claims brought in accordance with Article 8(1) shall be subject, in relation to limitation periods as well as the amounts and the extent of compensation, to the laws of the Member State in which the harm or damage occurred.deleted
2020/05/28
Committee: JURI
Amendment 402 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 1
1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the frontend operator’s extent of liability under this Regulation shall be reduced accordingly. The frontend operator shall not be liable if the affected person or the person for whom he or she is responsible is solely or predominantly accountable for the harm or damage caused.
2020/05/28
Committee: JURI
Amendment 406 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2
2. A frontend operator held liable may use the data generated by the AI-system to prove contributory negligence on the part of the affected person. The same right shall apply for the affected person. The use of such data shall comply with the data protection legislation.
2020/05/28
Committee: JURI
Amendment 409 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 1
If there is more than one frontend and backend operator of an AI-system, they shall be jointly and severally liable. If any of the operators is also the producer of the AI-system, this Regulation shall prevail over the Product Liability Directive.
2020/05/28
Committee: JURI
Amendment 411 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 1
1. The frontend operator and the backend operator shall not be entitled to pursue a recourse action unless the affected person, who is entitled to receive compensation under this Regulation, has been paid in full.
2020/05/28
Committee: JURI
Amendment 414 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 2
2. In the event that the frontend operator and the backend operator are held jointly and severally liable with other operators in respect of an affected person and have fully compensated that affected person, in accordance with Article 4(1) or 8(1), that operator may recover part of the compensation from the other operators, in proportion to his or her liability. Operators that are jointly and severally liable shall be obliged in equal proportions in relation to one another, unless otherwise determined. If the contribution attributable to a jointly and severally liable operator cannot be obtained from him or her, the shortfall shall be borne by the other operators. To the extent that a jointly and severally liable operator compensates the affected person and demands adjustment of advancements from the other liable operators, the claim of the affected person against the other operators shall be subrogated to him or her. The subrogation of claims shall not be asserted to the disadvantage of the original claim.
2020/05/28
Committee: JURI
Amendment 417 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 3
3. In the event that the frontend operator of a defective AI-system fully indemnifies the affected person for harm or damages in accordance with Article 4(1) or 8(1), he or she may take action for redress against the backend operator of the defective AI-system according to Directive 85/374/EEC and to national provisions concerning liability for defective products.
2020/05/28
Committee: JURI
Amendment 420 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 4
4. In the event that the insurer of the operator indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), any civil liability claim of the affected person against another person for the same damage shall be subrogated to the insurer of the operator to the amount the insurer of the operator has compensated the affected person.
2020/05/28
Committee: JURI
Amendment 423 #
Motion for a resolution
Annex I – part B – Article 13
Article 13
Exercise of the delegation
1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.
2. The power to adopt delegated acts referred to in Article 4(2) shall be conferred on the Commission for a period of five years from [date of application of this Regulation].
3. The delegation of power referred to in Article 4(2) may be revoked at any time by the European Parliament or by the Council. A decision to revoke shall put an end to the delegation of the power specified in that decision. It shall take effect the day following the publication of the decision in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
4. Before adopting a delegated act, the Commission shall consult the standing Technical Committee for high-risk AI-systems (TCRAI-committee) in accordance with the principles laid down in the Interinstitutional Agreement on Better Law-Making of 13 April 2016.
5. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
6. A delegated act adopted pursuant to Article 4(2) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of two months of notification or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by two months at the initiative of the European Parliament or of the Council.
deleted
2020/05/28
Committee: JURI
Amendment 430 #
Motion for a resolution
Annex I – part B – Annex
Exhaustive list of AI-systems that pose a high risk as well as of critical sectors where the AI-systems are being deployed1

AI-systems | Critical sector
[...]

_________________
1 *This Annex should aim to replicate the level of detail that appears for instance in Annex I of Regulation 2018/858 (Approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicle).
deleted
2020/05/28
Committee: JURI