75 Amendments of Karen MELCHIOR related to 2020/2014(INL)
Amendment 3 #
Motion for a resolution
Citation 4 a (new)
- having regard to the Interinstitutional Agreement of 13 April 2016 on Better Law-Making and the Better Regulations Guidelines,
Amendment 17 #
Motion for a resolution
Recital B
B. whereas any future-orientated liability framework has to strike a balance between efficiently protecting potential victims of harm or damage and at the same time, providing enough leeway to make the development of new technologies, products or services possible; whereas ultimately, the goal of any liability framework should be to provide legal certainty for all parties, whether it be the producer, the deployer, the developer, the affected person or any other third party;
Amendment 23 #
Motion for a resolution
Recital D a (new)
Da. whereas the notion of Artificial Intelligence (AI)-systems comprises a large group of different technologies, including simple statistics, machine learning and deep learning;
Amendment 27 #
Motion for a resolution
Recital E
E. whereas certain AI-systems present significant legal challenges for the existing liability framework and could lead to situations in which their opacity could make it extremely expensive or even impossible to identify who was in control of the risk associated with the AI-system or which code or input has ultimately caused the harmful operation;
Amendment 28 #
Motion for a resolution
Recital E a (new)
Ea. whereas the diversity of AI applications and the diverse range of risks the technology poses complicate finding a single solution suitable for the entire spectrum of risks; whereas, in this respect, an approach should be adopted in which experiments, pilots and regulatory sandboxes are used to come up with proportionate and evidence-based solutions that address specific situations and sectors where needed;
Amendment 36 #
Motion for a resolution
Recital G a (new)
Ga. whereas the future regulatory framework needs to take into consideration all the interests at stake; whereas careful examination of the consequences of any new regulatory framework on all actors in an impact assessment should be a prerequisite for further legislative steps; whereas the crucial role of SMEs and start-ups especially in the European economy justifies a strictly proportionate approach to enable them to develop and innovate;
Amendment 38 #
Motion for a resolution
Recital G b (new)
Gb. whereas, on the other hand, the victims of damages caused by AI-systems need to have a right to redress and full compensation of the damages and the harms that they have suffered;
Amendment 59 #
Motion for a resolution
Paragraph 5
5. Believes that there is no need for a complete revision of the well-functioning liability regimes but that the complexity, connectivity, opacity, vulnerability and autonomy of AI-systems, as well as the multitude of actors involved, nevertheless represent a significant challenge; considers that specific adjustments are necessary to avoid a situation in which persons who suffer harm or whose property is damaged end up without compensation;
Amendment 65 #
Motion for a resolution
Paragraph 6
6. Notes that all physical or virtual activities, devices or processes that are driven by AI-systems may technically be the direct or indirect cause of harm or damage, yet are always the result of someone building, deploying or interfering with the systems; notes in this respect that it is not necessary to give legal personality to AI-systems; is of the opinion that the opacity and autonomy of AI-systems could make it in practice very difficult or even impossible to trace back specific harmful actions of the AI-systems to specific human input or to decisions in the design; recalls that, in accordance with widely-accepted liability concepts, one is nevertheless able to circumvent this obstacle by making the persons who create, maintain or control the risk associated with the AI-system accountable;
Amendment 69 #
Motion for a resolution
Paragraph 7
7. Considers that the Product Liability Directive (PLD) has proven to be an effective means of getting compensation for harm triggered by a defective product; hence, notes that it should also be used with regard to civil liability claims against the producer of a defective AI-system, when the AI-system qualifies as a product under that Directive; legislative adjustments to the PLD are necessary, and they should be discussed during a review of that Directive; is of the opinion that, for the purpose of legal certainty throughout the Union, the ‘backend operator’ should fall under the same liability rules as the producer, manufacturer and developer;
Amendment 78 #
Motion for a resolution
Paragraph 9
9. Considers it, therefore, appropriate for this report to focus on civil liability claims against the operator of an AI-system; affirms that the operator’s liability is justified by the fact that he or she is controlling the risks associated with the AI-system, comparable to an owner of a car or pet; considers that due to the AI-system’s complexity and connectivity, the operator will be in many cases the first visible contact point for the affected person;
Amendment 81 #
Motion for a resolution
Subheading 3
Liability of the operator
Amendment 87 #
Motion for a resolution
Paragraph 10
10. Opines that liability rules involving the operator should in principle cover all operations of AI-systems, no matter where the operation takes place and whether it happens physically or virtually; remarks that operations in public spaces that expose many third persons to a risk constitute, however, cases that require further consideration; considers that the potential victims of harm or damage are often not aware of the operation and regularly do not have contractual liability claims against the operator; notes that when harm or damage materialises, such third persons would then only have a fault-liability claim, and they might find it difficult to prove the fault of the operator of the AI-system;
Amendment 90 #
Motion for a resolution
Paragraph 11
11. Considers it appropriate to define the operator as the person who exercises a degree of control over a risk connected with the operation and functioning of the AI-system and benefits from its operation; considers that exercising control means any action of the operator that influences the operation of the AI-system and thus the extent to which it exposes third parties to its potential risks; considers that these actions could impact the operation from start to finish by determining the input, output or results, or change specific functions or processes within the AI-system;
Amendment 94 #
Motion for a resolution
Paragraph 12
12. Notes that there could be situations in which there is more than one operator, for example a backend and frontend operator; considers that in that event, all operators should be jointly and severally liable while having the right to recourse proportionally against each other, reflecting the level of control each party has over the materialized risk;
Amendment 104 #
Motion for a resolution
Paragraph 13
13. Recognises that the type of AI-system the operator is exercising control over is a determining factor; notes that an AI-system that entails a high risk potentially endangers the general public to a much higher degree; considers that, based on the legal challenges that AI-systems pose to the existing liability regimes, it seems reasonable to set up a strict liability regime for those high-risk AI-systems;
Amendment 108 #
Motion for a resolution
Paragraph 14
14. Believes that an AI-system presents a high risk when its autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that goes beyond what can reasonably be expected from its intended use; considers that the significance of the potential depends on the interplay between the severity of possible harm, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 116 #
Motion for a resolution
Paragraph 15
15. Recommends that all high-risk AI-systems be listed in an Annex to the proposed Regulation; recognises that, given the rapid technological developments and the required technical expertise, it should be up to the Commission to review that Annex every six months and, if necessary, amend it through a delegated act; believes that the Commission should be informed by a newly formed standing committee similar to the existing Standing Committee on Precursors or the Technical Committee on Motor Vehicles, which includes national experts of the Member States and stakeholders; considers that the balanced membership of the ‘High-Level Expert Group on Artificial Intelligence’ could serve as an example for the formation of the group of stakeholders;
Amendment 128 #
Motion for a resolution
Paragraph 17
17. Determines that all activities, devices or processes driven by AI-systems that cause harm or damage but are not listed in the Annex to the proposed Regulation should remain subject to fault-based liability; believes that the affected person should nevertheless benefit from a presumption of fault of the operator;
Amendment 138 #
Motion for a resolution
Paragraph 19
19. Is of the opinion that, based on the significant potential to cause harm and by taking Directive 2009/103/EC7 into account, all operators of high-risk AI-systems listed in the Annex to the proposed Regulation should hold liability insurance; considers that such a mandatory insurance regime for high-risk AI-systems should cover the amounts and the extent of compensation laid down by the proposed Regulation;
_________________
7 OJ L 263, 7.10.2009, p. 11.
Amendment 140 #
Motion for a resolution
Paragraph 20
20. Believes that a European compensation mechanism, funded with public money, is not the right way to fill potential insurance gaps; considers that a lack of data on the risks associated with AI-systems makes it difficult for the insurance sector to come up with adapted or new insurance products; considers that leaving the development of a mandatory insurance entirely to the market is likely to result in a one-size-fits-all approach with disproportionately high prices and the wrong incentives, stimulating operators to opt for the cheapest insurance rather than for the best coverage; considers that the Commission should work closely with the insurance sector to see how data and innovative models can be used to create the insurances that offer adequate coverage for an affordable price.
Amendment 148 #
Motion for a resolution
Annex I – part A – paragraph 1 – indent 2
- New legal challenges posed by the deployment of Artificial Intelligence (AI)-systems have to be addressed by establishing maximal legal certainty for the producer, the operator, the affected person and any other third party.
Amendment 166 #
Motion for a resolution
Annex I – part B – recital 1
(2) Especially at the beginning of the life cycle of new products and services, there is a certain degree of risk for the user as well as for third persons that something does not function properly. This process of trial-and-error is at the same time a key enabler of technical progress without which most of our technologies would not exist. So far, the accompanying risks of new products and services have been properly mitigated by strong product safety legislation and liability rules; they need to continue to be properly implemented and reviewed where necessary.
Amendment 171 #
Motion for a resolution
Annex I – part B – recital 3
(3) The rise of Artificial Intelligence (AI) however presents a significant challenge for the existing liability frameworks. Using AI-systems in our daily life will lead to situations in which their opacity (“black box” element) and the multitude of actors who intervene in their life-cycle make it extremely expensive or even impossible to identify who was in control of the risk of using the AI-system in question or which code or input has caused the harmful operation. This difficulty is compounded even further by the connectivity between an AI-system and other AI-systems and non-AI-systems, by its dependency on external data, by its vulnerability to cybersecurity breaches as well as by the increasing autonomy of AI-systems triggered by machine-learning and deep-learning capabilities. Besides these complex features and potential vulnerabilities, AI-systems could also be used to cause severe harm, such as compromising our values and freedoms by tracking individuals against their will, by introducing Social Credit Systems or by constructing lethal autonomous weapon systems.
Amendment 177 #
Motion for a resolution
Annex I – part B – recital 4 a (new)
(4a) An adequate liability regime is also necessary to counterbalance the breach of safety rules. However, the envisaged liability needs to take into consideration all interests at stake. A careful examination of the consequences of any new regulatory framework on small and medium-sized enterprises (SMEs) and start-ups is a prerequisite for further legislative steps. The crucial role that they play in the European economy justifies a strictly proportionate approach in order to enable them to develop and innovate. On the other hand, the victims of damages caused by AI-systems need to have a right to redress and full compensation of the damages and the harms that they have suffered.
Amendment 184 #
Motion for a resolution
Annex I – part B – recital 5
(5) Any discussion about required changes in the existing legal framework should start with the clarification that AI-systems have neither legal personality nor human conscience, and that their sole task is to serve humanity. Many AI-systems are also not so different from other technologies, which are sometimes based on even more complex software. Ultimately, the large majority of AI-systems are used for handling trivial tasks without any risks for society. There are however also AI-systems that are developed and deployed in a critical manner and are based on neural networks and deep-learning processes. Their opacity and autonomy could make it very difficult to trace back specific actions to specific human decisions in their design or in their operation. An operator of such an AI-system might for instance argue that the physical or virtual activity, device or process causing the harm or damage was outside of his or her control because it was caused by an autonomous operation of his or her AI-system. The mere operation of an autonomous AI-system should at the same time not be a sufficient ground for admitting the liability claim. As a result, there might be liability cases in which a person who suffers harm or damage caused by an AI-system cannot prove the fault of the producer, of an interfering third party or of the operator and ends up without compensation.
Amendment 188 #
Motion for a resolution
Annex I – part B – recital 6
(6) Nevertheless, it should always be clear that whoever creates, maintains, controls or interferes with the AI-system should be accountable for the harm or damage that the activity, device or process causes. This follows from general and widely accepted liability concepts of justice according to which the person that creates a risk for the public is accountable if that risk materializes. Consequently, the rise of AI-systems does not pose a need for a complete revision of liability rules throughout the Union. Specific adjustments of the existing legislation and well-assessed and targeted new provisions would be sufficient to accommodate the AI-related challenges.
Amendment 193 #
Motion for a resolution
Annex I – part B – recital 7
(7) Council Directive 85/374/EEC3 (the Product Liability Directive) has proven to be an effective means of getting compensation for damage triggered by a defective product. Hence, it should also be used with regard to civil liability claims of a party who suffers harm or damage against the producer of a defective AI-system. In line with the better regulation principles of the Union, any necessary legislative adjustments should be discussed during the review of that Directive. The existing fault-based liability law of the Member States also offers in most cases a sufficient level of protection for persons that suffer harm or damage caused by an interfering third person, as that interference regularly constitutes a fault-based action. Consequently, this Regulation should focus on claims against the operator of an AI-system.
_________________
3 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 196 #
Motion for a resolution
Annex I – part B – recital 8
(8) The liability of the operator under this Regulation is based on the fact that he or she exercises a degree of control over a risk connected to the operation of an AI-system, which is comparable to that of an owner of a car or pet. Exercising control thereby should be understood as meaning any action of the operator that influences the operation of the AI-system and thus the extent to which it exposes third parties to its potential risks. These actions could impact the operation from start to finish by determining the input, output or results, or change specific functions or processes within the AI-system.
Amendment 198 #
Motion for a resolution
Annex I – part B – recital 8 a (new)
(8a) The more sophisticated and more autonomous a system is, the greater the impact that defining and influencing the algorithms, for example by continuous updates, could have compared to merely starting the system. As there is often more than one person who could, in a meaningful way, be considered as ‘operating’ the technology, both the backend provider and the frontend operator can be qualified as the ‘operator’ of the AI-system, depending on the degree of the exercised control.
Amendment 199 #
Motion for a resolution
Annex I – part B – recital 8 b (new)
(8b) Although in general the frontend operator appears as the person who ‘primarily’ decides on the use of the AI-system, the backend provider, who, on a continuous basis, defines the features of the technology and provides data and essential backend support service, could, for example, also have a high degree of control over the operational risks.
Amendment 200 #
Motion for a resolution
Annex I – part B – recital 8 c (new)
(8c) When there is more than one operator, the strict liability should lie with the one who exercises the highest degree of control over the risks posed by the harmful operation.
Amendment 203 #
Motion for a resolution
Annex I – part B – recital 9
(9) If a user, namely the person that utilises the AI-system, is involved in the harmful event, he or she should only be liable under this Regulation if the user also qualifies as an operator. For the purpose of legal certainty throughout the Union, if the backend operator, who is the person continuously defining the features of the relevant technology and providing essential and ongoing backend support, is the producer or the manufacturer within the meaning of Article 3 of the Product Liability Directive, that Directive should apply to him or her. If there is only one operator, who is also the producer, this Regulation should prevail.
Amendment 207 #
Motion for a resolution
Annex I – part B – recital 10
(10) This Regulation should cover in principle all AI-systems, no matter where they are operating and whether the operations take place physically or virtually. The majority of liability claims under this Regulation should however address cases of third party liability, where an AI-system operates in a public space and exposes many third persons to a risk. In that situation, the affected persons will often not be aware of the operating AI-system and will not have any contractual or legal relationship towards the operator. Consequently, the operation of the AI-system puts them into a situation in which, in the event of harm or damage being caused, they only have fault-based liability claims against the operator of the AI-system, while facing severe difficulties to prove fault on the part of the operator.
Amendment 211 #
Motion for a resolution
Annex I – part B – recital 11
(11) The type of AI-system the operator is exercising control over is a determining factor. An AI-system that entails a high risk potentially endangers the public to a much higher degree and in a manner that goes beyond what can reasonably be expected from its intended use. This means that at the start of the autonomous operation of the AI-system, the majority of the potentially affected persons are unknown and not identifiable (e.g. persons on a public square or in a neighbouring house), compared to the operation of an AI-system that involves specific persons, who have regularly consented to its deployment before (e.g. surgery in a hospital or sales demonstration in a small shop). Determining how significant the potential to cause harm or damage by a high-risk AI-system is should depend on the interplay between the manner in which the AI-system is being used, the severity of the potential harm or damage and the likelihood that the risk materialises. The degree of severity should be determined based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value for the potential damage as well as the harm to society as a whole. The likelihood should be determined based on the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on the sector in which the AI-system operates, if it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.
Amendment 217 #
Motion for a resolution
Annex I – part B – recital 12
(12) All AI-systems with a high risk should be listed in an Annex to this Regulation. Given the rapid technical and market developments as well as the technical expertise which is required for an adequate review of AI-systems, the power to adopt delegated acts in accordance with Article 290 of the Treaty on the Functioning of the European Union should be delegated to the Commission to amend this Regulation in respect of the types of AI-systems that pose a high risk and the critical sectors where they are used. Based on the definitions and provisions laid down in this Regulation, the Commission should review the Annex every six months and, if necessary, amend it by means of delegated acts. To give businesses enough planning and investment security, changes to the critical sectors should only be made every twelve months. Operators are called upon to notify the Commission if they are currently working on a new technology, product or service that falls under one of the existing critical sectors provided for in the Annex and which later could qualify as a high-risk AI-system.
Amendment 229 #
Motion for a resolution
Annex I – part B – recital 15
(15) All physical or virtual activities, devices or processes driven by AI-systems that are not listed as a high-risk AI-system in the Annex to this Regulation should remain subject to fault-based liability. The national laws of the Member States, including any relevant jurisprudence, with regard to the amount and extent of compensation as well as the limitation period should continue to apply. A person who suffers harm or damage caused by an AI-system should however benefit from the presumption of fault of the operator.
Amendment 232 #
Motion for a resolution
Annex I – part B – recital 16
(16) The diligence which can be expected from an operator should be commensurate with (i) the nature of the AI-system, (ii) the legally protected right potentially affected, (iii) the potential harm or damage the AI-system could cause and (iv) the likelihood of such damage. Thereby, it should be taken into account that the operator might have limited knowledge of the algorithms and data used in the AI-system. It should be presumed that the operator has observed due care in selecting a suitable AI-system, if the operator has selected an AI-system which has been certified under [the voluntary certification scheme envisaged on p. 24 of COM(2020) 65 final]. It should be presumed that the operator has observed due care during the operation of the AI-system, if the operator can prove to have actually and regularly monitored the AI-system during its operation and to have notified the manufacturer about potential irregularities during the operation. It should be presumed that the operator has observed due care as regards maintaining the operational reliability, if the operator installed all available updates provided by the producer of the AI-system.
Amendment 233 #
Motion for a resolution
Annex I – part B – recital 17
(17) In order to enable the operator to prove that he or she was not at fault, or the affected person to prove the existence of fault, the producers should have the duty to collaborate with both parties concerned. European as well as non-European producers should furthermore have the obligation to designate an AI-liability-representative within the Union as a contact point for replying to all requests from operators, taking similar provisions set out in Article 37 GDPR (data protection officers), Articles 3(41) and 13(4) of Regulation (EU) 2018/858 of the European Parliament and of the Council5 and Articles 4(2) and 5 of Regulation (EU) 2019/1020 of the European Parliament and of the Council6 (manufacturer’s representative) into account.
_________________
5 Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).
6 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).
Amendment 247 #
Motion for a resolution
Annex I – part B – recital 20
(20) Because historical claim data is missing, it should be investigated how and under which conditions liability is insurable. There are already insurance products that are developed area-by-area and cover-by-cover as technology develops. Many insurers specialise in certain market segments (e.g. SMEs) or in providing cover for certain product types (e.g. electrical goods), which means that there will usually be an insurance product available for the insured. However, a ‘one size fits all’ solution is difficult to envisage and the insurance market will need time to adapt. The Commission should work closely with the insurance market to develop innovative insurance products that could close the insurance gap. In exceptional cases, in which the compensation significantly exceeds the maximum amounts set out in this Regulation, Member States should be encouraged to set up a special compensation fund for a limited period of time that addresses the specific needs of those cases.
Amendment 252 #
Motion for a resolution
Annex I – part B – recital 21
(21) It is of utmost importance that any future changes to this text go hand in hand with the necessary review of the PLD. The introduction of a new liability regime for the operator of AI-systems requires that the provisions of this Regulation and the review of the PLD should be closely coordinated in terms of substance as well as approach so that they together constitute a consistent liability framework for AI-systems, balancing the interests of producer, operator and the affected person, as regards the liability risk. Adapting and streamlining the definitions of AI-system, operator, producer, developer, defect, product and service throughout all pieces of legislation is therefore necessary and should be envisaged in parallel.
Amendment 257 #
Motion for a resolution
Annex I – part B – Article 1 – paragraph 1
This Regulation sets out rules for the civil liability claims of natural and legal persons against the operator of AI-systems.
Amendment 260 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
1. This Regulation applies on the territory of the Union where a physical or virtual activity, device or process driven by an AI-system has caused harm or damage to the life, health, physical integrity and property of a natural or legal person or has caused significant immaterial damage to a natural or legal person.
Amendment 265 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 2
2. Any agreement between an operator of an AI-system and a natural or legal person who suffers harm or damage because of the AI-system, which circumvents or limits the rights and obligations set out in this Regulation, whether concluded before or after the harm or damage has been caused, shall be deemed null and void.
Amendment 269 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 3
Annex I – part B – Article 2 – paragraph 3
3. This Regulation is without prejudice to any additional liability claims resulting from contractual relationships between the operator and the natural or legal person who suffered harm or damage because of the AI-system.
Amendment 272 #
Motion for a resolution
Annex I – part B – Article 3 – point a
Annex I – part B – Article 3 – point a
(a) ‘AI-system’ means a system that displays intelligent behaviour by analysing their environment and taking action, with some degree of autonomy, to achieve specific goals. AI-systems can be purely software-based, acting in the virtual world, or can be embedded in hardware devices;
Amendment 277 #
Motion for a resolution
Annex I – part B – Article 3 – point c
Annex I – part B – Article 3 – point c
(c) ‘high risk’ means a significant potential in an autonomously operating AI-system to cause harm or damage to one or more persons in a manner that goes beyond what can reasonably be expected from its intended use; the significance of the potential depends on the interplay between the severity of possible harm or damage, the likelihood that the risk materializes and the manner in which the AI-system is being used;
Amendment 282 #
Motion for a resolution
Annex I – part B – Article 3 – point d
Annex I – part B – Article 3 – point d
(d) ‘operator’ means the person who decides on the use of the AI-system, exercises control over the associated risk and benefits from its operation;
Amendment 284 #
Motion for a resolution
Annex I – part B – Article 3 – point d a (new)
Annex I – part B – Article 3 – point d a (new)
(da) 'control' means influence on the use and operation of the AI-system from start to finish and thus the extent to which it exposes third parties to its potential risks;
Amendment 287 #
Motion for a resolution
Annex I – part B – Article 3 – point e
Annex I – part B – Article 3 – point e
(e) ‘affected person’ means any person who suffers harm or damage caused by a physical or virtual activity, device or process driven by an AI-system, and who is not its operator;
Amendment 290 #
Motion for a resolution
Annex I – part B – Article 3 – point f
Annex I – part B – Article 3 – point f
(f) ‘harm or damage’ means an adverse impact affecting the life, health, physical integrity, property of a natural or legal person or causing significant immaterial damage;
Amendment 294 #
Motion for a resolution
Annex I – part B – Article 3 – point g
Annex I – part B – Article 3 – point g
(g) ‘producer’ means the developer or the backend operator of an AI-system, or the producer as defined in Article 3 of Council Directive 85/374/EEC7.
_________________
7 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 210, 7.8.1985, p. 29.
Amendment 302 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1
Annex I – part B – Article 4 – paragraph 1
1. The operator of a high-risk AI-system shall be strictly liable for any harm or damage that was caused by a physical or virtual activity, device or process driven by that AI-system.
Amendment 322 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 3
Annex I – part B – Article 4 – paragraph 3
3. The operator of a high-risk AI-system shall not be able to exonerate himself or herself by arguing that he or she acted with due diligence or that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The operator shall not be held liable if the harm or damage was caused by force majeure.
Amendment 329 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 4
Annex I – part B – Article 4 – paragraph 4
4. The operator of a high-risk AI-system shall ensure they have liability insurance cover that is adequate in relation to the amounts and extent of compensation provided for in Articles 5 and 6 of this Regulation. If compulsory insurance regimes already in force pursuant to other Union or national law are considered to cover the operation of the AI-system, the obligation to take out insurance for the AI-system pursuant to this Regulation shall be deemed fulfilled, as long as the relevant existing compulsory insurance covers the amounts and the extent of compensation provided for in Articles 5 and 6 of this Regulation.
Amendment 339 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – introductory part
Annex I – part B – Article 5 – paragraph 1 – introductory part
1. An operator of a high-risk AI-system that has been held liable for harm or damage under this Regulation shall compensate:
Amendment 356 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1 – point 2
Annex I – part B – Article 5 – paragraph 1 – point 2
Amendment 359 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1 – introductory part
Annex I – part B – Article 6 – paragraph 1 – introductory part
1. Within the amount set out in Article 5(1)(a), compensation to be paid by the operator held liable in the event of physical harm followed by the death of the affected person, shall be calculated based on the costs of medical treatment that the affected person underwent prior to his or her death, and of the pecuniary prejudice sustained prior to death caused by the cessation or reduction of the earning capacity or the increase in his or her needs for the duration of the harm prior to death. The operator held liable shall furthermore reimburse the funeral costs for the deceased affected person to the party who is responsible for defraying those expenses.
Amendment 361 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1 – paragraph 1
Annex I – part B – Article 6 – paragraph 1 – paragraph 1
If at the time of the incident that caused the harm leading to his or her death, the affected person was in a relationship with a third party and had a legal obligation to support that third party, the operator held liable shall indemnify the third party by paying maintenance to the extent to which the affected person would have been obliged to pay, for the period corresponding to an average life expectancy for a person of his or her age and general description. The operator shall also indemnify the third party if, at the time of the incident that caused the death, the third party had been conceived but had not yet been born.
Amendment 365 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 2
Annex I – part B – Article 6 – paragraph 2
2. Within the amount set out in Article 5(1)(b), compensation to be paid by the operator held liable in the event of harm to the health or the physical integrity of the affected person shall include the reimbursement of the costs of the related medical treatment as well as the payment for any pecuniary prejudice sustained by the affected person, as a result of the temporary suspension, reduction or permanent cessation of his or her earning capacity or the consequent, medically certified increase in his or her needs.
Amendment 369 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2 – introductory part
Annex I – part B – Article 7 – paragraph 2 – introductory part
2. Civil liability claims, brought in accordance with Article 4(1), concerning damage to property or significant immaterial damage shall be subject to a special limitation period of:
Amendment 374 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2 – point b
Annex I – part B – Article 7 – paragraph 2 – point b
(b) 30 years from the date on which the operation of the high-risk AI-system that subsequently caused the property or significant immaterial damage took place.
Amendment 382 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1
Annex I – part B – Article 8 – paragraph 1
1. The operator of an AI-system that is not defined as a high-risk AI-system in accordance with Article 4 and, as a result, is not listed in the Annex to this Regulation, shall be subject to fault-based liability for any harm or damage that was caused by a physical or virtual activity, device or process driven by the AI-system.
Amendment 385 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – introductory part
Annex I – part B – Article 8 – paragraph 2 – introductory part
2. The operator shall not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds:
Amendment 393 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2 – subparagraph 2
Annex I – part B – Article 8 – paragraph 2 – subparagraph 2
The operator shall not be able to escape liability by arguing that the harm or damage was caused by an autonomous activity, device or process driven by his or her AI-system. The operator shall not be liable if the harm or damage was caused by force majeure.
Amendment 396 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 3
Annex I – part B – Article 8 – paragraph 3
3. Where the harm or damage was caused by a third party that interfered with the AI-system by modifying its functioning, the operator shall nonetheless be liable for the payment of compensation if such third party is untraceable or impecunious.
Amendment 399 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 4
Annex I – part B – Article 8 – paragraph 4
4. At the request of the operator or the affected person, the producer of an AI-system shall have the duty of collaborating with them to the extent warranted by the significance of the claim in order to allow for the identification of the liabilities.
Amendment 404 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 1
Annex I – part B – Article 10 – paragraph 1
1. If the harm or damage is caused both by a physical or virtual activity, device or process driven by an AI-system and by the actions of an affected person or of any person for whom the affected person is responsible, the operator’s extent of liability under this Regulation shall be reduced accordingly. The operator shall not be liable if the affected person or the person for whom he or she is responsible is solely or predominantly accountable for the harm or damage caused.
Amendment 407 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2
Annex I – part B – Article 10 – paragraph 2
2. An operator held liable and the affected person may use the data generated by the AI-system to prove contributory negligence on the part of the affected person, without prejudice to Regulation (EU) 2016/679.
Amendment 410 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 1
Annex I – part B – Article 11 – paragraph 1
If there is more than one operator of an AI-system, they shall be jointly and severally liable. If any of the operators is also the producer of the AI-system, this Regulation shall prevail over the Product Liability Directive.
Amendment 412 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 1
Annex I – part B – Article 12 – paragraph 1
1. The operator shall not be entitled to pursue a recourse action unless the affected person, who is entitled to receive compensation under this Regulation, has been paid in full.
Amendment 416 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 2
Annex I – part B – Article 12 – paragraph 2
2. In the event that the operator is held jointly and severally liable with other operators in respect of an affected person and has fully compensated that affected person, in accordance with Article 4(1) or 8(1), that operator may recover part of the compensation from the other operators, in proportion to his or her liability. Operators that are jointly and severally liable shall be obliged in equal proportions in relation to one another, unless otherwise determined. If the contribution attributable to a jointly and severally liable operator cannot be obtained from him or her, the shortfall shall be borne by the other operators. To the extent that a jointly and severally liable operator compensates the affected person and demands adjustment of advancements from the other liable operators, the claim of the affected person against the other operators shall be subrogated to him or her. The subrogation of claims shall not be asserted to the disadvantage of the original claim.
Amendment 419 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 3
Annex I – part B – Article 12 – paragraph 3
3. In the event that the operator of a defective AI-system fully indemnifies the affected person for harm or damages in accordance with Article 4(1) or 8(1), he or she may take action for redress against the producer of the defective AI-system according to Directive 85/374/EEC and to national provisions concerning liability for defective products.
Amendment 422 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 4
Annex I – part B – Article 12 – paragraph 4
4. In the event that the insurer of the operator indemnifies the affected person for harm or damage in accordance with Article 4(1) or 8(1), any civil liability claim of the affected person against another person for the same damage shall be subrogated to the insurer of the operator to the amount the insurer of the operator has compensated the affected person.
Amendment 425 #
Motion for a resolution
Annex I – part B – Article 14 – subparagraph 1
Annex I – part B – Article 14 – subparagraph 1
By 1 January 202X [3 years after the date of application of this Regulation], and every three years thereafter, the Commission shall present to the European Parliament, the Council and the European Economic and Social Committee a detailed report reviewing this Regulation in the light of the further development of Artificial Intelligence.