Activities of Axel VOSS related to 2020/2012(INL)
Legal basis opinions (0)
Amendments (152)
Amendment 24 #
Motion for a resolution
Recital D a (new)
Recital D a (new)
Da. whereas the Union has a strict legal framework in place to ensure, inter alia, the protection of personal data and privacy and protection against discrimination, and to promote gender balance, environmental protection and consumers’ rights;
Amendment 25 #
Motion for a resolution
Recital D b (new)
Recital D b (new)
Db. whereas such an extensive body of horizontal and sectoral legislation, including the existing rules on product safety and liability, will continue to apply in relation to artificial intelligence, robotics and related technologies, although certain adjustments to specific legal instruments may be necessary to reflect the digital transformation and address new challenges posed by the use of artificial intelligence;
Amendment 26 #
Motion for a resolution
Recital E
Recital E
E. whereas, in addition to adjustments to existing legislation, a common and future-proof legal framework reflecting the Union’s principles and values as enshrined in the Treaties and the Charter of Fundamental Rights is needed to address the ethical questions related to the development, deployment and use of artificial intelligence, robotics and related technologies in order to bring legal certainty to businesses and citizens alike;
Amendment 36 #
Motion for a resolution
Recital F
Recital F
F. whereas for the scope of such regulatory framework to be proportionate and avoid the creation of unnecessary burdens, especially for SMEs, it should cover only those technologies in relation to which, considering their intended use and the sectors where they are employed, significant risk can be expected to occur;
Amendment 37 #
Draft opinion
Paragraph 2
Paragraph 2
2. Stresses the importance of developing an “ethics-by-default and by design” framework which fully respects the Charter of Fundamental Rights of the European Union, Union law and the Treaties; calls, in this regard, for a clear and coherent governance model that allows companies to further develop artificial intelligence, robotics and related technologies;
Amendment 40 #
Motion for a resolution
Recital G
Recital G
G. whereas that framework should encompass all situations requiring due consideration of the Union’s principles and values, namely development, deployment and use of the relevant high-risk technologies and their components;
Amendment 40 #
Draft opinion
Paragraph 2
Paragraph 2
2. Stresses the importance of developing an “ethics-by-default and by design” framework which fully respects the Charter of Fundamental Rights of the European Union, Union law and the Treaties but at the same time gives businesses and innovators enough leeway to continue developing new technologies based on AI;
Amendment 42 #
Motion for a resolution
Recital H
Recital H
H. whereas a harmonised approach to ethical principles relating to high-risk artificial intelligence, robotics and related technologies requires a common understanding in the Union of those concepts and of concepts such as algorithms, software, data or biometric recognition;
Amendment 46 #
Motion for a resolution
Recital I
Recital I
I. whereas action at Union level is justified by the need for a homogenous application of common ethical principles when developing, deploying and using high-risk artificial intelligence, robotics and related technologies;
Amendment 50 #
Draft opinion
Paragraph 3 a (new)
Paragraph 3 a (new)
3a. Calls on the Commission to consider developing a framework of criteria and indicators to label AI technology, in which developers could participate voluntarily, in order to stimulate comprehensibility, transparency and accountability and to incentivise additional precautions by developers;
Amendment 59 #
Draft opinion
Paragraph 4 a (new)
Paragraph 4 a (new)
4a. Warns that possible bias in artificial intelligence applications could lead to automated discrimination, which has to be avoided by design and application rules;
Amendment 60 #
Motion for a resolution
Recital L
Recital L
Amendment 61 #
Draft opinion
Paragraph 5
Paragraph 5
5. Calls for a horizontal and future-oriented approach, including technology-neutral standards that apply to all sectors in which AI could be employed, complemented by a vertical approach with sector-specific standards where appropriate; strongly believes that an ethical framework should apply to anyone intending to develop or operate artificial intelligence applications in the EU; favours a binding EU-wide approach to avoid fragmentation; calls on the Union to promote strong and transparent cooperation and knowledge-sharing between the public and private sectors to create best practices;
Amendment 64 #
Draft opinion
Paragraph 5
Paragraph 5
5. Calls for a horizontal and future-oriented approach, including technology-neutral standards that apply to all sectors in which AI could be employed; calls on the Union to promote strong and transparent cooperation and knowledge-sharing between the public and private sectors to create best practices;
Amendment 66 #
Motion for a resolution
Subheading 2
Subheading 2
Human-centric and human-made artificial intelligence
Amendment 72 #
Motion for a resolution
Paragraph 1
Paragraph 1
1. Declares that the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including but not exclusively by human beings, should always respect human agency and oversight, as well as allow the retrieval of human control when needed;
Amendment 75 #
Draft opinion
Paragraph 6
Paragraph 6
6. Stresses that the protection of networks of interconnected AI and robotics must prevent security breaches, data leaks, data poisoning, cyber-attacks and the misuse of personal data; believes this will require stronger cooperation between national and EU authorities;
Amendment 81 #
Motion for a resolution
Paragraph 2
Paragraph 2
2. Considers that artificial intelligence, robotics and related technologies are to be considered high-risk when, considering their intended use and the critical sectors where they are employed, their autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance; considers that the significance of the risks depends on the interplay between the severity of possible harm, the likelihood that the risk materialises and the manner in which the technologies are being used;
Amendment 97 #
Motion for a resolution
Paragraph 3
Paragraph 3
3. Maintains that high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be developed in a secure, technically rigorous manner and in good faith;
Amendment 100 #
Draft opinion
Paragraph 8
Paragraph 8
8. Stresses that AI and robotics are not immune from making mistakes; emphasises the importance of the right to an explanation when persons are subjected to algorithmic decision-making; considers the need for legislators to reflect upon the complex issue of liability in the context of criminal justice.
Amendment 103 #
Draft opinion
Paragraph 8
Paragraph 8
8. Stresses that AI and robotics are not immune from making mistakes; considers the need for legislators to reflect upon the complex issue of liability in the context of criminal justice.
Amendment 112 #
Motion for a resolution
Paragraph 5
Paragraph 5
5. Recalls that the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should respect human dignity and ensure equal treatment for all;
Amendment 120 #
Motion for a resolution
Paragraph 6
Paragraph 6
6. Affirms that possible bias in and discrimination by software, algorithms and data should be addressed by setting rules for the processes through which they are designed and used, as this approach would have the potential to turn software, algorithms and data into a considerable counterbalance to unfair bias and discrimination, and a positive force for social change;
Amendment 126 #
Motion for a resolution
Paragraph 7
Paragraph 7
7. Emphasises that socially responsible artificial intelligence, robotics and related technologies have a role to play in contributing to solutions that safeguard and promote fundamental values of our society such as democracy, diverse and independent media and objective and freely available information, health and economic prosperity, equality of opportunity, workers’ and social rights, quality education, cultural and linguistic diversity, gender balance, digital literacy, innovation and creativity;
Amendment 139 #
Motion for a resolution
Paragraph 9
Paragraph 9
9. Insists that the developers, deployers and users of these technologies should be held responsible for injury or harm of any kind to individuals or society, in accordance with the relevant Union and national liability rules;
Amendment 143 #
Motion for a resolution
Paragraph 10
Paragraph 10
10. States that high-risk artificial intelligence, robotics and related technologies can contribute to finding solutions to support the achievement of sustainable development, climate neutrality and circular economy goals; the development, deployment and use of these technologies should be environmentally friendly and contribute to minimising any harm caused to the environment during their lifecycle and across their entire supply chain, in line with Union law;
Amendment 154 #
Motion for a resolution
Paragraph 14
Paragraph 14
14. Points out that the possibility provided by high-risk technologies of using personal data and non-personal data to categorise and micro-target people, identify the vulnerabilities of individuals, or exploit accurate predictive knowledge, has to be counterweighted by the principles of data minimisation, the right to obtain an explanation of a decision based on automated processing and privacy by design, as well as those of proportionality, necessity and limitation based on purpose, in compliance with the GDPR;
Amendment 155 #
Motion for a resolution
Paragraph 15
Paragraph 15
Amendment 166 #
Motion for a resolution
Paragraph 16
Paragraph 16
16. Stresses that appropriate governance of the development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including by having measures in place focusing on accountability and addressing potential risks of unfair bias and discrimination, increases citizens’ safety and trust in those technologies;
Amendment 174 #
Motion for a resolution
Paragraph 17
Paragraph 17
17. Observes that data are used in large volumes in the development of high-risk artificial intelligence, robotics and related technologies and that the processing, sharing of and access to such data must be governed in accordance with the requirements of quality, integrity, security, privacy and control;
Amendment 186 #
Motion for a resolution
Paragraph 19
Paragraph 19
19. Notes the added value of having national supervisory authorities in each Member State responsible for ensuring, assessing and monitoring compliance with ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies;
Amendment 190 #
Motion for a resolution
Paragraph 19 a (new)
Paragraph 19 a (new)
19a. Calls for such authorities to be tasked with promoting regular exchanges with civil society and innovation within the Union by providing assistance to concerned stakeholders, in particular small and medium-sized enterprises or start-ups;
Amendment 194 #
Motion for a resolution
Paragraph 20 a (new)
Paragraph 20 a (new)
20a. Suggests that, in the context of such cooperation, common criteria and an application process be developed for the granting of a European certificate of ethical compliance following a request by any developer, deployer or user seeking to certify the positive assessment of compliance carried out by the respective national supervisory authority;
Amendment 195 #
Motion for a resolution
Paragraph 21
Paragraph 21
Amendment 203 #
Motion for a resolution
Subheading 11
Subheading 11
Amendment 209 #
Motion for a resolution
Paragraph 22
Paragraph 22
Amendment 215 #
Motion for a resolution
Paragraph 23
Paragraph 23
Amendment 227 #
Motion for a resolution
Paragraph 24
Paragraph 24
Amendment 233 #
Motion for a resolution
Subheading 12
Subheading 12
Amendment 235 #
Motion for a resolution
Paragraph 25
Paragraph 25
Amendment 247 #
Motion for a resolution
Paragraph 26
Paragraph 26
26. Stresses that the Union’s ethical principles for the development, deployment and use of these high-risk technologies should be promoted worldwide by cooperating with international partners and liaising with third countries with different development and deployment models.
Amendment 254 #
Motion for a resolution
Paragraph 28
Paragraph 28
Amendment 261 #
Motion for a resolution
Paragraph 29
Paragraph 29
29. Concludes, following the above reflections on aspects related to the ethical dimension of high-risk artificial intelligence, robotics and related technologies, that the ethical dimension should be framed as a series of principles resulting in a legal framework at Union level supervised by national competent authorities, coordinated and enhanced by a European Agency for Artificial Intelligence and duly respected and certified within the internal market;
Amendment 267 #
Motion for a resolution
Paragraph 30
Paragraph 30
30. Following the procedure of Article 225 of the Treaty on the Functioning of the European Union, requests the Commission to submit a proposal for a Regulation on ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies on the basis of Article 114 of the Treaty on the Functioning of the European Union and following the detailed recommendations set out in the annex hereto;
Amendment 275 #
Motion for a resolution
Paragraph 32
Paragraph 32
32. Considers that the requested proposal does not have financial implications if a new European Agency for Artificial Intelligence is set up;
Amendment 279 #
Motion for a resolution
Annex I – part A – point I – indent 1
Annex I – part A – point I – indent 1
- to build trust in high-risk artificial intelligence, robotics and related technologies by ensuring that these technologies will be developed, deployed and used in an ethical manner;
Amendment 284 #
Motion for a resolution
Annex I – part A – point I – indent 2
Annex I – part A – point I – indent 2
- to support the development of high-risk artificial intelligence, robotics and related technologies in the Union, including by helping businesses and start-ups to assess and address regulatory requirements and risks during the development process;
Amendment 288 #
Motion for a resolution
Annex I – part A – point I – indent 3
Annex I – part A – point I – indent 3
- to support deployment of high-risk artificial intelligence, robotics and related technologies in the Union by providing the appropriate regulatory framework;
Amendment 290 #
Motion for a resolution
Annex I – part A – point I – indent 4
Annex I – part A – point I – indent 4
- to support use of high-risk artificial intelligence, robotics and related technologies in the Union by ensuring that they are developed, deployed and used in an ethical manner;
Amendment 292 #
Motion for a resolution
Annex I – part A – point I – indent 5
Annex I – part A – point I – indent 5
- to require better information flows among citizens and within organisations developing, deploying or using high-risk artificial intelligence, robotics and related technologies as a means of ensuring that these technologies are compliant with the ethical principles of the proposed Regulation.
Amendment 293 #
Motion for a resolution
Annex I – part A – point II – indent 1
Annex I – part A – point II – indent 1
- a “Regulation on ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies”;
Amendment 296 #
Motion for a resolution
Annex I – part A – point II – indent 2
Annex I – part A – point II – indent 2
Amendment 299 #
Motion for a resolution
Annex I – part A – point II – indent 4
Annex I – part A – point II – indent 4
- the work carried out by the “Supervisory Authority” in each Member State to ensure that ethical principles are applied to high-risk artificial intelligence, robotics and related technologies;
Amendment 303 #
Motion for a resolution
Annex I – part A – point III – introductory part
Annex I – part A – point III – introductory part
III. The “Regulation on ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies” builds on the following principles:
Amendment 305 #
Motion for a resolution
Annex I – part A – point III – indent 1
Annex I – part A – point III – indent 1
- human-centric and human-made artificial intelligence, robotics and related technologies;
Amendment 309 #
Motion for a resolution
Annex I – part A – point III – indent 4
Annex I – part A – point III – indent 4
- safeguards against unfair bias and discrimination;
Amendment 312 #
Motion for a resolution
Annex I – part A – point III – indent 6
Annex I – part A – point III – indent 6
- environmentally friendly and sustainable artificial intelligence, robotics and related technologies;
Amendment 317 #
Motion for a resolution
Annex I – part A – point IV – indent 1 a (new)
Annex I – part A – point IV – indent 1 a (new)
- to issue guidance as regards the application of the proposed Regulation in order to ensure its consistent application, namely regarding the application of the criteria for artificial intelligence, robotics and related technologies to be considered high-risk;
Amendment 319 #
Motion for a resolution
Annex I – part A – point IV – indent 1 b (new)
Annex I – part A – point IV – indent 1 b (new)
- to liaise with the “Supervisory Authority” in each Member State;
Amendment 326 #
Motion for a resolution
Annex I – part A – point V
Annex I – part A – point V
Amendment 328 #
Motion for a resolution
Annex I – part A – point V – indent 1
Annex I – part A – point V – indent 1
Amendment 330 #
Motion for a resolution
Annex I – part A – point V – indent 2
Annex I – part A – point V – indent 2
Amendment 332 #
Motion for a resolution
Annex I – part A – point V – indent 3
Annex I – part A – point V – indent 3
Amendment 335 #
Motion for a resolution
Annex I – part A – point V – indent 4
Annex I – part A – point V – indent 4
Amendment 337 #
Motion for a resolution
Annex I – part A – point V – indent 5
Annex I – part A – point V – indent 5
Amendment 342 #
Motion for a resolution
Annex I – part A – point VI – indent 1
Annex I – part A – point VI – indent 1
- to assess whether artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed and used in the Union are high-risk technologies in accordance with the criteria defined in the proposed Regulation;
Amendment 347 #
Motion for a resolution
Annex I – part A – point VI – indent 2 a (new)
Annex I – part A – point VI – indent 2 a (new)
- to issue a certificate of compliance with ethical principles, in line with common criteria and an application process developed in cooperation with other Supervisory Authorities, the European Commission and other relevant institutions, bodies, offices and agencies of the Union;
Amendment 348 #
Motion for a resolution
Annex I – part A – point VI – indent 3
Annex I – part A – point VI – indent 3
- to contribute to the consistent application of the proposed Regulation in cooperation with other Supervisory Authorities, the European Commission and other relevant institutions, bodies, offices and agencies of the Union, namely regarding the application of the criteria for artificial intelligence, robotics and related technologies to be considered high-risk by elaborating, in the context of such cooperation, a common and exhaustive list of high-risk artificial intelligence, robotics and related technologies in line with the criteria set out in this Regulation; and
Amendment 361 #
Motion for a resolution
Annex I – part A – point VII
Annex I – part A – point VII
VII. The key role of stakeholders should be to engage with the Commission, the European Agency for Artificial Intelligence and the “Supervisory Authority” in each Member State.
Amendment 366 #
Motion for a resolution
Annex I – part B – recital 1
Annex I – part B – recital 1
(1) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, are based on a desire to serve society. They can entail opportunities and risks, which should be addressed and regulated by a comprehensive legal framework of ethical principles to be complied with by high-risk technologies from the moment of the development and deployment of such technologies to their use.
Amendment 369 #
Motion for a resolution
Annex I – part B – recital 2
Annex I – part B – recital 2
(2) The level of compliance with the ethical principles regarding the development, deployment and use of high- risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies in the Union should be equivalent in all Member States, in order to efficiently seize the opportunities and consistently address the risks of such technologies. It should be ensured that the application of the rules set out in this Regulation throughout the Union is homogenous.
Amendment 370 #
Motion for a resolution
Annex I – part B – recital 3
Annex I – part B – recital 3
(3) In this context, the current diversity of the rules and practices to be followed across the Union poses a significant risk of fragmentation of the single market and to the protection of the well-being and prosperity of individuals and society alike, as well as to the coherent exploration of the full potential that artificial intelligence, robotics and related technologies have in promoting and preserving that well-being and prosperity. Differences in the degree of consideration of the ethical dimension inherent to these technologies can prevent them from being freely developed, deployed or used within the Union and such differences can constitute an obstacle to the pursuit of economic activities at Union level, distort competition and impede authorities in the fulfilment of their obligations under Union law. In addition, the absence of a common framework of ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies results in legal uncertainty for all those involved, namely developers, deployers and users.
Amendment 377 #
Motion for a resolution
Annex I – part B – recital 6
Annex I – part B – recital 6
(6) A common understanding in the Union of notions such as artificial intelligence, robotics, related technologies, algorithms and biometric recognition is required in order to allow for a harmonized regulatory approach. However, the specific legal definitions need to be developed in the context of this Regulation without prejudice to other definitions used in other legal acts and international jurisdictions.
Amendment 385 #
Motion for a resolution
Annex I – part B – recital 7
Annex I – part B – recital 7
(7) The development, deployment and use of artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be such as to ensure that the best interests of citizens are considered by respecting fundamental rights as set out in the Charter of Fundamental Rights of the European Union (‘the Charter’), settled case-law of the Court of Justice of the European Union, and other European and international instruments which apply in the Union.
Amendment 387 #
Motion for a resolution
Annex I – part B – recital 8
Annex I – part B – recital 8
(8) High-risk artificial intelligence, robotics and related technologies have been provided with the ability to learn from data and experience, as well as to take founded decisions. Such capacities need to remain subject to meaningful human review, judgment, intervention and control. The technical and operational complexity of such technologies should never prevent their deployer or user from being able to, at the very least, alter or halt them in cases where the compliance with the principles set out in this Regulation is at risk.
Amendment 393 #
Motion for a resolution
Annex I – part B – recital 9
Annex I – part B – recital 9
(9) Any artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, are to be considered high-risk on the basis of an impartial, objective and external risk assessment by the national supervisory authority when, considering their intended use and the critical sectors where they are employed, their autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance.
Amendment 395 #
Motion for a resolution
Annex I – part B – recital 9 a (new)
Annex I – part B – recital 9 a (new)
(9a) The significance of the potential of high-risk artificial intelligence, robotics and related technologies to cause harm or damage should depend on the interplay between the severity of possible harm, the likelihood that the risk materialises and the manner in which the technologies are being used. The degree of severity should be determined based on the extent of the potential harm resulting from the operation, the number of affected persons, the total value of the potential damage as well as the harm to society as a whole. The likelihood should be determined based on the role of the algorithmic calculations in the decision-making process, the complexity of the decision and the reversibility of the effects. Ultimately, the manner of usage should depend, among other things, on whether, taking into account the specific sector in which the artificial intelligence, robotics and related technologies operate, it could have legal or factual effects on important legally protected rights of the affected person, and whether the effects can reasonably be avoided.
Amendment 397 #
Motion for a resolution
Annex I – part B – recital 10
Annex I – part B – recital 10
Amendment 404 #
Motion for a resolution
Annex I – part B – recital 11
Annex I – part B – recital 11
(11) To be trustworthy, high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be developed, deployed and used in a safe, transparent and accountable manner based on the features of robustness, resilience, security, accuracy and error identification, explainability and identifiability, and in a manner that makes it possible to be temporarily disabled and to revert to historical functionalities in cases of non-compliance with those safety features.
Amendment 407 #
Motion for a resolution
Annex I – part B – recital 13
Annex I – part B – recital 13
(13) Developers and deployers should make available to users any subsequent updates of the technologies concerned, namely in terms of software, in accordance with the obligations stipulated in the contract or laid down in Union or national law.
Amendment 410 #
Motion for a resolution
Annex I – part B – recital 14
Annex I – part B – recital 14
(14) To the extent that their involvement with those technologies influences the compliance with the safety, transparency and accountability requirements set out in this Regulation, users should use high-risk artificial intelligence, robotics and related technologies in good faith. This means, in particular, that they should use those technologies in accordance with the safety and use instructions provided by the developer and/or the deployer and in a way that does not contravene the ethical principles laid down in this legal framework and the requirements listed therein. Beyond such use in good faith, users should be exempt from any responsibility that otherwise falls upon developers and deployers as established in this Regulation.
Amendment 413 #
Motion for a resolution
Annex I – part B – recital 15
Annex I – part B – recital 15
(15) The citizens’ trust in high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, depends on the understanding and comprehension of the technical processes. The degree of explainability of such processes should depend on the context and the severity of the consequences of an erroneous or inaccurate output of those technical processes, and needs to be sufficient for challenging them and seeking redress. Auditability and traceability should remedy the possible unintelligibility of such technologies.
Amendment 416 #
Motion for a resolution
Annex I – part B – recital 16
Annex I – part B – recital 16
(16) Society’s trust in high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, depends on the degree to which their assessment, auditability and traceability are enabled in the technologies concerned. Where the extent of their involvement so requires, developers should ensure that such technologies are designed and built in a manner that enables such an assessment, auditing and traceability. Deployers and users should ensure that artificial intelligence, robotics and related technologies are deployed and used in full respect of transparency requirements, and allowing auditing and traceability.
Amendment 425 #
Motion for a resolution
Annex I – part B – recital 21
Annex I – part B – recital 21
(21) Artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, should contribute to sustainable progress. Such technologies can also play an important role in achieving the Sustainable Development Goals outlined by the United Nations with a view to enabling future generations to flourish. Such technologies can support the monitoring of adequate progress on the basis of sustainability and social cohesion indicators, and by using responsible research and innovation tools requiring the mobilisation of resources by the Union and its Member States to support and invest in projects addressing those goals.
Amendment 427 #
Motion for a resolution
Annex I – part B – recital 22
Annex I – part B – recital 22
(22) The development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should in no way cause injury or harm of any kind to individuals or society. Accordingly, such technologies should be developed, deployed and used in a socially responsible manner.
Amendment 428 #
Motion for a resolution
Annex I – part B – recital 23
Annex I – part B – recital 23
(23) Developers, deployers and users should be held responsible, to the extent of their involvement in the artificial intelligence, robotics and related technologies concerned, for any injury or harm inflicted upon individuals and society in accordance with Union and national liability rules.
Amendment 429 #
Motion for a resolution
Annex I – part B – recital 24
Annex I – part B – recital 24
Amendment 430 #
Motion for a resolution
Annex I – part B – recital 25
Annex I – part B – recital 25
(25) Socially responsible artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, can be defined as technologies which contribute to finding solutions that safeguard and promote a number of different aspects of society, most notably democracy, health and economic prosperity, equality of opportunity, workers’ and social rights, diverse and independent media and objective and freely available information, allowing for public debate, quality education, cultural and linguistic diversity, gender balance, digital literacy, innovation and creativity. Nevertheless, those requirements shall be applicable only to high-risk artificial intelligence, robotics and related technologies. They are also those that are developed, deployed and used having due regard for their ultimate impact on the physical and mental well-being of citizens.
Amendment 433 #
Motion for a resolution
Annex I – part B – recital 26
Annex I – part B – recital 26
(26) These technologies can also be developed, deployed and used with a view to supporting social inclusion, plurality, solidarity, fairness, equality and cooperation and their potential in that context should be maximized and explored through research and innovation projects. The Union and its Member States should therefore mobilise their resources for the purpose of supporting and investing in such projects.
Amendment 436 #
Motion for a resolution
Annex I – part B – recital 28
Annex I – part B – recital 28
(28) The development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should take into consideration their environmental footprint and should not cause harm to the environment during their lifecycle and across their entire supply chain. In line with the obligations laid down in Union law, high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be developed, deployed and used in a sustainable manner that supports the achievement of climate neutrality and circular economy goals.
Amendment 439 #
Motion for a resolution
Annex I – part B – recital 29
Annex I – part B – recital 29
(29) Developers, deployers and users of high-risk artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, should be held responsible, to the extent of their involvement in the development, deployment or use of the artificial intelligence, robotics and related technologies concerned, for any harm caused to the environment in accordance with the applicable environmental liability rules.
Amendment 441 #
Motion for a resolution
Annex I – part B – recital 30
Annex I – part B – recital 30
Amendment 442 #
Motion for a resolution
Annex I – part B – recital 31
Annex I – part B – recital 31
(31) These technologies should also be developed, deployed and used with a view to supporting the achievement of environmental goals prescribed in Union law, such as reducing waste production, diminishing the carbon footprint, preventing climate change and avoiding environmental degradation, and their potential in that context should be maximized and explored through research and innovation projects. The Union and the Member States should therefore mobilise their resources for the purpose of supporting and investing in such projects.
Amendment 446 #
Motion for a resolution
Annex I – part B – recital 33
Annex I – part B – recital 33
(33) High-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed and used in the Union should fully respect Union citizens’ rights to privacy and protection of personal data. In particular, their development, deployment and use should be in accordance with Regulation (EU) 2016/679 of the European Parliament and of the Council1 and Directive 2002/58/EC of the European Parliament and of the Council2. __________________ 1 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1). 2 Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ L 201, 31.7.2002, p. 37).
Amendment 447 #
Motion for a resolution
Annex I – part B – recital 34
Annex I – part B – recital 34
(34) In particular, the ethical boundaries of the use of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, should be duly considered when using remote recognition technologies, such as biometric recognition, to automatically identify individuals. When these technologies are used by public authorities during times of national emergency, such as during a national health crisis, the use should be proportionate and criteria for that use defined in order to be able to determine whether, when and how it should take place, and such use should be mindful of its psychological and sociocultural impact with due regard for human dignity and the fundamental rights set out in the Charter.
Amendment 451 #
Motion for a resolution
Annex I – part B – recital 35
Annex I – part B – recital 35
(35) Governance that is based on relevant standards enhances safety and promotes the increase of citizens’ trust in the development, deployment and use of high-risk artificial intelligence, robotics and related technologies including software, algorithms and data used or produced by such technologies.
Amendment 457 #
Motion for a resolution
Annex I – part B – recital 37
Annex I – part B – recital 37
(37) Sharing and use of data by multiple participants is sensitive and therefore the development, deployment and use of high- risk artificial intelligence, robotics and related technologies should be governed by relevant rules, standards and protocols reflecting the requirements of quality, integrity, security, privacy and control. The data governance strategy should focus on the processing, sharing of and access to such data, including its proper management and traceability, and guarantee the adequate protection of data belonging to vulnerable groups, including people with disabilities, patients, children, minorities and migrants.
Amendment 463 #
Motion for a resolution
Annex I – part B – recital 38
Annex I – part B – recital 38
(38) The effective application of the ethical principles laid down in this Regulation will largely depend on Member States’ appointment of an independent public authority to act as a supervisory authority. In particular, each national supervisory authority should be responsible for assessing and monitoring the compliance of artificial intelligence, robotics and related technologies considered high-risk in light of the objective criteria set out in this Regulation.
Amendment 469 #
Motion for a resolution
Annex I – part B – recital 40 a (new)
Annex I – part B – recital 40 a (new)
(40a) In the context of such cooperation, national supervisory authorities, together with the European Commission and other relevant institutions, bodies, offices and agencies of the Union, should elaborate a common and exhaustive list of high-risk artificial intelligence, robotics and related technologies in line with the criteria set out in this Regulation, and should develop common criteria and an application process for the granting of a European certificate of ethical compliance.
Amendment 472 #
Motion for a resolution
Annex I – part B – recital 42 a (new)
Annex I – part B – recital 42 a (new)
(42a) A fully harmonised approach is needed at European level. The European Commission should therefore be tasked with finding an appropriate solution to structure such an approach. The goal is to avoid the creation of yet another agency; instead, strict rules and guidelines for cooperation between Member States should be established.
Amendment 474 #
Motion for a resolution
Annex I – part B – recital 43
Annex I – part B – recital 43
Amendment 481 #
Motion for a resolution
Annex I – part B – recital 46
Annex I – part B – recital 46
Amendment 487 #
Motion for a resolution
Annex I – part B – Article 1 – paragraph 1
Annex I – part B – Article 1 – paragraph 1
The purpose of this Regulation is to establish a regulatory framework of ethical principles for the development, deployment and use of high-risk artificial intelligence, robotics and related technologies in the Union.
Amendment 490 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1
Annex I – part B – Article 2 – paragraph 1
This Regulation applies to high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union.
Amendment 491 #
Motion for a resolution
Annex I – part B – Article 2 – paragraph 1 a (new)
Annex I – part B – Article 2 – paragraph 1 a (new)
1a. This Regulation shall not apply to artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union which are not considered high-risk.
Amendment 495 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point a
Annex I – part B – Article 4 – paragraph 1 – point a
(a) ‘artificial intelligence’ means software systems that display intelligent behaviour by analysing certain input and taking action, with some degree of autonomy, to achieve specific goals. AI systems can be purely software-based, acting in the virtual world, or can be embedded in hardware devices;
Amendment 504 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point f a (new)
Annex I – part B – Article 4 – paragraph 1 – point f a (new)
(fa) ‘autonomous’ means an artificial intelligence, robotics or related technology that operates by perceiving certain input and without needing to follow a set of pre-determined instructions, despite its behaviour being constrained by the goal it was given and other relevant design choices made by its developer;
Amendment 505 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point f b (new)
Annex I – part B – Article 4 – paragraph 1 – point f b (new)
(fb) ‘high risk’ means a significant potential in an autonomously operating artificial intelligence, robotics and related technology to cause harm or damage to one or more persons in a manner that is random and impossible to predict in advance, considering its intended use and the critical sector where it is employed; the significance of the potential depends on the interplay between the severity of possible harm or damage, the likelihood that the risk materialises and the manner in which the AI-system is being used;
Amendment 518 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point o
Annex I – part B – Article 4 – paragraph 1 – point o
(o) ‘injury or harm’ means physical, emotional or mental injury, such as hate speech, loss of privacy, financial or economic loss, loss of employment or educational opportunity, undue restriction of freedom of choice and expression, and any infringement of Union law that is detrimental to a person;
Amendment 523 #
Motion for a resolution
Annex I – part B – Article 4 – paragraph 1 – point p
Annex I – part B – Article 4 – paragraph 1 – point p
(p) ‘governance’ means the manner of ensuring that appropriate standards and the appropriate protocols of behaviour are adopted and observed by developers, deployers and users, based on a formal set of rules, procedures and values, and which allows them to deal appropriately with ethical matters as or before they arise.
Amendment 525 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 1
Annex I – part B – Article 5 – paragraph 1
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in the Union in accordance with the ethical principles laid down in this Regulation.
Amendment 526 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 2
Annex I – part B – Article 5 – paragraph 2
2. The development, deployment and use of high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be carried out in the best interest of citizens and contribute to protecting the social and economic well-being of society by ensuring that human dignity and the fundamental rights set out in the Charter are fully respected.
Amendment 527 #
Motion for a resolution
Annex I – part B – Article 5 – paragraph 3
Annex I – part B – Article 5 – paragraph 3
Amendment 531 #
Motion for a resolution
Annex I – part B – Article 6 – title
Annex I – part B – Article 6 – title
Human-centric and human-made artificial intelligence
Amendment 532 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 1
Annex I – part B – Article 6 – paragraph 1
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in a human- centric manner with the aim of contributing to the existence of a democratic, pluralistic and equitable society by safeguarding human autonomy and decision-making and ensuring human agency.
Amendment 535 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 2
Annex I – part B – Article 6 – paragraph 2
2. The technologies listed in paragraph 1 shall be developed, deployed and used in a manner that guarantees full human oversight at any time, in particular where that development, deployment or use entails a risk of breaching the ethical principles set out in this Regulation.
Amendment 538 #
Motion for a resolution
Annex I – part B – Article 6 – paragraph 3
Annex I – part B – Article 6 – paragraph 3
3. The technologies listed in paragraph 1 shall be developed, deployed and used in a manner that allows human control to be regained when needed, including through the altering or halting of those technologies, when that development, deployment or use entails a risk of breaching the ethical principles set out in this Regulation.
Amendment 541 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 1
Annex I – part B – Article 7 – paragraph 1
1. For the purposes of this Regulation, artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be considered high-risk technologies when, considering their intended use and the critical sectors where they are employed in accordance with the Annex to this Regulation, their autonomous operation involves a significant potential to cause harm to one or more persons, in a manner that is random and impossible to predict in advance. Determining the significance of the potential shall depend on the interplay between the severity of possible harm or damage, the likelihood that the risk materialises and the manner in which the AI-system is being used.
Amendment 543 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 1 a (new)
Annex I – part B – Article 7 – paragraph 1 a (new)
1a. The risk assessment of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be carried out by the national supervisory authorities referred to in Article 14 on the basis of a common and exhaustive list of high-risk artificial intelligence, robotics and related technologies in accordance with the objective criteria provided for in paragraph 1 of this Article and elaborated jointly by the national supervisory authorities, the European Commission and other relevant institutions, bodies, offices and agencies of the Union in the context of their cooperation.
Amendment 545 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 2 a (new)
Annex I – part B – Article 7 – paragraph 2 a (new)
2a. Upon request by any developer, deployer or user of high-risk artificial intelligence, robotics and related technologies seeking to certify the positive assessment of compliance carried out, the respective national supervisory authority shall issue a European certificate of ethical compliance. Such a European certificate of ethical compliance shall be issued in accordance with the common criteria and application process developed jointly by the national supervisory authorities, the European Commission and other relevant institutions, bodies, offices and agencies of the Union in the context of their cooperation.
Amendment 546 #
Motion for a resolution
Annex I – part B – Article 7 – paragraph 3
Annex I – part B – Article 7 – paragraph 3
Amendment 548 #
Motion for a resolution
Annex I – part B – Article 7 a (new)
Annex I – part B – Article 7 a (new)
Article 7a
Voluntary labelling scheme for non-high-risk AI technologies
1. For artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies that do not qualify as high-risk and that are not subject to the mandatory requirements and risk assessment established by this Regulation, a voluntary labelling scheme should be established.
2. Under such a voluntary labelling scheme, interested economic operators can decide to make themselves subject either to the requirements listed in this Regulation or to a specific set of similar requirements especially established for the purposes of the voluntary scheme by national authorities.
3. The economic operators concerned shall be awarded a quality label for their artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, provided that those technologies comply with the applicable requirements in accordance with paragraph 2 of this Article.
Amendment 549 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – introductory part
Annex I – part B – Article 8 – paragraph 1 – introductory part
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed or used in the Union shall be developed, deployed and used in a manner that ensures they do not breach the ethical principles set out in this Regulation. In particular, they shall be:
Amendment 553 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point b
Annex I – part B – Article 8 – paragraph 1 – point b
(b) developed, deployed and used in a resilient manner so that they ensure an adequate level of security, and one that prevents any technical vulnerabilities from being exploited for malicious or unlawful purposes;
Amendment 555 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point c
Annex I – part B – Article 8 – paragraph 1 – point c
(c) developed, deployed and used in a secure manner that ensures there are safeguards that include a fall-back plan and action in case of a safety or security risk;
Amendment 558 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point d
Annex I – part B – Article 8 – paragraph 1 – point d
(d) developed, deployed and used in a manner that ensures that there is trust that the performance is reliable as regards reaching the aims and carrying out the activities they have been conceived for, including by ensuring that all operations are reproducible;
Amendment 563 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 1 – point g
Annex I – part B – Article 8 – paragraph 1 – point g
(g) developed, deployed and used in a manner such that they are capable of warning users that they are interacting with artificial intelligence systems, duly disclosing their capabilities, accuracy and limitations to artificial intelligence developers, deployers and users;
Amendment 566 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 2
Annex I – part B – Article 8 – paragraph 2
2. In accordance with Article 6(2), the technologies mentioned in paragraph 1 shall be developed, deployed and used in a transparent and traceable manner so that their elements, processes and phases are documented to the highest possible standards, and that it is possible for the national supervisory authorities referred to in Article 14 to assess the compliance of such technologies with the obligations set out in this Regulation. In particular, the developer, deployer or user of those technologies shall be responsible for, and be able to demonstrate, compliance with the safety features set out in paragraph 1.
Amendment 572 #
Motion for a resolution
Annex I – part B – Article 8 – paragraph 4
Annex I – part B – Article 8 – paragraph 4
4. Users shall be presumed to have complied with the obligations set out in this Article where their use of artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, is carried out in good faith, in accordance with the safety and use instructions provided by the developer and/or the deployer, and in no way contravenes the ethical principles laid down in this Regulation.
Amendment 573 #
Motion for a resolution
Annex I – part B – Article 9 – paragraph 1
Annex I – part B – Article 9 – paragraph 1
1. Any high-risk software, algorithm or data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall be such as to ensure respect for human dignity and equal treatment for all in line with Union law.
Amendment 577 #
Motion for a resolution
Annex I – part B – Article 9 – paragraph 2
Annex I – part B – Article 9 – paragraph 2
2. Any high-risk software, algorithm or data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall be unbiased and, without prejudice to paragraph 3, shall not discriminate on grounds such as race, gender, sexual orientation, pregnancy, disability, physical or genetic features, age, national minority, ethnic or social origin, language, religion or belief, political views or civic participation, citizenship, civil or economic status, education, or criminal record.
Amendment 582 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 1
Annex I – part B – Article 10 – paragraph 1
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed and used in the Union in a socially responsible manner that ensures social well-being, environmental and economic outcomes, that contributes to gender balance and that does not result in injury or harm of any kind being caused to individuals or society, in compliance with relevant Union laws, principles and values.
Amendment 583 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2
Annex I – part B – Article 10 – paragraph 2
Amendment 586 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point a
Annex I – part B – Article 10 – paragraph 2 – point a
Amendment 588 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point b
Annex I – part B – Article 10 – paragraph 2 – point b
Amendment 589 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point c
Annex I – part B – Article 10 – paragraph 2 – point c
Amendment 592 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point d
Annex I – part B – Article 10 – paragraph 2 – point d
Amendment 595 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 2 – point e
Annex I – part B – Article 10 – paragraph 2 – point e
Amendment 600 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 3
Annex I – part B – Article 10 – paragraph 3
3. The Union and its Member States shall encourage research projects intended to provide solutions, based on artificial intelligence, robotics and related technologies, that seek to promote social inclusion, democracy, plurality, solidarity, fairness, equality and cooperation.
Amendment 601 #
Motion for a resolution
Annex I – part B – Article 10 – paragraph 4
Annex I – part B – Article 10 – paragraph 4
Amendment 604 #
Motion for a resolution
Annex I – part B – Article 11 – title
Annex I – part B – Article 11 – title
Environmental friendliness and sustainability
Amendment 606 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 1
Annex I – part B – Article 11 – paragraph 1
1. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be developed, deployed or used in the Union in compliance with Union law, principles and values, in a manner that ensures optimal sustainable outcomes and minimises their environmental footprint during their lifecycle and through their entire supply chain, in order to support the achievement of climate neutrality and circular economy goals in accordance with the applicable Union law.
Amendment 611 #
Motion for a resolution
Annex I – part B – Article 11 – paragraph 3
Annex I – part B – Article 11 – paragraph 3
3. Any high-risk artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, shall be assessed as to their environmental friendliness and sustainability by the national supervisory authorities, referred to in Article 14, ensuring that measures are put in place to mitigate their general impact as regards natural resources, energy consumption, waste production, the carbon footprint, climate change and environmental degradation in order to ensure compliance with the applicable Union or national law.
Amendment 616 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 1 a (new)
Annex I – part B – Article 12 – paragraph 1 a (new)
1a. The use and gathering of biometric data for remote identification purposes, such as the deployment of facial recognition in public areas, carries specific risks for fundamental rights and shall be limited to cases of substantial public interest in accordance with EU data protection rules and, in particular, the GDPR. In such cases, the processing must take place on the basis of Union or national law, subject to the requirements of proportionality, respect for the essence of the right to data protection and appropriate safeguards, and in compliance with Article 12(1).
Amendment 617 #
Motion for a resolution
Annex I – part B – Article 12 – paragraph 2
Annex I – part B – Article 12 – paragraph 2
Amendment 631 #
Motion for a resolution
Annex I – part B – Article 14 – paragraph 1
Annex I – part B – Article 14 – paragraph 1
1. Each Member State shall designate an independent public authority to be responsible for monitoring the application of this Regulation (‘supervisory authority’). In accordance with Article 7(1) and (2), each national supervisory authority shall be responsible for assessing whether artificial intelligence, robotics and related technologies, including software, algorithms and data used or produced by such technologies, developed, deployed and used in the Union are high-risk technologies and, if so, for assessing and monitoring their compliance with the ethical principles set out in this Regulation.
Amendment 645 #
Motion for a resolution
Annex I – part B – Article 15
Annex I – part B – Article 15
Amendment 646 #
Motion for a resolution
Annex I – part B – Article 15 – paragraph 1
Annex I – part B – Article 15 – paragraph 1
Amendment 647 #
Motion for a resolution
Annex I – part B – Article 16
Annex I – part B – Article 16
Article 16 – Amendment to Directive (EU) 2019/1937 – deleted
Amendment 648 #
Motion for a resolution
Annex I – part B – Article 16 – paragraph 1 – point 1
Annex I – part B – Article 16 – paragraph 1 – point 1
Amendment 649 #
Motion for a resolution
Annex I – part B – Article 16 – paragraph 1 – point 2
Annex I – part B – Article 16 – paragraph 1 – point 2
Amendment 652 #
Motion for a resolution
Annex I – part B – Annex (new)
Annex I – part B – Annex (new)