Progress: Procedure completed
Role | Committee | Rapporteur | Shadows |
---|---|---|---|
Lead | LIBE | VITANOV Petar (S&D) | VANDENKENDELAERE Tom (EPP), TUDORACHE Dragoş (Renew), BREYER Patrick (Verts/ALE), VANDENDRIESSCHE Tom (ID), BUXADÉ VILLALBA Jorge (ECR), ERNST Cornelia (GUE/NGL) |
Committee Opinion | EMPL | | |
Committee Opinion | IMCO | KOLAJA Marcel (Verts/ALE) | KOULOGLOU Stelios (GUE/NGL), TOŠENOVSKÝ Evžen (ECR), BASSO Alessandra (ID) |
Committee Opinion | JURI | DZHAMBAZKI Angel (ECR) | MAUREL Emmanuel (GUE/NGL), POSPÍŠIL Jiří (EPP), ROBERTI Franco (S&D), BREYER Patrick (Verts/ALE) |
Lead committee dossier:
Legal Basis:
RoP 54

Subjects
Events
The European Parliament adopted by 377 votes to 248, with 62 abstentions, a resolution on artificial intelligence (AI) in criminal law and its use by the police and judicial authorities in criminal matters.
This resolution addresses the issues raised by the use of AI in criminal law and by police and judicial authorities in criminal matters. While recognising the potential opportunities and benefits that AI can bring, Parliament also highlighted the significant risks it may entail for the protection of people's fundamental rights.
Respect for fundamental rights
Given that the processing of large amounts of data is at the heart of AI, Members believed that the EU legal framework on data protection and privacy must be fully respected and should form a basis for any future regulation of AI for law enforcement and judicial use. The use of AI applications must be prohibited when incompatible with fundamental rights. Moreover, the use of AI applications should be categorised as high-risk where they have the potential to significantly affect the lives of individuals.
Parliament reaffirmed that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of human dignity, non-discrimination, freedom of movement, the presumption of innocence and right of defence, including the right to silence, freedom of expression and information, freedom of assembly and of association, equality before the law, the principle of equality of arms and the right to an effective remedy and a fair trial, in accordance with the Charter and the European Convention on Human Rights.
Risk of discrimination
Many algorithmically driven identification technologies currently in use disproportionately misidentify and misclassify and therefore cause harm to racialised people, individuals belonging to certain ethnic communities, LGBTI people, children and the elderly, as well as women.
Parliament called for algorithmic explainability, transparency, traceability and verification as a necessary part of oversight, in order to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement comply with fundamental rights, and are trusted by citizens, as well as in order to ensure that results generated by AI algorithms can be rendered intelligible to users and to those subject to these systems.
Members considered that strong efforts should be made to avoid automated discrimination and bias, and called for safeguards against the misuse of AI technologies by law enforcement and judicial authorities to be regulated uniformly across the Union.
Mandatory impact assessments
Members called for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI system for law enforcement or the judiciary, in order to assess any potential risk to fundamental rights. These impact assessments should be conducted with the active participation of civil society. They should clearly define the safeguards needed to address the identified risks and be made public, as far as possible, before the deployment of any AI system.
Parliament called for periodic mandatory auditing of all AI systems used by law enforcement and the judiciary where there is the potential to significantly affect the lives of individuals. It also highlighted the need for specialised training regarding the ethical provisions, potential dangers, limitations, and proper use of AI technology, especially for police and judiciary personnel.
Guarantee human intervention
Members called for the precautionary principle to be respected in all law enforcement applications of AI and stressed that in judicial and law enforcement settings, the decision giving legal or similar effect always needs to be taken by a human, who can be held accountable for the decisions made. Those subject to AI-powered systems must have recourse to remedy.
Surveillance and mass profiling
Parliament called for a permanent ban on the use of automated analysis and/or recognition, in publicly accessible spaces, of human features other than the face, such as gait, fingerprints, DNA, voice, and other biometric and behavioural signals. It also called for a ban on the use of private facial recognition databases (such as the Clearview AI system).
Members called for a moratorium on the deployment of law enforcement facial recognition systems for identification purposes, unless they are only used for the purpose of identifying victims of crime, until technical standards can be considered fully respectful of fundamental rights.
Members also supported a ban on mass-scale scoring of individuals using AI.
Lastly, Parliament expressed concern about research projects funded under Horizon 2020 that deploy artificial intelligence at external borders, such as the iBorderCtrl project, a ‘smart lie detection system’ for travellers entering the EU. It called on the Commission to implement, if necessary through infringement procedures, the ban on any processing of biometric data for law enforcement purposes leading to mass surveillance in publicly accessible areas.
The Committee on Civil Liberties, Justice and Home Affairs adopted an own-initiative report by Petar VITANOV (S&D, BG) on artificial intelligence (AI) in criminal law and its use by the police and judicial authorities in criminal matters.
The use of AI in law enforcement entails a number of potentially high, and in some cases unacceptable, risks for the protection of fundamental rights of individuals, such as opaque decision-making, different types of discrimination and errors inherent in the underlying algorithm which can be reinforced by feedback loops, as well as risks to the protection of privacy and personal data, the protection of freedom of expression and information, the presumption of innocence, the right to an effective remedy and a fair trial.
This report addresses the issues raised by the use of AI in criminal law and by police and judicial authorities in criminal matters. While recognising the potential opportunities and benefits that AI can bring, it also highlighted the significant risks it may entail.
Respect for fundamental rights
Given that the processing of large amounts of data is at the heart of AI, Members believed that the EU legal framework on data protection and privacy must be fully respected and should form a basis for any future regulation of AI for law enforcement and judicial use. The use of AI applications must be prohibited when incompatible with fundamental rights. Moreover, the use of AI applications has to be categorised as high-risk where it has the potential to significantly affect the lives of individuals.
The report reaffirmed that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of human dignity, non-discrimination, freedom of movement, the presumption of innocence and right of defence, including the right to silence, freedom of expression and information, freedom of assembly and of association, equality before the law, the principle of equality of arms and the right to an effective remedy and a fair trial, in accordance with the Charter and the European Convention on Human Rights.
Any AI tools either developed or used by law enforcement or the judiciary should, as a minimum, be safe, robust, secure and fit for purpose, and respect the principles of fairness, data minimisation, accountability, transparency, non-discrimination and explainability. Furthermore, their development, deployment and use should be subject to risk assessment and strict necessity and proportionality testing, with safeguards proportionate to the identified risks.
Surveillance and mass profiling
Many algorithmically driven identification technologies currently in use disproportionately misidentify and misclassify and therefore cause harm to racialised people, individuals belonging to certain ethnic communities, LGBTI people, children and the elderly, as well as women.
Members considered that safeguards against the misuse of AI technologies by law enforcement and judicial authorities should be regulated uniformly across the EU.
The report stressed the legal obligation to prevent mass surveillance using AI technologies and to prohibit the use of applications that could lead to it. It called for increased efforts to avoid automated discrimination and automation bias.
Risks of data leaks
The report stressed that the safety and security aspects of AI systems used by law enforcement and judicial authorities must be carefully considered, and that these systems must be sufficiently robust and resilient to withstand malicious attacks. It stressed the importance of safety by design, as well as specific human oversight before the use of certain critical applications, and called for law enforcement and judicial authorities to use only those AI applications that respect the principles of privacy and data protection by design, so as to avoid misuse.
Members called for the precautionary principle to be respected in all law enforcement applications of AI and stressed that in judicial and law enforcement settings, the decision giving legal or similar effect always needs to be taken by a human, who can be held accountable for the decisions made.
Mandatory impact assessments
The report called for algorithmic explainability, transparency, traceability and verification as a necessary part of oversight, in order to ensure that the development, deployment and use of AI systems for the judiciary and law enforcement comply with fundamental rights and are trusted by citizens.
Members called for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI system for law enforcement or the judiciary, in order to assess any potential risk to fundamental rights. These impact assessments should be conducted with the active participation of civil society. They should clearly define the safeguards needed to address the identified risks and be made public, as far as possible, before the deployment of any AI system.
The report called for periodic mandatory auditing of all AI systems used by law enforcement and the judiciary where there is the potential to significantly affect the lives of individuals. It also highlighted the need for specialised training regarding the ethical provisions, potential dangers, limitations, and proper use of AI technology, especially for police and judiciary personnel.
Facial recognition
Members called for a moratorium on the deployment of facial recognition systems for law enforcement purposes that have the function of identification, unless strictly used for the purpose of identifying victims of crime, until the technical standards can be considered fully compliant with fundamental rights.
Documents
- Commission response to text adopted in plenary: SP(2021)791
- Decision by Parliament: T9-0405/2021
- Results of vote in Parliament
- Debate in Parliament
- Committee report tabled for plenary: A9-0232/2021
- Committee opinion: PE652.371
- Committee opinion: PE648.565
- Amendments tabled in committee: PE655.659
- Committee draft report: PE652.625
Activities
- Marcel KOLAJA
Plenary Speeches (3)
- 2021/10/04 Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (debate)
- 2021/10/04 Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (debate)
- 2021/10/04 Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters (debate)
- Petar VITANOV
- Fabio Massimo CASTALDO
Plenary Speeches (1)
- Angel DZHAMBAZKI
Plenary Speeches (1)
- Cornelia ERNST
Plenary Speeches (1)
- Laura FERRARA
Plenary Speeches (1)
- Karol KARSKI
Plenary Speeches (1)
- Maite PAGAZAURTUNDÚA
Plenary Speeches (1)
- Tom VANDENKENDELAERE
Plenary Speeches (1)
- Mislav KOLAKUŠIĆ
Plenary Speeches (1)
- Dragoş TUDORACHE
Plenary Speeches (1)
- Ibán GARCÍA DEL BLANCO
Plenary Speeches (1)
- Karen MELCHIOR
Plenary Speeches (1)
- Sabrina PIGNEDOLI
Plenary Speeches (1)
- Eugen JURZYCA
Plenary Speeches (1)
- Miroslav RADAČOVSKÝ
Plenary Speeches (1)
- Jean-Lin LACAPELLE
Plenary Speeches (1)
Votes
Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters - A9-0232/2021 - Petar Vitanov - § 6/1 #
A9-0232/2021 - Petar Vitanov - § 6/2 #
A9-0232/2021 - Petar Vitanov - § 24 - Am 1 #
A9-0232/2021 - Petar Vitanov - § 27 - Am 2 #
A9-0232/2021 - Petar Vitanov - § 31 - Am 3 #
Artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters - A9-0232/2021 - Petar Vitanov - Motion for a resolution (text as a whole) #
Amendments | Dossier |
---|---|
348 | 2020/2016(INI) |
2020/06/17
IMCO
68 amendments...
Amendment 1 #
Draft opinion Recital A A. whereas the functioning of the digital single market should be improved by reinforcing legal certainty for providers of artificial intelligence (AI), and reinforcing users’ trust by strengthening safeguards to ensure the rule of law and fundamental rights in particular the right to privacy and protection of personal data, right to equality and non-discrimination, and the right to good administration and a fair trial;
Amendment 10 #
Draft opinion Recital A c (new) A c. Whereas in those Member States where some information was available on the use of facial recognition technologies, data protection authorities found that the use of these technologies did not comply with data protection law and lacked legal basis for their deployment;
Amendment 11 #
Draft opinion Recital A c (new) A c. whereas in the field of the internal market, through reforming public procurement procedures the Union can make a fundamental difference in aligning government actions and behaviour with secondary policy objectives such as data protection and non-discrimination;
Amendment 12 #
Draft opinion Recital A d (new) A d. Whereas discrimination in data-driven algorithmic decision-making can occur during the design, testing, and implementation phase, through the biases that are incorporated in the datasets or the algorithms;
Amendment 13 #
Draft opinion Recital A d (new) A d. whereas a principle-based technical development and application of AI is necessary to ensure compliance with human and fundamental rights;
Amendment 14 #
Draft opinion Recital A e (new) A e. whereas on 4 December 2018 the European Commission for the Efficiency of Justice of the Council of Europe published the Ethical Charter for the Use of Artificial Intelligence in Judicial Systems, which sets out ethical principles for the use of artificial intelligence (AI) in judicial systems;
Amendment 15 #
Draft opinion Recital A e (new) A e. Whereas certain uses of AI technologies are particularly sensitive and prone to abuse, which recently made some technology companies decide to stop offering related software;
Amendment 16 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be categorised as high-risk, given that the role of these authorities is to defend the public
Amendment 17 #
Draft opinion Paragraph 1 1. Considers that
Amendment 18 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be categorised as high-risk and treated with the utmost care and highest standards of data protection, given
Amendment 19 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be categorised as high-risk, given that the role of these authorities is to defend the public interest, and given the potential threat that these technologies can represent for fundamental rights, especially since the values reflected in Article 2 of the Treaty on the European Union are at risk in several Member States; considers that the EU should take the lead in laying down basic rules on the development and use of AI to ensure the same high level of consumer protection across the EU;
Amendment 2 #
Draft opinion Recital A A. whereas the functioning of the digital single market should be improved by reinforcing legal certainty for providers of artificial intelligence (AI), and
Amendment 20 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be categorised as high-risk, given that the role
Amendment 21 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be categorised as high-risk, given that the role of these authorities is to defend the public interest and in view of the nature of their responsibility; considers that the EU should take the lead in laying down basic rules on the development and use of AI to ensure the same high level of consumer protection and uniform industry standards across the EU;
Amendment 22 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be categorised as high-risk, given that the role
Amendment 23 #
Draft opinion Paragraph 1 1. Considers that AI used by police and judicial authorities has to be generally categorised as high-risk, given that the role of these authorities is to defend the public interest; considers that the EU should take the lead in laying down basic rules on the development and use of AI by public institutions to ensure the same high level of consumer protection across the EU;
Amendment 24 #
Draft opinion Paragraph 1 a (new) 1 a. Recognizes that the use of AI in the field of justice can help improve efficiency and quality of proceedings; stresses in this context that in particular the rules laid down in the European Convention for Human Rights and in the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data must be respected;
Amendment 25 #
Draft opinion Paragraph 1 a (new) 1 a. Recognizes at the same time that policing is mainly a matter that falls under the responsibility of the Member States and that the possible use of AI in policing is ultimately for each individual Member State to decide upon;
Amendment 26 #
Draft opinion Paragraph 1 a (new) 1 a. Calls on the Commission to scrutinize the application of existing legislation and its enforcement, as well as self-regulatory measures, prior to initiating any possible new legislative proposals;
Amendment 27 #
Draft opinion Paragraph 1 a (new) 1 a. Calls on the Commission to assess the AI technology available on the market and the level of use by police and judicial authorities on a country-by-country basis.
Amendment 28 #
Draft opinion Paragraph 1 b (new) 1 b. Calls on the Commission and Member States to incorporate key ethical AI principles of data protection, human control and non-discrimination into requirements as part of public procurement procedures for digital applications and AI used by police and judicial authorities;
Amendment 29 #
Draft opinion Paragraph 2 2. Stresses that AI should help to ease the administrative burden on public authorities, without ever replacing human decisions, and that AI systems
Amendment 3 #
Draft opinion Recital A A. whereas the functioning of the digital single market should be improved by reinforcing legal certainty for providers of artificial intelligence (AI), and reinforcing users
Amendment 30 #
Draft opinion Paragraph 2 2. Stresses that AI should help to ease the administrative burden on public authorities and increase the efficiency of their decision-making, without ever replacing human decisions, and that AI systems should rely on human oversight;
Amendment 31 #
Draft opinion Paragraph 2 2. Stresses that AI should help to ease the administrative burden on public authorities, without ever replacing human decisions and coordination, and that AI systems should rely
Amendment 32 #
Draft opinion Paragraph 2 2. Stresses that AI should help to ease the administrative burden on public authorities
Amendment 33 #
Draft opinion Paragraph 2 2. Stresses that AI should help to ease the administrative burden on public authorities
Amendment 34 #
Draft opinion Paragraph 2 2. Stresses that AI should help to ease the administrative burden on public authorities, without ever fully replacing human decisions, and that AI systems should rely on human oversight;
Amendment 35 #
Draft opinion Paragraph 2 2. Stresses that AI should
Amendment 36 #
Draft opinion Paragraph 2 a (new) 2 a. Calls on the Commission to issue binding rules for companies to document the development of AI systems; notes in this regard that it is essential for the risk assessment documentation, the software documentation, the algorithms and data sets used to be fully accessible to market surveillance authorities, while respecting Union law;
Amendment 37 #
Draft opinion Paragraph 3
Amendment 38 #
Draft opinion Paragraph 3 3. Considers that such tools should be released as
Amendment 39 #
Draft opinion Paragraph 3 3. Considers that
Amendment 4 #
Draft opinion Recital A A. whereas the functioning of the digital single market should be improved by reinforcing legal certainty for providers of artificial intelligence (AI) systems, and reinforcing
Amendment 40 #
Draft opinion Paragraph 3 3. Considers that Articles 18(2), 42 and 43, and Annex X of Directive 2014/24/EU on public procurement need to be updated so that police and judicial authorities can require that such tools should be released as open source software under the public procurement procedure, and that a
Amendment 41 #
Draft opinion Paragraph 3 3. Considers that such tools should be released as open source software under the public procurement procedure, and that a fundamental rights audit should be part of a prior conformity assessment; believes that – while ensuring the respect of EU law and values and the applicable data protection rules, and without jeopardising investigations or criminal prosecutions – training data
Amendment 42 #
Draft opinion Paragraph 3 3. Considers that such tools should be released as open source software under the public procurement procedure, and that a fundamental rights audit should be part of a prior conformity assessment; believes that – while ensuring the respect of EU law and values and the applicable data protection rules, and without jeopardising investigations or criminal prosecutions – training data must always be open data;
Amendment 43 #
Draft opinion Paragraph 3 a (new) 3 a. Stresses that the use of AI must be based on the principle of non-discrimination in order to prevent discrimination against individuals or groups in data entry and analysis; underlines that crucial to this are the quality of algorithms, original data and ex-ante review of decision-making processes;
Amendment 44 #
Draft opinion Paragraph 3 b (new) 3 b. Stresses that algorithmic-based procedures for analysing legal data must be made accessible, understandable and verifiable to ensure transparency and independence in criminal proceedings;
Amendment 45 #
Draft opinion Paragraph 3 c (new) 3 c. Emphasizes the importance of open-source development of AI in order to avoid obstacles such as high license fees, to ensure transparency and traceability as well as verification, to enable innovation, to strengthen cooperation in the application and development of AI and a culture of exchanging ideas and experiences from using algorithms and their creation;
Amendment 46 #
Draft opinion Paragraph 4 4. Emphasises that data collection and the monitoring of individuals should be limited to criminal suspects
Amendment 47 #
Draft opinion Paragraph 4 4. Emphasises that data collection and the monitoring of individuals should be limited to criminal suspects; stresses that data which are no longer relevant to the proceedings must be deleted;
Amendment 48 #
Draft opinion Paragraph 4 4. Emphasises that data collection and the monitoring of individuals should be limited to criminal suspects, taking into account the respect of private life and the presumption of innocence;
Amendment 49 #
Draft opinion Paragraph 4 4. Emphasises that data collection and the monitoring of individuals should be limited to criminal suspects and court approved surveillance;
Amendment 5 #
Draft opinion Recital A a (new) A a. Whereas the testing and use of AI by police and judicial authorities is wide- spread, with different type of uses, consequences and risks that these entail namely, facial recognition systems, DNA profiling, predictive crime mapping, and mobile phone data extraction, advanced case-law search engines, online dispute resolution, and machine learning for administration of justice;
Amendment 50 #
Draft opinion Paragraph 4 a (new) 4 a. Underlines that the use of AI by public authorities in criminal matters must be done with utmost precaution, only if there is thorough evidence of the trustworthiness of the algorithm, and in accordance with ethical standards in order to prevent misuses in the public sector, such as mass surveillance and breaches of due process rights;
Amendment 51 #
Draft opinion Paragraph 4 b (new) 4 b. Highlights that national authorities should receive training and basic skills to deal with algorithmic systems and responsibly use AI technologies in criminal matters, with the aim of protecting European citizens from potential risks and damages to their fundamental rights;
Amendment 52 #
Draft opinion Paragraph 5
Amendment 53 #
Draft opinion Paragraph 5 5. Insists that Member States shall ensure that citizens are informed when they are subject to the use of artificial intelligence and that effective complaint and redress procedures, including judicial redress should be made available to
Amendment 54 #
Draft opinion Paragraph 5 5. Insists that simple effective complaint and redress procedures, including judicial redress be made available to citizens in order to be able to defend their rights;
Amendment 55 #
Draft opinion Paragraph 5 5. Insists that effective and easily accessible complaint and redress procedures, including judicial redress be made available to citizens;
Amendment 56 #
Draft opinion Paragraph 6
Amendment 57 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection
Amendment 58 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection and profiling to divide people into risk categories at borders, a
Amendment 59 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection and profiling to divide people into risk categories at borders, and calls on the Commission to
Amendment 6 #
Draft opinion Recital A a (new) A a. whereas a common European approach to AI and the regulation for its use in criminal matters by police and law enforcement is necessary in order to avoid fragmentation in the Single Market;
Amendment 60 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection and profiling to divide people into risk categories at borders, and calls on the Commission to
Amendment 61 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection and profiling to divide people into risk categories at borders
Amendment 62 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection and profiling to divide people into risk categories at borders, and calls on the Commission to ban the
Amendment 63 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, a
Amendment 64 #
Draft opinion Paragraph 6 6. Recalls the high risk of abuse of certain types of AI, including facial recognition technologies in public spaces, automated behaviour detection and profiling to divide people into risk categories at borders, and calls on the
Amendment 65 #
Draft opinion Paragraph 7 a (new) 7 a. Calls for exchanges of information and best practices regarding the application of AI techniques and tools by judicial and police authorities in Member States to avoid a fragmented approach in the Single Market, as well as to face in a coordinated manner the risks associated with AI technologies, such as vulnerability to cybersecurity threats, and ensure the protection of citizens in the Union;
Amendment 66 #
Draft opinion Paragraph 7 a (new) 7 a. Further recalls the high risk of smart policing applications, which depend on data sets collected by humans containing discriminatory and prejudiced data and calls on procurement procedures for such applications to take into account and have safeguards for possible biases;
Amendment 67 #
Draft opinion Paragraph 7 a (new) 7 a. Emphasises that where decision making is assisted by statistical calculations, such as at probation hearings, the decision makers need to be trained about the general biases statistical calculations carry and made aware about the specific biases of calculation in the particular situation;
Amendment 68 #
Draft opinion Paragraph 7 b (new) 7 b. Recalls the right of rectification established in Regulation (EU) 2016/679 (General Data Protection Regulation) and stresses the particular importance of accurate data sets, when these are used to assist administrative decisions; calls on the Commission to examine the benefits of ensuring transparency regarding the individual data included in the particular calculation and an accompanying procedure for rectification.
Amendment 7 #
Draft opinion Recital A a (new) A a. whereas the use of artificial intelligence can represent a paradigm shift in the administration of criminal justice;
Amendment 8 #
Draft opinion Recital A b (new) A b. Whereas according to the report from the Fundamental Rights Agency there is still only limited information currently available on the possible use or testing of facial recognition technologies in Member States [1a]; __________________ [1a] European Union Agency for Fundamental Rights: Facial recognition technology: fundamental rights considerations in the context of law enforcement (FRA Focus), 27 November 2019, https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-facial-recognition-technology-focus-paper-1_en.pdf
Amendment 9 #
Draft opinion Recital A b (new) A b. whereas the use of AI can develop a high potential, but at the same time can also entail considerable risks;
source: 653.820
2020/06/25
JURI
76 amendments...
Amendment 1 #
Draft opinion Recital A A. whereas
Amendment 10 #
Draft opinion Recital A a (new) A a. Whereas AI and related technologies, including their self-learning abilities, always involve a certain level of human intervention;
Amendment 11 #
Draft opinion Recital A a (new) Aa. whereas artificial intelligence has the potential to become a permanent part of criminal law systems;
Amendment 12 #
Draft opinion Recital A b (new) A b. whereas artificial intelligence and related technologies are a priority for the Union, considering the fast-paced advances of the technology sector and the importance of being vigilant about the impact these will have, and are already having, on the unique European intellectual property rights system; whereas a variety of sectors are already implementing the use of artificial intelligence and related technologies, e.g. the robotics, transport and healthcare sectors, to name a few;
Amendment 13 #
Draft opinion Recital B Amendment 14 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related technologies
Amendment 15 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related technologies
Amendment 16 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related
Amendment 17 #
Draft opinion Recital B B. whereas
Amendment 18 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related technologies will contribute to the reducing
Amendment 19 #
Draft opinion Recital B B. whereas in the long term technologies such as artificial intelligence (AI) and related technologies
Amendment 2 #
Draft opinion Recital A A. whereas the right to fair trial is a fundamental right
Amendment 20 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related technologies
Amendment 21 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related technologies
Amendment 22 #
Draft opinion Recital B B. whereas technologies such as artificial intelligence (AI) and related technologies
Amendment 23 #
Draft opinion Recital B a (new) B a. whereas facial recognition software has been increasingly contentious and globally leading developer companies have halted all research over grave concerns related to data protection violations and public safety; whereas facial recognition and similar software may have great potential for assisting the police and other authorities in law enforcement and crime prevention, as well as lessening the administrative burden on criminal justice, but, in light of the manifold implications and cross-sector effects such software has, further discussion is needed; whereas the further development of European data and efforts to diminish the Union's dependency on foreign software developers, foreign data and AI-technology-based services will go a long way towards remedying insufficiencies when it comes to data protection and privacy;
Amendment 24 #
Draft opinion Recital B a (new) B a. whereas these technologies can be used to create statistical anonymized databases that help authorities, academics and legislators to analyse figures and efficiently design policies to prevent criminality and to help offenders to successfully reintegrate into society;
Amendment 25 #
Draft opinion Recital B a (new) Ba. whereas, owing to the intrinsically opaque nature of AI-systems, the new tools used in criminal justice contexts might conflict with some fundamental freedoms;
Amendment 26 #
Draft opinion Recital B b (new) B b. whereas the legal framework of artificial intelligence and its application to criminal law should include legislative actions, where needed, starting with mandatory measures to prevent practices that would undoubtedly undermine fundamental rights and freedoms;
Amendment 27 #
Draft opinion Recital B b (new) Bb. whereas possible risks linked to the application of AI-systems in criminal justice matters need to be prevented and mitigated in order to safeguard the fundamental rights of suspects and accused persons in criminal proceedings;
Amendment 28 #
Draft opinion Paragraph 1 1. Emphasises th
Amendment 29 #
Draft opinion Paragraph 1 1. Emphasises th
Amendment 3 #
Draft opinion Recital A A. whereas the right to fair trial is a fundamental
Amendment 30 #
Draft opinion Paragraph 1 1. Emphasises the importance of considering the ethical and operational implications of the use of AI and related technologies within criminal justice systems; stresses the importance of the human factor, which must always be the final decision-maker, and that the role of AI-technology-based software and applications should be a solely assisting one within the criminal justice system, whether in police enforcement or criminal justice; reiterates that biometric recognition software should only be deployed in clearly warranted situations and not become the standard;
Amendment 31 #
Draft opinion Paragraph 1 1. Emphasises th
Amendment 32 #
Draft opinion Paragraph 1 1. Emphasises the strong importance of considering the ethical and operational implications related to
Amendment 33 #
Draft opinion Paragraph 1 1. Emphasises the high importance of duly assessing the risks and considering all the ethical and operational implications of the use of AI and related
Amendment 34 #
Draft opinion Paragraph 1 1. Emphasises the importance of
Amendment 35 #
Draft opinion Paragraph 1 a (new) 1 a. Stresses the need to work on the most efficient way of reducing bias in AI systems, in line with ethical and non-discrimination standards; underlines that the outputs should be reviewed to avoid all forms of stereotypes, discrimination and biases and, when appropriate, to make use of AI to identify and correct human biases where they may exist; calls on the Commission to encourage and facilitate the sharing of de-biasing strategies for data, in particular in the field of law enforcement in criminal matters; in the light of these risks of bias in AI systems, further calls on the Commission to declare a ban on the use of AI and related technologies for the assistance of judicial systems and of judicial decisions;
Amendment 36 #
Draft opinion Paragraph 1 a (new) 1a. Stresses the need to establish and maintain a balance between the use of AI systems in criminal proceedings and respect for all fundamental rights and procedural guarantees provided for under European and international law
Amendment 37 #
Draft opinion Paragraph 1 a (new) Aa. Emphasises the importance of artificial intelligence being used with due respect for the principles of the rule of law and the independence of the judiciary in the decision-making process;
Amendment 38 #
Draft opinion Paragraph 1 b (new) 1 b. Calls on the Commission to further clarify the rules on the protection and sharing of the data collected through AI and related technologies by authorities empowered to collect and/or process such data, including non-personal and anonymised data that directly or indirectly identify persons, in full respect of the GDPR and of the ePrivacy Directive; further underlines that the right to a fair trial should involve the ability for citizens and litigants to access these data, especially when the latter are collected from their personal devices or equipment, in accordance with the GDPR, but also for the purpose of their right of defence as soon as their legal liability is engaged;
Amendment 39 #
Draft opinion Paragraph 2 Amendment 4 #
Draft opinion Recital A A. whereas the right to fair trial is a fundamental right
Amendment 40 #
Draft opinion Paragraph 2 2. Underlines the
Amendment 41 #
Draft opinion Paragraph 2 2. Underlines the importance
Amendment 42 #
Draft opinion Paragraph 2 2. Underlines the importance of being able to access AI-produced or AI-assisted outputs for notification procedures and the role of AI and related technologies in criminal law enforcement and crime prevention; recalls the importance of questions related to governance, transparency and accountability; further recalls the distinction between the use of AI and related technologies in crime prevention and criminal justice; stresses the subordinate role AI-technologies fulfil at all times;
Amendment 43 #
Draft opinion Paragraph 2 2.
Amendment 44 #
Draft opinion Paragraph 2 2. Underlines the importance of being able to access AI-produced or AI-assisted outputs for notification procedures and the role of AI and related technologies in criminal law enforcement and crime prevention; recalls the importance of questions related to governance, transparency
Amendment 45 #
Draft opinion Paragraph 2 2. Underlines the importance of being able to access AI-produced or AI-assisted outputs for notification procedures and the role of AI and related technologies in criminal law enforcement and crime prevention; recalls, in this regard, the importance of questions related to governance,
Amendment 46 #
Draft opinion Paragraph 2 2. Underlines the importance of being able to access AI-produced or AI-assisted outputs for notification procedures and the role of AI and related technologies in criminal law enforcement and crime prevention; recalls the importance of questions related to governance, transparency and accountability, as well as fundamental rights and procedural guarantees;
Amendment 47 #
Draft opinion Paragraph 2 2. Underlines the importance of
Amendment 48 #
Draft opinion Paragraph 2 2. Underlines the
Amendment 49 #
Draft opinion Paragraph 2 a (new) Amendment 5 #
Draft opinion Recital A A. whereas the right to fair trial is a fundamental right which also applies to enforcement of the law and taking into account that AI-based technologies could also have an impact on different human rights;
Amendment 50 #
Draft opinion Paragraph 2 a (new) 2 a. Underlines the importance of the use of auto-generated data in evidence collection and analysis; recalls that, both in crime prevention and criminal justice, the cause of errors in, or possible misuse of, data input and output analysis, as well as the interpretation thereof, may be rooted in the human factor involved, and calls therefore for a cautious approach when analysing the effectiveness and appropriateness of using AI technologies in all decision-making processes;
Amendment 51 #
Draft opinion Paragraph 2 a (new) 2 a. Calls on all competent public authorities, especially law enforcement authorities like the police and the judiciary, to inform the public and provide sufficient transparency as to their use of AI and related technologies when implementing their powers, especially in criminal law matters, including public access to the source code as well as disclosure of false positive and false negative rates of the technology at hand;
Amendment 52 #
Draft opinion Paragraph 2 a (new) 2a. Considers it vital that the application of AI-systems in the context of criminal proceedings should ensure respect for the fundamental principles of criminal proceedings, including the right to a fair trial, the preservation of the principle of the presumption of innocence and the right to an effective remedy, as well as ensuring monitoring and independent control of automated decision-making systems;
Amendment 53 #
Draft opinion Paragraph 2 a (new) 2 a. Underlines the importance of the human-in-command principle and of verification of AI-produced or AI-assisted outputs; recalls the importance of questions related to governance, transparency, explainability and accountability to ensure respect for fundamental rights and avoid potential faults in the AI;
Amendment 54 #
Draft opinion Paragraph 2 b (new) 2 b. Stresses its cautious approach to the use of biometric recognition software; highlights the ambiguity resulting from an inherent insufficiency when it comes to data protection, as well as infringements of data privacy; notes with concern the amalgamation of personal data on citizens in the European Union by foreign countries, through private-sector developers and providers;
Amendment 55 #
Draft opinion Paragraph 3 Amendment 56 #
Draft opinion Paragraph 3 3. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate use of biometric recognition technology and suggests that the application of such technology must be clearly warranted under existing laws and urges the Commission to assess how to effectively incorporate these; suggests that an ad hoc cross-sectoral advisory group be set up, consisting of representatives of all actors and stakeholders involved, both at EU and national level, to address specifically the issue of facial-recognition software, which is of particular importance in the current context of a global health pandemic; recommends that the Commission undertake a thorough assessment of the impact of biometric recognition software on relevant EU legislation and present proposals for updating existing legislation, where appropriate, to the realities of artificial intelligence and related technologies overall;
Amendment 57 #
Draft opinion Paragraph 3 3. Welcomes the
Amendment 58 #
Draft opinion Paragraph 3 3.
Amendment 59 #
Draft opinion Paragraph 3 3. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate use of biometric recognition technology
Amendment 6 #
Draft opinion Recital A A. whereas the right to fair trial is a fundamental right which
Amendment 60 #
Draft opinion Paragraph 3 3. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate use of biometric recognition technology and suggests that the application of such technology must be clearly warranted under existing laws and urges the Commission to assess how to effectively incorporate these, with particular regard to the right to privacy and protection of personal data;
Amendment 61 #
Draft opinion Paragraph 3 3. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate use of biometric recognition technology, in compliance with GDPR legislation on the protection of personal data, and suggests that the application of such technology must be clearly warranted under existing laws and urges the Commission to assess how to effectively incorporate these;
Amendment 62 #
Draft opinion Paragraph 3 3. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate, considerate and risk-based use of biometric recognition technology and suggests that the application of such technology must be clearly warranted under existing laws and
Amendment 63 #
Draft opinion Paragraph 3 a (new) 3 a. Calls on the Commission to implement, through legislative and non- legislative means and if necessary through infringement proceedings, a ban on any biometric processing of personal data for law enforcement purposes that leads to mass surveillance in public spaces; Calls on the Commission to stop funding biometric research or deployment which could contribute to mass surveillance in public spaces;
Amendment 64 #
Draft opinion Paragraph 3 a (new) 3 a. Strongly believes that decisions issued by AI or related technologies, especially in the areas of justice and law enforcement, that have a direct and significant impact on the rights and obligations of natural or legal persons, should be subject to strict human verification and due process;
Amendment 65 #
Draft opinion Paragraph 3 a (new) 3a. Emphasises the need to draw up strict rules to govern the use of facial recognition technologies in connection with criminal matters; suggests that a recommendation be issued banning their use temporarily pending the drafting of those rules;
Amendment 66 #
Draft opinion Paragraph 4 Amendment 67 #
Draft opinion Paragraph 4 4. Considers it necessary to clarify
Amendment 68 #
Draft opinion Paragraph 4 4.
Amendment 69 #
Draft opinion Paragraph 4 4. Considers it necessary to clarify whether law enforcement decisions can be delegated to AI and stresses the need to develop codes of conduct for the design and use of AI to help law enforcers and
Amendment 7 #
Draft opinion Recital A A. whereas the right to fair trial is a fundamental right which also applies to enforcement of the law and at all times;
Amendment 70 #
Draft opinion Paragraph 4 4. Considers it necessary to
Amendment 71 #
Draft opinion Paragraph 4 4. Considers it necessary to clarify whether it is expedient for law enforcement decisions
Amendment 72 #
Draft opinion Paragraph 4 4. Considers
Amendment 73 #
Draft opinion Paragraph 4 4. Considers it necessary to clarify
Amendment 74 #
Draft opinion Paragraph 4 4.
Amendment 75 #
Draft opinion Paragraph 4 a (new) 4 a. Considers it of importance that the Commission undertake an overall assessment of the state of play throughout the Union with regard to the technical infrastructure and resources available for the effective introduction of AI-based technologies in their national frameworks, in accordance with the governing laws; highlights further the need to assess the levels of training and awareness, with regard to regional and national specificities; calls on the Commission not only to undertake such an in-depth assessment and come forward with proposals to support Member States in their efforts, but also to come forward with proposals to update the existing legislative framework with regard to data protection and other relevant EU legislation, to reflect the realities of AI and related technologies;
Amendment 76 #
Draft opinion Paragraph 4 a (new) 4 a. In the light of all the risks exposed, as long as fundamental rights and freedoms, as well as human review are not fully guaranteed, calls on the Commission to declare a ban on the use of AI and related technologies for the assistance of judicial systems and of judicial decisions;
Amendment 8 #
Draft opinion Recital A A. whereas the right to fair trial is
Amendment 9 #
Draft opinion Recital A a (new) A a. whereas the protection of personal data, in accordance with the General Data Protection Regulation and other relevant legislation where applicable, applies at all times;
source: 653.890
2020/07/20
LIBE
204 amendments...
Amendment 1 #
Motion for a resolution Citation 1 — having regard to the Treaty on European Union, in particular Articles 2 and 6, and the Treaty on the Functioning of the European Union, in particular Article 16,
Amendment 10 #
Motion for a resolution Citation 12 a (new) - having regard to the Council of Europe’s European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, adopted on 4 December 2018,
Amendment 100 #
Motion for a resolution Paragraph 3 3. Considers, in this regard, that any AI tool either developed or used by law enforcement or judiciary should, as a minimum, be safe, secure and fit for purpose, respect the principles of fairness, accountability, transparency and explainability, with their deployment subject to a strict necessity and proportionality test; highlights that trust among citizens in the use of AI developed and used in the EU is conditional upon the full fulfilment of these criteria;
Amendment 101 #
Motion for a resolution Paragraph 3 3. Considers, in this regard, that any AI tool either developed or used by law enforcement or judiciary should, as a minimum, be safe, secure and fit for purpose, respect the principles of fairness, accountability, transparency and explainability, with their deployment
Amendment 102 #
Motion for a resolution Paragraph 3 3. Considers, in this regard, that any AI tool either developed or used by law enforcement or judiciary should, as a minimum, be safe, secure and fit for purpose, respect the principles of data minimisation, fairness, accountability, transparency and explainability, with their de
Amendment 103 #
Motion for a resolution Paragraph 3 3. Considers, in this regard, that any AI tool either developed or used by law enforcement or judiciary should, as a minimum, be safe, secure and fit for purpose, respect the principles of fairness, accountability, transparency and explainability, with their deployment subject to a risk assessment and a strict necessity and proportionality test;
Amendment 104 #
Motion for a resolution Paragraph 3 a (new) 3 a. Underlines that any decision about a natural person that is based solely on automated processing, including profiling, and which produces an adverse legal effect concerning the data subject or significantly affects him or her, is prohibited under Union law, unless authorised by Union or Member State law which at least provides for the right to obtain human intervention; reminds that decisions in the field of law enforcement are almost always decisions that have a legal effect on the person affected, due to the executive nature of law enforcement authorities and their actions; calls on the Commission, the European Data Protection Board and other independent supervisory authorities to propose legislation or at least issue guidelines, recommendations and best practices in order to further specify the criteria and conditions for decisions based on profiling and the use of AI for law enforcement purposes;
Amendment 105 #
Motion for a resolution Paragraph 3 a (new) 3a. Stresses the need for AI and related technologies to be used within criminal justice systems in compliance with rigorous ethical principles, such as those laid down in the Council of Europe's 'European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment', i.e. the principles of respect for fundamental rights, non-discrimination, quality and security, transparency, impartiality and fairness, and user control;
Amendment 106 #
Motion for a resolution Paragraph 3 a (new) 3 a. Underlines the right of the parties to access the data collection process and that relating to prognostic assessments useful for crime prevention police, to the cataloguing and evaluation of criminal evidence and to preventive assessments of whether a suspect might be a danger to society, to the risk of recidivism and the output produced or obtained through AI for notification procedures, as well as the role of AI and related technologies in criminal law enforcement and crime prevention;
Amendment 107 #
Motion for a resolution Paragraph 3 a (new) 3 a. Stresses that the use of AI in this field poses risks for human rights - namely privacy, data protection, and fair trial - and that in the future it may pose further risks that are still unknown; calls for the precautionary principle to be at the heart of any legal frameworks on AI;
Amendment 108 #
Motion for a resolution Paragraph 3 a (new) 3a. Emphasises the need to draw up strict rules to govern the use of facial recognition technologies in connection with criminal matters; suggests that a recommendation be issued banning their use temporarily pending the drafting of those rules;
Amendment 109 #
Motion for a resolution Paragraph 3 b (new) 3 b. Considers it essential, both for the effectiveness of the exercise of defence rights and for the transparency of national criminal justice systems, that a specific, clear and precise legal framework regulates the conditions, modalities and consequences of the use of AI tools in this field, as well as the rights of targeted persons, including possibilities to seek legal remedy; stresses that in the absence of such a legal framework, AI should not be used in this arena;
Amendment 11 #
Motion for a resolution Citation 12 b (new) - having regard to the report of the LIBE mission to the United States in February 2020;
Amendment 110 #
Motion for a resolution Paragraph 3 c (new) 3 c. Urges executing authorities, when deciding on a request of extradition (or surrender) to another Member State or third country, to assess whether the use of AI tools in the requesting (or issuing) country might compromise the essence of the fundamental right to a fair trial; Considers that the first step of such an assessment should be conducted ‘on the basis of material that is objective, reliable, specific and properly updated concerning the operation of the system of justice in the issuing Member State’ (C-216/18 PPU, §61); calls on the Commission to publish updated information concerning the use of AI in the Member States’ judicial and law enforcement systems, and to issue guidelines on how to conduct such an assessment in the context of judicial cooperation in criminal matters;
Amendment 111 #
Motion for a resolution Paragraph 4 4.
Amendment 112 #
Motion for a resolution Paragraph 4 4. Highlights the importance of preventing mass surveillance
Amendment 113 #
Motion for a resolution Paragraph 4 4. Highlights the importance of preventing mass surveillance by means of AI technologies, and of banning applications that would result in it; calls on the Commission and Member States not to follow China and the United States as regards the development of mass surveillance technologies, but to demonstrate that applications of AI technologies in the EU can only be deployed in full respect of fundamental rights;
Amendment 114 #
Motion for a resolution Paragraph 4 4. Highlights the importance of preventing mass surveillance by means of AI technologies, and of banning applications that would result in it; Reminds that individuals not only have the right to be correctly identified, but they also have the right not to be identified at all, unless it is required by law for compelling and legitimate public interests;
Amendment 115 #
Motion for a resolution Paragraph 4 4. Sees with great concern the potential of mass surveillance by means of AI technologies in the law enforcement sector; Highlights the imp
Amendment 116 #
Motion for a resolution Paragraph 4 4. Highlights the importance of preventing mass surveillance by means of AI technologies, in particular face recognition, and of banning applications that would result in it;
Amendment 117 #
Motion for a resolution Paragraph 4 a (new) 4 a. Suggests that special attention should be paid to the technological advancement of drones used in police and military operations; urges the Commission to create a code of conduct on their use, considering the great damage they could cause to human capital if weaponised in the future;
Amendment 118 #
Motion for a resolution Paragraph 4 a (new) 4 a. Stresses that technology can be repurposed and calls for strict democratic control and oversight for any AI-enabled technology in use by public authorities that can be repurposed for mass surveillance or mass profiling;
Amendment 119 #
Motion for a resolution Paragraph 4 b (new) Amendment 12 #
Motion for a resolution Citation 12 c (new) - having regard to its Resolution of 19 June 2020 on the anti-racism protests following the death of George Floyd1a; _________________ 1a P9_TA(2020)0173
Amendment 120 #
Motion for a resolution Paragraph 5 5. Stresses the potential for bias and discrimination arising from the use of machine learning and AI applications; notes that biases can be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real world settings; reminds of its Resolution of 19 June 2020 on the anti-racism protests following the death of George Floyd; points out that wide-spread real-world racism in the police forces is still prevalent; underlines that such racism will inevitably lead to racist bias in AI-generated findings, scores, and recommendations; therefore reiterates its call on Member States to promote anti-discrimination policies in all areas and to develop national action plans against racism, including in policing and in the justice system, in close cooperation with civil society and the communities concerned, and to step up measures to increase diversity within police forces and to establish frameworks for dialogue and cooperation between police and communities;
Amendment 121 #
Motion for a resolution Paragraph 5 5. Stresses the potential for bias and discrimination arising from the use of machine learning and AI applications; notes that biases can be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real world settings; underlines that any software, algorithm or data used or produced by artificial intelligence, robotics and related technologies developed, deployed or used in the Union shall protect the human rights of individuals against violations by AI actors throughout AI systems’ entire lifecycle. A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data gathering process;
Amendment 122 #
Motion for a resolution Paragraph 5 5. Stresses the potential for bias and discrimination arising from the use of machine learning and AI applications; notes that biases can be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are
Amendment 123 #
Motion for a resolution Paragraph 5 5. Stresses the potential for bias and discrimination arising from the use of machine learning and AI applications; notes that biases can be inherent in underlying datasets, especially when historical data is being used, introduced by the developers of the algorithms, or generated when the systems are implemented in real world settings; cautions about similar potential biases in the algorithms of AI systems; stresses that it is imperative that AI use by the police and judicial authorities in criminal matters does not become a factor of inequality, social fracture, or exclusion;
Amendment 124 #
Motion for a resolution Paragraph 5 5. Stresses the potential for bias and discrimination arising from the use of
Amendment 125 #
Motion for a resolution Paragraph 5 a (new) 5 a. Points out that several cities in the United States have ended their predictive policing systems, after audits of programs that attempted to predict individuals’ behaviors in Chicago and Los Angeles proved their discriminatory impact and practical failure; points out that place-based predictive systems have been shut down in Los Angeles and other cities that initially had adopted the technology; reminds that during the LIBE Committee’s mission to the United States in February 2020, Members were informed by the police departments of New York City and Cambridge/Mass that they had phased out their predictive policing programmes due to a lack of effectiveness and had turned to community policing instead; reminds that this led to a decline in crime rates;
Amendment 126 #
Motion for a resolution Paragraph 5 a (new) 5 a. Stresses that the developer or deployer shall carry out ethical impact assessments of AI systems that have the potential to cause harm in the form of bias, discrimination and privacy violations. These assessments shall envision possible moral risks related to the implementation of AI/machine learning (ML), consider all possible ethical risks that could result from the AI/ML application in question, and shall be publicly released. It is also proposed that all public and government organisations using AI systems be required to conduct an ethical technology assessment prior to deployment of the AI system;
Amendment 127 #
Motion for a resolution Paragraph 5 a (new) 5 a. Stresses that biases inherent in underlying datasets are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to minority ethnic groups or racialized communities; considers that such an effect is unacceptable in particular in the area of law enforcement;
Amendment 128 #
Motion for a resolution Paragraph 5 b (new) 5 b. Stresses that the datasets and algorithmic systems used when making classifications, assessments and predictions at the different stages of data processing in the development of AI and related technologies may also result in differential treatment of and indirect discrimination against groups of people with similar characteristics; calls for a rigorous examination of AI’s classification practices and harms; emphasises that AI technologies require that the field centre non-technical disciplines whose work traditionally examines such issues, including science and technology studies, critical race studies, disability studies, and other disciplines attuned to social context, including how difference is constructed, the work of classification, and its consequences; stresses the need therefore to systematically invest in integrating these disciplines into AI study and research at all levels;
Amendment 129 #
Motion for a resolution Paragraph 5 c (new) 5 c. Notes that the field of AI is strikingly homogeneous and lacking in diversity, with minority ethnic groups and other marginalised groups in particular underrepresented; stresses the need to ensure that the teams that design, develop, test, maintain, deploy and procure these systems reflect the diversity of their uses and of society in general, as a non-technical means of reducing the risks of increased discrimination;
Amendment 13 #
Motion for a resolution Citation 12 d (new) - having regard to its Resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law-enforcement1a; _________________ 1a P8_TA(2017)0076
Amendment 130 #
Motion for a resolution Paragraph 6 6. Underlines the fact that many algorithmically driven identification technologies that are currently in use disproportionately misidentify
Amendment 131 #
Motion for a resolution Paragraph 6 6
Amendment 132 #
Motion for a resolution Paragraph 6 6. Underlines the fact that many algorithmically driven identification technologies disproportionately misidentify non-white people, (minority) ethnic communities, LGBTI people, migrants, children, the elderly, as well as women;
Amendment 133 #
Motion for a resolution Paragraph 6 a (new) 6 a. Stresses that data used to train predictive policing algorithms reflect ongoing surveillance priorities and that, as a consequence, AI predictions based on characteristics of a specific group of persons end up in amplifying and reproducing existing forms of discrimination and racial domination;
Amendment 134 #
Motion for a resolution Paragraph 6 a (new) 6 a. Calls for strong additional safeguards in case AI systems in law enforcement or the judiciary are used on or in relation to minors, who are particularly vulnerable;
Amendment 135 #
Motion for a resolution Paragraph 7 Amendment 136 #
Motion for a resolution Paragraph 7 Amendment 137 #
Motion for a resolution Paragraph 7 Amendment 138 #
Motion for a resolution Paragraph 7 7. Highlights the power asymmetry between those who develop and employ AI technologies and those who interact and are subject to them; considers it therefore essential also to provide for a rule that ensures the transparency of the corporate structures of companies that produce and manage AI systems, and to institutionalise the principle of the independence of the programmers, since it is they who determine not only the selection of the data and information to be processed at the basis of the algorithms, but also the assessment criteria that inform and produce a decision;
Amendment 139 #
7. Highlights the power asymmetry between those who develop and employ AI technologies and those who interact and are subject to them - an asymmetry which means that binding codes of conduct must be developed for the design and use of AI in criminal matters;
Amendment 14 #
Motion for a resolution Recital A A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise, but unfortunately there is growing evidence of a sharp divide between promises and practices; whereas AI is one of the strategic technologies of the 21st century, potentially generating substantial benefits in efficiency, accuracy, and convenience, and thus bringing positive change to the
Amendment 140 #
Motion for a resolution Paragraph 7 7. Highlights the power asymmetry between those who develop and employ AI technologies and those who interact and are subject to them; stresses the impact on defence rights and the burdensome or even impossible tasks for persons under investigation to challenge the results of AI tools;
Amendment 141 #
Motion for a resolution Paragraph 8 8.
Amendment 142 #
Motion for a resolution Paragraph 8 a (new) 8 a. Stresses the importance of ensuring that AI-weaponised products produced in the EU have advanced software security provisions in accordance with the "security by design" approach, which would render them difficult for third parties or terrorists to hack and would allow specific human oversight before they operate in the event of their being hacked and activated by an unknown source;
Amendment 143 #
Motion for a resolution Paragraph 8 a (new) 8 a. Stresses that only robust European AI governance enables the necessary operationalisation of fundamental rights principles;
Amendment 144 #
Motion for a resolution Paragraph 9 9. Considers it necessary to create a clear and fair regime for assigning legal responsibility for the potential adverse consequences produced by these advanced digital technologies; underlines however that the first and foremost aim must be to prevent any such consequences from materialising in the first place; calls for the consequent application of the precautionary principle to all applications of AI in the law enforcement context;
Amendment 145 #
Motion for a resolution Paragraph 9 9. Considers it necessary to create a clear and fair regime for assigning legal responsibility for the potential adverse consequences produced by these advanced digital technologies; recognises the challenge of correctly locating the responsibility for potential harm, given the complexity of the development and operation of AI systems;
Amendment 146 #
Motion for a resolution Paragraph 9 9. Considers it necessary to create a clear and fair regime for assigning legal responsibility for the potential adverse consequences produced by these advanced digital technologies; considers it imperative for this regime to always identify a responsible person for decisions taken with the support of AI;
Amendment 147 #
Motion for a resolution Paragraph 9 9. Considers it necessary to create a clear and fair regime for assigning legal responsibility and legal liability for the potential adverse consequences produced by these advanced digital technologies; underlines that legal responsibility and liability must always rest with a natural or legal person;
Amendment 148 #
9. Considers it necessary to create a clear and fair regime for assigning legal responsibility and liability for the potential adverse consequences produced by these advanced digital technologies;
Amendment 149 #
Motion for a resolution Paragraph 9 a (new) 9 a. Highlights how individuals have become overly trusting of the seemingly objective and scientific nature of AI tools and thus fail to consider the possibility of their results being incorrect, incomplete or irrelevant, with potentially grave adverse consequences specifically in the area of law enforcement and justice; emphasises the over-reliance on the results provided by AI systems, and notes with concern the lack of confidence and knowledge among authorities to question or override an algorithmic recommendation;
Amendment 15 #
Motion for a resolution Recital A A. whereas digital technologies in
Amendment 150 #
Motion for a resolution Paragraph 9 a (new) Amendment 151 #
Motion for a resolution Paragraph 9 a (new) 9 a. Calls for the adoption of appropriate procurement processes for AI systems by Member States and EU agencies when used in law enforcement or judicial context, so as to ensure their compliance with fundamental rights;
Amendment 152 #
Motion for a resolution Paragraph 9 a (new) 9 a. Stresses that no AI system should be enabled to harm the physical integrity of human beings, nor to distribute rights or to impose legal obligations on individuals;
Amendment 153 #
Motion for a resolution Paragraph 10 10. Underlines that in judicial and law enforcement contexts, the final decision always needs to be taken by a human, who can be held accountable for the decisions made, and include the possibility of a recourse for a remedy; reminds that under EU law, automated individual decision making shall not be based on special categories of personal data (personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation), unless suitable measures to safeguard the data subject's rights and freedoms and legitimate interests are in place; highlights that EU law prohibits profiling that results in discrimination against natural persons on the basis of special categories of personal data;
Amendment 154 #
Motion for a resolution Paragraph 10 10. Underlines that in
Amendment 155 #
Motion for a resolution Paragraph 10 10. Underlines that in judicial and law enforcement contexts, the final decision always needs to be taken by a human, who can be held accountable for the decisions made, and include the possibility of a recourse for a remedy; refers, in this regard, to Article 22 of the General Data Protection Regulation which stipulates that a person has the right not to be subject to a decision which produces legal effects concerning him or her or significantly affects him or her and is based solely on automated data processing designed to evaluate certain aspects of that person's personality;
Amendment 156 #
Motion for a resolution Paragraph 10 10. Underlines that in judicial and law enforcement contexts, the final decision always needs to be taken by a human, who can be held accountable for the decisions made, and include the possibility of a recourse for a remedy; considers it necessary to prevent algorithms – so-called automated decision systems – from replacing human minds in final decisions, in order to avoid deterministic approaches and to ensure the formation of the free judgment of judicial authorities, whose decisions must always be justifiable, responsible and free of prejudice;
Amendment 157 #
Motion for a resolution Paragraph 10 10.
Amendment 158 #
Motion for a resolution Paragraph 10 10. Underlines that in judicial and law enforcement contexts,
Amendment 159 #
Motion for a resolution Paragraph 10 a (new) 10 a. Highlights that adequate accountability, responsibility, and liability require significant specialised training with regard to the ethical provisions, potential dangers, limitations, and proper use of AI technology, especially for police and judiciary personnel; suggests that sufficient resources be allocated to a European Agency (such as CEPOL) to accommodate such training;
Amendment 16 #
Motion for a resolution Recital A A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise
Amendment 160 #
Motion for a resolution Paragraph 11 11. Calls for algorithmic explainability and transparency in order to ensure that the development, deployment and use of AI systems for judiciary and law enforcement comply with fundamental rights, and are trusted by citizens, as well as in order to ensure that results generated by AI algorithms can be rendered intelligible to users and to those subject to these systems, and that there is transparency on the source data and how the system arrived at a certain conclusion; on that note, it is necessary to develop specific mandatory rules of conduct for public and private entities responsible for the design and use of AI, to ensure that they adhere to the principles of transparency and clarity relating to the processes for developing mathematical models and predictive algorithms, while complying with the requirement for independent verification of the quality and reliability of the results achieved, in terms of acquiring and assessing evidence - especially circumstantial evidence - beyond all reasonable doubt;
Amendment 161 #
Motion for a resolution Paragraph 11 11. Calls for algorithmic explainability and transparency in order to ensure that the
Amendment 162 #
Motion for a resolution Paragraph 11 11. Calls for
Amendment 163 #
Motion for a resolution Paragraph 11 11. Calls for algorithmic explainability and transparency as a necessary part of oversight in order to ensure that the development, deployment and use of AI systems for judiciary and law enforcement
Amendment 164 #
Motion for a resolution Paragraph 11 a (new) 11 a. Calls, in order to guarantee the algorithmic explainability and transparency of law enforcement AI systems, for only such tools to be allowed to be purchased by law enforcement in the Union whose algorithms and logic are open, at least to the police forces themselves, so that they can be audited, evaluated and vetted by them, and are not closed and labelled proprietary by the vendors;
Amendment 165 #
Motion for a resolution Paragraph 11 a (new) 11 a. Calls for proactive and full transparency on private companies developing and deploying AI systems for law enforcement purposes;
Amendment 166 #
Motion for a resolution Paragraph 11 b (new) 11 b. Considers that the use and collection of any biometric data for remote identification purposes, for example by conducting facial recognition in public places, as well as at automatic border control gates used for border checks at airports, may pose specific risks to fundamental rights, the implications of which could vary considerably depending on the purpose, context and scope of use;
Amendment 167 #
Motion for a resolution Paragraph 12 12. Calls for traceability of
Amendment 168 #
Motion for a resolution Paragraph 12 a (new) 12 a. Calls for clear and appropriate time limits to be established for the erasure of personal data, or for a periodic review of the need for the storage of personal data processed or generated by AI technologies for law enforcement purposes;
Amendment 169 #
Motion for a resolution Paragraph 13 13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI systems for law enforcement or judiciary, in order to assess any potential risks to fundamental rights; underlines that the expertise of data protection authorities and fundamental rights agencies is essential in assessing the systems; stresses that these impact assessments should be conducted as openly as possible and with the active engagement of affected individuals and groups, and that these assessments should be made publicly available before the deployment of these systems;
Amendment 17 #
Motion for a resolution Recital A A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise but also a number of critical issues in view of their ethical implications, not to mention their potential impact on several fundamental freedoms; whereas AI is one of the strategic technologies of the 21st century
Amendment 170 #
Motion for a resolution Paragraph 13 13.
Amendment 171 #
Motion for a resolution Paragraph 13 13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment
Amendment 172 #
Motion for a resolution Paragraph 13 13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI systems for law enforcement or judiciary purposes, in order to assess any potential risks to fundamental rights and, where necessary, define appropriate safeguards to address these risks;
Amendment 173 #
Motion for a resolution Paragraph 13 13. Calls for a compulsory fundamental rights impact assessment to be conducted prior to the implementation or deployment of any AI systems for law enforcement or judiciary, in order to assess any potential risks to fundamental rights; Calls for an obligation to make the results of such impact assessments public;
Amendment 174 #
Motion for a resolution Paragraph 13 a (new) 13 a. Deplores that many law enforcement and judicial authorities in the EU lack the funding, capacities and capabilities to reap the benefits AI tools can offer for their work; encourages law enforcement and judicial authorities to identify, structure and categorise their needs to enable the development of tailor-made AI solutions and to exchange best practices on AI deployment; stresses the need to provide the authorities with the necessary funding, as well as to equip them with the necessary expertise to guarantee full compliance with the ethical, legal and technical requirements attached to AI deployment;
Amendment 175 #
Motion for a resolution Paragraph 13 b (new) Amendment 176 #
Motion for a resolution Paragraph 14 14. Calls for
Amendment 177 #
Motion for a resolution Paragraph 14 14. Calls for periodic mandatory auditing of all AI systems used by law enforcement and the judiciary to test and evaluate algorithmic systems once they are in operation, in order to detect, investigate, diagnose and rectify any unwanted and adverse effects; underlines that the results of these audits should be made available in public registers, so that citizens know whether AI systems are being deployed and which measures are taken to remedy the violation of fundamental rights;
Amendment 178 #
Motion for a resolution Paragraph 14 14. Calls for periodic mandatory auditing and testing of all AI systems used by law enforcement and the judiciary to
Amendment 179 #
Motion for a resolution Paragraph 14 a (new) 14 a. Supports the recommendations of the Commission’s High-Level Expert Group on AI for a ban on AI-enabled mass scale scoring of individuals; considers that any form of normative citizen scoring on a large scale by public authorities, in particular within the field of law enforcement and the judiciary, leads to the loss of autonomy, endangers the principle of non-discrimination and cannot be considered in line with European values;
Amendment 18 #
Motion for a resolution Recital A A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise; whereas AI is one of the strategic technologies of the 21st century, generating substantial benefits in efficiency, accuracy, and convenience, and thus bringing positive change to the European economy and improving the security and safety of European citizens; whereas AI should not be seen as an end in itself, but as a tool for serving people, with the ultimate aim of increasing human well- being;
Amendment 180 #
Motion for a resolution Paragraph 14 a (new) 14 a. Opposes the use of AI by law enforcement authorities to make behavioural predictions for individuals or groups on the basis of past behaviour or group membership, such as predictive policing technologies, which attempt to identify people who are likely to commit a crime by analysing factors such as past arrests or group membership;
Amendment 181 #
Motion for a resolution Paragraph 14 b (new) 14 b. Welcomes the recommendations of the Commission’s High-Level Expert Group on AI for a proportionate use of biometric recognition technology; shares the view that the use of remote biometric identification should always be considered “high risk” and therefore be subject to additional requirements;
Amendment 182 #
Motion for a resolution Paragraph 15 15.
Amendment 183 #
Motion for a resolution Paragraph 15 15.
Amendment 184 #
Motion for a resolution Paragraph 15 15. Calls for a permanent prohibition on the use of facial recognition systems in the public space and in premises meant for education and (health) care and a moratorium on the deployment of facial recognition systems for law enforcement in semi-public spaces, such as airports, until the technical standards can be considered fully fundamental rights compliant, results derived are non-discriminatory, and there is public trust in the necessity and proportionality for the deployment of such technologies;
Amendment 185 #
Motion for a resolution Paragraph 15 15. Calls for a moratorium on the deployment of facial recognition systems for specific law enforcement operations, until the technical
Amendment 186 #
Motion for a resolution Paragraph 15 15. Calls for a moratorium on the deployment of facial recognition systems for law enforcement, until the technical standards can be considered fully fundamental rights compliant, results derived are non-
Amendment 187 #
Motion for a resolution Paragraph 15 15. Calls for a
Amendment 188 #
Motion for a resolution Paragraph 15 15.
Amendment 189 #
Motion for a resolution Paragraph 15 a (new) 15 a. Reminds that in Europe lie detector tests are generally not considered reliable evidence and their use is often forbidden as they have a detrimental impact on self-determination; stresses the contested scientific validity of affect recognition technology, such as cameras detecting eye-movements and changes in pupil size to flag potential deception, and calls for a ban on its use in the law enforcement and criminal justice field, as well as in border control;
Amendment 19 #
Motion for a resolution Recital A A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise; whereas AI is one of the strategic technologies of the 21st century, with the potential to generat
Amendment 190 #
Motion for a resolution Paragraph 15 a (new) 15 a. Notes that predictive policing is among the AI applications used in the area of law enforcement; acknowledges that this can allow law enforcement to work more effectively and proactively, but warns that while predictive policing can analyse the necessary data sets for the identification of patterns and correlations, it cannot answer the question of causality and therefore cannot constitute the sole basis for an intervention;
Amendment 191 #
Motion for a resolution Paragraph 15 b (new) 15 b. Calls for a ban on uses of AI to make behavioural predictions with significant effect on people based on past behaviour, group membership, or any other characteristics, such as predictive policing;
Amendment 192 #
Motion for a resolution Paragraph 16 16. Calls for greater overall transparency from Member States, and for a comprehensive understanding of the use of AI applications in the Union, broken down by Member State law enforcement and judicial authority, the type of tool in use, the types of crime they are applied to, and the companies whose tools are being used; calls on all competent public authorities, especially law enforcement authorities like the police and the judiciary, to inform the public and provide sufficient transparency as to their use of AI and related technologies when implementing their powers, especially in criminal law matters, including public access to the source code as well as disclosure of false positive and false negative rates of the technology at hand;
Amendment 193 #
Motion for a resolution Paragraph 16 16. Calls for greater overall transparency from Member States, and for a comprehensive understanding of the use of AI applications in the Union, broken down by Member State law enforcement
Amendment 194 #
Motion for a resolution Paragraph 16 16. Calls for greater overall transparency
Amendment 195 #
Motion for a resolution Paragraph 16 a (new) 16 a. Expresses its great concern on the use of private facial recognition databases by law enforcement actors and intelligence services, such as Clearview AI, a database of more than three billion pictures that have been collected from social media and other websites, including from EU citizens; calls on Member States to oblige law enforcement actors to disclose whether they are using Clearview AI technology; recalls the opinion of the European Data Protection Board that the use of a service such as Clearview AI by law enforcement authorities in the European Union would "likely not be consistent with the EU data protection regime"; calls on the Commission to ban the use of private facial recognition databases in law enforcement.
Amendment 196 #
Motion for a resolution Paragraph 16 a (new) 16 a. Stresses the importance of independent evaluation of the functioning of AI in practice; urges EU and national authorities to invest in independent empirical research, in particular concerning the influence of AI-based legal decisions affecting individuals’ position; notes that, without such an independent evaluation, it is impossible to have a fully informed democratic debate on the necessity and proportionality of AI in the criminal justice field;
Amendment 197 #
Motion for a resolution Paragraph 16 a (new) 16 a. Reminds that AI applications, including applications used in the context of law enforcement and the judiciary, are being developed globally at a rapid pace; urges all European stakeholders, including the Commission and EU agencies, to ensure international cooperation and to engage third country partners in order to find a common and complementary ethical framework for the use of AI, in particular for law enforcement and the judiciary;
Amendment 198 #
Motion for a resolution Paragraph 16 a (new) 16 a. Calls for the Fundamental Rights Agency, in collaboration with the European Data Protection Board and the European Data Protection Supervisor, to draft comprehensive guidelines for the development, use and deployment of AI applications and solutions for use by law enforcement and judicial authorities;
Amendment 199 #
Motion for a resolution Paragraph 17 Amendment 2 #
Motion for a resolution Citation 2 — having regard to the Charter of Fundamental Rights of the European Union, in particular Article 6, Article 7, Article 8, Article 11, Article 12, Article 13, Article 20, Article 21, Article 24, and Article 47 thereof;
Amendment 20 #
Motion for a resolution Recital A A. whereas digital technologies in general and artificial intelligence (AI) in particular bring with them extraordinary promise; whereas AI
Amendment 200 #
Motion for a resolution Paragraph 17 a (new) 17 a. Expresses its strong concern over research projects financed under Horizon 2020 that deploy artificial intelligence on external borders, such as the iBorderCtrl project, a "smart lie-detection system" profiling travellers on the basis of a computer-automated interview taken by the traveller’s webcam before the trip, and an artificial-intelligence-based analysis of 38 microgestures, tested in Hungary, Latvia and Greece; calls on the Commission to stop funding for biometric processing programmes that could result in mass surveillance;
Amendment 201 #
Motion for a resolution Paragraph 17 a (new) 17a. Warns against the temptation to delegate to AI the power to take decisions under criminal law, and stresses the need to develop codes of conduct for the design and use of AI to help law enforcers and judicial authorities;
Amendment 202 #
Motion for a resolution Paragraph 17 b (new) 17 b. Urges the Commission to put forward a legislative proposal to replace the Data Protection Law Enforcement Directive by a Regulation, in order to better protect citizens’ fundamental rights in cross-border law enforcement cooperation;
Amendment 203 #
Motion for a resolution Paragraph 17 b (new) 17b. refers to the ongoing work in the Committee on Legal Affairs
Amendment 204 #
Motion for a resolution Paragraph 17 c (new) 17 c. Takes note of the Commission's feasibility study on possible changes to the Prüm Decision, including facial recognition; agrees with several Member States that the tight timetable of this study might have an impact on the Prüm architecture and that the availability of biometric data complementary to the Prüm set has not been scientifically assessed prior to this feasibility study; takes note of earlier research that no potential new identifiers, e.g. iris or facial recognition, would be as reliable in a forensic context as DNA or fingerprints; reminds the Commission that any legislative proposal must be evidence-based and respect the principle of proportionality; urges the Commission to extend the Prüm Decision framework only if there is solid scientific evidence of the reliability of facial recognition in a forensic context compared to DNA or fingerprints, after it has conducted a full impact assessment, and after the recommendations of the EDPS and EDPB have been taken into account;
Amendment 21 #
Motion for a resolution Recital A a (new) A a. whereas AI can be seen as the ability of a system to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation; whereas the key components of development in AI are the availability of vast quantities of data, computing power, and human capital and talent;
Amendment 22 #
Motion for a resolution Recital A a (new) A a. whereas the increasing use of AI in the criminal law field is based on the promises that it would reduce crime and would lead to more objective decisions; whereas, however, experience has shown that there are several reasons not to believe in such promises;
Amendment 23 #
Motion for a resolution Recital A a (new) Aa. whereas the right to a fair trial is a fundamental right which must be upheld in all circumstances, including in the context of the use of AI;
Amendment 24 #
Motion for a resolution Recital A b (new) A b. whereas, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring understanding of context or critical analysis; whereas some AI applications have attained the performance levels of human experts and professionals in performing certain specific tasks, and can provide results at a completely different speed and scale;
Amendment 25 #
Motion for a resolution Recital A b (new) Ab. whereas, through the use of statistical data analytics in crime analysis and prevention, technologies such as artificial intelligence (AI) and related technologies may contribute to reducing crime rates;
Amendment 26 #
Motion for a resolution Recital A c (new) A c. whereas several Member States use the application of embedded artificial intelligence (AI) systems in the field of law enforcement;
Amendment 27 #
Motion for a resolution Recital B B. whereas
Amendment 28 #
Motion for a resolution Recital B B. whereas the
Amendment 29 #
Motion for a resolution Recital B B. whereas the development of AI must respect EU law, as well as the values on which the Union is founded, in particular human dignity, freedom, democracy, equality, the rule of law, and human and fundamental rights;
Amendment 3 #
Motion for a resolution Citation 2 — having regard to the Charter of Fundamental Rights of the European
Amendment 30 #
Motion for a resolution Recital B a (new) Ba. whereas the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment, adopted by the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe, lays down some fundamental guidelines to which public and private entities responsible for the design and development of AI tools and services should adhere; whereas, in particular, the Ethical Charter consists of the following principles: 1) principle of respect for fundamental rights; 2) principle of non- discrimination; 3) principle of quality and security; 4) principle of transparency, impartiality and fairness; 5) principle 'under user control';
Amendment 31 #
Motion for a resolution Recital B a (new) B a. whereas the use of AI technology should be developed in such a way as to put people at its centre and therefore to be worthy of public trust;
Amendment 32 #
Motion for a resolution Recital C C. whereas trustworthy AI systems
Amendment 33 #
Motion for a resolution Recital C C. whereas trustworthy AI systems need to be accountable, designed for the protection and benefit of all (including consideration of vulnerable, marginalised populations in their design), be non- discriminatory, safe and transparent, and respect human autonomy and fundamental rights;
Amendment 34 #
Motion for a resolution Recital C C. whereas
Amendment 35 #
Motion for a resolution Recital C C. whereas trustworthy AI systems need to be
Amendment 36 #
Motion for a resolution Recital C a (new) C a. whereas AI systems always have to be in the service of humans and have the ultimate safety valve of being designed so that they can always be shut down by a human operator;
Amendment 37 #
Motion for a resolution Recital D D. whereas the Union together with the Member States bear a critical responsibility for ensuring that policy choices surrounding the development, deployment and use of AI applications in the field of the judiciary and law enforcement are made in a transparent manner, respect the principles of necessity and proportionality, and guarantee that the policies and measures adopted will fully safeguard fundamental rights within the Union, in particular that any AI applications do not perpetuate existing discriminations, biases or prejudices;
Amendment 38 #
Motion for a resolution Recital D D. whereas the Union together with the Member States bear a critical responsibility for ensuring that policy choices surrounding the development, deployment and use of AI applications in the field of the judiciary and law enforcement are made in a transparent manner, respect the principles of necessity and proportionality, and guarantee that the policies and measures adopted will fully safeguard fundamental rights within the Union and reflect the societies` expectation in a constitutional, fair, and humane criminal justice system;
Amendment 39 #
Motion for a resolution Recital D D. whereas the Union together with the Member States bear a critical responsibility for ensuring that
Amendment 4 #
Motion for a resolution Citation 4 — having regard to the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (ETS 108), and its amending protocol (“Convention 108+”);
Amendment 40 #
Motion for a resolution Recital D D. whereas the
Amendment 41 #
Motion for a resolution Recital D a (new) Da. whereas Article 22 of Regulation (EU) 2016/679, notwithstanding specific exceptions, stipulates that the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her;
Amendment 42 #
Motion for a resolution Recital E E. whereas AI applications can offer great opportunities in the field of law enforcement, in particular in improving the working methods of law enforcement agencies and judicial authorities, and making an effective contribution to preventing and combating certain types of crime
Amendment 43 #
Motion for a resolution Recital E E. whereas AI applications may offer
Amendment 44 #
Motion for a resolution Recital E E. whereas AI applications offer great opportunities in the field of law enforcement, in particular in improving the working methods of law enforcement agencies and judicial authorities, and preventing and combating certain types of crime more efficiently, in particular financial crime, money laundering and terrorist financing, as well as certain types of cybercrime, thereby contributing to the safety and security of EU citizens;
Amendment 45 #
Motion for a resolution Recital E E. whereas AI applications offer
Amendment 46 #
Motion for a resolution Recital E E. whereas AI applications may offer
Amendment 47 #
Motion for a resolution Recital E a (new) E a. whereas the development and operation of AI systems for police and judicial authorities involves the contribution of multiple individuals, organisations, machine components, software algorithms, and human users in often complex and challenging environments;
Amendment 48 #
Motion for a resolution Recital F
Amendment 49 #
Motion for a resolution Recital F F. whereas a clear model for assigning legal responsibility for the potential harmful effects of AI systems in the field of criminal law is imperative;
Amendment 5 #
Motion for a resolution Citation 4 a (new) - having regard to the 'European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment', adopted by the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe on 3 December 2018 1a, _________________ 1a https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c
Amendment 50 #
Motion for a resolution Recital F F. whereas a clear model for assigning legal responsibility for the potential harmful effects of AI systems in the field of criminal law is imperative; whereas regulatory provisions in this field should always maintain human accountability;
Amendment 51 #
Motion for a resolution Recital F a (new) F a. whereas allocating and distributing responsibility between humans and machines is increasingly difficult; whereas ultimately it is the responsibility of the Member States to guarantee the full respect of fundamental rights when AI systems are used in the field of law enforcement;
Amendment 52 #
Motion for a resolution Recital F b (new) F b. whereas the relationship between protecting fundamental rights and effective policing must always be an essential element in the discussions on whether and how AI should be used by the law enforcement sector, where decisions may have long-lasting consequences on the life and freedom of individuals;
Amendment 53 #
Motion for a resolution Recital G G. whereas the use of AI applications in
Amendment 54 #
Motion for a resolution Recital G G. whereas AI applications in use by law enforcement include applications such as facial recognition technologies, automated number plate recognition, speaker identification, speech identification, lip-reading technologies, aural surveillance (i.e. gunshot detection algorithms),
Amendment 55 #
Motion for a resolution Recital G G. whereas AI applications in use by law enforcement include applications such as facial recognition technologies, e.g. to search suspect databases and identify victims of human trafficking or child sexual exploitation and abuse, automated number plate recognition, speaker identification, speech identification, lip- reading technologies, aural surveillance (i.e. gunshot detection algorithms), autonomous research and analysis of identified databases, forecasting (predictive policing and crime hotspot analytics),
Amendment 56 #
Motion for a resolution Recital G G. whereas AI applications in use by law enforcement include a heterogeneous array of applications such as facial recognition technologies, automated number plate recognition, speaker identification, speech identification, lip- reading technologies, aural surveillance (i.e. gunshot detection algorithms), autonomous research and analysis of identified databases, forecasting (predictive policing and crime hotspot analytics), behaviour detection tools, autonomous tools to identify financial fraud and terrorist financing, social media monitoring (scraping and data harvesting for mining connections), international mobile subscriber identity (IMSI) catchers, and automated surveillance systems incorporating different detection capabilities (such as heartbeat detection and thermal cameras); whereas the aforementioned applications have vastly varying degrees of reliability
Amendment 57 #
Motion for a resolution Recital G G. whereas AI applications in use by law enforcement include applications such as facial recognition technologies, automated number plate recognition, speaker identification, speech identification, lip-reading technologies, aural surveillance (i.e. gunshot detection algorithms), autonomous research and analysis of identified databases, forecasting
Amendment 58 #
Motion for a resolution Recital G G. whereas AI applications in use by law enforcement include applications such as facial recognition technologies, automated number plate recognition, speaker identification, speech identification, lip-reading technologies, aural surveillance (i.e. gunshot detection algorithms), autonomous research and analysis of identified databases, forecasting (predictive policing and crime hotspot analytics), behaviour detection tools, autonomous tools to identify financial fraud and terrorist financing, social media monitoring (scraping and data harvesting for mining connections), international mobile subscriber identity (IMSI) catchers, and automated surveillance systems incorporating different detection capabilities (such as heartbeat detection and thermal cameras); whereas the aforementioned applications have vastly
Amendment 59 #
Motion for a resolution Recital H H. whereas AI tools and applications are also used by the judiciary
Amendment 6 #
Motion for a resolution Citation 4 a (new) - having regard to the European ethical Charter on the use of Artificial Intelligence in judicial systems and their environment of the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe;
Amendment 60 #
Motion for a resolution Recital H H. whereas AI tools and applications are also used by the judiciary worldwide, including in sentencing, calculating probabilities for reoffending and in determining probation; whereas this has led to distorted and diminished chances for people of colour and other minorities;
Amendment 61 #
Motion for a resolution Recital H H. whereas AI tools and applications are also used by the judiciary worldwide, including in sentencing, calculating probabilities for reoffending and in determining probation, online dispute resolution, case law management, and the provision of facilitated access to the law;
Amendment 62 #
Motion for a resolution Recital H H. whereas AI tools and applications are also used by the judiciary worldwide, including to support decisions on pre-trial detention, in sentencing, calculating probabilities for reoffending and in determining probation;
Amendment 63 #
Motion for a resolution Recital H H. whereas AI tools and applications are also used by the judiciary
Amendment 64 #
Motion for a resolution Recital H a (new) H a. whereas the applications of AI in law enforcement and the judiciary are in different development stages, ranging from conceptualisation through prototyping or evaluation to post-approval use; whereas new possibilities of use may arise in the future as the technology becomes more mature due to ongoing intensive scientific research worldwide;
Amendment 65 #
Motion for a resolution Recital H a (new) H a. whereas AI has the potential to be a permanent part of our criminal justice ecosystem by providing investigative analysis and assistance;
Amendment 66 #
Motion for a resolution Recital I I. whereas the use of AI in law enforcement entails a number of potential risks, such as opaque decision-making, different types of discrimination
Amendment 67 #
Motion for a resolution Recital I I. whereas use of AI in law enforcement entails a number of
Amendment 68 #
Motion for a resolution Recital I I. whereas use of AI in law enforcement entails a number of potential risks, such as opaque decision-making, a chilling effect, different types of discrimination, and risks to the protection of privacy and personal data, the protection of freedom of expression and information, the presumption of innocence, and the right to an effective remedy and a fair trial;
Amendment 69 #
Motion for a resolution Recital I I. whereas use of AI in law enforcement entails a number of
Amendment 7 #
Motion for a resolution Citation 6 a (new) - having regard to the ‘Ethics Guidelines for Trustworthy AI’ of the High-Level Expert Group on Artificial Intelligence set up by the Commission of 8 April 2019;
Amendment 70 #
Motion for a resolution Recital I I. whereas use of AI in law enforcement entails a number of
Amendment 71 #
Motion for a resolution Recital I a (new) I a. whereas predictive policing systems necessarily rely heavily on historical data, which can contain biases, so that any subsequent police method or strategy based upon such data is inclined to reproduce those biases in its results; whereas these biases can have a 'ratchet effect', meaning that the distortion will get incrementally worse each year if police services rely on the evidence of last year's data in order to set next year's targets 1a. _________________ 1a Williams, Patrick and Kind, Eric (2019) Data-driven Policing: The hardwiring of discriminatory policing practices across Europe. Project Report. European Network Against Racism (ENAR)
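The feedback loop described in this recital can be sketched in a few lines of code. This is a purely illustrative model, not drawn from the cited report: all numbers are hypothetical, and the assumption that hotspot targeting over-weights high-record areas (exponent greater than 1) is what drives the compounding distortion. Two districts have identical true crime rates, but the historical record starts biased towards district A.

```python
# Illustrative sketch of the 'ratchet effect': hypothetical numbers only.
# Two districts with equal true crime rates; the record starts biased.

def simulate(years, initial_records=(60.0, 40.0), true_rate=(50.0, 50.0),
             hotspot_weight=1.2):
    """Each year, patrols are allocated using last year's recorded crime,
    with 'hotspot' areas over-weighted (exponent > 1); recorded crime is
    what patrols actually observe, so it scales with patrol presence.
    Returns district A's share of recorded crime for each year."""
    records = list(initial_records)
    shares = []
    for _ in range(years):
        weights = [r ** hotspot_weight for r in records]  # targets from last year's data
        patrol_share = [w / sum(weights) for w in weights]
        # observed (recorded) crime = true crime scaled by patrol presence
        records = [t * p * 2 for t, p in zip(true_rate, patrol_share)]
        shares.append(records[0] / sum(records))
    return shares

shares = simulate(5)
print([round(s, 3) for s in shares])  # district A's share grows every year
```

Despite equal underlying crime rates, district A's share of recorded crime rises year on year, because each year's targets are set from the previous year's distorted record.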
Amendment 72 #
Motion for a resolution Recital I a (new) I a. whereas some countries make greater use of AI applications in law enforcement and the judiciary than others, which is partly due to a lack of regulation and regulatory differences which enable or prohibit AI use for certain purposes;
Amendment 73 #
Motion for a resolution Recital I b (new) I b. whereas persons who belong to minority ethnic groups are much more likely to be subject to stop and search by the police, prosecution, punishment and imprisonment in comparison with the respective "majority" population; whereas, as recognised by Commissioner Vestager in her keynote speech at the European AI Forum on 30 June 2020, migrants and people belonging to certain ethnic groups might be targeted by predictive policing techniques that direct all the attention of law enforcement to them;
Amendment 74 #
Motion for a resolution Recital J J. whereas AI systems used by law enforcement and judiciary are also vulnerable to AI-
Amendment 75 #
Motion for a resolution Recital J J. whereas AI systems used by law enforcement are also vulnerable to
Amendment 76 #
Motion for a resolution Recital J a (new) Amendment 77 #
Motion for a resolution Recital J a (new) Ja. whereas so-called automated decision systems - AI-based algorithms - cannot be used for decision-making purposes, since the final decision in criminal matters must always be taken by a human, with AI retaining a purely instrumental role;
Amendment 78 #
Motion for a resolution Recital J b (new) J b. whereas, however, the deployment of AI in this field should not be seen as a mere technical question of ensuring the accuracy and effectiveness of such tools, but rather a crucial political decision concerning the design and the objectives of law enforcement and of criminal justice systems, which will inevitably bring about a deep impact on the lives and fundamental rights of people;
Amendment 79 #
Motion for a resolution Recital J c (new) J c. Whereas full enforcement of the law is a dream that should not be pursued at all costs; whereas detecting and sanctioning all law infringements is not possible without resorting to ubiquitous surveillance; whereas detecting all forms of illegal conduct with the same high level of efficacy is not a legitimate goal in democratic societies that value the privacy of individuals, and which, in order to protect such a value, are ready to accept that in some cases disobedience is not punished;
Amendment 8 #
Motion for a resolution Citation 8 Amendment 80 #
Motion for a resolution Recital J d (new) J d. Whereas an increasing number of authorities and legislators worldwide have banned, or are considering banning, the use of facial recognition by law enforcement authorities; whereas, in the wake of protests around the murder of George Floyd, Amazon, Microsoft and IBM denied police departments access to their facial recognition technology, calling on governments around the world to regulate the use of facial recognition;
Amendment 81 #
Motion for a resolution Recital J e (new) J e. Whereas EU instruments on judicial cooperation, such as the European Arrest Warrant, do not modify the obligation to respect the fundamental rights and legal principles as enshrined in Article 6 of the TEU; Whereas on several occasions the CJEU has concluded that mutual trust is not blind, and that the executing judicial authority might be required to assess whether there is a real risk that the individual concerned will suffer a breach of fundamental rights if surrendered to the issuing state; whereas the CJEU has applied this principle both as regards a potential violation of the prohibition of torture and inhuman or degrading treatment, and of the right to a fair trial;
Amendment 82 #
Motion for a resolution Recital J f (new) J f. Whereas modern liberal criminal law is based on the idea that state authorities react to an offence after it has been committed, without assuming that people are dangerous and need to be constantly monitored in order to prevent any potential wrongdoing; whereas AI- based surveillance techniques deeply challenge such an approach and urge legislators worldwide to thoroughly assess the consequences of allowing the deployment of technologies that reduce the role of human beings in law enforcement and adjudication.
Amendment 83 #
Motion for a resolution Paragraph 1 1.
Amendment 84 #
Motion for a resolution Paragraph 1 1. Reiterates that, as processing large quantities of data is at the heart of AI, the right to the protection of private life and the right to the protection of personal data apply to all areas of AI, and that the
Amendment 85 #
Motion for a resolution Paragraph 1 a (new) 1 a. recalls that the EU has already established data protection standards for law enforcement, which form the foundation for any future regulation of AI; recalls that processing of personal data must be lawful and fair; the purposes of processing must be specified, explicit and legitimate; data must be adequate, relevant and not excessive in relation to the purpose for which it is processed; be accurate and kept up to date (inaccurate data should, subject to the purpose for which it would otherwise be retained, be corrected or erased); should be kept for no longer than is necessary and processed in a secure manner;
Amendment 86 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law, and the right to an effective remedy and a fair trial; any artificial intelligence, robotics and related technologies, shall be developed, deployed or used in a manner that prevents the possible identification of individuals from data that were previously processed based on anonymity or pseudonymity, and the generation of new, inferred, potentially sensitive data and forms of categorisation through automated means;
Amendment 87 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, human dignity, prevention of damage, transparency, impartiality and accuracy, fairness and explainability of the use of biometric recognition technologies, guarantee of the human control by the user, freedom of movement, the presumption of innocence and right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law, and the right to an effective remedy and a fair trial;
Amendment 88 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of defence, including the right to silence, freedom of expression and information, freedom of assembly and of association, equality before the law, and the right to an effective remedy and a fair trial; stresses that any use of AI must be prohibited when evidently incompatible with fundamental rights;
Amendment 89 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of defence, freedom of expression and information, freedom of assembly and of
Amendment 9 #
Motion for a resolution Citation 12 a (new) - having regard to the hearing in the Committee on Civil Liberties, Justice and Home Affairs (LIBE) on 20 February 2020 on Artificial Intelligence in criminal law and its use by the police and judicial authorities in criminal matters;
Amendment 90 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law,
Amendment 91 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of
Amendment 92 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of defence, freedom of expression and information, freedom of assembly and of association, equality before the law, the principle of equality of arms, and the right to an effective remedy and a fair trial;
Amendment 93 #
Motion for a resolution Paragraph 2 2. Reaffirms that all AI solutions for law enforcement and the judiciary also need to fully respect the principles of non- discrimination, freedom of movement, the presumption of innocence and right of defence, obligation to state reasons, freedom of expression and
Amendment 94 #
Motion for a resolution Paragraph 2 a (new) 2 a. notes that the use of biometric data, such as for facial recognition technologies, relates more broadly to the principle of the right to human dignity; human dignity is the basis of all fundamental rights guaranteed by the Charter of Fundamental Rights; the Court of Justice of the EU (CJEU) has confirmed in its case law that the fundamental right to dignity is part of EU law; therefore, biometric data, including facial images, must be processed in a way that respects human dignity;
Amendment 95 #
Motion for a resolution Paragraph 2 a (new) 2 a. Acknowledges that the speed at which AI applications are developed around the world necessitates a future- oriented approach and that any attempts at exhaustive listing of applications will quickly become outdated; calls, in this regard, for a clear and coherent governance model that guarantees the respect of fundamental rights, but also allows companies and organisations to further develop artificial intelligence applications;
Amendment 96 #
Motion for a resolution Paragraph 2 a (new) 2 a. Considers that AI applications used by police and judicial authorities have to be categorised as high-risk in all cases given their public role and responsibility and the impact of decisions taken by such authorities; considers furthermore that the use of AI systems by such authorities can occur in a way that might have legal consequences or may significantly affect people's lives;
Amendment 97 #
Motion for a resolution Paragraph 2 a (new) 2 a. Considers it necessary to lower expectations of technological solutions that promise perfect law enforcement and the unrealistic detection of all committed offences;
Amendment 98 #
Motion for a resolution Paragraph 3 3. Considers, in this regard, that any AI tool either developed or used by law enforcement or judiciary should, as a minimum, be safe, secure and fit for purpose, respect the principles of fairness, accountability, transparency and explainability, with their deployment subject to a strict necessity and proportionality test; urges the EU and national legislators to take into great consideration the five principles of the 'Ethical Charter on the use of artificial intelligence in judicial systems and their environment' adopted by the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe, and to pay particular attention to the 'uses to be considered with the most extreme reservation' identified by CEPEJ;
Amendment 99 #
Motion for a resolution Paragraph 3 3. Considers, in this regard, that safeguards should be proportionate to the potential risks associated with the use of specific AI applications; believes that any AI tool either developed or used by law enforcement or judiciary should, as a minimum, be safe, robust, secure and fit for purpose, respect the principles of fairness, accountability, transparency
source: 655.659
History
(these mark the time of scraping, not the official date of the change)

procedure/stage_reached | Old: Awaiting Parliament's vote | New: Procedure completed
forecasts/0/title | Old: Indicative plenary sitting date | New: Debate in plenary scheduled
forecasts/0/date | Old: 2021-09-13T00:00:00 | New: 2021-10-04T00:00:00
procedure/stage_reached | Old: Awaiting committee decision | New: Awaiting Parliament's vote
forecasts/0/date | Old: 2021-07-05T00:00:00 | New: 2021-09-13T00:00:00
forecasts/0/date | Old: 2021-05-17T00:00:00 | New: 2021-07-05T00:00:00
forecasts/0/title | Old: Indicative plenary sitting date, 1st reading/single reading | New: Indicative plenary sitting date
events/0/body | EP
docs/0/docs/0/url | https://www.europarl.europa.eu/doceo/document/LIBE-PR-652625_EN.html
docs/1/docs/0/url | https://www.europarl.europa.eu/doceo/document/LIBE-AM-655659_EN.html
docs/2/docs/0/url | https://www.europarl.europa.eu/doceo/document/IMCO-AD-648565_EN.html
docs/3/docs/0/url | https://www.europarl.europa.eu/doceo/document/JURI-AD-652371_EN.html
docs/3/date | Old: 2020-09-11T00:00:00 | New: 2020-09-14T00:00:00
committees/2/rapporteur/0/mepref | 197546
committees/3/rapporteur/0/mepref | 124873
committees/1/opinion | False