5 Amendments of Jordi CAÑAS related to 2020/2017(INI)
Amendment 7 #
Draft opinion
Paragraph 1
1. Notes that the Commission has proposed to rapidly deploy products and services that rely on AI in areas of public interest and the public sector; highlights the added value of promoting public-private partnerships to secure this objective and deploy the full potential of AI in the education, culture and audiovisual sector; emphasises that in the education sector, this deployment should involve educators, learners and wider society and take their needs and the expected benefits into account in order to ensure that AI is used purposefully and ethically; points out that the legal framework governing AI in the education sector should particularly provide for legally binding measures and standards to prevent practices from the different private and public actors involved that would undermine fundamental rights and freedoms, and to ensure the development of trustworthy, ethical and technically robust AI applications;
Amendment 25 #
Draft opinion
Paragraph 2
2. Calls on the Commission to include the education sector in the regulatory framework for high-risk AI applications given the importance of ensuring that education continues to contribute to the public good and given the high sensitivity of data on pupils, students and other learners; notes that education is a sector where significant risks are likely to arise from certain uses of AI applications, which may potentially undermine fundamental rights and result in high costs in both human and social terms; underlines that data sets used to train AI should comply with reasonable and comprehensive mandatory requirements in order to ensure that they are sufficiently broad so as to avoid biased outputs and discrimination based on social, economic, ethnic, racial, sexual, gender, disability status or other factors;
Amendment 48 #
Draft opinion
Paragraph 3
3. Expresses its concern that schools and other public education providers are becoming increasingly dependent on educational technology services, including AI applications, provided by just a few technology companies; stresses that this may lead to unequal access to data and limit competition by restricting consumer choice; stresses in this regard the importance of supporting the uptake of AI by SMEs in the education, culture and audiovisual sector through financial support and other appropriate incentives that do not entail a disproportionate burden and that create a level playing field; calls for the data used by AI applications in the education sector to be shared with the relevant public authorities so it can be used, in accordance with the European data protection and privacy rules, and ethical, democratic and transparency standards, in the development of curricula and pedagogical practices (in particular when these services are purchased with public money or offered to public education providers for free, and considering that education is a common good);
Amendment 86 #
Draft opinion
Paragraph 5
5. Underlines that automated means of removing illegal content from online platforms on which audiovisual content is shared must not hamper legitimate uses of copyrighted material; recalls the long-established principle prohibiting general monitoring obligations under Article 15 of Directive 2000/31/EC;
Amendment 98 #
Draft opinion
Paragraph 6
6. Calls for recommendation algorithms and personalised marketing on audiovisual platforms, including video streaming platforms and news platforms, to be transparent, in order to give consumers insight into these processes and ensure that personalised services are not discriminatory; stresses the need to guarantee and properly implement the right of users to opt out from recommended and personalised services; points out in this regard that clear and understandable explanations should be provided to users, notably on the data used, the purpose of the algorithm, personalisation, its outcomes and potential dangers, in accordance with the principles of explicability, fairness and responsibility; calls on the Commission to consider tailored product safety and liability rules when AI applications are deployed for educational purposes, considering the high risks to which the addressees, such as pupils, students and other learners, are exposed;