

AI Act: a step closer to the first rules on Artificial Intelligence

·  Once approved, they will be the world’s first rules on Artificial Intelligence;

·  MEPs include bans on biometric surveillance, emotion recognition and predictive policing AI systems;

·  Tailor-made regimes for general-purpose AI and foundation models like GPT;

·  Universal right to make complaints about AI systems.
To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems.
On Thursday 11 May, the Internal Market Committee and the Civil Liberties Committee adopted a draft negotiating mandate on the first ever rules for Artificial Intelligence, with 84 votes in favour, 7 against and 12 abstentions. In their amendments to the Commission’s proposal, MEPs aim to ensure that AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly. They also want a uniform definition for AI that is technology-neutral, so that it can apply to the AI systems of today and tomorrow.

Risk-based approach to AI – Prohibited AI practices

The rules follow a risk-based approach and establish obligations for providers and users depending on the level of risk the AI can generate. AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics).

MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:

·  “Real-time” remote biometric identification systems in publicly accessible spaces;

·  “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;

·  Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);

·  Predictive policing systems (based on profiling, location or past criminal behaviour);

·  Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and

·  Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

High-risk AI

MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms (those with more than 45 million users, as defined under the Digital Services Act).

General-purpose AI – transparency measures

MEPs included obligations for providers of foundation models – a new and fast-evolving development in the field of AI – who would have to guarantee robust protection of fundamental rights, health and safety, the environment, democracy and the rule of law. They would need to assess and mitigate risks; comply with design, information and environmental requirements; and register in the EU database.

Generative foundation models, like GPT, would have to comply with additional transparency requirements, like disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content and publishing summaries of copyrighted data used for training.

Supporting innovation and protecting citizens’ rights

To boost AI innovation, MEPs added exemptions to these rules for research activities and AI components provided under open-source licenses. The new law promotes regulatory sandboxes, or controlled environments, established by public authorities to test AI before its deployment.

MEPs want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Quotes

After the vote, co-rapporteur Brando Benifei (S&D, Italy) said: “We are on the verge of putting in place landmark legislation that must resist the challenge of time. It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level. We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe”.

Co-rapporteur Dragos Tudorache (Renew, Romania) said: “Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement.”

Next steps

Before negotiations with the Council on the final form of the law can begin, this draft negotiating mandate needs to be endorsed by the whole Parliament, with the vote expected during the 12-15 June session.

Further information
Compromise text (consolidated version 11.05.2023)
Draft reports, amendments tabled in committee
European Parliamentary Research Service: Artificial Intelligence
Legislative train: the AI Act

Source: European Parliament Office in Greece