December 11, 2023

EU’s AI Act Agreed: Here’s What You Need to Know

In Brief

European Union policymakers forged a political agreement to establish the worldwide standard for governing artificial intelligence.


After a 36-hour negotiation marathon, European Union policymakers successfully forged a political agreement to establish the worldwide standard for governing artificial intelligence (AI).

The AI Act, a legislative milestone aimed at regulating AI with a focus on its potential for harm, cleared its biggest political hurdle as the European Commission, Council and Parliament resolved their differences during a decisive trilogue on December 8.

During this high-stakes political meeting, which set a new record for the length of interinstitutional negotiations, the main EU institutions worked through an agenda of 21 unresolved issues. “Deal!” tweeted European Commissioner Thierry Breton just before midnight on Friday (December 8) in Brussels.

The European Parliament will now vote on the proposed AI Act early next year, and the legislation is likely to come into force by 2025.

Unacceptable AI Risks

The AI Act covers a roster of unacceptable risks, including manipulative techniques, systems exploiting vulnerabilities and social scoring. Parliamentarians secured a prohibition on emotion recognition in workplaces and educational institutions, albeit with a safety-oriented caveat allowing its use, for example, to detect whether a driver is falling asleep.

Furthermore, parliamentarians introduced a ban on predictive policing software designed to evaluate an individual’s risk of committing future crimes based on personal traits. There is also a push to forbid the deployment of AI systems that categorize individuals based on sensitive traits like race, political opinions or religious beliefs.

According to the European Parliament, unacceptable-risk AI systems are those considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children.
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics.
  • Real-time and remote biometric identification systems, such as facial recognition.

Some exceptions may be allowed: for instance, “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed for the prosecution of serious crimes, but only after court approval.

Responding to pressure from European governments, Parliament relented on its ban on real-time remote biometric identification, making exceptions for narrow law enforcement purposes, specifically to prevent terrorist attacks or to locate victims or suspects on a predefined list of serious crimes.

Transparency Around Generative AI

In the generative AI domain, models like ChatGPT will be subject to new transparency requirements, part of an effort to address concerns about AI-generated content. Under these rules, generative AI systems must disclose when content has been generated by AI, giving users clear information about the origin of the content they are interacting with.

Additionally, the models must be designed to prevent the generation of illegal content, addressing ethical considerations and ensuring that AI technologies adhere to legal standards.

Moreover, entities developing generative AI models will be required to publish summaries of the copyrighted data used during training, enhancing accountability and transparency in how such systems are built and used.
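As a purely hypothetical illustration of the disclosure requirement (the wrapper and field names below are assumptions, not anything prescribed by the Act), a provider might attach a machine-readable label to generated output along these lines:

```python
# Hypothetical sketch only: the AI Act requires disclosure that content is
# AI-generated, but it does not prescribe this (or any) particular format.
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> str:
    """Wrap model output with a machine-readable AI-generated disclosure."""
    return json.dumps({
        "content": text,
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(label_ai_output("A short summary of the AI Act.", "example-model"))
```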

High-Risk AI Applications

To mitigate potential risks associated with AI systems, the EU has introduced a comprehensive framework categorizing high-risk AI applications. The regulations outline two primary categories for such systems, with specific attention to safety and fundamental rights.

The first category pertains to AI systems embedded in products covered by the EU’s product safety legislation. This encompasses a broad range of industries, including toys, aviation, cars, medical devices, and lifts, aiming to ensure the safe integration of AI into these diverse sectors.

The other category targets AI systems in eight specific areas, mandating their registration in an EU database.

Designated areas include biometric identification, critical infrastructure management, education, employment practices, access to essential services, law enforcement, migration and border control, and legal interpretation assistance.
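To make the two-category structure concrete, the following minimal sketch (the area names and the helper are illustrative simplifications, not the legal definitions) shows how a system might be checked against the categories described above:

```python
# Illustrative sketch of the two high-risk categories described above.
# The area names and this helper are simplifications; the Act's annexes
# define the actual scope in far more detail.
from typing import Optional

HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure_management",
    "education",
    "employment_practices",
    "access_to_essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "legal_interpretation_assistance",
}

def is_high_risk(covered_by_product_safety_law: bool, area: Optional[str]) -> bool:
    """High-risk if embedded in a product under EU product safety legislation,
    or operating in one of the eight listed areas (which also triggers
    registration in the EU database)."""
    return covered_by_product_safety_law or area in HIGH_RISK_AREAS

# Example: an AI hiring tool falls under "employment_practices".
print(is_high_risk(False, "employment_practices"))  # True
```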

Crucially, all high-risk AI systems, irrespective of category, will undergo a thorough assessment before entering the market and will continue to be scrutinized throughout their lifecycle.

Members have introduced a requirement for public bodies and essential service providers to conduct fundamental rights impact assessments for high-risk AI systems.

Administrative fines will be set as a fixed sum or a percentage of global annual turnover, whichever is higher, ranging from €7.5 million or 1.5% of turnover for supplying incorrect information up to €35 million or 7% of turnover for violations involving banned AI applications.

This underscores the commitment to accountability and fundamental rights in deploying high-risk AI in critical public services.
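As a rough illustration of the “whichever is higher” rule, using the figures reported at the time of the agreement (the final legal text may differ), a company’s maximum exposure could be estimated as follows:

```python
# Rough sketch of the "fixed sum or percentage of global annual turnover,
# whichever is higher" fine structure reported for the political agreement.
# Figures are those reported at the time; the final legal text may differ.

FINE_TIERS = {
    "banned_ai_application": (35_000_000, 0.07),   # up to €35M or 7%
    "incorrect_information": (7_500_000, 0.015),   # up to €7.5M or 1.5%
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a given violation tier."""
    fixed_sum, turnover_share = FINE_TIERS[violation]
    return max(fixed_sum, turnover_share * global_annual_turnover_eur)

# Example: a provider with €1 billion in global turnover that deploys a banned
# AI application faces up to €70M, since 7% of turnover exceeds the €35M floor.
print(max_fine("banned_ai_application", 1_000_000_000))  # 70000000.0
```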


About The Author

Kumar is an experienced Tech Journalist with a specialization in the dynamic intersections of AI/ML, marketing technology, and emerging fields such as crypto, blockchain, and NFTs. With over 3 years of experience in the industry, Kumar has established a proven track record in crafting compelling narratives, conducting insightful interviews, and delivering comprehensive insights. Kumar's expertise lies in producing high-impact content, including articles, reports, and research publications for prominent industry platforms. With a unique skill set that combines technical knowledge and storytelling, Kumar excels at communicating complex technological concepts to diverse audiences in a clear and engaging manner.
