EU AI Act Enters Official Journal, Setting Legal Deadlines in Motion

The European Union has taken a significant step forward in regulating artificial intelligence (AI) with the publication of the full and final text of the EU AI Act in the bloc’s official journal. This landmark regulation, which adopts a risk-based approach to AI applications, is set to come into force on August 1, 2024. Its provisions will then become applicable to AI developers on a staggered schedule, with most rules taking effect within 24 months and some extending to 36 months, marking a new era of AI governance in Europe.

Phased Implementation of the EU AI Act

The EU AI Act takes a phased approach to implementation, with different deadlines for different provisions. This staggered rollout gives AI developers time to comply with the new regulations. The first significant milestone is the ban on prohibited AI uses, which takes effect six months after the law comes into force, in early 2025.

Prohibited AI Use Cases

The Act bans certain AI use cases deemed to pose “unacceptable risk.” These include China-style social credit scoring, compiling facial recognition databases through untargeted scraping of the internet or CCTV, and the use of real-time remote biometrics by law enforcement in public places, except under specific circumstances such as searching for missing persons.

High-Risk AI Applications

High-risk AI applications, such as those used in biometric identification, law enforcement, employment, education, and critical infrastructure, are permitted under the Act but come with stringent obligations. Developers of these applications must ensure data quality and implement anti-bias measures to mitigate risks.

Transparency Requirements for AI Chatbots

The Act also introduces lighter transparency requirements for AI chatbots and other general-purpose AI (GPAI) models, such as OpenAI’s GPT, the technology behind ChatGPT. These models must comply with transparency requirements, and the most powerful GPAIs, identified by a training-compute threshold, may be required to conduct systemic risk assessments.
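As a rough illustration of how that compute threshold works, here is a minimal sketch in Python. The 10^25 FLOP figure is the training-compute level at which the Act presumes a GPAI model poses systemic risk; the function and variable names are illustrative, not part of any official tooling.

```python
# Minimal sketch: the AI Act presumes "systemic risk" for a general-purpose AI
# model whose cumulative training compute exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # presumption threshold for GPAI models

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model's training compute crosses the Act's threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical example: a frontier model trained with ~5 x 10^25 FLOPs
print(is_presumed_systemic_risk(5e25))   # True  -> systemic-risk obligations apply
print(is_presumed_systemic_risk(1e24))   # False -> baseline GPAI transparency rules only
```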

Lobbying and Industry Concerns

The AI industry, backed by some member states’ governments, has lobbied heavily to water down obligations on GPAIs. There are concerns that stringent regulations could hinder Europe’s ability to produce homegrown AI giants capable of competing with rivals in the US and China.

Codes of Practice for AI Developers

Nine months after the Act comes into force, around May 2025, codes of practice will apply to developers of in-scope AI applications. The EU’s AI Office, an ecosystem-building and oversight body established by the Act, is responsible for providing these codes. However, questions remain about who will draft the guidelines, with concerns that AI industry players could influence the rules.

Transparency Requirements for GPAIs

Twelve months after the Act’s entry into force, on August 1, 2025, the transparency requirements for GPAIs will start to apply. This means that developers of these powerful AI models must ensure compliance with the new transparency rules.

Extended Compliance Deadlines for High-Risk AI Systems

A subset of high-risk AI systems has been granted the most generous compliance deadline: 36 months after the Act’s entry into force, in 2027. Other high-risk systems must comply sooner, within 24 months.
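To make the staggered timeline concrete, here is a minimal sketch in Python (using the third-party python-dateutil package) that derives each approximate milestone date from the August 1, 2024 entry-into-force date cited above. The labels and month counts simply restate the deadlines discussed in this article; the exact legal dates in the regulation may differ by a day or so.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)  # EU AI Act entry into force

# Milestones described above, expressed as months after entry into force
MILESTONES = {
    "Prohibited-use bans apply": 6,
    "Codes of practice apply": 9,
    "GPAI transparency rules apply": 12,
    "Most high-risk obligations apply": 24,
    "Remaining high-risk obligations apply": 36,
}

for label, months in MILESTONES.items():
    deadline = ENTRY_INTO_FORCE + relativedelta(months=months)
    print(f"{label}: {deadline:%B %Y}")
```

Running this prints February 2025, May 2025, August 2025, August 2026, and August 2027 for the respective milestones, matching the phased rollout described above.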

Impact on AI Developers

The EU AI Act places different obligations on AI developers based on the perceived risk of their applications. While the bulk of AI uses are considered low risk and will not be regulated, developers of high-risk applications face significant compliance challenges. This tiered approach aims to balance innovation with safety and ethical considerations.

Role of the EU AI Office

The EU AI Office plays a crucial role in the implementation of the Act. This body is responsible for providing codes of practice and overseeing compliance. However, the process of drafting these guidelines has raised concerns about potential industry influence, highlighting the need for transparency and inclusivity in rule-making.

Concerns from Civil Society

Civil society organizations have expressed concerns about the involvement of consultancy firms in drafting the codes of practice. There are fears that AI industry players could shape the rules to their advantage, potentially undermining the Act’s objectives. The EU AI Office’s call for expression of interest to select stakeholders aims to address these concerns and ensure an inclusive process.

Future of AI Regulation in Europe

The EU AI Act represents a significant step towards comprehensive AI regulation in Europe. By adopting a risk-based approach and setting clear obligations for developers, the Act aims to ensure the safe and ethical use of AI technologies. However, the success of this regulatory framework will depend on effective implementation and enforcement.

Discussion Points

As we navigate the complexities of AI regulation, several critical questions arise:

1. How can the EU ensure that the codes of practice are drafted transparently and inclusively?

2. What measures can be taken to prevent industry influence from undermining the objectives of the EU AI Act?

3. How will the phased implementation of the Act impact AI innovation and development in Europe?

4. What are the potential challenges and opportunities for AI developers in complying with the new regulations?

5. How can other regions learn from the EU’s approach to AI regulation?

Share your thoughts on these critical questions.

Join me and my incredible LinkedIn friends as we embark on a journey of innovation, AI, and EA, always keeping climate action at the forefront of our minds. 🌐 Follow me for more exciting updates https://lnkd.in/epE3SCni

#AIRegulation #EULaw #ArtificialIntelligence #TechEthics #Innovation #AICompliance #FutureOfAI #AIAct #EUAI #TechPolicy #AITransparency

Sources: TechCrunch; eur-lex.europa.eu
