MASSIVE AI REGULATION NEWS!!! The European AI Office has published the first draft of its General-Purpose AI Code of Practice, marking a major step in AI governance. This draft forms part of the EU's strategy to create a comprehensive framework for artificial intelligence, guiding providers on compliance, accountability, and societal benefit. Following consultation with nearly 1,000 stakeholders, the final version is expected in May 2025.

Article 55 of the AI Act outlines the obligations for providers of general-purpose AI models with systemic risk, including standardised model evaluations, risk assessments, serious incident tracking, and cybersecurity measures. Providers can use codes of practice (defined in Article 56) to demonstrate compliance with these obligations until harmonised standards are issued. Article 56 enables the AI Office to facilitate Union-level codes of practice covering these obligations, developed collaboratively with relevant stakeholders. These codes must be detailed, regularly monitored, and adaptable to technological change, ultimately ensuring a high standard of compliance across the EU.

The draft focuses on four core objectives aligned with the EU AI Act. First, it offers clear compliance pathways by detailing how providers can document and validate adherence to the Act, particularly for advanced general-purpose AI models. Second, it fosters transparency across the AI value chain, ensuring downstream developers understand model functionalities and limitations. Third, it addresses copyright compliance, with provisions to safeguard creators' rights while balancing innovation. Finally, it establishes a framework for continuous monitoring of models with systemic risk, from development to deployment.

Providers of general-purpose AI models bear unique responsibilities under the Code. These include maintaining comprehensive technical documentation, implementing acceptable use policies to prevent misuse, and complying with EU copyright law, including the Text and Data Mining exception. Proportional compliance measures are introduced for small and medium enterprises to support innovation while ensuring accountability.

For models with systemic risk, providers must assess and mitigate those risks through measures tailored to each model's risk profile, including rigorous testing, safety reports, and incident response protocols. Governance structures extend accountability to executive level, ensuring organisational oversight of AI risks. Providers must also implement safeguards to protect proprietary assets and manage systemic risks effectively. The Code mandates continuous evidence collection and lifecycle-based risk assessments, covering all stages of development and deployment. Public transparency is emphasised: providers must publish safety frameworks and compliance information, including text and data mining practices. Standardised documentation templates aim to ease compliance, particularly for SMEs; a minimal sketch of such a record follows.
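To make the documentation obligation concrete, here is a minimal, illustrative sketch of what a standardised model documentation record might capture. The field names are assumptions for illustration, not the Code's official template.

```python
from dataclasses import dataclass

@dataclass
class GPAIModelDocumentation:
    """Illustrative documentation record for a general-purpose AI model.

    Field names are hypothetical; the Code of Practice's official
    templates may differ.
    """
    model_name: str
    provider: str
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_summary: str            # high-level description, incl. TDM opt-out handling
    evaluation_results: dict[str, float]  # benchmark name -> score
    systemic_risk: bool                   # triggers Article 55 obligations
    acceptable_use_policy_url: str = ""
    safety_framework_url: str = ""        # public, per the draft Code's transparency measures

doc = GPAIModelDocumentation(
    model_name="example-gpai-v1",
    provider="Example AI Ltd",
    intended_uses=["text generation", "summarisation"],
    known_limitations=["may hallucinate facts", "limited non-English coverage"],
    training_data_summary="Web corpus filtered for TDM reservations under the DSM Directive.",
    evaluation_results={"capability_benchmark": 0.81},
    systemic_risk=True,
)
print(doc.model_name, "systemic risk:", doc.systemic_risk)
```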
EU AI Regulation Impact
-
🚨 Did you know AI literacy is a legal obligation under the EU AI Act, and companies outside the EU will be affected? Here's what you need to know:

╰┈➤ Article 4 of the EU AI Act covers AI literacy obligations for providers and deployers of AI systems: "Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."

╰┈➤ Recital 20 brings more information about the topic:

→ "In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them.

→ In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the consolidation and innovation path of trustworthy AI in the Union.

→ The European Artificial Intelligence Board (the ‘Board’) should support the Commission to promote AI literacy tools, public awareness and understanding of the benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI."

╰┈➤ Article 4 will apply in less than 3 months (February 2, 2025).

╰┈➤ Companies covered by the EU AI Act and classified as providers or deployers of AI systems (hint: not only EU-based companies; check Article 2) should prepare for this legal obligation.

👉 To learn more about AI policy, compliance & regulation, including AI Act implementation, join 38,800 people who subscribe to my weekly AI governance newsletter (link below). #AI #AILiteracy #AIGovernance #AIRegulation #AIAct #AICompliance
-
On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will shape how AI technologies are developed and used within the E.U., with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. For U.S. companies that operate in the E.U. or work with E.U. partners, this new regulatory landscape demands careful attention. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage.

🔍 Comprehensive AI Audit: Begin by thoroughly auditing your AI systems to identify those under the AI Act’s jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and ensuring you understand the regulatory requirements that apply.

🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to accurately classify each AI application to determine the necessary compliance measures, particularly for those deemed high-risk, which require more stringent controls (see the sketch after this post).

📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used.

👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts. This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements.

🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success.

#AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation
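As a rough illustration of the classification step, here is a minimal sketch. The keyword lists and tier assignments are illustrative assumptions, not the AI Act's legal test, which turns on Article 5 and Annexes I and III.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "high-risk (Annex I / Annex III)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"

# Illustrative, simplified mappings; real classification requires
# legal analysis against Article 5 and Annexes I and III.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"credit scoring", "recruitment screening", "medical device"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    """Assign a provisional risk tier to an AI use-case description."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("credit scoring"))  # RiskTier.HIGH
```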
-
This report provides the first comprehensive analysis of how the EU AI Act regulates AI agents: increasingly autonomous AI systems that can directly impact real-world environments. Our three primary findings are:

1. The AI Act imposes requirements on the general-purpose AI (GPAI) models underlying AI agents (Ch. V) and on the agent systems themselves (Ch. III). We assume most agents rely on GPAI models with systemic risk (GPAISR). Accordingly, the applicability of various AI Act provisions depends on (a) whether agents proliferate systemic risks under Ch. V (Art. 55), and (b) whether they can be classified as high-risk systems under Ch. III. We find that (a) generally holds, requiring providers of GPAISRs to assess and mitigate systemic risks from AI agents. However, it is less clear whether AI agents will in all cases qualify as (b) high-risk AI systems, as this depends on the agent's specific use case. When built on GPAI models, AI agents should be considered high-risk GPAI systems, unless the GPAI model provider deliberately excluded high-risk uses from the intended purposes for which the model may be used.

2. Managing agent risks effectively requires governance along the entire value chain. The governance of AI agents illustrates the "many hands problem", where accountability is obscured by the unclear allocation of responsibility across a multi-stakeholder value chain. We show how requirements must be distributed along the value chain, accounting for the various asymmetries between actors, such as the superior resources and expertise of model providers and the context-specific information available to downstream system providers and deployers. In general, model providers must build the fundamental infrastructure, system providers must adapt these tools to their specific contexts, and deployers must adhere to and apply these rules during operation.

3. The AI Act governs AI agents through four primary pillars: risk assessment, transparency tools, technical deployment controls, and human oversight. We derive these complementary pillars by conducting an integrative review of the AI governance literature and mapping the results onto the EU AI Act. Underlying these pillars, we identify 10 sub-measures for which we note specific requirements along the value chain, presenting an interdependent view of the obligations on GPAISR providers, system providers, and system deployers.

By Amin Oueslati, Robin Staes-Polet at The Future Society

Read: https://lnkd.in/e6865zWq
-
Today's #sundAIreads covers a topic that has long intrigued (tormented? 🫠) #privacy scholars and is equally relevant in the context of #AI: the extent to which individuals have a right to explanation (REX) in automated decision-making. Margot Kaminski and Gianclaudio Malgieri have analyzed "The Right to Explanation in the #AIAct" in light of comparable provisions in EU law, most notably the #GDPR.

➡️ What is the scope of the REX in the AI Act (AIA)? According to Art. 86(1), the REX applies to all high-risk AI systems listed in Annex III, except those in critical infrastructure. Art. 86(2) and (3) specify that the REX shall also not apply in cases where Union or national law has carved out exemptions, or where the REX is already covered by other Union laws.

➡️ Who can exercise the REX? Any affected person has the REX. Unlike for data subjects in the GDPR, the AIA does not define affected persons, but invites a broad interpretation, including people whose data are processed by AI systems, who interact with AI systems, or who are otherwise affected in any way by AI systems. Notably, unlike in the GDPR, a person can be affected by an AI system without their personal data being processed by the system.

➡️ When can the REX be exercised? The REX can be exercised whenever a decision is taken by the deployer 1️⃣ on the basis of the output of a high-risk AI system, and that 2️⃣ produces legal effects or similarly significantly affects a person 3️⃣ in a way that they consider to have an adverse impact 4️⃣ on their health, safety or fundamental rights (a simplified sketch of this trigger test follows this post). The AIA's REX is thus broader than that of the GDPR, as it also encompasses semi-automated decisions, as long as the decision was based "mainly" on the output of the AI system (Rec. 171). At the same time, it is narrower than the GDPR, as it only applies to decisions that have an adverse impact on affected persons. That said, affected persons define what constitutes an "adverse impact," not deployers. This also suggests that the REX is only triggered "on demand" of the affected person, not "by default."

➡️ What kind of explanation does the AIA require? Art. 86(1) AIA demands that explanations be 1️⃣ clear and meaningful, 2️⃣ clarify the role of the AI system in the decision-making process, and 3️⃣ explain the "main elements of the decision taken." Based on context provided by both the AIA and GDPR, the authors interpret "clear" to mean an explanation that is both "understandable for its users" and "detailed enough to also be actionable." "Meaningful," on the other hand, is understood by the authors to imply that it "enables the right to contest an AI decision." The role of the AI system matters, as affected persons will want to know if a decision was "mainly" based on the AI system's output, which is also why it is important to know the other elements of the decision taken.

The full article, which provides a wealth of additional information, can be accessed here: http://bit.ly/4dN89Ta.
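As a simplified illustration of the four cumulative conditions above, here is a sketch in Python. The field names are assumptions for illustration, and real-world application of Art. 86 requires legal judgment, not a boolean check.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Illustrative facts about a deployer's decision (field names are hypothetical)."""
    based_mainly_on_high_risk_ai_output: bool    # Annex III system; "mainly" per Rec. 171
    legal_or_similarly_significant_effect: bool
    affected_person_claims_adverse_impact: bool  # on health, safety or fundamental rights
    critical_infrastructure_exception: bool      # Art. 86(1) carve-out
    covered_by_other_union_law: bool             # Art. 86(2)-(3) carve-outs

def rex_triggered(d: Decision) -> bool:
    """Rough sketch of when the Art. 86 right to explanation applies."""
    if d.critical_infrastructure_exception or d.covered_by_other_union_law:
        return False
    return (d.based_mainly_on_high_risk_ai_output
            and d.legal_or_similarly_significant_effect
            and d.affected_person_claims_adverse_impact)

loan_denial = Decision(True, True, True, False, False)
print(rex_triggered(loan_denial))  # True
```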
-
Understanding AI Compliance: Key Insights from the COMPL-AI Framework ⬇️

As AI models become increasingly embedded in daily life, ensuring they align with ethical and regulatory standards is critical. The COMPL-AI framework dives into how Large Language Models (LLMs) measure up to the EU's AI Act, offering an in-depth look at AI compliance challenges.

✅ Ethical Standards: The framework translates the EU AI Act's 6 ethical principles (robustness, privacy, transparency, fairness, safety, and environmental sustainability) into actionable criteria for evaluating AI models.

✅ Model Evaluation: COMPL-AI benchmarks 12 major LLMs and identifies substantial gaps in areas like robustness and fairness, revealing that current models often prioritize capabilities over compliance.

✅ Robustness & Fairness: Many LLMs show vulnerabilities in robustness and fairness, with significant risks of bias and performance issues under real-world conditions.

✅ Privacy & Transparency Gaps: The study notes a lack of transparency and privacy safeguards in several models, highlighting concerns about data security and responsible handling of user information.

✅ Path to Safer AI: COMPL-AI offers a roadmap to align LLMs with regulatory standards, encouraging development that not only enhances capabilities but also meets ethical and safety requirements (an illustrative scoring sketch follows this post).

𝐖𝐡𝐲 𝐢𝐬 𝐭𝐡𝐢𝐬 𝐢𝐦𝐩𝐨𝐫𝐭𝐚𝐧𝐭?

➡️ The COMPL-AI framework is crucial because it provides a structured, measurable way to assess whether large language models (LLMs) meet the ethical and regulatory standards set by the EU's AI Act, whose obligations begin to apply in early 2025.

➡️ As AI is increasingly used in critical areas like healthcare, finance, and public services, ensuring these systems are robust, fair, private, and transparent becomes essential for user trust and societal impact. COMPL-AI highlights existing gaps in compliance, such as biases and privacy concerns, and offers a roadmap for AI developers to address these issues.

➡️ By focusing on compliance, the framework not only promotes safer and more ethical AI but also helps align technology with legal standards, preparing companies for future regulations and supporting the development of trustworthy AI systems.

How ready are we?
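To illustrate the idea of mapping benchmark results onto the Act's ethical principles, here's a minimal sketch. The principle names follow the post, but the scores and the aggregation rule are invented for illustration and are not COMPL-AI's actual methodology.

```python
from statistics import mean

# Hypothetical per-benchmark scores (0-1) grouped by EU AI Act principle;
# the numbers are invented for illustration.
benchmark_scores = {
    "robustness": {"adversarial_qa": 0.62, "perturbation_stability": 0.58},
    "privacy": {"pii_leakage": 0.71},
    "transparency": {"self_disclosure": 0.55},
    "fairness": {"bias_stereotypes": 0.49, "group_parity": 0.66},
    "safety": {"harmful_refusal": 0.83},
}

def principle_report(scores: dict[str, dict[str, float]],
                     threshold: float = 0.6) -> dict[str, str]:
    """Aggregate benchmark scores per principle and flag gaps below a threshold."""
    report = {}
    for principle, results in scores.items():
        agg = mean(results.values())
        status = "OK" if agg >= threshold else "GAP"
        report[principle] = f"{agg:.2f} ({status})"
    return report

for principle, summary in principle_report(benchmark_scores).items():
    print(f"{principle}: {summary}")
```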
-
The European Confederation of Institutes of Internal Auditing (ECIIA) has published “The AI Act: Road to Compliance,” which outlines critical steps to achieve compliance and effectively manage risks related to artificial intelligence (AI).

The European Union’s Artificial Intelligence Act, which came into force in August 2024, marks a significant milestone in AI regulation. This legislation introduces a phased approach to compliance requirements for organizations deploying or planning to deploy AI systems in the European market. The Act aims to balance protecting fundamental rights and personal data with fostering innovation and building trust in AI technologies.

Key Obligations Under the AI Act

1. AI Literacy: Organizations must ensure that those responsible for operating or using AI systems have an adequate understanding of AI principles and practices.

2. AI Registry: High-risk AI systems must be submitted to a central EU repository. Companies should also establish their own internal AI registries, documenting all AI systems they use or bring to market (a sketch of such a registry entry follows this post).

3. AI Risk Assessment: All systems listed in the AI registry must undergo risk assessments based on the classification methods outlined in the Act. Compliance with these standardized methods is mandatory.

The obligations and requirements vary depending on the risk level and the organization’s role in the AI value chain. This regulation represents a vital step toward aligning innovation with responsibility. To learn more, explore ECIIA’s full publication and begin preparing your organization for the future of AI compliance.

#ArtificialIntelligence #AICompliance #AIACT #InnovationAndTrust #RiskManagement
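Here is a minimal sketch of what one internal registry entry might look like, with invented field names; an actual registry should follow the Act's documentation requirements and the organization's own taxonomy.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRegistryEntry:
    """One row of a hypothetical internal AI system registry."""
    system_name: str
    business_owner: str
    role: str                        # "provider" or "deployer" under the AI Act
    risk_tier: str                   # e.g. "high", "limited", "minimal"
    in_eu_market: bool
    registered_in_eu_database: bool  # required for high-risk systems
    last_risk_assessment: date

registry = [
    AIRegistryEntry("cv-screening-tool", "HR", "deployer", "high",
                    in_eu_market=True, registered_in_eu_database=True,
                    last_risk_assessment=date(2025, 1, 15)),
]

# Flag entries whose risk assessment is overdue (illustrative 12-month cycle).
for entry in registry:
    if (date.today() - entry.last_risk_assessment).days > 365:
        print(f"Risk assessment overdue: {entry.system_name}")
```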
-
10 steps to avoid a €35 million fine for your AI-powered medical device

Medical device companies need to take several steps to comply with the EU AI Act. Here's a ten-step action plan towards compliance:

1. Assess AI systems
- Determine if your medical devices incorporate AI/ML systems
- Classify these systems as "high-risk" under the EU AI Act
- Prepare for registration in the EU database for high-risk AI systems

2. Implement an AI Quality Management System (QMS)
- Integrate AI-specific requirements into your existing medical device QMS
- Ensure compliance with Article 17 of the EU AI Act
- Can be combined with an existing ISO 13485 medical device QMS

3. Develop comprehensive technical documentation
- Create detailed AI system documentation as per Annex IV of the Act
- Include design specifications, system architecture, data requirements, training methodologies, and performance metrics
- Combine with existing EU MDR/IVDR technical documentation

4. Implement a risk management system
- Identify, evaluate, and mitigate AI-specific risks
- Align with the EU MDR risk-management system
- Focus on health, safety, and fundamental rights risks

5. Enhance data governance
- Assess data availability, quantity, and suitability
- Examine potential biases in datasets
- Consider geographical, contextual, and behavioral factors

6. Ensure transparency and human oversight
- Implement measures for AI system transparency
- Establish human oversight mechanisms

7. Set up incident reporting and post-market monitoring
- Develop systems for reporting serious AI-related incidents
- Implement continuous post-market monitoring of AI system performance (see the sketch after this post)

8. Conduct Fundamental Rights Impact Assessments
- Assess potential impacts of AI systems on fundamental rights
- Implement mitigation strategies for identified risks

9. Appoint an EU authorized representative
- Required for providers established outside the EU

10. Prepare for conformity assessment
- Conduct internal conformity assessments
- Engage with notified bodies for certification of high-risk AI systems
- Align conformity assessment processes with both MDR/IVDR and AI Act requirements

How are you getting ready for the EU AI Act?
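As a rough sketch of step 7's performance monitoring, the snippet below flags a drop in a tracked metric against a validated baseline. The thresholds and metric values are invented for illustration and would need clinical and regulatory validation in practice.

```python
from statistics import mean

# Hypothetical weekly sensitivity readings for a diagnostic model in production.
baseline_sensitivity = 0.94      # from the pre-market validation study
recent_readings = [0.93, 0.91, 0.88, 0.86]
alert_margin = 0.03              # illustrative tolerance, not a regulatory figure

def performance_degraded(readings: list[float], baseline: float,
                         margin: float) -> bool:
    """Flag sustained degradation relative to the validated baseline."""
    return mean(readings) < baseline - margin

if performance_degraded(recent_readings, baseline_sensitivity, alert_margin):
    print("ALERT: performance drift detected; trigger incident review workflow.")
```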
-
Yayyy - New publication out in the Harvard Data Science Review on "The Future of Credit Underwriting and Insurance Under the EU AI Act: Implications for Europe and Beyond," co-authored with excellent ML and strategy insights by Maximilian Eber from rockstar Fintech startup Taktile. Open Access!

As the EU AI Act becomes applicable, financial institutions face a regulatory maze for credit scoring and certain insurance types. Our paper tackles critical questions that every lender and insurer needs to consider.

Key Findings:

1) Cross-Border Impact: The AI Act's extraterritorial reach means non-EU companies serving European customers will also need to comply, potentially reshaping global underwriting practices and creating competitive advantages for early adopters.

2) Broader Scope Than Expected: The AI Act captures far more than "AI companies" think. Even traditional statistical models like logistic regression may qualify as "AI systems" under the Act's broad definition, despite recent AI Office guidance suggesting otherwise. Courts may not follow this restrictive interpretation.

3) Provider vs. Deployer Transitions: Financial institutions can quickly shift from "deployer" to "provider" status (with full compliance burdens) through seemingly minor actions like rebranding systems, changing their intended purpose, or making substantial model modifications. No grace period exists.

4) Regulatory Layering, Not Replacement: The AI Act creates an additional compliance layer alongside existing banking regulations (CRR, MaRisk, DORA). While some integration is possible (Article 17(4)), key obligations like post-market monitoring and fundamental rights impact assessments remain AI Act-specific. We map out overlaps and interactions, and suggest actionable compliance strategies.

5) Data Governance: Under Art. 10 AI Act, financial institutions must demonstrate demographic representativeness, implement bias detection, and document mitigation strategies, going beyond current banking requirements (a toy bias check appears after this post).

6) Unregulated Entity Exposure: Fintech companies and other unregulated financial entities face the full burden of AI Act compliance without the alleviations available to regulated financial institutions.

Critical Insight: The AI Act is best understood as the "Complex Decisioning Act": it captures automated systems using statistical models, decision trees, and ensemble methods alongside advanced ML. Traditional rule-based systems and basic data processing remain exempt.

Algorithmic decision-making in finance is shifting rapidly, both in tech and in law. Understanding these regulatory changes now is crucial for maintaining compliance while continuing to innovate.

Read the full paper here: https://lnkd.in/eUy4E2xt

Comments, lessons and critique most welcome! PS Despite being a Yalie, I stand firmly with Harvard in these times!

#AIRegulation #EUAIAct #FinTech #CreditUnderwriting #Insurance #DataScience #Compliance #FinancialServices
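As a toy illustration of the bias-detection idea in point 5, the sketch below computes a demographic parity gap on approval outcomes. The data and the 0.10 threshold are invented, and Art. 10 compliance involves far more than a single metric.

```python
# Toy demographic parity check on loan approval outcomes.
# Data and threshold are invented for illustration only.
approvals = {
    # group -> (approved, total applicants)
    "group_a": (620, 1000),
    "group_b": (480, 1000),
}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
parity_gap = max(rates.values()) - min(rates.values())

print(f"approval rates: {rates}")
print(f"demographic parity gap: {parity_gap:.2f}")
if parity_gap > 0.10:  # illustrative tolerance, not a legal standard
    print("Potential bias flagged: document investigation and mitigation per Art. 10.")
```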
-
The Future of Privacy Forum and OneTrust have published an updated guide to help organizations navigate Conformity Assessments (CAs) under the final version of the EU #ArtificialIntelligence Act. CAs are a cornerstone of the EU AI Act's compliance framework and will be critical for any organization developing or deploying high-risk #AIsystems in the EU. The guide offers a clear and practical framework for assessing whether, when, and how a CA must be conducted. It also clarifies the role of CAs as an overarching accountability mechanism within the #AIAct.

This guide:
- Provides a step-by-step roadmap for conducting a Conformity Assessment under the EU AI Act.
- Presents CAs as essential tools for ensuring both product safety and regulatory compliance.
- Identifies the key questions organizations must ask to determine if they are subject to CA obligations.
- Explains the procedural differences between internal and third-party assessments, including timing and responsibility (a much-simplified routing sketch follows this post).
- Details the specific compliance requirements for high-risk #AI systems.
- Highlights the role of documentation and how related obligations intersect with the CA process.
- Discusses the use of harmonized standards and how they can create a presumption of conformity under the Act.

This guide serves as a practical resource for understanding the conformity assessment process and supporting organizations in preparing for compliance with the EU AI Act.
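To illustrate the internal vs. third-party distinction the guide describes, here is a much-simplified sketch. The routing logic compresses Article 43 heavily and is an assumption-laden teaching aid, not legal advice.

```python
def ca_route(annex_iii_biometrics: bool,
             harmonised_standards_fully_applied: bool,
             annex_i_product_with_notified_body: bool) -> str:
    """Very simplified sketch of choosing a conformity assessment route (Art. 43)."""
    if annex_i_product_with_notified_body:
        # e.g. medical devices: AI checks fold into the sectoral third-party procedure
        return "third-party (existing sectoral notified-body procedure)"
    if annex_iii_biometrics and not harmonised_standards_fully_applied:
        return "third-party (notified body, Annex VII)"
    return "internal control (Annex VI)"

print(ca_route(annex_iii_biometrics=True,
               harmonised_standards_fully_applied=False,
               annex_i_product_with_notified_body=False))
# -> third-party (notified body, Annex VII)
```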