Event Recap: AI Adoption & Regulation – Real World Impact

As artificial intelligence continues its shift from hype to hard reality, NI Cyber and A&L Goodbody LLP hosted a timely and insightful in-person event exploring the real-world impact of AI adoption, with a focus on the practical, the legal, and the regulatory. The session brought together a dynamic group of professionals from cybersecurity, legal, innovation, and compliance backgrounds to cut through the noise and get to the heart of what AI means for business today.

The event featured a presentation from Aisling Byrne, partner in A&L Goodbody's Employment & Incentives Group, who drew on over 20 years of experience in employment and equality law. This was followed by a lively panel discussion featuring Aisling Byrne and Carrie McMeel of A&L Goodbody, and Ryan Donnelly of Enzai.

Key Questions Explored

The panel tackled some of the most pressing questions on the minds of businesses exploring or implementing AI:
- How has AI adoption impacted your own work?
- What are the biggest risks for UK businesses, and how can they be managed?
- How should organisations approach AI risk assessments, especially in high-stakes sectors?
- What liability issues arise from AI use, and how are these currently addressed under Northern Ireland law?

Highlights & Insights

Several critical themes emerged throughout the discussion:
- Global Divergence in Regulation: Some nations are prioritising AI monetisation over risk mitigation. While the UK stands as the third-largest AI economy globally, it currently relies on general regulatory frameworks, with sector-specific rules (e.g., FCA regulations) applying in high-risk areas.
- The Importance of Governance: Key risks such as hallucinations in AI outputs demand robust governance. A human-in-the-loop model remains a vital component of compliance and quality assurance.
- Legal Lag and Liability Complexity: Courts and public institutions are struggling to keep pace with AI development, creating grey areas around liability across the AI supply chain. Concepts like a "victims fund" were discussed as potential tools for future frameworks.
- Practical Risk Management Tips:
  - Treat AI as a tool, not a solution; context and implementation matter.
  - Ask the right questions of your AI systems and providers.
  - Consider ISO standards and other compliance certifications.
  - Evaluate your risk appetite and monitor your supply chain closely.
  - Be aware of "shadow AI" use within organisations (i.e., unsanctioned AI tools).

Takeaway: AI is no longer just a buzzword; it's a business reality. And with that comes a need for smart adoption strategies, strong legal understanding, and proactive risk management.

Don't forget: our DORA webinar in partnership with A&L Goodbody LLP, also with Daniel Jelly, Complyfirst, and Salt Communications: https://lnkd.in/eMFSyU_T
