🚨 New Study: AI Can Simplify Rules & Cut Red Tape

Regulatory complexity often creates uncertainty, slows down innovation, and increases the bureaucratic burden on businesses. But what if Artificial Intelligence could help us identify overlaps, contradictions, and simplification potential in regulations?

That’s exactly what a new feasibility study commissioned by Germany’s Federal Ministry of Finance (BMF) explored, and the findings are clear:
✅ AI-based applications can reliably support regulatory simplification
✅ Success depends on robust legal knowledge bases, suitable methodological approaches, and expert validation
✅ With the right framework, AI can strengthen transparency, legal certainty, and efficiency

We at Lexemo are proud to have contributed our expertise to this interdisciplinary project, led by d-fine and conducted in collaboration with A&O Shearman, Fraunhofer IAIS, and Prof. Dr. Florian Möslein (Philipps-University Marburg). Our role: bringing hands-on knowledge of AI integration into legal workflows, ensuring that AI is not just powerful but also transparent, trustworthy, and practically applicable.

🔎 The conclusion: AI has the potential to become a real game-changer in reducing red tape and enabling innovation, but it requires the right conditions and governance to be in place.

👉 Read more about how we at Lexemo are shaping the future of AI in legal and regulatory contexts: https://lnkd.in/eMDvpG8B
More Relevant Posts
It was a pleasure to be invited to the AI-Enabled Policymaking (AIPP) workshop, which formed the basis for this excellent report. The report highlights how today’s AI tools can support policymaking through drafting, summarization, and brainstorming, while also underscoring key bottlenecks such as model limitations, user skill gaps, and legal and privacy challenges. Looking ahead, it explores emerging needs for future AI systems in the policy space, emphasizing implementation considerations such as transparency, trust, responsible adoption, and the development of government-tailored tools. Thanks to RAND, The Stimson Center, and the Tony Blair Institute for Global Change for the workshop and the report. https://lnkd.in/eEHYq9A8
This by Aaron Benanav is really, really good: “The real threat posed by generative AI is not that it will eliminate work on a mass scale, rendering human labour obsolete. It is that, left unchecked, it will continue to transform work in ways that deepen precarity, intensify surveillance, and widen existing inequalities. Technological change is not an external force to which societies must simply adapt; it is a socially and politically mediated process. Legal frameworks, collective bargaining, public investment, and democratic regulation all play decisive roles in shaping how technologies are developed and deployed, and to what ends.” This comes from the preface to the Brazilian edition of “Automation and the Future of Work”. The original book is from early 2022 and while for obvious reasons it doesn’t discuss Generative AI, it’s still very much worth reading. You can read the whole preface here: https://lnkd.in/dfkXPruf
🚨 Still confused about what’s actually in the EU AI Act? Don’t worry. You’re not the only one.

The Act is about more than just risks and fines. It sets out clear rules for how AI should be built, deployed, and used.

⁉️ Did you know that since February 2025, AI literacy has been a mandatory requirement across all organisations? What does that mean in practice?
❌ Having strong tech teams is not enough.
✅ Everyone who interacts with AI needs to understand how it works, where it can go wrong, and how to use it responsibly.

⚠️ This isn’t just about avoiding penalties. It’s about building trust, staying competitive, and making AI work safely and effectively for your business.

We’ve unpacked it all in a short blog: what the Act says, what it means for your teams, and why now is the time to start investing in AI literacy. Check it out here: https://lnkd.in/eyehzm5b
I still feel that often discussions around AI in scholarly publishing focus on how to catch bad actors. That’s an essential use case. But it’s only one part of the story. There are many more opportunities for impact, both positive and negative. And to truly understand them, we need to hear from those who experience this shift first-hand: researchers and industry veterans. So let’s ask the bigger question: How do we preserve the human voice in AI-enhanced publishing? 🔗 Registration link in the comments.
Sen. Cruz Introduces New AI Policy Framework to Enhance U.S. Leadership in Artificial Intelligence
https://lnkd.in/g5Cpz9Kj

Unleashing American Innovation: The SANDBOX Act for AI Development

On July 23, 2025, U.S. Senate Commerce Committee Chairman Ted Cruz introduced the SANDBOX Act, a pivotal legislative proposal aiming to revolutionize artificial intelligence in America. The framework champions innovation by easing federal regulations on AI developers while ensuring accountability.

Key Highlights:
- Regulatory Sandbox: A space for developers to test new technologies without bureaucratic hurdles.
- Five Pillars: The framework outlines essential aspects for guiding AI policy.
- Collaboration: The Office of Science and Technology Policy will streamline regulation adjustments.
- Support from Industry Leaders: Backed by organizations like the U.S. Chamber of Commerce and the Information Technology Industry Council (ITI).

Cruz emphasized the urgency: “If we don’t lead in AI, our values risk being overshadowed by regimes that prioritize control.”

The SANDBOX Act is a crucial step toward ensuring that American innovation flourishes while safeguarding public interests. Join the conversation! Reflect on how regulatory changes can shape the future of AI and share your thoughts below!
It’s fair to say that Generative AI (#GenAI) is transforming the way we all do business, and nowhere more so than in competition-related cases and #investigations. The tech’s ability to analyse vast amounts of data quickly and accurately is proving to be a game-changer, particularly in high-stakes scenarios like cartel investigations, where speed and precision are critical.

In this article, my colleague Gary Foster covers how GenAI is being used across four key areas of competition cases to drive efficiencies and improve outcomes. He’ll look at:
📎 Document review and analysis
📈 Case strategy and reporting
👨💻 Short-form communication analysis
⭕ Proactive compliance

If you want to understand how GenAI is impacting competition-related cases, I urge you to take a read. Link in the comments below 🔽
✨ Trust is not optional in AI – it’s a requirement.

At the Trustworthy AI Summit 2025 from the European Trustworthy AI Association, I had the honour of presenting our poster: “Trustproofer: Assisting Operationalised AI System Trustworthiness.”

🧩 What is Trustproofer? It’s an agentic AI framework that helps practitioners enforce and operationalise trustworthiness in their AI systems. Trustproofer combines:
🤖 Multiple cooperating AI agents
🗂️ A symbolic layer of documentation cards plus a knowledge graph
📊 Actionable outputs like risk/quantitative assessments and trust reports

🔑 Key functionalities include:
✅ Recording documentation cards via conversational chat
✅ Generating risk and quantitative trustworthiness assessments
✅ Navigating applicable methods to improve trustworthiness
✅ Aligning solutions with human values and preferences
✅ Supporting implementation and continuous monitoring

🎉 Picture highlight: Sharing insights with Ana Garcia Robles, Secretary General of BDVA - Big Data Value Association. Always inspiring to get feedback from leaders pushing Europe’s data & AI trust agenda.

Curious how we turn trust from theory into practice? Let’s connect.

This research is part of the THEMIS 5.0 project at the IMU/ICCS, Information Management Unit, National Technical University of Athens (TRAIL Lab), with my colleagues Yiannos Paranomos, Katerina Lepenioti, Dimitris Apostolou, and Gregoris Mentzas.
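For a concrete feel of the documentation-card idea described above, here is a minimal, purely illustrative Python sketch. It is not Trustproofer’s actual API: the DocumentationCard class, the completeness-based risk_score, and the trust_report formatter are hypothetical stand-ins, and the real system’s cooperating agents, knowledge graph, and assessments go well beyond this.

```python
# Purely illustrative sketch; Trustproofer's real interfaces are not shown here.
# Idea: documentation cards recorded about an AI system feed a simple trust report.
from dataclasses import dataclass, field


@dataclass
class DocumentationCard:
    """One symbolic record describing an aspect of an AI system (hypothetical)."""
    topic: str                                              # e.g. "training data"
    answers: dict[str, str] = field(default_factory=dict)   # question -> answer


def risk_score(card: DocumentationCard, required: list[str]) -> float:
    """Naive proxy for risk: the share of required questions still unanswered."""
    missing = [q for q in required if not card.answers.get(q)]
    return len(missing) / len(required) if required else 0.0


def trust_report(cards: list[DocumentationCard], required: list[str]) -> str:
    """Render one line per card with its residual documentation risk."""
    lines = ["Trustworthiness report (illustrative)"]
    for card in cards:
        lines.append(f"- {card.topic}: residual risk {risk_score(card, required):.0%}")
    return "\n".join(lines)


if __name__ == "__main__":
    card = DocumentationCard("training data", {"provenance": "public corpus"})
    print(trust_report([card], ["provenance", "consent", "bias audit"]))
```

In the actual framework, per the post, such cards are captured through conversational chat and combined with a knowledge graph and cooperating agents, rather than scored against a hard-coded checklist as in this toy version.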
💡 AI is no longer just a tool; it’s becoming a co-researcher.

Yesterday at the British Accounting & Finance Association SWAG Conference, I had the pleasure of giving a keynote on the opportunities and synergies that AI is creating for accounting and finance research.

👉 In a nutshell: We went through several examples of AI transforming and enhancing the research #cycle, from design, ideation, and hypothesis development, through literature review, data collection, and empirical analysis, to interpretation and cross-disciplinary translation.

⚠️ The caveat: Alongside these benefits come important risks. Researchers (and journals) must remain vigilant against practices such as #HARKing (Hypothesising After Results are Known), p-hacking, and associated publication #biases that AI could exacerbate.

The challenge for our academic community is clear: to embrace AI’s potential while safeguarding the integrity of our research. 📊📚⚖️

Thanks Giulia Fantini and Brian Telford for the invitation.

#ArtificialIntelligence #FinanceResearch #AccountingResearch #ResearchIntegrity
How do we regulate AI before it regulates us?

The Harvard Gazette recently posed this question in a compelling piece exploring the next frontier of AI governance. As AI systems increasingly make decisions that affect our health, our money, and even our democracy, the urgency of thoughtful regulation becomes impossible to ignore.

Some takeaways struck a chord:

1. Accountability needs real teeth. When algorithms mislead, discriminate, or cause harm, who’s accountable? Vendors? Developers? Organisations deploying them? The EU AI Act starts to clarify these roles through distinct obligations for providers and users of high-risk systems, but real-world accountability will depend on how these rules are applied and enforced.

2. Pluralism offers a path forward. Instead of a binary choice between reckless acceleration and blanket bans, the idea of pluralism invites us to co-create AI systems that reflect diverse values and lived experiences. The AI Act’s tiered, risk-based model gestures in this direction, but broader inclusion and stakeholder input remain areas for growth.

3. Healthcare is a pressure point. Clinical AI tools promise breakthroughs, but where’s the infrastructure to monitor their safety and effectiveness post-deployment? The AI Act mandates post-market surveillance for high-risk systems, including those in healthcare, yet many organisations are still building the capability to meet that expectation.

4. Regulation isn’t a zero-sum game. It’s not innovation or oversight: it’s innovation through oversight. The AI Act reinforces this idea with requirements for transparency, human oversight, and impact assessments. The next step is embedding these into day-to-day operational practices.

This isn’t just a policy challenge. It’s a design challenge, a governance challenge, a leadership challenge.

#ResponsibleAI #AIRegulation #TrustByDesign

Article in the comments.
AI isn’t just showing up in research papers; it’s reshaping the systems that publish them. https://hubs.li/Q03FNsX40

From undetected content generation to AI-assisted peer review, platform builders now play a direct role in maintaining scientific integrity.

What we’re seeing:
‣ Disclosure isn’t happening
‣ Detection tools are missing
‣ Editorial workflows aren’t ready

The opportunity? Build systems that make responsible AI use visible, traceable, and manageable.