AI in Legal and Security: Navigating the Intersection
At RSAC 2025, we had the opportunity to sit down with Paul Starrett, a legal and compliance expert specializing in AI governance. Paul is a co-founder of PrivacyLabs, a consultancy and legal advisory firm focused on AI governance, data privacy and compliance. Our conversation explored the fast-evolving overlap between AI, law, and cybersecurity, and what it means for today’s enterprise.
AI’s Legal Impact: The Next Compliance Frontier
Paul described how AI is reshaping the legal profession—from contract analysis and e-discovery to operational oversight. But with this efficiency comes risk. Secure cloud environments and airtight compliance protocols are no longer optional—they’re foundational.
His firm is already seeing AI accelerate due diligence, reduce time spent on manual reviews, and enhance how legal teams support the business. But this new frontier raises tough questions: Who’s accountable when AI gets it wrong? How do we assign liability and build enforceable governance?
Security and Legal: A Growing Convergence
We also talked about how AI is redefining collaboration between legal and security teams. Paul sees a future where generative AI becomes a shared resource—automating baseline compliance queries while giving experts space to focus on high-risk, high-judgment calls.
The key? Pairing AI precision with human oversight. Without context, AI can misinterpret nuance. With the right structure, it becomes a force multiplier.
“Generative AI can be an incredible assistant,” Paul said. “But governance must stay one step ahead.”
Governance by Design: Mapping the AI Stack
One concept Paul introduced was the AI Bill of Materials (BOM)—an emerging best practice that inventories all AI components in use across the org. Think of it as a living map that legal, risk, and security teams can use to understand exposure, compliance posture, and audit readiness.
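To make the AI BOM idea concrete, here is a minimal sketch of what such an inventory might look like in code. The field names, risk tiers, and example components are illustrative assumptions, not a standard schema; real AI BOM formats are still emerging.

```python
from dataclasses import dataclass

@dataclass
class AIComponent:
    """One entry in an AI Bill of Materials (illustrative fields)."""
    name: str           # internal identifier for the component
    vendor: str         # supplier, or "internal" for in-house models
    model_type: str     # e.g. "LLM", "classifier"
    data_sources: list  # data the component ingests
    risk_tier: str      # assumed tiers: "minimal", "limited", "high"
    owner: str          # accountable team for audit readiness

def audit_exposure(bom, tier="high"):
    """Return the components in a given risk tier for legal/security review."""
    return [c for c in bom if c.risk_tier == tier]

# A hypothetical two-item inventory spanning legal and security use cases
bom = [
    AIComponent("contract-review-llm", "ExampleVendor", "LLM",
                ["contracts", "e-discovery docs"], "high", "legal-ops"),
    AIComponent("spam-filter", "internal", "classifier",
                ["email metadata"], "minimal", "secops"),
]

flagged = audit_exposure(bom)  # only the high-risk contract-review model
```

Even a simple listing like this gives legal, risk, and security teams one shared view of what is deployed, who owns it, and where review effort should go first.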
He also emphasized the importance of global frameworks like the EU AI Act, which—like GDPR before it—is setting the tone for how organizations classify, govern, and monitor their AI systems. Understanding these rules now means fewer surprises later.
Balancing Innovation with Risk
Our conversation closed with a powerful takeaway: The risks of ignoring AI are quickly outpacing the risks of using it.
Organizations that embrace responsible governance—not just for AI, but with AI—will be better positioned to compete, comply, and innovate. The intersection of law and security isn’t just a risk zone anymore. It’s a strategic battleground where clarity, transparency, and collaboration will define the winners.
Final Thoughts
This conversation with Paul was a reminder that AI governance isn’t just about tools or frameworks—it’s about foresight. As AI weaves deeper into the fabric of business, legal and security teams must evolve together. With the right foundation, AI can become a trusted partner in both innovation and assurance.