On-demand Webinar

AI Red Teaming, Agentic Risks & Compliance: Ask the Experts

Experts break down AI security

As AI adoption accelerates, so do the risks: from adversarial manipulation and shadow AI to autonomous agents making unsupervised decisions. Back by popular demand, join this ask-me-anything (AMA) webinar to hear the latest insights from a panel of AI security experts spanning offensive testing, agentic AI governance, and enterprise innovation. We’ll unpack your most pressing questions about AI security, covering everything from AI red teaming and agentic testing to technical controls, compliance frameworks, and operational safeguards.

Whether you’re just beginning to evaluate AI or are already deploying models across your enterprise, this session offers actionable insights to help you secure your AI stack before attackers exploit it.

What you’ll take away:

  • How AI red teaming and agentic testing differ from traditional approaches, and why they matter now
  • The latest emerging threats to AI systems (e.g., data poisoning, model inversion, policy bypass, autonomous misuse)
  • How to build an AI security strategy that balances innovation with risk
  • Key controls and questions to ask when adopting or partnering on AI
  • Answers to your questions on AI risk, compliance, and secure deployment
Watch Now
André Baptista
Hacker & Co-Founder, Ethiack

André brings deep expertise in offensive testing and crowdsourced security. Recognized twice as Most Valuable Hacker by HackerOne, he has secured organizations like Shopify, Verizon, and the U.S. Department of Defense. As a co-founder of Ethiack and professor of information security, André explores how traditional red teaming evolves when applied to AI systems—highlighting where human ingenuity and adversarial testing remain irreplaceable.

Kayla Underkoffler
Lead Security Engineer, Zenity

Kayla specializes in agentic AI security and governance, helping enterprises manage the risks of autonomous systems and citizen-built AI. With a background spanning the U.S. Marine Corps, vulnerability management, and crowdsourced security, she now focuses on building guardrails for AI agents in enterprise environments. Kayla bridges innovation and policy, shaping how organizations secure the next wave of agent-driven technology.

Dane Sherrets
Staff Innovations Architect, HackerOne

Dane focuses on emerging technology security, advancing HackerOne’s testing capabilities for AI/ML and beyond. With a background in ethical hacking, SaaS, and cloud applications, he explores how innovation in adversarial and autonomous testing can be operationalized for enterprise security programs. Dane’s work ensures organizations can adopt cutting-edge technologies without sacrificing safety.