👨💻 Curious how LLM agents actually work? This BruCON course shows how they plan, call tools, and interact using A2A protocols. Build your first secure agent, attack it, and fix it. Code + hacking = unforgettable AI deep dive. 🔥 https://ow.ly/vQVQ50Wp01e
BruCON’s Post
More Relevant Posts
-
Some time ago I have presented a POC for fully automatic, LLM agent based attack framework with LLM controlled C2 and undetected stealer malware #DeepSEC... I have warned, and here it is, two great projects I bumped in recently: HexStrikeAI: The latest release, v6.0, equips AI agents like OpenAI’s GPT, Anthropic’s Claude, and GitHub’s Copilot with a formidable arsenal of over 150 professional security tools, enabling autonomous penetration testing, vulnerability research, and bug bounty automation. https://lnkd.in/dBC48Sek BruteForceAI: Auto BruteForce, seeks for targets and tries to bruteforce https://lnkd.in/dEhtYGjb
To view or add a comment, sign in
-
🎙️ What if fixing vulnerabilities was no longer a slog but an automated service? On Generationship, John Amaral of Root unpacks how AI agents are reshaping security, turning weeks of patching into hours, and freeing humans to focus on strategy rather than toil. Tune in! 🎧 https://hubs.ly/Q03G6q_v0
To view or add a comment, sign in
-
-
Cybersecurity researchers have demonstrated a new prompt injection technique called PromptFix that tricks a generative artificial intelligence (GenAI) model into carrying out intended actions by embedding the malicious instruction inside a fake CAPTCHA check on a web page. #BT #Infosec #2025 https://lnkd.in/gkvi_nkT
To view or add a comment, sign in
-
Lakera is launching Gandalf: Agent Breaker, a hacking simulator game that lets you uncover and exploit the real-world vulnerabilities hiding in Agentic AI applications. By playing, you’ll see first-hand how these systems can be broken, and why securing them needs a novel approach. 🔗 https://lnkd.in/egby3XBp Each challenge is based on realistic attack scenarios, from prompt attacks and memory tampering to tool abuse and data leaks. You choose your approach, break the app, and see how far you can get. A free, hands-on way to learn GenAI security by doing—with 10 different apps to break, multiple ways to win, and a global leaderboard to showcase your skills.
To view or add a comment, sign in
-
-
Prepping for the 16th Annual Billington CyberSecurity Summit starts tomorrow and thinking about the many ways to reduce exposure surfaces we don't consider. Received an ad for locking down MCP servers that didn't touch where the LLM's they wanted to secure were given access. AI that only receives inputs and provides humans with actionable plans that are documented is very different from AI systems granted administrative rights. Tomorrow I'm interested in how IT Operations can be enhanced through ABAC applied to MCP/LLM and how document/video system outputs can be secured through redaction.
To view or add a comment, sign in
-
Our latest feature reduces the critical gap between finding vulnerabilities and fixing vulnerabilities in AI agents. Until today, we offered two separate capabilities -- one to run automated red-team tests and another to enforce org policies on the agent's inputs and outputs. Now, vijil uses the results of red-team testing to auto-generate guardrails designed to address the detected vulnerabilities. For example, if Vijil test results show that the agent is prone to prompt injections, PII disclosure, and toxicity, Vijil generates a bespoke guardrail configuration designed to block or redirect detected inputs and outputs, with the lowest latency. No need to guess your guardrails. Learn more at https://lnkd.in/g6zVg9Kd
To view or add a comment, sign in
-
Red-team tests only evaluate your AI agent, in various ways. The point, however, is to change it. The delay between finding issues and fixing issues can make all the difference in the world. At Vijil, we're building a platform that tightly couples red-team risk assessment with blue-team risk mitigation to reduce an agent's exposure to a hostile environment.
Our latest feature reduces the critical gap between finding vulnerabilities and fixing vulnerabilities in AI agents. Until today, we offered two separate capabilities -- one to run automated red-team tests and another to enforce org policies on the agent's inputs and outputs. Now, vijil uses the results of red-team testing to auto-generate guardrails designed to address the detected vulnerabilities. For example, if Vijil test results show that the agent is prone to prompt injections, PII disclosure, and toxicity, Vijil generates a bespoke guardrail configuration designed to block or redirect detected inputs and outputs, with the lowest latency. No need to guess your guardrails. Learn more at https://lnkd.in/g6zVg9Kd
To view or add a comment, sign in
-
80% of Ransomware is Now AI-Powered | Are You Ready? Attackers are using AI to pick the lock, sneak inside and hide their tracks. This isn’t the time for “good enough” security. It’s the time to upgrade your defenses. 🔴 We use Machine Learning + Behavioral Analytics to catch threats that others miss, even the AI-driven ones. https://redborder.com/
To view or add a comment, sign in
-
-
🚨 Prompt injections are one of the biggest security risks facing AI agents today. Developers want velocity. Hackers want your data. Without the right safeguards, coding agents can become an open door. Tomorrow, we’ll show how OpenHands protects you—keeping agents fast and secure: 🔒 How prompt injections work 🔍 Mitigation strategies 🛑 Live demo of malicious code being intercepted Join Robert Brennan, Joe Pelletier, and Jamie Steinberg to see how OpenHands stops attacks in their tracks. 👉 Register now to join us live or get the recording: https://luma.com/akz33lyl
To view or add a comment, sign in
-
-
AI is evolving every day, but one problem keeps coming back: prompt injection. OWASP called it the number one threat to #LLMs, and the defenses we have today are still not enough. Our new blog explains why it’s such a tough #security challenge, and what steps you can take to protect your organization: https://okt.to/5CnMfU
To view or add a comment, sign in
-