Some time ago I have presented a POC for fully automatic, LLM agent based attack framework with LLM controlled C2 and undetected stealer malware #DeepSEC... I have warned, and here it is, two great projects I bumped in recently: HexStrikeAI: The latest release, v6.0, equips AI agents like OpenAI’s GPT, Anthropic’s Claude, and GitHub’s Copilot with a formidable arsenal of over 150 professional security tools, enabling autonomous penetration testing, vulnerability research, and bug bounty automation. https://lnkd.in/dBC48Sek BruteForceAI: Auto BruteForce, seeks for targets and tries to bruteforce https://lnkd.in/dEhtYGjb
Introducing HexStrikeAI and BruteForceAI: AI-powered security tools
More Relevant Posts
-
👨💻 Curious how LLM agents actually work? This BruCON course shows how they plan, call tools, and interact using A2A protocols. Build your first secure agent, attack it, and fix it. Code + hacking = unforgettable AI deep dive. 🔥 https://ow.ly/vQVQ50Wp01e
To view or add a comment, sign in
-
-
🎙️ What if fixing vulnerabilities was no longer a slog but an automated service? On Generationship, John Amaral of Root unpacks how AI agents are reshaping security, turning weeks of patching into hours, and freeing humans to focus on strategy rather than toil. Tune in! 🎧 https://hubs.ly/Q03G6q_v0
To view or add a comment, sign in
-
-
Our latest feature reduces the critical gap between finding vulnerabilities and fixing vulnerabilities in AI agents. Until today, we offered two separate capabilities -- one to run automated red-team tests and another to enforce org policies on the agent's inputs and outputs. Now, vijil uses the results of red-team testing to auto-generate guardrails designed to address the detected vulnerabilities. For example, if Vijil test results show that the agent is prone to prompt injections, PII disclosure, and toxicity, Vijil generates a bespoke guardrail configuration designed to block or redirect detected inputs and outputs, with the lowest latency. No need to guess your guardrails. Learn more at https://lnkd.in/g6zVg9Kd
To view or add a comment, sign in
-
Red-team tests only evaluate your AI agent, in various ways. The point, however, is to change it. The delay between finding issues and fixing issues can make all the difference in the world. At Vijil, we're building a platform that tightly couples red-team risk assessment with blue-team risk mitigation to reduce an agent's exposure to a hostile environment.
Our latest feature reduces the critical gap between finding vulnerabilities and fixing vulnerabilities in AI agents. Until today, we offered two separate capabilities -- one to run automated red-team tests and another to enforce org policies on the agent's inputs and outputs. Now, vijil uses the results of red-team testing to auto-generate guardrails designed to address the detected vulnerabilities. For example, if Vijil test results show that the agent is prone to prompt injections, PII disclosure, and toxicity, Vijil generates a bespoke guardrail configuration designed to block or redirect detected inputs and outputs, with the lowest latency. No need to guess your guardrails. Learn more at https://lnkd.in/g6zVg9Kd
To view or add a comment, sign in
-
NBC put out a solid piece last week on the “AI-assisted hacking” moment—definitely worth a read. When attacks are one AI prompt away, Secure by Design matters more, not less. In an AI-led world, SbD: Removes the low-hanging fruit attackers automate first (default creds, sloppy auth, unsafe defaults). Builds friction into abuse paths (rate limits, guardrails, logging/alerts). Shifts cost back to the attacker—inch by inch. TL;DR: If the attack is one prompt away, your defense can’t be twelve configuration steps and a PDF. Porous is not a feature. Before you demo the Next Big Model, ask: “What did we eliminate so a script kiddie + LLM can’t win in five minutes?” #AISecurity #SecureByDesign #BuiltInNotBoltedOn #SoftwareSecurity #ModelSafety
To view or add a comment, sign in
-
🚨 Prompt injections are one of the biggest security risks facing AI agents today. Developers want velocity. Hackers want your data. Without the right safeguards, coding agents can become an open door. Tomorrow, we’ll show how OpenHands protects you—keeping agents fast and secure: 🔒 How prompt injections work 🔍 Mitigation strategies 🛑 Live demo of malicious code being intercepted Join Robert Brennan, Joe Pelletier, and Jamie Steinberg to see how OpenHands stops attacks in their tracks. 👉 Register now to join us live or get the recording: https://luma.com/akz33lyl
To view or add a comment, sign in
-
-
"That’s where AI offers the most real-world value to defenders: by helping us do the basics better, faster, and with fewer people." Another fantastic blog from RoboShadow founder Terry Lewis talking about the impact that AI is having on the cybersecurity race. On the flipside, a nice reminder to where AI is bring a lot of power to the workforce... validating the hours I have spent this week building GPTs to make mini versions of myself.
🚀 Our latest blog builds on our recent YouTube video and talks about how cybercriminals are now running hands-free hacks — scanning, exploiting, and adapting without a human even touching the keyboard. Meanwhile, the good guys are still stuck tuning alerts and patching late. Read more about what defenders must do here👉 https://ow.ly/iYKv50WGC4u
To view or add a comment, sign in
-
Cybersecurity researchers have demonstrated a new prompt injection technique called PromptFix that tricks a generative artificial intelligence (GenAI) model into carrying out intended actions by embedding the malicious instruction inside a fake CAPTCHA check on a web page. #BT #Infosec #2025 https://lnkd.in/gkvi_nkT
To view or add a comment, sign in
-
Choosing Apire means choosing a future where AI security is seamlessly integrated without the need for complex coding. 🚀 Our zero-code deployment shifts to a secure API endpoint, ensuring immediate protection against sophisticated threats like prompt injection and jailbreaking. With our 4-layer defense system, you can trust that your systems are safeguarded against AI-specific risks in a rapidly evolving landscape. Let's secure your AI journey together! 🌐 #AISecurity #ZeroTrust #EnterpriseSolutions https://wix.to/od4UcW2
To view or add a comment, sign in
-
-
The first AI-powered ransomware has been discovered — "PromptLock" uses local AI to foil heuristic detection and evade API tracking | Tom's Hardware https://lnkd.in/eV2E2PAR
To view or add a comment, sign in
Cybersecurity engineer with knowledge of security domains, product support, and IT infrastructure. Willing and proven ability to spearhead teams, and solve problems around all things related to information technology.
1wThanks Mark! I will take a deeper look at these.