SPLX’s cover photo
SPLX

SPLX

Computer and Network Security

The end-to-end platform to test, protect, and govern AI at enterprise scale

About us

SPLX is the leading AI security platform for Fortune 500 companies and global enterprises. We help organizations accelerate safe and trusted AI adoption by securing LLM-powered systems across the entire lifecycle – from development to deployment. Our platform combines automated AI red teaming, real-time threat detection & response, and compliance mapping to uncover vulnerabilities, block live threats, and enforce AI policies at scale. Built by AI security experts and world-class red teamers, SPLX empowers security, engineering, and risk teams to adopt LLMs, chatbots, and agents with confidence – protecting against prompt injection, jailbreaks, data leakage, off-topic responses, privilege escalation, and evolving threats. Whether you're deploying internal copilots or external-facing assistants, SPLX gives you the visibility, control, and automation needed to stay ahead of AI risks and regulations.

Website
https://splx.ai
Industry
Computer and Network Security
Company size
11-50 employees
Headquarters
New York
Type
Privately Held
Founded
2023
Specialties
LLM Security, Continuous Red-Teaming, GenAI Risk Mitigation, GenAI Guardrails, Regulatory Compliance, On-Topic Moderation, AI chatbots, Conversational AI, AI Safety, AI Risk, GenAI Application Security, Pentesting, Chatbot Security, Large Language Models, Prompt Injection, Hallucination, Multi-Modal Prompt Injection, and Security Framework Mapping

Locations

Employees at SPLX

Updates

  • View organization page for SPLX

    5,046 followers

    The AI security landscape is evolving fast - and so is SPLX. 🚀 Our automated red teaming platform has already transformed how enterprises uncover critical vulnerabilities in their AI systems. 🚀 In Q2 alone, we delivered 160% growth and onboarded 5 new Fortune 500 customers. Now, we’re entering the next chapter. Our platform has evolved, and so has our brand identity, reflecting our commitment to securing the entire AI lifecycle. Here's what’s new. 🆕 AI Runtime Protection: Real-time guardrails that act like a firewall for AI apps. Prompt injections, jailbreaks, and sensitive data leaks are stopped the instant they happen. 🆕 Analyze with AI : Turns red team findings into clear, actionable insights so security teams can prioritize and respond fast. With these new capabilities, we are raising the bar for AI security once again. SPLX lets your org move fast and stay ahead with AI adoption - without compromising on safety and security. We’ll be showcasing our platform at Black Hat Vegas 2025. Can’t wait until then? Learn more about SPLX 2.0 👉 https://lnkd.in/dpnDxrmS

  • SPLX reposted this

    View profile for Sandy Dunn

    CISO | Board Member | AIML Security | CIS & MITRE ATT&CK | OWASP Top 10 for LLM Core Team Member | Incident Response |

    🚨 California’s AI Safety Bill (SB 53) is set to reshape how organizations approach AI governance, security & compliance. ⚖️🤖 organizations who act early on AI compliance mapping systems, aligning with NIST, and strengthening safeguards will have a strategic advantage. 📊✅ Link in comments. Brian Golumbeck Bertie Green Jerry Craft Kristen Bailey Julie Tsai Anny He Chris Hughes Chris M. Ward Brad Frazer Kevin Rank, MBA Robert Ranson Dutch Schwartz #AIsecurity #GenAI #EnterpriseRisk #CISO #AIgovernance #Cybersecurity #RiskManagement #SPLX #Leadership #Innovation

  • View organization page for SPLX

    5,046 followers

    Mark Zuckerberg, did you try turning it off and on again? We’ve all had those awkward moments where tech lets us down, but usually not with the whole world watching. That’s exactly what happened when Meta unveiled their AI glasses live on stage this week. When it comes to enterprise AI, failures can damage reputations, leak sensitive data… or, like in this case, give the internet a field day. That’s why the SPLX platform runs deep Q&A tests to catch issues like prompt misalignment, context drift, or instruction failures before launch AND while live in production. And hey, be sure to always double-check that Wi-Fi connection. Thanks Dorian Schultz for the kind offer to help 😉 🎥 Daniel Miessler 🛡️ Joseph Thacker Julie Tsai Chenxi Wang, Ph.D. Michael Sutton Kristian Kamber Ante Gojsalic Edward Amoroso Dennis Xu Karol Lasota Bastien Eymery 🤖

  • SPLX reposted this

    View profile for Ante Gojsalic

    Building AI Security Products

    𝗚𝗣𝗧 𝗔𝗴𝗲𝗻𝘁 𝘃𝘀. 𝗖𝗔𝗣𝗧𝗖𝗛𝗔 CAPTCHAs exist to block automation, and models like ChatGPT are explicitly trained not to solve them. It’s a strict guardrail meant to prevent fraud, abuse, and compliance risks. But what happens when you trick a ChatGPT agent into doing it anyway? That’s what we tested - using multi-turn prompt injection and context priming. The result? The agent solved CAPTCHAs - even some complex image-based ones - and went as far as adjusting its cursor movements to appear more human. Why this research matters: - CAPTCHAs may no longer be a reliable security measure in the age of AI. - Guardrails based only on intent detection are brittle - prompt injection can bypass them. - In enterprise settings, similar manipulation could lead to data leakage, restricted access, or disallowed content generation. 👉 Read the full report: https://lnkd.in/dQ9ZGJgG

  • SPLX reposted this

    View profile for Sandy Dunn

    CISO | Board Member | AIML Security | CIS & MITRE ATT&CK | OWASP Top 10 for LLM Core Team Member | Incident Response |

    🔥 AI is everywhere, do you know where your company is using AI? Audits, lawsuits, and regulatory pressure are rising and one question keeps catching organizations off guard:  In my latest article, Avoid Lawsuits from Enterprise AI: Why Asset Management Is Your First Line of Defense, I break down: How “AI creep” (vendors, embedded tools, APIs, etc.) exposes you to bias, compliance, and security risks.  What AI asset management looks like in practice—from models to prompts to infrastructure.  Concrete frameworks (AIUC-1, EU AI Act, others) you can apply to classify risk, assign ownership, and shore up governance.  Metrics that matter: visibility, risk reduction, compliance readiness, cost optimization.  📌 Read the full article https://lnkd.in/gBvn9nyR Would love to hear: what’s the biggest blind spot your organization has found when it comes to tracking AI use & risks? Thank you to Barry Hurd for his HR TECH AI COMPLIANCE RISK ASSESSMENT Report used in this article Kristian Kamber Michael Sutton David Endler Stanislav Sirakov Petar Tsachev Lars Godejord Jure Mikuž Manoj Apte Joseph Thacker Sergej Epp Daniel Miessler 🛡️ Miessler John Stewart Ofer Ben-Noon Saša Zdjelar Julie Tsai Luka Kamber Edward Amoroso Dionisio Zumerle Dennis Xu #EnterpriseAI #AICompliance #AISecurity #AIAssetManagement #CISO

  • View organization page for SPLX

    5,046 followers

    Newer ≠ safer. AI capabilities are maturing, but security is often left in the dust. Our CEO and co-founder Kristian Kamber sat down with Michael Vizard at Techstrong TV to discuss what’s going wrong - and why AI red teaming has never been more critical. 👇

    View organization page for Techstrong TV

    2,943 followers

    Why are the newest, most powerful AI models like GPT-5 testing as less secure than their predecessors? Michael Vizard speaks with SPLX CEO Kristian Kamber about the alarming disconnect between AI benchmarking pressure and real-world security. Kamber explains how the relentless drive for more data and capabilities often sidelines safety testing, leaving models open to hallucinations, data exfiltration, and entirely new classes of attacks that traditional security teams aren’t equipped to handle. ▶️ Watch the full interview here: https://lnkd.in/eUxkRabE #AISecurity #Cybersecurity #AICompliance #RiskManagement #LLM

  • SPLX reposted this

    View profile for Kristian Kamber

    CEO & Co-Founder @SPLX - 🟥 The world’s leading end-to-end AI Security Platform!

    OUR NEW CORE FEATURE IS HERE: AI Asset Management AI adoption is accelerating, and agentic systems promise a competitive edge. But we’ve all seen the stats: - Fewer than 1 in 3 orgs feel ready on security and governance - Trust in agentic AI is slipping So we built the fix: One platform, total visibility. SPLX now helps enterprises map, monitor, and secure every layer of their AI stack - from LLMs to agentic workflows to MCP servers. Built on the momentum of our open-source Agentic Radar, AI Asset Management takes it enterprise-grade. What’s new: 🟣 Agentic Workflow Discovery – map agents, tools, risky connections 🟣 Agent-Level Threat Detection – real-time, benchmarked risk scoring 🟣 Model Benchmarking + Compliance – #Owasp , #EUAIAct, and more 🟣 Automated AI BOM Generation – live inventory of deployed models 🟣 MCP Server Discovery & Scanning – detection + vulnerability mapping SPLX is leading the charge in AI Security Posture Management (AI-SPM) - giving teams the clarity to act before attackers do. More info 👉 https://lnkd.in/dw_g46Ui Michael Sutton David Endler Karol Lasota Stanislav Sirakov Lars Godejord Manoj Apte Joseph Thacker Sergej Epp Daniel Miessler 🛡️ John Stewart Julie Tsai Luka Kamber Ante Gojsalic Edward Amoroso Avivah Litan Dionisio Zumerle Mark Wah Dennis Xu

  • View organization page for SPLX

    5,046 followers

    🚨 Introducing AI Asset Management – Full Visibility Across Your AI Stack AI is embedding itself into critical business workflows, driving efficiency and unlocking new capabilities. But most enterprises are held back by a blind spot: They don’t know what models, agents, or workflows are running across their stack. AI Asset Management closes that gap. Built on the foundation of our popular open-source project, Agentic Radar, this new enterprise-grade feature delivers full visibility and security for your AI stack. Here’s what teams can do with it: ⚡ Agentic Workflow Discovery Visualize complex agentic workflows across agents, nodes, and tools ⚡ Agent-Level Threat Analysis Detect vulnerabilities with real-time, with benchmarked risk scoring ⚡ Automated AI BOM & Security Benchmarks Maintain a live inventory of models, scored across security, safety, and business alignment ⚡ MCP Server Discovery & Scanning Automatically identify and scan MCP servers for vulnerabilities ⚡ Compliance-Ready Reporting Generate reports aligned with OWASP LLM Top 10, EU AI Act, and more With AI Asset Management, enterprises finally gain the visibility and control to scale agentic AI with confidence. Learn more >>> https://lnkd.in/e8ucipph Kristian Kamber Ante Gojsalic Michael Sutton Chenxi Wang, Ph.D. David Endler Karol Lasota Stanislav Sirakov Petar Tsachev Lars Godejord Jure Mikuž Manoj Apte Joseph Thacker Sergej Epp Daniel Miessler 🛡️ John Stewart Ofer Ben-Noon Saša Zdjelar Julie Tsai

  • SPLX reposted this

    View profile for Ante Gojsalic

    Building AI Security Products

    I’m proud to announce 𝗦𝗣𝗟𝗫 𝗔𝗜 𝗔𝘀𝘀𝗲𝘁𝘀! It all started a year ago with idea of 𝗔𝗴𝗲𝗻𝘁𝗶𝗰 𝗥𝗮𝗱𝗮𝗿, open-source tool created to improve precision of Agentic AI Red Teaming. Agentic Radar evolved into an enterprise-ready discovery module that reveals vulnerabilities within AI BOM and AI Workflows living in enterprise source-code infrastructure. Why this was crucial for SPLX platform? 🔎 We finally closed an AI-SPM loop that starts with 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝘆 𝗼𝗳 𝗔𝗜 𝗔𝗴𝗲𝗻𝘁𝘀, 𝗟𝗟𝗠 𝗺𝗼𝗱𝗲𝗹𝘀, 𝗠𝗖𝗣 𝗦𝗲𝗿𝘃𝗲𝗿 𝗮𝗻𝗱 𝗔𝗴𝗲𝗻𝘁 𝘁𝗼𝗼𝗹𝘀 in your environment. 🧪 After all repositories are scanned and potentially vulnerable AI components are identified, we provide one-click 𝘁𝗿𝗮𝗻𝘀𝗶𝘁𝗶𝗼𝗻 𝘁𝗼 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗔𝗜 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗧𝗲𝘀𝘁𝗶𝗻𝗴. 🛡️ Last part is an 𝗮𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗿𝗶𝘀𝗸 𝗿𝗲𝗺𝗲𝗱𝗶𝗮𝘁𝗶𝗼𝗻 by hardening your business logic and dynamically generating input/output filtering rules for your guardrails. While having it in Alpha version, we already collected 20+ feature requests for AI Assets, so step-by-step it should get as rich as other parts of SPLX platform. Congrats to whole SPLX team and our design partners on this huge milestone! 👇 Link for more details is in the comment.

    • No alternative text description for this image
  • SPLX reposted this

    View profile for Ante Gojsalic

    Building AI Security Products

    After a great experience at BSides Frankfurt, I'm happy to announce probably my last workshop for this year.. Join me at Security BSides Kraków to unpack the real-world risks behind today’s GenAI deployments. 𝗘𝘃𝗮𝗱𝗶𝗻𝗴 𝗚𝗲𝗻𝗔𝗜 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗗𝗲𝗳𝗲𝗻𝘀𝗲𝘀 🕤 Sept 27 | 09:45–10:30 📍 BSides Krakow In this session, we’ll break down: - The new attack surface GenAI creates - The most common security missteps teams are making today - What to prioritize in GenAI risk assessments, and why If you're red teaming, securing, or building with LLMs, this session is for you. Come challenge your assumptions and break things, safely. 🎟️ Link to tickets in the comments

    • No alternative text description for this image

Similar pages

Browse jobs

Funding

SPLX 2 total rounds

Last Round

Seed

US$ 7.0M

See more info on crunchbase