Re-imagining Threat Modeling with Generative AI and LLMs
As enterprise systems grow in complexity, traditional threat modeling, while essential, struggles to keep pace with evolving architectures and shrinking development cycles. This paper explores a novel, LLM-powered approach to threat modeling in which generative AI augments the process by infusing context awareness, prompt-driven automation, and multimodal input handling. By leveraging techniques such as Retrieval-Augmented Generation (RAG), structured prompt engineering (e.g., COSTAR), and diagram ingestion via multimodal models, organizations can drastically accelerate and enhance their ability to foresee and mitigate potential threats. The methodology also emphasizes human oversight, ensuring that AI output remains grounded in real-world judgment. Framed through a "Minority Report"-style lens of predictive security, this work presents a systematic, actionable guide to building context-sensitive, scalable threat modeling tools, turning what was once a manual, expert-driven activity into a democratized, AI-assisted practice. The result is a shift-left security posture that is not just efficient, but inevitable.
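To make the structured prompt engineering concrete, the following is a minimal sketch of how a COSTAR-formatted prompt (Context, Objective, Style, Tone, Audience, Response format) might be assembled for an LLM-based threat modeling assistant. The function name, section wording, and the use of STRIDE categories are illustrative assumptions, not part of the paper's defined methodology.

```python
# Minimal sketch (hypothetical helper): assembling a COSTAR-structured
# prompt for an LLM-based threat modeling assistant.
# COSTAR = Context, Objective, Style, Tone, Audience, Response format.

def build_costar_prompt(system_description: str) -> str:
    """Compose a COSTAR-structured threat modeling prompt from an
    architecture description supplied by the user."""
    sections = {
        "CONTEXT": (
            "You are assisting a security team. Architecture under review:\n"
            + system_description
        ),
        "OBJECTIVE": (
            "Enumerate plausible threats using STRIDE categories and "
            "suggest a mitigation for each."
        ),
        "STYLE": "Concise, structured security analysis.",
        "TONE": "Professional and factual; flag uncertainty explicitly.",
        "AUDIENCE": "Application developers and security reviewers.",
        "RESPONSE": (
            "A markdown table with columns: Threat, STRIDE Category, "
            "Affected Component, Mitigation."
        ),
    }
    # Render each COSTAR section under its own header.
    return "\n\n".join(f"# {name}\n{body}" for name, body in sections.items())

prompt = build_costar_prompt(
    "A public REST API backed by a PostgreSQL database, "
    "with JWT-based authentication."
)
print(prompt)
```

In practice the assembled string would be sent as the user or system message to the chosen LLM, with RAG-retrieved architecture context spliced into the CONTEXT section.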