Everyone is talking about using #AI to speed up #software #development. But here’s the question: are coding agents actually producing safe and secure code? The short answer: not yet. The better answer: they can, with the right guidance.

Recently, I had the privilege of bringing lessons from #LLM-driven development with Microsoft's customers into the OpenSSF Best Practices and AI/ML working groups. Together with amazing contributors, and with guidance from David A. Wheeler, we produced a new guide to help coding agents generate safer, more secure code.

The key insight? You can dramatically improve the security of AI-generated code by giving #AIAssistants the right context through standard instruction files such as agents.md, copilot-instructions.md, or .cursorrules.

Shaping the “security mindset” of coding agents is becoming essential. What strategies are you using to make sure AI-generated code is secure by design?

#DevTools #SoftwareDevelopment #SecureCoding #OpenSourceSecurity #OpenSSF #CyberSecurity
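To make that concrete: the snippet below is a rough illustration of what a security section in an agents.md or copilot-instructions.md file might look like. It is my own sketch, not text taken from the OpenSSF guide, and the exact wording and rules will vary per project.

```markdown
## Security requirements for generated code (illustrative example, not from the guide)

- Validate and sanitize all external input; never build SQL, shell commands, or HTML
  by string concatenation. Use parameterized queries and context-aware encoding.
- Never hard-code secrets, tokens, or credentials; read them from environment
  variables or a secrets manager.
- Use well-maintained, version-pinned dependencies; do not invent package names.
- Avoid deprecated or known-insecure functions; prefer memory-safe and type-safe APIs.
- For authentication and cryptography, use established libraries and currently
  recommended algorithms instead of custom implementations.
- Flag generated code that needs human security review, e.g. new network endpoints,
  file-system access, or privilege changes.
```

Nothing exotic, just explicit, checkable rules that the agent sees on every generation instead of having to guess your project's expectations.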
🚨 AI code assistants are powerful, but they’re only as secure as the prompts you give them. That’s why the OpenSSF Best Practices and AI/ML Working Groups created the new Security-Focused Guide for AI Code Assistant Instructions, led by Avishay Balter (Microsoft) and contributors from Microsoft, Google, and Red Hat. This practical resource helps developers write clear, security-focused prompts so assistants generate safer, more reliable code. 📖 Read the blog by Avishay Balter and David A. Wheeler to explore the guide and start collaborating today: https://lnkd.in/e-WfQu5C
Threat Research @ Avalor Security
Super important work Avishay! Huge thanks to you and the rest of the OpenSSF working group members for bringing this resource to the community!