RamiGPT; AI-generated code is a risk; Deepfakes are cheap, fast, and good
By John Bruggeman, virtual Chief Information Security Officer
RamiGPT: AI-powered red team tool
Not that we need more reasons to do basic cybersecurity (think patching, vulnerability scanning, strong admin passwords, MFA, PAM, etc.), but there is a new, free red-team tool called RamiGPT that automates privilege escalation attacks and can gain root access in under a minute. The tool was developed by GitHub user M507, and it uses OpenAI's API to run the attacks. RamiGPT integrates proven penetration testing frameworks like PwnTools to accelerate attacks on platforms hosted on VulnHub, a repository of intentionally vulnerable virtual machines for security training. This is not really new. I suspect most Ransomware-as-a-Service groups (e.g., LockBit, Play, RansomHub, BlackBasta) already have a tool like this as part of their service offering, but the release of this tool on GitHub makes this kind of automated attack even more accessible. Remember to Patch Your Stuff to help prevent ransomware.
What can you do?
Do you have a patching program? Do you patch every month? Do you scan your environment for vulnerabilities weekly, monthly, or once a year?
We can do the patching and vulnerability scanning for you. Check out these resources from OnX.
AI-generated code is a risk
One of the risks I talk about all the time with AI-generated code is hardcoding passwords and API keys into the software. If you don't have a human reviewing the code, there is a big risk that your AI tool will leak secrets. You need someone who knows how to code to review what the AI tool generates.
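What does that look like in practice? Here is a minimal sketch in Python; the endpoint URL is a placeholder and call_api is a hypothetical helper, but the point is the pattern: read a key such as OPENAI_API_KEY from the environment at runtime instead of baking it into the source.

    import os

    import requests  # third-party HTTP client (pip install requests)

    # Anti-pattern: a hardcoded secret ships with every copy of the code and is
    # trivially harvested once the repository is pushed to GitHub.
    # API_KEY = "sk-hypothetical-example-key"   # never do this

    # Safer pattern: read the secret from the environment (or a secrets
    # manager) at runtime, and fail loudly if it is missing.
    API_KEY = os.environ.get("OPENAI_API_KEY")
    if API_KEY is None:
        raise RuntimeError("OPENAI_API_KEY is not set; refusing to start.")

    def call_api(prompt: str) -> dict:
        """Illustrative HTTPS call that sends the key as a bearer token."""
        response = requests.post(
            "https://api.example.com/v1/generate",  # placeholder endpoint
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"prompt": prompt},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

A secrets manager (Vault, AWS Secrets Manager, and the like) is the next step up, but even an environment variable keeps the key out of your Git history.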
GitGuardian, a cybersecurity company that helps companies protect passwords and secrets, has released its latest report, "State of Secrets Sprawl 2025", and it reveals a 25% year-over-year increase in leaked secrets. The report counts 23.8 million new credentials leaked on GitHub in 2024. 23.8 million – let that sink in for a minute. Is your code base in that cache of compromised secrets and API keys?
The article below from CSO Online has several warnings about AI-generated code from Veracode, GitGuardian, Blackduck, and CyberArk. Mark Cherp is quoted in the article as saying:
“There have already been instances where API keys or public keys from companies such as Anthropic or OpenAI were inadvertently left in the source code or uploaded in open-source projects, making them easily exploitable. Even in closed-source projects, if secrets are hard-coded or stored in plain text within binary files or local JavaScript, the risk remains significant, as the secrets are easy to extract.”
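That last point deserves a demonstration. A plaintext secret inside a compiled or minified artifact is not hidden at all; a few lines of Python that mimic the Unix strings utility will surface it. This is an illustrative sketch, not a real extraction tool:

    import re
    import sys

    # A crude analogue of the Unix `strings` utility: pull runs of printable
    # ASCII out of any binary file. Anything this script can find, an attacker
    # can find too, so "it's compiled" is not a secrets-management strategy.
    def printable_strings(path: str, min_len: int = 8):
        data = open(path, "rb").read()
        for match in re.finditer(rb"[ -~]{%d,}" % min_len, data):
            yield match.group().decode("ascii")

    if __name__ == "__main__":
        # Usage: python extract_strings.py path/to/binary
        for s in printable_strings(sys.argv[1]):
            if re.search(r"(?i)key|secret|token|password", s):
                print(s)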
As companies rush to embrace the advantages of AI-generated code, don't skimp on security. Review the code and look for hardcoded secrets and API keys. If you don't "have the time" to do that basic security work, don't be surprised when your code is compromised. There are tools and people who can help you with this risk; just ask for the help. The full story is here.
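If a line-by-line review is more than you can commit to, even a crude automated pass beats nothing. Below is a minimal sketch of a regex-based secret scanner; the patterns are deliberately simple and incomplete, and dedicated tools such as GitGuardian's ggshield or the open-source gitleaks do this job far more thoroughly.

    import re
    import sys
    from pathlib import Path

    # Illustrative patterns only; real scanners ship far larger rule sets.
    PATTERNS = {
        "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
        "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
        "Generic assignment": re.compile(
            r"(?i)(api_key|secret|password)\s*=\s*['\"][^'\"]{8,}['\"]"
        ),
    }

    def scan(root: str) -> None:
        """Walk a source tree and flag lines that look like hardcoded secrets."""
        for path in Path(root).rglob("*"):
            if not path.is_file() or ".git" in path.parts:
                continue
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for lineno, line in enumerate(text.splitlines(), start=1):
                for name, pattern in PATTERNS.items():
                    if pattern.search(line):
                        print(f"{path}:{lineno}: possible {name}")

    if __name__ == "__main__":
        # Usage: python scan_secrets.py path/to/repo
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")

Run something like this from the root of your repository before every commit, or better yet, wire a real scanner into your CI pipeline so leaked secrets never reach GitHub in the first place.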
What can you do?
Are you using AI tools to write code or assist with code generation? If you are, have you done a risk assessment of those tools?
Do you have an AI project that has stalled or isn’t delivering the value you expected? If so, we can help!
Check out our AI resources.
Deepfakes are cheap, fast, and good
As part of a presentation that our experts Ryan Hamrick and Christopher DeBrunner did for a Cincinnati CIO roundtable about AI cybersecurity risks in April, I created deepfake videos of myself as a test to see who could spot the real me.
One was easy to spot as a fake. Another was a very good video of me created from a single photograph. To be clear, it wasn't an existing video of me that was manipulated; the tool generated it from one still photo.
The deepfake videos took me less than 15 minutes to generate, and it only took me that long because I had never used the tool (Hedra) before.
A cybercriminal who has already used the tool can create a fake but realistic-looking video of you in minutes.
What can you do?
Be aware of the risk: a criminal can quickly and easily create a fake video that looks like you and sounds close to your voice, and that video can say anything.
We can help you learn more about how to protect your environment.
I have included the slide deck from the AI Risk session the three of us did. If you have any questions, please reach out to Chris, Ryan, or me. You can fill out a contact form on the OnX website to reach us easily. We are here to help!
About the author
John Bruggeman is a veteran technologist, CTO, and CISO with nearly 30 years of experience building and running enterprise IT and shepherding information security programs toward maturity. He helps companies, boards, and C-level committees improve and develop their cybersecurity programs, create risk registers, and implement compliance controls using industry-standard frameworks like CIS, NIST, and ISO.