This study investigates the implications of large language models (LLMs) such as ChatGPT for cybersecurity, focusing on their potential to generate malicious code and the resulting need for red teams to address these emerging threats. The authors present numerous experiments demonstrating ChatGPT's ability to produce sophisticated malware-related outputs, while stressing the importance of prompt engineering and human moderation in ensuring safety. The research highlights both the opportunities and the risks that transformer-based models pose within the cybersecurity landscape.