Check Point® Software Technologies Ltd. has been recognized as a Leader and Outperformer for its Harmony Email & Collaboration security solution in GigaOm's latest Radar for Anti-Phishing report.
Whether it's building an SDK, launching a new application title, or knocking out a version update with advanced capabilities, speed remains a primary competitive driver for development organizations and their customers. To that end, ultra-fast and frictionless mobile application development increasingly depends on automation.
More specifically, DevOps teams are readily embracing modern tools that utilize large language models (LLMs), generative AI (GenAI), and the very buzzy agentic AI to accelerate their continuous integration/continuous delivery (CI/CD) pipelines. An estimated 70% of professional developers will be using AI-powered coding tools by 2027, and Google claims that more than a quarter of its new code is already generated by AI.
But AI's tremendous potential business value is currently outshining some very real risks to mobile applications and the broader software supply chain.
Code Flaws and Opaque Dependencies
To start with, AI tools are prone to making common mistakes in DevOps environments, including generating hardcoded secrets in code, misconfiguring infrastructure-as-code (IaC) with open permissions, and overlooking secure CI/CD pipeline configurations.
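A lightweight pre-merge check can catch the most obvious of these mistakes before they reach the pipeline. The following Python sketch scans source and configuration files for common secret patterns; the regexes, file types, and exit-code convention are illustrative assumptions, and a real pipeline should rely on a dedicated secret scanner rather than this minimal gate.

```python
#!/usr/bin/env python3
"""Minimal illustrative pre-merge scan for hardcoded secrets.

Assumptions: the patterns and file globs below are examples only;
a production pipeline should use a dedicated secret scanner.
"""
import re
import sys
from pathlib import Path

# Example patterns for common secret shapes (illustrative, not exhaustive).
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic credential assignment": re.compile(
        r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""
    ),
}

def scan(root: str = ".") -> int:
    """Walk the tree, report suspicious lines, return the finding count."""
    findings = 0
    for path in Path(root).rglob("*"):
        if path.suffix not in {".py", ".kt", ".swift", ".yaml", ".yml", ".json"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text[: match.start()].count("\n") + 1
                print(f"{path}:{line_no}: possible {name}")
                findings += 1
    return findings

if __name__ == "__main__":
    # Fail the CI job if anything suspicious is found.
    sys.exit(1 if scan(".") else 0)
```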
AI-based development tools also increase risks stemming from dependency chain opacity in mobile applications. Blind spots in the software supply chain will increase as AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies. Since AI simultaneously pulls code from multiple sources, traditional methods of dependency tracking will prove insufficient.
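One way to keep the dependency chain visible is to fail the build whenever a requirement appears that is not pinned and explicitly approved. The sketch below assumes a pip-style requirements.txt and a hypothetical, team-maintained approved-packages.txt allowlist; both file names and the policy itself are assumptions for illustration.

```python
"""Illustrative dependency gate: every requirement must be pinned (==)
and must appear in a team-approved allowlist. The file names and the
policy are assumptions for this sketch, not a standard."""
import sys
from pathlib import Path

def parse_requirements(path: str) -> list[str]:
    """Return non-empty, non-comment lines from a requirements-style file."""
    entries = []
    for raw in Path(path).read_text().splitlines():
        line = raw.split("#", 1)[0].strip()
        if line:
            entries.append(line)
    return entries

def main() -> int:
    approved = {name.lower() for name in parse_requirements("approved-packages.txt")}
    problems = []
    for req in parse_requirements("requirements.txt"):
        if "==" not in req:
            problems.append(f"unpinned requirement: {req}")
            continue
        name = req.split("==", 1)[0].strip().lower()
        if name not in approved:
            problems.append(f"package not on the allowlist: {name}")
    for problem in problems:
        print(problem)
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(main())
```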
To mitigate the risks to mobile applications, any AI-generated code should undergo rigorous review to identify potential security vulnerabilities and quality issues early on, before they lead to costly problems downstream. Unfortunately, responsibility for this kind of pre-release review is often overlooked, and these simple, unforced errors are only the first of the potential hazards.
Slopsquatting, Hallucinations, and Bad Vibes
Any tool that brings positive benefits can also be abused or misused, and GenAI is no different. The term "slopsquatting" has emerged to describe a threat actor registering a malicious package under a name that AI tools tend to hallucinate but that no legitimate project actually uses. Similar to "typosquatting" (where malicious actors count on human spelling errors), slopsquatting anticipates a developer's misplaced trust in AI suggestions. If a developer installs one of these fake packages without first verifying it, malicious code can be introduced into the project.
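A basic defense is to verify every suggested package against the registry before installing it. The sketch below uses PyPI's public JSON API to flag names that do not exist or that were first published only very recently, a common trait of freshly registered squatting bait; the 30-day threshold is an arbitrary example, not an established rule.

```python
"""Illustrative check of an AI-suggested package name against PyPI
before installation. The 30-day "too new" threshold is an arbitrary
example for this sketch."""
import json
import sys
import urllib.error
import urllib.request
from datetime import datetime, timezone

def check_package(name: str, min_age_days: int = 30) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as exc:
        if exc.code == 404:
            print(f"{name}: does not exist on PyPI (possible hallucination)")
            return False
        raise

    # Find the earliest upload across all releases of the package.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data.get("releases", {}).values()
        for f in files
    ]
    if not uploads:
        print(f"{name}: listed but has no uploaded files; treat with suspicion")
        return False
    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        print(f"{name}: first published only {age_days} days ago; review before installing")
        return False
    print(f"{name}: exists, first published {age_days} days ago")
    return True

if __name__ == "__main__":
    # Usage: pass one or more package names on the command line.
    ok = all(check_package(pkg) for pkg in sys.argv[1:])
    sys.exit(0 if ok else 1)
```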
Another issue is that many large frontier LLMs are trained on open-source software rather than on proprietary databases of secure code. As such, these LLMs are susceptible to replicating common open-source vulnerabilities, as well as data poisoning and malware attacks by malicious actors. Researchers recently discovered a specific instance where threat actors exploited machine learning (ML) models using the Pickle file format to conceal malware inside seemingly legitimate AI-related software packages.
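The Pickle risk is easy to underestimate because loading a model file looks like passive data handling. The benign sketch below shows why unpickling untrusted data amounts to code execution: an object's __reduce__ method lets it choose an arbitrary callable to run at load time. This is a generic illustration, not the specific attack the researchers found; safer formats such as safetensors avoid the problem for model weights.

```python
"""Benign demonstration of why unpickling untrusted data is dangerous:
an object's __reduce__ method lets it pick an arbitrary callable that
runs during deserialization. Real attacks hide this inside model files."""
import pickle

class LooksLikeAModel:
    # During unpickling, Python calls the returned callable with the
    # returned arguments -- here a harmless print, but it could just as
    # easily be os.system or a network download.
    def __reduce__(self):
        return (print, ("code executed during pickle.loads()",))

payload = pickle.dumps(LooksLikeAModel())
pickle.loads(payload)  # "loading the data" runs code and prints the message
```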
Perhaps even more concerning, LLMs may recommend vulnerable, insecure, or non-existent open-source libraries independently. These package hallucinations can lead to a novel form of package confusion attack for careless developers. The hallucination problem is also predictably pervasive. A recent university study of over 500,000 LLM-generated code samples found that nearly 1 in 5 packages suggested by AI didn't exist. The researchers catalogued 205,474 unique hallucinated package names; commercial models suggested at least one hallucinated package in 5.2% of samples, a rate that jumped to 21.7% for open-source models.
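Beyond checking the registry, a project can cross-check what AI-generated code actually imports against what the environment actually provides. The sketch below parses a generated file's imports and flags top-level modules that neither the standard library nor any installed distribution supplies; it relies on importlib.metadata.packages_distributions and sys.stdlib_module_names, both available from Python 3.10 onward.

```python
"""Illustrative audit of imports in an AI-generated file: flag any
top-level module that is neither in the standard library nor provided
by an installed distribution (a hint that the package may be
hallucinated). Requires Python 3.10+."""
import ast
import sys
from importlib.metadata import packages_distributions

def audit_imports(path: str) -> list[str]:
    tree = ast.parse(open(path, encoding="utf-8").read(), filename=path)
    imported = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            imported.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            imported.add(node.module.split(".")[0])

    # Mapping of top-level import names to the distributions that provide them.
    provided = packages_distributions()
    return [
        name for name in sorted(imported)
        if name not in sys.stdlib_module_names and name not in provided
    ]

if __name__ == "__main__":
    # Usage: pass the path of the generated source file to audit.
    for flagged in audit_imports(sys.argv[1]):
        print(f"import '{flagged}' is not provided by any installed package")
```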
While these vulnerabilities may seem isolated, they can have far-reaching downstream implications for software supply chains. A prompt injection vulnerability might allow an LLM to be manipulated through malicious inputs to generate incorrect or insecure code that spreads through connected systems. One such prompt injection vulnerability was discovered in OpenAI's ChatGPT late last year.
The developer trend of intuitive "vibe coding" may take package hallucinations into serious bad trip territory. The term refers to developers using casual AI prompts to broadly describe a desired mobile app outcome; the AI tool then generates code to achieve it. Counter to the common wisdom of zero trust, vibe coding leans heavily on trust: developers very often copy and paste the generated code without any manual review. Any hallucinated packages that get carried over can become easy entry points for threat actors.
Agentic AI Amplifies the Chances for Trouble
According to OWASP, agentic AI represents an advancement in autonomous systems, and its integration with LLMs and GenAI has significantly expanded both the scale and capabilities of these tools and the associated risks. Relying on these complex multi-agent systems not only intensifies dependency opacity and multiplies the chances for error generation, it also creates opportunities for malicious actors to misuse code generation tools. OWASP specifically calls out the potential for new attack vectors using Remote Code Execution (RCE) and other code attacks.
While some predict that agentic AI will disrupt the mobile application landscape by ultimately replacing traditional apps, other modes of disruption seem more immediate. For instance, researchers recently discovered an indirect prompt injection flaw in GitLab's built-in AI assistant Duo. The flaw could allow attackers to steal source code or inject untrusted HTML into Duo's responses, directing users to malicious websites.
Build Security into the Mobile App SDLC
While the advertised efficiency, cost, and time-to-market advantages of AI-assisted development are all tantalizing, those savings would be only short-term gains if they ultimately lead to a security incident. The associated challenges and risks to development organizations are not going unnoticed. A recent Gartner survey of software engineering/application development leaders in the US and UK found that the use of AI tools to augment software engineering workflows was a significant or moderate pain point for 71% of respondents.
To actualize the potential value of AI in DevOps, organizations need to treat these powerful tools like any other user, device, or application within the Zero Trust framework. Developers need to de-risk AI adoption by embracing effective solutions for testing, protection, and monitoring. A secure software development lifecycle (SDLC) for mobile applications is one that integrates security across every phase, including solutions for:
■ Mobile application security testing (MAST) that maintains development speed without compromising security (a minimal CI-gating sketch follows this list).
■ Code hardening and obfuscation tools to make reverse engineering significantly more difficult for threat actors.
■ Runtime application self-protection (RASP) to detect and block tampering attempts while the app is running.
■ App attestation to ensure that only legitimate, trusted apps can interact with your APIs and protect your application from bots, malware, fraud, and targeted attacks.
■ Real-time threat monitoring to continuously observe the app in the field as the threat landscape evolves.
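As a concrete illustration of the first item above, the sketch below shows a CI gate that fails the build when a mobile app security scan reports findings at or above an agreed severity. The JSON report format and the severity threshold are assumptions for this sketch; every MAST tool has its own output schema, but the gating pattern is the same.

```python
"""Illustrative CI gate for MAST results: fail the pipeline if the scan
report contains findings at or above a chosen severity. The report
format (a JSON list of findings with a "severity" field) and the
threshold are assumptions for this sketch."""
import json
import sys

SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(report_path: str, fail_at: str = "high") -> int:
    findings = json.load(open(report_path, encoding="utf-8"))
    threshold = SEVERITY_ORDER[fail_at]
    blocking = [
        f for f in findings
        if SEVERITY_ORDER.get(str(f.get("severity", "")).lower(), 0) >= threshold
    ]
    for f in blocking:
        print(f"[{f.get('severity')}] {f.get('title', 'unnamed finding')}")
    return 1 if blocking else 0

if __name__ == "__main__":
    # Usage: python mast_gate.py scan-report.json high
    sys.exit(gate(sys.argv[1], sys.argv[2] if len(sys.argv) > 2 else "high"))
```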
Industry News
Aqua Security, the primary maintainer of Trivy, announced that Root has joined the Trivy Partner Connect program.
GitLab signed a three-year, strategic collaboration agreement (SCA) with Amazon Web Services (AWS).
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, announced the schedule for KubeCon + CloudNativeCon North America 2025, taking place in Atlanta, Georgia, from November 10–13, 2025.
Google Cloud announced a complete toolkit to help developers build, deploy, and optimize A2A agents.
ArmorCode announced significant application security and remediation advancements to help customers address risks posed by AI-generated code and applications, along with imminent compliance demands from regulations including the Cyber Resilience Act (CRA).
Black Duck Software announced significant enhancements to its AI-powered application security assistant, Black Duck Assist™, which is now directly integrated into the company's Code Sight™ IDE plugin.
Check Point's CloudGuard WAF global footprint has expanded with 8 new points of presence (PoPs) in recent months.
Apiiro launched its AutoFix Agent: an AI Agent for AppSec that autofixes design and code risks using runtime context – tailored to your environment.
Snyk announced the immediate availability of Secure At Inception, which consists of three new innovations focused on Model Context Protocol (MCP) technology.
Backslash Security announced that its platform for securing AI coding infrastructure and code will be shown at the AI Pavilion (booth #4312) at Black Hat USA in Las Vegas, August 6-7.
Salt Security announced the launch of Salt Surface, a new capability integrated into its API Protection Platform.
Wallarm announced the launch of its next-gen Security Edge offering, delivering the benefits of edge-based API protection to more teams, in more environments, with more control.
DefectDojo announced new automated Known Exploited Vulnerabilities (KEV) data enrichment features for DefectDojo Pro.
Temporal Technologies is launching a new integration with the OpenAI Agents SDK: a provider-agnostic framework for building and running multi-agent LLM workflows.