Week 27: AI Impersonation, Digital Deception, and the Growing Gap Between Cyber Risk and Operational Maturity

This week, ThinkChamp's THINK newsletter examines a sophisticated AI impersonation campaign targeting U.S. Secretary of State Marco Rubio and how generative tools are reshaping digital diplomacy. At the same time, malware-laced browser extensions and QR-based callback phishing attacks expose millions to social engineering exploits. The arrest of Chinese hacker Xu Zewei, tied to COVID-19 espionage and HAFNIUM's Exchange server exploits, signals heightened enforcement. Yet domestic threats persist, as seen in an Air Force insider leaking secrets via a romance scam and a ransomware negotiator under investigation for collusion. On the enterprise front, critical vulnerabilities in ServiceNow (“Count(er) Strike”), Bluetooth stacks (“PerfektBlue”), and McDonald's McHire platform underscore the enduring risk in widely trusted SaaS and hardware ecosystems. Meanwhile, privacy controversies escalate as Google’s Gemini AI bypasses user opt-outs and Meta scans personal photo libraries under the guise of user enhancement, igniting concern over data control and biometric exploitation. Regulatory complexity deepens with the Senate’s rejection of a moratorium on state AI laws, paving the way for fragmented compliance requirements as NIST and SFIA evolve their frameworks to reflect cybersecurity and AI workforce demands. The week closes with a warning from Cybersecurity Today’s expert panel: unless organizations reframe cybersecurity as a strategic enabler and bolster help desk defenses, behavioral vigilance, and AI oversight, the gap between risk exposure and operational maturity will continue to grow, posing an existential threat to digital integrity.

Deepfake Diplomacy: AI Impersonators Target U.S. Government

By Danny Bradbury, July 10, 2025 | Malwarebytes Blog

The Gist

A recent deepfake-driven cyberattack has taken impersonation to alarming new heights by targeting U.S. Secretary of State Marco Rubio. According to a leaked State Department cable, attackers used AI-generated voice and text deepfakes on the Signal messaging app to mimic Rubio’s identity. Posing as Rubio, they contacted high-level officials—including three foreign ministers and a U.S. governor—attempting to extract sensitive information. The impersonator used a Signal account to initiate these exchanges, exploiting Signal’s prevalence among top U.S. officials. Although this attack isn’t the first of its kind—similar tactics were used against White House Chief of Staff Susie Wiles—its level of sophistication and high-profile targets mark a disturbing evolution in AI-powered espionage.

The Insights

Government agencies and enterprises alike must recognize that deepfake threats are no longer theoretical—they're operational. To defend against identity impersonation attacks, especially those using audio and visual deepfakes, organizations should establish multi-layer verification protocols. For families and individuals, techniques like pre-agreed “family passwords” offer low-tech, high-impact protection against social engineering schemes. On an enterprise level, investments in deepfake detection software, secure messaging platforms, and behavioral anomaly monitoring are increasingly necessary. Public education campaigns are also crucial to equip less tech-savvy individuals, such as older adults, to identify and report suspicious digital interactions.

U.S. Arrests Chinese Hacker Behind HAFNIUM Attacks and COVID-19 Espionage

By U.S. Department of Justice, July 8, 2025 | justice.gov

The Gist

The U.S. Justice Department announced the arrest of Xu Zewei, a Chinese state-sponsored hacker, for leading numerous cyber intrusions under the direction of China's Ministry of State Security (MSS). Xu was arrested in Milan, Italy, and faces extradition to the U.S. for his role in the HAFNIUM campaign, which exploited Microsoft Exchange Server vulnerabilities and targeted over 60,000 U.S. entities, including universities and law firms. Xu also stole sensitive COVID-19 research in early 2020, acting under the MSS’s Shanghai bureau. His hacking firm, Powerock, exemplifies China's use of private contractors to obscure direct state involvement in espionage. The indictment details Xu’s exfiltration of virologist emails, installation of web shells, and targeting of U.S. government policy data using terms like “MSS” and “HongKong.” This arrest marks a significant escalation in the U.S.’s efforts to hold foreign cyber operatives accountable. Xu faces multiple federal charges, including wire fraud, identity theft, and damage to protected computers, with potential sentences up to 20 years per charge. His co-conspirator Zhang Yu remains at large.

The Insights

This case reinforces the urgent need for U.S. organizations—especially in academia, legal services, and healthcare—to adopt robust threat detection and zero-trust access models. The successful exploitation of Exchange vulnerabilities demonstrates the enduring risk posed by unpatched systems and weak security postures. Institutions should audit historical logs, monitor for web shell activity, and enforce multi-layered incident response frameworks. Organizations must also remain vigilant against nation-state threats leveraging contractor proxies, and align with federal advisories, such as those from the FBI and CISA, for early threat identification. Encouragingly, international cooperation, as seen in Xu’s arrest, is tightening the net around cyber espionage actors, but proactive defense is still the best safeguard.
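
For teams doing that historical sweep, the minimal sketch below illustrates one way to hunt for leftover web shells: flag recently modified .aspx/.ashx files under common IIS and Exchange web directories. The paths and the 90-day window are illustrative assumptions, not a definitive indicator list.

```python
# Minimal sketch: sweep IIS/Exchange web directories for recently modified
# .aspx/.ashx files, a common indicator of web shells dropped during the
# HAFNIUM campaign. Paths and the age threshold are illustrative assumptions.
import os
import time

SUSPECT_DIRS = [
    r"C:\inetpub\wwwroot\aspnet_client",
    r"C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\owa\auth",
]
MAX_AGE_DAYS = 90  # flag files modified within the last 90 days

def find_recent_aspx(root: str, max_age_days: int = MAX_AGE_DAYS):
    cutoff = time.time() - max_age_days * 86400
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith((".aspx", ".ashx")):
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getmtime(path) >= cutoff:
                        hits.append(path)
                except OSError:
                    continue  # file removed or inaccessible mid-scan
    return hits

if __name__ == "__main__":
    for root in SUSPECT_DIRS:
        if os.path.isdir(root):
            for hit in find_recent_aspx(root):
                print(f"[review] recently modified script: {hit}")
```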

Google Expands Gemini AI Access to Android Apps, Prompting Privacy Concerns

Pieter Arntz · July 8, 2025 · Malwarebytes

The Gist

Google has updated its Gemini AI settings, allowing the assistant to access core Android apps like Messages, WhatsApp, and Phone—even if the user’s Gemini Apps Activity is turned off. This change, which started rolling out on July 7, bypasses previous privacy settings unless users explicitly opt out. While Google presents this as a usability improvement, the decision has triggered criticism over user consent, clarity, and data sharing. Gemini can now perform actions like sending texts and initiating calls, and although Google insists chats aren't reviewed unless feedback is given, up to 72 hours of data may still be stored.

The Insights

This update spotlights the growing tension between AI convenience and user control. Organizations and individuals using Android devices should promptly review Gemini permissions under Settings > Privacy > Permission Manager, especially in environments handling sensitive information. To preserve privacy and security, limit Gemini's access to only essential apps and regularly audit app connections via gemini.google.com/apps. Enterprises should also provide employee training on AI assistant behavior, prioritize mobile device management (MDM) solutions, and remain alert to shifting default settings that could expose private communications or operational data.

Count(er) Strike: High-Severity Data Inference Flaw in ServiceNow

By Neta Armon, July 9, 2025 | Varonis Threat Labs

The Gist

Varonis researchers uncovered a high-severity vulnerability in ServiceNow—named Count(er) Strike—that allowed users with minimal privileges to infer and exfiltrate sensitive data, including PII, credentials, and financial information. Exploiting the record count UI element, attackers could manipulate query filters to deduce the existence and nature of hidden data across hundreds of tables. Most alarming is that even self-registered or low-privilege users could launch the attack due to overly permissive or incomplete Access Control Lists (ACLs). The vulnerability, now tracked as CVE-2025-3648, affected all unpatched instances before a May 2025 update. ServiceNow’s architecture organizes data into tables governed by layered ACLs. However, if early-stage ACL conditions (roles or security attributes) are absent or permissive, attackers could query the record count of filtered data, effectively allowing data mining via enumeration. Automation using tools like Burp Suite and scripting further amplified the exposure, even enabling table-wide data scraping. Varonis demonstrated that attackers could access everything from production server credentials to employee data using just a basic user account.

The Insights

Organizations using ServiceNow must urgently audit and update their ACL configurations, especially for tables containing regulated or high-risk data. Implementing new controls such as Query ACLs and Security Data Filters is essential to limit exposure to blind query and inference attacks. Query ACLs should be set to default-deny, with granular allow-listing to prevent data abuse via filtering tricks. Security Data Filters further help by removing data silently without clueing in attackers, minimizing exploit feedback loops. ServiceNow customers are urged to disable self-registration unless absolutely required and conduct penetration testing to detect exposed record counts. This incident underscores the importance of principle of least privilege (PoLP) and role-based access control (RBAC) in SaaS applications. Platforms that handle sensitive enterprise data—especially those with customer-facing modules—must prioritize proactive hardening and ACL hygiene to avoid widespread data breaches stemming from overlooked UI behaviors.
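
To make the inference mechanism concrete, here is a generic sketch of count-based (blind) enumeration, the class of technique Count(er) Strike belongs to. The count_matching function is a deliberate placeholder rather than a real ServiceNow call; in a sanctioned test it would wrap a probe issued from a low-privilege account.

```python
# Illustrative sketch of count-based (blind) inference: if a low-privilege
# probe can learn only *how many* hidden records match a filter, a secret
# value can still be recovered character by character, much like blind SQL
# injection. count_matching() is a placeholder, not a real API call.
import string

ALPHABET = string.ascii_letters + string.digits + "_-"

def count_matching(prefix: str) -> int:
    """Placeholder: return the number of hidden records whose secret field
    starts with `prefix`. Replace with an instance-specific, authorized probe."""
    raise NotImplementedError

def infer_secret(max_len: int = 32) -> str:
    recovered = ""
    for _ in range(max_len):
        for ch in ALPHABET:
            if count_matching(recovered + ch) > 0:
                recovered += ch
                break
        else:
            break  # no character extended the prefix; value fully recovered
    return recovered
```

Default-deny Query ACLs break exactly this loop: when the filtered count is never revealed, the per-character oracle disappears.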

McDonald's Hiring Bot Exposes Personal Data of Millions

By Pieter Arntz, July 10, 2025 | Malwarebytes Blog

The Gist

McDonald's AI-powered hiring chatbot, McHire, was found to have a serious vulnerability that could have exposed personal information from as many as 64 million job applicants. Security researchers were able to access McHire’s administrative backend by guessing default credentials, like "123456", and identified an insecure API that allowed access to historical application data. Though the chatbot’s AI wasn’t susceptible to prompt injection, the backend flaw exposed sensitive applicant data due to poor access controls. McHire, used by 90% of McDonald's franchisees, is provided by Paradox.ai, which patched the vulnerability shortly after disclosure. Thankfully, no signs indicate the flaw was exploited by malicious actors before the fix.

The Insights

This breach is a textbook example of how poor credential hygiene and misconfigured APIs can compromise sensitive data even in highly automated systems. Organizations deploying AI tools must rigorously test backend systems for basic vulnerabilities and ensure credential rotation, role-based access, and least-privilege principles are enforced. For applicants and users, it’s crucial to practice post-breach safety steps: change passwords, enable two-factor authentication, and remain vigilant for phishing scams masquerading as legitimate companies. Organizations should also commit to regular third-party audits to verify the security posture of integrated services like McHire.
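
As a hedged illustration of that kind of audit, the sketch below probes a hypothetical applicant API for insecure direct object references (IDOR) using a low-privilege test token. The endpoint, header, and ID range are placeholders, it assumes the third-party requests library, and it should only be run against systems you own or are authorized to test.

```python
# Minimal sketch of an IDOR (insecure direct object reference) check against
# your own API: request a small range of sequential record IDs with a
# low-privilege token and flag any that return data. The endpoint, header,
# and ID range are hypothetical placeholders; requires the requests library.
import requests

BASE_URL = "https://api.example.com/applicants/{id}"   # placeholder endpoint
LOW_PRIV_TOKEN = "replace-with-test-account-token"    # never a real credential

def check_idor(id_range):
    findings = []
    for record_id in id_range:
        resp = requests.get(
            BASE_URL.format(id=record_id),
            headers={"Authorization": f"Bearer {LOW_PRIV_TOKEN}"},
            timeout=10,
        )
        if resp.status_code == 200 and resp.text.strip():
            findings.append(record_id)  # low-priv account saw another record
    return findings

if __name__ == "__main__":
    exposed = check_idor(range(1000, 1010))
    print(f"{len(exposed)} records readable by a low-privilege account: {exposed}")
```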

$500K Heist via Malicious Cursor AI Extension Highlights Supply Chain Risk

By Georgy Kucherin, July 10, 2025 | Kaspersky GReAT

The Gist

A malicious extension posing as a code highlighter in Cursor AI led to the theft of $500,000 in crypto assets from a blockchain developer, revealing a broader campaign targeting developers via fake open-source packages. The extension, disguised as a Solidity syntax highlighter, executed remote PowerShell scripts from the domain angelic[.]su, which installed ScreenConnect for full remote control. This incident is part of a recurring attack trend, exploiting developer trust in widely used platforms like Open VSX and npm. Despite only 54,000 downloads at the time, the malicious package out-ranked legitimate ones due to ranking algorithm manipulation. The attackers utilized VBScript payloads, obfuscated URLs, and tools like Quasar backdoor and PureLogs stealer to collect data from browsers, crypto wallets, and email clients. Even after removal, a second malicious package emerged under a near-identical developer name, boasting over 2 million spoofed downloads. Kaspersky's analysis linked the campaign to prior attacks using similar infection vectors and remote access schemes, confirming an organized effort targeting blockchain and Web3 developers.

The Insights

Organizations and independent developers working in blockchain, finance, or open-source projects must treat plugin and package sourcing as a critical security layer. Always verify developer authenticity, inspect code manually when possible, and avoid assuming high download counts equate to legitimacy. Security software alone isn’t enough—proactive vetting of development tools is vital. Platforms hosting extensions should prioritize stronger validation mechanisms, including developer verification, code analysis, and anomaly detection for inflated download stats. Meanwhile, teams must implement endpoint detection and enforce least privilege access, particularly when dealing with financial data or smart contracts. This attack underscores how supply chain manipulation at the IDE level can lead to devastating consequences, even for security-conscious users.
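
One lightweight vetting step is to pull registry metadata before installing a package and flag weak provenance signals. The sketch below uses npm's public registry metadata endpoint; the thresholds (package age, maintainer count, missing repository link) are illustrative heuristics, not a verdict on legitimacy.

```python
# A rough pre-install sanity check for an npm package: fetch registry
# metadata and flag very young packages, single-maintainer accounts, or a
# missing source repository, signals abused in campaigns like this one.
import json
import urllib.request
from datetime import datetime, timezone

def package_risk_signals(name: str) -> list[str]:
    with urllib.request.urlopen(f"https://registry.npmjs.org/{name}") as resp:
        meta = json.load(resp)
    signals = []
    created = datetime.fromisoformat(meta["time"]["created"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days < 30:
        signals.append(f"package is only {age_days} days old")
    if len(meta.get("maintainers", [])) < 2:
        signals.append("single maintainer account")
    if not meta.get("repository"):
        signals.append("no linked source repository")
    return signals

if __name__ == "__main__":
    for signal in package_risk_signals("left-pad"):
        print("[review]", signal)
```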

Air Force Worker Leaked Secrets to Online Lover Posing as Foreign National

By Jessica Lyons, July 10, 2025 | The Register

The Gist

David Franklin Slater, a 64-year-old retired U.S. Army lieutenant colonel turned Air Force civilian, has pled guilty to conspiring to leak national defense information after sharing classified intelligence about the Russia-Ukraine war with a woman he met on a dating app. From February to April 2022, Slater communicated with a person identified only as "Co-Conspirator 1"—an alleged foreign national—whom he believed to be his romantic partner. The pair exchanged flirtatious messages laced with requests for sensitive military details, including NATO movements, military targets, and weapon supplies. Slater’s indiscretions occurred despite his Top Secret/SCI clearance and a signed NDA acknowledging the risks of disclosure.

The Insights

This case serves as a stark warning for insider threat programs: even seasoned military personnel can be vulnerable to social engineering through romance scams. Organizations handling sensitive data should reinforce cybersecurity awareness training, particularly on emotional manipulation vectors. Regular audits, behavioral monitoring, and robust zero-trust frameworks can help detect unusual access patterns. This incident also underlines the need for secure, monitored communication protocols within government and defense networks. Cybersecurity isn't just about tech—human vulnerabilities remain among the greatest risks to national security.

Google and Microsoft Trusted Them, 2.3 Million Users Installed Them, and They Were Malware: Malicious Browser Extensions Expose Millions

Pieter Arntz · July 9, 2025 · Malwarebytes

The Gist

Security researchers have uncovered a widespread surveillance campaign using 18 browser extensions found in the official Chrome and Edge web stores, with over 2.3 million installs. Initially benign, these extensions later received malicious updates, turning them into "sleeper agents" that hijacked browsers, captured visited URLs, tracked users with unique IDs, and redirected them to phishing pages. Some even mimicked legitimate tools like ChatGPT or Zoom. Though many of these extensions have now been removed, they had already compromised millions by covertly harvesting data and potentially distributing malware.

The Insights

This incident emphasizes the growing risk of trusted software turning rogue post-installation. Organizations must limit extension usage, especially on workstations that handle sensitive data, by enforcing policies via browser management tools. IT teams should routinely audit browser extensions and educate users on permission creep and deceptive update behavior. To protect against such threats, users should clear browsing data, reset browser settings, enable 2FA on critical accounts, and scan devices with trusted anti-malware tools. Vigilance in monitoring extension permissions and updates is essential as attackers increasingly exploit the browser supply chain.
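
For IT teams starting such an audit without enterprise browser management in place, a local sweep of extension manifests can surface risky permission grants. The sketch below assumes a default Chrome profile path on Windows; adjust for other platforms or managed profiles.

```python
# Minimal sketch: enumerate locally installed Chrome extensions and flag
# manifests requesting broad permissions. The profile path is a Windows
# assumption; adjust for macOS/Linux or centrally managed profiles.
import json
import os
from pathlib import Path

PROFILE = Path(os.path.expandvars(r"%LOCALAPPDATA%\Google\Chrome\User Data\Default\Extensions"))
RISKY = {"<all_urls>", "webRequest", "tabs", "cookies", "history", "scripting"}

def audit_extensions(root: Path):
    # Layout is Extensions/<extension id>/<version>/manifest.json
    for manifest in root.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue
        perms = set(data.get("permissions", [])) | set(data.get("host_permissions", []))
        flagged = perms & RISKY
        if flagged:
            print(f"{data.get('name', manifest.parent)}: {sorted(flagged)}")

if __name__ == "__main__":
    if PROFILE.is_dir():
        audit_extensions(PROFILE)
```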

Louis Vuitton UK Customer Data Breached in Latest LVMH Cyber Attack

By Mark Sweney, July 11, 2025 | The Guardian

The Gist

Luxury fashion giant Louis Vuitton confirmed that personal data from UK customers was accessed during a cyber-attack on its systems earlier this month. The breach, which compromised names, contact information, and purchase history, follows similar incidents targeting the company’s Korean operations and sister brand Christian Dior Couture in recent months. While no financial data like bank account details were stolen, customers were warned of potential phishing or fraud attempts. This breach marks the third attack on LVMH brands in just three months, amid a wider surge in cybercrime targeting high-profile retailers such as Marks & Spencer, Harrods, and the Co-op.

The Insights

This string of breaches underscores the vulnerability of even the most prestigious global brands to persistent cyber threats. Organizations must double down on segmented access controls, real-time monitoring, and timely incident reporting. Retailers, especially those handling affluent clientele data, need to enhance backend resilience and proactively alert customers to social engineering threats. Consumers should treat post-breach communications with caution—verify emails and never click unsolicited links, even if branded. With luxury brands becoming attractive targets, cyber defense must scale alongside brand prestige.

PerfektBlue Bluetooth Vulnerabilities Expose Millions of Cars to Remote Hacking

By Bill Toulas, July 10, 2025 | BleepingComputer

The Gist

Security researchers have uncovered four critical vulnerabilities in the BlueSDK Bluetooth stack by OpenSynergy, affecting vehicles from major automakers including Mercedes-Benz, Volkswagen, and Skoda. The flaws, dubbed PerfektBlue, allow attackers to perform remote code execution (RCE) via over-the-air attacks that require minimal user interaction, sometimes just one click. Despite OpenSynergy releasing patches in September 2024, many car manufacturers have yet to deploy these updates, leaving infotainment systems vulnerable. A successful exploit can let hackers eavesdrop on in-car conversations, track GPS, access contacts, and possibly move laterally within the vehicle's network.

The Insights

This incident highlights the lag between vulnerability disclosure and patch deployment in the automotive sector—a delay that opens the door to real-world exploitation. Organizations in the automotive supply chain must enforce rapid update rollouts, enhance vendor transparency, and embed secure-by-design principles in third-party software adoption. Automakers should also improve their Bluetooth pairing protocols to avoid auto-pairing vulnerabilities. For users, the best defense is vigilance—avoid pairing devices in public or unknown areas, and always apply firmware updates when available. The future of connected vehicles demands tighter, faster cyber hygiene across manufacturers.

French Authorities Arrest Russian Basketball Player Tied to Ransomware Case

Lorenzo Franceschi-Bicchierai · July 10, 2025 · TechCrunch

The Gist

French police have arrested Daniil Kasatkin, a Russian professional basketball player for MBA Moscow, over suspicions of being involved in ransomware activities. U.S. officials allege that Kasatkin moonlighted as a hacker, and he was detained at Charles de Gaulle Airport in Paris on June 21. Kasatkin briefly played college basketball at Penn State. His lawyer denies the accusations, claiming Kasatkin is not tech-savvy and unknowingly purchased a compromised computer that may have been used for illicit activities.

The Insights

This unusual intersection of professional sports and cybercrime highlights how digital attribution challenges can lead to complex legal entanglements. Organizations and investigators must employ thorough forensic analysis to differentiate between true perpetrators and individuals inadvertently caught in cyber dragnets. Meanwhile, individuals should be cautious about digital asset provenance—especially used electronics—which can become a liability if previously tied to malicious activity. As ransomware investigations grow more global and nuanced, accurate attribution and international cooperation will be essential for fair outcomes.

Four Suspects Arrested in Major UK Retail Cyberattacks

Zack Whittaker · July 10, 2025 · TechCrunch

The Gist

British law enforcement arrested four individuals—aged 17 to 20—for allegedly orchestrating cyberattacks on high-profile retailers including Marks & Spencer, Harrods, and the Co-op. The hacks, which began in April, involved stealing customer data using social engineering techniques tied to the infamous group Scattered Spider. The attackers reportedly facilitated access for the ransomware group DragonForce, which deployed malware in Marks & Spencer’s systems. The Co-op managed to thwart the ransomware attempt by shutting down its network, while Harrods reportedly minimized damage from a similar attack.

The Insights

This case highlights how sophisticated cybercriminal groups exploit social engineering to bypass traditional defenses. Retailers and customer-facing organizations must reinforce identity verification protocols in call centers and help desks, implement strong multifactor authentication, and continuously monitor access behavior. Training staff to recognize impersonation attempts, coupled with proactive incident response capabilities, can help minimize exposure when threat actors aim to manipulate internal support processes. As ransomware syndicates evolve, so must the layers of human and technical safeguards.

Ransomware Negotiator Under Investigation for Colluding With Criminals

Danny Bradbury · July 8, 2025 · Malwarebytes

The Gist

A former employee of DigitalMint, a company that facilitates ransomware negotiations, is under investigation by the U.S. Department of Justice for allegedly conspiring with cybercriminals to profit from extortion payments. According to Bloomberg, the employee cut secret deals with ransomware gangs, compromising the firm's role as a trusted intermediary. DigitalMint has since fired the individual and denied any corporate involvement. This case adds to growing concerns about the integrity of ransomware response services, a field already tarnished by past incidents of companies secretly paying off hackers.

The Insights

The incident is a stark reminder of the ethical gray zones in ransomware negotiations. Organizations relying on third parties for crisis management must thoroughly vet those partners and enforce strict oversight to prevent conflicts of interest. Establishing internal transparency, legal compliance, and anti-fraud safeguards is essential, especially given the increasing sophistication of ransomware tactics like data theft and extortion. Companies should also revisit incident response policies and consider non-payment strategies supported by cyber insurance or governmental guidance, particularly as public pressure mounts against funding cybercriminal enterprises.

Deepfakes Now Central to Cybercrime: Criminal Ecosystem Expands

Australian Cyber Security Magazine · July 10, 2025

The Gist

A new report highlights the escalating threat of deepfake-enabled cybercrime, showing how cybercriminals are now using off-the-shelf generative AI tools to carry out fraud, impersonation, and infiltration attacks. Once seen as futuristic, deepfake technology is now a present-day business threat, used to impersonate executives, fake job candidates, and bypass financial verifications. Tools initially built for content creation are now in active use for CEO fraud, recruitment fraud, and KYC evasion, with criminals trading detailed tutorials and face-swapping plug-ins in underground forums. Trend Micro's Andrew Philp warned that deepfakes are undermining digital trust and elevating threats at every organizational layer. The report underscores how real-time voice and video impersonation make it harder to detect social engineering attempts, even during live interactions. With minimal technical expertise required, the barrier to entry has collapsed, enabling a surge of attacks across industries that depend on identity verification and remote communications.

The Insights

Organizations must rethink digital identity trust models in an era where visuals and voice can be synthetically manufactured. This includes implementing robust media authentication methods, such as biometric liveness checks, voice print matching, and video fingerprinting tools to detect manipulated content. HR and financial teams should be especially vigilant, as recruitment and fund transfer workflows are prime targets. Security leaders should also train staff to recognize behavioral red flags in digital interactions, not just rely on facial or voice recognition. A deepfake-aware culture, combined with updated incident response plans and real-time detection of synthetic media, will be critical to defending against these evolving, AI-powered threats. The age of “seeing is believing” is over—verification now demands layered, contextual validation.

SOC Teams See Spike in Phishing and BEC Incidents Amid QR Code Attacks (r/cybersecurity)

From Reddit Community Thread, July 2025 | r/cybersecurity | Glad-Entry891

The Gist

In a widely discussed Reddit thread, Security Operations Center (SOC) professionals from various industries reported a sharp uptick in true-positive incidents, with many seeing a shift from one incident a month to several per week. The increase is primarily driven by generic phishing and QR code-based scams, which are now commonplace enough that many users scan malicious QR codes without hesitation. While business email compromise (BEC) remains the primary outcome, some also report session hijacking via man-in-the-middle (MitM) attacks, with token theft increasingly enabling MFA bypass. Notably, many incidents involve phishing kits capable of stealing session tokens, elevating the threat landscape beyond basic spam filters and antivirus. This rise in incidents is also prompting an overdue conversation about standardizing definitions of “events” versus “incidents.” SOC professionals across managed service providers (MSPs) and enterprise teams note that reporting requirements, organizational size, and security maturity all influence how alerts are classified and handled. There is consensus that detection capabilities, aided by endpoint detection and response (EDR) tools, are improving, but staffing, resourcing, and executive support remain inconsistent.

The Insights

Organizations, especially MSPs and smaller businesses, must reassess how they define and respond to security alerts, particularly as phishing sophistication increases. Deploying Conditional Access Policies, leveraging Intune for device compliance, and adopting VPN-enforced geolocation restrictions can mitigate many threats. QR phishing, in particular, demands targeted user training and awareness campaigns, such as simulated attacks and policy reminders via QR codes in office spaces. To better respond to the wave of phishing and BEC threats, SOCs should also implement token theft detection, enhance OAuth app governance, and invest in centralized breach response reporting to identify attack trends. These steps are vital in preparing for AI-enhanced social engineering, which is quickly becoming the new norm in cybercrime.
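
As one concrete triage aid for QR phishing, the sketch below decodes QR codes found in image attachments and flags URLs whose hosts are not on an allowlist. It assumes the third-party Pillow and pyzbar libraries, and the allowlist entries are purely illustrative.

```python
# Sketch of a QR-phishing triage step: decode QR codes in image attachments
# and flag any that resolve to URLs outside an allowlist.
# Assumes the third-party Pillow and pyzbar libraries are installed.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

ALLOWED_HOSTS = {"login.microsoftonline.com", "sso.example.com"}  # illustrative

def suspicious_qr_urls(image_path: str) -> list[str]:
    findings = []
    for symbol in decode(Image.open(image_path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        host = urlparse(payload).hostname or ""
        if payload.lower().startswith(("http://", "https://")) and host not in ALLOWED_HOSTS:
            findings.append(payload)
    return findings

if __name__ == "__main__":
    for url in suspicious_qr_urls("attachment.png"):
        print("[flag] QR code points to unapproved host:", url)
```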

Google’s AI Stays in the UK—Sort Of: Gemini 2.5 Flash Offers Local Data Processing, But Raises Sovereignty Doubts

By Richard Speed, July 10, 2025 | The Register

The Gist

Google Cloud now allows UK organizations to process and store Gemini 2.5 Flash AI data within British borders, attempting to address local data sovereignty and compliance demands. While data processing remains within the UK, support services are handled globally, prompting criticism about the true extent of sovereignty. Critics argue that unless support and encryption keys remain local, data could still be accessed by foreign authorities, especially under legislation like the US CLOUD Act. Though Google allows customers to retain their encryption keys and promises to redirect government data requests back to clients, privacy experts worry this approach lacks legal clarity and sufficient guarantees.

The Insights

As AI adoption grows, jurisdictional control over sensitive data becomes critical, particularly in sectors like healthcare and finance. Organizations should evaluate not only where data is stored and processed, but also how it is accessed, supported, and encrypted. To enhance data control, UK-based entities using cloud AI tools should implement customer-managed encryption, consider air-gapped or partner-hosted deployments, and demand clear contractual protections against cross-border data exposure. As regulatory scrutiny increases, building AI systems with comprehensive, verifiable data localization will be essential to maintain trust and ensure compliance.

Let’s Encrypt Issues Free IP Address Certificates—Convenience or Cyber Risk?

By Pieter Arntz, July 7, 2025 | Malwarebytes Blog

The Gist

Let's Encrypt, a widely used nonprofit certificate authority, has begun issuing TLS certificates for IP addresses—a significant shift from its long-standing practice of certifying only domain names. This move could benefit users looking to secure direct IP-based connections, such as for IoT devices or NAS systems, especially where domain names are not feasible. However, cybersecurity experts warn that this also opens the door for abuse. Cybercriminals could now create legitimate-looking phishing links using just IP addresses, giving unsuspecting users a false sense of trust via HTTPS padlocks—which only indicate encryption, not authenticity or safety.

The Insights

While this update meets legitimate technical needs, organizations must adapt their detection models to identify and respond to the growing threat of phishing campaigns using IP-based certificates. Monitoring certificate transparency logs, scrutinizing links for hidden redirections, and integrating intelligence on malicious IPs are all vital steps. For users, never trust a link solely based on the padlock symbol. Cyber hygiene fundamentals—such as using multi-factor authentication, avoiding unsolicited links, and keeping devices updated—remain crucial. As attackers pivot to exploit this development, defenders must evolve their strategies to close this newly opened flank.
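
A simple first-pass control is to flag any link whose host is a bare IP literal, since such URLs can now carry a valid certificate and a reassuring padlock. The standard-library sketch below shows the idea; real mail or proxy filters would combine it with reputation data.

```python
# Small sketch: flag URLs whose host is a bare IP address, since IP-based
# TLS certificates mean the padlock alone no longer implies a named,
# recognizable site. Standard library only.
import ipaddress
from urllib.parse import urlparse

def is_ip_literal_url(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

if __name__ == "__main__":
    samples = [
        "https://203.0.113.10/login",   # documentation-range IP, flagged
        "https://example.com/login",    # normal hostname, not flagged
    ]
    for url in samples:
        print(url, "->", "review" if is_ip_literal_url(url) else "ok")
```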

Force Push Scanner Uncovers Secrets Left Behind in GitHub's Deleted Commits

By Vishwa Pandagle, July 11, 2025 | Darknet

The Gist

Force Push Scanner, developed by Truffle Security Co., is a new offensive tool designed to hunt for secrets accidentally left behind in GitHub’s ephemeral commit history. When developers use the force-push feature to overwrite sensitive data, remnants often remain in dangling commits, which are temporarily stored on GitHub’s infrastructure. This tool scans those deleted commits in real-time, identifies secrets using regex and entropy analysis, and alerts security teams before the data disappears permanently. It’s deployable via Docker or Python and integrates with GitHub Archive, making it highly adaptable for red teams and OSINT researchers.

The Insights

GitHub workflows remain a high-risk vector for credential exposure, and Force Push Scanner reveals a blind spot many security teams miss—post-deletion artifacts. To mitigate risks, organizations should disable force-push on protected branches, enforce pre-push secret scanning via Git hooks, and audit force-push logs. Additionally, centralized secret management and training developers on secure practices are key. This tool illustrates how adversaries can exploit common developer habits, emphasizing the need for real-time monitoring and hardened DevSecOps pipelines to keep sensitive information from slipping through the cracks.
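
A minimal version of that hook-based secret scanning can be done by grepping the staged diff for known secret formats and high-entropy tokens, in the same spirit as dedicated scanners like TruffleHog. The patterns and entropy threshold below are illustrative; purpose-built tools cover far more formats, and a pre-push variant would diff the outgoing commit range instead of the staged changes.

```python
#!/usr/bin/env python3
# Sketch of a Git hook (installable as .git/hooks/pre-commit) that scans the
# staged diff for likely secrets using simple regexes plus a Shannon-entropy
# check on long tokens. Patterns and thresholds are illustrative.
import math
import re
import subprocess
import sys

PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                # AWS access key ID style
    re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),  # private key header
]
TOKEN = re.compile(r"[A-Za-z0-9+/=_\-]{32,}")
ENTROPY_THRESHOLD = 4.2  # bits per character; tune for your codebase

def shannon_entropy(s: str) -> float:
    return -sum((s.count(c) / len(s)) * math.log2(s.count(c) / len(s)) for c in set(s))

def staged_diff() -> str:
    return subprocess.run(
        ["git", "diff", "--cached", "-U0"], capture_output=True, text=True
    ).stdout

def main() -> int:
    diff = staged_diff()
    hits = [p.pattern for p in PATTERNS if p.search(diff)]
    hits += [t for t in TOKEN.findall(diff) if shannon_entropy(t) > ENTROPY_THRESHOLD][:3]
    if hits:
        print("Potential secrets detected; commit blocked:", hits, file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```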

Jack Dorsey’s Bitchat App Promises Security—Without Actually Being Secure

Lorenzo Franceschi-Bicchierai · July 9, 2025 · TechCrunch

The Gist

Jack Dorsey, CEO of Block and Twitter co-founder, recently unveiled Bitchat, an open-source, decentralized chat app boasting “secure” messaging via Bluetooth and end-to-end encryption. However, the app has drawn sharp criticism from the security community because it launched without any external security review. Following the backlash, Dorsey appended a GitHub disclaimer admitting the app “may contain vulnerabilities” and should not be trusted in its current form. Researchers have since identified serious flaws, including broken identity verification that allows impersonation, questionable claims about forward secrecy, and a suspected buffer overflow vulnerability.

The Insights

This situation reinforces a critical lesson: labeling an app “secure” does not make it so without proper vetting. Organizations exploring alternative communication tools, particularly for high-risk or censorship-prone environments, must validate claims through third-party audits and penetration testing before adoption. Developers of privacy-focused applications must prioritize formal security reviews before release, primarily when users might rely on them for personal safety. For those considering Bitchat or similar tools, the current advice is clear—observe, don’t deploy, until the app earns its security credentials through rigorous review.

Facebook’s AI Wants Access to Your Entire Camera Roll—But At What Cost?

By Danny Bradbury, July 1, 2025 | Malwarebytes Blog

The Gist

Facebook has begun prompting users to allow cloud-based scanning of their personal photos stored on their phone’s camera roll. In exchange, the platform promises to offer features like AI-generated collages, event-themed recaps, and "restyling" suggestions. While this service is opt-in, Facebook’s terms state that it may analyze uploaded photos—including facial features and metadata like time and location—and potentially review them manually or via third-party vendors. Legal complications arise for residents of states with strict biometric privacy laws, such as Illinois and Texas. Concerns also mount about the possibility of the AI accessing sensitive or private imagery, including photos of children or intimate moments.

The Insights

Meta's new initiative underscores the importance of being cautious with automated cloud uploads and AI photo analysis tools. Despite assurances of privacy and ad-free intentions, users should assume that any uploaded image could be scrutinized, repurposed, or retained indefinitely. Organizations and individuals must critically assess the privacy policies of any app requesting broad file access, especially those with a history of data mishandling. For consumers, disable auto-upload features, avoid sharing sensitive media through apps with expansive rights clauses, and inform friends and family of your consent expectations when sharing group photos. In the era of AI-fueled data collection, trust should be earned, not assumed.

RondoDox Botnet Masquerades as VPN and Gaming Traffic to Hit Surveillance Systems

By Vishwa Pandagle, July 5, 2025 | TechNadu

The Gist

The newly discovered RondoDox botnet is exploiting critical Linux vulnerabilities to target internet-connected surveillance and industrial routers—particularly TBK DVRs and Four-Faith devices. Discovered by FortiGuard Labs, RondoDox disguises its malicious traffic as popular VPN or gaming data, evading detection by firewalls and monitoring tools. It also renames critical binaries, disables analysis tools, and maintains long-term persistence using layered scripts. While there are no confirmed victims yet, the malware’s stealth, C2 server communication, and infrastructure focus suggest it could be a nation-state-level threat.

The Insights

Organizations operating surveillance or industrial IoT networks must treat RondoDox as a serious infrastructure threat. Immediate actions should include patching vulnerable devices, restricting remote access, and deploying EDR solutions with behavioral analytics. Administrators should monitor embedded systems for unusual protocol traffic, and utilize file integrity tools to detect renamed binaries. With its evasion and anti-forensic tactics, RondoDox highlights the urgent need for zero-trust network segmentation and routine firmware validation across all edge devices—especially those in energy, transportation, and telecommunications environments.
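
A bare-bones version of that file-integrity check is sketched below: hash a watchlist of binaries once, then re-run to spot files that changed or disappeared. The watched paths and baseline location are illustrative; embedded devices would typically run an equivalent check from firmware or a management host.

```python
# Minimal file-integrity sketch: record SHA-256 hashes of critical binaries
# once, then re-run to spot renamed or replaced files, a tactic RondoDox
# reportedly uses for evasion. Paths and baseline location are illustrative.
import hashlib
import json
from pathlib import Path
from typing import Optional

WATCHED = ["/usr/sbin/iptables", "/usr/bin/wget", "/usr/bin/curl"]
BASELINE = Path("/var/lib/integrity/baseline.json")

def hash_file(path: str) -> Optional[str]:
    try:
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()
    except OSError:
        return None  # missing or unreadable, itself worth flagging

def check():
    current = {p: hash_file(p) for p in WATCHED}
    if not BASELINE.exists():
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded.")
        return
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print(f"[alert] {path} changed or missing since baseline")

if __name__ == "__main__":
    check()
```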

PC Gamers Beware: Call of Duty: WWII Exploit Lets Hackers Hijack Machines

By Pieter Arntz, July 7, 2025 | Malwarebytes Blog

The Gist

The PC version of Call of Duty: WWII has been temporarily taken offline after a remote code execution (RCE) vulnerability was discovered that allows attackers to take control of players’ computers during multiplayer sessions. This exploit surfaced just days after the game was added to Microsoft’s Game Pass service, prompting reports of players experiencing unauthorized access to their systems—ranging from command prompts being opened to forced shutdowns and even malicious desktop changes. The vulnerability stems from the game’s peer-to-peer (P2P) architecture, a common issue in legacy online titles, and appears to affect only the PC version, particularly via Game Pass and possibly Steam.

The Insights

This event is a clear reminder that older, unpatched games can become high-risk attack surfaces, especially when suddenly thrust into mainstream use via platforms like Game Pass. Organizations and gamers alike should view multiplayer games with P2P networking as potential security liabilities. Until a patch is issued, players should avoid launching the game on PC and ensure all security updates are installed. Use reputable anti-malware software, and follow official updates from Activision. For developers, this reinforces the need to revisit legacy titles for security audits, especially when re-releasing on new platforms.

When AI Hallucinates URLs: How LLMs Are Fueling Phishing Attacks

By Bilaal Rashid, July 1, 2025 | Netcraft Blog

The Gist

Netcraft's latest study highlights a troubling new threat vector: AI-driven misinformation leading to phishing attacks. When researchers asked large language models (LLMs) like GPT-4.1 for login URLs to 50 major brands, 34% of the suggested domains were not controlled by the brands—many were inactive, unregistered, or linked to unrelated businesses. In one case, Perplexity AI suggested a phishing page impersonating Wells Fargo, exposing how AI-generated content can mislead users with convincing but malicious links. As AI becomes the default interface across search engines and digital assistants, these "hallucinations" pose a systemic and scalable cybersecurity risk.

The Insights

The rise of AI SEO manipulation—designing content not for human eyes, but for AI models—signals a dangerous evolution in phishing tactics. Cybercriminals are already planting AI-optimized malicious sites, blog tutorials, and GitHub repos to manipulate LLM outputs and coding assistants. Defensive domain registration is no longer sufficient. Organizations must embrace real-time threat detection, domain monitoring, and LLM-specific risk assessments. AI tools must integrate context-aware guardrails and threat intelligence, as Netcraft proposes, to stop being unwitting accomplices. Trust in AI must be earned, and that means holding LLMs to higher standards of accuracy and verification.
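
One guardrail pattern is to validate any model-suggested login URL against a curated set of verified brand domains before it ever reaches a user. The sketch below shows the check; the hard-coded allowlist is purely illustrative and would be replaced by verified brand and domain intelligence in practice.

```python
# Sketch of a simple guardrail: before surfacing an LLM-suggested login URL,
# check that its host belongs to a curated brand allowlist. The allowlist
# entries are illustrative placeholders.
from urllib.parse import urlparse

BRAND_DOMAINS = {
    "wellsfargo.com",
    "microsoft.com",
    "paypal.com",
}

def is_allowed_login_url(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the domain itself or any subdomain of an allowlisted domain.
    return any(host == d or host.endswith("." + d) for d in BRAND_DOMAINS)

if __name__ == "__main__":
    for candidate in [
        "https://connect.secure.wellsfargo.com/login",  # legitimate subdomain
        "https://wellsfargo.example-login.com/login",   # lookalike, rejected
    ]:
        print(candidate, "->", "allow" if is_allowed_login_url(candidate) else "block")
```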

“One Big Beautiful Bill” Prioritizes Military Cyber, Ignores Civilian Cybersecurity Needs

By Tim Starks, July 7, 2025 | CyberScoop

The Gist

President Trump’s newly signed “One Big Beautiful Bill” pours hundreds of millions into military cybersecurity, notably allocating $250 million to Cyber Command for AI initiatives, $20 million to DARPA, and funds across defense branches for cyber offense and infrastructure. Non-military funding is sparse, with just a nod to cybersecurity via a rural health grant program. Notably absent is any support for CISA (Cybersecurity and Infrastructure Security Agency), drawing sharp criticism from Democrats, who argue the bill undermines core national cybersecurity functions and leaves election and infrastructure protection underfunded.

The Insights

The bill’s military-heavy cyber focus reflects a growing trend of tactical cyber prioritization, while neglecting civilian-facing cyber resilience, such as protecting local governments, hospitals, and election systems. Organizations, especially in the public sector, must not wait for federal support—investments in threat hunting, secure-by-design principles, and partnerships with private cybersecurity firms will be essential. As CISA funding faces cuts, local agencies and businesses should bolster incident response readiness and push for state-level cyber initiatives to fill the growing gap.

NIST Extends Public Comment Period on HPC Security Overlay

By ACSM_Accro, July 11, 2025 | Australian Cyber Security Magazine

The Gist

The National Institute of Standards and Technology (NIST) has extended the public comment deadline for its initial draft of Special Publication 800-234, titled High-Performance Computing (HPC) Security Overlay, to August 4, 2025. This publication aims to tailor security controls specifically for high-performance computing systems, which are critical in domains like AI/ML training, big data analytics, and scientific simulations. Built on the moderate baseline from SP 800-53B, the overlay adjusts 60 security controls with added guidance to suit the performance-focused nature of HPC environments.

The Insights

Organizations utilizing HPC infrastructure—particularly in sectors managing sensitive data or mission-critical AI models—should review and comment on the draft to ensure the overlay reflects practical, real-world needs. Adopting this overlay can serve as a robust security foundation that aligns compliance with performance efficiency, a key challenge in HPC environments. Furthermore, it encourages customizable implementation while promoting sector-specific safeguards that protect against evolving threats in high-compute scenarios. Now is the time for stakeholders to provide feedback that can shape the future security standards of AI and big data infrastructure.

Private Equity’s Cyber Disconnect: Awareness Without Action Risks Costly Breaches

By George V. Hulme, July 10, 2025

The Gist

A new survey by cybersecurity firm S-RM reveals a troubling gap between private equity firms’ awareness of cybersecurity risks and their actual diligence practices. While 89% acknowledge cybersecurity maturity affects acquisition decisions, the average firm spends just over $25,000 on cyber due diligence—barely half the amount allocated to general tech assessments. Experts warn that security assessments are often deprioritized in favor of financial metrics, leaving portfolios vulnerable to compromise. Alarmingly, 72% of respondents reported experiencing serious cybersecurity incidents within their portfolios over the past three years, while fewer than two-thirds require immediate incident reporting from portfolio companies.

The Insights

Private equity firms must align cyber due diligence with real-world risk exposure to avoid damaging breaches post-acquisition. First, they should increase upfront investment in cyber assessments, especially during M&A. At minimum, this includes reviewing security policies, incident response plans, and employee training protocols. Firms should also standardize baseline security expectations across portfolio companies and provide centralized support through shared services and intelligence sharing. Treating cybersecurity as a strategic advantage, not just a compliance hurdle, helps build resilience and enhances exit value. For PE firms navigating today’s threat landscape, security maturity is now a deal-breaker—not a postscript.

Hospitality Sector Braces for Cyber Onslaught This Summer

By Vishnu Rageev R., July 10, 2025 | Asian Hospitality

The Gist

A surge in cyberattacks targeting the hotel industry is expected this summer, with 66% of IT and security executives predicting increased attack frequency and 50% anticipating higher severity, according to VikingCloud’s latest report. The study, titled “Peak Season, Peak Risk: The 2025 State of Hospitality Cyber Report,” identifies guest-facing systems as the most vulnerable, particularly POS systems (72%), guest WiFi (56%), and front desk tech (34%). Notably, AI-driven threats and deepfakes are emerging attack vectors, but 48% of hotel staff feel unprepared to handle them. Despite most hotels employing standard protections like antivirus software and VPNs, fewer than half use more advanced measures such as vulnerability scanning, ransomware protection, or penetration testing. Compounding the issue, 30% have no plans to outsource cybersecurity to managed providers, leaving a wide gap in resilience just as the summer travel boom heightens risk.

The Insights

Hotels must move beyond basic cybersecurity practices to confront evolving AI-enabled threats. Upgrading to comprehensive defense measures—including real-time monitoring, regular vulnerability assessments, and ransomware protections—is now a necessity, not a luxury. Additionally, investing in staff training and incident response readiness can improve resilience, especially given that over 70% of hotels experienced repeated attacks in 2024. With payment systems and guest data at high risk, proactive cybersecurity strategies can help hotels protect their operations, revenue, and brand trust during this peak travel season.

Cracking Into Cybersecurity: How to Start and Thrive in a High-Demand Career

By ESET Editorial Team, July 4, 2025 | ESET WeLiveSecurity

How to get into cybersecurity | Unlocked 403 cybersecurity podcast (S2E3)

The Gist

With demand for cybersecurity talent continuing to outpace supply, ESET’s Robert Lipovsky, Aryeh Goretsky, and Cameron Camp offer a comprehensive roadmap for breaking into the field. The shortage is vast—over 4 million globally—yet entry is not limited to those with elite degrees. Many professionals enter via certifications, self-study, open-source contributions, and unconventional routes. Lipovsky highlights the importance of curiosity and persistence, while Goretsky and Camp emphasize practical skills like command-line fluency, networking fundamentals, and communication prowess. The field rewards those who are adaptable, eager to learn, and capable of linking real-world problems with technical solutions.

The Insights

Aspiring professionals should begin by identifying what to protect—understanding how systems, networks, and software operate forms the core of all cybersecurity roles. Certifications like the CISSP may help get past initial resume screenings, but experience and proof of skills (like GitHub contributions or technical blogs) matter just as much. Soft skills such as translating technical risks into business value and effectively communicating with non-technical stakeholders are often decisive. Remote work has widened opportunities but also increased competition, making personal branding and networking more critical than ever. There’s no single path—just start, stay curious, and keep learning.

Framing Cybersecurity as a Business Driver, Not a Cost Center

By Chris Singlemann, July 9, 2025 | The Register

The Gist

Security leaders often struggle to gain board approval for new tools and headcount, especially when budgets are tight and technical language clouds the business case. A recent SANS survey revealed 47% of cybersecurity professionals cited budget constraints as their top concern in 2025. The article argues that to secure funding, CISOs must reframe security investments as enablers of business resilience, reputation, and regulatory alignment. Boards aren’t interested in patching details—they want to know how a tool impacts risk, revenue, and reputation. Instead of focusing solely on preventing threats, security leaders should highlight how controls increase operational efficiency, reduce insurance or headcount costs, and even support sales through compliance automation. Metrics like reduced MTTR (mean time to respond) or fewer false positives must be translated into board-friendly language. Including case studies of peer organizations—whether success stories or cautionary tales—helps humanize the proposal and strengthen the business case.

The Insights

To get executive buy-in, cybersecurity professionals must speak the language of business, tying security outcomes to clear business objectives. Align proposals with board priorities—whether that's avoiding downtime, meeting compliance obligations, or protecting brand trust. Use data to quantify threat exposure and demonstrate potential ROI, but also tell compelling stories to contextualize the risk. Security leaders should prepare a comprehensive rollout and performance plan for any proposed tool, including ongoing evaluation, scalability, and integration strategies. Treat every pitch as a long-term value conversation, not a panic-driven ask. As threat landscapes evolve and budgets shrink, those who can tie security to strategic advantage—not just risk mitigation—will lead the conversation at the executive level.
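
As a small worked example of translating operations data into board language, the sketch below computes mean time to respond (MTTR) and a quarter-over-quarter improvement figure from incident open/close timestamps. The incident records are fabricated placeholders.

```python
# Tiny sketch: turn raw incident timestamps into a board-friendly metric,
# mean time to respond (MTTR), plus a quarter-over-quarter delta.
# The incident records below are fabricated placeholders for illustration.
from datetime import datetime
from statistics import mean

incidents_q1 = [("2025-01-04T09:00", "2025-01-04T17:30"),
                ("2025-02-11T13:15", "2025-02-12T08:45")]
incidents_q2 = [("2025-04-02T10:00", "2025-04-02T13:20"),
                ("2025-05-19T22:05", "2025-05-20T03:40")]

def mttr_hours(incidents):
    durations = [
        (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600
        for start, end in incidents
    ]
    return mean(durations)

q1, q2 = mttr_hours(incidents_q1), mttr_hours(incidents_q2)
print(f"MTTR Q1: {q1:.1f}h, Q2: {q2:.1f}h ({(q1 - q2) / q1:.0%} improvement)")
```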

Emerging Darknet Marketplaces of 2025: Anatomy, Tactics & Trends

Published: July 9, 2025

Overview

Darknet marketplaces remain a central vector in the digital underground of 2025, showing both resilience and adaptation. This comprehensive report identifies key players, evolving operational models, and significant law enforcement responses, providing essential intelligence for cybersecurity leaders, red teams, and threat analysts monitoring illicit ecosystems. It underscores the strategic pivot to privacy-enhancing technologies, vendor migrations, and niche specialization, with implications for proactive defense and threat intelligence operations.

Active Platforms: Market Expansion and Vendor Consolidation

  • Abacus Market has emerged as a dominant force since its 2021 inception, now listing over 40,000 illicit goods. With $43.3M in 2024 on-chain revenue (up 183%), it's a prime target for tracking vendor migration and operational patterns post-shutdowns like Archetyp Market.

  • Russian Market, specializing in credential theft and stealer logs, remains a top-tier source of sensitive identity data.

  • BriansClub, a persistent presence since 2014 despite major breaches, continues to thrive, signaling enduring trust in its data distribution networks.

  • Exodus Market, a Genesis successor, leverages infected endpoint logs across 190 countries and enforces invitation-only access, raising the bar for red team simulations and bot behavior profiling.

Payment Trends: Rise of Monero and Transactional Obfuscation

Markets are pivoting hard toward Monero (XMR) for its superior privacy features, including RingCT and stealth addresses, as Bitcoin’s traceability becomes a liability. The dual-currency model adopted by Abacus and Russian Market aims to balance anonymity and accessibility, while exclusive Monero adoption reflects increasing operational caution and risk aversion.

Market Shutdowns: Enforcement Success and Adversary Shifts

  • Archetyp Market was dismantled in June 2025 during Operation Deep Sentinel, marking a significant takedown involving six countries and €250M in transactions. However, such victories often trigger threat migration, as seen with vendors flocking to Exodus.

  • BidenCash and others executed exit scams, exploiting trust-building tactics before vanishing, offering persistent lessons in social engineering manipulation and the volatility of criminal trust economies.

Evolving Threat Dynamics: Trust Controls and Specialization

  • Platforms are enforcing PGP-encrypted communication, reputation systems, and bot reliability checks to manage access and maintain operational integrity.

  • Vertical specialization—like Russian Market’s sole focus on stolen data—supports more granular targeting and threat modeling.

  • Marketplace segmentation aids threat actors in evading surveillance, necessitating agile threat intelligence approaches.

Strategic Takeaways for Cybersecurity Leaders

  • Vendor Migration Analysis: Mapping shifts post-shutdowns can expose attacker loyalties, tradecraft evolution, and data redistribution channels.

  • Payment Forensics: Tracking Monero usage, although challenging, remains crucial in de-anonymizing laundering patterns.

  • Proactive Monitoring: Focusing on niche or rising platforms enables earlier detection of emerging toolkits, malware variants, and supply chain threats.

  • Darknet Metrics Matter: Chainalysis reports indicate $2B in darknet Bitcoin inflows in 2024, with Abacus alone accounting for nearly 5%—emphasizing the sheer scale and continuity of illicit economies.

Conclusion

The darknet marketplace ecosystem in 2025 reflects an intricate balance of adaptation and persistence. While law enforcement continues to land high-profile blows, adversaries rapidly pivot, often more agile than regulatory and detection mechanisms can handle. For defenders, real-time marketplace intelligence, robust vendor risk monitoring, and cross-sectoral collaboration remain indispensable for staying ahead in this evolving arms race of digital commerce and cybercrime.

From Phishing Fakes to Agentic AI: Why Cybersecurity Must Rethink the Rules of Engagement

Published: July 12, 2025 | Source: Cybersecurity Today, Month in Review | Panelists: Jim Love (Host), Laura Payne (White Tuque), David Shipley (Beauceron Security), Tammy Harper (Flare)

Overview

In this high-stakes edition of Cybersecurity Today, host Jim Love is joined by security leaders Laura Payne, David Shipley, and Tammy Harper to unpack a turbulent month in cyber risk, from a $30 million Canadian elder scam to the evolution of youth-led cyber syndicates like Scattered Spider. The episode traces how emerging threat actors, behavioral blind spots, and AI experimentation are converging into a volatile new threat environment. The panel pushes for clear-eyed risk governance, stronger social engineering defenses, and smarter AI integration before enterprises lose control of the tools they’ve barely begun to understand.

Key Themes

Domestic Cybercrime Hits Home: A major fraud ring in Montreal targeted seniors with fake grandchild emergencies, using legitimate business fronts and call centers. The scheme, uncovered after years of investigation, highlights the rise of local, organized cybercrime with devastating real-world consequences.

Youth-Driven Threats Go Global: Scattered Spider exemplifies the rise of agile, socially engineered cyber syndicates using phishing, help desk exploits, and cloud account takeovers to cripple airlines and insurance firms. These groups blend crime with community, recruiting via Telegram and TikTok, and exploiting lax employment pathways and weak onboarding defenses.

Help Desks: The Hidden Weak Link. The panel criticizes IT service frameworks like ITIL for incentivizing speed over caution in identity validation. Enterprises are urged to shift toward dual-incentive help desk metrics, social engineering detection training, and behavioral baselining to counteract this overlooked risk.

Ransomware Evolves Again. Facing increased prosecution risks, ransomware gangs like Hunters International are rebranding and pivoting to extortion-only tactics. This shift intensifies the pressure on businesses to improve data governance, minimize blast radius exposure, and avoid reactive ransom payments.

Ingram Micro: A Fast Recovery, Marred by Silence. The Ingram Micro attack demonstrated record-breaking technical resilience—but poor crisis communications. Experts stress that fast remediation must be matched with transparency to avoid reputational erosion and regulatory scrutiny.

Agentic AI: High Power, Low Guardrails. As enterprises begin integrating agentic AI systems that plan and act independently, security professionals warn of underestimated vulnerabilities. From easily exploited LLMs to flawed permission architectures, the panel calls for mandatory sandboxing, bill-of-materials disclosures, and enterprise-wide AI risk reviews before full-scale deployments.

Actionable Guidance for Leaders

  • Treat AI tools as operational actors—not just assistants.

  • Make your help desk the frontline of anti-social engineering strategy.

  • Focus on identity governance for human and non-human (machine) users.

  • Demand faster breach disclosure and comms training for IR teams.

  • Incentivize secure innovation, not just performance metrics.

About the Author

Jim Love is the host of Cybersecurity Today and a veteran technology strategist with decades of experience leading IT and cybersecurity transformations in enterprise environments. As CIO at ITWC and a frequent moderator for executive panels, Jim brings clarity and candor to some of the most pressing digital risk conversations in Canada and globally. His work explores the intersections of cybersecurity, digital ethics, and AI governance, always with a focus on making complex issues actionable for business leaders.
