Vulnerabilities in AI-as-a-Service Pose Threats to Data Security

Vulnerabilities Discovered in AI-as-a-Service Platforms
Recent research has uncovered critical vulnerabilities in AI-as-a-service platforms such as Hugging Face, potentially exposing millions of private AI models and applications to security threats. According to findings by Wiz researchers Shir Tamari and Sagi Tzadik, these vulnerabilities could allow malicious actors to escalate privileges, gain unauthorized access to other customers’ models, and compromise continuous integration and continuous deployment (CI/CD) pipelines.
Two-Pronged Threats and Their Implications
The identified threats take two forms: shared inference infrastructure takeover and shared CI/CD takeover. In the first scenario, an attacker uploads a rogue model serialized in Python’s pickle format; because unpickling can execute arbitrary code, loading such a model on the shared inference infrastructure hands the attacker code execution inside Hugging Face’s environment. The research highlights the alarming possibility of lateral movement within clusters from that foothold, which could compromise the service more broadly and expose models belonging to other customers, raising serious concerns about data privacy and security.
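
To illustrate why pickle-serialized models are dangerous, here is a minimal Python sketch (not taken from the Wiz research) showing how a pickled object can run an arbitrary command the moment it is loaded; the harmless echo command stands in for attacker-controlled code:

```python
import os
import pickle


class RoguePayload:
    # pickle lets an object dictate how it is reconstructed via __reduce__.
    # Returning (os.system, (command,)) makes the unpickler call os.system
    # with that command during loading, with no further interaction needed.
    def __reduce__(self):
        return (os.system, ("echo code executed while loading the model",))


# Serialize the payload as if it were an ordinary "model" file.
malicious_model = pickle.dumps(RoguePayload())

# Any service that deserializes the untrusted file runs the command.
pickle.loads(malicious_model)
```

This is why Hugging Face and others warn against loading pickle-based models from untrusted sources and encourage safer serialization formats such as safetensors.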
Moreover, the researchers found that running applications on the Hugging Face Spaces service could lead to remote code execution via a specially crafted Dockerfile. This vulnerability could let threat actors manipulate the service’s internal container registry, compromising the integrity of the platform’s infrastructure and posing additional security risks.
Mitigation Measures and Industry Responses
In response to the findings, the researchers recommend mitigations such as enabling IMDSv2 with a hop limit, which keeps containerized workloads from reaching the cloud instance metadata service and harvesting the credentials of the underlying node. Hugging Face, following a coordinated disclosure, has addressed the identified issues and advises users to exercise caution when using AI models, especially those sourced from untrusted origins.
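
As a rough sketch of that mitigation, assuming the cluster nodes are AWS EC2 instances and using a placeholder region and instance ID, IMDSv2 enforcement with a hop limit of 1 can be applied with boto3:

```python
import boto3

# Assumes AWS credentials are already configured; region and instance ID
# below are placeholders, not values from the research.
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # hypothetical node instance ID
    HttpTokens="required",             # require IMDSv2 session tokens
    HttpPutResponseHopLimit=1,         # metadata responses stop at the host,
                                       # so workloads one network hop away
                                       # cannot steal the node's credentials
    HttpEndpoint="enabled",
)
```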
Additionally, Anthropic’s disclosure of risks associated with large language models (LLMs) underscores the need for vigilance when deploying AI technologies. The company detailed an attack it calls “many-shot jailbreaking,” in which a prompt packed with many faux dialogue turns exploits long context windows to override a model’s safety training, and published the technique so that mitigations could be developed.
As AI continues to permeate various sectors, ensuring the security and integrity of AI-as-a-service platforms becomes paramount. The revelations from these studies serve as a wake-up call for the industry to fortify defenses against evolving cyber threats and to adopt stringent security protocols that safeguard sensitive data and prevent unauthorized access.
Furthermore, the disclosure from Lasso Security underscores the broader implications of AI-related vulnerabilities. Generative AI models can recommend code packages that are malicious or do not exist at all, and attackers can register those hallucinated names to distribute malware, which makes it essential to scrutinize AI output and exercise caution when relying on AI-generated coding suggestions. As the threat landscape evolves, industry stakeholders must remain proactive in identifying and mitigating potential risks to ensure the responsible and secure deployment of AI technologies.
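
One practical form of that scrutiny, sketched below under the assumption that the suggestion targets Python and PyPI (the package name and helper function are hypothetical), is to check whether an AI-suggested dependency actually exists on the public index before installing it:

```python
import requests


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered project on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


suggested = "some-helper-lib"  # hypothetical name produced by a coding assistant
if not package_exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI; a typosquatter could register it later.")
else:
    print(f"'{suggested}' exists; still review its maintainers, history, and source.")
```

Existence alone proves little, since attackers can pre-register hallucinated names, so manual review of any unfamiliar package remains essential.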
