SlideShare a Scribd company logo
2
Most read
3
Most read
11
Most read
Privacy and
Security in the
Age of
Generative AI
Benjamin Bengfort, Ph.D. @ C4AI 2025
UNC5267
North Korea has used Western Language LLMs to
generate fake resumes and profiles to apply for
thousands of remote work jobs in western tech
companies.
Once hired, these “workers” (usually laptop farms in
China or Russia that are supervised by a handful of
individuals) use remote access tools to gain
unauthorized access to corporate infrastructure.
https://cloud.google.com/blog/topics/threat-intelligence/mitigating-dprk-it-worker-threat
https://www.forbes.com/sites/rashishrivastava/2024/08/27/the-prompt-north-korean-operati
ves-are-using-ai-to-get-remote-it-jobs/
AI Targeted Phishing
60% of participants in a recent study fell victim to AI
generated spear phishing content, a similar
success rate compared to non-AI generated
messages by human experts.
LLMs reduce the cost of generating spear phishing
messages by 95% while increasing their
effectiveness.
https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams
F. Heiding, B. Schneier, A. Vishwanath, J. Bernstein and P. S. Park, "Devising and
Detecting Phishing Emails Using Large Language Models," in IEEE Access, vol. 12, pp.
42131-42146, 2024, doi: 10.1109/ACCESS.2024.3375882.
AI Generated Malware
OpenAI is playing a game of whack a mole trying to
ban the accounts of malicious actors who are using
ChatGPT to quickly generate malware as payloads
in targeted attacks using zip files, VBScripts, etc.
“The code is clearly AI generated because it is well
commented and most malicious actors want to
obfuscate what they’re doing to security
researchers.”
https://www.bleepingcomputer.com/news/security/openai-confirms-threat-actors-use-chatg
pt-to-write-malware/
https://www.bleepingcomputer.com/news/security/hackers-deploy-ai-written-malware-in-tar
geted-attacks/
Hugging Face Attacks
While Hugging Face does have excellent security
best practices and code scanning alerts; it is still a
vector of attack because of arbitrary code execution
in pickle __reduce__ and torch.load.
For example, the baller423/goober2 repository
had a model uploaded that initiates a reverse shell
to an IP address allowing the attacker to access the
model compute environment.
https://jfrog.com/blog/data-scientists-targeted-by-malicious-hugging-face-ml-models-with-si
lent-backdoor/
Data Trojans in DRL
AI Agents can be exploited to cause harm using
data poisoning or trojans injected during the
training phase of deep reinforcement learning.
Poisoning as little as 0.025% of the training data
allowed the inclusion of a classification backdoor
causing the agent to call a remote function.
A simple agent whose task is constrained is usually
allowed admin level privileges in its operation.
Panagiota, Kiourti, et al. "Trojdrl: Trojan attacks on deep reinforcement learning agents. in
proc. 57th acm/ieee design automation conference (dac), 2020, march 2020." Proc. 57th
ACM/IEEE Design Automation Conference (DAC), 2020. 2020.
Adversarial self-replicating prompts: prompts that
when processed by Gemini Pro, ChatGPT 4.0 and
LLaVA caused the model to replicate the input as
output to engage in malicious activities.
Additionally, these inputs compel the agent to
propagate to new agents by exploiting connectivity
within the GenAI ecosystem.
2 methods: flow-steering and RAG poisoning.
GenAI Worms
Cohen, Stav, Ron Bitton, and Ben Nassi. "Here Comes The AI Worm: Unleashing
Zero-click Worms that Target GenAI-Powered Applications." arXiv preprint
arXiv:2403.02817 (2024).
A custom AI agent built to translate natural
language prompts into bash commands using
Anthropic’s Claude LLM.
Prompt: “Access desktop using SSH”
SSH was successful but the agent continued by
updating the old Linux kernel, then investigated
why apt was taking so long and eventually bricked
the computer by rewriting the Grub boot loader.
Rogue Agents
https://decrypt.co/284574/ai-assistant-goes-rogue-and-ends-up-bricking-a-users-computer
Generally, prompts that are intended to cause an
LLM to leak sensitive information or to perform a
task in a manner not proscribed by the application
to the attacker’s benefit.
Extended case: the manipulation of a valid user’s
prompt in order to cause the LLM to take an
unexpected action or cause irrelevant output.
Prompt Injection
https://decrypt.co/284574/ai-assistant-goes-rogue-and-ends-up-bricking-a-users-computer
Liu, Yupei, et al. "Formalizing and benchmarking prompt injection attacks and defenses."
33rd USENIX Security Symposium (USENIX Security 24). 2024.
Targeting function calling LLMs that perform Google
searches and include the results into a prompt (e.g.
search based RAG); researchers showed that by
embedding hidden prompts into the retrieved
websites, they could manipulate LLMs to expose
private user data and information.
Indirect Prompt
Injection
https://thehill.com/opinion/cybersecurity/3953399-hijacked-ai-assistants-can-now-hack-you
r-data/
Greshake, Kai, et al. "Not what you've signed up for: Compromising real-world
llm-integrated applications with indirect prompt injection." Proceedings of the 16th ACM
Workshop on Artificial Intelligence and Security. 2023.
You can type just about anything into ChatGPT. But
users recently discovered that asking anything
about "David Mayer" caused ChatGPT to shut
down the conversation with the terse reply, "I'm
unable to produce a response."
A message shown at the bottom of the screen
doubled up on the David-dislike, saying, "There
was an error generating a response.
David Mayer
https://www.newsweek.com/chatgpt-openai-david-mayer-error-ai-1994100
https://www.cnet.com/tech/services-and-software/chatgpt-wont-answer-questions-about-c
ertain-names-heres-what-we-know/
Function calling (also referred to as “skills” or “tool
use” allows LLMs to make API calls based on the
descriptions of the tools available and their
parameters.
However, give an LLM a tool … it wants to use that
tool! Even prompts such as “tell me a joke” might
lead to unexpected tool use.
For more on this - come tonight!
Function Calling
https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling
A specialized form of indirect prompt injection;
exploits the fact that AI models see the complete
tool descriptions, including hidden instructions,
while users typically only see simplified versions in
their UI.
The attack modifies the tool instructions and can
use shadowing to exploit trusted servers. Because
MCP (Model Context Protocol) uses these tool calls
and has a trusted execution context, attackers can
gain access to sensitive files such as SSH keys.
Tool Poisoning: MCP
https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks
Important Lessons
Expect the Unexpected
Generative AI is not a deterministic
computer program that will behave within
expected pre-defined parameters. Treat
AI as stochastic and unpredictable.
Data governance and security in the form
of access controls is not optional when
doing machine learning and AI tasks.
Data security is as important as compute
environment security.
Do not trust the internet! Verify, escape,
scrub, and scan anything that comes from
the web! Make sure that you and your
models have guardrails.
We desperately need a mechanism to
identify what is human generated text or
imagery and what is AI generated.
Classifiers and/or watermarking is not
sufficient!
Guardrails!
Data Governance is Key
Certify Authorship
Important Lessons
Expect the Unexpected
Generative AI is not a deterministic
computer program that will behave within
expected pre-defined parameters. Treat
AI as stochastic and unpredictable.
Data governance and security in the form
of access controls is not optional when
doing machine learning and AI tasks.
Data security is as important as compute
environment security.
Do not trust the internet! Verify, escape,
scrub, and scan anything that comes from
the web! Make sure that you and your
models have guardrails.
We desperately need a mechanism to
identify what is human generated text or
imagery and what is AI generated.
Classifiers and/or watermarking is not
sufficient!
Guardrails!
Data Governance is Key
Certify Authorship
Important Lessons
Expect the Unexpected
Generative AI is not a deterministic
computer program that will behave within
expected pre-defined parameters. Treat
AI as stochastic and unpredictable.
Data governance and security in the form
of access controls is not optional when
doing machine learning and AI tasks.
Data security is as important as compute
environment security.
Do not trust the internet! Verify, escape,
scrub, and scan anything that comes from
the web! Make sure that you and your
models have guardrails.
We desperately need a mechanism to
identify what is human generated text or
imagery and what is AI generated.
Classifiers and/or watermarking is not
sufficient!
Guardrails!
Data Governance is Key
Certify Authorship
Important Lessons
Expect the Unexpected
Generative AI is not a deterministic
computer program that will behave within
expected pre-defined parameters. Treat
AI as stochastic and unpredictable.
Data governance and security in the form
of access controls is not optional when
doing machine learning and AI tasks.
Data security is as important as compute
environment security.
Do not trust the internet! Verify, escape,
scrub, and scan anything that comes from
the web! Make sure that you and your
models have guardrails.
We desperately need a mechanism to
identify what is human generated text or
imagery and what is AI generated.
Classifiers and/or watermarking is not
sufficient!
Guardrails!
Data Governance is Key
Certify Authorship
Happy to take comments and questions
online or chat after the talk!
benjamin@rotational.io
https://rtnl.link/SEmP0wIrMft
rotational.io
Thanks!
Some images in this presentation were AI generated using Gemini Pro
Special thanks to Ali Haidar and John Bruns at Anomali for
providing some of the threat intelligence research.
@bbengfort

More Related Content

PDF
Privacy and Security in the Age of Generative AI
PDF
Artificial Intelligence in cybersecurity
PDF
PROFITABLE USES OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING TO SECURE OUR...
PDF
Securing Cloud Using Fog: A Review
PPTX
Secure AI Development: Strategies for Safe Innovation in a Machine-Led World
PDF
Lessons Learned from Developing Secure AI Workflows.pdf
PDF
Implementing Function Calling LLMs without Fear.pdf
PDF
Phishing Detection using Decision Tree Model
Privacy and Security in the Age of Generative AI
Artificial Intelligence in cybersecurity
PROFITABLE USES OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING TO SECURE OUR...
Securing Cloud Using Fog: A Review
Secure AI Development: Strategies for Safe Innovation in a Machine-Led World
Lessons Learned from Developing Secure AI Workflows.pdf
Implementing Function Calling LLMs without Fear.pdf
Phishing Detection using Decision Tree Model

Similar to Privacy and Security in the Age of Generative AI - C4AI.pdf (20)

PDF
Role of Generative AI in Cybersecurity.pdf
PPTX
Product security by Blockchain, AI and Security Certs
PDF
MACHINE LEARNING APPROACH TO LEARN AND DETECT MALWARE IN ANDROID
PPTX
Automation: Embracing the Future of SecOps
PPTX
Ethical hacking
PDF
Two Aspect Endorsement Access Control for web Based Cloud Computing
PDF
Empowering Cloud-native Security: the Transformative Role of Artificial Intel...
PDF
Empowering Cloud-native Security: the Transformative Role of Artificial Intel...
PDF
Empowering Cloud-native Security: the Transformative Role of Artificial Intel...
PDF
Role of Generative AI in Cybersecurity.pdf
PDF
Ijsrdv8 i10355
PPTX
[DSC Europe 23][AI:CSI] Dragan Pleskonjic - AI Impact on Cybersecurity and P...
DOCX
Create a software key logger
PDF
API SECURITY by krishna murari and vikas maurya
PPSX
IBM: Cognitive Security Transformation for the Enrgy Sector
PDF
FireTail at API Days Australia 2024 - The Double-edge sword of AI for API Sec...
PDF
Machine Learning: A Game-Changer in Cybersecurity
PDF
A Survey of Keylogger in Cybersecurity Education
PPTX
Machine learning in Cyber Security
PDF
Improve network safety through better visibility – Netmagic
Role of Generative AI in Cybersecurity.pdf
Product security by Blockchain, AI and Security Certs
MACHINE LEARNING APPROACH TO LEARN AND DETECT MALWARE IN ANDROID
Automation: Embracing the Future of SecOps
Ethical hacking
Two Aspect Endorsement Access Control for web Based Cloud Computing
Empowering Cloud-native Security: the Transformative Role of Artificial Intel...
Empowering Cloud-native Security: the Transformative Role of Artificial Intel...
Empowering Cloud-native Security: the Transformative Role of Artificial Intel...
Role of Generative AI in Cybersecurity.pdf
Ijsrdv8 i10355
[DSC Europe 23][AI:CSI] Dragan Pleskonjic - AI Impact on Cybersecurity and P...
Create a software key logger
API SECURITY by krishna murari and vikas maurya
IBM: Cognitive Security Transformation for the Enrgy Sector
FireTail at API Days Australia 2024 - The Double-edge sword of AI for API Sec...
Machine Learning: A Game-Changer in Cybersecurity
A Survey of Keylogger in Cybersecurity Education
Machine learning in Cyber Security
Improve network safety through better visibility – Netmagic
Ad

More from Benjamin Bengfort (20)

PDF
Digitocracy without Borders: the unifying and destabilizing effects of softwa...
PDF
Getting Started with TRISA
PDF
Visual diagnostics for more effective machine learning
PDF
Visualizing Model Selection with Scikit-Yellowbrick: An Introduction to Devel...
PDF
Dynamics in graph analysis (PyData Carolinas 2016)
PDF
Visualizing the Model Selection Process
PDF
Data Product Architectures
PDF
A Primer on Entity Resolution
PDF
An Interactive Visual Analytics Dashboard for the Employment Situation Report
PPTX
Graph Based Machine Learning on Relational Data
PDF
Introduction to Machine Learning with SciKit-Learn
PDF
Fast Data Analytics with Spark and Python
PDF
Evolutionary Design of Swarms (SSCI 2014)
PDF
An Overview of Spanner: Google's Globally Distributed Database
PDF
Graph Analyses with Python and NetworkX
PDF
Natural Language Processing with Python
PDF
Beginners Guide to Non-Negative Matrix Factorization
PDF
Building Data Products with Python (Georgetown)
PDF
Annotation with Redfox
PDF
Rasta processing of speech
Digitocracy without Borders: the unifying and destabilizing effects of softwa...
Getting Started with TRISA
Visual diagnostics for more effective machine learning
Visualizing Model Selection with Scikit-Yellowbrick: An Introduction to Devel...
Dynamics in graph analysis (PyData Carolinas 2016)
Visualizing the Model Selection Process
Data Product Architectures
A Primer on Entity Resolution
An Interactive Visual Analytics Dashboard for the Employment Situation Report
Graph Based Machine Learning on Relational Data
Introduction to Machine Learning with SciKit-Learn
Fast Data Analytics with Spark and Python
Evolutionary Design of Swarms (SSCI 2014)
An Overview of Spanner: Google's Globally Distributed Database
Graph Analyses with Python and NetworkX
Natural Language Processing with Python
Beginners Guide to Non-Negative Matrix Factorization
Building Data Products with Python (Georgetown)
Annotation with Redfox
Rasta processing of speech
Ad

Recently uploaded (20)

PPTX
Big Data Technologies - Introduction.pptx
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Shreyas Phanse Resume: Experienced Backend Engineer | Java • Spring Boot • Ka...
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
PPTX
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PDF
KodekX | Application Modernization Development
PDF
Electronic commerce courselecture one. Pdf
PDF
CIFDAQ's Market Insight: SEC Turns Pro Crypto
PPT
Teaching material agriculture food technology
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Empathic Computing: Creating Shared Understanding
Big Data Technologies - Introduction.pptx
Encapsulation_ Review paper, used for researhc scholars
The Rise and Fall of 3GPP – Time for a Sabbatical?
Unlocking AI with Model Context Protocol (MCP)
Shreyas Phanse Resume: Experienced Backend Engineer | Java • Spring Boot • Ka...
Review of recent advances in non-invasive hemoglobin estimation
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
Diabetes mellitus diagnosis method based random forest with bat algorithm
Reach Out and Touch Someone: Haptics and Empathic Computing
20250228 LYD VKU AI Blended-Learning.pptx
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
Advanced methodologies resolving dimensionality complications for autism neur...
KodekX | Application Modernization Development
Electronic commerce courselecture one. Pdf
CIFDAQ's Market Insight: SEC Turns Pro Crypto
Teaching material agriculture food technology
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Empathic Computing: Creating Shared Understanding

Privacy and Security in the Age of Generative AI - C4AI.pdf

  • 1. Privacy and Security in the Age of Generative AI Benjamin Bengfort, Ph.D. @ C4AI 2025
  • 2. UNC5267 North Korea has used Western Language LLMs to generate fake resumes and profiles to apply for thousands of remote work jobs in western tech companies. Once hired, these “workers” (usually laptop farms in China or Russia that are supervised by a handful of individuals) use remote access tools to gain unauthorized access to corporate infrastructure. https://cloud.google.com/blog/topics/threat-intelligence/mitigating-dprk-it-worker-threat https://www.forbes.com/sites/rashishrivastava/2024/08/27/the-prompt-north-korean-operati ves-are-using-ai-to-get-remote-it-jobs/
  • 3. AI Targeted Phishing 60% of participants in a recent study fell victim to AI generated spear phishing content, a similar success rate compared to non-AI generated messages by human experts. LLMs reduce the cost of generating spear phishing messages by 95% while increasing their effectiveness. https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams F. Heiding, B. Schneier, A. Vishwanath, J. Bernstein and P. S. Park, "Devising and Detecting Phishing Emails Using Large Language Models," in IEEE Access, vol. 12, pp. 42131-42146, 2024, doi: 10.1109/ACCESS.2024.3375882.
  • 4. AI Generated Malware OpenAI is playing a game of whack a mole trying to ban the accounts of malicious actors who are using ChatGPT to quickly generate malware as payloads in targeted attacks using zip files, VBScripts, etc. “The code is clearly AI generated because it is well commented and most malicious actors want to obfuscate what they’re doing to security researchers.” https://www.bleepingcomputer.com/news/security/openai-confirms-threat-actors-use-chatg pt-to-write-malware/ https://www.bleepingcomputer.com/news/security/hackers-deploy-ai-written-malware-in-tar geted-attacks/
  • 5. Hugging Face Attacks While Hugging Face does have excellent security best practices and code scanning alerts; it is still a vector of attack because of arbitrary code execution in pickle __reduce__ and torch.load. For example, the baller423/goober2 repository had a model uploaded that initiates a reverse shell to an IP address allowing the attacker to access the model compute environment. https://jfrog.com/blog/data-scientists-targeted-by-malicious-hugging-face-ml-models-with-si lent-backdoor/
  • 6. Data Trojans in DRL AI Agents can be exploited to cause harm using data poisoning or trojans injected during the training phase of deep reinforcement learning. Poisoning as little as 0.025% of the training data allowed the inclusion of a classification backdoor causing the agent to call a remote function. A simple agent whose task is constrained is usually allowed admin level privileges in its operation. Panagiota, Kiourti, et al. "Trojdrl: Trojan attacks on deep reinforcement learning agents. in proc. 57th acm/ieee design automation conference (dac), 2020, march 2020." Proc. 57th ACM/IEEE Design Automation Conference (DAC), 2020. 2020.
  • 7. Adversarial self-replicating prompts: prompts that when processed by Gemini Pro, ChatGPT 4.0 and LLaVA caused the model to replicate the input as output to engage in malicious activities. Additionally, these inputs compel the agent to propagate to new agents by exploiting connectivity within the GenAI ecosystem. 2 methods: flow-steering and RAG poisoning. GenAI Worms Cohen, Stav, Ron Bitton, and Ben Nassi. "Here Comes The AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications." arXiv preprint arXiv:2403.02817 (2024).
  • 8. A custom AI agent built to translate natural language prompts into bash commands using Anthropic’s Claude LLM. Prompt: “Access desktop using SSH” SSH was successful but the agent continued by updating the old Linux kernel, then investigated why apt was taking so long and eventually bricked the computer by rewriting the Grub boot loader. Rogue Agents https://decrypt.co/284574/ai-assistant-goes-rogue-and-ends-up-bricking-a-users-computer
  • 9. Generally, prompts that are intended to cause an LLM to leak sensitive information or to perform a task in a manner not proscribed by the application to the attacker’s benefit. Extended case: the manipulation of a valid user’s prompt in order to cause the LLM to take an unexpected action or cause irrelevant output. Prompt Injection https://decrypt.co/284574/ai-assistant-goes-rogue-and-ends-up-bricking-a-users-computer Liu, Yupei, et al. "Formalizing and benchmarking prompt injection attacks and defenses." 33rd USENIX Security Symposium (USENIX Security 24). 2024.
  • 10. Targeting function calling LLMs that perform Google searches and include the results into a prompt (e.g. search based RAG); researchers showed that by embedding hidden prompts into the retrieved websites, they could manipulate LLMs to expose private user data and information. Indirect Prompt Injection https://thehill.com/opinion/cybersecurity/3953399-hijacked-ai-assistants-can-now-hack-you r-data/ Greshake, Kai, et al. "Not what you've signed up for: Compromising real-world llm-integrated applications with indirect prompt injection." Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security. 2023.
  • 11. You can type just about anything into ChatGPT. But users recently discovered that asking anything about "David Mayer" caused ChatGPT to shut down the conversation with the terse reply, "I'm unable to produce a response." A message shown at the bottom of the screen doubled up on the David-dislike, saying, "There was an error generating a response. David Mayer https://www.newsweek.com/chatgpt-openai-david-mayer-error-ai-1994100 https://www.cnet.com/tech/services-and-software/chatgpt-wont-answer-questions-about-c ertain-names-heres-what-we-know/
  • 12. Function calling (also referred to as “skills” or “tool use” allows LLMs to make API calls based on the descriptions of the tools available and their parameters. However, give an LLM a tool … it wants to use that tool! Even prompts such as “tell me a joke” might lead to unexpected tool use. For more on this - come tonight! Function Calling https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling
  • 13. A specialized form of indirect prompt injection; exploits the fact that AI models see the complete tool descriptions, including hidden instructions, while users typically only see simplified versions in their UI. The attack modifies the tool instructions and can use shadowing to exploit trusted servers. Because MCP (Model Context Protocol) uses these tool calls and has a trusted execution context, attackers can gain access to sensitive files such as SSH keys. Tool Poisoning: MCP https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks
  • 14. Important Lessons Expect the Unexpected Generative AI is not a deterministic computer program that will behave within expected pre-defined parameters. Treat AI as stochastic and unpredictable. Data governance and security in the form of access controls is not optional when doing machine learning and AI tasks. Data security is as important as compute environment security. Do not trust the internet! Verify, escape, scrub, and scan anything that comes from the web! Make sure that you and your models have guardrails. We desperately need a mechanism to identify what is human generated text or imagery and what is AI generated. Classifiers and/or watermarking is not sufficient! Guardrails! Data Governance is Key Certify Authorship
  • 15. Important Lessons Expect the Unexpected Generative AI is not a deterministic computer program that will behave within expected pre-defined parameters. Treat AI as stochastic and unpredictable. Data governance and security in the form of access controls is not optional when doing machine learning and AI tasks. Data security is as important as compute environment security. Do not trust the internet! Verify, escape, scrub, and scan anything that comes from the web! Make sure that you and your models have guardrails. We desperately need a mechanism to identify what is human generated text or imagery and what is AI generated. Classifiers and/or watermarking is not sufficient! Guardrails! Data Governance is Key Certify Authorship
  • 16. Important Lessons Expect the Unexpected Generative AI is not a deterministic computer program that will behave within expected pre-defined parameters. Treat AI as stochastic and unpredictable. Data governance and security in the form of access controls is not optional when doing machine learning and AI tasks. Data security is as important as compute environment security. Do not trust the internet! Verify, escape, scrub, and scan anything that comes from the web! Make sure that you and your models have guardrails. We desperately need a mechanism to identify what is human generated text or imagery and what is AI generated. Classifiers and/or watermarking is not sufficient! Guardrails! Data Governance is Key Certify Authorship
  • 17. Important Lessons Expect the Unexpected Generative AI is not a deterministic computer program that will behave within expected pre-defined parameters. Treat AI as stochastic and unpredictable. Data governance and security in the form of access controls is not optional when doing machine learning and AI tasks. Data security is as important as compute environment security. Do not trust the internet! Verify, escape, scrub, and scan anything that comes from the web! Make sure that you and your models have guardrails. We desperately need a mechanism to identify what is human generated text or imagery and what is AI generated. Classifiers and/or watermarking is not sufficient! Guardrails! Data Governance is Key Certify Authorship
  • 18. Happy to take comments and questions online or chat after the talk! benjamin@rotational.io https://rtnl.link/SEmP0wIrMft rotational.io Thanks! Some images in this presentation were AI generated using Gemini Pro Special thanks to Ali Haidar and John Bruns at Anomali for providing some of the threat intelligence research. @bbengfort