SlideShare a Scribd company logo
Introduction to Large Language Models
Basics and Safety Considerations
Hossein A. Rahmani
PhD Student, UCL AI Center
University College London (UCL)
hossein.rahmani.22@ucl.ac.uk
rahmanidashti.github.io
Source: https://www.youtube.com/watch?v=zizonToFXDs
Source: https://www.youtube.com/watch?v=zizonToFXDs
Source: https://www.youtube.com/watch?v=zizonToFXDs
LLMs took over the world!
but what are they, and how do they work?
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
Are LLMs something new?
Are LLMs something new?
LLMs have been around for a while! Any examples?
Are LLMs something new?
LLMs have been around for a while! Any examples?
Google Keyword Suggestion
Keyboard Keyword Suggestion Gmail Smart Compose
Are LLMs something new?
Source: https://levelup.gitconnected.com/the-brief-history-of-large-language-models-a-journey-from-eliza-to-gpt-4-and-google-bard-167c614af5af
Large Language
Models (LLMs)
are a subset of
Deep learning.
Artificial Intelligence
Machine Learning
Deep Learning
Large Language Models
making machine intelligent
making machines that learn
method of machine learning
a new model of DL
Large, general-purpose language
models can be pre-trained and the
fine-tuned for specific purpose
Training a dog
Sit Come Down Stay
Basic commands, sufficient for everyday life, a good canine citizen
How do you train a dog
What if we need specialised service dog, such as police dog?
basic-commands trained dog
police dog
guide dog
Special trainings
Similar idea applies to large language models
Transformer
Architecture
Pre-Training Data
(Unlabeled) Pre-trained Model
Post-training Data
(Labeled) Fine-tuned Model
Pre-training Fine-tuning
PaLM, Google
Medical data
https://blog.google/technology/health/ai-llm-medpalm-research-thecheckup/
Med-PaLM
Language modeling
imagine the following task: predict the next word in a sequence
{ A trained language model can ___ } What words come next?
Language modeling
imagine the following task: predict the next word in a sequence
{ A trained language model can ___ }
Can we frame this as a machine learning problem? How?
What words come next?
Language modeling
imagine the following task: predict the next word in a sequence
{ A trained language model can ___ }
Can we frame this as a machine learning problem? How?
What words come next?
{ A trained language model can ___ }
LLM
Word Probability
speak 0.064
generate 0.075
politics 0.001
… …
walk 0.002
AR Language Models
● Task: Predict the next word
LLM
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
AR Language Models
● Task: Predict the next word
● Steps:
a. tokenize (she likely prefers ___)
LLM
She likely prefers
1 2 3
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
AR Language Models
● Task: Predict the next word
● Steps:
a. tokenize
b. forward
LLM
She likely prefers
1 2 3
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
AR Language Models
● Task: Predict the next word
● Steps:
a. tokenize
b. forward
c. predict probability of next token
LLM
She likely prefers
1 2 3
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
AR Language Models
● Task: Predict the next word
● Steps:
a. tokenize
b. forward
c. predict probability of next token
d. sample LLM
She likely prefers
1 2 3
5
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
AR Language Models
● Task: Predict the next word
● Steps:
a. tokenize
b. forward
c. predict probability of next token
d. sample
e. detokenize
LLM
She likely prefers
1 2 3
dogs
5
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
AR Language Models
● Task: Predict the next word
● Steps:
a. tokenize
b. forward
c. predict probability of next token
d. sample
e. detokenize
LLM
She likely prefers
1 2 3
dogs
5
inference only
Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
Large language models are trained to solve
common language problems, like …
Text classification Question answering Document summarisation Text generation
… then be tailored to solve specific
problems in different fields, like …
Retail Finance Entertainment
Trained with a
relatively
small size of
field datasets
What are the benefits of using LLMs?
A single model can
be used for
different task
The fine-tune
process required
minimum filed data
The performance is
continuously growing with
more data and parameters
LLM development vs. Traditional development
● No ML experience needed
● No training examples
● No need to train a model
● Think about prompt design
● Yes ML expertise needed
● Yes training examples
● Yes need to train a model
● Yes, compute time + hardware
● Think about minimizing a loss function
LLM Development
(using pre-trained APIs)
Traditional ML development
Source: https://www.youtube.com/watch?v=zizonToFXDs
Safety and ethical consideration
Privacy Concerns
The use of LLMs in sensitive applications raises
concerns about data privacy and the potential
for abuse.
Hallucination and Misinformation
LLMs can be used to generate convincing yet false information,
potentially leading to the spread of misinformation.
Bias and Fairness
LLMs can reflect and amplify societal biases
present in their training data, leading to unfair or
discriminatory outputs.
Prompt Injection and Jailbreaks
LLMs could be misused for tasks like creating targeted spam,
phishing attacks, or generating harmful content.
Bias in LLMs responses
Wang, Peiyi, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui.
"Large language models are not fair evaluators." arXiv preprint arXiv:2305.17926 (2023).
"Simply changing the order of candidate responses leads to overturned comparison
results, even though we add the command “ensuring that the order in which the
responses were presented does not affect your judgment” into the prompt."
Hallucination in LLMs responses
1. Zhang, Yue, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang et al. "Siren’s song in the AI ocean: A survey on hallucination in large language models,
2023." URL https://arxiv. org/abs/2309.01219 (2024).
2. Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, and Denny Zhou. "Chain-of-thought prompting elicits reasoning in large
language models." Advances in neural information processing systems 35 (2022): 24824-24837.
"Three types of hallucinations occurred in LLM responses"
Hallucination in LLMs responses
1. Zhang, Yue, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang et al. "Siren’s song in the AI ocean: A survey on hallucination in large language models,
2023." URL https://arxiv. org/abs/2309.01219 (2024).
2. Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, and Denny Zhou. "Chain-of-thought prompting elicits reasoning in large
language models." Advances in neural information processing systems 35 (2022): 24824-24837.
"Three types of hallucinations occurred in LLM responses"
"Chain-of-thought prompting enables LLMs to tackle complex arithmetic,
commonsense, and symbolic reasoning tasks. Chain-of-thought reasoning processes are highlighted."
Privacy issues
https://www.bitdefender.com/en-gb/blog/hotforsecurity/chatgpt-bug-leaks-users-chat-histories
https://www.bbc.co.uk/news/technology-65047304
ChatGPT bug leaked users' chat histories
"A ChatGPT glitch allowed some users to see the titles of other
users' conversations, the artificial intelligence chatbot's boss has
said." – On social media sites Reddit and Twitter, users had shared
images of chat histories that they said were not theirs.
How about personalized information in prompt or recommendation?
Prompt injection and jailbreak attack
https://www.tigera.io/learn/guides/llm-security/prompt-injection/
https://outrider.org/nuclear-weapons/articles/could-chatbot-teach-you-how-build-dirty-bomb
Prompt injection is a security risk where attackers manipulate the input prompts to an LLM to elicit undesirable or harmful
responses.
Prompt injection and jailbreak attack
Prompt injection is a security risk where attackers manipulate the input prompts to an LLM to elicit undesirable or harmful
responses.
https://www.tigera.io/learn/guides/llm-security/prompt-injection/
https://outrider.org/nuclear-weapons/articles/could-chatbot-teach-you-how-build-dirty-bomb
Prompt injection and jailbreak attack
Prompt injection is a security risk where attackers manipulate the input prompts to an LLM to elicit undesirable or harmful
responses.
https://www.tigera.io/learn/guides/llm-security/prompt-injection/
https://outrider.org/nuclear-weapons/articles/could-chatbot-teach-you-how-build-dirty-bomb
Thanks!
any questions?

More Related Content

PDF
LLM.pdf
PDF
1721436375967hhhhhhhhhhhhhuuuuuuuuuu.pdf
PDF
Evaluating the top large language models.pdf
PDF
How to supervise a thesis in NLP in the ChatGPT era? By Laure Soulier
PDF
Top Comparison of Large Language ModelsLLMs Explained.pdf
PDF
Comparison of Large Language Models The Ultimate Guide.pdf
PDF
solulab.com-Top Comparison of Large Language ModelsLLMs Explained.pdf
PDF
solulab.com-Top Comparison of Large Language ModelsLLMs Explained.pdf
LLM.pdf
1721436375967hhhhhhhhhhhhhuuuuuuuuuu.pdf
Evaluating the top large language models.pdf
How to supervise a thesis in NLP in the ChatGPT era? By Laure Soulier
Top Comparison of Large Language ModelsLLMs Explained.pdf
Comparison of Large Language Models The Ultimate Guide.pdf
solulab.com-Top Comparison of Large Language ModelsLLMs Explained.pdf
solulab.com-Top Comparison of Large Language ModelsLLMs Explained.pdf

Similar to Intro to LLMs Basics and their Safety Consideration (20)

PDF
Quick Start Guide To Large Language Models Second Edition Sinan Ozdemir
PDF
solulab.com-Comparison of Large Language Models The Ultimate Guide (1).pdf
PDF
Top Comparison of Large Language ModelsLLMs Explained (2).pdf
PPTX
Unit 5.ppt Fundamenrtal of Artificial intelligence
PPTX
Non technical explanation of Large Language Model
PDF
Top Comparison of Large Language ModelsLLMs Explained.pdf
PPTX
Understanding Large Language Models (1).pptx
PDF
How Large Language Models Are Changing the AI Landscape
PDF
Large Language Modelsjjjhhhjjjjjjbbbbbbj.pdf
PDF
Exploring LLMs in the World of Artificial Intelligence (AI) (2).pdf
PDF
Build a Large Language Model From Scratch MEAP Sebastian Raschka
PPTX
Introduction-to-Large-Language-Models.pptx
PPTX
Generative-AI-on-the-MSc-Environmental-Technology-23_24.pptx
PPTX
Generative-AI-on-the-MSc-Environmental-Technology-23_24.pptx
PPTX
deep_learning_presentation related to llm
PPTX
llm_presentation and deep learning methods
PDF
Intoduction to Large language models prompt
PDF
Crafting Your Customized Legal Mastery: A Guide to Building Your Private LLM
PPTX
The Beginner's Guide To Large Language Models
PDF
You and Your Research -- LLMs Perspective
Quick Start Guide To Large Language Models Second Edition Sinan Ozdemir
solulab.com-Comparison of Large Language Models The Ultimate Guide (1).pdf
Top Comparison of Large Language ModelsLLMs Explained (2).pdf
Unit 5.ppt Fundamenrtal of Artificial intelligence
Non technical explanation of Large Language Model
Top Comparison of Large Language ModelsLLMs Explained.pdf
Understanding Large Language Models (1).pptx
How Large Language Models Are Changing the AI Landscape
Large Language Modelsjjjhhhjjjjjjbbbbbbj.pdf
Exploring LLMs in the World of Artificial Intelligence (AI) (2).pdf
Build a Large Language Model From Scratch MEAP Sebastian Raschka
Introduction-to-Large-Language-Models.pptx
Generative-AI-on-the-MSc-Environmental-Technology-23_24.pptx
Generative-AI-on-the-MSc-Environmental-Technology-23_24.pptx
deep_learning_presentation related to llm
llm_presentation and deep learning methods
Intoduction to Large language models prompt
Crafting Your Customized Legal Mastery: A Guide to Building Your Private LLM
The Beginner's Guide To Large Language Models
You and Your Research -- LLMs Perspective
Ad

More from Hossein A. (Saeed) Rahmani (15)

PDF
Clarification Questions Usefulness (Slides)
PDF
TREC 2023 DL - Synthetic Data Generation (Slides)
PDF
Synthetic Test Collections for Retrieval Evaluation (Poster)
PDF
Beyond-Accuracy Provider Fairness (Slides)
PDF
ContextsPOI (Slides)
PDF
ACQSurvey (Slides)
PDF
ACQSurvey (Poster)
PDF
GeneralizibilityFairness - DEFirst Reading Group
PDF
Towards Confidence-aware Calibrated Recommendation (Poster)
PDF
Towards Confidence-aware Calibrated Recommendation (Slides)
PDF
Experiments on Generalizability of User-Oriented Fairness in Recommender Systems
PDF
CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommende...
PDF
The Unfairness of Popularity Bias in Book Recommendation (Bias@ECIR22)
PDF
The Unfairness of Active Users and Popularity Bias in Point-of-Interest Recom...
PDF
Introduction to Complex Networks
Clarification Questions Usefulness (Slides)
TREC 2023 DL - Synthetic Data Generation (Slides)
Synthetic Test Collections for Retrieval Evaluation (Poster)
Beyond-Accuracy Provider Fairness (Slides)
ContextsPOI (Slides)
ACQSurvey (Slides)
ACQSurvey (Poster)
GeneralizibilityFairness - DEFirst Reading Group
Towards Confidence-aware Calibrated Recommendation (Poster)
Towards Confidence-aware Calibrated Recommendation (Slides)
Experiments on Generalizability of User-Oriented Fairness in Recommender Systems
CPFair: Personalized Consumer and Producer Fairness Re-ranking for Recommende...
The Unfairness of Popularity Bias in Book Recommendation (Bias@ECIR22)
The Unfairness of Active Users and Popularity Bias in Point-of-Interest Recom...
Introduction to Complex Networks
Ad

Recently uploaded (20)

PDF
2.FourierTransform-ShortQuestionswithAnswers.pdf
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PDF
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
PDF
TR - Agricultural Crops Production NC III.pdf
PDF
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
PDF
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
Cell Types and Its function , kingdom of life
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PPTX
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PDF
Computing-Curriculum for Schools in Ghana
PDF
Complications of Minimal Access Surgery at WLH
PDF
RMMM.pdf make it easy to upload and study
PDF
Module 4: Burden of Disease Tutorial Slides S2 2025
PDF
Microbial disease of the cardiovascular and lymphatic systems
PDF
FourierSeries-QuestionsWithAnswers(Part-A).pdf
PPTX
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
2.FourierTransform-ShortQuestionswithAnswers.pdf
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
TR - Agricultural Crops Production NC III.pdf
Physiotherapy_for_Respiratory_and_Cardiac_Problems WEBBER.pdf
ANTIBIOTICS.pptx.pdf………………… xxxxxxxxxxxxx
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
Cell Types and Its function , kingdom of life
102 student loan defaulters named and shamed – Is someone you know on the list?
Introduction_to_Human_Anatomy_and_Physiology_for_B.Pharm.pptx
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
Computing-Curriculum for Schools in Ghana
Complications of Minimal Access Surgery at WLH
RMMM.pdf make it easy to upload and study
Module 4: Burden of Disease Tutorial Slides S2 2025
Microbial disease of the cardiovascular and lymphatic systems
FourierSeries-QuestionsWithAnswers(Part-A).pdf
PPT- ENG7_QUARTER1_LESSON1_WEEK1. IMAGERY -DESCRIPTIONS pptx.pptx
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf

Intro to LLMs Basics and their Safety Consideration

  • 1. Introduction to Large Language Models Basics and Safety Considerations Hossein A. Rahmani PhD Student, UCL AI Center University College London (UCL) hossein.rahmani.22@ucl.ac.uk rahmanidashti.github.io
  • 5. LLMs took over the world! but what are they, and how do they work? Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 7. Are LLMs something new? LLMs have been around for a while! Any examples?
  • 8. Are LLMs something new? LLMs have been around for a while! Any examples? Google Keyword Suggestion Keyboard Keyword Suggestion Gmail Smart Compose
  • 9. Are LLMs something new? Source: https://levelup.gitconnected.com/the-brief-history-of-large-language-models-a-journey-from-eliza-to-gpt-4-and-google-bard-167c614af5af
  • 10. Large Language Models (LLMs) are a subset of Deep learning. Artificial Intelligence Machine Learning Deep Learning Large Language Models making machine intelligent making machines that learn method of machine learning a new model of DL
  • 11. Large, general-purpose language models can be pre-trained and the fine-tuned for specific purpose
  • 13. Sit Come Down Stay Basic commands, sufficient for everyday life, a good canine citizen How do you train a dog
  • 14. What if we need specialised service dog, such as police dog? basic-commands trained dog police dog guide dog Special trainings
  • 15. Similar idea applies to large language models Transformer Architecture Pre-Training Data (Unlabeled) Pre-trained Model Post-training Data (Labeled) Fine-tuned Model Pre-training Fine-tuning PaLM, Google Medical data https://blog.google/technology/health/ai-llm-medpalm-research-thecheckup/ Med-PaLM
  • 16. Language modeling imagine the following task: predict the next word in a sequence { A trained language model can ___ } What words come next?
  • 17. Language modeling imagine the following task: predict the next word in a sequence { A trained language model can ___ } Can we frame this as a machine learning problem? How? What words come next?
  • 18. Language modeling imagine the following task: predict the next word in a sequence { A trained language model can ___ } Can we frame this as a machine learning problem? How? What words come next? { A trained language model can ___ } LLM Word Probability speak 0.064 generate 0.075 politics 0.001 … … walk 0.002
  • 19. AR Language Models ● Task: Predict the next word LLM Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 20. AR Language Models ● Task: Predict the next word ● Steps: a. tokenize (she likely prefers ___) LLM She likely prefers 1 2 3 Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 21. AR Language Models ● Task: Predict the next word ● Steps: a. tokenize b. forward LLM She likely prefers 1 2 3 Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 22. AR Language Models ● Task: Predict the next word ● Steps: a. tokenize b. forward c. predict probability of next token LLM She likely prefers 1 2 3 Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 23. AR Language Models ● Task: Predict the next word ● Steps: a. tokenize b. forward c. predict probability of next token d. sample LLM She likely prefers 1 2 3 5 Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 24. AR Language Models ● Task: Predict the next word ● Steps: a. tokenize b. forward c. predict probability of next token d. sample e. detokenize LLM She likely prefers 1 2 3 dogs 5 Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 25. AR Language Models ● Task: Predict the next word ● Steps: a. tokenize b. forward c. predict probability of next token d. sample e. detokenize LLM She likely prefers 1 2 3 dogs 5 inference only Adapted from: https://www.youtube.com/watch?v=9vM4p9NN0Ts
  • 26. Large language models are trained to solve common language problems, like … Text classification Question answering Document summarisation Text generation
  • 27. … then be tailored to solve specific problems in different fields, like … Retail Finance Entertainment Trained with a relatively small size of field datasets
  • 28. What are the benefits of using LLMs? A single model can be used for different task The fine-tune process required minimum filed data The performance is continuously growing with more data and parameters
  • 29. LLM development vs. Traditional development ● No ML experience needed ● No training examples ● No need to train a model ● Think about prompt design ● Yes ML expertise needed ● Yes training examples ● Yes need to train a model ● Yes, compute time + hardware ● Think about minimizing a loss function LLM Development (using pre-trained APIs) Traditional ML development Source: https://www.youtube.com/watch?v=zizonToFXDs
  • 30. Safety and ethical consideration Privacy Concerns The use of LLMs in sensitive applications raises concerns about data privacy and the potential for abuse. Hallucination and Misinformation LLMs can be used to generate convincing yet false information, potentially leading to the spread of misinformation. Bias and Fairness LLMs can reflect and amplify societal biases present in their training data, leading to unfair or discriminatory outputs. Prompt Injection and Jailbreaks LLMs could be misused for tasks like creating targeted spam, phishing attacks, or generating harmful content.
  • 31. Bias in LLMs responses Wang, Peiyi, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. "Large language models are not fair evaluators." arXiv preprint arXiv:2305.17926 (2023). "Simply changing the order of candidate responses leads to overturned comparison results, even though we add the command “ensuring that the order in which the responses were presented does not affect your judgment” into the prompt."
  • 32. Hallucination in LLMs responses 1. Zhang, Yue, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang et al. "Siren’s song in the AI ocean: A survey on hallucination in large language models, 2023." URL https://arxiv. org/abs/2309.01219 (2024). 2. Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, and Denny Zhou. "Chain-of-thought prompting elicits reasoning in large language models." Advances in neural information processing systems 35 (2022): 24824-24837. "Three types of hallucinations occurred in LLM responses"
  • 33. Hallucination in LLMs responses 1. Zhang, Yue, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang et al. "Siren’s song in the AI ocean: A survey on hallucination in large language models, 2023." URL https://arxiv. org/abs/2309.01219 (2024). 2. Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, and Denny Zhou. "Chain-of-thought prompting elicits reasoning in large language models." Advances in neural information processing systems 35 (2022): 24824-24837. "Three types of hallucinations occurred in LLM responses" "Chain-of-thought prompting enables LLMs to tackle complex arithmetic, commonsense, and symbolic reasoning tasks. Chain-of-thought reasoning processes are highlighted."
  • 34. Privacy issues https://www.bitdefender.com/en-gb/blog/hotforsecurity/chatgpt-bug-leaks-users-chat-histories https://www.bbc.co.uk/news/technology-65047304 ChatGPT bug leaked users' chat histories "A ChatGPT glitch allowed some users to see the titles of other users' conversations, the artificial intelligence chatbot's boss has said." – On social media sites Reddit and Twitter, users had shared images of chat histories that they said were not theirs. How about personalized information in prompt or recommendation?
  • 35. Prompt injection and jailbreak attack https://www.tigera.io/learn/guides/llm-security/prompt-injection/ https://outrider.org/nuclear-weapons/articles/could-chatbot-teach-you-how-build-dirty-bomb Prompt injection is a security risk where attackers manipulate the input prompts to an LLM to elicit undesirable or harmful responses.
  • 36. Prompt injection and jailbreak attack Prompt injection is a security risk where attackers manipulate the input prompts to an LLM to elicit undesirable or harmful responses. https://www.tigera.io/learn/guides/llm-security/prompt-injection/ https://outrider.org/nuclear-weapons/articles/could-chatbot-teach-you-how-build-dirty-bomb
  • 37. Prompt injection and jailbreak attack Prompt injection is a security risk where attackers manipulate the input prompts to an LLM to elicit undesirable or harmful responses. https://www.tigera.io/learn/guides/llm-security/prompt-injection/ https://outrider.org/nuclear-weapons/articles/could-chatbot-teach-you-how-build-dirty-bomb