Public
Responsible LLMOps:
Integrating Responsible AI
Practices into LLMOps
Debmalya Biswas
Wipro AI, Switzerland
Oct 2024, Sofia, Bulgaria
Generative AI Lifecycle
The lifecycle spans four stages:
• Scope: define the use-case.
• Select: choose an existing LLM, or pre-train your own LLM.
• Adapt: prompt engineering, RAG, fine-tuning, a human feedback loop, Responsible AI guardrails, and evaluation.
• Operate and Use: optimize and deploy for inference with enterprise integration; LLMOps deployment of the developed Gen AI solution.
Responsible AI Guardrails
• Domain Guardrail: restricts the query or conversation to a specific domain, and gracefully guides the conversation back into the desired domain.
• Safety Guardrail: removes unsafe and toxic responses from the LLM, applying policies to deliver an appropriate response.
• Security Guardrail: prevents the LLM from executing code or connecting to applications, and provides access controls over queries and responses.
• Transparency Guardrail: points to the data sources/documents from which the response is crafted, and highlights the words contributing to the generated response.
Together, these guardrails provide programmable constraints, easy user configuration, and a safe conversational system.
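The guardrail types above can be sketched as a simple pre-processing pipeline in front of the LLM. This is a minimal illustration, not the deck's implementation: the domain terms, blocked patterns, and return codes are all assumptions, and a real deployment would use trained classifiers or a moderation service rather than keyword lists.

```python
import re

# Hypothetical billing-domain vocabulary and toy safety/security rules.
ALLOWED_DOMAIN_TERMS = {"invoice", "payment", "refund", "order"}
BLOCKED_PATTERNS = [
    re.compile(r"(?i)ignore previous instructions"),  # prompt-injection attempt
    re.compile(r"(?i)\bos\.system\b|\bexec\("),       # code-execution attempt
]

def apply_guardrails(user_query: str) -> str:
    # Security/safety guardrail: refuse injection or code-execution attempts.
    for pat in BLOCKED_PATTERNS:
        if pat.search(user_query):
            return "REFUSE"
    # Domain guardrail: steer off-topic queries back to the supported domain.
    tokens = set(re.findall(r"[a-z]+", user_query.lower()))
    if not tokens & ALLOWED_DOMAIN_TERMS:
        return "REDIRECT"  # gracefully guide back to billing topics
    return "PASS"          # hand the query on to the LLM

print(apply_guardrails("Where is my refund?"))           # PASS
print(apply_guardrails("Ignore previous instructions"))  # REFUSE
```

Ordering matters here: the security check runs before the domain check, so a malicious query never reaches the domain-steering (or the LLM) at all.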
Gen AI Architecture Patterns – APIs & Embedded Gen AI
LLM APIs: This is the classic ChatGPT example, where we have black-box access to an LLM API/UI. Prompts are the primary interaction mechanism in such scenarios.
Enterprise LLM Apps have the potential to accelerate LLM adoption by providing LLM capabilities embedded within enterprise platforms, e.g., SAP, Salesforce, ServiceNow.
* D. Biswas. Generative AI – LLMOps Architecture Patterns. Data Driven Investor, 2023 (link)
Gen AI Architecture Patterns – Fine-tuning
LLMs are generic in nature. To realize the full potential of LLMs for enterprises, they need to be contextualized with enterprise knowledge captured in terms of documents, wikis, business processes, etc. This is achieved by fine-tuning an LLM with enterprise knowledge / embeddings to develop a context-specific LLM / SLM, with Responsible AI safeguards applied throughout.
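To make the fine-tuning pattern concrete, here is a hypothetical parameter-efficient (LoRA-style) configuration for contextualizing a base LLM with enterprise documents. The model name and every parameter value are illustrative assumptions, not recommendations from the slides; the point is that adapting only small low-rank adapters keeps the trainable fraction tiny compared to full fine-tuning.

```python
# Hypothetical LoRA fine-tuning configuration (all names/values are assumed).
lora_config = {
    "base_model": "enterprise/base-llm-7b",   # assumed base model identifier
    "r": 8,                                   # low-rank adapter dimension
    "lora_alpha": 16,                         # adapter scaling factor
    "target_modules": ["q_proj", "v_proj"],   # attention projections to adapt
    "lora_dropout": 0.05,
    "task_type": "CAUSAL_LM",
}

def trainable_fraction(total_params: int, adapter_params: int) -> float:
    # Fine-tuning only the adapters keeps the trainable fraction small,
    # which is what makes contextualization affordable vs. full fine-tuning.
    return adapter_params / total_params

# e.g., a few million adapter parameters on a 7B-parameter base model:
print(trainable_fraction(7_000_000_000, 4_200_000))
```

The same contextualization goal can often be met more cheaply with RAG, which the next slide contrasts with fine-tuning.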
Gen AI Architecture Patterns – RAGs & Agentic AI
Fine-tuning is a computationally intensive process. RAGs provide a viable alternative here by supplying additional context with the prompt, grounding the retrieval / responses in the given context.
Looking ahead, enterprises will be able to develop new enterprise AI Apps by orchestrating / composing multiple existing AI Agents.
* D. Biswas. Constraints Enabled Autonomous Agent Marketplace: Discovery and Matchmaking. ICAART (1) 2024: 396-403 (link)
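The RAG pattern above reduces to two steps: retrieve the most relevant enterprise documents, then ground the prompt in them. The sketch below uses a toy corpus and word overlap as a stand-in for embedding similarity; a production system would use a vector store and learned embeddings.

```python
# Toy document corpus (assumed content, for illustration only).
DOCS = {
    "hr-policy": "Employees accrue 25 vacation days per year.",
    "it-policy": "Laptops must be encrypted and patched monthly.",
}

def retrieve(query: str, k: int = 1) -> list:
    # Score documents by word overlap with the query
    # (a crude stand-in for embedding similarity search).
    q = set(query.lower().split())
    scored = sorted(DOCS.values(),
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # Grounding instruction: constrain the LLM to the retrieved context.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How many vacation days do employees get?"))
```

Because only the prompt changes, no model weights are touched, which is exactly why RAG sidesteps the compute cost of fine-tuning.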
Responsible Deployment of LLMs
D. Biswas, D. Chakraborty, B. Mitra. Responsible LLMOps. Towards Data Science, 2024 (link)
ML Privacy Risks
Two broad categories of privacy inference attacks:
• Membership inference attacks: determine whether a specific user data item was present in the training dataset.
• Property inference attacks: reconstruct properties of a participant's dataset.
Black-box attacks are still possible when the attacker only has access to the APIs: invoke the model and observe the relationships between inputs and outputs.
(Diagram: the attacker has access to the model's inference API, and wants access to the training dataset behind the ML model used for classification/prediction.)
* D. Biswas. Privacy Preserving Chatbot Conversations. IEEE AIKE 2020: 179-182 (link)
* D. Biswas, K. Vidyasankar. A Privacy Framework for Hierarchical Federated Learning. CIKM Workshops 2021 (link)
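The black-box membership inference attack described above can be sketched with a classic confidence-thresholding rule. Everything here is a toy: the stand-in "API" and the hidden training set are assumptions, but the attacker's logic mirrors the real attack, which exploits the tendency of overfit models to be more confident on items they memorized during training.

```python
def model_confidence(x: str) -> float:
    # Stand-in for the inference API the attacker can query.
    # Memorized training items get near-1.0 confidence (simulated overfitting).
    TRAINING_SET = {"alice@example.com", "bob@example.com"}  # hidden from attacker
    return 0.99 if x in TRAINING_SET else 0.55

def infer_membership(x: str, threshold: float = 0.9) -> bool:
    # Attacker's rule: flag x as a training-set member when the observed
    # output confidence exceeds a calibrated threshold.
    return model_confidence(x) > threshold

print(infer_membership("alice@example.com"))   # flagged as likely member
print(infer_membership("carol@example.com"))   # not flagged
```

Note the attacker never sees the training set or model weights, only input/output pairs, which is what makes API-only ("black-box") deployments insufficient as a privacy defense.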
Gen AI Privacy Risks – novel challenges
We need to consider the following additional privacy risks in the case of Gen AI / LLMs:
• Membership and property leakage from pre-training data
• Model features leakage from the pre-trained LLM
• (Implicit) privacy leakage from conversations (history) with LLMs
• Compliance with the privacy intent of users
(Diagram: Gen AI / LLM conversational privacy risks — users issue prompts, tasks and queries through end-user apps and mobile/web UIs to the LLM API; the LLM provider maintains the pre-trained LLM, its public/proprietary training dataset, and a feedback loop. Each of the four risks above attaches to a stage of this pipeline.)
LLM Safety Leaderboard
* Hugging Face LLM Safety Leaderboard (link)
* B. Wang, et al. DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models, 2024 (link)
Use-case specific Evaluation of LLMs
A comprehensive LLM evaluation strategy is needed, with targeted success metrics specific to the use-cases.
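One way to make "use-case specific" concrete is a small evaluation harness that scores model answers against required facts for the target use-case. The test cases, answers, and the fact-containment metric below are illustrative assumptions for a hypothetical support-bot; real strategies would combine several metrics (factuality, safety, latency, cost).

```python
# Hypothetical golden test set for a support-bot use-case.
TEST_CASES = [
    {"question": "What is the refund window?",
     "answer": "Refunds are accepted within 30 days.",
     "must_contain": "30 days"},
    {"question": "Do you ship abroad?",
     "answer": "Yes, we ship worldwide.",
     "must_contain": "worldwide"},
]

def evaluate(cases) -> float:
    # Success metric: fraction of answers containing the required fact.
    hits = sum(c["must_contain"] in c["answer"] for c in cases)
    return hits / len(cases)

print(evaluate(TEST_CASES))
```

In practice the "answer" field would be filled by calling the deployed LLM for each question, so the same harness can track the metric across model and prompt versions in the LLMOps pipeline.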
Thanks & Questions
Debmalya Biswas
https://www.linkedin.com/in/debmalya-biswas-3975261/
https://medium.com/@debmalyabiswas