EXPLAINABLE AI
Factors driving rapid advancement of AI: GPUs and on-chip neural networks, data availability, cloud infrastructure, and new algorithms.

Third Wave of AI
• Symbolic AI - logic rules represent knowledge; no learning capability and poor handling of uncertainty.
• Statistical AI - statistical models trained on big data for specific domains; no contextual capability and minimal explainability.
• Explainable AI - systems construct explanatory models, and learn and reason with new tasks and situations.
What is XAI?
An AI system that explains its decision-making is referred to as Explainable AI, or XAI. The goal of XAI is to provide verifiable explanations of how machine learning systems make decisions and to keep humans in the loop.

There are two ways to provide explainable AI:
• Use machine learning approaches that are inherently explainable, such as decision trees, knowledge graphs, and similarity models.
• Develop new approaches to explain complicated neural networks.
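As a minimal, hypothetical sketch of the first route (an inherently explainable model), the rule-based loan-approval classifier below returns an explanation with every decision. The thresholds are invented for illustration, not taken from any real lending policy:

```python
# Sketch of an inherently explainable model: a transparent rule set.
# All thresholds below are hypothetical, chosen only for illustration.

def approve_loan(income, credit_score, debt_ratio):
    """Return (decision, explanation) so every output is self-explaining."""
    if credit_score < 600:
        return False, "declined: credit score below 600"
    if debt_ratio > 0.45:
        return False, "declined: debt-to-income ratio above 45%"
    if income < 25000:
        return False, "declined: income below 25,000"
    return True, "approved: credit, debt ratio, and income all within policy"

decision, reason = approve_loan(income=40000, credit_score=700, debt_ratio=0.30)
# Every decision carries the rule that produced it, so the model needs
# no separate post-hoc explainer.
```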
What is Explainable AI?

Black-box AI: Data → Black-Box AI → AI product → Decision, Recommendation. The user is left confused by today's AI: Why did you do that? Why did you not do that? When do you succeed or fail? How do I correct an error?

Explainable AI: Data → Explainable AI → Explainable AI Product → Explanation, Decision, with a feedback loop back to the model. Predictions are clear and transparent: I understand why. I understand why not. I know why you succeed or fail. I understand, so I trust you.
Black-box AI Creates Confusion and Doubt

Data feeds a black-box AI that can produce poor decisions, leaving every stakeholder with unanswered questions:
• Business Owner: Can I trust our AI decisions?
• Customer Support: How do I answer this customer complaint?
• Data Scientists: Is this the best model that can be built?
• IT & Operations: How do I monitor and debug this model?
• Internal Audit, Regulators: Are these AI system decisions fair?
• End users: Why am I getting this decision? How can I get a better decision?
Why Do We Need It?
• AI systems are increasingly deployed in our everyday lives to assist humans in making decisions.
• These decisions range from trivial lifestyle choices to more complex ones such as loan approvals, investments, court decisions, and the selection of job candidates.
• Many AI algorithms are black boxes that are not transparent, which raises trust concerns. To trust these systems, humans want accountability and explanation.
Why Do We Need It?
• While the machine learning systems deployed in 2008 were mostly within the products of tech-first companies (e.g., Google, YouTube), a false prediction would at worst result in a wrong recommendation to the application user.
• But when machine learning is deployed in industries such as the military, healthcare, or finance, false predictions can have adverse consequences affecting many lives.
• Thus, we create AI systems that explain their decision-making.
We are entering a new age of AI applications, with machine learning as the core technology. AI systems are being applied by DoD and non-DoD users alike, in transportation, security, medicine, finance, legal, and military domains. Yet machine learning models are opaque, non-intuitive, and difficult for people to understand, leaving users asking:
• Why did you do that?
• Why not something else?
• When do you succeed?
• When do you fail?
• When can I trust you?
• How do I correct an error?
Process of XAI
• The significant enabler of explainable AI is interpretability, which enables collaboration between human and artificial intelligence.
• Interpretability is the degree to which a human can understand the cause of a decision.
• It strengthens trust and transparency, explains decisions, fulfils regulatory requirements, and improves models.
• The stages of AI explainability are categorized into pre-modelling, explainable modelling, and post-modelling. They focus on explainability at the dataset stage, during model development, and after model development, respectively.
"Explainability By Design" For AI Products

The explainable AI feedback loop runs Train → Deploy → Predict → Monitor → A/B Test. Explainability by design means building the following capabilities into AI products: model diagnostics, root cause analytics, debugging, performance monitoring, fairness monitoring, model comparison, cohort analysis, model debugging, model visualization, model evaluation, compliance testing, QA, model launch signoff, model release management, explainable decisions, and API support.
Explainability Approaches
• The popular Local Interpretable Model-agnostic Explanations (LIME) approach explains an individual prediction of a model in terms of its input features.
• The post-hoc explainability approach creates:
• individual prediction explanations using input features, influential concepts, and local decision rules;
• global prediction explanations using partial dependence plots, global feature importance, and global decision rules.
• The interpretable-model approach builds inherently explainable models: logistic regressions, decision trees, and generalized additive models (GAMs).
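The LIME library itself is not shown here; as a rough illustration of the underlying idea (query a black-box model on perturbations near one instance to estimate each feature's local influence), here is a stdlib-only sketch with a hypothetical `black_box` model:

```python
import random

# Hypothetical black-box model: we may only query its predictions.
def black_box(x1, x2):
    return x1 * x1 + 3 * x2   # nonlinear in x1, linear in x2

def local_attribution(f, point, eps=0.01, n_samples=200):
    """Estimate each feature's local influence around `point` by averaging
    finite-difference slopes over small random perturbations -- a simplified
    stand-in for fitting LIME's local linear surrogate model."""
    random.seed(0)
    slopes = [0.0, 0.0]
    for _ in range(n_samples):
        # Sample an instance near the one being explained.
        z = [v + random.uniform(-eps, eps) for v in point]
        base = f(*z)
        for i in range(len(z)):
            z_up = list(z)
            z_up[i] += eps
            slopes[i] += (f(*z_up) - base) / eps
    return [s / n_samples for s in slopes]

w = local_attribution(black_box, [2.0, 1.0])
# Near (2, 1) the local influences are roughly [4, 3]: the slope of x1*x1
# at x1 = 2 is 4, and x2 always contributes with slope 3.
```

The point of the sketch is model-agnosticism: `local_attribution` never looks inside `black_box`, only at its answers to nearby queries.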
Why Explainability: Improve ML Model

• Standard ML: Data → ML model → Predictions. The only feedback signal is the generalization error.
• Interpretable ML: Data → ML model → Interpretability → Human inspection → Model/data improvement. Generalization error is combined with human experience, yielding verified predictions.
Explanation Targets
• The target specifies the object of an explainability method, which varies in type, scope, and complexity.
• The type of explanation target is often determined by the role-specific goals of end users.
• There are two types of targets, inside vs. outside, also referred to as mechanistic vs. functional.
• AI experts require a mechanistic explanation of some component inside a model, for example to understand how the layers of a deep network respond to input data in order to debug or validate the model.
Explanation Targets
• In contrast, non-experts often require a functional explanation to understand how some output outside a model is produced.
• Targets also vary in their scope. Outside-type targets are typically some form of model prediction and can be explained either locally or globally.
• Inside-type targets vary depending on the architecture of the underlying model: they can be a single neuron or entire layers of a neural network.
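As a toy illustration of inside-type vs. outside-type targets, the hand-written two-layer network below (hypothetical, fixed weights) records its hidden-layer activations, a mechanistic target, alongside the final output, the functional target:

```python
import math

# A hypothetical two-layer network with fixed weights, written out by hand
# so that the inside of the model can be inspected directly.
W1 = [[0.5, -0.2], [0.1, 0.9]]   # input -> hidden weights
W2 = [0.7, -0.4]                 # hidden -> output weights

def forward(x, trace):
    """Run the network, recording every intermediate activation in `trace`."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]  # ReLU
    trace["hidden"] = hidden
    out = 1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(W2, hidden))))    # sigmoid
    trace["output"] = out
    return out

trace = {}
forward([1.0, 2.0], trace)
# trace["hidden"] shows which hidden units fired for this input (an
# inside-type, mechanistic target); trace["output"] is the outside-type,
# functional target a non-expert would ask about.
```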
Explanation Drivers
• The most common type of drivers are the input features of an AI model.
• Explaining an image classifier's predictions in terms of individual input pixels can yield explanations that are too noisy, too expensive to compute, and, most importantly, difficult to interpret.
• Alternatively, we can rely on a more interpretable representation of the input features, known as super-pixels in the case of image classification; this is the representation an explainer such as LIME uses for images.
• All factors that have an impact on the development of an AI model can be termed explanation drivers.
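A minimal sketch of the super-pixel idea, using a hypothetical 4x4 image and a toy brightness-based classifier: masking each 2x2 super-pixel and measuring the score drop yields four interpretable importance values instead of sixteen pixel-level ones:

```python
# Toy 4x4 grayscale "image" grouped into four 2x2 super-pixels.
# The classifier score below is hypothetical; the point is that masking a
# handful of super-pixels is far easier to interpret than 16 raw pixels.
image = [[0.9, 0.8, 0.1, 0.0],
         [0.7, 0.9, 0.2, 0.1],
         [0.1, 0.0, 0.0, 0.1],
         [0.2, 0.1, 0.1, 0.0]]

def score(img):
    """Hypothetical black-box classifier score: mean brightness."""
    return sum(sum(row) for row in img) / 16.0

def mask_superpixel(img, block_row, block_col):
    """Zero out one 2x2 block (super-pixel) and return a copy."""
    out = [row[:] for row in img]
    for r in range(block_row * 2, block_row * 2 + 2):
        for c in range(block_col * 2, block_col * 2 + 2):
            out[r][c] = 0.0
    return out

base = score(image)
# Importance of each super-pixel = drop in score when it is masked out.
importance = {(br, bc): base - score(mask_superpixel(image, br, bc))
              for br in range(2) for bc in range(2)}
# The bright top-left block dominates the explanation.
```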
Explanation Families
• A post-hoc explanation aims at communicating how a target is caused by drivers for a given AI model.
• An explanation family must be chosen such that its information content is easily interpretable by the user.
• Importance Scores - individual importance scores communicate the relative contribution made by each explanation driver to a given target.
• Decision Rules - if-then rules in which the outcome represents the prediction of an AI model and the condition is a simple function defined over the input features.
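For the importance-score family, a linear model makes the idea concrete: each driver's contribution to one prediction is simply its weight times its value. The feature names and weights below are hypothetical:

```python
# Importance scores for one prediction of a linear model.
# Feature names, weights, and values are hypothetical.
weights = {"income": 0.6, "credit_history": 1.2, "open_debts": -0.9}
instance = {"income": 1.5, "credit_history": 0.8, "open_debts": 2.0}

# Contribution of each driver = weight * feature value for this instance.
contributions = {name: weights[name] * instance[name] for name in weights}
prediction = sum(contributions.values())
# The scores rank drivers by how much each pushed the prediction up or down:
# income +0.9, credit_history +0.96, open_debts -1.8.
```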
Explanation Families
• Decision Trees - unlike decision rules, these are structured as a graph in which internal nodes represent conditional tests on input features and leaf nodes represent model outcomes. In a decision tree, each input example satisfies exactly one path from the root node to a leaf node.
• Dependency Plots - these communicate how a target's value varies as a given explanation driver's value varies; in other words, how the target's value depends on the driver's value.
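A dependency plot can be computed as a one-dimensional partial dependence: sweep one driver over a grid and average the model's predictions over the remaining features observed in the data. A minimal sketch with a hypothetical model and dataset:

```python
# Hypothetical model and dataset; only the computation pattern matters.
def model(x1, x2):
    return 2.0 * x1 + x2 * x2

data = [(0.0, 1.0), (1.0, -1.0), (2.0, 0.5)]

def partial_dependence(f, data, grid):
    """For each grid value of feature 1, average predictions with feature 2
    taken from the dataset. The resulting curve shows how the target
    depends on x1, marginalizing over the other driver."""
    return [sum(f(g, x2) for _, x2 in data) / len(data) for g in grid]

pd_curve = partial_dependence(model, data, grid=[0.0, 1.0, 2.0])
# The curve rises by 2 per unit of x1, recovering the model's linear effect
# of that driver; the x2 term only shifts the curve by a constant.
```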
Conclusion
• Multiple methods have been proposed to explain pre-developed AI models. They vary in their explanation target, explanation drivers, explanation family, and extraction mechanism.
• XAI is an active research area, with new and improved methods being developed continually.
• Such diversity of choices can make it challenging for practitioners to adopt the most suitable approach for a given application; a snapshot of the most notable post-modelling explainability methods helps address this challenge.
To assist you with our services,
please reach us at
hello@mitosistech.com
www.mitosistech.com
IND: +91-78240 35173
US: +1-(415) 251-2064
