ETeMoX: explaining reinforcement learning
J. M. Parra-Ullauri1, A. García-Domínguez2, N. Bencomo3, C. Zheng4, C. Zhen5, J. Boubeta-Puig6, G. Ortiz6, S. Yang7
1: Aston University, 2: University of York, 3: Durham University
4: University of Oxford, 5: University of Science and Technology of China
6: University of Cadiz, 7: Edinburgh Napier University
MODELS 2022 - Thursday, October 27th, 2022
Event-driven temporal models for explanations - ETeMoX: explaining reinforcement learning
Rising need for explanations in self-adaptation / AI
● Software is increasingly written to deal with complex environments, where it
needs to reconfigure itself and learn from experience
● If we are not careful, these systems become “black boxes” whose decisions we
can only take at face value - it becomes hard to calibrate our trust in them
● We want to be able to ask things like:
○ Why did they take that action?
○ Why did they not take that *other* action?
○ How do they (roughly) work?
● The “right to explanation” is being enshrined in regulation and standards, such
as the GDPR and the IEEE P7001 standard for transparency of autonomous systems
● There is an entire field devoted to this: eXplainable AI (XAI)
Types of explanations and stages involved
● Explanations can be broadly classified by scope into:
○ Local - for a specific decision
○ Global - for the overall behaviour of the system (usually, a simplified behavioural model)
● Adadi et al. identified four uses for explanations:
○ To justify decisions impacting people
○ To control systems into an envelope of good behaviour ⬅
○ To discover knowledge from the system behaviour
○ To improve the system by highlighting flaws ⬅
● Neerincx considered three stages for producing these explanations:
○ Generation - obtain necessary data and reason about it ⬅
○ Communication - show it to consumer (human / system) ⬅
○ Reception - was it effective and efficient?
How can MDE help XAI?
● In Model-Driven Engineering (MDE), we already have significant experience
abstracting away unnecessary complexity
● At design time, we raise the level of abstraction so developers of a system
can think in terms of their domain concepts
● We can also do this while the system is running - we can build a model of
what the system is perceiving, thinking, and doing (a runtime model)
● If we decide on a common trace metamodel for this, we can reuse efforts on
introducing explainability across systems
MDE: Reusable Trace Metamodel - common half
MDE: Reusable Trace Metamodel - specific half
● First half of the metamodel is reusable across systems making their own
decisions
● Second half of the metamodel is specific - this one is for systems using
Q-Learning (a type of Reinforcement Learning)
● A Decision takes into account the Q-values of each Action
● Observations have rewards associated with them, and map to a state in the
Q-table
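The relationship between observations, rewards, Q-values and decisions that this half of the metamodel captures can be sketched with a minimal tabular Q-Learning loop. This is an illustration only - the names (`decide`, `observe`, `ACTIONS`) are ours, not the metamodel's classes:

```python
# Minimal tabular Q-Learning sketch: a Decision records the Q-values of
# the available Actions, and an Observation carries a reward and maps to
# a state (row) in the Q-table. Names are illustrative.
ALPHA, GAMMA = 0.1, 0.9   # learning rate, discount factor
ACTIONS = ["up", "down"]

q_table = {}  # state -> {action: Q-value}

def q_values(state):
    return q_table.setdefault(state, {a: 0.0 for a in ACTIONS})

def decide(state):
    """A Decision: pick the Action with the highest Q-value (pure exploitation)."""
    qs = q_values(state)
    return max(qs, key=qs.get)

def observe(state, action, reward, next_state):
    """An Observation: its reward feeds the standard Q-Learning update."""
    qs = q_values(state)
    best_next = max(q_values(next_state).values())
    qs[action] += ALPHA * (reward + GAMMA * best_next - qs[action])

observe("s0", "up", 1.0, "s1")
print(decide("s0"))  # "up" now has a positive Q-value
```

A trace metamodel instance would record each call to `decide` (with the Q-values it saw) and each call to `observe` (with its reward), which is exactly the history the temporal graph later indexes.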
MDE: Indexing Models into Temporal Graph DBs
● At each system timeslice, the runtime model is indexed into a temporal graph
● Efficient representation of a graph’s full history, using copy-on-write state
chunks
● Implemented by Greycat (from Hartmann et al.), and used by Eclipse Hawk for
automated model indexing
● More details here: https://www.eclipse.org/hawk/advanced-use/temporal-queries/
MDE: History-Aware Model Querying
● For explanation generation, we can query the temporal graph
● We created a Hawk-specific dialect of EOL with time-aware predicates and
properties
● More details in our MODELS 2019 paper: AGD, NB, JMPU and LGP, ‘Querying and
annotating model histories with time-aware patterns’,
http://dx.doi.org/10.1109/MODELS.2019.000-2
Version traversal: x.versions, x.next, x.prev, x.time, x.earliest, x.latest…
Temporal assertions: x.always(version | p), x.never(v | p), x.eventually(v | p)…
Predicate-based scoping: x.since(v | p), x.until(v | p)…
Context-based scoping: x.sinceThen, x.untilThen…
Unscoping: x.unscoped
Scaling up temporal graphs to large event volumes
● We first applied history-based explanations to Bayesian Learning-based systems
○ Partially Observable Markov Decision Processes (POMDPs)
○ Had a case study on data mirroring over the network (Remote Data Mirroring)
○ It wasn’t too resource-intensive (we could simply record all versions)
● Then we tried applying it to a Reinforcement Learning system
○ Tens of training epochs, each with thousands of episodes
○ The original RL system kept per-timeslice MongoDB records (GBs of data) to be indexed
○ The RL system was changed to send updates directly to Hawk - copy-on-write reduced storage needs
○ Still a lot of history to go through - queries could take a long time!
● Do we really need all this history?
○ Answer: No.
○ How do we select the “right” moments, without imposing too much load?
Event-Driven Monitoring: Complex Event Processing
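The core CEP idea is matching declarative patterns over an event stream and forwarding only the matching (complex) events downstream. A stand-in sketch of that idea - not Esper EPL, and much simpler than a real engine:

```python
# Sketch of the CEP idea: evaluate a pattern over a stream of events and
# forward only the matches downstream. Stands in for an engine like Esper;
# the event shapes here are illustrative.
def match_stream(events, predicate):
    """Yield only the events that satisfy the pattern predicate."""
    for event in events:
        if predicate(event):
            yield event

stream = [
    {"type": "reward", "value": 0.2},
    {"type": "exploration", "value": 1.0},
    {"type": "reward", "value": 0.9},
]
# Pattern: high-reward events only
high = list(match_stream(stream, lambda e: e["type"] == "reward" and e["value"] > 0.5))
print(len(high))  # 1
```

In ETeMoX, the matches are what get indexed into the temporal graph, which is how the CEP layer controls how much history is stored.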
Event-driven Temporal Models for eXplanations (ETeMoX)
Case study: airborne base stations
Experiment 1: Evolution of metrics (optionally sampled)
● We evaluated the impact of sampling at different rates on the accuracy of a
query providing the historic reward values during the RL training
● We set up the CEP engine with Esper EPL rules as shown (top right)
● We observed linear decreases in the storage required, depending on the
sampling rate
● 10% sampling is safe; anything beyond that depended on the RL algorithm
(DQN is sensitive!)
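Rate-based sampling and its roughly linear effect on storage can be sketched as follows. The paper's rules are written in Esper EPL; this Python version only illustrates the mechanism:

```python
import random

# Sketch of rate-based sampling: keep each reward event with a given
# probability, so the stored history shrinks roughly linearly with the
# rate. Illustrative only; not the Esper EPL rules from the paper.
def sample(events, rate, rng):
    return [e for e in events if rng.random() < rate]

rng = random.Random(42)          # fixed seed for reproducibility
events = list(range(10_000))     # stand-in for reward events
kept = sample(events, 0.10, rng) # 10% sampling
print(len(kept) / len(events))   # close to 0.10
```

The accuracy question is then whether a query over `kept` (e.g. the reward trend) still agrees with the query over `events` - which is what the experiment measures per algorithm.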
[Plots: Q-Learning, DQN]
Experiment 2: Exploration vs Exploitation (1/2)
● RL systems don’t always pick the best option (exploitation): they sometimes
try other things to learn more (exploration)
● How often does this happen?
● We compared two approaches to track this:
a. CEP pattern to detect exploration/exploitation and
only index episodes with exploration
b. EOL query on full history, to check CEP pattern
correctness
● Q-Learning explored 1.41% of the time,
SARSA 7.99%, DQN 7.82%
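One way to check whether a recorded decision was exploratory is to compare the chosen action against the argmax of the Q-values the decision saw. A minimal sketch, with field names of our own invention rather than the paper's trace metamodel:

```python
# Sketch of the exploration/exploitation check: a decision counts as
# exploratory when the chosen action is not the current argmax of the
# Q-values it recorded. Field names are illustrative.
def exploration_rate(decisions):
    explored = sum(
        1 for d in decisions
        if d["chosen"] != max(d["q_values"], key=d["q_values"].get)
    )
    return 100.0 * explored / len(decisions)

trace = [
    {"chosen": "a", "q_values": {"a": 1.0, "b": 0.5}},  # exploitation
    {"chosen": "b", "q_values": {"a": 1.0, "b": 0.5}},  # exploration
    {"chosen": "a", "q_values": {"a": 0.9, "b": 0.2}},  # exploitation
    {"chosen": "a", "q_values": {"a": 0.4, "b": 0.1}},  # exploitation
]
print(exploration_rate(trace))  # 25.0
```

The experiment ran this kind of check both online (as a CEP pattern, indexing only exploratory episodes) and offline (as an EOL query over the full history, to validate the pattern).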
Experiment 2: Exploration vs Exploitation (2/2)
● We tried using the exploration CEP rule as a filter for metric evolution, too
Experiment 3: user handovers between stations
● To provide continuous service, a user may be handed over to another station
● We wrote an EOL query to detect handovers in the system history
○ Handover: the signal-to-noise ratio changes significantly across stations between timepoints
○ Found 1,784 handovers in Q-Learning, 590 in SARSA, 82,176 in DQN
● These queries required many checks:
○ 10 episodes, 2000 time steps
○ 2 stations, 1050 users
○ Altogether: 42M combinations to check!
● Required times:
○ 917s for Q-Learning
○ 1,132s for SARSA
○ 7,914s for DQN
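The handover check can be sketched per user as "did the best-SNR station change between consecutive time steps?". The data layout below is our own illustration, not the case study's model; the 42M figure is just the product of the dimensions on the slide:

```python
# Sketch of the handover check: for one user, compare which station has
# the best signal-to-noise ratio at consecutive time steps; a change is a
# handover. Data layout is illustrative.
def best_station(snr_by_station):
    return max(snr_by_station, key=snr_by_station.get)

def count_handovers(history):
    """history: list over time of {station: snr} dicts for one user."""
    handovers = 0
    for prev, curr in zip(history, history[1:]):
        if best_station(prev) != best_station(curr):
            handovers += 1
    return handovers

user = [{"bs1": 12.0, "bs2": 7.0},
        {"bs1": 6.5, "bs2": 9.0},   # handover bs1 -> bs2
        {"bs1": 6.0, "bs2": 9.5}]
print(count_handovers(user))  # 1

# The combination count from the slide:
assert 10 * 2000 * 2 * 1050 == 42_000_000
```

Running a check like this over the full history for every user and time step is what drives the query times above, and motivates the query optimisations listed under "What's next?".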
What’s next?
● Optimise queries via Hawk timeline annotations and CEP time windows
● Explanations for other uses:
○ Human-in-the-loop (SAM 2022 presentation on Monday showed early work)
○ Hyper-parameter optimisation (external system requests explanations and drives change)
○ Global explanations of the system behaviour (event graphs)
● Studying explanation reception:
○ Effectiveness of explanation formats: plots, results, generated text, diagrams…
○ Focused on system developers so far: look into less technical audiences
○ Consider existing models for evaluation
■ Technology Acceptance Model (Davis)
■ XAI metrics (Rosenfeld)
Thank you!
Antonio García-Domínguez
@antoniogado - a.garcia-dominguez@york.ac.uk
Juan Marcelo Parra-Ullauri
j.parra-ullauri@aston.ac.uk