1
Navigating the Complexity of Trust
© 2021 Carnegie Mellon University
[DISTRIBUTION STATEMENT A] This material has been approved for public
release and unlimited distribution. Please see Copyright notice for non-US
Government use and distribution.
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213
Navigating the Complexity
of Trust
Carol J. Smith
Sr. Research Scientist - Human-Machine Interaction, CMU’s SEI
Adjunct Instructor, CMU’s Human-Computer Interaction Institute
Twitter: @carologic @sei_etc
Boston UXPA
September 24, 2021
Copyright Statement
Copyright 2021 Carnegie Mellon University.
This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the
operation of the Software Engineering Institute, a federally funded research and development center.
References herein to any specific commercial product, process, or service by trade name, trade mark, manufacturer, or otherwise, does not necessarily constitute or
imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute.
NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS.
CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT
LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL.
CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR
COPYRIGHT INFRINGEMENT.
[DISTRIBUTION STATEMENT A] This material has been approved for public release and unlimited distribution. Please see Copyright notice for non-US Government
use and distribution.
This material may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission.
Permission is required for any other use. Requests for permission should be directed to the Software Engineering Institute at permission@sei.cmu.edu.
Carnegie Mellon® is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University.
DM21-0842
Acknowledging the Land I Speak On
Land of Monongahela,
Adena and Hopewell
Nations;
Seneca, Lenape
and Shawnee lands;
Osage, Delaware
and Iroquois lands.
Now known
as Pittsburgh, PA, USA.
Map by Herb Roe via Wikipedia https://en.wikipedia.org/wiki/Monongahela_culture
What is Trust?
“Trust is a psychological state comprising
the intention to
accept vulnerability
based upon
positive expectations
of the intentions or behavior of another.”
Denise Rousseau, Sim Sitkin, Ronald Burt, and Colin Camerer. 1998. Not So Different After All: A Cross-discipline View of Trust. Academy of Management Review 23 (July 1998). DOI: 10.5465/AMR.1998.926617
Complex,
Transient,
and Personal
Contradictions
Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
Trust Involves…
• Belief and understanding
• Dependency and choice
• Context and privacy
• Perception and awareness
• Evidence and knowledge
• Emotion and respect
Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
Trust is achieved when…
Trustor (person) has understanding and belief of shared goals and values with Trustee (system).
Trustor has justified (reasons-based) beliefs of Trustee's access to context and information.
Trustor has justified expectations that Trustee will mitigate risk and support shared goals and values.
Building on work of David Danks, Carnegie Mellon University; Alan Richard
Wagner, Penn State; and their sources.
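The three conditions above can be read as a conjunction: trust is achieved only when all of them hold. A minimal illustrative sketch of that reading (names are hypothetical, not from the talk):

```python
from dataclasses import dataclass

@dataclass
class TrustAssessment:
    """One Trustor's assessment of one Trustee (system). Field names are illustrative."""
    shared_goals_understood: bool      # understanding/belief of shared goals and values
    justified_context_beliefs: bool    # reasons-based beliefs about Trustee's access to context/information
    justified_risk_expectations: bool  # expectation that Trustee will mitigate risk

    def trust_achieved(self) -> bool:
        # Trust requires all three conditions; any one failing breaks it.
        return (self.shared_goals_understood
                and self.justified_context_beliefs
                and self.justified_risk_expectations)
```

The point of the sketch is only that the definition is conjunctive: a Trustor who believes in shared goals but has no justified expectation of risk mitigation has not (yet) achieved trust.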
Appropriate Trust
As variations occur in Trustor's goals, values, context, and information, Trustor will adjust the level of trust in Trustee to fit new circumstances.
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
Is 100% Trust the Goal?
Semi-Autonomous Vehicles
Tesla Autopilot in Heavy LA Traffic by Scott Kubo https://youtu.be/m3-QzTFxoUg?t=14
What is a tomato?
Fruit?
Vegetable?
Automation Bias
Propensity for humans to favor suggestions
from automated decision-making systems
and to ignore contradictory information
made without automation, even if it is correct.
Mary Cummings. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 2004-6313. AIAA 1st Intelligent Systems Technical Conference (September 2004). DOI: https://doi.org/10.2514/6.2004-6313
Optimal Trust
“Unnecessarily high trust in AI
may have catastrophic consequences,
especially in life-critical applications…
Optimal trust in which both humans
and AI each have some level of skepticism
regarding the other’s decisions
since both are capable of making mistakes."
Onur Asan, Alparslan Emrah Bayrak and Avishek Choudhury. 2020. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res (2020), Vol. 22, 6:e15154. URL: https://www.jmir.org/2020/6/e15154 DOI: https://doi.org/10.2196/15154
Trust is a Continuum
Bobbie Seppelt and John Lee. 2012. Human Factors and Ergonomics in Automation Design. In Handbook of Human Factors and Ergonomics (Fourth Edition), Chapter 59. Wiley. DOI: https://doi.org/10.1002/9781118131350.ch59
• Over Trust: trust exceeding system capabilities; may lead to misuse.
• Calibrated Trust: trust matches system capabilities, leading to appropriate use.
• Distrust: trust falling short of system capabilities; may lead to disuse.
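The continuum can be made concrete by comparing a trust level against system capability; a minimal sketch (the numeric scale and `tolerance` band are assumptions for illustration, not from Seppelt and Lee):

```python
def trust_state(trust: float, capability: float, tolerance: float = 0.1) -> str:
    """Classify trust calibration. Both trust and capability are on a 0..1 scale."""
    if trust > capability + tolerance:
        return "over-trust"   # trust exceeds capability: risk of misuse
    if trust < capability - tolerance:
        return "distrust"     # trust falls short of capability: risk of disuse
    return "calibrated"       # trust matches capability: appropriate use
```

For example, `trust_state(0.9, 0.5)` yields `"over-trust"`: the person relies on the system well beyond what it can actually do.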
Trust Changes Over Time
[Chart: Level of Trust on the y-axis, fluctuating between Over Trust and Distrust over time, across First Experience, Teaming, and Change events.]
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
Change Increases or Decreases Trust
Event-Driven
• Response to an interaction, transaction, service, or event
Time-Driven
• Response to periodic evidence (observations or recommendations)
• Lack of evidence can decay trust
Jia Guo and Ing-Ray Chen. 2015. A Classification of Trust Computation Models for Service-Oriented Internet of Things Systems. 2015 IEEE International Conference on Services Computing (2015), 324-331. DOI: https://doi.org/10.1109/SCC.2015.52
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
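The two drivers can be sketched as a toy trust-score model: events pull the score toward the quality of the latest interaction, while elapsed time without evidence decays it toward a neutral baseline. This is an illustrative sketch under assumed parameters (`weight`, `half_life_days`, `baseline`), not a model from the cited papers:

```python
def update_trust(trust: float, event_quality: float, weight: float = 0.3) -> float:
    """Event-driven change: move trust toward the quality (0..1) of the latest
    interaction, transaction, service, or event."""
    return (1 - weight) * trust + weight * event_quality

def decay_trust(trust: float, days_since_evidence: float,
                half_life_days: float = 30.0, baseline: float = 0.5) -> float:
    """Time-driven change: without fresh evidence (observations or
    recommendations), trust decays exponentially toward a neutral baseline."""
    factor = 0.5 ** (days_since_evidence / half_life_days)
    return baseline + (trust - baseline) * factor
```

Under these assumptions, a good interaction raises trust immediately, while a month of silence halves the distance between the current score and the baseline.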
Change is Constant
Awareness of System Capabilities
Understanding of conditions, constraints
Experience with System
- Length of time
- Quality of experience
Transparency and usability of system
Additional Trust / Distrust Factors
Institutional, management
Social and relational
Previous experiences
What is Appropriate?
Can there be too much trust?
What is necessary?
How do we communicate what is appropriate?
Supporting Appropriate Trust
Design to work with, and for, people
Minimize unintended
consequences
• Research to understand
context of use
• Design for purpose:
Systems – not just tasks
• Test prototypes/products
in environment
Human-Centered AI, White Paper. June 2021. CMU's Software Engineering Institute. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=735362
Speculation keeps
people safe
Speculate and Design for the Worst Case
Don’t assume that only the average case will occur.
Be speculative about the worst case.
Create better decision-making tools that don't require
unsupportable risk assessments.
N. G. Leveson. 2017. The Therac-25: 30 Years Later. In Computer, vol. 50, no. 11, (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
Activate Curiosity
UX research methods and activities to activate curiosity:
• Abusability Testing (Dan Brown)
• "Black Mirror" Episodes (Casey Fiesler)
(inspired by the British dystopian sci-fi TV series of the same name)
Speculate about system misuse and abuse:
• What are potential unintended/unwanted consequences?
Conversations for Understanding
Difficult Topics
• What do we value?
• Who could be hurt?
• What lines won't our AI cross?
• How are we shifting power?*
• How will we track our progress?
*“How is this ML model shifting power?" @riakall #NeurIPS2019
*"Don't ask if artificial intelligence is good or fair, ask how it shifts power." Pratyusha Kalluri. https://www.nature.com/articles/d41586-020-02003-2
New uncomfortable work
“Be uncomfortable”
- Laura Kalbag
Ethical design is not superficial.
Critical Design Components
• Security
• Adaptability
• Communication
• Explainability
• Training/Knowledge
• Assessment
Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller, and Dylan D. Schmorrow. 2019. Trust Engineering for Human-AI Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019), 322-326. https://doi.org/10.1177/1071181319631264
Take Pause
“In our enthusiasm
to provide measurements,
we should not attempt to measure
the unmeasurable.”
People believe a “calculated number
more than actual experience.”
N. G. Leveson. 2017. The Therac-25: 30 Years Later. In Computer, vol. 50, no. 11, (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
Transparency
System limitations
Boundaries
and unfamiliar scenarios
"Explainability" isn't magic.
Transparency isn't clarity.
Human-Centered AI, White Paper. June 2021. CMU's Software Engineering Institute. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=735362
Safe Versus “Friendly” Experiences
Actions to get into or maintain
a safe state should be easy to do.
Actions that can lead to
an unsafe state (hazard) should be hard to do.
Relying on operators to detect errors and recover before
an accident isn't realistic.
N. G. Leveson. 2017. The Therac-25: 30 Years Later. In Computer, vol. 50, no. 11, (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
N. Leveson. 1995. Safeware: System Safety and Computers, Addison Wesley (1995).
Consider Time Cycles
• Length of time interactions
occur
• Length varies
• Very short and hectic
• Longer and iterative
• Affects interactions
Clear communication,
negotiation, and coordination
required
How IAs Can Shape the Future of Human-AI Collaboration
Presented on April 28-30, 2021 at the Information Architecture Conference (IAC21)
– Video https://www.designforcontext.com/ia-shaping-human-ai-collaboration
Errors and Information
Create protection against errors
Provide self-checks / error-detection / error-handling
Identify trends and behaviors that increase risk
"Data, 'big' or not, isn't the same as information."
N. G. Leveson. 2017. The Therac-25: 30 Years Later. In Computer, vol. 50, no. 11, (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
Make Systems Effective Team Players
Activities observable for fellow team players
Easy to direct
Capitalize on human strengths
- How observable is its behaviour for human counterparts?
- How easily and efficiently does it allow itself to be directed?
- Even (or especially) during busy, novel episodes?
S. W. A. Dekker and D. D. Woods. 2002. MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination. Cognition Tech Work 4 (2002), 240-244. DOI: https://doi.org/10.1007/s101110200022 Note: MABA-MABA (Men-Are-Better-At/Machines-Are-Better-At lists)
Appropriate Trust
• Understand context and test in context
• Design for purpose: Systems
• Provide understandable evidence
• Complement human strengths
• Provide control to people
Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
Carol J. Smith
Twitter: @carologic
LinkedIn: https://www.linkedin.com/in/caroljsmith/
CMU’s Software Engineering Institute,
Emerging Technology Center
Twitter: @sei_etc
Resources
Denise Rousseau, Sim Sitkin, Ronald Burt, and Colin Camerer. 1998. Not So Different After All: A Cross-discipline View of Trust. Academy of Management Review 23 (July 1998). DOI: 10.5465/AMR.1998.926617
Bobbie Seppelt and John Lee. 2012. Human Factors and Ergonomics in Automation Design. In Handbook of Human Factors and Ergonomics (Fourth Edition), Chapter 59. Wiley. DOI: https://doi.org/10.1002/9781118131350.ch59
Human-Centered AI, White Paper. June 2021. Carnegie Mellon University's Software Engineering Institute. Contributors: Hollen Barmer, Rachel Dzombak, Matt Gaston, Jay Palat, Frank Redner, Carol J. Smith. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=735362
Jia Guo and Ing-Ray Chen. 2015. A Classification of Trust Computation Models for Service-Oriented Internet of Things Systems. 2015 IEEE International Conference on Services Computing (2015), 324-331. DOI: https://doi.org/10.1109/SCC.2015.52
Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
Mary Cummings. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 2004-6313. AIAA 1st Intelligent Systems Technical Conference (September 2004). DOI: https://doi.org/10.2514/6.2004-6313
Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller, and Dylan D. Schmorrow. 2019. Trust Engineering for Human-AI Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019), 322-326. https://doi.org/10.1177/1071181319631264
N. G. Leveson. 2017. The Therac-25: 30 Years Later. In Computer, vol. 50, no. 11 (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
N. Leveson. 1995. Safeware: System Safety and Computers. Addison Wesley (1995).
Onur Asan, Alparslan Emrah Bayrak and Avishek Choudhury. 2020. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res (2020), Vol. 22, 6:e15154. URL: https://www.jmir.org/2020/6/e15154 DOI: https://doi.org/10.2196/15154
Rose Challenger, Chris W. Clegg and Craig Shepherd. 2013. Function allocation in complex systems: reframing an old problem. Ergonomics 56:7 (2013), 1051-1069. DOI: 10.1080/00140139.2013.790482
Decision Making - Humans vs. Computers
Humans are better at:
• Perceiving patterns
• Improvising and using flexible procedures
• Recalling relevant facts at the appropriate time
• Reasoning inductively
• Exercising judgment
Computers are better at:
• Responding quickly to control tasks
• Repetitive and routine tasks
• Reasoning deductively
• Handling many complex tasks simultaneously
Mary Cummings. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 2004-6313. AIAA 1st Intelligent Systems Technical Conference (September 2004). DOI: https://doi.org/10.2514/6.2004-6313
More Related Content

PPTX
Introduction to National Critical Infrastructure Cyber Security: Background a...
PDF
Digital Storytelling: Understanding Social Media and Visual Storytelling Tool...
PPTX
Effective Cybersecurity Communication Skills
PPTX
Yours Anecdotally: Developing a Cybersecurity Problem Space
PDF
UX STRAT Online 2021 Presentation by Carol Smith, Carnegie Mellon University
PDF
UX STRAT USA 2021: Carol Smith, Carnegie Mellon
PPTX
Implementing Ethics: Developing Trustworthy AI PyCon 2020
PPTX
Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Developm...
Introduction to National Critical Infrastructure Cyber Security: Background a...
Digital Storytelling: Understanding Social Media and Visual Storytelling Tool...
Effective Cybersecurity Communication Skills
Yours Anecdotally: Developing a Cybersecurity Problem Space
UX STRAT Online 2021 Presentation by Carol Smith, Carnegie Mellon University
UX STRAT USA 2021: Carol Smith, Carnegie Mellon
Implementing Ethics: Developing Trustworthy AI PyCon 2020
Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Developm...

Similar to Navigating the Complexity of Trust at UXPA Boston 2021 (20)

PDF
Workbook 3
PDF
Avoiding a Crisis
PPTX
Judge-the-Validity-of-the-Evidence-Listened-To.pptx
PDF
U.S.UCAN and its role in Wisconsin Mark Johnson Interim Executive Director, U...
PDF
W3C TPAC 2012 Breakout Session on Government Linked Data
PDF
U.S. Navy Command Leadership Social Media Handbook
PDF
Persuasive Writing Strong Work Sample By Angie Bra
DOC
PDF
Open source software in government challenges and opportunities
PDF
Modernizing Dept of Homeland Security for CFAA investigations
PDF
FINAL_Report
PPTX
Dynamic Decision Tools Catalog
PDF
Communities And Renewable Energy in UK
PPTX
Resiliency Innovation Forum - 11212024.pptx
DOC
SEWERLOCK AND TELECOMLOCK INFRASTRUCTURE ASSETS
DOC
Energy Data Access_Who wants the data
PPTX
Measure It, Manage It, Ignore It - Software Practitioners and Technical Debt
PDF
Open Data: a brief introduction
PDF
Ceres - Risk Aware Regulation - FINAL
PDF
PES 2013 - Social License to Operate: How to Get It, and How to Keep It
Workbook 3
Avoiding a Crisis
Judge-the-Validity-of-the-Evidence-Listened-To.pptx
U.S.UCAN and its role in Wisconsin Mark Johnson Interim Executive Director, U...
W3C TPAC 2012 Breakout Session on Government Linked Data
U.S. Navy Command Leadership Social Media Handbook
Persuasive Writing Strong Work Sample By Angie Bra
Open source software in government challenges and opportunities
Modernizing Dept of Homeland Security for CFAA investigations
FINAL_Report
Dynamic Decision Tools Catalog
Communities And Renewable Energy in UK
Resiliency Innovation Forum - 11212024.pptx
SEWERLOCK AND TELECOMLOCK INFRASTRUCTURE ASSETS
Energy Data Access_Who wants the data
Measure It, Manage It, Ignore It - Software Practitioners and Technical Debt
Open Data: a brief introduction
Ceres - Risk Aware Regulation - FINAL
PES 2013 - Social License to Operate: How to Get It, and How to Keep It
Ad


Navigating the Complexity of Trust at UXPA Boston 2021

  • 1. Navigating the Complexity of Trust. © 2021 Carnegie Mellon University. [DISTRIBUTION STATEMENT A] This material has been approved for public release and unlimited distribution. Please see Copyright notice for non-US Government use and distribution. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213. Carol J. Smith, Sr. Research Scientist - Human-Machine Interaction, CMU’s SEI; Adjunct Instructor, CMU’s Human-Computer Interaction Institute. Twitter: @carologic @sei_etc. Boston UXPA, September 24, 2021
  • 2. Copyright Statement. Copyright 2021 Carnegie Mellon University. This material is based upon work funded and supported by the Department of Defense under Contract No. FA8702-15-D-0002 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. References herein to any specific commercial product, process, or service by trade name, trade mark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by Carnegie Mellon University or its Software Engineering Institute. NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN "AS-IS" BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT. This material may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other use. Requests for permission should be directed to the Software Engineering Institute at permission@sei.cmu.edu. Carnegie Mellon® is registered in the U.S. Patent and Trademark Office by Carnegie Mellon University. DM21-0842
  • 3. Acknowledging the Land I Speak On. Land of Monongahela, Adena and Hopewell Nations; Seneca, Lenape and Shawnee lands; Osage, Delaware and Iroquois lands. Now known as Pittsburgh, PA, USA. Map by Herb Roe via Wikipedia: https://en.wikipedia.org/wiki/Monongahela_culture
  • 4. What is Trust?
  • 8. “Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another.” Denise Rousseau, Sim Sitkin, Ronald Burt, and Colin Camerer. 1998. Not So Different After All: A Cross-discipline View of Trust. Academy of Management Review 23 (July 1998). DOI: 10.5465/AMR.1998.926617
  • 9. Complex, Transient, and Personal
  • 10. Contradictions. Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
  • 11. Trust Involves… • Belief and understanding • Dependency and choice • Context and privacy • Perception and awareness • Evidence and knowledge • Emotion and respect. Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
  • 12. Trust is achieved when… Trustor (person) has understanding and belief of shared goals and values with Trustee (system). Trustor has justified (reasons-based) beliefs of Trustee’s access to context and information. Trustor has justified expectations that Trustee will mitigate risk, and support shared goals and values. Building on work of David Danks, Carnegie Mellon University; Alan Richard Wagner, Penn State; and their sources.
  • 13. Appropriate Trust. As variations occur in Trustor’s goals, values, context, and information, Trustor will adjust the level of trust in Trustee to fit new circumstances. Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
  • 14. Is 100% Trust the Goal?
  • 15. Semi-Autonomous Vehicles. Tesla Autopilot in Heavy LA Traffic by Scott Kubo: https://youtu.be/m3-QzTFxoUg?t=14
  • 16. What is a tomato? Fruit? Vegetable?
  • 17. Automation Bias. The propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information produced without automation, even when that information is correct. Mary Cummings. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 2004-6313. AIAA 1st Intelligent Systems Technical Conference (September 2004). DOI: https://doi.org/10.2514/6.2004-6313
  • 18. Optimal Trust. “Unnecessarily high trust in AI may have catastrophic consequences, especially in life-critical applications… Optimal trust in which both humans and AI each have some level of skepticism regarding the other’s decisions since both are capable of making mistakes." Onur Asan, Alparslan Emrah Bayrak and Avishek Choudhury. 2020. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res, Vol. 22, 6:e15154 (2020). URL: https://www.jmir.org/2020/6/e15154 DOI: https://doi.org/10.2196/15154
  • 19. Trust is a Continuum. Over Trust: trust exceeding system capabilities; may lead to misuse. Calibrated Trust: trust matching system capabilities, leading to appropriate use. Distrust: trust falling short of system capabilities; may lead to disuse. Bobbie Seppelt and John Lee. 2012. Human Factors and Ergonomics in Automation Design. In Handbook of Human Factors and Ergonomics (Fourth Edition), Chapter 59. Wiley. DOI: https://doi.org/10.1002/9781118131350.ch59
  • 20. Trust Changes Over Time (chart: level of trust, ranging from Distrust to Over Trust, across First Experience, Teaming, Change…). Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
  • 21. Change Increases or Decreases Trust. Event-Driven: response to an interaction, transaction, service, or event. Time-Driven: response to periodic evidence (observations or recommendations); lack of evidence can decay trust. Jia Guo and Ing-Ray Chen. 2015. A Classification of Trust Computation Models for Service-Oriented Internet of Things Systems. 2015 IEEE International Conference on Services Computing (2015), 324-331. DOI: https://doi.org/10.1109/SCC.2015.52 Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
  • 22. Change is Constant
  • 23. Awareness of System Capabilities. Understanding of conditions, constraints. Experience with System: length of time; quality of experience. Transparency and usability of system.
  • 24. Additional Trust / Distrust Factors. Institutional, management. Social and relational. Previous experiences.
  • 25. What is Appropriate? Can there be too much trust? What is necessary? How do we communicate what is appropriate?
  • 26. Supporting Appropriate Trust
  • 27. Design to work with, and for, people. Minimize unintended consequences: • Research to understand context of use • Design for purpose: systems, not just tasks • Test prototypes/products in environment. Human-Centered AI, White Paper. June 2021. CMU’s Software Engineering Institute. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=735362
  • 28. Speculation keeps people safe
  • 29. Speculate and Design for the Worst Case. Don’t assume that only the average case will occur. Be speculative about the worst case. Create better decision-making tools that don't require unsupportable risk assessments. N. G. Leveson. 2017. The Therac-25: 30 Years Later. Computer, vol. 50, no. 11 (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
  • 30. Activate Curiosity. UX research methods and activities to activate curiosity: • Abusability Testing (Dan Brown) • “Black Mirror” Episodes (Casey Fiesler; inspired by the British dystopian sci-fi TV series of the same name). Speculate about system misuse and abuse: what are potential unintended/unwanted consequences?
  • 31. Conversations for Understanding Difficult Topics. • What do we value? • Who could be hurt? • What lines won’t our AI cross? • How are we shifting power?* • How will we track our progress? *“Don’t ask if artificial intelligence is good or fair, ask how it shifts power.” Pratyusha Kalluri. https://www.nature.com/articles/d41586-020-02003-2 (“How is this ML model shifting power?" @riakall #NeurIPS2019). Photo by Pam Sharpe on Unsplash.
  • 32. New uncomfortable work. “Be uncomfortable” - Laura Kalbag. Ethical design is not superficial.
  • 33. Critical Design Components: • Security • Adaptability • Communication • Explainability • Training/Knowledge • Assessment. Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller, and Dylan D. Schmorrow. 2019. Trust Engineering for Human-AI Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 322–26. DOI: https://doi.org/10.1177/1071181319631264
  • 34. Take Pause. “In our enthusiasm to provide measurements, we should not attempt to measure the unmeasurable.” People believe a “calculated number more than actual experience.” N. G. Leveson. 2017. The Therac-25: 30 Years Later. Computer, vol. 50, no. 11 (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
  • 35. Transparency. System limitations. Boundaries and unfamiliar scenarios. "Explainability" isn't magic. Transparency isn't clarity. Human-Centered AI, White Paper. June 2021. CMU’s Software Engineering Institute. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=735362
  • 36. Safe Versus “Friendly” Experiences. Actions to get into or maintain a safe state should be easy to do. Actions that can lead to an unsafe state (hazard) should be hard to do. Relying on operators to detect errors and recover before an accident isn't realistic. N. G. Leveson. 2017. The Therac-25: 30 Years Later. Computer, vol. 50, no. 11 (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349. N. Leveson. 1995. Safeware: System Safety and Computers. Addison Wesley (1995).
  • 37. Consider Time Cycles. • Length of time interactions occur • Length varies: very short and hectic, or longer and iterative • Affects interactions. Clear communication, negotiation, and coordination required. How IAs Can Shape the Future of Human-AI Collaboration. Presented April 28-30, 2021 at the Information Architecture Conference (IAC21). Video: https://www.designforcontext.com/ia-shaping-human-ai-collaboration
  • 38. Errors and Information. Create protection against errors. Provide self-checks / error-detection / error-handling. Identify trends and behaviors that increase risk. “Data—‘big’ or not—isn't the same as information.” N. G. Leveson. 2017. The Therac-25: 30 Years Later. Computer, vol. 50, no. 11 (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
  • 39. Make Systems Effective Team Players. Activities observable for fellow team players. Easy to direct. Capitalize on human strengths. - How observable is its behavior for human counterparts? - How easily and efficiently does it allow itself to be directed? - Even (or especially) during busy, novel episodes? S. W. A. Dekker and D. D. Woods. 2002. MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination. Cognition Tech Work 4 (2002), 240–244. DOI: https://doi.org/10.1007/s101110200022 Note: MABA-MABA = Men-Are-Better-At/Machines-Are-Better-At lists.
  • 40. Appropriate Trust. • Understand context and test in context • Design for purpose: systems • Provide understandable evidence • Complement human strengths • Provide control to people. Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
  • 41. Carol J. Smith. Twitter: @carologic. LinkedIn: https://www.linkedin.com/in/caroljsmith/ CMU’s Software Engineering Institute, Emerging Technology Center. Twitter: @sei_etc
  • 42. Resources
Denise Rousseau, Sim Sitkin, Ronald Burt, and Colin Camerer. 1998. Not So Different After All: A Cross-discipline View of Trust. Academy of Management Review 23 (July 1998). DOI: 10.5465/AMR.1998.926617
Bobbie Seppelt and John Lee. 2012. Human Factors and Ergonomics in Automation Design. In Handbook of Human Factors and Ergonomics (Fourth Edition), Chapter 59. Wiley. DOI: https://doi.org/10.1002/9781118131350.ch59
Human-Centered AI, White Paper. June 2021. Carnegie Mellon University’s Software Engineering Institute. Contributors: Hollen Barmer, Rachel Dzombak, Matt Gaston, Jay Palat, Frank Redner, Carol J. Smith. https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=735362
Jia Guo and Ing-Ray Chen. 2015. A Classification of Trust Computation Models for Service-Oriented Internet of Things Systems. 2015 IEEE International Conference on Services Computing (2015), 324-331. DOI: https://doi.org/10.1109/SCC.2015.52
Jonathan Rotner, Ron Hodge and Lura Danley. 2020. AI Fails and How We Can Learn from Them. The MITRE Corporation. July 2020. Case number 20-1365. https://sites.mitre.org/aifails/failure-to-launch/
Kun Yu, Shlomo Berkovsky, Ronnie Taib, Dan Conway, Jianlong Zhou, and Fang Chen. 2017. User Trust Dynamics: An Investigation Driven by Differences in System Performance. IUI 2017 (March 2017), 307-317. DOI: http://dx.doi.org/10.1145/3025171.3025219
Mary Cummings. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 2004-6313. AIAA 1st Intelligent Systems Technical Conference (September 2004). DOI: https://doi.org/10.2514/6.2004-6313
Neta Ezer, Sylvain Bruni, Yang Cai, Sam J. Hepenstal, Christopher A. Miller, and Dylan D. Schmorrow. 2019. Trust Engineering for Human-AI Teams. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, no. 1 (November 2019): 322–26. DOI: https://doi.org/10.1177/1071181319631264
N. G. Leveson. 2017. The Therac-25: 30 Years Later. Computer, vol. 50, no. 11 (November 2017), 8-11. DOI: 10.1109/MC.2017.4041349
N. Leveson. 1995. Safeware: System Safety and Computers. Addison Wesley (1995).
Onur Asan, Alparslan Emrah Bayrak and Avishek Choudhury. 2020. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res, Vol. 22, 6:e15154 (2020). URL: https://www.jmir.org/2020/6/e15154 DOI: https://doi.org/10.2196/15154
Rose Challenger, Chris W. Clegg and Craig Shepherd. 2013. Function allocation in complex systems: reframing an old problem. Ergonomics, 56:7 (2013), 1051-1069. DOI: 10.1080/00140139.2013.790482
  • 43. Decision Making - Humans vs. Computers. Humans are better at: • Perceiving patterns • Improvising and using flexible procedures • Recalling relevant facts at the appropriate time • Reasoning inductively • Exercising judgment. Computers are better at: • Responding quickly to control tasks • Repetitive and routine tasks • Reasoning deductively • Handling many complex tasks simultaneously. Mary Cummings. 2004. Automation Bias in Intelligent Time Critical Decision Support Systems. AIAA 2004-6313. AIAA 1st Intelligent Systems Technical Conference (September 2004). DOI: https://doi.org/10.2514/6.2004-6313