ICS3211 - Intelligent
Interfaces II
Combining design with technology for effective human-
computer interaction
Week 8 Department of AI,
University of Malta
Design for Multimodal
Interfaces
Week 8 overview:
• Multimodal interactions
• Human interaction in multimodal systems
• Design guidelines
• Real world systems
Learning Outcomes
At the end of this session you should be able to:
• Describe the characteristics of multimodal interfaces;
• Draw inferences about the design of multimodal interfaces;
• Compare and contrast the multiple modalities which interfaces
would require depending on the context;
• List the best practices of the design principles for multimodal
interfaces.
Interaction Forms
• Bimanual
• Gesture recognition
• Speech recognition
• Free space gestures
• Background sensing
techniques
• AR - projector based
• BCI
Multimedia vs. Multimodal
• Multimedia – more than one mode of
communication is output to the user; e.g. a sound
clip attached to a presentation.
• Media channels: text, graphics, animation, video:
all visual media
• Multimodal – computer processes more than one
mode of communication; e.g. the combined input of
speech and touch in smart phones
• Sensory modalities: visual, auditory, tactile, etc.
Multimodal Interactions
• Traditional WIMP offers limited input/output
possibilities;
• Mix of audio/visual interactions important for
communication;
• All senses (including touch) are relevant;
• Combinations of multiple modalities (including
speech, gestures, etc.) offer new functionalities.
Multimodal Interactions
• Modality is the mode or path of communication
according to human senses, using different types of
information and different interface devices;
• Some definitions:
Multimodal HCI system is simply one that responds to inputs in
more than one modality or communication channel (e.g. speech,
gesture, writing and others) [James/Sebe]
Multimodal interfaces process two or more combined user input
modes (such as speech, pen, touch, manual gesture, gaze and
head and body movements) in a coordinated manner with
multimedia system output. [Oviatt]
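Oviatt's definition above can be made concrete with a minimal sketch (the event fields and class names here are illustrative, not taken from any particular toolkit): a dispatcher that collects events from several input modes and can report whether the interaction is multimodal in her sense, i.e. uses two or more combined input modes.

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str    # e.g. "speech", "pen", "gaze"
    payload: str     # recognised content of this event
    timestamp: float # seconds since interaction start

class MultimodalDispatcher:
    """Collects events from several input modalities for joint interpretation."""

    def __init__(self):
        self.events = []

    def receive(self, event: InputEvent) -> None:
        self.events.append(event)

    def modalities_used(self) -> set:
        return {e.modality for e in self.events}

    def is_multimodal(self) -> bool:
        # Oviatt: two or more combined user input modes
        return len(self.modalities_used()) >= 2

d = MultimodalDispatcher()
d.receive(InputEvent("speech", "delete that", 0.10))
d.receive(InputEvent("pen", "circle(file_icon)", 0.35))
print(d.is_multimodal())  # → True
```

A real system would of course attach recogniser-specific payloads and feed the collected events to a fusion component rather than just counting modalities.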
Multimodal Interactions
Case Scenarios for Discussion
Healthcare Companion Robot
• Scenario: A healthcare companion robot designed for elderly care, equipped with voice interaction, facial
recognition, touch-sensitive screens, and gesture control.
• Discussion Points: How do these multimodal interactions benefit the elderly, especially considering potential
impairments (vision, hearing, mobility)? What are the challenges in integrating these technologies seamlessly?
Interactive Learning System for Children
• Scenario: An educational platform for children that uses a combination of interactive storytelling (voice and
visuals), touch-based games, and AR experiences.
• Discussion Points: Discuss the benefits of using multiple modes of interaction in children's learning. How can
the system ensure engagement and effectiveness in education? What considerations should be made for
children at different developmental stages?
Smart Home Control Panel
• Scenario: A smart home interface that integrates voice commands, touchscreen inputs, and gesture controls
to manage home appliances, security, and lighting.
• Discussion Points: Evaluate the practicality and user experience of combining these modes of interaction.
What challenges might arise in terms of user adaptability and interface design?
Why go for Multimodal?
• Exploit the senses
• New uses
• Increased flexibility
• Helps with impairments
• More robust
• More engaging
What happens?
• When humans interact
amongst themselves …
• When humans interact
with a machine …
Input Modalities
• Speech or other sounds
• Head movements
(facial expression,
gaze)
• Pointing, pen, touch
• Body movement/
gestures
• Motion controller
(accelerometer)
• Tangibles
• Positioning
• Brain-computer
interface
• Biomodalities (sweat,
pulse, respiration)
Output Modalities
• Visual:
• Visualization
• 3D GUIs
• Virtual/Augmented Reality
• Auditory:
• Speech (embodied
conversational agents)
• Sound
• Haptics
• Force feedback
• Low freq. bass
• Pain
• Taste
• Scent
Speech vs. Gestures
• Information that can be accessed from speech:
• Word recognition
• Language recognition
• Speaker recognition
• Emotion recognition
• Accent recognition
Speech vs. Gestures
• Humans use their body as communication modality:
• Gestures (explicit & implicit)
• Body language
• Focus of attention
• Activity
• Perception by computers:
• Computer vision
• Body mounted sensors
Haptics
• Manipulation tasks require the ability to feel objects;
• Computers can perceive this by:
• Haptic interfaces
• Tangible objects
• Force sensors
Biophysiological Modalities
• Body information through:
• Brain activity
• Skin conductance
• Temperature
• Heart rate
• Reveal information on:
• Workload
• Emotional state
• Mood
• Fatigue
Types of Multimodal
Interfaces
• Perceptual
• highly interactive
• rich, natural interaction
• Attentive
• context aware
• implicit
• Enactive
• relies on active manipulation through the use of hands or body,
such as TUI
Challenges of Multimodal
Interfaces
• Development of cognitive theories to guide
multimodal system design
• Development of effective natural language
processing
• Dialogue processing
• Error-handling techniques
• Function robustly and adaptively
• Support for collaborative multi-person use
Design of Multimodal
Interfaces
• Multimodal interfaces are designed for:
• compatibility with users’ work practices;
• flexibility;
• Design criteria:
• robustness increases as the number and heterogeneity
of modalities increase;
• performance improves with adaptivity of interface;
• persistence of operation despite physical damage, loss
of power, etc.
A representation of the multimodal man-machine interaction loop (taken from http://humancarinteraction.com/multimodal-interaction.html)
Guidelines for the Design of
Multimodal Interfaces
• To achieve more natural interaction, like human-
human interaction
• To increase robustness by providing redundant and
complementary information
Guidelines for the Design of
Multimodal Interfaces
1. Requirements specifications
• design for broad range of users (experience, abilities, etc.) and contexts
(home, office, changing environments like car)
• address privacy and security issues
• don’t remember users by default
• use non-speech input for private information, like passwords
2. Designing multimodal input and output
• guidelines stem from cognitive science:
• maximize human cognitive and physical abilities e.g., don’t require
paying attention to two things at once
• reduce memory load
• multiple modes should complement each other, enhance each other
• integrate modalities to be compatible with user preferences, context
and system functionality e.g., match input and output styles
• use multimodal cues, e.g., look at speaker
• synchronize modalities (timing)
• synchronize system state across modalities
3. Adaptivity
• adapt to needs/experiences/skill levels of different users and contexts
• examples: gestures replace sounds in noisy settings, accommodate
for slow bandwidth, adapt quantity and style of information display
based on user’s perceived skill level
4. Consistency
• use same language/keywords for all modalities
• use same interaction shortcuts for all modalities
• support both user and system switching between modalities
5. Feedback
• users should know what the current modality is and what other modalities
are available
• avoid lengthy instructions
• use common icons, simple instructions and labels
• confirm system interpretation of user’s commands, after fusion of all input
modalities has completed
6. Error prevention and handling
• clearly mark “exits” from: task, modality & system
• support “undo” & include help
• integrate complementary modalities to improve robustness:
strengths of one modality should overcome weaknesses of others
• let users control modality selection
• use rich modalities that can convey semantic information beyond
simple point-and-click
• fuse information from multiple sources
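The fusion guideline above can be illustrated with a toy late-fusion routine (the function name, event format, and 1.5-second time window are all assumptions for illustration): a deictic speech command such as "delete that" is paired with the pointing event closest in time, provided it falls within a temporal integration window. This reflects the point made below that speech and gesture are synchronized but not simultaneous.

```python
def fuse(speech_events, point_events, window=1.5):
    """Late fusion sketch: pair each deictic speech command
    (one containing "that" or "there") with the pointing event
    closest in time, if within `window` seconds."""
    fused = []
    for s_time, utterance in speech_events:
        if "that" in utterance or "there" in utterance:
            candidates = [(abs(p_time - s_time), target)
                          for p_time, target in point_events
                          if abs(p_time - s_time) <= window]
            if candidates:
                # nearest pointing event resolves the deictic reference
                fused.append((utterance, min(candidates)[1]))
    return fused

speech = [(2.0, "delete that")]
points = [(1.6, "file_icon"), (5.0, "trash")]
print(fuse(speech, points))  # → [('delete that', 'file_icon')]
```

Real fusion engines weigh recogniser confidence scores as well as timing, which is how complementary modalities can clear out each other's uncertainty.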
• Users do like to interact multimodally with artificial systems
• Multimodal interaction is preferred in spatial domains; we should
offer & expect more than point & speak
• Multimodal interaction changes the way we talk; we need to
adapt our speech processing components
• Speech and gesture are synchronized but not simultaneous;
• We cannot assume redundancy of content; we need to process
modalities in an integrated manner
• Use multimodality for better system error characteristics, through
expecting simplified speech, fusion of modalities to clear out
uncertainty & offering the right modality for the right task
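Offering the right modality for the right task can be sketched as a simple context-driven policy; the thresholds, parameter names, and situations below are illustrative assumptions, not measured values.

```python
def choose_output_modality(noise_db: float, eyes_busy: bool) -> str:
    """Pick an output modality from context (illustrative thresholds)."""
    if eyes_busy:
        return "audio"   # e.g. user is driving: avoid visual output
    if noise_db > 70:
        return "visual"  # too loud for speech output to be heard
    return "speech"

print(choose_output_modality(80, False))  # → visual
print(choose_output_modality(40, True))   # → audio
```

This is the same adaptivity idea as "gestures replace sounds in noisy settings", applied to the output side.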
Myths of Multimodal
Interfaces
1. If you build a multimodal system, users will interact multimodally
2. Speech and pointing is the dominant multimodal integration pattern
3. Multimodal input involves simultaneous signals
4. Speech is the primary input mode in any multimodal system that includes it
5. Multimodal language does not differ linguistically from unimodal language
6. Multimodal integration involves redundancy of content between modes
7. Individual error-prone recognition technologies combine multimodally to
produce even greater unreliability
8. All users’ multimodal commands are integrated in a uniform way
9. Different input modes are capable of transmitting comparable content
10. Enhanced efficiency is the main advantage of multimodal systems
Multimodal Research Areas
1. [Input] Applied ML: Speech Recognition/Synthesis,
Gesture Recognition and Motion Tracking, Head, Gait &
Pose Estimation, Multimodal Fusion
2. [Input] HCI: Usability Issues, Context Aware & Ubiquitous
Computing, Design of Multimodal Interfaces
3. [Output] Immersive Environments: Embodied Conversation
Agents, Haptics (force feedback), Audio (feedback)
4. [Output] HCI: Multimodal Feedback (in-vehicle navigation,
surgery, ergonomics, etc.), multimodal Interface design
Case Example
Philippe, S., Souchet, A. D., Lameras, P., Petridis, P., Caporal,
J., Coldeboeuf, G., & Duzan, H. (2020). Multimodal teaching,
learning and training in virtual reality: a review and case
study. Virtual Reality & Intelligent Hardware, 2(5), 421-442.
Key Concepts in Multimodal
Interfaces
• We cannot always believe our intuition on how
interaction will function; we need to find out by
performing well designed user studies
• We need to take a close look at how human – human
communication and interaction works before we can
build systems that resemble this behaviour; costly
collection and annotation of real world data is
necessary
• We need to generate semantic representations from
our sub-symbolic feature space