White Paper
Time of Flight Technology
for Gesture Interaction
Oliver Kirsch¹, Dr. Alexander van Laack², Gert-Dieter Tuzar³, Judy Blessing⁴

¹ Advanced Technologies, Visteon Electronics Germany GmbH
² Design Experience Europe, Visteon Innovation & Technology GmbH
³ Design Experience Europe, Visteon Electronics Germany GmbH
⁴ Market & Trends Research Europe, Visteon Innovation & Technology GmbH
Abstract
In this paper we present Visteon’s findings in the field of time-of-flight (ToF) gesture
interaction. We give a brief overview of current technologies before introducing our advanced
technology development, which was integrated into a drivable vehicle. A significant number of
hand gestures were implemented through our proprietary software and can be tested in the
vehicle. We conclude this paper with a summary of our findings and an outlook for next steps.
1 Introduction
Multimodal human-machine interaction (HMI) is used in today’s vehicles to offer the
driver several redundant ways to control functions. Voice and touch interaction
possibilities have already become standard features. To keep the driver’s eyes on
the road while still controlling various functions safely, gesture interaction is seen as
a promising modality.
Future driver interaction systems will require solutions such as driver monitoring
and gesture control. This functionality can be provided by 2D/3D interior camera
systems and related computer vision software. In particular, 3D sensing technology
promises a new modality for interacting with the cockpit electronics and
infotainment system of the vehicle in a more “natural” and intuitive way, because
the exact position of objects in 3D space is known.
2 State-of-the-Art
Looking at camera-based solutions, several principles and technologies are
available on the market and in the research domain for recording 3D depth images.
Triangulation is probably the most common and well-known principle that enables
the acquisition of 3D depth measurements with reasonable distance resolution. It
can be either a passive system based on a stereo vision (SV) camera system using
two 2D cameras, or an active system using one single 2D camera together with a
projector that beams a light pattern onto the scene. The drawback of active
triangulation systems is that they have difficulties with shadows in the scene.
SV systems have difficulties with scene content that has little or no contrast,
since SV-based triangulation relies on identifying the same features in both
images.
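As a concrete illustration, depth in an SV triangulation system follows from the disparity between the two camera images. This is a minimal sketch; the focal length and baseline below are hypothetical values, not figures from any specific system:

```python
def depth_from_disparity(disparity_px: float,
                         f_px: float = 800.0,   # focal length in pixels (illustrative)
                         b_m: float = 0.10      # camera baseline in meters (illustrative)
                         ) -> float:
    """Triangulated depth Z = f * b / d, valid only where the same
    feature was identified in both images."""
    if disparity_px <= 0:
        raise ValueError("feature could not be matched (no contrast)")
    return f_px * b_m / disparity_px

# A feature shifted by 20 px between the two views lies at 800 * 0.1 / 20 = 4 m.
print(depth_from_disparity(20.0))  # 4.0
```

The guard clause mirrors the contrast problem noted above: where the scene offers little or no texture, no disparity, and therefore no depth, can be measured at all.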
Figure 1: ToF measurement principle (RF-modulated light source with phase detector¹)
An alternative depth measurement method that Visteon uses in one of its technology
vehicles is called the ToF principle (see Figure 1). A camera system that uses this
technology can provide an amplitude image and a depth image of the scene (within
the camera field of view) with a high frame rate and without moving parts inside the
camera. The amplitude image looks similar to an image from a 2D camera: highly
reflective objects appear brighter than less reflective ones. The distance image
provides distance information for each sensor pixel of the scene in front of the
camera.
¹ Büttgen, B. (2005) CCD/CMOS Lock-in pixel for range imaging, p. 3
The essential components of a ToF camera system are the ToF sensor, an objective
lens and an infrared light source that emits RF-modulated illumination.
The sensor consists of an array of independent ToF pixels, each of which measures
the arrival time of the modulated scene illumination with high accuracy. This
matters because the distance between the sensor and an object is derived from the
time the light needs to travel from the light source to the object and back to the
sensor. These measurements take place in parallel in every pixel, so the camera
can provide one amplitude and one distance value per pixel, and thus one amplitude
image and one distance image per camera frame.
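A minimal sketch of this per-pixel measurement, using the standard 4-phase lock-in demodulation described by Büttgen et al. (2005); the 20 MHz modulation frequency and the sample values are illustrative assumptions:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def demodulate(a0, a1, a2, a3, f_mod_hz=20e6):
    """One ToF pixel: four correlation samples taken at 0°, 90°, 180° and 270°
    of the modulation period yield one amplitude and one distance value."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    amplitude = 0.5 * math.hypot(a3 - a1, a0 - a2)
    distance = C * phase / (4.0 * math.pi * f_mod_hz)  # unambiguous up to c / (2 f)
    return amplitude, distance

# Samples for a 90° phase shift (background offset 100, signal amplitude 40):
amp, dist = demodulate(100, 60, 100, 140)
print(amp, round(dist, 3))  # 40.0 1.874
```

Running this over every pixel of a frame produces exactly the two outputs described above: an amplitude image and a distance image.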
These two images form the basic information used as input for the computer vision
algorithms that detect hand gestures.
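As a first processing step, a simple depth threshold already separates a hand in the interaction space from the cabin background. The range limits below are illustrative, and a real pipeline would continue with arm removal, contour and fingertip extraction, and tracking:

```python
import numpy as np

def segment_hand(depth_img: np.ndarray,
                 near_m: float = 0.10,   # illustrative interaction range
                 far_m: float = 0.50) -> np.ndarray:
    """Boolean mask of pixels whose measured distance falls inside
    the interaction volume in front of the camera."""
    return (depth_img > near_m) & (depth_img < far_m)

# Toy 2x3 distance image: one "hand" pixel at 0.3 m, cabin background beyond 1 m.
depth = np.array([[1.2, 0.3, 1.1],
                  [1.3, 1.2, 1.2]])
print(segment_hand(depth).sum())  # 1
```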
3 Advanced Development
3.1 Overview
Visteon started to investigate ToF technology several years ago in its advanced
development department to build up knowledge and to understand the performance
and maturity of the technology. Besides the technology investigation in the lab, a
proprietary hardware and software solution has been developed to enable
recognition of hand gestures in mid-air, as well as further use cases that can be
linked to certain hand poses or to hand and finger positions in 3D space.
The right column of Figure 2 shows the system partitioning of a ToF camera system
and provides high-level information about the different software blocks in our current
system.
The column on the left provides an overview of the system architecture, with a
high-level flow chart of how the gesture recognition concept works.
This automaker-independent development phase was necessary to reach the
following goals:
• Find out the potential and limitations of the current ToF technology and
investigate ways to push beyond its current limits.
• Build up a good understanding of feasible use cases that can work robustly
in a car environment by implementing and testing them and executing
improvement loops based on the findings.
• Investigate the necessary hardware and software components of such a
system.
These activities put us in a position to build up our own know-how in this field,
demonstrate it to automakers and provide recommendations about which use
cases (per our experience and knowledge) can work robustly in a car environment.
These achievements also reduce risk, whether in co-innovation projects or in
RFQs for serial implementation.
Figure 2: Visteon’s ToF camera technology and system architecture
3.2 Vehicle integration
During the advanced development activities, one ToF camera system was
integrated into a drivable vehicle to investigate use cases for this new input modality
that can be used, for example, for driver information and infotainment systems to
reduce driver distraction and enhance the user experience.
The implementation in the car demonstrates the current status of the functionalities
achieved so far, with the camera located in the overhead console (see Figure 3
and Figure 4).
Figure 3: ToF Camera setup in Visteon’s ToF technology vehicle
This car is used to demonstrate the possibilities of ToF technology to automakers. In
parallel, the vehicle is also used continuously as the main development platform to
implement new use cases and to improve our software algorithms in order to enable
robust hand/arm segmentation and tracking under real environmental and
installation conditions.
Figure 4: Camera head of ToF prototype camera (left), hand gesture interaction within the car (right)
3.3 Investigation of use cases beyond simple 3D gestures
Because the ToF camera system provides depth information in addition to the
amplitude image, it can do more than detect simple gestures (such as a swipe or a
short-range approach). The camera sees the details of the scene (e.g. the number
of visible fingers and the pointing direction) as well as the exact position in 3D
space. The system is therefore ready to realize a more natural HMI within a large
interaction space.
Figure 5: Examples of hand gestures and poses for HMI interaction
The following use cases are already integrated and showcased in the technology
vehicle to demonstrate the potential of the technology and to enable new ways of
interacting with the vehicle (see Figure 5):
• Highly accurate proximity sensing, based on 3D coordinates of calculated
features
• 3D gesture recognition (dynamic gestures performed in mid-air)
• Hand pose detection (static hand poses that can be used as shortcuts)
• 2D gesture recognition (performed on surfaces such as a display or an
interior part)
• Touch detection on a display
• Functional areas on interior trim parts that can act as buttons, sliders,
touchpads (TP), etc.
• Driver / co-driver hand distinction
• Function release without the need for physical contact (hygiene)
• 3D HMI on 3D displays
• A high level of design freedom for seamless information integration on curved
surfaces, enabling new premium aesthetics and a merger of smart (functional)
decorative parts and the display area
• Addressing a functional area that is out of the hand's reach and triggering
the related function (e.g. opening/closing the passenger-side window)
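To illustrate one of the use cases above, driver/co-driver distinction can be decided from the 3D coordinate at which the arm enters the interaction space. The coordinate frame (x = 0 at the vehicle centerline, driver on the left in a left-hand-drive car) is a hypothetical convention for this sketch, not the one used in the vehicle:

```python
def classify_user(entry_x_m: float,
                  centerline_x_m: float = 0.0,
                  left_hand_drive: bool = True) -> str:
    """Label a tracked hand as driver or co-driver from the lateral (x)
    position at which the arm entered the camera's field of view."""
    from_left = entry_x_m < centerline_x_m
    return "driver" if from_left == left_hand_drive else "co-driver"

print(classify_user(-0.25))  # arm enters from the left seat -> driver
print(classify_user(+0.30))  # arm enters from the right seat -> co-driver
```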
4 Conclusion
In this paper, we investigated the implementation and application of a gesture
recognition system in a vehicle. Based on extensive state-of-the-art research,
ToF technology was identified as the most suitable for detecting a high diversity
of spatial and on-surface gestures with high accuracy inside the vehicle. To
address the demands of automakers, a new ToF camera was developed and
implemented in a test vehicle.
The current implementation of the ToF system offers a high enough resolution to
identify gestures with a high level of detail. Due to the camera's fairly large field
of view, it can detect both the driver's and the passenger's hands and, therefore,
understand who is trying to interact at a given point in time. This offers a first step
toward a contextual system that can react to active as well as passive gestures.
Future research will have to focus not only on active gestures, but also on passive
ones. Driver monitoring and understanding a driver's posture will certainly become
more relevant when we think about hand-over scenarios for autonomous driving.
5 Literature References
Büttgen, B., Oggier, T., Lehmann, M., Kaufmann, R., Lustenberger, F. (2005). CCD/CMOS
Lock-in pixel for range imaging: challenges, limitations and state-of-the-art. In:
Proceedings of the 1st Range Imaging Research Day, p. 3.
6 Authors
Dr. Alexander van Laack
As human-machine interaction (HMI) and technical design
manager in Visteon’s European design experience group, Dr.
Alexander van Laack is responsible for developing strategies for
the use of specialized, automotive-related HMI concepts. In his
role, he leads driving-simulator-based user experience research
initiatives and is responsible for HMI and user experience
concept implementation.
Dr. van Laack has more than eight years of automotive
experience.
He previously held a position as research engineer at Ford
Research and Advanced Engineering developing methodologies
to measure user experience and quality perception of interiors
and HMIs.
Dr. van Laack has a master's degree in business administration
and engineering, as well as a PhD in engineering, from
RWTH Aachen University in Germany.
Judy Blessing
Judy Blessing brings 17 years of research experience to her
manager position in market and trends research. Her in-depth
knowledge of research methodologies allows her to apply the
proper testing and analysis to showcase Visteon's automotive
intellect to external customers and industry affiliates. Judy holds
a German university diploma degree in marketing/market
research from the Fachhochschule Pforzheim, Germany.
Judy Blessing has more than ten years of automotive
experience, investigating topics like consumer perceived quality,
user experience, usability and advanced product research.
She previously held positions as researcher in different
technology companies and research institutes, measuring
consumer reactions to innovative products.
Gert-Dieter Tuzar, MFA
As Principal Designer HMI and Human Factors Expert in
Visteon’s European design experience group, Gert-Dieter Tuzar
is responsible for Human Centered HMI Design Development
Process and Interaction Concepts. In his role he leads HMI
Design Development for Infotainment products, and Driver
Information Systems.
Gert-Dieter Tuzar has more than twenty years of automotive
experience. He has a Master of Fine Arts degree from The Ohio
State University, Columbus, Ohio, USA, as well as a
Diplom-Industrie-Designer (FH) degree from the University of
Applied Sciences in Pforzheim, Germany.
Oliver Kirsch
As Innovation Project Manager in Visteon's European Advanced
Innovation Group, Oliver Kirsch is responsible for investigating
new technologies, from technology monitoring to proof of
concept. In his role he investigates advanced camera
technologies for interior applications, with a focus on computer
vision algorithms for time-of-flight-based 3D hand gesture
recognition. Previously, he held roles in other areas of cockpit
electronics such as instrument clusters and head-up displays.
Oliver Kirsch applies more than twenty years of automotive
engineering experience. He earned a degree in electrical
engineering from the Bergische University Wuppertal, Germany.
Note: Co-author Oliver Kirsch’s contributions to this paper were
made during his previous employment at Visteon Corporation.
About Visteon
Visteon is a global company that designs, engineers and manufactures innovative
cockpit electronics products and connected car solutions for most of the world’s
major vehicle manufacturers. Visteon is a leading provider of instrument clusters,
head-up displays, information displays, infotainment, audio systems, and telematics
solutions; its brands include Lightscape®, OpenAir® and SmartCore™. Visteon also
supplies embedded multimedia and smartphone connectivity software solutions to
the global automotive industry. Headquartered in Van Buren Township, Michigan,
Visteon has nearly 11,000 employees at more than 40 facilities in 18 countries.
Visteon had sales of $3.25 billion in 2015. Learn more at www.visteon.com.
Visteon Corporation
One Village Center Dr.
Van Buren Township, MI 48188
1-800-VISTEON
www.visteon.com Copyright © 2016 Visteon Corporation