“Deep Learning in Robotics”
Student: Gabriele Sisinna (516706)
Course: Intelligent Systems
Professor: Beatrice Lazzerini
Authors
Harry A. Pierson
Michael S. Gashler
Introduction
• This review discusses the applications, benefits, and limitations of deep learning for robotic systems, using contemporary research as examples.
• Applying deep learning to robotics is an active research area, with at least thirty papers published on the subject from 2014 through the time of this writing (2017).
Deep learning
• Deep learning is the science of training large artificial
neural networks. Deep neural networks (DNNs) can have
hundreds of millions of parameters, allowing them to model
complex functions such as nonlinear dynamics.
History
• Several important advances slowly transformed regression into what we now call deep learning. First, the addition of an activation function enabled regression methods to fit nonlinear functions and gave the models a biological resemblance to brain cells.
• Next, nonlinear models were stacked in “layers” to create more powerful models, called multi-layer perceptrons (MLPs).
History
• Multi-layer perceptrons are universal function approximators, meaning they can fit any data, no matter how complex, with arbitrary precision, using a finite number of regression units.
• Backpropagation marked the beginning of the deep learning revolution; however, researchers still mostly limited their neural networks to a few layers because of the problem of vanishing gradients.
Application in Robotics
• Neural networks were successfully applied to robot control as early as the 1980s. It was quickly recognized that nonlinear regression provided the functionality needed for operating dynamical systems in continuous spaces.
Biorobotics and Neural networks
• In 2008, neuroscientists made advances in understanding how animals achieve locomotion, and were able to transfer this knowledge to neural networks for experimental control of biomimetic robots.
Infinite Degree of Freedom discretization
• In the soft robotics field, new techniques are needed for the control of continuous systems with a high number of DOFs.
Structure A: MLP as function approximator
• DNNs are well suited for use with robots because they are
flexible and can be used in structures that other machine
learning models cannot support.
• MLPs are trained by presenting a large collection of example training pairs (an observed input and its target output).
• An optimization method is applied to minimize the prediction loss (see the sketch below).
Supervised
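• A minimal sketch of this supervised training loop, assuming PyTorch; the layer sizes, data, and optimizer below are illustrative assumptions, not taken from the review:

import torch
from torch import nn, optim

# Illustrative only: a small MLP fitted to (input, target) training pairs,
# as in Structure A. Layer sizes and data are assumptions.
model = nn.Sequential(
    nn.Linear(4, 64), nn.Tanh(),   # hidden layer with nonlinear activation
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),              # e.g. predicted control outputs
)
opt = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(1024, 4)           # stand-in for observed robot states
y = torch.randn(1024, 2)           # stand-in for target outputs

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)    # prediction loss over the training pairs
    loss.backward()                # backpropagation
    opt.step()                     # optimization step minimizing the loss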
Classification
• These structures also excel at classification tasks, such as determining what type of object lies before the robot, which grasping approach or general planning strategy is best suited to current conditions, or the state of a complex object with which the robot is interacting.
Parallel Computing: training DNNs
• To make effective use of deep learning models, it is important to train on one or more general-purpose graphics processing units (GPGPUs). Many other ways of parallelizing deep neural networks have been attempted, but none yet yields the performance gains of GPGPUs.
Structure B: Autoencoders
• Autoencoders are used primarily in cases where high-dimensional observations are available, but the user wants a low-dimensional representation of state.
• The autoencoder is one common model for facilitating “unsupervised learning.” It requires two DNNs, called an “encoder” and a “decoder” (sketched below).
Unsupervised
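• A minimal autoencoder sketch, assuming PyTorch; the input and code dimensions are assumptions. The encoder compresses a high-dimensional observation to a low-dimensional state code, and the decoder reconstructs the observation, so the pair can be trained without labels:

import torch
from torch import nn

# Illustrative sketch: encoder/decoder pair for unsupervised state compression.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 8))
decoder = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 784))

obs = torch.randn(32, 784)                  # high-dimensional observations (assumed size)
code = encoder(obs)                         # low-dimensional representation of state
recon = decoder(code)                       # reconstruction of the input
loss = nn.functional.mse_loss(recon, obs)   # reconstruction loss, no labels needed
loss.backward()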
Structure C: Recurrent Neural Networks
• Recurrent networks can keep track of the past thanks to feedback loops (they are discrete-time, non-autonomous dynamical systems).
• Structure C is a type of “recurrent neural network,” designed to model dynamical systems, including robots. It is often trained with an approach called “backpropagation through time” (a minimal sketch follows).
Supervised
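• A minimal recurrent-network sketch, assuming PyTorch; the sizes and sequence data are assumptions. The network is unrolled over a sequence, and one backward pass propagates gradients through all time steps (backpropagation through time):

import torch
from torch import nn

# Illustrative sketch: recurrent network trained with backpropagation through time.
rnn = nn.RNN(input_size=3, hidden_size=16, batch_first=True)
readout = nn.Linear(16, 2)

seq = torch.randn(8, 50, 3)            # batch of 50-step input sequences (assumed sizes)
target = torch.randn(8, 50, 2)         # target outputs at every time step

out, h_n = rnn(seq)                    # feedback loop carries hidden state through time
pred = readout(out)
loss = nn.functional.mse_loss(pred, target)
loss.backward()                        # gradients flow back through all time steps (BPTT)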
Structure D: Deep Reinforcement Learning
• Deep reinforcement learning (DRL) combines deep learning and reinforcement learning principles to create efficient algorithms applied to areas such as robotics, video games, and healthcare.
• Combining deep learning architectures (deep neural networks) with reinforcement learning algorithms (Q-learning, actor-critic, etc.) makes it possible to scale to previously unsolvable problems.
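• For reference (standard notation, not taken from the cited papers), the tabular Q-learning update that DRL scales up by replacing the table with a deep network as function approximator:

  Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]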
Exploration and exploitation
• Instead of minimizing prediction error against a training set of samples, deep Q-networks seek to maximize long-term reward.
• This is done by seeking a balance between exploration and exploitation that ultimately leads to an effective policy model.
Biological analogy
• Doya identified that supervised learning methods (Structures
A and C) mirror the function of the cerebellum.
• Unsupervised methods (Structure B) learn in a manner comparable to that of the cerebral cortex, and reinforcement learning (Structure D) is analogous to the basal ganglia.
What’s the point?
• Every part of a complex system can be made to “learn”.
• The real power of deep learning does not come from using
just one of the structures described in the previous slides as
a component in a robotics system, but in connecting parts of
all these structures together to form a full system that learns
throughout.
• This is where the “deep” in deep learning begins to make its
impact – when each part of a system is capable of learning,
the system can adapt in sophisticated ways.
Limits
• Some remaining barriers to the adoption of deep learning in robotics include the need for large amounts of training data and long training times. One promising trend is crowdsourcing training data via cloud robotics.
• Distributed computing offers the potential to direct more
computing resources to a given problem but can be limited
by communication speeds.
• DNNs excel at 2D image recognition, but they are known to
be highly susceptible to adversarial samples, and they still
struggle to model 3D spatial layouts.
Open challenges for the next years
1. Learning complex, high-dimensional, and novel dynamics
2. Learning control policies in dynamic environments
3. Advanced manipulation
4. Advanced object recognition
5. Interpreting and anticipating human actions (next slides)
6. Sensor fusion & dimensionality reduction
7. High-level task planning
Robot gains Social Intelligence through Multimodal Deep Reinforcement Learning
Authors
Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa and Hiroshi Ishiguro
Pepper Robot
• Designed to be used in professional environments, Pepper
is a humanoid robot that can interact with people, ‘read’
emotions, learn, move and adapt to its environment, and
even recharge on its own. Pepper can perform facial
recognition and develop individualized relationships when it
interacts with people.
• The authors propose a Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like interaction skills through a trial-and-error method.
Reinforcement Learning background
• An agent interacts sequentially with an environment E with the aim of maximizing cumulative reward.
• At each time-step, the agent observes a state 𝐒𝒕, takes an action 𝒂𝒕 from the set of legal actions 𝑨 = {𝟏,· · · , 𝑲}, and receives a scalar reward 𝑹𝒕 from the environment.
• An agent’s behavior is formalized by a policy π, which maps states to actions.
• The goal of an RL agent is to learn a policy π that maximizes the expected total return (reward); see the notation below.
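• In standard RL notation (added for reference, with discount factor γ ∈ [0, 1], introduced on a later slide), the discounted return and the objective are:

  G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1}, \qquad \pi^{*} = \arg\max_{\pi}\, \mathbb{E}_{\pi}\!\left[ G_t \right]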
Deep Q-network
• Further advancements in machine learning have merged deep learning with reinforcement learning (RL), which has led to the development of the deep Q-network (DQN).
• DQN utilizes an automatic feature extractor, a deep convolutional neural network (ConvNet), to approximate the action-value function of the Q-learning method.
CNN for action-value function approximation
• The structure of the two streams is identical, and each stream comprises eight layers (excluding the input layer).
• Since each stream takes eight frames as input, the last eight frames from the corresponding camera are pre-processed and stacked together to form the input for each stream of the network.
Multimodal Deep Q-Network (MDQN)
• The dual-stream ConvNets process the depth and grayscale images independently.
• The robot learns to greet people using a set of four legal actions, i.e., waiting, looking towards the human, waving its hand, and handshaking.
• The objective of the robot is to learn which action to perform in each situation (a sketch of the dual-stream network follows).
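• A minimal sketch of the dual-stream idea, assuming PyTorch. The eight-frame input, the identical streams, and the four actions follow the slides; the kernel sizes, channel counts, 84x84 frame resolution, and the fusion-by-averaging rule are assumptions made for this sketch:

import torch
from torch import nn

def make_stream(in_frames=8, n_actions=4):
    # One stream: 8 stacked pre-processed frames in, one Q-value per action out.
    # Kernel sizes, channel counts, and the 84x84 resolution are assumptions.
    return nn.Sequential(
        nn.Conv2d(in_frames, 16, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 7 * 7, 256), nn.ReLU(),
        nn.Linear(256, n_actions),
    )

gray_stream = make_stream()            # processes grayscale frames
depth_stream = make_stream()           # identical structure, processes depth frames

gray = torch.randn(1, 8, 84, 84)       # last eight grayscale frames
depth = torch.randn(1, 8, 84, 84)      # last eight depth frames

# The two streams are processed independently; averaging their per-action
# Q-values is an assumption of this sketch, not the paper's stated rule.
q_values = 0.5 * (gray_stream(gray) + depth_stream(depth))
action = int(q_values.argmax(dim=1))   # index of the chosen legal action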
Reward and action-value function
• The expected total return is the sum of rewards discounted by a factor 𝜸 ∈ [𝟎, 𝟏] at each time-step (𝛾 = 0.99 in the proposed work).
• Given that the optimal Q-function 𝑸∗(𝒔′, 𝒂′) of the sequence 𝒔′ at the next time-step is known for all possible actions 𝒂′, the optimal policy is to select the action 𝒂′ that maximizes the expected value of 𝒓 + 𝜸𝑸∗(𝒔′, 𝒂′).
• In DQN, the parameters of the Q-network are adjusted iteratively towards the Bellman target by minimizing the following loss function:
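• (The loss equation on the original slide is not reproduced in this transcript; for reference, the standard DQN loss toward the Bellman target has the form below, where the separate target-network parameters θ⁻ are part of the standard DQN formulation and an assumption in this sketch.)

  L_i(\theta_i) = \mathbb{E}_{(s, a, r, s') \sim M}\left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta_i) \right)^{2} \right]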
Parameters and agent behavior
• The current parameters are updated by stochastic gradient descent in the direction of the gradient of the loss function with respect to the parameters.
• The agent’s behavior at each time-step is selected by an ε-greedy policy, where the greedy strategy is adopted with probability (1−ε) and a random action is taken with probability ε.
• The robot gets a reward of 1 for a successful handshake, −0.1 for an unsuccessful handshake, and 0 for the other three actions (sketched below).
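• A minimal sketch of the ε-greedy action selection and the reward scheme described above, assuming Python/PyTorch; the Q-network call and the state representation are placeholders:

import random
import torch

ACTIONS = ["wait", "look_towards_human", "wave_hand", "handshake"]

def select_action(q_network, state, epsilon):
    # With probability epsilon explore (random action); otherwise exploit the
    # greedy action suggested by the Q-network (state: batched stacked frames).
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_network(state).argmax(dim=1))

def handshake_reward(action, handshake_succeeded):
    # Reward scheme from the slide: +1 for a successful handshake,
    # -0.1 for an unsuccessful handshake, 0 for the other three actions.
    if ACTIONS[action] != "handshake":
        return 0.0
    return 1.0 if handshake_succeeded else -0.1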
Proposed algorithm
• Data generation phase: the system interacts with the environment using the Q-network 𝑄(𝑠, 𝑎; 𝜃). The system observes the current scene, which comprises grayscale and depth frames, and takes an action using the 𝜺-greedy strategy. The environment in return provides a scalar reward. The interaction experience 𝑒 = (𝑠ᵢ, 𝑎ᵢ, 𝑟ᵢ, 𝑠ᵢ₊₁) is stored in the replay memory 𝑴.
• Training phase: the system utilizes the collected data, stored in the replay memory 𝑴, to train the networks. The hyperparameter 𝒏 denotes the number of experience replays. For each experience replay, a mini buffer 𝑩 of 2000 interaction experiences is randomly sampled from the finite-sized replay memory M. The model is trained on mini-batches sampled from buffer B, and the network parameters are updated iteratively.
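• An outline of the two phases as a sketch under the slide's description; `env`, `select_action`, and `update_q_network` are hypothetical placeholders, and the step counts and mini-batch size are assumptions:

import random

# Illustrative outline of the data-generation / training phases described above.
replay_memory = []                                   # finite-sized replay memory M

def data_generation_phase(q_network, env, steps, epsilon):
    state = env.observe()                            # grayscale + depth frames (placeholder API)
    for _ in range(steps):
        action = select_action(q_network, state, epsilon)        # epsilon-greedy strategy
        reward, next_state = env.step(action)                    # scalar reward from the environment
        replay_memory.append((state, action, reward, next_state))
        state = next_state

def training_phase(q_network, n_replays, batch_size=32):
    for _ in range(n_replays):                       # n experience replays
        # mini buffer B of (up to) 2000 experiences sampled from M
        buffer = random.sample(replay_memory, k=min(2000, len(replay_memory)))
        for i in range(0, len(buffer), batch_size):  # mini-batches drawn from B
            update_q_network(q_network, buffer[i:i + batch_size])   # SGD step toward the Bellman target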
Evaluation
• For testing the model performance, a separate test dataset, comprising 4480 grayscale and depth frames not seen by the system during learning, was collected.
• If the agent’s decision was considered wrong by the majority of human evaluators, the evaluators were asked to agree on the most appropriate action for that scenario.
Results
• The authors evaluated the trained grayscale (y-channel) Q-network, the depth-channel Q-network, and the MDQN on the test dataset; Table 1 summarizes the performance measures of these trained Q-networks. In Table 1, accuracy corresponds to how often the predictions made by the Q-networks were correct.
• The multimodal deep Q-network achieved a maximum accuracy of 95.3%, whereas the y-channel and depth-channel Q-networks achieved 85.9% and 82.6% accuracy, respectively. The results in Table 1 confirm that fusing the two streams improves the social cognitive ability of the agent.
Performance
• This figure shows the performance of the MDQN on the test dataset over a series of episodes. Episode 0 on the plot corresponds to the Q-network with randomly initialized parameters. The plot indicates that the performance of the MDQN agent on the test dataset improves continuously as the agent accumulates more interaction experience with humans.
Conclusions
• In social physical human-robot interaction, it is very difficult to envisage all the possible interaction scenarios the robot can face in the real world, which makes programming a social robot notoriously hard.
• The MDQN agent has learned to give importance to walking trajectories, head orientation, body language, and the activity in progress in order to decide its best action.
• Future aims: i) increase the action space instead of limiting it to just four actions; ii) use a recurrent attention model so that the robot can indicate its attention; iii) evaluate the influence of the three actions other than handshake on human behavior.
Thanks!
References
• Deep Learning in Robotics: A Review of Recent Research
(Harry A. Pierson, Michael S. Gashler)
• Robot gains Social Intelligence through Multimodal Deep
Reinforcement Learning (Ahmed Hussain Qureshi, Yutaka
Nakamura, Yuichiro Yoshikawa, Hiroshi Ishiguro)