Deep Learning for Dialogue Systems
deepdialogue.miulab.tw
YUN-NUNG (VIVIAN) CHEN ASLI CELIKYILMAZ DILEK HAKKANI-TÜR
2
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
2
Material: http://deepdialogue.miulab.tw
Break
Introduction
Introduction
3
4
Early 1990s
Early 2000s
2017
Multi-modal systems
e.g., Microsoft MiPad, Pocket PC
Keyword Spotting
(e.g., AT&T)
System: “Please say collect,
calling card, person, third
number, or operator”
TV Voice Search
e.g., Bing on Xbox
Intent Determination
(Nuance’s Emily™, AT&T HMIHY)
User: “Uh…we want to move…we
want to change our phone line
from this house to another house”
Task-specific argument extraction
(e.g., Nuance, SpeechWorks)
User: “I want to fly from Boston
to New York next week.”
Brief History of Dialogue Systems
Apple Siri
(2011)
Google Now (2012)
Facebook M & Bot
(2015)
Google Home
(2016)
Microsoft Cortana
(2014)
Amazon Alexa/Echo
(2014)
Google Assistant
(2016)
DARPA
CALO Project
Virtual Personal Assistants
Material: http://deepdialogue.miulab.tw
5
Language Empowering Intelligent Assistant
Apple Siri (2011) Google Now (2012)
Facebook M & Bot (2015) Google Home (2016)
Microsoft Cortana (2014)
Amazon Alexa/Echo (2014)
Google Assistant (2016)
Apple HomePod (2017)
Material: http://deepdialogue.miulab.tw
6
Why Do We Need Them?
 Get things done
 E.g. set up alarm/reminder, take note
 Easy access to structured data, services and apps
 E.g. find docs/photos/restaurants
 Assist your daily schedule and routine
 E.g. commute alerts to/from work
 Be more productive in managing your work and personal life
6
Material: http://deepdialogue.miulab.tw
7
Why Natural Language?
 Global Digital Statistics (2015 January)
7
Global Population
7.21B
Active Internet Users
3.01B
Active Social
Media Accounts
2.08B
Active Unique
Mobile Users
3.65B
Device input is evolving towards speech, which is more natural and convenient.
Material: http://deepdialogue.miulab.tw
8
Spoken Dialogue System (SDS)
 Spoken dialogue systems are intelligent agents that are able to help users finish tasks more
efficiently via spoken interactions.
 Spoken dialogue systems are being incorporated into various devices (smart phones, smart TVs, in-car
navigation systems, etc.).
8
JARVIS – Iron Man’s Personal Assistant Baymax – Personal Healthcare Companion
Good dialogue systems assist users to access information conveniently and finish tasks efficiently.
Material: http://deepdialogue.miulab.tw
9
App → Bot
 A bot is responsible for a “single” domain, similar to an app
9
Users can initiate dialogues instead of following the GUI design
Material: http://deepdialogue.miulab.tw
10
GUI vs. CUI (Conversational UI)
10
https://github.com/enginebai/Movie-lol-android
Material: http://deepdialogue.miulab.tw
11
GUI vs. CUI (Conversational UI)
Website/APP's GUI vs. Msg's CUI
• Situation: navigation with no specific goal vs. searching with a specific goal
• Information quantity: more vs. less
• Information precision: low vs. high
• Display: structured vs. non-structured
• Interface: graphics vs. language
• Manipulation: mainly clicks vs. mainly text or speech input
• Learning: needs time to learn and adapt vs. no need to learn
• Entrance: app download vs. incorporated in any messaging-based interface
• Flexibility: low, like operating a machine vs. high, like conversing with a human
11
Material: http://deepdialogue.miulab.tw
12
Challenges
 Variability in Natural Language
 Robustness
 Recall/Precision Trade-off
 Meaning Representation
 Common Sense, World Knowledge
 Ability to Learn
 Transparency
12
Material: http://deepdialogue.miulab.tw
Two Branches of Bots
 Personal assistant, helps users achieve a certain task
 Combination of rules and statistical components
 POMDP for spoken dialog systems (Williams and Young, 2007)
 End-to-end trainable task-oriented dialogue system (Wen et al.,
2016)
 End-to-end reinforcement learning dialogue system (Li et al.,
2017; Zhao and Eskenazi, 2016)
 No specific goal, focus on natural responses
 Using variants of seq2seq model
 A neural conversation model (Vinyals and Le, 2015)
 Reinforcement learning for dialogue generation (Li et
al., 2016)
 Conversational contextual cues for response ranking
(Al-Rfou et al., 2016)
13
Task-Oriented Bot Chit-Chat Bot
Material: http://deepdialogue.miulab.tw
14
Task-Oriented Dialogue System (Young, 2000)
14
Speech
Recognition
Language Understanding (LU)
• Domain Identification
• User Intent Detection
• Slot Filling
Dialogue Management (DM)
• Dialogue State Tracking (DST)
• Dialogue Policy
Natural Language
Generation (NLG)
Hypothesis
are there any action movies to
see this weekend
Semantic Frame
request_movie
genre=action, date=this weekend
System Action/Policy
request_location
Text response
Where are you located?
Text Input
Are there any action movies to see this weekend?
Speech Signal
Backend Action /
Knowledge Providers
http://rsta.royalsocietypublishing.org/content/358/1769/1389.short
Material: http://deepdialogue.miulab.tw
15
Interaction Example
15
User
Intelligent
Agent Q: How does a dialogue system process this request?
Good Taiwanese eating places include Din Tai
Fung, Boiling Point, etc. What do you want to
choose? I can help you go there.
find a good eating place for taiwanese food
Material: http://deepdialogue.miulab.tw
16
Task-Oriented Dialogue System (Young, 2000)
16
Speech
Recognition
Language Understanding (LU)
• Domain Identification
• User Intent Detection
• Slot Filling
Dialogue Management (DM)
• Dialogue State Tracking (DST)
• Dialogue Policy
Natural Language
Generation (NLG)
Hypothesis
are there any action movies to
see this weekend
Semantic Frame
request_movie
genre=action, date=this weekend
System Action/Policy
request_location
Text response
Where are you located?
Text Input
Are there any action movies to see this weekend?
Speech Signal
Backend Action /
Knowledge Providers
Material: http://deepdialogue.miulab.tw
17
1. Domain Identification
Requires Predefined Domain Ontology
17
find a good eating place for taiwanese food
User
Organized Domain Knowledge (Database)
Intelligent Agent
Restaurant DB Taxi DB Movie DB
Classification!
Material: http://deepdialogue.miulab.tw
18
2. Intent Detection
Requires Predefined Schema
18
find a good eating place for taiwanese food
User
Intelligent
Agent
Restaurant DB
FIND_RESTAURANT
FIND_PRICE
FIND_TYPE
:
Classification!
Material: http://deepdialogue.miulab.tw
19
3. Slot Filling
Requires Predefined Schema
find a good eating place for taiwanese food
User
Intelligent
Agent
19
Restaurant DB
Restaurant Rating Type
Rest 1 good Taiwanese
Rest 2 bad Thai
: : :
FIND_RESTAURANT
rating=“good”
type=“taiwanese”
SELECT restaurant {
rest.rating=“good”
rest.type=“taiwanese”
}
Semantic Frame
Sequence Labeling
O O B-rating O O O B-type O
Material: http://deepdialogue.miulab.tw
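To make the slot-filling output concrete, below is a minimal sketch (not part of the original slides) that collects BIO tags like the ones above into a semantic frame; the helper name bio_to_frame and the example tags/intent are illustrative.

```python
# Minimal sketch: merge BIO slot tags into a semantic frame (illustrative helper).
def bio_to_frame(tokens, tags, intent):
    """Collect B-/I- tagged spans into slot=value pairs."""
    slots, current_slot, current_value = {}, None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_slot:                                  # close the previous span
                slots[current_slot] = " ".join(current_value)
            current_slot, current_value = tag[2:], [token]
        elif tag.startswith("I-") and current_slot == tag[2:]:
            current_value.append(token)
        else:                                                 # an "O" tag closes any open span
            if current_slot:
                slots[current_slot] = " ".join(current_value)
            current_slot, current_value = None, []
    if current_slot:
        slots[current_slot] = " ".join(current_value)
    return {"intent": intent, "slots": slots}

tokens = "find a good eating place for taiwanese food".split()
tags = ["O", "O", "B-rating", "O", "O", "O", "B-type", "O"]
print(bio_to_frame(tokens, tags, "FIND_RESTAURANT"))
# {'intent': 'FIND_RESTAURANT', 'slots': {'rating': 'good', 'type': 'taiwanese'}}
```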
20
Task-Oriented Dialogue System (Young, 2000)
20
Speech
Recognition
Language Understanding (LU)
• Domain Identification
• User Intent Detection
• Slot Filling
Dialogue Management (DM)
• Dialogue State Tracking (DST)
• Dialogue Policy
Natural Language
Generation (NLG)
Hypothesis
are there any action movies to
see this weekend
Semantic Frame
request_movie
genre=action, date=this weekend
System Action/Policy
request_location
Text response
Where are you located?
Text Input
Are there any action movies to see this weekend?
Speech Signal
Backend Action /
Knowledge Providers
Material: http://deepdialogue.miulab.tw
21
State Tracking
Requires Hand-Crafted States
User
Intelligent
Agent
find a good eating place for taiwanese food
21
location rating type
loc, rating
rating,
type
loc,
type
all
i want it near to my office
NULL
Material: http://deepdialogue.miulab.tw
22
State Tracking
Requires Hand-Crafted States
User
Intelligent
Agent
find a good eating place for taiwanese food
22
location rating type
loc, rating
rating,
type
loc,
type
all
i want it near to my office
NULL
Material: http://deepdialogue.miulab.tw
23
State Tracking
Handling Errors and Confidence
User
Intelligent
Agent
find a good eating place for taixxxx food
23
FIND_RESTAURANT
rating=“good”
type=“taiwanese”
FIND_RESTAURANT
rating=“good”
type=“thai”
FIND_RESTAURANT
rating=“good”
location rating type
loc, rating
rating,
type
loc,
type
all
NULL
?
?
rating=“good”,
type=“thai”
rating=“good”,
type=“taiwanese”
?
?
Material: http://deepdialogue.miulab.tw
24
Dialogue Policy for Agent Action
 Inform(location=“Taipei 101”)
 “The nearest one is at Taipei 101”
 Request(location)
 “Where is your home?”
 Confirm(type=“taiwanese”)
 “Did you want Taiwanese food?”
24
Material: http://deepdialogue.miulab.tw
25
Task-Oriented Dialogue System (Young, 2000)
Speech
Recognition
Language Understanding (LU)
• Domain Identification
• User Intent Detection
• Slot Filling
Hypothesis
are there any action movies to
see this weekend
Semantic Frame
request_movie
genre=action, date=this weekend
System Action/Policy
request_location
Text Input
Are there any action movies to see this weekend?
Speech Signal
Dialogue Management (DM)
• Dialogue State Tracking (DST)
• Dialogue Policy
Backend Action /
Knowledge Providers
Natural Language
Generation (NLG)
Text response
Where are you located?
Material: http://deepdialogue.miulab.tw
26
Output / Natural Language Generation
 Goal: generate natural language or GUI given the selected dialogue action for interactions
 Inform(location=“Taipei 101”)
 “The nearest one is at Taipei 101” v.s.
 Request(location)
 “Where is your home?” v.s.
 Confirm(type=“taiwanese”)
 “Did you want Taiwanese food?” v.s.
26
Material: http://deepdialogue.miulab.tw
Background Knowledge
27
Neural Network Basics
Reinforcement Learning
28
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
28
Material: http://deepdialogue.miulab.tw
29
Machine Learning ≈ Looking for a Function
 Speech Recognition
 Image Recognition
 Go Playing
 Chat Bot
 f
 f
 f
 f
cat
“你好 (Hello) ”
5-5 (next move)
“Where is Westin?” “The address is…”
Material: http://deepdialogue.miulab.tw
Given a large amount of data, the machine learns what the function f should be.
30
Machine Learning
30
Machine
Learning
Unsupervised
Learning
Supervised
Learning
Reinforcement
Learning
Deep learning is a type of machine learning approach based on neural networks.
Material: http://deepdialogue.miulab.tw
31
A Single Neuron
(Figure: a single neuron with inputs x_1, …, x_N, weights w_1, …, w_N, and bias b)
z = w_1 x_1 + w_2 x_2 + ⋯ + w_N x_N + b
Activation function (sigmoid): y = σ(z) = 1 / (1 + e^(-z))
w, b are the parameters of this neuron
31
Material: http://deepdialogue.miulab.tw
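For reference, a minimal NumPy sketch of the neuron above: the weighted sum z = w·x + b followed by the sigmoid activation. The numeric values are arbitrary.

```python
import numpy as np

# A single neuron (sketch): z = w . x + b, then a sigmoid activation y = 1 / (1 + e^(-z)).
def neuron(x, w, b):
    z = np.dot(w, x) + b                 # weighted sum plus bias
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid squashes z into (0, 1)

x = np.array([1.0, 2.0, -1.0])           # N-dimensional input
w = np.array([0.5, -0.3, 0.8])           # one weight per input dimension
b = 0.1                                  # bias
print(neuron(x, w, b))                   # output in (0, 1), usable for binary classification
```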
32
A Single Neuron
(Figure: the neuron as a classifier, f: R^N → R^M)
y ≥ 0.5 → "2"
y < 0.5 → not "2"
A single neuron can only handle binary classification
32
Material: http://deepdialogue.miulab.tw
33
A Layer of Neurons
 Handwriting digit classification
f: R^N → R^M
A layer of neurons can handle multiple possible outputs; the prediction is the class with the maximum output
(Figure: inputs x_1, …, x_N feed 10 neurons for 10 classes; each output y_i answers whether the digit is "1", "2", "3", … or not, and the predicted digit is the one whose y_i is the maximum)
Material: http://deepdialogue.miulab.tw
34
Deep Neural Networks (DNN)
 Fully connected feedforward network
f: R^N → R^M
(Figure: the input vector x feeds Layer 1 through Layer L of a fully connected network, producing the output vector y)
Deep NN: multiple hidden layers
Material: http://deepdialogue.miulab.tw
35
Recurrent Neural Network (RNN)
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
(Figure: an RNN unrolled over time; activation function: tanh or ReLU)
RNN can learn accumulated sequential information (time series)
Material: http://deepdialogue.miulab.tw
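A minimal NumPy sketch of one recurrent step, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h), showing how the hidden state carries accumulated sequential information; the dimensions and random inputs are illustrative only.

```python
import numpy as np

# Vanilla RNN step (sketch): the hidden state summarizes everything seen so far.
def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 3
W_xh = rng.normal(size=(hidden_dim, input_dim))
W_hh = rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                         # h_0
for x_t in rng.normal(size=(5, input_dim)):      # a length-5 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)        # h_t depends on the whole prefix x_0..x_t
print(h)
```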
36
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
36
Material: http://deepdialogue.miulab.tw
37
Reinforcement Learning
 RL is a general purpose framework for decision making
 RL is for an agent with the capacity to act
 Each action influences the agent’s future state
 Success is measured by a scalar reward signal
 Goal: select actions to maximize future reward
Material: http://deepdialogue.miulab.tw
38
Scenario of Reinforcement Learning
Agent learns to take actions to maximize expected reward.
Environment
Observation ot Action at
Reward rt
If win, reward = 1
If loss, reward = -1
Otherwise, reward = 0
Next Move
Material: http://deepdialogue.miulab.tw
39
Supervised vs. Reinforcement
 Supervised
 Reinforcement
39
(Figure) Supervised: learning from a teacher, e.g., "Hello" → say "Hi"; "Bye bye" → say "Good bye"
Reinforcement: learning from critics; the agent converses ("Hello ☺", ……) and only receives feedback such as "Bad"
Material: http://deepdialogue.miulab.tw
40
Sequential Decision Making
 Goal: select actions to maximize total future reward
 Actions may have long-term consequences
 Reward may be delayed
 It may be better to sacrifice immediate reward to gain more long-term reward
40
Material: http://deepdialogue.miulab.tw
41
Deep Reinforcement Learning
(Figure: the observation from the environment is the input to a function, represented by a DNN, whose output is the action; the reward is used to pick the best function)
Material: http://deepdialogue.miulab.tw
42
Reinforcement Learning
 Start from state s0
 Choose action a0
 Transition to s1 ~ P(s0, a0)
 Continue…
 Total reward: R = r(s0, a0) + r(s1, a1) + …
Goal: select actions that maximize the expected total reward
Material: http://deepdialogue.miulab.tw
43
Reinforcement Learning Approach
 Policy-based RL
 Search directly for optimal policy
 Value-based RL
 Estimate the optimal value function
 Model-based RL
 Build a model of the environment
 Plan (e.g. by lookahead) using model
π* is the policy achieving maximum future reward
Q*(s, a) is the maximum value achievable under any policy
Material: http://deepdialogue.miulab.tw
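As one concrete instance of the value-based option, here is a tabular Q-learning sketch (not from the slides); the toy dialogue states, actions, and reward are hypothetical.

```python
import random
from collections import defaultdict

# Value-based RL (sketch): one-step Q-learning update
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.2
actions = ["request", "inform", "confirm"]        # toy dialogue actions (illustrative only)

def choose_action(state):
    if random.random() < epsilon:                             # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])          # exploit the current estimate

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# one illustrative transition: a per-turn penalty of -1
s = "no_location"
a = choose_action(s)
q_update(s, a, reward=-1.0, next_state="has_location")
```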
Modular Dialogue System
44
45
Task-Oriented Dialogue System (Young, 2000)
45
Speech
Recognition
Language Understanding (LU)
• Domain Identification
• User Intent Detection
• Slot Filling
Dialogue Management (DM)
• Dialogue State Tracking (DST)
• Dialogue Policy
Natural Language
Generation (NLG)
Hypothesis
are there any action movies to
see this weekend
Semantic Frame
request_movie
genre=action, date=this weekend
System Action/Policy
request_location
Text response
Where are you located?
Text Input
Are there any action movies to see this weekend?
Speech Signal
Backend Action /
Knowledge Providers
http://rsta.royalsocietypublishing.org/content/358/1769/1389.short
Material: http://deepdialogue.miulab.tw
46
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
46
Material: http://deepdialogue.miulab.tw
47
Language Understanding (LU)
 Pipelined
47
1. Domain
Classification
2. Intent
Classification
3. Slot Filling
Material: http://deepdialogue.miulab.tw
LU – Domain/Intent Classification
• Given a collection of utterances ui with labels ci, D= {(u1,c1),…,(un,cn)}
where ci ∊ C, train a model to estimate labels for new utterances uk.
Mainly viewed as an utterance classification task
48
find me a cheap taiwanese restaurant in oakland
Movies
Restaurants
Sports
Weather
Music
…
Find_movie
Buy_tickets
Find_restaurant
Book_table
Find_lyrics
…
Material: http://deepdialogue.miulab.tw
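Before the neural models on the following slides, a minimal non-neural sketch of utterance classification with scikit-learn; the tiny training set and intent labels are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Utterance classification (sketch): map an utterance u_k to an intent label c in C.
train_utterances = [
    "find me a cheap taiwanese restaurant in oakland",
    "book a table for two at eight",
    "show me action movies this weekend",
    "buy two tickets for the late show",
]
train_intents = ["find_restaurant", "book_table", "find_movie", "buy_tickets"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_utterances, train_intents)
print(clf.predict(["any cheap restaurant nearby"]))   # estimated intent for a new utterance
```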
49
DNN for Domain/Intent Classification – I (Sarikaya et al., 2011)
 Deep belief nets (DBN)
 Unsupervised training of weights
 Fine-tuning by back-propagation
 Compared to MaxEnt, SVM, and boosting
49
http://ieeexplore.ieee.org/abstract/document/5947649/
Material: http://deepdialogue.miulab.tw
50
DNN for Domain/Intent Classification – II (Tur et al., 2012;
Deng et al., 2012)
 Deep convex networks (DCN)
 Simple classifiers are stacked to learn complex functions
 Feature selection of salient n-grams
 Extension to kernel-DCN
50
http://ieeexplore.ieee.org/abstract/document/6289054/; http://ieeexplore.ieee.org/abstract/document/6424224/
Material: http://deepdialogue.miulab.tw
51
DNN for Domain/Intent Classification – III (Ravuri & Stolcke, 2015)
51
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/RNNLM_addressee.pdf
Intent decision after reading all words performs better
 RNN and LSTMs for utterance classification
Material: http://deepdialogue.miulab.tw
52
DNN for Dialogue Act Classification – IV (Lee & Dernoncourt, 2016)
52
 RNN and CNNs for dialogue act classification
Material: http://deepdialogue.miulab.tw
LU – Slot Filling
53
flights from Boston to New York today
O O B-city O B-city I-city O
O O B-dept O B-arrival I-arrival B-date
As a sequence
tagging task
• Given a collection of tagged word sequences, S={((w1,1,w1,2,…, w1,n1),
(t1,1,t1,2,…,t1,n1)), ((w2,1,w2,2,…,w2,n2), (t2,1,t2,2,…,t2,n2)) …}
where ti ∊ M, the goal is to estimate tags for a new word sequence.
flights from Boston to New York today
Entity Tag
Slot Tag
Material: http://deepdialogue.miulab.tw
54
Recurrent Neural Nets for Slot Tagging – I (Yao et al, 2013;
Mesnil et al, 2015)
 Variations:
a. RNNs with LSTM cells
b. Input, sliding window of n-grams
c. Bi-directional LSTMs
(Figure: (a) LSTM, (b) LSTM-LA with a look-ahead input window, (c) bLSTM; each reads words w_0 … w_n, updates hidden states h_t (forward h_t^f and backward h_t^b in the bLSTM), and outputs tags y_0 … y_n)
http://131.107.65.14/en-us/um/people/gzweig/Pubs/Interspeech2013RNNLU.pdf; http://dl.acm.org/citation.cfm?id=2876380
Material: http://deepdialogue.miulab.tw
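A minimal PyTorch sketch of variant (c), a bidirectional LSTM that scores one slot tag per input token; the vocabulary size, tag-set size, and dimensions are placeholders, not values from the cited papers.

```python
import torch
import torch.nn as nn

# Bi-directional LSTM slot tagger (sketch): one BIO tag score vector per token.
class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=50, hidden_dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)    # forward + backward states

    def forward(self, token_ids):                         # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))             # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)                                # per-token tag scores

model = BiLSTMTagger(vocab_size=1000, num_tags=5)
tokens = torch.randint(0, 1000, (2, 7))                   # a toy batch of 2 utterances, 7 tokens
tag_scores = model(tokens)                                # train with CrossEntropyLoss over tags
print(tag_scores.shape)                                   # torch.Size([2, 7, 5])
```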
55
Recurrent Neural Nets for Slot Tagging – II (Kurata et al., 2016;
Simonnet et al., 2015)
 Encoder-decoder networks
 Leverages sentence level information
 Attention-based encoder-decoder
 Use of attention (as in MT) in the
encoder-decoder network
 Attention is estimated using a feed-
forward network with input: ht and st at
time t
(Figure: an encoder reads w_0 … w_n into hidden states h_0 … h_n; decoder states s_0 … s_n emit tags y_0 … y_n, optionally attending over the encoder states via context vectors c_i)
http://www.aclweb.org/anthology/D16-1223
Material: http://deepdialogue.miulab.tw
56
Recurrent Neural Nets for Slot Tagging – III (Jaech et al., 2016;
Tafforeau et al., 2016)
 Multi-task learning
 Goal: exploit data from domains/tasks with a lot of data to improve ones
with less data
 Lower layers are shared across domains/tasks
 Output layer is specific to task
56
https://arxiv.org/abs/1604.00117; http://www.sensei-conversation.eu/wp-content/uploads/2016/11/favre_is2016b.pdf
Material: http://deepdialogue.miulab.tw
57
Joint Segmentation and Slot Tagging (Zhai et al., 2017)
 Encoder that segments
 Decoder that tags the segments
57
https://arxiv.org/pdf/1701.04027.pdf
Material: http://deepdialogue.miulab.tw
(Figure: an RNN tagger over "taiwanese food please" outputs B-type O O, and its final step (EOS) also predicts the intent FIND_REST)
Joint Semantic Frame Parsing
Sequence-based (Hakkani-Tur et al., 2016): slot filling and intent prediction in the same output sequence
Parallel (Liu and Lane, 2016): intent prediction and slot filling are performed in two branches
58
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/IS16_MultiJoint.pdf; https://arxiv.org/abs/1609.01454
Material: http://deepdialogue.miulab.tw
59
Contextual LU
59
U: just sent email to bob about fishing this weekend
Domain: communication  Intent: send_email
S: O O O O B-contact_name O B-subject I-subject I-subject
 send_email(contact_name="bob", subject="fishing this weekend")
U1: are we going to fish this weekend
S1: B-message I-message I-message I-message I-message I-message I-message
 send_email(message="are we going to fish this weekend")
U2: send email to bob
S2: O O O B-contact_name
 send_email(contact_name="bob")
Domain Identification  Intent Prediction  Slot Filling
Material: http://deepdialogue.miulab.tw
60
Contextual LU
 User utterances are highly ambiguous in isolation
Cascal, for 6.
#people time
?
Book a table for 10 people tonight.
Which restaurant would you like to book a table for?
Restaurant
Booking
Material: http://deepdialogue.miulab.tw
61
Contextual LU (Bhargava et al., 2013; Hori et al, 2015)
 Leveraging contexts
 Used for individual tasks
 Seq2Seq model
 Words are input one at a time, tags are output at the end of each utterance
 Extension: LSTM with speaker role dependent layers
61
https://www.merl.com/publications/docs/TR2015-134.pdf
Material: http://deepdialogue.miulab.tw
62
End-to-End Memory Networks (Sukhbaatar et al, 2015)
U: “i d like to purchase tickets to see deepwater horizon”
S: “for which theatre”
U: “angelika”
S: “you want them for angelika theatre?”
U: “yes angelika”
S: “how many tickets would you like ?”
U: “3 tickets for saturday”
S: “What time would you like ?”
U: “Any time on saturday is fine”
S: “okay , there is 4:10 pm , 5:40 pm and 9:20 pm”
U: “Let’s do 5:40”
(Figure: the history turns are stored as memory vectors m_0 … m_{n-1}; the current utterance is encoded as u)
Material: http://deepdialogue.miulab.tw
63
E2E MemNN for Contextual LU (Chen et al., 2016)
63
(Figure: 1. Sentence Encoding: RNN encoders embed the history utterances {x_i} as memory representations m_i and the current utterance c as u; 2. Knowledge Attention: an inner product between u and each m_i yields the attention distribution p_i; 3. Knowledge Encoding: the weighted sum o is combined (via W_kg) with the utterance encoding h to guide the RNN tagger that outputs the slot tagging sequence y)
Idea: additionally incorporating contextual knowledge during slot tagging
 track dialogue states in a latent way
RNN Tagger
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/IS16_ContextualSLU.pdf
Material: http://deepdialogue.miulab.tw
64
Analysis of Attention
U: “i d like to purchase tickets to see deepwater horizon”
S: “for which theatre”
U: “angelika”
S: “you want them for angelika theatre?”
U: “yes angelika”
S: “how many tickets would you like ?”
U: “3 tickets for saturday”
S: “What time would you like ?”
U: “Any time on saturday is fine”
S: “okay , there is 4:10 pm , 5:40 pm and 9:20 pm”
U: “Let’s do 5:40”
0.69
0.13
0.16
Material: http://deepdialogue.miulab.tw
65
Sequential Dialogue Encoder Network (Bapna et al., 2017)
 Past and current turn encodings input to a feed forward network
65
Bapna et.al., SIGDIAL 2017
Material: http://deepdialogue.miulab.tw
66
Structural LU (Chen et al., 2016)
 K-SAN: prior knowledge as a teacher
66
(Figure: K-SAN encodes the input sentence "show me the flights from seattle to san francisco" and its knowledge-guided substructures {x_i}; knowledge attention p_i over the encoded substructures m_i produces a knowledge-guided representation that augments the RNN tagger for slot tagging)
http://arxiv.org/abs/1609.03286
Material: http://deepdialogue.miulab.tw
67
Structural LU (Chen et al., 2016)
 Sentence structural knowledge stored as memory
67
(Figure: two forms of structural knowledge for the sentence "show me the flights from seattle to san francisco": a syntactic dependency tree and a semantic AMR graph)
http://arxiv.org/abs/1609.03286
Material: http://deepdialogue.miulab.tw
68
Structural LU (Chen et al., 2016)
 Sentence structural knowledge stored as memory
http://arxiv.org/abs/1609.03286
With less training data, K-SAN still allows the model to pay similar attention to the salient substructures
that are important for tagging.
Material: http://deepdialogue.miulab.tw
69
LU Importance (Li et al., 2017)
 Compare different types of LU errors
http://arxiv.org/abs/1703.07055
Slot filling is more important than intent detection in language understanding
Sensitivity to Intent Error Sensitivity to Slot Error
Material: http://deepdialogue.miulab.tw
70
LU Evaluation
 Metrics
 Sub-sentence-level: intent accuracy, slot F1
 Sentence-level: whole frame accuracy
70
Material: http://deepdialogue.miulab.tw
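A sketch of how these metrics can be computed; representing a frame as an intent string plus a slot dictionary is an assumption made for the example, and the toy data is illustrative.

```python
# LU evaluation (sketch): intent accuracy, slot F1, and whole-frame accuracy.
def intent_accuracy(gold_intents, pred_intents):
    return sum(g == p for g, p in zip(gold_intents, pred_intents)) / len(gold_intents)

def slot_f1(gold_frames, pred_frames):
    tp = fp = fn = 0
    for gold, pred in zip(gold_frames, pred_frames):
        gold_set, pred_set = set(gold.items()), set(pred.items())
        tp += len(gold_set & pred_set)
        fp += len(pred_set - gold_set)
        fn += len(gold_set - pred_set)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def frame_accuracy(gold_intents, pred_intents, gold_frames, pred_frames):
    correct = sum(gi == pi and gf == pf for gi, pi, gf, pf in
                  zip(gold_intents, pred_intents, gold_frames, pred_frames))
    return correct / len(gold_intents)

gold_i, pred_i = ["find_restaurant"], ["find_restaurant"]
gold_f, pred_f = [{"rating": "good", "type": "taiwanese"}], [{"type": "taiwanese"}]
print(intent_accuracy(gold_i, pred_i), slot_f1(gold_f, pred_f),
      frame_accuracy(gold_i, pred_i, gold_f, pred_f))
```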
71
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
71
Material: http://deepdialogue.miulab.tw
72
Elements of Dialogue Management
72 (Figure from Gašić)
Dialogue State Tracking
Material: http://deepdialogue.miulab.tw
73
Dialogue State Tracking (DST)
 Maintain a probabilistic distribution instead of a 1-best prediction for
better robustness
73
Incorrect
for both!
Material: http://deepdialogue.miulab.tw
74
Dialogue State Tracking (DST)
 Maintain a probabilistic distribution instead of a 1-best prediction for
better robustness to SLU errors or ambiguous input
74
How can I help you?
Book a table at Sumiko for 5
How many people?
3
Slot Value
# people 5 (0.5)
time 5 (0.5)
Slot Value
# people 3 (0.8)
time 5 (0.8)
Material: http://deepdialogue.miulab.tw
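A heuristic sketch of keeping a per-slot distribution and folding in each turn's SLU confidence; this is not any particular tracker from the literature, and the mixing weight is arbitrary.

```python
# DST (sketch): keep a distribution over values for one slot instead of a 1-best guess.
def update_belief(belief, slu_hypotheses, mix=0.7):
    """belief and slu_hypotheses map candidate values to probabilities for one slot."""
    values = set(belief) | set(slu_hypotheses)
    new_belief = {v: (1 - mix) * belief.get(v, 0.0) + mix * slu_hypotheses.get(v, 0.0)
                  for v in values}
    total = sum(new_belief.values()) or 1.0
    return {v: p / total for v, p in new_belief.items()}     # renormalise

people = {"5": 0.5}                                # after "Book a table at Sumiko for 5"
people = update_belief(people, {"3": 0.8})         # after the user answers "3"
print(people)                                      # probability mass shifts towards "3"
```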
75
Multi-Domain Dialogue State Tracking (DST)
 A full representation of the system's belief of
the user's goal at any point during the dialogue
 Used for making API calls
75
Material: http://deepdialogue.miulab.tw
Do you wanna take Angela
to go see a movie tonight?
Sure, I will be home by 6.
Let's grab dinner before the
movie.
How about some Mexican?
Let's go to Vive Sol and see
Inferno after that.
Angela wants to watch the
Trolls movie.
Ok. Lets catch the 8 pm
show.
(Figure: the multi-domain dialogue state evolves with the conversation, covering Movies slots such as movie=Inferno/Trolls, theatre=Century 16, time=7:30/8/9 pm, #tickets=2→3, date=11/15/16, and Restaurants slots such as restaurant=Vive Sol, cuisine=Mexican, time=6:30–7 pm, date=11/15/16)
76
Dialog State Tracking Challenge (DSTC)
(Williams et al. 2013, Henderson et al. 2014, Henderson et al. 2014, Kim et al. 2016, Kim et al. 2016)
Challenge Type Domain Data Provider Main Theme
DSTC1 Human-Machine Bus Route CMU Evaluation Metrics
DSTC2 Human-Machine Restaurant U. Cambridge User Goal Changes
DSTC3 Human-Machine Tourist Information U. Cambridge Domain Adaptation
DSTC4 Human-Human Tourist Information I2R Human Conversation
DSTC5 Human-Human Tourist Information I2R Language Adaptation
Material: http://deepdialogue.miulab.tw
77
NN-Based DST (Henderson et al., 2013; Henderson et al., 2014; Mrkšić et al., 2015;
Mrkšić et al., 2016)
77(Figure from Wen et al, 2016)
http://www.anthology.aclweb.org/W/W13/W13-4073.pdf; https://arxiv.org/abs/1506.07190; https://arxiv.org/abs/1606.03777
Material: http://deepdialogue.miulab.tw
78
Neural Belief Tracker (Mrkšić et al., 2016)
78
https://arxiv.org/abs/1606.03777
Material: http://deepdialogue.miulab.tw
79
Multichannel Tracker (Shi et al., 2016)
79
 Training a multichannel CNN for each slot
 Chinese character CNN
 Chinese word CNN
 English word CNN
https://arxiv.org/abs/1701.06247
Material: http://deepdialogue.miulab.tw
80
DST Evaluation
 Dialogue State Tracking Challenges
 DSTC2-3, human-machine
 DSTC4-5, human-human
 Metric
 Tracked state accuracy with respect to user goal
 Recall/Precision/F-measure individual slots
80
Material: http://deepdialogue.miulab.tw
81
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
81
Material: http://deepdialogue.miulab.tw
82
Elements of Dialogue Management
82 (Figure from Gašić)
Dialogue Policy Optimization
Material: http://deepdialogue.miulab.tw
83
Dialogue Policy Optimization
 Dialogue management in a RL framework
83
U s e r
Reward R Observation OAction A
Environment
Agent
Natural Language Generation Language Understanding
Dialogue Manager
Slides credited by Pei-Hao Su
Optimized dialogue policy selects the best action that can maximize the future reward.
Correct rewards are a crucial factor in dialogue policy training
Material: http://deepdialogue.miulab.tw
84
Reward for RL ≅ Evaluation for System
 Dialogue is a special RL task
 Human involves in interaction and rating (evaluation) of a dialogue
 Fully human-in-the-loop framework
 Rating: correctness, appropriateness, and adequacy
- Expert rating: high quality, high cost
- User rating: unreliable quality, medium cost
- Objective rating: checks desired aspects, low cost
84
Material: http://deepdialogue.miulab.tw
85
Reinforcement Learning for Dialogue Policy Optimization
85
(Figure: the user input o goes through language understanding to the state s; the dialogue policy selects the action a = π(s), which is realized by language (response) generation; collected rewards (s, a, r, s’) are used to optimize Q(s, a))
Type of Bots: State / Action / Reward
• Social ChatBots: chat history / system response / # of turns maximized; intrinsically motivated reward
• InfoBots (interactive Q/A): user current question + context / answers to current question / relevance of answer; # of turns minimized
• Task-Completion Bots: user current input + context / system dialogue act w/ slot values (or API calls) / task success rate; # of turns minimized
Goal: develop a generic deep RL algorithm to learn dialogue policy for all bot categories
Material: http://deepdialogue.miulab.tw
86
Dialogue Reinforcement Learning Signal
 Typical reward function
 -1 for per turn penalty
 Large reward at completion if successful
 Typically requires domain knowledge
✔ Simulated user
✔ Paid users (Amazon Mechanical Turk)
✖ Real users
86
The user simulator is usually required for dialogue
system training before deployment
Material: http://deepdialogue.miulab.tw
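The typical reward shape described above can be written directly; the specific numbers below are illustrative.

```python
# Typical task-oriented reward (sketch): -1 per turn, plus a large terminal reward on success.
def dialogue_reward(num_turns, success, success_reward=20, turn_penalty=-1):
    return num_turns * turn_penalty + (success_reward if success else 0)

print(dialogue_reward(num_turns=6, success=True))    # 14: short, successful dialogue
print(dialogue_reward(num_turns=6, success=False))   # -6: same length but failed
```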
87
Neural Dialogue Manager (Li et al., 2017)
 Deep Q-network for training DM policy
 Input: current semantic frame observation, database returned results
 Output: system action
Semantic Frame
request_movie
genre=action, date=this weekend
System Action/Policy
request_location
DQN-based
Dialogue
Management
(DM)
Simulated User
Backend DB
https://arxiv.org/abs/1703.01008
Material: http://deepdialogue.miulab.tw
88
SL + RL for Sample Efficiency (Su et al., 2017)
 Issue about RL for DM
 slow learning speed
 cold start
 Solutions
 Sample-efficient actor-critic
 Off-policy learning with experience replay
 Better gradient update
 Utilizing supervised data
 Pretrain the model with SL and then fine-tune with RL
 Mix SL and RL data during RL learning
 Combine both
88
https://arxiv.org/pdf/1707.00130.pdf (Su et al., SIGDIAL 2017)
Material: http://deepdialogue.miulab.tw
89
Online Training (Su et al., 2015; Su et al., 2016)
 Policy learning from real users
 Infer reward directly from dialogues (Su et al., 2015)
 User rating (Su et al., 2016)
 Reward modeling on user binary success rating
(Figure: an embedding function maps the dialogue to a representation; a reward model predicts success/fail from this representation and the user's query rating, providing the reinforcement signal)
http://www.anthology.aclweb.org/W/W15/W15-46.pdf; https://www.aclweb.org/anthology/P/P16/P16-1230.pdf
Material: http://deepdialogue.miulab.tw
90
Interactive RL for DM (Shah et al., 2016)
90
Immediate
Feedback
https://research.google.com/pubs/pub45734.html
Use a third agent for providing interactive feedback to the DM
Material: http://deepdialogue.miulab.tw
91
Interpreting Interactive Feedback (Shah et al., 2016)
91
https://research.google.com/pubs/pub45734.html
Material: http://deepdialogue.miulab.tw
92
Dialogue Management Evaluation
 Metrics
 Turn-level evaluation: system action accuracy
 Dialogue-level evaluation: task success rate, reward
92
Material: http://deepdialogue.miulab.tw
93
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
93
Material: http://deepdialogue.miulab.tw
94
Natural Language Generation (NLG)
 Mapping semantic frame into natural language
inform(name=Seven_Days, foodtype=Chinese)
Seven Days is a nice Chinese restaurant
94
Material: http://deepdialogue.miulab.tw
95
Template-Based NLG
 Define a set of rules to map frames to NL
95
Pros: simple, error-free, easy to control
Cons: time-consuming, poor scalability
Semantic Frame Natural Language
confirm() "Please tell me more about the product you are
looking for."
confirm(area=$V) “Do you want somewhere in the $V?”
confirm(food=$V) “Do you want a $V restaurant?”
confirm(food=$V,area=$W) “Do you want a $V restaurant in the $W.”
Material: http://deepdialogue.miulab.tw
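A minimal sketch of template-based generation using the rules in the table above; the template keys and the brace-based fill-in syntax are illustrative choices, not a standard format.

```python
# Template-based NLG (sketch): map a dialogue act to a surface form by filling a template.
templates = {
    "confirm()": "Please tell me more about the product you are looking for.",
    "confirm(area=$V)": "Do you want somewhere in the {V}?",
    "confirm(food=$V)": "Do you want a {V} restaurant?",
    "confirm(food=$V,area=$W)": "Do you want a {V} restaurant in the {W}?",
}

def generate(act_signature, **slot_values):
    return templates[act_signature].format(**slot_values)

print(generate("confirm(food=$V,area=$W)", V="Taiwanese", W="city centre"))
```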
96
Plan-Based NLG (Walker et al., 2002)
 Divide the problem into pipeline
 Statistical sentence plan generator (Stent et al., 2009)
 Statistical surface realizer (Dethlefs et al., 2013; Cuayáhuitl et al., 2014; …)
Inform(
name=Z_House,
price=cheap
)
Z House is a
cheap restaurant.
Pros: can model complex linguistic structures
Cons: heavily engineered, require domain knowledge
Sentence
Plan
Generator
Sentence
Plan
Reranker
Surface
Realizer
syntactic tree
Material: http://deepdialogue.miulab.tw
97
Class-Based LM NLG (Oh and Rudnicky, 2000)
 Class-based language modeling
 NLG by decoding
97
Pros: easy to implement/ understand, simple rules
Cons: computationally inefficient
Classes:
inform_area
inform_address
…
request_area
request_postcode
http://dl.acm.org/citation.cfm?id=1117568
Material: http://deepdialogue.miulab.tw
98
Phrase-Based NLG (Mairesse et al, 2010)
Semantic
DBN
Phrase
DBN
Charlie Chan is a Chinese Restaurant near Cineworld in the centre
d d
Inform(name=Charlie Chan, food=Chinese, type= restaurant, near=Cineworld, area=centre)
98
Pros: efficient, good performance
Cons: require semantic alignments
realization phrase semantic stack
http://dl.acm.org/citation.cfm?id=1858838
Material: http://deepdialogue.miulab.tw
99
RNN-Based LM NLG (Wen et al., 2015)
<BOS> SLOT_NAME serves SLOT_FOOD .
<BOS> Din Tai Fung serves Taiwanese .
delexicalisation
Inform(name=Din Tai Fung, food=Taiwanese)
0, 0, 1, 0, 0, …, 1, 0, 0, …, 1, 0, 0, 0, 0, 0…
dialogue act 1-hot
representation
SLOT_NAME serves SLOT_FOOD . <EOS>
Slot weight tying
conditioned on
the dialogue act
Input
Output
http://www.anthology.aclweb.org/W/W15/W15-46.pdf#page=295
Material: http://deepdialogue.miulab.tw
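A minimal sketch of the delexicalisation and relexicalisation steps around such a generator; the SLOT_* placeholder convention follows the slide, while the helper functions are illustrative.

```python
# Delexicalisation (sketch): replace slot values with placeholders before the RNN LM,
# then copy the real values back into the generated skeleton (relexicalisation).
def delexicalise(text, frame):
    for slot, value in frame.items():
        text = text.replace(value, f"SLOT_{slot.upper()}")
    return text

def relexicalise(skeleton, frame):
    for slot, value in frame.items():
        skeleton = skeleton.replace(f"SLOT_{slot.upper()}", value)
    return skeleton

frame = {"name": "Din Tai Fung", "food": "Taiwanese"}
print(delexicalise("Din Tai Fung serves Taiwanese .", frame))   # SLOT_NAME serves SLOT_FOOD .
print(relexicalise("SLOT_NAME serves SLOT_FOOD .", frame))      # Din Tai Fung serves Taiwanese .
```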
100
Handling Semantic Repetition
 Issue: semantic repetition
 Din Tai Fung is a great Taiwanese restaurant that serves Taiwanese.
 Din Tai Fung is a child friendly restaurant, and also allows kids.
 Deficiency in either model or decoding (or both)
 Mitigation
 Post-processing rules (Oh & Rudnicky, 2000)
 Gating mechanism (Wen et al., 2015)
 Attention (Mei et al., 2016; Wen et al., 2015)
100
Material: http://deepdialogue.miulab.tw
101
 Original LSTM cell
 Dialogue act (DA) cell
 Modify Ct
Semantic Conditioned LSTM (Wen et al., 2015)
(Figure: an LSTM cell with gates i_t, f_t, o_t is augmented with a dialogue-act (DA) cell; a reading gate r_t updates the DA vector d_t over time, and d_t modifies the cell state C_t)
Inform(name=Seven_Days, food=Chinese)
0, 0, 1, 0, 0, …, 1, 0, 0, …, 1, 0, 0, …
dialog act 1-hot
representation
d0
101
Idea: using gate mechanism to control the
generated semantics (dialogue act/slots)
http://www.aclweb.org/anthology/D/D15/D15-1199.pdf
Material: http://deepdialogue.miulab.tw
102
Structural NLG (Dušek and Jurčíček, 2016)
 Goal: NLG based on the syntax tree
 Encode trees as sequences
 Seq2Seq model for generation
102
https://www.aclweb.org/anthology/P/P16/P16-2.pdf#page=79
Material: http://deepdialogue.miulab.tw
103
Contextual NLG (Dušek and Jurčíček, 2016)
 Goal: adapting users’ way of
speaking, providing context-
aware responses
 Context encoder
 Seq2Seq model
103
https://www.aclweb.org/anthology/W/W16/W16-36.pdf#page=203
Material: http://deepdialogue.miulab.tw
104
Controlled Text Generation (Hu et al., 2017)
 Idea: NLG based on generative adversarial network (GAN) framework
 c: targeted sentence attributes
https://arxiv.org/pdf/1703.00955.pdf
Material: http://deepdialogue.miulab.tw
105
NLG Evaluation
 Metrics
 Subjective: human judgement (Stent et al., 2005)
 Adequacy: correct meaning
 Fluency: linguistic fluency
 Readability: fluency in the dialogue context
 Variation: multiple realizations for the same concept
 Objective: automatic metrics
 Word overlap: BLEU (Papineni et al, 2002), METEOR, ROUGE
 Word embedding based: vector extrema, greedy matching,
embedding average
105
There is a gap between human perception and automatic metrics
Material: http://deepdialogue.miulab.tw
Evaluation
106
107
Dialogue System Evaluation
107
 Dialogue model evaluation
 Crowd sourcing
 User simulator
 Response generator evaluation
 Word overlap metrics
 Embedding based metrics
Material: http://deepdialogue.miulab.tw
108
Crowdsourcing for Dialogue System Evaluation (Yang et al., 2012)
108
http://www-scf.usc.edu/~zhaojuny/docs/SDSchapter_final.pdf
The normalized mean scores of Q2
and Q5 for approved ratings in each
category. A higher score maps to a
higher level of task success
Material: http://deepdialogue.miulab.tw
109
User Simulation
 Goal: generate natural and reasonable conversations to enable reinforcement
learning for exploring the policy space
 Approach
 Rule-based crafted by experts (Li et al., 2016)
 Learning-based (Schatzmann et al., 2006; El Asri et al., 2016, Crook and Marin, 2017)
Dialogue
Corpus
Simulated User
Real User
Dialogue Management (DM)
• Dialogue State Tracking (DST)
• Dialogue Policy
Interaction
The simulated user keeps a list of its goals and actions, randomly generates an agenda, and updates its list of goals and adds new ones as the dialogue proceeds (see the sketch below).
Material: http://deepdialogue.miulab.tw
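A minimal sketch of an agenda-based simulated user in the spirit of the description above; the goal, the fallback behavior, and the act format are simplified and hypothetical.

```python
import random

# Agenda-based user simulation (sketch): the simulated user keeps a goal and an agenda of
# pending user acts, answers system requests from its goal, and otherwise pops the agenda.
class SimulatedUser:
    def __init__(self, goal):
        self.goal = goal                                        # constraints the user wants to convey
        self.agenda = [("inform", slot, value) for slot, value in goal.items()]
        random.shuffle(self.agenda)                             # randomly generated agenda

    def respond(self, system_act):
        act, slot = system_act
        if act == "request" and slot in self.goal:              # answer what the system asked for
            return ("inform", slot, self.goal[slot])
        if self.agenda:                                         # otherwise follow the agenda
            return self.agenda.pop()
        return ("bye", None, None)

user = SimulatedUser({"type": "taiwanese", "rating": "good", "location": "near office"})
print(user.respond(("request", "location")))    # ('inform', 'location', 'near office')
print(user.respond(("request", "price")))       # not in the goal, so an agenda item is used
```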
110
Elements of User Simulation
Error Model
• Recognition error
• LU error
Dialogue State
Tracking (DST)
System dialogue acts
Reward
Backend Action /
Knowledge Providers
Dialogue Policy
Optimization
Dialogue Management (DM)
User Model
Reward Model
User Simulation Distribution over
user dialogue acts
(semantic frames)
The error model enables the system to maintain the robustness
Material: http://deepdialogue.miulab.tw
111
Rule-Based Simulator for RL Based System (Li et al., 2016)
111
 rule-based simulator + collected data
 starts with sets of goals, actions, KB, slot types
 publicly available simulation framework
 movie-booking domain: ticket booking and movie seeking
 provide procedures to add and test own agent
http://arxiv.org/abs/1612.05688
Material: http://deepdialogue.miulab.tw
112
Model-Based User Simulators
 Bi-gram models (Levin et al., 2000)
 Graph-based models (Scheffler and Young, 2000)
 Data-Driven Simulator (Jung et al., 2009)
 Neural Models (deep encoder-decoder)
112
Material: http://deepdialogue.miulab.tw
113
Data-Driven Simulator (Jung et al., 2009)
113
 Three step process
1) User intention simulator
(Figure: the user intention simulator conditions on the current discourse status and the user's semantic frame at turn t-1 to produce those at turn t, e.g., request+search_loc; it computes all possible semantic frames given the previous turn information and randomly selects one, based on the DD+DI features)
Material: http://deepdialogue.miulab.tw
114
Data-Driven Simulator (Jung et al., 2009)
114
 Three step process
1) User intention simulator
2) User utterance simulator
request+search_loc
Given a list of POS tags associated with
the semantic frame, using LM+Rules
they generate the user utterance.
I want to go to the city hall
PRP VB TO VB TO [loc_name]
Material: http://deepdialogue.miulab.tw
115
Data-Driven Simulator (Jung et al., 2009)
115
 Three step process:
1) User intention simulator
2) User utterance simulator
3) ASR channel simulator
 Evaluate the generated sentences using BLEU-like measures against the reference utterances collected from humans (with the same goal)
Material: http://deepdialogue.miulab.tw
116
Seq2Seq User Simulation (El Asri et al., 2016)
 Seq2Seq trained from dialogue data
 Input: ci encodes contextual features, such as the previous system action,
consistency between user goal and machine provided values
 Output: a dialogue act sequence form the user
 Extrinsic evaluation for policy
https://arxiv.org/abs/1607.00070
Material: http://deepdialogue.miulab.tw
117
Seq2Seq User Simulation (Crook and Marin, 2017)
 Seq2Seq trained from dialogue data
 No labeled data
 Trained on just human to machine conversations
Material: http://deepdialogue.miulab.tw
118
User Simulator for Dialogue Evaluation Measures
Understanding Ability
• whether constrained values specified by users can be understood by the system
• agreement percentage of system/user understandings over the entire dialogue (averaging all turns)
Efficiency
• number of dialogue turns
• ratio between the dialogue turns (larger is better)
Action Appropriateness
• an explicit confirmation for an uncertain user utterance is an appropriate system action
• providing information based on misunderstood user requirements
Material: http://deepdialogue.miulab.tw
119
How NOT to Evaluate Dialog System (Liu et al., 2017)
 How to evaluate the quality of the generated response ?
 Specifically investigated for chat-bots
 Crucial for task-oriented tasks as well
 Metrics:
 Word overlap metrics, e.g., BLEU, METEOR, ROUGE, etc.
 Embeddings based metrics, e.g., contextual/meaning
representation between target and candidate
https://arxiv.org/pdf/1603.08023.pdf
Material: http://deepdialogue.miulab.tw
120
Dialogue Response Evaluation (Lowe et al., 2017)
Towards an Automatic Turing Test
 Problems of existing automatic evaluation
 can be biased
 correlate poorly with human judgements of response
quality
 using word overlap may be misleading
 Solution
 collect a dataset of accurate human scores for variety
of dialogue responses (e.g., coherent/un-coherent,
relevant/irrelevant, etc.)
 use this dataset to train an automatic dialogue
evaluation model – learn to compare the reference to
candidate responses!
 Use RNN to predict scores by comparing against human
scores!
Context of Conversation
Speaker A: Hey, what do you want
to do tonight?
Speaker B: Why don’t we go see a
movie?
Model Response
Nah, let’s do something active.
Reference Response
Yeah, the film about Turing looks
great!
Material: http://deepdialogue.miulab.tw
End-to-End Learning for Dialogues
Multimodality
Dialogue Breadth
Dialogue Depth
Recent Trends and Challenges
121
122
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Evaluation
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
Material: http://deepdialogue.miulab.tw
123
ChitChat Hierarchical Seq2Seq (Serban et al., 2016)
 Learns to generate dialogues from offline dialogs
 No state, action, intent, slot, etc.
http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11957
Material: http://deepdialogue.miulab.tw
124
ChitChat Hierarchical Seq2Seq (Serban et al., 2017)
 A hierarchical seq2seq model with Gaussian latent variable for generating
dialogues (like topic or sentiment)
https://arxiv.org/abs/1605.06069
Material: http://deepdialogue.miulab.tw
125
Knowledge Grounded Neural Conv. Model (Ghazvininejad et al.,
2017)
125
Material: http://deepdialogue.miulab.tw
https://arxiv.org/abs/1702.01932
126
E2E Joint NLU and DM (Yang et al., 2017)
 Errors from DM can be propagated to NLU for
regularization + robustness
DM
126
Model DM NLU
Baseline (CRF+SVMs) 7.7 33.1
Pipeline-BLSTM 12.0 36.4
JointModel 22.8 37.4
Both DM and NLU performance (frame accuracy) is improved
https://arxiv.org/abs/1612.00913
Material: http://deepdialogue.miulab.tw
127
(Figure: for the user input "Can I have korean", the Intent Network encodes the delexicalised utterance "Can I have <v.food>"; the Belief Tracker keeps a distribution over the food slot (Korean 0.7, British 0.2, French 0.1, …); the Database Operator queries the restaurant database (Seven Days, Curry Prince, Nirala, Royal Standard, Little Seoul) and summarises the result as a DB pointer)
E2E Supervised Dialogue System (Wen et al., 2016)
Generation Network
<v.name> serves great <v.food> .
Policy Network
127
(The Policy Network combines the intent representation z_t, the belief vector p_t, and the DB pointer x_t; e.g., MySQL query q_t: "Select * where food=Korean")
https://arxiv.org/abs/1604.04562
Material: http://deepdialogue.miulab.tw
128
E2E MemNN for Dialogues (Bordes et al., 2016)
 Split dialogue system actions into
subtasks
 API issuing
 API updating
 Option displaying
 Information informing
https://arxiv.org/abs/1605.07683
Material: http://deepdialogue.miulab.tw
129
E2E RL-Based KB-InfoBot (Dhingra et al., 2017)
Movie=?; Actor=Bill Murray; Release Year=1993
Find me the Bill Murray’s movie.
I think it came out in 1993.
When was it released?
Groundhog Day is a Bill Murray
movie which came out in 1993.
KB-InfoBot
User
Entity-Centric Knowledge Base
129
Idea: differentiable database for propagating the gradients
http://www.aclweb.org/anthology/P/P17/P17-1045.pdf
Movie Actor
Release
Year
Groundhog Day Bill Murray 1993
Australia Nicole Kidman X
Mad Max: Fury Road X 2015
Material: http://deepdialogue.miulab.tw
130
E2E RL-Based System (Zhao and Eskenazi, 2016)
130
 Joint learning
 NLU, DST, Dialogue Policy
 Deep RL for training
 Deep Q-network
 Deep recurrent network
Baseline
RL
Hybrid-RL
http://www.aclweb.org/anthology/W/W16/W16-36.pdf
Material: http://deepdialogue.miulab.tw
131
E2E LSTM-Based Dialogue Control (Williams and Zweig, 2016)
131
 Idea: an LSTM maps from raw dialogue history directly to a distribution
over system actions
 Developers can provide software including business rules & programmatic APIs
 LSTM can take actions in the real world on behalf of the user
 The LSTM can be optimized using SL or RL
https://arxiv.org/abs/1606.01269
Material: http://deepdialogue.miulab.tw
132
E2E Task-Completion Bot (TC-Bot) (Li et al., 2017)
(Figure: the end-to-end neural dialogue system couples an LSTM-based Language Understanding (LU) module, Dialogue Management (DM), and Natural Language Generation (NLG) with a user simulator driven by a user goal and agenda modeling; e.g., the text input "Are there any action movies to see this weekend?" yields the semantic frame request_movie(genre=action, date=this weekend), the system action/policy request_location, and the user dialogue action Inform(location=San Francisco))
Idea: supervised learning for each component and reinforcement learning for
end-to-end training the neural dialogue system
132
https://arxiv.org/abs/1703.01008
Material: http://deepdialogue.miulab.tw
133
E2E Task-Completion Bot (TC-Bot) (Li et al., 2017)
 User goal: Two tickets for “the witch” tomorrow 9:30 PM at regal meridian 16, Seattle.
RULE BASED SYSTEM
User: Can I get 2 tickets for the witch?
Agent: Which theater do you want?
User: Regal meridian 16
Agent: What time would you like to see it?
User: 9:30 pm
Agent: What date would you like to see it?
User: Tomorrow!
Agent: How many people are you?
User: Just two adults.
Agent: What is your location?
User: Seattle!
Agent: What kind of video format do you like?
User: Oh, any kind is okay.
Agent: Do you have any price constraint?
User: No, I don’t care.
Agent: Great - I was able to purchase 2 tickets for
you to see the witch tomorrow at regal meridian 16
theater in Seattle at 9:30 pm.
User: Thank you.
Agent: Thank you! 133
The system can learn how to efficiently
interact with users for task completion
REINFORCEMENT LEARNING SYSTEM
User: Can I get 2 tickets for the witch?
Agent: What date would you like to see it?
User: Tomorrow
Agent: Which theater do you prefer?
User: Regal meridian 16
Agent: What is your location?
User: Seattle
Agent: What time would you like to see it?
User: 9:30 pm.
Agent: Great - I was able to purchase 2 tickets for
you to see the witch tomorrow at regal meridian
16 theater in Seattle at 9:30 pm.
User: Thanks.
Agent: Thanks!
https://arxiv.org/abs/1703.01008
Material: http://deepdialogue.miulab.tw
134
Hierarchical RL for Composite Tasks (Peng et al., 2017)
134
Travel Planning
Actions
• Set of tasks that need to be fulfilled collectively!
• Build a dialog manager that satisfies cross-
subtask constraints (slot constraints)
• Temporally constructed goals
• hotel_check_in_time > departure_flight_time
• # flight_tickets = #people checking in the hotel
• hotel_check_out_time < return_flight_time
https://arxiv.org/abs/1704.03084 (Peng et al., EMNLP 2017)
Material: http://deepdialogue.miulab.tw
135
Hierarchical RL for Composite Tasks (Peng et al., 2017)
135
 The dialogue model makes decisions over two levels: a meta-controller and a controller
 The agent learns these policies simultaneously
 the policy π_g(g_t | s_t; θ_1) over the optimal sequence of goals to follow
 the policy π_{a,g}(a_t | g_t, s_t; θ_2) for each sub-goal g_t
Meta-Controller
Controller
(mitigate reward sparsity issues)
https://arxiv.org/abs/1704.03084 (Peng et al., EMNLP 2017)
Material: http://deepdialogue.miulab.tw
136
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
136
Material: http://deepdialogue.miulab.tw
137
Brain Signal for Understanding
137
 Misunderstanding detection by brain signal
 Green: listen to the correct answer
 Red: listen to the wrong answer
http://dl.acm.org/citation.cfm?id=2388695
Detecting misunderstanding via brain signal in order to correct the understanding results
Material: http://deepdialogue.miulab.tw
138
Video for Intent Understanding
138
Proactive (from camera)
I want to see a movie on TV!
Intent: turn_on_tv
May I turn on the TV for you?
Proactively understanding user intent to initiate the dialogues.
Material: http://deepdialogue.miulab.tw
139
App Behavior for Understanding
139
 Task: user intent prediction
 Challenge: language ambiguity
 User preference
✓ Some people prefer “Message” to “Email”
✓ Some people prefer “Ping” to “Text”
 App-level contexts
✓ “Message” is more likely to follow “Camera”
✓ “Email” is more likely to follow “Excel”
send to vivian
v.s.
Email? Message?
Communication
Considering behavioral patterns in history to model understanding for intent prediction.
http://dl.acm.org/citation.cfm?id=2820781
Material: http://deepdialogue.miulab.tw
140
Video Highlight Prediction Using Audience Chat Reactions
140
https://arxiv.org/pdf/1707.08559.pdf (Fu et al., EMNLP 2017)
Material: http://deepdialogue.miulab.tw
141
Video Highlight Prediction Using Audience Chat Reactions
141
https://arxiv.org/pdf/1707.08559.pdf (Fu et al., EMNLP 2017)
 Goal: predict highlight from the video
 Input : multi-modal and multi-lingual
(real time text commentary from fans)
 Output: tag if a frame part of a highlight
or not
Material: http://deepdialogue.miulab.tw
142
Evolution Roadmap
142
Dialogue breadth (coverage)
Dialogue depth (complexity)
What is influenza?
I’ve got a cold what do I do?
Tell me a joke.
I feel sad…
Material: http://deepdialogue.miulab.tw
143
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
143
Material: http://deepdialogue.miulab.tw
144
Evolution Roadmap
144
(Roadmap: single-domain systems, extended systems, multi-domain systems, open-domain systems)
Dialogue breadth (coverage)
Dialogue depth (complexity)
What is influenza?
I’ve got a cold what do I do?
Tell me a joke.
I feel sad…
Material: http://deepdialogue.miulab.tw
145
Intent Expansion (Chen et al., 2016)
 Transfer dialogue acts across domains
 Dialogue acts are similar for multiple domains
 Learning new intents by information from other domains
(Figure: a CDSSM embedding-generation model maps the training intents 1…K, e.g. <change_note> "adjust my note" and <change_setting> "volume turn down", into intent representations, and generates representations for new intents K+1, K+2, e.g. <change_calender>)
The dialogue act representations can be automatically learned for other domains
http://ieeexplore.ieee.org/abstract/document/7472838/
postpone my meeting to five pm
Material: http://deepdialogue.miulab.tw
146
 Zero-Shot Learning (Dauphin et al., 2014)
 Semantic utterance classification
 Use query click logs to define a task that makes the networks learn the meaning or
intent behind the queries
 The semantic features are the last hidden layer of the DNN
 The zero-shot discriminative embedding model combines H with the minimization of the entropy
of a zero-shot classifier
https://arxiv.org/abs/1401.0509
Material: http://deepdialogue.miulab.tw
147
Domain Adaptation for SLU (Kim et al., 2016)
 Frustratingly easy domain adaptation
 Novel neural approaches to domain adaptation
 Improve slot tagging on several domains
http://www.aclweb.org/anthology/C/C16/C16-1038.pdf
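"Frustratingly easy domain adaptation" refers to the classic feature-augmentation trick of Daumé III (2007), on which the cited neural approaches build; a minimal sketch of that classic version is below for clarity (the neural variants in the paper replicate network parameters rather than raw features).

```python
# Classic "frustratingly easy" feature augmentation (Daume III, 2007): each
# feature vector is copied into a shared block plus one block per domain, so
# a linear model can learn which weights to share and which to specialize.
import numpy as np

def augment(x, domain, domains=("source", "target")):
    """x: 1-D feature vector; returns [shared copy | one block per domain]."""
    blocks = [x]                                  # shared (general) copy
    for d in domains:
        blocks.append(x if d == domain else np.zeros_like(x))
    return np.concatenate(blocks)

x = np.array([1.0, 0.0, 2.0])
print(augment(x, "source"))   # [x | x | 0]
print(augment(x, "target"))   # [x | 0 | x]
```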
Material: http://deepdialogue.miulab.tw
148
Policy for Domain Adaptation (Gašić et al., 2015)
 Bayesian committee machine (BCM) enables estimated Q-function to
share knowledge across domains
Committee model: Q_R D_R, Q_H D_H, Q_L D_L (one Q-function estimator per domain)
The policy for a new domain can be boosted by the committee policy
http://ieeexplore.ieee.org/abstract/document/7404871/
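For reference, the Bayesian committee machine merges the members' Gaussian Q-value estimates by precision weighting; the sketch below follows the standard BCM combination formula (Tresp, 2000), with prior_var standing for the prior predictive variance, and the specific numbers are only a toy example of combining per-domain estimates.

```python
# Minimal sketch of the Bayesian committee machine combination rule used to
# merge per-domain Gaussian Q-value estimates (means mu_i, variances var_i);
# prior_var is the prior predictive variance. Gasic et al. apply this style of
# combination to GP-SARSA Q-functions so a new domain benefits from the committee.
import numpy as np

def bcm_combine(mus, variances, prior_var=1.0):
    mus, variances = np.asarray(mus, float), np.asarray(variances, float)
    m = len(mus)
    inv_var = np.sum(1.0 / variances) - (m - 1) / prior_var   # combined precision
    var = 1.0 / inv_var
    mu = var * np.sum(mus / variances)                        # precision-weighted mean
    return mu, var

# Toy Q estimates for one (state, action) from three domain-specific policies
print(bcm_combine(mus=[2.0, 1.5, 2.4], variances=[0.5, 1.0, 0.8], prior_var=10.0))
```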
Material: http://deepdialogue.miulab.tw
149
Outline
 Introduction
 Background Knowledge
 Neural Network Basics
 Reinforcement Learning
 Modular Dialogue System
 Spoken/Natural Language Understanding (SLU/NLU)
 Dialogue Management
 Dialogue State Tracking (DST)
 Dialogue Policy Optimization
 Natural Language Generation (NLG)
 Recent Trends and Challenges
 End-to-End Neural Dialogue System
 Multimodality
 Dialogue Breadth
 Dialogue Depth
149
Material: http://deepdialogue.miulab.tw
150
Evolution Roadmap
150
Knowledge-based systems
Common sense systems
Empathetic systems
Dialogue breadth (coverage)
Dialogue depth (complexity)
What is influenza?
I’ve got a cold what do I do?
Tell me a joke.
I feel sad…
Material: http://deepdialogue.miulab.tw
151
High-Level Intention for Dialogue Planning (Sun et al., 2016)
 High-level intention may span several domains
Schedule a lunch with Vivian.
find restaurant / check location / contact / play music
What kind of restaurants do you prefer?
The distance is …
Should I send the restaurant information to Vivian?
Users can interact via high-level descriptions and the system learns how to plan the dialogues
http://dl.acm.org/citation.cfm?id=2856818; http://www.lrec-conf.org/proceedings/lrec2016/pdf/75_Paper.pdf
Material: http://deepdialogue.miulab.tw
152
Empathy in Dialogue System (Fung et al., 2016)
 Embed an empathy module
 Recognize emotion using multimodality
 Generate emotion-aware responses
152
Emotion Recognizer (inputs: vision, speech, text)
https://arxiv.org/abs/1605.04072
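The paper's emotion recognizer fuses vision, speech, and text; as an illustrative baseline (an assumption, not the paper's architecture), a late-fusion scheme simply averages per-modality class probabilities, as sketched below.

```python
# Illustrative late-fusion baseline (an assumption, not the exact architecture
# from the paper): average per-modality emotion probabilities from separate
# vision / speech / text classifiers to obtain the fused prediction.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]

def late_fusion(prob_by_modality, weights=None):
    """prob_by_modality: dict modality -> probability vector over EMOTIONS."""
    P = np.stack(list(prob_by_modality.values()))
    w = np.ones(len(P)) / len(P) if weights is None else np.asarray(weights, float)
    fused = w @ P
    return fused / fused.sum()

probs = {
    "vision": np.array([0.10, 0.60, 0.10, 0.20]),
    "speech": np.array([0.05, 0.70, 0.15, 0.10]),
    "text":   np.array([0.20, 0.50, 0.10, 0.20]),
}
fused = late_fusion(probs)
print(EMOTIONS[int(np.argmax(fused))], fused)   # -> "sad" with these toy inputs
```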
Material: http://deepdialogue.miulab.tw
153
Visual Object Discovery through Dialogues (de Vries et al., 2017)
 Recognize objects using “Guess What?” game
 The task involves “spatial”, “visual”, “object taxonomy” and “interaction” reasoning
153
https://arxiv.org/pdf/1611.08481.pdf
Material: http://deepdialogue.miulab.tw
Conclusion
154
155
Summarized Challenges
155
Human-machine interfaces are a hot topic, but several components must be integrated!
Most state-of-the-art technologies are based on DNNs
• Require huge amounts of labeled data
• Several frameworks/models are available
Fast domain adaptation with scarce data + re-use of rules/knowledge
Handling reasoning
Data collection and analysis from unstructured data
Complex cascaded systems require high accuracy in each component to work well as a whole
Material: http://deepdialogue.miulab.tw
156
Brief Conclusions
 Introduce recent deep learning methods used in dialogue models
 Highlight main components of dialogue systems and new deep
learning architectures used for these components
 Talk about challenges and new avenues for current state-of-the-art
research
 Provide all materials online!
156
http://deepdialogue.miulab.tw
Material: http://deepdialogue.miulab.tw
THANKS FOR YOUR ATTENTION!
Q & A
Thanks to Tsung-Hsien Wen, Pei-Hao Su, Li Deng, Jianfeng Gao,
Sungjin Lee, Milica Gašić, Lihong Li, Xiujun Li, Abhinav Rastogi, Ankur
Bapna, Pararth Shah and Gokhan Tur for sharing their slides.
deepdialogue.miulab.tw

More Related Content

PDF
제 16회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [코끼리책방 팀] : 사용자 스크랩 내용 기반 도서 추천
PPTX
챗GPT기반의 하이터치교육.pptx
PDF
Natural Language Processing
PDF
Natural Language Processing Crash Course
PPTX
제 11회 보아즈(BOAZ) 빅데이터 컨퍼런스 - 코끼리(BOAZ) 사서의 도서 추천 솔루션
PPTX
Natural language processing
PPTX
Natural language processing
PDF
딥러닝 자연어처리 - RNN에서 BERT까지
제 16회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [코끼리책방 팀] : 사용자 스크랩 내용 기반 도서 추천
챗GPT기반의 하이터치교육.pptx
Natural Language Processing
Natural Language Processing Crash Course
제 11회 보아즈(BOAZ) 빅데이터 컨퍼런스 - 코끼리(BOAZ) 사서의 도서 추천 솔루션
Natural language processing
Natural language processing
딥러닝 자연어처리 - RNN에서 BERT까지

What's hot (20)

PDF
라이트브레인 UX/UI Trend 2022
PDF
제 16회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [#인스타툰 팀] : 해시태그 기반 인스타툰 추천 챗봇
PDF
Jeff Maruschek: How does RAG REALLY work?
PDF
생성인공지능둘러보기.pdf
PDF
Introduction to natural language processing
PPTX
Natural language processing
PDF
Natural Language Processing (NLP)
PPT
Natural Language Processing
PDF
제 15회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [리뷰의 재발견 팀] : 이커머스 리뷰 유용성 파악 및 필터링
PPTX
natural language processing help at myassignmenthelp.net
PDF
Types of AI Agents | Presentation | PPT
PPTX
Natural lanaguage processing
PDF
제 15회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [쇼미더뮤직 팀] : 텍스트 감정추출을 통한 노래 추천
PDF
UX Discovery_Metaverse_RightBrain_Seminar
PPTX
Natural Language Processing
PDF
Tech On Trend - Chatbots
PDF
제 16회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [하둡메이트 팀] : 하둡 설정 고도화 및 맵리듀스 모니터링
PPTX
Natural Language Processing
라이트브레인 UX/UI Trend 2022
제 16회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [#인스타툰 팀] : 해시태그 기반 인스타툰 추천 챗봇
Jeff Maruschek: How does RAG REALLY work?
생성인공지능둘러보기.pdf
Introduction to natural language processing
Natural language processing
Natural Language Processing (NLP)
Natural Language Processing
제 15회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [리뷰의 재발견 팀] : 이커머스 리뷰 유용성 파악 및 필터링
natural language processing help at myassignmenthelp.net
Types of AI Agents | Presentation | PPT
Natural lanaguage processing
제 15회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [쇼미더뮤직 팀] : 텍스트 감정추출을 통한 노래 추천
UX Discovery_Metaverse_RightBrain_Seminar
Natural Language Processing
Tech On Trend - Chatbots
제 16회 보아즈(BOAZ) 빅데이터 컨퍼런스 - [하둡메이트 팀] : 하둡 설정 고도화 및 맵리듀스 모니터링
Natural Language Processing
Ad

Similar to 2017 Tutorial - Deep Learning for Dialogue Systems (20)

PDF
One Day for Bot 一天搞懂聊天機器人
PDF
[系列活動] 一天搞懂對話機器人
PDF
Deep Learning for Dialogue Systems
PPTX
Deep Learning for Dialogue Modeling - NTHU
PDF
Chatbot的智慧與靈魂
PDF
Aibdconference chat bot for every product Maksym Volchenko
PPTX
Language Empowering Intelligent Assistants (CHT)
PDF
Realizing AI Conversational Bot
PDF
Li Deng at AI Frontiers: Three Generations of Spoken Dialogue Systems (Bots)
PDF
Towards Machine Comprehension of Spoken Content
PPTX
CHATBOT PPT-2.pptx
PDF
Watch your language, young man!
PDF
Conversational AI with Rasa - PyData Workshop
PDF
Trends of ICASSP 2022
PPTX
Using Chatbots to Assist Communication in Collaborative Networks: 19th IFIP W...
PDF
DeepPavlov 2019
PPT
Introduction to Natural Language Processing
PPT
AAMAS-2006 TANDEM Design Method (poster format)
PDF
Deprecating the state machine: building conversational AI with the Rasa stack...
PDF
Deprecating the state machine: building conversational AI with the Rasa stack
One Day for Bot 一天搞懂聊天機器人
[系列活動] 一天搞懂對話機器人
Deep Learning for Dialogue Systems
Deep Learning for Dialogue Modeling - NTHU
Chatbot的智慧與靈魂
Aibdconference chat bot for every product Maksym Volchenko
Language Empowering Intelligent Assistants (CHT)
Realizing AI Conversational Bot
Li Deng at AI Frontiers: Three Generations of Spoken Dialogue Systems (Bots)
Towards Machine Comprehension of Spoken Content
CHATBOT PPT-2.pptx
Watch your language, young man!
Conversational AI with Rasa - PyData Workshop
Trends of ICASSP 2022
Using Chatbots to Assist Communication in Collaborative Networks: 19th IFIP W...
DeepPavlov 2019
Introduction to Natural Language Processing
AAMAS-2006 TANDEM Design Method (poster format)
Deprecating the state machine: building conversational AI with the Rasa stack...
Deprecating the state machine: building conversational AI with the Rasa stack
Ad

More from MLReview (13)

PDF
Bayesian Non-parametric Models for Data Science using PyMC
PDF
Machine Learning and Counterfactual Reasoning for "Personalized" Decision- ...
PDF
Tutorial on Deep Generative Models
PDF
PixelGAN Autoencoders
PDF
Representing and comparing probabilities: Part 2
PDF
Representing and comparing probabilities
PDF
OPTIMIZATION AS A MODEL FOR FEW-SHOT LEARNING
PDF
Theoretical Neuroscience and Deep Learning Theory
PDF
Deep Learning for Semantic Composition
PDF
Near human performance in question answering?
PDF
Tutorial on Theory and Application of Generative Adversarial Networks
PDF
Real-time Edge-aware Image Processing with the Bilateral Grid
PDF
Yoav Goldberg: Word Embeddings What, How and Whither
Bayesian Non-parametric Models for Data Science using PyMC
Machine Learning and Counterfactual Reasoning for "Personalized" Decision- ...
Tutorial on Deep Generative Models
PixelGAN Autoencoders
Representing and comparing probabilities: Part 2
Representing and comparing probabilities
OPTIMIZATION AS A MODEL FOR FEW-SHOT LEARNING
Theoretical Neuroscience and Deep Learning Theory
Deep Learning for Semantic Composition
Near human performance in question answering?
Tutorial on Theory and Application of Generative Adversarial Networks
Real-time Edge-aware Image Processing with the Bilateral Grid
Yoav Goldberg: Word Embeddings What, How and Whither

Recently uploaded (20)

PPTX
cpcsea ppt.pptxssssssssssssssjjdjdndndddd
PDF
The scientific heritage No 166 (166) (2025)
PDF
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
PPTX
Taita Taveta Laboratory Technician Workshop Presentation.pptx
PDF
HPLC-PPT.docx high performance liquid chromatography
PPTX
Cell Membrane: Structure, Composition & Functions
PPTX
7. General Toxicologyfor clinical phrmacy.pptx
PDF
Placing the Near-Earth Object Impact Probability in Context
PDF
. Radiology Case Scenariosssssssssssssss
PPTX
DRUG THERAPY FOR SHOCK gjjjgfhhhhh.pptx.
PPTX
Introduction to Fisheries Biotechnology_Lesson 1.pptx
PDF
bbec55_b34400a7914c42429908233dbd381773.pdf
PDF
IFIT3 RNA-binding activity primores influenza A viruz infection and translati...
PPTX
2Systematics of Living Organisms t-.pptx
PPTX
Comparative Structure of Integument in Vertebrates.pptx
PPTX
Microbiology with diagram medical studies .pptx
PPT
The World of Physical Science, • Labs: Safety Simulation, Measurement Practice
PPTX
2. Earth - The Living Planet earth and life
PPTX
The KM-GBF monitoring framework – status & key messages.pptx
PDF
Mastering Bioreactors and Media Sterilization: A Complete Guide to Sterile Fe...
cpcsea ppt.pptxssssssssssssssjjdjdndndddd
The scientific heritage No 166 (166) (2025)
SEHH2274 Organic Chemistry Notes 1 Structure and Bonding.pdf
Taita Taveta Laboratory Technician Workshop Presentation.pptx
HPLC-PPT.docx high performance liquid chromatography
Cell Membrane: Structure, Composition & Functions
7. General Toxicologyfor clinical phrmacy.pptx
Placing the Near-Earth Object Impact Probability in Context
. Radiology Case Scenariosssssssssssssss
DRUG THERAPY FOR SHOCK gjjjgfhhhhh.pptx.
Introduction to Fisheries Biotechnology_Lesson 1.pptx
bbec55_b34400a7914c42429908233dbd381773.pdf
IFIT3 RNA-binding activity primores influenza A viruz infection and translati...
2Systematics of Living Organisms t-.pptx
Comparative Structure of Integument in Vertebrates.pptx
Microbiology with diagram medical studies .pptx
The World of Physical Science, • Labs: Safety Simulation, Measurement Practice
2. Earth - The Living Planet earth and life
The KM-GBF monitoring framework – status & key messages.pptx
Mastering Bioreactors and Media Sterilization: A Complete Guide to Sterile Fe...

2017 Tutorial - Deep Learning for Dialogue Systems

  • 1. Deep Learning for Dialogue Systemsdeepdialogue.miulab.tw YUN-NUNG (VIVIAN) CHEN ASLI CELIKYILMAZ DILEK HAKKANI-TÜR
  • 2. 2 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 2 Material: http://deepdialogue.miulab.tw Break
  • 4. 4 Early 1990s Early 2000s 2017 Multi-modal systems e.g., Microsoft MiPad, Pocket PC Keyword Spotting (e.g., AT&T) System: “Please say collect, calling card, person, third number, or operator” TV Voice Search e.g., Bing on Xbox Intent Determination (Nuance’s Emily™, AT&T HMIHY) User: “Uh…we want to move…we want to change our phone line from this house to another house” Task-specific argument extraction (e.g., Nuance, SpeechWorks) User: “I want to fly from Boston to New York next week.” Brief History of Dialogue Systems Apple Siri (2011) Google Now (2012) Facebook M & Bot (2015) Google Home (2016) Microsoft Cortana (2014) Amazon Alexa/Echo (2014) Google Assistant (2016) DARPA CALO Project Virtual Personal Assistants Material: http://deepdialogue.miulab.tw
  • 5. 5 Language Empowering Intelligent Assistant Apple Siri (2011) Google Now (2012) Facebook M & Bot (2015) Google Home (2016) Microsoft Cortana (2014) Amazon Alexa/Echo (2014) Google Assistant (2016) Apple HomePod (2017) Material: http://deepdialogue.miulab.tw
  • 6. 6 Why We Need?  Get things done  E.g. set up alarm/reminder, take note  Easy access to structured data, services and apps  E.g. find docs/photos/restaurants  Assist your daily schedule and routine  E.g. commute alerts to/from work  Be more productive in managing your work and personal life 6 Material: http://deepdialogue.miulab.tw
  • 7. 7 Why Natural Language?  Global Digital Statistics (2015 January) 7 Global Population 7.21B Active Internet Users 3.01B Active Social Media Accounts 2.08B Active Unique Mobile Users 3.65B The more natural and convenient input of devices evolves towards speech. Material: http://deepdialogue.miulab.tw
  • 8. 8 Spoken Dialogue System (SDS)  Spoken dialogue systems are intelligent agents that are able to help users finish tasks more efficiently via spoken interactions.  Spoken dialogue systems are being incorporated into various devices (smart-phones, smart TVs, in- car navigating system, etc). 8 JARVIS – Iron Man’s Personal Assistant Baymax – Personal Healthcare Companion Good dialogue systems assist users to access information conveniently and finish tasks efficiently. Material: http://deepdialogue.miulab.tw
  • 9. 9 App  Bot  A bot is responsible for a “single” domain, similar to an app 9 Users can initiate dialogues instead of following the GUI design Material: http://deepdialogue.miulab.tw
  • 10. 10 GUI v.s. CUI (Conversational UI) 10 https://github.com/enginebai/Movie-lol-android Material: http://deepdialogue.miulab.tw
  • 11. 11 GUI v.s. CUI (Conversational UI) Website/APP’s GUI Msg’s CUI Situation Navigation, no specific goal Searching, with specific goal Information Quantity More Less Information Precision Low High Display Structured Non-structured Interface Graphics Language Manipulation Click mainly use texts or speech as input Learning Need time to learn and adapt No need to learn Entrance App download Incorporated in any msg-based interface Flexibility Low, like machine manipulation High, like converse with a human 11 Material: http://deepdialogue.miulab.tw
  • 12. 12 Challenges  Variability in Natural Language  Robustness  Recall/Precision Trade-off  Meaning Representation  Common Sense, World Knowledge  Ability to Learn  Transparency 12 Material: http://deepdialogue.miulab.tw
  • 13. Two Branches of Bots  Personal assistant, helps users achieve a certain task  Combination of rules and statistical components  POMDP for spoken dialog systems (Williams and Young, 2007)  End-to-end trainable task-oriented dialogue system (Wen et al., 2016)  End-to-end reinforcement learning dialogue system (Li et al., 2017; Zhao and Eskenazi, 2016)  No specific goal, focus on natural responses  Using variants of seq2seq model  A neural conversation model (Vinyals and Le, 2015)  Reinforcement learning for dialogue generation (Li et al., 2016)  Conversational contextual cues for response ranking (AI-Rfou et al., 2016) 13 Task-Oriented Bot Chit-Chat Bot Material: http://deepdialogue.miulab.tw
  • 14. 14 Task-Oriented Dialogue System (Young, 2000) 14 Speech Recognition Language Understanding (LU) • Domain Identification • User Intent Detection • Slot Filling Dialogue Management (DM) • Dialogue State Tracking (DST) • Dialogue Policy Natural Language Generation (NLG) Hypothesis are there any action movies to see this weekend Semantic Frame request_movie genre=action, date=this weekend System Action/Policy request_location Text response Where are you located? Text Input Are there any action movies to see this weekend? Speech Signal Backend Action / Knowledge Providers http://rsta.royalsocietypublishing.org/content/358/1769/1389.short Material: http://deepdialogue.miulab.tw
  • 15. 15 Interaction Example 15 User Intelligent Agent Q: How does a dialogue system process this request? Good Taiwanese eating places include Din Tai Fung, Boiling Point, etc. What do you want to choose? I can help you go there. find a good eating place for taiwanese food Material: http://deepdialogue.miulab.tw
  • 16. 16 Task-Oriented Dialogue System (Young, 2000) 16 Speech Recognition Language Understanding (LU) • Domain Identification • User Intent Detection • Slot Filling Dialogue Management (DM) • Dialogue State Tracking (DST) • Dialogue Policy Natural Language Generation (NLG) Hypothesis are there any action movies to see this weekend Semantic Frame request_movie genre=action, date=this weekend System Action/Policy request_location Text response Where are you located? Text Input Are there any action movies to see this weekend? Speech Signal Backend Action / Knowledge Providers Material: http://deepdialogue.miulab.tw
  • 17. 17 1. Domain Identification Requires Predefined Domain Ontology 17 find a good eating place for taiwanese food User Organized Domain Knowledge (Database)Intelligent Agent Restaurant DB Taxi DB Movie DB Classification! Material: http://deepdialogue.miulab.tw
  • 18. 18 2. Intent Detection Requires Predefined Schema 18 find a good eating place for taiwanese food User Intelligent Agent Restaurant DB FIND_RESTAURANT FIND_PRICE FIND_TYPE : Classification! Material: http://deepdialogue.miulab.tw
  • 19. 19 3. Slot Filling Requires Predefined Schema find a good eating place for taiwanese food User Intelligent Agent 19 Restaurant DB Restaurant Rating Type Rest 1 good Taiwanese Rest 2 bad Thai : : : FIND_RESTAURANT rating=“good” type=“taiwanese” SELECT restaurant { rest.rating=“good” rest.type=“taiwanese” }Semantic Frame Sequence Labeling O O B-rating O O O B-type O Material: http://deepdialogue.miulab.tw
  • 20. 20 Task-Oriented Dialogue System (Young, 2000) 20 Speech Recognition Language Understanding (LU) • Domain Identification • User Intent Detection • Slot Filling Dialogue Management (DM) • Dialogue State Tracking (DST) • Dialogue Policy Natural Language Generation (NLG) Hypothesis are there any action movies to see this weekend Semantic Frame request_movie genre=action, date=this weekend System Action/Policy request_location Text response Where are you located? Text Input Are there any action movies to see this weekend? Speech Signal Backend Action / Knowledge Providers Material: http://deepdialogue.miulab.tw
  • 21. 21 State Tracking Requires Hand-Crafted States User Intelligent Agent find a good eating place for taiwanese food 21 location rating type loc, rating rating, type loc, type all i want it near to my office NULL Material: http://deepdialogue.miulab.tw
  • 22. 22 State Tracking Requires Hand-Crafted States User Intelligent Agent find a good eating place for taiwanese food 22 location rating type loc, rating rating, type loc, type all i want it near to my office NULL Material: http://deepdialogue.miulab.tw
  • 23. 23 State Tracking Handling Errors and Confidence User Intelligent Agent find a good eating place for taixxxx food 23 FIND_RESTAURANT rating=“good” type=“taiwanese” FIND_RESTAURANT rating=“good” type=“thai” FIND_RESTAURANT rating=“good” location rating type loc, rating rating, type loc, type all NULL ? ? rating=“good”, type=“thai” rating=“good”, type=“taiwanese” ? ? Material: http://deepdialogue.miulab.tw
  • 24. 24 Dialogue Policy for Agent Action  Inform(location=“Taipei 101”)  “The nearest one is at Taipei 101”  Request(location)  “Where is your home?”  Confirm(type=“taiwanese”)  “Did you want Taiwanese food?” 24 Material: http://deepdialogue.miulab.tw
  • 25. 25 Task-Oriented Dialogue System (Young, 2000) Speech Recognition Language Understanding (LU) • Domain Identification • User Intent Detection • Slot Filling Hypothesis are there any action movies to see this weekend Semantic Frame request_movie genre=action, date=this weekend System Action/Policy request_location Text Input Are there any action movies to see this weekend? Speech Signal Dialogue Management (DM) • Dialogue State Tracking (DST) • Dialogue Policy Backend Action / Knowledge Providers Natural Language Generation (NLG)Text response Where are you located? Material: http://deepdialogue.miulab.tw
  • 26. 26 Output / Natural Language Generation  Goal: generate natural language or GUI given the selected dialogue action for interactions  Inform(location=“Taipei 101”)  “The nearest one is at Taipei 101” v.s.  Request(location)  “Where is your home?” v.s.  Confirm(type=“taiwanese”)  “Did you want Taiwanese food?” v.s. 26 Material: http://deepdialogue.miulab.tw
  • 27. Background Knowledge27 Neural Network Basics Reinforcement Learning
  • 28. 28 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 28 Material: http://deepdialogue.miulab.tw
  • 29. 29 Machine Learning ≈ Looking for a Function  Speech Recognition  Image Recognition  Go Playing  Chat Bot  f  f  f  f cat “你好 (Hello) ” 5-5 (next move) “Where is Westin?” “The address is…” Material: http://deepdialogue.miulab.tw Given a large amount of data, the machine learns what the function f should be.
  • 30. 30 Machine Learning 30 Machine Learning Unsupervised Learning Supervised Learning Reinforcement Learning Deep learning is a type of machine learning approaches, called “neural networks”. Material: http://deepdialogue.miulab.tw
  • 31. 31 A Single Neuron z 1w 2w Nw …1x 2x Nx  b  z  z zbias y   z e z    1 1  Sigmoid function Activation function 1 w, b are the parameters of this neuron 31 Material: http://deepdialogue.miulab.tw
  • 32. 32 A Single Neuron z 1w 2w Nw… 1x 2x Nx  b bias y 1      5.0"2" 5.0"2" ynot yis A single neuron can only handle binary classification 32 MN RRf : Material: http://deepdialogue.miulab.tw
  • 33. 33 A Layer of Neurons  Handwriting digit classification MN RRf : A layer of neurons can handle multiple possible output, and the result depends on the max one … 1x 2x Nx  1  1y  … … “1” or not “2” or not “3” or not 2y 3y 10 neurons/10 classes Which one is max? Material: http://deepdialogue.miulab.tw
  • 34. 34 Deep Neural Networks (DNN)  Fully connected feedforward network 1x 2x …… Layer 1 …… 1y 2y …… Layer 2 …… Layer L …… …… …… Input Output MyNx vector x vector y Deep NN: multiple hidden layers MN RRf : Material: http://deepdialogue.miulab.tw
  • 35. 35 Recurrent Neural Network (RNN) http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/ : tanh, ReLU time RNN can learn accumulated sequential information (time-series) Material: http://deepdialogue.miulab.tw
  • 36. 36 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 36 Material: http://deepdialogue.miulab.tw
  • 37. 37 Reinforcement Learning  RL is a general purpose framework for decision making  RL is for an agent with the capacity to act  Each action influences the agent’s future state  Success is measured by a scalar reward signal  Goal: select actions to maximize future reward Material: http://deepdialogue.miulab.tw
  • 38. 38 Scenario of Reinforcement Learning Agent learns to take actions to maximize expected reward. Environment Observation ot Action at Reward rt If win, reward = 1 If loss, reward = -1 Otherwise, reward = 0 Next Move Material: http://deepdialogue.miulab.tw
  • 39. 39 Supervised v.s. Reinforcement  Supervised  Reinforcement 39 Hello ☺ Agent …… Agent ……. ……. …… Bad “Hello” Say “Hi” “Bye bye” Say “Good bye” Learning from teacher Learning from critics Material: http://deepdialogue.miulab.tw
  • 40. 40 Sequential Decision Making  Goal: select actions to maximize total future reward  Actions may have long-term consequences  Reward may be delayed  It may be better to sacrifice immediate reward to gain more long-term reward 40 Material: http://deepdialogue.miulab.tw
  • 41. 41 Deep Reinforcement Learning Environment Observation Action Reward Function Input Function Output Used to pick the best function … … … DNN Material: http://deepdialogue.miulab.tw
  • 42. 42 Reinforcing Learning  Start from state s0  Choose action a0  Transit to s1 ~ P(s0, a0)  Continue…  Total reward: Goal: select actions that maximize the expected total reward Material: http://deepdialogue.miulab.tw
  • 43. 43 Reinforcement Learning Approach  Policy-based RL  Search directly for optimal policy  Value-based RL  Estimate the optimal value function  Model-based RL  Build a model of the environment  Plan (e.g. by lookahead) using model is the policy achieving maximum future reward is maximum value achievable under any policy Material: http://deepdialogue.miulab.tw
  • 45. 45 Task-Oriented Dialogue System (Young, 2000) 45 Speech Recognition Language Understanding (LU) • Domain Identification • User Intent Detection • Slot Filling Dialogue Management (DM) • Dialogue State Tracking (DST) • Dialogue Policy Natural Language Generation (NLG) Hypothesis are there any action movies to see this weekend Semantic Frame request_movie genre=action, date=this weekend System Action/Policy request_location Text response Where are you located? Text Input Are there any action movies to see this weekend? Speech Signal Backend Action / Knowledge Providers http://rsta.royalsocietypublishing.org/content/358/1769/1389.short Material: http://deepdialogue.miulab.tw
  • 46. 46 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 46 Material: http://deepdialogue.miulab.tw
  • 47. 47 Language Understanding (LU)  Pipelined 47 1. Domain Classification 2. Intent Classification 3. Slot Filling Material: http://deepdialogue.miulab.tw
  • 48. LU – Domain/Intent Classification • Given a collection of utterances ui with labels ci, D= {(u1,c1),…,(un,cn)} where ci ∊ C, train a model to estimate labels for new utterances uk. Mainly viewed as an utterance classification task 48 find me a cheap taiwanese restaurant in oakland Movies Restaurants Sports Weather Music … Find_movie Buy_tickets Find_restaurant Book_table Find_lyrics … Material: http://deepdialogue.miulab.tw
  • 49. 49 DNN for Domain/Intent Classification – I (Sarikaya et al., 2011)  Deep belief nets (DBN)  Unsupervised training of weights  Fine-tuning by back-propagation  Compared to MaxEnt, SVM, and boosting 49 http://ieeexplore.ieee.org/abstract/document/5947649/ Material: http://deepdialogue.miulab.tw
  • 50. 50 DNN for Domain/Intent Classification – II (Tur et al., 2012; Deng et al., 2012)  Deep convex networks (DCN)  Simple classifiers are stacked to learn complex functions  Feature selection of salient n-grams  Extension to kernel-DCN 50 http://ieeexplore.ieee.org/abstract/document/6289054/; http://ieeexplore.ieee.org/abstract/document/6424224/ Material: http://deepdialogue.miulab.tw
  • 51. 51 DNN for Domain/Intent Classification – III (Ravuri & Stolcke, 2015) 51 https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/RNNLM_addressee.pdf Intent decision after reading all words performs better  RNN and LSTMs for utterance classification Material: http://deepdialogue.miulab.tw
  • 52. 52 DNN for Dialogue Act Classification – IV (Lee & Dernoncourt, 2016) 52  RNN and CNNs for dialogue act classification Material: http://deepdialogue.miulab.tw
  • 53. LU – Slot Filling 53 flights from Boston to New York today O O B-city O B-city I-city O O O B-dept O B-arrival I-arrival B-date As a sequence tagging task • Given a collection tagged word sequences, S={((w1,1,w1,2,…, w1,n1), (t1,1,t1,2,…,t1,n1)), ((w2,1,w2,2,…,w2,n2), (t2,1,t2,2,…,t2,n2)) …} where ti ∊ M, the goal is to estimate tags for a new word sequence. flights from Boston to New York today Entity Tag Slot Tag Material: http://deepdialogue.miulab.tw
  • 54. 54 Recurrent Neural Nets for Slot Tagging – I (Yao et al, 2013; Mesnil et al, 2015)  Variations: a. RNNs with LSTM cells b. Input, sliding window of n-grams c. Bi-directional LSTMs 𝑤0 𝑤1 𝑤2 𝑤 𝑛 ℎ0 𝑓 ℎ1 𝑓 ℎ2 𝑓 ℎ 𝑛 𝑓 ℎ0 𝑏 ℎ1 𝑏 ℎ2 𝑏 ℎ 𝑛 𝑏 𝑦0 𝑦1 𝑦2 𝑦𝑛 (b) LSTM-LA (c) bLSTM 𝑦0 𝑦1 𝑦2 𝑦𝑛 𝑤0 𝑤1 𝑤2 𝑤 𝑛 ℎ0 ℎ1 ℎ2 ℎ 𝑛 (a) LSTM 𝑦0 𝑦1 𝑦2 𝑦𝑛 𝑤0 𝑤1 𝑤2 𝑤 𝑛 ℎ0 ℎ1 ℎ2 ℎ 𝑛 http://131.107.65.14/en-us/um/people/gzweig/Pubs/Interspeech2013RNNLU.pdf; http://dl.acm.org/citation.cfm?id=2876380 Material: http://deepdialogue.miulab.tw
  • 55. 55 Recurrent Neural Nets for Slot Tagging – II (Kurata et al., 2016; Simonnet et al., 2015)  Encoder-decoder networks  Leverages sentence level information  Attention-based encoder-decoder  Use of attention (as in MT) in the encoder-decoder network  Attention is estimated using a feed- forward network with input: ht and st at time t 𝑦0 𝑦1 𝑦2 𝑦𝑛 𝑤 𝑛 𝑤2 𝑤1 𝑤0 ℎ 𝑛 ℎ2 ℎ1 ℎ0 𝑤0 𝑤1 𝑤2 𝑤 𝑛 𝑦0 𝑦1 𝑦2 𝑦𝑛 𝑤0 𝑤1 𝑤2 𝑤 𝑛 ℎ0 ℎ1 ℎ2 ℎ 𝑛 𝑠0 𝑠1 𝑠2 𝑠 𝑛 ci ℎ0 ℎ 𝑛… http://www.aclweb.org/anthology/D16-1223 Material: http://deepdialogue.miulab.tw
  • 56. 56 Recurrent Neural Nets for Slot Tagging – III (Jaech et al., 2016; Tafforeau et al., 2016)  Multi-task learning  Goal: exploit data from domains/tasks with a lot of data to improve ones with less data  Lower layers are shared across domains/tasks  Output layer is specific to task 56 https://arxiv.org/abs/1604.00117; http://www.sensei-conversation.eu/wp-content/uploads/2016/11/favre_is2016b.pdf Material: http://deepdialogue.miulab.tw
  • 57. 57 Joint Segmentation and Slot Tagging (Zhai et al., 2017)  Encoder that segments  Decoder that tags the segments 57 https://arxiv.org/pdf/1701.04027.pdf Material: http://deepdialogue.miulab.tw
  • 58. ht- 1 ht+ 1 ht W W W W taiwanese B-type U food U please U V O V O V hT+1 EOS U FIND_REST V Slot Filling Intent Prediction Joint Semantic Frame Parsing Sequence- based (Hakkani-Tur et al., 2016) • Slot filling and intent prediction in the same output sequence Parallel (Liu and Lane, 2016) • Intent prediction and slot filling are performed in two branches 58 https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/IS16_MultiJoint.pdf; https://arxiv.org/abs/1609.01454 Material: http://deepdialogue.miulab.tw
  • 59. 59 Contextual LU 59 just sent email to bob about fishing this weekend O O O O B-contact_name O B-subject I-subject I-subject U S I send_emailD communication  send_email(contact_name=“bob”, subject=“fishing this weekend”) are we going to fish this weekend U1 S2  send_email(message=“are we going to fish this weekend”) send email to bob U2  send_email(contact_name=“bob”) B-message I-message I-message I-message I-message I-message I-message B-contact_nameS1 Domain Identification  Intent Prediction  Slot Filling Material: http://deepdialogue.miulab.tw
  • 60. 60 Contextual LU  User utterances are highly ambiguous in isolation Cascal, for 6. #people time ? Book a table for 10 people tonight. Which restaurant would you like to book a table for? Restaurant Booking Material: http://deepdialogue.miulab.tw
  • 61. 61 Contextual LU (Bhargava et al., 2013; Hori et al, 2015)  Leveraging contexts  Used for individual tasks  Seq2Seq model  Words are input one at a time, tags are output at the end of each utterance  Extension: LSTM with speaker role dependent layers 61 https://www.merl.com/publications/docs/TR2015-134.pdf Material: http://deepdialogue.miulab.tw
  • 62. 62 End-to-End Memory Networks (Sukhbaatar et al, 2015) U: “i d like to purchase tickets to see deepwater horizon” S: “for which theatre” U: “angelika” S: “you want them for angelika theatre?” U: “yes angelika” S: “how many tickets would you like ?” U: “3 tickets for saturday” S: “What time would you like ?” U: “Any time on saturday is fine” S: “okay , there is 4:10 pm , 5:40 pm and 9:20 pm” U: “Let’s do 5:40” m0 mi mn-1 u Material: http://deepdialogue.miulab.tw
  • 63. 63 E2E MemNN for Contextual LU (Chen et al., 2016) 63 u Knowledge Attention Distributionpi mi Memory Representation Weighted Sum h ∑ Wkg o Knowledge Encoding Representation history utterances {xi} current utterance c Inner Product Sentence Encoder RNNin x1 x2 xi… Contextual Sentence Encoder x1 x2 xi… RNNmem slot tagging sequence y ht-1 ht V V W W W wt-1 wt yt-1 yt U UM M 1. Sentence Encoding 2. Knowledge Attention 3. Knowledge Encoding Idea: additionally incorporating contextual knowledge during slot tagging  track dialogue states in a latent way RNN Tagger https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/IS16_ContextualSLU.pdf Material: http://deepdialogue.miulab.tw
  • 64. 64 Analysis of Attention U: “i d like to purchase tickets to see deepwater horizon” S: “for which theatre” U: “angelika” S: “you want them for angelika theatre?” U: “yes angelika” S: “how many tickets would you like ?” U: “3 tickets for saturday” S: “What time would you like ?” U: “Any time on saturday is fine” S: “okay , there is 4:10 pm , 5:40 pm and 9:20 pm” U: “Let’s do 5:40” 0.69 0.13 0.16 Material: http://deepdialogue.miulab.tw
  • 65. 65 Sequential Dialogue Encoder Network (Bapna et al., 2017)  Past and current turn encodings input to a feed forward network 65 Bapna et.al., SIGDIAL 2017 Material: http://deepdialogue.miulab.tw
  • 66. 66 Structural LU (Chen et al., 2016)  K-SAN: prior knowledge as a teacher 66 Knowledge Encoding Sentence Encoding Inner Product mi Knowledge Attention Distribution pi Encoded Knowledge Representation Weighted Sum ∑ Knowledge- Guided Representation slot tagging sequence knowledge-guided structure {xi} showme theflights fromseattleto sanfrancisco ROOT Input Sentence W W W W wt-1 yt-1 U Mwt U wt+1 U V yt V yt+1 V MM RNN Tagger Knowledge Encoding Module http://arxiv.org/abs/1609.03286 Material: http://deepdialogue.miulab.tw
  • 67. 67 Structural LU (Chen et al., 2016)  Sentence structural knowledge stored as memory 67 Semantics (AMR Graph) show me the flights from seattle to san francisco ROOT 1. 3. 4. 2. show you flight I 1. 2. 4. city city Seattle San Francisco 3. Sentence s show me the flights from seattle to san francisco Syntax (Dependency Tree) http://arxiv.org/abs/1609.03286 Material: http://deepdialogue.miulab.tw
  • 68. 68 Structural LU (Chen et al., 2016)  Sentence structural knowledge stored as memory http://arxiv.org/abs/1609.03286 Using less training data with K-SAN allows the model pay the similar attention to the salient substructures that are important for tagging. Material: http://deepdialogue.miulab.tw
  • 69. 69 LU Importance (Li et al., 2017)  Compare different types of LU errors http://arxiv.org/abs/1703.07055 Slot filling is more important than intent detection in language understanding Sensitivity to Intent Error Sensitivity to Slot Error Material: http://deepdialogue.miulab.tw
  • 70. 70 LU Evaluation  Metrics  Sub-sentence-level: intent accuracy, slot F1  Sentence-level: whole frame accuracy 70 Material: http://deepdialogue.miulab.tw
  • 71. 71 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 71 Material: http://deepdialogue.miulab.tw
  • 72. 72 Elements of Dialogue Management 72(Figure from Gašić) Dialogue State Tracking Material: http://deepdialogue.miulab.tw
  • 73. 73 Dialogue State Tracking (DST)  Maintain a probabilistic distribution instead of a 1-best prediction for better robustness 73 Incorrect for both! Material: http://deepdialogue.miulab.tw
  • 74. 74 Dialogue State Tracking (DST)  Maintain a probabilistic distribution instead of a 1-best prediction for better robustness to SLU errors or ambiguous input 74 How can I help you? Book a table at Sumiko for 5 How many people? 3 Slot Value # people 5 (0.5) time 5 (0.5) Slot Value # people 3 (0.8) time 5 (0.8) Material: http://deepdialogue.miulab.tw
  • 75. 75 Multi-Domain Dialogue State Tracking (DST)  A full representation of the system's belief of the user's goal at any point during the dialogue  Used for making API calls 75 Material: http://deepdialogue.miulab.tw Do you wanna take Angela to go see a movie tonight? Sure, I will be home by 6. Let's grab dinner before the movie. How about some Mexican? Let's go to Vive Sol and see Inferno after that. Angela wants to watch the Trolls movie. Ok. Lets catch the 8 pm show. Inferno 6 pm 7 pm 2 3 11/15/16 Vive SolRestaurant MexicanCuisine 6:30 pm 7 pm 11/15/16Date Time Restaurants 7:30 pm Century 16 Trolls 8 pm 9 pm Movies
  • 76. 76 Dialog State Tracking Challenge (DSTC) (Williams et al. 2013, Henderson et al. 2014, Henderson et al. 2014, Kim et al. 2016, Kim et al. 2016) Challenge Type Domain Data Provider Main Theme DSTC1 Human-Machine Bus Route CMU Evaluation Metrics DSTC2 Human-Machine Restaurant U. Cambridge User Goal Changes DSTC3 Human-Machine Tourist Information U. Cambridge Domain Adaptation DSTC4 Human-Human Tourist Information I2R Human Conversation DSTC5 Human-Human Tourist Information I2R Language Adaptation Material: http://deepdialogue.miulab.tw
  • 77. 77 NN-Based DST (Henderson et al., 2013; Henderson et al., 2014; Mrkšić et al., 2015; Mrkšić et al., 2016) 77(Figure from Wen et al, 2016) http://www.anthology.aclweb.org/W/W13/W13-4073.pdf; https://arxiv.org/abs/1506.07190; https://arxiv.org/abs/1606.03777 Material: http://deepdialogue.miulab.tw
  • 78. 78 Neural Belief Tracker (Mrkšić et al., 2016) 78 https://arxiv.org/abs/1606.03777 Material: http://deepdialogue.miulab.tw
  • 79. 79 Multichannel Tracker (Shi et al., 2016) 79  Training a multichannel CNN for each slot  Chinese character CNN  Chinese word CNN  English word CNN https://arxiv.org/abs/1701.06247 Material: http://deepdialogue.miulab.tw
  • 80. 80 DST Evaluation  Dialogue State Tracking Challenges  DSTC2-3, human-machine  DSTC4-5, human-human  Metric  Tracked state accuracy with respect to user goal  Recall/Precision/F-measure individual slots 80 Material: http://deepdialogue.miulab.tw
  • 81. 81 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 81 Material: http://deepdialogue.miulab.tw
  • 82. 82 Elements of Dialogue Management 82(Figure from Gašić) Dialogue Policy Optimization Material: http://deepdialogue.miulab.tw
  • 83. 83 Dialogue Policy Optimization  Dialogue management in a RL framework 83 U s e r Reward R Observation OAction A Environment Agent Natural Language Generation Language Understanding Dialogue Manager Slides credited by Pei-Hao Su Optimized dialogue policy selects the best action that can maximize the future reward. Correct rewards are a crucial factor in dialogue policy training Material: http://deepdialogue.miulab.tw
  • 84. 84 Reward for RL ≅ Evaluation for System  Dialogue is a special RL task  Human involves in interaction and rating (evaluation) of a dialogue  Fully human-in-the-loop framework  Rating: correctness, appropriateness, and adequacy - Expert rating high quality, high cost - User rating unreliable quality, medium cost - Objective rating Check desired aspects, low cost 84 Material: http://deepdialogue.miulab.tw
  • 85. 85 Reinforcement Learning for Dialogue Policy Optimization 85 Language understanding Language (response) generation Dialogue Policy 𝑎 = 𝜋(𝑠) Collect rewards (𝑠, 𝑎, 𝑟, 𝑠’) Optimize 𝑄(𝑠, 𝑎) User input (o) Response 𝑠 𝑎 Type of Bots State Action Reward Social ChatBots Chat history System Response # of turns maximized; Intrinsically motivated reward InfoBots (interactive Q/A) User current question + Context Answers to current question Relevance of answer; # of turns minimized Task-Completion Bots User current input + Context System dialogue act w/ slot value (or API calls) Task success rate; # of turns minimized Goal: develop a generic deep RL algorithm to learn dialogue policy for all bot categories Material: http://deepdialogue.miulab.tw
  • 86. 86 Dialogue Reinforcement Learning Signal  Typical reward function  -1 for per turn penalty  Large reward at completion if successful  Typically requires domain knowledge ✔ Simulated user ✔ Paid users (Amazon Mechanical Turk) ✖ Real users ||| … ﹅ 86 The user simulator is usually required for dialogue system training before deployment Material: http://deepdialogue.miulab.tw
  • 87. 87 Neural Dialogue Manager (Li et al., 2017)  Deep Q-network for training DM policy  Input: current semantic frame observation, database returned results  Output: system action Semantic Frame request_movie genre=action, date=this weekend System Action/Policy request_location DQN-based Dialogue Management (DM) Simulated User Backend DB https://arxiv.org/abs/1703.01008 Material: http://deepdialogue.miulab.tw
  • 88. 88 SL + RL for Sample Efficiency (Su et al., 2017)  Issue about RL for DM  slow learning speed  cold start  Solutions  Sample-efficient actor-critic  Off-policy learning with experience replay  Better gradient update  Utilizing supervised data  Pretrain the model with SL and then fine-tune with RL  Mix SL and RL data during RL learning  Combine both 88 https://arxiv.org/pdf/1707.00130.pdfSu et.al., SIGDIAL 2017 Material: http://deepdialogue.miulab.tw
  • 89. 89 Online Training (Su et al., 2015; Su et al., 2016)  Policy learning from real users  Infer reward directly from dialogues (Su et al., 2015)  User rating (Su et al., 2016)  Reward modeling on user binary success rating Reward Model Success/Fail Embedding Function Dialogue Representation Reinforcement SignalQuery rating http://www.anthology.aclweb.org/W/W15/W15-46.pdf; https://www.aclweb.org/anthology/P/P16/P16-1230.pdf Material: http://deepdialogue.miulab.tw
  • 90. 90 Interactive RL for DM (Shah et al., 2016) 90 Immediate Feedback https://research.google.com/pubs/pub45734.html Use a third agent for providing interactive feedback to the DM Material: http://deepdialogue.miulab.tw
  • 91. 91 Interpreting Interactive Feedback (Shah et al., 2016) 91 https://research.google.com/pubs/pub45734.html Material: http://deepdialogue.miulab.tw
  • 92. 92 Dialogue Management Evaluation  Metrics  Turn-level evaluation: system action accuracy  Dialogue-level evaluation: task success rate, reward 92 Material: http://deepdialogue.miulab.tw
  • 93. 93 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breath  Dialogue Depth 93 Material: http://deepdialogue.miulab.tw
  • 94. 94 Natural Language Generation (NLG)  Mapping semantic frame into natural language inform(name=Seven_Days, foodtype=Chinese) Seven Days is a nice Chinese restaurant 94 Material: http://deepdialogue.miulab.tw
  • 95. 95 Template-Based NLG  Define a set of rules to map frames to NL 95 Pros: simple, error-free, easy to control Cons: time-consuming, poor scalability Semantic Frame Natural Language confirm() “Please tell me more about the product your are looking for.” confirm(area=$V) “Do you want somewhere in the $V?” confirm(food=$V) “Do you want a $V restaurant?” confirm(food=$V,area=$W) “Do you want a $V restaurant in the $W.” Material: http://deepdialogue.miulab.tw
  • 96. 96 Plan-Based NLG (Walker et al., 2002)  Divide the problem into pipeline  Statistical sentence plan generator (Stent et al., 2009)  Statistical surface realizer (Dethlefs et al., 2013; Cuayáhuitl et al., 2014; …) Inform( name=Z_House, price=cheap ) Z House is a cheap restaurant. Pros: can model complex linguistic structures Cons: heavily engineered, require domain knowledge Sentence Plan Generator Sentence Plan Reranker Surface Realizer syntactic tree Material: http://deepdialogue.miulab.tw
  • 97. 97 Class-Based LM NLG (Oh and Rudnicky, 2000)  Class-based language modeling  NLG by decoding 97 Pros: easy to implement/ understand, simple rules Cons: computationally inefficient Classes: inform_area inform_address … request_area request_postcode http://dl.acm.org/citation.cfm?id=1117568 Material: http://deepdialogue.miulab.tw
  • 98. 98 Phrase-Based NLG (Mairesse et al, 2010) Semantic DBN Phrase DBN Charlie Chan is a Chinese Restaurant near Cineworld in the centre d d Inform(name=Charlie Chan, food=Chinese, type= restaurant, near=Cineworld, area=centre) 98 Pros: efficient, good performance Cons: require semantic alignments realization phrase semantic stack http://dl.acm.org/citation.cfm?id=1858838 Material: http://deepdialogue.miulab.tw
  • 99. 99 RNN-Based LM NLG (Wen et al., 2015) <BOS> SLOT_NAME serves SLOT_FOOD . <BOS> Din Tai Fung serves Taiwanese . delexicalisation Inform(name=Din Tai Fung, food=Taiwanese) 0, 0, 1, 0, 0, …, 1, 0, 0, …, 1, 0, 0, 0, 0, 0… dialogue act 1-hot representation SLOT_NAME serves SLOT_FOOD . <EOS> Slot weight tying conditioned on the dialogue act Input Output http://www.anthology.aclweb.org/W/W15/W15-46.pdf#page=295 Material: http://deepdialogue.miulab.tw
  • 100. 100 Handling Semantic Repetition  Issue: semantic repetition  Din Tai Fung is a great Taiwanese restaurant that serves Taiwanese.  Din Tai Fung is a child friendly restaurant, and also allows kids.  Deficiency in either model or decoding (or both)  Mitigation  Post-processing rules (Oh & Rudnicky, 2000)  Gating mechanism (Wen et al., 2015)  Attention (Mei et al., 2016; Wen et al., 2015) 100 Material: http://deepdialogue.miulab.tw
  • 101. 101  Original LSTM cell  Dialogue act (DA) cell  Modify Ct Semantic Conditioned LSTM (Wen et al., 2015) DA cell LSTM cell Ct it ft ot rt ht dtdt-1 xt xt ht-1 xt ht-1 xt ht-1 xt ht- 1 ht-1 Inform(name=Seven_Days, food=Chinese) 0, 0, 1, 0, 0, …, 1, 0, 0, …, 1, 0, 0, … dialog act 1-hot representation d0 101 Idea: using gate mechanism to control the generated semantics (dialogue act/slots) http://www.aclweb.org/anthology/D/D15/D15-1199.pdf Material: http://deepdialogue.miulab.tw
  • 102. 102 Structural NLG (Dušek and Jurčíček, 2016)  Goal: NLG based on the syntax tree  Encode trees as sequences  Seq2Seq model for generation 102 https://www.aclweb.org/anthology/P/P16/P16-2.pdf#page=79 Material: http://deepdialogue.miulab.tw
  • 103. 103 Contextual NLG (Dušek and Jurčíček, 2016)  Goal: adapting users’ way of speaking, providing context- aware responses  Context encoder  Seq2Seq model 103 https://www.aclweb.org/anthology/W/W16/W16-36.pdf#page=203 Material: http://deepdialogue.miulab.tw
  • 104. 104 Controlled Text Generation (Hu et al., 2017)  Idea: NLG based on generative adversarial network (GAN) framework  c: targeted sentence attributes https://arxiv.org/pdf/1703.00955.pdf Material: http://deepdialogue.miulab.tw
  • 105. 105 NLG Evaluation  Metrics  Subjective: human judgement (Stent et al., 2005)  Adequacy: correct meaning  Fluency: linguistic fluency  Readability: fluency in the dialogue context  Variation: multiple realizations for the same concept  Objective: automatic metrics  Word overlap: BLEU (Papineni et al, 2002), METEOR, ROUGE  Word embedding based: vector extrema, greedy matching, embedding average 105 There is a gap between human perception and automatic metrics Material: http://deepdialogue.miulab.tw
  • 107. 107 Dialogue System Evaluation 107  Dialogue model evaluation  Crowd sourcing  User simulator  Response generator evaluation  Word overlap metrics  Embedding based metrics Material: http://deepdialogue.miulab.tw
  • 108. 108 Crowdsourcing for Dialogue System Evaluation (Yang et al., 2012) 108 http://www-scf.usc.edu/~zhaojuny/docs/SDSchapter_final.pdf The normalized mean scores of Q2 and Q5 for approved ratings in each category. A higher score maps to a higher level of task success Material: http://deepdialogue.miulab.tw
  • 109. 109 User Simulation  Goal: generate natural and reasonable conversations to enable reinforcement learning for exploring the policy space  Approach  Rule-based crafted by experts (Li et al., 2016)  Learning-based (Schatzmann et al., 2006; El Asri et al., 2016, Crook and Marin, 2017) Dialogue Corpus Simulated User Real User Dialogue Management (DM) • Dialogue State Tracking (DST) • Dialogue Policy Interaction keeps a list of its goals and actions randomly generates an agenda updates its list of goals and adds new ones Material: http://deepdialogue.miulab.tw
  • 110. 110 Elements of User Simulation Error Model • Recognition error • LU error Dialogue State Tracking (DST) System dialogue acts Reward Backend Action / Knowledge Providers Dialogue Policy Optimization Dialogue Management (DM) User Model Reward Model User Simulation Distribution over user dialogue acts (semantic frames) The error model enables the system to maintain the robustness Material: http://deepdialogue.miulab.tw
  • 111. 111 Rule-Based Simulator for RL Based System (Li et al., 2016) 111  rule-based simulator + collected data  starts with sets of goals, actions, KB, slot types  publicly available simulation framework  movie-booking domain: ticket booking and movie seeking  provide procedures to add and test own agent http://arxiv.org/abs/1612.05688 Material: http://deepdialogue.miulab.tw
  • 112. 112 Model-Based User Simulators  Bi-gram models (Levin et.al. 2000)  Graph-based models (Scheffler and Young, 2000)  Data Driven Simulator (Jung et.al., 2009)  Neural Models (deep encoder-decoder) 112 Material: http://deepdialogue.miulab.tw
  • 113. 113 Data-Driven Simulator (Jung et.al., 2009) 113  Three step process 1) User intention simulator Current discourse status (t-1) User’s current semantic frame (t-1) Current discourse status (t) User’s current semantic frame (t) Current discourse status User’s current semantic frame request+search_loc (*) compute all possible semantic frame given previous turn info (*) randomly select one possible semantic frame features (DD+DI) Material: http://deepdialogue.miulab.tw
  • 114. 114 Data-Driven Simulator (Jung et.al., 2009) 114  Three step process 1) User intention simulator 2) User utterance simulator request+search_loc Given a list of POS tags associated with the semantic frame, using LM+Rules they generate the user utterance. I want to go to the city hall PRP VB TO VB TO [loc_name] Material: http://deepdialogue.miulab.tw
  • 115. 115 Data-Driven Simulator (Jung et.al., 2009) 115  Three step process: 1) User intention simulator 2) User utterance simulator 3) ASR channel simulator  Evaluate the generated sentences using BLUE-like measures against the reference utterances collected from humans (with the same goal) Material: http://deepdialogue.miulab.tw
  • 116. 116 Seq2Seq User Simulation (El Asri et al., 2016)  Seq2Seq trained from dialogue data  Input: ci encodes contextual features, such as the previous system action, consistency between user goal and machine provided values  Output: a dialogue act sequence form the user  Extrinsic evaluation for policy https://arxiv.org/abs/1607.00070 Material: http://deepdialogue.miulab.tw
  • 117. 117 Seq2Seq User Simulation (Crook and Marin, 2017)  Seq2Seq trained from dialogue data  No labeled data  Trained on just human to machine conversations Material: http://deepdialogue.miulab.tw
  • 118. 118 User Simulator for Dialogue Evaluation Measures •whether constrained values specified by users can be understood by the system •agreement percentage of system/user understandings over the entire dialog (averaging all turns) Understanding Ability •Number of dialogue turns •Ratio between the dialogue turns (larger is better) Efficiency •an explicit confirmation for an uncertain user utterance is an appropriate system action •providing information based on misunderstood user requirements Action Appropriateness Material: http://deepdialogue.miulab.tw
  • 119. 119 How NOT to Evaluate Dialog System (Liu et al., 2017)  How to evaluate the quality of the generated response ?  Specifically investigated for chat-bots  Crucial for task-oriented tasks as well  Metrics:  Word overlap metrics, e.g., BLEU, METEOR, ROUGE, etc.  Embeddings based metrics, e.g., contextual/meaning representation between target and candidate https://arxiv.org/pdf/1603.08023.pdf Material: http://deepdialogue.miulab.tw
  • 120. 120 Dialogue Response Evaluation (Lowe et al., 2017) Towards an Automatic Turing Test  Problems of existing automatic evaluation  can be biased  correlate poorly with human judgements of response quality  using word overlap may be misleading  Solution  collect a dataset of accurate human scores for a variety of dialogue responses (e.g., coherent/incoherent, relevant/irrelevant, etc.)  use this dataset to train an automatic dialogue evaluation model – learn to compare the reference to candidate responses!  Use an RNN to predict scores by comparing against human scores! Context of Conversation Speaker A: Hey, what do you want to do tonight? Speaker B: Why don’t we go see a movie? Model Response Nah, let’s do something active. Reference Response Yeah, the film about Turing looks great! Material: http://deepdialogue.miulab.tw
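A simplified sketch of the scoring idea follows: encode the context, reference, and candidate with an RNN, score the candidate through learned bilinear comparisons against both, and regress onto the collected human scores. The dimensions, GRU encoder, and random inputs are assumptions for illustration, not the exact model of Lowe et al. (2017).

```python
import torch
import torch.nn as nn

class DialogueEvaluator(nn.Module):
    """Sketch of a learned response evaluator; sizes and data are toy assumptions."""
    def __init__(self, vocab_size=1000, emb_dim=64, hid_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.M = nn.Parameter(torch.randn(hid_dim, hid_dim) * 0.01)  # context vs. candidate
        self.N = nn.Parameter(torch.randn(hid_dim, hid_dim) * 0.01)  # reference vs. candidate

    def encode(self, token_ids):
        _, h = self.rnn(self.emb(token_ids))
        return h[-1]                      # last hidden state as the sentence encoding

    def forward(self, context, reference, candidate):
        c, r, x = self.encode(context), self.encode(reference), self.encode(candidate)
        # Higher score when the candidate matches both the context and the reference.
        return (c * (x @ self.M)).sum(-1) + (r * (x @ self.N)).sum(-1)

model = DialogueEvaluator()
ctx, ref, cand = (torch.randint(0, 1000, (1, 12)) for _ in range(3))
score = model(ctx, ref, cand)            # would be trained with MSE against human scores
print(score.item())
```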
  • 121. End-to-End Learning for Dialogues Multimodality Dialogue Breadth Dialogue Depth Recent Trends and Challenges 121
  • 122. 122 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Evaluation  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breadth  Dialogue Depth Material: http://deepdialogue.miulab.tw
  • 123. 123 ChitChat Hierarchical Seq2Seq (Serban et al., 2016)  Learns to generate dialogues from offline dialogs  No state, action, intent, slot, etc. http://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/11957 Material: http://deepdialogue.miulab.tw
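A minimal sketch of the hierarchical encoder-decoder idea behind this line of work is shown below: a word-level RNN encodes each turn, a turn-level RNN tracks the dialogue context, and a decoder generates the next turn. The GRU cells, sizes, and toy batch are assumptions, not the configuration used by Serban et al.

```python
import torch
import torch.nn as nn

class HRED(nn.Module):
    """Sketch of a hierarchical encoder-decoder; all sizes are illustrative."""
    def __init__(self, vocab=1000, emb=64, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.utt_enc = nn.GRU(emb, hid, batch_first=True)   # word-level encoder
        self.ctx_enc = nn.GRU(hid, hid, batch_first=True)   # turn-level encoder
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, turns, target):
        # turns: (batch, n_turns, n_words); target: (batch, n_words)
        b, n, w = turns.shape
        _, h = self.utt_enc(self.emb(turns.view(b * n, w)))
        turn_vecs = h[-1].view(b, n, -1)                     # one vector per turn
        _, ctx = self.ctx_enc(turn_vecs)                     # dialogue context state
        dec_out, _ = self.decoder(self.emb(target), ctx)     # condition decoding on context
        return self.out(dec_out)                             # logits over next words

model = HRED()
logits = model(torch.randint(0, 1000, (2, 3, 10)), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # (2, 8, 1000)
```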
  • 124. 124 ChitChat Hierarchical Seq2Seq (Serban et al., 2017)  A hierarchical seq2seq model with a Gaussian latent variable (capturing high-level aspects such as topic or sentiment) for generating dialogues https://arxiv.org/abs/1605.06069 Material: http://deepdialogue.miulab.tw
  • 125. 125 Knowledge Grounded Neural Conv. Model (Ghazvininejad et al., 2017) 125 Material: http://deepdialogue.miulab.tw https://arxiv.org/abs/1702.01932
  • 126. 126 E2E Joint NLU and DM (Yang et al., 2017)  Errors from DM can be propagated to NLU for regularization + robustness 126  Frame accuracy (DM / NLU): Baseline (CRF+SVMs) 7.7 / 33.1; Pipeline-BLSTM 12.0 / 36.4; Joint Model 22.8 / 37.4  Both DM and NLU performance (frame accuracy) are improved https://arxiv.org/abs/1612.00913 Material: http://deepdialogue.miulab.tw
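A minimal sketch of the joint modelling idea: a shared BLSTM encoder feeds both a slot-tagging head (NLU) and a system-action head (DM), so the two losses regularize each other. The vocabulary, label sets, and loss weighting below are assumptions, not the architecture details of Yang et al. (2017).

```python
import torch
import torch.nn as nn

class JointNLUDM(nn.Module):
    """Sketch of joint NLU + DM with a shared encoder; all sizes are illustrative."""
    def __init__(self, vocab=1000, n_slots=20, n_actions=10, emb=64, hid=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.slot_head = nn.Linear(2 * hid, n_slots)      # per-token slot tags (NLU)
        self.action_head = nn.Linear(2 * hid, n_actions)  # next system action (DM)

    def forward(self, tokens):
        out, _ = self.enc(self.emb(tokens))
        return self.slot_head(out), self.action_head(out[:, -1])

model = JointNLUDM()
tokens = torch.randint(0, 1000, (4, 12))
slot_logits, action_logits = model(tokens)
# Joint training: total loss = slot-tagging loss + system-action loss.
print(slot_logits.shape, action_logits.shape)  # (4, 12, 20), (4, 10)
```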
  • 127. 127 E2E Supervised Dialogue System (Wen et al., 2016) 127  [Architecture figure: the user input “Can I have korean” feeds an Intent Network (delexicalized as “Can I have <v.food>”) and a Belief Tracker that maintains distributions over slot values (Korean 0.7, British 0.2, French 0.1); a Database Operator issues the query (MySQL: “Select * where food=Korean”) and returns a DB pointer over matching entities (Seven Days, Curry Prince, Nirala, Royal Standard, Little Seoul); a Policy Network combines these representations (zt, pt, xt, qt); and a Generation Network produces the response “<v.name> serves great <v.food>.”] https://arxiv.org/abs/1604.04562 Material: http://deepdialogue.miulab.tw
  • 128. 128 E2E MemNN for Dialogues (Bordes et al., 2016)  Split dialogue system actions into subtasks  API issuing  API updating  Option displaying  Information informing https://arxiv.org/abs/1605.07683 Material: http://deepdialogue.miulab.tw
  • 129. 129 E2E RL-Based KB-InfoBot (Dhingra et al., 2017)  Example dialogue (user goal: Movie=?; Actor=Bill Murray; Release Year=1993): User: Find me the Bill Murray’s movie. Agent (KB-InfoBot): When was it released? User: I think it came out in 1993. Agent: Groundhog Day is a Bill Murray movie which came out in 1993.  Entity-centric knowledge base (Movie / Actor / Release Year): Groundhog Day / Bill Murray / 1993; Australia / Nicole Kidman / X; Mad Max: Fury Road / X / 2015 129  Idea: differentiable database for propagating the gradients http://www.aclweb.org/anthology/P/P17/P17-1045.pdf Material: http://deepdialogue.miulab.tw
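The sketch below illustrates the "soft" lookup behind a differentiable database: instead of a hard SQL match, the belief distributions over slot values are combined into a posterior over KB rows, so gradients can flow through the lookup. The tiny KB, belief values, and smoothing constant are illustrative assumptions.

```python
import numpy as np

# Toy entity-centric KB mirroring the slide; None marks a missing (X) cell.
kb = [
    {"movie": "Groundhog Day", "actor": "bill murray", "year": "1993"},
    {"movie": "Australia", "actor": "nicole kidman", "year": None},
    {"movie": "Mad Max: Fury Road", "actor": None, "year": "2015"},
]

beliefs = {  # per-slot distributions produced by a belief tracker (toy values)
    "actor": {"bill murray": 0.8, "nicole kidman": 0.2},
    "year": {"1993": 0.7, "2015": 0.3},
}

def row_posterior(kb, beliefs, unknown_prob=0.1):
    """Combine slot-value beliefs into a normalized posterior over KB rows."""
    scores = []
    for row in kb:
        p = 1.0
        for slot, dist in beliefs.items():
            value = row[slot]
            # Missing cells get a small smoothing mass rather than a hard zero.
            p *= dist.get(value, 0.0) if value is not None else unknown_prob
        scores.append(p)
    scores = np.array(scores)
    return scores / scores.sum() if scores.sum() > 0 else scores

print(row_posterior(kb, beliefs))  # highest mass on the Groundhog Day row
```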
  • 130. 130 E2E RL-Based System (Zhao and Eskenazi, 2016) 130  Joint learning  NLU, DST, Dialogue Policy  Deep RL for training  Deep Q-network  Deep recurrent network  [Figure comparing Baseline, RL, and Hybrid-RL] http://www.aclweb.org/anthology/W/W16/W16-36.pdf Material: http://deepdialogue.miulab.tw
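As a reminder of the training signal used in such deep-RL dialogue agents, here is a minimal deep Q-learning update in PyTorch: a network scores system actions given a state representation and is regressed toward the one-step TD target. The state size, action set, and random transition batch are toy assumptions, not the setup of Zhao and Eskenazi (2016).

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 32, 6, 0.99

# Online Q-network and a frozen target network (toy architectures).
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optim = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One batch of simulated transitions (s, a, r, s', done).
s, s_next = torch.randn(8, STATE_DIM), torch.randn(8, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (8,))
r = torch.randn(8)
done = torch.zeros(8)

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)            # Q(s, a)
with torch.no_grad():
    target = r + GAMMA * (1 - done) * target_net(s_next).max(1).values
loss = nn.functional.mse_loss(q_sa, target)                      # TD error
optim.zero_grad(); loss.backward(); optim.step()
print(loss.item())
```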
  • 131. 131 E2E LSTM-Based Dialogue Control (Williams and Zweig, 2016) 131  Idea: an LSTM maps from raw dialogue history directly to a distribution over system actions  Developers can provide software including business rules & programmatic APIs  LSTM can take actions in the real world on behalf of the user  The LSTM can be optimized using SL or RL https://arxiv.org/abs/1606.01269 Material: http://deepdialogue.miulab.tw
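A minimal sketch of this idea: an LSTM reads features of the dialogue history, outputs a distribution over a fixed set of system actions, and developer-supplied business rules mask out actions that are not allowed in the current state. The feature size, action set, and mask below are assumptions for illustration.

```python
import torch
import torch.nn as nn

N_FEATURES, N_ACTIONS = 32, 6

class LSTMDialogueControl(nn.Module):
    """Sketch: dialogue-history features -> distribution over system actions."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, 64, batch_first=True)
        self.head = nn.Linear(64, N_ACTIONS)

    def forward(self, history_feats, action_mask):
        out, _ = self.lstm(history_feats)        # (batch, turns, 64)
        logits = self.head(out[:, -1])           # score actions at the latest turn
        logits = logits.masked_fill(~action_mask, float("-inf"))  # business-rule mask
        return torch.softmax(logits, dim=-1)     # distribution over allowed actions

policy = LSTMDialogueControl()
feats = torch.randn(1, 5, N_FEATURES)            # 5 turns of hand-crafted features
mask = torch.tensor([[True, True, False, True, False, True]])  # allowed actions
print(policy(feats, mask))
```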
  • 132. 132 E2E Task-Completion Bot (TC-Bot) (Li et al., 2017)  [Architecture figure: a user simulator (user goal + user agenda modeling) interacts with the end-to-end neural dialogue system; Language Understanding (LU) tags each user utterance with IOB slots and an intent over time steps t-2, t-1, t; Dialogue Management (DM) maintains the semantic frame (e.g., request_movie, genre=action, date=this weekend) and selects the system action/policy (e.g., request_location); the user replies with a dialogue act (e.g., inform(location=San Francisco)); and Natural Language Generation (NLG) realizes the system response word by word (w0 w1 w2 ... EOS)]  Text input: “Are there any action movies to see this weekend?”  Idea: supervised learning for each component and reinforcement learning for end-to-end training of the neural dialogue system 132 https://arxiv.org/abs/1703.01008 Material: http://deepdialogue.miulab.tw
  • 133. 133 E2E Task-Completion Bot (TC-Bot) (Li et al., 2017)  User goal: Two tickets for “the witch” tomorrow 9:30 PM at regal meridian 16, Seattle. RULE BASED SYSTEM User: Can I get 2 tickets for the witch? Agent: Which theater do you want? User: Regal meridian 16 Agent: What time would you like to see it? User: 9:30 pm Agent: What date would you like to see it? User: Tomorrow! Agent: How many people are you? User: Just two adults. Agent: What is your location? User: Seattle! Agent: What kind of video format do you like? User: Oh, any kind is okay. Agent: Do you have any price constraint? User: No, I don’t care. Agent: Great - I was able to purchase 2 tickets for you to see the witch tomorrow at regal meridian 16 theater in Seattle at 9:30 pm. User: Thank you. Agent: Thank you! 133 The system can learn how to efficiently interact with users for task completion REINFORCEMENT LEARNING SYSTEM User: Can I get 2 tickets for the witch? Agent: What date would you like to see it? User: Tomorrow Agent: Which theater do you prefer? User: Regal meridian 16 Agent: What is your location? User: Seattle Agent: What time would you like to see it? User: 9:30 pm. Agent: Great - I was able to purchase 2 tickets for you to see the witch tomorrow at regal meridian 16 theater in Seattle at 9:30 pm. User: Thanks. Agent: Thanks! https://arxiv.org/abs/1703.01008 Material: http://deepdialogue.miulab.tw
  • 134. 134 Hierarchical RL for Composite Tasks (Peng et al., 2017) 134 Travel Planning Actions • Set of tasks that need to be fulfilled collectively! • Build a dialog manager that satisfies cross-subtask constraints (slot constraints) • Temporally constructed goals • hotel_check_in_time > departure_flight_time • # flight_tickets = # people checking in the hotel • hotel_check_out_time < return_flight_time https://arxiv.org/abs/1704.03084  Peng et al., EMNLP 2017 Material: http://deepdialogue.miulab.tw
  • 135. 135 Hierarchical RL for Composite Tasks (Peng et al., 2017) 135  The dialog model makes decisions over two levels: meta-controller and controller  The agent learns these policies simultaneously  the top-level policy π_g(g_t | s_t; θ_1) selects the optimal sequence of goals to follow  the low-level policy π_{a,g}(a_t | g_t, s_t; θ_2) selects actions for each sub-goal g_t  (mitigates reward sparsity issues) https://arxiv.org/abs/1704.03084  Peng et al., EMNLP 2017 Material: http://deepdialogue.miulab.tw
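The two-level decision loop can be sketched as follows: a meta-controller picks the next sub-goal, and a controller picks primitive dialogue actions until that sub-goal terminates; the controller receives an intrinsic reward and the meta-controller the extrinsic task reward. The sub-goals, actions, and termination test are illustrative assumptions, not the learned policies of Peng et al. (2017).

```python
import random

# Toy sub-goals and per-goal dialogue actions for a composite travel task.
SUBGOALS = ["book_flight", "book_hotel"]
ACTIONS = {"book_flight": ["request_date", "request_destination", "confirm_flight"],
           "book_hotel": ["request_checkin", "request_nights", "confirm_hotel"]}

def meta_controller(state, done_goals):
    # Placeholder for pi_g(g_t | s_t; theta_1): here, pick an unfinished sub-goal.
    remaining = [g for g in SUBGOALS if g not in done_goals]
    return random.choice(remaining)

def controller(state, goal):
    # Placeholder for pi_{a,g}(a_t | g_t, s_t; theta_2): actions for the sub-goal.
    return random.choice(ACTIONS[goal])

state, done_goals = {}, set()
while len(done_goals) < len(SUBGOALS):
    goal = meta_controller(state, done_goals)
    for _ in range(3):                      # controller acts until the sub-goal terminates
        action = controller(state, goal)
        print(goal, "->", action)           # intrinsic reward would be assigned here
    done_goals.add(goal)                    # extrinsic task reward arrives at the end
```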
  • 136. 136 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breadth  Dialogue Depth 136 Material: http://deepdialogue.miulab.tw
  • 137. 137 Brain Signal for Understanding 137  Misunderstanding detection by brain signal  Green: listen to the correct answer  Red: listen to the wrong answer http://dl.acm.org/citation.cfm?id=2388695 Detecting misunderstanding via brain signal in order to correct the understanding results Material: http://deepdialogue.miulab.tw
  • 138. 138 Video for Intent Understanding 138 Proactive (from camera) I want to see a movie on TV! Intent: turn_on_tv May I turn on the TV for you? Proactively understanding user intent to initiate the dialogues. Material: http://deepdialogue.miulab.tw
  • 139. 139 App Behavior for Understanding 139  Task: user intent prediction  Challenge: language ambiguity  User preference ✓ Some people prefer “Message” to “Email” ✓ Some people prefer “Ping” to “Text”  App-level contexts ✓ “Message” is more likely to follow “Camera” ✓ “Email” is more likely to follow “Excel” send to vivian vs. Email? Message? Communication Considering behavioral patterns in history to model understanding for intent prediction. http://dl.acm.org/citation.cfm?id=2820781 Material: http://deepdialogue.miulab.tw
  • 140. 140 Video Highlight Prediction Using Audience Chat Reactions 140 https://arxiv.org/pdf/1707.08559.pdf  Fu et al., EMNLP 2017 Material: http://deepdialogue.miulab.tw
  • 141. 141 Video Highlight Prediction Using Audience Chat Reactions 141 https://arxiv.org/pdf/1707.08559.pdf  Fu et al., EMNLP 2017  Goal: predict highlights from the video  Input: multi-modal and multi-lingual (real-time text commentary from fans)  Output: tag whether a frame is part of a highlight or not Material: http://deepdialogue.miulab.tw
  • 142. 142 Evolution Roadmap 142 Dialogue breadth (coverage) Dialogue depth (complexity) What is influenza? I’ve got a cold what do I do? Tell me a joke. I feel sad… Material: http://deepdialogue.miulab.tw
  • 143. 143 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breadth  Dialogue Depth 143 Material: http://deepdialogue.miulab.tw
  • 144. 144 Evolution Roadmap 144 Single-domain systems Extended systems Multi-domain systems Open-domain systems Dialogue breadth (coverage) Dialogue depth (complexity) What is influenza? I’ve got a cold what do I do? Tell me a joke. I feel sad… Material: http://deepdialogue.miulab.tw
  • 145. 145 Intent Expansion (Chen et al., 2016)  Transfer dialogue acts across domains  Dialogue acts are similar for multiple domains  Learning new intents by information from other domains  [Figure: a CDSSM embedding-generation model maps training intents 1..K (e.g., <change_note> “adjust my note”, <change_setting> “volume turn down”) and new intents K+1, K+2 (e.g., <change_calendar>) into the same intent representation space, so an utterance such as “postpone my meeting to five pm” can be matched to the new intent]  The dialogue act representations can be automatically learned for other domains http://ieeexplore.ieee.org/abstract/document/7472838/ Material: http://deepdialogue.miulab.tw
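The matching step can be sketched as follows: both seen and new intents are represented as embeddings (random stand-ins below for CDSSM vectors), and an utterance is assigned to the intent whose embedding is most similar by cosine similarity, so a new intent can be recognized without retraining. All vectors here are hypothetical, so the printed prediction only demonstrates the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for CDSSM intent embeddings; a new intent is embedded the same way.
intent_vecs = {
    "change_note": rng.normal(size=32),
    "change_setting": rng.normal(size=32),
    "change_calendar": rng.normal(size=32),   # newly added intent, no retraining
}

def embed_utterance(text: str) -> np.ndarray:
    """Stand-in for the CDSSM utterance encoder (deterministic random vector)."""
    rng_u = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng_u.normal(size=32)

def predict_intent(text: str) -> str:
    """Pick the intent whose embedding is closest to the utterance embedding."""
    u = embed_utterance(text)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(intent_vecs, key=lambda k: cos(u, intent_vecs[k]))

print(predict_intent("postpone my meeting to five pm"))
```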
  • 146. 146 Zero-Shot Learning (Dauphin et al., 2014)  Semantic utterance classification  Use query click logs to define a task that makes the networks learn the meaning or intent behind the queries  The semantic features are the last hidden layer of the DNN  The zero-shot discriminative embedding model combines these semantic features with the minimization of the entropy of a zero-shot classifier https://arxiv.org/abs/1401.0509 Material: http://deepdialogue.miulab.tw
  • 147. 147 Domain Adaptation for SLU (Kim et al., 2016)  Frustratingly easy domain adaptation  Novel neural approaches to domain adaptation  Improve slot tagging on several domains http://www.aclweb.org/anthology/C/C16/C16-1038.pdf Material: http://deepdialogue.miulab.tw
  • 148. 148 Policy for Domain Adaptation (Gašić et al., 2015)  Bayesian committee machine (BCM) enables the estimated Q-function to share knowledge across domains  [Figure: a committee model combines Q-functions trained on different domain datasets (Q_R/D_R, Q_H/D_H, Q_L/D_L)]  The policy for a new domain can be boosted by the committee policy http://ieeexplore.ieee.org/abstract/document/7404871/ Material: http://deepdialogue.miulab.tw
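A small sketch of how a Bayesian committee machine can combine Gaussian Q-value estimates from several domain-specific policies into a single estimate for a new domain is given below. The member means/variances and the prior variance are toy assumptions; in the paper each committee member is a GP-based Q-function.

```python
import numpy as np

def bcm_combine(means, variances, prior_variance=1.0):
    """Combine Gaussian predictions from committee members into one estimate."""
    means, variances = np.asarray(means), np.asarray(variances)
    m = len(means)
    # Combined precision: sum of member precisions, corrected for the shared prior.
    precision = np.sum(1.0 / variances) - (m - 1) / prior_variance
    variance = 1.0 / precision
    mean = variance * np.sum(means / variances)
    return mean, variance

# Toy Q-value estimates for one (belief, action) pair from three source domains.
q_mean, q_var = bcm_combine(means=[0.4, 0.6, 0.5], variances=[0.2, 0.3, 0.25])
print(q_mean, q_var)
```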
  • 149. 149 Outline  Introduction  Background Knowledge  Neural Network Basics  Reinforcement Learning  Modular Dialogue System  Spoken/Natural Language Understanding (SLU/NLU)  Dialogue Management  Dialogue State Tracking (DST)  Dialogue Policy Optimization  Natural Language Generation (NLG)  Recent Trends and Challenges  End-to-End Neural Dialogue System  Multimodality  Dialogue Breadth  Dialogue Depth 149 Material: http://deepdialogue.miulab.tw
  • 150. 150 Evolution Roadmap 150 Knowledge-based systems Common-sense systems Empathetic systems Dialogue breadth (coverage) Dialogue depth (complexity) What is influenza? I’ve got a cold what do I do? Tell me a joke. I feel sad… Material: http://deepdialogue.miulab.tw
  • 151. 151 High-Level Intention for Dialogue Planning (Sun et al., 2016)  High-level intention may span several domains Schedule a lunch with Vivian. [Planned sub-tasks: find restaurant → check location → contact → play music] What kind of restaurants do you prefer? The distance is … Should I send the restaurant information to Vivian? Users can interact via high-level descriptions and the system learns how to plan the dialogues http://dl.acm.org/citation.cfm?id=2856818; http://www.lrec-conf.org/proceedings/lrec2016/pdf/75_Paper.pdf Material: http://deepdialogue.miulab.tw
  • 152. 152 Empathy in Dialogue System (Fung et al., 2016)  Embed an empathy module  Recognize emotion using multimodality  Generate emotion-aware responses 152  [Figure: an emotion recognizer with vision, speech, and text inputs] https://arxiv.org/abs/1605.04072 Material: http://deepdialogue.miulab.tw
  • 153. 153 Visual Object Discovery through Dialogues (Vries et al., 2017)  Recognize objects using “Guess What?” game  Includes “spatial”, “visual”, “object taxonomy” and “interaction” 153 https://arxiv.org/pdf/1611.08481.pdf Material: http://deepdialogue.miulab.tw
  • 155. 155 Summarized Challenges 155 Human-machine interfaces are a hot topic, but several components must be integrated! Most state-of-the-art technologies are based on DNNs • Require huge amounts of labeled data • Several frameworks/models are available Fast domain adaptation with scarce data + re-use of rules/knowledge Handling reasoning Data collection and analysis from un-structured data Complex cascaded systems require high accuracy to work well as a whole Material: http://deepdialogue.miulab.tw
  • 156. 156 Brief Conclusions  Introduce recent deep learning methods used in dialogue models  Highlight main components of dialogue systems and new deep learning architectures used for these components  Talk about challenges and new avenues for current state-of-the-art research  Provide all materials online! 156 http://deepdialogue.miulab.tw Material: http://deepdialogue.miulab.tw
  • 157. THANKS FOR YOUR ATTENTION! Q & A Thanks to Tsung-Hsien Wen, Pei-Hao Su, Li Deng, Jianfeng Gao, Sungjin Lee, Milica Gašić, Lihong Li, Xiujun Li, Abhinav Rastogi, Ankur Bapna, Pararth Shah and Gokhan Tur for sharing their slides. deepdialogue.miulab.tw