Outline
Introduction
Formal Language Theory
Stochastic Language Models (SLM)
N-gram Language Models
N-gram Smoothing
Class N-grams
Adaptive Language Models
Language Model Evaluation
4. Components of ASR System
[Block diagram: speech signal → spectral analysis → feature extraction → search and match → recognised words. The decoder draws on three knowledge sources (representation, constraints/knowledge): acoustic models, lexical models and language models.]
5. Why do we need language models?
Bayes’ rule:

P(words|sounds) = P(sounds|words) P(words) / P(sounds)
where P(words) is the a priori probability of the word sequence (the language model).

We could use non-informative priors, P(words) = 1/N, but. . .
6. Branching Factor
- if we have N words in the dictionary
- at every word boundary we have to consider N equally likely alternatives
- N can be in the order of millions
[Diagram: from each word, the recognition network branches to word1, word2, . . . , wordN]
8. Language Models in ASR
We want to:
1. limit the branching factor in the recognition network
2. augment and complete the acoustic probabilities
- we are only interested in whether the sequence of words is grammatically plausible or not
- this kind of grammar is integrated into the recognition network prior to decoding
9. Language Models in Dialogue Systems
- we want to assign a class to each word (noun, verb, attribute. . . parts of speech)
- parsing is usually performed on the output of a speech recogniser
The grammar is used twice in a Dialogue System!!
10. Language Models in ASR
- small vocabulary: often a formal grammar specified by hand (example: loop of digits as in the HTK exercise)
- large vocabulary: often a stochastic grammar estimated from data
12. Formal Language Theory
grammar: formal specification of the permissible structures of the language
parser: algorithm that analyses a sentence and determines whether its structure complies with the grammar
14. Chomsky’s formal grammar
Noam Chomsky: linguist, philosopher, . . .
G = (V, T, P, S)
where
V: set of non-terminal constituents
T: set of terminals (lexical items)
P: set of production rules
S: start symbol
16. Example
S = sentence
V = {NP (noun phrase), NP1, VP (verb phrase), NAME, ADJ, V (verb), N (noun)}
T = {Mary, person, loves, that, . . . }
P = {S → NP VP, NP → NAME, NP → ADJ NP1, NP1 → N, VP → V NP, NAME → Mary, V → loves, N → person, ADJ → that}
[Parse tree: (S (NP (NAME Mary)) (VP (V loves) (NP (ADJ that) (NP1 (N person)))))]
18. Chomsky’s hierarchy
Greek letters: sequence of terminals or non-terminals
Upper-case Latin letters: single non-terminal
Lower-case Latin letters: single terminal

Type                        Constraints                              Automaton
Phrase structure grammar    α → β (the most general grammar)         Turing machine
Context-sensitive grammar   length of α ≤ length of β                Linear bounded
Context-free grammar        A → β (equivalent to A → w, A → BC)      Push-down
Regular grammar             A → w, A → wB                            Finite-state

Context-free and regular grammars are used in practice.
19. Are languages context-free?
Mostly true, with exceptions.
Swiss German:
“. . . das mer d’chind em Hans es huus lönd häfte aastriiche”
Word-by-word:
“. . . that we the children Hans the house let help paint”
Translation:
“. . . that we let the children help Hans paint the house”
20. Parsers
- assign each word in a sentence to a part of speech
- originally developed for programming languages (no ambiguities)
- only available for context-free and regular grammars
- top-down: start with S and generate rules until you reach the words (terminal symbols)
- bottom-up: start with the words and work your way up until you reach S
30. Example: Top-down parser
Parts of speech              Rules
S
NP VP                        S → NP VP
NAME VP                      NP → NAME
Mary VP                      NAME → Mary
Mary V NP                    VP → V NP
Mary loves NP                V → loves
Mary loves ADJ NP1           NP → ADJ NP1
Mary loves that NP1          ADJ → that
Mary loves that N            NP1 → N
Mary loves that person       N → person
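Not from the slides, just an illustration: a minimal top-down (recursive-descent) recogniser in Python for the toy grammar from the Example slide. The `GRAMMAR` table and function names are my own; a real parser would also build the derivation tree and handle left recursion and ambiguity.

```python
# Toy grammar from the Example slide: non-terminals map to lists of right-hand sides.
GRAMMAR = {
    "S":    [["NP", "VP"]],
    "NP":   [["NAME"], ["ADJ", "NP1"]],
    "NP1":  [["N"]],
    "VP":   [["V", "NP"]],
    "NAME": [["Mary"]],
    "V":    [["loves"]],
    "N":    [["person"]],
    "ADJ":  [["that"]],
}

def derive(symbol, words, pos):
    """Yield every position up to which `symbol` can derive words[pos:...]."""
    if symbol not in GRAMMAR:                       # terminal: must match the input word
        if pos < len(words) and words[pos] == symbol:
            yield pos + 1
        return
    for rhs in GRAMMAR[symbol]:                     # top-down: expand each production
        ends = [pos]
        for sym in rhs:
            ends = [e2 for e1 in ends for e2 in derive(sym, words, e1)]
        yield from ends

def accepts(sentence):
    words = sentence.split()
    return any(end == len(words) for end in derive("S", words, 0))

print(accepts("Mary loves that person"))            # True
print(accepts("Mary loves person"))                 # False
```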
40. Example: Bottom-up parser
Parts of speech              Rules
Mary loves that person
NAME loves that person       NAME → Mary
NAME V that person           V → loves
NAME V ADJ person            ADJ → that
NAME V ADJ N                 N → person
NP V ADJ N                   NP → NAME
NP V ADJ NP1                 NP1 → N
NP V NP                      NP → ADJ NP1
NP VP                        VP → V NP
S                            S → NP VP
41. Top-down vs bottom-up parsers
- Top-down characteristics:
  + very predictive
  + only considers grammatical combinations
  – may predict constituents that have no match in the text
- Bottom-up characteristics:
  + checks the input text only once
  + suitable for robust language processing
  – may build trees that do not lead to a full parse
- All in all, similar performance
46. Chart parsing (dynamic programming)
[Chart over “Mary loves that person”. Completed and partial (dotted) edges per word:
Mary:    Name → Mary,   NP → Name,   S → NP ° VP
loves:   V → loves,   VP → V ° NP
that:    ADJ → that,   NP → ADJ ° NP1
person:  N → person,   NP1 → N,   NP → ADJ NP1,   VP → V NP,   S → NP VP]
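As a rough sketch of the dynamic-programming idea (not the exact chart algorithm on the slide): a bottom-up, CYK-style chart recogniser in Python for the same toy grammar. The `LEXICAL`, `UNARY` and `BINARY` rule tables are assumptions I introduce to keep the example short.

```python
# Rule tables for the toy grammar, split by rule shape.
LEXICAL = {"Mary": {"NAME"}, "loves": {"V"}, "that": {"ADJ"}, "person": {"N"}}
UNARY   = {"NAME": {"NP"}, "N": {"NP1"}}            # B -> set of A with rule A -> B
BINARY  = {("NP", "VP"): {"S"}, ("V", "NP"): {"VP"}, ("ADJ", "NP1"): {"NP"}}

def unary_closure(symbols):
    """Add every A such that A -> B for some B already in the cell."""
    agenda = list(symbols)
    while agenda:
        b = agenda.pop()
        for a in UNARY.get(b, ()):
            if a not in symbols:
                symbols.add(a)
                agenda.append(a)
    return symbols

def chart_parse(words):
    n = len(words)
    # chart[i][j] = set of non-terminals that can span words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = unary_closure(set(LEXICAL.get(w, ())))
    for span in range(2, n + 1):                    # dynamic programming over span length
        for i in range(0, n - span + 1):
            j = i + span
            cell = chart[i][j]
            for k in range(i + 1, j):               # all split points
                for b in chart[i][k]:
                    for c in chart[k][j]:
                        cell |= BINARY.get((b, c), set())
            chart[i][j] = unary_closure(cell)
    return "S" in chart[0][n]

print(chart_parse("Mary loves that person".split()))   # True
```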
48. Stochastic Language Models (SLM)
1. formal grammars lack coverage (for general domains)
2. spoken language does not strictly follow the grammar
Model sequences of words statistically:

P(W) = P(w1 w2 . . . wn)
49. Probabilistic Context-Free Grammars (PCFGs)
Assign probabilities to the production rules:

P(A → α | G)

Then the probability of generating a word sequence w1 w2 . . . wn is the probability of the rules needed to derive w1 w2 . . . wn from S:

P(S ⇒ w1 w2 . . . wn | G)
50. Training PCFGs
If the corpus is annotated, use the Maximum Likelihood estimate:

P(A → αj) = C(A → αj) / Σ_{i=1}^{m} C(A → αi)

If the corpus is not annotated: inside-outside algorithm (similar to HMM training with forward-backward).
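A minimal sketch of the ML estimate above in Python, assuming rule counts have already been read off an annotated corpus; the counts here are made up for illustration.

```python
from collections import defaultdict

# Hypothetical rule counts C(A -> alpha_j), e.g. read off an annotated treebank.
rule_counts = {
    ("NP", ("NAME",)):      12,
    ("NP", ("ADJ", "NP1")):  4,
    ("VP", ("V", "NP")):    10,
}

lhs_totals = defaultdict(int)
for (lhs, rhs), c in rule_counts.items():
    lhs_totals[lhs] += c                            # sum_i C(A -> alpha_i)

rule_prob = {rule: c / lhs_totals[rule[0]] for rule, c in rule_counts.items()}
print(rule_prob[("NP", ("NAME",))])                 # 12 / (12 + 4) = 0.75
```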
52. Inside-outside probabilities
Chomsky’s normal forms: Ai → Am An or Ai → wl

inside(s, Ai, t) = P(Ai ⇒ ws ws+1 . . . wt)
outside(s, Ai, t) = P(S ⇒ w1 . . . ws−1 Ai wt+1 . . . wT)

[Figure: the inside probability covers the span ws . . . wt dominated by Ai; the outside probability covers the rest of the sentence, w1 . . . ws−1 and wt+1 . . . wT, under S]
61. N-gram estimation example
Corpus:
1: John read her book
2: I read a different book
3: John read a book by Mulan

P(John|<s>) = C(<s>, John) / C(<s>) = 2/3
P(read|John) = C(John, read) / C(John) = 2/2
P(a|read) = C(read, a) / C(read) = 2/3
P(book|a) = C(a, book) / C(a) = 1/2
P(</s>|book) = C(book, </s>) / C(book) = 2/3

P(John, read, a, book) = P(John|<s>) P(read|John) P(a|read) P(book|a) P(</s>|book) = 0.148
P(Mulan, read, a, book) = P(Mulan|<s>) · · · = 0
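A small Python sketch (not from the slides) that reproduces the bigram ML estimates above from the three-sentence corpus, with <s> and </s> marking sentence start and end.

```python
from collections import Counter

corpus = ["John read her book",
          "I read a different book",
          "John read a book by Mulan"]

unigrams, bigrams = Counter(), Counter()
for line in corpus:
    toks = ["<s>"] + line.split() + ["</s>"]
    unigrams.update(toks[:-1])                      # history counts C(h)
    bigrams.update(zip(toks[:-1], toks[1:]))        # bigram counts C(h, w)

def p(w, h):                                        # P(w | h) = C(h, w) / C(h)
    return bigrams[(h, w)] / unigrams[h]

sent = ["<s>", "John", "read", "a", "book", "</s>"]
prob = 1.0
for h, w in zip(sent[:-1], sent[1:]):
    prob *= p(w, h)
print(round(prob, 3))                               # ≈ 0.148
```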
62. N-gram Smoothing
Problem:
- many perfectly possible word sequences occur zero times or only a few times in the training data
- this leads to extremely low probabilities, effectively ruling out those word sequences no matter how strong the acoustic evidence is
Solution: smoothing
- produce more robust probabilities for unseen data, at the cost of modelling the training data slightly worse
63. Simplest Smoothing technique
Instead of the ML estimate

P(wi|wi−N+1, . . . , wi−1) = C(wi−N+1, . . . , wi−1, wi) / Σ_{wi} C(wi−N+1, . . . , wi−1, wi)

use

P(wi|wi−N+1, . . . , wi−1) = (1 + C(wi−N+1, . . . , wi−1, wi)) / Σ_{wi} (1 + C(wi−N+1, . . . , wi−1, wi))

- prevents zero probabilities
- but still gives very low probabilities
66. N-gram simple smoothing example
Corpus:
1: John read her book
2: I read a different book
3: John read a book by Mulan

P(John|<s>) = (1 + C(<s>, John)) / (11 + C(<s>)) = 3/14
P(read|John) = (1 + C(John, read)) / (11 + C(John)) = 3/13
. . .
P(Mulan|<s>) = (1 + C(<s>, Mulan)) / (11 + C(<s>)) = 1/14

P(John, read, a, book) = P(John|<s>) P(read|John) P(a|read) P(book|a) P(</s>|book) = 0.00035 (unsmoothed: 0.148)
P(Mulan, read, a, book) = P(Mulan|<s>) P(read|Mulan) P(a|read) P(book|a) P(</s>|book) = 0.000084 (unsmoothed: 0)
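A Python sketch of add-one smoothing on the same corpus (not from the slides). It assumes the vocabulary includes <s> and </s>, which gives V = 11 as on the slide; other vocabulary conventions change the exact numbers.

```python
from collections import Counter

corpus = ["John read her book", "I read a different book", "John read a book by Mulan"]
unigrams, bigrams = Counter(), Counter()
for line in corpus:
    toks = ["<s>"] + line.split() + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))

# Assumed convention: vocabulary = all word types plus <s> and </s>, so V = 11.
vocab = {t for line in corpus for t in line.split()} | {"<s>", "</s>"}
V = len(vocab)

def p_add1(w, h):                                   # (1 + C(h, w)) / (V + C(h))
    return (1 + bigrams[(h, w)]) / (V + unigrams[h])

sent = ["<s>", "John", "read", "a", "book", "</s>"]
prob = 1.0
for h, w in zip(sent[:-1], sent[1:]):
    prob *= p_add1(w, h)
print(round(prob, 5))                               # ≈ 0.00035
```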
67. Interpolation vs Backoff smoothing
Interpolation models:
- linear combination with lower-order n-grams
- modifies the probabilities of both nonzero- and zero-count n-grams
Backoff models:
- use lower-order n-grams when the requested n-gram has a zero or very low count in the training data
- nonzero-count n-grams are unchanged
- discounting: reduce the probability of seen n-grams and distribute the mass among unseen ones
69. Deleted interpolation smoothing
Recursively interpolate with n-grams of lower order. With history_n = wi−n+1, . . . , wi−1:

P_I(wi|history_n) = λ_{history_n} P(wi|history_n) + (1 − λ_{history_n}) P_I(wi|history_{n−1})

- λ_{history_n} is hard to estimate for every history
- cluster the histories into a moderate number of weights (a simplified sketch follows below)
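A minimal two-level instance of this idea (not from the slides): interpolating a bigram with a unigram in Python, using a single fixed weight LAM rather than history-dependent weights estimated on held-out data.

```python
from collections import Counter

corpus = ["John read her book", "I read a different book", "John read a book by Mulan"]
unigrams, bigrams, tokens = Counter(), Counter(), 0
for line in corpus:
    toks = ["<s>"] + line.split() + ["</s>"]
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks[:-1], toks[1:]))
    tokens += len(toks) - 1

LAM = 0.8                                           # hypothetical fixed interpolation weight

def p_unigram(w):
    return unigrams[w] / tokens

def p_interp(w, h):                                 # lam * P(w|h) + (1 - lam) * P(w)
    p_bigram = bigrams[(h, w)] / unigrams[h] if unigrams[h] else 0.0
    return LAM * p_bigram + (1 - LAM) * p_unigram(w)

print(p_interp("read", "Mulan"))                    # nonzero, although C(Mulan, read) = 0
```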
71. Good-Turing estimate
- partition n-grams into groups depending on their frequency in the training data
- change the number of occurrences of an n-gram according to

r* = (r + 1) n_{r+1} / n_r

where r is the occurrence count and n_r is the number of n-grams that occur r times
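A tiny Python sketch of the adjusted count, with made-up counts-of-counts n_r:

```python
# Hypothetical counts-of-counts n_r (number of n-grams seen exactly r times).
n = {1: 120, 2: 40, 3: 18, 4: 10, 5: 6}

def good_turing(r):
    """Adjusted count r* = (r + 1) * n_{r+1} / n_r."""
    return (r + 1) * n.get(r + 1, 0) / n[r]

print(good_turing(1))   # 2 * 40 / 120 ≈ 0.67
print(good_turing(2))   # 3 * 18 / 40  = 1.35
```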
72. Katz smoothing
Based on Good-Turing: combine higher- and lower-order n-grams.
For every n-gram:
1. if the count r is large (> 5 or 8), do not change it
2. if the count r is small but non-zero, discount it to approximately r*
3. if the count r = 0, reassign the discounted counts using the lower-order n-gram:

C*(wi−1, wi) = α(wi−1) P(wi)
73. Kneser-Ney smoothing: motivation
Background
- Lower-order n-grams are often used as a backoff model when the count of a higher-order n-gram is too low (e.g. unigram instead of bigram)
Problem
- Some words with relatively high unigram probability only occur in a few bigrams, e.g. Francisco, which is mainly found in San Francisco. Infrequent word pairs such as New Francisco will then be given too high a probability if the unigram probabilities of New and Francisco are used. Instead, the Francisco unigram should perhaps have a lower value to prevent it from appearing in other contexts.
I can’t see without my reading. . .
75. Kneser-Ney intuition
If a word has been seen in many contexts it is more likely to be seen in new contexts as well.
- instead of backing off to the lower-order n-gram, use the continuation probability
Example: instead of the unigram P(wi), use

P_CONTINUATION(wi) = |{wi−1 : C(wi−1, wi) > 0}| / Σ_{wi} |{wi−1 : C(wi−1, wi) > 0}|

I can’t see without my reading. . . glasses
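A Python sketch of the continuation probability on the toy corpus from the n-gram examples (not from the slides): count how many distinct left contexts each word has and normalise by the total number of distinct bigram types.

```python
from collections import Counter

corpus = ["John read her book", "I read a different book", "John read a book by Mulan"]
bigram_types = set()
for line in corpus:
    toks = ["<s>"] + line.split() + ["</s>"]
    bigram_types.update(zip(toks[:-1], toks[1:]))

# |{w_{i-1} : C(w_{i-1}, w_i) > 0}| for every word w_i
left_contexts = Counter(w2 for (w1, w2) in bigram_types)

def p_continuation(w):
    return left_contexts[w] / len(bigram_types)

print(p_continuation("book"))    # book is preceded by her, different and a
print(p_continuation("Mulan"))   # Mulan is only ever preceded by "by"
```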
76. Class N-grams
1. Group words into semantic or grammatical classes
2. Build n-grams over class sequences:

P(wi|ci−N+1 . . . ci−1) = P(wi|ci) P(ci|ci−N+1 . . . ci−1)

- rapid adaptation, small training sets, small models
- works on limited domains
- classes can be rule-based or data-driven (a sketch follows below)
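A minimal sketch of the factorisation with a bigram class history (not from the slides); the word classes and probabilities are made up, borrowing the TIME/NAME classes from the next slide.

```python
# Hypothetical word-to-class map and hand-set probabilities, to show
# P(w_i | c_{i-1}) = P(w_i | c_i) * P(c_i | c_{i-1}).
word_class = {"three": "TIME", "four": "TIME", "Derek": "NAME", "Zhou": "NAME"}
p_word_given_class = {"three": 0.5, "four": 0.5, "Derek": 0.5, "Zhou": 0.5}
p_class_bigram = {("TIME", "NAME"): 0.3, ("NAME", "TIME"): 0.1}

def p_class_based(w, prev_w):
    c, c_prev = word_class[w], word_class[prev_w]
    return p_word_given_class[w] * p_class_bigram.get((c_prev, c), 0.0)

print(p_class_based("Derek", "four"))    # 0.5 * 0.3 = 0.15
```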
77. Combining PCFGs and N-grams
Only N-grams:
Meeting at three with Zhou Li
Meeting at four PM with Derek
P(Zhou|three, with) and P(Derek|PM, with)

N-grams + CFGs:
Meeting {at three: TIME} with {Zhou Li: NAME}
Meeting {at four PM: TIME} with {Derek: NAME}
P(NAME|TIME, with)
78. Adaptive Language Models
- the conversational topic is not stationary
- but the topic is stationary over some period of time
- build more specialised models that can adapt over time
Techniques:
- Cache Language Models
- Topic-Adaptive Models
- Maximum Entropy Models
79. Cache Language Models
1. build a full static n-gram model
2. during the conversation, accumulate low-order n-grams
3. interpolate between 1 and 2 (see the sketch below)
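A rough sketch of the interpolation in Python (not from the slides); the unigram cache and the weight LAM are illustrative assumptions, and `p_static` stands for the full static n-gram model.

```python
from collections import Counter

LAM = 0.9                        # hypothetical weight for the static model
cache = Counter()                # low-order (here unigram) counts from the conversation

def p_cache_lm(w, h, p_static):
    """Interpolate the full static model with the conversation cache."""
    p_cache = cache[w] / sum(cache.values()) if cache else 0.0
    return LAM * p_static(w, h) + (1 - LAM) * p_cache

# As recognition proceeds, recognised words are added to the cache,
# so recently used words get a boost:
cache.update("the budget meeting about the budget".split())
```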
80. Topic-Adaptive Models
1. cluster documents into topics (manually or data-driven)
2. use information retrieval techniques on the current recognition output to select the right cluster
3. if run off-line, recognition can be done in several passes
82. Maximum Entropy Models
Instead of a linear combination:
1. reformulate the information sources into constraints
2. choose the maximum entropy distribution that satisfies the constraints

Constraints, general form:

Σ_X P(X) fi(X) = Ei

Example: unigram feature

f_{wi}(X) = 1 if w = wi, 0 otherwise
84. Language Model Evaluation
- evaluation in combination with a speech recogniser
  - hard to separate the contributions of the two
- evaluation based on the probabilities assigned to text in the training and test set
86. Perplexity of a model
We do not know the “true” distribution p(w1, . . . , wn), but we have a model m(w1, . . . , wn). The cross-entropy is:

H(p, m) = − Σ_{w1,...,wn} p(w1, . . . , wn) log m(w1, . . . , wn)

Cross-entropy is an upper bound on the entropy:

H ≤ H(p, m)

The better the model, the lower the cross-entropy and the lower the perplexity (on the same data).
87. Test-set Perplexity
Estimate the distribution p(w1, . . . , wn) on the training data and evaluate it on the test data:

H = − Σ_{w1,...,wn ∈ test set} p(w1, . . . , wn) log p(w1, . . . , wn)

PP = 2^H
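In practice perplexity is computed per word; a Python sketch with that normalisation (an assumption, the slide does not spell it out). The uniform model in the demo also shows the link to the branching factor on the next slide.

```python
import math

def perplexity(test_sentences, p):
    """PP = 2^H, with H the average of -log2 p(w | h) over the test words."""
    log_sum, n_words = 0.0, 0
    for line in test_sentences:
        toks = ["<s>"] + line.split() + ["</s>"]
        for h, w in zip(toks[:-1], toks[1:]):
            log_sum += -math.log2(p(w, h))
            n_words += 1
    return 2 ** (log_sum / n_words)

# A non-informative model over an 11-word vocabulary gives a perplexity equal
# to the vocabulary size, i.e. the branching factor:
print(perplexity(["John read a book"], lambda w, h: 1 / 11))   # ≈ 11
```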
88. Perplexity and branching factor
Perplexity is roughly the geometric mean of the branching factor.
[Diagram: from each word, the recognition network branches to word1, word2, . . . , wordN]
Shannon: 2.39 for English letters, 130 for English words
Digit strings: 10
N-gram English: 50–1000
Wall Street Journal test set: 180 (bigram), 91 (trigram)
89. Performance of N-grams
Model                 Perplexity   Word Error Rate
Unigram Katz          1196.45      14.85%
Unigram Kneser-Ney    1199.59      14.86%
Bigram Katz            176.31      11.38%
Bigram Kneser-Ney      176.11      11.34%
Trigram Katz            95.19       9.69%
Trigram Kneser-Ney      91.47       9.60%

Wall Street Journal database. Dictionary: 60 000 words. Training set: 260 000 000 words.