Text Mining 2
Language Modeling
!
Madrid Summer School 2014
Advanced Statistics and Data Mining
!
Florian Leitner
florian.leitner@upm.es
License:
A Motivation for Applying
Statistics to NLP
Two sentences:
“Colorless green ideas sleep furiously.” (from Noam Chomsky’s 1955 thesis)
“Furiously ideas green sleep colorless.”
Which one is (grammatically) correct?
!
An unfinished sentence:
“Adam went jogging with his …”
What is a correct phrase to complete this sentence?
Incentive and Applications
!
• Manual rule-based language processing would become cumbersome.
• Word frequencies (probabilities) are easy to measure, because large amounts of text are available to us.
• Modeling language based on probabilities enables many existing applications:
‣ Machine Translation
‣ Voice Recognition
‣ Predictive Keyboards
‣ Spelling Correction
‣ Part-of-Speech (PoS) Tagging
‣ Linguistic Parsing & Chunking • (analyzes the grammatical structure of sentences)
Zipf’s (Power) Law
Word frequency is inversely proportional to its rank (ordered by counts): f ∝ 1 ÷ r, i.e. k = E[f × r] stays roughly constant (E = Expected Value, the “true” mean).
Words of lower rank “clump” within a region of a document.
Word frequency is inversely proportional to its length.
Almost all words are rare: a few top-ranked words (the, be, to, of, …) account for most tokens, while the long tail (…, malarkey, quodlibet) holds most of the vocabulary. This is what makes language modeling hard!
[Figure: word frequency vs. rank curve; annotations: “lexical richness”, the constant k, and “dim. reduction”.]
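As a quick empirical check, the product of frequency and rank should stay roughly constant for the top-ranked words. A minimal sketch, assuming NLTK and its Brown corpus are installed and downloaded:

# Zipf's law check: f * r should be roughly constant for the head of the distribution.
from collections import Counter
from nltk.corpus import brown

counts = Counter(w.lower() for w in brown.words() if w.isalpha())
for rank, (word, freq) in enumerate(counts.most_common(20), start=1):
    print(f"{rank:>4}  {word:<10} f={freq:>7}  f*r={freq * rank}")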
The Conditional Probability
for Dependent Events
Joint Probability: P(X ∩ Y) = P(X, Y) = P(X × Y)
Conditional Probability: P(X | Y) = P(X ∩ Y) ÷ P(Y)
The multiplication principle for dependent events*: P(X ∩ Y) = P(Y) × P(X | Y), and therefore, by using a little algebra, the conditional probability above.
*Independence: P(X ∩ Y) = P(X) × P(Y), so P(X | Y) = P(X) and P(Y | X) = P(Y)
[Figure: Venn diagram of events X and Y — the whole n-gram vs. the next word.]
REMINDER
The Chain Rule of
Conditional Probability
“The Joint Probability of Dependent Variables”
(don’t say I told you that…)
P(A,B,C,D) = P(A) × P(B|A) × P(C|A,B) × P(D|A,B,C)
P(B,Y,B) = P(B) × P(Y|B) × P(B|B,Y)
1/10 = 2/5 × 3/4 × 1/3
with P(B) = 2/5, P(Y|B) = 3/4, P(B|B,Y) = 1/3

P(w1, …, wn) = ∏ i=1…n P(wi | w1, …, wi−1)
NB: the ∑ of all “trigrams” will be 1!
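A tiny numeric check of the chain rule, using the three conditional probabilities from the example above (a sketch; B and Y are just the slide's event labels):

# Chain rule: P(B,Y,B) = P(B) * P(Y|B) * P(B|B,Y)
p_b = 2 / 5           # P(B)
p_y_given_b = 3 / 4   # P(Y | B)
p_b_given_by = 1 / 3  # P(B | B, Y)

p_joint = p_b * p_y_given_b * p_b_given_by
print(p_joint)        # 0.1 == 1/10, as on the slide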
Probabilistic Language
Modeling
‣ Manning & Schütze. Statistical Natural Language Processing. 1999
• A sentence W is defined as a sequence of words w1, …, wn
• Probability of next word wn in a sentence is: P(wn |w1, …, wn-1)
‣ a conditional probability
• The probability of the whole sentence is: P(W) = P(w1, …, wn)
‣ the chain rule of conditional probability
• These counts & probabilities form the language model [for a given document collection (=corpus)].
‣ the model variables are discrete (counts)
‣ only needs to deal with probability mass (not density)
Words, Tokens, Shingles
and N-Grams
Text with words: “This is a sentence.”
⬇ “tokenization” (NB: character-based, regular expressions, probabilistic, …)
Tokens: This | is | a | sentence | .
2-Shingles: This is | is a | a sentence | sentence .
3-Shingles: This is a | is a sentence | a sentence .
a.k.a. k-Shingling (Token N-Grams)
Character N-Grams — all trigrams of “sentence”: [sen, ent, nte, ten, enc, nce]
REMINDER
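A minimal sketch of token k-shingling and character n-grams in plain Python (the function name shingles is just for illustration):

def shingles(tokens, k):
    """Return all contiguous k-token shingles (token n-grams) of a sequence."""
    return [tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)]

tokens = ["This", "is", "a", "sentence", "."]
print(shingles(tokens, 2))  # [('This', 'is'), ('is', 'a'), ('a', 'sentence'), ('sentence', '.')]
print(shingles(tokens, 3))  # [('This', 'is', 'a'), ('is', 'a', 'sentence'), ('a', 'sentence', '.')]

# The same function yields character n-grams when applied to a string:
print(["".join(s) for s in shingles("sentence", 3)])  # ['sen', 'ent', 'nte', 'ten', 'enc', 'nce']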
Modeling the Stochastic Process of
“Generating Words”
Using the Chain Rule
“This is a long sentence having many…” ➡
P(W) = P(this) ×
P(is | this) ×
P(a | this, is) ×
P(long | this, is, a) ×
P(sentence | this, is, a, long) ×
P(having | this, is, a, long, sentence) ×
P(many | this, is, a, long, sentence, having) ×
….
n-grams for n > 5:
Insufficient Data
(and expensive to calculate)
The Markov Property
A Markov process is a stochastic process whose future state depends only on the current state, not on its past.
[Figure: a Markov chain over 3 words (S1, S2, S3) — a unigram model. Each state is a token (i.e., any word, symbol, …) in S; the transitions give the likelihood of that (next) token given the current token.]
Modeling the Stochastic Process of
“Generating Words”
Assuming it is Markovian
!
!
• 1st Order Markov Chains ‣ Bigram Model, k=1, P(be | this)
• 2nd Order Markov Chains ‣ Trigram Model, k=2, P(long | this, be)
• 3rd Order Markov Chains ‣ Quadrigram Model, k=3, P(sentence | this, be, long)
• …

∏ P(wi | w1, …, wi−1) ≈ ∏ P(wi | wi−k, …, wi−1)

Dependencies could span over a dozen tokens, but these sizes are generally sufficient to work with.

[Figure: the Dynamic Bayesian Network representations of these Markov Chains (see lesson 5) — a single node Ti for the unigram “Model”; Ti−1, Ti−0 for the bigram model; Ti−2, Ti−1, Ti−0 for the trigram model; Ti−3, Ti−2, Ti−1, Ti−0 for the quadrigram model.]
Calculating n-Gram Probabilities

• Unigrams: P(wi) = count(wi) ÷ N, where N = total word count
• Bigrams: P(wi | wi−1) = count(wi−1, wi) ÷ count(wi−1)
• n-Grams (n = k+1): P(wi | wi−k, …, wi−1) = count(wi−k, …, wi) ÷ count(wi−k, …, wi−1), where k = n-gram size − 1
‣ Language Model: P(W) = ∏ P(wi | wi−k, …, wi−1)

Practical Tip: transform all probabilities to logs to avoid underflows and work with addition.
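A minimal sketch of these count-based (maximum-likelihood) estimates for a bigram model, using log-probabilities as in the practical tip; the toy training sentences are made up for illustration:

import math
from collections import Counter
from typing import List, Tuple

def train_bigram_model(sentences: List[List[str]]) -> Tuple[Counter, Counter]:
    """Count unigrams and bigrams over already tokenized (and padded) sentences."""
    unigrams, bigrams = Counter(), Counter()
    for tokens in sentences:
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def sentence_logprob(sentence: List[str], unigrams: Counter, bigrams: Counter) -> float:
    """log2 P(W) = sum of log2 [count(w_{i-1}, w_i) / count(w_{i-1})]; fails on unseen bigrams."""
    return sum(math.log2(bigrams[(prev, word)] / unigrams[prev])
               for prev, word in zip(sentence, sentence[1:]))

train = [["<s>", "this", "is", "a", "sentence", ".", "</s>"],
         ["<s>", "this", "is", "another", "sentence", ".", "</s>"]]
uni, bi = train_bigram_model(train)
print(sentence_logprob(["<s>", "this", "is", "a", "sentence", ".", "</s>"], uni, bi))  # -1.0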
Calculating Probabilities
for the Initial Tokens
• How to calculate the first/last few tokens in n-gram models?
‣ “This is a sentence.” ➡ P(this | ???)
‣ P(wn|wn-k, …, wn-1)
• Fall back to lower-order Markov models
‣ P(w1|wn-k,…,wn-1) = P(w1); P(w2|wn-k,…,wn-1) = P(w2|w1); …
• Fill positions prior to n = 1 with a generic “start token”.
‣ left and/or right padding
‣ conventionally, something like “<s>” and “</s>”, “*” or “·” is used
• (but anything will do, as long as it does not collide with a possible, real token in your system)
NB: it is important to maintain the sentence terminal token
to generate robust probability distributions; do not drop it!
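A minimal hand-rolled padding sketch (the helper name pad is arbitrary; recent NLTK versions also provide helpers such as nltk.util.pad_sequence):

def pad(tokens, k, start="<s>", end="</s>"):
    """Left-pad with k start symbols and right-pad with one end symbol for a (k+1)-gram model."""
    return [start] * k + list(tokens) + [end]

print(pad(["this", "is", "a", "sentence", "."], k=2))
# ['<s>', '<s>', 'this', 'is', 'a', 'sentence', '.', '</s>']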
Unseen n-Grams and the
Zero Probability Issue
• Even for unigrams, unseen words will occur sooner or later
• The longer the n-grams, the sooner unseen cases will occur
• “Colorless green ideas sleep furiously.” (Chomsky, 1955)
• As unseen tokens have zero probability, P(W) = 0, too
!
‣ Intuition/Idea: Divert a bit of the overall probability mass of
each [seen] token to all possible unseen tokens
• Terminology: model smoothing (Chen & Goodman, 1998)
Additive / Laplace
Smoothing
• Add a smoothing factor α to all n-gram counts:
!
!
• V is the size of the vocabulary
‣ the number of unique words in the training corpus
• α ≤ 1 usually; if α =1, it is known as Add-One Smoothing
• Very old: first suggested by Pierre-Simon Laplace in 1816
• But it performs poorly (compared to “modern” approaches)
‣ Gale & Church, 1994
P(wi | wi−k, …, wi−1) = (count(wi−k, …, wi) + α) ÷ (count(wi−k, …, wi−1) + αV)
nltk.probability.LidstoneProbDist
nltk.probability.LaplaceProbDist (α = 1)
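A minimal sketch of add-α smoothing for bigrams; the function name laplace_prob is illustrative, and the usage comment reuses the counters from the earlier bigram sketch:

from collections import Counter

def laplace_prob(prev: str, word: str, unigrams: Counter, bigrams: Counter,
                 alpha: float = 1.0) -> float:
    """Add-alpha: (count(prev, word) + alpha) / (count(prev) + alpha * V)."""
    V = len(unigrams)  # vocabulary size (unique words in the training corpus)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * V)

# usage (with the `uni` and `bi` Counters from the bigram sketch above):
# laplace_prob("is", "another", uni, bi, alpha=0.1)      # > 0 even for unseen bigrams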
Linear Interpolation
Smoothing
• A “Finite Mixture Model” in terms of statistics.
‣ a.k.a. Jelinek-Mercer Smoothing (1980)
• Sum the conditional probability over all n n-grams (the uni-, bi-, trigram, etc.) at
the current token, using the parameter λ for interpolation:
!
!
!
‣ With: ∑ λκ = 1
• Unseen n-gram probabilities can now be interpolated from their “root”
‣ i.e., the n-gram’s 0, 1, …, κ = n-1 interpolated probabilities
• Estimate the λ parameters on a held-out development set using Expectation Maximization or any other MLE optimization technique.
• The “baseline model” in Chen & Goodman’s (1998) study.
P(wi | wi−k, …, wi−1) = ∑ κ=0…k λκ P(wi | wi−κ, …, wi−1)
[nltk.probability.WittenBellProbDist]
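A minimal sketch of two-way linear interpolation (bigram + unigram); the λ values below are placeholders that would normally be tuned on held-out data, and the usage comment reuses the earlier counters:

from collections import Counter

def interpolated_prob(prev: str, word: str, unigrams: Counter, bigrams: Counter,
                      lambdas=(0.7, 0.3)) -> float:
    """lambda_bi * P_ML(word | prev) + lambda_uni * P_ML(word), with the lambdas summing to 1."""
    l_bi, l_uni = lambdas
    p_uni = unigrams[word] / sum(unigrams.values())
    p_bi = bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return l_bi * p_bi + l_uni * p_uni

# usage (with the `uni` and `bi` Counters from the bigram sketch above):
# interpolated_prob("is", "a", uni, bi)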
Advanced Smoothing
Techniques
• Use the frequencies of extremely rare items to estimate
the frequencies of unseen items
• Approach is generally rather costly to calculate
• Good-Turing Estimate (1953 [and WW2 Bletchley Park])
‣ Reallocate probability mass from higher-order n-grams to their lower-
order “stems”.
!
‣ Needs a MLE (max. likelihood estimate) when no higher-order n-gram count
is available.
‣ Higher order n-gram counts tend to be noisy (imprecise because rare)
‣ GTE ≈ “Smoothed Add-One”, and the foundation of most advanced smoothing techniques
smoothed count: c* = (c + 1) × Nc+1 ÷ Nc
where c is the original count and Nc is the number of distinct n-grams that occur exactly c times (Nc+1: exactly c+1 times)
nltk.probability.SimpleGoodTuringProbDist
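A minimal sketch of the raw Good-Turing re-estimate c* = (c+1) × Nc+1 ÷ Nc from a counter of n-gram counts; a practical implementation (such as Simple Good-Turing) additionally smooths the Nc curve:

from collections import Counter

def good_turing_counts(ngram_counts: Counter) -> dict:
    """Return Good-Turing adjusted counts; falls back to the raw count if N_{c+1} is missing."""
    freq_of_freq = Counter(ngram_counts.values())  # N_c: how many n-grams occur exactly c times
    adjusted = {}
    for ngram, c in ngram_counts.items():
        if freq_of_freq.get(c + 1):
            adjusted[ngram] = (c + 1) * freq_of_freq[c + 1] / freq_of_freq[c]
        else:
            adjusted[ngram] = float(c)             # no higher-order count available: keep the MLE
    return adjusted

print(good_turing_counts(Counter({("a", "b"): 1, ("b", "c"): 1, ("c", "d"): 2})))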
Interpolated Kneser-Ney
Smoothing
• A lower-order probability should be smoothed proportional to Γ,
the number of different words that it follows.
• In “San Francisco”, “Francisco” alone might not be significant, but will lead to a high unigram probability for
“Francisco”.
!
!
!
!
‣ Subtracts some fraction δ of probability mass from non-zero counts.
‣ Lower-order counts are made significantly smaller than their higher-order counts.
• Reduces the mass of “Francisco” with an artificially high unigram probability (because it almost exclusively
occurs as “San Francisco”), so it is less likely to be used to interpolate unseen cases.
‣ (Chen & Goodman, 1998) interpretation of (Kneser & Ney, 1995)
1. steal some probability (the discount δ) from the non-zero counts… 2. …reassign it to the lower-order terms… 3. …proportional to the word wi’s history, i.e. the number of different words wi−1 that precede wi (let’s call this the word’s history):

Γ(wi−k, …, wi−1, wi) = |{wi−1 : count(wi−1, wi) > 0}|

PKN(wi | wi−k, …, wi−1) = max{count(wi−k, …, wi) − δ, 0} ÷ Nk + δ × Γ(wi−k, …, wi−1, wi) ÷ Nk × PKN(wi | wi−k+1, …, wi−1)

Nk = ∑ count(wi−k, …, wi)   ← the total count of n-grams of size n = k+1
nltk.probability.KneserNeyProbDist (NLTK3 only)
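A minimal bigram-only sketch in the spirit of interpolated Kneser-Ney, using the common textbook formulation (absolute discount d and a continuation probability), not the slide's exact Nk-normalized form:

from collections import Counter, defaultdict
from typing import List

def train_kn_bigram(sentences: List[List[str]], d: float = 0.75):
    """Return a function p(prev, word) implementing interpolated KN for bigrams."""
    bigrams = Counter()
    for tokens in sentences:
        bigrams.update(zip(tokens, tokens[1:]))
    context_totals = Counter()    # count(w_{i-1})
    followers = defaultdict(set)  # distinct continuations of each context
    histories = defaultdict(set)  # distinct left contexts of each word ("Francisco" effect)
    for (prev, word), c in bigrams.items():
        context_totals[prev] += c
        followers[prev].add(word)
        histories[word].add(prev)
    n_bigram_types = len(bigrams)

    def p_kn(prev: str, word: str) -> float:
        p_continuation = len(histories[word]) / n_bigram_types
        if context_totals[prev] == 0:          # unknown context: back off entirely
            return p_continuation
        discounted = max(bigrams[(prev, word)] - d, 0.0) / context_totals[prev]
        backoff_mass = d * len(followers[prev]) / context_totals[prev]
        return discounted + backoff_mass * p_continuation

    return p_kn

p_kn = train_kn_bigram([["<s>", "san", "francisco", "</s>"],
                        ["<s>", "san", "francisco", "</s>"],
                        ["<s>", "new", "york", "</s>"]])
print(p_kn("san", "francisco"), p_kn("new", "francisco"))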
Stupid Backoff Smoothing
for Large-scale Datasets
• Brants et al. Large language models in machine translation. 2007
!
!
!
‣ Brants set α = 0.4
• Intuition is similar to Kneser-Ney Smoothing, but no scaling with a
word’s history is performed to make the method efficient.
• Efficiently smoothes billions of n-grams (>10¹² [trillions] of tokens)
• Other useful large-scale techniques:
‣ Compression, e.g., Huffman codes (integers) instead of (string) tokens
‣ Transform (string) tokens to (integer) hashes (at the possible cost of collisions)
PSB(wi | wi−k, …, wi−1) = P(wi | wi−k, …, wi−1) if count(wi−k, …, wi) > 0, otherwise α × PSB(wi | wi−k+1, …, wi−1)   ← the “backoff”

NB: if you were backing off from a tri- to a unigram, you’d apply α × α × P(wi).

[nltk.model.NgramModel] (NB: broken in NLTK3!) (Katz Smoothing)
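A minimal sketch of Stupid Backoff scoring over a dictionary mapping n-gram tuples (of all orders) to counts; note it returns scores, not normalized probabilities:

def stupid_backoff(ngram, counts, total_tokens, alpha=0.4):
    """Relative-frequency score with recursive backoff to shorter contexts."""
    if len(ngram) == 1:
        return counts.get(ngram, 0) / total_tokens     # unigram relative frequency
    if counts.get(ngram, 0) > 0:
        return counts[ngram] / counts[ngram[:-1]]      # context count exists if the n-gram does
    return alpha * stupid_backoff(ngram[1:], counts, total_tokens, alpha)

counts = {("this",): 2, ("is",): 2, ("a",): 1,
          ("this", "is"): 2, ("is", "a"): 1,
          ("this", "is", "a"): 1}
print(stupid_backoff(("this", "is", "a"), counts, total_tokens=10))      # 1/2 = 0.5
print(stupid_backoff(("is", "a", "sentence"), counts, total_tokens=10))  # backs off twice: 0.0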
The Final Piece:
Language Model Evaluation
• How good is the model you just trained?
• Did your update to the model do any good…?
• Is your model better than someone else’s?
‣ NB: You should compare on the same test set!
Extrinsic Model Evaluation:
Error Rate
• Extrinsic evaluation: minimizing the Error Rate
‣ evaluates a model’s error frequency
‣ estimates the model’s per-word error rate by comparing the generated
sequences to all true sequences (which cannot be established) using a manual
classification of errors (therefore, “extrinsic”)
‣ time consuming, but can evaluate non-probabilistic approaches, too
a “perfect prediction” would evaluate to zero
Intrinsic Model Evaluation:
Cross-Entropy / Perplexity
• Intrinsic evaluation: minimizing Perplexity (Rubinstein, 1997)
‣ compares a model’s probability distribution (to the “perfect” model)
‣ estimates a distance based on cross-entropy (Kullback-Leibler divergence)
between the generated word distribution and the true distribution (which
cannot be observed) using the empirical distribution as a proxy
‣ efficient, but can only evaluate probabilistic language models and is only an
approximation of the model quality
again, a “perfect prediction” would evaluate to zero
perplexity = “confusion”
Shannon Entropy
• Answers the questions:
‣ “How much information (bits) will I gain when I see wn?”
‣ “How predictable is wn from its past?”
!
• Each outcome provides -log2 P bits of information (“surprise”).
‣ Claude E. Shannon, 1948
• The more probable an event (outcome), the lower its entropy.
• A certain event (p=1 or 0) has zero entropy (“no surprise”).
!
• Therefore, the entropy H in our model P for a sentence W is:
‣ H = 1/N × -∑ P(wn|w1, …, wn-1) log2 P(wn|w1, …, wn-1)
From Cross-Entropy
to Model Perplexity
• Cross-entropy compares the model probability distribution P to
the true distribution PT:
‣ H(PT, P, W) = 1/N × -∑ PT(wn|wn-k, …, wn-1) log2 P(wn|wn-k, …, wn-1)
• CE can be simplified if the “true” distribution is a “stationary,
ergodic process”:
‣ H(P, W) = 1/N × -∑ log2 P(wn|wn-k, …, wn-1)
• The relationship between perplexity PPX and cross-entropy is defined as:
‣ PPX(W) = 2^H(P, W) = P(W)^(−1/N)
• where W is the sentence, P(W) is its probability under the Markov model, and N is the number of tokens in it
an “unchanging” language -> a rather naïve assumption
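A minimal sketch tying the two formulas together for a bigram model; the usage comment reuses laplace_prob and the counters from the earlier sketches so that unseen bigrams do not yield log(0):

import math
from typing import Callable, List

def cross_entropy(sentence: List[str], prob: Callable[[str, str], float]) -> float:
    """H = -1/N * sum of log2 P(wi | wi-1); prob(prev, word) must be > 0 (i.e., smoothed)."""
    logs = [math.log2(prob(prev, word)) for prev, word in zip(sentence, sentence[1:])]
    return -sum(logs) / len(logs)

def perplexity(sentence: List[str], prob: Callable[[str, str], float]) -> float:
    return 2 ** cross_entropy(sentence, prob)

# usage with the smoothed bigram model from the add-alpha sketch above:
# ppx = perplexity(["<s>", "this", "is", "a", "sentence", ".", "</s>"],
#                  lambda prev, word: laplace_prob(prev, word, uni, bi, alpha=0.1))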
Interpreting Perplexity
• The lower the perplexity, the better (“less surprised”) the
model.
• Perplexity is an indicator of the number of equiprobable
choices at each step.
‣ PPX = 26 for a model generating an infinite random sequence of Latin letters
• “[A-Z]+”
‣ Perplexity produces “big numbers” rather than cross-entropy’s “small bits”
• Typical (bigram) perplexities in English texts range from 50 to almost 1000 corresponding to a cross-
entropy from about 5.6 to 10 bits/word.
‣ Chen & Goodman, 1998
‣ Manning & Schütze, 1999
Practical: Develop your own
Language Model
• Build a 3rd order Markov model with Linear Interpolation smoothing and evaluate the perplexity.
• Note: as of NLTK 3.0a4, language modeling is “broken”
‣ try, e.g.: http://www.speech.sri.com/projects/srilm/
• nltk.NgramModel(n, train, pad_left=True,
pad_right=False, estimator=None, *eargs, **ekwds)
‣ pad_* pads sentence starts/ends of n-gram models with empty strings
‣ estimator is a smoothing function that maps a ConditionalFreqDist to a ConditionalProbDist
‣ eargs and ekwds are extra arguments and keywords for creating the estimator’s
CPDs.
• How do smoothing parameters and (smaller…) n-gram size change
perplexity?
Write a simple Spelling
Corrector in pure Python
• http://www.norvig.com/spell-correct.html
• http://www.norvig.com/spell.py
!
from nltk.corpus import brown
from collections import Counter

CORPUS = brown.words()
NWORDS = Counter(w.lower() for w in CORPUS if w.isalpha())
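A hedged sketch of how the corrector continues: edits1 and correct mirror the function names on the cited Norvig page, but the code here is a reconstruction (candidates one edit away, ranked by their unigram count in NWORDS), not a copy:

import string

def edits1(word):
    """All strings one edit away (deletes, transposes, replaces, inserts)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Prefer the word itself if known, else the known 1-edit candidate with the highest count."""
    candidates = ({word} & NWORDS.keys()) or (edits1(word) & NWORDS.keys()) or {word}
    return max(candidates, key=NWORDS.get)

print(correct("speling"))   # hopefully: 'spelling'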
Bertrand Russell, 1947
“Not to be absolutely certain is, I think, one of the
essential things in rationality.”