What is Word2Vec?
Traian Rebedea
Bucharest Machine Learning reading group
25-Aug-15
Intro
• About n-grams: “simple models trained on huge
amounts of data outperform complex systems
trained on less data”
• Solution: “possible to train more complex models
on much larger data set, and they typically
outperform the simple models”
• Why? “neural network based language models
significantly outperform N-gram models”
• How? “distributed representations of words”
(Hinton, 1986 – not discussed today)
Goal
• “learning high-quality word vectors from huge
data sets with billions of words, and with
millions of words in the vocabulary”
• Resulting word representations
– Similar words tend to be close to each other
– Words can have multiple degrees of similarity
Previous work
• Representation of words as continuous vectors
• Neural network language model (NNLM) (Bengio et al., 2003 – not
discussed today)
• Mikolov previously proposed (MSc thesis, PhD thesis, other papers)
to first learn word vectors “using neural network with a single
hidden layer” and then train the NNLM independently
• Word2vec directly extends this work => “word vectors learned using
a simple model”
• These word vectors were useful in various NLP applications
• Many architectures and models have been proposed for computing
these word vectors (e.g. see Socher’s Stanford group work which
resulted in GloVe - http://nlp.stanford.edu/projects/glove/)
• “these architectures were significantly more computationally
expensive for training than” word2vec (in 2013)
Model Architectures
• Some “classic” NLP for estimating continuous
representations of words
– LSA (Latent Semantic Analysis)
– LDA (Latent Dirichlet Allocation)
• Distributed representations of words learned by
neural networks outperform LSA on various tasks
that require preserving linear regularities among
words
• LDA is computationally expensive and cannot be
trained on very large datasets
Model Architectures
• Feedforward Neural Net Language Model
(NNLM)
Model Architectures
• Recurrent Neural Net Language Model
(RNNLM)
• Simple Elman RNN
Word2vec (log-linear) Models
• In the previous models, most of the complexity is caused by the non-linear hidden layer of the model
• Explore simpler models
– Not able to represent the data as precisely as NN
– Can be trained on more data
• In earlier works, Mikolov found that “neural network
language model can be successfully trained in two
steps”:
– Continuous word vectors are first learned using a simple model
– The N-gram NNLM is trained on top of these distributed
representations of words
Continuous BoW (CBOW) Model
• Similar to the feedforward NNLM, but
• Non-linear hidden layer removed
• Projection layer shared for all words
– Not just the projection matrix
• Thus, all words get projected into the same position
– Their vectors are just averaged
• Called CBOW (continuous BoW) because the order of
the words is lost
• Another modification is to use words both from the past and from the future (a window centered on the current word); a minimal sketch of the CBOW forward pass follows below
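A minimal NumPy sketch of the CBOW forward pass described above; the toy dimensions, initialization, and function name are illustrative assumptions, not the original word2vec implementation.

```python
import numpy as np

V, D = 10000, 300                       # toy vocabulary size and embedding dimension
W_in = 0.01 * np.random.randn(V, D)     # input/projection matrix: one vector per word
W_out = 0.01 * np.random.randn(D, V)    # output matrix: one column per word

def cbow_forward(context_ids, target_id):
    """Average the context word vectors, then score every vocabulary word."""
    h = W_in[context_ids].mean(axis=0)        # shared projection: context vectors are averaged
    scores = h @ W_out                        # one logit per vocabulary word
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # full softmax (the cost hierarchical softmax / NEG later avoid)
    return -np.log(probs[target_id]), probs   # negative log-likelihood of the centre word

loss, _ = cbow_forward([1, 2, 4, 5], target_id=3)   # predict word 3 from a window around it
```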
CBOW Model
Continuous Skip-gram Model
• Similar to CBOW, but instead of predicting the current word based on the context, it tries to maximize classification of a word based on another word in the same sentence
• Thus, uses each current word as an input to a log-linear classifier
• Predicts words within a certain window
• Observations
– Larger window size => better quality of the resulting word vectors,
higher training time
– More distant words are usually less related to the current word than
those close to it
– Give less weight to the distant words by sampling them less often in the training examples (see the gensim sketch after this list for both architectures)
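Both architectures are implemented in the gensim library; a hedged usage sketch (parameter values are illustrative, and gensim 4.x argument names are assumed):

```python
from gensim.models import Word2Vec

# toy corpus: any iterable of tokenized sentences works
sentences = [["the", "quick", "brown", "fox"], ["jumps", "over", "the", "lazy", "dog"]]

# sg=0 selects CBOW, sg=1 selects skip-gram; `window` is the context size c
cbow = Word2Vec(sentences, vector_size=100, window=5, sg=0, min_count=1, negative=5)
skip = Word2Vec(sentences, vector_size=100, window=10, sg=1, min_count=1, negative=5)

print(skip.wv.most_similar("fox", topn=3))   # nearest neighbours by cosine similarity
```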
Continuous Skip-gram Model
Results
• Training high dimensional word vectors on a large amount of
data captures “subtle semantic relationships between words”
– Mikolov has made a similar observation for the previous models he
has proposed (e.g. the RNN model, see Mikolov, T., Yih, W. T., & Zweig, G. (2013, June).
Linguistic Regularities in Continuous Space Word Representations. In HLT-NAACL (pp. 746-751).)
Results
• “Comprehensive test set that contains five types of
semantic questions, and nine types of syntactic questions”
– 8869 semantic questions
– 10675 syntactic questions
• “For example, we made a list of 68 large American
cities and the states they belong to, and formed about 2.5K
questions by picking two word pairs at random.”
• Methodology
• Input: ”What is the word that is similar to small in the same
sense as biggest is similar to big?”
• Compute: X = vector("biggest") – vector("big") + vector("small"), then find the word closest to X using cosine similarity (a short gensim sketch follows below)
Results
• Other results are reported as well
Skip-gram Revisited
• Formally, given a training sequence of words, the skip-gram model maximizes an average log probability (reconstructed after the symbol definitions below)
• Where T = size of the sequence (or number of words
considered for training)
• c = window/context size
• Mikolov also notes that, for each word, the model uses a random window size r = random(1..c)
– This way words that are closer to the “input” word have a
higher probability of being used in training than words that
are more distant
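The objective itself (shown as an image on the original slide) is, in the standard formulation of Mikolov et al. (2013):

\frac{1}{T} \sum_{t=1}^{T} \; \sum_{-c \le j \le c,\; j \ne 0} \log p(w_{t+j} \mid w_t)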
Skip-gram Revisited
• As already seen, p(wt+j|wt) should be the output
of a classifier (e.g. softmax)
• wI is the “input” vector representation of word w
• wO is the “output” (or “context”) vector
representation of word w
• Computing log p(wO|wI) with a full softmax takes O(W) time, where W is the vocabulary size (the softmax is written out below)
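Written out (reconstructed from the paper, with v_w the "input" and v'_w the "output" representation of word w), the softmax is:

p(w_O \mid w_I) = \frac{\exp\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\left({v'_{w}}^{\top} v_{w_I}\right)}

The denominator sums over the whole vocabulary, which is where the O(W) cost comes from.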
Skip-gram Alternative View
• Derivation of the training objective (the equations shown on this slide are not preserved in the transcript)
Skip-gram Improvements
• Hierarchical softmax
• Negative sampling
• Subsampling of frequent words
Hierarchical Softmax
• Computationally efficient approximation of
the softmax
• With W output nodes, only about log2(W) of them need to be evaluated to obtain the output probability distribution (see the formula below)
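Concretely, each word is a leaf of a binary tree (a Huffman tree in word2vec), and its probability is the product of the branching decisions along the path from the root; in the notation of Mikolov et al. (reconstructed here, since the slide formula is not preserved):

p(w \mid w_I) = \prod_{j=1}^{L(w)-1} \sigma\Big( [\![\, n(w, j{+}1) = \mathrm{ch}(n(w, j)) \,]\!] \cdot {v'_{n(w,j)}}^{\top} v_{w_I} \Big)

where n(w, j) is the j-th node on the path from the root to w, L(w) is the path length, ch(n) is an arbitrary fixed child of n, and [[x]] is 1 if x is true and -1 otherwise.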
Negative Sampling
• Noise Contrastive Estimation (NCE) is an alternative to
hierarchical softmax
• NCE – “a good model should be able to differentiate data from
noise by means of logistic regression”
• “While NCE can be shown to approximately maximize the log
probability of the softmax, the Skip-gram model is only
concerned with learning high-quality vector representations,
so we are free to simplify NCE as long as the vector
representations retain their quality. We define Negative
sampling (NEG) by the objective”:
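The NEG objective for one (w_I, w_O) training pair, with k "negative" words drawn from a noise distribution P_n(w), is:

\log \sigma\left({v'_{w_O}}^{\top} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \left[ \log \sigma\left(-{v'_{w_i}}^{\top} v_{w_I}\right) \right]

In the paper, the unigram distribution raised to the 3/4 power worked best as P_n(w).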
Subsampling of Frequent Words
• Frequent words provide less information value than
the rare words
• “Moreover, the vector representations of frequent words
do not change significantly after training on several
million examples”
• Each word wi in the training set is discarded with a probability that depends on the word's frequency (see the formula below)
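The discard probability from the paper, with f(w_i) the word's corpus frequency and t a threshold (around 10^{-5}, chosen empirically according to the slide notes), is:

P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}}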
Other remarks
• Mikolov also developed a method to extract relevant n-grams (bigrams and trigrams) using a PMI-like score (see the formula after this list)
• Effects of improvements
• Vectors can also be summed
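The phrase score from the paper, a PMI-like ratio with a discounting coefficient \delta that penalizes very infrequent word pairs, is:

\mathrm{score}(w_a, w_b) = \frac{\mathrm{count}(w_a w_b) - \delta}{\mathrm{count}(w_a) \times \mathrm{count}(w_b)}

Bigrams whose score exceeds a threshold are merged into single tokens, and the procedure can be repeated to obtain longer phrases.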
Other Applications
• Dependency-based contexts
• Word2vec for machine translation
Dependency-based Contexts
• Levy and Goldberg, 2014: Propose to use
dependency-based contexts instead of linear BoW
(windows of size k)
Dependency-based Contexts
• Why?
– Syntactic dependencies are “more inclusive and more
focused” than BoW
– Capture relations to words that are far apart and that
are not used by small window BoW
– Remove “coincidental contexts which are within the
window but not directly related to the target word”
• A possible problem
– Dependency parsing is still somewhat computationally expensive
– However, English Wikipedia can be parsed on a small
cluster and the results can then be persisted
Dependency-based Contexts
• Examples of syntactic contexts
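As a rough illustration (not the exact procedure of Levy and Goldberg, which additionally collapses prepositions into single relations), dependency contexts can be extracted with a parser such as spaCy; the model name below is an assumption:

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes this spaCy model is installed
doc = nlp("Australian scientist discovers star with telescope")

# Emit (word, context) pairs labelled with the dependency relation, instead of the
# linear bag-of-words window used by vanilla word2vec.
pairs = []
for token in doc:
    for child in token.children:
        pairs.append((token.text, f"{child.text}/{child.dep_}"))     # head -> child
        pairs.append((child.text, f"{token.text}/{child.dep_}-1"))   # child -> head (inverse relation)
print(pairs)
```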
Dependency-based Contexts
• Comparison with BoW word2vec
Dependency-based Contexts
• Evaluation on the WordSim353 dataset with pairs of similar words
– Relatedness (topical similarity)
– Similarity (functional similarity)
– Both (these pairs have been ignored)
• Task: “rank the similar pairs in the dataset above the related ones”
• Simple ranking: Pairs ranked by cosine similarity of the embedded words
Dependency-based Contexts
• Main conclusion
• Dependency-based context is more useful to
capture functional similarities (e.g. similarity)
between words
• Linear BoW context is more useful to capture
topical similarities (e.g. relatedness) between
words
– The larger the size of the window, the better it
captures related concepts
• Therefore, dependency-based contexts would
perform poorly in analogy experiments
Estimating Similarities Across Languages
• Given a set of word pairs in two languages (or different
types of corpora) and their associated vector
representations (xi and zi)
• The two word spaces can even have different dimensions (d1 and d2)
• Find a transformation matrix W (of size d2 × d1) such that Wxi approximates zi as closely as possible, for all pairs i
• Solved using stochastic gradient descent
• The transformation is seen as a linear transformation
(rotation and scaling) between the two spaces
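In equation form (with x_i \in \mathbb{R}^{d_1}, z_i \in \mathbb{R}^{d_2} and W \in \mathbb{R}^{d_2 \times d_1}), the problem solved with SGD is:

\min_{W} \sum_{i=1}^{n} \left\lVert W x_i - z_i \right\rVert^2

Since this is a linear least-squares problem it could also be solved in closed form; the authors simply use stochastic gradient descent.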
Estimating Similarities Across Languages
• The authors also illustrate this with a manual rotation (between English and Spanish) and a 2D PCA visualization
Estimating Similarities Across Languages
• The most frequent 5K words from the source language, together with their translations obtained from Google Translate (GT), are used as training data for learning the translation matrix
• Subsequent 1K words in the source language
and their translations are used as a test set
Estimating Similarities Across Languages
• Very simple baselines
More Explanations
• CBOW model with a single input word
Update Equations
• Maximize the conditional probability of observing the actual
output word wO (denote its index in the output layer as j)
given the input context word wI with regard to the weights
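In Rong's notation (a reconstruction, since the slide equations are not preserved), with h the projection of the input word and u_j = {v'_{w_j}}^{\top} h the score of output word w_j, this means maximizing

p(w_O \mid w_I) = y_{j^*} = \frac{\exp(u_{j^*})}{\sum_{j'=1}^{V} \exp(u_{j'})}, \qquad E = -\log p(w_O \mid w_I)

where j^* is the index of the actual output word and E is the loss whose gradients give the update equations.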
CBOW with Larger Context
Skip-gram Model
• Compared to CBOW, the roles of the context and the input word are swapped
Skip-gram Model
More…
• Why does word2vec work?
• It seems that SGNS (skip-gram negative sampling) is actually performing a
(weighted) implicit matrix factorization
• The factorized matrix contains the PMI between words and contexts (shifted by a global constant, log k)
• PMI and implicit matrix factorizations have been widely used in NLP
• It is interesting that the PMI matrix emerges as the optimal solution for
SGNS’s objective
Final
• “PMI matrices are commonly used by the traditional approach
to represent words (often dubbed "distributional semantics").
What's really striking about this discovery, is that word2vec
(specifically, SGNS) is doing something very similar to what
the NLP community has been doing for about 20 years; it's
just doing it really well.”
Omer Levy - http://www.quora.com/How-does-word2vec-work
References
Word2vec & related papers:
• Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector
space. arXiv preprint arXiv:1301.3781.
• Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and
phrases and their compositionality. In Advances in neural information processing systems (pp. 3111-3119).
• Mikolov, T., Yih, W. T., & Zweig, G. (2013, June). Linguistic Regularities in Continuous Space Word Representations.
In HLT-NAACL (pp. 746-751).
Explanations
• Rong, X. (2014). word2vec Parameter Learning Explained. arXiv preprint arXiv:1411.2738.
• Goldberg, Y., & Levy, O. (2014). word2vec Explained: deriving Mikolov et al.'s negative-sampling word-embedding
method. arXiv preprint arXiv:1402.3722.
• Levy, O., & Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. In Advances in Neural
Information Processing Systems (pp. 2177-2185).
• Dyer, C. (2014). Notes on Noise Contrastive Estimation and Negative Sampling. arXiv preprint arXiv:1410.8251.
Applications of word2vec
• Mikolov, T., Le, Q. V., & Sutskever, I. (2013). Exploiting similarities among languages for machine translation. arXiv
preprint arXiv:1309.4168.
• Levy, O., & Goldberg, Y. (2014). Dependency based word embeddings. In Proceedings of the 52nd Annual Meeting
of the Association for Computational Linguistics (Vol. 2, pp. 302-308).