Word Space Models
and
Random Indexing
By
Dileepa Jayakody
Overview
● Text Similarity
● Word Space Model
– Distributional hypothesis
– Distance and Similarity measures
– Pros & Cons
– Dimension Reduction
● Random Indexing
– Example
– Random Indexing Parameters
– Data pre-processing in Random Indexing
– Random Indexing Benefits and Concerns
Text Similarity
● Human readers judge the similarity between texts by
comparing their abstract meaning, i.e. whether they discuss a
similar topic
● How can meaning be modelled in a program?
● In the simplest approach, two texts that contain the same words
are assumed to have a similar meaning
Meaning of a Word
● The meaning of a word can be determined by the context
formed by the surrounding words
● E.g. the meaning of the word “foobar” is determined by the
words that co-occur with it, e.g. "drink", "beverage" or "sodas":
– He drank the foobar at the game.
– Foobar is the number three beverage.
– A case of foobar is cheap compared to other sodas.
– Foobar tastes better when cold.
● A co-occurrence matrix records the context vectors of
words/documents
Word Space Model
● The word-space model is a computational model of meaning used
to represent similarity between words/texts
● It derives the meaning of words by plotting them in an
n-dimensional geometric space
Word Space Model
● The number of dimensions n in the word space can be arbitrarily
large (word * word or word * document)
● The coordinates used to plot each word depend on the
frequency of the contextual features that the word co-occurs
with within a text
● e.g. words that do not co-occur with the word to be plotted
within a given context are assigned a coordinate value of zero
● The set of zero and non-zero values corresponding to the
coordinates of a word in a word-space are recorded in a context
vector
Distributional Hypothesis in Word Space
● To deduce a certain level of meaning, the coordinates of a word
need to be measured relative to the coordinates of other words
● The linguistic concept known as the distributional hypothesis
states that “words that occur in the same contexts tend to have
similar meanings”
● The level of closeness of words in the word-space is called the
spatial proximity of words
● Spatial proximity represents the semantic similarity of words in
word space models
Distance and Similarity Measures
● Cosine Similarity
(a common approach to determining spatial proximity: measure
the cosine of the angle between the plotted context vectors of
the texts; see the sketch below)
● Other measures
– Euclidean
– Lin
– Jaccard
– Dice
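
A minimal sketch of cosine similarity in Python/NumPy, over two invented
toy context vectors (the words and counts are illustrative only):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two context vectors; 1.0 = same direction, 0.0 = orthogonal."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Toy context vectors: co-occurrence counts of two words over four contextual features
foobar = np.array([3.0, 0.0, 2.0, 1.0])
soda   = np.array([2.0, 1.0, 2.0, 0.0])

print(cosine_similarity(foobar, soda))  # ~0.89 -> the words appear in fairly similar contexts
```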
Word Space Models
● Latent Semantic Analysis (document based co-occurrence :
word * document)
● Hyperspace Analogue to Language (word based co-occurrence :
word * word)
● Latent Dirichlet Allocation
● Random Indexing
Word Space Model Pros & Cons
● Pros
– A mathematically well-defined model that lets us define
semantic similarity in mathematical terms
– Constitutes a purely descriptive approach to semantic
modeling; it does not require any previous linguistic or
semantic knowledge
● Cons
– Efficiency and scalability problems due to the high
dimensionality of the context vectors
– The majority of cells in the co-occurrence matrix are zero
(the data sparseness problem)
Dimension Reduction
● Singular Value Decomposition
– a matrix factorization technique that decomposes and
approximates a matrix so that the result has far fewer
columns while preserving most of its similarity structure
(sketched below)
● Non-negative matrix factorization
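
A minimal sketch of SVD-based reduction with NumPy on an invented
word * document count matrix; keeping only the k largest singular values
gives each word a low-dimensional representation, as in LSA:

```python
import numpy as np

# Invented word x document co-occurrence counts (4 words, 5 documents)
M = np.array([[2, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 2, 3, 1, 0],
              [0, 0, 1, 2, 2]], dtype=float)

U, s, Vt = np.linalg.svd(M, full_matrices=False)

k = 2                                 # target dimensionality
words_k = U[:, :k] * s[:k]            # each word is now a k-dimensional vector
print(M.shape, "->", words_k.shape)   # (4, 5) -> (4, 2)
```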
Cons of Dimension Reduction
● Computationally very costly
● One-time operation: the co-occurrence matrix has to be
constructed and transformed from scratch every time new data
is encountered
● Does not avoid the initial huge co-occurrence matrix; requires
sampling the entire data set up front, which is computationally
cumbersome
● No intermediary results: processing can begin only after the
co-occurrence matrix has been constructed and transformed
Random Indexing
Magnus Sahlgren,
Swedish Institute of Computer Science, 2005
● A word space model that is inherently incremental and does not
require a separate dimension reduction phase
● Each word is represented by two vectors
– Index vector : a randomly assigned label, i.e. a vector filled
mostly with zeros except for a handful of +1 and -1 entries at
random positions. Index vectors are expected to be nearly
orthogonal
e.g. school = [0,0,0,......,0,1,0,...........,-1,0,..............]
– Context vector : produced by scanning through the text; each
time a word occurs in a context (e.g. in a document, or within a
sliding context window), that context's d-dimensional index
vector is added to the context vector of the word in question
(a minimal sketch follows)
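
A minimal sketch of the two vector types in Python/NumPy, assuming the
word-based (sliding-window) variant in which the index vectors of
co-occurring words are added; the dimensionality and number of non-zero
entries are illustrative choices (see the Parameters slide):

```python
import numpy as np

rng = np.random.default_rng(42)
d, nonzeros = 1000, 10   # illustrative choices; see the Parameters slide

def index_vector():
    """Random label: mostly zeros, with a handful of +1/-1 at random positions."""
    v = np.zeros(d)
    positions = rng.choice(d, size=nonzeros, replace=False)
    v[positions] = rng.choice([1.0, -1.0], size=nonzeros)
    return v

index_vectors = {}    # word -> fixed random index vector
context_vectors = {}  # word -> accumulated context vector

def observe(word, cooccurring_words):
    """Add the index vectors of co-occurring words to `word`'s context vector."""
    cv = context_vectors.setdefault(word, np.zeros(d))
    for c in cooccurring_words:
        cv += index_vectors.setdefault(c, index_vector())

observe("foobar", ["drank", "game"])   # incremental: call again as new text arrives
```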
Random Indexing Example
● Sentence : "the quick brown fox jumps over the lazy dog."
● With a window size of 2, the context vector for "fox" is
calculated by adding the index vectors as below (see the sketch
at the end of this slide):
● N-2(quick) + N-1(brown) + N1(jumps) + N2(over), where Nk
denotes the k-th permutation of the specified index vector
● Two words will have similar context vectors if the words
appear in similar contexts in the text
● Finally, a document is represented by the sum of the context
vectors of all the words that occur in it
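
A self-contained sketch of the "fox" example, with np.roll standing in
for the permutation Nk; the dimensionality and number of non-zero
entries are again illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000

def index_vector():
    # sparse ternary random label, as on the previous slide
    v = np.zeros(d)
    pos = rng.choice(d, size=10, replace=False)
    v[pos] = rng.choice([1.0, -1.0], size=10)
    return v

sentence = "the quick brown fox jumps over the lazy dog".split()
iv = {w: index_vector() for w in set(sentence)}

focus, window = sentence.index("fox"), 2
ctx_fox = np.zeros(d)
for k in range(-window, window + 1):
    if k != 0 and 0 <= focus + k < len(sentence):
        # np.roll(v, k) plays the role of Nk: a fixed permutation encoding word order
        ctx_fox += np.roll(iv[sentence[focus + k]], k)
```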
Random Indexing Parameters
● The length of the vector
– determines the dimensionality and the storage requirements
● The number of nonzero (+1,-1) entries in the index vector
– has an impact on how the random distortion will be
distributed over the index/context vector.
● Context window size (left and right context boundaries of a
word)
● Weighting schemes for words within the context window (see the
sketch below)
– Constant weighting
– A weighting factor that depends on the distance to the focus
word in the middle of the context window
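
A sketch of the two weighting schemes; the 2^(1 - |k|) decay is one
commonly used distance-based choice, not the only one:

```python
def window_weight(distance, scheme="distance"):
    """Weight for a neighbour |distance| positions from the focus word."""
    if scheme == "constant":
        return 1.0
    # closer neighbours contribute more; 2 ** (1 - |k|) is one common choice
    return 2.0 ** (1 - abs(distance))
```

In the window loop of the previous sketch, each rolled index vector
would simply be multiplied by window_weight(k) before being added.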
Data Preprocessing prior to Random
Indexing
● Filtering stop words : frequent words like and, the, thus, hence
contribute very little context unless phrases are being analysed
● Stemming words : reducing inflected words to their stem, base
or root form. e.g. fishing, fisher, fished > fish
● Lemmatizing words : Closely related to stemming, but reduces
the words to a single base or root form based on the word's
context. e.g : better, good > good
● Preprocessing numbers, smilies, money : replace them with
placeholder tokens <number>, <smiley>, <money> to mark that the
sentence had a number/smiley/amount at that position (a rough
sketch follows)
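
A rough preprocessing sketch using NLTK; the stopword list and WordNet
data must be downloaded once, and the regular expressions for numbers
and smilies are simplified assumptions:

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

# one-time downloads: nltk.download('stopwords'); nltk.download('wordnet') if lemmatizing
stop = set(stopwords.words('english'))
stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()

def preprocess(text):
    text = re.sub(r"\d+([.,]\d+)?", "<number>", text)   # numbers -> placeholder
    text = re.sub(r"[:;]-?[)(DP]", "<smiley>", text)    # crude smiley pattern
    tokens = [t for t in text.lower().split() if t not in stop]
    return [stemmer.stem(t) for t in tokens]            # or lemmatizer.lemmatize(t)

print(preprocess("The fisher fished 3 fishes at the game :)"))
```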
Random Indexing Vs LSA
● In contrast to other WSMs such as LSA, which first construct the
co-occurrence matrix and then extract context vectors, the
Random Indexing approach works backwards
● First context vectors are accumulated, then a co-occurrence
matrix is constructed by collecting the context vectors as rows
of the matrix
● Compresses sparse raw data to a smaller representation without
a separate dimensionality reduction phase as in LSA
Random Indexing Benefits
● The dimensionality of the final context vector of a document
will not depend on the number of documents or words that have
been indexed
● Method is incremental
● No need to sample all texts before results can be produced,
hence intermediate results can be obtained
● Simple computation for context vector generation
● Does not require intensive processing power or memory
Random Indexing Design Concerns
● Random distortion
– The index and context vectors may not be perfectly
orthogonal
– All words will show some similarity, depending on the vector
dimensionality relative to the size of the corpora loaded into
the index (a small dimensionality representing a large corpus
can result in random distortion)
– Have to decide what level of random distortion is acceptable
for a context vector that represents a document built from the
context vectors of individual words
Random Indexing Design Concerns
● Negative similarity scores
● Words with no similarity would normally be expected to get a
cosine similarity score of zero, but with Random Indexing they
sometimes get a negative score due to opposite signs at the
same index positions in the words' context vectors
● The effect is proportional to the size of the corpora and the
dimensionality of the Random Index
Conclusion
● Random Indexing is an efficient and scalable word space model
● Can be used for text analysis applications that require an
incremental approach to analysis
e.g. email clustering and categorisation, online forum analysis
● The optimal parameter values need to be determined in advance
to gain high accuracy: dimensionality, number of non-zero
entries and context window size
Thank you