Text Mining 101
Manohar Swamynathan
August 2012
agenda:
o Text Mining Process Steps
o Calculate Term Weight
o Similarity Distance Measure
o Common Text Mining Techniques
o Appendix
- Required R packages for Text Mining
- Implemented Examples
o R code for obtaining and analyzing tweets.
o RTextTools – Ensemble Classification
o References
Step 1 – Data Assemble
Common text data sources: flat files, social, corporate database, web → Text Corpus
Step 2 - Data Processing

Data Processing Step – Brief Description

Explore Corpus through Exploratory Data Analysis – Understand the types of variables, their functions, permissible values, and so on. Some formats, including HTML and XML, contain tags and other data structures that provide additional metadata.
Convert text to lowercase – This avoids distinguishing between words purely on the basis of case.
Remove numbers (if required) – Numbers may or may not be relevant to our analyses.
Remove punctuation – Punctuation can provide grammatical context that supports understanding. For initial analyses we often ignore punctuation; later we can use it to support the extraction of meaning.
Remove English stop words – Stop words are common words found in a language. Words like for, of and are are common stop words.
Remove own stop words (if required) – Along with English stop words, we can instead or in addition remove our own stop words. The choice of own stop words might depend on the domain of discourse, and might not become apparent until we've done some analysis.
Strip whitespace – Eliminate extra whitespace, i.e., any spacing beyond the single spaces that occur within sentences and between words.
Stemming – Stemming uses an algorithm that removes common word endings from English words, such as "es", "ed" and "'s". Example: "computer" & "computers" become "comput".
Lemmatization – Transform to the dictionary base form, e.g., "produce" & "produced" become "produce".
Sparse terms – We are often not interested in infrequent terms in our documents. Such "sparse" terms should be removed from the document term matrix.
Document term matrix – A document term matrix is simply a matrix with documents as the rows, terms as the columns, and the frequency counts of words as the cells of the matrix.

Python packages – textmining, nltk
R packages – tm, qdap, openNLP
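A minimal sketch of these processing steps in plain Python (no tm or nltk; the stop-word list here is a tiny illustrative one, and stemming/lemmatization are omitted):

```python
import re
from collections import Counter

STOP_WORDS = {"for", "of", "are", "the", "and", "a", "i"}  # tiny illustrative list

def preprocess(text, extra_stop_words=()):
    """Lowercase, remove numbers and punctuation, drop stop words, strip extra whitespace."""
    text = text.lower()
    text = re.sub(r"\d+", " ", text)      # remove numbers (if required)
    text = re.sub(r"[^\w\s]", " ", text)  # remove punctuation
    tokens = text.split()                 # split() also collapses extra whitespace
    stops = STOP_WORDS | set(extra_stop_words)
    return [t for t in tokens if t not in stops]

def document_term_matrix(docs):
    """Documents as rows, terms as columns, word frequency counts as cells."""
    token_lists = [preprocess(d) for d in docs]
    terms = sorted(set(t for toks in token_lists for t in toks))
    rows = [[Counter(toks)[term] for term in terms] for toks in token_lists]
    return terms, rows

terms, dtm = document_term_matrix([
    "Statistics skills and programming skills are equally important!",
    "Statistics skills and domain knowledge are important.",
])
```

Sparse-term removal would then simply drop the columns whose counts are almost all zero.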
Step 3 - Data Visualization
Frequency Chart Word Cloud
Correlation Plot
Step 4 – Models
Clustering
Classification
Sentiment Analysis
Calculate Term Weight (TF * IDF)

Term Frequency – how frequently does a term appear?
Term Frequency TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)

Inverse Document Frequency – how important is a term?
Document Frequency DF(t) = d / D, where d is the number of documents containing a given term and D is the size of the collection of documents.
To normalize we could take log(d/D), but since d < D this gives a negative value, so we invert the ratio inside the log expression. Essentially we are compressing the scale of values so that very large and very small quantities can be compared smoothly.
Inverse Document Frequency IDF(t) = log(Total number of documents / Number of documents with term t in it)

Example:
- Assume we have 10 million documents overall and the word spindle appears in one thousand of them
- Consider 2 documents of 100 words each, each containing the term spindle some number of times

Document | spindle frequency | Total words | TF | IDF | TF * IDF
1 | 3 | 100 | 3/100 = 0.03 | log(10,000,000/1,000) = 4 | 0.03 * 4 = 0.12
2 | 30 | 100 | 30/100 = 0.3 | log(10,000,000/1,000) = 4 | 0.3 * 4 = 1.2
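The arithmetic above can be verified with a short Python sketch (using a base-10 logarithm so that log(10,000,000/1,000) = 4, as in the example):

```python
import math

def tf(term_count, total_terms):
    """Term frequency: occurrences of the term over total terms in the document."""
    return term_count / total_terms

def idf(total_docs, docs_with_term):
    """Inverse document frequency, base-10 log as in the worked example."""
    return math.log10(total_docs / docs_with_term)

def tf_idf(term_count, total_terms, total_docs, docs_with_term):
    return tf(term_count, total_terms) * idf(total_docs, docs_with_term)

# 10 million documents overall; "spindle" appears in 1,000 of them
w1 = tf_idf(3, 100, 10_000_000, 1_000)   # document 1: 0.03 * 4 = 0.12
w2 = tf_idf(30, 100, 10_000_000, 1_000)  # document 2: 0.3  * 4 = 1.2
```

Note that libraries often use the natural log and add smoothing terms, so their absolute weights differ, but the relative ranking of terms is the same.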
Similarity Distance Measure
Example:
Text 1: statistics skills and programming skills are equally important for analytics
Text 2: statistics skills and domain knowledge are important for analytics
Text 3: I like reading books and travelling
The three vectors are:
T1 = (1,2,1,1,0,1,1,1,1,1,0,0,0,0,0,0)
T2 = (1,1,1,0,1,1,0,1,1,1,1,0,0,0,0,0)
T3 = (0,0,1,0,0,0,0,0,0,0,0,1,1,1,1,1)
Degree of Similarity (T1 & T2) = (T1 %*% T2) / (sqrt(sum(T1^2)) * sqrt(sum(T2^2))) = 77%
Degree of Similarity (T1 & T3) = (T1 %*% T3) / (sqrt(sum(T1^2)) * sqrt(sum(T3^2))) = 12%
Additional Reading: Here are detailed papers comparing the efficiency of different distance measures for text documents.
URL – 1) http://home.iitk.ac.in/~spranjal/cs671/project/report.pdf
2) http://users.dsic.upv.es/~prosso/resources/BarronEtAl_ICON09.pdf
Term-document matrix (terms: statistics, skills, and, programming, knowledge, are, equally, important, for, analytics, domain, I, like, reading, books, travelling):
Text 1: 1 2 1 1 0 1 1 1 1 1 0 0 0 0 0 0
Text 2: 1 1 1 0 1 1 0 1 1 1 1 0 0 0 0 0
Text 3: 0 0 1 0 0 0 0 0 0 0 0 1 1 1 1 1
Unlike Euclidean distance, cosine similarity measures the angle between the two vectors: the cosine value will be a number between 0 and 1, and the smaller the angle, the bigger the cosine value/similarity.
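The 77% and 12% figures can be reproduced in Python (a direct transcription of the formula above, with R's %*% dot product written as a sum of products):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

T1 = (1, 2, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0)
T2 = (1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0)
T3 = (0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1)

sim_12 = cosine_similarity(T1, T2)  # ~0.77
sim_13 = cosine_similarity(T1, T3)  # ~0.12
```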
Common Text Mining Techniques
• N-grams
• Shallow Natural Language Processing
• Deep Natural Language Processing
Example: "defense attorney for liberty and montecito"
1-gram:
defense
attorney
for
liberty
and
montecito
2-gram:
defense attorney
attorney for
for liberty
liberty and
and montecito
3-gram:
defense attorney for
attorney for liberty
for liberty and
liberty and montecito
4-gram:
defense attorney for liberty
attorney for liberty and
for liberty and montecito
5-gram:
defense attorney for liberty and
attorney for liberty and montecito
Application:
• Probabilistic language model for predicting the next item in a sequence, in the form of an (n − 1)-order Markov model
• Widely used in probability, communication theory, computational linguistics, and biological sequence analysis
Advantage:
• Relatively simple
• Simply by increasing n, the model can store more context
Disadvantage:
• Semantic value of the items is not considered
n-gram
Definition:
• An n-gram is a contiguous sequence of n items from a given sequence of text
• The items can be letters, words, syllables, or base pairs according to the application
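A word-level n-gram generator is only a few lines of Python (a sketch; the items could equally be letters or syllables):

```python
def ngrams(text, n):
    """Contiguous sequences of n word tokens from the text."""
    tokens = text.lower().split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sentence = "defense attorney for liberty and montecito"
bigrams = ngrams(sentence, 2)
# ['defense attorney', 'attorney for', 'for liberty', 'liberty and', 'and montecito']
```

A sentence of m tokens yields m − n + 1 n-grams, which is why the 6-token example has five 2-grams but only two 5-grams.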
Shallow NLP Technique
Definition:
- Assign a syntactic label (noun, verb, etc.) to a chunk
- Knowledge extraction from text through a semantic/syntactic analysis approach
Application:
- Taxonomy extraction (predefined terms and entities)
- Entities: people, organizations, locations, times, dates, prices, genes, proteins, diseases, medicines
- Concept extraction (main idea or theme)
Advantage:
- Less noisy than n-grams
Disadvantage:
- Does not specify the role of items in the main sentence
Sentence - “The driver from Europe crashed the car with the white bumper”
1-gram with Part of Speech (convert to lowercase & PoS tag):
the – DT (Determiner)
driver – NN (Noun, singular or mass)
from – IN (Preposition or subordinating conjunction)
europe – NNP (Proper noun, singular)
crashed – VBD (Verb, past tense)
the – DT (Determiner)
car – NN (Noun, singular or mass)
with – IN (Preposition or subordinating conjunction)
the – DT (Determiner)
white – JJ (Adjective)
bumper – NN (Noun, singular or mass)
Concept Extraction:
- Remove stop words
- Retain only nouns & verbs
- Bi-grams with nouns & verbs retained:
Bi-gram – PoS
car white – NN JJ
crashed car – VBD NN
driver europe – NN NNP
europe crashed – NNP VBD
white bumper – JJ NN
- 3-grams with nouns & verbs retained:
3-gram – PoS
car white bumper – NN JJ NN
crashed car white – VBD NN JJ
driver europe crashed – NN NNP VBD
europe crashed car – NNP VBD NN
Conclusion:
1-gram: reduced noise, however no clear context
Bi-gram & 3-gram: increased context, however there is information loss
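The concept-extraction steps above can be sketched in Python, assuming the PoS tags have already been produced by a tagger (openNLP in R, or nltk's pos_tag in Python); the stop-word list here is a tiny illustrative one:

```python
STOP_WORDS = {"the", "from", "with"}

# (token, Penn Treebank tag) pairs for the example sentence, assumed already tagged
tagged = [("the", "DT"), ("driver", "NN"), ("from", "IN"), ("europe", "NNP"),
          ("crashed", "VBD"), ("the", "DT"), ("car", "NN"), ("with", "IN"),
          ("the", "DT"), ("white", "JJ"), ("bumper", "NN")]

CONTENT_TAGS = ("NN", "VB", "JJ")  # noun, verb, and adjective tag prefixes

def content_ngrams(tagged_tokens, n):
    """Drop stop words, keep content words, then form n-grams of (token, tag) pairs."""
    kept = [(tok, tag) for tok, tag in tagged_tokens
            if tok not in STOP_WORDS and tag.startswith(CONTENT_TAGS)]
    return [kept[i:i + n] for i in range(len(kept) - n + 1)]

bigrams = content_ngrams(tagged, 2)
# token pairs: driver/europe, europe/crashed, crashed/car, car/white, white/bumper
```

The resulting pairs match the bi-gram table above (up to ordering), showing how filtering on tags before forming n-grams cuts the noise.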
Deep NLP Technique
Definition:
- Extension of shallow NLP
- Detected relationships are expressed as complex constructions to retain the context
- Example relationships: located in, employed by, part of, married to
Applications:
- Develop features and representations appropriate for complex interpretation tasks
- Fraud detection
- Life science: prediction tasks based on complex RNA-sequence data
Example:
The sentence from the previous slide ("The driver from Europe crashed the car with the white bumper") can be represented using triples (Subject : Predicate [Modifier] : Object) without losing the context.
Triples:
driver : crash : car
driver : crash with : bumper
driver : be from : Europe
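The triples above can be represented and queried with a small Python sketch (the parse itself is assumed here; a real system would derive the triples from a dependency parser):

```python
from collections import namedtuple

# Subject : Predicate [Modifier] : Object — in practice extracted from a
# dependency parse; here the parser output for the example sentence is assumed.
Triple = namedtuple("Triple", ["subject", "predicate", "object"])

triples = [
    Triple("driver", "crash", "car"),
    Triple("driver", "crash with", "bumper"),
    Triple("driver", "be from", "Europe"),
]

def objects_of(triples, subject, predicate_prefix=""):
    """Query the extracted relations, e.g. everything the driver crashed."""
    return [t.object for t in triples
            if t.subject == subject and t.predicate.startswith(predicate_prefix)]

crashed_things = objects_of(triples, "driver", "crash")  # ['car', 'bumper']
```

Storing relations as triples like this is what lets the context survive: "bumper" stays attached to "crash with", not floating free as in a bag-of-words model.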
Techniques - Summary

Technique: N-Gram
General steps: convert to lowercase; remove punctuation; remove special characters
Pros: simple technique
Cons: extremely noisy

Technique: Shallow NLP
General steps: POS tagging; lemmatization, i.e., transform to the dictionary base form ("produce" & "produced" become "produce"); stemming, i.e., transform to the root word ("computer" & "computers" become "comput"; "product", "produce" & "produced" become "produc"); chunking, i.e., identify the phrasal constituents in a sentence, including noun/verb phrases, and split the sentence into chunks of semantically related words
Pros: less noisy than n-grams
Cons: computationally expensive for analyzing the structure of texts; does not specify the internal structure or the role of words in the sentence

Technique: Deep NLP
General steps: generate syntactic relationships between each pair of words; extract subject, predicate, negation, object and named entities to form triples
Pros: context of the sentence is retained
Cons: sentence-level analysis is too structured
Appendix
R - Text Mining Process Overview

Step 1 – Data Assemble: Web, Documents, DB → Corpus
Step 2 – Data Processing:
2A - Explore corpus through EDA
2B - Convert text to lowercase
2C - Remove: a) numbers (if required), b) punctuation, c) English stop words, d) own stop words (if required), e) extra whitespace, f) lemmatization/stemming, g) sparse terms
2D - Create document term matrix
Step 3 – Visualization: Frequency Chart, Word Cloud, Correlation Plot
Step 4 – Build Model(s): Clustering, Classification, Sentiment Analysis
Package Name – Category – Description
tm Text Mining A framework for text mining applications
topicmodels Topic Modelling Fit topic models with Latent Dirichlet Allocation (LDA) and Comparative Text Mining (CTM)
wordcloud Visualization Plot a cloud comparing the frequencies of words across documents
lda Topic Modelling Fit topic models with Latent Dirichlet Allocation
wordnet Text Mining Database of English which is commonly used in linguistics and text mining
RTextTools Text Mining Automatic text classification via supervised learning
qdap Sentiment analysis Transcript analysis, text mining and natural language processing
tm.plugin.dc Text Mining A plug-in for package tm to support distributed text mining
tm.plugin.mail Text Mining A plug-in for package tm to handle mail
textir Text Mining A suite of tools for inference about text documents and associated sentiment
tau Text Mining Utilities for text analysis
textcat Text Mining N-gram based text categorization
SnowballC Text Mining Word stemmer
twitteR Text Mining Provides an interface to the Twitter web API
ROAuth Text Mining Allows users to authenticate to the server of their choice (like Twitter)
RColorBrewer Visualization Provides palettes for drawing nice maps shaded according to a variable
ggplot2 Visualization Graphing package implemented on top of the R statistical package; inspired by Leland Wilkinson's seminal Grammar of Graphics
R – Required packages for Text Mining
Example 1 - Obtaining and analyzing tweets
Objective: R code for analyzing tweets relating to #AAA2011 (text mining, topic modelling, network analysis, clustering and
sentiment analysis)
What does the code do?
The code details ten steps in the analysis and visualization of the tweets:
1. Acquiring the raw Twitter data
2. Calculating some basic statistics with the raw Twitter data
3. Calculating some basic retweet statistics
4. Calculating the ratio of retweets to tweets
5. Calculating some basic statistics about URLs in tweets
6. Basic text mining for token frequency and token association analysis (word cloud)
7. Calculating sentiment scores of tweets, including on subsets containing tokens of interest
8. Hierarchical clustering of tokens based on multiscale bootstrap resampling
9. Topic modelling the tweet corpus using latent Dirichlet allocation
10. Network analysis of tweeters based on retweets
Code Source: Code was taken from the following link and tweaked where required to ensure it runs correctly:
https://github.com/benmarwick/AAA2011-Tweets
How to run or test the code? Copy the R code from the Word document in the given sequence (highlighted in yellow) and paste it into your R console.
Example 2 – RTextTools: Supervised Learning for Text Classification Using an Ensemble
RTextTools is a free, open source R machine learning package for automatic text classification.
The package includes nine algorithms for ensemble classification (svm, slda, boosting, bagging, random
forests, glmnet, decision trees, neural networks, and maximum entropy), comprehensive analytics, and
thorough documentation.
Users may use n-fold cross validation to calculate the accuracy of each algorithm on their dataset and
determine which algorithms to use in their ensemble.
(Using a four-ensemble agreement approach, Collingwood and Wilkerson (2012) found that when four of their
algorithms agree on the label of a textual document, the machine label matches the human label over 90% of
the time. The rate is just 45% when only two algorithms agree on the text label.)
Code Source: The code is readily available for download from the following link:
https://github.com/timjurka/RTextTools . It can be run without modification for testing, and it is set up so that changes can be incorporated easily based on our requirements.
Additional Reading: http://www.rtexttools.com/about-the-project.html
References
- Penn Treebank PoS tags: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
- Stanford InfoLab, Finding Similar Items: http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
- Triplet Extraction from Sentences: http://ailab.ijs.si/delia_rusu/Papers/is_2007.pdf
- Shallow and Deep NLP Processing for Ontology Learning, a Quick Overview: http://azouaq.athabascau.ca/publications/Conferences,%20Workshops,%20Books/%5BBC2%5D_KDW_2010.pdf

More Related Content

PDF
Analyzing Text Preprocessing and Feature Selection Methods for Sentiment Anal...
PDF
What is data engineering?
PDF
stackconf 2022: Introduction to Vector Search with Weaviate
PDF
Learning to Rank for Recommender Systems - ACM RecSys 2013 tutorial
PPTX
Machine Learning Tutorial Part - 2 | Machine Learning Tutorial For Beginners ...
PDF
Introduction to Machine Learning with SciKit-Learn
PDF
Gaussian Processes: Applications in Machine Learning
PDF
Pinecone Vector Database.pdf
Analyzing Text Preprocessing and Feature Selection Methods for Sentiment Anal...
What is data engineering?
stackconf 2022: Introduction to Vector Search with Weaviate
Learning to Rank for Recommender Systems - ACM RecSys 2013 tutorial
Machine Learning Tutorial Part - 2 | Machine Learning Tutorial For Beginners ...
Introduction to Machine Learning with SciKit-Learn
Gaussian Processes: Applications in Machine Learning
Pinecone Vector Database.pdf

What's hot (20)

PPTX
Real time analytics
PPTX
Dimension reduction techniques[Feature Selection]
PPTX
(The life of a) Data engineer
PDF
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
PPT
Machine learning Algorithm
PPTX
Introduction to Data Engineering
PDF
A Primer on Entity Resolution
PDF
Supervised vs Unsupervised vs Reinforcement Learning | Edureka
PDF
07 Analysis of Algorithms: Order Statistics
PDF
BIM Data Mining Unit1 by Tekendra Nath Yogi
PPT
K means Clustering Algorithm
PPT
Knowledge discovery thru data mining
PPTX
K-Nearest Neighbor(KNN)
PPTX
bigquery.pptx
PPTX
Introduction to Named Entity Recognition
PPTX
Explainability for Natural Language Processing
PDF
05 Classification And Prediction
PDF
Scikit Learn Tutorial | Machine Learning with Python | Python for Data Scienc...
PDF
Recommender Systems (Machine Learning Summer School 2014 @ CMU)
PDF
Bias and variance trade off
Real time analytics
Dimension reduction techniques[Feature Selection]
(The life of a) Data engineer
Image Segmentation with Deep Learning - Xavier Giro & Carles Ventura - ISSonD...
Machine learning Algorithm
Introduction to Data Engineering
A Primer on Entity Resolution
Supervised vs Unsupervised vs Reinforcement Learning | Edureka
07 Analysis of Algorithms: Order Statistics
BIM Data Mining Unit1 by Tekendra Nath Yogi
K means Clustering Algorithm
Knowledge discovery thru data mining
K-Nearest Neighbor(KNN)
bigquery.pptx
Introduction to Named Entity Recognition
Explainability for Natural Language Processing
05 Classification And Prediction
Scikit Learn Tutorial | Machine Learning with Python | Python for Data Scienc...
Recommender Systems (Machine Learning Summer School 2014 @ CMU)
Bias and variance trade off
Ad

Viewers also liked (20)

PPT
Text mining, By Hadi Mohammadzadeh
PDF
Elements of Text Mining Part - I
PPT
Big Data & Text Mining
PDF
Text Mining with R -- an Analysis of Twitter Data
PPTX
Textmining Information Extraction
PPT
Predictive Text Analytics
PPT
Enabling Exploration Through Text Analytics
PDF
European Transport Networks
DOC
Log Data Mining
PDF
Information Extraction
PPTX
Text Analytics for Dummies 2010
PPT
Unmanned railway tracking and anti collision system using gsm
PPTX
TextMining with R
PPTX
Quick Tour of Text Mining
PPT
Log Mining: Beyond Log Analysis
PDF
Project report for railway security monotorin system
PPTX
Introduction to Text Mining
PDF
Text mining
PPT
Textmining Introduction
PDF
Efficient Practices for Large Scale Text Mining Process
Text mining, By Hadi Mohammadzadeh
Elements of Text Mining Part - I
Big Data & Text Mining
Text Mining with R -- an Analysis of Twitter Data
Textmining Information Extraction
Predictive Text Analytics
Enabling Exploration Through Text Analytics
European Transport Networks
Log Data Mining
Information Extraction
Text Analytics for Dummies 2010
Unmanned railway tracking and anti collision system using gsm
TextMining with R
Quick Tour of Text Mining
Log Mining: Beyond Log Analysis
Project report for railway security monotorin system
Introduction to Text Mining
Text mining
Textmining Introduction
Efficient Practices for Large Scale Text Mining Process
Ad

Similar to Text Mining Analytics 101 (20)

PPTX
3. introduction to text mining
PPTX
3. introduction to text mining
PPT
Text Mining
PDF
Data Science - Part XI - Text Analytics
PPTX
Cork AI Meetup Number 3
PPTX
Text mining introduction-1
PPT
Copy of 10text (2)
PPT
Chapter 10 Data Mining Techniques
PPTX
MODULE 4-Text Analytics.pptx
PPT
Textmining
PDF
Big Data Palooza Talk: Aspects of Semantic Processing
PPTX
Text Mining Infrastructure in R
PDF
05_nlp_Vectorization_ML_model_in_text_analysis.pdf
PPTX
Using topic modelling frameworks for NLP and semantic search
PDF
Analysing Demonetisation through Text Mining using Live Twitter Data!
PPT
Text Analytics for Semantic Computing
PDF
Shilpa shukla processing_text
PDF
Crash-course in Natural Language Processing
PDF
Conceptual foundations of text mining and preprocessing steps nfaoui el_habib
3. introduction to text mining
3. introduction to text mining
Text Mining
Data Science - Part XI - Text Analytics
Cork AI Meetup Number 3
Text mining introduction-1
Copy of 10text (2)
Chapter 10 Data Mining Techniques
MODULE 4-Text Analytics.pptx
Textmining
Big Data Palooza Talk: Aspects of Semantic Processing
Text Mining Infrastructure in R
05_nlp_Vectorization_ML_model_in_text_analysis.pdf
Using topic modelling frameworks for NLP and semantic search
Analysing Demonetisation through Text Mining using Live Twitter Data!
Text Analytics for Semantic Computing
Shilpa shukla processing_text
Crash-course in Natural Language Processing
Conceptual foundations of text mining and preprocessing steps nfaoui el_habib

Recently uploaded (20)

PDF
Mega Projects Data Mega Projects Data
PPTX
Introduction to Basics of Ethical Hacking and Penetration Testing -Unit No. 1...
PDF
Galatica Smart Energy Infrastructure Startup Pitch Deck
PDF
annual-report-2024-2025 original latest.
PPTX
iec ppt-1 pptx icmr ppt on rehabilitation.pptx
PPT
Reliability_Chapter_ presentation 1221.5784
PPTX
Introduction to Knowledge Engineering Part 1
PDF
.pdf is not working space design for the following data for the following dat...
PPTX
Business Acumen Training GuidePresentation.pptx
PPTX
DISORDERS OF THE LIVER, GALLBLADDER AND PANCREASE (1).pptx
PDF
TRAFFIC-MANAGEMENT-AND-ACCIDENT-INVESTIGATION-WITH-DRIVING-PDF-FILE.pdf
PPT
Quality review (1)_presentation of this 21
PPTX
AI Strategy room jwfjksfksfjsjsjsjsjfsjfsj
PDF
22.Patil - Early prediction of Alzheimer’s disease using convolutional neural...
PPTX
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
PDF
Lecture1 pattern recognition............
PPTX
Acceptance and paychological effects of mandatory extra coach I classes.pptx
PDF
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
PDF
Business Analytics and business intelligence.pdf
PPTX
Computer network topology notes for revision
Mega Projects Data Mega Projects Data
Introduction to Basics of Ethical Hacking and Penetration Testing -Unit No. 1...
Galatica Smart Energy Infrastructure Startup Pitch Deck
annual-report-2024-2025 original latest.
iec ppt-1 pptx icmr ppt on rehabilitation.pptx
Reliability_Chapter_ presentation 1221.5784
Introduction to Knowledge Engineering Part 1
.pdf is not working space design for the following data for the following dat...
Business Acumen Training GuidePresentation.pptx
DISORDERS OF THE LIVER, GALLBLADDER AND PANCREASE (1).pptx
TRAFFIC-MANAGEMENT-AND-ACCIDENT-INVESTIGATION-WITH-DRIVING-PDF-FILE.pdf
Quality review (1)_presentation of this 21
AI Strategy room jwfjksfksfjsjsjsjsjfsjfsj
22.Patil - Early prediction of Alzheimer’s disease using convolutional neural...
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
Lecture1 pattern recognition............
Acceptance and paychological effects of mandatory extra coach I classes.pptx
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
Business Analytics and business intelligence.pdf
Computer network topology notes for revision

Text Mining Analytics 101

  • 1. Text Mining 101 Manohar Swamynathan August 2012
  • 2. agenda: o Text Mining Process Steps o Calculate Term Weight o Similarity Distance Measure o Common Text Mining Techniques o Appendix - Required R packages for Text Mining - Implemented Examples o R code for obtaining and analyzing tweets. o RTextTools – Ensemble Classification o References Manohar Swamynathan Aug 2012
  • 3. Step 1 – Data assemble Text Corpus Flat files Social Corporate Database CommonTextDataSources
  • 4. Data Processing Step Brief Description Explore Corpus through Exploratory Data Analysis Understand the types of variables, their functions, permissible values, and so on. Some formats including html and xml contain tags and other data structures that provide more metadata. Convert text to lowercase This is to avoid distinguish between words simply on case. Remove Number(if required) Numbers may or may not be relevant to our analyses. Remove Punctuations Punctuation can provide grammatical context which supports understanding. Often for initial analyses we ignore the punctuation. Later we will use punctuation to support the extraction of meaning. Remove English stop words Stop words are common words found in a language. Words like for, of, are, etc are common stop words. Remove Own stop words(if required) Along with English stop words, we could instead or in addition remove our own stop words. The choice of own stop word might depend on the domain of discourse, and might not become apparent until we've done some analysis. Strip whitespace Eliminate extra white-spaces. Any additional space that is not the space that occur within the sentence or between words. Stemming Stemming uses an algorithm that removes common word endings for English words, such as “es”, “ed” and “'s”. Example, "computer" & "computers" become "comput" Lemmatization Transform to dictionary base form i.e., "produce" & "produced" become "produce" Sparse terms We are often not interested in infrequent terms in our documents. Such “sparse" terms should be removed from the document term matrix. Document term matrix A document term matrix is simply a matrix with documents as the rows and terms as the columns and a count of the frequency of words as the cells of the matrix. Step 2 - Data Processing 4 Python packages – textmining, nltk R packages - tm, qdap, openNLP
  • 5. Step 3 - Data Visualization Frequency Chart Word Cloud Correlation Plot
  • 6. Step 4 – Models Clustering Classification Sentiment Analysis Document
  • 7. Term Frequency - How frequently term appears? Term Frequency TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document) Example: Calculate Term Weight (TF * IDF) Inverse Document Frequency - How important a term is? Document Frequency DF = d (number of documents containing a given term) / D (the size of the collection of documents) To normalize take log(d/D), but often D > d and log(d/D) will give negative value. So invert the ratio inside log expression. Essentially we are compressing the scale of values so that very large or very small quantities are smoothly compared Inverse Document Frequency IDF(t) = log(Total number of documents / Number of documents with term t in it) 7 - Assume we have overall 10 million documents and the word spindle appears in one thousand of these - Consider 2 documents containing 100 total words each, and contains term spindle x number of times Document spindle – Frequency Total Words TF IDF TF * IDF 1 3 100 3/100 = 0.03 log(10,000,000/1,000) = 4 0.03 * 4 = 0.12 2 30 100 30/100 = .3 log(10,000,000/1,000) = 4 0.3 * 4 = 1.2
  • 8. Similarity Distance Measure Example: Text 1: statistics skills and programming skills are equally important for analytics Text 2: statistics skills and domain knowledge are important for analytics Text 3: I like reading books and travelling The three vectors are: T1 = (1,2,1,1,0,1,1,1,1,1,0,0,0,0,0,0) T2 = (1,1,1,0,1,1,0,1,1,1,1,0,0,0,0,0) T3 = (0,0,1,0,0,0,0,0,0,0,0,1,1,1,1,1) Degree of Similarity (T1 & T2) = (T1 %*% T2) / (sqrt(sum(T1^2)) * sqrt(sum(T2^2))) = 77% Degree of Similarity (T1 & T3) = (T1 %*% T3) / (sqrt(sum(T1^2)) * sqrt(sum(T3^2))) = 12% Additional Reading: Here is a detailed paper on comparing the efficiency of different distance measures for text documents. URL – 1) http://home.iitk.ac.in/~spranjal/cs671/project/report.pdf 2) http://users.dsic.upv.es/~prosso/resources/BarronEtAl_ICON09.pdf statisticsskills and programming knowledge are equally important for analytics domain I like reading books travelling Text 1 1 2 1 1 0 1 1 1 1 1 0 0 0 0 0 0 Text 2 1 1 1 0 1 1 0 1 1 1 1 0 0 0 0 0 Text 3 0 0 1 0 0 0 0 0 0 0 0 1 1 1 1 1 X Y Euclidean Cosine - cosine value will be a number between 0 and 1 - Smaller the angel bigger the cosine value/similarity 8
  • 9. Common Text Mining Techniques • N-grams • Shallow Natural Language Processing • Deep Natural Language Processing
  • 10. Example: "defense attorney for liberty and montecito” 1-gram: defense attorney for liberty and montecito 2-gram: defense attorney for liberty and montecito attorney for liberty and attorney for 3-gram: defense attorney for liberty and montecito attorney for liberty for liberty and liberty and montecito 4-gram: defense attorney for liberty attorney for liberty and for liberty and montecito 5-gram: defense attorney for liberty and montecito attorney for liberty and montecito Application:  Probabilistic language model for predicting the next item in a sequence in the form of a (n − 1)  Widely used in probability, communication theory, computational linguistics, biological sequence analysis Advantage:  Relatively simple  Simply increasing n, model can be used to store more context Disadvantage:  Semantic value of the item is not considered n-gram Definition: • n-gram is a contiguous sequence of n items from a given sequence of text • The items can be letters, words, syllables or base pairs according to the application 10
  • 11. Application: - Taxonomy extraction (predefined terms and entities) - Entities: People, organizations, locations, times, dates, prices, genes, proteins, diseases, medicines - Concept extraction (main idea or a theme) Advantage: - Less noisy than n-grams Disadvantage: - Does not specify role of items in the main sentence Shallow NLP Technique Definition: - Assign a syntactic label (noun, verb etc.) to a chunk - Knowledge extraction from text through semantic/syntactic analysis approach 11
  • 12. Sentence - “The driver from Europe crashed the car with the white bumper” 1-gram the driver from europe crashed the car with the white bumper Part of Speech DT – Determiner NN - Noun, singular or mass IN - Preposition or subordinating conjunction NNP - Proper Noun, singular VBD - Verb, past tense DT – Determiner NN - Noun, singular or mass IN - Preposition or subordinating conjunction DT – Determiner JJ – Adjective NN - Noun, singular or mass - Convert to lowercase & PoS tag Concept Extraction: - Remove Stop words - Retain only Noun’s & Verb’s - Bi-gram with Noun’s & Verb’s retained Bi-gram PoS car white NN JJ crashed car VBD NN driver europe NN NNP europe crashed NNP VBD white bumper JJ NN 3-gram PoS car white bumper NN JJ NN crashed car white VBD NN JJ driver europe crashed NN NNP VBD europe crashed car NNP VBD NN - 3-gram with Noun’s & Verb’s retained Conclusion: 1-gram: Reduced noise, however no clear context Bi-gram & 3-gram: Increased context, however there is a information loss Shallow NLP Technique 12 Stop words Noun/Verb
  • 13. Definition: - Extension to the shallow NLP - Detected relationships are expressed as complex construction to retain the context - Example relationships: Located in, employed by, part of, married to Applications: - Develop features and representations appropriate for complex interpretation tasks - Fraud detection - Life science: prediction activities based on complex RNA-Sequence Deep NLP technique Example: The above sentence can be represented using triples (Subject: Predicate [Modifier]: Object) without loosing the context. Triples: driver : crash : car driver : crash with : bumper driver : be from : Europe 13
  • 14. Technique General Steps Pros Cons N-Gram - Convert to lowercase - Remove punctuations - Remove special characters Simple technique Extremely noisy Shallow NLP technique - POS tagging - Lemmatization i.e., transform to dictionary base form i.e., "produce" & "produced" become "produce" - Stemming i.e., transform to root word i.e., 1) "computer" & "computers" become "comput" 2) "product", "produce" & "produced" become "produc" - Chunking i.e., identify the phrasal constituents in a sentence , including noun/verb phrase etc., and splits the sentence into chunks of semantically related words Less noisy than N- Grams Computationally expensive solution for analyzing the structure of texts. Does not specify the internal structure or the role of words in the sentence Deep NLP technique - Generate syntactic relationship between each pair of words - Extract subject, predicate, nagation, objecct and named entity to form triples. Context of the sentence is retained. Sentence level analysis is too structured Techniques - Summary 14
  • 16. 2A - Explore Corpus through EDA 2B - Convert text to lowercase 2C - Remove a) Numbers(if required) b) Punctuations c) English stop words d) Own stop words(if required) e) Strip whitespace f) Lemmatization/Stemming g) Sparse terms 2D - Create document term matrix Step 3 - Visualization Corpus Web Documents Step 1 – Data Assemble Step 2 – Data Processing Step 4 – Build Model(s)  Clustering  Classification  Sentiment Analysis FrequencyChartWordCloudCorrelationPlot R - Text Mining Process Overview 16 DB
  • 17. Package Name Category Description tm Text Mining A framework for text mining applications topicmodels Topic Modelling Fit topic models with Latent Dirichlet Allocation (LDA) and Comparative Text Mining (CTM) wordcloud Visualization Plot a cloud comparing the frequencies of words across documents lda Topic Modelling Fit topic models with Latent Dirichlet Allocation wordnet Text Mining Database of English which is commonly used in linguistics and text mining RTextTools Text Mining Automatic text classification via supervised learning qdap Sentiment analysis Transcript analysis, text mining and natural language processing tm.plugin.dc Text Mining A plug-in for package tm to support distributed text mining tm.plugin.mail Text Mining A plug-in for package tm to handle mail textir Text Mining A suite of tools for inference about text documents and associated sentiment tau Text Mining Utilities for text analysis textcat Text Mining N-gram based text categorization SnowballC Text Mining Word stemmer twitteR Text Mining Provides an interface to the Twitter web API ROAuth Text Mining Allows users to authenticate to the server of their choice (like Twitter) RColorBrewer Visualization The packages provides palettes for drawing nice maps shaded according to a variable ggplot2 Visualization Graphing package implemented on top of the R statistical package. Inspired by the Grammar of Graphics seminal work of Leland Wilkinson R – Required packages for Text Mining 17
• 18. Example 1 – Obtaining and analyzing tweets

  Objective: R code for analyzing tweets relating to #AAA2011 (text mining, topic modelling, network analysis, clustering and sentiment analysis).

  What does the code do? The code details ten steps in the analysis and visualization of the tweets:
    1. Acquiring the raw Twitter data
    2. Calculating some basic statistics with the raw Twitter data
    3. Calculating some basic retweet statistics
    4. Calculating the ratio of retweets to tweets
    5. Calculating some basic statistics about URLs in tweets
    6. Basic text mining for token frequency and token association analysis (word cloud)
    7. Calculating sentiment scores of tweets, including on subsets containing tokens of interest
    8. Hierarchical clustering of tokens based on multiscale bootstrap resampling
    9. Topic modelling the tweet corpus using latent Dirichlet allocation
    10. Network analysis of tweeters based on retweets

  Code source: the code was taken from the link below, tweaked and extended where required so that it runs cleanly:
  https://github.com/benmarwick/AAA2011-Tweets

  How to run or test the code: from the Word doc, copy the R code in the given sequence (highlighted in yellow) and paste it into your R console.
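Step 7 above scores tweet sentiment. A common simple approach (the one used in the linked code, following Jeffrey Breen's method) scores each tweet as the count of positive words minus the count of negative words; the tiny word lists below are illustrative stand-ins for the full opinion lexicons the real code loads.

```r
# Sketch of lexicon-based sentiment scoring: score = #positive - #negative.
score_sentiment <- function(text,
                            pos = c("good", "great", "love", "excellent"),
                            neg = c("bad", "poor", "hate", "terrible")) {
  words <- strsplit(tolower(gsub("[[:punct:]]", "", text)), "\\s+")[[1]]
  sum(words %in% pos) - sum(words %in% neg)
}

score_sentiment("Great talk, love the examples, but terrible wifi")
# 2 positive ("great", "love") - 1 negative ("terrible") = 1
```

Applying this over all tweets gives a score distribution, which the example code then slices by subsets containing tokens of interest.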
• 19. Example 2 – RTextTools: supervised learning for text classification using an ensemble

  RTextTools is a free, open source R machine learning package for automatic text classification. The package includes nine algorithms for ensemble classification (SVM, SLDA, boosting, bagging, random forests, glmnet, decision trees, neural networks, and maximum entropy), comprehensive analytics, and thorough documentation. Users may apply n-fold cross-validation to estimate the accuracy of each algorithm on their dataset and decide which algorithms to include in their ensemble. (Using a four-ensemble agreement approach, Collingwood and Wilkerson (2012) found that when four of their algorithms agreed on the label of a textual document, the machine label matched the human label over 90% of the time; the rate was just 45% when only two algorithms agreed.)

  Code source: the code is readily available for download from https://github.com/timjurka/RTextTools . It can be run without modification for testing, and is set up so that changes can be incorporated easily as required.

  Additional reading: http://www.rtexttools.com/about-the-project.html
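The agreement idea behind the Collingwood and Wilkerson result can be sketched in base R (this is an illustration of the voting logic, not the RTextTools API itself): given each algorithm's predicted label for a document, accept the majority label only when at least a minimum number of algorithms agree, and abstain otherwise.

```r
# Sketch of ensemble agreement: return the most common predicted label
# if it reaches min_agree votes, otherwise NA (leave for human coding).
ensemble_label <- function(preds, min_agree = 4) {
  tab <- sort(table(preds), decreasing = TRUE)
  if (tab[1] >= min_agree) names(tab)[1] else NA_character_
}

ensemble_label(c("sports", "sports", "sports", "sports", "politics"))  # "sports"
ensemble_label(c("sports", "sports", "politics", "tech", "health"))    # NA
```

Raising min_agree trades coverage for accuracy: fewer documents get an automatic label, but those that do are labeled with higher confidence, which is exactly the 90% vs 45% trade-off quoted above.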
• 20. References

  Penn Treebank POS tag set: https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html
  Stanford InfoLab, "Finding Similar Items": http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
  "Triplet Extraction from Sentences": http://ailab.ijs.si/delia_rusu/Papers/is_2007.pdf
  "Shallow and Deep NLP Processing for Ontology Learning, a Quick Overview": http://azouaq.athabascau.ca/publications/Conferences,%20Workshops,%20Books/%5BBC2%5D_KDW_2010.pdf