Text Categorization as a Graph Classification Problem
1
Outline
Section 1 Introduction
Section 2 Review of the related work
Section 3 Preliminary concepts
Section 4 Proposed approaches
Section 5 Experimental evaluation
Section 6 Conclusion
References
2
1. What is text mining?
2. Bag-of-words and its issues
3. Graph-of-words - A new approach
Introduction
3
Introduction
What is text mining?
Search engines
Understand users’ queries, e.g., “What is Google?”
Find matching websites or documents (ranking).
Product recommendation
Understand product descriptions.
Understand product reviews.
4
Introduction
Bag-of-words and its issues
Definition
A text (such as a sentence or a document) is represented as the bag (multiset)
of its words.
5
Introduction
Bag-of-words and its issues
Example
“He likes watching action movies, she likes watching romantic movies”
⇒ [ “He”, “likes”, “watching”, “action”, “movies”, “she”, “likes”, “watching”,
“romantic”, “movies” ].
The sentence has 10 words but only 7 distinct ones. Assigning each token its count in the bag gives the 10-entry vector [ 1, 2, 2, 1, 2, 1, 2, 2, 1, 2 ]; over the distinct vocabulary (in first-occurrence order) the count vector is [ 1, 2, 2, 1, 2, 1, 1 ].
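As a quick illustration, a minimal sketch in plain Python (the tokenization and the first-occurrence vocabulary order are our assumptions; the slide prescribes no tooling):

from collections import Counter

tokens = ("He likes watching action movies, she likes watching romantic movies"
          .lower().replace(",", "").split())
bag = Counter(tokens)                  # the bag (multiset) of words
vocab = list(dict.fromkeys(tokens))    # distinct words, first-occurrence order
vector = [bag[term] for term in vocab]
print(vocab)   # ['he', 'likes', 'watching', 'action', 'movies', 'she', 'romantic']
print(vector)  # [1, 2, 2, 1, 2, 1, 1]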
6
Introduction
Bag-of-words and its issues
Problems
There can be millions of n-gram features when dealing with thousands of news
articles, yet only a few hundred are actually present in each article, with
only tens of class labels.
N-grams also fail to capture word inversion and subset matching (e.g., “article
about news” vs. “news article”).
7
Introduction
Graph-of-words - A new approach
8
Consider the task of text categorization as a graph
classification problem.
Represent textual documents as graph-of-words
instead of traditional n-gram bag-of-words.
Extract more discriminative features that
correspond to long-distance n-grams through
frequent subgraph mining.
Introduction
Graph-of-words - A new approach
9
Summary:
1. Construct a graph-of-words for each document
in the collection
2. For each graph from step 1, extract its main
core (for cost-effectiveness)
3. Find all frequent subgraphs of size n in the
set of main cores obtained in step 2
4. Remove isomorphic subgraphs to reduce the
total number of features
5. Finally, extract n-gram features from the
remaining text (the full pipeline is sketched below)
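A compact sketch of that pipeline (every helper here is a hypothetical stand-in: graph_of_words, mine_frequent_subgraphs and canonical_code are sketched in later sections; nx.k_core with k omitted returns the main core):

import networkx as nx

def mine_features(documents, min_sup):
    # Step 1: one graph-of-words per document
    graphs = [graph_of_words(doc.split()) for doc in documents]
    # Step 2: keep only each graph's main core
    cores = [nx.k_core(g) for g in graphs]
    # Step 3: frequent subgraphs over the set of main cores
    subgraphs = mine_frequent_subgraphs(cores, min_sup)
    # Step 4: canonical codes collapse isomorphic duplicates
    # (step 5, n-gram extraction on the remaining text, is omitted here)
    return {canonical_code(sg.edges()) for sg in subgraphs}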
● Markov et al. (2007) proposed subgraph feature mining on graph-of-words
representations.
● Kudo and Matsumoto (2004), Matsumoto et al. (2005), Jiang et al. (2010) and
Arora et al. (2010) suggested parse- and dependency-tree representations for
text categorization, but the support value (i.e., the total number of
features) was not discussed and could potentially lead to millions of
subgraphs on standard datasets.
Review of the related work
10
1. Graph-of-words model
2. Subgraph isomorphism
3. K-core and main core
Preliminary Concepts
11
Definition
An undirected graph G = (V, E), where
V is the set of vertices, representing the unique terms of the document
E is the set of edges, representing co-occurrences between terms within a
fixed-size sliding window
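A minimal construction sketch with networkx (the window size of 4 is an illustrative choice, not prescribed by this slide):

import networkx as nx

def graph_of_words(tokens, window=4):
    # One node per unique term; an undirected edge links two terms
    # whenever they co-occur within a sliding window of `window` tokens.
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for i, term in enumerate(tokens):
        for other in tokens[i + 1 : i + window]:
            if other != term:
                g.add_edge(term, other)
    return g

g = graph_of_words("he likes watching action movies she likes watching "
                   "romantic movies".split())
print(g.number_of_nodes(), g.number_of_edges())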
12
Preliminary Concepts
Graph-of-words model
Definition
Given two graphs G and H, an isomorphism of G and H is a bijection f between the
vertex sets of G and H such that any two vertices u and v of G are adjacent in G if
and only if f(u) and f(v) are adjacent in H.
Example
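The original slide illustrates this with a figure; as a runnable stand-in (networkx is our choice of library, not the slides'):

import networkx as nx

G = nx.cycle_graph([0, 1, 2, 3])          # a 4-cycle
H = nx.cycle_graph(["a", "b", "c", "d"])  # the same 4-cycle, relabeled
P = nx.path_graph(4)                      # a path on 4 vertices

print(nx.is_isomorphic(G, H))  # True: a relabeling bijection exists
print(nx.is_isomorphic(G, P))  # False: the degree sequences differ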
13
Preliminary Concepts
Subgraph isomorphism
Definition
A subgraph H = (V’, E’), induced by a subset of vertices V’ ⊆ V and the
corresponding subset of edges E’ ⊆ E of a graph G = (V, E), is called a k-core,
where k is an integer, if and only if H is a maximal subgraph satisfying
∀ v ∈ V’, deg(v) ≥ k.
k-core: a maximal connected subgraph whose vertices all have degree at least k
within that subgraph.
main core: the k-core with the largest k.
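networkx exposes both notions directly; a small sketch (the example graph is ours):

import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (2, 3), (2, 4), (3, 4), (4, 5)])
print(nx.core_number(G))                  # {1: 2, 2: 2, 3: 2, 4: 2, 5: 1}
print(sorted(nx.k_core(G, k=2).nodes()))  # the 2-core: [1, 2, 3, 4]
print(sorted(nx.k_core(G).nodes()))       # k omitted: the main core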
Preliminary Concepts
K-core and main core
14
Example
Fig. Two 3-cores of a graph
Preliminary Concepts
K-core and main core
15
1. Unsupervised feature mining using gSpan
2. Find frequent subgraphs using gSpan
3. Unsupervised support selection
4. Considered classifiers
5. Multiclass scenario
6. Main core mining using gSpan
Proposed approaches
16
Idea
● Consider the task of text categorization as a graph classification problem.
● Represent each textual document as a graph-of-words, then extract subgraph
features to train a graph classifier.
● Each document is a separate graph-of-words, and the collection of documents
thus corresponds to a set of graphs.
Proposed approaches
Unsupervised feature mining using gSpan
17
Given
● D = {G0, G1, G2, ..., GN}, a graph dataset
● support(g), the number of graphs in D in which g is a subgraph
● minSup, the minimum support threshold
Problem
Find all subgraphs g such that support(g) ≥ minSup
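Support can be computed naively with one subgraph-isomorphism test per graph; a sketch using networkx's VF2 matcher (gSpan exists precisely to avoid this brute force):

import networkx as nx
from networkx.algorithms import isomorphism

def support(g, dataset):
    # Number of graphs in `dataset` containing g as a subgraph.
    return sum(isomorphism.GraphMatcher(G, g).subgraph_is_isomorphic()
               for G in dataset)

def is_frequent(g, dataset, min_sup):
    return support(g, dataset) >= min_sup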
Proposed approaches
Find frequent subgraphs using gSpan
18
Frequent subgraph: a subgraph common to multiple graphs in D (at least minSup of them)
Proposed approaches
Find frequent subgraphs using gSpan
19
Baseline solution
● Enumerate all subgraphs and test for isomorphism throughout the
collection ⇒ very expensive
Proposed solution
● Use gSpan (graph-based Substructure pattern mining)
Proposed approaches
Find frequent subgraphs using gSpan
20
gSpan idea:
1. For each graph, build a lexicographic order of all the edges using
depth-first search (DFS) traversal
2. Assign to each graph a unique minimum DFS code.
3. Based on all these DFS codes, a hierarchical search tree is constructed at
the collection level.
4. By pre-order traversal of this tree, gSpan discovers all frequent subgraphs
with the required support. (A toy illustration of canonical codes follows below.)
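Minimum DFS codes themselves are involved; the toy below brute-forces a canonical code with the same key property (equal codes iff isomorphic) on tiny unlabeled graphs, as a stand-in for gSpan's construction:

from itertools import permutations

def canonical_code(edges):
    # Lexicographically smallest relabeled edge list over all vertex
    # permutations: exponential, so for illustration only.
    nodes = sorted({v for e in edges for v in e}, key=str)
    best = None
    for perm in permutations(range(len(nodes))):
        relabel = dict(zip(nodes, perm))
        code = tuple(sorted(tuple(sorted((relabel[u], relabel[v])))
                            for u, v in edges))
        if best is None or code < best:
            best = code
    return best

print(canonical_code([(0, 1), (1, 2), (2, 0)]) ==
      canonical_code([("a", "b"), ("b", "c"), ("c", "a")]))  # True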
Proposed approaches
Find frequent subgraphs using gSpan
21
Note:
● Given two graphs G and G’, G is isomorphic to G’ if and only if
minDFS(G) = minDFS(G’).
● A lower support results in:
1. more features
2. longer mining
3. longer feature-vector generation
4. longer learning.
Proposed approaches
Find frequent subgraphs using gSpan
22
Given
D = {G0, G1, G2, ..., GN}, a graph dataset
support(g), the number of graphs in D in which g is a subgraph
minSup, the minimum support threshold
Proposed approaches
Unsupervised support selection (Select best minSup)
23
Situation
The classifier can only improve its goodness of fit with more features
⇒ it is likely that the lowest support will lead to the best test accuracy.
As the support decreases, the number of features at first increases slowly, up
to a point where it increases exponentially
⇒ this makes both feature-vector generation and learning expensive,
especially with multiple classes.
Proposed approaches
Unsupervised support selection (Select best minSup)
24
Problem
Select the best minSup
Solution
Use the elbow method
Proposed approaches
Unsupervised support selection (Select best minSup)
25
Elbow method
Example: selecting the number of clusters in k-means clustering.
Choose a number of clusters such that adding another cluster does not give a
much better modeling of the data.
Proposed approaches
Unsupervised support selection (Select best minSup)
26
Elbow method
In our case:
Choose a minSup such that decreasing it by one unit would:
not give much better accuracy,
but increase the number of features significantly. (A small detector is sketched below.)
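One way to automate this (a sketch; the max-distance-to-chord heuristic is our choice of elbow detector, and the numbers are made up):

def elbow(points):
    # Return the (support, n_features) pair farthest from the straight
    # line joining the curve's endpoints. `points` must be sorted by
    # support; normalize the axes first if their scales differ wildly.
    (x0, y0), (xn, yn) = points[0], points[-1]
    dx, dy = xn - x0, yn - y0
    def dist(p):
        return abs(dy * (p[0] - x0) - dx * (p[1] - y0))
    return max(points[1:-1], key=dist)

curve = [(10, 120), (8, 150), (6, 210), (4, 380), (2, 2100), (1, 9500)]
print(elbow(curve))  # (2, 2100): below this support, features blow up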
Proposed approaches
Unsupervised support selection (Select best minSup)
27
Standard baseline classifiers
K-nearest neighbors (kNN) (Larkey and Croft, 1996)
Naive Bayes (NB) (McCallum and Nigam, 1998)
Linear Support Vector Machines (SVM) (Joachims, 1998)
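A baseline harness might look like this (scikit-learn is an assumption, since the slides name the classifiers but not a library; the data is a random stand-in for binary subgraph-feature vectors):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.integers(0, 2, size=(200, 50))
y_train = rng.integers(0, 2, size=200)
X_test = rng.integers(0, 2, size=(50, 50))
y_test = rng.integers(0, 2, size=50)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("NB", MultinomialNB()),
                  ("SVM", LinearSVC())]:
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))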
Proposed approaches
Considered classifiers
28
Problem
A single support value might lead to some classes generating a tremendous
number of features (hundreds of thousands) and others only a few (a few
hundred subgraphs)
⇒ an extremely low support is needed to include discriminative features for
these minority classes,
⇒ resulting in an exponential number of features because of the majority
classes.
Proposed approaches
Multiclass scenario
29
Solution
Mine frequent subgraphs per class using the same relative support (in %).
Then aggregate the per-class feature sets into a global one, at the cost of a
supervised process (but one that still avoids cross-validation); see the sketch below.
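Sketched in code (mine_frequent_subgraphs is a hypothetical gSpan-like miner, and canonical_code is the toy canonical form from earlier, so a plain set union collapses isomorphic duplicates):

def per_class_features(graphs_by_class, rel_support):
    # Mine each class at the same relative support, then union the
    # per-class feature sets into one global feature set.
    features = set()
    for label, graphs in graphs_by_class.items():
        min_sup = max(1, round(rel_support * len(graphs)))
        for sg in mine_frequent_subgraphs(graphs, min_sup):
            features.add(canonical_code(sg.edges()))
    return features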
Proposed approaches
Multiclass scenario
30
Problem
The number of features (subgraphs) to extract is very large when mining
frequent subgraphs directly!
How can discriminative features be extracted while maintaining word dependence
and retaining as much classification information as possible?
Solution
Reduce the graphs’ size by keeping only their densest subgraphs (the main cores).
Proposed approaches
Main core mining using gSpan
31
Implementation
Main cores are extracted with the Batagelj-Zaveršnik algorithm, an optimal,
linear-time k-core decomposition algorithm (implemented in C++); gSpan then
mines the extracted main cores.
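For reference, a readable peeling variant (a lazy-heap sketch in Python; the original algorithm uses bucket sort over degrees to reach O(|V| + |E|)):

import heapq

def core_numbers(adj):
    # Core decomposition by repeatedly peeling a minimum-degree vertex.
    # `adj` maps each vertex to the set of its neighbours.
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    core, k, done = {}, 0, set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in done or d != deg[v]:
            continue                 # stale heap entry
        done.add(v)
        k = max(k, d)                # the core level never decreases
        core[v] = k
        for u in adj[v]:
            if u not in done:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return core

adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3, 5}, 5: {4}}
cores = core_numbers(adj)
print({v for v in cores if cores[v] == max(cores.values())})  # main core: {1, 2, 3, 4}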
Proposed approaches
Main core mining using gSpan
32
1. Datasets
2. Results
3. Unsupervised support selection
4. Distribution of mined n-grams
Experimental evaluation
33
Experimental evaluation
Datasets
34
● WebKB: 4 most frequent categories among labeled web pages from
various CS departments
(2,803 for training and 1,396 for test)
● R8: 8 most frequent categories of Reuters-21578, a set of labeled news
articles from the 1987 Reuters newswire
(5,485 for training and 2,189 for test)
● LingSpam: 2,893 emails classified as spam or legitimate messages
(10 sets for 10-fold cross-validation)
● Amazon: 8,000 product reviews over four different sub-collections
(books, DVDs, electronics and kitchen appliances) classified as positive
or negative
(1,600 for training and 400 for test)
Experimental evaluation
Datasets
35
● Multi-class document categorization: WebKB and R8
● Spam detection: LingSpam
● Opinion mining: Amazon
Together these cover all the main subtasks of text categorization.
Table 1: Total number of features (n-grams or subgraphs) vs. the number of features present only in main
cores, along with the reduction of the dimension of the feature space, on all four datasets.
36
Experimental evaluation
Results
Table 2: Test accuracy and macro-average F1-score on four standard datasets. Bold font marks the best
performance in a column; * indicates statistical significance at p < 0.05 using a micro sign test with regard
to the SVM baseline of the same column. MC corresponds to unsupervised feature selection using the
main core of each graph-of-words to extract n-gram and subgraph features. gSpan mining support
values are 1.6% (WebKB), 7% (R8), 4% (LingSpam) and 0.5% (Amazon).
37
Experimental evaluation
Results
Figure 2: Distribution of non-zero n-gram feature values before and after unsupervised feature selection
(main core retention) on the R8 dataset.
38
Experimental evaluation
Results
Figure 3: Number of subgraph features and test accuracy per support (%) on the WebKB (left) and R8
(right) datasets: in black, the support value selected via the elbow method; in red, the test accuracy of
the SVM baseline.
Experimental evaluation
Unsupervised support selection
39
Figure 4: Distribution of n-grams (standard and long-distance ones) among all the features on the WebKB
dataset.
Experimental evaluation
Distribution of mined n-grams
40
Figure 5: Distribution of n-grams (standard and long-distance ones) among the top 5% most
discriminative features for SVM on the WebKB dataset.
Experimental evaluation
Distribution of mined n-grams
41
Conclusion
A new graph-of-words approach for text mining.
Considers text categorization as a graph classification problem.
Achieved: extraction of more discriminative features, corresponding to
long-distance n-grams, through frequent subgraph mining.
42
References
Rousseau, F., Kiagias, E., Vazirgiannis, M. Text Categorization as a Graph Classification Problem.
http://www.aclweb.org/anthology/P15-1164
Yan, X., Han, J. gSpan: Graph-Based Substructure Pattern Mining.
http://cs.ucsb.edu/~xyan/papers/gSpan-short.pdf
Determining the number of clusters in a data set (the elbow method).
https://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set
Graph isomorphism.
https://en.wikipedia.org/wiki/Graph_isomorphism
43