Nguyen Thanh Sang
Network Science Lab
Dept. of Artificial Intelligence
The Catholic University of Korea
E-mail: sang.ngt99@gmail.com
2023-04-07
• Paper
• Introduction
• Problem
• Contributions
• Framework
• Experiment
• Conclusion
Hierarchical graph structure
• Hierarchical structure is pervasive across complex networks, with examples spanning neuroscience, economics, social organizations, urban systems, communications, pharmaceuticals, and biology, particularly metabolic and gene networks.
Problems
• Many embedding methods can be used for node classification by converting the graph structure into sequences via random walks and computing co-occurrence statistics.
=> These are unsupervised algorithms and cannot perform node classification in an end-to-end fashion.
Problems
• Graph convolutional networks (GCNs) generate node embeddings by combining information from neighborhoods.
→ They lack a "graph pooling" mechanism, which restricts the scale of the receptive field.
→ They therefore have difficulty obtaining adequate global information.
→ Adding too many convolutional layers over-smooths the output features, making them indistinguishable.
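The over-smoothing effect is easy to reproduce numerically: repeatedly applying the symmetrically normalized adjacency (the GCN propagation operator, stripped of weights and nonlinearities for illustration) drives all node features toward near-identical directions. A minimal NumPy sketch on a toy path graph:

```python
import numpy as np

# 4-node path graph with self-loops, symmetrically normalized (GCN operator)
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))   # D^-1/2 (A+I) D^-1/2

H = np.eye(4)                              # start from distinct one-hot features
for _ in range(50):                        # many propagation steps, no weights
    H = A_norm @ H

H_dir = H / np.linalg.norm(H, axis=1, keepdims=True)
cos = H_dir @ H_dir.T                      # pairwise cosine similarity of rows
print(np.round(cos, 3))                    # all entries ~1.0: nodes indistinguishable
```

After 50 hops every row points in essentially the same direction, which is exactly the indistinguishability the slide warns about.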
Problems
• Some recent methods try to capture global information through deeper models.
→ However, they are either unsupervised or require many training examples.
→ They are still not capable of solving the semi-supervised node classification task directly.
Contributions
• First work to design a deep hierarchical model for the semi-supervised node classification task, consisting of more layers with larger receptive fields.
→ More global information is obtained through the coarsening and refining procedures.
• Applies deep architectures and the pooling mechanism to classification tasks.
• The proposed model outperforms other state-of-the-art approaches and gains a considerable improvement over them when very few labeled samples are provided for each class.
Overview
• In each coarsening layer, a GCN is applied to learn node representations.
• A coarsening operation is then performed to aggregate structurally similar nodes into hyper-nodes.
→ Each hyper-node represents a local structure of the original graph, which facilitates exploiting global structures of the graph.
• Following the coarsening layers, symmetric graph refining layers are applied to restore the original graph structure for node classification.
→ This comprehensively captures node information from local to global perspectives, leading to better node representations.
Graph Convolutional Networks
• Each layer updates node representations by aggregating neighbor features through the normalized adjacency matrix.
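This update can be sketched in NumPy using the standard GCN propagation rule H' = σ(D̂^(-1/2)(A+I)D̂^(-1/2) H W); the toy graph, features, and weights below are illustrative:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # degree vector
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# toy 3-node path graph, 2-d features, identity weights
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3, 2)
W = np.eye(2)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Each output row mixes a node's own features with those of its neighbors, weighted by degree.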
Graph Coarsening Layer
• Two steps:
o Structural equivalence grouping (SEG): if two nodes share the same set of neighbors, they are considered structurally equivalent.
→ The two nodes are assigned to one hyper-node.
o Structural similarity grouping (SSG): the structural similarity between the remaining unmarked node pairs is then calculated.
=> The pair with the largest structural similarity forms a new hyper-node, and both nodes are marked. The largest structural similarity is determined by comparing the normalized connection strength:
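The SEG step (the SSG similarity formula appears on the slide image and is omitted here) can be sketched by bucketing nodes on their neighbor sets; the dictionary-based graph below is a hypothetical example:

```python
from collections import defaultdict

def structural_equivalence_grouping(adj):
    """Group nodes that share the exact same neighbor set (SEG step).
    `adj` maps node -> set of neighbors; returns groups with 2+ members."""
    buckets = defaultdict(list)
    for node, neighbors in adj.items():
        key = frozenset(neighbors)           # same neighbor set -> same bucket
        buckets[key].append(node)
    return [group for group in buckets.values() if len(group) > 1]

# nodes 1 and 2 both connect only to node 0 -> structurally equivalent
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
print(structural_equivalence_grouping(adj))  # [[1, 2]]
```

Nodes 1 and 2 end up in one hyper-node; nodes 3 and 4 have distinct neighbor sets and would be left to the SSG step.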
Graph Coarsening Layer
• The hidden node embedding matrix is determined as:
• Update adjacency matrix:
→ The hidden representation is fed into the next layer as input.
→ The node embeddings generated in each successive coarsening layer are therefore of lower resolution.
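With an assignment matrix M (M[i, j] = 1 if original node i belongs to hyper-node j; this notation is assumed here, not taken from the slides), the coarsened graph follows the standard graph-pooling construction A' = MᵀAM, H' = MᵀH:

```python
import numpy as np

# assignment matrix: 4 original nodes -> 2 hyper-nodes
# (illustrative values, not the paper's learned grouping)
M = np.array([[1., 0.],
              [1., 0.],
              [0., 1.],
              [0., 1.]])
A = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 0.],
              [1., 0., 0., 1.],
              [0., 0., 1., 0.]])
H = np.random.rand(4, 8)          # node features

A_coarse = M.T @ A @ M            # 2x2 adjacency over hyper-nodes
H_coarse = M.T @ H                # aggregate features into hyper-nodes
print(A_coarse.shape, H_coarse.shape)  # (2, 2) (2, 8)
```

Each coarsening layer halves (roughly) the node count this way, which is why the resulting embeddings are of progressively lower resolution.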
Graph Refining Layer
• To restore the original topological structure of the graph and further facilitate node classification,
→ the same number of graph refining layers as coarsening layers is stacked.
• Each refining layer contains two steps:
o generating node embedding vectors
o restoring node representations.
• A residual connection links each pair of corresponding coarsening and refining layers.
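A minimal sketch of the restore step, assuming hyper-node embeddings are broadcast back through the coarsening assignment matrix and combined with the residual from the matching coarsening layer (the exact combination in the paper may differ):

```python
import numpy as np

def refine(M, H_coarse, H_skip):
    """Broadcast hyper-node embeddings back to original nodes (M @ H_coarse)
    and add the residual from the matching coarsening layer."""
    return M @ H_coarse + H_skip

M = np.array([[1., 0.], [1., 0.], [0., 1.]])  # 3 nodes -> 2 hyper-nodes
H_coarse = np.array([[1., 2.], [3., 4.]])
H_skip = np.zeros((3, 2))                      # residual from coarsening side
H_restored = refine(M, H_coarse, H_skip)
print(H_restored)
```

Nodes that were merged into the same hyper-node start from the same restored embedding; the residual connection is what re-injects their node-level differences.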
Node Weight Embedding and Multiple Channels
• Multi-channel mechanisms help explore features in different subspaces; H-GCN employs multiple channels on GCN to jointly obtain rich information at each layer.
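One way to sketch the multi-channel idea: run one GCN propagation per channel weight matrix and combine the outputs (mean-combining is an assumption for illustration; the paper's exact combination rule may differ):

```python
import numpy as np

def multi_channel_gcn(A_norm, H, Ws):
    """One GCN propagation per channel weight matrix, combined by averaging
    (the combination rule here is an assumption, not the paper's)."""
    outs = [np.maximum(A_norm @ H @ W, 0.0) for W in Ws]
    return np.mean(outs, axis=0)

rng = np.random.default_rng(0)
A_norm = np.eye(3)                           # pre-normalized adjacency (toy)
H = rng.random((3, 4))
Ws = [rng.random((4, 4)) for _ in range(4)]  # four channels, as in the paper's best setting
out = multi_channel_gcn(A_norm, H, Ws)
print(out.shape)  # (3, 4)
```

Each channel projects the features into its own subspace before the results are fused, which is the "rich information in different subspaces" effect the slide describes.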
The Output Layer
• Softmax classifier:
• The loss function is defined as the cross-entropy of predictions over the labeled nodes:
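These two pieces, softmax over class logits and cross-entropy restricted to the labeled nodes, can be sketched with toy values:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def masked_cross_entropy(logits, labels, labeled_mask):
    """Cross-entropy averaged only over the labeled nodes (semi-supervised loss)."""
    probs = softmax(logits)
    picked = probs[np.arange(len(labels)), labels]  # probability of the true class
    return -np.log(picked[labeled_mask]).mean()

logits = np.array([[2.0, 0.1], [0.2, 1.5], [0.0, 0.0]])
labels = np.array([0, 1, 0])
mask = np.array([True, True, False])       # third node is unlabeled
loss = masked_cross_entropy(logits, labels, mask)
print(loss)
```

The mask is what makes the objective semi-supervised: unlabeled nodes influence the loss only indirectly, through the propagated representations.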
Datasets
• Four widely used datasets: three citation networks (Cora, Citeseer, Pubmed) and one knowledge graph (NELL).
Experiment
• The proposed method consistently outperforms other state-of-the-art methods, which verifies the effectiveness of the proposed coarsening and refining mechanisms.
• DeepWalk cannot model attribute information, which heavily restricts its performance.
• H-GCN manages to capture global information through different levels of convolutional layers and achieves the best results on all four datasets.
Impact of Scale of Training Data
• The proposed method outperforms the other baselines in all cases.
• As the number of labeled samples decreases, it obtains an even more considerable margin over these baseline algorithms.
• H-GCN, with its increased receptive fields, is well suited when training data is extremely scarce and is therefore of significant practical value.
Coarsening, refining layers and embedding weights
• H-GCN performs better than the variant without the coarsening mechanism on all datasets.
=> The coarsening and refining mechanisms contribute to the performance improvement, since they obtain global information through larger receptive fields.
• The model with node weight embeddings also performs better,
→ confirming the necessity of adding this embedding vector to the node embeddings.
Effect of the number of coarsening layers and channels
• Since fewer labeled nodes are supplied on NELL than on the other datasets, deeper models with larger receptive fields are needed.
• When too many coarsening layers are added, performance drops due to overfitting.
• Performance improves as the number of channels increases, up to four channels => multiple channels help capture accurate node features.
• Too many channels inevitably introduce redundant parameters into the model, likewise leading to overfitting.
Conclusions
• A novel hierarchical graph convolutional network for the semi-supervised node classification task.
• The H-GCN model consists of coarsening layers and symmetric refining layers.
• By grouping structurally similar nodes into hyper-nodes, the model obtains a larger receptive field and enables sufficient information propagation.
• Compared with previous work, H-GCN is deeper and can fully utilize both local and global information.
• H-GCN achieves substantial gains over prior methods when labeled data is extremely scarce.

NS-CUK Seminar: S.T.Nguyen, Review on "Hierarchical Graph Convolutional Networks for Semi-supervised Node Classification", IJCAI 2019
