Ling Liu
Distributed Data Intensive Systems Lab (DiSL)
School of Computer Science
Georgia Institute of Technology
Joint work with Yang Zhou, Kisung Lee, Qi Zhang
Why Graph Processing?
Graphs are everywhere!
2.84 PB adjacency list; 2.84 PB edge list
[Image: Human connectome. Gerhard et al., Frontiers in Neuroinformatics 5(3), 2011]
[Paul Burkhardt, Chris Waring, An NSA Big Graph experiment]
Graphs
•  Real information networks are represented
as graphs and analyzed using graph theory
– Internet
– Road networks
– Utility Grids
– Protein Interactions
•  A graph is
– a collection of binary relationships, or
– networks of pairwise interactions, including
social networks and digital networks
Why Distributed Graph
Processing?
They are getting bigger!
Scale of Big Graphs
•  Social Scale: 1 billion vertices, 100 billion edges
–  adjacency matrix: >10^8 GB
–  adjacency list: >10^3 GB
–  edge list: >10^3 GB
•  Web Scale: 50 billion vertices, 1 trillion edges
–  adjacency matrix: >10^11 GB
–  adjacency list: >10^4 GB
–  edge list: >10^4 GB
•  Brain Scale: 100 billion vertices, 100 trillion edges
–  adjacency matrix: >10^20 GB
–  adjacency list: >10^6 GB
–  edge list: >10^6 GB
1 terabyte (TB) = 1,024 GB ~ 10^3 GB
1 petabyte (PB) = 1,024 TB ~ 10^6 GB
1 exabyte (EB) = 1,024 PB ~ 10^9 GB
1 zettabyte (ZB) = 1,024 EB ~ 10^12 GB
1 yottabyte (YB) = 1,024 ZB ~ 10^15 GB
[Paul Burkhardt, Chris Waring 2013]
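These estimates follow from simple back-of-the-envelope arithmetic. A minimal sketch, assuming one bit per adjacency-matrix cell and 8 bytes per stored vertex ID (illustrative choices, not figures taken from the slides):

```python
# Rough storage estimates for big graphs (illustrative assumptions:
# 1 bit per adjacency-matrix cell, 8 bytes per stored vertex ID).
GB = 10**9  # decimal gigabyte

def adjacency_matrix_gb(n_vertices):
    # n^2 cells, 1 bit each
    return (n_vertices ** 2) / 8 / GB

def adjacency_list_gb(n_edges):
    # one 8-byte destination ID per edge (ignoring per-vertex overhead)
    return (n_edges * 8) / GB

def edge_list_gb(n_edges):
    # two 8-byte endpoints per edge
    return (n_edges * 2 * 8) / GB

# Social scale: 1 billion vertices, 100 billion edges
print(f"{adjacency_matrix_gb(1e9):.1e} GB")   # ~1.2e+08 GB
print(f"{edge_list_gb(1e11):.1e} GB")         # ~1.6e+03 GB

# Web scale: 50 billion vertices, 1 trillion edges
print(f"{adjacency_matrix_gb(5e10):.1e} GB")  # ~3.1e+11 GB
print(f"{edge_list_gb(1e12):.1e} GB")         # ~1.6e+04 GB
```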
Classifications of Graphs
•  Simple Graphs vs. Multigraphs
–  Simple graph: allows only one edge per pair of vertices
–  Multigraph: allows multiple parallel edges between a pair
of vertices
•  Small Graphs vs. Big Graphs
–  Small graph: the whole graph and its processing fit in
main memory
–  Big Graph: the whole graph and its processing cannot fit
in main memory
How hard can it be?
•  Difficult to parallelize (data/computation)
–  irregular data access increases latency
–  skewed data distribution creates bottlenecks
•  giant component
•  high degree vertices, highly skewed edge weight distribution
•  Increased size imposes greater challenge . . .
–  Latency
–  resource contention (i.e., hot-spotting)
•  Algorithm complexity really matters!
–  A run time of O(n^2) on a trillion-node graph is not practical!
How do we store and process graphs?
•  Conventional approach is to store and compute
in-memory.
–  SHARED-MEMORY
•  Parallel Random Access Machine (PRAM)
•  data in globally-shared memory
•  implicit communication by updating memory
•  fast random access
–  DISTRIBUTED-MEMORY
•  Bulk Synchronous Parallel (BSP)
•  data distributed to local, private memory
•  explicit communication by sending messages
•  easier to scale by adding more machines
[Paul Burkhardt, Chris Waring 2013]
Top Supercomputer Installations
[Paul Burkhardt, Chris Waring 2013]
Big Data Makes Bigger
Challenges to Graph Problems
•  Increasing volume, velocity and variety of Big Data
are posing significant challenges to scalable
graph algorithms.
•  Big Graph Challenges:
–  How will graph applications adapt to big data at
petabyte scale?
–  Ability to store and process Big Graphs impacts
typical data structures.
•  Big graphs challenge our conventional thinking
on both algorithms and computing architectures
Existing Graph Processing Systems
•  Single PC Systems
–  GraphLab [Low et al., UAI'10]
–  GraphChi [Kyrola et al., OSDI'12]
–  X-Stream [Roy et al., SOSP'13]
–  TurboGraph [Han et al., KDD‘13]
–  GraphLego [Zhou, Liu et al., ACM HPDC‘15]
•  Distributed Shared Memory Systems
–  Pregel [Malewicz et al., SIGMOD'10]
–  Giraph/Hama – Apache Software Foundation
–  Distributed GraphLab [Low et al., VLDB'12]
–  PowerGraph [Gonzalez et al., OSDI'12]
–  SPARK-GraphX [Gonzalez et al., OSDI'14]
–  PathGraph [Yuan et al., IEEE SC 2014]
–  GraphMap [Lee, Liu et al., IEEE SC 2015]
Parallel Graph Processing:
Challenges
•  Structure driven computation
–  Storage and Data Transfer Issues
•  Irregular Graph Structure and
Computation Model
–  Storage and Data/Computation Partitioning Issues
–  Partitioning vs. Load/Resource Balancing
Parallel Graph Processing:
Opportunities
•  Extend Existing Paradigms
–  Vertex centric
–  Edge centric
•  BUILD NEW FRAMEWORKS!
–  Centralized Approaches
•  GraphLego [ACM HPDC 2015] / GraphTwist [VLDB 2015]
–  Distributed Approaches
•  GraphMap [IEEE SC 2015], PathGraph [IEEE SC 2014]
Parallel Graph Processing
•  Part I: Single Machine Approaches
•  Can big graphs be processed on a single
machine?
•  How to effectively maximize sequential access and
minimize random access?
•  Part II: Distributed Approaches
•  How to effectively partition a big graph for graph
computation?
•  Are large clusters always better for big graphs?
Part I:
Parallel Graph Processing
(single machine)
Vertex-centric Computation Model
•  Think like a vertex
•  vertex_scatter(vertex v)
–  send updates over outgoing edges of v
•  vertex_gather(vertex v)
–  apply updates from inbound edges of v
•  repeat the computation iterations
–  for all vertices v
•  vertex_scatter(v)
–  for all vertices v
•  vertex_gather(v)
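As an illustration of this loop, here is a minimal single-threaded sketch of vertex-centric scatter/gather for PageRank-style updates; the toy graph, damping constant and per-vertex update buffers are assumptions for the example, not any particular system's API:

```python
# Minimal vertex-centric scatter/gather sketch (illustrative, not a real system's API).
out_edges = {0: [1, 2], 1: [2], 2: [0]}   # vertex -> outgoing neighbors
rank = {v: 1.0 for v in out_edges}        # vertex state
updates = {v: [] for v in out_edges}      # inbound updates buffered per vertex

def vertex_scatter(v):
    # send updates over outgoing edges of v
    for dst in out_edges[v]:
        updates[dst].append(rank[v] / len(out_edges[v]))

def vertex_gather(v):
    # apply updates from inbound edges of v
    rank[v] = 0.15 + 0.85 * sum(updates[v])
    updates[v].clear()

for _ in range(10):                       # repeat the computation iterations
    for v in out_edges:
        vertex_scatter(v)
    for v in out_edges:
        vertex_gather(v)
print(rank)
```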
Edge-centric
Computation Model (X-Stream)
•  Think like an edge (source vertex and destination vertex)
•  edge_scatter(edge e)
–  send update over e (from source vertex to destination vertex)
•  update_gather(update u)
–  apply update u to u.destination
•  repeat the computation iterations
–  for all edges e
•  edge_scatter(e)
–  for all updates u
•  update_gather(u)
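A matching sketch of the edge-centric loop, streaming over an edge list and then over the produced updates; again a toy illustration of the model under assumed data structures, not X-Stream's actual implementation:

```python
# Edge-centric (X-Stream-style) sketch: stream edges, then stream updates.
from collections import namedtuple

Edge = namedtuple("Edge", "src dst")
Update = namedtuple("Update", "dst value")

edges = [Edge(0, 1), Edge(0, 2), Edge(1, 2), Edge(2, 0)]
out_degree = {0: 2, 1: 1, 2: 1}
rank = {0: 1.0, 1: 1.0, 2: 1.0}

def edge_scatter(e, updates):
    # send an update over edge e, from source vertex to destination vertex
    updates.append(Update(e.dst, rank[e.src] / out_degree[e.src]))

def update_gather(u, acc):
    # apply update u to u.destination
    acc[u.dst] = acc.get(u.dst, 0.0) + u.value

for _ in range(10):
    updates, acc = [], {}
    for e in edges:        # for all edges e: edge_scatter(e)
        edge_scatter(e, updates)
    for u in updates:      # for all updates u: update_gather(u)
        update_gather(u, acc)
    for v in rank:
        rank[v] = 0.15 + 0.85 * acc.get(v, 0.0)
print(rank)
```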
Challenges of Big Graphs
•  Graph size vs. limited resources
–  Handling big graphs with billions of vertices and edges in memory may
require hundreds of gigabytes of DRAM
•  High-degree vertices
–  In uk-union with 133.6M vertices: the maximum indegree is 6,366,525 and
the maximum outdegree is 22,429
•  Skewed vertex degree distribution
–  In Yahoo web with 1.4B vertices: the average vertex degree is 4.7, 49% of
the vertices have degree zero and the maximum indegree is 7,637,656
•  Skewed edge weight distribution
–  In DBLP with 0.96M vertices: among the 389 coauthors of Elisa Bertino, she has
only one coauthored paper with 198 coauthors, two coauthored papers with
74 coauthors, three coauthored papers with 30 coauthors, and four or more
coauthored papers with 87 coauthors
Real-world Big Graphs
Graph Processing Systems: Challenges
•  Diverse types of processed graphs
–  Simple graph: does not allow parallel edges (multiple edges) between a pair of
vertices
–  Multigraph: allows parallel edges between a pair of vertices
•  Different kinds of graph applications
–  Matrix-vector multiplication and graph traversal with a cost of O(n^2)
–  Matrix-matrix multiplication with a cost of O(n^3)
•  Random access
–  Random access is inefficient for both access and storage; many random accesses
are unavoidable but hurt the performance of graph processing systems
•  Workload imbalance
–  Computing on a vertex and its edges is much faster than accessing the vertex
state and its edge data in memory or on disk
–  The computation workloads on different vertices are significantly
imbalanced due to the highly skewed vertex degree distribution
GraphLego: Our Approach
•  Flexible multi-level hierarchical graph parallel abstractions
–  Model a large graph as a 3D cube with source vertex, destination vertex and
edge weight as the dimensions
–  Partition a big graph by slice-, strip- and dice-based graph partitioning
•  Access Locality Optimization
–  Dice-based data placement: store a large graph on disk by minimizing non-
sequential disk access and enabling more structured in-memory access
–  Construct partition-to-chunk index and vertex-to-partition index to facilitate fast
access to slices, strips and dices
–  Implement partition-level in-memory gzip compression to optimize disk I/Os
•  Optimization for Partitioning Parameters
–  Build a regression-based learning model to discover the latent relationship
between the number of partitions and the runtime
Modeling a Graph as a 3D Cube
•  Model a directed graph G=(V,E,W) as a 3D cube I=(S,D,E,W)
with source vertices (S=V), destination vertices (D=V) and
edge weights (W) as the three dimensions
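One way to picture the cube: each edge is a point (source, destination, weight), and slice/strip/dice partitions are range partitions along one, two or all three of those axes. A minimal sketch of bucketing edges into a p × q × r grid of small blocks; the partition counts and the ID-range bucketing rule are illustrative assumptions, not GraphLego's exact partitioning functions:

```python
# Sketch: bucket the edges of G=(V,E,W) into a p x q x r grid of small blocks
# along the (source, destination, weight) dimensions. Illustrative only.
def block_of(src, dst, weight, n_vertices, max_weight, p, q, r):
    i = src * p // n_vertices                      # index along the source-vertex dimension
    j = dst * q // n_vertices                      # index along the destination-vertex dimension
    k = min(int(weight * r / max_weight), r - 1)   # index along the edge-weight dimension
    return (i, j, k)

edges = [(0, 5, 1.0), (3, 2, 0.5), (7, 1, 2.0), (6, 6, 1.5)]
blocks = {}
for src, dst, w in edges:
    key = block_of(src, dst, w, n_vertices=8, max_weight=2.0, p=2, q=2, r=2)
    blocks.setdefault(key, []).append((src, dst, w))

for key, part in sorted(blocks.items()):
    print(key, part)   # each non-empty cell corresponds to one small partition block
```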
Multi-level Hierarchical Graph Parallel
Abstractions
Slice Partitioning: DBLP Example
Strip Partitioning of DB Slice
Dice Partitioning: An Example
SVP: v1, v2, v3, v5, v6, v7, v11, v12, v15
DVP: v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16
Dice Partition Storage (OEDs)
Advantage: Multi-level Hierarchical
Graph Parallel Abstractions
•  Choose smaller subgraph blocks such as dice partition or
strip partition to balance the parallel computation efficiency
among partition blocks
•  Use larger subgraph blocks such as slice partition or strip
partition to maximize sequential access and minimize
random access
Partial Aggregation in Parallel
•  Aggregation operation
–  Partially scatter vertex updates in parallel
–  Partially gather vertex updates in parallel
•  Two-level Graph Computation Parallelism
–  Parallel partial update at the subgraph partition level (slice, strip or dice)
–  Parallel partial update at the vertex level
[Figure: Scatter/Gather phases with multi-threaded, asynchronous updates]
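A minimal sketch of the two-level parallelism, with a thread pool computing partial updates per partition block and a final merge into vertex state; the thread count and the in-memory partition layout are assumptions for illustration:

```python
# Sketch: parallel partial scatter at the partition level, then gather
# the per-partition partial updates into vertex state. Illustrative only.
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

partitions = [
    [(0, 1, 1.0), (0, 2, 1.0)],   # partition block 0: (src, dst, weight) edges
    [(1, 2, 1.0), (2, 0, 1.0)],   # partition block 1
]
rank = defaultdict(lambda: 1.0)   # vertex state

def partial_scatter(block):
    # compute partial updates for one partition block
    partial = defaultdict(float)
    for src, dst, w in block:
        partial[dst] += rank[src] * w
    return partial

with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(partial_scatter, partitions))

# gather: merge the partial updates, then apply them per vertex
merged = defaultdict(float)
for partial in partials:
    for v, val in partial.items():
        merged[v] += val
for v, val in merged.items():
    rank[v] = 0.15 + 0.85 * val
print(dict(rank))
```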
Programmable Interface
•  Vertex-centric programming API with primitives such as Scatter and Gather
•  Compile iterative algorithms into a sequence of internal function (routine)
calls that understand the internal data structures for accessing the graph
by different types of subgraph partition blocks
–  Gather: gather vertex updates from neighbor vertices and incoming edges
–  Scatter: scatter vertex updates to outgoing edges
Partition-to-chunk Index
Vertex-to-partition Index
•  The dice-level index is a dense index that maps a dice ID and its DVP (or SVP)
to the chunks on disk where the corresponding dice partition is stored physically
•  The strip-level index is a two-level sparse index, which maps a strip ID to the
dice-level index blocks and then maps each dice ID to the dice partition chunks in
the physical storage
•  The slice-level index is a three-level sparse index with slice index blocks at the
top, strip index blocks at the middle and dice index blocks at the bottom,
enabling fast retrieval of dices with a slice-specific condition
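A minimal sketch of how such a slice → strip → dice → chunk hierarchy might be laid out in memory; the dictionary layout and the (file, offset, length) chunk addressing are illustrative assumptions, not GraphLego's on-disk format:

```python
# Sketch of the slice -> strip -> dice -> chunk index hierarchy (illustrative layout).
slice_index = {
    "slice_0": {                                      # slice index block (top level, sparse)
        "strip_0": {                                  # strip index block (middle level, sparse)
            "dice_0": [("chunks_0.dat", 0, 4096)],    # dice -> (chunk file, offset, length)
            "dice_1": [("chunks_0.dat", 4096, 4096)],
        },
        "strip_1": {
            "dice_2": [("chunks_1.dat", 0, 8192)],
        },
    },
}

def chunks_for_dice(slice_id, strip_id, dice_id):
    # dense dice-level lookup: dice ID -> physical chunks on disk
    return slice_index[slice_id][strip_id][dice_id]

def chunks_for_slice(slice_id):
    # walk the sparse slice-level index down to every dice chunk in the slice
    return [c for strip in slice_index[slice_id].values()
              for chunks in strip.values() for c in chunks]

print(chunks_for_dice("slice_0", "strip_0", "dice_1"))
print(chunks_for_slice("slice_0"))
```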
Partition-level Compression
•  Iterative computations on large graphs incur non-
trivial I/O cost
–  The I/O processing of the Twitter dataset on a PC with 4 CPU
cores and 16GB memory takes 50.2% of the total running
time for PageRank (5 iterations)
•  Apply in-memory gzip compression to transform
each graph partition block into a compressed format
before storing it on disk
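A minimal sketch of partition-level in-memory compression before a block is written to disk, using Python's zlib (DEFLATE, the same algorithm family as gzip) as a stand-in; the serialization format is an assumption for illustration:

```python
# Sketch: compress a partition block in memory before writing it to disk.
import pickle, zlib

partition_block = [(0, 1, 1.0), (0, 2, 1.0), (1, 2, 0.5)]  # (src, dst, weight) edges

raw = pickle.dumps(partition_block)          # serialize the partition block
compressed = zlib.compress(raw, level=6)     # gzip/DEFLATE-style in-memory compression
print(len(raw), "->", len(compressed), "bytes")

with open("partition_0.bin", "wb") as f:     # store the compressed block on disk
    f.write(compressed)

with open("partition_0.bin", "rb") as f:     # later: read and decompress before computing
    block = pickle.loads(zlib.decompress(f.read()))
print(block)
```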
Configuration of Partitioning Parameters
•  User definition
•  Simple estimation
•  Regression-based learning
–  Construct a polynomial regression model to capture the nonlinear relationship between
independent variables p, q, r (partition parameters) and dependent variable T (runtime) with
latent coefficients αijk and error term ε, i.e., T = Σ αijk p^i q^j r^k + ε
–  The goal of regression-based learning is to determine the latent αijk and ε to obtain the function
relating p, q, r and T
–  Select m limited samples (pl, ql, rl, Tl) (1≤l≤m) from the existing experiment results
–  Solve the m linear equations formed by the m selected samples to obtain the concrete αijk and ε
–  Utilize a successive convex approximation (SCA) method to find the optimal solution (i.e., the
minimum runtime T) of the above polynomial function and the optimal parameters (i.e., p, q
and r) at which T is minimized
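A minimal sketch of the regression step under simplifying assumptions: degree-2 monomials in p, q, r, a least-squares fit instead of solving exactly m equations, and a plain grid search standing in for the successive convex approximation:

```python
# Sketch: fit runtime T as a polynomial in the partition parameters (p, q, r),
# then search for the parameters that minimize the predicted runtime.
# Assumptions: degree-2 terms only, least-squares fit, grid search instead of SCA.
import itertools
import numpy as np

samples = [  # (p, q, r, measured runtime T) from prior runs -- illustrative values
    (2, 2, 2, 120.0), (4, 2, 2, 95.0), (4, 4, 2, 88.0),
    (8, 4, 2, 92.0), (8, 8, 4, 110.0), (4, 4, 4, 90.0),
]

def features(p, q, r):
    # monomials p^i q^j r^k with i + j + k <= 2
    return [1, p, q, r, p*p, q*q, r*r, p*q, p*r, q*r]

X = np.array([features(p, q, r) for p, q, r, _ in samples], dtype=float)
T = np.array([t for *_, t in samples], dtype=float)
alpha, *_ = np.linalg.lstsq(X, T, rcond=None)   # latent coefficients (alpha_ijk)

def predict(p, q, r):
    return float(np.dot(features(p, q, r), alpha))

best = min(itertools.product(range(2, 17), repeat=3), key=lambda pqr: predict(*pqr))
print("predicted-optimal (p, q, r):", best, "predicted T:", round(predict(*best), 2))
```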
Experimental Evaluation
•  Computer server
–  Intel Core i5 2.66 GHz, 16 GB RAM, 1 TB hard drive, Linux 64-bit
•  Graph parallel systems
–  GraphLab [Low et al., UAI'10]
–  GraphChi [Kyrola et al., OSDI'12]
–  X-Stream [Roy et al., SOSP'13]
•  Graph applications
Execution Efficiency on Single Graph
Execution Efficiency on Multiple Graphs
Decision of #Partitions
Efficiency of Regression-based Learning
GraphLego: Resource-Aware GPS
•  Flexible multi-level hierarchical graph parallel
abstractions
–  Model a large graph as a 3D cube with source vertex, destination vertex
and edge weight as the dimensions
–  Partition a big graph by slice-, strip- and dice-based graph partitioning
•  Access Locality Optimization
–  Dice-based data placement: store a large graph on disk by minimizing
non-sequential disk access and enabling more structured in-memory
access
–  Construct partition-to-chunk index and vertex-to-partition index to
facilitate fast access to slices, strips and dices
–  Implement partition-level in-memory gzip compression to optimize disk I/Os
•  Optimization for Partitioning Parameters
–  Build a regression-based learning model to discover the latent
relationship between the number of partitions and the runtime
Questions
Open Source:
https://sites.google.com/site/git_GraphLego/