Bigtable: A Distributed Storage System for Structured Data
Presenters: Yunming Zhang, Conglong Li
References
Jeff Dean (Google), SOCC 2010 keynote slides
Introduction to Distributed Computing, Winter 2008, University of Washington
Motivation
Lots of (semi-)structured data at Google
URLs: contents, crawl metadata, links
Per-user data: user preference settings, search results
Scale is large: billions of URLs, hundreds of millions of users
Existing commercial databases don't meet the requirements
Goals
Store and manage all the state reliably and efficiently
Allow asynchronous processes to update different pieces of data continuously
Very high read/write rates
Efficient scans over all or interesting subsets of data
Often want to examine data changes over time
BigTable vs. GFS
GFS provides raw data storage
We need:
More sophisticated storage
Key-value mapping
Flexible enough to be useful
Store semi-structured data
Reliable, scalable, etc.
BigTable
Bigtable is a distributed storage system for managing large-scale structured data
Wide applicability
Scalability
High performance
High availability
Overview
Data Model
API
Implementation Structures
Optimizations
Performance Evaluation
Applications
Conclusions
Data Model
Sparse
Sorted
Multidimensional
Cell
Contains multiple versions of the data
A cell is located by row key, column key, and timestamp (see the sketch below)
Data is treated as uninterpreted arrays of bytes, which lets clients serialize various forms of structured and semi-structured data
Supports automatic per-column-family garbage collection for managing versioned data
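To make the (row key, column key, timestamp) -> bytes mapping concrete, here is a minimal in-memory sketch of the logical model. The structure and names are illustrative, not Bigtable's implementation, and a real table keeps rows in lexicographic order rather than in a hash map; the example keys follow the paper's webtable.

# Toy sketch of the logical model: a sparse, multidimensional map of
# (row:str, column:str "family:qualifier", timestamp:int) -> bytes.
table = {}  # row key -> {column key -> {timestamp -> value}}

def write(row, column, timestamp, value):
    table.setdefault(row, {}).setdefault(column, {})[timestamp] = value

def read(row, column, timestamp=None):
    versions = table.get(row, {}).get(column, {})
    if not versions:
        return None
    ts = timestamp if timestamp is not None else max(versions)  # newest version by default
    return versions.get(ts)

write("com.cnn.www", "contents:", 3, b"<html>...")
write("com.cnn.www", "anchor:cnnsi.com", 9, b"CNN")
print(read("com.cnn.www", "contents:"))  # b'<html>...'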
Goals
Store and manage all the state reliably and efficiently
Allow asynchronous processes to update different pieces of data continuously
Very high read/write rates
Efficient scans over all or interesting subsets of data
Often want to examine data changes over time
Row
Row keys are arbitrary strings
Access to column data in a row is atomic
Row creation is implicit upon storing data
Rows are ordered lexicographically
Rows close together lexicographically usually reside on one or a small number of machines (see the sketch below)
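The lexicographic ordering is what makes row-key choice matter: keys that sort close together usually land in the same tablet. The paper's webtable reverses hostnames so that all pages from one domain form a contiguous key range; a small illustration with made-up URLs:

# Reversing the hostname groups pages from the same domain into a contiguous
# key range, so they are likely served by the same (or neighboring) tablets.
urls = ["maps.google.com/index.html", "mail.google.com/inbox", "www.cnn.com/world"]

def row_key(url):
    host, _, path = url.partition("/")
    return ".".join(reversed(host.split("."))) + "/" + path

for key in sorted(row_key(u) for u in urls):
    print(key)
# com.cnn.www/world
# com.google.mail/inbox
# com.google.maps/index.html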
Columns
Columns are grouped into column families; a column key has the form family:optional_qualifier
A column family has associated type information
Data in the same column family is usually of the same type
Overview
Data Model
API
Implementation Structures
Optimizations
Performance Evaluation
Applications
Conclusions
API
Metadata operations: create/delete tables and column families, change metadata, modify access control lists
Writes (atomic per row): Set(), DeleteCells(), DeleteRow()
Reads: Scanner, which can read arbitrary cells in a Bigtable (see the sketch below)
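The real client API is a C++ library (the paper's examples use RowMutation and Scanner objects); the toy Python class below only mirrors the operations named on this slide, with hypothetical names and signatures, to show how per-row writes and range scans fit together.

class ToyBigtable:
    """Hypothetical stand-in for a Bigtable table; not the real client library."""

    def __init__(self):
        self.rows = {}  # row key -> {column key -> {timestamp -> value}}

    # Writes: in real Bigtable, mutations to a single row are applied atomically.
    def set(self, row, column, timestamp, value):
        self.rows.setdefault(row, {}).setdefault(column, {})[timestamp] = value

    def delete_cells(self, row, column):
        self.rows.get(row, {}).pop(column, None)

    def delete_row(self, row):
        self.rows.pop(row, None)

    # Reads: a scanner iterates over cells in an arbitrary row range, in row order.
    def scan(self, start_row="", end_row=None):
        for row in sorted(self.rows):
            if row >= start_row and (end_row is None or row < end_row):
                for column, versions in sorted(self.rows[row].items()):
                    for ts in sorted(versions, reverse=True):
                        yield row, column, ts, versions[ts]

t = ToyBigtable()
t.set("com.cnn.www", "anchor:www.c-span.org", 1, "CNN")
for cell in t.scan(start_row="com."):
    print(cell)  # ('com.cnn.www', 'anchor:www.c-span.org', 1, 'CNN')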
Overview
Data Model
API
Implementation Structures
Optimizations
Performance Evaluation
Applications
Conclusions
Tablets
Large tables are broken into tablets at row boundaries
A tablet holds a contiguous range of rows
Clients can often choose row keys to get locality
Aim for ~100 MB to 200 MB of data per tablet (see the splitting sketch below)
Each serving machine is responsible for ~100 tablets
Fast recovery: 100 machines each pick up 1 tablet from a failed machine
Fine-grained load balancing: migrate tablets away from overloaded machines
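A rough sketch of the splitting idea behind the 100-200 MB target: when a tablet grows too large, it is split at a row boundary near the middle so each half stays in range. The threshold and the per-row byte counting below are illustrative; real splits rely on SSTable index information.

TARGET_TABLET_BYTES = 150 * 1024 * 1024  # aim for roughly 100-200 MB per tablet (illustrative)

def maybe_split(tablet_rows):
    """tablet_rows: list of (row_key, row_size_bytes), already in lexicographic order."""
    total = sum(size for _, size in tablet_rows)
    if total <= TARGET_TABLET_BYTES:
        return [tablet_rows]                      # still small enough: no split
    running = 0
    for i, (_, size) in enumerate(tablet_rows):
        running += size
        if running >= total // 2:                 # first row boundary at or past the midpoint
            # Splitting only at row boundaries keeps each row in exactly one tablet.
            return [tablet_rows[: i + 1], tablet_rows[i + 1 :]]
    return [tablet_rows]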
Tablets and Splitting
System Structure
Master
Metadata operations
Load balancing
Keeps track of live tablet servers
Handling master failure
Tablet server
Accepts reads and writes to its tablets' data
System Structure (figures)
Clients send reads and writes directly to tablet servers; metadata operations go through the master
Locating Tablets
3-level hierarchical lookup scheme for tablets
A tablet's location (server IP and port) is stored in the METADATA tablets (see the sketch below)
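The three levels are a Chubby file that names the root tablet, the root tablet that maps to METADATA tablets, and METADATA tablets that map to user tablets; clients cache what they learn. A hedged sketch of one lookup follows; the data layout and the (table, row) keying are illustrative, not the real METADATA schema, and the tablets dict stands in for RPCs to the servers at each location.

import bisect

def covering(index, key):
    """index: sorted list of (end_key, location); find the tablet covering key."""
    i = bisect.bisect_left([end for end, _ in index], key)
    return index[i][1]

def locate(chubby, tablets, table, row):
    key = (table, row)
    root = tablets[chubby["root_tablet_location"]]  # level 1: Chubby file names the root tablet
    metadata = tablets[covering(root, key)]         # level 2: root tablet -> METADATA tablet
    return covering(metadata, key)                  # level 3: METADATA row -> "ip:port" of tablet server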
Tablet Representation and Serving
Append-only tablet (commit) log on GFS
SSTables on GFS: each is a sorted, immutable map from string keys to string values, so the data for a row is stored contiguously
Memtable: an in-memory write buffer
When a read comes in, the recent updates buffered in the memtable are merged with the SSTable data (see the sketch below)
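A minimal sketch of that merge on the read path: the memtable is consulted first, then the SSTables from newest to oldest, and the first hit wins. Plain dicts stand in for the immutable, sorted SSTable files on GFS.

def read_cell(memtable, sstables_newest_first, row, column):
    # Newest data first: the memtable holds the most recent writes, followed by
    # SSTables in reverse order of when they were written out.
    for source in [memtable, *sstables_newest_first]:
        value = source.get((row, column))
        if value is not None:
            return value
    return None

memtable = {("com.cnn.www", "contents:"): b"<html>new"}
sstables = [{("com.cnn.www", "contents:"): b"<html>old"}]
print(read_cell(memtable, sstables, "com.cnn.www", "contents:"))  # b'<html>new'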
Compaction
Tablet state is represented as a set of immutable, compacted SSTable files plus a tail of the log
Minor compaction: when the in-memory buffer fills up, it is frozen and written out as a new SSTable
Major compaction: periodically compacts all SSTables for a tablet into a single new base SSTable on GFS (see the sketch below)
Storage from deleted data is reclaimed at this point
Produces one new SSTable
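A hedged sketch of both compactions, with dicts standing in for SSTable files: minor compaction freezes the memtable into a new SSTable, and major compaction rewrites every SSTable into one base SSTable, which is when deletion markers and the data they shadow are actually dropped. The tombstone representation is illustrative.

TOMBSTONE = object()  # deletion marker; the representation is illustrative

def minor_compaction(memtable, sstables):
    """Freeze the current memtable contents into a new immutable SSTable."""
    sstables.insert(0, dict(memtable))   # newest SSTable kept first
    memtable.clear()

def major_compaction(sstables):
    """Rewrite all SSTables into one base SSTable; tombstones (and the values they
    shadow) are discarded here, reclaiming the storage used by deleted data."""
    merged = {}
    for sstable in reversed(sstables):   # oldest first, so newer entries overwrite older ones
        merged.update(sstable)
    sstables[:] = [{k: v for k, v in merged.items() if v is not TOMBSTONE}]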
Overview
Data Model
API
Implementation Structures
Optimizations
Performance Evaluation
Applications
Conclusions
Goals
Reliable system for storing and managing all the state
Allow asynchronous processes to update different pieces of data continuously
Very high read/write rates
Efficient scans over all or interesting subsets of data
Often want to examine data changes over time
Locality Groups
Clients can group multiple column families together into a locality group
A separate SSTable is generated for each locality group (see the sketch below)
Enables more efficient reads
A locality group can be declared in-memory
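A sketch of how a locality group maps onto storage: each group of column families gets its own SSTable, so a scan that touches only one group never reads the others. The group names and families follow the paper's webtable example, but the configuration format here is made up.

# Illustrative: locality groups partition column families into separate SSTables.
locality_groups = {
    "metadata": {"language", "checksum"},   # small families; could be declared in-memory
    "contents": {"contents"},               # large page bodies get their own SSTable
}

def sstable_for(column, groups=locality_groups):
    family = column.split(":", 1)[0]
    for group, families in groups.items():
        if family in families:
            return group
    return "default"

print(sstable_for("language:"))   # metadata
print(sstable_for("contents:"))   # contents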
Compression
Many opportunities for compression: similar values in columns and cells
Within each locality group's SSTable, data is encoded as compressed blocks
Blocks are kept small for random access (see the sketch below)
Exploits the fact that many values are very similar
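A minimal sketch of block-level compression inside an SSTable: values are packed into small blocks and each block is compressed on its own, so a random read decompresses only one block. zlib stands in for Bigtable's actual two-pass compression scheme, and the block size is illustrative.

import zlib

BLOCK_SIZE = 64 * 1024  # small blocks keep random reads cheap (illustrative)

def compress_blocks(values):
    """Pack values into blocks and compress each block independently."""
    blocks, current = [], b""
    for value in values:
        current += value
        if len(current) >= BLOCK_SIZE:
            blocks.append(zlib.compress(current))
            current = b""
    if current:
        blocks.append(zlib.compress(current))
    return blocks

# Many near-identical values (e.g., versions of the same page) compress very well.
pages = [b"<html>almost identical page</html>"] * 1000
print(sum(len(b) for b in compress_blocks(pages)))  # far smaller than the input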
Goals
Reliable system for storing and managing all the state
Allow asynchronous processes to update different pieces of data continuously
Very high read/write rates
Efficient scans over all or interesting subsets of data
Often want to examine data changes over time
Commit Log and Recovery
A single commit log file per tablet server reduces the number of concurrent file writes to GFS
Tablet recovery
Redo points are recorded in the log
Recovery re-applies the mutations logged since the last persistent state (see the sketch below)
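A hedged sketch of the shared log and replay: all tablets on a server append to one log, each entry tagged with its tablet, and recovering one tablet replays only its entries at or after the redo point recorded by its last compaction. The log representation is illustrative.

commit_log = []  # one append-only log per tablet server: (sequence, tablet, mutation)

def append(tablet, mutation):
    commit_log.append((len(commit_log), tablet, mutation))

def recover(tablet, redo_point, apply_mutation):
    # Replay only this tablet's entries from the redo point onward to rebuild
    # the memtable state that had not yet been written to an SSTable.
    for sequence, owner, mutation in commit_log:
        if owner == tablet and sequence >= redo_point:
            apply_mutation(mutation)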
Overview
Data Model
API
Implementation Structures
Optimizations
Performance Evaluation
Applications
Conclusions
Performance evaluation
Test Environment
Based on a GFS with 1876 machines
400 GB IDE hard drives in each machine
Two-level tree-shaped switched network
Performance Tests
Random Read/Write
Sequential Read/Write
Single Tablet-Server Performance
Random reads are the slowest
Each 1000-byte read transfers a 64 KB SSTable block over GFS
Random and sequential writes perform better
Writes are appended to the server's single commit log
Group commit improves throughput
Performance Scaling
Performance did not scale linearly
Load imbalance in multi-server configurations
Larger data transfer overhead
Overview
Data Model
API
Implementation Structures
Optimizations
Performance Evaluation
Applications
Conclusions
Google Analytics
A service that analyzes traffic patterns at web sites
Raw click table
One row for each end-user session
Row key is (website name, time) (see the sketch below)
Summary table
Generated from recent session data in the raw click table by MapReduce jobs
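Because rows are sorted, a (website name, time) row key keeps every session for one site contiguous and in time order, so summarizing recent data is a single range scan. A sketch of such a key; the separator and encoding are made up.

# Illustrative (website, time) row keys: sorting keeps each site's sessions
# contiguous and chronologically ordered, so recent data is one range scan.
def click_row_key(site, unix_time):
    return f"{site}#{unix_time:012d}"   # zero-pad so times sort correctly as strings

print(click_row_key("example.com", 1_377_000_000))  # example.com#001377000000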
Google Earth
Uses one table for preprocessing and one for serving, with different latency requirements (disk vs. in-memory)
Each row in the imagery table represents a single geographic segment
A column family stores the sources of data for each segment
One column for each raw image
This column family is very sparse
Personalized Search
Row key is a unique user id
One column family for each type of user action
Replicated across Bigtable clusters to increase availability and reduce latency
Conclusions
Bigtable provides highly scalable, high-performance, highly available, and flexible storage for structured data
It provides a low-level read/write interface for other frameworks to build on top of
It has enabled Google to handle large-scale data efficiently