Page 1 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase Read High Availability Using
Timeline-Consistent Region Replicas
Enis Soztutar (enis@hortonworks.com)
Devaraj Das (ddas@hortonworks.com)
Page 2 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
About Us
Enis Soztutar
Committer and PMC member in Apache
HBase and Hadoop since 2007
HBase team @Hortonworks
Twitter @enissoz
Devaraj Das
Committer and PMC member in
Hadoop since 2006
Committer at HBase
Co-founder @Hortonworks
Page 3 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Outline of the talk
PART I: Use case and semantics
§  CAP recap
§  Use case and motivation
§  Region replicas
§  Timeline consistency
§  Semantics
PART II : Implementation and next steps
§  Server side
§  Client side
§  Data replication
§  Next steps & Summary
Page 4 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Part I
Use case and semantics
Page 5 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
CAP reCAP
[Diagram: the CAP triangle (Consistency, Availability, Partition tolerance) with "Pick Two"; HBase is placed in the CP corner]
Page 6 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
CAP reCAP
•  In a distributed system you cannot NOT have P
•  C vs A is about what happens if there is a network
partition!
•  A and C are NEVER binary values, always a range
•  Different operations in the system can have
different A / C choices
•  HBase cannot be simplified as CP
[Diagram: the same CAP triangle, with the "HBase is CP" label now called into question]
Page 7 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
HBase consistency model
For a single row, HBase is strongly consistent within a data center
Across rows HBase is not strongly consistent (but available!).
When a RS goes down, only the regions on that server become
unavailable. Other regions are unaffected.
HBase multi-DC replication is “eventual consistent”
HBase applications should carefully design the schema for correct
semantics / performance tradeoff
Page 8 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Use cases and motivation
More and more applications are looking for a “0 down time” platform
§  30 seconds of downtime (an aggressive MTTR target) is too much
Certain classes of apps are willing to tolerate decreased consistency
guarantees in favor of availability
§  Especially for READs
Some build wrappers around the native API to be able to handle failures of
destination servers
§  Multi-DC: when one server is down in one DC, the client switches to a different one
Can we do something in HBase natively?
§  Within the same cluster?
Page 9 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Use cases and motivation
Designing the application requires careful tradeoff consideration
§  In schema design, since single rows are strongly consistent but there are no multi-row transactions
§  Multi-datacenter replication (active-passive, active-active, backups etc)
It is good to be able to give the application flexibility to pick-and-choose
§  Higher availability vs stronger consistency
Read vs Write
§  Different consistency models for read vs write
§  Read-repair and latest-timestamp-wins vs linearizable updates
Page 10 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Initial goals
Support applications talking to a single cluster really well
§  No perceived downtime
§  Only for READs
If apps want to tolerate cluster failures
§  Use HBase replication
§  Combine that with wrappers in the application
Page 11 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Introducing….
Region Replicas in HBase
Timeline Consistency in HBase
Page 12 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Region replicas
For every region of the table, there can be more than one replica
§  Every region replica has an associated “replica_id”, starting from 0
§  Each region replica is hosted by a different region server
Tables can be configured with a REGION_REPLICATION parameter
§  Default is 1
§  No change in the current behavior
One replica per region is the “default” or “primary”
§  Only this replica can accept WRITEs
§  All reads from this region replica return the most recent data
Other replicas, also called “secondaries”, follow the primary
§  They see only committed updates
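A minimal sketch of how a table with region replicas might be created from the Java client, assuming the HTableDescriptor#setRegionReplication setter from the hbase-10070 line of work; the table and family names are placeholders, and the equivalent shell form is shown in a comment.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateReplicatedTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (HBaseAdmin admin = new HBaseAdmin(conf)) {
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
      htd.addFamily(new HColumnDescriptor("cf1"));
      htd.setRegionReplication(3);   // one primary + two secondary replicas per region
      admin.createTable(htd);
      // Shell equivalent (assumed): create 't1', 'cf1', {REGION_REPLICATION => 3}
    }
  }
}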
Page 13 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Region replicas
Secondary region replicas are read-only
§  No writes are routed to secondary replicas
§  Data is replicated to secondary regions (more on this later)
§  Serve data from the same data files as the primary
§  May not have received the most recent data
§  Reads and Scans can be performed, returning possibly stale data
Region replica placement is done to maximize availability of any particular
region
§  Region replicas are not co-located on the same region servers
§  Nor on the same racks (if possible)
Page 14 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
[Diagram: a client reads and writes to a Region hosted on a RegionServer. Updates are buffered in the region's memstore, and the data files are stored as blocks (b1, b2, b9) on HDFS DataNodes; a second RegionServer is shown alongside]
Page 15 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
[Diagram: the same setup as the previous slide, plus a read-only region replica (with its own memstore) hosted on the second RegionServer. The client's reads and writes still go to the primary region; the replica serves reads only, from the same HDFS blocks]
Page 16 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency
Introduced a Consistency enum
§  STRONG
§  TIMELINE
Consistency.STRONG is the default
Consistency can be set per read operation (per-get or per-scan)
Timeline-consistent read RPCs may be sent to more than one replica
The semantics are a bit different from the eventual consistency model
Page 17 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency
public enum Consistency {
  STRONG,
  TIMELINE
}

Get get = new Get(row);
get.setConsistency(Consistency.TIMELINE);
...
Result result = table.get(get);
...
if (result.isStale()) {
  ...
}
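The same knob applies to scans. A small sketch (not from the slides) of a timeline-consistent scan; Table is the client table interface, and the printed output is just for illustration.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineScanExample {
  static void timelineScan(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setConsistency(Consistency.TIMELINE);   // allow the read to be served by any replica
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result r : scanner) {
        // isStale() is true when the row came from a possibly lagging secondary
        System.out.println(Bytes.toString(r.getRow()) + " stale=" + r.isStale());
      }
    }
  }
}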
Page 18 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Semantics
Can be thought of as in-cluster active-passive replication
Single-homed and ordered updates
§  All writes are handled and ordered by the primary region
§  All writes have STRONG consistency
Secondaries apply the mutations in order
Only get/scan requests go to secondaries
Get/Scan Result can be inspected to see whether the result was from
possibly stale data
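One illustrative pattern (an assumption about application usage, not something the slides prescribe): read with TIMELINE first for availability, then fall back to a STRONG read only when fresh data is required. Here row, table, and freshnessRequired are placeholders.

Get get = new Get(row);
get.setConsistency(Consistency.TIMELINE);
Result result = table.get(get);

if (result.isStale() && freshnessRequired) {
  // The result may lag the primary; re-issue a STRONG read, which only the
  // primary replica can serve and therefore blocks on its availability.
  Get strongGet = new Get(row);
  strongGet.setConsistency(Consistency.STRONG);
  result = table.get(strongGet);
}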
Page 19 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Example
[Diagram: Client1 writes X=1 to the primary (replica_id=0). The write lands in the primary's WAL and data; replication to replica_id=1 and replica_id=2 is still in flight]
Page 20 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Example
[Diagram: replication has delivered X=1 to both secondaries. Client2 reads X=1 regardless of which replica serves the read]
Page 21 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Example
[Diagram: Client1 writes X=2 to the primary. The primary and replica_id=1 now hold X=2; replica_id=2 still holds X=1]
Page 22 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Example
[Diagram: Client2's reads return X=2 from the primary and from replica_id=1, but X=1 from the lagging replica_id=2]
Page 23 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Example
[Diagram: Client1 writes X=3 to the primary. The primary holds X=3, while replica_id=1 holds X=2 and replica_id=2 holds X=1]
Page 24 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
TIMELINE Consistency Example
[Diagram: reads now return X=3 from the primary, X=2 from replica_id=1, and X=1 from replica_id=2; each replica serves a consistent prefix of the update timeline]
Page 25 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
PART II
Implementation and next steps
Page 26 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Region replicas – recap
Every region replica has an associated “replica_id”, starting from 0
Each region replica is hosted by a different region server
§  All replicas can serve READs
One replica per region is the “default” or “primary”
§  Only this replica can accept WRITEs
§  All reads from this region replica return the most recent data
Page 27 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Updates in the Master
Replica creation
§  Created during table creation
No distinction between primary & secondary replicas
Meta table contains all information in one row
Load balancer improvements
§  LB made aware of replicas
§  Makes a best effort to place replicas across machines/racks to maximize availability
Alter table support
§  For adjusting number of replicas
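A hedged sketch of the alter-table path for adjusting the replica count; setRegionReplication is the same assumed setter as in the earlier create example, and the disable/modify/enable cycle reflects how schema changes were typically applied at the time.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class AlterRegionReplication {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName tn = TableName.valueOf("t1");        // placeholder table name
    try (HBaseAdmin admin = new HBaseAdmin(conf)) {
      admin.disableTable(tn);
      HTableDescriptor htd = admin.getTableDescriptor(tn);
      htd.setRegionReplication(3);                 // grow (or shrink) the replica count
      admin.modifyTable(tn, htd);
      admin.enableTable(tn);
    }
  }
}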
Page 28 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Updates in the RegionServer
Treats non-default replicas as read-only
Storefile management
§  Keeps itself up to date with store file creations and deletions
Page 29 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
IPC layer high level flow
[Flowchart: the client sends the READ to the primary and waits for a response. If no response arrives within the timeout (10 millis), it also sends the READ to all secondaries and polls, takes the first successful response, and cancels the others. The flow is similar for Get/Batch-Get/Scan, except that a Scan is sticky to the server it last saw success from]
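The flow above is essentially a speculative (hedged) read. Below is a simplified, self-contained sketch of the idea, not the actual HBase client code; the 10 ms primary wait is the value quoted on the slide (configurable in the real client), and ReplicaCall stands in for "issue this read against replica N".

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class HedgedReadSketch {
  interface ReplicaCall { String call(int replicaId) throws Exception; }

  static String read(ReplicaCall rpc, int numReplicas, ExecutorService pool) throws Exception {
    CompletionService<String> cs = new ExecutorCompletionService<>(pool);
    List<Future<String>> futures = new ArrayList<>();
    futures.add(cs.submit(() -> rpc.call(0)));            // 1. send READ to the primary
    Future<String> done = cs.poll(10, TimeUnit.MILLISECONDS);
    if (done == null) {                                   // 2. no answer within the timeout:
      for (int id = 1; id < numReplicas; id++) {          //    fan the READ out to all secondaries
        final int replicaId = id;
        futures.add(cs.submit(() -> rpc.call(replicaId)));
      }
      done = cs.take();                                   // 3. take the first completed response
    }                                                     //    (the real client takes the first successful one)
    for (Future<String> f : futures) {
      if (f != done) f.cancel(true);                      // 4. cancel the remaining replica RPCs
    }
    return done.get();
  }
}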
Page 30 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Performance and Testing
No significant performance issues discovered
§  Added interrupt handling in the RPCs to cancel unneeded replica RPCs
Deeper level of performance testing work is still in progress
Tested via IT tests
§  The test fails if a response is not received within a certain time
Page 31 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Next steps
What has been described so far is in “Phase-1” of the project
Phase-2
§  WAL replication
§  Handling of Merges and Splits
§  Latency guarantees
– Cancellation of RPCs server side
– Promotion of one Secondary to Primary, and recruiting a new Secondary
Use the infrastructure to implement consensus protocols for read/write
within a single datacenter
Page 32 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Data Replication
Data should be replicated from primary regions to secondary regions
A region's data = data files on HDFS + in-memory data in memstores
Data files MUST be shared. We do not want to store multiple copies
Do not cause more writes than necessary
Two solutions:
§  Region snapshots: share only data files
§  Async WAL replication: share data files; every region replica has its own in-memory data
Page 33 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Data Replication – Region Snapshots
Primary region works as usual
§  Buffer up mutations in memstore
§  Flush to disk when full
§  Compact files when needed
§  Deleted files are kept in archive directory for some time
Secondary regions periodically look for new files in primary region
§  When a new flushed file is seen, just open it and start serving data from there
§  When a compaction is seen, open new file, close the files that are gone
§  Good for read-only data, bulk-loaded data, or less frequently updated data
Implemented in phase 1
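In this Phase 1 scheme, how often secondaries look for new primary files is driven by a refresh-period setting. A hedged configuration sketch; the property name is my recollection of the hbase-10070 work and should be checked against the version you run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SecondaryRefreshConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed property: how often (ms) secondary regions re-scan the primary's
    // store files to pick up new flushes and compactions.
    conf.setInt("hbase.regionserver.storefile.refresh.period", 30 * 1000);
    System.out.println(conf.getInt("hbase.regionserver.storefile.refresh.period", 0));
  }
}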
Page 34 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Data Replication - Async WAL Replication
Being implemented in Phase 2
Uses a replication source to tail the WAL files from the RS
§  Plugs in a custom replication sink to replay the edits on the secondaries
§  Flush and Compaction events are written to WAL. Secondaries pick new files when they see
the entry
On open, a secondary region will:
§  Open the region files of the primary region
§  Set up a replication queue based on the last seen seqId
§  Accumulate edits in memstore (memory management issues in the next slide)
§  Mimic flushes and compactions from primary region
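When the async WAL replication path is in place, it would presumably be switched on by a cluster-level flag; the property name below is an assumption based on the later upstream work and is shown only to make the deployment knob concrete.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class AsyncWalReplicationConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed flag: enables the replication source/sink that tails primary WALs
    // and replays the edits into secondary region memstores.
    conf.setBoolean("hbase.region.replica.replication.enabled", true);
    System.out.println(conf.getBoolean("hbase.region.replica.replication.enabled", false));
  }
}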
Page 35 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Memory management & flushes
Memory Snapshots-based approach
§  The secondaries look for the Start-Flush and Commit-Flush WAL-edit entries
§  They mimic what the primary does in terms of taking snapshots
– When a flush is successful, the snapshot is let go
§  If the RegionServer hosting the secondary is under memory pressure
– Make some other primary region flush
Flush-based approach
§  Treat the secondary regions as regular regions
§  Allow them to flush as usual
§  Flush to the local disk, and clean them up periodically or on certain events
– Treat them as a normal store file for serving reads
Page 36 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Summary
Pros
§  High-availability for read-only tables
§  High-availability for stale reads
§  Very low-latency for the above
Cons
§  Increased memory from memstores of the secondaries
§  Increased blockcache usage
§  Extra network traffic for the replica calls
§  Increased number of regions to manage in the cluster
Page 37 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
References
Apache branch hbase-10070 (https://github.com/apache/hbase/tree/hbase-10070)
HDP-2.1 comes with experimental support for Phase-1
More on the use cases for this work is in Sudarshan’s (Bloomberg) talk
§  “Case Studies” track titled “HBase at Bloomberg: High Availability Needs for the Financial
Industry”
Page 38 © Hortonworks Inc. 2011 – 2014. All Rights Reserved
Thanks
Q & A
