© Hortonworks Inc. 2013
HDFS: What's New and Futures
Sanjay Radia, Founder, Architect
Suresh Srinivas, Founder, Architect
Hortonworks Inc.
Page 1
© Hortonworks Inc. 2013
About me
• Founder, Architect, Hortonworks
• Part of the Hadoop team at Yahoo! since 2007
– Chief Architect of Hadoop Core at Yahoo!
– Apache Hadoop PMC and Committer
• Prior
– Data center automation, virtualization, Java, HA, OSs, File
Systems (Startup, Sun Microsystems, …)
– Ph.D., University of Waterloo
Page 2
Architecting the Future of Big Data
© Hortonworks Inc. 2013
Agenda
• Hadoop 2.0 – What’s new
– Federation
– HA
– Snapshots
– Other features
• Future
– Major Architectural Directions
– Short term and long term features
Page 3
Architecting the Future of Big Data
© Hortonworks Inc. 2013
We have been hard at work…
• Progress is being made in many areas
– Write-pipeline, Append
– Scalability
– Performance
– Enterprise features
– Ongoing operability improvements
– Enhancements for other projects in the ecosystem
– Expand Hadoop ecosystem to more platforms and use cases
• 2192 commits in Hadoop in the last year
– Almost a million lines of changes
– ~150 contributors
– Many new contributors – ~80 with < 3 patches
• 350K lines of changes in HDFS and common
Page 4
Architecting the Future of Big Data
© Hortonworks Inc. 2013
Building on Rock-solid Foundation
• Original design choices - simple and robust
– Storage: Rely on the OS's file system rather than raw disk
– Storage Fault Tolerance: multiple replicas, active monitoring
– Namenode Master
• Reliability
– Over 7 9’s of data reliability, less than 0.58 failures across 25 clusters
• Operability
– Small teams can manage large clusters
• An operator per 3K node cluster
– Fast time-to-repair on node or disk failure
• Minutes to an hour vs. RAID array repairs taking many hours
• Scalable – proven by large-scale deployments, not just benchmarks
– > 100 PB storage, > 500 million files, > 4500 nodes in a single cluster
– > 60 K nodes of HDFS in deployment and use
Page 5
Architecting the Future of Big Data
Page 6
HDFS’ Generic Storage Service
Opportunities for Innovation
• Federation - Distributed (Partitioned) Namespace
– Simple and Robust due to independent masters
– Scalability, Isolation, Availability
• New Services – Independent Block Pools
– New FS - Partial namespace in memory
– MR Tmp storage, Object store directly on block storage
– Shadow file system – caches HDFS, NFS, S3
• Future: move Block Management into DataNodes
– Simplifies namespace/application implementation
– A distributed namenode becomes significantly simpler
[Figure: a generic Storage Service layer with multiple clients built on top of it – the HDFS Namespace, alternate NN implementations, HBase, and MR tmp]
© Hortonworks Inc. 2013
Federation
• Block Storage as generic storage service
– DNs store blocks in Block Pools for all the Namespace Volumes
• Multiple independent Namenodes and Namespace Volumes in a cluster
– Scalability by adding more namenodes/namespaces
– Isolation – separating applications to their own namespaces
– Client side mount tables/ViewFS for integrated views
Page 7
Architecting the Future of Big Data
[Figure: Federation – Namenodes NN-1 … NN-k … NN-n each manage a namespace volume (NS1 … NS k … foreign NS n) with its own block pool (Pool 1 … Pool k … Pool n); Datanodes DN 1, DN 2 … DN m form the common block storage shared by all pools]
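To make the federation wiring concrete, here is a minimal client-side sketch in Java; the nameservice ids (ns1, ns2), host names, and paths are hypothetical, while dfs.nameservices and dfs.namenode.rpc-address.* are the standard federation keys:

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class FederationExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Two independent Namenodes, each owning one namespace volume;
      // every Datanode stores blocks for both block pools.
      conf.set("dfs.nameservices", "ns1,ns2");
      conf.set("dfs.namenode.rpc-address.ns1", "nn1.example.com:8020");
      conf.set("dfs.namenode.rpc-address.ns2", "nn2.example.com:8020");
      // A client addresses each namespace volume through its own URI.
      FileSystem fs1 = FileSystem.get(URI.create("hdfs://nn1.example.com:8020/"), conf);
      FileSystem fs2 = FileSystem.get(URI.create("hdfs://nn2.example.com:8020/"), conf);
      System.out.println(fs1.exists(new Path("/data")));
      System.out.println(fs2.exists(new Path("/logs")));
    }
  }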
Managing Namespaces
• Federation has multiple namespaces
• Don’t you need a single global namespace?
– Some tenants want a private namespace
• Hadoop as a service – each tenant gets its own namespace
– Global? The key is to share the data and the names used
to access the data
• A single global namespace is one way to share
• A client-side mount table is another way to share
– Shared mount-table => "global" shared view
– Personalized mount-table => per-application view
• Share the data that matters by mounting it
• Client-side implementation of mount tables
– No single point of failure
– No hotspot for root and top level directories
[Figure: client-side mount-table – a virtual root (/) with mount points home, project, tmp, and data resolving to namespaces NS1–NS4]
Page 8
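A minimal sketch of such a client-side mount table using ViewFS; the cluster name ClusterX and the Namenode URIs are assumptions, while the fs.viewfs.mounttable.*.link.* keys are the standard ViewFs configuration:

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ViewFsMountTable {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // Each mount point maps a top-level name onto a directory in some namespace.
      conf.set("fs.viewfs.mounttable.ClusterX.link./home", "hdfs://nn1.example.com:8020/home");
      conf.set("fs.viewfs.mounttable.ClusterX.link./project", "hdfs://nn2.example.com:8020/project");
      conf.set("fs.viewfs.mounttable.ClusterX.link./data", "hdfs://nn3.example.com:8020/data");
      conf.set("fs.viewfs.mounttable.ClusterX.link./tmp", "hdfs://nn4.example.com:8020/tmp");
      // Clients see one unified namespace rooted at viewfs://ClusterX/
      FileSystem viewFs = FileSystem.get(URI.create("viewfs://ClusterX/"), conf);
      for (FileStatus stat : viewFs.listStatus(new Path("/"))) {
        System.out.println(stat.getPath());
      }
    }
  }

Because the mount table lives entirely in client configuration, the root directory is never served by any single Namenode – hence no single point of failure or hotspot at the root.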
© Hortonworks Inc. 2011
High Availability
© Hortonworks Inc. 2013
HA for HDFS
• Hadoop 1.x (HDP 1.x)
– Failover using industry-standard solutions (Linux HA, vSphere)
– Shared storage
– Failover times of 1 to 3–4 minutes for 100- to 300-node clusters
– Full-stack HA
• Clients, JT, HBase, HCat automatically pause and retry during failover
• NN, JT, HCat all have automatic failover
• Hadoop 2.x (HDP 2.x)
– Failover using a Failover Controller
– Quorum Journal Manager (no shared storage)
• Failover times of 30 to 120 seconds or less (since the Standby NN is hot)
– Full-stack HA
Page 10
© Hortonworks Inc. 2013
Hadoop Full Stack HA
Page 11
Architecting the Future of Big Data
[Figure: HA cluster for master daemons – NN and JT fail over across servers; the JT goes into safemode during failover while jobs on the slave nodes of the Hadoop cluster pause and resume; apps running outside the cluster retry]
© Hortonworks Inc. 2013
High Availability – Release 2.0
• Supports manual and automatic failover
• Automatic failover with Failover Controller
– Active NN election and failure detection using ZooKeeper
– Periodic NN health check
– Failover on NN failure
• Removed shared storage dependency
– Quorum Journal Manager
• 3 to 5 Journal Nodes for storing editlog
• Edits must be written to a quorum of Journal Nodes
Available in Release 2.0.3-alpha
Page 12
Architecting the Future of Big Data
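A hedged sketch of the corresponding hdfs-site settings, expressed here as Java Configuration calls; the nameservice id and host names are hypothetical, while the key names (dfs.ha.namenodes.*, dfs.namenode.shared.edits.dir, dfs.ha.automatic-failover.enabled) are the standard HA configuration:

  import org.apache.hadoop.conf.Configuration;

  public class QjmHaConfig {
    public static Configuration haConf() {
      Configuration conf = new Configuration();
      conf.set("dfs.nameservices", "mycluster");
      // Two Namenodes per nameservice: one active, one hot standby.
      conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
      conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
      conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");
      // Edits go to a quorum (2 of 3) of JournalNodes instead of shared storage.
      conf.set("dfs.namenode.shared.edits.dir",
          "qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster");
      // Clients locate the active NN through a failover proxy provider.
      conf.set("dfs.client.failover.proxy.provider.mycluster",
          "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
      // Automatic failover via the failover controller, coordinated through ZooKeeper.
      conf.setBoolean("dfs.ha.automatic-failover.enabled", true);
      conf.set("ha.zookeeper.quorum",
          "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
      return conf;
    }
  }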
© Hortonworks Inc. 2013
Namenode HA in Hadoop 2
Page 13
Architecting the Future of Big Data
[Figure: Active and Standby NNs share state through a quorum of JournalNodes (JN); a FailoverController beside each NN monitors the health of the NN, OS, and HW, heartbeats to ZooKeeper (ZK), and sends commands to its NN; Datanodes (DN) send block reports to both Active and Standby, with DN fencing so they obey commands only from the Active]
Namenode HA has no external dependency
© Hortonworks Inc. 2013
Snapshots (HDFS-2802)
• Snapshot the entire namespace or subdirectories
– Nested snapshots allowed
– Managed by Admin
• Users can take snapshots of directories they own
• Support for read-only COW snapshots
– Design allows read-write snapshots
• Namenode only operation – no data copy made
– Metadata in namenode - no complicated distributed mechanism
– Datanodes have no knowledge
• Efficient
– Instantaneous creation
– Memory used is highly optimized
• State proportional to the changes between the snapshots
– Does not affect regular HDFS operations
Page 14
Architecting the Future of Big Data
© Hortonworks Inc. 2013
Snapshot – APIs and CLIs
• All regular commands & APIs can be used with snapshot path
– /<path>/.snapshot/snapshot_name/file.txt
– Copy /<path>/.snapshot/snap1/ImportantFile /<path>/
• CLIs
– Allow snapshots
• dfsadmin -allowSnapshot <dir>
• dfsadmin -disallowSnapshot <dir>
– Create/delete/rename snapshots
• fs -createSnapshot <dir> [snapshot_name]
• fs -deleteSnapshot <dir> <snapshot_name>
• fs -renameSnapshot <dir> <old_name> <new_name>
– Tool to print diff between snapshots
– Admin tool to print all snapshottable directories and snapshots
Page 15
Architecting the Future of Big Data
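The same operations are reachable from Java; a minimal sketch against the FileSystem API (the directory path and snapshot names are illustrative; allowSnapshot is the admin call, exposed on DistributedFileSystem):

  import java.net.URI;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hdfs.DistributedFileSystem;

  public class SnapshotExample {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(URI.create("hdfs://nn1.example.com:8020/"), new Configuration());
      Path dir = new Path("/user/alice/important");   // hypothetical directory

      // Admin step: mark the directory snapshottable (dfsadmin -allowSnapshot).
      ((DistributedFileSystem) fs).allowSnapshot(dir);

      // Instantaneous copy-on-write snapshot; no data is copied.
      fs.createSnapshot(dir, "snap1");

      // Files are read back through the .snapshot path like any other file.
      Path old = new Path(dir, ".snapshot/snap1/ImportantFile");
      System.out.println(fs.exists(old));

      fs.renameSnapshot(dir, "snap1", "snap1-renamed");
      fs.deleteSnapshot(dir, "snap1-renamed");
    }
  }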
© Hortonworks Inc. 2013
Performance Improvements
• Many Improvements
– SSE4.2 CRC32C – ~3x less CPU on read path
– Read path improvements for fewer memory copies
– Short-circuit read for 2-3x faster random reads
• Unix domain socket based local reads
- All applications, not just for special services like HBase
– I/O improvements using posix_fadvise()
– libhdfs improvements for zero copy reads
• Significant improvements - IO 2.5x to 5x faster
– Many improvements backported to release 1.x
• Available in Apache release 1.1 and HDP 1.1
Page 16
Architecting the Future of Big Data
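For example, the short-circuit read path is switched on by two client/Datanode settings; a sketch with an assumed socket path (dfs.client.read.shortcircuit and dfs.domain.socket.path are the standard keys in Hadoop 2):

  import org.apache.hadoop.conf.Configuration;

  public class ShortCircuitReadConfig {
    public static Configuration withShortCircuit() {
      Configuration conf = new Configuration();
      // Read local replicas directly, bypassing the Datanode TCP path.
      conf.setBoolean("dfs.client.read.shortcircuit", true);
      // Unix domain socket shared by the Datanode and co-located clients.
      conf.set("dfs.domain.socket.path", "/var/lib/hadoop-hdfs/dn_socket");
      return conf;
    }
  }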
© Hortonworks Inc. 2013
Other Features
• New append pipeline
• Protobuf, wire compatibility
– Post-2.0 GA, stronger wire compatibility in Apache Hadoop and HDP releases
• Rolling upgrades
– With relaxed version checks
• Improvements for other projects
– Stale node to improve HBase MTTR
• Block placement enhancements
– Better support for other topologies such as VMs and Cloud
• On the wire encryption
– Both data-transfer and RPC protocols
• Support for NFS gateway
• Expanding ecosystem, platforms and applicability
– Native support for Windows
Page 17
Architecting the Future of Big Data
© Hortonworks Inc. 2013
Enterprise Readiness
• Storage fault-tolerance – built into HDFS ✓
– Over 7 9's of data reliability
• High Availability ✓
• Standard Interfaces ✓
– WebHDFS (REST), FUSE, NFS, HttpFS, libwebhdfs and libhdfs
• Wire protocol compatibility ✓
– Protocol buffers
• Rolling upgrades ✓
• Snapshots ✓
• Disaster Recovery ✓
– Distcp for parallel and incremental copies across clusters
– Apache Ambari and HDP for automated management
Page 18
Architecting the Future of Big Data
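As an illustration of the REST interface, listing a directory over WebHDFS is a single HTTP GET; a minimal sketch assuming WebHDFS is enabled (dfs.webhdfs.enabled) and using a hypothetical host with the default Hadoop 2 Namenode web port of 50070:

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class WebHdfsList {
    public static void main(String[] args) throws Exception {
      // GET /webhdfs/v1/<path>?op=LISTSTATUS returns a JSON FileStatuses document.
      URL url = new URL("http://nn1.example.com:50070/webhdfs/v1/user/alice?op=LISTSTATUS");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      conn.setRequestMethod("GET");
      try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line);
        }
      }
    }
  }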
© Hortonworks Inc. 2011
HDFS Futures
Architecting the Future of Big Data
Page 19
© Hortonworks Inc. 2013
Storage Abstraction
• Fundamental storage abstraction improvements
• Short Term
– Heterogeneous storage
• Support SSDs and disks for different storage categories
• Match storage to different access patterns
• Disk/storage addressing/locality and status collection
– Block level APIs for apps that don’t need file system interface
– Granular block placement policies
– Use of RAM for caching data and intermediate query data
• Long Term
– Explore support for objects/key-value stores and APIs
– Serving from Datanodes optimized based on file structure
Page 20
Architecting the Future of Big Data
Next Steps… first-class support for volumes
• NameServer - Container for
namespaces
› Lots of small namespace volumes
• Chosen per user/tenant/data feed
• Management policies (quota, …)
• Mount tables for unified namespace
• Can be managed by a central volume
server
› Move namespace for balancing
• WorkingSet of namespace in memory
› Many more namespaces in a server
• Number of NameServers is determined by:
› Sum of (namespace working sets)
› Sum of (namespace throughputs)
[Figure: NameServers as containers of namespaces, layered over a shared storage layer of Datanodes]
Page 21
© Hortonworks Inc. 2013
Higher Scalability
• Even higher scalability of namespace
– Only working set in Namenode memory
– Namenode as container of namespaces
• Support large number of namespaces
– Explore new types of namespaces
• Further scale the block storage
– Block management to Datanodes
– Block collection/Mega block group abstraction
Page 22
Architecting the Future of Big Data
© Hortonworks Inc. 2013
High Availability
• Further enhancements to HA
– Expand Full stack HA to include other dependent services
– Support multiple standby nodes, including N+K
– Use standby for reads
– Simplify management – eliminate special daemons for journals
• Move Namenode metadata to HDFS
Page 23
Architecting the Future of Big Data
© Hortonworks Inc. 2013
Q & A
• Myths and misinformation
– Not reliable (was never true)
– Namenode dies, all state is lost (was never true)
– Does not support disaster recovery (distcp since Hadoop 0.15)
– Hard to operate for newcomers
– Performance improvements (always ongoing)
• Major improvements in 1.2 and 2.x
– Namenode is a single point of failure
– Needs shared NFS storage for HA
– Does not have point in time recovery
Thank You! Page 24
Architecting the Future of Big Data
© Hortonworks Inc. 2011
Backup slides
Architecting the Future of Big Data
Page 25
© Hortonworks Inc. 2013
Snapshot Design
• Based on Persistent Data Structures
– Maintains changes in a diff list at the inodes
• Tracks creation, deletion, and modification
– Snapshot state Sn = current - ∆n
• A large number of snapshots supported
– State proportional to the changes between the snapshots
– Supports millions of snapshots
Page 26
Architecting the Future of Big Data
[Figure: diff chain – Current → Sn → Sn-1 → … → S0, where each snapshot state is recovered by applying deltas ∆n, ∆n-1, …, ∆0]
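A toy sketch of the diff-list idea – not the actual HDFS inode structures – showing why memory is proportional to the changes: the current state stays authoritative, each snapshot keeps only pre-change values, and Sn is rebuilt by walking the deltas back from current:

  import java.util.ArrayDeque;
  import java.util.Deque;
  import java.util.HashMap;
  import java.util.Map;

  public class DiffListSketch {
    // Current directory contents: name -> metadata (toy model).
    private final Map<String, String> current = new HashMap<>();
    // One delta per snapshot, newest first; each maps a name to its
    // pre-change value (null means "did not exist at snapshot time").
    private final Deque<Map<String, String>> deltas = new ArrayDeque<>();

    public void createSnapshot() {
      deltas.addFirst(new HashMap<>());       // empty delta: snapshot == current
    }

    public void put(String name, String value) {
      Map<String, String> d = deltas.peekFirst();
      if (d != null && !d.containsKey(name)) {
        d.put(name, current.get(name));       // record the old value only once
      }
      current.put(name, value);
    }

    // S_k = current with deltas 0..k applied (0 = most recent snapshot).
    public Map<String, String> readSnapshot(int k) {
      Map<String, String> state = new HashMap<>(current);
      int i = 0;
      for (Map<String, String> d : deltas) {
        if (i++ > k) break;
        for (Map.Entry<String, String> e : d.entrySet()) {
          if (e.getValue() == null) state.remove(e.getKey());
          else state.put(e.getKey(), e.getValue());
        }
      }
      return state;
    }
  }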