Vu Pham
Preface
Content of this Lecture:
In this lecture, we will discuss the design goals of HDFS, the read/write processes in HDFS, and the main configuration tuning parameters that control HDFS performance and robustness.
Introduction
Hadoop provides a distributed file system and a framework for
the analysis and transformation of very large data sets using
the MapReduce paradigm.
An important characteristic of Hadoop is the partitioning of data and computation across many (thousands of) hosts, and the execution of application computations in parallel, close to their data.
A Hadoop cluster scales computation capacity, storage capacity
and IO bandwidth by simply adding commodity servers.
Hadoop clusters at Yahoo! span 25,000 servers, and store 25
petabytes of application data, with the largest cluster being
3500 servers. One hundred other organizations worldwide
report using Hadoop.
Introduction
Hadoop is an Apache project; all components are available via
the Apache open source license.
Yahoo! has developed and contributed to 80% of the core of
Hadoop (HDFS and MapReduce).
HBase was originally developed at Powerset, now a department
at Microsoft.
Hive was originated and developed at Facebook.
Pig, ZooKeeper, and Chukwa were originated and developed at
Yahoo!
Avro was originated at Yahoo! and is being co-developed with
Cloudera.
Hadoop Project Components
HDFS: Distributed file system
MapReduce: Distributed computation framework
HBase: Column-oriented table service
Pig: Dataflow language and parallel execution framework
Hive: Data warehouse infrastructure
ZooKeeper: Distributed coordination service
Chukwa: System for collecting management data
Avro: Data serialization system
HDFS Design Concepts
Scalable distributed filesystem: as you add nodes and disks, capacity and aggregate performance scale out with them.
Data is distributed on local disks across several nodes.
Low-cost commodity hardware: you get a lot of performance out of it because you are aggregating the performance of many inexpensive disks.
[Figure: blocks B1, B2, ..., Bn stored on Node 1, Node 2, ..., Node n]
HDFS Design Goals
Hundreds/Thousands of nodes and disks:
It means there's a higher probability of hardware failure. So the design
needs to handle node/disk failures.
Portability across heterogeneous hardware/software:
The implementation should run across many different kinds of hardware and software platforms.
Handle large data sets:
Need to handle terabytes to petabytes.
Enable processing with high throughput
Techniques to meet HDFS design goals
Simplified coherency model:
The idea is to write a file once and then read it many times, which reduces the number of operations required to commit a write.
Data replication:
Helps to handle hardware failures.
The same piece of data is spread across different nodes.
Move computation close to the data:
Data is not shipped around the network, which improves performance and throughput.
Relax POSIX requirements to increase the throughput.
Basic architecture of HDFS
HDFS Architecture: Key Components
Single NameNode: a master server that manages the file system namespace, regulates client access to files, and keeps track of which DataNodes hold which blocks.
Multiple DataNodes: typically one per node in the cluster, each managing the storage that is local to that node.
Basic Functions:
Manage the storage on the DataNode.
Serve read and write requests from clients.
Perform block creation, deletion, and replication based on instructions from the NameNode.
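As a concrete illustration of the NameNode's role, here is a minimal sketch in Java using the standard org.apache.hadoop.fs API: it asks for the block locations of a file. The cluster URI and file path are placeholders, not part of the original slides; the getFileBlockLocations call is answered from the NameNode's metadata.

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListBlockLocations {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder cluster address; normally picked up from core-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        Path file = new Path("/data/example.dat");   // hypothetical file
        FileStatus status = fs.getFileStatus(file);
        // The NameNode answers this query: which DataNodes hold each block.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation b : blocks) {
          System.out.println("offset " + b.getOffset() + " length " + b.getLength()
              + " hosts " + String.join(",", b.getHosts()));
        }
        fs.close();
      }
    }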
Original HDFS Design
Single NameNode
Multiple DataNodes
Manage storage: blocks of data
Serving read/write requests from clients
Block creation, deletion, replication
HDFS in Hadoop 2
HDFS Federation: the idea is to have multiple NameNodes as well as multiple DataNodes, so that the namespace can scale. Recall that in the first design a single NameNode handled all of the namespace responsibilities; as a cluster grows to thousands of nodes and billions of files, that single server becomes a scalability bottleneck. Federation was introduced to address this, and it also brings performance improvements.
Benefits:
Increase namespace scalability
Performance
Isolation
HDFS in Hadoop 2
How it's done
Multiple NameNode servers
Multiple namespaces
Data is now stored in block pools
There is a pool associated with each NameNode (namespace), and these pools are spread out over all the DataNodes.
HDFS in Hadoop 2
High Availability: redundant NameNodes
Heterogeneous storage and archival storage: ARCHIVE, DISK, SSD, RAM_DISK storage types
Federation: Block Pools
In the original design there was one namespace and a set of DataNodes, and the federated structure looks similar. Instead of one NameNode there is now a set of NameNodes, and each NameNode writes into its own block pool, but the pools are spread out over the DataNodes just as before: that is where the data lives, spread across the different DataNodes. The block pool is essentially the main thing that is different.
HDFS Performance Measures
Determine the number of blocks for a given file size.
Identify the key HDFS and system components that are affected by the block size.
Understand the impact of using a lot of small files on HDFS and the overall system.
Recall: HDFS Architecture
Distributed data on local disks on several nodes
[Figure: blocks B1, B2, ..., Bn stored on Node 1, Node 2, ..., Node n]
HDFS Block Size
Default block size is 64 megabytes.
Good for large files!
So a 10 GB file will be broken into 10 x 1024 / 64 = 160 blocks.
[Figure: blocks B1, B2, ..., Bn stored on Node 1, Node 2, ..., Node n]
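To make the block-count arithmetic above explicit, here is a minimal sketch; the sizes are just the numbers from the slide, and the rounding up reflects that a partially filled last block still counts as a block.

    public class BlockCount {
      public static void main(String[] args) {
        long fileSizeMB = 10L * 1024;   // 10 GB expressed in MB
        long blockSizeMB = 64;          // default HDFS block size on this slide
        // Round up: a partially filled last block still counts as a block.
        long blocks = (fileSizeMB + blockSizeMB - 1) / blockSizeMB;
        System.out.println(blocks + " blocks");  // prints 160
      }
    }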
Importance of No. of Blocks in a file
NameNode memory usage: every file can consist of many blocks, as in the previous example with 160 blocks, so millions of files mean millions of objects. Each object uses some memory on the NameNode, so NameNode memory usage grows directly with the number of blocks; with the default replication factor of 3, there are also three times as many block replicas for the NameNode to track.
Number of map tasks: Number of maps typically depends on the
number of blocks being processed.
Large No. of small files: Impact on Name node
Memory usage: typically around 150 bytes per object (file, directory, or block). With a billion small files, each bringing a file object and a block object, that is on the order of 300 GB of NameNode memory.
Network load: the number of block reports and heartbeat checks exchanged with DataNodes is proportional to the number of blocks.
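A rough back-of-the-envelope sketch of the estimate above. The 150 bytes/object figure is the rule of thumb from the slide; the assumption that each small file contributes one file object plus one block object is how the ~300 GB figure arises.

    public class NameNodeMemoryEstimate {
      public static void main(String[] args) {
        long files = 1_000_000_000L;   // one billion small files
        long objectsPerFile = 2;       // assumption: 1 file object + 1 block object
        long bytesPerObject = 150;     // rule of thumb from the slide
        double gigabytes = files * objectsPerFile * bytesPerObject / 1e9;
        System.out.printf("~%.0f GB of NameNode heap%n", gigabytes);  // ~300 GB
      }
    }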
Large No. of small files: Performance Impact
Number of map tasks: suppose we have 10 GB of data to process and it is stored entirely as 32 KB files. That is 10 x 1024 x 1024 / 32 = 327,680 files, and therefore roughly 327,680 map tasks.
That is a huge list of tasks sitting in the queue.
Each map task also pays a start-up and tear-down latency, because a Java process is started and stopped for it.
Disk I/O is inefficient with such small read sizes.
HDFS optimized for large files
Lots of small files is bad!
Solution:
Merge/Concatenate files
Sequence files
HBase, Hive configuration
CombineFileInputFormat
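One of the remedies listed above, SequenceFiles, packs many small files into a single large HDFS file of key/value records. A minimal sketch using the SequenceFile.Writer API from org.apache.hadoop.io; the local input directory and output path are hypothetical.

    import java.io.File;
    import java.nio.file.Files;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class PackSmallFiles {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path out = new Path("/data/packed.seq");   // hypothetical output path
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                SequenceFile.Writer.file(out),
                SequenceFile.Writer.keyClass(Text.class),
                SequenceFile.Writer.valueClass(BytesWritable.class))) {
          // Each small local file becomes one record: key = name, value = contents.
          // Assumes a local directory named "small-files" exists.
          for (File f : new File("small-files").listFiles()) {
            byte[] bytes = Files.readAllBytes(f.toPath());
            writer.append(new Text(f.getName()), new BytesWritable(bytes));
          }
        }
      }
    }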
Read/Write Processes in HDFS
Read Process in HDFS
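On the read path the client asks the NameNode for the block locations of a file, then streams each block from a nearby DataNode. A minimal client-side sketch with the standard FileSystem API; the path is a placeholder, and the DataNode-by-DataNode streaming is hidden behind the input stream.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsRead {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();  // loads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        try (FSDataInputStream in = fs.open(new Path("/data/example.dat"))) {
          // Copy the file to stdout; block-by-block fetching happens inside the stream.
          IOUtils.copyBytes(in, System.out, 4096, false);
        }
      }
    }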
Write Process in HDFS
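On the write path the client asks the NameNode to allocate blocks and then pushes data through a pipeline of DataNodes, one per replica. A minimal client-side sketch; the path and payload are placeholders.

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWrite {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataOutputStream out = fs.create(new Path("/data/output.txt"))) {
          // Data is pushed through a pipeline of DataNodes (one per replica).
          out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
        }
      }
    }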
HDFS Tuning Parameters
Overview
Tuning parameters
Specifically DFS Block size
NameNode, DataNode system/dfs parameters.
HDFS XML configuration files
Tuning is typically done in the HDFS XML configuration files, for example hdfs-site.xml.
This is mostly the domain of system administrators of Hadoop clusters, but it is good to know which changes affect performance, and if you are trying things out on your own there are some important parameters to keep in mind.
Commercial vendors provide GUI-based management consoles.
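A quick way to see which values are actually in effect is to read them through the Configuration API; defaults come from hdfs-default.xml and are overridden by whatever hdfs-site.xml is on the classpath. The property names below are real HDFS keys; the fallback values shown are only illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class ShowHdfsConfig {
      public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads *-default.xml and *-site.xml
        conf.addResource("hdfs-site.xml");          // make sure the HDFS overrides are read
        System.out.println("dfs.blocksize   = " + conf.get("dfs.blocksize", "134217728"));
        System.out.println("dfs.replication = " + conf.get("dfs.replication", "3"));
      }
    }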
HDFS Block Size
Recall: the block size impacts how much NameNode memory is used and how many map tasks are launched, and it also affects performance.
Default 64 megabytes: typically bumped up to 128 megabytes, and it can be changed based on the workload.
The parameter that controls this is dfs.blocksize (dfs.block.size in older releases).
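The cluster-wide value lives in hdfs-site.xml, but a client can also choose a block size for an individual file at create time. A sketch using the long-standing FileSystem.create overload; the path and sizes are illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CreateWithBlockSize {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        long blockSize = 128L * 1024 * 1024;   // 128 MB for this file only
        short replication = 3;
        int bufferSize = 4096;
        try (FSDataOutputStream out = fs.create(new Path("/data/big.dat"),
                true /* overwrite */, bufferSize, replication, blockSize)) {
          out.writeBytes("...");   // payload elided
        }
      }
    }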
HDFS Replication
Default replication is 3.
Parameter: dfs.replication
Tradeoffs:
Lower it to reduce replication cost
Less robust
Higher replication can make data local to more workers
Lower replication ➔ More space
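dfs.replication sets the cluster default, but replication can also be adjusted per file, either at create time or afterwards. A small sketch; the path is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ChangeReplication {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("dfs.replication", 2);   // default for files created by this client
        FileSystem fs = FileSystem.get(conf);
        // Lower an existing file's replication factor (less robust, uses less space).
        boolean ok = fs.setReplication(new Path("/data/cold/archive.dat"), (short) 2);
        System.out.println("replication change scheduled: " + ok);
      }
    }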
Lots of other parameters
Various tunables exist for the DataNode and NameNode.
Examples:
dfs.datanode.handler.count (default 10): sets the number of server threads on each DataNode.
dfs.namenode.fs-limits.max-blocks-per-file: maximum number of blocks per file.
Full list:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
HDFS Performance and Robustness
Common Failures
DataNode failures: a server can fail, a disk can crash, or data can become corrupted.
Network failures: data corruption can also stem from network or disk issues, and all of these can lead to failures on the DataNode side of HDFS. A network outage between a set of DataNodes and the NameNode can affect many DataNodes at the same time.
NameNode failures: the NameNode host or its disk can fail, or the NameNode's metadata can become corrupted.
HDFS Robustness
NameNode receives heartbeat and block reports from
DataNodes
Mitigation of common failures
Periodic heartbeat: sent from each DataNode to the NameNode.
DataNodes without a recent heartbeat:
They are marked as dead, and no new I/O is sent to them. Remember that the NameNode holds the replication information for every file in the file system, so when a DataNode fails it knows which blocks fall below their replication factor. The replication factor is set for the entire system and can also be set per file when the file is written; either way, the NameNode knows which blocks are under-replicated and starts re-replicating them.
Mitigation of common failures
Checksum computed on file creation.
Checksums stored in HDFS namespace.
Used to check retrieved data.
Re-read from alternate replica
Mitigation of common failures
Multiple copies of central metadata structures.
Failover to a standby NameNode (manual by default).
Performance
Changing blocksize and replication factor can improve
performance.
Example: Distributed copy
Hadoop distcp allows parallel transfer of files.
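For example, a command of the form hadoop distcp hdfs://nn1:8020/src hdfs://nn2:8020/dst copies a directory tree using many map tasks running in parallel (the NameNode addresses and paths here are only placeholders).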
Replication trade off with respect to robustness
When you run MapReduce jobs, having replicas gives the scheduler additional data-locality possibilities, but the big tradeoff is robustness. Suppose, in this example, that we keep no replicas.
We might lose a node or a local disk: the data cannot be recovered because there is no replication.
Similarly, with data corruption, if a block's checksum is bad there is no replica to re-read from.
Changes to other parameters can have similar effects.
Conclusion
In this lecture, we have discussed the design goals of HDFS, the read/write processes in HDFS, and the main configuration tuning parameters that control HDFS performance and robustness.