By 邓侃 Ph.D, 朱小杰, 李小吉
SmartClouder.com
2 / 30
(Diagram) A tree-structured directory:
  Location
  Metadata: File Name, Size, Type, Timestamp, etc.
  Functionality: Create, Truncate, Delete, Read, Write, Seek, List, Open, Close
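
The functionality list above is essentially the file system API. As a minimal sketch (not part of the original slides; the class and method names are hypothetical), such an interface could be written as:

```python
from abc import ABC, abstractmethod

class FileSystem(ABC):
    """Sketch of the file-system operations listed above (hypothetical names)."""

    @abstractmethod
    def create(self, path: str) -> None: ...
    @abstractmethod
    def truncate(self, path: str, size: int) -> None: ...
    @abstractmethod
    def delete(self, path: str) -> None: ...
    @abstractmethod
    def open(self, path: str) -> "FileHandle": ...
    @abstractmethod
    def list(self, dir_path: str) -> list[str]: ...

class FileHandle(ABC):
    """Handle returned by open(); carries the per-file operations."""

    @abstractmethod
    def read(self, n: int) -> bytes: ...
    @abstractmethod
    def write(self, data: bytes) -> int: ...
    @abstractmethod
    def seek(self, offset: int) -> None: ...
    @abstractmethod
    def close(self) -> None: ...
```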


                                                           3 / 30
Directory ID   Subdir IDs & File IDs
1              21, 22
21             31, 32, 33, …
22             34, 35, …

Node ID   Metadata                                                    Storage Chunk IDs
22        theNameOfADir|20120306|24 (number of subdirs and files)     NIL
32        theNameOfAFile|20120306|1024 (file size)                    3A28C329, …


A homebrew data structure of a File System

• System architect’s 3 routines:
   Define the data structure first,
   Decide the workflow,
   Design the modules and where to deploy them.

• File system’s fundamental data structure:
   Tree structure of directories and files,
   Metadata, physical storage address.

• How to allocate storage space for directories and files?
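
To make the homebrew structure concrete, here is a minimal sketch (not from the slides; the table names DIR_TREE and NODE_TABLE and the helper are made up) of the two tables above as in-memory dictionaries:

```python
# Homebrew file-system data structure: a directory tree plus a metadata table.
# Illustrative only; identifiers and layout are not a real FS format.

# Table 1: directory ID -> IDs of its subdirectories and files.
DIR_TREE = {
    1:  [21, 22],
    21: [31, 32, 33],
    22: [34, 35],
}

# Table 2: node ID -> metadata ("name|timestamp|size") and storage chunk IDs.
# A directory's "size" is its number of children; its chunk list is NIL (None).
NODE_TABLE = {
    22: {"meta": "theNameOfADir|20120306|24",    "chunks": None},
    32: {"meta": "theNameOfAFile|20120306|1024", "chunks": ["3A28C329"]},
}

def list_children(dir_id: int) -> list[str]:
    """The 'List' operation: names of a directory's children."""
    names = []
    for child_id in DIR_TREE.get(dir_id, []):
        node = NODE_TABLE.get(child_id)
        if node:
            names.append(node["meta"].split("|")[0])
    return names
```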

                                                                                                                 4 / 30
Learn from MemCached

• Slab: a fixed-size piece of storage space.
• SlabClass:
   A group of slabs of the same size.
• Chunk:
   Each slab is split into many chunks,
   and the chunks within a slab are all the same size.
• Slots:
   A free list of addresses pointing to the re-usable chunks.

• Why split the storage into fixed-size slabs and chunks?
   Easy to re-use, but it may waste storage space.
• Before storing data, find the slab with the appropriate chunk size,
   equal to or a little bigger than the data.
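
A rough sketch of this allocation policy follows. It is illustrative only: the chunk sizes, the free-slot lists, and the function names are assumptions, not MemCached's actual internals.

```python
import bisect

# Slab-style allocator sketch: fixed chunk sizes, smallest class that fits wins.
CHUNK_SIZES = [64, 128, 256, 512, 1024]          # bytes per chunk, per slab class
free_slots = {size: [] for size in CHUNK_SIZES}  # "slots": reusable chunk addresses
next_addr = {size: 0 for size in CHUNK_SIZES}    # next fresh chunk in each class

def allocate(data: bytes) -> tuple[int, int]:
    """Return (chunk_size, chunk_address) for the smallest class that fits."""
    i = bisect.bisect_left(CHUNK_SIZES, len(data))
    if i == len(CHUNK_SIZES):
        raise ValueError("data larger than the largest chunk size")
    size = CHUNK_SIZES[i]
    if free_slots[size]:                 # re-use a freed chunk first
        return size, free_slots[size].pop()
    addr = next_addr[size]               # otherwise take a fresh chunk
    next_addr[size] += 1
    return size, addr

def free(chunk_size: int, addr: int) -> None:
    """Return a chunk to its slab class's free list for later re-use."""
    free_slots[chunk_size].append(addr)

size, addr = allocate(b"x" * 300)        # 300 bytes -> a 512-byte chunk (space wasted)
```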

                                                                             5 / 30
Directory ID   Subdir IDs & File IDs
1              21, 22
21             31, 32, 33, …
22             34, 35, …

Node ID   Metadata                                                    Storage Chunk ID
22        theNameOfADir|20120306|24 (number of subdirs and files)     NIL
32        theNameOfAFile|20120306|1024 (file size)                    3A28C329, …


Workflow:
  % fappend /home/user/test.txt “content added to the tail …”
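
As a sketch of what fappend has to do with the homebrew structures (illustrative only; the tables, IDs, and chunk-naming scheme are made up, and this is not a real command implementation):

```python
# Sketch of the append workflow: resolve the path component by component
# through the directory tree, then record a newly allocated chunk ID on the file.
DIR_TREE = {1: {"home": 21}, 21: {"user": 22}, 22: {"test.txt": 32}}   # name -> child ID
NODE_TABLE = {32: {"meta": "test.txt|20120306|1024", "chunks": ["3A28C329"]}}

def resolve(path: str, root: int = 1) -> int:
    """Walk the directory tree from the root, one path component per level."""
    node = root
    for part in path.strip("/").split("/"):
        node = DIR_TREE[node][part]              # KeyError if the path does not exist
    return node

def fappend(path: str, data: bytes) -> None:
    node_id = resolve(path)                      # 1) walk the directory tree
    chunk_id = f"CHUNK-{len(data)}"              # 2) allocate a chunk sized for the data
    NODE_TABLE[node_id]["chunks"].append(chunk_id)   # 3) link the chunk to the file's metadata

fappend("/home/user/test.txt", b"content added to the tail ...")
```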




                                                                                                6 / 30
• When appending new data,
   the storage location is chosen by size.
   There is no guarantee that data of the same file
   is allocated in contiguous blocks.
• Frequently modifying a file induces fragmentation.
• Fragmentation decreases disk IO efficiency,
   because disk IO involves mechanical movement.
• From time to time, run defragmentation.
• Flash storage may not care about fragmentation.

                                                     7 / 30
Linux Ext2 Layers

(Diagrams: Ext2 Layers, Ext2 Physical Layer, Ext2 Directory, Ext2 iNode)

Linux Ext2 structures are similar to our homebrew structure.

Directory ID   Subdir IDs & File IDs
1              21, 22
21             31, 32, 33, …
22             34, 35, …

Node ID   Metadata                                                    Storage Chunk ID
22        theNameOfADir|20120306|24 (number of subdirs and files)     NIL
32        theNameOfAFile|20120306|1024 (file size)                    3A28C329, …
                                                                                                                             8 / 30
Linux Ext2 System Architecture

• Data is stored in the buffer cache first.
• Virtual File System: transparent to the FS implementation.
• Transparent to the device.
• Separate data flow from control flow.
                                                                           9 / 30
• Writing to disk is slow.
   So appending is slow too, but still much faster than random access.
• Store in the buffer cache first, then write to disk.
   Store in the buffer cache first, then write to disk as a log, then merge (commit) into files.


                       Linux Ext3 = Linux Ext2 + Journaling File System

Time: T1                                                 Commit Time
Committed File                             Log (Journal)
Position   #1   #2   #3   #4   #5   #6     @1            @2          @3       @4
Data       10   20   30   40   50   -      Add 2 to #2   Append 90   Del #4   Minus 2 to #2

Time: T2
Committed File                             Log (Journal)
Position   #1   #2   #3   #4   #5   #6     @1              @2            @3
Data       10   22   30   -    50   90     Minus 2 to #2   Add 2 to #5   Set #2 80

Data( #2) = ?     Data( #2) = File( #2) + Log(@1) + Log(@2) + Log(@3) = 80
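
A small sketch of the replay arithmetic above (illustrative only; the operation encoding is made up, not Ext3's on-disk journal format):

```python
# Replay the journal on top of the committed file to get the current value of #2.
committed = {1: 10, 2: 22, 3: 30, 5: 50, 6: 90}        # committed state at time T2

log = [("minus", 2, 2),    # @1: Minus 2 to #2
       ("add",   5, 2),    # @2: Add 2 to #5
       ("set",   2, 80)]   # @3: Set #2 80

def replay(position: int) -> int:
    """Apply the journal entries, in order, to the committed value at `position`."""
    value = committed[position]
    for op, pos, arg in log:
        if pos != position:
            continue
        if op == "add":
            value += arg
        elif op == "minus":
            value -= arg
        elif op == "set":
            value = arg
    return value

assert replay(2) == 80      # File(#2)=22 -> 20 after @1 -> 80 after @3
```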


                                                                                                         10 / 30
• When a single machine’s local disk space is not sufficient,
   expand the storage by mounting a remote file system onto the local one.
• VFS keeps the File System interface consistent,
   so the variety of FS implementations is transparent to the client.


                        Distributed Linux Ext3 with NFS

(Diagram: a local file system with a remote file system mounted over NFS)

• Different FS’s fit different types of data.
• Cannot mount too many remote FS’s: it consumes too much local resource.
• Data Protocol: XDR (External Data Representation).
                                                                                11 / 30
• Data structure:
 1. tree structure of directory and file,
 2. metadata (inode).

• Physical layer:
 1. fixed-size blocks,
 2. block groups of different sizes.

• Problems to solve:
 Fragmentation,
 Disk IO is slow, but append is faster than random access.
 Local disk space is not sufficient.

• Concepts to remember:
 iNode: metadata of directory and file in Unix/Linux.
 Virtual File System: an abstract layer to unify the APIs of different FS implementations.
 Journaling File System: append to log first, then merge into file.

                                                                                   12 / 30
13 / 30
• Hadoop is an open source project, supervised by the Apache organization.
   Implemented in Java.
• Hadoop is a distributed system for large-scale storage and parallel computing.
   A mimic of the Google system.

Hadoop stack:   Pig   Chukwa   Hive   HBase
                MapReduce   HDFS   ZooKeeper
                Core   Avro

Google stack:   MapReduce   GFS   BigTable

         Hadoop Common: The common utilities that support the other Hadoop subprojects.
         Avro: A data serialization system that provides dynamic integration with scripting languages.
         Chukwa: A data collection system for managing large distributed systems.
         HBase: A scalable, distributed database that supports structured data storage for large tables.
         HDFS: A distributed file system that provides high throughput access to application data.
         Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
         MapReduce: A software framework for distributed processing of large data sets on compute clusters.
         Pig: A high-level data-flow language and execution framework for parallel computation.
         ZooKeeper: A high-performance coordination service for distributed applications.


                                                                                                                   14 / 30
• Hadoop is popular.
   Adopted by many companies.

• Yahoo:
   More than 100,000 CPUs in more than 25,000 computers running it.
   The biggest cluster: 4,000 nodes,
   2 x 4-CPU boxes, each with 4 x 1 TB disks and 16 GB RAM.

• Amazon:
   Processes millions of sessions daily for analytics.
   Uses both the Java and streaming APIs.

• Facebook:
   Uses it to store copies of internal logs and dimension data sources.
   A source for reporting and analytics, with machine learning algorithms.

• Facebook:
   Uses it to analyze search logs and do data mining on the web page database.
   Processes 3,000 TB of data per week.


                                                                           15 / 30
• Distributed File System:
   Every large file is split into blocks of 256 MB,
   and each block is allocated to a different storage node.
• Reliability:
   Each block has multiple replicas.
• Master-Slave architecture:
   Master: Namenode,
   Slave: Datanode.
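
A toy sketch of the block-splitting idea (the 256 MB figure comes from the slide; everything else, including the block-naming scheme, is illustrative):

```python
BLOCK_SIZE = 256 * 1024 * 1024   # 256 MB, as stated on the slide

def split_into_blocks(file_size: int) -> list[tuple[str, int]]:
    """Return (block_name, block_length) pairs for a file of the given size."""
    blocks = []
    offset = 0
    while offset < file_size:
        length = min(BLOCK_SIZE, file_size - offset)
        blocks.append((f"blk_{len(blocks)}", length))
        offset += length
    return blocks

# A 576 MB file splits into 256 MB + 256 MB + 64 MB, matching the later example.
print(split_into_blocks(576 * 1024 * 1024))
```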




                                                       16 / 30
• The Namenode handles the FS namespace,
   including opening, closing, and renaming files and directories.

• The Namenode supervises
   the creation, deletion, and replication of blocks.

• The namespace tree is cached in RAM,
   and also stored permanently in the FSImage file.

• The edit log records open, close, and rename operations
   on files and directories, etc.

                                                                           17 / 30
• The SecondaryNameNode executes
   the merge of the fsimage and edits files.

• The NameNode behaves like a journaling file system:
   a client’s request to create, delete, or rename a directory or file
   is written into the “edits” log first,
   then merged into the “fsimage” file.
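
A tiny sketch of the edits-then-merge idea (purely illustrative; the real fsimage and edits files are binary formats with many more record types):

```python
# Namespace journaling sketch: mutations go to an edit log first, and a
# checkpoint folds the log into the persistent image (the SecondaryNameNode's job).
fsimage = {"/", "/home"}                 # committed namespace (set of paths)
edits = []                               # pending namespace operations

def create(path: str) -> None:
    edits.append(("create", path))       # journal first: a cheap sequential append

def delete(path: str) -> None:
    edits.append(("delete", path))

def checkpoint() -> None:
    """Merge the edit log into the image, then truncate the log."""
    for op, path in edits:
        if op == "create":
            fsimage.add(path)
        elif op == "delete":
            fsimage.discard(path)
    edits.clear()

create("/home/user")
delete("/home")
checkpoint()                             # fsimage is now {"/", "/home/user"}
```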

                                                                   18 / 30
HDFS Create

An FSDataOutputStream is returned by
DistributedFileSystem after contacting
the NameNode.

The uploaded file is split into multiple packets.
Only after the first packet has been
stored in all the replicas
does the second packet begin to upload.
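
A literal sketch of the slide's description (illustrative only; the real HDFS client streams packets through a datanode pipeline with acknowledgements rather than calling each replica directly, and the packet size here is an assumption):

```python
# Write sketch: each packet must be stored by every replica before the next is sent.
PACKET_SIZE = 64 * 1024                  # 64 KB packets (assumed, for illustration)

class DataNodeStub:
    def __init__(self, name: str):
        self.name, self.received = name, bytearray()
    def store(self, packet: bytes) -> None:
        self.received.extend(packet)

def write_file(data: bytes, replicas: list[DataNodeStub]) -> None:
    for start in range(0, len(data), PACKET_SIZE):
        packet = data[start:start + PACKET_SIZE]
        for node in replicas:            # every replica must store the packet...
            node.store(packet)
        # ...only then does the loop move on to the next packet.

nodes = [DataNodeStub("dn1"), DataNodeStub("dn2"), DataNodeStub("dn3")]
write_file(b"x" * 200_000, nodes)        # 4 packets, each fully replicated in turn
```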



HDFS Read

An FSDataInputStream is returned by
DistributedFileSystem after contacting
the NameNode to find the block addresses.

The FSDataInputStream reads the block
from a datanode; if that fails, it goes to a replica.
Only after finishing reading the first block
does it go to the second.

                                                      19 / 30
Upload a File (576 MB)

Splitting:      A (256 MB)     B (256 MB)     C (64 MB)

Replication:    HDFS writes the replicas onto different nodes in different racks.
                This enhances reliability, but reduces write speed.

    Rack1:  DataNode1: A, B     DataNode2: B’, C     DataNode3: A’, C’
    Rack2:  DataNode1: A’’, C’’     DataNode2: B’’

                Only after a failure to read the first replica
                does the client go on to read the second replica.
                1. Read block A   X        2. Read block A’   √

Consistency:    The NameNode maps each block to its replica locations:
    A: Rack1-DataNode1-A    [Rack1-DataNode3-A’]    [Rack2-DataNode1-A’’]
    B: Rack1-DataNode1-B    [Rack1-DataNode2-B’]    [Rack2-DataNode2-B’’]
    C: Rack1-DataNode2-C    [Rack1-DataNode3-C’]    [Rack2-DataNode1-C’’]
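
A sketch of the consistency map and the read-with-failover behavior above (names and structure are illustrative, not HDFS code):

```python
# The NameNode-side map from the slide: block -> ordered replica locations.
BLOCK_MAP = {
    "A": ["Rack1-DataNode1-A", "Rack1-DataNode3-A'", "Rack2-DataNode1-A''"],
    "B": ["Rack1-DataNode1-B", "Rack1-DataNode2-B'", "Rack2-DataNode2-B''"],
    "C": ["Rack1-DataNode2-C", "Rack1-DataNode3-C'", "Rack2-DataNode1-C''"],
}

def read_block(block: str, is_alive) -> str:
    """Try the replicas in order; fall back to the next one only on failure."""
    for location in BLOCK_MAP[block]:
        if is_alive(location):
            return location
    raise IOError(f"all replicas of block {block} are unreachable")

# Example: the first replica of A is down, so the read falls back to A'.
dead = {"Rack1-DataNode1-A"}
print(read_block("A", lambda loc: loc not in dead))   # -> "Rack1-DataNode3-A'"
```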




                                                                                                             20 / 30
• Hadoop HDFS:
   An open source implementation of the Google File System.

• Master-Slave Architecture:
   Namenode: the master for the namespace, i.e. directories and files.
   Datanode: the slave holding the blocks for data storage.

• The Namenode is a journaling file system:
   creation, deletion, and renaming in the namespace are written into “edits” first,
   then merged into the FSImage file by the SecondaryNamenode.

• HDFS write:
   A file is split into packets.
   Only after the first packet is written into every replica
   does the second packet start to upload.




                                                                              21 / 30
22 / 30
(Timeline figure: 1997, 1998, 2000, 2001, 2009)


                 23 / 30
• HDFS and the Google File System
   are very similar to each other.
• A single master with shadow masters.
   Multiple chunk servers,
   containing fixed-size data blocks (chunks).
• HeartBeat messages between
   the master and the chunkservers
   confirm each other’s liveness.




                                         24 / 30
• 1,2: A client asks the master
  for all the replica addresses.
• 3: The client sends data to
   the nearest chunk server;
   the other chunk servers are in a pipeline,
   and the received data is buffered in cache.
• 4: The client sends a write request
 to the primary.
• 5: The primary replica decides
 the offset of the received data
 in the chunk.
• 6: Completion messages
 from secondary replicas.
• 7: The primary replies to the client.
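
A compact simulation of the seven-step flow above (illustrative only; real GFS uses chunk leases, version numbers, and much more careful error handling, and the class and function names here are made up):

```python
# Toy simulation of the GFS write control/data flow described above.
class ChunkServer:
    def __init__(self, name: str):
        self.name = name
        self.buffer = None                      # step 3: pipelined data is cached here
        self.chunk = bytearray()

    def push(self, data: bytes, pipeline: list["ChunkServer"]) -> None:
        self.buffer = data                      # buffer the data in cache
        if pipeline:                            # forward to the next server in the pipeline
            pipeline[0].push(data, pipeline[1:])

    def commit(self, offset: int) -> str:
        """Apply the buffered data at the given offset; return an ack."""
        self.chunk[offset:offset + len(self.buffer)] = self.buffer
        self.buffer = None
        return f"{self.name}: ok"

def gfs_write(data: bytes, replicas: list[ChunkServer]) -> str:
    primary, *secondaries = replicas            # steps 1-2: master named these replicas
    replicas[0].push(data, replicas[1:])        # step 3: data flows along the pipeline
    # step 4: the client sends the write request to the primary
    offset = len(primary.chunk)                 # step 5: the primary picks the offset
    primary.commit(offset)
    acks = [s.commit(offset) for s in secondaries]    # step 6: secondaries acknowledge
    return f"write at offset {offset}, acks: {acks}"  # step 7: primary replies to client

replicas = [ChunkServer("primary"), ChunkServer("s1"), ChunkServer("s2")]
print(gfs_write(b"record-1", replicas))
```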




                                            25 / 30
(Figures: Experimental Environment; Aggregate Rates)
               26 / 30
Read is much faster than write




N clients WRITE simultaneously to N distinct files.
N clients APPEND simultaneously to 1 single file.




                                                      27 / 30
GFS is for big files

• GFS is for large files, not optimized for small files:
   Millions of files, each >100 MB or >1 GB,
   from BigTable, MapReduce records, etc.
• Workload: streaming.
   Large streaming reads (> 1 MB), small random reads (a few KBs).
   Files are appended sequentially, and seldom modified again.
   Response time for read and write is not critical.

• GFS has no directory (inode) structure:
   It simply uses directory-like filenames, e.g. /foo/bar,
   so listing the files in a directory is slow (see the prefix-scan sketch after this list).

• Re-replication:
  When the number of replicas falls below predefined threshold,
  the master assigns the highest priority to clone such chunks.

• Recovery:
  The master and the chunk servers restore their states within a few seconds.
  The shadow master provides read-only accesses.
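
A sketch of why directory listing is slow in a flat namespace (illustrative; GFS's actual namespace table uses prefix compression and per-node locks):

```python
# Flat namespace sketch: full pathnames are the keys, there are no directory nodes.
namespace = {
    "/foo/bar":      "meta-1",
    "/foo/baz":      "meta-2",
    "/logs/2012/01": "meta-3",
}

def list_directory(prefix: str) -> list[str]:
    """'Listing' /foo means scanning every filename for the prefix."""
    if not prefix.endswith("/"):
        prefix += "/"
    return [path for path in namespace if path.startswith(prefix)]

print(list_directory("/foo"))   # scans the whole table: O(total files), not O(children)
```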
                                                                                  28 / 30
GFS is not a panacea

• GFS is NOT for all kinds of files:
   Migrate files that fit better elsewhere to other FS’s,
   e.g. Picasa, Gmail, project code.
• Highly relaxed consistency:
   3 replicas, but only “at-least-once” semantics, which is sometimes too risky.
   No transaction guarantee of “all succeed or roll back”.

• The master is the bottleneck:
   All clients have to contact the master first.
   The namespace of all directories and files cannot be too big.




                                                                                  29 / 30
• There are no stupid questions, but it is stupid not to ask.
• When sleepy, the best trick to wake up is to ask questions.


                                                                30 / 30
