GOOGLE FILE SYSTEM
Presented by
Junyoung Jung (2012104030)
Jaehong Jeong (2011104050)
Dept. of Computer Engineering
Kyung Hee Univ.
Big Data Programming (Prof. Lee Hae Joon), 2017-Fall
2
Contents
1. INTRODUCTION
2. DESIGN
3. SYSTEM INTERACTIONS
4. MASTER OPERATION
5. FAULT TOLERANCE AND DIAGNOSIS
6. MEASUREMENTS
7. CONCLUSIONS
INTRODUCTION
This paper…
Background
Different Points in the Design Space
3
1
1.1 THIS PAPER…
4
The Google File System
- Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung
- Google
- 19th ACM Symposium on Operating Systems Principles (SOSP), 2003
Previous Distributed File Systems
1.2 BACKGROUND
5
Performance Scalability Reliability Availability
Previous Distributed File Systems
1.2 BACKGROUND
6
Performance Scalability Reliability Availability
GFS (Google File System) has the same goals.
However, it reflects a marked departure from some earlier file system designs.
1. Component failures are the norm rather than the exception.
2. Files are huge by traditional standards.
3. Most files are mutated by appending new data.
4. Co-designing the applications and the file system API benefits the whole system.
5. High sustained bandwidth matters more than low latency.
1.3 DIFFERENT POINTS IN THE DESIGN SPACE
7
DESIGN
Interface
Architecture
8
2
Familiar Interface
- create
- delete
- open
- close
- read
- write
2.1 INTERFACE
9
Moreover…
- Snapshot
ㆍ Low cost
ㆍ Make a copy of a file/directory tree
ㆍ Duplicate metadata
- Atomic Record append
ㆍ Atomicity with multiple concurrent writes
2.2 ARCHITECTURE
10
2.2 ARCHITECTURE
11
Chunk
- Files are divided into fixed-size chunks
- 64MB
- Larger than typical file system block sizes
Advantages from large chunk size
- Reduce interaction between client and master
- Client can perform many operations on a given chunk
- Reduce size of metadata stored on the master
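Because the chunk size is fixed, a client can compute which chunk holds any byte offset with simple local arithmetic, with no master involvement; this is the basis of the first advantage above. A minimal sketch (the function name is illustrative, not from the paper):

```python
# Minimal sketch: with a fixed 64 MB chunk size, a byte offset maps to a
# chunk index by integer division. Names here are illustrative only.
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB

def chunk_index(byte_offset: int) -> int:
    """Index of the chunk containing byte_offset."""
    return byte_offset // CHUNK_SIZE

# A 1 GB file spans only 16 chunks, so a single master request can
# cover a very large sequential read.
assert chunk_index(0) == 0
assert chunk_index(CHUNK_SIZE) == 1
assert chunk_index(1024**3 - 1) == 15
```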
2.2 ARCHITECTURE
12
GFS chunkservers
- Store chunks on local disks as Linux files.
- Read/Write chunk data specified by a chunk handle & byte range
2.2 ARCHITECTURE
13
GFS master
- Maintains all file system metadata
ㆍ Namespace
ㆍ Access-control information
ㆍ Chunk locations
ㆍ ‘Lease’ management
2.2 ARCHITECTURE
14
GFS client
- GFS client code linked into each application.
- GFS client code implements the file system API.
- GFS client code communicates with the master & chunkservers.
2.2 ARCHITECTURE
15
Using the fixed chunk size, the client translates the file name & byte offset into a chunk index,
then sends a request to the master.
2.2 ARCHITECTURE
16
The master replies with the chunk handle & the locations of the chunkserver replicas
(including which one is the 'primary').
2.2 ARCHITECTURE
17
The client caches this info, using the file name & chunk index as the key.
2.2 ARCHITECTURE
18
The client then requests data from the nearest chunkserver,
specifying the chunk handle & a byte range within the chunk.
2.2 ARCHITECTURE
19
No further client-master interaction is needed for this 64 MB chunk
until the cached info expires or the file is re-opened.
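Putting the steps above together, the read path can be sketched roughly as follows. This is a hedged illustration, not the real GFS client API: the master/chunkserver interfaces and the nearest() policy are assumptions.

```python
CHUNK_SIZE = 64 * 1024 * 1024

def nearest(replicas):
    # Placeholder policy; GFS estimates "nearest" from network topology
    # (e.g., IP addresses).
    return replicas[0]

class GFSClient:
    def __init__(self, master):
        self.master = master
        self.cache = {}  # (filename, chunk_index) -> (chunk_handle, replicas)

    def read(self, filename, offset, length):
        index = offset // CHUNK_SIZE              # step 1: local translation
        key = (filename, index)
        if key not in self.cache:                 # step 2: ask the master once
            self.cache[key] = self.master.lookup(filename, index)
        handle, replicas = self.cache[key]        # step 3: cached thereafter
        server = nearest(replicas)                # step 4: data flows directly
        # (Single-chunk read; a real client splits reads spanning chunks.)
        return server.read(handle, offset % CHUNK_SIZE, length)
```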
2.2 ARCHITECTURE
20
(Diagram: the client exchanges metadata only with the master, and data only with the chunkservers.)
2.2 ARCHITECTURE
21
Metadata
- The master stores three types: the file & chunk namespaces, the file-to-chunk mapping, and chunk replica locations
- All metadata is kept in memory
- The first two types are kept persistent through an operation log (chunk locations are polled from chunkservers instead)
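As a concrete picture of those three types, the master's in-memory state might look like the following. The structures below are purely illustrative assumptions, not the actual implementation.

```python
master_metadata = {
    # 1. File and chunk namespaces (persisted via the operation log).
    "namespace": {"/home/user/foo": {"acl": "user:rw"}},
    # 2. Mapping from files to chunk handles (persisted via the log).
    "file_to_chunks": {"/home/user/foo": ["chunk-001", "chunk-002"]},
    # 3. Chunk replica locations: NOT persisted; the master polls the
    #    chunkservers at startup and tracks changes via heartbeats.
    "chunk_locations": {"chunk-001": ["cs-12", "cs-07", "cs-31"]},
}
```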
SYSTEM INTERACTIONS
Leases & Mutation Order
Data Flow
Atomic Appends
Snapshot
22
3
3.1 LEASES & MUTATION ORDER
23
Objective
- Ensure data stays consistent & defined.
- Minimize load on the master.
The master grants a 'lease' to one replica
- Called the 'primary' chunkserver.
The primary defines a serial order for all mutations
- All secondaries follow this order.
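A rough sketch of this mechanism (class and method names are invented for illustration; the 60-second initial lease timeout is from the paper):

```python
import time

LEASE_TIMEOUT = 60.0  # initial lease timeout used in the paper

class Master:
    def __init__(self):
        self.leases = {}  # chunk_handle -> (primary_replica, expiry_time)

    def grant_lease(self, handle, replicas):
        primary, expiry = self.leases.get(handle, (None, 0.0))
        if time.time() >= expiry:         # no live lease: pick a primary
            primary = replicas[0]
            self.leases[handle] = (primary, time.time() + LEASE_TIMEOUT)
        return primary

class PrimaryChunkserver:
    def __init__(self):
        self.serial = 0

    def order(self, mutation):
        # The primary assigns consecutive serial numbers to concurrent
        # mutations; all secondaries apply them in this exact order.
        self.serial += 1
        return (self.serial, mutation)
```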
3.2 DATA FLOW
24
3.3 ATOMIC APPENDS
25
The client specifies only the data (not the offset)
Similar to writes
- Mutation order is determined by the primary
- All secondaries use the same mutation order
GFS appends the data to the file at least once atomically, at an offset of GFS's choosing
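The at-least-once behavior falls out of retrying on failure, as this hedged sketch shows (the replica interfaces are assumptions, not the real protocol):

```python
def record_append(primary, secondaries, data, max_retries=3):
    for _ in range(max_retries):
        offset = primary.append(data)             # primary picks the offset
        if all(s.write_at(offset, data) for s in secondaries):
            return offset                         # success: a defined region
        # On failure, some replicas may already hold the record; the retry
        # appends again at a new offset, which is why a record can appear
        # more than once ("at least once").
    raise IOError("record append failed after retries")
```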
3.4 SNAPSHOT
26
Goals
- To quickly create branch copies of huge data sets
- To easily checkpoint the current state
Copy-on-write technique
- Metadata for the source file or directory tree is duplicated.
- Reference counts for the chunks are incremented.
- Chunks are copied lazily, at the first write.
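A self-contained copy-on-write sketch, assuming toy metadata structures; clone_chunk and store_write are stand-ins for real chunkserver operations, and meta is assumed to hold "file_to_chunks" and "refcount" dicts.

```python
def clone_chunk(handle):
    return handle + "-copy"        # stand-in for a local disk copy

def store_write(handle, data):
    pass                           # stand-in for the real replica write

def snapshot(meta, src, dst):
    # Duplicate metadata only; both files now share the same chunks.
    meta["file_to_chunks"][dst] = list(meta["file_to_chunks"][src])
    for h in meta["file_to_chunks"][src]:
        meta["refcount"][h] += 1

def write_chunk(meta, filename, i, data):
    h = meta["file_to_chunks"][filename][i]
    if meta["refcount"][h] > 1:    # first write to a shared chunk
        new_h = clone_chunk(h)     # copy the chunk, then write the copy
        meta["refcount"][h] -= 1
        meta["refcount"][new_h] = 1
        meta["file_to_chunks"][filename][i] = new_h
        h = new_h
    store_write(h, data)
```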
MASTER OPERATION
Namespace Management and Locking
Replica Placement
Creation, Re-replication, Rebalancing
Garbage Collection
Stale Replica Detection
27
4
“ The master executes all
namespace operations and
manages chunk replicas
throughout the system.
28
4.1 Namespace Management and Locking
▰ Namespaces are represented as a lookup table mapping full
pathnames to metadata
▰ Use locks over regions of the namespace to ensure proper
serialization
▰ Each master operation acquires a set of locks before it runs
29
4.1 Namespace Management and Locking
▰ Example of Locking Mechanism
▻ Preventing /home/user/foo from being created while /home/user is being snapshotted to /save/user
▻ Snapshot operation
- Read locks on /home and /save
- Write locks on /home/user and /save/user
▻ File creation
- Read locks on /home and /home/user
- Write lock on /home/user/foo
▻ The two operations conflict on /home/user (read vs. write lock), so they are properly serialized
▰ A nice property of this locking scheme is that it allows concurrent mutations in the same directory
30
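The lock sets in the example can be computed mechanically: read locks on every ancestor directory, plus a lock on the leaf itself. A minimal sketch with illustrative names:

```python
def lock_set(path, leaf_mode):
    # Read locks on all ancestors, plus a leaf lock in the given mode.
    parts = path.strip("/").split("/")
    ancestors = ["/" + "/".join(parts[:i]) for i in range(1, len(parts))]
    return [(p, "read") for p in ancestors] + [(path, leaf_mode)]

snap = lock_set("/home/user", "write") + lock_set("/save/user", "write")
create = lock_set("/home/user/foo", "write")
# snap holds ("/home/user", "write") while create needs
# ("/home/user", "read"): the two operations serialize on /home/user.
```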
4.2 Replica Placement
▰ The chunk replica placement policy serves two purposes:
▻ Maximize data reliability and availability
▻ Maximize network bandwidth utilization
31
4.3 Creation, Re-replication, Rebalancing
▰ Creation
▻ Disk space utilization
▻ Number of recent creations on each chunkserver
▰ Re-replication
▻ Prioritized by how far a chunk is from its replication goal
▻ The highest-priority chunk is cloned first by copying the chunk data directly from an existing replica
▰ Rebalancing
▻ The master periodically moves replicas for better disk space and load balance
32
4.4 Garbage Collection
▰ Deleted files
▻ Deletion operation is logged
▻ File is renamed to a hidden name, then may be removed later or get recovered
▰ Orphaned chunks (unreachable chunks)
▻ Identified and removed during a regular scan of the chunk namespace
33
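A hedged sketch of this lazy scheme; the hidden-name encoding and structures are invented for illustration, while the three-day default retention is from the paper.

```python
import time

RETENTION = 3 * 24 * 3600  # default: hidden files kept for three days

def delete_file(namespace, path):
    # Log-and-rename: the file becomes hidden but stays recoverable.
    namespace[f"{path}.deleted@{int(time.time())}"] = namespace.pop(path)

def namespace_scan(namespace, now):
    for name in list(namespace):
        if ".deleted@" in name:
            deleted_at = int(name.rsplit("@", 1)[1])
            if now - deleted_at > RETENTION:
                del namespace[name]  # metadata gone; the file's chunks
                                     # become orphans, reclaimed later by
                                     # the regular chunk-namespace scan
```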
4.5 Stale Replica Detection
▰ Stale replicas
▻ Chunk version numbering
▻ The client or the chunkserver verifies the version number when it performs the
operation so that it is always accessing up-to-date data.
34
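In code form, the idea is just a version comparison: the master increments the chunk version whenever it grants a new lease, so a replica that was down during the grant reports an old number. A sketch with illustrative names:

```python
def bump_on_lease_grant(master_versions, handle):
    # Done by the master each time it grants a new lease on the chunk;
    # live replicas record the new version, a down replica does not.
    master_versions[handle] += 1
    return master_versions[handle]

def is_stale(replica_version, master_version):
    return replica_version < master_version
```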
FAULT TOLERANCE AND DIAGNOSIS
High Availability
Data Integrity
Diagnostic Tools
35
5
“ We cannot completely trust the
machines, nor can we completely
trust the disks.
36
5.1 High Availability
▰ Fast Recovery
▻ Operation log and checkpointing
▰ Chunk Replication
▻ Each chunk is replicated on multiple chunkservers on different racks
▰ Master Replication
▻ The operation log and checkpoints are replicated on multiple machines
37
5.2 Data Integrity
▰ Checksumming to detect corruption of stored data
▰ Each chunkserver independently verifies the integrity
38
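A sketch of the scheme, assuming the 64 KB blocks and 32-bit checksums described in the paper; zlib.crc32 stands in for the actual checksum function.

```python
import zlib

BLOCK = 64 * 1024  # each chunk is broken into 64 KB blocks

def block_checksums(chunk_data: bytes) -> list:
    return [zlib.crc32(chunk_data[i:i + BLOCK])
            for i in range(0, len(chunk_data), BLOCK)]

def verify(chunk_data: bytes, stored: list) -> bool:
    # Run by the chunkserver before returning data to any reader; on a
    # mismatch it reports an error and the master re-replicates the
    # chunk from another (good) replica.
    return block_checksums(chunk_data) == stored
```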
5.3 Diagnostic Tools
▰ Extensive and detailed diagnostic logging has helped
immeasurably in problem isolation, debugging, and
performance analysis, while incurring only a minimal cost.
▰ Logs record RPC requests and replies, etc.
39
MEASUREMENTS
Micro-benchmarks
Real World Clusters
Workload Breakdown
40
6
“ A few micro-benchmarks to
illustrate the bottlenecks
inherent in the GFS architecture
and implementation
41
6.1 Micro-benchmarks
▰ One master, two master replicas, 16 chunkservers, and 16
clients. (2003)
▰ All the machines are configured with dual 1.4 GHz PIII
processors, 2 GB of memory, two 80 GB 5400 rpm disks,
and a 100 Mbps full-duplex Ethernet connection to an HP
2524 switch.
42
6.1 Micro-benchmarks
43
6.2 Real World Clusters
44
6.3 Workload Breakdown
▰ Methodology and Caveats
▰ Chunkserver Workload
▰ Appends versus Writes
▰ Master Workload
45
CONCLUSIONS
46
7
Conclusions
▰ Different from previous file systems
▰ Satisfies the needs of its target applications
▰ Provides fault tolerance
47
48
THANKS!