International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 06 Issue: 03 | Mar 2019 www.irjet.net p-ISSN: 2395-0072
© 2019, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 2422
A Comparative Study of HDFS and the Google File System
for Handling Big Data
Rajesh R Savaliya1, Dr. Akash Saxena2
1Research Scholar, Rai University, Vill. Saroda, Tal. Dholka, Dist. Ahmedabad, Gujarat-382 260
2PhD Guide, Rai University, Vill. Saroda, Tal. Dholka, Dist. Ahmedabad, Gujarat-382 260
---------------------------------------------------------------------------***---------------------------------------------------------------------------
ABSTRACT - Handling and managing Big Data is a pressing requirement of today's software industry: systems must store very large amounts of data and retrieve only the required information from it. This paper compares two similar distributed file system frameworks used to store Big Data, the Hadoop Distributed File System (HDFS) and the Google File System (GFS), across their working and handling parameters. It also covers the MapReduce structure, the common processing model used with both HDFS and GFS to handle Big Data. The analysis is useful for understanding both frameworks and highlights the features they share and those in which they differ.
KEYWORDS: HDFS, GFS, NameNode, MasterNode, DataNode, ChunkServer, Big-Data.
1. INTRODUCTION
Big Data is the term used to describe the very large amounts of data now produced by electronic transactions and social media all over the world. The Hadoop Distributed File System and the Google File System were developed to store and handle such volumes of data while providing high throughput [1]. The challenges of Big Data are its complexity together with its velocity, variety, and volume, and these were taken into consideration in the design of HDFS and GFS for storing, maintaining, and retrieving the large amounts of Big Data currently generated in the field of IT [2]. Google first developed its distributed file system, GFS, and published articles describing it; the Apache open-source community then implemented its own distributed file system, HDFS, based on Google's design. The two file systems have been compared on many parameters, levels, and criteria for handling Big Data. Both HDFS and GFS were built to work with large data files arriving from different terminals in various formats, at large scales (terabytes or petabytes), distributed across hundreds of storage disks on commodity hardware. Both were developed to handle Big Data of different formats [3].
1.1 Hadoop Distributed File System Framework
HDFS, the Hadoop Distributed File System, is an open-source distributed framework for storing and handling large-scale data files, designed by Apache. Many network-based application environments, such as WhatsApp, Facebook, and Amazon, currently build on these concepts. HDFS and MapReduce are the core components of the Hadoop system [4]. HDFS is the distributed file system used to store large amounts of file data on DataNodes [5].
Figure 1: HDFS Framework
The Hadoop Distributed File System is a scalable, platform-independent system developed in Java. HDFS is a master-slave distributed framework specially designed to store large-scale data files on a large cluster consisting of DataNodes and a NameNode. The NameNode works as a master server that handles and stores the metadata for the data files in HDFS; it is also used to manage data file access by the different clients. The DataNodes handle the storage of the large-scale data itself. The main role of MapReduce is to decompose a job into tasks, monitor those tasks, and then integrate the final results. The MapReduce programming technique was successfully implemented by Google to store and process very large data files [6].
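The MapReduce flow described above (decompose a job into tasks, run them, then integrate the results) can be illustrated with a minimal word-count sketch in Python. This is only an illustration of the programming model; the function names and the two-document input are invented, not part of any Hadoop API.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    # Map: emit an intermediate (word, 1) pair for every word in the split.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group intermediate pairs by key, as the framework would.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield key, [v for _, v in group]

def reduce_phase(key, values):
    # Reduce: integrate the partial results into one count per word.
    return key, sum(values)

splits = ["big data needs big storage", "data storage at scale"]
intermediate = [pair for s in splits for pair in map_phase(s)]
result = dict(reduce_phase(k, vs) for k, vs in shuffle(intermediate))
print(result)  # e.g. 'big' -> 2, 'storage' -> 2
```

In a real Hadoop job the map and reduce functions run in parallel on the DataNodes holding the input splits, while the framework performs the shuffle over the network.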
1.2 Google File System Framework
The Google File System is a network- and node-based framework: a scalable, reliable, highly available, fault-tolerant distributed file system designed by Google to handle very large data files. GFS is built as a storage system on low-cost commodity hardware and is optimized for storing large amounts of data, organized in a hierarchical directory structure. The namespace (that is, the metadata) and data access control are handled by the master, which also monitors the status of each chunkserver at regular time intervals.
A GFS node cluster consists of a single master and multiple chunkservers that are continuously accessed by different clients. A chunkserver stores data as Linux files on local disks, with the stored data divided into chunks of fixed size (64 MB). Each chunk is replicated at least three times across the network. The large chunk size is very helpful for reducing network traffic and overhead. GFS runs large clusters of more than 1,000 nodes with 300 TB of disk storage capacity, continuously accessed by large numbers of clients [7].
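The figures above can be made concrete with a small sketch (Python; the 64 MB chunk size and three-way replication are the defaults quoted in the text, and the 1 TB file is a hypothetical example):

```python
CHUNK_SIZE = 64 * 2**20   # 64 MB default chunk size
REPLICAS = 3              # minimum replication quoted above

def chunk_count(file_size_bytes):
    # A file is split into fixed-size chunks; the last chunk may be partial,
    # hence ceiling division.
    return -(-file_size_bytes // CHUNK_SIZE)

file_size = 1 * 2**40                  # a hypothetical 1 TB file
chunks = chunk_count(file_size)        # 16384 chunks of 64 MB
raw_storage = file_size * REPLICAS     # 3 TB of raw disk consumed
print(chunks, raw_storage // 2**40)
```

The same arithmetic explains why a large chunk size reduces overhead: fewer chunks per file means fewer entries to track and fewer client-master exchanges.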
1.2.1 GFS offers important features such as:
 Fault tolerance
 Scalability
 Reliability
 High availability
 Data replication
 Metadata management
 Automatic data recovery
 High aggregate throughput
 Reduced client-master interaction, thanks to the large chunk size
1.2.2 Framework of GFS
The Google File System is a master/chunkserver communication framework. GFS consists of a single master and multiple chunkservers, and multiple clients can access both. The default chunk size is 64 MB, and each data file is divided into a number of fixed-size chunks. The master identifies and manages each chunk through a 64-bit handle [7]. Reliability is achieved by replicating each chunk on multiple chunkservers; by default, GFS creates three replicas.
1.2.2.1 Master
Master is used to handle namespace means all the metadata to maintain the bigdata file. Master will keep the track of location
of each replica chunk and periodically provide the information to each chunkserver. Master is also responsible handle to less
than 64 byte metadata for each 64MB chunk [10]. It will also responsible to collect the information for each chunkserver and
avoid fragmentation using garbage collection technique. Master acknowledges the future request to the client.
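The "less than 64 bytes of metadata per 64 MB chunk" figure implies a very small in-memory footprint on the master, as a quick back-of-the-envelope calculation shows (Python; the 300 TB input reuses the cluster size quoted earlier, purely as an illustration):

```python
CHUNK_SIZE = 64 * 2**20      # 64 MB per chunk
META_PER_CHUNK = 64          # upper bound quoted above, in bytes

def master_metadata_bytes(total_data_bytes):
    # Metadata scales with the number of chunks, not with the data itself.
    chunks = -(-total_data_bytes // CHUNK_SIZE)  # ceiling division
    return chunks * META_PER_CHUNK

# Even 300 TB of chunk data needs at most ~300 MB of chunk metadata,
# which fits comfortably in a single master's memory.
footprint = master_metadata_bytes(300 * 2**40)
print(footprint // 2**20)  # megabytes of metadata
```

This 1 : 1,000,000 ratio (64 bytes per 64 MB) is what makes a single-master design workable at cluster scale.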
1.2.2.2 Client
The working of the client will be responsible to ask the master for which chunkserver to refer for work. Client will create
chunk index using name and the byte offset. Client also ensures future request interaction in between master and client.
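The lookup described above (a file name plus a byte offset translated into a chunk index, which the master resolves to replica locations) can be sketched as follows. The lookup table and chunkserver names are invented for illustration; only the offset-to-index arithmetic follows the 64 MB chunk size from the text.

```python
CHUNK_SIZE = 64 * 2**20  # 64 MB

def chunk_index(byte_offset):
    # Client-side: a byte offset within the file maps to a fixed-size
    # chunk index; no master round-trip is needed for this step.
    return byte_offset // CHUNK_SIZE

# Hypothetical master-side table: (file, chunk index) -> replica locations.
chunk_locations = {
    ("/logs/day1", 0): ["cs-a", "cs-b", "cs-c"],
    ("/logs/day1", 1): ["cs-b", "cs-d", "cs-e"],
}

def locate(file_name, byte_offset):
    # The client sends (name, chunk index); the master replies with the
    # chunkservers holding the replicas.
    return chunk_locations[(file_name, chunk_index(byte_offset))]

# Reading at offset 100 MB falls in chunk 1 of the file.
print(locate("/logs/day1", 100 * 2**20))
```

After this exchange the client talks to a chunkserver directly and caches the reply, keeping the master off the data path.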
Figure 2: GFS Framework
1.2.2.3 Snapshot
The role of a snapshot is an internal function of Google File System that ensures consistency control and it creates a copy of a
directory or file immediately. Snapshot mostly used to create checkpoints of current state for commit so that rollback later.
1.2.2.4 Data Integrity
GFS cluster consists thousands of machines so it will help to avoid the machine failures or loss of data. For avoid this problem
each chunkserver maintain its own copy.
1.2.2.5 Garbage collection
Instead of instantly reclaiming or free up the unused physical memory storage space after a file or a chunk is deleted from the
system, for that GFS apply a lazy action strategy of Garbage Collection. This approach ensures that system is more reliable and
simple.
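A minimal sketch of this lazy strategy (Python): deletion only renames the file to a hidden, timestamped name, and a later background scan reclaims anything older than the grace period. The hidden-name convention and the 3-day window follow the table in the next section; the data structures are invented for illustration.

```python
GRACE_PERIOD = 3 * 24 * 3600           # deleted files reclaimed after 3 days

files = {"/logs/old": b"..."}          # live namespace
hidden = {}                            # hidden name -> (deletion time, data)

def delete(path, now):
    # Deletion is just a rename to a hidden name; the data stays on disk
    # and the file can still be recovered during the grace period.
    hidden[".deleted" + path] = (now, files.pop(path))

def garbage_collect(now):
    # A periodic background scan reclaims hidden files past the grace period.
    for name in [n for n, (t, _) in hidden.items() if now - t > GRACE_PERIOD]:
        del hidden[name]

delete("/logs/old", now=0)
garbage_collect(now=3600)              # 1 hour later: still recoverable
print(len(hidden))                     # 1
garbage_collect(now=4 * 24 * 3600)     # 4 days later: space reclaimed
print(len(hidden))                     # 0
```

Batching reclamation this way amortizes its cost and makes deletion safe against failures mid-operation, at the price of storage being freed later rather than immediately.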
2. COMPARATIVE ANALYSIS OF HDFS AND GFS
| Key Point | HDFS Framework | GFS Framework |
|---|---|---|
| Objective | The main objective of HDFS is to handle Big Data | The main objective of GFS is to handle Big Data |
| Development language | Java | C/C++ |
| Implemented by | Open-source community, Yahoo, Facebook, IBM | Google |
| Platform | Cross-platform | Linux |
| License | Apache | Proprietary; designed by Google for its own use |
| File management | Supports a traditional hierarchical directory structure [9] | Supports a hierarchical directory structure, accessed by path names [9] |
| Types of nodes | NameNode and DataNode | MasterNode and chunkserver |
| Hardware | Commodity hardware or servers | Commodity hardware or servers |
| Append operation | Supports only the append operation | Supports append, including appends based on an offset |
| Database | HBase | Bigtable |
| Delete operation and garbage collection | Deleted files are first renamed and stored in a particular folder, then finally removed by the garbage collection method | GFS has a unique garbage collection method in which space is not reclaimed instantly: the file is renamed in the namespace and deleted after 3 days, during a second scan |
| Default block/chunk size | 128 MB block size by default, changeable by the user | 64 MB chunk size by default, changeable by the user |
| Snapshots | Up to 65,536 snapshots allowed per directory in HDFS 2 | Every directory and file can be snapshotted |
| Metadata | Managed by the NameNode | Managed by the MasterNode |
| Data integrity | Maintained between NameNode and DataNode | Maintained between MasterNode and chunkserver |
| Replication | Three replicas created by default in HDFS [10] | Three replicas created by default in GFS [10] |
| Communication | Pipelining used for data transfer over TCP | RPC-based protocol on top of TCP/IP |
| Cache management | Provides a distributed cache facility via the MapReduce framework | Does not provide a cache facility |
3. CONCLUSION
This paper has presented a comparative study of two of the most powerful distributed Big Data processing frameworks, the Hadoop Distributed File System and the Google File System. The study examined how both HDFS and GFS perform Big Data transactions such as storing and retrieving large-scale data files. It can be concluded that both systems successfully cope with network maintenance, power failures, hard drive failures, router failures, misconfiguration, and so on, while GFS provides better garbage collection, replication, and file management than HDFS.
REFERENCES
1) http://hadoop.apache.org [Accessed: Oct. 11, 2018]
2) http://en.wikipedia.org/wiki/Big_data [Accessed: Oct. 19, 2018]
3) N. Gemayel, "Analyzing Google File System and Hadoop Distributed File System", Research Journal of Information Technology, pp. 67-74, 15 September 2016.
4) S. Sagar Lad, Naveen Kumar and Dr. S. D. Joshi, "Comparison Study on Hadoop's HDFS with Lustre File System", International Journal of Scientific Engineering and Applied Science, Vol. 1, Issue 8, pp. 491-194, November 2015.
5) R. Vijayakumari, R. Kirankumar and K. Gangadhara Rao, "Comparative Analysis of Google File System and Hadoop Distributed File System", International Journal of Advanced Trends in Computer Science and Engineering, Vol. 1, pp. 553-558, 24-25 February 2014.
6) Ameya Daphalapuraka, Manali Shimpi and Priya Newalkar, "MapReduce & Comparison of HDFS and GFS", International Journal of Engineering and Computer Science, Vol. 1, Issue 8, pp. 8321-8325, September 2014.
7) Giacinto Donvito, Giovanni Marzulli and Domenico Diacono, "Testing of several distributed file-systems (HDFS, Ceph and GlusterFS) for supporting the HEP experiments analysis", International Conference on Computing in High Energy and Nuclear Physics, pp. 1-7, 2014.
8) Dr. A. P. Mittal, Dr. Vanita Jain and Tanuj Ahuja, "Google File System and Hadoop Distributed File System - An Analogy", International Journal of Innovations & Advancement in Computer Science, Vol. 4, pp. 626-636, March 2015.
9) Monali Mavani, "Comparative Analysis of Andrew File System and Hadoop Distributed File System", Lecture Notes on Software Engineering, Vol. 4, No. 2, pp. 122-125, May 2013.
10) Yuval Carmel, "HDFS vs. GFS", Topics in Storage Systems - Spring, pp. 20-31, 2013.
BIOGRAPHIES
Author's Profile
Mr. Rajeshkumar Rameshbhai Savaliya is from Ambaba Commerce College, MIBM & DICA, Sabargam, and holds a Master of Science in Information Technology (M.Sc.-IT) from Veer Narmad South Gujarat University. He has teaching as well as programming experience and is pursuing a PhD at Rai University.
Co-Author's Profile
Dr. Akash Saxena is a PhD guide at Rai University.

More Related Content

PDF
IRJET- Generate Distributed Metadata using Blockchain Technology within HDFS ...
PDF
IRJET- A Novel Approach to Process Small HDFS Files with Apache Spark
PDF
Introduction to Big Data and Hadoop using Local Standalone Mode
PDF
Design architecture based on web
PDF
Harnessing Hadoop and Big Data to Reduce Execution Times
PDF
An Efficient Approach to Manage Small Files in Distributed File Systems
PPTX
Managing Big data with Hadoop
PPTX
The Exabyte Journey and DataBrew with CICD
IRJET- Generate Distributed Metadata using Blockchain Technology within HDFS ...
IRJET- A Novel Approach to Process Small HDFS Files with Apache Spark
Introduction to Big Data and Hadoop using Local Standalone Mode
Design architecture based on web
Harnessing Hadoop and Big Data to Reduce Execution Times
An Efficient Approach to Manage Small Files in Distributed File Systems
Managing Big data with Hadoop
The Exabyte Journey and DataBrew with CICD

What's hot (19)

PDF
IRJET- Performing Load Balancing between Namenodes in HDFS
PDF
IRJET- Cross User Bigdata Deduplication
PDF
Survey Paper on Big Data and Hadoop
PDF
IRJET- Big Data-A Review Study with Comparitive Analysis of Hadoop
PDF
Dr.Hadoop- an infinite scalable metadata management for Hadoop-How the baby e...
PDF
Hadoop and its role in Facebook: An Overview
PPTX
Module 01 - Understanding Big Data and Hadoop 1.x,2.x
PDF
Review on Big Data Security in Hadoop
PDF
Big Data Analysis and Its Scheduling Policy – Hadoop
PDF
Cloud Computing Ambiance using Secluded Access Control Method
PDF
A Survey on Different File Handling Mechanisms in HDFS
PPTX
Construindo Data Lakes - Visão Prática com Hadoop e BigData
PPTX
Hadoop and big data training
PDF
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
PPTX
Hadoop for High-Performance Climate Analytics - Use Cases and Lessons Learned
PDF
Cidr11 paper32
PDF
A0930105
PPTX
Hadoop Adminstration with Latest Release (2.0)
IRJET- Performing Load Balancing between Namenodes in HDFS
IRJET- Cross User Bigdata Deduplication
Survey Paper on Big Data and Hadoop
IRJET- Big Data-A Review Study with Comparitive Analysis of Hadoop
Dr.Hadoop- an infinite scalable metadata management for Hadoop-How the baby e...
Hadoop and its role in Facebook: An Overview
Module 01 - Understanding Big Data and Hadoop 1.x,2.x
Review on Big Data Security in Hadoop
Big Data Analysis and Its Scheduling Policy – Hadoop
Cloud Computing Ambiance using Secluded Access Control Method
A Survey on Different File Handling Mechanisms in HDFS
Construindo Data Lakes - Visão Prática com Hadoop e BigData
Hadoop and big data training
Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizi...
Hadoop for High-Performance Climate Analytics - Use Cases and Lessons Learned
Cidr11 paper32
A0930105
Hadoop Adminstration with Latest Release (2.0)
Ad

Similar to IRJET- A Study of Comparatively Analysis for HDFS and Google File System Towards to Handle Big Data (20)

PPTX
GFS & HDFS Introduction
PPTX
Chaptor 2- Big Data Processing in big data technologies
PPT
Distributed file systems (from Google)
PPT
Distributed computing seminar lecture 3 - distributed file systems
PPT
Lec3 Dfs
PPTX
GFS presenttn.pptx
PDF
Google File System
PPTX
Cloud storage
PPTX
storage-systems.pptx
PPTX
Cloud File System with GFS and HDFS
PPTX
Gfs vs hdfs
PDF
Survey of Parallel Data Processing in Context with MapReduce
PDF
Design Issues and Challenges of Peer-to-Peer Video on Demand System
PDF
Google File System
PPTX
Google
PPT
Google File System
PPTX
GOOGLE FILE SYSTEM
PDF
Seminar Report on Google File System
PPTX
Google file system
PDF
GFS & HDFS Introduction
Chaptor 2- Big Data Processing in big data technologies
Distributed file systems (from Google)
Distributed computing seminar lecture 3 - distributed file systems
Lec3 Dfs
GFS presenttn.pptx
Google File System
Cloud storage
storage-systems.pptx
Cloud File System with GFS and HDFS
Gfs vs hdfs
Survey of Parallel Data Processing in Context with MapReduce
Design Issues and Challenges of Peer-to-Peer Video on Demand System
Google File System
Google
Google File System
GOOGLE FILE SYSTEM
Seminar Report on Google File System
Google file system
Ad

More from IRJET Journal (20)

PDF
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
PDF
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
PDF
Kiona – A Smart Society Automation Project
PDF
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
PDF
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
PDF
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
PDF
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
PDF
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
PDF
BRAIN TUMOUR DETECTION AND CLASSIFICATION
PDF
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
PDF
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
PDF
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
PDF
Breast Cancer Detection using Computer Vision
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
Kiona – A Smart Society Automation Project
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
BRAIN TUMOUR DETECTION AND CLASSIFICATION
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
Breast Cancer Detection using Computer Vision
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...

Recently uploaded (20)

PPTX
bas. eng. economics group 4 presentation 1.pptx
PPTX
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PDF
Digital Logic Computer Design lecture notes
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PPTX
Internet of Things (IOT) - A guide to understanding
PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPTX
Sustainable Sites - Green Building Construction
PDF
composite construction of structures.pdf
PPT
Mechanical Engineering MATERIALS Selection
PPT
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PPTX
UNIT 4 Total Quality Management .pptx
PDF
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
bas. eng. economics group 4 presentation 1.pptx
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
R24 SURVEYING LAB MANUAL for civil enggi
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
UNIT-1 - COAL BASED THERMAL POWER PLANTS
Digital Logic Computer Design lecture notes
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
Internet of Things (IOT) - A guide to understanding
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Automation-in-Manufacturing-Chapter-Introduction.pdf
Sustainable Sites - Green Building Construction
composite construction of structures.pdf
Mechanical Engineering MATERIALS Selection
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
UNIT 4 Total Quality Management .pptx
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks

IRJET- A Study of Comparatively Analysis for HDFS and Google File System Towards to Handle Big Data

  • 1. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 06 Issue: 03 | Mar 2019 www.irjet.net p-ISSN: 2395-0072 © 2019, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 2422 A Study of Comparatively Analysis for HDFS and Google File System towards to Handle Big Data Rajesh R Savaliya1, Dr. Akash Saxena2 1Research Scholor, Rai University, Vill. Saroda, Tal. Dholka Dist. Ahmedabad, Gujatat-382 260 2PHD Guide, Rai University, Vill. Saroda, Tal. Dholka Dist. Ahmedabad, Gujatat-382 260 ---------------------------------------------------------------------------***--------------------------------------------------------------------------- ABSTRACT - BIG-DATA handling and management is the current requirement of software development industry in face of software developments now a day. It is becomes very necessary for software development industry to store large amount of Data and retrieves the only required information from the stored large scale data in the system. This paper presents the comparison of two similar distributed file working and handling parameters towards frameworks which is used to work with storage of Big-Data in hadoop distributed file system and Google file system. This paper also includes the Map Reduse Structure which common model used by both HDFS and GFS to handle the Big Data. These analyses will useful for understanding the frame work and highlight the features those are common and difference between Hadoop DFS and GFS. KEYWORDS: HDFS, GFS, NameNode, MasterNode, DataNode, ChunkServer, Big-Data. 1. INTRODUCTION Big-Data is the keyword which is used to describe the large amount of data, produced by electronic transactions as well as social media all over the world now a day. Hadoop Distributed File System and Google File System have been developed to implement and handle large amount of data and provide high throughputs [1]. 
Big data challenges are complexity as well as velocity, variety, volume of data and are included insight into consideration in the development of HDFS and GFS to store, maintain and retrieve the large amount of Big-Data currently generated in field of IT [2]. First Google was developed and publish in articles distributed file system in the world of IT that is GFS, then after Apache open-source was implement DFS as an Hadoop DFS based on Google’s implementations. Differences and similarities in the both type of file system have been made based on so many parameters, levels and different criteria to handle the big-data. The main important aim of HDFS and GFS ware build for to work with large amount of data file coming from different terminals in various formats and large scale data size (in TB or peta byte) distributed around hundreds of storage disks available for commodity hardware. Both HDFS and GFS are developing to handle big-data of different formats [3]. 1.1 Hadoop Distributed File System Framework HDFS is the Hadoop Distributed File system which is an open source file distributed and large scale data file handling framework and it is design by Apache. Currently so many network based application development environment using this concepts such as Whatup, Facebook, Amazon. HDFS and MapReduce are core components of Hadoop system [4]. HDFS is the Distributed File system which is used to handle the storage of large amount of file data in the DataNode[5]. Figure 1: HDFS Framework
  • 2. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 06 Issue: 03 | Mar 2019 www.irjet.net p-ISSN: 2395-0072 © 2019, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 2423 Hadoop Distributed File system is a scalable and platform independent system develop in Java.The HDFS is a master-slaver distributed framework specially designed to work with storage of large scale data file in the large cluster which has DataNode and NameNode. The NameNode is work as a master server to handle and store the metadata for large amount of data file in the HDFS. NameNode is also used to manage the data file access through the different clients. The DataNode is used to handle the storage management of large scale data file. Now the main role of the MapReduce is the decomposition of tasks, moniter the task and then integrates the final results. MapReduse programming techniques successfully implemented by the Google to store and process the big amount of data files [6]. 1.2 Google File System Framework Google File System is the network and node based framework. GFS is the based on scalable, reliable, availability, fault tolerance and distributed file system structure design by Google to handle the large amount of data files. Google File System is made to storage system on low cost commodity hardware. GFS is used to optimize the large amount of data storage. GFS develop to handle the big-data stored in hierarchical directories structure. Namespace means metadata, data access control handle by the master, that will deals with and monitors update status of each and every chunk server based on particular time intervals. Google File System has node cluster with single master and multiple chunk servers which are continuously accessed by different client. In GSF chunk server is used to store data as a Linux files on local disks and that stored data will be divided into (64 MB) size’s chunk. 
Stored data which are minimum three times replicated on the network. The large size chunk is very helpful to reduce network traffic or overhead. GFS has larges clusters more than 1000 nodes of 300 TB size disk storage capacities and it will continuous access by large number of clients [7]. 1.2.1 GFS has importance features like,  Fault tolerance  Scalability  Reliability  High Availability  Data Replication  Metadata management  Automatic data recovery  High aggregate throughput  Reduced client and master transaction using of large size chunk server 1. 2. 2 Frameworks of GFS Google File System is a master/chunk server communication framework. GFS consists of only single master with multiple number of chunk-server. Multiple Clients can easily access both master as well as chunkserver. The by default chunk size is 64MB and data file will be divided into the number chunk of fixed size. Master has 64bit pointer using which master will manage each chunk [7]. Reliability fulfilled using each chunk is replicate on multiples chunkserver. There are three time replicas created by default in GFS. 1.2.2.1 Master Master is used to handle namespace means all the metadata to maintain the bigdata file. Master will keep the track of location of each replica chunk and periodically provide the information to each chunkserver. Master is also responsible handle to less than 64 byte metadata for each 64MB chunk [10]. It will also responsible to collect the information for each chunkserver and avoid fragmentation using garbage collection technique. Master acknowledges the future request to the client. 1.2.2.2 Client The working of the client will be responsible to ask the master for which chunkserver to refer for work. Client will create chunk index using name and the byte offset. Client also ensures future request interaction in between master and client.
  • 3. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 06 Issue: 03 | Mar 2019 www.irjet.net p-ISSN: 2395-0072 © 2019, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 2424 Figure 2: GFS Framework 1.2.2.3 Snapshot The role of a snapshot is an internal function of Google File System that ensures consistency control and it creates a copy of a directory or file immediately. Snapshot mostly used to create checkpoints of current state for commit so that rollback later. 1.2.2.4 Data Integrity GFS cluster consists thousands of machines so it will help to avoid the machine failures or loss of data. For avoid this problem each chunkserver maintain its own copy. 1.2.2.5 Garbage collection Instead of instantly reclaiming or free up the unused physical memory storage space after a file or a chunk is deleted from the system, for that GFS apply a lazy action strategy of Garbage Collection. This approach ensures that system is more reliable and simple. 2. COMPARATIVELY ANALYSIS OF HDFS WITH GFS Key Point HDFS Framework GFS Framework Objective Main objective of HDFS to handle the Big-Data Main objective of HDFS to handle the Big-Data Language used to Develop Java Language C, CPP Language Implemented by Open source community, Yahoo, Facebook, IBM Google Platform Work on Cross-platform Work on Linux License by Apache Proprietary or design by google for its own used. Files Management HDFS supports a traditional hierarchical directories data structure [9]. GFS supports a hierarchical directories data structure and access by path names [9]. Types of Nodes used NameNode and DataNode Chunk-server and MasterNode
Hardware used | Commodity hardware or servers | Commodity hardware or servers
Append operation | Supports only the append operation. | Supports append, and data can also be written at a given offset.
Database | HBase | Bigtable
Delete operation and garbage collection | Deleted files are first renamed and moved to a special folder, then finally removed by the garbage-collection method. | GFS has a unique lazy garbage-collection method in which space is not reclaimed instantly: the file is renamed to a hidden name in the namespace and deleted after three days, during a later scan.
Default size | The default block size is 128 MB, which can be changed by the user. | The default chunk size is 64 MB, which can be changed by the user.
Snapshots | HDFS 2 allows up to 65,536 snapshots per snapshottable directory. | In GFS, every file and directory can be snapshotted.
Metadata | Metadata is managed by the NameNode. | Metadata is managed by the MasterNode.
Data integrity | Maintained between the NameNode and the DataNodes. | Maintained between the MasterNode and the chunkservers.
Replication | HDFS creates three replicas by default (the replication factor is configurable) [10]. | GFS creates three replicas by default [10].
Communication | Pipelining is used to transfer data over the TCP protocol. | An RPC-based protocol is used on top of TCP/IP.
Cache management | HDFS provides a distributed cache facility through the MapReduce framework. | GFS does not provide a cache facility.

3. CONCLUSION
This paper has presented a comparative study of two of the most powerful distributed big-data processing frameworks, the Hadoop Distributed File System and the Google File System.
The study examined how both HDFS and GFS perform big-data transactions such as storing and retrieving large-scale data files. It can be concluded that both systems successfully manage network maintenance, power failures, hard-drive failures, router failures, misconfiguration, and similar faults, and that GFS provides better garbage collection, replication, and file management compared with HDFS.

REFERENCES
1) http://hadoop.apache.org [Accessed: Oct. 11, 2018]
2) http://en.wikipedia.org/wiki/Big_data [Accessed: Oct. 19, 2018]
3) Gemayel, N., "Analyzing Google File System and Hadoop Distributed File System", Research Journal of Information Technology, pp. 67-74, 15 September 2016.
4) Sager S. Lad, Naveen Kumar and Dr. S. D. Joshi, "Comparison Study on Hadoop's HDFS with Lustre File System", International Journal of Scientific Engineering and Applied Science, Vol. 1, Issue 8, pp. 491-194, November 2015.
5) R. Vijayakumari, R. Kirankumar and K. Gangadhara Rao, "Comparative Analysis of Google File System and Hadoop Distributed File System", International Journal of Advanced Trends in Computer Science and Engineering, Vol. 1, pp. 553-558, 24-25 February 2014.
6) Ameya Daphalapuraka, Manali Shimpi and Priya Newalkar, "MapReduce & Comparison of HDFS and GFS", International Journal of Engineering and Computer Science, Vol. 1, Issue 8, pp. 8321-8325, September 2014.
7) Giacinto Donvito, Giovanni Marzulli and Domenico Diacono, "Testing of several distributed file-systems (HDFS, Ceph and GlusterFS) for supporting the HEP experiments analysis", International Conference on Computing in High Energy and Nuclear Physics, pp. 1-7, 2014.
8) Dr. A. P. Mittal, Dr. Vanita Jain and Tanuj Ahuja, "Google File System and Hadoop Distributed File System: An Analogy", International Journal of Innovations & Advancement in Computer Science, Vol. 4, pp. 626-636, March 2015.
9) Monali Mavani, "Comparative Analysis of Andrew File System and Hadoop Distributed File System", Lecture Notes on Software Engineering, Vol. 4, No. 2, pp. 122-125, May 2013.
10) Yuval Carmel, "HDFS vs. GFS", Topics in Storage Systems, Spring 2013, pp. 20-31, 2013.

BIOGRAPHIES

Author's Profile
Mr. Rajeshkumar Rameshbhai Savaliya is associated with Ambaba Commerce College, MIBM & DICA, Sabargam, and holds a master's degree in Science and Information Technology (M.Sc.-IT) from Veer Narmad South Gujarat University. He has teaching as well as programming experience and is pursuing a PhD at Rai University.

Co-Author's Profile
Dr. Akash Saxena is a PhD guide at Rai University.