Big Data
Presented by: SHIVAM SHUKLA
Contents
 What is Big Data?
 History
 Three V’s
 Why is Big Data important?
 Technologies related to Big Data
Hadoop
Why Hadoop?
HBase
Why HBase?
Some features of HBase
Hive
About
Points to remember
Sqoop
Working
Difference
What is Big Data?
 Big data is a term that describes large volumes of data:
a) Structured
b) Unstructured
c) Semi-structured
 that inundate a business on a day-to-day basis.
 But it’s not the amount of data that’s important; it’s what
organizations do with the data that matters.
History
 While the term “big data” is relatively new, the act of gathering and
storing large amounts of information for eventual analysis is ages
old.
 The concept gained momentum in the early 2000s, when industry
analyst Doug Laney articulated the now-mainstream definition of
big data as the three Vs:
Volume
Velocity
Variety
Three V’s:
 Volume
Refers to the huge amount of data produced each day by
organizations around the world.
 Velocity
Refers to the speed at which data is generated, analyzed, and
reprocessed.
 Variety
Refers to the diversity of data and data sources.
Big data and tools
Additional V’s
Over time, new V’s of big data have been introduced:
 Validity
Refers to the guarantee of data quality; the closely related term
Veracity refers to the authenticity and credibility of the data.
 Value
Denotes the added value for companies. Many companies have
recently established their own data platforms, filled their data pools,
and invested heavily in infrastructure. The question now is how to
generate business value from those investments.
Why is Big Data important?
 The importance of big data doesn’t revolve around how much data
you have, but what you do with it.
 You can take data from any source and analyze it to find answers
that enable:
Cost reduction
Time reduction
Smart decision making
Some technologies related to Big Data
 Hadoop framework
 HBase
 Hive
 Sqoop
Hadoop
 Hadoop was developed by Doug Cutting and Michael J. Cafarella.
 Hadoop is an Apache open-source framework designed for:
Managing data
Processing data
Analyzing data
Storing data
 Hadoop is written in Java and is not an OLAP (online analytical
processing) system.
 It is used for offline (batch) processing.
 The logo for Hadoop is a YELLOW ELEPHANT.
Why Hadoop?
 Fast:
 In HDFS, data is distributed over the cluster and mapped, which
helps in faster retrieval.
 Scalable:
 A Hadoop cluster can be extended by simply adding nodes to the
cluster.
 Cost effective:
 Hadoop is open source and uses commodity hardware to store
data, so it is really cost effective compared to a traditional
relational database management system.
 Resilient to failure:
 HDFS can replicate data over the network, so if one node goes
down or some other network failure happens, Hadoop takes another
copy of the data and uses it. (A few basic HDFS commands are
sketched below.)
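To make the HDFS side of this concrete, here is a minimal sketch of basic HDFS shell commands; the directory and file names (/user/demo, sales.csv) are made up for illustration:

    # Copy a local file into HDFS
    hdfs dfs -mkdir -p /user/demo/input
    hdfs dfs -put sales.csv /user/demo/input/

    # List the directory and read the file back
    hdfs dfs -ls /user/demo/input
    hdfs dfs -cat /user/demo/input/sales.csv

    # Inspect block and replication details for the file
    hdfs fsck /user/demo/input/sales.csv -files -blocks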
HBase
 HBase is an open-source framework provided by Apache. It is a
sorted map datastore built on top of Hadoop.
 It is column-oriented and horizontally scalable.
 It has a set of tables which keep data in key-value format.
 It is a type of database designed mainly for managing
unstructured data.
 The logo for Apache HBase is an ORCA (killer whale).
Why HBase?
 An RDBMS gets exponentially slower as the data becomes large.
 It expects data to be highly structured, i.e. able to fit into a
well-defined schema.
 Any change in schema might require downtime.
 For sparse datasets, there is too much overhead in maintaining NULL
values.
Some features of
HBase
 Horizontally scalable: you can add any number of columns at any time.
 Often referred to as a key-value store, a column-family-oriented
database, or a store of versioned maps of maps.
 Fundamentally, it is a platform for storing and retrieving data with
random access.
 It doesn't care about datatypes (you can store an integer in one row
and a string in another for the same column).
 There is only one data type: the byte array.
 It doesn't enforce relationships within your data.
 It is designed to run on a cluster of computers. (A short HBase shell
sketch follows this list.)
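As a minimal sketch of these ideas in the HBase shell, the table name 'users' and column family 'info' below are made up for illustration:

    # run inside the hbase shell
    create 'users', 'info'                      # table with one column family
    put 'users', 'row1', 'info:name', 'Alice'   # values are stored as byte arrays
    put 'users', 'row1', 'info:age', '30'       # no fixed schema; add columns at will
    put 'users', 'row2', 'info:city', 'Pune'    # sparse rows: missing cells cost nothing
    get 'users', 'row1'                         # random access by row key
    scan 'users'                                # rows come back sorted by row key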
Hive
 Hive is a data warehouse infrastructure tool for processing structured
data in Hadoop.
 It runs SQL-like queries written in HQL (Hive Query Language), which
are internally converted into MapReduce jobs (see the sketch after
this list).
 Hive was initially developed by Facebook; later the Apache Software
Foundation took it up and developed it further as open source under
the name Apache Hive.
 Hive supports a Data Definition Language (DDL), a Data Manipulation
Language (DML), and user-defined functions.
 The logo for Hive is a yellow and black BEE.
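A minimal HiveQL sketch, assuming a hypothetical comma-delimited employee file already in HDFS (the table, columns, and path are made up for illustration):

    -- DDL: define a table over delimited text data
    CREATE TABLE employees (
      id     INT,
      name   STRING,
      dept   STRING,
      salary DOUBLE
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS TEXTFILE;

    -- DML: load a file from HDFS into the table (path is hypothetical)
    LOAD DATA INPATH '/user/demo/employees.csv' INTO TABLE employees;

    -- A SQL-like query; Hive compiles this into MapReduce jobs behind the scenes
    SELECT dept, COUNT(*) AS headcount, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY dept;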
Hive is not:
 A relational database
 Designed for online transaction processing (OLTP)
 A language for real-time queries and row-level updates
 Even with small amounts of data, its response time cannot be
compared to an RDBMS.
Points to remember about
Hive
 Hive Query Language is similar to SQL and is reduced to MapReduce
jobs in the backend (the EXPLAIN sketch below shows this).
 Hive's default metastore database is Derby.
 It is sometimes loosely described as a NoSQL tool.
 It provides an SQL-like language for querying called HiveQL or HQL.
 It is designed for OLAP (online analytical processing).
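To see the plan Hive builds for a query, EXPLAIN can be used; the table and columns here are the hypothetical ones from the earlier sketch, and on a classic Hive setup the listed stages are MapReduce jobs:

    -- Show the execution plan Hive generates for a query
    EXPLAIN
    SELECT dept, COUNT(*) AS headcount
    FROM employees
    GROUP BY dept;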
Sqoop
 Sqoop is a tool designed to transfer data between Hadoop and
relational database servers.
 It is used to import data from relational databases such as MySQL and
Oracle into Hadoop HDFS, and to export data from the Hadoop file
system back to relational databases.
 It is provided by the Apache Software Foundation.
 Sqoop: “SQL to Hadoop and Hadoop to SQL”
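A hedged sketch of the two directions; the JDBC connection string, credentials, table names, and HDFS paths are all made up for illustration:

    # RDBMS -> HDFS: import one MySQL table into HDFS
    sqoop import \
      --connect jdbc:mysql://dbhost/shopdb \
      --username demo --password-file /user/demo/.dbpass \
      --table orders \
      --target-dir /user/demo/orders

    # HDFS -> RDBMS: export files from HDFS back into an existing table
    sqoop export \
      --connect jdbc:mysql://dbhost/shopdb \
      --username demo --password-file /user/demo/.dbpass \
      --table orders_summary \
      --export-dir /user/demo/orders_summary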
Working of Sqoop
Difference
Sqoop Import
 The import tool imports individual tables from an RDBMS into HDFS.
 Each row in a table is treated as a record in HDFS.
 All records are stored as text data in text files, or as binary data
in Avro or Sequence files.
Sqoop Export
 The export tool exports a set of files from HDFS back to an RDBMS.
 The files given as input to Sqoop contain records, which are called
rows in the table.
 These are read and parsed into a set of records, delimited with a
user-specified delimiter. (The flag sketch below shows how formats and
delimiters are chosen.)
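A sketch of how the file format and field delimiter are chosen on the command line; table names and paths are again hypothetical:

    # Import as Avro binary files instead of the default delimited text
    sqoop import \
      --connect jdbc:mysql://dbhost/shopdb \
      --username demo --password-file /user/demo/.dbpass \
      --table orders \
      --target-dir /user/demo/orders_avro \
      --as-avrodatafile

    # Export delimited text, telling Sqoop which field delimiter the input files use
    sqoop export \
      --connect jdbc:mysql://dbhost/shopdb \
      --username demo --password-file /user/demo/.dbpass \
      --table orders_summary \
      --export-dir /user/demo/orders_summary \
      --input-fields-terminated-by ','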
Thank you
Any queries?