Apache Pig
Unit 3:
NoSQL – Pig - Introduction to Pig, Execution Modes of Pig, Comparison of Pig
with Databases, Grunt, Pig Latin, User Defined Functions,
Data Processing operators – Hive - Hive Shell, Hive Services, Hive Metastore,
Comparison with Traditional Databases, HiveQL, Tables,
Querying – MongoDB - Needs-Terms-Data Types Query Language – Cassandra -
Introduction- Features-Querying Commands.
• Pig Overview
• Execution Modes
• Installation
• Pig Latin Basics
• Developing Pig Script
– Most Occurred Start Letter
• Resources
What Is Pig?
• Developed by Yahoo! and a top-level Apache project
• Immediately makes data on a cluster available to non-
Java programmers via Pig Latin – a dataflow language
• Interprets Pig Latin and generates MapReduce jobs that
run on the cluster
• Enables easy data summarization, ad-hoc reporting and
querying, and analysis of large volumes of data
• Pig interpreter runs on a client machine – no
administrative overhead required
WHAT IS PIG?
Framework for analyzing large unstructured
and semi-structured data on top of Hadoop
Pig engine: the runtime environment in
which the program is executed.
Pig Latin: a simple but powerful data-flow
language, similar to a scripting language.
1. Resembles SQL
2. Provides common data
operations (load, filter, join,
group, store)
Pig Terms
• All data in Pig is one of four types:
– An Atom is a simple data value - stored as a string but
can be used as either a string or a number
– A Tuple is a data record consisting of a sequence of
"fields"
• Each field is a piece of data of any type (atom, tuple or bag)
– A Bag is a set of tuples (also referred to as a ‘Relation’)
• The concept of a “kind of a” table
– A Map is a map from keys that are string literals to
values that can be any data type
• The concept of a hash map
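A minimal sketch of these types in Pig Latin, assuming a two-column input file named data.txt; TOTUPLE, TOBAG and TOMAP are built-in functions that construct the complex types:
A = LOAD 'data.txt' AS (f1:int, f2:int);
B = FOREACH A GENERATE
  f1,              -- atom: a simple value
  TOTUPLE(f1, f2), -- tuple: an ordered record of fields
  TOBAG(f1, f2),   -- bag: a collection of tuples
  TOMAP('f1', f1); -- map: chararray key mapped to a value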
Pig Capabilities
• Support for
– Grouping
– Joins
– Filtering
– Aggregation
• Extensibility
– Support for User Defined Functions (UDF’s)
• Leverages the same massive parallelism as
native MapReduce
Pig Basics
• Pig is a client application
– No cluster software is required
• Interprets Pig Latin scripts to MapReduce jobs
– Parses Pig Latin scripts
– Performs optimization
– Creates execution plan
• Submits MapReduce jobs to the cluster
Difference between Pig and MapReduce
Apache Pig vs. MapReduce
• Pig is a scripting language; MapReduce is written in a compiled programming language.
• Pig works at a higher level of abstraction; MapReduce works at a lower level.
• Pig needs fewer lines of code than the equivalent MapReduce program.
• Less development effort is needed for Pig; more is required for MapReduce.
• Pig code is less efficient than hand-tuned MapReduce code.
• Pig provides built-in functions for ordering, sorting and union; such data operations are hard to perform in raw MapReduce.
• Pig allows nested data types like map, tuple and bag; MapReduce does not.
Execution Modes
• Pig has two execution modes
– Local Mode - all files are installed and run using your
local host and file system
– Pig runs in a single JVM and accesses the local
filesystem. This mode is suitable only for small
datasets and when trying out Pig.
– MapReduce Mode - all files are installed and run on a
Hadoop cluster and HDFS installation
– Pig translates queries into MapReduce jobs and runs
them on a Hadoop cluster. It is what you use when
you want to run Pig on large datasets.
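The mode is selected with the -x flag when Pig is launched:
$ pig -x local        # single JVM, local filesystem
$ pig -x mapreduce    # run on the Hadoop cluster (the default)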
RUNNING PIG PROGRAMS
There are three ways of executing Pig programs
• Script -> Pig can run a script file that contains Pig commands
• Grunt -> Interactive shell for running Pig commands
• Embedded -> users can run Pig programs from Java using the
PigServer class
Execution Modes
• Interactive
– By using the Grunt shell by invoking Pig on the
command line
$ pig
grunt>
• Batch
– Run Pig in batch mode using Pig Scripts and the
"pig" command
$ pig –f id.pig –p <param>=<value> ...
PIG LATIN DATA FLOW
 A LOAD statement to read data from the system
 A series of “transformation” statements to
process the data
 A DUMP statement to view results or STORE
statement to save the result
LOAD TRANSFORM DUMP OR STORE
Pig Latin
• Pig Latin scripts are generally organized as follows
– A LOAD statement reads data
– A series of “transformation” statements process the
data
– A STORE statement writes the output to the filesystem
• A DUMP statement displays output on the screen
• Logical vs. physical plans:
– All statements are stored and validated as a logical
plan
– Once a STORE or DUMP statement is found the logical
plan is executed
Example Pig Script
-- Load the content of a file into a pig bag named ‘input_lines’
input_lines = LOAD 'CHANGES.txt' AS (line:chararray);
-- Extract words from each line and put them into a pig bag named ‘words’
words = FOREACH input_lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
-- filter out any words that are just white spaces
filtered_words = FILTER words BY word MATCHES '\\w+';
-- create a group for each word
word_groups = GROUP filtered_words BY word;
-- count the entries in each group
word_count = FOREACH word_groups GENERATE COUNT(filtered_words) AS count, group AS
word;
-- order the records by count
ordered_word_count = ORDER word_count BY count DESC;
-- Store the results ( executes the pig script )
STORE ordered_word_count INTO 'output';
Basic “grunt” Shell Commands
• Help is available
$ pig -h
• Pig supports HDFS commands
grunt> pwd
– put, get, cp, ls, mkdir, rm, mv, etc.
About Pig Scripts
• Pig Latin statements grouped together in a file
• Can be run from the command line or the
shell
• Support parameter passing
• Comments are supported
– Inline comments '--'
– Block comments /* */
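For example, a parameterized script (the script and parameter names here are only illustrative) can reference $input and receive its value via the -p flag shown earlier:
-- wordcount.pig
lines = LOAD '$input' AS (line:chararray);
$ pig -f wordcount.pig -p input=CHANGES.txt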
Simple Data Types
Type Description
int 4-byte integer
long 8-byte integer
float 4-byte (single precision) floating point
double 8-byte (double precision) floating point
bytearray Array of bytes; blob
chararray String (“hello world”)
boolean True/False (case insensitive)
datetime A date and time
biginteger Java BigInteger
bigdecimal Java BigDecimal
Complex Data Types
Type Description
Tuple Ordered set of fields (a “row / record”)
Bag Collection of tuples (a “resultset / table”)
Map A set of key-value pairs
Keys must be of type chararray
Pig Data Formats
• BinStorage
– Loads and stores data in machine-readable (binary) format
• PigStorage
– Loads and stores data as structured, field delimited text
files
• TextLoader
– Loads unstructured data in UTF-8 format
• PigDump
– Stores data in UTF-8 format
• YourOwnFormat!
– via UDFs
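An illustrative sketch of selecting a format with the USING clause (file names are assumptions):
raw   = LOAD 'logs.txt' USING TextLoader() AS (line:chararray);
users = LOAD 'users.tsv' USING PigStorage('\t') AS (id:int, name:chararray);
STORE users INTO 'users_out' USING PigStorage(',');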
Loading Data Into Pig
• Loads data from an HDFS file
var = LOAD 'employees.txt';
var = LOAD 'employees.txt' AS (id, name,
salary);
var = LOAD 'employees.txt' using PigStorage()
AS (id, name, salary);
• Each LOAD statement defines a new bag
– Each bag can have multiple elements (atoms)
– Each element can be referenced by name or position ($n)
• A bag is immutable
• A bag can be aliased and referenced later
Input And Output
• STORE
– Writes output to an HDFS file in a specified directory
grunt> STORE processed INTO 'processed_txt';
• Fails if directory exists
• Writes output files, part-[m|r]-xxxxx, to the directory
– PigStorage can be used to specify a field delimiter
• DUMP
– Write output to screen
grunt> DUMP processed;
Relational Operators
• FOREACH
– Applies expressions to every record in a bag
• FILTER
– Filters by expression
• GROUP
– Collect records with the same key
• ORDER BY
– Sorting
• DISTINCT
– Removes duplicates
FOREACH . . .GENERATE
• Use the FOREACH …GENERATE operator to work
with rows of data, call functions, etc.
• Basic syntax:
alias2 = FOREACH alias1 GENERATE expression;
• Example:
DUMP alias1;
(1,2,3) (4,2,1) (8,3,4) (4,3,3) (7,2,5) (8,4,3)
alias2 = FOREACH alias1 GENERATE col1, col2;
DUMP alias2;
(1,2) (4,2) (8,3) (4,3) (7,2) (8,4)
FILTER. . .BY
• Use the FILTER operator to restrict tuples or rows
of data
• Basic syntax:
alias2 = FILTER alias1 BY expression;
• Example:
DUMP alias1;
(1,2,3) (4,2,1) (8,3,4) (4,3,3) (7,2,5) (8,4,3)
alias2 = FILTER alias1 BY (col1 == 8) OR (NOT
(col2+col3 > col1));
DUMP alias2;
(4,2,1) (8,3,4) (7,2,5) (8,4,3)
GROUP. . .ALL
• Use the GROUP…ALL operator to group data
– Use GROUP when only one relation is involved
– Use COGROUP when multiple relations are involved
• Basic syntax:
alias2 = GROUP alias1 ALL;
• Example:
DUMP alias1;
(John,18,4.0F) (Mary,19,3.8F) (Bill,20,3.9F)
(Joe,18,3.8F)
alias2 = GROUP alias1 BY col2;
DUMP alias2;
(18,{(John,18,4.0F),(Joe,18,3.8F)})
(19,{(Mary,19,3.8F)})
(20,{(Bill,20,3.9F)})
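For contrast, GROUP ... ALL places every tuple into a single group whose key is 'all' (a sketch over the same data):
alias3 = GROUP alias1 ALL;
DUMP alias3;
(all,{(John,18,4.0F),(Mary,19,3.8F),(Bill,20,3.9F),(Joe,18,3.8F)})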
ORDER. . .BY
• Use the ORDER…BY operator to sort a relation
based on one or more fields
• Basic syntax:
alias = ORDER alias BY field_alias [ASC|DESC];
• Example:
DUMP alias1;
(1,2,3) (4,2,1) (8,3,4) (4,3,3) (7,2,5) (8,4,3)
alias2 = ORDER alias1 BY col3 DESC;
DUMP alias2;
(7,2,5) (8,3,4) (1,2,3) (4,3,3) (8,4,3) (4,2,1)
DISTINCT. . .
• Use the DISTINCT operator to remove
duplicate tuples in a relation.
• Basic syntax:
alias2 = DISTINCT alias1;
• Example:
DUMP alias1;
(8,3,4) (1,2,3) (4,3,3) (4,3,3) (1,2,3)
alias2= DISTINCT alias1;
DUMP alias2;
(8,3,4) (1,2,3) (4,3,3)
Relational Operators
• FLATTEN
– Used to un-nest tuples as well as bags
• INNER JOIN
– Used to perform an inner join of two or more relations based on
common field values
• OUTER JOIN
– Used to perform left, right or full outer joins
• SPLIT
– Used to partition the contents of a relation into two or more
relations
• SAMPLE
– Used to select a random data sample with the stated sample
size
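A hedged sketch of FLATTEN, SPLIT and SAMPLE, reusing relation and field names from the earlier examples:
-- un-nest the bag produced by a GROUP
flat = FOREACH word_groups GENERATE group, FLATTEN(filtered_words);
-- partition one relation into two by a condition
SPLIT alias1 INTO small IF col1 < 5, large IF col1 >= 5;
-- keep roughly 10% of the records
sampled = SAMPLE alias1 0.1;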
INNER JOIN. . .
• Use the JOIN operator to perform an inner, equi-
join of two or more relations based on
common field values
• The JOIN operator always performs an inner join
• Inner joins ignore null keys
– Filter null keys before the join
• JOIN and COGROUP operators perform similar
functions
– JOIN creates a flat set of output records
– COGROUP creates a nested set of output records
INNER JOIN Example
DUMP Alias1;
(1,2,3)
(4,2,1)
(8,3,4)
(4,3,3)
(7,2,5)
(8,4,3)
DUMP Alias2;
(2,4)
(8,9)
(1,3)
(2,7)
(2,9)
(4,6)
(4,9)
Join Alias1 by Col1 to
Alias2 by Col1
Alias3 = JOIN Alias1 BY
Col1, Alias2 BY Col1;
Dump Alias3;
(1,2,3,1,3)
(4,2,1,4,6)
(4,3,3,4,6)
(4,2,1,4,9)
(4,3,3,4,9)
(8,3,4,8,9)
(8,4,3,8,9)
OUTER JOIN. . .
• Use the OUTER JOIN operator to perform left, right, or
full outer joins
– Pig Latin syntax closely adheres to the SQL standard
• The keyword OUTER is optional
– keywords LEFT, RIGHT and FULL will imply left outer, right
outer and full outer joins respectively
• Outer joins will only work provided the relations which
need to produce nulls (in the case of non-matching
keys) have schemas
• Outer joins will only work for two-way joins
– To perform a multi-way outer join perform multiple two-
way outer join statements
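An illustrative sketch using the relations from the inner-join example (schemas are assumed so nulls can be produced for non-matching keys):
Alias4 = JOIN Alias1 BY Col1 LEFT OUTER, Alias2 BY Col1;
-- rows of Alias1 with no match in Alias2 are kept, padded with nulls, e.g. (7,2,5,,)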
User-Defined Functions
• Natively written in Java, packaged as a jar file
– Other languages include Jython, JavaScript, Ruby,
Groovy, and Python
• Register the jar with the REGISTER statement
• Optionally, alias it with the DEFINE statement
REGISTER /src/myfunc.jar;
A = LOAD 'students';
B = FOREACH A GENERATE myfunc.MyEvalFunc($0);
DEFINE
• DEFINE can be used to work with UDFs and also
streaming commands
– Useful when dealing with complex input/output
formats
/* read and write comma-delimited data */
DEFINE Y 'stream.pl' INPUT(stdin USING PigStreaming(','))
OUTPUT(stdout USING PigStreaming(','));
A = STREAM X THROUGH Y;
/* Define UDFs to a more readable format */
DEFINE MAXNUM org.apache.pig.piggybank.evaluation.math.MAX;
A = LOAD 'student_data' AS (name:chararray, gpa1:float, gpa2:double);
B = FOREACH A GENERATE name, MAXNUM(gpa1, gpa2);
DUMP B;
References
• http://pig.apache.org
Apache Pig Installation
• Step 1: Download the Pig version 0.17.0 tar file from the
official Apache pig site. Navigate to the
website https://downloads.apache.org/pig/latest/.
Download the file ‘pig-0.17.0.tar.gz’ from the website.
Apache Pig Installation
• Step 2: Add the path variables of PIG_HOME and
PIG_HOME\bin
Now click on the Path variable
in the System variables. This will
open a new tab. Then click the
‘New’ button and add the
value C:\pig-0.17.0\bin in the
text box.
Apache Pig Installation
Step 3: Correcting the Pig Command File
Find the file ‘pig.cmd’ in the bin folder of the Pig installation (C:\pig-
0.17.0\bin) and add the line
• set HADOOP_BIN_PATH=%HADOOP_HOME%\bin
Apache Pig Example
Download the dataset containing the Agriculture related data about crops
in various regions and their area and produce. The link for dataset –
https://www.kaggle.com/abhinand05/crop-production-in-india The dataset
contains 7 columns namely as follows.
State_Name : chararray ;
District_Name : chararray ;
Crop_Year : int ;
Season : chararray ;
Crop : chararray ;
Area : int ;
Production : int
No of rows: 246092
No of columns: 7
Apache Pig Example
2. Enter Pig local mode from the command prompt
$ pig -x local
3. Load the dataset in the local
mode
grunt > agriculture= LOAD 'F:/csv files/crop_production.csv' using PigStorage (',')
as ( State_Name:chararray , District_Name:chararray , Crop_Year:int ,
Season:chararray , Crop:chararray , Area:int , Production:int ) ;
Apache Pig Example
4. Dump and describe the data set agriculture
using
grunt > dump agriculture;
grunt > describe agriculture;
5. Executing the PIG queries in local mode
Generate Total Crop wise Production and Area
grunt > cropinfo = FOREACH( GROUP agriculture BY Crop ) GENERATE group AS Crop,
SUM(agriculture.Area) as AreaPerCrop , SUM(agriculture.Production) as ProductionPerCrop;
grunt > DESCRIBE cropinfo;
OUTPUT: Crop, AreaPerCrop, ProductionPerCrop.
5. Executing the PIG queries in local mode
Average crop production in each district after the year 2000.
grunt > averagecrops = FOREACH (GROUP agriculture by District_Name){ after_year = FILTER agriculture BY
Crop_Year>2000; GENERATE group AS District_Name , AVG(after_year.(Production)) AS AvgProd; };
grunt > DESCRIBE averagecrops;
OUTPUT: district names, average production of all crops in each district after the year 2000.
Apache Hive
Based on Slides by Adam Shook
What Is Hive?
• Developed by Facebook and a top-level Apache project
• A data warehousing infrastructure based on Hadoop
• Immediately makes data on a cluster available to non-
Java programmers via SQL-like queries
• Built on HiveQL (HQL), a SQL-like query language
• Interprets HiveQL and generates MapReduce jobs that
run on the cluster
• Using Hive, we can skip the requirement of the
traditional approach of writing complex
MapReduce programs. Hive supports Data
Definition Language (DDL), Data Manipulation
Language (DML), and User Defined Functions
(UDF).
Hive Architecture
HiveQL
• HiveQL / HQL provides the basic SQL-like
operations:
– Select columns using SELECT
– Filter rows using WHERE
– JOIN between tables
– Evaluate aggregates using GROUP BY
– Store query results into another table
– Download results to a local directory (i.e., export
from HDFS)
– Manage tables and queries with CREATE, DROP, and
ALTER
Primitive Data Types
Type Comments
TINYINT, SMALLINT, INT, BIGINT 1, 2, 4 and 8-byte integers
BOOLEAN TRUE/FALSE
FLOAT, DOUBLE Single and double precision real numbers
STRING Character string
TIMESTAMP Unix-epoch offset or datetime string
DECIMAL Arbitrary-precision decimal
BINARY Opaque; ignore these bytes
Complex Data Types
Type Comments
STRUCT A collection of elements
If S is of type STRUCT {a INT, b INT}:
S.a returns element a
MAP Key-value tuple
If M is a map from 'group' to GID:
M['group'] returns value of GID
ARRAY Indexed list
If A is an array of elements ['a','b','c']:
A[0] returns 'a'
HiveQL Limitations
• HQL only supports equi-joins, outer joins, left
semi-joins
• Because it is only a shell for Map-Reduce,
complex queries can be hard to optimise
• Missing large parts of full SQL specification:
– HAVING clause in SELECT
– Correlated sub-queries
– Sub-queries outside FROM clauses
– Updatable or materialized views
– Stored procedures
Hive Metastore
• Stores Hive metadata
• Default metastore database uses Apache Derby
• Various configurations:
– Embedded (in-process metastore, in-process
database)
• Mainly for unit tests
– Local (in-process metastore, out-of-process database)
• Each Hive client connects to the metastore directly
– Remote (out-of-process metastore, out-of-process
database)
• Each Hive client connects to a metastore server, which
connects to the metadata database itself
Hive Warehouse
• Hive tables are stored in the Hive
“warehouse”
– Default HDFS location: /user/hive/warehouse
• Tables are stored as sub-directories in the
warehouse directory
• Partitions are subdirectories of tables
• External tables are supported in Hive
• The actual data is stored in flat files
Hive Schemas
• Hive is schema-on-read
– Schema is only enforced when the data is read (at
query time)
– Allows greater flexibility: same data can be read
using multiple schemas
• Contrast with an RDBMS, which is schema-on-
write
– Schema is enforced when the data is loaded
– Speeds up queries at the expense of load times
Comparison between Hive and Pig
Hive Pig
Hive is commonly used by Data Analysts. Pig is commonly used by programmers.
It uses SQL-like queries. It uses a data-flow language.
It can handle structured data. It can handle semi-structured data.
It works on server-side of HDFS cluster. It works on client-side of HDFS cluster.
Hive is slower than Pig. Pig is comparatively faster than Hive.
Create Table Syntax
CREATE TABLE table_name
(col1 data_type,
col2 data_type,
col3 data_type,
col4 data_type)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS format_type;
Simple Table
CREATE TABLE page_view
(viewTime INT,
userid BIGINT,
page_url STRING,
referrer_url STRING,
ip STRING COMMENT 'IP Address of the User' )
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
More Complex Table
CREATE TABLE employees
(name STRING,
salary FLOAT,
subordinates ARRAY<STRING>,
deductions MAP<STRING, FLOAT>,
address STRUCT<street:STRING,
city:STRING,
state:STRING,
zip:INT>)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
External Table
CREATE EXTERNAL TABLE page_view_stg
(viewTime INT,
userid BIGINT,
page_url STRING,
referrer_url STRING,
ip STRING COMMENT 'IP Address of the User')
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/user/staging/page_view';
More About Tables
• CREATE TABLE
– LOAD: file moved into Hive’s data warehouse directory
– DROP: both metadata and data deleted
• CREATE EXTERNAL TABLE
– LOAD: no files moved
– DROP: only metadata deleted
– Use this when sharing with other Hadoop
applications, or when you want to use multiple
schemas on the same data
Partitioning
• Can make some queries faster
• Divide data based on partition column
• Use PARTITIONED BY clause when creating a table
• Use PARTITION clause when loading data
• SHOW PARTITIONS will show a table’s
partitions
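A hedged sketch based on the page_view table used elsewhere in these slides (the partition column names are assumptions):
CREATE TABLE page_view (viewTime INT, userid BIGINT, page_url STRING)
PARTITIONED BY (dt STRING, country STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
SHOW PARTITIONS page_view;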
Bucketing
• Can speed up queries that involve sampling
the data
– Sampling works without bucketing, but Hive has
to scan the entire dataset
• Use CLUSTERED BY when creating table
– For sorted buckets, add SORTED BY
• To query a sample of your data, use
TABLESAMPLE
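An illustrative sketch (the table name and bucket count are assumptions):
CREATE TABLE page_view_bucketed (userid BIGINT, page_url STRING)
CLUSTERED BY (userid) SORTED BY (userid) INTO 32 BUCKETS;
-- query roughly 1/32 of the data
SELECT * FROM page_view_bucketed TABLESAMPLE(BUCKET 1 OUT OF 32 ON userid);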
Browsing Tables And Partitions
Command Comments
SHOW TABLES; Show all the tables in the database
SHOW TABLES 'page.*'; Show tables matching the
specification ( uses regex syntax )
SHOW PARTITIONS page_view; Show the partitions of the page_view
table
DESCRIBE page_view; List columns of the table
DESCRIBE EXTENDED page_view; More information on columns (useful
only for debugging )
DESCRIBE page_view
PARTITION (ds='2008-10-31');
List information about a partition
Loading Data
• Use LOAD DATA to load data from a file or
directory
– Will read from HDFS unless LOCAL keyword is
specified
– Will append data unless OVERWRITE specified
– PARTITION required if destination table is partitioned
LOAD DATA LOCAL INPATH '/tmp/pv_2008-06-8_us.txt'
OVERWRITE INTO TABLE page_view
PARTITION (date='2008-06-08', country='US')
Inserting Data
• Use INSERT to load data from a Hive query
– Will append data unless OVERWRITE specified
– PARTITION required if destination table is
partitioned
FROM page_view_stg pvs
INSERT OVERWRITE TABLE page_view
PARTITION (dt='2008-06-08', country='US')
SELECT pvs.viewTime, pvs.userid,
pvs.page_url, pvs.referrer_url
WHERE pvs.country = 'US';
Loading And Inserting Data: Summary
Use this For this purpose
LOAD Load data from a file or directory
INSERT Load data from a query
• One partition at a time
• Use multiple INSERTs to insert into
multiple partitions in the one query
CREATE TABLE AS (CTAS) Insert data while creating a table
Add/modify external file Load new data into external table
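For instance, a hedged CTAS sketch (the filter and the target table name are assumptions):
CREATE TABLE us_page_views AS
SELECT * FROM page_view WHERE country = 'US';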
Sample Select Clauses
• Select from a single table
SELECT *
FROM sales
WHERE amount > 10 AND
region = "US";
• Select from a partitioned table
SELECT page_views.*
FROM page_views
WHERE page_views.date >= '2008-03-01' AND
page_views.date <= '2008-03-31'
Relational Operators
• ALL and DISTINCT
– Specify whether duplicate rows should be returned
– ALL is the default (all matching rows are returned)
– DISTINCT removes duplicate rows from the result set
• WHERE
– Filters by expression
– Does not support IN, EXISTS or sub-queries in the
WHERE clause
• LIMIT
– Indicates the number of rows to be returned
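For instance, combining DISTINCT and LIMIT over the page_views table used earlier:
SELECT DISTINCT page_views.page_url
FROM page_views
LIMIT 10;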
Relational Operators
• GROUP BY
– Group data by column values
– Select statement can only include columns
included in the
GROUP BY clause
• ORDER BY / SORT BY
– ORDER BY performs total ordering
• Slow, poor performance
– SORT BY performs partial ordering
• Sorts output from each reducer
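A brief sketch using the sales table from the earlier SELECT example (column names assumed):
SELECT region, SUM(amount)
FROM sales
GROUP BY region
SORT BY region;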
Advanced Hive Operations
• JOIN
– If only one column in each table is used in the join, then
only one MapReduce job will run
• This results in 1 MapReduce job:
SELECT * FROM a JOIN b ON a.key = b.key JOIN c ON b.key = c.key
• This results in 2 MapReduce jobs:
SELECT * FROM a JOIN b ON a.key = b.key JOIN c ON b.key2 = c.key
– If multiple tables are joined, put the biggest table last and
the reducer will stream the last table, buffer the others
– Use left semi-joins to take the place of IN/EXISTS
SELECT a.key, a.val FROM a LEFT SEMI JOIN b on a.key = b.key;
Advanced Hive Operations
• JOIN
– Do not specify join conditions in the WHERE clause
• Hive does not know how to optimise such queries
• Will compute a full Cartesian product before filtering it
• Join Example
SELECT
a.ymd, a.price_close, b.price_close
FROM stocks a
JOIN stocks b ON a.ymd = b.ymd
WHERE a.symbol = 'AAPL' AND
b.symbol = 'IBM' AND
a.ymd > '2010-01-01';
Hive Stinger
• MPP-style execution of Hive queries
• Available since Hive 0.13
• No MapReduce
• We will talk about this more when we get to
SQL on Hadoop
References
• http://hive.apache.org