© Copyright 2013 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice.
ARCHITECTURE OVERVIEW
Vertica Training
Version 6.1
training@vertica.com
The Analytics Platform
Column Orientation
• Vertica organizes data by column
• Each column is stored separately on disk
• Only the columns needed to answer the query are read
• Significant reduction of disk I/O
[Figure: a tickstore table stored row by row – (AAPL, NYSE, 143.74, 5/05/12), (AAPL, NYSE, 143.75, 5/06/12), (BBY, NYSE, 37.03, 5/05/12), (BBY, NYSE, 37.13, 5/06/12)]
SELECT avg(price)
FROM tickstore
WHERE symbol = 'AAPL'
  AND date = '5/06/12';
Column Store - Reads 3 columns
Row Store - Reads all columns
[Figure: the same data stored column by column – an exchange column of NYSE and NQDS values, a symbol column (AAPL, AAPL, BBY, BBY), and a price column (143.74, 143.75, 37.03, 37.13); the query above scans only the symbol, price, and date columns]
Advanced Compression
• Slower disk I/O is replaced with fast CPU cycles plus aggressive encoding and compression
• Sort order and cardinality help determine the encoding

Compressed Processing and Late Materialization
Stores more data, provides more views, and uses less hardware
• Operates on encoded data – the engine processes encoded blocks read from disk
• Data is decoded as late as possible
• Implements late materialization – the full row result set is created only when results are returned

Encoding and Compression Mechanisms
• Transaction Date – few values, sorted: RLE (run-length encoding) stores each value once with a repeat count, e.g. "5/05/2012, 16"
• Customer ID – many integer values: DeltaVal stores each value as an offset from a base value (0000001, 0000003, 0000005, … become 0, 2, 4, …)
• Trade – many distinct values: general-purpose LZO block compression
• Many others…
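In Vertica DDL these encodings can be declared per projection column. A minimal sketch, assuming a hypothetical trades table with this shape:

CREATE PROJECTION trades_enc (
    tx_date     ENCODING RLE,       -- few sorted values: run-length encoded
    customer_id ENCODING DELTAVAL,  -- many integers: stored as deltas from a base value
    trade_amt                       -- many distinct values: default block compression
)
AS SELECT tx_date, customer_id, trade_amt
FROM trades
ORDER BY tx_date, customer_id
SEGMENTED BY HASH(customer_id) ALL NODES;

Sorting by tx_date first is what makes RLE effective here: identical dates become long runs.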
High Availability
• RAID-like functionality within database
• If a node fails, a copy is available on one of the surviving nodes
• No need for manual log-based recovery
• Always-on queries and loads
– System continues to load and query when nodes are down
– Automatically recovers missing data by querying other nodes
[Figure: data segments A, B, and C each stored on two of Node 1, Node 2, and Node 3]
Automatic Database Design
• Database Designer (DBD) recommends a physical database design that provides the best performance for the user's workload
• Analyzes your logical schema, sample data, and sample queries
• Minimizes DBA tuning
• Can be run at any time for additional optimization, without stopping the database

DBA provides:
> Logical schema (CREATE TABLE statements)
> Sample set of typical queries and sample data
> K-safety level

Database Designer generates:
> Physical schema and compression that
 Make the queries in the sample set run fast
 Fit within trickle-load requirements
 Ensure all SQL queries can be answered
Massively Parallel Processing (MPP)
• Parallel design leverages data projections to enable distributed storage and workload
• Active redundancy
• Automatic replication, failover, and recovery
• Shared-nothing, grid-based database architecture provides high scalability on clusters of commodity hardware

[Figure: a client network and a private data network connect Node 1, Node 2, and Node 3; each node has 2 quad-core CPUs, 16+ GB RAM, and 1+ TB of storage]

Nodes are peers
• No specialized nodes – all nodes are peers
• Query or load through any node
• Continuous real-time load and query
Application Integration
• Simple integration with Hadoop and existing BI and ETL tools
• Supports SQL, ODBC, JDBC, and ADO.NET, plus most ETL and BI reporting products
• Handles both bulk and trickle loads for ETL, replication, data quality, analytics, and reporting
• Leverages existing investments to lower Total Cost of Ownership (TCO)
Vertica Architecture Advantages
Vertica Internals:
Projections
What is a Projection?
• Collection of table columns
• Stores data in a format to optimize query execution
• Similar in concept to materialized views
Projections
[Figure: the logical TABLE foo maps to three physical PROJECTIONS – foo_p1, foo_p2, and foo_p3 – stored on the cluster]
Tables versus Projections

Anchor Table
Session ID   Page                                                  Time
10256        http://my.vertica.com/                                01:02:02 PM
10257        http://www.vertica.com/the-analytics-platform/        01:02:02 PM
10256        http://www.vertica.com/customer-experience/           01:02:03 PM
10256        http://www.vertica.com/customer-experience/training/  01:02:05 PM
10257        http://www.vertica.com/industries/                    01:02:56 PM
10257        http://www.vertica.com/industries/web-social-gaming/  01:03:41 PM
10256        http://my.vertica.com/                                01:35:26 PM

Projection 1 (sorted by Session ID, Time)
Session ID   Time       Page
10256        01:02:02   http://my.vertica.com/
10256        01:02:03   http://www.vertica.com/customer-experience/
10256        01:02:05   http://www.vertica.com/customer-experience/training/
10256        01:35:26   http://my.vertica.com/
10257        01:02:02   http://www.vertica.com/the-analytics-platform/
10257        01:02:56   http://www.vertica.com/industries/
10257        01:03:41   http://www.vertica.com/industries/web-social-gaming/

Projection 2 (sorted by Page)
Page                                                  Session ID
http://my.vertica.com/                                10256
http://my.vertica.com/                                10256
http://www.vertica.com/customer-experience/           10256
http://www.vertica.com/customer-experience/training/  10256
http://www.vertica.com/industries/                    10257
http://www.vertica.com/industries/web-social-gaming/  10257
http://www.vertica.com/the-analytics-platform/        10257

Projections are optimized for common query patterns
Projection Basics
• Anchor table is not stored
• Data is stored in a sorted and compressed format
• No need for indexing
• Transparent to the end user and applications
• Created by Database Designer (DBD)
– Can be manually created
• Best projection for a query is chosen by the
Optimizer at query execution time
Replication
• For a small projection, copy the full projection to each node
– Inherently provides high availability of the projection

[Figure: a replicated projection – the source data copied in full to Node1, Node2, and Node3]
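A minimal sketch of a replicated projection, assuming a small hypothetical store_dim dimension table; UNSEGMENTED ALL NODES places a full copy on every node:

CREATE PROJECTION store_dim_rep
AS SELECT * FROM store_dim
ORDER BY store_key
UNSEGMENTED ALL NODES;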
Segmentation
• For a large projection, distribute the projection data across multiple nodes
– Segmenting the projection splits the data into subsets of rows
– Design for random, even data distribution

[Figure: a segmented projection – the source data split into row subsets, one per node, across Node1, Node2, and Node3]
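A sketch of hash segmentation, assuming a hypothetical clicks fact table; hashing a high-cardinality column such as session_id gives the random, even distribution described above:

CREATE PROJECTION clicks_seg
AS SELECT session_id, page, hit_time FROM clicks
ORDER BY session_id, hit_time
SEGMENTED BY HASH(session_id) ALL NODES;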
Segmentation and High Availability
• Buddy projections provide high availability for segmented projections
– A buddy projection is a copy of a projection segment that is stored on a different node
– K-safety is the number of replicas of the data that exist on the cluster

[Figure: a segmented projection plus its segmented buddy projection spread across Node1, Node2, and Node3, making the cluster K-safe at level 1]
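Extending the previous sketch (table and columns remain hypothetical), the KSAFE clause asks Vertica to create the offset buddy copies, and MARK_DESIGN_KSAFE declares the design's safety level:

CREATE PROJECTION clicks_seg_buddy
AS SELECT session_id, page, hit_time FROM clicks
ORDER BY session_id, hit_time
SEGMENTED BY HASH(session_id) ALL NODES KSAFE 1;

SELECT MARK_DESIGN_KSAFE(1);  -- declare the physical design K-safe at level 1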
Projection Basics: Maintenance
• Data is loaded directly into projections
• No need to rebuild or refresh projections after the
initial refresh
• New projections can be created at any time, either
manually or by running the Database Designer
• Old projections can be dropped
Projections: Summary
• A projection is
– A collection of encoded and compressed columns with a
sort order and segmentation
– Automatically maintained during data load
– Tuned for different queries
• Automatic database design
– Ensures data is stored in sorted, encoded, compressed
projections
– Removes the need for complex tablespace tuning, partitioning, index design and maintenance, and materialized views on top of base tables
Vertica Internals:
Distributed Query Execution
Vertica Query Execution Basics
• SQL query is written against tables
SELECT count(*) FROM fact;
• Vertica translates the query to execute against
projections
SELECT count(*) FROM fact_p1;
• The query Optimizer picks an optimal query plan
– Picks the best projection(s) for the query
– Picks the order in which to execute joins, query predicates,
etc.
– The query plan with the lowest cost is selected
Sample Query Plan

Query:
EXPLAIN
SELECT name, country, avg(age), count(*)
FROM customer
WHERE gender = 'M'
  AND country IN ('USA', 'GER')
GROUP BY country, name;

Query Plan:
Access Path:
+-GROUPBY HASH [Cost: 34, Rows: 2]
| Aggregates: sum_float(customer.age), count(customer.age), count(*)
| Group By: customer.country, customer."name"
| +---> STORAGE ACCESS for customer [Cost: 33, Rows: 2]
| | Projection: public.customer_p1
| | Materialize: customer.country, customer.age, customer."name"
| | Filter: (customer.gender = 'M')
| | Filter: (customer.country = ANY (ARRAY['GER', 'USA']))
Query Execution Workflow
• Client connects to a node and issues a query
– The node the client connects to becomes the initiator node
– The other nodes in the cluster are executor nodes
• The initiator node parses the query and picks an execution plan
• The initiator node distributes the query plan to the executor nodes

SELECT count(*) FROM fact;
[Figure: the query arrives at the initiator node, which sends the plan to the two executor nodes]
Query Execution Workflow
• All nodes execute the query plan locally
• Executor nodes send partial query results back to the initiator node
• The initiator node aggregates the results from all nodes
• The initiator node returns the final result to the user

SELECT count(*) FROM fact;
[Figure: each executor node returns its partial count to the initiator node, which sums them into the final result]
Vertica Internals:
Transactions and Locking
Vertica Transactions
• Transactions
– Sequence of operations, ending with COMMIT or
ROLLBACK
– Provide for atomicity and isolation of database operations
• Transaction sources
– User transactions
– Internal Vertica transactions
• Initialization / shutdown
• Recovery
• Tuple Mover: mergeout and moveout
Vertica Transaction Model
• All changes are made to new files
• No files are updated

[Figure: a timeline of epochs. Inserts, deletes, and updates go into the current epoch, which advances on each DML commit. Behind it sit the closed epochs – the latest epoch and the older historical epochs, all queryable without locks – reaching back to the Ancient History Mark (AHM)]
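As a sketch (column names assumed from the SYSTEM virtual table), the epoch state can be inspected, and the AHM advanced so that older history becomes eligible for purging:

SELECT current_epoch, ahm_epoch, last_good_epoch FROM system;
SELECT MAKE_AHM_NOW();  -- move the Ancient History Mark up to the current epoch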
Benefits of Vertica Transaction Model
• No contention between reads and writes
– Once a file is written to disk, it is never written to again
– Updates work as concurrent deletes and inserts
– To roll back, simply discard the incomplete files
– Benefit: No undo logs needed
• Durability through K-Safety
– All data is redundantly stored on multiple nodes
– Recover data by querying other nodes
– Benefit: No redo logs needed
• Simple / lightweight commit protocol
Vertica Internals:
Hybrid Data Store: WOS and ROS
Hybrid Data Store: WOS and ROS
• Write Optimized Store (WOS) – in-memory data store for low-latency data loads
– In memory; unsorted and uncompressed
– Segmented and K-safe
– Low latency; small, quick inserts
• Read Optimized Store (ROS) – on-disk, optimized data store
– On disk; sorted and compressed
– Segmented
– Large loads go direct to the ROS
• The Tuple Mover asynchronously transfers data from the WOS to the ROS
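A sketch of the two load paths (table and file paths are hypothetical): by default COPY lands small loads in the WOS, while the DIRECT hint writes straight to the ROS:

COPY ticks FROM '/data/ticks_small.csv' DELIMITER ',';         -- trickle load into the WOS
COPY ticks FROM '/data/ticks_large.csv' DELIMITER ',' DIRECT;  -- bulk load written directly to the ROS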
Tuple Mover - moveout
• Load frequently into the WOS; data is available for query within seconds
• The moveout task asynchronously moves data from the WOS to the ROS

[Figure: newly loaded rows such as (IBM, 60.25, 10,000, 1/15/2012) sit in the Write Optimized Row-Store (WOS); the Tuple Mover asynchronously transfers them into a new container in the sorted, encoded Read Optimized Column-Store (ROS)]
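Moveout runs automatically on the Tuple Mover's schedule; as a sketch, it can also be invoked by hand:

SELECT DO_TM_TASK('moveout');  -- move WOS contents into the ROS now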
Tuple Mover - mergeout
• The mergeout task combines ROS containers to reduce fragmentation

[Figure: two ROS containers holding overlapping IBM, MSFT, and NFLX rows are merged by the Tuple Mover into a single sorted, run-length-encoded container]
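Mergeout likewise runs automatically; a sketch of forcing it manually:

SELECT DO_TM_TASK('mergeout');  -- combine eligible ROS containers now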
Real-time Analytics
• Real-time analytics on large volumes of data is a reality on the Vertica Analytic Database
• The hybrid storage architecture enables low-latency loads
• Concurrent load and query are enabled by the transaction model and asynchronous Tuple Mover activity
• Vertica achieves very low data latency (seconds) with full context (years of detailed history stored)
• Load performance scales with cluster size – proven at 10+ TB per hour

[Figure: data flows through the Write Optimized Store (WOS) into the Read Optimized Store (ROS)]