Optimize Your Reporting In Less Than 10 Minutes
David Nhim, News Distribution Network, Inc.
June 24th, 2015
Housekeeping
• The recording will be sent to all webinar participants after the event.
• Questions? Type them in the chat box and we will answer.
• Posting to social? Use #AWSandChartio
Today’s Speakers
Matt Train
@Chartio
David Nhim
@Newsinc
Brandon Chavis
@AWScloud
Amazon Redshift
Fast, simple, petabyte-scale data warehousing for less than $1,000/TB/year
Common Customer Use Cases
Traditional Enterprise DW
• Reduce costs by extending the DW rather than adding HW
• Migrate completely from existing DW systems
• Respond faster to the business
Companies with Big Data
• Improve performance by an order of magnitude
• Make more data available for analysis
• Access business data via standard reporting tools
SaaS Companies
• Add analytic functionality to applications
• Scale DW capacity as demand grows
• Reduce HW & SW costs by an order of magnitude
Amazon Redshift is easy to use
• Provision in minutes
• Monitor query performance
• Point-and-click resize
• Built-in security
• Automatic backups
Amazon Redshift is priced to let you analyze all your data
• Price is nodes × hourly cost
• No charge for the leader node
• 3x data compression on average
• Price includes 3 copies of data

DS2 (HDD): price per hour for a single DW1.XL node, and effective annual price per TB compressed
• On-Demand: $0.850/hour ($3,725/TB/year)
• 1-Year Reservation: $0.500/hour ($2,190/TB/year)
• 3-Year Reservation: $0.228/hour ($999/TB/year)

DC1 (SSD): price per hour for a single DW2.L node, and effective annual price per TB compressed
• On-Demand: $0.250/hour ($13,690/TB/year)
• 1-Year Reservation: $0.161/hour ($8,795/TB/year)
• 3-Year Reservation: $0.100/hour ($5,500/TB/year)
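An illustrative back-of-the-envelope check (not from the slides): a 3-node DS2.XL cluster on-demand runs 3 × $0.850 = $2.55/hour, about $22,300/year; spread over the nodes' 6 TB of compressed storage, that works out to roughly $3,725 per TB per year, matching the on-demand row above.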
Amazon Redshift Node Types
DS2 (HDD)
• Optimized for I/O-intensive workloads
• High disk density
• On-demand at $0.85/hour
• As low as $1,000/TB/year
• Scale from 2 TB to 2 PB
• DS2.XL: 31 GB RAM, 2 cores, 2 TB compressed storage, 0.5 GB/sec scan
• DS2.8XL: 244 GB RAM, 16 cores, 16 TB compressed storage, 4 GB/sec scan
DC1 (SSD)
• High performance at smaller storage sizes
• High compute and memory density
• On-demand at $0.25/hour
• As low as $5,500/TB/year
• Scale from 160 GB to 326 TB
• DC1.L: 16 GB RAM, 2 cores, 160 GB compressed SSD storage
• DC1.8XL: 256 GB RAM, 32 cores, 2.56 TB compressed SSD storage
Amazon Redshift Architecture
• Leader node
– SQL endpoint
– Stores metadata
– Coordinates query execution
• Compute nodes
– Local, columnar storage
– Execute queries in parallel
– Load, backup, and restore via Amazon S3; load from Amazon DynamoDB or over SSH
• Two hardware platforms, optimized for data processing
– DW1: HDD; scales from 2 TB to 2 PB
– DW2: SSD; scales from 160 GB to 330 TB
[Architecture diagram: JDBC/ODBC clients connect to the leader node; compute nodes are interconnected over 10 GigE (HPC); ingestion, backup, and restore flow through Amazon S3]
Amazon Redshift enables end-to-end security
• SSL to secure data in transit; load encrypted from Amazon S3; ECDHE perfect forward secrecy
• Encryption to secure data at rest
– AES-256, hardware accelerated
– All blocks on disk & in Amazon S3 encrypted
– On-premises HSM & AWS CloudHSM support
• UNLOAD to Amazon S3 supports SSE and client-side encryption
• Audit logging & AWS CloudTrail integration
• Amazon VPC and IAM support
• SOC 1/2/3, PCI-DSS Level 1, FedRAMP, HIPAA
[Diagram: cluster runs in an internal VPC, reached from the customer VPC over JDBC/ODBC; ingestion, backup, and restore via Amazon S3 over 10 GigE (HPC)]
Amazon Redshift integrates with multiple data sources
• Amazon S3
• Amazon EMR
• Amazon DynamoDB
• Amazon RDS
• Corporate datacenter
NDN Introduction
2015
• Transition Items & Interim Plan
• Marketing Approach & Priorities
• Brand Development Process
• Resourcing
• Next Steps
The Broadest Offering of Video Available Anywhere
• 400+ Premium Sources
• 4,000 New Videos Daily
The Digital Media Exchange
• 400 Premium Content Providers
• 4,000 High-Traffic Publishers
The Web’s Best Publishers Lead with Video from NDN
Competitive Insight
NDN is a leader in the News/Information category, ranked #2 behind Huffington Post Media Group.
NDN Powers the Full Video Experience for Publishers
NDN Single Video Player & Fixed Placement
Perfect Pixel has Redefined the Video Workflow
NDN Wire Match
Automates placement of AP video recommended by AP editors
Powering Video On 44 of the Top 50 Newspaper Sites
[Chart: Top U.S. Newspapers Online]
NDN is the Leader in Local News
• Breaking news video available from over 250 stations in 155 US news markets
• Coverage for 90% of the US audience
The Largest Consortium of Digital Local News Video Ever Created
Participating broadcasters: 257 stations in 155 markets
BI Initiative
• Needed self-service BI
• Must be user-friendly
• Easy to manage
• Reviewed over a dozen BI vendors
– Build or buy
– Self-hosted vs. cloud
– Training/support
– POC process
Tech @ NDN
• Tools
– Kinesis for Real-Time Data Collection
– Python / EMR / Pentaho for ETL
– Redshift for Data Warehousing
– Chartio for Visualization
Data Warehouse Architecture
[Diagram: RDBMS and log data flow through ETL into Redshift dimension tables]
Architecture
• A real-time data collector encodes messages as protocol buffers and sends the payload to Kinesis
• Micro-batching
– An ETL process continuously reads from Kinesis, batches the data, and loads it into Redshift
– ~15 minutes behind real time
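A rough sketch of the load step (the bucket, prefix, table name, and credentials are illustrative placeholders, not NDN's actual setup): each staged micro-batch can be loaded in a single transaction.

-- Load one staged micro-batch (gzipped, tab-delimited files) into Redshift
BEGIN;
COPY events
FROM 's3://example-bucket/batches/2015-06-24T12:00/'
CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>'
GZIP DELIMITER '\t';
COMMIT;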
Redshift Basics
• Redshift is a distributed column store
– Don't treat it like a traditional row store
– Don't do "SELECT * FROM …" queries
• No referential integrity
– Primary/foreign keys are ignored except for query planning
– Enforce uniqueness via ETL
• No UDFs or stored procedures
– Must rely on built-in functions
– Do as much pre-processing as possible outside the cluster
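For example (a hypothetical table, not from the deck), a declared key is accepted but never enforced, so deduplication has to happen in the ETL layer:

CREATE TABLE dim_video (
  video_id BIGINT PRIMARY KEY,  -- informs the query planner only; never enforced
  title    VARCHAR(256)
);
INSERT INTO dim_video VALUES (1, 'clip A');
INSERT INTO dim_video VALUES (1, 'clip A');  -- also succeeds: the table now holds duplicates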
Redshift
• Use the COPY command to bulk load data
– Raw inserts ("INSERT INTO table … VALUES …") are slow
• Use deep copies to rebuild tables rather than running a full vacuum
– CREATE TABLE, then INSERT INTO … SELECT * FROM …
– Vacuum took as long as three days for some tables
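A sketch of the deep-copy pattern (events is a hypothetical table name):

CREATE TABLE events_new (LIKE events);        -- copies column definitions, dist style, and sort keys
INSERT INTO events_new SELECT * FROM events;  -- rewrites the data fully sorted
ALTER TABLE events RENAME TO events_old;
ALTER TABLE events_new RENAME TO events;
DROP TABLE events_old;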
Distribution
• Distribution styles
– Use ALL distribution for dimension tables
– Use EVEN distribution for summary tables
– Use KEY distribution for fact tables
• Select the most often joined column as the distribution key
• Strive for join data locality
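Hedged DDL sketches of the three styles (table and column names are made up for illustration):

CREATE TABLE dim_partner (partner_id INT, partner_name VARCHAR(64))
  DISTSTYLE ALL;   -- replicated to every node
CREATE TABLE daily_summary (event_day DATE, plays BIGINT)
  DISTSTYLE EVEN;  -- rows spread round-robin across slices
CREATE TABLE fact_events (event_id BIGINT, partner_id INT, event_time TIMESTAMP)
  DISTSTYLE KEY DISTKEY (partner_id);  -- co-locates rows that join on partner_id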
Sort Keys
• Select a timestamp-based column at the lowest grain that makes sense (e.g., a minute-truncated timestamp)
• Insert data in sort-key order to minimize the need for vacuum
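Continuing the illustrative fact table, the sort key goes on a minute-truncated timestamp that the ETL fills in with DATE_TRUNC:

CREATE TABLE fact_events (
  event_id     BIGINT,
  partner_id   INT,
  event_minute TIMESTAMP  -- DATE_TRUNC('minute', <raw event time>), applied during ETL
)
DISTSTYLE KEY DISTKEY (partner_id)
SORTKEY (event_minute);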
Compression Encoding
• Use compression to reduce I/O
– Use ANALYZE COMPRESSION to get recommended encodings for your table, or let the COPY bulk loader choose them for you
– Use run-length encoding on rollup columns like hour, day, month, year, and booleans (assuming a timestamp sort key)
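A sketch with hypothetical rollup columns; run-length encoding pays off here because, under a time-based sort key, these values repeat in long runs:

ANALYZE COMPRESSION fact_events;  -- samples the table and prints recommended encodings

CREATE TABLE hourly_rollup (
  event_day  DATE     ENCODE RUNLENGTH,
  event_hour SMALLINT ENCODE RUNLENGTH,
  is_mobile  BOOLEAN  ENCODE RUNLENGTH,
  plays      BIGINT
);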
Summary Tables
• Aggregate tables / materialized views
– Pre-build your summaries and complex queries
– Your biggest boost in query performance will come from using summary tables
– Adds ETL complexity, but reduces reporting complexity
– Chartio's Data Store is also an option if your data set is under 1 M rows
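A minimal sketch of a hand-built aggregate (Redshift had no native materialized views at the time, so the ETL job rebuilds or appends to it; names are illustrative):

CREATE TABLE daily_plays_by_partner AS
SELECT partner_id,
       TRUNC(event_minute) AS event_day,  -- truncate timestamp to date
       COUNT(*)            AS plays
FROM   fact_events
GROUP  BY partner_id, TRUNC(event_minute);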
Avoid Updates on Fact Tables
• Avoid doing UPDATEs on your fact tables
– An update is equivalent to a delete plus an insert, and will ruin your sort order
– A vacuum will be required after large updates
• Deleted rows remain in your table
– They are marked and hidden, but don't disappear until a vacuum delete or full vacuum is performed
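For example (illustrative), space from a large delete can be reclaimed without paying for a full re-sort:

DELETE FROM fact_events WHERE event_minute < '2015-01-01';
VACUUM DELETE ONLY fact_events;  -- reclaims deleted rows but skips the sort phase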
Caching
• Configure Chartio with appropriate cache timeout values
– e.g., 15 min, 1 hour, 8 hours
• Use Chartio's Data Store feature
– Ideal for storing complex query results or aggregates
Views
• Use views instead of tables
– Easier to update Chartio schemas when using a view
– Can add mandatory filters
– Can change the view without affecting Chartio
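A sketch of a view with a mandatory filter baked in (names are hypothetical); Chartio points at the view, so the table underneath can evolve freely:

CREATE VIEW v_recent_events AS
SELECT event_minute, partner_id, event_id
FROM   fact_events
WHERE  event_minute >= DATEADD(day, -90, GETDATE());  -- enforce a 90-day window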
Chartio Filters and Drilldowns
• Encourage use of dashboard filters and variables
– Allows for dynamic filtering and focused reporting
• Configure drilldowns on dashboards
– Makes exploration more natural
Redshift Workload Manager
• Use the Workload Manager (WLM)
– Prevent long queries from blocking other users
– Create multiple query queues for ETL, BI, machine learning, etc.
– Set separate memory settings and query timeout values for each queue
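Queues themselves are defined in the cluster's WLM configuration (parameter group); a session can then route its queries to a queue by query group. The queue name 'etl' here is illustrative:

SET query_group TO 'etl';  -- send this session's queries to the matching WLM queue
-- ... run ETL statements ...
RESET query_group;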
Quick Stats
• 14 event types
• 300 M to 1 B events/day
• ½ TB of uncompressed data/day
• 30–50 data points per event type
• 50+ users (about half the company)
• 80+ dashboards, the majority user-generated
• Reportable dimensions include:
– Partners, geo-location, device, event type, playlists, widgets, date/time, …
Data At A Glance
Data At A Glance
Chartio Summary
• Easy to deploy
• Easy to manage
• Dead simple to use
• Great performance
• Responsive support
• Continually improving and adding new features
Redshift Summary
• Easy to Deploy
• Easy to Resize
• Automated backups
• Familiar Postgres-like interface
• High performance
• Can use OLAP/Relational tools
[Chartio platform diagram: Data Sources, Schema/Business Rules, Interactive Mode, SQL Mode, Data Stores, Data Pipeline/Data Blending, Data Exploration, Dashboards, Embedded, Scheduled Emails, TV Screens]
Next steps
Download Chartio Guide:
Optimizing Amazon Redshift Query Performance
https://chartio.com/redshift
Questions?
Chartio
Matt Train
mtrain@chartio.com
chartio.com
News Distribution Network, Inc.
David Nhim
dnhim@newsinc.com
newsinc.com
AWS
Brandon Chavis
chavisb@amazon.com
aws.amazon.com
Editor's Notes
  • #5: For those unfamiliar with Amazon Redshift, it is a fast, fully managed, petabyte-scale data warehouse for less than $1,000 per terabyte per year: fast, cost-effective, and easy to use (launch a cluster in a few minutes, scale with the push of a button).
  • #6: Migrate from a traditional DW and add new use cases and more data. Big data companies see huge performance gains at PB scale, and because they can connect their data to reporting tools they open data up to the business. SaaS companies can scale cost-effectively.
  • #7: Redshift is not only cheaper but also easy to use. Provisioning takes 15 minutes.
  • #10: 1. Redshift is a columnar, massively parallel processing data warehouse designed to run as a clustered system. 2. Redshift uses the Postgres protocol over JDBC and ODBC to connect to your SQL client or BI tools. 3. The leader node is your SQL endpoint; it stores metadata and coordinates query execution. 4. Data is stored on compute nodes, and queries are executed in parallel. Compute nodes can also be loaded in parallel from Amazon S3, DynamoDB, and Elastic MapReduce using the COPY command, and they store backups of data to S3 in parallel. Nodes communicate with each other and with S3 over a 10 GigE connection. 5. There are two hardware platforms. DW1 is the magnetic platform, designed for large data warehouses; it scales from 2 TB to 1.6 PB. DW2 is the SSD platform, designed for high-performance workloads. If you have less than half a TB of data, DW2 is most cost-effective.
  • #12: Redshift is designed to be a central data warehouse where you can pull in data from all your data sources to get a complete picture. It integrates with a host of AWS services. You can load data in parallel directly from Amazon S3, Amazon DynamoDB (a NoSQL data store), and Amazon EMR (a Hadoop service) using the COPY command. You can also COPY data directly from your own on-premises databases over an SSH connection. Because customers rely on Redshift as a central data store, having good business intelligence tools is important.
  • #45: Section header.