Shaun
#datastack
What you’re going to learn
1. How top engineering organizations are building their data infrastructure
2. The 7 core challenges of data integration
3. Why companies like Asana, Buffer, and SeatGeek choose Redshift for their analytics warehouse
...and much more!
Shaun
#datastack
Data Infrastructure:
Then and Now
Dillon
#datastack
The traditional approach: ETL Dillon
[Diagram: source systems (A, B, C, D, ...) feed an ETL team, which loads transformed summaries into the enterprise data warehouse (EDW team); the BI team then serves reports to end users. Annotations: heavy transformation up front, OLAP cubes / silos, restricted Q&A, summary tables.]
#datastack
How companies are doing it today: ELT Dillon
[Diagram: data is extracted from sources and loaded into the database; a modeling layer transforms at query time; analytics, visualization, and exploration sit on top. Transform (and explore!)]
A sample definition from the slide's modeling layer:
- name: first_purchasers
  type: single_value
  base_view: orders
  measures: [orders.customer.all]
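To make "transform at query" concrete, here is a hedged sketch of the kind of SQL such a single-value tile might compile to and run against the warehouse at query time. The connection details, table, and column names (orders, customer_id, created_at) are illustrative assumptions, not the deck's actual schema.

# Hedged sketch: the sort of query a "first_purchasers" tile might run
# at query time. Table/column names and credentials are assumptions.
import psycopg2  # Redshift speaks the PostgreSQL wire protocol

FIRST_PURCHASERS_SQL = """
SELECT COUNT(DISTINCT o.customer_id) AS first_purchasers
FROM orders o
WHERE NOT EXISTS (                    -- keep only each customer's first order
          SELECT 1 FROM orders prior
          WHERE prior.customer_id = o.customer_id
            AND prior.created_at < o.created_at)
  AND o.created_at >= DATEADD(day, -30, GETDATE())  -- first orders, last 30 days
"""

conn = psycopg2.connect(host="my-cluster.example.com", port=5439,
                        dbname="analytics", user="bi_user", password="...")
with conn, conn.cursor() as cur:
    cur.execute(FIRST_PURCHASERS_SQL)
    print(cur.fetchone()[0])          # the single value the tile displays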
#datastack
Benefits of this approach Dillon
1. Redshift is performant enough to handle most transformations
2. Users prefer performing transformations in a language they already use (SQL) or with a UI
3. Transformations are much simpler and more transparent
4. Performing transformations alongside raw data is great for auditability (see the sketch below)
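As a minimal sketch of point 4: the transformation can live in the warehouse as a plain SQL view right next to the raw table, so auditing a metric is just reading the view's definition. Schema, table, and column names here are hypothetical.

# Hypothetical sketch of point 4: the transformation lives in the same
# warehouse as the raw rows, as a plain SQL view anyone can inspect.
DAILY_REVENUE_VIEW = """
CREATE OR REPLACE VIEW analytics.daily_revenue AS
SELECT DATE_TRUNC('day', created_at) AS order_date,
       COUNT(*)                      AS orders,
       SUM(amount)                   AS revenue
FROM raw.orders                      -- untransformed source data, side by side
WHERE status = 'completed'
GROUP BY 1;
"""
# Auditing the metric is just reading the view definition -- there is no
# opaque backend ETL job to reverse-engineer.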
#datastack
Data infrastructure has geek cred Shaun
[This slide repeats four times, each showing a different "how we built our data infrastructure" blog post: Zulily, Spotify, SeatGeek, Buffer, Asana, and others.]
#datastack
What the stack looks like Shaun
Data Integration
Data Warehouse
BI/Analytics
#datastack
Data Integration
Shaun
#datastack
Why consolidation matters
#datastack
internal analytics Shaun
#datastack
Quick poll Shaun
Which five data sources are your top priority to integrate and keep integrated?
● production databases
● events
● error logs
● billing
● email marketing
● CRM
● advertising
● ERP
● A/B testing
● support
#datastack
“A year ago, we were facing a lot of stability problems with our data processing. When there was a major shift in a graph, people immediately questioned the data integrity. It was hard to distinguish interesting insights from bugs. Data science is already an art so you need the infrastructure to give you trustworthy answers to the questions you ask. 99% correctness is not good enough. And on the data infrastructure team, we were spending a lot of time churning on fighting urgent fires, and that prevented us from making much long-term progress. It was painful.”
- Marco Gallotta, Asana, How to Build Stable, Accessible Data Infrastructure at a Startup
#datastack
“Our story would end here if real-time processing were perfect. But it’s not: some events can come in days late, some time ranges need to be re-processed after initial ingestion due to code changes or data revisions, various components of the real-time pipeline can fail, and so on.”
- Gian Merlino, MetaMarkets, Building a Data Pipeline That Handles Billions of Events in Real-Time
#datastack
7 core challenges of data integration Shaun
1. Connections: every API is a unique and special snowflake
2. Accuracy: ordering data on a distributed system
3. Latency: large object data stores (Amazon S3, Redshift) are optimized for batches, not streams (see the sketch after this list)
4. Scale: data will grow exponentially as your company grows
5. Flexibility: you're interacting with systems you don't control
6. Monitoring: notifications for expired credentials, errors, and service disruptions
7. Maintenance: justifying investment in ongoing maintenance and improvement
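One common response to the latency and accuracy challenges above is a batched, idempotent load: COPY a batch into a staging table, delete any rows it overlaps (late or re-sent events), then insert. Below is a hedged Python sketch of that generic pattern; it is not a description of Pipeline's internals, and the bucket path, IAM role, table, and column names are all assumptions.

# Hedged sketch of an idempotent batch load into Redshift: stage the
# batch, delete overlapping rows (late or re-sent events), then insert.
# Generic pattern only -- not Pipeline's internals. Names are assumptions.
import psycopg2

STEPS = [
    "CREATE TEMP TABLE events_stage (LIKE analytics.events);",
    # COPY from S3 is the batch-oriented path Redshift is optimized for
    """COPY events_stage
       FROM 's3://my-bucket/events/batch.csv'
       CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/redshift-load'
       CSV;""",
    # Delete-then-insert makes re-running the same batch safe
    """DELETE FROM analytics.events
       USING events_stage
       WHERE analytics.events.event_id = events_stage.event_id;""",
    "INSERT INTO analytics.events SELECT * FROM events_stage;",
]

conn = psycopg2.connect(host="my-cluster.example.com", port=5439,
                        dbname="analytics", user="loader", password="...")
with conn, conn.cursor() as cur:      # one transaction: all-or-nothing
    for step in STEPS:
        cur.execute(step)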
#datastack
Or... try Pipeline Shaun
Ad Platforms
Web Data
Customer Support
Marketing Automation
CRM
Ecommerce
Payments
#datastack
Warehousing Infrastructure
Shaun
#datastack
Analytics warehouse Shaun
Redshift is the most common analytics warehouse.
Chosen by: Asana, Braintree, Looker, SeatGeek, VigLink, Buffer
#datastack
awesome Shaun
#datastack
AirBnB experiment Shaun
                                          Hive             Redshift
Test 1: 3 billion rows of data            28 minutes       <6 minutes
Test 2: two joins with millions of rows   182 seconds      8 seconds
Cost                                      $1.29/hour/node  $0.85/hour/node
#datastack
Periscope research Shaun
#datastack
DiamondStream’s dashboard query performance Shaun
#datastack
Business Intelligence
& Analytics
Dillon
#datastack
A broken model Dillon
● Feedback loop is broken
● Disparate reporting
● Non-unified decision making
● Versioning
● Reusability is lost
[Diagram: Marketing, Finance, and AM teams each pulling their own reports]
#datastack
Constraints of SQL Dillon
SQL is versatile, but shares the same flavor as "write-only" languages such as Perl
You can write it, but you can't read it
Promotes one-off, piecemeal analysis
Invites disparate interpretation (illustrated below)
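To make "disparate interpretation" concrete, here is a hypothetical illustration: two analysts answer the same question, "how many customers were active last month?", with different but individually reasonable SQL. The table and column names are made up.

# Hypothetical illustration of "disparate interpretation": two analysts,
# one question ("active customers last month?"), two different answers.
ANALYST_A = """
SELECT COUNT(DISTINCT customer_id)
FROM orders
WHERE created_at >= '2015-09-01' AND created_at < '2015-10-01';
"""

ANALYST_B = """
SELECT COUNT(customer_id)                  -- forgot DISTINCT
FROM orders
WHERE status <> 'cancelled'                -- added a filter A didn't
  AND DATE_TRUNC('month', created_at) = '2015-09-01';
"""
# Both queries run fine; the numbers they return can differ, and nothing
# in either query records which definition the business agreed on.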
#datastack
The critical multiplier: modeling Dillon
[Diagram: business questions ("What's our most successful marketing campaign?", "How does our Q4 pipeline look?", "Who are our healthiest / happiest customers?") are answered through a modeling layer sitting on top of any SQL data warehouse. A toy sketch of the idea follows.]
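Here is a toy sketch of the idea in Python, illustrative only and not LookML or any real modeling language: each metric is defined exactly once, and every question compiles to SQL from that single definition, so marketing and finance necessarily share the same numbers.

# Toy sketch of a modeling layer (illustrative only, not LookML):
# one definition per metric, every query generated from it.
METRICS = {
    "revenue":        "SUM(amount)",
    "order_count":    "COUNT(*)",
    "customer_count": "COUNT(DISTINCT customer_id)",
}

def compile_query(metric: str, dimension: str, table: str = "orders") -> str:
    """Turn a (metric, dimension) question into SQL via the shared model."""
    return (f"SELECT {dimension} AS dim, {METRICS[metric]} AS {metric} "
            f"FROM {table} GROUP BY 1")

# Marketing and finance ask different questions, one definition of revenue:
print(compile_query("revenue", "campaign"))
print(compile_query("revenue", "DATE_TRUNC('month', created_at)"))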
#datastack
analytics Dillon
● Data access
● Uniform definitions
● A Shared View
● Collaboration
● Analytical Speed
#datastack
What You Can Do
Dillon
#datastack
analytics tools Dillon
[Timeline: Week 1, RJMetrics Pipeline centralizes your data; Weeks 2-3, Looker Blocks add pre-templated models on top.]
#datastack
marketing
#datastack
analytics
#datastack
Thank you!
How to Build a Data-Driven Company: From Infrastructure to Insights
Editor's Notes
  • #2: Good afternoon, everyone! Thanks so much for joining us today. I’m going to introduce you to my co-host in just a second, but first, let me run through just a few housekeeping details.
  • #3: We have a lot on the agenda for today. The core of our presentation is going to focus on how companies like yours are solving their data infrastructure challenges. We’re going to cover the challenges engineers should expect around data integration, why Amazon Redshift is quickly becoming the data warehouse of choice, cultural barriers to building a data-driven company, and a lot more.
  • #4: First thing we’re going to cover is data infrastructure, or the actual architecture of legacy and modern data pipelines
  • #5: For the last 30 years or so, really since the inception of modern databases, data warehousing has been the standard model to aggregate data and provide business-directed analytics. Data is extracted from various sources…. databases, third-party applications, flat files, etc…. and transformed into a predefined model, then loaded into the data warehouse. This ETL process results in data cubes and data silos, where analytics are separated by key groupings for various departments, such as marketing, product, sales, etc. This results in a few issues that are fundamentally prohibitive to creating a data-driven organization. First, it's very resource-intensive (and expensive) to manage all of the transformations and data loading. Second, it results in latency in the analytics process. End users only have access to pre-defined metrics, which are typically too broad or inflexible to guide nimble decision making. This means that end users aren't really getting any actionable insights from these metrics - they're just looking at high-level analysis. Third, it restricts drilling. If an end user finds an interesting piece of information…. say sales accelerated drastically for a certain user age group, and you want to know why… that end user needs to make another data request from the ETL or IT team, who will then take some time to return the request. This latency constrains end users from making data-driven decisions. These were commonly recognized problems. So nowadays, as Shaun was mentioning, modern tech companies have reworked this process.
  • #6: Nowadays, companies are collecting more data than ever before. Additionally, database technology has witnessed significant advances in the last several years... Databases themselves are now capable of performing sophisticated analysis very quickly. This removes the need for data silos and data cubes - all analytics can be performed directly on the central database. What this means is that it now makes sense to shift the burden of complex transformations to the front of the pipeline - to the BI tool - where transformations can be performed on-the-fly, at query time.
  • #7: Several benefits to this approach, some of which I mentioned a minute ago but are worth repeating: First, you no longer require a huge, resource-intensive engineering or ETL team to move all of your data - so it's much cheaper on the resource side. Secondly, technical users can pull data in a language they're used to, SQL…. and if you have a modeling layer, like Looker provides, then users can actually query the data directly from the UI, without any technical knowledge. Transformations aren't being done by engineers on the backend, they're being performed as the user pulls the data, so they're much easier to repeat and easier to understand. Lastly, this allows you to audit transformations, so users understand the components behind the analysis - they'll understand how a metric is defined. And Shaun has a few examples of this in practice.
  • #8: Data engineering has gone from being a clumsy, multi-year project to something with geek cred. Over the past year we've watched as one company after the next shared their "how we built our data infrastructure" blog posts. Yes, even Looker. We were really interested in the details behind all these projects, so we did a "meta-analysis" where we looked at how these companies solved core data engineering challenges.
  • #9: We looked at Zulily.
  • #10: Spotify
  • #11: SeatGeek, Buffer, Asana, and many more.
  • #12: Some of these companies (like Netflix and Spotify) are building data products -- recommendation engines. That stack can look slightly different. For this event, we’re going to focus on companies who are building data infrastructure for analytics. And for these companies what we saw is that the process looks very much like what Dillon was just describing. First, they extract data from the variety of sources. Then they load it into the data warehouse. Then they do transformations on top of that.
  • #13: Let’s start at the first part of the conversation. Extract & Load, or more simply, data integration.
  • #14: And just to clarify, the reason this step is so important is because all future insights depend on it. Here are some of the use cases that the Asana team laid out. “It’s difficult work – but an absolute requirement of great intelligence.”
  • #15: Here are the most common data sources that we saw companies connecting to. Our analysis of how companies built their data infrastructure was based largely on blog posts (and some conversations) on the topic. One limitation there is that engineers tend to write these pieces fairly soon after completion of the project and there’s often the understanding that more data sources will be added on later. Asana built data connections to the most sources, but there’s an enormous amount of data that can be derived just from connecting ad spend to purchase history living in your production databases.
  • #16: Now, for some audience participation, could you grab your mouse and fill in this poll? Which five data sources are your top priority to integrate and keep integrated? While you're filling in your answers, let me just say that data consolidation comes with its own special challenges. When Asana first started building their data infrastructure they did it using Python scripts and MySQL. And if you're just starting out this can work for you too, but you will outgrow it eventually. And I'm going to say more on that in a second, but first let's take a look at the results.
  • #17: So in the Asana team's own words, here are some of the challenges they faced during consolidation -- doubts about data integrity due to a lack of monitoring and logging, insights vs. bugs, and urgent fires when systems went down.
  • #18: And this is from MetaMarkets. Braintree’s team said: deletes are nearly impossible to keep track of, you have to keep track of data that changed, batch updates are slow and it’s difficult to know how long they’ll take.
  • #19: A big part of my job involves talking to people every day about their data infrastructure. These posts touch on some of the problems you can expect, but keep in mind -- these people are the successful ones. I’ve been on calls with many a frustrated engineer throwing in the towel on their data infrastructure projects after 1 year at the task. Data consolidation is hard. Here are 7 of the core challenges.
  • #20: Early last month we released a SaaS product designed to solve this problem -- called Pipeline. It takes data from any number of integrations and that data flows into a data warehouse with super low latency. We're aggressively releasing new integrations each month, so if you need an integration you don't see here today, let us know! If you want to learn more about this, stick around at the end for a demo.
  • #21: The next step in the process is data warehousing. Hands down the top pick for warehousing was Redshift.
  • #22: Among the companies that we looked at, Redshift was the most popular choice for an analytics warehouse.
  • #23: The most common reason? Speed. People are seeing dramatic improvements in query time using Redshift. Asana said that queries that were taking hours now take a few seconds. Similarly, SeatGeek had a critical query that took 20 minutes and now takes half a minute in Redshift.
  • #24: Here are the results of AirBnB tests that show performance in both query time and cost. Source: http://nerds.airbnb.com/redshift-performance-cost/
  • #25: Here's some research from Periscope showing similar performance gains for Redshift vs. Postgres.
  • #26: And here is research from DiamondStream showing how much better their internal dashboards performed when built on Redshift vs. MS SQL. I think it's this final reason why Looker is such a big fan of Redshift and recommends it to their clients. Source: http://www.datasciencecentral.com/profiles/blogs/why-5-companies-chose-amazon-redshift
  • #27: Right, thanks Shaun... So earlier I talked a bit about the structural differences between old data architecture vs. modern data architecture - now I'm going to elaborate a bit on how that architecture impacts business intelligence and analytics workflows.
  • #28: This slide shows workflows with the legacy architecture I described earlier. As a reminder, with legacy architecture, each department is working in silos, all serviced by a central IT or analyst team. This is fundamentally prohibitive to a data-driven culture for a few reasons: First, it's extremely resource-intensive for the central data team to service the needs of their business users. Second, it creates a bottleneck in the analytics process. You'll see that the arrows are flowing away from the central data team, and that's for a specific reason. The data team will provide pre-determined metrics for various departments, then rerun and distribute those metrics periodically. These metrics are typically overly broad and not actionable. If a user has further questions about the analysis…. and that is often the case; how do you know what questions to ask about the data unless you've seen the data already?... they need to submit a request to the data team, who may take a few days to turn it around. This latency restricts end users from making quick, informed business decisions based on their data. Plus, in most companies, there is typically a hierarchy to who receives data. The executive team can get all the data they want, while requests from sales reps, marketing managers, etc. are pushed to the back of the line. These groups rarely have the ability to make strategic decisions based on the analysis they request. Lastly, this model results in disparate reporting. If 5 different departments request the same metric from 5 different database analysts, it's highly likely that those analysts will have differing ideas about the appropriate way to calculate the metric. Especially when you get into the more sophisticated stuff - things like affinity analysis... if I buy X, what is the likelihood I buy Y?.... There are a few statistically defensible ways to calculate that metric. In practice, it's very common for large organizations to have non-unified definitions, which leads to headaches, data chaos, and an inability to make decisions based on data.
  • #29: One of the factors that contributes to these workflow issues, which is sort of the last point I touched on, is the difficulty in consistently defining metrics across a company. Part of this is because of the nature of SQL, the de facto language for querying databases. SQL can be easy to write, but difficult to read / audit. If you give 10 analysts the same metric, you'll very likely get 10 different queries, some of which may yield the same results, some of which may not. In practice, this often results in data analysts recycling and slightly modifying old queries, without ever really understanding the inner workings of the query. This then jeopardizes the integrity of the data, which makes it difficult to consistently interpret results.
  • #30: How do we solve this issue of one-off queries and siloed reporting? We create a data model as an intermediary. All definitions of metrics, and data transformations, are defined in one place, where all users can access and understand them. Now, you don't need those 10 analysts, you only need 1-2 who monitor the modeling layer, and you can be confident all users are working off of the same definitions and interpretations of the results. You can also link together data from different sources, so you can link Salesforce, Marketo, and Zendesk data together to get a comprehensive view of your customer. This allows us to maintain "data governance", which is a term that you probably hear a lot lately. So, how does this modeling layer impact workflows?
  • #31: This slide depicts BI and analysis workflows with modern architecture, which creates a truly data-driven environment. All users have equal access to the data through a UI; they don't need to know SQL. So now sales, marketing, finance, and customer success - teams that previously could not directly access data - have the ability to explore their database in full detail. Since everyone is looking at the same numbers and reports, business users can collaborate and facilitate meaningful conversations, based on shared insights. Business users can make informed strategic decisions on the fly, which results in tangible, significant competitive advantages. So, how do you set up this kind of architecture? I think a good example of this is one of our customers, Infectious Media, who offer digital advertising for a myriad of Fortune companies. With Looker, their sales optimization team has the ability to see, in real time, how various advertising campaigns are performing across every website and publisher. If a certain type of website is driving the most clicks or conversions, the optimization team can immediately determine why, then redirect future campaign efforts towards those specific websites or publishers, and perhaps new, similar ones. In a world where advertisements sometimes only last a week or two, the ability to constantly iterate on and refine campaign strategy results in tangible differences in top-line sales. This represents the most significant competitive advantage a company in this space can possess… This model is required for a company to survive.
  • #32: Now that we understand the benefits, I’ll explain how the set-up of these modern infrastructures is easier than ever. And I’ll illustrate this with an example using RJ Pipeline
  • #33: Say you're a company that collects data from a number of various sources, such as 3rd-party applications. Rather than needing to perform complex transformations (like with legacy architecture), you can dump all of your data directly into a centralized location using a middleware tool such as RJ Pipeline. This completely centralizes all of your data, and prepares it for analytics, with a few clicks. No need for heavy engineering resources and workloads. Once the data is centralized, you can quickly add a tool with modeling layers to help distribute data to all of your end users (again, the modeling layer is key here). Working with a tool like Looker, for example, we have an offering called Looker Blocks, which is essentially pre-templated code for your modeling layer for all sorts of third-party applications and types of analysis…. These Blocks can be copied into your data model, so now even most of the actual data model development is initially taken care of for you. The result is going from having siloed data in several disparate applications with unequal access for users…. to having data centralized in a modern database, with a full analytics suite on top, that can be accessed by any user. What would have taken… quite literally…. months of intensive engineering efforts is now accomplished in 1, 2, or 3 weeks… which is pretty astounding. That time-to-value from your data is something we've never really seen before in the data space.