xPatterns on Spark, Shark,
Tachyon and Mesos
Seattle Spark Meetup
May 2014
Agenda
• xPatterns Architecture
• xPatterns Infrastructure Evolution
• Ingestion API & GUI (Demo)
• Transformation API & GUI (Demo)
• Jaws HTTP SharkServer API & GUI (Demo)
• Export to NoSql API & GUI (Demo)
• xPatterns dashboard application (Demo)
• xPatterns monitoring and instrumentation (Demo)
• ELT pipeline rebuilt on BDAS: from 0.8.0 to 0.9.1
• Lessons Learned, Tips & Tricks
[Slide: xPatterns logical architecture diagram — Infrastructure, Analytics and Visualization layers]
[Slide: physical architecture diagram for the largest customer deployment]
xPatterns Infrastructure Evolution
• Hadoop -> Spark: a faster distributed computing engine leveraging in-memory computation at a much lower
operational cost, with machine learning primitives, a simpler programming model (Scala, Python, Java), faster job
submission and a shell for quick prototyping and testing — ideal for our iterative algorithms
• Hive -> Shark: interactive queries on large datasets have become reasonable requests (in-memory caching
yields 4-20x performance improvements); migrating the ELT script base required minimal effort (same familiar
HiveQL, with a few exceptions)
• No resource manager -> Mesos: multiple workloads from multiple frameworks can co-exist and fairly
consume the cluster resources (policy based). More mature than YARN; lets us separate production
from experimentation workloads, co-locate legacy Hadoop MR jobs, multiple Shark servers (Jaws), multiple
Spark Job servers and mixed Hive/Shark queries (ELT), and establish priority queues: no more
unmanageable contention and delayed execution, while maximizing cluster utilization (dynamic scheduling)
• No cache -> Tachyon: an in-memory distributed file system with HDFS backing and resilience through lineage
rather than replication — our out-of-process cache that survives Spark JVM restarts, and allows fine-tuning
performance and experimenting against cached warehouse tables without reloading. Faster than the in-process
cache, which suffers from GC overhead. Provides data sharing between multiple Spark/Shark jobs and efficient
in-memory columnar storage with compression support for a minimal footprint (see the sketch below)
• Cloudera Manager Dashboards -> Ganglia: a distributed monitoring system providing dashboards with historical
metrics (CPU, RAM, disk I/O, network I/O) and Spark/Hadoop metrics. This is a nice addition to our
Nagios (monitoring and alerts) and Graphite (instrumentation dashboards)
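As an illustration of the Tachyon bullet above, a minimal sketch of caching a warehouse table in Spark memory vs. Tachyon under Shark — assuming a SharkContext and its SQL entry point; the table names are hypothetical:

```scala
import shark.SharkContext

// Minimal sketch, assuming a SharkContext (sc); table names are hypothetical.
def cacheWarehouseTables(sc: SharkContext): Unit = {
  // In-process Spark cache: Shark treats tables named *_cached as in-memory tables
  sc.sql("CREATE TABLE claims_cached AS SELECT * FROM claims")

  // Out-of-process cache: materialize in Tachyon so the data survives Spark JVM
  // restarts and can be shared by multiple Shark servers without reloading
  sc.sql(
    """CREATE TABLE claims_tachyon
      |TBLPROPERTIES ("shark.cache" = "tachyon")
      |AS SELECT * FROM claims""".stripMargin)
}
```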
Distributed Data Ingestion API & GUI
• Highly available, scalable and resilient distributed download tool exposed through a RESTful API
& GUI
• Supports encryption/decryption, compression/decompression, automatic backup & restore
(AWS S3) and geo-failover (HDFS and S3 in both us-east and us-west EC2 regions)
• Supports multiple input sources: SFTP, S3 and 450+ sources through Talend integration
• Configurable throughput (number of parallel Spark processors, in both fine-grained and
coarse-grained Mesos modes) — see the sketch after this list
• File transfer log and file transition state history for auditing purposes (pluggable persistence
model, Cassandra/HDFS), configurable alerts, reports
• Ingest + backup: download + decompression + HDFS persistence + encryption + S3 upload
• Restore: S3 download + decryption + decompression + HDFS persistence
• Geo-failover: backup on S3 us-east + restore from S3 us-east into west-coast HDFS + backup on
S3 us-west
• Ingestion jobs can be resumed from any stage after failure (once the number of Spark task retries is exhausted)
• HTTP streaming API exposed for high-throughput push-model ingestion (ingestion into Kafka
pub-sub, with a batch Spark job transferring into HDFS)
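A hedged sketch of the ingest + backup flow above as a Spark job — every name here (FileDescriptor, IngestStages, the stage helpers) is hypothetical, not the actual xPatterns API; it only illustrates the partition-based throughput control and stage-level resume described in the bullets:

```scala
// Hypothetical sketch (not the actual xPatterns API) of the resumable
// ingest + backup flow: download + decompress + HDFS persist + encrypt + S3 upload.
import org.apache.spark.SparkContext

case class FileDescriptor(source: String, hdfsPath: String, s3Key: String)

// Stubs standing in for the real transfer helpers and for the
// file-transition-state history (Cassandra/HDFS) used for resuming.
object IngestStages {
  val all = Seq("download", "decompress", "hdfs-persist", "encrypt", "s3-upload")
  def done(f: FileDescriptor, stage: String): Boolean = false
  def markDone(f: FileDescriptor, stage: String): Unit = ()
  def run(f: FileDescriptor, stage: String): Unit = println(s"$stage ${f.source}")
}

def ingestAndBackup(sc: SparkContext, files: Seq[FileDescriptor], parallelism: Int): Unit =
  // Throughput knob: number of partitions == number of parallel Spark processors.
  sc.parallelize(files, parallelism).foreach { f =>
    for (stage <- IngestStages.all if !IngestStages.done(f, stage)) {
      IngestStages.run(f, stage)       // perform the stage
      IngestStages.markDone(f, stage)  // record state so a failed job can resume here
    }
  }
```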
T-Component API & GUI
• Data transformation component for building a data pipeline with monitoring and quality gates
• Exposes all of Oozie's action types and adds Spark (Java & Scala) and Shark (QL) stages
• Uses a patched Ooyala SparkJobServer (multiple-contexts-in-the-same-JVM bug fixed by us!)
• A Spark stage is required to run code that accepts an xPatterns-managed Spark context (coarse-grained or
fine-grained) as a parameter — see the sketch after this list
• DAG and job execution info persisted in the Hive Metastore
• Exposes a full API for job, stage and resource management and scheduled pipeline execution
• Demo: submit a Spark driver program as a Spark stage for transforming ingested HDFS files; submit a Shark
stage for creating a Shark table and further transforming the datasets through HiveQL statements
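For illustration, a Spark stage in the shape the open-source Ooyala SparkJobServer expects (trait and package names as in that project; treat them as approximate) — the stage receives the managed SparkContext instead of creating its own, and the transformation itself is a made-up example:

```scala
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import spark.jobserver.{SparkJob, SparkJobValid, SparkJobValidation}

object CleanseRecordsJob extends SparkJob {
  // The context is created and managed by xPatterns/SparkJobServer, not by the stage.
  override def validate(sc: SparkContext, config: Config): SparkJobValidation = SparkJobValid

  override def runJob(sc: SparkContext, config: Config): Any = {
    val input  = config.getString("input.path")   // hypothetical stage parameters
    val output = config.getString("output.path")
    sc.textFile(input)
      .filter(_.nonEmpty)        // example transformation on ingested HDFS files
      .saveAsTextFile(output)
  }
}
```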
• T-Component DAG executed by Oozie
• Spark and Shark stages executed through SSH actions
• Spark stages sent to the SparkJobServer
• Shark stages executed through the Shark CLI for now (SharkServer2 in the future)
• Support for a pySpark stage coming soon
Jaws REST SharkServer & GUI
• Jaws: a highly scalable and resilient RESTful (HTTP) interface on top of a managed Shark session
that can concurrently and asynchronously submit Shark queries and return persisted results
(automatically limited in size, or paged), execution logs and job information (persisted in
Cassandra or HDFS)
• Jaws can be load balanced for higher availability and scalability, and it fuels a web-based GUI
that is integrated in the xPatterns Management Console (Warehouse Explorer)
• Jaws exposes configuration options for fine-tuning Spark & Shark performance and for running
against a stand-alone Spark deployment, with or without Tachyon as the in-memory distributed
file system on top of HDFS, and with or without Mesos as the resource manager
• Provides deployment recipes for all combinations of Spark, Mesos and Tachyon
• The Shark editor gives analysts and data scientists a view into the warehouse through a
metadata explorer, and provides a query editor with intelligent features like auto-complete, a
results viewer, a logs viewer, and a query history for asynchronously retrieving persisted
results, logs and query information for both running and historical queries
• In progress: web-style pagination and query cancellation; the HTTP layer is spray-io (REST on Akka) — see the sketch below
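Since the HTTP layer is spray-io, here is a hedged sketch (not the actual Jaws code) of what the asynchronous submit/results routes could look like in spray-routing; the paths, parameters and stubbed query manager are invented:

```scala
import akka.actor.Actor
import spray.routing.HttpService

// Hypothetical Jaws-style routes on spray-routing (REST on Akka).
trait JawsService extends HttpService {
  def submitQuery(hql: String, limited: Boolean): String = "query-id-123" // stub
  def fetchResults(id: String, offset: Int, limit: Int): String = "[]"    // stub

  val jawsRoute =
    path("run") {
      post {
        // Submit returns immediately with a query id; execution is asynchronous.
        parameters('hql, 'limited.as[Boolean] ? true) { (hql, limited) =>
          complete(submitQuery(hql, limited))
        }
      }
    } ~
    path("results") {
      get {
        // Persisted results are retrieved later, paged via offset/limit.
        parameters('queryId, 'offset.as[Int] ? 0, 'limit.as[Int] ? 100) { (id, off, lim) =>
          complete(fetchResults(id, off, lim))
        }
      }
    }
}

class JawsActor extends Actor with JawsService {
  def actorRefFactory = context
  def receive = runRoute(jawsRoute)
}
```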
Export to NoSql API
• Datasets in the warehouse need to be exposed to high-throughput, low-latency real-time
APIs. Each application requires extra processing on top of the core datasets, so
additional transformations are executed to build data marts inside the warehouse
• The Exporter tool builds an efficient data model and runs an export of data from a Shark/Hive
table to a Cassandra column family, through a custom Spark job with configurable
throughput (a configurable number of Spark processors against a Cassandra ring); an embedded
instrumentation dashboard shows logs, progress and instrumentation events pushed through SSE
— see the sketch after this list
• Data modeling is driven by the read access patterns provided by the application engineer
building dashboards and visualizations: lookup key, columns (record fields to read), paging,
sorting, filtering
• The end result of a job run is a REST API endpoint (instrumented, monitored, resilient, geo-
replicated) that uses the underlying generated Cassandra data model and fuels the data in
the dashboards
• A configuration API is provided for creating export jobs and executing them (ad-hoc or
scheduled)
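A hedged sketch of the export job's write path — rows from a Shark/Hive table written to a Cassandra column family from a Spark job, with the partition count acting as the throughput knob; CassandraWriter is a stand-in for whatever Cassandra client is actually used, not a real API:

```scala
import org.apache.spark.rdd.RDD

// Stand-in for a real Cassandra client (Hector/Astyanax/etc.); hypothetical API.
class CassandraWriter(hosts: String, keyspace: String) {
  def writeRow(columnFamily: String, key: String, columns: Map[String, String]): Unit = ()
  def close(): Unit = ()
}

// Exports (key, columns) rows into a column family with bounded parallelism.
def exportToCassandra(rows: RDD[(String, Map[String, String])],
                      columnFamily: String,
                      parallelism: Int): Unit =
  rows.coalesce(parallelism).foreachPartition { part =>
    // One client per partition; the partition count throttles load on the ring.
    val writer = new CassandraWriter("cassandra-seed:9160", "xpatterns")
    try part.foreach { case (key, cols) => writer.writeRow(columnFamily, key, cols) }
    finally writer.close()
  }
```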
Mesos/Spark cluster
Cassandra multi DC ring – write latency
Nagios monitoring
Ganglia monitoring dashboard
Referral Provider Network
• One of the many applications we built for our largest healthcare customers using
the xPatterns APIs and tools on the newly upgraded infrastructure: ELT pipeline, Jaws,
Export to NoSql API. The dashboard for the RPN application was built with D3.js and
Angular against the generic API published by the export tool.
• The application allows building a graph of downstream and upstream referred and
referring providers, grouped by specialty, with computed aggregates like patient counts,
claim counts and total charged amounts (see the sketch after this list). RPN is used both for
fraud detection and for aiding a clinic-buying decision, by following the busiest graph paths.
• The dataset behind the app consists of 8 billion medical records, from which we
extracted 1.7 million providers (Shark warehouse) and built 53 million relationships in
the graph (persisted in Cassandra)
• While we demo the graph building, we will also look at the Graphite instrumentation
dashboard to analyze the runtime performance of the geo-replicated Cassandra read
operations (latency in the 20-50ms range)
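For illustration, the kind of aggregation that could produce the graph edges described above — referring/referred provider pairs with patient counts, claim counts and total charges; the schema names and the SharkContext usage are assumptions, not the actual RPN code:

```scala
import shark.SharkContext

// Hypothetical HiveQL for building RPN graph edges; schema names are invented.
def buildReferralEdges(sc: SharkContext): Unit =
  sc.sql("""
    CREATE TABLE referral_edges AS
    SELECT referring_npi, referred_npi, referred_specialty,
           COUNT(DISTINCT patient_id) AS patient_count,
           COUNT(1)                   AS claim_count,
           SUM(charged_amount)        AS total_charged
    FROM medical_claims
    GROUP BY referring_npi, referred_npi, referred_specialty
  """)
```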
Graphite – Cassandra multi DC ring
ELT processing and data quality pipeline
• 20 billion healthcare records, 200 TB of compressed HDFS data
• The processing pipeline — a mixture of custom MR and mostly Hive scripts — was converted to Spark and
Shark, with performance gains from 3-4x (for disk-intensive operations) to 20-40x for queries on
cached tables (Spark cache, or Tachyon, which is slightly faster with added resilience benefits)
• Daily processing reduced from 14 hours to 1.5 hours!
• Shark 0.8.1 does not support map-join auto-conversion, automatic calculation of the number of
reducers, disk spills in the map-output or reduce phases, skew joins, etc. We either manually
fine-tune the cluster and the query for the specific dataset, or we are better off with
Hive under those circumstances — so we use Mesos to manage Hadoop and Spark in the
same cluster, mixing Hive and Shark workloads (demo)
• 0.9.0 fixes many of the problems, but still requires patches! (shuffle spill & Mesos fine-grained mode)
• Tested against multiple cluster configurations of the same cost, using 3 instance types:
m1.xlarge (4 cores x 15 GB), m2.4xlarge (8 cores x 68.4 GB) and cc2.8xlarge (32 cores x 60.8 GB)
• Jaws config settings explained (see the sketch after this list): set mapreduce.job.reduces=…, set
shark.column.compress=true, spark.default.parallelism=384,
spark.storage.memoryFraction=0.3, spark.shuffle.memoryFraction=0.6,
spark.shuffle.consolidateFiles=true, spark.shuffle.spill=false|true, spark.mesos.coarse=false,
spark.scheduler.mode=FAIR
• Gotchas: multiple SparkContexts in the same JVM, Mesos framework starvation bug
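The same knobs, shown as they might be applied programmatically through Spark 0.9's SparkConf; the values are the ones from the bullet above, while mapreduce.job.reduces and shark.column.compress are session-level SET statements rather than Spark properties:

```scala
import org.apache.spark.SparkConf

// The Jaws/ELT tuning knobs listed above, as a Spark 0.9 SparkConf.
val conf = new SparkConf()
  .set("spark.default.parallelism",      "384")
  .set("spark.storage.memoryFraction",   "0.3")   // less RDD cache, more heap for shuffles
  .set("spark.shuffle.memoryFraction",   "0.6")
  .set("spark.shuffle.consolidateFiles", "true")
  .set("spark.shuffle.spill",            "true")  // false trades resilience for speed
  .set("spark.mesos.coarse",             "false") // fine-grained Mesos mode
  .set("spark.scheduler.mode",           "FAIR")

// Shark-session settings (issued through the Shark CLI / Jaws, not SparkConf):
//   SET mapreduce.job.reduces=...;
//   SET shark.column.compress=true;
```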
Coming soon …
• Export to Semantic Search API (SolrCloud/Lucene)
• pySpark Job Server
• pySpark ↔ Shark/Tachyon interop (either this …)
• pySpark ↔ Spark SQL (1.0) interop (… or this)
• Parquet columnar storage for warehouse data
Q & A
Oh btw … we’re hiring!
© 2013 Atigeo, LLC. All rights reserved. Atigeo and the xPatterns logo are trademarks of Atigeo. The information herein is for informational purposes only and represents the current view of Atigeo as of the date of this
presentation. Because Atigeo must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Atigeo, and Atigeo cannot guarantee the accuracy of any information provided
after the date of this presentation. ATIGEO MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
Editor's Notes
  • #4: The logical architecture diagram, with the three logical layers of xPatterns — Infrastructure, Analytics and Visualization — and the roles: ELT Engineer, Data Scientist, Application Engineer. xPatterns is a big data analytics platform-as-a-service that enables rapid development of enterprise-grade analytical applications. It provides tools, API sets and a management console for building an ELT pipeline with data monitoring and quality gates; a data warehouse for ad-hoc and scheduled querying, analysis, model building and experimentation; tools for exporting data to a NoSql and solrCloud cluster for real-time access through low-latency/high-throughput APIs; and dashboard and visualization APIs/tools leveraging the available data and models. In this presentation we showcase one of the analytical applications built on top of xPatterns for our largest customer, which runs xPatterns in production on a data warehouse of several hundred TB of medical, pharmacy and lab data — tens of billions of records. We will showcase the xPatterns components in the form of APIs and tools employed throughout the entire lifecycle of this application.
  • #5: The physical architecture diagram for our largest customer deployment, demonstrating the enterprise-grade attributes of the platform — scalability, high availability, performance, resilience, manageability — while providing means for geo-failover (warehouse), geo-replication (real-time DB), data and system monitoring, instrumentation, and backup & restore. Cassandra rings are DC-replicated across the EC2 east and west coast regions, with data between geo-replicas synchronized in real time through an IPsec tunnel (VPC-to-VPC). Geo-replicated APIs behind an AWS Route 53 DNS service (latency-based resource record sets) and ELBs ensure user requests are served from the closest geographical location. Failure of an entire region (it happened to us during a big conference!) does not affect our availability and SLAs. User-facing dashboards are served from Cassandra (the real-time store), with data exported from the data warehouse (Shark/Hive) built on top of a Mesos-managed Spark/Hadoop cluster. Export jobs are instrumented and provide a throttling mechanism to control throughput; they run on the east coast only, and data is synchronized in real time with the west coast ring. Generated APIs are automatically instrumented (Graphite) and monitored (Nagios).
  • #12: Jaws: a highly scalable and resilient RESTful (HTTP) interface on top of a managed Shark session that can concurrently and asynchronously submit Shark queries and return persisted results (automatically limited in size), execution logs and job information (persisted in Cassandra or HDFS). Jaws can be load balanced for higher availability and scalability, and it fuels a web-based GUI called the Shark Editor that is integrated in the xPatterns Management Console. Jaws exposes configuration options for fine-tuning Spark & Shark performance and for running against a stand-alone Spark deployment, with or without Tachyon as the in-memory distributed file system on top of HDFS, and with or without Mesos as the resource manager. It provides deployment recipes for all combinations of Spark, Mesos and Tachyon. The Shark Editor gives analysts and data scientists a view into the warehouse through a metadata explorer, and provides a query editor with intelligent features like auto-complete, a results viewer, a logs viewer and historical queries for asynchronously retrieving persisted results, logs and query information for both running and historical queries (DEMO)
  • #14: Datasets in the warehouse need to be exposed to high-throughput, low-latency real-time APIs; each application requires extra processing on top of the core datasets, so additional transformations are executed to build data marts inside the warehouse. Pre-optimization Shark/Hive queries are required for building an efficient data model for Cassandra persistence: a minimal number of column families and wide rows (50-100 MB compressed). The resulting data model is efficient for both read (dashboard/API) and write (export/updates) requests. The Exporter tool builds the data model and runs an export of data from a Shark/Hive table to a Cassandra column family, through a custom Spark job with configurable throughput (a configurable number of Spark processors against a Cassandra ring). Data modeling is driven by the read access patterns: lookup key, columns (record fields to read), paging, sorting, filtering. The data access patterns are used to automatically publish a REST API that uses the underlying generated Cassandra data model and fuels the data in the dashboards. Execution logs behind workflows, progress reports and instrumentation events for the dashboard are pushed to the browser through SSE (Zookeeper watchers used for synchronization)
  • #15: (same notes as #14)
  • #16: Mesos/Spark context (coarse-grained mode) with a fixed 120 cores spread across 4 nodes for the export job
  • #17: Instrumentation dashboard showcasing the write latency measured during the export-to-NoSql job (7 ms max). Writes are performed against the east coast DC and propagated to the west coast; however, the exposed JMX metric (Write.Latency.OneMinuteRate) does not reflect that — we need to build a new dashboard with different metrics!
  • #18: Nagios monitoring for the geo-replicated, instrumented generated APIs. The APIs (readers) and the Spark executors (writers) have a retry mechanism (AOP aspects) that implements throttling when Cassandra is under siege …
  • #19: Ganglia monitoring Dashboard
  • #20: Referral Provider Network: one of the six applications we built for our healthcare customer using the xPatterns APIs and tools on the new beyond-Hadoop infrastructure: ELT pipeline, Export to NoSQL API. The dashboard for the RPN application was built using D3.js and Angular against the generic API published by the export tool. The application allows building a graph of downstream and upstream referred and referring providers, grouped by specialty and with computed aggregates like patient counts, claim counts and total charged amounts. RPN is used both for fraud detection and for aiding a clinic-buying decision, by following the busiest graph paths. The dataset behind the app consists of 8 billion medical records, from which we extracted 1.7 million providers (Shark warehouse) and built 53 million relationships in the graph (persisted in Cassandra). While we demo the graph building we will also look at the Graphite instrumentation dashboard for analyzing the runtime performance of the geo-replicated Cassandra read operations during the demo
  • #21: (same notes as #20)
  • #22: Instrumentation dashboard showcasing the read latency measured during peak (40 ms average, 60 ms peak)
  • #23: We converted the data ingestion tool to Spark for the faster job submission and so we can completely replace Hadoop and Hive in the future. We are converting all of our MR jobs to Spark jobs, for both ingestion and exports to Cassandra. Different datasets require different cluster configurations; the biggest problem with Spark 0.8.1 is that intermediate output and reducer input are not spilled to disk, frequently causing OOMs (0.9.0 solves the reducer part).
  • #27: Cassandra in xPatterns: the real-time database for user-facing APIs and dashboard applications, the system of record for real-time analytics use cases (Kafka/Storm/Cassandra), a distributed in-memory cache for configuration data, and the persistence store for user feedback in semantic search and dynamic ontology use cases (solrCloud/Cassandra/Zookeeper).