HPC-ABDS High Performance
Computing Enhanced
Apache Big Data Stack
(with a bias to Streaming)
2nd International Workshop on
Scalable Computing For Real-Time Big Data Applications (SCRAMBL'15)
in conjunction with CCGrid'15
May 4, 2015
Geoffrey Fox, Judy Qiu, Shantenu Jha,
Supun Kamburugamuve, Andre Luckow
gcf@indiana.edu http://www.infomall.org
School of Informatics and Computing
Digital Science Center
Indiana University Bloomington
HPC-ABDS
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies
Cross-Cutting Functions
1) Message and Data Protocols: Avro, Thrift, Protobuf
2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
4) Monitoring: Ambari, Ganglia, Nagios, Inca
17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad,
Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA),
Jitterbit, Talend, Pentaho, Apatar
16) Application and Analytics: Mahout, MLlib, MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, Azure Machine
Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j, H2O, IBM
Watson, Oracle PGX, GraphLab, GraphX, MapGraph, IBM System G, GraphBuilder(Intel), TinkerPop, Google Fusion Tables, CINET, NWB,
Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree
15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure,
Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi,
HUBzero, OODT, Agave, Atmosphere
15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq,
Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
14B) Streams: Storm, S4, Samza, Granules, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream
Analytics
14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, Stratosphere (Apache Flink), Reef, Hama, Giraph,
Pregel, Pegasus, Ligra, GraphChi
13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ,
NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event
Hubs
12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan
12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
12) Extraction Tools: UIMA, Tika
11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal
Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB
11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB,
Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J,
Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame
Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm,
Torque, Globus Tools, Pilot Jobs
8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS
Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
6) DevOps: Docker, Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Rocks, Cisco
Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive
5) IaaS Management from HPC to hypervisors: Xen, KVM, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula,
Eucalyptus, Nimbus, CloudStack, CoreOS, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds,
Networking: Google Cloud DNS, Amazon Route 53
21 layers, over 300 software packages (as of May 2, 2015)
There are a lot of Big Data and HPC Software systems in 17 (21) layers
Build on – do not compete with the over 300 HPC-ABDS systems
NIST Big Data Initiative
Led by Chaitan Baru, Bob Marcus,
Wo Chang
NBD-PWG (NIST Big Data Public Working
Group) Subgroups & Co-Chairs
• There were 5 Subgroups
– Note: membership mainly from industry
• Requirements and Use Cases Sub Group
– Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco
• Definitions and Taxonomies SG
– Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD
• Reference Architecture Sub Group
– Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented
Intelligence
• Security and Privacy Sub Group
– Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE
• Technology Roadmap Sub Group
– Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data
Tactics
• See http://bigdatawg.nist.gov/usecases.php
• And http://bigdatawg.nist.gov/V1_output_docs.php
Comment period ends May 21
http://bigdatawg.nist.gov/V1_output_docs.php
Use Case Template
• 26 fields completed for 51 areas
• Government Operation: 4
• Commercial: 8
• Defense: 3
• Healthcare and Life Sciences: 10
• Deep Learning and Social Media: 6
• The Ecosystem for Research: 4
• Astronomy and Physics: 5
• Earth, Environmental and Polar Science: 10
• Energy: 1
51 Detailed Use Cases: Contributed July-September 2013
Covers goals, data features such as 3 V’s, software, hardware
• http://bigdatawg.nist.gov/usecases.php
• https://bigdatacoursespring2014.appspot.com/course (Section 5)
• Government Operation(4): National Archives and Records Administration, Census Bureau
• Commercial(8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search,
Digital Materials, Cargo shipping (as in UPS)
• Defense(3): Sensors, Image surveillance, Situation Assessment
• Healthcare and Life Sciences(10): Medical records, Graph and Probabilistic analysis,
Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
• Deep Learning and Social Media(6): Driving Car, Geolocate images/cameras, Twitter, Crowd
Sourcing, Network Science, NIST benchmark datasets
• The Ecosystem for Research(4): Metadata, Collaboration, Language Translation, Light source
experiments
• Astronomy and Physics(5): Sky Surveys including comparison to simulation, Large Hadron
Collider at CERN, Belle Accelerator II in Japan
• Earth, Environmental and Polar Science(10): Radar Scattering in Atmosphere, Earthquake,
Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate
simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry
(microbes to watersheds), AmeriFlux and FLUXNET gas sensors
• Energy(1): Smart grid
26 Features for each use case
Biased to science
Table 4: Characteristics of 6 Distributed Applications
(columns: Application Example | Execution Unit | Communication | Coordination | Execution Environment)
• Montage | Multiple sequential and parallel executables | Files | Dataflow (DAG) | Dynamic process creation, execution
• NEKTAR | Multiple concurrent parallel executables | Stream based | Dataflow | Co-scheduling, data streaming, async. I/O
• Replica-Exchange | Multiple seq. and parallel executables | Pub/sub | Dataflow and events | Decoupled coordination and messaging
• Climate Prediction (generation) | Multiple seq. & parallel executables | Files and messages | Master-Worker, events | @Home (BOINC)
• Climate Prediction (analysis) | Multiple seq. & parallel executables | Files and messages | Dataflow | Dynamic process creation, workflow execution
• SCOOP | Multiple executables | Files and messages | Dataflow | Preemptive scheduling, reservations
• Coupled Fusion | Multiple executables | Stream-based | Dataflow | Co-scheduling, data streaming, async I/O
Part of Property Summary Table
Features and Examples with a streaming tinge ……
51 Use Cases: What is Parallelism Over?
• People: either the users (but see below) or subjects of application and often both
• Decision makers like researchers or doctors (users of application)
• Items such as Images, EMR, Sequences below; observations or contents of online
store
– Images or “Electronic Information nuggets”
– EMR: Electronic Medical Records (often similar to people parallelism)
– Protein or Gene Sequences;
– Material properties, Manufactured Object specifications, etc., in custom dataset
– Modelled entities like vehicles and people
• Sensors – Internet of Things
• Events such as detected anomalies in telescope or credit card data or atmosphere
• (Complex) Nodes in RDF Graph
• Simple nodes as in a learning network
• Tweets, Blogs, Documents, Web Pages, etc.
– And characters/words in them
• Files or data to be backed up, moved or assigned metadata
• Particles/cells/mesh points as in parallel simulations
Features of 51 Use Cases I
• PP (26) “All” Pleasingly Parallel or Map Only
• MR (18) Classic MapReduce MR (add MRStat below for full count)
• MRStat (7) Simple version of MR where key computations are
simple reduction as found in statistical averages such as histograms
and averages
• MRIter (23) Iterative MapReduce or MPI (Spark, Twister)
• Graph (9) Complex graph data structure needed in analysis
• Fusion (11) Integrate diverse data to aid discovery/decision making;
could involve sophisticated algorithms or could just be a portal
• Streaming (41) Some data comes in incrementally and is processed
this way
• Classify (30) Classification: divide data into categories
• S/Q (12) Index, Search and Query
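The MRStat pattern above (map tasks followed by a simple statistical reduction such as a histogram or average) can be sketched in a few lines. This is an illustrative Python sketch of the pattern, not code from any system named in the slides; the function names are hypothetical.

```python
from collections import Counter

def mrstat_histogram(records, bin_width):
    """MRStat sketch: map each record to a bin key, then reduce with a
    simple statistical merge (here, summing per-bin counts)."""
    # Map phase: each record independently emits (bin, 1)
    mapped = ((int(value // bin_width), 1) for value in records)
    # Reduce phase: a simple reduction (histogram counts)
    histogram = Counter()
    for bin_key, count in mapped:
        histogram[bin_key] += count
    return dict(histogram)

def mrstat_mean(records):
    """Averages fit the same pattern: map emits (value, 1), reduce sums
    both slots and divides at the end."""
    total, count = 0.0, 0
    for value in records:
        total += value
        count += 1
    return total / count
```

The key point of MRStat is that the reduce step is trivially associative, so it parallelizes without iteration.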
Features of 51 Use Cases II
• CF (4) Collaborative Filtering for recommender engines
• LML (36) Local Machine Learning (Independent for each parallel
entity) – application could have GML as well
• GML (23) Global Machine Learning: Deep Learning, Clustering, LDA,
PLSI, MDS,
– Large Scale Optimizations as in Variational Bayes, MCMC, Lifted Belief
Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can
call this EGO or Exascale Global Optimization with scalable parallel algorithms
• Workflow (51) Universal
• GIS (16) Geotagged data and often displayed in ESRI, Microsoft
Virtual Earth, Google Earth, GeoServer etc.
• HPC (5) Classic large-scale simulation of cosmos, materials, etc.
generating (visualization) data
• Agent (2) Simulations of models of data-defined macroscopic
entities represented as agents
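The Local vs. Global Machine Learning distinction above can be made concrete with a minimal sketch: in GML, every iteration touches all the data via a global reduction, whereas in LML each parallel entity learns independently. Below is plain gradient descent on 1-D least squares; the function name, data, and hyperparameters are all illustrative, not from the talk.

```python
def gml_gradient_descent(xs, ys, steps=200, lr=0.05):
    """Global ML sketch: each iteration computes per-point gradients
    (the 'map') and sums them globally (the 'collective'), so all
    parallel workers must synchronize every step."""
    w = 0.0
    for _ in range(steps):
        # per-point gradient of (w*x - y)^2, then a global reduction
        grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad / len(xs)
    return w
```

For data generated by y = 2x this converges to w ≈ 2; the global sum each step is exactly what makes GML harder to scale than LML.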
Internet of Things and Streaming Apps
• It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco)
billion devices on the Internet by 2020.
• The cloud is the natural controller of, and resource provider for, the Internet of
Things.
• Smart phones/watches, Wearable devices (Smart People), “Intelligent
River” “Smart Homes and Grid” and “Ubiquitous Cities”, Robotics.
• Majority of use cases (41/51) are streaming – experimental science gathers
data in a stream – sometimes batched as in a field trip. Below is a sample:
• 10: Cargo Shipping Tracking as in UPS, Fedex PP GIS LML
• 13: Large Scale Geospatial Analysis and Visualization PP GIS LML
• 28: Truthy: Information diffusion research from Twitter Data PP MR for
Search, GML for community determination
• 39: Particle Physics: Analysis of LHC Large Hadron Collider Data: Discovery
of Higgs particle PP Local Processing Global statistics
• 50: DOE-BER AmeriFlux and FLUXNET Networks PP GIS LML
• 51: Consumption forecasting in Smart Grids PP GIS LML
Growth of Internet of Things December 2014
• Currently phones etc. but will change to “real things”
Big Data Patterns –
the Big Data present
and parallel computing past
7 Computational Giants of
NRC Massive Data Analysis Report
1) G1: Basic Statistics e.g. MRStat
2) G2: Generalized N-Body Problems
3) G3: Graph-Theoretic Computations
4) G4: Linear Algebraic Computations
5) G5: Optimizations e.g. Linear Programming
6) G6: Integration e.g. LDA and other GML
7) G7: Alignment Problems e.g. BLAST
http://www.nap.edu/catalog.php?record_id=18374
HPC Benchmark Classics
• Linpack or HPL: Parallel LU factorization
for solution of linear equations
• NPB version 1: Mainly classic HPC solver kernels
– MG: Multigrid
– CG: Conjugate Gradient
– FT: Fast Fourier Transform
– IS: Integer sort
– EP: Embarrassingly Parallel
– BT: Block Tridiagonal
– SP: Scalar Pentadiagonal
– LU: Lower-Upper symmetric Gauss Seidel
13 Berkeley Dwarfs
1) Dense Linear Algebra
2) Sparse Linear Algebra
3) Spectral Methods
4) N-Body Methods
5) Structured Grids
6) Unstructured Grids
7) MapReduce
8) Combinational Logic
9) Graph Traversal
10) Dynamic Programming
11) Backtrack and
Branch-and-Bound
12) Graphical Models
13) Finite State Machines
First 6 of these correspond to
Colella’s original.
Monte Carlo dropped.
N-body methods are a subset of
Particle in Colella.
Note the list is a little inconsistent in
that MapReduce is a programming
model while spectral method is a
numerical method.
Need multiple facets!
What’s after Giants and Dwarfs?
Ogres ……..
Facets of the Ogres
Introducing Big Data Ogres and their Facets I
• Big Data Ogres provide a systematic approach to understanding
applications, and as such they have facets which represent key
characteristics defined both from our experience and from a
bottom-up study of features from several individual applications.
• The facets capture common characteristics (shared by several
problems) which are inevitably multi-dimensional and often
overlapping.
• Ogres characteristics are cataloged in four distinct dimensions or
views.
• Each view consists of facets; when multiple facets are linked
together, they describe classes of big data problems represented
as an Ogre.
• Instances of Ogres are particular big data problems
• A set of Ogre instances that cover a rich set of facets could form a
benchmark set
• Ogres and their instances can be atomic or composite
Introducing Big Data Ogres and their Facets II
• Ogres characteristics are cataloged in four distinct dimensions or
views.
• Each view consists of facets; when multiple facets are linked
together, they describe classes of big data problems represented
as an Ogre.
• One view of an Ogre is the overall problem architecture which is
naturally related to the machine architecture needed to support
data-intensive applications while still being different.
• Then there is the execution (computational) features view,
describing issues such as I/O versus compute rates, iterative
nature of computation and the classic V’s of Big Data: defining
problem size, rate of change, etc.
• The data source & style view includes facets specifying how the
data is collected, stored and accessed.
• The final processing view has facets which describe classes of
processing steps including algorithms and kernels. Ogres are
specified by the particular value of a set of facets linked from the
different views.
4 Ogre Views and 50 Facets

Problem Architecture View: Pleasingly Parallel; Classic MapReduce; Map-Collective; Map Point-to-Point; Map Streaming; Shared Memory; Single Program Multiple Data; Bulk Synchronous Parallel; Fusion; Dataflow; Agents; Workflow

Data Source and Style View: Geospatial Information System; HPC Simulations; Internet of Things; Metadata/Provenance; Shared / Dedicated / Transient / Permanent; Archived/Batched/Streaming; HDFS/Lustre/GPFS; Files/Objects; Enterprise Data Model; SQL/NoSQL/NewSQL

Execution View: Performance Metrics; Flops per Byte; Memory I/O; Execution Environment and Core libraries; Volume; Velocity; Variety; Veracity; Communication Structure; Data Abstraction; Metric = M / Non-Metric = N; O(N²) = NN / O(N) = N; Regular = R / Irregular = I; Dynamic = D / Static = S; Iterative/Simple

Processing View: Linear Algebra Kernels; Graph Algorithms; Deep Learning; Classification; Recommender Engine; Search/Query/Index; Basic Statistics; Streaming; Alignment; Optimization Methodology; Global Analytics; Local Analytics; Micro-benchmarks; Visualization
Facets of the Ogres
Meta or Macro Aspects:
Problem Architecture
Problem Architecture View of Ogres (Meta or MacroPatterns)
i. Pleasingly Parallel – as in BLAST, Protein docking, some (bio-)imagery including Local
Analytics or Machine Learning – ML or filtering pleasingly parallel, as in bio-imagery,
radar images (pleasingly parallel but sophisticated local analytics)
ii. Classic MapReduce: Search, Index and Query and Classification algorithms like
collaborative filtering (G1 for MRStat in Features, G7)
iii. Map-Collective: Iterative maps + communication dominated by “collective” operations as
in reduction, broadcast, gather, scatter. Common datamining pattern
iv. Map-Point to Point: Iterative maps + communication dominated by many small point to
point messages as in graph algorithms
v. Map-Streaming: Describes streaming, steering and assimilation problems
vi. Shared Memory: Some problems are asynchronous and are easier to parallelize on shared
rather than distributed memory – see some graph algorithms
vii. SPMD: Single Program Multiple Data, common parallel programming feature
viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
ix. Fusion: Knowledge discovery often involves fusion of multiple methods.
x. Dataflow: Important application features often occurring in composite Ogres
xi. Use Agents: as in epidemiology (swarm approaches)
xii. Workflow: All applications often involve orchestration (workflow) of multiple components
Note problem and machine architectures are related
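The Map-Collective pattern (iii above) can be simulated sequentially: independent map tasks compute partial results, a collective reduction combines them, and the updated model is implicitly broadcast to all tasks for the next iteration. A minimal sketch under those assumptions (all names illustrative):

```python
def map_collective_iteration(partitions, model, n_iters):
    """Map-Collective sketch: each iteration runs independent map tasks
    over data partitions, then an allreduce-style collective combines
    their outputs, then the new model is 'broadcast' for the next pass."""
    for _ in range(n_iters):
        # Map: each partition computes a partial sum against the model
        partials = [sum(x - model for x in part) for part in partitions]
        # Collective: reduction across all map outputs
        total = sum(partials)
        count = sum(len(p) for p in partitions)
        # Broadcast: every task sees the updated model next iteration
        model += total / count
    return model
```

Here the collective computes the global mean, so one iteration suffices; in real datamining (e.g. clustering) the same map/reduce/broadcast cycle repeats many times, which is why collectives dominate communication.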
Hardware, Software, Applications
• In my old papers (especially book Parallel Computing
Works!), I discussed computing as multiple complex systems
mapped into each other
Problem  Numerical formulation  Software  Hardware
• Each of these 4 systems has an architecture that can be
described in similar language
• One gets an easy programming model if architecture of
problem matches that of Software
• One gets good performance if architecture of hardware
matches that of software and problem
• So “MapReduce” can be used as architecture of software
(programming model) or “Numerical formulation of
problem”
8 Data Analysis Problem Architectures
 1) Pleasingly Parallel PP or “map-only” in MapReduce
 BLAST Analysis; Local Machine Learning
 2A) Classic MapReduce MR, Map followed by reduction
 High Energy Physics (HEP) Histograms; Web search; Recommender Engines
 2B) Simple version of classic MapReduce MRStat
 Final reduction is just simple statistics
 3) Iterative MapReduce MRIter
 Expectation Maximization, Clustering, Linear Algebra, PageRank
 4A) Map Point to Point Communication
 Classic MPI; PDE Solvers and Particle Dynamics; Graph processing Graph
 4B) GPU (Accelerator) enhanced 4A) – especially for deep learning
 5) Map + Streaming + Communication
 Images from Synchrotron sources; Telescopes; Internet of Things IoT
 6) Shared memory allowing parallel threads which are tricky to program
but lower latency
 Difficult to parallelize asynchronous parallel graph algorithms
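Architecture 3 (Iterative MapReduce MRIter) can be illustrated with clustering: map assigns each point to its nearest center, reduce recomputes the centers, and the loop iterates on the result. Below is a minimal 1-D k-means sketch of that cycle; it is illustrative only, not code from Hadoop, Spark, or Twister.

```python
def kmeans_mriter(points, centers, n_iters):
    """MRIter sketch: map = nearest-center assignment per point,
    reduce = new centers from per-cluster sums, then iterate."""
    for _ in range(n_iters):
        sums = [0.0] * len(centers)
        counts = [0] * len(centers)
        for p in points:  # map phase: assign point to nearest center
            k = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            sums[k] += p
            counts[k] += 1
        # reduce phase: recompute each center as its cluster mean
        centers = [sums[i] / counts[i] if counts[i] else centers[i]
                   for i in range(len(centers))]
    return centers
```

The defining feature of MRIter is that reduce output feeds the next map phase, so keeping state in memory across iterations (as Spark and Twister do) beats re-reading from disk each pass.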
6 Forms of MapReduce
(1) Map Only: input → map → output (Pleasingly Parallel, Local Analytics)
(2) Classic MapReduce: input → map → reduce (Basic Statistics)
(3) Iterative Map Reduce or Map-Collective: input → map → reduce, with iterations (Iterative MR)
(4) Point to Point or Map-Communication: maps exchange messages over a local graph (Graph)
(5) Map Streaming: maps process events delivered by brokers (Streaming)
(6) Shared Memory: map & communicate through shared memory (Shared Memory)
Facets in the Execution Features
Views
One View of Ogres has Facets that are
micropatterns or Execution Features
i. Performance Metrics; property found by benchmarking Ogre
ii. Flops per byte; memory or I/O
iii. Execution Environment; Core libraries needed: matrix-matrix/vector algebra, conjugate
gradient, reduction, broadcast; Cloud, HPC etc.
iv. Volume: property of an Ogre instance
v. Velocity: qualitative property of Ogre with value associated with instance
vi. Variety: important property especially of composite Ogres
vii. Veracity: important property of “mini-applications” but not kernels
viii. Communication Structure; Interconnect requirements; Is communication BSP,
Asynchronous, Pub-Sub, Collective, Point to Point?
ix. Is application (graph) static or dynamic?
x. Most applications consist of a set of interconnected entities; is this regular as a set of
pixels or is it a complicated irregular graph?
xi. Are algorithms Iterative or not?
xii. Data Abstraction: key-value, pixel, graph(G3), vector, bags of words or items
xiii. Are data points in metric or non-metric spaces?
xiv. Is algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)
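Facet xiv contrasts O(N²) and O(N) work per iteration. The two toy kernels below make the distinction concrete (illustrative functions, not from the talk): an all-pairs interaction as in G2 generalized N-body problems, versus a single linear pass as in simple statistics.

```python
def on2_pairwise_energy(points):
    """O(N^2) kernel (facet xiv): all-pairs interaction, as in G2
    generalized N-body problems; work grows quadratically in N."""
    return sum(abs(a - b) for i, a in enumerate(points)
               for b in points[i + 1:])

def on_linear_sum(points):
    """O(N) kernel: one pass over the data per iteration, as in
    simple statistics (G1)."""
    return sum(points)
```

For the execution view, the ratio of these two costs per iteration largely determines whether an Ogre instance is compute-bound or I/O-bound as N grows.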
Facets of the Ogres
Data Source and Style Aspects
Data Source and Style View of Ogres I
i. SQL NewSQL or NoSQL: NoSQL includes Document,
Column, Key-value, Graph, Triple store; NewSQL is SQL redone to
exploit NoSQL performance
ii. Other Enterprise data systems: 10 examples from NIST integrate
SQL/NoSQL
iii. Set of Files or Objects: as managed in iRODS and extremely
common in scientific research
iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage:
Separated from computing or colocated? HDFS vs. Lustre vs. OpenStack
Swift vs. GPFS
v. Archive/Batched/Streaming: Streaming is incremental update of
datasets with new algorithms to achieve real-time response (G7);
Before data gets to compute system, there is often an initial data
gathering phase which is characterized by a block size and timing.
Block size varies from month (Remote Sensing, Seismic) to day
(genomic) to seconds or lower (real-time control, streaming)
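The streaming facet above (v: incremental update of datasets with real-time response) can be illustrated with online summary statistics using Welford's algorithm, which updates mean and variance per record without re-scanning the dataset. Class and method names here are illustrative, not from any named system.

```python
class RunningStats:
    """Streaming sketch: update summary statistics incrementally as
    each record arrives, so responses stay real-time without ever
    re-reading the accumulated dataset."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n           # incremental mean update
        self.m2 += delta * (x - self.mean)    # Welford variance accumulator

    def variance(self):
        """Population variance of everything seen so far."""
        return self.m2 / self.n if self.n else 0.0
```

This is the algorithmic shift streaming demands: batch algorithms recompute from scratch, streaming algorithms maintain a small state that each new block or event updates.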
Data Source and Style View of Ogres II
vi. Shared/Dedicated/Transient/Permanent: qualitative property of
data; Other characteristics are needed for permanent
auxiliary/comparison datasets and these could be interdisciplinary,
implying nontrivial data movement/replication
vii. Metadata/Provenance: Clear qualitative property but not for
kernels as important aspect of data collection process
viii. Internet of Things: 24 to 50 Billion devices on Internet by 2020
ix. HPC simulations: generate major (visualization) output that often
needs to be mined
x. Using GIS: Geographical Information Systems provide attractive
access to geospatial data
Note: 10 use cases from Bob Marcus (who led the NIST effort) follow
2. Perform real time analytics on data source streams and notify users when
specified events occur
Storm, Kafka, HBase, Zookeeper
Streaming data sources feed a filter identifying events; users specify the filter,
identified events are posted to users, selected events are posted to an archive
repository, and streamed data can be fetched back from the repository.
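Stripped of Storm/Kafka/HBase/Zookeeper, the dataflow of this use case reduces to: apply a user-specified filter to each streamed record, notify users on identified events, and archive everything for later fetching. A minimal Python sketch of that pattern (all function and variable names hypothetical):

```python
def run_filter(stream, predicate, notify, archive):
    """Sketch of NIST use case 2: a user-specified filter identifies
    events in a stream; matches are posted to users, and every record
    is archived so streamed data can be fetched back later."""
    identified = []
    for record in stream:
        archive.append(record)        # repository keeps the raw stream
        if predicate(record):         # user-specified event filter
            identified.append(record)
            notify(record)            # push identified event to users
    return identified

# Usage: flag readings above a threshold while archiving everything
alerts, archive = [], []
events = run_filter([3, 12, 7, 20], lambda r: r > 10, alerts.append, archive)
```

In a real deployment Kafka would carry `stream`, a Storm bolt would host `predicate`, and HBase would back `archive`; the control flow is the same.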
5. Perform interactive analytics on data in
analytics-optimized database
Hadoop, Spark, Giraph, Pig …
Data Storage: HDFS, Hbase
Data, Streaming, Batch …..
Mahout, R
5A. Perform interactive analytics on observational scientific data
Grid or Many Task Software, Hadoop, Spark, Giraph, Pig …
Data Storage: HDFS, HBase, File Collection
Science Analysis Code, Mahout, R; Streaming Twitter data for Social Networking
Scientific data is recorded in the “field”, accumulated locally with initial
computing, then moved by direct transfer or batch transport to the primary
analysis data system.
NIST examples include LHC, Remote Sensing, Astronomy and Bioinformatics
Facets of the Ogres Processing
View
Facets in Processing (run time) View of Ogres I
i. Micro-benchmarks ogres that exercise simple features of hardware
such as communication, disk I/O, CPU, memory performance
ii. Local Analytics executed on a single core or perhaps node
iii. Global Analytics requiring iterative programming models (G5,G6)
across multiple nodes of a parallel system
iv. Optimization Methodology: overlapping categories
i. Nonlinear Optimization (G6)
ii. Machine Learning
iii. Maximum Likelihood or 2 minimizations
iv. Expectation Maximization (often Steepest descent)
v. Combinatorial Optimization
vi. Linear/Quadratic Programming (G5)
vii. Dynamic Programming
v. Visualization is a key application capability, with algorithms like MDS
useful but themselves part of a “mini-app” or composite Ogre
vi. Alignment (G7) as in BLAST compares samples with repository
Facets in Processing (run time) View of Ogres II
vii. Streaming divided into 5 categories depending on event size and
synchronization and integration
– Set of independent events where precise time sequencing unimportant.
– Time series of connected small events where time ordering important.
– Set of independent large events where each event needs parallel processing with time
sequencing not critical
– Set of connected large events where each event needs parallel processing with time
sequencing critical.
– Stream of connected small or large events to be integrated in a complex way.
viii. Basic Statistics (G1): MRStat in NIST problem features
ix. Search/Query/Index: Classic database which is well studied (Baru, Rabl tutorial)
x. Recommender Engine: core to many e-commerce, media businesses;
collaborative filtering key technology
xi. Classification: assigning items to categories based on many methods
– MapReduce good in Alignment, Basic statistics, S/Q/I, Recommender, Classification
xii. Deep Learning of growing importance due to success in speech recognition etc.
xiii. Problem set up as a graph (G3) as opposed to vector, grid, bag of words etc.
xiv. Using Linear Algebra Kernels: much machine learning uses linear algebra kernels
Benchmarks based on Ogres
Analytics
Benchmarks/Mini-apps spanning Facets
• Look at NSF SPIDAL Project, NIST 51 use cases, Baru-Rabl review
• Catalog facets of benchmarks and choose entries to cover “all facets”
• Micro Benchmarks: SPEC, EnhancedDFSIO (HDFS), Terasort, Wordcount,
Grep, MPI, Basic Pub-Sub ….
• SQL and NoSQL Data systems, Search, Recommenders: TPC (-C to x-HS for
Hadoop), BigBench, Yahoo Cloud Serving, Berkeley Big Data, HiBench,
BigDataBench, Cloudsuite, Linkbench
– includes MapReduce cases Search, Bayes, Random Forests, Collaborative Filtering
• Spatial Query: select from image or earth data
• Alignment: Biology as in BLAST
• Streaming: Online classifiers, Cluster tweets, Robotics, Industrial Internet of
Things, Astronomy; BGBenchmark; choose to cover all 5 subclasses
• Pleasingly parallel (Local Analytics): as in initial steps of LHC, Pathology,
Bioimaging (differ in type of data analysis)
• Global Analytics: Outlier, Clustering, LDA, SVM, Deep Learning, MDS,
PageRank, Levenberg-Marquardt, Graph 500 entries
• Workflow and Composite (analytics on xSQL) linking above
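Two of the micro-benchmarks named above, Wordcount and Grep, have kernels small enough to sketch directly. These are illustrative reference kernels only, not the benchmark harnesses themselves (which also measure I/O, partitioning, and shuffle).

```python
from collections import Counter

def wordcount(lines):
    """Wordcount micro-benchmark kernel: map each line to words,
    reduce by summing per-word counts (the classic MapReduce example)."""
    counts = Counter()
    for line in lines:
        counts.update(line.lower().split())
    return counts

def grep(lines, pattern):
    """Grep micro-benchmark kernel: a pleasingly parallel filter over
    independent records."""
    return [line for line in lines if pattern in line]
```

In facet terms, Wordcount exercises Classic MapReduce (2) and Grep exercises Map Only (1), which is why both recur across HiBench, BigDataBench, and similar suites.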
HPC-ABDS Runtime
5/4/2015 44
5/4/2015 45
Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies
Cross-
Cutting
Functions
1) Message
and Data
Protocols:
Avro, Thrift,
Protobuf
2) Distributed
Coordination
: Google
Chubby,
Zookeeper,
Giraffe,
JGroups
3) Security &
Privacy:
InCommon,
Eduroam
OpenStack
Keystone,
LDAP, Sentry,
Sqrrl, OpenID,
SAML OAuth
4)
Monitoring:
Ambari,
Ganglia,
Nagios, Inca
17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad,
Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA),
Jitterbit, Talend, Pentaho, Apatar
16) Application and Analytics: Mahout , MLlib , MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, Azure Machine
Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL(Intel), Caffe, Torch, Theano, DL4j, H2O, IBM
Watson, Oracle PGX, GraphLab, GraphX, MapGraph, IBM System G, GraphBuilder(Intel), TinkerPop, Google Fusion Tables, CINET, NWB,
Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree
15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure,
Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi,
HUBzero, OODT, Agave, Atmosphere
15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq,
Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
14B) Streams: Storm, S4, Samza, Granules, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream
Analytics
14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, Stratosphere (Apache Flink), Reef, Hama, Giraph,
Pregel, Pegasus, Ligra, GraphChi
13) Inter process communication Collectives, point-to-point, publish-subscribe: MPI, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ,
NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT, Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event
Hubs
12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan
12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
12) Extraction Tools: UIMA, Tika
11C) SQL(NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal
Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB
11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB,
Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J,
Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame
Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm,
Torque, Globus Tools, Pilot Jobs
8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS
Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
6) DevOps: Docker, Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Rocks, Cisco
Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive
5) IaaS Management from HPC to hypervisors: Xen, KVM, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula,
Eucalyptus, Nimbus, CloudStack, CoreOS, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds,
Networking: Google Cloud DNS, Amazon Route 53
21 layers, over 300 software packages (May 2, 2015)
There are a lot of Big Data and HPC Software systems in 17 (21) layers
Build on – do not compete with the over 300 HPC-ABDS systems
6 Forms of MapReduce (figure):
(1) Map Only: input → map → output (Pleasingly Parallel, Local Analytics)
(2) Classic MapReduce: input → map → reduce (Basic Statistics)
(3) Iterative Map Reduce or Map-Collective: input → map → reduce with iterations (Iterative MR)
(4) Point to Point or Map-Communication: local communication over a graph (Graph)
(5) Map Streaming: events flow from brokers to maps (Streaming)
(6) Shared Memory Map Communicates: map and communicate through shared memory (Shared Memory)
5/4/2015 47
8 Data Analysis Problem Architectures
 1) Pleasingly Parallel PP or “map-only” in MapReduce
 BLAST Analysis; Local Machine Learning
 2A) Classic MapReduce MR, Map followed by reduction
 High Energy Physics (HEP) Histograms; Web search; Recommender Engines
 2B) Simple version of classic MapReduce MRStat
 Final reduction is just simple statistics
 3) Iterative MapReduce MRIter
 Expectation maximization, Clustering, Linear Algebra, PageRank
 4A) Map Point to Point Communication
 Classic MPI; PDE Solvers and Particle Dynamics; Graph processing Graph
 4B) GPU (Accelerator) enhanced 4A) – especially for deep learning
 5) Map + Streaming + Communication
 Images from Synchrotron sources; Telescopes; Internet of Things IoT
 6) Shared memory allowing parallel threads, which are tricky to program
but offer lower latency
 Difficult to parallelize asynchronous parallel Graph Algorithms
5/4/2015 48
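The first three of these architectures can be sketched in plain Python to show the shape of each computation. This is a schematic illustration only, not a distributed implementation; the data points and starting centers are made up.

```python
from functools import reduce

data = [1.0, 2.0, 8.0, 9.0, 10.0]

# (1) Pleasingly Parallel / map-only: independent per-item work.
squares = list(map(lambda x: x * x, data))

# (2A) Classic MapReduce: map followed by a single reduction (here, a sum).
total = reduce(lambda a, b: a + b, map(lambda x: x * x, data))

# (3) Iterative MapReduce: repeat map + collective until centers stop moving
# (one-dimensional k-means as the standard example).
centers = [0.0, 5.0]
for _ in range(10):
    # Map: assign each point to its nearest center.
    assign = [min(range(len(centers)), key=lambda c: abs(x - centers[c]))
              for x in data]
    # Collective (reduce): recompute each center as the mean of its points.
    centers = [sum(x for x, a in zip(data, assign) if a == c) /
               max(1, sum(1 for a in assign if a == c))
               for c in range(len(centers))]
print(centers)
```

In a real system the map step runs on partitioned data across nodes and the collective is a distributed reduction (e.g. Hadoop shuffle or MPI allreduce); the control flow is the same.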
Functionality of 21 HPC-ABDS Layers
1) Message Protocols:
2) Distributed Coordination:
3) Security & Privacy:
4) Monitoring:
5) IaaS Management from HPC to hypervisors:
6) DevOps:
7) Interoperability:
8) File systems:
9) Cluster Resource Management:
10) Data Transport:
11) A) File management
B) NoSQL
C) SQL
12) In-memory databases&caches / Object-relational mapping / Extraction Tools
13) Inter process communication Collectives, point-to-point, publish-subscribe, MPI:
14) A) Basic Programming model and runtime, SPMD, MapReduce:
B) Streaming:
15) A) High level Programming:
B) Frameworks
16) Application and Analytics:
17) Workflow-Orchestration:
5/4/2015 49
Here are 21 functionalities.
(including 11, 14, 15 subparts)
Let's discuss how these are used in
particular applications
4 Cross cutting at top
17 in order of layered diagram
starting at bottom
Exemplar Software for a Big Data Initiative
• Functionality of ABDS and Performance of HPC
• Workflow: Apache Crunch, Python or Kepler
• Data Analytics: Mahout, R, ImageJ, Scalapack
• High level Programming: Hive, Pig
• Batch Parallel Programming model: Hadoop, Spark, Giraph, Harp,
MPI;
• Streaming Programming model: Storm, Kafka or RabbitMQ
• In-memory: Memcached
• Data Management: Hbase, MongoDB, MySQL
• Distributed Coordination: Zookeeper
• Cluster Management: Yarn, Slurm
• File Systems: HDFS, Object store (Swift), Lustre
• DevOps: Cloudmesh, Chef, Puppet, Docker, Cobbler
• IaaS: Amazon, Azure, OpenStack, Docker, SR-IOV
• Monitoring: Inca, Ganglia, Nagios
5/4/2015 50
ABDS v. HPC Architecture
5/4/2015 51
HPC-ABDS Stack Summarized I
• The HPC-ABDS software is broken up into 21 layers so that one can discuss
software systems in reasonable size groups.
– The layers where there is a particular opportunity to integrate HPC are colored
green in the figure.
• We note that data systems that we construct from this software can run
interoperably on virtualized or non-virtualized environments aimed at key
scientific data analysis problems.
• Most of ABDS emphasizes scalability but not performance and one of our
goals is to produce high performance environments. Here there is clear
need for better node performance and support of accelerators like Xeon Phi
and GPUs.
• Figure “ABDS v. HPC Architecture” contrasts modern ABDS and HPC stacks
illustrating most of the 21 layers and labelling on left with layer number
used in HPC-ABDS Figure.
• The omitted layers in architecture figure are Interoperability, DevOps,
Monitoring and Security (layers 7, 6, 4, 3) which are all important and
clearly applicable to both HPC and ABDS.
• We also add an extra “language” layer not discussed in the HPC-ABDS Figure.
5/4/2015 52
HPC-ABDS Stack Summarized II
• Lower layers where HPC can make a major impact include the scheduling layer
(9), where Apache technologies like Yarn and Mesos need to be integrated
with the sophisticated HPC cluster and HTC approaches.
• Storage layer 8 is another important area where HPC distributed and
parallel storage environments need to be reconciled with the “data parallel”
storage seen in HDFS in many ABDS systems.
• However the most important issues are probably at the higher layers, with
data management (11), communication (13), (high-layer or basic)
programming (15, 14), analytics (16) and orchestration (17). These are
areas where there is rapid commodity/commercial innovation, and we
discuss them in order below.
• Much science data analysis is centered on files (8), but we expect movement
to the common commodity approaches of Object stores (8), SQL and
NoSQL (11), where the latter has a proliferation of systems with different
characteristics – especially in the data abstraction, which varies over
row/column, key-value, graph and documents.
• Note recent developments at the programming layer (15A) like Apache
Hive and Drill, which offer high-layer access models like SQL implemented
on scalable NoSQL data systems.
– Generalize Drill to other views such as “FITS on anything” (astronomy) or
“Excel on Anything” or “Matplotlib on anything”
5/4/2015 53
HPC-ABDS Stack Summarized III
• The communication layer (13) includes publish-subscribe technology, used in many
approaches to streaming data, as well as the HPC communication technologies (MPI),
which are much faster than most default Apache approaches. These can be added to
some systems like Hadoop, whose modern version is modular and allows plug-ins
for HPC stalwarts like MPI and sophisticated load balancers.
– Need to extend to Giraph and include load-balancing
• The programming layer (14) includes both the classic batch processing typified by
Hadoop (14A) and streaming by Storm (14B).
• Investigating Map-Streaming programming models seems important; ABDS
streaming is attractive but does not support real-time or parallel processing.
• The programming offerings (14) differ in approaches to data model (key-value,
array, pixel, bags, graph), fault tolerance and communication. The trade-offs here
have major performance issues.
– Too many ~identical programming systems!
– Recent survey of graph databases from Wikidata with 49 features; chose
BlazeGraph
• You also see systems like Apache Pig (15A) offering data parallel interfaces.
• At the high layer we see both programming models (15A) and Platform as a
Service toolkits (15B), where Google App Engine is well known but there are many
entries, including the recent BlueMix from IBM.
5/4/2015 54
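The publish-subscribe technology at layer 13 (Kafka, RabbitMQ, NaradaBrokering, etc.) reduces to a simple pattern: producers publish to named topics and a broker fans each message out to that topic's subscribers. A toy in-process sketch, where the `Broker` class and topic names are illustrative; real brokers add persistence, partitioning and network transport, none of which is shown.

```python
from collections import defaultdict

class Broker:
    """Toy in-process pub-sub broker: topics fan out to subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("sensor/temperature", received.append)
broker.subscribe("sensor/temperature", lambda m: print("got", m))
broker.publish("sensor/temperature", {"value": 21.5})
broker.publish("sensor/humidity", {"value": 0.4})  # no subscribers: dropped
```

The decoupling is the point: publishers never name their consumers, which is why this pattern underlies so many streaming architectures.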
HPC-ABDS Stack Summarized IV
• The orchestration or workflow layer 17 has seen an explosion of
activity in the ABDS space although with systems like Pegasus,
Taverna, Kepler, Swift and IPython, HPC has substantial experience.
• There are commodity orchestration dataflow systems like Tez and
projects like Apache Crunch with a data parallel emphasis based on
ideas from Google FlumeJava.
– A modern version of the latter presumably underlies Google’s recently
announced Cloud Dataflow that unifies support of multiple batch and
streaming components; a capability that we expect to become common.
• The implementation of the analytics layer 16 depends on details of
orchestration and especially programming layers but probably most
important are quality parallel algorithms.
– As many machine learning algorithms involve linear algebra, HPC expertise is
directly applicable, as is the fundamental mathematics needed to develop O(N log N)
algorithms for analytics that are naively O(N²).
– Streaming (online) algorithms are an important new area of research
5/4/2015 55
DevOps, Platforms and Orchestration
• DevOps Level 6 includes several automation capabilities
including systems like OpenStack Heat, Juju and Kubernetes to
build virtual clusters and a standard TOSCA that has several
good studies from Stuttgart
– TOSCA specifies system to be instantiated and managed
• TOSCA is closely related to workflow (orchestration) standard
BPEL
– BPEL specifies system to be executed
• In Level 17, should evaluate new orchestration systems from
ABDS such as NiFi or Crunch and toolkits like Cascading
– Ensure streaming and batch supported
• Level 15B has application hosting environments such as GAE,
Heroku, Dokku (for Docker), Jelastic
– These platforms bring together a focused set of tools to address a finite
but broad application area
• Should look at these 3 levels to build HPC and Streaming
systems
5/4/2015 56
Analytics and the DIKW Pipeline
• Data goes through a pipeline (Big Data is also Big Wisdom etc.)
Raw data → Data → Information → Knowledge → Wisdom →
Decisions
• Each link enabled by a filter which is “business logic” or “analytics”
– All filters are Analytics
• However I am most interested in filters that involve “sophisticated
analytics”, which require non-trivial parallel algorithms
– Improve state of the art in both algorithm quality and (parallel) performance
• See Apache Crunch or Google Cloud Dataflow supporting pipelined
analytics
– And Pegasus, Taverna, Kepler from Grid community
DIKW pipeline (figure): Data →(Analytics)→ Information →(More Analytics)→ Knowledge
5/4/2015 57
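The pipeline view above is essentially function composition: each analytics "filter" maps one DIKW stage to the next. A minimal sketch, where the stage filters, threshold and sample readings are all hypothetical:

```python
def clean(raw):
    """Raw data -> Data: drop malformed readings."""
    return [r for r in raw if r is not None and r >= 0]

def summarize(data):
    """Data -> Information: reduce readings to summary statistics."""
    return {"mean": sum(data) / len(data), "n": len(data)}

def interpret(info):
    """Information -> Knowledge: apply a (hypothetical) decision rule."""
    return "alert" if info["mean"] > 50 else "normal"

# Compose the filters: Raw -> Data -> Information -> Knowledge.
raw = [10, None, 30, -5, 20]
knowledge = interpret(summarize(clean(raw)))
print(knowledge)
```

Systems like Apache Crunch or Google Cloud Dataflow express exactly this composition, but over distributed collections rather than Python lists.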
Cloud DIKW based on HPC-ABDS to integrate
streaming and batch Big Data
(Figure: Internet of Things events enter via a Pub-Sub system feeding parallel Storm workers; Streaming Processing and Batch Processing (both Iterative MapReduce) run over Archival Storage (NoSQL like Hbase), coordinated by System Orchestration / Dataflow / Workflow; the pipeline carries Raw Data → Data → Information → Knowledge → Wisdom → Decisions.)
5/4/2015 58
5 Classes of Streaming Applications
• Set of independent small events where precise time sequencing
unimportant.
– e.g. Independent search requests or tweets from users
• Time series of connected small events where time ordering
important.
– e.g. Streaming audio or video; robot monitoring
• Set of independent large events where each event needs parallel
processing with time sequencing not critical
– e.g. Processing images from telescopes or light sources in the material or
biological sciences.
• Set of connected large events where each event needs parallel
processing with time sequencing critical.
– e.g. Processing high resolution monitoring (including video) information
from robots (self-driving cars) with real time response needed
• Stream of connected small or large events that need to be
integrated in a complex way.
– e.g. Tweets or other online data where we use them to update old
and find new clusters rather than just classifying tweets based on previous
clusters, i.e. where we update the model as well as using it to classify events.
5/4/2015 59
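The fifth class is the interesting one for analytics: each event must be classified with the current model while the model itself is updated. A minimal sequential (online) k-means sketch of that idea, with invented one-dimensional data and starting centers:

```python
def nearest(centers, x):
    """Index of the center closest to x."""
    return min(range(len(centers)), key=lambda i: abs(x - centers[i]))

def online_kmeans(events, centers):
    """Label each event AND update the model as the stream arrives."""
    centers = list(centers)
    counts = [1] * len(centers)        # pseudo-counts for running means
    labels = []
    for x in events:
        c = nearest(centers, x)                      # classify with current model
        counts[c] += 1
        centers[c] += (x - centers[c]) / counts[c]   # update the model
        labels.append(c)
    return labels, centers

labels, centers = online_kmeans([1.0, 1.2, 9.0, 0.8, 9.5], [0.0, 10.0])
print(labels, centers)
```

Note the contrast with a fixed classifier: here the centers drift toward the data as events arrive, so late events are judged by a model the early events helped build.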
Science Streaming Examples
Streaming Application and Details:
1. Data Assimilation: Integrate data into simulations to enhance quality. Distributed data sources.
2. Analysis of Simulation Results: Climate, Fusion, Molecular Dynamics, Materials. Typically local or in-situ data.
3. Steering Experiments: Control of simulation or experiment. Data could be local or distributed.
4. Astronomy, Light Sources: Outlier event detection; classification; build model, accumulate data. Distributed data sources.
5. Environmental Sensors, Smart Grid, Transportation systems: Environmental sensor data or Internet of Things; many small events. Distributed data sources.
6. Robotics: Cloud control of robots including cars. Need processing near robot when decision time < ~1 second.
7. Network Science, Online Classification: Build model and classify with online (streaming) algorithms.
5/4/2015 60
IoT Activities at Indiana University
Parallel Clustering for Tweets: Judy Qiu, Emilio Ferrara,
Xiaoming Gao
Parallel Cloud Control for Robots: Supun
Kamburugamuve, Hengjing He, David Crandall
5/4/2015 61
• IoTCloud uses Zookeeper,
Storm, Hbase, RabbitMQ for
robot cloud control
• Focus on high performance
(parallel) control functions
• Guaranteed real time
response
5/4/2015 62
Parallel
simultaneous
localization and
mapping
(SLAM) in the
cloud
Latency with RabbitMQ
Different Message sizes in
bytes
Latency with Kafka
Note change in scales
for latency and
message size
5/4/2015 63
Robot Latency Kafka & RabbitMQ
Kinect with
Turtlebot
and
RabbitMQ
RabbitMQ
versus
Kafka
5/4/2015 64
Parallel SLAM Simultaneous Localization
and Mapping by Particle Filtering
5/4/2015 65
Parallel Overheads SLAM Simultaneous Localization
and Mapping: I/O and Garbage Collection
5/4/2015 66
Parallel Overheads SLAM Simultaneous
Localization and Mapping: Load Imbalance
5/4/2015 67
Load Imbalance overhead
Parallel Tweet Clustering with Storm
• Judy Qiu, Emilio Ferrara and Xiaoming Gao
• Storm Bolts coordinated by ActiveMQ to synchronize parallel cluster
center updates – add loops to Storm
• 2 million streaming tweets processed in 40 minutes; 35,000 clusters
(Figure: sequential baseline versus parallel implementation, eventually scaling to 10,000 bolts)
5/4/2015 68
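The coordination problem described here, parallel workers updating shared cluster centers in synchronized rounds (the "loops" added to Storm), can be sketched with threads and a barrier. The real system uses Storm bolts coordinated by ActiveMQ, which is not shown; the shard data and iteration count below are invented.

```python
import threading

data_shards = [[1.0, 2.0], [8.0, 9.0], [1.5, 9.5]]   # one shard per worker
centers = [0.0, 10.0]
partials = [None] * len(data_shards)                 # per-worker (sums, counts)
barrier = threading.Barrier(len(data_shards))

def worker(wid):
    global centers
    for _ in range(5):                               # synchronized iterations
        sums = [0.0] * len(centers)
        counts = [0] * len(centers)
        for x in data_shards[wid]:                   # local assignment step
            c = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            sums[c] += x
            counts[c] += 1
        partials[wid] = (sums, counts)
        barrier.wait()                               # all partials ready
        if wid == 0:                                 # one worker merges: the
            centers = [                              # "loop back" step
                sum(p[0][c] for p in partials) / max(1, sum(p[1][c] for p in partials))
                for c in range(len(centers))
            ]
        barrier.wait()                               # new centers visible to all

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(centers)
```

The two barrier waits per iteration are the essential trick: no worker reads the centers while the merge is in flight, which is the synchronization ActiveMQ provides across machines in the actual system.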
Parallel Tweet Clustering with Storm
• Speedup on up to 96 bolts on two clusters Moe and Madrid
• Red curve is old algorithm;
• green and blue new algorithm
• Full Twitter – 1000 way parallelism
• Full Everything – 10,000 way parallelism
5/4/2015 69
Data Science Curriculum
5/4/2015 70
SOIC Data Science Program
• Cross Disciplinary Faculty – 31 in School of Informatics and Computing, a
few in statistics and expanding across campus
• Affordable online and traditional residential curricula or mix thereof
• Masters, Certificate, PhD Minor in place; Full PhD being studied
• http://www.soic.indiana.edu/graduate/degrees/data-science/index.html
5/4/2015 71
IU Data Science Program
• Program managed by cross disciplinary Faculty in Data Science.
Currently Statistics and Informatics and Computing School but will
expand scope to full campus
• A purely online 4-course Certificate in Data Science has been
running since January 2014
– Fall 2015 expect ~100 students enrolled taking 1-2 classes per
semester/year
– Most students are professionals taking courses in “free time”
• A campus wide Ph.D. Minor in Data Science has been approved.
• Exploring PhD in Data Science
• Courses labelled as “Decision-maker” and “Technical” paths,
where McKinsey projects an order of magnitude more (1.5 million by
2018) unmet job openings in the Decision-maker track
5/4/2015 72
McKinsey Institute on Big Data Jobs
• There will be a shortage of talent necessary for organizations to take
advantage of big data. By 2018, the United States alone could face a
shortage of 140,000 to 190,000 people with deep analytical skills as
well as 1.5 million managers and analysts with the know-how to use
the analysis of big data to make effective decisions.
• IU Data Science Decision Maker Path aimed at 1.5 million jobs.
Technical Path covers the 140,000 to 190,000
http://www.mckinsey.com/mgi/publications/big_data/index.asp
5/4/2015 73
IU Data Science Program: Masters
• Masters Fully approved by University and State October 14
2014 and started January 2015
• Blended online and residential (any combination)
– Online offered at in-state rates (~$1100 per course)
• Informatics, Computer Science, Information and Library
Science in School of Informatics and Computing and the
Department of Statistics, College of Arts and Science, IUB
• 30 credits (10 conventional courses)
• Basic (general) Masters degree plus tracks
– Currently only track is “Computational and Analytic Data
Science ”
– Other tracks expected such as Biomedical Data Science
• Fall 2015, over 200 applicants to program; cap enrollment
5/4/2015 74
Some Online Data Science Classes
• Big Data Applications & Analytics
– ~40 hours of video mainly discussing applications (The X in
X-Informatics or X-Analytics) in context of big data and
clouds https://bigdatacourse.appspot.com/course
• Big Data Open Source Software and Projects
http://bigdataopensourceprojects.soic.indiana.edu/
– ~27 Hours of video discussing HPC-ABDS and use on
FutureSystems for Big Data software
• Divided into sections (coherent topics), units (~lectures) and
lessons (5-20 minutes) where student is meant to stay awake
5/4/2015 75
• 1 Unit: Organizational Introduction
• 1 Unit: Motivation: Big Data and the Cloud; Centerpieces of the Future Economy
• 3 Units: Pedagogical Introduction: What is Big Data, Data Analytics and X-Informatics
• SideMOOC: Python for Big Data Applications and Analytics: NumPy, SciPy, MatPlotlib
• SideMOOC: Using FutureSystems for Java and Python
• 4 Units: X-Informatics with X= LHC Analysis and Discovery of Higgs particle
– Integrated Technology: Explore Events; histograms and models; basic statistics (Python and some in Java)
• 3 Units on a Big Data Use Cases Survey
• SideMOOC: Using Plotviz Software for Displaying Point Distributions in 3D
• 3 Units: X-Informatics with X= e-Commerce and Lifestyle
• Technology (Python or Java): Recommender Systems - K-Nearest Neighbors
• Technology: Clustering and heuristic methods
• 1 Unit: Parallel Computing Overview and familiar examples
• 4 Units: Cloud Computing Technology for Big Data Applications & Analytics
• 2 Units: X-Informatics with X = Web Search and Text Mining and their technologies
• Technology for Big Data Applications & Analytics : Kmeans (Python/Java)
• Technology for Big Data Applications & Analytics: MapReduce
• Technology for Big Data Applications & Analytics : Kmeans and MapReduce Parallelism (Python/Java)
• Technology for Big Data Applications & Analytics : PageRank (Python/Java)
• 3 Units: X-Informatics with X = Sports
• 1 Unit: X-Informatics with X = Health
• 1 Unit: X-Informatics with X = Internet of Things & Sensors
• 1 Unit: X-Informatics with X = Radar for Remote Sensing
Big Data Applications & Analytics Topics
Red = Software
5/4/2015 76
http://x-informatics.appspot.com/course
Example
Google
Course
Builder
MOOC
4 levels
Course
Section (12)
Units(29)
Lessons(~150)
Units are
roughly
traditional
lecture
Lessons are
~10 minute
segments
5/4/2015 77
http://x-informatics.appspot.com/course
Example
Google
Course
Builder
MOOC
The Physics
Section
expands to 4
units and 2
Homeworks
Unit 9 expands
to 5 lessons
Lessons played
on YouTube
“talking head
video +
PowerPoint”
5/4/2015 78
The community group for one of the classes
and one forum (“No more malls”)
5/4/2015 79
Big Data & Open Source Software Projects Overview
• This course studies DevOps and software used in many commercial
activities to study Big Data.
• The course of this talk!!
• The backdrop for course is the ~300 software subsystems HPC-ABDS
(High Performance Computing enhanced - Apache Big Data Stack)
illustrated at http://hpc-abds.org/kaleidoscope/
• The cloud computing architecture underlying ABDS and contrast of this with
HPC.
• The main activity of the course is building a significant project using multiple
HPC-ABDS subsystems combined with user code and data.
• Projects will be suggested or students can choose their own
• http://cloudmesh.github.io/introduction_to_cloud_computing/class/lesson/projects.html
• For more information,
see: http://bigdataopensourceprojects.soic.indiana.edu/ and
• http://cloudmesh.github.io/introduction_to_cloud_computing/class/bdossp_sp15/week_plan.html
5/4/2015 80
5/4/2015 81
5/4/2015 82
5/4/2015 83
Potpourri of Online Technologies
• Canvas (Indiana University Default): Best for interface with IU grading and
records
• Google Course Builder: Best for management and integration of
components
• Ad hoc web pages: alternative easy to build integration
• Microsoft Mix: Simplest faculty preparation interface
• Adobe Presenter/Camtasia: More powerful video preparation that support
subtitles but not clearly needed
• Google Community: Good social interaction support
• YouTube: Best user interface for videos
• Hangout: Best for instructor-students online interactions (one instructor to 9
students with live feed). Hangout on air mixes live and streaming (30 second
delay from archived YouTube) and more participants
• OpenEdX: possible future of Google Course Builder and getting easier to
use
5/4/2015 84
Lessons / Insights
• Proposed classification of Big Data applications
with features generalized as facets and kernels for
analytics
• Integrate (don’t compete) HPC with “Commodity
Big data” (Google to Amazon to Enterprise/Startup
Data Analytics)
– i.e. improve Mahout; don’t compete with it
– Use Hadoop plug-ins rather than replacing Hadoop
• Enhanced Apache Big Data Stack HPC-ABDS has
over 300 members with HPC opportunities at the
resource management, storage/data, streaming,
programming, monitoring, and workflow layers.
5/4/2015 85
PDF
OSDC 2015: Bernd Mathiske | Why the Datacenter Needs an Operating System
PDF
SMACK Stack 1.1
PPTX
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...
PDF
Hopsworks in the cloud Berlin Buzzwords 2019
PDF
Data Science und Machine Learning im Kubernetes-Ökosystem
PPTX
Designing High-Performance and Scalable Middleware for HPC, AI and Data Science
PPTX
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
PPT
Hadoop trainingin bangalore
PPTX
Designing High performance & Scalable Middleware for HPC
PPTX
A Container-based Sizing Framework for Apache Hadoop/Spark Clusters
PPT
Cyberinfrastructure and Applications Overview: Howard University June22
PDF
Ucx an open source framework for hpc network ap is and beyond
Dibbs spidal april6-2016
Devops Spark Streaming
Let Spark Fly: Advantages and Use Cases for Spark on Hadoop
Introduction to apache hadoop
Hadoop and OpenStack - Hadoop Summit San Jose 2014
Hadoop and OpenStack
Open Source Tools
Infrastructure Cloud DevOps- additional skills
OSDC 2015: Bernd Mathiske | Why the Datacenter Needs an Operating System
SMACK Stack 1.1
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...
Hopsworks in the cloud Berlin Buzzwords 2019
Data Science und Machine Learning im Kubernetes-Ökosystem
Designing High-Performance and Scalable Middleware for HPC, AI and Data Science
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
Hadoop trainingin bangalore
Designing High performance & Scalable Middleware for HPC
A Container-based Sizing Framework for Apache Hadoop/Spark Clusters
Cyberinfrastructure and Applications Overview: Howard University June22
Ucx an open source framework for hpc network ap is and beyond

More from Geoffrey Fox (20)

PPTX
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
PPTX
High Performance Computing and Big Data
PPTX
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
PPTX
Data Science and Online Education
PPTX
Big Data HPC Convergence and a bunch of other things
PPTX
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
PPTX
Visualizing and Clustering Life Science Applications in Parallel 
PPTX
Lessons from Data Science Program at Indiana University: Curriculum, Students...
PPTX
Data Science Curriculum at Indiana University
DOCX
Experience with Online Teaching with Open Source MOOC Technology
PDF
Big Data and Clouds: Research and Education
PDF
High Performance Data Analytics and a Java Grande Run Time
PDF
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
PPTX
Remarks on MOOC's
PPTX
FutureGrid Computing Testbed as a Service
PPTX
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
PPTX
NIST Big Data Public Working Group NBD-PWG
PPT
Linking Programming models between Grids, Web 2.0 and Multicore
PPT
CTS Conference Web 2.0 Tutorial Part 2
PPT
CTS Conference Web 2.0 Tutorial Part 1
AI-Driven Science and Engineering with the Global AI and Modeling Supercomput...
High Performance Computing and Big Data
Spidal Java: High Performance Data Analytics with Java on Large Multicore HPC...
Data Science and Online Education
Big Data HPC Convergence and a bunch of other things
Classifying Simulation and Data Intensive Applications and the HPC-Big Data C...
Visualizing and Clustering Life Science Applications in Parallel 
Lessons from Data Science Program at Indiana University: Curriculum, Students...
Data Science Curriculum at Indiana University
Experience with Online Teaching with Open Source MOOC Technology
Big Data and Clouds: Research and Education
High Performance Data Analytics and a Java Grande Run Time
Multi-faceted Classification of Big Data Use Cases and Proposed Architecture ...
Remarks on MOOC's
FutureGrid Computing Testbed as a Service
Big Data Applications & Analytics Motivation: Big Data and the Cloud; Centerp...
NIST Big Data Public Working Group NBD-PWG
Linking Programming models between Grids, Web 2.0 and Multicore
CTS Conference Web 2.0 Tutorial Part 2
CTS Conference Web 2.0 Tutorial Part 1

Recently uploaded (20)

PPTX
Introduction-to-Cloud-ComputingFinal.pptx
PPTX
Introduction to Knowledge Engineering Part 1
PDF
.pdf is not working space design for the following data for the following dat...
PDF
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
PPTX
Supervised vs unsupervised machine learning algorithms
PDF
Foundation of Data Science unit number two notes
PPT
Reliability_Chapter_ presentation 1221.5784
PPTX
Business Acumen Training GuidePresentation.pptx
PDF
Introduction to Business Data Analytics.
PPTX
CEE 2 REPORT G7.pptxbdbshjdgsgjgsjfiuhsd
PDF
Galatica Smart Energy Infrastructure Startup Pitch Deck
PDF
Recruitment and Placement PPT.pdfbjfibjdfbjfobj
PDF
Lecture1 pattern recognition............
PPTX
Major-Components-ofNKJNNKNKNKNKronment.pptx
PDF
168300704-gasification-ppt.pdfhghhhsjsjhsuxush
PPT
Chapter 3 METAL JOINING.pptnnnnnnnnnnnnn
PPTX
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
PPTX
Moving the Public Sector (Government) to a Digital Adoption
PDF
TRAFFIC-MANAGEMENT-AND-ACCIDENT-INVESTIGATION-WITH-DRIVING-PDF-FILE.pdf
Introduction-to-Cloud-ComputingFinal.pptx
Introduction to Knowledge Engineering Part 1
.pdf is not working space design for the following data for the following dat...
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
Supervised vs unsupervised machine learning algorithms
Foundation of Data Science unit number two notes
Reliability_Chapter_ presentation 1221.5784
Business Acumen Training GuidePresentation.pptx
Introduction to Business Data Analytics.
CEE 2 REPORT G7.pptxbdbshjdgsgjgsjfiuhsd
Galatica Smart Energy Infrastructure Startup Pitch Deck
Recruitment and Placement PPT.pdfbjfibjdfbjfobj
Lecture1 pattern recognition............
Major-Components-ofNKJNNKNKNKNKronment.pptx
168300704-gasification-ppt.pdfhghhhsjsjhsuxush
Chapter 3 METAL JOINING.pptnnnnnnnnnnnnn
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
Moving the Public Sector (Government) to a Digital Adoption
TRAFFIC-MANAGEMENT-AND-ACCIDENT-INVESTIGATION-WITH-DRIVING-PDF-FILE.pdf

HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a bias to Streaming)

  • 1. HPC-ABDS High Performance Computing Enhanced Apache Big Data Stack (with a bias to Streaming). 2nd International Workshop on Scalable Computing For Real-Time Big Data Applications (SCRAMBL'15), in conjunction with CCGrid'15, May 4, 2015. Geoffrey Fox, Judy Qiu, Shantenu Jha, Supun Kamburugamuve, Andre Luckow. gcf@indiana.edu http://www.infomall.org. School of Informatics and Computing, Digital Science Center, Indiana University Bloomington.
  • 3. Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies: 21 layers, over 300 software packages (May 2 2015)
  Cross-Cutting Functions:
  – 1) Message and Data Protocols: Avro, Thrift, Protobuf
  – 2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
  – 3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
  – 4) Monitoring: Ambari, Ganglia, Nagios, Inca
  Layers:
  – 17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad, Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA), Jitterbit, Talend, Pentaho, Apatar
  – 16) Application and Analytics: Mahout, MLlib, MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL (Intel), Caffe, Torch, Theano, DL4j, H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, MapGraph, IBM System G, GraphBuilder (Intel), TinkerPop, Google Fusion Tables, CINET, NWB, Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree
  – 15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT, Agave, Atmosphere
  – 15A) High-level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq, Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
  – 14B) Streams: Storm, S4, Samza, Granules, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream Analytics
  – 14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, Stratosphere (Apache Flink), Reef, Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi
  – 13) Inter-process communication (collectives, point-to-point, publish-subscribe): MPI, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT; Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs
  – 12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan
  – 12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
  – 12) Extraction Tools: UIMA, Tika
  – 11C) SQL (NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB
  – 11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB, Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame; Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
  – 11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
  – 10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
  – 9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm, Torque, Globus Tools, Pilot Jobs
  – 8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS; Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
  – 7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
  – 6) DevOps: Docker, Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive
  – 5) IaaS Management from HPC to hypervisors: Xen, KVM, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula, Eucalyptus, Nimbus, CloudStack, CoreOS, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public Clouds; Networking: Google Cloud DNS, Amazon Route 53
  • 4. (Repeats the slide 3 HPC-ABDS kaleidoscope chart.) There are a lot of Big Data and HPC software systems in the 17 (21) layers. Build on – do not compete with – the over 300 HPC-ABDS systems.
  • 5. NIST Big Data Initiative, led by Chaitan Baru, Bob Marcus, and Wo Chang
  • 6. NBD-PWG (NIST Big Data Public Working Group) Subgroups & Co-Chairs • There were 5 subgroups – note mainly industry • Requirements and Use Cases SG: Geoffrey Fox, Indiana U.; Joe Paiva, VA; Tsegereda Beyene, Cisco • Definitions and Taxonomies SG: Nancy Grady, SAIC; Natasha Balac, SDSC; Eugene Luster, R2AD • Reference Architecture SG: Orit Levin, Microsoft; James Ketner, AT&T; Don Krapohl, Augmented Intelligence • Security and Privacy SG: Arnab Roy, CSA/Fujitsu; Nancy Landreville, U. MD; Akhil Manchanda, GE • Technology Roadmap SG: Carl Buffington, Vistronix; Dan McClary, Oracle; David Boyd, Data Tactics • See http://bigdatawg.nist.gov/usecases.php • And http://bigdatawg.nist.gov/V1_output_docs.php
  • 7. Comment period ends May 21: http://bigdatawg.nist.gov/V1_output_docs.php
  • 8. Use Case Template • 26 fields completed for 51 areas • Government Operation: 4 • Commercial: 8 • Defense: 3 • Healthcare and Life Sciences: 10 • Deep Learning and Social Media: 6 • The Ecosystem for Research: 4 • Astronomy and Physics: 5 • Earth, Environmental and Polar Science: 10 • Energy: 1
  • 9. 51 Detailed Use Cases, contributed July-September 2013. Covers goals and data features such as the 3 V’s, software, hardware (26 features for each use case; biased to science) • http://bigdatawg.nist.gov/usecases.php • https://bigdatacoursespring2014.appspot.com/course (Section 5) • Government Operation (4): National Archives and Records Administration, Census Bureau • Commercial (8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo shipping (as in UPS) • Defense (3): Sensors, Image surveillance, Situation Assessment • Healthcare and Life Sciences (10): Medical records, Graph and Probabilistic analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity • Deep Learning and Social Media (6): Driving Car, Geolocate images/cameras, Twitter, Crowd Sourcing, Network Science, NIST benchmark datasets • The Ecosystem for Research (4): Metadata, Collaboration, Language Translation, Light source experiments • Astronomy and Physics (5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan • Earth, Environmental and Polar Science (10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET gas sensors • Energy (1): Smart grid
  • 10. Table 4: Characteristics of 6 Distributed Applications (part of Property Summary Table)
  – Montage: multiple sequential and parallel executables; communication via files; dataflow (DAG) coordination; dynamic process creation and execution
  – NEKTAR: multiple concurrent parallel executables; stream-based communication; dataflow coordination; co-scheduling, data streaming, async. I/O
  – Replica-Exchange: multiple sequential and parallel executables; pub/sub communication; dataflow and events; decoupled coordination and messaging
  – Climate Prediction (generation): multiple sequential and parallel executables; files and messages; master-worker, events; @Home (BOINC)
  – Climate Prediction (analysis): multiple sequential and parallel executables; files and messages; dataflow; dynamic process creation, workflow execution
  – SCOOP: multiple executables; files and messages; dataflow; preemptive scheduling, reservations
  – Coupled Fusion: multiple executables; stream-based communication; dataflow; co-scheduling, data streaming, async. I/O
  • 11. Features and Examples with a streaming tinge …
  • 12. 51 Use Cases: What is Parallelism Over? • People: either the users (but see below) or subjects of the application, and often both • Decision makers like researchers or doctors (users of the application) • Items such as images, EMR, sequences below; observations or contents of an online store – Images or “Electronic Information nuggets” – EMR: Electronic Medical Records (often similar to people parallelism) – Protein or gene sequences – Material properties, manufactured object specifications, etc., in custom datasets – Modelled entities like vehicles and people • Sensors – Internet of Things • Events such as detected anomalies in telescope, credit card, or atmospheric data • (Complex) Nodes in an RDF graph • Simple nodes as in a learning network • Tweets, blogs, documents, web pages, etc. – and the characters/words in them • Files or data to be backed up, moved, or assigned metadata • Particles/cells/mesh points as in parallel simulations
  • 13. Features of 51 Use Cases I • PP (26) “All” Pleasingly Parallel or Map Only • MR (18) Classic MapReduce (add MRStat below for full count) • MRStat (7) Simple version of MR where key computations are simple reductions as found in statistical averages such as histograms and averages • MRIter (23) Iterative MapReduce or MPI (Spark, Twister) • Graph (9) Complex graph data structure needed in analysis • Fusion (11) Integrate diverse data to aid discovery/decision making; could involve sophisticated algorithms or could just be a portal • Streaming (41) Some data comes in incrementally and is processed this way • Classify (30) Classification: divide data into categories • S/Q (12) Index, Search and Query
  • 14. Features of 51 Use Cases II • CF (4) Collaborative Filtering for recommender engines • LML (36) Local Machine Learning (independent for each parallel entity) – an application could have GML as well • GML (23) Global Machine Learning: Deep Learning, Clustering, LDA, PLSI, MDS – large-scale optimizations as in Variational Bayes, MCMC, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. Can call this EGO, or Exascale Global Optimization, with scalable parallel algorithms • Workflow (51) Universal • GIS (16) Geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer, etc. • HPC (5) Classic large-scale simulation of cosmos, materials, etc., generating (visualization) data • Agent (2) Simulations of models of data-defined macroscopic entities represented as agents
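The MRIter and GML facets above describe the iterative map-plus-collective pattern behind clustering and the other global analytics listed. A toy pure-Python sketch of k-means in that style (the 1-D data and starting centroids are illustrative, not taken from the use cases):

```python
# Iterative MapReduce (MRIter) sketch: k-means as repeated map + collective reduce.
def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Map phase: pair each point with the index of its nearest centroid.
        assignments = [(min(range(len(centroids)),
                            key=lambda c: (points[i] - centroids[c]) ** 2), points[i])
                       for i in range(len(points))]
        # Collective (reduce) phase: average the points assigned to each centroid.
        centroids = [sum(p for c, p in assignments if c == k) /
                     max(1, sum(1 for c, _ in assignments if c == k))
                     for k in range(len(centroids))]
    return centroids

# 1-D example with two clear clusters around 1 and 10.
print(kmeans([0.0, 1.0, 2.0, 9.0, 10.0, 11.0], [0.0, 5.0]))  # → [1.0, 10.0]
```

Each pass is one MRIter iteration: the map assigns, the collective averages. Runtimes like Spark and Twister keep such state in memory across iterations instead of re-reading input each time, as classic Hadoop MapReduce would.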
  • 15. Internet of Things and Streaming Apps • It is projected that there will be 24 (Mobile Industry Group) to 50 (Cisco) billion devices on the Internet by 2020. • The cloud is the natural controller of, and resource provider for, the Internet of Things. • Smart phones/watches, wearable devices (Smart People), “Intelligent River”, “Smart Homes and Grid”, “Ubiquitous Cities”, robotics. • The majority of use cases (41/51) are streaming – experimental science gathers data in a stream, sometimes batched as in a field trip. Below is a sample: • 10: Cargo Shipping Tracking as in UPS, FedEx – PP GIS LML • 13: Large Scale Geospatial Analysis and Visualization – PP GIS LML • 28: Truthy: Information diffusion research from Twitter Data – PP MR for search, GML for community determination • 39: Particle Physics: Analysis of LHC Large Hadron Collider Data: Discovery of Higgs particle – PP, local processing, global statistics • 50: DOE-BER AmeriFlux and FLUXNET Networks – PP GIS LML • 51: Consumption forecasting in Smart Grids – PP GIS LML
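The Streaming facet (41 of 51 use cases) means statistics are updated as events arrive rather than after a batch completes. A minimal sketch of that incremental style, computing a sliding-window mean over an illustrative sensor stream (the values are invented for the example):

```python
from collections import deque

# Streaming sketch: process events incrementally with a bounded sliding window,
# emitting one running statistic per arrival instead of batching all data first.
def sliding_mean(stream, window=3):
    buf, means = deque(maxlen=window), []
    for event in stream:                      # each event arrives incrementally
        buf.append(event)                     # oldest event falls out automatically
        means.append(sum(buf) / len(buf))     # statistic available immediately
    return means

print(sliding_mean([2.0, 4.0, 6.0, 8.0]))  # → [2.0, 3.0, 4.0, 6.0]
```

Systems like Storm or Samza in layer 14B apply the same idea at scale, partitioning the stream across workers and keeping only bounded window state per worker.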
  • 16. Growth of Internet of Things, December 2014 • Currently phones etc., but will change to “real things”
  • 17. Big Data Patterns – the Big Data present and parallel computing past
  • 18. 7 Computational Giants of NRC Massive Data Analysis Report 1) G1: Basic Statistics, e.g. MRStat 2) G2: Generalized N-Body Problems 3) G3: Graph-Theoretic Computations 4) G4: Linear Algebraic Computations 5) G5: Optimizations, e.g. Linear Programming 6) G6: Integration, e.g. LDA and other GML 7) G7: Alignment Problems, e.g. BLAST http://www.nap.edu/catalog.php?record_id=18374
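Giant G1 (Basic Statistics) corresponds to the MRStat pattern defined earlier: the map phase bins each record, and the reduction is purely additive, as in HEP histogramming. A minimal sketch (the bin width and records are illustrative):

```python
from collections import Counter

# MRStat sketch (Giant G1): map each record to a histogram bin, then merge
# per-partition counts with a simple additive reduction -- the HEP-histogram pattern.
def map_phase(records, bin_width=10):
    return Counter(int(r // bin_width) for r in records)

def reduce_phase(partials):
    total = Counter()
    for p in partials:
        total += p          # the reduction is just addition, hence "MRStat"
    return total

# Two partitions of illustrative measurements, mapped independently then reduced.
parts = [map_phase([3, 7, 12]), map_phase([15, 25])]
print(dict(reduce_phase(parts)))
```

Because the reduce is associative and commutative, partial counts can be combined in any order, which is why MRStat workloads scale so easily on classic MapReduce.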
  • 19. HPC Benchmark Classics • Linpack or HPL: Parallel LU factorization for solution of linear equations • NPB version 1: Mainly classic HPC solver kernels – MG: Multigrid – CG: Conjugate Gradient – FT: Fast Fourier Transform – IS: Integer Sort – EP: Embarrassingly Parallel – BT: Block Tridiagonal – SP: Scalar Pentadiagonal – LU: Lower-Upper symmetric Gauss-Seidel
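Linpack/HPL measures the rate of solving Ax = b via LU factorization. A tiny pure-Python Doolittle factorization with forward and back substitution shows the kernel; this is a sketch only – real HPL adds partial pivoting and blocked, distributed panel updates:

```python
# Minimal LU factorization (Doolittle, no pivoting) -- the kernel that
# Linpack/HPL benchmarks at scale with pivoting and blocked panel updates.
def lu(a):
    n = len(a)
    l = [[float(i == j) for j in range(n)] for i in range(n)]
    u = [row[:] for row in a]
    for k in range(n):
        for i in range(k + 1, n):
            l[i][k] = u[i][k] / u[k][k]          # elimination multiplier
            for j in range(k, n):
                u[i][j] -= l[i][k] * u[k][j]     # zero out below the pivot
    return l, u

def solve(a, b):
    l, u = lu(a)
    n = len(b)
    y = [0.0] * n
    for i in range(n):                            # forward substitution: Ly = b
        y[i] = b[i] - sum(l[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):                  # back substitution: Ux = y
        x[i] = (y[i] - sum(u[i][j] * x[j] for j in range(i + 1, n))) / u[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # x ≈ [0.8, 1.4]
```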
  • 20. 13 Berkeley Dwarfs 1) Dense Linear Algebra 2) Sparse Linear Algebra 3) Spectral Methods 4) N-Body Methods 5) Structured Grids 6) Unstructured Grids 7) MapReduce 8) Combinational Logic 9) Graph Traversal 10) Dynamic Programming 11) Backtrack and Branch-and-Bound 12) Graphical Models 13) Finite State Machines. The first 6 of these correspond to Colella’s original list; Monte Carlo was dropped, and N-body methods are a subset of Particle in Colella. Note this is a little inconsistent, in that MapReduce is a programming model while spectral methods are a numerical method – we need multiple facets!
  • 21. What’s after Giants and Dwarfs? Ogres … Facets of the Ogres
  • 22. Introducing Big Data Ogres and their Facets I • Big Data Ogres provide a systematic approach to understanding applications, and as such they have facets which represent key characteristics defined both from our experience and from a bottom-up study of features of several individual applications. • The facets capture common characteristics (shared by several problems) which are inevitably multi-dimensional and often overlapping. • Ogre characteristics are cataloged in four distinct dimensions or views. • Each view consists of facets; when multiple facets are linked together, they describe classes of big data problems represented as an Ogre. • Instances of Ogres are particular big data problems. • A set of Ogre instances that covers a rich set of facets could form a benchmark set. • Ogres and their instances can be atomic or composite.
  • 23. Introducing Big Data Ogres and their Facets II • Ogre characteristics are cataloged in four distinct dimensions or views; each view consists of facets which, linked together, describe classes of big data problems. • One view of an Ogre is the overall problem architecture, which is naturally related to the machine architecture needed to support data-intensive applications while still being different. • Then there is the execution (computational) features view, describing issues such as I/O versus compute rates, the iterative nature of computation, and the classic V’s of Big Data: defining problem size, rate of change, etc. • The data source & style view includes facets specifying how the data is collected, stored and accessed. • The final processing view has facets which describe classes of processing steps, including algorithms and kernels. Ogres are specified by the particular values of a set of facets linked from the different views.
  • 24. [Figure: Ogre Views and 50 Facets – four linked views. Problem Architecture View (12 facets): Pleasingly Parallel, Classic MapReduce, Map-Collective, Map Point-to-Point, Map Streaming, Shared Memory, Single Program Multiple Data, Bulk Synchronous Parallel, Fusion, Dataflow, Agents, Workflow. Execution View (14 facets): Performance Metrics; Flops per Byte, Memory I/O; Execution Environment, Core Libraries; Volume; Velocity; Variety; Veracity; Communication Structure; Dynamic vs Static; Regular vs Irregular; Iterative or Simple; Data Abstraction; Metric vs Non-Metric; O(N²) vs O(N). Data Source and Style View (10 facets): SQL/NoSQL/NewSQL, Enterprise Data Model, Files/Objects, HDFS/Lustre/GPFS, Archived/Batched/Streaming, Shared/Dedicated/Transient/Permanent, Metadata/Provenance, Internet of Things, HPC Simulations, Geospatial Information System. Processing View (14 facets): Micro-benchmarks, Local Analytics, Global Analytics, Optimization Methodology, Streaming, Alignment, Linear Algebra Kernels, Graph Algorithms, Deep Learning, Classification, Recommender Engine, Search/Query/Index, Basic Statistics, Visualization.]
  • 25. Facets of the Ogres: Meta or Macro Aspects – Problem Architecture
  • 26. Problem Architecture View of Ogres (Meta or MacroPatterns)
  i. Pleasingly Parallel – as in BLAST, protein docking, some (bio-)imagery; includes local analytics or machine learning (ML or filtering that is pleasingly parallel, as in bio-imagery and radar images – pleasingly parallel but with sophisticated local analytics)
  ii. Classic MapReduce: search, index and query, and classification algorithms like collaborative filtering (G1 for MRStat in Features, G7)
  iii. Map-Collective: iterative maps + communication dominated by “collective” operations such as reduction, broadcast, gather, scatter. A common data-mining pattern
  iv. Map-Point to Point: iterative maps + communication dominated by many small point-to-point messages, as in graph algorithms
  v. Map-Streaming: describes streaming, steering and assimilation problems
  vi. Shared Memory: some problems are asynchronous and are easier to parallelize on shared rather than distributed memory – see some graph algorithms
  vii. SPMD: Single Program Multiple Data, a common parallel programming feature
  viii. BSP or Bulk Synchronous Processing: well-defined compute-communication phases
  ix. Fusion: knowledge discovery often involves fusion of multiple methods
  x. Dataflow: important application feature, often occurring in composite Ogres
  xi. Use Agents: as in epidemiology (swarm approaches)
  xii. Workflow: all applications often involve orchestration (workflow) of multiple components
  Note: problem and machine architectures are related.
  • 27. Hardware, Software, Applications • In my old papers (especially the book Parallel Computing Works!), I discussed computing as multiple complex systems mapped into each other: Problem → Numerical formulation → Software → Hardware • Each of these 4 systems has an architecture that can be described in similar language • One gets an easy programming model if the architecture of the problem matches that of the software • One gets good performance if the architecture of the hardware matches that of the software and problem • So “MapReduce” can be used as an architecture of software (a programming model) or of the “numerical formulation of the problem”
  • 28. 8 Data Analysis Problem Architectures
    1) Pleasingly Parallel PP or “map-only” in MapReduce – BLAST analysis; local machine learning
    2A) Classic MapReduce MR, map followed by reduction – High Energy Physics (HEP) histograms; web search; recommender engines
    2B) Simple version of classic MapReduce MRStat – final reduction is just simple statistics
    3) Iterative MapReduce MRIter – expectation maximization, clustering, linear algebra, PageRank
    4A) Map Point-to-Point Communication – classic MPI; PDE solvers and particle dynamics; graph processing
    4B) GPU (accelerator) enhanced 4A) – especially for deep learning
    5) Map + Streaming + Communication – images from synchrotron sources; telescopes; Internet of Things IoT
    6) Shared memory allowing parallel threads, which are tricky to program but lower latency – difficult-to-parallelize asynchronous graph algorithms
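The first three architectures above differ mainly in whether the map-reduce pair repeats. A minimal single-process sketch of pattern 3 (Iterative MapReduce, MRIter) — illustrative only, using k-means in plain Python rather than any framework named on the slides:

```python
# Minimal single-process sketch of the MRIter (Iterative MapReduce) pattern:
# map assigns each point to its nearest center, reduce averages each group,
# and the map-reduce pair repeats until the centers stop moving.

def kmeans_mriter(points, centers, max_iter=50, tol=1e-9):
    for _ in range(max_iter):
        # Map: emit (center_index, point) pairs
        groups = {i: [] for i in range(len(centers))}
        for p in points:
            i = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            groups[i].append(p)
        # Reduce: new center = mean of its group (keep old center if empty)
        new_centers = [sum(g) / len(g) if g else centers[i]
                       for i, g in groups.items()]
        if max(abs(a - b) for a, b in zip(centers, new_centers)) < tol:
            return new_centers
        centers = new_centers
    return centers

# Two well-separated 1-D clusters converge to their means, ~[1.0, 9.0]
pts = [0.9, 1.0, 1.1, 8.9, 9.0, 9.1]
print(kmeans_mriter(pts, [0.0, 5.0]))
```

In pattern 2A the loop body runs once; in pattern 1 the reduce step disappears entirely.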
  • 29. 6 Forms of MapReduce (diagram): (1) Map Only – pleasingly parallel, local analytics; (2) Classic MapReduce – input → map → reduce – basic statistics; (3) Iterative MapReduce or Map-Collective – iterated map → reduce – iterative MR; (4) Point to Point or Map-Communication – local, graph; (5) Map-Streaming – maps fed by brokers and events – streaming; (6) Shared Memory – map & communicate – shared memory
  • 30. Facets in the Execution Features Views 5/4/2015 30
  • 31. One View of Ogres has Facets that are micropatterns or Execution Features
    i. Performance Metrics: properties found by benchmarking the Ogre
    ii. Flops per byte: memory or I/O
    iii. Execution Environment: core libraries needed (matrix-matrix/vector algebra, conjugate gradient, reduction, broadcast); Cloud, HPC etc.
    iv. Volume: property of an Ogre instance
    v. Velocity: qualitative property of an Ogre, with value associated with the instance
    vi. Variety: important property, especially of composite Ogres
    vii. Veracity: important property of “mini-applications” but not kernels
    viii. Communication Structure: interconnect requirements; is communication BSP, Asynchronous, Pub-Sub, Collective, Point to Point?
    ix. Is the application (graph) static or dynamic?
    x. Most applications consist of a set of interconnected entities; is this regular, as in a set of pixels, or a complicated irregular graph?
    xi. Are algorithms iterative or not?
    xii. Data Abstraction: key-value, pixel, graph (G3), vector, bags of words or items
    xiii. Are data points in metric or non-metric spaces?
    xiv. Is the algorithm O(N²) or O(N) (up to logs) for N points per iteration (G2)?
  • 32. Facets of the Ogres Data Source and Style Aspects 5/4/2015 32
  • 33. Data Source and Style View of Ogres I
    i. SQL, NewSQL or NoSQL: NoSQL includes Document, Column, Key-value, Graph and Triple stores; NewSQL is SQL redone to exploit NoSQL performance
    ii. Other Enterprise data systems: 10 examples from NIST integrate SQL/NoSQL
    iii. Set of Files or Objects: as managed in iRODS and extremely common in scientific research
    iv. File systems, Object, Blob and Data-parallel (HDFS) raw storage: separated from computing or colocated? HDFS v. Lustre v. OpenStack Swift v. GPFS
    v. Archive/Batched/Streaming: streaming is incremental update of datasets with new algorithms to achieve real-time response (G7); before data reaches the compute system, there is often an initial data-gathering phase characterized by a block size and timing. Block size varies from a month (remote sensing, seismic) to a day (genomic) to seconds or lower (real-time control, streaming)
  • 34. Data Source and Style View of Ogres II
    vi. Shared/Dedicated/Transient/Permanent: qualitative property of data; other characteristics are needed for permanent auxiliary/comparison datasets, and these could be interdisciplinary, implying nontrivial data movement/replication
    vii. Metadata/Provenance: a clear qualitative property, though not for kernels, as an important aspect of the data collection process
    viii. Internet of Things: 24 to 50 billion devices on the Internet by 2020
    ix. HPC simulations: generate major (visualization) output that often needs to be mined
    x. Using GIS: Geographical Information Systems provide attractive access to geospatial data
    Note: 10 use cases from Bob Marcus (who led the NIST effort)
  • 35. Use case 2: Perform real-time analytics on data source streams and notify users when specified events occur – Storm, Kafka, HBase, Zookeeper. (Diagram: streaming data passes through a filter identifying events; identified events are posted to a repository and archive; users specify the filter and fetch selected events.)
  • 36. Use case 5: Perform interactive analytics on data in an analytics-optimized database – Hadoop, Spark, Giraph, Pig, Mahout, R. (Diagram: data – streaming, batch, … – flows into data storage: HDFS, HBase.)
  • 37. Use case 5A: Perform interactive analytics on observational scientific data – Grid or Many Task software, Hadoop, Spark, Giraph, Pig; analysis code, Mahout, R. (Diagram: scientific data recorded in the “field” is locally accumulated with initial computing, then transported – in batch or by direct transfer – to the primary analysis data system with storage in HDFS, HBase or file collections; streaming Twitter data feeds social networking science.) NIST examples include LHC, remote sensing, astronomy and bioinformatics.
  • 38. Facets of the Ogres Processing View 5/4/2015 38
  • 39. Facets in Processing (run time) View of Ogres I
    i. Micro-benchmarks: Ogres that exercise simple features of hardware such as communication, disk I/O, CPU and memory performance
    ii. Local Analytics: executed on a single core or perhaps node
    iii. Global Analytics: requiring iterative programming models (G5, G6) across multiple nodes of a parallel system
    iv. Optimization Methodology (overlapping categories):
       1. Nonlinear Optimization (G6)
       2. Machine Learning
       3. Maximum Likelihood or χ² minimizations
       4. Expectation Maximization (often steepest descent)
       5. Combinatorial Optimization
       6. Linear/Quadratic Programming (G5)
       7. Dynamic Programming
    v. Visualization: a key application capability, with algorithms like MDS useful, but is itself part of a “mini-app” or composite Ogre
    vi. Alignment (G7): as in BLAST, compares samples with a repository
  • 40. Facets in Processing (run time) View of Ogres II
    vii. Streaming: divided into 5 categories depending on event size, synchronization and integration
       – Set of independent events where precise time sequencing is unimportant
       – Time series of connected small events where time ordering is important
       – Set of independent large events where each event needs parallel processing, with time sequencing not critical
       – Set of connected large events where each event needs parallel processing, with time sequencing critical
       – Stream of connected small or large events to be integrated in a complex way
    viii. Basic Statistics (G1): MRStat in NIST problem features
    ix. Search/Query/Index: classic database operations, well studied (Baru, Rabl tutorial)
    x. Recommender Engine: core to many e-commerce and media businesses; collaborative filtering is the key technology
    xi. Classification: assigning items to categories based on many methods
       – MapReduce good in Alignment, Basic Statistics, Search/Query/Index, Recommender, Classification
    xii. Deep Learning: of growing importance due to success in speech recognition etc.
    xiii. Problem set up as a graph (G3) as opposed to vector, grid, bag of words etc.
    xiv. Using Linear Algebra Kernels: much machine learning uses linear algebra kernels
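Facet x's collaborative filtering fits in a few lines: a hypothetical user-based scheme that predicts a rating as the similarity-weighted average of other users' ratings for the same item. All names and data below are invented for illustration:

```python
# Tiny user-based collaborative-filtering sketch (facet x): score an unseen
# item for a user by similarity-weighted ratings of other users.
from math import sqrt

def cosine(u, v):
    # Cosine similarity of two sparse rating vectors (dict item -> rating)
    common = set(u) & set(v)
    if not common:
        return 0.0
    num = sum(u[i] * v[i] for i in common)
    den = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return num / den

def predict(ratings, user, item):
    # Weighted average of ratings for `item` by users similar to `user`
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        w = cosine(ratings[user], r)
        num += w * r[item]
        den += abs(w)
    return num / den if den else 0.0

ratings = {
    "ann": {"a": 5, "b": 4},
    "bob": {"a": 5, "b": 4, "c": 5},
    "cid": {"a": 1, "c": 2},
}
# "ann" agrees with "bob" far more than with "cid", so "c" scores high
print(round(predict(ratings, "ann", "c"), 2))
```

Production systems replace the loop over all users with nearest-neighbor search or matrix factorization, but the weighting idea is the same.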
  • 42. Benchmarks based on Ogres Analytics 5/4/2015 42
  • 43. Benchmarks/Mini-apps spanning Facets
    • Look at the NSF SPIDAL project, the 51 NIST use cases, and the Baru-Rabl review
    • Catalog facets of benchmarks and choose entries to cover “all facets”
    • Micro Benchmarks: SPEC, EnhancedDFSIO (HDFS), Terasort, Wordcount, Grep, MPI, basic Pub-Sub, …
    • SQL and NoSQL data systems, search, recommenders: TPC (-C to x-HS for Hadoop), BigBench, Yahoo Cloud Serving, Berkeley Big Data, HiBench, BigDataBench, Cloudsuite, Linkbench – includes MapReduce cases: search, Bayes, random forests, collaborative filtering
    • Spatial Query: select from image or earth data
    • Alignment: biology, as in BLAST
    • Streaming: online classifiers, clustering tweets, robotics, Industrial Internet of Things, astronomy; BGBenchmark; choose to cover all 5 subclasses
    • Pleasingly Parallel (Local Analytics): as in initial steps of LHC, pathology, bioimaging (differing in type of data analysis)
    • Global Analytics: outlier detection, clustering, LDA, SVM, deep learning, MDS, PageRank, Levenberg-Marquardt, Graph 500 entries
    • Workflow and Composite (analytics on xSQL) linking the above
  • 45. Kaleidoscope of (Apache) Big Data Stack (ABDS) and HPC Technologies – 21 layers, over 300 software packages, May 2 2015
    Cross-Cutting Functions
    1) Message and Data Protocols: Avro, Thrift, Protobuf
    2) Distributed Coordination: Google Chubby, Zookeeper, Giraffe, JGroups
    3) Security & Privacy: InCommon, Eduroam, OpenStack Keystone, LDAP, Sentry, Sqrrl, OpenID, SAML, OAuth
    4) Monitoring: Ambari, Ganglia, Nagios, Inca
    Layers (top to bottom)
    17) Workflow-Orchestration: ODE, ActiveBPEL, Airavata, Pegasus, Kepler, Swift, Taverna, Triana, Trident, BioKepler, Galaxy, IPython, Dryad, Naiad, Oozie, Tez, Google FlumeJava, Crunch, Cascading, Scalding, e-Science Central, Azure Data Factory, Google Cloud Dataflow, NiFi (NSA), Jitterbit, Talend, Pentaho, Apatar
    16) Application and Analytics: Mahout, MLlib, MLbase, DataFu, R, pbdR, Bioconductor, ImageJ, OpenCV, Scalapack, PetSc, Azure Machine Learning, Google Prediction API & Translation API, mlpy, scikit-learn, PyBrain, CompLearn, DAAL (Intel), Caffe, Torch, Theano, DL4j, H2O, IBM Watson, Oracle PGX, GraphLab, GraphX, MapGraph, IBM System G, GraphBuilder (Intel), TinkerPop, Google Fusion Tables, CINET, NWB, Elasticsearch, Kibana, Logstash, Graylog, Splunk, Tableau, D3.js, three.js, Potree
    15B) Application Hosting Frameworks: Google App Engine, AppScale, Red Hat OpenShift, Heroku, Aerobatic, AWS Elastic Beanstalk, Azure, Cloud Foundry, Pivotal, IBM BlueMix, Ninefold, Jelastic, Stackato, appfog, CloudBees, Engine Yard, CloudControl, dotCloud, Dokku, OSGi, HUBzero, OODT, Agave, Atmosphere
    15A) High level Programming: Kite, Hive, HCatalog, Tajo, Shark, Phoenix, Impala, MRQL, SAP HANA, HadoopDB, PolyBase, Pivotal HD/Hawq, Presto, Google Dremel, Google BigQuery, Amazon Redshift, Drill, Kyoto Cabinet, Pig, Sawzall, Google Cloud DataFlow, Summingbird
    14B) Streams: Storm, S4, Samza, Granules, Google MillWheel, Amazon Kinesis, LinkedIn Databus, Facebook Puma/Ptail/Scribe/ODS, Azure Stream Analytics
    14A) Basic Programming model and runtime, SPMD, MapReduce: Hadoop, Spark, Twister, Stratosphere (Apache Flink), Reef, Hama, Giraph, Pregel, Pegasus, Ligra, GraphChi
    13) Inter-process communication – collectives, point-to-point, publish-subscribe: MPI, Harp, Netty, ZeroMQ, ActiveMQ, RabbitMQ, NaradaBrokering, QPid, Kafka, Kestrel, JMS, AMQP, Stomp, MQTT; Public Cloud: Amazon SNS, Lambda, Google Pub Sub, Azure Queues, Event Hubs
    12) In-memory databases/caches: Gora (general object from NoSQL), Memcached, Redis, LMDB (key value), Hazelcast, Ehcache, Infinispan
    12) Object-relational mapping: Hibernate, OpenJPA, EclipseLink, DataNucleus, ODBC/JDBC
    12) Extraction Tools: UIMA, Tika
    11C) SQL (NewSQL): Oracle, DB2, SQL Server, SQLite, MySQL, PostgreSQL, CUBRID, Galera Cluster, SciDB, Rasdaman, Apache Derby, Pivotal Greenplum, Google Cloud SQL, Azure SQL, Amazon RDS, Google F1, IBM dashDB, N1QL, BlinkDB
    11B) NoSQL: Lucene, Solr, Solandra, Voldemort, Riak, Berkeley DB, Kyoto/Tokyo Cabinet, Tycoon, Tyrant, MongoDB, Espresso, CouchDB, Couchbase, IBM Cloudant, Pivotal Gemfire, HBase, Google Bigtable, LevelDB, Megastore and Spanner, Accumulo, Cassandra, RYA, Sqrrl, Neo4J, Yarcdata, AllegroGraph, Blazegraph, Facebook Tao, Titan:db, Jena, Sesame; Public Cloud: Azure Table, Amazon Dynamo, Google DataStore
    11A) File management: iRODS, NetCDF, CDF, HDF, OPeNDAP, FITS, RCFile, ORC, Parquet
    10) Data Transport: BitTorrent, HTTP, FTP, SSH, Globus Online (GridFTP), Flume, Sqoop, Pivotal GPLOAD/GPFDIST
    9) Cluster Resource Management: Mesos, Yarn, Helix, Llama, Google Omega, Facebook Corona, Celery, HTCondor, SGE, OpenPBS, Moab, Slurm, Torque, Globus Tools, Pilot Jobs
    8) File systems: HDFS, Swift, Haystack, f4, Cinder, Ceph, FUSE, Gluster, Lustre, GPFS, GFFS; Public Cloud: Amazon S3, Azure Blob, Google Cloud Storage
    7) Interoperability: Libvirt, Libcloud, JClouds, TOSCA, OCCI, CDMI, Whirr, Saga, Genesis
    6) DevOps: Docker, Puppet, Chef, Ansible, SaltStack, Boto, Cobbler, Xcat, Razor, CloudMesh, Juju, Foreman, OpenStack Heat, Rocks, Cisco Intelligent Automation for Cloud, Ubuntu MaaS, Facebook Tupperware, AWS OpsWorks, OpenStack Ironic, Google Kubernetes, Buildstep, Gitreceive
    5) IaaS Management from HPC to hypervisors: Xen, KVM, Hyper-V, VirtualBox, OpenVZ, LXC, Linux-Vserver, OpenStack, OpenNebula, Eucalyptus, Nimbus, CloudStack, CoreOS, VMware ESXi, vSphere and vCloud, Amazon, Azure, Google and other public clouds; Networking: Google Cloud DNS, Amazon Route 53
  • 46. (Same 21-layer HPC-ABDS kaleidoscope as the previous slide.) There are a lot of Big Data and HPC software systems in the 17 (21) layers. Build on – do not compete with – the over 300 HPC-ABDS systems.
  • 47. 6 Forms of MapReduce – repeat of slide 29’s diagram.
  • 48. 8 Data Analysis Problem Architectures – repeat of slide 28.
  • 49. Functionality of 21 HPC-ABDS Layers (including 11, 14, 15 subparts): 4 cross-cutting at the top, then 17 in order of the layered diagram starting at the bottom. Let’s discuss how these are used in particular applications.
    1) Message Protocols
    2) Distributed Coordination
    3) Security & Privacy
    4) Monitoring
    5) IaaS Management from HPC to hypervisors
    6) DevOps
    7) Interoperability
    8) File systems
    9) Cluster Resource Management
    10) Data Transport
    11) A) File management B) NoSQL C) SQL
    12) In-memory databases & caches / Object-relational mapping / Extraction tools
    13) Inter-process communication: collectives, point-to-point, publish-subscribe, MPI
    14) A) Basic programming model and runtime, SPMD, MapReduce B) Streaming
    15) A) High level programming B) Frameworks
    16) Application and Analytics
    17) Workflow-Orchestration
  • 50. Exemplar Software for a Big Data Initiative – Functionality of ABDS and Performance of HPC
    • Workflow: Apache Crunch, Python or Kepler
    • Data Analytics: Mahout, R, ImageJ, Scalapack
    • High level Programming: Hive, Pig
    • Batch Parallel Programming model: Hadoop, Spark, Giraph, Harp, MPI
    • Streaming Programming model: Storm, Kafka or RabbitMQ
    • In-memory: Memcached
    • Data Management: HBase, MongoDB, MySQL
    • Distributed Coordination: Zookeeper
    • Cluster Management: Yarn, Slurm
    • File Systems: HDFS, Object store (Swift), Lustre
    • DevOps: Cloudmesh, Chef, Puppet, Docker, Cobbler
    • IaaS: Amazon, Azure, OpenStack, Docker, SR-IOV
    • Monitoring: Inca, Ganglia, Nagios
  • 51. ABDS v. HPC Architecture 5/4/2015 51
  • 52. HPC-ABDS Stack Summarized I
    • The HPC-ABDS software is broken up into 21 layers so that one can discuss software systems in reasonable-size groups. The layers where there is special opportunity to integrate HPC are colored green in the figure.
    • Data systems constructed from this software can run interoperably on virtualized or non-virtualized environments aimed at key scientific data analysis problems.
    • Most of ABDS emphasizes scalability but not performance; one of our goals is to produce high-performance environments. Here there is a clear need for better node performance and support of accelerators like Xeon Phi and GPUs.
    • The figure “ABDS v. HPC Architecture” contrasts modern ABDS and HPC stacks, illustrating most of the 21 layers and labelling each on the left with the layer number used in the HPC-ABDS figure.
    • The layers omitted from the architecture figure are Interoperability, DevOps, Monitoring and Security (layers 7, 6, 4, 3), which are all important and clearly applicable to both HPC and ABDS.
    • We also add an extra “language” layer not discussed in the HPC-ABDS figure.
  • 53. HPC-ABDS Stack Summarized II
    • Lower layers where HPC can make a major impact include the scheduling layer 9, where Apache technologies like Yarn and Mesos need to be integrated with the sophisticated HPC cluster and HTC approaches.
    • Storage layer 8 is another important area, where HPC distributed and parallel storage environments need to be reconciled with the “data parallel” storage seen in HDFS in many ABDS systems.
    • However the most important issues are probably at the higher layers: data management (11), communication (13), (high-layer or basic) programming (15, 14), analytics (16) and orchestration (17). These are areas of rapid commodity/commercial innovation, and we discuss them in order below.
    • Much science data analysis is centered on files (8), but we expect movement to the common commodity approaches of object stores (8) and SQL and NoSQL (11), where the latter has a proliferation of systems with different characteristics – especially in the data abstraction, which varies over row/column, key-value, graph and documents.
    • Note recent developments at the programming layer (15A) like Apache Hive and Drill, which offer high-layer access models like SQL implemented on scalable NoSQL data systems. Generalize Drill to other views such as “FITS on anything” (astronomy), “Excel on anything” or “Matplotlib on anything”.
  • 54. HPC-ABDS Stack Summarized III
    • The communication layer (13) includes publish-subscribe technology used in many approaches to streaming data, as well as the HPC communication technologies (MPI), which are much faster than most default Apache approaches but can be added to some systems like Hadoop, whose modern version is modular and allows plug-ins for HPC stalwarts like MPI and sophisticated load balancers. Need to extend to Giraph and include load balancing.
    • The programming layer (14) includes both the classic batch processing typified by Hadoop (14A) and streaming by Storm (14B). Investigating Map-Streaming programming models seems important: ABDS streaming is nice but doesn’t support real-time or parallel processing.
    • The programming offerings (14) differ in approaches to data model (key-value, array, pixel, bags, graph), fault tolerance and communication. The trade-offs here have major performance implications. Too many ~identical programming systems! A recent survey of graph databases from Wikidata with 49 features chose BlazeGraph.
    • You also see systems like Apache Pig (15A) offering data-parallel interfaces.
    • At the high layer we see both programming models (15A) and Platform-as-a-Service toolkits (15B), where Google App Engine is well known but there are many entries, including the recent BlueMix from IBM.
  • 55. HPC-ABDS Stack Summarized IV
    • The orchestration or workflow layer 17 has seen an explosion of activity in the ABDS space, although with systems like Pegasus, Taverna, Kepler, Swift and IPython, HPC has substantial experience.
    • There are commodity orchestration dataflow systems like Tez and projects like Apache Crunch with a data-parallel emphasis based on ideas from Google FlumeJava. A modern version of the latter presumably underlies Google’s recently announced Cloud Dataflow, which unifies support of multiple batch and streaming components – a capability that we expect to become common.
    • The implementation of the analytics layer 16 depends on details of the orchestration and especially programming layers, but probably most important are quality parallel algorithms. As many machine learning algorithms involve linear algebra, HPC expertise is directly applicable, as is the fundamental mathematics needed to develop O(N log N) algorithms for analytics that are naively O(N²). Streaming (online) algorithms are an important new area of research.
  • 56. DevOps, Platforms and Orchestration
    • DevOps level 6 includes several automation capabilities, including systems like OpenStack Heat, Juju and Kubernetes to build virtual clusters, and a standard, TOSCA, that has several good studies from Stuttgart. TOSCA specifies the system to be instantiated and managed.
    • TOSCA is closely related to the workflow (orchestration) standard BPEL, which specifies the system to be executed.
    • In level 17, we should evaluate new orchestration systems from ABDS such as NiFi or Crunch and toolkits like Cascading, ensuring both streaming and batch are supported.
    • Level 15B has application hosting environments such as GAE, Heroku, Dokku (for Docker) and Jelastic. These platforms bring together a focused set of tools to address a finite but broad application area.
    • We should look at these 3 levels to build HPC and streaming systems.
  • 57. Analytics and the DIKW Pipeline
    • Data goes through a pipeline (Big Data is also Big Wisdom etc.): Raw data → Data → Information → Knowledge → Wisdom → Decisions
    • Each link is enabled by a filter, which is “business logic” or “analytics” – all filters are analytics
    • However I am most interested in filters that involve “sophisticated analytics”, which require non-trivial parallel algorithms – improving the state of the art in both algorithm quality and (parallel) performance
    • See Apache Crunch or Google Cloud Dataflow supporting pipelined analytics – and Pegasus, Taverna, Kepler from the Grid community
    (Diagram: Data →Analytics→ Information →More Analytics→ Knowledge)
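The pipeline of filters above can be sketched as plain function composition; the stages below are hypothetical stand-ins for "business logic" filters, not anything from the slides:

```python
# Illustrative DIKW pipeline: each stage is a "filter" (business logic or
# analytics) applied to the previous stage's output, composed left to right.
from functools import reduce

def pipeline(*filters):
    # Compose filters so pipeline(f, g, h)(x) == h(g(f(x)))
    return lambda data: reduce(lambda d, f: f(d), filters, data)

# Hypothetical stages: raw sensor strings -> numbers -> summary -> decision
clean = lambda raw: [float(x) for x in raw if x.strip()]          # Raw -> Data
summarize = lambda xs: {"mean": sum(xs) / len(xs), "n": len(xs)}  # Data -> Information
decide = lambda info: "alert" if info["mean"] > 10 else "ok"      # Information -> Decision

dikw = pipeline(clean, summarize, decide)
print(dikw(["3.0", "4.0", " ", "5.0"]))  # -> "ok"
```

Systems like Crunch or Cloud Dataflow apply the same composition idea with each filter running as a parallel (batch or streaming) stage.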
  • 58. Cloud DIKW based on HPC-ABDS to integrate streaming and batch Big Data. (Diagram: Internet of Things raw data enters via a Pub-Sub system into Storm-based streaming processing (iterative MapReduce); archival storage in a NoSQL store like HBase feeds batch processing (iterative MapReduce); orchestration/dataflow/workflow carries Data → Information → Knowledge → Wisdom → Decisions.)
  • 59. 5 Classes of Streaming Applications
    1. Set of independent small events where precise time sequencing is unimportant – e.g. independent search requests or tweets from users
    2. Time series of connected small events where time ordering is important – e.g. streaming audio or video; robot monitoring
    3. Set of independent large events where each event needs parallel processing, with time sequencing not critical – e.g. processing images from telescopes or light sources in the material or biological sciences
    4. Set of connected large events where each event needs parallel processing, with time sequencing critical – e.g. processing high-resolution monitoring (including video) information from robots (self-driving cars) with real-time response needed
    5. Stream of connected small or large events that need to be integrated in a complex way – e.g. tweets or other online data where we use them to update old clusters and find new ones, rather than just classifying tweets against previous clusters; i.e. we update the model as well as using it to classify events
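The fifth class — updating the model while using it to classify — is what separates streaming clustering from simple classification. A toy sequential sketch (threshold-based, 1-D; illustrative only, not the Storm implementation discussed later):

```python
# Sketch of the fifth streaming class: each event is classified against the
# current model AND updates it (sequential k-means style, threshold-based).

def stream_cluster(events, radius):
    centers, counts, labels = [], [], []
    for x in events:
        if centers:
            i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
        if not centers or abs(x - centers[i]) > radius:
            # Event is far from every center: a new cluster is discovered
            centers.append(x); counts.append(1); labels.append(len(centers) - 1)
        else:
            # Event is classified into cluster i AND moves its center
            counts[i] += 1
            centers[i] += (x - centers[i]) / counts[i]  # running mean update
            labels.append(i)
    return labels, centers

labels, centers = stream_cluster([1.0, 1.2, 9.0, 1.1, 9.2], radius=2.0)
print(labels)  # [0, 0, 1, 0, 1] – two clusters emerge from the stream
```

Classes 1 and 2 would drop the center update; classes 3 and 4 would parallelize the per-event work.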
  • 60. Science Streaming Examples Streaming Application Details 1 Data Assimilation Integrate data into simulations to enhance quality. Distributed Data sources 2 Analysis of Simulation Results Climate, Fusion, Molecular Dynamics, Materials. Typically local or in-situ data 3 Steering Experiments Control of simulation or Experiment. Data could be local or distributed 4 Astronomy, Light Sources Outlier event detection; classification; build model, accumulate data. Distributed Data sources 5 Environmental Sensors, Smart Grid, Transportation systems Environmental sensor data or Internet of Things; many small events. Distributed Data sources. 6 Robotics Cloud control of Robots including cars. Need processing near robot when decision time < ~1 second 7 Network Science: Online Classification Build model and classify with online (streaming) algorithms5/4/2015 60
  • 61. IoT Activities at Indiana University Parallel Clustering for Tweets: Judy Qiu, Emilio Ferrara, Xiaoming Gao Parallel Cloud Control for Robots: Supun Kamburugamuve, Hengjing He, David Crandall 5/4/2015 61
  • 62. IoTCloud
    • IoTCloud uses Zookeeper, Storm, HBase and RabbitMQ for robot cloud control
    • Focus on high-performance (parallel) control functions
    • Guaranteed real-time response
    (Figure: parallel simultaneous localization and mapping (SLAM) in the cloud)
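The latency measurements on the next slides follow a standard pattern: timestamp each message at publish, record the elapsed time at the subscriber. A broker-free sketch using an in-process queue as a stand-in (a real benchmark would publish through RabbitMQ or Kafka client libraries instead):

```python
# In-process stand-in for the broker latency measurements on the next slides:
# timestamp each message at publish, record elapsed time at the subscriber.
# queue.Queue here replaces a real broker such as RabbitMQ or Kafka.
import queue
import threading
import time

def measure_latency(n_msgs, payload_size):
    broker = queue.Queue()
    latencies = []

    def subscriber():
        # Consume every message and record publish-to-receive latency
        for _ in range(n_msgs):
            sent_at, _payload = broker.get()
            latencies.append(time.perf_counter() - sent_at)

    t = threading.Thread(target=subscriber)
    t.start()
    for _ in range(n_msgs):
        broker.put((time.perf_counter(), b"x" * payload_size))
    t.join()
    return sum(latencies) / len(latencies)

avg = measure_latency(1000, 128)
print(f"mean latency: {avg * 1e6:.1f} us")
```

Sweeping `payload_size` reproduces the shape of the message-size experiments, though the absolute numbers for a real broker are orders of magnitude larger than an in-process queue.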
  • 63. Latency with RabbitMQ and with Kafka for different message sizes in bytes. (Note the change in scales for latency and message size between the two charts.)
  • 64. Robot latency with Kafka & RabbitMQ: Kinect with Turtlebot and RabbitMQ; RabbitMQ versus Kafka.
  • 65. Parallel SLAM Simultaneous Localization and Mapping by Particle Filtering 5/4/2015 65
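Particle filtering, the core of the SLAM work above, iterates predict, weight, resample. A minimal 1-D localization sketch (illustrative only; the actual system parallelizes these steps and builds a map as well):

```python
# Minimal 1-D particle filter, the localization core of SLAM:
# predict (motion + noise), weight (sensor likelihood), resample.
import math
import random

def particle_filter_step(particles, control, measurement, noise=0.2):
    # Predict: apply the motion model with noise to every particle
    moved = [p + control + random.gauss(0, noise) for p in particles]
    # Weight: Gaussian likelihood of the range measurement at each particle
    weights = [math.exp(-((measurement - p) ** 2) / (2 * noise ** 2))
               for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]
for true_pos in (3.0, 4.0, 5.0):            # robot moves +1 per step
    particles = particle_filter_step(particles, 1.0, true_pos)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # converges near the true position, 5.0
```

The overhead slides that follow measure exactly the costly parts of scaling this loop out: moving particle data (I/O), garbage collection, and uneven work per particle group (load imbalance).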
  • 66. Parallel Overheads SLAM Simultaneous Localization and Mapping: I/O and Garbage Collection 5/4/2015 66
  • 67. Parallel Overheads SLAM Simultaneous Localization and Mapping: Load Imbalance overhead
  • 68. Parallel Tweet Clustering with Storm
    • Judy Qiu, Emilio Ferrara and Xiaoming Gao
    • Storm bolts coordinated by ActiveMQ to synchronize parallel cluster-center updates – adding loops to Storm
    • 2 million streaming tweets processed in 40 minutes; 35,000 clusters
    (Charts: sequential versus parallel – eventually 10,000 bolts)
  • 69. Parallel Tweet Clustering with Storm
    • Speedup on up to 96 bolts on two clusters, Moe and Madrid
    • Red curve is the old algorithm; green and blue are the new algorithm
    • Full Twitter – 1,000-way parallelism; full everything – 10,000-way parallelism
  • 71. SOIC Data Science Program
    • Cross-disciplinary faculty – 31 in the School of Informatics and Computing, a few in Statistics, and expanding across campus
    • Affordable online and traditional residential curricula, or a mix thereof
    • Masters, Certificate and PhD Minor in place; full PhD being studied
    • http://www.soic.indiana.edu/graduate/degrees/data-science/index.html
  • 72. IU Data Science Program
    • Program managed by cross-disciplinary Faculty in Data Science – currently the Department of Statistics and the School of Informatics and Computing, but the scope will expand to the full campus
    • A purely online 4-course Certificate in Data Science has been running since January 2014. For Fall 2015 we expect ~100 students enrolled, taking 1-2 classes per semester or year; most students are professionals taking courses in their “free time”
    • A campus-wide Ph.D. Minor in Data Science has been approved
    • Exploring a PhD in Data Science
    • Courses are labelled as “Decision-maker” and “Technical” paths; McKinsey reports an order of magnitude more unmet job openings (1.5 million by 2018) in the decision-maker track
  • 73. McKinsey Institute on Big Data Jobs
    • There will be a shortage of talent necessary for organizations to take advantage of big data. By 2018, the United States alone could face a shortage of 140,000 to 190,000 people with deep analytical skills, as well as 1.5 million managers and analysts with the know-how to use the analysis of big data to make effective decisions.
    • The IU Data Science Decision Maker path is aimed at the 1.5 million jobs; the Technical path covers the 140,000 to 190,000.
    • http://www.mckinsey.com/mgi/publications/big_data/index.asp
  • 74. IU Data Science Program: Masters • Masters fully approved by the University and the State on October 14, 2014, and started January 2015 • Blended online and residential (any combination) – online offered at in-state rates (~$1100 per course) • Informatics, Computer Science, and Information and Library Science in the School of Informatics and Computing, plus the Department of Statistics, College of Arts and Sciences, IUB • 30 credits (10 conventional courses) • Basic (general) Masters degree plus tracks – currently the only track is "Computational and Analytic Data Science" – other tracks expected, such as Biomedical Data Science • Fall 2015: over 200 applicants to the program; enrollment capped
  • 75. Some Online Data Science Classes • Big Data Applications & Analytics – ~40 hours of video, mainly discussing applications (the X in X-Informatics or X-Analytics) in the context of big data and clouds https://bigdatacourse.appspot.com/course • Big Data Open Source Software and Projects http://bigdataopensourceprojects.soic.indiana.edu/ – ~27 hours of video discussing HPC-ABDS and its use on FutureSystems for Big Data software • Divided into sections (coherent topics), units (~lectures), and lessons (5–20 minutes each, a span over which the student is meant to stay awake)
  • 76. Big Data Applications & Analytics Topics (Red = Software) • 1 Unit: Organizational Introduction • 1 Unit: Motivation: Big Data and the Cloud; Centerpieces of the Future Economy • 3 Units: Pedagogical Introduction: What is Big Data, Data Analytics and X-Informatics • SideMOOC: Python for Big Data Applications and Analytics: NumPy, SciPy, Matplotlib • SideMOOC: Using FutureSystems for Java and Python • 4 Units: X-Informatics with X = LHC Analysis and Discovery of the Higgs particle – Integrated Technology: explore events; histograms and models; basic statistics (Python and some Java) • 3 Units: Big Data Use Cases Survey • SideMOOC: Using Plotviz Software for Displaying Point Distributions in 3D • 3 Units: X-Informatics with X = e-Commerce and Lifestyle • Technology (Python or Java): Recommender Systems – K-Nearest Neighbors • Technology: Clustering and heuristic methods • 1 Unit: Parallel Computing Overview and familiar examples • 4 Units: Cloud Computing Technology for Big Data Applications & Analytics • 2 Units: X-Informatics with X = Web Search and Text Mining and their technologies • Technology: Kmeans (Python/Java) • Technology: MapReduce • Technology: Kmeans and MapReduce Parallelism (Python/Java) • Technology: PageRank (Python/Java) • 3 Units: X-Informatics with X = Sports • 1 Unit: X-Informatics with X = Health • 1 Unit: X-Informatics with X = Internet of Things & Sensors • 1 Unit: X-Informatics with X = Radar for Remote Sensing
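Several of the "Technology" lessons listed above (Kmeans, MapReduce, PageRank) are classic teaching kernels. As one example, a minimal PageRank iteration of the kind such a lesson might walk through; this is a hypothetical teaching sketch, not the course's actual assignment code:

```python
# Power-iteration PageRank over an adjacency-list graph, as a lesson
# on web-search technology might present it.

def pagerank(links, damping=0.85, iters=50):
    """links: dict node -> list of outbound neighbors."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        # Teleportation term shared by every node.
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if not outs:  # dangling node: spread its rank uniformly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                share = damping * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
        rank = new
    return rank

if __name__ == "__main__":
    graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
    r = pagerank(graph)
    print(sorted(r, key=r.get, reverse=True))  # C ranks highest
```

The same computation maps naturally onto MapReduce (each node emitting rank shares to its neighbors), which is how the Kmeans/MapReduce/PageRank lessons connect.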
  • 78. http://x-informatics.appspot.com/course Example Google Course Builder MOOC • The Physics section expands to 4 units and 2 homeworks • Unit 9 expands to 5 lessons • Lessons are played on YouTube as "talking head video + PowerPoint"
  • 79. The community group for one of the classes, and one forum ("No more malls")
  • 80. Big Data & Open Source Software Projects Overview • This course studies DevOps and the software used in many commercial activities to study Big Data • The course of this talk!! • The backdrop for the course is the ~300 software subsystems of HPC-ABDS (High Performance Computing enhanced Apache Big Data Stack) illustrated at http://hpc-abds.org/kaleidoscope/ • The cloud computing architecture underlying ABDS, and its contrast with HPC • The main activity of the course is building a significant project using multiple HPC-ABDS subsystems combined with user code and data • Projects will be suggested, or students can choose their own • http://cloudmesh.github.io/introduction_to_cloud_computing/class/lesson/projects.html • For more information, see: http://bigdataopensourceprojects.soic.indiana.edu/ and • http://cloudmesh.github.io/introduction_to_cloud_computing/class/bdossp_sp15/week_plan.html
  • 84. Potpourri of Online Technologies • Canvas (Indiana University default): best for interfacing with IU grading and records • Google Course Builder: best for management and integration of components • Ad hoc web pages: an alternative, easy-to-build integration • Microsoft Mix: simplest faculty preparation interface • Adobe Presenter/Camtasia: more powerful video-preparation tools that support subtitles, though subtitles are not clearly needed • Google Community: good social-interaction support • YouTube: best user interface for videos • Hangouts: best for instructor-student online interaction (one instructor to 9 students with a live feed); Hangouts on Air mixes live and streaming (30-second delay from the archived YouTube version) and supports more participants • OpenEdX: a possible future of Google Course Builder, and getting easier to use
  • 85. Lessons / Insights • Proposed a classification of Big Data applications, with features generalized as facets and kernels for analytics • Integrate (don't compete) HPC with "Commodity Big Data" (Google to Amazon to Enterprise/Startup data analytics) – i.e. improve Mahout, don't compete with it – use Hadoop plug-ins rather than replacing Hadoop • The HPC-enhanced Apache Big Data Stack (HPC-ABDS) has over 300 members, with HPC opportunities at the resource-management, storage/data, streaming, programming, monitoring, and workflow layers

Editor's Notes

  • #27, #34, #35, #40, #41: Big dwarfs are Ogres; implement Ogres in ABDS+