Apache Spark
Dr. D. Sugumar
• “Framework that allows distributed processing of large data sets across
clusters of computers…
• using simple programming models.
• It is designed to scale up from single servers to thousands of machines,
each offering local computation and storage.
• Rather than rely on hardware to deliver high-availability, the library itself is
designed to detect and handle failures at the application layer, so
delivering a highly-available service on top of a cluster of computers, each
of which may be prone to failures.”
Source: https://hadoop.apache.org/
• Inspired by “Google File System”
• Stores large files (typically gigabytes to terabytes) across multiple machines, replicating them
across multiple hosts
• Breaks files into fixed-size blocks (typically 64 MiB) and distributes the blocks across the cluster (see the sizing sketch after this list)
• The blocks of a file are replicated for fault tolerance
• The block size and replication factor are configurable per file
• Default replication value (3) - data is stored on three nodes: two on the same rack, and one on a different
rack
• File system intentionally not fully POSIX-compliant
• Write-once-read-many access model for files. A file once created, written, and closed cannot be changed.
This assumption simplifies data coherency issues and enables high throughput data access
• Intend to add support for appending-writes in the future
• Can rename & remove files
• “Namenode” tracks names and where the blocks are
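A back-of-the-envelope sketch of what the block and replication numbers above imply. The file size, block size, and replication factor below are illustrative values only (both are configurable per file in HDFS):

```python
import math

# Illustrative values, not mandated HDFS settings
file_size_mib = 10 * 1024     # a hypothetical 10 GiB file
block_size_mib = 64           # the "typically 64 MiB" block size cited above
replication = 3               # the default replication factor

num_blocks = math.ceil(file_size_mib / block_size_mib)   # 160 blocks to distribute
raw_storage_mib = file_size_mib * replication            # ~30 GiB of raw disk across the cluster

print(f"{num_blocks} blocks, ~{raw_storage_mib / 1024:.0f} GiB of raw storage")
```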
• Hadoop can work with any distributed file system, but doing so loses data locality
• To reduce network traffic, Hadoop must know which servers are closest to the data; HDFS
does this
• Hadoop job tracker schedules jobs to task trackers with an awareness of the data location
• For example, if node A contains data (x,y,z) and node B contains data (a,b,c), the job tracker
schedules node B to perform tasks on (a,b,c) and node A to perform tasks on (x,y,z)
• This reduces the amount of traffic that goes over the network and prevents unnecessary data
transfer
• Location awareness can significantly reduce job-completion times when running data-intensive
jobs
Source: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
• Data is often more structured than raw files; this motivates table-like storage on top of the distributed file system
• Google BigTable (2006 paper)
• Designed to scale to Petabytes, 1000s of machines
• Maps two arbitrary string values (row key and column key) and timestamp into an associated
arbitrary byte array
• Tables are split into multiple “tablets” along row boundaries, chosen so that each tablet is ~200 megabytes in size
(compressed when necessary)
• Data maintained in lexicographic order by row key; clients can exploit this by selecting row keys
for good locality (e.g., reversing hostname in URL)
• Not a relational database; really a sparse, distributed, multi-dimensional sorted map (sketched below)
• Implementations of the approach include: Apache Accumulo (from NSA; cell-level access labels),
Apache Cassandra, Apache HBase, Google Cloud Bigtable (released 2015)
Sources: https://en.wikipedia.org/wiki/BigTable; “Bigtable: A Distributed Storage System for Structured Data” by Chang et al.
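A toy Python sketch of that sorted-map abstraction; this is purely illustrative and is not the Bigtable, HBase, or Accumulo API:

```python
# Toy model: a sparse map from (row key, column key, timestamp) -> bytes,
# scanned in lexicographic order by row key.
table = {}

def put(row, column, timestamp, value):
    table[(row, column, timestamp)] = value

def scan_row_prefix(prefix):
    """Return cells whose row key starts with prefix, in row-key order."""
    return sorted(item for item in table.items() if item[0][0].startswith(prefix))

# Reversed hostnames keep pages of the same domain adjacent (good locality).
put("com.example.www/index.html", "contents:", 1, b"<html>...</html>")
put("com.example.www/about.html", "contents:", 1, b"<html>about</html>")

for (row, col, ts), value in scan_row_prefix("com.example"):
    print(row, col, ts, value)
```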
• MapReduce problems:
• Many problems aren’t easily described as map-reduce
• Persistence to disk typically slower than in-memory work
• Alternative: Apache Spark
• a general-purpose processing engine that can be used instead of MapReduce
• Processing engine; instead of just “map” and “reduce”, defines a large set of operations
(transformations & actions)
• Operations can be arbitrarily combined in any order
• Open source software
• Supports Java, Scala, and Python (R support came later)
• Original key construct: Resilient Distributed Dataset (RDD)
• Original construct, so we’ll focus on that first
• More recently added: DataFrames & Datasets
• Different APIs for aggregate data
• RDDs represent data or transformations on data
• RDDs can be created from Hadoop InputFormats (such as HDFS files), by calling
parallelize() on an in-memory collection, or by transforming other RDDs (you can stack
RDDs); see the sketch after this list
• Actions can be applied to RDDs; actions force calculations and return
values
• Lazy evaluation: Nothing computed until an action requires it
• RDDs are best suited for applications that apply the same operation to all
elements of a dataset
• Less suitable for applications that make asynchronous fine-grained updates to shared
state
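A minimal PySpark sketch of these ideas; it assumes a local Spark installation, and the HDFS path is hypothetical:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdd-demo")

# Create RDDs: from an in-memory collection, or from a Hadoop InputFormat
nums = sc.parallelize(range(1, 1_000_001))
# logs = sc.textFile("hdfs:///data/access.log")   # hypothetical HDFS path

# Transformations are lazy - nothing is computed yet
squares = nums.map(lambda x: x * x)
even_squares = squares.filter(lambda x: x % 2 == 0)

# An action forces the calculation and returns a value to the driver
print(even_squares.count())   # 500000
```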
• map(func): Return a new distributed dataset formed by passing each element of the source
through a function func.
• filter(func): Return a new dataset formed by selecting those elements of the source on which
func returns true
• union(otherDataset): Return a new dataset that contains the union of the elements in the source
dataset and the argument.
• intersection(otherDataset): Return a new RDD that contains the intersection of elements in the
source dataset and the argument.
• distinct([numTasks]): Return a new dataset that contains the distinct elements of the source
dataset
• join(otherDataset, [numTasks]): When called on datasets of type (K, V) and (K, W), returns a
dataset of (K, (V, W)) pairs with all pairs of elements for each key. Outer joins are supported
through leftOuterJoin, rightOuterJoin, and fullOuterJoin.
Source: https://spark.apache.org/docs/latest/programming-guide.html
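A short sketch of the transformations listed above (PySpark, reusing a SparkContext `sc` created as in the earlier sketch; collect() here is an action, covered on the next slide, used only to make results visible):

```python
a = sc.parallelize([1, 2, 3, 4])
b = sc.parallelize([3, 4, 5])

print(a.map(lambda x: x * 10).collect())         # [10, 20, 30, 40]
print(a.filter(lambda x: x % 2 == 0).collect())  # [2, 4]
print(a.union(b).collect())                      # [1, 2, 3, 4, 3, 4, 5]
print(a.intersection(b).collect())               # [3, 4] (order not guaranteed)
print(a.union(b).distinct().collect())           # elements 1..5, order not guaranteed

# join on (K, V) pairs
users = sc.parallelize([("alice", 1), ("bob", 2)])
ages  = sc.parallelize([("alice", 34), ("carol", 29)])
print(users.join(ages).collect())                # [('alice', (1, 34))]
print(users.leftOuterJoin(ages).collect())       # also includes ('bob', (2, None))
```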
• reduce(func): Aggregate the elements of the dataset using a function func (which takes
two arguments and returns one). The function should be commutative and associative
so that it can be computed correctly in parallel.
• collect(): Return all the elements of the dataset as an array at the driver program. This is
usually useful after a filter or other operation that returns a sufficiently small subset of the
data.
• count(): Return the number of elements in the dataset.
Source: https://spark.apache.org/docs/latest/programming-guide.html
Remember: Actions cause calculations to be performed;
transformations just set things up (lazy evaluation)
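The same actions in PySpark form, continuing with `sc` from the earlier sketches:

```python
nums = sc.parallelize([1, 2, 3, 4, 5])

total = nums.reduce(lambda x, y: x + y)          # 15; func must be commutative & associative
small = nums.filter(lambda x: x < 3).collect()   # [1, 2] returned to the driver
n = nums.count()                                 # 5

print(total, small, n)
```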
• You can persist (cache) an RDD
• When you persist an RDD, each node stores any partitions of it that it computes in
memory and reuses them in other actions on that dataset (or datasets derived from it)
• Allows future actions to be much faster (often >10x).
• Mark RDD to be persisted using the persist() or cache() methods on it. The first time it is
computed in an action, it will be kept in memory on the nodes.
• Cache is fault-tolerant – if any partition of an RDD is lost, it will automatically be
recomputed using the transformations that originally created it
• Can choose storage level (MEMORY_ONLY, DISK_ONLY, MEMORY_AND_DISK, etc.)
• Can manually call unpersist()
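A caching sketch in PySpark; the input path is hypothetical, and the storage level is one of the options named above:

```python
from pyspark import StorageLevel

words = sc.textFile("hdfs:///data/big.txt").flatMap(lambda line: line.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)

counts.persist(StorageLevel.MEMORY_AND_DISK)   # or counts.cache() for the default memory-only level

print(counts.count())    # first action: computes the RDD and caches its partitions
print(counts.take(10))   # reuses the cached partitions instead of recomputing

counts.unpersist()
```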
• For simplicity I’ve emphasized a single API set – the original one, RDD
• Spark now has three sets of APIs—RDDs, DataFrames, and Datasets
• RDD – In Spark 1.0 release, “lower level”
• DataFrames – Introduced in Spark 1.3 release
• Dataset – Introduced in Spark 1.6 release
• Each with pros/cons/limitations
• DataFrame:
• Unlike an RDD, data is organized into named columns, like a table in a relational database.
• Imposes a structure onto a distributed collection of data, allowing higher-level abstraction
• Dataset:
• Extension of DataFrame API which provides type-safe, object-oriented programming interface (compile-time error
detection)
Both are built on the Spark SQL engine and use Catalyst to generate optimized logical and physical query plans; both can be
converted to an RDD
https://data-flair.training/blogs/apache-spark-rdd-vs-dataframe-vs-dataset/
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
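A minimal DataFrame sketch in PySpark; the names are illustrative, and, as noted below, the typed Dataset API is only available in Scala and Java:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("df-demo").getOrCreate()

df = spark.createDataFrame(
    [("alice", 34), ("bob", 17), ("carol", 29)],
    ["name", "age"],
)

# Queries over named columns are planned and optimized by Catalyst before execution
df.filter(df.age > 18).select("name").show()

rows = df.rdd   # a DataFrame can be converted back to an RDD (of Row objects)
```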
• Python & R don’t have compile-time type safety checks, so only support DataFrame
• Error detection only at runtime
• Java & Scala support compile-time type safety checks, so they support both Dataset and DataFrame
• Dataset APIs are all expressed as lambda functions and JVM typed objects
• any mismatch of typed parameters will be detected at compile time.
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
• Spark SQL
• Spark Streaming – stream processing of live data streams
• MLlib - machine learning
• GraphX – graph manipulation
• extends the Spark RDD with a Graph abstraction: a directed multigraph with properties attached to each vertex and edge.
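For example, Spark SQL lets you query the DataFrame from the previous sketch with SQL (assuming the `spark` session and `df` defined there):

```python
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 18").show()
```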
Sort benchmark (Daytona GraySort): sort 100 TB of data (1 trillion records)

                          Hadoop MR record                  Spark record (2014)
Data size                 102.5 TB                          100 TB
Elapsed time              72 mins                           23 mins
# Nodes                   2100                              206
# Cores                   50400 physical                    6592 virtualized
Cluster disk throughput   3150 GB/s (est.)                  618 GB/s
Network                   dedicated data center, 10 Gbps    virtualized (EC2), 10 Gbps
Sort rate                 1.42 TB/min                       4.27 TB/min
Sort rate/node            0.67 GB/min                       20.7 GB/min

The Spark-based system was 3x faster with 1/10 the number of nodes.
Source: http://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html
• Performance: Spark normally faster but with caveats
• Spark can process data in-memory; Hadoop MapReduce persists back to the disk
after a map or reduce action
• Spark generally outperforms MapReduce, but it often needs a lot of memory to do well; if other
resource-demanding services are running, or the data doesn't fit in memory, Spark's performance
degrades
• MapReduce easily runs alongside other services with minor performance differences,
and works well with the one-pass jobs it was designed for
• Ease of use: Spark is easier to program
• Data processing: Spark more general
• Maturity: Spark maturing, Hadoop MapReduce mature
“Spark vs. Hadoop MapReduce” by Saggi Neumann (November 24, 2014)
https://www.xplenty.com/blog/2014/11/apache-spark-vs-hadoop-mapreduce/
Source: https://amplab.cs.berkeley.edu/software/
Don’t need to memorize this figure – the point is to know
that components can be combined to solve big data problems
• https://www.softwaretestinghelp.com/apache-spark-tutorial/

Reference
• Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber, “Bigtable: A Distributed Storage System for Structured Data”