Using MapReduce for
Large-scale Medical Image
Analysis
HISB 2012
Presented by : Roger Schaer - HES-SO Valais
Summary
Introduction
Methods
Results & Interpretation
Conclusions
Introduction
Introduction
Exponential growth of imaging data (past 20 years)
(Figure : amount of images produced per day at the HUG, by year)
Introduction (continued)
Mainly caused by :
Modern imaging techniques (3D, 4D) : large files !
Large collections (available on the Internet)
Increasingly complex algorithms make processing this data more challenging
This requires a lot of computation power, storage and network bandwidth
Introduction (continued)
Flexible and scalable infrastructures are needed
Several approaches exist :
Single, powerful machine
Local cluster / grid
Alternative infrastructures (graphics cards)
Cloud computing solutions
The first two approaches have been tested and compared
Introduction (continued)
3 large-scale medical image processing use cases
Parameter optimization for Support Vector Machines
Content-based image feature extraction & indexing
3D texture feature extraction using the Riesz
transform
NOTE : I mostly handled the infrastructure aspects !
Methods
Methods
MapReduce
Hadoop Cluster
Support Vector Machines
Image Indexing
Solid 3D Texture Analysis Using the Riesz Transform
MapReduce
MapReduce is a programming model
Developed by Google
Map phase : key/value pair input, intermediate output
Reduce phase : for each intermediate key, process the list of associated values
Trivial example : the Word Count application
MapReduce : WordCount

INPUT
#1 hello world
#2 goodbye world
#3 hello hadoop
#4 bye hadoop
...

MAP
hello 1
world 1
goodbye 1
world 1
hello 1
hadoop 1
bye 1
hadoop 1

REDUCE
hello 2
world 2
goodbye 1
hadoop 2
bye 1
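The example above can be condensed into a minimal, framework-free sketch of the two phases (plain Python rather than the Hadoop Java API; the function names are illustrative):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Map : for each (line_id, text) input pair, emit one (word, 1) pair per word
    for _, text in records:
        for word in text.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle : group intermediate pairs by key (Hadoop sorts by key here)
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        # Reduce : sum the list of values associated with each key
        yield (word, sum(v for _, v in group))

records = [(1, "hello world"), (2, "goodbye world"),
           (3, "hello hadoop"), (4, "bye hadoop")]
counts = dict(reduce_phase(map_phase(records)))
# counts == {'hello': 2, 'world': 2, 'goodbye': 1, 'hadoop': 2, 'bye': 1}
```

In Hadoop the sort/group step between the two phases is performed by the framework itself; here it is simulated with `sorted` + `groupby`.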
Hadoop
Apache’s implementation of MapReduce
Consists of :
Distributed storage system : HDFS
Execution framework : Hadoop MapReduce
A master node orchestrates the task distribution
Worker nodes perform the tasks
A typical node runs a DataNode and a TaskTracker
Support Vector Machines
Computes a decision boundary (hyperplane) that separates inputs of different classes, represented in a given feature space transformed by a given kernel
The values of two parameters need to be adapted to the data :
Cost C of errors
σ of the Gaussian kernel
(Figure : two classes of points plotted on 0–20 axes, with candidate separating boundaries)
SVM (continued)
Goal : find the optimal value couple (C, σ) to train an SVM
Allowing the best classification performance on 5 lung texture patterns
Execution on 1 PC (without Hadoop) can take weeks
Due to extensive leave-one-patient-out cross-validation with 86 patients
Parallelization : split the job by parameter value couples
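Splitting the job by parameter value couples can be sketched as generating the full (C, σ) grid and treating each couple as one independent task; the grid values below are illustrative, not the ones used in the study:

```python
from itertools import product

def parameter_couples(c_values, sigma_values):
    # Each (C, sigma) couple becomes one independent map task :
    # the task trains and cross-validates an SVM with those parameters
    return list(product(c_values, sigma_values))

c_grid = [2 ** k for k in range(-2, 3)]       # hypothetical C range
sigma_grid = [10 ** k for k in range(-2, 1)]  # hypothetical sigma range
tasks = parameter_couples(c_grid, sigma_grid)
# 5 C values x 3 sigma values -> 15 independent tasks
```

Because each couple is evaluated independently, the grid search is embarrassingly parallel and maps directly onto MapReduce tasks.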
Image Indexing
Pipeline : Image Files → Feature Extractor → Feature Vector Files → Bag of Visual Words Factory (with a Vocabulary File) → Index File
Two phases :
Extract features from images
Construct bags of visual words by quantization
Component-based / monolithic approaches (monolithic : Feature Extractor + Bag of Visual Words Factory run as a single component)
Parallelization : each task processes N images
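The "each task processes N images" split can be sketched as simple chunking of the input file list (N and the file names below are illustrative):

```python
def chunk(items, n):
    # Split the image list into groups of at most n images ;
    # each group becomes the input of one map task
    return [items[i:i + n] for i in range(0, len(items), n)]

images = [f"img_{i:05d}.png" for i in range(10)]  # hypothetical file names
tasks = chunk(images, 4)
# -> 3 tasks : 4 + 4 + 2 images
```

Choosing N trades off per-task startup overhead (many small tasks) against load balancing (few large tasks).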
3D Texture Analysis (Riesz)
Features are extracted from 3D images
Parallelization : each task processes N images
Results & Interpretation
Hadoop Cluster
Minimally invasive setup (≥ 2 free cores per node)
Support Vector Machines
Optimization : longer tasks = bad performance
Because the optimization of the hyperplane is more difficult to compute (more iterations needed)
After 2 patients (out of 86), check if : ti ≥ F · tref
If a task’s time exceeds the average (plus a margin), terminate it
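The termination rule can be sketched as follows, where ti is a task's time after its first two patients, tref the reference (average) time, and F a margin factor; the value F = 1.5 is illustrative, as the factor used in the study is not given on the slide:

```python
def should_terminate(t_i, t_ref, factor=1.5):
    # After 2 patients (out of 86) : kill the task if its running
    # time exceeds the reference time by more than the margin factor
    return t_i >= factor * t_ref

t_ref = 50.0                           # hypothetical reference time (seconds)
slow = should_terminate(90.0, t_ref)   # True : 90 >= 1.5 * 50
fast = should_terminate(60.0, t_ref)   # False : 60 < 75
```

Since slow couples correspond to poorly converging hyperplanes, killing them early saves compute without losing the best-performing parameter couples.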
Support Vector Machines
Black : tasks to be interrupted by the new algorithm
Optimized algorithm : ~50h → ~9h15min
None of the best tasks (highest accuracy) are killed
(Figure : accuracy (%) as a function of C (cost) and σ (sigma))
Image Indexing
(Figures : 1K, 10K and 100K images)
Shows the calculation time as a function of the number of tasks
Both experiments were executed using Hadoop
Once on a single computer, then on our cluster of PCs
Riesz 3D
Particularity : the code was a series of Matlab® scripts
Instead of rewriting the whole application :
Used Hadoop’s streaming feature (which uses stdin/stdout)
To maximize scalability, GNU Octave was used
Great compatibility between Matlab® and Octave

RESULTS
1 task (no Hadoop) : 131h32m42s
42 tasks (idle) : 6h29m51s
42 tasks (normal) : 5h51m31s
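A Hadoop Streaming job of this kind is typically launched with a command along these lines; the paths and the mapper script name (which would wrap the Octave call) are hypothetical, while the streaming flags themselves are standard:

```python
def streaming_job(input_dir, output_dir, mapper):
    # Build the argument list for a map-only Hadoop Streaming job :
    # the mapper script reads image paths on stdin and writes
    # feature vectors on stdout (here it would invoke Octave)
    return [
        "hadoop", "jar", "hadoop-streaming.jar",
        "-input", input_dir,
        "-output", output_dir,
        "-mapper", mapper,
        "-file", mapper,           # ship the script to the worker nodes
        "-numReduceTasks", "0",    # map-only job : no reduce phase needed
    ]

cmd = streaming_job("/data/volumes", "/data/features", "riesz_mapper.sh")
```

Streaming lets any stdin/stdout executable act as a mapper, which is why the Matlab®/Octave scripts could be reused without a rewrite.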
Conclusions
Conclusions
MapReduce is
Flexible (worked with very varied use cases)
Easy to use (the 2-phase programming model is simple)
Efficient (≥ 20x speedup for all use cases)
Hadoop is
Easy to deploy & manage
User-friendly (nice Web UIs)
Conclusions (continued)
Speedups for the different use cases :

                      SVMs            Image Indexing   3D Feature Extraction
Single task           990h*           21h*             131h30
42 tasks on Hadoop    50h / 9h15**    1h               5h50
Speedup               20x / 107x**    21x              22.5x

* estimation   ** using the optimized algorithm
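The speedup figures in the table follow directly from the runtimes, e.g. for the optimized SVM run and the 3D feature extraction:

```python
def speedup(single_task_hours, parallel_hours):
    # Speedup = sequential runtime divided by parallel runtime
    return single_task_hours / parallel_hours

svm_optimized = speedup(990, 9.25)   # 990h (estimated) vs 9h15 (optimized)
riesz = speedup(131.5, 5 + 50 / 60)  # 131h30 vs 5h50
# svm_optimized ~ 107x, riesz ~ 22.5x, matching the table
```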
Lessons Learned
It is important to use physically distributed resources
Overloading a single machine hurts performance
Data locality notably speeds up jobs
Not every application is infinitely scalable
Performance improvements level off at some point
Future work
Take it to the next level : the Cloud
Amazon Elastic Compute Cloud (EC2) (IaaS)
Amazon Elastic MapReduce (PaaS)
Cloudbursting
Use both local resources + the Cloud (for peak usage)
Thank you ! Questions ?
