Distributed Deep Learning for 
Classification and Regression 
Problems using H2O 
Arno Candel, H2O.ai 
Hadoop User Group Meetup 
San Francisco, 12/10/14
Who am I? 
@ArnoCandel 
PhD in Computational Physics, 2005 
from ETH Zurich, Switzerland 

6 years at SLAC - Accelerator Physics Modeling 
2 years at Skytree - Machine Learning 
1 year at H2O.ai - Machine Learning 

15 years in Supercomputing & Modeling 

Named “2014 Big Data All-Star” by Fortune Magazine 
H2O Deep Learning, @ArnoCandel 
Outline 
Intro (5 mins) 
Methods & Implementation (10 mins) 
Results & Live Demos (20 mins) 
Higgs boson classification 
MNIST handwritten digits 
text classification 
H2O Deep Learning, @ArnoCandel 
Teamwork at H2O.ai 
Java, Apache v2 Open-Source 
#1 Java Machine Learning project on GitHub 
Join the community! 
H2O Deep Learning, @ArnoCandel 
H2O: Open-Source (Apache v2) 
Predictive Analytics Platform 
H2O Deep Learning, @ArnoCandel 6 
H2O Architecture - Designed for speed, 
scale, accuracy & ease of use 
Key technical points: 
• distributed JVMs + REST API 
• no Java GC issues 
(data in byte[], not boxed Doubles) 
• lossless number compression 
• Hadoop integration (v1, YARN) 
• R package (CRAN) 
Pre-built fully featured algos: 
K-Means, NB, PCA, CoxPH, 
GLM, RF, GBM, DeepLearning
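
For a quick taste of those pre-built algorithms, here is a hedged sketch using the h2o R package from CRAN (function names as in current releases; argument names have changed across versions):

```r
library(h2o)
h2o.init()           # starts (or connects to) a local H2O JVM

hf <- as.h2o(iris)   # push an R data.frame into H2O's in-memory store

km  <- h2o.kmeans(hf, x = 1:4, k = 3)
glm <- h2o.glm(x = 1:4, y = "Species", training_frame = hf, family = "multinomial")
rf  <- h2o.randomForest(x = 1:4, y = "Species", training_frame = hf)
dl  <- h2o.deeplearning(x = 1:4, y = "Species", training_frame = hf)
```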
H2O Deep Learning, @ArnoCandel 
What is Deep Learning? 
Wikipedia: 
Deep learning is a set of algorithms in 
machine learning that attempt to model 
high-level abstractions in data by using 
architectures composed of multiple 
non-linear transformations. 
Example: Facebook DeepFace 
Input: Image -> Output: User ID
H2O Deep Learning, @ArnoCandel 
What is NOT Deep 
Linear models are not deep 
(by definition) 

Neural nets with 1 hidden layer are not deep 
(only 1 layer - no feature hierarchy) 

SVMs and Kernel methods are not deep 
(2 layers: kernel + linear) 

Classification trees are not deep 
(operate on original input space, no new features generated) 
H2O Deep Learning, @ArnoCandel 
H2O Deep Learning 
1970s multi-layer feed-forward Neural Network 
(stochastic gradient descent with back-propagation) 
+ distributed processing for big data 
(fine-grain in-memory MapReduce on distributed data) 

+ multi-threaded speedup 
(async fork/join worker threads operate at FORTRAN speeds) 

+ smart algorithms for fast & accurate results 
(automatic standardization, one-hot encoding of categoricals, missing value imputation, weight & 
bias initialization, adaptive learning rate, momentum, dropout/L1/L2 regularization, grid search, 
N-fold cross-validation, checkpointing, load balancing, auto-tuning, model averaging, etc.) 

= powerful tool for (un)supervised 
machine learning on real-world data 

[Screenshot: all 320 cores maxed out]
H2O Deep Learning, @ArnoCandel 
Example Neural Network 
“fully connected” directed graph of neurons 
[Diagram: Input layer (age, income, employment) -> Hidden layer 1 -> Hidden layer 2 -> Output layer (married, single); information flows left to right through input/output and hidden neurons] 
#neurons: 3, 4, 3, 2 
#connections: 3x4, 4x3, 3x2 
H2O Deep Learning, @ArnoCandel 
Prediction: Forward Propagation 
“neurons activate each other via weighted sums” 
[Diagram: inputs x_i (age, income, employment) -> weights u_ij -> hidden y_j -> weights v_jk -> hidden z_k -> weights w_kl -> per-class outputs p_l (married, single)] 

y_j = tanh(sum_i(x_i*u_ij) + b_j) 
z_k = tanh(sum_j(y_j*v_jk) + c_k) 
p_l = softmax(sum_k(z_k*w_kl) + d_l) 
softmax(x_k) = exp(x_k) / sum_j(exp(x_j)) 

per-class probabilities: sum_l(p_l) = 1 
b_j, c_k, d_l: bias values (indep. of inputs) 

activation function: tanh 
alternative: x -> max(0,x) “rectifier” 

p_l is a non-linear function of x_i: 
can approximate ANY function with enough layers! 
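
To make the algebra concrete, here is a minimal forward pass for the 3-4-3-2 network above in base R, with made-up weights and inputs (an illustration only, not H2O's implementation):

```r
softmax <- function(x) exp(x) / sum(exp(x))

set.seed(42)
x <- c(age = 0.3, income = -1.2, employment = 0.7)  # standardized inputs x_i
U <- matrix(rnorm(3 * 4), 3, 4)   # u_ij: input -> hidden 1    (3x4 connections)
V <- matrix(rnorm(4 * 3), 4, 3)   # v_jk: hidden 1 -> hidden 2 (4x3 connections)
W <- matrix(rnorm(3 * 2), 3, 2)   # w_kl: hidden 2 -> output   (3x2 connections)
b <- rnorm(4); ck <- rnorm(3); d <- rnorm(2)        # biases b_j, c_k, d_l

y <- tanh(x %*% U + b)            # hidden layer 1 activations y_j
z <- tanh(y %*% V + ck)           # hidden layer 2 activations z_k
p <- softmax(z %*% W + d)         # per-class probabilities p_l
sum(p)                            # == 1
```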
H2O Deep Learning, @ArnoCandel 
Data preparation & Initialization 
Neural Networks are sensitive to numerical noise, 
operate best in the linear regime (not saturated) 
[Diagram: inputs x_i (age, income, employment) feed the network via weights w_kl] 

Automatic standardization of data 
x_i: mean = 0, stddev = 1 

One-hot encoding (“horizontalization”) of categorical variables, e.g. 
{full-time, part-time, none, self-employed} 
-> 
{0,1,0} = part-time, {0,0,0} = self-employed 

Automatic initialization of weights 
Poor man’s initialization: random weights w_kl 
Default (better): Uniform distribution in 
+/- sqrt(6/(#units + #units_previous_layer))
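
A base R sketch of the same preprocessing and initialization on toy data (H2O performs all of this automatically):

```r
# Standardization: mean 0, stddev 1
income <- c(40, 85, 120, 55) * 1000
income_std <- (income - mean(income)) / sd(income)

# One-hot ("horizontalized") encoding: 4 categories -> 3 dummy columns;
# the reference level (self-employed) becomes {0,0,0}
employment <- factor(c("part-time", "self-employed", "full-time", "none"),
                     levels = c("self-employed", "full-time", "part-time", "none"))
dummies <- model.matrix(~ employment)[, -1]   # drop the intercept column

# Default weight init: Uniform in +/- sqrt(6/(#units + #units_previous_layer))
n_in <- 3; n_out <- 4
r <- sqrt(6 / (n_in + n_out))
W <- matrix(runif(n_in * n_out, -r, r), n_in, n_out)
```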
H2O Deep Learning, @ArnoCandel 
Training: Update Weights & Biases 
For each training row, we make a prediction and compare 
with the actual label (supervised learning): 
           predicted   actual 
married       0.8         1 
single        0.2         0 

Objective: minimize prediction error (MSE or cross-entropy) 

Mean Square Error = (0.2^2 + 0.2^2)/2   “penalize differences per-class” 
Cross-entropy = -log(0.8)   “strongly penalize non-1-ness” 

Stochastic Gradient Descent: Update weights and biases via the 
gradient of the error (via back-propagation): 
w <- w - rate * ∂E/∂w 

[Diagram: error E vs. weight w; each step moves downhill by “rate” times the gradient]
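
The two losses for the example above, plus one SGD step, in base R (the gradient value is a placeholder; in practice it comes from back-propagation):

```r
p      <- c(married = 0.8, single = 0.2)   # predicted per-class probabilities
actual <- c(married = 1,   single = 0)     # true label: married

mse <- mean((p - actual)^2)                # (0.2^2 + 0.2^2)/2 = 0.04
ce  <- -log(p["married"])                  # -log(0.8) ~= 0.223

# One stochastic gradient descent step for a single weight
rate  <- 0.005                             # learning rate
dE_dw <- 0.13                              # placeholder gradient
w     <- 0.42
w     <- w - rate * dE_dw
```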
H2O Deep Learning, @ArnoCandel 
Backward Propagation 
How to compute ∂E/∂w_i for the update w_i <- w_i - rate * ∂E/∂w_i ? 
Naive: For every i, evaluate E twice at (w1,…,wi±Δ,…,wN)… Slow! 
Backprop: Compute ∂E/∂w_i via the chain rule, going backwards: 

net = sum_i(w_i*x_i) + b 
y = activation(net) 
E = error(y) 

∂E/∂w_i = ∂E/∂y * ∂y/∂net * ∂net/∂w_i 
        = ∂(error(y))/∂y * ∂(activation(net))/∂net * x_i 
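
The chain rule written out for one tanh neuron with a squared-error loss, in base R (toy values, illustrating the idea rather than H2O's code):

```r
x <- c(0.3, -1.2, 0.7); w <- c(0.1, -0.4, 0.25); b <- 0.05; target <- 1

net <- sum(w * x) + b        # net = sum_i(w_i*x_i) + b
y   <- tanh(net)             # y = activation(net)
E   <- (y - target)^2 / 2    # E = error(y)

# dE/dw_i = dE/dy * dy/dnet * dnet/dw_i
dE_dy   <- y - target        # derivative of squared error
dy_dnet <- 1 - y^2           # derivative of tanh
grad    <- dE_dy * dy_dnet * x

w <- w - 0.01 * grad         # one gradient descent step
```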
H2O Deep Learning, @ArnoCandel 
H2O Deep Learning Architecture 
[Diagram: multi-node cluster; each node/JVM runs a K-V store and HTTPD. 
Nodes/JVMs communicate synchronously, threads asynchronously.] 

initial model: weights and biases w1 

map: 
each node trains a copy of the weights and biases 
with (some* or all of) its local data 
with asynchronous F/J threads 

reduce: 
model averaging: average the weights and biases from all nodes: 
w* = (w1+w2+w3+w4)/4 
updated model w* lives in H2O's atomic in-memory K-V store 
speedup is at least #nodes/log(#rows)   arxiv:1209.4129v3 

Query & display the model via JSON, WWW 

Keep iterating over the data (“epochs”), score from time to time 
*auto-tuned (default) or user-specified number of points per MapReduce iteration 
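
The reduce step is just element-wise averaging of each node's weight copy; a sketch in base R with four hypothetical per-node weight matrices:

```r
set.seed(1)
node_weights <- lapply(1:4, function(i) matrix(rnorm(12), 3, 4))  # w1..w4, one per node

w_star <- Reduce(`+`, node_weights) / length(node_weights)        # w* = (w1+w2+w3+w4)/4
```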
H2O Deep Learning, @ArnoCandel 
“Secret” Sauce to Higher Accuracy 

Adaptive learning rate - ADADELTA (Google): 
Automatically set learning rate for each neuron 
based on its training history 

Regularization: 
L1: penalizes non-zero weights 
L2: penalizes large weights 
Dropout: randomly ignore certain inputs 

Grid Search and Checkpointing: 
Run a grid search to scan many hyper-parameters, 
then continue training the most promising model(s) 
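
A hedged sketch of grid search plus checkpointing with the h2o R package (names as in current releases, which differ from the 2014 API; `train` and its "label" column are hypothetical):

```r
grid <- h2o.grid("deeplearning",
                 x = setdiff(names(train), "label"), y = "label",
                 training_frame = train,
                 hyper_params = list(hidden = list(c(200), c(200, 200), c(500, 500)),
                                     l1     = c(0, 1e-5)))

# Pick the most promising model...
best <- h2o.getModel(h2o.getGrid(grid@grid_id, sort_by = "logloss")@model_ids[[1]])

# ...and continue training it from its checkpoint
more <- h2o.deeplearning(x = setdiff(names(train), "label"), y = "label",
                         training_frame = train,
                         checkpoint = best@model_id,
                         hidden = best@parameters$hidden,
                         epochs = 50)
```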
H2O Deep Learning, @ArnoCandel 
Detail: Adaptive Learning Rate 
Compute the moving average of Δw_i^2 at time t for window length rho: 
E[Δw_i^2]_t = rho * E[Δw_i^2]_{t-1} + (1-rho) * Δw_i^2 

Compute the RMS of Δw_i at time t with smoothing epsilon: 
RMS[Δw_i]_t = sqrt( E[Δw_i^2]_t + epsilon ) 

Do the same for ∂E/∂w_i, then obtain the per-weight learning rate: 
rate(w_i, t) = RMS[Δw_i]_{t-1} / RMS[∂E/∂w_i]_t 

Adaptive acceleration / momentum: 
accumulate previous weight updates, but over a window of time 

Adaptive annealing / progress: 
gradient-dependent learning rate; 
the moving window prevents “freezing” (unlike ADAGRAD: no window) 

cf. ADADELTA paper 
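
A scalar sketch of the per-weight update, following the formulas above (toy objective E = w^2; rho and epsilon as named on the slide):

```r
rho <- 0.99; eps <- 1e-8
E_g2 <- 0; E_dw2 <- 0            # moving averages of (dE/dw)^2 and (delta_w)^2
w <- 1.0

for (t in 1:100) {
  g <- 2 * w                                     # gradient of E = w^2
  E_g2  <- rho * E_g2  + (1 - rho) * g^2
  rate  <- sqrt(E_dw2 + eps) / sqrt(E_g2 + eps)  # RMS[dw]_{t-1} / RMS[dE/dw]_t
  dw    <- -rate * g
  E_dw2 <- rho * E_dw2 + (1 - rho) * dw^2
  w <- w + dw                                    # w shrinks toward the minimum at 0
}
```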
H2O Deep Learning, @ArnoCandel 
Detail: Dropout Regularization 
Training: 
For each hidden neuron, for each training sample, for each iteration, 
ignore (zero out) a different random fraction p of input activations. 
[Diagram: network with a random subset of neurons crossed out (X) for this sample] 

Testing: 
Use all activations, but reduce them by a factor p 
(to “simulate” the missing activations during training). 

cf. Geoff Hinton’s paper
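
Dropout in a few lines of base R; note that with p = 0.5 the train-time drop fraction and the test-time scaling factor coincide, which sidesteps the p vs. (1-p) convention difference across papers:

```r
p <- 0.5
y <- c(0.8, -0.3, 0.5, 0.1)        # hidden activations for one training sample

mask    <- runif(length(y)) >= p   # keep each activation with probability 1-p
y_train <- y * mask                # training: a random subset is zeroed out

y_test  <- y * (1 - p)             # testing: use all activations, scaled down
```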
H2O Deep Learning, @ArnoCandel 19 
Application: Higgs Boson Classification 
Large Hadron Collider: Largest experiment of mankind! 
$13+ billion, 16.8 miles long, 120 MegaWatts, -456F, 1PB/day, etc. 
Higgs boson discovery (July ’12) led to 2013 Nobel prize! 
Higgs 
vs 
Background 
http://arxiv.org/pdf/1402.4735v2.pdf 
Images courtesy CERN / LHC 
HIGGS UCI Dataset: 
21 low-level features AND 
7 high-level derived features (physics formulae) 
Train: 10M rows, Valid: 500k, Test: 500k rows
H2O Deep Learning, @ArnoCandel 20 
Higgs: Derived features are important! 
Former baseline for AUC: 0.733 and 0.816 

H2O Algorithm               low-level H2O AUC   all features H2O AUC 
Generalized Linear Model          0.596                0.684 
Random Forest                     0.764                0.840 
Gradient Boosted Trees            0.753                0.839 
Neural Net 1 hidden layer         0.760                0.830 
H2O Deep Learning                   ?                    ? 

(adding the derived features improves the AUC of every algorithm) 
Live Demo: Let’s see what Deep Learning 
can do with low-level features alone!
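
For readers following along, a hedged sketch of the demo with the h2o R package (file paths are hypothetical, argument names are from current releases, and the 3x1000 Rectifier topology mirrors the results table later in the deck):

```r
library(h2o)
h2o.init(nthreads = -1)

# In HIGGS, column 1 is the label; columns 2-22 hold the 21 low-level features
train <- h2o.importFile("higgs_train_10M.csv")    # hypothetical path
valid <- h2o.importFile("higgs_valid_500k.csv")   # hypothetical path
train[, 1] <- as.factor(train[, 1])               # binomial classification
valid[, 1] <- as.factor(valid[, 1])

dl <- h2o.deeplearning(x = 2:22, y = 1,           # low-level features only
                       training_frame = train, validation_frame = valid,
                       activation = "Rectifier",
                       hidden = c(1000, 1000, 1000),
                       l2 = 1e-5, epochs = 40)
h2o.auc(h2o.performance(dl, valid))
```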
H2O Deep Learning, @ArnoCandel 
MNIST: digits classification 
MNIST = Digitized handwritten 
digits database (Yann LeCun) 
Yann LeCun: “Yet another advice: don't get 
fooled by people who claim to have a solution 
to Artificial General Intelligence. Ask them what 
error rate they get on MNIST or ImageNet.” 
Data: 28x28=784 pixels with 
(gray-scale) values in 0…255 
Standing world record: 
Without distortions or 
convolutions, the best-ever 
published error rate on test 
set: 0.83% (Microsoft) 
Train: 60,000 rows 784 integer columns 10 classes 
Test: 10,000 rows 784 integer columns 10 classes
H2O Deep Learning, @ArnoCandel 22 
H2O Deep Learning beats MNIST 
Standard 60k/10k data 
No distortions 
No convolutions 
No unsupervised training 
No ensemble 
10 hours on 10 16-core nodes 
World-record! 
0.83% test set error 
http://learn.h2o.ai/content/hands-on_training/deep_learning.html
H2O Deep Learning, @ArnoCandel 
POJO Model Export for 
Production Scoring 
Plain old Java code is 
auto-generated to take 
your H2O Deep Learning 
models into production!
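
A hedged one-liner in the h2o R package that fetches the generated POJO (function name as in current releases; it may differ from the 2014 API):

```r
h2o.download_pojo(dl, path = "/tmp")   # writes <model_id>.java for production scoring
```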
H2O Deep Learning, @ArnoCandel 
Parallel Scalability 
(for 64 epochs on MNIST, with “0.87%” parameters) 
[Charts: Speedup and Training Time (in minutes) vs. number of H2O Nodes 
(1, 2, 4, 8, 16, 32, 63); training time drops to 2.7 mins] 
(4 cores per node, 1 epoch per node per MapReduce)
H2O Deep Learning, @ArnoCandel 
Text Classification 
Goal: Predict the item from 
seller’s text description 
“Vintage 18KT gold Rolex 2 Tone 
in great condition” 
Data: Bag-of-words vector, e.g. 0,0,1,0,0,0,0,0,1,0,0,0,1,…,0 
(the 1s mark the positions of “gold”, “vintage”, “condition”) 
Train: 578,361 rows 8,647 cols 467 classes 
Test: 64,263 rows 8,647 cols 143 classes
H2O Deep Learning, @ArnoCandel 
Text Classification 
Train: 578,361 rows 8,647 cols 467 classes 
Test: 64,263 rows 8,647 cols 143 classes 
Out-Of-The-Box: 11.6% test set error after 10 epochs! 
Predicts the correct class (out of 143) 88.4% of the time! 
Note 1: H2O columnar-compressed in-memory 
store only needs 60 MB to store 5 billion 
values (dense CSV needs 18 GB) 
Note 2: No tuning was done 
(results are for illustration only)
H2O Deep Learning, @ArnoCandel 
MNIST: Unsupervised Anomaly Detection 
with Deep Learning (Autoencoder) 
http://learn.h2o.ai/content/hands-on_training/anomaly_detection.html 
small/median/large reconstruction error: 
The good The bad The ugly
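
A hedged sketch of the autoencoder workflow in the h2o R package (path hypothetical; in current releases `h2o.anomaly` returns the per-row reconstruction MSE):

```r
mnist <- h2o.importFile("mnist_train.csv")   # hypothetical path, 784 pixel columns

ae <- h2o.deeplearning(x = 1:784,
                       training_frame = mnist,
                       autoencoder = TRUE,   # unsupervised: no labels used
                       hidden = c(50),       # small bottleneck layer
                       epochs = 1)

err  <- as.vector(h2o.anomaly(ae, mnist))    # reconstruction error per digit
ugly <- order(err, decreasing = TRUE)[1:10]  # "the ugly": worst-reconstructed rows
```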
H2O Deep Learning, @ArnoCandel 28 
Higgs: Live Demo (Continued) 
How well did 
Deep Learning do? 
<your guess?> 
reference paper results 
Any guesses for AUC on low-level features? 
AUC=0.76 was the best for RF/GBM/NN (H2O) 
Let’s see how H2O did in the past 10 minutes!
H2O Deep Learning, @ArnoCandel 
H2O Steam: Scoring Platform 
http://server:port/steam/index.html 
Higgs Dataset Demo on 10-node cluster 
Let’s score all our H2O models and compare them! 
Live Demo
H2O Deep Learning, @ArnoCandel 30 
Scoring Higgs Models in H2O Steam 
Live Demo on 10-node cluster: 
<10 minutes runtime for all H2O algos! 
Better than LHC baseline of AUC=0.73!
H2O Deep Learning, @ArnoCandel 31 
Higgs Particle Detection with H2O 
HIGGS UCI Dataset: 
21 low-level features AND 
7 high-level derived features 
Train: 10M rows, Test: 500k rows 
Algorithm                       Paper's l-l AUC   low-level H2O AUC   all features H2O AUC   Parameters (not heavily tuned), H2O running on 10 nodes 
Generalized Linear Model              -                0.596                0.684             default, binomial 
Random Forest                         -                0.764                0.840             50 trees, max depth 50 
Gradient Boosted Trees               0.73              0.753                0.839             50 trees, max depth 15 
Neural Net 1 layer                   0.733             0.760                0.830             1x300 Rectifier, 100 epochs 
Deep Learning 3 hidden layers        0.836             0.850                  -               3x1000 Rectifier, L2=1e-5, 40 epochs 
Deep Learning 4 hidden layers        0.868             0.869                  -               4x500 Rectifier, L1=L2=1e-5, 300 epochs 
Deep Learning 5 hidden layers        0.880             0.871                  -               5x500 Rectifier, L1=L2=1e-5 

*Nature paper: http://arxiv.org/pdf/1402.4735v2.pdf 
Deep Learning on low-level features alone beats everything else! 
Prelim. H2O results compare well with paper’s results* (TMVA & Theano)
H2O Deep Learning, @ArnoCandel 32 
H2O Deep Learning Booklet 
R Vignette: Explains methods & parameters! 
Grab your copy today 
or download at 
http://t.co/kWzyFMGJ2S 
or view as GitBook: 
http://h2o.gitbooks.io/h2o-deep-learning/ 
Even more tips & tricks in our other 
presentations and at H2O World next week!
H2O Deep Learning, @ArnoCandel 
You can participate! 
- Images: Convolutional & Pooling Layers PUB-644 
- Sequences: Recurrent Neural Networks PUB-1052 
- Faster Training: GPGPU support PUB-1013 
- Pre-Training: Stacked Auto-Encoders PUB-1014 
- Ensembles PUB-1072 
- Use H2O at Kaggle Challenges!
H2O Deep Learning, @ArnoCandel 
H2O Kaggle Starter Scripts 
H2O Deep Learning, @ArnoCandel 
Re-Live H2O World! 
http://h2o.ai/h2o-world/ 
http://learn.h2o.ai 
Watch the Videos 
Day 1 
• Hands-On Training 
• Supervised 
• Unsupervised 
• Advanced Topics 
• Marketing Use Case 
• Product Demos 
• Hacker-Fest with 
Cliff Click (CTO, HotSpot) 
Day 2 
• Speakers from Academia & Industry 
• Trevor Hastie (ML) 
• John Chambers (S, R) 
• Josh Bloch (Java API) 
• Many use cases from customers 
• 3 Top Kaggle Contestants (Top 10) 
• 3 Panel discussions
H2O Deep Learning, @ArnoCandel 
Key Take-Aways 
H2O is an open source predictive analytics platform 
for data scientists and business analysts who need 
scalable and fast machine learning. 
H2O Deep Learning is ready to take your advanced 
analytics to the next level - Try it on your data! 
Join our Community and Meetups! 
https://github.com/h2oai 
h2ostream community forum 
www.h2o.ai/ 
@h2oai 
Thank you!
