Eawag: Swiss Federal Institute of Aquatic Science and Technology

Recurrent Neural Network Tailored for Weather Radar Nowcasting
June 23, 2016
Andreas Scheidegger
Andreas Scheidegger – Eawag

Weather Radar Nowcasting
[Figure: the nowcast model takes the previous radar images t-4 … t0 and produces the next images t1 … t5]
available inputs: every pixel of the previous images
desired outputs: every pixel of the next n images
Recurrent ANN
[Figure: unrolled recurrence — the last observed image Pt and the hidden state Ht enter the ANN, which emits the prediction P̂t+1 and the updated state Ht+1; feeding each prediction and state back into the ANN yields P̂t+2, P̂t+3, P̂t+4, …]
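The unrolled recurrence above can be sketched in a few lines. This is a hypothetical toy: the real model maps radar images and a high-dimensional hidden state, while here both are plain floats so that only the feed-back structure is visible (`ann_step` and its update rule are made up for illustration).

```python
def ann_step(p, h):
    """One ANN application: take input image p and hidden state h,
    return the prediction and the updated state (toy stand-in)."""
    h_new = 0.5 * h + 0.5 * p   # made-up state update
    p_hat = h_new               # made-up readout
    return p_hat, h_new

def nowcast(p_t, h_t, n_steps):
    """Unroll the recurrence: the last observed image p_t enters once,
    afterwards every prediction is fed back as the next input."""
    predictions = []
    p, h = p_t, h_t
    for _ in range(n_steps):
        p, h = ann_step(p, h)
        predictions.append(p)
    return predictions
```

Only the first step sees real data; every later step runs on the model's own output — exactly the regime that the scheduled training slides address.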
Model structure
Traditional artificial neural networks (ANN) are (nested) non-linear regressions:

    vj = σ(∑k wjk uk + bj)

How can we do better?
→ tailor the model structure to the problem
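The regression formula above, for a single unit j, as a minimal sketch (a logistic σ is assumed here; the slide does not fix the non-linearity):

```python
import math

def neuron(u, w, b):
    """v_j = sigma(sum_k w_jk * u_k + b_j) for one unit j with
    input vector u, weight vector w, and bias b."""
    z = sum(w_k * u_k for w_k, u_k in zip(w, u)) + b
    return 1.0 / (1.0 + math.exp(-z))   # logistic sigma (assumed)
```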
Model structure
[Figure: network architecture — the input image Pt passes through a convolution and fully connected layers that update the hidden state Ht → Ht+1; further fully connected layers drive a spatial transformer, a local correction (deconvolution), and a final Gaussian blur, which together produce the prediction P̂t+1]
Spatial transformer
[Figure: the architecture with the spatial transformer component highlighted; input/output example of a learned warp]
Jaderberg, M., Simonyan, K., Zisserman, A., and Kavukcuoglu, K. (2015) Spatial Transformer Networks. arXiv:1506.02025 [cs].
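A spatial transformer warps an image with a small, learned set of parameters. The sketch below implements only the sampling half for a 2×3 affine matrix, in pure Python on nested lists; the clamped borders are an assumption, and Jaderberg et al. sample on a normalised grid and learn theta with a localisation network, both omitted here.

```python
def bilinear_sample(img, x, y):
    """Sample the image at continuous coordinates (x, y) with bilinear
    interpolation; out-of-range coordinates are clamped to the border."""
    h, w = len(img), len(img[0])
    x = min(max(x, 0.0), w - 1.0)
    y = min(max(y, 0.0), h - 1.0)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy

def spatial_transform(img, theta):
    """Warp an image with a 2x3 affine matrix theta: for every output
    pixel, compute the source location and sample bilinearly.
    E.g. theta = [[1, 0, -1], [0, 1, 0]] shifts the image one pixel
    to the right (output x reads from source x - 1)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            xs = theta[0][0] * x + theta[0][1] * y + theta[0][2]
            ys = theta[1][0] * x + theta[1][1] * y + theta[1][2]
            row.append(bilinear_sample(img, xs, ys))
        out.append(row)
    return out
```

For nowcasting, such a warp can represent the advection of the whole rain field with only six parameters.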
Local correction
[Figure: the architecture with the deconvolution-based local correction highlighted; example correction fields]
Gaussian blur
[Figure: the architecture with the final Gaussian blur highlighted]
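The closing Gaussian blur is an ordinary convolution with a fixed kernel. A separable 1-D sketch (radius and sigma are assumptions; the deck does not give the kernel size):

```python
import math

def gaussian_kernel(radius, sigma):
    """Discrete 1-D Gaussian, normalised to sum to 1."""
    k = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(signal, radius=1, sigma=1.0):
    """Convolve the signal with the Gaussian kernel,
    replicating the border values."""
    k = gaussian_kernel(radius, sigma)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(-radius, radius + 1):
            acc += k[j + radius] * signal[min(max(i + j, 0), n - 1)]
        out.append(acc)
    return out
```

Because the Gaussian is separable, a 2-D blur of a radar image is just `blur_1d` applied along every row and then along every column.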
Prediction
[Figures: observed vs. predicted radar images at forecast horizons of 2.5, 15, 30, 45, and 60 minutes]
Conclusions and Outlook
● Deep learning may be a hype – but it's a useful tool nevertheless!
● Combine domain knowledge with data-driven modeling:
      vj = σ(∑k wjk uk + bj)
● Online parameter adaptation: θt+1 = θt − λ ∇L(θt)
● Other objective function L(θ)?
● Include other inputs
● Predict prediction uncertainty

Interested in collaboration? andreas.scheidegger@eawag.ch
Training and Implementation

Training
● Stochastic gradient descent: θnew = θold − λ ∇L(θold)
● Loss function: L(θ) = ∫0∞ RMS(τ; θ) p(τ) dτ
● Scheduled learning (Bengio et al., 2015)
● Regularisation: drop-out (Srivastava et al., 2014)

Implementation
● Based on the Python library chainer (Tokui et al., 2015)
● Runs on CUDA-compatible GPUs

Srivastava, et al. (2014). Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1929–1958.
Bengio, et al. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. arXiv preprint arXiv:1506.03099.
Tokui, et al. (2015). Chainer: a Next-Generation Open Source Framework for Deep Learning.
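In practice the loss integral is discretised: a weighted sum of RMS errors over the forecast horizons τ, followed by a plain gradient step. A minimal sketch, with flat lists standing in for images and the weights `p_tau` and the hand-supplied gradient as placeholders:

```python
import math

def rms(pred, obs):
    """Root-mean-square error between two equally sized images
    (flattened to lists here)."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def loss(preds, obs, p_tau):
    """Discretised L(theta) = sum_tau RMS(tau; theta) * p(tau):
    one RMS term per forecast horizon, weighted by p_tau."""
    return sum(w * rms(p, o) for w, p, o in zip(p_tau, preds, obs))

def sgd_step(theta, grad, lam=0.01):
    """theta_new = theta_old - lambda * grad L(theta_old)."""
    return [t - lam * g for t, g in zip(theta, grad)]
```

The weighting p(τ) decides how much the training cares about long versus short forecast horizons.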
Training
[Figure: the unrolled network during training — the observed images Pt, Pt+1, Pt+2 feed the ANN steps; the predicted images P̂t+3 and P̂t+4 are the training predictions compared against the observed images]
Scheduled Training
[Figure: as in training, but at each unrolled step the input is chosen at random between the observed image (e.g. Pt+1) and the network's own prediction (P̂t+1); the training predictions remain P̂t+3 and P̂t+4]
Bengio, S., et al. (2015). Scheduled sampling for sequence prediction with recurrent neural networks. arXiv preprint arXiv:1506.03099.
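The "or" in the figure is a coin flip per step. A sketch of the sampling rule from Bengio et al. (2015), with the decay schedule for `p_truth` left open (the paper proposes linear, exponential, and inverse-sigmoid schedules):

```python
import random

def scheduled_input(observed, predicted, p_truth, rng=random):
    """Return the next input for an unrolled step: the observed image
    with probability p_truth (teacher forcing), otherwise the model's
    own prediction. p_truth is decayed towards 0 during training."""
    return observed if rng.random() < p_truth else predicted
```

Early in training p_truth ≈ 1 keeps the gradients well-behaved; decaying it exposes the network to its own errors — the regime it faces at prediction time.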
Results

Prediction animated
[Animation: observed vs. predicted radar sequences]
Traditional model structures
● Estimate a velocity field and translate the last radar image (optical flow)
  → seems to be the most “natural” approach
● Identify rain cells, track and extrapolate them
  → can model growth and decay of cells
Limitations:
● Models can be difficult to tune
● Local features (e.g. a mountain range) cannot be modeled
Convolution layers
[Figure: example convolution kernels and their effects]
https://developer.apple.com/library/ios/documentation/Performance/Conceptual/vImage/ConvolutionOperations/ConvolutionOperations.html
https://en.wikipedia.org/wiki/Kernel_(image_processing)
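The kernel operations linked above reduce to a sliding weighted sum. A minimal "valid" 2-D sketch on nested lists (like most deep-learning layers, it is technically cross-correlation — the kernel is not flipped):

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image and
    take the weighted sum at every position (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    out = []
    for y in range(oh):
        row = []
        for x in range(ow):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    acc += kernel[j][i] * img[y + j][x + i]
            row.append(acc)
        out.append(row)
    return out
```

In a convolution layer the kernel entries are the learned weights; a library implementation additionally batches over many kernels and input channels.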
Spatial Transformer
Jaderberg, M., Simonyan, K., Zisserman, A., and Kavukcuoglu, K. (2015) Spatial Transformer Networks. arXiv:1506.02025 [cs].