Reservoir Computing
Fast Deep Learning for Sequences
Claudio Gallicchio, University of Pisa (Italy)
About me
● Researcher at the Department of Computer Science,
University of Pisa
● Machine Learning, Deep Learning, Neural Networks,
Dynamical Systems
○ Reservoir Computing
○ Deep Randomized Neural Networks
○ Learning in Structured Domains
● IEEE Task Forces
○ Chair of the IEEE Task Force on Reservoir Computing
○ Vice-Chair of the IEEE Task Force on Randomization-Based
Neural Networks and Learning Systems
● Workshops, Tutorials
○ DL in Unconventional Neuromorphic Hardware (IJCNN-21)
○ ML for irregular time-series (ECML PKDD-21)
○ Deep Randomized Neural Networks (AAAI-21)
gallicch@di.unipi.it
Reservoir
Computing
● Convenient way of designing
Neural Networks for sequential
data
● Stability
● Efficiency
Deep Learning for
Time-Series
Machine Learning
Algorithms that learn from the data
[Diagram: Classical programming takes data + rules and produces answers; Machine Learning takes data + answers and produces rules.]
Deep Learning
● Learn representations from the data
● Progressive abstraction
Sequential Data
Recurrent Neural Networks
● State update:
$h_t = f_\theta(x_t, h_{t-1})$
● Output function:
$y_t = g_{\theta_o}(h_t)$
[Diagram: a recurrent layer/cell maps the input $x_t$ and the old state $h_{t-1}$ to the new state $h_t$ and the output $y_t$; $\theta$ denotes the set of parameters.]
Recurrent Neural Networks
● State update:
$h_t = \tanh(x_t W_{xh} + h_{t-1} W_{hh})$
● Output function:
$y_t = h_t W_{ho}$
[Diagram: $W_{xh}$ is the input weight matrix, $W_{hh}$ the recurrent weight matrix, $W_{ho}$ the output weight matrix.]
Computational Graph
[Figure: the recurrent layer/cell unrolled over the time steps $x_1, x_2, \ldots, x_T$, producing states $h_1, h_2, \ldots, h_T$; the same weight matrices $W_{xh}$, $W_{hh}$, $W_{ho}$ are shared across all time steps, and the loss $L$ is computed from the outputs.]
Forward Computation
Fading/Exploding memory:
● the influence of inputs far in the past vanishes/explodes in the current state
● many (non-linear) transformations
[Figure: the unrolled graph; the contribution of early inputs to the current state passes through many repeated applications of $W_{hh}$ and the non-linearity.]
Backpropagation Through Time (BPTT)
Gradient Propagation
● the gradient might vanish/explode through many non-linear transformations
● difficult to train on long-term dependencies
[Figure: the unrolled graph with per-step losses $L_1, \ldots, L_T$; gradients flow backwards through repeated multiplications by $W_{hh}$.]
Bengio et al., "Learning long-term dependencies with gradient descent is difficult", IEEE Transactions on Neural Networks, 1994.
Pascanu et al., "On the difficulty of training recurrent neural networks", ICML 2013.
Approaches
● Gated architectures
○ LSTM, GRU
○ training is slow
Randomization in
Deep Neural
Networks
Deep Learning
Deep Learning models have achieved tremendous success over the
years. This success comes at a very high cost in terms of
● Time
● Parameters
Do we really need this all the time?
Example: embedded applications
Source: https://bitalino.com/en/freestyle-kit-bt
Source: https://www.eenewsembedded.com/news/
raspberry-pi-3-now-compute-module-format
Complexity / Accuracy Tradeoff
[Figure: accuracy vs. complexity; linear models, SVM-like models, and deep NNs trace the usual trade-off, while Deep Randomized NNs aim at high accuracy at reduced complexity.]
The Philosophy
“Randomization is
computationally cheaper than
optimization”
Rahimi, A. and Recht, B., 2008. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning.
Advances in neural information processing systems, 21, pp.1313-1320.
Rahimi, A. and Recht, B., 2007. Random features for large-scale kernel machines. Advances in neural information processing systems,
20, pp. 1177-1184.
Randomization = Efficiency
● Training algorithms are cheaper and simpler
● Model transfer: don’t need to transmit all the weights
● Amenable to neuromorphic implementations
Historical note: the cortico-striatal model
● Fixed recurrent connections
in the PFC
● Dopamine-regulated connections between the PFC
and neurons in the striatum (CD)
Dominey, P.F., 2013. Recurrent temporal networks and
language acquisition—from corticostriatal
neurophysiology to reservoir computing. Frontiers in
psychology, 4, p.500.
Reservoir
Computing
Reservoir Computing: focus on the dynamical system
[Diagram: input layer → reservoir (fixed) → readout (trainable). The reservoir is randomly initialized under stability conditions on the dynamical system.]
Stable dynamics - Echo State Property
$h_t = \tanh(x_t W_{xh} + h_{t-1} W_{hh})$
Verstraeten, David, et al. Neural Networks 20.3 (2007).
Lukoševičius, Mantas, and Herbert Jaeger. Computer Science Review 3.3 (2009).
Echo State Network
Jaeger, Herbert, and Harald Haas.
Science 304.5667 (2004): 78-80.
Liquid State Machine
Maass, Wolfgang, Thomas Natschläger, and
Henry Markram. Neural computation 14.11
(2002): 2531-2560.
Fractal Prediction Machine
Tino, Peter, and Georg Dorffner. Machine
Learning 45.2 (2001): 187-217.
Echo State Networks (ESNs)
Reservoir
$h_t = \tanh(x_t W_{xh} + h_{t-1} W_{hh})$
● large layer of recurrent units
● sparsely connected
● randomly initialized (ESP)
● untrained
[Diagram: input layer → reservoir → readout.]
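A minimal sketch of driving a fixed, untrained reservoir with an input sequence and collecting its states (names and shapes are illustrative):

```python
import numpy as np

def run_reservoir(X, W_xh, W_hh):
    # Drive the fixed reservoir with an input sequence X of shape
    # [T, n_in]; return the collected states, shape [T, n_hidden].
    h = np.zeros(W_hh.shape[0])
    states = []
    for x_t in X:
        h = np.tanh(x_t @ W_xh + h @ W_hh)  # the reservoir update above
        states.append(h)
    return np.stack(states)
```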
Echo State Networks (ESNs)
Readout
$y_t = h_t W_{ho}$
● linear combination of the reservoir state variables
● can be trained in closed form:
$W_{ho} = (H^\top H)^{-1} H^\top Y$
where $H$ stacks the reservoir states and $Y$ the corresponding targets
[Diagram: input layer → reservoir → readout.]
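A sketch of the closed-form readout training; the ridge term is a commonly used regularized variant of the plain least-squares solution on the slide (set `ridge=0` to recover it):

```python
import numpy as np

def train_readout(H, Y, ridge=1e-6):
    # Closed-form readout: W_ho = (H^T H + ridge*I)^{-1} H^T Y
    # H: reservoir states [T, n_hidden]; Y: targets [T, n_out]
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + ridge * np.eye(n), H.T @ Y)

# Usage sketch, with run_reservoir from the sketch above:
#   H = run_reservoir(X, W_xh, W_hh)
#   W_ho = train_readout(H, Y)
#   Y_hat = H @ W_ho
```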
Reservoir Initialization
● Random but stable
○ the state should not be sensitive to tiny input perturbations
● Control the max singular value of $W_{hh}$, or
● Control the spectral radius of $W_{hh}$:
$\rho(W_{hh}) < 1$
1. Generate a random matrix and then re-scale it to the desired $\rho(W_{hh})$ (a hyper-parameter)
2. Generate a random matrix with entries drawn from a symmetric uniform distribution, with the range set to satisfy the stability condition
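A sketch of strategy 1 (rescaling to a desired spectral radius); the density parameter and values are illustrative assumptions:

```python
import numpy as np

def init_reservoir(n_hidden, spectral_radius=0.9, density=0.1, seed=0):
    # Draw a random sparse matrix, then re-scale it so that
    # rho(W_hh) equals the desired hyper-parameter.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(n_hidden, n_hidden))
    W *= rng.random((n_hidden, n_hidden)) < density  # sparse connectivity
    rho = np.max(np.abs(np.linalg.eigvals(W)))       # current spectral radius
    return W * (spectral_radius / rho)
```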
Why does it work?
Suffix-based Markovian organization of the state space of contractive reservoir mappings, even prior to learning.
[Figure: binary input sequences (+1/−1); sequences sharing a recent suffix are mapped to nearby states, illustrating the influence of the input suffix on the state.]
Gallicchio, Claudio, and Alessio Micheli. "Architectural and markovian factors of echo state networks." Neural Networks 24.5 (2011): 440-456.
ESNs exploit the architectural bias of RNNs
Chaotic attractors
Distributed Intelligence Applications
Dragone, Mauro, et al. "A cognitive robotic
ecology approach to self-configuring and evolving
AAL systems." Engineering Applications of
Artificial Intelligence 45 (2015): 269-280.
Robot localization in critical environments
Dragone, Mauro, et
al. ESANN. 2016.
Human Activity Recognition
● Classification of human daily activities from RSS
data generated by sensors worn by the user
The dataset is available online on the UCI repository:
http://archive.ics.uci.edu/ml/datasets/Activity+Recognition+system+based+on+Multisensor+data+fusion+%28AReM%29
Clinical applications
● Automatic assessment of balance skills
● Predict the outcome of the Berg Balance Scale (BBS) clinical test
from time-series of pressure sensors
Bacciu, Davide, et al.
Engineering Applications of
Artificial Intelligence 66
(2017): 60-74.
[Figure: pressure time-series from a Wii Balance Board are used to predict the BBS score.]
https://www.linkedin.com/company/teaching-horizon-2020/
https://twitter.com/TEACHING_H2020
https://www.teaching-h2020.eu
RC in Autonomous Vehicles
● Automatic detection of the physiological, emotional, and cognitive state of the
human → human-centric personalization
● Good performance in human state monitoring + efficiency
D. Bacciu, D. Di Sarli, C. Gallicchio, A. Micheli, N.
Puccinelli, “Benchmarking Reservoir and Recurrent Neural
Networks for Human State and Activity Recognition”, IWANN 2021
Distributed, embeddable and federated learning
…and Beyond
Physical Reservoir Computing
Tanaka, G., Yamane, T., Héroux, J.B., Nakane, R.,
Kanazawa, N., Takeda, S., Numata, H., Nakano, D. and
Hirose, A., 2019. Recent advances in physical reservoir
computing: A review. Neural Networks, 115, pp.100-123.
Workshop W1: Deep Learning in Unconventional Neuromorphic Hardware
Friday, July 23, 12:30PM-4:30PM, Room: IJCNN Virtual Room 1
https://events.femto-st.fr/DLUNH/en/program
Depth in RNNs
[Figure: from shallow RNNs to deep variants: deep input, deep readout, deep reservoir.]
Pascanu, R., Gulcehre, C., Cho, K. and Bengio,
Y., 2013. How to construct deep recurrent neural
networks. arXiv preprint arXiv:1312.6026.
Deep Echo State Networks
[Diagram: input layer → reservoir layer 1 → reservoir layer 2 → … → reservoir layer L (all fixed), with a trainable readout.]
Gallicchio, Claudio, Alessio Micheli, and Luca Pedrelli. "Deep reservoir computing: A critical experimental analysis." Neurocomputing 268 (2017): 87-99.
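A sketch of a stacked (deep) reservoir, where each layer is driven by the state sequence of the layer below; concatenating all layers' states for the readout is an assumption of this sketch:

```python
import numpy as np

def run_deep_reservoir(X, layers):
    # `layers` is a list of fixed (W_in, W_hh) pairs; layer 1 is driven
    # by the input sequence X [T, n_in], each higher layer by the state
    # sequence of the layer below (W_in shaped [dim_below, n_hidden]).
    driving = X
    collected = []
    for W_in, W_hh in layers:
        h = np.zeros(W_hh.shape[0])
        states = []
        for u_t in driving:
            h = np.tanh(u_t @ W_in + h @ W_hh)
            states.append(h)
        driving = np.stack(states)   # feeds the next reservoir layer
        collected.append(driving)
    return np.concatenate(collected, axis=1)  # [T, sum of layer sizes]
```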
Multiple time-scales
● Effects of input perturbations last
longer in the higher reservoir layers
● Multiple time-scales representation is
intrinsic
Gallicchio, Claudio, Alessio Micheli, and Luca Pedrelli.
"Deep reservoir computing: A critical experimental
analysis." Neurocomputing 268 (2017): 87-99
Gallicchio, C. and Micheli, A., 2018, July. Why Layering in
Recurrent Neural Networks? A DeepESN Survey. In 2018
International Joint Conference on Neural Networks
(IJCNN) (pp. 1-8). IEEE.
Implementation
https://github.com/gallicch/DeepRC-TF
Structured data
time-series graphs
Neural networks for graphs
Vertex-wise graph encoding
● time-step → vertex
● previous time step → neighborhood
[Figure: the embedding (state) of a vertex v is computed from its input features and the embeddings of its neighbors v1, …, v4.]
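A hedged sketch of this vertex-wise encoding in the reservoir style, with fixed random weights iterated as a contractive map; the fixed-iteration scheme and all names are assumptions of this sketch, not the exact formulation of the FDGNN paper:

```python
import numpy as np

def graph_reservoir_embed(U, A, W_in, W_hat, n_iters=20):
    # x(v) = tanh(W_in u(v) + W_hat * sum of neighbor embeddings)
    # U: vertex input features [n_vertices, n_in], W_in: [n_in, n_hidden]
    # A: adjacency matrix [n_vertices, n_vertices], W_hat: [n_hidden, n_hidden]
    X = np.zeros((U.shape[0], W_hat.shape[0]))
    for _ in range(n_iters):  # iterate the (assumed contractive) map
        X = np.tanh(U @ W_in + (A @ X) @ W_hat)
    return X  # one embedding per vertex; pool (e.g., sum) for graph tasks
```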
It’s accurate
Gallicchio, C. and Micheli, A.,
2020. Fast and Deep Graph
Neural Networks. In AAAI (pp.
3898-3905).
It’s accurate
Gallicchio, C. and Micheli, A.,
2020. Fast and Deep Graph
Neural Networks. In AAAI (pp.
3898-3905).
It’s fast
Gallicchio, C. and Micheli, A.,
2020. Fast and Deep Graph
Neural Networks. In AAAI (pp.
3898-3905).
Conclusions
Summary
● Reservoir Computing: paradigm for designing and
training RNNs
○ fixed hidden recurrent layer (controlled for asymptotic stability)
○ trainable readout layer
● Fast (& simple) training compared to standard RNNs
● Good for sensor data
● Very active area of research…
○ Embedded applications
○ Unconventional Neuromorphic Hardware
○ European Projects
Deep Randomized Neural Networks
Gallicchio, C. and Scardapane, S., 2020. Deep
Randomized Neural Networks. In Recent Trends in
Learning From Data (pp. 43-68). Springer, Cham.
https://arxiv.org/pdf/2002.12287.pdf
AAAI-21 tutorial website:
https://sites.google.com/site/cgallicch/resources/tutorial_DRNN
IEEE Task Force on Randomization-based Neural
Networks and Learning Systems
Promote the research and applications of deep
randomized neural networks and learning systems,
demonstrate the competitive performance of
randomization-based algorithms in diverse
scenarios, and educate the research community
about randomization-based learning methods
and their relationships.
https://sites.google.com/view/randnn-tf/
IEEE Task Force on Reservoir Computing
Promote and stimulate the development of
Reservoir Computing research under both
theoretical and application perspectives.
https://sites.google.com/view/reservoir-computing-tf/
Reservoir Computing
Fast Deep Learning for Sequences
Claudio Gallicchio
gallicch@di.unipi.it
https://www.linkedin.com/in/claudio-gallicchio-05a47038/
https://twitter.com/claudiogallicc1
Thanks for attending!