ARTIFICIAL NEURAL NETWORKS IN HYDROLOGY, BY THE ASCE TASK COMMITTEE: A Paper Review
DEVELOPMENT OF ANN Information processing occurs at many simple elements called nodes, also referred to as units, cells, or neurons. Signals are passed between nodes through connection links. Each node typically applies a non-linear transformation, called an activation function, to its net input to determine its output signal.
CLASSIFICATION OF NEURAL NETWORKS 1) Based on number of layers: a) single-layer (Hopfield nets); b) bilayer (Carpenter/Grossberg adaptive resonance networks); c) multilayered (most back-propagation networks).
2) Based on the direction of information flow and processing: a) Feed-Forward Network – the nodes are arranged in layers, starting from the input layer and ending at the output layer; there may also be several hidden layers in between. b) Recurrent ANN – information flows through the nodes in both directions, i.e., from input to output and vice versa.
MATHEMATICAL ASPECTS The inputs form an input vector X = (x1, x2, …, xn). The weights leading into node j form a weight vector Wj = (W1j, W2j, …, Wnj), where Wij represents the connection weight from the ith node in the preceding layer to node j. The output yj is computed as yj = f(Σi xi Wij), where f is the node's activation function.
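The node computation above can be sketched in a few lines of Python. The tanh activation is chosen only for illustration; the slides do not specify which activation function is used.

```python
import numpy as np

def node_output(x, w, f=np.tanh):
    """Output y_j = f(sum_i x_i * W_ij) for a single node.

    x : input vector (x1 ... xn) from the preceding layer
    w : weight vector (W1j ... Wnj) leading into node j
    f : activation function (tanh here, purely for illustration)
    """
    net = np.dot(x, w)   # net input to the node
    return f(net)

# toy values, not from the paper
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, 0.2])
y = node_output(x, w)
```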
NETWORK TRAINING In order for an ANN to generate an output vector that is as close as possible to the target vector, a training process, also called learning, is used to find optimal weight matrices W and bias vectors V that minimize a predetermined error function, which usually has the form E = Σ (over the P training patterns) Σ (over the p output nodes) (yi − ti)²
where ti = component of desired output T, yi = corresponding ANN output, P = number of training patterns, and p = number of output nodes. Types of training: Supervised – requires an external teacher; it needs a large number of inputs and the corresponding outputs. Unsupervised – no teacher required; only the input data set is provided, and the ANN automatically adapts its connection weights to cluster the input patterns into classes with similar properties.
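The sum-of-squares error function can be sketched directly from the definitions above (the numbers below are toy values, not from the paper):

```python
import numpy as np

def sse(targets, outputs):
    """Sum-of-squares error: E = sum over all P patterns and all p
    output nodes of (y_i - t_i)^2."""
    return float(np.sum((np.asarray(outputs) - np.asarray(targets)) ** 2))

# two training patterns, two output nodes each
T = [[1.0, 0.0], [0.0, 1.0]]   # desired outputs t_i
Y = [[0.9, 0.2], [0.1, 0.7]]   # ANN outputs y_i
E = sse(T, Y)                   # 0.01 + 0.04 + 0.01 + 0.09 = 0.15
```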
TRAINING ALGORITHMS 1) Back-Propagation It minimizes the network error function. Each input pattern is passed through the network from the input layer to the output layer. The network output is compared with the desired target output, and the error is computed. This error is then propagated backward through the network to each node, and the connection weights are adjusted accordingly.
Hence it involves two steps: a forward pass, in which the effect of the input is passed through the network to reach the output layer; and, after the error is computed, a backward pass, in which the error generated at the output layer is propagated back toward the input layer while the weights are modified.
Back-propagation is a first-order method based on steepest gradient descent, with the direction vector set equal to the negative of the gradient vector. Consequently, the solution often follows a zigzag path while trying to reach a minimum error position, which may slow down the training process. It is also possible for the training process to be trapped in a local minimum despite the use of a learning rate.
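The steepest-descent weight update described above can be sketched for a single tanh node trained on one pattern. All numbers (inputs, target, initial weights, learning rate) are invented for illustration; a real network would repeat this over many nodes, layers, and patterns.

```python
import numpy as np

x = np.array([0.5, -0.3])   # one training pattern (hypothetical values)
t = 0.8                      # desired output
w = np.array([0.1, -0.2])    # initial connection weights
eta = 0.5                    # learning rate

def forward(w):
    # forward pass: y = f(net) with f = tanh
    return np.tanh(np.dot(x, w))

e0 = (forward(w) - t) ** 2   # error before training
for _ in range(200):
    y = forward(w)
    # backward pass: dE/dw = 2 (y - t) * f'(net) * x, with f' = 1 - y^2
    grad = 2.0 * (y - t) * (1.0 - y ** 2) * x
    w -= eta * grad          # steepest-descent weight update
e1 = (forward(w) - t) ** 2   # error after training
```

Each iteration moves the weights a small step against the gradient, so the error decreases toward a (possibly local) minimum.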
CONJUGATE GRADIENT ALGORITHMS The search does not proceed along the direction of the error gradient, but along directions conjugate to the previous search directions. This prevents future steps from undoing the minimization achieved during the current step.
RADIAL BASIS FUNCTION The hidden layer consists of a number of nodes, each with a parameter vector called a ‘‘center,’’ which can be considered its weight vector. The standard Euclidean distance is used to measure how far an input vector is from a center. For each node, the Euclidean distance between its center and the network's input vector is computed and transformed by a nonlinear function, which determines the output of that node in the hidden layer.
The major task of RBF network design is to determine the centers c. The simplest way is to choose the centers randomly from the training set. Alternatively, the input training set can be clustered into groups and the center of each group taken as a node center. After the centers are determined, the connection weights wi between the hidden layer and the output layer can be determined through ordinary back-propagation training. The primary difference between the RBF network and back-propagation lies in the nature of the nonlinearities associated with the hidden nodes. The nonlinearity in back-propagation is implemented by a fixed function such as a sigmoid. The RBF method, on the other hand, bases its nonlinearities on the data in the training set.
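The hidden-layer computation described above can be sketched as follows. The Gaussian transform of the Euclidean distance is a common choice but is an assumption here, as is the width parameter sigma; the centers are toy values standing in for points "picked from the training set."

```python
import numpy as np

def rbf_hidden(X, centers, sigma=1.0):
    """Hidden-layer outputs of an RBF network.

    Each hidden node j has a center c_j; its output is a nonlinear
    (here Gaussian) transform of the Euclidean distance ||x - c_j||.
    """
    X = np.atleast_2d(X)
    # distance between every input vector and every center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

# toy centers, e.g. chosen from the training set or from cluster means
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
H = rbf_hidden(np.array([[0.0, 0.0]]), centers)
# a node whose center coincides with the input responds with 1.0
```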
CASCADE CORRELATION ALGORITHM It starts with a minimal network without any hidden nodes and grows during training by adding new hidden units one by one. Once a new hidden node has been added to the network, its input-side weights are frozen. The candidate hidden nodes are trained so as to maximize the correlation between the node's output and the network's residual output error.
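The correlation criterion used to select a candidate hidden node can be sketched as below. This is only the scoring step (in the covariance form used by Fahlman-style cascade correlation), not the full training loop of growing the network and freezing input-side weights; the function name and toy data are hypothetical.

```python
import numpy as np

def candidate_score(node_out, residual_err):
    """Correlation-style score for one candidate hidden node:
    |sum over patterns of (v_p - mean v)(e_p - mean e)|, summed over
    the output units. The candidate maximizing it gets installed."""
    v = node_out - node_out.mean()               # centered node outputs, shape (P,)
    e = residual_err - residual_err.mean(axis=0) # centered errors, shape (P, n_out)
    return float(np.abs(v @ e).sum())

# toy example: 3 training patterns, 1 output unit
outs = np.array([1.0, 2.0, 3.0])       # candidate node outputs per pattern
errs = np.array([[1.0], [2.0], [3.0]]) # residual output errors per pattern
score = candidate_score(outs, errs)
```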
