SYSTEM MONITORING: the use of machine learning techniques in structural and motor monitoring…
INTRODUCTION. What is system monitoring? Why perform system monitoring: financial and technical challenges. Machine learning techniques: why are they so popular? The goal of machine learning in system monitoring: pattern recognition.
Structural Health Monitoring. Information that needs to be extracted from a structure: damage location and size. Different methods of analyzing a structure: vibration-based damage detection, electric potential-based damage detection, impedance-based damage detection.
Fault Detection in Motors. Measurement of current, voltage… Relevant information that needs to be extracted from a motor: bearing faults, broken rotor bars, friction… Pertinent measurements to perform the diagnosis of a motor.
From Sensor Data to the Training Set. Why raw sensor data cannot be fed directly to a machine learning algorithm (over-fitting, environmental noise, changes over the lifetime of the system). The importance of domain expertise. Common methods that have been applied to build a training set. [Diagram: feature extraction turns the raw time series into a training table with input features x_1 … x_n and target values y_1 … y_m.]
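To make the step from raw signal to training table concrete, here is a minimal Python/NumPy sketch (not from the slides; the names build_training_set and extract_features are hypothetical): the raw time series is cut into fixed-length windows, each window is reduced to a feature vector x_1 … x_n, and the known condition of the system supplies the target values.

import numpy as np

def build_training_set(raw_signal, labels, window_size, extract_features):
    """Split a raw sensor time series into fixed-length windows and turn each
    window into one row (x_1 ... x_n, y) of the training set.

    extract_features is a placeholder for any of the time- or
    frequency-domain methods discussed on the following slides; labels
    holds one known condition per window.
    """
    rows = []
    n_windows = len(raw_signal) // window_size
    for i in range(n_windows):
        window = raw_signal[i * window_size:(i + 1) * window_size]
        x = extract_features(window)              # input features x_1 ... x_n
        y = np.atleast_1d(labels[i])              # known condition of the system
        rows.append(np.concatenate([x, y]))
    return np.vstack(rows)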
Feature Extraction in the Time Domain. Comparison with a known (undamaged) sensor signal. Computationally cheap, and well suited to methods that are not based on a frequency analysis. From the known signal and the unknown sensor signal, the input vector x_1 … x_n is built from features such as: the area between the two curves, the root mean square of each curve, the root mean square of their difference, and the correlation coefficient.
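A minimal sketch of these four time-domain features, assuming both signals are sampled at the same instants (NumPy only; the function name time_domain_features is illustrative, not from the slides):

import numpy as np

def time_domain_features(known, unknown, dt=1.0):
    """Compare an unknown sensor signal against a known (undamaged) reference
    and return the features listed on the slide."""
    diff = unknown - known
    area_between = np.trapz(np.abs(diff), dx=dt)    # area between the two curves
    rms_known = np.sqrt(np.mean(known ** 2))        # RMS of each curve
    rms_unknown = np.sqrt(np.mean(unknown ** 2))
    rms_diff = np.sqrt(np.mean(diff ** 2))          # RMS of the difference
    corr = np.corrcoef(known, unknown)[0, 1]        # correlation coefficient
    return np.array([area_between, rms_known, rms_unknown, rms_diff, corr])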
Switching from the Time to the Frequency Domain. Noise is attenuated, significant information is easier to find, and damage or faults are often directly related to specific frequencies and their harmonics. The time signal is passed through a Fourier transformation, and principal component analysis then reduces the spectrum to the input vector x_1 … x_n.
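A hedged sketch of this frequency-domain route, assuming the windows are stored as rows of a NumPy array and that scikit-learn's PCA is an acceptable stand-in for the principal component analysis step named on the slide:

import numpy as np
from sklearn.decomposition import PCA

def frequency_domain_features(windows, n_components=10):
    """Map each time-domain window to a short frequency-domain input vector.

    n_components must not exceed the number of windows or of spectrum bins.
    """
    spectra = np.abs(np.fft.rfft(windows, axis=1))   # amplitude spectrum of each window
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(spectra)             # input vectors x_1 ... x_n
    return reduced, pca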
Machine Learning Techniques. Neural networks are the most common learning tool used in system monitoring: they are easy to implement and to train, and they can perform pattern recognition and therefore detect damage or faults. Two kinds of networks are used in system monitoring: the feed-forward neural network and the Kohonen self-organizing map.
Feed-Forward Neural Networks. [Diagram: input layer x_1 … x_n, hidden layer, output layer y_1 … y_m, with constant-1 bias nodes feeding the hidden and output layers.] The size of the network is usually small: the input vector typically has 3-20 nodes.
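For orientation, a forward pass through such a small network might look as follows (an illustrative sketch, not the slides' implementation; the constant 1 appended at each layer plays the role of the bias nodes drawn on the diagram):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W_hidden, W_output):
    """One-hidden-layer forward pass: x (n,) -> output (m,).

    W_hidden has shape (hidden_size, n + 1) and W_output has shape
    (m, hidden_size + 1); the extra column multiplies the bias node.
    """
    x_b = np.append(x, 1.0)                      # input vector plus bias node
    hidden = sigmoid(W_hidden @ x_b)             # hidden-layer activations
    hidden_b = np.append(hidden, 1.0)            # hidden activations plus bias node
    return sigmoid(W_output @ hidden_b)          # output vector y_1 ... y_m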
Back-Propagation Algorithm. The most common algorithm for training feed-forward neural networks is back-propagation. Step 1: initialize the weights. Step 2: present input vectors and desired outputs: present training vectors from the training set to the network and calculate the output of every node by propagating the inputs through the network with the selected activation function (sigmoid, step…). Step 3: update the weights, starting at the output nodes and working back to the first hidden layer, by w_ij(t+1) = w_ij(t) + η δ_j x'_i, with δ_j = y_j(1 − y_j)(o_j − y_j) for an output node and δ_j = x'_j(1 − x'_j) Σ_k δ_k w_jk for a hidden node, where x'_i is the activation feeding weight w_ij, o_j is the desired output and y_j the actual output. Step 4: repeat from Step 2 until the weights no longer change.
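A compact sketch of one back-propagation update implementing the equations above for a one-hidden-layer sigmoid network (illustrative only; the weight initialization of Step 1 and the stopping test of Step 4 are left out, and the function name backprop_step is not from the slides):

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(x, target, W_hidden, W_output, eta=0.1):
    """One training step: forward pass, then the slide's weight update
    w_ij(t+1) = w_ij(t) + eta * delta_j * x'_i, applied in place."""
    # Step 2: forward pass (a constant 1 is appended for the bias nodes)
    x_b = np.append(x, 1.0)
    h = sigmoid(W_hidden @ x_b)
    h_b = np.append(h, 1.0)
    y = sigmoid(W_output @ h_b)

    # Step 3: deltas, from the output layer back to the hidden layer
    delta_out = y * (1.0 - y) * (target - y)                      # δ_j = y_j(1-y_j)(o_j-y_j)
    delta_hid = h * (1.0 - h) * (W_output[:, :-1].T @ delta_out)  # δ_j = x'_j(1-x'_j) Σ_k δ_k w_jk
    # (the bias column of W_output is excluded when propagating the error back)

    # Weight updates
    W_output += eta * np.outer(delta_out, h_b)
    W_hidden += eta * np.outer(delta_hid, x_b)
    return y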
Real-World Distribution (example). [Figure: a two-class distribution, with output 0 for class A and 1 for class B; the architecture of the network; snapshots of the classification learned by the back-propagation algorithm during training, at t = 50, 100, 150 and 200 iterations.]
Kohonen Self-Organizing Map (SOM). A Kohonen network can be viewed as a clustering method: similar data samples tend to be mapped to nearby neurons. The Kohonen SOM is also a projection method which maps a high-dimensional data space onto a low-dimensional one. Thanks to this clustering ability, Kohonen networks are used in system monitoring to perform a preliminary organization of the input space. Because the SOM is an unsupervised learning technique, it needs to be associated with another intelligent tool.
Kohonen Self-Organizing Map (SOM). Each node of the network is associated with a weight vector m_i in the input space. When an input vector x_1 … x_n is presented to the map, its distance to the weight vector of each node is computed, and the map returns the closest node, called the Best Matching Unit (BMU). The output of the map is usually sent to another learning machine which finishes the pattern-recognition process.
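A short sketch of the BMU lookup, assuming the map's weight vectors are stored as rows of a NumPy array (the function name best_matching_unit is illustrative):

import numpy as np

def best_matching_unit(x, weights):
    """Return the index of the node whose weight vector m_i is closest
    (in Euclidean distance) to the input vector x.

    weights has shape (n_nodes, n_inputs), one row per map node.
    """
    distances = np.linalg.norm(weights - x, axis=1)
    return int(np.argmin(distances))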
Kohonen SOM Training. The training of the Kohonen network is done by a specific algorithm. The goal is to obtain a map where two points which are nearby in the input space are also close on the map. Step 1: initialize the weights (randomly or with samples from the input space). Step 2: update each node of the map in proportion to the distance from its weight vector to the input vector: m_i(t+1) = m_i(t) + η(t) · h_ci(t) · [x(t) − m_i(t)], where m_i is the weight vector of the i-th node, η(t) is the learning rate, and h_ci(t) is the neighborhood function (the farther a node is from the BMU, the smaller the value returned by this function).
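A minimal training-loop sketch implementing this update rule on a 10x10 map. It assumes a Gaussian neighborhood function and linearly decaying learning rate and neighborhood width; the slides do not fix these choices, so they are illustrative defaults:

import numpy as np

def train_som(data, grid=(10, 10), n_iter=1000, eta0=0.5, sigma0=3.0, seed=0):
    """Train a SOM with the slide's rule m_i <- m_i + eta(t) * h_ci(t) * (x(t) - m_i).

    data has shape (n_samples, n_inputs); the trained weights are returned
    as an array of shape (rows * cols, n_inputs).
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Step 1: initialize weights with random samples from the input space
    weights = data[rng.integers(0, len(data), rows * cols)].astype(float)
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

    for t in range(n_iter):
        x = data[rng.integers(0, len(data))]                   # present one input vector
        eta = eta0 * (1.0 - t / n_iter)                        # decaying learning rate eta(t)
        sigma = sigma0 * (1.0 - t / n_iter) + 0.5              # shrinking neighborhood width
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))   # Best Matching Unit
        grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
        h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))     # neighborhood function h_ci(t)
        weights += eta * h[:, None] * (x - weights)            # Step 2: update every node
    return weights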
Kohonen SOM Training (example). A 10x10 Kohonen map is trained on a training set with columns X1, X2, X3 and Y, where Y is the number of short-circuited turns and X1, X2, X3 are the amplitudes of specific frequencies in the spectrum.
Kohonen SOM Training (example). [Figure: the trained 10x10 Kohonen map with its nodes labelled by the Y values (number of short-circuited turns, from 0 up to 33) of the training samples they best match; neighboring nodes carry similar values.]
Results. Results are characterized by: very good accuracy, usually about 90% of test cases classified correctly; good results must be backed by good domain expertise; and a very specific scope: because there are many types of faults and damage in a system, a network is usually dedicated to only one of them, and the input vector differs depending on the fault. Better results are obtained with highly specialized networks, so monitoring systems are often composed of several networks.
Conclusion. Despite good results, more research needs to be done in system monitoring, especially for the case where several faults or damage types occur at the same time.
