Application of Machine Learning in Structural Health Monitoring
1. Department of Civil Engineering
SHARAD INSTITUTE OF TECHNOLOGY COLLEGE OF ENGINEERING
YADRAV
Application of Machine Learning in Structural Health Monitoring
Dr. Maloth Naresh
2. Introduction
• ML is a branch of artificial intelligence (AI) that enables systems to learn autonomously and gain insights from sample data. ML primarily focuses on building algorithms that improve a computer's ability to perform specific tasks efficiently [1].
• The relationship among various AI systems is illustrated in Fig. 1. Within SHM, an ML-based workflow can be divided into three distinct phases: (a) data collection, (b) feature extraction, and (c) feature classification. Feature extraction, driven by the collected data, often employs statistical or signal-processing methods based on a physical model [2] (a minimal pipeline sketch follows Fig. 1).
• The residuals of this model are selected as indicators of structural damage. Finally, the chosen features are passed to classification algorithms to determine the extent of damage [3].
Fig. 1. Relation between different AI systems.
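Below is a minimal Python sketch of the three-phase workflow just described (data collection, feature extraction, feature classification). The accelerometer signals, the assumed damage effects, and the choice of features (RMS, standard deviation, kurtosis) are illustrative assumptions, not part of the original material; NumPy, SciPy, and scikit-learn are assumed to be available.

import numpy as np
from scipy.stats import kurtosis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate_signal(damaged, n=1024):
    # (a) Data collection: synthetic stand-in for an accelerometer record
    t = np.linspace(0.0, 1.0, n)
    freq = 9.0 if damaged else 10.0      # assumed frequency shift caused by damage
    noise = 0.4 if damaged else 0.2      # assumed increase in response noise
    return np.sin(2 * np.pi * freq * t) + noise * rng.standard_normal(n)

def extract_features(signal):
    # (b) Feature extraction: simple statistical damage-sensitive features
    return np.array([np.sqrt(np.mean(signal ** 2)),  # RMS
                     np.std(signal),                  # standard deviation
                     kurtosis(signal)])               # peakedness

labels = rng.integers(0, 2, 200)                      # 0 = undamaged, 1 = damaged
X = np.array([extract_features(simulate_signal(bool(y))) for y in labels])

# (c) Feature classification: map the extracted features to a damage state
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = SVC().fit(X_train, y_train)
print("Hold-out accuracy:", clf.score(X_test, y_test))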
3. Tasks performed by ML models
• Classification: The goal of this task is to identify the category to which an input belongs. For example, labeling the condition of a steel structure as either "damaged" or "not damaged" is a binary classification task (a small sketch contrasting these tasks follows this list).
• Regression: The aim of this task is to model the relationship between several inputs and a continuous numerical output. The output format is the sole distinction between classification and regression.
• Prediction: A special kind of regression whose purpose is to forecast future values of a given time series.
• Clustering: The goal of clustering is to group comparable samples from the input dataset into clusters. Like self-organizing maps, clustering operates in an unsupervised manner, unlike supervised tasks such as classification, regression, and prediction [4].
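A brief sketch contrasting these task types on synthetic data, assuming scikit-learn is available; the feature matrix, labels, and model choices are hypothetical illustrations, and the only difference between the classifier and the regressor is the output format.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))                         # three illustrative input features
y_class = (X[:, 0] > 0).astype(int)                       # "damaged" / "not damaged" labels
y_reg = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(100)    # a continuous response

print(DecisionTreeClassifier().fit(X, y_class).predict(X[:2]))  # classification: discrete labels
print(DecisionTreeRegressor().fit(X, y_reg).predict(X[:2]))     # regression: continuous values
print(KMeans(n_clusters=2, n_init=10).fit_predict(X)[:5])       # clustering: unsupervised groups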
4. Classification of ML models
• In general, there are two types of ML models:
supervised and unsupervised.
• Supervised models require a dataset with human-labeled data for training. The main goal of supervised learning is therefore to find the best mapping between the inputs and the intended (or target) outputs [5]. As a result, supervised algorithms need a human "supervisor" to assign the correct category or target to each data sample before training [6].
• Unsupervised learning models, on the other hand, require only unlabeled input data. The aim of unsupervised learning is to investigate the data's distribution and gain valuable insights into its underlying structure [7].
Fig. 2. Classification of ML models into supervised and unsupervised models.
5. Supervised models
• Training a model with labeled data, where input attributes (X) correspond to known outputs (Y), is known as supervised learning [8]. These models have two sub-categories: (i) Regression Models and (ii) Classification Models.
(i) Regression Models
These models use input data to predict continuous values (a minimal sketch follows the list).
• Linear Regression
• Polynomial Regression
• Ridge and Lasso Regression
• Support Vector Regression
• Decision Tree Regression
• Random Forest Regression
• Gradient Boosting Regression
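A minimal sketch, assuming scikit-learn is available, that fits several of the regression models listed above to the same synthetic data; the inputs and target are placeholders, not SHM measurements.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))                                  # illustrative input attributes
y = X @ np.array([1.5, -2.0, 0.5, 0.0]) + 0.1 * rng.standard_normal(200)  # continuous target

models = {
    "Linear": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.01),
    "Random Forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated R^2 as a simple comparison metric
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>17s}  mean R^2 = {score:.3f}")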
6. Cont…
(ii) Classification Models
These models predict categorical labels (a comparative sketch follows the list).
• Logistic Regression
• K-Nearest Neighbors
• Support Vector Machines
• Naïve Bayes
• Decision Tree Classifier
• Random Forest Classifier
• Artificial Neural Networks.
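A minimal sketch, assuming scikit-learn is available, comparing several of the classifiers listed above on a synthetic binary ("damaged" vs. "not damaged") dataset; the data and hyperparameters are illustrative placeholders.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic two-class dataset standing in for labeled damage states
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Support Vector Machine": SVC(),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in classifiers.items():
    # 5-fold cross-validated accuracy as a simple comparison metric
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>22s}  mean accuracy = {acc:.3f}")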
7. Unsupervised models
Unsupervised learning searches for hidden patterns or structure in unlabeled data (a minimal sketch follows the list).
• K-Means Clustering
• Hierarchical Clustering
• DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
• Gaussian Mixture Models
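A minimal sketch, assuming scikit-learn is available, of the unsupervised methods listed above grouping unlabeled feature vectors; the two synthetic groups (e.g. baseline vs. shifted response) and all parameter values are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans, DBSCAN, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Two unlabeled groups of 2-D feature vectors, standing in for two structural conditions
X = np.vstack([rng.normal(0.0, 0.3, (100, 2)),
               rng.normal(2.0, 0.3, (100, 2))])

print("K-Means:      ", KMeans(n_clusters=2, n_init=10).fit_predict(X)[:5])
print("Hierarchical: ", AgglomerativeClustering(n_clusters=2).fit_predict(X)[:5])
print("DBSCAN:       ", DBSCAN(eps=0.5, min_samples=5).fit_predict(X)[:5])
print("GMM:          ", GaussianMixture(n_components=2, random_state=0).fit_predict(X)[:5])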
8. Limitations of ML models
• A given feature/classifier set may not be the optimal option for every kind of structural damage detection (SDD) task. In other words, a feature/classifier set that is optimal for one kind of structure may not be a sensible choice for another kind of structure.
• There is also no assurance that a particular feature/classifier set will be the best option for every kind of structural deterioration. For instance, a feature/classifier set shown to be a good fit for detecting stiffness loss may fail to detect changes in boundary conditions.
• Poorly chosen classifiers or a fixed set of manually crafted features will most likely result in poor SHM performance.
• The application of ML-based approaches in real-time SHM operations is hampered by feature-extraction techniques such as modal estimation, AR modelling, and PCA, which often introduce significant computational complexity and delay (a sketch of AR-coefficient extraction follows).
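AR modelling is one of the feature-extraction techniques cited above as a source of computational cost. The sketch below, using only NumPy, fits an AR model by ordinary least squares to a synthetic signal and returns the coefficients as a feature vector; the model order, signal, and least-squares implementation are illustrative assumptions.

import numpy as np

def ar_features(signal, order=4):
    # Fit an AR(order) model x[t] = a1*x[t-1] + ... + a_p*x[t-p] + e[t]
    # and return the coefficients as a damage-sensitive feature vector.
    rows = [signal[i:i + order][::-1] for i in range(len(signal) - order)]
    X = np.array(rows)          # lagged regression matrix
    y = signal[order:]          # one-step-ahead targets
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 2048)
signal = np.sin(2 * np.pi * 10.0 * t) + 0.2 * rng.standard_normal(t.size)
print("AR(4) feature vector:", ar_features(signal))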
9. Deep learning models
• As mentioned previously, basic supervised and unsupervised machine-learning algorithms require prior feature extraction to describe the input data in terms of a predetermined number of manually crafted features. Because the choice of extracted features has a major impact on how well these algorithms perform, it is crucial to pick the set of features that best captures the most distinctive and useful characteristics of the input data. In practice, once the features have been extracted, a simple machine-learning algorithm can readily learn to map the extracted characteristics to the desired output. This style of machine learning is helpful in situations where a human expert can choose the features manually (e.g., a list of symptoms) [9].
• For many tasks, however, it can be challenging to manually choose a suitable set of features for training an AI system. A computer-vision system for detecting vehicles in photos is a good example to consider for a better understanding [10] (a sketch of a network that learns features directly from raw signals follows).
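As a contrast to manual feature extraction, the sketch below (assuming PyTorch is available) shows a small 1D convolutional network that learns features directly from raw vibration-like signals before classifying them; the architecture, layer sizes, and dummy data are illustrative assumptions, not a validated SHM model.

import torch
import torch.nn as nn

class Conv1DDamageClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(                    # learned feature extraction
            nn.Conv1d(1, 16, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)        # feature classification

    def forward(self, x):
        # x: (batch, 1, signal_length) raw accelerometer records
        z = self.features(x).squeeze(-1)
        return self.classifier(z)

model = Conv1DDamageClassifier()
dummy_batch = torch.randn(8, 1, 1024)                     # 8 synthetic signals
logits = model(dummy_batch)
print(logits.shape)                                       # -> torch.Size([8, 2])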
11. References
1. Dietterich, T. G. (2000, June). Ensemble methods in machine learning. In International
workshop on multiple classifier systems (pp. 1-15). Berlin, Heidelberg: Springer Berlin
Heidelberg.
2. Guyon, I., & Elisseeff, A. (2006). An introduction to feature extraction. In Feature extraction:
foundations and applications (pp. 1-25). Berlin, Heidelberg: Springer Berlin Heidelberg.
3. Brownlee, J. (2019). How to choose a feature selection method for machine learning. Machine
Learning Mastery, 10, 1-7.
4. Avci, O., Abdeljaber, O., Kiranyaz, S., Hussein, M., Gabbouj, M., & Inman, D. J. (2021). A
review of vibration-based damage detection in civil structures: From traditional methods to
Machine Learning and Deep Learning applications. Mechanical systems and signal
processing, 147, 107077.
5. Kubat, M. (2017). An introduction to machine learning. Springer.
6. Figueiredo, E., & Santos, A. (2018). Machine learning algorithms for damage detection.
In Vibration-based techniques for damage detection and localization in engineering
structures (pp. 1-39).
7. Yegnanarayana, B. (2009). Artificial neural networks. PHI Learning Pvt. Ltd.
12. Cont…
8. Santos, A., Figueiredo, E., Silva, M. F. M., Sales, C. S., & Costa, J. C. W. A. (2016). Machine learning algorithms for damage detection: Kernel-based approaches. Journal of Sound and Vibration, 363, 584-599.
9. Pouyanfar, S., Sadiq, S., Yan, Y., Tian, H., Tao, Y., Reyes, M. P., ... & Iyengar, S. S. (2018). A
survey on deep learning: Algorithms, techniques, and applications. ACM computing surveys
(CSUR), 51(5), 1-36.
10. Liu, W., Wang, Z., Liu, X., Zeng, N., Liu, Y., & Alsaadi, F. E. (2017). A survey of deep neural
network architectures and their applications. Neurocomputing, 234, 11-26.