IAES International Journal of Artificial Intelligence (IJ-AI)
Vol. 14, No. 1, February 2025, pp. 159~165
ISSN: 2252-8938, DOI: 10.11591/ijai.v14.i1.pp159-165
Journal homepage: http://ijai.iaescore.com
A Fletcher-Reeves conjugate gradient algorithm-based
neuromodel for smart grid stability analysis
Adedayo Olukayode Ojo1, Aiyedun Olatilewa Eyitayo1, Moses Oluwafemi Onibonoje1, Saheed Lekan Gbadamosi2

1 Department of Electrical/Electronics and Computer Engineering, College of Engineering, Afe Babalola University Ado-Ekiti (ABUAD), Ado-Ekiti, Nigeria
2 Department of Electrical and Electronics Engineering, Bowen University Iwo, Osun State, Nigeria
Article Info

Article history:
Received Jul 29, 2023
Revised Sep 1, 2024
Accepted Oct 8, 2024

Keywords:
Conjugate gradient algorithm
Fletcher-Reeves
Neuro model
Power systems stability
Smart grid

ABSTRACT

Interest in smart grid systems is growing around the globe as they become increasingly popular for their efficiency and cost reduction at both ends of the energy spectrum. This study therefore proposes a neuro model designed and optimized with the Fletcher-Reeves conjugate gradient algorithm for analyzing the stability of smart grids. The performance results achieved with this algorithm were compared with those obtained when the same network was trained with other algorithms. Our results show that the proposed model outperforms existing techniques in terms of accuracy, efficiency, and speed. This study contributes to the development of intelligent solutions for smart grid stability analysis, which can enhance the reliability and sustainability of power systems.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Adedayo Olukayode Ojo
Department of Electrical/Electronics and Computer Engineering, College of Engineering
Afe Babalola University Ado-Ekiti (ABUAD)
PMB 5454, Km 8.5, Afe Babalola Way, Ado-Ekiti, Ekiti State, Nigeria
Email: ojoao@abuad.edu.ng
1. INTRODUCTION
Recently, the world’s energy system has been undergoing significant transitions. These transitions are primarily driven by the need to update the evolving electrical infrastructure, integrate low-carbon energy sources, and meet new kinds of demand such as smart homes and electric transportation, all while maintaining security of supply [1]. Ongoing climate change is also forcing a global shift from fossil-fuel power plants to renewable energy sources, in line with sustainable development goal (SDG) 7, which calls for a transition from fossil fuels to clean and affordable energy. Although integrating these various sources brings advantages such as improved energy efficiency and sustainability, it also introduces new difficulties in analyzing the stability of the power system.
Therefore, it has become crucial that some form of intelligent information processing technique be introduced into the energy management process as well as into overall power system stability prediction and analysis. A growing list of algorithms suitable for this purpose is being developed. This is an improvement over the more conventional technique of simulations combining fixed values for a particular subset of variables and a fixed distribution of values for the remaining subset [2], [3], or the even more laborious measurement-based techniques [4], [5]. Therefore, in this paper the Fletcher-Reeves variant of the conjugate gradient algorithm is used to analyze the stability of a smart grid. The operation of this algorithm is
based on the ratio of the norm squared of the recent gradient to the norm squared of the previous gradient.
The algorithm, used with the widely known backpropagation algorithm, can therefore train any network
provided its weight, net input and transfer functions have derivative functions. The conjugate gradient
algorithm also has other versions, which work in similar ways with the aim of reducing generalization errors in neural-network-based applications [6]. These algorithms, or their variants, have been applied to problems in a variety of fields and applications [4], [7]–[12]. The aim of this paper, however, is to
use the Fletcher-Reeves conjugate gradient algorithm in training machine learning based models leading to the
development of a neuromodel for analyzing smart grid stability.
The integration of renewable energy sources into existing power grids comes with its own challenges owing to the unpredictable nature of some of these renewable energy sources. For instance, solar electricity generation is linked to the amount of exposure to sunlight. Quite often, the availability and intensity of this solar energy are too unpredictable to be used directly for informed decisions in power generation, because unpredictable cloud characteristics lead to instability in the solar irradiance.
Several statistical methods, such as autoregressive moving average, Kalman filter, and Markov chain
model have been researched in an attempt to address the smart grid’s unreliability. These and other early statistical techniques have limitations that are significant for smart grid stability analysis, and these limitations reduce the precision of the prediction model [13]. These models, typically constructed using non-complex
statistical building blocks, perform unsatisfactorily under severe uncertainty. Additionally, these conventional
methods for stability forecasting such as the Markov model are only applicable within certain operating ranges
[14]. The Fletcher-Reeves conjugate gradient algorithm proposed in this article, however, is a very important
constituent of the neuromodel because it combines the advantages of neural networks and optimization
algorithms, allowing for fast and accurate convergence to a solution. Additionally, neural networks are well suited to stability analysis because they can, to a satisfactory degree, capture the nonlinearities and uncertainties in the smart grid system [15]. Using this algorithm-based neural model for smart grid stability can empower system operators, helping them make well-informed decisions before and during maintenance and
systems operations. It can also help in the development and design process of smart grid control systems that
will ensure the reliability and stability of the power system [16].
2. METHOD
2.1. Fletcher-Reeves conjugate gradient algorithm
This research utilizes a Fletcher-Reeves version of the conjugate gradient algorithm to analyze smart
grid systems’ stability. The Fletcher-Reeves conjugate gradient algorithm’s operation is based on the ratio of the norm squared of the current gradient to the norm squared of the previous gradient. Mathematically, the Fletcher-Reeves conjugate gradient algorithm can be represented as in (1).
𝑥𝑘+1 = 𝑥𝑘 + 𝛼𝑘𝑑𝑘, 𝑘 = 0,1, … (1)
Let 𝑥𝑘 represent the current solution, with 𝛼𝑘 as the step size. The step length 𝛼𝑘 is obtained through
a line search process aimed at minimizing performance along the chosen search direction, 𝑑𝑘. This direction
guides the search toward a minimum point. Initially, the search direction is set as the negative gradient of
performance. In later iterations, it is recalculated using both the updated gradient and the previous search
direction, as shown in (2).
𝑑𝑘+1 = −𝑔𝑘 + (𝑑𝑘 × 𝑧) (2)
Where 𝑔𝑘 is the gradient of the objective function and the entity ‘z’ may be evaluated through a few methods.
For this algorithm under discussion, it is evaluated through the use of (3).
z = normnew_sqr / norm_sqr (3)
Where "norm_sqr" is the norm square of the previous gradient, and "normnew_sqr" is the norm square of the current gradient [6]. A newer formulation of the Fletcher-Reeves conjugate gradient algorithm [17] suggests another formula for calculating the new search direction from the previous search direction 𝑑𝑘, given by (4).
𝑑𝑘+1 = −𝑔𝑘+1, if 𝑘 = 0; 𝑑𝑘+1 = −𝑔𝑘+1 + 𝛽𝑘𝑑𝑘, if 𝑘 > 0 (4)
Where 𝛽𝑘 is a scalar quantity and 𝑔𝑘+1 is the new gradient of the objective function.
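As a concrete illustration of the update rules in (1) to (4), the Python sketch below implements a generic Fletcher-Reeves conjugate gradient loop for a differentiable objective. It is a minimal sketch only: the crude backtracking line search stands in for the performance-minimizing line search used by neural network training toolboxes, and all function and variable names are illustrative rather than taken from this work.

```python
import numpy as np

def fletcher_reeves_cg(f, grad, x0, max_iter=100, tol=1e-6):
    """Minimal Fletcher-Reeves conjugate gradient loop following (1)-(4)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                        # initial direction: negative gradient
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # crude backtracking line search for the step size alpha_k
        alpha, fx = 1.0, f(x)
        while f(x + alpha * d) > fx and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d                         # update rule (1)
        g_new = grad(x)
        z = np.dot(g_new, g_new) / np.dot(g, g)   # ratio (3): ||g_new||^2 / ||g||^2
        d = -g_new + z * d                        # direction update (2)/(4) with beta_k = z
        g = g_new
    return x

# usage: minimize a simple quadratic bowl, whose minimum is at the origin
x_min = fletcher_reeves_cg(lambda x: float(np.sum(x ** 2)),
                           lambda x: 2.0 * x,
                           x0=np.array([3.0, -2.0]))
```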
2.2. Neural network architecture
The neural network architecture used in this study consists of three layers: the input layer, the hidden layer, and the output layer. The input layer had 12 neurons, corresponding to the 12 features in the dataset. The hidden layer had 47 neurons and used the hyperbolic tangent (tanh) activation function. The output layer had one neuron, which predicted the target variable using the sigmoid activation function.
The tanh activation function was chosen for the hidden layer because of its ability to model complex
non-linear relationships between the input and output variables. The sigmoid activation function was used in
the output layer because it is suitable for binary classification problems. For this architecture, a training dataset of 70% (42,000 samples) was fed into the network during training, and the network was trained and adjusted according to its observed error surface. A validation dataset of 15% (9,000 samples) was then employed to evaluate the ability of the network to handle new data that were not part of the training, and to stop the training as soon as this ability fell below a threshold. Finally, a testing dataset of 15% (9,000 samples) was used, which provides an independent measure of network performance during and after training. This network
structure was selected based on previous research on similar datasets and problems, and was refined through
experimentation with different configurations of the neural network. The ultimate goal was to achieve high
accuracy in predicting the target variable while avoiding overfitting the training data.
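To make the above configuration concrete, the sketch below builds the 12-47-1 forward pass with tanh and sigmoid activations and reproduces the 70/15/15 split of the 60,000-sample dataset (42,000/9,000/9,000). The random data, weight initialization, and variable names are illustrative assumptions; the actual model in this work was trained with the Fletcher-Reeves conjugate gradient update rather than the plain forward pass shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in for the 60,000-sample, 12-feature stability dataset
X = rng.standard_normal((60_000, 12))
y = (rng.random(60_000) > 0.5).astype(float)

# 70/15/15 split: 42,000 training, 9,000 validation, 9,000 testing samples
idx = rng.permutation(len(X))
train_idx, val_idx, test_idx = np.split(idx, [42_000, 51_000])

# 12-47-1 network: tanh hidden layer, sigmoid output neuron
W1 = rng.standard_normal((12, 47)) * 0.1
b1 = np.zeros(47)
W2 = rng.standard_normal((47, 1)) * 0.1
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                      # hidden layer activations in (-1, 1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output in (0, 1)

stability_prob = forward(X[test_idx])             # predicted class probabilities
```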
Figure 1 shows the structure of the artificial neural network (ANN) employed in testing/training the
dataset. It includes the different layers of the network. The input layer consists of 12 different inputs, which include the energy producer, the consumers, and the reaction time, among other input variables. The output layer represents the percentage stability.
Figure 1. Neural network architecture
3. RESULTS AND DISCUSSION
After training the neural network, the observed regression from the neural network described above was plotted. It was observed, as shown in Figure 2, that the neural network achieved an overall regression R squared value of 0.98155. Other parameters were also evaluated, including the validation regression (0.98227) and the testing regression (0.98146). Overall, our results suggest that, in response to the training provided, the ANN successfully learned the hidden inter-relationships between the variables and can make accurate predictions on new, separate data, highlighting the potential of neural networks as a powerful tool for modelling and prediction in various fields.
The plots shown in Figures 3 and 4 represent the 3D display of the relationship between selected
inputs and the target. The plots were able to capture the patterns in the data and render the relationship between
the variables in a visually easy-to-appreciate format. The essence of these plots is that, apart from the pattern
learned by the ANN, it now becomes easier and intelligible for experts to visually estimate the relevance of
each variable at different values of other variables.
The tanh (hyperbolic tangent) transfer function chosen for the hidden layer is a common activation function in neural networks. The tanh function, as listed in Table 1, maps the input values to the
range (-1, 1), introducing non-linearity to the model. This non-linearity enables the network to learn complex
patterns and relationships in the data, improving the model's ability to capture more intricate features of the
input. However, it is essential to consider that the tanh function can suffer from the vanishing gradient problem,
especially during the early stages of training. The sigmoid transfer function is used in the output layer. It maps
the input values to the range (0, 1), which is suitable for binary classification tasks as in the case of this article.
The output values represent probabilities, with values closer to 0 indicating one class and values closer to 1
representing the other class. The sigmoid function is commonly used in binary classification problems because
it allows for a probabilistic interpretation of the model's output, which is often useful in decision-making
scenarios.
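As a small illustration of this probabilistic reading of the sigmoid output, the snippet below thresholds hypothetical predicted probabilities at 0.5 to obtain binary class labels; the threshold and the values shown are assumptions made for the example.

```python
import numpy as np

probs = np.array([0.12, 0.48, 0.51, 0.97])   # hypothetical sigmoid outputs
labels = (probs > 0.5).astype(int)           # 0 -> one class, 1 -> the other
print(labels)                                # [0 0 1 1]
```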
The total number of neurons in the network indicates the size of the hidden layer. In this case, the
hidden layer contains 40 neurons. The number of neurons in the hidden layer is often a hyperparameter that
needs to be tuned during the network's design. Too few neurons may result in the network being unable to learn
complex patterns, while too many neurons can result in a situation where the network simply learns the exact interrelations between the variables only for the provided data, while grossly underperforming for other data. The selection of 40 neurons indicates that the model's architecture is likely designed to strike a balance between
complexity and generalization. The total number of weight elements represents the number of parameters that
need to be learned.
In this network, there are 60 weight elements. Each connection between neurons in the network has
an associated weight that is adjusted during training to minimize the error. The number of weight elements is
directly related to the complexity of the network and the total number of trainable parameters. In larger
networks, the number of weight elements can become substantial, leading to longer training times and the risk
of overfitting if not properly regularized.
Figure 2. Performance of the neuromodel for stability assessment
Figure 3. Stability analysis 1 Figure 4. Stability analysis 2
Table 1. Neural network specifications
ANN parameters Description/Value
Type of transfer function (hidden layer) tanh
Type of transfer function (output layer) Sigmoid
Weight update mechanism A Fletcher-Reeves conjugate gradient algorithm
Total number of neurons 40
Total number of weight elements 60
Maximum epochs 600
The maximum number of epochs sets an upper limit on the number of times the total training data is
fed to the network during the training phase. Training the network for a fixed number of epochs helps control
the duration of the training process and avoids overfitting. If the training performance plateaus before reaching
the maximum epochs, early stopping techniques can be employed to halt training prematurely, thereby
preventing unnecessary iterations and saving computational resources. Overall, the chosen ANN parameters
reveal a well-configured neural network for a binary classification task. However, it's important to note that
achieving optimal performance often involves experimenting with different architectures, hyperparameters,
and evaluation metrics specific to the dataset and problem at hand. The provided ANN parameters serve as a
starting point for training a model and can be further refined and optimized through iterative experimentation
and fine-tuning.
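A hedged sketch of this validation-based early stopping is given below; train_one_epoch and validation_error are hypothetical placeholders for the actual training and evaluation routines, and the patience value is an illustrative choice rather than the setting used in this work.

```python
def train_with_early_stopping(train_one_epoch, validation_error,
                              max_epochs=600, patience=6):
    """Stop once the validation error has not improved for `patience` epochs."""
    best_err, epochs_since_best = float("inf"), 0
    for _ in range(max_epochs):                  # upper limit on passes over the data
        train_one_epoch()
        err = validation_error()
        if err < best_err:
            best_err, epochs_since_best = err, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:    # generalization stopped improving
                break
    return best_err
```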
In Table 2, a comparison is made between the performance of the Fletcher-Reeves conjugate gradient
algorithm and other types of neural network algorithms in training the network under similar conditions. These
include the gradient descent algorithm, the Levenberg-Marquardt algorithm, and the layer sensitivity-based
ANN for the training, validation and testing. The satisfactory performance of the algorithms deployed for training the neural network in this work shows the applicability of neural networks in evaluating and predicting the stability of smart grid systems. This is in agreement with the growing list of applications of other soft-computing techniques as reported in [18]–[27].
Table 2. Outlook of the network performances
Network   Training VAF   Validation VAF   Training CMD   Testing VAF   Testing RMSE
FRCG_ANN  98.77          97.83            0.9866         96.26         3.57
GDA_ANN   94.89          95.44            0.9652         95.12         7.03
LM_ANN    96.55          96.62            0.9839         93.57         4.10
LSB_ANN   97.07          97.87            0.9822         95.46         4.97
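For reference, the two figures of merit reported in Table 2 can be computed as sketched below, assuming the commonly used definitions of variance accounted for (VAF, in percent) and root mean square error (RMSE); the function names are illustrative.

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance accounted for, in percent (common definition assumed here)."""
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```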
4. CONCLUSION
The Fletcher-Reeves conjugate gradient algorithm-based neuromodel is a promising approach for
smart grid stability analysis. The study presented in this paper has shown that this algorithm can be successfully
applied to power system stability analysis with high accuracy and efficiency. The use of ANN in analysing power stability provides a flexible and versatile platform for the analysis and control of power systems, and
the Fletcher-Reeves conjugate gradient algorithm is a powerful optimization tool that can be used to improve
the training process of these networks. The observations from this work indicate that the proposed neuro model
can accurately predict the stability of power systems, based on the input features that were used in the model.
This provides a valuable tool for power system operators and planners, who need to make critical decisions
about system stability and reliability. Overall, the Fletcher-Reeves conjugate gradient algorithm-based neuromodel possesses the capability to reduce error in power system stability assessment, and its application in
this field should be further explored and developed.
ACKNOWLEDGEMENTS
This work was financed by the DAAD NiReMaS Project of the University of Cologne, Germany.
REFERENCES
[1] S. K. Samanta and C. K. Chanda, “Investigate the impact of smart grid stability analysis on synchronous generator,” in 2017 IEEE
Calcutta Conference (CALCON), IEEE, Dec. 2017, pp. 241–247, doi: 10.1109/CALCON.2017.8280732.
[2] A. Andreotti, G. Carpinelli, F. Mottola, and D. Proto, “A review of single-objective optimization models for plug-in vehicles
operation in smart grids part I: theoretical aspects,” in 2012 IEEE Power and Energy Society General Meeting, IEEE, Jul. 2012, pp.
1–8, doi: 10.1109/PESGM.2012.6345381.
[3] F. Mohammad and Y.-C. Kim, “Energy load forecasting model based on deep neural networks for smart grids,” International Journal
of System Assurance Engineering and Management, vol. 11, no. 4, pp. 824–834, Aug. 2020, doi: 10.1007/s13198-019-00884-9.
[4] T. Esch, G. Bremer, W. Dirksen, C. Munoz, and K. V. Maydell, “Analysis of the integration of a DC-Charging station into a tram
grid by using long-term field measurements and a PHiL setup,” IEEE Access, vol. 12, pp. 88243–88250, 2024, doi:
10.1109/ACCESS.2024.3416491.
[5] B. Gao, X. Huang, J. Shi, Y. Tai, and J. Zhang, “Hourly forecasting of solar irradiance based on ceemdan and multi-strategy CNN-
LSTM neural networks,” Renewable Energy, vol. 162, pp. 1665–1683, Dec. 2020, doi: 10.1016/j.renene.2020.09.141.
[6] P. S. Sandhu and S. Chhabra, “A comparative analysis of conjugate gradient algorithms & PSO based neural network approaches
for reusability evaluation of procedure based software systems,” Chiang Mai Journal of Science, vol. 38, pp. 123–135, 2011.
[7] T. Gao, J. Wang, B. Zhang, H. Zhang, P. Ren, and N. R. Pal, “A polak-ribière-polyak conjugate gradient-based neuro-fuzzy network
and its convergence,” IEEE Access, vol. 6, pp. 41551–41565, 2018, doi: 10.1109/ACCESS.2018.2848117.
[8] A. O. Ojo, O. I. Esan, and O. O. Omitola, “Deep learning based software-defined indoor environment for space-time coded wireless
communication using reconfigurable intelligent surfaces,” International Journal of Microwave and Optical Technology, vol. 17,
no. 6, pp. 664–673, 2022.
[9] A. H. Ibrahim, P. Kumam, and W. Kumam, “A family of derivative-free conjugate gradient methods for constrained nonlinear
equations and image restoration,” IEEE Access, vol. 8, pp. 162714–162729, 2020, doi: 10.1109/ACCESS.2020.3020969.
[10] A. O. Ojo and O. M. Oluwafemi, “Performance analysis of bfgs quasi-newton neuro algorithm for the design of 30 ghz patch
antenna for 5g applications,” in 2022 IEEE Nigeria 4th International Conference on Disruptive Technologies for Sustainable
Development (NIGERCON), IEEE, 2022, pp. 1–5, doi: 10.1109/NIGERCON54645.2022.9803170.
[11] A. O. Ojo and O. M. Oluwafemi, “Evaluation of thermal comfort in a multi-occupancy office using polak-ribiére conjugate gradient
neuro-algorithm,” in 2022 IEEE Nigeria 4th International Conference on Disruptive Technologies for Sustainable Development
(NIGERCON), IEEE, 2022, pp. 1–5, doi: 10.1109/NIGERCON54645.2022.9803185.
[12] O. O. Adedayo, M. O. Onibonoje, and T. E. Fabunmi, “Simulative analysis of metropolitan electric distribution network using
conjugate gradient neuro-algorithm with powell/beale restarts,” in 2021 International Conference on Electrical, Computer and
Energy Technologies (ICECET), IEEE, 2021, pp. 1–5, doi: 10.1109/ICECET52533.2021.9698441.
[13] J. Li et al., “A novel hybrid short-term load forecasting method of smart grid using mlr and lstm neural network,” IEEE Transactions
on Industrial Informatics, vol. 17, no. 4, pp. 2443–2452, Apr. 2021, doi: 10.1109/TII.2020.3000184.
[14] G. Capizzi, G. L. Sciuto, C. Napoli, and E. Tramontana, “Advanced and adaptive dispatch for smart grids by means of predictive
models,” IEEE Transactions on Smart Grid, vol. 9, no. 6, pp. 6684–6691, Nov. 2018, doi: 10.1109/TSG.2017.2718241.
[15] O. Adedayo, M. Onibonoje, and M. Isa, “A layer-sensitivity based artificial neural network for characterization of oil palm fruitlets,”
International Journal of Applied Science and Engineering, vol. 18, no. 1, 2021, doi: 10.6703/IJASE.202103_18(1).011.
[16] M. B. Omar, R. Ibrahim, R. Mantri, J. Chaudhary, K. R. Selvaraj, and K. Bingi, “Smart grid stability prediction model using neural
networks to handle missing inputs,” Sensors, vol. 22, no. 12, Jun. 2022, doi: 10.3390/s22124342.
[17] B. A. Hassan and H. M. Sadeq, “The new algorithm form of the Fletcher–Reeves conjugate gradient algorithm,” Journal of
Multidisciplinary Modeling and Optimization, vol. 1, no. 1, pp. 41–51, 2018.
[18] D. J. Scott, P. V Coveney, J. A. Kilner, J. C. H. Rossiny, and N. M. N. Alford, “Prediction of the functional properties of ceramic
materials from composition using artificial neural networks,” Journal of the European Ceramic Society, vol. 27, no. 16, pp. 4425–
4435, Jan. 2007, doi: 10.1016/j.jeurceramsoc.2007.02.212.
[19] J. L. Pedreño-Molina, M. Pinzolas, and J. Monzó-Cabrera, “A new methodology for in situ calibration of a neural network-based
software sensor for s-parameter prediction in six-port reflectometers,” Neurocomputing, vol. 69, no. 16–18, pp. 2451–2455, Oct.
2006, doi: 10.1016/j.neucom.2006.01.008.
[20] S. Kara, “Classification of mitral stenosis from doppler signals using short time fourier transform and artificial neural networks,”
Expert Systems with Applications, vol. 33, no. 2, pp. 468–475, Aug. 2007, doi: 10.1016/j.eswa.2006.05.011.
[21] N. Srinivas, A. V. Babu, and M. D. Rajak, “ECG signal analysis using data clustering and artificial neural networks,” American
International Journal of Research in Science, Technology, Engineering & Mathematics, vol 4, no. 2, pp. 82–90, 2013.
[22] F. M. Al-Naima and A. H. Al-Timemy, “Resilient back propagation algorithm for breast biopsy classification based on artificial
neural networks,” in Computational Intelligence and Modern Heuristics, Amman, Jordan: InTech, 2010, doi: 10.5772/7817.
[23] T. R. Kiran and S. P. S. Rajput, “An effectiveness model for an indirect evaporative cooling (IEC) system: comparison of artificial
neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS) and fuzzy inference system (FIS) approach,” Applied Soft
Computing, vol. 11, no. 4, pp. 3525–3533, Jun. 2011, doi: 10.1016/j.asoc.2011.01.025.
[24] A. Hussein, M. Adda, M. Atieh, and W. Fahs, “Smart home design for disabled people based on neural networks,” Procedia
Computer Science, vol. 37, pp. 117–126, 2014, doi: 10.1016/j.procs.2014.08.020.
[25] S. Lin, F. Cao, and Z. Xu, “Essential rate for approximation by spherical neural networks,” Neural Networks, vol. 24, no. 7, pp.
752–758, Sep. 2011, doi: 10.1016/j.neunet.2011.04.005.
[26] S. Lukić et al., “Artificial neural networks based prediction of cerebral palsy in infants with central coordination disturbance,” Early
Human Development, vol. 88, no. 7, pp. 547–553, Jul. 2012, doi: 10.1016/j.earlhumdev.2012.01.001.
[27] A. Hasan and A. F. Peterson, “Measurement of complex permittivity using artificial neural networks,” IEEE Antennas and
Propagation Magazine, vol. 53, no. 1, pp. 200–203, Feb. 2011, doi: 10.1109/MAP.2011.5773614.
BIOGRAPHIES OF AUTHORS
Engr. Dr. Adedayo Olukayode Ojo graduated from the Department of Electronic
and Electrical Engineering in Ladoke Akintola University of Technology (LAUTECH) Nigeria
where he received his B.Tech. degree with honours. He obtained his M.Sc. degree in
microelectronics from Universiti Putra Malaysia (UPM), and his Ph.D. in wireless
communications from the Department of Electrical/Electronics and Computer Engineering, Afe
Babalola University Ado-Ekiti (ABUAD) during which he also had a productive research stay
at the Cologne Institute for Information System (CIIS) in Cologne, Germany. He is currently a
senior lecturer at the Department of Electrical/Electronic and Computer Engineering, Afe
Babalola University Ado Ekiti, Ekiti State Nigeria. His research interests include wireless
communication systems, microelectronics, soft computing, and intelligent systems. He can be
contacted at email: ojoao@abuad.edu.ng.
Aiyedun Olatilewa Eyitayo received the B.Eng. (Bachelor of Engineering) degree
in electrical electronic engineering from the Department of Electrical/Electronics and Computer
Engineering, College of Engineering, Afe Babalola University Ado-Ekiti (ABUAD), Nigeria.
His final year project is in the area of the design and modeling of a hybrid renewable microgrid
system. His research interests include intelligent systems, smart grids, modern power systems,
and power system stability. He can be contacted at email: tilewatayo2@gmail.com.
Engr. Dr. Moses Oluwafemi Onibonoje is an Associate Professor in the
Department of Electrical/Electronics and Computer Engineering at Afe Babalola University
Ado-Ekiti (ABUAD), Nigeria. He specializes in control and instrumentation engineering. His
research areas include wireless sensor networks, distributed systems, wireless instrumentations,
optimization methods, and agro-electro instrumentations. He holds a Ph.D. in electronic and
electrical engineering from Obafemi Awolowo University, Ile-Ife, Nigeria. He is COREN registered and a corporate member of the Nigerian Society of Engineers. He can be contacted
at email: onibonojemo@abuad.edu.ng.
Saheed Lekan Gbadamosi is an Associate Professor at Bowen University Iwo and a researcher specializing in sustainable energy systems. With a Ph.D. in electrical
engineering, he is a registered engineer in both South Africa and Nigeria. His research focuses
on power systems optimization, renewable energy integration, and cyber-physical systems. As
an educator, he has taught courses in power systems and renewable energy while mentoring
postgraduate students. His passion for sustainable development drives his commitment to
creating a greener future. He can be contacted at email: gbadamosiadeolu@gmail.com.

More Related Content

PDF
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
PDF
28 16107 paper 088 ijeecs(edit)
PDF
Performance assessment of an optimization strategy proposed for power systems
PDF
Short Term Electrical Load Forecasting by Artificial Neural Network
PDF
Artificial Neural Network and Multi-Response Optimization in Reliability Meas...
PDF
Transforming an Existing Distribution Network Into Autonomous MICRO-GRID usin...
PDF
Two-way Load Flow Analysis using Newton-Raphson and Neural Network Methods
PDF
01 16286 32182-1-sm multiple (edit)
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
28 16107 paper 088 ijeecs(edit)
Performance assessment of an optimization strategy proposed for power systems
Short Term Electrical Load Forecasting by Artificial Neural Network
Artificial Neural Network and Multi-Response Optimization in Reliability Meas...
Transforming an Existing Distribution Network Into Autonomous MICRO-GRID usin...
Two-way Load Flow Analysis using Newton-Raphson and Neural Network Methods
01 16286 32182-1-sm multiple (edit)

Similar to A Fletcher-Reeves conjugate gradient algorithm-based neuromodel for smart grid stability analysis (20)

PDF
MeMLO: Mobility-Enabled Multi-Level Optimization Sensor Network
PDF
Artificial Neural Networks for Control.pdf
PDF
Optimal artificial neural network configurations for hourly solar irradiation...
PDF
Power system transient stability margin estimation using artificial neural ne...
PDF
Novel Scheme for Minimal Iterative PSO Algorithm for Extending Network Lifeti...
PDF
A040101001006
PDF
Fuzzy expert system based optimal capacitor allocation in distribution system-2
PDF
Optimal state estimation techniques for accurate measurements in internet of...
PDF
Lifetime enhanced energy efficient wireless sensor networks using renewable e...
PDF
Residual Energy Based Cluster head Selection in WSNs for IoT Application
PDF
Neural Network Model Development with Soft Computing Techniques for Membrane ...
PDF
Enhancing hybrid renewable energy performance through deep Q-learning networ...
PDF
Firefly analytical hierarchy algorithm for optimal allocation and sizing of D...
PDF
Energy Management System in Microgrid with ANFIS Control Scheme using Heurist...
PDF
Firefly Algorithm to Opmimal Distribution of Reactive Power Compensation Units
PDF
Identification study of solar cell/module using recent optimization techniques
PDF
paper11
PDF
Matlab based comparative studies on selected mppt
PDF
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
PDF
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
MeMLO: Mobility-Enabled Multi-Level Optimization Sensor Network
Artificial Neural Networks for Control.pdf
Optimal artificial neural network configurations for hourly solar irradiation...
Power system transient stability margin estimation using artificial neural ne...
Novel Scheme for Minimal Iterative PSO Algorithm for Extending Network Lifeti...
A040101001006
Fuzzy expert system based optimal capacitor allocation in distribution system-2
Optimal state estimation techniques for accurate measurements in internet of...
Lifetime enhanced energy efficient wireless sensor networks using renewable e...
Residual Energy Based Cluster head Selection in WSNs for IoT Application
Neural Network Model Development with Soft Computing Techniques for Membrane ...
Enhancing hybrid renewable energy performance through deep Q-learning networ...
Firefly analytical hierarchy algorithm for optimal allocation and sizing of D...
Energy Management System in Microgrid with ANFIS Control Scheme using Heurist...
Firefly Algorithm to Opmimal Distribution of Reactive Power Compensation Units
Identification study of solar cell/module using recent optimization techniques
paper11
Matlab based comparative studies on selected mppt
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched...
Ad

More from IAESIJAI (20)

PDF
Hybrid model detection and classification of lung cancer
PDF
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
PDF
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
PDF
Event detection in soccer matches through audio classification using transfer...
PDF
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
PDF
Optimizing deep learning models from multi-objective perspective via Bayesian...
PDF
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
PDF
A novel scalable deep ensemble learning framework for big data classification...
PDF
Exploring DenseNet architectures with particle swarm optimization: efficient ...
PDF
A transfer learning-based deep neural network for tomato plant disease classi...
PDF
U-Net for wheel rim contour detection in robotic deburring
PDF
Deep learning-based classifier for geometric dimensioning and tolerancing sym...
PDF
Enhancing fire detection capabilities: Leveraging you only look once for swif...
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
Depression detection through transformers-based emotion recognition in multiv...
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Enhancing financial cybersecurity via advanced machine learning: analysis, co...
PDF
Crop classification using object-oriented method and Google Earth Engine
Hybrid model detection and classification of lung cancer
Adaptive kernel integration in visual geometry group 16 for enhanced classifi...
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
Enhancing fall detection and classification using Jarratt‐butterfly optimizat...
Deep ensemble learning with uncertainty aware prediction ranking for cervical...
Event detection in soccer matches through audio classification using transfer...
Detecting road damage utilizing retinaNet and mobileNet models on edge devices
Optimizing deep learning models from multi-objective perspective via Bayesian...
Squeeze-excitation half U-Net and synthetic minority oversampling technique o...
A novel scalable deep ensemble learning framework for big data classification...
Exploring DenseNet architectures with particle swarm optimization: efficient ...
A transfer learning-based deep neural network for tomato plant disease classi...
U-Net for wheel rim contour detection in robotic deburring
Deep learning-based classifier for geometric dimensioning and tolerancing sym...
Enhancing fire detection capabilities: Leveraging you only look once for swif...
Accuracy of neural networks in brain wave diagnosis of schizophrenia
Depression detection through transformers-based emotion recognition in multiv...
A comparative analysis of optical character recognition models for extracting...
Enhancing financial cybersecurity via advanced machine learning: analysis, co...
Crop classification using object-oriented method and Google Earth Engine
Ad

Recently uploaded (20)

PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Network Security Unit 5.pdf for BCA BBA.
PPTX
Spectroscopy.pptx food analysis technology
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
NewMind AI Weekly Chronicles - August'25 Week I
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
KodekX | Application Modernization Development
PPTX
sap open course for s4hana steps from ECC to s4
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Approach and Philosophy of On baking technology
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PPTX
Big Data Technologies - Introduction.pptx
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Empathic Computing: Creating Shared Understanding
Diabetes mellitus diagnosis method based random forest with bat algorithm
Reach Out and Touch Someone: Haptics and Empathic Computing
Dropbox Q2 2025 Financial Results & Investor Presentation
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
20250228 LYD VKU AI Blended-Learning.pptx
Network Security Unit 5.pdf for BCA BBA.
Spectroscopy.pptx food analysis technology
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
NewMind AI Weekly Chronicles - August'25 Week I
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
KodekX | Application Modernization Development
sap open course for s4hana steps from ECC to s4
Mobile App Security Testing_ A Comprehensive Guide.pdf
Approach and Philosophy of On baking technology
“AI and Expert System Decision Support & Business Intelligence Systems”
Big Data Technologies - Introduction.pptx
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Agricultural_Statistics_at_a_Glance_2022_0.pdf
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Empathic Computing: Creating Shared Understanding

A Fletcher-Reeves conjugate gradient algorithm-based neuromodel for smart grid stability analysis

  • 1. IAES International Journal of Artificial Intelligence (IJ-AI) Vol. 14, No. 1, February 2025, pp. 159~165 ISSN: 2252-8938, DOI: 10.11591/ijai.v14.i1.pp159-165  159 Journal homepage: http://ijai.iaescore.com A Fletcher-Reeves conjugate gradient algorithm-based neuromodel for smart grid stability analysis Adedayo Olukayode Ojo1 , Aiyedun Olatilewa Eyitayo1 , Moses Oluwafemi Onibonoje1 , Saheed Lekan Gbadamosi2 1 Department of Electrical/Electronics and Computer Engineering, College of Engineering, Afe Babalola University Ado-Ekiti (ABUAD), Ado-Ekiti, Nigeria 2 Department of Electrical and Electronics Engineering, Bowen University Iwo, Osun State, Nigeria Article Info ABSTRACT Article history: Received Jul 29, 2023 Revised Sep 1, 2024 Accepted Oct 8, 2024 Interest in smart grid systems is growing around the globe as they are getting increasingly popular for their efficiency and cost reduction at both ends of the energy spectrum. This study, therefore, proposes a neuro model designed and optimized with the Fletcher-Reeves conjugate gradient algorithm for analyzing the stability of smart grids. The performance results achieved with this algorithm was compared with those obtained when the same network was trained with other algorithms. Our results show that the proposed model outperforms existing techniques in terms of accuracy, efficiency, and speed. This study contributes to the development of intelligent solutions for smart grid stability analysis, which can enhance the reliability and sustainability of power systems. Keywords: Conjugate gradient algorithm Fletcher-Reeves Neuro model Power systems stability Smart grid This is an open access article under the CC BY-SA license. Corresponding Author: Adedayo Olukayode Ojo Department of Electrical/Electronics and Computer Engineering, College of Engineering Afe Babalola University Ado-Ekiti (ABUAD) PMB 5454, Km 8.5, Afe Babalola Way, Ado-Ekiti, Ekiti State, Nigeria Email: ojoao@abuad.edu.ng 1. INTRODUCTION Recently, the world’s energy system has been undergoing significant transitions. The transitions are primarily driven by the need to update the evolving electrical infrastructure, integrate low-carbon energy sources and satisfy the excess power consumption with new types of demands such as smart homes, electric transportation, while maintaining supply protection [1]. The world in general is being forced to switch from using fossil fuel power plants to using renewable energy sources because of the ongoing climate change, which is in line with the sustainable development goal (SDG) 7 which entails transitioning from the use of fossil fuels into using clean and affordable energy. Although, integrating various sources has its advantages such as improved energy efficiency and also its sustainability, it also introduces new difficulties during the analysis of the stability of the power system. Therefore, it has become crucial that some kind of intelligent information processing technique be introduced into energy management process as well as overall power system stability prediction and analysis. There is a growing list of algorithms that are being developed suitable for this purpose. This is an improvement over the more conventional technique of simulations combining fixed values for a particular subset and fixed distribution of values for the other subset variables [2], [3], or the even more laborious measurement-based techniques [4], [5]. 
Therefore, in this paper a Fletcher-Reeves algorithm combined with the conjugate gradient algorithm is being used in analyzing the stability of a smart grid. The operation of this hybrid algorithm is
  • 2.  ISSN: 2252-8938 Int J Artif Intell, Vol. 14, No. 1, February 2025: 159-165 160 based on the ratio of the norm squared of the recent gradient to the norm squared of the previous gradient. The algorithm, used with the widely known backpropagation algorithm, can therefore train any network provided its weight, net input and transfer functions have derivative functions. The conjugate gradient algorithm also has other versions which includes work in in similar ways with the aim of reducing generalization errors in neural network based applications [6]. These algorithms, or their variants, have been applied for solving problems in a variety of fields and applications [4], [7]–[12]. But the aim of this paper is to use the Fletcher-Reeves conjugate gradient algorithm in training machine learning based models leading to the development of a neuromodel for analyzing smart grid stability. The integration of renewable energy sources into existing power grids come with its own challenges owing to the unpredictable nature of some of these renewable energy sources. For instance, solar electricity generation is linked to the amount of exposure to sunlight. And quite often, availability and intensity of this solar energy is too unpredictable and cannot be used directly in the quest for taking informed decisions in power generation due to unpredictable cloud characteristics, leading to optical instability in the solar irradiance. Several statistical methods, such as autoregressive moving average, Kalman filter, and Markov chain model have been researched in an attempt to address smart grid’s unreliability. Other early statistical techniques have few limitations which are also significant in smart grid stability, these techniques due to their limitations reduce the precision of the prediction model [13]. These models, typically constructed using non-complex statistical building blocks, perform unsatisfactorily under severe uncertainty. Additionally, these conventional methods for stability forecasting such as the Markov model are only applicable within certain operating ranges [14]. The Fletcher-Reeves conjugate gradient algorithm proposed in this article, however, is a very important constituent of the neuromodel because it combines the advantages of neural networks and optimization algorithms, allowing for fast and accurate convergence to a solution. Additionally, the use of neural networks is also suitable for stability analysis because it can, to a satisfactory degree, capture the nonlinearities and uncertainties in the smart grid system [15]. Using this algorithm-based neural model for smart grid stability can empower system operators helping them make well-informed decisions during or before maintenance and systems operations. It can also help in the development and design process of smart grid control systems that will ensure the reliability and stability of the power system [16]. 2. METHOD 2.1. Fletcher-Reeves conjugate gradient algorithm This research utilizes a Fletcher-Reeves version of the conjugate gradient algorithm to analyze smart grid systems’ stability. Fletcher-Reeves conjugate gradient algorithm’s operation is based on the ratio of the norm squared of the recent gradient to the norm squared of the previous gradient. Mathematically, the Fletcher-Reeves conjugate gradient algorithm can be represented in (1). 𝑥𝑘+1 = 𝑥𝑘 + 𝛼𝑘𝑑𝑘, 𝑘 = 0,1, … (1) Let 𝑥𝑘 represent the current solution, with 𝛼𝑘 as the step size. 
The step length 𝛼𝑘 is obtained through a line search process aimed at minimizing performance along the chosen search direction, 𝑑𝑘. This direction guides the search toward a minimum point. Initially, the search direction is set as the negative gradient of performance. In later iterations, it is recalculated using both the updated gradient and the previous search direction, as shown in (2). 𝑑𝑘+1 = −𝑔𝑘 + (𝑑𝑘 × 𝑧) (2) Where 𝑔𝑘 is the gradient of the objective function and the entity ‘z’ may be evaluated through a few methods. For this algorithm under discussion, it is evaluated through the use of (3). 𝑧 = 𝑛𝑜𝑟𝑚𝑛𝑒𝑤_𝑠𝑞𝑟 𝑛𝑜𝑟𝑚_𝑠𝑞𝑟 (3) Where “𝑛𝑜𝑟𝑚_𝑠𝑞𝑟” is the normal square of the previous gradient, and “𝑜𝑟𝑚𝑛𝑒𝑤_𝑠𝑞𝑟” is the norm square of the current gradient [6]. A new update of the Fletcher-Reeves conjugate gradient algorithm [17], suggests another formula for calculating the new gradient of the previous search direction, 𝑑𝑘 which is given by (4). 𝑑𝑘+1 = { −𝑔𝑘+1, 𝑖𝑓 𝑘 = 0 −𝑔𝑘+1 + 𝛽𝑘𝑑𝑘, 𝑖𝑓 𝑘 > 0 (4) Where 𝛽𝑘 is a scalar quantity and 𝑔𝑘+1 is the new gradient of the objective function.
  • 3. Int J Artif Intell ISSN: 2252-8938  A Fletcher-Reeves conjugate gradient algorithm-based neuromodel for … (Adedayo Olukayode Ojo) 161 2.2. Neural network architecture The neural network architecture used in this study consists of three (3) layers which includes the input layer, the hidden layer, and the output layer. The input layer had 12 neurons corresponding to the 12 features in the dataset and an output layer. The hidden layer had 47 neurons and used the hyperbolic tangent (tanh) activation function. The output layer had one neuron, which predicted the target variable using the sigmoid activation function. The tanh activation function was chosen for the hidden layer because of its ability to model complex non-linear relationships between the input and output variables. The sigmoid activation function was used in the output layer because it is suitable for binary classification problems. For this architecture, a training dataset of 70% (42,000 samples) is fed into the network during training, and the network is trained and adjusted according to its observed error surface. A validation dataset of 15% (9,000 samples) was then employed to evaluate the ability of the network to handle new data that were not part of the training, and to stop the training as soon as this ability becomes reduces below a treshold. Finally, a testing dataset of 15% (9000 samples) was used, which provides an individual measure of network performance during and after training. This network structure was selected based on previous research on similar datasets and problems, and was refined through experimentation with different configurations of the neural network. The ultimate goal was to achieve high accuracy in predicting the target variable while avoiding overfitting the training data. Figure 1 shows the structure of the artificial neural network (ANN) employed in testing/training the dataset. It includes the different layers of the network. The input layer consists of 12 different inputs which includes the energy producer, the consumers, the reaction time amidst other input variables. The output layer represents the percentage stability. Figure 1. Neural network architecture 3. RESULTS AND DISCUSSION After training the neural network, the observed regression from the neural network described above was plotted. It was observed, as shown in Figure 2, that the neural network achieved an overall regression R squared value of 0.98155. Other parameters were also evaluated, and they include the validation (0.98227 regression), the testing (0.98146 regression). Overall, our results suggest that, in response to the training provided, the ANN successfully learn the hidden inter-relationships between the variables make accurate predictions on new, separate data, highlighting the potential of neural networks as a powerful tool for modelling and prediction in various fields. The plots shown in Figures 3 and 4 represent the 3D display of the relationship between selected inputs and the target. The plot was able to capture the patterns in the data, and render the relationship between the variables in a visually easy-to-appreciate format. The essence of these plots is that, apart from the pattern learned by the ANN, it now becomes easier and intelligible for experts to visually estimate the relevance of each variable at different values of other variables. The choice of the tanh (hyperbolic tangent) transfer function for the hidden layer is a common activation function in neural networks. 
The tanh function, as listed in Table 1, maps the input values to the range (-1, 1), introducing non-linearity to the model. This non-linearity enables the network to learn complex patterns and relationships in the data, improving the model's ability to capture more intricate features of the input. However, it is essential to consider that the tanh function can suffer from the vanishing gradient problem, especially during the early stages of training. The sigmoid transfer function is used in the output layer. It maps the input values to the range (0, 1), which is suitable for binary classification tasks as in the case of this article. The output values represent probabilities, with values closer to 0 indicating one class and values closer to 1 representing the other class. The sigmoid function is commonly used in binary classification problems because it allows for a probabilistic interpretation of the model's output, which is often useful in decision-making scenarios. The total number of neurons in the network indicates the size of the hidden layer. In this case, the hidden layer contains 40 neurons. The number of neurons in the hidden layer is often a hyperparameter that
  • 4.  ISSN: 2252-8938 Int J Artif Intell, Vol. 14, No. 1, February 2025: 159-165 162 needs to be tuned during the network's design. Too few neurons may result in the network being unable to learn complex patterns, while too many neurons can result in a situation where the network simply learns the exact interrelations between the variables only for the provided data, while grossly underporforming for other data. The selection of 40 neurons indicates that the model's architecture is likely designed to strike a balance between complexity and generalization. The total number of weight elements represents the number of parameters that need to be learned. In this network, there are 60 weight elements. Each connection between neurons in the network has an associated weight that is adjusted during training to minimize the error. The number of weight elements is directly related to the complexity of the network and the total number of trainable parameters. In larger networks, the number of weight elements can become substantial, leading to longer training times and the risk of overfitting if not properly regularized. Figure 2. Performance of the neuromodel for stability assessment Figure 3. Stability analysis 1 Figure 4. Stability analysis 2
  • 5. Int J Artif Intell ISSN: 2252-8938  A Fletcher-Reeves conjugate gradient algorithm-based neuromodel for … (Adedayo Olukayode Ojo) 163 Table 1. Neural network specifications ANN parameters Description/Value Type of transfer function (hidden layer) tanh Type of transfer function (output layer) Sigmoid Weight update mechanism A Fletcher-Reeves conjugate gradient algorithm Total number of neurons 40 Total number of weight elements 60 Maximum epochs 600 The maximum number of epochs sets an upper limit on the number of times the total training data is fed to the network during the training phase. Training the network for a fixed number of epochs helps control the duration of the training process and avoids overfitting. If the training performance plateaus before reaching the maximum epochs, early stopping techniques can be employed to halt training prematurely, thereby preventing unnecessary iterations and saving computational resources. Overall, the chosen ANN parameters reveal a well-configured neural network for a binary classification task. However, it's important to note that achieving optimal performance often involves experimenting with different architectures, hyperparameters, and evaluation metrics specific to the dataset and problem at hand. The provided ANN parameters serve as a starting point for training a model and can be further refined and optimized through iterative experimentation and fine-tuning. In Table 2, a comparison is made between the performance of the Fletcher-Reeves conjugate gradient algorithm and other types of neural network algorithms in training the network under similar conditions. These include the gradient descent algorithm, the Levenberg-Marquardt algorithm, and the layer sensitivity-based ANN for the training, validation and testing. The satisfactory performance of the algorithms deployed for training the neural network in this work shows the applicability of neural network in evaluating and predicting the stability of smart grid systems. This is in agreement with the growing list of applications of other softcomputing techniques as reported in [18]–[27]. Table 2. Outlook of the network performances Network Training Testing Training VAF Validation VAF CMD VAF RMSE FRCG_ANN 98.77 97.83 0.9866 96.26 3.57 GDA_ANN 94.89 95.44 0.9652 95.12 7.03 LM_ANN 96.55 96.62 0.9839 93.57 4.10 LSB_ANN 97.07 97.87 0.9822 95.46 4.97 4. CONCLUSION The Fletcher-Reeves conjugate gradient algorithm-based neuromodel is a promising approach for smart grid stability analysis. The study presented in this paper has shown that this algorithm can be successfully applied to power system stability analysis with high accuracy and efficiency. The use of ANN in analysing power stability, provides a flexible and versatile platform for the analysis and control of power systems, and the Fletcher-Reeves conjugate gradient algorithm is a powerful optimization tool that can be used to improve the training process of these networks. The observations from this work indicate that the proposed neuro model can accurately predict the stability of power systems, based on the input features that were used in the model. This provides a valuable tool for power system operators and planners, who need to make critical decisions about system stability and reliability. Overall, the Fletcher-Reeves conjugate gradient algorithm-based neuromodel posses the capability to reduce error in power system stability assessment, and its application in this field should be further explored and developed. 
ACKNOWLEDGEMENTS This paper is financed by the DAAD NiReMaS Project of the University of Cologne, Germany. REFERENCES [1] S. K. Samanta and C. K. Chanda, “Investigate the impact of smart grid stability analysis on synchronous generator,” in 2017 IEEE Calcutta Conference (CALCON), IEEE, Dec. 2017, pp. 241–247, doi: 10.1109/CALCON.2017.8280732. [2] A. Andreotti, G. Carpinelli, F. Mottola, and D. Proto, “A review of single-objective optimization models for plug-in vehicles operation in smart grids part I: theoretical aspects,” in 2012 IEEE Power and Energy Society General Meeting, IEEE, Jul. 2012, pp. 1–8, doi: 10.1109/PESGM.2012.6345381. [3] F. Mohammad and Y.-C. Kim, “Energy load forecasting model based on deep neural networks for smart grids,” International Journal of System Assurance Engineering and Management, vol. 11, no. 4, pp. 824–834, Aug. 2020, doi: 10.1007/s13198-019-00884-9.
[4] T. Esch, G. Bremer, W. Dirksen, C. Munoz, and K. V. Maydell, “Analysis of the integration of a DC-charging station into a tram grid by using long-term field measurements and a PHiL setup,” IEEE Access, vol. 12, pp. 88243–88250, 2024, doi: 10.1109/ACCESS.2024.3416491.
[5] B. Gao, X. Huang, J. Shi, Y. Tai, and J. Zhang, “Hourly forecasting of solar irradiance based on CEEMDAN and multi-strategy CNN-LSTM neural networks,” Renewable Energy, vol. 162, pp. 1665–1683, Dec. 2020, doi: 10.1016/j.renene.2020.09.141.
[6] P. S. Sandhu and S. Chhabra, “A comparative analysis of conjugate gradient algorithms & PSO based neural network approaches for reusability evaluation of procedure based software systems,” Chiang Mai Journal of Science, vol. 38, pp. 123–135, 2011.
[7] T. Gao, J. Wang, B. Zhang, H. Zhang, P. Ren, and N. R. Pal, “A Polak-Ribière-Polyak conjugate gradient-based neuro-fuzzy network and its convergence,” IEEE Access, vol. 6, pp. 41551–41565, 2018, doi: 10.1109/ACCESS.2018.2848117.
[8] A. O. Ojo, O. I. Esan, and O. O. Omitola, “Deep learning based software-defined indoor environment for space-time coded wireless communication using reconfigurable intelligent surfaces,” International Journal of Microwave and Optical Technology, vol. 17, no. 6, pp. 664–673, 2022.
[9] A. H. Ibrahim, P. Kumam, and W. Kumam, “A family of derivative-free conjugate gradient methods for constrained nonlinear equations and image restoration,” IEEE Access, vol. 8, pp. 162714–162729, 2020, doi: 10.1109/ACCESS.2020.3020969.
[10] A. O. Ojo and O. M. Oluwafemi, “Performance analysis of BFGS quasi-Newton neuro algorithm for the design of 30 GHz patch antenna for 5G applications,” in 2022 IEEE Nigeria 4th International Conference on Disruptive Technologies for Sustainable Development (NIGERCON), IEEE, 2022, pp. 1–5, doi: 10.1109/NIGERCON54645.2022.9803170.
[11] A. O. Ojo and O. M. Oluwafemi, “Evaluation of thermal comfort in a multi-occupancy office using Polak-Ribière conjugate gradient neuro-algorithm,” in 2022 IEEE Nigeria 4th International Conference on Disruptive Technologies for Sustainable Development (NIGERCON), IEEE, 2022, pp. 1–5, doi: 10.1109/NIGERCON54645.2022.9803185.
[12] O. O. Adedayo, M. O. Onibonoje, and T. E. Fabunmi, “Simulative analysis of metropolitan electric distribution network using conjugate gradient neuro-algorithm with Powell/Beale restarts,” in 2021 International Conference on Electrical, Computer and Energy Technologies (ICECET), IEEE, 2021, pp. 1–5, doi: 10.1109/ICECET52533.2021.9698441.
[13] J. Li et al., “A novel hybrid short-term load forecasting method of smart grid using MLR and LSTM neural network,” IEEE Transactions on Industrial Informatics, vol. 17, no. 4, pp. 2443–2452, Apr. 2021, doi: 10.1109/TII.2020.3000184.
[14] G. Capizzi, G. L. Sciuto, C. Napoli, and E. Tramontana, “Advanced and adaptive dispatch for smart grids by means of predictive models,” IEEE Transactions on Smart Grid, vol. 9, no. 6, pp. 6684–6691, Nov. 2018, doi: 10.1109/TSG.2017.2718241.
[15] O. Adedayo, M. Onibonoje, and M. Isa, “A layer-sensitivity based artificial neural network for characterization of oil palm fruitlets,” International Journal of Applied Science and Engineering, vol. 18, no. 1, 2021, doi: 10.6703/IJASE.202103_18(1).011.
[16] M. B. Omar, R. Ibrahim, R. Mantri, J. Chaudhary, K. R. Selvaraj, and K. Bingi, “Smart grid stability prediction model using neural networks to handle missing inputs,” Sensors, vol. 22, no. 12, Jun. 2022, doi: 10.3390/s22124342.
[17] B. A. Hassan and H. M. Sadeq, “The new algorithm form of the Fletcher–Reeves conjugate gradient algorithm,” Journal of Multidisciplinary Modeling and Optimization, vol. 1, no. 1, pp. 41–51, 2018.
[18] D. J. Scott, P. V. Coveney, J. A. Kilner, J. C. H. Rossiny, and N. M. N. Alford, “Prediction of the functional properties of ceramic materials from composition using artificial neural networks,” Journal of the European Ceramic Society, vol. 27, no. 16, pp. 4425–4435, Jan. 2007, doi: 10.1016/j.jeurceramsoc.2007.02.212.
[19] J. L. Pedreño-Molina, M. Pinzolas, and J. Monzó-Cabrera, “A new methodology for in situ calibration of a neural network-based software sensor for S-parameter prediction in six-port reflectometers,” Neurocomputing, vol. 69, no. 16–18, pp. 2451–2455, Oct. 2006, doi: 10.1016/j.neucom.2006.01.008.
[20] S. Kara, “Classification of mitral stenosis from Doppler signals using short time Fourier transform and artificial neural networks,” Expert Systems with Applications, vol. 33, no. 2, pp. 468–475, Aug. 2007, doi: 10.1016/j.eswa.2006.05.011.
[21] N. Srinivas, A. V. Babu, and M. D. Rajak, “ECG signal analysis using data clustering and artificial neural networks,” American International Journal of Research in Science, Technology, Engineering & Mathematics, vol. 4, no. 2, pp. 82–90, 2013.
[22] F. M. Al-Naima and A. H. Al-Timemy, “Resilient back propagation algorithm for breast biopsy classification based on artificial neural networks,” in Computational Intelligence and Modern Heuristics, Amman, Jordan: InTech, 2010, doi: 10.5772/7817.
[23] T. R. Kiran and S. P. S. Rajput, “An effectiveness model for an indirect evaporative cooling (IEC) system: comparison of artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS) and fuzzy inference system (FIS) approach,” Applied Soft Computing, vol. 11, no. 4, pp. 3525–3533, Jun. 2011, doi: 10.1016/j.asoc.2011.01.025.
[24] A. Hussein, M. Adda, M. Atieh, and W. Fahs, “Smart home design for disabled people based on neural networks,” Procedia Computer Science, vol. 37, pp. 117–126, 2014, doi: 10.1016/j.procs.2014.08.020.
[25] S. Lin, F. Cao, and Z. Xu, “Essential rate for approximation by spherical neural networks,” Neural Networks, vol. 24, no. 7, pp. 752–758, Sep. 2011, doi: 10.1016/j.neunet.2011.04.005.
[26] S. Lukić et al., “Artificial neural networks based prediction of cerebral palsy in infants with central coordination disturbance,” Early Human Development, vol. 88, no. 7, pp. 547–553, Jul. 2012, doi: 10.1016/j.earlhumdev.2012.01.001.
[27] A. Hasan and A. F. Peterson, “Measurement of complex permittivity using artificial neural networks,” IEEE Antennas and Propagation Magazine, vol. 53, no. 1, pp. 200–203, Feb. 2011, doi: 10.1109/MAP.2011.5773614.

BIOGRAPHIES OF AUTHORS

Engr. Dr. Adedayo Olukayode Ojo graduated from the Department of Electronic and Electrical Engineering at Ladoke Akintola University of Technology (LAUTECH), Nigeria, where he received his B.Tech. degree with honours. He obtained his M.Sc. degree in microelectronics from Universiti Putra Malaysia (UPM), and his Ph.D. in wireless communications from the Department of Electrical/Electronics and Computer Engineering, Afe Babalola University Ado-Ekiti (ABUAD), during which he also had a productive research stay at the Cologne Institute for Information System (CIIS) in Cologne, Germany.
He is currently a senior lecturer in the Department of Electrical/Electronics and Computer Engineering, Afe Babalola University Ado-Ekiti, Ekiti State, Nigeria. His research interests include wireless communication systems, microelectronics, soft computing, and intelligent systems. He can be contacted at email: ojoao@abuad.edu.ng.
Aiyedun Olatilewa Eyitayo received the B.Eng. (Bachelor of Engineering) degree in electrical and electronic engineering from the Department of Electrical/Electronics and Computer Engineering, College of Engineering, Afe Babalola University Ado-Ekiti (ABUAD), Nigeria. His final-year project was in the area of the design and modeling of a hybrid renewable microgrid system. His research interests include intelligent systems, smart grids, modern power systems, and power system stability. He can be contacted at email: tilewatayo2@gmail.com.

Engr. Dr. Moses Oluwafemi Onibonoje is an Associate Professor in the Department of Electrical/Electronics and Computer Engineering at Afe Babalola University Ado-Ekiti (ABUAD), Nigeria. He specializes in control and instrumentation engineering. His research areas include wireless sensor networks, distributed systems, wireless instrumentations, optimization methods, and agro-electro instrumentations. He holds a Ph.D. in electronic and electrical engineering from Obafemi Awolowo University, Ile-Ife, Nigeria. He is COREN-registered and a corporate member of the Nigerian Society of Engineers. He can be contacted at email: onibonojemo@abuad.edu.ng.

Saheed Lekan Gbadamosi is an Associate Professor at Bowen University, Iwo, and a researcher specializing in sustainable energy systems. With a Ph.D. in electrical engineering, he is a registered engineer in both South Africa and Nigeria. His research focuses on power systems optimization, renewable energy integration, and cyber-physical systems. As an educator, he has taught courses in power systems and renewable energy while mentoring postgraduate students. His passion for sustainable development drives his commitment to creating a greener future. He can be contacted at email: gbadamosiadeolu@gmail.com.