SlideShare a Scribd company logo
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 8, No. 6, December 2018, pp. 4810~4822
ISSN: 2088-8708, DOI: 10.11591/ijece.v8i6.pp4810-4822  4810
Journal homepage: http://iaescore.com/journals/index.php/IJECE
A Novel Neuroglial Architecture for Modelling Singular
Perturbation System
Samia Salah, M’hamed Hadj Sadok, Abderrezak Guessoum
Department of electronics, University Saad Dahlab Blida1, Algeria
Article Info ABSTRACT
Article history:
Received Jan 23, 2018
Revised Jul 5, 2018
Accepted Jul 29, 2018
This work develops a new modular architecture that emulates a recently-
discovered biological paradigm. It originates from the human brain where the
information flows along two different pathways and is processed along two
time scales: one is a fast neural network (NN) and the other is a slow network
called the glial network (GN). It was found that the neural network is
powered and controlled by the glial network. Based on our biological
knowledge of glial cells and the powerful concept of modularity, a novel
approach called artificial neuroglial Network (ANGN) was designed and an
algorithm based on different concepts of modularity was also developed. The
implementation is based on the notion of multi-time scale systems.
Validation is performed through an asynchronous machine (ASM) modeled
in the standard singularly perturbed form. We apply the geometrical
approach, based on Gerschgorin’s circle theorem (GCT), to separate the fast
and slow variables, as well as the singular perturbation method (SPM) to
determine the reduced models. This new architecture makes it possible to
obtain smaller networks with less complexity and better performance.
Keyword:
Artificial neuro glial network
Asynchronous machine
Gerschgorin’s circle theorem
Glial network
Singular perturbation method
Copyright © 2018 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Samia Salah,
Department of Electronics,
University Saad Dahlab Blida1,
Soumaa Road, BP 270, Blida, Algeria, (+213) 25 43 38 50
Email : salah_samia@yahoo.fr
1. INTRODUCTION
The last few years have witnessed a tremendous growth in the field of intelligent systems. Inspired
by biological neural networks, one such success has been achieved in evolution of artificial neural networks
(ANNs). ANNs are characterized by their distinctive capabilities of exhibiting massive parallelism,
generalization ability and being good function approximators. This renders them useful for solving a variety
of problems in pattern recognition, prediction, optimization and associative memory [1], [2]. Additionally,
they are also being employed in system modeling and control [3], [4].
These ANNs, efficient in numerous applications, are not as well suited for approximating non-linear
and high-dimensional functions with multiple time dynamics like the ones in singular perturbation systems
(SPSs) which increases the difficulties in system modeling, analysis and controller design .An effective way
to overcome this problem is to separate the original system states into subsystems that change rapidly and
those that vary slowly on the chosen time scale, using singularly perturbation method (SPM).
Some recent research results using the (SPM) to analyze and control the SPSs are published in [5],
[6]. However, accurate and faithful mathematical models for those systems are usually difficult to obtain due
to the uncertainties and nonlinearities. In this case, adequate system identification becomes important and
necessary, before a singular perturbation theory-based control scheme can be designed.
Recently, research using multi times scale neural networks have been proposed in literature to solve
the system identification problem of the nonlinear SPSs. Among them there are multi-time-scale dynamic
Int J Elec & Comp Eng ISSN: 2088-8708 
A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah)
4811
neural network (DNN) proposed in [7]. Or recurrent neural network (RNN) proposed in [8]. In these papers,
training methods are based on a gradient descent updating algorithm with fixed “learning gain”, such as back
propagation (BP) and RNN algorithms. The main drawback of these training methods is that the convergence
speed is usually very slow. To accelerate the training process ,researchers investigated the extended Kalman
filter (EKF) based training methods for NN in [9]. The theoretical analysis of EKF based training algorithm
requires the modeling uncertainty of the NN to be a Gaussian process, which may not be true in real
applications. Some other researchers also studied optimal bounded ellipsoid (OBE) algorithm-based learning
laws for NN [10]-[12]. All of these methods are complex and computationally intensive.
In this paper, we propose a new multi time-scale NN architecture called "artificial neuroglial
network" (ANGN) based on the powerful concept of "modularity" to solve the problems of singular
perturbation system training. The basic idea is to use the knowledge about the nervous system and the human
brain, where the information flows along two different pathways and is processed along two-time scales: one
is a fast-neural network (NN) and the other is a slow network called the glial network (GN). It was found that
the neural network is powered and controlled by the glial network [13], [14].
In our experiment, for a given application, depending on the complexity and the physical
characteristics of the problem, we divide our global model into two sub-models: slow and fast ones using the
singular perturbation method (SPM). The first difficulty that arises when decoupling variables is the
identification of both the slow and fast model variables. The solution is based on Gerschgorin’s circle
geometric theorem (GCT) [15]. This technique makes it possible to locate the eigenvalues in the complex
plane within groups of circles. The grouping of the modes is immediate whenever circles are disjoint, and
afterward, the number of slow and fast modes is determined.
Validation of the proposed approach is carried out on the ASM model, under the singularly
perturbed standard form. Subsequently, an algorithm is adopted to test the effectiveness and performance of
the proposed ANGN. This new architecture has made it possible to obtain networks of considerably smaller
size with simple structures which have a strong nonlinear approximation capability and which enables it to
model nonlinear singularly perturbed systems more accurately with less computation complexity, compared
to the conventional neural network model.
2. SINGULAR PERTURBATION METHOD
This method is used for multi-time scale systems that can be reduced to the standard form of
equation (3) by the determination of the parasitic term ε. Consider the state model of a linear system of
dimension n:
{ 𝑋̇ = 𝐴𝑋 + 𝐵𝑈
𝑌 = 𝐶𝑋
} (1)
Evolving according to two-time scales, it can be decoupled into two slow and fast subsystems. The
state vector X contains all the state variables corresponding to the dynamic elements. If x is the set of state
variables of slow elements, and z is the set of fast elements the model is written
,
2
1
2221
1211
U
B
B
z
x
AA
AA
z
x


























(2)
  






z
x
CCy ,21
with :
-𝑥(𝑡0) = 𝑥0 , 𝑎𝑛𝑑 𝑧(𝑡0) = 𝑧0
- 22221 ,, BAA - are very large compared to 11211 ,, BAA .
We introduce the parameter ε to normalize our model, we write: 21
*
21 AA  , 22
*
22 AA  ,
.2
*
2 BB  𝜀 can
be given by:  0120
1
22 LAAA  
 with : 21
1
220 AAL  
and 0
1
12110 LAAA  
By assuming that matrix A22 is invertible, the state equation in the standard singularly perturbed
form with ε as the perturbation parameter is then written as:
 ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822
4812
,*
2
1
*
22
*
21
1211
U
B
B
z
x
AA
AA
z
x



























(3)
2.1. Slow and fast reduced models
The slow reduced model is determined from eq. (3) by considering that :0
 








sss
sssss
sssss
uBxAAz
uDxCy
uBxAx
221
1
22

(4)
where ssss yuzx ,,, are the slow components of the variables yuzx ,,, respectively, with :
2
1
222
21
1
2221
2
1
22121
21
1
221211
BACD
AACCC
BAABB
AAAAA
s
s
s
s








(5)
with:   00 xtxs  .The initial value of the slow components sz is :    012
1
220 txAAtz ss


which is generally different from 0z . The fast variables z cannot therefore be approximated by sz in the
time interval ,0 T . We introduce the corrective term fz , defined by : sf zzz  which represents rapid
changes in z .Therefore, the boundary layer equation, expressed in the dilated time follows:
( 
dt
dz f

d
dz f
): (6)
The fast reduced model is then written:
   
 
 












021
1
2200
2
222
xAAztz
zCy
uBzA
d
dz
f
ff
ff
f


 (7)
3. IDENTIFICATION OF THE GEOMETRIC DYNAMICS
Setting the previous standard form assumes: a) Knowledge of the eigenvalues to determine the size
of the slow and fast eigenvectors. b) A suitable grouping of slow modes and fast modes.
Our attention is focused on geometrical methods including the circles of gerschgorin. The
localization of the eigenvalues on the complex plane makes it possible to put a system in the standard form
without having to calculate these eigenvalues. In the case of GCT, the grouping of modes is immediate as
soon as the eigenvalues are circumscribed in disjoint circles. The geometric method based on GCT for the
selection and separation of different time scales is represented as follows:
3.1. Gerschgorin’s circles theorem (GCT)
Noting  njiaij .....1,,  as the elements of the state matrix A ,𝑝𝑖, 𝑄𝑖 are expressed by
Int J Elec & Comp Eng ISSN: 2088-8708 
A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah)
4813
,
1




n
ij
j
iji ap ni .,,.........2,1 (8)
,
1




n
ij
i
iji aQ nj .,,.........2,1 (9)
Geometrical separation of the different dynamic modes is based on the application of the following
two theorems. These theorems obtained from Gerschgorin give a localization of the eigenvalues on the
complex plane.
3.1.1. Theorem 1
All the eigenvalues of a matrix of arbitrary rank n, are contained in n circle bundles centered at
𝑎11, 𝑎22, … . . , 𝑎 𝑛𝑛and radii 𝑅𝑙1, 𝑅𝑙2, … . . , 𝑅𝑙𝑛 for the lines or 𝑅 𝑐1, 𝑅 𝑐2, … . . , 𝑅 𝑐𝑛for the columns, which are
obtained by summing the modules of the off-diagonal terms appearing in the same line or column:
𝑅𝑙𝑖 = 𝑝𝑖 𝑎𝑛𝑑 𝑅 𝑐𝑖 = 𝑄𝑖.
3.1.2. Theorem 2
When a group of k line-circles (or k column-circles) is completely disjoint from the other circles, it
contains k eigenvalues [16]. When a group of k circles is completely disjoint from the other circles, it can be
said that the system then has at least two-time scales. Whether this group of circles is to the right or to the left
of the other circles, we can determine the k slow modes corresponding to these k circles or respectively the k
fast modes. Each circle represents a state of the system. It is then possible to give an adequate partition of the
model. Generally, this direct method does not make it possible to conclude immediately in all cases.
Dauphin-Tanguy [17] then proposes the use of transformations:
)1,....,1,,1,...,1( kk diagS  , nk ,,2,1  ;
these parameters allow the variation of the circles size and their optimization lead to circles of minimum
radii. However, this method does not separate all systems.
3.1.3. Changing the radius size
Let the matrix be:
)1,....,1,,1,...,1( kk diagS  ,
nk ,,2,1 
(10)
The change of base XSX k
'
leads to a new state matrix. The radii kR1 and ckR become
kkR 1 and kckR  respectively [18]. If the operation is repeated several times, the aggregated
transformation is:





1
SASA
XSX
with:
k
k
SS 
(11)
If there are two disjoint sets of circles, then the permutation matrix is:





1
PAPA
XPX
(12)
3.1.4. Moving Circle Centers
In order to improve the separation of dynamics, it is sometimes necessary to introduce a
displacement of the circles which is characterized by the following transformation [18]:
 ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822
4814
ijlnl JBIT . . (13)
Only the elements of line i and column j change, the centers of the circlesi and j are shifted from
iia and jja to jilii aBa  and jiljj aBa  , respectively. The choice of lB can be made in such a way that
0)( 2
 jiliijjlijij aBaaBaX If several circles intersect, the terms  ,...2,1lBl
are calculated in the
same way, so the final transformation is:





1
TATA
XTX
with: T lT , (14)
If two groups of circles are disjoint, the permutation matrix P is again:





1
PAPA
XPX
(15)
4. CONCEPT OF MODULARITY
The application of concept of modularity to define the new architecture of small artificial networks
involves the following four steps:
4.1. The decomposition
The decomposition of a task into subtasks is the first step toward the application of modularity. It
can be done on the input space (horizontal decomposition) or on the input variables (vertical
decomposition) [19].
4.2. Organization of the modular architecture.
The interconnection of the modules can be parallel or in series. In the parallel architecture, all
modules process their information simultaneously. The global output involves some modules or all of them,
depending on the application. The cooperation link between the modules which can be of type "and" or of
type "or" [20].
4.3. Nature of learning
The organization of NNs in a modular architecture makes learning more difficult. The modules of
such an architecture can follow different learning processes.
4.3.1. Independent learning
Training modules independently seems to be the simplest way. This suggests that the other modules
of the architecture do not participate in learning. The interaction between the modules then occurs only
during restitution phase [21].
4.3.2. Cooperative learning
The proposed idea is to use a global method to train all modules at the same time. It is necessary
then to have a fixed architecture, determined in advance. An example is given by ME [22].
4.4. Communication between modules
The techniques for calculating the overall output of a multi-network architecture are diversely
varied, among which is the technique of weighted votes [23]. A weight is then associated with each classifier
representing a measure of performance. Another technique is to minimize the mean square error (MSE) of
the global output.
5. PROPESED NEUROGLIAL NETWORK ARCHITECTURE
The architecture adopted for our ANGN is partly based on the concepts of modularity and remains
very close to the above-mentioned architecture "ME". In this architecture, a number of NNs (experts) are
Int J Elec & Comp Eng ISSN: 2088-8708 
A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah)
4815
supervised by a GN (Figure 1). The glial supervisor network determines the weighting coefficients of each
expert’s participation according to the input. The ANGN uses the "divide and conquer" strategy in which the
responses from the expert NNs are combined into a single rapid response. The latter is aggregated to the slow
response of the glial supervisor network, resulting in the overall response of our system. Algorithm
developed in this article is based on softmax function. In this algorithm, the supervisor GN evaluates the
performance of each expert NN according to the input and selects the best of them to be activated.
Figure 1. Neuroglial network architecture
As illustrated in Figure 1, our global ANGN is composed of K fast NNs and a slow supervisor GN.
The vector of the inputs is divided into two vectors Xs and Xf representing, respectively, the slow inputs and
the fast inputs of the network. The vector Xs is assigned to the supervisor network and the vector Xf is
assigned to the various experts. The responses of the expert modules are combined to form the fast output.
The supervisor GN has two outputs, the first one, which is used for control and supervision of the experts by
selecting the most suitable network and deactivating the others for each input vector. The second output
represents the slow response of the GN, this output is aggregated at the fast output to form the global
response of the ANGN. The structure of the global ANGN is close to multi-model approach. Indeed, each
expert network is specialized in a precise sub-problem, a vertical decomposition of the input variables into
fast and slow inputs, as well as a horizontal decomposition of the fast input space Xf and slow Xs is used.
5.1. Algorithm based on function softmax
In the ANGN, the vector of the fast inputs Xf is sectioned both sequentially and in parallel into K
vectors Xf1, Xf2,…., XfK . These K vectors Xfi constitute the respective inputs of the K experts. The slow input
vector XS is also sectioned in the same way into K vectors XS1, XS2,…, XSK which are applied consecutively to
the supervisor GN.
In this algorithm, each vector of the fast inputs 𝑥𝑖(𝑓), (𝑖 = 1,2,…,K) is applied to all experts at the
same time. These modules learn different examples from the learning base and specialize in specific groups
of responses that are then weighted by the supervisor GN according to their absolute differences and the
desired response. The expert whose response is the closest to the desired response will have the
highest weight.
The supervisor GN, whose input is the vector of slow inputs 𝑥𝑖(𝑠), evaluates the performance of each
expert according to the input and selects the best one to be activated. This algorithm shows many similarities
to the one developed by Jacob [22] in the architecture "mixtures of experts". These similarities are related to
the supervision and selection of experts, however there are three main differences:
 In the "mixture of experts" architecture, the supervisor and the experts have the same time scale. For our
ANGN, the two types of modules work on two different time scales, slow and fast.
 The task of the supervisor in the ME approach is to supervise and control the competition of experts. In
our approach, the GN supervises and controls the competition of experts as well as contributing to the
overall response of the system by providing the slow response.
𝑥 𝑠1
𝑥 𝑠2
.
.
𝑥 𝑠𝑘
𝑋
Expert network k
∑
Expert network 1
expert 1
Glial network
∑
Expert network 2
𝑖 = 1,2, … , 𝑘
𝑥 𝑠
𝑥𝑓
 ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822
4816
 For the selection, the experts' weighting is binary (0 or 1). In the ME approach weights are probabilities
between 0 and 1.
Before presenting our algorithm, it is worth noting that the experts’ learning are cooperative, which
means experts learn simultaneously and divide the task during the learning process. The weights of the expert
and those of the GN, for the selected vector, are updated at the same time by propagation. The learning of
experts and the GN is carried out simultaneously by following these steps:
1. The separation of input vector X into both slow and fast vectors: 𝑥 𝑠 and 𝑥𝑓.
2. Each vector 𝑥𝑖(𝑓), (𝑖 = 1,2,…,K) is intended for all experts.
3. Each vector 𝑥𝑖(𝑠), (𝑖 = 1,2,…,K) is intended for the supervisor GN.
4. The learning of the GN to obtain the desired slow response corresponding to the input.
5. The selection of the expert according to the value of the probability 𝑝(𝑖/𝑥 𝑠) and selection of the ith expert
by evaluating the slow input𝑥(𝑠) .
6. The output of expert i represents the conditional average of the desired response with respect the input
and the expert network.
The learning algorithm of the ANGN architecture.
1. Initialization of the synaptic weights of experts and the GN.
2. for each slow input vector 𝑥𝑖(𝑠) :
2.1 Calculate for 𝑖 = 1,2, … 𝑘
for 𝑚 = 1,2, … , 𝑞
end for
2.2. Repeat step 2.1 until the algorithm converges.
2.3. If max  ig than 1ig
else gi = O
2.4.
)()(
1
)(
1
)(
( fisi
K
i
i
fi
K
i
ifi
yy
ygy
Y 




 Aggregation of the two slow and fast outputs
end for
 
     
     
   
     
         )(
)()()()(
)()()(
)()()()(
)()(
)()()(
1
2
)(
2
)(
21
)(
)()()(
1
)(
1
)(1
1
2
1
exp
2
1
exp)(
)(
],...,,[)(
)(
))(exp(
))(exp(
)(
)()(
siiiii
fi
m
fii
m
fi
m
fi
m
fi
m
f
m
fi
si
m
si
m
si
m
si
m
si
m
s
m
si
m
si
m
si
m
si
k
j
fifj
fifi
i
Tq
iiifi
m
fi
T
fi
m
fi
k
j
j
i
i
i
T
sii
Xkgkhkaka
XkekhkWkW
kyke
XkekWkW
kykdke
kWXy
ydg
ydng
kh
yyyky
kWXy
ku
ku
kg
kaXku
d


































Int J Elec & Comp Eng ISSN: 2088-8708 
A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah)
4817
6. APPLICATION
The ASM is a highly coupled nonlinear complex system and is a typical example of two time-scales
system. The performance of the proposed ANGN architecture is assessed on both the reduced, slow and fast
models of the machine. The state model of the induction machine in the stationary coordinate system (𝛼, 𝛽)
can be written as:
 
 

































0
.
11
1
2
2














 s
r
s
rrr
s
r
S
r
s v
I
T
R
BT
R
T
B
I
T
dt
d (16)
   
 r
T
s
s
r
em JR
L
B
PT 2 (17)
Or:
 
 
.
11
1
2
2
















I
T
R
BT
R
T
B
I
T
A
rprrp
sp
r
sp



with : rrpssp TTTT   , ,𝐵𝑟 =
𝑀
𝐿 𝑟
𝑎𝑛𝑑𝜎 = (1 − 𝑀2)/(𝐿 𝑠 ∗ 𝐿 𝑟) )
Application of the GCT to the state matrix A results in two circles which intersect (Figure 2a). A
change in the size of rays is carried out by transformation 1.
 


















RB
I
PP
rr
s
0
0
, 2
111
To get:
.
11
11
222
22
1
















JI
T
I
T
I
T
I
T
A
rprp
spsp

 (18)
The centers of the two circles (Figure 2a) are relocated by:
,
0
,
22
2
2122 







II
I
PP 
And
.
1
1
0
2222
2
2




















JI
T
JI
T
I
T
A
spsp
sp




The new line circles are still double and disjoint (Figure 2b); the final transformation is:











r
s
PP 21
,
 







RBI
I
PP
r2
2
21
0 (19)
 ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822
4818
(a) (b)
Figure 2. (a) Circles that intersect; (b) Disjointed Circles
In this case, the slow and fast components are easilyidentified. We can then apply SPM to develop
the slow and fast submodels. By putting:







z
x

We get:
(20)
zJx
L
p
T T
sp
em 2
This form is standard, the flux x is slow and the flux z is fast. By decomposing the fluxes, the voltages and
the torque, we obtain:
 
   
       

 fssss
fs
s
vtvv
ztzz
txx



(21)
   femsemem TTT 
The reduced slow model is then:
 
 
 
 





































ss
ss
s
s
r
r
s
s
v
v
T
T
dt
d














1
1
1
0
0
1
1
 
      ,
1
ssssss
s
s
sem vv
L
pT
T  




-250 -200 -150 -100 -50 0
-200
-150
-100
-50
0
50
100
150
200
Re
Im
-200 -100 0 100
-300
-200
-100
0
100
200
300
Re
Im
Actual Eigenvalues Actual Eigenvalues
 
,
0
0
.
11
.
1
0
2
2
22
2






































 

s
r
sprp
sp v
RBI
I
z
x
I
T
I
T
I
T
z
x
dt
d
Int J Elec & Comp Eng ISSN: 2088-8708 
A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah)
4819
The reduced fast model is:
      ,
11
1









 ss
sp
ss
sp
ss v
T
L
I 




       
     






















fs
r
s
fr
ss
sp
ss
r
sr
IR
B
L
v
T
R
B








11
1
1
6.1. Results and discussion.
The ANGN_MAS model is composed of two groups of inputs vsα, vsβ, which are decomposed into
two slow inputs vsα(s), vsβ(s) and two fast inputs vsα(f), vsβ(f). The ANGN_MAS consists of four experts and a
glial supervisor network. The four experts and the glial network have similar architectures, consisting of an
input layer of four neurons and an output layer of one neuron. The effectiveness of the ANGN algorithm
proposed in this paper is illustrated by the performance index root mean square (RMS) value. The RMS of
the states error is calculated as:
RMS=√(∑ 𝑒2(𝑖)𝑛
𝑖=1 )
𝑛⁄
where n is the number of simulation steps, and 𝑒(𝑖) is the difference between the state variables of the model
and the true system at the ith step. Figure 3 (a) and (b) show the good convergence of this learning algorithm.
The minimum value, which is very close to zero occurs after only two iterations. The algorithm converged
perfectly and quickly.
(a) (b)
Figure 3. (a) Evolution of the glial network; (b) Evolution of experts
For a comparison with the algorithms in [7], we choose the same parameters of the induction motor.
The simulation results are presented in Figure 4 and 5. The RMS values for state variables are presented in
Table 1.
2 4
0
0.2
0.4
0.6
0.8
1
x 10
-26
iteration
RMS
2 4
0
0.2
0.4
0.6
0.8
x 10
-28
X: 2
Y: 7.55e-030
iteration
RMS
Expert1
Expert2
Expert3
Expert4
      fssfssfem IIpT   
 
 
 
 
 
 



































fs
fs
sfs
fs
s
s
fs
fs
v
v
LI
I
T
T
I
I
dt
d








1
1
0
0
1
 ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822
4820
Figure 4. Error for states 𝜑𝑟𝛼 , 𝜑 𝑟𝛽 , 𝑖 𝛼, iβ: of proposed ANGN
Figure 5. Error for states 𝜑𝑟𝛼 , 𝜑 𝑟𝛽 , 𝑖 𝛼, iβ: [7]
Table 1. The RMS values for state variables
φrα(wb) φrβ (wb) iα(A) iβ(A)
RMS (ref [7]) 0.0575 0.05334 0.0446 0.0452
RMS (ANGN) 1.029*10-23
3.33*10-24
3.33*10-23
9.8*10-23
The RMS values of all state variables in Figure 4 and 5 demonstrate that the performance has been
improved compared to those [7], it is very clear from these figures that the identification errors
∆𝜑 𝛼 , ∆𝜑 𝛽, ∆𝑖 𝛼 , ∆𝑖 𝛽 of the proposed ANGN are greatly reduced.. We can confirm that the state variables of
ANGN follow those of the nonlinear system more accurately and faster. This is due to the good separation of
slow and fast modes which result that the complexity of the architecture sub-networks of ANGN are
considerably reduced.
We conducted other comparisons of the proposed approach with both the modular approach in
[24] and the mixture of experts (ME) approach. The RMS values of the 𝑇𝑒𝑚 are given in Table 2. From Table
2, it is very clear that RMS values of ANGN are much smaller than both the values [24] and ME, which
means that the proposed algorithm in this paper can achieve more accurate results. From Figure 3(a) and
3(b), we can see. Therefore the excellent performance of the proposed model in terms of convergence speed
and reduction of the optimal RMS error than both the performance in Figure 6 and performance in Figure 7.
It is largely due to the small size of the sub-networks and the very limited number of examples presented at
the input of each expert. These advantages are the main properties of modularity.
0.05 0.1 0.15 0.2 0.25 0.3 0.35
-2
-1
0
1
2
3
4
Time (second)

(wb) 
0.02 0.04 0.06 0.08 0.1
-0.5
0
0.5
x 10
-10
0.05 0.1 0.15 0.2 0.25 0.3 0.35
-2
-1
0
1
2
3
Time (second)

(wb)

0.02 0.04 0.06 0.08 0.1
-0.5
0
0.5
x 10
-10
0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4
-6
-4
-2
0
2
4
6
Time (second)
i

(A)
i
0.02 0.04 0.06 0.08 0.1
-0.5
0
0.5
x 10
-10
0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4
-3
-2
-1
0
1
2
3
Time (second)
i

(A)
i
0.02 0.04 0.06 0.08 0.1
-1
-0.5
0
0.5
x 10
-11
Int J Elec & Comp Eng ISSN: 2088-8708 
A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah)
4821
Table 2. Comparison performance models
Models Number of iterations RMS
ref [24] 100 1.08932 E-27
ME 90 7.934 E-24
ANGN 2 17.455 E-30
Figure 6. Evolution of RMS of the ME model Figure 7. Evolution of RMS in ref [24]
7. CONCLUSION
In this paper, a novel architecture called artificial neuroglial network (ANGN), inspired by a
recently discovered fact in biology, was developed along with a modular algorithm based on the softmax
function. Because of its good characteristics, an ASM put in standard singularly perturbed form is chosen to
validate the proposed ANGN architecture. The slow and fast reduced models are obtained in two steps:
- Application of a geometric approach based on GCT to decouple the slow and fast variables.
- Application of the SPM to develop the reduced models.
The simulation results related to the learning of the ANGN_ASM model demonstrate the fast and
accurate convergence properties of the proposed algorithm. ANGN can modeling ASM very wel with less
computational complexity. The next step is to use these reduced models to develop controllers for the ASM.
REFERENCES
[1] S. Prasad, et al., “Comparison of Accuracy Measures for RS Image Classification using SVM and ANN
Classifiers,” International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 7(3), pp. 1180-
1187, 2017.
[2] S. R. Borra, et al., “An Efficient Fingerprint Identification using Neural Network and BAT Algorithm,”
International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 8(2), pp. 1194-1213, 2018.
[3] H. Mohammed and A. Meroufel, “Contribution to the Artifical Neural Network Speed Estimator in a Degraded
Mode for Sensor-Less Fuzzy Direct Control of Torque Application Using Dual Stars Induction Machine,”
International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 5(4), pp. 729-741, 2015.
[4] Z. Mekrini and S. Bri, “High-Performance using Neural Networks in Direct Torque Control for Asynchronous
Machine,” International Journal of Electrical & Computer Engineering, vol/issue: 8(2), 2018.
[5] P. Kokotovic, et al., “Singular perturbation methods in control: analysis and design,” vol. 25, 1999.
[6] D. Naidu, “Singular perturbations and time scales in control theory and applications: an overview,” Dynamics of
Continuous Discrete and Impulsive Systems Series B, vol. 9, pp. 233-278, 2002.
[7] Z. J. Fu, et al., “Robust on-line nonlinear systems identification using multilayer dynamic neural networks with
two-time scales,” Neurocomputing, vol. 113, pp. 16-26, 2013.
[8] X. Li and W. Yu, “Dynamic system identification via recurrent multilayer perceptrons,” Information sciences,
vol/issue: 147(1-4), pp. 45-63, 2002.
[9] C. S. Leung and L. W. Chan, “Dual extended Kalman filtering in recurrent neural networks,” Neural Networks,
vol/issue: 16(2), pp. 223-239, 2003.
[10] D. D. Zheng, et al., “Indirect adaptive control of nonlinear system via dynamic multilayer neural networks with
multi‐time scales,” International Journal of Adaptive Control and Signal Processing, vol/issue: 29(4), pp. 505-523,
2015.
[11] D. D. Zheng, et al., “Identification and trajectory tracking control of nonlinear singularly perturbed systems,” IEEE
Transactions on Industrial Electronics, vol/issue: 64(5), pp. 3737-3747, 2017.
20 40 60 80 100
0
0.2
0.4
0.6
0.8
x 10
-23
iteration
RMS
Expert1
Expert2
Expert3
Expert4
the winner expert
 ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822
4822
[12] D. D. Zheng, et al., “Robust identification for singularly perturbed nonlinear systems using multi-time-scale
dynamic neural network,” Decision and Control (CDC), 2017 IEEE 56th Annual Conference on, 2017.
[13] Astrocytes, “Science & vie,” pp. 67, 2005.
[14] A. B. Porto and A. Pazos, “Neuroglial behaviour in computer science,” Artificial neural networks in real-life
applications, IGI Global, pp. 1-21, 2006.
[15] Y. V. Hote and A. N. Jha, “New approach of Gerschgorin theorem in model order reduction,” International Journal
of Modelling and Simulation, vol/issue: 35(3-4), pp. 143-149, 2015.
[16] O. Touhami, et al., “Dynamics separation of induction machine models using gerschgorin's circles and singular
perturbations,” Electrical and Electronics Engineering, 2004. (ICEEE). 1st International Conference on, 2004.
[17] G. D. Tanguy, et al., “Singular perturbation method and reciprocal transformation on two-time scale systems,”
Multivariable Control, Springer, pp. 327-342, 1984.
[18] H. Guesbaoui and C. Iung, “Multi-time scale modelling in electrical machines,” Systems, Man and Cybernetics,
1993. 'Systems Engineering in the Service of Humans', Conference Proceeding, International Conference on. 1993.
[19] E. Ronco, et al., “Modular neural network and self-decomposition,” Connection Science (special issue: Combining
Neural Nets), 1996.
[20] Y. Bennani, “A modular and hybrid connectionist system for speaker identification,” Neural Computation,
vol/issue: 7(4), pp. 791-798, 1995.
[21] I. Kirschning, et al., “A parallel recurrent cascade-correlation neural network with natural connectionist glue,”
Neural Networks, 1995. Proceedings, IEEE International Conference on, 1995.
[22] R. A. Jacobs, et al., “Adaptive mixtures of local experts,” Neural computation, vol/issue: 3(1), pp. 79-87, 1991.
[23] H. Drucker, et al., “Boosting and other ensemble methods,” Neural Computation, vol/issue: 6(6), pp. 1289-1301,
1994.
[24] S. Salah, et al., “Application New Approach with Two Networks Slow and Fast on the Asynchronous Machine
World Academy of Science, Engineering and Technology,” International Journal of Electrical and Computer
Engineering, vol/issue: 7(8), pp. 1087-1091, 2013.

More Related Content

PDF
Microscopy images segmentation algorithm based on shearlet neural network
PDF
VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING
PDF
Optimal neural network models for wind speed prediction
PDF
X-TREPAN : A Multi Class Regression and Adapted Extraction of Comprehensible ...
PDF
Dx25751756
PDF
1.meena tushir finalpaper-1-12
PDF
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...
Microscopy images segmentation algorithm based on shearlet neural network
VARIATIONAL MONTE-CARLO APPROACH FOR ARTICULATED OBJECT TRACKING
Optimal neural network models for wind speed prediction
X-TREPAN : A Multi Class Regression and Adapted Extraction of Comprehensible ...
Dx25751756
1.meena tushir finalpaper-1-12
The Positive Effects of Fuzzy C-Means Clustering on Supervised Learning Class...

What's hot (9)

PDF
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...
PDF
Experimental investigation on circular hollow steel
PDF
Geometric Correction for Braille Document Images
PDF
An iterative morphological decomposition algorithm for reduction of skeleton ...
PDF
2 achuthan c_pankaj--23-39
PDF
A Review of Image Classification Techniques
PDF
Relevance Vector Machines for Earthquake Response Spectra
PDF
Ijeet 06 08_001
PDF
www.ijerd.com
A Blind Steganalysis on JPEG Gray Level Image Based on Statistical Features a...
Experimental investigation on circular hollow steel
Geometric Correction for Braille Document Images
An iterative morphological decomposition algorithm for reduction of skeleton ...
2 achuthan c_pankaj--23-39
A Review of Image Classification Techniques
Relevance Vector Machines for Earthquake Response Spectra
Ijeet 06 08_001
www.ijerd.com
Ad

Similar to A Novel Neuroglial Architecture for Modelling Singular Perturbation System (20)

PDF
[IJCT-V3I2P19] Authors: Jieyin Mai, Xiaojun Li
PDF
Analysis of intelligent system design by neuro adaptive control
PDF
Analysis of intelligent system design by neuro adaptive control no restriction
PDF
Ijetcas14 536
PDF
Adaptive Control Scheme with Parameter Adaptation - From Human Motor Control ...
PDF
Investigation of auto-oscilational regimes of the system by dynamic nonlinear...
PDF
Two-dimensional Klein-Gordon and Sine-Gordon numerical solutions based on dee...
PDF
Ijciet 10 01_153-2
PPT
Neural Network
PDF
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
PDF
A New Neural Network For Solving Linear Programming Problems
PDF
Chebyshev Functional Link Artificial Neural Networks for Denoising of Image C...
PDF
Invariant Manifolds, Passage through Resonance, Stability and a Computer Assi...
PDF
Monotonicity of Phaselocked Solutions in Chains and Arrays of Nearest-Neighbo...
PDF
Evaluation the affects of mimo based rayleigh network cascaded with unstable ...
PDF
Redundancy in robot manipulators and multi robot systems
PDF
PDF
Adaptive pi based on direct synthesis nishant
PDF
Echo state networks and locomotion patterns
[IJCT-V3I2P19] Authors: Jieyin Mai, Xiaojun Li
Analysis of intelligent system design by neuro adaptive control
Analysis of intelligent system design by neuro adaptive control no restriction
Ijetcas14 536
Adaptive Control Scheme with Parameter Adaptation - From Human Motor Control ...
Investigation of auto-oscilational regimes of the system by dynamic nonlinear...
Two-dimensional Klein-Gordon and Sine-Gordon numerical solutions based on dee...
Ijciet 10 01_153-2
Neural Network
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...
A New Neural Network For Solving Linear Programming Problems
Chebyshev Functional Link Artificial Neural Networks for Denoising of Image C...
Invariant Manifolds, Passage through Resonance, Stability and a Computer Assi...
Monotonicity of Phaselocked Solutions in Chains and Arrays of Nearest-Neighbo...
Evaluation the affects of mimo based rayleigh network cascaded with unstable ...
Redundancy in robot manipulators and multi robot systems
Adaptive pi based on direct synthesis nishant
Echo state networks and locomotion patterns
Ad

More from IJECEIAES (20)

PDF
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
PDF
Embedded machine learning-based road conditions and driving behavior monitoring
PDF
Advanced control scheme of doubly fed induction generator for wind turbine us...
PDF
Neural network optimizer of proportional-integral-differential controller par...
PDF
An improved modulation technique suitable for a three level flying capacitor ...
PDF
A review on features and methods of potential fishing zone
PDF
Electrical signal interference minimization using appropriate core material f...
PDF
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
PDF
Bibliometric analysis highlighting the role of women in addressing climate ch...
PDF
Voltage and frequency control of microgrid in presence of micro-turbine inter...
PDF
Enhancing battery system identification: nonlinear autoregressive modeling fo...
PDF
Smart grid deployment: from a bibliometric analysis to a survey
PDF
Use of analytical hierarchy process for selecting and prioritizing islanding ...
PDF
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...
PDF
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...
PDF
Adaptive synchronous sliding control for a robot manipulator based on neural ...
PDF
Remote field-programmable gate array laboratory for signal acquisition and de...
PDF
Detecting and resolving feature envy through automated machine learning and m...
PDF
Smart monitoring technique for solar cell systems using internet of things ba...
PDF
An efficient security framework for intrusion detection and prevention in int...
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
Embedded machine learning-based road conditions and driving behavior monitoring
Advanced control scheme of doubly fed induction generator for wind turbine us...
Neural network optimizer of proportional-integral-differential controller par...
An improved modulation technique suitable for a three level flying capacitor ...
A review on features and methods of potential fishing zone
Electrical signal interference minimization using appropriate core material f...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Bibliometric analysis highlighting the role of women in addressing climate ch...
Voltage and frequency control of microgrid in presence of micro-turbine inter...
Enhancing battery system identification: nonlinear autoregressive modeling fo...
Smart grid deployment: from a bibliometric analysis to a survey
Use of analytical hierarchy process for selecting and prioritizing islanding ...
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...
Adaptive synchronous sliding control for a robot manipulator based on neural ...
Remote field-programmable gate array laboratory for signal acquisition and de...
Detecting and resolving feature envy through automated machine learning and m...
Smart monitoring technique for solar cell systems using internet of things ba...
An efficient security framework for intrusion detection and prevention in int...

Recently uploaded (20)

PDF
III.4.1.2_The_Space_Environment.p pdffdf
PDF
Analyzing Impact of Pakistan Economic Corridor on Import and Export in Pakist...
PPTX
Nature of X-rays, X- Ray Equipment, Fluoroscopy
PDF
Categorization of Factors Affecting Classification Algorithms Selection
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PDF
PPT on Performance Review to get promotions
PDF
UNIT no 1 INTRODUCTION TO DBMS NOTES.pdf
PDF
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PDF
A SYSTEMATIC REVIEW OF APPLICATIONS IN FRAUD DETECTION
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PDF
Unit I ESSENTIAL OF DIGITAL MARKETING.pdf
PDF
Soil Improvement Techniques Note - Rabbi
PDF
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
PPTX
CURRICULAM DESIGN engineering FOR CSE 2025.pptx
PPT
A5_DistSysCh1.ppt_INTRODUCTION TO DISTRIBUTED SYSTEMS
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PDF
Integrating Fractal Dimension and Time Series Analysis for Optimized Hyperspe...
PPTX
Information Storage and Retrieval Techniques Unit III
PDF
Visual Aids for Exploratory Data Analysis.pdf
III.4.1.2_The_Space_Environment.p pdffdf
Analyzing Impact of Pakistan Economic Corridor on Import and Export in Pakist...
Nature of X-rays, X- Ray Equipment, Fluoroscopy
Categorization of Factors Affecting Classification Algorithms Selection
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPT on Performance Review to get promotions
UNIT no 1 INTRODUCTION TO DBMS NOTES.pdf
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
A SYSTEMATIC REVIEW OF APPLICATIONS IN FRAUD DETECTION
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Unit I ESSENTIAL OF DIGITAL MARKETING.pdf
Soil Improvement Techniques Note - Rabbi
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
CURRICULAM DESIGN engineering FOR CSE 2025.pptx
A5_DistSysCh1.ppt_INTRODUCTION TO DISTRIBUTED SYSTEMS
R24 SURVEYING LAB MANUAL for civil enggi
Integrating Fractal Dimension and Time Series Analysis for Optimized Hyperspe...
Information Storage and Retrieval Techniques Unit III
Visual Aids for Exploratory Data Analysis.pdf

A Novel Neuroglial Architecture for Modelling Singular Perturbation System

  • 1. International Journal of Electrical and Computer Engineering (IJECE) Vol. 8, No. 6, December 2018, pp. 4810~4822 ISSN: 2088-8708, DOI: 10.11591/ijece.v8i6.pp4810-4822  4810 Journal homepage: http://iaescore.com/journals/index.php/IJECE A Novel Neuroglial Architecture for Modelling Singular Perturbation System Samia Salah, M’hamed Hadj Sadok, Abderrezak Guessoum Department of electronics, University Saad Dahlab Blida1, Algeria Article Info ABSTRACT Article history: Received Jan 23, 2018 Revised Jul 5, 2018 Accepted Jul 29, 2018 This work develops a new modular architecture that emulates a recently- discovered biological paradigm. It originates from the human brain where the information flows along two different pathways and is processed along two time scales: one is a fast neural network (NN) and the other is a slow network called the glial network (GN). It was found that the neural network is powered and controlled by the glial network. Based on our biological knowledge of glial cells and the powerful concept of modularity, a novel approach called artificial neuroglial Network (ANGN) was designed and an algorithm based on different concepts of modularity was also developed. The implementation is based on the notion of multi-time scale systems. Validation is performed through an asynchronous machine (ASM) modeled in the standard singularly perturbed form. We apply the geometrical approach, based on Gerschgorin’s circle theorem (GCT), to separate the fast and slow variables, as well as the singular perturbation method (SPM) to determine the reduced models. This new architecture makes it possible to obtain smaller networks with less complexity and better performance. Keyword: Artificial neuro glial network Asynchronous machine Gerschgorin’s circle theorem Glial network Singular perturbation method Copyright © 2018 Institute of Advanced Engineering and Science. All rights reserved. Corresponding Author: Samia Salah, Department of Electronics, University Saad Dahlab Blida1, Soumaa Road, BP 270, Blida, Algeria, (+213) 25 43 38 50 Email : salah_samia@yahoo.fr 1. INTRODUCTION The last few years have witnessed a tremendous growth in the field of intelligent systems. Inspired by biological neural networks, one such success has been achieved in evolution of artificial neural networks (ANNs). ANNs are characterized by their distinctive capabilities of exhibiting massive parallelism, generalization ability and being good function approximators. This renders them useful for solving a variety of problems in pattern recognition, prediction, optimization and associative memory [1], [2]. Additionally, they are also being employed in system modeling and control [3], [4]. These ANNs, efficient in numerous applications, are not as well suited for approximating non-linear and high-dimensional functions with multiple time dynamics like the ones in singular perturbation systems (SPSs) which increases the difficulties in system modeling, analysis and controller design .An effective way to overcome this problem is to separate the original system states into subsystems that change rapidly and those that vary slowly on the chosen time scale, using singularly perturbation method (SPM). Some recent research results using the (SPM) to analyze and control the SPSs are published in [5], [6]. However, accurate and faithful mathematical models for those systems are usually difficult to obtain due to the uncertainties and nonlinearities. In this case, adequate system identification becomes important and necessary, before a singular perturbation theory-based control scheme can be designed. Recently, research using multi times scale neural networks have been proposed in literature to solve the system identification problem of the nonlinear SPSs. Among them there are multi-time-scale dynamic
  • 2. Int J Elec & Comp Eng ISSN: 2088-8708  A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah) 4811 neural network (DNN) proposed in [7]. Or recurrent neural network (RNN) proposed in [8]. In these papers, training methods are based on a gradient descent updating algorithm with fixed “learning gain”, such as back propagation (BP) and RNN algorithms. The main drawback of these training methods is that the convergence speed is usually very slow. To accelerate the training process ,researchers investigated the extended Kalman filter (EKF) based training methods for NN in [9]. The theoretical analysis of EKF based training algorithm requires the modeling uncertainty of the NN to be a Gaussian process, which may not be true in real applications. Some other researchers also studied optimal bounded ellipsoid (OBE) algorithm-based learning laws for NN [10]-[12]. All of these methods are complex and computationally intensive. In this paper, we propose a new multi time-scale NN architecture called "artificial neuroglial network" (ANGN) based on the powerful concept of "modularity" to solve the problems of singular perturbation system training. The basic idea is to use the knowledge about the nervous system and the human brain, where the information flows along two different pathways and is processed along two-time scales: one is a fast-neural network (NN) and the other is a slow network called the glial network (GN). It was found that the neural network is powered and controlled by the glial network [13], [14]. In our experiment, for a given application, depending on the complexity and the physical characteristics of the problem, we divide our global model into two sub-models: slow and fast ones using the singular perturbation method (SPM). The first difficulty that arises when decoupling variables is the identification of both the slow and fast model variables. The solution is based on Gerschgorin’s circle geometric theorem (GCT) [15]. This technique makes it possible to locate the eigenvalues in the complex plane within groups of circles. The grouping of the modes is immediate whenever circles are disjoint, and afterward, the number of slow and fast modes is determined. Validation of the proposed approach is carried out on the ASM model, under the singularly perturbed standard form. Subsequently, an algorithm is adopted to test the effectiveness and performance of the proposed ANGN. This new architecture has made it possible to obtain networks of considerably smaller size with simple structures which have a strong nonlinear approximation capability and which enables it to model nonlinear singularly perturbed systems more accurately with less computation complexity, compared to the conventional neural network model. 2. SINGULAR PERTURBATION METHOD This method is used for multi-time scale systems that can be reduced to the standard form of equation (3) by the determination of the parasitic term ε. Consider the state model of a linear system of dimension n: { 𝑋̇ = 𝐴𝑋 + 𝐵𝑈 𝑌 = 𝐶𝑋 } (1) Evolving according to two-time scales, it can be decoupled into two slow and fast subsystems. The state vector X contains all the state variables corresponding to the dynamic elements. If x is the set of state variables of slow elements, and z is the set of fast elements the model is written , 2 1 2221 1211 U B B z x AA AA z x                           (2)          z x CCy ,21 with : -𝑥(𝑡0) = 𝑥0 , 𝑎𝑛𝑑 𝑧(𝑡0) = 𝑧0 - 22221 ,, BAA - are very large compared to 11211 ,, BAA . We introduce the parameter ε to normalize our model, we write: 21 * 21 AA  , 22 * 22 AA  , .2 * 2 BB  𝜀 can be given by:  0120 1 22 LAAA    with : 21 1 220 AAL   and 0 1 12110 LAAA   By assuming that matrix A22 is invertible, the state equation in the standard singularly perturbed form with ε as the perturbation parameter is then written as:
  • 3.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822 4812 ,* 2 1 * 22 * 21 1211 U B B z x AA AA z x                            (3) 2.1. Slow and fast reduced models The slow reduced model is determined from eq. (3) by considering that :0           sss sssss sssss uBxAAz uDxCy uBxAx 221 1 22  (4) where ssss yuzx ,,, are the slow components of the variables yuzx ,,, respectively, with : 2 1 222 21 1 2221 2 1 22121 21 1 221211 BACD AACCC BAABB AAAAA s s s s         (5) with:   00 xtxs  .The initial value of the slow components sz is :    012 1 220 txAAtz ss   which is generally different from 0z . The fast variables z cannot therefore be approximated by sz in the time interval ,0 T . We introduce the corrective term fz , defined by : sf zzz  which represents rapid changes in z .Therefore, the boundary layer equation, expressed in the dilated time follows: (  dt dz f  d dz f ): (6) The fast reduced model is then written:                     021 1 2200 2 222 xAAztz zCy uBzA d dz f ff ff f    (7) 3. IDENTIFICATION OF THE GEOMETRIC DYNAMICS Setting the previous standard form assumes: a) Knowledge of the eigenvalues to determine the size of the slow and fast eigenvectors. b) A suitable grouping of slow modes and fast modes. Our attention is focused on geometrical methods including the circles of gerschgorin. The localization of the eigenvalues on the complex plane makes it possible to put a system in the standard form without having to calculate these eigenvalues. In the case of GCT, the grouping of modes is immediate as soon as the eigenvalues are circumscribed in disjoint circles. The geometric method based on GCT for the selection and separation of different time scales is represented as follows: 3.1. Gerschgorin’s circles theorem (GCT) Noting  njiaij .....1,,  as the elements of the state matrix A ,𝑝𝑖, 𝑄𝑖 are expressed by
  • 4. Int J Elec & Comp Eng ISSN: 2088-8708  A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah) 4813 , 1     n ij j iji ap ni .,,.........2,1 (8) , 1     n ij i iji aQ nj .,,.........2,1 (9) Geometrical separation of the different dynamic modes is based on the application of the following two theorems. These theorems obtained from Gerschgorin give a localization of the eigenvalues on the complex plane. 3.1.1. Theorem 1 All the eigenvalues of a matrix of arbitrary rank n, are contained in n circle bundles centered at 𝑎11, 𝑎22, … . . , 𝑎 𝑛𝑛and radii 𝑅𝑙1, 𝑅𝑙2, … . . , 𝑅𝑙𝑛 for the lines or 𝑅 𝑐1, 𝑅 𝑐2, … . . , 𝑅 𝑐𝑛for the columns, which are obtained by summing the modules of the off-diagonal terms appearing in the same line or column: 𝑅𝑙𝑖 = 𝑝𝑖 𝑎𝑛𝑑 𝑅 𝑐𝑖 = 𝑄𝑖. 3.1.2. Theorem 2 When a group of k line-circles (or k column-circles) is completely disjoint from the other circles, it contains k eigenvalues [16]. When a group of k circles is completely disjoint from the other circles, it can be said that the system then has at least two-time scales. Whether this group of circles is to the right or to the left of the other circles, we can determine the k slow modes corresponding to these k circles or respectively the k fast modes. Each circle represents a state of the system. It is then possible to give an adequate partition of the model. Generally, this direct method does not make it possible to conclude immediately in all cases. Dauphin-Tanguy [17] then proposes the use of transformations: )1,....,1,,1,...,1( kk diagS  , nk ,,2,1  ; these parameters allow the variation of the circles size and their optimization lead to circles of minimum radii. However, this method does not separate all systems. 3.1.3. Changing the radius size Let the matrix be: )1,....,1,,1,...,1( kk diagS  , nk ,,2,1  (10) The change of base XSX k ' leads to a new state matrix. The radii kR1 and ckR become kkR 1 and kckR  respectively [18]. If the operation is repeated several times, the aggregated transformation is:      1 SASA XSX with: k k SS  (11) If there are two disjoint sets of circles, then the permutation matrix is:      1 PAPA XPX (12) 3.1.4. Moving Circle Centers In order to improve the separation of dynamics, it is sometimes necessary to introduce a displacement of the circles which is characterized by the following transformation [18]:
  • 5.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822 4814 ijlnl JBIT . . (13) Only the elements of line i and column j change, the centers of the circlesi and j are shifted from iia and jja to jilii aBa  and jiljj aBa  , respectively. The choice of lB can be made in such a way that 0)( 2  jiliijjlijij aBaaBaX If several circles intersect, the terms  ,...2,1lBl are calculated in the same way, so the final transformation is:      1 TATA XTX with: T lT , (14) If two groups of circles are disjoint, the permutation matrix P is again:      1 PAPA XPX (15) 4. CONCEPT OF MODULARITY The application of concept of modularity to define the new architecture of small artificial networks involves the following four steps: 4.1. The decomposition The decomposition of a task into subtasks is the first step toward the application of modularity. It can be done on the input space (horizontal decomposition) or on the input variables (vertical decomposition) [19]. 4.2. Organization of the modular architecture. The interconnection of the modules can be parallel or in series. In the parallel architecture, all modules process their information simultaneously. The global output involves some modules or all of them, depending on the application. The cooperation link between the modules which can be of type "and" or of type "or" [20]. 4.3. Nature of learning The organization of NNs in a modular architecture makes learning more difficult. The modules of such an architecture can follow different learning processes. 4.3.1. Independent learning Training modules independently seems to be the simplest way. This suggests that the other modules of the architecture do not participate in learning. The interaction between the modules then occurs only during restitution phase [21]. 4.3.2. Cooperative learning The proposed idea is to use a global method to train all modules at the same time. It is necessary then to have a fixed architecture, determined in advance. An example is given by ME [22]. 4.4. Communication between modules The techniques for calculating the overall output of a multi-network architecture are diversely varied, among which is the technique of weighted votes [23]. A weight is then associated with each classifier representing a measure of performance. Another technique is to minimize the mean square error (MSE) of the global output. 5. PROPESED NEUROGLIAL NETWORK ARCHITECTURE The architecture adopted for our ANGN is partly based on the concepts of modularity and remains very close to the above-mentioned architecture "ME". In this architecture, a number of NNs (experts) are
  • 6. Int J Elec & Comp Eng ISSN: 2088-8708  A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah) 4815 supervised by a GN (Figure 1). The glial supervisor network determines the weighting coefficients of each expert’s participation according to the input. The ANGN uses the "divide and conquer" strategy in which the responses from the expert NNs are combined into a single rapid response. The latter is aggregated to the slow response of the glial supervisor network, resulting in the overall response of our system. Algorithm developed in this article is based on softmax function. In this algorithm, the supervisor GN evaluates the performance of each expert NN according to the input and selects the best of them to be activated. Figure 1. Neuroglial network architecture As illustrated in Figure 1, our global ANGN is composed of K fast NNs and a slow supervisor GN. The vector of the inputs is divided into two vectors Xs and Xf representing, respectively, the slow inputs and the fast inputs of the network. The vector Xs is assigned to the supervisor network and the vector Xf is assigned to the various experts. The responses of the expert modules are combined to form the fast output. The supervisor GN has two outputs, the first one, which is used for control and supervision of the experts by selecting the most suitable network and deactivating the others for each input vector. The second output represents the slow response of the GN, this output is aggregated at the fast output to form the global response of the ANGN. The structure of the global ANGN is close to multi-model approach. Indeed, each expert network is specialized in a precise sub-problem, a vertical decomposition of the input variables into fast and slow inputs, as well as a horizontal decomposition of the fast input space Xf and slow Xs is used. 5.1. Algorithm based on function softmax In the ANGN, the vector of the fast inputs Xf is sectioned both sequentially and in parallel into K vectors Xf1, Xf2,…., XfK . These K vectors Xfi constitute the respective inputs of the K experts. The slow input vector XS is also sectioned in the same way into K vectors XS1, XS2,…, XSK which are applied consecutively to the supervisor GN. In this algorithm, each vector of the fast inputs 𝑥𝑖(𝑓), (𝑖 = 1,2,…,K) is applied to all experts at the same time. These modules learn different examples from the learning base and specialize in specific groups of responses that are then weighted by the supervisor GN according to their absolute differences and the desired response. The expert whose response is the closest to the desired response will have the highest weight. The supervisor GN, whose input is the vector of slow inputs 𝑥𝑖(𝑠), evaluates the performance of each expert according to the input and selects the best one to be activated. This algorithm shows many similarities to the one developed by Jacob [22] in the architecture "mixtures of experts". These similarities are related to the supervision and selection of experts, however there are three main differences:  In the "mixture of experts" architecture, the supervisor and the experts have the same time scale. For our ANGN, the two types of modules work on two different time scales, slow and fast.  The task of the supervisor in the ME approach is to supervise and control the competition of experts. In our approach, the GN supervises and controls the competition of experts as well as contributing to the overall response of the system by providing the slow response. 𝑥 𝑠1 𝑥 𝑠2 . . 𝑥 𝑠𝑘 𝑋 Expert network k ∑ Expert network 1 expert 1 Glial network ∑ Expert network 2 𝑖 = 1,2, … , 𝑘 𝑥 𝑠 𝑥𝑓
  • 7.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822 4816  For the selection, the experts' weighting is binary (0 or 1). In the ME approach weights are probabilities between 0 and 1. Before presenting our algorithm, it is worth noting that the experts’ learning are cooperative, which means experts learn simultaneously and divide the task during the learning process. The weights of the expert and those of the GN, for the selected vector, are updated at the same time by propagation. The learning of experts and the GN is carried out simultaneously by following these steps: 1. The separation of input vector X into both slow and fast vectors: 𝑥 𝑠 and 𝑥𝑓. 2. Each vector 𝑥𝑖(𝑓), (𝑖 = 1,2,…,K) is intended for all experts. 3. Each vector 𝑥𝑖(𝑠), (𝑖 = 1,2,…,K) is intended for the supervisor GN. 4. The learning of the GN to obtain the desired slow response corresponding to the input. 5. The selection of the expert according to the value of the probability 𝑝(𝑖/𝑥 𝑠) and selection of the ith expert by evaluating the slow input𝑥(𝑠) . 6. The output of expert i represents the conditional average of the desired response with respect the input and the expert network. The learning algorithm of the ANGN architecture. 1. Initialization of the synaptic weights of experts and the GN. 2. for each slow input vector 𝑥𝑖(𝑠) : 2.1 Calculate for 𝑖 = 1,2, … 𝑘 for 𝑚 = 1,2, … , 𝑞 end for 2.2. Repeat step 2.1 until the algorithm converges. 2.3. If max  ig than 1ig else gi = O 2.4. )()( 1 )( 1 )( ( fisi K i i fi K i ifi yy ygy Y       Aggregation of the two slow and fast outputs end for                                  )( )()()()( )()()( )()()()( )()( )()()( 1 2 )( 2 )( 21 )( )()()( 1 )( 1 )(1 1 2 1 exp 2 1 exp)( )( ],...,,[)( )( ))(exp( ))(exp( )( )()( siiiii fi m fii m fi m fi m fi m f m fi si m si m si m si m si m s m si m si m si m si k j fifj fifi i Tq iiifi m fi T fi m fi k j j i i i T sii Xkgkhkaka XkekhkWkW kyke XkekWkW kykdke kWXy ydg ydng kh yyyky kWXy ku ku kg kaXku d                                  
  • 8. Int J Elec & Comp Eng ISSN: 2088-8708  A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah) 4817 6. APPLICATION The ASM is a highly coupled nonlinear complex system and is a typical example of two time-scales system. The performance of the proposed ANGN architecture is assessed on both the reduced, slow and fast models of the machine. The state model of the induction machine in the stationary coordinate system (𝛼, 𝛽) can be written as:                                      0 . 11 1 2 2                s r s rrr s r S r s v I T R BT R T B I T dt d (16)      r T s s r em JR L B PT 2 (17) Or:     . 11 1 2 2                 I T R BT R T B I T A rprrp sp r sp    with : rrpssp TTTT   , ,𝐵𝑟 = 𝑀 𝐿 𝑟 𝑎𝑛𝑑𝜎 = (1 − 𝑀2)/(𝐿 𝑠 ∗ 𝐿 𝑟) ) Application of the GCT to the state matrix A results in two circles which intersect (Figure 2a). A change in the size of rays is carried out by transformation 1.                     RB I PP rr s 0 0 , 2 111 To get: . 11 11 222 22 1                 JI T I T I T I T A rprp spsp   (18) The centers of the two circles (Figure 2a) are relocated by: , 0 , 22 2 2122         II I PP  And . 1 1 0 2222 2 2                     JI T JI T I T A spsp sp     The new line circles are still double and disjoint (Figure 2b); the final transformation is:            r s PP 21 ,          RBI I PP r2 2 21 0 (19)
  • 9.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822 4818 (a) (b) Figure 2. (a) Circles that intersect; (b) Disjointed Circles In this case, the slow and fast components are easilyidentified. We can then apply SPM to develop the slow and fast submodels. By putting:        z x  We get: (20) zJx L p T T sp em 2 This form is standard, the flux x is slow and the flux z is fast. By decomposing the fluxes, the voltages and the torque, we obtain:                 fssss fs s vtvv ztzz txx    (21)    femsemem TTT  The reduced slow model is then:                                              ss ss s s r r s s v v T T dt d               1 1 1 0 0 1 1         , 1 ssssss s s sem vv L pT T       -250 -200 -150 -100 -50 0 -200 -150 -100 -50 0 50 100 150 200 Re Im -200 -100 0 100 -300 -200 -100 0 100 200 300 Re Im Actual Eigenvalues Actual Eigenvalues   , 0 0 . 11 . 1 0 2 2 22 2                                          s r sprp sp v RBI I z x I T I T I T z x dt d
  • 10. Int J Elec & Comp Eng ISSN: 2088-8708  A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah) 4819 The reduced fast model is:       , 11 1           ss sp ss sp ss v T L I                                          fs r s fr ss sp ss r sr IR B L v T R B         11 1 1 6.1. Results and discussion. The ANGN_MAS model is composed of two groups of inputs vsα, vsβ, which are decomposed into two slow inputs vsα(s), vsβ(s) and two fast inputs vsα(f), vsβ(f). The ANGN_MAS consists of four experts and a glial supervisor network. The four experts and the glial network have similar architectures, consisting of an input layer of four neurons and an output layer of one neuron. The effectiveness of the ANGN algorithm proposed in this paper is illustrated by the performance index root mean square (RMS) value. The RMS of the states error is calculated as: RMS=√(∑ 𝑒2(𝑖)𝑛 𝑖=1 ) 𝑛⁄ where n is the number of simulation steps, and 𝑒(𝑖) is the difference between the state variables of the model and the true system at the ith step. Figure 3 (a) and (b) show the good convergence of this learning algorithm. The minimum value, which is very close to zero occurs after only two iterations. The algorithm converged perfectly and quickly. (a) (b) Figure 3. (a) Evolution of the glial network; (b) Evolution of experts For a comparison with the algorithms in [7], we choose the same parameters of the induction motor. The simulation results are presented in Figure 4 and 5. The RMS values for state variables are presented in Table 1. 2 4 0 0.2 0.4 0.6 0.8 1 x 10 -26 iteration RMS 2 4 0 0.2 0.4 0.6 0.8 x 10 -28 X: 2 Y: 7.55e-030 iteration RMS Expert1 Expert2 Expert3 Expert4       fssfssfem IIpT                                                   fs fs sfs fs s s fs fs v v LI I T T I I dt d         1 1 0 0 1
  • 11.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822 4820 Figure 4. Error for states 𝜑𝑟𝛼 , 𝜑 𝑟𝛽 , 𝑖 𝛼, iβ: of proposed ANGN Figure 5. Error for states 𝜑𝑟𝛼 , 𝜑 𝑟𝛽 , 𝑖 𝛼, iβ: [7] Table 1. The RMS values for state variables φrα(wb) φrβ (wb) iα(A) iβ(A) RMS (ref [7]) 0.0575 0.05334 0.0446 0.0452 RMS (ANGN) 1.029*10-23 3.33*10-24 3.33*10-23 9.8*10-23 The RMS values of all state variables in Figure 4 and 5 demonstrate that the performance has been improved compared to those [7], it is very clear from these figures that the identification errors ∆𝜑 𝛼 , ∆𝜑 𝛽, ∆𝑖 𝛼 , ∆𝑖 𝛽 of the proposed ANGN are greatly reduced.. We can confirm that the state variables of ANGN follow those of the nonlinear system more accurately and faster. This is due to the good separation of slow and fast modes which result that the complexity of the architecture sub-networks of ANGN are considerably reduced. We conducted other comparisons of the proposed approach with both the modular approach in [24] and the mixture of experts (ME) approach. The RMS values of the 𝑇𝑒𝑚 are given in Table 2. From Table 2, it is very clear that RMS values of ANGN are much smaller than both the values [24] and ME, which means that the proposed algorithm in this paper can achieve more accurate results. From Figure 3(a) and 3(b), we can see. Therefore the excellent performance of the proposed model in terms of convergence speed and reduction of the optimal RMS error than both the performance in Figure 6 and performance in Figure 7. It is largely due to the small size of the sub-networks and the very limited number of examples presented at the input of each expert. These advantages are the main properties of modularity. 0.05 0.1 0.15 0.2 0.25 0.3 0.35 -2 -1 0 1 2 3 4 Time (second)  (wb)  0.02 0.04 0.06 0.08 0.1 -0.5 0 0.5 x 10 -10 0.05 0.1 0.15 0.2 0.25 0.3 0.35 -2 -1 0 1 2 3 Time (second)  (wb)  0.02 0.04 0.06 0.08 0.1 -0.5 0 0.5 x 10 -10 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 -6 -4 -2 0 2 4 6 Time (second) i  (A) i 0.02 0.04 0.06 0.08 0.1 -0.5 0 0.5 x 10 -10 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 -3 -2 -1 0 1 2 3 Time (second) i  (A) i 0.02 0.04 0.06 0.08 0.1 -1 -0.5 0 0.5 x 10 -11
  • 12. Int J Elec & Comp Eng ISSN: 2088-8708  A Novel Neuroglial Architecture for Modelling Singular Perturbation System (Samia Salah) 4821 Table 2. Comparison performance models Models Number of iterations RMS ref [24] 100 1.08932 E-27 ME 90 7.934 E-24 ANGN 2 17.455 E-30 Figure 6. Evolution of RMS of the ME model Figure 7. Evolution of RMS in ref [24] 7. CONCLUSION In this paper, a novel architecture called artificial neuroglial network (ANGN), inspired by a recently discovered fact in biology, was developed along with a modular algorithm based on the softmax function. Because of its good characteristics, an ASM put in standard singularly perturbed form is chosen to validate the proposed ANGN architecture. The slow and fast reduced models are obtained in two steps: - Application of a geometric approach based on GCT to decouple the slow and fast variables. - Application of the SPM to develop the reduced models. The simulation results related to the learning of the ANGN_ASM model demonstrate the fast and accurate convergence properties of the proposed algorithm. ANGN can modeling ASM very wel with less computational complexity. The next step is to use these reduced models to develop controllers for the ASM. REFERENCES [1] S. Prasad, et al., “Comparison of Accuracy Measures for RS Image Classification using SVM and ANN Classifiers,” International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 7(3), pp. 1180- 1187, 2017. [2] S. R. Borra, et al., “An Efficient Fingerprint Identification using Neural Network and BAT Algorithm,” International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 8(2), pp. 1194-1213, 2018. [3] H. Mohammed and A. Meroufel, “Contribution to the Artifical Neural Network Speed Estimator in a Degraded Mode for Sensor-Less Fuzzy Direct Control of Torque Application Using Dual Stars Induction Machine,” International Journal of Electrical and Computer Engineering (IJECE), vol/issue: 5(4), pp. 729-741, 2015. [4] Z. Mekrini and S. Bri, “High-Performance using Neural Networks in Direct Torque Control for Asynchronous Machine,” International Journal of Electrical & Computer Engineering, vol/issue: 8(2), 2018. [5] P. Kokotovic, et al., “Singular perturbation methods in control: analysis and design,” vol. 25, 1999. [6] D. Naidu, “Singular perturbations and time scales in control theory and applications: an overview,” Dynamics of Continuous Discrete and Impulsive Systems Series B, vol. 9, pp. 233-278, 2002. [7] Z. J. Fu, et al., “Robust on-line nonlinear systems identification using multilayer dynamic neural networks with two-time scales,” Neurocomputing, vol. 113, pp. 16-26, 2013. [8] X. Li and W. Yu, “Dynamic system identification via recurrent multilayer perceptrons,” Information sciences, vol/issue: 147(1-4), pp. 45-63, 2002. [9] C. S. Leung and L. W. Chan, “Dual extended Kalman filtering in recurrent neural networks,” Neural Networks, vol/issue: 16(2), pp. 223-239, 2003. [10] D. D. Zheng, et al., “Indirect adaptive control of nonlinear system via dynamic multilayer neural networks with multi‐time scales,” International Journal of Adaptive Control and Signal Processing, vol/issue: 29(4), pp. 505-523, 2015. [11] D. D. Zheng, et al., “Identification and trajectory tracking control of nonlinear singularly perturbed systems,” IEEE Transactions on Industrial Electronics, vol/issue: 64(5), pp. 3737-3747, 2017. 20 40 60 80 100 0 0.2 0.4 0.6 0.8 x 10 -23 iteration RMS Expert1 Expert2 Expert3 Expert4 the winner expert
  • 13.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 8, No. 6, December 2018 : 4810 - 4822 4822 [12] D. D. Zheng, et al., “Robust identification for singularly perturbed nonlinear systems using multi-time-scale dynamic neural network,” Decision and Control (CDC), 2017 IEEE 56th Annual Conference on, 2017. [13] Astrocytes, “Science & vie,” pp. 67, 2005. [14] A. B. Porto and A. Pazos, “Neuroglial behaviour in computer science,” Artificial neural networks in real-life applications, IGI Global, pp. 1-21, 2006. [15] Y. V. Hote and A. N. Jha, “New approach of Gerschgorin theorem in model order reduction,” International Journal of Modelling and Simulation, vol/issue: 35(3-4), pp. 143-149, 2015. [16] O. Touhami, et al., “Dynamics separation of induction machine models using gerschgorin's circles and singular perturbations,” Electrical and Electronics Engineering, 2004. (ICEEE). 1st International Conference on, 2004. [17] G. D. Tanguy, et al., “Singular perturbation method and reciprocal transformation on two-time scale systems,” Multivariable Control, Springer, pp. 327-342, 1984. [18] H. Guesbaoui and C. Iung, “Multi-time scale modelling in electrical machines,” Systems, Man and Cybernetics, 1993. 'Systems Engineering in the Service of Humans', Conference Proceeding, International Conference on. 1993. [19] E. Ronco, et al., “Modular neural network and self-decomposition,” Connection Science (special issue: Combining Neural Nets), 1996. [20] Y. Bennani, “A modular and hybrid connectionist system for speaker identification,” Neural Computation, vol/issue: 7(4), pp. 791-798, 1995. [21] I. Kirschning, et al., “A parallel recurrent cascade-correlation neural network with natural connectionist glue,” Neural Networks, 1995. Proceedings, IEEE International Conference on, 1995. [22] R. A. Jacobs, et al., “Adaptive mixtures of local experts,” Neural computation, vol/issue: 3(1), pp. 79-87, 1991. [23] H. Drucker, et al., “Boosting and other ensemble methods,” Neural Computation, vol/issue: 6(6), pp. 1289-1301, 1994. [24] S. Salah, et al., “Application New Approach with Two Networks Slow and Fast on the Asynchronous Machine World Academy of Science, Engineering and Technology,” International Journal of Electrical and Computer Engineering, vol/issue: 7(8), pp. 1087-1091, 2013.