Contents
1.1 Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Deterministic Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.1 Simplified Algorithm (SA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.2 Kaczmarz’s Algorithm (KA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2.3 Projection Algorithm (PA) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.4 Batch Least Square (BLS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.5 Weighted Least Square (WLS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.6 Recursive Least Square (RLS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.7 Recursive Least Square with Exponential Forgetting (RLS with λ) . . . . . . . . . . . . 4
1.2.8 Recursive Least Square with Varying Exponential Forgetting
(RLS with varying λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Stochastic Parameter Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 Extended Recursive Least Square (ERLS) . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.2 Extended Recursive Least Square with Exponential Forgetting
(ERLS with λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3.3 Extended Recursive Least Square with Varying Exponential Forgetting
(ERLS with varying λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3.4 Modified Extended Recursive Least Square (MERLS) . . . . . . . . . . . . . . . . . . . 6
1.3.5 Modified Extended Recursive Least Square with Exponential Forgetting
(MERLS with λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.6 Modified Extended Recursive Least Square with Varying Exponential
Forgetting (MERLS with varying λ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 MATLAB Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.6 Contacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.1. PARAMETER ESTIMATION Adaptive Control
1.1 Parameter Estimation
On-line determination of process parameters is a key element in adaptive control. A recursive parameter
estimator appears explicitly as a component of a self-tuning regulator. Parameter estimation also occurs
implicitly in a model-reference adaptive controller. This section presents some methods for real-time parameter
estimation. It is useful to view parameter estimation in the broader context of system identification. The key
elements of system identification are selection of model structure, experiment design, parameter estimation,
and validation. Since system identification is executed automatically in adaptive systems, it is essential to
have a good understanding of all aspects of the problem. Selection of model structure and parameterization are
fundamental issues. Simple transfer function models will be used in this chapter. The identification problems
are simplified significantly if the models are linear in the parameters.
This chapter treats parameter estimation of discrete-time linear transfer-function models using linear estimation methods.
1.2 Deterministic Parameter Estimation
G(z^-1) = z^-d B(z^-1) / A(z^-1)

B(z^-1) = b0 + b1 z^-1 + ... + b_nb z^-nb
A(z^-1) = 1 + a1 z^-1 + ... + a_na z^-na

y(k) = φ^T(k) θ̂(k-1)

φ^T(k) = [-y(k-1)  -y(k-2)  ...  -y(k-na)   u(k-d)  u(k-d-1)  ...  u(k-d-nb)]

θ̂(k-1) = [a1  a2  ...  a_na   b0  b1  ...  b_nb]^T

number of unknowns = nu = na + nb + 1
n = max(na, nb + d)
number of sampled data = N > nu + n - 1

NOTE: Estimation starts from time n + 1.
1.2.1 Simplified Algorithm (SA)
φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)]

θ̂(k+1) = θ̂(k) + [φ(k+1) / (φ^T(k+1) φ(k+1))] [y(k+1) - φ^T(k+1) θ̂(k)]
1.2.2 Kaczmarz’s Algorithm (KA)
φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)]

θ̂(k+1) = θ̂(k) + [γ φ(k+1) / (φ^T(k+1) φ(k+1))] [y(k+1) - φ^T(k+1) θ̂(k)]

0 < γ < 2
Mohamed Mohamed El-Sayed Atyya Page 2 of 8
1.2. DETERMINISTIC PARAMETER ESTIMATION Adaptive Control
1.2.3 Projection Algorithm (PA)
φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)]

θ̂(k+1) = θ̂(k) + [γ φ(k+1) / (α + φ^T(k+1) φ(k+1))] [y(k+1) - φ^T(k+1) θ̂(k)]

α ≥ 0 ,  0 < γ < 2
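The projection-algorithm update is only a few lines in practice. The author's MATLAB implementations are linked in Section 1.4; the following is an illustrative Python/NumPy sketch (not the original code), assuming a hypothetical toy first-order system with na = 1, nb = 0, d = 1:

```python
import numpy as np

def projection_step(theta, phi, y, gamma=1.0, alpha=0.01):
    """One projection-algorithm (PA) update:
    theta <- theta + gamma*phi/(alpha + phi.phi) * (y - phi.theta)."""
    err = y - phi @ theta                      # prediction error
    return theta + gamma * phi / (alpha + phi @ phi) * err

# Toy example: estimate a1 and b0 of y(k) = -a1*y(k-1) + b0*u(k-1)
rng = np.random.default_rng(0)
a1, b0 = -0.8, 0.5                             # true parameters
u = rng.standard_normal(200)
y = np.zeros(201)
for k in range(1, 201):
    y[k] = -a1 * y[k-1] + b0 * u[k-1]

theta = np.zeros(2)                            # estimate of [a1, b0]
for k in range(1, 201):
    phi = np.array([-y[k-1], u[k-1]])          # regressor phi(k)
    theta = projection_step(theta, phi, y[k])
print(theta)                                   # should approach [a1, b0]
```

With γ = 1 and α = 0 the step reduces to Kaczmarz's update; α > 0 guards the division when φ is close to zero.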
1.2.4 Batch Least Square (BLS)
ψ = [ -y(k-1)   ...  -y(k-na)     u(k-d)     ...  u(k-d-nb)
      -y(k)     ...  -y(k-na+1)   u(k-d+1)   ...  u(k-d-nb+1)
        ⋮              ⋮              ⋮               ⋮
      -y(N-1)   ...  -y(N-na)     u(N-d)     ...  u(N-d-nb) ]

Y = [y(k)  y(k+1)  ...  y(N)]^T

∴ θ = (ψ^T ψ)^-1 ψ^T Y
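As a sketch of the batch computation, the following Python/NumPy fragment (an illustrative translation, not the MATLAB code linked in Section 1.4) builds ψ and Y for an assumed toy first-order system (na = 1, nb = 0, d = 1) and solves the normal equations:

```python
import numpy as np

# Simulate y(k) = -a1*y(k-1) + b0*u(k-1), noise-free
rng = np.random.default_rng(1)
a1, b0 = -0.8, 0.5
u = rng.standard_normal(100)
y = np.zeros(101)
for k in range(1, 101):
    y[k] = -a1 * y[k-1] + b0 * u[k-1]

# Stack regressor rows [-y(k-1), u(k-1)] into psi, measurements into Y,
# then theta = (psi^T psi)^-1 psi^T Y, computed via lstsq for robustness.
psi = np.column_stack([-y[0:100], u[0:100]])
Y = y[1:101]
theta, *_ = np.linalg.lstsq(psi, Y, rcond=None)
print(theta)   # recovers [a1, b0] to machine precision (noise-free data)
```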
1.2.5 Weighted Least Square (WLS)
θ = (ψ^T W ψ)^-1 ψ^T W Y

where W is a diagonal matrix with w_ii = γ^(N-n-i-1) and γ < 1.
1.2.6 Recursive Least Square (RLS)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)]

K(k+1) = P(k) φ(k+1) [I + φ^T(k+1) P(k) φ(k+1)]^-1

ε(k+1) = y(k+1) - φ^T(k+1) θ̂(k)

θ̂(k+1) = θ̂(k) + K(k+1) ε(k+1)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k)

P(k+1) = [P(k+1) + P^T(k+1)] / 2    (symmetrization)
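The RLS recursion above, including the symmetrization of P, can be sketched as follows in Python/NumPy (illustrative only; the toy first-order system and the helper name `rls_step` are assumptions, not part of the original notes):

```python
import numpy as np

def rls_step(theta, P, phi, y):
    """One recursive-least-squares update; the gain denominator is scalar
    here because the output is scalar."""
    K = P @ phi / (1.0 + phi @ P @ phi)        # gain K(k+1)
    eps = y - phi @ theta                      # prediction error
    theta = theta + K * eps
    P = (np.eye(len(theta)) - np.outer(K, phi)) @ P
    P = (P + P.T) / 2                          # enforce symmetry of P
    return theta, P

rng = np.random.default_rng(2)
a1, b0 = -0.8, 0.5
u = rng.standard_normal(300)
y = np.zeros(301)
for k in range(1, 301):
    y[k] = -a1 * y[k-1] + b0 * u[k-1]

theta = np.zeros(2)
P = 1e6 * np.eye(2)                            # P(0) = 10^6 I, as above
for k in range(1, 301):
    phi = np.array([-y[k-1], u[k-1]])
    theta, P = rls_step(theta, P, phi, y[k])
print(theta)
```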
1.2.7 Recursive Least Square with Exponential Forgetting (RLS with λ)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)]

K(k+1) = P(k) φ(k+1) [λ + φ^T(k+1) P(k) φ(k+1)]^-1

ε(k+1) = y(k+1) - φ^T(k+1) θ̂(k)

θ̂(k+1) = θ̂(k) + K(k+1) ε(k+1)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k) / λ

P(k+1) = [P(k+1) + P^T(k+1)] / 2
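A Python/NumPy sketch of the forgetting-factor variant, under the same hypothetical toy-system assumptions as before, shows why λ < 1 matters: the estimator tracks a parameter that jumps mid-experiment:

```python
import numpy as np

def rls_forget(theta, P, phi, y, lam=0.95):
    """RLS with exponential forgetting: lambda enters the gain
    denominator and the covariance update P <- (I - K phi^T) P / lambda."""
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (np.eye(len(theta)) - np.outer(K, phi)) @ P / lam
    return theta, (P + P.T) / 2

# y(k) = -a1*y(k-1) + b0*u(k-1) with a1 = -0.8; b0 jumps 0.5 -> 1.0 at k=200
rng = np.random.default_rng(4)
u = rng.standard_normal(400)
y = np.zeros(401)
for k in range(1, 401):
    b0 = 0.5 if k < 200 else 1.0
    y[k] = 0.8 * y[k-1] + b0 * u[k-1]

theta, P = np.zeros(2), 1e6 * np.eye(2)
for k in range(1, 401):
    theta, P = rls_forget(theta, P, np.array([-y[k-1], u[k-1]]), y[k])
print(theta)   # tracks [a1, b0] = [-0.8, 1.0] after the jump
```

Plain RLS (λ = 1) would average over both regimes; with λ = 0.95 the effective memory is roughly 1/(1-λ) = 20 samples, so old data is forgotten.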
1.2.8 Recursive Least Square with Varying Exponential Forgetting
(RLS with varying λ)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

λ(k) ∈ [0.3, 0.999]

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)]

K(k+1) = P(k) φ(k+1) [λ(k) + φ^T(k+1) P(k) φ(k+1)]^-1

ε(k+1) = y(k+1) - φ^T(k+1) θ̂(k)

θ̂(k+1) = θ̂(k) + K(k+1) ε(k+1)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k) / λ(k)

P(k+1) = [P(k+1) + P^T(k+1)] / 2

λ(k+1) = 1 - [1 - φ^T(k+1) K(k+1)] ε^2(k+1) / (σ^2(ε) µ(ε))

NOTE: Begin updating λ after n steps.
1.3 Stochastic Parameter Estimation
y(k) = [z^-d B(z^-1) / A(z^-1)] u(k) + [C(z^-1) / A(z^-1)] e(k) = φ^T(k) θ̂(k-1)

where e(k) is a white-noise disturbance.

B(z^-1) = b0 + b1 z^-1 + ... + b_nb z^-nb
A(z^-1) = 1 + a1 z^-1 + ... + a_na z^-na
C(z^-1) = 1 + c1 z^-1 + ... + c_nc z^-nc

θ̂(k-1) = [a1  a2  ...  a_na   b0  b1  ...  b_nb   c1  c2  ...  c_nc]^T

number of unknowns = nu = na + nb + nc + 1
n = max(na, nb + nc + d)
number of sampled data = N > nu + n - 1

NOTE: Estimation starts from time n + 1.
Figure 1.1: Noise is applied to the output signal
1.3.1 Extended Recursive Least Square (ERLS)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)
            ε(k)  ε(k-1)  ...  ε(k-nc+1)]

K(k+1) = P(k) φ(k+1) [I + φ^T(k+1) P(k) φ(k+1)]^-1

ε(k+1) = y(k+1) - φ^T(k+1) θ̂(k)

θ̂(k+1) = θ̂(k) + K(k+1) ε(k+1)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k)

P(k+1) = [P(k+1) + P^T(k+1)] / 2
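ERLS differs from RLS only in that past residuals enter the regressor as surrogates for the unmeasured noise e. A Python/NumPy sketch for an assumed toy ARMAX system (na = 1, nb = 0, nc = 1, d = 1; illustrative, not the linked MATLAB code):

```python
import numpy as np

# Toy ARMAX system: y(k) = -a1*y(k-1) + b0*u(k-1) + e(k) + c1*e(k-1)
rng = np.random.default_rng(3)
a1, b0, c1 = -0.8, 0.5, 0.3
u = rng.standard_normal(3000)
e = 0.1 * rng.standard_normal(3001)
y = np.zeros(3001)
for k in range(1, 3001):
    y[k] = -a1 * y[k-1] + b0 * u[k-1] + e[k] + c1 * e[k-1]

theta = np.zeros(3)                 # estimate of [a1, b0, c1]
P = 1e6 * np.eye(3)
eps_prev = 0.0                      # last residual, standing in for e(k-1)
for k in range(1, 3001):
    phi = np.array([-y[k-1], u[k-1], eps_prev])   # extended regressor
    K = P @ phi / (1.0 + phi @ P @ phi)
    eps = y[k] - phi @ theta
    theta = theta + K * eps
    P = (np.eye(3) - np.outer(K, phi)) @ P
    P = (P + P.T) / 2
    eps_prev = eps                  # residual feeds the next regressor
print(theta)
```

The noise-polynomial coefficient c1 converges more slowly than a1 and b0, since its regressor entry is itself an estimate.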
1.3.2 Extended Recursive Least Square with Exponential Forgetting
(ERLS with λ)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)
            ε(k)  ε(k-1)  ...  ε(k-nc+1)]

K(k+1) = P(k) φ(k+1) [λ + φ^T(k+1) P(k) φ(k+1)]^-1

ε(k+1) = y(k+1) - φ^T(k+1) θ̂(k)

θ̂(k+1) = θ̂(k) + K(k+1) ε(k+1)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k) / λ

P(k+1) = [P(k+1) + P^T(k+1)] / 2
1.3.3 Extended Recursive Least Square with Varying Exponential Forgetting
(ERLS with varying λ)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

λ(k) ∈ [0.3, 0.999]

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)
            ε(k)  ε(k-1)  ...  ε(k-nc+1)]

K(k+1) = P(k) φ(k+1) [λ(k) + φ^T(k+1) P(k) φ(k+1)]^-1

ε(k+1) = y(k+1) - φ^T(k+1) θ̂(k)

θ̂(k+1) = θ̂(k) + K(k+1) ε(k+1)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k) / λ(k)

P(k+1) = [P(k+1) + P^T(k+1)] / 2

λ(k+1) = 1 - [1 - φ^T(k+1) K(k+1)] ε^2(k+1) / (σ^2(ε) µ(ε))

NOTE: Begin updating λ after n steps.
1.3.4 Modified Extended Recursive Least Square (MERLS)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)
            ε(k)  ε(k-1)  ...  ε(k-nc+1)]

K(k+1) = P(k) φ(k+1) [I + φ^T(k+1) P(k) φ(k+1)]^-1

ε^-(k+1) = y(k+1) - φ^T(k+1) θ̂(k)        (a priori error)

θ̂(k+1) = θ̂(k) + K(k+1) ε^-(k+1)

ε^+(k+1) = y(k+1) - φ^T(k+1) θ̂(k+1)      (a posteriori error)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k)

P(k+1) = [P(k+1) + P^T(k+1)] / 2
1.3.5 Modified Extended Recursive Least Square with Exponential Forgetting
(MERLS with λ)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)
            ε(k)  ε(k-1)  ...  ε(k-nc+1)]

K(k+1) = P(k) φ(k+1) [λ + φ^T(k+1) P(k) φ(k+1)]^-1

ε^-(k+1) = y(k+1) - φ^T(k+1) θ̂(k)        (a priori error)

θ̂(k+1) = θ̂(k) + K(k+1) ε^-(k+1)

ε^+(k+1) = y(k+1) - φ^T(k+1) θ̂(k+1)      (a posteriori error)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k) / λ

P(k+1) = [P(k+1) + P^T(k+1)] / 2
1.3.6 Modified Extended Recursive Least Square with Varying Exponential
Forgetting (MERLS with varying λ)
P(0) = 10^6 I , or P(0) = (ψ^T ψ)^-1 ,  size(P) = nu × nu

λ(k) ∈ [0.3, 0.999]

φ^T(k+1) = [-y(k)  -y(k-1)  ...  -y(k-na+1)   u(k-d+1)  u(k-d)  ...  u(k-d-nb+1)
            ε(k)  ε(k-1)  ...  ε(k-nc+1)]

K(k+1) = P(k) φ(k+1) [λ(k) + φ^T(k+1) P(k) φ(k+1)]^-1

ε^-(k+1) = y(k+1) - φ^T(k+1) θ̂(k)        (a priori error)

θ̂(k+1) = θ̂(k) + K(k+1) ε^-(k+1)

ε^+(k+1) = y(k+1) - φ^T(k+1) θ̂(k+1)      (a posteriori error)

P(k+1) = [I - K(k+1) φ^T(k+1)] P(k) / λ(k)

P(k+1) = [P(k+1) + P^T(k+1)] / 2

λ^-(k+1) = 1 - [1 - φ^T(k+1) K(k+1)] (ε^-)^2(k+1) / (σ^2(ε^-) µ(ε^-))

λ^+(k+1) = 1 - [1 - φ^T(k+1) K(k+1)] (ε^+)^2(k+1) / (σ^2(ε^+) µ(ε^+))

σ^- = σ^2(ε^-) / [σ^2(ε^-) + σ^2(ε^+)]

σ^+ = σ^2(ε^+) / [σ^2(ε^-) + σ^2(ε^+)]

λ(k+1) = σ^- λ^-(k+1) + σ^+ λ^+(k+1)

NOTE: Begin updating λ after n steps.
1.4 MATLAB Codes
1.2.1 http://goo.gl/Vddvtt
1.2.2 http://goo.gl/UwWFTW
1.2.3 http://goo.gl/tV4Ni6
1.2.4 http://goo.gl/rY2n7I
1.2.6 http://goo.gl/e7J2kq
1.2.7 http://goo.gl/3q6Yc6
1.2.8 http://goo.gl/SCPvEW
1.3.1 http://goo.gl/JnrdNh
1.3.2 http://goo.gl/xjpHha
1.3.3 http://goo.gl/6wVeuW
1.3.4 http://goo.gl/vuKeaL
1.3.5 http://goo.gl/mL0RCz
1.3.6 http://goo.gl/vzViYE
1.5 References
1. Karl Johan Åström and Björn Wittenmark, Adaptive Control, 2nd Edition.
2. David I. Wilson, Advanced Control using MATLAB or Stabilising the unstabilisable, Auckland University of Technology, New Zealand, May 15, 2015.
1.6 Contacts
mohamed.atyya94@eng-st.cu.edu.eg
