1
Tracking Systems
SOLO HERMELIN
Updated: 12.10.09
http://www.solohermelin.com
2
Tracking Systems
Table of Contents
Chi-square Distribution
Innovation in Kalman Filter
Kalman Filter
Linear Gaussian Markov Systems
Recursive Bayesian Estimation
Target Acceleration Models
General Problem
Evaluation of Kalman Filter Consistency
Innovation in Tracking Systems
Terminology
Functional Diagram of a Tracking System
Filtering and Prediction
Target Models as Markov Processes
Estimation for Static Systems
Information Kalman Filter
Target Estimators
Sensors
3
Tracking Systems
Table of Contents (continue – 1)
The Cramér-Rao Lower Bound (CRLB) on the Variance of the Estimator
Nonlinear Estimation (Filtering)
Extended Kalman Filter
Additive Gaussian Nonlinear Filter
Gauss – Hermite Quadrature Approximation
Unscented Kalman Filter
Gating and Data Association
Optimal Correlation of Sensor Data with Tracks on
Surveillance Systems (R.G. Sea, Hughes, 1973)
Gating
Nearest-Neighbor Standard Filter
Global Nearest-Neighbor (GNN) Algorithms
Suboptimal Bayesian Algorithm: The PDAF
Non-Additive Non-Gaussian Nonlinear Filter
Nonlinear Estimation Using Particle Filters
4
Tracking Systems
Table of Contents (continue – 2)
Track Life Cycle (Initialization, Maintenance & Deletion)
Filters for Maneuvering Target Detection
The Hybrid Model Approach
No Switching Between Models During the Scenario
Switching Between Models During the Scenario
The Interacting Multiple Model (IMM) Algorithm
The IMM-PDAF Algorithm
The IPDAF Algorithm
Multi-Target Tracking (MTT) Systems
Joint Probabilistic Data Association Filter (JPDAF)
Multi-Sensor Estimate
Track-to-Track of Two Sensors, Correlation and Fusion
Issues in Multi – Sensor Data Fusion
References
Multiple Hypothesis Tracking (MHT)
5
General Problem
[Figure: Inertial (I), Earth (E), and Local-Level Local-North (xNorth, yEast, zDown) coordinate frames, with Latitude (Lat) and Longitude (Long) angles, the Target (T, object) and the Platform (P, sensor) carrying its own frame (xP, yP, zP).]
Provide information on the position and direction of movement (including estimated
errors) of uncooperative objects, to differently located users.
To perform this task a common coordinate system is used.
Example: In the Earth's neighborhood the Local-Level Local-North coordinate system
(Latitude, Longitude, Height above Sea Level) can be used to specify the position
and direction of motion of all objects.
The information is gathered by sensors that are carried by platforms (P) that can be
static or moving (earth vehicles, aircraft, missiles, satellites, …) relative to the
predefined coordinate system. It is assumed that the platforms' positions and
velocities, including their errors, are known and can be used for this task:
$$\left(Lat,\ Long,\ H_{SeaLevel}\right)_{Sensor},\qquad \left(\delta Lat,\ \delta Long,\ \delta H_{SeaLevel}\right)_{Sensor}$$
$$\left(V_{North},\ V_{East},\ V_{Down}\right)_{Sensor},\qquad \left(\delta V_{North},\ \delta V_{East},\ \delta V_{Down}\right)_{Sensor}$$
The objects' (T) positions and velocities are obtained by combining the information of
object-to-sensor relative positions and velocities and their errors with the information
of the sensor platforms' positions and velocities and their errors.
6
General Problem
[Figure: Platform (P) body frame (xB, yB, zB) and local-level frame (xL, yL, zL); the relative range vector R to the Target, defined by the sensor Azimuth (Az) and Elevation (El) angles; Target velocity V_T and Platform velocity V_P.]
Assume that the platform sensor measures continuously and without error, in platform
coordinates, the object (Target, T) and platform positions and velocities.
The relative position vector R is defined by three independent parameters.
A possible choice of those parameters is:
$$\vec R^{P}=\begin{bmatrix}R_x\\ R_y\\ R_z\end{bmatrix}^{P}
=\begin{bmatrix}\cos Az&-\sin Az&0\\ \sin Az&\cos Az&0\\ 0&0&1\end{bmatrix}
\begin{bmatrix}\cos El&0&\sin El\\ 0&1&0\\ -\sin El&0&\cos El\end{bmatrix}
\begin{bmatrix}R\\ 0\\ 0\end{bmatrix}
=R\begin{bmatrix}\cos Az\,\cos El\\ \sin Az\,\cos El\\ -\sin El\end{bmatrix}$$
R - Range from platform to object
Az - Sensor Azimuth angle relative to platform
El - Sensor Elevation angle relative to platform
Rotation Matrix from LLLN to P (3-2-1 Euler Angles):
$$C_L^P=\begin{bmatrix}
\cos\theta\cos\psi&\cos\theta\sin\psi&-\sin\theta\\
\sin\phi\sin\theta\cos\psi-\cos\phi\sin\psi&\sin\phi\sin\theta\sin\psi+\cos\phi\cos\psi&\sin\phi\cos\theta\\
\cos\phi\sin\theta\cos\psi+\sin\phi\sin\psi&\cos\phi\sin\theta\sin\psi-\sin\phi\cos\psi&\cos\phi\cos\theta
\end{bmatrix}$$
ψ - azimuth angle, θ - pitch angle, φ - roll angle
7
General Problem
Assume that the platform sensor measures continuously and without error, in platform
coordinates, the object (Target, T) and platform (P) positions and velocities.
The origin of the LLLN coordinate system is located at the projection of the center of
gravity (CG) of the platform on the Earth surface, with the zDown axis pointing down,
the xNorth-yEast plane parallel to the local level, xNorth pointing to the local North
and yEast pointing to the local East.
The platform is located at:
Latitude = Lat, Longitude = Long, Height = H
Rotation Matrix from E to L:
$$C_E^L\left(Lat,Long\right)=R_2\!\left(-\left(Lat+\tfrac{\pi}{2}\right)\right)R_3\!\left(Long\right)
=\begin{bmatrix}
-\sin Lat\,\cos Long&-\sin Lat\,\sin Long&\cos Lat\\
-\sin Long&\cos Long&0\\
-\cos Lat\,\cos Long&-\cos Lat\,\sin Long&-\sin Lat
\end{bmatrix}$$
The earth radius is $R_{pB}=R_0\left(1-e\sin^2 Lat\right)$, with $R_0=6.378135\times10^6\,m$ and $e=1/298.26$.
The position of the platform in E coordinates is
$$\vec R_B^E=\left(R_{pB}+H\right)\begin{bmatrix}\cos Lat\,\cos Long\\ \cos Lat\,\sin Long\\ \sin Lat\end{bmatrix}$$
8
General Problem
The position of the target (T) in E coordinates is
$$\vec R_T^E=\begin{bmatrix}R_{ET_x}\\ R_{ET_y}\\ R_{ET_z}\end{bmatrix}
=\left(R_{pT}+H_T\right)\begin{bmatrix}\cos Lat_T\,\cos Long_T\\ \cos Lat_T\,\sin Long_T\\ \sin Lat_T\end{bmatrix}$$
The position of the platform (P) in E coordinates is
$$\vec R_B^E=\left(R_{pB}+H\right)\begin{bmatrix}\cos Lat\,\cos Long\\ \cos Lat\,\sin Long\\ \sin Lat\end{bmatrix}$$
The position of the target (T) relative to the platform (P) in E coordinates is
$$\Delta\vec R^E=C_L^E\,C_P^L\,\vec R^P=\left(C_E^L\right)^T\left(C_L^P\right)^T\vec R^P$$
The position of the target (T) in E coordinates is
$$\vec R_T^E=\begin{bmatrix}R_{ET_x}\\ R_{ET_y}\\ R_{ET_z}\end{bmatrix}=\vec R_B^E+\Delta\vec R^E$$
Since the relation to target latitude Lat_T, longitude Long_T and height H_T is given by the formula above, we have
$$Lat_T=\tan^{-1}\frac{R_{ET_z}}{\sqrt{R_{ET_x}^2+R_{ET_y}^2}},\qquad
Long_T=\sin^{-1}\frac{R_{ET_y}}{\left(R_{pT}+H_T\right)\cos Lat_T}$$
$$R_{pT}+H_T=\left(R_{ET_x}^2+R_{ET_y}^2+R_{ET_z}^2\right)^{1/2},\qquad
R_{pT}=R_0\left(1-e\sin^2 Lat_T\right)$$
9
General Problem
Assume that the platform sensor measures continuously and without error, in platform (P)
coordinates, the object (Target, T) and platform positions and velocities.
Therefore the velocity vector of the object (T) relative to the platform (P) can be
obtained by direct differentiation of the relative range $\vec R$:
$$\vec V_{TP}=\frac{d\vec R}{dt}\bigg|_P=\frac{d\vec R}{dt}\bigg|_I-\vec\omega_{IP}\times\vec R=\vec V_T-\vec V_P-\vec\omega_{IP}\times\vec R$$
$$\vec V_T=\frac{d\vec R}{dt}\bigg|_P+\vec\omega_{IP}\times\vec R+\vec V_P$$
$\vec\omega_{IP}$ - Angular Rate vector of the Platform (P) relative to inertia (measured by its INS)
$\vec V_P$ - Platform (P) Velocity vector (measured by its INS)
$\vec V_T$ - Target (T) Velocity vector, computed as above
$\frac{d\vec R}{dt}\big|_P$ - Differentiation of the vector $\vec R$ in Platform (P) coordinates
[Figure: relative range vectors R(t1), R(t2), R(t3) between platform and target at successive times t1, t2, t3.]
10
General Problem
[Figure: real vs. estimated trajectory; measurement events at t_k, t_{k+1}, …; predicted errors P(k+1|k), P(k+2|k+1) and updated errors P(k+1|k+1), P(k+2|k+2).]
The platform sensors measure at discrete times and with measurement errors.
It may happen that no data (no target detection) is obtained at a measurement event.
Therefore it is necessary to estimate the target trajectory parameters and their errors
from the measurement events, and to predict them between measurement events.
t_k - times of measurements (k = 0, 1, 2, …)
z(x, t_k) - sensor measurements
x(t) - parameters of the real trajectory at time t
x̂(t) - predicted parameters of the trajectory at time t
P(t/t_k) - predicted parameter errors at time t (t_k < t < t_{k+1})
P(t_k/t_k) - updated parameter errors at measurement time t_k
[Diagram: measurements z(x, t_k) feed a Filter (Estimator/Predictor), which outputs x̂(t) and P(t/t_k).]
11
General Problem
The problem is more complicated when there are Multiple Targets. In this case we must
determine which measurement is associated with which target. This is done before
filtering.
[Figure: Multi-Target case — measurements z_1(x, t_k), z_2(x, t_k), z_3(x, t_k), predicted measurements ẑ_i(t_k|t_{k-1}) and gates S_i(t_k|t_{k-1}) for targets i = 1…3; a Data Association block routes the measurements z_1(t)…z_N(t) to per-target Filters (Estimator/Predictor), each producing x̂_i(t) and P_i(t/t_k).]
12
General Problem
If more Sensors are involved, Sensor Data Fusion can improve the performance.
In this case we have a Multi-Sensor Multi-Target situation.
[Figure: Multi-Sensor Multi-Target scenario — a 1st Sensor (airborne) and a 2nd Sensor (Ground Radar, connected via Data Link) each produce measurements z_i(x, t_k), predicted measurements ẑ_i(t_k|t_{k-1}) and gates S_i(t_k|t_{k-1}) for targets 1…3.]
[Figure: Sensor-level Fusion — each Sensor (Transducer, Feature Extraction, Target Classification, Identification, and Tracking) sends Target Reports to a Fusion Processor (Associate, Correlate, Track, Estimate, Classify, Cue), which returns Cues to the Sensors.]
To perform this task we must perform Alignment of the Sensors Data in Time
(synchronization) and in Space (for example using GPS, which provides accurate time & position).
13
General Problem
Terminology
Sensor: a device that observes the (remote) environment by reception of some signals (energy)
Frame or Scan: “snapshot” of region of the environment obtained by the sensor at a point in time,
called the sampling time.
Signal Processing: processing of the sensor data to provide measurements
Target Detection: this is done by Signal Processing by "detecting" target characteristics,
comparing them with a threshold and deleting "false targets" (false alarms).
Those capabilities are defined by the Probability of Detection PD and
the Probability of False Alarm PFA.
Measurement Extraction: the final stage of Signal Processing, which generates a measurement.
Time stamp: the time to which a detection/measurement pertains.
Registration: alignment (space & time) of two or more sensors or alignment of a moving sensor
data from successive sampling times so that their data can be combined.
Track formation (or track assembly, target acquisition, measurement to measurement
association, scan to scan association): detection of a target (processing of measurements
from a number of sampling times to determine the presence of a target) and initialization of its
track (determination of the initial estimate of its state).
14
General Problem
Terminology (continue – 1)
Tracking filter: state estimator of a target.
Data association: process of establishing which measurement (or weighted combination of
measurements) to be used in a state estimator.
Track continuation (maintenance or updating): association and incorporation of
measurements from a sampling time into a track filter.
Cluster tracking: tracking of a set of nearby targets as a group rather than as individuals.
Return to Table of Content
15
General Problem
Return to Table of Content
[Figure: Functional Diagram of a Tracking System — Input Data → Sensor Data Processing and Measurement Formation → Observation-to-Track Association → Track Maintenance (Initialization, Confirmation and Deletion) → Filtering and Prediction → Gating Computations.]
Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986
Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999
Functional Diagram of a Tracking System
A Tracking System performs the following functions:
• Sensors Data Processing and Measurement
Formation that provides Targets Data
• Observation-to-Track Association
that relates Target Detected Data
to Existing Track Files.
• Track Maintenance (Initialization,
Confirmation and Deletion) of the
Targets Detected by the Sensors.
• Filtering and Prediction, which for each Track processes the Data associated with the
Track, filters the Target State (Position, and possibly Velocity and Acceleration) from
Noise, and predicts the Target State and Errors (Covariance Matrix) at the next
Sensor Measurement.
• Gating Computations, which use the Predicted Target State to provide the Gating that
distinguishes the Measurement from the Target of the specific Track File from other
Targets detected by the Sensors.
16
SENSORS
Introduction
Classification of Sensors by the type of energy they use for sensing:
We deal with sensors used for target detection, identification,
acquisition and tracking, seekers for missile guidance.
• Electromagnetic Effect, distinguished by EM frequency:
- Micro-Wave and Millimeter Wave Radars
- Electro-Optical:
* Visible
* IR
* Laser
• Acoustic Systems
Classification of Sensors by the source of energy they use for sensing:
• Passive, where the source of energy is in the objects that are sensed
Example: Visible, IR, Acoustic Systems
• Semi-Active, where the source of energy is produced externally to the Sensor and sent
toward the target, which reflects it back to the sensor
Example: Radars, Laser, Acoustic Systems
• Active, where the source of energy is produced by the Sensor itself and sent toward
the target, which reflects it back to the sensor
Example: Radars, Laser, Acoustic Systems
17
SENSORS
Introduction
Classification of Sensors by the Carrying Vehicle:
• Sensors on Ground Fixed Sites
• Human Carriers
• Ground Vehicles
• Ships
• Submarines
• Torpedoes
• Air Vehicles (Aircraft, Helicopters, UAV, Balloons)
• Missiles (Seekers, Active Proximity Fuzes)
• Satellites
Classification of Sensors by the Measurements Type:
• Range and Direction to the Target (Active Sensors)
• Direction to the Target only (Passive and Semi-Active Sensors)
• Imaging of the Object
• Non-Imaging
See “Sensors.ppt” for
a detailed description
18
SENSORS
Introduction
1. Search Phase
Sensor Processes:
In this Phase a search for predefined Targets is performed.
The search is done to cover a predefined (or cued) Space Region.
The Angular Coverage may be performed by
• Scanning (Mechanically/Electronically) the Space Region
(Radar, EO Sensors)
• Steering toward the Space Region (EO Sensors, Sonar)
Radar System can perform also Search in Range and Range-Rate
2. Detection Phase
In this Phase the predefined Target is Detected, extracted from the noise and the
Background using the Target Properties that differentiate it, like:
• Target Intensity (Radar, EO Sensors, Sonars)
• Target Kinematics relative to the Background (Radar, EO Sensors, Sonars)
• Target Shape (EO Sensors, Radar)
The Sensor can use one or a combination of those methods.
There is a probability that a False Target will be detected, therefore two quantities
define the Detection performance:
• Probability of Detection ( ≤ 1 )
• Probability of False Alarm
[State diagram: Search (Search Command) → Detect.]
19
SOLO
Example: Airborne Electronic Scan Antenna
SENSORS
20
SENSORS
Introduction
3. Identification Phase
Sensor Processes (continue – 1):
In this Phase the Target of Interest is differentiated from
other Detected Targets.
4. Acquisition Phase
In this Phase we check that the Detection and Identification
occurred for a number of Search Frames and Initializes the
Track Phase.
5. Track Phase
In this Phase the Sensor will update the History of each Target (Track File),
associating the Data in the present frame to previous Histories. This phase continues
until Target Detection is not available for a predefined number of frames.
[State diagram: Search (Search Command) → Detect → Identify Target → Acquire → Track, with Reacquire and End-of-Track transitions.]
21
SOLO
Properties of Electro-Magnetic Waves
SENSORS
22
SOLO
Generic Airborne Radar Block Diagram
[Figure: Airborne Radar Block Diagram — Antenna Unit with Beam Control (Mechanical or Electronic), T/R (Circulator), XMTR, Receiver with A/D, REF (f0), Digital Signal Processor, Radar Central Computer, Power Supply; interfaces: Pilot Commands, Data to Displays, Aircraft Avionics BUS, Aircraft Power.]
Antenna – Transmits and receives Electromagnetic
Energy
T/R – Isolates between transmitting and receiving
channels
REF – Generates and Controls all Radar frequencies
XMTR – Transmits High Power EM Radar frequencies
RECEIVER – Receives the Returned Radar Power, filters it
and down-converts it to Base Band for
digitization through the A/D.
Digital Signal Processor – Processes the
digitized signal to enhance the Target
of interest versus all others (clutter).
Power Supply – Supplies Power to all Radar components.
Radar Central Computer – Controls all
Radar Units activities, according to Pilot
Commands and Avionics data, and provides
output to Pilot Displays and Avionics.
SENSORS
23
Table of Content
SOLO
24
Radar Antenna
25
Radar Antenna
26
Radar Antenna
27
Radar Antenna
28
SOLO E-O and IR Systems Payloads
See “E-O & IR Systems
Payloads”.ppt for a detailed
presentation
29
SOLO E-O and IR Systems Payloads
0.9 kg, 2.27 kg, 1.06 kg, 0.55 kg
Small, lightweight gimbals which come standard with rich features such as built-in moving maps, geo-pointing and geo-locating. Cloud Cap gimbals are robust and proven with over 300 gimbals sold to date.
Complete with command/control/record software and joystick steering, Cloud Cap gimbals are ideal for
surveillance, inspection, law enforcement, fire fighting, and environmental monitoring. View a
comparison table of specifications for the TASE family of Gimbals.
30
SOLO
RAFAEL LITENING
Multi-Sensor, Multi-Mission Targeting & Navigation Pod
E-O and IR Systems Payloads
31
SOLO
RAFAEL RECCELITE
Real-Time Tactical Reconnaissance System
E-O and IR Systems Payloads
32
SOLO E-O and IR Systems Payloads
33
SEEKERS
SOLO
IR SEEKER COMPONENTS
• Electro-Optical Dome
• Telescope & Optics
• Electro-Optical Detector
• Electronics
• Cooling System
• Gimbal System:
- Gimbal Servo Motors
- Gimbal Angular Sensors
(Potentiometers or Resolvers
or Encoders)
- Telescope Inertial Angular Rates Sensors
• Signal Processing Algorithms
• Image Processing Algorithms
• Seeker Control Logics & Algorithms
[Figure: IR Seeker block diagram — E.O. Dome, Optics Telescope, Detector in Dewar, Detector Electronics & Signal Processing, Image Processing, Seeker Logics & Control, Seeker Servo driving Gimbal & Torquer & Angular Sensor, Rate-Gyro; inputs: Missile Commands & Body Inertial Data; outputs: Estimated LOS Rate, Tracking Errors, Gimbal Angles, Torque Current; Line of Sight (LOS) relative to the Missile centerline and Optical Axis.]
SENSORS
34
Decision/Detection Theory
SOLO
Decision Theory deals with decisions that must be taken with imperfect, noise-
contaminated data.
In Decision Theory the various possible events that can occur are characterized as
Hypotheses. For example, the presence or absence of a signal in a noisy waveform
may be viewed as two alternative mutually exclusive hypotheses.
The object of the Statistical Decision Theory is to formulate a decision rule, that
operates on the received data to decide which hypothesis, among possible hypotheses,
gives the optimal (for a given criterion) decision .
The noise-contaminated data (signal) can be classified as:
• continuous stream of data (voice, images,... )
• discrete-time stream of data (radar, sonar, laser,... )
One other classification of the noise-contaminated data (signal) can be:
• known signals (radar/laser pulses defined by carrier frequency, width, coding,…)
• known signals with random parameters with known statistics.
SENSORS
35
Decision/Detection Theory
SOLO
Hypotheses
H0 – target is not present
H1 – target is present
Binary Detection
p(H0) - probability that the target is not present
p(H1) - probability that the target is present
p(H0|z) - probability that the target is not present and not declared (correct decision)
p(H1|z) - probability that the target is present and declared (correct decision)
Using Bayes' rule:
$$p\left(H_0\right)=\int_Z p\left(H_0|z\right)p\left(z\right)dz,\qquad p\left(H_1\right)=\int_Z p\left(H_1|z\right)p\left(z\right)dz$$
p(z) - probability density of the event z ∈ Z
Since p(z) > 0, the Decision rules are:
$p\left(H_1|z\right)<p\left(H_0|z\right)$ - target is not declared (H0)
$p\left(H_1|z\right)>p\left(H_0|z\right)$ - target is declared (H1)
$$p\left(H_1|z\right)\ \underset{H_0}{\overset{H_1}{\gtrless}}\ p\left(H_0|z\right)$$
SENSORS
36
Decision/Detection Theory
SOLO
Hypotheses: H0 – target is not present; H1 – target is present
Binary Detection
p(H0|z) - probability that the target is not present and not declared (correct decision)
p(H1|z) - probability that the target is present and declared (correct decision)
p(z) - probability density of the event z ∈ Z
The Decision rule is: $p\left(H_1|z\right)\ \underset{H_0}{\overset{H_1}{\gtrless}}\ p\left(H_0|z\right)$
Using again Bayes' rule:
$$p\left(H_1|z\right)=\frac{p\left(z|H_1\right)p\left(H_1\right)}{p\left(z\right)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ p\left(H_0|z\right)=\frac{p\left(z|H_0\right)p\left(H_0\right)}{p\left(z\right)}$$
p(z|H0) - a priori probability density that the target is not present (H0)
p(z|H1) - a priori probability density that the target is present (H1)
Since all probabilities are non-negative:
$$\frac{p\left(z|H_1\right)}{p\left(z|H_0\right)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ \frac{p\left(H_0\right)}{p\left(H_1\right)}$$
SENSORS
37
Decision/Detection Theory
SOLO
Hypotheses: H0 – target is not present; H1 – target is present
Binary Detection
p(z|H1) - a priori probability density that the target is present (likelihood of H1)
p(z|H0) - a priori probability density that the target is absent (likelihood of H0)
PD - probability of detection = probability that the target is present and declared
PFA - probability of false alarm = probability that the target is absent but declared
PM - probability of miss = probability that the target is present but not declared
T - detection threshold
Detection Probabilities
$$P_D=\int_{z_T}^{\infty}p\left(z|H_1\right)dz=1-P_M,\qquad
P_{FA}=\int_{z_T}^{\infty}p\left(z|H_0\right)dz,\qquad
P_M=\int_{-\infty}^{z_T}p\left(z|H_1\right)dz=1-P_D$$
where the threshold z_T is defined by $\dfrac{p\left(z_T|H_1\right)}{p\left(z_T|H_0\right)}=T$.
[Figure: densities p(z|H0) and p(z|H1) versus z, with threshold z_T and the areas PD, PFA and PM.]
Likelihood Ratio Test (LRT):
$$LR=\frac{p\left(z|H_1\right)}{p\left(z|H_0\right)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ \frac{p\left(H_0\right)}{p\left(H_1\right)}=T$$
SENSORS
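As a concrete illustration of these definitions, the sketch below evaluates PD and PFA for the classical case of a known signal level in Gaussian noise, where p(z|H0) = N(0, σ²) and p(z|H1) = N(s, σ²); the signal level s and noise σ are illustrative assumptions, not values from the slides.

```python
import numpy as np
from scipy.stats import norm

# Assumed scalar Gaussian detection problem (illustrative values):
#   H0: z ~ N(0, sigma^2)   target absent
#   H1: z ~ N(s, sigma^2)   target present
s, sigma = 2.0, 1.0

def pd_pfa(z_T):
    """P_D and P_FA for a threshold test z >< z_T."""
    p_fa = norm.sf(z_T, loc=0.0, scale=sigma)   # area of p(z|H0) above z_T
    p_d = norm.sf(z_T, loc=s, scale=sigma)      # area of p(z|H1) above z_T
    return p_d, p_fa

# Sweep the threshold to trace the ROC curve (P_D vs P_FA).
for z_T in np.linspace(-1.0, 4.0, 6):
    p_d, p_fa = pd_pfa(z_T)
    print(f"z_T={z_T:5.2f}  P_D={p_d:.3f}  P_FA={p_fa:.3f}")
```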
38
Decision/Detection Theory
SOLO
Hypotheses: H0 – target is not present; H1 – target is present
Binary Detection
Decision Criteria for the Definition of the Threshold T
Likelihood Ratio Test (LRT): $LR=\dfrac{p\left(z|H_1\right)}{p\left(z|H_0\right)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ T$
1. Bayes Criterion
The optimal choice of the threshold for the Likelihood Ratio is
$$T_{Bayes}=\frac{p\left(H_0\right)}{p\left(H_1\right)}$$
This choice assumes knowledge of p(H0) and p(H1), which in general are not known a priori.
2. Maximum Likelihood Criterion
Since p(H0) and p(H1) are not known a priori, we choose TML = 1:
$$\frac{p\left(z|H_1\right)}{p\left(z|H_0\right)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ T_{ML}=1$$
SENSORS
39
Decision/Detection Theory
SOLO
Hypotheses: H0 – target is not present; H1 – target is present
Binary Detection
Decision Criteria for the Definition of the Threshold T (continue)
3. Neyman-Pearson Criterion (Jerzy Neyman, 1894-1981; Egon Sharpe Pearson, 1895-1980)
Neyman and Pearson chose to optimize the probability of detection PD while keeping the
probability of false alarm PFA constant:
$$\max_{z_T}P_D=\max_{z_T}\int_{z_T}^{\infty}p\left(z|H_1\right)dz\qquad\text{constrained to}\quad P_{FA}=\int_{z_T}^{\infty}p\left(z|H_0\right)dz=\alpha$$
Let us use a Lagrange multiplier λ to adjoin the constraint:
$$\max_{z_T}G=\max_{z_T}\left[\int_{z_T}^{\infty}p\left(z|H_1\right)dz+\lambda\left(\int_{z_T}^{\infty}p\left(z|H_0\right)dz-\alpha\right)\right]$$
The maximum is obtained for:
$$\frac{\partial G}{\partial z_T}=-p\left(z_T|H_1\right)-\lambda\,p\left(z_T|H_0\right)=0\quad\Rightarrow\quad\frac{p\left(z_T|H_1\right)}{p\left(z_T|H_0\right)}=-\lambda=:T_{NP}$$
where z_T is defined by requiring that $P_{FA}=\int_{z_T}^{\infty}p\left(z|H_0\right)dz=\alpha$.
SENSORS
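A minimal sketch of a Neyman-Pearson design for the same assumed Gaussian problem: fix PFA = α, solve for the threshold z_T from p(z|H0), and read off the resulting PD. The values of s, sigma and alpha are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

s, sigma, alpha = 2.0, 1.0, 1e-2   # assumed signal, noise, and P_FA budget

# Neyman-Pearson: choose z_T so the area of p(z|H0) above z_T equals alpha.
z_T = norm.isf(alpha, loc=0.0, scale=sigma)
p_d = norm.sf(z_T, loc=s, scale=sigma)   # detection probability at that threshold
print(f"z_T={z_T:.3f}  P_FA={alpha:.1e}  P_D={p_d:.3f}")
```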
40
SOLO
Return to Table of Content
Filtering and Prediction
[Figure: Functional Diagram of a Tracking System (as above), with the Filtering and Prediction block highlighted; references: Blackman 1986, Blackman & Popoli 1999.]
• Filtering and Prediction, which for each Track processes the Data associated with the
Track, filters the Target State (Position, and possibly Velocity and Acceleration) from
Noise, and predicts the Target State and Errors (Covariance Matrix) at the next
Sensor Measurement.
[Figure: real vs. estimated trajectory with measurement events, predicted errors P(k+1|k) and updated errors P(k+1|k+1).]
41
SOLO
Discrete Filter/Predictor Architecture
[Block diagram: state x(k) at t_k; Controller provides u(k); transition to t_{k+1}: x(k+1) = F(k)x(k) + G(k)u(k) + v(k); measurement at t_{k+1}: z(k+1) = H(k+1)x(k+1) + w(k+1).]
The discrete representation of the system is given by
$$x\left(k+1\right)=F\left(k\right)x\left(k\right)+G\left(k\right)u\left(k\right)+v\left(k\right)$$
$$z\left(k+1\right)=H\left(k+1\right)x\left(k+1\right)+w\left(k+1\right)$$
x(k) - system state vector
u(k) - system control input
v(k) - unknown system dynamics, assumed white Gaussian
w(k) - measurement noise, assumed white Gaussian
k - discrete time counter
Filtering and Prediction
42
SOLO
Discrete Filter/Predictor Architecture (continue – 1)
1. The output of the Filter/Predictor can be at a higher rate than the input
(measurements): T_measurement = m · T_output, m integer.
2. Between measurements it performs State Prediction:
$$\hat x\left(k+1|k\right)=F\left(k\right)\hat x\left(k|k\right)+G\left(k\right)u\left(k\right)$$
$$\hat z\left(k+1|k\right)=H\left(k+1\right)\hat x\left(k+1|k\right)$$
3. At measurements it performs the State Update:
$$\nu\left(k+1\right)=z\left(k+1\right)-\hat z\left(k+1|k\right)\qquad\text{(Innovation)}$$
$$\hat x\left(k+1|k+1\right)=\hat x\left(k+1|k\right)+K\left(k+1\right)\nu\left(k+1\right)$$
K(k) – Filter Gain
[Block diagram: the true-state evolution and controller feed the measurement z(k+1); the estimator propagates x̂(k|k) → x̂(k+1|k), predicts ẑ(k+1|k), forms the innovation ν(k+1), and updates x̂(k+1|k+1) with gain K(k+1).]
Filtering and Prediction
43
SOLO
Discrete Filter/Predictor Architecture (continue – 2)
The way the Filter Gain K(k) is defined determines the Filter properties.
1. K(k) can be chosen to satisfy bandwidth requirements. Since we have a
Linear Time-Invariant System, a constant K(k) may be chosen.
This is a Luenberger Observer.
2. Since we have a Linear Time-Invariant System, if we assume White Gaussian
System and Measurement Disturbances, the Kalman Filter provides the
Optimal Filter/Predictor. An important byproduct is the Error Covariances.
3. The Filter Gain K (k) can be chosen
as the steady-state value of the
Kalman Filter.
Filtering and Prediction
44
SOLO
Statistical State Estimation
Target Tracking Systems scan periodically, with their Sensors, for Targets. They need:
• to predict the Target position at the next scan (in order to be able to re-detect the
Target and measure its data), and
• to perform data association of detections from scan to scan, in order to determine
whether a target is new or old.
[Figure: scans m…m+3 — detections of targets #1, #2, #3 form Tracks #1 and #2 and Preliminary Tracks; isolated detections are False Alarms; old targets vs. new targets or false alarms.]
To perform those tasks Target Tracking Systems use Statistical State Estimation Theory.
Two main methods are commonly used:
• The Maximum Likelihood (ML) method, based on known/assumed statistics prior to the
measurements.
• The Bayesian approach, based on known statistics between states and measurements,
after performing the measurements.
Different Models are used to describe the Target Dynamics. Often Linear Dynamics is
enough to describe a dynamical system, but non-linear models must also be taken into
consideration. In many cases the measurement relations to the model states are also
non-linear. The unknown system dynamics or measurement errors are modeled by
White Gaussian Noise Stochastic Processes.
Filtering and Prediction
45
SOLO
Target Models as Markov Processes
Markov Random Processes (Andrei Andreevich Markov, 1856-1922)
A Markov Random Process is defined by:
$$p\left[x\left(t\right),t\,|\,x\left(\tau\right),\tau\le t_1\right]=p\left[x\left(t\right),t\,|\,x\left(t_1\right),t_1\right]$$
i.e., for the Random Process, the past up to any time t1 is fully summarized by the
value of the process at t1.
Discrete Target Dynamic System
$$x_k=f\left(t_k,x_{k-1},u_{k-1},w_{k-1}\right)$$
$$z_k=h\left(t_k,x_k,u_k,v_k\right)$$
x - state space vector (n x 1)
u - input vector (m x 1)
z - measurement vector (p x 1)
w - white input noise vector (n x 1)
v - white measurement noise vector (p x 1)
Assumptions - Known:
- functional forms f (•), h (•)
- noise statistics p (wk), p (vk)
- initial state probability density function (PDF) p (x0)
Filtering and Prediction
46
SOLO
Discrete Target Dynamic System as Markov Processes
$$x_k=f\left(t_k,x_{k-1},u_{k-1},w_{k-1}\right),\qquad z_k=h\left(t_k,x_k,u_k,v_k\right)$$
with the same notation and assumptions as above.
Using the k discrete (k = 1, 2, …) noisy measurements Z_{1:k} = {z1, z2, …, zk} we want to
estimate the hidden state x_k by filtering out the noise.
k – enumeration of the measurement events
The Estimator/Filter uses some assumptions about the model and an Optimization
Criterion to obtain the estimate of x_k based on the measurements Z_{1:k} = {z1, z2, …, zk}:
$$\hat x_{k|k}=E\left[x_k\,|\,Z_{1:k}\right]$$
Filtering and Prediction
Return to Table of Content
47
SOLO
Target Acceleration Models
Target motion is modeled using the laws of physics. The equations of motion of a
point-mass object are described by:
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix}=\begin{bmatrix}0_{3\times3}&I_{3\times3}\\ 0_{3\times3}&0_{3\times3}\end{bmatrix}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix}+\begin{bmatrix}0_{3\times3}\\ I_{3\times3}\end{bmatrix}\vec A$$
or:
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}=\begin{bmatrix}0_{3\times3}&I_{3\times3}&0_{3\times3}\\ 0_{3\times3}&0_{3\times3}&I_{3\times3}\\ 0_{3\times3}&0_{3\times3}&0_{3\times3}\end{bmatrix}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}$$
$\vec R$ - Range (position) vector, $\vec V$ - Velocity vector, $\vec A$ - Acceleration vector
Since the target acceleration vector $\vec A$ is not measurable, we assume that it is a
random process defined by one of the following assumptions:
1. White Noise Acceleration Model (Nearly Constant Velocity – nCV)
2. Wiener Process Acceleration Model (Nearly Constant Acceleration – nCA)
3. Piecewise (between samples) Constant White Noise Acceleration Model
4. Piecewise (between samples) Constant Wiener Process Acceleration Model
(Constant Jerk – the derivative of acceleration)
5. Singer Acceleration Model
6. Constant Speed Turning Model
$$x_k=f\left(t_k,x_{k-1},u_{k-1},w_{k-1}\right)\ \text{(Discrete Model)},\qquad \dot x=F\left(t,x,u,w\right)\ \text{(Continuous Model)}$$
Filtering and Prediction
48
SOLO
Target Acceleration Models
1. White Noise Acceleration Model – Second-Order Model
Nearly Constant Velocity Model (nCV)
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix}=\underbrace{\begin{bmatrix}0_{3\times3}&I_{3\times3}\\ 0_{3\times3}&0_{3\times3}\end{bmatrix}}_{A_x}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix}+\underbrace{\begin{bmatrix}0_{3\times3}\\ I_{3\times3}\end{bmatrix}}_{B_x}w\left(t\right),\qquad E\left[w\left(t\right)\right]=0,\quad E\left[w\left(t\right)w^T\left(\tau\right)\right]=q\,\delta\left(t-\tau\right)$$
Discrete System: $x\left(k+1\right)=\Phi\left(k+1,k\right)x\left(k\right)+\bar w\left(k\right)$
$$\Phi\left(T\right):=\exp\left(A_xT\right)=\sum_{i=0}^{\infty}\frac{1}{i!}\left(A_xT\right)^i=\begin{bmatrix}I_{3\times3}&T\,I_{3\times3}\\ 0_{3\times3}&I_{3\times3}\end{bmatrix}$$
since $A_x^n=0$ for $n\ge2$.
$$\bar Q\left(k\right)=E\left[\bar w\left(k\right)\bar w^T\left(k\right)\right]=\int_0^T\Phi\left(\tau\right)B_x\,q\,B_x^T\,\Phi^T\left(\tau\right)d\tau$$
Filtering and Prediction
49
SOLO
1. White Noise Acceleration Model (continue – 1)
Nearly Constant Velocity Model (nCV)
$$\bar Q\left(k\right)=q\int_0^T\begin{bmatrix}I&\tau I\\ 0&I\end{bmatrix}\begin{bmatrix}0\\ I\end{bmatrix}\begin{bmatrix}0&I\end{bmatrix}\begin{bmatrix}I&0\\ \tau I&I\end{bmatrix}d\tau=q\int_0^T\begin{bmatrix}\tau^2 I&\tau I\\ \tau I&I\end{bmatrix}d\tau$$
$$\bar Q\left(k\right)=q\begin{bmatrix}T^3/3\,I_{3\times3}&T^2/2\,I_{3\times3}\\ T^2/2\,I_{3\times3}&T\,I_{3\times3}\end{bmatrix}$$
Guideline for the Choice of the Process Noise Intensity
The changes in velocity over a sampling period T are of the order of $\sqrt{\bar Q_{22}}=\sqrt{qT}$.
For the nearly constant velocity assumed by this model, the choice of q must be such as
to give small changes in velocity compared to the actual velocity $\vec V$.
Filtering and Prediction
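A small numeric sketch of this discretization: build Φ(T) and Q̄(k) from the closed-form expressions above and cross-check Φ against the matrix exponential. A single motion axis is used for brevity (the 3-D case simply replaces scalars by 3×3 identity blocks); T and q are illustrative.

```python
import numpy as np
from scipy.linalg import expm

T, q = 0.5, 1.0                       # illustrative sampling period and noise intensity

# Continuous nCV model on one axis: x = [position, velocity]
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Closed-form discretization from the slide
Phi = np.array([[1.0, T],
                [0.0, 1.0]])
Q = q * np.array([[T**3 / 3, T**2 / 2],
                  [T**2 / 2, T]])

assert np.allclose(Phi, expm(A * T))   # Φ(T) = exp(A T)
print("Phi =\n", Phi, "\nQ =\n", Q)
```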
50
SOLO
2. Wiener Process Acceleration Model – Third-Order Model
(Nearly Constant Acceleration – nCA)
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}=\underbrace{\begin{bmatrix}0&I&0\\ 0&0&I\\ 0&0&0\end{bmatrix}}_{A_x}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}+\underbrace{\begin{bmatrix}0\\ 0\\ I\end{bmatrix}}_{B_x}w\left(t\right),\qquad E\left[w\left(t\right)\right]=0,\quad E\left[w\left(t\right)w^T\left(\tau\right)\right]=q\,I_{3\times3}\,\delta\left(t-\tau\right)$$
Discrete System: $x\left(k+1\right)=\Phi\left(k+1,k\right)x\left(k\right)+\bar w\left(k\right)$
$$\Phi\left(T\right):=\exp\left(A_xT\right)=I+A_xT+\frac{1}{2}A_x^2T^2=\begin{bmatrix}I&T\,I&T^2/2\,I\\ 0&I&T\,I\\ 0&0&I\end{bmatrix}\qquad\left(A_x^n=0,\ n\ge3\right)$$
Since the derivative of acceleration is the jerk, this model is also called the White Noise Jerk Model.
$$\bar Q\left(k\right)=E\left[\bar w\left(k\right)\bar w^T\left(k\right)\right]=\int_0^T\Phi\left(\tau\right)B_x\,q\,B_x^T\,\Phi^T\left(\tau\right)d\tau$$
Filtering and Prediction
51
SOLO
2. Wiener Process Acceleration Model (continue – 1)
(Nearly Constant Acceleration – nCA)
$$\bar Q\left(k\right)=q\int_0^T\begin{bmatrix}\tau^2/2\,I\\ \tau\,I\\ I\end{bmatrix}\begin{bmatrix}\tau^2/2\,I&\tau\,I&I\end{bmatrix}d\tau
=q\begin{bmatrix}T^5/20\,I&T^4/8\,I&T^3/6\,I\\ T^4/8\,I&T^3/3\,I&T^2/2\,I\\ T^3/6\,I&T^2/2\,I&T\,I\end{bmatrix}$$
Guideline for the Choice of the Process Noise Intensity
The changes in acceleration over a sampling period T are of the order of $\sqrt{\bar Q_{33}}=\sqrt{qT}$.
For the nearly constant acceleration assumed by this model, the choice of q must be such
as to give small changes compared to the actual acceleration $\vec A$.
Filtering and Prediction
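For models where hand-integration of Q̄ is tedious, Van Loan's method computes Φ(T) and Q̄ together from one matrix exponential. The sketch below checks the nCA closed form above on one axis; T and q are illustrative.

```python
import numpy as np
from scipy.linalg import expm

T, q = 0.5, 1.0
A = np.diag([1.0, 1.0], k=1)          # 3x3 nCA dynamics on one axis: [pos, vel, acc]
B = np.array([[0.0], [0.0], [1.0]])   # noise enters through the jerk

# Van Loan: exp([[-A, B q B'], [0, A']] T) yields blocks giving Phi and Q.
n = A.shape[0]
M = np.zeros((2 * n, 2 * n))
M[:n, :n], M[:n, n:], M[n:, n:] = -A, B @ B.T * q, A.T
E = expm(M * T)
Phi = E[n:, n:].T                     # state transition matrix Φ(T)
Q = Phi @ E[:n, n:]                   # discrete process noise covariance Q̄(T)

Q_closed = q * np.array([[T**5/20, T**4/8, T**3/6],
                         [T**4/8,  T**3/3, T**2/2],
                         [T**3/6,  T**2/2, T]])
assert np.allclose(Q, Q_closed)
print(np.round(Q, 5))
```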
52
SOLO
3. Piecewise (between samples) Constant White Noise Acceleration Model – 2nd Order
The acceleration is modeled as a zero-mean white sequence, constant over each sampling interval:
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix}=\underbrace{\begin{bmatrix}0&I\\ 0&0\end{bmatrix}}_{A_x}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix}+\begin{bmatrix}0\\ I\end{bmatrix}w\left(t\right),\qquad E\left[w\left(t\right)\right]=0$$
Discrete System:
$$x\left(k+1\right)=\Phi\left(k\right)x\left(k\right)+\bar w\left(k\right),\qquad E\left[w\left(k\right)w^T\left(l\right)\right]=q\,\delta_{kl}$$
$$\Phi\left(T\right)=\exp\left(A_xT\right)=\begin{bmatrix}I&T\,I\\ 0&I\end{bmatrix}$$
$$\bar w\left(k\right)=\int_0^T\Phi\left(\tau\right)B_x\,d\tau\;w\left(k\right)=\underbrace{\begin{bmatrix}T^2/2\,I\\ T\,I\end{bmatrix}}_{\Gamma}w\left(k\right)$$
Filtering and Prediction
53
SOLO
3. Piecewise (between samples) Constant White Noise Acceleration Model (continue – 1)
$$\bar Q\left(k\right)=E\left[\bar w\left(k\right)\bar w^T\left(l\right)\right]=\Gamma\,q\,\Gamma^T\,\delta_{kl}
=q\begin{bmatrix}T^4/4\,I&T^3/2\,I\\ T^3/2\,I&T^2\,I\end{bmatrix}\delta_{kl}$$
Guideline for the Choice of the Process Noise Intensity
For this model q should be of the order of the maximum acceleration magnitude aM.
A practical range is 0.5 aM ≤ q ≤ aM.
Filtering and Prediction
54
SOLO
4. Piecewise (between samples) Constant Wiener Process Acceleration Model
(Constant Jerk – the derivative of acceleration)
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}=\underbrace{\begin{bmatrix}0&I&0\\ 0&0&I\\ 0&0&0\end{bmatrix}}_{A_x}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}+\begin{bmatrix}0\\ 0\\ I\end{bmatrix}w\left(t\right),\qquad E\left[w\left(t\right)\right]=0$$
Discrete System:
$$x\left(k+1\right)=\Phi\left(k\right)x\left(k\right)+\bar w\left(k\right),\qquad E\left[w\left(k\right)w^T\left(l\right)\right]=q\,\delta_{kl}$$
$$\Phi\left(T\right)=\exp\left(A_xT\right)=\begin{bmatrix}I&T\,I&T^2/2\,I\\ 0&I&T\,I\\ 0&0&I\end{bmatrix}$$
$$\bar w\left(k\right)=\int_0^T\Phi\left(\tau\right)B_x\,d\tau\;w\left(k\right)=\underbrace{\begin{bmatrix}T^2/2\,I\\ T\,I\\ I\end{bmatrix}}_{\Gamma}w\left(k\right)$$
Filtering and Prediction
55
SOLO
4. Piecewise (between samples) Constant Wiener Process Acceleration Model (continue – 1)
(Constant Jerk – the derivative of acceleration)
$$\bar Q\left(k\right)=E\left[\bar w\left(k\right)\bar w^T\left(l\right)\right]=\Gamma\,q\,\Gamma^T\,\delta_{kl}
=q\begin{bmatrix}T^4/4\,I&T^3/2\,I&T^2/2\,I\\ T^3/2\,I&T^2\,I&T\,I\\ T^2/2\,I&T\,I&I\end{bmatrix}\delta_{kl}$$
Guideline for the Choice of the Process Noise Intensity
For this model q should be of the order of the maximum acceleration increment over a
sampling period, ΔaM.
A practical range is 0.5 ΔaM ≤ q ≤ ΔaM.
Filtering and Prediction
56
SOLO
5. Singer Target Model
R. A. Singer, "Estimating Optimal Tracking Filter Performance for Manned Maneuvering
Targets", IEEE Trans. Aerospace & Electronic Systems, Vol. AES-6, July 1970, pp. 437-483
The target acceleration is modeled as a zero-mean random process with exponential
autocorrelation
$$R_{TT}\left(\tau\right)=E\left[a_T\left(t\right)a_T\left(t+\tau\right)\right]=\sigma_m^2\,e^{-\left|\tau\right|/\tau_T}$$
where σm² is the variance of the target acceleration and τT is the time constant of its
autocorrelation ("decorrelation time").
The target acceleration is assumed to be:
1. Equal to the maximum acceleration value amax with probability pM, and to –amax
with the same probability.
2. Equal to zero with probability p0.
3. Uniformly distributed between [-amax, amax] with the remaining probability
1 – 2pM – p0 > 0.
$$p\left(a\right)=\left[\delta\left(a-a_{max}\right)+\delta\left(a+a_{max}\right)\right]p_M+\delta\left(a\right)p_0+\frac{1-2p_M-p_0}{2\,a_{max}}\left[u\left(a+a_{max}\right)-u\left(a-a_{max}\right)\right]$$
Filtering and Prediction
57
SOLO
5. Singer Target Model (continue – 1)
$$p\left(a\right)=\left[\delta\left(a-a_{max}\right)+\delta\left(a+a_{max}\right)\right]p_M+\delta\left(a\right)p_0+\frac{1-2p_M-p_0}{2\,a_{max}}\left[u\left(a+a_{max}\right)-u\left(a-a_{max}\right)\right]$$
The mean acceleration is
$$E\left[a\right]=\int_{-a_{max}}^{a_{max}}a\,p\left(a\right)da=\left(a_{max}-a_{max}\right)p_M+0\cdot p_0+\frac{1-2p_M-p_0}{2\,a_{max}}\int_{-a_{max}}^{a_{max}}a\,da=0$$
and the acceleration variance is
$$\sigma_m^2=E\left[a^2\right]=\int_{-a_{max}}^{a_{max}}a^2\,p\left(a\right)da=2\,a_{max}^2\,p_M+\frac{1-2p_M-p_0}{2\,a_{max}}\cdot\frac{2\,a_{max}^3}{3}=\frac{a_{max}^2}{3}\left(1+4p_M-p_0\right)$$
using $\int_{-a_{max}}^{a_{max}}f\left(a\right)\delta\left(a-a_0\right)da=f\left(a_0\right)$ for $\left|a_0\right|\le a_{max}$.
Filtering and Prediction
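A quick Monte Carlo check of the variance formula above: sample accelerations from the three-part Singer density and compare the empirical variance with σm² = (a_max²/3)(1 + 4pM − p0). The parameter values are illustrative.

```python
import numpy as np

a_max, p_M, p_0 = 30.0, 0.1, 0.3      # illustrative Singer parameters
rng = np.random.default_rng(1)
n = 200_000

u = rng.random(n)
a = np.where(u < p_M, a_max,                         # +a_max with prob p_M
    np.where(u < 2 * p_M, -a_max,                    # -a_max with prob p_M
    np.where(u < 2 * p_M + p_0, 0.0,                 # zero with prob p_0
             rng.uniform(-a_max, a_max, n))))        # uniform otherwise

sigma_m2 = a_max**2 / 3 * (1 + 4 * p_M - p_0)
print(f"empirical var = {a.var():.1f}, formula = {sigma_m2:.1f}")
```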
58
SOLO
6. Target Acceleration Approximation by a Markov Process
Given a Continuous Linear System: $\dot x\left(t\right)=F\left(t\right)x\left(t\right)+G\left(t\right)w\left(t\right)$
Let us start with the first-order linear system describing the Target Acceleration:
$$\dot a_T\left(t\right)=-\frac{1}{\tau_T}a_T\left(t\right)+w\left(t\right),\qquad \Phi_a\left(t,t_0\right)=e^{-\left(t-t_0\right)/\tau_T}$$
where
$$E\left[w\left(t\right)\right]=0,\qquad E\left[w\left(t\right)w\left(\tau\right)\right]=q\,\delta\left(t-\tau\right)$$
$$E\left[a_T\left(t\right)\right]=0,\qquad E\left[a_T\left(t\right)a_T\left(t+\tau\right)\right]=R_{a_Ta_T}\left(t,t+\tau\right),\qquad V_{a_Ta_T}\left(t\right):=E\left[a_T^2\left(t\right)\right]$$
The variance propagation equation $\dot V_{xx}=F\,V_{xx}+V_{xx}F^T+G\,Q\,G^T$ gives
$$\dot V_{a_Ta_T}=-\frac{2}{\tau_T}V_{a_Ta_T}+q$$
Filtering and Prediction
59
SOLO
6. Target Acceleration Approximation by a Markov Process (continue – 1)
$$\dot V_{a_Ta_T}=-\frac{2}{\tau_T}V_{a_Ta_T}+q\quad\Rightarrow\quad V_{a_Ta_T}\left(t\right)=V_{a_Ta_T}\left(0\right)e^{-2t/\tau_T}+\frac{q\,\tau_T}{2}\left(1-e^{-2t/\tau_T}\right)$$
$$V_{a_Ta_T}\big|_{steady\ state}=\frac{q\,\tau_T}{2}$$
The autocorrelation is
$$R_{a_Ta_T}\left(t,t+\tau\right)=\begin{cases}V_{a_Ta_T}\left(t\right)e^{-\tau/\tau_T}&\tau\ge0\\ V_{a_Ta_T}\left(t+\tau\right)e^{\tau/\tau_T}&\tau<0\end{cases}$$
For $t\ge5\,\tau_T/2$: $V_{a_Ta_T}\left(t\right)\rightarrow V_{a_Ta_T}\big|_{steady\ state}=\dfrac{q\,\tau_T}{2}$ and
$$R_{a_Ta_T}\left(t,t+\tau\right)\rightarrow R_{a_Ta_T}\left(\tau\right)=\frac{q\,\tau_T}{2}\,e^{-\left|\tau\right|/\tau_T}$$
The shaping filter from w(t) to a_T(t) is $H\left(s\right)=\dfrac{1}{s+1/\tau_T}$.
Target Acceleration Models
60
SOLO
6. Target Acceleration Approximation by a Markov Process (continue – 2)
$$V_{a_Ta_T}\left(\tau\right)=\frac{q\,\tau_T}{2}\,e^{-\left|\tau\right|/\tau_T},\qquad \sigma_{a_T}^2=\frac{q\,\tau_T}{2}$$
τT is the correlation time of the noise w(t); at τ = τT the autocorrelation V_aa(τ) has
dropped to σ_a²/e. The area under V_aa(τ) is
$$\int_{-\infty}^{\infty}\frac{q\,\tau_T}{2}\,e^{-\left|\tau\right|/\tau_T}d\tau=q\,\tau_T^2$$
One other way to find τT is by taking the double-sided Laplace transform L2 in τ:
$$S_{ww}\left(s\right)=\mathcal L_2\left\{q\,\delta\left(\tau\right)\right\}=q$$
$$S_{a_Ta_T}\left(s\right)=\mathcal L_2\left\{V_{a_Ta_T}\left(\tau\right)\right\}=H\left(s\right)\,q\,H\left(-s\right)=\frac{q}{\left(1/\tau_T\right)^2-s^2}\ \xrightarrow{\ s=j\omega\ }\ \frac{q}{\omega^2+\left(1/\tau_T\right)^2}$$
τT defines the half-power frequency ω1/2 of the spectrum: at ω1/2 = 1/τT the power
spectrum drops to half its peak value, and τT = 1/ω1/2.
Target Acceleration Models
Filtering and Prediction
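A short simulation sketch of this first-order Gauss-Markov acceleration: discretize ȧ = −a/τT + w exactly over a step dt and verify that the empirical steady-state variance approaches qτT/2. The values of tau, q and dt are illustrative.

```python
import numpy as np

tau, q, dt, n = 2.0, 1.0, 0.01, 400_000   # illustrative correlation time and intensity
phi = np.exp(-dt / tau)                   # exact discrete transition over dt
var_w = q * tau / 2 * (1 - phi**2)        # exact discrete driving-noise variance

rng = np.random.default_rng(2)
a = np.empty(n)
a[0] = 0.0
for k in range(n - 1):
    a[k + 1] = phi * a[k] + rng.normal(0.0, np.sqrt(var_w))

# Steady-state variance of this AR(1) process is var_w / (1 - phi^2) = q*tau/2.
print(f"empirical var = {a[n//2:].var():.3f}, q*tau/2 = {q*tau/2:.3f}")
```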
61
SOLO
7. Constant Speed Turning Model
Denote by $\vec V$ the velocity vector, by $\vec\Omega$ the constant turning-rate vector, and by
$\vec P$ the position vector of the vehicle relative to an inertial system.
Constant speed means
$$\frac{d}{dt}\left|\vec V\right|=\frac{\vec V\cdot\dot{\vec V}}{\left|\vec V\right|}=0\quad\Rightarrow\quad\vec V\perp\vec A$$
so the acceleration may be written as a rotation of the velocity:
$$\vec A=\frac{d\vec V}{dt}=\vec\Omega\times\vec V,\qquad\vec\Omega:=\frac{\vec V\times\vec A}{\left|\vec V\right|^2}$$
and, for constant $\vec\Omega$ (with $\vec\Omega\cdot\vec V=0$),
$$\frac{d\vec A}{dt}=\vec\Omega\times\vec A=\vec\Omega\times\left(\vec\Omega\times\vec V\right)=-\Omega^2\,\vec V$$
Continuous-Time Constant Speed Target Model:
$$\frac{d}{dt}\begin{bmatrix}\vec P\\ \vec V\\ \vec A\end{bmatrix}=\begin{bmatrix}0&I&0\\ 0&0&I\\ 0&-\Omega^2 I&0\end{bmatrix}\begin{bmatrix}\vec P\\ \vec V\\ \vec A\end{bmatrix}$$
We want to find Φ(T) such that $x\left(T\right)=\Phi\left(T\right)x\left(0\right)$.
Filtering and Prediction
62
SOLO
7. Constant Speed Turning Model (continue – 1)
We will find Φ(T) by direct computation of a rotation. Rotate the vector
$\vec P_T=\vec{OA}$ around the unit vector $\hat n$ by a (large) angle θ to obtain the new vector
$\vec P\left(\theta\right)=\vec{OB}$:
$$\vec P\left(\theta\right)=\vec{OB}=\vec{OA}+\vec{AC}+\vec{CB}$$
From the drawing, $\vec{AC}=\hat n\times\left(\hat n\times\vec P_T\right)\left(1-\cos\theta\right)$, since $\hat n\times\left(\hat n\times\vec P_T\right)$ points from A
toward the rotation axis and $\left|\hat n\times\vec P_T\right|=P_T\sin\alpha$; and $\vec{CB}=\left(\hat n\times\vec P_T\right)\sin\theta$.
Therefore
$$\vec P\left(\theta\right)=\vec P_T+\left(\hat n\times\vec P_T\right)\sin\theta+\hat n\times\left(\hat n\times\vec P_T\right)\left(1-\cos\theta\right)$$
With a constant rotation rate, θ = ΩT:
$$\vec P\left(T\right)=\vec P_T+\left(\hat n\times\vec P_T\right)\sin\Omega T+\hat n\times\left(\hat n\times\vec P_T\right)\left(1-\cos\Omega T\right)$$
Filtering and Prediction
63
SOLO
7. Constant Speed Turning Model (continue – 2)
Differentiating with respect to T:
$$\vec V\left(T\right)=\frac{d\vec P}{dT}=\left(\hat n\times\vec P_T\right)\Omega\cos\Omega T+\hat n\times\left(\hat n\times\vec P_T\right)\Omega\sin\Omega T,\qquad \vec V_0=\vec V\left(0\right)=\Omega\,\hat n\times\vec P_T$$
$$\vec A\left(T\right)=\frac{d\vec V}{dT}=-\left(\hat n\times\vec P_T\right)\Omega^2\sin\Omega T+\hat n\times\left(\hat n\times\vec P_T\right)\Omega^2\cos\Omega T,\qquad \vec A_0=\vec A\left(0\right)=\Omega^2\,\hat n\times\left(\hat n\times\vec P_T\right)$$
Substituting back:
$$\vec P\left(T\right)=\vec P+\frac{\sin\Omega T}{\Omega}\vec V+\frac{1-\cos\Omega T}{\Omega^2}\vec A$$
$$\vec V\left(T\right)=\cos\Omega T\ \vec V+\frac{\sin\Omega T}{\Omega}\vec A$$
$$\vec A\left(T\right)=-\Omega\sin\Omega T\ \vec V+\cos\Omega T\ \vec A$$
Filtering and Prediction
64
SOLO
7. Constant Speed Turning Model (continue – 3)
In matrix form, the Discrete-Time Constant Speed Target Model is
$$\begin{bmatrix}\vec P\left(T\right)\\ \vec V\left(T\right)\\ \vec A\left(T\right)\end{bmatrix}
=\underbrace{\begin{bmatrix}I&\dfrac{\sin\Omega T}{\Omega}I&\dfrac{1-\cos\Omega T}{\Omega^2}I\\ 0&\cos\Omega T\ I&\dfrac{\sin\Omega T}{\Omega}I\\ 0&-\Omega\sin\Omega T\ I&\cos\Omega T\ I\end{bmatrix}}_{\Phi\left(T\right)}
\begin{bmatrix}\vec P\\ \vec V\\ \vec A\end{bmatrix}$$
Filtering and Prediction
65
SOLO
7. Constant Speed Turning Model (continue – 4)
$$\Phi\left(T\right)=\begin{bmatrix}I&\dfrac{\sin\Omega T}{\Omega}I&\dfrac{1-\cos\Omega T}{\Omega^2}I\\ 0&\cos\Omega T\ I&\dfrac{\sin\Omega T}{\Omega}I\\ 0&-\Omega\sin\Omega T\ I&\cos\Omega T\ I\end{bmatrix},\qquad
\Phi^{-1}\left(T\right)=\Phi\left(-T\right)$$
$$\frac{d}{dT}\Phi\left(T\right)=\begin{bmatrix}0&\cos\Omega T\ I&\dfrac{\sin\Omega T}{\Omega}I\\ 0&-\Omega\sin\Omega T\ I&\cos\Omega T\ I\\ 0&-\Omega^2\cos\Omega T\ I&-\Omega\sin\Omega T\ I\end{bmatrix}$$
We want to find Λ(T) such that $\frac{d}{dT}\Phi\left(T\right)=\Lambda\,\Phi\left(T\right)$, therefore
$\Lambda=\left[\frac{d}{dT}\Phi\left(T\right)\right]\Phi^{-1}\left(T\right)$:
$$\Lambda=\begin{bmatrix}0&I&0\\ 0&0&I\\ 0&-\Omega^2 I&0\end{bmatrix}$$
We recovered the transition matrix of the continuous case.
Return to Table of Content
Target Acceleration Models
Filtering and Prediction
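A sketch of this transition matrix as code, for a planar (2-D) constant-rate turn; it propagates [P, V, A] over several steps and checks that speed is preserved. Ω and T are illustrative.

```python
import numpy as np

def phi_ct(omega, T, dim=2):
    """Constant-speed turning transition matrix for [P, V, A] blocks."""
    I = np.eye(dim)
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.block([
        [I, (s / omega) * I, ((1 - c) / omega**2) * I],
        [np.zeros((dim, dim)), c * I, (s / omega) * I],
        [np.zeros((dim, dim)), -omega * s * I, c * I],
    ])

omega, T = 0.1, 1.0                       # illustrative turn rate [rad/s] and step [s]
P, V = np.array([0.0, 0.0]), np.array([100.0, 0.0])
A = omega * np.array([-V[1], V[0]])       # A = Ω x V in the plane
x = np.hstack([P, V, A])
for _ in range(10):
    x = phi_ct(omega, T) @ x
print("speed:", np.linalg.norm(x[2:4]))   # stays 100 (constant-speed turn)
```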
66
SOLO
Optimal Static Estimate
The measurements are z = Hx + v.
The optimal procedure to estimate x depends on the amount of knowledge of the
process that is initially available.
The following estimators are known and are used as a function of the assumed initial
knowledge available:
1. Weighted Least Square (WLS) & Recursive WLS — known initially: a chosen weighting matrix W
2. Markov Estimator — known initially: $\bar v=E\left[v_k\right]$ and $R=E\left[\left(v_k-\bar v\right)\left(v_k-\bar v\right)^T\right]$
3. Maximum Likelihood Estimator (MLE) — known initially: $p\left(Z|x\right)=L\left(Z,x\right)$ (likelihood)
4. Bayes Estimator — known initially: $p_{x,v}\left(x,v\right)$ or $p_{x|Z}\left(x|Z\right)$
The amount of assumed initial knowledge available on the process increases in this order.
Estimation for Static Systems
67
Estimation for Static Systems (continue – 1)
Parameter Vector: full specification of (static) parameters to be estimated
Measurements:
• collected over time and/or space
• affected by noise
Examples: $x=\left(\vec R,\vec v\right)$ or $x=\left(\vec R,\vec v,\vec a\right)$, where $\vec R$ is a 3-D position vector,
$\vec v$ a 3-D velocity vector and $\vec a$ a 3-D acceleration vector
• relationship (nonlinear/linear) with the parameter vector:
$$z_k=h\left(k,x,v_k\right),\quad k=1,\ldots,K;\qquad x\in R^n,\ z_k\in R^m$$
Goal: Estimate the Parameter Vector x using all measurements
Approaches:
• treat x as being deterministic (MLE, LSE)
• treat x as being random (MAP Estimator, MMSE Estimator)
68
Optimal Weighted Least-Square Estimate
Assume that the set of p measurements z can be expressed as a linear combination of the
elements of a constant vector x plus a random, additive measurement error v:
$$z=Hx+v,\qquad z=\left(z_1,\ldots,z_p\right)^T,\quad x=\left(x_1,\ldots,x_n\right)^T,\quad v=\left(v_1,\ldots,v_p\right)^T$$
W is a hermitian ($W^H=W$, where H stands for complex-conjugate transpose),
positive-definite weighting matrix.
We want to find $\hat x$, the estimate of the constant vector x, that minimizes the cost function:
$$J=\left\|z-H\hat x\right\|_{W^{-1}}^2=\left(z-H\hat x\right)^TW^{-1}\left(z-H\hat x\right)$$
The minimizing $\hat x_0$ is obtained by solving
$$\frac{\partial J}{\partial\hat x}=-2H^TW^{-1}\left(z-H\hat x\right)=0\quad\Rightarrow\quad\hat x_0=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}z$$
This solution minimizes J iff
$$J-J_0=\left(\hat x-\hat x_0\right)^TH^TW^{-1}H\left(\hat x-\hat x_0\right)\ge0$$
i.e. the matrix $H^TW^{-1}H$ is positive definite.
Estimation for Static Systems (continue – 2)
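A minimal numpy sketch of the WLS estimator above, with illustrative H, W and data; np.linalg.solve is used instead of an explicit inverse for numerical robustness.

```python
import numpy as np

def wls(H, W, z):
    """x̂ = (H' W^-1 H)^-1 H' W^-1 z."""
    Wi_H = np.linalg.solve(W, H)          # W^-1 H
    Wi_z = np.linalg.solve(W, z)          # W^-1 z
    return np.linalg.solve(H.T @ Wi_H, H.T @ Wi_z)

rng = np.random.default_rng(3)
x_true = np.array([1.0, -2.0])
H = rng.normal(size=(20, 2))              # illustrative measurement matrix
W = np.diag(rng.uniform(0.5, 2.0, 20))    # illustrative weighting (noise variances)
z = H @ x_true + rng.multivariate_normal(np.zeros(20), W)
print("x_hat =", wls(H, W, z).round(3))
```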
69
Optimal Weighted Least-Square Estimate (continue – 1)
$$\hat x_0=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}z$$
Since z = Hx + v is random with mean $E\left[z\right]=Hx+E\left[v\right]=Hx$ (for $E\left[v\right]=0$),
the estimate $\hat x_0$ is also random, with mean
$$E\left[\hat x_0\right]=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}E\left[z\right]=x$$
Since the mean of the estimate is equal to the estimated parameter, the estimator is unbiased.
Using $H^TW^{-1}z=H^TW^{-1}H\hat x_0$, we can find the minimum value of J:
$$J^*=\left(z-H\hat x_0\right)^TW^{-1}\left(z-H\hat x_0\right)=z^TW^{-1}z-\hat x_0^TH^TW^{-1}z=\left\|z\right\|_{W^{-1}}^2-\left\|H\hat x_0\right\|_{W^{-1}}^2$$
Estimation for Static Systems (continue – 3)
70
Optimal Weighted Least-Square Estimate (continue – 2)
$$J^*=\left\|z-H\hat x_0\right\|_{W^{-1}}^2=\left\|z\right\|_{W^{-1}}^2-\left\|H\hat x_0\right\|_{W^{-1}}^2$$
where $\left\|a\right\|_{W^{-1}}^2:=a^TW^{-1}a$ is a norm.
Using $H^TW^{-1}z=H^TW^{-1}H\hat x_0$ we obtain:
$$\left\langle H\hat x_0,\,z-H\hat x_0\right\rangle_{W^{-1}}=\hat x_0^TH^TW^{-1}\left(z-H\hat x_0\right)=\hat x_0^TH^TW^{-1}z-\hat x_0^TH^TW^{-1}H\hat x_0=0$$
This suggests the definition of an inner product of two vectors a and b (relative to the
weighting matrix W) as
$$\left\langle a,b\right\rangle_{W^{-1}}:=a^TW^{-1}b$$
[Figure: right triangle — z, its projection Hx̂0, and the residual z − Hx̂0, orthogonal in the ⟨·,·⟩ inner product.]
Projection Theorem
The Optimal Estimate $\hat x_0$ is such that $H\hat x_0$ is the projection (relative to the
weighting matrix W) of z on the Hx̂ plane.
Estimation for Static Systems (continue – 4)
72
Recursive Weighted Least Square Estimate (RWLS)
Assume that a set of N measurements z0 can be expressed as a linear combination of the
elements of a constant vector x plus a random, additive measurement error v0:
$$z_0=H_0x+v_0$$
We found that the optimal estimator $\hat x\left(-\right)$, which minimizes the cost function
$$J_0=\left(z_0-H_0\hat x\right)^TW_0^{-1}\left(z_0-H_0\hat x\right)$$
is
$$\hat x\left(-\right)=\left(H_0^TW_0^{-1}H_0\right)^{-1}H_0^TW_0^{-1}z_0=P\left(-\right)H_0^TW_0^{-1}z_0,\qquad P\left(-\right):=\left(H_0^TW_0^{-1}H_0\right)^{-1}$$
An additional measurement set $z=Hx+v$ is obtained, and we want to find the optimal
estimator $\hat x\left(+\right)$.
Define the following matrices for the complete measurement set:
$$\bar H=\begin{bmatrix}H_0\\ H\end{bmatrix},\qquad\bar z=\begin{bmatrix}z_0\\ z\end{bmatrix},\qquad\bar W=\begin{bmatrix}W_0&0\\ 0&W\end{bmatrix}$$
Therefore:
$$\hat x\left(+\right)=\left(\bar H^T\bar W^{-1}\bar H\right)^{-1}\bar H^T\bar W^{-1}\bar z=\left(H_0^TW_0^{-1}H_0+H^TW^{-1}H\right)^{-1}\left(H_0^TW_0^{-1}z_0+H^TW^{-1}z\right)$$
Estimation for Static Systems (continue – 5)
73
SOLO
Recursive Weighted Least Square Estimate (RWLS) (continue – 1)
$$P\left(-\right):=\left(H_0^TW_0^{-1}H_0\right)^{-1},\qquad\hat x\left(-\right)=P\left(-\right)H_0^TW_0^{-1}z_0$$
Define
$$P\left(+\right):=\left(H_0^TW_0^{-1}H_0+H^TW^{-1}H\right)^{-1}=\left[P\left(-\right)^{-1}+H^TW^{-1}H\right]^{-1}$$
By the Matrix Inversion Lemma:
$$P\left(+\right)=P\left(-\right)-P\left(-\right)H^T\left[H\,P\left(-\right)H^T+W\right]^{-1}H\,P\left(-\right)$$
Then
$$\hat x\left(+\right)=P\left(+\right)\left[H_0^TW_0^{-1}z_0+H^TW^{-1}z\right]$$
Estimation for Static Systems (continue – 6)
74
v
H zx
SOLO
Recursive Weighted Least Square Estimate (RWLS) (continue -2)
    
           
 
 
    
 
 
 
 
        zWHPxHWHPx
zWHPzWHPHWHPHHPzWHP
zWHPzWHPHWHPHHPP
zWHzWHPx
TT
T
x
T
WHP
TT
x
T
TTTT
TT
T
11
1
0
1
00
1
0
1
00
1
0
1
00
1
1
0
1
00
1
















      


    0
1
00
zWHPx
T 


    HWHPP T 111 

         
xHzWHPxx T  1
Recursive Weighted Least Square Estimate
(RWLS)
z
 x

 x

Delay
  HWHP T 11 

H
  1
 WHP T
Estimator
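A minimal sketch of one RWLS step in NumPy, directly implementing the two update
equations above; the function name and interface are illustrative assumptions:

import numpy as np

def rwls_update(x_prev, P_prev, H, W, z):
    """One RWLS step: fold a new measurement set z = H x + v into (x_prev, P_prev).
    Implements P(+)^{-1} = P(-)^{-1} + H^T W^{-1} H and
    x(+) = x(-) + P(+) H^T W^{-1} [z - H x(-)]."""
    P_inv = np.linalg.inv(P_prev) + H.T @ np.linalg.solve(W, H)
    P_new = np.linalg.inv(P_inv)
    gain = P_new @ H.T @ np.linalg.inv(W)
    x_new = x_prev + gain @ (z - H @ x_prev)
    return x_new, P_new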
Estimation for Static Systems (continue – 7)
75
SOLO
Recursive Weighted Least Square Estimate (RWLS) (continue – 3)

Second Way
We want to prove that, with $\hat{x}(0) := P(0)H_0^T W_0^{-1} z_0$ and $P(0) := (H_0^T W_0^{-1} H_0)^{-1}$,
the complete cost can be rewritten around the previous estimate:

$J = \begin{bmatrix} z_0 - H_0 x \\ z - Hx \end{bmatrix}^T \begin{bmatrix} W_0 & 0 \\ 0 & W \end{bmatrix}^{-1} \begin{bmatrix} z_0 - H_0 x \\ z - Hx \end{bmatrix}
   = (z_0 - H_0 x)^T W_0^{-1}(z_0 - H_0 x) + (z - Hx)^T W^{-1}(z - Hx)$

The first term satisfies (up to a term independent of x):
$(z_0 - H_0 x)^T W_0^{-1}(z_0 - H_0 x) = [x - \hat{x}(0)]^T P(0)^{-1} [x - \hat{x}(0)] + \mathrm{const}$

Therefore
$J = \|x - \hat{x}(0)\|_{P(0)^{-1}}^2 + \|z - Hx\|_{W^{-1}}^2$
Estimation for Static Systems (continue – 8)
76
SOLO
Estimators

Markov Estimate
For the particular vector measurement equation $z_0 = H_0 x + v$, where for the measurement
noise $v$ we know the mean $\bar{v} = E\{v\}$ and the variance $R = E\{(v - \bar{v})(v - \bar{v})^T\}$,
we choose W = R in WLS and obtain:
$\hat{x}(0) = (H_0^T R^{-1} H_0)^{-1} H_0^T R^{-1} z_0$,  $P(0) := (H_0^T R^{-1} H_0)^{-1}$

In Recursive WLS we obtain, for a new observation $z = Hx + v$:
$P(+)^{-1} = P(-)^{-1} + H^T R^{-1} H$
$\hat{x}(+) = \hat{x}(-) + P(+) H^T R^{-1} [z - H\hat{x}(-)]$

RWLS with W = R = Markov Estimate
(block diagram: identical to the RWLS diagram with W replaced by R.)

Table of Content
77
Estimation for Static Systems (continue – 9)
SOLO

Bayesian Approach: compute the posterior Probability Density Function (PDF) of x.

Bayes Formula:
$p(x|Z^k) = \frac{p(Z^k|x)\,p(x)}{p(Z^k)} = \frac{p(Z^k|x)\,p(x)}{\int p(Z^k|x)\,p(x)\,dx}$

$Z^k = \{z_1, \ldots, z_k\}$ – measurements up to k
$p(x)$ – prior (before measurement) PDF of x
$p(Z^k|x) = L(Z^k, x)$ – likelihood function of $Z^k$ given x
$p(x|Z^k)$ – posterior (after measurement $Z^k$) PDF of x

Likelihood Function: the PDF of the measurement conditioned on the parameter vector.
Example: $z_k = h(x) + v_k$, with $v_k \sim N(0, \sigma_v^2)$ an i.i.d. (independent, identically
distributed) process, k = 1,…,K:
$p_{z|x}(z_k|x) \sim N(z_k;\, h(x),\, \sigma_v^2)$,  $p_{Z|x}(Z^K|x) = \prod_{k=1}^{K} p_{z|x}(z_k|x)$

78
Estimation for Static Systems (continue – 10)
SOLO

Minimum Mean Square Error (MMSE) Estimator
$\hat{x}^{MMSE}(Z^k) = \arg\min_{\hat{x}}\; E\{(x - \hat{x})^T(x - \hat{x})\,|\,Z^k\}$

The minimum is given by
$\frac{\partial}{\partial \hat{x}} E\{(x - \hat{x})^T(x - \hat{x})\,|\,Z^k\} = -2E\{x|Z^k\} + 2\hat{x} = 0$
from which
$\hat{x}^* = E\{x|Z^k\} = \int x\, p_{x|Z}(x|Z^k)\, dx$

We have $\frac{\partial^2}{\partial\hat{x}\,\partial\hat{x}^T} E\{(x - \hat{x})^T(x - \hat{x})\,|\,Z^k\} = 2I > 0$, therefore this is a minimum:
$\hat{x}^{MMSE}(Z^k) = \arg\min_{\hat{x}} E\{(x - \hat{x})^T(x - \hat{x})\,|\,Z^k\} = E\{x|Z^k\}$
79
Estimation for Static Systems (continue – 11)
SOLO

Maximum Likelihood Estimator (MLE)
• Non-Bayesian Estimator:  $\hat{x}^{ML} := \arg\max_x\, p_{Z|x}(Z^k|x)$

Example: $z_k = H x + v_k$, with $v$ Gaussian (normal) with zero mean:
$L(z, x) := p_{z|x}(z|x) = p_v(z - Hx) = \frac{1}{(2\pi)^{p/2}|R|^{1/2}} \exp\left[-\frac{1}{2}(z - Hx)^T R^{-1}(z - Hx)\right]$

$\max_x p_{z|x}(z|x) \Leftrightarrow \min_x (z - Hx)^T R^{-1}(z - Hx)$  (Weighted Least Squares with W = R)

$\frac{\partial}{\partial x}(z - Hx)^T R^{-1}(z - Hx) = -2H^T R^{-1}(z - Hx) = 0 \;\Rightarrow\; H^T R^{-1} z = H^T R^{-1} H x^*$
$\hat{x} := x^* = (H^T R^{-1} H)^{-1} H^T R^{-1} z$

$\frac{\partial^2}{\partial x^2}(z - Hx)^T R^{-1}(z - Hx) = 2H^T R^{-1} H$ — a positive definite matrix, therefore
the solution minimizes $(z - Hx)^T R^{-1}(z - Hx)$ and maximizes $p_{z|x}(z|x)$.

80
Estimation for Static Systems (continue – 12)
SOLO

Maximum A Posteriori Estimator (MAP)
• Bayesian Estimator:  $\hat{x}^{MAP} := \arg\max_x p_{x|Z}(x|Z^k) = \arg\max_x p_{Z|x}(Z^k|x)\, p_x(x)$

Consider a Gaussian vector $x \sim N(\bar{x}, \bar{P})$ and the measurement $z = Hx + v$,
where the Gaussian noise $v \sim N(0, R)$ is independent of $x$. Then:
$p_x(x) = \frac{1}{(2\pi)^{n/2}|\bar{P}|^{1/2}} \exp\left[-\frac{1}{2}(x - \bar{x})^T \bar{P}^{-1}(x - \bar{x})\right]$
$p_{z|x}(z|x) = p_v(z - Hx) = \frac{1}{(2\pi)^{p/2}|R|^{1/2}} \exp\left[-\frac{1}{2}(z - Hx)^T R^{-1}(z - Hx)\right]$
$p_z(z) = \frac{1}{(2\pi)^{p/2}|H\bar{P}H^T + R|^{1/2}} \exp\left[-\frac{1}{2}(z - H\bar{x})^T (H\bar{P}H^T + R)^{-1}(z - H\bar{x})\right]$

from which
$p_{x|z}(x|z) = \frac{p_{z|x}(z|x)\,p_x(x)}{p_z(z)} = \frac{|H\bar{P}H^T + R|^{1/2}}{(2\pi)^{n/2}|R|^{1/2}|\bar{P}|^{1/2}}
\exp\left\{-\frac{1}{2}\left[(z - Hx)^T R^{-1}(z - Hx) + (x - \bar{x})^T \bar{P}^{-1}(x - \bar{x}) - (z - H\bar{x})^T(H\bar{P}H^T + R)^{-1}(z - H\bar{x})\right]\right\}$

81
Estimation for Static Systems (continue – 13)
SOLO

Maximum A Posteriori Estimator (MAP) (continue – 1)

Define $P := (\bar{P}^{-1} + H^T R^{-1} H)^{-1}$. Using
$(H\bar{P}H^T + R)^{-1} = R^{-1} - R^{-1}H(\bar{P}^{-1} + H^T R^{-1} H)^{-1}H^T R^{-1}$
we have
$(z - Hx)^T R^{-1}(z - Hx) + (x - \bar{x})^T \bar{P}^{-1}(x - \bar{x}) - (z - H\bar{x})^T(H\bar{P}H^T + R)^{-1}(z - H\bar{x})
= [x - \bar{x} - PH^T R^{-1}(z - H\bar{x})]^T P^{-1} [x - \bar{x} - PH^T R^{-1}(z - H\bar{x})]$

then
$p_{x|z}(x|z) = \frac{1}{(2\pi)^{n/2}|P|^{1/2}} \exp\left\{-\frac{1}{2}[x - \bar{x} - PH^T R^{-1}(z - H\bar{x})]^T P^{-1}[x - \bar{x} - PH^T R^{-1}(z - H\bar{x})]\right\}$

$\max_x p_{x|z}(x|z) \;\Rightarrow\; x^* := \hat{x} = \bar{x} + P H^T R^{-1}(z - H\bar{x})$

• Bayesian Estimator:  $\hat{x}^{MAP} := \arg\max_x p_{x|Z}(x|Z^k) = \arg\max_x p_{Z|x}(Z^k|x)\,p_x(x)$
For a diffuse (uniform) a priori PDF, $p_x(x) = \mathrm{const}$:
$\hat{x}^{MAP} = \arg\max_x p_{x|Z}(x|Z^k) = \arg\max_x p_{Z|x}(Z^k|x) = \hat{x}^{MLE}$
82
SOLO
Estimation for Static Systems

Optimal Static Estimate (Summary)

The measurements are $z = Hx + v$.

1  Weighted Least Square (WLS) & Recursive WLS — no assumption about the noise v:
   $J = (z - Hx)^T W^{-1}(z - Hx) = \|z - Hx\|_{W^{-1}}^2$,  $\min_x J$:
   $\hat{x}^{WLS} = (H^T W^{-1} H)^{-1} H^T W^{-1} z$
   $J^* = \|z - Hx\|_{W^{-1}}^2 = \|z - H\hat{x}\|_{W^{-1}}^2 + \|H\hat{x} - Hx\|_{W^{-1}}^2$
   RWLS: $P(+)^{-1} = P(-)^{-1} + H^T W^{-1} H$,  $\hat{x}(+) = \hat{x}(-) + P(+)H^TW^{-1}[z - H\hat{x}(-)]$

2  Markov Estimator — assumption about the noise v: known $\bar{v} = E\{v_k\}$ and
   $R_k = E\{(v_k - \bar{v})(v_k - \bar{v})^T\}$. It is RWLS with W = R:
   $P(+)^{-1} = P(-)^{-1} + H^T R^{-1} H$,  $\hat{x}(+) = \hat{x}(-) + P(+)H^TR^{-1}[z - H\hat{x}(-)]$

83
SOLO
Estimation for Static Systems

Optimal Static Estimate (Summary) (continue)

3  Maximum Likelihood Estimator (MLE) — known initially: $p_{Z|x}(Z|x) = L(Z, x)$ (likelihood):
   $L(z, x) := p_{z|x}(z|x) = p_v(z - Hx) = \frac{1}{(2\pi)^{p/2}|R|^{1/2}}\exp\left[-\frac{1}{2}(z - Hx)^TR^{-1}(z - Hx)\right]$
   $\max_x p_{z|x}(z|x) \Leftrightarrow \min_x (z - Hx)^TR^{-1}(z - Hx)$ (Weighted Least Squares with W = R)
   $\hat{x}^{ML} = (H^TR^{-1}H)^{-1}H^TR^{-1}z$

4  Bayes Estimator – Maximum A Posteriori Estimator (MAP) — known initially:
   $p_{x,v}(x, v)$ or $p_{x|Z}(x|Z)$:
   $\hat{x}^{MAP} = \arg\max_X p_{x|Z}(x|Z) = \arg\max_X p_{Z|x}(Z|x)\,p_X(x)$
84
Recursive Bayesian Estimation
SOLO

Bayesian Estimation Introduction

Problem: estimate the state of a nonlinear dynamic stochastic system from noisy
measurements.

Given a nonlinear discrete stochastic Markovian system
$x_k = f(x_{k-1}, w_{k-1})$,  $z_k = h(x_k, v_k)$
we want to use k discrete measurements $Z_{1:k} = \{z_1, z_2, \ldots, z_k\}$ to estimate the hidden
state $x_k$. For this we want to compute the probability of $x_k$ given all the measurements
$Z_{1:k} = \{z_1, z_2, \ldots, z_k\}$.

If we know $p(x_k|Z_{1:k})$ then $x_k$ is estimated using:
$\hat{x}_{k|k} := E\{x_k|Z_{1:k}\} = \int x_k\, p(x_k|Z_{1:k})\, dx_k$
$P_{k|k} = E\{(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T|Z_{1:k}\} = \int (x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T p(x_k|Z_{1:k})\, dx_k$
or, more generally, we can compute all moments of the probability distribution $p(x_k|Z_{1:k})$:
$E\{g(x_k)|Z_{1:k}\} = \int g(x_k)\, p(x_k|Z_{1:k})\, dx_k$

(figure: Markov chain $x_0 \to x_1 \to \cdots \to x_k$ through $f(\cdot, w)$, with measurements
$z_1, \ldots, z_k$ through $h(\cdot, v)$.)

85
Recursive Bayesian Estimation
SOLO

To find the expression for $p(x_k|Z_{1:k})$ we use the theorem of joint probability (Bayes Rule):
$p(x_k|Z_{1:k}) = \frac{p(x_k, Z_{1:k})}{p(Z_{1:k})}$

Since $Z_{1:k} = \{z_k, Z_{1:k-1}\}$:
$p(x_k|Z_{1:k}) = \frac{p(x_k, z_k, Z_{1:k-1})}{p(z_k, Z_{1:k-1})}$

The numerator of this expression is
$p(x_k, z_k, Z_{1:k-1}) = p(z_k|x_k, Z_{1:k-1})\, p(x_k|Z_{1:k-1})\, p(Z_{1:k-1})$
Since the knowledge of $x_k$ supersedes the need for $Z_{1:k-1} = \{z_1, \ldots, z_{k-1}\}$:
$p(z_k|x_k, Z_{1:k-1}) = p(z_k|x_k)$
and the denominator is
$p(z_k, Z_{1:k-1}) = p(z_k|Z_{1:k-1})\, p(Z_{1:k-1})$

Therefore:
$p(x_k|Z_{1:k}) = \frac{p(z_k|x_k)\, p(x_k|Z_{1:k-1})}{p(z_k|Z_{1:k-1})}$

86
Recursive Bayesian Estimation
SOLO

The final result is:
$p(x_k|Z_{1:k}) = \frac{p(z_k|x_k)\, p(x_k|Z_{1:k-1})}{p(z_k|Z_{1:k-1})}$

Since $p(x_k|Z_{1:k})$ is a probability distribution it must satisfy $\int p(x_k|Z_{1:k})\, dx_k = 1$:
$1 = \frac{\int p(z_k|x_k)\, p(x_k|Z_{1:k-1})\, dx_k}{p(z_k|Z_{1:k-1})}$
Therefore:  $p(z_k|Z_{1:k-1}) = \int p(z_k|x_k)\, p(x_k|Z_{1:k-1})\, dx_k$
and:
$p(x_k|Z_{1:k}) = \frac{p(z_k|x_k)\, p(x_k|Z_{1:k-1})}{\int p(z_k|x_k)\, p(x_k|Z_{1:k-1})\, dx_k}$

This is a recursive relation that needs the value of $p(x_k|Z_{1:k-1})$, assuming that
$p(z_k|x_k)$ is obtained from the Markovian system definition.
87
Recursive Bayesian Estimation
SOLO

Chapman – Kolmogorov Equation

Using:
$p(x_k|Z_{1:k-1}) = \int p(x_k, x_{k-1}|Z_{1:k-1})\, dx_{k-1} = \int p(x_k|x_{k-1}, Z_{1:k-1})\, p(x_{k-1}|Z_{1:k-1})\, dx_{k-1}$

Since the knowledge of $x_{k-1}$ supersedes the need for $Z_{1:k-1} = \{z_1, \ldots, z_{k-1}\}$:
$p(x_k|x_{k-1}, Z_{1:k-1}) = p(x_k|x_{k-1})$

We obtain:
$p(x_k|Z_{1:k-1}) = \int p(x_k|x_{k-1})\, p(x_{k-1}|Z_{1:k-1})\, dx_{k-1}$

(Sydney Chapman, 1888 – 1970; Andrey Nikolaevich Kolmogorov, 1903 – 1987)

88
Recursive Bayesian Estimation
SOLO

Summary — at stage k:

0  Initialize with $p(x_0)$.
1  Prediction phase (before the $z_k$ measurement): using $p(x_{k-1}|Z_{1:k-1})$ from time-step
   k-1 and $p(x_k|x_{k-1})$ of the Markov system, compute (Chapman–Kolmogorov):
   $p(x_k|Z_{1:k-1}) = \int p(x_k|x_{k-1})\, p(x_{k-1}|Z_{1:k-1})\, dx_{k-1}$,   $\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1})$
2  Correction step (after the $z_k$ measurement): using $p(x_k|Z_{1:k-1})$ from the prediction
   phase and $p(z_k|x_k)$ of the Markov system, compute:
   $p(x_k|Z_{1:k}) = \frac{p(z_k|x_k)\, p(x_k|Z_{1:k-1})}{\int p(z_k|x_k)\, p(x_k|Z_{1:k-1})\, dx_k}$
3  Filtering:
   $\hat{x}_{k|k} = E\{x_k|Z_{1:k}\} = \int x_k\, p(x_k|Z_{1:k})\, dx_k$
   $P_{k|k} = \int (x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T p(x_k|Z_{1:k})\, dx_k$
Then set k := k+1 and repeat.
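As an illustration of this prediction–correction recursion, here is a minimal grid-based
(point-mass) Bayes filter sketch in Python/NumPy. The scalar random-walk model, the grid
limits, and the noise levels are illustrative assumptions, not part of the original text:

import numpy as np

# Model (assumed): x_k = x_{k-1} + w,  w ~ N(0, q)   -> p(x_k | x_{k-1})
#                  z_k = x_k + v,      v ~ N(0, r)   -> p(z_k | x_k)
grid = np.linspace(-10.0, 10.0, 401)          # discretized state space
dx = grid[1] - grid[0]
q, r = 0.5**2, 1.0**2                          # process / measurement variances

def gauss(x, mean, var):
    return np.exp(-0.5 * (x - mean)**2 / var) / np.sqrt(2 * np.pi * var)

p = gauss(grid, 0.0, 4.0)                      # prior p(x_0)
p /= p.sum() * dx

for z in [0.3, 0.7, 1.2]:                      # example measurement sequence
    # 1. Prediction: Chapman-Kolmogorov integral as a discrete sum over x_{k-1}
    trans = gauss(grid[:, None], grid[None, :], q)   # p(x_k | x_{k-1}) on the grid
    p = (trans * p[None, :]).sum(axis=1) * dx
    # 2. Correction: multiply by the likelihood p(z_k | x_k) and renormalize
    p *= gauss(z, grid, r)
    p /= p.sum() * dx
    # 3. Filtering: posterior mean and variance
    x_hat = (grid * p).sum() * dx
    P = ((grid - x_hat)**2 * p).sum() * dx
    print(f"z={z:+.2f}  x_hat={x_hat:+.3f}  P={P:.3f}")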
89
SOLO
Recursive Bayesian Estimation — Review of Probability

Linear Gaussian Systems

A Linear Combination of Independent Gaussian random variables is also a Gaussian random
variable:  $S_m := a_1X_1 + a_2X_2 + \cdots + a_mX_m$

Proof:
Gaussian distribution:  $p_X(X_i; \mu_i, \sigma_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\left[-\frac{(X_i - \mu_i)^2}{2\sigma_i^2}\right]$

Moment-Generating (characteristic) Function:
$\Phi_{X_i}(\omega) := E\{\exp(j\omega X_i)\} = \int \exp(j\omega X_i)\, p_X(X_i)\, dX_i = \exp\left(j\omega\mu_i - \tfrac{1}{2}\omega^2\sigma_i^2\right)$

Define $Y_i := a_iX_i$, so  $p_{Y_i}(Y_i) = \frac{1}{|a_i|}\, p_{X_i}\!\left(\frac{Y_i}{a_i}\right)$  and
$\Phi_{Y_i}(\omega) := E\{\exp(j\omega Y_i)\} = \Phi_{X_i}(a_i\omega) = \exp\left(j\omega a_i\mu_i - \tfrac{1}{2}\omega^2 a_i^2\sigma_i^2\right)$

Since the $X_i$ (hence the $Y_i$) are independent:
$\Phi_{S_m}(\omega) = E\{\exp[j\omega(Y_1 + \cdots + Y_m)]\} = \prod_{i=1}^m \Phi_{Y_i}(\omega)
= \exp\left[j\omega(a_1\mu_1 + \cdots + a_m\mu_m) - \tfrac{1}{2}\omega^2(a_1^2\sigma_1^2 + \cdots + a_m^2\sigma_m^2)\right]$

Review of Probability
90
SOLO

Linear Gaussian Systems (continue)

Proof (continue – 1):
We found that $\Phi_{S_m}(\omega)$ is the characteristic function of a Gaussian random variable.
Therefore the Linear Combination of Independent Gaussian Random Variables is a Gaussian
Random Variable with
$\mu_{S_m} = a_1\mu_1 + a_2\mu_2 + \cdots + a_m\mu_m$
$\sigma_{S_m}^2 = a_1^2\sigma_1^2 + a_2^2\sigma_2^2 + \cdots + a_m^2\sigma_m^2$

Therefore the $S_m$ probability distribution is:
$p_{S_m}(S_m; \mu_{S_m}, \sigma_{S_m}) = \frac{1}{\sqrt{2\pi}\,\sigma_{S_m}}\exp\left[-\frac{(x - \mu_{S_m})^2}{2\sigma_{S_m}^2}\right]$
q.e.d.
91
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems

A general nonlinear Markov system
$x_k = f(k-1, x_{k-1}, u_{k-1}, w_{k-1})$,  $z_k = h(k, x_k, u_k, v_k)$
is defined as a Linear Gaussian Markov System when it has the form
$x_k = \Phi_{k-1}x_{k-1} + G_{k-1}u_{k-1} + \Gamma_{k-1}w_{k-1}$
$z_k = H_kx_k + v_k$

$w_{k-1}$ and $v_k$ are white noises, zero mean, Gaussian, independent:
$e_x(k) := x(k) - E\{x(k)\}$,  $E\{e_x(k)e_x^T(k)\} = P(k)$
$e_w(k) := w(k) - \underbrace{E\{w(k)\}}_{0}$,  $E\{e_w(k)e_w^T(l)\} = Q(k)\,\delta_{k,l}$
$e_v(k) := v(k) - \underbrace{E\{v(k)\}}_{0}$,  $E\{e_v(k)e_v^T(l)\} = R(k)\,\delta_{k,l}$
$E\{e_w(k)e_v^T(l)\} = 0$,  $\delta_{k,l} = \begin{cases} 1 & k = l \\ 0 & k \neq l \end{cases}$

$p_w(w) = N(w; 0, Q) = \frac{1}{(2\pi)^{n/2}|Q|^{1/2}}\exp\left(-\tfrac{1}{2}w^TQ^{-1}w\right)$
$p_v(v) = N(v; 0, R) = \frac{1}{(2\pi)^{p/2}|R|^{1/2}}\exp\left(-\tfrac{1}{2}v^TR^{-1}v\right)$
$p_x(x_0) = N(x_0; \hat{x}_{0|0}, P_{0|0}) = \frac{1}{(2\pi)^{n/2}|P_{0|0}|^{1/2}}\exp\left[-\tfrac{1}{2}(x_0 - \hat{x}_{0|0})^TP_{0|0}^{-1}(x_0 - \hat{x}_{0|0})\right]$

(block diagram: $x_{k-1} \to \Phi_{k-1} \to x_k \to H_k \to z_k$, with inputs $G_{k-1}u_{k-1}$ and
$\Gamma_{k-1}w_{k-1}$, measurement noise $v_k$, and a Delay element closing the state loop.)

92
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 1)

Prediction phase (before the $z_k$ measurement)
$x_k = \Phi_{k-1}x_{k-1} + G_{k-1}u_{k-1} + \Gamma_{k-1}w_{k-1}$

The expectation is
$\hat{x}_{k|k-1} := E\{x_k|Z_{1:k-1}\} = \Phi_{k-1}E\{x_{k-1}|Z_{1:k-1}\} + G_{k-1}u_{k-1} + \Gamma_{k-1}\underbrace{E\{w_{k-1}|Z_{1:k-1}\}}_{0}$
or  $\hat{x}_{k|k-1} = \Phi_{k-1}\hat{x}_{k-1|k-1} + G_{k-1}u_{k-1}$

$P_{k|k-1} := E\{(x_k - \hat{x}_{k|k-1})(x_k - \hat{x}_{k|k-1})^T|Z_{1:k-1}\}$; expanding, and using
$E\{(x_{k-1} - \hat{x}_{k-1|k-1})w_{k-1}^T\} = 0$ and $E\{w_{k-1}w_{k-1}^T\} = Q_{k-1}$:
$P_{k|k-1} = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$

Since $x_k = \Phi_{k-1}x_{k-1} + G_{k-1}u_{k-1} + \Gamma_{k-1}w_{k-1}$ is a Linear Combination of
Independent Gaussian Random Variables:
$p(x_k|Z_{1:k-1}) = N(x_k; \hat{x}_{k|k-1}, P_{k|k-1})$

93
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 9)

Correction Step (after the $z_k$ measurement), 2nd Way
$z_k = H_kx_k + v_k$,  $p_v(v) = N(v; 0, R)$

from which  $\hat{z}_{k|k-1} = E\{z_k|Z_{1:k-1}\} = H_k\hat{x}_{k|k-1}$
$p_z(z_k) = \frac{1}{(2\pi)^{p/2}|H_kP_{k|k-1}H_k^T + R|^{1/2}}\exp\left[-\tfrac{1}{2}(z_k - H_k\hat{x}_{k|k-1})^T(H_kP_{k|k-1}H_k^T + R)^{-1}(z_k - H_k\hat{x}_{k|k-1})\right]$

Define the innovation:  $i_k := z_k - \hat{z}_{k|k-1} = z_k - H_k\hat{x}_{k|k-1}$

$P_{k|k-1}^{zz} = E\{(z_k - \hat{z}_{k|k-1})(z_k - \hat{z}_{k|k-1})^T|Z_{1:k-1}\} = H_kP_{k|k-1}H_k^T + R_k =: S_k$

We also have
$P_{k|k-1}^{xz} = E\{(x_k - \hat{x}_{k|k-1})(z_k - \hat{z}_{k|k-1})^T|Z_{1:k-1}\}
= E\{(x_k - \hat{x}_{k|k-1})[H_k(x_k - \hat{x}_{k|k-1}) + v_k]^T|Z_{1:k-1}\} = P_{k|k-1}H_k^T$
94
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 10)

Joint and Conditional Gaussian Random Variables

Define $y_k := \begin{bmatrix} x_k \\ z_k \end{bmatrix}$; it is assumed that $x_k$ and $z_k$ are jointly Gaussian distributed.

Prediction phase (before the $z_k$ measurement), 2nd way (continue – 1):
$E\{y_k|Z_{1:k-1}\} = \begin{bmatrix} E\{x_k|Z_{1:k-1}\} \\ E\{z_k|Z_{1:k-1}\} \end{bmatrix} = \begin{bmatrix} \hat{x}_{k|k-1} \\ \hat{z}_{k|k-1} \end{bmatrix}$
$P_{k|k-1}^{yy} = E\left\{\begin{bmatrix} x_k - \hat{x}_{k|k-1} \\ z_k - \hat{z}_{k|k-1} \end{bmatrix}\begin{bmatrix} x_k - \hat{x}_{k|k-1} \\ z_k - \hat{z}_{k|k-1} \end{bmatrix}^T \Big|\, Z_{1:k-1}\right\}
= \begin{bmatrix} P_{k|k-1}^{xx} & P_{k|k-1}^{xz} \\ P_{k|k-1}^{zx} & P_{k|k-1}^{zz} \end{bmatrix}$

where:
$P_{k|k-1}^{xx} = P_{k|k-1}$
$P_{k|k-1}^{zz} = H_kP_{k|k-1}H_k^T + R_k =: S_k$
$P_{k|k-1}^{xz} = P_{k|k-1}H_k^T$

95
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 11)

Joint and Conditional Gaussian Random Variables (continue)

$p_{x,z}(x_k, z_k|Z_{1:k-1}) = \frac{1}{(2\pi)^{(n+p)/2}|P_{k|k-1}^{yy}|^{1/2}}\exp\left[-\tfrac{1}{2}(y_k - \hat{y}_{k|k-1})^T(P_{k|k-1}^{yy})^{-1}(y_k - \hat{y}_{k|k-1})\right]$
$p_z(z_k|Z_{1:k-1}) = \frac{1}{(2\pi)^{p/2}|P_{k|k-1}^{zz}|^{1/2}}\exp\left[-\tfrac{1}{2}(z_k - \hat{z}_{k|k-1})^T(P_{k|k-1}^{zz})^{-1}(z_k - \hat{z}_{k|k-1})\right]$

The conditional probability density function (pdf) of $x_k$ given $z_k$ is:
$p_{x|z}(x_k|z_k, Z_{1:k-1}) = \frac{p_{x,z}(x_k, z_k|Z_{1:k-1})}{p_z(z_k|Z_{1:k-1})}
= \frac{|P_{k|k-1}^{zz}|^{1/2}}{(2\pi)^{n/2}|P_{k|k-1}^{yy}|^{1/2}}\exp\left\{-\tfrac{1}{2}\left[(y_k - \hat{y}_{k|k-1})^T(P_{k|k-1}^{yy})^{-1}(y_k - \hat{y}_{k|k-1}) - (z_k - \hat{z}_{k|k-1})^T(P_{k|k-1}^{zz})^{-1}(z_k - \hat{z}_{k|k-1})\right]\right\}$

96
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 12)

Joint and Conditional Gaussian Random Variables (continue – 1)

Define:  $\xi_k := x_k - \hat{x}_{k|k-1}$  and  $\zeta_k := z_k - \hat{z}_{k|k-1}$

The exponent of $p_{x|z}$ is $-q_k/2$, where (writing $T := (P^{yy})^{-1}$ in block form)
$q_k := \begin{bmatrix} \xi_k \\ \zeta_k \end{bmatrix}^T \begin{bmatrix} T_{k|k-1}^{xx} & T_{k|k-1}^{xz} \\ T_{k|k-1}^{zx} & T_{k|k-1}^{zz} \end{bmatrix} \begin{bmatrix} \xi_k \\ \zeta_k \end{bmatrix} - \zeta_k^T(P_{k|k-1}^{zz})^{-1}\zeta_k$
$= \xi_k^TT_{k|k-1}^{xx}\xi_k + \xi_k^TT_{k|k-1}^{xz}\zeta_k + \zeta_k^TT_{k|k-1}^{zx}\xi_k + \zeta_k^TT_{k|k-1}^{zz}\zeta_k - \zeta_k^T(P_{k|k-1}^{zz})^{-1}\zeta_k$

97
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 13)

Joint and Conditional Gaussian Random Variables (continue – 2)

Using the Inverse Matrix Lemma for a partitioned matrix:
$\begin{bmatrix} A & B \\ D & C \end{bmatrix}^{-1} = \begin{bmatrix} (A - BC^{-1}D)^{-1} & -(A - BC^{-1}D)^{-1}BC^{-1} \\ -C^{-1}D(A - BC^{-1}D)^{-1} & C^{-1} + C^{-1}D(A - BC^{-1}D)^{-1}BC^{-1} \end{bmatrix}$

applied to $\begin{bmatrix} T^{xx} & T^{xz} \\ T^{zx} & T^{zz} \end{bmatrix}_{k|k-1} = \begin{bmatrix} P^{xx} & P^{xz} \\ P^{zx} & P^{zz} \end{bmatrix}_{k|k-1}^{-1}$ gives:
$T_{k|k-1}^{xx} = [P_{k|k-1}^{xx} - P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}P_{k|k-1}^{zx}]^{-1}$
$T_{k|k-1}^{xz} = -T_{k|k-1}^{xx}P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}$
$T_{k|k-1}^{zz} = (P_{k|k-1}^{zz})^{-1} + (P_{k|k-1}^{zz})^{-1}P_{k|k-1}^{zx}T_{k|k-1}^{xx}P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}$

Substituting into $q_k$ and completing the square:
$q_k = [\xi_k - P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}\zeta_k]^T\, T_{k|k-1}^{xx}\, [\xi_k - P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}\zeta_k]$
98
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 14)

Joint and Conditional Gaussian Random Variables (continue – 3)

With $\xi_k := x_k - \hat{x}_{k|k-1}$, $\zeta_k := z_k - \hat{z}_{k|k-1}$, and the gain
$K_k := P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}$, we obtained
$q_k = [x_k - \hat{x}_{k|k-1} - K_k(z_k - \hat{z}_{k|k-1})]^T\, T_{k|k-1}^{xx}\, [x_k - \hat{x}_{k|k-1} - K_k(z_k - \hat{z}_{k|k-1})]$

so that
$p_{x|z}(x_k|z_k, Z_{1:k-1}) = \frac{|P_{k|k-1}^{zz}|^{1/2}}{(2\pi)^{n/2}|P_{k|k-1}^{yy}|^{1/2}}\exp\left(-\tfrac{1}{2}q_k\right)$
i.e. a Gaussian in $x_k$ centered at $\hat{x}_{k|k-1} + K_k(z_k - \hat{z}_{k|k-1})$ with covariance
$(T_{k|k-1}^{xx})^{-1} = P_{k|k-1}^{xx} - P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}P_{k|k-1}^{zx}$.

99
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 15)

Joint and Conditional Gaussian Random Variables (continue – 4)

From this we can see that
$E\{x_k|z_k\} = \hat{x}_{k|k} = \hat{x}_{k|k-1} + \underbrace{P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}}_{K_k}(z_k - \hat{z}_{k|k-1})$
$P_{k|k} = E\{(x_k - \hat{x}_{k|k})(x_k - \hat{x}_{k|k})^T|Z_{1:k}\} = (T_{k|k-1}^{xx})^{-1}
= P_{k|k-1}^{xx} - P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1}P_{k|k-1}^{zx} = P_{k|k-1} - K_kP_{k|k-1}^{zz}K_k^T$

where
$P_{k|k-1}^{xx} = P_{k|k-1}$,  $P_{k|k-1}^{zz} = H_kP_{k|k-1}H_k^T + R_k =: S_k$,  $P_{k|k-1}^{xz} = P_{k|k-1}H_k^T$

100
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 16)

Joint and Conditional Gaussian Random Variables (continue – 5)

From this we can see that
$K_k = P_{k|k-1}^{xz}(P_{k|k-1}^{zz})^{-1} = P_{k|k-1}H_k^T(H_kP_{k|k-1}H_k^T + R_k)^{-1} = P_{k|k-1}H_k^TS_k^{-1}$
$P_{k|k} = P_{k|k-1} - K_kS_kK_k^T$

or, using the Inverse Matrix Lemma:
$P_{k|k} = P_{k|k-1} - P_{k|k-1}H_k^T(H_kP_{k|k-1}H_k^T + R_k)^{-1}H_kP_{k|k-1} = (P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}$
101
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 17)

Relation Between the 1st and 2nd Ways

We found that the optimal gain (2nd Way) is
$K_k = P_{k|k-1}H_k^T(R_k + H_kP_{k|k-1}H_k^T)^{-1}$

If $R_k^{-1}$ and $P_{k|k-1}^{-1}$ exist, the Inverse Matrix Lemma gives
$(R_k + H_kP_{k|k-1}H_k^T)^{-1} = R_k^{-1} - R_k^{-1}H_k(P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}H_k^TR_k^{-1}$

so that
$K_k = P_{k|k-1}H_k^T[R_k^{-1} - R_k^{-1}H_k(P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}H_k^TR_k^{-1}]
     = (P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}H_k^TR_k^{-1} = P_{k|k}H_k^TR_k^{-1}$

1st Way = 2nd Way
102
Recursive Bayesian Estimation
SOLO

Closed-Form Solutions of Estimation

Closed-form solutions for the Optimal Recursive Bayesian Estimation can be derived only
for special cases. The most important case:

• Dynamic and measurement models are linear: the general models
  $x_k = f(k-1, x_{k-1}, u_{k-1}, w_{k-1})$, $z_k = h(k, x_k, u_k, v_k)$ reduce to
  $x_k = \Phi_{k-1}x_{k-1} + G_{k-1}u_{k-1} + \Gamma_{k-1}w_{k-1}$,  $z_k = H_kx_k + v_k$

• Random noises are Gaussian:
  $p_w(w) = N(w; 0, Q) = \frac{1}{(2\pi)^{n/2}|Q|^{1/2}}\exp\left(-\tfrac{1}{2}w^TQ^{-1}w\right)$
  $p_v(v) = N(v; 0, R) = \frac{1}{(2\pi)^{p/2}|R|^{1/2}}\exp\left(-\tfrac{1}{2}v^TR^{-1}v\right)$

• Solution: KALMAN FILTER

• In other non-linear/non-Gaussian cases: USE APPROXIMATIONS

103
Recursive Bayesian Estimation
SOLO

Closed-Form Solutions of Estimation (continue – 1)

• Dynamic and measurement models are linear:
  $x_k = \Phi_{k-1}x_{k-1} + G_{k-1}u_{k-1} + \Gamma_{k-1}w_{k-1}$,  $z_k = H_kx_k + v_k$
  $e_x(k|k-1) := x(k) - \hat{x}(k|k-1)$,  $E\{e_x(k|k-1)e_x^T(k|k-1)\} = P(k|k-1)$
  $e_w(k) := w(k) - \underbrace{E\{w(k)\}}_{0}$,  $E\{e_w(k)e_w^T(l)\} = Q(k)\,\delta_{k,l}$
  $e_v(k) := v(k) - \underbrace{E\{v(k)\}}_{0}$,  $E\{e_v(k)e_v^T(l)\} = R(k)\,\delta_{k,l}$,  $E\{e_w(k)e_v^T(l)\} = 0$

• The Optimal Estimator is the Kalman Filter, developed by R. E. Kalman in 1960.
  Rudolf E. Kalman (1920 – )

• The K.F. is an Optimal Estimator in the Minimum Mean Square Error (MMSE) sense if:
  - the state and measurement models are linear
  - the random elements are Gaussian

• Under those conditions, the covariance matrix is:
  - independent of the state (it can be calculated off-line)
  - equal to the Cramér – Rao lower bound
104
Kalman Filter
SOLO

State Estimation in a Linear System (one cycle)

0  Initialization:  $\hat{x}_0 = E\{x_0\}$,  $P_{0|0} = E\{(x_0 - \hat{x}_0)(x_0 - \hat{x}_0)^T\}$
1  State vector prediction:  $\hat{x}_{k|k-1} = \Phi_{k-1}\hat{x}_{k-1|k-1} + G_{k-1}u_{k-1}$
2  Covariance matrix extrapolation:  $P_{k|k-1} = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$
3  Innovation covariance:  $S_k = H_kP_{k|k-1}H_k^T + R_k$
4  Gain matrix computation:  $K_k = P_{k|k-1}H_k^TS_k^{-1}$
5  Measurement & innovation:  $i_k = z_k - \hat{z}_{k|k-1} = z_k - H_k\hat{x}_{k|k-1}$
6  Filtering:  $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_ki_k$
7  Covariance matrix updating:
   $P_{k|k} = P_{k|k-1} - P_{k|k-1}H_k^TS_k^{-1}H_kP_{k|k-1} = P_{k|k-1} - K_kS_kK_k^T = (I - K_kH_k)P_{k|k-1}$
   $\phantom{P_{k|k}} = (I - K_kH_k)P_{k|k-1}(I - K_kH_k)^T + K_kR_kK_k^T$

(timeline: $(\hat{x}_{k-1|k-1}, P_{k-1|k-1})$ at $t_{k-1}$ → $(\hat{x}_{k|k-1}, P_{k|k-1})$ → $(\hat{x}_{k|k}, P_{k|k})$ at
$t_k$, with sample time $T = t_k - t_{k-1}$.)
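A minimal NumPy sketch of one filter cycle, following steps 1–7 above; the Joseph-form
covariance update of step 7 is used for numerical robustness. The function name and
interface are illustrative assumptions:

import numpy as np

def kf_cycle(x_est, P_est, z, Phi, Gamma, H, Q, R, G=None, u=None):
    """One Kalman filter cycle; matrix names follow the slide notation."""
    # 1. State prediction: x_{k|k-1} = Phi x_{k-1|k-1} + G u_{k-1}
    x_pred = Phi @ x_est
    if G is not None and u is not None:
        x_pred = x_pred + G @ u
    # 2. Covariance extrapolation: P_{k|k-1} = Phi P Phi^T + Gamma Q Gamma^T
    P_pred = Phi @ P_est @ Phi.T + Gamma @ Q @ Gamma.T
    # 3. Innovation covariance: S = H P_{k|k-1} H^T + R
    S = H @ P_pred @ H.T + R
    # 4. Gain: K = P_{k|k-1} H^T S^{-1}
    K = P_pred @ H.T @ np.linalg.inv(S)
    # 5. Innovation: i = z - H x_{k|k-1}
    i = z - H @ x_pred
    # 6. Filtering: x_{k|k} = x_{k|k-1} + K i
    x_upd = x_pred + K @ i
    # 7. Covariance update (Joseph form):
    I_KH = np.eye(P_pred.shape[0]) - K @ H
    P_upd = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_upd, P_upd, i, S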
105
Kalman Filter
SOLO

State Estimation in a Linear System (one cycle) — information flow

(figure: the Blackman tracking loop — sensor data processing and measurement formation,
observation-to-track association, track maintenance (initialization, confirmation and
deletion), filtering and prediction, gating computations — and the estimation timeline
from $t_{k-1}$ to $t_k$ showing the real and estimated trajectories.)

Evolution of the system (true state): state $x_{k-1}$ at $t_{k-1}$; transition to $t_k$:
$x_k = \Phi_{k-1}x_{k-1} + G_{k-1}u_{k-1} + \Gamma_{k-1}w_{k-1}$; measurement at $t_k$: $z_k = H_kx_k + v_k$.

Estimation of the state: estimate $\hat{x}_{k-1|k-1}$ at $t_{k-1}$ (I.C.: $\hat{x}_{0|0} = E\{x_0\}$);
control $u_{k-1}$; state prediction $\hat{x}_{k|k-1} = \Phi_{k-1}\hat{x}_{k-1|k-1} + G_{k-1}u_{k-1}$;
measurement prediction $\hat{z}_{k|k-1} = H_k\hat{x}_{k|k-1}$; innovation $i_k = z_k - \hat{z}_{k|k-1}$;
update $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_ki_k$.

State covariance and Kalman filter computations: $P_{k-1|k-1}$ at $t_{k-1}$
(I.C.: $P_{0|0} = E\{(x_0 - \hat{x}_{0|0})(x_0 - \hat{x}_{0|0})^T\}$); prediction covariance
$P_{k|k-1} = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$; innovation covariance
$S_k = H_kP_{k|k-1}H_k^T + R_k$; gain $K_k = P_{k|k-1}H_k^TS_k^{-1}$; update covariance
$P_{k|k} = P_{k|k-1} - K_kS_kK_k^T$.

Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986
Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999

106
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 18)

Innovation in a Kalman Filter

The innovation is the quantity:  $i_k := z_k - H_k\hat{x}_{k|k-1} = z_k - \hat{z}_{k|k-1}$

We found that:
$E\{i_k|Z_{1:k-1}\} = E\{z_k - \hat{z}_{k|k-1}|Z_{1:k-1}\} = E\{z_k|Z_{1:k-1}\} - \hat{z}_{k|k-1} = 0$
$E\{(z_k - \hat{z}_{k|k-1})(z_k - \hat{z}_{k|k-1})^T|Z_{1:k-1}\} = E\{i_ki_k^T|Z_{1:k-1}\} = R_k + H_kP_{k|k-1}H_k^T = S_k$

Using the smoothing property of the expectation:
$E_Y\{E_{X|Y}\{x|y\}\} = \int_y\left[\int_x x\,p_{X|Y}(x|y)\,dx\right]p_Y(y)\,dy = \int_x\int_y x\,p_{X,Y}(x,y)\,dy\,dx = \int_x x\,p_X(x)\,dx = E\{x\}$
we have:  $E\{i_ki_j^T\} = E\{E\{i_ki_j^T|Z_{1:k-1}\}\}$

Assuming, without loss of generality, that $k-1 \geq j$, the innovation $i_j$ is determined by
$Z_{1:k-1}$ and can be taken outside the inner expectation:
$E\{i_ki_j^T\} = E\{\underbrace{E\{i_k|Z_{1:k-1}\}}_{0}\,i_j^T\} = 0$

107
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 19)

Innovation in a Kalman Filter (continue – 1)

The innovation is the quantity $i_k := z_k - H_k\hat{x}_{k|k-1} = z_k - \hat{z}_{k|k-1}$. We found that:
$E\{i_k|Z_{1:k-1}\} = 0$
$E\{i_ki_k^T|Z_{1:k-1}\} = R_k + H_kP_{k|k-1}H_k^T =: S_k$
$E\{i_ki_j^T\} = 0$ for $k \neq j$, i.e.  $E\{i_ki_j^T\} = S_k\,\delta_{k,j}$

The uncorrelatedness property of the innovations implies that, since they are Gaussian,
the innovations are independent of each other, and thus the innovation sequence is
Strictly White. Without the Gaussian assumption, the innovation sequence is Wide-Sense
White.

Thus the innovation sequence is zero mean and white for the Kalman (Optimal) Filter.
The innovation for the Kalman (Optimal) Filter extracts all the available information
from the measurement, leaving only zero-mean white noise in the measurement residual.
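Since the innovation sequence is zero mean and white with covariance $S_k$, the normalized
innovation squared $i_k^TS_k^{-1}i_k$ (developed on the next slide) is the natural test
statistic. A small sketch of computing it through a Cholesky factor of $S_k$, which plays
the role of the $S_k^{-1/2}$ whitening used below:

import numpy as np

def nis(innovation, S):
    """Normalized innovation squared i^T S^{-1} i, computed through a
    Cholesky factor of S (avoids forming S^{-1} explicitly)."""
    L = np.linalg.cholesky(S)                 # S = L L^T, L lower triangular
    u = np.linalg.solve(L, innovation)        # u = L^{-1} i  =>  u^T u = i^T S^{-1} i
    return float(u @ u)                       # chi-square with n_z degrees of freedom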
108
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 20)

Innovation in a Kalman Filter (continue – 2)

Define the quantity:  $\chi_{n_z}^2 := i_k^TS_k^{-1}i_k$

Since $S_k$ is symmetric and positive definite, it can be written as:
$S_k = T_kD_ST_k^H$,  $T_kT_k^H = I$,  $D_S = \mathrm{diag}(\lambda_1, \ldots, \lambda_{n_z})$,  $\lambda_i > 0$
$S_k^{-1} = T_kD_S^{-1}T_k^H$,  $S_k^{-1/2} = T_kD_S^{-1/2}T_k^H$,  $D_S^{-1/2} = \mathrm{diag}(\lambda_1^{-1/2}, \ldots, \lambda_{n_z}^{-1/2})$

Let us use:  $u_k := S_k^{-1/2}i_k$

Since $i_k$ is Gaussian, $u_k$ (a linear combination of the $n_z$ components of $i_k$) is
Gaussian too, with:
$E\{u_k\} = S_k^{-1/2}\underbrace{E\{i_k\}}_{0} = 0$
$E\{u_ku_k^T\} = S_k^{-1/2}E\{i_ki_k^T\}S_k^{-T/2} = S_k^{-1/2}S_kS_k^{-T/2} = I_{n_z}$

where $I_{n_z}$ is the identity matrix of size $n_z$. Therefore, since the covariance matrix of
$u$ is diagonal, its components $u_i$ are uncorrelated and, since they are jointly Gaussian,
they are also independent:
$\chi_{n_z}^2 = i_k^TS_k^{-1}i_k = u_k^Tu_k = \sum_{i=1}^{n_z}u_i^2$,  $u_i \sim N(0, 1)$

Therefore $\chi_{n_z}^2$ is chi-square distributed with $n_z$ degrees of freedom.
109
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 21)

Kalman Filter Initialization

State vector prediction:  $\hat{x}_{k|k-1} = \Phi_{k-1}\hat{x}_{k-1|k-1} + G_{k-1}u_{k-1}$
Covariance matrix extrapolation:  $P_{k|k-1} = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$

To initialize the Kalman Filter we need to know $\hat{x}_{0|0}$ and $P_{0|0}$.
According to the Bayesian model, the true initial state is a Gaussian random variable
$N(x_0; \hat{x}_{0|0}, P_{0|0})$.

The chi-square test for the initial condition error is
$(x_0 - \hat{x}_{0|0})^TP_{0|0}^{-1}(x_0 - \hat{x}_{0|0}) \leq c_1$
where $c_1$ is the upper limit of the, say, 95% confidence region from the chi-square
distribution with $n_x$ degrees of freedom.

(figure: the Blackman tracking-loop diagram.)

110
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 22)

Kalman Filter Initialization (continue)

$\hat{x}(0|0)$ and $P(0|0)$ can be estimated using at least two measurements.

From the first measurement $z_1$, using Least Squares, we obtain
$\hat{x}_1 = (H^TR^{-1}H)^{-1}H^TR^{-1}z_1$

Predictions before the second measurement:
$\hat{x}_{2|1} = \Phi_1\hat{x}_1$  and  $\hat{z}_2 = H_2\hat{x}_{2|1}$,  with  $S_2 = H_2P_{2|1}H_2^T + R$

The Preliminary Track Gate used for the second measurement is determined from the
worst-case target conditions, including maneuver and data miscorrelations.

Return to Table of Content

111
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 23)

Strategies for Kalman Filter Initialization (First Step)

(figure: four initial velocity-gate geometries around the first detection, built from the
specified target limits:
 • MAX SPEED and TURNING RATE SPECIFIED — sector of radius $T\hat{V}_{Max}$ bounded by turn angles $\pm(\omega_{Max}T)/2$;
 • MAX, MIN SPEED and TURNING RATE SPECIFIED — annular sector between $TV_{min}$ and $TV_{Max}$;
 • MAX SPEED SPECIFIED — full disc of radius $TV_{Max}$;
 • MAX, MIN SPEED SPECIFIED — full annulus between $TV_{min}$ and $TV_{Max}$.)

Return to Table of Content
112
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 24)

Information Kalman Filter

For some applications (such as bearings-only tracking) the initial state covariance
matrix $P_{0|0}$ may be very large. As a result, the Kalman Filter formulation can encounter
numerical problems. For those cases it is better to use a formulation with $P_{0|0}^{-1}$.

First Version: change only the covariance matrix computations.

Start with:
$P_{k|k}^{-1} = P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k$
$K_k = P_{k|k}H_k^TR_k^{-1}$

By the Inverse Matrix Lemma (assuming $P_{k-1|k-1}^{-1}$ and $Q_{k-1}^{-1}$ exist), with
$\bar{Q}_{k-1} := \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$:
$P_{k|k-1}^{-1} = (\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T + \bar{Q}_{k-1})^{-1}
= \bar{Q}_{k-1}^{-1} - \bar{Q}_{k-1}^{-1}\Phi_{k-1}(P_{k-1|k-1}^{-1} + \Phi_{k-1}^T\bar{Q}_{k-1}^{-1}\Phi_{k-1})^{-1}\Phi_{k-1}^T\bar{Q}_{k-1}^{-1}$
$S_k^{-1} = (H_kP_{k|k-1}H_k^T + R_k)^{-1} = R_k^{-1} - R_k^{-1}H_k(P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}H_k^TR_k^{-1}$
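A minimal sketch of why the information form tolerates a very large (even effectively
infinite) initial covariance: the measurement update simply adds information. The
two-state position-only example below is an assumption for illustration:

import numpy as np

def info_update(Y_pred, H, R):
    """Information-form measurement update: Y_{k|k} = Y_{k|k-1} + H^T R^{-1} H,
    where Y denotes the inverse covariance (information matrix)."""
    return Y_pred + H.T @ np.linalg.solve(R, H)

Y0 = np.zeros((2, 2))                       # diffuse prior: P^{-1} ~ 0
H = np.array([[1.0, 0.0]])                  # position-only measurement
R = np.array([[0.5**2]])
Y1 = info_update(Y0, H, R)                  # information accumulates per measurement
print(Y1)                                   # still singular: velocity unobserved so far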
113
Recursive Bayesian Estimation
SOLO

Information Kalman Filter — Version 1 (one cycle)

(figure: the estimation timeline of slide 105, with the covariance computations replaced
by their information-form equivalents; the state equations are unchanged.)

State error covariance at $t_{k-1}$:  $P_{k-1|k-1}^{-1}$
State prediction covariance at $t_k$ (with $\bar{Q}_{k-1} := \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$):
$P_{k|k-1}^{-1} = \bar{Q}_{k-1}^{-1} - \bar{Q}_{k-1}^{-1}\Phi_{k-1}(P_{k-1|k-1}^{-1} + \Phi_{k-1}^T\bar{Q}_{k-1}^{-1}\Phi_{k-1})^{-1}\Phi_{k-1}^T\bar{Q}_{k-1}^{-1}$
Innovation covariance:
$S_k^{-1} = R_k^{-1} - R_k^{-1}H_k(P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}H_k^TR_k^{-1}$
Kalman filter gain:  $K_k = P_{k|k}H_k^TR_k^{-1}$
Update state covariance:  $P_{k|k}^{-1} = P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k$
Update state estimation:  $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_ki_k$,  with $i_k = z_k - H_k\hat{x}_{k|k-1}$

114
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 24)

Information Kalman Filter (continue – 1)

Second Version: change both the covariance matrices and the filter state computations.

Start with:
$P_{k|k}^{-1} = P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k$,  $K_k = P_{k|k}H_k^TR_k^{-1}$,
$P_{k|k-1} = \Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$

Define:
$A_{k|k-1} := \Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}\Phi_{k-1}^{-1} = (\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T)^{-1}$
$B_{k|k-1} := A_{k|k-1}\Gamma_{k-1}(\Gamma_{k-1}^TA_{k|k-1}\Gamma_{k-1} + Q_{k-1}^{-1})^{-1}$

Then, by the Inverse Matrix Lemma:
$P_{k|k-1}^{-1} = (A_{k|k-1}^{-1} + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T)^{-1}
= A_{k|k-1} - B_{k|k-1}\Gamma_{k-1}^TA_{k|k-1} = (I - B_{k|k-1}\Gamma_{k-1}^T)A_{k|k-1}$

115
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 24)

Information Kalman Filter (continue – 2)

Start with $\hat{x}_{k|k-1} = \Phi_{k-1}\hat{x}_{k-1|k-1} + G_{k-1}u_{k-1}$ and multiply by $P_{k|k-1}^{-1}$;
using $A_{k|k-1}\Phi_{k-1} = \Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}$:
$P_{k|k-1}^{-1}\hat{x}_{k|k-1} = (I - B_{k|k-1}\Gamma_{k-1}^T)\left[\Phi_{k-1}^{-T}(P_{k-1|k-1}^{-1}\hat{x}_{k-1|k-1})\right] + P_{k|k-1}^{-1}G_{k-1}u_{k-1}$

Multiply the update state estimation equation $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_ki_k$ by $P_{k|k}^{-1}$;
using $P_{k|k}^{-1}K_k = H_k^TR_k^{-1}$ and $P_{k|k}^{-1} - H_k^TR_k^{-1}H_k = P_{k|k-1}^{-1}$:
$P_{k|k}^{-1}\hat{x}_{k|k} = P_{k|k-1}^{-1}\hat{x}_{k|k-1} + H_k^TR_k^{-1}z_k$

So the filter propagates the information state $P^{-1}\hat{x}$ directly.

116
Recursive Bayesian Estimation
SOLO

Information Kalman Filter (Version 2, one cycle)

(figure: the estimation timeline of slide 105, with the filter propagating the
information matrix $P^{-1}$ and the information state $P^{-1}\hat{x}$.)

I.C.: $\hat{x}_{0|0} = E\{x_0\}$,  $P_{0|0} = E\{(x_0 - \hat{x}_{0|0})(x_0 - \hat{x}_{0|0})^T\}$
State error covariance at $t_{k-1}$: $P_{k-1|k-1}^{-1}$; state estimation at $t_{k-1}$: $P_{k-1|k-1}^{-1}\hat{x}_{k-1|k-1}$

Prediction:
$A_{k|k-1} = \Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}\Phi_{k-1}^{-1}$,  $B_{k|k-1} = A_{k|k-1}\Gamma_{k-1}(\Gamma_{k-1}^TA_{k|k-1}\Gamma_{k-1} + Q_{k-1}^{-1})^{-1}$
$P_{k|k-1}^{-1} = (I - B_{k|k-1}\Gamma_{k-1}^T)A_{k|k-1}$
$P_{k|k-1}^{-1}\hat{x}_{k|k-1} = (I - B_{k|k-1}\Gamma_{k-1}^T)\Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}\hat{x}_{k-1|k-1} + P_{k|k-1}^{-1}G_{k-1}u_{k-1}$

Correction:
$S_k^{-1} = R_k^{-1} - R_k^{-1}H_k(P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k)^{-1}H_k^TR_k^{-1}$
$K_k = P_{k|k}H_k^TR_k^{-1}$
$P_{k|k}^{-1} = P_{k|k-1}^{-1} + H_k^TR_k^{-1}H_k$
$P_{k|k}^{-1}\hat{x}_{k|k} = P_{k|k-1}^{-1}\hat{x}_{k|k-1} + H_k^TR_k^{-1}z_k$
117
Review of Probability
SOLO

Chi-square Distribution

Assume an n-dimensional vector $x$ is Gaussian, with mean $E\{x\}$ and covariance P. Then we
can define a (scalar) random variable:
$q := (x - E\{x\})^TP^{-1}(x - E\{x\}) = e_x^TP^{-1}e_x$

Since P is symmetric and positive definite, it can be written as:
$P = TD_PT^H$,  $TT^H = I$,  $D_P = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$,  $\lambda_i > 0$
$P^{-1} = TD_P^{-1}T^H$,  $P^{-1/2} = TD_P^{-1/2}T^H$,  $D_P^{-1/2} = \mathrm{diag}(\lambda_1^{-1/2}, \ldots, \lambda_n^{-1/2})$

Let us use:  $u := P^{-1/2}(x - E\{x\}) = P^{-1/2}e_x$

Since $x$ is Gaussian, $u$ (a linear combination of the n components of $e_x$) is Gaussian
too, with:
$E\{u\} = P^{-1/2}(E\{x\} - E\{x\}) = 0$
$E\{uu^T\} = P^{-1/2}E\{e_xe_x^T\}P^{-T/2} = P^{-1/2}PP^{-T/2} = I_n$

where $I_n$ is the identity matrix of size n. Therefore, since the covariance matrix of u
is diagonal, its components $u_i$ are uncorrelated and, since they are jointly Gaussian,
they are also independent:
$q = e_x^TP^{-1}e_x = u^Tu = \sum_{i=1}^n u_i^2$,  $u_i \sim N(0, 1)$

Therefore q is chi-square distributed with n degrees of freedom.

118
Review of Probability
SOLO

Derivation of the Chi and Chi-square Distributions

Given k normal, independent random variables $X_1, X_2, \ldots, X_k$ with zero mean values and
the same variance $\sigma^2$, their joint density is given by
$p_{X_1,\ldots,X_k}(x_1, \ldots, x_k) = \prod_{i=1}^k \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{x_i^2}{2\sigma^2}\right) = \frac{1}{(2\pi)^{k/2}\sigma^k}\exp\left(-\frac{x_1^2 + \cdots + x_k^2}{2\sigma^2}\right)$

Define:
Chi-square:  $y := \chi_k^2 = x_1^2 + \cdots + x_k^2 \geq 0$
Chi:  $\chi_k = \sqrt{x_1^2 + \cdots + x_k^2} \geq 0$

$\Pr\{\chi_k \leq \sqrt{x_1^2 + \cdots + x_k^2} \leq \chi_k + d\chi_k\} = p_{\chi_k}(\chi_k)\,d\chi_k$

The region in $\chi_k$ space where the joint density is constant is a hyper-shell of volume
$dV = A\chi_k^{k-1}d\chi_k$ (A to be defined), so
$p_{\chi_k}(\chi_k) = \frac{A}{(2\pi)^{k/2}\sigma^k}\,\chi_k^{k-1}\exp\left(-\frac{\chi_k^2}{2\sigma^2}\right)$

(figure: spherical shell of radius $\rho$ and thickness $d\rho$ in $(x_1, x_2, x_3)$ space;
$dV = 4\pi\rho^2d\rho$ for k = 3.)

119
Review of Probability
SOLO

Derivation of the Chi and Chi-square Distributions (continue – 1)

$p_{\chi_k}(\chi_k) = \frac{A}{(2\pi)^{k/2}\sigma^k}\,\chi_k^{k-1}\exp\left(-\frac{\chi_k^2}{2\sigma^2}\right)U(\chi_k)$

Chi-square: $y := \chi_k^2$, so $\chi_k = \sqrt{y}$ and $|d\chi_k/dy| = 1/(2\sqrt{y})$:
$p_{Y_k}(y) = p_{\chi_k}(\sqrt{y})\left|\frac{d\chi_k}{dy}\right| = \frac{A}{2(2\pi)^{k/2}\sigma^k}\,y^{k/2-1}\exp\left(-\frac{y}{2\sigma^2}\right)U(y)$

A is determined from the condition $\int_0^\infty p_{Y_k}(y)\,dy = 1$:
$\int_0^\infty y^{k/2-1}\exp\left(-\frac{y}{2\sigma^2}\right)dy = (2\sigma^2)^{k/2}\Gamma(k/2) \;\Rightarrow\; A = \frac{2\pi^{k/2}}{\Gamma(k/2)}$
where $\Gamma$ is the gamma function  $\Gamma(a) = \int_0^\infty t^{a-1}e^{-t}\,dt$

Therefore:
$p_{Y_k}(y; k, \sigma) = \frac{1}{(2\sigma^2)^{k/2}\Gamma(k/2)}\,y^{k/2-1}\exp\left(-\frac{y}{2\sigma^2}\right)U(y)$
$p_{\chi_k}(\chi_k) = \frac{2}{(2\sigma^2)^{k/2}\Gamma(k/2)}\,\chi_k^{k-1}\exp\left(-\frac{\chi_k^2}{2\sigma^2}\right)U(\chi_k)$

$U(a) := \begin{cases} 1 & a \geq 0 \\ 0 & a < 0 \end{cases}$  (unit step; "Function of One Random Variable")

120
Review of Probability
SOLO

Derivation of the Chi and Chi-square Distributions (continue – 2)

Chi-square: $y := \chi_k^2 = x_1^2 + \cdots + x_k^2 \geq 0$, where the $x_i$ are Gaussian with
$E\{x_i^2\} = \sigma^2$ and (4th moment of a Gauss distribution) $E\{x_i^4\} = 3\sigma^4$,
$E\{x_i^2x_j^2\} = \sigma^4$ for $i \neq j$.

Mean Value:
$E\{\chi_k^2\} = E\{x_1^2\} + \cdots + E\{x_k^2\} = k\sigma^2$

Variance:
$E\{(\chi_k^2 - k\sigma^2)^2\} = E\{(\textstyle\sum_i x_i^2)^2\} - k^2\sigma^4 = \sum_i E\{x_i^4\} + \sum_{i\neq j}E\{x_i^2x_j^2\} - k^2\sigma^4$
$= 3k\sigma^4 + k(k-1)\sigma^4 - k^2\sigma^4 = 2k\sigma^4$

$\sigma_{\chi_k^2}^2 = E\{(\chi_k^2 - k\sigma^2)^2\} = 2k\sigma^4$
121
Review of Probability
SOLO

Derivation of the Chi and Chi-square Distributions (continue – 3)

(table: tail probabilities of the chi-square and normal densities.)

The table presents the points on the chi-square distribution for a given upper-tail
probability $Q = \Pr\{y > x\}$, where $y = \chi_n^2$ and n is the number of degrees of freedom.
This tabulated function is also known as the complementary distribution.
An alternative way of writing the previous equation is
$\Pr\{y \leq x(1 - Q)\} = 1 - Q$
which indicates that to the left of the point x the probability mass is 1 – Q. This is
the 100(1 – Q) percentile point.

Examples
1. The 95% probability region for a $\chi_2^2$ variable can be taken as the one-sided
   probability region (cutting off the 5% upper tail):  $\chi_2^2(0, 0.95) = (0, 5.99)$
2. Or the two-sided probability region (cutting off both 2.5% tails):
   $(\chi_2^2(0.025), \chi_2^2(0.975)) = (0.05, 7.38)$
3. For a $\chi_{100}^2$ variable, the two-sided 95% probability region (cutting off both
   2.5% tails) is:  $(\chi_{100}^2(0.025), \chi_{100}^2(0.975)) = (74, 130)$
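The same percentile points can be reproduced numerically; a small check sketch using
scipy.stats.chi2 (the printed values match the table entries quoted above):

from scipy.stats import chi2

print(chi2.ppf(0.95, df=2))              # one-sided 95% point for chi^2_2   -> 5.99
print(chi2.ppf([0.025, 0.975], df=2))    # two-sided region for chi^2_2      -> [0.05, 7.38]
print(chi2.ppf([0.025, 0.975], df=100))  # two-sided region for chi^2_100    -> [74.2, 129.6]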
122
Review of Probability
SOLO

Derivation of the Chi and Chi-square Distributions (continue – 4)

Note the skewness of the chi-square distribution: the above two-sided regions are not
symmetric about the corresponding means $E\{\chi_n^2\} = n$.

For degrees of freedom above 100, the following approximation of the points on the
chi-square distribution can be used:
$\chi_n^2(1 - Q) \approx \tfrac{1}{2}\left[G(1 - Q) + \sqrt{2n - 1}\right]^2$

where G( ) is given in the last line of the table and shows the point x on the standard
(zero mean, unit variance) Gaussian distribution for the same tail probabilities:
if $\Pr\{y\} = N(y; 0, 1)$ and $Q = \Pr\{y > x\}$, then $x(1 - Q) := G(1 - Q)$.

Return to Table of Content
123
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 21)

Innovation in Tracking Systems

The fact that the innovation sequence is zero mean and white for the Kalman (Optimal)
Filter is very important and can be used in Tracking Systems:

1. When a single target is detected with probability 1 (no false alarms), the innovation
can be used to check Filter Consistency (given knowledge of the Filter Parameters:
Φ(k), G(k), H(k) – the target model; Q(k), R(k) – the system and measurement noises).

2. When a single target is detected with probability 1 (no false alarms), and the target
initiates an unknown maneuver (change of model) at an unknown time, the innovation can be
used to detect the start of the maneuver (change of target model) by detecting a Filter
Inconsistency, and to choose from a bank of models (Φi(k), Gi(k), Hi(k), i = 1,…,n target
models) the one with a white innovation (see the IMM method).

3. When a single target is detected with probability less than 1 and false alarms are
also detected, the innovation can be used to provide information on the probability of
each detection being the real target (providing a Gating capability that eliminates less
probable detections) (see the PDAF method).

4. When multiple targets are detected with probability less than 1 and false alarms are
also detected, the innovation can be used to provide Gating information for each target
track and the probability of each detection being related to each track (data
association). This is done by running a Kalman Filter for each initiated track (see the
JPDAF and MTT methods).

Return to Table of Content
124
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 22)

Evaluation of Kalman Filter Consistency

A state estimator (filter) is called consistent if its state estimation error satisfies
$E\{\tilde{x}(k|k)\} := E\{x(k) - \hat{x}(k|k)\} = 0$
$E\{[x(k) - \hat{x}(k|k)][x(k) - \hat{x}(k|k)]^T\} := E\{\tilde{x}(k|k)\tilde{x}^T(k|k)\} = P(k|k)$

This is a finite-sample consistency property; that is, the estimation errors based on a
finite number of samples (measurements) should be consistent with the theoretical
statistical properties:
• have zero mean (i.e., the estimates are unbiased);
• have a covariance matrix as calculated by the Filter.

The Consistency Criteria of a Filter are:
1. The state errors should be acceptable as zero mean and have magnitude commensurate
   with the state covariance as yielded by the Filter.
2. The innovations should have the same property as in (1).
3. The innovations should be white noise.

Only the last two criteria (based on the innovations) can be tested in real-data
applications. The first criterion, which is the most important, can be tested only in
simulations.

125
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 23)

Evaluation of Kalman Filter Consistency (continue – 1)

When we design the Kalman Filter, we can perform Monte Carlo simulations (N independent
runs) to check the Filter Consistency (expected performance).

Real time (Single-Run Tests)
In real time we can use a single run (N = 1). In this case the simulations are replaced
by assuming that the Ensemble Averages (of the simulations) can be replaced by Time
Averages, based on the Ergodicity of the Innovation, and only tests (2) and (3), based on
the innovation properties, are performed.

The innovation bias and covariance can be evaluated using
$\hat{\bar{i}} = \frac{1}{K}\sum_{k=1}^K i(k)$  and  $\hat{S} = \frac{1}{K}\sum_{k=1}^K i(k)i^T(k)$

126
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 24)

Evaluation of Kalman Filter Consistency (continue – 2)

Real time (Single-Run Tests) (continue – 1)

Test 2:  $E\{z(k) - \hat{z}(k|k-1)\} = E\{i(k)\} = 0$  and  $E\{i(k)i^T(k)\} = S(k)$

Using the Time-Average Normalized Innovation Squared (NIS) statistic
$\bar{\epsilon}_i := \frac{1}{K}\sum_{k=1}^K i^T(k)S^{-1}(k)i(k)$

$K\bar{\epsilon}_i$ must have a chi-square distribution with $Kn_z$ degrees of freedom.

The test is successful if $\bar{\epsilon}_i \in [r_1, r_2]$, where the confidence interval $[r_1, r_2]$ is
defined using the chi-square distribution of $\bar{\epsilon}_i$:
$\Pr\{\bar{\epsilon}_i \in [r_1, r_2]\} = 1 - \alpha$

For example, for K = 50, $n_z$ = 2, and α = 0.05, using the two tails of the chi-square
distribution we get $K\bar{\epsilon}_i \sim \chi_{100}^2$:
$r_1 = \chi_{100}^2(0.025)/50 = 74/50 = 1.5$
$r_2 = \chi_{100}^2(0.975)/50 = 130/50 = 2.6$
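A minimal sketch of the single-run NIS test just described; the function name and
interface are illustrative assumptions:

import numpy as np
from scipy.stats import chi2

def nis_consistency_test(innovations, S_list, nz, alpha=0.05):
    """Single-run time-average NIS test (Test 2 above).
    innovations: list of K innovation vectors; S_list: the K innovation covariances."""
    K = len(innovations)
    eps_bar = np.mean([i @ np.linalg.solve(S, i) for i, S in zip(innovations, S_list)])
    # K * eps_bar ~ chi-square with K*nz degrees of freedom under consistency
    r1 = chi2.ppf(alpha / 2, K * nz) / K
    r2 = chi2.ppf(1 - alpha / 2, K * nz) / K
    return r1 <= eps_bar <= r2, eps_bar, (r1, r2)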
127
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 25)

Evaluation of Kalman Filter Consistency (continue – 3)

Real time (Single-Run Tests) (continue – 2)

Test 3: Whiteness of the Innovations

Use the Normalized Time-Average Autocorrelation
$\bar{\rho}_i(l) := \frac{\sum_{k=1}^K i^T(k)\,i(k+l)}{\left[\sum_{k=1}^K i^T(k)i(k)\;\sum_{k=1}^K i^T(k+l)i(k+l)\right]^{1/2}}$

In view of the Central Limit Theorem, for large K this statistic is normally distributed.
For l ≠ 0 the variance can be shown to be 1/K, which tends to zero for large K.

Denoting by ξ a zero-mean, unit-variance normal random variable, let $r_1$ be such that
$\Pr\{\xi \in [-r_1, r_1]\} = 1 - \alpha$
α = 0.05 gives (from the normal distribution) $r_1 = 1.96$. Since $\bar{\rho}_i$ has standard
deviation $1/\sqrt{K}$, the corresponding probability region for α = 0.05 is [-r, r] where
$r = r_1/\sqrt{K} = 1.96/\sqrt{K}$
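A small sketch of this whiteness test; the stand-in white innovation sequence is an
assumption used only to exercise the check:

import numpy as np

def innovation_autocorrelation(innov, l):
    """Normalized time-average autocorrelation of the innovations at lag l (Test 3).
    innov: array of shape (K, nz); uses the K-l overlapping pairs (k, k+l)."""
    a = innov[:len(innov) - l] if l > 0 else innov
    b = innov[l:]
    num = np.sum(np.einsum('kj,kj->k', a, b))
    den = np.sqrt(np.sum(np.einsum('kj,kj->k', a, a)) *
                  np.sum(np.einsum('kj,kj->k', b, b)))
    return num / den

# Whiteness check: |rho(l)| should stay within 1.96/sqrt(K) for l != 0 (95% region).
K, nz = 200, 2
innov = np.random.default_rng(0).standard_normal((K, nz))  # stand-in white sequence
r = 1.96 / np.sqrt(K)
print(all(abs(innovation_autocorrelation(innov, l)) < r for l in range(1, 6)))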
128
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 26)

Evaluation of Kalman Filter Consistency (continue – 4)

Monte-Carlo Simulation Based Tests

The tests are based on the results of Monte-Carlo simulations (runs) that provide N
independent samples:
$\tilde{x}_i(k|k) := x_i(k) - \hat{x}_i(k|k)$,  $P_i(k|k) = E\{\tilde{x}_i(k|k)\tilde{x}_i^T(k|k)\}$,  i = 1,…,N

Test 1:
For each run i we compute at each scan k the Normalized (state) Estimation Error Squared
(NEES):
$\epsilon_{x_i}(k) := \tilde{x}_i^T(k|k)\,P_i^{-1}(k|k)\,\tilde{x}_i(k|k)$,  i = 1,…,N

Under the hypothesis that the Filter is Consistent and Linear Gaussian, $\epsilon_{x_i}(k)$ is
chi-square distributed with $n_x$ (the dimension of x) degrees of freedom. Then
$E\{\epsilon_{x_i}(k)\} = n_x$

The average over N runs of $\epsilon_{x_i}(k)$ is
$\bar{\epsilon}_x(k) := \frac{1}{N}\sum_{i=1}^N \epsilon_{x_i}(k)$

129
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 27)

Evaluation of Kalman Filter Consistency (continue – 5)

Monte-Carlo Simulation Based Tests (continue – 1)

Test 1 (continue – 1):
$N\bar{\epsilon}_x$ must have a chi-square distribution with $Nn_x$ degrees of freedom.

The test is successful if $\bar{\epsilon}_x \in [r_1, r_2]$, where the confidence interval $[r_1, r_2]$ is
defined using the chi-square distribution of $\bar{\epsilon}_x$:
$\Pr\{\bar{\epsilon}_x \in [r_1, r_2]\} = 1 - \alpha$

For example, for N = 50, $n_x$ = 2, and α = 0.05, using the two tails of the chi-square
distribution we get $N\bar{\epsilon}_x \sim \chi_{100}^2$:
$r_1 = \chi_{100}^2(0.025)/50 = 74/50 = 1.5$
$r_2 = \chi_{100}^2(0.975)/50 = 130/50 = 2.6$
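A minimal sketch of the Monte-Carlo NEES test at one scan; the function name and
interface are illustrative assumptions:

import numpy as np
from scipy.stats import chi2

def nees_consistency_test(x_err, P_list, alpha=0.05):
    """Monte-Carlo NEES test (Test 1) at one scan k.
    x_err: (N, nx) estimation errors from N runs; P_list: the N filter covariances."""
    N, nx = x_err.shape
    eps = [e @ np.linalg.solve(P, e) for e, P in zip(x_err, P_list)]
    eps_bar = np.mean(eps)
    r1 = chi2.ppf(alpha / 2, N * nx) / N
    r2 = chi2.ppf(1 - alpha / 2, N * nx) / N
    return r1 <= eps_bar <= r2, eps_bar, (r1, r2)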
130
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 28)

Evaluation of Kalman Filter Consistency (continue – 6)

Monte-Carlo Simulation Based Tests (continue – 2)

Test 2:  $E\{z(k) - \hat{z}(k|k-1)\} = E\{i(k)\} = 0$  and  $E\{i(k)i^T(k)\} = S(k)$

Using the Normalized Innovation Squared (NIS) statistic, computed from N Monte-Carlo runs:
$\bar{\epsilon}_i(k) := \frac{1}{N}\sum_{j=1}^N i_j^T(k)\,S_j^{-1}(k)\,i_j(k)$

$N\bar{\epsilon}_i$ must have a chi-square distribution with $Nn_z$ degrees of freedom.

The test is successful if $\bar{\epsilon}_i \in [r_1, r_2]$, where the confidence interval $[r_1, r_2]$ is
defined using the chi-square distribution of $\bar{\epsilon}_i$:
$\Pr\{\bar{\epsilon}_i \in [r_1, r_2]\} = 1 - \alpha$

For example, for N = 50, $n_z$ = 2, and α = 0.05, using the two tails of the chi-square
distribution we get $N\bar{\epsilon}_i \sim \chi_{100}^2$:  $r_1 = 74/50 = 1.5$,  $r_2 = 130/50 = 2.6$

131
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 29)

Evaluation of Kalman Filter Consistency (continue – 7)

Monte-Carlo Simulation Based Tests (continue – 3)

Test 3: Whiteness of the Innovations

Use the Normalized Sample-Average Autocorrelation
$\bar{\rho}_i(k, m) := \frac{\sum_{j=1}^N i_j^T(k)\,i_j(m)}{\left[\sum_{j=1}^N i_j^T(k)i_j(k)\;\sum_{j=1}^N i_j^T(m)i_j(m)\right]^{1/2}}$

In view of the Central Limit Theorem, for large N this statistic is normally distributed.
For k ≠ m the variance can be shown to be 1/N, which tends to zero for large N.

Denoting by ξ a zero-mean, unit-variance normal random variable, let $r_1$ be such that
$\Pr\{\xi \in [-r_1, r_1]\} = 1 - \alpha$
α = 0.05 gives (from the normal distribution) $r_1 = 1.96$. Since $\bar{\rho}_i$ has standard
deviation $1/\sqrt{N}$, the corresponding probability region for α = 0.05 is [-r, r] where
$r = r_1/\sqrt{N} = 1.96/\sqrt{N}$
132
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 30)

Evaluation of Kalman Filter Consistency (continue – 8)

Examples (Bar-Shalom, Y., Li, X.-R., "Estimation and Tracking: Principles, Techniques and
Software", Artech House, 1993, p. 242)

Monte-Carlo Simulation Based Tests (continue – 4)

Single run, 95% probability; model $x(k+1) = x(k) + \nu(k)$ with process noise variance q:
$\bar{\epsilon}_x := \frac{1}{K}\sum_{k=1}^K \tilde{x}^T(k|k)\,P^{-1}(k|k)\,\tilde{x}(k|k)$

Test (a) passes if $\bar{\epsilon}_x \in (0, 5.99)$. A one-sided region is considered; for
$n_x = 2$ we have $\chi_2^2(0, 0.95) = (0, 5.99)$.

See the behavior of $\bar{\epsilon}_x$ for various values of the process noise q, for filters
that are perfectly matched.

133
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 31)

Evaluation of Kalman Filter Consistency (continue – 9)

Examples (Bar-Shalom, Y., Li, X.-R., "Estimation and Tracking: Principles, Techniques and
Software", Artech House, 1993, p. 244)

Monte-Carlo Simulation Based Tests (continue – 5)

Monte-Carlo, N = 50, 95% probability:

(a) $\bar{\epsilon}_x(k) := \frac{1}{N}\sum_{j=1}^N \tilde{x}_j^T(k|k)P_j^{-1}(k|k)\tilde{x}_j(k|k)$
    Test (a) passes if $\bar{\epsilon}_x \in [74/50, 130/50] = [1.5, 2.6]$, since with
    $Nn_x = 100$:  $(\chi_{100}^2(0.025), \chi_{100}^2(0.975)) = (74, 130)$

(b) $\bar{\epsilon}_i(k) := \frac{1}{N}\sum_{j=1}^N i_j^T(k)S_j^{-1}(k)i_j(k)$
    Test (b) passes if $\bar{\epsilon}_i \in [32.3/50, 71.4/50] = [0.65, 1.43]$, since with
    $n_z = 1$, $Nn_z = 50$:  $(\chi_{50}^2(0.025), \chi_{50}^2(0.975)) = (32, 71)$

(c) $\bar{\rho}_i(k, m)$ — the normalized sample-average autocorrelation defined above.
    The corresponding probability region for α = 0.05 is [-r, r] where
    $r = r_1/\sqrt{N} = 1.96/\sqrt{50} = 0.28$

134
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 32)

Evaluation of Kalman Filter Consistency (continue – 10)

Examples (Bar-Shalom, Y., Li, X.-R., "Estimation and Tracking: Principles, Techniques and
Software", Artech House, 1993, p. 245)

Monte-Carlo Simulation Based Tests (continue – 6)

Example: Mismatched Filter
A mismatched filter is tested: real system process noise q = 9; filter model process
noise $q_F$ = 1; model $x(k+1) = x(k) + \nu(k)$.

(1) Single run:  $\bar{\epsilon}_x := \frac{1}{K}\sum_{k=1}^K \tilde{x}^T(k|k)P^{-1}(k|k)\tilde{x}(k|k)$
    Test (1) passes if $\bar{\epsilon}_x \in (0, 5.99)$, since $n_x = 2$: $\chi_2^2(0, 0.95) = (0, 5.99)$.
    Test fails.

(2) An N = 50 runs Monte-Carlo with the 95% probability region:
    $\bar{\epsilon}_x(k) := \frac{1}{N}\sum_{j=1}^N \tilde{x}_j^T(k|k)P_j^{-1}(k|k)\tilde{x}_j(k|k)$
    Test (2) passes if $\bar{\epsilon}_x \in [74/50, 130/50] = [1.5, 2.6]$, since with
    $Nn_x = 100$:  $(\chi_{100}^2(0.025), \chi_{100}^2(0.975)) = (74, 130)$.
    Test fails.

135
Recursive Bayesian Estimation
SOLO

Linear Gaussian Markov Systems (continue – 33)

Evaluation of Kalman Filter Consistency (continue – 11)

Examples (Bar-Shalom, Y., Li, X.-R., "Estimation and Tracking: Principles, Techniques and
Software", Artech House, 1993, p. 246)

Monte-Carlo Simulation Based Tests (continue – 7)

Example: Mismatched Filter (continue – 1)
A mismatched filter is tested: real system process noise q = 9; filter model process
noise $q_F$ = 1; model $x(k+1) = x(k) + \nu(k)$.

(3) An N = 50 runs Monte-Carlo with the 95% probability region:
    $\bar{\epsilon}_i(k) := \frac{1}{N}\sum_{j=1}^N i_j^T(k)S_j^{-1}(k)i_j(k)$
    Test (3) passes if $\bar{\epsilon}_i \in [32.3/50, 71.4/50] = [0.65, 1.43]$, since with
    $n_z = 1$, $Nn_z = 50$:  $(\chi_{50}^2(0.025), \chi_{50}^2(0.975)) = (32, 71)$.
    Test fails.

(4) An N = 50 runs Monte-Carlo with the 95% probability region:
    $\bar{\rho}_i(k, m)$ — the normalized sample-average autocorrelation.
    The corresponding probability region for α = 0.05 is [-r, r] where
    $r = r_1/\sqrt{N} = 1.96/\sqrt{50} = 0.28$.
    Test fails.

Return to Table of Content (Innovation in Tracking)
136
Target Estimators
SOLO

Kalman Filter for Filtering Position and Velocity Measurements

Assume a Cartesian model of a non-maneuvering target:
$\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}}_{A}\begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \underbrace{\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{B}w$

Measurements:  $z = \begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$

The transition matrix over a sample time T:
$\Phi(T) := \exp(AT) = I + AT + \cdots + \frac{1}{n!}(AT)^n + \cdots = I + AT = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}$
since $A^2 = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = 0$, hence $A^n = 0$ for all $n \geq 2$.

$\Gamma(T) := \int_0^T \Phi(T - \tau)B\,d\tau = \int_0^T \begin{bmatrix} 1 & T - \tau \\ 0 & 1 \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix}d\tau = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}$

Discrete system:
$x_{k+1} = \Phi_kx_k + \Gamma_kw_k$
$z_{k+1} = H_{k+1}x_{k+1} + v_{k+1}$

$\Phi_k = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}$,  $\Gamma_k = \begin{bmatrix} T^2/2 \\ T \end{bmatrix}$,  $H_{k+1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$
$E\{w_kw_j\} = \sigma_q^2\,\delta_{kj}$,  $E\{v_{k+1}v_{j+1}^T\} = R = \begin{bmatrix} \sigma_P^2 & 0 \\ 0 & \sigma_V^2 \end{bmatrix}\delta_{kj}$
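A minimal sketch of the discretization just derived; the function name and the example
parameter values are assumptions for illustration:

import numpy as np

def cv_model(T, sigma_q):
    """Discretized constant-velocity model above: transition matrix Phi, noise-input
    matrix Gamma, and process-noise covariance Gamma Gamma^T sigma_q^2 (piecewise
    constant white acceleration between samples)."""
    Phi = np.array([[1.0, T],
                    [0.0, 1.0]])
    Gamma = np.array([[T**2 / 2.0],
                      [T]])
    Q = (Gamma @ Gamma.T) * sigma_q**2
    return Phi, Gamma, Q

Phi, Gamma, Q = cv_model(T=0.1, sigma_q=1.0)
print(Q)   # [[T^4/4, T^3/2], [T^3/2, T^2]] * sigma_q^2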
137
Target Estimators
SOLO

Kalman Filter for Filtering Position and Velocity Measurements (continue – 1)

The Kalman Filter:
$\hat{x}(k+1|-) = \Phi\,\hat{x}(k)$
$\hat{x}(k+1) = \hat{x}(k+1|-) + K_{k+1}[z_{k+1} - H_{k+1}\hat{x}(k+1|-)]$
$P_{k+1}(-) = \Phi P_k(+)\Phi^T + \Gamma Q\Gamma^T$

With $P_k(+) = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}$:
$P_{k+1}(-) = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}\begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}\begin{bmatrix} 1 & 0 \\ T & 1 \end{bmatrix} + \sigma_q^2\begin{bmatrix} T^4/4 & T^3/2 \\ T^3/2 & T^2 \end{bmatrix}$
$= \begin{bmatrix} p_{11} + 2Tp_{12} + T^2p_{22} + \sigma_q^2T^4/4 & p_{12} + Tp_{22} + \sigma_q^2T^3/2 \\ p_{12} + Tp_{22} + \sigma_q^2T^3/2 & p_{22} + \sigma_q^2T^2 \end{bmatrix}$

138
Target Estimators
SOLO

Kalman Filter for Filtering Position and Velocity Measurements (continue – 2)

The Kalman Filter gain:
$K_{k+1} = P_{k+1}(-)H^T[HP_{k+1}(-)H^T + R]^{-1}$

With the prediction covariance $P_{k+1}(-) = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}$, H = I, and
$R = \mathrm{diag}(\sigma_P^2, \sigma_V^2)$:
$K_{k+1} = \begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}\begin{bmatrix} p_{11} + \sigma_P^2 & p_{12} \\ p_{12} & p_{22} + \sigma_V^2 \end{bmatrix}^{-1}
= \frac{1}{\Delta}\begin{bmatrix} p_{11}(p_{22} + \sigma_V^2) - p_{12}^2 & p_{12}\sigma_P^2 \\ p_{12}\sigma_V^2 & p_{22}(p_{11} + \sigma_P^2) - p_{12}^2 \end{bmatrix}$
where  $\Delta := (p_{11} + \sigma_P^2)(p_{22} + \sigma_V^2) - p_{12}^2$

139
Target Estimators
SOLO

Kalman Filter for Filtering Position and Velocity Measurements (continue – 3)

The Kalman Filter covariance update:
$P_{k+1}(+) = [I - K_{k+1}H_{k+1}]P_{k+1}(-) = [I - K_{k+1}H_{k+1}]P_{k+1}(-)[I - K_{k+1}H_{k+1}]^T + K_{k+1}RK_{k+1}^T$

$I - K_{k+1}H_{k+1} = \frac{1}{\Delta}\begin{bmatrix} \sigma_P^2(p_{22} + \sigma_V^2) & -p_{12}\sigma_P^2 \\ -p_{12}\sigma_V^2 & \sigma_V^2(p_{11} + \sigma_P^2) \end{bmatrix}$

$P_{k+1}(+) = [I - K_{k+1}H_{k+1}]\begin{bmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{bmatrix}
= \frac{1}{\Delta}\begin{bmatrix} \sigma_P^2[p_{11}(p_{22} + \sigma_V^2) - p_{12}^2] & \sigma_P^2\sigma_V^2 p_{12} \\ \sigma_P^2\sigma_V^2 p_{12} & \sigma_V^2[p_{22}(p_{11} + \sigma_P^2) - p_{12}^2] \end{bmatrix}$

so that the gains can also be written in terms of the updated covariance:
$K_{k+1} = P_{k+1}(+)\begin{bmatrix} 1/\sigma_P^2 & 0 \\ 0 & 1/\sigma_V^2 \end{bmatrix}$

Target Estimators

w
x
x
x
x
td
d
BA
























1
0
00
10



SOLO
We want to find the steady-state form of the filter for
Assume that only the position measurements are available
x
x

- position
- velocity
      kjjkkk
k
kkkk RvvEvEv
x
x
vxHz 





 

 1111
1
1111 0&01

Discrete System







1111
1
kkkk
kkkkk
vxHz
wxx
 
   


























kjP
T
jkkkk
H
k
kjw
T
jkkkkk
vvERvxz
wwEQw
T
T
x
T
x
k
kk


2
111111
2
2
1
&01
&
2/
10
1
1


α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model
Target Estimators
141
SOLO
Discrete System







1111
1
kkkk
kkkkk
vxHz
wxx
 
   


























kjP
T
jkkkk
H
k
kjw
T
jkkkkk
vvERvxz
wwEQw
T
T
x
T
x
k
kk


2
111111
2
2
1
&01
&
2/
10
1
1


         11/111  kRkHkkPkHkS
T
        1
11/11

 kSkHkkPkK T
When the Kalman Filter reaches the steady-state
    







2212
1211
1/1lim/lim
pp
pp
kkPkkP
kk
  







2212
1211
/1lim
mm
mm
kkP
k
  2
11
2
1212
1211
0
1
01 PP m
mm
mm
S  












 
 





























 2
1112
2
1111
2
112212
1211
12
11
/
/1
0
1
P
P
P mm
mm
mmm
mm
k
k
K



        kkPkHkKIkkP /1111/1   































2212
1211
12
11
2212
1211
01
10
01
mm
mm
k
k
pp
pp
   
 
   
   

















 2
11
2
1222
2
1112
2
2
1112
22
1111
2
1212221211
12111111
//
//
1
11
PPP
PPPP
mmmmm
mmmm
mkmmk
mkmk


α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 1)
Target Estimators
142
SOLO
From          kQkkkPkkkP
T
 //1
we obtain           kkQkkPkkkP T
 /1/ 1
    







2212
1211
1/1lim/lim
pp
pp
kkPkkP
kk
  







2212
1211
/1lim
mm
mm
kkP
k
  
T
TTT
TT
mm
mmT
pp
pp
Q
w








































 






1
01
2/
2/4/
10
1 2
23
34
2212
1211
2212
1211
1

For Piecewise (between samples) Constant White Noise acceleration model
   
  

















22
22
23
2212
23
2212
24
22
2
1211
1212221211
12111111
2/
2/4/2
1
11
ww
ww
TmTmTm
TmTmTmTmTm
mkmmk
mkmk




22
1212
23
221211
24
22
2
121111
2/
4/2
w
w
w
Tmk
TmTmk
TmTmTmk






α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 2)
Target Estimators
143
SOLO
 11
2
1111 1/ kkm P  
12
22
12 / kTm w
  121211
22
121122 2//2// mkTkTTmkm w  
We obtained the following 5 equations with 5 unknowns: k11, k12, m11, m12, m22
 11
2
1212 1/ kkm P  
 2
111111 / Pmmk 1
 2
111212 / Pmmk 2
4/2
24
22
2
121111 wTmTmTmk 3
2/
23
221211 wTmTmk 4
22
1212 wTmk 5
Substitute the results obtained from and in1 2 34 5
       
  

  
 4/
11
2
2
12
2
11
2
12
12112
11
2
12
11
2
2
11
24
1212
22
22
12121111
14121
2
1
w
w
T
mkT
P
m
m
P
m
P
mk
P
k
k
T
k
k
k
T
k
T
k
kT
k
k

















3
0
4
1
2
2
12
2
121112
2
11  kTkkTkTk
α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 3)
Target Estimators
144
SOLO
We obtained: 0
4
1
2
2
12
2
121112
2
11  kTkkTkTk
Kalata introduced the α, β parameters defined as: Tkk 1211 ::  
and the previous equation is written as function of α, β as:
0
4
1
2 22
 
which can be used to write α as a function of β:
2
2

 
  12
22
11
2
12
12
1 k
T
k
k
m wP 



We obtained:
 
T
TTm w
P





22
2
12
1



 
2
2
242
:
1






 P
wT
P
wT



2
: Target Maneuvering Index proportional to the ratio of:
Motion Uncertainty:
2
22
Tw
Observation Uncertainty: 2
P
α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 4)
Target Estimators
145
SOLO
2
2

 We obtained:
 
2
2
242
:
1






 P
wT
0
2
 


The positive solution for from the above equation is:   8
22
1 2

Therefore:    

 84
4
84
4
1 222

and:
  


 8428168
16
1
11 222
2
2

   848
8
1 22

and:
   2
22
2
2/12/21 








α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 5)
Target Estimators
146
SOLO
We found
   
  













1212221211
12111111
2212
1211
1
11
mkmmk
mkmk
pp
pp
 11
2
1111 1/ kkm P  
 11
2
1212 1/ kkm P  
  121211
22
121122
2//
2//
mkTk
TTmkm w

 
  2
11111111 1 Pkmkp 
  2
12121112 1 Pkmkp 
 
 













12
2//
2//
2
121211
121212121122
P
T
TT
mkTk
mkmkTkp
2
11 Pp 
2
12 P
T
p 


 
 
2
222
1
2/
P
T
p 





&
α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 6)
Target Estimators
147
   848
8
1 22

SOLO
We found
   

 84
4
84
4
1 222

α, β gains, as function of λ in semi-log and log-log scales
α - β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 7)
Target Estimators
148
SOLO
  
T
T
q
TT
TT
mm
mmT
pp
pp
Q 







































 






1
01
2/
2/3/
10
1
2
23
2212
1211
2212
1211
1
For White Noise acceleration model
   
  

















qTmqTmTm
qTmTmqTmTmTm
mkmmk
mkmk
22
2
2212
2
2212
3
22
2
1211
1212221211
12111111
2/
2/3/2
1
11


qTmk
qTmTmk
qTmTmTmk



1212
2
221211
3
22
2
121111
2/
3/2
α - β (2-D) Filter with White Noise Acceleration Model
 









TT
TT
qkQ
2/
2/3/
2
23
Target Estimators
149
SOLO
 11
2
1111 1/ kkm P  
1212 / kqTm 
  121211121122 2//2// mkTkqTTmkm 
We obtained the following 5 equations with 5 unknowns: k11, k12, m11, m12, m22
 11
2
1212 1/ kkm P  
 2
111111 / Pmmk 1
 2
111212 / Pmmk 2
3/2 3
22
2
121111 qTmTmTmk 3
2/2
221211 qTmTmk 4
qTmk 1212
5
Substitute the results obtained from and in1 2 34 5
       
  

  
 3/
11
2
2
12
2
11
2
12
12112
11
2
12
11
2
2
11
3
1212
22
12121111
13121
2
1
qT
mkqT
P
m
m
P
m
P
mk
P
k
k
T
k
k
k
T
k
T
k
kT
k
k














3
0
6
1
2
2
12
2
121112
2
11  kTkkTkTk
α - β (2-D) Filter with White Noise Acceleration Model (continue – 1)
Target Estimators
150
SOLO
We obtained: 0
6
1
2
2
12
2
121112
2
11  kTkkTkTk
The α, β parameters defined as: Tkk 1211 ::  
and the previous equation is written as function of α, β as:
0
6
1
2 22
 
which can be used to write α as a function of β:
212
2
2

 


 



1
/
1/ 11
2
12
12
12
T
k
k
T
qT
k
qT
m P
We obtained:
2
2
32
:
1
c
P
qT





α - β (2-D) Filter with White Noise Acceleration Model (continue – 2)
2
2
22
:
12
2
2
1
1
c










The equation for solving β is:
which can be solved numerically.
Target Estimators
151
SOLO
We found
   
  













1212221211
12111111
2212
1211
1
11
mkmmk
mkmk
pp
pp
 11
2
1111 1/ kkm P  
 11
2
1212 1/ kkm P  
  12121122 2// mkTkm 
  2
11111111 1 Pkmkp 
  2
12121112 1 Pkmkp 
 
 













12
2//
2//
2
121211
121212121122
P
T
TT
mkTk
mkmkTkp
2
11 Pp 
2
12 P
T
p 


 
 
2
222
1
2/
P
T
p 





&
α - β Filter with White Noise Acceleration Model (continue – 3)
Target Estimators
152

w
x
x
x
x
x
x
td
d
BA










































1
0
0
000
100
010





SOLO
We want to find the steady-state form of the filter for
Assume that only the position measurements are available
      kjjkkk
k
kkkk RvvEvEv
x
x
x
vxHz 










 

 1111
1
1111 0&001


Discrete System







1111
1
kkkk
kkkkk
vxHz
wxx  
   






































kjP
T
jkkkk
H
k
kjw
T
jkkkkk
vvERvxz
wwEQwT
T
xT
TT
x
k
kk


2
111111
2
22
1
&001
&
1
2/
100
10
2/1
1

  
α – β - γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model
x
x
x


- position
- velocity
- acceleration
Target Estimators
153
SOLO
Piecewise (between samples) Constant White Noise acceleration model
              12/
1
2/
2
2
00 TTT
T
qlqkllwkwEk kl
TTT










 
        











12/
2/
2/2/2/
2
23
234
0
TT
TTT
TTT
qllwkwEk TT
Guideline for Choice of Process Noise Intensity
For this model q should be of the order of maximum acceleration increment over a
sampling period ΔaM.
A practical range is 0.5 ΔaM ≤ q ≤ ΔaM.
α – β - γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model
(continue – 1)
Target Estimators
154
SOLO
The Target Maneuvering Index is defined as for α – β Filter as:
P
wT



2
:
α – β - γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model
(continue – 2)
The three equations that yield the optimal steady-state gains are:
 
2
2
14





    1422 or: 2/2  



2

This system of three nonlinear equations can be solved numerically.
The corresponding update state covariance expressions are:
 
 
 
 
 
 
2
433
2
213
2
323
2
12
2
222
2
11
14
2
14
2
18
428
PP
PP
PP
T
p
T
p
T
p
T
p
T
pp























Target Estimators
155
SOLO
Target Estimators
α – β - γ Filter gains as functions of λ in semi-log and log-log scales:
α – β - γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model
(continue – 3)
156
SOLO
Target Estimators
α – β (2-D) Filter and α – β - γ (3-D) Filter - Summary
Advantages
Disadvantages
• Computation requirements (memory, computation time) are low.
• Quick (but possible dirty) evaluation of track performances as measured by the
steady-state variances.
• very limited capability in clutter.
• when used independently for each coordinate, one can encounter instabilities
due to decoupling.
157
SOLO Nonlinear Estimation (Filtering)
Return to Table of Content
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
The assumption of Linearity of the System and the Measurements
and the Gaussian assumption are not valid like:
• Angles , Range measurements (Measurements to states nonlinearities)
• Tracking in the presence of constraints
• Terrain Navigation
• Tracking Extended (non-point target)
Therefore we must deal with Nonlinear Filters and Use Approximations.
158
SOLO Nonlinear Estimation (Filtering)
Return to Table of Content
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
The Nonlinear Filters are approximations of the
Optimal Bayesian Estimators:
• Analytic Approximations (Linearization of the models)
- Extended Kalman Filter
• Sampling Approaches
- Unscented Kalman Filter, Particle Filter
• Numerical Integration
- Approximate p (xk|Z1:k) on a grid of nodes
• Gaussian Sum Filter
- Approximate p (xk|Z1:k) with a Gaussian Mixture
159
SOLO
Additive Gaussian Nonlinear Filter  
  kkk
kkk
vxhz
wxfx

  11
Recursive Bayesian Estimation
        k
xx
kkkkkkkkkkk xdPxxxhZxzEz 1|1|1:111| ,ˆ;,|ˆ N
      T
kkkkkkkkkkkk
T
k
zz
kk zzRxdPxxxhxhP 1|1|1|1|1| ˆˆ,ˆ;    N
    T
kkkkkkkkkkk
T
k
xz
kk zxxdPxxxhxP 1|1|1|1|1| ˆˆ,ˆ;    N
        11|11|1111:11| .ˆ;|ˆ k
xx
kkkkkkkkkk xdPxxxfZxEx N
      T
kkkkkk
xx
kkkkkk
T
k
xx
kk xxQxdPxxxfxfP 1|1|111|11|11111|
ˆˆ,ˆ;    N
Summary (see “Bayesian Estimation” presentation)
The Kalman Filter, that uses this computations is given by:
   1|
1
1|1|1|| ˆˆ|ˆ 

  kkk
K
zz
kk
xz
kkkkkkkk zzPPxzxEx
k



   
T
k
zz
kkk
xx
kk
zx
kk
zz
kk
xx
kk
xx
kkk
T
kkkkkk
xx
kk
KPKP
PPPPZxxxxEP
1
1|1|
1|
1
1|1|1|:1|||
ˆˆ







160
SOLO
Additive Gaussian Nonlinear Filter (continue – 5)  
  kkk
kkk
vxhz
wxfx

  11
Recursive Bayesian Estimation
    xdPxxxgI xx,ˆ;N
To obtain the Kalman Filter, we must approximate integrals of the type:
Three approximation are presented:
(2) Gauss – Hermite Quadrature Approximation
(3) Unscented Transformation Approximation
(4) Monte Carlo Approximation
(1) Extended Kalman Filter
161
Extended Kalman Filter
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
SOLO
In the extended Kalman filter, (EKF) the state transition
and observation models need not be linear functions of
the state but may instead be (differentiable) functions.
        11,1,1  kwkukxkfkx
        kkukxkhkz  ,,
State vector dynamics
Measurements
             kPkekeEkxEkxke x
T
xxx  &:
              lk
T
www kQlekeEkwEkwke ,
0
&: 

     lklekeE
T
vw ,0 






lk
lk
lk
1
0
,
The function f can be used to compute the predicted state from the previous estimate
and similarly the function h can be used to compute the predicted measurement from
the predicted state. However, f and h cannot be applied to the covariance directly.
Instead a matrix of partial derivatives (the Jacobian) is computed.
              
  
   
  
   111
2
1
111,1,11,1,1
1
2
2
1








keke
x
f
keke
x
f
kekukxEkfkukxkfke wx
Hessian
kxE
T
xx
Jacobian
kxE
wx 

              
  
   
  
   kke
x
h
keke
x
h
kkukxEkhkukxkhke x
Hessian
kxE
T
xx
Jacobian
kxE
z  









2
2
1
2
1
,,,,
Taylor’s Expansion:
162
Extended Kalman Filter
State Estimation (one cycle)
SOLO
 
 1|1
1|1ˆ


kkP
kkx
1kt kt
T
t
 
 1|
1|ˆ


kkP
kkx  
 kkP
kkx
|
|ˆ
1:  kk
 11|11| ,ˆ,1ˆ   kkkkk uxkfx
State vector prediction1
Jacobians Computation
1|1|1 ˆˆ
1 &






 
kkkk x
k
x
k
x
h
H
x
f
2
Covariance matrix extrapolation111|111|   k
T
kkkkkk QPP3
Innovation Covariancek
T
kkkkk RHPHS  1|4
Gain Matrix Computation1
1|

 k
T
kkkk SHPK5
Measurement & Innovation
1|ˆ
1|
ˆ


kkz
kkkkk xHzi6
Filteringkkkkkk iKxx  1||
ˆˆ7
Covariance matrix updating
 
    T
kkk
T
kkkkkk
kkkk
T
kkkkk
kkkk
T
kkkkkkk
KRKHKIPHKI
PHKI
KSKP
PHSHPPP










1|
1|
1|
1|
1
1|1||8
0 Initialization     T
xxxxEPxEx 00000|000
ˆˆˆ 
163
Extended Kalman Filter
State Estimation (one cycle)
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
SOLO
Evolution
of the system
(true state)
Estimation
of the state
State Covariance and
Kalman Filter ComputationsController
Innovation Covariance
k
T
kkkkk RHPHS  1|
Innovation
1|ˆ  kkkk zz
1kt
kt
Time
Jacobians Evaluation
kk
kk
xx
k
xx
k
x
h
H
x
f
|
1|1
ˆ
ˆ
1










State at tk-1
1kx
Control at tk-1
1ku
State
Estimation
at tk-1
1|1
ˆ  kkx
State Error Covariance
at tk-1 1|1  kkP
State Prediction Covariance
111|111|   k
T
kkkkkk QPP
State Prediction
at tk
 11|11| ,ˆ,1ˆ   kkkkk uxkfx
Measurement Prediction
at tk
 1|1|
ˆ,ˆ   kkkk xkhz
Transition to tk
  111,,1   kkkk wuxkfx
Measurement at tk
  kkk vxkhz  ,
Kalman Filter Gain
1
1|

 k
T
kkkk SHPK
Update State
Covariance at tkk
k
T
kkkkkkk KSKPP  1||
Update State
Estimation at t k
kkkkkk Kxx  1||
ˆˆ
I.C.:  00|0
ˆ xEx     T
xxxxEP 0|000|000|0
ˆˆ I.C.:
1|1
ˆ  kkx
1kx
kkP |
2|1  kkP
kkx |
ˆ
kx
1|1  kkP
1| kkP
1|
ˆ kkx
1kt kt
Real Trajectory
Estimated
Trajectory
Rudolf E. Kalman
( 1920 - )
164
Extended Kalman Filter
SOLO
Criticism of the Extended Kalman Filter
Unlike its linear counterpart, the extended Kalman filter is not an optimal estimator.
In addition, if the initial estimate of the state is wrong, or if the process is modeled
incorrectly, the filter may quickly diverge, owing to its linearization. Another problem
with the extended Kalman filter is that the estimated covariance matrix tends to
underestimate the true covariance matrix and therefore risks becoming inconsistent
in the statistical sense without the addition of "stabilizing noise".
Having stated this, the Extended Kalman filter can give reasonable performance, and
is arguably the de facto standard in navigation systems and GPS.
165
SOLO
Additive Gaussian Nonlinear Filter (continue – 5)
 
  kkk
kkk
vxhz
wxfx

  11
Recursive Bayesian Estimation
    xdPxxxgI xx,ˆ;N
To obtain the Kalman Filter, we must approximate integrals of the type:
Gauss – Hermite Quadrature Approximation
 
  
    







xdxxPxx
P
xgI xx
T
xx
n
ˆˆ
2
1
exp
2
1 1
2/1

Let Pxx = STS a Cholesky decomposition, and define:  xxSz ˆ
2
1
: 1
 
 
 

 zdezgI zz
n
T
2/
2
2

This integral can be approximated using the Gauss – Hermite
quadrature rule:
    


M
i
ii
z
zfwzdzfe
1
2
Carl Friedrich
Gauss
1777 - 1855
Charles Hermite
1822 - 1901
Andre – Louis
Cholesky
1875 - 1918
166
SOLO
Additive Gaussian Nonlinear Filter (continue – 6)
 
  kkk
kkk
vxhz
wxfx

  11
Recursive Bayesian Estimation
Gauss – Hermite Quadrature Approximation (continue – 1)
    


M
i
ii
z
zfwzdzfe
1
2
The quadrature points zi and weights wi are defined as follows:
A set of orthonormal Hermite polynomials are generated from the recurrence relationship:
   
     zH
j
j
zH
j
zzH
zHzH
jjj 11
4/1
01
11
2
/1,0






 
or in matrix form:
 
 
 
 
 
 
 
  
  Mj
j
zH
zH
zH
zH
zH
zH
zH
z jM
e
M
zh
M
J
M
M
zh
M
M
M
,,2,1
2
:
1
0
0
0
00
00
00
00
00
1
1
0
1
1
2
21
1
1
1
0



  











































































 

   zH
j
zH
j
zHz jjj
jj
11
1
2
1
2






     zHezhJzhz MMMM 
167
SOLO
Additive Gaussian Nonlinear Filter (continue – 7)
Recursive Bayesian Estimation
Gauss – Hermite Quadrature Approximation (continue – 2)
    


M
i
ii
z
zfwzdzfe
1
2
Orthonormal Hermitian
Polynomials in matrix form:
  Mj
j
JJ j
T
M
M
M
M ,,2,1
2
:
00
00
00
00
00
1
1
2
21
1
































     zHezhJzhz MMMM 
Let evaluate this equation for the M roots zi for which   MizH iM ,,2,10 
    MizhJzhz iMii ,,2,1 
From this equation we can see that zi and
are the eigenvalues and eigenvectors, respectively, of the symmetric matrix JM.
         MizHzHzHzh
T
iMiii ,,1,,, 110   
Because of the symmetry of JM the eigenvectors are orthogonal and can be normalized.
Define:
    MjizHWWzHv
M
j
ijiiij
i
j ,,2,1,:&/:
1
0
2
 


We have:
   
    li
li
li
li
M
j l
lj
i
ij
M
j
l
j
i
j zhzh
WWW
zH
W
zH
vv 





 
0
1
0
1
0
1
:
168
Uscented Kalman FilterSOLO
When the state transition and observation models – that is, the predict and update
functions f and h (see above) – are highly non-linear, the Extended Kalman Filter
can give particularly poor performance [JU97]. This is because only the mean is
propagated through the non-linearity. The Unscented Kalman Filter (UKF) [JU97]
uses a deterministic sampling technique known as the to pick a minimal set of
sample points (called “sigma points”) around the mean. These “sigma points” are
then propagated through the non-linear functions and the covariance of the estimate
is then recovered. The result is a filter which more accurately captures the true mean
and covariance. (This can be verified using Monte Carlo sampling or through a
Taylor series expansion of the posterior statistics.) In addition, this technique
removes the requirement to analytically calculate Jacobians, which for complex
functions can be a difficult task in itself.
  111,,1   kkkk wuxkfx
  kkk xkhz  ,
State vector dynamics
Measurements
             kPkekeEkxEkxke x
T
xxx  &:
              lk
T
www kQlekeEkwEkwke ,
0
&: 

     lklekeE
T
vw ,0 






lk
lk
lk
1
0
,
The Unscented Algorithm using              kPkekeEkxEkxke x
T
xxx  &:
determines              kPkekeEkzEkzke z
T
zzz  &:
169
Unscented Kalman FilterSOLO
    
 
n
n
j j
j
n
x
n
x
n
x
x
x
xx
fx
n
xxf

















1
0
ˆ
:
!
1
ˆ


Develop the nonlinear function f in a Taylor series around xˆ
Define also the operator     xf
x
xfxfD
n
n
j j
jx
n
x
n
x
x










 1
: 
Propagating Means and Covariances Through Nonlinear Transformations
Consider a nonlinear function . xfy 
Let compute
Assume is a random variable with a probability density function pX (x) (known or
unknown) with mean and covariance
x
     Txx
xxxxEPxEx ˆˆ,ˆ 
    
     


 
































0
ˆ
10
ˆ
0
!
1
!
1
!
1
ˆˆ
n
x
n
n
j j
j
n
x
n
x
n
n
x
f
x
xE
n
fxE
n
DE
n
xxfEy
x

 
   
      xxTT
PxxxxExxE
xxExE
xxx



ˆˆ
0ˆ
ˆ



170
Unscented Kalman Filter
SOLO
Propagating Means and Covariances Through Nonlinear Transformations
Consider a nonlinear function .
(continue – 1)
 xfy 
   
      xxTT
PxxxxExxE
xxExE
xxx



ˆˆ
0ˆ
ˆ



    









































































































































 



 
x
n
j j
jx
n
j j
jx
n
j j
j
x
n
j j
j
n
x
n
n
j j
j
f
x
xEf
x
xEf
x
xE
f
x
xExff
x
xE
n
xxfEy
xxx
xx
ˆ
4
1
ˆ
3
1
ˆ
2
1
ˆ
10
ˆ
1
!4
1
!3
1
!2
1
ˆ
!
1
ˆˆ


Since all the differentials of f are computed around the mean (non-random)xˆ
              xx
xxT
xxx
TT
xxx
TT
xxx fPfxxEfxxEfxE ˆˆˆˆ
2
 
        0
ˆ
1
0ˆ
1
ˆ0
ˆ 






























































  
x
n
j j
j
x
n
j j
j
x
xxx f
x
xEf
x
xEfxEfxE
xx


                  


xxxxxx
xxT
x
n
x
n
x fDEfDEfPxffDE
n
xxfEy ˆ
4
ˆ
3
ˆ
0
ˆ
!4
1
!3
1
!2
1
ˆ
!
1
ˆˆ 
171
Simon J. Julier
Unscented Kalman FilterSOLO
Propagating Means and Covariances Through Nonlinear Transformations
Consider a nonlinear function .
(continue - 2)
 xfy     
      xxTT
PxxxxExxE
xxExE
xxx



ˆˆ
0ˆ
ˆ



Unscented Transformation (UT), proposed by Julier and Uhlmann
uses a set of “sigma points” to provide an approximation of
the probabilistic properties through the nonlinear function
Jeffrey K. Uhlman
A set of “sigma points” S consists of p+1 vectors and their associated
weights S = { i=0,1,..,p: x(i) , W(i) }.
(1) Compute the transformation of the “sigma points” through the
nonlinear transformation f:
   
  pixfy ii
,,1,0 
(2) Compute the approximation of the mean:    


p
i
ii
yWy
0
ˆ
The estimation is unbiased if:
       
   
yWyyEWyWE
p
i
i
p
i
y
ii
p
i
ii
ˆˆ
00
ˆ
0







 

 
1
0

p
i
i
W
(3) The approximation of output covariance is given by
   
   
 

p
i
Tiiiyy
yyyyWP
0
ˆˆ
172
Unscented Kalman FilterSOLO
Propagating Means and Covariances Through Nonlinear Transformations
Consider a nonlinear function (continue – 3) xfy 
One set of points that satisfies the above conditions consists of a symmetric set of symmetric
p = 2nx points that lie on the covariance contour Pxx:
th
xn
   
 
 
 
   
 
   
x
x
ni
x
i
xxxni
i
xxxi
ni
nWW
nWW
P
W
n
xx
P
W
n
xx
WWxx
x
x
,,1
2/1
2/1
1
ˆ
1
ˆ
ˆ
0
0
0
0
0
00





































where is the row or column of the matrix square root of nx Pxx /(1-W0)
(the original covariance matrix Pxx multiplied by the number of dimensions of x, nx/(1-W0)).
This implies:
  i
xx
x WPn 01/ 
xxx
n
i
T
i
xxx
i
xxx
P
W
n
P
W
n
P
W
nx
01 00 111 


















Unscented Transformation (UT) (continue – 1)
173
Unscented Kalman FilterSOLO
Propagating Means and Covariances Through Nonlinear Transformations
Consider a nonlinear function (continue – 3) xfy 
Unscented Transformation (UT) (continue – 2)
   
 
 
 
 



















0
0
2,,1ˆ
!
1
,,1ˆ
!
1
0ˆ
n
xx
n
x
n
x
n
x
ii
nnixfD
n
nixfD
n
ixf
xfy
i
i




1
Unscented Algorithm:
   
     
         
       



































x
ii
x
i
x
iii
x
i
x
i
x
n
i
xx
x
n
i
x
x
n
i
xxx
x
n
i n
n
x
x
n
i n
n
x
x
n
i
ii
UT
xfDxfD
n
W
xfD
n
W
xf
xfDxfDxfDxf
n
W
xfW
xfD
nn
W
xfD
nn
W
xfWyWy
1
640
1
20
1
6420
0
1 0
0
1 0
0
0
2
0
ˆ
!6
1
ˆ
!4
11
ˆ
2
11
ˆ
ˆ
!6
1
ˆ
!4
1
ˆ
!2
1
ˆ
1
ˆ
ˆ
!
1
2
1
ˆ
!
1
2
1
ˆˆ





 
i
xxx
i
i
P
W
n
xxxx 









01
ˆˆ 
2 Since    
 
 















 

oddnxfD
evennxfD
xf
x
xxfD n
x
n
x
n
n
j j
ij
n
x
i
i
x
i
ˆ
ˆ
ˆˆ
1 

 
174
Unscented Kalman Filter
       







x
ii
n
i
xx
x
xxT
UT xfDxfD
n
W
xfPxfy
1
640
ˆ
!6
1
ˆ
!4
11
ˆ
2
1
ˆˆ 
 
i
xxx
i
i
P
W
n
xxxx 









01
ˆˆ 
SOLO
Propagating Means and Covariances Through Nonlinear Transformations
Consider a nonlinear function (continue – 4) xfy 
Unscented Transformation (UT) (continue – 3)
Unscented Algorithm:
   
     xfPxfP
W
n
n
W
xfP
W
n
P
W
n
n
W
xfP
W
n
P
W
n
n
W
xfD
n
W
xxTxxxT
x
n
i
T
i
xxx
i
xxxT
x
n
i
T
i
xxx
i
xxxT
x
n
i
x
x
x
xx
i
ˆ
2
1
ˆ
12
11
ˆ
112
11
ˆ
112
11
ˆ
2
11
0
0
1 00
0
1 00
0
1
20
































































Finally:
We found
                  


xxxxxx
xxT
x
n
x
n
x fDEfDEfPxffDE
n
xxfEy ˆ
4
ˆ
3
ˆ
0
ˆ
!4
1
!3
1
!2
1
ˆ
!
1
ˆˆ 
We can see that the two expressions agree exactly to the third order.
175
covariance
mean
Actual (sampling)
 xfy 
true mean
true
covariance
covariance
mean
Actual (sampling) Linearized (EKF)
 xfy 
 
APAP
xfy
xxTyy

 ˆˆ
true mean
true
covariance
 xf ˆAPA xxT
Uscented Kalman FilterSOLO
covariance
mean
sigma points
Actual (sampling) Linearized (EKF)
Unscented
Transformation
 xfy 
 
APAP
xfy
xxTyy

 ˆˆ  XY f
transformed
sigma points
UT mean
UT covariance
true mean
true
covariance
 xf ˆAPA xxT
weighted sample mean
and covariance
176
Uscented Kalman FilterSOLO
   
N
T
iiiz
N
ii zzPz
2
0
2
0

x
xP
xP




zP
 f
i
i
i
z
   xxi PxPxx  
Weighted
sample mean
Weighted
sample
covariance
Table of Content
177
Uscented Kalman FilterSOLO
UKF Summary
Initialization of UKF
     T
xxxxEPxEx 00000|000
ˆˆˆ 
       











R
Q
P
xxxxEPxxEx
TaaaaaTTaa
00
00
00
ˆˆ00ˆˆ
0|0
00000|0000
 TTTTa
vwxx :
For   ,,1 k
System Definition
     
     




 
lkk
T
lkkkkk
lkk
T
lkkkkkk
RvvEvEvxkhz
QwwEwEwuxkfx
,
,1111111
&0,
&0,,1


  Liuxkfx k
i
kk
i
kk 2,,1,0,ˆ,1ˆ 11|11|  
     
 
Li
L
W
L
WxWx m
i
m
L
i
i
kk
m
ikk 2,,1
2
1
&ˆˆ 0
2
0
1|1| 



 



0
Calculate the Sigma Points  
 


 















L
LiPxx
LiPxx
xx
i
kkkk
Li
kk
i
kkkk
i
kk
kkkk
,,1ˆˆ
,,1ˆˆ
ˆˆ
1|11|11|1
1|11|11|1
1|1
0
1|1


1
State Prediction and its Covariance2
 
      
 
Li
L
W
L
WxxxxWP c
i
c
L
i
T
kk
i
kkkk
i
kk
c
ikk 2,,1
2
1
&1ˆˆˆˆ 2
0
2
0
1|1|1|1|1| 



 





178
Uscented Kalman FilterSOLO
UKF Summary (continue – 1)
  Lixkhz i
kk
i
kk 2,,1,0ˆ,ˆ 1|1|  
     
 
Li
L
W
L
WzWz m
i
m
L
i
i
kk
m
ikk 2,,1
2
1
&ˆˆ 0
2
0
1|1| 



 



Measure Prediction3
Innovation and its Covariance4
1|ˆ  kkkk zzi
 
      
 
Li
L
W
L
WzzzzWPS c
i
c
L
i
T
kk
i
kkkk
i
kk
c
i
zz
kkk 2,,1
2
1
&1ˆˆˆˆ 2
0
2
0
1|1|1|1|1| 



 





Kalman Gain Computations5
 
      
 
Li
L
W
L
WzzxxWP c
i
c
L
i
T
kk
i
kkkk
i
kk
c
i
xz
kk 2,,1
2
1
&1ˆˆˆˆ 2
0
2
0
1|1|1|1|1| 



 





1
1|1|

 zz
kk
xz
kkk PPK
Update State and its Covariance6
kkkkkk iKxx  1||
ˆˆ
T
kkkkkkk KSKPP  1||
k = k+1 & return to 1
179
Unscented Kalman Filter
State Estimation (one cycle)
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
SOLO
Evolution
of the system
(true state)
Estimation
of the state
State Covariance and
Kalman Filter ComputationsController
Innovation
1|ˆ  kkkk zz
1kt
kt
Time
State at tk-1
1kx
Control at tk-1
1ku
State
Estimation
at tk-1
1|1
ˆ  kkx
State Error Covariance
at tk-1  1|1  kkP
State Prediction Covariance
 
  
 
L
i
T
kk
i
kkkk
i
kk
c
ikk xxxxWP
2
0
1|1|1|1|1|
ˆˆˆˆ
 
Li
uxkfx k
i
kk
i
kk
2,,1,0
,ˆ,1ˆ 11|11|

 
 
Li
xkhz i
kk
i
kk
2,,1,0
ˆ,ˆ 1|1|

 
Transition to tk
  111,,1   kkkk wuxkfx
Measurement at tk
  kkk vxkhz  ,
Update State
Covariance at tkk
k
T
kkkkkkk KSKPP  1||
Update State
Estimation at t k
kkkkkk Kxx  1||
ˆˆ
State Prediction at tk
 

 
L
i
i
kk
m
ikk xWx
2
0
1|1|
ˆˆ
Sigma Points Computation
LiPxx
LiPxx
xx
i
kkkk
Li
kk
i
kkkk
i
kk
kkkk
,,1ˆˆ
,,1ˆˆ
ˆˆ
1|11|11|1
1|11|11|1
1|1
0
1|1












Measurement Prediction at tk
 

 
L
i
i
kk
m
ikk zWz
2
0
1|1| ˆˆ
Innovation Covariance
 
  




L
i
T
kk
i
kkkk
i
kk
c
i
zz
kkk
zzzzW
PS
2
0
1|1|1|1|
1|
ˆˆˆˆ
1
||

 zz
ykk
zx
ykkk PPK
 
  
 
L
i
T
kk
i
kkkk
i
kk
c
i
zz
kk zzxxWP
2
0
1|1|1|1|1| ˆˆˆˆ
Kalman Filter Gain
I.C.:  00|0
ˆ xEx     T
xxxxEP 0|000|000|0
ˆˆ I.C.:
1|1
ˆ  kkx
1kx
kkP |
2|1  kkP
kkx |
ˆ
kx
1|1  kkP
1| kkP
1|
ˆ kkx
1kt kt
Real Trajectory
Estimated
Trajectory
covariance
mean
sigma points
Actual (sampling) Unscented
Transformation
 xfy 
 XY f
transformed
sigma points
UT mean
UT covariance
true mean
true
covariance
APA xxT
weighted sample mean
and covariance
Simon J. Julier Jeffrey K. Uhlman
180
Numerical Integration Using a Monte Carlo Approximation
sN
1
SOLO
A Monte Carlo Approximation of the Expected Value Integrals uses Discrete
Approximation to the Gaussian PDF  xx
Pxx ,ˆ;N
 xx
Pxx ,ˆ;N can be approximated by:
        

ss N
i
i
s
N
i
iixx
xx
N
xxwPxxx
11
1
,ˆ; Np
We can see that for any x we have
   


 

x
xx
xx
i
i
x N
i
ii
dPxwdxw
i
s
 ,ˆ;
1
N
The weight wi is not the probability of the point xi. The probability density near xi is
given by the density of the points in the region around xi, which can be obtained by a
normalized histogram of all xi.
Draw Ns samples from , where {xi , i = 1,2,…,Ns} are a set of support
points (random samples of particles) with weights {wi = 1/Ns, i=1,2,…,Ns}
 xx
Pxx ,ˆ;N
Monte Carlo Kalman Filter (MCKF)
181
Numerical Integration Using a Monte Carlo ApproximationSOLO
The Expected Value for any function g (x) can be estimated from:
                  

sss N
i
i
s
N
i
ii
N
i
ii
xp xg
N
xgwxxwxgxdxpxgxgE
111
1

which is the sample mean.
     
     




 
lkk
T
lkkkkk
lkk
T
lkkkkkk
RvvEvEvxkhz
QwwEwEwuxkfx
,
,1111111
&0,
&0,,1

Given the
System
Assuming that we computed the Mean and Covariance at stage k-1
let use the Monte Carlo Approximation to compute the predicted Mean and Covariance
at stage k
1|11|1 ,ˆ  kkkk Px
1|1| ,ˆ  kkkk Px
     
  
s
kk
N
i
k
i
kk
s
Zxpkkk uxkf
N
xEx
1
11|1|1| ,,1
1
ˆ 1:1
    
  
T
kkkkZxp
T
kkZxp
T
kkkkkk
xx
kk xxxxExxxxEP kkkk
1|1|||1|1|1|
ˆˆˆˆ 1:11:1
  
Monte Carlo Kalman Filter (MCKF) (continue – 1)
Draw Ns samples
    skkkkkkk
i
kk NiPxxZxpx ,,1,ˆ;|~ 1|11|111:111|1   N
~means Generate
(Draw) samples
from a predefined
distribution
182
Numerical Integration Using a Monte Carlo ApproximationSOLO
    
  
       
     
   
T
N
i
k
i
kk
s
N
i
k
i
kk
s
Zxpk
i
kk
T
k
i
kk
T
kkkkZxp
T
kk
i
kkkk
i
kk
T
kkkkZxp
T
kkZxp
T
kkkkkk
xx
kk
ss
kk
kk
kkkk
uxkf
N
uxkf
N
QuxfuxfE
xxwuxkfwuxkfE
xxxxExxxxEP















 








1
11|1
1
11|1|11|111|1
1|1||111|1111|1
1|1|||1|1|1|
,,1
1
,,1
1
,,
ˆˆ,,1,,1
ˆˆˆˆ
1:1
1:1
1:11:1
       











  





sss N
i
k
i
kk
s
N
i
k
i
kk
s
N
i
k
i
kk
T
k
i
kk
s
xx
kk uxkf
N
uxkf
N
uxkfuxkf
N
QP
1
11|1
1
11|1
1
11|111|11| ,,1
1
,,1
1
,,1,,1
1
Using the Monte Carlo Approximation we obtain:
     
  
s
kk
N
i
i
kk
s
Zxpkkk xkh
N
zEz
1
1||1| ,
1
ˆ 1:1
       











  





sss N
i
i
kk
s
N
i
i
kk
s
N
i
i
kk
Ti
kk
s
zz
kk xkh
N
xkh
N
xkhxkh
N
RP
1
1|
1
1|
1
1|1|1| ,
1
,
1
,,
1
Monte Carlo Kalman Filter (MCKF) (continue – 2)
    skkkkkkk
i
kk NiPxxZxpx ,,1,ˆ;|~ 1|1|1:11|   N
Now we approximate the predictive PDF, , as
and we draw new Ns (not necessarily the same as before) samples.
 1:1| kk Zxp  1|1| ,ˆ;  kkkkk PxxN
183
Numerical Integration Using a Monte Carlo ApproximationSOLO
In the same way we obtain:
   











  





sss N
i
i
kk
s
N
i
i
kk
s
N
i
i
kk
Ti
kk
s
zx
kk xkh
N
x
N
xkhx
N
P
1
1|
1
1|
1
1|1|1| ,
11
,
1
Monte Carlo Kalman Filter (MCKF) (continue – 3)
The Kalman Filter Equations are:
 1
1|1|

 zz
kk
zx
kkk PPK
 1|1|| ˆˆˆ   kkkkkkkk zzKxx
T
k
zz
kkk
xx
kk
xx
kk KPKPP 1|1||  
184
Monte Carlo Kalman Filter (MCKF)SOLO
MCKF Summary
     T
xxxxEPxEx 00000|000
ˆˆˆ 
       











R
Q
P
xxxxEPxxEx
TaaaaaTTaa
00
00
00
ˆˆ00ˆˆ
0|0
00000|0000
For   ,,1 k
System Definition:
     
   



 
kkkkkk
kkkkkkk
Rvvvxkhz
QwwPxxxwuxkfx
,0;,
,0;&,ˆ;,,1 1110|0000111
N
NN
  sk
ai
kk
ai
kk Niuxkfx ,,1,,1 11|11|  

 
sN
i
ai
kk
s
a
kk x
N
x
1
1|1|
1
ˆ
Initialization of MCKF0
State Prediction and its Covariance2
Ta
kk
a
kk
N
i
Tai
kk
ai
kk
s
a
kk xxxx
N
P
s
1|1|
1
1|1|1|
ˆˆ
1


  
Assuming for k-1 Gaussian distribution with Mean and Covariance1 a
kk
a
kk Px 1|11|1 ,ˆ 
Assuming Gaussian distribution with Mean and Covariance3 1|1| ,ˆ  kkkk Px
  s
a
kk
a
kk
a
k
ai
kk NiPxxx ,,1,ˆ;~ 1|11|111|1  N
Generate (Draw) Ns samples
  s
a
kk
a
kk
a
kk
aj
kk NjPxxx ,,1,ˆ;~ 1|1|1|1|  N
Generate (Draw) new Ns samples
 TTTTa
vwxx :
Augment the state space to include processing and
measurement noises.
185
Monte Carlo Kalman Filter (MCKF)SOLO
MCKF Summary (continue – 1)
  s
aj
kk
j
kk Njxkhz ,,1, 1|1|   
 
sN
j
j
kk
s
kk z
N
z
1
1|1|
1
ˆ
Measure Prediction4
  
 
sN
j
T
kk
j
kkkk
j
kk
s
zz
kkk zzzz
N
PS
1
1|1|1|1|1| ˆˆ
1
Innovation and its Covariance 1|ˆ  kkkk zzi7
  
 
s
a
N
j
T
kk
j
kk
a
kk
aj
kk
s
zx
kk zzxx
N
P
1
1|1|1|1|1| ˆˆ
1
6 Kalman Gain Computations
1
1|1|

 zz
kk
zx
kk
a
k PPK
a
Kalman Filter8
k
a
k
a
kk
a
kk iKxx  1||
ˆˆ
Ta
kk
a
k
a
kk
a
kk KSKPP  1||
k := k+1 & return to 1
Predicted Covariances Computations5
186
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
SOLO
Evolution
of the system
(true state)
Estimation
of the state
State Covariance and
Kalman Filter ComputationsController
Innovation
1|ˆ  kkkk zz
1kt
kt
Time
State at tk-1
1kx
Control at tk-1
1ku
State
Estimation
at tk-1
a
kkx 1|1
ˆ 
State Error Covariance
at tk-1
a
kkP 1|1 
 
s
ai
kk
i
kk
Ni
xkhz
,,1
, 1|1|

 
Transition to tk
  111,,1   kkkk wuxkfx
Measurement at tk
  kkk vxkhz  ,
Update State
Covariance at tkk
k
Ta
kk
a
k
a
kk
a
kk KSKPP  1||
Update State
Estimation at t k
k
a
k
a
kk
a
kk Kxx  1||
ˆˆ
State Prediction at tk
 
 
sN
i
k
ai
kk
s
a
kk uxk
N
x
1
11|11| ,,1
1
ˆ
Measurement Prediction at tk

 
sN
i
i
kk
s
kk z
N
z
0
1|1|
1
ˆ
Innovation Covariance
  




sN
i
T
kk
i
kkkk
i
kk
s
zz
kkk
zzzz
N
PS
1
1|1|1|1|
1|
ˆˆ
1
1
1|1|

 zz
kk
zx
kk
a
k PPK
a
  
 
s
a
N
i
T
kk
i
kk
a
kk
ai
kk
s
zx
kk zzxx
N
P
1
1|1|1|1|1| ˆˆ
1
Kalman Filter Gain
I.C.:  Ta
xx 0,0,ˆˆ 0|00|0   kk
a
RQPdiagP ,, 10|00|0 I.C.:
1|1
ˆ  kkx
1kx
kkP |
2|1  kkP
kkx |
ˆ
kx
1|1  kkP
1| kkP
1|
ˆ kkx
1kt kt
Real Trajectory
Estimated
Trajectory
Generate Prior Samples
 
s
a
kk
a
kk
a
k
ai
kk
Ni
Pxxx
,,1
,ˆ;~ 1|11|111|1

 N
Generate Predictive Samples
 
s
a
kk
a
kk
a
k
ai
kk
Ni
Pxxx
,,1
,ˆ;~ 1|1|1|

 N
State Prediction Covariance
  
 
sN
i
T
a
kk
ai
kk
a
kk
ai
kk
s
a
kk xxxx
N
P
1
1|1|1|1|1|
ˆˆ
1
Monte Carlo Kalman Filter (MCKF)
187
Nonlinear Estimation Using Particle Filters
SOLO
We assumed that p (xk|Z1:k) is a Gaussian PDF. If the true PDF is not Gaussian
(multivariate, heavily skewed or non-standard – not represented by any standard PDF)
the Gaussian distribution can never described it well.
Non-Additive Non-Gaussian Nonlinear Filter
 
 kkk
kkk
vxhz
wxfx
,
, 11

 
kk vw &1 are system and measurement white-noise sequences
independent of past and current states and on each other and
having known P.D.F.s    kk vpwp &1
We want to compute p (xk|Z1:k) recursively, assuming knowledge of p(xk-1|Z1:k-1)
in two stages, prediction (before) and update (after measurement)
Prediction (before measurement)
Use Chapman – Kolmogorov Equation to obtain:
        11:1111:1 ||| kkkkkkk xdZxpxxpZxp
where:         111111 |,|| kkkkkkkk wdxwpwxxpxxp
By assumption    111 |   kkk wpxwp
Since by knowing , is deterministically given by system equation11 &  kk wx kx
    
 
 








11
11
1111
,0
,1
,,|
kkk
kkk
kkkkkk
wxfx
wxfx
wxfxwxxp 
Therefore:          11111 ,| kkkkkkk wdwpwxfxxxp 
188
Nonlinear Estimation Using Particle Filters
SOLO Non-Additive Non-Gaussian Nonlinear Filter
 
 kkk
kkk
vxhz
wxfx
,
, 11

 
kk vw &1 are system and measurement white-noise sequences
independent of past and current states and on each other and
having known P.D.F.s    kk vpwp &1
We want to compute p (xk|Z1:k) recursively, assuming knowledge of p(xk-1|Z1:k-1)
in two stages, prediction (before) and update (after measurement)
Prediction (before measurement)
        11:1111:1 ||| kkkkkkk xdZxpxxpZxp
where:
Update (after measurement)
   
     
 
   
 
   
    




 
kkkkk
kkkk
kk
kkkk
Bayes
bp
apabp
bap
kkkkk
xdZxpxzp
Zxpxzp
Zzp
Zxpxzp
ZzxpZxp
1:1
1:1
1:1
1:1
|
|
1:1:1
||
||
|
||
,||
      kkkkkkkk vdxvpvxzpxzp |,||
By assumption    kkk vpxvp |
Since by knowing , is deterministically given by system equationkk vx & kz
    
 
 





kkk
kkk
kkkkkk
vxhz
vxhz
vxhzvxzp
,0
,1
,,| 
Therefore:         kkkkkkk vdvpvxhzxzp ,| 
         11111 ,| kkkkkkk wdwpwxfxxxp 
189
Nonlinear Estimation Using Particle Filters
SOLO Non-Additive Non-Gaussian Nonlinear Filter
 
 kkk
kkk
vxhz
wxfx
,
, 11

 
kk vw &1 are system and measurement white-noise sequences
independent of past and current states and on each other and
having known P.D.F.s    kk vpwp &1
We want to compute p (xk|Z1:k) recursively, assuming knowledge of p(xk-1|Z1:k-1)
in two stages, prediction (before) and update (after measurement)
Prediction (before measurement)         11:1111:1 ||| kkkkkkk xdZxpxxpZxp
         11111 ,| kkkkkkk wdwpwxfxxxp 
Update (after measurement)
   
     
 
   
 
   
    




 
kkkkk
kkkk
kk
kkkk
Bayes
bp
apabp
bap
kkkkk
xdZxpxzp
Zxpxzp
Zzp
Zxpxzp
ZzxpZxp
1:1
1:1
1:1
1:1
|
|
1:1:1
||
||
|
||
,||
We need to evaluate the following integrals:
        kkkkkkk vdvpvxhzxzp ,| 
We use the numeric Monte Carlo Method to evaluate the integrals:
Generate (Draw):     Sk
i
kk
i
k Nivpvwpw ,,1~&~ 11 
     S
N
i
i
k
i
k
i
kkk Nwxfxxxp
S

 
1
111 /,| 
     S
N
i
i
k
i
k
i
kkk Nvxhzxzp
S


1
/,| 
or
      S
N
i
i
kkkk
i
k
i
k
i
k Nxxxxpwxfx
S

 
1
111 /|, 
      S
N
i
i
kkkk
i
k
i
k
i
k Nzzxzpvxhz
S


1
/|, 
Analytic solutions for those integral
equations do not exist in the general
case.
190
SOLO
     
   kvkkk
xkkwkkkk
vpgivenvxhz
xpuwpgivenwuxfx
:,
,,:,, 011111 0

 
Monte Carlo Computations of and . kk xzp | 1| kk xxp
Generate (Draw)   Sx
i
Nixpx ,,1~ 00 0

For   ,,1 k
Initialization0
1 At stage k-1
Generate (Draw) NS samples   Skw
i
k Niwpw ,,1~ 11 
2 State Update   S
i
kk
i
k
i
k Niwuxfx ,,1,, 111  
3 Generate (Draw) Measurement Noise   Skv
i
k Nivpv ,,1~ 
k:=k+1 & return to 1
   
 
SN
i
S
i
kkkk Nxxxxp
1
1 /| 
   

SN
i
S
i
kkkk Nzzxzp
1
/| 
4 Measurement , Update   S
i
k
i
k
i
k Nivxhz ,,1, kz
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter
191
Nonlinear Estimation Using Particle Filters
SOLO Non-Additive Non-Gaussian Nonlinear Filter
 
 kkk
kkk
vxhz
wxfx
,
, 11

 
kk vw &1 are system and measurement white-noise sequences
independent of past and current states and on each other and
having known P.D.F.s    kk vpwp &1
We want to compute p (xk|Z1:k) recursively, assuming knowledge of p(xk-1|Z1:k-1)
in two stages, prediction (before) and update (after measurement)
Prediction (before measurement)         11:1111:1 ||| kkkkkkk xdZxpxxpZxp
Update (after measurement)
   
     
 
   
 
   
    




 
kkkkk
kkkk
kk
kkkk
Bayes
bp
apabp
bap
kkkkk
xdZxpxzp
Zxpxzp
Zzp
Zxpxzp
ZzxpZxp
1:1
1:1
1:1
1:1
|
|
1:1:1
||
||
|
||
,||
We use the numeric Monte Carlo Method to evaluate the integrals:
Generate (Draw):     Sk
i
kk
i
k Nivpvwpw ,,1~&~ 11 
      S
N
i
i
kkkk
i
k
i
k
i
k Nxxxxpwxfx
S

 
1
111 /|, 
      S
N
i
i
kkkk
i
k
i
k
i
k Nzzxzpvxhz
S


1
/|, 
             


 
SSS N
i
i
kk
S
N
i
kkk
i
kk
S
k
N
i
kk
i
kk
S
kk xx
N
xdZxpxx
N
xdZxpxx
N
Zxp
11
1
11:111
1
1:111:1
1
|
1
|
1
| 
  
192
Nonlinear Estimation Using Particle Filters
SOLO
We assumed that p (xk|Z1:k) is a Gaussian PDF. If the true PDF is not Gaussian
(multivariate, heavily skewed or non-standard – not represented by any standard PDF)
the Gaussian distribution can never described it well. In such cases approximate
Grid-Based Filters and Particle Filters will yield an improvement at the cost of
heavy computation demand.
   
 
0
|
|
:
:1
:1

kk
kk
k
Zxq
Zxp
xw
To overcome this difficulty we use The Principle of Importance Sampling.
Suppose that p (xk|Z1:k) is a PDF from which is difficult to draw samples.
Also suppose that q (xk|Z1:k) is another PDF from which samples can be easily drawn
(referred to Importance Density), for example a Gaussian PDF.
Now assume that we can find at each sample the scale factor w (xk) between the
two densities:
Using this we can write:
        
   
 
 
 
 
 
     
   






kkkk
kkkkk
kkk
kk
kk
kkk
kk
kk
k
kkkkZxpk
xdZxqxw
xdZxqxwxg
xdZxq
Zxq
Zxp
xdZxq
Zxq
Zxp
xg
xdZxpxgxgE kk
:1
:1
1
:1
:1
:1
:1
:1
:1
:1|
|
|
|
|
|
|
|
|
|:1
  
Non-Additive Non-Gaussian Nonlinear Filter  
 kkk
kkk
vxhz
wxfx
,
, 11

 
193
SOLO
    
     
   

kkkk
kkkkk
Zxpk
xdZxqxw
xdZxqxwxg
xgE kk
:1
:1
|
|
|
:1
   
 
 sN
i
i
k
s
i
ki
k
xw
N
xw
xw
1
1
:~
where
Generate (draw) Ns particle samples { xk
i, i=1,…,Ns } from q(xk|Z1:k)
  skk
i
k NiZxqx ,,1|~ :1 
    
   
 
   






s
s
s
kk
N
i
i
kkN
i
i
k
s
N
i
i
kk
s
Zxpk xwxg
xw
N
xwxg
N
xgE
1
1
1
|
~
1
1
:1
and estimate g(xk) using a Monte Carlo approximation:
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter  
 kkk
kkk
vxhz
wxfx
,
, 11

 
Importance Sampling (IS)
194
Nonlinear Estimation Using Particle Filters
SOLO
It would be useful if the importance density could be generated recursively (sequentially).
   
       
 
     
 
     
 kk
kkkk
Zzpc
kk
kkkkkk
bP
aPabP
baP
Bayes
kk
kkk
k
Zxq
Zxpxzpc
Zxq
ZzpZxpxzp
Zxq
Zzxp
xw
kk
:1
1:1
|/1:
:1
1:11:1
|
|:1
1:1
|
||
|
|/||
|
,| 1:1







       
   1:111:11
|,
1:11 |,||, 

  kkkkk
bPbaPbaP
Bayes
kkk ZxpZxxpZxxpUsing:
we obtain:
          11:111:1111:111:1 |,||,| kkkkkkkkkkkk xdZxpZxxpxdZxxpZxp
          11:111:1111:111:1 |,||,| kkkkkkkkkkkk xdZxqZxxqxdZxxqZxq
In the same way:
     
 
     
   




11:111:11
11:111:11
:1
1:1
|,|
|,||
|
||
kkkkkk
kkkkkkkk
kk
kkkk
k
xdZxqZxxq
xdZxpZxxpxzpc
Zxq
Zxpxzpc
xw
Sequance Importance Sampling (SIS)
Non-Additive Non-Gaussian Nonlinear Filter
195
Nonlinear Estimation Using Particle Filters
SOLO
It would be useful if the importance density could be generated recursively.
     
 
     
   




11:111:11
11:111:11
:1
1:1
|,|
|,||
|
||
kkkkkk
kkkkkkkk
kk
kkkk
k
xdZxqZxxq
xdZxpZxxpxzpc
Zxq
Zxpxzpc
xw
Suppose that at k-1 we have Ns particle samples and their probabilities
{ xk-1|k-1
i,wk-1
i ,i=1,…,Ns }, that constitute a random measure which characterizes the
posterior PDF for time up to tk-1. Then
     
 
sN
i
i
kkkk
i
kkkk xxZxpZxp
1
1|111:11|11:11 || 
 
       
      
 







 s
s
N
i
i
kkkk
i
kkkkk
k
N
i
i
kkkk
i
kkkkkkk
k
xxZxqZxxq
xdxxZxpZxxpxzpc
xw
1
1|111:11|11:11
1
1
1|111:11|11:11
|,|
|,||


     
 
sN
i
i
kkkk
i
kkkk xxZxqZxq
1
1|111:11|11:11 || 
Sequential Importance Sampling (SIS) (continue – 1)
We obtained:
Non-Additive Non-Gaussian Nonlinear Filter
196
Nonlinear Estimation Using Particle Filters
SOLO
   
 
   
 kk
kkkk
Bayes
kk
kk
k
Zxq
Zxpxzpc
Zxq
Zxp
xw
:1
1:1
:1
:1
|
||
|
| 

 
       
     
     
   
   
         
   1:11|11|1
1:11|11|1
|,|
|,|
1:11|11:11|1
1:11|11:11|1
1
1|111:11|11:11
1
1
1|111:11|11:11
||
|||
|,|
|,||
|,|
|,||
1|11:11|1
1|11:11|1


















 
 
k
i
kk
i
kkk
k
i
kk
i
kkkkk
xxpZxxp
xxqZxxq
k
i
kkk
i
kkk
k
i
kkk
i
kkkkk
N
i
i
kkkk
i
kkkkk
k
N
i
i
kkkk
i
kkkkkkk
k
Zxqxxq
Zxpxxpxzpc
ZxqZxxq
ZxpZxxpxzpc
xxZxqZxxq
xdxxZxpZxxpxzpc
xw
i
kkkk
i
kkk
i
kkkk
i
kkk
s
s


   
 1:11
1:11
1
|
|


 
kk
kk
k
Zxq
Zxp
xwSince
   
 i
kk
i
kk
i
kk
i
kk
i
kkki
k
i
k
xxq
xxpxzpc
ww
1|1|
1|1||
1
|
||



Define    
 k
i
kk
k
i
kki
kk
i
k
Zxq
Zxp
xww
:1|
:1|
|
|
|
: 
   
 1:11|1
1:11|1
1|11
|
|
:


 
k
i
kk
k
i
kki
kk
i
k
Zxq
Zxp
xww
Sequential Importance Sampling (SIS) (continue – 2)
Non-Additive Non-Gaussian Nonlinear Filter
197
 
 
 1
1:1
,~
|


 Nx
Zxp
i
k
kk
i=1,…,N=10 particles
 kk xzp |
SOLO
Sequential Importance Sampling (SIS) (continue – 3)
   

 
     
 
   
  twwwt
Zxxq
xxpxzp
ww i
kk
N
i
i
k
k
i
k
i
k
i
k
i
k
i
kk
N
i
k
i
k /~~
,|
||~~
1:11
1
/1
1  


     

 
N
i
i
kk
i
kkk NxxNxZxp
1
1
1:1 /:,| 
k:=k+1
 
 
 1
1:1
,~
|


 Nx
Zxp
i
k
kk
   
 i
k
i
k wx ,~
i=1,…,N=10 particles
 kk xzp |
Run This
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter  
 kkk
kkk
vxhz
wxfx
,
, 11

 
   

N
i
i
kk
i
kkk xxwZxp
1
:1| 
Generate (Draw)   Sx
i
Nixpx ,,1~ 00 0

For   ,,1 k
Initialization0
1 At stage k-1
Generate (Draw) NS samples   Skw
i
k Niwpw ,,1~ 11 
2 State Update   S
i
kk
i
k
i
k Niwuxfx ,,1,, 111  
Start with the approximation    
 
SN
i
S
i
kkkk Nxxxxp
1
1 /| 3
After measurement zk we compute      
 i
k
i
kkk wxZxp ~,| :1 4
Generate (Draw) NS samples   Skw
i
k Nivpv ,,1~ 
Compute  i
k
i
k
i
k vxhz ,
Approximate    

SN
i
S
i
kk
i
kk Nzzxzp
1
/| 
198
Nonlinear Estimation Using Particle Filters
SOLO
The resulting sequential importance sampling (SIS) algorithm is a Monte Carlo method
that forms the basis for most sequential MC Filters.
Sequential Importance Sampling (SIS) (continue – 4)
This sequential Monte Carlo method is known variously as:
• Bootstrap Filtering
• Condensation Algorithm
• Particle Filtering
• Interacting Particle Approximation
• Survival of the Fittest
Non-Additive Non-Gaussian Nonlinear Filter
199
Nonlinear Estimation Using Particle Filters
SOLO
Degeneracy Problem
Sequential Importance Sampling (SIS) (continue – 5)
A common problem with SIS particle filter is the degeneracy phenomenon, where after
a few iterations, all but one particle will have negligible weights.
It can be shown that the variance of the importance weights, wk
i, of the SIS algorithm,
can only increase over time, and that leads to the degeneracy problem. A suitable measure
of degeneracy is given by:
 
1
1ˆ
1
1
2
 
 

N
i
i
kN
i
i
k
eff wwhere
w
N
To see this let look at the following two cases:
1
 
N
N
NNi
N
w N
i
eff
i
k 
1
2
/1
1ˆ,,1,
1

2
 
1
1ˆ
0
1
1
2








N
i
i
k
eff
i
k
w
N
ji
ji
w
Hence, small Neff indicates a severe degeneracy and vice versa.
Non-Additive Non-Gaussian Nonlinear Filter
200
SOLO
The Bootstrap (Resampling)
• Popularized by Brad Efron (1979)
• The Bootstrap is a name generically applied to statistical resampling schemes
that allow uncertainty in the data to be assesed from the data themselves, in
other words
“pulling yourself up by your bootstraps”
The disadvantage of bootstrapping is that while (under some conditions) it is
asymptotically consistent, it does not provide general finite-sample
guarantees, and has a tendency to be overly optimistic.The apparent
simplicity may conceal the fact that important assumptions are being made
when undertaking the bootstrap analysis (e.g. independence of samples)
where these would be more formally stated in other approaches.
The advantage of bootstrapping over analytical methods is its great simplicity - it is
straightforward to apply the bootstrap to derive estimates of standard errors and
confidence intervals for complex estimators of complex parameters of the
distribution, such as percentile points, proportions, odds ratio, and correlation
coefficients.
Neil Gordon
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS) (continue – 6)
Non-Additive Non-Gaussian Nonlinear Filter
201
Nonlinear Estimation Using Particle Filters
j
C.D.F.
1
 j
kw~
0
SOLO
Resampling
Sequential Importance Sampling (SIS) (continue – 5)
Whenever a significant degeneracy is observed (i.e., when Neff falls bellow some
Threshold Nthr) during the sampling, where we obtained
   

N
i
i
kk
i
kkk xxwZxp
1
:1| 
we need to resample and replace the mapping representation
with a random measure
  Niwx i
k
i
k ,,1, 
  NiNxi
k ,,1/1,*

This is done by first computing the Cumulative Density Function (C.D.F.) of the
sampled distribution wk
i.
Initialize the C.D.F.: c1 = wk
1
Compute the C.D.F.: ci = ci-1 + wk
i
For i = 2:N
i := i + 1
Non-Additive Non-Gaussian Nonlinear Filter
202
ui
j
resampled index
C.D.F.
1 1
N
 j
kw~
0 0
SOLO
Resampling (continue – 1)
Sequential Importance Resampling (SIR) (continue – 2)
Using the method of Inverse Transform Algorithm we generate N independent and
identical distributed (i.i.d.) variables from the uniform distribution u, we sort them in
ascending order and we compare them with the Cumulative Distribution Function (C.D.F.)
of the normalized weights.
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter  
 kkk
kkk
vxhz
wxfx
,
, 11

 
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter  
 kkk
kkk
vxhz
wxfx
,
, 11

 
203
Nonlinear Estimation Using Particle Filters
ui
j
resampled index
C.D.F.
1 1
N
 j
kw~
0 0
SOLO
Resampling Algorithm (continue – 2)
Sequential Importance Sampling (SIS) (continue – 7)
Initialize the C.D.F.: c1 = wk
1
Compute the C.D.F.: ci = ci-1 + wk
i
For i = 2:N
i := i + 1
0
Start at the bottom of the C.D.F.: i = 1
Draw for the uniform distribution  1
,0~ 
NUui
1 For i=1:N
Move along the C.D.F. uj = ui +(j – 1) N-1.
For j=1:N2
WHILE uj > ci
j* = i + 1
END WHILE
3
END For
5 i := i + 1 If i < N Return to 1
4 Assign sample: i
k
j
k xx *
Assign weight:
1
 Nwj
k Assign parent: ii j

Non-Additive Non-Gaussian Nonlinear Filter
204
 
 
 1
1:1
,
|


 Nx
Zxp
i
k
kk
i=1,…,N=10 particles
 kk xzp |
SOLO
Resampling
Sequential Importance Resampling (SIR) (continue – 4)
   

 
     
 
   
 
twwwt
Zxxq
xxpxzp
ww
i
kk
N
i
i
k
k
i
k
i
k
i
k
i
k
i
kk
N
i
k
i
k
/~~
,|
||~~
1
:11
1
/1
1






After measurement zk-1 we
compute      
 i
k
i
kkk wxZxp ~,| :1 
1
Start with the approximation
   
 




N
i
i
kk
i
kkk
Nxx
NxZxp
1
1
1:1
/:
,|

0
Prediction
     
 i
kk
i
k
i
k nuxfx ,,*1 
to obtain    
 1
1:11 ,| 
  NxZxp i
kkk
3
k:=k+1
 
 
 1
1:1
,
|


 Nx
Zxp
i
k
kk
   
 i
k
i
k wx ,
i=1,…,N=10 particles
 kk xzp |
 
 
 1
1:1
,
|


 Nx
Zxp
i
k
kk
   
 i
k
i
k wx ,
 
 1
,* 
Nx i
k
i=1,…,N=10 particles
 kk xzp |
Resample
 
 
 1
1:1
,
|


 Nx
Zxp
i
k
kk
   
 i
k
i
k wx ,
 
 1
,* 
Nx i
k
 
 1
1, 
 Nx i
k
i=1,…,N=10 particles
 kk xzp |
 11 |  kk xzp
Resample
 
 
 1
1:1
,
|


 Nx
Zxp
i
k
kk
   
 i
k
i
k wx ,
 
 1
,* 
Nx i
k
 
 1
1, 
 Nx i
k
   
 i
k
i
k wx 11, 
i=1,…,N=10 particles
 kk xzp |
 11 |  kk xzp
Resample
Run This
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter  
 kkk
kkk
vxhz
wxfx
,
, 11

 
   

N
i
i
kk
i
kkk xxwZxp
1
:1| 
If Resample
to obtain    
 1
:1 ,*| 
 NxZxp i
kkk
2   tht
N
i
i
keff NwN 






 1
2
/1
205
Estimators
SOLO
The Cramér-Rao Lower Bound (CRLB) on the Variance of the Estimator

[Figure: block diagram - the state x and the noise v enter the measurement z = h (x,v); the Estimator produces $\hat{x}$]

The estimate $\hat{x}$ of $x$, using the measurements $z$ of a system corrupted by the
noise $v$, is a random variable, with
$z = \left(z_1, z_2, \dots, z_p\right)^T,\qquad x = \left(x_1, x_2, \dots, x_n\right)^T,\qquad v = \left(v_1, v_2, \dots, v_p\right)^T$

$E\left(\hat{x}\right)$ - estimated mean vector
$\sigma_{\hat{x}}^2 = E\left\{\left[\hat{x} - E\left(\hat{x}\right)\right]\left[\hat{x} - E\left(\hat{x}\right)\right]^T\right\}$ - estimated variance matrix

For a good estimator we want:
$E\left(\hat{x}\right) = x$ - unbiased estimator vector
$\sigma_{\hat{x}}^2$ - minimum estimation variance

$Z^k := \left(z\left(1\right), \dots, z\left(k\right)\right)^T$ - the observation matrix after k observations
$L\left[Z^k, x\right] = L\left[z\left(1\right), \dots, z\left(k\right), x\right]$ - the Likelihood, or the joint density function, of $Z^k$

We have:
$L\left[Z^k, x\right] = \int p_{z/x,v}\left(Z^k / x, v\right)\, p_v\left(v\right)\, dv$

therefore:
$E\left[\hat{x}\left(z\left(1\right),\dots,z\left(k\right)\right)\right] = \int \hat{x}\left(z\left(1\right),\dots,z\left(k\right)\right) L\left[z\left(1\right),\dots,z\left(k\right), x\right] dz\left(1\right)\cdots dz\left(k\right) = \int \hat{x}\left(Z^k\right) L\left[Z^k, x\right] dZ^k = x + b\left(x\right)$
$b\left(x\right)$ - estimator bias
206
Estimators
SOLO
The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 1)

[Figure: block diagram - x and v enter z = h (x,v); the Estimator produces $\hat{x}$]

We have:
$E\left[\hat{x}\left(Z^k\right)\right] = \int \hat{x}\left(Z^k\right) L\left[Z^k, x\right] dZ^k = x + b\left(x\right)$

Differentiating with respect to x:
$\frac{\partial}{\partial x} E\left[\hat{x}\left(Z^k\right)\right] = \int \hat{x}\left(Z^k\right) \frac{\partial L\left[Z^k, x\right]}{\partial x}\, dZ^k = 1 + \frac{\partial b\left(x\right)}{\partial x}$

Since $L\left[Z^k, x\right]$ is a joint density function, we have:
$\int L\left[Z^k, x\right] dZ^k = 1 \quad \Rightarrow \quad \int x \frac{\partial L\left[Z^k, x\right]}{\partial x}\, dZ^k = x \int \frac{\partial L\left[Z^k, x\right]}{\partial x}\, dZ^k = x \frac{\partial}{\partial x}\int L\left[Z^k, x\right] dZ^k = 0$

Subtracting the two results:
$\int \left[\hat{x}\left(Z^k\right) - x\right] \frac{\partial L\left[Z^k, x\right]}{\partial x}\, dZ^k = 1 + \frac{\partial b\left(x\right)}{\partial x}$

Using the fact that: $\frac{\partial L\left[Z^k, x\right]}{\partial x} = L\left[Z^k, x\right] \frac{\partial \ln L\left[Z^k, x\right]}{\partial x}$

$\int \left[\hat{x}\left(Z^k\right) - x\right] \frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\, L\left[Z^k, x\right] dZ^k = 1 + \frac{\partial b\left(x\right)}{\partial x}$
207
EstimatorsSOLO
The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 2)

$\int \left[\hat{x}\left(Z^k\right) - x\right] \frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\, L\left[Z^k, x\right] dZ^k = 1 + \frac{\partial b\left(x\right)}{\partial x}$

Let us use the Schwarz Inequality:
$\left[\int f\left(t\right) g\left(t\right) dt\right]^2 \le \int f^2\left(t\right) dt \int g^2\left(t\right) dt$
The equality occurs if and only if f (t) = k g (t).

Hermann Amandus Schwarz, 1843 - 1921

Choose:
$f := \left[\hat{x}\left(Z^k\right) - x\right] \sqrt{L\left[Z^k, x\right]} \quad \& \quad g := \frac{\partial \ln L\left[Z^k, x\right]}{\partial x} \sqrt{L\left[Z^k, x\right]}$

then:
$\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2 = \left[\int \left(\hat{x}\left(Z^k\right) - x\right) \frac{\partial \ln L\left[Z^k, x\right]}{\partial x} L\left[Z^k, x\right] dZ^k\right]^2 \le \int \left(\hat{x}\left(Z^k\right) - x\right)^2 L\left[Z^k, x\right] dZ^k \cdot \int \left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2 L\left[Z^k, x\right] dZ^k$

therefore:
$\int \left(\hat{x}\left(Z^k\right) - x\right)^2 L\left[Z^k, x\right] dZ^k \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{\int \left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2 L\left[Z^k, x\right] dZ^k}$
208
EstimatorsSOLO
The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 3)

$E\left[\left(\hat{x}\left(Z^k\right) - x\right)^2\right] = \int \left(\hat{x}\left(Z^k\right) - x\right)^2 L\left[Z^k, x\right] dZ^k \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]}$

This is the Cramér-Rao bound for a biased estimator.

Harald Cramér, 1893 - 1985        Calyampudi Radhakrishna Rao, 1920 -

Using $E\left[\hat{x}\left(Z^k\right) - x\right] = b\left(x\right)$ and $\int L\left[Z^k, x\right] dZ^k = 1$:

$\int \left(\hat{x}\left(Z^k\right) - x\right)^2 L\left[Z^k, x\right] dZ^k = \int \left(\hat{x} - E\left(\hat{x}\right)\right)^2 L\, dZ^k + 2\, b\left(x\right) \underbrace{\int \left(\hat{x} - E\left(\hat{x}\right)\right) L\, dZ^k}_{0} + b^2\left(x\right) \underbrace{\int L\, dZ^k}_{1} = \sigma_{\hat{x}}^2 + b^2\left(x\right)$

therefore:
$\sigma_{\hat{x}}^2 = E\left[\left(\hat{x} - E\left(\hat{x}\right)\right)^2\right] \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]} - b^2\left(x\right)$
209
EstimatorsSOLO
The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 4)

$\sigma_{\hat{x}}^2 \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]} - b^2\left(x\right)$

Start from $\int L\left[Z^k, x\right] dZ^k = 1$ and differentiate with respect to x:
$\frac{\partial}{\partial x}\int L\left[Z^k, x\right] dZ^k = \int \frac{\partial L\left[Z^k, x\right]}{\partial x}\, dZ^k = \int \frac{\partial \ln L\left[Z^k, x\right]}{\partial x} L\left[Z^k, x\right] dZ^k = 0$

Differentiating a second time:
$\int \frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x^2} L\left[Z^k, x\right] dZ^k + \int \left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2 L\left[Z^k, x\right] dZ^k = 0$

therefore:
$E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right] = - E\left[\frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x^2}\right]$

and the bound can also be written as:
$\sigma_{\hat{x}}^2 \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]} - b^2\left(x\right) = - \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x^2}\right]} - b^2\left(x\right)$
210
Estimators
SOLO
The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 5)

Summary, for a biased estimator:
$E\left[\left(\hat{x}\left(Z^k\right) - x\right)^2\right] = \int \left(\hat{x}\left(Z^k\right) - x\right)^2 L\left[Z^k, x\right] dZ^k \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]} = - \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x^2}\right]}$

$\sigma_{\hat{x}}^2 \ge \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]} - b^2\left(x\right) = - \frac{\left[1 + \frac{\partial b\left(x\right)}{\partial x}\right]^2}{E\left[\frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x^2}\right]} - b^2\left(x\right)$

For an unbiased estimator (b (x) = 0), we have:
$\sigma_{\hat{x}}^2 \ge \frac{1}{E\left[\left(\frac{\partial \ln L\left[Z^k, x\right]}{\partial x}\right)^2\right]} = - \frac{1}{E\left[\frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x^2}\right]}$

[Photo: Harald Cramér - http://www.york.ac.uk/depts/maths/histstat/people/cramer.gif]
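A quick numerical sanity check of the unbiased scalar bound, under the assumption
(ours, for illustration only) of Gaussian samples z_i ~ N(x, σ²), for which
E[(∂ ln L/∂x)²] = k/σ² and the sample mean attains the bound σ²/k:

    import numpy as np

    # CRLB sanity check for estimating the mean x of N(x, sigma^2) from k samples.
    # ln L = -sum((z_i - x)^2) / (2 sigma^2) + const, so
    # E[(d lnL/dx)^2] = k / sigma^2 and the unbiased bound is sigma^2 / k.
    rng = np.random.default_rng(0)
    x_true, sigma, k, trials = 2.0, 1.5, 25, 20000

    estimates = rng.normal(x_true, sigma, size=(trials, k)).mean(axis=1)
    empirical_var = estimates.var()
    crlb = sigma**2 / k                     # 1 / E[(d lnL/dx)^2]

    print(f"empirical variance of the sample mean: {empirical_var:.4f}")
    print(f"CRLB sigma^2/k:                        {crlb:.4f}")
    # the sample mean is efficient: the two numbers agree (up to Monte-Carlo noise)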
211
Cramér-Rao Lower Bound (CRLB)SOLO
Helpful Relations

Lemma 1: Given a function $f\left(x,z\right): R^n \times R^p \to R$, the following relation holds:
$\nabla_x \nabla_x^T \ln f\left(x,z\right) = \frac{1}{f\left(x,z\right)} \nabla_x \nabla_x^T f\left(x,z\right) - \nabla_x \ln f\left(x,z\right)\, \nabla_x^T \ln f\left(x,z\right)$

Lemma 2: Let $z \in R^p$ be a random vector with density p (z|x) parameterized by the
nonrandom vector $x \in R^n$; then:
$E_z\left[\nabla_x \nabla_x^T \ln p\left(z|x\right)\right] = - E_z\left[\nabla_x \ln p\left(z|x\right)\, \nabla_x^T \ln p\left(z|x\right)\right]$

Proof: Using Lemma 1,
$E_z\left[\nabla_x \nabla_x^T \ln p\left(z|x\right)\right] = E_z\left[\frac{1}{p\left(z|x\right)} \nabla_x \nabla_x^T p\left(z|x\right)\right] - E_z\left[\nabla_x \ln p\left(z|x\right)\, \nabla_x^T \ln p\left(z|x\right)\right]$
and the first term vanishes:
$E_z\left[\frac{1}{p\left(z|x\right)} \nabla_x \nabla_x^T p\left(z|x\right)\right] = \int_{R^p} \nabla_x \nabla_x^T p\left(z|x\right) dz = \nabla_x \nabla_x^T \underbrace{\int_{R^p} p\left(z|x\right) dz}_{1} = 0$

Lemma 3: Let $x \in R^n,\ z \in R^p$ be random vectors with joint density p (x,z); then:
$E_{x,z}\left[\nabla_x \nabla_x^T \ln p\left(x,z\right)\right] = - E_{x,z}\left[\nabla_x \ln p\left(x,z\right)\, \nabla_x^T \ln p\left(x,z\right)\right]$

Proof: As above,
$E_{x,z}\left[\frac{1}{p\left(x,z\right)} \nabla_x \nabla_x^T p\left(x,z\right)\right] = \int_{R^n}\int_{R^p} \nabla_x \nabla_x^T p\left(x,z\right)\, dz\, dx = 0$
(under the usual regularity conditions that p and its derivatives vanish at the boundary).

(A numerical check of Lemma 2 follows below.)
Return to Table of Content
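Lemma 2 can be checked numerically for the simplest case p (z|x) = N(x, σ²), where the
score is (z − x)/σ² and the second derivative of ln p is the constant −1/σ² (the scalar
example and numbers are our assumptions for illustration):

    import numpy as np

    # Monte-Carlo check of Lemma 2 for p(z|x) = N(x, sigma^2):
    # score = (z - x)/sigma^2, second derivative of ln p = -1/sigma^2.
    rng = np.random.default_rng(2)
    x, sigma, n = 1.0, 2.0, 200000
    z = rng.normal(x, sigma, n)

    score_sq = ((z - x) / sigma**2) ** 2
    print(np.mean(score_sq))          # E[score^2] ~ 1/sigma^2 = 0.25
    print(-(-1.0 / sigma**2))         # -E[d^2 ln p / dx^2] = 1/sigma^2 = 0.25 exactly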
212
Cramér-Rao Lower Bound (CRLB)SOLO
Nonrandom Parameters

The parameters $x \in R^n$ are regarded as unknown but fixed. The measurements are $z \in R^p$.

[Figure: block diagram - x and v enter z = h (x,v); the Estimator produces $\hat{x}$]

The Score of the estimation is defined as the gradient of the logarithm of the likelihood,
$\nabla_x \ln p\left(z|x\right)$.
In Maximum Likelihood Estimation (MLE) this function returns a vector-valued
Score given the observations $z \in R^p$ and a candidate parameter vector $x \in R^n$.
Scores close to zero are good scores, since they indicate that $x$ is close to a local
optimum of $p\left(z|x\right)$, because
$\nabla_x \ln p\left(z|x\right) = \frac{1}{p\left(z|x\right)} \nabla_x p\left(z|x\right)$

Since the measurement vector $z \in R^p$ is stochastic, the Expected Value of the Score
is given by:
$E_z\left[\nabla_x \ln p\left(z|x\right)\right] = \int_{R^p} \nabla_x \ln p\left(z|x\right)\, p\left(z|x\right) dz = \int_{R^p} \nabla_x p\left(z|x\right) dz = \nabla_x \underbrace{\int_{R^p} p\left(z|x\right) dz}_{1} = 0$
213
Cramér-Rao Lower Bound (CRLB)
SOLO
Nonrandom Parameters
The Fisher Information Matrix (FIM)

Fisher, Sir Ronald Aylmer, 1890 - 1962

The Fisher Information Matrix (FIM) was defined by Ronald Aylmer Fisher as the
Covariance Matrix of the Score.

The Expected Value of the Score is given by:
$E_z\left[\nabla_x \ln p\left(z|x\right)\right] = \int_{R^p} \nabla_x \ln p\left(z|x\right)\, p\left(z|x\right) dz = 0$

The Covariance of the Score is therefore given by:
$J\left(x\right) := E_z\left[\nabla_x \ln p\left(z|x\right)\, \nabla_x^T \ln p\left(z|x\right)\right] = \int_{R^p} \nabla_x \ln p\left(z|x\right)\, \nabla_x^T \ln p\left(z|x\right)\, p\left(z|x\right) dz = - E_z\left[\nabla_x \nabla_x^T \ln p\left(z|x\right)\right]$

The Cramér-Rao Lower Bound on the Variance of the Estimator – Multivariable Case
214
Fisher, Sir Ronald Aylmer (1890-1962)
The Fisher information is the amount of information that an observable random
variable z carries about an unknown parameter x upon which the likelihood of z,
L(x) = f (Z; x), depends. The likelihood function is the joint probability of the data,
the Zs, conditional on the value of x, as a function of x. Since the expectation of the
score is zero, the variance is simply the second moment of the score, the derivative of
the log of the likelihood function with respect to x. Hence the Fisher information can
be written

$J\left(x\right) := E\left[\nabla_x \ln L\left(Z^k, x\right)\, \nabla_x^T \ln L\left(Z^k, x\right)\right] = - E\left[\nabla_x \nabla_x^T \ln L\left(Z^k, x\right)\right]$

Cramér-Rao Lower Bound (CRLB)
Return to Table of Content
215
Cramér-Rao Lower Bound (CRLB)SOLO
Nonrandom Parameters

The Likelihood p (z|x) may be over-parameterized, so that some of x, or combinations
of elements of x, do not affect p (z|x). In such a case the FIM for the parameters x
becomes singular. This leads to problems in computing the Cramér – Rao bounds. Let
$y \in R^r$ (r ≤ n) be an alternative parameterization of the Likelihood, such that p (z|y)
is a well defined density function for z given $y \in R^r$, and the corresponding FIM is
non-singular. We define a possibly non-invertible coordinate transformation $x = t\left(y\right)$.

Theorem 1: Nonrandom Parametric Cramér – Rao Bound
Assume that the observation $z \in R^p$ has a well defined probability density function
p (z|y) for all $y \in R^r$, and let $y^*$ denote the parameter that yields the true distribution
of $y$. Moreover, let $\hat{x}\left(z\right) \in R^n$ be an Unbiased Estimator of $x = t\left(y\right)$, and let
$x^* = t\left(y^*\right)$. The estimation error covariance of $\hat{x}\left(z\right)$ is bounded from below by
$E_z\left[\left(\hat{x} - x^*\right)\left(\hat{x} - x^*\right)^T\right] \ge M^T J^{-1} M$
where
$J := E_z\left[\nabla_y \ln p\left(z|y\right)\, \nabla_y^T \ln p\left(z|y\right)\right]\Big|_{y=y^*} \in R^{r \times r} \quad \& \quad M := \nabla_y t^T\left(y\right)\Big|_{y=y^*} \in R^{r \times n}$
are matrices that depend on the true unknown parameter vector $y^*$.
216
Cramér-Rao Lower Bound (CRLB)SOLO
Nonrandom Parameters

Theorem 1 (restated): $E_z\left[\left(\hat{x} - x^*\right)\left(\hat{x} - x^*\right)^T\right] \ge M^T J^{-1} M$, with
$J := E_z\left[\nabla_y \ln p\left(z|y\right)\, \nabla_y^T \ln p\left(z|y\right)\right]\big|_{y=y^*}$ and $M := \nabla_y t^T\left(y\right)\big|_{y=y^*}$.

Proof:
Using the Unbiasedness of the Estimator $\hat{x}\left(z\right)$:
$\int_{R^p} \left[\hat{x}\left(z\right) - t\left(y\right)\right] p\left(z|y\right) dz = 0$

Taking the gradient w.r.t. $y$ on both sides of this relation we obtain:
$\int_{R^p} \nabla_y p\left(z|y\right) \left[\hat{x}\left(z\right) - t\left(y\right)\right]^T dz - \int_{R^p} \nabla_y t^T\left(y\right)\, p\left(z|y\right) dz = 0$

so, using $\nabla_y p\left(z|y\right) = p\left(z|y\right)\, \nabla_y \ln p\left(z|y\right)$ and $\underbrace{\int_{R^p} p\left(z|y\right) dz}_{1}$:
$\int_{R^p} \nabla_y \ln p\left(z|y\right) \left[\hat{x}\left(z\right) - t\left(y\right)\right]^T p\left(z|y\right) dz = \nabla_y t^T\left(y\right)$

Consider the Random Vector: $\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z|y\right) \end{bmatrix}$
where:
$E_z \begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z|y\right) \end{bmatrix} = \begin{bmatrix} E_z\left(\hat{x} - x\right) \\ E_z\left[\nabla_y \ln p\left(z|y\right)\right] \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$
(the first entry vanishes by the Unbiasedness of the Estimator, the second by the zero
mean of the Score).
217
Cramér-Rao Lower Bound (CRLB)SOLO
Nonrandom Parameters

Theorem 1 (restated): $E_z\left[\left(\hat{x} - x^*\right)\left(\hat{x} - x^*\right)^T\right] \ge M^T J^{-1} M$.

Proof (continue – 1):
Consider the Random Vector $\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z|y\right) \end{bmatrix}$.
Its Covariance Matrix is Positive Semi-definite by construction:
$E_z\left\{\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z|y\right) \end{bmatrix}\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z|y\right) \end{bmatrix}^T\right\} = \begin{bmatrix} C & M^T \\ M & J \end{bmatrix} \ge 0$
where:
$C := E_z\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right]$
$J := E_z\left[\nabla_y \ln p\left(z|y\right)\, \nabla_y^T \ln p\left(z|y\right)\right]$
$M^T := E_z\left[\left(\hat{x} - x\right) \nabla_y^T \ln p\left(z|y\right)\right]$, and we found $\int_{R^p} \nabla_y \ln p\left(z|y\right)\left[\hat{x}\left(z\right) - t\left(y\right)\right]^T p\left(z|y\right) dz = \nabla_y t^T\left(y\right)$, i.e. $M = \nabla_y t^T\left(y\right)$.

Multiplying the Positive Semi-definite matrix from the left by $\begin{bmatrix} I & -M^T J^{-1} \end{bmatrix}$ and
from the right by its transpose:
$\begin{bmatrix} I & -M^T J^{-1} \end{bmatrix}\begin{bmatrix} C & M^T \\ M & J \end{bmatrix}\begin{bmatrix} I \\ -J^{-1} M \end{bmatrix} = C - M^T J^{-1} M \ge 0$

$0 \le C - M^T J^{-1} M \quad \Longleftrightarrow \quad C := E_z\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] \ge M^T J^{-1} M$
(Equivalent Notations), after suitably inserting the true parameter $y^*$.
q.e.d.
where:
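A small numerical illustration of Theorem 1, assuming (for the example only) the scalar
mapping x = t (y) = 2y and i.i.d. samples z_i ~ N(y*, σ²), so that M = dt/dy = 2 and
J = k/σ²; the estimator 2·(sample mean) is unbiased for x* and attains the bound:

    import numpy as np

    # Theorem 1 check: x = t(y) = 2*y, z_i ~ N(y*, sigma^2), bound = M J^{-1} M.
    rng = np.random.default_rng(3)
    y_star, sigma, k, trials = 0.7, 1.0, 40, 20000
    M, J = 2.0, k / sigma**2

    x_hat = 2.0 * rng.normal(y_star, sigma, (trials, k)).mean(axis=1)  # unbiased for 2*y*
    print(x_hat.var())        # empirical error variance
    print(M * (1.0 / J) * M)  # CRLB M^T J^{-1} M = 4*sigma^2/k - attained here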
218
Cramér-Rao Lower Bound (CRLB)SOLO
Nonrandom Parameters

Corollary 1: Nonrandom Parametric Cramér – Rao Bound (Biased Estimator)
Consider an estimation problem defined by the likelihood p (z|y) and the fixed unknown
parameter $y^*$. Any estimator $\hat{y}\left(z\right)$ with bias $b\left(y\right)$ has a mean square error bounded
from below by
$E_z\left[\left(\hat{y} - y^*\right)\left(\hat{y} - y^*\right)^T\right] \ge M^T J^{-1} M + b\left(y^*\right) b^T\left(y^*\right)$
where
$J := E_z\left[\nabla_y \ln p\left(z|y\right)\, \nabla_y^T \ln p\left(z|y\right)\right]\Big|_{y=y^*} \in R^{n \times n} \quad \& \quad M := \left[I + \nabla_y b^T\left(y\right)\right]\Big|_{y=y^*} \in R^{n \times n}$
are matrices that depend on the true unknown parameter vector $y^*$.

Proof:
Introduce the quantity $x := y + b\left(y\right)$; the estimator $\hat{x}\left(z\right) := \hat{y}\left(z\right)$ is an unbiased
estimator of $x$. Theorem 1, with $t\left(y\right) = y + b\left(y\right)$ so that $\nabla_y t^T\left(y\right) = I + \nabla_y b^T\left(y\right)$, yields:
$E_z\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] \ge \left[I + \nabla_y b^T\left(y\right)\right]^T \left\{E_z\left[\nabla_y \ln p\left(z|y\right)\, \nabla_y^T \ln p\left(z|y\right)\right]\right\}^{-1} \left[I + \nabla_y b^T\left(y\right)\right]$

Using $x := y + b\left(y\right)$ (the cross terms vanish since $E_z\left(\hat{x} - x\right) = 0$), we obtain:
$E_z\left[\left(\hat{y} - y\right)\left(\hat{y} - y\right)^T\right] \ge \left[I + \nabla_y b^T\left(y\right)\right]^T J^{-1} \left[I + \nabla_y b^T\left(y\right)\right] + b\left(y\right) b^T\left(y\right)$
after suitably inserting the true parameter $y^*$.
219
Cramér-Rao Lower Bound (CRLB)
SOLO
The Cramér-Rao Lower Bound on the Variance of the Estimator

The multivariable form of the Cramér-Rao Lower Bound, for the estimation error vector
$\hat{x}\left(Z^k\right) - x = \begin{bmatrix} \hat{x}_1\left(Z^k\right) - x_1 \\ \vdots \\ \hat{x}_n\left(Z^k\right) - x_n \end{bmatrix}$
is:
$E\left\{\left[\hat{x}\left(Z^k\right) - x\right]\left[\hat{x}\left(Z^k\right) - x\right]^T\right\} = \int \left[\hat{x}\left(Z^k\right) - x\right]\left[\hat{x}\left(Z^k\right) - x\right]^T L\left[Z^k, x\right] dZ^k$
$\ge \left[I + \frac{\partial b^T\left(x\right)}{\partial x}\right]^T \left\{E\left[\frac{\partial \ln L\left[Z^k, x\right]}{\partial x} \frac{\partial \ln L\left[Z^k, x\right]}{\partial x}^T\right]\right\}^{-1} \left[I + \frac{\partial b^T\left(x\right)}{\partial x}\right] + b\left(x\right) b^T\left(x\right)$

with the score vector
$\frac{\partial \ln L\left[Z^k, x\right]}{\partial x} = \frac{1}{L\left[Z^k, x\right]} \begin{bmatrix} \partial L\left[Z^k, x\right]/\partial x_1 \\ \vdots \\ \partial L\left[Z^k, x\right]/\partial x_n \end{bmatrix}$
and the Fisher Information Matrix
$J := E\left[\frac{\partial \ln L\left[Z^k, x\right]}{\partial x} \frac{\partial \ln L\left[Z^k, x\right]}{\partial x}^T\right] = - E\left[\frac{\partial^2 \ln L\left[Z^k, x\right]}{\partial x\, \partial x^T}\right]$

Fisher, Sir Ronald Aylmer, 1890 - 1962

Return to Table of Content
220
Cramér-Rao Lower Bound (CRLB)SOLO
Random Parameters

For Random Parameters there is no true parameter value. Instead, the prior assumption
on the parameter distribution determines the probability of different parameter vectors.
Like in the nonrandom parametric case, we assume a possibly non-invertible mapping
$x = t\left(y\right)$, $t: R^r \to R^n$, between a parameter vector $y$ and the sought parameter $x$.
The vector $y$ is assumed to have been chosen such that the joint probability density
p (y,z) is a well defined density.

Theorem 2: Random Parameters (Posterior Cramér – Rao Bound)
Let $y \in R^r$ and $z \in R^p$ be two random vectors with a well defined joint density
p (y,z), and let $\hat{x}\left(z\right) \in R^n$ be an estimate of $x = t\left(y\right)$. If the estimator bias
$b\left(y\right) = \int_{R^p} \left[\hat{x}\left(z\right) - t\left(y\right)\right] p\left(z|y\right) dz$
satisfies
$\lim_{y_i \to \pm\infty} b_j\left(y\right) p\left(y\right) = 0$ for all i = 1,…,r and j = 1,…,n
then the Mean Square of the Estimate is Bounded from Below:
$E_{z,y}\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] \ge M^T J^{-1} M$
$\Longleftrightarrow \quad E_{z,y}\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] - M^T J^{-1} M \ge 0$ (Equivalent Notations)
where
$J := E_{z,y}\left[\nabla_y \ln p\left(z,y\right)\, \nabla_y^T \ln p\left(z,y\right)\right] \in R^{r \times r} \quad \& \quad M := E_y\left[\nabla_y t^T\left(y\right)\right] \in R^{r \times n}$
221
Cramér-Rao Lower Bound (CRLB)SOLO
Random Parameters

Theorem 2 (restated): $E_{z,y}\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] \ge M^T J^{-1} M$, with
$J := E_{z,y}\left[\nabla_y \ln p\left(z,y\right)\, \nabla_y^T \ln p\left(z,y\right)\right]$ and $M := E_y\left[\nabla_y t^T\left(y\right)\right]$.

Proof:
Compute, using $p\left(y,z\right) = p\left(z|y\right) p\left(y\right)$:
$b\left(y\right) p\left(y\right) = \int_{R^p} \left[\hat{x}\left(z\right) - t\left(y\right)\right] p\left(y,z\right) dz$
so that
$\nabla_y \left[b^T\left(y\right) p\left(y\right)\right] = \int_{R^p} \nabla_y \left\{\left[\hat{x}\left(z\right) - t\left(y\right)\right]^T p\left(y,z\right)\right\} dz$

Integrating both sides w.r.t. $y$ over its complete range $R^r$ yields
$\int_{R^r} \nabla_y \left[b^T\left(y\right) p\left(y\right)\right] dy = \int_{R^r}\int_{R^p} \nabla_y \left\{\left[\hat{x}\left(z\right) - t\left(y\right)\right]^T p\left(y,z\right)\right\} dz\, dy$

The (i,j) element of the left hand side matrix is:
$\int_{R^r} \frac{\partial}{\partial y_i}\left[b_j\left(y\right) p\left(y\right)\right] dy = \int \cdots \int \Big[\underbrace{\lim_{y_i \to +\infty} b_j\left(y\right) p\left(y\right)}_{0} - \underbrace{\lim_{y_i \to -\infty} b_j\left(y\right) p\left(y\right)}_{0}\Big] dy_1 \cdots dy_{i-1}\, dy_{i+1} \cdots dy_r = 0$
by the assumption on the bias.
222
Cramér-Rao Lower Bound (CRLB)SOLO
Random Parameters

Theorem 2 (restated): $E_{z,y}\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] \ge M^T J^{-1} M$.

Proof (continue – 1): We found
$\int_{R^r}\int_{R^p} \nabla_y \left\{\left[\hat{x}\left(z\right) - t\left(y\right)\right]^T p\left(y,z\right)\right\} dz\, dy = 0$
Expanding the gradient of the product and using $\nabla_y p\left(y,z\right) = p\left(y,z\right) \nabla_y \ln p\left(y,z\right)$:
$E_{z,y}\left[\nabla_y \ln p\left(z,y\right)\left(\hat{x} - x\right)^T\right] = E_y\left[\nabla_y t^T\left(y\right)\right] = M$

Consider the Random Vector: $\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z,y\right) \end{bmatrix}$, with
$E_{z,y}\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z,y\right) \end{bmatrix} = \begin{bmatrix} E_{z,y}\left(\hat{x} - x\right) \\ E_{z,y}\left[\nabla_y \ln p\left(z,y\right)\right] \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$

Its Covariance Matrix is Positive Semi-definite by construction:
$E_{z,y}\left\{\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z,y\right) \end{bmatrix}\begin{bmatrix} \hat{x} - x \\ \nabla_y \ln p\left(z,y\right) \end{bmatrix}^T\right\} = \begin{bmatrix} C & M^T \\ M & J \end{bmatrix} \ge 0$
where:
$C := E_{z,y}\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right]$
$J := E_{z,y}\left[\nabla_y \ln p\left(z,y\right)\, \nabla_y^T \ln p\left(z,y\right)\right]$
$M^T := E_{z,y}\left[\left(\hat{x} - x\right) \nabla_y^T \ln p\left(z,y\right)\right] = \left\{E_y\left[\nabla_y t^T\left(y\right)\right]\right\}^T$

Multiplying from the left by $\begin{bmatrix} I & -M^T J^{-1} \end{bmatrix}$ and from the right by its transpose:
$0 \le C - M^T J^{-1} M \quad \Longleftrightarrow \quad C := E_{z,y}\left[\left(\hat{x} - x\right)\left(\hat{x} - x\right)^T\right] \ge M^T J^{-1} M$ (Equivalent Notations)
q.e.d.

Return to Table of Content
223
Cramér-Rao Lower Bound (CRLB)SOLO
Nonrandom and Random Parameters Cramér – Rao Bounds

For Nonrandom Parameters the Cramér – Rao Bound depends on the true unknown
parameter vector y and on the model of the problem, defined by p (z|y) and the mapping
x = t (y). Hence the bound can only be computed in simulations, where the true value
of the sought parameter vector y is known.

For Random Parameters the Cramér – Rao Bound can be computed even in real
applications. Since the parameters are random, there is no unknown true parameter value.
Instead, in the posterior Cramér – Rao Bound the matrices J and M are computed by
mathematical expectation performed with respect to the prior distribution of the parameters.

Return to Table of Content
224
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation

$x_{k+1} = f_k\left(x_k, w_k\right) \in R^n,\qquad z_k = h_k\left(x_k, v_k\right) \in R^p$
$w_k$ & $v_k$ are system and measurement white-noise sequences, independent of past and
current states and of each other, having known P.D.F.s $p\left(w_k\right)$ & $p\left(v_k\right)$.
In addition, the P.D.F. of the initial state $p\left(x_0\right)$ is also given.

After k cycles we have k measurements $Z_{1:k} := \left(z_1, z_2, \dots, z_k\right)^T$ and k random
parameters $X_{0:k} := \left(x_0, x_1, \dots, x_k\right)^T$, estimated by an Unbiased Estimator as
$\hat{X}_{1:k|k} := \left(\hat{x}_{1|1}, \hat{x}_{2|2}, \dots, \hat{x}_{k|k}\right)^T$.

We found that the Cramér – Rao Lower Bound for the Random Parameters is given by:
$E_{Z_{1:k},X_{1:k}}\left[\left(\hat{X}_{1:k|k} - X_{1:k}\right)\left(\hat{X}_{1:k|k} - X_{1:k}\right)^T\right] \ge \left\{E\left[\nabla_{X_{1:k}} \ln p\left(Z_{1:k}, X_{1:k}\right)\, \nabla_{X_{1:k}}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right]\right\}^{-1} = \left\{- E\left[\nabla_{X_{1:k}} \nabla_{X_{1:k}}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right]\right\}^{-1}$

If we have a deterministic state model, i.e. $x_{k+1} = f\left(x_k\right)$, then we can use the Nonrandom
Parametric Cramér – Rao Lower Bound:
$E_{Z_{1:k}}\left[\left(\hat{X}_{1:k|k} - X_{1:k}\right)\left(\hat{X}_{1:k|k} - X_{1:k}\right)^T\right] \ge \left\{- E\left[\nabla_{X_{1:k}} \nabla_{X_{1:k}}^T \ln p\left(Z_{1:k} | X_{1:k}\right)\right]\right\}^{-1}$

The CRLB provides a lower bound for second-order (mean-squared) error only. Posterior
densities, which result from Nonlinear Filtering, are in general non-Gaussian. A full
statistical characterization of a non-Gaussian density requires higher-order moments, in
addition to mean and covariance. Therefore, the CRLB for Nonlinear Filtering does not
fully characterize the accuracy of Filtering Algorithms.
225
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation

(system model, noise assumptions, and estimator notation as on the previous slide)

Theorem 3: The Cramér – Rao Lower Bound for the Random Parameters is given as
follows. Perform the partitioning
$X_{1:k} := \left[X_{1:k-1}^T, x_k^T\right]^T,\ x_k \in R^{n \times 1} \qquad \hat{X}_{1:k|k} := \left[\hat{X}_{1:k-1|k}^T, \hat{x}_{k|k}^T\right]^T$
and define
$A_k := - E_{X_{1:k},Z_{1:k}}\left[\nabla_{X_{1:k-1}} \nabla_{X_{1:k-1}}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right] \in R^{(k-1)n \times (k-1)n}$
$B_k := - E_{X_{1:k},Z_{1:k}}\left[\nabla_{X_{1:k-1}} \nabla_{x_k}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right] \in R^{(k-1)n \times n}$
$C_k := - E_{X_{1:k},Z_{1:k}}\left[\nabla_{x_k} \nabla_{x_k}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right] \in R^{n \times n}$

Then:
$E_{X,Z}\left[\left(\hat{x}_{k|k} - x_k\right)\left(\hat{x}_{k|k} - x_k\right)^T\right] \ge J_k^{-1} := \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} \in R^{n \times n}$
$\Longleftrightarrow \quad E_{X,Z}\left[\left(\hat{x}_{k|k} - x_k\right)\left(\hat{x}_{k|k} - x_k\right)^T\right] - \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} \ge 0$ (Equivalent Notations)
226
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation

(system model as above)

Proof of Theorem 3: Perform the partitioning
$X_{1:k} := \left[X_{1:k-1}^T, x_k^T\right]^T \qquad \hat{X}_{1:k|k} := \left[\hat{X}_{1:k-1|k}^T, \hat{x}_{k|k}^T\right]^T$
The full Information Matrix then has the block structure
$- E_{X_{1:k},Z_{1:k}}\left[\nabla_{X_{1:k}} \nabla_{X_{1:k}}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right] = \begin{bmatrix} A_k & B_k \\ B_k^T & C_k \end{bmatrix} \in R^{kn \times kn}$
and its inverse satisfies the block matrix factorization
$\begin{bmatrix} A_k & B_k \\ B_k^T & C_k \end{bmatrix}^{-1} = \begin{bmatrix} I & -A_k^{-1} B_k \\ 0 & I \end{bmatrix}\begin{bmatrix} A_k^{-1} & 0 \\ 0 & \left(C_k - B_k^T A_k^{-1} B_k\right)^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -B_k^T A_k^{-1} & I \end{bmatrix}$
so that the lower-right $n \times n$ block of the inverse is $\left(C_k - B_k^T A_k^{-1} B_k\right)^{-1} = J_k^{-1}$.
227
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

Proof of Theorem 3 (continue – 1): We found
$E_{X,Z}\left\{\begin{bmatrix} X_{1:k-1} - \hat{X}_{1:k-1|k} \\ x_k - \hat{x}_{k|k} \end{bmatrix}\begin{bmatrix} X_{1:k-1} - \hat{X}_{1:k-1|k} \\ x_k - \hat{x}_{k|k} \end{bmatrix}^T\right\} - \begin{bmatrix} A_k & B_k \\ B_k^T & C_k \end{bmatrix}^{-1} \ge 0$
with
$\begin{bmatrix} A_k & B_k \\ B_k^T & C_k \end{bmatrix}^{-1} = \begin{bmatrix} I & -A_k^{-1} B_k \\ 0 & I \end{bmatrix}\begin{bmatrix} A_k^{-1} & 0 \\ 0 & \left(C_k - B_k^T A_k^{-1} B_k\right)^{-1} \end{bmatrix}\begin{bmatrix} I & 0 \\ -B_k^T A_k^{-1} & I \end{bmatrix}$
(Positive Semi-definite).
228
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above; $A_k$, $B_k$, $C_k$ defined as in the statement of Theorem 3)

Proof of Theorem 3 (continue – 2): Selecting the lower-right $n \times n$ block of the
inequality above (i.e., pre- and post-multiplying by $\begin{bmatrix} 0 & I \end{bmatrix}$ and its transpose) gives
$E_{X,Z}\left[\left(\hat{x}_{k|k} - x_k\right)\left(\hat{x}_{k|k} - x_k\right)^T\right] \ge \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} =: J_k^{-1}$
$\Longleftrightarrow \quad E_{X,Z}\left[\left(\hat{x}_{k|k} - x_k\right)\left(\hat{x}_{k|k} - x_k\right)^T\right] - \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} \ge 0$ (Equivalent Notations)
q.e.d.
229
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

We found
$E_{X,Z}\left[\left(\hat{x}_{k|k} - x_k\right)\left(\hat{x}_{k|k} - x_k\right)^T\right] \ge J_k^{-1} := \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} \in R^{n \times n}$
We want to compute $J_k$ recursively, without the need to invert large matrices such as $A_k$.

Theorem 4: The Recursive Cramér–Rao Lower Bound for the Random Parameters is
given by:
$E_{X,Z}\left[\left(\hat{x}_{k+1|k+1} - x_{k+1}\right)\left(\hat{x}_{k+1|k+1} - x_{k+1}\right)^T\right] \ge J_{k+1}^{-1} := \left[D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}\right]^{-1} \in R^{n \times n}$
where
$D_k^{11} := - E\left[\nabla_{x_k} \nabla_{x_k}^T \ln p\left(x_{k+1} | x_k\right)\right] \in R^{n \times n}$
$D_k^{12} := - E\left[\nabla_{x_k} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] = \left(D_k^{21}\right)^T \in R^{n \times n}$
$D_k^{22} := - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(z_{k+1} | x_{k+1}\right)\right] \in R^{n \times n}$
The recursions start with the initial information matrix
$J_0 = E\left[\nabla_{x_0} \ln p\left(x_0\right)\, \nabla_{x_0}^T \ln p\left(x_0\right)\right]$
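The recursion of Theorem 4 is a one-line update once the D-terms are available. A
minimal sketch (the 2×2 example matrices are arbitrary assumptions, used only to
exercise the function):

    import numpy as np

    def crlb_recursion(J_k, D11, D12, D22):
        """One step of Theorem 4: J_{k+1} = D22 - D21 (J_k + D11)^{-1} D12, D21 = D12^T."""
        D21 = D12.T
        return D22 - D21 @ np.linalg.inv(J_k + D11) @ D12

    # usage with arbitrary (assumed) 2x2 information terms
    J = np.eye(2)                        # J_0
    D11, D12 = 0.5 * np.eye(2), -0.4 * np.eye(2)
    D22 = 1.3 * np.eye(2)
    for _ in range(5):
        J = crlb_recursion(J, D11, D12, D22)
    print(np.linalg.inv(J))              # the CRLB on the state error covariance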
230
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

We found
$E_{X,Z}\left[\left(\hat{x}_{k|k} - x_k\right)\left(\hat{x}_{k|k} - x_k\right)^T\right] \ge J_k^{-1} := \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1}$
We want to compute $J_k$ recursively, without the need to invert large matrices such as $A_k$.

Proof of Theorem 4:
Start with:
$p\left(Z_{1:k+1}, X_{1:k+1}\right) = p\left(z_{k+1}, Z_{1:k}, x_{k+1}, X_{1:k}\right) = p\left(z_{k+1}, x_{k+1} | Z_{1:k}, X_{1:k}\right) p\left(Z_{1:k}, X_{1:k}\right)$
and, by the Markov property of the model:
$p\left(z_{k+1}, x_{k+1} | Z_{1:k}, X_{1:k}\right) = p\left(z_{k+1} | Z_{1:k}, x_{k+1}, X_{1:k}\right) p\left(x_{k+1} | Z_{1:k}, X_{1:k}\right) = \underbrace{p\left(z_{k+1} | x_{k+1}\right)}_{Markov} \underbrace{p\left(x_{k+1} | x_k\right)}_{Markov}$
so that:
$p\left(Z_{1:k+1}, X_{1:k+1}\right) = p\left(z_{k+1} | x_{k+1}\right) p\left(x_{k+1} | x_k\right) p\left(Z_{1:k}, X_{1:k}\right)$
231
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

Proof of Theorem 4 (continue – 1):
Using $p\left(Z_{1:k+1}, X_{1:k+1}\right) = p\left(z_{k+1} | x_{k+1}\right) p\left(x_{k+1} | x_k\right) p\left(Z_{1:k}, X_{1:k}\right)$ and the
partition $X_{1:k+1} = \left[X_{1:k-1}^T, x_k^T, x_{k+1}^T\right]^T$, write the extended information matrix:
$- E_{X,Z}\left[\nabla_{X_{1:k+1}} \nabla_{X_{1:k+1}}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = \begin{bmatrix} A_k' & B_k' & L_k \\ B_k'^T & C_k' & E_k \\ L_k^T & E_k^T & F_k \end{bmatrix}$

With $\ln p\left(Z_{1:k+1}, X_{1:k+1}\right) = \ln p\left(z_{k+1} | x_{k+1}\right) + \ln p\left(x_{k+1} | x_k\right) + \ln p\left(Z_{1:k}, X_{1:k}\right)$,
compute the blocks:
$A_k' := - E\left[\nabla_{X_{1:k-1}} \nabla_{X_{1:k-1}}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = - E\left[\nabla_{X_{1:k-1}} \nabla_{X_{1:k-1}}^T \ln p\left(Z_{1:k}, X_{1:k}\right)\right] = A_k$
(only the factor $p\left(Z_{1:k}, X_{1:k}\right)$ depends on $X_{1:k-1}$)
$B_k' := - E\left[\nabla_{X_{1:k-1}} \nabla_{x_k}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = B_k$
$C_k' := - E\left[\nabla_{x_k} \nabla_{x_k}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = C_k + \underbrace{\left(- E\left[\nabla_{x_k} \nabla_{x_k}^T \ln p\left(x_{k+1} | x_k\right)\right]\right)}_{D_k^{11}} = C_k + D_k^{11}$
232
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

Proof of Theorem 4 (continue – 2): Compute the remaining blocks of
$p\left(Z_{1:k+1}, X_{1:k+1}\right) = p\left(z_{k+1} | x_{k+1}\right) p\left(x_{k+1} | x_k\right) p\left(Z_{1:k}, X_{1:k}\right)$:

$L_k := - E\left[\nabla_{X_{1:k-1}} \nabla_{x_{k+1}}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = \underbrace{\cdots}_{0} + \underbrace{\cdots}_{0} + \underbrace{\cdots}_{0} = 0$
(none of the three factors couples $X_{1:k-1}$ with $x_{k+1}$)

$E_k := - E\left[\nabla_{x_k} \nabla_{x_{k+1}}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = - E\left[\nabla_{x_k} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] =: D_k^{12}$

$F_k := - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(z_{k+1} | x_{k+1}\right)\right] =: D_k^{22}$
233
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

Proof of Theorem 4 (continue – 3): We found
$- E_{X,Z}\left[\nabla_{X_{1:k+1}} \nabla_{X_{1:k+1}}^T \ln p\left(Z_{1:k+1}, X_{1:k+1}\right)\right] = \begin{bmatrix} A_k & B_k & 0 \\ B_k^T & C_k + D_k^{11} & D_k^{12} \\ 0 & D_k^{21} & D_k^{22} \end{bmatrix} =: \begin{bmatrix} A_{k+1} & B_{k+1} \\ B_{k+1}^T & C_{k+1} \end{bmatrix}$
with
$A_{k+1} = \begin{bmatrix} A_k & B_k \\ B_k^T & C_k + D_k^{11} \end{bmatrix},\qquad B_{k+1} = \begin{bmatrix} 0 \\ D_k^{12} \end{bmatrix},\qquad C_{k+1} = D_k^{22}$

Therefore, applying Theorem 3 at step k+1:
$J_{k+1} = C_{k+1} - B_{k+1}^T A_{k+1}^{-1} B_{k+1} = D_k^{22} - \begin{bmatrix} 0 & D_k^{21} \end{bmatrix}\begin{bmatrix} A_k & B_k \\ B_k^T & C_k + D_k^{11} \end{bmatrix}^{-1}\begin{bmatrix} 0 \\ D_k^{12} \end{bmatrix}$
Using the block matrix inversion identity, the lower-right block of $A_{k+1}^{-1}$ is
$\left(C_k + D_k^{11} - B_k^T A_k^{-1} B_k\right)^{-1} = \left(J_k + D_k^{11}\right)^{-1}$
so that
$E_{X,Z}\left[\left(\hat{x}_{k+1|k+1} - x_{k+1}\right)\left(\hat{x}_{k+1|k+1} - x_{k+1}\right)^T\right] \ge J_{k+1}^{-1} := \left[D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}\right]^{-1}$
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound
234
Cramér-Rao Lower Bound (CRLB)SOLO
The recursions start with the initial information matrix J0, which can be computed
from the initial density p (x0) as follows:
      1
1211121221
111|111|1, :ˆˆ

  kkkkkk
T
kkkkkkZX DDJDDJxxxxE
  
  
  kk
T
xxZXk
kk
T
xXZXk
kk
T
XXZXk
XZpEC
XZpEB
XZpEA
kk
kk
kk
:1:1,
:1:1,
:1:1,
,ln:
,ln:
,ln:
1:1
1:11:1





Proof of Theorem 4 (continue – 4):
     111
||, :ˆˆ

 kk
T
kkk
T
kkkkkkZX BABCJxxxxE
  
     
     11|1|
22
21
1|
12
1|
11
|ln|ln:
|ln:
|ln:
1111111
11
1









kk
T
xxxzkk
T
xxxxk
T
kkk
T
xxxxk
kk
T
xxxxk
xzpExxpED
DxxpED
xxpED
kkkkkkkk
kkkk
kkkk
    000 lnln 000
xpxpEJ T
xxx 
 
  p
kkk
n
kkk
vxhz
wxfx
R
R

 
,
, 11 kk vw &1 are system and measurement white-noise sequences
independent of past and current states and on each other and
having known P.D.F.s    kk vpwp &1
 0xpIn addition the P.D.F. of the initial state , is also given.
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound
235
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation – Recursive Cramér–Rao Lower Bound

(system model as above)

Proof of Theorem 4 (continue – 5): The term $D_k^{22}$ splits into two parts,
$D_k^{22} = D_k^{22}\left(1\right) + D_k^{22}\left(2\right)$:
$D_k^{22}\left(1\right) := - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] \in R^{n \times n}$
$D_k^{22}\left(2\right) := - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(z_{k+1} | x_{k+1}\right)\right] \in R^{n \times n}$
so that
$J_{k+1} = \underbrace{D_k^{22}\left(1\right) - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}}_{Prediction\ Using\ Process\ Model} + \underbrace{D_k^{22}\left(2\right)}_{Measurement\ Update}$
q.e.d.
236
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation – Special Cases

(system model as above)

Probability Density Function of $x_0$ is Gaussian:
$p\left(x_0\right) = N\left(x_0; \hat{x}_0, P_0\right) = \frac{1}{\sqrt{\left(2\pi\right)^n \left|P_0\right|}} \exp\left[-\frac{1}{2}\left(x_0 - \hat{x}_0\right)^T P_0^{-1}\left(x_0 - \hat{x}_0\right)\right]$

$\nabla_{x_0} \ln p\left(x_0\right) = c - \frac{1}{2}\nabla_{x_0}\left[\left(x_0 - \hat{x}_0\right)^T P_0^{-1}\left(x_0 - \hat{x}_0\right)\right] = - P_0^{-1}\left(x_0 - \hat{x}_0\right)$

$J_0 = E\left[\nabla_{x_0} \ln p\left(x_0\right)\, \nabla_{x_0}^T \ln p\left(x_0\right)\right] = P_0^{-1}\, E\left[\left(x_0 - \hat{x}_0\right)\left(x_0 - \hat{x}_0\right)^T\right] P_0^{-1} = P_0^{-1} P_0 P_0^{-1} = P_0^{-1}$

Return to Table of Content
237
Cramér-Rao Lower Bound (CRLB)SOLO
Discrete Time Nonlinear Estimation – Special Cases
Additive Gaussian Noises

$x_{k+1} = f_k\left(x_k\right) + w_k \in R^n,\qquad z_{k+1} = h_{k+1}\left(x_{k+1}\right) + v_{k+1} \in R^p$
$w_k$ & $v_{k+1}$ are system and measurement Gaussian white-noise sequences, independent
of past and current states and of each other, with covariances $Q_k$ and $R_{k+1}$, respectively.
In addition, the P.D.F. of the initial state $p\left(x_0\right)$ is also given.

$p\left(x_{k+1} | x_k\right) = N\left(w_k; 0, Q_k\right) = \frac{1}{\sqrt{\left(2\pi\right)^n \left|Q_k\right|}} \exp\left[-\frac{1}{2}\left(x_{k+1} - f_k\left(x_k\right)\right)^T Q_k^{-1}\left(x_{k+1} - f_k\left(x_k\right)\right)\right]$
$\nabla_{x_k} \ln p\left(x_{k+1} | x_k\right) = \left[\nabla_{x_k} f_k^T\left(x_k\right)\right] Q_k^{-1}\left(x_{k+1} - f_k\left(x_k\right)\right)$

$p\left(z_{k+1} | x_{k+1}\right) = N\left(v_{k+1}; 0, R_{k+1}\right) = \frac{1}{\sqrt{\left(2\pi\right)^p \left|R_{k+1}\right|}} \exp\left[-\frac{1}{2}\left(z_{k+1} - h_{k+1}\left(x_{k+1}\right)\right)^T R_{k+1}^{-1}\left(z_{k+1} - h_{k+1}\left(x_{k+1}\right)\right)\right]$
$\nabla_{x_{k+1}} \ln p\left(z_{k+1} | x_{k+1}\right) = \left[\nabla_{x_{k+1}} h_{k+1}^T\left(x_{k+1}\right)\right] R_{k+1}^{-1}\left(z_{k+1} - h_{k+1}\left(x_{k+1}\right)\right)$

Define the Jacobians:
$\tilde{F}_k := \left[\nabla_{x_k} f_k^T\left(x_k\right)\right]^T \quad \& \quad \tilde{H}_{k+1} := \left[\nabla_{x_{k+1}} h_{k+1}^T\left(x_{k+1}\right)\right]^T$
238
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Special Cases
Additive Gaussian Noises

(model as above; $\tilde{F}_k := \left[\nabla_{x_k} f_k^T\left(x_k\right)\right]^T$ and $\tilde{H}_{k+1} := \left[\nabla_{x_{k+1}} h_{k+1}^T\left(x_{k+1}\right)\right]^T$ are
the Jacobians of $f_k\left(x_k\right)$ & $h_{k+1}\left(x_{k+1}\right)$, computed at $x_k$ & $x_{k+1}$, respectively)

$D_k^{11} := - E\left[\nabla_{x_k} \nabla_{x_k}^T \ln p\left(x_{k+1} | x_k\right)\right] = E\left[\tilde{F}_k^T Q_k^{-1} \tilde{F}_k\right]$
$D_k^{12} := - E\left[\nabla_{x_k} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] = - E\left[\tilde{F}_k^T\right] Q_k^{-1}$
$D_k^{22}\left(1\right) := - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(x_{k+1} | x_k\right)\right] = Q_k^{-1}$
$D_k^{22}\left(2\right) := - E\left[\nabla_{x_{k+1}} \nabla_{x_{k+1}}^T \ln p\left(z_{k+1} | x_{k+1}\right)\right] = E\left[\tilde{H}_{k+1}^T R_{k+1}^{-1} \tilde{H}_{k+1}\right]$
239
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Special Cases
Additive Gaussian Noises

(model and Jacobians as above)

Summary:
$D_k^{11} = E\left[\tilde{F}_k^T Q_k^{-1} \tilde{F}_k\right],\quad D_k^{12} = - E\left[\tilde{F}_k^T\right] Q_k^{-1},\quad D_k^{22}\left(1\right) = Q_k^{-1},\quad D_k^{22}\left(2\right) = E\left[\tilde{H}_{k+1}^T R_{k+1}^{-1} \tilde{H}_{k+1}\right]$
$J_{k+1} = \underbrace{D_k^{22}\left(1\right) - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}}_{Prediction\ Using\ Process\ Model} + \underbrace{D_k^{22}\left(2\right)}_{Measurement\ Update}$

We can calculate the expectations using a Monte Carlo Simulation. Using $p\left(w_k\right)$,
$p\left(v_{k+1}\right)$ & $p\left(x_0\right)$ we draw
$x_0^i \sim p\left(x_0\right),\qquad w_k^i \sim p\left(w_k\right)\ \&\ v_{k+1}^i \sim p\left(v_{k+1}\right),\qquad i = 1, 2, \dots, N$
We Simulate System States and Measurements:
$x_{k+1}^i = f_k\left(x_k^i\right) + w_k^i,\qquad z_{k+1}^i = h_{k+1}\left(x_{k+1}^i\right) + v_{k+1}^i,\qquad i = 1, 2, \dots, N$
We then average over the $x_0$ realizations to get J0, average over the $x_1$ realizations to
get the next terms, and so forth.

Return to Table of Content
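A compact sketch of this Monte Carlo recursion for a scalar additive-Gaussian model;
the process model f, its Jacobian, and all numerical values are assumptions made only
for the illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    N, steps = 5000, 20
    Q, R, P0 = 0.1, 0.5, 1.0

    f = lambda x: 0.9 * x + 0.1 * np.sin(x)       # hypothetical process model
    fdot = lambda x: 0.9 + 0.1 * np.cos(x)        # its Jacobian F~_k
    hdot = lambda x: np.ones_like(x)              # Jacobian of h(x) = x

    x = rng.normal(0.0, np.sqrt(P0), N)           # x_0^i ~ p(x_0)
    J = 1.0 / P0                                  # J_0 = P_0^{-1} (Gaussian initial state)
    for k in range(steps):
        # Monte Carlo evaluation of the D-terms (scalar case)
        D11 = np.mean(fdot(x) ** 2) / Q           # E[F~^T Q^{-1} F~]
        D12 = -np.mean(fdot(x)) / Q               # -E[F~^T] Q^{-1}
        x = f(x) + rng.normal(0.0, np.sqrt(Q), N) # propagate the realizations
        D22 = 1.0 / Q + np.mean(hdot(x) ** 2) / R # Q^{-1} + E[H~^T R^{-1} H~]
        J = D22 - D12 * (1.0 / (J + D11)) * D12   # J_{k+1} = D22 - D21 (J_k + D11)^{-1} D12
        print(f"k={k+1:2d}  CRLB = 1/J = {1.0/J:.4f}")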
240
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Special Cases
Linear / Gaussian System

$x_{k+1} = F_k x_k + w_k \in R^n,\qquad z_{k+1} = H_{k+1} x_{k+1} + v_{k+1} \in R^p$
$w_k$ & $v_{k+1}$ are system and measurement Gaussian white-noise sequences, independent
of past and current states and of each other, with covariances $Q_k$ and $R_{k+1}$, respectively.
In addition, the P.D.F. of the initial state $p\left(x_0\right)$ is also given.

For the linear model the Jacobians are constant, $\tilde{F}_k = F_k$ and $\tilde{H}_{k+1} = H_{k+1}$, so
$D_k^{11} = F_k^T Q_k^{-1} F_k,\qquad D_k^{12} = - F_k^T Q_k^{-1},\qquad D_k^{22} = Q_k^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$
and
$J_{k+1} = \underbrace{Q_k^{-1} - Q_k^{-1} F_k\left(J_k + F_k^T Q_k^{-1} F_k\right)^{-1} F_k^T Q_k^{-1}}_{Prediction\ Using\ Process\ Model} + \underbrace{H_{k+1}^T R_{k+1}^{-1} H_{k+1}}_{Updated\ Measurements} \overset{Matrix\ Inverse\ Lemma}{=} \left(Q_k + F_k J_k^{-1} F_k^T\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$

Define
$P_{k|k} := J_k^{-1} \quad \& \quad P_{k+1|k+1} := J_{k+1}^{-1} \quad \& \quad P_{k+1|k} := Q_k + F_k P_{k|k} F_k^T$
then
$P_{k+1|k+1}^{-1} = \left(Q_k + F_k P_{k|k} F_k^T\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1} = P_{k+1|k}^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$

The conclusion is that the CRLB for the Linear Gaussian Filtering Problem is
Equivalent to the Covariance Matrix of the Kalman Filter. Return to Table of Content
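This equivalence is easy to verify numerically. A sketch, assuming a constant-velocity
model with position-only measurements (the matrices below are our example, not from
the text):

    import numpy as np

    # Check that the linear/Gaussian CRLB recursion
    # J_{k+1} = (Q + F J_k^{-1} F^T)^{-1} + H^T R^{-1} H
    # reproduces the inverse of the Kalman Filter covariance.
    F = np.array([[1.0, 1.0], [0.0, 1.0]])        # assumed constant-velocity model
    H = np.array([[1.0, 0.0]])                    # position-only measurement
    Q = 0.01 * np.eye(2)
    R = np.array([[0.25]])
    P = np.eye(2)                                 # P_{0|0}
    J = np.linalg.inv(P)                          # J_0 = P_0^{-1}

    for k in range(10):
        # CRLB information recursion
        J = np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T) + H.T @ np.linalg.inv(R) @ H
        # Kalman Filter covariance recursion (prediction + update)
        Ppred = F @ P @ F.T + Q
        K = Ppred @ H.T @ np.linalg.inv(H @ Ppred @ H.T + R)
        P = (np.eye(2) - K @ H) @ Ppred

    print(np.allclose(np.linalg.inv(J), P))       # True: CRLB == KF covariance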
241
Cramér-Rao Lower Bound (CRLB)
SOLO
Discrete Time Nonlinear Estimation – Special Cases
Linear System with Zero System Noise

$x_{k+1} = F_k x_k \in R^n,\qquad z_{k+1} = H_{k+1} x_{k+1} + v_{k+1} \in R^p$
$v_{k+1}$ is a measurement Gaussian white-noise sequence, independent of past and current
states, with covariance $R_{k+1}$. $Q_k = 0$.
In addition, the P.D.F. of the initial state $p\left(x_0\right)$ is also given.

Define $P_{k|k} := J_k^{-1}$ & $P_{k+1|k+1} := J_{k+1}^{-1}$ & $P_{k+1|k} := F_k P_{k|k} F_k^T$ (since $Q_k = 0$); then
$P_{k+1|k+1}^{-1} = \left(F_k P_{k|k} F_k^T\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1} = P_{k+1|k}^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$

Return to Table of Content
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
SOLO Gating and Data Association
Measurement
2
Measurement
1
t1 t2 t3
Association Hypothesis 1 Association Hypothesis 2 Association Hypothesis 3
Measurement
2
Measurement
1
t1 t2 t3
Measurement
2
Measurement
1
t1 t2 t3
Measurement
2
Measurement
1
t1 t2 t3
When more then one Target is detected by the
Sensor in each of the Measurement Scans we must:
• Open and Manage a Track File for each Target
that contains the History of the Target Data.
• After each new Set (Scan) of Measurements
associate each Measurement to an existing
Track File or open a New Track File
(a New Target was detected).
• Only after the association with a Track File the Measurement Data is provided
to the Target Estimator (of the Track File) for Filtering and Prediction for the
next Scan.
243
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
SOLO Gating and Data Association
Background
Filtering: deals with a Single Target, i.e.
Probability of Detection PD = 1, Probability of False Alarm PFA = 0
Facts:
• Sensors operate with PD < 1 and PFA > 0.
• Multiple Targets are often present.
• Measurements (plots) are not labeled!
Problem: How to know which measurements correspond to which Target (Track File)
The goal of Gating and Data Association:
Determine the origin of each Measurement by associating it to the existing Track File,
New Track File or declaring it to be a False Detection.
244
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
SOLO Gating and Data Association
Gating and Data Association Techniques
• Gating (Ellipsoidal, Rectangular, Others)
• (Global) Nearest Neighbor (GNN, NN) Algorithm
• Multiple Hypothesis Tracking (MHT)
• (Joint) Probabilistic Data Association (JPDA/PDA)
• Multidimensional Assignment
245
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
SOLO Gating and Data Association
Data Association Techniques
• Nearest Neighbor (NN)
Single Scan Methods:
• Global Nearest Neighbor (GNN)
• (Joint) Probabilistic Data Association (PDA/JPDA)
Multiple Scan Methods:
• Multi Hypothesis Tracker (MHT)
• Multi Dimensional Association (MDA)
• Mixture Reduction Data Association (MRDA)
• Viterbi Data Association (VDA)
246
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
 11 , ktxz
 12  kj tS
 kkj ttz |ˆ 11 
 12 , ktxz
 13 , ktxz
 kkj ttz |ˆ 12 
 11  kj tS
Trajectory j = 2
Trajectory j = 1
Measurements
at scan k+1
SOLO
Optimal Correlation of Sensor Data with Tracks on
Surveillance Systems (R.G. Sea, Hughes, 1973)
We have n stored tracks that have predicted measurements
and innovations co variances at scan k+1 given by:
At scan k+1 we have m sensor reports (no more than one report
per target)
Gating and Data Association
    njkSkkz jj ,,11,|1ˆ 
set of all sensor reports on scan k+1 mk zzD ,,11 
H – a particular hypothesis (from a complete set S of
hypotheses) connecting r (H) tracks to r measurements.
We want to solve the following Optimization Problem:
       
   
    HPHDP
cHPHDP
HPHDP
DHPDHP
SH
SH
SHSH
|max
1
|
|
max|max|*





Measurement
2
Measurement
1
t1 t2 t3
Association Hypothesis 1
Measurement
2
Measurement
1
t1 t2 t3
Association Hypothesis 2
Measurement
2
Measurement
1
t1 t2 t3
Association Hypothesis 3
Measurement
2
Measurement
1
t1 t2 t3
247
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
 11 , ktxz
 12  kj tS
 kkj ttz |ˆ 11 
 12 , ktxz
 13 , ktxz
 kkj ttz |ˆ 12 
 11  kj tS
Trajectory j = 2
Trajectory j = 1
Measurements
at scan k+1
SOLO
Optimal Correlation of Sensor Data with Tracks on
Surveillance Systems (continuous - 1)
We have several tracks defined by the predicted measurements
and innovations co variances
Gating and Data Association
   
!m
V
em
m
V
FA

 

The probability density function of the False Alarms or New Targets, in the search volume
V, in terms of their spatial density λ , is given by a Poisson Distribution:
    njkSkkz jj ,,11,|1ˆ 
Not all the measurements are from a real target but are from
False Alarms. The common mathematical model for such false
measurements is that they are:
• uniformly spatially distributed
• independent across time
• this is the residual clutter (the constant clutter, if any, is not considered).
m is the number of measurements in scan k+1
 
V
orAlarmFalsezP i
1
TargetNew| 
Because of the uniformly space distribution in the search Volume, we have:
False Alarm Models
248
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
 11 , ktxz
 12  kj tS
 kkj ttz |ˆ 11 
 12 , ktxz
 13 , ktxz
 kkj ttz |ˆ 12 
 11  kj tS
Trajectory j = 2
Trajectory j = 1
Measurements
at scan k+1
SOLO
Optimal Correlation of Sensor Data with Tracks on
Surveillance Systems (continuous - 2)
Gating and Data Association
 mk zzD ,,11 
H – a particular hypothesis (from a complete set S of hypotheses)
connecting r (H) tracks to r measurements and assuming m-r false alarms or new targets.
   
    
  











r rm
l
j
jij
T
ji
m
l
l
VS
zzSzz
HzpHDP
1 1
1
1
1
2
2/ˆˆexp
||



       
   
    HPHDP
cHPHDP
HPHDP
DHPDHP
SH
SH
SHSH
|max
1
|
|
max|max|*





P (D|H) - probability of the measurements
given that hypothesis H is true.
     

m
i
i
tindependen
tsmeasuremen
m HzPHzzPHDP
1
1 ||,,| 
where:
 
      









 
jtracktoconnecteditmeasuremen
S
zSz
Sz
orAlarmFalseistmeasuremeniif
V
HzP
j
jj
T
j
jj
i
2
2/ˆzˆzexp
,ˆ;z
TargetNew
1
|
i
1
i
iN
249
SOLO Gating and Data Association
Optimal Correlation of Sensor Data with Tracks on Surveillance Systems (continue – 3)

P (H) - probability of hypothesis H connecting tracks j1,…,jr to measurements i1,…,ir
from m sensor reports:
$P\left(H\right) = P\left(i_1,\dots,i_r | j_1,\dots,j_r, m\right)\, P\left(j_1,\dots,j_r\right)\, P_{FA}\left(m-r\right)\, P\left(m\right)$
where:
$P\left(i_1,\dots,i_r | j_1,\dots,j_r, m\right) = \frac{1}{m\left(m-1\right)\cdots\left(m-r+1\right)} = \frac{\left(m-r\right)!}{m!}$
- probability of connecting tracks j1,…,jr to measurements i1,…,ir

$P\left(j_1,\dots,j_r\right) = \underbrace{P_D^{j_1}\cdots P_D^{j_r}}_{Detecting\ j_1,\dots,j_r} \underbrace{\prod_{j \ne j_1,\dots,j_r}\left(1 - P_D^j\right)}_{Not\ Detecting\ the\ others} = \prod_{l=1}^{r} \frac{P_D^{j_l}}{1 - P_D^{j_l}} \prod_{j=1}^{n}\left(1 - P_D^j\right)$
- probability of detecting only the targets j1,…,jr

$P_{FA}\left(m-r\right) = e^{-\lambda V}\, \frac{\left(\lambda V\right)^{m-r}}{\left(m-r\right)!}$
- for (m - r) False Alarms or New Targets, assuming a Poisson Distribution with
density λ over the search volume V of (m - r) reports

$P\left(m\right)$ - probability of exactly m reports
250
SOLO Gating and Data Association
Optimal Correlation of Sensor Data with Tracks on Surveillance Systems (continue – 4)

$P\left(H^*|D\right) = \max_{H \in S} \frac{1}{c} P\left(D|H\right) P\left(H\right)$
where:
$P\left(D|H\right) = \left(\frac{1}{V}\right)^{m-r} \prod_{l=1}^{r} \frac{\exp\left[-\left(z_{i_l} - \hat{z}_{j_l}\right)^T S_{j_l}^{-1}\left(z_{i_l} - \hat{z}_{j_l}\right)/2\right]}{\sqrt{\left|2\pi S_{j_l}\right|}}$
$P\left(H\right) = \frac{\left(m-r\right)!}{m!} \prod_{l=1}^{r} \frac{P_D^{j_l}}{1 - P_D^{j_l}} \prod_{j=1}^{n}\left(1 - P_D^j\right)\, e^{-\lambda V}\, \frac{\left(\lambda V\right)^{m-r}}{\left(m-r\right)!}\, P\left(m\right)$

Collecting into a constant all the factors that do not depend on the particular
assignment ($e^{-\lambda V}$, $\lambda^m$, $m!$, $P\left(m\right)$, and $\prod_j\left(1 - P_D^j\right)$):
$\ln\left[\frac{1}{c} P\left(D|H\right) P\left(H\right)\right] = const - \frac{1}{2}\sum_{\left(i,j\right) \in H}\left[\left(z_i - \hat{z}_j\right)^T S_j^{-1}\left(z_i - \hat{z}_j\right) - 2\ln\frac{P_D^j}{\left(1 - P_D^j\right)\lambda\sqrt{\left|2\pi S_j\right|}}\right]$

therefore:
$\max_{H \in S}\ln\left[\frac{1}{c} P\left(D|H\right) P\left(H\right)\right] \iff \min_{H \in S}\sum_{\left(i,j\right) \in H}\left[d_{ij}^2 - G_j\right],\qquad d_{ij}^2 := \left(z_i - \hat{z}_j\right)^T S_j^{-1}\left(z_i - \hat{z}_j\right)$
where
$G_j := 2\ln\frac{P_D^j}{\left(1 - P_D^j\right)\lambda\sqrt{\left|2\pi S_j\right|}}$
Sensor Data
Processing and
Measurement
Formation
Observation -
to - Track
Association
Input
Data Track Maintenance
( Initialization,
Confirmation
and Deletion)
Filtering and
Prediction
Gating
Computations
Samuel S. Blackman, " Multiple-Target Tracking with Radar Applications", Artech House,
1986
Samuel S. Blackman, Robert Popoli, " Design and Analysis of Modern Tracking Systems",
Artech House, 1999
 11 , ktxz
 12  kj tS
 kkj ttz |ˆ 11 
 12 , ktxz
 13 , ktxz
 kkj ttz |ˆ 12 
 11  kj tS
Trajectory j = 2
Trajectory j = 1
Measurements
at scan k+1
SOLO
Optimal Correlation of Sensor Data with Tracks on
Surveillance Systems (continuous - 5)
Gating and Data Association
        HPHDP
c
DHPDHP
SHSH
|max
1
|max|*

  jji
ji
Gd 
2
,
min


















jD
D
j
SP
P
G
j
j


2
1
1
ln2: Association Gate to track j
Return to Table of ContentInnovation in Tracking
In order to find the measurement that
    nizzSzzd jij
T
jiji ,,1ˆˆ:
12


belongs to track j, compute
 jji
ji
Gd 
2
,
minand choose i for which we have
 mki zzDz ,,11  
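A small sketch of this gated test for a single track in Python; the two-dimensional
numbers (predicted measurement, innovation covariance, P_D, λ) are invented for the
illustration:

    import numpy as np

    def gated_nearest_neighbor(z_list, z_pred, S, G):
        """Pick the measurement with minimal d_ij^2 inside the gate G of one track."""
        S_inv = np.linalg.inv(S)
        d2 = np.array([(z - z_pred) @ S_inv @ (z - z_pred) for z in z_list])
        i = int(np.argmin(d2))                   # candidate with minimal d_ij^2
        return (i, d2[i]) if d2[i] <= G else (None, None)   # None: no association

    # hypothetical numbers: predicted measurement, innovation covariance, gate
    z_pred = np.array([10.0, 5.0])
    S = np.array([[2.0, 0.3], [0.3, 1.0]])
    Pd, lam = 0.9, 1e-3                          # detection probability, clutter density
    G = 2.0 * np.log(Pd / ((1.0 - Pd) * lam * np.sqrt(np.linalg.det(2 * np.pi * S))))
    measurements = [np.array([10.4, 4.6]), np.array([14.0, 9.0])]
    print(gated_nearest_neighbor(measurements, z_pred, S, G))   # -> (0, ~0.30)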
252
Sensor Data
Processing and
Measurement
Formation
Observation-
to - Track
Association
Input
Data
Track
Maintenance
)Initialization,
Confirmation
and Deletion(
Filtering and
Prediction
Gating
Computations
Samuel S .Blackman ," Multiple-Target Tracking with Radar Applications ", Artech House ,
1986
Samuel S .Blackman ,Robert Popoli ," Design and Analysis of Modern Tracking Systems
", Artech House ,1999
SOLO Gating and Data Association
Gating
• A way of simplifying data association by eliminating
unlikely observation-to-track pairings.
• We perform this test for every Target being tracked.
• Observation which don’t fall in any of the Gates will be used to initiate potentially
new tracks.
• We use the “measurement prediction” of the filter
 1|1|
ˆ,ˆ   kkkk xkhz
• Using we device a Gate around it, and dismiss
all the observations thatfall outside the Gate,
for data association.
1|ˆ kkz
1|ˆ kkz
 1ktz Measurement
at tk-1
Measurement
Prediction
at tk
 ktS
1|ˆ kkz
 ktxz ,1
 ktS
1|ˆ kkz
 ktxz ,2
 ktxz ,3
Nearest-Neighbor
253
SOLO Gating and Data Association
Ellipsoidal Gating

Assumption: The true measurement, conditioned on the past, is normally (Gaussian)
distributed, with the Probability Density Function (PDF) given by:
$p\left(z\left(k\right) | Z^{k-1}\right) = N\left(z\left(k\right); \hat{z}\left(k|k-1\right), S\left(k\right)\right)$

Then the true measurement will be in the following region:
$\tilde{V}\left(k,\gamma\right) := \left\{z: d_k^2 = \left[z - \hat{z}\left(k|k-1\right)\right]^T S^{-1}\left(k\right)\left[z - \hat{z}\left(k|k-1\right)\right] \le \gamma\right\}$
with probability determined by the Gate Threshold γ.

The region V (k,γ) is called a Gate or Validation Region (symbol V) or Association
Region. It is also known as the Ellipsoid of Probability Concentration.

The volume defined by the Ellipsoid V (k,γ) is given by:
$V\left(k,\gamma\right) = \int_{\tilde{V}\left(k,\gamma\right)} dz_1 \cdots dz_{n_z} = c_{n_z} \left|\gamma\, S\left(k\right)\right|^{1/2} = c_{n_z}\, \gamma^{n_z/2} \left|S\left(k\right)\right|^{1/2}$
where $c_{n_z}$ is the volume of the unit ellipsoid of dimension $n_z$ (of the z measurement
vector):
$c_{n_z} = \frac{\pi^{n_z/2}}{\Gamma\left(n_z/2 + 1\right)} = \begin{cases} \dfrac{\pi^{n_z/2}}{\left(n_z/2\right)!} & n_z\ even \\[2mm] \dfrac{2^{n_z+1}\left(\frac{n_z+1}{2}\right)!\, \pi^{\left(n_z-1\right)/2}}{\left(n_z+1\right)!} & n_z\ odd \end{cases}$
$c_1 = 2,\quad c_2 = \pi,\quad c_3 = 4\pi/3,\quad c_4 = \pi^2/2$
Γ is the gamma function: $\Gamma\left(a\right) = \int_0^{\infty} t^{a-1} \exp\left(-t\right) dt$

[Figure: gate S(tk) around $\hat{z}_{k|k-1}$ with candidate measurements $z_1, z_2, z_3$ - Nearest-Neighbor]
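The gate volume formula translates directly to code. A minimal sketch, with an assumed
2-D innovation covariance and gate threshold:

    import math
    import numpy as np

    def gate_volume(S: np.ndarray, gamma: float) -> float:
        """Volume of the ellipsoidal gate d^2 <= gamma: c_nz * gamma^(nz/2) * |S|^(1/2)."""
        nz = S.shape[0]
        c_nz = math.pi ** (nz / 2) / math.gamma(nz / 2 + 1)   # unit-ellipsoid volume
        return c_nz * gamma ** (nz / 2) * math.sqrt(np.linalg.det(S))

    # hypothetical 2-D innovation covariance and gate threshold gamma = 9.21
    S = np.array([[2.0, 0.3], [0.3, 1.0]])
    print(gate_volume(S, 9.21))    # c_2 = pi, so volume = pi * 9.21 * sqrt(|S|)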
254
[Figure: gate S(tk) around $\hat{z}_{k|k-1}$ with candidate measurements $z_1, z_2, z_3$ - Nearest-Neighbor]
SOLO Gating and Data Association
Ellipsoidal Gating (continue – 1)

Then the true measurement will be in the region
$\tilde{V}\left(k,\gamma\right) := \left\{z: d_k^2 = \left[z - \hat{z}\left(k|k-1\right)\right]^T S^{-1}\left(k\right)\left[z - \hat{z}\left(k|k-1\right)\right] \le \gamma\right\}$
with probability $P_G$ determined by the Gate Threshold γ:
$P_G\left(k,\gamma\right) = \int_{\tilde{V}\left(k,\gamma\right)} \frac{\exp\left[-\left(z - \hat{z}\left(k|k-1\right)\right)^T S^{-1}\left(k\right)\left(z - \hat{z}\left(k|k-1\right)\right)/2\right]}{\sqrt{\left|2\pi S\left(k\right)\right|}}\, dz_1 \cdots dz_{n_z}$

If we transform to the principal axes of $S^{-1}\left(k\right)$:
$S^{-1} = T\, \Sigma^{-1}\, T^T,\qquad \Sigma^{-1} = diag\left(1/\sigma_1^2,\dots,1/\sigma_{n_z}^2\right),\qquad T^T T = T T^T = I$
$d_k^2 = dz^T S^{-1} dz \overset{dw = T^T dz}{=} dw^T \Sigma^{-1} dw = \frac{dw_1^2}{\sigma_1^2} + \cdots + \frac{dw_{n_z}^2}{\sigma_{n_z}^2}$

$Z_k := d_k^2$ is chi-squared distributed of order $n_z$ (Papoulis, pg. 250):
$p_{Z_k}\left(Z_k\right) = \frac{Z_k^{n_z/2 - 1}}{2^{n_z/2}\, \Gamma\left(n_z/2\right)}\, \exp\left(-\frac{Z_k}{2}\right)$
$P_G\left(k,\gamma\right) = \int_0^{\gamma} \frac{Z_k^{n_z/2 - 1}}{2^{n_z/2}\, \Gamma\left(n_z/2\right)}\, \exp\left(-\frac{Z_k}{2}\right) dZ_k$
255
[Figure: gate S(tk) around $\hat{z}_{k|k-1}$ with candidate measurements $z_1, z_2, z_3$ - Nearest-Neighbor]
SOLO Gating and Data Association
Ellipsoidal Gating (continue – 2)

Since $Z_k := d_k^2$ is chi-squared distributed of order $n_z$, the true measurement lies in
$\tilde{V}\left(k,\gamma\right)$ with probability
$P_G\left(k,\gamma\right) = \int_0^{\gamma} \frac{Z_k^{n_z/2 - 1}}{2^{n_z/2}\, \Gamma\left(n_z/2\right)}\, \exp\left(-\frac{Z_k}{2}\right) dZ_k$

This integral has the following closed-form solutions for different $n_z$ (with $g = \sqrt{\gamma}$):
$n_z = 1:\quad P_G = 2\, gc\left(g\right)$
$n_z = 2:\quad P_G = 1 - \exp\left(-\gamma/2\right)$
$n_z = 3:\quad P_G = 2\, gc\left(g\right) - \sqrt{2/\pi}\, g\, \exp\left(-\gamma/2\right)$
$n_z = 4:\quad P_G = 1 - \left(1 + \gamma/2\right)\exp\left(-\gamma/2\right)$
$n_z = 5:\quad P_G = 2\, gc\left(g\right) - \sqrt{2/\pi}\left(1 + \gamma/3\right) g\, \exp\left(-\gamma/2\right)$
$n_z = 6:\quad P_G = 1 - \left(1 + \gamma/2 + \gamma^2/8\right)\exp\left(-\gamma/2\right)$
where $gc\left(x\right) := \frac{1}{\sqrt{2\pi}}\int_0^{x}\exp\left(-u^2/2\right) du$ - the standard Gaussian probability integral.
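The closed forms can be cross-checked against the chi-square CDF (using SciPy); the
sketch below compares the first four cases at an assumed threshold γ = 9.21:

    import numpy as np
    from scipy.stats import chi2, norm

    # Check the closed-form gate probabilities P_G against the chi-square CDF.
    gamma = 9.21                      # gate threshold (e.g., 99% gate for nz = 2)
    g = np.sqrt(gamma)
    gc = lambda x: norm.cdf(x) - 0.5  # gc(x) = (1/sqrt(2*pi)) * int_0^x exp(-u^2/2) du

    closed_form = {
        1: 2 * gc(g),
        2: 1 - np.exp(-gamma / 2),
        3: 2 * gc(g) - np.sqrt(2 / np.pi) * g * np.exp(-gamma / 2),
        4: 1 - (1 + gamma / 2) * np.exp(-gamma / 2),
    }
    for nz, pg in closed_form.items():
        print(nz, round(pg, 6), round(chi2.cdf(gamma, df=nz), 6))  # pairs agree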
256
 ktxz ,1
 ktS
1|ˆ kkz
 ktxz ,2
 ktxz ,3
Nearest-Neighbor
SOLO
Ellipsoidal Gating (continue – 3)
Then the true measurement will be in the following region:
\tilde{V}(k,\gamma) := \left\{ z : d_k^2 = \left[z - \hat{z}(k|k-1)\right]^T S^{-1}(k)\left[z - \hat{z}(k|k-1)\right] \le \gamma \right\}
with probability P_G determined by the Gate Threshold γ. Here we describe another way of determining γ, based on the chi-squared distribution of d_k^2.
Gating and Data Association
Since d_k^2 is chi-squared of order n_z distributed, we can use the chi-square table to determine γ:
P_G = \Pr\{d_k^2 \le \gamma\} = 1 - \alpha,\qquad \Pr\{d_k^2 > \gamma\} = \alpha = 0.01\ \text{(typically)}
Tail probabilities of the chi-square density (α = 0.01):
α = 0.01, n_z = 2: γ = 9.21
α = 0.01, n_z = 3: γ = 11.34
α = 0.01, n_z = 4: γ = 13.28
Return to Table of Content
257
Gating and Data Association
Comparison of Major Data Association Algorithms
(E. Waltz, J. Llinas, "Multisensor Data Fusion", Artech House, 1990, pg. 194)
Major characteristics compared:
(1) Number of previous scans used in data association
(2),(3) Association metric and hypothesis score
(4) Association decision rule and hypothesis maintenance
(5) Use of neighboring observations in track estimation

(A) Nearest Neighbor: (1) 0 (current scan only); (2),(3) score is a sum of distance metrics; (4) hard decision; (5) single unique neighbor observation used. Remarks: sequential process; the association matrix contains all pairing metrics. Major references: [38].
(B) Probabilistic Data Association (PDA), Joint PDA (JPDA): (1) 0 (current scan only); (2),(3) a posteriori probability; (4) hard decision; (5) all neighbors (combined) are used. Remarks: tracks assumed to be initiated; PDA for STT, JPDA for MTT; suitable for dense targets. Major references: [39], [40].
(C) Maximum Likelihood (ML): (1) N; (2),(3) likelihood score; (4) soft decision resulting in multiple hypotheses (requiring branching or track splitting); (5) all neighbors (individually) used in multiple hypotheses, each used for independent estimates. Remarks: batch process for a set of N scans; in the limit N → ∞, full-scene batch processing; suitable for initiation. Major references: [41], [42], [43].
(D) Sequential Bayesian Probabilistic: (1) N; (2),(3) a posteriori probability or likelihood score; (4) soft decision resulting in multiple hypotheses (requiring branching or track splitting); (5) all neighbors (individually) used in multiple hypotheses, each used for independent estimates. Remarks: sequential process with multiple, deferred hypotheses; pruning, combining and clustering are required to limit hypotheses. Major references: [44].
(E) Optimal Bayesian: (1) ∞ (all scans); (2),(3) a posteriori probability or likelihood score; (4) soft decision resulting in multiple hypotheses (requiring branching or track splitting); (5) all neighbors (individually) used in multiple hypotheses, each used for independent estimates. Remarks: batch process; requires the most computation due to consideration of all hypotheses. Major references: [45].

[38] P.G. Casner, R.J. Prengaman, "Integration and Automation of Multiple Co-Located Radars", Proc. IEEE EASCON, 1977, pp. 10-1A-1E
[39] Y. Bar-Shalom, E. Tse, "Tracking in a Cluttered Environment with Probabilistic Data Association", Automatica, Vol. 11, September 1975, pp. 451-460
[40] T.E. Fortmann, Y. Bar-Shalom, M. Scheffe, "Multi-Target Tracking Using Joint Probabilistic Data Association", Proc. 1980 IEEE Conf. on Decision and Control, December 1980, pp. 807-812
[41] R.W. Sittler, "An Optimal Data Association Problem in Surveillance Theory", IEEE Trans. Military Electronics, Vol. MIL-8, April 1964, pp. 125-139
[42] J.J. Stein, S.S. Blackman, "Generalized Correlation of Multi-Target Track Data", IEEE Trans. Aerospace and Electronic Systems, Vol. AES-11, No. 6, November 1975, pp. 1207-1217
[43] C.L. Morefield, "Application of 0-1 Integer Programming to Multi-Target Tracking Problems", IEEE Trans. Automatic Control, Vol. AC-22, June 1977, pp. 302-312
[44] D.B. Reid, "An Algorithm for Tracking Multiple Targets", IEEE Trans. Automatic Control, Vol. AC-24, December 1979, pp. 843-854
[45] R.A. Singer, R.G. Sea, R.B. Housewright, "Derivation and Evaluation of Improved Tracking Filters for Use in Dense Multi-Target Environments", IEEE Trans. Information Theory, Vol. IT-20, July 1974, pp. 423-432
258
Nearest-Neighbor Standard Filter
In the Nearest-Neighbor Standard Filter (NNSF) the validated measurement nearest to the predicted measurement is used for updating the state of the target.
The distance measure to be minimized is the weighted norm of the innovation:
d^2(z) := \left[z - \hat{z}(k|k-1)\right]^T S^{-1}(k)\left[z - \hat{z}(k|k-1)\right] = i^T(k)\,S^{-1}(k)\,i(k)
where S is the covariance matrix of the innovation.
Gating and Data Association
The problem with choosing the nearest neighbor is that, with some probability, it is not the correct measurement. Therefore the NNSF will sometimes use incorrect measurements while "believing" that they are correct.
Gating & Data Association Table
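A minimal sketch of the NNSF selection rule (assuming NumPy; the helper names are illustrative, not from the slides):

```python
# Minimal sketch: pick the validated measurement with the smallest
# weighted innovation norm d^2 = i^T S^-1 i (the NNSF rule).
import numpy as np

def nearest_neighbor(z_pred, S, measurements, gate_threshold):
    """Return the gated measurement nearest to the prediction, or None."""
    S_inv = np.linalg.inv(S)
    best, best_d2 = None, gate_threshold
    for z in measurements:
        innovation = z - z_pred
        d2 = innovation @ S_inv @ innovation  # weighted norm of the innovation
        if d2 <= best_d2:
            best, best_d2 = z, d2
    return best
```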
259
Global Nearest-Neighbor (GNN) Algorithms
Gating and Data Association
Gating & Data Association Table
• Several 2-D assignment algorithms are available:
- Hungarian Method (Kuhn)
- Munkres Algorithm
- JV, JVC (Jonker-Volgenant-Castanon) Algorithms
- Auction Algorithm (Bertsekas)
• All these algorithms give the EXACT global solution.
• They are of polynomial order of complexity.
• They differ in the speed of computation:
- the Auction Algorithm is considered the best (a small assignment sketch follows below).
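As a small illustration of the 2-D assignment problem GNN solves, here is a sketch using SciPy's Hungarian-method solver rather than the auction algorithm; the cost matrix values are made up:

```python
# Minimal sketch: GNN as a 2-D assignment problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: statistical distance between track i and measurement j
cost = np.array([[1.2, 5.0, 9.8],
                 [4.1, 0.7, 6.3]])
track_idx, meas_idx = linear_sum_assignment(cost)
# exact global optimum: track 0 -> measurement 0, track 1 -> measurement 1
```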
260
Suboptimal Bayesian Algorithm: The PDAF
The Probabilistic Data Association Filter (PDAF) is a Suboptimal Bayesian Algorithm that assumes that there is Only One Target of interest in the Gate and that the track has been initialized.
At each sampling instant a Validation Gate (to be defined) is set up. Among the validated measurements only one (or none) can originate from the target; all the others are clutter returns, or "false alarms", and are modeled as Independent Identically Distributed (IID) random variables.
Gating and Data Association
The PDAF uses only the latest set of measurements (the Optimal Bayesian approach uses all the measurements up to the estimation time). The past is summarized approximately by making the following basic assumption of the PDAF:
p\left[x(k) \mid Z^{1:k-1}\right] = \mathcal{N}\left[x(k);\ \hat{x}(k|k-1),\ P(k|k-1)\right]
i.e., the state is assumed normally (Gaussian) distributed according to the latest prediction of the state estimate and covariance matrix.
[Figure: validation gate V(k) centered at the predicted measurement ẑ(t_k|t_{k-1}), with the earlier predictions ẑ(t_{k-1}|t_{k-2}), ẑ(t_{k-2}|t_{k-3}) (estimated measurements of the track); validated measurements z_1(t_k), z_2(t_k), ..., z_m(t_k).]
The detection of the target occurs independently from sample to
sample with a known probability PD, which can be time-varying.
261
Suboptimal Bayesian Algorithm: The PDAF (continue – 1)
Following the white IID innovation assumption, the Validation Gate is defined by the ellipsoid
\tilde{V}(k) := \left\{ z(k) : \left[z(k) - \hat{z}(k|k-1)\right]^T S^{-1}(k)\left[z(k) - \hat{z}(k|k-1)\right] \le \gamma \right\}
Gating and Data Association
The weighted norm of the innovation is chi-square distributed with the number of degrees of freedom equal to the dimension n_z of the measurement. The value of γ is determined by defining the required probability P_G that a measurement falls in the gate:
P_G := \Pr\{z(k) \in \tilde{V}(k)\} = 1 - \alpha
From the chi-square table, given α and n_z, we can determine γ (tail probability α = 0.01):
n_z = 2: γ = 9.21;  n_z = 3: γ = 11.34;  n_z = 4: γ = 13.28
262
P_D := \Pr\{\text{a true target measurement is detected}\}
Suboptimal Bayesian Algorithm: The PDAF (continue – 2)
The fact that a measurement is obtained depends also on
the Probability of Detection PD of the target
Gating and Data Association
Probability that a true Target is detected in the gate = PD PG
Probability that no Target is detected in the gate = 1 - PD PG
Following the assumption that we have m_k measurements (a random variable) in the ellipsoidal validation region \tilde{V}(k), define the events:
• θ_j(k) := { z_j(k) is a target-originated measurement },  j = 1, 2, ..., m_k  (the other m_k - 1 are false alarms)
• θ_0(k) := { none of the measurements at time k is target-originated }  (m_k false alarms)
with probabilities
\beta_j(k) := P\{\theta_j(k) \mid Z^{1:k}\},\qquad j = 0, 1, \ldots, m_k
In view of the above assumptions these events are exclusive and exhaustive, and therefore
\sum_{j=0}^{m_k} \beta_j(k) = 1
The procedure that yields these probabilities is called Probabilistic Data Association (PDA).
263
Suboptimal Bayesian Algorithm: The PDAF (continue – 3)
Gating and Data Association
βj (k) computation
\beta_j(k) := P\{\theta_j(k) \mid Z^{1:k}\} = P\{\theta_j(k) \mid Z(k), m_k, Z^{1:k-1}\},\qquad j = 0, 1, \ldots, m_k
Z^{1:k} is the set of all measurements up to time k; Z(k) is the set of all m_k measurements at time k.
Using Bayes' rule for the exclusive and exhaustive events, we obtain:
\beta_j(k) = \frac{p\left[Z(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] P\{\theta_j(k) \mid m_k, Z^{1:k-1}\}}{\sum_{i=0}^{m_k} p\left[Z(k) \mid \theta_i(k), m_k, Z^{1:k-1}\right] P\{\theta_i(k) \mid m_k, Z^{1:k-1}\}},\qquad j = 0, 1, \ldots, m_k
• θ_j(k) := { z_j(k) is a target-originated measurement },  j = 1, ..., m_k  (m_k - 1 false alarms)
• θ_0(k) := { none of the measurements at time k is target-originated }  (m_k false alarms)
Denoting by φ the number of false alarms (we have φ = m_k - 1 or φ = m_k), we obtain the prior probabilities:
\gamma_j(k) := P\{\theta_j(k) \mid m_k, Z^{1:k-1}\} = \begin{cases} \frac{1}{m_k}\,P\{\varphi = m_k - 1 \mid m_k\} & j = 1, \ldots, m_k \\ P\{\varphi = m_k \mid m_k\} & j = 0 \end{cases}
The denominator,
p\left[Z(k) \mid m_k, Z^{1:k-1}\right] = \sum_{i=0}^{m_k} p\left[Z(k) \mid \theta_i(k), m_k, Z^{1:k-1}\right] P\{\theta_i(k) \mid m_k, Z^{1:k-1}\}
is the Likelihood Function.
264
Suboptimal Bayesian Algorithm: The PDAF (continue – 4)
Gating and Data Association
βj (k) computation (continue – 1)
\beta_j(k) = \frac{p\left[Z(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] \gamma_j(k)}{\sum_{i=0}^{m_k} p\left[Z(k) \mid \theta_i(k), m_k, Z^{1:k-1}\right] \gamma_i(k)},\qquad \gamma_j(k) = \begin{cases} \frac{1}{m_k}\,P\{\varphi = m_k - 1 \mid m_k\} & j = 1, \ldots, m_k \\ P\{\varphi = m_k \mid m_k\} & j = 0 \end{cases}
Using Bayes' formula we obtain:
P\{\varphi = m_k - 1 \mid m_k\} = \frac{P_D P_G\,\mu_F(m_k - 1)}{P_D P_G\,\mu_F(m_k - 1) + (1 - P_D P_G)\,\mu_F(m_k)}
P\{\varphi = m_k \mid m_k\} = \frac{(1 - P_D P_G)\,\mu_F(m_k)}{P_D P_G\,\mu_F(m_k - 1) + (1 - P_D P_G)\,\mu_F(m_k)}
where μ_F is the probability mass function (pmf) of the number of false alarms, and P_D P_G is the probability that the target has been detected and its measurement fell in the gate.
The common denominator is:
P(m_k) = P_D P_G\,\mu_F(m_k - 1) + (1 - P_D P_G)\,\mu_F(m_k)
265
Suboptimal Bayesian Algorithm: The PDAF (continue – 5)
Gating and Data Association
βj (k) computation (continue – 2)
We obtained:
\gamma_j(k) := P\{\theta_j(k) \mid m_k, Z^{1:k-1}\} = \begin{cases} \frac{1}{m_k}\,\frac{P_D P_G\,\mu_F(m_k-1)}{P_D P_G\,\mu_F(m_k-1) + (1 - P_D P_G)\,\mu_F(m_k)} & j = 1, \ldots, m_k \\ \frac{(1 - P_D P_G)\,\mu_F(m_k)}{P_D P_G\,\mu_F(m_k-1) + (1 - P_D P_G)\,\mu_F(m_k)} & j = 0 \end{cases}
Two methods can be used to compute μ_F (the pmf of the number of false alarms):
(i) A (parametric) Poisson model with spatial density λ:
\mu_F(m_k) = e^{-\lambda V}\,\frac{(\lambda V)^{m_k}}{m_k!}
which gives
\gamma_j(k) = \begin{cases} \frac{P_D P_G}{P_D P_G\,m_k + (1 - P_D P_G)\,\lambda V(k)} & j = 1, \ldots, m_k \\ \frac{(1 - P_D P_G)\,\lambda V(k)}{P_D P_G\,m_k + (1 - P_D P_G)\,\lambda V(k)} & j = 0 \end{cases}
(ii) A (nonparametric) diffuse prior model, \mu_F(m_k) = \mu_F(m_k - 1) = \varepsilon, which gives
\gamma_j(k) = \begin{cases} P_D P_G / m_k & j = 1, \ldots, m_k \\ 1 - P_D P_G & j = 0 \end{cases}
The nonparametric model can be obtained from the Poisson model by choosing \lambda := m_k / V(k), where V(k) is the volume of the ellipsoidal gate:
V(k) = \frac{\pi^{n_z/2}}{\Gamma\left(\frac{n_z}{2}+1\right)}\,\gamma^{n_z/2}\left|S(k)\right|^{1/2}
266
Suboptimal Bayesian Algorithm: The PDAF (continue – 6)
Gating and Data Association
βj (k) computation (continue – 3)
Let us compute the likelihoods p[Z(k) | θ_j(k), m_k, Z^{1:k-1}]. Since the measurements are conditionally independent,
p\left[Z(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] = p\left[z_1(k), \ldots, z_{m_k}(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] = \prod_{i=1}^{m_k} p\left[z_i(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right]
Assumptions: Gaussian pdf of the correct target measurement in the ellipsoidal gate, with probability P_G, and uniform distribution of the false alarms inside V(k):
p\left[z_i(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] = \begin{cases} \dfrac{1}{V(k)} & \text{if measurement } i \text{ is a false alarm (or a new target)} \\ P_G^{-1}\,\mathcal{N}\left[i_j(k);\ 0,\ S(k)\right] = P_G^{-1}\,\dfrac{\exp\left(-\frac{1}{2}\,i_j^T(k)\,S^{-1}(k)\,i_j(k)\right)}{(2\pi)^{n_z/2}\left|S(k)\right|^{1/2}} & \text{if } i = j \text{ is the true target} \end{cases}
Since for m_k measurements we can have only one target and m_k - 1 false alarms, or m_k false alarms, we obtain:
p\left[Z(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] = \begin{cases} V(k)^{-m_k+1}\,P_G^{-1}\,\mathcal{N}\left[i_j(k);\ 0,\ S(k)\right] & j = 1, \ldots, m_k \\ V(k)^{-m_k} & j = 0 \end{cases}
267
Suboptimal Bayesian Algorithm: The PDAF (continue – 7)
Gating and Data Association
βj (k) computation (continue – 4)
We obtained, for the parametric (Poisson) model:
p\left[Z(k) \mid \theta_j(k), m_k, Z^{1:k-1}\right] = \begin{cases} V(k)^{-m_k+1}\,P_G^{-1}\,\dfrac{\exp\left(-\frac{1}{2}\,i_j^T(k)\,S^{-1}(k)\,i_j(k)\right)}{(2\pi)^{n_z/2}\left|S(k)\right|^{1/2}} & j = 1, \ldots, m_k \\ V(k)^{-m_k} & j = 0 \end{cases}
\gamma_j(k) = \begin{cases} \dfrac{P_D P_G}{P_D P_G\,m_k + (1 - P_D P_G)\,\lambda V(k)} & j = 1, \ldots, m_k \\ \dfrac{(1 - P_D P_G)\,\lambda V(k)}{P_D P_G\,m_k + (1 - P_D P_G)\,\lambda V(k)} & j = 0 \end{cases}
and therefore
\beta_j(k) = \begin{cases} \dfrac{1}{c}\,V(k)^{-m_k+1}\,P_G^{-1}\,\dfrac{\exp\left(-\frac{1}{2}\,i_j^T(k)\,S^{-1}(k)\,i_j(k)\right)}{(2\pi)^{n_z/2}\left|S(k)\right|^{1/2}}\;\dfrac{P_D P_G}{P_D P_G\,m_k + (1 - P_D P_G)\,\lambda V(k)} & j = 1, \ldots, m_k \\ \dfrac{1}{c}\,V(k)^{-m_k}\,\dfrac{(1 - P_D P_G)\,\lambda V(k)}{P_D P_G\,m_k + (1 - P_D P_G)\,\lambda V(k)} & j = 0 \end{cases}
c is a normalization factor.
268
Suboptimal Bayesian Algorithm: The PDAF (continue – 8)
Gating and Data Association
βj (k) computation (continue – 5)
Cancelling the factors common to all hypotheses we obtain:
\beta_j(k) = \begin{cases} \dfrac{1}{c'}\,\dfrac{P_D}{(2\pi)^{n_z/2}\left|S(k)\right|^{1/2}}\exp\left(-\frac{1}{2}\,i_j^T(k)\,S^{-1}(k)\,i_j(k)\right) & j = 1, \ldots, m_k \\ \dfrac{1}{c'}\,(1 - P_D P_G)\,\lambda & j = 0 \end{cases}
Finally:
\beta_j(k) = \begin{cases} \dfrac{e_j}{b + \sum_{l=1}^{m_k} e_l} & j = 1, \ldots, m_k \\ \dfrac{b}{b + \sum_{l=1}^{m_k} e_l} & j = 0 \end{cases}
where, for the Poisson (parametric) model:
e_j := \exp\left(-\frac{1}{2}\,i_j^T(k)\,S^{-1}(k)\,i_j(k)\right),\qquad b := \lambda\left|2\pi S(k)\right|^{1/2}\frac{1 - P_D P_G}{P_D}
For the nonparametric model we choose \lambda := m_k / V(k).
269
Suboptimal Bayesian Algorithm: The PDAF (continue – 9)
Gating and Data Association
βj (k) computation – Summary (continue – 6)
Evaluation of Association Probabilities βj (k):
\beta_j(k) = \begin{cases} \dfrac{e_j}{b + \sum_{l=1}^{m_k} e_l} & j = 1, \ldots, m_k \\ \dfrac{b}{b + \sum_{l=1}^{m_k} e_l} & j = 0 \end{cases}
For the Poisson (parametric) model:
e_j := \exp\left(-\frac{1}{2}\,i_j^T(k)\,S^{-1}(k)\,i_j(k)\right),\qquad b := \lambda\left|2\pi S(k)\right|^{1/2}\frac{1 - P_D P_G}{P_D}
For the nonparametric model we choose \lambda := m_k / V(k).
Calculation of Innovations and Measurements Validation:
z_j(k),\quad j = 1, \ldots, m_k
i_j(k) = z_j(k) - \hat{z}(k|k-1)
d_j^2 = i_j^T(k)\,S^{-1}(k)\,i_j(k),\qquad \text{validate if } d_j^2 \le \gamma = \gamma(P_G, n_z)
270
Suboptimal Bayesian Algorithm: The PDAF (continue – 10)
Using the Total Probability Theorem (for exclusive and exhaustive events)
p_x(x) = \sum_{i=1}^{n} p_x(x \mid B_i)\,p(B_i),\qquad B_i \cap B_j = \varnothing\ (i \ne j),\qquad \bigcup_{i=1}^{n} B_i = \Omega
Gating and Data Association
we obtain
\hat{x}(k|k) = E\left[x(k) \mid Z^{1:k}\right] = \int x(k)\,p\left[x(k) \mid Z^{1:k}\right] dx(k) = \sum_{j=0}^{m_k} E\left[x(k) \mid \theta_j(k), Z^{1:k}\right] p\left[\theta_j(k) \mid Z^{1:k}\right]
but
E\left[x(k) \mid \theta_j(k), Z^{1:k}\right] = \hat{x}_j(k|k)\quad \&\quad p\left[\theta_j(k) \mid Z^{1:k}\right] = \beta_j(k)
Therefore
\hat{x}(k|k) = \sum_{j=0}^{m_k} \hat{x}_j(k|k)\,\beta_j(k)
i.e., the state estimate for an exclusive and exhaustive mixture of events is the sum of the conditional state estimates, each based on the event θ_j(k) being correct, weighted by its probability β_j(k).
271
Suboptimal Bayesian Algorithm: The PDAF (continue – 11)
Gating and Data Association
\hat{x}_j(k|k) is the updated state estimate conditioned on the event θ_j(k) being correct. It is given by the Kalman Filter topology:
For j = 1, ..., m_k (a possible target detected in the Validation Gate):
\hat{x}_j(k|k) = \hat{x}(k|k-1) + K(k)\,i_j(k) = \hat{x}(k|k-1) + K(k)\left[z_j(k) - \hat{z}(k|k-1)\right]
K(k) = P(k|k-1)\,H^T(k)\,S^{-1}(k)
For j = 0 (no target detected in the Validation Gate) the innovation is i_0(k) = 0, so
\hat{x}_0(k|k) = \hat{x}(k|k-1)
Therefore
\hat{x}(k|k) = \sum_{j=0}^{m_k} \beta_j(k)\,\hat{x}_j(k|k) = \hat{x}(k|k-1) + K(k)\sum_{j=1}^{m_k} \beta_j(k)\,i_j(k) = \hat{x}(k|k-1) + K(k)\,i(k)
where
i(k) := \sum_{j=1}^{m_k} \beta_j(k)\,i_j(k)
is the combined innovation.
272
Suboptimal Bayesian Algorithm: The PDAF (continue – 12)
Gating and Data Association
P(k|k) = E\left\{\left[x(k) - \hat{x}(k|k)\right]\left[x(k) - \hat{x}(k|k)\right]^T \mid Z^{1:k}\right\} = \sum_{j=0}^{m_k} E\left\{\left[x(k) - \hat{x}(k|k)\right]\left[x(k) - \hat{x}(k|k)\right]^T \mid \theta_j(k), Z^{1:k}\right\}\beta_j(k)
(a mixture of exclusive and exhaustive events), with
\hat{x}(k|k) = \sum_{j=0}^{m_k} \hat{x}_j(k|k)\,\beta_j(k)
The covariance of the mixture is obtained by writing x - \hat{x} = (x - \hat{x}_j) + (\hat{x}_j - \hat{x}); the cross terms vanish because E\{x - \hat{x}_j \mid \theta_j(k), Z^{1:k}\} = 0, leaving
P(k|k) = \sum_{j=0}^{m_k} \beta_j(k)\,P_j(k|k) + \sum_{j=0}^{m_k} \beta_j(k)\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T
with
P_0(k|k) = P(k|k-1)   (no target in the Validated Gate)
P_j(k|k) = P(k|k-1) - K(k)\,S(k)\,K^T(k),\quad j = 1, \ldots, m_k   (one target in the Validated Gate)
273
Suboptimal Bayesian Algorithm: The PDAF (continue – 13)
Gating and Data Association
P(k|k) = \sum_{j=0}^{m_k} \beta_j(k)\,P_j(k|k) + \sum_{j=0}^{m_k} \beta_j(k)\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T
with P_j(k|k) = P^c(k|k) := P(k|k-1) - K(k)\,S(k)\,K^T(k) for j = 1, ..., m_k and P_0(k|k) = P(k|k-1). Since
\sum_{j=0}^{m_k} \beta_j(k) = 1 \quad\Rightarrow\quad \sum_{j=1}^{m_k} \beta_j(k) = 1 - \beta_0(k)
we have
P(k|k) = \beta_0(k)\,P(k|k-1) + \left[1 - \beta_0(k)\right]P^c(k|k) + \tilde{P}_d(k)
where
\tilde{P}_d(k) := \sum_{j=0}^{m_k} \beta_j(k)\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T
Expanding the quadratic form and using \sum_{j} \beta_j(k)\,\hat{x}_j(k|k) = \hat{x}(k|k), the cross terms cancel:
\sum_{j=0}^{m_k} \beta_j(k)\left[\hat{x}_j - \hat{x}\right]\left[\hat{x}_j - \hat{x}\right]^T = \sum_{j=0}^{m_k} \beta_j(k)\,\hat{x}_j\,\hat{x}_j^T - \hat{x}\,\hat{x}^T
274
Suboptimal Bayesian Algorithm: The PDAF (continue – 14)
Gating and Data Association
\tilde{P}_d(k) := \sum_{j=0}^{m_k} \beta_j(k)\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T
Substituting \hat{x}_j(k|k) = \hat{x}(k|k-1) + K(k)\,i_j(k) (with i_0(k) = 0) and \hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,i(k):
\tilde{P}_d(k) = K(k)\left[\sum_{j=0}^{m_k} \beta_j(k)\,i_j(k)\,i_j^T(k) - i(k)\,i^T(k)\right]K^T(k)
275
Suboptimal Bayesian Algorithm: The PDAF (continue – 15)
Gating and Data Association
Finally we obtained:
P(k|k) = \beta_0(k)\,P(k|k-1) + \left[1 - \beta_0(k)\right]P^c(k|k) + \tilde{P}_d(k)
P^c(k|k) = P(k|k-1) - K(k)\,S(k)\,K^T(k)
\tilde{P}_d(k) := K(k)\left[\sum_{j=1}^{m_k} \beta_j(k)\,i_j(k)\,i_j^T(k) - i(k)\,i^T(k)\right]K^T(k)
276
Suboptimal Bayesian Algorithm: The PDAF (continue – 16)
Gating and Data Association
One Cycle of the PDAF:
Predicted State:  \hat{x}(k|k-1) = F(k-1)\,\hat{x}(k-1|k-1)
Covariance of Predicted State:  P(k|k-1) = F(k-1)\,P(k-1|k-1)\,F^T(k-1) + Q(k-1)
Predicted Measurement:  \hat{z}(k|k-1) = H(k)\,\hat{x}(k|k-1)
Innovation Covariance:  S(k) = H(k)\,P(k|k-1)\,H^T(k) + R(k)
Filter Gain:  K(k) = P(k|k-1)\,H^T(k)\,S^{-1}(k)
Calculation of Innovations and Measurement Validation:
z_j(k),\ j = 1, \ldots, m_k;\qquad i_j(k) = z_j(k) - \hat{z}(k|k-1);\qquad d_j^2 = i_j^T(k)\,S^{-1}(k)\,i_j(k) \le \gamma(P_G, n_z)
Evaluation of Association Probabilities:
b := \lambda\left|2\pi S(k)\right|^{1/2}\frac{1 - P_D P_G}{P_D}
e_j := \exp\left(-d_j^2/2\right),\quad j = 1, 2, \ldots, m_k
\beta_j(k) = \frac{e_j}{b + \sum_{l=1}^{m_k} e_l},\ j = 1, \ldots, m_k;\qquad \beta_0(k) = \frac{b}{b + \sum_{l=1}^{m_k} e_l}\ \ \text{(no valid observation)}
For the nonparametric model choose \lambda = m_k / V(k).
Combined Innovation:  i(k) = \sum_{j=1}^{m_k} \beta_j(k)\,i_j(k)
Update State Estimation:  \hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,i(k)
Update State Covariance:
P(k|k) = \beta_0(k)\,P(k|k-1) + \left[1 - \beta_0(k)\right]\left[I - K(k)H(k)\right]P(k|k-1) + \tilde{P}_d(k)
Effect of Measurement Origin on State Covariance:
\tilde{P}_d(k) = K(k)\left[\sum_{j=1}^{m_k} \beta_j(k)\,i_j(k)\,i_j^T(k) - i(k)\,i^T(k)\right]K^T(k)
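The cycle above translates almost line by line into code. Here is a minimal sketch of one PDAF cycle for the parametric (Poisson clutter) model, assuming linear-Gaussian models and NumPy; the function and parameter names (pdaf_cycle, clutter_density for λ, etc.) are illustrative, not from the slides:

```python
# Minimal sketch: one cycle of the PDAF (Poisson / parametric clutter model).
import numpy as np

def pdaf_cycle(x, P, F, Q, H, R, measurements, P_D, P_G, clutter_density, gamma):
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    z_pred = H @ x_pred
    S = H @ P_pred @ H.T + R                      # innovation covariance
    S_inv = np.linalg.inv(S)
    K = P_pred @ H.T @ S_inv                      # filter gain
    # Innovations and measurement validation (ellipsoidal gate)
    innovations = [z - z_pred for z in measurements]
    innovations = [i for i in innovations if i @ S_inv @ i <= gamma]
    if not innovations:                           # no validated measurement
        return x_pred, P_pred                     # beta_0 = 1: keep prediction
    # Association probabilities
    e = np.array([np.exp(-0.5 * (i @ S_inv @ i)) for i in innovations])
    b = clutter_density * np.sqrt(np.linalg.det(2 * np.pi * S)) \
        * (1.0 - P_D * P_G) / P_D
    beta = e / (b + e.sum())                      # beta_j, j = 1..m_k
    beta0 = b / (b + e.sum())                     # beta_0 (none is the target)
    # Combined innovation and state update
    i_comb = sum(bj * ij for bj, ij in zip(beta, innovations))
    x_upd = x_pred + K @ i_comb
    # Covariance update, including the measurement-origin uncertainty term
    P_c = P_pred - K @ S @ K.T
    spread = sum(bj * np.outer(ij, ij) for bj, ij in zip(beta, innovations)) \
             - np.outer(i_comb, i_comb)
    P_upd = beta0 * P_pred + (1.0 - beta0) * P_c + K @ spread @ K.T
    return x_upd, P_upd
```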
277
Track Initialization, Maintenance & Deletion
Track Life Cycle
(Initialization, Maintenance & Deletion)
[Figure: track life-cycle state machine. Initial/Terminal State, on Initial Detection, yields a Tentative Track; with a second detection it becomes a Preliminary Track (with No Second Detection it is dropped); waiting N scans, No. of Detections ≥ M gives a Confirmed Track, No. of Detections < M returns to the terminal state; a Confirmed Track is deleted after L Consecutive Missed Detections, and maintained otherwise.]
278
Track Initialization
Track Life Cycle (Initialization, Maintenance & Deletion)
Every detection unassociated to an existing track may be a False Alarm or a New Target. A Track Formation requires a Measurement-to-Measurement Association.
Logic of Track Initialization (2 detections for a Preliminary Track, followed by M detections out of N scans):
1. Every unassociated detection is a "Track Initiator" and yields a "Tentative Track".
2. Around the initial detection a Gate is set up based on
• assumed maximum and minimum target motion parameters;
• the measured noise intensities.
3. If a target gave rise to the initiator in the first scan, then, if detected in the second scan, it will fall in the Gate with nearly unity probability. Following a detection in the second scan this track becomes a Preliminary Track; if there is no detection, the track is dropped. Since the Preliminary Track has two measurements, a Kalman Filter can be initialized and used to set up a Gate for the next (third) sampling time.
4. Starting from the third scan, a logic of M detections out of N scans (frames) is used for the subsequent Gates.
5. If at the end (scan N + 2 at the latest) the logic requirement is satisfied, the track becomes a Confirmed Track; otherwise it is dropped.
279
Track Initialization
Track Maintenance (Initialization, Maintenance & Deletion)
Target Model
• The Target System is given by:
x_k = \Phi_{k-1}\,x_{k-1} + G_{k-1}\,u_{k-1} + w_{k-1}
z_k = H_k\,x_k + v_k
• The Target Filter Model is given by:
\hat{x}_{k|k-1} = \Phi_{k-1}\,\hat{x}_{k-1|k-1} + G_{k-1}\,u_{k-1}
\hat{z}_{k|k-1} = H_k\,\hat{x}_{k|k-1}
• Filter Initialization is done in two steps:
1. Following an unassociated detection, a Preliminary (large) Gate is defined.
2. After a second detection is associated in the Preliminary Gate, the Kalman Filter is initiated using the two measurements by defining \hat{x}_{0|0}, P_{0|0}.
A Preliminary New Track is established.
280
Track Initialization
Track Life Cycle (Initialization, Maintenance & Deletion)
[Figure: measurement-to-track association across scans m, m+1, m+2, m+3. Old targets Tgt #1 and Tgt #2 continue Track #1 and Track #2; new detections are New Targets or False Alarms: Tgt #3 and false alarms initiate Preliminary Track #1 and Preliminary Track #2, the false alarms failing to continue.]
281
Track Initialization
Track Life Cycle (Initialization, Maintenance & Deletion)
Target Model (continue – 1)
• If a detection (probability P_D) can be associated to the track, i.e., it falls in the Acceptance Gate (probability P_G):
\left(z_k - \hat{z}_{k|k-1}\right)^T S_k^{-1}\left(z_k - \hat{z}_{k|k-1}\right) \le \gamma
• At each scan we perform State and Innovation Covariance Prediction:
P_{k|k-1} = \Phi_{k-1}\,P_{k-1|k-1}\,\Phi_{k-1}^T + Q_k
S_k = H_k\,P_{k|k-1}\,H_k^T + R_k
we update the Detection Indicator Vector:
\delta_k = \begin{cases} 1 & \text{if there is a detection in the gate at } k \\ 0 & \text{otherwise} \end{cases}
and the State and State Covariance are updated accordingly (with δ_k = 0 the prediction is retained):
K_k = P_{k|k-1}\,H_k^T\,S_k^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + \delta_k\,K_k\left(z_k - \hat{z}_{k|k-1}\right)
P_{k|k} = P_{k|k-1} - \delta_k\,K_k\,S_k\,K_k^T
• If in M scans out of N we have a detection associated to the track, the Track is Confirmed; otherwise it is dropped.
282
Track Initialization
Track Life Cycle (Initialization, Maintenance & Deletion)
Markov Chain for the Track Initialization Process for M=2, N=3
State and Detection Sequence Indicator Vector (δ = 0: No Detection, δ = 1: Detection; transitions: D = Detection, A = Acceptance):
State 1: initial (zero) state
State 2: δ2 = [1]
State 3: δ3 = [1 1]
State 4: δ4 = [1 1 1]
State 5: δ5 = [1 1 1 0]
State 6: δ6 = [1 1 0]
State 7: δ7 = [1 1 0 1]
State 8a: δ8a = [1 1 1 1]  (Confirmed State)
State 8b: δ8b = [1 1 1 0 1]  (Confirmed State)
State 8c: δ8c = [1 1 0 1 1]  (Confirmed State)
[Figure: Markov-chain graph. Initial State (1) → D → 2 → D → Preliminary Track (3); from 3 the chain moves through states 4-7 to the Track Confirmation states 8a, 8b, 8c under the m=2/n=3 logic; missed detections (D̄, Ā) return the chain toward state 1.]
State i of the Markov Chain is defined by the Detection Sequence Indicator Vector δ_i, where, for example, δ7 = [1 1 0 1] means Detection (D), followed by Detection (A), No Detection (Ā), Detection (A).
The Markov Chain probability vector, denoted μ(k), has components:
\mu_i(k) = \Pr\{\text{the chain is in State } i \text{ at time } k\}
From the Markov Chain description by the table or by the graph we can define the relation:
\mu(k+1) = \Pi^T\,\mu(k),\qquad \mu(1) = \left[1, 0, \ldots, 0\right]^T
284
One-step transition probability matrix Π (rows: current state i; columns: next state j; the entries for states 6 and 7 follow from their detection-sequence definitions):
       1      2     3     4     5      6     7     8
1   1-π_D   π_D    0     0     0      0     0     0
2   1-π_D    0    π_D    0     0      0     0     0
3     0      0     0    π_A    0    1-π_A   0     0
4     0      0     0     0   1-π_A    0     0    π_A
5   1-π_A    0     0     0     0      0     0    π_A
6   1-π_A    0     0     0     0      0    π_A    0
7   1-π_A    0     0     0     0      0     0    π_A
8     0      0     0     0     0      0     0     1
Track Initialization
Track Life Cycle (Initialization, Maintenance & Deletion)
Markov Chain for the Track Initialization Process for M=2, N=3 (continue – 1)
\mu(k+1) = \Pi^T\,\mu(k),\qquad \mu(1) = \left[1, 0, \ldots, 0\right]^T
The initialization (detection) probability is π_D. The acceptance probability is π_A = P_D · P_G, where
P_D = Probability of Detection
P_G = Probability that the true measurement falls in the Gate
Since from each state we can move to only two states, with probabilities π_D (or π_A) and 1 - π_D (or 1 - π_A), the coefficients of the Π matrix must satisfy:
\sum_j \pi_{ij} = 1
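A minimal sketch (assumed, not from the slides) that builds Π for the M=2/N=3 logic above and propagates μ(k+1) = Π^T μ(k) to read off the cumulative confirmation probability μ_8(k):

```python
# Minimal sketch: Markov-chain propagation for the M=2/N=3 confirmation logic.
import numpy as np

def build_Pi(pi_D, pi_A):
    """One-step transition matrix Pi (states 1..8; state 8 is absorbing)."""
    Pi = np.zeros((8, 8))
    Pi[0, [0, 1]] = [1 - pi_D, pi_D]   # 1: wait for the initial detection
    Pi[1, [0, 2]] = [1 - pi_D, pi_D]   # 2: wait for the second detection
    Pi[2, [5, 3]] = [1 - pi_A, pi_A]   # 3: [1 1] -> 6 on miss, 4 on detection
    Pi[3, [4, 7]] = [1 - pi_A, pi_A]   # 4: [1 1 1] -> 5 on miss, 8 on detection
    Pi[4, [0, 7]] = [1 - pi_A, pi_A]   # 5: [1 1 1 0] -> drop or confirm
    Pi[5, [0, 6]] = [1 - pi_A, pi_A]   # 6: [1 1 0] -> drop or state 7
    Pi[6, [0, 7]] = [1 - pi_A, pi_A]   # 7: [1 1 0 1] -> drop or confirm
    Pi[7, 7] = 1.0                     # 8: confirmed (absorbing)
    return Pi

mu = np.eye(8)[0]                      # mu(1): chain starts in state 1
Pi = build_Pi(pi_D=0.9, pi_A=0.85)
for k in range(10):
    mu = Pi.T @ mu                     # mu(k+1) = Pi^T mu(k)
print(mu[7])                           # cumulative confirmation probability
```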
285
Track Initialization
Track Life Cycle (Initialization, Maintenance & Deletion)
Markov Chain for the Track Initialization Process for M=2, N=3 (continue – 2)
\mu(k+1) = \Pi^T\,\mu(k),\qquad \mu(1) = \left[1, 0, \ldots, 0\right]^T
The Track Confirmation is attained in State 8, so μ_8(k) is the cumulative probability that the track has been confirmed by time k, and
\Delta\mu_8(k) = \mu_8(k) - \mu_8(k-1) = \Pr\{\text{confirmation occurs exactly at time } k\}
The Average Confirmation Time of a Target-originated Sequence is:
\bar{t}_C = \sum_{k=1}^{k_C} k\left[\mu_8(k) - \mu_8(k-1)\right]
286
Target Estimators
Filters for Maneuvering Target Detection
• Maneuver Detection Scheme
• Hybrid State Estimation Techniques
- Jump Markov Linear System (JMLS)
- Interacting Multiple Model (IMM)
- Variable Structure IMM
• Cramér - Rao Lower Bound (CRLB) for JMLS
287
Target Estimators
Filters for Maneuvering Target Detection – Background
• The motion of a real target never follows the same dynamic model all the time.
• Essentially, there are (long, relative to measurement updates) periods of constant
velocity (CV) motion with sudden changes in speed and heading.
• The measurements are of target position and velocity (sometimes), but not target
acceleration.
• There are two main approaches to deal with maneuvering targets using the
Kalman Filter framework:
- Maneuver Detection Basic Schemes
- Hybrid-state estimation techniques, where a few predefined target maneuver
models run in parallel, using the same measurements, and recursively we
check what is the most plausible model in each time interval.
288
Target Estimators
Filters for Maneuvering Target Detection
Maneuver Detection Basic Schemes
• Based on the measurement model
z(k) = H(k)\,x(k) + v(k),\qquad v(k) \sim \mathcal{N}\left[0;\ R(k)\right]
and the innovation
i(k) := z(k) - \hat{z}(k|k-1) = z(k) - H(k)\,\hat{x}(k|k-1)
for the optimal Kalman Filter gain, the innovation is unbiased, Gaussian white noise:
i(k) \sim \mathcal{N}\left[0;\ S(k)\right],\qquad S(k) = H(k)\,P(k|k-1)\,H^T(k) + R(k)
• Normalized Innovation Squared (NIS):
\varepsilon(k) := i^T(k)\,S^{-1}(k)\,i(k)
• The NIS, ε, is chi-square distributed with n_z (the dimension of z) degrees of freedom, χ²_{n_z}:
p_{\chi^2_{n_z}}(\varepsilon) = \frac{\varepsilon^{n_z/2-1}}{2^{n_z/2}\,\Gamma(n_z/2)}\exp\left(-\frac{\varepsilon}{2}\right)U(\varepsilon)
289
Target Estimators
Filters for Maneuvering Target Detection
Maneuver Detection Basic Schemes (continue – 1)
• From the chi-square table we can determine ε_max. Tail probabilities of the chi-square density (α = 0.01):
α = 0.01, n_z = 2: ε_max = 9.21;  α = 0.01, n_z = 3: ε_max = 11.34;  α = 0.01, n_z = 4: ε_max = 13.28
• For non-maneuvering motion:
\Pr\{\varepsilon(k) > \varepsilon_{max}\} = \alpha = 0.01\ \text{(typically)}
[Figure: NIS test. For a non-maneuvering target the measurement z_1 yields ε ≤ ε_max; when the target maneuvers, the measurement z_2 yields ε > ε_max.]
• Once a maneuver is detected, the Target dynamic model must be changed.
• In the same way we can detect the end of a Target maneuver.
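A minimal sketch of the NIS test (assuming SciPy; the function name is illustrative, not from the slides):

```python
# Minimal sketch: flag a maneuver when the NIS exceeds the chi-square
# threshold for tail probability alpha.
import numpy as np
from scipy.stats import chi2

def maneuver_detected(innovation, S, alpha=0.01):
    nis = innovation @ np.linalg.inv(S) @ innovation
    return nis > chi2.ppf(1.0 - alpha, df=innovation.size)
```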
290
Target Estimators
Filters for Maneuvering Target Detection
Maneuver Detection Basic Schemes (continue – 2)
Return to Table of Content
291
Target Estimators
The Hybrid Model Approach
- The Target Model, at a given time, is assumed to be one of r possible Target Models (Constant Velocity, Constant Acceleration, Singer Model, etc.):
M \in \left\{M_j\right\}_{j=1}^{r}
• All models are assumed Linear-Gaussian (or linearizations of nonlinear models), and a Kalman-Filter-type estimator is used for state estimation and prediction.
• The measurements z_k = z(t = kT) are received at discrete times k. The information Z_{1:k} at time k consists of all measurements received up to time k:
Z_{1:k} := \left\{z_1, z_2, \ldots, z_k\right\}
• The model-conditioned dynamics and measurements are:
x(k+1) = F\left[k+1, M\right]x(k) + w\left[k, M\right],\qquad w\left[k, M\right] \sim \mathcal{N}\left[0;\ Q(M)\right]
z(k) = H\left[k, M\right]x(k) + v\left[k, M\right],\qquad v\left[k, M\right] \sim \mathcal{N}\left[0;\ R(M)\right]
A Hybrid Model has both continuous (noise) uncertainties and discrete ("model" or "mode") uncertainties.
[Figure: bank of r filters M_1, ..., M_j, ..., M_r, each initialized with x̂(0|0), P(0|0) and driven by the measurements z_k.]
SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
• A Bayesian framework is used.
The prior probability that the system is in mode j (model Mj applies) is assumed given:
    rjZMP jj ,,1|0 0 
Z0 is the prior information and since the correct model is among the
assumed r possible models
  10
1

r
j
j
Two possible situations are considered:
1. No Switching between models during the scenario
2. Switching between models during the scenario
Return to Table of Content
293
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching between models during the scenario
Using the Bayes formulation, the posterior probability of model j being correct, given the measurement data Z_{1:k} up to k, is given by:
\mu_j(k) := P\{M_j \mid Z_{1:k}\} = P\{M_j \mid z_k, Z_{1:k-1}\} = \frac{P\{z_k \mid Z_{1:k-1}, M_j\}\,P\{M_j \mid Z_{1:k-1}\}}{P\{z_k \mid Z_{1:k-1}\}}
= \frac{P\{z_k \mid Z_{1:k-1}, M_j\}\,\mu_j(k-1)}{\sum_{i=1}^{r} P\{z_k \mid Z_{1:k-1}, M_i\}\,\mu_i(k-1)} = \frac{\Lambda_j(k)\,\mu_j(k-1)}{\sum_{i=1}^{r} \Lambda_i(k)\,\mu_i(k-1)}
with assumed a priori probabilities \mu_j(0) := P\{M_j \mid Z_0\},\ j = 1, \ldots, r.
P\{z_k \mid Z_{1:k-1}, M_j\} is the Likelihood Function Λ_j(k) of mode j at time k, which, under the linear-Gaussian assumptions, is given by:
\Lambda_j(k) := P\{z_k \mid Z_{1:k-1}, M_j\} = \mathcal{N}\left[i_j(k);\ 0,\ S_j(k)\right] = \frac{\exp\left(-\frac{1}{2}\,i_j^T(k)\,S_j^{-1}(k)\,i_j(k)\right)}{\left|2\pi S_j(k)\right|^{1/2}}
where
i_j(k) = z_k - H(M_j)\,\hat{x}_j(k|k-1)   is the Innovation of Filter M_j at time k
S_j(k) = H(M_j)\,P_j(k|k-1)\,H^T(M_j) + R(M_j)   is the Innovation Covariance of Filter M_j at time k
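A minimal sketch of this mode-probability recursion (assuming NumPy; the names are illustrative, not from the slides):

```python
# Minimal sketch: static multiple-model probability update
# mu_j(k) proportional to Lambda_j(k) * mu_j(k-1), with Gaussian likelihoods.
import numpy as np

def mode_probability_update(mu_prev, innovations, S_list):
    """mu_prev: prior mode probabilities; innovations / S_list: per-filter
    innovation i_j(k) and innovation covariance S_j(k)."""
    likelihoods = []
    for i, S in zip(innovations, S_list):
        norm = np.sqrt(np.linalg.det(2 * np.pi * S))
        likelihoods.append(np.exp(-0.5 * i @ np.linalg.solve(S, i)) / norm)
    mu = np.array(likelihoods) * np.asarray(mu_prev)
    return mu / mu.sum()
```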
294
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching between models during the scenario (continue – 1)
• Each filter M_j provides the mode-conditioned state estimate x̂_j(k|k), the associated mode-conditioned covariance P_j(k|k), and the innovation covariance S_j(k), or the likelihood function Λ_j(k), at time k.
[Figure: computation of μ_j(k) block diagram. Each of the r filters M_1, ..., M_r is initialized with x̂(k-1|k-1), P(k-1|k-1), processes the measurement z_k, and outputs x̂_j(k|k), P_j(k|k) and i_j(k), S_j(k); the mode probabilities μ_1(k), ..., μ_r(k) are updated recursively as]
\mu_j(k) = \frac{\mathcal{N}\left[i_j(k);\ 0,\ S_j(k)\right]\mu_j(k-1)}{\sum_{i=1}^{r} \mathcal{N}\left[i_i(k);\ 0,\ S_i(k)\right]\mu_i(k-1)},\qquad \mu_j(0)\ \text{given},\ j = 1, \ldots, r
295
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching between models during the scenario (continue – 2)
• We have r Gaussian estimates x̂_j(k|k); therefore, to obtain the estimate of the system state and its covariance, we can use the results for a Gaussian mixture with r terms to obtain the Overall State Estimate and its Covariance:
\hat{x}(k|k) = \sum_{j=1}^{r} \mu_j(k)\,\hat{x}_j(k|k)\qquad \text{(Gaussian mixture)}
P(k|k) = \sum_{j=1}^{r} \mu_j(k)\left\{P_j(k|k) + \left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T\right\}
[Figure: the r filters M_1, ..., M_r run in parallel on the measurements z_k, each from the common initial condition x̂(k-1|k-1), P(k-1|k-1); their outputs x̂_j(k|k), P_j(k|k), i_j(k), S_j(k) feed the μ_j(k) computation and the mixture combination above.]
296
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching between models during the scenario (continue – 3)
\hat{x}(k|k) = \sum_{j=1}^{r} \mu_j(k)\,\hat{x}_j(k|k)
P(k|k) = \sum_{j=1}^{r} \mu_j(k)\left\{P_j(k|k) + \left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T\right\}
The results are exact under the following assumptions:
1. The correct model is among the models considered.
2. The same model has been in effect from the initial time.
If the mode set includes the correct one and no jump occurs, then the probability of the true mode will converge to unity; that is, this approach yields consistent estimates of the system parameters. Otherwise the probability of the model "nearest" to the correct one will converge to unity.
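A minimal sketch of the Gaussian-mixture combination used above (assuming NumPy; illustrative names, not from the slides):

```python
# Minimal sketch: overall estimate and covariance of a Gaussian mixture.
import numpy as np

def mixture_estimate(mu, x_list, P_list):
    x = sum(m * xj for m, xj in zip(mu, x_list))
    P = sum(m * (Pj + np.outer(xj - x, xj - x))
            for m, xj, Pj in zip(mu, x_list, P_list))
    return x, P
```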
Return to Table of Content
297
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario
As before, the system is modeled by the equations:
x(k+1) = F\left[k+1, M(k+1)\right]x(k) + w\left[k, M(k+1)\right],\qquad w \sim \mathcal{N}\left[0;\ Q\left(M(k+1)\right)\right]
z(k) = H\left[k, M(k)\right]x(k) + v\left[k, M(k)\right],\qquad v \sim \mathcal{N}\left[0;\ R\left(M(k)\right)\right]
where M(k) denotes the model "at time k", in effect during the sampling period ending at k. Such systems are called Jump Linear Systems. The mode jump process is assumed left-continuous (i.e., the impact of the jump starts at t_k^+).
It is assumed that the mode (model) jump process is a Markov process with known mode transition probabilities. The probability of transition from M_i at k-1 to M_j at k is given by the Markov Chain:
p_{ji} := P\left\{M(k) = M_j \mid M(k-1) = M_i\right\}
Since all the possibilities are to jump from i to each of j = 1, ..., r (including j = i), we must have
\sum_{j=1}^{r} p_{ji} = \sum_{j=1}^{r} P\left\{M(k) = M_j \mid M(k-1) = M_i\right\} = 1
298
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 1)
In this way the number of model histories running at each new measurement k is:
k = 1: r models
k = 2: r² models, since each of the r models at k = 1 splits into r new models
k = 3: r³ models, since each of the r² models at k = 2 splits into r new models
...
At time k there are r^k model histories.
The number of models grows exponentially, making this approach impractical. The only way to avoid the exponentially increasing number of histories which have to be accounted for is to resort to suboptimal techniques.
Return to Table of Content
299
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 2)
The Interacting Multiple Model (IMM) Algorithm
In the IMM approach, at time k the state estimate is computed under each possible current model using r filters, with each filter using as its start condition (for time k-1) a different combination of the previous model-conditioned estimates (mixed initial conditions).
We assume a transition from M_i at k-1 to M_j at k with a predefined probability:
p_{ji} := P\left\{M(k) = M_j \mid M(k-1) = M_i\right\},\qquad \sum_{j=1}^{r} p_{ji} = 1
Define:
\hat{x}_i(k-1|k-1): filtered state estimate at scan k-1 for Kalman Filter Model i
P_i(k-1|k-1): covariance matrix at scan k-1 for Kalman Filter Model i
\mu_i(k-1): probability that the target performs as in model state i, as computed just after data is received on scan k-1
\mu_{j|i}(k-1): conditional probability that the target made the transition from state i to state j at scan k-1:
\mu_{j|i}(k-1) = \frac{P\{M(k) = M_j \mid M(k-1) = M_i\}\,P\{M(k-1) = M_i \mid Z_{1:k-1}\}}{\sum_{i=1}^{r} P\{M(k) = M_j \mid M(k-1) = M_i\}\,P\{M(k-1) = M_i \mid Z_{1:k-1}\}} = \frac{p_{ji}\,\mu_i(k-1)}{\sum_{i=1}^{r} p_{ji}\,\mu_i(k-1)}
300
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 3)
The Interacting Multiple Model (IMM) Algorithm (continue – 1)
Mixing: The IMM algorithm starts with the initial conditions x̂_i(k-1|k-1), i = 1, ..., r, from the filters M_i(k-1), assumed Gaussian distributed, and computes the mixed initial condition for the filter matched to M_j(k) according to
\hat{x}_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{j|i}(k-1)\,\hat{x}_i(k-1|k-1),\qquad j = 1, \ldots, r
For a mixed Gaussian distribution we obtain the covariance of the mixed initial conditions to be:
P_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{j|i}(k-1)\left\{P_i(k-1|k-1) + \left[\hat{x}_i(k-1|k-1) - \hat{x}_{0j}(k-1|k-1)\right]\left[\hat{x}_i(k-1|k-1) - \hat{x}_{0j}(k-1|k-1)\right]^T\right\}
where μ_{j|i}(k-1), the conditional probability that the target made the transition from state i to state j at scan k-1, is
\mu_{j|i}(k-1) = \frac{p_{ji}\,\mu_i(k-1)}{\sum_{i=1}^{r} p_{ji}\,\mu_i(k-1)}
301
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 4)
The Interacting Multiple Model (IMM) Algorithm (continue – 2)
The next step, as described before, is to run the r Kalman Filters and to calculate
\mu_j(k) = \frac{\Lambda_j(k)\,\bar{\mu}_j(k-1)}{\sum_{j=1}^{r} \Lambda_j(k)\,\bar{\mu}_j(k-1)},\qquad \bar{\mu}_j(k-1) := \sum_{i=1}^{r} p_{ji}\,\mu_i(k-1)
with assumed a priori probabilities \mu_j(0) := P\{M_j \mid Z_0\},\ j = 1, \ldots, r.
P\{z_k \mid Z_{1:k-1}, M_j\} is the Likelihood Function Λ_j(k) of mode j at time k, which, under the linear-Gaussian assumptions, is given by
\Lambda_j(k) := P\{z_k \mid Z_{1:k-1}, M_j\} = \mathcal{N}\left[i_j(k);\ 0,\ S_j(k)\right]
where i_j(k) = z_k - H(M_j)\,\hat{x}_{0j}(k|k-1) is the Innovation of Filter M_j at time k, and
S_j(k) = H(M_j)\,P_{0j}(k|k-1)\,H^T(M_j) + R(M_j) is the Innovation Covariance of Filter M_j at time k.
To obtain the estimate of the system state and its covariance we can use the results of a Gaussian mixture with r terms:
\hat{x}(k|k) = \sum_{j=1}^{r} \mu_j(k)\,\hat{x}_j(k|k)
P(k|k) = \sum_{j=1}^{r} \mu_j(k)\left\{P_j(k|k) + \left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T\right\}
302
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 5)
The Interacting Multiple Model (IMM) Algorithm (continue – 3)
IMM Estimation Algorithm Summary
• Interaction: mixing of the previous cycle's mode-conditioned state estimates and covariances, using the predefined mixing probabilities, to initialize the current cycle of each mode-conditioned filter with x̂_{0j}(k-1|k-1), P_{0j}(k-1|k-1):
\hat{x}_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{j|i}(k-1)\,\hat{x}_i(k-1|k-1),\qquad j = 1, \ldots, r
P_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{j|i}(k-1)\left\{P_i(k-1|k-1) + \left[\hat{x}_i(k-1|k-1) - \hat{x}_{0j}(k-1|k-1)\right]\left[\hat{x}_i(k-1|k-1) - \hat{x}_{0j}(k-1|k-1)\right]^T\right\}
• Mode-Conditioned Filtering: calculation of the state estimate and covariance x̂_j(k|k), P_j(k|k) conditioned on a mode being in effect, as well as the mode likelihood function Λ_j(k), for the r parallel filters.
• Probability Evaluation: computation of the mixing and the updated mode probabilities:
\mu_j(k) = \frac{1}{c}\,\Lambda_j(k)\sum_{i=1}^{r} p_{ji}\,\mu_i(k-1),\qquad \mu_j(0)\ \text{given},\ j = 1, \ldots, r\quad (c\ \text{is a normalization constant})
• Overall State Estimate and Covariance: combination of the latest mode-conditioned state estimates and covariances into x̂(k|k), P(k|k):
\hat{x}(k|k) = \sum_{j=1}^{r} \mu_j(k)\,\hat{x}_j(k|k)
P(k|k) = \sum_{j=1}^{r} \mu_j(k)\left\{P_j(k|k) + \left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]\left[\hat{x}_j(k|k) - \hat{x}(k|k)\right]^T\right\}
303
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 6)
The Interacting Multiple Model (IMM) Algorithm (continue – 4)
[Figure: IMM data flow for one cycle. The previous mode-conditioned estimates x̂_j(k-1|k-1), P_j(k-1|k-1) and mode probabilities μ_i(k-1) are combined through the mixing probabilities μ_{j|i}(k-1) = p_{ji} μ_i(k-1) / Σ_i p_{ji} μ_i(k-1) into the mixed initial conditions x̂_{0j}(k-1|k-1), P_{0j}(k-1|k-1); the r filters M_1, ..., M_r process the measurement z_k and output x̂_j(k|k), P_j(k|k) and i_j(k), S_j(k); the likelihoods update the μ_j(k); the overall output is the Gaussian mixture x̂(k|k), P(k|k).]
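A minimal sketch of one full IMM cycle (assuming NumPy; kalman_step is a hypothetical per-mode Kalman filter returning the mode-conditioned x_j, P_j, the innovation i_j and its covariance S_j; nothing here is from the slides):

```python
# Minimal sketch: one IMM cycle (interaction, mode filtering, probability
# evaluation, overall combination).
import numpy as np

def imm_cycle(z, mu, p_trans, x_list, P_list, kalman_step, models):
    """p_trans[j, i] = p_ji; mu[i] = mu_i(k-1)."""
    r = len(mu)
    x_out, P_out, lik = [], [], np.zeros(r)
    for j in range(r):
        # Interaction (mixing): mu_{j|i}(k-1) = p_ji mu_i / sum_i p_ji mu_i
        w = p_trans[j, :] * mu
        w = w / w.sum()
        x0 = sum(w[i] * x_list[i] for i in range(r))
        P0 = sum(w[i] * (P_list[i] + np.outer(x_list[i] - x0, x_list[i] - x0))
                 for i in range(r))
        # Mode-conditioned filtering (hypothetical helper)
        x_j, P_j, i_j, S_j = kalman_step(models[j], x0, P0, z)
        norm = np.sqrt(np.linalg.det(2 * np.pi * S_j))
        lik[j] = np.exp(-0.5 * i_j @ np.linalg.solve(S_j, i_j)) / norm
        x_out.append(x_j); P_out.append(P_j)
    # Probability evaluation: mu_j(k) ~ Lambda_j(k) * sum_i p_ji mu_i(k-1)
    mu_new = lik * (p_trans @ mu)
    mu_new = mu_new / mu_new.sum()
    # Overall state estimate and covariance (Gaussian mixture)
    x = sum(m * xj for m, xj in zip(mu_new, x_out))
    P = sum(m * (Pj + np.outer(xj - x, xj - x))
            for m, xj, Pj in zip(mu_new, x_out, P_out))
    return x, P, mu_new, x_out, P_out
```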
304
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 7)
The Interacting Multiple Model (IMM) Algorithm (continue – 5)
[Figure: IMM Algorithm block diagram. The estimates x̂_1(k-1|k-1), ..., x̂_r(k-1|k-1) and μ(k-1) enter the Interaction (Mixing) block (using the p_{ji}), which outputs x̂_{01}(k-1|k-1), ..., x̂_{0r}(k-1|k-1); the filters M_k^1, ..., M_k^r process z_k and feed the Model Probability Update (Λ_k^1, ..., Λ_k^r → μ_k) and the State Estimate Combination, which outputs x̂(k|k).]
305
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 8)
Bar-Shalom, Y., Fortmann, T.E., "Tracking and Data Association", Academic Press, 1988, pp. 233-237
Return to Table of Content
306
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 9)
The IMM-PDAF Algorithm
In cases where we want to detect a Target Maneuver while the Probability of Detection, P_D, is less than 1 and False Alarms are possible, we can combine the Interacting Multiple Model (IMM) Algorithm, which handles Target maneuvers, with the Probabilistic Data Association Filter (PDAF), which deals with False Alarms, giving the IMM-PDAF Algorithm.
This is done by replacing the Kalman Filter models of the IMM with PDAF models.
[Figure: IMM-PDAF Algorithm block diagram, identical in structure to the IMM diagram, with each Kalman Filter module M_k^j replaced by a PDAF module.]
307
SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 10)
The IMM-PDAF Algorithm (continue – 1)
The steps of IMM-PDAF are as follows:
Step 1: Mixing Initial Conditions
The IMM algorithm starts with the initial conditions $\hat{x}_i(k-1|k-1)$,
$i=1,\dots,r$, from the filters $M_i(k-1)$, assumed Gaussian distributed, and computes
the mixed initial condition for the filter matched to $M_j(k)$ according to

$$\hat{x}_{0j}(k-1|k-1)=\sum_{i=1}^{r}\mu_{i|j}(k-1)\,\hat{x}_i(k-1|k-1),\qquad j=1,\dots,r$$

where $\mu_{i|j}(k-1)$ is the conditional probability that the target made the
transition from state $i$ to state $j$ at scan $k-1$:

$$\mu_{i|j}(k-1)=P\{M_i(k-1)\,|\,M_j(k),Z_{1:k-1}\}
=\frac{P\{M_j(k)\,|\,M_i(k-1)\}\,P\{M_i(k-1)\,|\,Z_{1:k-1}\}}{\sum_{i=1}^{r}P\{M_j(k)\,|\,M_i(k-1)\}\,P\{M_i(k-1)\,|\,Z_{1:k-1}\}}
=\frac{p_{ij}\,\mu_i(k-1)}{\sum_{i=1}^{r}p_{ij}\,\mu_i(k-1)}$$

For the mixed Gaussian distribution we obtain the covariance of the mixed initial
conditions:

$$P_{0j}(k-1|k-1)=\sum_{i=1}^{r}\mu_{i|j}(k-1)\Big[P_i(k-1|k-1)+\big(\hat{x}_i-\hat{x}_{0j}\big)\big(\hat{x}_i-\hat{x}_{0j}\big)^T\Big]$$
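A minimal sketch of this mixing step, assuming the previous-cycle mode-conditioned
estimates and the transition matrix are already available (all names illustrative):

```python
import numpy as np

def imm_mixing(x_hats, Ps, mu_prev, P_trans):
    """Step 1 of IMM(-PDAF): mixed initial conditions for each mode-matched filter.

    x_hats  : list of r state estimates x̂_i(k-1|k-1)
    Ps      : list of r covariances P_i(k-1|k-1)
    mu_prev : mode probabilities μ_i(k-1)
    P_trans : r×r transition matrix, P_trans[i, j] = p_ij
    """
    r = len(x_hats)
    c_j = P_trans.T @ mu_prev                      # normalizers Σ_i p_ij μ_i(k-1)
    mu_cond = P_trans * mu_prev[:, None] / c_j     # μ_{i|j}(k-1)
    x0, P0 = [], []
    for j in range(r):
        xj = sum(mu_cond[i, j] * x_hats[i] for i in range(r))
        Pj = np.zeros_like(Ps[0])
        for i in range(r):
            d = (x_hats[i] - xj).reshape(-1, 1)
            Pj += mu_cond[i, j] * (Ps[i] + d @ d.T)
        x0.append(xj)
        P0.append(Pj)
    return x0, P0, mu_cond
```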
308
SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 11)
The IMM-PDAF Algorithm (continue – 2)
The steps of IMM-PDAF are as follows:
Step 2: Mode Conditioning PDAF
From the r PDAF models we must obtain the likelihood functions

$$\Lambda_i(k):=p\big(Z_k\,|\,M_i(k),m_k,Z_{1:k-1}\big),\qquad i=1,\dots,r$$

for each Model i. But at the PDAF we found that for the Model i:

$$p\big(Z_k|M_i(k),m_k,Z_{1:k-1}\big)=\sum_{j=0}^{m_k}p\big(Z_k|\theta_{ji},M_i(k),m_k,Z_{1:k-1}\big)\,P\big(\theta_{ji}|M_i(k),m_k,Z_{1:k-1}\big)$$

$$p\big(Z_k|\theta_{ji},M_i(k),m_k,Z_{1:k-1}\big)=\begin{cases}
V_k^{-m_k+1}\,P_{G_i}^{-1}\,\dfrac{\exp\{-i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)/2\}}{\sqrt{|2\pi S_i(k)|}} & j=1,\dots,m_k\\[2mm]
V_k^{-m_k} & j=0\end{cases}$$

$$P\big(\theta_{ji}|M_i(k),m_k,Z_{1:k-1}\big)=\begin{cases}
\dfrac{P_{D_i}P_{G_i}}{P_{D_i}P_{G_i}\,m_k+(1-P_{D_i}P_{G_i})\,\lambda V_k} & j=1,\dots,m_k\\[2mm]
\dfrac{(1-P_{D_i}P_{G_i})\,\lambda V_k}{P_{D_i}P_{G_i}\,m_k+(1-P_{D_i}P_{G_i})\,\lambda V_k} & j=0\end{cases}$$

where the gate volume is $V_{k,i}=c_{n_z}\,\gamma^{n_z/2}\,|S_i(k)|^{1/2}$ (with
$c_{n_z}$ the volume of the $n_z$-dimensional unit hypersphere), and

$$i_{ji}(k):=z_j(k)-\hat{z}_i(k|k-1),\qquad
S_i(k):=H_i(k)\,P_i(k|k-1)\,H_i^T(k)+R_i(k)$$
309
SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 12)
The IMM-PDAF Algorithm (continue – 3)
The steps of IMM-PDAF are as follows:
Step 2: Mode Conditioning PDAF (continue – 1)
From the r PDAF models we obtain the likelihood functions $\Lambda_i(k)$,
$i=1,\dots,r$, for each Model i:

$$\Lambda_i(k)=p\big(Z_k|M_i(k),m_k,Z_{1:k-1}\big)
=\frac{V_k^{-m_k+1}}{P_{D_i}P_{G_i}\,m_k+(1-P_{D_i}P_{G_i})\,\lambda V_k}\;
\frac{P_{D_i}}{\sqrt{|2\pi S_i(k)|}}\Big[\sum_{j=1}^{m_k}e_{ji}(k)+b_i\Big]$$

where

$$e_{ji}(k):=\exp\big\{-i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)/2\big\},\qquad
b_i:=\frac{(1-P_{D_i}P_{G_i})\,\lambda\,\sqrt{|2\pi S_i(k)|}}{P_{D_i}}$$

$$i_{ji}(k):=z_j(k)-\hat{z}_i(k|k-1),\qquad
S_i(k):=H_i(k)\,P_i(k|k-1)\,H_i^T(k)+R_i(k)$$

and $V_{k,i}=c_{n_z}\,\gamma^{n_z/2}\,|S_i(k)|^{1/2}$ is the gate volume.
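A minimal sketch of one way to evaluate this mode likelihood, assuming the parametric
(Poisson-clutter) PDAF form reconstructed above; the argument names are illustrative:

```python
import numpy as np

def pdaf_likelihood(nu, S, P_D, P_G, lam, V, m_k):
    """Mode likelihood Λ_i(k) for one PDAF module.

    nu  : (m_k, n_z) innovations i_ji(k) of the validated measurements
    S   : (n_z, n_z) innovation covariance S_i(k)
    lam : clutter spatial density λ;  V : gate volume V_k
    """
    Sinv = np.linalg.inv(S)
    norm = np.sqrt(np.linalg.det(2 * np.pi * S))
    e = np.array([np.exp(-0.5 * v @ Sinv @ v) for v in nu])   # e_ji(k)
    b = (1.0 - P_D * P_G) * lam * norm / P_D                  # b_i
    D = P_D * P_G * m_k + (1.0 - P_D * P_G) * lam * V         # normalizer
    return V ** (-m_k + 1) * P_D * (b + e.sum()) / (norm * D)
```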
310
SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 13)
The IMM-PDAF Algorithm (continue – 4)
The steps of IMM-PDAF are as follows:
Step 3: Probability Evaluation
Computation of the mixing and the updated mode probabilities μ_j(k), given μ_j(0) and
the transition probabilities p_lj, j, l = 1,…,r:

$$\mu_j(k)=\frac{\Lambda_j(k)\sum_{l=1}^{r}p_{lj}\,\mu_l(k-1)}{\sum_{i=1}^{r}\Lambda_i(k)\sum_{l=1}^{r}p_{li}\,\mu_l(k-1)},\qquad j=1,\dots,r$$

Step 4: Overall State Estimate and Covariance

Combination of the latest mode-conditioned state estimates and covariances into
$\hat{x}(k|k)$, $P(k|k)$:

$$\hat{x}(k|k)=\sum_{j=1}^{r}\mu_j(k)\,\hat{x}_j(k|k)$$
$$P(k|k)=\sum_{j=1}^{r}\mu_j(k)\Big[P_j(k|k)+\big(\hat{x}_j(k|k)-\hat{x}(k|k)\big)\big(\hat{x}_j(k|k)-\hat{x}(k|k)\big)^T\Big]$$
Return to Table of Content
311
Elements of a Basic MTT System
SOLO
Multi-Target Tracking (MTT) Systems
The task of tracking n targets can require substantially more computational resources
than n times the resources for tracking a single target, because it is difficult to
establish the correspondence between observations and targets (Data Association).
Uncertainties in tracking targets:
• Uncertainties associated with the measurements (target origin).
• Inaccuracies due to sensor performance (resolution, noise, …).
[Figure – two targets (Tgt. 1, Tgt. 2) and two measurements (Measurement 1,
Measurement 2) with ambiguous assignment.]
1. Measurement 1 from target 1 & Measurement 2 from target 2
2. Measurement 1 from target 2 & Measurement 2 from target 1
3. None of the above (False Alarm)
Hypotheses:
312
Elements of a Basic MTT System
SOLO
Multi-Target Tracking (MTT) Systems
[Figure – three association hypotheses for Measurements 1 and 2 over scans t1, t2, t3:
Association Hypothesis 1, Association Hypothesis 2 and Association Hypothesis 3 each
assign the two measurement sequences to the targets in a different way.]
313
Elements of a Basic MTT System
SOLO
Alignment: Referencing of sensor data to a common time and spatial origin.
Association: Using a metric to compare tracks and data reports from different
sensors to determine candidates for the fusion process.
Correlation: Processing of the tracks and reports resulting from association
to determine if they belong to a common object and thus aid in
detecting, classifying and tracking the objects of interest.
Estimation: Predicting an object’s future position by updating the state vector
and error covariance matrix using the results of the correlation
process.
Classification: Assessing the tracks and object discrimination data to determine
target type, lethality, and threat priority.
Cueing: Feedback of threshold, integration time, and other data processing
parameters or information about areas over which to conduct a more
detailed search, based on the results of the fusion process.
Return to Table of Content
314
[Figure – elements of a basic MTT system: Input Data → Sensor Data Processing and
Measurement Formation → Observation-to-Track Association → Track Maintenance
(Initialization, Confirmation and Deletion) → Filtering and Prediction → Gating
Computations. Inset: two trajectories j = 1, 2 with predicted measurements
$\hat{z}_j(t_k|t_{k-1})$, gates $S_j(t_k)$ and measurements $z_1,z_2,z_3$ at scan k.]

Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986
Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999
SOLO
Joint Probabilistic Data Association Filter (JPDAF)
The JPDAF method is identical to the PDA except that
the association probabilities β are computed using
all observations and all tracks.
Gating and Data Association
In the PDA we dealt with only one target (track).
JPDAF deals with a known number of targets
(multiple targets).
Both PDA and JPDAF are of target-oriented type, i.e.,
the probability that a measurement belongs to an established
target (track) is evaluated.
315
SOLO
Joint Probabilistic Data Association Filter (JPDAF) (continue – 1)
Assumptions of JPDAF:
Gating and Data Association
• There are several targets to be tracked in the presence of false measurements.
• The number of targets r is known.
• The track of each target has been initialized.
• The state equations of the targets are not necessarily the same.
• The validation regions of these target can intersect and have
common measurements.
• A target can give rise to at most one measurement – no multipath.
• The detection of a target occurs independently over time and
from another target according to a known probability.
• A measurement could have originated from at most one target (or none) – no unresolved
measurements are considered here.
• The conditional pdf of each target's state given the past measurements is assumed
Gaussian (a quasi-sufficient statistic that summarizes the past) and independent
across targets, with

$$x_j(k-1)\sim\mathcal{N}\big(\hat{x}_j(k-1|k-1),\,P_j(k-1|k-1)\big),\qquad j=1,\dots,r$$

available from the previous cycle of the filter.
• With the past summarized by an approximate sufficient statistics, the association
probabilities are computed (only for the latest measurements) jointly across the
measurement and the targets.
316
SOLO
Joint Probabilistic Data Association Filter (JPDAF) (continue -2)
• At the current time k we define the set of validated measurements:

$$Z(k)=\{z_i(k)\}_{i=1}^{m_k}$$

Example: From the Figure we can see 3 measurements ($m_k=3$):
$Z(k)=\{z_1(k),z_2(k),z_3(k)\}$

• We also have r predefined targets (tracks), i = 1,…,r.
Example: From the Figure we can see 2 tracks (r = 2).

• From the validated measurements and their position relative to the track gates we
define the Validation Matrix Ω, which consists of binary elements (0 or 1) indicating
whether measurement j has been validated for track i (is inside Gate i). Index i = 0
(no track) indicates a false-alarm (clutter) origin, which is possible for each
measurement.

Example: From the Figure (columns: tracks i = 0, 1, 2; rows: measurements j = 1, 2, 3):

$$\Omega=[\omega_{ji}]=\begin{bmatrix}1&1&0\\1&1&1\\1&0&1\end{bmatrix}$$

Measurement 1 can be a FA or due to track 1, not track 2
Measurement 2 can be a FA or due to track 1 or track 2
Measurement 3 can be a FA or due to track 2, not track 1
317
SOLO
Joint Probabilistic Data Association Filter (JPDAF) (continue -3)
Gating and Data Association
• Define the Joint Association Events θ (Hypotheses) using the Validation Matrix
$\hat{\Omega}(\theta)=[\hat{\omega}_{ji}(\theta)]$.

Example: From the Figure, with the Validation Matrix (columns: tracks 0, 1, 2; rows:
measurements 1, 2, 3)

$$\Omega=\begin{bmatrix}1&1&0\\1&1&1\\1&1&1\end{bmatrix}$$

the feasible joint events are:

Hypothesis | Track 1 | Track 2 | Comments
    1      |    0    |    0    | All measurements are False Alarms
    2      |    1    |    0    | Measurement #1 due to target #1, others are F.A.
    3      |    2    |    0    | Measurement #2 due to target #1, others are F.A.
    4      |    3    |    0    | Measurement #3 due to target #1, others are F.A.
    5      |    0    |    2    | Measurement #2 due to target #2, others are F.A.
    6      |    1    |    2    | Measurement #1 due to target #1, #2 due to target #2.
    7      |    3    |    2    | Measurement #3 due to target #1, #2 due to target #2.
    8      |    0    |    3    | Measurement #3 due to target #2, others are F.A.
    9      |    1    |    3    | Measurement #1 due to target #1, #3 due to target #2.
   10      |    2    |    3    | Measurement #2 due to target #1, #3 due to target #2.

Those are all the Hypotheses (exhaustive) defined by the Validation Matrix (or Figure).
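A short sketch of how these joint events can be enumerated mechanically from the
validation matrix; the function name is illustrative, and the count reproduces the
ten hypotheses of the table above:

```python
import numpy as np
from itertools import product

def joint_events(omega):
    """Enumerate all feasible joint association events from a validation matrix.

    omega[j, i] = 1 if measurement j is validated for track i (column 0 = false
    alarm).  A feasible event assigns each measurement one validated origin, with
    every real track (columns 1..r) used by at most one measurement.
    """
    m, _ = omega.shape
    choices = [np.nonzero(omega[j])[0] for j in range(m)]   # allowed origins per meas.
    events = []
    for assign in product(*choices):
        tracks = [a for a in assign if a > 0]
        if len(tracks) == len(set(tracks)):                 # no track used twice
            events.append(assign)
    return events

# Validation matrix of this slide's example (3 measurements; tracks 0=FA, 1, 2):
Omega = np.array([[1, 1, 0],
                  [1, 1, 1],
                  [1, 1, 1]])
print(len(joint_events(Omega)))   # -> 10 hypotheses, as in the table above
```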
318
SOLO
Joint Probabilistic Data Association Filter (JPDAF) (continue -4)
Gating and Data Association
• Define the Joint Association Events θ (Hypotheses) using the Validation Matrix
$\hat{\Omega}(\theta)=[\hat{\omega}_{ji}(\theta)]$.

Example: the ten exhaustive hypotheses defined by the Validation Matrix (or Figure),
arranged by observation (rows) versus hypothesis number (columns):

        hypothesis number l
obs    1   2   3   4   5   6   7   8   9   10
O1     0   T1  0   0   0   T1  0   0   T1  0
O2     0   0   T1  0   T2  T2  T2  0   0   T1
O3     0   0   0   T1  0   0   T1  T2  T2  T2
319
SOLO
We have n stored tracks with predicted measurements and innovation covariances at
scan k given by:

$$\hat{z}_j(k|k-1),\;S_j(k),\qquad j=1,\dots,n$$

At scan k+1 we have m sensor reports (no more than one report per target):

$$Z_k=\{z_1,\dots,z_m\}\quad\text{— the set of all sensor reports on scan }k$$

H – a particular hypothesis (from a complete set S of hypotheses) connecting r(H)
tracks to r measurements. We want to compute:

$$P(H|Z_k)=\frac{P(Z_k|H)\,P(H)}{\sum_{H\in S}P(Z_k|H)\,P(H)}=\frac{1}{c}\,P(Z_k|H)\,P(H)$$
320
SOLO
We have several tracks defined by the predicted measurements and innovation
covariances $\hat{z}_j(k|k-1),\,S_j(k),\;j=1,\dots,n$.

Not all the measurements are from a real target; some are False Alarms. The common
mathematical model for such false measurements is that they are:
• uniformly spatially distributed
• independent across time
• the residual clutter (the constant clutter, if any, is not considered).

The probability of m false alarms in the search volume V, in terms of their spatial
density λ, is given by a Poisson distribution:

$$P_{FA}(m)=e^{-\lambda V}\,\frac{(\lambda V)^m}{m!}$$

where m is the number of measurements in scan k+1. Because of the uniform spatial
distribution in the search volume, we have:

$$p\big(z_i\,|\,\text{False Alarm or New Target}\big)=\frac{1}{V}$$

We can use different spatial densities for false alarms (λ_FA) and for new targets (λ_NT).
321
SOLO
H – a particular hypothesis (from a complete set S of hypotheses) connecting r(H)
tracks to r measurements and assuming m−r false alarms or new targets,
$Z_k=\{z_1,\dots,z_m\}$.

$$P(H|Z_k)=\frac{P(Z_k|H)\,P(H)}{\sum_{H\in S}P(Z_k|H)\,P(H)}=\frac{1}{c}\,P(Z_k|H)\,P(H)$$

P(Z_k|H) – probability of the measurements given that hypothesis H is true. With
independent measurements:

$$P(Z_k|H)=P(z_1,\dots,z_m|H)=\prod_{j=1}^{m_k}p(z_j|H)$$

where

$$p(z_j|H)=\begin{cases}
\dfrac{1}{V} & \text{if measurement }j\text{ is a False Alarm or New Target}\\[2mm]
\mathcal{N}\big(z_j;\hat{z}_i,S_i\big)=\dfrac{\exp\{-(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)/2\}}{\sqrt{|2\pi S_i|}} & \text{measurement }j\text{ connected to track }i
\end{cases}$$

so that

$$P(Z_k|H)=\prod_{l=1}^{m_k}p(z_l|H)=\Big(\frac{1}{V}\Big)^{m_k-r}\prod_{(j,i)\in H}\frac{\exp\{-(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)/2\}}{\sqrt{|2\pi S_i|}}$$
322
SOLO
     
   
   HPHZP
cHPHZP
HPHZP
ZHP k
SH
k
k
k |
1
|
|
| 

P (H) – probability of hypothesis H connecting tracks i1,…,ir
to measurements j1,…,jr from mk sensor reports:
     kkFA
tracks
r
tracks
r
tsmeasuremen
r mPrmPiiPiijjPHP 






















 ,,,,|,, 111
   
 
!
!
11
1
,,|,, 11
m
rm
rmmm
iijjP
tracks
r
tsmeasuremen
r
















probability of connecting tracks i1,…,ir
to measurements j1,…,jr
 





 DetectingNot
m
iii
i
D
ii
Detecting
r
D
tracks
r
k
r
i
r
j
PPiiP 











,,
1
,,
1
1
1
1
1,,


probability of detecting only i1,…,ir
targets
     
 
V
k
rm
kFAkFA e
rm
V
rmrmP
k

 



!
for (m-r) False Alarms or New Targets assume Poisson
Distribution with density λ over search volume V of
(mk-r) reports
 kmP probability of exactly mk reports
where:
Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue - 8)
323
SOLO
$$P(H|Z_k)=\frac{1}{c}\,P(Z_k|H)\,P(H)$$

where:

$$P(Z_k|H)=\prod_{l=1}^{m_k}p(z_l|H)=\Big(\frac{1}{V}\Big)^{m_k-r}\prod_{(j,i)\in H}\frac{\exp\{-(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)/2\}}{\sqrt{|2\pi S_i|}}$$

$$P(H)=\frac{(m_k-r)!}{m_k!}\;\prod_{i\in\{i_1,\dots,i_r\}}P_D^{i}\prod_{i\notin\{i_1,\dots,i_r\}}\big(1-P_D^{i}\big)\;e^{-\lambda V}\,\frac{(\lambda V)^{m_k-r}}{(m_k-r)!}\;P(m_k)$$

so that

$$P(H|Z_k)=\frac{e^{-\lambda V}\,P(m_k)}{c\;m_k!}\;\lambda^{m_k-r}\prod_{(j,i)\in H}\frac{\exp\{-(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)/2\}}{\sqrt{|2\pi S_i|}}\;\prod_{\text{Detecting}}P_D^{i}\prod_{\text{Not Detecting}}\big(1-P_D^{i}\big)$$

The factor $e^{-\lambda V}P(m_k)/m_k!$ is common to all hypotheses and is absorbed
into the normalization constant.
324
SOLO
The probability of each hypothesis is given by:

$$P(H|Z_k)=\frac{1}{c'}\;\lambda^{m_k-r}\prod_{(j,i)\in H}g_{ji}\;\prod_{\text{Detecting}}P_D^{i}\prod_{\text{Not Detecting}}\big(1-P_D^{i}\big)$$

Define (i – track index, j – measurement index):

$$g_{ji}:=\frac{\exp\{-(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)/2\}}{\sqrt{|2\pi S_i|}}$$

Example: number of observations $m_k=3$ with equal $P_D$:

Hyp. | Track 1 | Track 2 | Confirmed Tracks r | FA m_k−r | Hypothesis Probability
  1  |    0    |    0    |         0          |    3     | P(H1|Zk)  = (1−PD)² λ³ /c'
  2  |    1    |    0    |         1          |    2     | P(H2|Zk)  = g11 PD (1−PD) λ² /c'
  3  |    2    |    0    |         1          |    2     | P(H3|Zk)  = g21 PD (1−PD) λ² /c'
  4  |    3    |    0    |         1          |    2     | P(H4|Zk)  = g31 PD (1−PD) λ² /c'
  5  |    0    |    2    |         1          |    2     | P(H5|Zk)  = g22 PD (1−PD) λ² /c'
  6  |    1    |    2    |         2          |    1     | P(H6|Zk)  = g11 g22 PD² λ /c'
  7  |    3    |    2    |         2          |    1     | P(H7|Zk)  = g31 g22 PD² λ /c'
  8  |    0    |    3    |         1          |    2     | P(H8|Zk)  = g32 PD (1−PD) λ² /c'
  9  |    1    |    3    |         2          |    1     | P(H9|Zk)  = g11 g32 PD² λ /c'
 10  |    2    |    3    |         2          |    1     | P(H10|Zk) = g21 g32 PD² λ /c'

c' is defined by requiring: P(H1|Zk) + … + P(H10|Zk) = 1.
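A minimal sketch of this hypothesis-probability evaluation for the equal-P_D example,
assuming the joint events are enumerated as shown earlier; all names are illustrative:

```python
import numpy as np

def hypothesis_probabilities(events, g, P_D, lam, n_tracks=2):
    """Normalized JPDAF hypothesis probabilities P(H_l|Z_k), equal P_D per track.

    events : list of tuples, events[l][j] = track index assigned to measurement
             j+1 (0 = false alarm)
    g      : dict mapping (j, i) -> Gaussian likelihood g_ji of measurement j
             under track i (1-based indices)
    """
    m_k = len(events[0])
    p = []
    for ev in events:
        r = sum(1 for a in ev if a > 0)               # number of detected tracks
        w = lam ** (m_k - r) * P_D ** r * (1 - P_D) ** (n_tracks - r)
        for j, i in enumerate(ev, start=1):
            if i > 0:
                w *= g[(j, i)]                        # measurement j on track i
        p.append(w)
    p = np.array(p)
    return p / p.sum()                                # c' enforces Σ_l P(H_l|Z_k)=1
```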
325
SOLO
For each track i and measurement j (event θ_ji) compute the association probability
β_ji. Since the hypotheses H_l are exhaustive and exclusive, we can apply the Total
Probability Theorem:

$$\beta_{ji}:=P(\theta_{ji}|Z_k)=\sum_{l}P(H_l|Z_k)\,P(\theta_{ji}|H_l),\qquad
P(\theta_{ji}|H_l)=\begin{cases}1 & \theta_{ji}\in H_l\\0 & \theta_{ji}\notin H_l\end{cases}$$

(i – track index, j – measurement index; note $\sum_l P(H_l|Z_k)=1$.)

Example ($m_k=3$ observations, equal $P_D$, hypotheses of the previous slide):

Track 1:
β01 = P(H1|Zk) + P(H5|Zk) + P(H8|Zk)
β11 = P(H2|Zk) + P(H6|Zk) + P(H9|Zk)
β21 = P(H3|Zk) + P(H10|Zk)
β31 = P(H4|Zk) + P(H7|Zk)

Track 2:
β02 = P(H1|Zk) + P(H2|Zk) + P(H3|Zk) + P(H4|Zk)
β12 = 0
β22 = P(H5|Zk) + P(H6|Zk) + P(H7|Zk)
β32 = P(H8|Zk) + P(H9|Zk) + P(H10|Zk)
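A short sketch of the β summation over hypotheses, matching the bookkeeping above
(function and variable names are illustrative):

```python
def association_probabilities(events, probs, n_tracks):
    """β_ji = Σ_l P(H_l|Z_k) over hypotheses H_l that contain event θ_ji.

    events[l][j-1] = track assigned to measurement j in hypothesis l (0 = FA);
    probs[l] = P(H_l|Z_k).  Returns beta[i][j], where j = 0 means 'no measurement
    from track i' (β_0i).
    """
    m_k = len(events[0])
    beta = {i: {j: 0.0 for j in range(m_k + 1)} for i in range(1, n_tracks + 1)}
    for ev, p in zip(events, probs):
        for i in range(1, n_tracks + 1):
            if i in ev:                      # track i associated to some measurement
                j = ev.index(i) + 1
                beta[i][j] += p
            else:                            # track i undetected in this hypothesis
                beta[i][0] += p
    return beta
```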
326
SOLO
Summary:

• Calculation of Innovation and Measurement Validation for each measurement versus
each track — for measurements $z_j(k)$, $j=1,2,\dots,m_k$, and tracks $i=1,\dots,r$:

$$i_{ji}(k)=z_j(k)-\hat{z}_i(k|k-1),\qquad
d_{ji}^2=i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)\le\gamma\quad(\gamma\text{ from }P_G,\,n_z)$$

• Definition of all Hypotheses (exhaustive & exclusive):

        hypothesis number l
obs    1   2   3   4   5   6   7   8   9   10
O1     0   T1  0   0   0   T1  0   0   T1  0
O2     0   0   T1  0   T2  T2  T2  0   0   T1
O3     0   0   0   T1  0   0   T1  T2  T2  T2

• Computation of Hypotheses Probabilities:

$$P(H_l|Z_k)=\frac{1}{c'}\;\lambda^{m_k-r}\prod_{(j,i)\in H_l}g_{ji}\;\prod_{\text{Detecting}}P_D^{i}\prod_{\text{Not Detecting}}\big(1-P_D^{i}\big),\qquad\sum_l P(H_l|Z_k)=1$$
327
SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue -13)
Summary (continue – 1):
• Compute the Combined Innovation for each Track:
$$i_i(k)=\sum_{j=1}^{m_k}\beta_{ji}\,i_{ji}(k),\qquad i=1,\dots,r$$

• Covariance Prediction for each Track:
$$P_i(k|k-1)=F_i(k-1)\,P_i(k-1|k-1)\,F_i^T(k-1)+Q_i(k-1),\qquad i=1,\dots,r$$

• Innovation Covariance for each Track:
$$S_i(k)=H_i(k)\,P_i(k|k-1)\,H_i^T(k)+R_i(k),\qquad i=1,\dots,r$$

• For each track i and measurement j (event θ_ji) compute the association probability:
$$\beta_{ji}:=P(\theta_{ji}|Z_k)=\sum_l P(H_l|Z_k)\,P(\theta_{ji}|H_l),\qquad
P(\theta_{ji}|H_l)=\begin{cases}1 & \theta_{ji}\in H_l\\0 & \theta_{ji}\notin H_l\end{cases}$$
(i – track index, j – measurement index)

• Filter Gain for each Track:
$$K_i(k)=P_i(k|k-1)\,H_i^T(k)\,S_i^{-1}(k),\qquad i=1,\dots,r$$

• Update State Estimation for each Track:
$$\hat{x}_i(k|k)=\hat{x}_i(k|k-1)+K_i(k)\,i_i(k),\qquad i=1,\dots,r$$

• Update State Covariance for each Track:
$$P_i(k|k)=\beta_{0i}\,P_i(k|k-1)+(1-\beta_{0i})\,\big[I-K_i(k)H_i(k)\big]P_i(k|k-1)+dP_i(k)$$
$$dP_i(k)=K_i(k)\Big[\sum_{j=1}^{m_k}\beta_{ji}\,i_{ji}(k)\,i_{ji}^T(k)-i_i(k)\,i_i^T(k)\Big]K_i^T(k),\qquad i=1,\dots,r$$
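A minimal sketch of one PDAF-style track update implementing the summary equations
above, assuming at least one validated measurement (names are illustrative):

```python
import numpy as np

def pdaf_update(x_pred, P_pred, H, R, z_list, beta):
    """One JPDAF/PDAF update for a single track i.

    z_list : validated measurements for this track (assumed non-empty)
    beta   : association probabilities, beta[0] = β_0i (no measurement from
             this track), beta[j] = β_ji for measurement j = 1..m
    """
    S = H @ P_pred @ H.T + R                       # innovation covariance S_i(k)
    K = P_pred @ H.T @ np.linalg.inv(S)            # gain K_i(k)
    nus = [z - H @ x_pred for z in z_list]         # innovations i_ji(k)
    nu = sum(beta[j + 1] * v for j, v in enumerate(nus))   # combined innovation
    x = x_pred + K @ nu
    Pc = (np.eye(len(x_pred)) - K @ H) @ P_pred    # standard-update covariance
    spread = sum(beta[j + 1] * np.outer(v, v) for j, v in enumerate(nus)) \
             - np.outer(nu, nu)                    # effect of measurement origin
    P = beta[0] * P_pred + (1 - beta[0]) * Pc + K @ spread @ K.T
    return x, P
```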
328
SOLO
 1|1  kkPi
State Covariance
         kRkHkkPkHkS i
T
iiii  1|
Innovation Covariance
         111|111|  kQkFkkPkFkkP i
T
iiii
Covariance of Predicted State
       kSkHkkPkK i
T
iii
1
1|


Filter Gain
             kKkikikikikKkPd
T
i
T
ii
m
itrackj
j
T
jijijiii
k










 


&
1

Effect of Measurement Origin
on State Covarriance
Update State Covariance
   
          kPdkkPkHkKI
kkPkkP
iiiii
iii


1|1
1||
0
0


Update State Estimation
       kikKkkxkkx iiii  1||

 kSi
 ki ji
ji
 kii
 kz i
One Cycle of
JPDAF for Track i
Measurements
Evaluation of Association Probabilities
Predicted Measurements
     1|1|ˆ  kkxkHkkz iii

Calculation of Innovation and
Measurement Validation
State Estimation for Track i
 1|1  kkxi

Predicted State of Track i
     1|111|  kkxkFkkx iii

 
     
     
 zGji
jii
T
jiji
ijji
kj
nPd
kikSkid
kkzkzki
riFor
mjkz
,
1|ˆ
,,1
,,2,1
2
12








Definition of all Hypotheses H
and their Probabilities
     
l
ljiklkjiji HPZHPZP  ||:
 
      

   
 DetectingNot
m
iii
i
D
ii
Detecting
r
D
rm
r
g
i
iji
T
ij
k
k
r
i
r
j
k
ji
PP
S
zzSzz
c
ZHP 








,,
1
,,
11
1
1
1
1
2
2/ˆˆexp
'
1
|





Combined Innovation
   



km
itrackj
j
jijii kiki
1

Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue -14)
Return to Table of Content
329
SOLO Multi Hypothesis Tracking (MHT)
Assumptions of MHT
• There are several targets to be tracked in the presence of false measurements.
• The number of targets r is unknown.
• The track of each target has to be initialized.
• The state equations of the targets are the same.
• The validation regions of these targets can intersect and have
common measurements.
• A target can give rise to at most one measurement – no multipath.
• The detection of a target occurs independently over time and
from another target according to a known probability.
• A measurement could have originated from at most one target (or none) – no unresolved
measurements are considered here.
• The conditional pdf of each target's state given the past measurements is assumed
Gaussian (a quasi-sufficient statistic that summarizes the past) and independent
across targets, with

$$x_j(k-1)\sim\mathcal{N}\big(\hat{x}_j(k-1|k-1),\,P_j(k-1|k-1)\big),\qquad j=1,\dots,r$$

available from the previous cycle of the filter.
• The origin of each sequence of measurements is considered.
• At each sampling time any measurement can originate from:
- an established track
- a new target (with a Poisson spatial density λNT)
- a false alarm (with a Poisson spatial density λFA)
330
SOLO Multi Hypothesis Tracking (MHT)
MHT Algorithm Steps
• The Hypotheses of the current time are obtained from:
• The Set of Hypotheses at the previous time augmented with
• All the Feasible Associations of the Present Measurements (Extensive and Exhaustive).
• The Probability of each Hypothesis is evaluated assuming:
- measurements associated with a track are Gaussian distributed around the predicted
location of the corresponding track's measurement.
- false measurements are uniformly distributed in the surveillance region and appear
according to a fixed-rate (λFA) Poisson process.
- new targets are uniformly distributed in the surveillance region (or according to
some other PDF) and appear according to a fixed-rate (λNT) Poisson process.
• The State Estimation for each Hypothesized Track is obtained from a Standard Filter.
• The selection of the Most Probable Hypothesis amounts to an Exhaustive Search
over the Set of All Feasible Hypotheses.
• An elaborate Hypothesis Management is needed.
331
SOLO Multi Hypothesis Tracking (MHT)
[Figure – MHT hypothesis tree: starting from measurements 1, 2 at k = 1, plots 3, 4
at k = 2 and plots 5, 6 at k = 3, each path through the tree is one hypothesis
sequence, e.g. (1,1,3,5), (1,1,3,6), (1,1,4,5), (1,1,4,6), (1,2,3,5), (1,2,3,6),
(1,2,4,5), (1,2,4,6).]

At scan k we have m sensor reports (no more than one report per target):
$Z_k=\{z_1,\dots,z_m\}$ – the set of all sensor reports on scan k.

$H_l$ – a particular hypothesis (from a complete set S of hypotheses) connecting
r(H) tracks to r measurements.

[Figure – the three association hypotheses for Measurements 1 and 2 over scans
t1, t2, t3, as shown earlier.]
332
SOLO Multi Hypothesis Tracking (MHT)
Donald B. Reid
PhD A&A, Stanford U., 1972

Flow Diagram of Multi Target Tracking Algorithms (Reid 1979):

• Initialization (a priori targets) → Receive New Data Set → Perform Target Time
Update.
• Clusters: form new clusters, identifying which targets and measurements are
associated with each cluster.
• Hypotheses Generation: form a new set of hypotheses, calculate their probabilities,
and perform a target measurement update for each hypothesis of each cluster.
• Mash: simplify the hypothesis matrix of each cluster; transfer tentative targets
with unity probability to the confirmed-target category; create new clusters for
confirmed targets no longer in the hypothesis matrix.
• Reduce: reduce the number of hypotheses by elimination or combination.
• Return to Next Data Set, or Stop.
333
SOLO Multi Hypothesis Tracking (MHT)
MHT Implementation Issues
• Need to manage Hypotheses to keep their number reasonably small.
• Limit the History (the depth of hypotheses is the last N scans).
• Combining and pruning of Hypotheses:
- Retain only hypotheses with probability above a certain threshold.
- Combine hypotheses with the last M associations in common.
• Clustering:
- A Cluster is a set of tracks with common measurements and association
hypotheses; hypothesis sets from different clusters are evaluated separately.
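A minimal sketch of the threshold-and-cap pruning described above (a simplification;
names and defaults are illustrative, not a definitive MHT hypothesis manager):

```python
import numpy as np

def prune_hypotheses(hyps, probs, p_min=1e-3, n_max=100):
    """Keep hypotheses above a probability threshold, capped at n_max, renormalized."""
    probs = np.asarray(probs, dtype=float)
    keep = np.argsort(probs)[::-1][:n_max]          # best-first, cap the count
    keep = [k for k in keep if probs[k] >= p_min]   # threshold test
    new_probs = probs[keep] / probs[keep].sum()     # renormalize the survivors
    return [hyps[k] for k in keep], new_probs
```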
334
SOLO Multi Hypothesis Tracking (MHT)
set of all sensor reports on scan k+1 mk zzZ ,,1 
Sum over
all feasible
assignement
Histories
(Lots of them)
Measureme
nt 2
Measureme
nt 1
t1 t2 t3
Accumulated measurements (plots) to time k:  kkk ZZZ ,1:1:1 
Hypotheses Sequences:  kLkk HHS ,,1 
     

L
l
kklkkklkk ZHxpZHpZxp
1
:1:1:1 ,|||
Probability
of Hkl
given all
current and
paste data
P.D.F. of
Target state
for a particular
Hypothesis Hkl
given current
and paste data
   
      

   
 DetectingNot
m
iii
i
D
ii
Detecting
r
D
rm
r
i
iji
T
ij
c
k
k
V
kkl
k
r
i
r
j
k
PP
S
zzSzz
mP
m
e
c
ZHp 








,,
1
,,
11
1
/1
1
1
1
1
2
2/ˆˆexp
!
1
|






We found:
335
[Figure – two sensors measuring x with errors v1, v2, feeding an estimator that
outputs x̂.]

SOLO
Multi-Sensor Estimate
Consider a system comprised of two sensors, each making a single measurement,
$z_i$ (i = 1, 2), of a constant but unknown quantity x, in the presence of random,
dependent, unbiased measurement errors $v_i$ (i = 1, 2). We want to design an optimal
estimator that combines the two measurements:

$$z_1=x+v_1,\qquad E(v_1)=0,\quad E(v_1^2)=\sigma_1^2$$
$$z_2=x+v_2,\qquad E(v_2)=0,\quad E(v_2^2)=\sigma_2^2$$
$$E(v_1 v_2)=\rho\,\sigma_1\sigma_2,\qquad |\rho|\le 1$$

In the absence of any other information, we choose an estimator that combines the two
measurements linearly:

$$\hat{x}=k_1 z_1+k_2 z_2$$

where $k_1$ and $k_2$ must be found such that:

1. The Estimator is Unbiased: $E(\hat{x}-x)=E(\tilde{x})=0$:

$$E(\tilde{x})=E\big(k_1(x+v_1)+k_2(x+v_2)-x\big)=k_1\underbrace{E(v_1)}_{0}+k_2\underbrace{E(v_2)}_{0}+(k_1+k_2-1)\,x=0
\;\Rightarrow\;k_1+k_2=1$$
Sensors Fusion
336
[Figure – two sensors measuring x with errors v1, v2, feeding an estimator that
outputs x̂.]

SOLO
Multi-sensor Estimate (continue – 1)

Estimator: $\hat{x}=k_1 z_1+k_2 z_2$, where $k_1$ and $k_2$ must be found such that:

1. The Estimator is Unbiased: $E(\hat{x}-x)=E(\tilde{x})=0\;\Rightarrow\;k_1+k_2=1$

2. Minimize the Mean Square Estimation Error:

$$\min_{k_1,k_2}E\big[(\hat{x}-x)^2\big]=\min_{k_1}E\Big[\big(k_1 v_1+(1-k_1)v_2\big)^2\Big]
=\min_{k_1}\Big[k_1^2\sigma_1^2+(1-k_1)^2\sigma_2^2+2k_1(1-k_1)\rho\sigma_1\sigma_2\Big]$$

Setting the derivative with respect to $k_1$ to zero,

$$2k_1\sigma_1^2-2(1-k_1)\sigma_2^2+2(1-2k_1)\rho\sigma_1\sigma_2=0$$

gives

$$\hat{k}_1=\frac{\sigma_2^2-\rho\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2},\qquad
\hat{k}_2=1-\hat{k}_1=\frac{\sigma_1^2-\rho\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$$

$$\min_{k_1,k_2}E(\tilde{x}^2)=\frac{\sigma_1^2\sigma_2^2\,(1-\rho^2)}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}\quad\text{– Reduction of Covariance Error}$$

Sensors Fusion
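A minimal numerical sketch of this two-sensor fusion rule (names are illustrative):

```python
def fuse_two(z1, z2, s1, s2, rho):
    """Optimal linear unbiased fusion of two correlated scalar measurements."""
    denom = s1**2 + s2**2 - 2 * rho * s1 * s2
    k1 = (s2**2 - rho * s1 * s2) / denom
    k2 = 1.0 - k1                                   # unbiasedness constraint
    x_hat = k1 * z1 + k2 * z2
    var = s1**2 * s2**2 * (1 - rho**2) / denom      # min E[x̃²]
    return x_hat, var

# e.g. two uncorrelated sensors with σ1 = 1, σ2 = 2:
print(fuse_two(1.0, 1.4, 1.0, 2.0, 0.0))            # variance 0.8 < min(1, 4)
```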
337
[Figure – two sensors measuring x with errors v1, v2, feeding an estimator that
outputs x̂.]

SOLO
Multi-sensor Estimate (continue – 2)

Estimator:

$$\hat{x}=\frac{(\sigma_2^2-\rho\sigma_1\sigma_2)\,z_1+(\sigma_1^2-\rho\sigma_1\sigma_2)\,z_2}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2},\qquad
\min E(\tilde{x}^2)=\frac{\sigma_1^2\sigma_2^2\,(1-\rho^2)}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$$

1. Uncorrelated Measurement Noises (ρ = 0):

$$\hat{x}=\frac{\sigma_2^2\,z_1+\sigma_1^2\,z_2}{\sigma_1^2+\sigma_2^2},\qquad
\min E(\tilde{x}^2)=\frac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}$$

2. Fully Correlated Measurement Noises (ρ = ±1), e.g. for ρ = 1:

$$\hat{x}=\frac{\sigma_2\,z_1-\sigma_1\,z_2}{\sigma_2-\sigma_1},\qquad
\min E(\tilde{x}^2)=0$$

3. Perfect Sensor (σ1 = 0):

$$\hat{x}=z_1,\qquad \min E(\tilde{x}^2)=0$$

The estimator will use the perfect sensor, as expected.

Sensors Fusion
338
[Figure – n sensors measuring x with errors v1,…,vn, feeding an estimator that
outputs x̂.]

SOLO
Multi-sensor Estimate (continue – 3)

Consider a system comprised of n sensors, each making a single measurement,
$z_i$ (i = 1, 2,…, n), of a constant but unknown quantity x, in the presence of
random, dependent, unbiased measurement errors $v_i$ (i = 1, 2,…, n). We want to
design an optimal estimator that combines the n measurements:

$$z_i=x+v_i,\qquad E(v_i)=0,\qquad i=1,2,\dots,n$$

or, in vector form:

$$Z=\begin{bmatrix}z_1\\ \vdots\\ z_n\end{bmatrix}=U\,x+V,\qquad
U=\begin{bmatrix}1\\ \vdots\\ 1\end{bmatrix},\qquad
V=\begin{bmatrix}v_1\\ \vdots\\ v_n\end{bmatrix},\qquad E(V)=0$$

$$E\big(VV^T\big)=R=\begin{bmatrix}\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \cdots\\
\rho_{12}\sigma_1\sigma_2 & \sigma_2^2 & \cdots\\ \vdots & \vdots & \ddots\end{bmatrix}$$

Estimator:

$$\hat{x}=k_1 z_1+k_2 z_2+\cdots+k_n z_n=K^T Z,\qquad
K=\begin{bmatrix}k_1\\ \vdots\\ k_n\end{bmatrix}$$

Sensors Fusion
339
[Figure – n sensors measuring x with errors v1,…,vn, feeding an estimator that
outputs x̂.]

SOLO
Multi-sensor Estimate (continue – 4)

Estimator: $\hat{x}=K^T Z$

1. The Estimator is Unbiased:
$$E(\tilde{x})=E(\hat{x}-x)=E\big(K^T U\,x+K^T V-x\big)=\big(K^T U-1\big)\,x=0\;\Rightarrow\;K^T U=1$$

2. Minimize the Mean Square Estimation Error:
$$\min_{K:\,K^T U=1}E(\tilde{x}^2)=\min_{K:\,K^T U=1}E\big(K^T V V^T K\big)=\min_{K:\,K^T U=1}K^T R\,K$$

Use a Lagrange multiplier λ (to be determined) to include the constraint $K^T U=1$:

$$J(K)=K^T R\,K+\lambda\,\big(1-K^T U\big)$$
$$\frac{\partial J}{\partial K}=2\,R\,K-\lambda\,U=0\;\Rightarrow\;K=\frac{\lambda}{2}\,R^{-1}U$$

Enforcing $K^T U=1$ gives $\lambda/2=\big(U^T R^{-1}U\big)^{-1}$, so

$$K=\frac{R^{-1}U}{U^T R^{-1}U},\qquad
\min_{K:\,K^T U=1}E(\tilde{x}^2)=\big(U^T R^{-1}U\big)^{-1}$$

Sensors Fusion
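A minimal sketch of this n-sensor fusion result (names are illustrative):

```python
import numpy as np

def fuse_n(z, R):
    """BLUE fusion of n correlated measurements of a common scalar x.

    z : (n,) measurement vector;  R : (n, n) noise covariance.
    Implements K = R⁻¹U / (UᵀR⁻¹U), var = (UᵀR⁻¹U)⁻¹.
    """
    U = np.ones(len(z))
    RinvU = np.linalg.solve(R, U)      # R⁻¹U without forming the inverse
    var = 1.0 / (U @ RinvU)            # (UᵀR⁻¹U)⁻¹ = minimum error variance
    K = var * RinvU
    return K @ z, var
```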
340
Multi Sensors Data Fusion    SOLO

Multi Sensors Systems Architectures

[Figure – three architectures:
• Sensor-level Fusion: each of Sensors 1…N performs feature extraction, target
classification, identification and tracking, and sends target reports to a Fusion
Processor (associate, correlate, track, estimate, classify, cue), which returns cues
to the sensors.
• Central-level Fusion: Sensors 1…N send minimally processed data to a Central-Level
Fusion Processor, which performs association, correlation, tracking, estimation,
classification and cueing, and returns cues.
• Hybrid Fusion: sensor-level processing chains feed a Sensor-Level Fusion Processor,
while minimally processed data can also be routed along a central-level fusion path.]
341
Multi Sensors Data Fusion    SOLO

Multi Sensors Systems Architectures
Centralized versus Distributed Architecture

Distributed – Advantages:
• Moderate Data Transfer
• Direct and Simple Misalignment Correction
• Less Vulnerable to ECM and Bad Sensor Data
Distributed – Disadvantages:
• Requires Additional Logic for Track-to-Track Association and Fusion
• Less Accurate Data Association and Tracking Performance

Centralized – Advantages:
• Simple and Direct Logic
• Accurate Estimation & Data Association
Centralized – Disadvantages:
• High Data Transfer
• Susceptible to Data Transfer Latency
• Complex Misalignment Correction
• More Vulnerable to ECM and Bad Sensor Data

Return to Table of Content
342
Sensors Fusion    SOLO

[Figure – Sensor A, Track i and Sensor B, Track j.]

Track-to-Track of Two Sensors, Correlation and Fusion

We want to determine if Track i from Sensor A and Track j from Sensor B potentially
represent the same target. Define the track difference:

$$d_k^{ij}:=\hat{x}^i_{k|k}-\hat{x}^j_{k|k}=\tilde{x}^j_{k|k}-\tilde{x}^i_{k|k}$$

From the Filter/Predictor and the Real Target Dynamics:

$$\hat{x}^i_{k|k}=\hat{x}^i_{k|k-1}+K^i_k\big(z^i_k-H^i_k\hat{x}^i_{k|k-1}\big),\qquad
\hat{x}^i_{k|k-1}=\Phi_{k-1}\hat{x}^i_{k-1|k-1}+G_{k-1}u_{k-1}$$
$$x_k=\Phi_{k-1}x_{k-1}+G_{k-1}u_{k-1}+w_{k-1}$$

Define the prediction and estimation errors:

$$\tilde{x}^i_{k|k-1}:=x_k-\hat{x}^i_{k|k-1}=\Phi_{k-1}\tilde{x}^i_{k-1|k-1}+w_{k-1}$$
$$\tilde{x}^i_{k|k}:=x_k-\hat{x}^i_{k|k}=\big(I-K^i_k H^i_k\big)\tilde{x}^i_{k|k-1}-K^i_k v^i_k$$

and in the same way for track j:

$$\tilde{x}^j_{k|k-1}=\Phi_{k-1}\tilde{x}^j_{k-1|k-1}+w_{k-1},\qquad
\tilde{x}^j_{k|k}=\big(I-K^j_k H^j_k\big)\tilde{x}^j_{k|k-1}-K^j_k v^j_k$$

The covariance of the track difference is then:

$$U^{ij}_{k|k}:=E\big(d^{ij}_k d^{ij\,T}_k\big)
=E\big(\tilde{x}^i_{k|k}\tilde{x}^{i\,T}_{k|k}\big)+E\big(\tilde{x}^j_{k|k}\tilde{x}^{j\,T}_{k|k}\big)
-E\big(\tilde{x}^i_{k|k}\tilde{x}^{j\,T}_{k|k}\big)-E\big(\tilde{x}^j_{k|k}\tilde{x}^{i\,T}_{k|k}\big)
=P^i_{k|k}+P^j_{k|k}-P^{ij}_{k|k}-P^{ji}_{k|k}$$
343
Sensors Fusion    SOLO

[Figure – Sensor A, Track i and Sensor B, Track j.]

Track-to-Track of Two Sensors, Correlation and Fusion (continue – 1)

With

$$P^i_{k|k}=E\big(\tilde{x}^i_{k|k}\tilde{x}^{i\,T}_{k|k}\big),\qquad
P^i_{k|k-1}=E\big(\tilde{x}^i_{k|k-1}\tilde{x}^{i\,T}_{k|k-1}\big)$$

(and similarly for track j), and noting that the measurement noises of the two
sensors are zero-mean and independent of the state errors and of each other, the
cross-covariance between the two track errors satisfies:

Estimation:
$$P^{ij}_{k|k}:=E\big(\tilde{x}^i_{k|k}\tilde{x}^{j\,T}_{k|k}\big)
=\big(I-K^i_k H^i_k\big)\,P^{ij}_{k|k-1}\,\big(I-K^j_k H^j_k\big)^T$$

Prediction:
$$P^{ij}_{k|k-1}:=E\big(\tilde{x}^i_{k|k-1}\tilde{x}^{j\,T}_{k|k-1}\big)
=\Phi_{k-1}\,P^{ij}_{k-1|k-1}\,\Phi_{k-1}^T+Q_{k-1}$$
SOLO
Gating
Then the Track i of Sensor A and Track j of Sensor B are from
the same Target if:
with probability PG determined by the
Gate Threshold γ. Here we described
another way of determining γ, based on
the chi-squared distribution of dk
2.
Tail probabilities of the chi-square and normal densities.
9.21
11.34
13.28
2
3
4
0.01
   01.01Pr
2
  typicallydP kG
28.13;4,01.0
34.11;3,01.0
21.9;2,01.0






z
z
z
n
n
n
  
 ij
k
ij
kk
Tij
kk dkUdd
1
|
2
:
Since dk
2 is chi-squared of order nd
distributed we can use the chi-square
Table to determine γ
 k
i
tS
 1|ˆ kk
j
ttz
 ktxz ,
 1|ˆ kk
i
ttz
 k
j
tS
Trajectory i
Trajectory j
Measurements
at scan k
Track-to-Track of Two Sensors, Correlation and Fusion (continue – 2)
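A minimal sketch of this gate test; `scipy.stats.chi2.ppf` supplies the threshold γ
for the chosen tail probability (names are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def same_target(x_i, x_j, P_i, P_j, P_ij, P_ji, alpha=0.01):
    """Chi-square gate test for track-to-track association."""
    d = x_i - x_j
    U = P_i + P_j - P_ij - P_ji               # covariance of the track difference
    d2 = d @ np.linalg.solve(U, d)            # d² = dᵀ U⁻¹ d
    gamma = chi2.ppf(1.0 - alpha, df=len(d))  # e.g. 9.21 for n_z = 2, α = 0.01
    return d2 <= gamma
```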
345
Sensors Fusion    SOLO

Track-to-Track of Two Sensors, Correlation and Fusion (continue – 3)

Suppose that $d_k^2=d^{ij\,T}_k\big(U^{ij}_{k|k}\big)^{-1}d^{ij}_k\le\gamma$; then
Track i from Sensor A and Track j from Sensor B potentially represent the same
target. We want to combine the data from those two sensors by using

$$\tilde{x}^c_{k|k}=\tilde{x}^i_{k|k}+C\,\big(\tilde{x}^j_{k|k}-\tilde{x}^i_{k|k}\big)$$

where C has to be defined. The combined estimate is unbiased:

$$E\big(\tilde{x}^c_{k|k}\big)=\underbrace{E\big(\tilde{x}^i_{k|k}\big)}_{0}+C\,\Big[\underbrace{E\big(\tilde{x}^j_{k|k}\big)}_{0}-\underbrace{E\big(\tilde{x}^i_{k|k}\big)}_{0}\Big]=0$$

Its covariance is:

$$P^c_{k|k}=E\big(\tilde{x}^c_{k|k}\tilde{x}^{c\,T}_{k|k}\big)
=P^i_{k|k}+C\big(P^{ji}_{k|k}-P^i_{k|k}\big)+\big(P^{ij}_{k|k}-P^i_{k|k}\big)C^T
+C\big(P^i_{k|k}+P^j_{k|k}-P^{ij}_{k|k}-P^{ji}_{k|k}\big)C^T$$

We determine C by requiring $\min_C\operatorname{trace}P^c_{k|k}$:

$$\frac{\partial}{\partial C}\operatorname{trace}P^c_{k|k}
=2\big(P^{ij}_{k|k}-P^i_{k|k}\big)+2\,C\big(P^i_{k|k}+P^j_{k|k}-P^{ij}_{k|k}-P^{ji}_{k|k}\big)=0$$

$$C^*=\big(P^i_{k|k}-P^{ij}_{k|k}\big)\big(P^i_{k|k}+P^j_{k|k}-P^{ij}_{k|k}-P^{ji}_{k|k}\big)^{-1}
=\big(P^i_{k|k}-P^{ij}_{k|k}\big)\big(U^{ij}_{k|k}\big)^{-1}$$

The Minimization Condition
$\partial^2\operatorname{trace}P^c_{k|k}/\partial C^2=2\,U^{ij}_{k|k}\ge 0$
is satisfied, since $U^{ij}_{k|k}$ is a covariance matrix.
Sensors FusionSOLO
Sensor A
Track i
Sensor B
Track j
Compute Difference
j
kk
i
kk
ij
k xxd ||
ˆˆ 
Compute χ2 Statistics
    ij
k
ij
k
Tij
kk dUdd
12 

ij
d Perform
Gate Test
Assignment
i
kkx |
ˆ
j
kkx |
ˆ
11| ,,,,  k
i
k
i
k
i
k
i
kk QHKP
11| ,,,,  k
j
k
j
k
j
k
j
kk QHKP
    i
kk
j
kk
ij
kk
ij
kk
i
kk
i
kk
c
kk xxUPPxx ||
1
|||||
ˆˆˆˆ 

Recursive Track Estimate
c
kkx |
ˆ
ij
kkU |
i
kkx |
ˆ
j
kkx |
ˆ
Track-to-Track of Two Sensors, Correlation and Fusion (continue – 4)
Summary
Tij
kk
ij
kk
j
kk
i
kk
ij
kk PPPPU ||||| 
      00|0111|11|  
ijTj
k
j
kk
Tj
k
ij
kk
i
k
i
k
i
k
ij
kk PHKIQPHKIP
  Tij
k
ij
k
ij
kk ddEU :|
 
   Tij
kk
i
kk
ij
kk
ij
kk
i
kk
i
kk
c
kk
Tccc
kk
PPUPPPP
xxEP
||
1
|||||
|
ˆˆ:



Compute
and
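A minimal sketch of this fusion step (the Bar-Shalom/Campo combination reconstructed
above; names are illustrative):

```python
import numpy as np

def track_fuse(x_i, P_i, x_j, P_j, P_ij):
    """Track-to-track fusion of two correlated track estimates."""
    P_ji = P_ij.T
    U = P_i + P_j - P_ij - P_ji             # covariance of the track difference
    C = (P_i - P_ij) @ np.linalg.inv(U)     # optimal combination matrix C*
    x_c = x_i + C @ (x_j - x_i)
    P_c = P_i - C @ (P_i - P_ij).T          # = P_i - (P_i-P_ij) U⁻¹ (P_i-P_ij)ᵀ
    return x_c, P_c
```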
Return to Table of Content
347
Sensors FusionSOLO
Issues in Multi – Sensor Data Fusion
Successful Multi – Sensor Data Fusion Requires the Following Practical
Issues to be Addressed:
• Spatial and Temporal Sensor Alignment
• Track Association & Fusion (for Distributed Architecture)
• Data Corruption (or Double-Counting) Problem
(Repeated Use of the Same Information)
• Handling Data Latency (e.g. Out of Sequence Measurements/Estimates)
• Communication Bandwidth Limitations
(How to Compress the Data)
• Fusion of Dissimilar Kinematic Data (1D with 2D or 3D)
• Picture Consistency
Return to Table of Content
348
Multi Target TrackingSOLO
References
S.S. Blackman, “Multiple-Target Tracking with Radar Applications”, Artech House, 1986
S.S. Blackman, R. Popoli , “Design and Analysis of Modern Tracking Systems”,
Artech House, 1999
Y. Bar-Shalom, T.E. Fortmann, “Tracking and Data Association”, Academic Press, 1988
E. Waltz, J. Llinas, “Multisensor Data Fusion”, Artech House, 1990
Y. Bar-Shalom, Ed., “Multitarget-Multisensor Tracking, Applications and Advances”,
Vol. II, Artech House, 1992
Y. Bar-Shalom, Xiao-Rong Li., “Multitarget-Multisensor Tracking: Principles and
Techniques”, YBS Publishing, 1995
Y. Bar-Shalom, W.D. Blair,“Multitarget-Multisensor Tracking, Applications and Advances”,
Vol. III, Artech House, 2000
Y. Bar-Shalom, Ed., “Multitarget-Multisensor Tracking, Applications and Advances”,
Vol. I, Artech House, 1990
Y. Bar-Shalom, Xiao-Rong Li., “Estimation and Tracking: Principles, Techniques
and Software”, Artech House, 1993
L.D.Stone, C.A. Barlow, T.L. Corwin, “Bayesian Multiple Target Tracking”,
Artech House, 1999
349
Multi Target TrackingSOLO
References (continue – 1)
Ristic, B. & Hernandez, M.L., "Tracking Systems", 2008 IEEE Radar Conference,
Rome, Italy
Return to Table of Content
Karlsson, R., “Simulation Based Methods for Target Tracking”,
Linköping University, Thesis No. 930, 2002
Karlsson, R., “Particle Filtering for Positioning and Tracking Applications”,
PhD Dissertation, Linköping University, No. 924, 2005
350
Multi Target TrackingSOLO
References
S.S. Blackman, “Multiple-Target Tracking with Radar Applications”, Artech House, 1986
S.S. Blackman, R. Popoli “Design and Analysis of Modern Tracking Systems”,
Artech House, 1999
L.A. Klein, “Sensor and Data Fusion”, Artech House,
351
Multi Target TrackingSOLO
References
D. Hall, J. Llinas,“Handbook of Multisensor Data Fusion”, Artech House,
D. Hall, S. A. H. McMullen, “Mathematical Techniques in Multisensor Data Fusion”.
Artech House
M.E. Liggins, D. Hall, J. Llinas, Ed.“Handbook of Multisensor Data Fusion:
Theory and Practice”, 2nd Ed., CRC Press, 2008
352
Multi Target TrackingSOLO
References
Y. Bar-Shalom, Ed.,“Multitarget-Multisensor Tracking, Applications and Advances”,
Vol. II, Artech House, 1992
Y. Bar-Shalom, W.D. Blair Ed.,“Multitarget-Multisensor Tracking, Applications and
Advances”, Vol. III, Artech House,
L.D.Stone, C.A. Barlow, T.L. Corwin, “Bayesian Multiple Target Tracking”,
Artech House, 1999
353
Multi Target TrackingSOLO
References
From left-to-right: Sam Blackman, Oliver Drummond,
Yaakov Bar-Shalom and Rabinder Madan
From left-to-right: Fred Daum, X. Rong Li, Tom Kerr and
Sanjeev Arulampalam
A Raytheon THAAD radar, which uses Yaakov Bar-Shalom's
JPDAF algorithm
http://esplab1.ee.uconn.edu/AESmagMae02.htm The Workshop on Estimation, Tracking and Fusion:
A Tribute to Yaakov Bar-Shalom, 17 May 2001
354
Multi Target TrackingSOLO
References
“Special Issue in Data Fusion”, Proceedings of the IEEE, January 1997
Klein, L. A., “Sensor and Data Fusion Concepts and Applications”, 2nd Ed.,
SPIE Optical Engineering Press, 1999
355
“Proceedings of the IEEE”, March 2004, Special Issue on:
“Sequential State Estimation: From Kalman Filters to Particle Filters”
Julier, S.,J. and Uhlmann, J.,K., “Unscented Filtering and Nonlinear Estimation”,
pp.401 - 422
356
Branko Ristic Marcel L. Hernandez
Fredrik Gustafsson Niclas Bergman Rickard Karlsson
357
SOLO
Technion
Israeli Institute of Technology
1964 – 1968 BSc EE
1968 – 1971 MSc EE
Israeli Air Force
1970 – 1974
RAFAEL
Israeli Armament Development Authority
1974 – 2013
Stanford University
1983 – 1986 PhD AA
358
SOLO Review of Probability
Chi-square Distribution
 
 
 
 
 









00
02/exp
2/
2/1
;
2/2
2/
x
xxx
kkxp
k
k
  kxE 
  kxVar 2
    
  2/
21
exp
k
X
j
xjE





Probability Density Functions
Cumulative Distribution Function
Mean Value
Variance
Moment Generating Function
 
 
 








00
0
2/
2/,2/
;
x
x
k
xk
kxP

Γ is the gamma function    



0
1
exp dttta a
     
x
a
dtttxa
0
1
exp,γ is the incomplete gamma function
Distributions
examples
359
SOLO Review of Probability
Gaussian Mixture Equations

A mixture is a p.d.f. given by a weighted sum of p.d.f.s with the weights summing up
to unity. A Gaussian Mixture is a p.d.f. consisting of a weighted sum of Gaussian
densities:

$$p(x)=\sum_{j=1}^{n}p_j\,\mathcal{N}\big(x;\bar{x}_j,P_j\big),\qquad\sum_{j=1}^{n}p_j=1$$

Denote by $A_j$ the event that x is Gaussian distributed with mean $\bar{x}_j$ and
covariance $P_j$:
$$A_j:\;x\sim\mathcal{N}\big(\bar{x}_j,P_j\big),\qquad P(A_j)=p_j$$
with $A_j$, j = 1,…,n, mutually exclusive and exhaustive:
$$A_1\cup A_2\cup\cdots\cup A_n=S,\qquad A_i\cap A_j=\emptyset\;(i\ne j)$$
Therefore:
$$p(x)=\sum_{j=1}^{n}p_j\,\mathcal{N}\big(x;\bar{x}_j,P_j\big)=\sum_{j=1}^{n}P(A_j)\,p(x|A_j)$$
360
SOLO Review of Probability
Gaussian Mixture Equations (continue – 1)

A Gaussian Mixture is a p.d.f. consisting of a weighted sum of Gaussian densities:
$$p(x)=\sum_{j=1}^{n}p_j\,\mathcal{N}\big(x;\bar{x}_j,P_j\big)=\sum_{j=1}^{n}P(A_j)\,p(x|A_j)$$

The mean of such a mixture is:
$$\bar{x}=E(x)=\sum_{j=1}^{n}p_j\,E\big(x|A_j\big)=\sum_{j=1}^{n}p_j\,\bar{x}_j$$

The covariance of the mixture is:
$$E\big[(x-\bar{x})(x-\bar{x})^T\big]=\sum_{j=1}^{n}E\big[(x-\bar{x})(x-\bar{x})^T|A_j\big]\,p_j$$
Writing $x-\bar{x}=(x-\bar{x}_j)+(\bar{x}_j-\bar{x})$ and using
$E\big[(x-\bar{x}_j)|A_j\big]=0$, the cross terms vanish, so that
$$E\big[(x-\bar{x})(x-\bar{x})^T\big]=\sum_{j=1}^{n}E\big[(x-\bar{x}_j)(x-\bar{x}_j)^T|A_j\big]\,p_j+\sum_{j=1}^{n}(\bar{x}_j-\bar{x})(\bar{x}_j-\bar{x})^T\,p_j$$
361
SOLO Review of Probability
Gaussian Mixture Equations (continue – 2)

The covariance of the mixture is:
$$E\big[(x-\bar{x})(x-\bar{x})^T\big]=\sum_{j=1}^{n}P_j\,p_j+\tilde{P}$$
where
$$\tilde{P}:=\sum_{j=1}^{n}(\bar{x}_j-\bar{x})(\bar{x}_j-\bar{x})^T\,p_j$$
is the spread-of-the-means term, which can also be written as
$$\tilde{P}=\sum_{j=1}^{n}\bar{x}_j\bar{x}_j^T\,p_j-\bar{x}\,\bar{x}^T$$
so that
$$E\big[(x-\bar{x})(x-\bar{x})^T\big]=\sum_{j=1}^{n}P_j\,p_j+\sum_{j=1}^{n}\bar{x}_j\bar{x}_j^T\,p_j-\bar{x}\,\bar{x}^T$$

Note: Since we developed only first and second moments of the mixture, those relations
will still be correct even if the random variables in the mixture are not Gaussian.
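A minimal sketch of these moment-matching formulas (names are illustrative); this is
the same computation used by the IMM mixing and combination steps earlier:

```python
import numpy as np

def mixture_moments(weights, means, covs):
    """Mean and covariance of a (Gaussian) mixture by moment matching."""
    w = np.asarray(weights, dtype=float)
    x_bar = sum(wj * m for wj, m in zip(w, means))
    P = np.zeros_like(covs[0])
    for wj, m, Pj in zip(w, means, covs):
        d = (m - x_bar).reshape(-1, 1)
        P += wj * (Pj + d @ d.T)   # within-component + spread-of-the-means terms
    return x_bar, P
```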
362
SOLO Probability
Total Probability Theorem

If $A_1\cup A_2\cup\cdots\cup A_n=S$ and $A_i\cap A_j=\emptyset$ for $i\ne j$, we say
that the set space S is decomposed into exhaustive and incompatible (exclusive) sets.

The Total Probability Theorem states that for any event B, its probability can be
decomposed in terms of conditional probabilities as follows:

$$\Pr(B)=\sum_{i=1}^{n}\Pr(B\cap A_i)=\sum_{i=1}^{n}\Pr(B|A_i)\,\Pr(A_i)$$

This follows because, for any event B,

$$B=\bigcup_{k=1}^{n}(B\cap A_k),\qquad (B\cap A_k)\cap(B\cap A_l)=\emptyset\;(k\ne l)
\;\Rightarrow\;\Pr(B)=\sum_{k=1}^{n}\Pr(B\cap A_k)$$

together with the relation $\Pr(B\cap A_l)=\Pr(B|A_l)\Pr(A_l)$.

Table of Content
363
SOLO Probability
Statistical Independent Events

The n events $A_i$, i = 1,…,n, are statistically independent if, for every subset of
distinct indices I,

$$\Pr\Big(\bigcap_{i\in I}A_i\Big)=\prod_{i\in I}\Pr(A_i),\qquad\text{in particular}\quad
\Pr\Big(\bigcap_{i=1}^{r}A_i\Big)=\prod_{i=1}^{r}\Pr(A_i),\quad r=2,\dots,n$$

From the Theorem of Addition (inclusion–exclusion),

$$\Pr\Big(\bigcup_{i=1}^{n}A_i\Big)=\sum_{i}\Pr(A_i)-\sum_{i<j}\Pr(A_i\cap A_j)+\sum_{i<j<k}\Pr(A_i\cap A_j\cap A_k)-\cdots$$

so for statistically independent events

$$\Pr\Big(\bigcup_{i=1}^{n}A_i\Big)=1-\prod_{i=1}^{n}\big[1-\Pr(A_i)\big]$$

Since (De Morgan) $\overline{\bigcup_{i=1}^{n}A_i}=\bigcap_{i=1}^{n}\bar{A}_i$, we obtain

$$\Pr\Big(\bigcap_{i=1}^{n}\bar{A}_i\Big)=1-\Pr\Big(\bigcup_{i=1}^{n}A_i\Big)=\prod_{i=1}^{n}\big[1-\Pr(A_i)\big]=\prod_{i=1}^{n}\Pr(\bar{A}_i)$$

i.e., if the n events $A_i$, i = 1, 2,…, n, are statistically independent, then the
complements $\bar{A}_i$ are also statistically independent.

Table of Content
364
SOLO Probability
Theorem of Multiplication

$$\Pr(A_1 A_2\cdots A_n)=\Pr(A_1)\,\Pr(A_2|A_1)\,\Pr(A_3|A_1 A_2)\cdots\Pr(A_n|A_1\cdots A_{n-1})$$

Proof: start from $\Pr(A\cap B)=\Pr(A|B)\Pr(B)$:

$$\Pr(A_1 A_2\cdots A_n)=\Pr(A_n\cdots A_2|A_1)\,\Pr(A_1)$$

and, in the same way,

$$\Pr(A_n\cdots A_2|A_1)=\Pr(A_n\cdots A_3|A_1 A_2)\,\Pr(A_2|A_1)$$
$$\vdots$$
$$\Pr(A_n A_{n-1}|A_1\cdots A_{n-2})=\Pr(A_n|A_1\cdots A_{n-1})\,\Pr(A_{n-1}|A_1\cdots A_{n-2})$$

From those results we obtain

$$\Pr(A_1 A_2\cdots A_n)=\Pr(A_1)\,\Pr(A_2|A_1)\,\Pr(A_3|A_1 A_2)\cdots\Pr(A_n|A_1\cdots A_{n-1})\qquad\text{q.e.d.}$$

Table of Content
365
SOLO Probability
Conditional Probability - Bayes Formula

Using the relations

$$\Pr(B\cap A_l)=\Pr(B|A_l)\,\Pr(A_l)=\Pr(A_l|B)\,\Pr(B)$$
$$B=\bigcup_{k=1}^{m}(B\cap A_k),\qquad(B\cap A_k)\cap(B\cap A_l)=\emptyset\;(k\ne l)
\;\Rightarrow\;\Pr(B)=\sum_{k=1}^{m}\Pr(B\cap A_k)=\sum_{k=1}^{m}\Pr(B|A_k)\,\Pr(A_k)$$

we obtain the Bayes Formula:

$$\Pr(A_l|B)=\frac{\Pr(B|A_l)\,\Pr(A_l)}{\Pr(B)}=\frac{\Pr(B|A_l)\,\Pr(A_l)}{\sum_{k=1}^{m}\Pr(B|A_k)\,\Pr(A_k)}$$

Thomas Bayes
1702 - 1761

[Figure – the set space S decomposed into exclusive sets $A_1,\dots,A_m$, with
$B=(B\cap A_1)\cup(B\cap A_2)\cup\cdots\cup(B\cap A_m)$.]

Table of Content
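A tiny numerical sketch of the formula (hypothetical priors and likelihoods, purely
for illustration):

```python
def bayes(prior, like):
    """Posterior P(A_l|B) from priors P(A_l) and likelihoods P(B|A_l)."""
    joint = [p * L for p, L in zip(prior, like)]
    total = sum(joint)                 # total probability P(B)
    return [j / total for j in joint]

# e.g. three exclusive hypotheses:
print(bayes([0.5, 0.3, 0.2], [0.1, 0.4, 0.8]))  # -> [0.1515..., 0.3636..., 0.4848...]
```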

More Related Content

PPT
Monopulse tracking radar
PPT
radar principles
PPT
Satellite orbit and constellation design
PPT
Introduction to radar
PPTX
Rutherford scattering & scattering cross section
PDF
Principles of RADAR Systems
PDF
Introduction to satellite communication
PDF
Tracking Radar
Monopulse tracking radar
radar principles
Satellite orbit and constellation design
Introduction to radar
Rutherford scattering & scattering cross section
Principles of RADAR Systems
Introduction to satellite communication
Tracking Radar

What's hot (20)

PPTX
Monopulse Radar
PPT
radar-principles
PPT
Earth Station Subsystem
PDF
Orbital mechanics
PPTX
Altitude and Orbit control.pptx
PPTX
Satellite communication
PPT
Radar Basics
PPT
Components of a Pulse Radar System
PPTX
Gps signal structure
PPTX
Satelite communication
PPTX
Sky wave propogation
PPTX
PPTX
Satellite communication
PDF
Active Phased Array Radar Systems
PPT
Satellite communication
PDF
Satellite Communication ppt-3-1.pdf
  • 6. 6 General Problem SOLO. Assume that the platform sensor measures continuously and without error, in platform coordinates, the object (Target, T) and platform (P) positions and velocities. The relative position vector $\vec R$ is defined by three independent parameters; a possible choice is the range $R$, the sensor azimuth angle $Az$ relative to the platform, and the sensor elevation angle $El$ relative to the platform:
$$\vec R^P = \begin{bmatrix} R_x^P\\ R_y^P\\ R_z^P \end{bmatrix} = \begin{bmatrix} \cos Az & -\sin Az & 0\\ \sin Az & \cos Az & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos El & 0 & \sin El\\ 0 & 1 & 0\\ -\sin El & 0 & \cos El \end{bmatrix} \begin{bmatrix} R\\ 0\\ 0 \end{bmatrix} = \begin{bmatrix} R\cos Az\cos El\\ R\sin Az\cos El\\ -R\sin El \end{bmatrix}$$
Rotation matrix from LLLN to P (3-2-1 Euler angles: azimuth $\psi$, pitch $\theta$, roll $\phi$):
$$C_L^P = \begin{bmatrix} \cos\theta\cos\psi & \cos\theta\sin\psi & -\sin\theta\\ \sin\phi\sin\theta\cos\psi-\cos\phi\sin\psi & \sin\phi\sin\theta\sin\psi+\cos\phi\cos\psi & \sin\phi\cos\theta\\ \cos\phi\sin\theta\cos\psi+\sin\phi\sin\psi & \cos\phi\sin\theta\sin\psi-\sin\phi\cos\psi & \cos\phi\cos\theta \end{bmatrix}$$
  • 7. 7 General Problem SOLO. The origin of the LLLN coordinate system is located at the projection of the platform center of gravity (CG) on the Earth surface, with the $z_{Down}$ axis pointing down and the $x_{North},y_{East}$ plane parallel to the local level, $x_{North}$ pointing to the local North and $y_{East}$ to the local East. The platform is located at Latitude = $Lat$, Longitude = $Long$, Height = $H$. Rotation matrix from E to L:
$$C_E^L = R_2\!\left(-\left(\tfrac{\pi}{2}+Lat\right)\right)R_3\!\left(Long\right) = \begin{bmatrix} -\sin Lat\cos Long & -\sin Lat\sin Long & \cos Lat\\ -\sin Long & \cos Long & 0\\ -\cos Lat\cos Long & -\cos Lat\sin Long & -\sin Lat \end{bmatrix}$$
The Earth radius is $R_{pB} = R_0\left(1-e\sin^2 Lat\right)$, with $R_0 = 6.378135\times10^6\,\mathrm m$ and $e = 1/298.26$. The position of the platform in E coordinates is
$$\vec R_B^E = \left(R_{pB}+H\right)\begin{bmatrix}\sin Lat\cos Long\\ \sin Long\\ \cos Lat\cos Long\end{bmatrix}$$
  • 8. 8 General Problem SOLO. The position of the platform (P) in E coordinates is $\vec R_B^E$ above, and the position of the target (T) in E coordinates is
$$\vec R_T^E = \begin{bmatrix}R_{xET}\\ R_{yET}\\ R_{zET}\end{bmatrix} = \left(R_{pT}+H_T\right)\begin{bmatrix}\sin Lat_T\cos Long_T\\ \sin Long_T\\ \cos Lat_T\cos Long_T\end{bmatrix}$$
The position of the target relative to the platform in E coordinates is $\vec R^E = C_L^E\,C_P^L\,\vec R^P$, so that $\vec R_T^E = \vec R_B^E + \vec R^E$. The target latitude $Lat_T$, longitude $Long_T$ and height $H_T$ then follow from
$$Lat_T = \tan^{-1}\!\left(\frac{R_{xET}}{R_{zET}}\right),\quad R_{pT}+H_T = \left(R_{xET}^2+R_{yET}^2+R_{zET}^2\right)^{1/2},\ \ R_{pT} = R_0\left(1-e\sin^2 Lat_T\right),\quad Long_T = \sin^{-1}\!\left(\frac{R_{yET}}{R_{pT}+H_T}\right)$$
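A minimal numeric sketch of these conversions, assuming the slides' parametrization of the E frame as reconstructed above; the function names and test usage are illustrative, not from the slides:

```python
import numpy as np

R0 = 6.378135e6   # Earth radius R_0 [m] (slide value)
e = 1.0 / 298.26  # ellipticity e (slide value)

def geodetic_to_E(lat, lon, h):
    """Position in E coordinates from latitude, longitude [rad] and height [m]."""
    r = R0 * (1.0 - e * np.sin(lat) ** 2) + h
    return r * np.array([np.sin(lat) * np.cos(lon),
                         np.sin(lon),
                         np.cos(lat) * np.cos(lon)])

def E_to_geodetic(R_E):
    """Invert the slide's relations: Lat, Long, H from an E-frame position."""
    lat = np.arctan2(R_E[0], R_E[2])
    r = np.linalg.norm(R_E)            # R_p + H
    lon = np.arcsin(R_E[1] / r)
    h = r - R0 * (1.0 - e * np.sin(lat) ** 2)
    return lat, lon, h
```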
  • 9. 9 General Problem SOLO. Assume that the platform sensor measures continuously and without error, in platform (P) coordinates, the target (T) and platform positions and velocities. The velocity vector of the object relative to the platform is obtained by direct differentiation of the relative range $\vec R$:
$$\vec V_{TP} = \left.\frac{d\vec R}{dt}\right|_P = \vec V_T - \vec V_P - \vec\omega_{IP}\times\vec R$$
so the target velocity vector is computed as
$$\vec V_T = \left.\frac{d\vec R}{dt}\right|_I + \vec V_P = \left.\frac{d\vec R}{dt}\right|_P + \vec\omega_{IP}\times\vec R + \vec V_P$$
where $\vec\omega_{IP}$ is the angular-rate vector of the platform (P) relative to inertial space (measured by its INS), $\vec V_P$ is the platform velocity vector (measured by its INS), and $d/dt|_P$ denotes differentiation in platform (P) coordinates.
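A small numeric illustration of the velocity relation above, with hypothetical INS and sensor values, all in platform axes and SI units:

```python
import numpy as np

dR_dt_P  = np.array([-50.0, 5.0, 0.0])       # relative range rate seen by the sensor [m/s]
omega_IP = np.array([0.0, 0.02, 0.05])       # platform inertial angular rate (INS) [rad/s]
R        = np.array([8000.0, 500.0, -300.0]) # relative position vector [m]
V_P      = np.array([200.0, 0.0, 0.0])       # platform velocity (INS) [m/s]

# V_T = dR/dt|_P + omega_IP x R + V_P
V_T = dR_dt_P + np.cross(omega_IP, R) + V_P
```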
  • 10. 10 General Problem SOLO. The platform sensors measure at discrete times and with measurement error, and it may happen that no data is obtained at a measurement event (no target detection). Therefore it is necessary to estimate the target trajectory parameters and their errors from the measurement events, and to predict them between measurement events:
- $t_k$: times of the measurements ($k = 0,1,2,\dots$)
- $z(t_k)$: sensor measurements
- $x(t)$: parameters of the real trajectory at time $t$
- $\hat x(t)$: predicted parameters of the trajectory at time $t$
- $P(t/t_k)$: predicted parameter errors at time $t$ ($t_k < t < t_{k+1}$)
- $P(t_k/t_k)$: updated parameter errors at measurement time $t_k$
(Figure: real vs. estimated trajectory with the predicted and updated error covariances at the measurement events, feeding a filter (estimator/predictor) block.)
  • 11. 11 General Problem SOLO. The problem is more complicated when there are multiple targets. In this case we must determine which measurement is associated with which target; this data association is done before filtering. (Figure: three targets with predicted measurements $\hat z_i(t_{k+1}/t_k)$ and gates $S_i(t_{k+1}/t_k)$, a data-association block distributing the measurements $z_i(t_k)$ to one filter (estimator/predictor) per target.)
  • 12. 12 General Problem SOLO. If more sensors are involved, Sensor Data Fusion can improve the performance; this is the multi-sensor multi-target situation. (Figure: two sensors, each with feature extraction, target classification, identification and tracking, reporting targets to a fusion processor that associates, correlates, tracks, estimates, classifies and cues; a ground radar contributes via data link.) To perform this task we must align the sensors' data in time (synchronization) and in space (for example using GPS, which provides accurate time and position).
  • 13. 13 General Problem SOLO — Terminology. Sensor: a device that observes the (remote) environment by reception of some signals (energy). Frame or scan: a "snapshot" of a region of the environment obtained by the sensor at a point in time, called the sampling time. Signal processing: processing of the sensor data to provide measurements. Target detection: performed by the signal processing by "detecting" target characteristics, comparing them with a threshold and deleting false targets (alarms); these capabilities are defined by the probability of detection $P_D$ and the probability of false alarm $P_{FA}$. Measurement extraction: the final stage of signal processing, which generates a measurement. Time stamp: the time to which a detection/measurement pertains. Registration: alignment (space & time) of two or more sensors, or alignment of a moving sensor's data from successive sampling times, so that their data can be combined. Track formation (or track assembly, target acquisition, measurement-to-measurement association, scan-to-scan association): detection of a target (processing of measurements from a number of sampling times to determine the presence of a target) and initialization of its track (determination of the initial estimate of its state).
  • 14. 14 General Problem SOLO — Terminology (continue – 1). Tracking filter: state estimator of a target. Data association: process of establishing which measurement (or weighted combination of measurements) is to be used in a state estimator. Track continuation (maintenance or updating): association and incorporation of measurements from a sampling time into a track filter. Cluster tracking: tracking of a set of nearby targets as a group rather than as individuals.
  • 15. 15 General Problem SOLO — Functional Diagram of a Tracking System. A tracking system performs the following functions: • Sensor Data Processing and Measurement Formation, which provides target data. • Observation-to-Track Association, which relates detected target data to the existing track files. • Track Maintenance (initialization, confirmation and deletion) of the targets detected by the sensors. • Filtering and Prediction, which for each track processes the data associated to the track, filters the target state (position, and possibly velocity and acceleration) from noise, and predicts the target state and errors (covariance matrix) at the next sensor measurement. • Gating Computations which, using the predicted target state, provide the gating that enables distinguishing the measurement of the specific track file's target from the other targets detected by the sensors. References: Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986; Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999.
  • 16. 16 SENSORS SOLO — Introduction. We deal with sensors used for target detection, identification, acquisition and tracking, and with seekers for missile guidance. Classification of sensors by the type of energy they use for sensing: • Electromagnetic, distinguished by EM frequency: micro-wave and millimeter-wave radars; electro-optical (visible, IR, laser). • Acoustic systems. Classification of sensors by the source of energy they use for sensing: • Passive, where the source of energy is in the objects that are sensed (examples: visible, IR, acoustic systems). • Semi-active, where the energy is produced externally to the sensor and sent toward the target, which reflects it back to the sensor (examples: radars, laser, acoustic systems). • Active, where the energy is produced by the sensor itself and sent toward the target, which reflects it back to the sensor (examples: radars, laser, acoustic systems).
  • 17. 17 SENSORS SOLO — Introduction. Classification of sensors by the carrying vehicle: • Sensors on ground fixed sites • Human carriers • Ground vehicles • Ships • Submarines • Torpedoes • Air vehicles (aircraft, helicopters, UAVs, balloons) • Missiles (seekers, active proximity fuzes) • Satellites. Classification of sensors by the measurement type: • Range and direction to the target (active sensors) • Direction to the target only (passive and semi-active sensors) • Imaging of the object • Non-imaging. See "Sensors.ppt" for a detailed description.
  • 18. 18 SENSORS SOLO — Introduction: Sensor Processes. 1. Search Phase: a search for predefined targets is performed, covering a predefined (or cued) space region. The angular coverage may be performed by • scanning (mechanically/electronically) the space region (radar, EO sensors), or • steering toward the space region (EO sensors, sonar). Radar systems can also search in range and range-rate. 2. Detection Phase: the predefined target is detected and extracted from the noise and the background using target properties that differentiate it, such as • target intensity (radar, EO sensors, sonars), • target kinematics relative to the background (radar, EO sensors, sonars), • target shape (EO sensors, radar). The sensor can use one or a combination of these methods. There is a probability that a false target will be detected; therefore two quantities define the detection performance: • the probability of detection ($\le 1$) and • the probability of false alarm.
  • 19. 19 SENSORS SOLO — Example: Airborne Electronically Scanned Antenna.
  • 20. 20 SENSORS SOLO — Introduction: Sensor Processes (continue – 1). 3. Identification Phase: the target of interest is differentiated from the other detected targets. 4. Acquisition Phase: we check that detection and identification have occurred for a number of search frames, and initialize the Track Phase. 5. Track Phase: the sensor updates the history of each target (track file), associating the data in the present frame to the previous histories. This phase continues until target detection has been unavailable for a predefined number of frames (search → detect → identify target → acquire → track → reacquire or end-of-track).
  • 22. 22 SOLO Generic Airborne Radar Block Diagram. Antenna: transmits and receives electromagnetic energy. T/R (circulator): isolates the transmitting and receiving channels. REF: generates and controls all radar frequencies. XMTR: transmits the high-power EM radar signal. Receiver: receives the returned radar power, filters it and down-converts it to base band for digitization through the A/D. Digital signal processor: processes the digitized signal to enhance the target of interest versus all others (clutter). Power supply: supplies power to all radar components. Radar central computer: controls all radar units' activities, according to pilot commands and avionics data, and provides output to the pilot displays and avionics.
  • 28. 28 SOLO E-O and IR Systems Payloads. See "E-O & IR Systems Payloads.ppt" for a detailed presentation.
  • 29. 29 SOLO E-O and IR Systems Payloads. TASE gimbal family (0.55 kg, 0.9 kg, 1.06 kg, 2.27 kg): small, lightweight gimbals which come standard with rich features such as built-in moving maps, geo-pointing and geo-locating. Cloud Cap gimbals are robust and proven, with over 300 gimbals sold to date. Complete with command/control/record software and joystick steering, Cloud Cap gimbals are ideal for surveillance, inspection, law enforcement, fire fighting, and environmental monitoring. (A comparison table of specifications for the TASE family of gimbals is available.)
  • 30. 30 SOLO RAFAEL LITENING Multi-Sensor, Multi-Mission Targeting & Navigation Pod E-O and IR Systems Payloads
  • 31. 31 SOLO RAFAEL RECCELITE Real-Time Tactical Reconnaissance System E-O and IR Systems Payloads
  • 32. 32 SOLO E-O and IR Systems Payloads
  • 33. 33 SENSORS SOLO — SEEKERS: IR Seeker Components. • Electro-optical dome • Telescope & optics • Electro-optical detector • Electronics • Cooling system • Gimbal system: gimbal servo motors; gimbal angular sensors (potentiometers, resolvers or encoders); telescope inertial angular-rate sensors • Signal processing algorithms • Image processing algorithms • Seeker control logics & algorithms. (Figure: seeker block diagram — E.O. dome, telescope optics and detector/dewar on a gimbal with torquers, angle sensors and rate gyro; detector electronics, signal and image processing produce tracking errors that drive the seeker servo; seeker logics & control exchange gimbal angles, estimated LOS rate, and missile commands & body inertial data.)
  • 34. 34 SENSORS SOLO — Decision/Detection Theory. Decision theory deals with decisions that must be taken with imperfect, noise-contaminated data. In decision theory the various possible events that can occur are characterized as hypotheses; for example, the presence or absence of a signal in a noisy waveform may be viewed as two alternative, mutually exclusive hypotheses. The object of statistical decision theory is to formulate a decision rule that operates on the received data to decide which hypothesis, among the possible hypotheses, gives the optimal (for a given criterion) decision. The noise-contaminated data (signal) can be classified as • a continuous stream of data (voice, images, …) or • a discrete-time stream of data (radar, sonar, laser, …). Another classification of the noise-contaminated data (signal) is • known signals (radar/laser pulses defined by carrier frequency, width, coding, …) and • known signals with random parameters with known statistics.
  • 35. 35 Decision/Detection Theory SOLO — Binary Detection. Hypotheses: $H_0$ – target is not present; $H_1$ – target is present. $p(H_0)$, $p(H_1)$: probabilities that the target is not present / present; $p(H_0|z)$: probability that the target is not present and not declared (correct decision); $p(H_1|z)$: probability that the target is present and declared (correct decision). Using Bayes' rule, with $p(z)$ the probability density of the event $z\in Z$:
$$p(H_0) = \int_Z p(H_0|z)\,p(z)\,dz,\qquad p(H_1) = \int_Z p(H_1|z)\,p(z)\,dz$$
Since $p(z) > 0$, the decision rule is: declare the target ($H_1$) if $p(H_1|z) > p(H_0|z)$, and do not declare it ($H_0$) otherwise:
$$p(H_1|z)\ \underset{H_0}{\overset{H_1}{\gtrless}}\ p(H_0|z)$$
  • 36. 36 Decision/Detection Theory SOLO — Binary Detection (continue). Using Bayes' rule again,
$$p(H_1|z) = \frac{p(z|H_1)\,p(H_1)}{p(z)},\qquad p(H_0|z) = \frac{p(z|H_0)\,p(H_0)}{p(z)}$$
where $p(z|H_0)$ and $p(z|H_1)$ are the a priori probability densities of $z$ when the target is absent ($H_0$) or present ($H_1$). Since all probabilities are non-negative, the decision rule becomes
$$\frac{p(z|H_1)}{p(z|H_0)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ \frac{p(H_0)}{p(H_1)}$$
  • 37. 37 Decision/Detection Theory SOLO — Detection Probabilities. $p(z|H_1)$: a priori probability density that the target is present (likelihood of $H_1$); $p(z|H_0)$: a priori probability density that the target is absent (likelihood of $H_0$); $P_D$: probability of detection (the target is present and declared); $P_{FA}$: probability of false alarm (the target is absent but declared); $P_M$: probability of miss (the target is present but not declared); $z_T$: detection threshold.
$$P_D = \int_{z_T}^{\infty} p(z|H_1)\,dz = 1-P_M,\qquad P_{FA} = \int_{z_T}^{\infty} p(z|H_0)\,dz$$
Likelihood Ratio Test (LRT):
$$\Lambda(z) := \frac{p(z|H_1)}{p(z|H_0)}\ \underset{H_0}{\overset{H_1}{\gtrless}}\ T = \frac{p(H_0)}{p(H_1)}$$
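As a worked illustration of these integrals, a sketch for the classical Gaussian case — $z\sim N(0,s^2)$ under $H_0$ and $z\sim N(m,s^2)$ under $H_1$; the scenario values are assumptions, not from the slides:

```python
from scipy.stats import norm

m, s, z_T = 2.0, 1.0, 1.5              # assumed signal mean, noise sigma, threshold
P_FA = norm.sf(z_T, loc=0.0, scale=s)  # tail integral of p(z|H0) above z_T
P_D  = norm.sf(z_T, loc=m,   scale=s)  # tail integral of p(z|H1) above z_T
P_M  = 1.0 - P_D                       # miss probability
```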
  • 38. 38 Decision/Detection Theory SOLO — Decision Criteria for the Threshold T. 1. Bayes criterion: the optimal choice for the likelihood ratio test is $T_{Bayes} = p(H_0)/p(H_1)$; this assumes knowledge of $p(H_0)$ and $p(H_1)$, which in general are not known a priori. 2. Maximum likelihood criterion: since $p(H_0)$ and $p(H_1)$ are not known a priori, choose $T_{ML} = 1$, i.e. declare $H_1$ when $p(z|H_1)/p(z|H_0) > 1$.
  • 39. 39 Decision/Detection Theory SOLO — Decision Criteria for the Threshold T (continue). 3. Neyman-Pearson criterion (J. Neyman, 1894–1981; E. S. Pearson, 1895–1980): maximize the probability of detection $P_D$ while keeping the probability of false alarm $P_{FA}$ constant:
$$\max_{z_T} P_D = \max_{z_T}\int_{z_T}^{\infty} p(z|H_1)\,dz\quad\text{subject to}\quad P_{FA} = \int_{z_T}^{\infty} p(z|H_0)\,dz = \text{const}$$
Using the Lagrange multiplier $\lambda$ to add the constraint,
$$\max_{z_T} G = \max_{z_T}\left[\int_{z_T}^{\infty} p(z|H_1)\,dz + \lambda\left(\int_{z_T}^{\infty} p(z|H_0)\,dz - P_{FA}\right)\right]$$
The maximum is obtained for
$$\frac{\partial G}{\partial z_T} = -p(z_T|H_1)-\lambda\,p(z_T|H_0) = 0\ \Rightarrow\ \frac{p(z_T|H_1)}{p(z_T|H_0)} = -\lambda =: T_{NP}$$
where $z_T$ is defined by requiring $P_{FA} = \int_{z_T}^{\infty} p(z|H_0)\,dz$.
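A sketch of the Neyman-Pearson recipe for the same assumed Gaussian pair: fix $P_{FA}$, solve the constraint for the threshold $z_T$, then read off $P_D$:

```python
from scipy.stats import norm

m, s, P_FA = 2.0, 1.0, 1e-3             # assumed values
z_T = norm.isf(P_FA, loc=0.0, scale=s)  # threshold that yields the required P_FA
P_D = norm.sf(z_T, loc=m, scale=s)      # resulting detection probability
```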
  • 40. 40 SOLO — Filtering and Prediction. For each track, the filter processes the data associated to the track, filters the target state (position, and possibly velocity and acceleration) from noise, and predicts the target state and errors (covariance matrix) at the next sensor measurement. (Figure: real vs. estimated trajectory with the predicted and updated error covariances at the measurement events.)
  • 41. 41 SOLO Discrete Filter/Predictor Architecture. The discrete representation of the system is
$$x(k+1) = F(k)\,x(k) + G(k)\,u(k) + v(k)$$
$$z(k+1) = H(k+1)\,x(k+1) + w(k+1)$$
where $x(k)$ is the system state vector, $u(k)$ the system control input, $v(k)$ the unknown system dynamics (assumed white Gaussian), $w(k)$ the measurement noise (assumed white Gaussian), and $k$ the discrete time counter.
  • 42. 42 SOLO Discrete Filter/Predictor Architecture (continue – 1). 1. The output of the filter/predictor can be at a higher rate than the input (measurements): $T_{measurement} = m\,T_{output}$, $m$ integer. 2. Between measurements it performs state prediction:
$$\hat x(k+1|k) = F(k)\,\hat x(k|k) + G(k)\,u(k),\qquad \hat z(k+1|k) = H(k+1)\,\hat x(k+1|k)$$
3. At measurements it performs the state update:
$$\nu(k+1) = z(k+1)-\hat z(k+1|k)$$
$$\hat x(k+1|k+1) = \hat x(k+1|k) + K(k+1)\,\nu(k+1)$$
where $\nu(k)$ is the innovation and $K(k)$ the filter gain.
  • 43. 43 SOLO Discrete Filter/Predictor Architecture (continue – 2). The way the filter gain $K(k)$ is defined determines the filter properties. 1. $K(k)$ can be chosen to satisfy bandwidth requirements; since we have a linear time-invariant system, a constant $K(k)$ may be chosen — this is a Luenberger observer. 2. Since we have a linear time-invariant system, if we assume white Gaussian system and measurement disturbances, the Kalman filter provides the optimal filter/predictor; an important byproduct is the error covariances. 3. The filter gain $K(k)$ can be chosen as the steady-state value of the Kalman filter gain.
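A minimal sketch of the predict/update cycle above with a fixed gain $K$ (choices 1 or 3); the matrices and the gain value are illustrative assumptions, not taken from the slides:

```python
import numpy as np

T = 0.1
F = np.array([[1.0, T], [0.0, 1.0]])  # constant-velocity transition
G = np.array([[0.5 * T**2], [T]])     # control-input matrix
H = np.array([[1.0, 0.0]])            # position-only measurement
K = np.array([[0.4], [0.8]])          # fixed gain (e.g., a steady-state Kalman value)

def predict(x_hat, u):
    """x_hat(k+1|k) = F x_hat(k|k) + G u(k)."""
    return F @ x_hat + G @ u

def update(x_pred, z):
    """x_hat(k+1|k+1) = x_hat(k+1|k) + K nu(k+1)."""
    nu = z - H @ x_pred               # innovation
    return x_pred + K @ nu
```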
  • 44. 44 SOLO Statistical State Estimation. Target tracking systems scan periodically with their sensors for targets. They need • to predict the target position at the next scan (in order to be able to re-detect the target and measure its data), and • to perform data association of detections from scan to scan, in order to determine whether a detection belongs to a new or an existing target. (Figure: scans m to m+3 with confirmed tracks #1 and #2, preliminary tracks, new targets and false alarms.) To perform these tasks, target tracking systems use statistical state-estimation theory. Two main methods are commonly used: • the maximum likelihood (ML) method, based on known/assumed statistics prior to the measurements; • the Bayesian approach, based on the known statistics between states and measurements after performing the measurements. Different models are used to describe the target dynamics. Often linear dynamics is enough to describe a dynamical system, but nonlinear models must also be taken into consideration; in many cases the measurement relations to the model states are also nonlinear. The unknown system dynamics and measurement errors are modeled by white Gaussian noise stochastic processes.
  • 45. 45 SOLO Target Models as Markov Processes. A Markov random process (A. A. Markov, 1856–1922) is defined by
$$p\left(x(t),t\,|\,x(\tau),\tau\le t_1\right) = p\left(x(t),t\,|\,x(t_1),t_1\right)$$
i.e., for the random process, the past up to any time $t_1$ is fully summarized by the process at $t_1$. Discrete target dynamic system:
$$x_k = f\left(t_{k-1},x_{k-1},u_{k-1},w_{k-1}\right),\qquad z_k = h\left(t_k,x_k,u_k,v_k\right)$$
where $x$ is the state-space vector ($n\times1$), $u$ the input vector ($m\times1$), $z$ the measurement vector ($p\times1$), $w$ the white input noise ($n\times1$) and $v$ the white measurement noise ($p\times1$). Assumptions — known: the functional forms $f(\cdot)$, $h(\cdot)$; the noise statistics $p(w_k)$, $p(v_k)$; the initial state probability density function (PDF) $p(x_0)$.
  • 46. 46 SOLO Discrete Target Dynamic System as a Markov Process. Using the $k$ discrete ($k=1,2,\dots$) noisy measurements $Z_{1:k} = \{z_1,z_2,\dots,z_k\}$ we want to estimate the hidden state $x_k$ by filtering out the noise; $k$ enumerates the measurement events. The estimator/filter uses some assumptions about the model and an optimization criterion to obtain the estimate of $x_k$ based on the measurements $Z_{1:k}$:
$$\hat x_{k|k} = E\left[x_k\,|\,Z_{1:k}\right]$$
  • 47. 47 SOLO Target Acceleration Models. The equations of motion of a point-mass object are
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix} = \begin{bmatrix}0_{3\times3} & I_{3\times3}\\ 0_{3\times3} & 0_{3\times3}\end{bmatrix}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix} + \begin{bmatrix}0_{3\times3}\\ I_{3\times3}\end{bmatrix}\vec A \quad\text{or}\quad \frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix} = \begin{bmatrix}0 & I & 0\\ 0 & 0 & I\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix}$$
where $\vec R$, $\vec V$, $\vec A$ are the range, velocity and acceleration vectors. Target motion is modeled using the laws of physics; since the target acceleration vector $\vec A$ is not measurable, we assume it is a random process defined by one of the following models: 1. White noise acceleration (nearly constant velocity – nCV). 2. Wiener process acceleration (nearly constant acceleration – nCA). 3. Piecewise (between samples) constant white noise acceleration. 4. Piecewise (between samples) constant Wiener process acceleration (constant jerk – the derivative of acceleration). 5. Singer acceleration model. 6. Constant speed turning model.
  • 48. 48 SOLO Target Acceleration Models — 1. White Noise Acceleration Model (second order, nearly constant velocity, nCV).
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix} = \underbrace{\begin{bmatrix}0 & I\\ 0 & 0\end{bmatrix}}_{A}\begin{bmatrix}\vec R\\ \vec V\end{bmatrix} + \underbrace{\begin{bmatrix}0\\ I\end{bmatrix}}_{B}\tilde w(t),\qquad E[\tilde w(t)] = 0,\ \ E\left[\tilde w(t)\,\tilde w^T(\tau)\right] = q\,\delta(t-\tau)$$
Discrete system: $x(k+1) = \Phi(k)\,x(k) + w(k)$, with
$$\Phi(T) := \exp(AT) = \sum_i \frac{(AT)^i}{i!} = I + AT = \begin{bmatrix}I & T\,I\\ 0 & I\end{bmatrix}\qquad\left(A^n = 0,\ n\ge 2\right)$$
$$Q(k) := E\left[w(k)\,w^T(k)\right] = q\int_0^T \Phi(T-\tau)\,B\,B^T\,\Phi^T(T-\tau)\,d\tau$$
  • 49. 49 SOLO Target Acceleration Models — 1. White Noise Acceleration Model (continue – 1, nCV).
$$Q(k) = q\int_0^T \begin{bmatrix}\tau I\\ I\end{bmatrix}\begin{bmatrix}\tau I & I\end{bmatrix}d\tau = q\begin{bmatrix}T^3/3\,I & T^2/2\,I\\ T^2/2\,I & T\,I\end{bmatrix}$$
Guideline for the choice of the process-noise intensity: the changes in velocity over a sampling period $T$ are of the order of $\sqrt{Q_{22}} = \sqrt{qT}$. For the nearly constant velocity assumed by this model, $q$ must be chosen so that these changes are small compared to the actual velocity $\vec V$.
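A short sketch building the per-axis nCV matrices derived above; $q$ is the continuous process-noise intensity:

```python
import numpy as np

def ncv_matrices(T, q):
    """Per-axis discrete transition and process-noise covariance for the
    white-noise-acceleration (nearly constant velocity) model."""
    F = np.array([[1.0, T],
                  [0.0, 1.0]])
    Q = q * np.array([[T**3 / 3.0, T**2 / 2.0],
                      [T**2 / 2.0, T]])
    return F, Q
```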
  • 50. 50 SOLO Target Acceleration Models — 2. Wiener Process Acceleration Model (third order, nearly constant acceleration, nCA). Since the derivative of acceleration is the jerk, this is also called the white-noise jerk model.
$$\frac{d}{dt}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix} = \underbrace{\begin{bmatrix}0 & I & 0\\ 0 & 0 & I\\ 0 & 0 & 0\end{bmatrix}}_{A}\begin{bmatrix}\vec R\\ \vec V\\ \vec A\end{bmatrix} + \underbrace{\begin{bmatrix}0\\ 0\\ I\end{bmatrix}}_{B}\tilde w(t),\qquad E\left[\tilde w(t)\,\tilde w^T(\tau)\right] = q\,I_{3\times3}\,\delta(t-\tau)$$
Discrete system: $x(k+1) = \Phi(k)\,x(k)+w(k)$, with
$$\Phi(T) := \exp(AT) = I + AT + \frac{(AT)^2}{2} = \begin{bmatrix}I & T\,I & \tfrac{T^2}{2}I\\ 0 & I & T\,I\\ 0 & 0 & I\end{bmatrix}\qquad\left(A^n = 0,\ n\ge 3\right)$$
$$Q(k) = E\left[w(k)\,w^T(k)\right] = q\int_0^T \Phi(T-\tau)\,B\,B^T\,\Phi^T(T-\tau)\,d\tau$$
  • 51. 51 SOLO Target Acceleration Models — 2. Wiener Process Acceleration Model (continue – 1, nCA).
$$Q(k) = q\int_0^T\begin{bmatrix}\tau^2/2\,I\\ \tau\,I\\ I\end{bmatrix}\begin{bmatrix}\tau^2/2\,I & \tau\,I & I\end{bmatrix}d\tau = q\begin{bmatrix}T^5/20\,I & T^4/8\,I & T^3/6\,I\\ T^4/8\,I & T^3/3\,I & T^2/2\,I\\ T^3/6\,I & T^2/2\,I & T\,I\end{bmatrix}$$
Guideline for the choice of the process-noise intensity: the changes in acceleration over a sampling period $T$ are of the order of $\sqrt{qT}$. For the nearly constant acceleration assumed by this model, $q$ must be chosen so that these changes are small compared to the actual acceleration $\vec A$.
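The analogous per-axis builder for the nCA (white-noise jerk) model above:

```python
import numpy as np

def nca_matrices(T, q):
    """Per-axis discrete transition and process-noise covariance for the
    Wiener-process-acceleration (nearly constant acceleration) model."""
    F = np.array([[1.0, T,   T**2 / 2.0],
                  [0.0, 1.0, T],
                  [0.0, 0.0, 1.0]])
    Q = q * np.array([[T**5 / 20.0, T**4 / 8.0, T**3 / 6.0],
                      [T**4 / 8.0,  T**3 / 3.0, T**2 / 2.0],
                      [T**3 / 6.0,  T**2 / 2.0, T]])
    return F, Q
```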
  • 52. 52 SOLO Target Acceleration Models — 3. Piecewise (between samples) Constant White Noise Acceleration Model (2nd order). The acceleration is constant over each sampling interval and white from interval to interval:
$$x(k+1) = \Phi(k)\,x(k) + \Gamma(k)\,w(k),\qquad E\left[w(k)\,w^T(l)\right] = q\,\delta_{kl}$$
$$\Phi(T) = \begin{bmatrix}I & T\,I\\ 0 & I\end{bmatrix},\qquad \Gamma(k)\,w(k) = \int_0^T \Phi(T-\tau)\,B\,d\tau\;w(k) = \begin{bmatrix}\tfrac{T^2}{2}I\\ T\,I\end{bmatrix}w(k)$$
  • 53. 53 SOLO Target Acceleration Models — 3. Piecewise Constant White Noise Acceleration Model (continue – 1).
$$E\left[\Gamma(k)\,w(k)\,w^T(l)\,\Gamma^T(l)\right] = \Gamma\,q\,\Gamma^T\,\delta_{kl} = q\begin{bmatrix}T^4/4\,I & T^3/2\,I\\ T^3/2\,I & T^2\,I\end{bmatrix}\delta_{kl}$$
Guideline for the choice of the process-noise intensity: $\sqrt q$ should be of the order of the maximum acceleration magnitude $a_M$; a practical range is $0.5\,a_M \le \sqrt q \le a_M$.
  • 54. 54 SOLO Target Acceleration Models — 4. Piecewise (between samples) Constant Wiener Process Acceleration Model (constant jerk – a derivative of acceleration).
$$x(k+1) = \Phi(k)\,x(k) + \Gamma(k)\,w(k),\qquad E\left[w(k)\,w^T(l)\right] = q\,\delta_{kl}$$
$$\Phi(T) = \begin{bmatrix}I & T\,I & \tfrac{T^2}{2}I\\ 0 & I & T\,I\\ 0 & 0 & I\end{bmatrix},\qquad \Gamma(k)\,w(k) = \int_0^T\Phi(T-\tau)\,B\,d\tau\;w(k) = \begin{bmatrix}\tfrac{T^2}{2}I\\ T\,I\\ I\end{bmatrix}w(k)$$
  • 55. 55 SOLO Target Acceleration Models — 4. Piecewise Constant Wiener Process Acceleration Model (continue – 1, constant jerk).
$$E\left[\Gamma(k)\,w(k)\,w^T(l)\,\Gamma^T(l)\right] = q\begin{bmatrix}T^4/4\,I & T^3/2\,I & T^2/2\,I\\ T^3/2\,I & T^2\,I & T\,I\\ T^2/2\,I & T\,I & I\end{bmatrix}\delta_{kl}$$
Guideline for the choice of the process-noise intensity: $\sqrt q$ should be of the order of the maximum acceleration increment over a sampling period, $\Delta a_M$; a practical range is $0.5\,\Delta a_M \le \sqrt q \le \Delta a_M$.
  • 56. 56 SOLO Target Acceleration Models — 5. Singer Target Model (R. A. Singer, "Estimating Optimal Tracking Filter Performance for Manned Maneuvering Targets", IEEE Trans. Aerospace & Electronic Systems, Vol. AES-6, July 1970, pp. 473–483). The target acceleration is modeled as a zero-mean random process with exponential autocorrelation
$$R_T(\tau) = E\left[a_T(t)\,a_T(t+\tau)\right] = \sigma_m^2\,e^{-|\tau|/\tau_T}$$
where $\sigma_m^2$ is the variance of the target acceleration and $\tau_T$ the time constant of its autocorrelation ("decorrelation time"). The target acceleration is assumed to be: 1. equal to the maximum acceleration value $a_{max}$ with probability $p_M$, and to $-a_{max}$ with the same probability; 2. equal to zero with probability $p_0$; 3. uniformly distributed on $[-a_{max},a_{max}]$ with the remaining probability $1-2p_M-p_0 > 0$:
$$p(a) = \frac{1-2p_M-p_0}{2a_{max}}\left[u\left(a+a_{max}\right)-u\left(a-a_{max}\right)\right] + p_M\,\delta\left(a-a_{max}\right) + p_M\,\delta\left(a+a_{max}\right) + p_0\,\delta(a)$$
  • 57. 57 SOLO Target Acceleration Models — 5. Singer Target Model (continue – 1). With the mixture density above,
$$E[a] = 0,\qquad \sigma_m^2 = E\left[a^2\right] = 2p_M\,a_{max}^2 + \frac{1-2p_M-p_0}{3}\,a_{max}^2 = \frac{a_{max}^2}{3}\left(1+4p_M-p_0\right)$$
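A one-line worked example of the variance formula; the mixture parameters are assumed values:

```python
a_max, p_M, p_0 = 30.0, 0.1, 0.3                       # assumed: [m/s^2] and probabilities
sigma_m2 = (a_max**2 / 3.0) * (1.0 + 4.0 * p_M - p_0)  # = 330.0 m^2/s^4 for these values
```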
  • 58. 58 SOLO Target Acceleration Models — 6. Target Acceleration Approximation by a Markov Process. Given a continuous linear system $\dot x(t) = F(t)\,x(t) + G(t)\,w(t)$, start with the first-order linear system describing the target acceleration:
$$\dot a_T(t) = -\frac{1}{\tau_T}\,a_T(t) + w(t),\qquad \Phi(t,t_0) = e^{-(t-t_0)/\tau_T},\qquad E[w(t)] = 0,\ \ E\left[w(t)\,w(\tau)\right] = q\,\delta(t-\tau)$$
The covariance propagation $\dot V(t) = F\,V + V\,F^T + G\,Q\,G^T$ gives, for $V_{aa}(t) := E\left[a_T(t)\,a_T(t)\right]$,
$$\dot V_{aa}(t) = -\frac{2}{\tau_T}\,V_{aa}(t) + q$$
  • 59. 59 SOLO Target Acceleration Models — 6. Markov Process Approximation (continue – 1). Solving the variance equation,
$$V_{aa}(t) = V_{aa}(t_0)\,e^{-2(t-t_0)/\tau_T} + \frac{q\,\tau_T}{2}\left(1-e^{-2(t-t_0)/\tau_T}\right)\ \xrightarrow[t\to\infty]{}\ V_{aa}^{steady\ state} = \frac{q\,\tau_T}{2}$$
The autocorrelation is $R_{aa}(t,t+\tau) = V_{aa}(t)\,e^{-|\tau|/\tau_T}$, so in steady state
$$R_{aa}(\tau) = \frac{q\,\tau_T}{2}\,e^{-|\tau|/\tau_T}$$
The model is the shaping filter $H(s) = \dfrac{1}{s+1/\tau_T}$ driven by the white noise $w(t)$, with output $a_T(t)$.
  • 60. 60 SOLO Target Acceleration Models — 6. Markov Process Approximation (continue – 2). The steady-state acceleration variance is $\sigma_a^2 = q\,\tau_T/2$, i.e. $q = 2\sigma_a^2/\tau_T$. $\tau_T$ is the correlation time of the noise $w(t)$ and defines in $R_{aa}(\tau)$ the correlation lag corresponding to $\sigma_a^2/e$. Another way to find $\tau_T$ is by taking the two-sided Laplace transform $\mathcal L_2$ of $R_{aa}(\tau)$:
$$S_{aa}(s) = \mathcal L_2\left\{\frac{q\,\tau_T}{2}\,e^{-|\tau|/\tau_T}\right\} = \frac{q}{\left(1/\tau_T+s\right)\left(1/\tau_T-s\right)} = H(s)\,q\,H(-s)$$
so the power spectrum is
$$S_{aa}(\omega) = \frac{q}{\omega^2 + 1/\tau_T^2}$$
and $\omega_{1/2} = 1/\tau_T$ is the frequency at which the spectrum drops to half its peak value, i.e. $\tau_T = 1/\omega_{1/2}$.
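A sketch of simulating this Markov acceleration at a fixed step T: the exact discrete recursion is $a_{k+1} = e^{-T/\tau_T}a_k + w_k$ with $\mathrm{Var}(w_k) = \sigma_a^2\left(1-e^{-2T/\tau_T}\right)$, which preserves the stationary variance $\sigma_a^2 = q\,\tau_T/2$:

```python
import numpy as np

def simulate_markov_accel(sigma_a, tau, T, n, seed=0):
    """Exact discretization of a_dot = -a/tau + w with stationary std sigma_a."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-T / tau)
    w_std = sigma_a * np.sqrt(1.0 - phi**2)  # keeps Var(a) = sigma_a^2
    a = np.zeros(n)
    for k in range(n - 1):
        a[k + 1] = phi * a[k] + w_std * rng.standard_normal()
    return a
```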
  • 61. 61 SOLO Target Acceleration Models — 7. Constant Speed Turning Model (continuous time). Denote by $\vec V$ and $\vec\omega$ the constant-magnitude velocity vector and the turning-rate vector. Then
$$\vec A := \frac{d\vec V}{dt} = \frac{d\left(V\,\vec 1_V\right)}{dt} = V\,\frac{d\vec 1_V}{dt} = V\left(\vec\omega\times\vec 1_V\right) = \vec\omega\times\vec V$$
$$\frac{d\vec A}{dt} = \vec\omega\times\frac{d\vec V}{dt} = \vec\omega\times\left(\vec\omega\times\vec V\right) = -\omega_0^2\,\vec V\qquad\left(\vec\omega\perp\vec V\right)$$
where $\vec\omega_0 := \dfrac{\vec V_0\times\vec A_0}{V^2}$. Denoting by $\vec P$ the position vector of the vehicle relative to an inertial system,
$$\frac{d}{dt}\begin{bmatrix}\vec P\\ \vec V\\ \vec A\end{bmatrix} = \begin{bmatrix}0 & I & 0\\ 0 & 0 & I\\ 0 & -\omega_0^2\,I & 0\end{bmatrix}\begin{bmatrix}\vec P\\ \vec V\\ \vec A\end{bmatrix}$$
This is the continuous-time constant-speed target model; we want to find $\Phi(T)$ such that $x(t+T) = \Phi(T)\,x(t)$.
  • 62. 62 SOLO Target Acceleration Models — 7. Constant Speed Turning Model (continue – 1). We find $\Phi(T)$ by direct computation of a rotation. Rotate the vector $\vec P_T = \vec{OA}$ around the unit vector $\hat n$ by an angle $\theta$ to obtain the new vector $\vec P = \vec{OB} = \vec{OA} + \vec{AC} + \vec{CB}$. From the geometry of the figure, the chord decomposes into
$$\vec{AC} = \hat n\times\left(\hat n\times\vec P_T\right)\left(1-\cos\theta\right),\qquad \vec{CB} = \hat n\times\vec P_T\,\sin\theta$$
since $|\hat n\times\vec P_T| = |\hat n\times(\hat n\times\vec P_T)| = P_T\sin\nu$, with $\nu$ the angle between $\hat n$ and $\vec P_T$. Therefore (Rodrigues' rotation formula)
$$\vec P = \vec P_T + \hat n\times\vec P_T\,\sin\theta + \hat n\times\left(\hat n\times\vec P_T\right)\left(1-\cos\theta\right)$$
  • 63. 63 SOLO Target Acceleration Models — 7. Constant Speed Turning Model (continue – 2). With $\theta = \omega T$, differentiate the rotation formula:
$$\vec V(T) = \frac{d\vec P}{dT} = \omega\,\hat n\times\vec P_T\,\cos\omega T + \omega\,\hat n\times\left(\hat n\times\vec P_T\right)\sin\omega T$$
$$\vec A(T) = \frac{d\vec V}{dT} = -\omega^2\,\hat n\times\vec P_T\,\sin\omega T + \omega^2\,\hat n\times\left(\hat n\times\vec P_T\right)\cos\omega T$$
With the initial values $\vec P = \vec P(0) = \vec P_T$, $\vec V = \vec V(0) = \omega\,\hat n\times\vec P_T$ and $\vec A = \vec A(0) = \omega^2\,\hat n\times\left(\hat n\times\vec P_T\right)$:
$$\vec P(T) = \vec P + \vec V\,\frac{\sin\omega T}{\omega} + \vec A\,\frac{1-\cos\omega T}{\omega^2}$$
$$\vec V(T) = \vec V\,\cos\omega T + \vec A\,\frac{\sin\omega T}{\omega}$$
$$\vec A(T) = -\vec V\,\omega\,\sin\omega T + \vec A\,\cos\omega T$$
  • 64. 64 SOLO Target Acceleration Models — 7. Constant Speed Turning Model (continue – 3). Discrete-time constant-speed target model:
$$\begin{bmatrix}\vec P(T)\\ \vec V(T)\\ \vec A(T)\end{bmatrix} = \Phi(T)\begin{bmatrix}\vec P\\ \vec V\\ \vec A\end{bmatrix},\qquad \Phi(T) = \begin{bmatrix}I & \dfrac{\sin\omega T}{\omega}\,I & \dfrac{1-\cos\omega T}{\omega^2}\,I\\ 0 & \cos\omega T\,I & \dfrac{\sin\omega T}{\omega}\,I\\ 0 & -\omega\,\sin\omega T\,I & \cos\omega T\,I\end{bmatrix}$$
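A sketch building $\Phi(T)$ per axis directly from the expression above; $\omega$ is the (nonzero) turn rate in rad/s:

```python
import numpy as np

def turn_transition(T, omega):
    """Per-axis [P, V, A] transition of the constant-speed turning model."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1.0, s / omega,  (1.0 - c) / omega**2],
                     [0.0, c,          s / omega],
                     [0.0, -omega * s, c]])
```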
  • 65. 65 SOLO Target Acceleration Models — 7. Constant Speed Turning Model (continue – 4). The inverse of the transition matrix is $\Phi^{-1}(T) = \Phi(-T)$:
$$\Phi^{-1}(T) = \begin{bmatrix}I & -\dfrac{\sin\omega T}{\omega}\,I & \dfrac{1-\cos\omega T}{\omega^2}\,I\\ 0 & \cos\omega T\,I & -\dfrac{\sin\omega T}{\omega}\,I\\ 0 & \omega\,\sin\omega T\,I & \cos\omega T\,I\end{bmatrix}$$
We want to find $\Lambda$ such that $\dfrac{d\Phi(T)}{dT} = \Lambda\,\Phi(T)$, i.e. $\Lambda = \dfrac{d\Phi(T)}{dT}\,\Phi^{-1}(T)$; carrying out the product,
$$\Lambda = \begin{bmatrix}0 & I & 0\\ 0 & 0 & I\\ 0 & -\omega^2\,I & 0\end{bmatrix}$$
We recovered the transition matrix of the continuous case.
  • 66. 66 SOLO Estimation for Static Systems — Optimal Static Estimate. The measurements are $z = H\,x + v$. The optimal procedure to estimate $x$ depends on the amount of knowledge of the process that is initially available. The following estimators are known and used, as a function of the assumed initial knowledge (the amount of assumed initial knowledge increases in this order): 1. Weighted least squares (WLS) & recursive WLS — no statistical knowledge assumed. 2. Markov estimator — known: $\bar v = E[v]$ and $R = E\left[(v-\bar v)(v-\bar v)^T\right]$. 3. Maximum likelihood estimator (MLE) — known: $p(z|x) = L(x|z)$, the likelihood. 4. Bayes estimator — known: $p(x,v)$ or $p(x|Z)$.
  • 67. 67 SOLO Estimation for Static Systems (continue – 1). Parameter vector: full specification of the (static) parameters to be estimated; examples: $x = (\vec R,\vec V)$ or $x = (\vec R,\vec V,\vec a)$ — position, velocity and acceleration 3-D vectors. Measurements: • collected over time and/or space, • affected by noise, • with a (nonlinear/linear) relationship to the parameter vector:
$$z_k = h\left(k,x,v_k\right),\quad k = 1,\dots,K;\qquad x\in\mathbb R^n,\ z_k\in\mathbb R^m$$
Goal: estimate the parameter vector $x$ using all the measurements. Approaches: • treat $x$ as deterministic (LSE, MLE); • treat $x$ as random (MAP estimator, MMSE estimator).
  • 68. 68 SOLO Estimation for Static Systems (continue – 2) — Optimal Weighted Least-Square Estimate. Assume that the set of $p$ measurements $z = (z_1,\dots,z_p)^T$ can be expressed as a linear combination of the elements of a constant vector $x = (x_1,\dots,x_n)^T$ plus a random, additive measurement error $v = (v_1,\dots,v_p)^T$:
$$z = H\,x + v$$
We want to find $\hat x$, the estimate of the constant vector $x$, that minimizes the cost function
$$J = \left(z-H\hat x\right)^T W^{-1}\left(z-H\hat x\right) = \left\|z-H\hat x\right\|^2_{W^{-1}}$$
where $W$ is a Hermitian ($W^H = W$), positive definite weighting matrix. The minimizing $\hat x_0$ is obtained by solving
$$\frac{\partial J}{\partial\hat x} = -2\,H^T W^{-1}\left(z-H\hat x\right) = 0\ \Rightarrow\ \hat x_0 = \left(H^T W^{-1} H\right)^{-1} H^T W^{-1} z$$
This solution minimizes $J$ iff $\dfrac{\partial^2 J}{\partial\hat x^2} = 2\,H^T W^{-1} H$ is positive definite.
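A minimal sketch of the WLS solution above, solving the normal equations without forming explicit inverses:

```python
import numpy as np

def wls(H, W, z):
    """x_hat = (H^T W^-1 H)^-1 H^T W^-1 z."""
    Wi_H = np.linalg.solve(W, H)  # W^-1 H
    Wi_z = np.linalg.solve(W, z)  # W^-1 z
    return np.linalg.solve(H.T @ Wi_H, H.T @ Wi_z)
```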
• 69. 69 SOLO Estimation for Static Systems (continue – 3) — Optimal Weighted Least-Square Estimate (continue – 1).
$$\hat x_0=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}z$$
Since $z=Hx+v$ is random with mean $E\left\{z\right\}=H\,x+\overbrace{E\left\{v\right\}}^{0}=H\,x$, $\hat x_0$ is also random, with mean
$$E\left\{\hat x_0\right\}=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}E\left\{z\right\}=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}H\,x=x$$
Since the mean of the estimate is equal to the estimated parameter, the estimator is unbiased. Using $H^TW^{-1}z=H^TW^{-1}H\,\hat x_0$ we find the minimum value of J:
$$J^{*}=\left(z-H\hat x_0\right)^TW^{-1}\left(z-H\hat x_0\right)=z^TW^{-1}z-\hat x_0^TH^TW^{-1}z=\left\|z\right\|^2_W-\left\|H\hat x_0\right\|^2_W$$
• 70. 70 SOLO Estimation for Static Systems (continue – 4) — Optimal Weighted Least-Square Estimate (continue – 2).
$$J^{*}=\left\|z-H\hat x_0\right\|^2_W=\left\|z\right\|^2_W-\left\|H\hat x_0\right\|^2_W$$
where $\left\|a\right\|^2_W:=a^TW^{-1}a$ is a norm. Using $H^TW^{-1}z=H^TW^{-1}H\,\hat x_0$ we obtain
$$\left(z-H\hat x_0\right)^TW^{-1}\left(H\hat x_0\right)=z^TW^{-1}H\hat x_0-\hat x_0^TH^TW^{-1}H\hat x_0=0$$
This suggests the definition of an inner product of two vectors $a$ and $b$ (relative to the weighting matrix W) as $\left\langle a,b\right\rangle_W:=a^TW^{-1}b$.
Projection Theorem: the Optimal Estimate $\hat x_0$ is such that $H\hat x_0$ is the projection (relative to the weighting matrix W) of $z$ on the $H\,x$ plane, and $\left\|z\right\|^2_W=\left\|z-H\hat x_0\right\|^2_W+\left\|H\hat x_0\right\|^2_W$.
• 71. 72 SOLO Estimation for Static Systems (continue – 5) — Recursive Weighted Least Square Estimate (RWLS). Assume that a first set of measurements $z_0=H_0\,x+v_0$ is available: a linear combination of the elements of a constant vector $x$ plus a random, additive measurement error $v_0$. We found that the optimal estimator minimizing $J_0=\left(z_0-H_0\hat x\right)^TW_0^{-1}\left(z_0-H_0\hat x\right)$ is
$$\hat x_0=\left(H_0^TW_0^{-1}H_0\right)^{-1}H_0^TW_0^{-1}z_0=P_0H_0^TW_0^{-1}z_0,\qquad P_0:=\left(H_0^TW_0^{-1}H_0\right)^{-1}$$
An additional measurement set $z=H\,x+v$ is obtained, and we want to find the optimal estimator $\hat x$ for the complete measurement set. Define the stacked matrices
$$\tilde H:=\begin{pmatrix}H_0\\H\end{pmatrix},\quad \tilde z:=\begin{pmatrix}z_0\\z\end{pmatrix},\quad \tilde W:=\begin{pmatrix}W_0&0\\0&W\end{pmatrix}$$
Therefore
$$\hat x=\left(\tilde H^T\tilde W^{-1}\tilde H\right)^{-1}\tilde H^T\tilde W^{-1}\tilde z=\left(H_0^TW_0^{-1}H_0+H^TW^{-1}H\right)^{-1}\left(H_0^TW_0^{-1}z_0+H^TW^{-1}z\right)$$
• 72. 73 SOLO Estimation for Static Systems (continue – 6) — Recursive Weighted Least Square Estimate (RWLS) (continue – 1). Define
$$P:=\left(P_0^{-1}+H^TW^{-1}H\right)^{-1}=\left(H_0^TW_0^{-1}H_0+H^TW^{-1}H\right)^{-1}$$
By the Inverse Matrix Lemma
$$P=P_0-P_0H^T\left(W+H\,P_0\,H^T\right)^{-1}H\,P_0,\qquad P\,H^TW^{-1}=P_0H^T\left(W+H\,P_0\,H^T\right)^{-1}$$
Then
$$\hat x=P\left(H_0^TW_0^{-1}z_0+H^TW^{-1}z\right)=\left[P_0-P_0H^T\left(W+HP_0H^T\right)^{-1}HP_0\right]H_0^TW_0^{-1}z_0+P\,H^TW^{-1}z$$
• 73. 74 SOLO Estimation for Static Systems (continue – 7) — Recursive Weighted Least Square Estimate (RWLS) (continue – 2). Using $\hat x_0=P_0H_0^TW_0^{-1}z_0$ we obtain the Recursive Weighted Least Square Estimate (RWLS):
$$P^{-1}=P_0^{-1}+H^TW^{-1}H$$
$$\hat x=\hat x_0+P\,H^TW^{-1}\left(z-H\hat x_0\right)$$
The estimator is realized as a feedback loop: the previous estimate $\hat x_0$ (through a delay) is corrected by the weighted residual $P\,H^TW^{-1}\left(z-H\hat x_0\right)$; see the sketch below.
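A minimal sketch of one RWLS step, assuming numpy; the prior $(\hat x_0, P_0)$ summarizes all previous data and $(H, W, z)$ is the new measurement set.

```python
import numpy as np

def rwls_update(x0, P0, H, W, z):
    """One RWLS step for a new measurement set z = H x + v with weighting W."""
    # Inverse-matrix-lemma form: P = P0 - P0 H^T (W + H P0 H^T)^-1 H P0
    K = P0 @ H.T @ np.linalg.inv(W + H @ P0 @ H.T)
    P = P0 - K @ H @ P0
    # Equivalent to x = x0 + P H^T W^-1 (z - H x0), since K = P H^T W^-1
    x = x0 + K @ (z - H @ x0)
    return x, P
```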
• 74. 75 SOLO Estimation for Static Systems (continue – 8) — Recursive Weighted Least Square Estimate (RWLS) (continue – 3), Second Way. We want to prove that, with $P_0:=\left(H_0^TW_0^{-1}H_0\right)^{-1}$ and $\hat x_0:=P_0H_0^TW_0^{-1}z_0$, the total cost decomposes (up to a term independent of $\hat x$) as
$$J=\left(z_0-H_0\hat x\right)^TW_0^{-1}\left(z_0-H_0\hat x\right)+\left(z-H\hat x\right)^TW^{-1}\left(z-H\hat x\right)$$
$$\phantom{J}=\left(\hat x-\hat x_0\right)^TP_0^{-1}\left(\hat x-\hat x_0\right)+\left(z-H\hat x\right)^TW^{-1}\left(z-H\hat x\right)$$
Therefore the first measurement set enters the cost only through the pair $\left(\hat x_0,P_0\right)$, which justifies the recursive form.
• 75. 76 SOLO Estimation for Static Systems — Estimators — Markov Estimate. For the particular vector measurement equation $z_0=H_0\,x+v$, where for the measurement noise we know the mean $\bar v=E\left\{v\right\}$ and the variance $R=E\left\{\left(v-\bar v\right)\left(v-\bar v\right)^T\right\}$, we choose $W=R$ in WLS and obtain
$$\hat x_0=\left(H_0^TR^{-1}H_0\right)^{-1}H_0^TR^{-1}z_0,\qquad P_0:=\left(H_0^TR^{-1}H_0\right)^{-1}$$
In Recursive WLS we obtain, for a new observation $z=H\,x+v$:
$$P^{-1}=P_0^{-1}+H^TR^{-1}H,\qquad \hat x=\hat x_0+P\,H^TR^{-1}\left(z-H\hat x_0\right)$$
RWLS with $W=R$ = Markov Estimate. Return to Table of Content
• 76. 77 SOLO Estimation for Static Systems (continue – 9). Bayesian Approach: compute the posterior Probability Density Function (PDF) of $x$ using the Bayes Formula:
$$p\left(x|Z^k\right)=\frac{p\left(Z^k|x\right)p\left(x\right)}{p\left(Z^k\right)}=\frac{p\left(Z^k|x\right)p\left(x\right)}{\int p\left(Z^k|x\right)p\left(x\right)d\,x}$$
$Z^k=\left\{z_1,\dots,z_k\right\}$ – measurements up to k; $p\left(x\right)$ – prior (before measurement) PDF of $x$; $p\left(Z^k|x\right)=:L\left(Z^k,x\right)$ – likelihood function of $Z^k$ given $x$; $p\left(x|Z^k\right)$ – posterior (after measurement $Z^k$) PDF of $x$.
Likelihood Function: PDF of the measurement conditioned on the parameter vector. Example: $z_k=h\left(x\right)+v_k$, $v\sim\mathcal N\left(0,\sigma_v^2\right)$ i.i.d. (independent, identically distributed), $k=1,\dots,K$:
$$p_{z|x}\left(z_k|x\right)=\mathcal N\left(z_k;h\left(x\right),\sigma_v^2\right),\qquad p_{Z|x}\left(Z^K|x\right)=\prod_{k=1}^{K}p_{z|x}\left(z_k|x\right)$$
• 77. 78 SOLO Estimation for Static Systems (continue – 10). Minimum Mean Square Error (MMSE) Estimator:
$$\hat x^{MMSE}\left(Z^k\right)=\arg\min_{\hat x}E\left\{\left(x-\hat x\right)^T\left(x-\hat x\right)|Z^k\right\}$$
The minimum is given by
$$\frac{\partial}{\partial\hat x}E\left\{\left(x-\hat x\right)^T\left(x-\hat x\right)|Z^k\right\}=-2\,E\left\{x|Z^k\right\}+2\,\hat x=0$$
from which
$$\hat x^{*}=E\left\{x|Z^k\right\}=\int x\,p_{x|Z}\left(x|Z^k\right)d\,x$$
Since the second derivative is positive, this is indeed a minimum. Therefore the MMSE estimator is the conditional mean:
$$\hat x^{MMSE}\left(Z^k\right)=\arg\min_{\hat x}E\left\{\left(x-\hat x\right)^T\left(x-\hat x\right)|Z^k\right\}=E\left\{x|Z^k\right\}=\int x\,p_{x|Z}\left(x|Z^k\right)d\,x$$
• 78. 79 SOLO Estimation for Static Systems (continue – 11). Maximum Likelihood Estimator (MLE): $\hat x^{ML}:=\arg\max_x p_{Z|x}\left(Z^k|x\right)$ • Non-Bayesian Estimator.
Example: $z_k=H\,x+v_k$, with $v$ Gaussian (normal), zero mean:
$$L\left(z,x\right):=p_{z|x}\left(z|x\right)=p_v\left(z-Hx\right)=\frac{1}{\left(2\pi\right)^{p/2}\left|R\right|^{1/2}}\exp\left[-\frac{1}{2}\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)\right]$$
$$\max_x p_{z|x}\left(z|x\right)\ \Leftrightarrow\ \min_x\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)\quad\text{(Weighted Least Squares with }W=R\text{)}$$
$$\frac{\partial}{\partial x}\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)=-2\,H^TR^{-1}\left(z-Hx\right)=0\ \Rightarrow\ \hat x=\left(H^TR^{-1}H\right)^{-1}H^TR^{-1}z$$
$$\frac{\partial^2}{\partial x^2}\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)=2\,H^TR^{-1}H$$
This is a positive-definite matrix; therefore the solution minimizes $\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)$ and maximizes $p_{z|x}\left(z|x\right)$.
• 79. 80 SOLO Estimation for Static Systems (continue – 12). Maximum A Posteriori Estimator (MAP): $\hat x^{MAP}:=\arg\max_x p_{x|Z}\left(x|Z^k\right)=\arg\max_x p_{Z|x}\left(Z^k|x\right)p_x\left(x\right)$ • Bayesian Estimator.
Consider a Gaussian vector $x\sim\mathcal N\left(\bar x,\bar P\right)$ and the measurement $z=H\,x+v$, where the Gaussian noise $v\sim\mathcal N\left(0,R\right)$ is independent of $x$. Then
$$p_x\left(x\right)=\frac{1}{\left(2\pi\right)^{n/2}\left|\bar P\right|^{1/2}}\exp\left[-\frac{1}{2}\left(x-\bar x\right)^T\bar P^{-1}\left(x-\bar x\right)\right]$$
$$p_{z|x}\left(z|x\right)=p_v\left(z-Hx\right)=\frac{1}{\left(2\pi\right)^{p/2}\left|R\right|^{1/2}}\exp\left[-\frac{1}{2}\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)\right]$$
$$p_z\left(z\right)=\frac{1}{\left(2\pi\right)^{p/2}\left|R+H\bar PH^T\right|^{1/2}}\exp\left[-\frac{1}{2}\left(z-H\bar x\right)^T\left(R+H\bar PH^T\right)^{-1}\left(z-H\bar x\right)\right]$$
from which
$$p_{x|z}\left(x|z\right)=\frac{p_{z|x}\left(z|x\right)p_x\left(x\right)}{p_z\left(z\right)}\propto\exp\left\{-\frac{1}{2}\left[\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)+\left(x-\bar x\right)^T\bar P^{-1}\left(x-\bar x\right)-\left(z-H\bar x\right)^T\left(R+H\bar PH^T\right)^{-1}\left(z-H\bar x\right)\right]\right\}$$
• 80. 81 SOLO Estimation for Static Systems (continue – 13) — Maximum A Posteriori Estimator (MAP) (continue – 1). Using
$$\left(R+H\bar PH^T\right)^{-1}=R^{-1}-R^{-1}H\left(\bar P^{-1}+H^TR^{-1}H\right)^{-1}H^TR^{-1}$$
and defining
$$P:=\left(\bar P^{-1}+H^TR^{-1}H\right)^{-1},\qquad x^{*}:=\bar x+P\,H^TR^{-1}\left(z-H\bar x\right)$$
we obtain
$$p_{x|z}\left(x|z\right)=\frac{1}{\left(2\pi\right)^{n/2}\left|P\right|^{1/2}}\exp\left[-\frac{1}{2}\left(x-x^{*}\right)^TP^{-1}\left(x-x^{*}\right)\right]$$
so that $\hat x^{MAP}=\arg\max_x p_{x|z}\left(x|z\right)=x^{*}$. For a diffuse uniform a priori PDF, $p_x\left(x\right)=\text{const}$:
$$\hat x^{MAP}:=\arg\max_x p_{x|Z}\left(x|Z^k\right)=\arg\max_x p_{Z|x}\left(Z^k|x\right)=\hat x^{MLE}$$
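A minimal sketch of the Gaussian MAP estimate just derived, assuming numpy; the inputs are the prior mean/covariance and the linear Gaussian measurement model.

```python
import numpy as np

def map_estimate(x_bar, P_bar, H, R, z):
    """Gaussian MAP: P^-1 = Pbar^-1 + H^T R^-1 H, x* = xbar + P H^T R^-1 (z - H xbar)."""
    Rinv = np.linalg.inv(R)
    P = np.linalg.inv(np.linalg.inv(P_bar) + H.T @ Rinv @ H)
    x_star = x_bar + P @ H.T @ Rinv @ (z - H @ x_bar)
    return x_star, P
```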
• 81. 82 SOLO Estimation for Static Systems — Optimal Static Estimate (Summary). The measurements are $z=H\,x+v$.
1. Weighted Least Square (WLS) & Recursive WLS — no assumption about the noise $v$:
$$J=\left(z-H\hat x\right)^TW^{-1}\left(z-H\hat x\right)=\left\|z-H\hat x\right\|^2_W,\qquad \hat x^{WLS}=\left(H^TW^{-1}H\right)^{-1}H^TW^{-1}z$$
RWLS: $P^{-1}=P_0^{-1}+H^TW^{-1}H$, $\hat x=\hat x_0+P\,H^TW^{-1}\left(z-H\hat x_0\right)$, realized as a recursive estimator with delay.
2. Markov Estimator — assumption about the noise $v$: $\bar v=E\left\{v_k\right\}$ and $R=E\left\{\left(v_k-\bar v\right)\left(v_k-\bar v\right)^T\right\}$ known; RWLS with $W=R$:
$$P^{-1}=P_0^{-1}+H^TR^{-1}H,\qquad \hat x=\hat x_0+P\,H^TR^{-1}\left(z-H\hat x_0\right)$$
• 82. 83 SOLO Estimation for Static Systems — Optimal Static Estimate (Summary, continued).
3. Maximum Likelihood Estimator (MLE) — known likelihood $p_{Z|x}\left(Z|x\right)=:L\left(Z,x\right)$. For $z=H\,x+v$ with Gaussian zero-mean noise:
$$\max_x p_{z|x}\left(z|x\right)\ \Leftrightarrow\ \min_x\left(z-Hx\right)^TR^{-1}\left(z-Hx\right)\ \Rightarrow\ \hat x^{ML}=\left(H^TR^{-1}H\right)^{-1}H^TR^{-1}z$$
4. Bayes Estimator – Maximum A Posteriori Estimator (MAP) — known $p_{x,v}\left(x,v\right)$ or $p_{x|Z}\left(x|Z\right)$:
$$\hat x^{MAP}=\arg\max_x p_{x|Z}\left(x|Z\right)=\arg\max_x p_{Z|x}\left(Z|x\right)p_x\left(x\right)$$
• 83. 84 SOLO Recursive Bayesian Estimation — Introduction. Problem: estimate the state of a nonlinear dynamic stochastic system from noisy measurements. Given a nonlinear discrete stochastic Markovian system, $x_k=f_{k-1}\left(x_{k-1},w_{k-1}\right)$, $z_k=h_k\left(x_k,v_k\right)$, we want to use k discrete measurements $Z_{1:k}=\left\{z_1,z_2,\dots,z_k\right\}$ to estimate the hidden state $x_k$. For this we want to compute the probability of $x_k$ given all the measurements $Z_{1:k}$. If we know $p\left(x_k|Z_{1:k}\right)$, then $x_k$ is estimated using
$$\hat x_{k|k}:=E\left\{x_k|Z_{1:k}\right\}=\int x_k\,p\left(x_k|Z_{1:k}\right)d\,x_k$$
$$P_{k|k}:=E\left\{\left(x_k-\hat x_{k|k}\right)\left(x_k-\hat x_{k|k}\right)^T|Z_{1:k}\right\}=\int\left(x_k-\hat x_{k|k}\right)\left(x_k-\hat x_{k|k}\right)^Tp\left(x_k|Z_{1:k}\right)d\,x_k$$
or, more generally, we can compute all moments of the probability distribution $p\left(x_k|Z_{1:k}\right)$:
$$E\left\{g\left(x_k\right)|Z_{1:k}\right\}=\int g\left(x_k\right)p\left(x_k|Z_{1:k}\right)d\,x_k$$
• 84. 85 SOLO Recursive Bayesian Estimation. To find the expression for $p\left(x_k|Z_{1:k}\right)$ we use the theorem of joint probability (Bayes Rule):
$$p\left(x_k|Z_{1:k}\right)=\frac{p\left(x_k,Z_{1:k}\right)}{p\left(Z_{1:k}\right)}$$
Since $Z_{1:k}=\left\{z_k,Z_{1:k-1}\right\}$:
$$p\left(x_k|Z_{1:k}\right)=\frac{p\left(x_k,z_k,Z_{1:k-1}\right)}{p\left(z_k,Z_{1:k-1}\right)}$$
The numerator of this expression is
$$p\left(x_k,z_k,Z_{1:k-1}\right)=p\left(z_k|x_k,Z_{1:k-1}\right)p\left(x_k|Z_{1:k-1}\right)p\left(Z_{1:k-1}\right)$$
Since the knowledge of $x_k$ supersedes the need for $Z_{1:k-1}=\left\{z_1,\dots,z_{k-1}\right\}$, $p\left(z_k|x_k,Z_{1:k-1}\right)=p\left(z_k|x_k\right)$, and the denominator is $p\left(z_k,Z_{1:k-1}\right)=p\left(z_k|Z_{1:k-1}\right)p\left(Z_{1:k-1}\right)$. Therefore
$$p\left(x_k|Z_{1:k}\right)=\frac{p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)}{p\left(z_k|Z_{1:k-1}\right)}$$
• 85. 86 SOLO Recursive Bayesian Estimation. The final result is
$$p\left(x_k|Z_{1:k}\right)=\frac{p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)}{p\left(z_k|Z_{1:k-1}\right)}$$
Since $p\left(x_k|Z_{1:k}\right)$ is a probability distribution it must satisfy $\int p\left(x_k|Z_{1:k}\right)d\,x_k=1$; therefore
$$p\left(z_k|Z_{1:k-1}\right)=\int p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)d\,x_k$$
and
$$p\left(x_k|Z_{1:k}\right)=\frac{p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)}{\int p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)d\,x_k}$$
This is a recursive relation that needs the value of $p\left(x_k|Z_{1:k-1}\right)$, assuming that $p\left(z_k|x_k\right)$ is obtained from the Markovian system definition.
• 86. 87 SOLO Recursive Bayesian Estimation — Chapman–Kolmogorov Equation. Using $p\left(x_k,x_{k-1}|Z_{1:k-1}\right)=p\left(x_k|x_{k-1},Z_{1:k-1}\right)p\left(x_{k-1}|Z_{1:k-1}\right)$ (Bayes), we obtain
$$p\left(x_k|Z_{1:k-1}\right)=\int p\left(x_k,x_{k-1}|Z_{1:k-1}\right)d\,x_{k-1}=\int p\left(x_k|x_{k-1}\right)p\left(x_{k-1}|Z_{1:k-1}\right)d\,x_{k-1}$$
since the knowledge of $x_{k-1}$ supersedes the need for $Z_{1:k-1}=\left\{z_1,\dots,z_{k-1}\right\}$: $p\left(x_k|x_{k-1},Z_{1:k-1}\right)=p\left(x_k|x_{k-1}\right)$.
(Sydney Chapman, 1888 – 1970; Andrey Nikolaevich Kolmogorov, 1903 – 1987)
• 87. 88 SOLO Recursive Bayesian Estimation — Summary.
0. Initialize with $p\left(x_0\right)$.
1. Prediction phase (before the $z_k$ measurement): using $p\left(x_{k-1}|Z_{1:k-1}\right)$ from time-step k-1 and $p\left(x_k|x_{k-1}\right)$ of the Markov system, compute
$$p\left(x_k|Z_{1:k-1}\right)=\int p\left(x_k|x_{k-1}\right)p\left(x_{k-1}|Z_{1:k-1}\right)d\,x_{k-1}$$
2. Correction step (after the $z_k$ measurement): using $p\left(x_k|Z_{1:k-1}\right)$ from the prediction phase and $p\left(z_k|x_k\right)$ of the Markov system, compute
$$p\left(x_k|Z_{1:k}\right)=\frac{p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)}{\int p\left(z_k|x_k\right)p\left(x_k|Z_{1:k-1}\right)d\,x_k}$$
3. Filtering: $\hat x_{k|k}=E\left\{x_k|Z_{1:k}\right\}=\int x_k\,p\left(x_k|Z_{1:k}\right)d\,x_k$ and $P_{k|k}=\int\left(x_k-\hat x_{k|k}\right)\left(x_k-\hat x_{k|k}\right)^Tp\left(x_k|Z_{1:k}\right)d\,x_k$.
At stage k set k := k+1 and repeat. A grid-based sketch of this recursion is given below.
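A minimal grid-based sketch of the predict/correct recursion above for a scalar state, assuming numpy; the random-walk dynamics, noise levels and measurement sequence are illustrative assumptions, not values from the slides.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 401)            # state grid
dx = x[1] - x[0]
p = np.exp(-0.5 * x**2); p /= p.sum() * dx   # prior p(x0) = N(0, 1)

def gauss(u, s):
    return np.exp(-0.5 * (u / s)**2) / (np.sqrt(2.0 * np.pi) * s)

# Transition kernel p(x_k | x_{k-1}): random walk with sigma_w = 0.5
K = gauss(x[:, None] - x[None, :], 0.5)

for z in [0.8, 1.1, 0.9]:                    # illustrative measurement sequence
    p = (K @ p) * dx                         # prediction: Chapman-Kolmogorov
    p = gauss(z - x, 1.0) * p                # correction: times p(z_k | x_k)
    p /= p.sum() * dx                        # normalize by p(z_k | Z_{1:k-1})
    print((x * p).sum() * dx)                # filtering: conditional mean
```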
• 88. 89 SOLO Review of Probability — Linear Gaussian Systems. A linear combination of independent Gaussian random variables, $S_m:=a_1X_1+a_2X_2+\dots+a_mX_m$, is also a Gaussian random variable.
Proof: the Gaussian density $p_X\left(X_i;\mu_i,\sigma_i\right)=\frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\left[-\frac{\left(X_i-\mu_i\right)^2}{2\sigma_i^2}\right]$ has the moment-generating (characteristic) function
$$\Phi_{X_i}\left(\omega\right):=\int\exp\left(j\,\omega X_i\right)p_X\left(X_i\right)d\,X_i=\exp\left(j\,\mu_i\,\omega-\tfrac{1}{2}\sigma_i^2\,\omega^2\right)$$
Define $Y_i:=a_iX_i$, with $p_{Y_i}\left(Y_i\right)=\frac{1}{\left|a_i\right|}p_{X_i}\left(Y_i/a_i\right)$; then $\Phi_{Y_i}\left(\omega\right)=\Phi_{X_i}\left(a_i\omega\right)=\exp\left(j\,a_i\mu_i\,\omega-\tfrac{1}{2}a_i^2\sigma_i^2\,\omega^2\right)$, and, by independence,
$$\Phi_{S_m}\left(\omega\right)=\prod_{i=1}^{m}\Phi_{Y_i}\left(\omega\right)=\exp\left[j\left(a_1\mu_1+\dots+a_m\mu_m\right)\omega-\tfrac{1}{2}\left(a_1^2\sigma_1^2+\dots+a_m^2\sigma_m^2\right)\omega^2\right]$$
• 89. 90 SOLO Review of Probability — Linear Gaussian Systems (continue). Proof (continue – 1): we found $\Phi_{S_m}\left(\omega\right)=\exp\left(j\,\mu_{S_m}\omega-\tfrac{1}{2}\sigma_{S_m}^2\omega^2\right)$. Therefore the linear combination of independent Gaussian random variables is a Gaussian random variable with
$$\mu_{S_m}=a_1\mu_1+a_2\mu_2+\dots+a_m\mu_m,\qquad \sigma_{S_m}^2=a_1^2\sigma_1^2+a_2^2\sigma_2^2+\dots+a_m^2\sigma_m^2$$
and the $S_m$ probability distribution is
$$p\left(S_m;\mu_{S_m},\sigma_{S_m}\right)=\frac{1}{\sqrt{2\pi}\,\sigma_{S_m}}\exp\left[-\frac{\left(x-\mu_{S_m}\right)^2}{2\sigma_{S_m}^2}\right]\qquad\text{q.e.d.}$$
• 90. 91 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems. A Linear Gaussian Markov System is the special case of $x_k=f_{k-1}\left(x_{k-1},u_{k-1},w_{k-1}\right)$, $z_k=h_k\left(x_k,u_k,v_k\right)$ given by
$$x_k=\Phi_{k-1}x_{k-1}+G_{k-1}u_{k-1}+\Gamma_{k-1}w_{k-1},\qquad z_k=H_kx_k+v_k$$
where $w_{k-1}$ and $v_k$ are white, zero-mean, Gaussian, independent noises:
$$e_w\left(k\right):=w\left(k\right)-\overbrace{E\left\{w\left(k\right)\right\}}^{0},\quad E\left\{e_w\left(k\right)e_w\left(l\right)^T\right\}=Q\left(k\right)\delta_{k,l},\quad p_w\left(w\right)=\mathcal N\left(w;0,Q\right)$$
$$e_v\left(k\right):=v\left(k\right)-\overbrace{E\left\{v\left(k\right)\right\}}^{0},\quad E\left\{e_v\left(k\right)e_v\left(l\right)^T\right\}=R\left(k\right)\delta_{k,l},\quad p_v\left(v\right)=\mathcal N\left(v;0,R\right)$$
$$E\left\{e_w\left(k\right)e_v\left(l\right)^T\right\}=0,\qquad \delta_{k,l}=\begin{cases}1 & k=l\\0 & k\ne l\end{cases}$$
and the initial state is Gaussian: $p_x\left(x_{t_0}\right)=\mathcal N\left(x_{t_0};\hat x_{0|0},P_{0|0}\right)$.
• 91. 92 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 1). Prediction phase (before the $z_k$ measurement): $x_k=\Phi_{k-1}x_{k-1}+G_{k-1}u_{k-1}+\Gamma_{k-1}w_{k-1}$. The expectation is
$$\hat x_{k|k-1}:=E\left\{x_k|Z_{1:k-1}\right\}=\Phi_{k-1}\hat x_{k-1|k-1}+G_{k-1}u_{k-1}+\Gamma_{k-1}\overbrace{E\left\{w_{k-1}|Z_{1:k-1}\right\}}^{0}$$
$$P_{k|k-1}:=E\left\{\left(x_k-\hat x_{k|k-1}\right)\left(x_k-\hat x_{k|k-1}\right)^T|Z_{1:k-1}\right\}=\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T+\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$$
(the cross terms vanish because $w_{k-1}$ is independent of the estimation error $x_{k-1}-\hat x_{k-1|k-1}$). Since $x_k$ is a linear combination of independent Gaussian random variables:
$$p\left(x_k|Z_{1:k-1}\right)=\mathcal N\left(x_k;\hat x_{k|k-1},P_{k|k-1}\right)$$
• 92. 93 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 2). Correction step (after the $z_k$ measurement), 2nd way: from $z_k=H_kx_k+v_k$, $p_v\left(v\right)=\mathcal N\left(v;0,R\right)$, we have
$$\hat z_{k|k-1}=E\left\{z_k|Z_{1:k-1}\right\}=H_k\hat x_{k|k-1}$$
$$P^{zz}_{k|k-1}=E\left\{\left(z_k-\hat z_{k|k-1}\right)\left(z_k-\hat z_{k|k-1}\right)^T|Z_{1:k-1}\right\}=H_kP_{k|k-1}H_k^T+R_k=:S_k$$
$$P^{xz}_{k|k-1}=E\left\{\left(x_k-\hat x_{k|k-1}\right)\left(z_k-\hat z_{k|k-1}\right)^T|Z_{1:k-1}\right\}=P_{k|k-1}H_k^T$$
Define the innovation: $i_k:=z_k-\hat z_{k|k-1}=z_k-H_k\hat x_{k|k-1}$.
• 93. 94 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 3) — Joint and Conditional Gaussian Random Variables. Define $y_k:=\begin{pmatrix}x_k\\z_k\end{pmatrix}$, assumed Gaussian distributed, with (prediction phase, before the $z_k$ measurement, 2nd way, continue – 1)
$$E\left\{y_k|Z_{1:k-1}\right\}=\begin{pmatrix}\hat x_{k|k-1}\\\hat z_{k|k-1}\end{pmatrix},\qquad P^{yy}_{k|k-1}=E\left\{\begin{pmatrix}x_k-\hat x_{k|k-1}\\z_k-\hat z_{k|k-1}\end{pmatrix}\begin{pmatrix}x_k-\hat x_{k|k-1}\\z_k-\hat z_{k|k-1}\end{pmatrix}^T\Bigg|Z_{1:k-1}\right\}=\begin{pmatrix}P^{xx}_{k|k-1} & P^{xz}_{k|k-1}\\P^{zx}_{k|k-1} & P^{zz}_{k|k-1}\end{pmatrix}$$
where $P^{xx}_{k|k-1}=P_{k|k-1}$, $P^{zz}_{k|k-1}=H_kP_{k|k-1}H_k^T+R_k=:S_k$, and $P^{xz}_{k|k-1}=P_{k|k-1}H_k^T$.
• 94. 95 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 4) — Joint and Conditional Gaussian Random Variables (2nd way, continue – 2). We assumed that $y_k=\begin{pmatrix}x_k\\z_k\end{pmatrix}$ is Gaussian distributed:
$$p_{x,z}\left(x_k,z_k|Z_{1:k-1}\right)=\frac{1}{\left(2\pi\right)^{\left(n+p\right)/2}\left|P^{yy}_{k|k-1}\right|^{1/2}}\exp\left[-\frac{1}{2}\left(y_k-\hat y_{k|k-1}\right)^T\left(P^{yy}_{k|k-1}\right)^{-1}\left(y_k-\hat y_{k|k-1}\right)\right]$$
$$p_z\left(z_k|Z_{1:k-1}\right)=\frac{1}{\left(2\pi\right)^{p/2}\left|P^{zz}_{k|k-1}\right|^{1/2}}\exp\left[-\frac{1}{2}\left(z_k-\hat z_{k|k-1}\right)^T\left(P^{zz}_{k|k-1}\right)^{-1}\left(z_k-\hat z_{k|k-1}\right)\right]$$
The conditional probability density function (pdf) of $x_k$ given $z_k$ is therefore
$$p_{x|z}\left(x_k|z_k,Z_{1:k-1}\right)=\frac{p_{x,z}\left(x_k,z_k|Z_{1:k-1}\right)}{p_z\left(z_k|Z_{1:k-1}\right)}=\frac{\left(2\pi\right)^{p/2}\left|P^{zz}_{k|k-1}\right|^{1/2}}{\left(2\pi\right)^{\left(n+p\right)/2}\left|P^{yy}_{k|k-1}\right|^{1/2}}\exp\left(-\frac{1}{2}q\right)$$
with $q=\left(y_k-\hat y_{k|k-1}\right)^T\left(P^{yy}_{k|k-1}\right)^{-1}\left(y_k-\hat y_{k|k-1}\right)-\left(z_k-\hat z_{k|k-1}\right)^T\left(P^{zz}_{k|k-1}\right)^{-1}\left(z_k-\hat z_{k|k-1}\right)$.
• 95. 96 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 5) — Joint and Conditional Gaussian Random Variables (2nd way, continue – 3). Define $\tilde x_k:=x_k-\hat x_{k|k-1}$ and $\tilde z_k:=z_k-\hat z_{k|k-1}$, and write the inverse of the joint covariance in block form:
$$\left(P^{yy}_{k|k-1}\right)^{-1}=\begin{pmatrix}P^{xx}_{k|k-1} & P^{xz}_{k|k-1}\\P^{zx}_{k|k-1} & P^{zz}_{k|k-1}\end{pmatrix}^{-1}=:\begin{pmatrix}T^{xx}_{k|k-1} & T^{xz}_{k|k-1}\\T^{zx}_{k|k-1} & T^{zz}_{k|k-1}\end{pmatrix}$$
Then (all blocks evaluated at $k|k-1$)
$$q=\begin{pmatrix}\tilde x_k\\\tilde z_k\end{pmatrix}^T\begin{pmatrix}T^{xx} & T^{xz}\\T^{zx} & T^{zz}\end{pmatrix}\begin{pmatrix}\tilde x_k\\\tilde z_k\end{pmatrix}-\tilde z_k^T\left(P^{zz}\right)^{-1}\tilde z_k=\tilde x_k^TT^{xx}\tilde x_k+\tilde x_k^TT^{xz}\tilde z_k+\tilde z_k^TT^{zx}\tilde x_k+\tilde z_k^T\left[T^{zz}-\left(P^{zz}\right)^{-1}\right]\tilde z_k$$
• 96. 97 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 6) — Joint and Conditional Gaussian Random Variables (2nd way, continue – 4). Using the block Inverse Matrix Lemma
$$\begin{pmatrix}A_{m\times m} & D_{m\times n}\\C_{n\times m} & B_{n\times n}\end{pmatrix}^{-1}=\begin{pmatrix}\left(A-DB^{-1}C\right)^{-1} & -\left(A-DB^{-1}C\right)^{-1}DB^{-1}\\-B^{-1}C\left(A-DB^{-1}C\right)^{-1} & B^{-1}+B^{-1}C\left(A-DB^{-1}C\right)^{-1}DB^{-1}\end{pmatrix}$$
on the block partition of $P^{yy}_{k|k-1}$, the T blocks satisfy
$$\left(T^{xx}\right)^{-1}=P^{xx}-P^{xz}\left(P^{zz}\right)^{-1}P^{zx},\qquad \left(T^{xx}\right)^{-1}T^{xz}=-P^{xz}\left(P^{zz}\right)^{-1},\qquad T^{zz}-\left(P^{zz}\right)^{-1}=T^{zx}\left(T^{xx}\right)^{-1}T^{xz}$$
• 97. 98 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 7) — Joint and Conditional Gaussian Random Variables (2nd way, continue – 5). Completing the square in q with the relations above gives
$$q=\left(\tilde x_k-K_k\tilde z_k\right)^TT^{xx}_{k|k-1}\left(\tilde x_k-K_k\tilde z_k\right),\qquad K_k:=-\left(T^{xx}_{k|k-1}\right)^{-1}T^{xz}_{k|k-1}=P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}$$
so that
$$p_{x|z}\left(x_k|z_k\right)=\frac{\left|P^{zz}_{k|k-1}\right|^{1/2}}{\left(2\pi\right)^{n/2}\left|P^{yy}_{k|k-1}\right|^{1/2}}\exp\left[-\frac{1}{2}\left(\tilde x_k-K_k\tilde z_k\right)^TT^{xx}_{k|k-1}\left(\tilde x_k-K_k\tilde z_k\right)\right]$$
with $\tilde x_k-K_k\tilde z_k=\left(x_k-\hat x_{k|k-1}\right)-K_k\left(z_k-\hat z_{k|k-1}\right)$.
• 98. 99 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 8) — Joint and Conditional Gaussian Random Variables (2nd way, continue – 6). From this we can see that $p_{x|z}\left(x_k|z_k\right)$ is Gaussian with
$$E\left\{x_k|z_k\right\}=\hat x_{k|k-1}+\overbrace{P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}}^{K_k}\left(z_k-\hat z_{k|k-1}\right)$$
$$P_{k|k}=E\left\{\left(x_k-\hat x_{k|k}\right)\left(x_k-\hat x_{k|k}\right)^T|Z_{1:k}\right\}=\left(T^{xx}_{k|k-1}\right)^{-1}=P^{xx}_{k|k-1}-P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}P^{zx}_{k|k-1}=P_{k|k-1}-K_kP^{zz}_{k|k-1}K_k^T$$
where, as before, $P^{xx}_{k|k-1}=P_{k|k-1}$, $P^{zz}_{k|k-1}=R_k+H_kP_{k|k-1}H_k^T=:S_k$, $P^{xz}_{k|k-1}=P_{k|k-1}H_k^T$.
• 99. 100 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 9) — Joint and Conditional Gaussian Random Variables (2nd way, continue – 7). Substituting the blocks, we can see that
$$K_k=P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}=P_{k|k-1}H_k^T\left(H_kP_{k|k-1}H_k^T+R_k\right)^{-1}=P_{k|k-1}H_k^TS_k^{-1}$$
$$P_{k|k}=P_{k|k-1}-K_kS_kK_k^T$$
or, equivalently (information form),
$$P_{k|k}^{-1}=\left[P_{k|k-1}-P_{k|k-1}H_k^T\left(H_kP_{k|k-1}H_k^T+R_k\right)^{-1}H_kP_{k|k-1}\right]^{-1}=P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k$$
• 100. 101 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 10) — Relation Between 1st and 2nd Ways. We found that the optimal gain is (2nd way)
$$K_k=P_{k|k-1}H_k^T\left(R_k+H_kP_{k|k-1}H_k^T\right)^{-1}$$
If $R_k^{-1}$ and $P_{k|k-1}^{-1}$ exist, the Inverse Matrix Lemma gives
$$\left(R_k+H_kP_{k|k-1}H_k^T\right)^{-1}=R_k^{-1}-R_k^{-1}H_k\left(P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1}H_k^TR_k^{-1}$$
so that, with $P_{k|k}=\left(P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1}$,
$$K_k=P_{k|k-1}H_k^T\left[R_k^{-1}-R_k^{-1}H_k\left(P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1}H_k^TR_k^{-1}\right]=\left(P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1}H_k^TR_k^{-1}=P_{k|k}H_k^TR_k^{-1}$$
i.e. 1st way = 2nd way.
• 101. 102 SOLO Recursive Bayesian Estimation — Closed-Form Solutions of Estimation. Closed-form solutions for the Optimal Recursive Bayesian Estimation can be derived only for special cases. The most important case:
• Dynamic and measurement models are linear: $x_k=\Phi_{k-1}x_{k-1}+G_{k-1}u_{k-1}+\Gamma_{k-1}w_{k-1}$, $z_k=H_kx_k+v_k$
• Random noises are Gaussian: $p_w\left(w\right)=\mathcal N\left(w;0,Q\right)$, $p_v\left(v\right)=\mathcal N\left(v;0,R\right)$
• Solution: KALMAN FILTER
• In other nonlinear/non-Gaussian cases: USE APPROXIMATIONS
• 102. 103 SOLO Recursive Bayesian Estimation — Closed-Form Solutions of Estimation (continue – 1).
• Dynamic and measurement models are linear: $x_k=\Phi_{k-1}x_{k-1}+G_{k-1}u_{k-1}+\Gamma_{k-1}w_{k-1}$, $z_k=H_kx_k+v_k$, with zero-mean white noises of covariances $Q\left(k\right)$ and $R\left(k\right)$.
• The Optimal Estimator is the Kalman Filter, developed by R. E. Kalman in 1960 (Rudolf E. Kalman, 1920 – ).
• The K.F. is an Optimal Estimator in the Minimum Mean Square Error (MMSE) sense if:
  - state and measurement models are linear
  - the random elements are Gaussian
• Under those conditions, the covariance matrix is:
  - independent of the state (can be calculated off-line)
  - equal to the Cramér–Rao lower bound
• 103. 104 SOLO Kalman Filter — State Estimation in a Linear System (one cycle).
0. Initialization: $\hat x_{0|0}=E\left\{x_0\right\}$, $P_{0|0}=E\left\{\left(x_0-\hat x_0\right)\left(x_0-\hat x_0\right)^T\right\}$
1. State vector prediction: $\hat x_{k|k-1}=\Phi_{k-1}\hat x_{k-1|k-1}+G_{k-1}u_{k-1}$
2. Covariance matrix extrapolation: $P_{k|k-1}=\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T+\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$
3. Innovation covariance: $S_k=H_kP_{k|k-1}H_k^T+R_k$
4. Gain matrix computation: $K_k=P_{k|k-1}H_k^TS_k^{-1}$
5. Measurement & innovation: $i_k=z_k-\hat z_{k|k-1}=z_k-H_k\hat x_{k|k-1}$
6. Filtering: $\hat x_{k|k}=\hat x_{k|k-1}+K_ki_k$
7. Covariance matrix updating:
$$P_{k|k}=P_{k|k-1}-P_{k|k-1}H_k^TS_k^{-1}H_kP_{k|k-1}=P_{k|k-1}-K_kS_kK_k^T=\left(I-K_kH_k\right)P_{k|k-1}=\left(I-K_kH_k\right)P_{k|k-1}\left(I-K_kH_k\right)^T+K_kR_kK_k^T$$
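A minimal sketch of one Kalman-filter cycle following steps 1–7 above, assuming numpy; the Joseph form of step 7 is used because it preserves symmetry and positive definiteness numerically.

```python
import numpy as np

def kf_cycle(x, P, z, Phi, G, u, Gamma, Q, H, R):
    """One Kalman filter cycle: prediction (1-2) and update (3-7)."""
    # 1-2: state and covariance prediction
    x_pred = Phi @ x + G @ u
    P_pred = Phi @ P @ Phi.T + Gamma @ Q @ Gamma.T
    # 3-4: innovation covariance and gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # 5-6: innovation and state update
    i = z - H @ x_pred
    x_upd = x_pred + K @ i
    # 7: covariance update (Joseph form)
    I = np.eye(P.shape[0])
    P_upd = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T
    return x_upd, P_upd
```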
• 104. 105 SOLO Kalman Filter — State Estimation in a Linear System (one cycle).
[Block diagram: the evolution of the true system $x_k=\Phi_{k-1}x_{k-1}+G_{k-1}u_{k-1}+\Gamma_{k-1}w_{k-1}$ with measurement $z_k=H_kx_k+v_k$, run in parallel with the state estimation and the covariance/gain computations of steps 0–7 above; the controller closes the loop and the innovation $i_k=z_k-\hat z_{k|k-1}$ drives the update. Real vs. estimated trajectory shown over time $t_{k-1}\to t_k$.]
References: Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986; Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999.
• 105. 106 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 11) — Innovation in a Kalman Filter. The innovation is the quantity $i_k:=z_k-H_k\hat x_{k|k-1}=z_k-\hat z_{k|k-1}$. We found that
$$E\left\{i_k|Z_{1:k-1}\right\}=E\left\{z_k-\hat z_{k|k-1}|Z_{1:k-1}\right\}=E\left\{z_k|Z_{1:k-1}\right\}-\hat z_{k|k-1}=0$$
$$E\left\{i_ki_k^T|Z_{1:k-1}\right\}=E\left\{\left(z_k-\hat z_{k|k-1}\right)\left(z_k-\hat z_{k|k-1}\right)^T|Z_{1:k-1}\right\}=R_k+H_kP_{k|k-1}H_k^T=:S_k$$
Using the smoothing property of the expectation,
$$E\left\{E\left\{x|y\right\}\right\}=\int\left[\int x\,p_{X|Y}\left(x|y\right)d\,x\right]p_Y\left(y\right)d\,y=\int\int x\,p_{XY}\left(x,y\right)d\,x\,d\,y=E\left\{x\right\}$$
we have $E\left\{i_ki_j^T\right\}=E\left\{E\left\{i_ki_j^T|Z_{1:k-1}\right\}\right\}$. Assuming, without loss of generality, that $k-1\ge j$, the innovation $i_j$ is determined by $Z_{1:k-1}$ and can be taken outside the inner expectation:
$$E\left\{i_ki_j^T\right\}=E\left\{\overbrace{E\left\{i_k|Z_{1:k-1}\right\}}^{0}i_j^T\right\}=0\qquad\left(k\ne j\right)$$
• 106. 107 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 12) — Innovation in a Kalman Filter (continue – 1). Collecting the results for the innovation $i_k:=z_k-H_k\hat x_{k|k-1}=z_k-\hat z_{k|k-1}$:
$$E\left\{i_k|Z_{1:k-1}\right\}=0,\qquad E\left\{i_ki_k^T|Z_{1:k-1}\right\}=S_k,\qquad E\left\{i_ki_j^T\right\}=0\ \left(k\ne j\right)$$
i.e. $E\left\{i_ki_j^T\right\}=S_k\,\delta_{kj}$. The uncorrelatedness property of the innovations implies that, since they are Gaussian, the innovations are independent of each other and thus the innovation sequence is strictly white. Without the Gaussian assumption, the innovation sequence is wide-sense white.
Thus the innovation sequence is zero mean and white for the Kalman (Optimal) Filter. The innovation of the Kalman (Optimal) Filter extracts all the available information from the measurement, leaving only zero-mean white noise in the measurement residual.
• 107. 108 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 13) — Innovation in a Kalman Filter (continue – 2). Define the quantity $\chi^2_{n_z}:=i_k^TS_k^{-1}i_k$. Since $S_k$ is symmetric and positive definite, it can be written as
$$S_k=T_kD_{S_k}T_k^H,\quad T_kT_k^H=I,\quad D_{S_k}=\mathrm{diag}\left(\lambda_1,\dots,\lambda_{n_z}\right),\ \lambda_i>0$$
$$S_k^{-1}=T_kD_{S_k}^{-1}T_k^H,\qquad S_k^{-1/2}:=T_kD_{S_k}^{-1/2}T_k^H,\quad D_{S_k}^{-1/2}=\mathrm{diag}\left(\lambda_1^{-1/2},\dots,\lambda_{n_z}^{-1/2}\right)$$
Let $u_k:=S_k^{-1/2}i_k$. Since $i_k$ is Gaussian, $u_k$ (a linear combination of the $n_z$ components of $i_k$) is Gaussian too, with
$$E\left\{u_k\right\}=S_k^{-1/2}\overbrace{E\left\{i_k\right\}}^{0}=0,\qquad E\left\{u_ku_k^T\right\}=S_k^{-1/2}\overbrace{E\left\{i_ki_k^T\right\}}^{S_k}S_k^{-1/2}=I_{n_z}$$
where $I_{n_z}$ is the identity matrix of size $n_z$. Therefore, since the covariance matrix of $u$ is diagonal, its components $u_i$ are uncorrelated and, since they are jointly Gaussian, they are also independent:
$$\chi^2_{n_z}=i_k^TS_k^{-1}i_k=u_k^Tu_k=\sum_{i=1}^{n_z}u_i^2,\qquad u_i\sim\mathcal N\left(0,1\right)$$
Therefore $\chi^2_{n_z}$ is chi-square distributed with $n_z$ degrees of freedom.
• 108. 109 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 14) — Kalman Filter Initialization. To initialize the Kalman Filter (state vector prediction $\hat x_{k|k-1}=\Phi_{k-1}\hat x_{k-1|k-1}+G_{k-1}u_{k-1}$, covariance extrapolation $P_{k|k-1}=\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T+\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$) we need to know $\hat x_{0|0}$ and $P_{0|0}$.
According to the Bayesian model, the true initial state is a Gaussian random variable $\mathcal N\left(x_0;\hat x_{0|0},P_{0|0}\right)$. The chi-square test for the initial condition error is
$$\left(x_0-\hat x_{0|0}\right)^TP_{0|0}^{-1}\left(x_0-\hat x_{0|0}\right)\le c_1$$
where $c_1$ is the upper limit of the, say, 95% confidence region from the chi-square distribution with $n_x$ degrees of freedom.
• 109. 110 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 15) — Kalman Filter Initialization (continue). $\hat x_{0|0}$ and $P_{0|0}$ can be estimated using at least two measurements. From the first measurement $z_1$, using Least Squares, we obtain
$$\hat x_1=\left(H_1^TR_1^{-1}H_1\right)^{-1}H_1^TR_1^{-1}z_1$$
Predictions before the second measurement: $\hat x_{2|1}=\Phi_1\hat x_1$ and $\hat z_{2|1}=H_2\hat x_{2|1}$, with innovation covariance $S_2=H_2P_{2|1}H_2^T+R_2$.
The Preliminary Track Gate used for the second measurement is determined from the worst-case target conditions, including maneuver and data miscorrelations. Return to Table of Content
• 110. 111 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 16) — Strategies for Kalman Filter Initialization (First Step).
[Figure: preliminary track-gate shapes for four cases of a priori velocity constraints — MAX SPEED SPECIFIED; MAX, MIN SPEED SPECIFIED; MAX SPEED and TURNING RATE SPECIFIED; MAX, MIN SPEED and TURNING RATE SPECIFIED — with gate dimensions set by $T\hat V_{max}$, $T V_{min}$ and $\omega_{max}T^2/2$.] Return to Table of Content
• 111. 112 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 17) — Information Kalman Filter. For some applications (such as bearings-only tracking) the initial state covariance matrix $P_{0|0}$ may be very large. As a result, the Kalman Filter formulation can encounter numerical problems. For those cases it is better to use a formulation with $P_{0|0}^{-1}$.
First version: change only the covariance matrix computations. Start with:
$$P_{k|k}^{-1}=P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k,\qquad K_k=P_{k|k}H_k^TR_k^{-1}$$
$$P_{k|k-1}^{-1}=\left(\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T+Q_{k-1}\right)^{-1}\overset{\text{Inverse Matrix Lemma}}{=}Q_{k-1}^{-1}-Q_{k-1}^{-1}\Phi_{k-1}\left(P_{k-1|k-1}^{-1}+\Phi_{k-1}^TQ_{k-1}^{-1}\Phi_{k-1}\right)^{-1}\Phi_{k-1}^TQ_{k-1}^{-1}$$
$$S_k^{-1}=\left(R_k+H_kP_{k|k-1}H_k^T\right)^{-1}\overset{\text{Inverse Matrix Lemma}}{=}R_k^{-1}-R_k^{-1}H_k\left(P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1}H_k^TR_k^{-1}$$
• 112. 113 SOLO Information Kalman Filter (Version 1).
[Block diagram: the Kalman filter cycle of slide 104 with the covariance computations replaced by the information forms — prediction covariance $P_{k|k-1}^{-1}$ via the Inverse Matrix Lemma, innovation covariance $S_k^{-1}=R_k^{-1}-R_k^{-1}H_k\left(P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k\right)^{-1}H_k^TR_k^{-1}$, gain $K_k=P_{k|k}H_k^TR_k^{-1}$ and update $P_{k|k}^{-1}=P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k$ — while the state prediction and update equations are unchanged.]
• 113. 114 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 18) — Information Kalman Filter (continue). Second version: change both the covariance matrix and the filter state computations. Start with
$$P_{k|k}^{-1}=P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k,\qquad K_k=P_{k|k}H_k^TR_k^{-1},\qquad P_{k|k-1}=\Phi_{k-1}P_{k-1|k-1}\Phi_{k-1}^T+\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T$$
Define
$$A_{k|k-1}:=\Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}\Phi_{k-1}^{-1},\qquad B_{k|k-1}:=A_{k|k-1}\Gamma_{k-1}\left(\Gamma_{k-1}^TA_{k|k-1}\Gamma_{k-1}+Q_{k-1}^{-1}\right)^{-1}\Gamma_{k-1}^T$$
Then, by the Inverse Matrix Lemma,
$$P_{k|k-1}^{-1}=\left(I-B_{k|k-1}\right)A_{k|k-1}$$
• 114. 115 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 19) — Information Kalman Filter (continue – 1). Start with $\hat x_{k|k-1}=\Phi_{k-1}\hat x_{k-1|k-1}+G_{k-1}u_{k-1}$ and multiply by $P_{k|k-1}^{-1}$; using $P_{k|k-1}^{-1}\Phi_{k-1}=\left(I-B_{k|k-1}\right)\Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}$:
$$P_{k|k-1}^{-1}\hat x_{k|k-1}=\left(I-B_{k|k-1}\right)\Phi_{k-1}^{-T}\left(P_{k-1|k-1}^{-1}\hat x_{k-1|k-1}\right)+P_{k|k-1}^{-1}G_{k-1}u_{k-1}$$
Multiply the update state estimation equation $\hat x_{k|k}=\hat x_{k|k-1}+K_ki_k$ by $P_{k|k}^{-1}$; using $K_k=P_{k|k}H_k^TR_k^{-1}$ and $P_{k|k}^{-1}=P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k$:
$$P_{k|k}^{-1}\hat x_{k|k}=P_{k|k-1}^{-1}\hat x_{k|k-1}+H_k^TR_k^{-1}z_k$$
so the filter propagates the information state $P^{-1}\hat x$ directly; a sketch follows.
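A minimal sketch of the version-2 information-filter recursion above, assuming numpy and an invertible transition matrix Φ; Y = P⁻¹ is the information matrix and y = P⁻¹x̂ the information state.

```python
import numpy as np

def info_filter_cycle(y, Y, z, Phi, Gamma, Q, H, R, G=None, u=None):
    """One information-filter cycle propagating y = P^-1 x_hat and Y = P^-1."""
    Phi_inv = np.linalg.inv(Phi)
    A = Phi_inv.T @ Y @ Phi_inv                        # A_{k|k-1}
    M = Gamma.T @ A @ Gamma + np.linalg.inv(Q)
    B = A @ Gamma @ np.linalg.inv(M) @ Gamma.T         # B_{k|k-1}
    I = np.eye(Y.shape[0])
    Y_pred = (I - B) @ A                               # P_{k|k-1}^-1
    y_pred = (I - B) @ Phi_inv.T @ y                   # P_{k|k-1}^-1 x_pred
    if G is not None:
        y_pred = y_pred + Y_pred @ G @ u               # control contribution
    Rinv = np.linalg.inv(R)
    Y_upd = Y_pred + H.T @ Rinv @ H                    # P_{k|k}^-1
    y_upd = y_pred + H.T @ Rinv @ z                    # measurement information
    return y_upd, Y_upd
```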
• 115. 116 SOLO Information Kalman Filter (Version 2).
[Block diagram: one cycle of the version-2 information filter — prediction $P_{k|k-1}^{-1}\hat x_{k|k-1}=\left(I-B_{k|k-1}\right)\Phi_{k-1}^{-T}P_{k-1|k-1}^{-1}\hat x_{k-1|k-1}+P_{k|k-1}^{-1}G_{k-1}u_{k-1}$ with $P_{k|k-1}^{-1}=\left(I-B_{k|k-1}\right)A_{k|k-1}$, and update $P_{k|k}^{-1}\hat x_{k|k}=P_{k|k-1}^{-1}\hat x_{k|k-1}+H_k^TR_k^{-1}z_k$, $P_{k|k}^{-1}=P_{k|k-1}^{-1}+H_k^TR_k^{-1}H_k$ — run alongside the true system evolution, as in slide 104.]
• 116. 117 SOLO Review of Probability — Chi-square Distribution. Assume an n-dimensional vector $x$ is Gaussian with mean $E\left\{x\right\}$ and covariance P; then we can define the (scalar) random variable
$$q:=\left(x-E\left\{x\right\}\right)^TP^{-1}\left(x-E\left\{x\right\}\right)=e_x^TP^{-1}e_x$$
Since P is symmetric and positive definite, it can be written as $P=T_PD_PT_P^H$, $T_PT_P^H=I$, $D_P=\mathrm{diag}\left(\lambda_1,\dots,\lambda_n\right)$, $\lambda_i>0$, so that $P^{-1}=T_PD_P^{-1}T_P^H$ and $P^{-1/2}:=T_PD_P^{-1/2}T_P^H$.
Let $u:=P^{-1/2}\left(x-E\left\{x\right\}\right)$. Since $x$ is Gaussian, $u$ (a linear combination of the n components of $x$) is Gaussian too, with
$$E\left\{u\right\}=P^{-1/2}\overbrace{E\left\{e_x\right\}}^{0}=0,\qquad E\left\{uu^T\right\}=P^{-1/2}\overbrace{E\left\{e_xe_x^T\right\}}^{P}P^{-1/2}=I_n$$
where $I_n$ is the identity matrix of size n. Therefore, since the covariance matrix of $u$ is diagonal, its components $u_i$ are uncorrelated and, since they are jointly Gaussian, they are also independent:
$$q=e_x^TP^{-1}e_x=u^Tu=\sum_{i=1}^{n}u_i^2,\qquad u_i\sim\mathcal N\left(0,1\right)$$
Therefore q is chi-square distributed with n degrees of freedom.
• 117. 118 SOLO Review of Probability — Derivation of Chi and Chi-square Distributions. Given k normal random independent variables $X_1,X_2,\dots,X_k$ with zero mean values and the same variance $\sigma^2$, their joint density is
$$p_{X_1\cdots X_k}\left(x_1,\dots,x_k\right)=\frac{1}{\left(2\pi\right)^{k/2}\sigma^k}\exp\left(-\frac{x_1^2+\cdots+x_k^2}{2\sigma^2}\right)$$
Define Chi-square: $y=\chi_k^2:=x_1^2+\cdots+x_k^2\ge 0$, and Chi: $\chi_k:=\sqrt{x_1^2+\cdots+x_k^2}\ge 0$.
The region in $\chi_k$ space where the joint density is constant is a hyper-shell of volume $dV=A\,\chi_k^{k-1}d\chi_k$ (A to be determined), so
$$p_{\chi_k}\left(\chi_k\right)d\chi_k=\Pr\left\{\chi_k\le\sqrt{x_1^2+\cdots+x_k^2}\le\chi_k+d\chi_k\right\}=\frac{1}{\left(2\pi\right)^{k/2}\sigma^k}\exp\left(-\frac{\chi_k^2}{2\sigma^2}\right)A\,\chi_k^{k-1}d\chi_k$$
• 118. 119 SOLO Review of Probability — Derivation of Chi and Chi-square Distributions (continue – 1).
$$p_{\chi_k}\left(\chi_k\right)=\frac{A}{\left(2\pi\right)^{k/2}\sigma^k}\chi_k^{k-1}\exp\left(-\frac{\chi_k^2}{2\sigma^2}\right)U\left(\chi_k\right)$$
For the chi-square variable $y=\chi_k^2$, with $\chi_k=\sqrt y$ and $d\chi_k/d\,y=1/\left(2\sqrt y\right)$:
$$p_Y\left(y\right)=p_{\chi_k}\left(\sqrt y\right)\frac{1}{2\sqrt y}=\frac{A}{2\left(2\pi\right)^{k/2}\sigma^k}\,y^{k/2-1}\exp\left(-\frac{y}{2\sigma^2}\right)U\left(y\right)$$
A is determined from the condition $\int p_Y\left(y\right)d\,y=1$, which gives $A=2\pi^{k/2}/\Gamma\left(k/2\right)$, where $\Gamma\left(a\right)=\int_0^\infty t^{a-1}e^{-t}d\,t$ is the gamma function and $U\left(a\right)$ the unit step ($U\left(a\right)=1$ for $a\ge 0$, $0$ otherwise). Therefore
$$p_Y\left(y\right)=\frac{1}{\left(2\sigma^2\right)^{k/2}\Gamma\left(k/2\right)}\,y^{k/2-1}\exp\left(-\frac{y}{2\sigma^2}\right)U\left(y\right),\qquad p_{\chi_k}\left(\chi_k\right)=\frac{2}{\left(2\sigma^2\right)^{k/2}\Gamma\left(k/2\right)}\,\chi_k^{k-1}\exp\left(-\frac{\chi_k^2}{2\sigma^2}\right)U\left(\chi_k\right)$$
• 119. 120 SOLO Review of Probability — Derivation of Chi and Chi-square Distributions (continue – 2), where the $x_i$ are Gaussian.
Mean value: $E\left\{\chi_k^2\right\}=\sum_{i=1}^{k}E\left\{x_i^2\right\}=k\,\sigma^2$.
For the variance, using the moments of a zero-mean Gaussian ($E\left\{x_i\right\}=E\left\{x_i^3\right\}=0$, $E\left\{x_i^2\right\}=\sigma^2$, $E\left\{x_i^4\right\}=3\sigma^4$) and the independence of the $x_i$:
$$E\left\{\left(\chi_k^2\right)^2\right\}=E\left\{\left(\sum_ix_i^2\right)^2\right\}=\overbrace{\sum_iE\left\{x_i^4\right\}}^{\text{main diagonal, }k}+\sum_{i\ne j}E\left\{x_i^2\right\}E\left\{x_j^2\right\}=3k\,\sigma^4+k\left(k-1\right)\sigma^4=\left(k^2+2k\right)\sigma^4$$
Variance: $E\left\{\left(\chi_k^2-k\sigma^2\right)^2\right\}=\left(k^2+2k\right)\sigma^4-k^2\sigma^4=2\,k\,\sigma^4$.
• 120. 121 SOLO Review of Probability — Derivation of Chi and Chi-square Distributions (continue – 3).
[Table: tail probabilities of the chi-square and normal densities.] The table presents the points on the chi-square distribution for a given upper-tail probability $Q=\Pr\left\{y>x\right\}$, where $y=\chi_n^2$ and n is the number of degrees of freedom. This tabulated function is also known as the complementary distribution. An alternative way of writing the previous equation is
$$\Pr\left\{y\le x\left(1-Q\right)\right\}=1-Q$$
which indicates that at the left of the point x the probability mass is 1 – Q. This is the 100 (1 – Q) percentile point.
Examples
1. The 95% probability region for a $\chi_2^2$ variable can be taken as the one-sided probability region (cutting off the 5% upper tail): $\left[\chi_2^2\left(0\right),\chi_2^2\left(0.95\right)\right]=\left(0,\,5.99\right)$.
2. Or the two-sided probability region (cutting off both 2.5% tails): $\left[\chi_2^2\left(0.025\right),\chi_2^2\left(0.975\right)\right]=\left(0.05,\,7.38\right)$.
3. For a $\chi_{100}^2$ variable, the two-sided 95% probability region (cutting off both 2.5% tails) is $\left[\chi_{100}^2\left(0.025\right),\chi_{100}^2\left(0.975\right)\right]=\left(74,\,130\right)$.
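A quick numerical check of the tabulated percentile points, assuming scipy is available (an assumed dependency, not used in the slides).

```python
from scipy.stats import chi2

print(chi2.ppf(0.95, df=2))              # one-sided 95% point: ~5.99
print(chi2.ppf([0.025, 0.975], df=2))    # two-sided 95% region: ~[0.05, 7.38]
print(chi2.ppf([0.025, 0.975], df=100))  # two-sided 95% region: ~[74.2, 129.6]
```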
• 121. 122 SOLO Review of Probability — Derivation of Chi and Chi-square Distributions (continue – 4). Note the skewedness of the chi-square distribution: the above two-sided regions are not symmetric about the corresponding means $E\left\{\chi_n^2\right\}=n$.
For degrees of freedom above 100, the following approximation of the points on the chi-square distribution can be used:
$$\chi_n^2\left(1-Q\right)\approx\frac{1}{2}\left[G\left(1-Q\right)+\sqrt{2n-1}\right]^2$$
where $G\left(\cdot\right)$ is given in the last line of the table and shows the point x on the standard (zero-mean, unit-variance) Gaussian distribution for the same tail probabilities: for $\Pr\left\{y\right\}=\mathcal N\left(y;0,1\right)$ and $Q=\Pr\left\{y>x\right\}$, we have $x\left(1-Q\right):=G\left(1-Q\right)$. Return to Table of Content
• 122. 123 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 21) — Innovation in Tracking Systems. The fact that the innovation sequence is zero mean and white for the Kalman (Optimal) Filter is very important and can be used in Tracking Systems:
1. When a single target is detected with probability 1 (no false alarms), the innovation can be used to check Filter Consistency (in fact, the knowledge of the Filter Parameters Φ(k), G(k), H(k) – target model, Q(k), R(k) – system and measurement noises).
2. When a single target is detected with probability 1 (no false alarms), and the target initiates an unknown maneuver (change of model) at an unknown time, the innovation can be used to detect the start of the maneuver (change of target model) by detecting a Filter Inconsistency, and to choose from a bank of models (Φi(k), Gi(k), Hi(k), i = 1,…,n target models) the one with a white innovation (see IMM method).
3. When a single target is detected with probability less than 1 and false alarms are also detected, the innovation can be used to provide information on the probability of each detection being the real target (providing a Gating capability that eliminates less probable detections) (see PDAF method).
4. When multiple targets are detected with probability less than 1 and false alarms are also detected, the innovation can be used to provide Gating information for each target track and the probability of each detection being related to each track (data association). This is done by running a Kalman Filter for each initiated track (see JPDAF and MTT methods). Return to Table of Content
• 123. 124 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 22) — Evaluation of Kalman Filter Consistency. A state estimator (filter) is called consistent if its state estimation errors satisfy
$$E\left\{\tilde x\left(k|k\right)\right\}:=E\left\{x\left(k\right)-\hat x\left(k|k\right)\right\}=0$$
$$E\left\{\left[x\left(k\right)-\hat x\left(k|k\right)\right]\left[x\left(k\right)-\hat x\left(k|k\right)\right]^T\right\}:=E\left\{\tilde x\left(k|k\right)\tilde x\left(k|k\right)^T\right\}=P\left(k|k\right)$$
This is a finite-sample consistency property; that is, the estimation errors based on a finite number of samples (measurements) should be consistent with the theoretical statistical properties:
• have zero mean (i.e. the estimates are unbiased);
• have a covariance matrix as calculated by the Filter.
The Consistency Criteria of a Filter are:
1. The state errors should be acceptable as zero mean and have magnitude commensurate with the state covariance as yielded by the Filter.
2. The innovations should have the same property as in (1).
3. The innovations should be white noise.
Only the last two criteria (based on the innovation) can be tested in real data applications. The first criterion, which is the most important, can be tested only in simulations.
• 124. 125 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 23) — Evaluation of Kalman Filter Consistency (continue – 1). When we design the Kalman Filter, we can perform Monte Carlo simulations (N independent runs) to check the Filter Consistency (expected performance).
Real time (Single-Run Tests): in real time we can use a single run (N = 1). In this case the simulations are replaced by assuming that we can replace the Ensemble Averages (of the simulations) by Time Averages, based on the Ergodicity of the Innovations, and perform only tests (2) and (3), based on the innovation properties. The innovation bias and covariance can be evaluated using
$$\hat{\bar i}=\frac{1}{K}\sum_{k=1}^{K}i\left(k\right),\qquad \hat S=\frac{1}{K}\sum_{k=1}^{K}i\left(k\right)i\left(k\right)^T$$
• 125. 126 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 24) — Evaluation of Kalman Filter Consistency (continue – 2). Real time (Single-Run Tests) (continue – 1).
Test 2: $E\left\{z\left(k\right)-\hat z\left(k|k-1\right)\right\}=E\left\{i\left(k\right)\right\}=0$ and $E\left\{i\left(k\right)i\left(k\right)^T\right\}=S\left(k\right)$. Using the Time-Average Normalized Innovation Squared (NIS) statistic
$$\bar\epsilon_i:=\frac{1}{K}\sum_{k=1}^{K}i\left(k\right)^TS\left(k\right)^{-1}i\left(k\right)$$
$K\,\bar\epsilon_i$ must have a chi-square distribution with $K\,n_z$ degrees of freedom. The test is successful if $\bar\epsilon_i\in\left[r_1,r_2\right]$, where the confidence interval $\left[r_1,r_2\right]$ is defined using the chi-square distribution of $\bar\epsilon_i$: $\Pr\left\{\bar\epsilon_i\in\left[r_1,r_2\right]\right\}=1-\alpha$.
For example, for K = 50, $n_z$ = 2 and α = 0.05, using the two tails of the chi-square distribution of $\chi_{100}^2$ we get
$$r_1=\chi_{100}^2\left(0.025\right)/50=74/50=1.5,\qquad r_2=\chi_{100}^2\left(0.975\right)/50=130/50=2.6$$
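A minimal sketch of the single-run time-average NIS test, assuming numpy/scipy; `innovations[k]` and `S_list[k]` are the innovation vector and its covariance from a Kalman filter run of K scans.

```python
import numpy as np
from scipy.stats import chi2

def nis_test(innovations, S_list, alpha=0.05):
    """Single-run NIS consistency test: K*eps_bar ~ chi-square with K*n_z DOF."""
    K = len(innovations)
    nz = innovations[0].shape[0]
    eps = np.mean([i @ np.linalg.solve(S, i) for i, S in zip(innovations, S_list)])
    r1, r2 = chi2.ppf([alpha / 2, 1 - alpha / 2], df=K * nz) / K
    return r1 <= eps <= r2, eps, (r1, r2)
```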
• 126. 127 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 25) — Evaluation of Kalman Filter Consistency (continue – 3). Real time (Single-Run Tests) (continue – 2).
Test 3: Whiteness of Innovations. Use the Normalized Time-Average Autocorrelation
$$\bar\rho_i\left(l\right):=\frac{\displaystyle\sum_{k=1}^{K}i\left(k\right)^Ti\left(k+l\right)}{\left[\displaystyle\sum_{k=1}^{K}i\left(k\right)^Ti\left(k\right)\sum_{k=1}^{K}i\left(k+l\right)^Ti\left(k+l\right)\right]^{1/2}}$$
In view of the Central Limit Theorem, for large K this statistic is normally distributed. For l ≠ 0 the variance can be shown to be 1/K, which tends to zero for large K. Denoting by ξ a zero-mean, unit-variance normal random variable, let $r_1$ be such that $\Pr\left\{\xi\in\left[-r_1,r_1\right]\right\}=1-\alpha$. For α = 0.05 the normal distribution gives $r_1$ = 1.96. Since $\bar\rho_i$ has a standard deviation of $1/\sqrt K$, the corresponding probability region for α = 0.05 is $\left[-r,r\right]$ where $r=r_1/\sqrt K=1.96/\sqrt K$.
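A minimal sketch of the whiteness test at lag l, assuming numpy; with finite data the sums run over the K−l available pairs, which is an implementation choice not spelled out in the slides.

```python
import numpy as np

def whiteness_test(innovations, l, alpha_point=1.96):
    """Innovation whiteness test: pass if |rho(l)| <= 1.96 / sqrt(K)."""
    i = np.asarray(innovations)                 # shape (K, n_z)
    K = i.shape[0]
    num = np.sum(i[:K - l] * i[l:])             # sum_k i(k)^T i(k+l)
    den = np.sqrt(np.sum(i[:K - l] * i[:K - l]) * np.sum(i[l:] * i[l:]))
    rho = num / den
    r = alpha_point / np.sqrt(K)
    return abs(rho) <= r, rho, r
```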
• 127. 128 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 26) — Evaluation of Kalman Filter Consistency (continue – 4). Monte-Carlo Simulation Based Tests: the tests are based on the results of Monte Carlo simulations (runs) that provide N independent samples
$$\tilde x_i\left(k|k\right):=x_i\left(k\right)-\hat x_i\left(k|k\right),\qquad P_i\left(k|k\right)=E\left\{\tilde x_i\left(k|k\right)\tilde x_i\left(k|k\right)^T\right\},\qquad i=1,\dots,N$$
Test 1: for each run i compute at each scan k the Normalized (state) Estimation Error Squared (NEES)
$$\epsilon_{x_i}\left(k\right):=\tilde x_i\left(k|k\right)^TP_i\left(k|k\right)^{-1}\tilde x_i\left(k|k\right),\qquad i=1,\dots,N$$
Under the hypothesis that the Filter is Consistent and Linear Gaussian, $\epsilon_{x_i}\left(k\right)$ is chi-square distributed with $n_x$ (the dimension of x) degrees of freedom; then $E\left\{\epsilon_{x_i}\left(k\right)\right\}=n_x$. The average over N runs of $\epsilon_{x_i}\left(k\right)$ is
$$\bar\epsilon_x\left(k\right):=\frac{1}{N}\sum_{i=1}^{N}\epsilon_{x_i}\left(k\right)$$
• 128. 129 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 27) — Evaluation of Kalman Filter Consistency (continue – 5). Monte-Carlo Simulation Based Tests (continue – 1). Test 1 (continue – 1):
$N\,\bar\epsilon_x\left(k\right)$ must have a chi-square distribution with $N\,n_x$ degrees of freedom. The test is successful if $\bar\epsilon_x\in\left[r_1,r_2\right]$, where the confidence interval $\left[r_1,r_2\right]$ is defined using the chi-square distribution of $\bar\epsilon_x$: $\Pr\left\{\bar\epsilon_x\in\left[r_1,r_2\right]\right\}=1-\alpha$.
For example, for N = 50, $n_x$ = 2 and α = 0.05, using the two tails of the chi-square distribution of $\chi_{100}^2$ we get
$$r_1=\chi_{100}^2\left(0.025\right)/50=74/50=1.5,\qquad r_2=\chi_{100}^2\left(0.975\right)/50=130/50=2.6$$
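A minimal sketch of the Monte-Carlo NEES test at one scan, assuming numpy/scipy; `x_errs[j]` and `P_list[j]` are the state error and filter covariance of run j at the scan under test.

```python
import numpy as np
from scipy.stats import chi2

def nees_test(x_errs, P_list, alpha=0.05):
    """Monte-Carlo NEES test: N*eps_bar ~ chi-square with N*n_x DOF."""
    N = len(x_errs)
    nx = x_errs[0].shape[0]
    eps = np.mean([e @ np.linalg.solve(P, e) for e, P in zip(x_errs, P_list)])
    r1, r2 = chi2.ppf([alpha / 2, 1 - alpha / 2], df=N * nx) / N
    return r1 <= eps <= r2, eps, (r1, r2)
```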
• 129. 130 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 28) — Evaluation of Kalman Filter Consistency (continue – 6). Monte-Carlo Simulation Based Tests (continue – 2). Test 2:
$E\left\{z\left(k\right)-\hat z\left(k|k-1\right)\right\}=E\left\{i\left(k\right)\right\}=0$ and $E\left\{i\left(k\right)i\left(k\right)^T\right\}=S\left(k\right)$. Using the Normalized Innovation Squared (NIS) statistic, compute from N Monte Carlo runs
$$\bar\epsilon_i\left(k\right):=\frac{1}{N}\sum_{j=1}^{N}i_j\left(k\right)^TS_j\left(k\right)^{-1}i_j\left(k\right)$$
$N\,\bar\epsilon_i\left(k\right)$ must have a chi-square distribution with $N\,n_z$ degrees of freedom. The test is successful if $\bar\epsilon_i\in\left[r_1,r_2\right]$, with $\Pr\left\{\bar\epsilon_i\in\left[r_1,r_2\right]\right\}=1-\alpha$. For example, for N = 50, $n_z$ = 2 and α = 0.05, using the two tails of the chi-square distribution of $\chi_{100}^2$, $r_1=74/50=1.5$ and $r_2=130/50=2.6$.
• 130. 131 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 29) — Evaluation of Kalman Filter Consistency (continue – 7). Monte-Carlo Simulation Based Tests (continue – 3). Test 3: Whiteness of Innovations. Use the Normalized Sample-Average Autocorrelation
$$\bar\rho_i\left(k,m\right):=\frac{\displaystyle\sum_{j=1}^{N}i_j\left(k\right)^Ti_j\left(m\right)}{\left[\displaystyle\sum_{j=1}^{N}i_j\left(k\right)^Ti_j\left(k\right)\sum_{j=1}^{N}i_j\left(m\right)^Ti_j\left(m\right)\right]^{1/2}}$$
In view of the Central Limit Theorem, for large N this statistic is normally distributed. For k ≠ m the variance can be shown to be 1/N, which tends to zero for large N. Denoting by ξ a zero-mean, unit-variance normal random variable, let $r_1$ be such that $\Pr\left\{\xi\in\left[-r_1,r_1\right]\right\}=1-\alpha$. For α = 0.05 the normal distribution gives $r_1$ = 1.96. Since $\bar\rho_i$ has a standard deviation of $1/\sqrt N$, the corresponding probability region for α = 0.05 is $\left[-r,r\right]$ where $r=r_1/\sqrt N=1.96/\sqrt N$.
• 131. 132 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 30) — Evaluation of Kalman Filter Consistency (continue – 8). Examples (Bar-Shalom, Y., Li, X-R., "Estimation and Tracking: Principles, Techniques and Software", Artech House, 1993, pg. 242). Monte-Carlo Simulation Based Tests (continue – 4): behavior of
$$\bar\epsilon_x=\frac{1}{K}\sum_{k=1}^{K}\tilde x\left(k|k\right)^TP\left(k|k\right)^{-1}\tilde x\left(k|k\right)$$
for various values of the process noise q, for filters that are perfectly matched to the system $x\left(k+1\right)=x\left(k\right)+w\left(k\right)$ with process noise intensity q.
Single run, 95% probability: test (a) passes if $\bar\epsilon_x\in\left(0,5.99\right)$. A one-sided region is considered; for $n_x$ = 2 we have $\left[\chi_2^2\left(0\right),\chi_2^2\left(0.95\right)\right]=\left(0,\,5.99\right)$.
• 132. 133 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 31) — Evaluation of Kalman Filter Consistency (continue – 9). Examples (Bar-Shalom, Li, 1993, pg. 244). Monte-Carlo Simulation Based Tests (continue – 5): Monte Carlo with N = 50, 95% probability.
(a) NEES: $\bar\epsilon_x\left(k\right)=\frac{1}{N}\sum_{j=1}^{N}\tilde x_j\left(k|k\right)^TP_j\left(k|k\right)^{-1}\tilde x_j\left(k|k\right)$. Test (a) passes if $\bar\epsilon_x\in\left[74/50,130/50\right]=\left[1.5,2.6\right]$, since $\left[\chi_{100}^2\left(0.025\right),\chi_{100}^2\left(0.975\right)\right]=\left(74,130\right)$ for $N\,n_x=100$.
(b) NIS: test (b) passes if $\bar\epsilon_i\in\left[32.3/50,71.4/50\right]=\left[0.65,1.43\right]$, since $\left[\chi_{50}^2\left(0.025\right),\chi_{50}^2\left(0.975\right)\right]=\left(32,71\right)$ for $N\,n_z=50$ ($n_z=1$).
(c) Autocorrelation $\bar\rho_i\left(k,m\right)$ as defined above: the corresponding probability region for α = 0.05 is $\left[-r,r\right]$ where $r=1.96/\sqrt{50}=0.28$.
• 133. 134 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 32) — Evaluation of Kalman Filter Consistency (continue – 10). Examples (Bar-Shalom, Li, 1993, pg. 245). Monte-Carlo Simulation Based Tests (continue – 6): Example of a Mismatched Filter.
A mismatched filter is tested on the system $x\left(k+1\right)=x\left(k\right)+w\left(k\right)$: real system process noise q = 9, filter model process noise $q_F$ = 1.
(1) Single run: $\bar\epsilon_x=\frac{1}{K}\sum_{k=1}^{K}\tilde x\left(k|k\right)^TP\left(k|k\right)^{-1}\tilde x\left(k|k\right)$; test (1) passes if $\bar\epsilon_x\in\left(0,5.99\right)$, i.e. $\left[\chi_2^2\left(0\right),\chi_2^2\left(0.95\right)\right]$. The test fails.
(2) N = 50 runs Monte Carlo with the 95% probability region: $\bar\epsilon_x\left(k\right)=\frac{1}{N}\sum_{j=1}^{N}\tilde x_j\left(k|k\right)^TP_j\left(k|k\right)^{-1}\tilde x_j\left(k|k\right)$; test (2) passes if $\bar\epsilon_x\in\left[74/50,130/50\right]=\left[1.5,2.6\right]$. The test fails.
• 134. 135 SOLO Recursive Bayesian Estimation — Linear Gaussian Markov Systems (continue – 33) — Evaluation of Kalman Filter Consistency (continue – 11). Examples (Bar-Shalom, Li, 1993, pg. 246). Monte-Carlo Simulation Based Tests (continue – 7): Example of a Mismatched Filter (continue – 1). Real system process noise q = 9, filter model process noise $q_F$ = 1.
(3) N = 50 runs Monte Carlo with the 95% probability region, NIS: $\bar\epsilon_i\left(k\right)=\frac{1}{N}\sum_{j=1}^{N}i_j\left(k\right)^TS_j\left(k\right)^{-1}i_j\left(k\right)$; test (3) passes if $\bar\epsilon_i\in\left[32.3/50,71.4/50\right]=\left[0.65,1.43\right]$, since $\left[\chi_{50}^2\left(0.025\right),\chi_{50}^2\left(0.975\right)\right]=\left(32,71\right)$ for $n_z=1$. The test fails.
(4) N = 50 runs Monte Carlo with the 95% probability region, autocorrelation $\bar\rho_i\left(k,m\right)$; the probability region for α = 0.05 is $\left[-r,r\right]$ with $r=1.96/\sqrt{50}=0.28$. The test fails.
Return to Table of Content — Innovation in Tracking
136
Target Estimators
Kalman Filter for Filtering Position and Velocity Measurements

Assume a Cartesian model of a non-maneuvering target:

$$\frac{d}{dt}\begin{bmatrix} x \\ \dot{x}\end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 1 \\ 0 & 0\end{bmatrix}}_{A}\begin{bmatrix} x \\ \dot{x}\end{bmatrix} + \underbrace{\begin{bmatrix} 0 \\ 1\end{bmatrix}}_{B}\,w$$

Since $A^2 = 0$, the transition matrix over a sample period T is

$$\Phi := \exp(AT) = I + AT + \frac{1}{2!}A^2T^2 + \dots = I + AT = \begin{bmatrix} 1 & T \\ 0 & 1\end{bmatrix},\qquad
\Gamma := \int_0^T e^{A\tau}\,d\tau\, B = \begin{bmatrix} T^2/2 \\ T\end{bmatrix}$$

Measurements (position and velocity): $z = \begin{bmatrix} x \\ \dot{x}\end{bmatrix} + \begin{bmatrix} v_1 \\ v_2\end{bmatrix}$

Discrete system:
$$x_{k+1} = \Phi\,x_k + \Gamma\,w_k,\qquad z_{k+1} = H\,x_{k+1} + v_{k+1}$$
with
$$\Phi = \begin{bmatrix} 1 & T\\ 0 & 1\end{bmatrix},\quad \Gamma = \begin{bmatrix} T^2/2\\ T\end{bmatrix},\quad E[w_k w_j] = q\,\delta_{kj},\quad H = \begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix},\quad E[v_{k+1}v_{j+1}^T] = R\,\delta_{kj} = \begin{bmatrix} \sigma_P^2 & 0\\ 0 & \sigma_V^2\end{bmatrix}\delta_{kj}$$
137
Target Estimators
Kalman Filter for Filtering Position and Velocity Measurements (continue – 1)

The Kalman Filter:
$$\hat{x}_{k+1|k} = \Phi\,\hat{x}_{k|k},\qquad \hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}\left(z_{k+1} - H\,\hat{x}_{k+1|k}\right)$$
$$P_{k+1|k} = \Phi\,P_{k|k}\,\Phi^T + \Gamma\,q\,\Gamma^T$$

Writing $P_{k|k} = \begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}$ and carrying out the products:

$$P_{k+1|k} = \begin{bmatrix} 1 & T\\ 0 & 1\end{bmatrix}\begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}\begin{bmatrix} 1 & 0\\ T & 1\end{bmatrix} + q\begin{bmatrix} T^4/4 & T^3/2\\ T^3/2 & T^2\end{bmatrix}
= \begin{bmatrix} p_{11} + 2T p_{12} + T^2 p_{22} + qT^4/4 & p_{12} + T p_{22} + qT^3/2\\ p_{12} + T p_{22} + qT^3/2 & p_{22} + qT^2\end{bmatrix}$$
138
Target Estimators
Kalman Filter for Filtering Position and Velocity Measurements (continue – 2)

The Kalman gain:
$$K_{k+1} = P_{k+1|k}\,H^T\left(H\,P_{k+1|k}\,H^T + R_{k+1}\right)^{-1}$$

With $H = I$, $R = \mathrm{diag}(\sigma_P^2, \sigma_V^2)$, and $P_{k+1|k} = \begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}$:

$$K_{k+1} = \begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}\begin{bmatrix} p_{11}+\sigma_P^2 & p_{12}\\ p_{12} & p_{22}+\sigma_V^2\end{bmatrix}^{-1}
= \frac{1}{d}\begin{bmatrix} p_{11}p_{22} + p_{11}\sigma_V^2 - p_{12}^2 & p_{12}\,\sigma_P^2\\[2pt] p_{12}\,\sigma_V^2 & p_{11}p_{22} + p_{22}\sigma_P^2 - p_{12}^2\end{bmatrix}$$

where $d = (p_{11}+\sigma_P^2)(p_{22}+\sigma_V^2) - p_{12}^2 = p_{11}p_{22} + p_{11}\sigma_V^2 + p_{22}\sigma_P^2 + \sigma_P^2\sigma_V^2 - p_{12}^2$.
139
Target Estimators
Kalman Filter for Filtering Position and Velocity Measurements (continue – 3)

Covariance update (Joseph form and its simplification):
$$P_{k+1|k+1} = (I - K_{k+1}H)\,P_{k+1|k}\,(I - K_{k+1}H)^T + K_{k+1}\,R\,K_{k+1}^T = (I - K_{k+1}H)\,P_{k+1|k}$$

With the gain of the previous slide,
$$I - K_{k+1}H = \frac{1}{d}\begin{bmatrix} \sigma_P^2\,(p_{22}+\sigma_V^2) & -p_{12}\,\sigma_P^2\\ -p_{12}\,\sigma_V^2 & \sigma_V^2\,(p_{11}+\sigma_P^2)\end{bmatrix}$$
and $P_{k+1|k+1} = (I - K_{k+1}H)\,P_{k+1|k}$ follows by direct multiplication; every element of $K_{k+1}$ and $P_{k+1|k+1}$ is a ratio of polynomials in $p_{11}, p_{12}, p_{22}, \sigma_P^2, \sigma_V^2$ with the common denominator d.
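A minimal numeric sketch of one cycle of this filter (an addition, not from the slides); the values of T, q and the measurement noise standard deviations are placeholders:

```python
# One predict/update cycle of the position-and-velocity Kalman filter above.
import numpy as np

T, q, sig_P, sig_V = 1.0, 0.1, 10.0, 1.0
Phi = np.array([[1.0, T], [0.0, 1.0]])
Gam = np.array([[T**2 / 2], [T]])
H = np.eye(2)                              # position and velocity both measured
R = np.diag([sig_P**2, sig_V**2])

def kf_cycle(x, P, z):
    # Prediction
    x = Phi @ x
    P = Phi @ P @ Phi.T + q * (Gam @ Gam.T)
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```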
140
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model

We want to find the steady-state form of the filter for
$$\frac{d}{dt}\begin{bmatrix} x\\ \dot{x}\end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 1\\ 0 & 0\end{bmatrix}}_{A}\begin{bmatrix} x\\ \dot{x}\end{bmatrix} + \underbrace{\begin{bmatrix} 0\\ 1\end{bmatrix}}_{B}\,w$$
where x is the position and $\dot{x}$ the velocity. Assume that only the position measurements are available:
$$z_{k+1} = \underbrace{[\,1\ \ 0\,]}_{H}\,x_{k+1} + v_{k+1},\qquad E[v_{k+1}] = 0,\quad E[v_{k+1}v_{j+1}] = \sigma_P^2\,\delta_{kj}$$
Discrete system:
$$x_{k+1} = \Phi\,x_k + \Gamma\,w_k,\qquad \Phi = \begin{bmatrix} 1 & T\\ 0 & 1\end{bmatrix},\quad \Gamma = \begin{bmatrix} T^2/2\\ T\end{bmatrix},\quad E[w_k w_j] = \sigma_w^2\,\delta_{kj}$$
141
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 1)

$$S(k+1) = H\,P(k+1|k)\,H^T + R(k+1),\qquad K(k+1) = P(k+1|k)\,H^T\,S^{-1}(k+1)$$

When the Kalman Filter reaches steady state:
$$\lim_{k\to\infty} P(k+1|k) = M = \begin{bmatrix} m_{11} & m_{12}\\ m_{12} & m_{22}\end{bmatrix},\qquad \lim_{k\to\infty} P(k|k) = P = \begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix}$$
$$S = m_{11} + \sigma_P^2,\qquad K = \begin{bmatrix} k_{11}\\ k_{12}\end{bmatrix} = \frac{1}{m_{11}+\sigma_P^2}\begin{bmatrix} m_{11}\\ m_{12}\end{bmatrix}$$

From $P(k+1|k+1) = [I - K(k+1)H]\,P(k+1|k)$:
$$\begin{bmatrix} p_{11} & p_{12}\\ p_{12} & p_{22}\end{bmatrix} = \begin{bmatrix} 1-k_{11} & 0\\ -k_{12} & 1\end{bmatrix}\begin{bmatrix} m_{11} & m_{12}\\ m_{12} & m_{22}\end{bmatrix} = \begin{bmatrix} (1-k_{11})m_{11} & (1-k_{11})m_{12}\\ m_{12}-k_{12}m_{11} & m_{22}-k_{12}m_{12}\end{bmatrix}$$
142
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 2)

From $P(k+1|k) = \Phi\,P(k|k)\,\Phi^T + Q(k)$, with the piecewise (between samples) constant white-noise acceleration model
$$Q = \sigma_w^2\begin{bmatrix} T^4/4 & T^3/2\\ T^3/2 & T^2\end{bmatrix}$$
we obtain the steady-state relations
$$m_{11} = p_{11} + 2T p_{12} + T^2 p_{22} + \sigma_w^2 T^4/4,\qquad
m_{12} = p_{12} + T p_{22} + \sigma_w^2 T^3/2,\qquad
m_{22} = p_{22} + \sigma_w^2 T^2$$
Substituting the update relations $p_{11} = (1-k_{11})m_{11}$, $p_{12} = (1-k_{11})m_{12}$, $p_{22} = m_{22}-k_{12}m_{12}$ into the last equation gives, in particular,
$$k_{12}\,m_{12} = \sigma_w^2 T^2$$
143
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 3)

We obtained the following 5 equations with 5 unknowns $k_{11}, k_{12}, m_{11}, m_{12}, m_{22}$:

(1) $k_{11} = m_{11}/(m_{11}+\sigma_P^2)$, i.e. $m_{11} = \sigma_P^2\,k_{11}/(1-k_{11})$
(2) $k_{12} = m_{12}/(m_{11}+\sigma_P^2)$, i.e. $m_{12} = \sigma_P^2\,k_{12}/(1-k_{11})$
(3) $k_{11}\,m_{11} = 2T(1-k_{11})\,m_{12} + T^2(m_{22}-k_{12}m_{12}) + \sigma_w^2 T^4/4$
(4) $k_{11}\,m_{12} = T(m_{22}-k_{12}m_{12}) + \sigma_w^2 T^3/2$
(5) $k_{12}\,m_{12} = \sigma_w^2 T^2$

Substituting the results obtained from (1), (2) and (5) into (3) and (4), and eliminating $m_{22}$, yields a single constraint between the gains:
$$k_{11}^2 + k_{11}k_{12}T + \frac{1}{4}k_{12}^2 T^2 - 2\,k_{12}T = 0$$
144
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 4)

Kalata introduced the α, β parameters defined as
$$\alpha := k_{11},\qquad \beta := k_{12}\,T$$
and the previous equation is written as a function of α, β as
$$\alpha^2 + \alpha\beta + \frac{\beta^2}{4} - 2\beta = 0$$
which can be used to write α as a function of β:
$$\alpha = \sqrt{2\beta} - \frac{\beta}{2}$$

From (5) and (2), $k_{12}^2 = \sigma_w^2 T^2 (1-\alpha)/\sigma_P^2$, so that
$$\beta = \lambda\sqrt{1-\alpha}\qquad\Longleftrightarrow\qquad \lambda^2 = \frac{\beta^2}{1-\alpha}$$
where
$$\lambda := \frac{\sigma_w\,T^2}{\sigma_P}$$
is the Target Maneuvering Index, proportional to the ratio of the motion uncertainty $\sigma_w T^2$ to the observation uncertainty $\sigma_P$.
145
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 5)

We obtained $\lambda^2 = \beta^2/(1-\alpha)$ and $\alpha = \sqrt{2\beta} - \beta/2$. Substituting $s := \sqrt{\beta/2}$, so that $1-\alpha = (1-s)^2$, gives
$$\lambda\,(1-s) = \beta = 2s^2\qquad\Longrightarrow\qquad 2s^2 + \lambda s - \lambda = 0$$
The positive solution of this quadratic is
$$s = \frac{-\lambda + \sqrt{\lambda^2 + 8\lambda}}{4}$$
Therefore
$$\beta = 2s^2 = \frac{\lambda^2 + 4\lambda - \lambda\sqrt{\lambda^2+8\lambda}}{4}$$
and
$$\alpha = 2s - s^2 = -\frac{\lambda^2 + 8\lambda - (\lambda+4)\sqrt{\lambda^2+8\lambda}}{8}$$
146
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 6)

We found $p_{11} = (1-k_{11})m_{11}$, $p_{12} = (1-k_{11})m_{12}$, $p_{22} = m_{22}-k_{12}m_{12}$, with $m_{11} = \sigma_P^2 k_{11}/(1-k_{11})$ and $m_{12} = \sigma_P^2 k_{12}/(1-k_{11})$. In terms of α and β the steady-state updated covariances become
$$p_{11} = \alpha\,\sigma_P^2,\qquad p_{12} = \frac{\beta}{T}\,\sigma_P^2,\qquad p_{22} = \frac{\beta\,(\alpha-\beta/2)}{1-\alpha}\;\frac{\sigma_P^2}{T^2}$$
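A compact sketch of the closed-form results above (an addition, not from the slides):

```python
# Steady-state alpha-beta gains and updated covariances for the piecewise
# constant white-noise acceleration model, as functions of the maneuvering
# index lambda = sigma_w * T**2 / sigma_P.
import numpy as np

def alpha_beta_dwna(lam):
    s = np.sqrt(lam**2 + 8.0 * lam)
    beta = (lam**2 + 4.0 * lam - lam * s) / 4.0
    alpha = -(lam**2 + 8.0 * lam - (lam + 4.0) * s) / 8.0
    return alpha, beta

def steady_state_cov(alpha, beta, T, sig_P):
    p11 = alpha * sig_P**2
    p12 = beta * sig_P**2 / T
    p22 = beta * (alpha - beta / 2.0) / (1.0 - alpha) * sig_P**2 / T**2
    return np.array([[p11, p12], [p12, p22]])

a, b = alpha_beta_dwna(1.0)
print(a, b)   # sanity check: lam**2 == b**2 / (1 - a) up to rounding
```

For λ = 1 this gives α = 0.75, β = 0.5, which indeed satisfies λ² = β²/(1−α).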
147
Target Estimators
α-β (2-D) Filter with Piecewise Constant White Noise Acceleration Model (continue – 7)

[Figure: the gains $\alpha = -\left(\lambda^2+8\lambda-(\lambda+4)\sqrt{\lambda^2+8\lambda}\right)/8$ and $\beta = \left(\lambda^2+4\lambda-\lambda\sqrt{\lambda^2+8\lambda}\right)/4$ plotted as functions of λ on semi-log and log-log scales.]
148
Target Estimators
α-β (2-D) Filter with White Noise Acceleration Model

For the (continuous) white-noise acceleration model the process noise matrix is
$$Q(k) = q\begin{bmatrix} T^3/3 & T^2/2\\ T^2/2 & T\end{bmatrix}$$
and $P(k+1|k) = \Phi\,P(k|k)\,\Phi^T + Q$ gives the steady-state relations
$$m_{11} = p_{11} + 2T p_{12} + T^2 p_{22} + qT^3/3,\qquad
m_{12} = p_{12} + T p_{22} + qT^2/2,\qquad
m_{22} = p_{22} + qT$$
which, after substituting $p_{11} = (1-k_{11})m_{11}$, $p_{12} = (1-k_{11})m_{12}$, $p_{22} = m_{22}-k_{12}m_{12}$, become
$$k_{11}\,m_{11} = 2T(1-k_{11})m_{12} + T^2(m_{22}-k_{12}m_{12}) + qT^3/3$$
$$k_{11}\,m_{12} = T(m_{22}-k_{12}m_{12}) + qT^2/2$$
$$k_{12}\,m_{12} = qT$$
149
Target Estimators
α-β (2-D) Filter with White Noise Acceleration Model (continue – 1)

We obtained the following 5 equations with 5 unknowns $k_{11}, k_{12}, m_{11}, m_{12}, m_{22}$:

(1) $k_{11} = m_{11}/(m_{11}+\sigma_P^2)$
(2) $k_{12} = m_{12}/(m_{11}+\sigma_P^2)$
(3) $k_{11}\,m_{11} = 2T(1-k_{11})m_{12} + T^2(m_{22}-k_{12}m_{12}) + qT^3/3$
(4) $k_{11}\,m_{12} = T(m_{22}-k_{12}m_{12}) + qT^2/2$
(5) $k_{12}\,m_{12} = qT$

Substituting the results obtained from (1), (2) and (5) into (3) and (4), and eliminating $m_{22}$, yields
$$k_{11}^2 + k_{11}k_{12}T + \frac{1}{6}k_{12}^2 T^2 - 2\,k_{12}T = 0$$
150
Target Estimators
α-β (2-D) Filter with White Noise Acceleration Model (continue – 2)

With the α, β parameters defined as $\alpha := k_{11}$, $\beta := k_{12}T$, the previous equation is written as
$$\alpha^2 + \alpha\beta + \frac{\beta^2}{6} - 2\beta = 0$$
which can be used to write α as a function of β:
$$\alpha = \sqrt{2\beta + \frac{\beta^2}{12}} - \frac{\beta}{2}$$

From (5) and (2), $k_{12}^2 = qT(1-\alpha)/\sigma_P^2$, so that, defining
$$\lambda_c^2 := \frac{q\,T^3}{\sigma_P^2}$$
the equation for solving β is
$$\lambda_c^2 = \frac{\beta^2}{1-\alpha} = \frac{\beta^2}{1 - \sqrt{2\beta+\beta^2/12} + \beta/2}$$
which can be solved numerically.
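A minimal numeric sketch of this solve (an addition, not from the slides); the admissible range $\beta \in (0,\ 3-\sqrt{3})$ follows from requiring α < 1:

```python
# Solve for beta in the continuous white-noise-acceleration case by combining
# alpha(beta) = sqrt(2*beta + beta**2/12) - beta/2 with
# beta**2 / (1 - alpha) = lam_c**2, using simple bisection (f is increasing).
import numpy as np

def beta_cwna(lam_c, tol=1e-12):
    f = lambda b: b**2 / (1.0 - (np.sqrt(2*b + b**2/12) - b/2)) - lam_c**2
    lo, hi = 1e-12, 3.0 - np.sqrt(3.0) - 1e-9   # alpha < 1 requires beta < 3 - sqrt(3)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

b = beta_cwna(1.0)
a = np.sqrt(2*b + b**2/12) - b/2
print(a, b)
```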
151
Target Estimators
α-β Filter with White Noise Acceleration Model (continue – 3)

As before, $p_{11} = (1-k_{11})m_{11}$, $p_{12} = (1-k_{11})m_{12}$, $p_{22} = m_{22}-k_{12}m_{12}$, which in terms of α and β give the same closed-form updated covariances:
$$p_{11} = \alpha\,\sigma_P^2,\qquad p_{12} = \frac{\beta}{T}\,\sigma_P^2,\qquad p_{22} = \frac{\beta\,(\alpha-\beta/2)}{1-\alpha}\;\frac{\sigma_P^2}{T^2}$$
152
Target Estimators
α-β-γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model

We want to find the steady-state form of the filter for
$$\frac{d}{dt}\begin{bmatrix} x\\ \dot{x}\\ \ddot{x}\end{bmatrix} = \underbrace{\begin{bmatrix} 0&1&0\\ 0&0&1\\ 0&0&0\end{bmatrix}}_{A}\begin{bmatrix} x\\ \dot{x}\\ \ddot{x}\end{bmatrix} + \underbrace{\begin{bmatrix} 0\\ 0\\ 1\end{bmatrix}}_{B}\,w$$
where x, $\dot{x}$, $\ddot{x}$ are the position, velocity and acceleration. Assume that only the position measurements are available:
$$z_{k+1} = \underbrace{[\,1\ \ 0\ \ 0\,]}_{H}\,x_{k+1} + v_{k+1},\qquad E[v_{k+1}] = 0,\quad E[v_{k+1}v_{j+1}] = \sigma_P^2\,\delta_{kj}$$
Discrete system:
$$x_{k+1} = \Phi\,x_k + \Gamma\,w_k,\qquad \Phi = \begin{bmatrix} 1 & T & T^2/2\\ 0 & 1 & T\\ 0 & 0 & 1\end{bmatrix},\quad \Gamma = \begin{bmatrix} T^2/2\\ T\\ 1\end{bmatrix},\quad E[w_k w_j] = \sigma_w^2\,\delta_{kj}$$
153
Target Estimators
α-β-γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model (continue – 1)

Piecewise (between samples) constant white-noise acceleration model:
$$Q = E\!\left[\Gamma\,w(k)\,w(k)\,\Gamma^T\right] = \sigma_w^2\begin{bmatrix} T^4/4 & T^3/2 & T^2/2\\ T^3/2 & T^2 & T\\ T^2/2 & T & 1\end{bmatrix}$$

Guideline for the choice of process noise intensity: for this model $\sigma_w$ should be of the order of the maximum acceleration increment over a sampling period, $\Delta a_M$. A practical range is $0.5\,\Delta a_M \le \sigma_w \le \Delta a_M$.
154
Target Estimators
α-β-γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model (continue – 2)

The Target Maneuvering Index is defined as for the α-β filter:
$$\lambda := \frac{\sigma_w\,T^2}{\sigma_P}$$

With the gain convention $K = [\,\alpha,\ \beta/T,\ \gamma/(2T^2)\,]^T$, the three equations that yield the optimal steady-state gains are
$$\lambda^2 = \frac{\gamma^2}{4\,(1-\alpha)},\qquad \beta = 2(2-\alpha) - 4\sqrt{1-\alpha},\qquad \gamma = \frac{\beta^2}{\alpha}$$
This system of three nonlinear equations can be solved numerically.

The corresponding update state covariance expressions are:
$$p_{11} = \alpha\,\sigma_P^2,\qquad p_{12} = \frac{\beta}{T}\,\sigma_P^2,\qquad p_{13} = \frac{\gamma}{2T^2}\,\sigma_P^2$$
$$p_{22} = \frac{8\alpha\beta + \gamma(\beta-2\alpha-4)}{8\,(1-\alpha)}\;\frac{\sigma_P^2}{T^2},\qquad p_{23} = \frac{\beta\,(2\beta-\gamma)}{4\,(1-\alpha)}\;\frac{\sigma_P^2}{T^3},\qquad p_{33} = \frac{\gamma\,(2\beta-\gamma)}{4\,(1-\alpha)}\;\frac{\sigma_P^2}{T^4}$$
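Rather than trusting the reconstructed gain relations, one can obtain the steady-state gains directly by iterating the Riccati recursion of this model to convergence; a minimal sketch (an addition, not from the slides, with placeholder values of T and σ_P) that can also be used to cross-check the printed relations:

```python
# Steady-state alpha-beta-gamma gains from the Riccati recursion of the
# piecewise constant Wiener process acceleration model
# (gain convention K = [alpha, beta/T, gamma/(2 T**2)]).
import numpy as np

def abg_from_riccati(lam, T=1.0, sig_P=1.0, iters=2000):
    sig_w = lam * sig_P / T**2                  # from lambda = sig_w * T**2 / sig_P
    Phi = np.array([[1, T, T**2/2], [0, 1, T], [0, 0, 1.0]])
    Gam = np.array([[T**2/2], [T], [1.0]])
    Q = sig_w**2 * (Gam @ Gam.T)
    H = np.array([[1.0, 0, 0]])
    P = np.eye(3)
    for _ in range(iters):
        M = Phi @ P @ Phi.T + Q                 # prediction covariance
        K = M @ H.T / (M[0, 0] + sig_P**2)      # Kalman gain
        P = (np.eye(3) - K @ H) @ M             # updated covariance
    alpha, beta, gamma = K[0, 0], K[1, 0] * T, K[2, 0] * 2 * T**2
    return alpha, beta, gamma

print(abg_from_riccati(1.0))
```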
155
Target Estimators
α-β-γ (3-D) Filter with Piecewise Constant Wiener Process Acceleration Model (continue – 3)

[Figure: the α, β and γ gains plotted as functions of λ on semi-log and log-log scales.]
156
Target Estimators
α-β (2-D) Filter and α-β-γ (3-D) Filter – Summary

Advantages:
• Computation requirements (memory, computation time) are low.
• Quick (but possibly dirty) evaluation of track performance, as measured by the steady-state variances.

Disadvantages:
• Very limited capability in clutter.
• When used independently for each coordinate, one can encounter instabilities due to the decoupling.
157
Nonlinear Estimation (Filtering)

[Figure: tracking-loop block diagram – Sensor Data Processing and Measurement Formation → Observation-to-Track Association → Track Maintenance (Initialization, Confirmation and Deletion) → Filtering and Prediction → Gating Computations. From: Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986; Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999.]

The assumptions that the system and the measurements are linear and that the noises are Gaussian are not always valid, for example:
• Angle and range measurements (measurement-to-state nonlinearities)
• Tracking in the presence of constraints
• Terrain navigation
• Tracking extended (non-point) targets
Therefore we must deal with Nonlinear Filters and use approximations.
158
Nonlinear Estimation (Filtering)

The Nonlinear Filters are approximations of the Optimal Bayesian Estimators:
• Analytic approximations (linearization of the models) – Extended Kalman Filter
• Sampling approaches – Unscented Kalman Filter, Particle Filter
• Numerical integration – approximate $p(x_k|Z_{1:k})$ on a grid of nodes
• Gaussian Sum Filter – approximate $p(x_k|Z_{1:k})$ with a Gaussian mixture
159
Recursive Bayesian Estimation
Additive Gaussian Nonlinear Filter

$$x_{k+1} = f(x_k) + w_k,\qquad z_k = h(x_k) + v_k$$

Summary (see the "Bayesian Estimation" presentation). The moments needed by the filter are Gaussian-weighted integrals:
$$\hat{x}_{k|k-1} = E[x_k|Z_{1:k-1}] = \int f(x_{k-1})\,\mathcal{N}\!\left(x_{k-1};\hat{x}_{k-1|k-1},P_{k-1|k-1}\right)dx_{k-1}$$
$$P^{xx}_{k|k-1} = \int f(x_{k-1})f^T(x_{k-1})\,\mathcal{N}\!\left(x_{k-1};\hat{x}_{k-1|k-1},P_{k-1|k-1}\right)dx_{k-1} - \hat{x}_{k|k-1}\hat{x}_{k|k-1}^T + Q_{k-1}$$
$$\hat{z}_{k|k-1} = E[z_k|Z_{1:k-1}] = \int h(x_k)\,\mathcal{N}\!\left(x_k;\hat{x}_{k|k-1},P_{k|k-1}\right)dx_k$$
$$P^{zz}_{k|k-1} = \int h(x_k)h^T(x_k)\,\mathcal{N}\!\left(x_k;\hat{x}_{k|k-1},P_{k|k-1}\right)dx_k - \hat{z}_{k|k-1}\hat{z}_{k|k-1}^T + R_k$$
$$P^{xz}_{k|k-1} = \int x_k\,h^T(x_k)\,\mathcal{N}\!\left(x_k;\hat{x}_{k|k-1},P_{k|k-1}\right)dx_k - \hat{x}_{k|k-1}\hat{z}_{k|k-1}^T$$

The Kalman Filter that uses these computations is given by
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - \hat{z}_{k|k-1}\right),\qquad K_k = P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}$$
$$P^{xx}_{k|k} = P^{xx}_{k|k-1} - P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}\left(P^{xz}_{k|k-1}\right)^T = P^{xx}_{k|k-1} - K_k P^{zz}_{k|k-1} K_k^T$$
160
Recursive Bayesian Estimation
Additive Gaussian Nonlinear Filter (continue – 5)

$$x_{k+1} = f(x_k) + w_k,\qquad z_k = h(x_k) + v_k$$

To obtain the Kalman Filter, we must approximate integrals of the type
$$I = \int g(x)\,\mathcal{N}\!\left(x;\hat{x},P^{xx}\right)dx$$
Four approximations are presented:
(1) Extended Kalman Filter (linearization)
(2) Gauss–Hermite Quadrature Approximation
(3) Unscented Transformation Approximation
(4) Monte Carlo Approximation
161
Extended Kalman Filter

In the Extended Kalman Filter (EKF) the state transition and observation models need not be linear functions of the state, but may instead be (differentiable) functions:
$$x(k) = f[k-1, x(k-1), u(k-1)] + w(k-1)\qquad\text{(state vector dynamics)}$$
$$z(k) = h[k, x(k), u(k)] + v(k)\qquad\text{(measurements)}$$
with
$$e_x(k) := x(k) - E[x(k)],\quad E[e_x e_x^T] = P(k);\qquad E[w(k)] = 0,\quad E[e_w(k)e_w^T(l)] = Q(k)\,\delta_{kl};\qquad E[e_w(k)e_v^T(l)] = 0$$

The function f can be used to compute the predicted state from the previous estimate, and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is computed. Taylor expansion about the estimate:
$$e_x(k) \simeq \underbrace{\left.\frac{\partial f}{\partial x}\right|_{E[x(k-1)]}}_{\text{Jacobian}} e_x(k-1) + \frac{1}{2}\,e_x^T(k-1)\underbrace{\left.\frac{\partial^2 f}{\partial x^2}\right|_{E[x(k-1)]}}_{\text{Hessian}} e_x(k-1) + \dots + e_w(k-1)$$
$$e_z(k) \simeq \left.\frac{\partial h}{\partial x}\right|_{E[x(k)]} e_x(k) + \frac{1}{2}\,e_x^T(k)\left.\frac{\partial^2 h}{\partial x^2}\right|_{E[x(k)]} e_x(k) + \dots + e_v(k)$$
162
Extended Kalman Filter State Estimation (one cycle)

0. Initialization: $\hat{x}_0 = E[x_0]$, $P_{0|0} = E\!\left[(x_0-\hat{x}_0)(x_0-\hat{x}_0)^T\right]$
1. State vector prediction: $\hat{x}_{k|k-1} = f[k-1,\ \hat{x}_{k-1|k-1},\ u_{k-1}]$
2. Jacobians computation: $\Phi_{k-1} = \left.\dfrac{\partial f}{\partial x}\right|_{\hat{x}_{k-1|k-1}}$, $\quad H_k = \left.\dfrac{\partial h}{\partial x}\right|_{\hat{x}_{k|k-1}}$
3. Covariance matrix extrapolation: $P_{k|k-1} = \Phi_{k-1}\,P_{k-1|k-1}\,\Phi_{k-1}^T + Q_{k-1}$
4. Innovation covariance: $S_k = H_k\,P_{k|k-1}\,H_k^T + R_k$
5. Gain matrix computation: $K_k = P_{k|k-1}\,H_k^T\,S_k^{-1}$
6. Measurement and innovation: $i_k = z_k - \hat{z}_{k|k-1}$, with $\hat{z}_{k|k-1} = h[k,\ \hat{x}_{k|k-1}]$
7. Filtering: $\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,i_k$
8. Covariance matrix updating:
$$P_{k|k} = P_{k|k-1} - P_{k|k-1}H_k^T S_k^{-1}H_k P_{k|k-1} = P_{k|k-1} - K_k S_k K_k^T = (I-K_kH_k)\,P_{k|k-1} = (I-K_kH_k)\,P_{k|k-1}\,(I-K_kH_k)^T + K_k R_k K_k^T$$
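A minimal sketch of the cycle above (an addition, not from the slides); `f`, `h`, `F_jac` and `H_jac` are placeholders supplied by the application:

```python
# One EKF cycle matching steps 1-8 above.
import numpy as np

def ekf_cycle(x, P, z, u, f, h, F_jac, H_jac, Q, R):
    # Steps 1-3: state prediction and covariance extrapolation
    F = F_jac(x, u)
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Steps 4, 6: innovation and its covariance
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    nu = z - h(x_pred)
    # Steps 5, 7, 8: gain, filtered state, covariance update (Joseph form)
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ nu
    I_KH = np.eye(len(x)) - K @ H
    P_new = I_KH @ P_pred @ I_KH.T + K @ R @ K.T
    return x_new, P_new
```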
163
Extended Kalman Filter State Estimation (one cycle)

[Figure: one EKF cycle from $t_{k-1}$ to $t_k$ – evolution of the true state $x_{k-1}\to x_k$ under f and the process noise; measurement $z_k = h(k, x_k) + v_k$; state estimation $\hat{x}_{k-1|k-1}\to\hat{x}_{k|k-1}\to\hat{x}_{k|k}$; covariance and gain computations $P_{k-1|k-1}\to P_{k|k-1}\to S_k\to K_k\to P_{k|k}$, with the Jacobians evaluated at the current estimates. Portrait: Rudolf E. Kalman (1920–2016).]
164
Extended Kalman Filter

Criticism of the Extended Kalman Filter

Unlike its linear counterpart, the Extended Kalman Filter is not an optimal estimator. In addition, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the filter may quickly diverge, owing to its linearization. Another problem with the Extended Kalman Filter is that the estimated covariance matrix tends to underestimate the true covariance matrix and therefore risks becoming inconsistent in the statistical sense unless "stabilizing noise" is added. Having stated this, the Extended Kalman Filter can give reasonable performance and is arguably the de facto standard in navigation systems and GPS.
165
Recursive Bayesian Estimation
Additive Gaussian Nonlinear Filter (continue – 5)

Gauss–Hermite Quadrature Approximation

$$I = \int g(x)\,\mathcal{N}\!\left(x;\hat{x},P^{xx}\right)dx = \int g(x)\,\frac{1}{(2\pi)^{n/2}\,|P^{xx}|^{1/2}}\exp\!\left[-\frac{1}{2}(x-\hat{x})^T (P^{xx})^{-1}(x-\hat{x})\right]dx$$

Let $P^{xx} = S^T S$ be a Cholesky decomposition, and define $z := \frac{1}{\sqrt{2}}\,S^{-T}(x-\hat{x})$. Then
$$I = \frac{1}{\pi^{n/2}}\int \tilde{g}(z)\,e^{-z^T z}\,dz,\qquad \tilde{g}(z) := g\!\left(\hat{x}+\sqrt{2}\,S^T z\right)$$
This integral can be approximated using the Gauss–Hermite quadrature rule
$$\int e^{-z^2} f(z)\,dz \approx \sum_{i=1}^{M} w_i\,f(z_i)$$
(Carl Friedrich Gauss, 1777–1855; Charles Hermite, 1822–1901; André-Louis Cholesky, 1875–1918)
166
Recursive Bayesian Estimation
Gauss–Hermite Quadrature Approximation (continue – 1)

The quadrature points $z_i$ and weights $w_i$ are defined as follows. A set of orthonormal Hermite polynomials is generated from the recurrence relationship
$$H_{-1}(z) = 0,\qquad H_0(z) = \pi^{-1/4},\qquad z\,H_j(z) = \sqrt{\frac{j+1}{2}}\,H_{j+1}(z) + \sqrt{\frac{j}{2}}\,H_{j-1}(z)$$
or, in matrix form, with $h(z) := [H_0(z), H_1(z), \dots, H_{M-1}(z)]^T$ and $e_M := [0,\dots,0,1]^T$:
$$z\,h(z) = J_M\,h(z) + \sqrt{\frac{M}{2}}\,H_M(z)\,e_M$$
where $J_M$ is the symmetric tridiagonal matrix with zero diagonal and off-diagonal entries $(J_M)_{j,j+1} = (J_M)_{j+1,j} = \sqrt{j/2}$, $j = 1,\dots,M-1$.
167
Recursive Bayesian Estimation
Gauss–Hermite Quadrature Approximation (continue – 2)

Let us evaluate this equation at the M roots $z_i$ for which $H_M(z_i) = 0$, $i = 1,\dots,M$:
$$z_i\,h(z_i) = J_M\,h(z_i),\qquad i = 1,\dots,M$$
From this equation we can see that the $z_i$ and the vectors $h(z_i) = [H_0(z_i), H_1(z_i), \dots, H_{M-1}(z_i)]^T$ are the eigenvalues and eigenvectors, respectively, of the symmetric matrix $J_M$. Because of the symmetry of $J_M$ the eigenvectors are orthogonal and can be normalized. Define
$$W_i := \left[\sum_{j=0}^{M-1} H_j^2(z_i)\right]^{-1},\qquad v_j^{(i)} := H_j(z_i)\sqrt{W_i}$$
We have
$$\sum_{j=0}^{M-1} v_j^{(i)}\,v_j^{(l)} = \sqrt{W_i W_l}\;h^T(z_i)\,h(z_l) = \begin{cases}1 & i = l\\ 0 & i\ne l\end{cases}$$
and the $W_i$ are the quadrature weights.
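A minimal sketch of this construction (an addition, not from the slides), cross-checked against NumPy's built-in Gauss–Hermite rule:

```python
# Gauss-Hermite points and weights from the eigen-decomposition of the
# tridiagonal Jacobi matrix J_M (Golub-Welsch construction).
import numpy as np

def gauss_hermite(M):
    off = np.sqrt(np.arange(1, M) / 2.0)        # off-diagonals sqrt(j/2)
    J = np.diag(off, 1) + np.diag(off, -1)
    z, V = np.linalg.eigh(J)                    # nodes = eigenvalues of J_M
    w = np.sqrt(np.pi) * V[0, :]**2             # weights from the first components
    return z, w

z, w = gauss_hermite(5)
z_ref, w_ref = np.polynomial.hermite.hermgauss(5)
print(np.allclose(z, z_ref), np.allclose(w, w_ref))   # both True
```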
168
Unscented Kalman Filter

When the state transition and observation models – that is, the predict and update functions f and h (see above) – are highly nonlinear, the Extended Kalman Filter can give particularly poor performance [JU97]. This is because only the mean is propagated through the nonlinearity. The Unscented Kalman Filter (UKF) [JU97] uses a deterministic sampling technique known as the unscented transformation to pick a minimal set of sample points (called "sigma points") around the mean. These sigma points are then propagated through the nonlinear functions, and the covariance of the estimate is then recovered. The result is a filter which more accurately captures the true mean and covariance. (This can be verified using Monte Carlo sampling or through a Taylor series expansion of the posterior statistics.) In addition, this technique removes the requirement to analytically calculate Jacobians, which for complex functions can be a difficult task in itself.

$$x(k) = f[k-1, x(k-1), u(k-1)] + w(k-1)\quad\text{(state vector dynamics)},\qquad z(k) = h[k, x(k)] + v(k)\quad\text{(measurements)}$$

Given $E[x(k)]$ and $P_x(k) = E[e_x(k)e_x^T(k)]$, the unscented algorithm determines $E[z(k)]$ and $P_z(k) = E[e_z(k)e_z^T(k)]$.
169
Unscented Kalman Filter
Propagating Means and Covariances Through Nonlinear Transformations

Consider a nonlinear function $y = f(x)$. Assume x is a random variable with a probability density function $p_X(x)$ (known or unknown), with mean $\hat{x} = E[x]$ and covariance $P^{xx} = E[(x-\hat{x})(x-\hat{x})^T]$. Develop f in a Taylor series around $\hat{x}$, with $\delta x := x - \hat{x}$:
$$f(\hat{x}+\delta x) = \sum_{n=0}^{\infty}\frac{1}{n!}\left(\sum_{j=1}^{n_x}\delta x_j\frac{\partial}{\partial x_j}\right)^{\!n} f\,\Big|_{\hat{x}}$$
Define also the operator
$$D^n_{\delta x}f := \left(\sum_{j=1}^{n_x}\delta x_j\frac{\partial}{\partial x_j}\right)^{\!n} f$$
Let us compute
$$\hat{y} = E\!\left[f(\hat{x}+\delta x)\right] = \sum_{n=0}^{\infty}\frac{1}{n!}\,E\!\left[D^n_{\delta x}f\right]\Big|_{\hat{x}}$$
using $E[\delta x] = 0$ and $E[\delta x\,\delta x^T] = E[(x-\hat{x})(x-\hat{x})^T] = P^{xx}$.
170
Unscented Kalman Filter
Propagating Means and Covariances Through Nonlinear Transformations (continue – 1)

Since all the differentials of f are computed around the (non-random) mean $\hat{x}$:
$$\hat{y} = f(\hat{x}) + E\!\left[\sum_j \delta x_j\frac{\partial}{\partial x_j}\right] f\Big|_{\hat{x}} + \frac{1}{2!}\,E\!\left[D^2_{\delta x}f\right]\Big|_{\hat{x}} + \frac{1}{3!}\,E\!\left[D^3_{\delta x}f\right]\Big|_{\hat{x}} + \frac{1}{4!}\,E\!\left[D^4_{\delta x}f\right]\Big|_{\hat{x}} + \dots$$
The first-order term vanishes because $E[\delta x] = 0$, and the second-order term is
$$\frac{1}{2}\,E\!\left[(\delta x^T\nabla)^2\right] f\Big|_{\hat{x}} = \frac{1}{2}\left(\nabla^T P^{xx}\,\nabla\right) f\Big|_{\hat{x}}$$
so that
$$\hat{y} = f(\hat{x}) + \frac{1}{2}\left(\nabla^T P^{xx}\,\nabla\right) f\Big|_{\hat{x}} + \frac{1}{3!}\,E\!\left[D^3_{\delta x}f\right] + \frac{1}{4!}\,E\!\left[D^4_{\delta x}f\right] + \dots$$
171
Unscented Kalman Filter
Propagating Means and Covariances Through Nonlinear Transformations (continue – 2)

The Unscented Transformation (UT), proposed by Simon J. Julier and Jeffrey K. Uhlmann, uses a set of "sigma points" to provide an approximation of the probabilistic properties through the nonlinear function $y = f(x)$. A set of sigma points S consists of p+1 vectors and their associated weights, $S = \{(x^{(i)}, W^{(i)}),\ i = 0, 1, \dots, p\}$.

(1) Compute the transformation of the sigma points through the nonlinear transformation f:
$$y^{(i)} = f(x^{(i)}),\qquad i = 0, 1, \dots, p$$
(2) Compute the approximation of the mean:
$$\hat{y} = \sum_{i=0}^{p} W^{(i)}\,y^{(i)}$$
The estimation is unbiased if $\sum_{i=0}^{p} W^{(i)} = 1$.
(3) The approximation of the output covariance is given by
$$P^{yy} = \sum_{i=0}^{p} W^{(i)}\left(y^{(i)}-\hat{y}\right)\left(y^{(i)}-\hat{y}\right)^T$$
172
Unscented Kalman Filter
Unscented Transformation (UT) (continue – 1)

One set of points that satisfies the above conditions consists of a symmetric set of $p = 2n_x$ points that lie on the $\sqrt{n_x}$-th covariance contour of $P^{xx}$, plus the mean itself:
$$x^{(0)} = \hat{x},\qquad W^{(0)} = W_0$$
$$x^{(i)} = \hat{x} + \left(\sqrt{\frac{n_x}{1-W_0}\,P^{xx}}\right)_{\!i},\qquad W^{(i)} = \frac{1-W_0}{2n_x},\qquad i = 1,\dots,n_x$$
$$x^{(i+n_x)} = \hat{x} - \left(\sqrt{\frac{n_x}{1-W_0}\,P^{xx}}\right)_{\!i},\qquad W^{(i+n_x)} = \frac{1-W_0}{2n_x},\qquad i = 1,\dots,n_x$$
where $\left(\sqrt{\cdot}\right)_i$ is the i-th row or column of the matrix square root of $n_x P^{xx}/(1-W_0)$ (the original covariance matrix $P^{xx}$ multiplied by $n_x/(1-W_0)$). This implies
$$\sum_{i=1}^{n_x}\left(\sqrt{\frac{n_x}{1-W_0}\,P^{xx}}\right)_{\!i}\left(\sqrt{\frac{n_x}{1-W_0}\,P^{xx}}\right)_{\!i}^{T} = \frac{n_x}{1-W_0}\,P^{xx}$$
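A minimal sketch of the unscented transform with this sigma-point set (an addition, not from the slides); the Cholesky factor is used as the matrix square root and W₀ = 1/3 is a placeholder choice:

```python
# Unscented transform with the symmetric 2*nx+1 sigma-point set above.
import numpy as np

def unscented_transform(f, x_hat, Pxx, W0=1.0/3.0):
    nx = len(x_hat)
    S = np.linalg.cholesky(nx * Pxx / (1.0 - W0))   # columns are sigma directions
    sigma = [x_hat] + [x_hat + S[:, i] for i in range(nx)] \
                    + [x_hat - S[:, i] for i in range(nx)]
    W = np.r_[W0, np.full(2 * nx, (1.0 - W0) / (2 * nx))]
    Y = np.array([f(s) for s in sigma])
    y_hat = W @ Y                                   # weighted sample mean
    Pyy = sum(Wi * np.outer(yi - y_hat, yi - y_hat) for Wi, yi in zip(W, Y))
    return y_hat, Pyy

y, Pyy = unscented_transform(lambda x: np.array([x[0]**2, x[1]]),
                             np.array([1.0, 0.0]), np.eye(2))
```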
173
Unscented Kalman Filter
Unscented Transformation (UT) (continue – 2)

Unscented algorithm accuracy. Propagating the sigma points, with $\delta x^{(i)} := x^{(i)} - \hat{x} = \pm\left(\sqrt{\frac{n_x}{1-W_0}P^{xx}}\right)_i$:
$$y^{(i)} = f(x^{(i)}) = \begin{cases} f(\hat{x}) & i = 0\\[4pt] f(\hat{x}) + \displaystyle\sum_{n=1}^{\infty}\frac{1}{n!}\,D^n_{\delta x^{(i)}}f & i = 1,\dots,2n_x\end{cases}$$
Since the set is symmetric, the odd-order terms cancel in pairs, and
$$\hat{y}_{UT} = \sum_{i=0}^{2n_x}W^{(i)}\,y^{(i)} = f(\hat{x}) + \frac{1-W_0}{2n_x}\sum_{i=1}^{2n_x}\left[\frac{1}{2!}\,D^2_{\delta x^{(i)}}f + \frac{1}{4!}\,D^4_{\delta x^{(i)}}f + \frac{1}{6!}\,D^6_{\delta x^{(i)}}f + \dots\right]$$
174
Unscented Kalman Filter
Unscented Transformation (UT) (continue – 3)

For the second-order term, using the covariance identity of the sigma-point set,
$$\frac{1-W_0}{2n_x}\sum_{i=1}^{2n_x}\frac{1}{2}\,D^2_{\delta x^{(i)}}f = \frac{1}{2}\left(\nabla^T P^{xx}\,\nabla\right) f\Big|_{\hat{x}}$$
Finally:
$$\hat{y}_{UT} = f(\hat{x}) + \frac{1}{2}\left(\nabla^T P^{xx}\,\nabla\right) f\Big|_{\hat{x}} + \frac{1-W_0}{2n_x}\sum_{i=1}^{2n_x}\left[\frac{1}{4!}\,D^4_{\delta x^{(i)}}f + \frac{1}{6!}\,D^6_{\delta x^{(i)}}f + \dots\right]$$
We found earlier
$$\hat{y} = f(\hat{x}) + \frac{1}{2}\left(\nabla^T P^{xx}\,\nabla\right) f\Big|_{\hat{x}} + \frac{1}{3!}\,E\!\left[D^3_{\delta x}f\right] + \frac{1}{4!}\,E\!\left[D^4_{\delta x}f\right] + \dots$$
We can see that the two expressions agree exactly to the third order.
175
Unscented Kalman Filter

[Figure: comparison of mean/covariance propagation through $y = f(x)$ – actual (sampling), shown against the true mean and covariance; linearized (EKF), giving $\hat{y} = f(\hat{x})$ and $P^{yy} = A P^{xx} A^T$; and the Unscented Transformation, where the sigma points are transformed through f and the weighted sample mean (UT mean) and weighted sample covariance (UT covariance) are recovered.]
176
Unscented Kalman Filter

[Figure: sigma points $\chi_i = \hat{x} \pm \left(\sqrt{P^x}\right)_i$ drawn on the covariance ellipse of $(\hat{x}, P^x)$; each point is propagated through f, and the weighted sample mean and weighted sample covariance give $\hat{z}$ and $P^z$.]
177
Unscented Kalman Filter – UKF Summary

0. Initialization: $\hat{x}_0 = E[x_0]$, $P_{0|0} = E[(x_0-\hat{x}_0)(x_0-\hat{x}_0)^T]$. Augment the state with the process and measurement noises, $x^a := [x^T\ w^T\ v^T]^T$, so that
$$\hat{x}^a_0 = [\hat{x}_0^T\ \ 0\ \ 0]^T,\qquad P^a_{0|0} = \mathrm{diag}\!\left(P_{0|0},\ Q,\ R\right)$$
For k = 1, 2, ..., with the system
$$x_k = f[k-1, x_{k-1}, u_{k-1}] + w_{k-1},\quad E[w_kw_l^T] = Q\,\delta_{kl};\qquad z_k = h[k, x_k] + v_k,\quad E[v_kv_l^T] = R\,\delta_{kl}$$

1. Calculate the sigma points (L is the augmented state dimension):
$$\chi^{(0)}_{k-1|k-1} = \hat{x}_{k-1|k-1},\qquad \chi^{(i)}_{k-1|k-1} = \hat{x}_{k-1|k-1} \pm \left(\sqrt{\frac{L}{1-W_0}\,P_{k-1|k-1}}\right)_{\!i},\quad i = 1,\dots,L$$
with weights $W^{(0)} = W_0$, $W^{(i)} = (1-W_0)/(2L)$, $i = 1,\dots,2L$.

2. State prediction and its covariance:
$$\chi^{(i)}_{k|k-1} = f[k-1,\ \chi^{(i)}_{k-1|k-1},\ u_{k-1}],\qquad \hat{x}_{k|k-1} = \sum_{i=0}^{2L}W^{(i)}\chi^{(i)}_{k|k-1}$$
$$P_{k|k-1} = \sum_{i=0}^{2L}W^{(i)}\left(\chi^{(i)}_{k|k-1}-\hat{x}_{k|k-1}\right)\left(\chi^{(i)}_{k|k-1}-\hat{x}_{k|k-1}\right)^T$$
178
Unscented Kalman Filter – UKF Summary (continue – 1)

3. Measurement prediction:
$$\zeta^{(i)}_{k|k-1} = h[k,\ \chi^{(i)}_{k|k-1}],\qquad \hat{z}_{k|k-1} = \sum_{i=0}^{2L}W^{(i)}\zeta^{(i)}_{k|k-1}$$
4. Innovation and its covariance:
$$i_k = z_k - \hat{z}_{k|k-1},\qquad S_k = P^{zz}_{k|k-1} = \sum_{i=0}^{2L}W^{(i)}\left(\zeta^{(i)}_{k|k-1}-\hat{z}_{k|k-1}\right)\left(\zeta^{(i)}_{k|k-1}-\hat{z}_{k|k-1}\right)^T$$
5. Kalman gain computations:
$$P^{xz}_{k|k-1} = \sum_{i=0}^{2L}W^{(i)}\left(\chi^{(i)}_{k|k-1}-\hat{x}_{k|k-1}\right)\left(\zeta^{(i)}_{k|k-1}-\hat{z}_{k|k-1}\right)^T,\qquad K_k = P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}$$
6. Update state and its covariance:
$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,i_k,\qquad P_{k|k} = P_{k|k-1} - K_k S_k K_k^T$$
k := k+1 and return to step 1.
179
Unscented Kalman Filter State Estimation (one cycle)

[Figure: one UKF cycle from $t_{k-1}$ to $t_k$ – sigma-point computation, propagation through f and h, weighted sample means and covariances, innovation covariance, gain and update; analogous to the EKF cycle diagram but with no Jacobians required. Portraits: Simon J. Julier, Jeffrey K. Uhlmann.]
180
Monte Carlo Kalman Filter (MCKF)
Numerical Integration Using a Monte Carlo Approximation

A Monte Carlo approximation of the expected-value integrals uses a discrete approximation to the Gaussian PDF $\mathcal{N}(x;\hat{x},P^{xx})$. Draw $N_s$ samples $\{x^i,\ i = 1,\dots,N_s\}$ from $\mathcal{N}(x;\hat{x},P^{xx})$, with weights $\{w^i = 1/N_s\}$; then $\mathcal{N}(x;\hat{x},P^{xx})$ can be approximated by
$$\mathcal{N}(x;\hat{x},P^{xx}) \approx p(x) = \sum_{i=1}^{N_s}w^i\,\delta(x-x^i) = \frac{1}{N_s}\sum_{i=1}^{N_s}\delta(x-x^i)$$
We can see that for any x we have
$$\int_x \sum_i w^i\,\delta(x-x^i)\,dx = \sum_i w^i = 1 = \int_x \mathcal{N}(x;\hat{x},P^{xx})\,dx$$
The weight $w^i$ is not the probability of the point $x^i$. The probability density near $x^i$ is given by the density of the points in the region around $x^i$, which can be obtained by a normalized histogram of all $x^i$.
181
Numerical Integration Using a Monte Carlo Approximation (continue – 1)

The expected value of any function g(x) can be estimated from
$$E[g(x)] = \int g(x)\,p(x)\,dx \approx \sum_{i=1}^{N_s}w^i\,g(x^i) = \frac{1}{N_s}\sum_{i=1}^{N_s}g(x^i)$$
which is the sample mean. Given the system
$$x_k = f[k-1, x_{k-1}, u_{k-1}] + w_{k-1},\qquad z_k = h[k, x_k] + v_k$$
and assuming that we have computed the mean and covariance $\hat{x}_{k-1|k-1}, P_{k-1|k-1}$ at stage k-1, we use the Monte Carlo approximation to compute the predicted mean and covariance $\hat{x}_{k|k-1}, P_{k|k-1}$. Draw $N_s$ samples
$$x^i_{k-1|k-1} \sim p(x_{k-1}|Z_{1:k-1}) = \mathcal{N}\!\left(x_{k-1};\hat{x}_{k-1|k-1},P_{k-1|k-1}\right),\qquad i = 1,\dots,N_s$$
(~ means "generate (draw) samples from a predefined distribution"). Then
$$\hat{x}_{k|k-1} = E[x_k|Z_{1:k-1}] \approx \frac{1}{N_s}\sum_{i=1}^{N_s}f[k-1,\ x^i_{k-1|k-1},\ u_{k-1}]$$
182
Numerical Integration Using a Monte Carlo Approximation (continue – 2)

Using the Monte Carlo approximation we obtain the prediction covariance
$$P^{xx}_{k|k-1} \approx Q_{k-1} + \frac{1}{N_s}\sum_{i=1}^{N_s}f[k-1,x^i_{k-1|k-1},u_{k-1}]\;f^T[k-1,x^i_{k-1|k-1},u_{k-1}] - \hat{x}_{k|k-1}\,\hat{x}^T_{k|k-1}$$
Now we approximate the predictive PDF $p(x_k|Z_{1:k-1})$ as $\mathcal{N}(x_k;\hat{x}_{k|k-1},P_{k|k-1})$ and draw new $N_s$ (not necessarily the same as before) samples
$$x^i_{k|k-1} \sim \mathcal{N}\!\left(x_k;\hat{x}_{k|k-1},P_{k|k-1}\right),\qquad i = 1,\dots,N_s$$
$$\hat{z}_{k|k-1} = E[z_k|Z_{1:k-1}] \approx \frac{1}{N_s}\sum_{i=1}^{N_s}h[k,\ x^i_{k|k-1}]$$
$$P^{zz}_{k|k-1} \approx R_k + \frac{1}{N_s}\sum_{i=1}^{N_s}h[k,x^i_{k|k-1}]\;h^T[k,x^i_{k|k-1}] - \hat{z}_{k|k-1}\,\hat{z}^T_{k|k-1}$$
183
Numerical Integration Using a Monte Carlo Approximation (continue – 3)

In the same way we obtain the cross covariance
$$P^{xz}_{k|k-1} \approx \frac{1}{N_s}\sum_{i=1}^{N_s}x^i_{k|k-1}\;h^T[k,x^i_{k|k-1}] - \hat{x}_{k|k-1}\,\hat{z}^T_{k|k-1}$$
The Kalman Filter equations are then
$$K_k = P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1},\qquad \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\left(z_k - \hat{z}_{k|k-1}\right),\qquad P^{xx}_{k|k} = P^{xx}_{k|k-1} - K_k\,P^{zz}_{k|k-1}\,K_k^T$$
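A minimal sketch of one such cycle (an addition, not from the slides); `f`, `h`, `Q`, `R` and `z` are placeholders, and `h` is assumed to return 1-D arrays:

```python
# One Monte Carlo Kalman Filter cycle: all expectations are replaced by sample
# averages over Ns draws.
import numpy as np
rng = np.random.default_rng(1)

def mckf_cycle(x_hat, P, z, f, h, Q, R, Ns=1000):
    # Prediction: draw from N(x_hat, P), propagate through f, add process noise
    X = rng.multivariate_normal(x_hat, P, Ns)
    Xp = np.array([f(x) for x in X]) + rng.multivariate_normal(np.zeros(len(Q)), Q, Ns)
    x_pred, P_pred = Xp.mean(0), np.cov(Xp.T)
    # Redraw from the Gaussian approximation of the predictive density
    Xs = rng.multivariate_normal(x_pred, P_pred, Ns)
    Z = np.array([h(x) for x in Xs])
    z_pred = Z.mean(0)
    Pzz = np.atleast_2d(np.cov(Z.T)) + R
    Pxz = (Xs - x_pred).T @ (Z - z_pred) / Ns
    # Kalman update
    K = Pxz @ np.linalg.inv(Pzz)
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```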
184
Monte Carlo Kalman Filter (MCKF) – Summary

0. Initialization: $\hat{x}_0 = E[x_0]$, $P_{0|0} = E[(x_0-\hat{x}_0)(x_0-\hat{x}_0)^T]$. Augment the state space to include the processing and measurement noises, $x^a := [x^T\ w^T\ v^T]^T$:
$$\hat{x}^a_0 = [\hat{x}_0^T\ \ 0\ \ 0]^T,\qquad P^a_{0|0} = \mathrm{diag}\!\left(P_{0|0},\ Q,\ R\right)$$
For k = 1, 2, ..., with the system
$$x_k = f[k-1, x_{k-1}, u_{k-1}] + w_{k-1},\quad w\sim\mathcal{N}(0,Q);\qquad z_k = h[k, x_k] + v_k,\quad v\sim\mathcal{N}(0,R)$$

1. Assuming for k-1 a Gaussian distribution with mean and covariance $\hat{x}^a_{k-1|k-1},\ P^a_{k-1|k-1}$, generate (draw) $N_s$ samples
$$x^{a,i}_{k-1|k-1} \sim \mathcal{N}\!\left(x^a;\hat{x}^a_{k-1|k-1},P^a_{k-1|k-1}\right),\qquad i = 1,\dots,N_s$$
2. State prediction and its covariance:
$$x^{a,i}_{k|k-1} = f[k-1,\ x^{a,i}_{k-1|k-1},\ u_{k-1}],\qquad \hat{x}^a_{k|k-1} = \frac{1}{N_s}\sum_{i=1}^{N_s}x^{a,i}_{k|k-1}$$
$$P^a_{k|k-1} = \frac{1}{N_s}\sum_{i=1}^{N_s}x^{a,i}_{k|k-1}\left(x^{a,i}_{k|k-1}\right)^T - \hat{x}^a_{k|k-1}\left(\hat{x}^a_{k|k-1}\right)^T$$
3. Assuming a Gaussian predictive distribution with mean and covariance $\hat{x}_{k|k-1},\ P_{k|k-1}$, generate (draw) new $N_s$ samples
$$x^{a,j}_{k|k-1} \sim \mathcal{N}\!\left(x^a;\hat{x}^a_{k|k-1},P^a_{k|k-1}\right),\qquad j = 1,\dots,N_s$$
185
Monte Carlo Kalman Filter (MCKF) – Summary (continue – 1)

4. Measurement prediction:
$$z^j_{k|k-1} = h[k,\ x^{a,j}_{k|k-1}],\qquad \hat{z}_{k|k-1} = \frac{1}{N_s}\sum_{j=1}^{N_s}z^j_{k|k-1}$$
5. Predicted covariances computations:
$$S_k = P^{zz}_{k|k-1} = \frac{1}{N_s}\sum_{j=1}^{N_s}\left(z^j_{k|k-1}-\hat{z}_{k|k-1}\right)\left(z^j_{k|k-1}-\hat{z}_{k|k-1}\right)^T$$
$$P^{xz}_{k|k-1} = \frac{1}{N_s}\sum_{j=1}^{N_s}\left(x^{a,j}_{k|k-1}-\hat{x}^a_{k|k-1}\right)\left(z^j_{k|k-1}-\hat{z}_{k|k-1}\right)^T$$
6. Kalman gain computations: $K^a_k = P^{xz}_{k|k-1}\left(P^{zz}_{k|k-1}\right)^{-1}$
7. Innovation: $i_k = z_k - \hat{z}_{k|k-1}$
8. Kalman Filter update:
$$\hat{x}^a_{k|k} = \hat{x}^a_{k|k-1} + K^a_k\,i_k,\qquad P^a_{k|k} = P^a_{k|k-1} - K^a_k\,S_k\left(K^a_k\right)^T$$
k := k+1 and return to step 1.
186
Monte Carlo Kalman Filter (MCKF)

[Figure: one MCKF cycle from $t_{k-1}$ to $t_k$ – prior samples drawn from $\mathcal{N}(\hat{x}^a_{k-1|k-1}, P^a_{k-1|k-1})$ and propagated through f; predictive samples drawn from $\mathcal{N}(\hat{x}^a_{k|k-1}, P^a_{k|k-1})$ and passed through h; the sample means and covariances feed the innovation covariance, gain and update, as in the EKF cycle diagram.]
187
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter

We assumed so far that $p(x_k|Z_{1:k})$ is a Gaussian PDF. If the true PDF is not Gaussian (multivariate, heavily skewed, or non-standard – not represented by any standard PDF), the Gaussian distribution can never describe it well. Consider
$$x_{k+1} = f(x_k, w_k),\qquad z_k = h(x_k, v_k)$$
where $w_{k-1}$ and $v_k$ are system and measurement white-noise sequences, independent of past and current states and of each other, with known PDFs $p(w_{k-1})$ and $p(v_k)$. We want to compute $p(x_k|Z_{1:k})$ recursively, assuming knowledge of $p(x_{k-1}|Z_{1:k-1})$, in two stages: prediction (before measurement) and update (after measurement).

Prediction (before measurement). Use the Chapman–Kolmogorov equation:
$$p(x_k|Z_{1:k-1}) = \int p(x_k|x_{k-1})\,p(x_{k-1}|Z_{1:k-1})\,dx_{k-1}$$
where
$$p(x_k|x_{k-1}) = \int p(x_k|x_{k-1},w_{k-1})\,p(w_{k-1}|x_{k-1})\,dw_{k-1}$$
By assumption $p(w_{k-1}|x_{k-1}) = p(w_{k-1})$. Since by knowing $x_{k-1}$ and $w_{k-1}$, $x_k$ is deterministically given by the system equation,
$$p(x_k|x_{k-1},w_{k-1}) = \delta\!\left(x_k - f(x_{k-1},w_{k-1})\right) = \begin{cases}1 & x_k = f(x_{k-1},w_{k-1})\\ 0 & x_k \ne f(x_{k-1},w_{k-1})\end{cases}$$
Therefore
$$p(x_k|x_{k-1}) = \int \delta\!\left(x_k - f(x_{k-1},w_{k-1})\right)p(w_{k-1})\,dw_{k-1}$$
188
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter (continue)

Update (after measurement). Using Bayes' rule:
$$p(x_k|Z_{1:k}) = p(x_k|z_k, Z_{1:k-1}) = \frac{p(z_k|x_k)\,p(x_k|Z_{1:k-1})}{p(z_k|Z_{1:k-1})} = \frac{p(z_k|x_k)\,p(x_k|Z_{1:k-1})}{\int p(z_k|x_k)\,p(x_k|Z_{1:k-1})\,dx_k}$$
where
$$p(z_k|x_k) = \int p(z_k|x_k,v_k)\,p(v_k|x_k)\,dv_k$$
By assumption $p(v_k|x_k) = p(v_k)$. Since by knowing $x_k$ and $v_k$, $z_k$ is deterministically given by the measurement equation,
$$p(z_k|x_k,v_k) = \delta\!\left(z_k - h(x_k,v_k)\right)$$
Therefore
$$p(z_k|x_k) = \int \delta\!\left(z_k - h(x_k,v_k)\right)p(v_k)\,dv_k$$
189
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter (continue)

Analytic solutions for these integral equations do not exist in the general case. We therefore use the numeric Monte Carlo method to evaluate the integrals. Generate (draw)
$$w^i_{k-1} \sim p(w_{k-1}),\qquad v^i_k \sim p(v_k),\qquad i = 1,\dots,N_S$$
Then
$$x^i_k = f(x_{k-1}, w^i_{k-1})\qquad\Longrightarrow\qquad p(x_k|x_{k-1}) \approx \frac{1}{N_S}\sum_{i=1}^{N_S}\delta(x_k - x^i_k)$$
$$z^i_k = h(x_k, v^i_k)\qquad\Longrightarrow\qquad p(z_k|x_k) \approx \frac{1}{N_S}\sum_{i=1}^{N_S}\delta(z_k - z^i_k)$$
190
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter (continue)

Monte Carlo computations of $p(z_k|x_k)$ and $p(x_k|x_{k-1})$ for the system
$$x_k = f(k, x_{k-1}, u_{k-1}, w_{k-1}),\ w_{k-1}\sim p(w);\qquad z_k = h(k, x_k, v_k),\ v_k\sim p(v);\qquad x_0\sim p(x_0)$$
0. Initialization: generate (draw) $x^i_0 \sim p(x_0)$, $i = 1,\dots,N_S$.
For k = 1, 2, ...:
1. At stage k-1, generate (draw) $N_S$ samples $w^i_{k-1}\sim p(w)$.
2. State update: $x^i_k = f(x^i_{k-1}, u_{k-1}, w^i_{k-1})$, so that $p(x_k|x_{k-1}) \approx \sum_{i=1}^{N_S}\delta(x_k-x^i_k)/N_S$.
3. Generate (draw) measurement noise samples $v^i_k\sim p(v)$.
4. Measurement update: $z^i_k = h(x^i_k, v^i_k)$, so that $p(z_k|x_k) \approx \sum_{i=1}^{N_S}\delta(z_k-z^i_k)/N_S$.
k := k+1 and return to step 1.
191
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter (continue)

Substituting the particle approximation of $p(x_{k-1}|Z_{1:k-1})$ into the Chapman–Kolmogorov equation gives the predictive density as a new particle set:
$$p(x_k|Z_{1:k-1}) = \int p(x_k|x_{k-1})\,p(x_{k-1}|Z_{1:k-1})\,dx_{k-1} \approx \int p(x_k|x_{k-1})\,\frac{1}{N_S}\sum_{i=1}^{N_S}\delta(x_{k-1}-x^i_{k-1})\,dx_{k-1} = \frac{1}{N_S}\sum_{i=1}^{N_S}p(x_k|x^i_{k-1}) \approx \frac{1}{N_S}\sum_{i=1}^{N_S}\delta(x_k-x^i_k)$$
192
Nonlinear Estimation Using Particle Filters
Non-Additive Non-Gaussian Nonlinear Filter – Importance Sampling

If the true PDF is not Gaussian, approximate grid-based filters and particle filters yield an improvement at the cost of a heavy computation demand. To overcome the difficulty of sampling from the posterior we use the principle of Importance Sampling. Suppose that $p(x_k|Z_{1:k})$ is a PDF from which it is difficult to draw samples, and that $q(x_k|Z_{1:k})$ is another PDF from which samples can be easily drawn (referred to as the Importance Density), for example a Gaussian PDF. Now assume that at each sample we can find the scale factor between the two densities:
$$w(x_k) := \frac{p(x_k|Z_{1:k})}{q(x_k|Z_{1:k})},\qquad q(x_k|Z_{1:k}) > 0$$
Using this we can write
$$E\!\left[g(x_k)\,|\,Z_{1:k}\right] = \int g(x_k)\,p(x_k|Z_{1:k})\,dx_k = \frac{\int g(x_k)\,w(x_k)\,q(x_k|Z_{1:k})\,dx_k}{\int w(x_k)\,q(x_k|Z_{1:k})\,dx_k}$$
(the denominator equals one because p is normalized, but keeping it allows w to be known only up to a constant).
193
Nonlinear Estimation Using Particle Filters
Importance Sampling (IS) (continue)

Generate (draw) $N_s$ particle samples $\{x^i_k,\ i = 1,\dots,N_s\}$ from $q(x_k|Z_{1:k})$, and estimate $g(x_k)$ using a Monte Carlo approximation:
$$\hat{E}\!\left[g(x_k)\,|\,Z_{1:k}\right] = \frac{\frac{1}{N_s}\sum_{i=1}^{N_s}g(x^i_k)\,w(x^i_k)}{\frac{1}{N_s}\sum_{i=1}^{N_s}w(x^i_k)} = \sum_{i=1}^{N_s}g(x^i_k)\,\tilde{w}^i_k$$
where the normalized weights are
$$\tilde{w}^i_k := \frac{w(x^i_k)}{\sum_{j=1}^{N_s}w(x^j_k)}$$
194
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS)

It would be useful if the importance density could be generated recursively (sequentially). Using Bayes' rule,
$$w(x_k) = \frac{p(x_k|Z_{1:k})}{q(x_k|Z_{1:k})} = \frac{c\,p(z_k|x_k)\,p(x_k|Z_{1:k-1})}{q(x_k|Z_{1:k})},\qquad c := \frac{1}{p(z_k|Z_{1:k-1})}$$
Using $p(x_k, x_{k-1}|Z_{1:k-1}) = p(x_k|x_{k-1}, Z_{1:k-1})\,p(x_{k-1}|Z_{1:k-1})$ we obtain
$$p(x_k|Z_{1:k-1}) = \int p(x_k|x_{k-1},Z_{1:k-1})\,p(x_{k-1}|Z_{1:k-1})\,dx_{k-1}$$
In the same way,
$$q(x_k|Z_{1:k-1}) = \int q(x_k|x_{k-1},Z_{1:k-1})\,q(x_{k-1}|Z_{1:k-1})\,dx_{k-1}$$
so that
$$w(x_k) = \frac{c\,p(z_k|x_k)\int p(x_k|x_{k-1},Z_{1:k-1})\,p(x_{k-1}|Z_{1:k-1})\,dx_{k-1}}{\int q(x_k|x_{k-1},Z_{1:k-1})\,q(x_{k-1}|Z_{1:k-1})\,dx_{k-1}}$$
195
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS) (continue – 1)

Suppose that at k-1 we have $N_s$ particle samples and their probabilities $\{x^i_{k-1|k-1}, w^i_{k-1},\ i = 1,\dots,N_s\}$, which constitute a random measure that characterizes the posterior PDF for time up to $t_{k-1}$:
$$p(x_{k-1}|Z_{1:k-1}) \approx \sum_{i=1}^{N_s}w^i_{k-1}\,\delta(x_{k-1}-x^i_{k-1|k-1})$$
and similarly for the importance density,
$$q(x_{k-1}|Z_{1:k-1}) \approx \sum_{i=1}^{N_s}w^i_{k-1}\,\delta(x_{k-1}-x^i_{k-1|k-1})$$
Substituting into the previous expression, we obtain
$$w(x_k) = \frac{c\,p(z_k|x_k)\sum_{i=1}^{N_s}p(x_k|x^i_{k-1|k-1},Z_{1:k-1})\,w^i_{k-1}}{\sum_{i=1}^{N_s}q(x_k|x^i_{k-1|k-1},Z_{1:k-1})\,w^i_{k-1}}$$
196
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS) (continue – 2)

Evaluating the weight at the particle $x^i_{k|k}$ drawn from $q(x_k|x^i_{k-1|k-1}, z_k)$, the ratio collapses term by term and we obtain the weight recursion
$$w^i_k := w(x^i_{k|k}) = c\;w^i_{k-1}\;\frac{p(z_k|x^i_{k|k})\,p(x^i_{k|k}|x^i_{k-1|k-1})}{q(x^i_{k|k}|x^i_{k-1|k-1}, z_k)}$$
where
$$w^i_{k-1} := \frac{p(x^i_{k-1|k-1}|Z_{1:k-1})}{q(x^i_{k-1|k-1}|Z_{1:k-1})}$$
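A minimal sketch of this weight recursion (an addition, not from the slides) for a scalar state, in the common bootstrap choice where the importance density is the transition prior, so the ratio reduces to the measurement likelihood; `f`, `h` and the Gaussian noise levels are placeholders:

```python
# One bootstrap-SIS step: q = p(x_k | x_{k-1}), hence w_k = w_{k-1} * p(z_k | x_k).
import numpy as np
rng = np.random.default_rng(2)

def sis_step(particles, weights, z, f, h, sig_w, sig_v):
    # Propagate each particle through the dynamics with its own process noise
    particles = np.array([f(x) + sig_w * rng.standard_normal() for x in particles])
    # Weight by the (here Gaussian) measurement likelihood p(z_k | x_k^i)
    lik = np.array([np.exp(-0.5 * ((z - h(x)) / sig_v)**2) for x in particles])
    weights = weights * lik
    return particles, weights / weights.sum()       # normalized weights
```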
197
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS) (continue – 3)

One cycle of the SIS particle filter:
0. Initialization: generate (draw) $x^i_0 \sim p(x_0)$, $i = 1,\dots,N$.
1. At stage k-1, generate (draw) $N$ process-noise samples $w^i_{k-1}\sim p(w)$ and propagate the particles, $x^i_k = f(x^i_{k-1}, u_{k-1}, w^i_{k-1})$, starting with the approximation $p(x_k|x_{k-1}) \approx \sum_{i=1}^{N}\delta(x_k-x^i_k)/N$.
2. After the measurement $z_k$, update and normalize the weights:
$$\tilde{w}^i_k = \tilde{w}^i_{k-1}\,\frac{p(z_k|x^i_k)\,p(x^i_k|x^i_{k-1})}{q(x^i_k|x^i_{k-1},z_k)},\qquad \tilde{w}^i_k \leftarrow \frac{\tilde{w}^i_k}{\sum_{j}\tilde{w}^j_k}$$
so that
$$p(x_k|Z_{1:k}) \approx \sum_{i=1}^{N}\tilde{w}^i_k\,\delta(x_k-x^i_k)$$
3. Generate (draw) $N$ measurement-noise samples $v^i_k\sim p(v)$, compute $z^i_k = h(x^i_k, v^i_k)$ and approximate $p(z_k|x_k)\approx\sum_i\delta(z_k-z^i_k)/N$.
k := k+1 and return to step 1.

[Figure: N = 10 particles with equal weights 1/N before the measurement, and the same particles re-weighted by the likelihood $p(z_k|x_k)$ after the measurement.]
198
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS) (continue – 4)

The resulting sequential importance sampling (SIS) algorithm is a Monte Carlo method that forms the basis for most sequential MC filters. This sequential Monte Carlo method is known variously as:
• Bootstrap Filtering
• the Condensation Algorithm
• Particle Filtering
• Interacting Particle Approximation
• Survival of the Fittest
199
Nonlinear Estimation Using Particle Filters
Sequential Importance Sampling (SIS) (continue – 5)

Degeneracy Problem

A common problem with the SIS particle filter is the degeneracy phenomenon, where after a few iterations all but one particle will have negligible weights. It can be shown that the variance of the importance weights $w^i_k$ of the SIS algorithm can only increase over time, and that leads to the degeneracy problem. A suitable measure of degeneracy is given by
$$\hat{N}_{eff} = \frac{1}{\sum_{i=1}^{N}\left(w^i_k\right)^2},\qquad 1 \le \hat{N}_{eff} \le N$$
To see this, consider the following two cases:
(1) Uniform weights, $w^i_k = 1/N$, $i = 1,\dots,N$: $\hat{N}_{eff} = 1/\sum_i(1/N)^2 = N$.
(2) One dominant particle, $w^i_k = 1$ for $i = j$ and $w^i_k = 0$ for $i \ne j$: $\hat{N}_{eff} = 1$.
Hence, a small $\hat{N}_{eff}$ indicates severe degeneracy, and vice versa.
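A tiny sketch of this measure (an addition, not from the slides); the resampling threshold N/2 below is a common but arbitrary choice:

```python
# Effective sample size from normalized weights; resample when it falls
# below a threshold such as N/2.
import numpy as np

def n_eff(weights):
    return 1.0 / np.sum(np.asarray(weights)**2)

w = np.full(100, 1.0 / 100)
print(n_eff(w))        # uniform weights -> N_eff = N = 100
w = np.zeros(100); w[0] = 1.0
print(n_eff(w))        # one dominant weight -> N_eff = 1 (severe degeneracy)
```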
200
The Bootstrap (Resampling)

• Popularized by Brad Efron (1979).
• "The Bootstrap" is a name generically applied to statistical resampling schemes that allow uncertainty in the data to be assessed from the data themselves – in other words, "pulling yourself up by your bootstraps".

The disadvantage of bootstrapping is that while (under some conditions) it is asymptotically consistent, it does not provide general finite-sample guarantees and has a tendency to be overly optimistic. The apparent simplicity may conceal the fact that important assumptions are being made when undertaking the bootstrap analysis (e.g., independence of samples), where these would be more formally stated in other approaches.

The advantage of bootstrapping over analytical methods is its great simplicity: it is straightforward to apply the bootstrap to derive estimates of standard errors and confidence intervals for complex estimators of complex parameters of the distribution, such as percentile points, proportions, odds ratios, and correlation coefficients.

[Portrait: Neil Gordon]
201 Nonlinear Estimation Using Particle Filters – Resampling. Sequential Importance Sampling (SIS) (continue – 5)
Whenever significant degeneracy is observed during the sampling (i.e., when $N_{eff}$ falls below some threshold $N_{thr}$), where we obtained
$p(x_k|Z_{1:k}) \approx \sum_{i=1}^{N} w_k^i\, \delta(x_k - x_k^i)$
we need to resample and replace the weighted representation $\{x_k^i, w_k^i\},\ i = 1,\dots,N$ with the random measure $\{x_k^{i*}, 1/N\},\ i = 1,\dots,N$.
This is done by first computing the Cumulative Density Function (C.D.F.) of the sampled distribution $w_k^i$:
Initialize the C.D.F.: $c_1 = w_k^1$.
For i = 2:N, compute the C.D.F.: $c_i = c_{i-1} + w_k^i$.
202 Nonlinear Estimation Using Particle Filters – Resampling (continue – 1). Sequential Importance Resampling (SIR)
Using the Inverse Transform method, we generate N independent, identically distributed (i.i.d.) samples from the uniform distribution, sort them in ascending order, and compare them with the Cumulative Distribution Function (C.D.F.) of the normalized weights $\tilde w_k^j$.
(Figure: C.D.F. of the normalized weights versus the resampled index.)
203 Nonlinear Estimation Using Particle Filters – Resampling Algorithm (continue – 2). Sequential Importance Sampling (SIS) (continue – 7)
0 Initialize the C.D.F.: $c_1 = w_k^1$; for i = 2:N, $c_i = c_{i-1} + w_k^i$.
1 Start at the bottom of the C.D.F.: i = 1. Draw one sample from the uniform distribution: $u_1 \sim U\left[0, N^{-1}\right]$.
2 For j = 1:N, move along the C.D.F.: $u_j = u_1 + (j-1)\,N^{-1}$.
3 WHILE $u_j > c_i$: i := i + 1. END WHILE
4 Assign sample: $x_k^{j*} = x_k^i$. Assign weight: $w_k^j = N^{-1}$. Assign parent: $i^j = i$.
5 END For (j)
A Python sketch of this systematic-resampling loop is given below.
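A sketch of the algorithm above in NumPy: one uniform draw $u_1 \sim U[0, 1/N]$, comb points $u_j = u_1 + (j-1)/N$, and a single pass along the C.D.F. of the normalized weights (the WHILE loop is vectorized by `searchsorted`):

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Systematic resampling: returns resampled particles, uniform weights, parent indices."""
    rng = rng or np.random.default_rng()
    N = len(weights)
    c = np.cumsum(weights)                              # C.D.F.: c_i = c_{i-1} + w_k^i
    c[-1] = 1.0                                         # guard against round-off
    u = rng.uniform(0.0, 1.0 / N) + np.arange(N) / N    # u_j = u_1 + (j-1)/N
    idx = np.searchsorted(c, u)                         # WHILE u_j > c_i: i := i+1
    return particles[idx], np.full(N, 1.0 / N), idx     # x^{j*}, weights 1/N, parents
```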
204 Nonlinear Estimation Using Particle Filters – Non-Additive Non-Gaussian Nonlinear Filter. Sequential Importance Resampling (SIR) (continue – 4)
Model: $x_{k+1} = f(x_k, w_k)$, $z_k = h(x_k, v_k)$, with the weight update
$\tilde w_k^i = \tilde w_{k-1}^i\, \dfrac{p(z_k|x_k^i)\, p(x_k^i|x_{k-1}^i)}{q(x_k^i|x_{k-1}^i, Z_{1:k})}$
One SIR cycle (illustrated with i = 1,…,N = 10 particles and the likelihood $p(z_k|x_k)$):
0 Start with the approximation $p(x_{k-1}|Z_{1:k-2}) \approx \{x_{k-1}^i, 1/N\}$.
1 After measurement $z_{k-1}$, compute the weights, giving $p(x_{k-1}|Z_{1:k-1}) \approx \{x_{k-1}^i, \tilde w_{k-1}^i\} \approx \sum_i \tilde w_{k-1}^i\,\delta(x_{k-1} - x_{k-1}^i)$.
2 If $N_{eff} = \left[\sum_{i=1}^N (\tilde w_{k-1}^i)^2\right]^{-1} < N_{thr}$, resample to obtain $p(x_{k-1}|Z_{1:k-1}) \approx \{x_{k-1}^{i*}, 1/N\}$.
3 Prediction: $x_k^i = f(x_{k-1}^{i*}, u_{k-1}, n_{k-1})$, to obtain $p(x_k|Z_{1:k-1}) \approx \{x_k^i, 1/N\}$. Set k := k+1 and return to 1.
A compact SIR loop in Python is sketched below.
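A compact SIR (bootstrap) filter sketch tying the steps above together for the generic model $x_{k+1} = f(x_k, w_k)$, $z_k = h(x_k, v_k)$; it reuses `systematic_resample` from the sketch above, and `f_propagate`, `z_likelihood` are placeholders for a concrete model:

```python
import numpy as np

def sir_filter(z_seq, x0_particles, f_propagate, z_likelihood, n_thr=None):
    """Bootstrap SIR filter; returns the MMSE state estimate at each measurement."""
    N = len(x0_particles)
    n_thr = n_thr if n_thr is not None else N / 2    # resampling threshold N_thr
    particles = x0_particles
    weights = np.full(N, 1.0 / N)
    estimates = []
    for z_k in z_seq:
        particles = f_propagate(particles)           # prediction (draws process noise)
        weights = weights * z_likelihood(z_k, particles)   # measurement update
        weights /= weights.sum()
        if 1.0 / np.sum(weights ** 2) < n_thr:       # N_eff below threshold?
            particles, weights, _ = systematic_resample(particles, weights)
        estimates.append(weights @ particles)        # weighted-mean (MMSE) estimate
    return np.array(estimates)
```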
205 Estimators – The Cramér-Rao Lower Bound (CRLB) on the Variance of the Estimator
(Figure: estimator block diagram, $z = h(x,v) \to$ Estimator $\to \hat x$.)
$E[\hat x]$ – estimated mean vector
$\sigma_{\hat x}^2 = E\left[(\hat x - E[\hat x])(\hat x - E[\hat x])^T\right]$ – estimated variance matrix
For a good estimator we want:
• $E[\hat x] = x$ – unbiased estimator
• $\sigma_{\hat x}^2$ minimal – minimum estimation variance
Notation: $z = (z_1,\dots,z_p)^T$, $x = (x_1,\dots,x_n)^T$, $v = (v_1,\dots,v_p)^T$
$Z^k := (z_1,\dots,z_k)$ – the observation matrix after k observations
$L[Z^k, x] = L(z_1,\dots,z_k, x)$ – the Likelihood, i.e. the joint density function of $Z^k$
The estimate $\hat x$ of x, obtained from measurements of a system corrupted by noise, is a random variable with
$p_{z/x}(Z^k; x) = \int_v p_{z/x}(Z^k/v, x)\, p_v(v)\, dv$
$E\left[\hat x(Z^k)\right] = \int \hat x(Z^k)\, L[Z^k, x]\, dZ^k = x + b(x)$
where $b(x)$ is the estimator bias.
206 Estimators – The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 1)
We have $E[\hat x(Z^k)] = \int \hat x(Z^k)\, L[Z^k,x]\, dZ^k = x + b(x)$, therefore
$\dfrac{\partial E[\hat x(Z^k)]}{\partial x} = \int \hat x(Z^k)\, \dfrac{\partial L[Z^k,x]}{\partial x}\, dZ^k = 1 + \dfrac{\partial b(x)}{\partial x}$
Since $L[Z^k,x]$ is a joint density function, we have
$\int L[Z^k,x]\, dZ^k = 1 \;\Rightarrow\; \int \dfrac{\partial L[Z^k,x]}{\partial x}\, dZ^k = 0 \;\Rightarrow\; \int x\, \dfrac{\partial L[Z^k,x]}{\partial x}\, dZ^k = 0$
so that
$\int \left[\hat x(Z^k) - x\right] \dfrac{\partial L[Z^k,x]}{\partial x}\, dZ^k = 1 + \dfrac{\partial b(x)}{\partial x}$
Using the fact that $\dfrac{\partial L[Z^k,x]}{\partial x} = L[Z^k,x]\, \dfrac{\partial \ln L[Z^k,x]}{\partial x}$:
$\int \left[\hat x(Z^k) - x\right] \dfrac{\partial \ln L[Z^k,x]}{\partial x}\, L[Z^k,x]\, dZ^k = 1 + \dfrac{\partial b(x)}{\partial x}$
207 Estimators – The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 2)
$\int \left[\hat x(Z^k) - x\right] \dfrac{\partial \ln L[Z^k,x]}{\partial x}\, L[Z^k,x]\, dZ^k = 1 + \dfrac{\partial b(x)}{\partial x}$
Use the Schwarz Inequality (Hermann Amandus Schwarz, 1843–1921):
$\left[\int f(t)\, g(t)\, dt\right]^2 \le \int f^2(t)\, dt \cdot \int g^2(t)\, dt$
where equality occurs if and only if $f(t) = k\, g(t)$. Choose
$f := \left[\hat x(Z^k) - x\right]\sqrt{L[Z^k,x]}$ and $g := \sqrt{L[Z^k,x]}\,\dfrac{\partial \ln L[Z^k,x]}{\partial x}$
Then
$\left(1 + \dfrac{\partial b}{\partial x}\right)^2 \le \int \left[\hat x(Z^k) - x\right]^2 L[Z^k,x]\, dZ^k \cdot \int \left(\dfrac{\partial \ln L[Z^k,x]}{\partial x}\right)^2 L[Z^k,x]\, dZ^k$
i.e.
$\int \left[\hat x(Z^k) - x\right]^2 L[Z^k,x]\, dZ^k \ge \dfrac{\left(1 + \partial b/\partial x\right)^2}{\int \left(\partial \ln L[Z^k,x]/\partial x\right)^2 L[Z^k,x]\, dZ^k}$
208 Estimators – The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 3)
$\int \left[\hat x(Z^k) - x\right]^2 L[Z^k,x]\, dZ^k \ge \dfrac{\left(1 + \partial b/\partial x\right)^2}{\int \left(\partial \ln L[Z^k,x]/\partial x\right)^2 L[Z^k,x]\, dZ^k}$
This is the Cramér-Rao bound for a biased estimator (Harald Cramér, 1893–1985; Calyampudi Radhakrishna Rao, 1920– ).
Since $E[\hat x(Z^k)] = x + b(x)$ and $\int L[Z^k,x]\, dZ^k = 1$:
$\int \left[\hat x - x\right]^2 L\, dZ^k = \int \left[\left(\hat x - E[\hat x]\right) + b(x)\right]^2 L\, dZ^k = \int \left(\hat x - E[\hat x]\right)^2 L\, dZ^k + b^2(x)$
(the cross term vanishes because $E[\hat x - E[\hat x]] = 0$), hence
$\sigma_{\hat x}^2 = \int \left(\hat x - E[\hat x]\right)^2 L\, dZ^k \ge \dfrac{\left(1 + \partial b/\partial x\right)^2}{\int \left(\partial \ln L/\partial x\right)^2 L\, dZ^k} - b^2(x)$
209 Estimators – The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 4)
From $\int \dfrac{\partial \ln L}{\partial x}\, L\, dZ^k = \int \dfrac{\partial L}{\partial x}\, dZ^k = \dfrac{\partial}{\partial x}\underbrace{\int L\, dZ^k}_{1} = 0$, differentiating once more with respect to x:
$\int \dfrac{\partial^2 \ln L}{\partial x^2}\, L\, dZ^k + \int \left(\dfrac{\partial \ln L}{\partial x}\right)^2 L\, dZ^k = 0 \;\Rightarrow\; E\left[\left(\dfrac{\partial \ln L}{\partial x}\right)^2\right] = -E\left[\dfrac{\partial^2 \ln L}{\partial x^2}\right]$
Therefore
$\sigma_{\hat x}^2 \ge \dfrac{\left(1 + \partial b/\partial x\right)^2}{E\left[\left(\partial \ln L[Z^k,x]/\partial x\right)^2\right]} - b^2(x) = \dfrac{\left(1 + \partial b/\partial x\right)^2}{-E\left[\partial^2 \ln L[Z^k,x]/\partial x^2\right]} - b^2(x)$
210 Estimators – The Cramér-Rao Lower Bound on the Variance of the Estimator (continue – 5)
For an unbiased estimator ($b(x) = 0$) we have
$\sigma_{\hat x}^2 = \int \left[\hat x(Z^k) - x\right]^2 L[Z^k,x]\, dZ^k \ge \dfrac{1}{E\left[\left(\partial \ln L[Z^k,x]/\partial x\right)^2\right]} = \dfrac{1}{-E\left[\partial^2 \ln L[Z^k,x]/\partial x^2\right]}$
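A minimal Monte Carlo sanity check of the unbiased scalar bound, assuming the toy model $z_i = x + v_i$, $v_i \sim \mathcal{N}(0, \sigma^2)$ (not from the slides): here $E[(\partial \ln L/\partial x)^2] = N/\sigma^2$, so the CRLB is $\sigma^2/N$, and the sample mean attains it.

```python
import numpy as np

rng = np.random.default_rng(0)
x_true, sigma, N, runs = 1.0, 2.0, 25, 200_000
z = x_true + sigma * rng.standard_normal((runs, N))   # N measurements per run
x_hat = z.mean(axis=1)                                # unbiased estimator (sample mean)
print("empirical variance:", x_hat.var())             # ~ 0.16
print("CRLB sigma^2 / N  :", sigma**2 / N)            # = 0.16
```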
211 Cramér-Rao Lower Bound (CRLB) – Helpful Relations
Lemma 1: Given a function $f(x,z): \mathbb{R}^{n+p} \to \mathbb{R}$, the following relation holds:
$\nabla_x \nabla_x^T \ln f(x,z) = \dfrac{1}{f(x,z)}\nabla_x \nabla_x^T f(x,z) - \left[\nabla_x \ln f(x,z)\right]\left[\nabla_x \ln f(x,z)\right]^T$
Lemma 2: Let $z \in \mathbb{R}^p$ be a random vector with density $p(z|x)$ parameterized by the nonrandom vector $x \in \mathbb{R}^n$; then
$E_z\left[\nabla_x \nabla_x^T \ln p(z|x)\right] = -E_z\left[\nabla_x \ln p(z|x)\, \nabla_x^T \ln p(z|x)\right]$
Proof: by Lemma 1,
$E_z\left[\nabla_x \nabla_x^T \ln p(z|x)\right] = E_z\left[\dfrac{1}{p(z|x)}\nabla_x \nabla_x^T p(z|x)\right] - E_z\left[\nabla_x \ln p\, \nabla_x^T \ln p\right]$
and
$E_z\left[\dfrac{1}{p(z|x)}\nabla_x \nabla_x^T p(z|x)\right] = \int_{\mathbb{R}^p}\nabla_x \nabla_x^T p(z|x)\, dz = \nabla_x \nabla_x^T \underbrace{\int_{\mathbb{R}^p} p(z|x)\, dz}_{1} = 0$
Lemma 3: Let $x \in \mathbb{R}^n$, $z \in \mathbb{R}^p$ be random vectors with joint density $p(x,z)$; then
$E_{x,z}\left[\nabla_x \nabla_x^T \ln p(x,z)\right] = -E_{x,z}\left[\nabla_x \ln p(x,z)\, \nabla_x^T \ln p(x,z)\right]$
(the proof is identical, integrating over $\mathbb{R}^{n+p}$).
212 Cramér-Rao Lower Bound (CRLB) – Nonrandom Parameters
(Figure: estimator block diagram.) The parameters $x \in \mathbb{R}^n$ are regarded as unknown but fixed. The measurements are $z \in \mathbb{R}^p$.
The Score of the estimation is defined as the gradient of the log-likelihood:
$\nabla_x \ln p(z|x) = \dfrac{1}{p(z|x)}\nabla_x p(z|x)$
In Maximum Likelihood Estimation (MLE), this function returns a vector-valued Score given the observations $z \in \mathbb{R}^p$ and a candidate parameter vector $x \in \mathbb{R}^n$. Scores close to zero are good scores, since they indicate that $x$ is close to a local optimum of $p(z|x)$.
Since the measurement vector $z$ is stochastic, the Expected Value of the Score is
$E_z\left[\nabla_x \ln p(z|x)\right] = \int_{\mathbb{R}^p}\dfrac{1}{p(z|x)}\nabla_x p(z|x)\; p(z|x)\, dz = \nabla_x \underbrace{\int_{\mathbb{R}^p} p(z|x)\, dz}_{1} = 0$
213 Cramér-Rao Lower Bound (CRLB) – Nonrandom Parameters. The Fisher Information Matrix (FIM)
The Fisher Information Matrix (FIM) was defined by Ronald Aylmer Fisher (1890–1962) as the Covariance Matrix of the Score.
The Expected Value of the Score is zero:
$E_z\left[\nabla_x \ln p(z|x)\right] = \int_{\mathbb{R}^p}\nabla_x p(z|x)\, dz = 0$
so the Covariance of the Score is
$J(x) := E_z\left[\nabla_x \ln p(z|x)\, \nabla_x^T \ln p(z|x)\right] = \int_{\mathbb{R}^p}\nabla_x \ln p(z|x)\, \nabla_x^T \ln p(z|x)\; p(z|x)\, dz = -E_z\left[\nabla_x \nabla_x^T \ln p(z|x)\right]$
214 Fisher, Sir Ronald Aylmer (1890–1962)
The Fisher information is the amount of information that an observable random variable z carries about an unknown parameter x upon which the likelihood of z, $L(x) = f(Z;x)$, depends. The likelihood function is the joint probability of the data, the Z's, conditional on the value of x, as a function of x. Since the expectation of the score is zero, the variance is simply the second moment of the score, the derivative of the log of the likelihood function with respect to x. Hence the Fisher information can be written
$\mathbf{J}(x) := E\left[\nabla_x \ln L[Z^k,x]\; \nabla_x^T \ln L[Z^k,x]\right] = -E\left[\nabla_x \nabla_x^T \ln L[Z^k,x]\right]$
215 Cramér-Rao Lower Bound (CRLB) – Nonrandom Parameters
The likelihood $p(z|x)$ may be over-parameterized, so that some elements of x, or combinations of elements of x, do not affect $p(z|x)$. In such a case the FIM for the parameters x becomes singular, which leads to problems in computing the Cramér-Rao bounds.
Let $y \in \mathbb{R}^r$ (r ≤ n) be an alternative parameterization of the likelihood such that $p(z|y)$ is a well-defined density function for z given $y \in \mathbb{R}^r$, and the corresponding FIM is nonsingular. We define a possibly non-invertible coordinate transformation $x = t(y)$.
Theorem 1: Nonrandom Parametric Cramér-Rao Bound
Assume that the observation $z \in \mathbb{R}^p$ has a well-defined probability density function $p(z|y)$ for all $y \in \mathbb{R}^r$, and let $y^*$ denote the parameter that yields the true distribution of $z$. Moreover, let $\hat x(z) \in \mathbb{R}^n$ be an unbiased estimator of $x = t(y)$, and let $x^* = t(y^*)$. The estimation error covariance of $\hat x(z)$ is bounded from below by
$E_z\left[(\hat x - x^*)(\hat x - x^*)^T\right] \ge M^T J^{-1} M$
where
$J := -E_z\left[\nabla_y \nabla_y^T \ln p(z|y)\right]\big|_{y=y^*} \in \mathbb{R}^{r\times r}$ and $M := \nabla_y t^T(y)\big|_{y=y^*} \in \mathbb{R}^{r\times n}$
are matrices that depend on the true unknown parameter vector $y^*$.
216 Cramér-Rao Lower Bound (CRLB) – Nonrandom Parameters. Proof of Theorem 1:
Unbiasedness of the estimator gives
$\int_{\mathbb{R}^p}\left[\hat x(z) - t(y)\right]^T p(z|y)\, dz = 0$
Taking the gradient w.r.t. $y$ on both sides of this relation we obtain
$\int_{\mathbb{R}^p}\nabla_y p(z|y)\left[\hat x(z) - t(y)\right]^T dz - \nabla_y t^T(y)\underbrace{\int_{\mathbb{R}^p} p(z|y)\, dz}_{1} = 0$
so, using $\nabla_y p = p\,\nabla_y \ln p$:
$\int_{\mathbb{R}^p}\nabla_y \ln p(z|y)\left[\hat x(z) - t(y)\right]^T p(z|y)\, dz = \nabla_y t^T(y)$
Consider the random vector
$\begin{bmatrix}\hat x - x \\ \nabla_y \ln p(z|y)\end{bmatrix}$, where $E_z\begin{bmatrix}\hat x - x \\ \nabla_y \ln p(z|y)\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$
by the unbiasedness of the estimator, $\int_{\mathbb{R}^p}\hat x(z)\, p(z|y)\, dz = t(y)$, and the zero mean of the Score.
217 Cramér-Rao Lower Bound (CRLB) – Nonrandom Parameters. Proof of Theorem 1 (continue – 1):
The covariance matrix of this random vector is positive semi-definite by construction:
$E_z\left\{\begin{bmatrix}\hat x - x \\ \nabla_y \ln p(z|y)\end{bmatrix}\begin{bmatrix}\hat x - x \\ \nabla_y \ln p(z|y)\end{bmatrix}^T\right\} = \begin{bmatrix}C & M^T \\ M & J\end{bmatrix} \ge 0$
where
$C := E_z\left[(\hat x - x)(\hat x - x)^T\right]$, $J := E_z\left[\nabla_y \ln p(z|y)\,\nabla_y^T \ln p(z|y)\right]$, $M^T := E_z\left[(\hat x - x)\,\nabla_y^T \ln p(z|y)\right] = \left[\nabla_y t^T(y)\right]^T$
Pre- and post-multiplying by a nonsingular transformation:
$\begin{bmatrix}I & -M^T J^{-1}\\ 0 & I\end{bmatrix}\begin{bmatrix}C & M^T\\ M & J\end{bmatrix}\begin{bmatrix}I & 0\\ -J^{-1}M & I\end{bmatrix} = \begin{bmatrix}C - M^T J^{-1} M & 0\\ 0 & J\end{bmatrix} \ge 0$
hence $C - M^T J^{-1} M \ge 0$, i.e. $E_z\left[(\hat x - x)(\hat x - x)^T\right] \ge M^T J^{-1} M$. q.e.d.
218 Cramér-Rao Lower Bound (CRLB) – Nonrandom Parameters
Corollary 1: Nonrandom Parametric Cramér-Rao Bound (Biased Estimator)
Consider an estimation problem defined by the likelihood $p(z|y)$ and the fixed unknown parameter $y^*$. Any estimator $\hat y(z)$ with bias $b(y)$ has a mean square error bounded from below by
$E_z\left[(\hat y - y^*)(\hat y - y^*)^T\right] \ge M^T J^{-1} M + b(y^*)\, b^T(y^*)$
where
$J := -E_z\left[\nabla_y \nabla_y^T \ln p(z|y)\right]\big|_{y=y^*}$ and $M := \left[I + \nabla_y b^T(y)\right]\big|_{y=y^*} \in \mathbb{R}^{n\times n}$
are matrices that depend on the true unknown parameter vector $y^*$.
Proof: Introduce the quantity $x := y + b(y)$; the estimator $\hat x(z) = \hat y(z)$ is an unbiased estimator of $x$. Theorem 1, with $t(y) = y + b(y)$ so that $M = I + \nabla_y b^T(y)$, yields
$E_z\left[(\hat x - x)(\hat x - x)^T\right] \ge \left[I + \nabla_y b^T(y)\right]^T J^{-1}\left[I + \nabla_y b^T(y)\right]$
Using $\hat x - x = \hat y - y - b(y)$ we obtain
$E_z\left[(\hat y - y)(\hat y - y)^T\right] \ge \left[I + \nabla_y b^T(y)\right]^T J^{-1}\left[I + \nabla_y b^T(y)\right] + b(y)\, b^T(y)$
after suitably inserting the true parameter $y^*$.
219 Cramér-Rao Lower Bound (CRLB) – The Cramér-Rao Lower Bound on the Variance of the Estimator, Multivariable Case
With $\hat x(Z^k) - x = \left[\hat x_1 - x_1, \dots, \hat x_n - x_n\right]^T$ and $\nabla_x \ln L[Z^k,x] = \left[\partial \ln L/\partial x_1, \dots, \partial \ln L/\partial x_n\right]^T$, the multivariable form of the Cramér-Rao Lower Bound is
$E\left[(\hat x - x)(\hat x - x)^T\right] = \int \left[\hat x(Z^k) - x\right]\left[\hat x(Z^k) - x\right]^T L[Z^k,x]\, dZ^k \ge \left[I + \nabla_x b^T(x)\right]^T \mathbf{J}^{-1}\left[I + \nabla_x b^T(x)\right] + b(x)\, b^T(x)$
where the Fisher Information Matrix is
$\mathbf{J} := E\left[\nabla_x \ln L[Z^k,x]\; \nabla_x^T \ln L[Z^k,x]\right] = -E\left[\nabla_x \nabla_x^T \ln L[Z^k,x]\right]$
220 Cramér-Rao Lower Bound (CRLB) – Random Parameters
For random parameters there is no true parameter value; instead, the prior assumption on the parameter distribution determines the probability of different parameter vectors. As in the nonrandom parametric case, we assume a possibly non-invertible mapping $t: \mathbb{R}^r \to \mathbb{R}^n$ between a parameter vector $y$ and the sought parameter $x$. The vector $y$ is assumed to have been chosen such that the joint probability density $p(y,z)$ is well defined.
Theorem 2: Random Parameters (Posterior Cramér-Rao Bound)
Let $y \in \mathbb{R}^r$ and $z \in \mathbb{R}^p$ be two random vectors with a well-defined joint density $p(y,z)$, and let $\hat x(z) \in \mathbb{R}^n$ be an estimate of $x = t(y)$. If the estimator bias
$b(y) = \int_{\mathbb{R}^p}\left[\hat x(z) - t(y)\right] p(z|y)\, dz$
satisfies $\lim_{|y_i|\to\infty} b_j(y)\, p(y) = 0$ for all $i = 1,\dots,r$ and $j = 1,\dots,n$, then the Mean Square of the Estimate is bounded from below:
$E_{z,y}\left[(\hat x - x)(\hat x - x)^T\right] \ge M^T J^{-1} M$
where
$J := -E_{z,y}\left[\nabla_y \nabla_y^T \ln p(y,z)\right] \in \mathbb{R}^{r\times r}$ and $M := E_y\left[\nabla_y t^T(y)\right] \in \mathbb{R}^{r\times n}$
221 Cramér-Rao Lower Bound (CRLB) – Random Parameters. Proof of Theorem 2:
Compute
$\nabla_y\left[p(y)\, b^T(y)\right] = \nabla_y\left[\int_{\mathbb{R}^p} \underbrace{p(y)\, p(z|y)}_{p(y,z)}\left[\hat x(z) - t(y)\right]^T dz\right]$
Integrating both sides w.r.t. $y$ over its complete range $\mathbb{R}^r$ yields
$\int_{\mathbb{R}^r}\nabla_y\left[p(y)\, b^T(y)\right] dy = \int_{\mathbb{R}^r}\int_{\mathbb{R}^p}\nabla_y\left\{p(y,z)\left[\hat x(z) - t(y)\right]^T\right\}\, dz\, dy$
The (i,j) element of the left-hand-side matrix is
$\int_{\mathbb{R}^r}\dfrac{\partial\, p(y)\, b_j(y)}{\partial y_i}\, dy = \int_{\mathbb{R}^{r-1}}\left[\underbrace{p(y)\, b_j(y)}_{0}\Big|_{y_i=-\infty}^{y_i=+\infty}\right] dy_1\cdots dy_{i-1}\, dy_{i+1}\cdots dy_r = 0$
by the assumed bias condition.
222 Cramér-Rao Lower Bound (CRLB) – Random Parameters. Proof of Theorem 2 (continue – 1):
We found
$0 = \int_{\mathbb{R}^r}\int_{\mathbb{R}^p}\nabla_y p(y,z)\left[\hat x(z) - t(y)\right]^T dz\, dy - \int_{\mathbb{R}^r} p(y)\,\nabla_y t^T(y)\, dy$
i.e., using $\nabla_y p = p\,\nabla_y \ln p$:
$E_{z,y}\left[\nabla_y \ln p(y,z)\,(\hat x - x)^T\right] = E_y\left[\nabla_y t^T(y)\right] = M$
Consider the random vector $\begin{bmatrix}\hat x - x\\ \nabla_y \ln p(y,z)\end{bmatrix}$, with $E_{z,y}\begin{bmatrix}\hat x - x\\ \nabla_y \ln p(y,z)\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$.
Its covariance matrix is positive semi-definite by construction:
$\begin{bmatrix}C & M^T\\ M & J\end{bmatrix} \ge 0$, with $C := E_{z,y}\left[(\hat x - x)(\hat x - x)^T\right]$, $J := E_{z,y}\left[\nabla_y \ln p(y,z)\,\nabla_y^T \ln p(y,z)\right]$
and, exactly as in Theorem 1, the Schur complement gives $C \ge M^T J^{-1} M$. q.e.d.
223 Cramér-Rao Lower Bound (CRLB) – Nonrandom and Random Parameters Cramér-Rao Bounds
For Nonrandom Parameters the Cramér-Rao Bound depends on the true unknown parameter vector y and on the model of the problem, defined by p(z|y) and the mapping x = t(y). Hence the bound can only be computed in simulations, where the true value of the sought parameter vector y is known.
For Random Parameters the Cramér-Rao Bound can be computed even in real applications. Since the parameters are random, there is no unknown true parameter value. Instead, in the posterior Cramér-Rao Bound the matrices J and M are computed by mathematical expectation with respect to the prior distribution of the parameters.
224 Cramér-Rao Lower Bound (CRLB) – Discrete Time Nonlinear Estimation
$x_{k+1} = f(x_k, w_k) \in \mathbb{R}^n$, $z_k = h(x_k, v_k) \in \mathbb{R}^p$
$w_k$ and $v_k$ are system and measurement white-noise sequences, independent of past and current states and of each other, with known P.D.F.s $p(w_k)$ and $p(v_k)$. In addition the P.D.F. of the initial state, $p(x_0)$, is also given.
After k cycles we have k measurements $Z_{1:k} := (z_1,\dots,z_k)^T$ and k random parameters $X_{1:k}$ (with $X_{0:k} := (x_0, x_1,\dots,x_k)^T$) estimated by an unbiased estimator as $\hat X_{1:k|k} := (\hat x_{1|1},\dots,\hat x_{k|k})^T$.
We found that the Cramér-Rao Lower Bound for the Random Parameters is given by
$E_{Z,X}\left[(\hat X_{1:k|k} - X_{1:k})(\hat X_{1:k|k} - X_{1:k})^T\right] \ge \left\{-E_{Z,X}\left[\nabla_{X_{1:k}}\nabla_{X_{1:k}}^T \ln p(Z_{1:k}, X_{1:k})\right]\right\}^{-1}$
If we have a deterministic state model, i.e. $x_{k+1} = f(x_k)$, then we can use the Nonrandom Parametric Cramér-Rao Lower Bound
$E_Z\left[(\hat X_{1:k|k} - X_{1:k})(\hat X_{1:k|k} - X_{1:k})^T\right] \ge \left\{-E_Z\left[\nabla_{X_{1:k}}\nabla_{X_{1:k}}^T \ln p(Z_{1:k}|X_{1:k})\right]\right\}^{-1}$
The CRLB provides a lower bound for second-order (mean-squared) error only. Posterior densities, which result from nonlinear filtering, are in general non-Gaussian. A full statistical characterization of a non-Gaussian density requires higher-order moments, in addition to mean and covariance. Therefore, the CRLB for nonlinear filtering does not fully characterize the accuracy of filtering algorithms.
225 Cramér-Rao Lower Bound (CRLB) – Discrete Time Nonlinear Estimation
Theorem 3: (model and assumptions as above) Perform the partitioning
$X_{1:k} = \left[X_{1:k-1}^T,\; x_k^T\right]^T$, $\hat X_{1:k|k} = \left[\hat X_{1:k-1|k}^T,\; \hat x_{k|k}^T\right]^T$, with $x_k \in \mathbb{R}^{n\times 1}$
and define
$A_k := -E_{Z,X}\left[\nabla_{X_{1:k-1}}\nabla_{X_{1:k-1}}^T \ln p(Z_{1:k}, X_{1:k})\right] \in \mathbb{R}^{(k-1)n\times(k-1)n}$
$B_k := -E_{Z,X}\left[\nabla_{X_{1:k-1}}\nabla_{x_k}^T \ln p(Z_{1:k}, X_{1:k})\right] \in \mathbb{R}^{(k-1)n\times n}$
$C_k := -E_{Z,X}\left[\nabla_{x_k}\nabla_{x_k}^T \ln p(Z_{1:k}, X_{1:k})\right] \in \mathbb{R}^{n\times n}$
Then the Cramér-Rao Lower Bound for the last state estimate is
$E_{Z,X}\left[(\hat x_{k|k} - x_k)(\hat x_{k|k} - x_k)^T\right] \ge \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} =: J_k^{-1} \in \mathbb{R}^{n\times n}$
226 Cramér-Rao Lower Bound (CRLB) – Discrete Time Nonlinear Estimation. Proof of Theorem 3:
With the partitioning $X_{1:k} = \left[X_{1:k-1}^T, x_k^T\right]^T$, the posterior Cramér-Rao bound reads
$E_{Z,X}\left\{\begin{bmatrix}\hat X_{1:k-1|k} - X_{1:k-1}\\ \hat x_{k|k} - x_k\end{bmatrix}\begin{bmatrix}\hat X_{1:k-1|k} - X_{1:k-1}\\ \hat x_{k|k} - x_k\end{bmatrix}^T\right\} \ge \left\{-E_{Z,X}\left[\nabla_{X_{1:k}}\nabla_{X_{1:k}}^T \ln p(Z_{1:k},X_{1:k})\right]\right\}^{-1} = \begin{bmatrix}A_k & B_k\\ B_k^T & C_k\end{bmatrix}^{-1}$
227 Proof of Theorem 3 (continue – 1): Using the block factorization
$\begin{bmatrix}A & B\\ B^T & C\end{bmatrix} = \begin{bmatrix}I & 0\\ B^T A^{-1} & I\end{bmatrix}\begin{bmatrix}A & 0\\ 0 & C - B^T A^{-1} B\end{bmatrix}\begin{bmatrix}I & A^{-1}B\\ 0 & I\end{bmatrix}$
and pre- and post-multiplying the inequality by $[0\;\; I]$ and its transpose to select the lower-right $n\times n$ block:
$E_{Z,X}\left[(\hat x_{k|k} - x_k)(\hat x_{k|k} - x_k)^T\right] \ge \begin{bmatrix}0 & I\end{bmatrix}\begin{bmatrix}A_k & B_k\\ B_k^T & C_k\end{bmatrix}^{-1}\begin{bmatrix}0\\ I\end{bmatrix} = \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1}$
228 Proof of Theorem 3 (continue – 2): Hence
$E_{Z,X}\left[(\hat x_{k|k} - x_k)(\hat x_{k|k} - x_k)^T\right] \ge \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1} =: J_k^{-1}$. q.e.d.
229 Cramér-Rao Lower Bound (CRLB) – Discrete Time Nonlinear Estimation: Recursive Cramér-Rao Lower Bound
We found $E_{Z,X}\left[(\hat x_{k|k} - x_k)(\hat x_{k|k} - x_k)^T\right] \ge J_k^{-1} := \left[C_k - B_k^T A_k^{-1} B_k\right]^{-1}$. We want to compute $J_k$ recursively, without the need to invert large matrices such as $A_k$.
Theorem 4: The Recursive Cramér-Rao Lower Bound for the Random Parameters is given by
$E_{Z,X}\left[(\hat x_{k+1|k+1} - x_{k+1})(\hat x_{k+1|k+1} - x_{k+1})^T\right] \ge J_{k+1}^{-1} := \left[D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}\right]^{-1} \in \mathbb{R}^{n\times n}$
where
$D_k^{11} := -E\left[\nabla_{x_k}\nabla_{x_k}^T \ln p(x_{k+1}|x_k)\right] \in \mathbb{R}^{n\times n}$
$D_k^{12} := -E\left[\nabla_{x_k}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] = \left(D_k^{21}\right)^T \in \mathbb{R}^{n\times n}$
$D_k^{22} := -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] - E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(z_{k+1}|x_{k+1})\right] \in \mathbb{R}^{n\times n}$
The recursion starts with the initial information matrix
$J_0 = E\left[\nabla_{x_0}\ln p(x_0)\; \nabla_{x_0}^T \ln p(x_0)\right]$
230 Cramér-Rao Lower Bound (CRLB) – Recursive Cramér-Rao Lower Bound. Proof of Theorem 4:
Start with the factorization
$p(Z_{1:k+1}, X_{1:k+1}) = p(z_{k+1}, x_{k+1}, Z_{1:k}, X_{1:k}) = p(z_{k+1}, x_{k+1}|Z_{1:k}, X_{1:k})\; p(Z_{1:k}, X_{1:k})$
By the Markov property of the model,
$p(z_{k+1}, x_{k+1}|Z_{1:k}, X_{1:k}) = p(z_{k+1}|x_{k+1})\; p(x_{k+1}|x_k)$
so that
$p(Z_{1:k+1}, X_{1:k+1}) = p(z_{k+1}|x_{k+1})\; p(x_{k+1}|x_k)\; p(Z_{1:k}, X_{1:k})$
231 Cramér-Rao Lower Bound (CRLB) – Recursive Cramér-Rao Lower Bound. Proof of Theorem 4 (continue – 1):
Partition the information matrix of $X_{1:k+1} = (X_{1:k-1}, x_k, x_{k+1})$ as
$-E_{Z,X}\left[\nabla_{X_{1:k+1}}\nabla_{X_{1:k+1}}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = \begin{bmatrix}A_{k}^{1} & B_{k}^{1} & L_k\\ (B_{k}^{1})^T & C_{k}^{1} & E_k\\ L_k^T & E_k^T & F_k\end{bmatrix}$
Using $\ln p(Z_{1:k+1}, X_{1:k+1}) = \ln p(z_{k+1}|x_{k+1}) + \ln p(x_{k+1}|x_k) + \ln p(Z_{1:k}, X_{1:k})$, the upper-left blocks are
$A_k^1 = -E\left[\nabla_{X_{1:k-1}}\nabla_{X_{1:k-1}}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = A_k$ (the two new factors do not depend on $X_{1:k-1}$)
$B_k^1 = -E\left[\nabla_{X_{1:k-1}}\nabla_{x_k}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = B_k$
$C_k^1 = -E\left[\nabla_{x_k}\nabla_{x_k}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = C_k + D_k^{11}$
232 Proof of Theorem 4 (continue – 2): The remaining blocks are
$L_k := -E\left[\nabla_{X_{1:k-1}}\nabla_{x_{k+1}}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = 0$
$E_k := -E\left[\nabla_{x_k}\nabla_{x_{k+1}}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = -E\left[\nabla_{x_k}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] = D_k^{12}$
$F_k := -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(Z_{1:k+1}, X_{1:k+1})\right] = -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] - E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(z_{k+1}|x_{k+1})\right] = D_k^{22}$
233 Cramér-Rao Lower Bound (CRLB) – Recursive Cramér-Rao Lower Bound. Proof of Theorem 4 (continue – 3):
We found
$E_{Z,X}\left\{\begin{bmatrix}\hat X_{1:k-1|k+1} - X_{1:k-1}\\ \hat x_{k|k+1} - x_k\\ \hat x_{k+1|k+1} - x_{k+1}\end{bmatrix}\begin{bmatrix}\cdot\end{bmatrix}^T\right\} \ge \begin{bmatrix}A_k & B_k & 0\\ B_k^T & C_k + D_k^{11} & D_k^{12}\\ 0 & D_k^{21} & D_k^{22}\end{bmatrix}^{-1}$
The lower-right $n\times n$ block of the inverse is
$J_{k+1}^{-1} = \left\{D_k^{22} - \begin{bmatrix}0 & D_k^{21}\end{bmatrix}\begin{bmatrix}A_k & B_k\\ B_k^T & C_k + D_k^{11}\end{bmatrix}^{-1}\begin{bmatrix}0\\ D_k^{12}\end{bmatrix}\right\}^{-1} = \left\{D_k^{22} - D_k^{21}\left(C_k + D_k^{11} - B_k^T A_k^{-1} B_k\right)^{-1} D_k^{12}\right\}^{-1}$
Therefore, since $J_k = C_k - B_k^T A_k^{-1} B_k$:
$J_{k+1} = D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}$
234 Cramér-Rao Lower Bound (CRLB) – Recursive Cramér-Rao Lower Bound. Proof of Theorem 4 (continue – 4):
The recursion starts with the initial information matrix $J_0$, which can be computed from the initial density $p(x_0)$ as
$J_0 = E\left[\nabla_{x_0}\ln p(x_0)\; \nabla_{x_0}^T \ln p(x_0)\right]$
and the blocks are
$D_k^{11} = -E\left[\nabla_{x_k}\nabla_{x_k}^T \ln p(x_{k+1}|x_k)\right]$
$D_k^{12} = -E\left[\nabla_{x_k}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] = \left(D_k^{21}\right)^T$
$D_k^{22} = -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] - E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(z_{k+1}|x_{k+1})\right]$
235 Cramér-Rao Lower Bound (CRLB) – Recursive Cramér-Rao Lower Bound. Proof of Theorem 4 (continue – 5):
Split $D_k^{22} = D_k^{22,1} + D_k^{22,2}$, with
$D_k^{22,1} := -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right]$, $D_k^{22,2} := -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(z_{k+1}|x_{k+1})\right]$
Then
$J_{k+1} = \underbrace{D_k^{22,1} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}}_{\text{Prediction using Process Model}} + \underbrace{D_k^{22,2}}_{\text{Measurement Update}}$
q.e.d. A sketch of this recursion in code follows below.
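A sketch of the Theorem 4 recursion $J_{k+1} = D_k^{22} - D_k^{21}(J_k + D_k^{11})^{-1} D_k^{12}$, assuming the D-blocks are supplied per step (e.g. from the additive-Gaussian expressions on the following slides), and using $D_k^{21} = (D_k^{12})^T$:

```python
import numpy as np

def crlb_recursion(J0, D_blocks):
    """D_blocks: iterable of (D11, D12, D22) matrices; yields J_1, J_2, ...

    The inverse of each yielded J_k bounds the filtering error covariance.
    """
    J = J0
    for D11, D12, D22 in D_blocks:
        J = D22 - D12.T @ np.linalg.inv(J + D11) @ D12   # D21 = D12^T
        yield J
```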
236 Cramér-Rao Lower Bound (CRLB) – Discrete Time Nonlinear Estimation: Special Cases
Probability density function of $x_0$ Gaussian:
$p(x_0) = \mathcal{N}(x_0; \hat x_0, P_0) = \dfrac{1}{\sqrt{|2\pi P_0|}}\exp\left[-\dfrac{1}{2}(x_0 - \hat x_0)^T P_0^{-1}(x_0 - \hat x_0)\right]$
$\ln p(x_0) = c_0 - \dfrac{1}{2}(x_0 - \hat x_0)^T P_0^{-1}(x_0 - \hat x_0) \;\Rightarrow\; \nabla_{x_0}\ln p(x_0) = -P_0^{-1}(x_0 - \hat x_0)$
$J_0 = E\left[\nabla_{x_0}\ln p(x_0)\,\nabla_{x_0}^T\ln p(x_0)\right] = P_0^{-1}\, E\left[(x_0 - \hat x_0)(x_0 - \hat x_0)^T\right]\, P_0^{-1} = P_0^{-1} P_0 P_0^{-1} = P_0^{-1}$
237 Cramér-Rao Lower Bound (CRLB) – Special Cases: Additive Gaussian Noises
$x_{k+1} = f_k(x_k) + w_k \in \mathbb{R}^n$, $z_{k+1} = h_{k+1}(x_{k+1}) + v_{k+1} \in \mathbb{R}^p$
$w_k$ and $v_{k+1}$ are system and measurement Gaussian white-noise sequences, independent of past and current states and of each other, with covariances $Q_k$ and $R_{k+1}$, respectively. In addition the P.D.F. of the initial state, $p(x_0)$, is also given.
$p(x_{k+1}|x_k) = \mathcal{N}(w_{k+1}; 0, Q_k) = \dfrac{1}{\sqrt{|2\pi Q_k|}}\exp\left\{-\dfrac{1}{2}\left[x_{k+1} - f_k(x_k)\right]^T Q_k^{-1}\left[x_{k+1} - f_k(x_k)\right]\right\}$
$-\ln p(x_{k+1}|x_k) = c_1 + \dfrac{1}{2}\left[x_{k+1} - f_k(x_k)\right]^T Q_k^{-1}\left[x_{k+1} - f_k(x_k)\right]$
$p(z_{k+1}|x_{k+1}) = \mathcal{N}(v_{k+1}; 0, R_{k+1}) = \dfrac{1}{\sqrt{|2\pi R_{k+1}|}}\exp\left\{-\dfrac{1}{2}\left[z_{k+1} - h_{k+1}(x_{k+1})\right]^T R_{k+1}^{-1}\left[z_{k+1} - h_{k+1}(x_{k+1})\right]\right\}$
$-\ln p(z_{k+1}|x_{k+1}) = c_2 + \dfrac{1}{2}\left[z_{k+1} - h_{k+1}(x_{k+1})\right]^T R_{k+1}^{-1}\left[z_{k+1} - h_{k+1}(x_{k+1})\right]$
Define the Jacobians, evaluated at the true states $x_k$ and $x_{k+1}$, respectively:
$\tilde F_k := \left[\nabla_{x_k} f_k^T(x_k)\right]^T$, $\tilde H_{k+1} := \left[\nabla_{x_{k+1}} h_{k+1}^T(x_{k+1})\right]^T$
238 Cramér-Rao Lower Bound (CRLB) – Special Cases: Additive Gaussian Noises (continue)
With these definitions the D-blocks become
$D_k^{11} = -E\left[\nabla_{x_k}\nabla_{x_k}^T \ln p(x_{k+1}|x_k)\right] = E\left[\tilde F_k^T Q_k^{-1} \tilde F_k\right]$
$D_k^{12} = -E\left[\nabla_{x_k}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] = -E\left[\tilde F_k^T\right] Q_k^{-1}$
$D_k^{22,1} = -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(x_{k+1}|x_k)\right] = Q_k^{-1}$
$D_k^{22,2} = -E\left[\nabla_{x_{k+1}}\nabla_{x_{k+1}}^T \ln p(z_{k+1}|x_{k+1})\right] = E\left[\tilde H_{k+1}^T R_{k+1}^{-1} \tilde H_{k+1}\right]$
where the expectations are over the true states at which the Jacobians $\tilde F_k$ and $\tilde H_{k+1}$ of $f_k(x_k)$ and $h_{k+1}(x_{k+1})$ are evaluated.
239 Cramér-Rao Lower Bound (CRLB) – Special Cases: Additive Gaussian Noises (continue)
Summary:
$D_k^{11} = E\left[\tilde F_k^T Q_k^{-1} \tilde F_k\right]$, $D_k^{12} = -E\left[\tilde F_k^T\right] Q_k^{-1} = \left(D_k^{21}\right)^T$, $D_k^{22} = Q_k^{-1} + E\left[\tilde H_{k+1}^T R_{k+1}^{-1} \tilde H_{k+1}\right]$
$J_{k+1} = \underbrace{D_k^{22,1} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12}}_{\text{Prediction using Process Model}} + \underbrace{D_k^{22,2}}_{\text{Measurement Update}}$
We can calculate the expectations using a Monte Carlo simulation. Using $p(w_k)$, $p(v_k)$ and $p(x_0)$ we draw
$x_0^i \sim p(x_0),\quad w_{k-1}^i \sim p(w_{k-1}),\quad v_k^i \sim p(v_k),\quad i = 1,2,\dots,N$
and simulate system states and measurements
$x_k^i = f_{k-1}(x_{k-1}^i) + w_{k-1}^i,\quad z_k^i = h_k(x_k^i) + v_k^i,\quad i = 1,2,\dots,N$
We then average over the $x_0$ realizations to get $J_0$, over the $x_1$ realizations to get the next terms, and so forth. A sketch of this averaging follows below.
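A sketch of the Monte Carlo averaging described above for the additive-Gaussian case: evaluate the Jacobians at the simulated true states and average to get the D-blocks. The Jacobian functions `f_jac` and `h_jac` are assumed user-supplied for a concrete model:

```python
import numpy as np

def mc_d_blocks(xk_samples, xk1_samples, f_jac, h_jac, Q_inv, R_inv):
    """Monte Carlo estimates of D11, D12, D22 for the additive-Gaussian case.

    xk_samples, xk1_samples : (N, nx) simulated true states at stages k and k+1
    f_jac(x), h_jac(x)      : Jacobians F~_k, H~_{k+1} evaluated at a state x
    """
    D11 = np.mean([f_jac(x).T @ Q_inv @ f_jac(x) for x in xk_samples], axis=0)
    D12 = -np.mean([f_jac(x).T for x in xk_samples], axis=0) @ Q_inv
    D22 = Q_inv + np.mean([h_jac(x).T @ R_inv @ h_jac(x) for x in xk1_samples], axis=0)
    return D11, D12, D22
```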
240 Cramér-Rao Lower Bound (CRLB) – Special Cases: Linear/Gaussian System
$x_{k+1} = F_k x_k + w_k \in \mathbb{R}^n$, $z_{k+1} = H_{k+1} x_{k+1} + v_{k+1} \in \mathbb{R}^p$
with Gaussian white noises as before. Now the Jacobians are the constant matrices $F_k$ and $H_{k+1}$:
$D_k^{11} = F_k^T Q_k^{-1} F_k$, $D_k^{12} = -F_k^T Q_k^{-1}$, $D_k^{22} = Q_k^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$
$J_{k+1} = Q_k^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1} - Q_k^{-1} F_k\left(J_k + F_k^T Q_k^{-1} F_k\right)^{-1} F_k^T Q_k^{-1}$
and, by the Matrix Inversion Lemma,
$J_{k+1} = \underbrace{\left(Q_k + F_k J_k^{-1} F_k^T\right)^{-1}}_{\text{Prediction using Process Model}} + \underbrace{H_{k+1}^T R_{k+1}^{-1} H_{k+1}}_{\text{Measurement Update}}$
Define $P_{k|k} := J_k^{-1}$ and $P_{k+1|k} := F_k P_{k|k} F_k^T + Q_k$; then
$P_{k+1|k+1}^{-1} = J_{k+1} = \left(F_k P_{k|k} F_k^T + Q_k\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1} = P_{k+1|k}^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$
The conclusion is that the CRLB for the Linear Gaussian filtering problem is equivalent to the covariance matrix of the Kalman Filter. A numeric check is sketched below.
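A quick numeric check (a sketch with arbitrary F, H, Q, R, not from the slides) that the CRLB recursion reproduces the Kalman-filter covariance recursion $P_{k+1|k+1}^{-1} = (F P_{k|k} F^T + Q)^{-1} + H^T R^{-1} H$:

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.1 * np.eye(2); R = np.array([[0.5]])
P = np.eye(2)                       # P_{0|0} = P_0
J = np.linalg.inv(P)                # J_0 = P_0^{-1}
for _ in range(20):
    # CRLB recursion with the linear-Gaussian D-blocks
    D11 = F.T @ np.linalg.inv(Q) @ F
    D12 = -F.T @ np.linalg.inv(Q)
    D22 = np.linalg.inv(Q) + H.T @ np.linalg.inv(R) @ H
    J = D22 - D12.T @ np.linalg.inv(J + D11) @ D12
    # Kalman covariance recursion (information form)
    P = np.linalg.inv(np.linalg.inv(F @ P @ F.T + Q) + H.T @ np.linalg.inv(R) @ H)
assert np.allclose(np.linalg.inv(J), P)   # CRLB inverse == Kalman covariance
```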
241 Cramér-Rao Lower Bound (CRLB) – Special Cases: Linear System with Zero Process Noise
$x_{k+1} = F_k x_k \in \mathbb{R}^n$, $z_{k+1} = H_{k+1} x_{k+1} + v_{k+1} \in \mathbb{R}^p$
$v_{k+1}$ is a measurement Gaussian white-noise sequence, independent of past and current states, with covariance $R_{k+1}$; here $Q_k = 0$. In addition the P.D.F. of the initial state, $p(x_0)$, is also given.
Define $P_{k|k} := J_k^{-1}$ and, since $Q_k = 0$, $P_{k+1|k} := F_k P_{k|k} F_k^T$; then
$P_{k+1|k+1}^{-1} = \left(F_k P_{k|k} F_k^T\right)^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1} = P_{k+1|k}^{-1} + H_{k+1}^T R_{k+1}^{-1} H_{k+1}$
242 Gating and Data Association
(Figure: tracking-loop block diagram — Sensor Data Processing and Measurement Formation, Observation-to-Track Association, Track Maintenance (Initialization, Confirmation and Deletion), Filtering and Prediction, Gating Computations. From Samuel S. Blackman, "Multiple-Target Tracking with Radar Applications", Artech House, 1986, and Samuel S. Blackman, Robert Popoli, "Design and Analysis of Modern Tracking Systems", Artech House, 1999.)
When more than one target is detected by the sensor in each of the measurement scans we must:
• Open and manage a Track File for each target, containing the history of the target data.
• After each new set (scan) of measurements, associate each measurement with an existing Track File, or open a new Track File (a new target was detected).
• Only after the association with a Track File is the measurement data provided to the target estimator (of the Track File) for filtering and prediction for the next scan.
(Figure: three association hypotheses connecting Measurements 1 and 2 to tracks t1, t2, t3.)
243 Gating and Data Association – Background
Filtering deals with a single target, i.e. probability of detection PD = 1 and probability of false alarm PFA = 0.
Facts:
• Sensors operate with PD < 1 and PFA > 0.
• Multiple targets are often present.
• Measurements (plots) are not labeled!
Problem: how to know which measurements correspond to which target (Track File).
The goal of Gating and Data Association: determine the origin of each measurement by associating it with an existing Track File or a new Track File, or by declaring it a false detection.
244 Gating and Data Association – Gating and Data Association Techniques
• Gating (ellipsoidal, rectangular, others)
• (Global) Nearest Neighbor (GNN, NN) Algorithm
• Multiple Hypothesis Tracking (MHT)
• (Joint) Probabilistic Data Association (JPDA/PDA)
• Multidimensional Assignment
245 Gating and Data Association – Data Association Techniques
Single-scan methods:
• Nearest Neighbor (NN)
• Global Nearest Neighbor (GNN)
• (Joint) Probabilistic Data Association (PDA/JPDA)
Multiple-scan methods:
• Multi Hypothesis Tracker (MHT)
• Multi Dimensional Association (MDA)
• Mixture Reduction Data Association (MRDA)
• Viterbi Data Association (VDA)
246 Gating and Data Association – Optimal Correlation of Sensor Data with Tracks in Surveillance Systems (R. G. Sea, Hughes, 1973)
We have n stored tracks, with predicted measurements and innovation covariances at scan k+1 given by
$\hat z_j(k+1|k),\; S_j(k+1),\quad j = 1,\dots,n$
At scan k+1 we have m sensor reports (no more than one report per target):
$D_{k+1} = \{z_1,\dots,z_m\}$ — the set of all sensor reports on scan k+1
H — a particular hypothesis (from a complete set S of hypotheses) connecting r(H) tracks to r measurements.
We want to solve the following optimization problem:
$\max_{H\in S} P(H|D) = \max_{H\in S}\dfrac{P(D|H)\,P(H)}{P(D)} = \max_{H\in S}\dfrac{1}{c}\,P(D|H)\,P(H)$
247 Gating and Data Association – Optimal Correlation of Sensor Data with Tracks (continue – 1)
Not all the measurements are from a real target; some are false alarms. The common mathematical model for such false measurements is that they are:
• uniformly spatially distributed
• independent across time
• residual clutter (the constant clutter, if any, is not considered).
False Alarm Models: the probability of m false alarms or new targets in the search volume V, in terms of their spatial density λ, is given by a Poisson distribution:
$\mu_{FA}(m) = \dfrac{(\lambda V)^m}{m!}\, e^{-\lambda V}$
Because of the uniform spatial distribution in the search volume,
$P(z_i\,|\,\text{False Alarm or New Target}) = \dfrac{1}{V}$
248 Gating and Data Association – Optimal Correlation of Sensor Data with Tracks (continue – 2)
$\max_{H\in S} P(H|D) = \max_{H\in S}\frac{1}{c}\,P(D|H)\,P(H)$
P(D|H) — probability of the measurements given that hypothesis H is true. With independent measurements,
$P(D|H) = P(z_1,\dots,z_m|H) = \prod_{i=1}^{m} P(z_i|H)$
where
$P(z_i|H) = \begin{cases}\dfrac{1}{V} & \text{if measurement } i \text{ is a False Alarm or New Target}\\[2mm] \mathcal{N}(z_i;\hat z_j, S_j) = \dfrac{\exp\left[-\frac{1}{2}(z_i - \hat z_j)^T S_j^{-1}(z_i - \hat z_j)\right]}{\sqrt{|2\pi S_j|}} & \text{if measurement } i \text{ is connected to track } j\end{cases}$
so, for a hypothesis connecting r tracks to r measurements and leaving m−r false alarms or new targets,
$P(D|H) = \left(\dfrac{1}{V}\right)^{m-r}\prod_{l=1}^{r}\dfrac{\exp\left[-\frac{1}{2}(z_{i_l} - \hat z_{j_l})^T S_{j_l}^{-1}(z_{i_l} - \hat z_{j_l})\right]}{\sqrt{|2\pi S_{j_l}|}}$
249 Gating and Data Association – Optimal Correlation of Sensor Data with Tracks (continue – 3)
P(H) — probability of hypothesis H connecting tracks j1,…,jr to measurements i1,…,ir out of m sensor reports:
$P(H) = P(i_1,\dots,i_r\,|\,j_1,\dots,j_r, m)\; P(j_1,\dots,j_r)\; P_{FA}(m-r)\; P(m)$
where
$P(i_1,\dots,i_r|j_1,\dots,j_r,m) = \dfrac{1}{m}\cdot\dfrac{1}{m-1}\cdots\dfrac{1}{m-r+1} = \dfrac{(m-r)!}{m!}$ — probability of connecting tracks j1,…,jr to measurements i1,…,ir
$P(j_1,\dots,j_r) = \prod_{j\in\{j_1,\dots,j_r\}} P_{D_j}\;\prod_{j\notin\{j_1,\dots,j_r\}}\left(1 - P_{D_j}\right)$ — probability of detecting only targets j1,…,jr
$P_{FA}(m-r) = \dfrac{(\lambda V)^{m-r}}{(m-r)!}\, e^{-\lambda V}$ — for the m−r false alarms or new targets, assuming a Poisson distribution with density λ over the search volume V
$P(m)$ — probability of exactly m reports
250 Gating and Data Association – Optimal Correlation of Sensor Data with Tracks (continue – 4)
Combining the two factors:
$P(D|H)\,P(H) = \dfrac{1}{V^{m-r}}\prod_{l=1}^{r}\dfrac{\exp\left[-\frac{1}{2}d_{i_l j_l}^2\right]}{\sqrt{|2\pi S_{j_l}|}}\cdot\dfrac{(m-r)!}{m!}\prod_{j\in H}P_{D_j}\prod_{j\notin H}\left(1-P_{D_j}\right)\cdot\dfrac{(\lambda V)^{m-r}}{(m-r)!}\,e^{-\lambda V}\,P(m)$
Taking the logarithm and discarding the terms that are common to all hypotheses:
$\ln\left[\dfrac{1}{c}P(D|H)P(H)\right] = \text{const} + \dfrac{1}{2}\sum_{l=1}^{r}\left\{2\ln\left[\dfrac{P_{D_{j_l}}}{\left(1-P_{D_{j_l}}\right)\lambda\sqrt{|2\pi S_{j_l}|}}\right] - d_{i_l j_l}^2\right\}$
so that
$\max_{H\in S}\ln\left[\frac{1}{c}P(D|H)P(H)\right] \;\Leftrightarrow\; \min \sum_{(i,j)\in H}\left[d_{ij}^2 - G_j\right],\quad d_{ij}^2 \le G_j$
with
$d_{ij}^2 := (z_i - \hat z_j)^T S_j^{-1}(z_i - \hat z_j)$ and $G_j := 2\ln\left[\dfrac{P_{D_j}}{\left(1-P_{D_j}\right)\lambda\sqrt{|2\pi S_j|}}\right]$
251 Gating and Data Association – Optimal Correlation of Sensor Data with Tracks (continue – 5)
$G_j := 2\ln\left[\dfrac{P_{D_j}}{\left(1-P_{D_j}\right)\lambda\sqrt{|2\pi S_j|}}\right]$ — Association Gate for track j
In order to find the measurement $z_i \in D_{k+1} = \{z_1,\dots,z_m\}$ that belongs to track j, compute
$d_{ij}^2 := (z_i - \hat z_j)^T S_j^{-1}(z_i - \hat z_j),\quad i = 1,\dots,m$
and choose the i for which $\min_i d_{ij}^2 \le G_j$. A sketch of this decision is given below.
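A sketch of the gated nearest-neighbor decision above: for track j with prediction $\hat z_j$ and innovation covariance $S_j$, compute $d_{ij}^2$ for every report $z_i$ and accept the smallest one inside the gate $G_j$ (function and argument names are illustrative):

```python
import numpy as np

def nearest_in_gate(z_reports, z_hat_j, S_j, G_j):
    """Return (index, d2) of the closest report inside track j's gate, else (None, None).

    z_reports: (m, nz) sensor reports; z_hat_j: (nz,) prediction; S_j: (nz, nz).
    """
    S_inv = np.linalg.inv(S_j)
    nu = z_reports - z_hat_j                        # innovations z_i - z_hat_j
    d2 = np.einsum('ij,jk,ik->i', nu, S_inv, nu)    # d_ij^2 = nu^T S_j^{-1} nu
    i_best = int(np.argmin(d2))
    return (i_best, d2[i_best]) if d2[i_best] <= G_j else (None, None)
```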
252 Gating and Data Association – Gating
• A way of simplifying data association by eliminating unlikely observation-to-track pairings.
• We perform this test for every target being tracked.
• Observations which don't fall in any of the gates will be used to initiate potentially new tracks.
• We use the measurement prediction of the filter, $\hat z_{k|k-1} = h(k, \hat x_{k|k-1})$.
• Around $\hat z_{k|k-1}$ we build a gate, and dismiss all the observations that fall outside the gate from the data association.
(Figures: measurement prediction $\hat z_{k|k-1}$ with gate S(k); nearest-neighbor example with observations $z_1, z_2, z_3$ at scan $t_k$.)
253 Gating and Data Association – Ellipsoidal Gating
Assumption: the true measurement, conditioned on the past, is normally (Gaussian) distributed, with the probability density function
$p(z(k)\,|\,Z^{k-1}) = \mathcal{N}\left(z(k);\ \hat z(k|k-1),\ S(k)\right)$
Then the true measurement will be in the region
$\tilde V(k,\gamma) = \left\{z:\; d_k^2 := \left[z(k)-\hat z(k|k-1)\right]^T S^{-1}(k)\left[z(k)-\hat z(k|k-1)\right] \le \gamma\right\}$
with probability determined by the gate threshold γ. The region V(k,γ) is called a Gate or Validation Region (symbol V) or Association Region; it is also known as the Ellipsoid of Probability Concentration. The volume defined by the ellipsoid V(k,γ) is
$V(k,\gamma) = \int_{\tilde V(k,\gamma)} dz_1\cdots dz_{n_z} = c_{n_z}\left|\gamma\, S(k)\right|^{1/2} = c_{n_z}\,\gamma^{n_z/2}\left|S(k)\right|^{1/2}$
where $n_z$ is the dimension of the measurement vector z and $c_{n_z}$ is the volume of the unit ellipsoid of dimension $n_z$:
$c_n = \dfrac{\pi^{n/2}}{\Gamma\!\left(\frac{n}{2}+1\right)} = \begin{cases}\dfrac{\pi^{n/2}}{(n/2)!} & n \text{ even}\\[2mm] \dfrac{2^{(n+1)/2}\,\pi^{(n-1)/2}}{n!!} & n \text{ odd}\end{cases}$
$c_1 = 2,\quad c_2 = \pi,\quad c_3 = 4\pi/3,\quad c_4 = \pi^2/2$
Γ is the gamma function, $\Gamma(a) = \int_0^\infty t^{a-1} e^{-t}\, dt$. A small volume computation is sketched below.
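A sketch computing the gate volume $V(k,\gamma) = c_{n_z}\,\gamma^{n_z/2}\,|S|^{1/2}$, using the closed form $c_n = \pi^{n/2}/\Gamma(n/2+1)$ (which reproduces $c_1 = 2$, $c_2 = \pi$, $c_3 = 4\pi/3$, $c_4 = \pi^2/2$):

```python
import numpy as np
from math import pi, gamma

def gate_volume(S, g):
    """Volume of the validation ellipsoid for innovation covariance S and threshold g."""
    n = S.shape[0]                               # measurement dimension n_z
    c_n = pi ** (n / 2) / gamma(n / 2 + 1)       # unit-ellipsoid volume
    return c_n * g ** (n / 2) * np.sqrt(np.linalg.det(S))
```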
254
SOLO
Gating and Data Association
Ellipsoidal Gating (continue – 1)

The true measurement falls in $\tilde V(k, \gamma)$ with probability P_G determined by the gate threshold γ:

$$ P_G(k, \gamma) = \int_{V(k,\gamma)} \frac{ \exp\{ -\tfrac12 [z - \hat z(k|k-1)]^T S^{-1}(k) [z - \hat z(k|k-1)] \} }{ (2\pi)^{n_z/2}\, |S(k)|^{1/2} }\; dz_1 \cdots dz_{n_z} $$

If we transform to the principal axes of S^{-1}(k):

$$ S = T \Lambda T^T, \qquad \Lambda = \mathrm{diag}(\sigma_1^2, \dots, \sigma_{n_z}^2), \qquad T\, T^T = I \;\Rightarrow\; S^{-1} = T \Lambda^{-1} T^T $$

$$ d_k^2 = dz^T S^{-1} dz = dz^T T \Lambda^{-1} T^T dz \overset{dw := T^T dz}{=} dw^T \Lambda^{-1} dw = \frac{w_1^2}{\sigma_1^2} + \cdots + \frac{w_{n_z}^2}{\sigma_{n_z}^2} $$

Z_k := d_k² is chi-square distributed of order n_z (Papoulis, p. 250):

$$ p(Z_k) = \frac{ Z_k^{n_z/2 - 1} }{ 2^{n_z/2}\, \Gamma(n_z/2) } \exp\left( -\frac{Z_k}{2} \right) $$

$$ P_G(k, \gamma) = \int_0^\gamma \frac{ Z_k^{n_z/2 - 1} }{ 2^{n_z/2}\, \Gamma(n_z/2) } \exp\left( -\frac{Z_k}{2} \right) dZ_k $$
255
SOLO
Gating and Data Association
Ellipsoidal Gating (continue – 2)

Since Z_k := d_k² is chi-square distributed of order n_z,

$$ P_G(k, \gamma) = \int_0^\gamma \frac{ Z_k^{n_z/2 - 1} }{ 2^{n_z/2}\, \Gamma(n_z/2) } \exp\left( -\frac{Z_k}{2} \right) dZ_k $$

This integral has the following closed-form solutions for different n_z:

  n_z = 1:  P_G = 2 gc(√γ) − 1
  n_z = 2:  P_G = 1 − exp(−γ/2)
  n_z = 3:  P_G = 2 gc(√γ) − 1 − √(2γ/π) exp(−γ/2)
  n_z = 4:  P_G = 1 − (1 + γ/2) exp(−γ/2)
  n_z = 5:  P_G = 2 gc(√γ) − 1 − √(2γ/π) (1 + γ/3) exp(−γ/2)
  n_z = 6:  P_G = 1 − (1 + γ/2 + γ²/8) exp(−γ/2)

where gc is the standard Gaussian probability integral:

$$ gc(x) := \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp(-u^2/2)\, du $$
256
SOLO
Gating and Data Association
Ellipsoidal Gating (continue – 3)

The true measurement will be in $\tilde V(k, \gamma)$ with probability P_G determined by the gate threshold γ. Here we describe another way of determining γ, based on the chi-square distribution of d_k². Since d_k² is chi-square distributed of order n_z, we can use the chi-square table to determine γ:

$$ \Pr\{ d_k^2 > \gamma \} = \alpha = 0.01 \text{ (typically)}, \qquad P_G = 1 - \alpha $$

Tail probabilities of the chi-square density (α = 0.01):
  n_z = 2 → γ = 9.21
  n_z = 3 → γ = 11.34
  n_z = 4 → γ = 13.28

Return to Table of Content
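The same table lookup can be done programmatically; a minimal sketch, assuming SciPy is available (chi2.ppf is the inverse chi-square CDF):

from scipy.stats import chi2

nz, alpha = 2, 0.01
gamma_thr = chi2.ppf(1.0 - alpha, df=nz)   # 9.21 for nz = 2, matching the table above
P_G = chi2.cdf(gamma_thr, df=nz)           # gate probability, here 1 - alpha = 0.99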
257
SOLO
Gating and Data Association
Comparison of Major Data Association Algorithms
(E. Waltz, J. Llinas, "Multisensor Data Fusion", Artech House, 1990, p. 194)

Major characteristics compared: (1) number of previous scans used in data association; (2), (3) association metric and hypothesis score; (4) association decision rule and hypothesis maintenance; (5) use of neighboring observations in track estimation.

(A) Nearest Neighbor
  (1) 0 (current scan only)
  (2),(3) score is a sum of distance metrics
  (4) hard decision, single unique neighbor
  (5) one observation used
  Remarks: sequential process; the association matrix contains all pairing metrics.
  Major references: [38]

(B) Probabilistic Data Association (PDA), Joint PDA (JPDA)
  (1) 0 (current scan only)
  (2),(3) a posteriori probability
  (4) hard decision
  (5) all neighbors (combined) are used
  Remarks: tracks assumed to be initiated; PDA for STT, JPDA for MTT; suitable for dense targets.
  Major references: [39], [40]

(C) Maximum Likelihood (ML)
  (1) N
  (2),(3) likelihood score
  (4) soft decision resulting in multiple hypotheses (requiring branching or track splitting)
  (5) all neighbors (individually) used in multiple hypotheses, each used for independent estimates
  Remarks: batch process for a set of N scans; in the limit N → ∞, full-scene batch processing; suitable for initiation.
  Major references: [41], [42], [43]

(D) Sequential Bayesian Probabilistic
  (1) N
  (2),(3) a posteriori probability or likelihood score
  (4) soft decision resulting in multiple hypotheses (requiring branching or track splitting)
  (5) all neighbors (individually) used in multiple hypotheses, each used for independent estimates
  Remarks: sequential process with multiple, deferred hypotheses; pruning, combining and clustering are required to limit hypotheses.
  Major references: [44]

(E) Optimal Bayesian
  (1) ∞
  (2),(3) a posteriori probability or likelihood score
  (4) soft decision resulting in multiple hypotheses (requiring branching or track splitting)
  (5) all neighbors (individually) used in multiple hypotheses, each used for independent estimates
  Remarks: batch process; requires the most computation due to consideration of all hypotheses.
  Major references: [45]

References:
[38] P. G. Casner, R. J. Prengaman, "Integration and Automation of Multiple Co-Located Radars", Proc. IEEE EASCON, 1977, pp. 10-1A-1E
[39] Y. Bar-Shalom, E. Tse, "Tracking in a Cluttered Environment with Probabilistic Data Association", Automatica, Vol. 11, September 1975, pp. 451-460
[40] T. E. Fortmann, Y. Bar-Shalom, M. Scheffe, "Multi-Target Tracking Using Joint Probabilistic Data Association", Proc. 1980 IEEE Conf. on Decision and Control, December 1980, pp. 807-812
[41] R. W. Sittler, "An Optimal Data Association Problem in Surveillance Theory", IEEE Trans. Military Electronics, Vol. MIL-8, April 1964, pp. 125-139
[42] J. J. Stein, S. S. Blackman, "Generalized Correlation of Multi-Target Track Data", IEEE Trans. Aerospace and Electronic Systems, Vol. AES-11, No. 6, November 1975, pp. 1207-1217
[43] C. L. Morefield, "Application of 0-1 Integer Programming to Multi-Target Tracking Problems", IEEE Trans. Automatic Control, Vol. AC-22, June 1977, pp. 302-312
[44] D. B. Reid, "An Algorithm for Tracking Multiple Targets", IEEE Trans. Automatic Control, Vol. AC-24, December 1979, pp. 843-854
[45] R. A. Singer, R. G. Sea, R. B. Housewright, "Derivation and Evaluation of Improved Tracking Filters for Use in Dense Multi-Target Environments", IEEE Trans. Information Theory, Vol. IT-20, July 1974, pp. 423-432
258
SOLO
Gating and Data Association
Nearest-Neighbor Standard Filter

In the Nearest-Neighbor Standard Filter (NNSF) the validated measurement nearest to the predicted measurement is used for updating the state of the target. The distance measure to be minimized is the weighted norm of the innovation:

$$ d_z^2 := [z - \hat z(k+1|k)]^T S^{-1}(k+1)\, [z - \hat z(k+1|k)] = i^T(k+1)\, S^{-1}(k+1)\, i(k+1) $$

where S is the covariance matrix of the innovation.

The problem with choosing the nearest neighbor is that, with some probability, it is not the correct measurement. Therefore the NNSF will sometimes use incorrect measurements while "believing" that they are correct.

Gating & Data Association Table
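A minimal sketch of the NNSF selection rule in Python/NumPy; the function name nearest_neighbor and the argument gamma_thr are illustrative assumptions:

import numpy as np

def nearest_neighbor(measurements, z_pred, S, gamma_thr):
    # Returns the index of the validated measurement with minimal d^2, or None
    # when no measurement falls inside the gate.
    S_inv = np.linalg.inv(S)
    best_j, best_d2 = None, gamma_thr
    for j, z in enumerate(measurements):
        innov = z - z_pred
        d2 = float(innov @ S_inv @ innov)
        if d2 <= best_d2:
            best_j, best_d2 = j, d2
    return best_j, best_d2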
259
SOLO
Gating and Data Association
Global Nearest-Neighbor (GNN) Algorithms

• Several 2-D assignment algorithms are available (a sketch using one of them follows this list):
  - Hungarian Method (Kuhn)
  - Munkres Algorithm
  - JV, JVC (Jonker – Volgenant – Castanon) Algorithms
  - Auction Algorithm (Bertsekas)
• All these algorithms give the EXACT global solution.
• They are of polynomial order of complexity.
• They differ in speed of computation; the Auction Algorithm is considered the best.

Gating & Data Association Table
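As a hedged illustration of global 2-D assignment (here via SciPy's Hungarian-type solver rather than the Auction algorithm named above), with illustrative names gnn_assign and gamma_thr:

import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_assign(d2, gamma_thr, big=1.0e6):
    # d2[i, j]: statistical distance of measurement i to track j.
    cost = np.where(d2 <= gamma_thr, d2, big)    # forbid pairings outside the gates
    rows, cols = linear_sum_assignment(cost)     # exact global 2-D assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < big]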
260
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF

The Probabilistic Data Association Filter (PDAF) is a suboptimal Bayesian algorithm that assumes there is only one target of interest in the gate and that the track has been initialized.

At each sampling instant a Validation Gate (to be defined) is set up. Among the validated measurements at most one can originate from the target; all the others are clutter returns, or "false alarms", and are modeled as Independent Identically Distributed (IID) random variables.

The PDAF uses only the latest set of measurements (the optimal Bayesian filter uses all the measurements up to estimation time). The past is summarized approximately by making the following basic assumption of the PDAF:

$$ p(x(k) | Z^{1:k-1}) = \mathcal{N}(x(k);\, \hat x(k|k-1),\, P(k|k-1)) $$

i.e., the state is assumed normally distributed (Gaussian) according to the latest prediction of the state estimate and covariance matrix.

The detection of the target occurs independently from sample to sample with a known probability P_D, which can be time-varying.

[Figure: validation gate V(k) around the predicted measurement ẑ(t_k|t_{k-1}), containing measurements z(x_1, t_k), z(x_2, t_k), …, z(x_m, t_k); estimated measurements ẑ(t_{k-1}|t_{k-2}), ẑ(t_{k-2}|t_{k-3}) of the track.]
261
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 1)

Following the white IID innovation assumption, the Validation Gate is defined by the ellipsoid

$$ \tilde V_k(\gamma) := \{\, z(k) : [z(k) - \hat z(k|k-1)]^T S^{-1}(k)\, [z(k) - \hat z(k|k-1)] \le \gamma \,\} $$

The weighted norm of the innovation is chi-square distributed with the number of degrees of freedom equal to the dimension n_z of the measurement. The value of γ is determined by defining the required probability P_G that a measurement is in the gate:

$$ P_G := \Pr\{ z(k) \in \tilde V_k(\gamma) \} $$

• From the chi-square table, given α = 1 − P_G and n_z, we can determine γ:
  α = 0.01, n_z = 2 → γ = 9.21
  α = 0.01, n_z = 3 → γ = 11.34
  α = 0.01, n_z = 4 → γ = 13.28
262
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 2)

The fact that a measurement is obtained depends also on the probability of detection P_D of the target (P_D := Pr{a true measurement is detected}):

  Probability that the true target is detected in the gate = P_D P_G
  Probability that no target is detected in the gate = 1 − P_D P_G

Assuming we have m_k measurements (a random variable) from the ellipsoidal validation region $\tilde V_k$, define the events:

• θ_j(k) := { z_j(k) is a target-originated measurement }, j = 1, 2, …, m_k (m_k − 1 are false alarms)
• θ_0(k) := { none of the measurements at time k are target originated } (m_k false alarms)

with probabilities

$$ \beta_j(k) := P\{ \theta_j(k) | Z^{1:k} \}, \qquad j = 0, 1, \dots, m_k $$

In view of the above assumptions these events are exclusive and exhaustive, and therefore

$$ \bigcup_{j=0}^{m_k} \theta_j(k) = \Omega, \qquad \sum_{j=0}^{m_k} \beta_j(k) = 1 $$

The procedure that yields these probabilities is called Probabilistic Data Association (PDA).
263
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 3)

β_j(k) computation

$$ \beta_j(k) := P\{ \theta_j(k) | Z^{1:k} \} = P\{ \theta_j(k) | Z(k), m_k, Z^{1:k-1} \}, \qquad j = 0, 1, \dots, m_k $$

Z^{1:k} – all measurements up to time k; Z(k) – all m_k measurements at time k.

Using Bayes' rule for the exclusive and exhaustive events, we obtain:

$$ \beta_j(k) = \frac{ p[Z(k) | \theta_j(k), m_k, Z^{1:k-1}]\; P\{\theta_j(k) | m_k, Z^{1:k-1}\} }{ \sum_{i=0}^{m_k} p[Z(k) | \theta_i(k), m_k, Z^{1:k-1}]\; P\{\theta_i(k) | m_k, Z^{1:k-1}\} } $$

where $p[Z(k) | \theta_j(k), m_k, Z^{1:k-1}]$ is the likelihood function of event θ_j(k).

Denoting by φ the number of false alarms (we have φ = m_k − 1 under θ_j, j ≥ 1, or φ = m_k under θ_0), and noting that, given m_k, each of the m_k measurements is equally likely to be the target-originated one:

$$ \gamma_j(k) := P\{ \theta_j(k) | m_k, Z^{1:k-1} \} = \begin{cases} \dfrac{1}{m_k}\, P\{ \varphi = m_k - 1\, |\, m_k \}, & j = 1, \dots, m_k \\[1ex] P\{ \varphi = m_k\, |\, m_k \}, & j = 0 \end{cases} $$
264
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 4)

β_j(k) computation (continue – 1)

Using the Bayes formula we obtain:

$$ P\{ \varphi = m_k - 1\, |\, m_k \} = \frac{ P_D P_G\, \mu_F(m_k - 1) }{ P(m_k) }, \qquad P\{ \varphi = m_k\, |\, m_k \} = \frac{ (1 - P_D P_G)\, \mu_F(m_k) }{ P(m_k) } $$

where μ_F is the probability mass function (pmf) of the number of false alarms and P_D P_G is the probability that the target has been detected and its measurement fell in the gate. The common denominator is:

$$ P(m_k) = P_D P_G\, \mu_F(m_k - 1) + (1 - P_D P_G)\, \mu_F(m_k) $$
265
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 5)

β_j(k) computation (continue – 2)

We obtained:

$$ \gamma_j(k) = \begin{cases} \dfrac{1}{m_k} \cdot \dfrac{ P_D P_G\, \mu_F(m_k - 1) }{ P_D P_G\, \mu_F(m_k - 1) + (1 - P_D P_G)\, \mu_F(m_k) }, & j = 1, \dots, m_k \\[2ex] \dfrac{ (1 - P_D P_G)\, \mu_F(m_k) }{ P_D P_G\, \mu_F(m_k - 1) + (1 - P_D P_G)\, \mu_F(m_k) }, & j = 0 \end{cases} $$

Two models can be used for μ_F (the pmf of the number of false alarms):

(i) A Poisson model with a certain spatial density λ (parametric):

$$ \mu_F(m_k) = e^{-\lambda V(k)}\, \frac{ (\lambda V(k))^{m_k} }{ m_k! } $$

(ii) A (nonparametric) diffuse prior model:  $\mu_F(m_k) = \mu_F(m_k - 1) = \varepsilon$

With the Poisson model:

$$ \gamma_j(k) = \begin{cases} \dfrac{ P_D P_G }{ P_D P_G\, m_k + (1 - P_D P_G)\, \lambda V(k) }, & j = 1, \dots, m_k \\[2ex] \dfrac{ (1 - P_D P_G)\, \lambda V(k) }{ P_D P_G\, m_k + (1 - P_D P_G)\, \lambda V(k) }, & j = 0 \end{cases} $$

With the diffuse prior:

$$ \gamma_j(k) = \begin{cases} P_D P_G / m_k, & j = 1, \dots, m_k \\ 1 - P_D P_G, & j = 0 \end{cases} $$

The nonparametric model can be obtained from the Poisson model by choosing λ := m_k / V(k), where V(k) is the volume of the ellipsoidal gate:

$$ V(k) = \frac{ \pi^{n_z/2} }{ \Gamma(n_z/2 + 1) }\, \gamma^{n_z/2}\, |S(k)|^{1/2} $$
266
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 6)

β_j(k) computation (continue – 3)

Let us compute the likelihood $p[Z(k) | \theta_j(k), m_k, Z^{1:k-1}]$. Since for m_k measurements we can have only one target and m_k − 1 false alarms, or m_k false alarms, and the measurements are independent:

$$ p[Z(k) | \theta_j(k), m_k, Z^{1:k-1}] = \prod_{i=1}^{m_k} p[z_i(k) | \theta_j(k), m_k, Z^{1:k-1}] $$

Assumptions: Gaussian pdf of the correct target measurement in the ellipsoidal gate (normalized by the gate probability P_G) and uniform distribution of the false alarms inside V(k):

$$ p[z_i(k) | \theta_j(k), m_k, Z^{1:k-1}] = \begin{cases} P_G^{-1}\, \mathcal{N}(i_j(k);\, 0,\, S(k)) = P_G^{-1}\, \dfrac{ \exp(-\tfrac12\, i_j^T(k)\, S^{-1}(k)\, i_j(k)) }{ |2\pi S(k)|^{1/2} }, & i = j \text{ (true target)} \\[2ex] V(k)^{-1}, & i \ne j \text{ (false alarm or new target)} \end{cases} $$

Therefore:

$$ p[Z(k) | \theta_j(k), m_k, Z^{1:k-1}] = \begin{cases} V(k)^{-(m_k - 1)}\, P_G^{-1}\, \mathcal{N}(i_j(k);\, 0,\, S(k)), & j = 1, \dots, m_k \\ V(k)^{-m_k}, & j = 0 \end{cases} $$
267
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 7)

β_j(k) computation (continue – 4)

Combining the likelihoods with the priors γ_j(k), we obtain for the parametric (Poisson) model:

$$ \beta_j(k) = \frac{1}{c}\; V(k)^{-(m_k - 1)}\, P_G^{-1}\, \frac{ \exp(-\tfrac12\, i_j^T(k)\, S^{-1}(k)\, i_j(k)) }{ |2\pi S(k)|^{1/2} } \cdot \frac{ P_D P_G }{ P_D P_G\, m_k + (1 - P_D P_G)\, \lambda V(k) }, \qquad j = 1, \dots, m_k $$

$$ \beta_0(k) = \frac{1}{c}\; V(k)^{-m_k} \cdot \frac{ (1 - P_D P_G)\, \lambda V(k) }{ P_D P_G\, m_k + (1 - P_D P_G)\, \lambda V(k) } $$

where c is a normalization factor.
268
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 8)

β_j(k) computation (continue – 5)

Carrying out the normalization, we finally obtain:

$$ \beta_j(k) = \begin{cases} \dfrac{ e_j }{ b + \sum_{l=1}^{m_k} e_l }, & j = 1, \dots, m_k \\[2ex] \dfrac{ b }{ b + \sum_{l=1}^{m_k} e_l }, & j = 0 \end{cases} $$

where, for the Poisson (parametric) model:

$$ e_j := \exp(-\tfrac12\, i_j^T(k)\, S^{-1}(k)\, i_j(k)), \qquad b := \lambda\, |2\pi S(k)|^{1/2}\, \frac{ 1 - P_D P_G }{ P_D } $$

For the nonparametric model we choose λ := m_k / V(k).
269
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 9)

β_j(k) computation – Summary (continue – 6)

Calculation of innovations and measurement validation:

$$ i_j(k) = z_j(k) - \hat z(k|k-1), \qquad d_j^2 = i_j^T(k)\, S^{-1}(k)\, i_j(k) \le \gamma, \qquad j = 1, \dots, m_k, \qquad \gamma = \gamma(P_G, n_z) $$

Evaluation of association probabilities β_j(k):

$$ \beta_j(k) = \begin{cases} \dfrac{ e_j }{ b + \sum_{l=1}^{m_k} e_l }, & j = 1, \dots, m_k \\[2ex] \dfrac{ b }{ b + \sum_{l=1}^{m_k} e_l }, & j = 0 \end{cases} $$

$$ e_j := \exp(-\tfrac12\, i_j^T(k)\, S^{-1}(k)\, i_j(k)), \qquad b := \lambda\, |2\pi S(k)|^{1/2}\, \frac{ 1 - P_D P_G }{ P_D } $$

For the Poisson (parametric) model λ is the clutter spatial density; for the nonparametric model we choose λ := m_k / V(k).
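A minimal Python/NumPy sketch of the summary above; the function name pdaf_betas and its argument names are illustrative assumptions:

import numpy as np

def pdaf_betas(innovations, S, P_D, P_G, lam):
    # innovations: list of gated innovations i_j(k); lam: clutter spatial density
    # (for the nonparametric model use lam = m_k / V(k)).
    S_inv = np.linalg.inv(S)
    e = np.array([np.exp(-0.5 * i @ S_inv @ i) for i in innovations])
    b = lam * np.sqrt(np.linalg.det(2.0 * np.pi * S)) * (1.0 - P_D * P_G) / P_D
    denom = b + e.sum()
    return b / denom, e / denom    # beta_0 and the array [beta_1, ..., beta_m]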
270
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 10)

Using the Total Probability Theorem (for exclusive & exhaustive events)

$$ p(x) = \sum_{i=1}^{n} p(x | B_i)\, p(B_i), \qquad B_i \cap B_j = \emptyset\ (i \ne j), \qquad \bigcup_{i=1}^{n} B_i = \text{all events} $$

we obtain

$$ \hat x(k|k) = E[x(k) | Z^{1:k}] = \sum_{j=0}^{m_k} E[x(k) | \theta_j(k), Z^{1:k}]\; P\{\theta_j(k) | Z^{1:k}\} $$

but $E[x(k) | \theta_j(k), Z^{1:k}] = \hat x_j(k|k)$ and $P\{\theta_j(k) | Z^{1:k}\} = \beta_j(k)$. Therefore

$$ \hat x(k|k) = \sum_{j=0}^{m_k} \hat x_j(k|k)\, \beta_j(k) $$

i.e., the state estimate conditioned on each event θ_j(k) being correct, weighted by its probability β_j(k). This is the estimation relation for an exclusive & exhaustive mixture of events with weights β_j(k).
271
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 11)

$\hat x_j(k|k)$ is the updated state estimate conditioned on event θ_j(k) being correct. It is given by the Kalman filter topology:

$$ \hat x_j(k|k) = \hat x(k|k-1) + K(k)\, i_j(k), \qquad i_j(k) = z_j(k) - \hat z(k|k-1) $$

$$ K(k) = P(k|k-1)\, H^T(k)\, S^{-1}(k) $$

for j = 1, …, m_k (a possible target detected in the validation gate). For j = 0 (no target detected in the validation gate) the innovation is i_0(k) = 0, so

$$ \hat x_0(k|k) = \hat x(k|k-1) $$

Therefore

$$ \hat x(k|k) = \sum_{j=0}^{m_k} \beta_j(k)\, \hat x_j(k|k) = \hat x(k|k-1) + K(k) \sum_{j=1}^{m_k} \beta_j(k)\, i_j(k) = \hat x(k|k-1) + K(k)\, i(k) $$

where $i(k) := \sum_{j=1}^{m_k} \beta_j(k)\, i_j(k)$ is the combined innovation.
272
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 12)

The covariance of the mixture is given by

$$ P(k|k) = E\{ [x(k) - \hat x(k|k)][x(k) - \hat x(k|k)]^T\, |\, Z^{1:k} \} = \sum_{j=0}^{m_k} \beta_j(k)\, E\{ [x(k) - \hat x(k|k)][x(k) - \hat x(k|k)]^T\, |\, \theta_j(k), Z^{1:k} \} $$

Expanding around the mode-conditioned estimates $\hat x_j(k|k)$:

$$ P(k|k) = \sum_{j=0}^{m_k} \beta_j(k)\, \left[ P_j(k|k) + \hat x_j(k|k)\, \hat x_j^T(k|k) \right] - \hat x(k|k)\, \hat x^T(k|k) $$

with the mode-conditioned covariances

$$ P_0(k|k) = P(k|k-1) \qquad \text{(no target in the validation gate)} $$

$$ P_j(k|k) = P(k|k-1) - K(k)\, S(k)\, K^T(k), \qquad j = 1, \dots, m_k \qquad \text{(one target in the validation gate)} $$
273
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 13)

Define the covariance of the state updated with the correct measurement:

$$ P^c(k|k) := P(k|k-1) - K(k)\, S(k)\, K^T(k) $$

Since $\sum_{j=0}^{m_k} \beta_j(k) = 1$, i.e. $\sum_{j=1}^{m_k} \beta_j(k) = 1 - \beta_0(k)$, we have

$$ P(k|k) = \beta_0(k)\, P(k|k-1) + [1 - \beta_0(k)]\, P^c(k|k) + \tilde P_d(k) $$

where the spread-of-the-estimates term is

$$ \tilde P_d(k) := \sum_{j=0}^{m_k} \beta_j(k)\, \hat x_j(k|k)\, \hat x_j^T(k|k) - \hat x(k|k)\, \hat x^T(k|k) $$
274
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 14)

Substituting $\hat x_j(k|k) = \hat x(k|k-1) + K(k)\, i_j(k)$ and $\hat x(k|k) = \hat x(k|k-1) + K(k)\, i(k)$ into $\tilde P_d(k)$, the terms in $\hat x(k|k-1)$ cancel (since $\sum_j \beta_j(k)\, i_j(k) = i(k)$ and $\sum_j \beta_j(k) = 1$), and we are left with

$$ \tilde P_d(k) = K(k) \left[ \sum_{j=0}^{m_k} \beta_j(k)\, i_j(k)\, i_j^T(k) - i(k)\, i^T(k) \right] K^T(k) $$
275
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 15)

Finally we obtained:

$$ P(k|k) = \beta_0(k)\, P(k|k-1) + [1 - \beta_0(k)]\, P^c(k|k) + \tilde P_d(k) $$

$$ P^c(k|k) = P(k|k-1) - K(k)\, S(k)\, K^T(k) $$

$$ \tilde P_d(k) := K(k) \left[ \sum_{j=1}^{m_k} \beta_j(k)\, i_j(k)\, i_j^T(k) - i(k)\, i^T(k) \right] K^T(k) $$

The term $\tilde P_d(k)$ reflects the effect of the measurement-origin uncertainty on the covariance of the updated state.
276
SOLO
Gating and Data Association
Suboptimal Bayesian Algorithm: The PDAF (continue – 16)

One Cycle of PDAF

Predicted state: $\hat x(k|k-1) = F(k-1)\, \hat x(k-1|k-1)$

Covariance of predicted state: $P(k|k-1) = F(k-1)\, P(k-1|k-1)\, F^T(k-1) + Q(k-1)$

Predicted measurement: $\hat z(k|k-1) = H(k)\, \hat x(k|k-1)$

Innovation covariance: $S(k) = H(k)\, P(k|k-1)\, H^T(k) + R(k)$

Filter gain: $K(k) = P(k|k-1)\, H^T(k)\, S^{-1}(k)$

Calculation of innovations and measurement validation (measurements z_j(k), j = 1, 2, …, m_k):

$$ i_j(k) = z_j(k) - \hat z(k|k-1), \qquad d_j^2 = i_j^T(k)\, S^{-1}(k)\, i_j(k) \le \gamma, \qquad \gamma = \gamma(P_G, n_z) $$

Evaluation of association probabilities:

$$ b := \lambda\, |2\pi S(k)|^{1/2}\, \frac{ 1 - P_D P_G }{ P_D }, \qquad e_j := \exp(-\tfrac12 d_j^2), \quad j = 1, 2, \dots, m_k $$

$$ \beta_j(k) = e_j \Big/ \Big( b + \sum_{l=1}^{m_k} e_l \Big), \qquad \beta_0(k) = b \Big/ \Big( b + \sum_{l=1}^{m_k} e_l \Big) \quad \text{(no valid observation)} $$

Combined innovation: $i(k) = \sum_{j=1}^{m_k} \beta_j(k)\, i_j(k)$

Updated state estimate: $\hat x(k|k) = \hat x(k|k-1) + K(k)\, i(k)$

Effect of measurement origin on state covariance: $\tilde P_d(k) = K(k) \left[ \sum_{j=1}^{m_k} \beta_j(k)\, i_j(k)\, i_j^T(k) - i(k)\, i^T(k) \right] K^T(k)$

Updated state covariance: $P(k|k) = \beta_0(k)\, P(k|k-1) + [1 - \beta_0(k)]\, [I - K(k) H(k)]\, P(k|k-1) + \tilde P_d(k)$
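The full cycle above can be collected into one routine; a minimal Python/NumPy sketch under the slide's linear-Gaussian and Poisson-clutter assumptions (the name pdaf_cycle and all argument names are illustrative):

import numpy as np

def pdaf_cycle(x, P, z_list, F, Q, H, R, P_D, P_G, gamma_thr, lam):
    # Prediction
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    z_pred = H @ x_pred
    S = H @ P_pred @ H.T + R
    S_inv = np.linalg.inv(S)
    K = P_pred @ H.T @ S_inv
    # Gating: keep innovations with d_j^2 <= gamma
    innov = [z - z_pred for z in z_list]
    innov = [i for i in innov if i @ S_inv @ i <= gamma_thr]
    if not innov:
        return x_pred, P_pred              # no validated measurement
    # Association probabilities (Poisson clutter model)
    e = np.array([np.exp(-0.5 * i @ S_inv @ i) for i in innov])
    b = lam * np.sqrt(np.linalg.det(2.0 * np.pi * S)) * (1.0 - P_D * P_G) / P_D
    beta0, beta = b / (b + e.sum()), e / (b + e.sum())
    # Combined innovation, state and covariance update
    i_comb = sum(bj * ij for bj, ij in zip(beta, innov))
    x_new = x_pred + K @ i_comb
    Pc = P_pred - K @ S @ K.T
    spread = sum(bj * np.outer(ij, ij) for bj, ij in zip(beta, innov)) - np.outer(i_comb, i_comb)
    P_new = beta0 * P_pred + (1.0 - beta0) * Pc + K @ spread @ K.T
    return x_new, P_new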
277
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Track Initialization, Maintenance & Deletion

[State diagram:
  Initial/Terminal State → (initial detection) → Tentative Track
  Tentative Track → (second detection) → Preliminary Track; (no second detection) → Initial/Terminal State
  Preliminary Track → (no. of detections ≥ M, waiting up to N scans) → Confirmed Track; (no. of detections < M) → Initial/Terminal State
  Confirmed Track → (L consecutive missed detections) → Initial/Terminal State; (no L consecutive missed detections) → remains Confirmed Track.]
278
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Track Initialization

Every detection unassociated with an existing track may be a false alarm or a new target. A track formation requires a measurement-to-measurement association.

Logic of track initialization (2 detections for a Preliminary Track followed by M detections out of N scans; a sketch of the M-of-N test follows this list):

1. Every unassociated detection is a "Track Initiator" and yields a "Tentative Track".
2. Around the initial detection a gate is set up based on
   • assumed maximum and minimum target motion parameters;
   • the measured noise intensities.
   If a target gave rise to the initiator in the first scan, then, if detected in the second scan, it will fall in the gate with nearly unity probability.
3. Following a detection in the second scan this track becomes a Preliminary Track; if there is no detection, the track is dropped. Since the Preliminary Track has two measurements, a Kalman filter can be initialized and used to set up a gate for the next (third) sampling time.
4. Starting from the third scan, a logic of M detections out of N scans (frames) is used for the subsequent gates.
5. If at the end (scan N + 2 at the latest) the logic requirement is satisfied, the track becomes a Confirmed Track; otherwise it is dropped.
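A minimal sketch of the one-shot M-of-N confirmation test described in steps 4–5, in Python; confirm_track and its arguments are illustrative names:

def confirm_track(detections, m=2, n=3):
    # detections: sequence of booleans for the n scans following the preliminary track.
    hits = 0
    for k, det in enumerate(detections[:n], start=1):
        hits += int(det)
        if hits >= m:
            return "confirmed"             # requirement met, possibly before scan n
        if hits + (n - k) < m:
            return "dropped"               # requirement can no longer be met
    return "dropped"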
279
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Track Initialization – Target Model

• The target system is given by:

$$ x_k = \Phi_{k-1}\, x_{k-1} + G_{k-1}\, u_{k-1} + w_{k-1} $$
$$ z_k = H_k\, x_k + v_k $$

• The target filter model is given by:

$$ \hat x_{k|k-1} = \Phi_{k-1}\, \hat x_{k-1|k-1} + G_{k-1}\, u_{k-1} $$
$$ \hat z_{k|k-1} = H_k\, \hat x_{k|k-1} $$

• Filter initialization is done in two steps:
  1. Following an unassociated detection, a preliminary large gate is defined.
  2. After a second detection is associated in the preliminary gate, the Kalman filter is initiated using the two measurements by defining $\hat x_{0|0}$, $P_{0|0}$.
     A Preliminary New Track is established.
280
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Track Initialization

[Figure: scans m to m+3. Old targets continue on Track #1 (Tgt #1) and Track #2 (Tgt #2); new detections start Preliminary Track #1 and Preliminary Track #2; a new target Tgt #3 appears; the remaining unassociated detections are declared false alarms.]
281
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Track Initialization – Target Model (continue – 1)

• At each scan we perform state and innovation covariance prediction:

$$ P_{k|k-1} = \Phi_{k-1}\, P_{k-1|k-1}\, \Phi_{k-1}^T + Q_{k-1} $$
$$ S_k = H_k\, P_{k|k-1}\, H_k^T + R_k $$

• If a detection (probability P_D) can be associated to the track, i.e. it is in the acceptance gate (probability P_G):

$$ (z_k - \hat z_{k|k-1})^T\, S_k^{-1}\, (z_k - \hat z_{k|k-1}) \le \gamma $$

we update the detection indicator vector:

$$ \delta_k = \begin{cases} 1 & \text{if there is a detection in the gate at } k \\ 0 & \text{otherwise} \end{cases} $$

and the state and state covariance are updated accordingly:

$$ K_k = P_{k|k-1}\, H_k^T\, S_k^{-1} $$
$$ \hat x_{k|k} = \hat x_{k|k-1} + \delta_k\, K_k\, (z_k - \hat z_{k|k-1}) $$
$$ P_{k|k} = P_{k|k-1} - \delta_k\, K_k\, S_k\, K_k^T $$

• If in M scans out of N we have a detection associated to the track, the track is confirmed; otherwise it is dropped.
282
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Markov Chain for the Track Initialization Process for M = 2, N = 3

State i of the Markov chain is defined by the detection sequence indicator vector δ_i (δ = 1 detection, δ = 0 no detection; D = detection, A = acceptance), where, for example, δ_7 = [1 1 0 1] means Detection (D), followed by Detection (A), No Detection (Ā), Detection (A).

  State 1: initial (zero) state
  State 2: δ_2 = [1]
  State 3: δ_3 = [1 1]
  State 4: δ_4 = [1 1 1]
  State 5: δ_5 = [1 1 1 0]
  State 6: δ_6 = [1 1 0]
  State 7: δ_7 = [1 1 0 1]
  State 8a: δ_8a = [1 1 1 1] – Confirmed State
  State 8b: δ_8b = [1 1 1 0 1] – Confirmed State
  State 8c: δ_8c = [1 1 0 1 1] – Confirmed State

(States 1–2 form the preliminary track; confirmation uses the m = 2 / n = 3 logic.)

The Markov chain probability vector, denoted μ(k), has components μ_i(k) = Pr{ the chain is in state i at time k }. From the Markov chain description by the table or by the graph we can define the relation:

$$ \mu(k+1) = \Pi^T\, \mu(k), \qquad k = 0, 1, \dots, \qquad \mu(0) = [1\ 0\ \cdots\ 0]^T $$
284
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Markov Chain for the Track Initialization Process for M = 2, N = 3 (continue – 1)

The initialization probability is π_D and the acceptance probability is π_A = P_D · P_G, where
  P_D = probability of detection
  P_G = probability that the true measurement falls in the gate.

Transition probability matrix Π (rows: state at k; columns: state at k+1; the confirmed states 8a, 8b, 8c are merged into state 8):

        1      2    3    4     5     6    7    8
  1   1-π_D  π_D   0    0     0     0    0    0
  2   1-π_D   0   π_D   0     0     0    0    0
  3     0     0    0   π_A    0   1-π_A  0    0
  4     0     0    0    0   1-π_A   0    0   π_A
  5   1-π_A   0    0    0     0     0    0   π_A
  6   1-π_A   0    0    0     0     0    0   π_A
  7   1-π_A   0    0    0     0    π_A   0    0
  8     0     0    0    0     0     0    0    1

Since from each state we can move to only two states, with probabilities π_D / π_A and 1 − π_D / 1 − π_A, the coefficients of the Π matrix must satisfy $\sum_j \pi_{ij} = 1$.

$$ \mu(k+1) = \Pi^T\, \mu(k), \qquad k = 0, 1, \dots, \qquad \mu(0) = [1\ 0\ \cdots\ 0]^T $$
285
SOLO
Track Life Cycle (Initialization, Maintenance & Deletion)
Markov Chain for the Track Initialization Process for M = 2, N = 3 (continue – 2)

Track confirmation is attained in state 8. Therefore the confirmation probability at time k is

$$ P_C(k) = \mu_8(k), \qquad \lim_{k \to \infty} \mu_8(k) = 1 $$

and the probability that confirmation occurs exactly at time k is μ_8(k) − μ_8(k−1).

The average confirmation time of a target-originated sequence is:

$$ \bar t_C = \sum_{k=1}^{\infty} k\, [\mu_8(k) - \mu_8(k-1)] $$
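A minimal sketch propagating μ(k+1) = Π^T μ(k) to evaluate P_C(k) and a truncated mean confirmation time; the Π entries are transcribed from the table on the previous slide, and confirmation_stats is an illustrative name:

import numpy as np

def confirmation_stats(pi_d, pi_a, k_max=100):
    d, a = pi_d, pi_a
    Pi = np.array([                        # transition matrix as tabulated above
        [1-d, d, 0, 0,   0,   0,   0, 0],
        [1-d, 0, d, 0,   0,   0,   0, 0],
        [0,   0, 0, a,   0, 1-a,   0, 0],
        [0,   0, 0, 0, 1-a,   0,   0, a],
        [1-a, 0, 0, 0,   0,   0,   0, a],
        [1-a, 0, 0, 0,   0,   0,   0, a],
        [1-a, 0, 0, 0,   0,   a,   0, 0],
        [0,   0, 0, 0,   0,   0,   0, 1],
    ])
    mu = np.zeros(8); mu[0] = 1.0
    t_mean, prev = 0.0, 0.0
    for k in range(1, k_max + 1):
        mu = Pi.T @ mu
        t_mean += k * (mu[7] - prev)       # k * [mu_8(k) - mu_8(k-1)]
        prev = mu[7]
    return mu[7], t_mean                   # P_C(k_max) and the truncated mean time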
286
SOLO
Target Estimators
Filters for Maneuvering Target Detection

• Maneuver Detection Scheme
• Hybrid State Estimation Techniques
  - Jump Markov Linear System (JMLS)
  - Interacting Multiple Model (IMM)
  - Variable Structure IMM
• Cramér–Rao Lower Bound (CRLB) for JMLS
287
SOLO
Target Estimators
Filters for Maneuvering Target Detection – Background

• The motion of a real target never follows the same dynamic model all the time.
• Essentially, there are (long – relative to measurement updates) periods of constant velocity (CV) motion with sudden changes in speed and heading.
• The measurements are of target position and velocity (sometimes), but not of target acceleration.
• There are two main approaches to dealing with maneuvering targets within the Kalman filter framework:
  - maneuver detection basic schemes;
  - hybrid-state estimation techniques, where a few predefined target maneuver models run in parallel, using the same measurements, and we recursively check which model is the most plausible in each time interval.
288
SOLO
Target Estimators
Filters for Maneuvering Target Detection
Maneuver Detection Basic Schemes

• Based on the measurement model $z(k) = H(k)\, x(k) + v(k)$, $v(k) \sim \mathcal{N}(0, R(k))$, and the innovation

$$ i(k) := z(k) - \hat z(k|k-1) = z(k) - H(k)\, \hat x(k|k-1) $$

• For the optimal Kalman filter gain the innovation is unbiased, Gaussian white noise:

$$ i(k) \sim \mathcal{N}(0, S(k)), \qquad S(k) = H(k)\, P(k|k-1)\, H^T(k) + R(k) $$

• Normalized Innovation Squared (NIS):

$$ \varepsilon(k) := i^T(k)\, S^{-1}(k)\, i(k) $$

• The NIS, ε, is chi-square distributed with n_z (the order of z) degrees of freedom:

$$ p(\varepsilon) = \frac{ \varepsilon^{n_z/2 - 1} }{ 2^{n_z/2}\, \Gamma(n_z/2) } \exp\left( -\frac{\varepsilon}{2} \right) U(\varepsilon) $$
289
SOLO
Target Estimators
Filters for Maneuvering Target Detection
Maneuver Detection Basic Schemes (continue – 1)

• From the chi-square table we can determine ε_max for a tail probability α = 0.01:
  n_z = 2 → ε_max = 9.21
  n_z = 3 → ε_max = 11.34
  n_z = 4 → ε_max = 13.28

• For non-maneuvering motion:

$$ \Pr\{ \varepsilon(k) > \varepsilon_{max} \} = \alpha = 0.01 \text{ (typically)} $$

• Once a maneuver is detected, the target dynamic model must be changed.
• In the same way we can detect the end of a target maneuver.

[Figure: for a non-maneuvering target the measurement z(x_1, t_{k+1}) stays inside the gate S_1(t_{k+1}) around ẑ_1(t_{k+1}|t_k) and ε(k) ≤ ε_max; when the target maneuvers, z(x_2, t_{k+1}) falls outside the gate S_2(t_{k+1}) and ε(k) > ε_max.]
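A minimal sketch of the NIS test described above, assuming SciPy is available; nis_maneuver_test is an illustrative name:

import numpy as np
from scipy.stats import chi2

def nis_maneuver_test(innov, S, alpha=0.01):
    # epsilon(k) = i^T S^{-1} i, compared with the chi-square tail threshold.
    eps = float(innov @ np.linalg.solve(S, innov))
    eps_max = chi2.ppf(1.0 - alpha, df=len(innov))   # 9.21 for nz = 2
    return eps > eps_max, eps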
290
SOLO
Target Estimators
Filters for Maneuvering Target Detection
Maneuver Detection Basic Schemes (continue – 2)

Return to Table of Content
291
SOLO
Target Estimators
The Hybrid Model Approach

The target model, at a given time, is assumed to be one of r possible target models (Constant Velocity, Constant Acceleration, Singer Model, etc.):

$$ \text{Model}\ M \in \{ M_j \}_{j=1}^{r} $$

$$ x(k+1) = F(k, M_j)\, x(k) + w(k, M_j), \qquad w(k, M_j) \sim \mathcal{N}(0, Q(M_j)) $$
$$ z(k) = H(k, M_j)\, x(k) + v(k, M_j), \qquad v(k, M_j) \sim \mathcal{N}(0, R(M_j)) $$

• All models are assumed linear–Gaussian (or linearizations of nonlinear models), and a Kalman-filter type estimator is used for state estimation and prediction.
• The measurements $z_k = z(t = kT)$ are received at discrete times. The information Z_{1:k} at time k consists of all measurements received up to time k: $Z_{1:k} := \{ z_1, z_2, \dots, z_k \}$.
• This is a hybrid model: it has both continuous (noise) uncertainties and discrete ("model" or "mode") uncertainties.

[Figure: a bank of filters M_1, …, M_j, …, M_r, each initialized with $\hat x(0|0)$, P(0|0) and driven by the same measurements z_k.]
292
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach

• A Bayesian framework is used. The prior probability that the system is in mode j (model M_j applies) is assumed given:

$$ \mu_j(0) = P\{ M_j | Z_0 \}, \qquad j = 1, \dots, r $$

where Z_0 is the prior information, and, since the correct model is among the assumed r possible models,

$$ \sum_{j=1}^{r} \mu_j(0) = 1 $$

Two possible situations are considered:
1. No switching between models during the scenario.
2. Switching between models during the scenario.

Return to Table of Content
293
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching Between Models During the Scenario

Using the Bayes formulation, the posterior probability of model j being correct, given the measurement data Z_{1:k} up to k, is

$$ \mu_j(k) := P\{ M_j | Z_{1:k} \} = \frac{ p(z_k | Z_{1:k-1}, M_j)\; \mu_j(k-1) }{ \sum_{i=1}^{r} p(z_k | Z_{1:k-1}, M_i)\; \mu_i(k-1) }, \qquad j = 1, \dots, r $$

with assumed prior probabilities $\mu_j(0) = P\{ M_j | Z_0 \}$, j = 1, …, r.

$p(z_k | Z_{1:k-1}, M_j)$ is the likelihood function Λ_j(k) of mode j at time k, which, under the linear-Gaussian assumptions, is given by:

$$ \Lambda_j(k) := p(z_k | Z_{1:k-1}, M_j) = \mathcal{N}(i_j(k);\, 0,\, S_j(k)) = \frac{ \exp(-\tfrac12\, i_j^T(k)\, S_j^{-1}(k)\, i_j(k)) }{ |2\pi S_j(k)|^{1/2} } $$

where

$$ i_j(k) = z_k - H(M_j)\, \hat x_j(k|k-1) \qquad \text{innovation of filter } M_j \text{ at time } k $$
$$ S_j(k) = H(M_j)\, P_j(k|k-1)\, H^T(M_j) + R(M_j) \qquad \text{innovation covariance of filter } M_j \text{ at time } k $$
294
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching Between Models During the Scenario (continue – 1)

• Each filter M_j provides the mode-conditioned state estimate $\hat x_j(k|k)$, the associated mode-conditioned covariance P_j(k|k), and the innovation covariance S_j(k), or equivalently the likelihood function Λ_j(k), at time k.

Computation of μ_j(k):

$$ \mu_j(k) = \frac{ \Lambda_j(k)\; \mu_j(k-1) }{ \sum_{i=1}^{r} \Lambda_i(k)\; \mu_i(k-1) }, \qquad \Lambda_j(k) = \mathcal{N}(i_j(k);\, 0,\, S_j(k)), \qquad \mu_j(0)\ \text{given}, \quad j = 1, \dots, r $$

[Block diagram: the measurements z_k feed the filter bank M_1, …, M_r, each starting from $\hat x_j(k-1|k-1)$, P_j(k-1|k-1) and producing $\hat x_j(k|k)$, P_j(k|k), i_j(k), S_j(k), Λ_j(k); the likelihoods update the mode probabilities μ_1(k), …, μ_r(k).]
295
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching Between Models During the Scenario (continue – 2)

• We have r Gaussian estimates $\hat x_j(k|k)$; therefore, to obtain the overall estimate of the system state and its covariance, we use the results of a Gaussian mixture with r terms:

$$ \hat x(k|k) = \sum_{j=1}^{r} \mu_j(k)\, \hat x_j(k|k) \qquad \text{(Gaussian mixture)} $$

$$ P(k|k) = \sum_{j=1}^{r} \mu_j(k) \left\{ P_j(k|k) + [\hat x_j(k|k) - \hat x(k|k)][\hat x_j(k|k) - \hat x(k|k)]^T \right\}, \qquad j = 1, \dots, r $$
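A minimal Python/NumPy sketch of the no-switching recursion and the Gaussian-mixture combination above; static_mm_update and mm_combine are illustrative names:

import numpy as np

def static_mm_update(mu_prev, innovs, S_list):
    # mu_j(k) proportional to Lambda_j(k) * mu_j(k-1), Gaussian likelihoods.
    lam = np.array([np.exp(-0.5 * i @ np.linalg.solve(S, i))
                    / np.sqrt(np.linalg.det(2.0 * np.pi * S))
                    for i, S in zip(innovs, S_list)])
    mu = lam * mu_prev
    return mu / mu.sum()

def mm_combine(mu, x_list, P_list):
    # Gaussian-mixture overall state estimate and covariance.
    x = sum(m * xj for m, xj in zip(mu, x_list))
    P = sum(m * (Pj + np.outer(xj - x, xj - x))
            for m, Pj, xj in zip(mu, P_list, x_list))
    return x, P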
296
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
1. No Switching Between Models During the Scenario (continue – 3)

$$ \hat x(k|k) = \sum_{j=1}^{r} \mu_j(k)\, \hat x_j(k|k) $$
$$ P(k|k) = \sum_{j=1}^{r} \mu_j(k) \left\{ P_j(k|k) + [\hat x_j(k|k) - \hat x(k|k)][\hat x_j(k|k) - \hat x(k|k)]^T \right\} $$

These results are exact under the following assumptions:
1. The correct model is among the models considered.
2. The same model has been in effect from the initial time.

If the mode set includes the correct one and no jump occurs, then the probability of the true mode will converge to unity; that is, this approach yields consistent estimates of the system parameters. Otherwise the probability of the model "nearest" to the correct one will converge to unity.

Return to Table of Content
297
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching Between Models During the Scenario

As before, the system is modeled by the equations:

$$ x(k+1) = F(k, M(k))\, x(k) + w(k, M(k)), \qquad w(k, M(k)) \sim \mathcal{N}(0, Q(M(k))) $$
$$ z(k) = H(k, M(k))\, x(k) + v(k, M(k)), \qquad v(k, M(k)) \sim \mathcal{N}(0, R(M(k))) $$

where M(k) denotes the model "at time k" – in effect during the sampling period ending at k. Such systems are called Jump Linear Systems.

The mode jump process is assumed left-continuous (i.e., the impact of the jump starts at t_k+). It is assumed that the mode (model) jump process is a Markov process with known mode transition probabilities. The probability of a transition from M_i at k−1 to M_j at k is given by the Markov chain:

$$ p_{ji} := P\{ M(k) = M_j | M(k-1) = M_i \} $$

Since all the possibilities are to jump from i to each of j = 1, …, r (including j = i), we must have

$$ \sum_{j=1}^{r} p_{ji} = \sum_{j=1}^{r} P\{ M(k) = M_j | M(k-1) = M_i \} = 1 $$
298
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching Between Models During the Scenario (continue – 1)

In this way the number of model histories running at each new measurement k is:
  k = 1: r models;
  k = 2: r² models, since each of the r models at k = 1 splits into r new models;
  k = 3: r³ models, since each of the r² models at k = 2 splits into r new models;
  …
  at time k: r^k models.

The number of models grows exponentially, making this approach impractical. The only way to avoid the exponentially increasing number of histories, which would have to be accounted for, is to resort to suboptimal techniques.

Return to Table of Content
299
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching Between Models During the Scenario (continue – 2)
The Interacting Multiple Model (IMM) Algorithm

In the IMM approach, at time k the state estimate is computed under each possible current model using r filters, with each filter using as its start condition (for time k−1) a different combination of the previous model-conditioned estimates – mixed initial conditions.

We assume a transition from M_i at k−1 to M_j at k with a predefined probability:

$$ p_{ji} := P\{ M(k) = M_j | M(k-1) = M_i \}, \qquad \sum_{j=1}^{r} p_{ji} = 1 $$

Define:
  $\hat x_i(k-1|k-1)$ – filtered state estimate at scan k−1 for Kalman filter model i;
  $P_i(k-1|k-1)$ – covariance matrix at scan k−1 for Kalman filter model i;
  $\mu_i(k-1)$ – probability that the target performs as in model state i, as computed just after data is received on scan k−1;
  $\mu_{ji}(k-1)$ – conditional probability that the target made the transition from state i to state j at scan k−1:

$$ \mu_{ji}(k-1) = \frac{ P\{ M(k) = M_j | M(k-1) = M_i \}\; P\{ M_i(k-1) | Z_{1:k-1} \} }{ \sum_{i=1}^{r} P\{ M(k) = M_j | M(k-1) = M_i \}\; P\{ M_i(k-1) | Z_{1:k-1} \} } = \frac{ p_{ji}\; \mu_i(k-1) }{ \sum_{i=1}^{r} p_{ji}\; \mu_i(k-1) } $$
300
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching Between Models During the Scenario (continue – 3)
The Interacting Multiple Model (IMM) Algorithm (continue – 1)

Mixing: The IMM algorithm starts with the initial conditions $\hat x_i(k-1|k-1)$, i = 1, …, r, from the filters M_i(k−1), assumed Gaussian distributed, and computes the mixed initial condition for the filter matched to M_j(k) according to

$$ \hat x_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{ji}(k-1)\; \hat x_i(k-1|k-1), \qquad j = 1, \dots, r $$

For the mixed Gaussian distribution we obtain the covariance of the mixed initial conditions:

$$ P_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{ji}(k-1) \left\{ P_i(k-1|k-1) + [\hat x_i(k-1|k-1) - \hat x_{0j}(k-1|k-1)]\,[\hat x_i(k-1|k-1) - \hat x_{0j}(k-1|k-1)]^T \right\} $$
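A minimal Python/NumPy sketch of the mixing step above; imm_mixing and its argument names are illustrative, with the transition matrix stored as p[j, i]:

import numpy as np

def imm_mixing(mu_prev, p, x_list, P_list):
    # p[j, i] = P{ M(k) = M_j | M(k-1) = M_i }; each column of p sums to one.
    r = len(mu_prev)
    cbar = p @ mu_prev                         # predicted mode probabilities
    x0, P0 = [], []
    for j in range(r):
        w = p[j, :] * mu_prev / cbar[j]        # mixing probabilities mu_{ji}(k-1)
        xj = sum(wi * xi for wi, xi in zip(w, x_list))
        Pj = sum(wi * (Pi + np.outer(xi - xj, xi - xj))
                 for wi, Pi, xi in zip(w, P_list, x_list))
        x0.append(xj); P0.append(Pj)
    return x0, P0, cbar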
301
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching Between Models During the Scenario (continue – 4)
The Interacting Multiple Model (IMM) Algorithm (continue – 2)

The next step, as described before, is to run the r Kalman filters (from the mixed initial conditions) and to calculate the updated mode probabilities

$$ \mu_j(k) = \frac{ \Lambda_j(k)\; \bar\mu_j(k-1) }{ \sum_{i=1}^{r} \Lambda_i(k)\; \bar\mu_i(k-1) }, \qquad \bar\mu_j(k-1) := \sum_{i=1}^{r} p_{ji}\; \mu_i(k-1) $$

with assumed prior probabilities μ_j(0), j = 1, …, r. Λ_j(k) is the likelihood function of mode j at time k, which, under the linear-Gaussian assumptions, is given by

$$ \Lambda_j(k) := p(z_k | Z_{1:k-1}, M_j) = \mathcal{N}(i_j(k);\, 0,\, S_j(k)) $$

where

$$ i_j(k) = z_k - H(M_j)\, \hat x_j(k|k-1) \qquad \text{innovation of filter } M_j \text{ at time } k $$
$$ S_j(k) = H(M_j)\, P_j(k|k-1)\, H^T(M_j) + R(M_j) \qquad \text{innovation covariance of filter } M_j \text{ at time } k $$

To obtain the overall estimate of the system state and its covariance we use the results of a Gaussian mixture with r terms:

$$ \hat x(k|k) = \sum_{j=1}^{r} \mu_j(k)\, \hat x_j(k|k) $$
$$ P(k|k) = \sum_{j=1}^{r} \mu_j(k) \left\{ P_j(k|k) + [\hat x_j(k|k) - \hat x(k|k)][\hat x_j(k|k) - \hat x(k|k)]^T \right\} $$
302
SOLO
Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching Between Models During the Scenario (continue – 5)
The Interacting Multiple Model (IMM) Algorithm (continue – 3)

IMM Estimation Algorithm Summary

• Interaction: mixing of the previous cycle's mode-conditioned state estimates and covariances, using the predefined mixing probabilities, to initialize the current cycle of each mode-conditioned filter:

$$ \hat x_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{ji}(k-1)\; \hat x_i(k-1|k-1), \qquad j = 1, \dots, r $$
$$ P_{0j}(k-1|k-1) = \sum_{i=1}^{r} \mu_{ji}(k-1) \left\{ P_i(k-1|k-1) + [\hat x_i - \hat x_{0j}][\hat x_i - \hat x_{0j}]^T \right\} $$

• Mode-conditioned filtering: calculation of the state estimate $\hat x_j(k|k)$ and covariance P_j(k|k) conditioned on a mode being in effect, as well as the mode likelihood function Λ_j(k), for the r parallel filters.

• Probability evaluation: computation of the mixing and updated mode probabilities, given μ_j(0), j = 1, …, r:

$$ \mu_j(k) = \frac{1}{c}\, \Lambda_j(k) \sum_{i=1}^{r} p_{ji}\, \mu_i(k-1), \qquad c = \sum_{j=1}^{r} \Lambda_j(k) \sum_{i=1}^{r} p_{ji}\, \mu_i(k-1) $$

• Overall state estimate and covariance: combination of the latest mode-conditioned state estimates and covariances:

$$ \hat x(k|k) = \sum_{j=1}^{r} \mu_j(k)\, \hat x_j(k|k) $$
$$ P(k|k) = \sum_{j=1}^{r} \mu_j(k) \left\{ P_j(k|k) + [\hat x_j(k|k) - \hat x(k|k)][\hat x_j(k|k) - \hat x(k|k)]^T \right\} $$
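A minimal sketch of the probability-evaluation step of the summary above, complementing the imm_mixing sketch (which supplies cbar) and the mm_combine sketch (for the final combination); imm_probability_update is an illustrative name:

import numpy as np

def imm_probability_update(cbar, innovs, S_list):
    # mu_j(k) = Lambda_j(k) * cbar_j / c, with cbar_j = sum_i p_ji mu_i(k-1).
    lam = np.array([np.exp(-0.5 * i @ np.linalg.solve(S, i))
                    / np.sqrt(np.linalg.det(2.0 * np.pi * S))
                    for i, S in zip(innovs, S_list)])
    mu = lam * cbar
    return mu / mu.sum()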
• 301. 303 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 6)
The Interacting Multiple Model (IMM) Algorithm (continue – 4)
[Block diagram of one IMM cycle: the measurement $z_k$ enters the r parallel filters $M_1,\dots,M_j,\dots,M_r$, each started from its mixed initial condition $\hat{x}_{0j}(k-1|k-1)$, $P_{0j}(k-1|k-1)$; each filter outputs $\hat{x}_j(k|k)$, $P_j(k|k)$ together with $i_j(k)$, $S_j(k)$ and $\Lambda_j(k)$; the mixing probabilities $\mu_{i|j}(k-1)$ are formed from $p_{ij}$ and $\mu_i(k-1)$; the mode probabilities are updated via $\mu_j(k)\propto\Lambda_j(k)\sum_i p_{ij}\mu_i(k-1)$; the overall output is the Gaussian-mixture combination $\hat{x}(k|k)$, $P(k|k)$.]
• 302. 304 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 7)
The Interacting Multiple Model (IMM) Algorithm (continue – 5)
[IMM Algorithm flow diagram: Interaction (Mixing) of $\hat{x}_1(k-1|k-1),\dots,\hat{x}_r(k-1|k-1)$ using $p_{ij}$ and $\mu_i(k-1)$ → mixed initial conditions $\hat{x}_{01}(k-1|k-1),\dots,\hat{x}_{0r}(k-1|k-1)$ → r parallel filters $M_k^1,\dots,M_k^r$ driven by $z_k$ → Model Probability Update ($\Lambda_1(k),\dots,\Lambda_r(k)$ → $\mu_j(k)$) → State Estimate Combination → $\hat{x}(k|k)$.]
• 303. 305 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 8)
Bar-Shalom, Y., Fortmann, T.E., “Tracking and Data Association”, Academic Press, 1988, pp. 233–237
• 304. 306 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 9)
The IMM-PDAF Algorithm
In cases where we want to detect a target maneuver while the probability of detection, $P_D$, is less than 1 and false alarms are possible, we can combine the Interacting Multiple Model (IMM) Algorithm, which handles target maneuvers, with the Probabilistic Data Association Filter (PDAF), which deals with false alarms, giving the IMM-PDAF Algorithm. This is done by replacing the Kalman Filter models of the IMM with PDAF models.
[IMM-PDAF block diagram: identical in structure to the IMM diagram – Interaction (Mixing) → r parallel PDAF modules $PDAF_k^1,\dots,PDAF_k^r$ driven by $z_k$ → Model Probability Update → State Estimate Combination → $\hat{x}(k|k)$.]
• 305. 307 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 10)
The IMM-PDAF Algorithm (continue – 1)
The steps of IMM-PDAF are as follows:
Step 1: Mixing Initial Conditions
The IMM algorithm starts with the initial conditions $\hat{x}_i(k-1|k-1)$, i=1,...,r, from the filters $M_i(k-1)$, assumed Gaussian distributed, and computes the mixed initial condition for the filter matched to $M_j(k)$ according to
$\hat{x}_{0j}(k-1|k-1) = \sum_{i=1}^{r}\mu_{i|j}(k-1)\,\hat{x}_i(k-1|k-1), \qquad j=1,\dots,r$
where the conditional probability that the target made the transition from state i to state j at scan k-1 is
$\mu_{i|j}(k-1) = P\{M(k-1)=M_i\mid M(k)=M_j, Z_{1:k-1}\} = \dfrac{p_{ij}\,\mu_i(k-1)}{\sum_{i=1}^{r}p_{ij}\,\mu_i(k-1)}$
For a mixed Gaussian distribution the covariance of the mixed initial conditions is
$P_{0j}(k-1|k-1) = \sum_{i=1}^{r}\mu_{i|j}(k-1)\,\big\{P_i(k-1|k-1) + [\hat{x}_i-\hat{x}_{0j}][\hat{x}_i-\hat{x}_{0j}]^T\big\}$
• 306. 308 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 11)
The IMM-PDAF Algorithm (continue – 2)
Step 2: Mode-Conditioned PDAF
From the r PDAF models we must obtain the likelihood functions
$\Lambda_i(k) := P\{Z_k \mid M_i(k), m_k, Z_{1:k-1}\}, \qquad i=1,\dots,r$
In the PDAF we found that for model i, summing over the association events $\theta_{ji}$ (measurement j originated from the target; j=0: none did):
$P\{Z_k \mid M_i(k), m_k, Z_{1:k-1}\} = \sum_{j=0}^{m_k} p(Z_k \mid \theta_{ji}, M_i(k), m_k, Z_{1:k-1})\;P\{\theta_{ji} \mid M_i(k), m_k, Z_{1:k-1}\}$
with
$p(Z_k \mid \theta_{ji},\cdot) = \begin{cases} V_i(k)^{-m_k+1}\,\dfrac{\exp\{-\tfrac{1}{2}\,i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)\}}{P_G\,\sqrt{|2\pi S_i(k)|}} & j=1,\dots,m_k \\ V_i(k)^{-m_k} & j=0 \end{cases}$
$P\{\theta_{ji}\mid\cdot\} = \begin{cases} \dfrac{P_D P_G/m_k}{P_D P_G + (1-P_D P_G)\,\lambda V_i(k)/m_k} & j=1,\dots,m_k \\ \dfrac{(1-P_D P_G)\,\lambda V_i(k)/m_k}{P_D P_G + (1-P_D P_G)\,\lambda V_i(k)/m_k} & j=0 \end{cases}$
where the gate volume is
$V_i(k) = \dfrac{\pi^{n_z/2}}{\Gamma(n_z/2+1)}\,\gamma^{n_z/2}\,|S_i(k)|^{1/2}$
and
$i_{ji}(k) := z_j(k) - \hat{z}_i(k|k-1), \qquad S_i(k) = H_i(k)\,P_i(k|k-1)\,H_i^T(k) + R_i(k)$
• 307. 309 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 12)
The IMM-PDAF Algorithm (continue – 3)
Step 2: Mode-Conditioned PDAF (continue – 1)
From the r PDAF models we obtain the likelihood functions $\Lambda_i(k)$, i=1,...,r, for each model i. With the diffuse (nonparametric) clutter prior this reduces to
$\Lambda_i(k) := P\{Z_k\mid M_i(k), m_k, Z_{1:k-1}\} = \dfrac{1-P_D P_G}{V_i(k)^{m_k}} + \dfrac{P_D}{V_i(k)^{m_k-1}\,m_k}\sum_{j=1}^{m_k}\dfrac{\exp\{-\tfrac{1}{2}\,i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)\}}{\sqrt{|2\pi S_i(k)|}} = \dfrac{P_D}{V_i(k)^{m_k-1}\,m_k\,\sqrt{|2\pi S_i(k)|}}\Big[b_i + \sum_{j=1}^{m_k} e_{ji}(k)\Big]$
where
$V_i(k) = \dfrac{\pi^{n_z/2}}{\Gamma(n_z/2+1)}\,\gamma^{n_z/2}\,|S_i(k)|^{1/2}, \qquad i_{ji}(k) := z_j(k)-\hat{z}_i(k|k-1), \qquad S_i(k)=H_i(k)\,P_i(k|k-1)\,H_i^T(k)+R_i(k)$
$e_{ji}(k) := \exp\{-\tfrac{1}{2}\,i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)\}, \qquad b_i := \dfrac{1-P_D P_G}{P_D}\,\dfrac{m_k}{V_i(k)}\,\sqrt{|2\pi S_i(k)|}$
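A hedged sketch of this mode-conditioned likelihood (nonparametric clutter form above); the function name and arguments are illustrative, not from the slides:

import numpy as np

def pdaf_mode_likelihood(z_list, z_pred, S, P_D, P_G, V):
    """z_list: validated measurements for mode i; z_pred: predicted
    measurement z_i(k|k-1); S: innovation covariance S_i(k); V: gate volume."""
    m_k = len(z_list)
    if m_k == 0:
        return 1.0 - P_D * P_G                 # no validated measurement
    norm = 1.0 / np.sqrt(np.linalg.det(2 * np.pi * S))
    e = [np.exp(-0.5 * (z - z_pred) @ np.linalg.solve(S, z - z_pred))
         for z in z_list]
    # Lambda_i = V^{-m}(1 - P_D P_G) + V^{-m+1}(P_D/m) sum_j N(i_ji; 0, S_i)
    return (V**(-m_k) * (1 - P_D * P_G)
            + V**(-m_k + 1) * (P_D / m_k) * norm * sum(e))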
• 308. 310 SOLO Target Estimators
The Hybrid Model (Multiple Model) Approach
2. Switching between models during the scenario (continue – 13)
The IMM-PDAF Algorithm (continue – 4)
The steps of IMM-PDAF are as follows:
Step 3: Probability Evaluation
Computation of the mixing and the updated mode probabilities $\mu_j(k)$, given $\mu_j(0)$ and $p_{lj}$, l,j=1,...,r:
$\mu_j(k) = \dfrac{\Lambda_j(k)\,\sum_{l=1}^{r}p_{lj}\,\mu_l(k-1)}{\sum_{i=1}^{r}\Lambda_i(k)\,\sum_{l=1}^{r}p_{li}\,\mu_l(k-1)}$
Step 4: Overall State Estimate and Covariance
Combination of the latest mode-conditioned state estimates and covariances:
$\hat{x}(k|k) = \sum_{j=1}^{r}\mu_j(k)\,\hat{x}_j(k|k)$
$P(k|k) = \sum_{j=1}^{r}\mu_j(k)\,\big\{P_j(k|k)+[\hat{x}_j(k|k)-\hat{x}(k|k)][\hat{x}_j(k|k)-\hat{x}(k|k)]^T\big\}$
• 309. 311 Elements of a Basic MTT System
SOLO Multi-Target Tracking (MTT) Systems
The task of tracking n targets can require substantially more computational resources than n times the resources for tracking a single target, because it is difficult to establish the correspondence between observations and targets (Data Association).
Uncertainties in tracking targets:
• Uncertainties associated with the measurements (target origin).
• Inaccuracies due to the sensor performance (resolution, noise, ...).
Example (two targets, two measurements) – hypotheses:
1. Measurement 1 from target 1 & Measurement 2 from target 2
2. Measurement 1 from target 2 & Measurement 2 from target 1
3. None of the above (False Alarm)
• 310. 312 Elements of a Basic MTT System
SOLO Multi-Target Tracking (MTT) Systems
[Figure: three feasible association hypotheses (Association Hypothesis 1, 2 and 3) for the sequences of Measurement 1 and Measurement 2 over scans t1, t2, t3 – each hypothesis connects the measurements to the tracks in a different way.]
• 311. 313 Elements of a Basic MTT System
SOLO
Alignment: Referencing of sensor data to a common time and spatial origin.
Association: Using a metric to compare tracks and data reports from different sensors to determine candidates for the fusion process.
Correlation: Processing of the tracks and reports resulting from association to determine if they belong to a common object, and thus aid in detecting, classifying and tracking the objects of interest.
Estimation: Predicting an object’s future position by updating the state vector and error covariance matrix using the results of the correlation process.
Classification: Assessing the tracks and object discrimination data to determine target type, lethality, and threat priority.
Cueing: Feedback of threshold, integration time, and other data-processing parameters, or information about areas over which to conduct a more detailed search, based on the results of the fusion process.
• 312. 314 SOLO Gating and Data Association
[Figure (after Blackman): basic tracking loop – Input Data → Sensor Data Processing and Measurement Formation → Observation-to-Track Association → Track Maintenance (Initialization, Confirmation and Deletion) → Filtering and Prediction → Gating Computations.]
S.S. Blackman, “Multiple-Target Tracking with Radar Applications”, Artech House, 1986
S.S. Blackman, R. Popoli, “Design and Analysis of Modern Tracking Systems”, Artech House, 1999
Joint Probabilistic Data Association Filter (JPDAF)
The JPDAF method is identical to the PDA except that the association probabilities β are computed using all observations and all tracks. In the PDA we dealt with only one target (track); the JPDAF deals with a known number of targets (multiple targets). Both PDA and JPDAF are of target-oriented type, i.e., the probability that a measurement belongs to an established target (track) is evaluated.
• 313. 315 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 1)
Assumptions of JPDAF:
• There are several targets to be tracked in the presence of false measurements.
• The number of targets r is known.
• The track of each target has been initialized.
• The state equations of the targets are not necessarily the same.
• The validation regions of these targets can intersect and have common measurements.
• A target can give rise to at most one measurement – no multipath.
• The detection of a target occurs independently over time and from other targets, according to a known probability.
• A measurement could have originated from at most one target (or none) – no unresolved measurements are considered here.
• The conditional pdf of each target’s state given the past measurements is assumed Gaussian (a quasi-sufficient statistic that summarizes the past) and independent across targets, with
$x_j(k-1) \sim \mathcal{N}\big(\hat{x}_j(k-1|k-1),\,P_j(k-1|k-1)\big), \qquad j=1,\dots,r$
available from the previous cycle of the filter.
• With the past summarized by an approximate sufficient statistic, the association probabilities are computed (only for the latest measurements) jointly across the measurements and the targets.
• 314. 316 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 2)
• At the current time k we define the set of validated measurements
$Z(k) = \{z_i(k)\}_{i=1}^{m_k}$
Example: From the figure we can see 3 measurements ($m_k = 3$): $Z(k) = \{z_1(k), z_2(k), z_3(k)\}$.
• We also have r predefined targets (tracks), i=1,...,r. Example: from the figure we can see 2 tracks (r = 2).
• From the validated measurements and their positions relative to the track gates we define the Validation Matrix Ω, which consists of binary elements (0 or 1) indicating whether measurement j has been validated for track i (is inside gate i). Index i = 0 (no track) indicates a false alarm (clutter) origin, which is possible for each measurement. Example (rows: measurements 1–3; columns: i = 0, 1, 2):
Ω = [ω_ji] =
  [1 1 0]   measurement 1 can be FA or due to track 1, not track 2
  [1 1 1]   measurement 2 can be FA or due to track 1 or track 2
  [1 0 1]   measurement 3 can be FA or due to track 2, not track 1
• 315. 317 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 3)
• Define the Joint Association Events θ (Hypotheses) using the Validation Matrix. Example (rows: measurements 1–3; columns: i = 0, 1, 2):
Ω = [ω_ji] =
  [1 1 0]
  [1 1 1]
  [1 1 1]
These are all the hypotheses (exhaustive) defined by this Validation Matrix; entries give the measurement assigned to each track (0 = none):
Hyp | Track 1 | Track 2 | Comments
 1  |   0     |   0     | All measurements are False Alarms
 2  |   1     |   0     | Measurement 1 due to target 1, others are FA
 3  |   2     |   0     | Measurement 2 due to target 1, others are FA
 4  |   3     |   0     | Measurement 3 due to target 1, others are FA
 5  |   0     |   2     | Measurement 2 due to target 2, others are FA
 6  |   1     |   2     | Measurement 1 due to target 1, 2 due to target 2
 7  |   3     |   2     | Measurement 3 due to target 1, 2 due to target 2
 8  |   0     |   3     | Measurement 3 due to target 2, others are FA
 9  |   1     |   3     | Measurement 1 due to target 1, 3 due to target 2
10  |   2     |   3     | Measurement 2 due to target 1, 3 due to target 2
• 316. 318 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 4)
The same hypotheses, arranged by observation (rows) versus hypothesis number (columns):
Obs | H1  H2  H3  H4  H5  H6  H7  H8  H9  H10
O1  | 0   T1  0   0   0   T1  0   0   T1  0
O2  | 0   0   T1  0   T2  T2  T2  0   0   T1
O3  | 0   0   0   T1  0   0   T1  T2  T2  T2
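A hedged sketch of how such an exhaustive hypothesis set can be enumerated from a validation matrix (measurement-oriented events; names are illustrative, not from the slides):

from itertools import product

def enumerate_hypotheses(omega):
    """omega[j][t] = 1 if measurement j is validated for track t (t = 0: FA).
    Returns assignments a with a[j] = track index of measurement j, subject to
    each track receiving at most one measurement (the FA slot 0 is unlimited)."""
    options = [[t for t, ok in enumerate(row) if ok] for row in omega]
    hyps = []
    for assign in product(*options):
        used = [t for t in assign if t != 0]
        if len(used) == len(set(used)):        # one measurement per track
            hyps.append(assign)
    return hyps

# Validation matrix of the example above: yields the 10 feasible events
omega = [[1, 1, 0],
         [1, 1, 1],
         [1, 1, 1]]
print(enumerate_hypotheses(omega))             # 10 joint association events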
• 317. 319 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 5)
We have n stored tracks with predicted measurements and innovation covariances at scan k given by
$\hat{z}_j(k|k-1),\ S_j(k), \qquad j=1,\dots,n$
At scan k we have m sensor reports (no more than one report per target); the set of all sensor reports on scan k is $Z_k = \{z_1,\dots,z_m\}$.
H – a particular hypothesis (from a complete set S of hypotheses) connecting r(H) tracks to r measurements. We want to compute
$P\{H \mid Z_k\} = \dfrac{P\{Z_k \mid H\}\,P\{H\}}{\sum_{H\in S} P\{Z_k \mid H\}\,P\{H\}} = \dfrac{1}{c}\,P\{Z_k \mid H\}\,P\{H\}$
• 318. 320 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 6)
False Alarm Models
We have several tracks defined by the predicted measurements and innovation covariances
$\hat{z}_j(k|k-1),\ S_j(k), \qquad j=1,\dots,n$
Not all the measurements are from a real target; some are false alarms. The common mathematical model for such false measurements is that they are:
• uniformly spatially distributed
• independent across time
• the residual clutter (the constant clutter, if any, is not considered)
The probability of m false alarms in the search volume V, in terms of their spatial density λ, is given by a Poisson distribution:
$\mu_{FA}(m) = e^{-\lambda V}\,\dfrac{(\lambda V)^m}{m!}$
Because of the uniform spatial distribution in the search volume, we have
$P\{z_i \mid \text{False Alarm or New Target}\} = \dfrac{1}{V}$
We can use different probability densities for false alarms ($\lambda_{FA}$) and for new targets ($\lambda_{NT}$).
• 319. 321 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 7)
H – a particular hypothesis (from a complete set S of hypotheses) connecting r(H) tracks to r measurements and assuming m-r false alarms or new targets; $Z_k = \{z_1,\dots,z_m\}$.
$P\{Z_k \mid H\}$ – probability of the measurements given that hypothesis H is true. Since the measurements are independent,
$P\{Z_k \mid H\} = \prod_{j=1}^{m_k} p(z_j \mid H)$
where
$p(z_j \mid H) = \begin{cases} \dfrac{1}{V} & \text{if measurement j is a False Alarm or New Target}\\[1ex] \mathcal{N}(z_j;\hat{z}_i,S_i) = \dfrac{\exp\{-\tfrac{1}{2}(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)\}}{\sqrt{|2\pi S_i|}} & \text{if measurement j is connected to track i}\end{cases}$
so that
$P\{Z_k \mid H\} = \dfrac{1}{V^{m_k-r}}\,\prod_{l=1}^{r}\dfrac{\exp\{-\tfrac{1}{2}(z_{j_l}-\hat{z}_{i_l})^T S_{i_l}^{-1}(z_{j_l}-\hat{z}_{i_l})\}}{\sqrt{|2\pi S_{i_l}|}}$
• 320. 322 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 8)
P{H} – probability of hypothesis H connecting tracks $i_1,\dots,i_r$ to measurements $j_1,\dots,j_r$ out of $m_k$ sensor reports:
$P\{H\} = P\{j_1,\dots,j_r \mid i_1,\dots,i_r\}\;P\{i_1,\dots,i_r\}\;\mu_{FA}(m_k-r)\;P\{m_k\}$
where:
$P\{j_1,\dots,j_r \mid i_1,\dots,i_r\} = \dfrac{1}{m_k(m_k-1)\cdots(m_k-r+1)} = \dfrac{(m_k-r)!}{m_k!}$ – probability of connecting tracks $i_1,\dots,i_r$ to measurements $j_1,\dots,j_r$
$P\{i_1,\dots,i_r\} = \prod_{i\in\{i_1,\dots,i_r\}} P_D^i \prod_{i\notin\{i_1,\dots,i_r\}}\big(1-P_D^i\big)$ – probability of detecting only targets $i_1,\dots,i_r$
$\mu_{FA}(m_k-r) = e^{-\lambda V}\,\dfrac{(\lambda V)^{m_k-r}}{(m_k-r)!}$ – Poisson probability, with density λ over search volume V, of the $(m_k-r)$ false-alarm or new-target reports
$P\{m_k\}$ – probability of exactly $m_k$ reports
• 321. 323 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 9)
Combining the two factors:
$P\{H\mid Z_k\} = \dfrac{1}{c}\,P\{Z_k\mid H\}\,P\{H\} = \dfrac{1}{c}\,\dfrac{1}{V^{m_k-r}}\prod_{l=1}^{r}\mathcal{N}(z_{j_l};\hat{z}_{i_l},S_{i_l})\;\dfrac{(m_k-r)!}{m_k!}\prod_{det}P_D^i\prod_{not\,det}\big(1-P_D^i\big)\;e^{-\lambda V}\dfrac{(\lambda V)^{m_k-r}}{(m_k-r)!}\;P\{m_k\}$
Absorbing the factors common to all hypotheses, $e^{-\lambda V}P\{m_k\}/m_k!$, into the normalization constant $c_1$:
$P\{H\mid Z_k\} = \dfrac{\lambda^{m_k-r}}{c_1}\,\prod_{l=1}^{r}\dfrac{\exp\{-\tfrac{1}{2}(z_{j_l}-\hat{z}_{i_l})^T S_{i_l}^{-1}(z_{j_l}-\hat{z}_{i_l})\}}{\sqrt{|2\pi S_{i_l}|}}\;\prod_{detecting}P_D^i\prod_{not\,detecting}\big(1-P_D^i\big)$
• 322. 324 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 10)
Define (i – track index, j – measurement index):
$g_{ij} := \dfrac{\exp\{-\tfrac{1}{2}(z_j-\hat{z}_i)^T S_i^{-1}(z_j-\hat{z}_i)\}}{\sqrt{|2\pi S_i|}}$
Example: number of observations $m_k = 3$, two tracks, equal $P_D$. The probabilities of the hypotheses are:
Hyp | Track 1 | Track 2 | r | m_k-r | c′·P(H|Z_k)
 1  |   0     |   0     | 0 |  3    | λ³(1-P_D)²
 2  |   1     |   0     | 1 |  2    | λ² g₁₁ P_D (1-P_D)
 3  |   2     |   0     | 1 |  2    | λ² g₁₂ P_D (1-P_D)
 4  |   3     |   0     | 1 |  2    | λ² g₁₃ P_D (1-P_D)
 5  |   0     |   2     | 1 |  2    | λ² g₂₂ P_D (1-P_D)
 6  |   1     |   2     | 2 |  1    | λ g₁₁ g₂₂ P_D²
 7  |   3     |   2     | 2 |  1    | λ g₁₃ g₂₂ P_D²
 8  |   0     |   3     | 1 |  2    | λ² g₂₃ P_D (1-P_D)
 9  |   1     |   3     | 2 |  1    | λ g₁₁ g₂₃ P_D²
10  |   2     |   3     | 2 |  1    | λ g₁₂ g₂₃ P_D²
c′ is defined by requiring $P(H_1|Z_k)+\dots+P(H_{10}|Z_k) = 1$.
• 323. 325 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 11)
For each track i and measurement j (event $\theta_{ij}$) compute the association probability $\beta_{ij}$. Since the hypotheses H are exhaustive and exclusive we can apply the Total Probability Theorem:
$\beta_{ij} := P\{\theta_{ij}\mid Z_k\} = \sum_l P\{H_l\mid Z_k\}\,\hat{\omega}_{ij}(H_l), \qquad \hat{\omega}_{ij}(H_l) = \begin{cases}1 & \theta_{ij}\in H_l\\ 0 & \theta_{ij}\notin H_l\end{cases}$
Example (continued), i – track index, j – measurement index (j = 0: no measurement):
Track 1:
$\beta_{10} = P(H_1|Z_k)+P(H_5|Z_k)+P(H_8|Z_k)$
$\beta_{11} = P(H_2|Z_k)+P(H_6|Z_k)+P(H_9|Z_k)$
$\beta_{12} = P(H_3|Z_k)+P(H_{10}|Z_k)$
$\beta_{13} = P(H_4|Z_k)+P(H_7|Z_k)$
Track 2:
$\beta_{20} = P(H_1|Z_k)+P(H_2|Z_k)+P(H_3|Z_k)+P(H_4|Z_k), \qquad \beta_{21}=0$
$\beta_{22} = P(H_5|Z_k)+P(H_6|Z_k)+P(H_7|Z_k)$
$\beta_{23} = P(H_8|Z_k)+P(H_9|Z_k)+P(H_{10}|Z_k)$
Since $\sum_l P(H_l|Z_k) = 1$, we have $\sum_j \beta_{ij} = 1$ for each track i.
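A hedged sketch of this marginalization, pairing with the enumeration sketch above (hyps as measurement-to-track assignments, p_hyp their normalized probabilities; names are illustrative):

import numpy as np

def association_probabilities(hyps, p_hyp, n_tracks, n_meas):
    """Returns beta[i, j]: probability that track i (rows 1..n_tracks) is
    associated with measurement j (columns 1..n_meas; column 0: none)."""
    beta = np.zeros((n_tracks + 1, n_meas + 1))        # row 0 unused
    for assign, p in zip(hyps, p_hyp):                 # assign[j] = track of meas j+1
        for t in range(1, n_tracks + 1):
            js = [j + 1 for j, tt in enumerate(assign) if tt == t]
            beta[t, js[0] if js else 0] += p
    return beta                                        # each row sums to 1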
• 324. 326 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 12)
Summary:
• Calculation of innovation and measurement validation, for each measurement versus each track: for i=1,...,r and each measurement $z_j(k)$, j=1,2,...,$m_k$:
$i_{ji}(k) = z_j(k) - \hat{z}_i(k|k-1)$
$d_{ji}^2 = i_{ji}^T(k)\,S_i^{-1}(k)\,i_{ji}(k)$, validated if $d_{ji}^2 \le \gamma(P_G, n_z)$
• Definition of all hypotheses (exhaustive & exclusive), e.g. the observation/hypothesis table above.
• Computation of the hypotheses probabilities:
$P\{H\mid Z_k\} = \dfrac{\lambda^{m_k-r}}{c'}\prod_{l=1}^{r}\dfrac{\exp\{-\tfrac{1}{2}(z_{j_l}-\hat{z}_{i_l})^T S_{i_l}^{-1}(z_{j_l}-\hat{z}_{i_l})\}}{\sqrt{|2\pi S_{i_l}|}}\prod_{det}P_D^i\prod_{not\,det}\big(1-P_D^i\big)$, with $\sum_l P\{H_l\mid Z_k\}=1$
• 325. 327 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 13)
Summary (continue – 1):
• For each track i and measurement j (event $\theta_{ij}$) compute the association probability
$\beta_{ij} := P\{\theta_{ij}\mid Z_k\} = \sum_l P\{H_l\mid Z_k\}\,\hat{\omega}_{ij}(H_l)$
• Compute the combined innovation for each track:
$i_i(k) = \sum_{j\in\text{track } i}\beta_{ij}\,i_{ji}(k), \qquad i=1,\dots,r$
• Covariance prediction for each track:
$P_i(k|k-1) = F_i(k-1)\,P_i(k-1|k-1)\,F_i^T(k-1) + Q_i(k-1)$
• Innovation covariance for each track:
$S_i(k) = H_i(k)\,P_i(k|k-1)\,H_i^T(k) + R_i(k)$
• Filter gain for each track:
$K_i(k) = P_i(k|k-1)\,H_i^T(k)\,S_i^{-1}(k)$
• Update of the state estimate for each track:
$\hat{x}_i(k|k) = \hat{x}_i(k|k-1) + K_i(k)\,i_i(k)$
• Update of the state covariance for each track:
$P_i(k|k) = \beta_{i0}\,P_i(k|k-1) + (1-\beta_{i0})\,[I-K_i(k)H_i(k)]\,P_i(k|k-1) + dP_i(k)$
$dP_i(k) = K_i(k)\Big[\sum_{j\in\text{track } i}\beta_{ij}\,i_{ji}(k)\,i_{ji}^T(k) - i_i(k)\,i_i^T(k)\Big]K_i^T(k)$
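A hedged sketch of the per-track update in this summary, given the association probabilities and validated innovations (names are illustrative, not from the slides):

import numpy as np

def jpdaf_track_update(x_pred, P_pred, H, R, innovations, beta_i):
    """x_pred, P_pred: predicted state/covariance of track i; innovations:
    list of i_ji for its validated measurements; beta_i: [beta_i0, beta_i1, ...]."""
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)            # filter gain
    nz = H.shape[0]
    nu = np.zeros(nz)                              # combined innovation
    spread = np.zeros((nz, nz))
    for b, v in zip(beta_i[1:], innovations):
        nu += b * v
        spread += b * np.outer(v, v)
    x_upd = x_pred + K @ nu
    P_star = P_pred - K @ S @ K.T                  # standard KF update
    dP = K @ (spread - np.outer(nu, nu)) @ K.T     # origin-uncertainty term
    P_upd = beta_i[0] * P_pred + (1 - beta_i[0]) * P_star + dP
    return x_upd, P_upd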
• 326. 328 SOLO Gating and Data Association
Joint Probabilistic Data Association Filter (JPDAF) (continue – 14)
[One cycle of the JPDAF for track i: predicted state $\hat{x}_i(k|k-1) = F_i(k-1)\hat{x}_i(k-1|k-1)$ with covariance $P_i(k|k-1) = F_i(k-1)P_i(k-1|k-1)F_i^T(k-1)+Q_i(k-1)$ → predicted measurement $\hat{z}_i(k|k-1) = H_i(k)\hat{x}_i(k|k-1)$ → innovations $i_{ji}(k)$ and measurement validation against γ → innovation covariance $S_i(k) = H_i(k)P_i(k|k-1)H_i^T(k)+R_i(k)$ and gain $K_i(k) = P_i(k|k-1)H_i^T(k)S_i^{-1}(k)$ → definition of all hypotheses H and their probabilities → association probabilities $\beta_{ij}$ → combined innovation $i_i(k)$ → state update $\hat{x}_i(k|k) = \hat{x}_i(k|k-1)+K_i(k)i_i(k)$ → covariance update including the effect of measurement-origin uncertainty, $dP_i(k)$.]
• 327. 329 SOLO Multi Hypothesis Tracking (MHT)
Assumptions of MHT
• There are several targets to be tracked in the presence of false measurements.
• The number of targets r is unknown.
• The track of each target has to be initialized.
• The state equations of the targets are the same.
• The validation regions of these targets can intersect and have common measurements.
• A target can give rise to at most one measurement – no multipath.
• The detection of a target occurs independently over time and from other targets, according to a known probability.
• A measurement could have originated from at most one target (or none) – no unresolved measurements are considered here.
• The conditional pdf of each target’s state given the past measurements is assumed Gaussian (a quasi-sufficient statistic that summarizes the past) and independent across targets, with $x_j(k-1)\sim\mathcal{N}\big(\hat{x}_j(k-1|k-1),P_j(k-1|k-1)\big)$, j=1,...,r, available from the previous cycle of the filter.
• The origin of each sequence of measurements is considered.
• At each sampling time any measurement can originate from:
  - an established track
  - a new target (with a Poisson probability, intensity $\lambda_{NT}$)
  - a false alarm (with a Poisson probability, intensity $\lambda_{FA}$)
• 328. 330 SOLO Multi Hypothesis Tracking (MHT)
MHT Algorithm Steps
• The hypotheses of the current time are obtained from the set of hypotheses at the previous time, augmented with all the feasible associations of the present measurements (extensive and exhaustive).
• The probability of each hypothesis is evaluated assuming:
  - measurements associated with a track are Gaussian distributed around the predicted location of the corresponding track’s measurement;
  - false measurements are uniformly distributed in the surveillance region and appear according to a fixed-rate ($\lambda_{FA}$) Poisson process;
  - new targets are uniformly distributed in the surveillance region (or according to some other pdf) and appear according to a fixed-rate ($\lambda_{NT}$) Poisson process.
• The state estimation for each hypothesized track is obtained from a standard filter.
• The selection of the most probable hypothesis amounts to an exhaustive search over the set of all feasible hypotheses.
• An elaborate hypothesis management is needed.
• 329. 331 SOLO Multi Hypothesis Tracking (MHT)
At scan k we have m sensor reports (no more than one report per target); the set of all sensor reports on scan k is $Z_k = \{z_1,\dots,z_m\}$. $H_l$ – a particular hypothesis (from a complete set S of hypotheses) connecting r(H) tracks to r measurements.
[MHT hypothesis tree example: k=0 – root; k=1 (plots 1, 2) – branches (1 1), (1 2); k=2 (plots 3, 4) – branches (1 1 3), (1 1 4), (1 2 3), (1 2 4); k=3 (plots 5, 6) – leaves (1 1 3 5), (1 1 3 6), (1 1 4 5), (1 1 4 6), (1 2 3 5), (1 2 3 6), (1 2 4 5), (1 2 4 6). Each path from the root to a leaf is one association hypothesis.]
• 330. 332 SOLO Multi Hypothesis Tracking (MHT)
Donald B. Reid, PhD A&A, Stanford U., 1972
[Flow diagram of the multi-target tracking algorithm (Reid, 1979): Initialization (a priori targets) → Receive new data set → Perform target time update → Form new clusters, identifying which targets and measurements are associated with each cluster → Hypotheses generation: form a new set of hypotheses, calculate their probabilities, and perform a target measurement update for each hypothesis of each cluster → Simplify the hypothesis matrix of each cluster: transfer tentative targets with unity probability to the confirmed-target category; create new clusters for confirmed targets no longer in the hypothesis matrix (Mash) → Reduce the number of hypotheses by elimination or combination → Return to the next data set.]
• 331. 333 SOLO Multi Hypothesis Tracking (MHT)
MHT Implementation Issues
• Need to manage hypotheses to keep their number reasonably small.
• Limit the history (the depth of hypotheses is the N last scans).
• Combining and pruning of hypotheses:
  - Retain only hypotheses with probability above a certain threshold.
  - Combine hypotheses with the last M associations in common.
• Clustering:
  - A cluster is a set of tracks with common measurements and association hypotheses; hypothesis sets from different clusters are evaluated separately.
• 332. 334 SOLO Multi Hypothesis Tracking (MHT)
Accumulated measurements (plots) to time k: $Z_{1:k} = \{Z_{1:k-1}, Z_k\}$, with $Z_k = \{z_1,\dots,z_{m_k}\}$ the set of all sensor reports on scan k.
Hypothesis sequences: $S_k = \{H_{k1},\dots,H_{kL}\}$
$p(x_k\mid Z_{1:k}) = \sum_{l=1}^{L} p(H_{kl}\mid Z_{1:k})\;p(x_k\mid H_{kl}, Z_{1:k})$
– the sum over all feasible assignment histories (lots of them) of the probability of $H_{kl}$ given all current and past data, times the pdf of the target state for that particular hypothesis $H_{kl}$.
We found:
$p(H_{kl}\mid Z_{1:k}) = \dfrac{\lambda^{m_k-r}}{c_1}\prod_{l=1}^{r}\dfrac{\exp\{-\tfrac{1}{2}(z_{j_l}-\hat{z}_{i_l})^T S_{i_l}^{-1}(z_{j_l}-\hat{z}_{i_l})\}}{\sqrt{|2\pi S_{i_l}|}}\prod_{det}P_D^i\prod_{not\,det}\big(1-P_D^i\big)$
• 333. 335 SOLO Sensors Fusion
Multi-Sensor Estimate
Consider a system comprised of two sensors, each making a single measurement, $z_i$ (i=1,2), of a constant but unknown quantity x, in the presence of random, dependent, unbiased measurement errors $v_i$ (i=1,2). We want to design an optimal estimator that combines the two measurements.
$z_1 = x + v_1, \qquad E\{v_1\}=0, \qquad E\{v_1^2\}=\sigma_1^2$
$z_2 = x + v_2, \qquad E\{v_2\}=0, \qquad E\{v_2^2\}=\sigma_2^2$
$E\{v_1v_2\} = \rho\,\sigma_1\sigma_2, \qquad |\rho|\le 1$
In the absence of any other information, we choose an estimator that combines the two measurements linearly:
$\hat{x} = k_1 z_1 + k_2 z_2$
where $k_1$ and $k_2$ must be found such that:
1. The estimator is unbiased: $E\{\tilde{x}\} = E\{\hat{x}-x\} = 0$. Since
$E\{\hat{x}-x\} = E\{k_1(x+v_1)+k_2(x+v_2)-x\} = (k_1+k_2-1)\,x$
this requires $k_1 + k_2 = 1$.
• 334. 336 SOLO Sensors Fusion
Multi-Sensor Estimate (continue – 1)
2. Minimize the mean-square estimation error. With $k_2 = 1-k_1$:
$E\{\tilde{x}^2\} = E\{[k_1v_1+(1-k_1)v_2]^2\} = k_1^2\sigma_1^2 + (1-k_1)^2\sigma_2^2 + 2k_1(1-k_1)\,\rho\sigma_1\sigma_2$
Setting $\partial E\{\tilde{x}^2\}/\partial k_1 = 2k_1\sigma_1^2 - 2(1-k_1)\sigma_2^2 + 2(1-2k_1)\,\rho\sigma_1\sigma_2 = 0$ gives
$k_1 = \dfrac{\sigma_2^2-\rho\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}, \qquad k_2 = \dfrac{\sigma_1^2-\rho\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$
$\min_{k_1,k_2} E\{\tilde{x}^2\} = \dfrac{\sigma_1^2\sigma_2^2\,(1-\rho^2)}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$ – reduction of the error covariance.
• 335. 337 SOLO Sensors Fusion
Multi-Sensor Estimate (continue – 2)
Estimator:
$\hat{x} = \dfrac{(\sigma_2^2-\rho\sigma_1\sigma_2)\,z_1 + (\sigma_1^2-\rho\sigma_1\sigma_2)\,z_2}{\sigma_1^2+\sigma_2^2-2\rho\sigma_1\sigma_2}$
1. Uncorrelated measurement noises (ρ = 0):
$\hat{x} = \dfrac{\sigma_2^2\,z_1+\sigma_1^2\,z_2}{\sigma_1^2+\sigma_2^2}, \qquad \min E\{\tilde{x}^2\} = \dfrac{\sigma_1^2\sigma_2^2}{\sigma_1^2+\sigma_2^2}$
2. Fully correlated measurement noises (ρ = ±1):
$\hat{x} = \dfrac{\sigma_2\,z_1 \mp \sigma_1\,z_2}{\sigma_2 \mp \sigma_1}, \qquad \min E\{\tilde{x}^2\} = 0$
3. Perfect sensor ($\sigma_1 = 0$):
$\hat{x} = z_1, \qquad \min E\{\tilde{x}^2\} = 0$
The estimator uses the perfect sensor, as expected.
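A small numeric check of the two-sensor fused estimate above (the measurement values are illustrative):

import numpy as np

def fuse_two(z1, z2, s1, s2, rho):
    den = s1**2 + s2**2 - 2 * rho * s1 * s2
    k1 = (s2**2 - rho * s1 * s2) / den
    x_hat = k1 * z1 + (1 - k1) * z2            # fused estimate
    mse = s1**2 * s2**2 * (1 - rho**2) / den   # minimum mean-square error
    return x_hat, mse

x_hat, mse = fuse_two(z1=10.2, z2=9.7, s1=1.0, s2=2.0, rho=0.0)
print(x_hat, mse)   # 10.1, 0.8 -- below min(s1^2, s2^2) = 1.0, as expected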
• 336. 338 SOLO Sensors Fusion
Multi-Sensor Estimate (continue – 3)
Consider a system comprised of n sensors, each making a single measurement $z_i$ (i=1,2,...,n) of a constant but unknown quantity x, in the presence of random, dependent, unbiased measurement errors $v_i$ (i=1,2,...,n). We want to design an optimal estimator that combines the n measurements.
$z_i = x + v_i, \qquad E\{v_i\}=0, \qquad i=1,2,\dots,n$
In vector form:
$Z = U\,x + V, \qquad Z = [z_1,\dots,z_n]^T, \quad U = [1,\dots,1]^T, \quad V = [v_1,\dots,v_n]^T$
$E\{V\} = 0, \qquad E\{VV^T\} = R = \begin{bmatrix}\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 & \cdots & \rho_{1n}\sigma_1\sigma_n\\ \rho_{12}\sigma_1\sigma_2 & \sigma_2^2 & \cdots & \rho_{2n}\sigma_2\sigma_n\\ \vdots & & \ddots & \vdots\\ \rho_{1n}\sigma_1\sigma_n & \cdots & & \sigma_n^2\end{bmatrix}$
Estimator:
$\hat{x} = K^T Z = k_1 z_1 + k_2 z_2 + \dots + k_n z_n, \qquad K = [k_1,\dots,k_n]^T$
• 337. 339 SOLO Sensors Fusion
Multi-Sensor Estimate (continue – 4)
Estimator: $\hat{x} = K^T Z$
1. The estimator is unbiased:
$E\{\tilde{x}\} = E\{K^T(Ux+V)-x\} = (K^TU-1)\,x + K^T E\{V\} = 0 \;\Rightarrow\; K^TU = 1$
2. Minimize the mean-square estimation error:
$\min_{K^TU=1} E\{\tilde{x}^2\} = \min_{K^TU=1} E\{K^TVV^TK\} = \min_{K^TU=1} K^TRK$
Use a Lagrange multiplier λ (to be determined) to include the constraint:
$J = K^TRK + \lambda\,(1-K^TU)$
$\dfrac{\partial J}{\partial K} = 2RK - \lambda U = 0 \;\Rightarrow\; K = \tfrac{\lambda}{2}R^{-1}U$; the constraint $K^TU=1$ gives $\tfrac{\lambda}{2} = (U^TR^{-1}U)^{-1}$, so
$K = \dfrac{R^{-1}U}{U^TR^{-1}U}, \qquad \min_{K^TU=1} E\{\tilde{x}^2\} = \big(U^TR^{-1}U\big)^{-1}$
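A hedged sketch of the n-sensor optimal weights $K = R^{-1}U/(U^TR^{-1}U)$ derived above (the covariance values are illustrative):

import numpy as np

def blue_weights(R):
    """Minimum-variance unbiased weights for n correlated sensors."""
    n = R.shape[0]
    U = np.ones(n)
    w = np.linalg.solve(R, U)            # R^{-1} U
    denom = U @ w                        # U^T R^{-1} U
    return w / denom, 1.0 / denom        # weights, minimum MSE

R = np.array([[1.0, 0.3],
              [0.3, 4.0]])               # illustrative noise covariance
K, mse = blue_weights(R)
print(K, K.sum(), mse)                   # weights sum to 1 (unbiasedness)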
• 338. 340 SOLO Multi Sensors Data Fusion
Multi Sensors Systems Architectures
[Three architecture diagrams:
• Sensor-level fusion: each of the N sensors (transducer + feature extraction, target classification, identification and tracking) produces target reports; a fusion processor associates, correlates, tracks, estimates, classifies and cues, and feeds cues back to the sensors.
• Central-level fusion: each sensor sends minimally processed data to a central-level fusion processor that associates, correlates, tracks, estimates, classifies and cues.
• Hybrid fusion: per-sensor processing chains and a central-level path coexist; a sensor-level fusion processor combines their outputs.]
• 339. 341 SOLO Multi Sensors Data Fusion
Multi Sensors Systems Architectures
Centralized versus Distributed Architecture
Centralized – Advantages:
• Simple and direct logic
• Direct and simple misalignment correction
• Accurate estimation & data association
Centralized – Disadvantages:
• High data transfer
• More vulnerable to ECM and bad sensor data
Distributed – Advantages:
• Moderate data transfer
• Less vulnerable to ECM and bad sensor data
Distributed – Disadvantages:
• Requires additional logic for track-to-track association and fusion
• Susceptible to data-transfer latency
• Complex misalignment correction
• Less accurate data association and tracking performance
• 340. 342 SOLO Sensors Fusion
Track-to-Track of Two Sensors, Correlation and Fusion
We want to determine if Track i from Sensor A and Track j from Sensor B potentially represent the same target. Define the track difference
$d_k^{ij} := \hat{x}_{k|k}^i - \hat{x}_{k|k}^j = \tilde{x}_{k|k}^j - \tilde{x}_{k|k}^i$
The estimation errors evolve as (prediction and estimation):
$\tilde{x}_{k|k-1}^i := x_k - \hat{x}_{k|k-1}^i = F_{k-1}\,\tilde{x}_{k-1|k-1}^i + w_{k-1}$
$\tilde{x}_{k|k}^i := x_k - \hat{x}_{k|k}^i = (I - K_k^i H_k^i)\,\tilde{x}_{k|k-1}^i - K_k^i v_k^i$
and in the same way for Track j. The covariance of the difference is
$U_k^{ij} := E\{d_k^{ij}\,d_k^{ijT}\} = E\{\tilde{x}_{k|k}^i\tilde{x}_{k|k}^{iT}\} + E\{\tilde{x}_{k|k}^j\tilde{x}_{k|k}^{jT}\} - E\{\tilde{x}_{k|k}^i\tilde{x}_{k|k}^{jT}\} - E\{\tilde{x}_{k|k}^j\tilde{x}_{k|k}^{iT}\}$
$U_k^{ij} = P_{k|k}^i + P_{k|k}^j - P_{k|k}^{ij} - P_{k|k}^{ji}$
• 341. 343 SOLO Sensors Fusion
Track-to-Track of Two Sensors, Correlation and Fusion (continue – 1)
With $P_{k|k}^i := E\{\tilde{x}_{k|k}^i\tilde{x}_{k|k}^{iT}\}$ and $P_{k|k-1}^i := E\{\tilde{x}_{k|k-1}^i\tilde{x}_{k|k-1}^{iT}\}$ (and similarly for j), the cross-covariance satisfies, using $E\{v_k^i v_k^{jT}\}=0$ (independent sensor noises) and $E\{\tilde{x}\,v^T\}=0$:
$P_{k|k}^{ij} := E\{\tilde{x}_{k|k}^i\tilde{x}_{k|k}^{jT}\} = (I-K_k^i H_k^i)\,P_{k|k-1}^{ij}\,(I-K_k^j H_k^j)^T$
$P_{k|k-1}^{ij} := E\{\tilde{x}_{k|k-1}^i\tilde{x}_{k|k-1}^{jT}\} = F_{k-1}\,P_{k-1|k-1}^{ij}\,F_{k-1}^T + Q_{k-1}$
where $E\{w_{k-1}w_{k-1}^T\} = Q_{k-1}$; the common process noise is what makes the two track errors correlated.
• 342. 344 SOLO Gating
Track-to-Track of Two Sensors, Correlation and Fusion (continue – 2)
Track i of Sensor A and Track j of Sensor B are declared to be from the same target if
$d_k^2 := d_k^{ijT}\,\big(U_{k|k}^{ij}\big)^{-1}\,d_k^{ij} \le \gamma$
with probability $P_G$ determined by the gate threshold γ. Since $d_k^2$ is chi-square distributed with $n_d$ degrees of freedom, we can use the chi-square table to determine γ. For a tail probability $\Pr\{d_k^2 > \gamma\} = 0.01$ (typical):
n = 2: γ = 9.21;  n = 3: γ = 11.34;  n = 4: γ = 13.28
(tail probabilities of the chi-square density)
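A hedged sketch of this gate test; scipy's chi2.ppf reproduces the tabulated thresholds (e.g. chi2.ppf(0.99, 2) ≈ 9.21):

import numpy as np
from scipy.stats import chi2

def same_target(d, U, n_d, p_tail=0.01):
    """d: track difference d_k^ij; U: its covariance; n_d: dimension of d."""
    gamma = chi2.ppf(1.0 - p_tail, df=n_d)     # gate threshold
    d2 = d @ np.linalg.solve(U, d)             # d^T U^{-1} d
    return d2 <= gamma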
• 343. 345 SOLO Sensors Fusion
Track-to-Track of Two Sensors, Correlation and Fusion (continue – 3)
Suppose that $d_k^2 = d_k^{ijT}(U_{k|k}^{ij})^{-1}d_k^{ij} \le \gamma$; then Track i from Sensor A and Track j from Sensor B potentially represent the same target. We want to combine the data from the two sensors using
$\tilde{x}_{k|k}^c = \tilde{x}_{k|k}^i + C\,(\tilde{x}_{k|k}^j - \tilde{x}_{k|k}^i)$
where C has to be defined. The combination is unbiased, $E\{\tilde{x}_{k|k}^c\} = E\{\tilde{x}_{k|k}^i\} + C\,(E\{\tilde{x}_{k|k}^j\}-E\{\tilde{x}_{k|k}^i\}) = 0$, and its covariance is
$P_{k|k}^c = E\{\tilde{x}_{k|k}^c\tilde{x}_{k|k}^{cT}\} = P_{k|k}^i + C\,(P_{k|k}^{ji}-P_{k|k}^i) + (P_{k|k}^{ij}-P_{k|k}^i)\,C^T + C\,\big(P_{k|k}^i+P_{k|k}^j-P_{k|k}^{ij}-P_{k|k}^{ji}\big)\,C^T$
We determine C by requiring $\min_C \mathrm{trace}\,P_{k|k}^c$. The minimization condition is
$\dfrac{\partial}{\partial C}\,\mathrm{trace}\,P_{k|k}^c = 2\,(P_{k|k}^{ij}-P_{k|k}^i) + 2\,C\,\big(P_{k|k}^i+P_{k|k}^j-P_{k|k}^{ij}-P_{k|k}^{ji}\big) = 0$
$C^* = \big(P_{k|k}^i - P_{k|k}^{ij}\big)\big(P_{k|k}^i+P_{k|k}^j-P_{k|k}^{ij}-P_{k|k}^{ji}\big)^{-1} = \big(P_{k|k}^i-P_{k|k}^{ij}\big)\big(U_{k|k}^{ij}\big)^{-1}$
• 344. 346 SOLO Sensors Fusion
Track-to-Track of Two Sensors, Correlation and Fusion (continue – 4)
Summary:
• Sensor A provides $\hat{x}_{k|k}^i$ (with $P_{k|k}^i, K_k^i, H_k^i, Q_{k-1}$); Sensor B provides $\hat{x}_{k|k}^j$ (with $P_{k|k}^j, K_k^j, H_k^j$).
• Compute the difference $d_k^{ij} = \hat{x}_{k|k}^i - \hat{x}_{k|k}^j$ and
$U_{k|k}^{ij} := E\{d_k^{ij}d_k^{ijT}\} = P_{k|k}^i + P_{k|k}^j - P_{k|k}^{ij} - P_{k|k}^{ijT}$
$P_{k|k}^{ij} = (I-K_k^iH_k^i)\,\big(F_{k-1}P_{k-1|k-1}^{ij}F_{k-1}^T + Q_{k-1}\big)\,(I-K_k^jH_k^j)^T, \qquad P_{0|0}^{ij} = 0$
• Compute the χ² statistic $d^2 = d_k^{ijT}(U_{k|k}^{ij})^{-1}d_k^{ij}$ and perform the gate test $d^2 \le \gamma$; if passed, assign the tracks to the same target.
• Recursive fused track estimate and covariance:
$\hat{x}_{k|k}^c = \hat{x}_{k|k}^i + \big(P_{k|k}^i-P_{k|k}^{ij}\big)\big(U_{k|k}^{ij}\big)^{-1}\big(\hat{x}_{k|k}^j-\hat{x}_{k|k}^i\big)$
$P_{k|k}^c := E\{\tilde{x}_{k|k}^c\tilde{x}_{k|k}^{cT}\} = P_{k|k}^i - \big(P_{k|k}^i-P_{k|k}^{ij}\big)\big(U_{k|k}^{ij}\big)^{-1}\big(P_{k|k}^i-P_{k|k}^{ij}\big)^T$
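A hedged sketch of this fusion step, assuming the cross-covariance $P^{ij}$ is available from its recursion (names are illustrative):

import numpy as np

def fuse_tracks(xi, Pi, xj, Pj, Pij):
    """Track-to-track fusion of the same target seen by two sensors."""
    U = Pi + Pj - Pij - Pij.T                 # covariance of the difference
    W = (Pi - Pij) @ np.linalg.inv(U)
    x_c = xi + W @ (xj - xi)                  # fused state estimate
    P_c = Pi - W @ (Pi - Pij).T               # fused covariance
    return x_c, P_c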
• 345. 347 SOLO Sensors Fusion
Issues in Multi-Sensor Data Fusion
Successful multi-sensor data fusion requires the following practical issues to be addressed:
• Spatial and temporal sensor alignment
• Track association & fusion (for distributed architectures)
• Data corruption (or double-counting) problem (repeated use of the same information)
• Handling data latency (e.g. out-of-sequence measurements/estimates)
• Communication bandwidth limitations (how to compress the data)
• Fusion of dissimilar kinematic data (1D with 2D or 3D)
• Picture consistency
• 346. 348 SOLO Multi Target Tracking
References
Blackman, S.S., “Multiple-Target Tracking with Radar Applications”, Artech House, 1986
Blackman, S.S., Popoli, R., “Design and Analysis of Modern Tracking Systems”, Artech House, 1999
Bar-Shalom, Y., Fortmann, T.E., “Tracking and Data Association”, Academic Press, 1988
Waltz, E., Llinas, J., “Multisensor Data Fusion”, Artech House, 1990
Bar-Shalom, Y., Ed., “Multitarget-Multisensor Tracking, Applications and Advances”, Vol. I, Artech House, 1990
Bar-Shalom, Y., Ed., “Multitarget-Multisensor Tracking, Applications and Advances”, Vol. II, Artech House, 1992
Bar-Shalom, Y., Li, X.-R., “Estimation and Tracking: Principles, Techniques and Software”, Artech House, 1993
Bar-Shalom, Y., Li, X.-R., “Multitarget-Multisensor Tracking: Principles and Techniques”, YBS Publishing, 1995
Stone, L.D., Barlow, C.A., Corwin, T.L., “Bayesian Multiple Target Tracking”, Artech House, 1999
Bar-Shalom, Y., Blair, W.D., “Multitarget-Multisensor Tracking, Applications and Advances”, Vol. III, Artech House, 2000
• 347. 349 SOLO Multi Target Tracking
References (continue – 1)
Ristic, B., Hernandez, M.L., “Tracking Systems”, 2008 IEEE Radar Conference, Rome, Italy
Karlsson, R., “Simulation Based Methods for Target Tracking”, Linköping University, Thesis No. 930, 2002
Karlsson, R., “Particle Filtering for Positioning and Tracking Applications”, PhD Dissertation, Linköping University, No. 924, 2005
• 348. 350 SOLO Multi Target Tracking
References (continue – 2)
Klein, L.A., “Sensor and Data Fusion”, Artech House
Hall, D., Llinas, J., “Handbook of Multisensor Data Fusion”, Artech House
Hall, D., McMullen, S.A.H., “Mathematical Techniques in Multisensor Data Fusion”, Artech House
Liggins, M.E., Hall, D., Llinas, J., Eds., “Handbook of Multisensor Data Fusion: Theory and Practice”, 2nd Ed., CRC Press, 2008
• 351. 353 SOLO Multi Target Tracking
[Photos from the Workshop on Estimation, Tracking and Fusion: A Tribute to Yaakov Bar-Shalom, 17 May 2001. A Raytheon THAAD radar uses Yaakov Bar-Shalom’s JPDAF algorithm.]
• 352. 354 SOLO Multi Target Tracking
References (continue – 3)
“Special Issue on Data Fusion”, Proceedings of the IEEE, January 1997
Klein, L.A., “Sensor and Data Fusion Concepts and Applications”, 2nd Ed., SPIE Optical Engineering Press, 1999
• 353. 355
“Proceedings of the IEEE”, March 2004, Special Issue on “Sequential State Estimation: From Kalman Filters to Particle Filters”
Julier, S.J., Uhlmann, J.K., “Unscented Filtering and Nonlinear Estimation”, pp. 401–422
• 355. 357 SOLO
Technion – Israel Institute of Technology: 1964–1968 BSc EE; 1968–1971 MSc EE
Israeli Air Force: 1970–1974
RAFAEL – Israeli Armament Development Authority: 1974–2013
Stanford University: 1983–1986 PhD AA
• 356. 358 SOLO Review of Probability
Chi-square Distribution
Probability density function (k degrees of freedom):
$p(x;k) = \begin{cases}\dfrac{x^{k/2-1}\,e^{-x/2}}{2^{k/2}\,\Gamma(k/2)} & x>0\\ 0 & x\le 0\end{cases}$
Cumulative distribution function:
$P(x;k) = \begin{cases}\dfrac{\gamma(k/2,\,x/2)}{\Gamma(k/2)} & x\ge 0\\ 0 & x<0\end{cases}$
where $\Gamma(a)=\int_0^\infty t^{a-1}e^{-t}\,dt$ is the gamma function and $\gamma(a,x)=\int_0^x t^{a-1}e^{-t}\,dt$ is the incomplete gamma function.
Mean value: $E\{x\} = k$
Variance: $\mathrm{Var}\{x\} = 2k$
Moment generating (characteristic) function: $E\{e^{jtx}\} = (1-2jt)^{-k/2}$
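A quick numeric check of these facts (scipy's chi2 also reproduces the gating thresholds tabulated earlier):

from scipy.stats import chi2
k = 4
print(chi2.mean(k), chi2.var(k))   # 4.0, 8.0 -- mean k, variance 2k
print(chi2.cdf(13.28, k))          # ~0.99, matching the gating table (n=4)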
• 357. 359 SOLO Review of Probability
Gaussian Mixture Equations
A mixture is a pdf given by a weighted sum of pdfs, with the weights summing to unity. A Gaussian mixture is a pdf consisting of a weighted sum of Gaussian densities:
$p(x) = \sum_{j=1}^{n} p_j\,\mathcal{N}(x;\bar{x}_j,P_j), \qquad \sum_{j=1}^{n} p_j = 1$
Denote by $A_j$ the event that x is Gaussian distributed with mean $\bar{x}_j$ and covariance $P_j$, with $A_j$, j=1,...,n, mutually exclusive and exhaustive:
$A_1\cup A_2\cup\dots\cup A_n = S, \qquad A_i\cap A_j = \emptyset\ (i\ne j), \qquad P\{A_j\} = p_j$
Therefore:
$p(x) = \sum_{j=1}^{n} p_j\,\mathcal{N}(x;\bar{x}_j,P_j) = \sum_{j=1}^{n} P\{A_j\}\,p(x\mid A_j)$
• 358. 360 SOLO Review of Probability
Gaussian Mixture Equations (continue – 1)
The mean of such a mixture is
$\bar{x} = E\{x\} = \sum_{j=1}^{n} p_j\,E\{x\mid A_j\} = \sum_{j=1}^{n} p_j\,\bar{x}_j$
The covariance of the mixture is
$E\{(x-\bar{x})(x-\bar{x})^T\} = \sum_{j=1}^{n} p_j\,E\{(x-\bar{x})(x-\bar{x})^T\mid A_j\}$
Writing $x-\bar{x} = (x-\bar{x}_j)+(\bar{x}_j-\bar{x})$ and using $E\{x-\bar{x}_j\mid A_j\} = 0$:
$E\{(x-\bar{x})(x-\bar{x})^T\} = \sum_{j=1}^{n} p_j\,\big\{P_j + (\bar{x}_j-\bar{x})(\bar{x}_j-\bar{x})^T\big\}$
• 359. 361 SOLO Review of Probability
Gaussian Mixture Equations (continue – 2)
The covariance of the mixture is therefore
$E\{(x-\bar{x})(x-\bar{x})^T\} = \sum_{j=1}^{n} p_j P_j + \sum_{j=1}^{n} p_j(\bar{x}_j-\bar{x})(\bar{x}_j-\bar{x})^T = P + \tilde{P}$
where $P := \sum_{j=1}^{n} p_j P_j$ and $\tilde{P}$ is the spread-of-the-means term:
$\tilde{P} := \sum_{j=1}^{n} p_j\,(\bar{x}_j-\bar{x})(\bar{x}_j-\bar{x})^T = \sum_{j=1}^{n} p_j\,\bar{x}_j\bar{x}_j^T - \bar{x}\,\bar{x}^T$
Note: Since we developed only the first and second moments of the mixture, these relations remain correct even if the random variables in the mixture are not Gaussian.
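A small numeric check of the mixture-moment formulas above (weights, means and covariances are illustrative):

import numpy as np

p  = np.array([0.6, 0.4])                            # weights, sum to 1
xb = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]    # component means
P  = [np.eye(2), 2.0 * np.eye(2)]                    # component covariances

x_bar = sum(pj * xj for pj, xj in zip(p, xb))        # mixture mean
P_mix = sum(pj * (Pj + np.outer(xj - x_bar, xj - x_bar))
            for pj, xj, Pj in zip(p, xb, P))         # covariance + spread term
print(x_bar, P_mix, sep="\n")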
• 360. 362 SOLO Probability
Total Probability Theorem
If $A_1\cup A_2\cup\dots\cup A_n = S$ and $A_i\cap A_j = \emptyset$ (i≠j), we say that the set space S is decomposed into exhaustive and incompatible (exclusive) sets. For any event B, using
$B = \bigcup_{k=1}^{n}(A_k\cap B), \qquad (A_k\cap B)\cap(A_l\cap B) = \emptyset\ (k\ne l)$
and the relation $\Pr\{A_l\cap B\} = \Pr\{B\mid A_l\}\,\Pr\{A_l\}$, the Total Probability Theorem states that the probability of B can be decomposed in terms of conditional probabilities as
$\Pr\{B\} = \sum_{k=1}^{n}\Pr\{A_k\cap B\} = \sum_{i=1}^{n}\Pr\{B\mid A_i\}\,\Pr\{A_i\}$
• 361. 363 SOLO Probability
Statistical Independent Events
Events $A_i$, i=1,...,n, are statistically independent if $\Pr\{A_{i_1}\cap\dots\cap A_{i_r}\} = \prod_{l=1}^{r}\Pr\{A_{i_l}\}$ for every sub-collection, r=2,...,n. From the theorem of addition,
$\Pr\Big\{\bigcup_{i=1}^{n}A_i\Big\} = \sum_{i}\Pr\{A_i\} - \sum_{i<j}\Pr\{A_iA_j\} + \sum_{i<j<k}\Pr\{A_iA_jA_k\} - \dots = 1 - \prod_{i=1}^{n}\big(1-\Pr\{A_i\}\big)$
Therefore, by De Morgan’s law,
$\Pr\Big\{\bigcap_{i=1}^{n}\bar{A}_i\Big\} = \Pr\Big\{\overline{\bigcup_{i=1}^{n}A_i}\Big\} = 1 - \Pr\Big\{\bigcup_{i=1}^{n}A_i\Big\} = \prod_{i=1}^{n}\big(1-\Pr\{A_i\}\big) = \prod_{i=1}^{n}\Pr\{\bar{A}_i\}$
i.e., if the n events $A_i$ are statistically independent, then their complements $\bar{A}_i$ are also statistically independent.
• 362. 364 SOLO Probability
Theorem of Multiplication
$\Pr\{A_1A_2\cdots A_n\} = \Pr\{A_1\}\,\Pr\{A_2\mid A_1\}\,\Pr\{A_3\mid A_1A_2\}\cdots\Pr\{A_n\mid A_1A_2\cdots A_{n-1}\}$
Proof: Start from $\Pr\{AB\} = \Pr\{B\mid A\}\,\Pr\{A\}$. Then
$\Pr\{A_1A_2\cdots A_n\} = \Pr\{A_n\mid A_1\cdots A_{n-1}\}\,\Pr\{A_1\cdots A_{n-1}\} = \Pr\{A_n\mid A_1\cdots A_{n-1}\}\,\Pr\{A_{n-1}\mid A_1\cdots A_{n-2}\}\,\Pr\{A_1\cdots A_{n-2}\} = \dots$
Continuing down to $\Pr\{A_2\mid A_1\}\,\Pr\{A_1\}$ gives the result. q.e.d.
• 363. 365 SOLO Probability
Conditional Probability – Bayes Formula (Thomas Bayes, 1702–1761)
Using the relations $\Pr\{A_l\cap B\} = \Pr\{B\mid A_l\}\,\Pr\{A_l\} = \Pr\{A_l\mid B\}\,\Pr\{B\}$ and the Total Probability Theorem $\Pr\{B\} = \sum_{k=1}^{m}\Pr\{B\mid A_k\}\,\Pr\{A_k\}$ (for $A_1,\dots,A_m$ exhaustive and exclusive), we obtain the Bayes Formula:
$\Pr\{A_l\mid B\} = \dfrac{\Pr\{B\mid A_l\}\,\Pr\{A_l\}}{\Pr\{B\}} = \dfrac{\Pr\{B\mid A_l\}\,\Pr\{A_l\}}{\sum_{k=1}^{m}\Pr\{B\mid A_k\}\,\Pr\{A_k\}}$
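A tiny numeric illustration of the Total Probability Theorem and the Bayes Formula with a two-event partition (the probabilities are illustrative):

pA = [0.7, 0.3]                  # Pr(A1), Pr(A2) -- exhaustive, exclusive
pB_given_A = [0.1, 0.6]          # Pr(B|A1), Pr(B|A2)
pB = sum(a * b for a, b in zip(pA, pB_given_A))      # total probability
post = [a * b / pB for a, b in zip(pA, pB_given_A)]  # Pr(Ai|B), Bayes
print(pB, post)                  # 0.25, [0.28, 0.72]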