MOFT Tutorials
Multi Object Filtering Multi Target Tracking
Multiple Hypothesis Tracking
Part-1 / Conceptual MHT
Multiple Hypothesis Tracking - MHT
▪ MHT significantly differs from PDA-based tracking algorithms
▪ PDA-based algorithms combine the multiple hypotheses of the
measurement update step before the next update,
▪ the MHT approach keeps all hypotheses and expects that subsequent
measurements resolve the association uncertainty
▪ number of hypotheses increases exponentially in ambiguous
situations
▪ computational complexity is reduced using pruning and merging
techniques as well as grouping strategies
Multiple Hypothesis Tracking
▪ an efficient MHT implementation using Murty’s algorithm which
obtains the k best hypotheses without evaluating all possible
hypotheses
▪ two approaches: hypothesis-oriented MHT and track-oriented MHT
▪ hypothesis-oriented MHT propagates the hypotheses over time,
▪ track-oriented MHT only propagates the obtained tracks over time
▪ reconstructs the hypotheses using compatible tracks before the
next measurement update
▪ set of tracks is considered to be compatible if each measurement
contributes to at most one of the tracks
Multiple Hypothesis Tracking
▪ important features of modern tracking systems.
▪ track initiation, track termination and a deferred decision logic
▪ especially in high clutter and high target density scenarios
▪ Multiple Hypothesis Tracking (MHT) algorithm naturally supports track
initiation, track termination and a deferred decision logic
▪ the fundamental idea of MHT: delaying difficult data association decisions
until more information is received
▪ requires maintaining a set of different association hypotheses in a
ranked fashion
Multiple Hypothesis Tracking
▪ problem of MHT, number of hypotheses grows exponentially,
▪ combinatorial explosion quickly exhausting memory and computation
capabilities
▪ an important component of every MHT system is a set of techniques for
keeping the number of hypotheses manageable
Multiple Hypothesis Tracking
MHT Previous Work/History
▪ Morefield’s MTT batch-processing
▪ based on a 0–1 integer linear programming problem
▪ classifies measurements based on Bayesian decision theory
▪ NP-hard multi-scan data association problem
▪ Reid fundamentally changed the structure of the algorithm
▪ sequentially applicable
▪ maintains multiple different association hypotheses while
processing the set of measurements of each new scan.
▪ pruning strategies only a fixed number of the most likely
hypotheses is kept
▪ much better suited for real-time applications, hypothesis set can
be updated iteratively from scan-to-scan.
▪ complexity grows exponentially in computation time and memory.
▪ infeasible in scenarios with a high number of tracks and measurements
Multiple Hypothesis Tracking
MHT Previous Work/History
▪ Cox and Miller provided a better solution
▪ to identify the exact 𝑛-best hypotheses
▪ Murty’s algorithm, ranked set of best association hypotheses
▪ based on an algorithm for solving LAPs, Hungarian method,
Munkres algorithm
▪ Jonker and Volgenant
▪ optimized algorithm for solving square LAPs
▪ outperforms classical methods for dense LAPs easily
▪ Bijsterbosch and Volgenant
▪ further improved this algorithm and extended it to the rectangular
case
▪ the most widely used algorithm for the data association problem in MTT.
Multiple Hypothesis Tracking
Conceptual MHT – Reid’s Original MHT
Example
▪ validation regions of the two tracks overlap.
▪ two measurements 𝑀2 and 𝑀3 gate with both of them,
▪ measurement 𝑀1 only gates with the first track
▪ tracks 𝑇1 and 𝑇2 represented by a hypothesis 𝐻 at time-step 𝑘 − 1,
▪ prior to new measurements 𝑀1, 𝑀2 and 𝑀3 in time-step 𝑘.
Multiple Hypothesis Tracking
▪ a likely hypothesis
▪ measurement 𝑀2 belongs to the track 𝑇1,
▪ measurement 𝑀3 to the track 𝑇2
▪ measurement 𝑀1 is the start of a new track 𝑇3.
▪ an unlikely hypothesis
▪ all three new measurements originate from clutter
▪ classified as false alarms;
▪ neither updating track 𝑇1 and 𝑇2 nor initiating any new track.
▪ number of feasible hypotheses is large.
▪ MHT algorithm generates all feasible data association hypotheses for
arbitrary tracking scenarios in a systematic fashion.
Multiple Hypothesis Tracking
▪ Example - the five feasible posterior data association hypotheses for 𝐻 are
listed, where 𝑁𝑇 denotes a new target and 𝐹𝑇 a false alarm
Multiple Hypothesis Tracking
▪ each iteration at a time-step 𝑘 begins with the set of old hypotheses
from the previous iteration at time-step 𝑘 − 1.
▪ each of these old hypotheses becomes a parent hypothesis for the
current time-step 𝑘 as new hypotheses are formed.
▪ a hypothesis provides an interpretation of all previously received
measurements as a set of disjoint tracks.
▪ before the actual data association, Kalman filter associated with each
track is predicted.
▪ estimation of the track states for the upcoming time-step.
▪ newly arrived measurements are matched with the track state
predictions using the Mahalanobis distance.
Multiple Hypothesis Tracking
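As a supplementary note to the prediction-and-gating step above: a minimal sketch, assuming a linear Kalman filter with measurement matrix H and measurement covariance R, of matching a new measurement against a predicted track state via the Mahalanobis distance. The matrices, the chi-square gate value and the function names are illustrative and not taken from the slides.

```python
import numpy as np

def predict(x, P, F, Q):
    """Kalman time update: propagate the state mean and covariance."""
    return F @ x, F @ P @ F.T + Q

def mahalanobis_gate(z, x_pred, P_pred, H, R, gate=9.21):
    """Squared Mahalanobis distance of measurement z to the predicted
    measurement of one track, plus a gate decision."""
    S = H @ P_pred @ H.T + R                 # innovation covariance S_j(k)
    nu = z - H @ x_pred                      # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))  # squared Mahalanobis distance
    return d2, d2 <= gate                    # gate value is an assumed chi-square threshold

# toy 2D constant-velocity setup (all matrices and numbers are illustrative)
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1.0]])
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])
Q, R = 0.01 * np.eye(4), 0.5 * np.eye(2)
x_pred, P_pred = predict(np.array([0.0, 0.0, 1.0, 1.0]), np.eye(4), F, Q)
print(mahalanobis_gate(np.array([1.2, 0.9]), x_pred, P_pred, H, R))
```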
▪ three different possible outcomes for each measurement association.
▪ every measurement can be:
▪ continuation of a previously known track,
▪ start of a new track,
▪ a false alarm due to clutter, or other sensor inabilities.
▪ MHT algorithm enumerates all possible data association hypotheses
effectively generating a tree.
▪ tree encodes all association hypotheses as branches from the root to
one of its leaves.
▪ tree depth increases with every processed measurement.
Multiple Hypothesis Tracking
▪ for each hypothesis a likelihood for assigning the respective
measurement is calculated.
▪ probability stands for the likelihood of a hypothesis
▪ required to rank the hypotheses from the most likely to the most
unlikely
▪ hypothesis tree is pruned
▪ unlikely hypotheses are removed to keep the size of the tree in check
Multiple Hypothesis Tracking
Multiple Hypothesis Tracking
▪ MHT algorithm generates new hypotheses sets in an iterative manner
▪ no two measurements of the same scan are associated with the same
target
▪ no two tracks of the same hypothesis have a measurement in
common,
▪ measurements originating from a sensor are modeled as $D_M$-dimensional
vectors $z \in \mathbb{R}^{D_M}$.
▪ states maintained by the Kalman filter are modeled as $D_S$-dimensional
vectors $x \in \mathbb{R}^{D_S}$.
▪ the set of $M(k)$ measurements delivered in scan $k$ is denoted by
  $Z(k) = \{ z_i(k) : i = 1, 2, \dots, M(k) \} \subset \mathbb{R}^{D_M}$
Multiple Hypothesis Tracking
▪ the set of all scans up to time-step $k$ is denoted by
  $Z^k = \bigcup_{i=1}^{k} Z(i)$
▪ the set of all $H(k)$ hypotheses at the time of scan $k$, associating the set of
measurements $Z^k$ up to scan $k$ with tracks or clutter, is denoted by
  $\Omega^k = \{ \Omega_i^k : i = 1, 2, \dots, H(k) \}$
Multiple Hypothesis Tracking
▪ an association hypothesis $\Omega_i^k$ is a 5-tuple consisting of a set of tracks
describing the measurement-to-track associations,
▪ the number of tracks in this hypothesis $N_{TGT}$,
▪ the number of measurements in the actual scan associated with
previously existing tracks $N_{DT}$,
▪ the number of measurements in the actual scan that initiated new
targets $N_{NT}$,
▪ the number of measurements in the actual scan classified as false
alarms $N_{FT}$.
▪ the sum equals the number of measurements: $M(k) = N_{DT} + N_{FT} + N_{NT}$.
Multiple Hypothesis Tracking
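To make the 5-tuple concrete, here is a minimal sketch of a hypothesis record holding a set of tracks and the counts N_TGT, N_DT, N_NT and N_FT; the field names and the track representation are assumptions made for illustration, not taken from any particular MHT implementation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Hypothesis:
    """One association hypothesis: a set of tracks plus the counts of the 5-tuple."""
    tracks: Dict[int, List[Tuple[int, int]]]  # track id -> list of (scan, measurement index)
    n_tgt: int    # N_TGT: number of tracks in this hypothesis
    n_dt: int     # N_DT: measurements of the current scan assigned to existing tracks
    n_nt: int     # N_NT: measurements that initiated new targets
    n_ft: int     # N_FT: measurements classified as false alarms
    prob: float = 1.0

    def counts_consistent(self, m_k: int) -> bool:
        # the three counts must account for every measurement of the scan
        return self.n_dt + self.n_ft + self.n_nt == m_k

h = Hypothesis(tracks={1: [(1, 0), (2, 1)], 2: [(1, 2), (2, 0)]},
               n_tgt=2, n_dt=2, n_nt=1, n_ft=1)
print(h.counts_consistent(m_k=4))  # True, since 2 + 1 + 1 = 4
```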
▪ in scan $k+1$ a new set of measurements $Z(k+1)$ is received
▪ a new set of hypotheses $\Omega^{k+1}$ is formed iteratively as follows
▪ let $\hat{\Omega}^m$ denote the set of hypotheses after the $m$-th measurement
$z_m(k+1)$ of the actual scan
▪ the process starts by initializing $\hat{\Omega}^0 = \Omega^k$.
▪ a new set of hypotheses $\hat{\Omega}^m$ is formed for each prior hypothesis
$\hat{\Omega}_i^{m-1}$ and each new measurement $z_m(k+1)$.
▪ each hypothesis in this new set is the joint hypothesis which claims
that $\hat{\Omega}_i^{m-1}$ is true and that the measurement $z_m(k+1)$ is either
▪ a false alarm,
▪ the beginning of a new target,
▪ or the continuation of one of the prior targets.
Multiple Hypothesis Tracking
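A minimal sketch of the expansion step just described, with hypotheses kept as plain dictionaries (an assumed layout: per-scan track assignments plus false-alarm and new-target lists). Gating and probability weights are omitted here; only the systematic branching into false alarm / new target / track continuation is shown.

```python
def expand(hypotheses, meas_idx, gated_tracks):
    """Branch every prior hypothesis for one new measurement of the scan:
    the measurement is either a false alarm, the start of a new track, or the
    continuation of a gated track that is still unassigned in this scan."""
    children = []
    for hyp in hypotheses:
        # false-alarm branch
        children.append({**hyp, "fa": hyp["fa"] + [meas_idx]})
        # new-target branch
        children.append({**hyp, "new": hyp["new"] + [meas_idx]})
        # continuation branches, one per compatible and still-free track
        for t in gated_tracks:
            if hyp["assign"].get(t) is None:
                children.append({**hyp, "assign": {**hyp["assign"], t: meas_idx}})
    return children

# toy run: two prior tracks; measurement 0 gates with both, measurement 1 only with T2
root = {"assign": {"T1": None, "T2": None}, "fa": [], "new": []}
hyps = expand([root], meas_idx=0, gated_tracks=["T1", "T2"])
hyps = expand(hyps, meas_idx=1, gated_tracks=["T2"])
print(len(hyps))  # every feasible joint interpretation of the two measurements
```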
Multiple Hypothesis Tracking
▪ the initial hypothesis $\Omega_{m(0)}^{k-1}$ contains $N_{TGT} = 2$ tracks, each of them
consisting of 3 measurements.
▪ in time-step $k$ a new scan $Z(k) = \{ z_1(k), z_2(k), z_3(k), z_4(k) \}$ with $M(k) = 4$
measurements arrives.
▪ a new hypothesis $\Omega_0^k$ is formed which associates $N_{DT} = 2$ out of 4
measurements with the 2 existing tracks; $N_{NT} = 1$ measurement initiates
a new track and $N_{FT} = 1$ measurement is classified as a false alarm.
Probability of a Hypothesis
▪ rating and ranking hypotheses is a crucial aspect of the MHT
algorithm, required to differentiate likely and unlikely hypotheses
▪ the MHT hypothesis rating equation gives the probability of a hypothesis
$\Omega_i^k$, given the cumulative set of measurements $Z^k$ originating from a
type 1 sensor:
  $P_i^k = P(\Omega_i^k \mid Z^k)$
▪ type 1 sensor is a sensor capable of providing information about the
number of targets in the area of coverage of the sensor
▪ generates in each scan a set of zero or more measurements
Multiple Hypothesis Tracking
▪ given the cumulative set of measurements up to scan $k$, the probability of a
new hypothesis $\Omega_i^k$ is composed of the new set of assignments $w_i(k)$ and
the parent hypothesis $\Omega_{m(i)}^{k-1}$, which contains the assignments of
measurements up to and including scan $k-1$.
▪ the function $m(i)$ gives the index of the parent hypothesis of a hypothesis
$\Omega_i^k$ from the last scan.
▪ the new hypothesis is the union of the old and current sets of assignments,
formally written as:
  $\Omega_i^k = \{ \Omega_{m(i)}^{k-1}, w_i(k) \}$
Multiple Hypothesis Tracking
Multiple Hypothesis Tracking
▪ the probability $P_i^k$ of a hypothesis $\Omega_i^k$ can be obtained recursively by
combining previously known information with the new information
▪ an equation for calculating the probability of a hypothesis $P(\Omega_i^k \mid Z^k)$
can be derived as
  $P_i^k = P(\Omega_i^k \mid Z^k) = \frac{1}{c}\, \underbrace{P(Z(k) \mid \Omega_{m(i)}^{k-1}, w_i(k), Z^{k-1})}_{(1)}\; \underbrace{P(w_i(k) \mid \Omega_{m(i)}^{k-1}, Z^{k-1})}_{(2)}\; \underbrace{P(\Omega_{m(i)}^{k-1} \mid Z^{k-1})}_{(3)}$
▪ the normalizing constant is $c = \frac{P(Z^{k-1}, Z(k))}{P(Z^{k-1})} = P(Z(k) \mid Z^{k-1})$
▪ it specifies the probability of the occurrence of the current scan's
measurements given all measurements up to scan $k-1$.
Multiple Hypothesis Tracking
▪ the constant never has to be calculated explicitly because it is independent
of any particular hypothesis and identical for all hypotheses of one iteration
▪ to arrive at a closed-form equation for the probability $P_i^k$ of a hypothesis
$\Omega_i^k$, the three factors of the right-hand side are required
▪ factor (3) of the equation is just the probability of the parent hypothesis
$\Omega_{m(i)}^{k-1}$, available from the previous iteration.
▪ the remaining two factors (1) and (2) describe the likelihood of the new
assignment
Multiple Hypothesis Tracking
▪ the likelihood of the measurements $Z(k)$ of the current scan, given the new
association hypothesis $\Omega_i^k = \{ \Omega_{m(i)}^{k-1}, w_i(k) \}$, combines the old and
new data associations as
  $P(Z(k) \mid \Omega_{m(i)}^{k-1}, w_i(k), Z^{k-1}) = \prod_{i=1}^{M(k)} a(z_i(k))$
  where $a: \mathbb{R}^{D_M} \to \mathbb{R}_0^+$ is defined as
  $a(z) = \begin{cases} \frac{1}{V} & \text{if } z \text{ is a false alarm or a new target} \\ f_{\mathcal{N}(Hx_j(k),\,S_j(k))}(z) & \text{if } z \text{ continues a track } j \text{ confirmed by } \Omega_{m(i)}^{k-1} \end{cases}$
▪ $V \in \mathbb{R}^+$ is the scan volume covered by the sensor,
▪ $x_j(k)$ is the predicted state of the $j$-th track of hypothesis $\Omega_{m(i)}^{k-1}$,
▪ $S_j(k)$ is the corresponding innovation covariance matrix
Multiple Hypothesis Tracking
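A sketch of the per-measurement likelihood a(z) from the case distinction above; the use of scipy's multivariate normal density and the placeholder numbers are implementation assumptions, not prescribed by the slides.

```python
import numpy as np
from scipy.stats import multivariate_normal

def a(z, origin, V, z_pred=None, S=None):
    """Per-measurement likelihood a(z): 1/V for false alarms and new targets,
    a Gaussian density N(z; Hx_j(k), S_j(k)) for a track continuation."""
    if origin in ("false_alarm", "new_target"):
        return 1.0 / V
    return multivariate_normal.pdf(z, mean=z_pred, cov=S)

# illustrative numbers only
z_pred = np.array([2.0, 3.0])          # predicted measurement H x_j(k)
S = 0.7 * np.eye(2)                    # innovation covariance S_j(k)
print(a(np.array([2.1, 2.9]), "track", V=100.0, z_pred=z_pred, S=S))
print(a(np.array([9.0, 9.0]), "false_alarm", V=100.0))
```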
▪ spatial distribution of false alarms is assumed to be uniform within the
area 𝑉 covered by the sensor
▪ number of false alarms per time-step is modeled by a Poisson
distribution
▪ factor (2) is the probability of the new association hypothesis given its
parent hypothesis.
▪ it is composed of the probability of the counts $N_{DT}$, $N_{FT}$ and $N_{NT}$ given
$\Omega_{m(i)}^{k-1}$, the probability of a specific configuration given the counts $N_{DT}$,
$N_{FT}$ and $N_{NT}$, and lastly the probability of an assignment for a given
configuration.
▪ a configuration is the classification of all $M(k)$ measurements of the
current scan into measurements assigned to previously known tracks,
false alarms or new tracks
Multiple Hypothesis Tracking
▪ an assignment is a concrete allocation of tracks to those measurements
that were assigned to some previously known track
▪ factor (2) can therefore be stated as:
  $P(w_i(k) \mid \Omega_{m(i)}^{k-1}, Z^{k-1}) = P(N_{DT}, N_{FT}, N_{NT} \mid \Omega_{m(i)}^{k-1}) \cdot P(\mathit{configuration} \mid N_{DT}, N_{FT}, N_{NT}) \cdot P(\mathit{assignment} \mid \mathit{configuration})$
▪ it is assumed that the number of previously known tracks that were
detected in the current scan, $N_{DT}$, follows a binomial distribution $\mathcal{B}(n, p)$
▪ the numbers of new targets $N_{NT}$ and false targets $N_{FT}$ follow Poisson
distributions $\mathcal{P}(\lambda)$
▪ the first factor of the equation is then
  $P(N_{DT}, N_{FT}, N_{NT} \mid \Omega_{m(i)}^{k-1}) = f_{\mathcal{B}(N_{TGT}, P_D)}(N_{DT})\, f_{\mathcal{P}(\beta_{FT} V)}(N_{FT})\, f_{\mathcal{P}(\beta_{NT} V)}(N_{NT})$
Multiple Hypothesis Tracking
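The first factor of the decomposition above can be sketched directly with binomial and Poisson probability mass functions; P_D, β_FT, β_NT and V below are placeholder values, and the use of scipy.stats is an implementation choice, not something the slides prescribe.

```python
from scipy.stats import binom, poisson

def counts_probability(n_dt, n_ft, n_nt, n_tgt, p_d, beta_ft, beta_nt, V):
    """P(N_DT, N_FT, N_NT | parent hypothesis): binomial number of detected
    tracks times Poisson numbers of false alarms and new targets."""
    return (binom.pmf(n_dt, n_tgt, p_d)        # detections  ~ B(N_TGT, P_D)
            * poisson.pmf(n_ft, beta_ft * V)   # false alarms ~ P(beta_FT * V)
            * poisson.pmf(n_nt, beta_nt * V))  # new targets  ~ P(beta_NT * V)

# the scenario of the earlier example slide: 2 tracks, N_DT = 2, N_FT = 1, N_NT = 1
print(counts_probability(n_dt=2, n_ft=1, n_nt=1, n_tgt=2,
                         p_d=0.9, beta_ft=0.01, beta_nt=0.005, V=100.0))
```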
▪ the total number of different ways to assign $N_{DT}$ out of $M(k)$ measurements
to prior targets, $N_{FT}$ out of $M(k)$ measurements to false targets and $N_{NT}$
out of $M(k)$ measurements to new targets is given by the following
expression:
  $\binom{M(k)}{N_{DT}} \cdot \binom{M(k)-N_{DT}}{N_{FT}} \cdot \binom{M(k)-N_{DT}-N_{FT}}{N_{NT}}$
▪ the probability of a configuration given the counts $N_{DT}$, $N_{FT}$ and $N_{NT}$ is:
  $P(\mathit{configuration} \mid N_{DT}, N_{FT}, N_{NT}) = \left[ \binom{M(k)}{N_{DT}} \cdot \binom{M(k)-N_{DT}}{N_{FT}} \cdot \binom{M(k)-N_{DT}-N_{FT}}{N_{NT}} \right]^{-1}$
▪ the number of possible combinations for assigning the $N_{DT}$ measurements
to the $N_{TGT}$ prior targets is given on the next slide
Multiple Hypothesis Tracking
▪ $N_{FT}$ and $N_{NT}$ do not have to be considered because the number of
possible assignments for both equals 1; the number of assignments of the
$N_{DT}$ measurements to the $N_{TGT}$ prior targets is
  $\frac{N_{TGT}!}{(N_{TGT} - N_{DT})!}$
▪ the probability of an assignment given a certain configuration is therefore:
  $P(\mathit{assignment} \mid \mathit{configuration}) = \left( \frac{N_{TGT}!}{(N_{TGT} - N_{DT})!} \right)^{-1} = \frac{(N_{TGT} - N_{DT})!}{N_{TGT}!}$
Multiple Hypothesis Tracking
▪ the final equation for the probability of the new association hypothesis $\Omega_i^k$
is
  $P(w_i(k) \mid \Omega_{m(i)}^{k-1}, Z^{k-1}) = f_{\mathcal{B}(N_{TGT}, P_D)}(N_{DT})\, f_{\mathcal{P}(\beta_{FT} V)}(N_{FT})\, f_{\mathcal{P}(\beta_{NT} V)}(N_{NT}) \cdot \left[ \binom{M(k)}{N_{DT}} \binom{M(k)-N_{DT}}{N_{FT}} \binom{M(k)-N_{DT}-N_{FT}}{N_{NT}} \right]^{-1} \cdot \frac{(N_{TGT}-N_{DT})!}{N_{TGT}!}$
  $= \frac{N_{FT}!\, N_{NT}!}{M(k)!}\, P_D^{N_{DT}} (1 - P_D)^{N_{TGT}-N_{DT}}\, f_{\mathcal{P}(\beta_{FT} V)}(N_{FT})\, f_{\mathcal{P}(\beta_{NT} V)}(N_{NT})$
Multiple Hypothesis Tracking
▪ the formula for determining the probability of a hypothesis is obtained as:
  $P_i^k = \frac{1}{c}\, \frac{N_{FT}!\, N_{NT}!}{M(k)!}\, P_D^{N_{DT}} (1 - P_D)^{N_{TGT}-N_{DT}}\, f_{\mathcal{P}(\beta_{FT} V)}(N_{FT})\, f_{\mathcal{P}(\beta_{NT} V)}(N_{NT}) \cdot \prod_{i=1}^{N_{DT}} f_{\mathcal{N}(Hx_j(k),\,S_j(k))}(z_i(k)) \cdot \frac{1}{V^{N_{FT}+N_{NT}}}\, P_{m(i)}^{k-1}$
  $= \frac{1}{c'}\, P_D^{N_{DT}} (1 - P_D)^{N_{TGT}-N_{DT}}\, \beta_{FT}^{N_{FT}}\, \beta_{NT}^{N_{NT}} \cdot \prod_{i=1}^{N_{DT}} f_{\mathcal{N}(Hx_j(k),\,S_j(k))}(z_i(k))\, P_{m(i)}^{k-1}$
▪ the dependence on $V$ is eliminated by substituting the Poisson
distributions.
Multiple Hypothesis Tracking
▪ the constant $c$ was condensed into the constant $c' = \frac{c\, M(k)!}{e^{-\beta_{FT} V}\, e^{-\beta_{NT} V}}$.
▪ due to the dependence on the probability of the previous time-step, the
equation can be applied iteratively to calculate the probability of each
data association hypothesis
▪ the required steps are to first multiply all prior hypothesis probabilities by
$(1 - P_D)^{N_{TGT}}$
▪ as a branch is created for each measurement and its hypothesized origin,
the likelihood of that branch is determined by multiplying the prior
probability by $\beta_{FT}$ in case of a false alarm, by $\beta_{NT}$ in case of a new
target, or by
  $\frac{P_D\, f_{\mathcal{N}(Hx_j(k),\,S_j(k))}(z_i(k))}{1 - P_D}$
in case of a track continuation.
Multiple Hypothesis Tracking
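The iterative branch update described above can be sketched as a single scoring function; normalizing the scores over all leaves of a scan then plays the role of the constant 1/c'. Function names, parameter names and numbers are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def branch_weight(parent_score, origin, p_d, beta_ft, beta_nt,
                  z=None, z_pred=None, S=None):
    """Unnormalized score of one child branch. Following the update rule above,
    the parent score is assumed to have been multiplied by (1 - P_D)^N_TGT once
    per scan; each branch then gets beta_FT, beta_NT, or P_D*N(z; Hx, S)/(1 - P_D)."""
    if origin == "false_alarm":
        return parent_score * beta_ft
    if origin == "new_target":
        return parent_score * beta_nt
    # track continuation
    return parent_score * p_d * multivariate_normal.pdf(z, mean=z_pred, cov=S) / (1.0 - p_d)

# toy usage with illustrative numbers: one parent hypothesis, two tracks, P_D = 0.9
prior = 1.0 * (1.0 - 0.9) ** 2            # parent score times (1 - P_D)^N_TGT
w_fa = branch_weight(prior, "false_alarm", 0.9, 0.01, 0.005)
w_tc = branch_weight(prior, "track", 0.9, 0.01, 0.005,
                     z=np.array([2.1, 2.9]), z_pred=np.array([2.0, 3.0]), S=0.7 * np.eye(2))
total = w_fa + w_tc
print(w_fa / total, w_tc / total)          # normalization plays the role of 1/c'
```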
Practical Issues
▪ the major issue of the MHT algorithm is its memory and processing
requirements as more measurements are processed
▪ since the MHT algorithm is of exponential character, limitations in memory
and processing power make any direct implementation of the algorithm
impossible in practice
Hypothesis Pruning
▪ number of generated and maintained hypotheses has to be kept in
check in order to not exceed processing power and memory
▪ can be achieved by keeping only some hypotheses and pruning the
rest
▪ pruning means discarding certain hypotheses on the basis of some
criteria.
Multiple Hypothesis Tracking
Count-based Pruning
▪ keep only the $n$ best (most likely) hypotheses and discard the rest
▪ after each update the set of hypotheses $\Omega^k$ is sorted by hypothesis
probability $P_i^k$ and all hypotheses $\Omega_i^k$ with $i > n$ are deleted
▪ the advantage of this strategy is its simplicity
▪ the disadvantage is that hypotheses are pruned simply because they are not
within the $n$ best,
▪ even though their probability may be high and it might be worth considering
them in following scans when more information is available
▪ this drawback is addressed by probability-based pruning.
Multiple Hypothesis Tracking
Probability-based Pruning
▪ only hypotheses $\Omega_i^k \in \Omega^k$ with a probability $P_i^k$ greater than a certain
threshold $P_{min} \in (0, 1)$ are kept
▪ all hypotheses with $P_i^k < P_{min}$ are discarded and all remaining
hypotheses are kept.
▪ the advantage of this pruning strategy is that probable hypotheses are not
discarded.
▪ the disadvantage is that the number of hypotheses can vary considerably,
which affects the time required to process a scan.
Multiple Hypothesis Tracking
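A minimal sketch of the two pruning strategies described above, with hypotheses represented as (probability, payload) pairs; the renormalization step after pruning is an assumption added for the sketch, not stated on the slides.

```python
def renormalize(hypotheses):
    """Rescale the remaining probabilities so they sum to one (an assumption)."""
    total = sum(p for p, _ in hypotheses)
    return [(p / total, payload) for p, payload in hypotheses]

def prune_n_best(hypotheses, n):
    """Count-based pruning: keep only the n most probable hypotheses."""
    return renormalize(sorted(hypotheses, key=lambda h: h[0], reverse=True)[:n])

def prune_by_probability(hypotheses, p_min):
    """Probability-based pruning: keep every hypothesis with P_i >= p_min."""
    return renormalize([h for h in hypotheses if h[0] >= p_min])

hyps = [(0.55, "H1"), (0.25, "H2"), (0.15, "H3"), (0.05, "H4")]
print(prune_n_best(hyps, 2))
print(prune_by_probability(hyps, 0.10))
```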
n-scan-back Pruning
▪ based on the assumption that ambiguities during the data association
process can only be resolved within the next 𝑛 time-steps.
▪ at time-step 𝑘 + 𝑛 a final association decision has to be made and just
one node is kept
▪ all the other child nodes of the node at time-step 𝑘 are discarded.
▪ pruning decision is made on the basis of the probability obtained by
summing up the hypothesis probabilities of all leaves associated with
a node.
▪ tree branch containing the leaf nodes with the maximum probability is
kept because it indicates the most probable branch.
Multiple Hypothesis Tracking
Multiple Hypothesis Tracking
An example of the n-scan-back pruning strategy. The
lefthand side shows a hypothesis tree prior to pruning.
The righthand side shows the same hypothesis tree after
n-scan-back pruning was applied
Hypothesis Merging
▪ several hypotheses with different histories might result in very similar
track estimates
▪ such hypotheses should be merged,
▪ they represent almost the same measurement interpretation but take up
space that could hold other likely hypotheses.
▪ two hypotheses can be merged if they both contain the same number
of tracks and the contained tracks are similar
▪ the similarity of two tracks can be determined
▪ on the basis of the distance of their state predictions $d_{sp} \in \mathbb{R}_0^+$
▪ and on the distances of the eigenvalues $d_{\lambda_i} \in \mathbb{R}_0^+$ of their associated
covariance matrices
Multiple Hypothesis Tracking
▪ if these distances are smaller than the thresholds $\epsilon_{sp}, \epsilon_{\lambda_i} \in \mathbb{R}_0^+$, both
hypotheses are merged into a single one
▪ probability of the new hypothesis is set to the sum of the probabilities
of the merged hypotheses
▪ state prediction and covariance matrix of the new hypothesis are set
to the average of the state predictions and covariance matrices of the
merged hypotheses
▪ to merge a set of many similar hypotheses two of them are iteratively
chosen, removed from the set and merged into a new hypothesis
which is put back into the set.
▪ procedure is repeated until only one hypothesis is left in the set.
Multiple Hypothesis Tracking
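A minimal sketch of the pairwise merge just described, assuming the tracks of both hypotheses are stored in a comparable order; the similarity thresholds and the dictionary layout are illustrative assumptions.

```python
import numpy as np

def tracks_similar(t1, t2, eps_sp=1.0, eps_ev=1.0):
    """Two tracks are similar if their state predictions are close and the
    eigenvalues of their covariance matrices are close (placeholder thresholds)."""
    d_sp = np.linalg.norm(t1["x"] - t2["x"])
    d_ev = np.abs(np.linalg.eigvalsh(t1["P"]) - np.linalg.eigvalsh(t2["P"]))
    return d_sp < eps_sp and np.all(d_ev < eps_ev)

def merge(h1, h2):
    """Merge two hypotheses with pairwise similar tracks: sum the probabilities,
    average the state predictions and the covariance matrices."""
    tracks = [{"x": (a["x"] + b["x"]) / 2, "P": (a["P"] + b["P"]) / 2}
              for a, b in zip(h1["tracks"], h2["tracks"])]
    return {"prob": h1["prob"] + h2["prob"], "tracks": tracks}

h1 = {"prob": 0.30, "tracks": [{"x": np.array([1.0, 2.0]), "P": np.eye(2)}]}
h2 = {"prob": 0.25, "tracks": [{"x": np.array([1.1, 2.1]), "P": 1.1 * np.eye(2)}]}
if len(h1["tracks"]) == len(h2["tracks"]) and all(
        tracks_similar(a, b) for a, b in zip(h1["tracks"], h2["tracks"])):
    print(merge(h1, h2))  # one hypothesis with probability 0.55
```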
Clustering
▪ spatially separated tracks with no common measurements in their
associated gates for some time-steps
▪ tracking performance mainly depends on the number of tracks present
within each hypothesis,
▪ division of these tracks into multiple independent local hypotheses, so-called
clusters, that do not interact with each other
▪ solving a number of small tracking problems instead of a big one,
▪ reduces the amount of required memory and computation time
significantly.
▪ all computations on individual clusters can be performed in parallel
without any synchronization
Multiple Hypothesis Tracking
▪ tracks of the same cluster share common measurements, whereas
tracks of different clusters do not.
▪ a cluster is completely defined by the set of tracks and measurements
it contains, as well as by the set of hypotheses
Cluster Formation
▪ a new measurement is associated with a cluster if it falls within the
gating region of any track contained in that cluster
▪ if a measurement cannot be associated with any existing cluster, a
new cluster containing this measurement is formed.
Multiple Hypothesis Tracking
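Cluster formation reduces to a gating test against every existing cluster; a minimal sketch, assuming tracks are summarized by predicted positions and using a plain Euclidean gate as a stand-in for the Mahalanobis gate shown earlier.

```python
import numpy as np

def assign_to_cluster(z, clusters, gate=3.0):
    """Associate measurement z with the first cluster in whose tracks' gates it
    falls; otherwise start a new cluster containing this measurement."""
    for cluster in clusters:
        if any(np.linalg.norm(z - t) <= gate for t in cluster["track_positions"]):
            cluster["measurements"].append(z)
            return cluster
    new_cluster = {"track_positions": [z.copy()], "measurements": [z]}
    clusters.append(new_cluster)
    return new_cluster

# two well-separated clusters; the second measurement starts a third cluster
clusters = [{"track_positions": [np.array([0.0, 0.0])], "measurements": []},
            {"track_positions": [np.array([50.0, 50.0])], "measurements": []}]
assign_to_cluster(np.array([1.0, 0.5]), clusters)
assign_to_cluster(np.array([100.0, 100.0]), clusters)
print(len(clusters))  # 3
```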
Cluster Combining
▪ if a new measurement gates with tracks of two or more different
clusters, these clusters are combined into one super cluster
▪ set of hypotheses of the super cluster is formed by building every
possible joint hypothesis of the hypotheses of the clusters to be
combined.
▪ joint hypothesis probabilities are calculated by multiplying the
probabilities of the combined hypotheses.
▪ in practice the hypothesis set of the super cluster has to be pruned.
▪ can be iteratively pruned after each combination of two clusters.
▪ so the total number of hypotheses that have to be generated for the
super cluster is significantly reduced.
Multiple Hypothesis Tracking
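Combining two clusters amounts to forming every joint hypothesis, multiplying probabilities and then pruning; a minimal sketch reusing the (probability, payload) representation from the pruning sketch above.

```python
from itertools import product

def combine_clusters(hyps_a, hyps_b, n_keep=10):
    """Super-cluster hypothesis set: every joint combination of the two clusters'
    hypotheses, with multiplied probabilities, pruned to the n_keep best."""
    joint = [(pa * pb, (ha, hb)) for (pa, ha), (pb, hb) in product(hyps_a, hyps_b)]
    joint.sort(key=lambda h: h[0], reverse=True)
    return joint[:n_keep]

cluster_a = [(0.7, "A1"), (0.3, "A2")]
cluster_b = [(0.6, "B1"), (0.4, "B2")]
print(combine_clusters(cluster_a, cluster_b, n_keep=3))
```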
Multiple Hypothesis Tracking
An example for the combination of two previously independent
clusters.
Cluster Splitting
▪ clusters can be reduced in the number of targets through the process
of cluster splitting
▪ a track can be removed from a cluster and put into its own cluster, if
there is a measurement in a scan which gates only with that particular
track and with no other track of any other cluster
Direct Hypothesis Generation
the conceptual MHT algorithm is reformulated so that the best hypotheses
can be determined directly instead of exhaustively enumerating all of
them
Multiple Hypothesis Tracking
References
[1] D. Geier, "Development and Evaluation of a Real-time Capable Multiple
Hypothesis Tracker", Technische Universität Berlin, 2012
[2] Y. Bar-Shalom, X. Rong Li, T. Kirubarajan, "Estimation with Applications to
Tracking and Navigation", Wiley, 2001
[3] U. Orguner, "Target Tracking", Lecture notes, Linköping University, 2010
[4] S.S. Blackman, R. Popoli, "Design and Analysis of Modern Tracking
Systems", Artech House, 1999
Other MOFT Tutorials – Lists and Links
Introduction to Multi Target Tracking
Bayesian Inference and Filtering
Kalman Filtering
Sequential Monte Carlo (SMC) Methods and Particle Filtering
Single Object Filtering Single Target Tracking
Nearest Neighbor(NN) and Probabilistic Data Association Filter(PDAF)
Multi Object Filtering Multi Target Tracking
Global Nearest Neighbor and Joint Probabilistic Data Association Filter
Data Association in Multi Target Tracking
Multiple Hypothesis Tracking, MHT
Other MOFT Tutorials – Lists and Links
Random Finite Sets, RFS
Random Finite Set Based RFS Filters
RFS Filters, Probability Hypothesis Density, PHD
RFS Filters, Cardinalized Probability Hypothesis Density, CPHD Filter
RFS Filters, Multi Bernoulli MemBer and Cardinality Balanced MeMBer, CBMemBer Filter
RFS Labeled Filters, Generalized Labeled Multi Bernoulli, GLMB and Labeled Multi Bernoulli, LMB Filters
Multiple Model Methods in Multi Target Tracking
Multi Target Tracking Implementation
Multi Target Tracking Performance and Metrics
http://www.egniya.com/EN/MOFT/Tutorials/
moft@egniya.com