Multi-Domain Diversity Preservation to
Mitigate Particle Stagnation and Enable Better Pareto Coverage
in Mixed-Discrete Particle Swarm Optimization
Weiyang Tong*, Souma Chowdhury#, and Achille Messac#
* Syracuse University, Department of Mechanical and Aerospace Engineering
# Mississippi State University, Department of Aerospace Engineering
The AIAA Aviation and Aeronautics Forum and Exposition
June 22-26, 2015
Dallas, TX
For citations, please refer to the journal version of this paper,
by Tong et al., "A Multi-Objective Mixed-Discrete Particle Swarm Optimization with Multi-Domain
Diversity Preservation", Structural and Multidisciplinary Optimization. DOI:10.1007/s00158-015-1319-8
Particle Swarm Optimization
2
Particle Swarm Optimization (PSO)
• was introduced by Eberhart and Kennedy in 1995
• was inspired by the swarm behavior observed in nature
• is a population based stochastic algorithm
Advantages of basic PSO:
• Fast convergence
• Ease of implementation
• Readily scalable to higher dimensions
 PSO often suffers from premature stagnation, which is mainly
attributed to the loss of population diversity during convergence.
 The leader-following scheme in the basic PSO dynamics is suited
for single-objective optimization.
Algorithm for single objective unconstrained continuous optimization
Outline
• Motivation and Research Objectives
• Search Strategy in MO-MDPSO
• Introduction of MO-MDPSO
• Scheme of Quantifying the Population Diversity
• Numerical Experiments
• Continuous Unconstrained Test Problems
• Continuous Constrained Test Problems
• Mixed-Discrete Test Problems
• Concluding Remarks
3
MO-MDPSO: Multi-Objective Mixed-Discrete PSO
Single Objective Mixed-Discrete PSO*
Position update:
$\mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t+1)$
Velocity update (inertia term + local search + global search + diverging-velocity term):
$\mathbf{v}_i(t+1) = w\,\mathbf{v}_i(t) + r_1 C_1\left(P_i^l(t) - \mathbf{x}_i\right) + r_2 C_2\left(P^g(t) - \mathbf{x}_i\right) + r_3\,\gamma_c\,\mathbf{v}_i(t)$
where
$w$ – inertia weight
$\mathbf{x}_i$ – position of a particle
$\mathbf{v}_i$ – velocity of a particle
$C_1$ – cognitive parameter
$C_2$ – social parameter
$t$ – generation
$r_1, r_2, r_3$ – random numbers between 0 and 1
$P_i^l$ – pbest of a particle
$P^g$ – gbest of the swarm
$\gamma_c$ – diversity preservation coefficient (scales the diverging-velocity term)
4
*Chowdhury et al (SMO, 2013)
A mixed-discrete PSO was developed to solve nonlinear constrained problems with a mix of
continuous and discrete/integer variables (in wind farm design and product family design).
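For concreteness, a minimal Python/NumPy sketch of this update for the continuous variables is given below; the function name, array shapes, and default parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mdpso_step(x, v, p_best, g_best, w=0.5, c1=1.5, c2=1.5, gamma_c=1.0, rng=None):
    """One MDPSO velocity/position update for the continuous variables.

    x, v     : (N, n) particle positions and velocities
    p_best   : (N, n) personal-best positions (pbest)
    g_best   : (n,)   best solution found by the swarm (gbest)
    gamma_c  : diversity preservation coefficient scaling the diverging-velocity term
    """
    rng = rng or np.random.default_rng()
    n_particles = x.shape[0]
    r1, r2, r3 = rng.random((3, n_particles, 1))   # per-particle random multipliers
    v_new = (w * v
             + r1 * c1 * (p_best - x)    # local (cognitive) search toward pbest
             + r2 * c2 * (g_best - x)    # global (social) search toward gbest
             + r3 * gamma_c * v)         # diverging-velocity (diversity) term
    return x + v_new, v_new
```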
Research Motivation
5
• A multi-objective version of the MDPSO algorithm was developed through
fundamental modifications to the leader selection strategy, and was successfully
tested on benchmark problems and practical engineering MOO problems *.
• However, previous research showed that the robustness of MO-MDPSO needs to be
further improved, where robustness refers to lower deviation in the final results over
multiple runs (less sensitive to the random parameters).
(MDPSO handles constrained, nonlinear, multiobjective problems with non-convex Pareto frontiers and mixed types of variables.)
• We hypothesized that a more coherent
interaction of the diversity preservation
technique and the leader selection
mechanism can improve the robustness of
the algorithm, while preserving its Pareto
capture and fast convergence capabilities.
*Tong et al. IDETC, 2014; SMO 2015.
Research Objective
• Investigate diversity-preservation schemes in MO-MDPSO that are also
cognizant of the multi-leader-follower behavior of the particle
population; here the aim is to:
1. Avoid stagnation and capture the complete Pareto frontier
2. Keep a desirably even distribution of Pareto solutions
3. Improve the robustness of the algorithm (i.e., demonstrate
lower deviation over multiple runs, or higher likelihood to
capture the actual Pareto front in any given run).
6
Search Strategies in Multi-Objective PSO
7
Search strategies in multi-objective PSO fall into four categories: aggregating function based, single objective based, hybrid with other techniques, and Pareto dominance based. Representative algorithms include:
• MOPSO by Parsopoulos and Vrahatis (2002)
• DNPSO by Hu and Eberhart (2002)
• NSPSO by Li (2003)
• MOPSO by Coello (2004)
• To best retain the original PSO dynamics, the Pareto-dominance-based strategy is selected here.
Basic PSO vs. MOPSO:
• Local leader – Basic PSO: the best solution based on a particle's own history. MOPSO: a local set of non-dominated historical solutions found within the local neighborhood.
• Global leader – Basic PSO: the best solution among all local best solutions. MOPSO: the global Pareto solutions obtained using all of the local sets.
[Figure: objective space (f1 vs f2) showing current and stored particles, their local Pareto sets, the actual Pareto boundary, and the infeasible region.]
Dynamics of MO-MDPSO
Position update:
$\mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t+1)$
Velocity update (inertia + local search + global search + diverging velocity, the last applied w.r.t. each particle's own global leader):
$\mathbf{v}_i(t+1) = w\,\mathbf{v}_i(t) + r_1 C_1\left(\mathbf{P}_i^l(t) - \mathbf{x}_i\right) + r_2 C_2\left(\mathbf{P}_i^g(t) - \mathbf{x}_i\right) + r_3\,\gamma_{c,i}\,\mathbf{v}_i(t)$
8
$\mathbf{P}_i^l$ – local leader of particle-i, selected from the local Pareto set
$\mathbf{P}_i^g$ – global leader of particle-i, determined by a stochastic process (multiple global leaders exist)
Crowding distance – used to manage the size of the local/global Pareto sets
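The crowding distance used to truncate the local/global Pareto sets is not defined on this slide; the sketch below uses the standard NSGA-II-style formulation as a stand-in (an assumption), with hypothetical helper names.

```python
import numpy as np

def crowding_distance(objs):
    """Crowding distance of each point in a set of non-dominated objective vectors.

    objs : (N, k) array of objective values.
    Boundary solutions get an infinite distance, so truncating a Pareto set by
    largest crowding distance keeps the extremes and thins out crowded regions.
    """
    n, k = objs.shape
    dist = np.zeros(n)
    for j in range(k):
        order = np.argsort(objs[:, j])
        f = objs[order, j]
        span = f[-1] - f[0]
        dist[order[0]] = dist[order[-1]] = np.inf   # keep extreme solutions
        if span > 0 and n > 2:
            dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist
```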
Scheme 1: Original Diversity Metrics (MO-MDPSO I)
Measured based on the spread of the entire swarm in design space
for all continuous variables:
$D_c = \left[\prod_{j=1}^{n} \frac{x_{max,j}(t) - x_{min,j}(t)}{X_{max,j} - X_{min,j}}\right]^{1/n}$
for each discrete variable:
$D_d^{\,j} = \frac{x_{max,j}(t) - x_{min,j}(t)}{X_{max,j} - X_{min,j}}$
where $x_{max,j}(t) - x_{min,j}(t)$ is the spread of the swarm along the jth dimension and $X_{max,j}$, $X_{min,j}$ are the upper and lower bounds along the jth dimension.
9
Considering the impact of outlier solutions, the diversity metrics are modified as
$\bar{D}_c = F_{\lambda}\, D_c$, and $\bar{D}_{d,i}^{\,j} = F_{\lambda_i}\, D_d^{\,j}$
where
$F_{\lambda_i} = \left[\lambda\, \frac{N_p + 1}{N_i + 1}\right]^{1/(m+n)}$
$\lambda$ is used to define the fractional domain w.r.t. the global leader of particle-i; $N_p$ is the number of candidate solutions per global leader, $N_i$ is the number of candidate solutions enclosed by the $\lambda$-fractional domain, and $n$ and $m$ are the numbers of continuous and discrete variables, respectively.
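A small sketch of how these Scheme 1 quantities could be computed follows; it reads the product form above as a geometric mean of the per-dimension normalized spreads, which is an interpretation of the slide rather than the authors' code, and all names are hypothetical.

```python
import numpy as np

def swarm_diversity_continuous(x, x_lower, x_upper):
    """Scheme 1 metric D_c: normalized spread of the entire swarm,
    taken as the geometric mean over the n continuous dimensions."""
    spread = (x.max(axis=0) - x.min(axis=0)) / (x_upper - x_lower)
    spread = np.clip(spread, 1e-12, 1.0)           # guard against log(0)
    return float(np.exp(np.log(spread).mean()))

def outlier_correction(lam, n_per_leader, n_enclosed, n_dims_total):
    """F_lambda_i = [lam * (N_p + 1) / (N_i + 1)]^(1 / (m + n))."""
    return (lam * (n_per_leader + 1) / (n_enclosed + 1)) ** (1.0 / n_dims_total)
```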
Original Multiple λ-Fractional Domains
10
• 7 global leaders present.
• With a total of 72
particles, ideally, each
fractional domain
should enclose around
10 particles.
• Particles located in the
overlapping regions
often tend to be re-
allocated between the
neighboring domains.
• Outlier particles are
allowed to have
marginal impact on
diversity.
[Figure: design variable space (X1, X2) showing the hypercube enclosing 72 candidate solutions, the particles and their global leaders, and overlapping λ = 0.25 fractional domains annotated with 13, 10, and 14 enclosed particles.]
Determining the Location of the Multi-domain
The boundaries of the $\lambda_i$-fractional domain are defined as
$x_i^{max,j} = \max\left\{ x_{min,j} + \lambda_i\,\delta x_j,\;\; \min\left( P_i^{g,j} + \tfrac{1}{2}\lambda_i\,\delta x_j,\; x_{max,j} \right) \right\}$
$x_i^{min,j} = \min\left\{ x_{max,j} - \lambda_i\,\delta x_j,\;\; \max\left( P_i^{g,j} - \tfrac{1}{2}\lambda_i\,\delta x_j,\; x_{min,j} \right) \right\}$
11
The corresponding diversity preservation coefficients are
• Continuous variables:
$\gamma_{c,i} = \gamma_{c0}\,\exp\left[-\frac{1}{2}\left(\frac{D_{c,i}}{\sigma_c}\right)^2\right]$, with $\sigma_c = \left[2\ln(1/\gamma_{min})\right]^{-1/2}$
• Discrete variables:
$\gamma_{d,i}^{\,j} = \gamma_{d0}\,\exp\left[-\frac{1}{2}\left(\frac{D_{d,i}^{\,j}}{\sigma_d}\right)^2\right]$, with $\sigma_d = \left[2\ln M_j\right]^{-1/2}$
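The boundary and coefficient expressions above translate almost directly into code; the following sketch assumes NumPy arrays, and uses the parameter values from the later experiment table ($\gamma_{c0}$ = 1.0, $\gamma_{min}$ = 1e-4) only as illustrative defaults.

```python
import numpy as np

def fractional_domain(x_min, x_max, leader, lam):
    """Boundaries of the lambda_i-fractional domain around one global leader.

    x_min, x_max : (n,) current swarm-wide minimum/maximum in each dimension
    leader       : (n,) position of the global leader of particle-i
    lam          : fraction lambda_i of the current spread delta_x
    """
    dx = x_max - x_min
    hi = np.maximum(x_min + lam * dx, np.minimum(leader + 0.5 * lam * dx, x_max))
    lo = np.minimum(x_max - lam * dx, np.maximum(leader - 0.5 * lam * dx, x_min))
    return lo, hi

def diversity_coefficient(d_ci, gamma_c0=1.0, gamma_min=1e-4):
    """Continuous diversity coefficient gamma_{c,i}: a monotonically
    decreasing function of the diversity metric D_{c,i}."""
    sigma_c = 1.0 / np.sqrt(2.0 * np.log(1.0 / gamma_min))
    return gamma_c0 * np.exp(-0.5 * (d_ci / sigma_c) ** 2)
```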
Scheme 2: Modified Diversity Metrics
Measured based on the spread of “followers” in design space, i.e., based on
the design vector of particles following the global leader of Particle-i:
$D_{c,i} = \left[\prod_{j=1}^{N_i^g} \frac{x_{max,j}(t) - x_{min,j}(t)}{X_{max,j} - X_{min,j}}\right]^{1/N_i^g}$
12
where $x_{max,j}(t) - x_{min,j}(t)$ is the spread of the followers along the jth dimension, $X_{max,j}$ and $X_{min,j}$ are the upper and lower bounds along the jth dimension, and $N_i^g$ is the number of particles following the global leader of particle-i.
• The scheme for handling discrete variables is the same as in MO-MDPSO.
• Outlier particles could improve the Pareto coverage in multi-objective
problems, and hence are accounted for in the diversity metric.
• The greater the spread of the followers (in the variable space) of a global
leader, the greater its diversity footprint (it has a more diverse team).
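As an illustration of the follower-based metric, the sketch below groups particles by the index of the global leader they follow and measures each team's normalized spread; the grouping helper and its signature are assumptions, not the authors' code.

```python
import numpy as np

def per_leader_diversity(x, leader_idx, x_lower, x_upper):
    """Diversity metric of each global leader's follower team (Scheme 2).

    x          : (N, n) positions of all particles
    leader_idx : (N,) index of the global leader each particle follows
    Returns {leader index: diversity of its follower team}, using the same
    normalized-spread measure as Scheme 1 but restricted to the followers,
    so outlier followers still contribute to the spread.
    """
    diversity = {}
    for g in np.unique(leader_idx):
        team = x[leader_idx == g]
        spread = (team.max(axis=0) - team.min(axis=0)) / (x_upper - x_lower)
        spread = np.clip(spread, 1e-12, 1.0)
        diversity[int(g)] = float(np.exp(np.log(spread).mean()))
    return diversity
```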
Modified Multiple Domains
13
• Ideally, each of the 7
global leaders should have
10 followers.
• Outlier particles are not
neglected
$\gamma_{c,i} = \gamma_{c0}\,\exp\left[-\frac{1}{2}\left(\frac{D_{c,i}}{\sigma_c}\right)^2\right]$, with $\sigma_c = \left[2\ln(1/\gamma_{min})\right]^{-1/2}$
• The population diversity is
measured based on the
spread of “followers” w.r.t
each global leader
[Figure: design variable space showing the follower team of each global leader; the example teams annotated contain 6 and 12 followers.]
[Figure: probability of retaining a follower versus the number of followers N_i^g, for Popsize = 100 and N_avg = 20, 30, and 40.]
Modified Global Leader Selection Mechanism
• $r$ is a random number, $r \in [0, 1]$
• if $r < p(N_i^g)$, particle-i keeps following its current global leader;
• otherwise, particle-i switches to a global leader from a neighboring domain.
14
$p(N_i^g) = \exp\left[-\left(\frac{80\,N_i^g\,\Gamma(1 + 1/\rho)}{N_p \times N_{avg}}\right)^{\rho}\right]$
− $N_{avg}$ is the expected average number of followers per global leader ($N_{avg} = N_p / N_g$)
− $N_i^g$ is the number of particles following the global leader of particle-i
Hence, global leaders whose number of followers significantly exceeds the
average number per leader will tend to lose followers – an evening out of
followers among the non-dominated solutions as the population converges.
This is a stochastic global-leader selection approach; $p(N_i^g)$ is the probability of retaining a follower.
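A sketch of this stochastic leader-selection rule is given below; the shape parameter ρ is not specified on the slide, so the default used here (ρ = 2) is purely an assumption, as are the function names.

```python
import numpy as np
from math import gamma as gamma_fn

def retain_probability(n_followers, pop_size, n_leaders, rho=2.0):
    """Probability p(N_i^g) that a particle keeps its current global leader.

    n_followers : N_i^g, followers of this particle's global leader
    pop_size    : N_p, swarm size
    n_leaders   : N_g, number of global leaders (N_avg = N_p / N_g)
    rho         : shape parameter of the generalized-normal form (assumed value)
    """
    n_avg = pop_size / n_leaders
    arg = 80.0 * n_followers * gamma_fn(1.0 + 1.0 / rho) / (pop_size * n_avg)
    return float(np.exp(-arg ** rho))

def maybe_switch_leader(n_followers, pop_size, n_leaders, rng=None):
    """Return True if particle-i should switch to a neighboring global leader."""
    rng = rng or np.random.default_rng()
    return rng.random() >= retain_probability(n_followers, pop_size, n_leaders)
```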
• Three classes of test problems are used to evaluate the performance of the
new MO-MDPSO algorithm (called MO-MDPSO II)
• Each test problem is run 30 times to compensate for the impact of
random parameters
• Sobol’s quasirandom sequence generator is used for the initial population
User-defined parameters in MO-MDPSO
Numerical Experiment
15
Parameter | Continuous unconstrained problems | Continuous constrained problems | Mixed-discrete constrained problems
$w$ | 0.5 | 0.5 | 0.5
$C_1$ | 1.5 | 1.5 | 1.5
$C_{c0}$ | 1.5 | 1.5 | 1.5
$\gamma_{c0}$ | 1.0 | 1.0 | 1.0
$\gamma_{min}$ | 1e-4 | 1e-6 | 1e-8
$\gamma_{d0}$ | NA | NA | 0.5, 1.0
$\lambda$ | 0.2 | 0.1 | 0.1
Local set size | 5 | 6 | 10
Global set size | 50 | 50 | Up to 100
Population size | 100 | 100 | min(5n, 100)
Performance Metric for Continuous Unconstrained Problems
Results: Continuous Unconstrained Problems
16
Columns (left to right): Function name; Accuracy γ – mean μ_γ and standard deviation σ_γ; Uniformity and Spread Δ – mean μ_Δ and standard deviation σ_Δ. For each statistic, the first value is for MO-MDPSO I and the second for MO-MDPSO II.
Fonseca 2 4.4e-3 4.2e-3 4.6e-4 1.2e-4 0.58 0.53 1.5e-2 5.4e-4
Coello 8.2e-3 8.4e-3 1.2e-3 3.2e-4 0.57 0.57 2.9e-2 3.2e-4
Schaffer 1 9.0e-3 5.2e-3 1.1e-3 4.3e-5 0.21 0.24 1.2e-2 8.5e-5
Schaffer 2 1.3e-2 1.2e-2 1.1e-3 4.1e-4 0.96 0.98 2.9e-3 9.1e-4
ZDT 1 8.9e-4 4.7e-4 1.2e-4 9.2e-5 0.20 0.19 4.0e-2 3.9e-5
ZDT 2 7.5e-4 8.4e-4 7.5e-3 1.2e-4 0.20 0.21 4.0e-2 1.5e-4
ZDT 3 4.2e-3 5.1e-3 4.0e-4 6.1e-4 0.54 0.45 4.0e-2 2.0e-5
ZDT 4 1.4 9.2e-1 2.0 1.3e-2 0.53 0.75 1.6e-2 6.9e-3
ZDT 6 4.6e-2 7.3e-3 5.3e-2 1.6e-4 0.60 0.62 2.4e-2 8.0e-4
γ: Measures the
closeness of the
computed Pareto
solutions to
actual Pareto
front
Δ: Measures the uniformity of the distribution
of the computed Pareto solutions (along the front)
and overall spread of the computed Pareto front.
The better values are marked in bold blue font in this Table.
Performance Metrics: Deb et al., IEEE Trans. on Evo. Comp., 2002;
Okabe et al, CEC, 2003
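For reference, the two indicators can be computed roughly as follows; this is a generic sketch of the γ (convergence) and Δ (uniformity/spread) metrics in the spirit of Deb et al., assuming a densely sampled true front and known extreme (anchor) points, not the exact implementation used in the paper.

```python
import numpy as np

def convergence_metric(front, true_front):
    """Accuracy gamma: mean distance from each computed Pareto point to its
    nearest point on a densely sampled true Pareto front."""
    d = np.linalg.norm(front[:, None, :] - true_front[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

def spread_metric(front, extreme_points):
    """Uniformity-and-spread Delta: small when consecutive solutions are evenly
    spaced and the computed front reaches the true extreme points.

    extreme_points : (2, k) true anchor points, ordered to match the front
                     sorted along the first objective.
    """
    front = front[np.argsort(front[:, 0])]           # sort along f1
    d = np.linalg.norm(np.diff(front, axis=0), axis=1)
    d_mean = d.mean()
    d_f = np.linalg.norm(front[0] - extreme_points[0])
    d_l = np.linalg.norm(front[-1] - extreme_points[1])
    return float((d_f + d_l + np.abs(d - d_mean).sum())
                 / (d_f + d_l + len(d) * d_mean))
```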
Comparison of Performance Indicators
17
[Figure: bar charts comparing MO-MDPSO I and MO-MDPSO II on the accuracy (γ) and uniformity-and-spread (Δ) indicators for ZDT 1, ZDT 2, ZDT 3, ZDT 4, and ZDT 6; lower values are better.]
MO-MDPSO II consistently offers performance that is 1-2 orders of
magnitude more robust than the original algorithm (MO-MDPSO I)
Plots: Continuous Constrained Problems
18
Problems shown: SRN, TNK, BNH, CONSTR, and KITA (each with 2 design variables).
The number of function evaluations used by MO-MDPSO is 10,000 (20,000 was used by NSGA-II).
[Figures: computed Pareto fronts for each of the five problems.]
• The MINLP problem adopted
from Dimkou and Papalexandri*
19
No. of design variables 6
No. of discrete variables 3 (binary)
Function evaluations 10,000
Population size 100
Elite size of MO-MDPSO 100
*: Dimkou and Papalexandri (1998)
Mixed-Discrete Constrained Test Problem 1
[Figure: Pareto fronts in the f1-f2 objective space obtained by MO-MDPSO I and by MO-MDPSO II.]
20
• The disc-brake design problem, adopted from
Osyczka and Kundu*
No. of design variables 4
No. of discrete variables 1 (integer)
Function evaluations 10,000
Population size 100
Elite size of MO-MDPSO 100
*: Osyczka and Kundu (1998)
Mixed-Discrete Constrained Test Problem 2
MO-MDPSO II retains its superior ability to better capture
extreme solutions (anchor points) compared to NSGA II.
Concluding Remarks
 A new multiobjective implementation of the Mixed-Discrete PSO algorithm
(called MO-MDPSO II) was developed and tested.
 In seeking to improve the algorithm robustness, the following two advancements
were made to the original MO-MDPSO:
 The diversity metric is defined based on the spread of particles (followers) following
each global leader (instead of the spread of the entire particle population).
 A stochastic leader selection is adopted that favors global leaders with fewer followers
(the greater the follower-team size, the greater the likelihood of its followers switching teams).
 For unconstrained problems, the new MO-MDPSO is observed to provide
significantly better robustness (lower std-dev over multiple runs).
 For constrained MOO and MO-MINLP problems, the new MO-MDPSO
algorithm retains its fast convergence and Pareto coverage capabilities.
 Owing to the involvement of a relatively high number of prescribed parameters,
future work should pursue a parametric analysis of this algorithm.
21
Acknowledgement
• I would like to thank the co-authors, Dr. Weiyang
Tong and Prof. Achille Messac for their substantial
contributions to this research.
• Support from the NSF Award CMMI 1437746 is
also acknowledged.
22
Thank You.
Any Questions/Comments?
23
PDF of Global Leader Selection Mechanism
• Starting from the generalized normal distribution:
$p = \frac{\rho}{2\alpha\,\Gamma(1/\rho)}\,\exp\left[-\left(\frac{|x - \mu|}{\alpha}\right)^{\rho}\right]$
where
$\alpha$ is the scale parameter;
$\rho$ is the shape parameter;
$\mu$ is the mean value; and
$\Gamma$ denotes the Gamma function, defined as $\Gamma(t) = \int_0^{\infty} x^{t-1} e^{-x}\, dx$.
• Requiring $p$ to range between 0 and 1, $\alpha$ becomes $\rho \,/\, \left(2\,\Gamma(1/\rho)\right)$.
• By adjusting the slope and the probability $p(N_{avg})$, we obtain
$p(N_i^g) = \exp\left[-\left(\frac{80\,N_i^g\,\Gamma(1 + 1/\rho)}{N_p \times N_{avg}}\right)^{\rho}\right]$
24
[Figure: the retention probability curve plotted for several values of ρ.]
Nadarajah, Journal of Applied Statistics, 2011
Solutions Comparison
if both solu-x and solu-y are infeasible
choose the one with the smaller constraint violation;
else if solu-x is feasible and solu-y is infeasible
choose solu-x;
else if solu-x is infeasible and solu-y is feasible
choose solu-y;
else both solu-x and solu-y are feasible
apply non-dominance comparison:
 solu-x strongly dominates solu-y if and only if
$f_k(x) < f_k(y), \;\forall k = 1, 2, \ldots, N$
 solu-x weakly dominates solu-y if
$f_k(x) \le f_k(y)$ for all $k$, with $f_k(x) < f_k(y)$ for at least one $k$
 solu-x and solu-y are mutually non-dominated when
$f_k(x) > f_k(y)$ for at least one $k$, and $f_k(x) < f_k(y)$ for at least one other $k$
25
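The comparison rules above can be written compactly; a sketch follows, assuming minimization and a scalar net-constraint violation per solution (names are hypothetical).

```python
def compare(objs_x, objs_y, viol_x, viol_y):
    """Pairwise comparison used when selecting leaders (a sketch of the rules
    above).  Returns 'x', 'y', or 'tie' (mutually non-dominated).

    objs_*  : sequences of objective values (minimization assumed)
    viol_*  : net constraint violation, 0 when feasible
    """
    if viol_x > 0 and viol_y > 0:                 # both infeasible
        return 'x' if viol_x < viol_y else 'y'    # smaller violation wins
    if viol_x == 0 and viol_y > 0:                # only x feasible
        return 'x'
    if viol_x > 0 and viol_y == 0:                # only y feasible
        return 'y'
    # both feasible: non-dominance comparison
    x_no_worse = all(fx <= fy for fx, fy in zip(objs_x, objs_y))
    y_no_worse = all(fy <= fx for fx, fy in zip(objs_x, objs_y))
    x_better = any(fx < fy for fx, fy in zip(objs_x, objs_y))
    y_better = any(fy < fx for fx, fy in zip(objs_x, objs_y))
    if x_no_worse and x_better:
        return 'x'                                # x (weakly) dominates y
    if y_no_worse and y_better:
        return 'y'                                # y (weakly) dominates x
    return 'tie'                                  # mutually non-dominated
```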
Multi-directional Diversity Preservation (cont’d)
• Continuous variables:
$\gamma_{c,i} = \gamma_{c0}\,\exp\left[-\frac{1}{2}\left(\frac{D_{c,i}}{\sigma_c}\right)^2\right]$, with $\sigma_c = \left[2\ln(1/\gamma_{min})\right]^{-1/2}$
The diversity coefficient for continuous variables applies a repulsion away from each of the global leaders.
• Discrete variables:
$\gamma_{d,i}^{\,j} = \gamma_{d0}\,\exp\left[-\frac{1}{2}\left(\frac{D_{d,i}^{\,j}}{\sigma_d}\right)^2\right]$, with $\sigma_d = \left[2\ln M_j\right]^{-1/2}$
where $M_j$ is the number of feasible values of the jth discrete variable.
• A stochastic update process is applied to help particles jump out of the local hypercube:
 if $r > \gamma_{d,i}^{\,j}$, use the nearest vertex approach (NVA);
 else, update randomly to the upper or lower bound of the local hypercube.
26
In both cases, the diversity coefficient is expressed as a monotonically decreasing function of the current diversity metric.
Discrete Variables and Constraints Handling
• The Nearest Vertex Approach (NVA) is used to deal with discrete variables, where a local hypercube is defined as
$\mathbf{H} = \left\{ [x_{L1}, x_{U1}],\, [x_{L2}, x_{U2}],\, \ldots,\, [x_{Lm}, x_{Um}] \right\}$
with $x_{Lj} < x_j < x_{Uj}, \;\forall j = 1, 2, \ldots, m$
• The net constraint is used to handle constraints, as given by
$f_c = \sum_{p=1}^{P} \max\left(g_p,\, 0\right) + \sum_{q=1}^{Q} \max\left(|h_q| - \epsilon,\, 0\right)$
27
where $g_p$ are the normalized inequality constraints, $h_q$ the normalized equality constraints, and $\epsilon$ a tolerance.
[Figure: illustration of the Nearest Vertex Approach.]
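A minimal sketch of the net constraint and the nearest-vertex rounding is given below; the absolute value on the equality constraints and the tolerance default are assumptions consistent with the annotations above, and the helper names are hypothetical.

```python
import numpy as np

def net_constraint(g_values, h_values, eps=1e-6):
    """Net constraint f_c combining normalized inequality constraints (g <= 0)
    and equality constraints (h = 0) with tolerance eps (assumed default)."""
    g = np.asarray(g_values, dtype=float)
    h = np.asarray(h_values, dtype=float)
    return float(np.maximum(g, 0.0).sum() + np.maximum(np.abs(h) - eps, 0.0).sum())

def nearest_vertex(x_discrete, lower_vertex, upper_vertex):
    """Nearest Vertex Approach: snap each relaxed discrete variable to the
    closer bound of its local hypercube [x_Lj, x_Uj]."""
    x = np.asarray(x_discrete, dtype=float)
    lo = np.asarray(lower_vertex, dtype=float)
    hi = np.asarray(upper_vertex, dtype=float)
    return np.where(x - lo <= hi - x, lo, hi)
```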