2. Lecture plan
Introduction to optimization and decision making
Multi-Criteria Decision Making (MCDM)
Optimization problems in general
Multi-Objective Optimization (MOO) model
Simplifying the MOO
AI meta-heuristics used in MOO, selected meta-heuristic MOO algorithms
Programming frameworks and libraries for MOO
Methods of taking into account the decision-maker's preferences in MOO
Performance and efficiency testing of MOO algorithms
Lecture 2
4. Single-Objective Optimization (SOO)
SOO is when we have just one optimization goal (objective) function
In real problems there is usually no pure SOO
Different objectives can be reduced to a single one by means of
• aggregated objective e.g. weighted average of different objectives
• handling some of the objectives as constraints
Problems can be constrained
The result is (in most cases) a single solution
5. Multi-Objective Optimization (MOO)
5
MOO is when there are two or more objectives
• for four or more objectives the term many-objective optimization is often used instead
Every objective is taken into account separately
MOO can also be constrained
It is rarely possible for a single solution to optimize all the objective functions at once
• so-called Pareto dominance is utilized to find solutions to MOO
6. Classic Pareto dominance
The concept of dominance was originally introduced by Vilfredo Pareto (1848–1923),
an Italian civil engineer and economist
In Pareto Multi-Objective Optimization Problem (MOOP)
• let’s say we have two MOOP solutions x and y
• we state that solution x dominates solution y if and only if
x is better than y for at least one objective
and x is not worse than y for all the other objectives
Minimization of every goal is assumed by default
The result is a set of solutions that are not Pareto dominated (non-dominated)
7. Pareto 80/20 rule (Pareto principle)
Another concept, the "80/20 rule" aka the "Pareto principle", was also named after
Vilfredo Pareto
• proposed in 1941 by J.M. Juran
• for many outcomes, roughly 80% of consequences come from 20% of causes
• V. Pareto observed that approximately 80% of the land in the Kingdom of Italy
was owned by 20% of the population
This principle applies to many economic, political and engineering issues
Pareto 80/20 rule ≠ Pareto dominance !!!
8. Classic Pareto dominance (formally)
Assumptions of classic Pareto dominance
• minimization for every goal function
• all the solutions are feasible (no direct constraint handling!)
In a Pareto MOOP (with n objectives) solution x dominates y if and only if
∀ i ∈ {1,…,n}: f_i(x) ≤ f_i(y) and ∃ j ∈ {1,…,n}: f_j(x) < f_j(y)
The result is a set of Pareto non-dominated (optimal) solutions, for which a Pareto Front (PF)
is created
Pareto non-dominated solutions are often called efficient
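A minimal sketch of this dominance test in Python (the helper name `dominates` and the toy objective vectors are illustrative; minimization of all objectives is assumed):

```python
import numpy as np

def dominates(fx, fy):
    """Classic Pareto dominance (minimization): x dominates y iff
    f(x) <= f(y) component-wise and f(x) < f(y) in at least one component."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

print(dominates([1, 2], [1, 3]))  # True: equal on f1, strictly better on f2
print(dominates([1, 3], [2, 2]))  # False: these two solutions are incomparable
```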
9. Strong and weak Pareto dominance
Strong Pareto dominance occurs when we get an improvement in all of the objectives:
∀ i ∈ {1,…,n}: f_i(x) < f_i(y) ⟺ f(x) ≺ f(y) ⟺ x ≺ y
Weak Pareto dominance occurs when Pareto dominance holds but is not strong:
∀ i ∈ {1,…,n}: f_i(x) ≤ f_i(y) and ∃ j ∈ {1,…,n}: f_j(x) < f_j(y) and ∃ k ∈ {1,…,n}, k ≠ j: f_k(x) = f_k(y) ⟺ f(x) ⪯ f(y) ⟺ x ⪯ y
Classic Pareto dominance is either weak or strong
13. Decision space vs. objective space
k: number of decision variables
n: number of objective functions
Decision space
• k-dimensional
• comprises the potential solutions to the problem (values of the decision variables)
Objective space (aka solution space)
• n-dimensional
• space in which the objective function vectors are represented
14. Decision space vs. objective space
[Figure: decision space mapped to the objective space; sources: http://dx.doi.org/10.1109/IPIN.2017.8115908 and https://doi.org/10.1016/B978-0-323-91781-0.00002-8]
15. Pareto optimal set vs. Pareto front
[Figure: Pareto optimal set (decision space) vs. Pareto front (objective space); source: https://doi.org/10.1016/j.asoc.2019.105631]
16. Constraints in MOO
Constraints can be considered for
• decision space
• objective space
• combinations of the two, or elements in between the decision and objective spaces
All constraints should be satisfied for a feasible solution
Feasible region represents the set of all solutions (in decision space) that satisfy the
constraints
Feasible Pareto Front is the part of the PF (in objective space) that is the image of the feasible
region
Basic Pareto-dominance definition does not have constraint handling built in
17. Basic constraint handling techniques in MOO
Penalty functions
• additional terms added to the objective function(s) to degrade their values when
a constraint is violated (see the sketch after this list)
Domination of constraints
• basic Pareto dominance can be extended to handle constraints by differentiating
the impact and level of violation of particular constraints
Brute force
• infeasible solutions/PF elements are removed from the final set of solutions/the final PF
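A minimal penalty-function sketch in Python; everything here (function names, the coefficient rho) is illustrative, assuming constraints of the form g_i(x) ≤ 0:

```python
import numpy as np

def penalized_objectives(f, g, x, rho=1e3):
    """Degrade all objective values proportionally to the total
    constraint violation; feasible solutions are left unchanged.
    f(x) -> vector of objective values (minimized)
    g(x) -> vector of constraint values, feasible iff g_i(x) <= 0
    rho  -> penalty coefficient (illustrative, problem-dependent)"""
    violation = np.sum(np.maximum(g(x), 0.0))  # zero for a feasible x
    return np.asarray(f(x)) + rho * violation
```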
18. Can we solve MOO problems in a deterministic way?
What does it mean to solve a MOO problem?
• to find the feasible Pareto optimal set of solutions (to determine the feasible Pareto Front)
Can we solve a MOO problem by a deterministic method?
• in general – no, we cannot
• in some particular cases – yes, we can, but only if strong assumptions are imposed
on the model, e.g. as in Multi-Objective Linear Programming (MOLP) or the
multi-objective gradient method
19. Multi-Objective Linear Programming (MOLP)
MOLP model
• objective (goal) functions, linearly dependent on the decision variables
• constraints, linearly dependent on the decision variables
• MOLP model example (see the sketch after this list)
• decision variables: x1, x2, x3
• objective functions: f1 = -x1 - 2x2 -> min
  f2 = -x1 + 2x3 -> min
  f3 = x1 - x3 -> min
• constraints: x1 + x2 ≤ 1
  x2 ≤ 2
  x1 - x2 + x3 ≤ 4
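A quick way to explore this model is to minimize each objective separately with an LP solver. The sketch below uses scipy.optimize.linprog and assumes all three constraints are of the "≤" type with non-negative variables:

```python
from scipy.optimize import linprog

A_ub = [[1, 1, 0],   # x1 + x2      <= 1
        [0, 1, 0],   #      x2      <= 2
        [1, -1, 1]]  # x1 - x2 + x3 <= 4
b_ub = [1, 2, 4]
objectives = {
    "f1": [-1, -2, 0],  # -x1 - 2x2       -> min
    "f2": [-1, 0, 2],   # -x1       + 2x3 -> min
    "f3": [1, 0, -1],   #  x1       -  x3 -> min
}
# Minimizing each objective on its own yields the "ideal" point, a lower
# bound on what any single Pareto optimal solution can achieve.
for name, c in objectives.items():
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(name, res.fun, res.x)
```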
20. MOLP - deterministic solvers
Benson’s algorithm
• finds the efficient extreme points in the outcome set
• a free Benson's solver (C, Matlab) for MOLP is available at http://bensolve.org/
Variations of SIMPLEX algorithm for MOLP
• pivot the basic solution until no further improvements are possible
21. MOLP – Benson’s algorithm (1)
The main tasks of a typical iteration of Benson's algorithm are as follows:
• solve a single linear program that is parameterized by a vertex of the current outer
approximation and
• apply vertex enumeration to obtain the vertices of the next outer approximation.
In certain situations, the execution time of the algorithm is dominated by the time
needed to solve the sequence of linear programs.
23. Original LP SIMPLEX algorithm
1. Find a basic LP solution
2. Improve the basic LP solution
• pivot the basis in one dimension – select a new basis!
• recalculate the solution and the current value of the goal function
3. Check if current solution is optimal
• yes -> THE END
• no -> go to 2.
24. MOLP – extended SIMPLEX algorithm
Similar to the original LP SIMPLEX, but:
• how to select an initial basis?
• how to pivot the basis?
• how to check if the solution is optimal?
26. Can we simplify MOO problems to solve them more easily?
Solving MOO in a deterministic fashion is not that easy
• we can try simplifying the MOO problem in order to utilize more straightforward solvers
Possible techniques of simplifying MOO
• Weighted Objectives Method
• Lexicographic or Hierarchical Optimization Method
• Epsilon-Constrained (aka Trade-Off) Method
Each simplifying technique converts MOO into SOO
27. Simplifying MOO - Weighted Objectives Method (1)
In the Weighted Objectives Method (WOM), instead of n objective functions, we introduce
and optimize a new single objective function G(x) using
• the values of the original n objective functions f_i(x), i = 1,…,n
• the weights w_i, i = 1,…,n, assigned to the objectives
G(x) = Σ_{i=1}^{n} w_i f_i(x), where w_i ∈ [0.0; 1.0] and Σ_{i=1}^{n} w_i = 1.0
28. Simplifying MOO - Weighted Objectives Method (2)
In WOM only G(x) is optimized
• we can use any method available for single-objective optimization (see the sketch below)
• graphically: the result lies at the intersection of the feasible solutions region with
the hyperplane determined by the weights w
• applying different weights -> different results
• WOM is fast and very simple to implement
• yet the biggest challenge is to determine the weights w
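A minimal WOM sketch in Python, assuming two illustrative objective functions and scipy's single-objective minimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative bi-objective problem, both objectives minimized.
f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2

def weighted_objectives(w, x0=np.zeros(2)):
    """Scalarize with weights w and solve the resulting SOO problem."""
    G = lambda x: w[0] * f1(x) + w[1] * f2(x)
    return minimize(G, x0).x

# Different weights -> different points, tracing part of the Pareto front.
for w1 in (0.1, 0.5, 0.9):
    print((w1, round(1 - w1, 1)), weighted_objectives((w1, 1 - w1)).round(3))
```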
29. Simplifying MOO – Lexicographic Optimization Method
Lexicographic Optimization Method (LOM) always returns a Pareto optimal solution
Instead of specifying weights we re-order the objective functions
• f1() the most important one
• …
• fn() the least important one
We proceed by finding the optimum for a single objective function at a time (see the sketch after this list)
• first we find the optimum only for the f1() objective function
• for the next (i-th) objective we find the optimum just for f_i(), adding new constraints
that fix the optimal values of all the preceding objective functions:
f_j(x) ≤ f_j(x_j*), where j = 1,…,i-1
• we stop the procedure if at some step the solution is just a single point
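A minimal LOM sketch, assuming illustrative objectives ordered from most to least important and scipy's constrained minimizer:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def lexicographic(objectives, x0):
    """Optimize one objective at a time; after each step, freeze its
    optimal value as a constraint f_j(x) <= f_j(x_j*) for later steps."""
    constraints, x = [], np.asarray(x0, dtype=float)
    for f in objectives:
        res = minimize(f, x, constraints=constraints)
        constraints.append(NonlinearConstraint(f, -np.inf, res.fun))
        x = res.x
    return x

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2  # most important
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2  # least important
print(lexicographic([f1, f2], x0=[0.0, 0.0]).round(3))
```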
31. Simplifying MOO – LOM, example (2)
Step 1
• f1 = 4x1 + x2 -> min
• result: BC line segment, f1* = 10.02
Step 2
• f2 = 2x1 + x2 -> min
• new constraint C5: f1 ≤ 10.02
• result: point B (x1* = 2.26; x2* = 0.947)
• f1* = 10.02; f2* = 5.47; f3* = 3.21
END
32. Simplifying MOO – Hierarchical Optimization Method
HOM extends LOM in the way the newly introduced constraints are built
• LOM: f_j(x) ≤ f_j(x_j*), where j = 1,…,i-1
• HOM: f_j(x) ≤ (1 - φ_j) · f_j(x_j*), where j = 1,…,i-1
• φ_j is the variance value allowed for the j-th objective function
(if φ_j = 0 then HOM reduces to LOM; see the sketch below)
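The same sketch adapted to HOM, using the slide's constraint f_j(x) ≤ (1 - φ_j)·f_j(x_j*); the names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def hierarchical(objectives, phis, x0):
    """LOM with relaxed constraints: each frozen optimum f_j(x_j*) is
    replaced by the bound (1 - phi_j) * f_j(x_j*), as in the lecture's
    formula; with all phi_j = 0 this reduces to plain LOM."""
    constraints, x = [], np.asarray(x0, dtype=float)
    for f, phi in zip(objectives, phis):
        res = minimize(f, x, constraints=constraints)
        constraints.append(NonlinearConstraint(f, -np.inf, (1 - phi) * res.fun))
        x = res.x
    return x
```

It is called like the LOM sketch above, with an additional list of φ values, one per objective.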
33. Simplifying MOO – LOM & HOM: pros & cons
Advantages
• quite handy: we can use it instead of specifying weights
• does not require normalized objective functions
• always provides a Pareto optimal solution
Disadvantages
• several single-objective problems have to be solved to obtain just one solution
point
• additional constraints have to be imposed
• changing the order of the objective functions returns a different result
(a different Pareto optimal solution)
34. Simplifying MOO – Epsilon-Constrained Method
Again, as in LOM/HOM, we select just one (the r-th) objective to be optimized
• all the other objectives (i = 1,…,n, i ≠ r) are turned into (n-1) new constraints: f_i(x) ≤ ε_i
• the threshold values ε_i that can be accepted for the i-th objective functions must be
established, which limits the objective space
The method is simple to implement and fast (see the sketch below)
But we lose the possibility to really optimize most of the objectives
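A minimal epsilon-constrained sketch (here r = 1, so f1 is optimized while f2 is bounded by an illustrative threshold eps):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2  # the optimized (r-th) objective
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2  # turned into a constraint

def eps_constrained(eps, x0=np.zeros(2)):
    """Minimize f1 subject to f2(x) <= eps."""
    con = NonlinearConstraint(f2, -np.inf, eps)
    return minimize(f1, x0, constraints=[con]).x

# Different thresholds select different trade-offs.
for eps in (0.25, 0.5, 1.0):
    print(eps, eps_constrained(eps).round(3))
```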
36. How in general can we solve MOO?
Deterministic MOO methods
• limited to specific types of models e.g. purely linear ones
Simplification of MOO (MOO -> SOO)
• cannot provide a complete non-dominated solution set
• requires some kind of preference information from the decision maker (DM)
Thus the meta-heuristic approach can be a good solution for MOO
• good approximation of the Pareto front
• universal – able to handle any MOO problem
• not as fast or as simple as some deterministic/simplified approaches
• by definition, cannot guarantee providing a solution at all
37. Meta-heuristics for MOO
Multi-Objective Meta-Heuristics (MOMH) are AI methods able to solve real-life
multi-objective problems, i.e. to approximate their Pareto Fronts; examples include
(see the sketch after this list):
• Genetic algorithm (GA)/ Evolutionary algorithm (EA)
• Ant Colony Optimization (ACO)
• Artificial Bee Colony (ABC)
• Grey Wolf Optimization (GWO)
• Particle Swarm Optimization (PSO)
• Differential Evolution (DE)
• Memetic Algorithm (MA)
• Local Search (LS)
• Harmony Search (HS)
• Simulated Annealing (SA)
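As a taste of how such meta-heuristics are used in practice, a minimal sketch with the pymoo library (NSGA-II, an EA-family algorithm, on a standard benchmark; import paths as in recent pymoo releases):

```python
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")    # classic bi-objective benchmark problem
algorithm = NSGA2(pop_size=100)  # evolutionary multi-objective meta-heuristic
res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(res.F[:5])                 # sample of the approximated Pareto front
```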
38. Meta-heuristics in AI world
[Diagram: meta-heuristics (GA & EA, ACO, PSO, SA & TS) placed within the Artificial Intelligence (AI) landscape, alongside Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), and Generative AI]
39. AI timeline
Place of Multi-Objective Meta-Heuristics (MOMH) in AI
ANN perspective:
• 1943 – model of the binary neuron
• 1951 – first neural network computer (SNARC)
• 1958 – Perceptron & early MLP
• 1959 – ADALINE & MADALINE
• 1972 – early RNN
• 1979-80 – Neocognitron & early CNN
• 1986 – "Deep Learning"
• 1992 – early Transformer
• 2017 – Transformer
• 2018-2024 – GPT-1, …, GPT-4
Meta-heuristics + RL perspective:
• 1952 – stochastic optimization
• 1963 – random search
• 1966 – evolutionary programming
• 1975 – GA
• 1981 – RL
• 1983 – SA
• 1986 – TS
• 1991 – EA
• 1992 – ACO
• 1995 – PSO
• 1996-2024 – MOMH
40. Thank you for your attention
joanna.szlapczynska@pg.edu.pl