IN THE NAME OF GOD
Isfahan University of Technology
Department of Electrical and Computer Engineering
Title:
Stochastic Optimization
Course instructor:
Dr. Naghmeh Sadat Moayedian
By:
Mohammad Reza Jabbari
July 2018
CONTENTS
1. Introduction
2. Applications
3. Probabilistically Constrained Models
4. Recourse Models
5. Conclusion
6. End
Contact Info: Mo.re.Jabbari@gmail.com
1. Introduction

 Definition
 Many decision problems in business and social systems can be modeled using mathematical optimization, which seeks to maximize or minimize some objective that is a function of the decisions.
 The feasible decisions are constrained by limits on resources, minimum requirements, etc.
 Objectives and constraints are functions of the decision variables and of problem data such as costs, production rates, sales, or capacities.
Optimization Problems

 Deterministic Optimization Problems are formulated with known parameters.
 Stochastic Optimization Problems are mathematical programs in which some of the data incorporated into the objective or constraints is uncertain.

Real-world problems almost invariably include some unknown and uncertain parameters.

(Slide diagram: Optimization Problems branch into Deterministic Optimization, Stochastic Optimization, and Robust Optimization.)
 When some of the data are random, the optimal solutions and the optimal value of the optimization problem are themselves random: they follow a probability distribution.
 A distribution of optimal decisions is generally unimplementable. (What do you tell your boss?)
 Ideally, we would like one decision and one optimal objective value.
 Problem-Solving Methods
1. Probabilistically Constrained Models: try to find a decision which ensures that a set of constraints will hold with a certain probability. (Two variants: disjoint probabilistic constraints and joint probabilistic constraints.)
2. Recourse Models: one logical way to pose the problem is to require that we make one decision now and minimize the expected costs (or utilities) of the consequences of that decision. (Two variants: two-stage and multi-stage.)
 The purpose of both methods is the same: to convert the Stochastic Optimization Problem into a Deterministic Equivalent Problem (DEP). The DEP can then be solved using linear or nonlinear optimization methods.
2. Applications

 Sustainability and power planning
 Supply chain management
 Network optimization
 Logistics
 Financial management
 Location analysis
 etc.
 Example 1:
 Suppose that a factory can simultaneously produce two products, in quantities $x_1$ and $x_2$, subject to the constraints below:
1. The production cost per unit of the first and second product is $c_1 = 2$ and $c_2 = 3$, respectively.
2. The maximum storage capacity for raw materials is $b = 100$.

$$\begin{aligned}
\text{minimize } \ & z = 2x_1 + 3x_2 \\
\text{subject to } \ & x_1 + x_2 \le 100 \\
& 2x_1 + 6x_2 \ge 180 \\
& 3x_1 + 3x_2 \ge 162 \\
& x_1 \ge 0, \ x_2 \ge 0
\end{aligned}$$

with optimal solution $x_1^* = 36$, $x_2^* = 18$, $p^* = 126$.
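As a quick check (our sketch, not part of the original slides), this deterministic LP can be solved with the same CVX toolbox used later in the presentation:

cvx_begin
    variables x1 x2              % production quantities
    minimize( 2*x1 + 3*x2 )      % production cost
    subject to
        x1 + x2 <= 100;          % storage capacity
        2*x1 + 6*x2 >= 180;      % minimum requirement 1
        3*x1 + 3*x2 >= 162;      % minimum requirement 2
        x1 >= 0; x2 >= 0;
cvx_end
% returns x1 = 36, x2 = 18, cvx_optval = 126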
!! Note that assuming parameters such as the minimum demand or the cost per unit of raw material are known constants is not realistic. Each of these parameters and coefficients can therefore be a random variable:

$$h_1 = 180 + \xi_1, \qquad a_{21} = 2 + \eta_1$$
$$h_2 = 162 + \xi_2, \qquad a_{32} = 3 + \eta_2$$

where
$$\xi_1 \sim N(0, 12), \quad \eta_1 \sim U(-0.8,\ 0.8), \quad \xi_2 \sim N(0, 9), \quad \eta_2 \sim \text{Exponential}(\lambda = 2.5)$$
 The random variables $\xi_1$, $\xi_2$, and $\eta_2$ are unbounded, so we restrict each to an interval that covers it with a given reliability level (95%):
$$\xi_1 \in [-30.94,\ 30.94], \qquad \xi_2 \in [-23.18,\ 23.18], \qquad \eta_2 \in [0,\ 1.84],$$
while $\eta_1 \in [-0.8,\ 0.8]$ is bounded by definition.
 Define the random vector $\boldsymbol{w} = (\xi_1, \xi_2, \eta_1, \eta_2)$ and rewrite the problem, where $\alpha(\boldsymbol{w}) = 2 + \eta_1$ and $\beta(\boldsymbol{w}) = 3 + \eta_2$:

$$\begin{aligned}
\text{minimize } \ & z = 2x_1 + 3x_2 \\
\text{subject to } \ & x_1 + x_2 \le 100 \\
& \alpha(\boldsymbol{w})\,x_1 + 6x_2 \ge h_1(\boldsymbol{w}) \\
& 3x_1 + \beta(\boldsymbol{w})\,x_2 \ge h_2(\boldsymbol{w}) \\
& x_1 \ge 0, \ x_2 \ge 0
\end{aligned}$$

Before the random parameters are realized (i.e., before a particular realization of $\boldsymbol{w}$ is known), the optimal value of the problem cannot be calculated!
3. Probabilistically Constrained Models

 Definition:
 Instead of insisting that the constraints of the problem hold for all values of the random variables, we require that they hold with a certain reliability level. This is called an optimization problem with probabilistic constraints. Two forms arise:

1. Joint probabilistic constraint:
$$P\Big\{\sum_{j=1}^{n} a_{ij}(w)\,x_j \ge b_i(w),\ i = 1,2,\dots,m\Big\} \ge \alpha, \qquad \alpha \in [0,1]$$

2. Disjoint probabilistic constraints:
$$P\Big\{\sum_{j=1}^{n} a_{ij}(w)\,x_j \ge b_i(w)\Big\} \ge \alpha_i,\ \ i = 1,2,\dots,m, \qquad \alpha_i \in [0,1]$$
For Example 1, the stochastic program and its two chance-constrained formulations are:

Stochastic program (random data, no reliability level yet):
$$\begin{aligned}
\text{minimize } \ & z = 2x_1 + 3x_2 \\
\text{subject to } \ & x_1 + x_2 \le 100 \\
& \alpha(\boldsymbol{w})\,x_1 + 6x_2 \ge h_1(\boldsymbol{w}) \\
& 3x_1 + \beta(\boldsymbol{w})\,x_2 \ge h_2(\boldsymbol{w}) \\
& x_1 \ge 0, \ x_2 \ge 0
\end{aligned}$$

With a joint probabilistic constraint:
$$\begin{aligned}
\text{minimize } \ & z = 2x_1 + 3x_2 \\
\text{subject to } \ & x_1 + x_2 \le 100 \\
& P\{\boldsymbol{w} \mid \alpha(\boldsymbol{w})\,x_1 + 6x_2 \ge h_1(\boldsymbol{w}),\ 3x_1 + \beta(\boldsymbol{w})\,x_2 \ge h_2(\boldsymbol{w})\} \ge 0.95 \\
& x_1 \ge 0, \ x_2 \ge 0
\end{aligned}$$

With disjoint probabilistic constraints:
$$\begin{aligned}
\text{minimize } \ & z = 2x_1 + 3x_2 \\
\text{subject to } \ & x_1 + x_2 \le 100 \\
& P\{\boldsymbol{w} \mid \alpha(\boldsymbol{w})\,x_1 + 6x_2 \ge h_1(\boldsymbol{w})\} \ge 1 - \alpha_1 \\
& P\{\boldsymbol{w} \mid 3x_1 + \beta(\boldsymbol{w})\,x_2 \ge h_2(\boldsymbol{w})\} \ge 1 - \alpha_2 \\
& x_1 \ge 0, \ x_2 \ge 0
\end{aligned}$$
 Joint Probabilistic Constraints
 In optimization problems with joint probabilistic constraints, conditions are often imposed on the probability distribution in order to obtain a DEP. The purpose of these conditions is to guarantee convexity of the feasible set or of the objective function. The key concept is logarithmic concavity.

 Suppose the coefficients $a_{ij}$ are deterministic and the parameters $b_i$ ($i = 1,\dots,m_1$) are independent random variables with logarithmically concave probability measures $P_i$ and corresponding distribution functions $F_i$. Then

$$P\{Ax \ge b\} \ge \alpha \;\Longleftrightarrow\; \prod_{i=1}^{m_1} P_i\{A_i x \ge b_i\} \ge \alpha \;\Longleftrightarrow\; \prod_{i=1}^{m_1} F_i(A_i x) \ge \alpha \;\Longleftrightarrow\; \sum_{i=1}^{m_1} \ln\big(F_i(A_i x)\big) \ge \ln(\alpha)$$

 To show that this equivalent constraint is convex, we only need to show that each probability distribution function $F_i$ is logarithmically concave.
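For instance (a worked check we add here, not on the original slides), the exponential distribution used later in Example 2 satisfies this condition: for $F(t) = 1 - e^{-\lambda t}$, $t \ge 0$,

$$\frac{d^2}{dt^2}\,\ln\big(1 - e^{-\lambda t}\big) = -\,\frac{\lambda^2 e^{-\lambda t}}{\big(1 - e^{-\lambda t}\big)^2} < 0,$$

so $\ln F$ is concave, and the constraint $\sum_i \ln(F_i(A_i x)) \ge \ln(\alpha)$ is a concave function of $x$ bounded below by a constant, i.e., a convex constraint.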
 Disjoint Probabilistic Constraints

$$\begin{aligned}
\text{maximize } \ & z = \sum_{j=1}^{n} c_j x_j \\
\text{subject to } \ & P\Big\{\sum_{j=1}^{n} a_{ij} x_j \le b_i\Big\} \ge 1 - \alpha_i, \quad i = 1,\dots,m \\
& \alpha_i \in (0,1), \quad x_j \ge 0, \quad i = 1,\dots,m, \ j = 1,\dots,n
\end{aligned}$$

 In fact, the $i$-th constraint must be satisfied with probability at least $1 - \alpha_i$.
 To transform this problem into a deterministic equivalent, seven cases arise, depending on which of the problem parameters are random. It suffices to examine the four independent cases; the remaining cases are obtained as combinations of them.
 State 1:
• Assume that only the $c_j$'s are random variables, with means $E\{c_j\}$. In this case it is enough to replace each $c_j$ with its mean:

$$\begin{aligned}
\text{maximize } \ & z = \sum_{j=1}^{n} E\{c_j\}\,x_j \\
\text{subject to } \ & \sum_{j=1}^{n} a_{ij} x_j \le b_i, \quad i = 1,\dots,m \\
& x_j \ge 0, \quad j = 1,\dots,n
\end{aligned}$$
 State 2:
• Assume the $a_{ij}$'s are correlated random variables with means $E\{a_{ij}\}$, variances $Var\{a_{ij}\}$, and covariances $Cov(a_{ij}, a_{i'j'})$, and that the problem has the form

$$\begin{aligned}
\text{maximize } \ & z = \sum_{j=1}^{n} c_j x_j \\
\text{subject to } \ & P\Big\{\sum_{j=1}^{n} a_{ij} x_j \le b_i\Big\} \ge 1 - \alpha_i, \quad i = 1,\dots,m \\
& \alpha_i \in (0,1), \quad x_j \ge 0, \quad i = 1,\dots,m, \ j = 1,\dots,n
\end{aligned}$$

• Define $T_i = \sum_{j=1}^{n} a_{ij} x_j$. Then
$$E\{T_i\} = \sum_{j=1}^{n} E\{a_{ij}\}\,x_j, \quad i = 1,\dots,m$$
$$Var\{T_i\} = \sum_{m=1}^{n} Var\{a_{im}\}\,x_m^2 + \sum_{m=1}^{n} \sum_{\substack{k=1 \\ k \ne m}}^{n} Cov(a_{im}, a_{ik})\,x_m x_k$$
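Equivalently (our illustration, with hypothetical data), $Var\{T_i\}$ is the quadratic form $x^T \Sigma_i\, x$, where $\Sigma_i$ is the covariance matrix of $(a_{i1},\dots,a_{in})$:

% Sketch: Sigma is an assumed example covariance matrix, not slide data.
Sigma = [4.0 0.5 0.0;
         0.5 1.0 0.2;
         0.0 0.2 2.5];
x = [1; 2; 3];
varT = x' * Sigma * x;   % equals the double-sum formula above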
• By the Central Limit Theorem, $T_i$ is approximately Gaussian, so
$$P\{T_i \le b_i\} = 1 - Q\left(\frac{b_i - E\{T_i\}}{\sqrt{Var\{T_i\}}}\right), \quad i = 1,\dots,m,$$
where $Q$ is the standard normal tail function. Defining $K_{\alpha_i}$ by $Q(K_{\alpha_i}) = \alpha_i$, the probabilistic constraint becomes
$$\frac{b_i - E\{T_i\}}{\sqrt{Var\{T_i\}}} \ge K_{\alpha_i} \;\Longleftrightarrow\; E\{T_i\} + K_{\alpha_i}\sqrt{Var\{T_i\}} \le b_i, \quad i = 1,\dots,m.$$
 If the $a_{ij}$'s are independent, then $Cov(a_{ij}, a_{i'j'}) = 0$, and the deterministic equivalent constraint that replaces the probabilistic one is
$$\sum_{j=1}^{n} E\{a_{ij}\}\,x_j + K_{\alpha_i}\sqrt{\sum_{j=1}^{n} Var\{a_{ij}\}\,x_j^2} \le b_i, \quad i = 1,\dots,m.$$
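This deterministic equivalent is a second-order cone constraint, so it can be written directly in CVX via norm(). A minimal sketch for the independent case, where Ea, Va, K, and b are assumed example values (ours, not from the slides):

% E{a_j}, Var{a_j}, K_alpha, b: hypothetical example data.
Ea = [1 3 9];   Va = [25 16 4];   K = 1.645;   b = 8;
cvx_begin
    variable x(3)
    maximize( sum(x) )    % any linear objective
    subject to
        Ea*x + K*norm( sqrt(Va)' .* x ) <= b;   % E{T} + K*sqrt(Var{T}) <= b
        x >= 0;
cvx_end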
 State 3:
• Assume the $b_i$'s are Gaussian with means $E\{b_i\}$ and variances $Var\{b_i\}$, and that the problem has the form

$$\begin{aligned}
\text{maximize } \ & z = \sum_{j=1}^{n} c_j x_j \\
\text{subject to } \ & P\Big\{\sum_{j=1}^{n} a_{ij} x_j \le b_i\Big\} \ge \alpha_i, \quad i = 1,\dots,m \\
& \alpha_i \in (0,1), \quad x_j \ge 0, \quad i = 1,\dots,m, \ j = 1,\dots,n
\end{aligned}$$

• As in the previous state,
$$\frac{\sum_{j=1}^{n} a_{ij} x_j - E\{b_i\}}{\sqrt{Var\{b_i\}}} \le K_{\alpha_i} \;\Longleftrightarrow\; \sum_{j=1}^{n} a_{ij} x_j \le E\{b_i\} + K_{\alpha_i}\sqrt{Var\{b_i\}}, \quad i = 1,\dots,m.$$
 State 4:
• Assume that both the $b_i$'s and the $a_{ij}$'s are random variables with means $E\{b_i\}$, $E\{a_{ij}\}$ and variances $Var\{b_i\}$, $Var\{a_{ij}\}$, and that the problem has the form

$$\begin{aligned}
\text{maximize } \ & z = \sum_{j=1}^{n} c_j x_j \\
\text{subject to } \ & P\Big\{\sum_{j=1}^{n} a_{ij} x_j - b_i \le 0\Big\} \ge 1 - \alpha_i, \quad i = 1,\dots,m \\
& \alpha_i \in (0,1), \quad x_j \ge 0, \quad i = 1,\dots,m, \ j = 1,\dots,n
\end{aligned}$$

• Define $h_i \triangleq \sum_{j=1}^{n} a_{ij} x_j - b_i$. Then
$$E\{h_i\} = \sum_{j=1}^{n} E\{a_{ij}\}\,x_j - E\{b_i\}, \quad i = 1,\dots,m$$
$$Var\{h_i\} = \tilde{x}^T D_i\,\tilde{x}, \qquad \tilde{x} \triangleq (x_1,\dots,x_n,\,-1)^T, \quad i = 1,\dots,m,$$
where $D_i$ is the covariance matrix of $(a_{i1},\dots,a_{in},\,b_i)$:
$$D_i = \begin{pmatrix}
Var(a_{i1}) & Cov(a_{i1}, a_{i2}) & \cdots & Cov(a_{i1}, a_{in}) & Cov(a_{i1}, b_i) \\
Cov(a_{i2}, a_{i1}) & Var(a_{i2}) & \cdots & Cov(a_{i2}, a_{in}) & Cov(a_{i2}, b_i) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
Cov(a_{in}, a_{i1}) & Cov(a_{in}, a_{i2}) & \cdots & Var(a_{in}) & Cov(a_{in}, b_i) \\
Cov(b_i, a_{i1}) & Cov(b_i, a_{i2}) & \cdots & Cov(b_i, a_{in}) & Var(b_i)
\end{pmatrix}$$

• By the Central Limit Theorem, $h_i$ is approximately Gaussian, so
$$P\{h_i \le 0\} = 1 - Q\left(\frac{-E\{h_i\}}{\sqrt{Var\{h_i\}}}\right) \ge 1 - \alpha_i \;\Longleftrightarrow\; \frac{-E\{h_i\}}{\sqrt{Var\{h_i\}}} \ge K_{\alpha_i}$$
• Therefore, the deterministic equivalent constraint is
$$E\{h_i\} + K_{\alpha_i}\sqrt{Var\{h_i\}} \le 0.$$

 The remaining three states are combinations of the four previous states and are easily obtained:
• State 5: the $c_j$'s and $a_{ij}$'s are random
• State 6: the $c_j$'s and $b_i$'s are random
• State 7: the $c_j$'s, $a_{ij}$'s, and $b_i$'s are random
 Example 2:
 Consider the following optimization problem:

$$\begin{aligned}
\text{maximize } \ & z = 5x_1 + 6x_2 + 3x_3 \\
\text{subject to } \ & P\{a_{11} x_1 + a_{12} x_2 + a_{13} x_3 \le 8\} \ge 0.95 \\
& P\{5x_1 + x_2 + 6x_3 \le b_2\} \ge 0.1 \\
& P\{4x_1 + 2x_2 - 3x_3 \ge b_3,\ 2x_1 + 3x_2 + x_3 \ge b_4\} \ge 0.85 \\
& x_j \ge 0, \quad j = 1,2,3
\end{aligned}$$

where $a_{11} \sim N(1, 25)$, $a_{12} \sim N(3, 16)$, $a_{13} \sim N(9, 4)$, $b_2 \sim N(7, 9)$, and $b_3, b_4 \sim \text{Exponential}(\lambda = 1)$.

• 1st constraint ≡ State 2: with $Q(K_{0.05}) = 0.05 \Rightarrow K_{0.05} = Q^{-1}(0.05) = 1.645$,
$$x_1 + 3x_2 + 9x_3 + 1.645\sqrt{25x_1^2 + 16x_2^2 + 4x_3^2} \le 8.$$
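The normal quantiles used here can be reproduced in MATLAB (a quick sketch; norminv is from the Statistics and Machine Learning Toolbox):

% Q is the standard normal tail, so Q^{-1}(alpha) = norminv(1 - alpha).
K05 = norminv(1 - 0.05)   % = 1.6449, rounded to 1.645 on the slides
K10 = norminv(1 - 0.10)   % = 1.2816 (the slides use 1.285)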
• 2nd constraint ≡ State 3: with $Q(K_{0.1}) = 0.1 \Rightarrow K_{0.1} = Q^{-1}(0.1) = 1.285$,
$$5x_1 + x_2 + 6x_3 \le 7 + 1.285\sqrt{9} = 10.855.$$

• 3rd constraint ≡ joint probabilistic constraint: since $b_3, b_4$ are exponential,
$$F_b(A_i x) = \int_0^{A_i x} e^{-t}\,dt = 1 - e^{-A_i x},$$
so the constraint becomes
$$\ln\big(1 - e^{-(4x_1 + 2x_2 - 3x_3)}\big) + \ln\big(1 - e^{-(2x_1 + 3x_2 + x_3)}\big) \ge \ln(0.85).$$

 So the DEP is:
$$\begin{aligned}
\text{maximize } \ & z = 5x_1 + 6x_2 + 3x_3 \\
\text{subject to } \ & x_1 + 3x_2 + 9x_3 + 1.645\sqrt{25x_1^2 + 16x_2^2 + 4x_3^2} \le 8 \\
& 5x_1 + x_2 + 6x_3 \le 10.855 \\
& \ln\big(1 - e^{-(4x_1+2x_2-3x_3)}\big) + \ln\big(1 - e^{-(2x_1+3x_2+x_3)}\big) \ge \ln(0.85) \\
& x_j \ge 0, \quad j = 1,2,3
\end{aligned}$$
cvx_begin
    variable x(3)
    maximize( 5*x(1) + 6*x(2) + 3*x(3) )
    subject to
        % norm([5*x(1) 4*x(2) 2*x(3)]) encodes sqrt(25*x1^2 + 16*x2^2 + 4*x3^2)
        x(1) + 3*x(2) + 9*x(3) + 1.645*norm([5*x(1) 4*x(2) 2*x(3)]) <= 8;
        5*x(1) + x(2) + 6*x(3) <= 10.855;
        log(1-exp(-(4*x(1)+2*x(2)-3*x(3)))) + log(1-exp(-(2*x(1)+3*x(2)+x(3)))) >= log(0.85);
        x >= 0;
cvx_end
Solving the DEP above with CVX yields
$$x^* = \begin{pmatrix} 0.4625 \\ 0.6327 \\ 1.976 \times 10^{-9} \end{pmatrix}, \qquad p^* = 6.1087.$$

Note that $\sqrt{25x_1^2 + 16x_2^2 + 4x_3^2} \equiv \big\|(5x_1,\ 4x_2,\ 2x_3)\big\|_2$, which is why the first constraint is written with norm() in the code.
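As a sanity check (our sketch, not from the slides), the first chance constraint can be verified at $x^*$ by Monte Carlo simulation in base MATLAB:

% Estimate P{ a11*x1 + a12*x2 + a13*x3 <= 8 } at the optimum.
x = [0.4625; 0.6327; 0];
N = 1e6;
a = [1 + 5*randn(N,1), 3 + 4*randn(N,1), 9 + 2*randn(N,1)];   % N(1,25), N(3,16), N(9,4)
p_hat = mean(a*x <= 8)    % should come out at approximately 0.95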
4. Recourse Models

 Definition: Recourse models are those in which some decisions, or recourse actions, can be taken after the uncertainty is disclosed. A stochastic optimization problem with recourse thus involves anticipative variables (fixed before the outcome is known) and adaptive variables (chosen afterwards).

 The set of decisions is then divided into two groups:
• A number of decisions have to be taken before the experiment. These are called first-stage decisions, and the period in which they are taken is called the first stage.
• A number of decisions can be taken after the experiment. They are called second-stage decisions. The corresponding period is called the second stage.
 Two-Stage Program with Fixed Recourse
 The classical two-stage stochastic linear program with fixed recourse (originated by Dantzig [1955] and Beale [1955]) is the problem of finding

$$\begin{aligned}
\text{minimize } \ & z = c^T x + E_{\xi}\Big\{\min_{y}\ q(\boldsymbol{w})^T y(\boldsymbol{w})\Big\} \\
\text{subject to } \ & Ax = b \\
& T(\boldsymbol{w})\,x + W y(\boldsymbol{w}) = h(\boldsymbol{w}) \\
& x \ge 0, \quad y(\boldsymbol{w}) \ge 0
\end{aligned}$$

where:
• $x$: the first-stage decisions ($n_1 \times 1$)
• $y$: the second-stage decisions ($n_2 \times 1$)
• $W$: the recourse matrix ($m_2 \times n_2$)
• $\xi^T(\boldsymbol{w}) = \big(q^T(\boldsymbol{w}),\ h^T(\boldsymbol{w}),\ T_1(\boldsymbol{w}),\dots,T_{m_2}(\boldsymbol{w})\big)$, with $T_i$ the $i$-th row of $T$

 Each component of $q$, $T$, and $h$ is thus a possibly random variable.
 For a given realization $\omega$, the second-stage problem data $q(\omega)$, $h(\omega)$, and $T(\omega)$ become known.
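When $\boldsymbol{w}$ takes finitely many values $w_s$ with probabilities $p_s$, the expectation is a finite sum and the DEP is the standard extensive form, one large linear program (stated here for reference; it is what Example 3 below solves):

$$\begin{aligned}
\text{minimize } \ & c^T x + \sum_{s=1}^{S} p_s\, q_s^T y_s \\
\text{subject to } \ & Ax = b \\
& T_s x + W y_s = h_s, \quad s = 1,\dots,S \\
& x \ge 0, \quad y_s \ge 0, \quad s = 1,\dots,S
\end{aligned}$$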
 Example 3:
 Suppose we must decide how many units of a product X to produce.
• Producing each unit of X costs $2.
• Customer demand is random, with a discrete distribution: demand $D_s$ occurs with probability $p_s$ ($s = 1,\dots,S$).
• Customer demand must be met. After demand is realized, we can buy X from an external supplier to cover any shortfall, at a cost of $3 per unit.

Question: How much should we produce now, while the future demand of our customers is still unknown?

 We assume $S = 2$ with $D_1 = 500$, $p_1 = 0.6$ and $D_2 = 700$, $p_2 = 0.4$. Production is the first stage; the purchase decision, made after observing the demand scenario, is the second stage.
Misleading solution: produce the expected amount of demand.
Correct solution: the optimal amount is obtained through stochastic optimization.

Decision variables:
• $x_1$ (first stage): the number of units of X produced now.
• $y_{2s}$ (second stage): the number of units of X purchased in the second stage when demand scenario $D_s$ ($s = 1,\dots,S$) is realized.
The recourse problem is

$$\begin{aligned}
\text{minimize } \ & 2x_1 + \sum_{s=1}^{2} p_s\,(3\,y_{2s}) \\
\text{subject to } \ & x_1 + y_{2s} \ge D_s, \quad s = 1,2 \\
& x_1 \ge 0, \quad y_{2s} \ge 0, \quad s = 1,2
\end{aligned}$$

cvx_begin
    variables x y(2)
    % objective: 2x + 0.6*(3*y1) + 0.4*(3*y2)
    minimize( 2*x + 1.8*y(1) + 1.2*y(2) )
    subject to
        x + y(1) >= 500;   % scenario 1 demand
        x + y(2) >= 700;   % scenario 2 demand
        x >= 0; y >= 0;
cvx_end

which gives
$$x^* = 500, \qquad y^* = (1.22 \times 10^{-8},\ 200), \qquad \text{optimal cost} = 1240.$$
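This matches the classical newsvendor logic (our added check): with production cost $c = 2$ and recourse cost $q = 3$, it is optimal to produce up to the smallest demand level whose cumulative probability reaches the critical ratio $(q - c)/q = 1/3$; since $P\{D = 500\} = 0.6 \ge 1/3$, we get $x^* = 500$. Expected costs can be compared directly:

% Expected total cost as a function of first-stage production x (sketch).
D = [500 700];   p = [0.6 0.4];
cost = @(x) 2*x + p * (3*max(D' - x, 0));   % production + expected recourse
[cost(500), cost(580), cost(700)]           % = 1240, 1304, 1400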
 If we instead produce the expected demand, $x = 0.6(500) + 0.4(700) = 580$, the expected cost rises to $2(580) + 0.4 \cdot 3 \cdot (700 - 580) = 1304 > 1240$; in the $D = 700$ scenario the realized cost is $2(580) + 3(120) = 1520$.
5. Conclusion

 Stochastic programming problems are formulated as mathematical programming tasks whose objective and constraints are defined as expectations of random functions or as probabilities of sets of scenarios.
 Expectations are given by multivariate integrals (continuously distributed scenarios) or by finite sums (discretely distributed scenarios).
END
Editor's Notes
• #4: In many fields …
• #27: In fact, we have S scenarios for the future demand.
• #29: y2s: the number of units of product X purchased in the second stage upon the random realization of the demand scenarios Ds (s = 1,…,S).