D Nagesh Kumar, IISc Optimization Methods: M2L41
Optimization using Calculus
Optimization of Functions of
Multiple Variables subject to
Equality Constraints
Objectives
Optimization of functions of multiple variables
subject to equality constraints using
the method of constrained variation, and
the method of Lagrange multipliers.
Constrained optimization
A function of multiple variables, f(X), is to be optimized subject to one or
more equality constraints in those variables. The equality constraints,
gj(X), may or may not be linear. The problem statement is as follows:
Maximize (or minimize) f(X), subject to gj(X) = 0, j = 1, 2, … , m
where

$$\mathbf{X} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}$$
Constrained optimization (contd.)
With the condition that m ≤ n; for if m > n, the problem becomes
over-defined and, in general, there will be no solution. Of the
many available methods, the method of constrained variation and the
method of Lagrange multipliers are discussed here.
Solution by method of Constrained
Variation
For the optimization problem defined above, let us consider a specific
case with n = 2 and m = 1 before we proceed to find the necessary and
sufficient conditions for a general problem using Lagrange
multipliers. The problem statement is as follows:
Minimize f(x1,x2), subject to g(x1,x2) = 0
For f(x1, x2) to have a minimum at a point X* = [x1*, x2*], a necessary
condition is that the total derivative of f(x1, x2) be zero at [x1*, x2*]:

$$df = \frac{\partial f}{\partial x_1}\,dx_1 + \frac{\partial f}{\partial x_2}\,dx_2 = 0 \qquad (1)$$
Method of Constrained Variation (contd.)
Since g(x1*, x2*) = 0 at the minimum point, variations dx1 and dx2 about
the point [x1*, x2*] must be admissible variations, i.e. the perturbed point
must stay on the constraint:

$$g(x_1^* + dx_1,\; x_2^* + dx_2) = 0 \qquad (2)$$

Assuming dx1 and dx2 are small, the Taylor series expansion of this gives

$$g(x_1^* + dx_1,\; x_2^* + dx_2) = g(x_1^*, x_2^*) + \frac{\partial g}{\partial x_1}(x_1^*, x_2^*)\,dx_1 + \frac{\partial g}{\partial x_2}(x_1^*, x_2^*)\,dx_2 = 0 \qquad (3)$$
Method of Constrained Variation (contd.)
or, since g(x1*, x2*) = 0, at [x1*, x2*]

$$dg = \frac{\partial g}{\partial x_1}\,dx_1 + \frac{\partial g}{\partial x_2}\,dx_2 = 0 \qquad (4)$$

which is the condition that must be satisfied for all admissible
variations.

Assuming $\partial g/\partial x_2 \neq 0$, (4) can be rewritten as

$$dx_2 = -\frac{\partial g/\partial x_1}{\partial g/\partial x_2}(x_1^*, x_2^*)\;dx_1 \qquad (5)$$
Method of Constrained Variation (contd.)
(5) indicates that once the variation in x1 (dx1) is chosen arbitrarily, the variation
in x2 (dx2) is determined automatically so that the variation remains admissible.
Substituting equation (5) in (1) we have

$$df = \left(\frac{\partial f}{\partial x_1} - \frac{\partial g/\partial x_1}{\partial g/\partial x_2}\,\frac{\partial f}{\partial x_2}\right)\Bigg|_{(x_1^*,\,x_2^*)} dx_1 = 0 \qquad (6)$$

The expression in parentheses is called the constrained variation of f. Equation
(6) has to be satisfied for arbitrary dx1, hence we have

$$\left(\frac{\partial f}{\partial x_1}\,\frac{\partial g}{\partial x_2} - \frac{\partial f}{\partial x_2}\,\frac{\partial g}{\partial x_1}\right)\Bigg|_{(x_1^*,\,x_2^*)} = 0 \qquad (7)$$

This gives us the necessary condition for [x1*, x2*] to be an extreme point
(maximum or minimum).
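As a quick numerical check (my own illustration, not from the slides), condition (7) can be evaluated for the hypothetical problem of minimizing f = x1² + x2² subject to g = x1 + x2 − 1 = 0, whose constrained minimum lies at (1/2, 1/2):

```python
# Numerical check of the constrained-variation condition (7) on a
# hypothetical problem (my illustration, not from the slides):
#   minimize f = x1^2 + x2^2  subject to  g = x1 + x2 - 1 = 0,
# whose constrained minimum is at (1/2, 1/2).

def df_dx1(x1, x2):
    return 2.0 * x1

def df_dx2(x1, x2):
    return 2.0 * x2

def dg_dx1(x1, x2):
    return 1.0

def dg_dx2(x1, x2):
    return 1.0

def constrained_variation(x1, x2):
    """Left-hand side of condition (7): f1*g2 - f2*g1."""
    return (df_dx1(x1, x2) * dg_dx2(x1, x2)
            - df_dx2(x1, x2) * dg_dx1(x1, x2))

print(constrained_variation(0.5, 0.5))  # 0.0  (condition holds at the minimum)
print(constrained_variation(1.0, 0.0))  # 2.0  (feasible but not optimal)
```

The condition vanishes only at the constrained minimum; at other feasible points the constrained variation of f is non-zero.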
Solution by method of Lagrange
multipliers
Continuing with the same specific case of the optimization problem
with n = 2 and m = 1, we define a quantity λ, called the Lagrange
multiplier, as

$$\lambda = -\frac{\partial f/\partial x_2}{\partial g/\partial x_2}\Bigg|_{(x_1^*,\,x_2^*)} \qquad (8)$$

Substituting (8) in (6), and noting that (6) must hold for arbitrary dx1, gives

$$\left(\frac{\partial f}{\partial x_1} + \lambda\,\frac{\partial g}{\partial x_1}\right)\Bigg|_{(x_1^*,\,x_2^*)} = 0 \qquad (9)$$

and (8) can be rewritten as

$$\left(\frac{\partial f}{\partial x_2} + \lambda\,\frac{\partial g}{\partial x_2}\right)\Bigg|_{(x_1^*,\,x_2^*)} = 0 \qquad (10)$$
Solution by method of Lagrange
multipliers…contd.
Also, the constraint equation has to be satisfied at the extreme point:

$$g(x_1^*, x_2^*) = 0 \qquad (11)$$

Hence equations (9) to (11) represent the necessary conditions for the
point [x1*, x2*] to be an extreme point.

Note that λ could equally have been expressed in terms of ∂f/∂x1 and
∂g/∂x1, in which case ∂g/∂x1 has to be non-zero.

Thus, these necessary conditions require that at least one of the partial
derivatives of g(x1, x2) be non-zero at an extreme point.
Solution by method of Lagrange
multipliers…contd.
The conditions given by equations (9) to (11) can also be generated by
constructing a function L, known as the Lagrangian function:

$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda\, g(x_1, x_2) \qquad (12)$$

Alternatively, treating L as a function of x1, x2 and λ, the necessary
conditions for its extremum are given by

$$\frac{\partial L}{\partial x_1}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_1}(x_1, x_2) + \lambda\,\frac{\partial g}{\partial x_1}(x_1, x_2) = 0$$
$$\frac{\partial L}{\partial x_2}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_2}(x_1, x_2) + \lambda\,\frac{\partial g}{\partial x_2}(x_1, x_2) = 0 \qquad (13)$$
$$\frac{\partial L}{\partial \lambda}(x_1, x_2, \lambda) = g(x_1, x_2) = 0$$
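The three conditions in (13) can be generated and solved symbolically; a minimal sketch, assuming SymPy is available and using a hypothetical problem of my own (f = x1² + x2², g = x1 + x2 − 1):

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

# Hypothetical illustration (not from the slides):
# f = x1^2 + x2^2, g = x1 + x2 - 1
f = x1**2 + x2**2
g = x1 + x2 - 1

# Lagrangian function, equation (12)
L = f + lam * g

# Necessary conditions (13): all first partials of L vanish
conditions = [sp.Eq(sp.diff(L, v), 0) for v in (x1, x2, lam)]
solution = sp.solve(conditions, (x1, x2, lam), dict=True)[0]
# Stationary point: x1 = 1/2, x2 = 1/2, lam = -1
```

Differentiating L with respect to λ simply reproduces the constraint, which is why the multiplier formulation packages feasibility and stationarity into one set of equations.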
Necessary conditions for a general
problem
For a general problem with n variables and m equality constraints the
problem is defined as shown earlier
Maximize (or minimize) f(X), subject to gj(X) = 0, j = 1, 2, … , m
where $\mathbf{X} = [x_1, x_2, \ldots, x_n]^T$.

In this case the Lagrange function, L, will have
one Lagrange multiplier λj for each constraint gj(X):

$$L(x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m) = f(\mathbf{X}) + \lambda_1 g_1(\mathbf{X}) + \lambda_2 g_2(\mathbf{X}) + \cdots + \lambda_m g_m(\mathbf{X}) \qquad (14)$$
Necessary conditions for a general
problem…contd.
L is now a function of n + m unknowns, $x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m$, and the
necessary conditions for the problem defined above are given by

$$\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i}(\mathbf{X}) + \sum_{j=1}^{m} \lambda_j\,\frac{\partial g_j}{\partial x_i}(\mathbf{X}) = 0, \qquad i = 1, 2, \ldots, n$$
$$\frac{\partial L}{\partial \lambda_j} = g_j(\mathbf{X}) = 0, \qquad j = 1, 2, \ldots, m \qquad (15)$$

which represent n + m equations in terms of the n + m unknowns, xi and λj.
The solution to this set of equations gives

$$\mathbf{X}^* = \begin{Bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{Bmatrix} \qquad \text{and} \qquad \boldsymbol{\lambda}^* = \begin{Bmatrix} \lambda_1^* \\ \lambda_2^* \\ \vdots \\ \lambda_m^* \end{Bmatrix} \qquad (16)$$
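For a quadratic objective with linear equality constraints, equations (15) are linear in the n + m unknowns and can be solved in one shot; a minimal sketch (the problem data are my own illustration, not from the slides), assuming NumPy and f(x) = ½xᵀQx + cᵀx with constraints Ax = b:

```python
import numpy as np

def solve_necessary_conditions(Q, c, A, b):
    """Solve equations (15) for the quadratic case
    f(x) = 0.5 x^T Q x + c^T x with linear constraints A x = b.

    Stationarity: Q x + c + A^T lam = 0
    Feasibility:  A x = b
    """
    n, m = Q.shape[0], A.shape[0]
    # Assemble the (n + m) x (n + m) linear system from (15).
    K = np.block([[Q, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]          # (x*, lam*)

# Illustration: minimize x1^2 + x2^2 subject to x1 + x2 = 1
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_star, lam_star = solve_necessary_conditions(Q, c, A, b)
# x* = [0.5, 0.5], lam* = [-1.0]
```

For general nonlinear f and gj, the same n + m equations are nonlinear and would need an iterative root finder instead of a single linear solve.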
Sufficient conditions for a general problem
A sufficient condition for f(X) to have a relative minimum at X* is that each
root of the polynomial in ε, defined by the following determinant equation,
be positive:

$$\begin{vmatrix}
L_{11}-\varepsilon & L_{12} & \cdots & L_{1n} & g_{11} & g_{21} & \cdots & g_{m1} \\
L_{21} & L_{22}-\varepsilon & \cdots & L_{2n} & g_{12} & g_{22} & \cdots & g_{m2} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
L_{n1} & L_{n2} & \cdots & L_{nn}-\varepsilon & g_{1n} & g_{2n} & \cdots & g_{mn} \\
g_{11} & g_{12} & \cdots & g_{1n} & 0 & 0 & \cdots & 0 \\
g_{21} & g_{22} & \cdots & g_{2n} & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
g_{m1} & g_{m2} & \cdots & g_{mn} & 0 & 0 & \cdots & 0
\end{vmatrix} = 0 \qquad (17)$$
Sufficient conditions for a general
problem…contd.
where

$$L_{ij} = \frac{\partial^2 L}{\partial x_i\,\partial x_j}(\mathbf{X}^*, \boldsymbol{\lambda}^*), \qquad i, j = 1, 2, \ldots, n$$
$$g_{pq} = \frac{\partial g_p}{\partial x_q}(\mathbf{X}^*), \qquad p = 1, 2, \ldots, m; \quad q = 1, 2, \ldots, n \qquad (18)$$

Similarly, a sufficient condition for f(X) to have a relative maximum at X*
is that each root of the polynomial in ε, defined by equation (17), be
negative.

If solving equation (17) yields roots of mixed sign, some positive and
others negative, then the point X* is neither a maximum nor a minimum.
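As a sketch of how the roots of (17) can be computed numerically: when the constraint Jacobian has full row rank, those roots coincide with the eigenvalues of the Lagrangian Hessian projected onto the null space of the Jacobian. The function and test data below are my own illustration, assuming NumPy:

```python
import numpy as np

def sufficiency_roots(L_hess, J):
    """Roots of the determinant equation (17).

    L_hess : (n, n) Hessian of the Lagrangian, L_ij at (X*, lambda*).
    J      : (m, n) constraint Jacobian, J[p, q] = g_pq at X*.

    For full-row-rank J, the roots of (17) equal the eigenvalues of
    L_hess restricted to the tangent space of the constraints.
    """
    # Orthonormal basis Z of the null space of J, via SVD.
    _, s, vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-12))
    Z = vt[rank:].T                      # shape (n, n - rank)
    # Projected Hessian; symmetric, so eigvalsh applies.
    return np.linalg.eigvalsh(Z.T @ L_hess @ Z)

# Hypothetical check: L_hess = 2 I (from f = x1^2 + x2^2),
# one linear constraint with gradient [1, 1].
roots = sufficiency_roots(np.array([[2.0, 0.0], [0.0, 2.0]]),
                          np.array([[1.0, 1.0]]))
# roots ~ [2.0] -> all positive, so X* would be a relative minimum
```

The determinant (17) is a polynomial of degree n − m in ε, which matches the n − m eigenvalues returned by the projected-Hessian computation.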
Example
Minimize

$$f(\mathbf{X}) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 - 7x_1 - 5x_2$$

subject to $x_1 + x_2 = 5$, or $g(\mathbf{X}) = x_1 + x_2 - 5 = 0$.

With the Lagrangian $L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda\,g(x_1, x_2)$, the
necessary conditions (13) give

$$\frac{\partial L}{\partial x_1} = 6x_1 + 6x_2 - 7 + \lambda = 0$$
$$\frac{\partial L}{\partial x_2} = 6x_1 + 10x_2 - 5 + \lambda = 0$$
$$\frac{\partial L}{\partial \lambda} = x_1 + x_2 - 5 = 0$$

Subtracting the first condition from the second gives $4x_2 + 2 = 0$, so

$$x_2 = -\frac{1}{2}, \qquad x_1 = 5 - x_2 = \frac{11}{2}, \qquad \lambda = 7 - 6(x_1 + x_2) = -23$$

i.e. $\mathbf{X}^* = \left[\frac{11}{2},\, -\frac{1}{2}\right]$ and $\lambda^* = -23$. (If the constraint is
written instead as $g = 5 - x_1 - x_2 = 0$, the multiplier changes sign: $\lambda^* = 23$.)

For sufficiency, the determinant equation (17) with n = 2 and m = 1 is

$$\begin{vmatrix} L_{11}-\varepsilon & L_{12} & g_{11} \\ L_{21} & L_{22}-\varepsilon & g_{12} \\ g_{11} & g_{12} & 0 \end{vmatrix} = 0$$

Here $L_{11} = 6$, $L_{12} = L_{21} = 6$, $L_{22} = 10$ and $g_{11} = g_{12} = 1$, so the determinant
reduces to $2\varepsilon - 4 = 0$, giving the single root $\varepsilon = 2 > 0$. Hence $\mathbf{X}^*$ is a
relative minimum.
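The worked example can be double-checked numerically; a minimal sketch with NumPy, using the sign convention g = x1 + x2 − 5 adopted above:

```python
import numpy as np

# Necessary conditions (13) for the example, rearranged as a linear system:
#   6 x1 +  6 x2 + lam = 7
#   6 x1 + 10 x2 + lam = 5
#     x1 +    x2       = 5
M = np.array([[6.0,  6.0, 1.0],
              [6.0, 10.0, 1.0],
              [1.0,  1.0, 0.0]])
rhs = np.array([7.0, 5.0, 5.0])
x1, x2, lam = np.linalg.solve(M, rhs)   # x1 = 5.5, x2 = -0.5, lam = -23.0

# Sufficiency: for n = 2, m = 1 the single root of (17) equals z^T [L_ij] z,
# where z is a unit vector tangent to the constraint (null space of [1, 1]).
L_hess = np.array([[6.0, 6.0], [6.0, 10.0]])   # [[L11, L12], [L21, L22]]
z = np.array([1.0, -1.0]) / np.sqrt(2.0)
eps = z @ L_hess @ z                            # eps = 2 > 0 -> relative minimum
```

The solve reproduces X* = [11/2, −1/2] and λ* = −23, and the positive root ε = 2 confirms the sufficiency check.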
Thank you
