Penalty Function Method
By
Suman Bhattacharyya
Transformation Methods
Transformation methods are the simplest and most popular optimization methods for
handling constraints. The constrained problem is transformed into a sequence of
unconstrained problems by adding penalty terms for each constraint violation. There are
mainly three types of penalty methods.
 Interior Penalty Method : These methods cannot deal with infeasible points; they
penalize feasible points that are close to the constraint boundary.
 Exterior Penalty Method : These methods penalize infeasible points but do not penalize
feasible points. Every sequence of unconstrained optimization finds an improved yet
infeasible solution.
 Mixed Penalty Method : These methods penalize both infeasible and feasible points.
Penalty Function Method
Penalty function methods transform the basic optimization problem into
alternative formulations such that numerical solutions are sought by solving a
sequence of unconstrained minimization problems. Penalty function methods
work in a series of sequences, each time modifying a set of penalty parameters
and starting a sequence from the solution obtained in the previous sequence. In
each sequence, the following penalty function is minimized:
P(x, R) = f(x) + Ω(R, g(x), h(x))   (i)
Where,
R -> is a set of penalty parameters,
Ω -> is the penalty term, chosen to favor the selection of feasible points over
infeasible points.
For equality and inequality constraints, different penalty terms are used:
 Parabolic Penalty: Ω = R{h(x)}²
This penalty term is used for handling equality constraints only. Since all infeasible points
are penalized, it is an exterior penalty term.
 Infinite Barrier Penalty: Ω = R Σ |gj(x)|, summed over the violated constraints j, where R
is a very large number (e.g., 10²⁰).
This penalty term is used for handling inequality constraints. Since only infeasible points
are penalized, it is also an exterior penalty term.
 Log Penalty: Ω = −R ln[g(x)]
This penalty term is also used for inequality constraints. For infeasible points, g(x) < 0;
thus this penalty term cannot assign a penalty to infeasible points. For feasible points, more
penalty is assigned to points close to the constraint boundary, that is, points with very small g(x).
Since only feasible points are penalized, this is an interior penalty term.
 Inverse Penalty: Ω = R[1/g(x)]
Like the log penalty term, this term is suitable only for inequality constraints. It is also an
interior penalty term, and the penalty parameter is assigned a large value in the first
sequence.
 Bracket Operator Penalty: Ω = R⟨g(x)⟩²
where ⟨α⟩ = α when α is negative, and zero otherwise. Since the bracket operator assigns a
positive penalty to infeasible points, this is an exterior penalty term.
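These penalty terms translate directly into code. Below is a minimal Python sketch, assuming
the convention used above that g(x) ≥ 0 denotes a feasible point and h(x) = 0 a satisfied
equality constraint; the function names are illustrative, not from the source.

```python
import math

def parabolic(h, R):
    """Exterior term for equality constraints: any h(x) != 0 is penalized."""
    return R * h**2

def infinite_barrier(gs, R=1e20):
    """Exterior term: a very large penalty on the violated constraints only."""
    return R * sum(abs(g) for g in gs if g < 0)

def log_penalty(g, R):
    """Interior term: grows without bound as a feasible point nears g(x) = 0."""
    return -R * math.log(g)   # defined only for feasible points, g > 0

def inverse_penalty(g, R):
    """Interior term: 1/g(x) blows up near the constraint boundary."""
    return R / g              # defined only for feasible points, g > 0

def bracket_operator(g, R):
    """Exterior term: <g> = g if g < 0, else 0; the square is then penalized."""
    return R * min(g, 0.0)**2
```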
Exterior Penalty Function Method
 Minimize total objective function = Objective function + Penalty function.
 Penalty function: Penalizes for violating constraints.
 Penalty Multiplier : Small in first iterations, large in final iterations.
Interior Penalty Function Method
 Minimize total objective function = Objective function + Penalty function.
 Penalty function: Penalizes for being too close to constraint boundary.
 Penalty Multiplier : Large in first iterations, small in final iterations.
 Total objective function discontinuous on constraint boundaries.
 Also known as “Barrier Methods”.
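A tiny self-contained demonstration of this contrast, on an assumed toy constraint
g(x) = x − 1 ≥ 0 with R = 1:

```python
import math

def exterior(g, R=1.0):           # bracket-operator (exterior) term
    return R * min(g, 0.0)**2

def interior(g, R=1.0):           # log-barrier (interior) term
    return -R * math.log(g)       # only defined for feasible points, g > 0

for x in (0.5, 1.5, 1.01):
    g = x - 1.0
    ext = exterior(g)
    intr = interior(g) if g > 0 else float('inf')
    print(f"x={x:4.2f}  g={g:+.2f}  exterior={ext:.4f}  interior={intr:.4f}")
# x=0.50: infeasible -> exterior 0.2500, interior inf (barrier cannot leave the region)
# x=1.50: feasible   -> exterior 0.0000, interior 0.6931
# x=1.01: near bound -> exterior 0.0000, interior 4.6052 (barrier grows at the boundary)
```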
Algorithm
Step 1 : Choose two termination parameters ϵ1, ϵ2, an initial solution x(0), a penalty term Ω,
and an initial penalty parameter R(0). Choose a parameter c to update R such that 0 < c < 1 is
used for interior penalty terms and c > 1 is used for exterior penalty terms. Set t = 0.
Step 2 : Form P(x(t), R(t)) = f(x(t)) + Ω(R(t), g(x(t)), h(x(t))).
Step 3 : Starting with the solution x(t), find x(t+1) such that P(x(t+1), R(t)) is minimum for a fixed
value of R(t). Use ϵ1 to terminate the unconstrained search.
Step 4 : Is |P(x(t+1), R(t)) − P(x(t), R(t−1))| ≤ ϵ2?
If yes, set xT = x(t+1) and Terminate;
Else go to Step 5.
Step 5 : Choose R(t+1) = cR(t). Set t = t + 1 and go to Step 2.
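The loop structure of the algorithm can be sketched as follows. This is a minimal illustration,
not the source's implementation: it assumes an exterior bracket-operator penalty, uses
scipy.optimize.minimize (Nelder-Mead) as the unconstrained solver in Step 3, and a toy problem
defined inline.

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative): minimize f(x) = x1^2 + x2^2
# subject to g(x) = x1 + x2 - 1 >= 0; true optimum is (0.5, 0.5).
def f(x): return x[0]**2 + x[1]**2
def g(x): return x[0] + x[1] - 1.0
def P(x, R): return f(x) + R * min(g(x), 0.0)**2   # bracket-operator penalty

def penalty_method(x0, R0=0.1, c=10.0, eps1=1e-6, eps2=1e-5, max_seq=25):
    x, R, prev = np.asarray(x0, float), R0, None   # Step 1
    for _ in range(max_seq):
        res = minimize(P, x, args=(R,), method='Nelder-Mead',
                       options={'fatol': eps1})    # Steps 2-3: minimize P for fixed R
        x = res.x
        if prev is not None and abs(res.fun - prev) <= eps2:
            return x                               # Step 4: successive P values agree
        prev, R = res.fun, c * R                   # Step 5: R(t+1) = c R(t), c > 1
    return x

print(penalty_method([0.0, 0.0]))  # converges toward (0.5, 0.5) from outside
```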
Problem
Consider the following constrained Himmelblau function:
Minimize f(x) = (x1² + x2 − 11)² + (x1 + x2² − 7)²
Subject to g(x) ≡ (x1 − 5)² + x2² − 26 ≥ 0,
0 ≤ x1, x2 ≤ 5.
Step 1: We use the bracket-operator penalty term to solve this problem. This term is
an exterior penalty term. We choose an infeasible point x(0) = (0, 0)T as the initial
point and the penalty parameter R(0) = 0.1. We choose two convergence
parameters ϵ1 = ϵ2 = 10⁻⁵.
Step 2: Now form the penalized function:
P(x, R(0)) = (x1² + x2 − 11)² + (x1 + x2² − 7)² + 0.1⟨(x1 − 5)² + x2² − 26⟩².
The variable bounds must also be included as inequality constraints.
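The reconstructed problem data can be checked numerically; a short sketch (the function,
constraint, and penalized function written out above):

```python
import numpy as np

def f(x):    # Himmelblau function
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def g(x):    # inequality constraint; g(x) >= 0 is feasible
    return (x[0] - 5)**2 + x[1]**2 - 26

def P(x, R): # bracket-operator penalized function
    return f(x) + R * min(g(x), 0.0)**2

x0 = np.array([0.0, 0.0])
print(f(x0), g(x0), P(x0, 0.1))  # 170.0, -1.0, 170.1 -- matches the values below
```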
Step 3: We use the steepest descent method to solve the above problem. Begin the algorithm
with the initial solution x(0) = (0, 0)T having f(x(0)) = 170.0. At this point, the constraint
violation is −1.0 and the penalized function value is P(x(0), R(0)) = 170.100. A simulation of the
steepest descent method on the penalized function with R = 0.1 is shown in Figure-1.
After 150 function evaluations, the solution x(1) = (2.628, 2.475)T with a function value
f(x(1)) = 5.709 is obtained. At this point, the constraint violation is −14.248,
but the penalized function value is 25.996, which is smaller than that at the initial
point. Even though the constraint violation at this point is greater than that at the initial
point, the steepest descent method has minimized the penalized function P(x, R(0)) from
170.100 to 25.996. Thus we set x(1) = (2.628, 2.475)T and proceed to the next step.
Fig 1. -> A simulation of the steepest descent method on the penalized function with R = 0.1. The hashes
mark the feasible region.
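A steepest descent inner loop for this sequence can be sketched as follows. The
central-difference gradient and simple backtracking line search are assumptions for
illustration; the source does not specify its line-search rule.

```python
import numpy as np

def f(x): return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
def g(x): return (x[0] - 5)**2 + x[1]**2 - 26
def P(x, R=0.1): return f(x) + R * min(g(x), 0.0)**2

def num_grad(F, x, h=1e-6):
    """Central-difference approximation of the gradient of F at x."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        grad[i] = (F(x + e) - F(x - e)) / (2 * h)
    return grad

def steepest_descent(F, x0, eps=1e-5, max_iter=1000):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        d = -num_grad(F, x)              # steepest descent direction
        if np.linalg.norm(d) < eps:
            break
        a = 1.0                          # backtrack until F decreases
        while F(x + a * d) >= F(x) and a > 1e-12:
            a *= 0.5
        x = x + a * d
    return x

x1 = steepest_descent(P, [0.0, 0.0])
print(x1, g(x1))  # roughly (2.63, 2.47) with g ~= -14.2, as reported above
```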
Step 4: Since this is the first iteration, there is no previous penalized function value to compare
with; thus we move to Step 5.
Step 5: Update the penalty parameter R(1) = cR(0) = 10 × 0.1 = 1.0 (using c = 10) and move to Step 2.
This is the end of the first sequence.
Step 2: The new penalized function in the second sequence is as follows:
P(x, R(1)) = (x1² + x2 − 11)² + (x1 + x2² − 7)² + 1.0⟨(x1 − 5)² + x2² − 26⟩².
Step 3: Once again use the steepest descent method to solve the problem, this time from the
starting point (2.628, 2.475)T. The minimum of the function is found after 340 function evaluations
and is x(2) = (1.011, 2.939)T. At this point, the constraint violation is −1.450, which
suggests that the point is still infeasible. Intermediate points obtained using the steepest
descent method on the penalized function with R = 1.0 are shown in Figure-2.
This penalized function is distorted with respect to the original Himmelblau function. This
distortion is necessary to shift the minimum point of the current function closer to the true
constrained minimum point. Also notice that the penalized function is undistorted in the
feasible region.
Fig 2. -> Intermediate points using the steepest descent method for the penalized function with R = 1.0 (solid lines). The hashes
mark the feasible region.
Step 4 : Comparing the penalized function values, P(x(2), 1.0) = 58.664 and P(x(1), 0.1) =
25.996. Since they are very different from each other, we continue with Step 5.
Step 5 : The new value of the penalty parameter is R(2) = 10.0. Increment the iteration
counter to t = 2 and go to Step 2.
In the next sequence, the penalized function is formed with R(2) = 10.0. The
penalized function and the corresponding solution are shown in Figure-3. Now the
steepest descent algorithm starts with the initial solution x(2). The minimum point of this
sequence is found to be x(3) = (0.844, 2.934)T with a constraint violation of −0.119.
Figure-3 shows the extent of distortion of the original objective function. Compare the
contour levels shown at the top right corner of Figures 1 and 3.
Fig 3. -> Intermediate points obtained using the steepest descent method for the penalized function with R = 10.0
(solid lines near the true optimum). The hashes mark the feasible region.
With R = 10.0, the effect of the objective function f(x) is almost insignificant compared to
that of the constraint violation in the infeasible search region. Thus, the contour lines are
almost parallel to the constraint line. In this problem, the increase in the penalty parameter
R only makes the penalty function steeper in the infeasible search region.
After another iteration of this algorithm, the obtained solution is x(4) = (0.836,
2.940)T with a constraint violation of only −0.012, which is very close to the true
constrained optimum solution.
In the presence of multiple constraints, it is observed that the performance of the penalty
function method improves considerably if the constraints and the objective function are
first normalized before constructing the penalized function.
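One common normalization (an assumption here; the source does not specify one) divides each
constraint by its constant term, so that every normalized constraint is of order one near its
boundary. For the constraint used above:

```python
def g_normalized(x):
    # (x1 - 5)^2 + x2^2 - 26 >= 0 rewritten as ((x1 - 5)^2 + x2^2)/26 - 1 >= 0,
    # a hypothetical normalized form whose magnitude is O(1) near the boundary
    return ((x[0] - 5)**2 + x[1]**2) / 26.0 - 1.0
```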
Advantages
 The penalty method replaces a constrained optimization problem by a series of unconstrained
problems whose solutions ideally converge to the solution of the original constrained problem.
 The algorithm does not take into account the structure of the constraints; that is, linear or
nonlinear constraints can be tackled with this algorithm.
Disadvantages
 The main difficulty of this method is setting appropriate values of the penalty parameters.
Consequently, users have to experiment with different values of the penalty parameters.
 In every sequence, the penalized function becomes somewhat distorted with respect to the
original objective function. This distortion causes the unconstrained search to become slow in
finding the minimum of the penalized function.
Applications
 Image compression optimization algorithms can make use of penalty functions for selecting
how best to compress zones of color to single representative values.
 Genetic algorithms often use penalty functions to handle constraints.