Slide 1 / 24

GECCO 2016, Denver, CO, USA, July 20-24, 2016
Efficient Hill Climber for Constrained
Pseudo-Boolean Optimization Problems
Francisco Chicano, Darrell Whitley, Renato Tinós
Slide 2 / 24: Solutions in a ball of radius r

• Considering binary strings of length n and Hamming distance…
• How many solutions are at Hamming distance r? If r << n: Θ(n^r)
• Number of solutions in a ball of radius r around a solution (see the sketch below):

    |Ball(r)| = \sum_{i=1}^{r} \binom{n}{i}

    r = 1: n        r = 2: \binom{n}{2}        r = 3: \binom{n}{3}        general r: \binom{n}{r}
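Not from the slides, just a minimal Python sketch of the count above: it sums |Ball(r)| = Σ_{i=1}^r C(n, i) and prints it next to n^r to show the Θ(n^r) growth for r << n. Function names are ours.

```python
from math import comb

def ball_size(n: int, r: int) -> int:
    """Number of binary strings at Hamming distance 1..r from a given string."""
    return sum(comb(n, i) for i in range(1, r + 1))

if __name__ == "__main__":
    n = 1000
    for r in (1, 2, 3):
        # For r << n the last term, C(n, r) = Theta(n^r), dominates the sum.
        print(f"r={r}: |Ball| = {ball_size(n, r)}  (n^r = {n ** r})")
```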
Slide 3 / 24: Improving moves in a ball of radius r

• We want to find improving moves in a ball of radius r around a solution x
• What is the computational cost of this exploration?
• By complete enumeration: O(n^r), if the fitness evaluation is O(1)
• Previous work proposed a way to find improving moves in a ball of radius r in O(1) (constant time, independent of n): Szeider (DO 2011), Whitley and Chen (GECCO 2012), Chen et al. (GECCO 2013), Chicano et al. (GECCO 2014)
• The approach was extended to the multi-objective case: Chicano et al. (EvoCOP 2016)
• All the previous results are for unconstrained optimization problems
Slide 4 / 24: Research Question
Slide 5 / 24: Mk Landscape (Whitley, GECCO 2015: 927-934)

• Definition:

    f(x) = \sum_{i=1}^{m} f^{(i)}(x)

  where each f^{(i)} depends only on k variables (k-bounded epistasis)
• We will also assume that each variable is an argument of at most c subfunctions
• Example (m=4, n=4, k=2): f = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x), with subfunctions over the variables x1, x2, x3, x4
• Are Mk Landscapes a small subset of problems? Are they interesting?
  • MAX-kSAT is a k-bounded pseudo-Boolean optimization problem
  • An NK-landscape is a (K+1)-bounded pseudo-Boolean optimization problem
  • Any compressible pseudo-Boolean function can be reduced to a quadratic pseudo-Boolean function (e.g., Rosenberg, 1975)

From the paper: the family of k-bounded pseudo-Boolean optimization problems has also been described as an embedded landscape, a function f(x) that can be written as the sum of m subfunctions, each depending on at most k input variables. Embedded landscapes generalize NK-landscapes and the MAX-kSAT problem. We consider that the number of subfunctions is linear in n, that is, m ∈ O(n); for NK-landscapes m = n, and in MAX-kSAT it is a common assumption that m ∈ O(n). For v, x ∈ B^n and a pseudo-Boolean function f : B^n → R, the Score of x with respect to move v is S_v(x) = f(x ⊕ v) - f(x). (A representation sketch follows this slide.)
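Below is a minimal sketch (ours, not the authors' implementation) of one way an Mk Landscape could be represented and evaluated: each subfunction is a lookup table indexed by its k relevant bits, and f(x) is the sum of the table entries. Class and function names are illustrative assumptions.

```python
import random
from typing import List, Sequence

class MkLandscape:
    """f(x) = sum_i f_i(x), where each f_i depends on at most k variables."""

    def __init__(self, masks: List[Sequence[int]], tables: List[Sequence[float]]):
        # masks[i] lists the variable indices subfunction i depends on;
        # tables[i] has one entry per assignment of those variables (2^k values).
        self.masks = masks
        self.tables = tables

    def evaluate(self, x: Sequence[int]) -> float:
        total = 0.0
        for mask, table in zip(self.masks, self.tables):
            # Pack the values of the relevant variables into a table index.
            idx = 0
            for j, var in enumerate(mask):
                idx |= x[var] << j
            total += table[idx]
        return total

def random_instance(n: int, k: int, m: int, seed: int = 0) -> MkLandscape:
    rng = random.Random(seed)
    masks = [rng.sample(range(n), k) for _ in range(m)]
    tables = [[rng.random() for _ in range(2 ** k)] for _ in range(m)]
    return MkLandscape(masks, tables)

if __name__ == "__main__":
    n, k, m = 8, 2, 8              # example sizes (m = n, as in NK-landscapes)
    f = random_instance(n, k, m)
    x = [random.randint(0, 1) for _ in range(n)]
    print("f(x) =", f.evaluate(x))
```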
Slide 6 / 24: Going Multi-Objective: Vector Mk Landscape

• Based on Mk Landscapes: each objective is an Mk Landscape over the same set of variables

[Figure: a Vector Mk Landscape over variables x1…x5; objective f1 is the sum of subfunctions f1^(1)…f1^(5) and objective f2 is the sum of subfunctions f2^(1)…f2^(3)]
Slide 7 / 24: Going Multi-Objective & Constrained

[Figure: a constrained Vector Mk Landscape over variables x1…x5, with objective functions f1 and f2 built from subfunctions f1^(i) and f2^(i), and a constraint function g1 built from subfunctions g1^(1), g1^(2), g1^(3)]

From the paper (Background): In constrained multi-objective optimization there is a vector function f : B^n → R^d to optimize, called the objective function. We assume, without loss of generality, that all the objectives (components of the vector function) are to be maximized. The constraints of the problem are given in the form g(x) ≥ 0, where g : B^n → R^b is a vector function called the constraint function. That is, a solution is feasible if all the components of the vector function g are nonnegative when evaluated in that solution. This type of constraint does not represent a limitation: any other equality or inequality constraint can be expressed in the form g_i(x) ≥ 0, including strict inequality constraints (> and <). The set of feasible solutions of a problem is denoted by X_g = {x ∈ B^n | g(x) ≥ 0}.
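A tiny illustrative sketch (ours) of the feasibility test implied by the definition above: a solution is feasible iff every component of the constraint vector g(x) is nonnegative. The toy constraint function is made up for the example.

```python
from typing import Callable, Sequence

def is_feasible(g: Callable[[Sequence[int]], Sequence[float]], x: Sequence[int]) -> bool:
    """x is feasible iff g(x) >= 0 componentwise (g: B^n -> R^b)."""
    return all(gi >= 0 for gi in g(x))

# Toy constraint function with b = 2 components: at least two ones, at most three ones.
g = lambda x: (sum(x) - 2, 3 - sum(x))
print(is_feasible(g, [1, 1, 0, 0]))  # True
print(is_feasible(g, [1, 0, 0, 0]))  # False
```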
Slide 8 / 24: Scores: definition

• Let us represent a potential move from the current solution with a binary vector v having 1s in the positions that should be flipped
• The score of move v for solution x is the difference between the fitness value of the neighboring solution and that of the current solution
• Scores are useful to identify improving moves: if S_v(x) > 0, v is an improving move

    Current solution, x    Neighboring solution, y    Move, v
    01110101010101001      01111011010101001          00001110000000000
    01110101010101001      00110101110101111          01000000100000110
    01110101010101001      01000101010101001          00110000000000000

Definition (Score). For v, x ∈ B^n and a vector function f : B^n → R^d, the Score of x with respect to move v for function f is

    S_v^{(f)}(x) = f(x ⊕ v) - f(x),

where ⊕ is the exclusive-OR bitwise operation.
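A brute-force sketch (ours) of the score definition, S_v(x) = f(x ⊕ v) - f(x), evaluated by explicitly flipping the bits marked in v. This is the naive baseline, not the incremental scheme the talk builds; the OneMax fitness is only a toy.

```python
from typing import Callable, List, Sequence

def score(f: Callable[[Sequence[int]], float], x: List[int], v: List[int]) -> float:
    """S_v(x) = f(x XOR v) - f(x): fitness change of flipping the bits marked in v."""
    y = [xi ^ vi for xi, vi in zip(x, v)]
    return f(y) - f(x)

# Toy fitness: number of ones (OneMax).
f = lambda x: float(sum(x))
x = [0, 1, 1, 1, 0, 1, 0, 1]
v = [0, 0, 0, 0, 1, 1, 1, 0]   # flip positions 5, 6 and 7
s = score(f, x, v)
print(s, "-> improving move" if s > 0 else "-> not improving")
```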
Slide 9 / 24: Scores update

• The key idea is to compute the scores from scratch once at the beginning and update their values as the solution moves (less expensive)

[Figure: a ball of radius r around the current solution; after the selected improving move is applied, the scores are updated]
Slide 10 / 24: Key facts for efficient scores update

• The key idea is to compute the scores from scratch once at the beginning and update their values as the solution moves (less expensive)
• How can we make this less expensive? We still have O(n^r) scores to update!
• … thanks to two key facts:
  • We don't need all the O(n^r) scores to have complete information about the influence of a move on the objective or constraint (vector) function; O(n) scores are enough
  • Of the scores we keep, only a constant number have to be updated after a move, and each update can be done in constant time
Slide 11 / 24: What is an improving move in MO?

• An improving move is one that provides a solution dominating the current one
• We define strong and weak improving moves
• We are interested in strong improving moves

[Figure: objective space (f1, f2), assuming maximization] Can we safely take only strong improving moves?
Slide 12 / 24: We need to take weak improving moves

• We could miss some higher-order strong improving moves if we don't take weak improving moves

[Figure: score space (S1, S2) with the current solution, two stored move scores, and a not-stored combined move, plus a fragment of the constrained Vector Mk Landscape over x1…x5]

    S^{(f)}_{v1 ∪ v2}(x) = S^{(f)}_{v1}(x) + S^{(f)}_{v2}(x)
Slide 13 / 24: Cycling

• The hill climber can cycle if we take weak improving moves

[Figure: two score-space (S1, S2) diagrams with the current solution and a stored move in each, illustrating how a sequence of weak improving moves can return to a previous solution]
Slide 14 / 24: Solution: weights for the scores

• Given a weight vector w with w_i > 0, a w-improving move is one with w · S_v(x) > 0 (see the sketch below)

[Figure: score space (S1, S2) with the weight vector w; strong improving moves are taken first, w-improving moves second]
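A one-line sketch (ours) of the w-improving test: with score vectors (one component per objective) and a weight vector with strictly positive components, a move is accepted when the weighted sum of its score is positive.

```python
from typing import Sequence

def is_w_improving(score: Sequence[float], w: Sequence[float]) -> bool:
    """A move is w-improving if the weighted sum of its score vector is positive."""
    return sum(wi * si for wi, si in zip(w, score)) > 0

w = (1.0, 1.0)                            # any weight vector with w_i > 0
print(is_w_improving((3.0, -1.0), w))     # weak improving move accepted: True
print(is_w_improving((-2.0, 1.0), w))     # rejected: False
```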
Slide 15 / 24: Feasibility does not depend only on the scores

• In order to classify the moves as feasible or unfeasible we have to check all the scores, even those that have not changed
• This can be done in O(n), but not in O(1) in general (a sketch of the check follows)

[Figure: constraint space (g1, g2) with the feasible region, the current solution, moves v1 and v2, and an unfeasible solution]
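A sketch (ours) of the O(n) classification step described above: assuming the constraint value g(x) of the current solution and the constraint scores S_v^{(g)}(x) of the stored moves are available, a move v is feasible iff g(x) + S_v^{(g)}(x) ≥ 0 componentwise, so every stored move has to be visited even when its score did not change. Data layout and names are ours.

```python
from typing import Dict, List, Sequence, Tuple

def feasible_moves(g_x: Sequence[float],
                   g_scores: Dict[Tuple[int, ...], Sequence[float]]) -> List[Tuple[int, ...]]:
    """Return the stored moves v whose resulting solution satisfies g(x ^ v) >= 0.

    g_x      : constraint values of the current solution, g(x)
    g_scores : map move -> constraint score vector S_v^(g)(x), with g(x ^ v) = g(x) + S_v^(g)(x)
    The loop visits every stored move, hence O(n) stored scores are inspected.
    """
    feasible = []
    for v, s in g_scores.items():
        if all(gi + si >= 0 for gi, si in zip(g_x, s)):
            feasible.append(v)
    return feasible

g_x = (1.0, 0.5)
g_scores = {(1,): (-2.0, 0.0), (3,): (0.5, -0.2), (1, 3): (-1.5, -0.2)}
print(feasible_moves(g_x, g_scores))   # only the moves that stay in g >= 0
```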
Slide 16 / 24: Feasibility in the Hamming Ball

[Figure: a fragment of the constrained Vector Mk Landscape over x3, x4, x5, with objective subfunctions f1^(2), f2^(2), f1^(3) and constraint subfunctions g1^(2), g1^(3)]

    S^{(f)}_{v1 ∪ v2}(x) = S^{(f)}_{v1}(x) + S^{(f)}_{v2}(x)    (1)
    S^{(g)}_{v1 ∪ v2}(x) = S^{(g)}_{v1}(x) + S^{(g)}_{v2}(x)    (2)

[Figure: constraint space (g1, g2) with the feasible region and the current solution; the stored moves v1 and v2 lead to unfeasible stored solutions, while the combined, not-stored move leads to a feasible solution]
Slide 17 / 24: Feasibility in the Hamming Ball

[Figure: constraint space (g1, g2) with the feasible region, the current solution, moves v1 and v2, a weight vector w*, and the w*-feasible region]

• Bad news: this does not work when the current solution is unfeasible
• Worse news: there is no "efficient" way to identify a feasible solution when the current solution is unfeasible
Slide 18 / 24: Hill Climber

[Figure: state diagram of the hill climber over the feasible and unfeasible regions, with transitions labeled "strong improving in g", "w*-improving in g", "solution y", and "otherwise"; the climber stops when there are no w-improving or w*-feasible moves]
Slide 19 / 24: MNK Landscape

Why NKq and not NK? Floating point precision.

• An MNK Landscape is a multi-objective pseudo-Boolean problem where each objective is an NK Landscape (Aguirre, Tanaka, CEC 2004: 196-203)
• An NK-landscape is a pseudo-Boolean optimization problem with objective function

    f(x) = \sum_{l=1}^{N} f^{(l)}(x)

  where each subfunction f^{(l)} depends on variable x_l and K other variables
• In the adjacent model the K other variables are the consecutive ones
• The subfunctions are randomly generated and their values are taken in the range [0, 1]
• In NKq-landscapes the subfunctions take integer values in the range [0, q-1]
• We used shifted NKq-landscapes in the experiments, with values in two ranges (see the sketch below):
  • [-49, 50]: slightly constrained problems (2% of the search space unfeasible)
  • [-50, 49]: highly constrained problems (80% unfeasible)

[Figure: an adjacent NK-landscape f = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x) over variables x1, x2, x3, x4]
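An illustrative sketch (ours, not the experimental generator) of how a shifted NKq subfunction lookup table could be produced: integer values drawn from [0, q-1] and shifted so the codomain becomes, for example, [-49, 50] or [-50, 49]. The value q = 100 in the example is an assumption, not taken from the experiments.

```python
import random
from typing import List

def shifted_nkq_table(k: int, q: int, shift: int, rng: random.Random) -> List[int]:
    """Lookup table of a shifted NKq subfunction with 2^k entries.

    Each entry is an integer drawn uniformly from [0, q-1] and then shifted,
    e.g. q = 100 with shift = -49 gives codomain [-49, 50] (slightly constrained),
    and shift = -50 gives [-50, 49] (highly constrained).
    """
    return [rng.randrange(q) + shift for _ in range(2 ** k)]

rng = random.Random(42)
print(shifted_nkq_table(k=3, q=100, shift=-49, rng=rng))
```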
Slide 20 / 24: Runtime: highly constrained MNK Landscapes

Neighborhood size: 166 billion

[Figure: average time per move (microseconds, 0 to 16,000) vs. N (number of variables in thousands, 1 to 10) for r = 1, 2, 3]
Slide 21 / 24: Runtime: slightly constrained MNK Landscapes

Neighborhood size: 166 trillion

[Figure: average time per move (microseconds, 0 to 1,600) vs. N (number of variables in thousands, 10 to 100) for every combination of r = 1 to 3, d = 1, 2 objectives, and b = 1, 2 constraints]

From the paper: r varies from 1 to 3. These instances are slightly constrained, with around 2% of the search space being unfeasible. We performed 30 independent runs of the algorithm for each configuration, and the results are the average of these 30 runs.
Slide 22 / 24: Source Code

https://github.com/jfrchicanog/EfficientHillClimbers/tree/constrained-multiobjective
Slide 23 / 24: Conclusions and Future Work

Conclusions
• Adding constraints to the Mk Landscapes has a cost in terms of efficiency: from O(1) to O(n) per move
• The space required to store the information is still linear in the size of the problem, n

Future Work
• Generalize to other search spaces
• Combine with high-level algorithms (MOEA/D)
Slide 24 / 24: Acknowledgements

Efficient Hill Climber for Constrained Pseudo-Boolean Optimization Problems
Slide 25 / 24: Examples: 1 and 4

[Figure: the example Mk Landscape f = f^{(1)} + f^{(2)} + f^{(3)} + f^{(4)} over variables x1, x2, x3, x4]

    S_v(x) = f(x ⊕ v) - f(x) = \sum_{l=1}^{m} (f^{(l)}(x ⊕ v) - f^{(l)}(x)) = \sum_{l=1}^{m} S_v^{(l)}(x)

    S_1(x) = f(x ⊕ 1) - f(x)
    S_4(x) = f(x ⊕ 4) - f(x)
Slide 26 / 24: Example: 1,4

[Figure: the example Mk Landscape f = f^{(1)} + f^{(2)} + f^{(3)} + f^{(4)} over variables x1, x2, x3, x4]

    S_1(x) = f(x ⊕ 1) - f(x)
    S_4(x) = f(x ⊕ 4) - f(x)
    S_{1,4}(x) = f(x ⊕ {1,4}) - f(x)
    S_{1,4}(x) = S_1(x) + S_4(x)

We don't need to store S_{1,4}(x), since it can be computed from the others (x1 and x4 are not arguments of any common subfunction).
If neither 1 nor 4 is an improving move, then {1,4} is not an improving move either.
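A tiny numerical check (ours) of the decomposition above, on a made-up function in which x1 and x4 never co-occur in a subfunction: the score of the joint move {1,4} equals S_1(x) + S_4(x).

```python
def f(x):
    # Four toy subfunctions; x1 (index 0) and x4 (index 3) never appear together.
    f1 = x[0] ^ x[1]          # depends on x1, x2
    f2 = x[1] & x[2]          # depends on x2, x3
    f3 = x[2] | x[3]          # depends on x3, x4
    f4 = 1 - x[3]             # depends on x4
    return f1 + f2 + f3 + f4

def flip(x, positions):
    y = list(x)
    for p in positions:
        y[p] ^= 1
    return y

def S(x, positions):
    return f(flip(x, positions)) - f(x)

x = [0, 1, 0, 1]
print(S(x, [0, 3]), "==", S(x, [0]) + S(x, [3]))   # joint score equals the sum
```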
Slide 27 / 24: Quality

• 50% Empirical Attainment Functions (Knowles, ISDA 2005: 552-557)

Figure 5 (from the paper): Average (over 30 runs) solution quality of the best solution found by the Multi-Start Hill Climber based on Algorithm 1 for an MNK Landscape with d = 1, b = 1, 2, subfunction codomain [-49, 50], N = 10,000 to 100,000, and r = 1 to 3. (Single-objective constrained problems, 1 and 2 constraints.)

Figure 6 (from the paper): 50%-empirical attainment surfaces of the 30 runs of the Multi-Start Hill Climber, for the bi-objective constrained problems, with r = 1, 2, 3: (a) N = 10,000, b = 1; (b) N = 50,000, b = 2.
Slide 28 / 24: Example: 1,2

[Figure: an Mk Landscape with subfunctions f^{(1)}, f^{(2)}, f^{(3)} over variables x1, x2, x3, x4; x1 and x2 are both arguments of f^{(1)}]

    S_1(x) = f^{(1)}(x ⊕ 1) - f^{(1)}(x)
    S_2(x) = f^{(1)}(x ⊕ 2) - f^{(1)}(x) + f^{(2)}(x ⊕ 2) - f^{(2)}(x) + f^{(3)}(x ⊕ 2) - f^{(3)}(x)
    S_{1,2}(x) = f^{(1)}(x ⊕ {1,2}) - f^{(1)}(x) + f^{(2)}(x ⊕ {1,2}) - f^{(2)}(x) + f^{(3)}(x ⊕ {1,2}) - f^{(3)}(x)
    S_{1,2}(x) ≠ S_1(x) + S_2(x)

x1 and x2 "interact": they are arguments of the same subfunction, so the score of the joint move does not decompose into the sum of the individual scores.
Slide 29 / 24: Decomposition rule for scores

• When can we decompose a score as the sum of lower-order scores?
• … when the variables in the move can be partitioned into subsets of variables that DON'T interact
• Let us define the co-occurrence graph (the Variable Interaction Graph, VIG): there is an edge between two variables if there exists a subfunction that depends on both variables (they "interact"); a construction sketch follows

[Figure: the example Mk Landscape f = f^{(1)} + f^{(2)} + f^{(3)} + f^{(4)} over x1…x4 and its co-occurrence graph on the vertices x1, x2, x3, x4]
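A minimal sketch (ours) that builds the co-occurrence graph (VIG) from the variable sets of the subfunctions: two variables are adjacent iff some subfunction depends on both. The example masks are chosen for illustration only.

```python
from itertools import combinations
from typing import Dict, List, Sequence, Set

def co_occurrence_graph(n: int, masks: List[Sequence[int]]) -> Dict[int, Set[int]]:
    """Adjacency lists of the VIG: edge (i, j) exists iff some subfunction uses both x_i and x_j."""
    adj: Dict[int, Set[int]] = {i: set() for i in range(n)}
    for mask in masks:
        for i, j in combinations(mask, 2):
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Illustrative masks (our choice): four subfunctions of two variables each.
masks = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(co_occurrence_graph(4, masks))
```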
Slide 30 / 24: Previous work

• Whitley and Chen proposed an O(1) approximate steepest descent for MAX-kSAT and NK-landscapes based on the Walsh decomposition
• For k-bounded pseudo-Boolean functions its complexity is O(k^2 2^k)
• Chen, Whitley, Hains and Howe reduced the time required to identify improving moves to O(k^3) using partial derivatives
• Szeider proved that the exploration of a ball of radius r in MAX-kSAT and kSAT can be done in O(n) if each variable appears in a bounded number of clauses

D. Whitley and W. Chen. Constant time steepest descent local search with lookahead for NK-landscapes and MAX-kSAT. GECCO 2012: 1357–1364
W. Chen, D. Whitley, D. Hains, and A. Howe. Second order partial derivatives for NK-landscapes. GECCO 2013: 503–510
S. Szeider. The parameterized complexity of k-flip local search for SAT and MAX SAT. Discrete Optimization, 8(1):139–145, 2011
Slide 31 / 24: Scores to store

• In terms of the VIG, a score can be decomposed if the subgraph induced by the variables in the move is NOT connected
• We need to store the scores of moves whose variables form a connected subgraph of the VIG (a brute-force enumeration sketch follows)
• The number of these scores (up to radius r) is O((3kc)^r n); details of the proof are in the paper
• With a linear amount of information we can explore a ball of radius r containing O(n^r) solutions

[Figure: the co-occurrence graph over x1, x2, x3, x4 and the score equations from the earlier examples]
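A brute-force sketch (ours) of the storage rule for small examples: enumerate all moves of size at most r and keep those whose variables induce a connected subgraph of the VIG. The actual algorithm enumerates these moves more cleverly, within the O((3kc)^r n) bound; this is only to make the rule concrete.

```python
from itertools import combinations
from typing import Dict, List, Set, Tuple

def is_connected(vertices: Tuple[int, ...], adj: Dict[int, Set[int]]) -> bool:
    """Check that the subgraph of the VIG induced by 'vertices' is connected."""
    vs = set(vertices)
    stack, seen = [vertices[0]], {vertices[0]}
    while stack:
        u = stack.pop()
        for w in adj[u] & vs:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vs

def moves_to_store(n: int, r: int, adj: Dict[int, Set[int]]) -> List[Tuple[int, ...]]:
    """All moves of size <= r whose variables form a connected subgraph of the VIG."""
    stored = []
    for size in range(1, r + 1):
        for vs in combinations(range(n), size):
            if is_connected(vs, adj):
                stored.append(vs)
    return stored

adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # a 4-cycle VIG (illustrative)
print(moves_to_store(4, 2, adj))                      # singletons plus the edges of the cycle
```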
Slide 32 / 24: Scores to update

• Let us assume that x4 is flipped
• Which scores do we need to update? Those that need to evaluate f^{(3)} and f^{(4)}
• That is, the scores of moves containing variables adjacent or equal to x4 in the VIG

[Figure: the example Mk Landscape f = f^{(1)} + f^{(2)} + f^{(3)} + f^{(4)} over x1…x4 and its VIG]
Slide 33 / 24: Scores to update and time required

• The number of neighbors of a variable in the VIG is bounded by c·k
• The number of stored scores in which a variable appears is the number of spanning trees of size at most r with the variable at the root; this number is constant (independent of n)
• The update of each score implies evaluating a constant number of subfunctions that depend on at most k variables (constant), so it requires constant time
• The cost of updating the scores after a move v is O(b(k) (3kc)^r |v|), where b(k) is a bound on the time needed to evaluate any subfunction (an update sketch follows)

[Figure: the example Mk Landscape and its VIG]
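A simplified sketch (ours, not the paper's implementation) of the update step: after flipping x_j, only the scores of stored moves containing x_j or one of its VIG neighbours are refreshed. Here each affected score is recomputed from scratch for clarity; the real algorithm re-evaluates only the subfunctions containing x_j, which is what makes the refresh constant time.

```python
from typing import Dict, List, Sequence, Set, Tuple

def f(x: Sequence[int]) -> int:
    # Toy adjacent NK-like function on x1..x4 (0-indexed), used only for the demo.
    return (x[0] ^ x[1]) + (x[1] & x[2]) + (x[2] | x[3]) + (1 - x[3])

def score_of(x: List[int], v: Tuple[int, ...]) -> int:
    y = list(x)
    for p in v:
        y[p] ^= 1
    return f(y) - f(x)

def update_after_flip(j: int, x: List[int],
                      scores: Dict[Tuple[int, ...], int],
                      adj: Dict[int, Set[int]]) -> None:
    """Flip x_j and refresh only the scores of moves touching x_j or its VIG neighbours."""
    x[j] ^= 1
    region = adj[j] | {j}
    for v in scores:
        if region.intersection(v):
            # Scores of moves outside 'region' are unaffected by the flip and stay valid.
            scores[v] = score_of(x, v)

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}           # VIG of the toy function
x = [0, 1, 0, 1]
scores = {v: score_of(x, v) for v in [(0,), (1,), (2,), (3,), (0, 1), (2, 3)]}
update_after_flip(3, x, scores)                         # flip x4: only moves near x4 are refreshed
print(scores)
```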
Slide 34 / 24: Results: checking the time in the random model

• Random model: the number of subfunctions in which a variable appears, c, is not bounded by a constant
• NKq-landscapes, random model, N = 1,000 to 12,000, K = 1 to 4, q = 2K+1, r = 1 to 4, 30 instances per configuration

[Figures, for K = 3: time in seconds vs. n, and number of scores stored in memory vs. N, for r = 1, 2, 3]
Slide 35 / 24: Scores

[Figure: example Mk Landscapes f = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x) over variables x1, x2, x3, x4]

Slide 36 / 24: Scores

[Figure: the same example annotated with the score equations S_1(x) = f(x ⊕ 1) - f(x), S_4(x) = f(x ⊕ 4) - f(x), S_{1,4}(x) = S_1(x) + S_4(x), and S_1(x) = f^{(1)}(x ⊕ 1) - f^{(1)}(x)]
More Related Content

PPTX
Lecture 6-1543909797
PDF
Ef24836841
PDF
PDF
Parabolic Restricted Three Body Problem
PDF
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
PDF
Paraproducts with general dilations
PDF
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
PDF
Tales on two commuting transformations or flows
Lecture 6-1543909797
Ef24836841
Parabolic Restricted Three Body Problem
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Paraproducts with general dilations
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Tales on two commuting transformations or flows

What's hot (20)

PDF
Introduction to harmonic analysis on groups, links with spatial correlation.
PDF
A Szemerédi-type theorem for subsets of the unit cube
PDF
Csr2011 june16 12_00_wagner
PDF
Multilinear singular integrals with entangled structure
PDF
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
PDF
A multi-objective optimization framework for a second order traffic flow mode...
PDF
Dmss2011 public
PDF
A new solver for the ARZ traffic flow model on a junction
PDF
Solving connectivity problems via basic Linear Algebra
PDF
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
PDF
Scattering theory analogues of several classical estimates in Fourier analysis
PDF
A Szemeredi-type theorem for subsets of the unit cube
PDF
Bayesian regression models and treed Gaussian process models
PDF
CLIM Fall 2017 Course: Statistics for Climate Research, Estimating Curves and...
PDF
A new class of a stable implicit schemes for treatment of stiff
PDF
CLIM Fall 2017 Course: Statistics for Climate Research, Nonstationary Covaria...
PDF
Physics of Algorithms Talk
PDF
New Mathematical Tools for the Financial Sector
PDF
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
PDF
Dressing2011
Introduction to harmonic analysis on groups, links with spatial correlation.
A Szemerédi-type theorem for subsets of the unit cube
Csr2011 june16 12_00_wagner
Multilinear singular integrals with entangled structure
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
A multi-objective optimization framework for a second order traffic flow mode...
Dmss2011 public
A new solver for the ARZ traffic flow model on a junction
Solving connectivity problems via basic Linear Algebra
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
Scattering theory analogues of several classical estimates in Fourier analysis
A Szemeredi-type theorem for subsets of the unit cube
Bayesian regression models and treed Gaussian process models
CLIM Fall 2017 Course: Statistics for Climate Research, Estimating Curves and...
A new class of a stable implicit schemes for treatment of stiff
CLIM Fall 2017 Course: Statistics for Climate Research, Nonstationary Covaria...
Physics of Algorithms Talk
New Mathematical Tools for the Financial Sector
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Dressing2011
Ad

Similar to Efficient Hill Climber for Constrained Pseudo-Boolean Optimization Problems (20)

PDF
Efficient Hill Climber for Multi-Objective Pseudo-Boolean Optimization
PDF
Efficient Identification of Improving Moves in a Ball for Pseudo-Boolean Prob...
PDF
04-local-search.pdf
PPT
local-search and optimization slides.ppt
PDF
hillclimbing in Artificial intelligence.pdf
PPTX
cs-171-05-LocalSearch.pptx
PPT
hillclimb algorithm for heuristic search
PPT
local-search algorithms in Artificial intelligence .ppt
PPTX
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
PPT
dokumen.tips_heuristic-search-techniques-contents-several-general-purpose-sea...
PPT
Hill climbing
PDF
02LocalSearch.pdf
PPTX
UNIT 2 HILLclimbling 19geyebshshsb .pptx
PPT
vdocuments.mx_chapter-3-heuristic-search-techniques-56a314b01c908.ppt
PPTX
Informed Search Techniques new kirti L 8.pptx
PPTX
Heuristic search
PPT
Chapter 03 - artifi HEURISTIC SEARCH.ppt
PPTX
CS4700-LS_v6.pptx
PPTX
Heuristic search
PPT
Searchadditional2
Efficient Hill Climber for Multi-Objective Pseudo-Boolean Optimization
Efficient Identification of Improving Moves in a Ball for Pseudo-Boolean Prob...
04-local-search.pdf
local-search and optimization slides.ppt
hillclimbing in Artificial intelligence.pdf
cs-171-05-LocalSearch.pptx
hillclimb algorithm for heuristic search
local-search algorithms in Artificial intelligence .ppt
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
dokumen.tips_heuristic-search-techniques-contents-several-general-purpose-sea...
Hill climbing
02LocalSearch.pdf
UNIT 2 HILLclimbling 19geyebshshsb .pptx
vdocuments.mx_chapter-3-heuristic-search-techniques-56a314b01c908.ppt
Informed Search Techniques new kirti L 8.pptx
Heuristic search
Chapter 03 - artifi HEURISTIC SEARCH.ppt
CS4700-LS_v6.pptx
Heuristic search
Searchadditional2
Ad

More from jfrchicanog (20)

PDF
Seminario-taller: Introducción a la Ingeniería del Software Guiada or Búsqueda
PDF
Combinando algoritmos exactos y heurísticos para problemas en ISGB
PDF
Quasi-Optimal Recombination Operator
PDF
Uso de CMSA para resolver el problema de selección de requisitos
PDF
Enhancing Partition Crossover with Articulation Points Analysis
PDF
Search-Based Software Project Scheduling
PDF
Dos estrategias de búsqueda anytime basadas en programación lineal entera par...
PDF
Mixed Integer Linear Programming Formulation for the Taxi Sharing Problem
PDF
Descomposición en Landscapes Elementales del Problema de Diseño de Redes de R...
PDF
Optimización Multi-objetivo Basada en Preferencias para la Planificación de P...
PDF
Resolviendo in problema multi-objetivo de selección de requisitos mediante re...
PDF
On the application of SAT solvers for Search Based Software Testing
PDF
Elementary Landscape Decomposition of the Hamiltonian Path Optimization Problem
PDF
Recent Research in Search Based Software Testing
PDF
Problem Understanding through Landscape Theory
PDF
Searching for Liveness Property Violations in Concurrent Systems with ACO
PDF
Finding Safety Errors with ACO
PDF
Elementary Landscape Decomposition of Combinatorial Optimization Problems
PDF
Elementary Landscape Decomposition of Combinatorial Optimization Problems
PDF
Elementary Landscape Decomposition of the Quadratic Assignment Problem
Seminario-taller: Introducción a la Ingeniería del Software Guiada or Búsqueda
Combinando algoritmos exactos y heurísticos para problemas en ISGB
Quasi-Optimal Recombination Operator
Uso de CMSA para resolver el problema de selección de requisitos
Enhancing Partition Crossover with Articulation Points Analysis
Search-Based Software Project Scheduling
Dos estrategias de búsqueda anytime basadas en programación lineal entera par...
Mixed Integer Linear Programming Formulation for the Taxi Sharing Problem
Descomposición en Landscapes Elementales del Problema de Diseño de Redes de R...
Optimización Multi-objetivo Basada en Preferencias para la Planificación de P...
Resolviendo in problema multi-objetivo de selección de requisitos mediante re...
On the application of SAT solvers for Search Based Software Testing
Elementary Landscape Decomposition of the Hamiltonian Path Optimization Problem
Recent Research in Search Based Software Testing
Problem Understanding through Landscape Theory
Searching for Liveness Property Violations in Concurrent Systems with ACO
Finding Safety Errors with ACO
Elementary Landscape Decomposition of Combinatorial Optimization Problems
Elementary Landscape Decomposition of Combinatorial Optimization Problems
Elementary Landscape Decomposition of the Quadratic Assignment Problem

Recently uploaded (20)

PDF
Review of recent advances in non-invasive hemoglobin estimation
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PPTX
Big Data Technologies - Introduction.pptx
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PPTX
Spectroscopy.pptx food analysis technology
PDF
A comparative analysis of optical character recognition models for extracting...
PDF
Unlocking AI with Model Context Protocol (MCP)
PPTX
Cloud computing and distributed systems.
DOCX
The AUB Centre for AI in Media Proposal.docx
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PDF
gpt5_lecture_notes_comprehensive_20250812015547.pdf
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PDF
Encapsulation theory and applications.pdf
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PPTX
Machine Learning_overview_presentation.pptx
PDF
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Review of recent advances in non-invasive hemoglobin estimation
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Big Data Technologies - Introduction.pptx
Mobile App Security Testing_ A Comprehensive Guide.pdf
Spectroscopy.pptx food analysis technology
A comparative analysis of optical character recognition models for extracting...
Unlocking AI with Model Context Protocol (MCP)
Cloud computing and distributed systems.
The AUB Centre for AI in Media Proposal.docx
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
gpt5_lecture_notes_comprehensive_20250812015547.pdf
The Rise and Fall of 3GPP – Time for a Sabbatical?
Building Integrated photovoltaic BIPV_UPV.pdf
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Encapsulation theory and applications.pdf
Dropbox Q2 2025 Financial Results & Investor Presentation
Machine Learning_overview_presentation.pptx
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows

Efficient Hill Climber for Constrained Pseudo-Boolean Optimization Problems

  • 1. 1 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Efficient Hill Climber for Constrained Pseudo-Boolean Optimization Problems Francisco Chicano, Darrell Whitley, Renato Tinós
  • 2. 2 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work r = 1 n r = 2 n 2 r = 3 n 3 r n r Ball Pr i=1 n i S1( r = 1 n r = 2 n 2 r = 3 n 3 r n r Ball Pr i=1 n i • Considering binary strings of length n and Hamming distance… Solutions in a ball of radius r r=1 r=2 r=3 Ball of radius r Previous work Research Question How many solutions at Hamming distance r? If r << n : Θ (nr)
  • 3. 3 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work • We want to find improving moves in a ball of radius r around solution x • What is the computational cost of this exploration? • By complete enumeration: O (nr) if the fitness evaluation is O(1) • Previous work proposed a way to find improving moves in ball of radius r in O(1) (constant time independent of n): Szeider (DO 2011), Whitley and Chen (GECCO 2012), Chen et al. (GECCO 2013), Chicano et al. (GECCO 2014) • The approach was extended to the multi-objective case: Chicano et al. (EvoCOP 2016) • All the previous results are for unconstrained optimization problems Improving moves in a ball of radius r r Ball of radius r Previous work Research Question
  • 4. 4 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Research Question Ball of radius r Previous work Research Question
  • 5. 5 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work • Definition: • where f(i) only depends on k variables (k-bounded epistasis) • We will also assume that the variables are arguments of at most c subfunctions • Example (m=4, n=4, k=2): • Are Mk Landscapes a small subset of problems? Are they interesting? • Max-kSAT is a k-bounded pseudo-Boolean optimization problem • NK-landscapes is a (K+1)-bounded pseudo-Boolean optimization problem • Any compressible pseudo-Boolean function can be reduced to a quadratic pseudo-Boolean function (e.g., Rosenberg, 1975) Mk Landscape (Whitley, GECCO2015: 927-934) Pseudo-Boolean functions Scores Update Decomposition The family of k-bounded pseudo-Boolean Optimization problems have also been described as an embedded landscape. An embedded landscape [3] with bounded epistasis k is de- fined as a function f(x) that can be written as the sum of m subfunctions, each one depending at most on k input variables. That is: f(x) = mX i=1 f(i) (x), (1) where the subfunctions f(i) depend only on k components of x. Embedded Landscapes generalize NK-landscapes and the MAX-kSAT problem. We will consider in this paper that the number of subfunctions is linear in n, that is m 2 O(n). For NK-landscapes m = n and is a common assumption in MAX-kSAT that m 2 O(n). 3. SCORES IN THE HAMMING BALL For v, x 2 Bn , and a pseudo-Boolean function f : Bn ! R, we denote the Score of x with respect to move v as Sv(x), defined as follows:1 Sv(x) = f(x v) f(x), (2) 1 We omit the function f in Sv(x) to simplify the notation. S(l) v (x) = Equation (5) cl change in the mov f(l) the Score of th this subfunction w On the other hand we only need to c changed variables acterized by the m we can write (3) a S 3.1 Scores De The Score value tion than just the c in that ball. Let us balls of radius r = xj are two variabl ments of any subfu f = + + +f(1)(x) f(2)(x) f(3)(x) f(4)(x) x1 x2 x3 x4
  • 6. 6 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work • Based on Mk Landscapes Going Multi-Objective: Vector Mk Landscape Pseudo-Boolean functions Scores Update Decomposition x1 x2 x3 x4 x5 f (1) 1 f (2) 1 f (3) 1 f (4) 1 f (5) 1 f (1) 2 f (2) 2 f (3) 2 (a) Vector Mk Landscape x3x4x5 f1 f2
  • 7. 7 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Going Multi-Objective & Constrained Pseudo-Boolean functions Scores Update Decomposition f1 f2 x1 x2 x3 x4 x5 f (1) 1 f (1) 2 f (2) 1 f (2) 2 f (3) 1 g (1) 1 g (2) 1 g (3) 1g1 Feasible solutions: . BACKGROUND In constrained multi-objective optimization, there is a vec- r function f : Bn ! Rd to optimize, called the objective nction. We will assume, without loss of generality, that l the objectives (components of the vector function) are be maximized. The constraints of the problem will be ven in the form1 g(x) 0, where g : Bn ! Rb is a vec- r function, that will be called constraint function2 . That , a solution is feasible if all the components of the vector nction g are nonnegative when evaluated in that solution. his type of constraints does not represent a limitation. Any her equality or inequality constraint can be expressed in he form gi(x) 0, including those that use strict inequality onstraints (> and <)3 . The set of feasible solutions of a roblem will be denoted by Xg = {x 2 Bn |g(x) 0}. Given a vector function f : Bn ! Rd , we say that solution Objectivefunction Constraint function
  • 8. 8 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work • Let us represent a potential move of the current solution with a binary vector v having 1s in the positions that should be flipped • The score of move v for solution x is the difference in the fitness value of the neighboring and the current solution • Scores are useful to identify improving moves: if Sv(x) > 0, v is an improving move Scores: definition Current solution, x Neighboring solution, y Move, v 01110101010101001 01111011010101001 00001110000000000 01110101010101001 00110101110101111 01000000100000110 01110101010101001 01000101010101001 00110000000000000 Pseudo-Boolean functions Scores Update Decomposition vector Mk Landscape of Figure 1(a). move in Bn can be characterized by a binary n having 1 in all the bits that change in the so Score of a move, has been previously defined ment in the objective function when that move is finition 4 (Score). For v, x 2 Bn , and a on f : Bn ! Rd , we denote the Score of x with ve v for function f as S (f) v (x), defined as follow S(f) v (x) = f(x v) f(x), is the exclusive OR bitwise operation.
  • 9. 9 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work • The key idea is to compute the scores from scratch once at the beginning and update their values as the solution moves (less expensive) Scores update r Selected improving move Update the scores Pseudo-Boolean functions Scores Update Decomposition
  • 10. 10 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work • The key idea is to compute the scores from scratch once at the beginning and update their values as the solution moves (less expensive) • How can we do it less expensive? • We have still O(nr) scores to update! • … thanks to two key facts: • We don’t need all the O(nr) scores to have complete information of the influence of a move in the objective or constraint (vector) function, only O(n) scores • From the ones we need, we only have to update a constant number of them and we can do each update in constant time Key facts for efficient scores update r Pseudo-Boolean functions Scores Update Decomposition
  • 11. 11 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work What is an improving move in MO? • An improving move is one that provides a solution dominating the current one • We define strong and weak improving moves. • We are interested in strong improving moves Taking improving moves Taking feasible moves Algorithm f2 f1 Assuming maximization Can we safely take only strong improving moves?
  • 12. 12 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work We need to take weak improving moves • We could miss some higher order strong improving moves if we don’t take weak improving moves S2 S1 stored stored not stored current solution x1 x2 x3 x4 x5 f2 f2 g (1) 1 g (2) 1 g (3) 1 S (f) v1[v2 (x) = S(f) v1 (x) + S(f) v2 (x) Taking improving moves Taking feasible moves Algorithm
  • 13. 13 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Cycling • We can make the hill climber to cycle if we take weak improving moves S2 S1 stored current solution S2 S1 stored current solution Taking improving moves Taking feasible moves Algorithm
  • 14. 14 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Solution: weights for the scores • Given a weight vector w with wi > 0 S2 S1 w 1st strong improving 2nd w-improving 2nd w-improving A w-improving move is one with w · Sv(x) > 0 Taking improving moves Taking feasible moves Algorithm
  • 15. 15 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Feasibility does not only depends on the scores • In order to classify the moves as feasible or unfeasible we have to check all the scores, even if they have not changed • This can be done in O(n), but not in O(1) in general g2 g1 Unfeasible solution current solution v1 v1 v2 Feasible region Taking improving moves Taking feasible moves Algorithm
  • 16. 16 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Feasibility in the Hamming Ball x3 x4 x5 f (2) 1 f (2) 2 f (3) 1 g (2) 1 g (3) 1 S (f) v1[v2 (x) = S(f) v1 (x) + S(f) v2 (x) (1) S (g) v1[v2 (x) = S(g) v1 (x) + S(g) v2 (x) (2) g2 g1 Unfeasible stored solution current solution v1 v1 v2 Feasible region v2 Unfeasible stored solution Feasible not stored solution Taking improving moves Taking feasible moves Algorithm
  • 17. 17 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Feasibility in the Hamming Ball g2 g1 current solution v1 v1 v2 Feasible region v2 w* w*-feasible • Bad news: this does not work when the current solution is unfeasible • Worse news: there is no “efficient” way to identify a feasible solution when the current solution is unfeasible Taking improving moves Taking feasible moves Algorithm
  • 18. 18 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Unfeasible regionFeasible region No w-improving or w*-feasible moves ▶ stop strong improving in g w*-improving in g solution y otherwise Hill Climber Taking improving moves Taking feasible moves Algorithm
  • 19. 19 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work MNK Landscape Problem Results Source Code Why NKq and not NK? Floating point precision • An MNK Landscape is a multi-objective pseudo-Boolean problem where each objective is an NK Landscape (Aguirre, Tanaka, CEC 2004: 196-203) • An NK-landscape is a pseudo-Boolean optimization problem with objective function: where each subfunction f(l) depends on variable xl and K other variables • The subfunctions are randomly generated and the values are taken in the range [0,1] • In NKq-landscapes the subfunctions take integer values in the range [0,q-1] • We used shifted NKq-landscapes in the experiments with values in two ranges: • [-49, 50] slightly constrained problems (2% unfeasible) • [-50, 49] highly constrained problems (80% unfeasible) f(1) (x) + f(2) (x 2) f(2) (x) + f(3) (x 2) f(3) (x) 2) f(1) (x)+f(2) (x 1, 2) f(2) (x)+f(3) (x 1, 2) f(3) (x) S1,2(x) 6= S1(x) + S2(x) f(x) = NX l=1 f(l) (x) 1 • In the adjacent model the variables are consecutive f = + + +f(1)(x) f(3)(x)f(2)(x) f(4)(x) x1 x2 x3 x4
  • 20. 20 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work 0 2000 4000 6000 8000 10000 12000 14000 16000 1 2 3 4 5 6 7 8 9 10 Averagetimepermove(microseconds) N (number of variables in thousands) r=1 r=2 r=3 Runtime: highly constrained MNK Landscapes Neighborhood size: 166 billion Problem Results Source Code
  • 21. 21 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work ries from 1 to 3. These instances are slightly constrained th around 2% of the search space being unfeasible10 . We rformed 30 independent runs of the algorithm for each nfiguration, and the results are the average of these 30 ns. 0 200 400 600 800 1000 1200 1400 1600 10 20 30 40 50 60 70 80 90 100 Averagetimepermove(microseconds) N (number of variables in thousands) r=1, d=1, b=1 r=1, d=2, b=1 r=1, d=1, b=2 r=1, d=2, b=2 r=2, d=1, b=1 r=2, d=2, b=1 r=2, d=1, b=2 r=2, d=2, b=2 r=3, d=1, b=1 r=3, d=2, b=1 r=3, d=1, b=2 r=3, d=2, b=2 gure 3: Average time per move in microsec- Figure 4 for the rithm 1 fo b = 1, N [ 50, 49], In this c the time p per move Figure 3, times sma the unfeas all the mo time. 5.2 Qu Runtime: slightly constrained MNK Landscapes Neighborhood size: 166 trillion Problem Results Source Code from 1 to 3. These instances are slightly constrained round 2% of the search space being unfeasible10 . We med 30 independent runs of the algorithm for each uration, and the results are the average of these 30 0 200 400 600 800 1000 1200 1400 1600 10 20 30 40 50 60 70 80 90 100 N (number of variables in thousands) r=1, d=1, b=1 r=1, d=2, b=1 r=1, d=1, b=2 r=1, d=2, b=2 r=2, d=1, b=1 r=2, d=2, b=1 r=2, d=1, b=2 r=2, d=2, b=2 r=3, d=1, b=1 r=3, d=2, b=1 r=3, d=1, b=2 r=3, d=2, b=2 0 1 Figure 4: A for the Mul rithm 1 for co b = 1, N = 1 [ 50, 49], and In this case, the time per m per move in th Figure 3, even times smaller) the unfeasible all the moves time. 5.2 Qualit
  • 22. 22 / 24GECCO 2016, Denver, CO, USA, 20-24 July 2016 Genetic and Evolutionary Computation Conference 2016 Conference Program Denver, CO, USA July 20-24, 2016 Introduction Background Hill Climber Experiments Conclusions & Future Work Source Code https://github.com/jfrchicanog/EfficientHillClimbers/tree/constrained-multiobjective Problem Results Source Code
  • 23. Conclusions and Future Work
  • Conclusions:
  • Adding constraints to the MK Landscapes has a cost in terms of efficiency: from O(1) to O(n)
  • The space required to store the information is still linear in the size of the problem, n
  • Future Work:
  • Generalize to other search spaces
  • Combine with high-level algorithms (e.g., MOEA/D)
  • 24. Acknowledgements
  • Efficient Hill Climber for Constrained Pseudo-Boolean Optimization Problems
  • 25. Examples: 1 and 4
  • Score of a move v: S_v(x) = f(x ⊕ v) - f(x) = Σ_{l=1}^{m} (f^{(l)}(x ⊕ v) - f^{(l)}(x)) = Σ_{l=1}^{m} S_v^{(l)}(x)
  • S_1(x) = f(x ⊕ 1) - f(x)
  • S_4(x) = f(x ⊕ 4) - f(x)
  • [Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)}, f^{(4)} over variables x_1, x_2, x_3, x_4]
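As an illustration of the score definition above, here is a minimal sketch (an assumption-level example, not the paper's implementation) that computes S_v(x) by re-evaluating only the subfunctions that contain a flipped variable; the (var_idx, table) representation of subfunctions is hypothetical.

```python
# Minimal sketch (assumption: not the authors' code): S_v(x) = f(x XOR v) - f(x),
# computed by summing the differences of the subfunctions touched by the move.
def subfunction_value(var_idx, table, x):
    """Value of one subfunction: var_idx lists its variables, table its lookup."""
    pattern = 0
    for i in var_idx:
        pattern = (pattern << 1) | x[i]
    return table[pattern]

def score(x, move, subfunctions):
    """S_v(x) for a move given as a set of variable indices to flip.
    subfunctions is a list of (var_idx, table) pairs."""
    y = list(x)
    for i in move:
        y[i] = 1 - y[i]
    s = 0
    for var_idx, table in subfunctions:
        if any(i in move for i in var_idx):  # only touched subfunctions change
            s += subfunction_value(var_idx, table, y) - subfunction_value(var_idx, table, x)
    return s
```

A later sketch uses this helper to check numerically when a score decomposes into lower-order scores.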
  • 26. Example: 1,4
  • S_{1,4}(x) = f(x ⊕ {1,4}) - f(x)
  • Because x_1 and x_4 do not appear together in any subfunction, the score decomposes: S_{1,4}(x) = S_1(x) + S_4(x)
  • We don't need to store S_{1,4}(x), since it can be computed from the others
  • If neither 1 nor 4 is an improving move, then {1,4} will not be an improving move
  • [Figure: subfunctions f^{(1)}, ..., f^{(4)} over x_1, ..., x_4]
  • 27. Quality
  • 50% Empirical Attainment Functions (Knowles, ISDA 2005: 552-557)
  • [Figure 5: average (over 30 runs) solution quality of the best solution found by the Multi-Start Hill Climber based on Algorithm 1 for an MNK Landscape with d = 1, b = 1, 2, subfunctions codomain [-49, 50], N = 10,000 to 100,000, and r = 1 to 3]
  • [Figure 6: 50%-empirical attainment surfaces of the 30 runs of the Multi-Start Hill Climber; panels (a) N = 10,000, b = 1 and (b) N = 50,000, b = 2; objectives f_1 vs. f_2 for r = 1, 2, 3]
  • Annotated cases: single-objective constrained problem; bi-objective constrained problems with 1 and 2 constraints
  • 28. Example: 1,2
  • S_1(x) = f^{(1)}(x ⊕ 1) - f^{(1)}(x)
  • S_2(x) = f^{(1)}(x ⊕ 2) - f^{(1)}(x) + f^{(2)}(x ⊕ 2) - f^{(2)}(x) + f^{(3)}(x ⊕ 2) - f^{(3)}(x)
  • S_{1,2}(x) = f^{(1)}(x ⊕ {1,2}) - f^{(1)}(x) + f^{(2)}(x ⊕ {1,2}) - f^{(2)}(x) + f^{(3)}(x ⊕ {1,2}) - f^{(3)}(x)
  • In general S_{1,2}(x) ≠ S_1(x) + S_2(x): x_1 and x_2 "interact" (some subfunction depends on both of them)
  • [Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)} over x_1, x_2, x_3, x_4]
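The contrast between moves {1,4} and {1,2} can be checked numerically with the hypothetical score() helper sketched earlier; the subfunction tables below are arbitrary values chosen only for illustration.

```python
# Minimal sketch (assumption: reuses the hypothetical score() helper above).
# x1 and x4 never share a subfunction, so S_{1,4} = S_1 + S_4; x1 and x2 share
# f^(1), so in general S_{1,2} != S_1 + S_2.  Variables are 0-indexed: 0 ~ x1.
subfunctions = [((0, 1), [0, 3, 1, 2]),   # f^(1)(x1, x2)
                ((1, 2), [2, 0, 1, 3]),   # f^(2)(x2, x3)
                ((2, 3), [1, 2, 0, 3])]   # f^(3)(x3, x4)
x = [0, 1, 1, 0]

s1 = score(x, {0}, subfunctions)   # flip x1
s2 = score(x, {1}, subfunctions)   # flip x2
s4 = score(x, {3}, subfunctions)   # flip x4

print(score(x, {0, 3}, subfunctions) == s1 + s4)  # True: x1 and x4 do not interact
print(score(x, {0, 1}, subfunctions) == s1 + s2)  # False here: x1 and x2 interact
```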
  • 29. Decomposition rule for scores
  • When can we decompose a score as the sum of lower-order scores?
  • ... when the variables in the move can be partitioned into subsets of variables that DON'T interact
  • Let us define the Co-occurrence Graph (referred to below as the VIG): there is an edge between two variables if there exists a subfunction that depends on both of them (they "interact")
  • [Figure: subfunctions f^{(1)}, ..., f^{(4)} over x_1, ..., x_4 and the resulting graph on x_1, x_2, x_3, x_4; a sketch of its construction is shown below]
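A minimal sketch of building such a graph, assuming the same hypothetical (var_idx, table) subfunction representation as in the earlier sketches:

```python
# Minimal sketch (assumption: not the authors' code): the co-occurrence graph
# (VIG) as an adjacency set per variable; two variables are connected if some
# subfunction depends on both of them.
from itertools import combinations

def build_vig(n_vars, subfunctions):
    """subfunctions is a list of (var_idx, table) pairs; only var_idx is used."""
    adj = {i: set() for i in range(n_vars)}
    for var_idx, _ in subfunctions:
        for a, b in combinations(var_idx, 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

# With f^(1)(x1,x2), f^(2)(x2,x3), f^(3)(x3,x4) the graph is the path x1-x2-x3-x4.
print(build_vig(4, [((0, 1), None), ((1, 2), None), ((2, 3), None)]))
```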
  • 30. Previous work
  • Whitley and Chen proposed an O(1) approximated steepest descent for MAX-kSAT and NK-landscapes based on the Walsh decomposition; for k-bounded pseudo-Boolean functions its complexity is O(k^2 2^k)
  • Chen, Whitley, Hains and Howe reduced the time required to identify improving moves to O(k^3) using partial derivatives
  • Szeider proved that the exploration of a ball of radius r in MAX-kSAT and kSAT can be done in O(n) if each variable appears in a bounded number of clauses
  • D. Whitley and W. Chen. Constant time steepest descent local search with lookahead for NK-landscapes and MAX-kSAT. GECCO 2012: 1357-1364
  • W. Chen, D. Whitley, D. Hains, and A. Howe. Second order partial derivatives for NK-landscapes. GECCO 2013: 503-510
  • S. Szeider. The parameterized complexity of k-flip local search for SAT and MAX SAT. Discrete Optimization, 8(1):139-145, 2011
  • 31. Scores to store
  • In terms of the VIG, a score can be decomposed if the subgraph containing the variables in the move is NOT connected
  • We need to store the scores of moves whose variables form a connected subgraph of the VIG (a sketch of enumerating them follows below)
  • The number of these scores (up to radius r) is O((3kc)^r n); details of the proof are in the paper
  • With a linear amount of information we can explore a ball of radius r containing O(n^r) solutions
  • [Figure: VIG on x_1, x_2, x_3, x_4]
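To make the stored set concrete, here is a minimal sketch (an illustration under the assumptions above, not the algorithm from the paper) that enumerates the moves whose variables induce a connected subgraph of the VIG, up to radius r.

```python
# Minimal sketch (assumption: not the authors' algorithm): every connected set
# of size s > 1 extends some connected set of size s-1 by one VIG neighbour,
# so growing sets from the singletons enumerates all moves that must be stored.
def connected_moves(adj, r):
    """adj: {var: set(neighbours)}; returns frozensets of size <= r."""
    moves = {frozenset([v]) for v in adj}
    frontier = set(moves)
    for _ in range(r - 1):
        new_frontier = set()
        for move in frontier:
            neighbours = set().union(*(adj[v] for v in move)) - move
            for u in neighbours:
                bigger = move | {u}
                if bigger not in moves:
                    moves.add(bigger)
                    new_frontier.add(bigger)
        frontier = new_frontier
    return moves

# Path VIG x1-x2-x3-x4: with r = 2 the stored moves are the 4 singletons plus
# the 3 edges; {x1,x3}, {x1,x4} and {x2,x4} are not connected, hence not stored.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(sorted(m) for m in connected_moves(adj, 2)))
```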
  • 32. Scores to update
  • Let us assume that x_4 is flipped
  • Which scores do we need to update? Those that need to evaluate f^{(3)} and f^{(4)}, the subfunctions that depend on x_4
  • That is, the scores of moves containing variables adjacent or equal to x_4 in the VIG
  • [Figure: subfunctions f^{(1)}, ..., f^{(4)} over x_1, ..., x_4 and the VIG]
  • 33. Scores to update and time required
  • The number of neighbors of a variable in the VIG is bounded by c·k
  • The number of stored scores in which a variable appears is the number of spanning trees of size less than or equal to r with the variable at the root; this number is constant (independent of n)
  • The update of each score implies evaluating a constant number of subfunctions that depend on at most k variables, so it requires constant time
  • Updating the scores after a move v takes O(b(k) (3kc)^r |v|) time, where b(k) is a bound on the time to evaluate any subfunction (a sketch of the update step follows below)
  • [Figure: subfunctions and VIG of the running example]
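The update step can be sketched as follows (an assumption-level illustration building on the hypothetical score(), build_vig() and connected_moves() helpers above, not the authors' data structure): after a flip, only the stored scores whose moves contain the flipped variable or one of its VIG neighbours are recomputed.

```python
# Minimal sketch (assumption: builds on the hypothetical helpers above).
def make_score_index(moves):
    """Map each variable to the stored moves it appears in."""
    index = {}
    for move in moves:
        for v in move:
            index.setdefault(v, []).append(move)
    return index

def flip_and_update(x, flipped, scores, subfunctions, adj, index):
    """Flip one variable in place and refresh only the affected stored scores."""
    x[flipped] = 1 - x[flipped]
    affected = {flipped} | adj[flipped]      # the flipped variable and its VIG neighbours
    to_update = set()
    for v in affected:
        to_update.update(index.get(v, []))
    for move in to_update:
        # score() here scans all subfunctions for clarity; a constant-time
        # version would visit only the subfunctions containing the move's variables.
        scores[move] = score(x, move, subfunctions)
    return scores
```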
  • 34. Results: checking the time in the random model
  • Random model: the number of subfunctions in which a variable appears, c, is not bounded by a constant
  • NKq-landscapes, random model, N = 1,000 to 12,000, K = 1 to 4, q = 2K+1, r = 1 to 4, 30 instances per configuration
  • [Figure (K = 3): time in seconds vs. n for r = 1, 2, 3]
  • [Figure: number of scores stored in memory vs. N for r = 1, 2, 3]
  • 35. Scores
  • [Figure: decomposition f = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x) over variables x_1, x_2, x_3, x_4]
  • 36. Scores
  • S_v(x) = f(x ⊕ v) - f(x) = Σ_{l=1}^{m} (f^{(l)}(x ⊕ v) - f^{(l)}(x)) = Σ_{l=1}^{m} S_v^{(l)}(x)
  • S_1(x) = f(x ⊕ 1) - f(x) = f^{(1)}(x ⊕ 1) - f^{(1)}(x); S_4(x) = f(x ⊕ 4) - f(x)
  • S_{1,4}(x) = f(x ⊕ {1,4}) - f(x) = S_1(x) + S_4(x)
  • [Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)} over x_1, x_2, x_3, x_4]