1 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background Contribution Experiments
Conclusions
& Future Work
Efficient Hill Climber for Multi-Objective
Pseudo-Boolean Optimization
Francisco Chicano, Darrell Whitley, Renato Tinós
2 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Solutions at Hamming distance exactly i:  i = 1: n,  i = 2: C(n, 2),  i = 3: C(n, 3),  …,  i = r: C(n, r)
Ball size: |Ball_r| = Σ_{i=1}^{r} C(n, i)
• Considering binary strings of length n and Hamming distance…
Solutions in a ball of radius r
r=1
r=2
r=3
Ball of radius r Previous work Research Question
How many solutions at Hamming distance r?
If r << n : Θ(n^r)
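A minimal sketch (Python, not part of the original slides) of the ball-size count above; `ball_size` is a hypothetical helper name.

```python
# Sketch (not from the slides): size of a Hamming ball of radius r around a
# binary string of length n, |Ball_r| = sum_{i=1}^{r} C(n, i).
from math import comb

def ball_size(n: int, r: int) -> int:
    """Number of solutions at Hamming distance 1..r."""
    return sum(comb(n, i) for i in range(1, r + 1))

# For r << n the dominant term is C(n, r), hence Theta(n^r):
print(ball_size(100, 3))   # 166750 = 100 + 4950 + 161700
```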
3 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• We want to find improving moves in a ball of radius r around solution x
• What is the computational cost of this exploration?
• By complete enumeration: O(n^r) if the fitness evaluation is O(1)
• In previous work we proposed a way to find improving moves in a ball of radius r in
O(1) (constant time, independent of n)
Improving moves in a ball of radius r
r
Ball of radius r Previous work Research Question
Chicano, Whitley, Sutton, GECCO 2014: 437-444
4 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Research Question
Ball of radius r Previous work Research Question
5 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• Definition:
• where f(i) only depends on k variables (k-bounded epistasis)
• We will also assume that the variables are arguments of at most c subfunctions
• Example (m=4, n=4, k=2):
• Is this set of functions too small? Is it interesting?
• Max-kSAT is a k-bounded pseudo-Boolean optimization problem
• NK-landscapes is a (K+1)-bounded pseudo-Boolean optimization problem
• Any compressible pseudo-Boolean function can be reduced to a quadratic
pseudo-Boolean function (e.g., Rosenberg, 1975)
Mk Landscape (Whitley, GECCO2015: 927-934)
Pseudo-Boolean functions Scores Update Decomposition
The family of k-bounded pseudo-Boolean optimization problems has also been described as an embedded landscape. An embedded landscape [3] with bounded epistasis k is defined as a function f(x) that can be written as the sum of m subfunctions, each one depending at most on k input variables. That is:

f(x) = Σ_{i=1}^{m} f^{(i)}(x),   (1)

where the subfunctions f^{(i)} depend only on k components of x. Embedded landscapes generalize NK-landscapes and the MAX-kSAT problem. We will consider in this paper that the number of subfunctions is linear in n, that is m ∈ O(n). For NK-landscapes m = n, and it is a common assumption in MAX-kSAT that m ∈ O(n).

3. SCORES IN THE HAMMING BALL
For v, x ∈ B^n, and a pseudo-Boolean function f : B^n → ℝ, we denote the Score of x with respect to move v as S_v(x), defined as follows:¹

S_v(x) = f(x ⊕ v) − f(x),   (2)

¹ We omit the function f in S_v(x) to simplify the notation.
f(x) = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x), over variables x1, x2, x3, x4 (each subfunction depends on k = 2 of them)
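The sketch below illustrates an Mk Landscape as a sum of k-bounded subfunctions stored as lookup tables. The variable masks are an assumption for illustration (the slide only shows the figure); the construction itself follows the definition above.

```python
# Sketch of an Mk Landscape with m = 4 subfunctions over n = 4 variables and
# k = 2 variables per subfunction. The masks below are assumed for illustration.
import random

MASKS = [(0, 1), (1, 2), (1, 3), (2, 3)]   # variables read by f^(1)..f^(4)

random.seed(1)
TABLES = [{bits: random.random() for bits in ((0, 0), (0, 1), (1, 0), (1, 1))}
          for _ in MASKS]                  # one lookup table per subfunction

def f(x):
    """f(x) = sum_l f^(l)(x), each f^(l) reading only its k variables."""
    return sum(table[tuple(x[i] for i in mask)]
               for mask, table in zip(MASKS, TABLES))

print(f([0, 1, 1, 0]))
```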
6 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• Based on Mk Landscapes
Going Multi-Objective: Vector Mk Landscape
Pseudo-Boolean functions Scores Update Decomposition
[Figure (a) Vector Mk Landscape: objective f1 = Σ_{i=1}^{5} f_1^{(i)} and objective f2 = Σ_{i=1}^{3} f_2^{(i)}, with all subfunctions defined over the variables x1, …, x5]
7 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• Let us represent a potential move of the current solution with a binary vector v having
1s in the positions that should be flipped
• The score of move v for solution x is the difference in fitness value between the
neighboring solution and the current one
• Scores are useful to identify improving moves: if Sv(x) > 0, v is an improving move
• We keep all the scores in the score vector
Scores: definition
Current solution, x Neighboring solution, y Move, v
01110101010101001 01111011010101001 00001110000000000
01110101010101001 00110101110101111 01000000100000110
01110101010101001 01000101010101001 00110000000000000
Pseudo-Boolean functions Scores Update Decomposition
S_v(x) = f(x ⊕ v) − f(x)
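A brute-force sketch of the score definition above (the whole point of the paper is to avoid recomputing f like this); `onemax` is only a toy objective used to exercise the definition.

```python
# Brute-force sketch: S_v(x) = f(x XOR v) - f(x), with v given as a set of
# bit positions to flip. Positive score means v is an improving move.
def score(f, x, v):
    y = list(x)
    for i in v:
        y[i] = 1 - y[i]
    return f(y) - f(x)

onemax = sum                      # toy objective: count of ones
x = [0, 1, 1, 0, 1]
print(score(onemax, x, {0}))      # +1: improving move
print(score(onemax, x, {1, 2}))   # -2: not improving
```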
8 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• The key idea is to compute the scores from scratch once at the beginning and update
their values as the solution moves (less expensive)
Scores update
r
Selected improving move
Update the score vector
Pseudo-Boolean functions Scores Update Decomposition
9 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• The key idea is to compute the scores from scratch once at the beginning and update
their values as the solution moves (less expensive)
• How can we make it less expensive?
• We still have O(n^r) scores to update!
• … thanks to two key facts:
• We don’t need all the O(n^r) scores to know if there is an improving move
• From the ones we need, we only have to update a constant number of them and
we can do each update in constant time
Key facts for efficient scores update
r
Pseudo-Boolean functions Scores Update Decomposition
10 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Examples: 1 and 4
[Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)}, f^{(4)} over variables x1, x2, x3, x4]
S_v(x) = f(x ⊕ v) − f(x) = Σ_{l=1}^{m} ( f^{(l)}(x ⊕ v) − f^{(l)}(x) ) = Σ_{l=1}^{m} S_v^{(l)}(x)
S_1(x) = f(x ⊕ 1) − f(x)
S_4(x) = f(x ⊕ 4) − f(x)
Pseudo-Boolean functions Scores Update Decomposition
11 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Example: 1,4
[Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)}, f^{(4)} over variables x1, x2, x3, x4]
S_1(x) = f(x ⊕ 1) − f(x)
S_4(x) = f(x ⊕ 4) − f(x)
S_{1,4}(x) = f(x ⊕ {1,4}) − f(x)
S_{1,4}(x) = S_1(x) + S_4(x)
We don’t need to store S_{1,4}(x), since it can be computed from the others
If neither 1 nor 4 is an improving move, {1,4} cannot be an improving move
Pseudo-Boolean functions Scores Update Decomposition
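A small numerical check (toy functions, assumed; not the slides' exact example) that the decomposition S_{1,4}(x) = S_1(x) + S_4(x) holds whenever the two flipped variables share no subfunction.

```python
# Toy check: f = f1(x1,x2) + f2(x2,x3) + f3(x3,x4). Variables x1 and x4 never
# co-occur in a subfunction, so flipping both decomposes into the two scores.
def f(x):
    return (x[0] ^ x[1]) + (x[1] & x[2]) + (x[2] | x[3])

def score(x, v):
    y = list(x)
    for i in v:
        y[i] = 1 - y[i]
    return f(y) - f(x)

for x in ([0, 0, 0, 0], [0, 1, 1, 0], [1, 1, 0, 1]):
    assert score(x, {0, 3}) == score(x, {0}) + score(x, {3})
print("S_{1,4}(x) = S_1(x) + S_4(x) holds: x1 and x4 never co-occur")
```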
12 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Example: 1,2
[Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)} over variables x1, x2, x3, x4; f^{(1)} depends only on x1 and x2]
S_1(x) = f^{(1)}(x ⊕ 1) − f^{(1)}(x)
S_2(x) = f^{(1)}(x ⊕ 2) − f^{(1)}(x) + f^{(2)}(x ⊕ 2) − f^{(2)}(x) + f^{(3)}(x ⊕ 2) − f^{(3)}(x)
S_{1,2}(x) = f^{(1)}(x ⊕ {1,2}) − f^{(1)}(x) + f^{(2)}(x ⊕ {1,2}) − f^{(2)}(x) + f^{(3)}(x ⊕ {1,2}) − f^{(3)}(x)
S_{1,2}(x) ≠ S_1(x) + S_2(x)
x1 and x2
“interact”
Pseudo-Boolean functions Scores Update Decomposition
13 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Decomposition rule for scores
• When can we decompose a score as the sum of lower-order scores?
• … when the variables in the move can be partitioned into subsets of variables that
DON’T interact
• Let us define the Co-occurrence Graph (a.k.a. Variable Interaction Graph, VIG)
[Figure: subfunctions f^{(1)}, …, f^{(4)} over variables x1, …, x4 and the resulting VIG on x1, x2, x3, x4]
There is an edge between two variables
if there exists a function that depends
on both variables (they “interact”)
Pseudo-Boolean functions Scores Update Decomposition
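A sketch of how the co-occurrence graph (VIG) could be built from the subfunction variable masks; the masks passed in the example are an assumption matching the 4-variable figure.

```python
# Sketch: two variables are adjacent in the VIG iff some subfunction
# depends on both of them.
from itertools import combinations

def build_vig(n, masks):
    """masks[l] lists the variables read by subfunction f^(l)."""
    adj = {i: set() for i in range(n)}
    for mask in masks:
        for a, b in combinations(mask, 2):
            adj[a].add(b)
            adj[b].add(a)
    return adj

# Assumed masks for the 4-variable example in the figures.
print(build_vig(4, [(0, 1), (1, 2), (1, 3), (2, 3)]))
# {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
```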
14 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
What is an improving move in MO?
• An improving move is one that provides a solution dominating the current one
• We define strong and weak improving moves.
• We are interested in strong improving moves
Taking improving moves Algorithm
f2
f1 Assuming maximization
Can we safely take only
strong improving moves?
15 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
We need to take weak improving moves
• We could miss some higher order strong improving moves if we don’t take weak
improving moves
[Figure: score space (S1, S2); two stored weak improving moves add up to a strong improving move that is not stored]
S_v(x) = f(x ⊕ v) − f(x)
S_{v1 ∪ v2}(x) = S_{v1}(x) + S_{v2}(x)
Taking improving moves Algorithm
16 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Cycling
• We can make the hill climber cycle if we take weak improving moves
[Figure: two solutions in the score space (S1, S2); the stored weak improving move of each leads to the other, so the hill climber can cycle]
Taking improving moves Algorithm
17 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Solution: weights for the scores
• Given a weight vector w with wi > 0
[Figure: score space (S1, S2) with weight vector w; strong improving moves are taken first, w-improving moves second]
A w-improving move is
one with w · S_v(x) > 0
Taking improving moves Algorithm
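A sketch of the two move classifications used above, assuming maximization as in the slide; it only illustrates the definitions, not the algorithm's tie-breaking rules.

```python
# Strong improving: the score vector dominates 0 (no component worse, at least
# one strictly better). w-improving: w . S_v(x) > 0 for fixed weights w_i > 0.
def is_strong_improving(s):
    return all(si >= 0 for si in s) and any(si > 0 for si in s)

def is_w_improving(s, w):
    return sum(wi * si for wi, si in zip(w, s)) > 0

s = (-1.0, 3.0)                        # improves f2 but worsens f1
print(is_strong_improving(s))          # False
print(is_w_improving(s, (1.0, 1.0)))   # True: taken as a w-improving move
```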
18 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
MNK Landscape
Problem Results Source Code
Why NKq and not NK?
Floating point precision
• An MNK Landscape is a multi-objective pseudo-Boolean problem where each
objective is an NK Landscape (Aguirre, Tanaka, CEC 2004: 196-203)
• An NK-landscape is a pseudo-Boolean optimization problem with objective function:
where each subfunction f^(l) depends on variable x_l and K
other variables
• The subfunctions are randomly generated and their values are taken in the range [0,1]
• In NKq-landscapes the subfunctions take integer values in the range [0,q-1]
• We use NKq-landscapes in the experiments
f(x) = Σ_{l=1}^{N} f^{(l)}(x)
• In the adjacent model the variables of each subfunction are consecutive
[Figure: f(x) = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x) over consecutive variables x1, x2, x3, x4]
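A hedged sketch of a random NKq-landscape in the adjacent model, following the definition above; the concrete value of q used below is arbitrary (the experiments tie q to K).

```python
# Sketch (assumed construction): subfunction f^(l) reads x_l and its K
# successors (indices wrap around) and returns an integer in [0, q-1].
import random
from itertools import product

def random_nkq(n, K, q, seed=0):
    rng = random.Random(seed)
    masks = [tuple((l + j) % n for j in range(K + 1)) for l in range(n)]
    tables = [{bits: rng.randrange(q)
               for bits in product((0, 1), repeat=K + 1)} for _ in range(n)]
    def f(x):
        return sum(t[tuple(x[i] for i in m)] for m, t in zip(masks, tables))
    return f

f = random_nkq(n=8, K=2, q=8)
print(f([0, 1, 0, 1, 1, 0, 0, 1]))
```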
19 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Runtime
[Fig. 2: Average time per move (µs) for the Multi-Start Hill Climber vs. N (number of variables in thousands, 10 to 100), for r = 1, 2, 3 and d = 2, 3]
Neighborhood size: 166 trillion
Problem Results Source Code
20 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Quality
[Figure: 50% empirical attainment surfaces in the objective space (f1, f2) for r = 1, 2, 3]
• 50% Empirical Attainment Functions (Knowles, ISDA 2005: 552-557)
Problem Results Source Code
21 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Source Code
https://github.com/jfrchicanog/EfficientHillClimbers
Problem Results Source Code
22 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Conclusions and Future Work
Conclusions & Future Work
• We can efficiently identify improving moves in a ball
of radius r around a solution in constant time
(independent of n)
• The space required to store the information (scores)
is linear in the size of the problem n
Conclusions
• We can also deal with constrained problems
(GECCO 2016)
• Generalize to other search spaces
• Combine with high-level algorithms
Future Work
23 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Acknowledgements
Efficient Hill Climber for Multi-Objective
Pseudo-Boolean Optimization
24 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
• Whitley and Chen proposed an O(1) approximate steepest descent for MAX-kSAT
and NK-landscapes based on the Walsh decomposition
• For k-bounded pseudo-Boolean functions its complexity is O(k² 2^k)
• Chen, Whitley, Hains and Howe reduced the time required to identify improving
moves to O(k³) using partial derivatives
• Szeider proved that the exploration of a ball of radius r in MAX-kSAT and kSAT can be
done in O(n) if each variable appears in a bounded number of clauses
Previous work
Ball of radius r Improving moves Previous work Research Question
D. Whitley and W. Chen. Constant time steepest descent local search with
lookahead for NK-landscapes and MAX-kSAT. GECCO 2012: 1357–1364
W. Chen, D. Whitley, D. Hains, and A. Howe. Second order partial derivatives
for NK-landscapes. GECCO 2013: 503–510
S. Szeider. The parameterized complexity of k-flip local search for SAT and
MAX SAT. Discrete Optimization, 8(1):139–145, 2011
25 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Scores to store
• In terms of the VIG, a score can be decomposed if the subgraph induced by the
variables in the move is NOT connected
• The number of scores to store (up to radius r) is O((3kc)^r n)
• Details of the proof are in the paper
• With a linear amount of information we can explore a ball of radius r containing O(n^r)
solutions
[Figure: the VIG over x1, x2, x3, x4 shown twice, highlighting that the variables of move {1,2} induce a connected subgraph while those of move {1,4} do not]
S_{1,2}(x) ≠ S_1(x) + S_2(x)
S_{1,4}(x) = S_1(x) + S_4(x)
We need to store the scores of moves whose
variables form a connected subgraph of the VIG
Pseudo-Boolean functions Scores Update Decomposition
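A brute-force sketch (for illustration only; the paper bounds the count by O((3kc)^r n) without enumerating subsets) of which moves must be stored: those whose variables induce a connected subgraph of the VIG.

```python
# Enumerate variable subsets of size <= r that induce a connected subgraph
# of the VIG; only the scores of these moves are kept in memory.
from itertools import combinations

def connected(vars_, adj):
    vars_ = set(vars_)
    seen, frontier = set(), [next(iter(vars_))]
    while frontier:
        v = frontier.pop()
        if v not in seen:
            seen.add(v)
            frontier.extend(adj[v] & vars_)
    return seen == vars_

def stored_moves(adj, r):
    return [c for size in range(1, r + 1)
            for c in combinations(range(len(adj)), size) if connected(c, adj)]

adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}   # assumed 4-variable VIG
print(stored_moves(adj, r=2))   # 4 singletons + the 4 edges; {x1,x3}, {x1,x4} skipped
```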
26 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Scores to update
• Let us assume that x4 is flipped
• Which scores do we need to update?
• Those that need to evaluate f(3) and f(4)
[Figure: subfunctions f^{(1)}, …, f^{(4)} over variables x1, …, x4 and the corresponding VIG]
• The scores of moves containing variables adjacent
or equal to x4 in the VIG
Main idea Decomposition of scores Constant time update
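A sketch of the update rule above: after flipping x_j, only stored moves containing x_j or a VIG neighbour of x_j are updated. The `adj` and `stored` values below reuse the assumed 4-variable example (j = 3 corresponds to flipping x4).

```python
# Only the scores of moves touching the flipped variable or its VIG
# neighbours have to be recomputed.
def moves_to_update(stored, adj, j):
    affected = adj[j] | {j}
    return [m for m in stored if affected & set(m)]

adj = {0: {1}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2}}
stored = [(0,), (1,), (2,), (3,), (0, 1), (1, 2), (1, 3), (2, 3)]
print(moves_to_update(stored, adj, 3))   # every stored move except (0,)
```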
27 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Scores to update and time required
• The number of neighbors of a variable in the VIG is bounded by c k
• The number of stored scores in which a variable appears is the number of spanning
trees of size less than or equal to r with the variable at the root
• This number is constant
• The update of each score implies evaluating a constant number of functions that
depend on at most k variables (constant), so it requires constant time
[Figure: subfunctions f^{(1)}, …, f^{(4)} over variables x1, …, x4 and the corresponding VIG]
O( b(k) (3kc)^r |v| ), where b(k) is a bound for the time to
evaluate any subfunction
Main idea Decomposition of scores Constant time update
28 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Results: checking the time in the random model
• Random model: the number of subfunctions in which a variable appears, c, is not
bounded by a constant
NKq-landscapes
• Random model
• N=1,000 to 12,000
• K=1 to 4
• q=2K+1
• r=1 to 4
• 30 instances per conf.
[Figures (K = 3): time in seconds vs. n, and number of scores stored in memory vs. N, for r = 1, 2, 3 and N = 1,000 to 12,000]
NKq-landscapes Sanity check Random model Next improvement
29 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Scores
Problem Formulation Landscape Theory Decomposition SAT Transf. Results
[Figure: f(x) = f^{(1)}(x) + f^{(2)}(x) + f^{(3)}(x) + f^{(4)}(x) over variables x1, x2, x3, x4]
30 / 23EvoCOP 2016, Porto, Portugal, March-April 2016
Introduction Background
MO Hill
Climber
Experiments
Conclusions
& Future Work
Scores
Problem Formulation Landscape Theory Decomposition SAT Transf. Results
[Figure: subfunctions f^{(1)}, f^{(2)}, f^{(3)} over variables x1, x2, x3, x4; f^{(1)} depends on x1 and x2]
S_v(x) = f(x ⊕ v) − f(x) = Σ_{l=1}^{m} ( f^{(l)}(x ⊕ v) − f^{(l)}(x) ) = Σ_{l=1}^{m} S_v^{(l)}(x)
S_1(x) = f(x ⊕ 1) − f(x)
S_4(x) = f(x ⊕ 4) − f(x)
S_{1,4}(x) = f(x ⊕ {1,4}) − f(x)
S_{1,4}(x) = S_1(x) + S_4(x)
S_1(x) = f^{(1)}(x ⊕ 1) − f^{(1)}(x)