Simulated Annealing
Key idea: Vary temperature parameter, i.e., probability of
accepting worsening moves, in Probabilistic Iterative Improvement
according to annealing schedule (aka cooling schedule).
Inspired by physical annealing process:
- candidate solutions ≅ states of physical system
- evaluation function ≅ thermodynamic energy
- globally optimal solutions ≅ ground states
- parameter T ≅ physical temperature
Note: In physical process (e.g., annealing of metals), perfect
ground states are achieved by very slow lowering of temperature.
Simulated Annealing (SA):
    determine initial candidate solution s
    set initial temperature T according to annealing schedule
    While termination condition is not satisfied:
    |   probabilistically choose a neighbour s' of s
    |       using proposal mechanism
    |   If s' satisfies probabilistic acceptance criterion (depending on T):
    |       s := s'
    |   update T according to annealing schedule
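A minimal Python sketch of this generic scheme, assuming minimisation; the component names init, neighbour, f and schedule are placeholders for problem-specific choices, and the Metropolis condition is used as acceptance criterion (one common instantiation, not the only one). T(t) is assumed to stay positive.

```python
import math
import random

def simulated_annealing(init, neighbour, f, schedule, max_steps):
    """Generic SA skeleton; all components are supplied by the caller.

    init      -- returns an initial candidate solution
    neighbour -- proposal mechanism: maps s to a random neighbour of s
    f         -- evaluation function to be minimised
    schedule  -- annealing schedule: maps step number t to temperature T(t)
    """
    s = init()
    incumbent = s
    for t in range(max_steps):                 # termination condition
        T = schedule(t)                        # update T via schedule
        s_new = neighbour(s)                   # proposal mechanism
        delta = f(s_new) - f(s)
        # Metropolis condition as acceptance criterion: always accept
        # improving steps, accept worsening steps with prob. exp(-delta/T)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            s = s_new
        if f(s) < f(incumbent):
            incumbent = s                      # track best solution seen
    return incumbent
```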
Note:
- 2-stage step function based on
  - proposal mechanism (often uniform random choice from N(s))
  - acceptance criterion (often Metropolis condition)
- Annealing schedule (function mapping run-time t onto temperature T(t); sketched in code below):
  - initial temperature T0 (may depend on properties of given problem instance)
  - temperature update scheme (e.g., geometric cooling: T := α · T)
  - number of search steps to be performed at each temperature (often a multiple of the neighbourhood size)
- Termination predicate: often based on acceptance ratio, i.e., ratio of accepted vs proposed steps.
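As an illustration of these components, here is a sketch of a geometric-cooling schedule with acceptance-ratio bookkeeping; the class name, parameter defaults and interface are illustrative assumptions, not prescribed by the slides.

```python
class GeometricCoolingSchedule:
    """Annealing schedule sketch: initial temperature T0, geometric
    temperature updates, a fixed number of steps per temperature level,
    and an acceptance-ratio statistic for a termination predicate."""

    def __init__(self, T0, alpha=0.95, steps_per_temp=100):
        self.T0 = T0                          # initial temperature
        self.alpha = alpha                    # cooling rate
        self.steps_per_temp = steps_per_temp  # often a multiple of |N(s)|
        self.proposed = 0
        self.accepted = 0

    def __call__(self, t):
        # T(t): one geometric update per completed temperature level
        return self.T0 * self.alpha ** (t // self.steps_per_temp)

    def record(self, was_accepted):
        self.proposed += 1
        self.accepted += was_accepted         # bool counts as 0/1

    def acceptance_ratio(self):
        return self.accepted / max(1, self.proposed)
```

A termination predicate could then stop the search once acceptance_ratio() falls below some threshold.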
Example: Simulated Annealing for the TSP
Extension of the previous PII algorithm for the TSP (a code sketch follows this list), with
- proposal mechanism: uniform random choice from 2-exchange neighbourhood;
- acceptance criterion: Metropolis condition (always accept improving steps, accept worsening steps with probability exp[(f(s) − f(s'))/T]);
- annealing schedule: geometric cooling T := 0.95 · T with n · (n − 1) steps at each temperature (n = number of vertices in given graph), T0 chosen such that 97% of proposed steps are accepted;
- termination: when for five successive temperature values no improvement in solution quality and acceptance ratio < 2%.
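A compact sketch of this instantiation, assuming a symmetric distance matrix dist. For brevity, tour lengths are recomputed from scratch (a real implementation would evaluate 2-exchange moves incrementally), and the T0 calibration and five-temperature termination test are replaced by a fixed number of temperature levels.

```python
import math
import random

def two_exchange(tour):
    """Uniform random 2-exchange move: reverse a random segment."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def sa_tsp(dist, T0, alpha=0.95, n_temps=100):
    n = len(dist)
    s = list(range(n))
    random.shuffle(s)                         # random initial tour
    best, T = s[:], T0
    for _ in range(n_temps):
        for _ in range(n * (n - 1)):          # steps per temperature level
            s_new = two_exchange(s)
            delta = tour_length(s_new, dist) - tour_length(s, dist)
            # Metropolis condition
            if delta <= 0 or random.random() < math.exp(-delta / T):
                s = s_new
                if tour_length(s, dist) < tour_length(best, dist):
                    best = s[:]
        T *= alpha                            # geometric cooling
    return best
```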
Improvements:
- neighbourhood pruning (e.g., candidate lists for the TSP)
- greedy initialisation (e.g., by using the NNH for the TSP)
- low-temperature starts (to prevent good initial candidate solutions from being too easily destroyed by worsening steps)
- look-up tables for acceptance probabilities: instead of computing the exponential function exp(Δ/T) for each step with Δ := f(s) − f(s') (expensive!), use a precomputed table for a range of argument values Δ/T (see the sketch below).
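A sketch of the look-up table idea; range and resolution are illustrative. Here delta denotes f(s') − f(s), i.e., −Δ in the slide's notation, so positive values correspond to worsening steps.

```python
import math

def make_acceptance_table(max_ratio=10.0, resolution=1000):
    """Precompute exp(-x) for x = delta/T in [0, max_ratio] so the
    exponential need not be evaluated in every search step."""
    step = max_ratio / resolution
    table = [math.exp(-i * step) for i in range(resolution + 1)]

    def accept_prob(delta_over_T):
        if delta_over_T <= 0:
            return 1.0                        # improving step: always accept
        if delta_over_T >= max_ratio:
            return table[-1]                  # effectively zero
        return table[int(delta_over_T / step)]

    return accept_prob
```

Usage: build accept = make_acceptance_table() once, then test random.random() < accept(delta / T) in the inner loop.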
Example: Simulated Annealing for graph bipartitioning
- for a given graph G := (V, E), find a partition of the nodes into two sets V1 and V2 such that |V1| = |V2|, V1 ∪ V2 = V, and the number of edges with one endpoint in each of the two sets is minimal
[Figure: example graph with a bipartition into two node sets, A and B]
SA example: graph bipartitioning [Johnson et al., 1989]
- tests were run on random graphs (G_{n,p}) and random geometric graphs (U_{n,d})
- modified cost function (α: imbalance factor; see the code sketch below):
  f(V1, V2) = |{(u,v) ∈ E | u ∈ V1 ∧ v ∈ V2}| + α · (|V1| − |V2|)²
  allows infeasible solutions but penalises the amount of infeasibility
- side advantage: allows the use of 1-exchange neighbourhoods of size O(n) instead of the typical neighbourhood that exchanges two nodes at a time and is of size O(n²)
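A sketch of the modified cost function and a 1-exchange move; it assumes V1 and V2 are Python sets and E is a list of undirected edges, with the cut counted regardless of edge orientation.

```python
def bipartition_cost(edges, V1, V2, alpha):
    """Cut size plus imbalance penalty alpha * (|V1| - |V2|)**2.
    Infeasible (unbalanced) partitions are allowed but penalised."""
    cut = sum(1 for (u, v) in edges
              if (u in V1) != (v in V1))      # endpoints on different sides
    return cut + alpha * (len(V1) - len(V2)) ** 2

def one_exchange(V1, V2, node):
    """1-exchange neighbour: move a single node to the other side.
    Balance is maintained only softly, via the penalty term above."""
    if node in V1:
        return V1 - {node}, V2 | {node}
    return V1 | {node}, V2 - {node}
```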
SA example: graph bipartitioning [Johnson et al., 1989]
- initial solution is chosen randomly
- standard geometric cooling schedule
- experimental comparison to the Kernighan–Lin heuristic
- Simulated Annealing gave better performance on G_{n,p} graphs
- just the opposite is true for U_{n,d} graphs
- several further improvements were proposed and tested

General remark: Although relatively old, Johnson et al.'s experimental investigations of SA are still worth reading in detail!
‘Convergence’ result for SA:
Under certain conditions (extremely slow cooling), any sufficiently long trajectory of SA is guaranteed to end in an optimal solution [Geman and Geman, 1984; Hajek, 1988].

Note:
- Practical relevance for combinatorial problem solving is very limited (impractical nature of necessary conditions).
- In combinatorial problem solving, ending in an optimal solution is typically unimportant; what matters is finding an optimal solution at some point during the search (even if it is encountered only once)!
- SA is historically one of the first SLS methods (metaheuristics)
- raised significant interest due to simplicity, good results, and theoretical properties
- rather simple to implement
- on standard benchmark problems (e.g., TSP, SAT) typically outperformed by more advanced methods (see following ones)
- nevertheless, for some (messy) problems sometimes surprisingly effective
Tabu Search
Key idea: Use aspects of search history (memory) to escape from
local minima.
Simple Tabu Search:
- Associate tabu attributes with candidate solutions or solution components.
- Forbid steps to search positions recently visited by the underlying iterative best improvement procedure, based on tabu attributes.
Tabu Search (TS):
    determine initial candidate solution s
    While termination criterion is not satisfied:
    |   determine set N' of non-tabu neighbours of s
    |   choose a best improving candidate solution s' in N'
    |   update tabu attributes based on s'
    |   s := s'
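A minimal Python sketch of this scheme, including the aspiration criterion discussed below; the components neighbours, step_attributes and the tenure tt are placeholders for problem-specific choices.

```python
def tabu_search(init, neighbours, f, step_attributes, tt, max_steps):
    """Generic Tabu Search skeleton with attribute-based tabu memory.

    neighbours(s)          -- the neighbourhood N(s) of s
    step_attributes(s, s2) -- tabu attributes touched by the step s -> s2
    tt                     -- tabu tenure
    """
    s = init()
    incumbent = s
    tabu_until = {}                           # attribute -> last tabu step
    for step in range(max_steps):
        admissible = [
            s2 for s2 in neighbours(s)
            if all(tabu_until.get(a, -1) < step
                   for a in step_attributes(s, s2))
            or f(s2) < f(incumbent)           # aspiration criterion
        ]
        if not admissible:
            break
        s_new = min(admissible, key=f)        # best admissible neighbour
        for a in step_attributes(s, s_new):
            tabu_until[a] = step + tt         # declare attributes tabu
        s = s_new
        if f(s) < f(incumbent):
            incumbent = s
    return incumbent
```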
Note:
- Non-tabu search positions in N(s) are called admissible neighbours of s.
- After a search step, the current search position or the solution components just added/removed from it are declared tabu for a fixed number of subsequent search steps (tabu tenure).
- Often, an additional aspiration criterion is used: this specifies conditions under which tabu status may be overridden (e.g., if considered step leads to improvement in incumbent solution).
Example: Tabu Search for SAT – GSAT/Tabu (1)
- Search space: set of all truth assignments for propositional variables in given CNF formula F.
- Solution set: models of F.
- Use 1-flip neighbourhood relation, i.e., two truth assignments are neighbours iff they differ in the truth value assigned to one variable.
- Memory: Associate tabu status (Boolean value) with each variable in F.
Example: Tabu Search for SAT – GSAT/Tabu (2)
- Initialisation: random picking, i.e., select uniformly at random from set of all truth assignments.
- Search steps:
  - variables are tabu iff they have been changed in the last tt steps;
  - neighbouring assignments are admissible iff they can be reached by changing the value of a non-tabu variable or have fewer unsatisfied clauses than the best assignment seen so far (aspiration criterion);
  - choose uniformly at random an admissible assignment with minimal number of unsatisfied clauses.
- Termination: upon finding a model of F or after given bound on number of search steps has been reached.
Note:
- GSAT/Tabu used to be state of the art for SAT solving.
- Crucial for efficient implementation:
  - keep time complexity of search steps minimal by using special data structures, incremental updating and caching mechanisms for evaluation function values;
  - efficient determination of tabu status: store for each variable x the number it_x of the search step when its value was last changed; x is tabu iff it − it_x < tt, where it = current search step number (sketched in code below).
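A sketch of one GSAT/Tabu search step using this timestamp scheme. The clause representation (lists of signed 1-based literals) is an assumption, and the naive recount of unsatisfied clauses is a simplification (an efficient implementation would update it incrementally). It further assumes tt is smaller than the number of variables, so some variable is always non-tabu, and that last_flipped is initialised to -tt for every variable.

```python
import random

def num_unsat(assignment, clauses):
    """Count clauses with no satisfied literal (naive, for clarity)."""
    return sum(1 for c in clauses
               if not any((lit > 0) == assignment[abs(lit) - 1] for lit in c))

def gsat_tabu_step(assignment, clauses, last_flipped, step, tt, best_unsat):
    """One search step: flip a variable chosen uniformly at random among
    the admissible flips with minimal number of unsatisfied clauses."""
    candidates = []
    for x in range(len(assignment)):
        flipped = assignment[:]
        flipped[x] = not flipped[x]           # 1-flip neighbour
        u = num_unsat(flipped, clauses)
        non_tabu = step - last_flipped[x] >= tt   # it - it_x >= tt
        if non_tabu or u < best_unsat:        # aspiration criterion
            candidates.append((u, x, flipped))
    u_min = min(u for u, _, _ in candidates)
    _, x, flipped = random.choice([c for c in candidates if c[0] == u_min])
    last_flipped[x] = step                    # timestamp for tabu status
    return flipped, min(best_unsat, u_min)
```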
Note: Performance of Tabu Search depends crucially on setting of tabu tenure tt:
- tt too low ⇒ search stagnates due to inability to escape from local minima;
- tt too high ⇒ search becomes ineffective due to overly restricted search path (admissible neighbourhoods too small)
Advanced TS methods:
- Robust Tabu Search [Taillard, 1991]: repeatedly choose tt from given interval; also: force specific steps that have not been made for a long time.
- Reactive Tabu Search [Battiti and Tecchiolli, 1994]: dynamically adjust tt during search; also: use escape mechanism to overcome stagnation.
Further improvements can be achieved by using intermediate-term
or long-term memory to achieve additional intensification or
diversification.
Examples:
- Occasionally backtrack to elite candidate solutions, i.e., high-quality search positions encountered earlier in the search; when doing this, all associated tabu attributes are cleared.
- Freeze certain solution components and keep them fixed for long periods of the search.
- Occasionally force rarely used solution components to be introduced into the current candidate solution.
- Extend the evaluation function to capture the frequency of use of candidate solutions or solution components.
Tabu search algorithms are state of the art for solving several combinatorial problems, including:
- SAT and MAX-SAT
- the Constraint Satisfaction Problem (CSP)
- several scheduling problems

Crucial factors in many applications:
- choice of neighbourhood relation
- efficient evaluation of candidate solutions (caching and incremental updating mechanisms)
Dynamic Local Search
- Key idea: Modify the evaluation function whenever a local optimum is encountered, in such a way that further improvement steps become possible.
- Associate penalty weights (penalties) with solution components; these determine the impact of components on the evaluation function value.
- Perform Iterative Improvement; when in a local minimum, increase penalties of some solution components until improving steps become available (see the sketch below).
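A minimal Python sketch of this scheme; the component functions and the penalty update rule (here +1 on all components of the current local minimum) are illustrative assumptions, since concrete DLS variants differ in exactly which penalties are increased and by how much.

```python
def dynamic_local_search(init, best_neighbour, f, components, max_steps):
    """DLS skeleton: iterative improvement on a penalty-augmented
    evaluation function g; penalties grow in local minima of g.

    best_neighbour(s, g) -- best neighbour of s w.r.t. evaluation g
    components(s)        -- solution components of candidate solution s
    """
    penalties = {}                            # component -> penalty weight

    def g(s):                                 # modified evaluation function
        return f(s) + sum(penalties.get(c, 0) for c in components(s))

    s = init()
    incumbent = s
    for _ in range(max_steps):
        s_new = best_neighbour(s, g)          # improvement step w.r.t. g
        if g(s_new) < g(s):
            s = s_new
            if f(s) < f(incumbent):           # track quality w.r.t. f
                incumbent = s
        else:
            # local minimum of g: penalise components of s so that
            # further improvement steps become possible
            for c in components(s):
                penalties[c] = penalties.get(c, 0) + 1
    return incumbent
```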