|
|
|
|
Q/A Time |
|
Overview of My Code, Java Concepts |
|
Informed Search Methods |
|
Memory-Bounded Search
|
Iterative Improvement Search |
|
Heuristic Functions |
|
Game-Playing |
|
Worked Examples |
|
|
|
|
|
Abstract classes form “templates” helpful for: |
|
Generalizing problem-solving methods (factoring out common code, enabling reuse)
|
Making the problem formalization clear |
|
Providing a language construct to guide fill-in-the-blank implementation
|
|
|
|
Operator – operator encodings may take on many different forms
|
State – really a “search state” or node; the encoding of the agent’s own “state” is internal and unspecified
|
Searcher – could have added thread support for an anytime algorithm, but kept it simple for now
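
A minimal sketch of how these abstract classes might fit together. The class names Operator, State, and Searcher come from the slides; every method signature below is an assumption, not the actual course code:

    import java.util.List;

    // Hypothetical skeleton; signatures are assumptions.
    abstract class State {
        // The encoding of the agent's state is internal and unspecified.
        public abstract List<Operator> legalOperators(); // operators applicable here
        public abstract boolean isGoal();                // goal test
    }

    abstract class Operator {
        public abstract State apply(State s);  // successor state
        public abstract double cost(State s);  // step cost of applying this operator
    }

    abstract class Searcher {
        // Fill-in-the-blank: each subclass supplies one search strategy.
        public abstract List<Operator> search(State initial);
    }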
|
|
|
|
Best-First Search |
|
Greedy Search |
|
A* |
|
Worked examples |
|
Admissibility, monotonicity, pathmax, optimality, completeness, complexity
|
|
|
|
Describe General Search |
|
Evaluation Function f(n) – desirability of node n for expansion
|
Best-First Search: General Search with the queue ordered by f (a heap/priority queue rather than a FIFO queue, really)
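
A sketch of Best-First Search built on a heap; the mini-graph, class, and method names are all illustrative, and the evaluation function f is passed in as a parameter (records, so Java 16+):

    import java.util.*;
    import java.util.function.ToDoubleFunction;

    public class BestFirst {
        record Node(String state, Node parent, double g) {}

        // Best-first = General Search with the fringe ordered by f(n).
        static Node bestFirst(String start, String goal,
                              Map<String, Map<String, Double>> edges,
                              ToDoubleFunction<Node> f) {
            PriorityQueue<Node> fringe =
                new PriorityQueue<>(Comparator.comparingDouble(f));
            fringe.add(new Node(start, null, 0.0));
            Set<String> closed = new HashSet<>();
            while (!fringe.isEmpty()) {
                Node n = fringe.poll();               // lowest f first
                if (n.state().equals(goal)) return n;
                if (!closed.add(n.state())) continue; // skip repeated states
                for (var e : edges.getOrDefault(n.state(), Map.of()).entrySet())
                    fringe.add(new Node(e.getKey(), n, n.g() + e.getValue()));
            }
            return null; // fringe exhausted: no path
        }

        public static void main(String[] args) {
            Map<String, Map<String, Double>> edges = Map.of(
                "A", Map.of("B", 1.0, "C", 4.0),
                "B", Map.of("C", 1.0));
            // Demo evaluation: f(n) = g(n). Which search is that? (Next slide.)
            Node goal = bestFirst("A", "C", edges, Node::g);
            System.out.println("cost = " + goal.g());  // prints cost = 2.0
        }
    }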
|
|
|
|
Let g(n) = pathCost(n). What search do you have if f(n) = g(n)? |
|
|
|
|
Heuristic Function h(n) – estimated cost of the cheapest path from n to a goal node
|
Greedy Search: Best-First Search with |
|
f(n) = h(n) |
|
“Always choose the node that looks closest to the goal next.”
|
Can think of h as the height of a “search terrain”. In greedy search, you always take the biggest apparent step down from where you are.
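
Greedy search is just the BestFirst sketch above with f(n) = h(n); the h table here is a hypothetical set of straight-line estimates to the goal "C":

    import java.util.Map;

    public class GreedyDemo {
        public static void main(String[] args) {
            // Hypothetical straight-line estimates h(n) to the goal "C".
            Map<String, Double> h = Map.of("A", 2.0, "B", 1.0, "C", 0.0);
            var edges = Map.of(
                "A", Map.of("B", 1.0, "C", 4.0),
                "B", Map.of("C", 1.0));
            // Greedy: order the fringe by h alone; path cost g is ignored.
            var goal = BestFirst.bestFirst("A", "C", edges, n -> h.get(n.state()));
            System.out.println("cost = " + goal.g()); // cost = 4.0: not optimal!
        }
    }

Note that greedy takes the direct edge A-C (it "looks closest") and misses the cheaper path through B.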
|
|
|
|
|
h(n) = straight-line distance to goal city |
|
|
|
|
Path Cost g(n) – cost from initial state to n
|
Uniform Cost Search expands minimum g(n) |
|
Greedy Search expands minimum h(n) |
|
Search algorithm “A” expands minimum estimated total cost f(n) = g(n) + h(n)
|
Worked example |
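
As a runnable counterpart to the worked example, algorithm “A” (A* once h is shown admissible, next slide) is the same skeleton with f(n) = g(n) + h(n), on the same hypothetical graph and h table:

    import java.util.Map;

    public class ADemo {
        public static void main(String[] args) {
            var edges = Map.of(
                "A", Map.of("B", 1.0, "C", 4.0),
                "B", Map.of("C", 1.0));
            Map<String, Double> h = Map.of("A", 2.0, "B", 1.0, "C", 0.0);
            // Algorithm "A": f(n) = g(n) + h(n), estimated total path cost.
            var goal = BestFirst.bestFirst("A", "C", edges,
                                           n -> n.g() + h.get(n.state()));
            System.out.println("cost = " + goal.g()); // cost = 2.0 via A-B-C: optimal
        }
    }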
|
|
|
|
|
Admissible h never overestimates the cost to reach the goal (e.g. route finding, sliding tile puzzle)
|
h admissible ⇒ A is called A*
|
Monotonicity of f – f doesn’t decrease along a path
|
h admissible ⇒ f monotonic? (Not necessarily.)

If h is admissible, the pathmax equation forces monotonicity:

f(n’) = max(f(n), g(n’) + h(n’))
|
Monotonicity → search contours
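
As a code fragment, pathmax is a one-line adjustment applied when a child n’ is generated (a hypothetical helper matching the equation above):

    class Pathmax {
        // f(n') = max(f(n), g(n') + h(n')): f never decreases along a path.
        static double childF(double fParent, double gChild, double hChild) {
            return Math.max(fParent, gChild + hChild);
        }
    }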
|
|
|
|
|
A* is optimal – finds optimal path to goal |
|
A* is optimally efficient – no other algorithm is guaranteed to expand fewer nodes
|
A* is complete on locally finite graphs (finite branching factor, step costs bounded below by a positive constant)
|
Complexity exponential unless h is very accurate (error grows no faster than log of actual path cost)
|
|
|
|
Basic intuition: iterative deepening DFS with an increasing f-limit rather than an increasing depth limit.
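
A compact IDA* sketch on the same hypothetical mini-graph used earlier; the negative return value is just an illustration trick for signalling "solved", not standard practice:

    import java.util.Map;

    public class IDAStar {
        static Map<String, Map<String, Double>> edges = Map.of(
            "A", Map.of("B", 1.0, "C", 4.0),
            "B", Map.of("C", 1.0));
        static Map<String, Double> h = Map.of("A", 2.0, "B", 1.0, "C", 0.0);

        // Depth-first search bounded by an f-limit. Returns -cost if a
        // solution was found, else the smallest f that exceeded the limit.
        static double contour(String s, double g, double limit) {
            double f = g + h.get(s);
            if (f > limit) return f;              // prune; report next contour
            if (s.equals("C")) return -g;         // goal reached within limit
            double next = Double.POSITIVE_INFINITY;
            for (var e : edges.getOrDefault(s, Map.of()).entrySet()) {
                double r = contour(e.getKey(), g + e.getValue(), limit);
                if (r <= 0) return r;             // solution found below us
                next = Math.min(next, r);         // smallest pruned f so far
            }
            return next;
        }

        public static void main(String[] args) {
            double limit = h.get("A");            // first contour: f of the root
            while (limit < Double.POSITIVE_INFINITY) {
                double r = contour("A", 0.0, limit);
                if (r <= 0) { System.out.println("cost = " + -r); return; }
                limit = r;                        // widen to the next f-contour
            }
        }
    }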
|
|
|
|
|
|
|
Fundamental tradeoff |
|
Inventing heuristics: 8-puzzle example |
|
Relaxed problem – a problem with constraints removed
|
Relax: A tile can move from A to B if A is adjacent to B and B is empty.
|
A tile can move from A to B if A is adjacent to B.
|
A tile can move from A to B if B is empty.
|
A tile can move from A to B. |
|
The cost of an exact solution to a relaxed problem can provide a good heuristic h.
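
The two classic 8-puzzle heuristics that fall out of these relaxations, as a sketch; the board encoding (a 9-element array, 0 for the blank) is an assumption:

    public class EightPuzzleHeuristics {
        static final int[] GOAL = {1, 2, 3, 4, 5, 6, 7, 8, 0};

        // From "A tile can move from A to B" (no conditions at all):
        // h1 = number of misplaced tiles.
        static int misplaced(int[] board) {
            int count = 0;
            for (int i = 0; i < 9; i++)
                if (board[i] != 0 && board[i] != GOAL[i]) count++;
            return count;
        }

        // From "... if A is adjacent to B": h2 = total Manhattan distance.
        static int manhattan(int[] board) {
            int sum = 0;
            for (int i = 0; i < 9; i++) {
                if (board[i] == 0) continue;
                int goalPos = board[i] - 1;           // tile t belongs at index t-1
                sum += Math.abs(i / 3 - goalPos / 3)  // rows apart
                     + Math.abs(i % 3 - goalPos % 3); // columns apart
            }
            return sum;
        }

        public static void main(String[] args) {
            int[] b = {1, 2, 3, 4, 5, 6, 7, 0, 8};   // one move from the goal
            System.out.println(misplaced(b) + " " + manhattan(b)); // prints 1 1
        }
    }

Both are admissible because each is the exact solution cost of its relaxed problem.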
|
|
|
|
Pretend tiles can be shifted independently with 4 operators.
|
Pretend subgoals (solve circle, line of dots, “Binary Arts”) can be solved independently with parallel operations.
|
Other ideas? |
|
|
|
|
|
Tradeoff between quality of h and efficiency of search.
|
Extreme cases: |
|
h(n) = 0 (admissible, but equivalent to uniform cost search)
|
h(n) = exact optimal cost remaining (search goes directly to the goal, but the full problem is solved at each step!)
|
|
|
|
Space complexity is the main drawback of A*
|
IDA* – iterative deepening search along f-contours
|
Good for simple problems with few contours. What to do if there are many unique path-cost values?
|
|
|
|
Argument: IDA* uses too little memory |
|
SMA* - Simplified Memory-bounded A* |
|
Korf: SMA* 20x slower on the 15-puzzle
|
Tradeoff: wasted work searching vs. wasted work checking repeated states.
|
Can there be a good balance? |
|
|
|
|
Hill-Climbing – greedy wandering, no queue, no backtracking
|
Should be called “hill-descending” or “local optimization”
|
Gets caught in local minima |
|
Local optimization with random restarts |
|
Local optimization with some percentage of random steps
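
A sketch of local optimization with random restarts, minimizing a made-up 1-D cost landscape; the landscape, step size, and restart count are all arbitrary:

    import java.util.Random;

    public class HillDescend {
        static final Random RNG = new Random(42);

        // A made-up cost landscape with many local minima.
        static double cost(double x) { return x * x / 25.0 + Math.sin(3 * x); }

        // Greedy local optimization: move only to a strictly lower neighbor.
        static double descend(double x) {
            double step = 0.1;
            while (true) {
                double here = cost(x), left = cost(x - step), right = cost(x + step);
                if (left < here && left <= right) x -= step;
                else if (right < here) x += step;
                else return x;                    // local minimum: no downhill step
            }
        }

        public static void main(String[] args) {
            double bestX = 0, bestCost = Double.POSITIVE_INFINITY;
            for (int restart = 0; restart < 20; restart++) {
                double x = descend(-10 + 20 * RNG.nextDouble()); // random start
                if (cost(x) < bestCost) { bestCost = cost(x); bestX = x; }
            }
            System.out.printf("best x = %.2f, cost = %.3f%n", bestX, bestCost);
        }
    }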
|
|
|
|
|
Simulated Annealing – local optimization with some percentage of random steps, the percentage decreasing slowly over time
|
Analogy to statistical mechanics and the process of annealing metals, forming crystalline structures.
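
A simulated-annealing sketch on the same made-up landscape; the acceptance rule exp(-delta/T) is the standard one, but the schedule and constants are arbitrary:

    import java.util.Random;

    public class Anneal {
        static final Random RNG = new Random(7);

        static double cost(double x) { return x * x / 25.0 + Math.sin(3 * x); }

        public static void main(String[] args) {
            double x = -10 + 20 * RNG.nextDouble();
            for (double T = 2.0; T > 1e-3; T *= 0.995) {        // cool slowly
                double candidate = x + RNG.nextDouble() - 0.5;   // random nearby step
                double delta = cost(candidate) - cost(x);
                // Always accept improvements; accept uphill moves with
                // probability exp(-delta / T), which shrinks as T cools.
                if (delta < 0 || RNG.nextDouble() < Math.exp(-delta / T))
                    x = candidate;
            }
            System.out.printf("x = %.2f, cost = %.3f%n", x, cost(x));
        }
    }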
|
Applet demo by Dudu Noy |
|
http://www.math.tau.ac.il/~dudunoy/ex2/tsp.html |
|
Another applet demo by Wesley Perkins: |
|
http://www.starbath.com/Optimization/Gacut.htm |
|
|
|
|
|
Analogy to genetic evolution: |
|
Encoding (of characteristics, behaviors) |
|
Population |
|
Fitness (evaluation function, survival of the fittest)
|
Crossover, mutation from one generation to the next
|
Koza video |
|
Biomorph demo by Jean-Philippe Rennard |
|
http://www.rennard.org/alife/english/jpmain.html?border.html&entete.html&gavgb.html |
|