Chapter03a Annotated PDF

This document discusses problem solving through search algorithms. It describes formulating problems by defining an initial state, possible operators or actions, a goal test, and path costs. Search algorithms explore the state space of a problem by generating successors of explored states to find a solution path from start to goal. Tree search represents the partial exploration as a search tree that tracks explored and unexplored states. Examples discussed include traveling between cities in Romania and solving the 8-puzzle problem.

Problem solving and search

Chapter 3, Sections 3.1–3.4.5

Chapter 3, Sections 3.1–3.4.5 1


Searching to Solve a Problem
♦ Consider some puzzles that you would want an AI to be able to solve:

[Figure: 8-puzzle. Start State: 7 2 4 / 5 _ 6 / 8 3 1; Goal State: _ 1 2 / 3 4 5 / 6 7 8 (_ = blank)]

Chapter 3, Sections 3.1–3.4.5 2


Searching to Solve a Problem
♦ Simple-reflex agents directly map percepts to actions. Therefore, they
cannot operate well in environments where the mapping is too large to store
or takes too long to specify, or where a model is needed.
♦ Goal-based agents can succeed in achieving their goal by considering
actions and the desirability of their outcomes.
♦ Problem-solving agents are goal-based agents that decide what to do by
finding sequences of actions that lead to goal states.

[Figure: 8-puzzle. Start State: 7 2 4 / 5 _ 6 / 8 3 1; Goal State: _ 1 2 / 3 4 5 / 6 7 8 (_ = blank)]

Chapter 3, Sections 3.1–3.4.5 3


Introduction
♦ Simple goal-based agents can solve problems by searching the state space
for a solution, starting from the initial state and terminating when (one of)
the goal state(s) is reached.
♦ The search algorithms can be blind (not using specific info about the
problem) as in Chp. 3 or informed (Chp. 4) using heuristics about the
problem for a more efficient search.
♦ Before we see how we can search the state space of the problem, we need
to decide on what the states and operators of a problem are.
⇒ problem formulation

Chapter 3, Sections 3.1–3.4.5 4


Example: Traveling in Romania
♦ On holiday in Romania; currently in Arad.
♦ Flight leaves tomorrow from Bucharest
♦ You have access to a map.
[Figure: road map of Romania with driving distances in km between neighbouring cities (Arad, Zerind, Oradea, Sibiu, Timisoara, Fagaras, Rimnicu Vilcea, Pitesti, Bucharest, etc.)]

Chapter 3, Sections 3.1–3.4.5 5


Example: Romania
♦ While you can see the solution here easily, the solution will not be obvious
to an agent (or to us) if the graph is huge...
♦ The input may also be given to us as a list of roads from each city.
♦ This is in fact how a robot will see the map, as a list of nodes and edges
between them with associated distances:
Arad to: Zerind, Sibiu, Timisoara
Bucharest to: Pitesti, Giurgiu, Fagaras, Urziceni
Craiova to: Dobreta, Pitesti
Dobreta to: Craiova, Mehadia
Oradea to: Zerind, Sibiu
Zerind to: Oradea, Arad

Chapter 3, Sections 3.1–3.4.5 6


Example: Traveling in Romania
Formulate goal:
be in Bucharest
Formulate problem:
The action sequence needs to be of the form "drive from Arad to ...;
drive from ... to ...; ...; drive from ... to Bucharest".

Hence the states of the robot, abstracted for this problem, are "the
cities where the robot is/may be at".

The corresponding operators taking one state to the other are "driving
between cities".
Find solution: sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest

Chapter 3, Sections 3.1–3.4.5 7


Selecting a state space
Real world is absurdly complex
⇒ state space must be abstracted for problem solving
(Abstract) state = set of real states
(Abstract) operator = complex combination of real actions
e.g., “Arad → Zerind” represents a complex set
of possible routes, detours, rest stops, etc.
(Abstract) solution =
set of real paths that are solutions in the real world

Chapter 3, Sections 3.1–3.4.5 8


Single-state problem formulation
A problem is defined by four items:
initial state e.g., “at Arad”
operators (or successor function S(x))
e.g., Arad → Zerind Arad → Sibiu etc.
goal test, can be
explicit, e.g., x = "at Bucharest"
implicit, e.g., NoDirt(x)
path cost (additive)
e.g., sum of distances, number of operators executed, etc.
A solution is a sequence of operators leading from the initial state to a goal
state.
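The four items can be written down concretely in code. The following Python sketch is illustrative (the class and method names are my own, not the slides'), using a tiny fragment of the Romania map:

```python
# A minimal problem definition illustrating the four items: initial state,
# operators (as a successor function), goal test, and additive path cost.
ROADS = {  # tiny fragment of the Romania map
    "Arad": ["Zerind", "Sibiu", "Timisoara"],
    "Sibiu": ["Fagaras", "Rimnicu Vilcea"],
    "Fagaras": ["Bucharest"],
}

class Problem:
    def __init__(self, initial, goal):
        self.initial = initial
        self.goal = goal

    def successors(self, state):       # operators / successor function S(x)
        return ROADS.get(state, [])

    def goal_test(self, state):        # an explicit goal test
        return state == self.goal

    def step_cost(self, state, nxt):   # path cost is the sum of step costs,
        return 1                       # e.g. number of operators executed

problem = Problem("Arad", "Bucharest")
```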

Chapter 3, Sections 3.1–3.4.5 9


Example: The 8-puzzle
[Figure: an 8-puzzle instance. Start State: 5 4 _ / 6 1 8 / 7 3 2; Goal State: 1 2 3 / 8 _ 4 / 7 6 5 (_ = blank)]

states??
operators??
goal test??
path cost??

Chapter 3, Sections 3.1–3.4.5 10


Example: The 8-puzzle
[Figure: an 8-puzzle instance. Start State: 5 4 _ / 6 1 8 / 7 3 2; Goal State: 1 2 3 / 8 _ 4 / 7 6 5 (_ = blank)]

states??: integer locations of tiles (ignore intermediate positions)


operators??: move blank left, right, up, down (ignore unjamming etc.)
goal test??: = goal state (given)
path cost??: 1 per move
[Note: optimal solution of n-Puzzle family is NP-hard]
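The operators above can be sketched directly. The tuple representation (row-major order, 0 standing for the blank) is my own choice for illustration, not the slide's:

```python
# 8-puzzle successor function: a state is a tuple of 9 integers in
# row-major order, with 0 standing for the blank; operators move the blank.
def successors(state):
    i = state.index(0)                     # where the blank is
    row, col = divmod(i, 3)
    result = []
    for dr, dc, name in [(-1, 0, "up"), (1, 0, "down"),
                         (0, -1, "left"), (0, 1, "right")]:
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:      # the blank must stay on the board
            j = 3 * r + c
            tiles = list(state)
            tiles[i], tiles[j] = tiles[j], tiles[i]   # slide the tile
            result.append((name, tuple(tiles)))
    return result
```

A corner blank has two legal moves, a centre blank four, matching the branching factor of the puzzle.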

Chapter 3, Sections 3.1–3.4.5 11


Example: robotic assembly
[Figure: a robot arm assembly with revolute (R) and prismatic (P) joints]

states??: real-valued coordinates of


robot joint angles
parts of the object to be assembled
operators??: continuous motions of robot joints
goal test??: complete assembly
path cost??: time to execute

Chapter 3, Sections 3.1–3.4.5 12


Implementation of search algorithms
♦ Offline, simulated exploration of state space
by generating successors of already-explored states
(a.k.a. expanding states)
♦ Need to keep track of the partial work: use a search tree

Chapter 3, Sections 3.1–3.4.5 13


Tree search example
[Figure: tree search on the Romania problem.
(a) The initial state: Arad.
(b) After expanding Arad: Sibiu, Timisoara, Zerind.
(c) After expanding Sibiu: Arad, Fagaras, Oradea, Rimnicu Vilcea.]

Chapter 3, Sections 3.1–3.4.5 14


Tree search algorithm

function Tree-Search( problem, strategy) returns a solution, or failure


initialize the search tree using the initial state of problem
loop do
if there are no candidates for expansion then return failure
choose a leaf node for expansion according to strategy
if the node contains a goal state then return the corresponding solution
else expand the node and add the resulting nodes to the search tree
end

Chapter 3, Sections 3.1–3.4.5 15


Implementation: states vs. nodes
A state is a (representation of) a physical configuration
A node is a data structure constituting part of a search tree
includes state, parent, children, operator, depth, path cost g(x)
States do not have parents, children, depth, or path cost!
[Figure: a search-tree node for an 8-puzzle state, with fields parent, depth = 6, path cost g = 6, state, and children]

The Expand function creates new nodes, filling in the various fields and
using the Operators (or SuccessorFn) of the problem to create the
corresponding states.
Note that there are pointers (links) from parent to child, as well as from
child to parent, so that a solution can be recovered once the goal is found.

Chapter 3, Sections 3.1–3.4.5 16


Terminology
♦ depth of a node: number of steps from the root (the root has depth 0)
♦ path cost: cost of the path from the root to the node
♦ expanded node: a node pulled out of the queue and goal-tested (not a goal),
whose children (resulting from the alternative actions) are added to the queue.
It is often used synonymously with visited node. The only tiny difference is
that a visited node may turn out to be the goal state and thus not be expanded,
so the visited count can be one more than the expanded count (you can use the
terms interchangeably, unless asked otherwise).
♦ generated nodes: nodes added to the fringe; different from nodes expanded!

Chapter 3, Sections 3.1–3.4.5 17


Implementation of search algorithms

function Tree-Search( problem, fringe) returns a solution, or failure


fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
loop do
if fringe is empty then return failure
node ← Remove-Front(fringe)
if Goal-Test[problem] applied to State(node) succeeds return node
fringe ← InsertAll(Expand(node, problem), fringe)

function Expand( node, problem) returns a set of nodes


successors ← the empty set
for each action, result in Successor-Fn[problem](State[node]) do
s ← a new Node
Parent-Node[s] ← node; Action[s] ← action; State[s] ← result
Path-Cost[s] ← Path-Cost[node] + Step-Cost(node, action, s)
Depth[s] ← Depth[node] + 1
add s to successors
return successors
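The pseudocode above can be sketched in runnable Python. This is an illustrative transcription (names are my own, and the road fragment is acyclic so that tree search terminates), with a FIFO fringe:

```python
from collections import deque

# Tree-Search with a FIFO fringe. A node is a tuple
# (state, parent_node, action); `successors` maps a state to a list of
# (action, result) pairs, standing in for Successor-Fn.
def tree_search(initial, goal_test, successors):
    fringe = deque([(initial, None, None)])
    while fringe:
        node = fringe.popleft()                  # Remove-Front
        if goal_test(node[0]):                   # goal test on removal
            actions = []
            while node[2] is not None:           # follow child-to-parent links
                actions.append(node[2])
                node = node[1]
            return list(reversed(actions))       # solution: action sequence
        for action, result in successors(node[0]):   # Expand
            fringe.append((result, node, action))
    return None                                  # failure: no candidates left

ROADS = {"Arad": [("to Sibiu", "Sibiu"), ("to Zerind", "Zerind")],
         "Sibiu": [("to Fagaras", "Fagaras")], "Zerind": [],
         "Fagaras": [("to Bucharest", "Bucharest")], "Bucharest": []}
```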

Chapter 3, Sections 3.1–3.4.5 18


Implementation of search algorithms
♦ We need to be able to implement the fringe (also called the frontier)
and the explored / visited lists efficiently.
♦ The fringe should be implemented as a queue:
- Empty?(Fringe) should return True only if there are no elements in
the Fringe.
- RemoveFront(Fringe) removes and returns the "first" element of
the Fringe.
- InsertElement(Fringe, Element) inserts Element and returns the Fringe.
Insertion at the end or the front takes constant time and results in
a FIFO queue or a LIFO queue (a stack), respectively.
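Both disciplines can be sketched with Python's collections.deque, which supports constant-time insertion and removal at either end:

```python
from collections import deque

fringe = deque()
for node in ["A", "B", "C"]:
    fringe.append(node)        # InsertElement at the end, O(1)

front = fringe.popleft()       # RemoveFront: FIFO discipline (a queue)
back = fringe.pop()            # removing from the end instead: LIFO (a stack)
```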

Chapter 3, Sections 3.1–3.4.5 19


Implementation of search algorithms
Certain search algorithms (e.g. Uniform-Cost search) require the insertion
to be done not at the front or end, but according to some priority (e.g. total
cost).
In that case, we need to use a priority queue.
♦ The Explored set can be implemented as a hash table, with roughly
constant insertion and lookup time regardless of the number of states.
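As a sketch, Python's heapq gives a priority-queue fringe and a set gives the hash-table explored list (the city names are just illustrative):

```python
import heapq

# Priority-queue fringe: entries are (path_cost, state) tuples, so the
# least-cost entry is always removed first, in O(log n) per operation.
fringe = []
heapq.heappush(fringe, (140, "Sibiu"))
heapq.heappush(fringe, (75, "Zerind"))
heapq.heappush(fringe, (118, "Timisoara"))
g, state = heapq.heappop(fringe)   # least total cost comes out first

# Explored set as a hash table: O(1) average insert and membership test.
explored = set()
explored.add("Arad")
found = "Arad" in explored
```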
SUMMARY in case you have not taken a data structures course:
♦ You should assume the implementation is done with appropriate data
structures as efficiently as possible.
♦ As the programmer choosing the appropriate search strategy, you are
responsible in reducing the number of explored and generated nodes for the
particular problem at hand.

Chapter 3, Sections 3.1–3.4.5 20


Search strategies
A strategy is defined by picking the order of node expansion
Strategies are evaluated along the following dimensions:
completeness—does it always find a solution if one exists?
time complexity—maximum number of nodes generated/expanded
(the slides mostly use visited (goal test and expand if necessary)
nodes)
space complexity—maximum number of nodes in memory
optimality—does it always find a least-cost solution?
Time and space complexity are measured in terms of
b—maximum branching factor of the search tree (finite)
d—depth of the least-cost solution
m—maximum depth of the state space (may be ∞)

Chapter 3, Sections 3.1–3.4.5 21


Time Complexity
An algorithm's time complexity is often measured asymptotically. Assume
your algorithm "processes" n items. We say that the time T(n) of the
algorithm is O(f(n)) if

∃ k > 0 and n0 such that T(n) ≤ k·f(n), ∀n ≥ n0

(e.g. T(n) is O(n^2))

♦ In simple terms, it identifies the dominating factor in the time spent,
for large enough problem size (n).
♦ The time complexity analysis can be done (separately) for the worst case
and average case

Chapter 3, Sections 3.1–3.4.5 22


Time Complexity
♦ Some problems can be solved in polynomial time (P). These are considered
"easy" problems (e.g. O(n), O(log n) algorithms).
♦ Some problems do not have a polynomial-time solution, but can be
verified in polynomial time if one can guess the solution. They are called
non-deterministic polynomial (NP) problems.
♦ NP-complete problems: those "harder" NP problems such that if you find
a polynomial-time solution for one, you can solve all the other NP problems
(by reducing one problem to another).
♦ Read Appendix pp.977-979 on time complexity

Chapter 3, Sections 3.1–3.4.5 23


Uninformed search strategies
Uninformed strategies use only the information available
in the problem definition
♦ Breadth-first search
♦ Depth-first search
♦ Depth-limited search
♦ Iterative deepening search
♦ Uniform-cost search
♦ Bidirectional search

Chapter 3, Sections 3.1–3.4.5 24


Breadth-first search
Expand shallowest unexpanded node
Implementation:
QueueingFn = first in first out (FIFO)

Chapter 3, Sections 3.1–3.4.5 25


Breadth-first search
[Figure: breadth-first search on a binary tree rooted at A: level 0 (A), then level 1 (B, C), then level 2 (D, E, F, G) are expanded in order.]

Chapter 3, Sections 3.1–3.4.5 26


Breadth-first search
Arad

Chapter 3, Sections 3.1–3.4.5 27


Breadth-first search
Arad

Zerind Sibiu Timisoara

Chapter 3, Sections 3.1–3.4.5 28


Breadth-first search
Arad

Zerind Sibiu Timisoara

Arad Oradea

Chapter 3, Sections 3.1–3.4.5 29


Breadth-first search
Arad

Zerind Sibiu Timisoara

Arad Oradea Arad Oradea Fagaras Rimnicu Vilcea Arad Lugoj

Chapter 3, Sections 3.1–3.4.5 30


Properties of breadth-first search
Complete??
Time??
Space??
Optimal??

Chapter 3, Sections 3.1–3.4.5 31


Properties of breadth-first search
Complete?? Yes (if b is finite - otherwise it may be stuck at generating the
first level)
Time??
Space??
Optimal??

Chapter 3, Sections 3.1–3.4.5 32


Properties of breadth-first search
Complete?? Yes (if b is finite)
Time?? 1 + b + b^2 + b^3 + ... + b^d = O(b^d), i.e., visited nodes is exponential
in d
Time?? b + b^2 + b^3 + ... + b^d + (b^(d+1) − b) = O(b^(d+1)), i.e., generated nodes
is exponential in d + 1
Remember: N = 1 + b + b^2 + ... + b^d = (b^(d+1) − 1)/(b − 1)
♦ The book sometimes gives time complexity in terms of visited and some-
times in terms of generated nodes (of course the longer time is more mean-
ingful). But I will make a distinction between visited and generated nodes so
as to be precise and so that you understand the examples in the book; the
two time formulas can be counted as having the same order (big-Oh).
♦ For comparing different search algorithms we are NOT interested in the
difference between b^d and b^(d+1), but in bigger differences.
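The closed form for N can be checked numerically, e.g. in Python:

```python
# N = 1 + b + b^2 + ... + b^d equals (b^(d+1) - 1)/(b - 1); check for b=10, d=5.
def visited_nodes(b, d):
    return sum(b**i for i in range(d + 1))

b, d = 10, 5
closed_form = (b**(d + 1) - 1) // (b - 1)
```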

Chapter 3, Sections 3.1–3.4.5 33


Properties of breadth-first search
Complete?? Yes (if b is finite)
Time?? 1 + b + b^2 + b^3 + ... + b^d = O(b^d), i.e., visited nodes is exponential
in d
Time?? b + b^2 + b^3 + ... + b^d + (b^(d+1) − b) = O(b^(d+1)), i.e., generated nodes
is exponential in d + 1
Space?? O(b^(d+1))
All nodes in the last level (d) are already explored, so level (d+1) is generated.
Notice that you can also find a precise sum for the space requirement as in
the quiz.

Chapter 3, Sections 3.1–3.4.5 34


Properties of breadth-first search
Complete?? Yes (if b is finite)
Time?? 1 + b + b^2 + b^3 + ... + b^d = O(b^d), i.e., visited nodes is exponential
in d
Time?? b + b^2 + b^3 + ... + b^d + (b^(d+1) − b) = O(b^(d+1)), i.e., generated nodes
is exponential in d + 1
Space?? O(b^(d+1))
Optimal?? Yes (if cost = 1 per step); not optimal in general
Note: BFS finds the shallowest solution; if the shallowest solution is not the
optimal one (step costs are not uniform), then BFS is not optimal.
[Handwritten quiz (Quiz 01): b = 3, solution in 3 steps from the start state; expanded and generated node counts worked out by hand.]

Chapter 3, Sections 3.1–3.4.5 35

Time-Space Requirements
Assuming b = 10 and processing speed of 1000 nodes/second (100 bytes/node).

Depth Nodes Time Memory

0 1 1 millisecond 100 bytes
2 111 0.1 seconds 11 kilobytes
4 11,111 11 seconds 1 megabyte
6 10^6 18 minutes 111 megabytes
8 10^8 31 hours 11 gigabytes
10 10^10 128 days 1 terabyte
12 10^12 35 years 111 terabytes
14 10^14 3500 years 11,111 terabytes

With even small depths (d=12), both time and space are problematic (but
space is the bigger problem).
Chapter 3, Sections 3.1–3.4.5 36
Time-Space Requirements
Exponential complexity search problems cannot be solved for all but smallest
instances!

Chapter 3, Sections 3.1–3.4.5 37


Depth-first search
Expand deepest unexpanded node
Implementation:
QueueingFn = last in first out (LIFO)

Chapter 3, Sections 3.1–3.4.5 38


Depth-first search
[Figure: progress of depth-first search on a binary tree rooted at A: the leftmost branch A, B, D, H is explored first, and the search backtracks through the leaves H, I, ..., O at depth 3.]

[Handwritten note: what do you notice about the space requirements? Efficient! Easy recursive implementation.]

Chapter 3, Sections 3.1–3.4.5 39
Properties of depth-first search
Complete??
Time??
Space??
Optimal??

Chapter 3, Sections 3.1–3.4.5 40


Properties of depth-first search
Complete?? No: fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path
⇒ complete in finite spaces
Time??
Space??
Optimal??

[Handwritten note: e.g. Arad → B → Arad → B → ...; complete if we do not repeat states along a path. m = maximum depth of the tree.]

Chapter 3, Sections 3.1–3.4.5 41

Properties of depth-first search


Complete?? No: fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path
⇒ complete in finite spaces
Time?? O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than breadth-first
Space??
Optimal??
Notice here that you can find the big-Oh answer by considering the number
of nodes in the last level that needs to be considered.

Chapter 3, Sections 3.1–3.4.5 42


Properties of depth-first search
Complete?? No: fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path
⇒ complete in finite spaces
Time?? O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than breadth-first
Space?? O(bm), i.e., linear space!
Optimal??
Why? Calculate the size of the Queue assuming that the left-most branch
has the maximum depth, m. Now reason that the Queue will never get
bigger, wherever the solution may be.

Chapter 3, Sections 3.1–3.4.5 43


Properties of depth-first search
Complete?? No: fails in infinite-depth spaces, spaces with loops
Modify to avoid repeated states along path
⇒ complete in finite spaces
Time?? O(b^m): terrible if m is much larger than d
but if solutions are dense, may be much faster than breadth-first
Space?? O(bm), i.e., linear space!
Optimal?? No
[Handwritten: it will find the first solution encountered.]

Chapter 3, Sections 3.1–3.4.5 44


Depth-limited search
= depth-first search with depth limit l:
nodes at depth l are treated as if they have no successors

E.g. when we know that there are 20 cities on the map of Romania, there is
no need to look beyond depth 19. Compare with the diameter of a problem.
Implementation:
Nodes at depth l have no successors

Chapter 3, Sections 3.1–3.4.5 45


Depth-limited search - properties
Similar to DFS.
Complete?? Yes, if l ≥ d
Time?? O(b^l)
Space?? O(bl)
Optimal?? No

Chapter 3, Sections 3.1–3.4.5 46


Iterative deepening search
Can we do away with trying to estimate the limit?
[Handwritten note: try l = 0; if not solved, l = 1; if not solved, l = 2; ...]
Chapter 3, Sections 3.1–3.4.5 47


Iterative deepening search

function Iterative-Deepening-Search( problem) returns a solution sequence

inputs: problem, a problem
for depth ← 0 to ∞ do
result ← Depth-Limited-Search( problem, depth)
if result ≠ cutoff then return result
end

cutoff: no solution within the depth limit
failure: no solution at all
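A runnable sketch of this pair of functions (names and the road fragment are illustrative; "cutoff" is a sentinel distinct from None, which signals failure):

```python
# Depth-limited search: returns a path, None (failure), or "cutoff"
# (the depth limit was hit somewhere, so a deeper search may still succeed).
def depth_limited(state, goal_test, successors, limit):
    if goal_test(state):
        return [state]
    if limit == 0:
        return "cutoff"
    cutoff_occurred = False
    for _action, result in successors(state):
        outcome = depth_limited(result, goal_test, successors, limit - 1)
        if outcome == "cutoff":
            cutoff_occurred = True
        elif outcome is not None:
            return [state] + outcome
    return "cutoff" if cutoff_occurred else None

def iterative_deepening(state, goal_test, successors, max_depth=50):
    for depth in range(max_depth + 1):       # depth = 0, 1, 2, ...
        outcome = depth_limited(state, goal_test, successors, depth)
        if outcome != "cutoff":
            return outcome                   # a solution path, or None
    return "cutoff"

ROADS = {"Arad": [("go", "Sibiu"), ("go", "Zerind")],
         "Sibiu": [("go", "Fagaras")], "Zerind": [],
         "Fagaras": [("go", "Bucharest")], "Bucharest": []}
```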

Chapter 3, Sections 3.1–3.4.5 48


Iterative deepening search l = 0

Arad

Chapter 3, Sections 3.1–3.4.5 49


Iterative deepening search l = 1

Arad

Zerind Sibiu Timisoara

Chapter 3, Sections 3.1–3.4.5 50


Iterative deepening search l = 1

Arad

Zerind Sibiu Timisoara

Arad Oradea

Chapter 3, Sections 3.1–3.4.5 51


Iterative deepening search l = 2

Arad

Zerind Sibiu Timisoara

Arad Oradea Arad Oradea Fagaras Rimnicu Vilcea

Chapter 3, Sections 3.1–3.4.5 52


Iterative deepening search l = 2

Arad

Zerind Sibiu Timisoara

Arad Oradea Arad Oradea Fagaras Rimnicu Vilcea Arad Lugoj

Chapter 3, Sections 3.1–3.4.5 53


Properties of iterative deepening search
Complete??
Time??
Space??
Optimal??

Chapter 3, Sections 3.1–3.4.5 54


Properties of iterative deepening search
Complete?? Yes
Time??
Space??
Optimal??

Chapter 3, Sections 3.1–3.4.5 55


Properties of iterative deepening search

Complete?? Yes
Time?? (d + 1)b^0 + d·b^1 + (d − 1)b^2 + ... + b^d = O(b^d)
Space??
Optimal??
Note: number of visited nodes. [Handwritten: level i is generated in every iteration with limit ≥ i, i.e., up to d times.]

Chapter 3, Sections 3.1–3.4.5 56
Properties of iterative deepening search
Complete?? Yes
Time?? (d + 1)b^0 + d·b^1 + (d − 1)b^2 + ... + b^d = O(b^d)
Space?? O(bd)
Optimal??

Chapter 3, Sections 3.1–3.4.5 57


Properties of iterative deepening search
Complete?? Yes
Time?? (d + 1)b^0 + d·b^1 + (d − 1)b^2 + ... + b^d = O(b^d)
Space?? O(bd)
Optimal?? Yes, if step cost = 1; not optimal in general

Chapter 3, Sections 3.1–3.4.5 58


Properties of iterative deepening search
The higher the branching factor, the lower the overhead of repeatedly
expanded states (the number of leaves dominates).
Number of generated nodes for b = 10 and d = 5 :
N (IDS) = 50 + 400 + 3, 000 + 20, 000 + 100, 000 = 123, 450
N (BFS) = 10 + 100 + 1, 000 + 10, 000 + 100, 000 + 999, 990 = 1, 111, 100
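These counts can be reproduced directly: under IDS, the nodes at level i are regenerated (d − i + 1) times, while BFS additionally generates the b^(d+1) − b nodes below the goal level.

```python
b, d = 10, 5

# IDS: the b**i nodes at level i are generated (d - i + 1) times.
n_ids = sum((d - i + 1) * b**i for i in range(1, d + 1))

# BFS generated nodes: levels 1..d, plus b^(d+1) - b at level d + 1.
n_bfs = sum(b**i for i in range(1, d + 1)) + (b**(d + 1) - b)
```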

IDS is the preferred method when there is a large search space and the depth of the
solution is not known.
Chapter 3, Sections 3.1–3.4.5 59


Romania with step costs in km
BFS finds the shallowest goal state. What if we have a more general path
cost?
[Figure: map of Romania with step costs in km, together with straight-line distances to Bucharest (e.g. Arad 366, Sibiu 253, Pitesti 98, Bucharest 0).]
Chapter 3, Sections 3.1–3.4.5 60


Uniform-cost search
Expand least-cost (path cost) unexpanded node
Implementation:
QueueingFn = insert in order of increasing path cost
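As a sketch, this queueing discipline can be realised with a heapq-based fringe; the function names are illustrative, and the graph fragment uses distances from the Romania map:

```python
import heapq

# Uniform-cost search: the fringe is a priority queue ordered by path
# cost g. Goal-testing on removal ensures the first goal taken off the
# queue is reached by a cheapest path.
def uniform_cost(graph, start, goal):
    fringe = [(0, start, [start])]           # entries are (g, state, path)
    expanded = {}                            # cheapest g at which a state was expanded
    while fringe:
        g, state, path = heapq.heappop(fringe)
        if state == goal:
            return g, path
        if expanded.get(state, float("inf")) <= g:
            continue                         # already expanded more cheaply
        expanded[state] = g
        for nxt, cost in graph.get(state, []):
            heapq.heappush(fringe, (g + cost, nxt, path + [nxt]))
    return None

GRAPH = {"Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
```

From Sibiu, the Fagaras route costs 99 + 211 = 310 while Rimnicu Vilcea and Pitesti cost 80 + 97 + 101 = 278, so the search returns the latter.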

Chapter 3, Sections 3.1–3.4.5 61


Uniform-cost search
Arad

Chapter 3, Sections 3.1–3.4.5 62


Uniform-cost search
Arad

75 140 118

Zerind Sibiu Timisoara

Chapter 3, Sections 3.1–3.4.5 63


Uniform-cost search
Arad

75 140 118

Zerind Sibiu Timisoara

75 71

Arad Oradea

Chapter 3, Sections 3.1–3.4.5 64


Uniform-cost search
[Figure: uniform-cost search tree. Arad is expanded to Zerind (75), Sibiu (140), Timisoara (118); Zerind is then expanded to Arad and Oradea, and Timisoara to Arad and Lugoj. Handwritten: path cost to Oradea = 75 + 71 = 146.]
Chapter 3, Sections 3.1–3.4.5 65


Properties of uniform-cost search
Complete??
Time??
Space??
Optimal??
Note: What would happen if some paths had negative costs?

Chapter 3, Sections 3.1–3.4.5 66


Properties of uniform-cost search
Complete?? Yes, if step cost ≥ ε > 0 (so that g is nondecreasing along a path)
Time??
Space??
Optimal??
Note: What would happen if some paths had negative costs?

Chapter 3, Sections 3.1–3.4.5 67


Properties of uniform-cost search
Complete?? Yes, if step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution
Space??
Optimal??
If each step costs at least ε > 0, then the time complexity is O(b^⌈C*/ε⌉), where
the optimum solution has cost C*.
Why?

Chapter 3, Sections 3.1–3.4.5 68


Properties of uniform-cost search
Complete?? Yes, if step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution
Space??
Optimal??
If each step costs at least ε > 0, then the time complexity is O(b^⌈C*/ε⌉), where
the optimum solution has cost C*.
Why? Since the optimum solution is at a maximum depth of ⌈C*/ε⌉.

Chapter 3, Sections 3.1–3.4.5 69


Properties of uniform-cost search
Complete?? Yes, if step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution
Space?? # of nodes with g ≤ cost of optimal solution
Optimal??

Chapter 3, Sections 3.1–3.4.5 70


Properties of uniform-cost search
Complete?? Yes, if step cost ≥ ε
Time?? # of nodes with g ≤ cost of optimal solution
Space?? # of nodes with g ≤ cost of optimal solution
Optimal?? Yes with the right implementation (Fig. 3.14) which checks if a
shorter path is found to a node in the frontier.
[Handwritten sketch: the costlier of two paths to the same node (800 vs. 500) won't be expanded.]

Chapter 3, Sections 3.1–3.4.5 71
Optimality of uniform-cost search
When a path to Bucharest is found via Fagaras, and later a cheaper one via
Pitesti, the Bucharest node is replaced to reflect the newly found path, so
that when it is taken from the fringe and goal-tested, we find the right path.
[Figure: subgraph Sibiu → Fagaras (99) → Bucharest (211), and Sibiu → Rimnicu Vilcea (80) → Pitesti (97) → Bucharest (101).]

Chapter 3, Sections 3.1–3.4.5 72


BFS versus uniform-cost search
Uniform-cost search becomes breadth-first search when the path cost function
g(n) is Depth(n), or, equivalently, when all the step costs are equal.

Chapter 3, Sections 3.1–3.4.5 73


Bidirectional search
Simultaneously search both forward from the initial state and backward from
the goal state.

Start Goal


âna
¥ü÷ Chapter 3, Sections 3.1–3.4.5 74
Bidirectional search
Need to define predecessors
Operators may not be reversible
What if there are many goal states?

Chapter 3, Sections 3.1–3.4.5 75


Bidirectional search
Time?
Space?

Chapter 3, Sections 3.1–3.4.5 76


Bidirectional search

Time? O(b^(d/2))
Space? O(b^(d/2))
For b = 10, d = 6, BFS vs. BDS: a million vs. 2222 nodes.

Chapter 3, Sections 3.1–3.4.5 77


Summary of algorithms

Criterion: Breadth-First / Uniform-Cost / Depth-First / Depth-Limited / Iterative Deepening

Complete? Yes* / Yes* / No / Yes, if l ≥ d / Yes
Time: b^(d+1) / b^⌈C*/ε⌉ / b^m / b^l / b^d
Space: b^(d+1) / b^⌈C*/ε⌉ / bm / bl / bd
Optimal? Yes* / Yes* / No / No / Yes

Note that Yes* and No are not that different: neither guarantees
completeness; they differ only in the strength of their assumptions (b is
finite, the max. depth is finite, etc.)

Chapter 3, Sections 3.1–3.4.5 78


Repeated states
Failure to detect repeated states can turn a linear problem into an exponential
one, even for non-looping problems!
[Figure: (a) a small state space (A, B, C) and (b), (c) the much larger search trees that arise when repeated states are not detected.]

Solution: Remember every visited state using a graph search.

Chapter 3, Sections 3.1–3.4.5 79


Graph search

function Graph-Search( problem, fringe) returns a solution, or failure


closed ← an empty set
fringe ← Insert(Make-Node(Initial-State[problem]), fringe)
loop do
if fringe is empty then return failure
node ← Remove-Front(fringe)
if Goal-Test[problem](State[node]) then return node
if State[node] is not in closed then
add State[node] to closed
fringe ← InsertAll(Expand(node, problem), fringe)
end

Compare this to the one in slide 27.
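A minimal Python sketch of Graph-Search with a FIFO fringe (the closed set stores states, not nodes; the looping road fragment is illustrative):

```python
from collections import deque

def graph_search(initial, goal_test, successors):
    closed = set()                    # states already expanded
    fringe = deque([initial])
    while fringe:
        state = fringe.popleft()
        if goal_test(state):
            return state
        if state not in closed:       # expand each state at most once
            closed.add(state)
            for _action, result in successors(state):
                fringe.append(result)
    return None                       # failure: fringe exhausted

# A graph with a loop (Arad <-> Zerind): Tree-Search would regenerate
# these states forever, while Graph-Search terminates.
ROADS = {"Arad": [("go", "Zerind"), ("go", "Sibiu")],
         "Zerind": [("go", "Arad")],
         "Sibiu": [("go", "Bucharest")], "Bucharest": []}
```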

Chapter 3, Sections 3.1–3.4.5 80


Graph search
Problems with Graph Search:
♦ Memory requirements: increased space requirements for Depth-First
search (the state, rather than the node, is checked for repetition!)
♦ Optimality: Graph Search discards the later-found path to a repeated state.
That path could be the cheaper one under the chosen search strategy (e.g.
iterative deepening). However, we can show that Uniform-Cost search remains
optimal.

Chapter 3, Sections 3.1–3.4.5 81


Problem types
Deterministic, accessible =⇒ single-state problem
Deterministic, inaccessible =⇒ multiple-state problem
Nondeterministic, inaccessible =⇒ contingency problem
must use sensors during execution
solution is a tree or policy
often interleave search, execution
Unknown state space =⇒ exploration problem (“online”)

Chapter 3, Sections 3.1–3.4.5 82


Example: vacuum world
Single-state, start in #5. Solution??

Agent has no sensors: Multiple-state, start in
{1, 2, 3, 4, 5, 6, 7, 8}
e.g., Right goes to {2, 4, 6, 8}. Solution??

World is stochastic or only partially observable,
or adversarial: Contingency, start in #5
Murphy's Law: Suck can dirty a clean carpet
Local sensing: dirt, location only.

Solution??

[Figure: the eight states of the two-cell vacuum world, numbered 1–8.]

Chapter 3, Sections 3.1–3.4.5 83


Example: vacuum world
Single-state, start in #5. Solution: simple

Agent has no sensors: Multiple-state, start in
{1, 2, 3, 4, 5, 6, 7, 8}
e.g., Right goes to {2, 4, 6, 8}. Solution:
Right, Suck, Left, Suck

World is stochastic or only partially observable,
or adversarial: Contingency, start in #5
Murphy's Law: Suck can dirty a clean carpet
Local sensing: dirt, location only.

Solution: Right, if ([R, Dirty], Suck)

[Figure: the eight states of the two-cell vacuum world, numbered 1–8.]

Chapter 3, Sections 3.1–3.4.5 84


Summary
Problem formulation usually requires abstracting away real-world details to
define a state space that can feasibly be explored
Variety of uninformed search strategies
Iterative deepening search uses only linear space
and not much more time than other uninformed algorithms

Chapter 3, Sections 3.1–3.4.5 85


