
Chapter 3 Solving Problems By Searching

3.1–3.4 Uninformed search strategies

CS4811 - Artificial Intelligence

Nilufer Onder
Department of Computer Science
Michigan Technological University
Outline

- Problem-solving agents
- Problem formulation
- Basic search algorithms
  - Tree search
  - Graph search
- Evaluating search strategies
- Uninformed search strategies
  - Breadth-first search
  - Uniform-cost search
  - Depth-first search
  - Depth-limited search
  - Iterative deepening search
  - Bidirectional search
Problem-solving agents

function Simple-Problem-Solving-Agent(percept) returns an action
  inputs: percept, a percept
  private: seq, an action sequence, initially empty
           state, some description of the current world state
           goal, a goal, initially null
           problem, a problem formulation

  state ← Update-State(state, percept)
  if seq is empty then
    goal ← Formulate-Goal(state)
    problem ← Formulate-Problem(state, goal)
    seq ← Search(problem)
    if seq = failure then return a null action
  action ← First(seq)
  seq ← Rest(seq)
  return action
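A minimal Python sketch of this control loop follows. The helper functions (update_state, formulate_goal, formulate_problem, search) are assumed to be supplied by the application; the names are illustrative, not from a particular library.

class SimpleProblemSolvingAgent:
    def __init__(self, update_state, formulate_goal, formulate_problem, search):
        self.seq = []              # action sequence, initially empty
        self.state = None          # description of the current world state
        self.update_state = update_state
        self.formulate_goal = formulate_goal
        self.formulate_problem = formulate_problem
        self.search = search       # returns a list of actions, or None on failure

    def __call__(self, percept):
        self.state = self.update_state(self.state, percept)
        if not self.seq:
            goal = self.formulate_goal(self.state)
            problem = self.formulate_problem(self.state, goal)
            self.seq = self.search(problem)
            if self.seq is None:   # search failed
                self.seq = []
                return None        # a null action
        return self.seq.pop(0)     # First(seq); seq becomes Rest(seq)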
Assumptions

- Static: The world does not change unless the agent changes it.
- Observable: Every aspect of the world state can be seen.
- Discrete: Has distinct states, as opposed to continuously flowing time.
- Deterministic: There is no element of chance.

This is a restricted form of a general agent called offline problem solving. The solution is executed "eyes closed." Online problem solving involves acting without complete knowledge.
Example: Traveling in Romania

- On holiday in Romania; currently in Arad
- Flight leaves tomorrow from Bucharest
- Formulate goal:
  be in Bucharest
- Formulate problem:
  states: various cities
  actions: drive between cities
- Find solution:
  sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
  (any solution or optimal solution?)
Distances between cities in Romania
Infrastructure for search algorithms
- A problem is defined by five components:
  - initial state, e.g., In(Arad)
  - actions: Actions(s) returns the actions applicable in s,
    e.g., in Arad the applicable actions are {Go(Sibiu), Go(Timisoara), Go(Zerind)}
  - transition model: Result(s, a) returns the state that results from executing action a in state s,
    e.g., Result(In(Arad), Go(Zerind)) = In(Zerind)
  - goal test, which can be
    explicit, e.g., x = "In(Bucharest)", or
    implicit, e.g., x = "in a city with an international airport"
  - path cost (additive),
    e.g., sum of distances, number of actions executed, etc.
    c(x, a, y) is the step cost of executing action a in state x and arriving at state y, assumed to be ≥ 0
- A solution is a sequence of actions leading from the initial state to a goal state
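As a concrete illustration, here is a small Python sketch of these five components for the Romania route-finding problem. Only a handful of roads are included, and the class and method names are illustrative rather than taken from any particular library.

ROADS = {
    ("Arad", "Zerind"): 75,
    ("Arad", "Sibiu"): 140,
    ("Arad", "Timisoara"): 118,
    ("Sibiu", "Fagaras"): 99,
    ("Fagaras", "Bucharest"): 211,
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial_state = initial
        self.goal = goal
        # make the road map symmetric: driving works in both directions
        self.neighbors = {}
        for (a, b), dist in ROADS.items():
            self.neighbors.setdefault(a, {})[b] = dist
            self.neighbors.setdefault(b, {})[a] = dist

    def actions(self, s):                 # Actions(s)
        return [("Go", city) for city in self.neighbors.get(s, {})]

    def result(self, s, action):          # Result(s, a): transition model
        return action[1]

    def goal_test(self, s):               # explicit goal test
        return s == self.goal

    def step_cost(self, x, action, y):    # c(x, a, y): road distance
        return self.neighbors[x][y]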
Selecting a state space

- The real world is absurdly complex
  ⇒ the state space must be abstracted for problem solving
- (Abstract) state = set of real states
- (Abstract) action = complex combination of real actions,
  e.g., "Arad → Zerind" represents a complex set of possible routes, detours, rest stops, etc.
  For guaranteed realizability, any real state "in Arad" must get to some real state "in Zerind"
- (Abstract) solution = set of real paths that are solutions in the real world
- Each abstract action should be "easier" than the original problem!
- Find an abstraction that is valid and useful.
Example: The 8-puzzle
Example: The 8-puzzle (cont’d)

- states: integer locations of tiles (ignore intermediate positions)
- actions: move blank left, right, up, down (ignore unjamming etc.)
- goal test: = goal state (given)
- path cost: 1 per move
- Note that finding an optimal solution for the n-puzzle family is NP-hard
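A minimal Python sketch of this formulation, with a state represented as a tuple of nine tile numbers in row-major order and 0 standing for the blank (names and the goal layout are illustrative):

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)   # an example goal configuration

def actions(state):
    """Directions the blank can move from this state."""
    i = state.index(0)
    row, col = divmod(i, 3)
    moves = []
    if col > 0: moves.append("Left")
    if col < 2: moves.append("Right")
    if row > 0: moves.append("Up")
    if row < 2: moves.append("Down")
    return moves

def result(state, move):
    """Swap the blank with the neighboring tile in the chosen direction."""
    i = state.index(0)
    delta = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}[move]
    j = i + delta
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

def goal_test(state):
    return state == GOAL              # path cost is 1 per move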
Tree search algorithms

Basic idea:
offline, simulated exploration of state space
by generating successors of the states that haven’t been explored
(a.k.a. expanding states)
Tree search algorithms (cont’d)

function Tree-Search(problem, strategy) returns a solution, or failure
  initialize the frontier using the initial state of problem
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
    expand the chosen node and add the resulting nodes to the frontier
  end
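A minimal Python sketch of Tree-Search, assuming the problem interface sketched earlier; for brevity the expansion strategy is reduced to a single flag: FIFO order gives breadth-first behaviour, LIFO order gives depth-first behaviour.

from collections import deque

def tree_search(problem, fifo=True):
    frontier = deque([(problem.initial_state, [])])   # (state, action path)
    while frontier:
        # "choose a leaf node and remove it from the frontier"
        state, path = frontier.popleft() if fifo else frontier.pop()
        if problem.goal_test(state):
            return path                               # the corresponding solution
        # "expand the chosen node and add the resulting nodes to the frontier"
        for action in problem.actions(state):
            frontier.append((problem.result(state, action), path + [action]))
    return None                                       # failure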
Tree search example (a sequence of figure slides; images not included)
Implementation: states vs. nodes

- A state is a (representation of) a physical configuration.
- A node is a data structure constituting part of a search tree.
- A node includes: parent, children, depth, path cost g(x).
- States do not have parents, children, depth, or path cost!
- The Expand function creates new nodes, filling in the various fields and using the SuccessorFn of the problem to create the corresponding states.
Repeated states

Failure to detect repeated states can turn a linear problem into an exponential one!
Graph search algorithms

Basic idea:
similar to tree-search
keep a separate list of “explored” states
Graph search algorithms (cont’d)

function Graph-Search(problem) returns a solution, or failure
  initialize the frontier using the initial state of problem
→ initialize the explored set to be empty
  loop do
    if the frontier is empty then return failure
    choose a leaf node and remove it from the frontier
    if the node contains a goal state then return the corresponding solution
→   add the node to the explored set
    expand the chosen node and add the resulting nodes to the frontier,
→     only if they are not in the frontier or the explored set
  end

Note: A → shows the lines that are added to the tree search algorithm.
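A sketch of the same loop with the arrowed additions: an explored set, plus a membership test before new nodes join the frontier. This is a hedged illustration, not the textbook's exact data structures.

from collections import deque

def graph_search(problem, fifo=True):
    frontier = deque([(problem.initial_state, [])])
    frontier_states = {problem.initial_state}     # supports the "in frontier" test
    explored = set()                               # → explored set, initially empty
    while frontier:
        state, path = frontier.popleft() if fifo else frontier.pop()
        frontier_states.discard(state)
        if problem.goal_test(state):
            return path
        explored.add(state)                        # → add the node to the explored set
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in frontier_states:
                frontier.append((child, path + [action]))
                frontier_states.add(child)
    return None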
Evaluating search strategies

- A strategy is defined by picking the order of node expansion.
- Strategies are evaluated along the following dimensions:
  - completeness: does it always find a solution if one exists?
  - time complexity: number of nodes generated/expanded
  - space complexity: maximum number of nodes in memory
  - optimality: does it always find a least-cost solution?
- Time and space complexity are measured in terms of
  - b: maximum branching factor of the search tree
  - d: depth of the least-cost solution
  - m: maximum depth of the state space (may be ∞)
Uninformed search strategies

Uninformed strategies use only the information available in the problem definition:

- Breadth-first search
- Uniform-cost search
- Depth-first search
- Depth-limited search
- Iterative deepening search
- Bidirectional search
Breadth-first search

- Expand the shallowest unexpanded node
- Implementation: the frontier is a FIFO queue, i.e., new successors go at the end
Progress of breadth-first search

Breadth-first search on a simple binary tree, shown over a sequence of figure slides (images not included). At each stage, the node to be expanded next is indicated by a marker. The nodes that are already explored are gray. The nodes with dashed lines are not generated yet.
Properties of breadth-first search

- Complete: Yes (if b is finite)
- Time: b + b^2 + b^3 + ... + b^d + b(b^d − 1) = O(b^(d+1)),
  i.e., the number of nodes generated is exponential in d
- Space: O(b^(d+1)) (keeps every node in memory)
- Optimal: Yes (if cost = 1 per step)

Space is the big problem: nodes can easily be generated at 100 MB/sec, so 24 hours ≈ 8640 GB.
Breadth-first search algorithm

function Breadth-First-Search(problem) returns a solution, or failure
  node ← a node with State = problem.Initial-State, Path-Cost = 0
  if problem.Goal-Test(node.State) then return Solution(node)
  frontier ← a FIFO queue with node as the only element
  explored ← an empty set
  loop do
    if Empty?(frontier) then return failure
    node ← Pop(frontier)   /* chooses the shallowest node in frontier */
    add node.State to explored
    for each action in problem.Actions(node.State) do
      child ← Child-Node(problem, node, action)
      if child.State is not in explored or frontier then
        if problem.Goal-Test(child.State) then return Solution(child)
        frontier ← Insert(child, frontier)
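A minimal Python sketch of this algorithm, assuming the problem interface used earlier. As in the pseudocode, the goal test is applied when a child is generated; the function returns a list of actions, or None on failure.

from collections import deque

def breadth_first_search(problem):
    if problem.goal_test(problem.initial_state):
        return []
    frontier = deque([(problem.initial_state, [])])   # FIFO queue of (state, path)
    frontier_states = {problem.initial_state}
    explored = set()
    while frontier:
        state, path = frontier.popleft()              # shallowest node first
        frontier_states.discard(state)
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored and child not in frontier_states:
                if problem.goal_test(child):          # goal test on generation
                    return path + [action]
                frontier.append((child, path + [action]))
                frontier_states.add(child)
    return None

With the RouteProblem fragment sketched earlier, breadth_first_search(RouteProblem("Arad", "Bucharest")) would return [("Go", "Sibiu"), ("Go", "Fagaras"), ("Go", "Bucharest")], the path with the fewest road segments (not necessarily the fewest kilometres).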
Uniform-cost search

- Expand the least-cost unexpanded node
- Implementation: the frontier is a queue ordered by path cost
- Equivalent to breadth-first search if step costs are all equal
Properties of uniform-cost search

- Complete: Yes, if step cost ≥ ε for some ε > 0
- Time: number of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋)),
  where C* is the cost of the optimal solution
- Space: number of nodes with g ≤ cost of the optimal solution, O(b^(1+⌊C*/ε⌋))
- Optimal: Yes; nodes are expanded in increasing order of g(n)
Uniform-cost search algorithm

function Uniform-Cost-Search(problem) returns a solution, or failure
  node ← a node with State = problem.Initial-State, Path-Cost = 0
  frontier ← a priority queue ordered by Path-Cost, with node as the only element
  explored ← an empty set
  loop do
    if Empty?(frontier) then return failure
    node ← Pop(frontier)   /* chooses the lowest-cost node in frontier */
    if problem.Goal-Test(node.State) then return Solution(node)
    add node.State to explored
    for each action in problem.Actions(node.State) do
      child ← Child-Node(problem, node, action)
      if child.State is not in explored or frontier then
        frontier ← Insert(child, frontier)
      else if child.State is in frontier with higher Path-Cost then
        replace that frontier node with child
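A minimal Python sketch of uniform-cost search using heapq. Instead of replacing a frontier entry that has a higher path cost, this version pushes a cheaper duplicate and skips stale entries when they are popped, which has the same effect; it assumes the problem also provides step_cost(x, a, y), and it returns the action list together with its cost.

import heapq
import itertools

def uniform_cost_search(problem):
    counter = itertools.count()        # tie-breaker so states are never compared
    frontier = [(0, next(counter), problem.initial_state, [])]   # (g, _, state, path)
    best_g = {problem.initial_state: 0}   # cheapest known path cost per state
    explored = set()
    while frontier:
        g, _, state, path = heapq.heappop(frontier)   # lowest-cost node
        if problem.goal_test(state):                  # goal test on expansion
            return path, g
        if state in explored:
            continue                                  # stale, higher-cost entry
        explored.add(state)
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if child not in explored and g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2, next(counter), child, path + [action]))
    return None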
Depth-first search

- Expand the deepest unexpanded node
- Implementation: the frontier is a LIFO queue (a stack), i.e., successors are put at the front
Progress of depth-first search

(A sequence of figure slides showing the progress of depth-first search; images not included.)
Properties of depth-first search

- Complete: No; fails in infinite-depth spaces and in spaces with loops.
  Modified to avoid repeated states along the path, it is complete in finite spaces.
- Time: O(b^m); terrible if m is much larger than d,
  but if solutions are dense, it may be much faster than breadth-first search
- Space: O(bm), i.e., linear space!
- Optimal: No
Depth-limited search

- Equivalent to depth-first search with depth limit l, i.e., nodes at depth l have no successors
- Implementation: a recursive implementation is shown on the next page
Properties of depth-limited search

- Complete: No (similar to DFS)
- Time: O(b^l), where l is the depth limit
- Space: O(bl), i.e., linear space (similar to DFS)
- Optimal: No
Depth-limited search

function Depth-Limited-Search(problem, limit) returns a solution, or failure/cutoff
  return Recursive-DLS(Make-Node(problem.Initial-State), problem, limit)

function Recursive-DLS(node, problem, limit) returns a solution, or failure/cutoff
  if problem.Goal-Test(node.State) then return Solution(node)
  else if limit = 0 then return cutoff
  else
    cutoff-occurred? ← false
    for each action in problem.Actions(node.State) do
      child ← Child-Node(problem, node, action)
      result ← Recursive-DLS(child, problem, limit − 1)
      if result = cutoff then cutoff-occurred? ← true
      else if result ≠ failure then return result
    if cutoff-occurred? then return cutoff else return failure
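A minimal Python sketch of the recursive version above, using sentinel strings to distinguish cutoff from failure; a solution is returned as a list of actions.

CUTOFF, FAILURE = "cutoff", "failure"

def depth_limited_search(problem, limit):
    return _recursive_dls(problem.initial_state, [], problem, limit)

def _recursive_dls(state, path, problem, limit):
    if problem.goal_test(state):
        return path
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for action in problem.actions(state):
        child = problem.result(state, action)
        result = _recursive_dls(child, path + [action], problem, limit - 1)
        if result == CUTOFF:
            cutoff_occurred = True
        elif result != FAILURE:
            return result
    return CUTOFF if cutoff_occurred else FAILURE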
Iterative deepening search

- Do iterations of depth-limited search starting with a limit of 0.
  If you fail to find a goal with a particular depth limit, increment the limit and continue iterating.
- Terminate when a solution is found, or when the depth-limited search returns failure, meaning that no solution exists.
- Combines the linear space complexity of DFS with the completeness property of BFS.
Iterative deepening search (figure slides for limits l = 0, 1, 2, 3; images not included)
Properties of iterative deepening search

- Complete: Yes
- Time: (d)b + (d − 1)b^2 + ... + (1)b^d = O(b^d)
- Space: O(bd)
- Optimal: Yes, if step cost = 1
  Can be modified to explore a uniform-cost tree
Iterative deepening search

function Iterative-Deepening-Search(problem) returns a solution, or failure
  for depth ← 0 to ∞ do
    result ← Depth-Limited-Search(problem, depth)
    if result ≠ cutoff then return result
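A minimal Python sketch, built directly on the depth_limited_search sketch above (and its CUTOFF sentinel).

import itertools

def iterative_deepening_search(problem):
    for depth in itertools.count(0):      # depth limits 0, 1, 2, ...
        result = depth_limited_search(problem, depth)
        if result != CUTOFF:              # either a solution or FAILURE
            return result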
Compare IDS and BFS

Numerical comparison of the number of nodes generated for b = 10 and d = 5, with the solution at the far right leaf:

N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000
       = 123,450
N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990
       = 1,111,100

IDS does better because the other nodes at depth d are not expanded.
BFS can be modified to apply the goal test when a node is generated (rather than expanded).
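The two totals can be checked with a few lines of Python, using the node-count formulas from the earlier slides: the IDS sum (d − i + 1)·b^i over depths i = 1..d, and for BFS the layers b + b^2 + ... + b^d plus the extra b(b^d − 1) nodes generated before the goal is detected.

b, d = 10, 5
n_ids = sum((d - i + 1) * b**i for i in range(1, d + 1))
n_bfs = sum(b**i for i in range(1, d + 1)) + b * (b**d - 1)
print(n_ids, n_bfs)   # 123450 1111100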
Summary of algorithms

Criterion    Breadth-First   Uniform-Cost       Depth-First   Depth-Limited   Iter. Deepening
Complete?    Yes             Yes                No            No              Yes
Time         O(b^(d+1))      O(b^(1+⌊C*/ε⌋))    O(b^m)        O(b^l)          O(b^d)
Space        O(b^(d+1))      O(b^(1+⌊C*/ε⌋))    O(bm)         O(bl)           O(bd)
Optimal?     Yes*            Yes*               No            No              Yes
Bidirectional search
- Run two simultaneous searches:
  one forward from the initial state,
  one backward from the goal state
- Motivation: b^(d/2) + b^(d/2) is much less than b^d
- Implementation: replace the goal check with a check of whether the frontiers of the two searches intersect
Summary

- Problem formulation usually requires abstracting away real-world details to define a state space that can feasibly be explored.
- There are a variety of uninformed search strategies available.
- Iterative deepening search uses only linear space and not much more time than the other uninformed algorithms.
Sources for the slides

- AIMA textbook (3rd edition)
- AIMA slides (http://aima.cs.berkeley.edu/)
