
Chapter 3

Search techniques
Basanta Joshi, PhD
[email protected]
Lecture notes can be downloaded from
www.basantajoshi.com.np
Water Jug Problem
Given a full 5-gallon jug and a full 2-gallon jug, fill the 2-gallon jug with
exactly one gallon of water.

• State: ?
• Initial State: ?
• Operators: ?
• Goal State: ?
Water Jug Problem
• State = (x, y), where x is the number of gallons of water in the 5-gallon jug
  and y is the number of gallons in the 2-gallon jug
• Initial State = (5, 2)
• Goal State = (*, 1), where * means any amount

Operator table

Name       Cond.   Transition            Effect
Empty5     –       (x,y) → (0,y)         Empty 5-gal. jug
Empty2     –       (x,y) → (x,0)         Empty 2-gal. jug
2to5       x ≤ 3   (x,2) → (x+2,0)       Pour 2-gal. jug into 5-gal. jug
5to2       x ≥ 2   (x,0) → (x-2,2)       Pour 5-gal. jug into 2-gal. jug
5to2part   y < 2   (1,y) → (0,y+1)       Pour partial 5-gal. jug into 2-gal. jug
Water jug state space
(Figure: the grid of all states (x, y) with x = 0..5 and y = 0..2, connected by
arcs labeled Empty5, Empty2, 2to5, 5to2, and 5to2part.)
Water jug solution
(Figure: the same state grid with a solution path from the initial state (5, 2)
to a goal state (*, 1) highlighted.)
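The operator table can be encoded directly and the state space explored
exhaustively. Below is a minimal Python sketch; the function and variable names
(solve, OPERATORS, and the breadth-first exploration) are illustrative choices,
not part of the slides.

from collections import deque

# Operators from the table: each returns the successor state,
# or None if the precondition does not hold.
def empty5(s):    x, y = s; return (0, y)
def empty2(s):    x, y = s; return (x, 0)
def two_to5(s):   x, y = s; return (x + 2, 0) if y == 2 and x <= 3 else None
def five_to2(s):  x, y = s; return (x - 2, 2) if y == 0 and x >= 2 else None
def five_to2p(s): x, y = s; return (0, y + 1) if x == 1 and y < 2 else None

OPERATORS = {"Empty5": empty5, "Empty2": empty2, "2to5": two_to5,
             "5to2": five_to2, "5to2part": five_to2p}

def solve(start=(5, 2), goal=lambda s: s[1] == 1):
    """Breadth-first exploration of the water jug state space."""
    frontier = deque([(start, [])])          # (state, operator sequence)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal(state):
            return path
        for name, op in OPERATORS.items():
            nxt = op(state)
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

print(solve())
# e.g. ['Empty2', '5to2', 'Empty2', '5to2', 'Empty2', '5to2part'], ending in (0, 1)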
Formalizing search III

• A solution is a sequence of operators that is associated with a path in a
  state space from a start node to a goal node.
• The cost of a solution is the sum of the arc costs on the solution path.
  – If all arcs have the same (unit) cost, then the solution cost is just the
    length of the solution (number of steps / state transitions).
Formalizing search IV

• State-space search is the process of searching through a state space for a
  solution by making explicit a sufficient portion of an implicit state-space
  graph to find a goal node.
  – For large state spaces, it isn't practical to represent the whole space.
  – Initially V = {S}, where S is the start node; when S is expanded, its
    successors are generated and those nodes are added to V and the associated
    arcs are added to E. This process continues until a goal node is found.
• Each node implicitly or explicitly represents a partial solution path (and
  the cost of that partial path) from the start node to the given node.
  – In general, from this node there are many possible paths (and therefore
    solutions) that have this partial path as a prefix.
State-space search algorithm
function general-search (problem, QUEUEING-FUNCTION)
;; problem describes the start state, operators, goal test, and operator costs
;; queueing-function inserts newly expanded nodes into the queue; its ordering
;; policy determines the search strategy
;; general-search returns either a goal node or failure
nodes = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
loop
  if EMPTY(nodes) then return "failure"
  node = REMOVE-FRONT(nodes)
  if problem.GOAL-TEST(node.STATE) succeeds
    then return node
  nodes = QUEUEING-FUNCTION(nodes, EXPAND(node, problem.OPERATORS))
end
;; Note: The goal test is NOT done when nodes are generated
;; Note: This algorithm does not detect loops
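A minimal executable version of this skeleton may help make the role of the
queueing function concrete. The following Python sketch uses illustrative names
(Node, general_search, operators, goal_test) that are not part of the slides;
operators(state) is assumed to yield (operator name, successor state, step cost)
triples.

from collections import namedtuple

# A search node wraps a state plus the bookkeeping described below.
Node = namedtuple("Node", "state parent operator depth cost")

def general_search(initial_state, operators, goal_test, queueing_fn):
    """Generic state-space search; queueing_fn decides the strategy."""
    nodes = [Node(initial_state, None, None, 0, 0)]
    while nodes:
        node = nodes.pop(0)                      # REMOVE-FRONT
        if goal_test(node.state):
            return node                          # goal node found
        successors = [Node(s, node, op, node.depth + 1, node.cost + c)
                      for op, s, c in operators(node.state)]   # EXPAND
        nodes = queueing_fn(nodes, successors)   # QUEUEING-FUNCTION
    return "failure"

# Breadth-first: enqueue successors at the end (FIFO).
bfs_queueing = lambda nodes, new: nodes + new
# Depth-first: enqueue successors at the front (LIFO).
dfs_queueing = lambda nodes, new: new + nodes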
Key procedures to be defined

• EXPAND
– Generate all successor nodes of a given node
• GOAL-TEST
– Test if state satisfies all goal conditions
• QUEUEING-FUNCTION
– Used to maintain a ranked list of nodes that are candidates
for expansion
Bookkeeping

• Typical node data structure includes:
  – State at this node
  – Parent node
  – Operator applied to get to this node
  – Depth of this node (number of operator applications since the initial state)
  – Cost of the path (sum of each operator application so far)
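As a sketch, the same bookkeeping expressed as a Python dataclass, together with
a helper that recovers the solution path by following parent links. The names
(SearchNode, solution_path) are illustrative assumptions.

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class SearchNode:
    state: Any                        # state at this node
    parent: Optional["SearchNode"]    # parent node (None for the start node)
    operator: Optional[str]           # operator applied to reach this node
    depth: int                        # operator applications since the initial state
    path_cost: float                  # sum of step costs so far

def solution_path(node: SearchNode):
    """Walk parent links back to the start node and reverse the result."""
    path = []
    while node is not None:
        path.append((node.operator, node.state))
        node = node.parent
    return list(reversed(path))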
Some issues
• Search process constructs a search tree, where
– root is the initial state and
– leaf nodes are nodes
• not yet expanded (i.e., they are in the list “nodes”) or
• having no successors (i.e., they’re “deadends” because no operators were
applicable and yet they are not goals)
• Search tree may be infinite because of loops even if state space is
small
• Return a path or a node depending on problem.
– E.g., in cryptarithmetic return a node; in 8-puzzle return a path
• Changing definition of the QUEUEING-FUNCTION leads to
different search strategies
Evaluating Search Strategies

• Completeness
– Guarantees finding a solution whenever one exists
• Time complexity
– How long (worst or average case) does it take to find a solution?
Usually measured in terms of the number of nodes expanded
• Space complexity
– How much space is used by the algorithm? Usually measured in terms
of the maximum size of the “nodes” list during the search
• Optimality/Admissibility
– If a solution is found, is it guaranteed to be an optimal one? That is, is
it the one with minimum cost?
Uninformed vs. informed search

• Uninformed search strategies
  – Also known as "blind search," uninformed search strategies use no
    information about the likely "direction" of the goal node(s)
  – Uninformed search methods: Breadth-first, depth-first, depth-limited,
    uniform-cost, depth-first iterative deepening, bidirectional
• Informed search strategies
  – Also known as "heuristic search," informed search strategies use
    information about the domain to (try to) head in the general direction of
    the goal node(s)
  – Informed search methods: Hill climbing, best-first, greedy search,
    beam search, A, A*
Uninformed Search Methods

Uninformed search strategies
• Uninformed: While searching you have no clue whether one non-goal state is
  better than any other. Your search is blind. You don't know if your current
  exploration is likely to be fruitful.
• Various blind strategies:
  • Breadth-first search
  • Uniform-cost search
  • Depth-first search
  • Iterative deepening search

Blind Searching
• We call search algorithms that do not use any extra information regarding the
  problem or our representation of the problem "blind".
• These searches typically visit all of the nodes of a tree in a certain order,
  looking for a pre-specified goal.
• No cleverness is used to decide where to look next.
• Searches may never find the goal state.
• In some cases blind search is the right approach.
Path Finding Problems
• One class of search problems which best illustrates the important algorithms
  involves the search through a graph to find a path between two nodes.
(Figure: a graph with nodes A, B, C, D, E, F, G.)
Find a path from A to E
Graph Searching Problems
• Generic search algorithm: given a graph, start nodes, and goal nodes,
  incrementally explore paths from the start nodes.
• Maintain a frontier of paths from the start node that have been explored.
• As search proceeds, the frontier expands into the unexplored nodes until a
  goal node is encountered.
• The way in which the frontier is expanded defines the search strategy.
Example
Breadth First Search
• PROCESS: Slow descent method
  – Place the start node at the end of a queue
  – Examine the node at the front of the queue
    - if the queue is empty, stop
    - if the node is the goal, stop
    - otherwise, add the children of the node to the end of the queue
Breadth-first search
• Expand shallowest unexpanded node
• Fringe: nodes waiting in a queue to be explored
• Implementation:
  • fringe is a first-in-first-out (FIFO) queue, i.e., new successors go at the
    end of the queue.
• Example trace (tree rooted at A; A's children are B and C, B's are D and E,
  C's are F and G):
  – Is A a goal state? Expand A: fringe = [B, C]
  – Is B a goal state? Expand B: fringe = [C, D, E]
  – Is C a goal state? Expand C: fringe = [D, E, F, G]
  – Is D a goal state? ...
Example
BFS
Properties of breadth-first search
• Complete? Yes, it always reaches the goal (if b is finite)
• Time? 1 + b + b^2 + b^3 + … + b^d + (b^(d+1) - b) = O(b^(d+1))
  (this is the number of nodes we generate)
• Space? O(b^(d+1)) (keeps every node in memory, either in the fringe or on a
  path to the fringe)
• Optimal? Yes (if we guarantee that deeper solutions are less optimal, e.g.
  step-cost = 1)

• Space is the bigger problem (more than time)
(Figure slides: time complexity and space complexity of breadth-first search.)
BFS Weakness
• Exponential growth
• Time and memory requirements for breadth-first search, assuming a branching
  factor of 10, 100 bytes per node, and searching 1000 nodes/second:

Depth   Nodes       Time            Memory
0       1           1 millisecond   100 bytes
2       111         0.1 seconds     11 kilobytes
4       11,111      11 seconds      1 megabyte
6       10^6        18 minutes      111 megabytes
8       10^8        31 hours        11 gigabytes
10      10^10       128 days        1 terabyte
12      10^12       35 years        111 terabytes
14      10^14       3,500 years     11,111 terabytes
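As a quick sketch, these figures can be reproduced (up to rounding) directly
from the stated assumptions; the function name bfs_nodes is illustrative.

# Nodes generated by BFS to depth d with branching factor b: 1 + b + b^2 + ... + b^d
def bfs_nodes(b, d):
    return sum(b ** i for i in range(d + 1))

for d in range(0, 15, 2):
    n = bfs_nodes(10, d)
    seconds = n / 1000            # 1000 nodes searched per second
    megabytes = n * 100 / 1e6     # 100 bytes per node
    print(f"depth {d:2}: {n:>16,} nodes, {seconds:>14,.1f} s, {megabytes:>14,.1f} MB")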
Uniform-cost search
• Breadth-first is only optimal if path cost is a non-decreasing function of
  depth (e.g. constant step cost). Can we guarantee optimality for any step
  cost?
• Uniform-cost Search: expand the node with the smallest path cost g(n).

Proof of completeness:
  Given that every step will cost more than 0, and assuming a finite branching
  factor, there is a finite number of expansions required before the total path
  cost is equal to the path cost of the goal state. Hence, we will reach it.

Proof of optimality given completeness:
  Assume UCS is not optimal.
  Then there must be a goal state with path cost smaller than the goal state
  which was found (invoking completeness).
  However, this is impossible because UCS would have expanded that node first
  by definition. Contradiction.
Uniform-cost search
• Implementation: fringe = queue ordered by path cost
• Equivalent to breadth-first if all step costs are equal.
• Complete? Yes, if step cost ≥ ε
  (otherwise it can get stuck in infinite loops)
• Time? # of nodes with path cost ≤ cost of optimal solution.
• Space? # of nodes on paths with path cost ≤ cost of optimal solution.
• Optimal? Yes, for any step cost ≥ ε
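A minimal uniform-cost search sketch using a priority queue (Python's heapq).
The function name, the neighbors callback, and the returned (cost, path) pair
are illustrative assumptions, not the slides' notation.

import heapq

def uniform_cost_search(start, goal, neighbors):
    """neighbors(state) yields (successor, step_cost) pairs; returns (cost, path)."""
    frontier = [(0, start, [start])]          # queue ordered by path cost g(n)
    best_cost = {start: 0}
    while frontier:
        g, state, path = heapq.heappop(frontier)
        if state == goal:
            return g, path                    # first goal popped is optimal
        for nxt, step in neighbors(state):
            new_g = g + step
            if new_g < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_g
                heapq.heappush(frontier, (new_g, nxt, path + [nxt]))
    return None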
Depth First Search

• PROCESS: Dive into the graph and follow a single path as far as possible
• Use a stack to keep track of nodes
  – Put the start node on the stack
  – If the stack is empty then stop: there is no path
  – If the top of the stack is the goal, stop
  – Otherwise, pop the stack and push the nodes connected to the popped node
    onto the stack (provided they are not already on the stack)
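The same procedure as a short Python sketch with an explicit stack. The names
are illustrative; this version keeps a set of nodes already placed on the stack,
matching the "provided they are not already on the stack" rule above.

def depth_first_search(start, goal, neighbors):
    """neighbors(state) returns the states connected to `state`."""
    stack = [(start, [start])]                 # each entry: (node, path so far)
    seen = {start}                             # nodes ever pushed on the stack
    while stack:
        node, path = stack.pop()               # LIFO: deepest node first
        if node == goal:
            return path
        for nxt in reversed(neighbors(node)):  # reversed so left children pop first
            if nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, path + [nxt]))
    return None                                # stack empty: no path exists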


Depth-first search
• Expand deepest unexpanded node
• Implementation:
  • fringe = Last In First Out (LIFO) queue, i.e., put successors at the front
• Example trace (tree rooted at A; A's children are B and C, B's are D and E,
  D's are H and I, E's are J and K, C's are F and G, F's are L and M):
  – Is A a goal state? Expand A: queue = [B, C]
  – Is B a goal state? Expand B: queue = [D, E, C]
  – Is D a goal state? Expand D: queue = [H, I, E, C]
  – Is H a goal state? queue = [I, E, C]
  – Is I a goal state? queue = [E, C]
  – Is E a goal state? Expand E: queue = [J, K, C]
  – Is J a goal state? queue = [K, C]
  – Is K a goal state? queue = [C]
  – Is C a goal state? Expand C: queue = [F, G]
  – Is F a goal state? Expand F: queue = [L, M, G]
  – Is L a goal state? queue = [M, G]
  – Is M a goal state? queue = [G]
Properties of depth-first search
• Complete? No: fails in infinite-depth spaces
  – Can modify to avoid repeated states along the path
• Time? O(b^m) with m = maximum depth
  • terrible if m is much larger than d
  • but if solutions are dense, may be much faster than breadth-first
• Space? O(bm), i.e., linear space! (we only need to remember a single path
  plus the expanded but unexplored nodes along it)
• Optimal? No (it may find a non-optimal goal first)
Iterative deepening search

• To avoid the infinite-depth problem of DFS, we can decide to only search
  until depth L, i.e. we don't expand nodes beyond depth L.
  --> Depth-Limited Search
• What if the solution is deeper than L? --> Increase L iteratively.
  --> Iterative Deepening Search
• As we shall see: this inherits the memory advantage of depth-first search,
  and is better in terms of time complexity than breadth-first search.
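A compact sketch of depth-limited search and the iterative-deepening driver
built on top of it. The function names and the neighbors callback are
illustrative assumptions, not the slides' notation.

def depth_limited_search(node, goal, neighbors, limit):
    """DFS that never expands nodes deeper than `limit`; returns a path or None."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                          # cutoff reached
    for child in neighbors(node):
        result = depth_limited_search(child, goal, neighbors, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening_search(start, goal, neighbors, max_depth=50):
    """Run depth-limited search with L = 0, 1, 2, ... until a solution appears."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, neighbors, limit)
        if result is not None:
            return result
    return None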
Iterative deepening search with L = 0, 1, 2, 3
(Figure slides: the depth-limited search tree expanded one level further on
each iteration.)
Iterative deepening search
• Number of nodes generated in a depth-limited search to depth d with
  branching factor b:
  NDLS = b^0 + b^1 + b^2 + … + b^(d-2) + b^(d-1) + b^d
• Number of nodes generated in an iterative deepening search to depth d with
  branching factor b:
  NIDS = (d+1)b^0 + d·b^1 + (d-1)b^2 + … + 3b^(d-2) + 2b^(d-1) + 1·b^d
       = O(b^d), not O(b^(d+1))
• For b = 10, d = 5:
  • NDLS = 1 + 10 + 100 + 1,000 + 10,000 + 100,000 = 111,111
  • NIDS = 6 + 50 + 400 + 3,000 + 20,000 + 100,000 = 123,456
  • NBFS = … = 1,111,100
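These counts are easy to verify directly; a minimal sketch:

b, d = 10, 5

n_dls = sum(b ** i for i in range(d + 1))                    # depth-limited to depth d
n_ids = sum((d + 1 - i) * b ** i for i in range(d + 1))      # each level re-generated
n_bfs = sum(b ** i for i in range(1, d + 1)) + (b ** (d + 1) - b)   # generation count used on this slide

print(n_dls, n_ids, n_bfs)    # 111111 123456 1111100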
Properties of iterative deepening search

• Complete? Yes
• Time? O(b^d)
• Space? O(bd)
• Optimal? Yes, if step cost = 1 or an increasing function of depth.
Example IDS

Bi-directional search

• Alternate searching from the start state toward the goal and from the goal
  state toward the start.
• Stop when the frontiers intersect.
• Works well only when there are unique start and goal states.
• Requires the ability to generate "predecessor" states.
• Can (sometimes) lead to finding a solution more quickly.
• Time complexity: O(b^(d/2)). Space complexity: O(b^(d/2)).
Bidirectional Search
• Idea
  • simultaneously search forward from S and backwards from G
  • stop when both "meet in the middle"
  • need to keep track of the intersection of the two open sets of nodes
• What does searching backwards from G mean?
  • need a way to specify the predecessors of G
  • this can be difficult, e.g., predecessors of checkmate in chess?
  • which goal to take if there are multiple goal states?
  • where to start if there is only a goal test, no explicit list?
Bi-Directional Search
• Complexity: time and space complexity are O(b^(d/2)).
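A sketch of bidirectional breadth-first search on an undirected graph (so
successors and predecessors coincide). The function name and the adjacency-dict
format are illustrative assumptions.

from collections import deque

def bidirectional_search(start, goal, adjacency):
    """Alternate BFS frontiers from start and goal; stop when they intersect."""
    if start == goal:
        return [start]
    parents_f, parents_b = {start: None}, {goal: None}
    frontier_f, frontier_b = deque([start]), deque([goal])

    def step(frontier, parents, other_parents):
        for _ in range(len(frontier)):          # expand one BFS layer
            node = frontier.popleft()
            for nxt in adjacency[node]:
                if nxt not in parents:
                    parents[nxt] = node
                    if nxt in other_parents:    # frontiers intersect here
                        return nxt
                    frontier.append(nxt)
        return None

    while frontier_f and frontier_b:
        meet = step(frontier_f, parents_f, parents_b)
        if meet is None:
            meet = step(frontier_b, parents_b, parents_f)
        if meet is not None:
            # stitch the two half-paths together at the meeting node
            path, n = [], meet
            while n is not None:
                path.append(n)
                n = parents_f[n]
            path.reverse()
            n = parents_b[meet]
            while n is not None:
                path.append(n)
                n = parents_b[n]
            return path
    return None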
Comparing Search Strategies
b – branching factor        d – depth of the optimal solution
m – maximum depth           l – depth limit
(Table comparing the uninformed strategies on completeness, time, space, and
optimality; annotations mark the strategy that is complete even if step cost is
not increasing with depth, and the preferred uninformed search strategy.)

Summary of algorithms
(Same comparison table.)
Repeated states
• Failure to detect repeated states can turn a
linear problem into an exponential one!

Avoiding Repeated States

• In increasing order of effectiveness in reducing the size of the state space,
  and with increasing computational cost:
  1. Do not return to the state you just came from.
  2. Do not create paths with cycles in them.
  3. Do not generate any state that was ever created before.
• Net effect depends on the frequency of "loops" in the state space.
Solutions to Repeated States
(Figure: a small state space over S, B, C and the corresponding search tree,
showing how repeated states blow up the tree.)
• Graph search: optimal but memory inefficient
  • never generate a state generated before
  • must keep track of all possible states (uses a lot of memory)
  • e.g., for the 8-puzzle problem, we have 9! = 362,880 states
• Approximation for DFS/DLS: only avoid states in its (limited) memory:
  avoid looping paths.
  (this does not avoid the problem on the previous slide)
• Graph search is optimal for BFS and UCS, not for DFS.
Graph Search
function graph-search (problem, QUEUEING-FUNCTION)
;; problem describes the start state, operators, goal test, and operator costs
;; queueing-function inserts newly expanded nodes into the queue; its ordering
;; policy determines the search strategy
;; graph-search returns either a goal node or failure
nodes = MAKE-QUEUE(MAKE-NODE(problem.INITIAL-STATE))
closed = {}
loop
  if EMPTY(nodes) then return "failure"
  node = REMOVE-FRONT(nodes)
  if problem.GOAL-TEST(node.STATE) succeeds
    then return node.SOLUTION
  if node.STATE is not in closed
    then ADD(node, closed)
         nodes = QUEUEING-FUNCTION(nodes, EXPAND(node, problem.OPERATORS))
end
;; Note: The goal test is NOT done when nodes are generated
;; Note: closed should be implemented as a hash table for efficiency
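The tree-search sketch shown earlier extends to graph search by adding the
closed set. The names are again illustrative; operators(state) is assumed to
yield (operator name, successor state, step cost) triples as before.

from collections import namedtuple

Node = namedtuple("Node", "state parent operator depth cost")

def graph_search(initial_state, operators, goal_test, queueing_fn):
    """Like general_search, but never re-expands a state already in `closed`."""
    nodes = [Node(initial_state, None, None, 0, 0)]
    closed = set()                               # states already expanded
    while nodes:
        node = nodes.pop(0)
        if goal_test(node.state):
            return node
        if node.state not in closed:
            closed.add(node.state)
            successors = [Node(s, node, op, node.depth + 1, node.cost + c)
                          for op, s, c in operators(node.state)]
            nodes = queueing_fn(nodes, successors)
    return "failure"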
Graph Search Strategies

• Breadth-first search and uniform-cost search are optimal graph search
  strategies.
• Iterative deepening search and depth-first search can follow a non-optimal
  path to the goal.
• Iterative deepening search can be used with modification:
  – It must check whether a new path to a node is better than the original one
  – If so, IDS must revise the depths and path costs of the node's descendants.
Breadth-First Search
(Node subscripts denote the path cost g(n).)
Expanded node    Nodes list
                 { S0 }
S0               { A3 B1 C8 }
A3               { B1 C8 D6 E10 G18 }
B1               { C8 D6 E10 G18 G21 }
C8               { D6 E10 G18 G21 G13 }
D6               { E10 G18 G21 G13 }
E10              { G18 G21 G13 }
G18              { G21 G13 }
Solution path found is S A G, cost 18
Number of nodes expanded (including goal node) = 7
Depth-First Search
Expanded node    Nodes list
                 { S0 }
S0               { A3 B1 C8 }
A3               { D6 E10 G18 B1 C8 }
D6               { E10 G18 B1 C8 }
E10              { G18 B1 C8 }
G18              { B1 C8 }
Solution path found is S A G, cost 18
Number of nodes expanded (including goal node) = 5
Uniform-Cost Search
Expanded node    Nodes list
                 { S0 }
S0               { B1 A3 C8 }
B1               { A3 C8 G21 }
A3               { D6 C8 E10 G18 G21 }
D6               { C8 E10 G18 G21 }
C8               { E10 G13 G18 G21 }
E10              { G13 G18 G21 }
G13              { G18 G21 }
Solution path found is S C G, cost 13
Number of nodes expanded (including goal node) = 7
How they perform
• Breadth-First Search:
– Expanded nodes: S A B C D E G
– Solution found: S A G (cost 18)
• Depth-First Search:
– Expanded nodes: S A D E G
– Solution found: S A G (cost 18)
• Uniform-Cost Search:
– Expanded nodes: S A D B C E G
– Solution found: S C G (cost 13)
This is the only uninformed search that worries about costs.
• Iterative-Deepening Search:
– nodes expanded: S S A B C S A D E G
– Solution found: S A G (cost 18)
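The traces above imply a small example graph: S connects to A, B, C with step
costs 3, 1, 8; A connects to D, E, G with costs 3, 7, 15; B connects to G with
cost 20; and C connects to G with cost 5. As a sketch, the uniform_cost_search
function from the earlier snippet reproduces the cost-13 result on it (the
GRAPH dictionary is an illustrative reconstruction, not given in the slides).

# Example graph implied by the traces above: {node: [(successor, step cost), ...]}
GRAPH = {
    "S": [("A", 3), ("B", 1), ("C", 8)],
    "A": [("D", 3), ("E", 7), ("G", 15)],
    "B": [("G", 20)],
    "C": [("G", 5)],
    "D": [], "E": [], "G": [],
}

cost, path = uniform_cost_search("S", "G", lambda s: GRAPH[s])
print(cost, path)    # 13 ['S', 'C', 'G']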
Summary
• Problem formulation usually requires abstracting away real-world details to
  define a state space that can feasibly be explored
• There is a variety of uninformed search strategies
• Iterative deepening search uses only linear space and not much more time
  than other uninformed algorithms

http://www.cs.rmit.edu.au/AI-Search/Product/
http://aima.cs.berkeley.edu/demos.html (for more demos)
