
ARTIFICIAL INTELLIGENCE

LECTURE 3:
SOLVING PROBLEMS BY
SEARCHING PART 1
2024
Objective

This chapter will introduce methods that an agent can use to


select actions in environments that are deterministic,
observable, static, and completely known. In such cases, the
agent can construct sequences of actions that achieve its

goals; this process is called search.


Outline

 Problem-solving agents

 Problem formulation

 Example problems

 Searching For Solutions

 Search Tree

 Uninformed Search
Problem-Solving Agents

 A problem-solving agent constructs sequences of actions that achieve its goals; this process is called search.

 A search algorithm takes a problem as input and returns a


solution in the form of an action sequence.

 Once a solution is found, the actions it recommends can be


carried out. This is called the execution phase.
Problem-Solving Agent

 Goal formulation, based on the current situation and


the agent’s performance measure, is the first step in
problem solving.
 Goals organize behavior by limiting the objectives and
hence the actions to be considered.
 The agent’s task is to find out how to act. So, we
need to decide what sort of actions and states it
should consider.
 Problem formulation is the process of deciding what
actions and states to consider, given a goal.
Example: Romania

 Goal formulation :
 be in Bucharest
 Problem formulation :
 states: various cities
 actions: drive between cities or choose next city

 Find solution:
 sequence of cities, e.g., Arad, Sibiu, Fagaras, Bucharest
Well-defined Problems and Solution
A problem can be defined formally by six components:
1. State space: the set of possible states that the environment can be in.
2. The initial state that the agent starts in.
3. A set of actions available to the agent.
4. A transition model describing the results of those actions.
5. A set of goal states.
6. A path cost function.
 The environment of the problem is represented by a state
space.
 A path through the state space from the initial state to a
goal state is a solution.
Example: Romania

On vacation in Romania;
currently in Arad. Flight leaves
tomorrow from Bucharest

❑ Initial state: In(Arad)


❑ Actions: Go from one city to another {Go(Sibiu), Go(Timisoara),
Go(Zerind)}
❑ Transition model: If you go from city A to city B, you end up in city B
RESULT(In(Arad), Go(Zerind)) = In(Zerind)
❑ Goal states: the agent being at Bucharest, i.e., In(Bucharest)
❑ Path cost: Sum of edge costs (total distance traveled). The optimal
solution has the lowest path cost among all solutions
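To make these components concrete, here is a minimal Python sketch of this formulation. The road map is only a partial excerpt of the Romania map (a few cities with approximate textbook distances), and the class name RomaniaProblem and its method names are illustrative choices, not a prescribed interface.

```python
# Minimal sketch of the Romania route-finding problem (partial road map only;
# distances are approximate textbook values for these roads).
ROADS = {  # transition model + step costs: ROADS[city][next_city] = distance
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Oradea": 151, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Timisoara": {"Arad": 118, "Lugoj": 111},
    "Lugoj": {"Timisoara": 111},
    "Zerind": {"Arad": 75, "Oradea": 71},
    "Oradea": {"Zerind": 71, "Sibiu": 151},
    "Bucharest": {"Fagaras": 211, "Pitesti": 101},
}

class RomaniaProblem:
    def __init__(self, initial="Arad", goal="Bucharest"):
        self.initial = initial              # initial state: In(Arad)
        self.goal = goal                    # goal state: In(Bucharest)

    def actions(self, state):               # actions available in a state
        return [f"Go({city})" for city in ROADS[state]]

    def result(self, state, action):        # transition model: RESULT(state, action)
        return action[3:-1]                 # "Go(Sibiu)" -> "Sibiu"

    def is_goal(self, state):               # goal test
        return state == self.goal

    def action_cost(self, state, action, next_state):
        return ROADS[state][next_state]     # path cost = sum of these step costs
```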
Example: Vacuum World state space graph

 States?
 Initial state ?
 Actions ?
 Transition model?
 goal states ?
 path cost ?
Vacuum world state space graph

 States? the dirt locations and the agent (robot) location.


 Initial state ? Any state can be designated as the initial state.
 Actions? Left, Right, Suck
 Transition model? The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. The complete state space is shown in the figure above.
 Goal states? no dirt at any location
 Path cost? 1 per action
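As a small illustration of how these answers can be written down in code, here is a hand-rolled sketch of the two-square vacuum world. The square names "A" (left) and "B" (right) and the function names are assumptions made for this example; a state is a pair (agent location, set of dirty squares).

```python
# Two-square vacuum world: a state is (agent_location, frozenset_of_dirty_squares).
LEFT, RIGHT = "A", "B"                       # illustrative names for the two squares

def vacuum_actions(state):
    return ["Left", "Right", "Suck"]

def vacuum_result(state, action):            # transition model
    loc, dirt = state
    if action == "Left":
        return (LEFT, dirt)                  # go to the left square (no effect if already there)
    if action == "Right":
        return (RIGHT, dirt)                 # go to the right square (no effect if already there)
    if action == "Suck":
        return (loc, dirt - {loc})           # remove dirt here (no effect if already clean)
    raise ValueError(action)

def vacuum_is_goal(state):
    _, dirt = state
    return not dirt                          # goal: no dirt at any location

# Example: start in the left square with both squares dirty.
s0 = (LEFT, frozenset({LEFT, RIGHT}))
s1 = vacuum_result(s0, "Suck")               # -> ("A", frozenset({"B"}))
```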
Example: The 8-puzzle

 States ?
 Initial state ?
 Actions ?
 Transition model?
 Goal states ?
 Path cost ?
Example: The 8-puzzle

 States? locations of tiles


 Initial state ? Any state can be designated as the initial state.
 Actions? move blank left, right, up, down
 Transition model? Given a state and action, this returns the resulting state;
for example, if we apply Left to the start state in the above Figure, the
resulting state has the 5 and the blank switched
 Goal states? the given goal configuration (a single state)
 Path cost? 1 per move
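A similar sketch for the 8-puzzle transition model, representing a state as a tuple of nine entries read row by row, with 0 standing for the blank. The particular goal configuration and the helper names are illustrative assumptions.

```python
# 8-puzzle: a state is a tuple of nine entries read row by row, with 0 for the blank.
MOVES = {"Left": -1, "Right": +1, "Up": -3, "Down": +3}   # change in the blank's index

def puzzle_actions(state):
    i = state.index(0)                       # position of the blank
    row, col = divmod(i, 3)
    acts = []
    if col > 0: acts.append("Left")
    if col < 2: acts.append("Right")
    if row > 0: acts.append("Up")
    if row < 2: acts.append("Down")
    return acts

def puzzle_result(state, action):            # transition model: swap the blank with a neighbour
    i = state.index(0)
    j = i + MOVES[action]
    s = list(state)
    s[i], s[j] = s[j], s[i]
    return tuple(s)

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)           # one common choice of goal configuration

def puzzle_is_goal(state):
    return state == GOAL
```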
Problem-Solving Agents

 A problem-solving agent constructs sequences of actions that achieve its goals; this process is called search.

 Once a solution is found, the actions it recommends can be


carried out. This is called the execution phase.
Search

 A search algorithm takes a problem as input and returns a


solution in the form of an action sequence.

 The agent must find a sequence of actions that reaches the goal

 The performance measure is defined by

a) reaching the goal and

b) how “expensive” the path to the goal is.


Search Tree

 “What if” tree of sequences of actions and outcomes
 The root node corresponds to the starting state / initial state
 The children of a node correspond to the successor states of that node’s state
 A path through the tree corresponds to a sequence of actions
 A solution is a path ending in the goal state
 Nodes vs. States
 State is a representation of the world
 Node is a data structure that is part of the search tree. A node has to keep a pointer to its parent, the path cost, and possibly other info
[Figure: a chain from the starting state, through an action, to a successor state, and so on down to a goal state]
Search Tree
Tree Search Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(a) The initial state


Tree Search Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(b) After expanding Arad


Tree Search Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(c) After expanding Sibiu


Tree Search Example

 Nodes that have been expanded are shaded (green color);
 Nodes that have been generated but not yet expanded are outlined in bold (light green color);
 Nodes that have not yet been generated are shown in dashed lines.
Search Strategies

 A search strategy is defined by picking the order of node expansion


 Strategies are evaluated along the following dimensions:
 Completeness: does it always find a solution if one exists?
 Time complexity: how long does the search take? Measured by the number of states and actions considered (number of nodes generated)
 Space complexity: how much memory is needed to perform the search? Measured by the maximum number of nodes held in memory
 Cost optimality: does it always find a least-cost solution?

 Time and space complexity are measured in terms of


 b: maximum branching factor of the search tree (the number of
children that a node can have)
 d: depth of the least-cost solution
Search Strategies

1. Uninformed Search Strategies


 Use only the information available in the
problem definition
 Also called blind search
 All they can do is generate successors and
distinguish a goal state from a non-goal
state.
 All search strategies are distinguished by
the order in which nodes are expanded.
Search Strategies

2. Informed (Heuristic) Search Strategies


 Strategies that know whether one non-goal
state is “more promising” than another are
called informed search or heuristic search
strategies
 Use problem-specific knowledge beyond
the definition of the problem itself
 Can find solutions more efficiently than can
an uninformed strategy.
Infrastructure for Search Algorithms
 Search algorithms require a data structure to keep track of the
search tree.
 For each node n of the tree, we will have a structure that

contains the following four components:


• n.STATE: the state in the state space to which the node
corresponds;
• n.PARENT: the node in the search tree that generated this node;
• n.ACTION: the action that was applied to the parent to generate
the node;
• n.PATH-COST: the cost, traditionally denoted by g(n), of the path
from the initial state to the node, as indicated by the parent
pointers.
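A minimal Python version of this node structure, with field names mirroring the four components above, plus a small helper that follows the parent pointers back to the root to recover the action sequence (the helper name is an assumption, not part of the slide's notation).

```python
class Node:
    """A node in the search tree (a node is not the same thing as a state)."""
    def __init__(self, state, parent=None, action=None, path_cost=0.0):
        self.state = state                  # n.STATE
        self.parent = parent                # n.PARENT
        self.action = action                # n.ACTION
        self.path_cost = path_cost          # n.PATH-COST, traditionally g(n)

def solution(node):
    """Follow the parent pointers back to the root and return the action sequence."""
    actions = []
    while node.parent is not None:
        actions.append(node.action)
        node = node.parent
    return list(reversed(actions))
```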
Infrastructure for Search Algorithms
 Frontier is the set of all nodes available for expansion at any given
point
 The frontier needs to be stored in such a way that the search
algorithm can easily choose the next node to expand according to its
preferred strategy.
 The appropriate data structure for this is a queue. The operations
on a queue are as follows:
 EMPTY?(queue) returns true only if there are no more elements in
the queue.
 INSERT(element , queue) inserts an element at the end and returns
the resulting queue.
 POP(queue) removes an element of the queue and returns it; which element is removed depends on the type of queue (see the three variants below).
Infrastructure for Search Algorithms

Three common variants of a queue


1. First-In, First-Out (FIFO) queue, which pops the oldest
element of the queue;
2. Last-In, First-Out (LIFO) queue (also known as a
stack), which pops the newest element
3. Priority queue, which pops the element of the queue
with the highest priority according to some ordering
function.
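These three queue variants map directly onto standard Python containers; a minimal sketch, where the elements and priorities used are just illustrative values:

```python
from collections import deque
import heapq

# 1. FIFO queue: pop the oldest element.
fifo = deque()
fifo.append("A"); fifo.append("B")
assert fifo.popleft() == "A"

# 2. LIFO queue (stack): pop the newest element.
lifo = []
lifo.append("A"); lifo.append("B")
assert lifo.pop() == "B"

# 3. Priority queue: pop the element with the best value of the ordering function.
pq = []
heapq.heappush(pq, (140, "Sibiu"))           # (priority, element)
heapq.heappush(pq, (75, "Zerind"))
assert heapq.heappop(pq) == (75, "Zerind")
```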
Uninformed Search Strategies
 Uninformed search strategies use only the information
available in the problem definition (also called blind search).
 There is no additional information about states provided in
the problem definition.
The objectives of this lecture are to learn:
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
Search Strategies
 Search cost — depends on the time complexity
and the memory usage

 Total cost: combines the search cost and the


path cost of the solution found.
1. Breadth-First Search (BFS)
 Breadth-first search is a simple strategy in
which the root node is expanded first,
then all the successors of the root node
are expanded next, then their successors,
and so on.
 In general, all the nodes are expanded at
a given depth in the search tree before any
nodes at the next level are expanded.
 Expand shallowest unexpanded node
 Implementation:
 Frontier (Fringe) is a FIFO queue, i.e.,
new successors go at end
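Putting the pieces together, here is a compact sketch of breadth-first search. It assumes the Node class and the problem interface (initial, is_goal, actions, result) from the earlier sketches, unit step costs, and a visited ("reached") set matching the walkthrough that follows.

```python
from collections import deque

def breadth_first_search(problem):
    """Expand the shallowest unexpanded node first; the frontier is a FIFO queue.
    Assumes the Node class sketched earlier and unit step costs."""
    node = Node(problem.initial)
    if problem.is_goal(node.state):
        return node
    frontier = deque([node])                 # FIFO queue: new successors go at the end
    reached = {problem.initial}              # visited set, avoids repeated states
    while frontier:
        node = frontier.popleft()            # pop the oldest (shallowest) node
        for action in problem.actions(node.state):
            child_state = problem.result(node.state, action)
            child = Node(child_state, node, action, node.path_cost + 1)
            if problem.is_goal(child_state):
                return child                 # goal detected when the node is generated
            if child_state not in reached:
                reached.add(child_state)
                frontier.append(child)
    return None                              # failure: no solution
```

With the RomaniaProblem sketch from earlier, solution(breadth_first_search(RomaniaProblem())) would return the three-action route Go(Sibiu), Go(Fagaras), Go(Bucharest).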
1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): empty
Visited Set = ()
[Figure: example tree, level 0: A; level 1: B, C; level 2: D, E, F, G; level 3: H, I]
1. Breadth-First Search

Goal is H
Frontier (to be visited): A
POP
Visited Set = ()
1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): C, B
POP
Visited Set = (A)
1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): F, E, D, C
POP
Visited Set = (A, B)
1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): G, F, E, D
POP
Visited Set = (A, B, C)
1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): G, F, E
POP
Visited Set = (A, B, C, D)


1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): G, F
POP
Visited Set = (A, B, C, D, E)


1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): H, G
POP
Visited Set = (A, B, C, D, E, F)


1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): I, H
POP
Visited Set = (A, B, C, D, E, F, G)


1. Breadth-First Search (BFS)

Goal is H
Frontier (to be visited): I
DONE
Visited Set = (A, B, C, D, E, F, G, H)

Note: the visited set should list the nodes in the correct order of expansion.
1. Breadth-First Search (BFS)
Extra resources
 https://www.youtube.com/watch?v=HZ5YTanv5QE
Properties of Breadth-first Search

 Complete? Yes (if b is finite)


 Time? 1 + b + b^2 + b^3 + … + b^d = O(b^d)
 Space? O(b^d) (keeps every node in memory)
 Optimal? Yes (if cost = 1 per step)

b: maximum branching factor of the search tree (the number of children that a node can have)
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
[Figure: nodes per level for b = 2, namely b^0 = 1, b^1 = 2, b^2 = 4]
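To get a feel for O(b^d), this small calculation sums the series 1 + b + b^2 + … + b^d for illustrative values of b and d:

```python
def bfs_nodes(b, d):
    """Worst-case number of nodes generated by BFS: 1 + b + b^2 + ... + b^d."""
    return sum(b**k for k in range(d + 1))

print(bfs_nodes(2, 2))    # 1 + 2 + 4 = 7, as in the small figure above
print(bfs_nodes(10, 5))   # 111111 nodes for b = 10, d = 5
```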
BFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(a) The initial state


BFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(b) After expanding Arad


BFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Which node is the next to be expanded?


Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(c) After expanding Sibiu


BFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(d) After expanding Timisoara and Zerind


BFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(e) After expanding Arad again !!


Advantages & Disadvantages of BFS

Advantages

 It is guaranteed to find a solution if one exists.

Disadvantages
 Time complexity and space complexity are both O(b^d). This is a very big hurdle.

 BFS must generate and remember (store in the queue) all the nodes at each level, so even nodes that are of no practical use to the search must be kept.
2. Depth-First Search (DFS)

 Depth-first search always expands the deepest node in the


current frontier of the search tree.
 The search proceeds immediately to the deepest level of the search tree, where the nodes have no successors.
 As those nodes are expanded, they are dropped from the
frontier, so then the search “backs up” to the next deepest
node that still has unexplored successors.
 Frontier (Fringe) : uses a LIFO queue (stack). A LIFO queue
means that the most recently generated node is chosen for
expansion.
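A sketch of depth-first search that mirrors the BFS sketch above but uses a LIFO stack as the frontier. It again assumes the Node class and problem interface from the earlier sketches, and it keeps a visited set, matching the walkthrough below.

```python
def depth_first_search(problem):
    """Expand the deepest node first; the frontier is a LIFO queue (stack).
    Assumes the Node class sketched earlier and unit step costs."""
    frontier = [Node(problem.initial)]       # LIFO: append and pop at the same end
    visited = set()
    while frontier:
        node = frontier.pop()                # pop the most recently generated node
        if problem.is_goal(node.state):
            return node
        if node.state in visited:
            continue
        visited.add(node.state)
        for action in problem.actions(node.state):
            child_state = problem.result(node.state, action)
            if child_state not in visited:
                frontier.append(Node(child_state, node, action, node.path_cost + 1))
    return None                              # failure (may never return in infinite spaces)
```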
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): empty
Visited Set = ()
[Figure: example tree, level 0: A; level 1: B, G; level 2: C, D, E, H; level 3: F, I]
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): A
POP
Visited Set = ()
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): B, G
POP
Visited Set = (A)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): C, D, E, G
POP
Visited Set = (A, B)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): D, E, G
POP
Visited Set = (A, B, C)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): E, G
POP
Visited Set = (A, B, C, D)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): F, G
POP
Visited Set = (A, B, C, D, E)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): G
POP
Visited Set = (A, B, C, D, E, F)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): H
POP
Visited Set = (A, B, C, D, E, F, G)
2. Depth-First Search (DFS)

Goal is H
Frontier (to be visited): I
DONE
Visited Set = (A, B, C, D, E, F, G, H)

Note: the visited set should list the nodes in the correct order of expansion.
2. Depth-First Search (DFS)
Extra resources
 https://www.youtube.com/watch?v=Urx87-NMm6c
2. Depth-First Search (DFS)
DFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(a) The initial state


DFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(b) After expanding Arad


DFS Tree Example
Start: Arad
Goal: Bucharest

Arad

Which node is the next to be expanded?


Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(c) After expanding Sibiu


DFS Tree Example
Start: Arad
Goal: Bucharest

Arad

What is the problem?


Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(d) After expanding Arad again !!


DFS Tree Example
Start: Arad
Goal: Bucharest

Arad

What is the problem?


Infinite-depth spaces!

Sibiu Timisoara Zerind

Arad  Fagaras  Oradea  Rimnicu Vilcea   |   Arad  Lugoj   |   Arad  Oradea

(d) After expanding Arad again !!


2. Depth-First Search (DFS)

 This version of DFS is non-optimal.
 For example, if the goal is G, DFS will explore the entire left subtree and will return the (yellow) G, not the (green) G.
 The yellow G costs more than the green G.
[Figure: tree with two goal nodes labeled G, a shallow (green) G and a deeper (yellow) G inside the left subtree]
Properties of Depth-First Search
 Complete? No, because it may fail in infinite-depth spaces
(can be modified to avoid repeated states along the path)
 Time? O(b^m) with m = maximum depth
 terrible if m is much larger than d
 Space? O(bm), i.e., linear space! (we only need to remember a single path plus the expanded but unexplored sibling nodes)
 Optimal? No (it may find a non-optimal goal first)

b: maximum branching factor of the search tree (the number of children that a node can have)
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
Properties of Depth-First Search

 Depth-first search (DFS) seems to have no clear advantage over breadth-first search (BFS), so why do we include it?
 Depth-first tree search needs to store only a single path
from the root to a leaf node, along with the remaining
unexpanded sibling nodes for each node on the path.
 Once a node has been expanded, it can be removed from
memory as soon as all its descendants have been fully
explored. For a state space with branching factor b and
maximum depth m, depth-first search requires storage of
only O(bm) nodes.
Advantages & Disadvantages of DFS

Advantages

 Memory requirements are less because only nodes on the current path
are stored

 By chance, it may find a solution without examining much of the search space at all.

Disadvantages

 This type of search can go on and on, deeper and deeper into the search space, and thus we can get lost. This is referred to as a blind alley.
BFS Vs. DFS

            BFS       DFS
Complete?   YES       NO
Time        O(b^d)    O(b^m)
Space       O(b^d)    O(bm)
Optimal?    YES       NO

b: maximum branching factor of the search tree (the number of children that a node can have)
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
3. Iterative Deepening Search IDS

o Iterative deepening search (or iterative deepening depth-


first search) is a general strategy, often used in combination
with depth-first tree search, that finds the best depth limit.
o It does this by gradually increasing the limit—first 0, then 1,
then 2, and so on—until a goal is found.
o This will occur when the depth limit reaches d, the depth of the
shallowest goal node.
o In general, iterative deepening is the preferred uninformed
search method when there is a large search space, and the depth
of the solution is not known.
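A sketch of iterative deepening: a depth-limited depth-first search wrapped in a loop that raises the limit 0, 1, 2, … until a goal is found. It assumes the Node class and problem interface from the earlier sketches, and the function names are illustrative.

```python
from itertools import count

def depth_limited_search(problem, limit):
    """DFS that treats nodes at depth == limit as if they had no successors.
    Returns a goal Node, the string "cutoff", or None (space exhausted)."""
    def recurse(node, depth):
        if problem.is_goal(node.state):
            return node
        if depth == limit:
            return "cutoff"
        cutoff = False
        for action in problem.actions(node.state):
            child = Node(problem.result(node.state, action), node, action, node.path_cost + 1)
            result = recurse(child, depth + 1)
            if result == "cutoff":
                cutoff = True
            elif result is not None:
                return result
        return "cutoff" if cutoff else None

    return recurse(Node(problem.initial), 0)

def iterative_deepening_search(problem):
    for limit in count(0):                   # limit = 0, 1, 2, ...
        result = depth_limited_search(problem, limit)
        if result != "cutoff":
            return result                    # a goal node, or None if the space was exhausted
```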
Problem?

Goal is M
How does each strategy behave here? BFS? DFS? IDS?
[Figure: complete binary tree, level 0: A; level 1: B, C; level 2: D, E, F, G; level 3: H, I, J, K, L, M, N, O]
3. Iterative Deepening Search (l = 0)
3. Iterative Deepening Search (l = 1)
3. Iterative Deepening Search (l = 2)
3. Iterative Deepening Search (l = 3)
BFS Vs. DFS Vs. IDS

            BFS       DFS       IDS
Complete?   YES       NO        YES
Time        O(b^d)    O(b^m)    O(b^d)
Space       O(b^d)    O(bm)     O(bd)
Optimal?    YES       NO        YES

b: maximum branching factor of the search tree (the number of children that a node can have)
d: depth of the least-cost solution
m: maximum depth of the state space (may be ∞)
Advantages & Disadvantages of IDS
Advantages
 It is guaranteed to find a shortest-path solution (when all step costs are equal).
 It is a preferred uninformed search method when the search space
is large and the depth of solution is not known.
Disadvantages
 It performs wasted computations before reaching the goal depth.
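To see that the wasted work is usually modest, this small calculation compares the nodes generated by IDS with those generated by BFS for illustrative values b = 10 and d = 5 (nodes at depth k are regenerated once for every iteration whose limit is at least k, counting the root in every iteration):

```python
def bfs_nodes(b, d):
    return sum(b**k for k in range(d + 1))

def ids_nodes(b, d):
    # Nodes at depth k are generated (d + 1 - k) times when the goal sits at depth d.
    return sum((d + 1 - k) * b**k for k in range(d + 1))

b, d = 10, 5
print(bfs_nodes(b, d), ids_nodes(b, d))   # 111111 vs 123456: only about 11% more work
```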
Summary
Uninformed Search Strategies
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
