Search Algorithms
Each node in the search tree corresponds to a state in the state space, and the edges
in the search tree correspond to actions. The root of the tree corresponds to
the initial state of the problem. It is important to understand the distinction
between the state space and the search tree. The state space describes the (possibly
infinite) set of states in the world, and the actions that allow transitions from one
state to another.
The search tree describes paths between these states, reaching towards the goal.
The search tree may have multiple paths to (and thus multiple nodes for) any
given state, but each node in the tree has a unique path back to the root (as in all
trees).
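To make this concrete, a search-tree node can be represented as a small record that pairs a state with the path information that reached it. The following is a minimal sketch in Python; the field names (state, parent, action, path_cost) are an illustrative convention, not prescribed by the text.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A search-tree node: a state plus the path information that reached it."""
    state: Any                       # the state this node stands for
    parent: Optional["Node"] = None  # node that generated this node (None at the root)
    action: Any = None               # action applied to parent.state to reach this state
    path_cost: float = 0.0           # cost of the path from the root to this node
```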
Figure shows the first few steps in finding a path from Arad to Bucharest. The
root node of the search tree is at the initial state, Arad. We can expand the node
by considering the available ACTIONS for that state, using the RESULT function to
see where those actions lead, and generating a new node (called a child node or
successor node) for each of the resulting states. Each child node has Arad as its
parent node. Now we must choose which of these three child nodes to consider
next. This is the essence of search: following up one option now and putting the
others aside for later.
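A sketch of that expansion step, assuming the Node record above and a problem object with actions(state), result(state, action), and action_cost(s, a, s2) methods (hypothetical method names chosen for illustration):

```python
def expand(problem, node):
    """Generate the child (successor) nodes of a node."""
    for action in problem.actions(node.state):           # available ACTIONS in this state
        next_state = problem.result(node.state, action)  # RESULT of applying the action
        cost = node.path_cost + problem.action_cost(node.state, action, next_state)
        yield Node(state=next_state, parent=node, action=action, path_cost=cost)
```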
Suppose we choose to expand Sibiu first. Figure (bottom) shows the result: a set
of 6 unexpanded nodes (outlined in bold). We call this the frontier of
the search tree. We say that any state that has had a node generated for
it has been reached (whether or not that node has been expanded).
Figure shows the search tree superimposed on the state-space graph.
Note that the frontier separates two regions of the state-space graph: an interior
region where every state has been expanded, and an exterior region of states that
have not yet been reached.
Best-first search
In best-first search, on each iteration we choose a node on the frontier with a
minimum value of some evaluation function f(n), return it if its state is a goal
state, and otherwise apply EXPAND to generate child nodes. Each child node is
added to the frontier if it has not been reached before, or is re-added if it is now
being reached with a path that has a lower path cost than any previous path. The
algorithm returns either an indication of failure, or a node that represents a path
to a goal. By employing different f(n) functions, we get different specific
algorithms, which this chapter will cover.
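As a rough sketch of how this fits together (assuming the Node and expand helpers above, plus problem.initial and problem.is_goal, all illustrative names), best-first search can be written with a priority queue ordered by f and a reached table mapping each state to the best node found for it so far:

```python
import heapq
from itertools import count

def best_first_search(problem, f):
    """Best-first search: repeatedly expand the frontier node with minimum f(n)."""
    node = Node(state=problem.initial)
    order = count()                              # tie-breaker so heapq never compares Nodes
    frontier = [(f(node), next(order), node)]    # priority queue keyed by f
    reached = {problem.initial: node}            # best node found so far for each state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.is_goal(node.state):
            return node                          # its parent chain is a path to the goal
        for child in expand(problem, node):
            s = child.state
            if s not in reached or child.path_cost < reached[s].path_cost:
                reached[s] = child               # keep only the cheapest path to s
                heapq.heappush(frontier, (f(child), next(order), child))
    return None                                  # failure: no path to a goal was found
```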
The frontier is stored in a queue of some kind:
• A priority queue first pops the node with the minimum cost according to some
evaluation function, f. It is used in best-first search.
• A FIFO queue or first-in-first-out queue first pops the node that was added to
the queue first; we shall see it is used in breadth-first search.
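In Python, for example, these two disciplines map onto the standard library's heapq module (priority queue) and collections.deque (FIFO queue); a brief, illustrative comparison:

```python
import heapq
from collections import deque

# Priority queue: pop the entry with the minimum key (here, an f-value).
pq = []
heapq.heappush(pq, (5, "Sibiu"))
heapq.heappush(pq, (2, "Zerind"))
print(heapq.heappop(pq))   # (2, 'Zerind') -- the minimum f-value comes out first

# FIFO queue: pop the entry that was added earliest.
fifo = deque()
fifo.append("Sibiu")
fifo.append("Zerind")
print(fifo.popleft())      # 'Sibiu' -- first in, first out
```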
Redundant paths
The search tree shown in Figure includes a path from Arad to Sibiu and back to
Arad again. We say that Arad is a repeated state in the search tree, generated in
this case by a cycle (also known as a loopy path). So even though the state space
has only 20 states, the complete search tree is infinite because there is no limit to
how often one can traverse a loop.
Consider an agent in a 10×10 grid world, with the ability to move to any of 8
adjacent squares. If there are no obstacles, the agent can reach any of the 100
squares in 9 moves or fewer. But the number of paths of length 9 is almost 8^9 (a
bit less because of the edges of the grid), or more than 100 million. In other words,
the average cell can be reached by over a million redundant paths of length 9, and
if we eliminate redundant paths, we can complete a search roughly a million
times faster. As the saying goes, algorithms that cannot remember the past are
doomed to repeat it. There are three approaches to this issue.
First, we can remember all previously reached states (as best-first search does),
allowing us to detect all redundant paths, and keep only the best path to each state.
This is appropriate for state spaces where there are many redundant paths, and is
the preferred choice when the table of reached states will fit in memory.
Second, we can not worry about repeating the past. There are some problem
formulations where it is rare or impossible for two paths to reach the same state.
An example would be an assembly problem where each action adds a part to an
evolving assemblage, and there is an ordering of parts so that it is possible to add
A and then B, but not B and then A.
For those problems, we could save memory space if we don’t track reached states
and we don’t check for redundant paths. We call a search algorithm a graph
search if it checks for redundant paths and a tree-like search if it does not check.
Since each node has a chain of parent pointers, we can check for cycles with no
need for additional memory by following up the chain of parents to see if the state
at the end of the path has appeared earlier in the path. Some implementations
follow this chain all the way up, and thus eliminate all cycles; other
implementations follow only a few links (e.g., to the parent, grandparent, and
great-grandparent), and thus take only a constant amount of time, while
eliminating all short cycles (and relying on other mechanisms to deal with long
cycles).
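A sketch of that check, assuming the Node representation above: walk up the parent chain from a newly generated node and report a cycle if the node's state reappears; bounding the number of links followed makes the check constant-time per node at the cost of missing long cycles.

```python
def is_cycle(node, k=None):
    """Return True if node.state already appears among its ancestors.

    With k=None the whole parent chain is checked (eliminating all cycles);
    with a small k (e.g. 3) only short cycles are caught, in constant time.
    """
    ancestor, steps = node.parent, 0
    while ancestor is not None and (k is None or steps < k):
        if ancestor.state == node.state:
            return True
        ancestor, steps = ancestor.parent, steps + 1
    return False
```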
• Cost optimality
• Time complexity
• Space complexity
Time and space complexity are considered with respect to some measure of the
problem difficulty. In theoretical computer science, the typical measure is the size
of the state-space graph, |V| + |E|, where |V| is the number of vertices (state nodes)
of the graph and |E| is the number of edges (distinct state/action pairs).
But in many AI problems, the graph is represented only implicitly by the initial
state, actions, and transition model. For an implicit state space, complexity
can be measured in terms of d, the depth or number of actions in
an optimal solution; m, the maximum number of actions in any path; and b, the
branching factor, or number of successors of a node that need to be considered.
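For intuition about how the branching factor and depth enter such measures (a standard counting argument, stated here as an illustration rather than a result from the passage above): a uniform tree with branching factor b contains

$$1 + b + b^2 + \cdots + b^d = \frac{b^{d+1} - 1}{b - 1} = O(b^d)$$

nodes down to depth d, so an algorithm that may generate every node out to the depth of the shallowest solution can require time and memory on the order of b^d.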