
Problem-Solving as Search

Intelligent Agents
Agent: anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Agent function: agent behavior is determined by the agent function, which maps any given percept sequence to an action.
Agent program: the agent function for an artificial agent is implemented by an agent program.

A Simple Reflex Agent


[Diagram: a simple reflex agent. Sensors report what the world is like now; condition-action rules select what action to do now; actuators act on the environment.]

Agent with Model and Internal State


[Diagram: a reflex agent with model and internal state. Sensors report what the world is like now; a model of how the world evolves maintains the internal state; condition-action rules select what action to do now; actuators act on the environment.]

Goal-Based Agent
[Diagram: a goal-based agent. Sensors report what the world is like now; a model of how the world evolves predicts what it will be like if the agent does action A; goals determine what action to do now; actuators act on the environment.]

Schedule
Search
Machine learning
Knowledge-based systems
Discovery

Problem Solving as Search


Search is a central topic in AI
Originated with Newell and Simon's work on problem solving. Famous book: Human Problem Solving (1972)

Automated reasoning is a natural search task. More recently: smarter algorithms.


Given that almost all AI formalisms (planning, learning, etc.) are NP-complete or worse, some form of search is generally unavoidable (no smarter algorithm available).

Defining a Search Problem


State space - described by:
initial state - the starting state
actions - the possible actions available
successor function (operators) - given a particular state x, returns a set of <action, successor> pairs
Goal test - determines whether a given state is a goal state (sometimes an explicit list of goal states, sometimes a condition).
Path cost - a function that assigns a cost to a path.
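These ingredients can be captured as a small abstract interface. The following is a minimal Python sketch; the class and method names (SearchProblem, successors, goal_test, path_cost) are illustrative choices, not part of the original slides.

class SearchProblem:
    # Abstract formulation of a search problem (hypothetical interface).
    def __init__(self, initial_state):
        self.initial_state = initial_state

    def successors(self, state):
        # Return a set of (action, successor) pairs for `state`.
        raise NotImplementedError

    def goal_test(self, state):
        # Return True if `state` is a goal state.
        raise NotImplementedError

    def path_cost(self, cost_so_far, state, action, successor):
        # Cost of the path that reaches `successor` from `state` via `action`.
        # Default: every step costs 1.
        return cost_so_far + 1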

The 8 Puzzle

[Diagram: 8-puzzle initial state and goal state.]

Clicker
What is the size of the state space?
A. 4 B. 3x3 C. 9! D. 9^9 E. Whatever

Clicker
How many actions possible for each state (on average)?
A. ~1 B. ~4 C. ~9 D. ~9!

Cryptarithmetic
  S E N D
+ M O R E
---------
M O N E Y

Find a substitution of digits for letters such that the resulting sum is arithmetically correct; each letter must stand for a different digit.
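This puzzle is small enough to solve by brute-force search over digit assignments. A minimal Python sketch follows; the function name and the dictionary-based encoding are illustrative, not from the slides.

from itertools import permutations

def solve_send_more_money():
    # Try every assignment of distinct digits to the 8 letters in SEND + MORE = MONEY.
    letters = "SENDMORY"
    for digits in permutations(range(10), len(letters)):
        a = dict(zip(letters, digits))
        if a["S"] == 0 or a["M"] == 0:   # leading letters cannot map to zero
            continue
        send  = a["S"] * 1000 + a["E"] * 100 + a["N"] * 10 + a["D"]
        more  = a["M"] * 1000 + a["O"] * 100 + a["R"] * 10 + a["E"]
        money = a["M"] * 10000 + a["O"] * 1000 + a["N"] * 100 + a["E"] * 10 + a["Y"]
        if send + more == money:
            return a
    return None

print(solve_send_more_money())   # expected: the unique solution (M=1, O=0, ...)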

Solving a Search Problem: State-Space Search

Input:
Initial state
Goal test
Successor function
Path cost function

Output:
Path from initial state to goal state. Solution quality is measured by the path cost.

Generic Search Algorithm


L = make-list(initial-state)
loop
    node = remove-front(L)    ; node contains the path of how the algorithm got there
    if goal-test(node) == true then return (path to node)
    S = successors(node)
    insert(S, L)
until L is empty
return failure

Search procedure defines a search tree


Search tree:
root node - initial state
children of a node - successor states
fringe of tree - L: states not yet expanded
Search strategy - the algorithm for deciding which leaf node to expand next:
stack: Depth-First Search (DFS)
queue: Breadth-First Search (BFS)
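A minimal, runnable sketch of the generic algorithm, assuming the hypothetical SearchProblem interface sketched earlier. The only difference between BFS and DFS is which end of the frontier new paths are inserted at.

from collections import deque

def generic_search(problem, strategy="bfs"):
    # Each frontier entry keeps the path of how the algorithm got there.
    frontier = deque([[problem.initial_state]])        # L = make-list(initial-state)
    while frontier:                                    # until L is empty
        path = frontier.popleft()                      # node = remove-front(L)
        state = path[-1]
        if problem.goal_test(state):                   # goal-test(node)
            return path                                # path from initial state to goal
        for _action, succ in problem.successors(state):  # S = successors(node)
            if strategy == "bfs":
                frontier.append(path + [succ])         # queue: breadth-first
            else:
                frontier.appendleft(path + [succ])     # stack: depth-first
    return None                                        # failure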

Solving the 8-Puzzle


[Diagram: 8-puzzle start state and goal state.]

What would the search tree look like after the start state was expanded?
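For concreteness, here is one way the 8-puzzle successor function could look in Python. The tuple encoding (9 entries, 0 marking the blank) and the convention that actions name the direction the blank moves are illustrative assumptions, not from the slides.

def puzzle_successors(state):
    # `state` is a tuple of 9 entries with 0 for the blank; returns (action, successor) pairs.
    moves = {"up": -3, "down": 3, "left": -1, "right": 1}
    blank = state.index(0)
    result = []
    for action, delta in moves.items():
        target = blank + delta
        if not 0 <= target <= 8:
            continue                                  # off the board vertically
        if delta in (-1, 1) and target // 3 != blank // 3:
            continue                                  # no wrapping across rows
        succ = list(state)
        succ[blank], succ[target] = succ[target], succ[blank]
        result.append((action, tuple(succ)))
    return result

# Example: with the blank in the center, all four moves are possible.
print(puzzle_successors((1, 2, 3, 4, 0, 5, 6, 7, 8)))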

Node Data Structure

[Diagram: a search-tree NODE with fields STATE, PARENT-NODE, ACTION = right, DEPTH = 6, PATH-COST = 6, and links to its CHILD-NODEs.]
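A minimal Python sketch of that node structure; the class name and field names mirror the diagram, and the path() helper is an illustrative addition.

class Node:
    # A node in the search tree.
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state            # STATE this node represents
        self.parent = parent          # PARENT-NODE this node was expanded from
        self.action = action          # ACTION that produced this node, e.g. "right"
        self.path_cost = path_cost    # PATH-COST g(n) from the root
        self.depth = parent.depth + 1 if parent else 0   # DEPTH in the tree

    def path(self):
        # Return the list of nodes from the root down to this node.
        node, nodes = self, []
        while node:
            nodes.append(node)
            node = node.parent
        return list(reversed(nodes))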

Sliding Block Puzzles


8-puzzle (on a 3x3 grid) has 181,440 states
Easily solvable from any random position

15-puzzle (on a 4x4 grid) has ~1.3 trillion states
Solvable in a few milliseconds

24-puzzle (on a 5x5 grid) has ~10^25 states
Difficult to solve

Evaluating a Search Strategy


Completeness: is the strategy guaranteed to find a solution when there is one?
Time complexity: how long does it take to find a solution?
Space complexity: how much memory does it need?
Optimality: does the strategy always find a lowest-cost path to a solution? (one solution may cost more than another)

Uninformed search: BFS

Consider paths of length 1, then of length 2, then of length 3, then of length 4,....

Time and Memory Requirements for BFS: O(b^(d+1))

Let b = branching factor and d = solution depth; then the maximum number of nodes generated is: b + b^2 + ... + b^d + (b^(d+1) - b)

Time and Memory Requirements for BFS: O(b^(d+1))


Example: b = 10; 10,000 nodes generated per second; each node requires 1,000 bytes of storage.

Depth   Nodes      Time        Memory
2       1,100      0.11 sec    1 MB
4       111,100    11 sec      106 MB
6       10^7       19 min      10 GB
8       10^9       31 hrs      1 TB
10      10^11      129 days    101 TB
12      10^13      35 yrs      10 PB
14      10^15      3,523 yrs   1 EB
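A quick sketch that reproduces (approximately) the numbers in the table, under the stated assumptions of b = 10, 10,000 nodes per second, and 1,000 bytes per node.

b, nodes_per_sec, bytes_per_node = 10, 10_000, 1_000
for d in range(2, 15, 2):
    # maximum nodes generated: b + b^2 + ... + b^d + (b^(d+1) - b)
    nodes = sum(b ** i for i in range(1, d + 1)) + (b ** (d + 1) - b)
    seconds = nodes / nodes_per_sec
    megabytes = nodes * bytes_per_node / 1e6
    print(f"depth {d:2}: {nodes:.2e} nodes, {seconds:.3g} s, {megabytes:.3g} MB")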

Uniform-cost Search
Use BFS, but always expand the lowest-cost node on the fringe, as measured by path cost g(n).

[Diagram: an example graph with start S, goal G, and edge costs, showing successive expansions of the lowest-cost node on the fringe.]

Requirement: g(Successor(n)) >= g(n), i.e. path cost never decreases along a path.

Always expand the lowest-cost node in the open list. The goal test is applied when a node is selected for expansion, not when it is generated.
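A minimal uniform-cost search sketch using a priority queue ordered by g(n), again assuming the hypothetical SearchProblem and Node classes sketched earlier (states must be hashable).

import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                                   # tie-breaker so the heap never compares nodes
    frontier = [(0, next(tie), Node(problem.initial_state))]
    best_g = {problem.initial_state: 0}             # cheapest known path cost per state
    while frontier:
        g, _, node = heapq.heappop(frontier)        # lowest g(n) on the fringe
        if problem.goal_test(node.state):           # goal test at expansion, not at generation
            return node.path()
        for action, succ in problem.successors(node.state):
            new_g = problem.path_cost(g, node.state, action, succ)
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g, next(tie), Node(succ, node, action, new_g)))
    return None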

Uninformed search: DFS

DFS vs. BFS


            BFS            DFS
Complete    Yes            Only for finite depth
Optimal     Yes            No
Time        O(b^(d+1))     O(b^m)
Space       O(b^(d+1))     O(bm)

b is the branching factor, d is the solution depth, m is the maximum search depth


Time:
m = d: DFS typically wins
m > d: BFS might win
m is infinite: BFS probably will do better
Space:
DFS almost always beats BFS

Which search should I use...


If there may be infinite paths?

B=BFS

D=DFS

Which search should I use...


If goal is at a known depth?

B=BFS

D=DFS

Which search should I use...


If there is a large (possibly infinite) branching factor?

B=BFS

D=DFS

Which search should I use...


If there are lots of solutions?

B=BFS

D=DFS

Backtracking Search
Idea: DFS, but don't generate all b successors of a state before moving to the next level
Generate the next state only as needed (e.g. from the previous state)
Uses only O(m) storage
Important when the space required to store each state is very large (e.g. assembly planning)

Iterative Deepening [Korf 1985]


Idea: Use an artificial depth cutoff, c. If search to depth c succeeds, we're done. If not, increase c by 1 and start over.

Each iteration searches using depth-limited DFS.

[Diagrams: successive depth-limited searches of the same tree with limit = 0, 1, 2, 3.]
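A compact sketch of depth-limited DFS and the iterative-deepening wrapper, assuming the hypothetical SearchProblem and Node classes from earlier; max_limit is an illustrative safety cap, not part of the slides.

def depth_limited_search(problem, node, limit):
    # DFS that descends at most `limit` levels below `node`; returns a path or None.
    if problem.goal_test(node.state):
        return node.path()
    if limit == 0:
        return None
    for action, succ in problem.successors(node.state):
        result = depth_limited_search(problem, Node(succ, node, action), limit - 1)
        if result is not None:
            return result
    return None

def iterative_deepening_search(problem, max_limit=50):
    # Run depth-limited DFS with cutoff c = 0, 1, 2, ... until a solution is found.
    for limit in range(max_limit + 1):
        result = depth_limited_search(problem, Node(problem.initial_state), limit)
        if result is not None:
            return result
    return None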

Cost of Iterative Deepening


Space: O(bd), as in DFS. Time: O(b^d).

b       ratio of IDS to DFS
2       3
3       2
5       1.5
10      1.2
25      1.08
100     1.02

Bidirectional Search

Comparing Search Strategies


Criterion    Breadth-First   Uniform-Cost          Depth-First   Iterative Deepening   Bidirectional (if applicable)
Time         O(b^(d+1))      O(b^(1+⌊C*/ε⌋))       O(b^m)        O(b^d)                O(b^(d/2))
Space        O(b^(d+1))      O(b^(1+⌊C*/ε⌋))       O(bm)         O(bd)                 O(b^(d/2))
Optimal?     Yes             Yes                   No            Yes                   Yes
Complete?    Yes             Yes                   No            Yes                   Yes

(C* is the cost of the optimal solution and ε the minimum step cost.)

***Note that many of the "Yes" entries above have caveats, which we discussed when covering each of the algorithms.
