AI Unit 2 Notes

The State Space Approach in Artificial Intelligence systematically represents all possible states of a system and the transitions between them, focusing on key elements such as states, actions, and goal states. It includes various search strategies, categorized into uninformed and informed searches, with techniques like Breadth-First Search, Depth-First Search, and A* algorithm to efficiently find solutions. The document also discusses specific problems like the Water Jug Problem and the Mini-Max algorithm used in decision-making and game theory.

Uploaded by

Deepak Tyagi

Unit 2

# State Space Approach


The State Space Approach is a method used in Artificial Intelligence to solve problems by
representing all possible states of a system and the transitions between them. It provides a
structured way to define and explore problems systematically.

Key Elements of the State Space Approach:

- State:
  - A state represents a snapshot of the system at a particular moment.
  - It contains all the information needed to describe the situation.
- State Space:
  - The set of all possible states that can be reached from the initial state by applying valid actions.
  - It forms a graph or tree structure, where nodes represent states and edges represent actions.
- Initial State:
  - The starting point or beginning state of the system. For example, in a puzzle, the initial state is the arrangement of pieces at the start.
- Goal State:
  - The desired state, i.e. the solution to the problem. For example, in a puzzle, the goal state is the completed arrangement.
- Actions/Operators:
  - Actions are the moves or steps that transition the system from one state to another. For example, in a chess game, a move of a chess piece is an action.
- Path:
  - A sequence of states connected by actions, leading from the initial state to the goal state.
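The elements above can be captured directly in code. The following is a minimal sketch, using a toy problem invented purely for illustration (integer states, with +1/-1 as the only actions):

```python
# A minimal sketch of the key elements of a state-space problem.
# The toy problem here (integer states, +1/-1 actions) is hypothetical,
# chosen only to show the structure: initial state, actions, transition
# model, and goal test.

class StateSpaceProblem:
    """Bundles the key elements of a state-space problem."""

    def __init__(self, initial_state, goal_state):
        self.initial_state = initial_state   # the starting point
        self.goal_state = goal_state         # the desired state

    def actions(self, state):
        """Actions/operators applicable in `state`."""
        return ["+1", "-1"]

    def result(self, state, action):
        """Transition model: the state reached by applying `action`."""
        return state + 1 if action == "+1" else state - 1

    def is_goal(self, state):
        """Goal test."""
        return state == self.goal_state

problem = StateSpaceProblem(initial_state=0, goal_state=3)
print(problem.result(0, "+1"))   # applying an action yields a new state
print(problem.is_goal(3))        # the goal test recognises the goal state
```

A search algorithm then only needs these four pieces; it never has to know what the states "mean".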
# Searching Process

The searching process in Artificial Intelligence is how a system or agent finds a solution to a problem by exploring different options. Searching is the sequence of steps that transforms the initial state into the goal state.

The process of search includes:
- An initial state description of the problem.
- A set of legal operators that change the state.
- The final or goal state.

Following are the parameters used to evaluate a search technique:

- Completeness: An algorithm is said to be complete if it is guaranteed to find a solution in some finite time, whenever a solution exists.
- Space and time complexity: These address the memory required and the time factor in terms of the number of operations carried out.
- Optimality: Optimality tells us how good the solution is. An algorithm or search process is said to be optimal if it gives the best solution.

# Types of search strategies


1. Uninformed search:
- Uninformed search strategies have no additional information about states beyond that provided in the problem definition (initial state, goal state, and possible actions).
- The agent explores all possible paths one by one until it finds a solution. It doesn't "know" which path is better and works like trial and error.
- An uninformed search proceeds in a systematic way by exploring nodes in some predetermined order, or simply by selecting nodes at random. Uninformed search is also called Brute Force Search or Blind Search.

Characteristics of Uninformed Search:

- No prior knowledge of the problem is used.
- Searches blindly, relying only on the structure of the state space.
- Works well in simple or small problems but becomes inefficient for large spaces.

Types of uninformed search:
- Breadth First Search
- Depth First Search
- Uniform Cost Search

2. Informed search:
- Informed search strategies use extra information (called a heuristic) to guess which paths are more promising. This allows the agent to focus on better options and solve problems faster.
- Informed search strategies can find a solution more efficiently than an uninformed search strategy.
- Informed search is also called Heuristic search.
- The heuristic function estimates the cost of reaching the goal from a given state, helping the agent prioritize better paths.

Characteristics of Informed Search:
- Uses a heuristic function to guide the search.
- Faster and more efficient than uninformed search.
- Reduces the search space significantly.

Types of informed search:
- Hill Climbing Search
- Best First Search
- A* Algorithm
# BFS and DFS

1. Breadth-First Search (BFS)


BFS is a search strategy that starts at the initial node and explores all its
neighbors first before moving on to the next level of neighbors. It uses a
queue to keep track of the nodes it needs to explore. The idea is to go level
by level, exploring all nodes that are one step away from the current node
before moving to the next level of nodes.
How BFS Works:
- BFS starts by exploring the initial node.
- Then it moves to all nodes directly connected to the initial node (i.e., its neighbours).
- After finishing a level, it moves on to the next level, exploring the neighbours of the previously explored nodes.
- This process continues until the goal is found.

Advantages of BFS:
- Guarantees the shortest path (in terms of number of steps).
- Complete: BFS will always find a solution if one exists.
- Best for finding paths.

Disadvantages of BFS:
- It requires lots of memory, since each level of the tree must be saved in memory to expand the next level.
- BFS needs lots of time if the solution is far away from the root node.
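The level-by-level behaviour described above can be sketched in a few lines. This is a minimal version over an explicit graph; the graph itself is a hypothetical example, not taken from the notes:

```python
from collections import deque

# Minimal BFS sketch: a FIFO queue of paths guarantees that shorter
# paths (fewer edges) are always expanded before longer ones.
# The example graph below is hypothetical.

def bfs(graph, start, goal):
    """Return a path with the fewest edges from start to goal, or None."""
    frontier = deque([[start]])   # queue of partial paths (FIFO)
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
print(bfs(graph, "A", "F"))  # explores level by level: A, then B/C, then D/E, then F
```

The queue is the key design choice: swapping it for a stack turns this same code into DFS.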
2. Depth-First Search (DFS)

DFS is a search strategy that explores as deep as possible along one path
before backtracking and trying another path. It uses a stack to keep track of
the nodes that still need to be explored. DFS goes deep into a branch of the
tree (or graph) until it hits a dead-end, then backtracks and explores other
paths.
How DFS Works:
- DFS starts at the initial node and explores as far as possible along one path.
- If it reaches a node with no unvisited neighbours (a dead end), it backtracks to the previous node and explores the next possible path.
- This continues until the goal is found or all paths are explored.

Advantages of DFS:
- Uses less memory than BFS (only the current path must be stored).
- Good for deep problems.

Disadvantages of DFS:
- May not find the shortest path.
- Can get stuck going down very deep (or infinite) branches.
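A minimal recursive DFS sketch follows. The graph is a hypothetical example chosen so that DFS returns a longer route than the shortest one, illustrating the disadvantage noted above:

```python
# Minimal recursive DFS sketch: go deep along one branch, backtrack on
# dead ends. The hypothetical graph is chosen so that DFS commits to the
# long branch A-B-C-D-F even though the shorter A-E-F exists.

def dfs(graph, goal, path, visited):
    """Return the first path to `goal` found depth-first, or None."""
    node = path[-1]
    if node == goal:
        return path
    for neighbour in graph.get(node, []):
        if neighbour not in visited:
            visited.add(neighbour)
            found = dfs(graph, goal, path + [neighbour], visited)
            if found:
                return found          # propagate the solution upwards
    return None                       # dead end: caller backtracks

graph = {
    "A": ["B", "E"],
    "B": ["C"],
    "C": ["D"],
    "D": ["F"],
    "E": ["F"],
}
print(dfs(graph, "F", ["A"], {"A"}))  # long route, not the shortest one
```

Note the result has four edges, while BFS on the same graph would return the two-edge path through E.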
# Heuristic function
A heuristic function is a problem-solving technique used in AI to guide the search for a solution in the most promising direction. It provides an estimate of the "cost" or "distance" from the current state to the goal state. Heuristics are used in informed search strategies (like A* search) to make the search process more efficient by focusing on the most relevant paths.

Definition:
A heuristic is a function h(n) that gives an estimate of the remaining cost from a given node n to the goal. This value helps the AI agent decide which path to take, by preferring nodes with the lowest heuristic value.

The value of the heuristic function is always positive, and an admissible heuristic satisfies

h(n) <= h*(n)

where h(n) is the heuristic (estimated) cost and h*(n) is the actual cost of an optimal path from n to the goal.
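A concrete example may help: on a grid where only horizontal and vertical moves of cost 1 are allowed, the Manhattan distance is a classic admissible heuristic, since it never overestimates the true remaining cost (so h(n) <= h*(n) holds):

```python
# Manhattan distance: a standard admissible heuristic for grid problems
# with 4-directional unit-cost moves. States are (x, y) coordinates.

def manhattan(state, goal):
    """Estimate the remaining cost from `state` to `goal` on a grid."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (3, 4)))  # at least 7 unit moves are needed
```

With obstacles on the grid the true cost h*(n) can only be larger, never smaller, which is exactly what admissibility requires.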
# A* algorithm

A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines features of UCS and greedy best-first search, which lets it solve the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function.

This search algorithm expands a smaller search tree and provides an optimal result faster. The A* algorithm is similar to UCS except that it uses g(n) + h(n) instead of g(n). In the A* search algorithm, we use the search heuristic as well as the cost to reach the node; we combine both costs as follows, and this sum is called the fitness number:

f(n) = g(n) + h(n)
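A minimal sketch of A* with a priority queue ordered by f(n) = g(n) + h(n) follows. The weighted graph and the heuristic values are hypothetical, chosen so that the heuristic is admissible:

```python
import heapq

# Minimal A* sketch: the frontier is a priority queue ordered by
# f(n) = g(n) + h(n). The graph (edges with step costs) and the
# heuristic table h are hypothetical illustration data.

def a_star(graph, h, start, goal):
    """Return (path, cost) of a cheapest path from start to goal."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g found per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                new_f = new_g + h[neighbour]     # fitness number
                heapq.heappush(frontier, (new_f, new_g, neighbour, path + [neighbour]))
    return None, float("inf")

graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 5)],
}
h = {"S": 7, "A": 6, "B": 2, "G": 0}             # admissible estimates
print(a_star(graph, h, "S", "G"))                # cheapest route, cost 8
```

Setting h to zero everywhere reduces this code to UCS, which makes the "UCS plus heuristic" relationship in the text concrete.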

# Hill Climbing Algorithm


The Hill Climbing Algorithm is a popular search and optimization technique in Artificial Intelligence (AI).

The hill climbing search algorithm is simply a loop that continually moves in the direction of increasing value, that is, uphill (towards the goal). It terminates when it reaches a "peak" where no neighbour has a higher value.

It is a simple, greedy algorithm used to find solutions to problems by iteratively making small changes to a current solution. Hill Climbing is primarily used for solving optimization problems.

One of the widely discussed examples of the hill climbing algorithm is the Travelling Salesman Problem, in which we need to minimize the distance travelled by the salesman.

Working of Hill Climbing Algorithm:

1. Start with an Initial Solution: begin with a random solution or starting point in the problem.
2. Evaluate the Neighbours: check the nearby solutions by making small changes to the current solution.
3. Choose the Best Neighbour: from all the nearby solutions, choose the one that is better (higher value for maximization or lower value for minimization).
4. Repeat the Process: move to the best neighbour and check its neighbours again. Keep repeating this process until no better solution is found.
5. Termination: the algorithm stops when no better solution can be found. At this point it has reached the best solution it can find in that area (a local optimum).
Algorithm:
1. Evaluate the initial state; if it is the goal state, then return success and stop.
2. Loop until a solution is found or there is no new operator left to apply.
3. Select and apply an operator to the current state.
4. Check the new state:
   - If it is the goal state, then return success and quit.
   - Else, if it is better than the current state, then make the new state the current state.
   - Else, if it is not better than the current state, then continue the loop (step 2).
5. Exit.
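The loop above can be sketched for a one-dimensional maximization problem. The objective function and the step size are hypothetical illustration choices:

```python
# Minimal hill climbing sketch on a 1-D maximization problem.
# The objective f and the neighbourhood (current +/- step) are
# hypothetical; real problems define their own neighbour moves.

def hill_climb(f, start, step=1):
    """Repeatedly move to a better neighbour; stop at a local optimum."""
    current = start
    while True:
        neighbours = [current - step, current + step]
        best = max(neighbours, key=f)        # best nearby solution
        if f(best) <= f(current):
            return current                   # no neighbour improves: stop
        current = best                       # move uphill and repeat

f = lambda x: -(x - 3) ** 2                  # a single peak at x = 3
print(hill_climb(f, start=-5))               # climbs uphill to the peak
```

Because the loop only ever moves uphill, on a function with several peaks it would stop at whichever local maximum is nearest the starting point, which is exactly the weakness discussed below.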

# Types:
There are different variants of Hill Climbing, and each has its own approach:

1. Simple Hill Climbing:
   - This is the basic version, where the algorithm checks the immediate neighbours of the current solution. If a better neighbour is found, it moves to that neighbour. The process is repeated until no better neighbour is found.

2. Steepest-Ascent Hill Climbing:
   - This version evaluates all the neighbours and selects the one that offers the steepest ascent (i.e., the best possible improvement). It is more thorough than simple hill climbing, but may still get stuck in local optima.

3. Stochastic Hill Climbing:
   - Instead of evaluating all neighbours, this version randomly selects a neighbour and moves to it if it improves the solution. It introduces some randomness to avoid getting stuck in local optima, but does not guarantee finding the global optimum.

Problems in Hill Climbing Algorithm:

- Local maxima: A local maximum is a peak that is higher than each of its neighbouring states but lower than the global maximum. Hill climbing algorithms that reach the vicinity of a local maximum will be drawn upwards towards the peak, but will then be stuck with nowhere else to go.
- Plateau: A plateau is a flat area of the search space in which a whole set of neighbouring states has the same value. A hill climbing search might be unable to find its way off the plateau.
- Ridges: A ridge is a special form of local maximum. It is an area that is higher than its surrounding areas but itself has a slope, and the top cannot be reached in a single move.
# Water Jug Problem
The Water Jug Problem is a classic problem in Artificial Intelligence that
demonstrates how state space search and problem-solving strategies can be
applied to find a solution. The problem involves two jugs of different capacities
and requires achieving a specific amount of water in one of the jugs by
performing a series of allowed operations.
> You are given two jugs, a 4-litre one and a 3-litre one. Neither has any measuring marker on it. There is a pump that can be used to fill the jugs with water. How can you get exactly 2 litres of water into the 4-litre jug?

The state space for this problem can be represented by ordered pairs of integers (x, y) such that x = 0, 1, 2, 3 or 4 represents the quantity of water in the 4-litre jug, and y = 0, 1, 2 or 3 represents the quantity of water in the 3-litre jug.
1. The start state is (0, 0).
2. The goal state is (2, n) for any value of n.
# Production Rules

1. (x, y) -> (4, y), if x < 4: fill the 4-litre jug.
2. (x, y) -> (x, 3), if y < 3: fill the 3-litre jug.
3. (x, y) -> (x - d, y), if x > 0: pour some water out of the 4-litre jug.
4. (x, y) -> (x, y - d), if y > 0: pour some water out of the 3-litre jug.
5. (x, y) -> (0, y), if x > 0: empty the 4-litre jug on the ground.
6. (x, y) -> (x, 0), if y > 0: empty the 3-litre jug on the ground.
7. (x, y) -> (4, y - (4 - x)), if x + y >= 4 and y > 0: pour water from the 3-litre jug into the 4-litre jug until the 4-litre jug is full.
8. (x, y) -> (x - (3 - y), 3), if x + y >= 3 and x > 0: pour water from the 4-litre jug into the 3-litre jug until the 3-litre jug is full.
9. (x, y) -> (x + y, 0), if x + y <= 4 and y > 0: pour all the water from the 3-litre jug into the 4-litre jug.
10. (x, y) -> (0, x + y), if x + y <= 3 and x > 0: pour all the water from the 4-litre jug into the 3-litre jug.
11. (0, 2) -> (2, 0): pour the 2 litres from the 3-litre jug into the 4-litre jug.
12. (2, y) -> (0, y): empty the 2 litres in the 4-litre jug on the ground.
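A state-space search can solve this mechanically. The sketch below runs BFS over (x, y) states using the fill, empty, and pour-until-full/empty rules (the generic "pour some water out" rules with an arbitrary d are omitted, since they never help reach this goal):

```python
from collections import deque

# BFS over the water-jug state space. A state (x, y) holds the litres
# in the 4-litre and 3-litre jugs. Only the concrete production rules
# (fill, empty, pour between jugs) are modelled here.

def successors(state):
    x, y = state
    nxt = {
        (4, y),                                  # fill the 4-litre jug
        (x, 3),                                  # fill the 3-litre jug
        (0, y),                                  # empty the 4-litre jug
        (x, 0),                                  # empty the 3-litre jug
        (x + min(y, 4 - x), y - min(y, 4 - x)),  # pour 3L jug into 4L jug
        (x - min(x, 3 - y), y + min(x, 3 - y)),  # pour 4L jug into 3L jug
    }
    nxt.discard(state)                           # drop no-op transitions
    return nxt

def solve(start=(0, 0), target=2):
    """Return a shortest state sequence leaving `target` litres in the 4L jug."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1][0] == target:
            return path
        for state in successors(path[-1]):
            if state not in visited:
                visited.add(state)
                frontier.append(path + [state])
    return None

print(solve())   # a shortest sequence of states from (0, 0) to x = 2
```

Because BFS is used, the returned sequence is guaranteed to use the fewest pouring operations possible.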
# Constraint Satisfaction Problem
It is a mathematical problem defined by a set of variables, their possible values
(domains), and
constraints that restrict the values the variables can take. The goal is to assign
values to the variables in such a way that all the constraints are satisfied.
Constraint Satisfaction Procedure to Solve Cryptarithmetic Problems
Cryptarithmetic problems are puzzles where arithmetic equations are given in
an encrypted form, and the task is to find the unique digits for each letter so
the equation holds true.

(1) CROSS + ROADS = DANGER

(2) SEND + MORE = MONEY

Solving SEND + MORE = MONEY:


1. Initial guess: M = 1, because the sum of two single digits can generate at most a carry of 1.

2. If M = 1, then S should be either 8 or 9, because S + M (plus a possible carry digit) must give a two-digit number.

3. When M = 1 and S = 8 or 9, adding S + M gives a two-digit number, which can be either 10 or 11. This means the value of O will be either 0 or 1, depending on the carry. Since 1 is already assigned to M, it cannot be used for any other letter. Therefore we conclude that O = 0, so S + M (plus the carry) = 10, with S = 8 or 9 depending on the carry.

4. Carrying the calculation through the remaining columns gives:

M = 1, S = 9, O = 0, E = 5, N = 6, R = 8, D = 7, Y = 2
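The hand derivation above can be cross-checked by brute force: try every assignment of distinct digits to the eight letters and keep the one that satisfies the equation and the leading-digit constraints:

```python
from itertools import permutations

# Brute-force check of SEND + MORE = MONEY: assign distinct digits to
# the 8 letters S, E, N, D, M, O, R, Y so the sum holds, with S and M
# non-zero (leading digits). Exhaustive, so it confirms uniqueness.

def solve_send_more_money():
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:
            continue                       # leading digits cannot be 0
        send = 1000 * s + 100 * e + 10 * n + d
        more = 1000 * m + 100 * o + 10 * r + e
        money = 10000 * m + 1000 * o + 100 * n + 10 * e + y
        if send + more == money:
            return {"S": s, "E": e, "N": n, "D": d,
                    "M": m, "O": o, "R": r, "Y": y}
    return None

print(solve_send_more_money())   # 9567 + 1085 = 10652
```

This blind enumeration examines up to 1,814,400 assignments, which is exactly why the constraint-propagation reasoning above (fixing M, O, and S first) is so much more efficient.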
# Mini-Max Algorithm

1. The Mini-max algorithm is a recursive or backtracking algorithm which is used in decision-making and game theory.

2. The Mini-max algorithm uses recursion to search through the game tree.

3. The Mini-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, Tic-Tac-Toe, Go, and various two-player games. This algorithm computes the minimax decision for the current state.

4. In this algorithm, two players play the game; one is called MAX, and the other is called MIN.

5. Each player plays so that the opponent gets the minimum benefit while they themselves get the maximum benefit.

6. Both players of the game are opponents of each other, where MAX will select the maximized value, and MIN will select the minimized value.

7. The minimax algorithm performs a depth-first search for the exploration of the complete game tree.

8. The minimax algorithm proceeds all the way down to the terminal nodes of the tree, and then backs the values up to the root as the recursion unwinds.
# Alpha-beta pruning:

1. Alpha-beta pruning is a modified version of the minimax algorithm. It is an optimization technique for the minimax algorithm.

2. In the minimax search algorithm, the number of nodes (game states) that it has to examine is exponential in the depth of the tree. We cannot eliminate the exponent completely, but we can roughly cut it in half.

3. There is a technique by which, without checking each node of the game tree, we can compute the correct minimax decision; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning.

4. Alpha-beta pruning can be applied at any depth of the tree, and sometimes it prunes not only the tree's leaves but entire sub-trees.

5. The two parameters can be defined as:

   - Alpha: the best (highest-value) choice we have found so far at any point along the path of the Maximizer. The initial value of alpha is -∞.
   - Beta: the best (lowest-value) choice we have found so far at any point along the path of the Minimizer. The initial value of beta is +∞.
