AI Unit 2 Notes
2. Informed search:
Informed search strategies use extra information (called a heuristic) to guess which paths are more promising. This allows the agent to focus on better options and solve problems faster.
DFS (Depth-First Search) is a search strategy that explores as deep as possible along one path before backtracking and trying another path. It uses a stack to keep track of the nodes that still need to be explored. DFS goes deep into a branch of the tree (or graph) until it hits a dead end, then backtracks and explores other paths.
How DFS Works:
o DFS starts at the initial node and explores as far as possible along one path.
o If it reaches a node with no unvisited neighbors (a dead end), it backtracks to the previous node and explores the next possible path.
o This continues until the goal is found or all paths are explored.
Advantages of DFS:
Uses less memory than breadth-first search, since only the nodes on the current path (and their unexplored siblings) are stored.
Good for problems whose solutions lie deep in the search tree.
Disadvantages of DFS:
May not find the shortest path to the goal.
Can get stuck exploring very deep (or infinite) branches.
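To make this concrete, here is a minimal Python sketch of DFS (a rough illustration only; the graph, node names, start, and goal below are made-up examples, not from the notes):

```python
# Minimal DFS sketch on a graph stored as an adjacency list.
def dfs(graph, start, goal):
    """Return a path from start to goal found by DFS, or None."""
    stack = [(start, [start])]       # each entry: (current node, path taken so far)
    visited = set()
    while stack:
        node, path = stack.pop()     # pop the most recently added node (go deep first)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append((neighbour, path + [neighbour]))
    return None                      # all paths explored, goal not found

# Hypothetical example graph:
graph = {'A': ['B', 'C'], 'B': ['D', 'E'], 'C': ['F'], 'E': ['G']}
print(dfs(graph, 'A', 'G'))          # e.g. ['A', 'B', 'E', 'G']
```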
# Heuristic function
A heuristic function is a problem-solving technique used in AI to guide the search for a solution in the most promising direction. It provides an estimate of the "cost" or "distance" from the current state to the goal state. Heuristics are used in informed search strategies (like A* search) to make the search process more efficient by focusing on the most relevant paths.
Definition:
A heuristic is a function h(n) that gives an estimate of the remaining cost from a given node n to the goal. This value helps the AI agent decide which path to take by choosing the nodes with the lowest heuristic value.
The value of the heuristic function is always non-negative. For the heuristic to be admissible it must satisfy
h(n) <= h*(n)
where h(n) is the estimated (heuristic) cost and h*(n) is the actual cost of the cheapest path from n to the goal.
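As an illustration, one commonly used admissible heuristic is the Manhattan distance on a grid where each move (up/down/left/right) costs 1; the goal coordinates below are a made-up example:

```python
# Manhattan-distance heuristic sketch for a grid world.
def h(state, goal=(4, 4)):
    """Estimated remaining cost from state to goal; it never overestimates."""
    x, y = state
    gx, gy = goal
    return abs(gx - x) + abs(gy - y)

print(h((1, 2)))   # 3 + 2 = 5, which is <= the true cost h*(n) of reaching (4, 4)
```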
# A* algorithm
A* search is the most commonly known form of best-first search. It uses the heuristic function h(n) and the cost to reach node n from the start state, g(n). It combines the features of UCS and greedy best-first search, which is why it solves the problem efficiently. The A* search algorithm finds the shortest path through the search space using the heuristic function. This search algorithm expands fewer nodes of the search tree and provides an optimal result faster. A* is similar to UCS except that it uses g(n)+h(n) instead of g(n). In the A* search algorithm we use the search heuristic as well as the cost to reach the node, so we combine both costs as follows; this sum is called the fitness number:
f(n)=g(n)+h(n)
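A minimal Python sketch of A* under these definitions, assuming a small hand-made graph of (neighbour, step cost) pairs and a table of heuristic values h (all names, costs, and h values below are illustrative, not from the notes):

```python
import heapq

def a_star(graph, h, start, goal):
    """Return (path, cost) of the cheapest path found, or None."""
    frontier = [(h[start], 0, start, [start])]      # entries: (f = g + h, g, node, path)
    best_g = {start: 0}                             # cheapest known g(n) per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)  # expand the node with lowest f(n)
        if node == goal:
            return path, g
        for neighbour, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(neighbour, float('inf')):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None

# Hypothetical graph and heuristic values:
graph = {'S': [('A', 1), ('B', 4)], 'A': [('B', 2), ('G', 5)], 'B': [('G', 1)]}
h = {'S': 4, 'A': 3, 'B': 1, 'G': 0}
print(a_star(graph, h, 'S', 'G'))   # (['S', 'A', 'B', 'G'], 4)
```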
# Hill climbing
5. Termination:
o The algorithm stops when no better solution can be found. At this point, it has reached the best solution it can find in that area (a local optimum).
Algorithm:
1. Evaluate the initial state; if it is the goal state then return success and stop.
2. Loop until a solution is found or there is no new operator left to apply.
3. Select and apply an operator to the current state.
4. Check the new state:
If it is the goal state then return success and quit.
Else if it is better than the current state, then make the new state the current state.
Else if it is not better than the current state, then return to step 2.
5. Exit
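A minimal sketch of this hill-climbing loop on a one-dimensional objective (the objective function, step size, and starting point are made-up examples):

```python
def objective(x):
    return -(x - 3) ** 2                  # a single peak at x = 3

def hill_climb(start, step=0.1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        # "Operators" here are just small moves left or right.
        neighbours = [current + step, current - step]
        best = max(neighbours, key=objective)
        if objective(best) <= objective(current):
            return current                # no better neighbour: local optimum reached
        current = best                    # move to the better neighbour
    return current

print(round(hill_climb(start=0.0), 2))    # climbs towards the peak at x = 3
```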
# Types:
There are different variants of Hill Climbing (such as simple hill climbing, steepest-ascent hill climbing, and stochastic hill climbing), and each has its own approach.
# Water jug problem (example production rules):
(x, y) -> (x+y, 0) if x+y <= 4 and y > 0 : Pour all the water from the 3-litre jug into the 4-litre jug.
(x, y) -> (0, x+y) if x+y <= 3 and x > 0 : Pour all the water from the 4-litre jug into the 3-litre jug.
(0, 2) -> (2, 0) : Pour the 2 litres from the 3-litre jug into the 4-litre jug.
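For illustration, the sketch below searches the water-jug state space with the full standard rule set (fill, empty, pour), of which the rules listed above are a subset; the assumed goal is 2 litres in the 4-litre jug, with the state written as (x, y) = (water in 4-litre jug, water in 3-litre jug):

```python
from collections import deque

def successors(state):
    x, y = state
    nxt = set()
    nxt.add((4, y))                       # fill the 4-litre jug
    nxt.add((x, 3))                       # fill the 3-litre jug
    nxt.add((0, y))                       # empty the 4-litre jug
    nxt.add((x, 0))                       # empty the 3-litre jug
    pour = min(y, 4 - x)
    nxt.add((x + pour, y - pour))         # pour the 3-litre jug into the 4-litre jug
    pour = min(x, 3 - y)
    nxt.add((x - pour, y + pour))         # pour the 4-litre jug into the 3-litre jug
    return nxt

def solve(start=(0, 0), goal_x=2):
    frontier = deque([(start, [start])])  # breadth-first search over states
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state[0] == goal_x:
            return path
        for s in successors(state):
            if s not in visited:
                visited.add(s)
                frontier.append((s, path + [s]))
    return None

print(solve())   # prints a shortest sequence of states ending with 2 litres in the 4-litre jug
```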
# Mini-max algorithm:
3. The mini-max algorithm is mostly used for game playing in AI, such as Chess, Checkers, Tic-Tac-Toe, Go, and various other two-player games. This algorithm computes the minimax decision for the current state.
4. In this algorithm, two players play the game; one is called MAX, and the
other is called MIN.
5. The two players compete: each tries to get the maximum benefit for itself while leaving the opponent with the minimum benefit.
6. Both players of the game are opponents of each other, where MAX will
select the maximized value, and MIN will select the minimized value.
8. The minimax algorithm proceeds all the way down to the terminal nodes of the tree, then backs the values up to the root as the recursion unwinds (backtracking).
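A minimal sketch of the minimax computation on a small hand-made game tree, where internal nodes are lists of children and leaves are terminal utility values (the tree and values are made up):

```python
def minimax(node, maximizing):
    if not isinstance(node, list):        # terminal node: return its utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# MAX chooses at the root, MIN at the next level down.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))     # MIN returns 3, 2, 0; MAX picks 3
```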
# Alpha-beta pruning:
3. There is a technique by which we can compute the correct minimax decision without checking every node of the game tree; this technique is called pruning. It involves two threshold parameters, alpha and beta, for future expansion, so it is called alpha-beta pruning.
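A minimal sketch of alpha-beta pruning added to the same minimax idea (the game tree and values are made up); branches are cut off as soon as alpha >= beta:

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if not isinstance(node, list):             # terminal node: return its utility
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:                  # MIN will never allow this branch
                break                          # prune the remaining children
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:                  # MAX already has something better
                break                          # prune the remaining children
        return value

tree = [[3, 5], [2, 9], [0, 7]]
print(alphabeta(tree, maximizing=True))        # 3, the same answer as plain minimax
```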