Unit 2

The document covers various searching techniques in Artificial Intelligence, including uninformed and informed search methods such as breadth-first search, depth-first search, and the A* algorithm. It discusses the components of search problems, local search algorithms such as hill climbing and simulated annealing, and genetic algorithms. The content aims to provide an understanding of how these techniques can be applied to solve real-world problems effectively.


ARTIFICIAL INTELLIGENCE

Subject Code: 18CSC305J


UNIT -2
Presented by:
Dr. Pritee Parwekar
Associate Professor
Computer Science and Engineering Department
TOPICS COVERED
Searching techniques
Uninformed search
General search algorithm
Uninformed search methods - Breadth-first search
Uninformed search methods - Depth-first search
Uninformed search methods - Depth-limited search
Uninformed search methods - Iterative deepening search
Bi-directional search
Informed search - Generate and test, Best-first search
Informed search - A* algorithm
AO* algorithm
TOPICS COVERED
Task environment and its properties
Local search Algorithms-Hill Climbing, Simulated Annealing
Local Beam Search
Genetic Algorithms
Adversarial search Methods- Game playing-Important concepts
Game playing and knowledge structure
Game as a search problem-Mini max approach
Mini max Algorithm
Alpha beta pruning
Game theory problems
CLO2: Apply appropriate searching techniques to solve a real-world problem
Search
Searching is the universal technique of problem solving in Artificial
Intelligence.
A search problem consists of:
• A State Space: the set of all possible states you can be in.
• A Start State: the state from which the search begins.
• A Goal Test: a function that examines the current state and returns whether
or not it is the goal state.
• The Solution to a search problem is a sequence of actions, called
the plan, that transforms the start state into the goal state.
Search Problem Components
Search tree: A tree representation of the search problem. The root node of the
search tree corresponds to the initial state.
Actions: A description of all the actions available to the agent.
Transition model: A description of what each action does; it can be represented as a
transition model.
Path Cost: A function which assigns a numeric cost to each path.
Solution: An action sequence which leads from the start node to the goal
node.
Optimal Solution: A solution with the lowest path cost among all solutions.
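The components above can be sketched as a small Python skeleton. This is an illustrative sketch, not from the slides; the class and method names are my own.

```python
class SearchProblem:
    """Skeleton mirroring the components above: initial state, actions,
    transition model, goal test, and path cost."""

    def __init__(self, initial, goal):
        self.initial = initial            # start state
        self.goal = goal

    def actions(self, state):             # legal actions in a state
        raise NotImplementedError

    def result(self, state, action):      # transition model
        raise NotImplementedError

    def goal_test(self, state):           # is this the goal state?
        return state == self.goal

    def step_cost(self, state, action):   # contributes to the path cost
        return 1
```

A concrete problem subclasses this and fills in `actions` and `result`; the search algorithms in the following slides then work against this interface.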
Examples of Search Algorithm
TSP
8 Puzzle problem
Tic Tac Toe
N-queen problem
Tower of Hanoi
Water Jug Problem
Blocks World
Vacuum Cleaner
Search Technique
Search techniques are problem-solving approaches in Artificial Intelligence
(AI). These techniques are used by problem-solving agents to find the best solution
to a problem.
Types of search techniques
▪ Uninformed search
• Breadth-first search
• Depth-first search
• Depth-limited search
• Iterative deepening search
• Bi-directional search
▪ Informed search
• A* algorithm
• AO* algorithm
Uninformed Search

Uninformed search does not use any domain knowledge, such as the closeness or
location of the goal.
It operates in a brute-force way, as it only includes information about how to
traverse the tree and how to identify leaf and goal nodes.
Uninformed search explores the search tree without any information about the
search space, such as the initial state, operators, and goal test,
so it is also called blind search.
It examines each node of the tree until it reaches the goal node.
General Search Algorithm
Breadth-first search

Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
The breadth-first search algorithm is an example of a general graph-search
algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
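A minimal BFS sketch using a FIFO queue of paths. The adjacency-list graph format is an assumption for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search over an adjacency-list graph.
    Returns the shallowest path from start to goal, or None."""
    frontier = deque([[start]])        # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()      # oldest path first -> level by level
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None
```

Because the queue is first-in, first-out, all nodes at depth d are expanded before any node at depth d+1, which is exactly the level-by-level behavior described above.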
BFS - Example
Uniform-cost search
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge.
The primary goal of uniform-cost search is to find a path to the goal node with the lowest
cumulative cost. Uniform-cost search expands nodes according to their path costs from the root
node.
It can be used to solve any graph/tree where the optimal cost is in demand.
The uniform-cost search algorithm is implemented using a priority queue.
It gives maximum priority to the lowest cumulative cost. Uniform-cost search is equivalent to the BFS
algorithm if the path cost of all edges is the same.
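A sketch of uniform-cost search with a priority queue keyed on cumulative path cost. The weighted adjacency-list format is an assumption for illustration:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, edge_cost).
    Returns (total_cost, path) for the cheapest path, or None."""
    frontier = [(0, [start])]          # priority queue ordered by path cost
    best = {start: 0}                  # cheapest known cost to each node
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        for neighbor, step in graph.get(node, []):
            new_cost = cost + step
            if new_cost < best.get(neighbor, float('inf')):
                best[neighbor] = new_cost
                heapq.heappush(frontier, (new_cost, path + [neighbor]))
    return None
```

If every edge cost is 1, the priority queue pops nodes in exactly breadth-first order, which illustrates the equivalence with BFS noted above.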
Uniform-cost search - Example
Depth-first search
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph
data structures.
The algorithm starts at the root node (selecting some arbitrary node as the root node
in the case of a graph) and explores as far as possible along each branch before
backtracking.
The depth-first search (DFS) algorithm starts with the initial node of the graph G and
goes deeper and deeper until it reaches the goal node or a node which has no
children.
The algorithm then backtracks from the dead end towards the most recent node that
is not yet completely explored.
The data structure used in DFS is a stack. In DFS, the edges that lead
to an unvisited node are called discovery edges, while the edges that lead to an
already visited node are called block edges.
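A DFS sketch with an explicit stack, mirroring the description above. The adjacency-list format is an assumption for illustration:

```python
def dfs(graph, start, goal):
    """Iterative depth-first search using an explicit LIFO stack.
    Returns some path from start to goal, or None."""
    stack = [[start]]                  # LIFO stack of paths
    visited = set()
    while stack:
        path = stack.pop()             # newest path first -> go deep
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # push in reverse so the leftmost child is explored first
        for neighbor in reversed(graph.get(node, [])):
            if neighbor not in visited:
                stack.append(path + [neighbor])
    return None
```

Popping the most recently pushed path is what makes the search dive down one branch before backtracking; swapping the stack for a FIFO queue would turn this back into BFS.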
DFS - Example
Depth Limited Search
The depth-limited search algorithm is similar to depth-first search, but with a
predetermined depth limit.
Depth-limited search can solve the drawback of the infinite path in depth-first search.
In this algorithm, a node at the depth limit is treated as if it has no further successors.
DLS - Example
Depth limited Search

► You can set a limit on the depth.
► The search terminates when the goal state is found.
► It also terminates when no solution is found within the limit.
► Advantage: it solves the infinite-deep-path problem of DFS.
► Another advantage: it is memory efficient.
► Disadvantage: it cannot complete the search if the goal state lies below the
depth limit.
► Another disadvantage: it will not always give the best solution.
► Time complexity is O(b^l), where b is the branching factor and l is the depth limit.
► Space complexity is O(b×l).
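A recursive sketch of depth-limited search: a node at the limit is simply treated as having no successors. The adjacency-list format is an assumption for illustration:

```python
def depth_limited_search(graph, node, goal, limit):
    """Recursive DFS that treats nodes at the depth limit as leaves.
    Returns a path to goal, or None if none exists within the limit."""
    if node == goal:
        return [node]
    if limit == 0:                     # depth limit reached: no successors
        return None
    for neighbor in graph.get(node, []):
        result = depth_limited_search(graph, neighbor, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None
```

The second test below shows the disadvantage named above: with the goal one level below the limit, the search fails even though a solution exists.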
Iterative Deepening Depth First
Search(IDDFS)
Depth-first iterative deepening is a search algorithm which suffers neither the
drawbacks of breadth-first nor those of depth-first search on trees.
IDDFS combines depth-first search's space efficiency with breadth-first search's fast
search (for nodes closer to the root).
Iterative deepening depth-first search (IDDFS) is a hybrid of BFS and DFS. In IDDFS, we
perform DFS up to a certain "limited depth," and keep increasing this limit
after every iteration.
The idea is to perform depth-limited DFS repeatedly, with an increasing depth limit, until
a solution is found.
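A self-contained IDDFS sketch: depth-limited DFS run repeatedly with limit 0, 1, 2, ... (the `max_depth` cap is an illustrative assumption to keep the loop bounded):

```python
def iddfs(graph, start, goal, max_depth=20):
    """Iterative deepening: run depth-limited DFS with limit 0, 1, 2, ...
    until the goal is found or max_depth is exhausted."""
    def dls(node, limit):
        if node == goal:
            return [node]
        if limit == 0:                 # treat nodes at the limit as leaves
            return None
        for neighbor in graph.get(node, []):
            result = dls(neighbor, limit - 1)
            if result is not None:
                return [node] + result
        return None

    for limit in range(max_depth + 1):
        path = dls(start, limit)
        if path is not None:
            return path                # found at the shallowest possible depth
    return None
```

Re-expanding the shallow levels on every iteration looks wasteful, but on trees with branching factor b the deepest level dominates the cost, so IDDFS keeps BFS's shallow-first guarantee at DFS's O(b×d) memory.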
IDDFS- Example
Informed Search
Informed search algorithms have information about the goal state, which helps in more
efficient searching.
This information is obtained from a function that estimates how close a state is to the goal state.
Informed search algorithms use the idea of a heuristic, so they are also called heuristic search.
Heuristic Search
Add domain-specific information to select the best path to continue searching along.
Define a heuristic function, h(n), that estimates the "goodness" of a node n.
Specifically, h(n) = estimated cost (or distance) of the minimal-cost path from n to a goal state.
The term heuristic means "serving to aid discovery". A heuristic is an estimate, based on
domain-specific information that is computable from the current state description, of how
close we are to a goal.
Heuristic function
A heuristic is a function used in informed search to find the most promising path.
It takes the current state of the agent as its input and produces an estimate of how close
the agent is to the goal.
The heuristic method might not always give the best solution, but it is guaranteed to
find a good solution in reasonable time.
A heuristic function estimates how close a state is to the goal. It is represented by h(n), and it
estimates the cost of an optimal path between a pair of states.
The value of the heuristic function is always positive.
Generate and Test
The generate-and-test search algorithm is a very simple algorithm that is guaranteed to find
a solution if done systematically and a solution exists.
The generate-and-test algorithm is a depth-first search procedure, because complete possible
solutions are generated before being tested.
If the same states are likely to appear often in the tree,
it can be implemented on a search graph rather than a tree.
Generate and Test
TSP - Example
Best First Search
Best-first search is a traversal technique that decides which node to visit next by
checking which node is the most promising one, and then explores it.
Best-first search is an instance of the graph search algorithm in which a node is selected for
expansion based on an evaluation function f(n).
Best-first search can be implemented using a priority queue, a data structure that maintains
the fringe in ascending order of f values.
Best First Search - Example
A* Search
It is an informed search algorithm, as it uses information about path cost and also
uses heuristics to find the solution.
It finds the shortest path between nodes of a graph.
The A* algorithm is a searching algorithm that searches for the shortest path between
the initial and the final state.
It is an advanced best-first search that explores shorter paths first rather than longer paths.
A* is optimal as well as a complete algorithm.
A* Search
It calculates the cost f(n) (n being a neighboring node) of travelling to each of the
neighboring nodes, and then enters the node with the lowest value of f(n).
These values are calculated with the following formula:
f(n) = g(n) + h(n)
g(n) being the cost of the shortest path from the start node to node n, and h(n) being a
heuristic estimate of the cost from node n to the goal.
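A sketch of A* ordering the frontier by f(n) = g(n) + h(n). The weighted adjacency list and the heuristic-as-dict are assumptions for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search. graph: node -> [(neighbor, edge_cost)]; h: dict of
    heuristic estimates. Expands the node with the lowest f = g + h."""
    frontier = [(h[start], 0, [start])]      # (f, g, path)
    best_g = {start: 0}                      # cheapest known g per node
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbor], new_g,
                                path + [neighbor]))
    return None
```

In the test below the greedy-looking route through B (h = 1) is worse overall; adding g(n) steers A* onto the genuinely cheaper path through A.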
A* Search - Example
A* Search - Example
Local search Algorithms
Local search is a heuristic method for solving computationally hard optimization problems.
Local search can be used on problems that can be formulated as finding a solution maximizing a criterion
among a number of candidate solutions.
Local search algorithms move from solution to solution in the space of candidate solutions (the search
space) by applying local changes, until a solution deemed optimal is found or a time bound is elapsed.
Advantages:
• Very little memory - usually a constant amount
• Can often find reasonable solutions in infinite (continuous) state spaces
Nutshell:
• All that matters is the solution state
• Don't care about solution path
Examples
• Hill Climbing
• Simulated Annealing (suited for either local or global search)
• Genetic Algorithm
▪ Non-traditional search and optimization technique
▪ Conceived by Prof. Holland of the University of Michigan
Hill Climbing
Hill climbing is a heuristic search used for mathematical optimization problems.
It is a variant of the generate-and-test algorithm:
• Generate possible solutions.
• Test to see if this is the expected solution.
• If the solution has been found quit else go to step 1.
Hill Climbing
• It examines the neighboring nodes one by one and selects as the next node the first
neighboring node that improves the current cost.
► Local search, greedy, and no backtracking
Hill Climbing
Hill Climbing
Step 1: Evaluate the initial state. If it is a goal state, then stop and return success.
Otherwise, make the initial state the current state.
Step 2: Loop until a solution state is found or there are no new operators left
that can be applied to the current state.
a) Select an operator that has not yet been applied to the current state and apply it to produce a
new state.
b) Evaluate the new state:
i. If the new state is a goal state, then stop and return success.
ii. If it is better than the current state, then make it the current state and proceed further.
iii. If it is not better than the current state, then continue in the loop until a solution is found.
Step 3: Exit.
Hill Climbing-Simpler Version

The pseudocode, written out as a runnable Python sketch (the value and
successor functions are supplied by the problem):

def hill_climbing(initial, value, successors):
    # Steepest-ascent hill climbing: move to the best-valued neighbor
    # until no neighbor improves on the current state.
    current = initial
    while True:
        candidates = successors(current)
        if not candidates:
            return current
        neighbor = max(candidates, key=value)
        if value(neighbor) <= value(current):
            return current            # local maximum reached
        current = neighbor
Different regions in the State Space
Diagram
Local maximum: A state which is better than its neighboring states, although there
exists a state which is better still (the global maximum). This state is better because the
value of the objective function is higher there than at its neighbors.
Global maximum: The best possible state in the state space diagram, because at
this state the objective function has its highest value.
Plateau/flat local maximum: A flat region of the state space where neighboring states
have the same value.
Ridge: A region which is higher than its neighbors but itself has a slope. It is a special
kind of local maximum.
Current state: The region of the state space diagram where we are currently present during
the search.
Shoulder: A plateau that has an uphill edge.
Advantages of Hill Climbing algorithm:

► Hill Climbing is a simple and intuitive algorithm that is easy to understand and
implement.
► It can be used in a wide variety of optimization problems, including those with a
large search space and complex constraints.
► Hill Climbing is often very efficient in finding local optima, making it a good
choice for problems where a good solution is needed quickly.
► The algorithm can be easily modified and extended to include additional heuristics
or constraints.
Disadvantages of Hill Climbing algorithm:

► Hill Climbing can get stuck in local optima, meaning that it may not find the
global optimum of the problem.
► The algorithm is sensitive to the choice of initial solution, and a poor initial
solution may result in a poor final solution.
► Hill Climbing does not explore the search space very thoroughly, which can limit
its ability to find better solutions.
► It may be less effective than other optimization algorithms, such as genetic
algorithms or simulated annealing, for certain types of problems.
Simulated Annealing
Simulated annealing comes from the concept of annealing in physics. That technique is
used to increase the size of crystals and to reduce their defects, and is
done by heating and then slowly cooling the crystals (sudden cooling, called quenching,
introduces defects).
Advantages
• can deal with arbitrary systems and cost functions
• is relatively easy to code, even for complex problems
• generally, gives a "good" solution

Simulated annealing: This is a type of hill climbing algorithm that allows the
algorithm to occasionally accept worse moves in order to avoid getting stuck in local
optima. This can be more efficient than randomized hill climbing, but it can also be
more time-consuming
Simulated Annealing
Algorithm
First generate a random solution
Calculate its cost using some cost function
Generate a random neighboring solution and calculate its cost
Compare the costs of the old and the new solution
If C new < C old, accept the new solution; otherwise, accept it with a probability that
depends on the current temperature (so worse moves are sometimes allowed)
Repeat steps 3 to 5 until you reach an acceptable optimized solution of the given
problem
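The steps above can be sketched as follows. The geometric cooling schedule and the parameter defaults are illustrative assumptions:

```python
import math
import random

def simulated_annealing(initial, cost, neighbor,
                        t0=10.0, cooling=0.95, steps=500):
    """Always accept an improving move; accept a worsening move with
    probability exp(-delta / T), then lower T geometrically."""
    current = initial
    current_cost = cost(initial)
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)           # random neighboring solution
        delta = cost(candidate) - current_cost  # positive delta = worse
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
        t *= cooling                            # cooling schedule
    return current
```

Early on, a high temperature T makes exp(-delta/T) close to 1, so the search wanders freely past local optima; as T falls, the acceptance probability for worse moves shrinks toward 0 and the search settles into hill-climbing behavior.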
Simulated Annealing

Simulated annealing explores the search space and avoids local optima by
employing a probabilistic method that accepts a worse solution with a given probability.
The initial temperature, cooling schedule, and acceptance probability function are a
few of its tuning parameters. Hill climbing is faster, but simulated annealing is
better at locating the global optimum, particularly for complex problems with numerous
local optima.
Best First Search Algorithm

► It is an informed, greedy, heuristic search algorithm.

► In this algorithm we consider heuristic values only.
► The algorithm uses a priority queue.
► Let's assume OPEN is the name of the priority queue; you can give it any other name.
► It gives a good solution but does not guarantee the optimal solution.
► When you consider both the heuristic and the path cost and find the best solution,
the search is optimal; this is what A* does.
Best First Search Algorithm

Algorithm, written as a runnable Python sketch of the pseudocode (the
adjacency-list graph and heuristic dict are illustrative assumptions):

import heapq

def best_first_search(graph, h, start, goal):
    # OPEN is a priority queue ordered by heuristic value h(n).
    OPEN = [(h[start], [start])]
    visited = set()
    while OPEN:                        # if OPEN is empty, return failure
        _, path = heapq.heappop(OPEN)  # Node <- Remove-First(OPEN)
        node = path[-1]
        if node == goal:
            return path                # path from the initial state to Node
        if node in visited:
            continue
        visited.add(node)
        for successor in graph.get(node, []):
            heapq.heappush(OPEN, (h[successor], path + [successor]))
    return None
Local Beam Search
Genetic Algorithms
Genetic Algorithms(GAs) are adaptive heuristic search algorithms that belong to the larger part
of evolutionary algorithms. Genetic algorithms are based on the ideas of natural selection and
genetics.
They are commonly used to generate high-quality solutions for optimization problems and
search problems.
In simple words, they simulate "survival of the fittest" among individuals of consecutive
generations to solve a problem. Each generation consists of a population of individuals, and
each individual represents a point in the search space and a possible solution. Each individual is
represented as a string of characters/integers/floats/bits. This string is analogous to a
chromosome.
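A minimal GA sketch on the classic OneMax toy problem (evolve a bit string toward all ones). The truncation selection, single-point crossover, and mutation rate are illustrative choices, not prescribed by the slides:

```python
import random

def one_max_ga(length=12, pop_size=20, generations=50, mutation=0.05):
    """Tiny genetic algorithm: fitness = number of 1 bits (OneMax)."""
    def fitness(individual):
        return sum(individual)

    # random initial population of bit strings (chromosomes)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]     # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # single-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation)  # bit-flip mutation
                     for bit in child]
            children.append(child)
        population = children
    return max(population, key=fitness)
```

Selection, crossover, and mutation are the three GA operators at work here: selection keeps the fitter half of each generation, crossover recombines two parent chromosomes, and mutation occasionally flips a bit to maintain diversity.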
Adversarial search Methods-Game
playing-Important concepts
Adversarial search: search based on game theory - agents - competitive environment.
According to game theory, a game is played between two players. To complete the
game, one has to win and the other automatically loses.
Such conflicting goals lead to adversarial search.
Game-playing techniques target games that depend on human intelligence and logic,
excluding other factors such as luck.
• Tic-Tac-Toe, Checkers, Chess - only the mind works, no luck is involved.
Adversarial search Methods-Game
playing-Important concepts
Techniques required to get the best optimal solution
(Choose Algorithms for best optimal solution within
limited time)
Pruning: A technique which allows ignoring the
unwanted portions of a search tree which make no
difference in its final result.
Heuristic Evaluation Function: It allows approximating the
cost value at each level of the search
tree, before reaching the goal node.
Game playing and knowledge structure-
Elements of Game Playing search
To play a game, we use a game tree to know all the possible choices and to pick the best one
out. There are following elements of a game-playing:
S0: It is the initial state from where a game begins.
PLAYER (s): It defines which player has the current turn to make a move in the state.
ACTIONS (s): It defines the set of legal moves to be used in a state.
RESULT (s, a): It is a transition model which defines the result of a move.
TERMINAL-TEST (s): It defines that the game has ended and returns true.
UTILITY (s,p): It defines the final value with which the game has ended. This function is
also known as Objective function or Payoff function. The price which the winner will get
i.e.
(-1): If the PLAYER loses.
(+1): If the PLAYER wins.
(0): If there is a draw between the PLAYERS.
Game playing and knowledge structure-
Elements of Game Playing search Example
• For example, in chess, tic-tac-toe, we have two or three possible outcomes.
Either to win, to lose, or to draw the match with values +1,-1 or 0.
• Game Tree for Tic-Tac-Toe
– Node: Game states, Edges: Moves taken by players
Game as a search problem
Types of algorithms in Adversarial search
In a normal search, we follow a sequence of actions to reach the goal or to finish the game
optimally. But in an adversarial search, the result depends on the players, who together decide
the outcome of the game. The solution for the goal state will be an optimal
solution, because each player tries to win the game with the shortest path and under limited
time.
• Minmax Algorithm
• Alpha-beta Pruning
Game as a search problem-Minimax
approach
Minimax/Minmax/MM/Saddle Point:
Decision strategy-Game Theory
▪ Minimize losing chances- Maximize winning chances
▪ Two-player game strategy
▪ Players will be two namely:
▪ MIN: Decrease the chances of MAX to win the game.
▪ MAX: Increases his chances of winning the game.
▪ Result of the game/Utility value
• Heuristic function values propagate from the leaf nodes up to the root node

• Backtracking technique - best choice


Minimax Algorithm
Follows DFS
• It follows one path and cannot change it in the middle; i.e., a move once made cannot be
altered. That is why this is DFS and not BFS.
Algorithm
• Keep generating the game tree/search tree up to a depth limit d.
• Compute the values of the leaf nodes using a heuristic function.
• Propagate the values from the leaf nodes up to the current position following the minimax strategy.
• Make the best move from the choices.
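The algorithm steps above can be sketched for an explicit game tree. The dict-based tree encoding and the `children`/`evaluate` callbacks are illustrative assumptions:

```python
def minimax(node, depth, maximizing, children, evaluate):
    """Minimax: MAX picks the highest backed-up value, MIN the lowest.
    Leaf values come from the heuristic evaluation function."""
    succ = children(node)
    if depth == 0 or not succ:         # depth limit d or terminal node
        return evaluate(node)
    values = [minimax(c, depth - 1, not maximizing, children, evaluate)
              for c in succ]
    return max(values) if maximizing else min(values)
```

In the test below, MAX at the root sees MIN children B (backed-up value min(3, 5) = 3) and C (min(2, 9) = 2), and picks the branch worth 3.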
Alpha beta pruning
Advanced version of MINIMAX algorithm

For any optimization algorithm, the performance measure is the first consideration.

Drawback of minimax:

• It explores each node in the tree deeply to find the best path among all the paths.

• This increases time complexity.

Alpha-beta pruning overcomes this drawback by exploring fewer nodes of the search tree.
Alpha beta pruning
It cuts off the search by exploring fewer nodes.
It makes the same moves as the minimax algorithm does, but prunes unwanted branches using
pruning techniques.
Alpha-beta pruning works with two threshold values, α and β:
• α: the best (highest) value found so far for the MAX player. It is the lower bound,
initialized to negative infinity.
• β: the best (lowest) value found so far for the MIN player. It is the upper bound,
initialized to positive infinity.
So, each MAX node has an α-value, which never decreases, and each MIN node has a
β-value, which never increases.
Note: Alpha-beta pruning technique can be applied to trees of any depth, and it is
possible to prune the entire subtrees easily.
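A sketch of minimax with alpha-beta pruning, using the same illustrative tree encoding as the minimax example (an assumption, not from the slides):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning: stop exploring a branch as soon
    as alpha >= beta, since it cannot change the final decision."""
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float('-inf')
        for c in succ:
            value = max(value, alphabeta(c, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)  # best so far for MAX: never decreases
            if alpha >= beta:
                break                  # beta cut-off: MIN will avoid this branch
        return value
    value = float('inf')
    for c in succ:
        value = min(value, alphabeta(c, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)        # best so far for MIN: never increases
        if alpha >= beta:
            break                      # alpha cut-off
    return value
```

On the small test tree this returns the same root value as plain minimax, but after the second MIN node yields 2 against an alpha of 3 it cuts off without examining the remaining leaf.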
Game theory problems
Game theory is basically a branch of mathematics used to model strategic
interaction between different players (agents), all of whom are equally rational, in a
context with predefined rules (of playing or maneuvering) and outcomes.
A GAME can be defined as a set of players, actions, strategies, and a final payoff for
which all the players are competing.
Game theory has now become a describing factor for both machine learning
algorithms and many daily-life situations.
Game theory problems- Types of Games
Zero-Sum and Non-Zero-Sum Games: In non-zero-sum games, there are multiple players and all of
them have the option to gain a benefit from any move by another player. In zero-sum games, however,
if one player earns something, the other players are bound to lose a key payoff.
Simultaneous and Sequential Games: Sequential games are the more popular games, where every player
is aware of the movements of the other players. Simultaneous games are more difficult, as in them the
players are involved in a concurrent game. BOARD GAMES are the perfect example of sequential
games and are also referred to as turn-based or extensive-form games.
Imperfect Information and Perfect Information Games: In a perfect information game, every player is
aware of the movements of the other player and is also aware of the various strategies that the other
player might be applying to win the ultimate payoff. In imperfect information games, however, no
player is aware of what the others are up to. CARDS are an amazing example of imperfect information
games, while CHESS is the perfect example of a perfect information game.
Asymmetric and Symmetric Games: Asymmetric games are those in which each player has a
different and usually conflicting final goal. Symmetric games are those in which all players have the
same ultimate goal but the strategy used by each is completely different.
Co-operative and Non-Co-operative Games: In non-co-operative games, every player plays for himself,
while in co-operative games, players form alliances in order to achieve the final goal.
Thank you
