Unit 2
Uninformed search does not use any domain knowledge, such as the closeness or location of the goal.
It operates in a brute-force way, as it only includes information about how to
traverse the tree and how to identify leaf and goal nodes.
Uninformed search explores the search tree without any information about the
search space beyond the initial state, the operators, and the goal test,
so it is also called blind search.
It examines the nodes of the tree one by one until it reaches the goal node.
General Search Algorithm
Breadth-first search
Breadth-first search is the most common search strategy for traversing a tree or
graph. This algorithm searches breadthwise in a tree or graph, so it is called
breadth-first search.
The BFS algorithm starts searching from the root node of the tree and expands all
successor nodes at the current level before moving to nodes of the next level.
The breadth-first search algorithm is an example of a general-graph search
algorithm.
Breadth-first search is implemented using a FIFO queue data structure.
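As a minimal sketch, BFS with a FIFO queue can look like this (the adjacency list and node names are made up for illustration):

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: expand nodes level by level using a FIFO queue.
    Returns a path from start to goal, or None if no path exists."""
    frontier = deque([[start]])       # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()     # dequeue the oldest path (FIFO order)
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Hypothetical example graph
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F']}
print(bfs(graph, 'A', 'F'))  # finds a shortest path in number of edges
```

Because the queue is FIFO, every node at depth d is dequeued before any node at depth d+1, which is exactly the level-by-level expansion described above.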
BFS - Example
Uniform-Cost Search
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph.
This algorithm comes into play when a different cost is available for each edge.
The primary goal of uniform-cost search is to find a path to the goal node with the lowest
cumulative cost. Uniform-cost search expands nodes according to their path costs from the root
node.
It can be used to solve any graph/tree where the optimal cost is in demand.
A uniform-cost search algorithm is implemented by the priority queue.
It gives maximum priority to the lowest cumulative cost. Uniform cost search is equivalent to BFS
algorithm if the path cost of all edges is the same.
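A sketch of uniform-cost search using a priority queue keyed on cumulative path cost (the weighted graph below is invented for illustration):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the node whose path from the root has the lowest cumulative
    cost, using a priority queue ordered by path cost."""
    frontier = [(0, start, [start])]          # (cost so far, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest path first
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for neighbour, step_cost in graph.get(node, []):
            if neighbour not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, neighbour, path + [neighbour]))
    return None

# Edge costs differ here, so plain BFS would not find the cheapest route.
graph = {'A': [('B', 4), ('C', 1)],
         'C': [('B', 1), ('D', 5)],
         'B': [('D', 1)],
         'D': []}
print(uniform_cost_search(graph, 'A', 'D'))
```

Note that the cheapest route A→C→B→D (cost 3) has more edges than A→B→D (cost 5); if all edge costs were equal, the priority queue would behave exactly like the FIFO queue of BFS, as the text states.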
Uniform-Cost Search - Example
Depth-first search
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph
data structures.
The algorithm starts at the root node (selecting some arbitrary node as the root node
in the case of a graph) and explores as far as possible along each branch before
backtracking.
The depth-first search (DFS) algorithm starts with the initial node of graph G and
goes deeper and deeper until it reaches the goal node or a node with no
children.
The algorithm then backtracks from the dead end towards the most recent node that
is not yet completely explored.
The data structure used in DFS is a stack. In DFS, the edges that lead
to an unvisited node are called discovery edges, while the edges that lead to an
already visited node are called back edges.
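An iterative sketch of DFS with an explicit stack (the example graph is hypothetical):

```python
def dfs(graph, start, goal):
    """Iterative depth-first search using an explicit LIFO stack."""
    stack = [[start]]                 # stack of partial paths
    visited = set()
    while stack:
        path = stack.pop()            # most recently pushed path first (LIFO)
        node = path[-1]
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        # Pushing neighbours means the deepest branch is explored first;
        # popping an older entry later corresponds to backtracking.
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                stack.append(path + [neighbour])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': ['F']}
print(dfs(graph, 'A', 'F'))
```

Swapping the FIFO queue of BFS for a LIFO stack is the only structural change, yet it turns level-by-level expansion into branch-by-branch expansion with backtracking.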
DFS - Example
Depth Limited Search
A depth-limited search algorithm is depth-first search with a predetermined
depth limit.
Depth-limited search overcomes the drawback of infinite paths in depth-first search.
In this algorithm, a node at the depth limit is treated as if it has no further successor nodes.
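A recursive sketch of depth-limited search; when the limit reaches zero, the node is treated as a leaf (the small graph is made up for illustration):

```python
def depth_limited_search(graph, node, goal, limit):
    """DFS that stops expanding at the depth limit: a node at the limit
    is treated as if it had no successors."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                   # cut off: pretend this node is a leaf
    for neighbour in graph.get(node, []):
        result = depth_limited_search(graph, neighbour, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'D': ['E'], 'C': []}
print(depth_limited_search(graph, 'A', 'E', 2))  # E lies at depth 3: cut off
print(depth_limited_search(graph, 'A', 'E', 3))
```

The first call fails even though E is reachable, which illustrates the trade-off: the limit prevents infinite descent but can hide goals deeper than the limit.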
DLS - Example
Hill Climbing
Advantages of Hill Climbing algorithm:
► Hill Climbing is a simple and intuitive algorithm that is easy to understand and
implement.
► It can be used in a wide variety of optimization problems, including those with a
large search space and complex constraints.
► Hill Climbing is often very efficient in finding local optima, making it a good
choice for problems where a good solution is needed quickly.
► The algorithm can be easily modified and extended to include additional heuristics
or constraints.
Disadvantages of Hill Climbing algorithm:
► Hill Climbing can get stuck in local optima, meaning that it may not find the
global optimum of the problem.
► The algorithm is sensitive to the choice of initial solution, and a poor initial
solution may result in a poor final solution.
► Hill Climbing does not explore the search space very thoroughly, which can limit
its ability to find better solutions.
► It may be less effective than other optimization algorithms, such as genetic
algorithms or simulated annealing, for certain types of problems.
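A minimal sketch of steepest-ascent hill climbing: move to the best neighbour until no neighbour improves the objective. The objective function and neighbourhood below are invented for illustration:

```python
def hill_climbing(f, start, neighbours):
    """Greedy local search: repeatedly move to the best neighbour,
    stopping at a local optimum (no neighbour improves f)."""
    current = start
    while True:
        best = max(neighbours(current), key=f, default=current)
        if f(best) <= f(current):     # no improving move left: local optimum
            return current
        current = best

# Maximise f(x) = -(x - 3)^2 over the integers, stepping by +/-1.
f = lambda x: -(x - 3) ** 2
print(hill_climbing(f, 10, lambda x: [x - 1, x + 1]))
```

On this single-peaked objective the search reaches the global optimum at x = 3; with multiple peaks it would stop at whichever local optimum the initial solution leads to, which is exactly the weakness listed above.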
Simulated Annealing
Simulated annealing came from the concept of annealing in physics, a technique
used to increase the size of crystals and to reduce their defects. This is
done by heating a material and then cooling it slowly; sudden cooling (quenching)
would instead lock in defects.
Advantages
• can deal with arbitrary systems and cost functions
• is relatively easy to code, even for complex problems
• generally, gives a "good" solution
Simulated annealing: This is a type of hill climbing algorithm that allows the
search to occasionally accept worse moves in order to avoid getting stuck in local
optima. This can find better solutions than randomized hill climbing, but it can
also be more time-consuming.
Simulated Annealing
Algorithm
1. Generate a random initial solution.
2. Calculate its cost using some cost function.
3. Generate a random neighbour solution and calculate its cost.
4. Compare the costs of the old and new solutions.
5. If C_new < C_old, accept the new solution; otherwise accept the (worse) new
solution only with a probability that depends on the current temperature, and
keep the old solution the rest of the time.
6. Repeat steps 3 to 5 until you reach an acceptable optimized solution of the given
problem.
Simulated Annealing
Simulated annealing explores the search space and avoids local optimum by
employing a probabilistic method to accept a worse solution with a given probability.
The initial temperature, cooling schedule, and acceptance probability function are just
a few of the tuning parameters. Hill Climbing is faster, but Simulated Annealing is
better at locating the global optimum, particularly for complex issues with numerous
local optima.
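A sketch of the scheme just described, with an exponential acceptance probability and a geometric cooling schedule; the cost function, starting point, and all tuning parameters are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, start, neighbour, t0=10.0, cooling=0.95, steps=500):
    """Accept a worse neighbour with probability exp(-delta/T), so the
    search can escape local optima; T shrinks via the cooling schedule."""
    current, t = start, t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves probabilistically.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        t *= cooling                  # geometric cooling schedule
    return current

# Hypothetical cost with a local minimum near x = -2 and a global one at x = 3.
cost = lambda x: min((x + 2) ** 2 + 1, (x - 3) ** 2)
random.seed(0)
result = simulated_annealing(cost, -2.0, lambda x: x + random.uniform(-1, 1))
```

Early on, a high temperature makes `exp(-delta/T)` close to 1, so uphill moves are frequently accepted; as T decays, the algorithm behaves more and more like plain hill climbing, matching the tuning-parameter discussion above.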
Best First Search Algorithm
Algorithm
Let OPEN be a priority queue containing the initial state
LOOP
If OPEN is empty then return failure
Node <- Remove-First(OPEN)
If Node is the goal then return the path from the initial state to Node
else
generate all successors of Node and put the newly generated nodes into OPEN
according to their heuristic values
END LOOP
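The pseudocode above can be sketched in Python with a heap as the OPEN list; the heuristic values and graph are made-up numbers for illustration:

```python
import heapq

def best_first_search(h, successors, start, goal):
    """Greedy best-first search: OPEN is a priority queue ordered by the
    heuristic value h(n); the most promising node is expanded first."""
    open_list = [(h(start), start, [start])]       # OPEN with initial state
    closed = set()
    while open_list:                               # LOOP
        _, node, path = heapq.heappop(open_list)   # Node <- Remove-First(OPEN)
        if node == goal:
            return path                            # path from initial to Node
        if node in closed:
            continue
        closed.add(node)
        for succ in successors(node):              # generate all successors
            if succ not in closed:
                heapq.heappush(open_list, (h(succ), succ, path + [succ]))
    return None                                    # OPEN empty: failure

# Hypothetical heuristic estimates of distance to the goal G.
h_values = {'S': 7, 'A': 4, 'B': 6, 'C': 2, 'G': 0}
graph = {'S': ['A', 'B'], 'A': ['C'], 'B': ['C'], 'C': ['G']}
print(best_first_search(h_values.get, lambda n: graph.get(n, []), 'S', 'G'))
```

The queue is ordered purely by h(n), so the search greedily follows the smallest heuristic estimate at each step; this makes it fast but, unlike uniform-cost search, not guaranteed optimal.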
Local Beam Search
Genetic Algorithms
Genetic algorithms (GAs) are adaptive heuristic search algorithms that belong to the larger family
of evolutionary algorithms. Genetic algorithms are based on the ideas of natural selection and
genetics.
They are commonly used to generate high-quality solutions for optimization problems and
search problems.
In simple words, they simulate “survival of the fittest” among individuals of consecutive
generations to solve a problem. Each generation consists of a population of individuals, and
each individual represents a point in the search space and a possible solution. Each individual is
represented as a string of characters/integers/floats/bits. This string is analogous to a
chromosome.
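A minimal GA sketch on bit-string chromosomes, using truncation selection, single-point crossover, and per-bit mutation; the population size, rates, and "one-max" fitness function are illustrative choices, not a canonical setup:

```python
import random

def genetic_algorithm(fitness, pop_size=20, length=10, generations=60):
    """Minimal GA: select the fitter half as parents, recombine them with
    single-point crossover, and mutate each bit with small probability."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # survival of the fittest
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            point = random.randint(1, length - 1)  # single-point crossover
            child = a[:point] + b[point:]
            # Mutation: flip each bit with probability 1%.
            child = [bit ^ (random.random() < 0.01) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "One-max" problem: fitness = number of 1 bits; the optimum is all ones.
random.seed(1)
best = genetic_algorithm(sum)
print(best, sum(best))
```

Each chromosome is a string of bits, as in the text; replacing `sum` with any other fitness function reuses the same selection/crossover/mutation loop unchanged.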
Adversarial search Methods-Game
playing-Important concepts
Adversarial search: Search based on Game theory- Agents- Competitive environment
According to game theory, a game is played between two players. To complete the
game, one has to win and the other automatically loses.
Such conflicting goals lead to adversarial search.
Game-playing techniques target games decided by human intelligence and logic,
excluding other factors such as luck.
• Tic-Tac-Toe, Checkers, Chess – only the mind works, no luck is involved
Adversarial search Methods-Game
playing-Important concepts
Techniques required to get the best optimal solution
(Choose Algorithms for best optimal solution within
limited time)
Pruning: A technique which allows ignoring the
unwanted portions of a search tree which make no
difference in its final result.
Heuristic Evaluation Function: It allows
approximating the cost value at each level of the search
tree before reaching the goal node.
Game playing and knowledge structure-
Elements of Game Playing search
To play a game, we use a game tree to know all the possible choices and to pick the best one
out. There are following elements of a game-playing:
S0: It is the initial state from where a game begins.
PLAYER (s): It defines which player has the current turn to move in state s.
ACTIONS (s): It defines the set of legal moves available in a state.
RESULT (s, a): It is a transition model which defines the result of a move.
TERMINAL-TEST (s): It returns true when the game has ended.
UTILITY (s, p): It defines the final numeric value for player p when the game ends
in terminal state s. This function is also known as the objective function or
payoff function. The payoff which the player will get is, e.g.:
(-1): If the PLAYER loses.
(+1): If the PLAYER wins.
(0): If there is a draw between the PLAYERS.
Game playing and knowledge structure-
Elements of Game Playing search Example
• For example, in chess, tic-tac-toe, we have two or three possible outcomes.
Either to win, to lose, or to draw the match with values +1,-1 or 0.
• Game Tree for Tic-Tac-Toe
– Node: Game states, Edges: Moves taken by players
Game as a search problem
Types of algorithms in Adversarial search
In a normal search, we follow a sequence of actions to reach the goal or to finish the game
optimally. In an adversarial search, however, the outcome depends on the moves of both
players. The solution for the goal state will still be an optimal one, because each player
tries to win the game along the shortest path and under limited
time.
• Minimax Algorithm
• Alpha-beta Pruning
Game as a search problem-Minimax
approach
Minimax/Minmax/MM/Saddle Point:
Decision strategy-Game Theory
▪ Minimize losing chances- Maximize winning chances
▪ Two-player game strategy
▪ Players will be two namely:
▪ MIN: Decrease the chances of MAX to win the game.
▪ MAX: Increases his chances of winning the game.
▪ Result of the game/Utility value
• Heuristic values are propagated from the leaf nodes up to the root node
Drawback of Minimax:
• Explores every node in the tree deeply to find the best path among all the paths
Alpha-beta pruning: overcomes this drawback by exploring fewer nodes of the search tree
Alpha beta pruning
Cuts off the search by exploring fewer nodes
It makes the same moves as the minimax algorithm does, but prunes unwanted branches using
the pruning technique
Alpha-beta pruning works on two threshold values, α and β:
• α: the best (highest) value found so far for the MAX player. It is a lower bound,
initialized to negative infinity.
• β: the best (lowest) value found so far for the MIN player. It is an upper bound,
initialized to positive infinity.
So, each MAX node has α-value, which never decreases, and each MIN node has
β-value, which never increases.
Note: the alpha-beta pruning technique can be applied to trees of any depth, and it
can often prune entire subtrees.
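The same toy tree with alpha-beta pruning added: α never decreases at MAX nodes, β never increases at MIN nodes, and a branch is cut off as soon as α ≥ β (the `visited` list is only there to show which nodes get pruned):

```python
import math

def alphabeta(node, maximizing, alpha, beta, tree, utility, visited):
    """Minimax with alpha-beta pruning: alpha is MAX's guaranteed lower
    bound, beta is MIN's guaranteed upper bound; prune when alpha >= beta."""
    visited.append(node)              # record expansion order for inspection
    children = tree.get(node)
    if not children:
        return utility[node]
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, False, alpha, beta,
                                         tree, utility, visited))
            alpha = max(alpha, value)  # alpha never decreases
            if alpha >= beta:          # MIN will never allow this branch
                break
        return value
    value = math.inf
    for child in children:
        value = min(value, alphabeta(child, True, alpha, beta,
                                     tree, utility, visited))
        beta = min(beta, value)        # beta never increases
        if alpha >= beta:              # MAX will never choose this branch
            break
    return value

# Same hypothetical tree used for plain minimax.
tree = {'root': ['L', 'R'], 'L': ['L1', 'L2'], 'R': ['R1', 'R2']}
utility = {'L1': 3, 'L2': 5, 'R1': 2, 'R2': 9}
visited = []
value = alphabeta('root', True, -math.inf, math.inf, tree, utility, visited)
print(value, visited)
```

After the left subtree, α = 3 at the root; inside R, the leaf R1 drives β down to 2, so α ≥ β and R2 is never evaluated, yet the root value 3 is identical to plain minimax.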
Game theory problems
Game theory is a branch of mathematics used to model strategic
interaction between different players (agents), all of whom are equally rational, in a
context with predefined rules (of playing or maneuvering) and outcomes.
A GAME can be defined as a set of players, actions, strategies, and a final payoff for
which all the players are competing.
Game Theory has now become a describing factor for both Machine Learning
algorithms and many daily life situations.
Game theory problems- Types of Games
Zero-Sum and Non-Zero-Sum Games: In non-zero-sum games, there are multiple players and all of
them have the option to gain a benefit from any move by another player. In zero-sum games, however,
if one player gains something, the other players are bound to lose an equivalent payoff.
Simultaneous and Sequential Games: Sequential games are the more popular games where every player
is aware of the movement of another player. Simultaneous games are more difficult as in them, the
players are involved in a concurrent game. BOARD GAMES are the perfect example of sequential
games and are also referred to as turn-based or extensive-form games.
Imperfect Information and Perfect Information Games: In a perfect information game, every player is
aware of the moves of the other player and is also aware of the various strategies that the other
player might be applying to win the ultimate payoff. In imperfect information games, however, no
player is aware of what the other is up to. CARDS are a good example of imperfect information
games, while CHESS is the perfect example of a perfect information game.
Asymmetric and Symmetric Games: Asymmetric games are those in which each player has a
different and usually conflicting final goal. Symmetric games are those in which all players have the
same ultimate goal but the strategy used by each may be completely different.
Co-operative and Non-Co-operative Games: In non-co-operative games, every player plays for himself
while in co-operative games, players form alliances in order to achieve the final goal.
Thank you