Unit-2 Notes
Syllabus: Introduction to KR, Knowledge agent, Predicate Logic, Inference rule & theorem
proving, forward chaining, backward chaining, resolution; Conflict resolution, backward
reasoning; Structured KR: Semantic Net – slots, inheritance, Conceptual Dependency
Introduction to KR
Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence
concerned with how AI agents think and how thinking contributes to their intelligent
behavior.
It is responsible for representing information about the real world so that a computer can
understand it and utilize this knowledge to solve complex real-world problems, such as
diagnosing a medical condition or communicating with humans in natural language.
It also describes how we can represent knowledge in artificial intelligence.
Knowledge representation is not just storing data in a database; it also enables an
intelligent machine to learn from that knowledge and experience so that it can behave
intelligently like a human.
Knowledge agent
Knowledge-based agents are agents that have an explicit representation of knowledge that can be
reasoned with.
These agents can manipulate this knowledge to infer new things at the "knowledge level".
A Knowledge Based Agent
A knowledge-based agent includes a knowledge base and an inference system.
A knowledge base is a set of representations of facts of the world.
Each individual representation is called a sentence.
The sentences are expressed in a knowledge representation language.
The agent operates as follows:
1. It TELLs the knowledge base what it perceives.
2. It ASKs the knowledge base what action it should perform.
3. It performs the chosen action.
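The three-step TELL/ASK cycle above can be sketched in Python. This is a minimal illustration, assuming a trivial knowledge base that is just a set of facts; the percept and action names ("raining", "open_umbrella", "walk_normally") are hypothetical.

```python
# Minimal sketch of a knowledge-based agent's TELL/ASK cycle.
# The KB here is just a set of facts; a real KB would support
# entailment over sentences in a representation language.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def tell(self, sentence):
        """Store a new sentence (fact) in the KB."""
        self.facts.add(sentence)

    def ask(self, query):
        """Answer whether the KB contains the query (trivial lookup here)."""
        return query in self.facts

def agent_step(kb, percept):
    kb.tell(percept)              # 1. TELL the KB what the agent perceives
    if kb.ask("raining"):         # 2. ASK the KB what action to perform
        action = "open_umbrella"
    else:
        action = "walk_normally"
    return action                 # 3. Return (perform) the chosen action

print(agent_step(KnowledgeBase(), "raining"))   # open_umbrella
```

A real agent would loop over `agent_step` continuously, accumulating percepts in the KB across time steps.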
Knowledge Level.
The most abstract level: describe agent by saying what it knows.
Example: A taxi agent might know that the Golden Gate Bridge connects San Francisco with the
Marin County.
Logical Level.
The level at which the knowledge is encoded into sentences.
Example: Links(GoldenGateBridge, SanFrancisco, MarinCounty).
Implementation Level.
The physical representation of the sentences in the logical level.
Example: '(links goldengatebridge sanfrancisco marincounty)
Predicate Logic
Predicate logic deals with predicates, which are propositions that contain variables.
Quantifier:
It shows the scope of a term.
The variable of predicates is quantified by quantifiers. There are two types of quantifier in
predicate logic - Existential Quantifier and Universal Quantifier
Existential Quantifier:
If p(x) is a proposition over the universe U, then ∃x p(x) is read as "There exists at least
one value of x in the universe such that p(x) is true." The quantifier ∃ is called the
existential quantifier.
There are several ways to write a proposition with an existential quantifier, i.e.,
(∃x∈A)p(x), or ∃x∈A such that p(x), or (∃x)p(x), or p(x) is true for some x∈A.
Universal Quantifier:
If p(x) is a proposition over the universe U, then ∀x p(x) is read as "For every
x∈U, p(x) is true." The quantifier ∀ is called the universal quantifier.
There are several ways to write a proposition with a universal quantifier:
∀x∈A, p(x), or p(x) ∀x∈A, or ∀x, p(x), or p(x) is true for all x∈A.
Examples (negate the following propositions):
1. (∃x∈U)(x+6=25)
Sol: ~(∃x∈U)(x+6=25) ≅ (∀x∈U) ~(x+6=25) ≅ (∀x∈U)(x+6≠25)
2. ∃x p(x) ∨ ∀y q(y)
Sol: ~(∃x p(x) ∨ ∀y q(y)) ≅ ~∃x p(x) ∧ ~∀y q(y) (∵ ~(p∨q) ≅ ~p∧~q) ≅ ∀x ~p(x) ∧ ∃y ~q(y)
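Over a finite universe, ∀ and ∃ correspond directly to Python's `all()` and `any()`, so the quantifier negation law used in the solutions above can be checked mechanically. This sketch uses the proposition p(x): x + 6 = 25 from the example, over an assumed small finite universe:

```python
# Over a finite universe, ∀ maps to all() and ∃ maps to any().
# We check the negation law ~(∃x p(x)) ≅ ∀x ~p(x) for the example p(x).

U = range(1, 31)                 # a small finite universe (assumption)
p = lambda x: x + 6 == 25        # p(x): x + 6 = 25

exists_p = any(p(x) for x in U)          # ∃x∈U, p(x) — true, since x = 19 works
forall_not_p = all(not p(x) for x in U)  # ∀x∈U, ~p(x)

# ~(∃x p(x)) is logically equivalent to ∀x ~p(x):
assert (not exists_p) == forall_not_p
print(exists_p)        # True
print(forall_not_p)    # False
```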
Inference: In artificial intelligence, we need intelligent computers which can derive new
conclusions from existing knowledge or from evidence; generating conclusions from evidence
and facts is termed Inference.
Techniques:
Forward Chaining
Backward Chaining
Resolution
Forward chaining: Forward chaining is the process that matches a set of conditions and
infers results from those conditions. It is a data-driven approach using which it reaches
(infers) the goal condition.
Example: Given A (is true)
A->B
B-> C
C->D
Prove D is also true.
Solution: Starting from A: A is true, then B is true (A->B); B is true, then C is true (B->C);
C is true, then D is true (C->D). Proved.
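The forward-chaining example above can be sketched as a small Python loop. This is a minimal illustration rather than a full inference engine: rules are written as (premises, conclusion) pairs, and any rule whose premises are all known fires until nothing new can be added.

```python
# Minimal forward chaining for the A -> B -> C -> D example.
# Repeatedly fire (Modus Ponens) any rule whose premises are all
# known facts, until no new fact can be derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)     # fire the rule
                changed = True
    return facts

rules = [(["A"], "B"), (["B"], "C"), (["C"], "D")]
print("D" in forward_chain({"A"}, rules))   # True — D is derived from A
```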
Backward chaining: Backward chaining is the process that performs a backward search from the
goal to the conditions used to reach the goal. It is a goal-driven approach using which it
works back to the initial condition.
Example: Given A (is true)
B->C
A->B
C->D
Prove D is also true.
Solution: To prove D, use C->D, so we must prove C; to prove C, use B->C, so we must prove B;
to prove B, use A->B, and A is given as true. Hence B, C, and D are all true. Proved.
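The backward-chaining example above (proving D from A) can be sketched in the same style: to prove a goal, find a rule concluding it and recursively prove its premises, bottoming out at known facts. The `seen` set is a small addition to guard against cyclic rules.

```python
# Minimal backward chaining: prove a goal by recursively proving the
# premises of any rule that concludes it, bottoming out at known facts.

def backward_chain(goal, facts, rules, seen=None):
    seen = seen or set()
    if goal in facts:
        return True
    if goal in seen:                  # avoid looping on cyclic rules
        return False
    seen = seen | {goal}
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(p, facts, rules, seen) for p in premises):
            return True
    return False

rules = [(["B"], "C"), (["A"], "B"), (["C"], "D")]
print(backward_chain("D", {"A"}, rules))   # True
```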
A. Forward Chaining
Forward chaining is also known as a forward deduction or forward reasoning method when using
an inference engine.
Forward chaining is a form of reasoning which starts with atomic sentences in the
knowledge base and applies inference rules (Modus Ponens) in the forward direction to
extract more data until a goal is reached.
The forward-chaining algorithm starts from known facts, triggers all rules whose premises
are satisfied, and adds their conclusions to the known facts. This process repeats until the
problem is solved.
Properties of Forward-Chaining:
o It is a bottom-up approach, as it moves from the facts (bottom) up to the goal (top).
o It is a process of making a conclusion based on known facts or data, starting from the initial
state and reaching the goal state.
o The forward-chaining approach is also called data-driven, as we reach the goal using the
available data.
o The forward-chaining approach is commonly used in expert systems, such as CLIPS, and in
business and production rule systems.
Consider the following famous example which we will use in both approaches:
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A,
an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who
is an American citizen."
Prove that "Robert is criminal."
To solve the above problem, first, we will convert all the above facts into first-order definite
clauses, and then we will use a forward-chaining algorithm to reach the goal.
Step-1:
In the first step, we write down the definite clauses and the known facts:
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
Owns(A, T1) ...(2)
Missile(T1) ...(3)
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ...(4)
Missile(p) → Weapon(p) ...(5)
Enemy(p, America) → Hostile(p) ...(6)
Enemy(A, America) ...(7)
American(Robert) ...(8)
Step-2:
At the second step, we add those facts which can be inferred from the available facts and
whose premises are satisfied.
Rule-(1) does not have its premises satisfied, so it will not be fired in this iteration.
Facts (2) and (3) are already known.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added, which is
inferred from the conjunction of facts (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added, which is inferred
from fact (7).
Step-3:
At step-3, we can check that Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so
we can add Criminal(Robert), which is inferred from all the available facts. Hence we have
reached our goal statement.
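The forward-chaining proof above can be checked with a small sketch. For simplicity, the first-order rules are written here already grounded with the constants Robert, T1, and A; a real engine would perform those substitutions itself via unification.

```python
# Grounded sketch of the crime example: the rules are instantiated with
# Robert, T1 and A, and forward chaining derives Criminal(Robert).

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = {"American(Robert)", "Missile(T1)", "Owns(A,T1)", "Enemy(A,America)"}
rules = [
    (["Missile(T1)", "Owns(A,T1)"], "Sells(Robert,T1,A)"),        # rule (4)
    (["Missile(T1)"], "Weapon(T1)"),                              # rule (5)
    (["Enemy(A,America)"], "Hostile(A)"),                         # rule (6)
    (["American(Robert)", "Weapon(T1)",
      "Sells(Robert,T1,A)", "Hostile(A)"], "Criminal(Robert)"),   # rule (1)
]
derived = forward_chain(facts, rules)
print("Criminal(Robert)" in derived)   # True
```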
B. Backward Chaining:
Backward-chaining is also known as a backward deduction or backward reasoning method when
using an inference engine. A backward chaining algorithm is a form of reasoning, which starts
with the goal and works backward, chaining through rules to find known facts that support the
goal.
Properties of backward chaining:
It is known as a top-down approach.
Backward-chaining is based on modus ponens inference rule.
In backward chaining, the goal is broken into sub-goals to prove the facts true.
It is called a goal-driven approach, as a list of goals decides which rules are selected and
used.
The backward-chaining algorithm is used in game theory, automated theorem-proving tools,
inference engines, proof assistants, and various AI applications.
The backward-chaining method mostly uses a depth-first search strategy for proofs.
Example:
In backward-chaining, we will use the same above example, and will rewrite all the rules.
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
Owns(A, T1) ........(2)
Missile(T1) …….(3)
Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
Missile(p) → Weapon(p) .......(5)
Enemy(p, America) →Hostile(p) ........(6)
Enemy (A, America) .........(7)
American(Robert). ..........(8)
Backward-Chaining proof:
In Backward chaining, we will start with our goal predicate, which is Criminal(Robert), and
then infer further rules.
Step-1:
At the first step, we will take the goal fact. And from the goal fact, we will infer other facts, and
at last, we will prove those facts true. So our goal fact is "Robert is Criminal," so following is the
predicate of it.
Step-2: At the second step, we will infer other facts from the goal fact which satisfy the rules. As we
can see in Rule-(1), the goal predicate Criminal(Robert) is present with substitution {p/Robert}. So
we will add all the conjunctive facts below the first level and will replace p with Robert.
Here we can see American (Robert) is a fact, so it is proved here.
Step-3:
At step-3, we will extract the further fact Missile(q), which is inferred from Weapon(q), as it
satisfies Rule-(5). Weapon(q) is also true with the substitution of the constant T1 for q.
Step-4:
At step-4, we can infer the facts Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which
satisfies Rule-(4), with the substitution of A in place of r. So these two statements are proved
here.
Step-5:
At step-5, we can infer the fact Enemy(A, America) from Hostile(A), which satisfies Rule-(6).
And hence all the statements are proved true using backward chaining.
Forward Chaining vs Backward Chaining:
1. Forward chaining starts from known facts and applies inference rules to extract more data
until it reaches the goal. Backward chaining starts from the goal and works backward through
inference rules to find the required facts that support the goal.
2. Forward chaining is a bottom-up approach. Backward chaining is a top-down approach.
3. Forward chaining is known as a data-driven inference technique, as we reach the goal using
the available data. Backward chaining is known as a goal-driven technique, as we start from the
goal and divide it into sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy. Backward chaining
reasoning applies a depth-first search strategy.
5. Forward chaining tests all the available rules. Backward chaining tests only the few
required rules.
6. Forward chaining is suitable for planning, monitoring, control, and interpretation
applications. Backward chaining is suitable for diagnostic, prescription, and debugging
applications.
7. Forward chaining can generate an infinite number of possible conclusions. Backward
chaining generates a finite number of possible conclusions.
8. Forward chaining aims at any conclusion. Backward chaining aims only at the required data.
Propositional Logic:
Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form. Propositions can be
either true or false, but it cannot be both. Propositional logic is also called Boolean logic as it
works on 0 and 1.
Examples:
Today is Tuesday. (truth value depends on the day)
The Sun rises from the West. (False proposition)
2 + 2 = 5 (False proposition)
2 + 3 = 5 (True proposition)
The names are arbitrary but are often chosen to have some mnemonic value—we use W1,3 to
stand for the proposition that the wumpus is in [1,3]. There are two proposition symbols with
fixed meanings:
True is the always-true proposition and False is the always-false proposition.
Complex sentences are constructed from simpler sentences, using parentheses and logical
connectives.
Atomic Proposition: Atomic propositions are the simple propositions. It consists of a single
proposition symbol. These are the sentences which must be either true or false.
Propositional logic is the simplest logic and illustrates the basic ideas. The proposition
symbols P1, P2, etc. are (atomic) sentences.
If S is a sentence, ~(S) is a sentence (negation)
If S1 and S2 are sentences, (S1 ˄ S2) is a sentence (conjunction)
If S1 and S2 are sentences, (S1 ˅ S2) is a sentence (disjunction)
If S1 and S2 are sentences, (S1 → S2) is a sentence (implication)
If S1 and S2 are sentences, (S1 ↔ S2) is a sentence (biconditional)
Examples of Connectives:
(P ∧ Q) Arsalan likes football and Arsalan likes baseball.
(P ∨ Q) Arsalan is a doctor or Arsalan is an engineer.
(P ⇒ Q) If it is raining, then the street is wet.
(P ⇔ Q) I am breathing if and only if I am alive.
Precedence of Connectives:
To eliminate ambiguity, we define a precedence for each operator. The "not" operator (~) has
the highest precedence, followed by ˄ (conjunction), ∨ (disjunction), ⇒ (implication), and
⇔ (biconditional).
Example: In ~A ˄ B, the ~ binds most tightly, giving us the equivalent of (~A) ˄ B rather than
~(A ˄ B).
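Python's `not` operator happens to bind tighter than `and`, mirroring ~ binding tighter than ˄, so the precedence example can be checked directly by evaluation:

```python
# `not` binds tighter than `and` in Python, just as ~ binds tighter
# than ∧ in propositional logic: ~A ∧ B parses as (~A) ∧ B.

for A in (True, False):
    for B in (True, False):
        assert (not A and B) == ((not A) and B)   # same parse for all inputs

# The two readings genuinely differ, e.g. at A = False, B = False:
A, B = False, False
print((not A) and B)    # False
print(not (A and B))    # True
```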
Propositional logic has limited expressive power: it cannot describe statements in terms of
their properties or the logical relationships between objects. For example, the following
statements cannot be adequately represented in propositional logic:
All boys are smart.
All girls are hardworking.
Some mangoes are sweet.
Semantic Networks
A semantic network represents knowledge as a graph of nodes (objects, concepts, or events)
connected by labeled arcs that stand for the relations between them.
Advantages
Semantic networks are a natural representation of knowledge.
It transparently conveys meaning.
These networks are simple and easy to understand.
Disadvantages
Semantic networks take more computational time at runtime.
These are inadequate as they do not have any equivalent quantifiers.
These networks are not intelligent and depend on the creator of the system.
Frame Representation
A frame is a record-like structure that consists of a collection of attributes and values to describe
an entity in the world. These are the AI data structures that divide knowledge into substructures
by representing stereotypical situations. It's a collection of slots and slot values of different types
and sizes. Slots are identified by names and have values; the various properties of a slot are
called facets.
Advantages
It makes the programming easier by grouping the related data.
Frame representation is easy to understand and visualize.
It is very easy to add slots for new attributes and relations.
Also, it is easy to include default data and search for missing values.
Disadvantages
In a frame system, the inference mechanism cannot be easily processed; inference does not
proceed smoothly with frame representation.
Here information is organised into more complex knowledge structures. Slots in the structure
represent attributes into which values can be placed. These values are either specific to a
particular instance or default. These can capture complex situations or objects, for example, eating
a meal in a restaurant, the contents of a hotel room, or the details of a tree in a garden. Such
structures can be linked together as in networks, giving property inheritance. Frames and scripts
are the most common types of structured representation.
Each class is related by ako (a-kind-of) to at least one other class. There exists exactly one class
'linked' to itself, which is the vertex of the graph of ako links. The ako slot provides one of the
fundamental features of object representation: inheritance. At the level of instanced frames,
it allows, by default, a frame to inherit certain values from the classes to which it is bound
(related) by ako. If a frame is bound to several others, there can be multiple inheritance (the
value of ako is then a set of frames).
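The ako-based inheritance described above can be sketched with frames as Python dicts: looking up a slot falls back along the ako chain, giving default-value inheritance with overriding. The Bird/Penguin/Tweety frames and their slots are hypothetical illustrations.

```python
# Frames as dicts with an "ako" (a-kind-of) slot: slot lookup falls
# back to the parent frame, so instances inherit default values and
# subclasses can override them.

frames = {
    "Bird":    {"ako": None,   "legs": 2, "can_fly": True},
    "Penguin": {"ako": "Bird", "can_fly": False},   # overrides the default
    "Tweety":  {"ako": "Penguin"},
}

def get_slot(frame_name, slot):
    """Look up a slot value, inheriting along the ako chain."""
    frame = frames[frame_name]
    if slot in frame:
        return frame[slot]
    if frame["ako"] is not None:
        return get_slot(frame["ako"], slot)
    return None

print(get_slot("Tweety", "legs"))      # 2     — inherited from Bird
print(get_slot("Tweety", "can_fly"))   # False — overridden by Penguin
```

Multiple inheritance (ako as a set of frames) would make `ako` a list and search each parent in turn; that case is omitted here for brevity.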
1. Efficiency:
They allow more complex knowledge to be captured efficiently.
2. Explicitness:
The additional structures (if-needed, if-added, if-removed) make the relative importance of
particular objects and concepts explicit.
3. Expressiveness:
They allow representation of structured knowledge and procedural knowledge; the additional
structure increases clarity.
4. Effectiveness: Actions or operations can be associated with a slot and performed, for
example, whenever the value of that slot is changed, the attached procedures get activated. Such
procedures are called demons.
Comparison between Semantic Net and Frames:
Semantic networks are logically inadequate because they cannot make many of the
distinctions which logic can make.
Semantic networks are heuristically inadequate because searches for information are not
themselves knowledge-based; that is, there is no knowledge in a semantic network that
tells us how to search for the knowledge we want to find.
Frames represents real world knowledge by integrating declarations about objects and
events and their properties and procedural notions about how to retrieve information and
achieve goals thereby overcoming some of the problems associated with semantic nets.
Frames are used for default reasoning; in particular, they are useful for simulating
common-sense knowledge.
Semantic Networks are basically a 2 – D representation of knowledge, while frames add
a third dimension to allow the nodes (slots) to have structures. Hence Frames are very
often used in computer vision.
Frame based expert systems are very useful for representing causal knowledge, because
their information is organised by cause and effect.
By contrast, rule-based expert systems generally rely on unorganised knowledge which is
not causal.
A Semantic Networks based expert system can deal with shallow knowledge;
shallowness occurs because all knowledge in semantic net is contained in nodes and
links.