
UNIT-2

Syllabus: Introduction to KR, Knowledge agent, Predicate Logic, Inference rules & theorem proving: forward chaining, backward chaining, resolution; Conflict resolution, backward reasoning; Structured KR: Semantic Net – slots, inheritance; Conceptual Dependency

Introduction to KR
 Knowledge representation and reasoning (KR, KRR) is the part of Artificial Intelligence concerned with how AI agents think and how thinking contributes to the intelligent behavior of agents.
 It is responsible for representing information about the real world so that a computer can understand it and utilize this knowledge to solve complex real-world problems such as diagnosing a medical condition or communicating with humans in natural language.
 It also describes how we can represent knowledge in artificial intelligence. Knowledge representation is not just about storing data in a database; it also enables an intelligent machine to learn from that knowledge and experience so that it can behave intelligently like a human.

Knowledge agent
Knowledge-based agents – agents that have an explicit representation of knowledge that can be
reasoned with.
These agents can manipulate this knowledge to infer new things at the "knowledge level".
A Knowledge Based Agent
 A knowledge-based agent includes a knowledge base and an inference system.
 A knowledge base is a set of representations of facts of the world.
 Each individual representation is called a sentence.
 The sentences are expressed in a knowledge representation language.
The agent operates as follows:
1. It TELLs the knowledge base what it perceives.
2. It ASKs the knowledge base what action it should perform.
3. It performs the chosen action.
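
This TELL/ASK cycle can be sketched as a small agent program. The following is a minimal sketch in Python, assuming a hypothetical KnowledgeBase class with tell and ask methods; the traffic-light rule is hard-coded here purely for illustration, whereas a real knowledge-based agent would derive the action by inference over the KB.

class KnowledgeBase:
    # A minimal KB: tell stores sentences; ask naively checks whether a sentence was told.
    # A real KB would run an inference procedure over its sentences.
    def __init__(self):
        self.sentences = set()

    def tell(self, sentence):
        self.sentences.add(sentence)

    def ask(self, query):
        return query in self.sentences


def kb_agent_program(kb, percept, t):
    kb.tell(("percept", percept, t))                                  # 1. TELL the KB what the agent perceives
    action = "brake" if kb.ask(("percept", "red light", t)) else "drive"   # 2. ASK what to do (placeholder rule)
    kb.tell(("action", action, t))                                    # 3. record the chosen action
    return action


kb = KnowledgeBase()
print(kb_agent_program(kb, "red light", 0))    # brake
print(kb_agent_program(kb, "green light", 1))  # drive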

Architecture of Knowledge based agent:

Knowledge Level.
The most abstract level: describes the agent by saying what it knows.
Example: A taxi agent might know that the Golden Gate Bridge connects San Francisco with Marin County.
Logical Level.
The level at which the knowledge is encoded into sentences.
Example: Links(GoldenGateBridge, SanFrancisco, MarinCounty).
Implementation Level.
The physical representation of the sentences in the logical level.
Example: '(links goldengatebridge sanfrancisco marincounty)
Knowledge Bases:

 Knowledge base = set of sentences in a formal language


 Declarative approach to building an agent (or other system): Tell it what it needs to know; then it can Ask itself what to do - answers should follow from the KB.
 Agents can be viewed at the knowledge level - i.e., what they know, regardless of how it is implemented - or at the implementation level - i.e., the data structures in the KB and the algorithms that manipulate them.
The agent must be able to:
 Represent states, actions, etc.
 Incorporate new percepts
 Update internal representations of the world
 Deduce(arrive at) hidden properties of the world
 Deduce appropriate actions

Predicate Logic
Predicate Logic deals with predicates, which are propositions that contain variables.

Predicate Logic - Definition


A predicate is an expression of one or more variables defined on some specific domain. A predicate with variables can be made a proposition either by assigning a value to the variable or by quantifying the variable.
The following are some examples of predicates.
 Consider E(x, y) denote "x = y"

 Consider X(a, b, c) denote "a + b + c = 0"

 Consider M(x, y) denote "x is married to y."

Quantifier:
It shows the scope of a term.
The variables of predicates are quantified by quantifiers. There are two types of quantifiers in predicate logic - the Existential Quantifier and the Universal Quantifier.

Existential Quantifier:
If p(x) is a proposition over the universe U, then its existential quantification is denoted as ∃x p(x) and read as "There exists at least one value in the universe of variable x such that p(x) is true." The quantifier ∃ is called the existential quantifier.
There are several ways to write a proposition, with an existential quantifier, i.e.,
(∃x∈A)p(x) or ∃x∈A such that p (x) or (∃x)p(x) or p(x) is true for some x ∈A.
Universal Quantifier:
If p(x) is a proposition over the universe U, then its universal quantification is denoted as ∀x p(x) and read as "For every x∈U, p(x) is true." The quantifier ∀ is called the universal quantifier.
There are several ways to write a proposition, with a universal quantifier.
∀x∈A,p(x) or p(x), ∀x ∈A Or ∀x,p(x) or p(x) is true for all x ∈A.
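
Over a finite universe, both quantifiers can be checked directly by enumeration. The following is a small Python sketch; the universe {1, ..., 10} and the predicate "x is even" are assumptions made only for illustration.

U = range(1, 11)                  # finite universe {1, ..., 10}
def p(x):                         # example predicate: "x is even"
    return x % 2 == 0

exists_p = any(p(x) for x in U)   # (∃x∈U) p(x)
forall_p = all(p(x) for x in U)   # (∀x∈U) p(x)

print(exists_p)                   # True: at least one even number exists in U
print(forall_p)                   # False: not every number in U is even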

Negation of Quantified Propositions:


When we negate a quantified proposition, i.e., when a universally quantified proposition is negated, we obtain an existentially quantified proposition, and when an existentially quantified proposition is negated, we obtain a universally quantified proposition.
The two rules for negation of quantified propositions are as follows; these are also called DeMorgan's Laws:
¬(∀x p(x)) ≅ ∃x ¬p(x)
¬(∃x p(x)) ≅ ∀x ¬p(x)

Example: Negate each of the following propositions:

1. ∀x p(x) ∧ ∃y q(y)
Sol: ¬(∀x p(x) ∧ ∃y q(y))
≅ ¬∀x p(x) ∨ ¬∃y q(y)        (since ¬(p ∧ q) = ¬p ∨ ¬q)
≅ ∃x ¬p(x) ∨ ∀y ¬q(y)

2. (∃x∈U)(x + 6 = 25)
Sol: ¬(∃x∈U)(x + 6 = 25)
≅ (∀x∈U) ¬(x + 6 = 25)
≅ (∀x∈U)(x + 6 ≠ 25)

3. ∃x p(x) ∨ ∀y q(y)
Sol: ¬(∃x p(x) ∨ ∀y q(y))
≅ ¬∃x p(x) ∧ ¬∀y q(y)        (since ¬(p ∨ q) = ¬p ∧ ¬q)
≅ ∀x ¬p(x) ∧ ∃y ¬q(y)
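
Example 2 can also be checked mechanically over a finite universe: the negation of the existential statement agrees with the universal statement of the negated predicate. A tiny sketch, where the universe {1, ..., 30} is an assumption chosen only for illustration:

U = range(1, 31)
lhs = not any(x + 6 == 25 for x in U)    # ¬(∃x∈U)(x + 6 = 25)
rhs = all(x + 6 != 25 for x in U)        # (∀x∈U)(x + 6 ≠ 25)
print(lhs, rhs, lhs == rhs)              # False False True: both sides agree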

Inference: In artificial intelligence, we need intelligent computers which can derive new conclusions from existing knowledge or evidence; generating conclusions from evidence and facts is termed inference.

Techniques:
 Forward Chaining
 Backward Chaining
 Resolution

Forward chaining: Forward chaining is the process of matching the given set of conditions and inferring results from those conditions. It is a data-driven approach, working from the available data towards (inferring) the goal condition.
Example: Given A (is true)
A->B
B-> C
C->D
Prove D is also true.

Solution: Starting from A,
A is true, then B is true (A->B)
B is true, then C is true (B->C)
C is true, then D is true (C->D). Proved.
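
This propositional example can be run as a tiny forward chainer: keep applying any rule whose premise is already known until nothing new is added. A minimal Python sketch, with the rule and fact names taken from the example above:

facts = {"A"}                                  # known facts
rules = [("A", "B"), ("B", "C"), ("C", "D")]   # rules of the form premise -> conclusion

changed = True
while changed:                                 # repeat until no rule adds a new fact
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("D" in facts)   # True: D is derived, so D is proved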
Backward chaining: Backward chaining is the process of searching backward from the goal to the conditions needed to establish the goal. It is a goal-driven approach, working from the goal back to the known initial conditions.
Example: Given A (is true)
B->C
A->B
C->D
Prove D is also true.

Solution: Starting from D,
D is true if C is true (C->D)
C is true if B is true (B->C)
B is true if A is true (A->B), and A is given. Proved.
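
The same example can be run goal-first: to prove a goal, either find it among the known facts or recursively prove the premise of a rule that concludes it. A minimal recursive sketch, with names following the example above:

facts = {"A"}
rules = [("A", "B"), ("B", "C"), ("C", "D")]   # premise -> conclusion

def prove(goal):
    if goal in facts:                          # base case: the goal is a known fact
        return True
    # otherwise try every rule whose conclusion matches the goal and prove its premise
    return any(prove(premise) for premise, conclusion in rules if conclusion == goal)

print(prove("D"))   # True: D <- C <- B <- A (a known fact)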

Rules of Inference in Artificial intelligence


Inference rules:
Inference rules are the templates for generating valid arguments. Inference rules are applied to derive proofs in artificial intelligence, and a proof is a sequence of conclusions that leads to the desired goal.
In inference rules, the implication among all the connectives plays an important role. Following
are some terminologies related to inference rules:

 Implication: It is one of the logical connectives which can be represented as P → Q. It is a Boolean expression.
 Converse: The converse of implication, which means the right-hand side proposition
goes to the left-hand side and vice-versa. It can be written as Q → P.
 Contrapositive: The negation of converse is termed as contrapositive, and it can be
represented as ¬ Q → ¬ P.
 Inverse: The negation of implication is called inverse. It can be represented as ¬ P → ¬
Q.
Resolution:
The Resolution rule states that if P∨Q and ¬P∧R are true, then Q∨R will also be true. It can be represented as ((P∨Q) ∧ (¬P∧R)) → (Q∨R), as the following truth table shows.

P  Q  R  ~P  P∨Q  ~P∧R  Q∨R  (P∨Q)∧(~P∧R)
0  0  0  1   0    0     0    0
0  0  1  1   0    1     1    0
0  1  0  1   1    0     1    0
0  1  1  1   1    1     1    1
1  0  0  0   1    0     0    0
1  0  1  0   1    0     1    0
1  1  0  0   1    0     1    0
1  1  1  0   1    0     1    0

In the only row where (P∨Q)∧(~P∧R) is true (P = 0, Q = 1, R = 1), Q∨R is also true, so the inference is valid.
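
The same check can be done in code by enumerating all eight models and confirming that whenever both premises hold, the conclusion holds. A small sketch:

from itertools import product

valid = True
for P, Q, R in product([False, True], repeat=3):   # all 8 truth assignments
    premises = (P or Q) and ((not P) and R)        # (P∨Q) ∧ (¬P∧R)
    conclusion = Q or R                            # Q∨R
    if premises and not conclusion:                # a counterexample would appear here
        valid = False

print(valid)   # True: the resolution inference is valid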

Forward Chaining and backward chaining in AI


In artificial intelligence, forward and backward chaining are important topics, but before studying them in detail, let us first understand where these two terms come from.
Inference engine:
The inference engine is the component of an intelligent system in artificial intelligence which applies logical rules to the knowledge base to infer new information from known facts. The first inference engines were part of expert systems. An inference engine commonly proceeds in two modes, which are:
a. Forward chaining
b. Backward chaining

Horn Clause and Definite clause:


Horn clauses and definite clauses are restricted forms of sentences which enable a knowledge base to use a more restricted and efficient inference algorithm. Logical inference algorithms using the forward and backward chaining approaches require the KB to be in the form of first-order definite clauses.
Definite clause: A clause which is a disjunction of literals with exactly one positive literal is known as a definite clause or strict Horn clause.
Horn clause: A clause which is a disjunction of literals with at most one positive literal is known as a Horn clause. Hence all definite clauses are Horn clauses.
Example: (¬ p V ¬ q V k). It has only one positive literal k.
It is equivalent to p ∧ q → k.
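
These definitions can be made concrete with a small helper that counts the positive literals in a clause. A sketch, representing a clause as a list of (symbol, is_positive) pairs; this encoding is only for illustration:

def classify(clause):
    # clause is a list of (symbol, is_positive) literals, e.g. ¬p is ("p", False)
    positives = sum(1 for _, positive in clause if positive)
    if positives == 1:
        return "definite clause (and Horn clause)"
    if positives <= 1:
        return "Horn clause"
    return "neither"

example = [("p", False), ("q", False), ("k", True)]   # ¬p ∨ ¬q ∨ k, i.e. p ∧ q → k
print(classify(example))   # definite clause (and Horn clause)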

A. Forward Chaining
Forward chaining is also known as a forward deduction or forward reasoning method when using
an inference engine.

Forward chaining is a form of reasoning which starts with the atomic sentences in the knowledge base and applies inference rules (Modus Ponens) in the forward direction to extract more data until a goal is reached.

The Forward-chaining algorithm starts from known facts, triggers all rules whose premises are satisfied, and adds their conclusions to the known facts. This process repeats until the problem is solved.

Properties of Forward-Chaining:
o It is a bottom-up approach, as it moves from the facts (bottom) to the goal (top).
o It is a process of making a conclusion based on known facts or data, starting from the initial state and reaching the goal state.
o The forward-chaining approach is also called data-driven, as we reach the goal using the available data.
o The forward-chaining approach is commonly used in expert systems (such as CLIPS), business rule systems, and production rule systems.

Consider the following famous example which we will use in both approaches:
Example:
"As per the law, it is a crime for an American to sell weapons to hostile nations. Country A,
an enemy of America, has some missiles, and all the missiles were sold to it by Robert, who
is an American citizen."
Prove that "Robert is criminal."
To solve the above problem, first, we will convert all the above facts into first-order definite
clauses, and then we will use a forward-chaining algorithm to reach the goal.

Facts Conversion into FOL:


o It is a crime for an American to sell weapons to hostile nations. (Let's say p, q, and r are
variables)
American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
o Country A has some missiles: ∃p Owns(A, p) ∧ Missile(p). It can be written as two definite
clauses by using Existential Instantiation, introducing a new constant T1.
Owns(A, T1) ......(2)
Missile(T1) .......(3)
o All of the missiles were sold to country A by Robert.
∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
o Missiles are weapons.
Missile(p) → Weapon(p) .......(5)
o Enemy of America is known as hostile.
Enemy(p, America) →Hostile(p) ........(6)
o Country A is an enemy of America.
Enemy (A, America) .........(7)
o Robert is American
American(Robert). ..........(8)

Forward chaining proof:


Step-1:
In the first step we will start with the known facts and choose the sentences which do not have implications, such as American(Robert), Enemy(A, America), Owns(A, T1), and Missile(T1). These facts form the first level of the proof.

Step-2:
At the second step, we will add those facts which can be inferred from the available facts, i.e., the rules whose premises are satisfied.
Rule-(1) does not have its premises satisfied yet, so it is not applied in this iteration.
Facts (2) and (3) are already added.
Rule-(4) is satisfied with the substitution {p/T1}, so Sells(Robert, T1, A) is added; it is inferred from the conjunction of facts (2) and (3).
Rule-(6) is satisfied with the substitution {p/A}, so Hostile(A) is added; it is inferred from fact (7).
Step-3:
At step-3, Rule-(1) is satisfied with the substitution {p/Robert, q/T1, r/A}, so we can add Criminal(Robert), which is inferred from all the available facts. Hence we have reached our goal statement.

Hence it is proved that Robert is a criminal using the forward chaining approach.
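
The proof above can be reproduced with a forward chainer over the grounded (variable-free) definite clauses, since after Existential Instantiation the only relevant constants are Robert, T1, and A. The following Python sketch works under that simplification; a full first-order implementation would also need unification to handle the variables directly.

# Grounded definite clauses: (set of premise facts, conclusion fact)
rules = [
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"},
     "Criminal(Robert)"),                                    # Rule (1) with {p/Robert, q/T1, r/A}
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),   # Rule (4) with {p/T1}
    ({"Missile(T1)"}, "Weapon(T1)"),                         # Rule (5) with {p/T1}
    ({"Enemy(A,America)"}, "Hostile(A)"),                    # Rule (6) with {p/A}
]
facts = {"Owns(A,T1)", "Missile(T1)", "Enemy(A,America)", "American(Robert)"}  # (2), (3), (7), (8)

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:    # all premises already known
            facts.add(conclusion)
            changed = True

print("Criminal(Robert)" in facts)   # True: the goal is derived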

B. Backward Chaining:
Backward-chaining is also known as a backward deduction or backward reasoning method when
using an inference engine. A backward chaining algorithm is a form of reasoning, which starts
with the goal and works backward, chaining through rules to find known facts that support the
goal.
Properties of backward chaining:
 It is known as a top-down approach.
 Backward-chaining is based on modus ponens inference rule.
 In backward chaining, the goal is broken into sub-goal or sub-goals to prove the facts
true.
 It is called a goal-driven approach, as a list of goals decides which rules are selected and
used.
 The backward-chaining algorithm is used in game theory, automated theorem-proving tools, inference engines, proof assistants, and various AI applications.
 The backward-chaining method mostly uses a depth-first search strategy for proofs.

Example:
In backward-chaining, we will use the same above example, and will rewrite all the rules.
 American(p) ∧ Weapon(q) ∧ Sells(p, q, r) ∧ Hostile(r) → Criminal(p) ...(1)
 Owns(A, T1) ........(2)
 Missile(T1) …….(3)
 ∀p Missile(p) ∧ Owns(A, p) → Sells(Robert, p, A) ......(4)
 Missile(p) → Weapon(p) .......(5)
 Enemy(p, America) →Hostile(p) ........(6)
 Enemy (A, America) .........(7)
 American(Robert). ..........(8)
Backward-Chaining proof:
In Backward chaining, we will start with our goal predicate, which is Criminal(Robert), and
then infer further rules.
Step-1:
At the first step, we will take the goal fact. From the goal fact, we will infer other facts (sub-goals), and at last we will prove those facts true. Our goal fact is "Robert is criminal," whose predicate form is Criminal(Robert).

Step-2: At the second step, we infer other facts from the goal fact which satisfy the rules. As we can see, in Rule-(1) the goal predicate Criminal(Robert) is present with the substitution {p/Robert}. So we add all the conjunctive sub-goals below the first level and replace p with Robert.
Here we can see that American(Robert) is a known fact, so it is proved.

Step-3: At step-3, we extract the further sub-goal Missile(q) from Weapon(q), as it satisfies Rule-(5). Weapon(q) is then true with the substitution of the constant T1 for q.
Step-4:
At step-4, we can infer the sub-goals Missile(T1) and Owns(A, T1) from Sells(Robert, T1, r), which satisfies Rule-(4) with the substitution of A in place of r. So these two statements are proved here.

Step-5:
At step-5, we can infer the sub-goal Enemy(A, America) from Hostile(A), which satisfies Rule-(6). Since Enemy(A, America) is a known fact, all the statements are proved true using backward chaining.
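
The same grounded clauses used in the forward-chaining sketch can be queried goal-first: Criminal(Robert) is proved by recursively proving each premise of a rule that concludes it. Again this is a simplified sketch without unification:

rules = [
    ({"American(Robert)", "Weapon(T1)", "Sells(Robert,T1,A)", "Hostile(A)"}, "Criminal(Robert)"),
    ({"Missile(T1)", "Owns(A,T1)"}, "Sells(Robert,T1,A)"),
    ({"Missile(T1)"}, "Weapon(T1)"),
    ({"Enemy(A,America)"}, "Hostile(A)"),
]
facts = {"Owns(A,T1)", "Missile(T1)", "Enemy(A,America)", "American(Robert)"}

def prove(goal):
    if goal in facts:                       # known fact: proved directly
        return True
    for premises, conclusion in rules:      # otherwise decompose into sub-goals
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("Criminal(Robert)"))   # True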
Forward Chaining vs. Backward Chaining:

1. Forward chaining starts from known facts and applies inference rules to extract more data until it reaches the goal. Backward chaining starts from the goal and works backward through inference rules to find the known facts that support the goal.
2. Forward chaining is a bottom-up approach. Backward chaining is a top-down approach.
3. Forward chaining is known as a data-driven inference technique, as we reach the goal using the available data. Backward chaining is known as a goal-driven technique, as we start from the goal and divide it into sub-goals to extract the facts.
4. Forward chaining reasoning applies a breadth-first search strategy. Backward chaining reasoning applies a depth-first search strategy.
5. Forward chaining tests all the available rules. Backward chaining tests only the few required rules.
6. Forward chaining is suitable for planning, monitoring, control, and interpretation applications. Backward chaining is suitable for diagnostic, prescription, and debugging applications.
7. Forward chaining can generate an infinite number of possible conclusions. Backward chaining generates a finite number of possible conclusions.
8. Forward chaining is aimed at any conclusion. Backward chaining is aimed only at the required data.

Propositional Logic:
Propositional logic (PL) is the simplest form of logic where all the statements are made by
propositions. A proposition is a declarative statement which is either true or false. It is a
technique of knowledge representation in logical and mathematical form. A proposition can be either true or false, but it cannot be both. Propositional logic is also called Boolean logic as it
works on 0 and 1.

Examples:
 Today is Tuesday.
 The Sun rises from West (False proposition)
 2+2= 5(False proposition)
 2+3= 5

Syntax of Propositional Logic:


The syntax of propositional logic defines the allowable sentences. The atomic sentences consist
of a single proposition symbol. Each such symbol stands for a proposition that can be true or
false. We use symbols that start with an uppercase letter and may contain other letters or
subscripts.
Example: P, Q, R, W1,3 and North.

The names are arbitrary but are often chosen to have some mnemonic value—we use W1,3 to
stand for the proposition that the wumpus is in [1,3]. There are two proposition symbols with
fixed meanings:
True is the always-true proposition and False is the always-false proposition.
Complex sentences are constructed from simpler sentences, using parentheses and logical
connectives.

In propositional logic there are two types of propositions. They are:


1. Atomic Propositions
2. Compound Propositions

Atomic Proposition: Atomic propositions are the simple propositions. Each consists of a single proposition symbol. These are the sentences which must be either true or false.

Ex: The sun rises from the west. (false atomic proposition)
The sun rises from the east. (true atomic proposition)

Compound Proposition: Compound propositions are constructed by combining simpler or atomic propositions, using parentheses and logical connectives.

Ex: Arsalan is an engineer and Arsalan lives in Dubai.
Arsalan is an engineer or Arsalan is a doctor.

Propositional logic is the simplest logic and illustrates the basic ideas. The proposition symbols P1, P2, etc. are (atomic) sentences.
 If S is a sentence, ~(S) is a sentence (negation)
 If S1 and S2 are sentences, (S1 ˄ S2) is a sentence (conjunction)
 If S1 and S2 are sentences, (S1 ˅ S2) is a sentence (disjunction)
 If S1 and S2 are sentences, (S1 → S2) is a sentence (implication)
 If S1 and S2 are sentences, (S1 ↔ S2) is a sentence (biconditional)
Examples of Connectives:
(P ∧ Q) Arsalan likes football and Arsalan likes baseball.
(P ∨ Q) Arsalan is a doctor or Arsalan is an engineer.
(P ⇒ Q) If it is raining, then the street is wet.
(P ⇔ Q) I am breathing if and only if I am alive

Precedence of Connectives:
To eliminate the ambiguity we define a precedence for each operator. The "not" operator (~) has the highest precedence, followed by ∧ (conjunction), ∨ (disjunction), ⇒ (implication), and ⇔ (biconditional).

Example: In ~A ∧ B, the ~ binds most tightly, giving us the equivalent of (~A) ∧ B rather than ~(A ∧ B).

Semantics of Propositional Logic:


Each model specifies a true/false value for each proposition symbol, e.g., A = false, B = true, C = false.
With these three symbols there are 8 possible models, which can be enumerated automatically. The rules for evaluating truth with respect to a model m are:
~S is true iff S is false
S1 ∧ S2 is true iff S1 is true and S2 is true
S1 ∨ S2 is true iff S1 is true or S2 is true
S1 ⇒ S2 is true iff S1 is false or S2 is true, i.e., it is false iff S1 is true and S2 is false
S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true
A simple recursive process evaluates an arbitrary sentence, e.g., with the model above:
~A ∧ (B ∨ C) = true ∧ (true ∨ false) = true ∧ true = true
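
This recursive evaluation is easy to mirror in code. A minimal sketch that evaluates the example sentence under the model A = false, B = true, C = false; the operator functions are hypothetical helpers, not a library API:

model = {"A": False, "B": True, "C": False}

def NOT(s):            return not s
def AND(s1, s2):       return s1 and s2
def OR(s1, s2):        return s1 or s2
def IMPLIES(s1, s2):   return (not s1) or s2          # false only when s1 is true and s2 is false
def IFF(s1, s2):       return IMPLIES(s1, s2) and IMPLIES(s2, s1)

# ~A ∧ (B ∨ C), evaluated recursively from the leaves up
result = AND(NOT(model["A"]), OR(model["B"], model["C"]))
print(result)   # True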

Limitations of Propositional Logic:


We cannot represent relations like "all", "some", or "none" with propositional logic.

Examples:
All boys are smart.
All girls are hardworking.
Some mangoes are sweet.

Propositional logic has limited expressive power: in propositional logic, we cannot describe statements in terms of their properties or logical relationships.

RULE BASED SYSTEMS


A rule-based system in AI bases choices or inferences on established rules. These rules are frequently expressed in human-friendly language, such as "if X is true, then Y is true," to make them easy to comprehend. Expert systems and decision support systems are just two examples of the many applications in which rule-based systems have been employed.
What is a Rule-based System?
A system that relies on a collection of predetermined rules to decide what to do next is known as a rule-based system in AI. These rules are built from conditions and actions. For instance, if a patient has a fever, a rule may recommend antibiotics because the patient may have an infection. Expert systems, decision support systems, and chatbots are examples of applications that use rule-based systems.

Characteristics of Rule-based Systems in AI


The following are some of the primary traits of the rule-based system in AI:
 The rules are written simply for humans to comprehend, making rule-based systems
simple to troubleshoot and maintain.
 Given a set of inputs, rule-based systems will always create the same output, making
them predictable and dependable. This property is known as determinism.
 A rule-based system in AI is transparent because the standards are clear and open to
human inspection, which makes it simpler to comprehend how the system operates.
 A rule-based system in AI is scalable. When scaled up, large quantities of data can be
handled by rule-based systems.
 Rule-based systems can be modified or updated more easily because the rules can be
divided into smaller components.

How does a Rule-based System Work?


A rule-based system in AI generates an output by applying a set of rules to a collection of inputs. The system first determines which rules apply to the inputs. If a rule is applicable, the system executes the corresponding actions to generate the output. If no rule is applicable, the system might generate a default output or ask the user for more details.
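
This cycle (match the rules against the inputs, fire an applicable rule, otherwise fall back to a default) can be sketched as follows. The fever example and the rule format are illustrative assumptions, not the API of any specific expert-system shell:

# Each rule: (condition function over the input facts, action/output string)
rules = [
    (lambda facts: facts.get("temperature", 0) > 38.0, "suspect infection: recommend tests"),
    (lambda facts: facts.get("rash", False),           "refer to dermatologist"),
]

def run(facts, default="no rule applies: ask the user for more details"):
    for condition, action in rules:        # find the first applicable rule
        if condition(facts):
            return action
    return default                         # fall back to a default output

print(run({"temperature": 39.2}))   # suspect infection: recommend tests
print(run({"temperature": 36.8}))   # no rule applies: ask the user for more details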

Examples of Rule-based Systems


Healthcare, finance, and engineering are just a few examples of the sectors that use rule-based systems. Following are some instances of a rule-based system in AI:
 Medical Diagnosis: Based on a patient's symptoms, medical history, and test findings, a rule-based system in AI can make a diagnosis by adhering to a series of guidelines developed by medical professionals.
 Fraud Detection: Based on particular criteria, such as a transaction's value, location, and time of day, a rule-based system in AI can be used to spot fraudulent transactions. The system can then flag the transaction for additional examination.
 Quality Control: A rule-based system in AI can ensure that products satisfy particular quality standards. Based on a set of guidelines developed by quality experts, the system can check for flaws.
 Decision Support Systems: These are created to aid decision-making, such as choosing which assets to buy.
Conflict Resolution in AI
 Suppose we have two rules, Rule 2 and Rule 3, with the same IF part (for example, Rule 2: IF 'traffic light is red' THEN action is stop; Rule 3: IF 'traffic light is red' THEN action is go). Thus both of them can be set to fire when the condition part is satisfied. These rules form a conflict set. The inference engine must determine which rule to fire from such a set. A method for choosing a rule to fire in a given cycle is called conflict resolution.
 In forward chaining, BOTH rules would be fired.
 Rule 2 is fired first, as it is the topmost one; as a result, its THEN part is executed and the linguistic object action obtains the value stop.
 However, Rule 3 is also fired because its condition part matches the fact 'traffic light is red', which is still in the database. As a consequence, the object action takes the new value go.

Methods used for conflict resolution


 Fire the rule with the highest priority. In simple applications, the priority can be
established by placing the rules in an appropriate order in the knowledge base. Usually
this strategy works well for expert systems with around 100 rules.
 Fire the most specific rule. This method is also known as the longest matching
strategy. It is based on the assumption that a specific rule processes more information
than a general one.
 Fire the rule that uses the data most recently entered in the database.
 This method relies on time tags attached to each fact in the database. In the conflict set, the expert system first fires the rule whose antecedent uses the data most recently added to the database.
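
The first strategy (priority ordering) and the third (recency) can be sketched together: from the conflict set of applicable rules, fire the one with the highest priority, breaking ties by the most recent time tag on the matched fact. The rule structure and the traffic-light names below are illustrative assumptions:

# facts: fact text -> time tag (when the fact was added to the database)
facts = {"traffic light is red": 1}

# rules: (priority, condition fact, THEN assignment); higher priority fires first
rules = [
    (2, "traffic light is red", ("action", "stop")),   # Rule 2
    (1, "traffic light is red", ("action", "go")),     # Rule 3
]

# conflict set: all rules whose condition is satisfied by the database
conflict_set = [r for r in rules if r[1] in facts]

# conflict resolution: highest priority first, most recent matching fact breaks ties
priority, condition, (obj, value) = max(conflict_set, key=lambda r: (r[0], facts[r[1]]))
print(obj, "=", value)   # action = stop  (only the chosen rule fires)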

Semantic Network Representation


 Semantic networks work as an alternative to predicate logic for knowledge representation. In semantic networks, the user can represent their knowledge in the form of graphical networks. Such a network consists of nodes representing objects and arcs which describe the relationships between those objects. This representation uses two main types of relations: the IS-A relationship (inheritance) and the kind-of relation. A small sketch of IS-A inheritance is given after the advantages and disadvantages below.

Advantages
 Semantic networks are a natural representation of knowledge.
 It transparently conveys meaning.
 These networks are simple and easy to understand.

Disadvantages
 Semantic networks take more computational time at runtime.
 These are inadequate as they do not have any equivalent of logical quantifiers.
 These networks are not intelligent and depend on the creator of the system.
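
As a concrete illustration of the IS-A (inheritance) idea, a semantic network can be held as a dictionary of nodes with labelled arcs; a property not found on a node is looked up along its is-a links. The animal examples below are assumptions made only for illustration:

# node -> {relation: value}; "is-a" arcs give inheritance
net = {
    "Bird":   {"is-a": "Animal", "can-fly": True},
    "Canary": {"is-a": "Bird", "color": "yellow"},
    "Animal": {"breathes": True},
}

def get(node, prop):
    while node is not None:                      # walk up the is-a chain
        if prop in net.get(node, {}):
            return net[node][prop]
        node = net.get(node, {}).get("is-a")     # follow the inheritance arc
    return None

print(get("Canary", "can-fly"))    # True, inherited from Bird
print(get("Canary", "breathes"))   # True, inherited from Animal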
Frame Representation
A frame is a record-like structure that consists of a collection of attributes and values to describe
an entity in the world. These are the AI data structures that divide knowledge into substructures
by representing stereotypical situations. It is a collection of slots and slot values of different types and sizes. Slots are identified by names, and the properties attached to a slot (such as its value or a default) are called facets.

Advantages
 It makes the programming easier by grouping the related data.
 Frame representation is easy to understand and visualize.
 It is very easy to add slots for new attributes and relations.
 Also, it is easy to include default data and search for missing values.

Disadvantages
 In a frame system, the inference mechanism cannot be easily processed.
 The inference mechanism does not proceed smoothly with frame representation.

Here information is organised into more complex knowledge structures. Slots in the structure
represent attributes into which values can be placed. These values are either specific to a
particular instance or are defaults. Frames can capture complex situations or objects, for example, eating a meal in a restaurant, the contents of a hotel room, or the details of a tree in a garden. Such
structures can be linked together as in networks, giving property inheritance. Frames and scripts
are the most common types of structured representation.
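
The slot/default/inheritance behaviour just described can be sketched as a small class: a frame holds slots, and a missing slot value falls back first to the frame's own defaults and then to its parent (ako) frame. The hotel-room example is an assumption made only for illustration:

class Frame:
    def __init__(self, name, ako=None, slots=None, defaults=None):
        self.name = name
        self.ako = ako                      # "a kind of" link to a more general frame
        self.slots = slots or {}            # instance-specific slot values
        self.defaults = defaults or {}      # default facet values

    def get(self, slot):
        if slot in self.slots:              # 1. value specific to this frame
            return self.slots[slot]
        if slot in self.defaults:           # 2. default facet on this frame
            return self.defaults[slot]
        if self.ako is not None:            # 3. inherit along the ako link
            return self.ako.get(slot)
        return None

room = Frame("Room", defaults={"walls": 4, "has-door": True})
hotel_room = Frame("HotelRoom", ako=room, defaults={"has-tv": True})
room_101 = Frame("Room101", ako=hotel_room, slots={"floor": 1})

print(room_101.get("floor"))     # 1, its own slot value
print(room_101.get("has-tv"))    # True, default inherited from HotelRoom
print(room_101.get("walls"))     # 4, default inherited from Room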

Another Representation of a Frame


A frame can also be described as a tree whose root is labelled by its name. The first level of the
tree is that of attributes (slots), the second of facets, the third that of values.

The slots are divided into several categories:

(a) The Inherent Slots:


These are comparable to the record fields of a database and hence are characteristic of the knowledge under consideration. There is a difference, however: since they can exist outside their record, they can have one or more values, and also procedures associated with them. These slots are specified by the user for each class, and their values are given when this class is instantiated to obtain an object.

(b) The Meta-Slots:


These are defined for the classes, and often have only one value, being related to the given class (these are not inherited). Such an attribute is the ako (a kind of) link mentioned in semantic networks, which relates one class to a more general class (e.g., machine).

Each class is related by ako to at least one other class. There exists exactly one class 'linked' to itself, which is the vertex of the graph of ako links. The slot ako provides one of the fundamental benefits of object representation: inheritance. At the level of instantiated frames, it allows, by default, a frame to inherit certain values of the classes to which it is bound (related) by ako. If this frame is bound to several others, there can be multiple inheritance (the value of ako is then a set of frames).

(c) The Instantiated Slots:


These are defined for the objects which are instances of classes. A priori, their creation is automatic during instantiation, since they are ascribed to the instantiated class.
Each slot possesses an arbitrary number of facets; these are the declarations or procedures associated with the attributes.
Frames extend semantic networks to include structured, hierarchical knowledge. Since they can be used with semantic networks, they share the benefits of semantic networks, as well as the following characteristics of frames:

1. Efficiency:
They allow more complex knowledge to be captured efficiently.
2. Explicitness:
The additional structures (if-needed, if-added, if-removed) make the relative importance of
particular objects and concepts explicit.
3. Expressiveness:
They allow representation of structured knowledge and procedural knowledge; the additional structure increases clarity.
4. Effectiveness: Actions or operations can be associated with a slot and performed, for example, whenever the value of that slot is changed the associated procedures get activated. Such procedures are called demons.
Comparison between Semantic Net and Frames:

 Semantic Networks are logically inadequate because they cannot make many of the distinctions which logic can make.
 Semantic Networks are heuristically inadequate because searches for information are not themselves knowledge-based; that is, there is no knowledge in a Semantic Network which tells us how to search for the knowledge we want to find.
 Frames represent real-world knowledge by integrating declarations about objects, events, and their properties with procedural notions about how to retrieve information and achieve goals, thereby overcoming some of the problems associated with semantic nets.
 Frames are used for default reasoning; in particular, they are useful for simulating common-sense knowledge.
 Semantic Networks are basically a 2-D representation of knowledge, while frames add a third dimension by allowing the nodes (slots) to have structure. Hence frames are very often used in computer vision.
 Frame-based expert systems are very useful for representing causal knowledge, because their information is organised by cause and effect.
 By contrast, rule-based expert systems generally rely on unorganised knowledge which is not causal.
 A Semantic Network based expert system can only deal with shallow knowledge; shallowness occurs because all knowledge in a semantic net is contained in nodes and links.

Conceptual Dependency (CD) is used to represent knowledge in Artificial Intelligence. It should be powerful enough to represent the concepts contained in natural-language sentences. It states that different sentences which have the same meaning should have a single, unique representation.

There are 5 types of states in Conceptual Dependency:


1. Entities
2. Actions
3. Conceptual cases
4. Conceptual dependencies
5. Conceptual tense

Main Goals of Conceptual Dependency:


1. It captures the implicit concepts of a sentence and makes them explicit.
2. It helps in drawing inferences from sentences.
3. For any two or more sentences that are identical in meaning, there should be only one representation of that meaning.
4. It provides a means of representation which is language independent.
5. It supports the development of language conversion packages.
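
As a rough illustration of goals 3 and 4, a sentence such as "John gave Mary a book" can be stored as a structure built around a primitive act. The primitive ATRANS (transfer of possession) follows Schank's CD theory, which is not detailed in these notes, so treat the field names below as an assumed encoding; both "John gave Mary a book" and "Mary received a book from John" would map to the same structure:

# One CD structure for "John gave Mary a book" / "Mary received a book from John"
cd = {
    "act":    "ATRANS",   # primitive act: transfer of an abstract relationship (possession)
    "actor":  "John",     # entity performing the act
    "object": "book",     # entity acted upon
    "from":   "John",     # donor (source of possession)
    "to":     "Mary",     # recipient
    "tense":  "past",     # conceptual tense
}
print(cd["actor"], cd["act"], cd["object"], "to", cd["to"])   # John ATRANS book to Mary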
