
explain explanation based learning in Artificial Intelligence
Explanation-Based Learning (EBL) in Artificial Intelligence (AI) is a specialized form of machine
learning that focuses on learning from a deep understanding of individual examples by
leveraging domain knowledge and logical reasoning. Unlike other machine learning approaches
that require numerous examples to identify patterns, EBL emphasizes creating generalized rules
or concepts from a single or limited set of training instances by explaining why a specific solution
works.

Core Concept of EBL


EBL operates by analyzing a specific problem instance, generating an explanation for why a
solution is correct using a strong or even perfect domain theory (a comprehensive set of rules or
axioms about a domain), and then generalizing that explanation to apply to similar situations [1]
[2] [3] . A domain theory is considered "perfect" or "complete" if it contains all necessary
information to address any question within that domain, such as the rules of chess for a chess-
playing program [2] [3] . The goal is to extract essential features and principles from the example,
ignoring irrelevant details, to form a broader concept or rule that can be applied efficiently to
future problems [3] [4] .
The EBL process typically involves the following steps:
Analyzing the Problem: The system examines a specific instance or training example and
uses domain knowledge to understand why a particular solution works [5] .
Generating an Explanation: It constructs a logical justification or proof for the solution
based on the domain theory, often using inference rules and facts [1] [6] .
Generalizing the Explanation: The system generalizes this explanation into a rule or pattern
that can cover other related instances, focusing on essential features while discarding
irrelevant ones [2] [5] .
Storing and Applying the Rule: The generalized rule is stored for future use, allowing the
system to solve similar problems quickly without re-analyzing each case [5] .
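
To make these steps concrete, here is a minimal Python sketch of the classic "cup" example. The domain theory, operationality set, and training facts below are illustrative assumptions, not taken from the cited sources: a tiny backward chainer builds the explanation, and the generalization step keeps only the operational features the proof actually used.

```python
# Minimal EBL sketch (illustrative; domain theory and predicate names are assumed).
# Each rule maps a conclusion to the sub-goals that justify it for a single object,
# so no variable unification is needed.
DOMAIN_THEORY = {
    "cup":          ["liftable", "holds_liquid"],
    "liftable":     ["light", "has_handle"],
    "holds_liquid": ["upward_concavity"],
}
OPERATIONAL = {"light", "has_handle", "upward_concavity", "made_of_glass"}  # directly observable

def explain(goal, facts):
    """Backward-chain through the domain theory; return the operational leaves
    that prove `goal` for this training instance, or None if no proof exists."""
    if goal in OPERATIONAL:
        return [goal] if goal in facts else None
    if goal not in DOMAIN_THEORY:
        return None
    leaves = []
    for subgoal in DOMAIN_THEORY[goal]:
        sub = explain(subgoal, facts)
        if sub is None:
            return None
        leaves.extend(sub)
    return leaves

def generalize(goal, facts):
    """Keep only the operational features the explanation used, discarding
    irrelevant details of the training instance (the EBL generalization step)."""
    leaves = explain(goal, facts)
    return None if leaves is None else {"if_all": sorted(set(leaves)), "then": goal}

# One training instance with both relevant and irrelevant features.
example = {"light", "has_handle", "upward_concavity", "made_of_glass", "red"}
print(generalize("cup", example))
# {'if_all': ['has_handle', 'light', 'upward_concavity'], 'then': 'cup'}
```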

EBL Architecture and Inputs


The architecture of an EBL system typically includes components like a problem solver, a
generalizer, and an operational pruner. The problem solver takes inputs such as a specific goal
(a complex problem to solve), training instances (specific facts or examples), and inference rules
(existing knowledge) to create an explanation [1] . The generalizer then compares this
explanation with a knowledge base to produce a broader concept, while the operational pruner
ensures the final concept meets operational criteria, defining how the learned concept should be
expressed [1] .
EBL systems also rely on four key inputs:
A hypothesis space (possible conclusions).
A domain theory (axioms or rules about the domain).
Training examples (specific facts to refine hypotheses).
Operationality criteria (rules for determining which features are efficiently recognizable) [3] .

Advantages and Challenges


One of the primary advantages of EBL is its ability to learn from a single training instance,
making it highly efficient compared to methods requiring large datasets [2] [4] . By focusing on
explanations, it mimics human-like learning processes, potentially enhancing the interpretability
of AI models [5] . EBL is particularly effective in domains where a strong theory exists, as it can
deduce solutions and improve problem-solving efficiency [3] [7] .
However, EBL faces significant challenges. It heavily depends on the quality and completeness
of the domain theory. Incomplete, intractable, or inconsistent theories can hinder the system’s
ability to build explanations or may lead to incorrect generalizations [6] [7] . Additionally, as more
generalized rules or proofs are learned, the computational cost of applying them can become
prohibitive, a problem known as the "utility problem" [2] [3] .

Applications of EBL
EBL has been applied across various domains in AI, demonstrating its versatility:
Medical Diagnosis: EBL algorithms analyze previously diagnosed cases to identify patterns
and underlying causes of symptoms, leading to more accurate diagnoses [1] .
Robot Navigation: Robots can learn to navigate complex environments by generalizing
rules from past successful navigation attempts [1] .
Fraud Detection: EBL helps detect fraudulent behavior by extracting principles from past
fraud cases and applying them to new scenarios [1] .
Natural Language Processing (NLP): EBL has been used to adapt language grammars to
specific applications, such as parsing and surface generation in large-scale systems [2] .
Chess and Game Strategies: Programs learn optimal moves by focusing on critical features
of specific positions, using the rules of the game as a domain theory [2] [3] .

Conclusion
Explanation-Based Learning is a powerful approach in AI that prioritizes understanding and
generalizing from specific examples using robust domain knowledge. By focusing on logical
explanations rather than statistical patterns, EBL enables efficient problem-solving and has
wide-ranging applications from medical diagnosis to robotics. However, its reliance on
comprehensive domain theories and the computational challenges of scaling learned rules
highlight areas for further development in making EBL more adaptable to imperfect or
incomplete knowledge bases.

what are the advantages and disadvantages of Genetic Algorithm
Genetic Algorithms (GAs) are a type of evolutionary algorithm inspired by natural selection,
widely used in machine learning and optimization problems. They mimic biological evolution to
iteratively improve solutions through processes like selection, crossover, and mutation. Below, I
outline the key advantages and disadvantages of GAs based on their characteristics and
practical applications.

Advantages of Genetic Algorithms


Global Search Capability: GAs excel at exploring large and complex solution spaces,
reducing the likelihood of getting stuck in local optima. This makes them ideal for nonlinear,
high-dimensional problems where traditional methods may fail [8] [9] [10] .
Flexibility and Versatility: They can be applied to a wide range of problems, including
optimization, machine learning, robotics, and bioinformatics. GAs handle discrete,
continuous, and multi-objective problems effectively [8] [10] [11] .
Robustness to Noise: GAs perform well in noisy environments or with incomplete data,
making them suitable for real-world applications where information may be imprecise or
uncertain [10] [12] [11] .
Parallel Processing: Their ability to evaluate multiple candidate solutions simultaneously
allows for efficient use of parallel computing resources, speeding up the process for large-
scale problems [10] [12] .
No Need for Derivative Information: Unlike gradient-based methods, GAs do not require
derivative or gradient information, which is beneficial for problems where such data is
unavailable or difficult to compute [8] [12] [13] .
Improving Solutions Over Time: GAs often produce solutions that improve with each
generation, leveraging historical data to guide the search toward better outcomes [8] [11] [13] .

Disadvantages of Genetic Algorithms


Computational Complexity: GAs can be computationally expensive, especially for complex
problems or large search spaces. Repeated fitness function evaluations can lead to long
processing times, making them less suitable for real-time applications [14] [12] [11] .
Premature Convergence: There is a risk of converging too early to suboptimal or local
optima, particularly if the population lacks diversity or if parameters are not well-tuned. This
can prevent finding the global optimum [14] [12] [15] .
Parameter Sensitivity: The performance of GAs heavily depends on the tuning of
parameters like population size, mutation rate, and crossover rate. Finding the right settings
often requires time-consuming trial and error [14] [11] [15] .
Difficulty in Fitness Function Design: Designing an effective fitness function can be
challenging and may require domain-specific knowledge. A poorly designed function can
lead to incorrect or suboptimal solutions [12] [15] .
Limited Suitability for Simple or Analytical Problems: GAs are not efficient for
straightforward problems where traditional analytical methods can provide faster solutions
with fewer computational steps [13] [15] .
Challenges with Representation and Constraints: Choosing an appropriate representation
for solutions can be difficult, and handling constraints in optimization problems often
requires additional mechanisms, which can complicate implementation [11] [15] .
No Guarantee of Optimal Solutions: GAs do not always guarantee the best solution, as
they may settle on near-optimal or suboptimal results, especially in complex or dynamic
environments [11] [16] [17] .
In summary, Genetic Algorithms offer powerful tools for solving complex optimization problems
due to their global search capabilities and adaptability. However, their computational demands,
sensitivity to parameters, and risk of premature convergence highlight the need for careful
design and application, particularly in scenarios where speed or guaranteed optimality is critical.

Cycle of Genetic Algorithm and Types of Crossover Techniques (Explained Simply and in Detail)

Cycle of Genetic Algorithm


A Genetic Algorithm (GA) is a method inspired by natural evolution to solve optimization
problems. The cycle of a genetic algorithm involves several steps:
1. Initialization:
Start by creating a population of possible solutions (called chromosomes or individuals).
Each solution is usually represented as a string (like a sequence of 0s and 1s).
2. Evaluation:
Each individual in the population is evaluated using a fitness function. This function
measures how good or bad each solution is for the problem.
3. Selection:
Individuals with higher fitness are more likely to be chosen as parents for the next
generation. This mimics natural selection, where the fittest survive and reproduce.
4. Crossover (Recombination):
Selected parents are paired and combined to create new offspring. This is done using
crossover techniques (explained below).
5. Mutation:
Some offspring undergo random changes (mutations) to maintain diversity in the population
and explore new solutions.
6. Replacement:
The new generation replaces the old one, and the cycle repeats from the evaluation step.
7. Termination:
The algorithm stops when a stopping condition is met (like a maximum number of
generations or a satisfactory fitness level).
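
The cycle above fits in a few lines of Python. The sketch below assumes a toy problem (maximize the number of 1s in a bit string); the parameter values and helper names are illustrative, not a reference implementation.

```python
# Toy GA illustrating the cycle described above (illustrative parameters and names).
import random

GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.01

def fitness(chromosome):                       # 2. Evaluation: count of 1s, higher is fitter
    return sum(chromosome)

def select(population):                        # 3. Selection: tournament of two
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):                         # 4. Crossover: one-point
    point = random.randint(1, GENES - 1)
    return p1[:point] + p2[point:]

def mutate(chromosome):                        # 5. Mutation: flip bits with small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in chromosome]

# 1. Initialization: random bit strings
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):                   # 6./7. Replacement until the generation limit
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print(max(fitness(c) for c in population))     # best fitness in the final generation
```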

Different Types of Crossover Techniques


Crossover is a key process in genetic algorithms where two or more parent solutions combine to
produce new offspring. Here are the main types of crossover techniques explained simply:

1. One-Point Crossover
A single crossover point is chosen randomly along the parent chromosomes.
All data before this point is taken from one parent, and all data after this point is taken from
the other parent.
Example:
If Parent 1 is 110|0110 and Parent 2 is 101|1001 (| is the crossover point), the offspring could
be 1101001.

2. Two-Point Crossover
Two crossover points are chosen.
Data between these points is swapped between the parents.
Example:
If Parent 1 is 11|001|10 and Parent 2 is 10|110|01, offspring could be 11|110|10.

3. Uniform Crossover
Each gene (bit) in the offspring is chosen randomly from either parent.
For each bit, a coin is flipped to decide which parent’s gene to use.
Example:
Parent 1: 110010
Parent 2: 101101
Offspring: 111100 (bits chosen randomly from either parent).
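
For concreteness, here are minimal Python sketches of the three bit-string crossovers above, operating on lists of 0/1 genes (the function names and example parents are illustrative).

```python
import random

def one_point(p1, p2):
    point = random.randint(1, len(p1) - 1)               # single random cut point
    return p1[:point] + p2[point:]

def two_point(p1, p2):
    a, b = sorted(random.sample(range(1, len(p1)), 2))   # two distinct cut points
    return p1[:a] + p2[a:b] + p1[b:]                     # swap the middle segment

def uniform(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]  # coin flip per gene

parent1 = [1, 1, 0, 0, 1, 1, 0]
parent2 = [1, 0, 1, 1, 0, 0, 1]
print(one_point(parent1, parent2), two_point(parent1, parent2), uniform(parent1, parent2))
```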

4. Half Uniform Crossover (HUX)


Similar to uniform crossover, but only half of the genes that differ between the parents are
swapped.
5. Order Crossover (OX)
Used when the chromosome represents an ordered list (like a path in the traveling salesman
problem).
A segment is copied from one parent, and the remaining positions are filled in order from the
other parent, skipping duplicates.
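
Below is a minimal sketch matching the description above: copy a segment from one parent, then fill the remaining positions in the other parent's order while skipping duplicates. (This is a simplified variant; the classical OX starts filling after the copied segment and wraps around.)

```python
import random

def order_crossover(p1, p2):
    a, b = sorted(random.sample(range(len(p1) + 1), 2))   # segment boundaries
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                                   # copy a segment from parent 1
    fill = [g for g in p2 if g not in child]               # remaining genes, in parent-2 order
    for i in range(len(child)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

route1 = ["A", "B", "C", "D", "E"]   # e.g. city orderings in a travelling-salesman tour
route2 = ["C", "A", "E", "B", "D"]
print(order_crossover(route1, route2))
```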

6. Partially Mapped Crossover (PMX)


Also used for ordered chromosomes.
A segment is mapped between parents, and the rest is filled by mapping the remaining
elements to avoid duplicates.

7. Cycle Crossover (CX)


Positions are grouped into cycles, and each cycle of genes is copied wholesale from one parent or the other, so every gene in the offspring keeps a position it held in one of its parents.

8. Arithmetic Crossover
Used for numeric chromosomes.
Offspring genes are created by taking a weighted average of the parents’ genes.
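
As a quick illustration, a per-gene weighted average is all that is needed (alpha below is an assumed blend weight).

```python
def arithmetic_crossover(p1, p2, alpha=0.5):
    # Offspring gene = alpha * parent1 gene + (1 - alpha) * parent2 gene
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

print(arithmetic_crossover([1.0, 4.0, 2.0], [3.0, 0.0, 2.0]))  # [2.0, 2.0, 2.0]
```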

9. Three-Parent Crossover
Offspring are created from three parents, usually bit by bit: if the first two parents agree on a bit, the child takes that bit; otherwise, it takes the corresponding bit from the third parent.

Summary Table of Crossover Types


Crossover Type | How It Works (Simple) | Best For
One-Point | Swap after one point | General use
Two-Point | Swap between two points | General use
Uniform | Randomly pick each gene from either parent | General use
Half Uniform (HUX) | Swap half of differing genes | General use
Order (OX) | Copy a segment, fill rest in order | Ordered lists
Partially Mapped (PMX) | Map a segment, fill rest with mapping | Ordered lists
Cycle (CX) | Swap in cycles | Ordered lists
Arithmetic | Weighted average of genes | Numeric chromosomes
Three-Parent | Use majority or third parent for each gene | Increasing diversity

In summary:
Genetic algorithms mimic natural evolution to solve problems. The cycle involves creating a
population, evaluating fitness, selecting the best, combining them with crossover, introducing
mutations, and repeating the process. Crossover techniques are various ways to mix parent
solutions to create new ones, each suited for different types of problems and data structures [18]
[19] .

what about cut and splice crossover


The cut and splice crossover is a crossover technique used in genetic algorithms to create
offspring by combining segments from two parent solutions. Here’s a simple but detailed
explanation:

What is Cut and Splice Crossover?


In cut and splice crossover, you select a random cut point in each parent chromosome (the
position where you will split the parent’s genetic material).
You then take the first segment from the first parent up to its cut point, and the second
segment from the second parent after its cut point.
These two segments are joined (“spliced”) together to form a new offspring.
Key feature:
Unlike standard one-point crossover (where both parents are cut at the same position), in cut
and splice crossover, the cut points can be at different positions in each parent. This means the
resulting offspring can be longer or shorter than the parents, allowing for variable-length
chromosomes [20] [21] .

How Does It Work? (Step-by-Step)


1. Choose a cut point in Parent 1.
2. Choose a (possibly different) cut point in Parent 2.
3. Take the left segment from Parent 1 (from the start to its cut point).
4. Take the right segment from Parent 2 (from its cut point to the end).
5. Combine these two segments to create the offspring.
Example:
Parent 1: 110|0110 (cut after the third bit)
Parent 2: 1011|001 (cut after the fourth bit)
Offspring:
Take 110 from Parent 1 and 001 from Parent 2 → 110001
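
A minimal Python sketch of the steps above; the cut points are chosen independently in each parent, so the offspring length can differ from both parents (the helper name and example strings are illustrative).

```python
import random

def cut_and_splice(p1, p2):
    cut1 = random.randint(1, len(p1) - 1)   # cut point in parent 1
    cut2 = random.randint(1, len(p2) - 1)   # possibly different cut point in parent 2
    return p1[:cut1] + p2[cut2:]            # left piece of parent 1 + right piece of parent 2

parent1, parent2 = "1100110", "1011001"
print(cut_and_splice(parent1, parent2))     # e.g. "110001" if the cuts fall after bits 3 and 4
```
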
Advantages
Allows variable-length offspring: This is useful for problems where solutions of different
sizes are needed.
Increases diversity: By allowing different cut points, it can create more diverse offspring
compared to fixed-point crossover.

Disadvantages
May disrupt good building blocks: Because the segments joined may not always align well,
useful patterns in the parents can be broken up.
Can create invalid solutions: In some problems (like the traveling salesman problem),
splicing may create duplicates or omit elements, so extra checks or repairs may be
needed [21] .

Where is it Used?
Cut and splice crossover is often used in problems where the length of the solution can vary,
such as evolving programs, variable-length strings, or cluster structure optimization [21] [22]
[23] .

Variants like plane-cut-splice or sphere-cut-splice are used in molecular or cluster structure optimization, where a geometric plane or sphere is used to cut parent structures and splice them together [22] [23] .

Summary Table
Step | Parent 1 | Parent 2 | Offspring
Choose cut | e.g., after 3rd bit | e.g., after 4th bit | -
Take segments | 110 | 001 | 110001

In short:
Cut and splice crossover is a flexible way to combine parents in genetic algorithms, especially
when variable-length solutions are needed. It increases diversity but may require extra care to
maintain valid solutions [21] [22] [23] .

explain Forms of learning, inductive learning, learning decision trees, explanation based learning, learning using relevance information
Here is a detailed yet simple explanation of the requested topics:

Forms of Learning in Artificial Intelligence


There are several main forms of learning in AI, each with distinct characteristics:
Supervised Learning:
The system learns from labeled data (where the correct answer is provided). It tries to map
inputs to outputs, such as classifying emails as spam or not spam [24] [25] [26] [27] .
Unsupervised Learning:
The system learns from unlabeled data, discovering patterns or groupings without explicit
instructions (like grouping customers by purchasing behavior) [24] [25] [26] [27] .
Reinforcement Learning:
The system learns by trial and error, receiving rewards or penalties for its actions, and
gradually improving its strategy (used in robotics and games) [24] [25] [26] [27] .
Semi-supervised and Self-supervised Learning:
These are hybrid approaches that use a mix of labeled and unlabeled data or allow the
model to generate its own labels [27] .

Inductive Learning
Definition:
Inductive learning is when a system learns general rules or patterns from specific examples.
For example, if you show a computer many pictures of apples and label them, it can learn
what features define an apple and recognize new apples it hasn't seen before [28] [29] [30] [31]
[32] .

How it Works:
1. The system observes specific, labeled examples (input-output pairs).
2. It tries to find patterns or rules that explain these examples.
3. It generalizes these rules to make predictions on new, unseen data [28] [31] .
Example:
If a model is trained with data showing that round, red fruits are apples, it can learn to
identify apples in new data based on those patterns—even if it hasn’t seen those exact
fruits before [31] .
Types:
Instance-based learning: Remembers all examples and compares new data to them.
Model-based learning: Builds a general model (like a decision tree or neural network)
from the data [28] .
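
As a tiny illustration of the instance-based flavour described above, the sketch below classifies a new fruit by its nearest stored example (the feature encoding and data are hypothetical).

```python
def nearest_neighbour(examples, query):
    """examples: list of (feature_tuple, label) pairs; query: feature tuple."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda ex: squared_distance(ex[0], query))[1]

# Features are (roundness, redness) on a 0-1 scale; labels are hypothetical.
fruit_examples = [((0.9, 0.8), "apple"), ((0.9, 0.9), "apple"), ((0.2, 0.9), "strawberry")]
print(nearest_neighbour(fruit_examples, (0.8, 0.7)))  # -> "apple"
```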

Learning Decision Trees


What is a Decision Tree?
A decision tree is a flowchart-like structure used for making decisions or predictions. Each
internal node represents a test on an attribute, each branch represents the outcome, and
each leaf node represents a class label or decision [33] [34] [35] [36] .
How Learning Happens:
1. The algorithm starts with all training data at the root.
2. It selects the attribute that best splits the data (using criteria like information gain).
3. The data is split, and the process repeats for each branch.
4. The process stops when all data in a node belong to the same class or no further
splitting improves the model [33] [34] [36] .
Use:
Decision trees are used for both classification (e.g., yes/no decisions) and regression
(predicting numbers) [34] [35] [36] .
Advantages:
Easy to understand and interpret.
Can handle both numerical and categorical data [34] [35] .
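
To make step 2 of the learning procedure above concrete, the sketch below computes information gain for candidate attributes on a toy dataset (attribute names and values are hypothetical); a full tree builder would choose the highest-gain attribute at each node and recurse.

```python
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(rows, attribute, target="play"):
    base = entropy([r[target] for r in rows])
    remainder = 0.0
    for value in {r[attribute] for r in rows}:          # each branch of the candidate split
        subset = [r[target] for r in rows if r[attribute] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return base - remainder

data = [  # toy weather-style dataset (hypothetical)
    {"outlook": "sunny",    "windy": False, "play": "no"},
    {"outlook": "sunny",    "windy": True,  "play": "no"},
    {"outlook": "overcast", "windy": False, "play": "yes"},
    {"outlook": "rainy",    "windy": False, "play": "yes"},
    {"outlook": "rainy",    "windy": True,  "play": "no"},
]
print(information_gain(data, "outlook"), information_gain(data, "windy"))
```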

Explanation-Based Learning (EBL)


What is EBL?
Explanation-based learning is a method where the system learns by analyzing specific
examples deeply, using domain knowledge to understand why a solution works, and then
generalizing this explanation to similar problems [37] [38] .
How it Works:
1. The system is given a specific problem and a partial solution.
2. It uses domain knowledge to explain why the solution works.
3. It extracts the underlying rules or principles from this explanation.
4. These rules are then applied to solve new, similar problems [37] [38] .
Example:
In medical diagnosis, EBL can analyze a diagnosed case, understand the reasoning behind
the diagnosis, and use this reasoning to diagnose new cases [37] .
Key Point:
EBL is powerful because it can generalize from a single example if a strong domain theory
exists, but it relies heavily on having good background knowledge [38] .
Learning Using Relevance Information
What is Relevance-Based Learning?
This approach focuses on identifying and using the most relevant features or information for
learning and decision-making. The idea is that not all information is equally useful; by
focusing on what’s most relevant, learning becomes more efficient and effective [39] [40] [41] .
How it Works:
The system evaluates which features or data points are most important for the task.
It prioritizes learning from and using these relevant features.
This can improve accuracy, reduce complexity, and make models more interpretable.
Example:
In information retrieval (like search engines), machine learning models learn which features
(such as keywords, user history, or document context) are most relevant to provide
accurate and personalized search results [42] .
Importance:
Relevance-based learning helps in creating models that are not only accurate but also
efficient, as they ignore noise and focus on what truly matters for the task at hand [40] [41] .
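
As a rough sketch of the idea, the snippet below scores each feature by how well its per-value majority label predicts the target and then ranks the features (the scoring rule, feature names, and data are illustrative assumptions, not a standard algorithm).

```python
from collections import Counter, defaultdict

def relevance(rows, feature, target):
    """Fraction of rows explained by taking the majority label within each feature value."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[feature]].append(row[target])
    correct = sum(Counter(labels).most_common(1)[0][1] for labels in groups.values())
    return correct / len(rows)

data = [  # hypothetical search-ranking examples
    {"keyword_match": 1, "doc_length": "short", "relevant": 1},
    {"keyword_match": 1, "doc_length": "long",  "relevant": 1},
    {"keyword_match": 0, "doc_length": "short", "relevant": 0},
    {"keyword_match": 0, "doc_length": "long",  "relevant": 0},
    {"keyword_match": 1, "doc_length": "short", "relevant": 1},
    {"keyword_match": 0, "doc_length": "long",  "relevant": 0},
]
scores = {f: relevance(data, f, "relevant") for f in ("keyword_match", "doc_length")}
print(sorted(scores, key=scores.get, reverse=True))   # most relevant feature first
```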

In summary:
Forms of learning in AI include supervised, unsupervised, reinforcement, and hybrid types.
Inductive learning generalizes rules from specific examples.
Decision tree learning builds a tree structure to make decisions based on input features.
Explanation-based learning uses deep analysis and domain knowledge to generalize from
specific cases.
Relevance-based learning focuses on the most important information to improve learning
efficiency and effectiveness.

what's the difference between inductive learning and experience based learning
Here’s a clear and detailed comparison between inductive learning and experience-based
(experiential) learning:

Inductive Learning
Definition:
Inductive learning is when learners are presented with specific examples, data, or cases,
and from these, they infer general principles, rules, or patterns [43] [44] [45] .
Process:
1. Learners observe or analyze examples.
2. They identify patterns or regularities.
3. They formulate general rules or concepts based on these observations.
Focus:
The main focus is on moving from the specific (examples) to the general (rules/principles).
Example:
If students are shown several examples of mammals and non-mammals, and asked to figure
out what makes an animal a mammal, they will look for common traits and form the rule
themselves.
Key Feature:
The learning is driven by the process of inferring or discovering generalizations from specific
instances, often guided by the teacher or system [43] [45] .

Experience-Based (Experiential) Learning


Definition:
Experience-based learning (or experiential learning) is a process where the learner’s own
experiences—past or present—are central to the learning process. Learners reflect on,
analyze, and draw meaning from these experiences [46] [47] .
Process:
1. The learner has a concrete experience (doing, observing, or participating in something).
2. They reflect on that experience.
3. They conceptualize or make sense of what happened.
4. They experiment or apply what they learned to new situations (Kolb’s cycle) [46] .
Focus:
The emphasis is on direct engagement, reflection, and personal meaning-making from real
or simulated experiences [47] .
Example:
A student participates in a group project, then reflects on what worked, what didn’t, and
how to improve teamwork in the future.
Key Feature:
The learner’s own active participation and reflection are essential. The process is holistic,
involving intellect, emotions, and senses, and is often social or collaborative [47] .

Key Differences
Aspect | Inductive Learning | Experience-Based Learning
Source of Learning | Specific examples, cases, or data | The learner's own direct experiences
Main Activity | Inferring/generalizing rules from examples | Reflecting on and making sense of experiences
Direction | From specific to general | Cyclical: experience → reflection → conceptualization → experimentation
Role of Reflection | May be present, but the focus is on pattern-finding | Central; reflection is a core step
Learner's Role | Observer, analyzer, rule-formulator | Active participant, reflector, experimenter
Example | Inferring grammar rules from sample sentences | Learning teamwork by participating in a group task
Goal | Discover general principles or rules | Personal growth, skill development, and understanding through experience

In summary:
Inductive learning is about discovering general rules from specific examples.
Experience-based learning is about learning through direct experiences and reflection,
making meaning from what one does or observes.
Both are learner-centered, but they differ in what drives the learning process: inductive learning
is driven by analysis of examples, while experience-based learning is driven by active
engagement and reflection on real experiences [43] [46] [45] [47] .

1. https://www.scaler.com/topics/artificial-intelligence-tutorial/explanation-based-learning-in-artificial-intelligence/
2. https://indiaai.gov.in/article/understanding-explanation-based-learning-in-ai
3. https://en.wikipedia.org/wiki/Explanation-based_learning
4. https://dl.acm.org/doi/10.1145/66443.66445
5. https://unogeeks.com/explanation-based-learning-in-artificial-intelligence/
6. https://www.d.umn.edu/~rmaclin/cs5751/notes/Chapter11-1PerPage.pdf
7. https://www.slideshare.net/slideshow/ebl-machine-learningpdf/252440561
8. https://www.turing.com/kb/genetic-algorithm-applications-in-ml
9. https://www.appliedaicourse.com/blog/genetic-algorithm-in-machine-learning/
10. https://dev.to/adityabhuyan/understanding-genetic-algorithms-applications-benefits-and-challenges-in-soft-computing-4j71
11. https://www.upgrad.com/blog/genetic-algorithm-in-ai/
12. https://eyelit.ai/what-is-genetic-algorithm/
13. https://statusneo.com/genetic-algorithm-in-artificial-intelligence/
14. https://www.tutorchase.com/answers/ib/computer-science/what-are-the-limitations-of-genetic-algorithms
15. https://www.doc.ic.ac.uk/project/examples/2005/163/g0516312/Algorithms/Reality.html
16. https://becominghuman.ai/understanding-genetic-algorithms-a-use-case-in-organizational-field-2087c30fb61e
17. https://en.wikipedia.org/wiki/Genetic_algorithm
18. 1000367134.jpg
19. 1000367135.jpg
20. 1000367134.jpg
21. http://www.flipstar.com.br/portfolio/paulo/uploads/multiple_crossovers_paper_paulo.pdf
22. https://pubs.aip.org/aip/jcp/article/138/21/214303/192629/A-sphere-cut-splice-crossover-for-the-evolution-of
23. https://www.frontiersin.org/journals/chemistry/articles/10.3389/fchem.2019.00707/full
24. https://www.tutorialspoint.com/machine_learning_with_python/machine_learning_with_python_types_of_learning.htm
25. https://www.pecan.ai/blog/3-types-of-machine-learning/
26. https://www.slideshare.net/slideshow/forms-of-learning-in-ai/232827932
27. https://www.ibm.com/think/topics/machine-learning-types
28. https://www.scaler.com/topics/artificial-intelligence-tutorial/inductive-learning/
29. https://www.netlanguages.com/blog/index.php/2023/03/23/what-is-inductive-learning/
30. https://soulpageit.com/ai-glossary/inductive-learning-explained/
31. https://www.appliedaicourse.com/blog/inductive-learning-algorithm/
32. https://www.facultyfocus.com/articles/course-design-ideas/learning-through-examples-inductive-learning/
33. https://en.wikipedia.org/wiki/Decision_tree_learning
34. https://www.shiksha.com/online-courses/articles/understanding-decision-tree-algorithm-in-machine-learning/
35. https://www.ibm.com/think/topics/decision-trees
36. https://www.kdnuggets.com/2020/01/decision-tree-algorithm-explained.html
37. https://www.scaler.com/topics/artificial-intelligence-tutorial/explanation-based-learning-in-artificial-intelligence/
38. https://indiaai.gov.in/article/understanding-explanation-based-learning-in-ai
39. https://www.slideshare.net/slideshow/learning-in-ai-79938388/79938388
40. https://esheninger.blogspot.com/2018/04/relevance-is-fuel-of-learning.html
41. https://www.cambridge.org/elt/blog/2022/02/10/role-relevance-learner-engagement/
42. https://www.coveo.com/blog/ai-information-retrieval/
43. https://www.facultyfocus.com/articles/course-design-ideas/learning-through-examples-inductive-learning/
44. https://engr.ncsu.edu/wp-content/uploads/drive/1-MORshfST1zjmfEqk33HF1PCYP-HbW3L/2006-Inductive learning paper (JEE).pdf
45. https://lsa.umich.edu/technology-services/news-events/all-news/teaching-tip-of-the-week/inductive-learning-through-examples.html
46. https://www.simplypsychology.org/learning-kolb.html
47. http://complexworld.pbworks.com/f/Experience-based+learning.pdf
