
Artificial Intelligence and Machine Learning (BDS602) Module - 4

Understanding Data
BIVARIATE DATA AND MULTIVARIATE DATA

Bivariate data involves two variables, and bivariate analysis deals with the causes of, and relationships between, them. The aim is to find relationships in the data. Consider Table 2.3, which records the temperature in a shop and the sales of sweaters.

The aim of bivariate analysis is to find relationships among variables. The relationships can then be used in comparisons, in finding causes, and in further exploration. To do that, a graphical display of the data is necessary. One such graphical method is the scatter plot.

A scatter plot is used to visualize bivariate data. It is useful for plotting two variables, with or without nominal variables, to illustrate trends and to show differences. It is a plot between the explanatory and response variables: a 2D graph showing the relationship between two variables.

The scatter plot (refer to Figure 2.11) indicates strength, shape, direction, and the presence of outliers. It is useful in exploratory data analysis before calculating a correlation coefficient or fitting a regression curve.

Line graphs are similar to scatter plots. The Line Chart for sales data is shown in Figure 2.12.

Dept. of Data Science, A.I.T, CKM Page 1



Bivariate Statistics

Covariance and correlation are examples of bivariate statistics. Covariance is a measure of the joint variability of two random variables, say X and Y. Generally, random variables are represented in capital letters. It is written as covariance(X, Y) or COV(X, Y) and is used to measure how the two dimensions vary together. For data values x_i and y_i, the covariance is:

COV(X, Y) = (1/N) Σ_{i=1..N} (x_i − E(X)) (y_i − E(Y))

Here, the x_i and y_i are data values from X and Y, E(X) and E(Y) are the means of the x_i and y_i, and N is the number of data points. Note that COV(X, Y) is the same as COV(Y, X).
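As a quick check of the formula, here is a short Python sketch; the temperature/sales figures below are illustrative, not those of Table 2.3.

```python
def covariance(xs, ys):
    n = len(xs)
    mx = sum(xs) / n          # E(X)
    my = sum(ys) / n          # E(Y)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

temps = [5, 10, 15, 20, 25]
sales = [50, 40, 30, 20, 10]
print(covariance(temps, sales))   # negative: sales fall as temperature rises
print(covariance(sales, temps))   # COV(X, Y) == COV(Y, X)
```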

Correlation
The Pearson correlation coefficient is the most common test for determining any association between two
phenomena. It measures the strength and direction of a linear relationship between the x and y variables.


The correlation coefficient indicates the relationship between dimensions through its sign; the sign is often more informative than the actual value.
1. If the value is positive, it indicates that the dimensions increase together.
2. If the value is negative, it indicates that while one dimension increases, the other decreases.
3. If the value is zero, there is no linear relationship between the dimensions.
If two dimensions are highly correlated, it is better to remove one of them, as it is redundant.
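The Pearson coefficient can be computed directly from the covariance and the standard deviations. A sketch, again with illustrative numbers:

```python
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

temps = [5, 10, 15, 20, 25]
sales = [50, 40, 30, 20, 10]   # perfectly linear and decreasing
r = pearson_r(temps, sales)    # r is -1.0: perfect negative correlation
```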

MULTIVARIATE STATISTICS
In machine learning, almost all datasets are multivariate. Multivariate data analysis examines more than two variables, and often thousands of measurements need to be collected for one or more subjects.

Multivariate data is like bivariate data but may have more than two variables. Some multivariate analysis methods are regression analysis, principal component analysis, and path analysis.

The mean of multivariate data is a mean vector; for the three attributes in the running example, it is (2, 7.5, 1.33). The variance of multivariate data becomes the covariance matrix. The mean vector is also called the centroid, and the covariance matrix is also called the dispersion matrix.


Heatmap

A heatmap is a graphical representation of a 2D matrix: it takes a matrix as input and colours each cell. Darker colours indicate larger values and lighter colours indicate smaller values. The advantage of this method is that humans perceive colours well, so by colour shading, larger values can be spotted quickly. For example, in vehicle traffic data, heavy-traffic regions can be differentiated from low-traffic regions through a heatmap.

In Figure 2.13, patient data highlighting weight and health status is plotted. Here, the X axis shows weights and the Y axis shows patient counts. The dark-coloured regions highlight patient weights versus counts by health status.

Pairplot

A pairplot (or scatter matrix) is a data visualization technique for multivariate data. A scatter matrix consists of several pairwise scatter plots of the variables of the multivariate data, with all the results presented in a matrix format. By visual examination of the chart, one can easily find relationships among the variables, such as correlations between them.

A random matrix of three columns is chosen and the relationships between the columns are plotted as a pairplot (or scatter matrix), as shown in Figure 2.14.
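Assuming NumPy is available, the pairwise correlations that underlie such a scatter matrix can be computed with np.corrcoef; the three synthetic columns below stand in for the random matrix of Figure 2.14.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=200)
b = 2 * a + rng.normal(scale=0.1, size=200)  # nearly a linear function of a
c = rng.normal(size=200)                     # unrelated to a and b
corr = np.corrcoef(np.vstack([a, b, c]))     # 3x3 pairwise correlation matrix
```

A pairplot of these columns would show a tight diagonal band for (a, b) and a shapeless cloud for (a, c) and (b, c).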


Multivariate Essential Mathematics

GAUSSIAN ELIMINATION
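The worked example for this section is not reproduced here; as a stand-in, a minimal Gaussian elimination solver in Python (forward elimination with partial pivoting, then back substitution):

```python
def gauss_solve(A, b):
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # partial pivoting
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                  # eliminate below the pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])   # x ≈ [0.8, 1.4]
```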


MATRIX DECOMPOSITION
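Similarly, one common matrix decomposition, the LU (Doolittle) factorization A = LU with unit lower-triangular L, can be sketched as follows; this is an illustrative implementation, not the text's worked example:

```python
def lu_decompose(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):        # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0                # Doolittle: unit diagonal on L
        for j in range(i + 1, n):    # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
```

(No pivoting is performed, so a zero pivot would fail; pivoted variants handle that.)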

PROBABILITY DISTRIBUTIONS

A probability distribution of a variable, say X, summarises the probabilities associated with X’s events. A distribution is a parameterized mathematical function; in other words, a distribution is a function that describes the relationship between observations in a sample space.

Consider a set of data. The data is said to follow a distribution if it obeys a mathematical function that characterizes that distribution. The function can then be used to calculate the probability of individual observations.
Probability distributions are of two types:

1. Discrete probability distributions
2. Continuous probability distributions

The relationship between the events of a continuous random variable and their probabilities is called a continuous probability distribution. It is summarized by a probability density function (PDF). The PDF describes the relative likelihood of observing an instance, and its plot shows the shape of the distribution. The cumulative distribution function (CDF) computes the probability that an observation is less than or equal to a given value.

Both the PDF and CDF are defined over continuous values. The discrete equivalent of the PDF in a discrete distribution is called the probability mass function (PMF).

For a continuous variable, the probability of an exact outcome cannot be read off directly; it is computed as the area under the PDF over a small interval around the specific outcome. The CDF accumulates this area up to a given value.
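The CDF-as-area idea can be illustrated for the normal distribution using the standard error-function identity Φ(x) = (1 + erf(x/√2))/2:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # P(X <= x) for a normal distribution, via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# probability of falling in an interval = area under the PDF there,
# obtained as a difference of two CDF values
p = normal_cdf(1.0) - normal_cdf(-1.0)   # about 0.6827, the one-sigma rule
```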

Continuous Probability Distributions

Normal, rectangular and exponential distributions fall under this category.

1. Normal distribution

The normal distribution is a continuous probability distribution, also known as the Gaussian distribution or bell-shaped curve distribution. It is the most common distribution function; the shape of the distribution is a typical bell curve. In a normal distribution, data tends to cluster around a central value with no bias to the left or right. The heights of students, the blood pressure of a population, and the marks scored in a class can all be approximated by a normal distribution.

2. Rectangular distribution

3. Exponential Distribution

This is a continuous probability distribution used to describe the time between events in a Poisson process. Its PDF is

f(x) = λ e^(−λx), for x ≥ 0

where x is a random variable and λ is called the rate parameter. The mean and standard deviation of the exponential distribution are both given by β, where β = 1/λ.
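A small sketch of the exponential PDF and CDF, with an illustrative rate λ = 2:

```python
import math

def exp_pdf(x, lam):
    # f(x) = lam * e^(-lam * x) for x >= 0
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def exp_cdf(x, lam):
    # P(X <= x) = 1 - e^(-lam * x)
    return 1.0 - math.exp(-lam * x) if x >= 0 else 0.0

lam = 2.0
beta = 1.0 / lam        # mean (and standard deviation) of the distribution
```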


Discrete Distribution

Binomial, Poisson and Bernoulli distributions fall under this category.

1. Binomial Distribution

A binomial experiment has only two outcomes per trial: success and failure. Each individual trial is known as a Bernoulli trial. The probability of exactly k successes in n trials is

P(X = k) = C(n, k) p^k (1 − p)^(n − k)

where p is the probability of success on each trial, k is the number of successes, and n is the total number of trials.
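The binomial PMF can be computed directly with math.comb:

```python
from math import comb

def binomial_pmf(k, n, p):
    # probability of exactly k successes in n independent trials
    return comb(n, k) * p**k * (1 - p)**(n - k)
```

Summing the PMF over k = 0..n gives 1, as any valid distribution must.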

2. Poisson Distribution
It is a widely useful distribution. Given an interval of time, this distribution models the probability of a given number of events k occurring when events happen independently at a mean rate λ. Its PMF is

P(X = x) = λ^x e^(−λ) / x!

where x is the number of times the event occurs and λ is the mean number of times an event occurs in the interval.
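And the Poisson PMF:

```python
from math import exp, factorial

def poisson_pmf(x, lam):
    # probability of observing exactly x events when the mean rate is lam
    return lam**x * exp(-lam) / factorial(x)
```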

3. Bernoulli Distribution


BASICS OF LEARNING THEORY

INTRODUCTION TO LEARNING AND ITS TYPES

The process of acquiring knowledge and expertise through study, experience, or being taught is called learning. Generally, humans learn in different ways. To make machines learn, we need to simulate the strategies of human learning in machines.

There are two kinds of problems: well-posed and ill-posed. Computers can solve only well-posed problems, as these have well-defined specifications and have the following components inherent to them:
1. Class of learning tasks (T)
2. A measure of performance (P)
3. A source of experience (E)

The standard definition of learning proposed by Tom Mitchell is: a computer program is said to learn from experience E with respect to a class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E. Let us formalize the concept of learning as follows:

Let x be the input and X be the input space, the set of all inputs; Y is the output space, the set of all possible outputs, for example, yes/no.

It can be observed that the training samples and the target function depend on the given problem, whereas the learning algorithm and the hypothesis set are independent of it. Thus, informally, a learning model is the hypothesis set together with the learning algorithm:

Learning Model = Hypothesis Set + Learning Algorithm

Let us assume a problem of predicting a label for a given input data. Let D be the input dataset with both
positive and negative examples. Let y be the output with class 0 or 1. The simple learning model can be
given as:


This can be put into a single equation as follows:

h(x) = sign(Σ_i w_i x_i − θ)

where a positive result predicts class 1 and a non-positive result predicts class 0, with weights w_i and threshold θ. This is called the perceptron learning algorithm.
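A minimal sketch of the perceptron learning rule, trained here on the logical AND function; the dataset and the learning-rate/epoch settings are illustrative.

```python
def perceptron_predict(w, b, x):
    # threshold unit: class 1 if w·x + b > 0, else class 0
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def perceptron_train(samples, labels, lr=1.0, epochs=20):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - perceptron_predict(w, b, x)   # 0 if correct, else +1/-1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# learn the logical AND function (linearly separable, so training converges)
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = perceptron_train(X, y)
```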

Classical and Adaptive Machine Learning Systems

A classical machine learning system has components such as Input, Process, and Output. The input values are taken directly from the environment. These values are processed, and a hypothesis is generated as the output model. This model is then used for making predictions, and the predicted values are consumed by the environment.

In contrast to the classical systems, adaptive systems interact with the input for getting labelled data as
direct inputs are not available. This process is called reinforcement learning. In reinforcement learning, a
learning agent interacts with the environment and in return gets feedback. Based on the feedback, the
learning agent generates input samples for learning, which are used for generating the learning model.

Learning Types

There are different types of learning. Some of the different learning methods are as follows:

1. Learn by memorization, or learn by repetition, also called rote learning, is done by memorizing without understanding the logic or concept. Although rote learning is basically learning by repetition, from a machine learning perspective, learning occurs by simply comparing the input with existing knowledge for the same input data and producing the stored output if present.

2. Learn by examples, also called learning by experience or from previously acquired knowledge, is like finding an analogy: it performs inductive learning from observations that formulate a general concept. Inductive learning is also called discovery learning.



4. Learn by being taught by an expert or a teacher is generally called passive learning. However, there is a special kind of learning called active learning, where the learner can interactively query a teacher or expert to label unlabelled data instances with the desired outputs.

5. Learning by critical thinking, also called deductive learning, deduces new facts or conclusions from related known facts and information.

6. Self-learning, also called reinforcement learning, is self-directed learning that normally learns from mistakes through punishments and rewards.

7. Learning to solve problems is a type of cognitive learning where learning happens in the mind
and is possible by devising a methodology to achieve a goal. Here, the learner initially is not aware
of the solution or the way to achieve the goal but only knows the goal.

8. Learning by generalizing explanations, also called explanation-based learning (EBL), is another learning method that exploits domain knowledge from experts to improve the accuracy of concepts learned by supervised learning.

Acquiring general concept from specific instances of the training dataset is the main challenge of machine
learning.

INTRODUCTION TO COMPUTATION LEARNING THEORY

Many questions have been raised by mathematicians and logicians over time about how computers learn. Some of these questions are as follows:

1. How can a learning system predict the label of an unseen instance?
2. How close is the hypothesis h to the target function f, when f itself is unknown?
3. How many training samples are required?
4. Can we measure the performance of a learning system?
5. Is the solution obtained local or global?

These questions are the basis of a field called Computational Learning Theory, or COLT for short. It is a specialized field of study within machine learning. COLT deals with formal methods for learning systems, with frameworks for quantifying learning tasks and learning algorithms, and provides a fundamental basis for the study of machine learning. It covers Probably Approximately Correct (PAC) learning and the Vapnik-Chervonenkis (VC) dimension.

DESIGN OF A LEARNING SYSTEM

A system that is built around a learning algorithm is called a learning system. The design of such a system focuses on the following steps:
1. Choosing a training experience
2. Choosing a target function
3. Representation of a target function


4. Function approximation

Training Experience

Let us consider designing a chess-playing program. With direct experience, individual board states and the correct moves of the chess game are given directly. With indirect experience, only move sequences and final results are given. The training experience also depends on the presence of a supervisor who can label all valid moves for a board state. In the absence of a supervisor, the game agent plays against itself and learns the good moves, provided the training samples cover all scenarios, in other words, are distributed well enough for performance computation. If the training samples and testing samples have the same distribution, the results will be good.

Determine the Target Function

The next step is the determination of a target function. In this step, the type of knowledge that needs to be learnt is determined. With direct experience, a board move is selected and judged as good or not against all other moves; if it is the best move, it is chosen, giving a target function of the form B → M, where B is the set of board states and M is the set of legal moves. With indirect experience, all legal moves are considered and a score is generated for each; the move with the largest score is then chosen and executed.

Determine the Target Function Representation

The representation of the knowledge may be a table, a collection of rules, or a neural network. A common choice is a linear combination of board features x_i with weights w_i:

V̂(b) = w_0 + w_1 x_1 + w_2 x_2 + … + w_n x_n

Choosing an Approximation Algorithm for the Target Function

The focus is to choose weights that fit the given training samples effectively. The aim is to reduce the error over the training samples, given as:

E = Σ_b (V_train(b) − V̂(b))²

Here, b is a training sample and V̂(b) is the value predicted by the hypothesis. The approximation is carried out as:

 Computing the error as the difference between the training value and the predicted value: error(b) = V_train(b) − V̂(b).
 Then, for every board feature x_i, updating the weights as: w_i ← w_i + η · error(b) · x_i

Here, η is a small constant that moderates the size of the weight update.
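The update rule above is the LMS (least mean squares) rule; below is a sketch on an invented one-feature linear target (the features and training values are not from the chess example).

```python
def v_hat(w, x):
    # V̂(b) = w0 + w1*x1 + ... + wn*xn
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def lms_update(w, x, v_train, eta=0.05):
    err = v_train - v_hat(w, x)      # error(b)
    w[0] += eta * err                # w0 pairs with an implicit feature x0 = 1
    for i, xi in enumerate(x):
        w[i + 1] += eta * err * xi
    return w

# repeated sweeps over a toy linear target V(b) = 1 + 2*x1
w = [0.0, 0.0]
for _ in range(500):
    for x1 in (0.0, 1.0, 2.0, 3.0):
        lms_update(w, (x1,), 1.0 + 2.0 * x1)
```

After enough sweeps the weights approach (1, 2), the coefficients of the target.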

Thus, the learning system has the following components:

 A Performance system to allow the game to play against itself.
 A Critic system to generate training samples from the game traces.
 A Generalizer system to generate a hypothesis based on the samples.
 An Experiment Generator to create a new problem (an initial board state) based on the currently learnt function; this is sent as input to the performance system.

INTRODUCTION TO CONCEPT LEARNING

Concept learning is a learning strategy of acquiring abstract knowledge or inferring a general concept or
deriving a category from the given training samples. It is a process of abstraction and generalization from
the data.

Concept learning helps to classify an object that has a set of common, relevant features. Thus, it helps a learner compare and contrast categories based on the similarity and association of positive and negative instances in the training data. The learner tries to simplify by observing the common features across the training samples and then applies this simplified model to future samples. This task is also known as learning from experience.

Concept learning requires three things:


1. Input - a training dataset, which is a set of training instances, each labelled with the name of the concept or category to which it belongs. This past experience is used to train and build the model.
2. Output - the target concept or target function. It is a mapping function f(x) from input x to output y. The aim is to determine the specific or common features that identify an object; in other words, to find the hypothesis that determines the target concept. For example, the specific set of features that identifies an elephant among all animals.
3. Test - new instances to test the learned model.

Formally, concept learning is defined as: "Given a set of hypotheses, the learner searches through the hypothesis space to identify the best hypothesis that matches the target concept."

Consider the following set of training instances shown in Table 3.1.


Here, in this set of training instances, the independent attributes considered are 'Horns', 'Tail', 'Tusks', 'Paws', 'Fur', 'Color', 'Hooves' and 'Size'. The dependent attribute is 'Elephant'. The target concept is to identify whether the animal is an elephant.
Let us now take this example and understand further the concept of hypothesis.

Target Concept: Predict the type of animal - For example -'Elephant'.

Representation of a Hypothesis

A hypothesis 'h' approximates a target function 'f', representing the relationship between the independent attributes and the dependent attribute of the training instances. The hypothesis is the predicted approximate model that best maps the inputs to outputs. Each hypothesis is represented as a conjunction of attribute conditions in the antecedent part.

For example, (Tail = Short) ^ (Color = Black).

The set of hypotheses in the search space is called the hypotheses ('hypotheses' is the plural form of 'hypothesis'). Generally, 'H' is used to represent the set of hypotheses and 'h' is used to represent a candidate hypothesis.

Each attribute condition is a constraint on the attribute, represented as an attribute-value pair. In the antecedent of a hypothesis, each attribute condition can take the value '?' (any value is acceptable), a single specific value, or 'ϕ' (no value is acceptable).


Hypothesis Space

The hypothesis space is the set of all possible hypotheses that approximate the target function f. In other words, the set of all possible approximations of the target function can be defined as the hypothesis space. From this set of hypotheses, a machine learning algorithm determines the hypothesis that best describes the target function or best fits the outputs.

Every machine learning algorithm would represent the hypothesis space in a different manner about the
function that maps the input variables to output variables. For example, a regression algorithm represents
the hypothesis space as a linear function whereas a decision tree algorithm represents the hypothesis space
as a tree.

The set of hypotheses that can be generated by a learning algorithm can be further reduced by specifying a language bias. The subset of the hypothesis space that is consistent with all observed training instances is called the version space. The version space represents the only hypotheses that are used for classification.

For example, each of the attributes given in Table 3.1 has a finite set of possible values.

Heuristic Space Search

Heuristic search is a strategy that finds an optimized hypothesis/solution to a problem by iteratively improving the hypothesis/solution based on a given heuristic function or cost measure. Heuristic search methods generate a possible hypothesis that can be a solution in the hypothesis space, or a path from the initial state. This hypothesis is then tested against the target function (or goal state) to see if it is a real solution.

If the tested hypothesis is a real solution, it is selected. This method generally increases efficiency because it is guaranteed to find a better hypothesis, though not necessarily the best one. It is useful for solving tough problems which cannot be solved by any other method. A typical example problem solved by heuristic search is the travelling salesman problem.

Several commonly used heuristic search methods are hill climbing methods, constraint satisfaction
problems, best-first search, simulated-annealing, A* algorithm, and genetic algorithms.

Generalization and Specialization

To understand how this concept hierarchy is constructed, let us apply the general principle of the generalization/specialization relation. By generalization of the most specific hypothesis and by specialization of the most general hypothesis, the hypothesis space can be searched for an approximate hypothesis that matches all positive instances but does not match any negative instance.


Searching the Hypothesis Space

There are two ways of learning a hypothesis that is consistent with all training instances from the large hypothesis space.

1. Specialization - General to Specific learning


2. Generalization - Specific to General learning

Generalization- Specific to General Learning

This learning methodology will search through the hypothesis space for an approximate hypothesis by
generalizing the most specific hypothesis.

Specialization - General to Specific Learning

This learning methodology will search through the hypothesis space for an approximate hypothesis by
specializing the most general hypothesis.

Hypothesis Space Search by Find-S Algorithm

The Find-S algorithm is guaranteed to converge to the most specific hypothesis in H that is consistent with the positive instances in the training dataset. Provided the training data is consistent, the result will also be consistent with the negative instances. Thus, this algorithm considers only the positive instances and ignores the negative instances while generating the hypothesis. It initially starts with the most specific hypothesis.


Consider the training dataset of 4 instances shown in Table 3.2. It contains the details of the performance
of students and their likelihood of getting a job offer or not in their final semester. Apply the Find-S
algorithm.
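Since Table 3.2 is not reproduced here, the sketch below applies Find-S to an invented job-offer dataset with hypothetical attributes (CGPA, Interactiveness, Communication, Aptitude):

```python
def find_s(instances, labels):
    h = ['phi'] * len(instances[0])      # most specific hypothesis
    for x, label in zip(instances, labels):
        if label != 'Yes':
            continue                     # Find-S ignores negative instances
        for i, v in enumerate(x):
            if h[i] == 'phi':
                h[i] = v                 # adopt the first positive instance
            elif h[i] != v:
                h[i] = '?'               # generalize conflicting attributes
    return h

# invented stand-in data, not the textbook's Table 3.2
data = [
    ('>=9', 'Yes', 'Excellent', 'Good'),
    ('>=9', 'Yes', 'Good',      'Good'),
    ('<8',  'No',  'Good',      'Poor'),
    ('>=9', 'Yes', 'Good',      'Good'),
]
offers = ['Yes', 'Yes', 'No', 'Yes']
h = find_s(data, offers)   # ['>=9', 'Yes', '?', 'Good']
```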


Limitations of Find-S algorithm

1. The Find-S algorithm tries to find a hypothesis that is consistent with the positive instances, ignoring all negative instances. As long as the training dataset is consistent, the hypothesis found by this algorithm is also consistent.
2. The algorithm finds only one unique hypothesis, wherein there may be many other hypotheses that
are consistent with the training dataset.
3. Many times, the training dataset may contain some errors; hence such inconsistent data instances
can mislead this algorithm in determining the consistent hypothesis since it ignores negative
instances.

It is necessary to find the set of hypotheses that are consistent with the training data including the negative
examples. To overcome the limitations of Find-S algorithm, Candidate Elimination algorithm was proposed
to output the set of all hypotheses consistent with the training dataset.

Version Spaces

The version space contains the subset of hypotheses from the hypothesis space that is consistent with all
training instances in the training dataset.

List-Then-Eliminate Algorithm

The principal idea of this learning algorithm is to initialize the version space to contain all hypotheses and then eliminate any hypothesis found inconsistent with any training instance. Initially, the algorithm starts with a version space containing all hypotheses; it then scans each training instance and eliminates the hypotheses that are inconsistent with it. Finally, the algorithm outputs the list of remaining hypotheses, which are all consistent.

This algorithm works fine if the hypothesis space is finite but practically it is difficult to deploy this
algorithm. Hence, a variation of this idea is introduced in the Candidate Elimination algorithm.

Version Spaces and the Candidate Elimination Algorithm

Version space learning generates all consistent hypotheses. This algorithm computes the version space by combining two cases, namely:
 Specific to General learning - generalize S to include the positive example
 General to Specific learning - specialize G to exclude the negative example

Using the Candidate Elimination algorithm, we can compute the version space containing all (and only those) hypotheses from H that are consistent with the given observed sequence of training instances. The algorithm defines two boundaries: the 'general boundary' G, the set of all maximally general consistent hypotheses, and the 'specific boundary' S, the set of all maximally specific consistent hypotheses. Thus, the algorithm limits the version space to contain only those hypotheses bounded by the most general and the most specific ones, providing a compact representation of the List-Then-Eliminate algorithm.

Generating Positive Hypothesis ‘S’

If it is a positive example, refine S to include the positive instance; that is, generalize S. The hypothesis is the conjunction of S and the positive instance. When generalizing for the first positive instance, fill S with the attribute values of that positive instance. For each subsequent positive instance scanned, compare its attribute values with the S obtained in the previous iteration: if an attribute value of the positive instance differs from S, fill that field with a '?'; if the attribute values are the same, no change is required.

If it is a negative instance, it is skipped.

Generating Negative Hypothesis 'G'

If it is a negative instance, refine G to exclude the negative instance. Then, prune G to exclude all hypotheses in G that are inconsistent with the positive instances. The idea is to add to G all minimal specializations that exclude the negative instance while remaining consistent with the positive instances. The negative instances thus shape the general boundary.

If the attribute values of the positive and negative instances differ, fill that field with the positive instance's value, so that the hypothesis does not classify the negative instance as positive. If the attribute values of the positive and negative instances are the same, then G need not be specialized on that attribute and it retains a '?'.

Generating Version Space - [Consistent Hypothesis]


We need to take each combination of the sets in 'G' and check it against 'S'. Only when the combined set's fields match the fields in 'S' is it included in the version space as a consistent hypothesis.
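A simplified Candidate Elimination sketch for conjunctive hypotheses follows, run on Mitchell's classic EnjoySport data rather than the tables of this module; it assumes the first training example is positive and keeps S as a single hypothesis.

```python
def matches(h, x):
    # a conjunctive hypothesis matches x if every condition is '?' or equal
    return all(hv in ('?', xv) for hv, xv in zip(h, x))

def candidate_elimination(examples):
    n = len(examples[0][0])
    S = ['phi'] * n        # specific boundary (kept as a single hypothesis)
    G = [['?'] * n]        # general boundary
    for x, positive in examples:
        if positive:
            G = [g for g in G if matches(g, x)]     # prune inconsistent G
            for i in range(n):                      # minimally generalize S
                if S[i] == 'phi':
                    S[i] = x[i]
                elif S[i] != x[i]:
                    S[i] = '?'
        else:
            new_G = []
            for g in G:
                if not matches(g, x):
                    new_G.append(g)
                    continue
                # minimally specialize g so that it excludes the negative x
                for i in range(n):
                    if g[i] == '?' and S[i] != '?' and S[i] != x[i]:
                        g2 = list(g)
                        g2[i] = S[i]
                        new_G.append(g2)
            G = new_G
    return S, G

examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]
S, G = candidate_elimination(examples)
```

The final S and G bracket the version space: every consistent hypothesis lies between them.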


Deriving the Version Space


SIMILARITY BASED LEARNING

Instance- and Model-based Learning

Nearest-Neighbor Learning


Algorithm 4.1: k-NN
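The body of Algorithm 4.1 is not reproduced here; below is a minimal k-NN classifier with majority voting, on an invented two-cluster dataset.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    # sort training points by Euclidean distance to the query
    neighbours = sorted(zip(train_X, train_y),
                        key=lambda pair: math.dist(pair[0], query))
    top_k = [label for _, label in neighbours[:k]]
    return Counter(top_k).most_common(1)[0][0]   # majority vote

X = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
y = ['a', 'a', 'a', 'b', 'b', 'b']
```

A query near (1.5, 1.5) is classified 'a'; one near (8.5, 8.5) is classified 'b'.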

Weighted k-Nearest-Neighbor Algorithm
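The weighted variant replaces the majority vote with inverse-square distance weights, so closer neighbours count more; again an illustrative sketch, not the text's algorithm listing.

```python
import math
from collections import defaultdict

def weighted_knn_predict(train_X, train_y, query, k=3):
    pairs = sorted(zip(train_X, train_y),
                   key=lambda pair: math.dist(pair[0], query))
    votes = defaultdict(float)
    for x, label in pairs[:k]:
        d = math.dist(x, query)
        votes[label] += 1.0 / (d * d + 1e-9)   # inverse-square distance weight
    return max(votes, key=votes.get)

# one very close 'a' outweighs two farther 'b' neighbours
X = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
y = ['a', 'b', 'b']
```

With k = 3 a plain majority vote here would answer 'b', but the weighted vote answers 'a' for a query near the origin.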
