Machine Learning
1. Perceptrons:
• Structure: They consist of input nodes, each connected to an output node via
weighted connections.
• Mathematical Representation: output = 1 if w · x + b > 0, else 0; that is, a
weighted sum of the inputs plus a bias, passed through a step activation (see
the sketch after this list).
• Limitations: Perceptrons can only solve linearly separable problems (e.g.,
they cannot learn XOR) and have limited capacity for handling complex data
distributions.
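A minimal perceptron sketch in Python (an illustration, not from the notes; NumPy
and the hand-picked AND weights are assumptions):

import numpy as np

# Perceptron: weighted sum of inputs plus bias, passed through a
# step activation (output 1 if the sum exceeds 0, else 0).
def perceptron_predict(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

# Example: hand-chosen weights that realize logical AND, which is
# linearly separable (XOR, by contrast, admits no such weights).
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron_predict(np.array(x), w, b))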
2. Multilayer Perceptrons (MLPs):
• Definition: MLPs are a type of artificial neural network with multiple layers of
neurons, designed to overcome the limitations of perceptrons.
• Structure: They typically consist of an input layer, one or more hidden layers,
and an output layer.
• Activation Function: Each neuron in the hidden layers and the output layer
applies an activation function (e.g., sigmoid, ReLU, tanh; sketched after this
list) to introduce non-linearity into the network.
• Training: MLPs are trained using techniques like gradient descent and
backpropagation.
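The common activation choices in a small Python sketch (illustrative; assumes
NumPy):

import numpy as np

# Common non-linear activations for MLP hidden/output layers.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes to (0, 1)

def relu(z):
    return np.maximum(0.0, z)        # zeroes out negative inputs

def tanh(z):
    return np.tanh(z)                # squashes to (-1, 1)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), relu(z), tanh(z))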
3. Gradient Descent and the Delta Rule:
• Gradient Descent: An optimization procedure that iteratively moves the
weights in the direction of the negative gradient of the error function.
• Delta Rule: A learning rule used in neural networks for updating the weights
during training. For a linear unit it adjusts each weight along the negative
error gradient: Δw_i = η (t − o) x_i, where η is the learning rate, t the target,
and o the output (see the sketch below).
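A delta-rule training sketch for a single linear unit (the toy data, learning
rate, and epoch count are assumptions):

import numpy as np

# Delta rule: w <- w + eta * (t - o) * x, a step along the negative
# gradient of the squared error for a linear unit.
def train_delta(X, t, eta=0.1, epochs=100):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            o = np.dot(w, x_i)           # linear output
            w += eta * (t_i - o) * x_i   # delta-rule update
    return w

# Toy linearly separable data; the first column is a constant bias input.
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
t = np.array([0.0, 0.0, 0.0, 1.0])
print(train_delta(X, t))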
4. Multilayer Networks:
• Multilayer networks chain several layers of units, which lets them represent
functions (such as XOR) that no single perceptron can.
5. Backpropagation:
• Process: It involves a forward pass for prediction and a backward pass for
error propagation and weight updates. The algorithm uses the chain rule of
calculus to compute the gradient of the error function with respect to each
weight (sketched below).
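A compact backpropagation sketch: a two-layer sigmoid network trained on XOR, the
classic problem a single perceptron cannot solve (network size, learning rate, and
iteration count are assumptions):

import numpy as np

# Forward pass predicts; backward pass applies the chain rule to get
# error gradients; gradient descent then updates the weights.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = rng.normal(size=(2, 4)), rng.normal(size=(4, 1))
b1, b2 = np.zeros(4), np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for _ in range(20000):
    h = sig(X @ W1 + b1)                 # forward pass: hidden layer
    out = sig(h @ W2 + b2)               # forward pass: output
    d_out = (out - y) * out * (1 - out)  # chain rule at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # chain rule at the hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [0, 1, 1, 0]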
6. Generalization:
• Generalization is the ability of a trained network to perform well on unseen
data rather than merely memorizing the training set; techniques such as
regularization, early stopping, and cross-validation help prevent overfitting.
Deep Learning
Convolutional Neural Networks (CNNs) are a class of deep neural networks, most
commonly applied to analyzing visual imagery. They consist of multiple layers of
interconnected neurons, each processing small regions of the input data. Key
concepts in CNNs include:
• Convolution: Applies learnable filters across the input to produce feature
maps that respond to local patterns such as edges.
• Pooling: Reduces the spatial dimensions of the feature map, decreasing
computational complexity while retaining the most important information (see
the max-pooling sketch after this list).
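A 2x2 max-pooling sketch in Python (illustrative; assumes NumPy and an even-sized
feature map):

import numpy as np

# 2x2 max pooling: halve each spatial dimension, keeping the
# strongest activation in every 2x2 window.
def max_pool_2x2(fmap):
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_2x2(fmap))  # 4x4 feature map -> 2x2 map of window maxima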
Training the Network
Training a CNN involves feeding it input data and adjusting the network's weights
through backpropagation to minimize the difference between its output and the
desired output. Key steps in training include:
• Forward Pass: Propagates the input through the network to produce a
prediction.
• Loss Calculation: Measures the difference between the predicted output and
the actual target output (see the sketch after this list).
• Backward Pass and Weight Update: Backpropagates the loss gradient and
adjusts the weights, typically with gradient descent.
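Two common loss calculations, sketched in Python (the sample predictions and
targets are made up):

import numpy as np

# Mean squared error: typical for regression outputs.
def mse(pred, target):
    return np.mean((pred - target) ** 2)

# Binary cross-entropy: typical for probabilistic classifiers.
def cross_entropy(pred, target, eps=1e-12):
    pred = np.clip(pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

pred, target = np.array([0.9, 0.2, 0.7]), np.array([1.0, 0.0, 1.0])
print(mse(pred, target), cross_entropy(pred, target))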
Self-Driving Car
• Perception: Equip the car with sensors like LiDAR, radar, and cameras to
perceive the environment.
• Mapping and Localization: Create detailed maps and accurately localize the
car within them.
• Path Planning: Plan a safe and efficient route considering various factors like
traffic, road conditions, and regulations.
• Control: Execute the planned trajectory through steering, throttle, and
braking commands.
Reinforcement Learning
1. Key Elements
• Agent: The learner and decision-maker that interacts with the environment.
• State (s): A representation of the environment at a given time.
• Action (a): Choices made by the agent that affect the environment.
• Reward (r): Feedback from the environment after taking an action, indicating
the immediate benefit or cost.
• Policy (π): The strategy or rule that the agent follows to select actions.
2. Learning Task
The learning task in RL involves the agent learning a policy that maximizes
cumulative reward over time. It can be formalized as a Markov Decision Process
(MDP), where the agent aims to learn the optimal policy π* that maximizes the
expected cumulative discounted reward E[ Σ_t γ^t r_t ], with discount factor
γ ∈ [0, 1].
• Q-Learning: A model-free RL algorithm that learns an action-value function
Q(s, a), the expected cumulative reward of taking action a in state s and
acting optimally thereafter. The values are updated as
Q(s, a) ← Q(s, a) + α [ r + γ max_a' Q(s', a') − Q(s, a) ] (see the sketch
below).
• Deep Q-Learning (DQL): Combines Q-learning with deep neural networks. In
DQL, a neural network approximates the Q-function, and techniques such as
experience replay and target networks stabilize training.
DQL has been successfully applied in various domains, including video game playing
(e.g., Atari games) and robotics.
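A tabular Q-learning sketch on a made-up five-state corridor (reward 1 for
reaching state 4; the learning rate, discount, and exploration rate are
assumptions):

import numpy as np

# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
rng = np.random.default_rng(0)
n_states, alpha, gamma, eps = 5, 0.1, 0.9, 0.2
Q = np.zeros((n_states, 2))  # actions: 0 = left, 1 = right

for _ in range(500):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy action selection.
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # greedy policy per state (1 = move right)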
Genetic Algorithms
3. GA Cycle of Reproduction
1. Initialization: Generate an initial population of candidate solutions,
typically at random.
2. Evaluation: Evaluate the fitness of each individual in the population using the
fitness function.
3. Selection: Select parent individuals for reproduction, favoring higher fitness
(e.g., roulette-wheel or tournament selection).
4. Crossover: Combine the genetic material of parent pairs to create offspring.
5. Mutation: Randomly alter some genes in the offspring to maintain diversity.
6. Replacement: Form the next generation by replacing the old population with
the new one (a compact sketch of the full cycle follows this list).
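The whole cycle in a short Python sketch on the toy "one-max" problem (maximize
the number of 1-bits; the population size, rates, and fitness function are
assumptions):

import numpy as np

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(20, 10))                # 1. initialization

for gen in range(30):
    fitness = pop.sum(axis=1)                          # 2. evaluation
    probs = fitness / fitness.sum()
    # 3. selection: fitness-proportionate (roulette-wheel) sampling.
    parents = pop[rng.choice(len(pop), size=len(pop), p=probs)]
    # 4. uniform crossover: pair each parent with the previous one.
    mask = rng.integers(0, 2, size=pop.shape).astype(bool)
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # 5. mutation: flip each bit with small probability.
    flip = rng.random(pop.shape) < 0.01
    children = np.where(flip, 1 - children, children)
    pop = children                                     # 6. replacement

print(pop.sum(axis=1).max())  # best fitness climbs toward 10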
4. Crossover
Crossover is the process of combining genetic material from two parent solutions to
create offspring. There are various types of crossover operators, including:
• Single-point crossover: A cut point is chosen; the offspring takes genes from
one parent before the point and from the other after it.
• Two-point crossover: Two cut points are chosen and the segment between
them is swapped between the parents.
• Uniform crossover: Offspring inherit genetic material from each parent with
equal probability for each gene (see the sketch after this list).
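Sketches of two of these operators in Python (illustrative; the parent genomes are
made up):

import numpy as np

# Single-point crossover: child takes genes from parent a up to a
# random cut point, then from parent b.
def single_point_crossover(a, b, rng):
    cut = rng.integers(1, len(a))
    return np.concatenate([a[:cut], b[cut:]])

# Uniform crossover: each gene comes from either parent with
# probability 0.5.
def uniform_crossover(a, b, rng):
    mask = rng.random(len(a)) < 0.5
    return np.where(mask, a, b)

rng = np.random.default_rng(1)
p1, p2 = np.ones(6, dtype=int), np.zeros(6, dtype=int)
print(single_point_crossover(p1, p2, rng), uniform_crossover(p1, p2, rng))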
5. Mutation
Mutation randomly alters individual genes with a small probability, introducing
variation that keeps the population diverse and helps the search escape local
optima (sketched below).
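A bit-flip mutation sketch (the genome and mutation rate are assumptions):

import numpy as np

# Each gene flips independently with a small probability, which
# injects fresh variation into the population.
def mutate(genome, rate, rng):
    flips = rng.random(len(genome)) < rate
    return np.where(flips, 1 - genome, genome)

rng = np.random.default_rng(2)
print(mutate(np.zeros(10, dtype=int), 0.2, rng))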
GAs are inspired by biological evolution and learning processes. They incorporate
concepts such as:
• Survival of the fittest: Individuals with higher fitness have a better chance of
passing their genetic material to the next generation.
8. Applications
GAs are applied to optimization and search problems such as scheduling, routing,
engineering design, and tuning machine-learning hyperparameters.