Deep Learning Glossary
A Beginner's Guide for Newcomers and Young Entrepreneurs
By Atiyeh Chatrsefid
December 4, 2024
Contents
1 Key Terms in Deep Learning
Introduction to Deep Learning
Deep Learning is a subset of Machine Learning that uses neural networks with many
layers to process and learn from large amounts of data. It is a technique used in various
fields such as image recognition, speech recognition, and natural language processing.
Neural Network
A neural network is a set of algorithms, modeled loosely after the human brain, that
are designed to recognize patterns. It interprets sensory data through a kind of machine
perception, labeling, and clustering of raw input.
Neuron
A neuron in a neural network is a computational unit that processes input data, applies
an activation function, and passes the result to the next layer of the network.
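As a minimal sketch in plain Python (the function name and values here are illustrative, not from any particular library): a neuron computes a weighted sum of its inputs plus a bias, then applies an activation function, in this case a sigmoid.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One forward pass through a single neuron with two inputs.
output = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
```

In a full network, this output would become one of the inputs to the neurons in the next layer.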
Activation Function
An activation function defines the output of a neuron, determining whether it should be
activated or not. Common activation functions include Sigmoid, ReLU (Rectified Linear
Unit), and Tanh.
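The three activation functions named above can each be written in a line of plain Python (a sketch for intuition, not a library implementation):

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Passes positive values through unchanged; zeroes out negatives.
    return max(0.0, x)

def tanh(x):
    # Squashes any real number into the range (-1, 1).
    return math.tanh(x)
```

Note how each function maps an unbounded input to a constrained output; this nonlinearity is what lets stacked layers learn more than a single linear transformation could.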
Training
Training is the process of feeding data to a neural network, allowing it to learn and adjust
its weights in order to improve performance and make accurate predictions.
Supervised Learning
Supervised Learning is a type of Machine Learning where the model is trained using
labeled data (input-output pairs) to predict outcomes for new, unseen data.
Unsupervised Learning
In Unsupervised Learning, the model is given input data without labeled outputs. It tries
to find patterns or structures in the data, such as clusters or anomalies.
Reinforcement Learning
Reinforcement Learning is a type of learning where an agent interacts with its environment, receiving rewards or penalties based on its actions, with the goal of maximizing its cumulative reward over time.
Overfitting
Overfitting occurs when a model learns the details and noise in the training data to such
an extent that it negatively impacts the performance of the model on new, unseen data.
Underfitting
Underfitting happens when a model is too simple and cannot capture the underlying
patterns in the data, leading to poor performance on both training and test data.
Epoch
An Epoch is one complete pass through the entire training dataset. Deep learning models
are usually trained for multiple epochs to improve their accuracy and performance.
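A minimal sketch of epochs in a training loop, assuming the simplest possible model (one weight fitting y = 2x) so the structure stays visible; the data, learning rate, and epoch count are all illustrative:

```python
# Toy dataset: pairs (x, y) where y = 2x, so the ideal weight is 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # initial weight
lr = 0.05  # learning rate

for epoch in range(100):      # each epoch is one full pass over the dataset
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad             # update the weight after each example
```

After enough epochs the weight converges toward 2.0; a single pass over the data is rarely enough, which is why models train for many epochs.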
Loss Function
A loss function measures how well or poorly a model’s predictions match the true out-
comes. The goal is to minimize the loss during training to improve the model’s perfor-
mance.
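One of the most common loss functions, mean squared error, can be sketched in a few lines (the function name here is illustrative):

```python
def mse(preds, targets):
    # Mean squared error: average of the squared differences
    # between predictions and true values.
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)
```

A perfect model gives a loss of zero; the further predictions drift from the targets, the larger the loss, and training pushes it back down.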
Gradient Descent
Gradient Descent is an optimization algorithm used to minimize the loss function by iteratively adjusting the parameters (weights) of the model in the direction of steepest descent, i.e., the negative gradient.
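The idea can be sketched on a one-dimensional function, minimizing f(x) = (x - 3)^2, whose minimum is at x = 3 (the step size and iteration count below are illustrative):

```python
def grad(x):
    # Derivative of f(x) = (x - 3)^2.
    return 2 * (x - 3)

x = 0.0   # starting point
lr = 0.1  # learning rate (step size)

for _ in range(200):
    # Step in the direction of the negative gradient.
    x -= lr * grad(x)
```

Each step moves x a fraction of the way toward the minimum; in a neural network the same update is applied simultaneously to millions of weights.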
Backpropagation
Backpropagation is a method used for training neural networks by adjusting the weights
of neurons based on the error (loss) calculated in the output layer and propagated back
through the network.
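For a single sigmoid neuron with a squared-error loss, the backward pass is just the chain rule applied step by step. A hand-worked sketch (all values illustrative):

```python
import math

# Forward pass for one sigmoid neuron.
x, w, b, target = 1.0, 0.5, 0.0, 1.0
z = w * x + b
a = 1.0 / (1.0 + math.exp(-z))   # activation (the neuron's output)
loss = (a - target) ** 2         # squared error

# Backward pass: propagate the error back through each operation.
dloss_da = 2 * (a - target)      # how the loss changes with the activation
da_dz = a * (1 - a)              # derivative of the sigmoid
dz_dw = x                        # how the pre-activation changes with the weight
grad_w = dloss_da * da_dz * dz_dw  # chain rule: gradient of the loss w.r.t. w
```

Here the gradient is negative, so gradient descent would increase w, pushing the output toward the target. Real networks repeat exactly this bookkeeping across every layer.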
Transfer Learning
Transfer Learning involves taking a pre-trained model and fine-tuning it for a new, but
similar task. This is particularly useful when working with limited data.
Dropout
Dropout is a regularization technique used during training, where randomly selected
neurons are ignored to prevent overfitting and to improve the model’s ability to generalize.
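A sketch of "inverted" dropout, the variant most libraries use (the function name and probability are illustrative): surviving activations are scaled up so the expected value is unchanged, and at inference time the layer is left alone.

```python
import random

def dropout(values, p, training=True):
    # During training, zero each value with probability p and
    # scale the survivors by 1 / (1 - p) to preserve the expectation.
    if not training:
        return list(values)
    return [0.0 if random.random() < p else v / (1 - p) for v in values]
```

Because a different random subset of neurons is silenced on every training step, no single neuron can dominate, which is what improves generalization.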
Batch Normalization
Batch Normalization normalizes the inputs of a layer across each mini-batch by adjusting and scaling the activations, which helps to speed up training and improve the stability of the neural network.
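The core normalization step (before the learnable scale and shift that full batch normalization adds) can be sketched in plain Python; the function name and epsilon value are illustrative:

```python
import math

def batch_norm(batch, eps=1e-5):
    # Normalize a batch of activations to zero mean and unit variance.
    # eps guards against division by zero when the variance is tiny.
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [(x - mean) / math.sqrt(var + eps) for x in batch]
```

Keeping each layer's inputs in a consistent range like this is what lets training use larger learning rates without diverging.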
Fine-Tuning
Fine-Tuning involves adjusting a pre-trained model on a new dataset to optimize its
performance on the specific task at hand.