
Deep Learning Glossary for Beginners
A guide for Newcomers and Young Entrepreneurs

By Atiyeh Chatrsefid

December 4, 2024
Contents
1 Key Terms in Deep Learning
Introduction to Deep Learning
Deep Learning is a subset of Machine Learning that uses neural networks with many
layers to process and learn from large amounts of data. It is a technique used in various
fields such as image recognition, speech recognition, and natural language processing.

1 Key Terms in Deep Learning


Artificial Intelligence (AI)
AI refers to the field of computer science that focuses on creating systems that can
perform tasks that would normally require human intelligence, such as decision-making,
perception, and problem-solving.

Machine Learning (ML)


Machine Learning is a branch of AI that deals with algorithms that allow systems to
learn from data, improve their performance, and make predictions or decisions without
being explicitly programmed.

Deep Learning (DL)


Deep Learning is a specialized form of Machine Learning that uses multi-layered neural
networks to learn from large datasets. Deep Learning models are particularly effective at
handling unstructured data such as images, audio, and text.

Neural Network
A neural network is a set of algorithms, loosely modeled on the human brain, designed to recognize patterns. It interprets sensory data through a kind of machine perception, labeling and clustering raw input.

Artificial Neural Network (ANN)


An Artificial Neural Network (ANN) is a computational model consisting of interconnected layers of neurons (nodes). These networks learn by adjusting the weights of connections based on input data.
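
As a concrete illustration, here is a minimal sketch of a small ANN using PyTorch (an assumption; the layer sizes are arbitrary and chosen only for the example):

import torch
import torch.nn as nn

# A tiny ANN: 4 input features -> 8 hidden neurons -> 1 output.
model = nn.Sequential(
    nn.Linear(4, 8),   # the weights of these connections are adjusted during learning
    nn.ReLU(),         # activation function applied by the hidden neurons
    nn.Linear(8, 1),
)

x = torch.randn(3, 4)    # a batch of 3 examples with 4 input features each
print(model(x).shape)    # torch.Size([3, 1])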

Neuron
A neuron in a neural network is a computational unit that processes input data, applies
an activation function, and passes the result to the next layer of the network.
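
To make this concrete, here is a minimal sketch of a single neuron in plain Python (the input values, weights, and bias below are made up for illustration):

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the result into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

print(neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.7], bias=0.2))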

Activation Function
An activation function defines the output of a neuron, determining whether it should be
activated or not. Common activation functions include Sigmoid, ReLU (Rectified Linear
Unit), and Tanh.
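
A minimal sketch of these three activation functions using NumPy (an assumption), applied element-wise to a small example array:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes values into (0, 1)

def relu(x):
    return np.maximum(0.0, x)         # zero for negative inputs, identity otherwise

def tanh(x):
    return np.tanh(x)                 # squashes values into (-1, 1)

x = np.array([-2.0, 0.0, 2.0])
print(sigmoid(x), relu(x), tanh(x))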
Training
Training is the process of feeding data to a neural network, allowing it to learn and adjust
its weights in order to improve performance and make accurate predictions.

Supervised Learning
Supervised Learning is a type of Machine Learning where the model is trained using
labeled data (input-output pairs) to predict outcomes for new, unseen data.

Unsupervised Learning
In Unsupervised Learning, the model is given input data without labeled outputs. It tries
to find patterns or structures in the data, such as clusters or anomalies.

Reinforcement Learning
Reinforcement Learning is a type of learning where an agent interacts with its environment, receiving rewards or penalties based on its actions, with the goal of maximizing long-term performance.

Overfitting
Overfitting occurs when a model learns the details and noise in the training data to such
an extent that it negatively impacts the performance of the model on new, unseen data.

Underfitting
Underfitting happens when a model is too simple and cannot capture the underlying
patterns in the data, leading to poor performance on both training and test data.

Epoch
An Epoch is one complete pass through the entire training dataset. Deep learning models
are usually trained for multiple epochs to improve their accuracy and performance.
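
A minimal sketch of a training loop that runs for several epochs, assuming PyTorch; the data and model are synthetic placeholders, and the whole dataset is processed in a single batch, so each iteration is one full pass:

import torch
import torch.nn as nn

X, y = torch.randn(64, 4), torch.randn(64, 1)   # synthetic training data
model = nn.Linear(4, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):           # each iteration is one epoch
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # one full pass over the training data
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")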

Loss Function
A loss function measures how well or poorly a model’s predictions match the true outcomes. The goal is to minimize the loss during training to improve the model’s performance.
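
For example, the mean squared error loss can be written in a few lines of NumPy (an assumption; the values are made up):

import numpy as np

def mse(predictions, targets):
    # Mean squared error: the average of the squared differences
    # between predicted and true values.
    return np.mean((predictions - targets) ** 2)

print(mse(np.array([2.5, 0.0, 2.0]), np.array([3.0, -0.5, 2.5])))  # 0.25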

Gradient Descent
Gradient Descent is an optimization algorithm used to minimize the loss function by
iteratively adjusting the parameters (weights) of the model in the direction of the steepest
decrease.
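
A minimal worked sketch in plain Python, minimizing the one-parameter loss f(w) = (w - 3)^2, whose gradient is 2(w - 3); the starting point and learning rate are arbitrary:

w = 0.0
learning_rate = 0.1

for step in range(50):
    gradient = 2 * (w - 3)             # derivative of the loss at the current w
    w = w - learning_rate * gradient   # step in the direction that decreases the loss

print(w)   # close to the minimum at w = 3.0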
Backpropagation
Backpropagation is a method used for training neural networks by adjusting the weights
of neurons based on the error (loss) calculated in the output layer and propagated back
through the network.
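
A minimal sketch using PyTorch's autograd (an assumption), which implements backpropagation: calling loss.backward() computes the gradient of the loss with respect to every weight by propagating the error backward through the network.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 1))
x, target = torch.randn(1, 3), torch.randn(1, 1)

loss = nn.MSELoss()(model(x), target)   # error measured at the output layer
loss.backward()                         # propagate the error back through the network

# Every weight now holds a gradient that an optimizer can use to update it.
print(model[0].weight.grad.shape)       # torch.Size([4, 3])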

Convolutional Neural Network (CNN)


CNNs are a type of neural network particularly useful for processing grid-like data such as
images. They consist of convolutional layers that automatically detect important features
in the data.
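
A minimal sketch of a CNN in PyTorch (an assumption) for 28x28 grayscale images; the layer sizes and the number of classes are illustrative only:

import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional layer detects local features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsamples 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 10),                  # 10 output classes
)

images = torch.randn(4, 1, 28, 28)   # batch of 4 single-channel images
print(cnn(images).shape)             # torch.Size([4, 10])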

Recurrent Neural Network (RNN)


RNNs are a class of neural networks designed for processing sequential data, such as time
series or text. They can retain information from previous time steps to predict future
values.
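
A minimal sketch of an RNN in PyTorch (an assumption) processing a batch of short sequences; the sequence length and feature sizes are arbitrary:

import torch
import torch.nn as nn

rnn = nn.RNN(input_size=5, hidden_size=16, batch_first=True)

sequences = torch.randn(2, 10, 5)    # 2 sequences of 10 time steps, 5 features each
outputs, hidden = rnn(sequences)
print(outputs.shape)   # torch.Size([2, 10, 16]) - one output per time step
print(hidden.shape)    # torch.Size([1, 2, 16])  - hidden state after the last step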

Transfer Learning
Transfer Learning involves taking a pre-trained model and fine-tuning it for a new, but
similar task. This is particularly useful when working with limited data.
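
One common pattern, sketched here under the assumption that PyTorch and torchvision are available: load a model pre-trained on ImageNet, freeze its weights, and replace only the final layer for the new task.

import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet (weights are downloaded on first use).
model = models.resnet18(weights="DEFAULT")

# Freeze the pre-trained layers so their weights are not updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a new task with, say, 5 classes.
model.fc = nn.Linear(model.fc.in_features, 5)
# Only the new layer's parameters will be trained on the (limited) new dataset.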

Generative Adversarial Network (GAN)


GANs are composed of two networks: a Generator, which creates data, and a Discriminator, which evaluates whether the generated data is real or fake. The two networks compete to improve each other’s performance.
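
A minimal sketch of the two networks in PyTorch (an assumption); the layer sizes and the 28x28 "image" shape are arbitrary, and the adversarial training loop is omitted:

import torch
import torch.nn as nn

# Generator: turns random noise into fake data (here, flattened 28x28 images).
generator = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 28 * 28), nn.Tanh(),
)

# Discriminator: scores how likely an input is to be real rather than generated.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

noise = torch.randn(16, 64)              # batch of 16 noise vectors
fake_images = generator(noise)
print(discriminator(fake_images).shape)  # torch.Size([16, 1]) - realness scores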

Dropout
Dropout is a regularization technique used during training, where randomly selected
neurons are ignored to prevent overfitting and to improve the model’s ability to generalize.
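
A minimal sketch with PyTorch (an assumption), showing that dropout is active during training and disabled at evaluation time:

import torch
import torch.nn as nn

dropout = nn.Dropout(p=0.5)    # each neuron is dropped with probability 0.5
x = torch.ones(1, 8)

dropout.train()                # training mode: about half the values are zeroed,
print(dropout(x))              # survivors are scaled by 1 / (1 - p) = 2.0

dropout.eval()                 # evaluation mode: dropout is disabled
print(dropout(x))              # all ones, unchanged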

Batch Normalization
Batch Normalization normalizes the input layer by adjusting and scaling activations,
which helps to speed up training and improve the stability of the neural network.
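
A minimal sketch of the underlying computation in NumPy (an assumption); in a real network the scale (gamma) and shift (beta) parameters are learned during training:

import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalize each feature over the mini-batch to zero mean and unit variance,
    # then rescale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

batch = np.array([[1.0, 200.0], [2.0, 220.0], [3.0, 240.0]])
print(batch_norm(batch))   # each column now has mean ~0 and variance ~1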

Fine-Tuning
Fine-Tuning involves adjusting a pre-trained model on a new dataset to optimize its
performance on the specific task at hand.
