
Introduction to PyTorch and Tensors in Deep Learning

1 Introduction to PyTorch
PyTorch is a popular open-source library for deep learning, developed by Meta AI (formerly Facebook AI Research). It is widely used in both research and production because of its flexibility and ease of use.
Why PyTorch?

• It is easy to debug and understand.

• It has strong support for GPUs, which makes computations faster.

2 What are CPUs and GPUs?


Deep learning computations can run on different types of processing units,
each optimized for specific tasks.

2.1 CPU (Central Processing Unit)


• The CPU is the main processor of a computer.

• It is good at handling general-purpose tasks like running software, managing files, and executing basic code.

• CPUs have a few powerful cores (usually between 2 and 16).

• They are not ideal for deep learning because they process tasks largely sequentially.

Example: Running PyTorch on CPU

import torch
device = torch.device("cpu")  # Force PyTorch to use the CPU
x = torch.rand(3, 3, device=device)
print(x)

2.2 GPU (Graphics Processing Unit)


• A GPU is designed for parallel processing.

• Unlike CPUs, GPUs have thousands of small cores that allow them to
process many tasks at once.

• This makes them much faster for deep learning and AI tasks.

• NVIDIA and AMD manufacture the GPUs most commonly used in deep learning.

Example: Running PyTorch on GPU


device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.rand(3, 3).to(device)  # Moves the tensor to the GPU if one is available
print(x)

Key Differences Between CPU, GPU, and TPU

A TPU (Tensor Processing Unit) is Google's custom accelerator designed specifically for deep learning workloads.

Feature            CPU             GPU          TPU
Cores              Few (2-16)      Thousands    Thousands
Speed              Slow            Fast         Very Fast
Best for           General tasks   AI, Gaming   Deep Learning
Used in PyTorch?   Yes             Yes          Limited

3 Why Use GPU for Deep Learning?


Deep learning models require performing millions of mathematical operations, such as matrix multiplications and derivatives. Since GPUs handle many operations simultaneously, they dramatically speed up training and inference.
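
As a rough illustration (a sketch only; timings will vary with your hardware), the following compares one large matrix multiplication on the CPU and, if one is available, on the GPU. The calls to torch.cuda.synchronize() are needed because GPU kernels run asynchronously:

import time
import torch

a = torch.rand(4096, 4096)
b = torch.rand(4096, 4096)

start = time.time()
c = a @ b  # Matrix multiplication on the CPU
print(f"CPU: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()  # Wait for the transfers to finish
    start = time.time()
    c_gpu = a_gpu @ b_gpu  # The same multiplication on the GPU
    torch.cuda.synchronize()  # Wait for the kernel to finish
    print(f"GPU: {time.time() - start:.3f} s")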

4 Checking Device Availability in PyTorch


Before using a GPU, you should check whether your system has one available.
Checking for CUDA (GPU)

import torch
print(torch.cuda.is_available())  # Returns True if a GPU is available

Selecting a Device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

Now that we understand different processing units, let's explore tensors and their operations in PyTorch.

5 What is a Tensor?
A tensor is a multi-dimensional array, similar to a NumPy array, but with
the added capability of being used on GPUs for faster computation.
Why Tensors?
• Tensors store numerical data.
• They work efficiently with GPUs.
• They are the basic building blocks of deep learning models.
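
As a small illustrative sketch, tensors can have any number of dimensions, from a 0-D scalar up to the 4-D batches used for images:

import torch

scalar = torch.tensor(3.14)       # 0-D tensor (a single number)
vector = torch.tensor([1, 2, 3])  # 1-D tensor
matrix = torch.ones(2, 3)         # 2-D tensor
batch = torch.rand(4, 3, 28, 28)  # 4-D tensor, e.g. 4 RGB images of 28 x 28 pixels

print(scalar.ndim, vector.ndim, matrix.ndim, batch.ndim)  # 0 1 2 4
print(batch.shape)  # torch.Size([4, 3, 28, 28])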

6 Installing PyTorch
Before using PyTorch, install it using:
pip install torch torchvision torchaudio
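
To confirm the installation worked, a quick sanity check (the version string will differ on your machine):

import torch
print(torch.__version__)          # e.g. 2.x.x
print(torch.cuda.is_available())  # True if a CUDA GPU is usable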

7 Creating and Manipulating Tensors


7.1 Creating Tensors
1. Creating a tensor from a list
import torch
x = torch.tensor([1, 2, 3, 4])
print(x)

Explanation: We created a simple tensor from a Python list.


2. Creating a zero tensor

x = torch.zeros(3, 3)
print(x)

Explanation: This creates a 3 × 3 matrix filled with zeros.


3. Creating a tensor filled with ones
x = torch.ones(2, 2)
print(x)

4. Creating a random tensor


x = torch.rand(2, 2)
print(x)

Explanation: Generates a 2 × 2 tensor with random values.

7.2 Tensor Data Types


5. Creating a tensor with a specific data type
x = torch.tensor([1.0, 2.0, 3.0], dtype=torch.float32)
print(x)

Explanation: Specifies that the tensor should have floating-point numbers.


6. Checking tensor data type
print(x.dtype)
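
An existing tensor can also be converted to another data type after creation, for example with .to() or the shorthand methods (a brief sketch):

y = x.to(torch.float64)  # Convert to 64-bit floats
z = x.long()             # Shorthand for conversion to 64-bit integers
print(y.dtype, z.dtype)  # torch.float64 torch.int64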

7.3 Reshaping Tensors


7. Changing tensor shape using view()
x = torch.rand(4, 4)
y = x.view(2, 8)
print(y)

8. Using reshape()
x = torch.rand(3, 4)
y = x.reshape(2, 6)
print(y)
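
The difference between the two: view() never copies data, so it requires the tensor to be laid out contiguously in memory, while reshape() returns a view when possible and copies otherwise. A small sketch of where this matters:

x = torch.rand(3, 4)
t = x.T             # Transposing makes the tensor non-contiguous
# t.view(12)        # Would raise a RuntimeError here
y = t.reshape(12)   # Works: reshape() copies when a view is impossible
print(y.shape)      # torch.Size([12])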

7.4 Mathematical Operations on Tensors
9. Adding two tensors
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
c = a + b
print(c)

10. Subtracting tensors


c = a - b
print(c)

11. Multiplying tensors (element-wise)


c = a * b
print(c)

12. Matrix multiplication


A = torch.rand(2, 3)
B = torch.rand(3, 2)
C = torch.mm(A, B)  # (2 x 3) times (3 x 2) gives a (2 x 2) result
print(C)
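
Equivalently, matrix multiplication can be written with torch.matmul() or the @ operator; for 2-D tensors all three give the same result:

C1 = torch.matmul(A, B)  # Same as torch.mm for 2-D tensors
C2 = A @ B               # Operator shorthand
print(torch.allclose(C, C1), torch.allclose(C, C2))  # True True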

7.5 Tensor Indexing


13. Accessing elements
x = torch.tensor([10, 20, 30, 40])
print(x[2])  # Third element

14. Slicing a tensor


print(x[:2])  # First two elements
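
The same syntax extends to multi-dimensional tensors, with one index or slice per dimension (a short sketch):

m = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(m[0, 2])   # Row 0, column 2 -> tensor(3)
print(m[:, 1])   # The whole second column -> tensor([2, 5])
print(m[1, :2])  # First two elements of row 1 -> tensor([4, 5])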

7.6 Converting Between Tensors and NumPy


15. Convert PyTorch tensor to NumPy array
import numpy as np
x = torch.tensor([1, 2, 3])
y = x.numpy()
print(y)

16. Convert NumPy array to PyTorch tensor

z = torch.from_numpy(y)
print(z)
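
Note that for CPU tensors both conversions share memory rather than copying it, so changing one object changes the other:

y[0] = 99  # Modify the NumPy array in place
print(z)   # tensor([99, 2, 3]) -- the tensor sees the change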

7.7 Using GPU with Tensors


17. Checking for GPU availability
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

18. Moving a tensor to GPU


x = torch.rand(3, 3)
x = x.to(device)  # Moves the tensor to the selected device
print(x)
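
Moving back to the CPU works the same way and is required before converting a tensor to NumPy (a short sketch):

x = x.cpu()      # Move the tensor back to the CPU
print(x.device)  # cpu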

7.8 Other Useful Tensor Operations


19. Cloning a tensor
y = x.clone()  # Creates an independent copy of the tensor
print(y)

20. Finding the maximum value in a tensor


x = torch.tensor([1, 5, 3, 9, 2])
print(torch.max(x))

21. Concatenating tensors along a dimension


x = torch.rand(2, 3)
y = torch.rand(2, 3)
z = torch.cat((x, y), dim=0)  # Concatenating along rows; result shape is (4, 3)
print(z)
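
Passing dim=1 instead concatenates along columns; the tensors must then agree on every other dimension (a brief sketch):

w = torch.cat((x, y), dim=1)  # Concatenating along columns
print(w.shape)                # torch.Size([2, 6])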

22. Splitting a tensor into multiple parts


x = torch.rand(6)
y = torch.split(x, 2)  # Splits into chunks of size 2, giving 3 parts
print(y)

7.9 Tensor Reduction Operations
Reduction operations summarize information in tensors.
23. Finding the mean of a tensor
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print(torch.mean(x))

24. Summing all elements in a tensor


x = torch.tensor([[1, 2, 3], [4, 5, 6]])
print(torch.sum(x))

25. Finding the minimum and maximum values


x = torch.tensor([4, 7, 1, 9])
print(torch.min(x))
print(torch.max(x))
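
All of these reductions also accept a dim argument to reduce along a single axis instead of over the whole tensor (a brief sketch):

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
print(torch.sum(x, dim=0))   # Column sums -> tensor([4., 6.])
print(torch.mean(x, dim=1))  # Row means -> tensor([1.5000, 3.5000])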

7.10 Tensor Reshaping and Transposing


26. Transposing a matrix (switching rows and columns)
x = torch.rand(2, 3)
y = x.T  # Transposes the matrix
print(y)

27. Flattening a multi-dimensional tensor


x = torch.rand(2, 3)
y = x.view(-1)  # Flattens the tensor
print(y)
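
torch.flatten() does the same job and also takes a start_dim argument, which is handy for flattening everything except a batch dimension (a short sketch):

x = torch.rand(4, 2, 3)
y = torch.flatten(x, start_dim=1)  # Keep the first (batch) dimension
print(y.shape)                     # torch.Size([4, 6])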

7.11 Conclusion
Tensors are the backbone of deep learning in PyTorch. They allow us to
perform numerical operations efficiently on CPUs and GPUs. Mastering
tensor operations is essential for working with neural networks.

8 Practical Questions on Tensors


8.1 Basic Tensor Operations
1. Create a 3 × 3 tensor filled with zeros.

2. Create a tensor with values from 1 to 10 and print its shape.
3. Create a 2 × 5 tensor filled with ones and convert its data type to float32.
4. Generate a random tensor of size 4 × 4 and print its values.
5. Create a tensor with values evenly spaced between 0 and 1, with 5 elements.

8.2 Tensor Reshaping and Indexing


6. Create a tensor of shape (6,) and reshape it into a 2 × 3 matrix.
7. Create a tensor x = [10, 20, 30, 40, 50] and access the third element.
8. Extract the first two rows from a 3 × 3 tensor.
9. Flatten a 2 × 4 tensor into a one-dimensional tensor.
10. Transpose a 3 × 2 matrix using PyTorch.

8.3 Mathematical Operations on Tensors


11. Create two tensors of shape (3,) and perform element-wise addition.
12. Multiply two matrices of shape 2 × 3 and 3 × 2.
13. Compute the mean and standard deviation of a randomly generated tensor of size (5, 5).
14. Find the maximum and minimum values of a tensor.
15. Compute the sum of all elements in a 4 × 4 tensor.

8.4 Working with GPU


16. Check if CUDA (GPU) is available in your system using PyTorch.
17. Create a tensor and move it to GPU, then move it back to CPU.
18. Perform matrix multiplication using tensors on GPU.

8.5 Tensor Conversions and Concatenation


19. Convert a PyTorch tensor to a NumPy array and back to a tensor.
20. Concatenate two tensors along different dimensions (rows and columns).
