PyTorch is a deep learning library built on Python. It provides GPU acceleration, dynamic computation graphs and an intuitive interface for deep learning researchers and developers. PyTorch follows a "define-by-run" approach, meaning its computational graphs are constructed on the fly as the code executes, which makes debugging and model customization easier.
Installing PyTorch
PyTorch can be installed on Windows, macOS and Linux. For a CPU-only build (without GPU support), use pip:
!pip install torch torchvision torchaudio
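Once installed, a quick check with the standard version and CUDA-availability calls confirms the setup:
Python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # False for the CPU-only build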
PyTorch Tensors
Tensors are the fundamental data structures in PyTorch, similar to NumPy arrays but with GPU acceleration capabilities. PyTorch tensors support automatic differentiation, making them suitable for deep learning tasks.
Python
import torch
# Creating a 1D tensor
x = torch.tensor([1.0, 2.0, 3.0])
print('1D Tensor: \n', x)
# Creating a 2D tensor
y = torch.zeros((3, 3))
print('2D Tensor: \n', y)
Output:
1D Tensor: 
 tensor([1., 2., 3.])
2D Tensor: 
 tensor([[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]])
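Because every tensor carries its device with it, moving computation to a GPU is a matter of moving the tensor. A minimal sketch of device placement and NumPy interop; the device selection simply falls back to CPU when no GPU is present:
Python
import torch
import numpy as np

# Pick a GPU if one is available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.tensor([1.0, 2.0, 3.0]).to(device)  # move the tensor to the device
print(x.device)

# Tensors interoperate with NumPy (CPU tensors only)
a = np.array([4.0, 5.0])
t = torch.from_numpy(a)  # shares memory with the NumPy array
print(t.numpy())         # back to NumPy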
Operations on Tensors
Python
a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])
# Element-wise addition
print('Element Wise Addition of a & b: \n', a + b)
# Matrix multiplication
print('Matrix Multiplication of a & b: \n',
      torch.matmul(a.view(2, 1), b.view(1, 2)))
Output:
Element Wise Addition of a & b: 
 tensor([4., 6.])
Matrix Multiplication of a & b: 
 tensor([[3., 4.],
        [6., 8.]])
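Element-wise operations follow NumPy-style broadcasting, so tensors of compatible shapes combine without explicit tiling. A small sketch:
Python
import torch

m = torch.tensor([[1.0, 2.0],
                  [3.0, 4.0]])  # shape (2, 2)
v = torch.tensor([10.0, 20.0])  # shape (2,)

# v is broadcast across each row of m
print(m + v)  # tensor([[11., 22.], [13., 24.]])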
Reshaping and Transposing Tensors
Python
import torch
t = torch.tensor([[1, 2, 3, 4],
                  [5, 6, 7, 8],
                  [9, 10, 11, 12]])
# Reshaping
print("Reshaping")
print(t.reshape(6, 2))
# Resizing with view (use reshape when the tensor may be non-contiguous)
print("\nResizing")
print(t.view(2, 6))
# Transposing
print("\nTransposing")
print(t.transpose(0, 1))
Output:
Reshaping
tensor([[ 1,  2],
        [ 3,  4],
        [ 5,  6],
        [ 7,  8],
        [ 9, 10],
        [11, 12]])

Resizing
tensor([[ 1,  2,  3,  4,  5,  6],
        [ 7,  8,  9, 10, 11, 12]])

Transposing
tensor([[ 1,  5,  9],
        [ 2,  6, 10],
        [ 3,  7, 11],
        [ 4,  8, 12]])
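The practical difference between view and reshape shows up on non-contiguous tensors: view never copies data and fails, while reshape copies when it must. A small sketch:
Python
import torch

t = torch.arange(12).reshape(3, 4)
tt = t.transpose(0, 1)     # transposing makes the data non-contiguous

print(tt.is_contiguous())  # False
print(tt.reshape(12))      # works: reshape copies when it must
# tt.view(12) would raise a RuntimeError here, since view never copies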
Autograd and Computational Graphs
The autograd module automates gradient calculation for backpropagation. This is crucial in training deep neural networks.
Python
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2
y.backward()
print(x.grad) #(dy/dx = 2x = 4 when x=2)
Output:
tensor(4.)
PyTorch dynamically creates a computational graph that tracks operations and gradients for backpropagation.
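Because the graph is recorded while the code runs, ordinary Python control flow can change what gets differentiated from one call to the next. A minimal sketch (the threshold of 1.0 is just an illustrative choice):
Python
import torch

x = torch.tensor(3.0, requires_grad=True)

# The graph is built as the code executes, so a plain Python `if`
# decides which operations are recorded on this particular run.
if x.item() > 1.0:
    y = x * x   # recorded here, so dy/dx = 2x
else:
    y = x + 10  # would give dy/dx = 1

y.backward()
print(x.grad)   # tensor(6.) since 2x = 6 at x = 3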
Building Neural Networks in PyTorch
Neural networks in PyTorch are built from the layers and utilities provided by the torch.nn module.
To build a neural network in PyTorch, we create a class that inherits from torch.nn.Module and defines its layers and forward pass.
Python
import torch
import torch.nn as nn
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.fc1 = nn.Linear(10, 16)  # First layer: 10 inputs -> 16 hidden units
        self.fc2 = nn.Linear(16, 8)   # Second layer: 16 -> 8 hidden units
        self.fc3 = nn.Linear(8, 1)    # Output layer: 8 -> 1 output

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = torch.sigmoid(self.fc3(x))
        return x
model = NeuralNetwork()
print(model)
Output:
NeuralNetwork(
  (fc1): Linear(in_features=10, out_features=16, bias=True)
  (fc2): Linear(in_features=16, out_features=8, bias=True)
  (fc3): Linear(in_features=8, out_features=1, bias=True)
)
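As a quick sanity check, we can pass a random batch through the untrained model; the exact values vary from run to run since the weights are randomly initialized, but the output shape and range are fixed:
Python
batch = torch.randn(4, 10)  # a dummy batch: 4 samples, 10 features each
out = model(batch)
print(out.shape)                               # torch.Size([4, 1])
print(((out >= 0) & (out <= 1)).all().item())  # True: sigmoid keeps outputs in [0, 1]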
Define Loss Function and Optimizer
Once the model is defined, we need to specify a loss function and an optimizer.
We use nn.BCELoss() for binary cross-entropy loss and torch.optim.Adam() for the Adam optimizer, which combines the benefits of momentum and adaptive learning rates.
Python
model = NeuralNetwork()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
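Note that nn.BCELoss() expects the model's outputs to already be probabilities in [0, 1] (hence the sigmoid in the final layer) and the targets to be floats. A small illustration with hand-picked values:
Python
import torch
import torch.nn as nn

criterion = nn.BCELoss()
pred = torch.tensor([0.9, 0.2])    # probabilities, e.g. from a sigmoid
target = torch.tensor([1.0, 0.0])  # float labels, not integers
print(criterion(pred, target))     # tensor(0.1643)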
Train the Model
The training involves:
1. Generating dummy data (100 samples, each with 10 features).
2. Running a training loop where we reset the gradients, perform a forward pass, compute the loss, backpropagate and update the weights.
Python
inputs = torch.randn((100, 10))
targets = torch.randint(0, 2, (100, 1)).float()
epochs = 20
for epoch in range(epochs):
    optimizer.zero_grad()               # Reset gradients
    outputs = model(inputs)             # Forward pass
    loss = criterion(outputs, targets)  # Compute loss
    loss.backward()                     # Compute gradients
    optimizer.step()                    # Update weights

    if (epoch+1) % 5 == 0:
        print(f"Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}")
Output:
The exact loss values vary from run to run because the data and initial weights are random, but the printed loss should generally decrease across epochs.
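After training, inference should be run with gradient tracking disabled. A minimal evaluation sketch on the same dummy data, thresholding the sigmoid output at 0.5 to get class predictions:
Python
model.eval()           # switch layers like dropout/batchnorm to eval mode
with torch.no_grad():  # no graph is built, saving memory and time
    preds = (model(inputs) > 0.5).float()
    accuracy = (preds == targets).float().mean()
print(f"Accuracy on the dummy data: {accuracy.item():.2f}")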
PyTorch vs TensorFlow
Let's see a quick difference between PyTorch and TensorFlow:
Feature | PyTorch | TensorFlow
Computational Graph | Dynamic | Static (TF 1.x), Dynamic (TF 2.0)
Ease of Use | Pythonic, easy to debug | Steeper learning curve
Performance | Fast with eager execution | Optimized for large-scale deployment
Deployment | TorchScript & ONNX | TensorFlow Serving & TensorFlow Lite
Popularity in Research | Widely used | Also widely used, but more in production

Applications of PyTorch
PyTorch is used in industry for computer vision, NLP and reinforcement learning applications. With its strong community support and easy-to-use API, it continues to be one of the leading deep learning frameworks.