Denoising AutoEncoders In Machine Learning

Last Updated : 07 Aug, 2025

Autoencoders are unsupervised neural networks that compress input data into a low-dimensional latent space (the encoder) and then reconstruct it (the decoder), trained to minimize the reconstruction error between the original input and its output. If the hidden layer is too large, however, an autoencoder may simply learn the identity mapping, copying the input perfectly without extracting meaningful features. Denoising autoencoders (DAEs) address this by corrupting each input with random noise and training the network to reconstruct the clean original, which forces it to learn robust features rather than a trivial copy.
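The core idea can be sketched in a few lines before we build the full model. The snippet below is illustrative only (the one-layer encoder and decoder here are stand-ins, not the network defined later in this article): the input is corrupted with noise, but the loss compares the reconstruction to the clean input.

Python
import torch
import torch.nn.functional as F

# Hypothetical one-layer encoder/decoder, for illustration only.
encoder = torch.nn.Linear(784, 128)
decoder = torch.nn.Linear(128, 784)

x = torch.rand(32, 784)                  # a batch of clean inputs in [0, 1]
x_noisy = x + 0.3 * torch.randn_like(x)  # corrupt the input with Gaussian noise
x_hat = torch.sigmoid(decoder(torch.relu(encoder(x_noisy))))

# Key point: the target is the CLEAN input, not the noisy one.
loss = F.mse_loss(x_hat, x)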

Architecture of DAE

The denoising autoencoder (DAE) architecture resembles a standard autoencoder and consists of two main components:

Encoder: compresses the (noisy) input into a lower-dimensional latent representation.
Decoder: reconstructs the clean input from that latent representation.

[Figure: DAE architecture]

Step-by-Step Implementation of DAE

Let's implement a DAE in PyTorch on the MNIST dataset.

Step 1: Import Libraries

Let's import the necessary libraries:

Python
import torch
import torch.utils.data
from torchvision import datasets, transforms
from torch import nn, optim

# Use the GPU when available, otherwise fall back to the CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
Step 2: Load the Dataset and Define Dataloader

We prepare the MNIST handwritten digits dataset:

Python
# ToTensor already scales pixels to [0, 1]; Normalize(0, 1) leaves them
# unchanged, which matches the sigmoid output range of the decoder.
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(0, 1)
])

mnist_dataset_train = datasets.MNIST(
    root='./data', train=True, download=True, transform=transform)
mnist_dataset_test = datasets.MNIST(
    root='./data', train=False, download=True, transform=transform)

batch_size = 128

train_loader = torch.utils.data.DataLoader(
    mnist_dataset_train, batch_size=batch_size, shuffle=True)
# A test batch size of 5 matches the five images visualized in Step 7.
test_loader = torch.utils.data.DataLoader(
    mnist_dataset_test, batch_size=5, shuffle=False)
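As an optional sanity check, you can pull one batch from the loader and confirm its shape before moving on:

Python
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([128, 1, 28, 28]); pixel values lie in [0, 1]
print(labels.shape)  # torch.Size([128])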
Step 3: Define the Denoising Autoencoder (DAE) Model

We design a neural network with an encoder and decoder:

Python
class DAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 784 -> 512 -> 256 -> 128
        self.fc1 = nn.Linear(784, 512)
        self.fc2 = nn.Linear(512, 256)
        self.fc3 = nn.Linear(256, 128)
        # Decoder: 128 -> 256 -> 512 -> 784
        self.fc4 = nn.Linear(128, 256)
        self.fc5 = nn.Linear(256, 512)
        self.fc6 = nn.Linear(512, 784)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def encode(self, x):
        h1 = self.relu(self.fc1(x))
        h2 = self.relu(self.fc2(h1))
        return self.relu(self.fc3(h2))

    def decode(self, z):
        h4 = self.relu(self.fc4(z))
        h5 = self.relu(self.fc5(h4))
        # Sigmoid keeps reconstructed pixels in [0, 1].
        return self.sigmoid(self.fc6(h5))

    def forward(self, x):
        # Flatten 28x28 images to 784-dim vectors before encoding.
        q = self.encode(x.view(-1, 784))
        return self.decode(q)
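Before training, a quick forward pass with dummy data (illustrative only; model_check is a throwaway instance, not the model trained below) confirms the shapes: the network flattens 28x28 images to 784-dimensional vectors, and the sigmoid keeps outputs in [0, 1].

Python
model_check = DAE()
x = torch.rand(4, 1, 28, 28)               # four fake MNIST-sized images
out = model_check(x)
print(out.shape)                           # torch.Size([4, 784])
print(out.min().item(), out.max().item())  # both within [0, 1]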
Step 4: Define the Training Function

We define the training function, which corrupts each batch with Gaussian noise, passes the noisy images through the model, and computes the reconstruction loss against the clean images:

Python
def train(epoch, model, train_loader, optimizer, criterion):
    model.train()
    train_loss = 0
    for batch_idx, (data, _) in enumerate(train_loader):
        data = data.to(device)
        optimizer.zero_grad()
        # Corrupt the input with additive Gaussian noise.
        data_noise = data + torch.randn(data.shape).to(device)
        recon_batch = model(data_noise)
        # The loss compares the reconstruction to the CLEAN input.
        loss = criterion(recon_batch, data.view(data.size(0), -1))
        loss.backward()
        train_loss += loss.item() * len(data)
        optimizer.step()
        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item()))
    print('====> Epoch: {} Average loss: {:.4f}'.format(
        epoch, train_loss / len(train_loader.dataset)))
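Additive Gaussian noise is only one corruption choice. The original DAE formulation (Vincent et al., 2008) also used masking noise, where random pixels are zeroed out. A minimal sketch of that variant, reusing the torch import from Step 1 (the helper name corrupt_masking is ours, not part of the article's code):

Python
def corrupt_masking(x, p=0.3):
    # Zero out each pixel independently with probability p (masking noise).
    mask = (torch.rand_like(x) > p).float()
    return x * mask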
Step 5: Initialize Model, Optimizer and Loss Function

We initialize the model along with the optimizer and loss function:

Python
epochs = 10
model = DAE().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-2)
criterion = nn.MSELoss()
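Because the decoder ends in a sigmoid, reconstructions stay in [0, 1], so MSE against the clean image is a natural fit. Binary cross-entropy is a common alternative for MNIST pixels; if you want to try it, the swap is one line (our suggestion, not part of the original recipe):

Python
criterion = nn.BCELoss()  # alternative: treats each pixel as a Bernoulli probability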
Step 6: Train the Model

Loop over the dataset for the given number of epochs, invoking the training function.

Python
for epoch in range(1, epochs + 1):
    train(epoch, model, train_loader, optimizer, criterion)

Output:

[Training log: per-batch losses and the per-epoch average loss]

Testing Phase

Step 7: Evaluate and Visualize the Model

We evaluate the model's predictions and visualize the results:

Python
import matplotlib.pyplot as plt

model.eval()
with torch.no_grad():
    for batch_idx, (data, labels) in enumerate(test_loader):
        data = data.to(device)
        # Corrupt the test images the same way as during training.
        data_noise = data + torch.randn(data.shape).to(device)
        recon_batch = model(data_noise)
        break

plt.figure(figsize=(20, 12))
for i in range(5):
    print(f"Image {i} with label {labels[i]}")
    # Row 1: noisy input
    plt.subplot(3, 5, 1 + i)
    plt.imshow(data_noise[i].view(28, 28).cpu().detach().numpy(), cmap='binary')
    plt.axis('off')
    # Row 2: denoised reconstruction
    plt.subplot(3, 5, 6 + i)
    plt.imshow(recon_batch[i].view(28, 28).cpu().detach().numpy(), cmap='binary')
    plt.axis('off')
    # Row 3: clean original
    plt.subplot(3, 5, 11 + i)
    plt.imshow(data[i].view(28, 28).cpu().detach().numpy(), cmap='binary')
    plt.axis('off')
plt.show()

Output:

[Figure: Result — five noisy digits, their denoised reconstructions, and the clean originals]

Row 1: Noisy images (input)
Row 2: Denoised outputs (autoencoder reconstructions)
Row 3: Original images (target, uncorrupted)
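Beyond visual inspection, you can quantify denoising quality with the average reconstruction error over the whole test set. A minimal sketch, reusing the model, criterion, test_loader, and device defined above:

Python
model.eval()
total_loss, n = 0.0, 0
with torch.no_grad():
    for data, _ in test_loader:
        data = data.to(device)
        noisy = data + torch.randn_like(data)  # same corruption as in training
        recon = model(noisy)
        total_loss += criterion(recon, data.view(data.size(0), -1)).item() * len(data)
        n += len(data)
print(f'Average test MSE: {total_loss / n:.4f}')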

Applications of DAE

Advantages

Limitations
