Denoising images

Reconstructing images with an autoencoder

This tutorial will show you how to build a model for unsupervised learning using an autoencoder. Unsupervised in this context means that the input data has not been labeled, classified or categorized.

An autoencoder encodes a dense representation of the input data and then decodes it to reconstruct the input.

Target group: Intermediate users

Preread: Before following this tutorial, it is strongly recommended that you complete the tutorial Deploy an operational AI model, if you have not done so already.

The problem — Denoising images

To train your model, you will use the images in the MNIST dataset. These images are used both as input and target in the model.

Figure 1. Autoencoder reconstructing images

An interesting property of autoencoders is that they typically denoise input images that have artificial noise added to them, even though they were never trained to do this specifically. The reason is that the dense representation learned during training does not capture the random noise, so the images are reconstructed without it. You will use this property to realize the image denoising use case.

Send noisy images to the deployed model to test it

Once the model is deployed, you can send it images that have been modified to contain some random pixel noise. This noise is not part of the learned representation of the data and will be filtered out in the model predictions.
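To make "random pixel noise" concrete, here is a minimal sketch of how such noise can be added to an image scaled to [0, 1]. The function name and the noise_factor value are illustrative, not part of the platform:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_pixel_noise(image, noise_factor=0.5):
    """Add Gaussian pixel noise to an image scaled to [0, 1], then clip."""
    noisy = image + noise_factor * rng.normal(size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.zeros((28, 28))      # stand-in for a normalized MNIST image
noisy = add_pixel_noise(clean)
print(noisy.shape)              # (28, 28)
```

Clipping keeps the noisy pixels in the same value range as the clean images, which is what the model expects as input.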

The data

The original MNIST dataset consists of small, 28 x 28 pixel images of handwritten numbers that are annotated with a label indicating the correct number. The dataset consists of a training set of 60K examples and a test set of 10K examples.

You will use both the training and the test set in this tutorial.

Read more about the MNIST dataset here: MNIST dataset.

Create a project for denoising images

First, create a project and name it so you know what kind of project it is.

Once created, you can click on the project name in the Project options menu at any time to view the description, running experiments and the amount of computing resources that you have spent on the project.

New project button

Add the grayscale MNIST dataset to the platform

  1. Navigate to the Datasets view and expand Import Data.

  2. Copy the link below:

  3. Click Import and paste the copied link. The zip file includes the complete MNIST training dataset.

  4. When done click Next, name the dataset MNIST and click Done.

Figure 2. The first samples of the MNIST dataset are shown in the Datasets view with one column for each feature.

Rename the image-file column to image

Later in this tutorial you will have the option to evaluate the model off-platform in a Jupyter notebook. The dash (-) is not a valid character in Python identifiers, so it is recommended that you rename the image-file column to image. To do this, click the column, then change the name in the Inspector to the right.
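You can verify this with Python's built-in str.isidentifier check:

```python
# Why "image-file" is a problem in Python but "image" is not
print("image-file".isidentifier())  # False: the dash is not allowed
print("image".isidentifier())       # True
```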

Alternatively, you can change the name of an input or output feature in the Deployment view, after you have completed an experiment.

Subsets of the MNIST dataset

In the top right corner, you will see the subsets. All samples in the dataset are by default split into 10% validation, 10% test, and 80% training subsets. Keep these default values in this project.

Save the dataset

You’ve now created a dataset ready to be used in the platform. Click Save version and then Use in new experiment.

Design a deep learning autoencoder

The model that you will build is based directly on the example provided in this Keras autoencoder tutorial.

Create a new experiment

  1. Click New experiment. Name the experiment and make sure that the correct dataset is selected in the Experiment wizard.

  2. Click Create blank experiment.

  3. Navigate to the Settings tab in the Inspector.


Build the encoder

  1. In the Build tab expand the Blocks section.

  2. Add an Input block and set Feature to image.

  3. Add a 2D Convolution block:

    • Filters: 16

    • Padding: Same

  4. Add a 2D Max pooling block:

    • Horizontal stride: 2

    • Vertical stride: 2

    • Padding: Same

  5. Copy and paste the 2D Convolution and 2D Max pooling blocks, then set Filters to 8.

  6. Copy and paste the last 2D Convolution and the 2D Max pooling blocks.

Build the decoder

  1. Add a 2D Convolution block:

    • Filters: 8

    • Padding: Same

  2. Add a 2D Upsampling block

  3. Copy and paste the 2D Convolution and 2D Upsampling blocks.

  4. Again, copy and paste the 2D Convolution and 2D Upsampling blocks, then set Filters to 16 and Padding to Valid.

Finalize the model

  1. Add a 2D Convolution block:

    • Filters: 1

    • Padding: Same

    • Activation: Sigmoid

  2. Add a Target block:

    • Feature: image

    • Loss: Binary crossentropy
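For reference, the block structure above corresponds roughly to the model in the Keras autoencoder tutorial. The following is a sketch, not the platform's exact implementation; the ReLU activations and 3 x 3 kernel sizes are assumptions taken from that tutorial:

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))

# Encoder: three Conv + MaxPooling stages, 28x28 -> 14x14 -> 7x7 -> 4x4
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
encoded = layers.MaxPooling2D(2, padding="same")(x)

# Decoder: three Conv + Upsampling stages, 4x4 -> 8x8 -> 16x16 -> 28x28
x = layers.Conv2D(8, 3, activation="relu", padding="same")(encoded)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(16, 3, activation="relu", padding="valid")(x)  # 16x16 -> 14x14
x = layers.UpSampling2D(2)(x)

# Final 1-filter convolution with sigmoid activation
outputs = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
print(autoencoder.output_shape)  # (None, 28, 28, 1)
```

Note how the Valid padding in the last decoder convolution shrinks the 16 x 16 feature map to 14 x 14, so that the final upsampling recovers the original 28 x 28 size.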

Run experiment

You can train the model with the default values in the Settings tab, so just click Run.

Analyze experiment

Navigate to the Evaluation view and watch the model train. You can see the loss decreasing epoch by epoch.

Interpreting the confusion matrix

When you created the model, you selected the binary crossentropy loss function. This is a loss function used on problems involving yes/no (binary) decisions. In your model, the decisions apply to the individual pixels in the input images. These pixels should be classified as either black or white.

The output from the last activation function (sigmoid) is a value between 0 and 1. Values below 0.5 are considered black and values equal to or above 0.5 are considered white. The model output is a grayscale image where the pixel intensity is represented by a value between 0 and 255.
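A small sketch of how the sigmoid outputs map to the black/white decision and to grayscale pixel values (the example values are made up):

```python
import numpy as np

probs = np.array([0.02, 0.49, 0.50, 0.97])   # example sigmoid outputs

black_or_white = probs >= 0.5                # False = black, True = white
grayscale = (probs * 255).round().astype(np.uint8)

print(black_or_white.tolist())   # [False, False, True, True]
print(grayscale.tolist())        # [5, 125, 128, 247]
```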

Number of predictions in the confusion matrix

There are 60,000 input images of dimension 28 x 28, and 10% of those are included in the validation set. This means that the approximate total number of values in the confusion matrix can be calculated as follows:

28 x 28 x 60,000 x 0.1 ≈ 4,704,000
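The same estimate in Python, using the default 10% validation split:

```python
pixels_per_image = 28 * 28          # 784 pixel-level predictions per image
images_in_dataset = 60_000
validation_fraction = 0.10          # default validation split

predictions = pixels_per_image * images_in_dataset * validation_fraction
print(int(predictions))  # 4704000
```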

To see the actual number of predictions, click Cells under the confusion matrix and select Count.

Figure 3. Model evaluation - Confusion matrix displays actual number of predictions

What about denoising?

The confusion matrix indicates the model's ability to reconstruct the images in the validation subset. If you want to find out whether the model can remove pixel noise, you need to deploy it and apply your test data, e.g., via cURL or a Jupyter notebook.

Test if your autoencoder can remove noise

Deploy the trained model

  1. In the Deployment view, click New deployment. The Create deployment popup will appear.

  2. Select your last Experiment and the Checkpoint for best epoch.

  3. Click the Enable switch to deploy the experiment.

Alternative 1 — Single predictions with cURL

You can use the cURL command to test the model on a handful of input images.

Figure 4. Deployment view - Input examples (Curl)
  1. Download and unzip this test dataset. The images in this dataset contain random noise.

  2. Open a terminal and change to the directory that contains the noisy image files.

  3. In the Deployment view, click Copy to clipboard next to the Input example.

  4. Update the curl example so that the image parameter references one of the test files.

curl -X POST \
-F "image=@1.png" \
-u "<Token>" \
<Deployment URL>
  5. Run the cURL command in the terminal. The output is a Base64-encoded string.

  6. To view the output, copy all characters between the double quotes in the output and paste them into an online Base64 decoder tool, e.g., onlinejpgtools.

  7. Save the reconstructed image.
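If you prefer to decode the response locally instead of using an online tool, a minimal Python sketch looks like this. The Base64 string below is a stand-in for the quoted string returned by your deployment:

```python
import base64

# Stand-in for the quoted Base64 string in the cURL response
b64_string = base64.b64encode(b"example image bytes").decode("ascii")

image_bytes = base64.b64decode(b64_string)
with open("reconstructed.png", "wb") as f:
    f.write(image_bytes)

print(len(image_bytes) > 0)  # True
```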

Compare input and reconstructed image

Compare the reconstructed image with the input image. Are they similar and has the model successfully removed the noise?

Alternative 2 — Analyzing the model output in Python

If you are familiar with Python, you can analyze the model predictions on the test dataset using this Jupyter notebook. To run the notebook, you must either clone the entire GitHub repository or save the file in raw format with the .ipynb extension.

  1. Download and unzip this test dataset without random noise (it will be added in the notebook).

  2. Start the Jupyter Notebook:

$ jupyter notebook image_denoising_analysis.ipynb
  3. Install Sidekick and any other required Python packages.

  4. Update:

    • The path to your test dataset

    • The URL for your deployment

    • The token for your deployment

  5. Run the notebook.

Compare input and reconstructed image

Compare the reconstructed image with the input image. Are they similar and has the model successfully removed the noise?
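Beyond eyeballing the images, one way to quantify the comparison is the per-pixel mean squared error between input and reconstruction. This is an illustrative sketch, not part of the notebook:

```python
import numpy as np

def mean_squared_error(a, b):
    """Per-pixel mean squared error between two images scaled to [0, 1]."""
    return float(np.mean((a - b) ** 2))

clean = np.full((28, 28), 0.5)
reconstruction = clean + 0.01      # a near-perfect reconstruction
print(round(mean_squared_error(clean, reconstruction), 6))  # 0.0001
```

The lower the error, the closer the reconstruction is to the input; a successful denoising run should yield a low error against the clean original.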

Figure 5. Notebook output when we run it

Tutorial recap

You have learned how to create an autoencoder, a type of unsupervised neural network. The model is trained to reconstruct images of handwritten numbers. In this process, it will filter out pixel noise that was not present in the training data.

Alternative solutions to image denoising

The autoencoder approach to image denoising has the advantage that it does not require access to both noisy images and clean images that represent the ground truth. However, if you want to create a model that is optimized for noise reduction only, supervised learning with, e.g., a U-Net or Tiramisu architecture will give better results.

Other applications for autoencoders

Other examples of practical applications of autoencoders include:

  • Computer vision tasks such as colorizing black-and-white images and enhancing low-light images

  • Recommendation systems — predicting user preferences

  • Anomaly detection for manufacturing, maintenance, medical applications etc.

Figure 6. Anomaly detection example - Reconstruction loss (error) will be high when an input sample is dissimilar from the data that was used in training.
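The anomaly-detection idea can be sketched as a simple threshold on the reconstruction error. The threshold value here is hypothetical; in practice it would be tuned on reconstruction errors measured over normal training data:

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Per-pixel mean squared error between input and reconstruction."""
    return float(np.mean((x - x_hat) ** 2))

threshold = 0.02   # hypothetical, tuned on normal data

normal = np.full((28, 28), 0.5)
good_recon = normal + 0.01                 # small error: in-distribution
bad_recon = np.clip(normal + 0.4, 0, 1)    # large error: likely anomaly

print(reconstruction_error(normal, good_recon) > threshold)  # False
print(reconstruction_error(normal, bad_recon) > threshold)   # True
```

Because the autoencoder only learns to reconstruct the data it was trained on, dissimilar inputs reconstruct poorly and exceed the threshold.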