Skin cancer detection

Solve an image segmentation problem

This tutorial will show you how to build a model that solves an image segmentation problem, that is, partitioning an image into sections, in this case "lesion" or "not lesion".

Target group: Intermediate users

Preread: We suggest that you start your deep dive into the Peltarion Platform with the Deploy an operational AI model tutorial. If you are unfamiliar with CNNs and how they work, A Beginner’s Guide To Understanding Convolutional Neural Networks is a suggested pre-read; it does not aim to be an exhaustive learning reference.


The problem - Predict lesion segmentation boundaries


Although skin lesions are visible to the naked eye, early-stage melanomas may be difficult to distinguish from benign skin lesions with similar appearances. Dermatoscopes, simple hand-held devices that eliminate surface glare and magnify structures invisible to the naked eye, significantly improve the distinction of melanomas from other skin lesions.

The International Skin Imaging Collaboration

The International Skin Imaging Collaboration (ISIC) is a partnership whose goal is to help reduce melanoma mortality. ISIC has created an open-source public archive of skin images that can be used to develop automated diagnostic systems.

The overarching goal of the ISIC Melanoma Project is to support efforts to reduce melanoma-related deaths and unnecessary biopsies by improving the accuracy and efficiency of melanoma early detection. To this end, the ISIC is developing proposed digital imaging standards and creating a public archive of clinical and dermoscopic images of skin lesions.

ISIC challenge

Since 2016, the ISIC Project has conducted an annual challenge for developers of artificial intelligence (AI) algorithms for melanoma diagnosis. The goal of this recurring challenge is to help participants develop image analysis tools that enable the automated diagnosis of melanoma from dermoscopic images.

In this tutorial, you will perform the first task of the ISIC challenge, which is to predict the lesion segmentation boundaries within dermoscopic images.


The data

The original training dataset for the ISIC 2018 challenge consists of 2,594 skin lesion images, each with a corresponding segmentation mask image that indicates the lesion boundaries. White represents lesion areas and black represents non-lesion areas. A separate validation dataset is also available, but for this tutorial we will use the training dataset for both training and validation.

Lesion images and segmentation masks
Figure 1. Lesion images and segmentation masks

All images have an approximate aspect ratio of 1:1.5, and their sizes range from 1,022 x 767 to 4,288 x 2,848 pixels.

The images that you will upload to the platform have been processed to have a uniform aspect ratio and a size of 64 x 64 pixels.


Goals of the experiment

The goal of the experiment is to build, train, and deploy a model that accurately generates segmentation masks for the images in a test subset. The test subset is not used for training the model.


Create project

First, create a project and name it so that you know what kind of project it is. Naming is important.

A project combines all of the steps in solving a problem, from pre-processing of datasets to model building, evaluation, and deployment. Using projects makes it easy to collaborate with others.


Add the dataset

Please note that by working with this dataset, you accept the author’s license in the Dataset licenses section of the Knowledge center.

In the Datasets view, click Import free datasets and select the Skin lesion - tutorial dataset.

You don’t need to change anything, so just go ahead and click Use in new experiment.

Create an experiment

Create a model for image segmentation

In the Experiment wizard

  • Dataset tab.
    Make sure the new dataset is selected.

  • Inputs/target tab. Check that:

    • Input feature is image

    • Target feature is mask

  • Problem type tab.
    Select Image segmentation.

  • Click Create.

A complete model will appear on the Modeling canvas with all settings pre-populated by the platform.

Click Run to start training the model.

Run button

Analyze experiment

Watch the results unfold in the Loss and metrics tab in the Evaluation view. The model will continue to train until it is stopped early or until it has run for all of the set epochs. Training might take some time, as it often does in deep learning projects.

Predictions inspection tab

When the model has stopped training, navigate to the Predictions inspection tab and look at the results. What counts as good depends on your use case: sometimes you need highly accurate results, and sometimes good enough results will do.

In this tutorial, we decide that the predicted masks are good enough, so let’s move forward and test the model.


Test the model

Deploy the trained model

In the Evaluation view, click Create deployment.

Create deployment button
  1. Select your last Experiment and the Checkpoint for best epoch.

  2. Click Enable to deploy the experiment.

Single predictions with cURL

You can use the cURL command to test the model on a handful of input images.

  1. Download and unzip the test dataset.

  2. Resize some of the test images to the same dimensions as the training data images (64 x 64 pixels), either in an image editor or with a short script like the sketch after this list.

  3. In the Deployment view, copy the cURL Input example.
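
If you prefer to script the resizing, the minimal sketch below does it with Pillow. The file names are placeholders for one of the unzipped test images and the resized copy you want to create.

    # Minimal resizing sketch (assumes the Pillow package is installed).
    # "ISIC_test_image.jpg" and "Lesion_Test_Image.png" are placeholder file names.
    from PIL import Image

    img = Image.open("ISIC_test_image.jpg")
    # Force the same size as the training images; 64 x 64 also makes the aspect ratio uniform.
    img = img.resize((64, 64), Image.LANCZOS)
    img.save("Lesion_Test_Image.png")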

In a terminal

  1. Open a terminal and change to the directory that contains the resized image files.

  2. Paste the cURL example.

  3. Update the cURL example so that the image parameter references one of the resized test files.
    Remember to add an @-symbol in front of the image name.
    Example: -F "image=@Lesion_Test_Image.png"

  4. Run the cURL command in the terminal. The output will be a Base64 encoded string.

  5. To visualize the image mask, copy all characters between the double quotes in the output and paste them into an online Base64 decoder tool, e.g., onlinejpgtools, or decode the string locally as in the Python sketch after this list.

  6. Save the image mask and compare it with the input image. Do you agree that the image mask correctly marks the location of the lesion?
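
If you prefer to do steps 4 and 5 from Python instead of the terminal and an online decoder, the sketch below posts a resized image to the deployment and saves the decoded mask. The URL, token, authorization header, and JSON key are placeholders or assumptions; copy the real values from the cURL Input example in the Deployment view and inspect the actual response.

    # Minimal sketch, assuming the requests package is installed.
    # DEPLOYMENT_URL and TOKEN are placeholders; the Authorization header and
    # the "mask" key are assumptions, so match them to your cURL Input example.
    import base64
    import requests

    DEPLOYMENT_URL = "https://<your-deployment-url>"
    TOKEN = "<your-deployment-token>"

    with open("Lesion_Test_Image.png", "rb") as image_file:
        response = requests.post(
            DEPLOYMENT_URL,
            headers={"Authorization": "Bearer " + TOKEN},
            files={"image": image_file},
        )
    response.raise_for_status()

    mask_b64 = response.json()["mask"]  # inspect response.json() if the key differs
    # Some deployments prefix the string with "data:image/png;base64,"; strip it if present.
    if "," in mask_b64:
        mask_b64 = mask_b64.split(",", 1)[1]

    with open("predicted_mask.png", "wb") as mask_file:
        mask_file.write(base64.b64decode(mask_b64))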

Analyze the model’s output in Python

If you are familiar with Python, you can analyze the model predictions on the test dataset using this Jupyter Notebook.

  1. Download the test dataset from the ISIC 2018 challenge.

  2. Unzip the file that you have downloaded.

  3. Start the Jupyter Notebook:
    $ jupyter notebook skin_lesion_image_segmentation_analysis.ipynb

  4. Install Sidekick and any other required Python packages.

  5. Update the path to your dataset, and the URL and token of your deployment.

  6. Run the notebook.

Notebook output
Figure 2. Notebook output – Left: input image, middle: predicted mask, right: white sections of the mask are made transparent and superimposed onto the image.

The pixels in the output will have an intensity in the range between 0 and 255. Pixels with an intensity greater than 127 are considered white.
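
If you want to apply that threshold yourself and produce a strictly black-and-white mask, a minimal sketch with Pillow and NumPy could look like this (the file names are placeholders):

    # Minimal sketch: binarize a predicted mask at intensity 127.
    # "predicted_mask.png" is a placeholder for a mask saved from the deployment.
    import numpy as np
    from PIL import Image

    mask = np.array(Image.open("predicted_mask.png").convert("L"))
    binary = np.where(mask > 127, 255, 0).astype(np.uint8)  # >127 becomes white, the rest black
    Image.fromarray(binary).save("binary_mask.png")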


Tutorial recap

  • You have trained a model based on a prebuilt block to create segmentation masks that outline the contours of skin lesions. The model generates these masks by making a binary prediction for each pixel in the input image.
