Estimated time: 45 min

Classifying car damage

Transfer learning with a pretrained snippet

Save training time and create well-performing models with small datasets. Sounds good? This tutorial will show you how.

Target audience: Data scientists and developers

Preread: A Comprehensive Hands-on Guide to Transfer Learning with Real-World Applications in Deep Learning

The problem

You will learn

In this tutorial, you will use a pretrained snippet in a classification model designed to detect different types of car damage. The number of images in the input data is small relative to the number of classes, and the images vary significantly in appearance. This makes it challenging to create a well-performing model based on this dataset alone. As you will see, transfer learning can help to mitigate these problems.

Domain adaptation

The first layers in a convolutional neural network are designed to detect low-level features such as curves and edges. The output from these and subsequent convolutional layers will recognize increasingly high-level features that together represent the object to be classified, let's say dogs, clothes, or fruits.

If you have previously created a classifier model for a set of objects in one domain, chances are that the learned weights in its first layers are also applicable to other objects in different domains. This is because the low-level geometric features are shared among all the classes in the different models.

Training iterations

One common approach in deep learning is to take a model that has been trained on a very large dataset, e.g., ImageNet, freeze its weights, and then feed the output to a set of fully connected layers, called Dense blocks on the platform. Once you have trained the Dense blocks, often referred to as the "top" of a CNN, you can continue to train the model in iterations. In each of these iterations, you "unfreeze" the weights for a group of 2D Convolution blocks while gradually lowering the learning rate. To continue to train a model on the platform, you simply duplicate an experiment and include the weights from the best epoch of the previous experiment.
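
Outside the platform, this freeze-and-unfreeze cycle corresponds roughly to the Keras sketch below. It is only meant to illustrate the idea; on the platform, the snippet settings and duplicated experiments take care of this for you, and the number of layers to unfreeze here is illustrative.

    # Conceptual sketch of the iterative approach (illustrative, not the platform's code).
    import tensorflow as tf

    # Load a VGG16 base trained on ImageNet, without its original fully connected top.
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
    base.trainable = False  # iteration 1: only the new Dense "top" is trained

    # ...build and train the top here (see the model sketch further down)...

    # Iteration 2 and onwards: unfreeze a group of convolutional layers and
    # continue training from the best weights so far, with a lower learning rate.
    for layer in base.layers[-4:]:
        layer.trainable = True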

When you create an experiment on the Peltarion platform, you can make use of the learned weights from previous experiments, or you can use a pretrained snippet such as VGG16. This will reduce the training time, and thereby GPU hour costs. It also makes it possible to improve the performance of models that are trained on small datasets.

The data

The dataset that you will use in this experiment contains approximately 1,500 unique RGB images with the dimensions 224 x 224 pixels, and it is split into a training and a validation subset. The underrepresented classes in the training subset have been upsampled during preprocessing in order to reduce bias. This means that the index file (index.csv) has duplicate entries that link to the same image file. The total number of entries in the index file is approximately 3,800.

Each image belongs to one of the following classes:

  • Broken headlamp
  • Broken tail lamp
  • Glass shatter
  • Door scratch
  • Door dent
  • Bumper dent
  • Bumper scratch
  • Unknown

Below are sample images from the various classes in the dataset. Note that the unknown class contains images of cars that are either in pristine condition or completely wrecked.

Each collected image represents one car with one specific type of damage. This means that the dataset can be used to solve a single-label classification problem.

Example images from each class

Dataset generation deep dive

If you want to learn how the raw data was processed to create the dataset used in this tutorial, you can dive deeper here.
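
As a rough illustration of the upsampling step described above, the index file could have been balanced with a few lines of pandas. The column names and the resampling logic are assumptions here, not the exact preprocessing code:

    # Hedged sketch of upsampling the training rows of index.csv (column names assumed).
    import pandas as pd

    index = pd.read_csv("index.csv")            # assumed columns: image, class, subset
    train = index[index["subset"] == "T"]
    val = index[index["subset"] == "V"]

    largest = train["class"].value_counts().max()
    balanced = pd.concat(
        # Duplicate rows (not image files) until every class has roughly the same count.
        [group.sample(largest, replace=True, random_state=0)
         for _, group in train.groupby("class")]
    )
    pd.concat([balanced, val]).to_csv("index.csv", index=False)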

Create a project

First, create a project and name it so you know what kind of project it is. Naming is important.

A project combines all of the steps in solving a problem, from preprocessing of datasets to model building, evaluation and deployment. Using projects makes it easy to collaborate with others.

Upload the car damage dataset to the platform

  1. Navigate to the Datasets view and click New dataset.
  2. Copy the link below:
    https://storage.googleapis.com/bucket-8732/car_damage/preprocessed.zip
  3. Click Import and paste the copied link. The zip includes the whole dataset.
  4. When done, click Next, name the dataset Car damage classifier, and click Done.

Datasets view — Dataset with default subsets

Create subsets of the car damage dataset

The subset column, containing a T or a V, indicates if the row should be used for training or validation. The split between training and validation data is approximately 80% and 20%. This column was created during the preprocessing of the raw data.

Even though it is possible to use the default subsets created by the platform when you upload the data, it is better to create a conditional split based on the subset column. There is no separate labeled test subset for this dataset. If you want to analyze the performance of the deployed model outside the platform, you can instead compare the model predictions with the ground truth provided in the predefined validation subset.

  1. Delete the default subsets Training and Validation by clicking the Subsets options menu (...) then Delete.
  2. Click New subset and then Add conditional filter.
  3. Name the training subset Training and enter the condition subset is equal to T.
  4. Name the validation subset Validation and enter the condition subset is equal to V.

Training subset

Validation subset

In Normalize on subset, select the training subset that you have created.

Save the dataset

You’ve now created a dataset ready to be used in the platform. Click Save version and navigate to the Modeling view.

Design a pretrained model

Adding a pretrained snippet

  1. In the Modeling view, click New experiment. Name the model and click Create.
  2. Expand Snippets in the Inspector and select VGG16. A dialog will open where you can choose the weights and if the blocks in the snippet should be trainable.
  3. Select ImageNet and then click Create. The VGG16 blocks will be added as two collapsed groups on the canvas, VGG16 and Head. You can expand and collapse the groups at any time by clicking +/-. An Input block and a Target block will also be added.
  4. Select the Input block on the canvas and set Feature to image.

VGG16 snippet

Replacing the Head

You cannot replace or add blocks within a group, but you can replace an entire group with individual blocks. You will do this next, since the Flatten block within the Head group should be replaced with a 2D Global average pooling block for flattening the data. This is common when using VGG models and is required here to get good results. It also has the added benefit that it works well with image sizes other than 224 x 224 pixels (the size of the images in the ImageNet dataset).

The model will also train better if you drop the weights in the Dense blocks, so replacing that part of the Head group is beneficial as well. A rough Keras sketch of the resulting model is included below.

  1. Select the Head group and delete it.
  2. Add a 2D Global average pooling block.
  3. Add three Dense blocks.
  4. Add a Target block.
  5. Set Nodes to 8 (the number of classes in the dataset) and Activation to Softmax in the last Dense block.
  6. Set Feature to class and Loss to Categorical crossentropy in the Target block.

The model
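
For reference, the model you have just assembled corresponds roughly to the Keras sketch below. The node counts of the first two Dense blocks are placeholders; the tutorial only fixes the last one at 8 nodes with Softmax.

    # Approximate Keras equivalent of the modified model (node counts of the first
    # two Dense layers are placeholders, not values prescribed by the tutorial).
    import tensorflow as tf

    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                               # the snippet blocks stay frozen

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),        # replaces the Flatten block
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(8, activation="softmax"),  # one node per class
    ])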

Running the first experiment

Let's start training the Dense blocks of the model for ten epochs with the Adam optimizer. Everything should be set up correctly, so just click Run.
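
Continuing the sketch above, this first iteration would look something like the following in Keras; train_ds and val_ds are placeholders for the training and validation data.

    # First iteration: only the Dense top learns, Adam optimizer, ten epochs.
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds/val_ds are placeholders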

Analyzing the first experiment

Go to the Evaluation view. Since the model solves a classification problem, a confusion matrix is displayed. The top-left to bottom-right diagonal shows correct predictions; everything outside this diagonal is an error.

Note that the metrics are based on the validation subset, which consists of only 20% of the original dataset.

Model evaluation — First experiment, number of predictions in confusion matrix

Click the dropdown next to Cells and select Percentage. The normalized values that are now displayed correspond to the recall for each class.

The recall values clearly indicate that the model has learned the features in the images, but there is still room for improvement. So far, you have only trained Dense blocks of the model. By training some of the 2D convolution blocks in the snippet, it should be possible to get better results.

Model evaluation — First experiment, normalized confusion matrix
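
If you want to verify the per-class recall values yourself, they are simply the diagonal of the row-normalized confusion matrix. A short scikit-learn sketch, where y_true, y_pred, and class_names are placeholders for the validation labels, predictions, and class names:

    # Per-class recall is the diagonal of the confusion matrix divided by the row sums.
    import numpy as np
    from sklearn.metrics import confusion_matrix

    cm = confusion_matrix(y_true, y_pred)    # y_true/y_pred: validation labels and predictions
    recall = cm.diagonal() / cm.sum(axis=1)  # rows correspond to the true classes
    print(dict(zip(class_names, np.round(recall, 2))))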

Running the second experiment

  1. Go back to the Modeling view.
  2. Click the Experiment options menu (...) next to the name of the last experiment and select Duplicate. When prompted, select to include the weights from the best epoch.
  3. Select the new experiment.
  4. Expand the VGG16 group.
  5. Select the last three 2D Convolution blocks and set Trainable to Yes.
  6. Click the Settings tab in the Inspector and select the SGD optimizer.

    Switching from Adam to SGD in mid-training is common practice since Adam typically converges faster and requires less tuning, while SGD tends to generalize better. (A rough Keras sketch of this second iteration follows after these steps.)
  7. Set Learning rate to 0.0005.

    Lowering the learning rate while increasing the number of trainable blocks helps to avoid catastrophic forgetting, which can easily happen when the weights learned by an old model are changed to meet the objectives of a new model.
  8. Click Run.
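
For reference, this second iteration corresponds roughly to the following continuation of the earlier Keras sketch. The layer selection relies on the standard Keras VGG16 layer names, and the number of epochs is illustrative.

    # Second iteration: unfreeze the last three convolution layers (block5_conv1-3),
    # switch to SGD with a lower learning rate, and continue from the previous weights.
    base.trainable = True
    for layer in base.layers:
        layer.trainable = layer.name.startswith("block5_conv")

    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0005),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)  # epoch count is illustrative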

Analyzing the second experiment

Go to the Evaluation view. You should see a general improvement in all performance metrics, including the training loss and the recall values for each class.

Model evaluation — Second experiment, normalized confusion matrix

Analyzing the model in a Jupyter Notebook

If you are familiar with Python, you can analyze the model predictions on the validation subset using this Jupyter Notebook.

Deploy the trained model

  1. In the Deployment view, click New deployment.
  2. Select the experiment and checkpoint of your trained model that you want to use for predictions.
  3. Click the Enable switch to deploy the experiment.

Start the notebook

The following instructions require that you are familiar with Python and Jupyter Notebook.

  1. Start the Jupyter Notebook:
    $ jupyter notebook car_damage_analysis.ipynb
  2. Install Sidekick and any other required Python packages.
  3. Update the path to your dataset (zip file), and the URL and token for your deployment. (A minimal request sketch follows after these steps.)
  4. Run the notebook.
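
In essence, the notebook sends the validation images to your deployment and compares the responses with the ground truth. A minimal sketch of such a request using plain requests is shown below; the URL, token, field name, and response format are assumptions here, so treat the Deployment view and the notebook as the authoritative reference.

    # Hedged sketch of a single prediction request to the deployed model.
    # The URL, token, and field name ("image") are assumptions; copy the real
    # values and request format from the Deployment view.
    import requests

    URL = "<your deployment URL>"
    TOKEN = "<your deployment token>"

    with open("validation_image.jpg", "rb") as f:
        response = requests.post(
            URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"image": f},
        )
    print(response.json())  # predicted class probabilities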

Tutorial recap

Using transfer learning, you have saved hours of training time and created a better-performing model than would have been possible had you trained the model from scratch.

In this tutorial, only three blocks of the snippet were set to trainable. For other datasets, training more blocks of the snippet over a longer chain of iterations may continue to provide meaningful improvements to the performance of the model. There are also cases where training the entire model in a single iteration will give you good results.