
Pretrained blocks: Reducing the complexity of building DL models

May 5, 2019 / 6 min read

Ele-Kaja Gildemann, Product Owner

Building well-performing neural networks is a complex task that requires specific skills, knowledge, resources and data to succeed. That’s why we have taken a big leap towards reducing the complexity of building deep learning models and helping you succeed with deep learning on the Peltarion Platform.

This is how

We've added a pretrained VGG feature extractor as one of the first pretrained networks, trained on 1.2M ImageNet images.

The VGG16 feature extractor

Why pretrained blocks?

Pretrained blocks, called Pretrained snippets on the platform, are a powerful feature with numerous benefits: they reduce the time and skills needed to get started, lower costs, and support companies and individuals who don't own large datasets, enabling them to get value in their specific domain or problem. This is because pretrained networks have already learned the basic representations of data structures, so training on a small, domain-specific dataset is enough to provide value.
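The idea can be sketched in a few lines of Keras (the platform itself is visual, so this is only a rough code equivalent): a pretrained convolutional base is frozen and reused as a fixed feature extractor. Note that `weights="imagenet"` is what loads the pretrained ImageNet weights in practice; `weights=None` is used here only to keep the sketch self-contained.

```python
import numpy as np
import tensorflow as tf

# In practice, pass weights="imagenet" to load the pretrained filters;
# weights=None keeps this sketch runnable without a download.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(32, 32, 3), pooling="avg")
base.trainable = False  # the learned representations are reused, not retrained

# The frozen base turns each image into a 512-dimensional feature vector,
# which a small trainable head can then learn to classify.
images = np.random.rand(4, 32, 32, 3).astype("float32")
features = base.predict(images, verbose=0)
print(features.shape)  # (4, 512)
```

Because only a small head on top of these features needs training, a modest domain-specific dataset can be enough.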

When using the VGG feature extractor, you will notice that we have also grouped the deep neural network blocks to hide unnecessary complexity and fit the model in the canvas. You can expand and collapse the groups at any time, or add additional layers at the end to adjust the model.

How to use transfer learning with pretrained blocks on the Peltarion Platform

Follow these guidelines to apply transfer learning to an image classification task:

  1. Import and save a well-formatted, labeled image dataset to the platform. The images should be at least 32x32 px, since VGG down-samples the input heavily through its pooling layers. For example, you can use the Peltarion Sidekick repository functions to prepare and upload the HAM10000 dataset. If your images are smaller, you can upscale them with the 2D Upsampling block.
  2. Create a new experiment and add a VGG feature extractor from the Pretrained snippets section on the modeling canvas. In the dialog where you choose the weights, make sure the Weights trainable setting is set to No. Why? See this article.
  3. Set input as Images.
  4. Add a few (e.g., two) Dense layers and a target block. Make sure the number of target nodes matches the number of classes you want to predict. Set the loss to Categorical crossentropy, and set the activation of the last Dense layer to Softmax.
  5. Choose a batch size that lets your model fit in memory, then click Run!
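As a rough Keras equivalent of the steps above (the platform builds this visually, so names and exact layer sizes here are illustrative assumptions, including the seven HAM10000 lesion classes; `weights="imagenet"` would load the pretrained weights, while `weights=None` keeps the sketch self-contained):

```python
import tensorflow as tf

NUM_CLASSES = 7  # assumption: HAM10000 has 7 lesion classes

# Step 2: the pretrained VGG feature extractor, with weights locked.
# Use weights="imagenet" in practice; weights=None avoids a download here.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(64, 64, 3))
base.trainable = False  # Weights trainable = No

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),              # Step 3: image input
    tf.keras.layers.UpSampling2D(size=2),                  # Step 1: upscale small images
    base,                                                  # Step 2: frozen feature extractor
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),         # Step 4: dense layers
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# Step 5: model.fit(train_images, train_labels, batch_size=32)
```

The 2D Upsampling step doubles 32x32 inputs to 64x64 before they reach the VGG base, mirroring what the platform's 2D Upsampling block does for small images.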

When transfer learning is done right, you should quickly see your neural network start generating good predictions.

Tip! To get the most out of pretrained blocks, initially lock all layers' weights except for the last few Dense layers; these are the layers that learn the class representations of your dataset. Once you see your network succeed during training, you can gradually duplicate the experiment and unlock the weights of more layers as you see fit. Note, however, that if you make all layers trainable immediately, or too early in the training process, there is a high risk of catastrophic forgetting.
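In Keras terms, this gradual unlocking might look as follows (a sketch, not the platform's implementation; the layer names are those Keras assigns to VGG16, and `weights=None` stands in for the pretrained weights to keep the sketch self-contained):

```python
import tensorflow as tf

# weights="imagenet" in practice; weights=None keeps the sketch runnable offline.
base = tf.keras.applications.VGG16(weights=None, include_top=False,
                                   input_shape=(64, 64, 3))

# Phase 1: everything frozen; only the new Dense head learns.
base.trainable = False

# Phase 2 (after training stabilizes): unlock only the last convolutional
# block, keeping the earlier, more generic filters fixed.
base.trainable = True
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

# Unfreezing all layers at once risks catastrophic forgetting: large early
# gradients can overwrite the pretrained filters. A low learning rate helps.
trainable = [l.name for l in base.layers if l.trainable]
print(trainable)  # ['block5_conv1', 'block5_conv2', 'block5_conv3', 'block5_pool']
```

Each unlocking step corresponds to duplicating the experiment on the platform and flipping the trainable setting for one more group of layers.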

This is the first step of the transfer learning capabilities on the Peltarion Platform. Stay tuned for more pretrained blocks and helpful tutorials available soon!

For more information about VGG, see this article on our Knowledge center.


    Ele-Kaja Gildemann is a Product Owner at Peltarion. She has a degree in computer science from Tallinn University of Technology and more than 15 years of experience in sectors as diverse as digital services, telecom and retail. She is passionate about data-driven product development, user experience and machine learning.
