VGG - pretrained

The VGG (Visual Geometry Group) network greatly influenced the design of deep convolutional neural networks. Although newer architectures outperform it, VGG remains useful for many applications, such as image classification.

Input image size: 32x32 pixels and larger.

VGG architecture

On the Peltarion Platform, the pretrained VGG network is implemented in the following snippet:

  • VGG16 feature extractor. Same as the full VGG16 but without the final fully connected classification layers.

The 2D Convolutional blocks all have a 3x3 filter (Width x Height). This is the smallest size to capture the notion of left-right, up-down, and center.
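
As a point of comparison outside the Platform, a roughly equivalent feature extractor can be sketched with Keras. This is a minimal sketch, assuming TensorFlow/Keras is available; the "imagenet" weights and the 32x32 input shape are illustrative assumptions, not a description of the Platform snippet's internals.

    # Sketch of a VGG16 feature extractor, analogous to the Platform snippet.
    # The 'imagenet' weights and 32x32 input shape are assumptions for illustration.
    import tensorflow as tf

    feature_extractor = tf.keras.applications.VGG16(
        include_top=False,        # drop the final fully connected classifier
        weights="imagenet",       # pretrained weights (assumed here)
        input_shape=(32, 32, 3),  # 32x32 is the smallest accepted input size
    )
    feature_extractor.summary()   # every Conv2D layer uses a 3x3 kernel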

How to use the VGG with weights snippet

Note
Disclaimer
Please note that datasets, machine-learning models, weights, topologies, research papers and other content, including open source software, (collectively referred to as “Content”) provided and/or suggested by Peltarion for use in the Platform and otherwise, may be subject to separate third party terms of use or license terms. You are solely responsible for complying with the applicable terms. Peltarion makes no representations or warranties about Content. You expressly relieve us from any and all liability, loss or risk arising (directly or indirectly) from Your use of any third party content.

The basic idea is that you first create and train an experiment containing a pretrained snippet and some Dense blocks. Only the Dense blocks are trained in this first experiment.
Then you duplicate the first experiment, set all blocks to trainable, and train the new experiment with a very low learning rate.

The method is described in How to use pretrained snippets with weights.
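
For a code-level picture of the same two-phase recipe, here is a minimal Keras sketch under assumed settings; the Dense layer sizes, class count, learning rates, and "imagenet" weights are illustrative choices, not the Platform's defaults.

    import tensorflow as tf

    # Experiment 1: frozen VGG16 feature extractor + trainable Dense blocks.
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=(32, 32, 3))
    base.trainable = False  # only the Dense blocks are trained at first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes assumed
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_data, epochs=10)  # train the Dense blocks only

    # Experiment 2: duplicate the setup, make all blocks trainable,
    # and fine-tune with a much lower learning rate.
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_data, epochs=5)   # fine-tune the whole network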

Available weights

Terms

When using pretrained snippets, additional terms apply: VGG with weights licence.

Reference
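
Simonyan, K., & Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR 2015, arXiv:1409.1556.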
