VGG snippet

The VGG network greatly influenced the design of deep convolutional neural networks. Nowadays, better architectures exist for most image tasks, but VGG remains useful in more specialized cases such as feature extraction, style transfer, and autoencoders.

Input images for the VGG model must be at least 32x32 pixels.
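The 32x32 minimum follows from the architecture: VGG16 and VGG19 both contain five 2x2 max-pooling layers, each of which halves the spatial resolution, so the input must survive five halvings. A minimal sketch (the function name is illustrative, not part of the platform):

```python
# Sketch: why VGG needs inputs of at least 32x32 pixels.
# VGG16/VGG19 contain five 2x2 max-pooling layers, each halving
# the spatial resolution of the feature maps.

def spatial_size_after_pooling(side: int, num_poolings: int = 5) -> int:
    """Return the feature-map side length after repeated 2x2 max pooling."""
    for _ in range(num_poolings):
        side //= 2  # each 2x2 max pooling halves height and width
    return side

print(spatial_size_after_pooling(224))  # the paper's input size -> 7
print(spatial_size_after_pooling(32))   # the platform's minimum -> 1
```

A 32x32 image is reduced to a single pixel after the fifth pooling layer; anything smaller would vanish before reaching the classifier.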

VGG architecture

On the Peltarion Platform, the VGG network is available as two snippets: VGG16 (VGG Net-D in the paper), which consists of 16 layers, and VGG19 (VGG Net-E in the paper), which consists of 19 layers.
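The layer counts refer to weight layers only, i.e., convolutional and fully connected layers; the pooling layers carry no weights and are not counted. This can be checked against the channel configurations given in the paper (the variable and function names below are illustrative):

```python
# Sketch: counting the weight layers that give VGG16 and VGG19 their
# names. "M" marks a max-pooling layer; numbers are conv output channels.
VGG16_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
             512, 512, 512, "M", 512, 512, 512, "M"]
VGG19_CFG = [64, 64, "M", 128, 128, "M", 256, 256, 256, 256, "M",
             512, 512, 512, 512, "M", 512, 512, 512, 512, "M"]

def weight_layers(cfg, fully_connected: int = 3) -> int:
    """Count conv layers in the config plus the fully connected layers."""
    conv = sum(1 for item in cfg if item != "M")
    return conv + fully_connected

print(weight_layers(VGG16_CFG))  # 13 conv + 3 dense -> 16
print(weight_layers(VGG19_CFG))  # 16 conv + 3 dense -> 19
```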

The 2D Convolutional blocks all use a 3x3 filter (width x height), the smallest size that can capture the notions of left-right, up-down, and center.
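Stacking several 3x3 filters also lets the network see as far as one larger filter would, while using fewer parameters and more non-linearities. A quick sketch of how the receptive field grows (the function name is illustrative):

```python
# Sketch: the receptive field of stacked 3x3 convolutions (stride 1).
# Each additional 3x3 layer extends the receptive field by one pixel
# on each side.

def receptive_field(num_3x3_layers: int) -> int:
    """Side length of the input region seen by one output pixel."""
    field = 1
    for _ in range(num_3x3_layers):
        field += 2  # a 3x3 kernel adds one pixel on each side
    return field

print(receptive_field(2))  # two stacked 3x3 layers -> 5 (like one 5x5)
print(receptive_field(3))  # three stacked 3x3 layers -> 7 (like one 7x7)
```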

Figure 1. VGG architecture

How to use the VGG snippet

To add a VGG snippet, open the Snippet section in the Inspector and click VGG. The images in the dataset must be at least 32x32 pixels.

Remember to set the last Dense block to the number of categories in your experiment and to select the matching loss function in the Target block, e.g., 10 nodes and Categorical crossentropy for MNIST.
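How the last Dense block and the loss fit together can be sketched in plain Python (the helper functions are illustrative, not platform code): the Dense block outputs one value per category, softmax turns those values into probabilities, and categorical crossentropy compares them with the one-hot target.

```python
# Sketch: 10-category classification head, as for MNIST.
import math

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def categorical_crossentropy(target_one_hot, probs):
    """Negative log-probability assigned to the true category."""
    return -sum(t * math.log(p) for t, p in zip(target_one_hot, probs) if t)

logits = [0.0] * 10
logits[3] = 5.0          # the network strongly favors digit 3
target = [0.0] * 10
target[3] = 1.0          # the true label is digit 3

probs = softmax(logits)
loss = categorical_crossentropy(target, probs)
print(loss)  # low loss, since the prediction matches the target
```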

Reference
