The VGG (Visual Geometry Group) network greatly influenced the design of deep convolutional neural networks. Although newer architectures outperform it, VGG remains useful for many applications, such as image classification.
Input image size: 32x32 pixels or larger.
On the Peltarion Platform, the VGG network is implemented in two snippets:
A complete VGG16 (VGG Net-D in the paper) that consists of 16 weight layers.
A complete VGG19 (VGG Net-E in the paper) that consists of 19 weight layers.
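Both snippets halve the spatial resolution five times through max pooling, which is why 32x32 pixels is the minimum input size: after five 2x2 poolings, a 32x32 image is reduced to a single 1x1 feature map. A minimal sketch of that size arithmetic (the five-pool structure comes from the paper; the helper function itself is illustrative):

```python
def vgg_output_size(side):
    """Spatial size of the final feature map after VGG's five max-pool layers.

    The 3x3 convolutions use 'same' padding, so only pooling shrinks the map.
    """
    for _ in range(5):  # VGG16 and VGG19 both contain five max-pooling layers
        side //= 2      # each 2x2 max-pool with stride 2 halves width and height
    return side

print(vgg_output_size(32))   # a 32x32 input shrinks to a 1x1 feature map
print(vgg_output_size(224))  # the paper's 224x224 input yields a 7x7 map
```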
The 2D Convolutional blocks all have a 3x3 filter (Width x Height). This is the smallest filter size that still captures the notions of left/right, up/down, and center.
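Stacking small filters also pays off in parameter count: two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 filter, and three cover a 7x7 field, with fewer weights. A quick comparison in plain Python (C stands for the channel count; biases are ignored for simplicity):

```python
def conv_weights(kernel, channels):
    # weight count of one conv layer with `channels` input and output channels
    return kernel * kernel * channels * channels

C = 64
stacked = 3 * conv_weights(3, C)  # three 3x3 layers: 27 * C^2 weights
single = conv_weights(7, C)       # one 7x7 layer:    49 * C^2 weights
print(stacked, single)            # the 3x3 stack uses roughly 45% fewer weights
```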
To add a VGG snippet, open the Snippet section in the Inspector and click VGG16 or VGG19. The images in the dataset must be at least 32x32 pixels.
Remember to set the number of nodes in the last Dense block to the number of categories in your experiment, and to select the correct loss function in the Target block.
Example: Set Nodes to 10 and Loss to Categorical crossentropy when using the MNIST dataset.
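Categorical crossentropy scores how well the network's softmax output matches the one-hot target; with a one-hot target it reduces to the negative log-probability assigned to the true class. A small illustration in plain Python (the probability vector below is made up):

```python
import math

def categorical_crossentropy(target_onehot, predicted_probs):
    # -sum(t * log(p)); with a one-hot target only the true class contributes
    return -sum(t * math.log(p)
                for t, p in zip(target_onehot, predicted_probs) if t > 0)

# Hypothetical softmax output for an MNIST image whose true digit is 3
probs = [0.01, 0.01, 0.02, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]
target = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(round(categorical_crossentropy(target, probs), 4))  # -log(0.90) ≈ 0.1054
```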
Karen Simonyan and Andrew Zisserman: Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014.