Snippets - your gateway to deep neural network architectures

Many of the most powerful neural networks have very large architectures (e.g., the ResNet 152 network has, you guessed it, 152 layers in total), which can make them tedious to build and daunting to start working with.

To help you get started, we’ve prebuilt many popular networks inside of the Peltarion Platform.

Using snippets can save you a lot of time by removing the need to build these models yourself and, consequently, to double-check that you haven't missed a block or connection during the build process. Instead, you can spend more time exploring and experimenting with different architectures for your specific application.
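To get a feel for why hand-building such networks is tedious, here is a minimal sketch, in plain Keras rather than platform code, of a single pre-activation residual block. A full ResNetv2 152 repeats this kind of wiring dozens of times with varying filter counts; the layer sizes and shapes below are illustrative assumptions, not the snippet's exact configuration.

```python
# Illustrative only (Keras/TensorFlow, not Peltarion platform code).
# One pre-activation (ResNetv2-style) residual block, wired by hand.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Batch norm -> ReLU -> conv, twice, plus a skip connection."""
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([shortcut, y])

# A 152-layer network chains dozens of these blocks with different filter
# counts and downsampling stages -- exactly the bookkeeping a prebuilt
# snippet spares you.
inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, filters=16)
tf.keras.Model(inputs, outputs).summary()
```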

Figure 1. Modeling view blocks

You can find the available snippets in the Inspector.

Note
Currently, the snippets are not pre-trained, and training a deep network on a large dataset may consume a significant amount of GPU power.

Choosing the right snippet

With so many different network architectures out there, it can sometimes be confusing to figure out which one to use for your problem at hand.

Image-based projects

Here is a recommendation of different snippets for various image-based tasks:

Image classification, regression, and feature extraction:
- Image size 32x32: ResNetv2 Small 29, ResNetv2 Small 56, ResNetv2 Small 110
- Image size larger than 32x32: DenseNet 121, DenseNet 169, Inception v3, Inception v4, ResNetv2 Large 50, ResNetv2 Large 101, ResNetv2 Large 152

Image segmentation and image denoising/reconstruction:
- Tiramisu
- U-Net

Feature extraction, style transfer, and autoencoders:
- VGG 16
- VGG 19

This flowchart goes into a bit more detail and will help you select the right snippet for your image data project: Snippet selector for image projects.
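As a rough illustration of the feature-extraction use case in the last group, here is a minimal sketch in plain Keras (not platform code). It assumes a 224x224 input and leaves the weights untrained, in line with the note above that snippets are not pre-trained; on the platform you would load the VGG 16 snippet instead of writing this yourself.

```python
# Illustrative only: a VGG-style network with its classification head removed
# turns each image into a single feature vector that downstream tasks can reuse.
import numpy as np
import tensorflow as tf

feature_extractor = tf.keras.applications.VGG16(
    include_top=False,            # drop the dense classification layers
    weights=None,                 # untrained, like the platform snippets
    input_shape=(224, 224, 3),    # assumed input size for this sketch
    pooling="avg",                # global average pool -> one vector per image
)

image_batch = np.random.rand(2, 224, 224, 3).astype("float32")
features = feature_extractor.predict(image_batch, verbose=0)
print(features.shape)  # (2, 512): one 512-dimensional feature vector per image
```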

Text-based projects

To perform text-based tasks, you can use a simple embedding layer or the BERT snippet. While embedding layers are available as blocks rather than snippets, they can be initialized with pre-trained weights from fastText.

fastText:
- Languages supported: English, Swedish, Finnish
- Case sensitive: Yes
- Output: a pre-determined mapping of each token to a vector

BERT:
- Languages supported: English
- Case sensitive: No
- Output: a single vector representing the whole input
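To make the difference in output concrete, here is a minimal sketch in plain Keras (not platform code, and not real fastText or BERT weights): an embedding layer returns one vector per token, while a BERT-style encoder is stood in for by a pooling step that returns one vector for the whole input. The vocabulary size, embedding dimension, and sequence length are arbitrary assumptions.

```python
# Illustrative only: token-level vectors vs. a single whole-input vector.
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, seq_len = 10_000, 300, 8
token_ids = np.random.randint(0, vocab_size, size=(1, seq_len))

# fastText-style embedding: a fixed token-to-vector mapping.
embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
per_token_vectors = embedding(token_ids)
print(per_token_vectors.shape)    # (1, 8, 300): one vector per token

# BERT-style summary: one vector for the whole input. Real BERT uses a
# transformer encoder and a [CLS] token; average pooling only stands in
# here to show the shape of the result.
whole_input_vector = tf.keras.layers.GlobalAveragePooling1D()(per_token_vectors)
print(whole_input_vector.shape)   # (1, 300): one vector for the whole input
```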

Going further

This is a good start, but how do you choose between the different network alternatives? The easiest approach is probably to try them all and see which one performs best on your problem.

This is why snippets are so powerful! You can easily create separate experiments, load a different model (snippet) in each, and run them all simultaneously on the platform. This is a quick and easy way to explore what works best!
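Outside the platform, the same idea looks roughly like the sketch below (plain Keras, not platform code): build each candidate architecture, train each briefly on the same data, and keep the one with the best validation score. The candidate list, the tiny random dataset, and the single training epoch are placeholders for illustration only.

```python
# Illustrative only: compare a few candidate architectures on the same data.
import numpy as np
import tensorflow as tf

candidates = {
    "ResNet50V2": tf.keras.applications.ResNet50V2,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "InceptionV3": tf.keras.applications.InceptionV3,
}

# Placeholder data: 64 random 96x96 RGB images with 10 fake classes.
x = np.random.rand(64, 96, 96, 3).astype("float32")
y = np.random.randint(0, 10, size=(64,))

def build(backbone_fn, num_classes=10):
    base = backbone_fn(include_top=False, weights=None,
                       input_shape=(96, 96, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

results = {}
for name, backbone_fn in candidates.items():
    model = build(backbone_fn)
    history = model.fit(x, y, validation_split=0.25, epochs=1, verbose=0)
    results[name] = history.history["val_accuracy"][-1]

print("Best on this (random) data:", max(results, key=results.get))
```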

Here in the Knowledge center, you can find more information on each snippet and on which one is best suited to your problem type and input data.