ResNetv2 is a neural network architecture used for image classification, regression, and feature extraction. It uses skip connections to add the input of a group of convolutions to its output.
How to use the ResNet pretrained block
To add a ResNet pretrained block, open the Inspector and click one of the ResNet V2 blocks.
When using the ResNet block, consider the following:
We recommend that you use the Random transformation block.
Change the number of units in the last Dense block to match the number of classes in your dataset, and change the activation to suit your problem (e.g., softmax for multi-class classification).
Set the correct loss function in the Target block.
For the optimizer, we recommend Adam with a learning rate of 0.001, or SGD with momentum and a learning rate of 0.01.
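The steps above can be sketched with the Keras API. This is a hedged illustration, not the platform's own implementation: the names NUM_CLASSES and the 224x224 input shape are assumptions, and weights=None is used here only to avoid a download (in practice you would load pretrained weights).

```python
import tensorflow as tf

NUM_CLASSES = 10  # assumption: change to match your number of classes

# Pretrained-style ResNetV2 backbone without its original classification head.
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights=None,
    input_shape=(224, 224, 3), pooling="avg")

# Last Dense block: units = number of classes, softmax activation.
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(base.inputs, outputs)

# Optimizer as recommended: Adam with a learning rate of 0.001,
# with a classification loss matching the softmax output.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy")
```

The same recipe applies with SGD: swap in tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9).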
The idea is to build a network consisting of branches with skip connections. For each branch, you then learn the difference, the residual activation-map, between the input and the output of the branch. This residual activation-map is added to the previous activation-maps, building up the "collective knowledge" of the ResNet.
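The branch-plus-skip-connection idea can be sketched in a few lines of NumPy. The two-layer branch here is an illustrative stand-in for the convolutions of a real ResNet branch, not the actual architecture:

```python
import numpy as np

def branch(x, w1, w2):
    # Stand-in for a branch's convolutions: two layers with a ReLU between.
    h = np.maximum(0.0, x @ w1)
    return h @ w2  # the learned residual activation-map

def residual_block(x, w1, w2):
    # Skip connection: the branch's residual is added to its own input.
    return x + branch(x, w1, w2)
```

Note that if the branch outputs zero, the block is exactly the identity; this is what makes very deep stacks of such blocks trainable.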
Deeper neural networks have historically been hard to train. Residual learning with skip connections made it possible to successfully train deeper models than ever before; there are well-performing networks with over 1,000 layers. For most recent models, we now observe that deeper models are more powerful.
There are many variations of the ResNetv2 architecture. We define the ResNetv2 architecture as follows:
Trainable: Whether the training algorithm is allowed to change the value of the weights during training. In some cases, you will want to keep parts of the network static.
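A common use of the Trainable setting is transfer learning: freeze the pretrained backbone and train only the new head. A hedged Keras sketch of that pattern (weights=None is used here only to avoid a download; the 5-class head is an assumption):

```python
import tensorflow as tf

# Pretrained-style backbone; in practice this would carry pretrained weights.
base = tf.keras.applications.ResNet50V2(
    include_top=False, weights=None,
    input_shape=(224, 224, 3), pooling="avg")

# Trainable off: the backbone's weights stay static during training.
base.trainable = False

# Only this new Dense head will be updated by the optimizer.
head = tf.keras.layers.Dense(5, activation="softmax")(base.output)
model = tf.keras.Model(base.inputs, head)
```

With the backbone frozen, model.trainable_weights contains only the head's kernel and bias, so training is both faster and less prone to overfitting on small datasets.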