The Tiramisu snippet is a fully convolutional DenseNet that is suitable for image-to-image mapping problems, such as image segmentation or image denoising/reconstruction.
Tiramisu snippet architecture
The Tiramisu network, like a U-Net, has a horseshoe shape: an image’s activation maps are first reduced in size (downsampled) on the way down and then gradually enlarged back to their original size (upsampled) on the way up. Again like the U-Net, the Tiramisu has skip connections that feed information from the downward path to the upward path.
The idea is that the autoencoder-like horseshoe will extract high-level features, which will be complemented by more detailed, high-resolution information coming via the skip connections (dotted lines in the illustration) from the downward path. The Tiramisu network can reuse features multiple times and is thus parameter-efficient despite being relatively deep.
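The horseshoe data flow described above can be sketched with plain NumPy. This is a toy illustration only: the pooling depth, spatial sizes, and nearest-neighbour upsampling are assumptions for the sketch, not the snippet's exact configuration.

```python
# Toy sketch of the horseshoe data flow: downsample on the way down,
# then upsample on the way up, concatenating the stored down-path
# activation maps via the skip connections.
import numpy as np

def down(x):
    """2x2 max pooling (downsampling); halves each spatial side."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsampling; doubles each spatial side."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.arange(16 * 16 * 3, dtype=float).reshape(16, 16, 3)

# Downward path: store the activation maps for the skip connections.
skips = []
h = x
for _ in range(2):
    skips.append(h)
    h = down(h)

# Upward path: upsample, then concatenate the matching skip map so
# high-resolution detail rejoins the high-level features.
for skip in reversed(skips):
    h = up(h)
    h = np.concatenate([h, skip], axis=-1)

print(h.shape)  # → (16, 16, 9): original spatial size, channels stacked
```

The output is back at the input's spatial resolution, with the skip maps' channels stacked onto the upsampled features, which is exactly how the upward path regains detail lost during downsampling.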
The building blocks in the network are so-called "dense blocks" (not to be confused with the Dense block in the Modeling view, which is a fully connected neural network layer). A Tiramisu "dense block" consists of repeated composite functions whose outputs are concatenated with the forwarded activation maps via skip connections. Each composite function consists of a Batch normalization, an Activation, a 2D Convolution, and a Dropout block.
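The wiring of a dense block can be sketched as follows. To keep the sketch self-contained and runnable, the composite function is reduced to a stand-in transform (a random 1x1 convolution plus ReLU) in place of the real Batch normalization / Activation / 2D Convolution / Dropout sequence; the `growth_rate` and layer count are illustrative assumptions.

```python
# Minimal NumPy sketch of dense-block wiring: each composite function
# produces `growth_rate` new channels, which are concatenated with all
# forwarded activation maps before feeding the next composite function.
import numpy as np

def composite_function(x, growth_rate, rng):
    """Stand-in for BN -> Activation -> Conv2D -> Dropout: a random
    1x1 'convolution' followed by ReLU, just to show the data flow."""
    h, w, c = x.shape
    weights = rng.standard_normal((c, growth_rate))
    y = x.reshape(-1, c) @ weights          # 1x1 conv as a matmul
    return np.maximum(y, 0).reshape(h, w, growth_rate)

def dense_block(x, num_layers=4, growth_rate=16, seed=0):
    """Repeat the composite function; concatenate each output with the
    forwarded activation maps (the skip connections within the block)."""
    rng = np.random.default_rng(seed)
    features = x
    new_maps = []
    for _ in range(num_layers):
        y = composite_function(features, growth_rate, rng)
        new_maps.append(y)
        features = np.concatenate([features, y], axis=-1)
    # As in the Tiramisu paper, the block outputs the stack of newly
    # created feature maps.
    return np.concatenate(new_maps, axis=-1)

x = np.ones((8, 8, 3))
out = dense_block(x, num_layers=4, growth_rate=16)
print(out.shape)  # → (8, 8, 64): 4 layers x 16 new channels each
```

Because every layer sees all earlier feature maps, features are reused rather than recomputed, which is the source of the parameter efficiency mentioned above.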
How to use the Tiramisu snippet
To add a Tiramisu snippet, open the Snippet section in the Inspector and click Tiramisu.
Each side of the input image must be divisible by 32, e.g. 512x256px or 640x800px. This is due to the presence of the 2D Max pooling blocks.
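The size requirement can be checked with a small helper. The sketch assumes the factor of 32 comes from five successive 2x max-pooling steps (2^5 = 32), which matches the divisible-by-32 rule stated above; the function name is illustrative.

```python
# Check whether an input size is compatible with the Tiramisu snippet:
# assuming five 2x max-pooling steps, each side is halved five times,
# so both sides must be divisible by 2**5 = 32.
def fits_tiramisu(width, height, pool_steps=5):
    factor = 2 ** pool_steps
    return width % factor == 0 and height % factor == 0

print(fits_tiramisu(512, 256))  # True
print(fits_tiramisu(640, 800))  # True
print(fits_tiramisu(500, 300))  # False: neither side is divisible by 32
```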
Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, Yoshua Bengio: The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation, 2016.