Inception neural networks are used for image classification, regression and feature extraction.
A popular way to build convolutional networks is to stack layers on top of each other. The idea behind the Inception network is to build a wider network rather than a deeper one.
The wide part of the Inception network is built from Inception modules, in which blocks are executed in parallel in branches with differently sized filters. The intuition behind these branches is that filters of different sizes can pick up features at different scales; for example, a 7x7 filter picks up larger features than a 1x1 filter.
Another insight was that the convolutions in the branches can be factorized: an nxn convolution is replaced by a combination of 1xn and nx1 convolutions. For example, a 7x7 convolution is replaced by two convolutions, first a 1x7 convolution and then a 7x1 convolution.
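A quick way to see why this factorization pays off is to count the weights involved. The sketch below compares a full 7x7 convolution with its 1x7 + 7x1 factorization; the channel counts (192 in, 192 out) are illustrative, not taken from a specific layer of the network.

```python
def conv_params(kh, kw, c_in, c_out):
    """Number of weights in a conv layer with a (kh x kw) kernel (bias ignored)."""
    return kh * kw * c_in * c_out

# Full 7x7 convolution vs. its 1x7 + 7x1 factorization,
# assuming 192 input and 192 output channels (illustrative numbers).
full = conv_params(7, 7, 192, 192)
factorized = conv_params(1, 7, 192, 192) + conv_params(7, 1, 192, 192)

print(full, factorized)  # the factorized pair needs only 2/7 of the weights
```

The saving is general: an nxn kernel costs n*n weights per channel pair, while the 1xn + nx1 pair costs 2*n, so the factorized version wins for any n > 2.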
At the end of the Inception module, the outputs from the parallel branches are concatenated and sent forward.
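The shape bookkeeping of that concatenation can be sketched in a few lines of NumPy. The branch "convolutions" below are stand-in random projections, and the channel counts are illustrative rather than the exact ones from the papers; the point is only that branches share the spatial size while the channels add up.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch(x, out_channels):
    # Stand-in for a conv branch: maps C_in channels to out_channels
    # while keeping the spatial dimensions unchanged.
    w = rng.standard_normal((x.shape[-1], out_channels))
    return x @ w  # shape (H, W, out_channels)

x = rng.standard_normal((35, 35, 192))           # input feature map (H, W, C)
outputs = [branch(x, c) for c in (64, 96, 96, 32)]
module_out = np.concatenate(outputs, axis=-1)    # channel-wise concatenation

print(module_out.shape)  # (35, 35, 288) -- 64 + 96 + 96 + 32 channels
```

Because the concatenation happens along the channel axis, every branch must produce the same height and width; in the real network this is arranged with padding and pooling strides.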
If you want to know more about the Inception network architecture, read this well-written blog post: A Simple Guide to the Versions of the Inception Network.
There exist several versions of the Inception network. We have built two of them as snippets on the Platform.
Inception v3, published in 2015, proposed a number of upgrades over earlier versions that increased accuracy and reduced computational complexity. It is more lightweight than later generations and is still widely used, e.g., when computing the Inception score of generative adversarial networks.
Inception v4, published in 2016, is one of the largest and best-performing Inception architectures. It modified the set of operations performed at the beginning of the network and introduced new types of Inception modules as well as specialized "Reduction Block" types, which are used to change the width and height of the grid.
To add an Inception snippet, open the Snippet section in the Inspector and click Inception v3 or Inception v4.
The images in the dataset should be 299x299 pixels.
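If your images come in other sizes, they need to be resized first. Below is a minimal nearest-neighbour resize sketched in pure NumPy; in practice you would use an image library's resize instead, and the input dimensions (480x640) are just an example.

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbour resize of an (H, W, C) array to (size, size, C).
    A minimal stand-in for a proper image library's resize function."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

img = np.zeros((480, 640, 3), dtype=np.uint8)   # e.g. a loaded photo
resized = resize_nearest(img, 299)              # Inception's expected input size
print(resized.shape)  # (299, 299, 3)
```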
In the Input block, we recommend that you use image augmentation.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Zbigniew Wojna: Rethinking the Inception Architecture for Computer Vision. 2015
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. 2016