EfficientNet - pretrained

EfficientNets [1] are a family of neural network architectures released by Google in 2019, designed through an optimization procedure that maximizes accuracy for a given computational cost.

EfficientNets are recommended for classification tasks, since they beat many other networks (like DenseNet, Inception, ResNet) on the ImageNet benchmark, while running significantly faster.
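
The Platform itself is graphical, so no code is needed to use these snippets. Purely for reference, the following minimal sketch shows the equivalent step outside the Platform, assuming TensorFlow 2.x and its Keras Applications implementation of EfficientNet:

    # Minimal sketch (assumes TensorFlow 2.x); not needed on the Platform itself.
    import tensorflow as tf

    # Load EfficientNet B0 with ImageNet weights, expecting 224x224 RGB inputs.
    model = tf.keras.applications.EfficientNetB0(weights="imagenet")
    model.summary()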

How to use the EfficientNet snippet

Note
Disclaimer
Please note that datasets, machine-learning models, weights, topologies, research papers and other content, including open source software, (collectively referred to as “Content”) provided and/or suggested by Peltarion for use in the Platform and otherwise, may be subject to separate third party terms of use or license terms. You are solely responsible for complying with the applicable terms. Peltarion makes no representations or warranties about Content. You expressly relieve us from any and all liability, loss or risk arising (directly or indirectly) from Your use of any third party content.

The whole family of pretrained EfficientNets, B0 to B7, is available on the platform.
Each number represents a network size, and the processing power roughly doubles for every increment.

Try EfficientNet B0 first: its accuracy is on par with other networks, and it is remarkably fast both to run and to train.

If you need to improve your results, try progressively larger EfficientNet sizes (B1 → B2 → B3 → etc.) until you reach the highest accuracy for your data.
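
As a rough guide to how the cost grows with each step, these are the default input resolutions of each size in the reference Keras implementation (listed for orientation only; the Platform configures this for you):

    # Default input resolutions (pixels per side) of the EfficientNet family,
    # as used in the reference Keras implementation. Each step up roughly
    # doubles the processing power required.
    DEFAULT_RESOLUTION = {
        "B0": 224, "B1": 240, "B2": 260, "B3": 300,
        "B4": 380, "B5": 456, "B6": 528, "B7": 600,
    }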

Recommendations

When using the EfficientNet snippets, consider the following points (a Keras sketch of an equivalent setup follows the list):

Input block: use image augmentation.

Last Dense block: change the number of nodes to match the number of classes you have.

Target block: set the loss function to Categorical Crossentropy.
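
Outside the Platform, the same recommendations correspond roughly to the Keras sketch below (num_classes is a placeholder for the number of classes in your dataset, and the augmentation choices are only examples):

    import tensorflow as tf
    from tensorflow.keras import layers

    num_classes = 10  # placeholder: set to the number of classes in your data

    # Input block equivalent: image augmentation.
    augment = tf.keras.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
    ])

    # Pretrained EfficientNet B0 without its ImageNet classification head.
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet",
        input_shape=(224, 224, 3), pooling="avg")

    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = augment(inputs)
    x = base(x)
    # Last Dense block equivalent: one node per class, softmax output.
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    # Target block equivalent: categorical crossentropy loss.
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])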

EfficientNet architecture

The EfficientNet family relies on two cornerstones:

  • An optimized baseline architecture

  • An efficient scaling strategy

The baseline architecture: B0

The baseline network, EfficientNet B0, is built around 2D Depthwise convolution blocks, which have been shown to be extremely cost-efficient and are also the basis of the MobileNetV2 network [2].

However, the exact architecture was not designed by hand, but is the result of Neural Architecture Search [3]. This is an optimization procedure that searches for the network architecture with the highest possible accuracy, given fixed computational resources to run this network.
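
For intuition, the sketch below shows a simplified depthwise-convolution block of this kind in Keras; the actual EfficientNet B0 blocks additionally use squeeze-and-excitation and skip connections, and their exact widths and repeats come from the architecture search:

    from tensorflow.keras import layers

    def mbconv_like_block(x, filters_out, expand_ratio=6, kernel_size=3):
        # Simplified sketch of an inverted-bottleneck depthwise block.
        filters_in = x.shape[-1]
        # Expand: 1x1 convolution increases the channel count.
        x = layers.Conv2D(filters_in * expand_ratio, 1,
                          padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("swish")(x)
        # Depthwise convolution: one spatial filter per channel, very cheap.
        x = layers.DepthwiseConv2D(kernel_size, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("swish")(x)
        # Project: 1x1 convolution back down to the output channel count.
        x = layers.Conv2D(filters_out, 1, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        return x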

The scaling strategy

The conventional strategy for making image classification more accurate is to take an existing network and do one of the following:

  • Increase its depth: the number of layers that data goes through

  • Increase its width: the number of filters within each layer

  • Increase its input resolution: give the network higher-resolution images, which include more and finer details

To increase accuracy, EfficientNets scale up these three aspects together, in proportions that have been optimized to never let one of them be a bottleneck.

The baseline network EfficientNet B0 is scaled up this way to create EfficientNet B1, a network with roughly twice the processing power. The same scaling is applied successively to create EfficientNet B2, B3, B4, B5, B6, and B7.
Because all aspects of the network are scaled up together, there is no limiting factor and increases in processing power always translate to increases in accuracy.
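
In the original paper [1], this compound scaling uses fixed base factors for depth, width, and resolution, chosen so that each step up in the family multiplies the total compute by roughly two. The sketch below illustrates the rule; the released B1 to B7 models round these values to convenient sizes:

    # Compound scaling rule from the EfficientNet paper [1] (illustrative only;
    # the released models round these values). All three aspects grow with phi.
    ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # alpha * beta**2 * gamma**2 is about 2

    for phi in range(8):  # phi = 0 corresponds to B0
        depth = ALPHA ** phi        # multiplier on the number of layers
        width = BETA ** phi         # multiplier on the number of filters
        resolution = GAMMA ** phi   # multiplier on the input image size
        compute = depth * width**2 * resolution**2   # roughly 2**phi
        print(f"B{phi}: depth x{depth:.2f}, width x{width:.2f}, "
              f"resolution x{resolution:.2f}, compute x{compute:.1f}")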

Available weights

The weights of the pretrained snippets were learned by the Google team using the ImageNet dataset.

Terms

When using pretrained snippets, additional terms apply: EfficientNet with weights licence.