EfficientNet
EfficientNets [1] are a family of neural network architectures released by Google in 2019, designed by an optimization procedure that maximizes accuracy for a given computational cost.
How to use the EfficientNet pretrained block
Note

Disclaimer Please note that datasets, machine learning models, weights, topologies, research papers and other content, including open source software (collectively referred to as “Content”), provided and/or suggested by Peltarion for use in the Platform and otherwise, may be subject to separate third party terms of use or license terms. You are solely responsible for complying with the applicable terms. Peltarion makes no representations or warranties about Content. You expressly relieve us from any and all liability, loss or risk arising (directly or indirectly) from Your use of any third party content.
The whole family of pretrained EfficientNets, B0 to B7, is available on the platform.
Each number represents a network size, and the processing power roughly doubles for every increment.
Try the EfficientNet B0 block first, since its accuracy is on par with much larger networks while being far faster to run — and to train.
If you need to improve your results, move to progressively larger variants of the EfficientNet architecture (B1 → B2 → B3, and so on) until accuracy stops improving on your data.
Recommendations
When using an EfficientNet block you should consider the following things:
Input block: use image augmentation.
Last Dense block: change the number of nodes to match the number of classes you have.
Target block: set the loss function to Categorical Crossentropy.
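The three recommendations above can be sketched outside the Platform with Keras, which ships the same pretrained EfficientNet family. This is an illustrative sketch, not the Platform's implementation; the input size and NUM_CLASSES are placeholder assumptions you should adapt to your data.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10  # placeholder: set this to your number of classes

# Pretrained EfficientNet B0 backbone without its ImageNet classification head
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg",
)

inputs = tf.keras.Input(shape=(224, 224, 3))
# Input block: simple image augmentation
x = layers.RandomFlip("horizontal")(inputs)
x = layers.RandomRotation(0.1)(x)
x = base(x)
# Last Dense block: one node per class
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Target block: categorical crossentropy loss
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping EfficientNetB0 for EfficientNetB1 through B7 changes only the backbone line (larger variants expect larger input resolutions).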
EfficientNet architecture
The EfficientNet family relies on two cornerstones:

An optimized baseline architecture

An efficient scaling strategy
The baseline architecture: B0
The baseline network, EfficientNet B0, is built around 2D Depthwise convolution blocks, which have been shown to be extremely cost-efficient and are also the basis of the MobileNetV2 network [2].
However, the exact architecture was not designed by hand; it is the result of Neural Architecture Search [3], an optimization procedure that searches for the network architecture with the highest possible accuracy given a fixed computational budget for running that network.
The scaling strategy
The conventional strategy for making image classification more accurate is to take an existing network and either:

Increase its depth: the number of layers that data goes through

Increase its width: the number of filters within each layer

Give the network images of higher resolution, which include more and finer details
To increase accuracy, EfficientNets scale up these three aspects together, in proportions that have been optimized so that no single aspect becomes a bottleneck.
The baseline network EfficientNet B0 is scaled up this way to create EfficientNet B1, a network with roughly twice the processing power.
The same scaling is applied successively to create EfficientNet B2, B3, B4, B5, B6, and B7.
Because all aspects of the network are scaled up together, no single one becomes a limiting factor, and added processing power reliably translates into increased accuracy.
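The proportions come from the compound scaling rule described in the EfficientNet paper [1]: depth, width, and resolution are multiplied by α^φ, β^φ, and γ^φ, where φ is the compound coefficient (roughly the B-number) and the constants α ≈ 1.2, β ≈ 1.1, γ ≈ 1.15 were found by grid search under the constraint α·β²·γ² ≈ 2, so each increment of φ roughly doubles the FLOPS. A minimal sketch of this rule:

```python
# Compound scaling constants from the EfficientNet paper [1]
# (grid-searched so that one step of phi roughly doubles FLOPS).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def scale_factors(phi):
    """Return (depth, width, resolution) multipliers for compound coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# FLOPS grow as roughly alpha * beta**2 * gamma**2 per unit of phi,
# which the paper constrains to be approximately 2.
flops_growth = ALPHA * BETA ** 2 * GAMMA ** 2
```

For example, scale_factors(1) gives the multipliers used to grow B0 into B1: about 20% more layers, 10% more filters per layer, and 15% higher input resolution.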
Available weights
The weights were learned by the Google team using the ImageNet dataset.
Terms
When using pretrained blocks, additional terms apply: EfficientNet with weights licence.
References

[1] Tan M., Le Q. V.: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks, ICML, 2019.

[2] Sandler M., Howard A., Zhu M., Zhmoginov A., Chen L.C.: MobileNetV2: Inverted Residuals and Linear Bottlenecks, CVPR, 2018.

[3] Tan M., Chen B., Pang R., Vasudevan V., Sandler M., Howard A., Le Q. V.: MnasNet: Platform-Aware Neural Architecture Search for Mobile, CVPR, 2019.
Parameters
Trainable: Whether the training algorithm is allowed to change the values of the weights during training. In some cases, you will want to keep parts of the network static.
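Outside the Platform, unchecking Trainable corresponds to freezing the pretrained backbone in Keras. A sketch under that assumption (the class count of 5 is an arbitrary example):

```python
import tensorflow as tf

# Pretrained backbone; setting trainable = False freezes its weights,
# matching an EfficientNet block with Trainable switched off.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg",
)
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
# training=False keeps BatchNorm layers in inference mode while frozen
x = base(inputs, training=False)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # 5 = example class count
model = tf.keras.Model(inputs, outputs)
```

Only the final Dense layer is now updated during training, which is the usual starting point when fine-tuning on a small dataset.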