Not only did we build many of the most popular networks as snippets to help you get started, we’ve also made pretrained snippets available. Large models have limited utility without associated weights. By pretraining the snippets, we have:
lowered the knowledge bar
reduced the time to get started
lowered the costs, since you simply don’t have to train for as long as you otherwise would have.
A pretrained snippet has already learned basic representations from the dataset it was trained on, so it only needs further training on a small domain-specific dataset to provide value. This unlocks value for the many companies that don’t own large datasets in their specific domain.
Want to get started? Check out our article on how to use pretrained snippets.
When you use pretrained snippets, you are doing something called transfer learning: a model is trained on a large dataset (such as ImageNet), and parts of that knowledge are transferred to solve another, related problem.
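The usual transfer-learning recipe is to freeze the pretrained layers and train only a new output head on your domain data. A minimal PyTorch-style sketch of that idea, assuming a toy backbone standing in for a pretrained snippet (the layer sizes and the 5-class head are illustrative, not part of the platform):

```python
import torch
import torch.nn as nn

# Toy backbone standing in for a snippet's pretrained layers
# (a real snippet would ship with learned ImageNet weights).
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # 32x32 RGB input -> 8 x 30 x 30
    nn.ReLU(),
    nn.Flatten(),
)

# Freeze the pretrained representations so they are kept as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Attach a fresh head for the small domain-specific task
# (here: 5 hypothetical classes).
head = nn.Linear(8 * 30 * 30, 5)
model = nn.Sequential(backbone, head)

# Only the head's parameters will be updated during fine-tuning.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable} of {total} parameters")
```

Because only the small head is trained, fine-tuning on a modest domain dataset converges far faster than training the whole network from scratch.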
It is important to know which dataset the pretrained snippet was trained on. Choose weights trained on a dataset that closely resembles the data you want to use the model for. The standard dataset for images on the platform is ImageNet.