Run a model
You’ve built a model. Great! Time to train it and see how it performs.
In this section, we go through the settings that will help you run and train a model successfully.
Before running a model
Before running your model, make sure that you have:
Selected the desired loss function in the Target block.
Verified that your model's output size matches your target. The output size can be modified by changing the number of Nodes (for dense blocks) or Filters (for convolution blocks) in the last such block before the target block. If needed, the shape of the output can be changed by adding a flatten or reshape block just before the target.
The Run settings section
Click on the Settings tab in the Inspector. This is where you will find the Run settings section, which contains all the parameters you can adjust prior to running your training.
The Batch size is the number of examples (i.e., rows of your training set) that are processed in each training iteration and after which your model parameters (i.e., weights) are updated. The batch size is a hyperparameter of your model and can be freely adjusted.
The Experiment Wizard automatically sets a batch size that should be appropriate. But if you change it by hand, here are a few things to consider:
The combination of Learning rate, Batch size and Epochs may affect training results. In general, increasing the batch size allows for an increase in the learning rate. When lowering the batch size, consider also reducing the learning rate.
A larger batch size can reduce training time, since fewer model updates take place within each epoch. However, you might need more epochs to reach the same results as with a smaller batch size.
More memory will be required to run the model, since all examples within a batch are evaluated at the same time.
Commonly set sizes on our platform are 32, 64, 128, 256 or 512.
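As a rough illustration of the interplay between batch size and learning rate mentioned above, one common heuristic is to scale the learning rate proportionally with the batch size. This is a rule of thumb (the linear scaling rule), not a platform feature, and it should always be validated on your own data:

```python
# Sketch of the linear scaling heuristic: when the batch size changes,
# scale the learning rate by the same factor. A rule of thumb only.

def scaled_learning_rate(base_lr, base_batch_size, new_batch_size):
    """Scale the learning rate proportionally to the batch size."""
    return base_lr * new_batch_size / base_batch_size

# Doubling the batch size from 128 to 256 doubles the learning rate:
print(scaled_learning_rate(0.001, 128, 256))  # 0.002
```

Going the other way, halving the batch size would suggest halving the learning rate, which matches the advice above to reduce the learning rate when lowering the batch size.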
Note that the bigger the batch size, the bigger the memory requirements of your model. In particular, models using BERT need a batch size smaller than roughly 4000 divided by the sequence length to stay within memory limitations.
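The BERT rule of thumb above boils down to simple arithmetic. A small sketch (the 4000 figure is the approximate budget quoted above, not an exact limit):

```python
# Rough guide: for BERT models, keep batch_size * sequence_length
# below roughly 4000 to stay within memory limits. Approximate only.

def max_bert_batch_size(sequence_length, budget=4000):
    """Largest batch size that keeps batch * sequence under the budget."""
    return max(1, budget // sequence_length)

print(max_bert_batch_size(128))  # 31
print(max_bert_batch_size(512))  # 7
```

So with a sequence length of 512, batch sizes above about 7 are likely to exceed the memory limit.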
|Change the batch size of your model and see how it affects its memory requirements.|
You can keep track of the memory requirements of the model by looking at the Run button in the upper right corner. The platform will not let you run your model if it exceeds the memory limit.
One Epoch is when the complete training set has run through the model one time.
If you set the number of Epochs to 100, the model will run through the training set 100 times before completion.
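The relationship between epochs, batch size and parameter updates is simple arithmetic. A sketch with made-up numbers for illustration:

```python
import math

# One epoch = one full pass over the training set.
# Each epoch contains ceil(n_examples / batch_size) parameter updates.

def updates_per_epoch(n_examples, batch_size):
    return math.ceil(n_examples / batch_size)

def total_updates(n_examples, batch_size, epochs):
    return epochs * updates_per_epoch(n_examples, batch_size)

# 10,000 training examples, batch size 32, 100 epochs:
print(updates_per_epoch(10_000, 32))   # 313 updates per epoch
print(total_updates(10_000, 32, 100))  # 31,300 updates in total
```

This also shows why a larger batch size means fewer updates per epoch: with a batch size of 128 instead of 32, the same dataset yields only 79 updates per epoch.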
The Optimizer lets you select the numerical method used to perform gradient descent. Different optimizers may suit different types of problems, but Adam is the typical default for most tasks. For a more detailed description of the optimizers available on the platform, see the Optimizer article in our Knowledge center.
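To give a sense of what an optimizer does under the hood, here is a minimal sketch of the standard Adam update rule applied to a toy one-dimensional problem. This is the textbook formulation; the platform's implementation details may differ:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient and its square."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)             # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2, whose gradient is 2 * theta:
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
print(theta)  # ends up near 0, the minimum of f
```

The moving averages let Adam adapt the step size per parameter, which is part of why it works well as a default across many tasks.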
The Run button
Now that you have set all your preferred Run settings, you're ready to train your model! The only thing left to do is to press the Run button in the upper right corner to start the training process.
5GB model limit
Note that the memory requirements of your model must be 5GB or less for you to be able to train it on the platform. If your memory requirements exceed 5GB, the Run button will be disabled (i.e., it will appear greyed out).
If this is the case, you can try reducing the batch size in the Run settings section to reduce the memory requirements. You can also reduce the size of your examples (e.g., reduce the Width and Height of image features, reduce the Sequence length of text features), or use a simpler model with fewer blocks.
Once your memory requirements are 5GB or less, the Run button will become active again.
If you used the Experiment Wizard, most of these settings will be set up automatically, and the platform will warn you if any of them looks incorrect. Still, it's always good to check that the model is configured as you intend.