Error messages

Modeling view

Border lost due to stride (${shape}).

{shape}: (Vertical stride, Horizontal stride, Filter).

Cause /

If a block’s stride is greater than 1, information may be lost at the border of the block input.

This means that the convolutional filter cannot be applied evenly across the block input, so the rows or columns at the border are never covered by the filter and their information is lost.

Figure 1. A horizontal stride of 2 will in this case result in lost information.

Remedy /

To remove the warning, make sure that:
input_size - offset is evenly divisible by the stride, where:
* input_size is the output size of the previous block
* offset is the kernel size (e.g., 3 for a 3x3 kernel).

Example: You’ll get this warning if your block input is 60x60, your kernel is 3x3, and you select a stride of 4. Change the stride to 3 to resolve this warning.
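
As a quick check of the rule above, here is a minimal Python sketch (an illustration of the divisibility rule, not the platform's own implementation):

    # Number of border rows/columns not covered by the filter.
    # No border is lost when (input_size - kernel_size) is divisible by the stride.
    def border_lost(input_size, kernel_size, stride):
        return (input_size - kernel_size) % stride

    print(border_lost(60, 3, 4))  # 1 -> warning: part of the border is lost
    print(border_lost(60, 3, 3))  # 0 -> no border lost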

This is just a warning; it does not affect the model in any major way, especially if you get it on the first few blocks.
However, if the amount of border lost is of the same order of magnitude as the corresponding input dimension, a significant part of the image is being lost. In that case it is important to fix this warning.

No target data

Cause /

The Target block does not have any Selection set, that is, there is no target data to train the model with.

Remedy /

Select the Target block and pick a Selection from the dropdown.

Expected an input dimension > 1.

Cause /

The input to the Flatten block has only 1 dimension.

Remedy /

Make sure that the number of input dimensions to the Flatten block is > 1.
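
For intuition, here is a minimal NumPy sketch (not the platform's implementation) showing why flattening only makes sense for inputs with more than one dimension:

    import numpy as np

    image = np.zeros((28, 28))       # 2-D input: flattening changes the shape
    print(image.reshape(-1).shape)   # (784,)

    vector = np.zeros((784,))        # 1-D input: already flat, nothing to flatten
    print(vector.reshape(-1).shape)  # (784,)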

${label} is required.

{label}: e.g., Feature in the Input block.

Cause /

A required {label} has not been specified.

Example: The input Feature has not been set in the Input block.

Remedy /

Set the required {label}.

Language model must be the same as used by the feature selected in the Input block.

Cause /

This error message appears if you use a Text embedding block after an Input block. The Language model selected in the Text embedding block must match the Language model used by the text encoded feature selected in the Input block.

Example: English is selected as the Language model for a text encoded feature in the Datasets view, but Swedish is selected as the Language model in the Text embedding block.

Remedy /

Make sure that the Language models match.
Change the Language model either in the Datasets view or in the Text embedding block.

The selected feature in the Input block must use text encoding.

Cause /

This error message appears if you have a Text embedding block after the Input block, and the feature you’ve selected in the Input block doesn’t use the Text encoding.

Example: The feature you’ve selected in the Input block uses Categorical encoding.

Remedy /

In the Input block, select a feature that uses Text encoding.

Or change the Encoding of the selected feature in the Datasets view.

Exceeded total allowed memory of 5 Gb.

Cause /

Large neural networks have more parameters, which require more GPU memory. Large batch sizes also require more GPU memory. If the required memory exceeds the limit, you will get this error.

Example: For the MNIST dataset, the neural network ResNetv2 large 152 with batch size 1024 would need 6.41 GB of memory to train.

Remedy /

You can either choose a smaller neural network or keep the larger neural network but choose a smaller batch size.

Example: Change the neural network from ResNetv2 large 152 to ResNetv2 large 50, while still keeping 1024 as the batch size.

Example: Change the batch size from 1024 to 512, while still keeping the neural network ResNetv2 large 152.
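
For intuition, here is a rough back-of-the-envelope sketch in Python of why both model size and batch size drive GPU memory. The counts below are placeholders, not the actual numbers for ResNetv2 large 152, and the platform may account for memory differently:

    BYTES_PER_FLOAT = 4  # float32

    def estimated_memory_gb(num_parameters, activations_per_example, batch_size):
        # Parameters, gradients, and optimizer state (approximated here as 3x the parameters).
        param_bytes = 3 * num_parameters * BYTES_PER_FLOAT
        # Activations kept for backpropagation scale linearly with the batch size.
        activation_bytes = batch_size * activations_per_example * BYTES_PER_FLOAT
        return (param_bytes + activation_bytes) / 1024**3

    # Halving the batch size roughly halves the (usually dominant) activation term.
    print(estimated_memory_gb(60_000_000, 1_000_000, 1024))
    print(estimated_memory_gb(60_000_000, 1_000_000, 512))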

Experiment

Broken data found

Cause /

The model uses a feature for which some examples do not have the expected shape or type, or have a missing value.

Remedy /

The error pop-up (displayed by clicking the error icon under the experiment’s name) contains further information about the first few rows that were incompatible. Inspect this additional information to determine which features and examples are causing the problem.

Modify the data and upload a new dataset where all the examples have identical shape and type for all of their used features.
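
If you want to screen the data locally before uploading a new dataset, a minimal pandas sketch along these lines can help find problematic rows (the file and column names are hypothetical; adapt them to your dataset):

    import pandas as pd

    # Hypothetical file and feature names.
    df = pd.read_csv("dataset.csv")
    used_features = ["image_path", "label"]

    # Rows with a missing value in any used feature.
    missing = df[df[used_features].isna().any(axis=1)]
    print(missing.head())

    # Rows where a feature that should be numeric contains a non-numeric value.
    bad_type = df[pd.to_numeric(df["label"], errors="coerce").isna() & df["label"].notna()]
    print(bad_type.head())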

Workaround

To continue working quickly, you can create a new dataset version that excludes the feature or the subset of examples causing the problem. However, this is not recommended, since it may discard valuable information from the dataset.