Error messages

Here you’ll find platform error messages, each with a Cause that describes why you get the error message and a Remedy that explains how you can resolve it.

Dataset errors


The value in one cell (column; ${column}, row; ${row}) exceeds the max size (${limit}). Make the value shorter.

Cause /

This message appears in the GUI the first time the value in a cell exceeds the limit for a cell value.

Note that you get this error for the first cell where the value is too large. There might be other cells in the file that have cell values that exceed the limit.

Example:
If a cell includes text from a book, the text can easily become too large.

Remedy /

Shorten the text in the cell before uploading the file.

It’s a good idea to look through all cells in the file to make sure there aren’t other cells with values that are too large.
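A quick way to find every oversized cell before uploading is a short script. This is a sketch: the limit of 100 characters is a made-up placeholder, so use the actual limit from the error message.

```python
import csv
import io

# Hypothetical cell-size limit; use the actual ${limit} from the error message.
CELL_LIMIT = 100

def oversized_cells(csv_text, limit=CELL_LIMIT):
    """Return (row, column) positions of every cell whose value exceeds limit."""
    hits = []
    reader = csv.reader(io.StringIO(csv_text))
    for row_idx, row in enumerate(reader, start=1):
        for col_idx, value in enumerate(row, start=1):
            if len(value) > limit:
                hits.append((row_idx, col_idx))
    return hits

sample = "title,body\nshort,ok\nlong," + "x" * 150 + "\n"
print(oversized_cells(sample))  # [(3, 2)]
```

Unlike the platform, which reports only the first oversized cell, this sketch lists all of them in one pass.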


The file includes more columns than the max limit (${maxNumberOfColumnsAllowed}). Cut down on the columns and upload the file again.

Cause /

The number of columns in the file exceeds the max number of columns allowed.
This can happen if the platform has problems interpreting the line breaks in the file, thus reading the whole file as one long row. Then the number of columns can be very large.

We support Microsoft Windows (CRLF, \r\n) and modern Linux/Unix-like (LF, \n) line endings.

We do sometimes have problems with files from older versions of macOS (and especially older versions of Microsoft Excel) that produce only CR (\r) line endings.

Remedy /

One way to solve such problems is to open the CSV in a text editor (such as Visual Studio Code) and simply save it again. This usually converts the line endings to compatible ones.

There are also command-line tools that solve this problem, such as dos2unix.
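If you prefer to fix the line endings in a script, a sketch like this (the function name is ours) normalizes both CRLF and lone-CR endings to LF:

```python
def normalize_line_endings(text):
    """Convert CRLF (Windows) and CR (old Mac/Excel) line endings to LF."""
    # Handle CRLF first, so any CR left afterwards is a lone old-Mac ending.
    return text.replace("\r\n", "\n").replace("\r", "\n")

# Hypothetical file content with old-Mac CR-only line endings.
print(repr(normalize_line_endings("a,b\rc,d\r")))  # 'a,b\nc,d\n'
```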


We failed to parse your file. Try again, and if it fails, contact support@peltarion.com.

Cause /

We failed to parse your file for some reason.

Remedy /

Try again to see if the error occurs again. If it still fails, contact support@peltarion.com.


We failed to read an image within the ZIP file. Make sure all images follow our requirements.

Cause /

We could not parse an image within the ZIP file. This may happen for many reasons.

Example:
The image header says that the image is a png, but it is actually another format, for example, jpg. When the platform tries to parse the image, it fails.

Remedy /

Make sure the images and the ZIP file follow our requirements.
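One common requirement failure is the header/extension mismatch from the example above. A stdlib-only sketch (the function name is ours) can flag ZIP entries whose first bytes don’t match their file extension:

```python
import io
import zipfile

# Magic bytes for the two most common image formats.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"
JPEG_MAGIC = b"\xff\xd8\xff"

def mismatched_images(zip_file):
    """Return names of .png/.jpg entries whose bytes don't match their extension."""
    bad = []
    with zipfile.ZipFile(zip_file) as zf:
        for name in zf.namelist():
            head = zf.read(name)[:8]
            if name.lower().endswith(".png") and not head.startswith(PNG_MAGIC):
                bad.append(name)
            elif name.lower().endswith((".jpg", ".jpeg")) and not head.startswith(JPEG_MAGIC):
                bad.append(name)
    return bad

# Build a small test archive in memory: one real PNG header, and a JPEG
# posing as a PNG (the case from the example above).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("ok.png", PNG_MAGIC + b"data")
    zf.writestr("fake.png", JPEG_MAGIC + b"data")
print(mismatched_images(buf))  # ['fake.png']
```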


We failed to read an image within the ZIP file. Try to upload the file again.

Cause /

We could not read an image in the ZIP file that you tried to upload. This can happen for many different reasons.

Remedy /

Try to upload the file again. If the problem persists, please contact support.


A NumPy file contains ${actualType} with width ${actualWidth} bits. We only support ${allowedType} with width ${allowedWidth} bits. Update the file.

Cause /

A NumPy file contains the wrong width of bits and/or the wrong encoding type.

Example:
The platform supports float-32, but the uploaded NumPy is a float-16.

Remedy /

Update the NumPy file with the correct type and width. Then upload the file again.
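With NumPy you can convert the data to the supported type before re-uploading. A sketch, assuming float32 is the allowed type as in the example above:

```python
import numpy as np

def to_supported_dtype(a, target=np.float32):
    """Convert an array to the platform-supported dtype before re-saving it."""
    return a if a.dtype == target else a.astype(target)

# Hypothetical case from the example: the file holds float16, float32 is required.
arr = np.arange(4, dtype=np.float16)
fixed = to_supported_dtype(arr)
print(fixed.dtype)  # float32
```

After converting, save the array back with `np.save` and upload the new file.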


The platform only supports little-endian ('<') byte-order. The file has the '${endianness}' byte-order. Update the file and upload again.

Cause /

The platform only supports little-endian ('<') byte-order. The file has the wrong byte-order.

Remedy /

Convert the file to little-endian byte-order and upload it again.

Column major order (Fortran order) is not supported. Change the order to row major order and upload again.

Cause /

The platform supports row-major order when storing a matrix. If the file is stored in column-major order, it won’t work.

Remedy /

Change the order to row major and upload the file again.
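Both the byte-order and the storage-order problems can be fixed with NumPy before uploading. A sketch (the helper name is ours) that returns a row-major, little-endian copy:

```python
import numpy as np

def to_little_endian_c_order(a):
    """Return a row-major (C order), little-endian copy of the array."""
    a = np.ascontiguousarray(a)            # column-major -> row-major
    if a.dtype.byteorder == ">":           # big-endian -> little-endian
        a = a.byteswap().view(a.dtype.newbyteorder("<"))
    return a

# Worst case: big-endian data stored in Fortran (column-major) order.
bad = np.asfortranarray(np.arange(6, dtype=">f4").reshape(2, 3))
fixed = to_little_endian_c_order(bad)
print(fixed.flags["C_CONTIGUOUS"])  # True
```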


The files have a different number of rows. The row count needs to match. Make sure the files have the same number of rows before you upload them.

Cause /

When you add more files, you add more columns to your dataset. If the files have a mismatch in the number of rows, you will get rows with missing values, which the platform doesn’t support. You can’t train a model with data that doesn’t exist.
Currently, it is not possible to add more rows to the dataset by importing another file.

Remedy /

Check your files before you upload them to the platform and make sure they have the same number of rows.

Then upload the files again.
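A minimal pre-upload check (the file contents here are hypothetical) is to compare the data-row counts of the files:

```python
def row_count(csv_text):
    """Count data rows: non-empty lines minus the header row."""
    lines = [ln for ln in csv_text.splitlines() if ln.strip()]
    return len(lines) - 1

# Hypothetical pair of files meant to be merged into one dataset.
features = "id,text\n1,a\n2,b\n3,c\n"
labels = "id,label\n1,x\n2,y\n"
print(row_count(features), row_count(labels))  # 3 2 -> row counts don't match
```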


The file has the wrong encoding. The platform only supports ${expectedCharset} encoding.

Cause /

The platform only supports UTF-8 encoding. You get this error when you try to upload a file with another encoding.

Remedy /

Save your CSV files using UTF-8 encoding.
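If you know the file’s current encoding, you can re-encode it in a few lines. A sketch assuming the source file is Latin-1 encoded:

```python
def reencode_to_utf8(raw_bytes, source_encoding="latin-1"):
    """Decode with the file's original encoding, then re-encode as UTF-8."""
    return raw_bytes.decode(source_encoding).encode("utf-8")

# Hypothetical CSV content saved with Latin-1 encoding.
raw = "name,city\nÅsa,Malmö\n".encode("latin-1")
fixed = reencode_to_utf8(raw)
print(fixed.decode("utf-8"))
```

Note that you must know (or guess) the original encoding; decoding with the wrong one silently produces garbled characters rather than an error.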


There are fewer columns (${numberOfColumnsInSample}) in row ${row} than expected (${numberOfColumnsInHeader}). Make sure all rows have as many columns as the header row.

Cause /

The platform expects that there are as many columns in each row as in the header row.

If one row in the dataset has fewer columns, the platform cannot use the dataset.

The error is shown for the first row where this happens, but there may be more rows that have too few columns.

Remedy /

Make sure all rows have the same number of columns as the header row before uploading the file.


There are more columns (${numberOfColumnsInSample}) in row ${row} than expected (${numberOfColumnsInHeader}). Make sure all rows have as many columns as the header row.

Cause /

The platform expects that there are as many columns in each row as in the header row.

If one row in the dataset has more columns, the platform cannot use the dataset.

The error is shown for the first row where this happens, but there may be more rows that have too many columns.

Remedy /

Make sure all rows have the same number of columns as the header row before uploading the file.
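Both this error and the fewer-columns one above can be caught before uploading with a small script (a sketch; the function name is ours) that reports every row whose column count differs from the header:

```python
import csv
import io

def ragged_rows(csv_text):
    """Return (row_number, column_count) for rows that don't match the header."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    expected = len(header)
    return [(i, len(row)) for i, row in enumerate(reader, start=2)
            if len(row) != expected]

# Header has 3 columns; row 3 has too few, row 4 has too many.
sample = "a,b,c\n1,2,3\n4,5\n6,7,8,9\n"
print(ragged_rows(sample))  # [(3, 2), (4, 4)]
```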

Modeling view

Did you mean to use ${activation} as activation? Or perhaps ${loss} as loss function in the Target block?

{activation} is the activation function in the last block before the Target block.
{loss} is the Target block’s loss.

Cause /

The activation and the loss don’t match.

The activation function calculates what value a block should give as an output. The loss function quantifies how well a model is performing a task by calculating a single number, the loss, from the model output, and the desired target.

Some loss functions can only be calculated for a limited range of model outputs. You can ensure that the model output is always in the correct range by using an appropriate activation function on the last block of the model.

Examples:
Sigmoid is often used together with the loss function binary crossentropy.
Softmax is often used in the final block in a classifier model with the categorical crossentropy as loss function.

Remedy /

You can solve this problem in two ways:

  • Change the Activation in the second last block to the one we suggest.

  • Change the Loss function in the Target block to the one we recommend.

Read more here about activation functions and loss functions.


The last block before the Target uses the activation ${activation}. We do not recommend this with ${loss} as a loss function. How about changing the activation?

{activation} is the activation function in the last block before the Target block
{loss} is the Target block’s loss

Cause /

The activation function calculates what value a block should give as an output. Which activation function should you choose? That depends, of course, on your model and what you want to achieve.

The loss function is a critical part of model training: it quantifies how well a model is performing a task by calculating a single number, the loss, from the model output and the desired target.

Some loss functions can only be calculated for a limited range of model outputs. You can ensure that the model output is always in the correct range by using an appropriate activation function on the last block of the model.

Remedy /

Change the Activation in the last block before the Target block.

You could also change the loss function for the Target block. Maybe you didn’t mean to select the loss you did.

Read more here about activation functions and loss functions.



We suggest that you enable Early stopping. This model will train for many epochs and you do not want it to run longer than necessary.

Cause /

Training your model for too long may lead to overfitting, and it is also expensive. It’s better to spend your GPU hours on something more valuable.

Early stopping is a feature that enables the training to be automatically stopped when a chosen metric has stopped improving. You can see it as a form of regularization used to avoid overfitting.

Remedy /

Enable early stopping. You do this in the Run settings in the Modeling canvas.


With this input you don’t have to use flattening. Use only when the input dimension > 1.

Cause /

The input to the Flatten block has only 1 dimension.

Remedy /

Make sure that the number of input dimensions to the Flatten block is > 1.


You need a batch size smaller than validation subset. Change it to ${examples} or less.

{examples} is the size of the validation subset.

Cause /

There aren’t enough samples in the validation subset to fill up one batch.

Your dataset consists of samples. In the Datasets view you split the dataset into a larger training subset and a smaller validation subset. If you don’t have a large dataset, the validation subset can become quite small.

Remedy /

Make the Batch size equal to or smaller than the size of the validation subset.


Change concatenation axis to one of the following axis-values ${allowedAxes}.

Cause /

The selected concatenation axis will not work.

Remedy /

Change the concatenation axis to one of the suggested axes. We’ve calculated that the suggested axes will work.

-1 means the last axis.

Example:
For 3D input, you can merge the inputs vertically (1), horizontally (2), or depthwise (3).

Concatenation match and no match

All input dimensions except for the concatenation axis must match. Update input sizes.

Cause /

The size of all the inputs must be identical on each axis that is not the axis of concatenation. This is because you merge the inputs along the concatenation axis.

Example:
If, in 3D, you want to concatenate along the vertical axis (dimension 2), then all inputs must be identical along dimensions 1 and 3.


Remedy /

Update the input sizes so they match.
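The same rule applies in NumPy, which can be used to sanity-check shapes before modeling. Here two 3D inputs differ only on axis 1, so concatenating along axis 1 works while axis 0 does not:

```python
import numpy as np

# Two 3D inputs that match on every axis except axis 1.
a = np.zeros((4, 2, 3))
b = np.zeros((4, 5, 3))

merged = np.concatenate([a, b], axis=1)  # allowed: only axis 1 differs
print(merged.shape)  # (4, 7, 3)

# Concatenating along axis 0 fails, because the inputs differ on axis 1.
try:
    np.concatenate([a, b], axis=0)
except ValueError:
    print("inputs must match on all non-concatenation axes")
```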


The output size is reduced to zero. Make the input bigger or change the settings of this block.

Cause /

A mathematical operation has reduced the output size to zero. The root of the problem may lie in something the model did earlier.

Example: If you use a Stride larger than the input image when performing a convolution somewhere upstream in your model.

What is stride?
The stride sets how big a step the convolution takes along an axis. A too-large stride can cause too big a loss of information, and then you’ll get this message.

Remedy /

Make this block’s input bigger. You can do this by lowering the Stride in a previous block.
OR
Lower the Stride of this block.
OR
Lower the Width or Height of this block.
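The arithmetic behind this error can be sketched with the standard output-size formula for a convolution without padding (an assumption; the platform’s exact formula may differ):

```python
def conv_output_size(input_size, kernel_size, stride):
    """Output size of a convolution with no padding (a common convention)."""
    return max(0, (input_size - kernel_size) // stride + 1)

print(conv_output_size(28, 3, 2))  # 13: a 28-wide input shrinks but survives
print(conv_output_size(2, 3, 1))   # 0: kernel larger than input -> this error
```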


The Output block doesn’t affect training but allows to return extra data from any block. Use the Output block data to understand what your deployed model is up to.

Some deep learning techniques use the model to not only get predictions about the target feature but to also get values from intermediate blocks of the model. This is the case, for instance with similarity search, autoencoders, feature embedding, or if you simply want to check what the model is calculating. Use the Output block to get data from any part of the model.

The data from Output block will be included in predictions made by the deployed model. Create a deployment in the Deployment view, and you will see the Output block in the list of model outputs.


Stride value causes info loss. No worries, in most cases this is ok.

Cause /

If a block’s stride is greater than 1, information may be lost at the border of the block input.

This means that the convolutional filter cannot be evenly applied at the border of the block input. Therefore some information will be lost.

Figure 1. A horizontal stride of 2 will in this case result in lost information.

Remedy /

To remove the warning, make sure that:
input_size - offset is evenly divisible by the stride, where:
* input_size is the output size of the previous block
* offset is the kernel’s size.

Example: You’ll get this warning if your block input is 60x60, your kernel is 3x3, and you select a stride of 4. Change the stride to 3 to resolve this warning.

This is just a warning, it does not affect the model in any major way, especially if you get it on the first few blocks.
However, if the amount of border lost is in the same order of magnitude as the corresponding input dimension, it means that a significant part of the image is being lost. Then it is important to fix this warning.
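The divisibility check from the Remedy can be written directly (a sketch; the function name is ours), reproducing the 60x60 example:

```python
def stride_loses_information(input_size, kernel_size, stride):
    """True when (input_size - offset) isn't evenly divisible by the stride,
    where offset is the kernel's size."""
    return (input_size - kernel_size) % stride != 0

print(stride_loses_information(60, 3, 4))  # True:  57 % 4 == 1 -> warning
print(stride_loses_information(60, 3, 3))  # False: 57 % 3 == 0 -> no warning
```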


Language model must be the same as used by the feature selected in the Input block. Change the Language model in the Datasets view OR in the Text embedding block.

Cause /

This error message appears if you use a Text embedding block after an Input block. The Language model in the text encoded feature must match the Language model selected for the input feature in the Input block.

Example: English is selected as Language model for a text encoded feature in the Datasets view, but in the Text embedding block Swedish is selected as Language model.

Remedy /

Make sure that the Language models match.
If they don’t, change Language model either in the Datasets view or in the Text embedding block.


The model cannot output a prediction. Check the "Use in predictions" box in the Target block or add an Output block.

Cause /

A model is trained to predict a target feature from examples. On the platform, this is represented by a Target block.

However, some deep learning techniques use the model not only to get predictions about the target feature, but also to get values from various layers of the model. On the platform, these are represented by an Output block.
Examples: autoencoders, feature embedding, similarity search, or if you simply want to check what the model is calculating.

Remedy /

Add a Target block and/or check the Use in predictions-box in the Target block
or
Add an Output block.


You need at least one output connection. Connect a block after this one.

Cause /

A block is the basic building unit in the Peltarion Platform. They represent the basic components of a neural network and/or the actions that can be carried out on them.
Almost all blocks on the Peltarion Platform need to have an output. This output will be the input to another block. The Target block is an exception to this rule since it is, well… the target.

Remedy /

Simple! Connect the output of this block to the input of another block.


Expected ${expected} inputs but got ${count}. So change the number of inputs.

{expected} is a whole number.
{count} is how many inputs are connected to the block.

Cause /

You haven’t connected enough inputs to this block.

This block merges several inputs into one single output. When you added this block, you selected how many inputs it should have, and that number is now fixed.

Example: You’ve added a Concatenate block with 3 inputs, and you have so far connected only 2.

Remedy /

Add as many inputs as expected.

OR
If you need to update the number of inputs of an existing block, you will need to delete this block and create a new one.


Multiple target blocks are currently unsupported. Delete all but one target block.

Cause /

Your model has too many target blocks.

A model on the Peltarion Platform can only predict one output right now. The target block represents the output that you are trying to learn with your model.

Remedy /

Delete all Target blocks but one.
You can only have one Target block.


Change to a text feature in the Input block.

Cause /

This error message appears if you have a Text embedding block after the Input block, and the feature you’ve selected in the Input block doesn’t have encoding-type Text.

Example: The feature you’ve selected in the Input block uses Categorical encoding.

Remedy /

In the Input block, select a feature that uses Text encoding.

Or change the Encoding of the selected feature in the Datasets view.


Change the ${label}, it needs to be between ${minValue} and ${maxValue}.

{label} is a block Parameter
{minValue} and {maxValue} are whole numbers

Cause /

You have typed a value that is outside this Parameter’s range.

Examples: When you use Image augmentation, you can only rotate the images between 0 and 359 degrees. 360 is a full circle.
A Dense block can’t have 0 nodes.

Remedy /

Type a valid value for the Parameter.


Change the ${label} parameter into a number.

The {label} is a block Parameter.

Cause /

The Parameter requires a number, but the value you have typed contains some other kind of character, for example, A or &.

Remedy /

Change the Parameter value to a number, for example, 1 or 1337.


Your experiment is too big (max is 5 GB). You can either use a smaller batch size OR use a smaller neural network.

Cause /

Large neural networks have more parameters that require more GPU memory. Large batch sizes also need more GPU memory. If the required memory exceeds the limitation, you will get this error.

Example: For MNIST dataset, the neural network ResNetv2 large 152 with batch size 1024 would need 6.41 GB memory to train.

Remedy /

You can either choose a smaller neural network or keep the larger neural network but choose a smaller batch size.

Example: Change the neural network from ResNetv2 large 152 to ResNetv2 large 50, while keeping 1024 as the batch size.

Example: Change the batch size from 1024 to 512, while keeping the neural network ResNetv2 large 152.


No target data

Cause /

The Target block doesn’t have any Selection set, that is, there is no data to train the model with.

Remedy /

Select the {TargetButton} block and pick a Selection from the dropdown.


The model doesn’t contain any trainable blocks. Clicking Run will only go over the validation subset once, and no training will occur. To make a block trainable, check the Trainable box.

Cause /

None of the model blocks are set to Trainable. Therefore the model won’t learn anything new.

Remedy /

If you want to make a block trainable, check the Trainable checkbox in the Block parameter pane. This will allow the training algorithm to change the value of the weights during training.

In some cases, you don’t need trainable blocks, for example, when doing a similarity search project.


Outputs must have unique names. Change the name of this output.

Cause /

Outputs must have unique names. Otherwise, it gets so confusing.

The name is used to identify the data when you request predictions with the deployment API.

Remedy /

Change the name of this output. Pick good ones. That will make life easier.


Choose a ${label}!

A {label} can be, for example, Feature in the Input block.

Cause /

A required {label} has not been specified.

Example: The input Feature has not been set in the Input block.

Remedy /

Set the required {label}.


The Target shape doesn’t have the same size as the input. Make sure the product of all dimensions is kept.

Cause /

The Reshape block takes in data values and arranges them into the specified shape, different shapes being more appropriate for different blocks.

You get this error when the Reshape block receives an input whose size, i.e., the number of values, can’t fit into the specified shape.

Example

  • Trying to reshape data of size 10 into a (3,3) shape (size 9).

  • Trying to reshape data of shape 3 x 3 (size 9) into a (2,2,2) shape (size 8).

Remedy /

The shape of the data output by a block is displayed under that block, e.g., 256 x 256 x 3.
You can calculate the size of a shape by multiplying all of its dimensions together, e.g., 256 x 256 x 3 = 196608.

  1. When you set the Target shape, make sure that the size of the input data equals the size of the Target shape.

Example

If the input data is displayed to have a shape of 10 x 10 (size 100):

  • Target shapes of (100,1,1), (50,2), (4,25), (25,2,2) are possible.

  • Target shapes of (1), (100,100), (3,3,10) will cause the error.

  2. If the Target shape is the one you need, but the input data doesn’t match, use one of the 1D Upsampling, 2D Upsampling, 1D Zero padding, or 2D Zero padding blocks to increase the size of the data.
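NumPy’s reshape follows the same size rule and can be used to check a Target shape in advance. Using the 10 x 10 example above:

```python
import numpy as np

data = np.zeros((10, 10))          # size 100

print(data.reshape(50, 2).shape)   # (50, 2): 50 * 2 == 100, so this works

try:
    data.reshape(3, 3, 10)         # 3 * 3 * 10 == 90 != 100
except ValueError:
    print("target shape must have the same total size as the input")
```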


Expected a single input but received ${count} instead.

The {count} is a whole number showing how many inputs are connected to the block.

Cause /

Most blocks expect one input. This message states that the block received {count} instead, where 0 means that no input is connected.

Remedy /

Connect an earlier block to this block.


This block needs a shape of ${shape}. Update the previous blocks OR change the target feature to match the shape.

{shape} is the shape of the target feature, e.g., 28x28x3 for a small RGB image or 10 for a classification problem with 10 classes.

Cause /

The target feature represents the output that you are trying to learn with your model. It can be a label (classification), a scalar (regression) or an image (autoencoders, image segmentation).

You’ll get this message either if you have selected the wrong target feature by mistake,
OR
if the input shape doesn’t match the shape of the target feature.

Example: The number of Nodes in the last Dense block doesn’t match the number of classes in a classification problem, e.g., you have 11 nodes instead of 10 in the MNIST tutorial.

Remedy /

Change the target feature in the Target block to match the incoming data.
OR
Update the previous blocks, so the Target block’s input data match the target feature.

Example: Update the number of Nodes in the last Dense block to match the target feature.


Output blocks can only be connected directly to the main model. No blocks are allowed in between. Remove the intermediate block(s) and reconnect the Output block.

Cause /

The Output blocks are ignored during training. The main model must remain valid if all Output blocks were removed.

You cannot have blocks between an Output block and the main model.

Example

Figure 2. These uses of the Output block are not allowed, since deleting the output blocks would make the model graphs invalid.

Remedy /

Remove the intermediate block(s) and reconnect the Output block with the main model.


This block needs ${expected} dimensions but got ${actual} from the previous block. Update the number of dimensions.

{expected} and {actual} are whole numbers.

Cause /

The block receives input data that has an incompatible number of dimensions. Some blocks can only work on data that has a particular number of dimensions.

Examples: The Dense block needs input of 1 dimension,
The 1D Convolution block needs input of 2 dimensions,
The 2D Convolution block needs input of 3 dimensions…​

Remedy /

The shape of the data is displayed after every block in the model canvas. Look for the dimensions of the data going into the block to understand what is going wrong.

Examples: An image feature with a resolution of 256 by 256 pixels and 3 color components has 3 dimensions and will display 256x256x3.
A categorical feature with 10 classes has 1 dimension and will display 10.

Change the Input feature

Maybe the wrong Input feature is selected. Make sure you have the correct feature selected in the Input block.

Example: Change the Input feature from tabular data to image data.

If the correct feature is selected but the dimensions are incorrect, go to the Datasets view and check that the Encoding of the feature is correct.

Use a different block before this block

The block may not be adapted to the data coming in. Try to use a different block before this block.

Example: An image feature has 3 dimensions and cannot be processed by a Dense or 1D Convolution block. Use a 2D Convolution block instead to process this data.

Reshape the data

If the input data is correct and you need to use this block, you can reshape the data to have the expected dimensions by using the Flatten or Reshape blocks.


Deployment view

This experiment has too many inputs. Use an experiment with only 1 image input.

Cause /

Image similarity only works if you have one image input. It seems that you’re using multiple inputs.

Remedy /

Select an experiment with only one image input.

If you don’t have an experiment with only one image input right now, go back to the Modeling view and create a new experiment with only one input.


Image similarity must have images as input. Use an experiment with an image input feature.

Cause /

Image similarity only works if you have images as input. It seems that the input feature you’re using isn’t an image feature.

Remedy /

Select an experiment with an image input.

If you don’t have an experiment with an image input right now, go back to the Modeling view and create a new experiment with an image input.


Image similarity uses Output blocks. Select an experiment that includes an Output block.

Cause /

Image similarity uses Output blocks to get values from intermediate layers of the model. The model you’ve selected doesn’t include any Output blocks.

Remedy /

Select an experiment that includes an Output block.

OR

Create a new experiment in the Modeling view that includes an Output block.


The Output block you use isn’t one-dimensional. Select a one-dimensional Output block.

Cause /

The Output block you’ve selected isn’t one-dimensional. For image similarity to work, the Output block must be one-dimensional.

Remedy /

Select a one-dimensional Output block in the selected experiment.

OR

Create a new experiment in the Modeling view that includes a one-dimensional Output block.


The Output block does not produce a tensor. Select an Output block that produces a tensor.

Cause /

The Output block you’ve selected does not produce a tensor. For image similarity to work, the Output block must produce a tensor.

Remedy /

Select an Output block that produces a tensor.

OR

Create a new experiment in the Modeling view that produces a tensor.


This experiment has too many inputs. Use an experiment with only 1 image input.

Cause /

The experiment you’ve selected has too many inputs. For image similarity to work the experiment must have only one input with an image feature.

Remedy /

Use an experiment with only one image input.

OR

Create a new experiment in the Modeling view that uses only one image input.


Experiment

Broken data found

Cause /

The model uses a feature for which not all examples have the expected shape or type, or some examples have a missing value.

Remedy /

The error pop-up (displayed by clicking the Error icon under the experiment’s name) contains further information about the first few rows that were incompatible. Inspect this additional information to determine which features and examples are causing the problem.

Modify the data and upload a new dataset where all the examples have identical shapes and types for all of their used features.

Workaround

To continue working quickly, you can create a new dataset version that does not use either the feature or the subset of examples that is causing the problem. However, this is not recommended since this might ignore valuable information from the dataset.


API errors

Oops, an error occurred (400)

Cause /

Your application is trying to call an experiment that hasn’t been enabled.

Remedy /

Navigate to the Deployment view and select the experiment you want to call.

Click the Enable button.


The model is now enabled and can be called using the experiment’s URL and Token.
