Peltarion

# Error messages

Here you’ll find platform error messages, each with a Cause that describes why you get the error and a Remedy that explains how you can solve it.

## Dataset errors

#### Cause /

The number of columns in the file exceeds the maximum number of columns allowed.
This can happen if the platform has problems interpreting the line breaks in the file and therefore reads the whole file as one long row. In that case the number of columns can become very large.

We support Microsoft Windows (CR LF, \r\n) and modern Linux/Unix like (LF, \n) systems.

We do sometimes have problems with files from older versions of Mac OS (especially files saved by older versions of Microsoft Excel) that only use (CR, \r) line endings.

#### Remedy /

One way to solve such problems is to open the CSV in a text editor (such as Visual Studio Code) and simply save it again. This will usually change the line endings to compatible ones.

There are also several command-line tools that solve this problem, such as dos2unix.
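If you prefer to script the fix, here is a minimal Python sketch (the function names are illustrative) that rewrites a file with LF line endings:

```python
from pathlib import Path

def normalize_line_endings(text: str) -> str:
    """Convert Windows (CR LF) and old Mac (CR) line endings to LF."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

def fix_csv_file(path: str) -> None:
    """Rewrite a CSV file in place with LF line endings."""
    p = Path(path)
    p.write_text(normalize_line_endings(p.read_text()), encoding="utf-8")
```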

### We failed to parse your file. Try again, and if it fails, contact support@peltarion.com.

#### Cause /

We failed to parse your file for some reason.

#### Remedy /

1. Try again to see if the error remains.

2. If this doesn’t help, make sure that the dataset fulfills our requirements.

3. As a last resort, feel free to contact support@peltarion.com.

### We failed to read an image within the ZIP file. Make sure all images follow our requirements.

#### Cause /

We could not parse an image within the ZIP file. This may happen for many reasons.

Example:
The image header says that the image is a png, but it is actually another format, for example, jpg. When the platform tries to parse the image, it fails.

#### Remedy /

Make sure the images and the ZIP file follow our requirements.

### We failed to read an image within the ZIP file. Try to upload the file again.

#### Cause /

We could not read an image in the ZIP file that you tried to upload. This can happen for many different reasons.

#### Cause /

The platform only supports little-endian ('<') byte-order. The file has the wrong byte-order.
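If the file was created with NumPy, one possible remedy (a sketch; the function name is illustrative) is to convert the array to little-endian byte order before saving:

```python
import numpy as np

def to_little_endian(arr: np.ndarray) -> np.ndarray:
    """Return a copy of arr using little-endian ('<') byte order."""
    if arr.dtype.byteorder == ">":  # big-endian
        return arr.astype(arr.dtype.newbyteorder("<"))
    return arr  # already little-endian or native byte order

big = np.arange(4, dtype=">f4")    # a big-endian float32 array
little = to_little_endian(big)
# np.save("data.npy", little)      # re-save, then upload the new file
```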

### Column major order (Fortran order) is not supported. Change the order to row major order and upload again.

#### Cause /

The platform supports row major order when storing a matrix. If the file is stored in column major order it won’t work.

#### Remedy /

Change the order to row major and upload the file again.
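If the matrix was saved with NumPy, a sketch of the fix (the function name is illustrative):

```python
import numpy as np

def to_row_major(arr: np.ndarray) -> np.ndarray:
    """Return the array laid out in row-major (C) order."""
    return np.ascontiguousarray(arr)

fortran = np.asfortranarray(np.arange(6).reshape(2, 3))  # column-major layout
c_order = to_row_major(fortran)                          # row-major layout
# np.save("matrix.npy", c_order)   # re-save, then upload the new file
```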

### The files have a different number of rows. The row count needs to match. Make sure the files have the same number of rows before you upload them.

#### Cause /

When you add more files, you add more columns to your dataset. If the files have a mismatch in the number of rows, you will get rows with missing values, which the platform doesn’t support. You can’t train a model with data that doesn’t exist.
Currently, it is not possible to add more rows to the dataset by importing another file.

#### Remedy /

Check your files before you upload them to the platform and make sure they have the same number of rows.
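A quick local check, sketched in Python (the helper names are illustrative):

```python
import csv

def row_count(path: str) -> int:
    """Count the data rows in a CSV file, excluding the header row."""
    with open(path, newline="") as f:
        return sum(1 for _ in csv.reader(f)) - 1

def rows_match(*paths: str) -> bool:
    """True if all the CSV files have the same number of data rows."""
    return len({row_count(p) for p in paths}) == 1
```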

#### Cause /

The platform expects that there are as many columns in each row as in the header row.

If one row in the dataset has fewer columns, the platform cannot use the dataset.

The error is shown for the first row where this happens, but there might be more rows that have too few columns.

#### Remedy /

Make sure all rows have the same number of columns as the header row before uploading the file.
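A Python sketch (illustrative function name) that lists rows whose column count differs from the header before you upload:

```python
import csv

def find_mismatched_rows(path: str) -> list[int]:
    """Return 1-based row numbers whose column count differs from the header."""
    bad = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        for i, row in enumerate(reader, start=2):  # the header is row 1
            if len(row) != len(header):
                bad.append(i)
    return bad
```

The same check also catches rows that have too many columns.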

### There are more columns (${numberOfColumnsInSample}) in row ${row} than expected (${numberOfColumnsInHeader}). Make sure all rows have as many columns as the header row.

#### Cause /

The platform expects that there are as many columns in each row as in the header row.

If one row in the dataset has more columns, the platform cannot use the dataset.

The error is shown for the first row where this happens, but there might be more rows that have too many columns.

#### Remedy /

Make sure all rows have the same number of columns as the header row before uploading the file.

## Modeling view

### Did you mean to use ${activation} as activation? Or perhaps ${loss} as loss function in the Target block?

{activation} is the activation function in the last block before the Target block.
{loss} is the Target block’s loss.

#### Cause /

The activation and the loss don’t match.

The activation function calculates what value a block should give as an output. The loss function quantifies how well a model is performing a task by calculating a single number, the loss, from the model output and the desired target.

Some loss functions can only be calculated for a limited range of model outputs. You can ensure that the model output is always in the correct range by using an appropriate activation function on the last block of the model.

Examples: Sigmoid is often used together with the loss function binary crossentropy.
Softmax is often used in the final block in a classifier model with categorical crossentropy as loss function.

#### Remedy /

You can solve this problem in two ways:

• Change the Activation in the second to last block to the one we suggest.

• Change the Loss function in the Target block to the one we recommend.

Read more here about activation functions and loss functions.

### The last block before the Target uses the activation ${activation}. We do not recommend this with ${loss} as a loss function. How about changing the activation?

{activation} is the activation function in the last block before the Target block.
{loss} is the Target block’s loss.

#### Cause /

The activation function calculates what value a block should give as an output. Which activation function you should choose depends, of course, on your model and what you want to achieve.

The loss function is a critical part of model training: it quantifies how well a model is performing a task by calculating a single number, the loss, from the model output and the desired target.

Some loss functions can only be calculated for a limited range of model outputs. You can ensure that the model output is always in the correct range by using an appropriate activation function on the last block of the model.

#### Remedy /

Change the Activation in the last block before the Target block.

You could also change the loss function for the Target block. Maybe you didn’t mean to select the loss you did.

Read more here about activation functions and loss functions.

### The Input vocabulary size must always match the number of classes of the Input feature. Make sure they match.

#### Cause /

The Input vocabulary size in the Embedding block must always match the number of Unique values (classes) of the Input feature.

Example: Input vocabulary size: 2 = Classes in input feature: Yes and No.

#### Remedy /

1. Navigate to the Datasets view and the Features tab.

• Find the parameter Unique values and note the value.
• Make sure Encoding type is Categorical.
• Make sure Type is Index.

2. Navigate back to the Modeling view and select the Embedding block.

• Set the Input vocabulary size to the same value as Unique values.
• Deselect One hot input.

### One-hot input is selected, then the Input feature must have; Encoding: Categorical and Type: One-hot. The Input vocabulary size must match the number of input classes. Change here or in the Datasets view.
#### Cause /

When One-hot input is selected, the Input feature must have Encoding set to Categorical and Type set to One-hot. The Input vocabulary size must match the number of Unique values (classes). Update the Embedding block’s parameters and the settings in the Datasets view.

#### Remedy /

1. Navigate to the Datasets view and the Features tab.

• Find the parameter Unique values and note the value.
• Make sure Encoding type is Categorical.
• Make sure Type is One-hot.

2. Navigate back to the Modeling view and select the Embedding block.

• Set the Input vocabulary size to the same value as Unique values.
• Select One hot input.

### We suggest that you enable Early stopping. This model will train for many epochs and you do not want it to run longer than necessary.

#### Cause /

Training your model for too long may lead to overfitting, and it is also expensive. Better to spend your GPU hours on something more valuable.

Early stopping is a feature that automatically stops training when a chosen metric has stopped improving. You can see it as a form of regularization used to avoid overfitting.

#### Remedy /

Enable Early stopping. You do this in the Run settings in the Modeling canvas.

### With this input you don’t have to use flattening. Use only when the input dimension > 1.

#### Cause /

The input to the Flatten block has only 1 dimension.

#### Remedy /

Make sure that the number of input dimensions to the Flatten block is > 1.

### The selected loss function is incompatible with the selected target feature. Change the loss function in the Target block to MSE, MAE, MSLE, or Poisson.

#### Cause /

If the selected target feature has Numeric encoding, the loss function should be MSE, MAE, MSLE, or Poisson.

#### Remedy /

Change the loss function in the Target block to MSE, MAE, MSLE, or Poisson.

### All inputs have to be of the same shape. Change the size of the inputs to the same shape.
#### Cause /

The Add block can take between 2 and 5 inputs and returns a single tensor containing the element-wise sum over all inputs. All the inputs must have the same number of dimensions.

#### Remedy /

Change the size of the inputs to the same shape.

Example: You can add inputs of shapes:

• 16x16 and 16x16
• 1x1x3 and 32x32x3
• 64x1x64 and 64x100x64

### You need a batch size smaller than validation subset. Change it to ${examples} or less.

{examples} is the size of the validation subset.

#### Cause /

There aren’t enough samples in the validation subset to fill up one batch.

Your dataset consists of samples (rows). In the Datasets view you split the dataset into a larger training subset and a smaller validation subset. If you don’t have a large dataset, the validation subset can become quite small.

#### Remedy /

Make the Batch size equal to or smaller than the size of the validation subset.
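As a sketch of the arithmetic (the numbers and the function name are hypothetical):

```python
def max_batch_size(n_examples: int, validation_fraction: float) -> int:
    """Largest batch size that still fits in the validation subset."""
    return round(n_examples * validation_fraction)

# 1,000 examples with a 20% validation split leave 200 validation examples,
# so the batch size must be 200 or less.
```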

### Change concatenation axis to one of the following axis-values ${allowedAxes}.

#### Cause /

The selected concatenation axis will not work.

#### Remedy /

Change the concatenation axis to one of the suggested axes. We’ve calculated that the suggested axes will work. -1 means the last axis.

Example: For 3D, choose whether you want to merge the inputs vertically (1), horizontally (2), or depthwise (3).

### All input dimensions except for the concatenation axis must match. Update input sizes.

#### Cause /

The size of all the inputs must be identical on each axis that is not the axis of concatenation. This is because you merge the inputs along the concatenation axis.

Example: If you in 3D want to concatenate along the vertical axis, dimension 2, then all inputs must be identical along dimensions 1 and 3.

#### Remedy /

Update the input sizes so they match.

### The output size is reduced to zero. Make the input bigger or change the settings of this block.

#### Cause /

A mathematical operation has reduced the output size to zero. The root of the problem may lie in something the model did earlier.

Example: If you use a Stride larger than the input image when performing a convolution somewhere upstream in your model.

What is stride? The stride sets how big steps the convolution will take along an axis. Too big a stride can cause too big a loss of information. Then you’ll get this message.

#### Remedy /

Make this block’s input bigger. You can do this by lowering the Stride in a previous block.

OR

Lower the Stride of this block.

OR

Lower the Width or Height of this block.

### Invalid data type: expected ${expected}, received ${actual}. Transform the input to this block.

#### Cause /

If you concatenate the outputs of two blocks, they have to have the same data type. You cannot concatenate a decimal and an integer, even if they have the same shape.

This can happen if you try to connect a categorical and a numeric input.

#### Remedy /

Usually, one transforms categoricals into dense decimal vectors via an Embedding block or a Dense block.

### The Output block doesn’t affect training but allows you to return extra data from any block. Use the Output block data to understand what your deployed model is up to.

Some deep learning techniques use the model not only to get predictions about the target feature but also to get values from intermediate blocks of the model. This is the case, for instance, with similarity search, autoencoders, and feature embedding, or if you simply want to check what the model is calculating.

Use the Output block to get data from any part of the model. The data from the Output block will be included in predictions made by the deployed model. Create a deployment in the Deployment view, and you will see the Output block in the list of model outputs.

### Stride value causes info loss. No worries, in most cases this is ok.

#### Cause /

If a block’s stride is greater than 1, information may be lost at the border of the block input. This means that the convolutional filter cannot be evenly applied at the border of the block input, so some information will be lost.

Figure 1. A horizontal stride of 2 will in this case result in lost information.

#### Remedy /

To remove the warning, make sure that input_size - offset is evenly divisible by the stride, where:

* input_size is the output size of the previous block
* offset is the kernel’s size.

Example: You’ll get this warning if your block input is 60x60, your kernel is 3x3, and you select a stride of 4. Change the stride to 3 to resolve this warning.

This is just a warning; it does not affect the model in any major way, especially if you get it on the first few blocks. However, if the amount of border lost is in the same order of magnitude as the corresponding input dimension, a significant part of the image is being lost. Then it is important to fix this warning.
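The divisibility rule above can be checked with a short sketch (the function name is illustrative; input_size, kernel, and stride are per-axis numbers):

```python
def stride_loses_info(input_size: int, kernel: int, stride: int) -> bool:
    """True if the kernel cannot be applied evenly along this axis,
    i.e. (input_size - kernel) is not evenly divisible by the stride."""
    return (input_size - kernel) % stride != 0

# The example from the warning: a 60x60 input with a 3x3 kernel.
# A stride of 4 triggers the warning; a stride of 3 resolves it.
```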
### Language model must be the same as used by the feature selected in the Input block. Change the Language model in the Datasets view OR in the Text embedding block.

#### Cause /

This error message appears if you use a Text embedding block after an Input block. The Language model in the text encoded feature must match the Language model selected for the input feature in the Input block.

Example: English is selected as Language model for a text encoded feature in the Datasets view, but in the Text embedding block Swedish is selected as Language model.

#### Remedy /

Make sure that the Language models match. If they don’t, change the Language model either in the Datasets view or in the Text embedding block.

### The model cannot output a prediction. Check the "Use in predictions" box in the Target block or add an Output block.

#### Cause /

A model is trained to predict a target feature from examples. On the platform, this is represented by a Target block.

However, some deep learning techniques use the model not only to get predictions about the target feature, but also to get values from various layers of the model. On the platform, this is represented by an Output block.

Examples: autoencoders, feature embedding, similarity search, or if you simply want to check what the model is calculating.

#### Remedy /

Add a Target block and/or check the Use in predictions box in the Target block, or add an Output block.

### You need at least one output connection. Connect a block after this one.

#### Cause /

A block is the basic building unit in the Peltarion Platform. Blocks represent the basic components of a neural network and/or the actions that can be carried out on them.

Almost all blocks on the Peltarion Platform need to have an output. This output will be the input for another block. The Target block is an exception to this rule since it is, well…​ the target.

#### Remedy /

Simple! Connect the output of this block to the input of another block.

### Expected ${expected} inputs but got ${count}. So change the number of inputs.

{expected} is a whole number.
{count} is how many inputs are connected to the block.

#### Cause /

You haven’t connected enough inputs to this block. This block merges several inputs into one single output. When you added this block, you selected how many inputs it should have, and that number of inputs must now be connected.

Example: You’ve added a Concatenate block with 3 inputs, and you have so far connected only 2.

#### Remedy /

Add as many inputs as expected.

OR

If you need to update the number of inputs of an existing block, you will need to delete this block and create a new one.

### Multiple target blocks are currently unsupported. Delete all but one target block.

#### Cause /

Your model has too many Target blocks. A model on the Peltarion Platform can only predict one output right now. The Target block represents the output that you are trying to learn with your model.

#### Remedy /

Delete all Target blocks but one. You can only have one Target block.

### Change to a text feature in the Input block.

#### Cause /

This error message appears if you have a Text embedding block after the Input block, and the feature you’ve selected in the Input block doesn’t have encoding type Text.

Example: The feature you’ve selected in the Input block uses Categorical encoding.

#### Remedy /

In the Input block, select a feature that uses Text encoding. Or change the Encoding of the selected feature in the Datasets view.

### Change the ${label}, it needs to be between ${minValue} and ${maxValue}.

{label} is a block Parameter
{minValue} and {maxValue} are whole numbers

#### Cause /

You have typed a value that is outside this Parameter’s range.

Examples: When you use Random transformation, you can only rotate the images between 0 and 359 degrees. 360 is a full circle.
A Dense block can’t have 0 nodes.

#### Remedy /

Type a valid value for the Parameter.

#### Cause /

This input needs a feature.

The dataset you use in the experiment consists of features.

#### Remedy /

Select a feature for this block.

### The model doesn’t contain any trainable blocks. Clicking Run will only go over the validation subset once, and no training will occur. To make a block trainable check the Trainable box.

#### Cause /

None of the model blocks are set to Trainable. Therefore the model won’t learn anything new.

#### Remedy /

If you want to make a block trainable, check the Trainable checkbox in the Block parameter pane. This will allow the training algorithm to change the value of the weights during training.

In some cases, you don’t need trainable blocks, for example, when doing a similarity search project.

### Outputs must have unique names. Change the name of this output.

#### Cause /

Outputs must have unique names. Otherwise, it gets so confusing.

The name is used to identify the data when you request predictions with the deployment API.

#### Remedy /

Change the name of this output. Pick good ones. That will make life easier.

### You cannot connect an Output block to an Input block. Connect the Output block to another block in the model.

#### Cause /

The Input block represents data coming into the model. All calculations are made in subsequent blocks in the model.

The Output block’s purpose is to extract information on what the model is calculating.

Therefore you cannot connect an Output block directly to an Input block, since no calculations have happened yet.

#### Remedy /

Connect the Output block to another block in the model.

### Choose a ${label}!

A {label} can be, for example, Feature in the Input block.

#### Cause /

A required {label} has not been specified.

Example: The input Feature has not been set in the Input block.

#### Remedy /

Set the required {label}.

### The Target shape doesn’t have the same size as the input. Make sure the product of all dimensions is kept.

#### Cause /

The Reshape block takes in data values and arranges them into the specified shape, different shapes being more appropriate for different blocks.

You get this error when the Reshape block receives an amount of values, i.e., a size of input, that can’t fit into the specified shape.

Example:

• Trying to reshape data of size 10 into a (3,3) shape (size 9).
• Trying to reshape data of shape 3 x 3 (size 9) into a (2,2,2) shape (size 8).

#### Remedy /

The shape of the data output by a block is displayed under that block, e.g., 256 x 256 x 3. You can calculate the size of a shape by multiplying all of its dimensions together, e.g., 256 x 256 x 3 = 196608.

1. When you set the Target shape, make sure that the size of the input data equals the size of the Target shape.

Example: If the input data is displayed to have a shape of 10 x 10 (size 100):

• Target shapes of (100,1,1), (50,2), (4,25), (25,2,2) are possible.
• Target shapes of (1), (100,100), (3,3,10) will cause the error.

2. If the Target shape is the one you need, but the input data doesn’t match, use one of the 1D Upsampling, 2D Upsampling, 1D Zero padding, or 2D Zero padding blocks to increase the size of the data.

### Expected a single input but received ${count} instead.

{count} is a whole number showing how many inputs are connected to the block.

#### Cause /

Most blocks expect one input. This message states that the block received {count} instead, where 0 means that no input is connected.

#### Remedy /

Connect an earlier block to this block.

### This block must be connected to exactly one block afterwards.

#### Cause /

Most blocks must be connected to exactly one following block; they can’t be connected to more than one.

#### Remedy /

Connect a block after this block.

### This block needs a shape of ${shape}. Update the previous blocks OR change the target feature to match the shape.

{shape} is the shape of the target feature, e.g., 28x28x3 for a small RGB image or 10 for a classification problem with 10 classes.

#### Cause /

The target feature represents the output that you are trying to learn with your model. It can be a label (classification), a scalar (regression), or an image (autoencoders, image segmentation).

You’ll get this message either if you, by mistake, have selected the wrong target feature, OR if the input shape doesn’t match the shape of the target feature.

Example: The number of Nodes in the last Dense block doesn’t match the number of classes in a classification problem, e.g., you have 11 nodes instead of 10 in the MNIST tutorial.

#### Remedy /

Change the target feature in the Target block to match the incoming data.

OR

Update the previous blocks, so the Target block’s input data matches the target feature.

Example: Update the number of Nodes in the last Dense block to match the target feature.

### Output blocks can only be connected directly to the main model. No blocks are allowed in between. Remove the intermediate block(s) and reconnect the Output block.

#### Cause /

The Output blocks are ignored during training, so the main model must remain valid if all Output blocks were removed. You cannot have blocks between an Output block and the main model.

Example:

Figure 2. These uses of the Output block are not allowed, since deleting the output blocks would make the model graphs invalid.

#### Remedy /

Remove the intermediate block(s) and reconnect the Output block with the main model.

### This block needs ${expected} dimensions but got ${actual} from the previous block. Update the number of dimensions.

{expected} and {actual} are whole numbers.

#### Cause /

The block receives input data that has an incompatible number of dimensions. Some blocks can only work on data that has a particular number of dimensions.

Examples: The Dense block needs input of 1 dimension,
The 1D Convolution block needs input of 2 dimensions,
The 2D Convolution block needs input of 3 dimensions…​

#### Remedy /

The shape of the data is displayed after every block in the model canvas. Look for the dimensions of the data going into the block to understand what is going wrong.

Examples: An image feature with a resolution of 256 by 256 pixels and 3 color components has 3 dimensions and will display 256x256x3.
A categorical feature with 10 classes has 1 dimension and will display 10.

#### Change the Input feature

Maybe the wrong Input feature is selected. Make sure you have the correct feature selected in the Input block.

Example: Change the Input feature from tabular data to image data.

If the correct feature is selected but the dimensions are incorrect, go to the Datasets view and control that the Encoding of the feature is correct.

#### Use a different block before this block

Maybe the block is not adapted to the data coming in. Try to use a different block before this block.

Example: An image feature has 3 dimensions and cannot be processed by a Dense or 1D Convolution block. Use a 2D Convolution block instead to process this data.

#### Reshape the data

If the input data is correct and you need to use this block, you can reshape the data to have the expected dimensions by using the Flatten or Reshape blocks.
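In NumPy terms, Flatten and Reshape work roughly like this (a sketch with a hypothetical 8x8 RGB input):

```python
import numpy as np

# A hypothetical 8x8 RGB image feature: 3 dimensions, shape 8x8x3.
image = np.zeros((8, 8, 3))

# Flatten collapses everything into 1 dimension, as a Dense block needs.
flat = image.reshape(-1)           # shape (192,) since 8 * 8 * 3 = 192

# Reshape rearranges the values into another shape with the same total size.
reshaped = image.reshape(12, 16)   # 12 * 16 = 192, so the size is kept
```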

## Deployment view

### This experiment has too many inputs. Use an experiment with only 1 image input.

#### Cause /

Image similarity only works if you have one image input. It seems that you use multiple inputs.

#### Remedy /

Select an experiment with only one image input.

If you don’t have an experiment with only one image input right now, go back to the Modeling view and create a new experiment with only one input.

### Image similarity must have images as input. Use an experiment with an image input feature.

#### Cause /

Image similarity only works if you have images as input. It seems that the input feature you’re using isn’t an image feature.

#### Remedy /

Select an experiment with an image input.

If you don’t have an experiment with an image input right now, go back to the Modeling view and create a new experiment with an image input.

### Image similarity uses Output blocks. Select an experiment that includes an Output block.

#### Cause /

Image similarity uses Output blocks to get values from intermediate layers of the model. The model you’ve selected doesn’t include any Output blocks.

#### Remedy /

Select an experiment that includes an Output block.

OR

Create a new experiment in the Modeling view that includes an Output block.

### The Output block you use isn’t one-dimensional. Select a one-dimensional Output block.

#### Cause /

The Output block you’ve selected isn’t one-dimensional. For image similarity to work, the Output block must be one-dimensional.

#### Remedy /

Select a one-dimensional Output block in the selected experiment.

OR

Create a new experiment in the Modeling view that includes a one-dimensional Output block.

### The Output block does not produce a tensor. Select an Output block that produces a tensor.

#### Cause /

The Output block you’ve selected does not produce a tensor. For image similarity to work, the Output block must produce a tensor.

#### Remedy /

Select an Output block that produces a tensor.

OR

Create a new experiment in the Modeling view that produces a tensor.

### This experiment has too many inputs. Use an experiment with only 1 image input.

#### Cause /

The experiment you’ve selected has too many inputs. For image similarity to work the experiment must have only one input with an image feature.

#### Remedy /

Use an experiment with only one image input.

OR

Create a new experiment in the Modeling view that uses only one image input.

## Experiment

### Broken data found

#### Cause /

The model uses a feature for which some examples do not have the expected shape or type, or have a missing value.

#### Remedy /

The error pop-up (displayed by clicking under the experiment’s name) contains further information about the first few rows that were incompatible. Inspect this additional information to determine which features and examples are causing the problem.

Modify the data and upload a new dataset where all the examples have identical shapes and types for all of their used features.
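For tabular data, a sketch in Python (illustrative function name; it flags empty cells only, not shape or type mismatches) to locate rows with missing values before re-uploading:

```python
import csv

def rows_with_missing_values(path: str) -> list[int]:
    """Return 1-based row numbers that contain an empty cell."""
    bad = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        for i, row in enumerate(reader, start=2):
            if any(cell.strip() == "" for cell in row):
                bad.append(i)
    return bad
```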

##### Workaround

To continue working quickly, you can create a new dataset version that does not use either the feature or the subset of examples that is causing the problem. However, this is not recommended since this might ignore valuable information from the dataset.

## API errors

### Oops, an error occurred (400)

#### Cause /

Your application is trying to call an experiment that hasn’t been enabled.

#### Remedy /

Navigate to the Deployment view and select the experiment you want to call.

Click the Enable button.

The model is now enabled and can be called using the experiment’s URL and Token.