Image encoding

Image encoding is used on dataset features that are image files, like jpg and png files.

The Color mode and Image transformation are conversion steps that are performed before an image is sent to the model, both during training and at inference, when you query a deployed model. These features allow you to freely mix images from various sources and formats in your datasets, while ensuring that models always run as intended.


Image encoding also lets you use normalization to rescale the color range of the images in the dataset.
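The platform's exact normalization step is not shown here, but the idea can be sketched in NumPy, with an array standing in for a decoded image (the 2×2 pixel values are made up for illustration):

```python
import numpy as np

# A hypothetical 2x2 grayscale image with 8-bit pixel values.
image = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Rescale the color range from [0, 255] to [0.0, 1.0].
normalized = image.astype(np.float32) / 255.0

print(normalized.min(), normalized.max())  # 0.0 1.0
```

Rescaling to a small, fixed range like this is a common way to give the model numerically well-behaved inputs regardless of the original bit depth.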

Color mode

Image files from various sources may be saved in different modes, for example grayscale ("black and white") or color. Different modes have different numbers of channels, which changes the size of the image that the model receives.

Models can only work with inputs having a fixed data size, so converting images to a specific color mode lets your model run regardless of what the original images used.
Use the Color mode to select which mode you want your model to work with: all images are converted to this mode before being sent to the model, both during training and at inference time.

We recommend using the Color mode in almost all cases, since most pretrained models expect color image inputs. Except for specific applications, there is little benefit in using other modes.

Figure 1. Images from different sources may use different color modes, which change the size of the data. The feature’s Color mode converts images to have a fixed number of channels (and Image transformation gives a fixed image resolution), so that models can run on any image input.
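As a rough sketch of what a grayscale-to-color conversion does to the data, here is a NumPy example (the tiny 2×2 image is made up; the platform's own conversion is not shown):

```python
import numpy as np

# A hypothetical 2x2 grayscale image: one channel per pixel.
gray = np.array([[[10], [20]], [[30], [40]]], dtype=np.uint8)  # shape (2, 2, 1)

# Convert to a color mode by replicating the single channel across
# three RGB channels, giving the model a fixed channel count.
rgb = np.repeat(gray, 3, axis=-1)

print(gray.shape, rgb.shape)  # (2, 2, 1) (2, 2, 3)
```

The image content is unchanged; only the channel dimension grows, so a model built for three-channel inputs can consume it.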

Image transformation

Transformation lets you set the pixel width and height of the images that you want your models to work with. If the image feature of an example doesn’t match the specified size, it is transformed using the selected method before being submitted to the model.

There are four methods for transforming images on the platform:

  • Crop and resize all images to the same size.

  • Crop or pad all images to the same size. Images that are too big are cropped; images that are too small are padded with zeros.

  • Resize all images to the same size.

  • None

Image transformation may happen at training time, if the dataset contains image examples of different resolutions. Transformation may also happen at inference time, if an image of arbitrary resolution is sent to a deployed model for prediction.
Both training and inference will apply the same transformation settings.

When normalization is used, it is applied after the images have been transformed, except when the Crop or pad method is used.
When Crop or pad is used, normalization is applied to the original images before transformation, so that the padded values are not affected by the normalization.
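This ordering matters because padding fills with zeros ("black"), and normalizing afterward could shift those zeros to a non-zero value. A minimal NumPy sketch of the normalize-then-pad order (the 2×2 values and padding width are made up for illustration):

```python
import numpy as np

# A hypothetical 2x2 image, normalized to [0, 1] *before* padding,
# so the zero padding introduced by Crop or pad stays exactly zero.
image = np.array([[64, 128], [192, 255]], dtype=np.float32)
normalized = image / 255.0            # normalize first
padded = np.pad(normalized, 1)        # then pad to 4x4 with zeros

print(padded[0, 0], padded.shape)  # 0.0 (4, 4)
```

Had the padding been applied first, a normalization such as mean subtraction would have altered the padded border as well.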

Crop and resize

This option will crop and resize all images.
It will first crop images around the center to obtain the target aspect ratio, then resize the result to get the target size.

Crop and resize example
Figure 2. Result of crop and resize inside the red box

This is a good compromise between the two other methods, since it avoids both distorting shapes and adding padding. However, some parts of the image might be discarded when cropping.
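The two steps can be sketched in NumPy with nearest-neighbor resizing; this is a simplified illustration, not the platform's actual implementation (which likely uses a higher-quality resampling filter):

```python
import numpy as np

def crop_and_resize(image, target_h, target_w):
    """Center-crop to the target aspect ratio, then nearest-neighbor
    resize to the target size (a simplified sketch)."""
    h, w = image.shape[:2]
    target_ratio = target_w / target_h
    if w / h > target_ratio:
        # Image is too wide: crop the width around the center.
        new_w = int(round(h * target_ratio))
        x0 = (w - new_w) // 2
        image = image[:, x0:x0 + new_w]
    else:
        # Image is too tall: crop the height around the center.
        new_h = int(round(w / target_ratio))
        y0 = (h - new_h) // 2
        image = image[y0:y0 + new_h, :]
    h, w = image.shape[:2]
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return image[rows][:, cols]

image = np.arange(8 * 4).reshape(8, 4)   # a hypothetical tall 8x4 image
out = crop_and_resize(image, 4, 4)
print(out.shape)  # (4, 4)
```

In this example the tall image loses its top and bottom rows to the center crop, which is exactly the "some parts might be discarded" trade-off described above.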

Crop or pad

Crop or pad simply draws a window of the specified size around the center of the image, and either crops the image if it goes outside of the window, or pads the image with black if it is smaller than the window.

Crop or pad example
Figure 3. Result of crop or pad inside the red box

Crop or pad can be used in cases where it’s critical to preserve the pixel size of the shapes in the image, or to avoid noise being introduced by the resizing algorithm.
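A minimal NumPy sketch of the window-around-the-center idea, for a single-channel image (illustrative only; the platform's implementation is not shown):

```python
import numpy as np

def crop_or_pad(image, target_h, target_w):
    """Draw a target-sized window around the image center:
    crop what falls outside, pad the rest with zeros (black)."""
    h, w = image.shape[:2]
    out = np.zeros((target_h, target_w), dtype=image.dtype)
    # Size of the overlap between the window and the image,
    # centered on both.
    ch, cw = min(h, target_h), min(w, target_w)
    y_src, x_src = (h - ch) // 2, (w - cw) // 2
    y_dst, x_dst = (target_h - ch) // 2, (target_w - cw) // 2
    out[y_dst:y_dst + ch, x_dst:x_dst + cw] = \
        image[y_src:y_src + ch, x_src:x_src + cw]
    return out

small = np.ones((2, 2), dtype=np.uint8)  # smaller than the 4x4 window
print(crop_or_pad(small, 4, 4).sum())  # 4: original pixels kept, border is zero
```

Because no resampling happens, every surviving pixel keeps its original value, which is why this method preserves the pixel size of shapes.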

Resize

Resize performs a straightforward resizing of the image to the specified resolution. Shapes may be distorted since the original aspect ratio is not preserved.

Resize example
Figure 4. Result of resize inside the red box

No part of the image is discarded since cropping never occurs.
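A nearest-neighbor sketch in NumPy shows why distortion occurs: rows and columns are sampled independently, so the aspect ratio is simply discarded (illustrative only, not the platform's resampling algorithm):

```python
import numpy as np

def resize(image, target_h, target_w):
    """Nearest-neighbor resize to the target resolution.
    The aspect ratio is not preserved, so shapes may be distorted."""
    h, w = image.shape[:2]
    rows = np.arange(target_h) * h // target_h
    cols = np.arange(target_w) * w // target_w
    return image[rows][:, cols]

image = np.arange(6 * 2).reshape(6, 2)   # a hypothetical tall 6x2 image
print(resize(image, 3, 3).shape)  # (3, 3)
```

Every output pixel is drawn from somewhere in the original, so nothing is cropped away, but the tall 6×2 image has been squashed into a square.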

None

Selecting None disables image transformation, and each image is passed directly to the model.

In this case, models will infer the size of image features from the first example found in the dataset.

This method is not recommended, since you will have to make sure that all images are the same size. If an image of a different size is submitted to the model, either during training or to get a prediction, an error will occur.
