Model download

You can download your trained model in three formats to deploy it off-platform:

  • tf.savedmodel. Recommended format.
    The TensorFlow SavedModel format includes all the pre- and post-processing done by the platform, and is compatible with TF 2.5.x.

  • Model with Docker container definition
    The model as a file, together with all files needed to build and deploy a Docker image locally or within your Docker registry.

  • Keras .h5
    Legacy format that contains only the model blocks, without data pre- or post-processing.

How to download the model

Create a project and run an experiment.

When done, navigate to the Deployment view and click Export model.

Export model button
  1. Select Experiment to export, that is, the specific model from your project that you want to export.

  2. Select Checkpoint, that is, the number of training epochs the downloaded model will have been trained for. The checkpoint with the best model performance is indicated with (best) and is selected by default.

  3. Select the file format.
    We recommend using the tf.savedmodel format.

  4. Click Export to start the download.

tf.savedmodel

We recommend the TensorFlow SavedModel format since it gives you more flexibility when loading the model again and includes the operations performed by the platform.

tf.savedmodel is compatible with all platform blocks and includes the pre- and post-processing steps that you get if you deploy the model on-platform.

tf.savedmodel is compatible with TensorFlow 2.5.x.
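
As a minimal sketch, assuming the download has been unzipped into a local directory named saved_model (the directory path, input name, and input shape below are placeholders that depend on your experiment), the model can be loaded and run with TensorFlow 2.5.x roughly like this:

  import numpy as np
  import tensorflow as tf  # TensorFlow 2.5.x, per the compatibility note above

  # Load the exported SavedModel from the unzipped download.
  # "saved_model" is a placeholder path; point it at your export directory.
  model = tf.saved_model.load("saved_model")

  # The default serving signature exposes the model, including the
  # pre- and post-processing added by the platform.
  infer = model.signatures["serving_default"]

  # Inspect the expected input names and shapes for your experiment.
  print(infer.structured_input_signature)

  # Placeholder input name and shape; replace with what the signature reports.
  example = tf.constant(np.zeros((1, 224, 224, 3), dtype=np.float32))
  prediction = infer(input_1=example)
  print(prediction)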

Implementation guides - tf.savedmodel

We have created guides on how to deploy the tf.savedmodel in a few selected frameworks and platforms.

Model with Docker container definition

Use the Model with Docker container definition option to host your model within the portable Peltarion Prediction Server Docker image.

This option includes the model as a TensorFlow SavedModel file, together with extra files that let you build a Docker image which you can deploy anywhere.
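
As a minimal sketch, assuming you have built the image and are running the container locally, a client could query it over HTTP roughly as below. The port, endpoint path, and payload format are placeholders, not the actual Peltarion Prediction Server API; check the README bundled with the download for the real endpoints and request schema.

  import requests

  # Placeholder URL: the host port and endpoint path depend on how the
  # container was started and on the API described in the bundled README.
  url = "http://localhost:8080/predict"

  # Placeholder payload: feature names and encoding are specific to
  # your experiment; see the bundled documentation for the schema.
  payload = {"rows": [{"input_1": 0.5}]}

  response = requests.post(url, json=payload)
  response.raise_for_status()
  print(response.json())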

h5 format

An .h5 file that contains only the model blocks, without data pre- or post-processing. Not all platform blocks are supported by this format, for instance the Scaling block.

Currently, a Keras 2.1.6-tf compatible .h5 file is provided for running a forward pass. Make sure to set compile=False when loading the model in Keras. If import keras doesn't work, try from tensorflow import keras instead.

An example of how to load an .h5 model is explained in the Keras documentation.
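
As a minimal sketch, assuming the downloaded file is saved as model.h5 (the filename and input shape are placeholders), loading the model for a forward pass looks roughly like this:

  import numpy as np
  from tensorflow import keras  # use this if a plain `import keras` fails

  # Load the .h5 file for inference only; compile=False is required
  # because the file is exported for running forward passes.
  model = keras.models.load_model("model.h5", compile=False)

  # No platform pre-processing is included in the file, so normalize
  # the input yourself (see the limitations section below).
  example = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder shape
  prediction = model.predict(example)
  print(prediction)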

h5 limitations

This solution does not currently take pre- or post-processing into account, which means that any normalization or categorical pre-processing will not be part of this model. Currently, these operations and the metadata used to apply them are not exposed.

Therefore, if you rely on deploying .h5 files from the platform in your own systems, we recommend that you do all pre-processing, such as normalization, one-hot encoding, and reordering of image color channels, outside the platform.

Note: If your model uses the Scaling block, make sure to use the additional code provided here.
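
As a minimal sketch of doing this pre-processing yourself before calling the .h5 model (the normalization constants, category list, and image shape are placeholders; use the values that match how your dataset was pre-processed on the platform):

  import numpy as np

  # 1. Normalize a numeric feature (placeholder mean and standard deviation).
  value = 3.2
  normalized = (value - 1.5) / 0.7

  # 2. One-hot encode a categorical feature (placeholder category list).
  categories = ["cat", "dog", "horse"]
  one_hot = np.eye(len(categories))[categories.index("dog")]

  # 3. Reorder image color channels, e.g. BGR -> RGB, and scale to [0, 1].
  bgr_image = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder image
  rgb_image = bgr_image[..., ::-1].astype(np.float32) / 255.0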
