We are thrilled to bring you yet another product update this year. It’s time to expand your models’ horizons and deploy them into a wider range of scenarios. We’re excited to introduce the ability to download your models as containers with the new Peltarion Prediction Server feature. Enjoy!
Peltarion Prediction Server
How to download your model as a Docker container
Once your model has finished training and you’re happy with its performance, download and install Docker to create and deploy the image containing your model.
On the Deployment view, click on Export model.
Choose the Experiment that you want to export, making sure to select Model with Docker container definition.
Extract the downloaded zip, which contains the SavedModel file as well as extra files that define how your model can be deployed. Within the extracted folder, run the `docker build` command to generate the Docker image.
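Since the export ships with a Docker container definition, a standard `docker build` from inside the extracted folder is all that's needed. The image tag below is just an example; name it whatever you like:

```shell
# Run from inside the extracted folder (the one containing the Dockerfile).
# "my-model:latest" is an example tag, not a required name.
docker build -t my-model:latest .

# Confirm the image was created:
docker images my-model
```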
Note that if you want to get predictions from your deployed model, the API used by the containerized model is slightly different from the Deployment API used on the Peltarion Platform. You can read more about the differences on our Knowledge Center.
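As a sketch of what querying the container might look like, assuming it listens on port 8080 and accepts JSON over HTTP (the port, endpoint path, and payload shape below are illustrative assumptions, not the documented API; see the Knowledge Center article for the actual request format):

```shell
# Start the container locally, mapping the server port to the host.
# Port 8080 is an assumption; check the Knowledge Center for the real port.
docker run --rm -p 8080:8080 my-model:latest

# In another terminal, send a prediction request.
# The endpoint path and payload are placeholders for illustration only.
curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{"rows": [{"feature": 1.0}]}'
```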
Now that you’ve built your model as an image, go ahead and create your own application, share it with the community, and run it wherever you want, on-prem or in the cloud. Your very own platform-built models are now easily deployed and can be used in a wide range of setups by leveraging containers. Neat!
And of course, if you need any extra help, have a look at our Knowledge Center or send us a message and we’ll gladly help you with your projects.