
Peltarion Prediction Server

December 15, 2021 / 6 min read

We are thrilled to bring you yet another product update this year. It’s time to expand your models’ horizons and deploy them into a wider range of scenarios. We’re excited to introduce the ability to download your models as containers with the new Peltarion Prediction Server feature. Enjoy!

Introducing the Peltarion Prediction Server

From now on, our Pro and Enterprise users will be able to download their Peltarion platform-built models and containerize them for use with the beloved developer tool Docker.

Containers leverage the Linux kernel and its features, such as cgroups and namespaces, to isolate processes so that they run independently. With containers, you can run multiple processes and applications independently of one another, making full use of your infrastructure while retaining the security of separate systems.
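As a quick illustration of that process isolation (assuming Docker is installed and can pull the public `alpine` image), a container's own process list shows only what runs inside it, not the host's processes:

```shell
# Each container gets its own PID namespace: `ps` run inside the
# container sees only the container's processes, not the host's.
docker run --rm alpine ps aux
```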

Docker provides an image-based deployment model that makes it easy to share applications and their dependencies across multiple environments with all dependencies intact. Docker also automates the deployment of applications within this container environment, providing users with unprecedented access to apps, rapid deployment capabilities, and control over version distribution.

Now, Peltarion platform users with a Pro or Enterprise license can take their models to new heights, using the new Peltarion Prediction Server feature to build, share and run any app of their own making anywhere, on-prem or in the cloud.

How to download your model as a Docker container

Once your model has finished training and you’re happy with its performance, download and install Docker to create and deploy the image containing your model.
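Before continuing, it's worth confirming that Docker is installed correctly and that the daemon is running. A quick sanity check:

```shell
# Prints the installed Docker version
docker --version

# Queries the daemon; fails if Docker is installed but not running
docker info
```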

On the Deployment view, click on Export model.

Choose the Experiment that you want to export, making sure to select Model with Docker container definition.

Extract the downloaded zip, which contains the SavedModel file as well as extra files that define how your model can be deployed. From within the extracted folder, run the Docker build command to generate the image containing your model.
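The exact command is described in the downloaded files; a typical build and run might look like the sketch below, where the image name `my-peltarion-model` and port 8000 are placeholder assumptions, not values from the export itself:

```shell
# From inside the extracted folder (which contains the Docker
# container definition), build the image and tag it with a name
# of your choice:
docker build -t my-peltarion-model .

# Run the container, mapping a host port to the port the
# prediction server listens on (check the downloaded files for
# the actual port; 8000 here is an assumption):
docker run --rm -p 8000:8000 my-peltarion-model
```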

Note that if you want to get predictions from your deployed model, the API exposed by the containerized model is slightly different from the Deployment API used on the Peltarion Platform. You can read more about this on our Knowledge Center.
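As a rough sketch only, a request to a locally running container might look like this; the endpoint path, port and payload format here are hypothetical, so check the Knowledge Center for the actual API of the containerized model:

```shell
# Hypothetical prediction request to a container started with
# -p 8000:8000; the path and JSON shape are assumptions for
# illustration, not the documented API.
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"rows": [{"input": "your input value here"}]}'
```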

Now that you’ve built your model as an image, go ahead and create your own application, share it with the community and run it where you want, on-prem or in the cloud. Your very own platform-built models are now easy to deploy and can be used in a wide range of setups by leveraging containers. Neat!

And of course, if you need any extra help, have a look at our Knowledge Center or send us a message and we’ll gladly help you with your projects.

Happy containerizing! 

  • Björn Treje

    Head of Technical Enablement

    Björn has a Master of Science in Electrical Engineering. He strives to put engineering into the business and business into the engineer. Secretly, he hopes all projects involve helmets or reflective vests at some point.
