Deployment view

The Peltarion deployment solution lets you quickly test model prototypes directly in your services. It also provides the stability and scalability you need for a system that stays deployed over longer periods of time, with a reliable model for server-to-server integration.

The Deployment view allows you to quickly see which models are deployed and when they were deployed. A green checkmark indicates that the experiment is deployed, and the deployment date is shown in the Deployment info section.

A deployed model will be accessible through the Peltarion Deployment API for forward pass queries. You can request one or several predictions at a time, within the API limitations.

The API is called by sending an HTTP POST to the endpoint indicated by the URL in the interface. The request body must be either multipart/form-data encoded or JSON.
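
As a minimal sketch of such a request, the call can be made with Python's requests library. The URL, token, feature name, and body layout below are placeholders; the Deployment view shows the exact values and format to use.

    import requests

    URL = "https://a.peltarion.com/deployment/<deployment-id>/forward"  # placeholder
    TOKEN = "<your-deployment-token>"                                   # placeholder

    # "text_input" stands in for the Name of your deployment's input feature.
    payload = {"rows": [{"text_input": "A sample to get a prediction for"}]}

    response = requests.post(
        URL,
        headers={"Authorization": f"Bearer {TOKEN}"},  # the token authorizes the request
        json=payload,                                  # JSON-encoded request body
    )
    response.raise_for_status()
    print(response.json())  # predictions returned by the deployed model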

Deployment overview

Enable deployment for requests

You can control whether a deployment is enabled or disabled for requests by toggling the Enable switch.

The deployed model will not respond with predictions while the deployment is disabled.

A deployment can be enabled and disabled several times, and can be deleted when it’s not relevant anymore. Note that you have to disable the deployment before you can delete it.

Deployment enabling

Parameters

The parameter section gives a list of all the input and output features used by the deployed model.

When you submit a request to the deployed model, you have to send all the input features. The response will contain the predicted output feature for each submitted example.

The Name field is the name used for a feature when exchanging data via the API. You can update it to something convenient before enabling the deployment for the first time. Once the deployment has been enabled, you will need to duplicate the deployment to change the name.
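
For example, when an input feature is an image, the request can be sent as multipart/form-data, with the form field named after the feature's Name. This is a hedged sketch: the URL, token, and the feature Name "image" are placeholders, and the exact response layout should be checked against the code examples in the view.

    import requests

    URL = "https://a.peltarion.com/deployment/<deployment-id>/forward"  # placeholder
    TOKEN = "<your-deployment-token>"                                   # placeholder

    # The form field name must match the Name of the input feature; "image" here
    # is only an example. The prediction comes back under the output feature Name.
    with open("example.jpg", "rb") as image_file:
        response = requests.post(
            URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"image": image_file},  # multipart/form-data encoded body
        )

    response.raise_for_status()
    print(response.json())  # the predicted output feature for the submitted example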

API information

Deployment API

Together with code examples, you will find the URL and Token that you can use to send queries via the API.

  • The URL is the API endpoint where you submit samples.

  • The token is required for the deployment to respond with predictions.
    Since the token is considered a secret, it is not meant to be shipped in client code (such as JavaScript widgets, Android apps, and so on); see the sketch below.
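
To keep the token out of client code, queries are typically sent from your own server, with the URL and token loaded from server-side configuration. The environment variable names below are made up for the sketch, and the body layout is the same assumption as in the earlier examples.

    import os
    import requests

    # Read the secret token (and URL) from server-side configuration instead of
    # hardcoding them in client code. The variable names are only examples.
    URL = os.environ["DEPLOYMENT_URL"]
    TOKEN = os.environ["DEPLOYMENT_TOKEN"]

    def predict(sample: dict) -> dict:
        """Forward one sample to the deployment and return its prediction."""
        response = requests.post(
            URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"rows": [sample]},  # assumed body layout; see the view's code examples
        )
        response.raise_for_status()
        return response.json()["rows"][0]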

Using your deployment

As soon as your deployment is enabled, you can start requesting predictions. You can try a deployment easily by using one of our web apps for submitting images or text.

You can also use the Deployment API to integrate your deployment in your own program. If you are unfamiliar with REST APIs, check out our Python package Sidekick, which makes it easy to work with deployed models.
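
As a rough sketch of what a Sidekick-based integration can look like (the class and method names used here are assumptions; check the Sidekick README for the current API):

    import sidekick

    # Placeholder URL and token; copy the real ones from the Deployment view.
    client = sidekick.Deployment(
        url="https://a.peltarion.com/deployment/<deployment-id>/forward",
        token="<your-deployment-token>",
    )

    # "text_input" is a hypothetical feature name; use the Names listed under Parameters.
    predictions = client.predict_many([
        {"text_input": "first sample"},
        {"text_input": "second sample"},
    ])
    print(predictions)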