Test it on the Peltarion Platform
A platform to build and deploy deep learning projects.
Even if you’re not an AI superstar.
The Peltarion deployment solution lets you quickly test model prototypes directly in your services. It also provides the stability and scalability you need for a system that will stay deployed for longer periods of time, with a reliable model for server-to-server integration.
The Deployment view allows you to quickly see which models are deployed and when they were deployed. A green checkmark indicates that the experiment is deployed and the date is shown in the Deployment info section.
The API is called by sending an HTTP POST request to the endpoint indicated by the URL in the interface. The request body must be encoded as multipart/form-data or JSON.
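As a sketch, a JSON request could be built with Python's standard library along these lines. The URL, token, feature name, and payload shape below are placeholder assumptions; copy the real values from the Deployment view before sending anything.

```python
import json
import urllib.request

# Placeholder values: use the real URL and Token shown in the Deployment view.
url = "https://deploy.example.com/forward"
token = "your-deployment-token"

# Input feature names must match the Name fields in the parameter section.
payload = {"rows": [{"image": "<base64-encoded data>"}]}

request = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(request) would submit the request and return
# the model's predictions as a JSON response.
```

Calling `urllib.request.urlopen(request)` submits the sample; the same request can also be sent with any HTTP client, as long as the token header and content type are set.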
Enable deployment for requests
You can control whether a deployment accepts requests by toggling the Enable switch.
The deployed model will not respond with predictions while the deployment is disabled.
A deployment can be enabled and disabled several times, and can be deleted when it’s not relevant anymore. Note that you have to disable the deployment before you can delete it.
The parameter section gives a list of all the input and output features used by the deployed model.
When you submit a request to the deployed model, you have to send all the input features. The response will contain the predicted output feature for each submitted example.
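To illustrate the mapping between input features and predictions, here is a hypothetical request body and the matching response for a model with two input features and one output feature. The feature names, values, and exact response shape are assumptions, not the actual schema of any specific deployment.

```python
# Hypothetical input features: use the names listed in the parameter section.
# Each entry in "rows" is one example to predict on.
request_body = {
    "rows": [
        {"sepal_length": 5.1, "sepal_width": 3.5},
    ]
}

# The response contains one prediction per submitted example,
# keyed by the output feature's name.
example_response = {
    "rows": [
        {"species": "setosa"},
    ]
}
```

Note that every input feature must be present in each row; the response preserves the order of the submitted examples.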
The Name field is the name used for a feature when exchanging data via the API. You can change it to something convenient for you before enabling the deployment for the first time. Once the deployment has been enabled, the only way to change a name is to duplicate the deployment.
Together with code examples, you will find the URL and Token that you can use to send queries via the API.
The URL is the API endpoint where you submit samples.
The token authenticates your requests; without a valid token, the deployment will not respond with predictions.