A key part of making AI operational is providing an easy way to make the deep learning models you’ve built on the Peltarion Platform accessible for integration into your services.
The Peltarion deployment solution provides the means to quickly test model prototypes directly in your services. It also provides the stability and scalability you need for a system deployed over longer periods of time, with a reliable model for server-to-server integration.
A deployed model is accessible through a REST API for forward pass queries, either as single lookups or as batch lookups on a series of samples.
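As a rough illustration of what a batch lookup against such a REST API could look like, the sketch below builds the request body and headers for a forward pass query. Note that the endpoint URL, token, field names (`rows`), and feature names here are all assumptions for illustration; use the actual values shown for your deployment on the platform.

```python
import json

# Hypothetical values -- replace with the URL and token shown
# for your deployment on the Peltarion Platform.
DEPLOYMENT_URL = "https://example.peltarion.com/deployment/forward"
API_TOKEN = "your-deployment-token"


def build_batch_request(samples):
    """Build the JSON body and headers for a batch forward-pass query.

    `samples` is a list of dicts mapping input feature names to values.
    The "rows" wrapper key is an assumption for this sketch.
    """
    body = json.dumps({"rows": samples})
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    }
    return body, headers


# Two samples sent in a single batch lookup (feature name "x" is made up).
body, headers = build_batch_request([{"x": 1.0}, {"x": 2.0}])
```

The body and headers could then be sent with any HTTP client, for example `urllib.request.Request(DEPLOYMENT_URL, data=body.encode(), headers=headers, method="POST")`.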
Easy overview of deployed models
The Deployment view makes it possible to quickly see which models are deployed and when they were deployed. A green check mark indicates that the experiment is deployed, and the deployment date is shown in the Deployment info section.