Product update of the Peltarion Platform – November 2018

Each month, we share updates about our progress on the platform and plans for what’s next.

This sprint brings larger updates focused on usability, along with an updated look and feel of the Peltarion Platform, all aimed at making the user experience as productive as possible.

/ Usability

Tagging capability is now available for the experiment list on the modeling and experiment pages. An experiment list can quickly grow very long, containing a large number of experiment versions as a user iterates before landing on an optimal AI model. This can make it hard to navigate between experiments, find a specific one and keep track of differences. With the new functionality, it is now possible to add tags to an experiment and then filter by these keywords, e.g., “best,” “beta” and so forth.

Organization member management is now enabled for the platform administrator. Previously, an organization and its platform users had no quick way to control which members had access to the platform; member management was handled through Peltarion. Today, it is all in the hands of the organization’s platform administrator. Under Settings, the administrator can invite more collaborators, remove members of the organization and change their roles, making sure all users have the right capabilities. All collaborators can now get an overview of how many accounts have been purchased and how many are in use. Read more in the platform members view.

Look and feel of the platform updated, including color changes and “breadcrumbs” for easier navigation between pages. When logging into the platform, the landing page now gives the user essential information regarding the organization’s resource usage and product updates, and provides quick links to the knowledge center and to the organization’s projects.

/ Transparency

Transparent resource usage is now available. Users can easily get an overview of their quota plan, meaning the computing time (GPU hours) and data storage space purchased as part of the license, including how much has been used and how much computing time remains for the current month. GPU usage is calculated in real time while an experiment is training, and storage usage is updated as datasets are uploaded or deleted. GPU usage is reset at the beginning of every month, while data storage usage is calculated over all uploaded datasets.
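As a rough illustration of the quota arithmetic described above, here is a minimal sketch; the figures and variable names are made up for illustration and are not values or APIs from the platform:

```python
# Hypothetical quota overview: GPU hours accumulate during training and reset
# monthly, while storage is the sum over all currently uploaded datasets.

gpu_hours_in_plan = 50.0             # purchased per month (assumed figure)
gpu_hours_used_this_month = 12.5     # accumulated in real time during training

storage_gb_in_plan = 100.0           # purchased storage (assumed figure)
dataset_sizes_gb = [4.2, 18.0, 0.7]  # all currently uploaded datasets

gpu_hours_left = gpu_hours_in_plan - gpu_hours_used_this_month
storage_gb_used = sum(dataset_sizes_gb)

print(f"GPU hours left this month: {gpu_hours_left:.1f}")
print(f"Storage used: {storage_gb_used:.1f} GB of {storage_gb_in_plan:.1f} GB")
```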

/ Fast Deployment

New deployment solution released with persistent deployment availability. In beta mode, the deployment endpoint was automatically created for any experiment, and a deployed service with a unique access token was available for 48 hours only. With the new solution, the user can determine the availability of the deployment service for each experiment. The user can choose which experiment checkpoint should be deployed and define the names of the input and output parameters as appropriate. Most importantly, the user has control over enabling the deployed service: when the deployment is enabled, other services can send requests to and receive responses from the model. When needed, the user can disable serving of a specific model deployment temporarily or permanently. For those curious, the platform’s operational AI tutorial comes with a simple web app that lets users build, deploy and see the new persistent deployment capability in action.
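To make the request/response flow concrete, here is a minimal client-side sketch, assuming the deployment exposes an HTTP endpoint that accepts JSON and authenticates with the access token. The URL, token and the "image"/"class" parameter names are placeholder assumptions, not the platform’s actual API:

```python
# Hypothetical client for a deployed model endpoint. Parameter names must
# match the input/output names the user defined for the deployment.
import requests

DEPLOYMENT_URL = "https://example.com/deployments/my-model"  # placeholder endpoint
ACCESS_TOKEN = "my-secret-token"                             # placeholder token


def query_model(payload: dict) -> dict:
    """Send one request to the deployed model and return its JSON response."""
    response = requests.post(
        DEPLOYMENT_URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()  # raises if the deployment is disabled or unreachable
    return response.json()


if __name__ == "__main__":
    # "image" stands in for whatever input parameter name was chosen at deployment time.
    prediction = query_model({"image": "https://example.com/cat.png"})
    print(prediction)  # e.g. {"class": "cat"} if "class" is the output parameter name
```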

These were the main updates on the Peltarion Platform for the past sprint – and a lot more is in the pipeline. We’ll continue to share updates on the progress of the Peltarion Platform every few weeks.

Ele-Kaja Gildemann
Product Owner

About

Ele-Kaja Gildemann is a Product Owner at Peltarion. She has a degree in computer science from Tallinn University of Technology and more than 15 years of experience in sectors as diverse as digital services, telecom and retail. She is passionate about data-driven product development, user experience and machine learning.
