Each month, we share updates about our progress on the platform and plans for what’s next.
This sprint, updates to the Peltarion Platform have largely focused on usability. Usability is the product team’s key focus, and we want to ensure the user experience is as smooth and seamless as possible.
Major redesign of the Evaluation page. The redesigned page adds context-specific accuracy metrics for classification problems and additional error metrics for regression problems. Previously, the Evaluation page was organized around the concept of an analysis, with the view grouped by dataset and metric.
With the redesign, the Experiment info column shows the state of each experiment, e.g., whether it’s ready to start training or has finished. The loss metric is also available in this view, and the user can follow its progress in real time.
The line graph on the Evaluation page displays model performance for all listed experiments using the same loss, accuracy, or error metric, enabling direct comparisons between experiments. The user can also choose to display the loss of each experiment on the graph and watch how it converges. Checkpoints have been added to the line graph; hovering over a checkpoint shows performance information at that point in training, giving the user an immediate sense of how many correct predictions versus mismatches the model is producing.
Adjustment of the x- and y-scale of the line graph. The user can choose to present the training overview by Epochs, Wall time, or Relative time on the x-axis, and on a Linear or Log scale on the y-axis. The user can also zoom in or out of the experiment using the new zoom buttons.
Line of identity added to the scatter plot when working on a regression problem. The line of identity (y = x) shows where a perfect prediction would land, so the more tightly the data points group around the line — i.e., the closer their residuals are to zero — the better the model.
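To see why points near the line of identity indicate a good fit, here is a minimal standalone sketch (not the platform’s implementation, and using made-up predictions) that measures how far predicted values stray from the line y = x:

```python
import numpy as np

def identity_line_deviation(actual, predicted):
    """Mean absolute distance of (actual, predicted) points from the
    line of identity y = x; 0 means every prediction is perfect."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    # Residuals relative to the line of identity: predicted - actual.
    # A well-fitted model keeps these residuals grouped around zero.
    return float(np.mean(np.abs(predicted - actual)))

# Hypothetical regression targets and two sets of predictions.
actual = [1.0, 2.0, 3.0, 4.0]
good = [1.1, 1.9, 3.2, 3.8]   # points hug the identity line
bad = [2.0, 0.5, 5.0, 2.0]    # points scatter far from it

print(identity_line_deviation(actual, good))  # small deviation
print(identity_line_deviation(actual, bad))   # large deviation
```

The smaller the deviation, the more tightly the scatter plot’s points cluster around the line of identity.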
Precision and recall in the confusion matrix. When solving a classification problem, a confusion matrix is used to evaluate the model: it compares what the model should have predicted with what it actually predicted. Previously, the matrix was presented using Count cells; it can now be presented using Percentage cells as well. Precision and recall percentages make it easier to get an overview of the model’s performance.
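Precision and recall can be read directly off the confusion matrix’s count cells. As an illustrative sketch for the binary case (hypothetical counts, not the platform’s code):

```python
def precision_recall(tp, fp, fn):
    """Compute precision and recall from confusion-matrix counts.

    precision = TP / (TP + FP): of everything predicted positive,
                the fraction that was actually positive.
    recall    = TP / (TP + FN): of everything actually positive,
                the fraction the model found.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical matrix: 40 true positives, 10 false positives,
# 20 false negatives.
p, r = precision_recall(tp=40, fp=10, fn=20)
print(p)  # 40 / 50 = 0.8
print(r)  # 40 / 60 ≈ 0.667
```

High precision with low recall means the model is cautious but misses positives; the reverse means it over-predicts the positive class — the percentage cells make this trade-off visible at a glance.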
The search and filter function introduced to the Modelling page during the last sprint is now also available on the Evaluation and Deployment pages.
Introducing quick links for smoothly switching between views. Quick links are found next to each experiment in the experiment lists and allow the user to duplicate, rename, or delete an experiment. They also let the user jump directly between the Modelling, Evaluation, and Deployment views without leaving the experiment they’re working on.
These were the major updates and improvements added across the latest sprint. We’ll continue to share updates on the progress of the Peltarion Platform every few weeks – so stay tuned!