Product update – September 2018

Since our beta launch in May, things have been moving forward quickly, and many new features have been added weekly.

Each month, we’ll share updates about our progress on the platform and plans for what’s next. Let’s start with some of the enhancements introduced to the platform across the last few sprints.

/ Usability

Usability is a key focus of the product team, and typically there will be a set of enhancements every quarter centered on usability.

The time it takes from uploading a dataset to when it’s available for training on the platform has been significantly reduced. Uploaded data needs to be preprocessed, both so that statistics can be calculated and so that the data can be used to train a model. A dataset that previously took ~15 minutes to upload now takes a couple of seconds.
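To make the speed-up concrete: the preprocessing step essentially has to touch every value once, both to accumulate statistics and to convert the data into a training-ready format. Here is a minimal sketch of such a single pass, with entirely hypothetical names (the platform’s actual pipeline is not public):

```python
import numpy as np

def preprocess_upload(batches):
    """One pass over uploaded data: accumulate summary statistics while
    converting each batch into a training-ready array (illustrative only)."""
    count, total, total_sq = 0, 0.0, 0.0
    processed = []
    for batch in batches:               # each batch: a 1-D numpy array
        count += batch.size
        total += batch.sum()
        total_sq += np.square(batch, dtype=np.float64).sum()
        processed.append(batch.astype(np.float32))  # training-ready dtype
    mean = total / count
    std = np.sqrt(total_sq / count - mean ** 2)
    return np.concatenate(processed), {"mean": mean, "std": std}
```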

Improved user experience when statistics are being calculated in Datasets. This currently covers histograms, with other statistics to follow. While the data is being processed, statistics are calculated, presented and updated in real time, giving the user early visibility of results throughout the processing of the data. Previously, statistics and histograms were only visible once the dataset version was saved.
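Real-time statistics of this kind are typically computed incrementally: each processed batch updates a running aggregate, and the partial result is pushed to the UI. A rough sketch of the idea for histograms, using made-up names rather than anything from the platform:

```python
import numpy as np

def streaming_histogram(batches, lo, hi, n_bins=20):
    """Yield a partial histogram after each batch, so the UI can render
    and refresh it while the dataset is still being processed."""
    edges = np.linspace(lo, hi, n_bins + 1)
    counts = np.zeros(n_bins, dtype=np.int64)
    for batch in batches:
        counts += np.histogram(batch, bins=edges)[0]
        yield edges, counts.copy()      # partial result for live display
```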

Search and filter capability for the experiment list. Previously, experiments were organized in a folder structure. This has been replaced with a search function, which makes it much easier to find and access the experiment you need. The search is currently free text only: you can search for all experiments created by a certain user, for example, as well as by experiment status, loss function and experiment name.
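Free-text search over a handful of fields is straightforward to sketch. The field names below are assumptions for illustration, not the platform’s data model:

```python
def search_experiments(experiments, query):
    """Free-text search: an experiment matches if every term in the
    query appears in at least one of its searchable fields."""
    terms = query.lower().split()

    def matches(exp):
        haystack = " ".join(
            str(exp.get(field, "")).lower()
            for field in ("name", "user", "status", "loss_function")
        )
        return all(term in haystack for term in terms)

    return [exp for exp in experiments if matches(exp)]

# e.g. all completed experiments by alice that use cross-entropy:
# search_experiments(experiments, "alice completed crossentropy")
```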

Heuristics for improved dataset selection. Previously, when creating a new model, the user needed to fill in all sections in Dataset options. These are now pre-selected according to the latest user-saved dataset version.
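The heuristic itself is simple enough to sketch in a few lines; the field names here are hypothetical:

```python
def default_dataset_options(saved_versions):
    """Pre-select Dataset options from the most recently saved
    dataset version, or fall back to empty defaults."""
    if not saved_versions:
        return {}
    latest = max(saved_versions, key=lambda v: v["saved_at"])
    return dict(latest["options"])   # copy, so edits don't mutate history
```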

/ Flexibility

Support for advanced optimizer parameters in the experiment settings panel, allowing you to control and tune the optimizer to a much greater extent. In the Compiler options of the model builder, under the Optimizer function, you can now select Advanced options, which exposes a range of new optimizer parameters, from Decay to Apply Amsgrad. Previously, the only parameter available here was Learning rate, with a standard value of 0.001. But different optimizer methods call for different learning rates, so it simply isn’t appropriate to apply the same value across all of them. Users can now set a suitable learning rate for each optimizer method.
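To illustrate why a single default cannot fit every method, here is how the same parameters look in Keras, which exposes decay and AMSGrad on its optimizers in a similar way. This is an analogy, not a description of the platform’s implementation:

```python
from keras import optimizers

# Adam: the conventional default learning rate is 0.001; Decay and
# Apply Amsgrad map onto the decay and amsgrad arguments.
adam = optimizers.Adam(lr=0.001, decay=1e-6, amsgrad=True)

# Other methods ship with very different defaults, which is why one
# learning rate cannot sensibly be shared across all of them:
sgd = optimizers.SGD(lr=0.01)           # Keras default for SGD
adadelta = optimizers.Adadelta(lr=1.0)  # Keras default for Adadelta
```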

These were the major updates and improvements added across recent sprints. We’ll continue to share updates on the progress of the Peltarion Platform every few weeks – so stay tuned!

Daniel Skantze
Head of Engineering

About

Daniel Skantze has been programming for over 30 years. He has a master’s degree in Cognitive Science from Linköping University and has driven tech development in award-winning innovation projects for Samsung and Absolut Vodka, among others. He is as passionate about building teams as he is about innovating and developing new products.
