
Platform features - Deep dive

April 23, 2020 · 10 min read
  • Reynaldo Boulogne

A detailed overview of the platform and some of its most important features.

Has this ever happened to you? You visit a website, read through the content on the homepage and maybe even some of the subpages, and after all that you still have no idea what the product does.

We get it, it annoys us too :-)

But we also know that website real estate is scarce and that it’s hard to craft a short message that explains everything clearly to everyone. With that in mind, we decided to write this blog post to give you a more detailed overview of the platform and some of its most important features.

Let’s dive right in!

Platform overview

The Peltarion platform is an end-to-end (or ‘all-in-one’) deep learning development environment: it allows you to upload your data, build, tweak and deploy your models, and keep the full project history safe.

It combines:

Dataset exploration

Upload, visualize and track your datasets automatically on the platform. 

  • Data API - Programmatically upload your data into the platform
  • Data Library - Over 30 deep learning datasets ready for you to explore
  • Inspect and edit datasets - Visualize information for each feature in your dataset instantly

Model development

Build, configure, train and evaluate deep learning models faster, and take advantage of our accessible deep learning features:

  • Snippets - Get started quickly with our prebuilt models.
  • Experiment creation wizard - Get automatic suggestions of which snippet to use.
  • Pre-trained models - Reduce the time, cost and skills needed to get started with large models.
  • No-code environment - Skip learning data science libraries and avoid tedious, time-consuming non-deep-learning tasks.

Prediction serving

Deploy your trained models directly from the platform straight into your application. Quickly test your models in production or run them continuously with a stable and scalable server-to-server integration.

  • REST API deployment - Integrate model training and model predictions into your own applications and automate interactions with the platform.

DevOps for Deep Learning 

Track and manage everything related to your projects from end to end, and enjoy the ability to reproduce results, reuse previous work and introduce better governance (if you need that) in your work environment.

  • Version control - Track data, models, hyperparameters, experiment results and deployments across projects and experiments.
  • Experiment management - Iterate faster and keep track of the best models you’ve created, the data you train and experiment with, the hyperparameters you used, and the associated tradeoffs.
  • Project history - Keep track of everything related to your deep learning project in one place. Nothing is ever lost, enabling reproducibility, reusability, collaboration and governance.


Managed infrastructure

Get access to the storage, computation and deployment resources you need. No configuration or management work required!

  • Train on GPUs - Your model development workspace is GPU enabled, always.
  • Run inferences on CPUs - Run inferences on CPU-backed machines instantly.

We hope this gave you a better understanding of what the platform does, but why not give it a try and get a feel for all these features yourself? We have a free tier, so you don’t need to commit to anything beforehand.

If you need some inspiration to get started, check out our deep learning tutorials.

And if you’re interested in even more details about the platform features, keep on reading. There is an additional section below which goes into more detail on each of the features and provides tons of links for you to explore.

Happy reading!

Curious to know what it would take to build your own end-to-end deep learning pipeline like the one the Peltarion platform offers? We recommend you have a look at this link:

Interested in understanding how we compare to other companies with similar offerings?

Platform features - In depth

Dataset exploration

Data API

When we first launched the platform you could only upload data to it via files and… well, we know that’s not the best way of doing that.

Now it’s possible to programmatically upload your data into the platform with the Data API. The Data API lets you send data to the platform right from your preprocessing code. Meaning, no more selecting and uploading files manually on the platform.
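As a rough sketch of what an upload from preprocessing code can look like (the endpoint URL, token and field names below are illustrative placeholders, not the real Data API — see its reference docs for the actual calls):

```python
import csv
import io

def build_csv_payload(rows, fieldnames):
    """Serialize preprocessed records to an in-memory CSV file,
    ready to be attached to an HTTP upload request."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

rows = [
    {"image_path": "img_001.png", "label": "cat"},
    {"image_path": "img_002.png", "label": "dog"},
]
payload = build_csv_payload(rows, ["image_path", "label"])

# With the payload in hand, the upload itself is a plain HTTP POST,
# e.g. with `requests` (URL and token are placeholders):
#
#   requests.post(
#       "https://platform.example.com/data-api/v1/datasets",
#       headers={"Authorization": "Bearer <YOUR-TOKEN>"},
#       files={"data": ("train.csv", payload, "text/csv")},
#   )
```

The point is that the upload becomes just another step at the end of your preprocessing script, instead of a manual click in the browser.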

Want to read more about our Data API? Here are some links for you to explore:

Data Library

Finding useful deep learning datasets is still a hassle (although definitely not impossible) and even when you find one, it has to be pre-processed before you can even get to the actual modelling part.

To help cut out some of that tedious work, we have introduced our Data Library, which holds over 30 useful datasets applicable to different deep learning problems, ready for you to explore and start practicing your deep learning skills with.

Inspect and edit datasets

Stating the obvious here: if you’re working with data you will want to visualize it and some of its characteristics to get a better understanding of what you’re going to be working with.

The platform will automatically do that for you, displaying information for each feature in your dataset, like its shape, the distribution of values, a preview of the values for the first examples, etc.

It will also allow you to select the desired encoding and normalization preference for each feature, as well as perform some basic transformations like resizing and cropping.
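To make the per-feature statistics and normalization concrete, here is a small plain-Python illustration (not platform code) of the kind of information involved, using z-score normalization as the example:

```python
import statistics

def summarize(values):
    """Basic per-feature statistics, similar in spirit to what a
    dataset view displays for a numeric feature."""
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "stdev": statistics.pstdev(values),
        "min": min(values),
        "max": max(values),
    }

def standardize(values):
    """Z-score normalization: shift and scale a feature so it has
    zero mean and unit variance."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [(v - mean) / stdev for v in values]

feature = [2.0, 4.0, 6.0, 8.0]
stats = summarize(feature)        # mean 5.0, population stdev ~2.236
normalized = standardize(feature)
```

On the platform these steps happen behind the scenes once you pick a normalization preference for a feature.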

Want to read more about dataset inspections? Here are some links for you to explore:

Accessible deep learning


Snippets

Many of the most powerful neural networks have very large architectures (e.g., the ResNet-152 network has, you guessed it, 152 layers in total), which can make them tedious to build and daunting to start working with.

To help you get started, we’ve prebuilt over 30 of the most popular networks inside the Peltarion Platform. Using snippets can save you a lot of time, which you can spend exploring and experimenting with different architectures instead.

Experiment creation wizard 

Having snippets (prebuilt models) on the platform is definitely a great help, but choosing which one to use for your problem can still be a daunting task.

To help you with this, we’ve developed the Experiment creation wizard. After saving your dataset, the wizard offers the most suitable neural network template based on the input data and the type of problem you’re trying to solve.
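The wizard’s actual rules live inside the platform, but conceptually the suggestion works along these lines (the rule table and snippet names below are purely illustrative):

```python
def suggest_snippet(input_type, problem_type):
    """Toy rule table mapping dataset characteristics to a model
    template, illustrating the kind of suggestion a creation
    wizard can make from the saved dataset."""
    rules = {
        ("image", "single-label classification"): "CNN snippet (e.g. a ResNet variant)",
        ("image", "regression"): "CNN snippet with linear output",
        ("text", "single-label classification"): "Text classification snippet",
        ("tabular", "regression"): "Dense feed-forward snippet",
    }
    # Fall back to a generic dense network when no rule matches.
    return rules.get((input_type, problem_type), "Dense feed-forward snippet")

suggestion = suggest_snippet("image", "single-label classification")
```

The useful property is that the suggestion is derived from the dataset you already saved, so you don’t have to know the model zoo by heart before starting.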

Want to read more about our wizard? Here is a link for you to explore:

Pre-trained models 

It’s no secret that building well-performing neural networks is a complex task that requires specific skills, knowledge, (computing) resources and lots of data. 

To make our users’ lives just a bit easier, we have pre-trained models on the platform, which are… well, exactly what it says on the tin: models that have already been trained on a lot of data. These models have learned a lot during pre-training, which means they can be used as a starting point for problems where you don’t have much data, in order to get better results.
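To make the idea concrete, here is a toy transfer-learning sketch in plain NumPy (not platform code): the “pre-trained” feature extractor below is just a frozen random projection standing in for the learned layers of a real network, and only a small linear head is fitted on the new task:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned during large-scale pre-training;
# we keep them frozen and only train a new "head" on our small dataset.
W_pretrained = rng.normal(size=(3, 8))

def extract_features(x):
    """Frozen pre-trained feature extractor (a random projection with a
    tanh nonlinearity, standing in for real learned layers)."""
    return np.tanh(x @ W_pretrained)

# Tiny task-specific dataset: 20 examples with 3 raw inputs each.
X = rng.normal(size=(20, 3))
y = X[:, 0] - 2.0 * X[:, 1]          # some target we want to predict

# Train only the head (a linear layer) on top of the frozen features.
features = extract_features(X)
head, *_ = np.linalg.lstsq(features, y, rcond=None)

predictions = extract_features(X) @ head
```

With the feature extractor fixed, only 8 head parameters are fitted, which is why this approach works even when task-specific data is scarce.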

Want to read more about our pre-trained models? Here are some links for you to explore:

No-code environment

Ok, we know this is a controversial point. Some of you will be skeptical about whether this is a good thing or not and we get that. There are definitely pros and cons to having a no-code environment.

But our goal with the platform is to make deep learning accessible to as many people as possible and with that in mind, the no-code environment is our way of helping our users by: 

  • removing the need to learn multiple data science libraries* before you can get started
  • removing the tedious and time-consuming tasks that come with coding but which are not related to deep learning.

* Curious to see how many tools you would need to learn if you weren’t using the platform?

Want to read more about what a no-code environment enables you to do? Here are a few links for you to explore:

Ready-made evaluation tools

Training a model is not possible unless you can evaluate its performance. To make this easy, the platform offers ready-made visualizations and metrics that allow you to easily analyze and compare the performance of your models.

The platform adapts the visualizations and metrics according to the problem you’re trying to solve.
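As a small illustration of what “adapting the metrics to the problem” means (plain Python, not platform code): a classification task calls for something like accuracy, while a regression task calls for an error measure such as mean absolute error:

```python
def evaluate(problem_type, y_true, y_pred):
    """Pick a metric suited to the problem type, mirroring the idea of
    choosing visualizations and metrics per problem."""
    if problem_type == "classification":
        correct = sum(t == p for t, p in zip(y_true, y_pred))
        return {"accuracy": correct / len(y_true)}
    if problem_type == "regression":
        mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
        return {"mae": mae}
    raise ValueError(f"unknown problem type: {problem_type}")

clf = evaluate("classification", ["cat", "dog", "cat"], ["cat", "dog", "dog"])
reg = evaluate("regression", [1.0, 2.0], [1.5, 1.5])
```

The platform goes further than this, of course, with visualizations like confusion matrices and loss curves, but the principle is the same: the right tool for the problem type.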

Want to read more about our evaluation tools? Here are some links for you to explore:

Prediction serving

REST API deployment

Deploy your trained models directly from the platform straight into your application via a REST API.

You can control whether a deployment (i.e. a trained model) is enabled for requests or disabled: just toggle the Enable switch. It’s that easy to put a model into production!
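As a hedged sketch of what calling a deployed model looks like from application code (the URL, token and payload keys below are placeholders — the real values come from your deployment’s page on the platform):

```python
import json

def build_prediction_request(deployment_url, token, features):
    """Assemble the pieces of a prediction request against a deployed
    model's REST endpoint. URL, token and payload keys are
    placeholders, not the platform's actual API schema."""
    return {
        "url": deployment_url,
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"rows": [features]}),
    }

request = build_prediction_request(
    "https://platform.example.com/deployment/<ID>/forward",
    "<YOUR-TOKEN>",
    {"sepal_length": 5.1, "sepal_width": 3.5},
)

# Sending it is then one call with your HTTP client of choice, e.g.:
#   requests.post(request["url"], headers=request["headers"], data=request["body"])
```

If the deployment’s Enable switch is off, such a request is simply rejected, which is what makes toggling a model in and out of production so painless.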

Want to read more about our REST API deployment? Here are some links for you to explore:

DevOps for Deep Learning 

Version control

Keeping track of the different stages, changes and configurations of your deep learning model throughout its development lifecycle can quickly get overwhelming. To do this well, you would need to keep a record of data, models, hyperparameters, experiment results and deployments across multiple projects and experiments.

The platform makes this process really simple as it automatically versions everything and records every aspect of your projects and experiments, so that you can always keep track of who changed what, when and why.
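Conceptually, each change is appended to a versioned history with the who/what/when attached — something like this toy sketch (illustrative only, not how the platform stores things internally):

```python
from datetime import datetime, timezone

def record_version(history, entity, change, author):
    """Append an immutable version entry, illustrating the kind of
    who/what/when record that automatic versioning keeps."""
    entry = {
        "version": len(history) + 1,
        "entity": entity,          # e.g. "dataset", "model", "deployment"
        "change": change,
        "author": author,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    history.append(entry)
    return entry

history = []
record_version(history, "dataset", "uploaded v1 of training data", "ana")
record_version(history, "model", "changed learning rate to 0.001", "ben")
```

The difference on the platform is that you never have to write or maintain this bookkeeping yourself — it happens automatically for every change.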

Want to read more about version control on the platform? Here is a link for you to explore:

Experiment management

Easily creating, running and managing multiple experiments is one of the features that makes the platform such an amazing tool.

Each experiment contains a model, its hyperparameter settings, its training settings and its training results. It also records 1) the dataset (version) that was used to train the model and 2) the deployment module that the trained model was used in.

It’s a powerful artifact inside the platform that allows you to easily keep track of everything related to a trained model. You can run as many experiments as your quota allows, and thereby easily iterate on your models, compare them and keep track of the tradeoffs of your choices in each experiment.
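Comparing experiments then reduces to ranking them by a tracked metric, along these lines (the experiment records and metric name below are illustrative):

```python
experiments = [
    {"id": "exp-1", "learning_rate": 0.01, "val_accuracy": 0.87},
    {"id": "exp-2", "learning_rate": 0.001, "val_accuracy": 0.91},
    {"id": "exp-3", "learning_rate": 0.0001, "val_accuracy": 0.89},
]

def best_experiment(experiments, metric):
    """Rank tracked experiments by a validation metric -- the kind of
    comparison an experiment overview makes effortless."""
    return max(experiments, key=lambda e: e[metric])

best = best_experiment(experiments, "val_accuracy")
```

Because every experiment carries its hyperparameters alongside its results, the comparison also tells you which choices led to the best model, not just which model won.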

Want to read more about experiment management on the platform? Here are some links for you to explore:

Project history

A project is the main artifact on the platform that holds the dataset (versions), experiments and deployments that were used for a specific application or solution that you were working on.

Why is this a feature you ask? Well because since everything is automatically version controlled, a project effectively allows you to have the complete history of everything that went into the development of the deep learning solution you worked on.

This essentially guarantees that you will always be able to reproduce results, reuse previous work and introduce better governance (if you need that) in your work environment. 

Achieving this tightly integrated tracking of data, experiments, models, settings, results, deployments, etc. outside of the platform is an extremely difficult task. It would not only require using multiple tools, but also building and managing an overarching framework that ties them all together. To get a glimpse of the monumental effort this would take, we recommend you have a look at this link:


Managed infrastructure

Train on GPUs

Deep learning models are notoriously compute hungry. If you have prior experience, you know that they can quickly eat up your available compute power, even if you have a GPU. (This becomes especially problematic when you start training multiple models, e.g. running multiple experiments, at the same time.)

To spare you the headache of finding, setting up, managing and paying for this resource separately, the platform provides GPUs for training your models as part of your account. You don’t need to configure or set up any hardware. Everything is managed for you.

Run inferences on CPUs

Deploying trained models requires more than just a web server: it also requires stable, reliable and scalable hardware to run inferences on. Luckily, the platform gives you all the computing power your model or application needs to consistently compute predictions.

The best part? You don’t need to configure or set up any hardware. Everything is managed for you.

Reynaldo Boulogne

    With over 15 years of experience, Reynaldo has worked within the intersection of business and technology across multiple sectors, most recently at Klarna and Spotify. He is passionate about innovation, leadership, and building things from scratch. Reynaldo is also a former Vice-chairman of the Stockholm based AI forum, Stockholm AI.