Deploy an operational AI model

How to solve a classification problem

Zero prior AI knowledge is required. We’ll simply turn the image into a long list of numbers and feed it through a deep neural network to find out what’s in that image. In our opinion, this is the simplest possible (and surprisingly powerful) starting point for deep learning.

- Target audience: Beginners
- Estimated Time: 20 minutes

You will learn to
- Build an AI model quickly and easily.
- Use your AI model with a real-world AI web service.
- Solve a single-label image classification problem, that is, predict what an image shows.

Create a project

First, navigate to the Projects view.
Click New project to create a project, and give it a name that tells you what kind of project it is.

A project combines all of the steps in solving a problem, from the pre-processing of datasets to model building, evaluation, and deployment. Using projects makes it easy to collaborate with others.

Add the MNIST dataset to the platform

After creating the project, you will be taken to the Datasets view, where you can import data.

There are several ways to import your data to the platform. This time we will use the Data library that is packed with free-to-use datasets, so click the Import free datasets button.

Look for the MNIST - tutorial data dataset in the list. Click on it to get more information.

The MNIST dataset
The original MNIST dataset consists of small, 28 x 28 pixel images of handwritten numbers that are annotated with a label indicating the correct number.

Figure 1. Examples from the MNIST dataset

The MNIST dataset we’re using in this tutorial consists of 3-channel RGB pictures, so that you can use the deployed experiment with photos taken on a phone.

Note: For a model to be usable, the input data needs to be of the same type as the data the model was trained on. In this case, the picture that depicts a number needs to be in RGB (a standard digital image format). This is true for every AI model: it can’t predict apples when it has been trained on oranges.
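To make this concrete, here is a minimal sketch (using NumPy, outside the platform) of how a single-channel grayscale digit could be expanded into the 3-channel RGB shape the model expects. The array here is random placeholder data, not a real MNIST sample:

```python
import numpy as np

# Hypothetical 28 x 28 grayscale image (values 0-255), standing in for a digit.
gray = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Repeat the single channel three times to get an RGB-shaped (28, 28, 3) array,
# matching the 3-channel input format the model was trained on.
rgb = np.stack([gray, gray, gray], axis=-1)

print(rgb.shape)  # (28, 28, 3)
```

The model can then accept the image, since its shape matches the training data.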

Click Accept and import (by doing so, you agree to the license).

This will import the dataset into your project, and you can now edit it.

Save the dataset

Keep the default values shown in the Datasets view; they are already set up for this project.
Later, you can change the subset split, feature encoding, and more. If you want to dig deeper into everything you see in this view, navigate to the Datasets view articles in the Knowledge center.

Click Create an experiment to open the Experiment wizard.

Design a deep learning network with the wizard

The Experiment wizard makes it really easy for you to set up an experiment. Let’s take a look and make sure that all presets are correct:

• Dataset tab
The MNIST dataset is selected.

• Inputs / target tab

• Image as Input feature, since we want to classify images.

• Number as Target feature, since we want to know what number an image depicts.

• Problem type tab
Select Single-label image classification as problem type.

• Click Create.

Run experiment

The experiment is now set up and ready to be trained. All settings have been pre-populated by the platform, for example:

• Batch size in the Settings tab. Determines how many samples are processed at the same time.

• Learning rate in the Settings tab. The size of the update steps along the gradient.

• Loss function in the Target block. The loss is a number that measures how well the model performs.
If the model’s predictions are totally wrong, the loss will be high. If they’re pretty good, it will be close to zero.

By default, we’ve also selected Run until validation loss stops improving in the Settings tab. This means that training stops automatically when the validation loss no longer improves, which is a great way to make sure you don’t train for too long.
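As a rough illustration of what these settings control, here is a toy gradient-descent loop in plain NumPy. It is not the platform’s implementation, just a sketch on a made-up linear model: samples are processed in mini-batches, each update step is scaled by the learning rate, and training stops once the validation loss stops improving:

```python
import numpy as np

# Toy dataset: a linear relationship with a little noise (placeholder data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

batch_size = 32       # how many samples are processed per update step
learning_rate = 0.05  # size of each update step along the gradient
w = np.zeros(4)

best_val_loss, patience, bad_epochs = np.inf, 3, 0
for epoch in range(100):
    for i in range(0, len(X_train), batch_size):
        xb, yb = X_train[i:i + batch_size], y_train[i:i + batch_size]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # gradient of mean squared error
        w -= learning_rate * grad                  # step scaled by the learning rate
    val_loss = np.mean((X_val @ w - y_val) ** 2)
    # "Run until validation loss stops improving": stop after `patience` bad epochs.
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```

A larger learning rate takes bigger steps (faster but riskier); a larger batch size averages the gradient over more samples per step.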

The next step is simply to start the training, so let’s do that!
Click Run in the top right corner.

Analyze experiment

In the Evaluation view, you can see how the loss gets lower for each epoch (an epoch is one complete pass of the training set through the model).

Loss graph

Figure 2. Model evaluation view — Training overview

The loss indicates the magnitude of error your model made on its prediction. It’s a method of evaluating how well your algorithm models your dataset.

If your predictions are totally off, your loss function will output a higher number. If they’re pretty good, it’ll output a lower one. Is the loss low enough?

Yes, this is good to go.
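For single-label classification, a common choice of loss function (and a reasonable assumption for this problem type, though the source doesn’t name it) is cross-entropy. A quick sketch of why wrong predictions give a high loss and good ones a loss near zero:

```python
import numpy as np

def cross_entropy(probs, true_class):
    # Loss for one prediction: -log of the probability assigned to the true class.
    return -np.log(probs[true_class])

# Confident, correct prediction: 90% probability on the true class "3".
good = np.array([0.01, 0.01, 0.02, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01])
# Confident, wrong prediction: only 1% probability on the true class "3".
bad = np.array([0.90, 0.01, 0.02, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01])

print(cross_entropy(good, 3))  # ~0.1, close to zero
print(cross_entropy(bad, 3))   # ~4.6, a high loss
```

Training pushes the loss down by shifting probability mass toward the correct classes.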

Confusion matrix

Figure 3. Model evaluation view — Confusion matrix

You can also get information from the confusion matrix. You’ll find it if you click Predictions inspection.

The confusion matrix shows how well a system classifies. The diagonal shows correct predictions; everything outside the diagonal is an error. For a perfect classification, 100% of the predictions lie on the diagonal going from top left to bottom right.
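A small sketch of how a confusion matrix is built from labels, using hypothetical data for a 3-class problem (not the MNIST results above):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    # Rows = actual class, columns = predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]  # one sample of class 1 was misclassified as class 2

cm = confusion_matrix(y_true, y_pred, 3)
print(cm)
# Correct predictions sit on the diagonal; the off-diagonal 1 at row 1,
# column 2 is the single error.
```

Reading along a row tells you what a given true class was mistaken for, which is often more informative than the overall accuracy alone.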

Results ok – let’s deploy

In later tutorials, we will iterate on the experiment by tweaking the model to improve it. But this is good to go – time to deploy.

This experiment is great, but it’s of no use as long as it is locked up inside the Peltarion Platform. If you want people to use the trained experiment, you have to get it out in some usable form.

Create new deployment

In the Evaluation view, click Create deployment.

1. Select the experiment and checkpoint of your trained model that you want to test for predictions or enable for production calls.
Both the Best epoch and the Last epoch of each trained experiment are available for deployment.

2. Click the Enable button to deploy the experiment.

Test the MNIST classifier

Click the text Test deployment.
The following page will show up (it will include your experiment’s URL and token):

Figure 4. Peltarion’s image classifier API tester
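Under the hood, the test page calls your deployment over HTTP. Here is a hedged sketch using only Python’s standard library: the URL, token, payload, and header format below are placeholders, and the exact endpoint and request format are the ones shown on your own Test deployment page:

```python
import urllib.request

# Placeholder values: copy the real URL and token from your Test deployment page.
DEPLOYMENT_URL = "https://example.com/deployment/some-id/forward"
TOKEN = "your-deployment-token"

# Build (but don't send) a POST request carrying the image bytes and the token.
# The actual payload encoding (e.g. multipart form data) is documented on the
# deployment page, so treat this body as purely illustrative.
req = urllib.request.Request(
    DEPLOYMENT_URL,
    data=b"...image bytes...",
    headers={"Authorization": "Bearer " + TOKEN},
    method="POST",
)

print(req.get_method())  # POST
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the model’s prediction for the uploaded image.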

Add image to the test classifier

Drop an image in the classifier and click the Result icon to get a prediction.

If an error occurs, make sure that the uploaded image has three channels (RGB) and is 28 x 28 pixels in size.

Result - Success!!

Whazaaaaam!!! You have created an operational AI experiment.

Not working? Remember it’s a prediction

If it doesn’t work every time, remember that the experiment’s loss isn’t 0. Hence, the experiment will predict the wrong number in some cases.

Tutorial recap and next steps

In this tutorial, you’ve created an AI experiment that you trained, evaluated, and deployed. You have used all the tools you need to go from data to production — easily and quickly.

You’ve used one labeled dataset and a CNN to predict numbers, but you can improve the results by using multiple sets of input data, combining both tabular data and images.

Use multiple sets of data

Try using multiple sets of data in the tutorial Predict California house prices. Predicting a house price from both tabular and image inputs is a unique problem and not something you can do with anything other than deep learning.

Use web app for more things

The web app you’ve used is not for MNIST only. You can use it as an image classifier with any other deployment, as long as the experiment has been trained on a dataset with 3-channel images, e.g., CIFAR.