Target group: Data scientist
Preamble: Zero prior AI knowledge is required. We're simply going to flatten each image into a long list of numbers and feed it through a deep neural network. In our opinion, this is the simplest possible (and surprisingly powerful) starting point for deep learning.
Classification problems have always been around. Humans have always tried to classify the world around them. We make sense of the world by putting labels on things: this is a “Cat”, this is a “Dog”, this is a “Bird”. But classification is hard for most real-world problems, because it's not always easy to draw a dividing line between one class and another. Thanks to AI, this has become much easier. With increased computing power, big data, and deep learning models, we can now solve hard classification problems much faster and more accurately.
The MNIST classification problem is the “Hello world” of deep learning. We're going to take it one step further and show how you can make a real-world AI web service out of this little example dataset.
This tutorial will show you how to build a model that solves a classification problem. This means that your experiment is about predicting a label, in this case, which number an image depicts.
Once you've trained the experiment, you'll learn how to deploy it and start using your AI application straight away.
You will learn how quick and easy it is to build a model using a CNN snippet.
The original MNIST dataset consists of small, 28 x 28 pixel images of handwritten digits, each annotated with a label indicating the correct number. The dataset consists of a training set of 60,000 examples and a test set of 10,000 examples. Read more about the MNIST dataset here: MNIST dataset.
The MNIST dataset we’re using in this tutorial consists of 3-channel RGB images. The reason is simply that the phone’s camera takes 3-channel RGB images. If you want to test a model, you need to feed it the same kind of input data it was trained on. The model can’t predict apples when it has been trained on oranges.
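To see what a 3-channel version of a grayscale image means in practice, here is a minimal NumPy sketch (illustrative only, not the platform's own preprocessing code) that turns a single-channel 28 x 28 image into a 3-channel RGB image by repeating the channel:

```python
import numpy as np

# A grayscale MNIST image is a 28x28 array of pixel intensities (0-255).
gray = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)

# Stack the single channel three times to mimic the 3-channel RGB
# shape a phone camera produces.
rgb = np.stack([gray, gray, gray], axis=-1)
print(rgb.shape)  # (28, 28, 3)
```

The pixel values don't change; the image just gains the shape the model expects.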
The goal of the experiment is to learn whether the simplest possible deep learning model (flattening the pictures into long lists of numbers and putting them through dense layers) can get the loss as low as possible when we test it in real life. Loss indicates the magnitude of error your model made on its prediction.
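As an illustration of that idea, here is a hand-rolled NumPy sketch (not the platform's implementation, and with made-up random weights) of flattening one image and pushing it through a dense layer to get 10 class probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Flatten one 28x28x3 image into a single vector of 2352 numbers.
image = rng.random((28, 28, 3))
x = image.reshape(-1)                      # shape: (2352,)

# One dense (fully connected) layer: weights @ input + bias, then ReLU.
W = rng.standard_normal((128, x.size)) * 0.01
b = np.zeros(128)
hidden = np.maximum(0.0, W @ x + b)

# Output layer: 10 scores, softmax turns them into class probabilities.
V = rng.standard_normal((10, 128)) * 0.01
logits = V @ hidden
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(x.shape, probs.shape)  # (2352,) (10,)
```

Training then consists of nudging `W`, `b`, and `V` so that the probability of the correct digit goes up, which is exactly what lowering the loss means.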
First, create a project and name it so you know what kind of project it is. Naming is important.
A project combines all of the steps in solving a problem, from pre-processing of datasets to model building, evaluation, and deployment. Using projects makes it easy to collaborate with others.
The first samples of the MNIST dataset are shown in the Datasets view, with one column for each feature: images and numbers. On top of each feature there is a graph showing the distribution of that feature over its range.
Set the preprocessing of the Number column to One-hot encoded. You do that in the drop-down menu. You one-hot encode a categorical feature because you don't want to impose a specific ordering on the categories. This is very important; if you want to understand more, see Feature preprocessing.
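One-hot encoding itself is simple; here is a minimal NumPy sketch (illustrative only, the platform does this for you when you pick One-hot encoded):

```python
import numpy as np

labels = np.array([5, 0, 4, 1])   # example digit labels
one_hot = np.eye(10)[labels]      # one row per label, 10 columns

print(one_hot[0])  # digit 5 -> [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
```

Each digit becomes a vector with a single 1, so the model sees no ordering between "3" and "7"; they are just different categories.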
In the top right corner, you’ll see the subsets. All samples in the dataset are by default split into 20% validation and 80% training subsets. Keep these default values in this project. You can change the existing subsets and add more subsets, e.g., a test subset, if you want.
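A hand-rolled sketch of such a random 80/20 split (illustrative only; the platform handles the subsets for you):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 70_000                          # e.g. full MNIST: 60k train + 10k test
indices = rng.permutation(n)        # shuffle so the split is random

split = int(0.8 * n)                # 80% training, 20% validation
train_idx, val_idx = indices[:split], indices[split:]
print(len(train_idx), len(val_idx))  # 56000 14000
```

The validation subset is held out during training and used only to check how well the model generalizes.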
A feature set is used as input or output in a model. Usually feature sets bundle several features together, but in this case each feature set contains only one feature.
Click New feature set, name the feature set Input and select Image (28×28×3). These are the images of the numbers.
Click New feature set again, name the feature set Target and select Number (1). These are the labels indicating the correct number.
You’ve now created a dataset ready to be used in the platform. Click Save version and navigate to the Modeling view.
Time to create an experiment in the Modeling view. An experiment contains all the information needed to reproduce it:
The result from this experiment is a trained AI model that can be evaluated and deployed.
The experiment is now set up and ready to be trained. Click Run in the top right corner to start the training.
You can tweak the training setup in the Run settings section in the Settings tab. We won't do that now, but if you want to go deeper into all the parameters, check out the Optimizer topic.
Navigate to the Evaluation view and watch the model train. You can see how the loss is getting lower epoch by epoch.
Loss indicates the magnitude of error your model made on its prediction. It’s a method of evaluating how well your algorithm models your dataset. If your predictions are totally off, your loss function will output a higher number. If they’re pretty good, it’ll output a lower one. Is the loss low enough? Yes, this is good to go.
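To make the loss idea concrete, here is a small sketch of categorical cross-entropy, a common loss for classification problems like this one (the exact loss your experiment uses depends on its settings):

```python
import numpy as np

def cross_entropy(probs, target_index):
    """Categorical cross-entropy for one sample: -log of the
    probability the model assigned to the correct class."""
    return -np.log(probs[target_index])

good = np.array([0.01] * 9 + [0.91])   # confident, correct prediction of digit 9
bad = np.full(10, 0.1)                 # totally unsure prediction

print(round(cross_entropy(good, 9), 3))  # 0.094 (low loss)
print(round(cross_entropy(bad, 9), 3))   # 2.303 (high loss)
```

The more probability the model puts on the right digit, the closer the loss gets to zero.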
You can also get information from the confusion matrix. It is used to see how well the model classifies. The diagonal shows correct predictions; everything outside the diagonal is an error. In a perfect classification, you'll have 100% on the diagonal going from top left to bottom right.
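A minimal sketch of how a confusion matrix is built from true and predicted labels (illustrative, not the platform's code):

```python
import numpy as np

true = np.array([5, 0, 4, 5, 1, 5])   # correct labels
pred = np.array([5, 0, 4, 3, 1, 5])   # model predictions (one mistake: 5 -> 3)

# Rows = true class, columns = predicted class.
cm = np.zeros((10, 10), dtype=int)
for t, p in zip(true, pred):
    cm[t, p] += 1

# Diagonal = correct predictions; off-diagonal = errors.
accuracy = np.trace(cm) / cm.sum()
print(round(accuracy, 3))  # 0.833 (5 of 6 correct)
```

Cell `cm[5, 3]` holds the one error here: a true "5" predicted as "3".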
Our loss curve is quite OK but not fantastic. Despite that, we will deploy our model and try it out in the real world, outside the Platform.
While our model may be great, it is little more than an academic exercise as long as it is locked up inside the Peltarion Platform. If you want people to use the trained model, you have to get it out in some usable form. This is where the Deployment view comes in. In this project, we will deploy the model as an API.
This test is most fun if you use your phone but you could do it on your desktop if you want to. Enter this address to our web app Image classifier API tester into your preferred browser: http://bit.ly/ImageClassifier
The following page will show up:
Click Setup. Copy the URL in the Deployment view and paste it into the URL field. If you use only Apple devices, you can copy on your Mac and paste on your iPhone. See here how: Copy and paste across devices.
You could also use Pushbullet.
Copy the Token in the Deployment view. The API is called by sending an HTTP POST to the endpoint shown in the interface, and the token is required to authenticate the calls. Paste the Token from the Deployment view into the Token field.
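As a rough sketch of such an authenticated POST call using only Python's standard library: the URL, token, and payload format below are placeholders and assumptions, so copy the real values and the exact request format from the Deployment view before sending anything.

```python
import json
import urllib.request

# Hypothetical placeholders - copy the real values from the Deployment view.
URL = "https://example.com/deployment/forward"
TOKEN = "replace-with-your-token"

# The token goes in the Authorization header of the POST call. The
# JSON body with an "Image" key is an assumption about the payload
# format; the parameter name matches the one in the Deployment view.
req = urllib.request.Request(
    URL,
    data=json.dumps({"Image": "<base64-encoded image>"}).encode(),
    headers={
        "Authorization": "Bearer " + TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_method())  # POST

# Once URL and TOKEN are real, send the call with:
# response = urllib.request.urlopen(req)
```

This is what the web app does for you behind the scenes when you fill in the URL and Token fields.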
Type Image in the Image parameter field. This is the name of the input parameter in the Deployment view. Note that it's case sensitive.
Set Width to 28 and Height to 28; that is, the image is 28x28 pixels. You may need to tilt the image, depending on the settings on your phone.
Collapse the Setup. Scribble down a “5” on a piece of paper. This is the easiest digit to recognize.
Tap the icon shown below on your phone and take a photo of your digit.
Tap the Load icon to get a result.
Whazaaaaam!!! You have created an operational AI experiment.
If it doesn’t work every time, remember that the loss from the experiment isn’t 0. Hence, the experiment will predict the wrong number in some cases.
In this tutorial, you’ve created an AI experiment that you first evaluated and then deployed straight away. You have used all the tools you need to go from data to production, easily and quickly.
In this tutorial, we've used one labeled dataset and a CNN to predict numbers. But can we improve the result by using multiple sets of input data, both tabular data and images? Try this in the tutorial Predict California house prices. Predicting a house price from both tabular and image inputs is an unusual problem, and one that is hard to solve with anything other than deep learning.
The web app you’ve used is not limited to MNIST. You can use it as an image classifier against any other deployment, as long as the experiment has been trained on a dataset with 3-channel images, e.g., CIFAR (https://storage.googleapis.com/bucket-8732/cifar-bundle.zip).