Target group: Data scientist
Classification problems have always been around. Humans have always tried to classify the world around them. We try to make sense of the world by putting labels on things: this is a "cat," this is a "dog," this is a "bird." But classification is hard for most real-world problems; it's not always easy to draw a dividing line between one class and another. Thanks to AI, this has become much easier. With increased computing power, big data, and deep learning models, we can now solve hard classification problems much faster and more accurately.
The MNIST classification problem is the "Hello world" of deep learning. We're going to take it one step further and show how you can make a real-world AI web service out of this little example dataset.
This tutorial will show you how to build a model that will solve a classification problem. This means that your experiment is about predicting a label, in this case, what number an image depicts.
Once you've trained the model, you will learn how to deploy the experiment and start using your AI application straight away.
You will learn how easy and quick it is to build a model by using a Convolutional Neural Network (CNN) snippet.
The original MNIST dataset consists of small, 28 x 28 pixel images of handwritten numbers that are annotated with a label indicating the correct number. The dataset consists of a training set of 60K examples and a test set of 10K examples. Read more about the MNIST dataset here: MNIST dataset.
The MNIST dataset we’re using in this tutorial consists of 3-channel RGB images. The reason for that is simply because the phone’s camera takes 3-channel RGB images. If you want to test a model you need to have the same input data as the model was trained on. The model can’t predict apples when it has been trained on oranges.
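To make this concrete, here is a minimal NumPy sketch of turning a single-channel grayscale image into the 3-channel RGB format described above. This is an illustration, not the platform's internal preprocessing: the grayscale value is simply repeated in each of the three channels.

```python
import numpy as np

def grayscale_to_rgb(image):
    """Convert a single-channel (28, 28) image into a (28, 28, 3) RGB image
    by repeating the grayscale values in each of the three channels."""
    return np.stack([image, image, image], axis=-1)

gray = np.zeros((28, 28), dtype=np.uint8)
rgb = grayscale_to_rgb(gray)
print(rgb.shape)  # (28, 28, 3)
```

The reverse also matters: a model trained on 3-channel images expects 3-channel input at prediction time, which is why the phone photos (already RGB) work directly.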
The goal of the experiment is to learn whether the simplest possible deep learning model (flattening the images into long vectors of numbers and passing them through dense layers) can achieve a low loss when we test it in real life. Loss indicates the magnitude of error your model made on its prediction.
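The "simplest possible" model above can be sketched as a forward pass in NumPy: flatten the image into one long vector, multiply it through a dense layer, and apply softmax to get class probabilities. The weights here are random placeholders; on the platform, training is what learns useful values for them.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# A 28 x 28 x 3 RGB image, flattened into one long vector of 2352 numbers.
image = rng.random((28, 28, 3))
x = image.reshape(-1)          # shape: (2352,)

# One dense layer mapping the vector to 10 class scores
# (random weights here; training would learn real ones).
W = rng.normal(0.0, 0.01, (10, x.size))
b = np.zeros(10)
probs = softmax(W @ x + b)     # one probability per digit 0-9

print(probs.shape)  # (10,)
```

A real CNN replaces the single dense layer with convolutional layers that exploit the 2D structure of the image, which is why the CNN snippet performs better than this bare-bones baseline.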
First, create a project and name it so you know what kind of project it is. Naming is important.
A project combines all of the steps in solving a problem, from pre-processing of datasets to model building, evaluation, and deployment. Using projects makes it easy to collaborate with others.
The first samples of the MNIST dataset are shown in the Datasets view, with one column for each feature: image and number. Above each feature, a graph shows the distribution of that feature over its range.
Click the Number column and set the Encoding to Categorical in the Inspector to the right. By using this encoding, you ensure that you are not imposing a specific order on the categories. This is very important. If you want to understand more look here: Feature encoding.
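What categorical (one-hot) encoding does can be illustrated in a few lines of NumPy. Each label becomes a vector with a single 1, so the model sees no artificial ordering between the digits (a "9" is not "more" than a "1"; they are just different classes).

```python
import numpy as np

def one_hot(labels, num_classes=10):
    """Categorical (one-hot) encoding: each label becomes a vector with a
    single 1 in the position of its class, imposing no order on the classes."""
    labels = np.asarray(labels)
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

print(one_hot([3, 0]))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
#  [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
```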
In the top right corner, you’ll see the subsets. All samples in the dataset are by default split into 20% validation and 80% training subsets. Keep these default values in this project. You can change the existing subsets and add more subsets, e.g., a test subset, if you want.
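The 80/20 split the platform applies by default amounts to shuffling the sample indices and reserving a fraction for validation. A hedged sketch of the idea (not the platform's actual splitting code):

```python
import numpy as np

def split_dataset(samples, validation_fraction=0.2, seed=42):
    """Shuffle the sample indices and split them into training and
    validation subsets (80% / 20% by default)."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(samples))
    n_val = int(len(samples) * validation_fraction)
    return indices[n_val:], indices[:n_val]   # train, validation

# With MNIST's 60,000 training examples:
train_idx, val_idx = split_dataset(range(60000))
print(len(train_idx), len(val_idx))  # 48000 12000
```

The validation subset is never used to update the model's weights; it only measures how well the model generalizes.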
You’ve now created a dataset ready to be used in the platform. Click Save version and navigate to the Modeling view.
Time to create an experiment in the Modeling view. The experiment contains all the information needed to reproduce it.
The result from this experiment is a trained AI model that can be evaluated and deployed.
The experiment is done and ready to be trained. Click Run in the top right corner to start the training.
You can tweak the training setup in the Run settings section in the Settings tab. We won't do that now, but if you want to go deeper into all the parameters, check out the Optimizer and compiler options.
Navigate to the Evaluation view and watch the model train. You can see how the loss is getting lower epoch by epoch.
Loss indicates the magnitude of error your model made on its prediction. It’s a method of evaluating how well your algorithm models your dataset. If your predictions are totally off, your loss function will output a higher number. If they’re pretty good, it’ll output a lower one. Is the loss low enough? Yes, this is good to go.
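For a classification problem like this one, a common loss is categorical cross-entropy: the negative log of the probability the model assigned to the correct class. A small NumPy example (illustrative; the platform configures the actual loss function for you) shows how a confident correct prediction scores much lower than an unsure one:

```python
import numpy as np

def categorical_cross_entropy(true_class, predicted_probs):
    """Loss for one prediction: the negative log of the probability
    the model assigned to the correct class."""
    return -np.log(predicted_probs[true_class])

good = np.full(10, 0.01)   # model is 91% sure the digit is a 5
good[5] = 0.91
bad = np.full(10, 0.1)     # model has no idea: uniform over all digits

loss_good = categorical_cross_entropy(5, good)
loss_bad = categorical_cross_entropy(5, bad)
print(loss_good < loss_bad)  # True
```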
You can also get information from the confusion matrix. It is used to see how well a system performs classification. The diagonal shows correct predictions; everything outside the diagonal is an error. In a perfect classification, you'll have 100% on the diagonal going from top left to bottom right.
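Building a confusion matrix is straightforward: count, for every sample, which true class was predicted as which class. A minimal sketch in NumPy:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes=10):
    """Rows are the true class, columns the predicted class.
    Correct predictions land on the diagonal."""
    m = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

# Three samples: two 5s (one misread as a 3) and one correctly read 3.
cm = confusion_matrix([5, 5, 3], [5, 3, 3])
accuracy = np.trace(cm) / cm.sum()   # diagonal / total
print(accuracy)
```

The trace-over-total ratio is the overall accuracy; the off-diagonal cells tell you which digits the model confuses with which.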
Our loss curve is quite OK, but not fantastic. Despite that, we will deploy our model and try it out in the real world, outside the Platform.
While our model may be great, it is little more than an academic exercise as long as it is locked up inside the Peltarion Platform. If you want people to use the trained model, you have to get it out in some usable form. This is where the Deployment view comes in. In this project, we will deploy the model as an API.
This test is most fun if you use your phone, but you can do it on your desktop if you want to. Enter the address of our web app, the Image classifier API tester, into your preferred browser: http://bit.ly/ImageClassifier
The following page will show up:
Click Setup. Copy the URL in the Deployment view and paste it into the URL field. If you use only Apple devices, you can do this by copying on your Mac and pasting on your iPhone. See here how: Copy and paste across devices.
You could also use Pushbullet.
Copy the Token in the Deployment view. The API is called by sending an HTTP POST request to the endpoint shown in the interface. The token is required to authenticate the calls. Add the Token from the Deployment view to the Token field.
Type Image (case sensitive) in the Image parameter field. This is the name of the input parameter in the Deployment view.
Set the Width and Height to 28. These parameters correspond to the size of the input image.
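Under the hood, the web app assembles an HTTP POST with the token and the image and sends it to your deployment endpoint. Here is a hedged sketch of what such a call could look like; the endpoint URL and token are placeholders you would copy from the Deployment view, and the `Authorization: Bearer` header format and multipart `Image` field name are assumptions based on the setup steps above, not a confirmed API specification.

```python
def build_prediction_request(endpoint_url, token, image_bytes):
    """Assemble the pieces of the HTTP POST that calls the deployed model.
    The 'Image' field name matches the input parameter shown in the
    Deployment view; the bearer token (an assumed header format)
    authenticates the call."""
    headers = {"Authorization": "Bearer " + token}
    # Multipart form-data body: field name, filename, bytes, content type.
    files = {"Image": ("digit.png", image_bytes, "image/png")}
    return endpoint_url, headers, files

# Sending it would require an HTTP client, e.g. the third-party
# `requests` library (URL and token below are placeholders):
# url, headers, files = build_prediction_request(
#     "<URL from the Deployment view>", "<Token from the Deployment view>",
#     open("digit.png", "rb").read())
# response = requests.post(url, headers=headers, files=files)
# print(response.json())
```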
Collapse the Setup. Scribble down a “5” on a piece of paper. This is the easiest digit to recognize.
Tap the icon shown below on your phone and take a photo of your digit.
If the image is displayed sideways on your phone, expand the Setup menu again and tap the Tilt icon.
Tap the Load icon to get a result.
Whazaaaaam!!! You have created an operational AI experiment.
If it doesn't work every time, remember that the loss from the experiment isn't 0, so the experiment will predict the wrong number in some cases.
In this tutorial you’ve created an AI experiment that you first evaluated and then deployed. You have used all the tools you need to go from data to production - easily and quickly.
You've used one labeled dataset and a CNN to predict numbers, but can we improve the result by using multiple types of input data, both tabular data and images? Try this in the tutorial Predict California house prices. Predicting a house price from both tabular and image inputs is a problem that is hard to solve with anything other than deep learning.
The web app you've used is not for MNIST only. You can use it as an image classifier with any other deployment, as long as the experiment has been trained on a dataset with 3-channel images, e.g., CIFAR (https://storage.googleapis.com/bucket-8732/cifar-bundle.zip).