Buy or not / Predict from tabular data

Predict whether a customer will buy or not, based on earlier customers' buying patterns

Money! Understanding what makes a customer willing to pay up and buy a product has always been key for businesses.

This tutorial will show you how you can build simple AI models using the spreadsheets that so many of us work with. You will use tabular data to solve a classification problem, and get advice on how you’d also solve a regression problem.

• Target audience: Beginners
• Estimated time: Setup - 5 minutes | Training - 10 minutes

You will learn to
• Import and use tabular data on the Peltarion Platform.
• Solve a classification problem - will they buy, yes or no? (You'll also get some hints on how to solve a regression problem.)
• Analyze the performance of your model.


The problem - Unleash the power of the spreadsheet

Most of the data that businesses collect is tabular, i.e., data that can be stored in a spreadsheet: numerical, categorical, binary, or any combination of those. You name it.

How do you use this data to make really good predictions? Well, there are many ways to make predictions using tabular data, and the Peltarion Platform is a great way to quickly and intuitively leverage your data to make valuable predictions.

Getting started - create a project

Let’s begin! First, click New project and name it, so you know what kind of project it is.

The data

To train a model, you need examples of input data together with the predictions that you expect the model to make.

This tutorial uses data from a phone marketing campaign. Many features, such as the client's age, employment, and education, the response to earlier phone campaigns, and the day of the week of the call, are recorded in table form.
The dataset also records whether or not the client subscribed to a term deposit after the phone call. It's this outcome that the AI model will learn to predict from the known factors.
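To make the structure concrete, here is a small sketch of what such a table looks like as a pandas DataFrame. The column names and values below are illustrative only, not the exact schema of the Bank marketing dataset:

```python
import pandas as pd

# Hypothetical rows in the style of the bank marketing data.
# Each row is one phone call; "purchased" is the outcome to predict.
data = pd.DataFrame({
    "age": [41, 29, 56],
    "job": ["admin.", "technician", "retired"],
    "day_of_week": ["mon", "thu", "fri"],
    "previous_outcome": ["failure", "nonexistent", "success"],
    "purchased": ["no", "no", "yes"],  # target column
})
```

Every column except the target is an input feature; the model learns the mapping from those features to the target.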

Import the dataset

Go to the Datasets view to import and preprocess datasets.

Import the Bank marketing dataset from our Data library

In the Datasets view, click on Data library and choose the Bank marketing dataset. This dataset is used to solve a binary classification problem for a propensity to buy use case.

After you have reviewed the information about the dataset, click on Accept and import to accept the terms of the dataset’s license and import it into your project.

Bank marketing dataset in the data library
Figure 1. Bank marketing in the data library.

Import your own tabular data

If you want to train a model to make predictions tailored to your use case, you can upload your own tabular data. To do this, you need to upload a comma-separated values (CSV) file.

Most spreadsheet software, such as Microsoft Excel or Google Sheets, has a built-in function to export your spreadsheet as a CSV file.
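If your data lives in Python rather than a spreadsheet, you can produce the same kind of CSV file programmatically. A minimal sketch using pandas (the filename and columns are made up for illustration):

```python
import pandas as pd

# Hypothetical customer table built in code instead of a spreadsheet.
df = pd.DataFrame({
    "age": [34, 51, 28],
    "education": ["university", "high.school", "university"],
    "purchased": ["yes", "no", "yes"],
})

# Write a comma-separated file; index=False keeps the row index
# out of the file so only the actual columns are exported.
df.to_csv("customers.csv", index=False)
```

The resulting file has one header row followed by one line per record, which is the format expected for import.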

Save dataset

Click Save version on the upper right corner of the Datasets view.

Then click Use in new model and the Experiment wizard will pop up.

Build your model in the Experiment wizard


Select the data you want to use.

  • The Training subset is used by the model to improve its predictions.

  • The Validation subset isn’t shown to the model while training. You use it to evaluate how well the model performs on data it has never seen before.
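The training/validation split described above is handled for you by the platform, but the idea can be sketched in a few lines with scikit-learn (the data here is hypothetical, and scikit-learn is an assumption, not part of the platform workflow):

```python
from sklearn.model_selection import train_test_split

# Hypothetical feature rows and buy/no-buy labels.
X = [[34, 1], [51, 0], [28, 1], [45, 0], [39, 1], [60, 0]]
y = [1, 0, 1, 0, 1, 0]

# Hold out 20% of the rows as a validation subset that the model
# never sees during training, so it measures true generalization.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0
)
```

Evaluating only on the held-out rows is what makes the validation metrics an honest estimate of performance on new customers.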

Click Next.

Input(s) / target

In the Inputs column, select everything except the Purchased feature.

Select Purchased as the target feature, and click Next. The target is what the model will learn to predict.


Given the inputs and target selected, the wizard automatically recommends Tabular binary classification as the Problem type and selects Tabular as the Recommended snippet.

Click Create.

Modeling canvas

The wizard has created a model that fits your tabular data. All settings are pre-populated and it’s time to train the model.

Click Run.

Evaluation view

In the Evaluation view, you will find several ways of analyzing how your model is performing. The specific metrics that you are shown depend on your problem type and loss function.

Loss and metrics curves

The Loss and metrics curves show the performance of your model on the training and validation datasets for different epochs. In general, you are aiming to minimize loss and error metrics and maximize accuracy. To identify which metrics are most important for your specific application, read more about loss and metrics in the knowledge center.
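To make "loss" and "accuracy" less abstract, here is a toy computation of both on a handful of hypothetical validation predictions. This is only a sketch of what the platform computes for you; binary cross-entropy is the usual loss for this problem type:

```python
import math

# Hypothetical true labels and predicted "will buy" probabilities.
y_true = [1, 0, 1, 1]
y_prob = [0.8, 0.3, 0.6, 0.9]

# Binary cross-entropy (log loss): lower is better.
loss = -sum(
    t * math.log(p) + (1 - t) * math.log(1 - p)
    for t, p in zip(y_true, y_prob)
) / len(y_true)

# Accuracy at a 0.5 threshold: higher is better.
accuracy = sum(
    (p >= 0.5) == bool(t) for t, p in zip(y_true, y_prob)
) / len(y_true)
```

Note that loss keeps improving as the probabilities move closer to the true labels, even when accuracy is already perfect, which is why the two curves can tell different stories.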

Predictions inspection

The Predictions inspection section lets you analyze the performance of a particular epoch on a particular subset.

The Bank marketing use case is a binary use case, so you'll get the opportunity to set a threshold. The threshold value lets you control how the errors made by the model are distributed between false positives and false negatives.
Slide the Threshold slider to a good value, e.g., 0.2.
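The effect of moving the threshold can be sketched with NumPy on some made-up predicted probabilities (the numbers are illustrative, not output from the tutorial model):

```python
import numpy as np

# Hypothetical predicted probabilities of "will buy" for five customers.
probs = np.array([0.05, 0.15, 0.25, 0.60, 0.90])

# At the default threshold of 0.5, only high-confidence cases are
# flagged as buyers; lowering it to 0.2 flags more customers,
# trading false negatives for false positives.
preds_default = probs >= 0.5
preds_lowered = probs >= 0.2
```

A lower threshold suits campaigns where missing a likely buyer is costlier than making an unnecessary call.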

The features of this section are also dependent on your problem type. Read this article on Prediction inspection to learn more.

ROC curve
Figure 2. ROC curve

Improve your model

A vital step in successful data science is not just building a working prototype but also going back and experimenting with new iterations of your model to improve the performance.

For guidance on which settings and parameters to change to improve your model, have a look at the Improving your tabular data model tutorial.

Further reading

Congratulations, you have completed the tabular data tutorial!

With good input data, models like these can be used to make important predictions and solve a wide array of interesting problems. Read more here:
