Classify customer complaints

Use natural language processing and deploy a working model

No one likes to think about customer complaints. You want to spend all your time trying to build a product so good that you won’t receive any.
… but when they eventually do trickle in, it’s good to know right away what the customer is complaining about so you can get to the bottom of it.

  • No prior data science experience is needed.
  • 15 minutes to download the data and build the AI model.

You will learn to
  • Build a classification model
  • Deploy your model
  • Work with natural language processing (NLP)

Rather watch?
See how Björn did this.


Download the data from CFPB

You will use data from the CFPB (Consumer Financial Protection Bureau). The database is huge and includes complaints about consumer financial products and services that the CFPB has sent to companies for response.

  1. Go to this page and look through the terms and conditions so that you know what you are agreeing to.

  2. On this CFPB page, you’ll see that we’ve added filters so the downloaded file only includes complaints with the following conditions:

    • From New York State

    • Has a narrative.

  3. Click Export data, which appears a bit down the screen, just above the map view made up of squares.

  4. In the pop-up dialog, select these two boxes:

    • CSV box

    • Filtered dataset box.

  5. Click Export data. This will download the dataset to your computer.
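The export filter described above can be sketched in Python. The sample records here are made up for illustration; the column names follow the ones used in the downloaded CSV, such as the Consumer complaint narrative column referenced later in this tutorial.

```python
# Hypothetical sample records mimicking rows of the downloaded CFPB CSV.
sample_complaints = [
    {"State": "NY", "Consumer complaint narrative": "I was charged twice."},
    {"State": "CA", "Consumer complaint narrative": "Wrong balance shown."},
    {"State": "NY", "Consumer complaint narrative": ""},
]

def matches_export_filter(row):
    """Keep complaints from New York State that have a narrative."""
    return (row["State"] == "NY"
            and row["Consumer complaint narrative"].strip() != "")

filtered = [row for row in sample_complaints if matches_export_filter(row)]
print(len(filtered))  # only the first record passes both filters
```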


Upload and process the data

Upload the data

  1. Sign in to the Peltarion Platform (or sign up if you haven’t done so).

  2. From the dashboard, click New project and name it, e.g. Consumer Complaints.

  3. In the Datasets view, click Choose files and upload the dataset file.

  4. Name the dataset, e.g., Complaints, and click Done.

Process the data

  1. Click the Table tab, and you’ll see the individual records of the CSV file.

  2. Find the column named Consumer complaint narrative and click the wrench icon next to it.

  3. Change the label to narrative. This will make it easier to refer to this column in later steps.

  4. Click Save version in the upper-right corner.


Create an experiment and build a model

Now, it’s time to build and train the model. This model will be able to predict what product the consumer is referring to by looking at the customers’ written complaints. You will use the BERT English uncased snippet that includes the whole BERT network. The snippet allows you to use a massive network with weights that are pre-trained to understand the text. You don’t have to build it yourself. Major win!
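Conceptually, a single-label text classifier like this one ends in a softmax layer that turns the network’s raw scores into one probability per product category, and the prediction is the category with the highest probability. A minimal sketch (the labels and scores below are invented for illustration, not produced by the platform):

```python
import math

# Hypothetical raw scores (logits) for one complaint, one per product.
labels = ["Credit card", "Mortgage", "Student loan"]
logits = [2.0, 0.5, -1.0]

# Softmax: exponentiate and normalize so the scores sum to 1.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted product is the label with the highest probability.
prediction = labels[probs.index(max(probs))]
print(prediction)  # "Credit card" has the largest score, so it wins
```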

  1. Staying in the Datasets view, click the new Use in new experiment button that appeared where Save version was.

  2. The Experiment wizard has three steps. In the first step, enter a name for the experiment, e.g., Experiment 1. Click Next.

  3. In the next step, you choose a snippet. Set:

    • Input feature to narrative

    • Target feature to Product

    • Problem type to Single-label text classification

  4. The recommended snippets appear. Choose BERT English uncased. Click Next.

  5. Keep all defaults in the Initialize weights tab and click Create.

Edit settings and start training

The selected BERT snippet appears on the Modeling canvas. You now want to update the settings and train your model.

  1. Go to the Settings tab to the left of the canvas.

  2. Check Early stopping.
    This will make sure that the training is stopped when the performance of the model no longer improves, making training faster.

  3. Update Epochs to 50.
    One epoch is when the model has seen all the examples once. Since you have early stopping enabled, you can set a high number here; training will stop automatically.

  4. Now you are ready to train the model. Click Run in the upper-right corner.
    Training will take some time, so now is a good time to take a little break.

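The early-stopping behavior described in step 2 can be sketched as follows. The loss values and patience setting are illustrative, not the platform’s actual internals: training halts once the validation loss has gone a fixed number of epochs without improving.

```python
# Minimal early-stopping sketch: stop when validation loss has not
# improved for `patience` consecutive epochs. Loss values are made up.
val_losses = [0.90, 0.70, 0.55, 0.54, 0.56, 0.57, 0.58]  # one per epoch
patience = 2
max_epochs = 50

best = float("inf")
epochs_without_improvement = 0
stopped_at = max_epochs

for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:
        best = loss
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= patience:
        stopped_at = epoch
        break

print(stopped_at)  # training halts at epoch 6, well before max_epochs
```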

Deploy the model

In this case, the results you get from the default settings are pretty good. If you want to tweak them further, you can set up a few more experiments to see if there are ways to improve the model. Otherwise, you’re ready to set up a deployment.

  1. Go to the Deployment view.

  2. Click New deployment.

  3. Select which Experiment and Checkpoint you want to use (usually you would choose the one with the best overall performance). Name the deployment.

  4. Click Create.

Enable deployment

Click the Enable button to enable your deployment. As soon as your deployment is enabled, you can start requesting predictions.


When you submit a request to the deployed model, you have to send all the Input features. The response will contain the predicted Output feature for each submitted example.

To send queries via the API, use the deployment’s URL and Token; both are required for the deployment to respond with predictions.
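A request to the deployment can be sketched in Python. The URL and token below are placeholders (copy the real values from the Deployment view), and the payload shape, a list of rows keyed by the input feature name narrative, is an assumption for illustration; check the deployment’s API documentation for the exact format. The sketch only builds the request so its shape is clear; sending it is left to any HTTP client.

```python
import json

# Placeholder values; copy the real URL and Token from the Deployment view.
DEPLOYMENT_URL = "https://example.com/deployment/endpoint"
TOKEN = "your-deployment-token"

def build_prediction_request(narrative):
    """Build headers and a JSON body for one complaint narrative.

    The body shape (a "rows" list keyed by input feature name) is an
    assumption for illustration; consult the deployment's API docs.
    """
    headers = {
        "Authorization": "Bearer " + TOKEN,
        "Content-Type": "application/json",
    }
    body = json.dumps({"rows": [{"narrative": narrative}]})
    return headers, body

headers, body = build_prediction_request("I was charged a fee I never agreed to.")
print(json.loads(body)["rows"][0]["narrative"])
```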


Next step

Now your AI model is built, and you’re ready to go! The next thing you want to do is to use the deployment in your business:

Figure 1. Power Apps user interface