Model download for UIPath

UIPath is a Robotic Process Automation (RPA) platform that makes it easy to build, manage, and deploy automation software that emulates human actions when interacting with other software. As part of their platform they offer the AI Center, which lets users upload machine learning models and use them within their RPA workflows.

This page will take you through the steps of packaging the TensorFlow SavedModel, which you can download from the Peltarion Platform, and deploying it to UIPath. This example uses an MNIST classifier as the model.

Prerequisites

  • Set up a Python environment and install tensorflow 2.5.0 with pip install tensorflow==2.5.0

  • Download a model in the SavedModel format from the Peltarion Platform

Converting a Peltarion Platform SavedModel to a UIPath ML Package

According to the UIPath documentation, an ML Package consists of three parts:

  1. The ML model, in our case the TensorFlow SavedModel

  2. A main.py file with a class Main with an init method and a predict method

  3. A requirements.txt containing required dependencies for running the ML Package

Let’s start by creating our requirements.txt. You will only need to add two dependencies to the file:

tensorflow==2.5.0
tensorflow-text==2.5.0

We need the tensorflow package to load the SavedModel into our Python code. The tensorflow-text package is needed in case the SavedModel works with text input.

Next up we’ll create our main.py file. As mentioned earlier, the ML Package requires this file to contain a class called Main which has a constructor method called init that takes no arguments and a method predict that takes the model input as an argument. In our case, the input will be the binary data of a file since we are working with an MNIST classifier model. In the UIPath documentation you can read more about what kinds of inputs the ML Package can receive.

Let’s start by importing the packages that we require:

import json
import tensorflow as tf
from utils.parse_input import Features

We will use the Features class to load the feature_mapping.json file that is bundled together with the exported model. This file contains information about the input and output features of the model and the Features class will help us convert features listed by their names into something that the exported model will understand.
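The parse_input module with the Features class is not shown on this page. The sketch below is only an illustration of the idea, assuming that utils/parse_input.py sits next to main.py and that feature_mapping.json simply maps human-readable feature names to the internal feature ids used by the SavedModel; the real helper and file layout may differ.

import json
import tensorflow as tf

class Features(object):

  def __init__(self, feature_mapping_file_path):
    # Assumed layout: {"image": "Input_1", "Number": "Output_1", ...}
    with open(feature_mapping_file_path) as mapping_file:
      self._name_to_id = json.load(mapping_file)
    self._id_to_name = {v: k for k, v in self._name_to_id.items()}

  def serialize_data(self, named_values):
    # Build a tf.train.Example keyed by the model's internal feature ids
    # and return it as a scalar string tensor.
    feature = {
      self._name_to_id[name]: tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[value]))
      for name, value in named_values.items()
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    return tf.constant(example.SerializeToString())

  def get_feature_label(self, feature_id):
    # Reverse lookup: internal feature id -> human-readable name.
    return self._id_to_name.get(feature_id, feature_id)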

The next step is to add the Main class into the main.py file.

class Main(object):

  _base_model_path = "./exported_model"
  _graph_signature = "serving_default"

  def __init__(self):
    # Load the exported SavedModel, the bundled feature mapping and the model signatures.
    self._model = tf.saved_model.load(f"{self._base_model_path}/saved_model")
    self._features = Features(feature_mapping_file_path=f"{self._base_model_path}/feature_mapping.json")
    self._predict = self._model.signatures[self._graph_signature]
    self._metadata = self._model.signatures["metadata"]

  def _build_input(self, X):
    # Serialize the raw file bytes into a tf.train.Example tensor and add a batch dimension.
    example_tensor = self._features.serialize_data({
      "image": X
    })
    example_tensor = tf.expand_dims(example_tensor, -1)
    return example_tensor

  def _build_output(self, outputs):
    # Map internal feature ids back to readable labels and, when class labels
    # are available in the model metadata, pair each label with its probability.
    metadata = self._metadata()

    result = {}
    for feature_id, data in outputs.items():
      feature_label = self._features.get_feature_label(feature_id)
      out = data.numpy().tolist()[0]
      if feature_id in metadata:
        meta = metadata[feature_id].numpy().astype(str).tolist()
        result[feature_label] = {label: prob for label, prob in zip(meta, out)}
      else:
        result[feature_label] = out
    return result

  def predict(self, X):
    example_tensor = self._build_input(X)
    outputs = self._predict(example_tensor)
    result = self._build_output(outputs)

    return json.dumps(result)

As mentioned earlier, this file has the init method where we load our model and create an instance of the Features class. With the help of _build_input and _build_output, the predict method builds a tf.train.Example, runs it through the SavedModel, and returns the result as a JSON string.

Finally, we can add the following code at the end of main.py for local testing.

if __name__ == '__main__':
  with open('./mnist-3.png', 'rb') as input_file:
    file_bytes = input_file.read()
    m = Main()
    print(m.predict(file_bytes))

With this, you can run python main.py and it should be able to run a prediction on mnist-3.png if this file exists in the same folder as your main.py.
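The printed JSON maps each output feature to its class probabilities. The exact feature names and class labels depend on your model, but for an MNIST classifier the result might look roughly like this:

{"Number": {"0": 0.001, "1": 0.002, "2": 0.004, "3": 0.957, "4": 0.003, "5": 0.008, "6": 0.002, "7": 0.01, "8": 0.009, "9": 0.004}}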

Creating the ML Package

Now that you have your requirements.txt, main.py, and your exported SavedModel you are ready to create your ML Package. This is done by creating a zip file with these three artifacts. Before you create your zip file, check that you have the following folder structure:

my_package/
├── requirements.txt
├── main.py
├── utils
│   ├── __init__.py
│   └── parse_input.py
└── exported_model
    ├── feature_mapping.json
    └── saved_model
        ├── assets
        ├── saved_model.pb
        └── variables
            ├── variables.data-00000-of-00001
            └── variables.index
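Zip the contents of my_package so that requirements.txt, main.py and the rest end up at the root of the archive, matching the structure above. If you prefer to create the archive from Python, a minimal sketch (assuming the folder above is called my_package and sits in your working directory):

import shutil

# Create my_package.zip with the contents of my_package/ at the archive root.
shutil.make_archive("my_package", "zip", "my_package")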

Uploading your ML Package

Check out the UIPath documentation for more information on how to upload and use your newly created ML Package.
