How to call a deployed model from Python using JSON requests

It is convenient to use the deployment API from within a Python script to visualize results, compute additional metrics, and so on. Here we show how to call a model trained and deployed on the Peltarion platform from Python using JSON requests. The advantage of using JSON is that you can score several examples in one batch, rather than one at a time as with curl or form-based POST requests.

This how-to is available as a Jupyter notebook in Peltarion’s community-code repo.

Image classification

Classify a single image

The following example assumes that you have a model trained on MNIST data (28x28 pixels, 1 color channel) and want to classify new images that the model has not seen. This is essentially what you do in the tutorial Deploy an operational AI model.

In this example, we hide the deployment URL and the authentication token for security reasons. Substitute the values you find on the Deployment view for the model you want to use.

url = ' -- insert the URL you find on the deployment view -- '
token = ' -- insert the token you find on the deployment view --'

The file called three.png can be found in the images folder in Peltarion’s community-code repo on GitHub.

In order to feed the image to the deployment API, we need to encode it in base64 format and prepend a short string describing what type of data it is.

import base64
import os

import requests

img_file = "images/three.png"
img_type = os.path.splitext(img_file)[-1][1:]
with open(img_file, "rb") as image_file:
    encoded_img = 'data:image/{};base64,'.format(img_type) + base64.b64encode(image_file.read()).decode('ascii')

The structure of the JSON string that we will send is shown below. The "rows" key must always be present, and its value is a list where each entry represents one example that we want to classify. Each entry is a JSON object of key-value pairs, where the key is the feature name and the value is the feature value (a base64 string for images).

  {"rows":
      [{"feature1": "value1", "feature2": "value2"},
       {"feature1": "value1", "feature2": "value2"}
      ]
  }

In the current case, we have a single example with one feature, called "Image". You can find the feature names on the Deployment view. The structure of the JSON is fairly simple:

  {"rows":
      [{"Image": "<base64 encoded image>"}]
  }
payload = "{\"rows\": [{\"Image\":\"" + encoded_img + "\"}]}"
headers = {
    'Content-Type': "application/json",
    'Authorization': "Bearer {}".format(token),
    }

response = requests.request("POST", url, data=payload, headers=headers)

print(response.json())
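Building the JSON by hand with escaped quotes works, but the standard json module does the quoting and escaping for us and scales better to many features. A minimal sketch, with a placeholder string standing in for the base64 data produced above:

```python
import json

# Placeholder standing in for the real base64 data URI produced above
encoded_img = "data:image/png;base64,AAAA"

# json.dumps produces the same {"rows": [...]} structure, with correct escaping
payload = json.dumps({"rows": [{"Image": encoded_img}]})
print(payload)
```

Because requests sends whatever string you pass via data=, the json.dumps result can be used as a drop-in replacement for the hand-built payload.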

Result

For our model we get this result:

Deployment - Resulting JSON

‘3’ gets the highest value, 0.99999976. This means that the model predicts the image to be a ‘3’.

Classify several images

To simplify things, we can write a small function that encodes an image to base64, given a file path.

url = ' -- insert the URL you find on the deployment view -- '
token = ' -- insert the token you find on the deployment view --'
def encode_img(path):
    img_type = os.path.splitext(path)[-1][1:]
    with open(path, "rb") as image_file:
        encoded_img = 'data:image/{};base64,'.format(img_type) + base64.b64encode(image_file.read()).decode('ascii')
    return encoded_img
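If you want to sanity-check encode_img() before calling a deployment, you can round-trip a throwaway file through it. This is a local sketch only; the dummy bytes stand in for a real PNG file:

```python
import base64
import os
import tempfile

def encode_img(path):
    img_type = os.path.splitext(path)[-1][1:]
    with open(path, "rb") as image_file:
        return ('data:image/{};base64,'.format(img_type)
                + base64.b64encode(image_file.read()).decode('ascii'))

# Write dummy bytes to a temporary .png file standing in for a real image
with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as tmp:
    tmp.write(b'not-really-a-png')
    tmp_path = tmp.name

encoded = encode_img(tmp_path)

# The prefix reflects the file extension, and decoding recovers the bytes
assert encoded.startswith('data:image/png;base64,')
assert base64.b64decode(encoded.split(',')[-1]) == b'not-really-a-png'
os.remove(tmp_path)
```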

Now we can classify a batch of images. In this case just two images, but it would work with a much larger batch too. The files can be found in the images folder in Peltarion’s community-code repo on GitHub.

img_files = ['images/three.png', 'images/Six.png']
encoded_imgs = [encode_img(f) for f in img_files]
input_batch = ','.join(["{\"Image\":\"" + encoded_img + "\"}" for encoded_img in encoded_imgs])
payload = "{\"rows\": [" + input_batch + "]}"
response = requests.request("POST", url, data=payload, headers=headers)
response.json()

Result

The first image is predicted to be a '3' and the second to be a '6'.

Deployment - Resulting JSON

Tabular data

In this example we try to predict the latitude at which a house is situated. We assume that the deployed model has been trained on the Calihouse dataset, as in the tutorial Predict California house prices.

url = ' -- insert the URL you find on the deployment view -- '
token = ' -- insert the token you find on the deployment view --'

We can define a short utility function that constructs a row for an input example in the right format.

def input_row(input_params):
    return '{' + ','.join(["\"" + name + "\":" + value for (name, value) in input_params.items()]) + '}'
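To see what input_row() produces, here is a quick usage example. Note that the values are emitted without quotes, so they end up as JSON numbers rather than strings:

```python
def input_row(input_params):
    return '{' + ','.join(['"' + name + '":' + value
                           for (name, value) in input_params.items()]) + '}'

# Build a row from two illustrative features; values are inserted unquoted
row = input_row({"population": "1551.0", "households": "514.0"})
print(row)  # {"population":1551.0,"households":514.0}
```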

We create two examples that we’ll send to the deployed model.

ex1 = {
"population": "1551.0",
"totalBedrooms": "434.0",
"totalRooms": "2202.0",
"housingMedianAge": "52.0",
"medianHouseValue": "261100.0",
"medianIncome": "3.12",
"households": "514.0"
}

ex2 = {
"population": "3551.0",
"totalBedrooms": "834.0",
"totalRooms": "2902.0",
"housingMedianAge": "76.0",
"medianHouseValue": "111100.0",
"medianIncome": "2.12",
"households": "1000.0"
}

examples = [ex1, ex2]
input_batch = ','.join([input_row(ex) for ex in examples])
payload = "{\"rows\": [" + input_batch + "]}"
headers = {
    'Content-Type': "application/json",
    'Authorization': "Bearer {}".format(token),
    }

response = requests.request("POST", url, data=payload, headers=headers)

print(response.json())

Result

The model predicts that the second house is situated slightly north of the first house.

Deployment - House latitude - Resulting JSON
Latitude position callouts PA1
Figure 1. © OpenStreetMap contributors

Images and tabular data

In this example, we will predict the mean house value in a specific area, just as in the tutorial Predict California house prices. We use a model trained on the Calihouse dataset, which consists of map images from OpenStreetMap and tabular demographic data collected from the 1990 California Census.

url = ' -- insert the URL you find on the deployment view -- '
token = ' -- insert the token you find on the deployment view --'

We will re-use the encode_img() function defined in the Classify several images example.

img_files = ['images/15_5256_12656.png', 'images/15_5258_12653.png']
encoded_imgs = [encode_img(f) for f in img_files]

We can now populate the examples with numeric values and encoded images.

ex1 = {
"population": "1551.0",
"totalBedrooms": "434.0",
"totalRooms": "2202.0",
"housingMedianAge": "52.0",
"medianIncome": "3.12",
"households": "514.0",
"image_path": "\"" + encoded_imgs[0] + "\"",
"latitude": "37.88",
"longitude": "-122.25"
}

ex2 = {
"population": "3551.0",
"totalBedrooms": "834.0",
"totalRooms": "2902.0",
"housingMedianAge": "76.0",
"medianIncome": "2.12",
"households": "1000.0",
"image_path": "\"" + encoded_imgs[1] + "\"",
"latitude": "37.88",
"longitude": "-122.25"
}

examples = [ex1,ex2]
input_batch = ','.join([input_row(ex) for ex in examples])
payload = "{\"rows\": [" + input_batch + "]}"
headers = {
    'Content-Type': "application/json",
    'Authorization': "Bearer {}".format(token),
    }

response = requests.request("POST", url, data=payload, headers=headers)

print(response.json())

Result

The model predicts that the area where the second house is situated is more expensive than the first house's area.

{'rows': [{'medianHouseValue': 204714.05}, {'medianHouseValue': 298926.44}]}
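The predictions can then be pulled out of the 'rows' list of the parsed response. A small sketch using the values shown above:

```python
# The parsed JSON shown above, as returned by response.json()
result = {'rows': [{'medianHouseValue': 204714.05},
                   {'medianHouseValue': 298926.44}]}

# One prediction per input row, in the same order as the request
values = [row['medianHouseValue'] for row in result['rows']]
print(values)  # [204714.05, 298926.44]
```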

Image to image

This example shows how to send two images to a deployment and get two images back. The images come from the NoisyOffice dataset, where the task is to clean images of stains and other imperfections.

url = ' -- insert the URL you find on the deployment view -- '
token = ' -- insert the token you find on the deployment view --'
img_files = ['images/FontLrm_Noisec_TE.png', 'images/FontLrm_Noisew_TE.png']
encoded_imgs = [encode_img(f) for f in img_files]
input_batch = ','.join(["{\"path_noisy\":\"" + encoded_img + "\"}" for encoded_img in encoded_imgs])
payload = "{\"rows\": [" + input_batch + "]}"
headers = {
    'Content-Type': "application/json",
    'Authorization': "Bearer {}".format(token),
    }

response = requests.request("POST", url, data=payload, headers=headers)
results = response.json()['rows']

Now you can, for example, save the generated images to file.

for i, res in enumerate(results):
    decoded = base64.b64decode(res['path_clean'].split(',')[-1])
    with open('images/image{}.png'.format(i), 'bw') as outf:
        outf.write(decoded)

Numpy to numpy

The numpy data type can be used to build several kinds of models, e.g. autoencoders, segmentation models, or multi-label classifiers of vectors or images.

In this example we will send input data represented as numpy arrays to the deployment API, and then get a numpy array of predictions back.

import base64
import io

import numpy as np
import requests

# Get predictions from the deployment API
# Return the response as JSON
def get_predictions(data, token, url):
    headers = {
        'Content-Type': "application/json",
        'Authorization': "Bearer {}".format(token),
    }
    response = requests.request("POST", url, data=data, headers=headers)
    return response.json()

# Prepare a json data structure from numpy array
# Assume first axis in the numpy array arr represents samples
def prepare_api_data(arr, input_param_name="input"):
    encoded_arrs = [encode_numpy(a) for a in arr]
    input_batch = ','.join(["{\"" + input_param_name + "\":\"" + encoded_arr + "\"}" for encoded_arr in encoded_arrs])
    payload = "{\"rows\": [" + input_batch + "]}"
    return payload

# Encode a numpy array in base64 format and add the data application type
def encode_numpy(arr):
    # Temporarily save the array to a buffer so the npy headers are included, not just the raw data
    buffer = io.BytesIO()
    np.save(buffer, arr)
    encoded_arr = base64.b64encode(buffer.getvalue()).decode('ascii')
    return 'data:application/x.peltarion.npy;base64,' + encoded_arr

# Decode a base64 string into a numpy array
def decode_base64(base64_string):
    decoded = base64.decodebytes(base64_string.encode('ascii'))
    buffer = io.BytesIO(decoded)
    return np.load(buffer)

# Decode a json response from Peltarion deployment API into a numpy array
# The resulting array represents one or several samples
def decode_api_response(response_json, output_param_name='output'):
    res = []
    for sample in response_json['rows']:
        data_base64 = sample[output_param_name].split(',')[1]
        data_numpy = decode_base64(data_base64)
        res.append(data_numpy)
    return np.array(res)
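A quick way to verify that encode_numpy() and decode_base64() are inverses is to round-trip an array locally, before involving the API at all. A self-contained sketch using the functions above:

```python
import base64
import io

import numpy as np

def encode_numpy(arr):
    # Save through a buffer so the npy header is included, not just the raw bytes
    buffer = io.BytesIO()
    np.save(buffer, arr)
    encoded_arr = base64.b64encode(buffer.getvalue()).decode('ascii')
    return 'data:application/x.peltarion.npy;base64,' + encoded_arr

def decode_base64(base64_string):
    decoded = base64.decodebytes(base64_string.encode('ascii'))
    return np.load(io.BytesIO(decoded))

# Round trip: encode an array, strip the data-URI prefix, decode it back
arr = np.arange(6, dtype=np.float32).reshape(2, 3)
decoded = decode_base64(encode_numpy(arr).split(',')[1])
assert np.array_equal(arr, decoded)
```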

Add your token and URL.

url = ' -- insert the URL you find on the Deployment view -- '
token = ' -- insert the token you find on the Deployment view --'

Print the shape of the input and returned numpy array.

features = np.load('features.npy')

print("Shape of the input numpy array:", features.shape)

api_data = prepare_api_data(features, input_param_name="features.npy_0")
preds = get_predictions(api_data, token, url)
decoded = decode_api_response(preds, output_param_name="labels.npy_0")

print("Shape of the returned numpy array:", decoded.shape)