Deployment API

The Deployment API lets you connect to your standard model deployments to request predictions about new data.

You can request a single prediction, or send a batch of examples to get many predictions at once. Deployment limitations may cap the size and frequency of your requests, and the number of models that you can deploy simultaneously depends on your pricing plan.

Our API uses HTTP endpoints to provide an interface between your application and the Platform. You send requests to specific URLs, including, if needed, examples or data files. The API then returns standard HTTP status codes and JSON formatted text containing the model predictions.

See examples of using the Deployment API in cURL and Python.

Prerequisites

You need to create a Standard deployment from the Deployment view, which shows you the information you need to use the Deployment API:

  • All the input and output names (and shapes) of your model

  • The URL and deployment token to use for your requests

  • The status of the deployment. Make sure it is enabled, or queries will be denied

OpenAPI specification

In the Deployment view, you can download the API specification for your model in the standard OpenAPI format.

Structure of a prediction request

To submit examples, you use an HTTP client to send a POST request to the URL shown on the page of the deployment that you want to use.

You can send such requests by using cURL, Python (e.g., with the requests package), or our sidekick tool for Python. Other packages, languages, or tools may also be used if they follow the API specifications.
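As a minimal sketch of such a request in Python with the requests package — the deployment URL and token below are placeholders you would copy from the Deployment view, and the helper names are our own:

```python
import json

import requests


def build_headers(token):
    """Headers required by the Deployment API: bearer token plus JSON content type."""
    return {
        "Authorization": "Bearer " + token,
        "Content-Type": "application/json",
    }


def predict(url, token, rows):
    """POST one or more examples to a deployment and return its predictions."""
    response = requests.post(
        url, headers=build_headers(token), data=json.dumps({"rows": rows})
    )
    response.raise_for_status()  # surface HTTP errors (e.g., a disabled deployment)
    return response.json()


# Example call (placeholder URL and token):
# predictions = predict(
#     "https://<platform-host>/<deployment-url>",
#     "<token>",
#     [{"Type": "animal", "Sound": "miaow", "Size": 0.4}],
# )
```

The payload and header formats are described in detail in the sections below.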

Header

Your request must always include headers. The Authorization header carries the deployment token, which is used to authenticate the request.

The Content-Type header declares the format of the query’s payload. The Deployment API only accepts JSON formatted payloads, so the headers look like this:

{
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json"
}

where you replace <token> with the deployment token copied from the Deployment view.

Payload

To submit examples, you attach them in the payload of the request. The payload is also a JSON formatted string, which contains the features of one or more examples to be evaluated.

The JSON payload has a single key called rows, whose value is an array of examples.
Each example is a collection of key-value pairs, where the keys are the names of the input features of the model.

For instance, if your deployment parameters look like this in the Deployment view:

(Image: the parameter list of a deployment.)

Then the JSON payload to submit four examples will look like this:

{
    "rows": [
        {"Type": "animal", "Sound": "miaow", "Size": 0.4},
        {"Type": "animal", "Sound": "woof", "Size": 1.2},
        {"Type": "vehicle", "Sound": "tchoo", "Size": 80},
        {"Type": "vehicle", "Sound": "vroom", "Size": 3.2}
    ]
}

Note that line breaks are not required inside the payload, so the same request could look like this:

{"rows": [{"Type": "animal", "Sound": "miaow", "Size": 0.4},{"Type": "animal", "Sound": "woof", "Size": 1.2},{"Type": "vehicle", "Sound": "tchoo", "Size": 80},{"Type": "vehicle", "Sound": "vroom", "Size": 3.2}]}
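In practice you rarely write this string by hand: serializing a list of dicts with a JSON library produces the compact form automatically. A sketch in Python, using the four examples shown above:

```python
import json

# The same four examples, as a list of dicts keyed by input feature name.
rows = [
    {"Type": "animal", "Sound": "miaow", "Size": 0.4},
    {"Type": "animal", "Sound": "woof", "Size": 1.2},
    {"Type": "vehicle", "Sound": "tchoo", "Size": 80},
    {"Type": "vehicle", "Sound": "vroom", "Size": 3.2},
]

# json.dumps produces a single-line JSON string ready to use as the request body.
payload = json.dumps({"rows": rows})
print(payload)
```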

Response

Once you send the request with a valid header and payload, your model will process the examples submitted and return predictions for each of them.

The predictions are returned as JSON formatted text whose structure mirrors that of the payload: a rows array with one element per submitted example.
Each element in the rows array contains a key-value pair for the Name of each Output feature listed in the Deployment view.

There can be more than one output feature if the model Target block uses a feature set, or if the model has one or more Output blocks.

The response to the example request above would look like this:

{
    "rows": [
        {"Weight": 1.2},
        {"Weight": 4.7},
        {"Weight": 900000},
        {"Weight": 3000}
    ]
}
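To sketch how such a response might be consumed in Python — the response body below is the example above, hard-coded for illustration rather than fetched from a live deployment:

```python
import json

# Example response body, as returned by the Deployment API.
response_text = (
    '{"rows": [{"Weight": 1.2}, {"Weight": 4.7},'
    ' {"Weight": 900000}, {"Weight": 3000}]}'
)

# The rows array is parallel to the submitted examples: element i is the
# prediction for example i.
predictions = json.loads(response_text)["rows"]
for i, prediction in enumerate(predictions):
    print("Example", i, "predicted Weight =", prediction["Weight"])
```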