TensorFlow Models

Available with the Enhanced and Advanced editions of Posit Connect.

TensorFlow is a platform for machine learning. Saved models can be published to Posit Connect.

Deploying a model

A saved model is a directory containing all of the data needed to run the model. It might have a directory structure like the following:

model/
    1/
        saved_model.pb
        variables/
            variables.index
            variables.data-00000-of-00001

Use the model directory as the deployment target. Its contents are deployed to Connect.

From Python environments, you can deploy TensorFlow models from the command line using the rsconnect-python package.

Before getting started, make sure that you have installed the rsconnect-python package and linked your Connect account.
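For example, you might install the package and register your server like this (the server URL and nickname below are placeholders; substitute your own values):

pip install rsconnect-python
rsconnect add --name myserver \
    --server https://connect.example/ \
    --api-key $CONNECT_API_KEY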

Note

Use rsconnect-python 1.24.0 or higher when deploying TensorFlow models.

From the directory containing your saved model (model/ in our example), deploy it with the command:

rsconnect deploy tensorflow .
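The rsconnect deploy subcommands share common options. For instance, if you have registered more than one server, you can select one by nickname and set a content title (myserver is the example nickname from above):

rsconnect deploy tensorflow --name myserver --title "MNIST model" .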

Alternatively, you can create a manifest for the saved model to use with Git-backed publishing and other workflows. Use the command:

rsconnect write-manifest tensorflow .
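This writes a manifest.json alongside the model files. The exact contents depend on your rsconnect-python version; an abbreviated, illustrative manifest looks roughly like:

{
    "version": 1,
    "metadata": {
        "appmode": "tensorflow-saved-model"
    },
    "files": {
        "1/saved_model.pb": {"checksum": "..."},
        "1/variables/variables.index": {"checksum": "..."}
    }
}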

R environments can deploy from the R console with the rsconnect R package. Unlike some other content types, push-button publishing is not available for TensorFlow models.

Note

Use rsconnect 1.3.0 or higher when deploying TensorFlow models.

Before getting started, make sure that you have linked your Connect account from your R session. This only needs to be done once.
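For example, using the rsconnect package (the server URL, nickname, and API key below are placeholders):

rsconnect::addServer("https://connect.example/", name = "myserver")
rsconnect::connectApiUser(server = "myserver", apiKey = "your-api-key")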

From the directory containing your saved model (model/ in our example), use rsconnect::deployTFModel() to deploy your model:

rsconnect::deployTFModel()

Alternatively, you can create a manifest for the saved model to use with Git-backed publishing and other workflows. Use the command:

rsconnect::writeManifest()

The home page for your hosted model describes the signatures supported by the model and explains how it can be used.
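You can also fetch the signature metadata programmatically. A minimal sketch, assuming Connect exposes the standard TensorFlow Serving metadata endpoint alongside the :predict endpoint (replace <guid> with your content GUID):

import os
import requests

# Query the model's metadata endpoint, a sibling of :predict in the
# TensorFlow Serving REST API. Replace <guid> with your content GUID.
metadata_url = "http://connect:3939/content/<guid>/v1/models/default/metadata"

headers = {}
api_key = os.environ.get("CONNECT_API_KEY")
if api_key:
    headers["Authorization"] = "Key %s" % api_key

response = requests.get(metadata_url, headers=headers)
response.raise_for_status()
print(response.json())  # includes the model's signature definitions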

Working with hosted models

The following code shows how you might interact with an MNIST model hosted by Posit Connect. It uses the mnist_input_data.py helper from the TensorFlow Serving MNIST example.

import os
import requests
import mnist_input_data


def predict(predict_url, api_key, data):
    # POST a JSON prediction request, authenticating with a Connect API
    # key when one is provided.
    headers = {"content-type": "application/json"}
    if api_key:
        headers["Authorization"] = "Key %s" % api_key

    response = requests.post(predict_url, json=data, headers=headers)
    response.raise_for_status()
    return response.json()


def run(predict_url, api_key, work_dir, num_tests):
    # Download the MNIST test data and send one image per request.
    test_data_set = mnist_input_data.read_data_sets(work_dir).test
    for _ in range(num_tests):
        image, label = test_data_set.next_batch(1)
        data = {
            "signature_name": "predict_images",
            "inputs": {"images": [image[0].tolist()]},
        }
        result = predict(predict_url, api_key, data)
        print(result)


if __name__ == "__main__":
    predict_url = os.environ.get(
        "PREDICT_URL",
        "http://connect:3939/content/cb5f85bf-d589-4f5c-97fb-52aafa83c807/v1/models/default:predict",
    )
    api_key = os.environ.get("CONNECT_API_KEY")

    work_dir = "/tmp"
    num_tests = 100
    run(predict_url, api_key, work_dir, num_tests)

This script uses the PREDICT_URL environment variable to locate your prediction endpoint and reads an optional Connect API key from the CONNECT_API_KEY environment variable.
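For example, if the script is saved as predict_mnist.py (the URL below is a placeholder; replace <guid> with your content GUID):

export PREDICT_URL="https://connect.example/content/<guid>/v1/models/default:predict"
export CONNECT_API_KEY="your-api-key"
python predict_mnist.py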

The example prints each API response to show the result returned by the server.
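The exact shape of the response depends on the model's outputs. For a model with a single output tensor of class scores, it looks roughly like this (values are illustrative):

{
    "outputs": [
        [0.0012, 0.0004, 0.0031, 0.0008, 0.0102, 0.0006, 0.0009, 0.9671, 0.0051, 0.0106]
    ]
}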