TensorFlow Serving

This feature is available with the Enhanced and Advanced tiers of Posit Connect.

Off-Host Execution

When using off-host execution, TensorFlow Serving installations are defined by the set of Execution Environments available for executing content. For an overview of this feature, please see the Execution Environments appendix. Connect selects an appropriate image with a TensorFlow Serving installation from the set of configured images.

TensorFlow Serving is a serving system for machine learning models.

Saved models can be published to Posit Connect, which hosts them using TensorFlow Serving.

Installing TensorFlow Serving

TensorFlow Serving must be installed on your system before Connect can host TensorFlow content. Follow the TensorFlow Serving installation instructions.
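As an example, on Debian and Ubuntu systems TensorFlow Serving is typically installed from Google's APT repository. The repository URL, key, and package name below follow the TensorFlow Serving documentation at the time of writing; verify them against the current installation instructions before use:

```shell
# Add the TensorFlow Serving APT repository.
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | \
  sudo tee /etc/apt/sources.list.d/tensorflow-serving.list

# Import the repository's signing key.
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | \
  sudo apt-key add -

# Install the model server binary.
sudo apt-get update && sudo apt-get install tensorflow-model-server

# Confirm the install; the binary should land at /usr/bin/tensorflow_model_server.
tensorflow_model_server --version
```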

Enabling TensorFlow Support

If you installed the TensorFlow Serving binary using APT, that binary is available at /usr/bin/tensorflow_model_server. Configure Connect to use this default location:

; /etc/rstudio-connect/rstudio-connect.gcfg
[TensorFlow]
Enabled = true
Executable = "/usr/bin/tensorflow_model_server"

TensorFlow support is only available when the TensorFlow.Enabled setting is enabled and the TensorFlow.Executable setting references a valid TensorFlow Serving installation. Posit Connect warns at startup if TensorFlow is enabled without a configured executable, and fails to start if TensorFlow is enabled but the configured executable is not a valid installation.

The TensorFlow Serving binary is used to host saved models deployed to Connect.
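Once a model is hosted, clients reach it through TensorFlow Serving's REST predict API, which accepts a JSON body containing an `instances` list with one entry per input row. A minimal sketch of building such a request; the server URL and content path here are hypothetical, and the real URL comes from your Connect dashboard:

```python
import json

# Hypothetical content URL for a model deployed to Connect; the path after
# the content location follows TensorFlow Serving's REST API convention
# of /v1/models/<model_name>:predict.
url = "https://connect.example.com/content/24/v1/models/default:predict"

# The predict API expects a JSON object with an "instances" list,
# one entry per input row.
payload = {"instances": [[1.0, 2.0, 5.0]]}
body = json.dumps(payload)

# To send the request against a live server (requires the `requests` package):
#   import requests
#   response = requests.post(url, data=body)
#   print(response.json()["predictions"])
print(body)
```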

Your Connect server can use only one TensorFlow Serving binary at this time. If you provide multiple TensorFlow.Executable entries, only the binary with the highest version is used.