Configure your Helm chart values
Posit maintains a Helm chart that is required for deploying Posit Connect on Kubernetes. It is highly configurable and supports multiple deployment options to meet your organization’s requirements.
The values.yaml file overrides defaults in the Helm chart. Use the steps below to set values for the initial deployment. After validating the deployment, update this file with your production values.
The config section of the values.yaml file maps to Connect’s rstudio-connect.gcfg configuration file. See the Helm Chart Reference for details on the mapping.
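As a sketch of how that mapping works: each top-level key under config becomes a [Section] in rstudio-connect.gcfg, and each nested key becomes a setting in that section. The Server.Address value below is a hypothetical placeholder, not a recommended setting:

```yaml
# Illustrative only: shows how a `config` entry in values.yaml
# renders into rstudio-connect.gcfg.
config:
  Server:
    Address: "https://connect.example.com"  # placeholder URL
# Renders in rstudio-connect.gcfg as:
#   [Server]
#   Address = https://connect.example.com
```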
Additional example values.yaml files detailing customizations of ingress, authentication, and storage are available from the Helm repository site.
Step 1: Create your initial values.yaml file
Create a file called values.yaml with the following contents:
values.yaml
# Controls how many instances of Posit Connect are created.
replicas: 1
# Mounts the license file from the Secret created during cluster preparation.
license:
  file:
    # Replace with the name of your license file secret and key.
    secret: posit-connect-license
    secretKey: posit-connect.lic
# Configures shared storage for the Posit Connect pod and content pods.
sharedStorage:
  create: true
  mount: true
  # The name of the PVC created for Connect's data directory.
  name: connect-pvc
  # The StorageClass for Connect's data directory. Must support RWX.
  # Replace with your storage class name.
  storageClassName: connect-nfs
  requests:
    storage: 100G
# Adds an environment variable containing the PostgreSQL password from a Secret.
pod:
  env:
    - name: CONNECT_POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          # Replace with the name of your database password secret and key.
          name: posit-connect-database
          key: password
# Enables off-host execution using the direct Kubernetes runner.
backends:
  kubernetes:
    enabled: true
# The config section maps to Connect's rstudio-connect.gcfg configuration file.
config:
  # Configures the PostgreSQL connection.
  Database:
    Provider: "Postgres"
  Postgres:
    URL: "postgres://connect@connect-db-postgresql.posit-connect.svc.cluster.local:5432/connect?sslmode=disable"
    # Set the password from a Secret via pod.env above rather than here.

Define content execution environments
List images to make available for content deployment under executionEnvironments in values.yaml. Connect selects a compatible environment for each piece of content based on the runtime versions it requires. See the Helm chart reference for execution environments for the full schema.
Posit publishes images to ghcr.io/posit-dev/connect-content, each of which contains versions of Python, Quarto, and R. You can also build your own image that meets the Execution Environment requirements.
Add the following to your values.yaml, adjusting the image tag and installation versions to match the runtimes your users need:
values.yaml
executionEnvironments:
  - name: ghcr.io/posit-dev/connect-content:R4.5.2-python3.14.3-ubuntu-24.04
    title: "R 4.5.2 / Python 3.14.3 / Quarto 1.8.27"
    matching: any
    r:
      installations:
        - version: "4.5.2"
          path: /opt/R/4.5.2/bin/R
    python:
      installations:
        - version: "3.14.3"
          path: /opt/python/3.14.3/bin/python3
    quarto:
      installations:
        - version: "1.8.27"
          path: /opt/quarto/1.8.27/bin/quarto

Repeat the entry for each additional image. Changes to executionEnvironments take effect on every helm upgrade; no Connect restart is required.
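For example, after adding or changing an execution environment, you might roll out the new values with a command along these lines (the release name and namespace are assumptions; substitute the ones used in your deployment):

```shell
# Re-render the chart with the updated values.yaml.
# The "posit-connect" release name and namespace are illustrative placeholders.
helm upgrade posit-connect rstudio/rstudio-connect \
  --namespace posit-connect \
  --values values.yaml
```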
Step 2: Replace the sample values
Replace the sample values with the values for your PostgreSQL database and shared storage in values.yaml.
To view the chart’s entire set of default values, run:
Terminal
helm show values rstudio/rstudio-connect

Even if you plan to eventually implement a Highly Available (HA) topology, we strongly recommend that the initial deployment use a single node. We provide post-deployment instructions to configure the desired number of HA instances.
Using external storage
If you use external storage for Connect’s data directory (Amazon EFS, Azure NetApp Files, or similar), choose between dynamic provisioning and static provisioning below.
For cloud-specific guidance, see the reference architectures for AWS and Azure.
Dynamic provisioning (recommended)
With dynamic provisioning, you install a CSI driver for your storage backend and create a StorageClass. The Helm chart’s PVC automatically provisions a PersistentVolume. Install the CSI driver before creating the StorageClass:
- Amazon EFS: Amazon EFS CSI driver
- Azure NetApp Files: Azure NetApp Files CSI driver
After installing the driver, create a StorageClass that references it and set sharedStorage.storageClassName in your values.yaml to that StorageClass name. The Helm chart handles PVC creation.
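For example, with the Amazon EFS CSI driver a StorageClass might look like the following sketch (the name, fileSystemId, and directoryPerms values are placeholders to replace with your own):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: connect-efs              # set sharedStorage.storageClassName to this
provisioner: efs.csi.aws.com     # provided by the Amazon EFS CSI driver
parameters:
  provisioningMode: efs-ap       # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0  # placeholder: your EFS file system ID
  directoryPerms: "700"
```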
Static provisioning
If you manage storage outside of Kubernetes or use an on-premises NFS server, you can configure storage statically. No CSI driver is required.
Option 1: Using a PersistentVolumeClaim
Step 1: Create a no-op StorageClass
This StorageClass is specified in the Helm chart’s values. The Helm chart uses it when creating the PersistentVolumeClaim for Connect’s data directory. The kubernetes.io/no-provisioner provisioner tells Kubernetes not to dynamically provision a volume. You create one manually in the next step.
Terminal
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: connect-external-storage
provisioner: kubernetes.io/no-provisioner
EOF

Step 2: Create a PersistentVolume
In this step we create a PersistentVolume that meets the criteria of the PersistentVolumeClaim that will later be created by the Helm chart.
Verify the location of your Posit Connect data directory on the external storage instance, and ensure that it matches the spec.nfs.path value in your PersistentVolume spec.
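If the storage is exposed over NFS, one quick check (assuming the NFS client utilities, which provide showmount, are installed on your workstation) is to list the server's exports and confirm the path you plan to use:

```shell
# List exports advertised by the NFS server; the chosen export must
# match spec.nfs.path in the PersistentVolume created next.
showmount -e <your-external-storage-endpoint>
```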
Terminal
# modify these values to match your environment
CONNECT_NFS_SERVER=<your-external-storage-endpoint>
# this must match the root of your Connect data directory on
# external storage
CONNECT_NFS_EXPORT_PATH=<your-external-storage-export-path>
# you may modify this value to change the amount of storage that is
# available to Posit Connect
CONNECT_STORAGE_VOLUME=100G
# create a PV, backed by your external share
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: connect-pv
spec:
  storageClassName: connect-external-storage
  capacity:
    storage: ${CONNECT_STORAGE_VOLUME}
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: ${CONNECT_NFS_EXPORT_PATH}
    server: ${CONNECT_NFS_SERVER}
EOF

Step 3: Modify the Helm chart values
Update the Helm chart to use the storage class created in step 1 for Connect’s data directory PersistentVolumeClaim.
Modify the following values in your values.yaml:
values.yaml
sharedStorage:
  # Tell the chart to create a PVC for Connect's data directory.
  create: true
  # Tell the chart to mount this PVC to the Connect pod.
  mount: true
  # The name of the PVC that will be created for Connect's data directory.
  name: connect-pvc
  # The StorageClass to use for Connect's data directory. Must support RWX.
  storageClassName: connect-external-storage
  requests:
    # This should match the value used for CONNECT_STORAGE_VOLUME in the
    # previous step.
    storage: 100G

Option 2: Using a “raw” NFS volume
Posit Connect can also be configured to use an existing NFS server export with a raw NFS volume. This eliminates the need to create a PersistentVolumeClaim as we did in Option 1 above.
Step 1: Modify the Helm chart values
Configure the Connect pod to mount the NFS export at /var/lib/rstudio-connect and set the persistent storage location for content pods.
When using a raw NFS volume, disable the chart’s PVC-based shared storage to avoid conflicting mounts. Set sharedStorage.create: false and sharedStorage.mount: false as shown below.
values.yaml
# Disable PVC-based shared storage when using raw NFS.
sharedStorage:
  create: false
  mount: false
pod:
  volumes:
    - name: connect-data
      nfs:
        server: <your-external-storage-endpoint>
        path: <your-external-storage-export-path>
        readOnly: false
  volumeMounts:
    - mountPath: /var/lib/rstudio-connect
      name: connect-data
config:
  Server:
    DataDir: /var/lib/rstudio-connect
  Kubernetes:
    DataDirNFSHost: <your-external-storage-endpoint>
    DataDir: <your-external-storage-export-path>