# Upgrading to the direct Kubernetes runner

This guide covers upgrading Posit Connect Helm values from the Launcher (`launcher.enabled: true`) to the direct Kubernetes runner (`backends.kubernetes.enabled: true`). The direct Kubernetes runner manages content Jobs and Services using standard Kubernetes manifests, replacing the Launcher's template system.

The direct Kubernetes runner requires Posit Connect version 2026.04.0 or later.
## Minimal upgrade

If you haven't customized `launcher.templateValues`, the minimal upgrade is a single change: set `launcher.enabled: false` and `backends.kubernetes.enabled: true`.
| Launcher setting | Action |
|---|---|
| `launcher.enabled` | Set to `false` and set `backends.kubernetes.enabled: true` |
| `config.Launcher` | Move applicable values to `config.Kubernetes` (optional; only if you customized it) |
| `launcher.namespace` | Move to `backends.kubernetes.namespace` (optional; only if you customized it) |
| `launcher.defaultInitContainer.*` | Move to `backends.kubernetes.defaultInitContainer.*` (optional; same structure) |
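If you did customize the optional settings above, the moved values might look like the following sketch. The key paths come from the table; the namespace value is purely illustrative:

```yaml
backends:
  kubernetes:
    enabled: true
    namespace: rstudio-connect-content  # Was: launcher.namespace (illustrative value)

config:
  Kubernetes: {}  # Was: config.Launcher; move applicable values here
```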
**values.yaml**

```yaml
# Minimal upgrade from Launcher to direct Kubernetes runner.
#
# If your values.yaml contains modifications to launcher.templateValues or
# any other launcher.* fields, use the examples below to move these values
# to the new configuration structure.

# Disable the Launcher
launcher:
  enabled: false

# Enable the direct Kubernetes runner
backends:
  kubernetes:
    enabled: true
```

## Important notes
- **Service accounts** – The direct Kubernetes runner requires content service accounts to have the `connect.posit.co/service-account=true` label. Apply this label to any service account that content should be allowed to use, including the default and any custom service accounts set via `serviceAccountName` in the job base:

  ```bash
  kubectl label sa <service-account-name> connect.posit.co/service-account=true -n <namespace>
  ```

- **Init container** – The chart auto-generates a `connect-content-init` init container. You do not need to configure it unless you want to customize the init container image or behavior via `backends.kubernetes.defaultInitContainer`. Custom init containers run alongside the auto-generated one.

- **Shared storage** – `sharedStorage.*` values work identically for both modes. No changes needed.
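If you manage service accounts declaratively, the required label can instead be set on the ServiceAccount manifest itself. This is a sketch; the account name and namespace are illustrative placeholders, and only the label key/value comes from this guide:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: connect-content-sa    # illustrative; use your content service account name
  namespace: rstudio-connect  # illustrative; use your Connect namespace
  labels:
    # Required by the direct Kubernetes runner to allow content to use this account.
    connect.posit.co/service-account: "true"
```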
## Values with alternatives

These values do not have a direct equivalent but have alternative approaches in the direct Kubernetes runner.
| Launcher value | Alternative |
|---|---|
| `launcher.customRuntimeYaml` | Use `executionEnvironments` instead. See Execution environments. |
| `launcher.additionalRuntimeImages` | Use `executionEnvironments` instead. |
| `launcher.launcherKubernetesProfilesConf` | The direct Kubernetes runner does not use profiles. Similar customization can be achieved using `defaultResourceJobBase`. |
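As a sketch, a default resource ceiling that was previously expressed in a profiles file could be set in `defaultResourceJobBase` like this (the limit values are illustrative, not recommendations; note that, unlike profiles, this applies to all content rather than per user or group):

```yaml
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            containers:
              - name: connect-content  # must use this exact container name
                resources:
                  limits:
                    cpu: "2"      # illustrative values
                    memory: "4Gi"
```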
## Upgrading templateValues customizations

If you customized `launcher.templateValues`, those settings move into `backends.kubernetes.defaultResourceJobBase` and `backends.kubernetes.defaultResourceServiceBase`. These are standard Kubernetes Job and Service specs, so Kubernetes documentation applies directly.

Any field supported by the Kubernetes Job or Service spec can be set in the resource base, not just the fields shown below.
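For example, Job-level fields such as `activeDeadlineSeconds` (a standard Kubernetes Job spec field) could in principle be set the same way. Whether Connect overrides any particular field is not covered by this guide, so treat this as a sketch:

```yaml
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        # Standard Kubernetes Job spec field, shown for illustration only:
        # fail content jobs still running after 24 hours.
        activeDeadlineSeconds: 86400
        template:
          spec:
            # Standard pod spec field: allow 60s for graceful shutdown.
            terminationGracePeriodSeconds: 60
```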
### Volumes and volume mounts

Note that `volumeMounts` moves from a pod-level setting to the `connect-content` container spec.
```yaml
# Before
launcher:
  templateValues:
    pod:
      volumes:
        - name: extra-config
          configMap:
            name: content-config
      volumeMounts:
        - name: extra-config
          mountPath: /etc/content-config
          readOnly: true
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            volumes:
              - name: extra-config
                configMap:
                  name: content-config
            containers:
              - name: connect-content
                volumeMounts:
                  - name: extra-config
                    mountPath: /etc/content-config
                    readOnly: true
```

### Init containers and sidecar containers
Custom `initContainers` run alongside the chart's auto-generated `connect-content-init`. `extraContainers` becomes a named entry in `containers`.
```yaml
# Before
launcher:
  templateValues:
    pod:
      initContainers:
        - name: wait-for-db
          image: busybox:latest
          command: ["sh", "-c", "echo waiting"]
      extraContainers:
        - name: log-forwarder
          image: fluent/fluent-bit:latest
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            initContainers:
              - name: wait-for-db
                image: busybox:latest
                command: ["sh", "-c", "echo waiting"]
            containers:
              - name: log-forwarder
                image: fluent/fluent-bit:latest
```

### Environment variables and resources
```yaml
# Before
launcher:
  templateValues:
    pod:
      env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
      resources:
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "2"
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            containers:
              - name: connect-content
                env:
                  - name: DATABASE_URL
                    valueFrom:
                      secretKeyRef:
                        name: app-secrets
                        key: database-url
                resources:
                  requests:
                    memory: "1Gi"
                    cpu: "500m"
                  limits:
                    memory: "4Gi"
                    cpu: "2"
```

### Node scheduling (nodeSelector, tolerations, affinity)
```yaml
# Before
launcher:
  templateValues:
    pod:
      nodeSelector:
        workload: content
      tolerations:
        - key: "dedicated"
          operator: "Equal"
          value: "content"
          effect: "NoSchedule"
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.example.com/content: "true"
                topologyKey: kubernetes.io/hostname
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            nodeSelector:
              workload: content
            tolerations:
              - key: "dedicated"
                operator: "Equal"
                value: "content"
                effect: "NoSchedule"
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 100
                    podAffinityTerm:
                      labelSelector:
                        matchLabels:
                          app.example.com/content: "true"
                      topologyKey: kubernetes.io/hostname
```

### Labels and annotations
```yaml
# Before
launcher:
  templateValues:
    job:
      labels:
        app.example.com/managed-by: connect
      annotations:
        app.example.com/team: data-science
    pod:
      labels:
        app.example.com/content: "true"
      annotations:
        app.example.com/tier: compute
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      metadata:
        labels:
          app.example.com/managed-by: connect
        annotations:
          app.example.com/team: data-science
      spec:
        template:
          metadata:
            labels:
              app.example.com/content: "true"
            annotations:
              app.example.com/tier: compute
```

### Service account and image pull secrets
```yaml
# Before
launcher:
  templateValues:
    pod:
      serviceAccountName: connect-content-sa
      imagePullSecrets:
        - name: registry-credentials
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            serviceAccountName: connect-content-sa
            imagePullSecrets:
              - name: registry-credentials
```

### Pod security context
```yaml
# Before
launcher:
  templateValues:
    pod:
      securityContext:
        runAsNonRoot: true
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceJobBase:
      spec:
        template:
          spec:
            securityContext:
              runAsNonRoot: true
```

### Service configuration
```yaml
# Before
launcher:
  templateValues:
    service:
      labels:
        app.example.com/service: content
      annotations:
        app.example.com/tier: compute
      type: ClusterIP
```

```yaml
# After
backends:
  kubernetes:
    defaultResourceServiceBase:
      metadata:
        labels:
          app.example.com/service: content
        annotations:
          app.example.com/tier: compute
      spec:
        type: ClusterIP
```

## Complete before/after reference

For a complete example with all fields, see the full before/after files:
**Before (Launcher-based):**

**values.yaml (launcher)**

```yaml
# Original Launcher-based values with templateValues customizations.
# Each field is annotated with its equivalent path after upgrading to the direct Kubernetes runner.
# See rstudio-connect-customized-upgrade.yaml for the upgraded version.
sharedStorage:
  create: true
  mount: true
  storageClassName: nfs-sc-rwx
  requests:
    storage: 100G

launcher:
  enabled: true
  templateValues:
    job:
      labels: # Moves to: backends.kubernetes.defaultResourceJobBase.metadata.labels
        app.example.com/managed-by: connect
      annotations: # Moves to: backends.kubernetes.defaultResourceJobBase.metadata.annotations
        app.example.com/team: data-science
    pod:
      labels: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.metadata.labels
        app.example.com/content: "true"
      annotations: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.metadata.annotations
        app.example.com/tier: compute
      nodeSelector: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.nodeSelector
        workload: content
      tolerations: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.tolerations
        - key: "dedicated"
          operator: "Equal"
          value: "content"
          effect: "NoSchedule"
      affinity: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.affinity
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.example.com/content: "true"
                topologyKey: kubernetes.io/hostname
      securityContext: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.securityContext
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
      # defaultSecurityContext is also supported and maps to the same pod securityContext field in direct mode.
      # In the direct runner, runAsUser/runAsGroup/supplementalGroups are managed by Connect and are not user-overridable.
      serviceAccountName: connect-content-sa # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.serviceAccountName
      imagePullSecrets: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.imagePullSecrets
        - name: registry-credentials
      priorityClassName: "" # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.priorityClassName
      hostAliases: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.hostAliases
        - ip: "10.0.0.50"
          hostnames:
            - "internal-api.example.com"
      volumes: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.volumes
        - name: extra-config
          configMap:
            name: content-config
      initContainers: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.initContainers
        - name: wait-for-db
          image: busybox:latest
          command: ["sh", "-c", "echo waiting for dependencies"]
      extraContainers: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.containers (as a named container)
        - name: log-forwarder
          image: fluent/fluent-bit:latest
          resources:
            requests:
              cpu: "50m"
              memory: "64Mi"
      env: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.containers[name=connect-content].env
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: database-url
      resources: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.containers[name=connect-content].resources
        requests:
          memory: "1Gi"
          cpu: "500m"
        limits:
          memory: "4Gi"
          cpu: "2"
      imagePullPolicy: IfNotPresent # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.containers[name=connect-content].imagePullPolicy
      containerSecurityContext: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.containers[name=connect-content].securityContext (except runAsUser/runAsGroup)
        allowPrivilegeEscalation: false
      command: [] # No direct equivalent override (connect-content command is managed by Connect)
      volumeMounts: # Moves to: backends.kubernetes.defaultResourceJobBase.spec.template.spec.containers[name=connect-content].volumeMounts
        - name: extra-config
          mountPath: /etc/content-config
          readOnly: true
    service:
      labels: # Moves to: backends.kubernetes.defaultResourceServiceBase.metadata.labels
        app.example.com/service: content
      annotations: # Moves to: backends.kubernetes.defaultResourceServiceBase.metadata.annotations
        app.example.com/tier: compute
      type: ClusterIP # Moves to: backends.kubernetes.defaultResourceServiceBase.spec.type

config:
  # ... your existing config ...
  Launcher: # Moves to: config.Kubernetes
    DataDirPVCName: my-release-rstudio-connect-shared-storage # Your actual PVC name
```

**After (direct Kubernetes runner):**
**values.yaml (kubernetes)**

```yaml
# Upgraded values: direct Kubernetes runner with templateValues customizations.
# Each field is annotated with its original launcher.templateValues path.
# See rstudio-connect-customized-launcher.yaml for the original Launcher-based version.
sharedStorage:
  create: true
  mount: true
  storageClassName: nfs-sc-rwx # TODO: Change to your RWX StorageClass
  requests:
    storage: 100G

launcher:
  enabled: false

backends:
  kubernetes:
    enabled: true
    defaultResourceJobBase:
      metadata:
        labels: # Was: templateValues.job.labels
          app.example.com/managed-by: connect
        annotations: # Was: templateValues.job.annotations
          app.example.com/team: data-science
      spec:
        template:
          metadata:
            labels: # Was: templateValues.pod.labels
              app.example.com/content: "true"
            annotations: # Was: templateValues.pod.annotations
              app.example.com/tier: compute
          spec:
            nodeSelector: # Was: templateValues.pod.nodeSelector
              workload: content
            tolerations: # Was: templateValues.pod.tolerations
              - key: "dedicated"
                operator: "Equal"
                value: "content"
                effect: "NoSchedule"
            affinity: # Was: templateValues.pod.affinity
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 100
                    podAffinityTerm:
                      labelSelector:
                        matchLabels:
                          app.example.com/content: "true"
                      topologyKey: kubernetes.io/hostname
            securityContext: # Was: templateValues.pod.securityContext and pod.defaultSecurityContext
              # Direct runner manages runAsUser/runAsGroup/supplementalGroups from Connect RunAs config.
              # Keep only user-overridable pod securityContext fields here.
              fsGroup: 999
            serviceAccountName: connect-content-sa # Was: templateValues.pod.serviceAccountName
            # NOTE: SA must be labeled: kubectl label sa connect-content-sa connect.posit.co/service-account=true
            imagePullSecrets: # Was: templateValues.pod.imagePullSecrets
              - name: registry-credentials # TODO: Change to your image pull secret
            priorityClassName: "" # Was: templateValues.pod.priorityClassName
            hostAliases: # Was: templateValues.pod.hostAliases
              - ip: "10.0.0.50"
                hostnames:
                  - "internal-api.example.com"
            volumes: # Was: templateValues.pod.volumes
              - name: extra-config
                configMap:
                  name: content-config
            initContainers: # Was: templateValues.pod.initContainers
              # The chart auto-generates connect-content-init; custom init containers run alongside it.
              - name: wait-for-db
                image: busybox:latest
                command: ["sh", "-c", "echo waiting for dependencies"]
            containers:
              - name: log-forwarder # Was: templateValues.pod.extraContainers
                image: fluent/fluent-bit:latest
                resources:
                  requests:
                    cpu: "50m"
                    memory: "64Mi"
              - name: connect-content # Must use this exact name for the content container
                # templateValues.pod.command has no direct equivalent; connect-content command is runner-managed.
                env: # Was: templateValues.pod.env
                  - name: DATABASE_URL
                    valueFrom:
                      secretKeyRef:
                        name: app-secrets
                        key: database-url
                resources: # Was: templateValues.pod.resources
                  requests:
                    memory: "1Gi"
                    cpu: "500m"
                  limits:
                    memory: "4Gi"
                    cpu: "2"
                imagePullPolicy: IfNotPresent # Was: templateValues.pod.imagePullPolicy
                securityContext: # Was: templateValues.pod.containerSecurityContext
                  # runAsUser/runAsGroup are managed by Connect and cannot be overridden for connect-content.
                  allowPrivilegeEscalation: false
                volumeMounts: # Was: templateValues.pod.volumeMounts
                  - name: extra-config
                    mountPath: /etc/content-config
                    readOnly: true
    defaultResourceServiceBase:
      metadata:
        labels: # Was: templateValues.service.labels
          app.example.com/service: content
        annotations: # Was: templateValues.service.annotations
          app.example.com/tier: compute
      spec:
        type: ClusterIP # Was: templateValues.service.type

config:
  # ... your existing config ...
  Kubernetes: # Was: config.Launcher
    # DataDirPVCName must match your existing PVC name (typically <release>-rstudio-connect-shared-storage).
    # Alternatively, remove this and set sharedStorage.mountContent: true to let the chart set it automatically.
    DataDirPVCName: my-release-rstudio-connect-shared-storage # TODO: Change to your actual PVC name
```