# Integrating Posit Workbench with Kubernetes

## Overview

These steps describe how to integrate Posit Workbench, formerly RStudio Workbench, with Launcher and Kubernetes.

**Info:** Launcher is a new feature of RStudio Server Pro 1.2[^1] that is only available under named user licensing. RStudio Server Pro 1.2 without Launcher is available under existing server-based licensing. For questions about using Launcher with Workbench, please contact sales@rstudio.com.
## Prerequisites

This integration is intended to be performed on top of a base installation of Workbench.

- Installation of RStudio Server Pro 1.2.5 or higher, including RStudio Workbench 1.4
- NFS server that is configured with Workbench for home directory project storage
- Kubernetes cluster:
  - Kubernetes API endpoint
  - Kubernetes cluster CA certificate
- Access to `kubectl` to create namespaces, service accounts, cluster roles, and role bindings
- Access to a Docker image registry (if working within an offline environment)
## Pre-Flight Configuration Checks

**Verifying active Kubernetes worker nodes**

1. On a machine with `kubectl` configured, ensure that you have one or more worker nodes that are ready to accept pods as part of the Kubernetes cluster by running the following command:

   ```bash
   $ kubectl get nodes
   NAME                            STATUS   ROLES    AGE   VERSION
   ip-172-31-12-54.ec2.internal    Ready    <none>   90d   v1.11.5
   ip-172-31-15-141.ec2.internal   Ready    <none>   90d   v1.11.5
   ip-172-31-18-59.ec2.internal    Ready    <none>   90d   v1.11.5
   ip-172-31-20-112.ec2.internal   Ready    <none>   90d   v1.11.5
   ```
**Verifying functionality with a test deployment**

1. On a machine with `kubectl` configured, ensure that you are able to deploy a sample application to your Kubernetes cluster by running the following command:

   ```bash
   $ kubectl create deployment hello-node --image=gcr.io/google-samples/node-hello:1.0
   ```

2. Confirm that the pod is running by using the following command:

   ```bash
   $ kubectl get pods
   NAME                          READY   STATUS    RESTARTS   AGE
   hello-node-6d6cd9679f-mllr7   1/1     Running   0          1m
   ```

3. Now, you can clean up the test deployment by running the following command:

   ```bash
   $ kubectl delete deployment hello-node
   deployment.extensions "hello-node" deleted
   ```
## Step 1. Configure Workbench with Launcher

1. Add the following lines to the Workbench configuration file:

   File: `/etc/rstudio/rserver.conf`

   ```ini
   # Launcher Config
   launcher-address=127.0.0.1
   launcher-port=5559
   launcher-sessions-enabled=1
   launcher-default-cluster=Kubernetes
   launcher-sessions-callback-address=http://<SERVER-ADDRESS>:8787
   launcher-sessions-container-run-as-root=0
   launcher-sessions-create-container-user=1
   ```

2. We recommend the following:

   - In the `launcher-sessions-callback-address` setting, replace `<SERVER-ADDRESS>` with the DNS name or IP address of Workbench.
   - Change the protocol and port if you are using HTTPS or a different port.

   **Note:** The `<SERVER-ADDRESS>` needs to be reachable from the containers in Kubernetes to the instance of Workbench.
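For example, if Workbench is served over HTTPS on the standard port, the callback setting would look like the following sketch (the hostname `workbench.example.com` is hypothetical; substitute your own):

```ini
# Hypothetical example: Workbench behind HTTPS on port 443
launcher-sessions-callback-address=https://workbench.example.com
```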
## Step 2. Configure Launcher settings and plugins

1. Add the following lines to the Launcher configuration file:

   File: `/etc/rstudio/launcher.conf`

   ```ini
   [server]
   address=127.0.0.1
   port=5559
   server-user=rstudio-server
   admin-group=rstudio-server
   authorization-enabled=1
   thread-pool-size=4
   enable-debug-logging=1

   [cluster]
   name=Kubernetes
   type=Kubernetes
   ```
## Step 3. Configure profile for Launcher Kubernetes plugin

1. Add the following lines to the Launcher profiles configuration file:

   File: `/etc/rstudio/launcher.kubernetes.profiles.conf`

   ```ini
   [*]
   default-cpus=1
   default-mem-mb=512
   max-cpus=2
   max-mem-mb=1024
   container-images=rstudio/r-session-complete:centos7-2023.03.1-446.pro1
   default-container-image=rstudio/r-session-complete:centos7-2023.03.1-446.pro1
   allow-unknown-images=0
   ```

For more information on using Docker images with Launcher, refer to the Support article on Using Docker images with Workbench, Launcher, and Kubernetes.
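Profiles can also be scoped more narrowly than the `[*]` section, which applies to all users. As a sketch (the group name `powerusers` is hypothetical), a section keyed by a Linux group grants larger limits to members of that group, with other settings falling back to `[*]`:

```ini
# Hypothetical override: members of the "powerusers" group get larger limits
[@powerusers]
max-cpus=4
max-mem-mb=4096
```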
## Step 4. Provision and configure NFS server

Shared home directory storage via NFS is required for configurations of Workbench and Launcher. Workbench stores project data for each user in their respective home directory.

1. Perform the following steps in your environment:

   - Provision an NFS server that exports the `/home` directory. We recommend configuring an NFS server on a machine that runs separately from Workbench and Launcher.
   - On the machine with Workbench and Launcher, mount the NFS share at `/home`.

   **Note:** As with any NFS configuration, all machines (e.g., the machine with the NFS server and the machine with Workbench and Launcher) should have the same users with matching user IDs and group IDs to avoid permission or ownership issues across NFS client machines.
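One quick way to check the matching-ID requirement is to print each user's numeric IDs on every host and compare the output. The helper below is only a sketch; `root` is used purely as a demonstration username:

```shell
# Print "uid:gid" for a user so the output can be diffed across hosts.
uid_gid() {
  printf '%s:%s\n' "$(id -u "$1")" "$(id -g "$1")"
}

# Run on the NFS server and on the Workbench host; the two outputs
# must match for each Workbench user.
uid_gid root   # → 0:0 on typical Linux systems
```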
## Step 5. Configure NFS mounts for Launcher

1. Add the following lines to the Launcher mounts configuration file. This entry specifies the NFS server and mount path that the containers will use to mount the home directory for each user:

   File: `/etc/rstudio/launcher-mounts`

   ```
   # Required home directory mount for RSW, Launcher, and Kubernetes
   Host: <NFS-IP-ADDRESS>
   Path: /home/{USER}
   MountPath: /home/{USER}
   ReadOnly: false
   Cluster: Kubernetes
   ```

2. Note the following:

   - Replace `<NFS-IP-ADDRESS>` with the IP address of your NFS server.
   - The `Path` and `MountPath` contain the special variable `{USER}` to indicate that the user's name will be substituted when the container starts, so there is no need to change that variable in this configuration file.
   - The `Path` is the source directory of the mount, i.e., the home directory path within the NFS server. Replace it with the correct path if it is different from `/home/`.
   - The `MountPath` is the path within the container that the home directory will be mounted to. It must match how the home directory is mounted on the Workbench server. Replace it with the correct path if it is different from `/home/`.

**Note:** Shared home directory storage via NFS is required for configurations of Workbench and Launcher. Therefore, the configuration section shown above is required in the `/etc/rstudio/launcher-mounts` configuration file for Workbench and Launcher to function with Kubernetes.

Additional NFS mounts can be added to this same configuration file to make other read-only or read-write file storage mounts available within remote session containers.
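As a sketch of such an additional entry (the path `/data/shared` is hypothetical), a read-only mount for shared data would be appended as a second block in the same file:

```
# Hypothetical read-only mount for shared datasets
Host: <NFS-IP-ADDRESS>
Path: /data/shared
MountPath: /data/shared
ReadOnly: true
Cluster: Kubernetes
```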
## Step 6. Create Kubernetes resources for Launcher sessions and Workbench jobs

1. Run the following commands in a terminal to create the `rstudio` namespace and required service account, cluster role, and role bindings:

   ```bash
   $ kubectl create namespace rstudio
   $ kubectl create serviceaccount job-launcher --namespace rstudio
   $ kubectl create rolebinding job-launcher-admin \
       --clusterrole=cluster-admin \
       --group=system:serviceaccounts:rstudio \
       --namespace=rstudio
   $ kubectl create clusterrole job-launcher-clusters \
       --verb=get,watch,list \
       --resource=nodes
   $ kubectl create clusterrolebinding job-launcher-list-clusters \
       --clusterrole=job-launcher-clusters \
       --group=system:serviceaccounts:rstudio
   ```

2. As of Kubernetes 1.24, tokens are no longer automatically generated for service accounts. When using Kubernetes 1.24+, run the following command to manually create a token:

   ```bash
   $ kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Secret
   type: kubernetes.io/service-account-token
   metadata:
     name: job-launcher-token
     namespace: rstudio
     annotations:
       kubernetes.io/service-account.name: job-launcher
   EOF
   ```

   Be sure that the account name specified in the line `kubernetes.io/service-account.name: job-launcher` exactly matches the service account name. Kubernetes uses this annotation to match the service account to this token secret.

   It is also expected that service accounts will no longer list secrets by default. Even once this token secret is created, the service account will still show 0 secrets available (assuming it has just been created and not previously modified):

   ```bash
   $ kubectl get serviceaccounts
   NAME           SECRETS   AGE
   default        0         1d
   job-launcher   0         1d
   ```
**(Alternative) Using a custom role instead of the `cluster-admin` role**

The default steps above use the `cluster-admin` role on the Kubernetes cluster. If you are unable to use the `cluster-admin` role, then you can use a custom role that has full access to the `rstudio` namespace.

In this case, you can run the following commands in a terminal to create the `rstudio` namespace and required service account, custom role, cluster role, and role bindings:

```bash
$ kubectl create namespace rstudio
$ kubectl create serviceaccount job-launcher --namespace rstudio

# Create a role with full access to the rstudio namespace
$ kubectl create role rstudio-full-access \
    --verb='*' \
    --resource='*.*' \
    --namespace=rstudio

# Bind the new role to the service account
$ kubectl create rolebinding job-launcher-admin \
    --role=rstudio-full-access \
    --group=system:serviceaccounts:rstudio \
    --namespace=rstudio

$ kubectl create clusterrole job-launcher-clusters \
    --verb=get,watch,list \
    --resource=nodes
$ kubectl create clusterrolebinding job-launcher-list-clusters \
    --clusterrole=job-launcher-clusters \
    --group=system:serviceaccounts:rstudio
```
**(Optional) Perform these steps if your Kubernetes cluster doesn't have impersonation enabled**

With the default configuration of most Kubernetes distributions, the above steps should be sufficient to allow Launcher session containers to run as the end user who created the session. If your Kubernetes cluster does not have impersonation enabled, then you can use a custom cluster role and role binding that allow for impersonation.

After you run the above steps, run the following additional commands in a terminal to create cluster role and role binding resources that allow for impersonation:

```bash
$ kubectl create clusterrole job-launcher-api \
    --verb=impersonate \
    --resource=users,groups,serviceaccounts
$ kubectl create rolebinding job-launcher-impersonation \
    --clusterrole=job-launcher-api \
    --group=system:serviceaccounts:rstudio \
    --namespace=rstudio
```

Refer to the Launcher section of the Workbench Administration Guide for more information about how Launcher creates the session user within the container.

Refer to the user impersonation section of the Kubernetes documentation for more information about authentication and impersonation in Kubernetes.
## Step 7. Configure Launcher with Kubernetes

1. Obtain the Kubernetes token for the service account in the `rstudio` namespace by running the following command in your terminal:

   ```bash
   $ kubectl get secret $(kubectl get serviceaccount job-launcher --namespace=rstudio -o jsonpath='{.secrets[0].name}') --namespace=rstudio -o jsonpath='{.data.token}' | base64 -d && echo
   ```

   In Kubernetes 1.24+, you must instead use the secret that you created in Step 6:

   ```bash
   $ kubectl get secret job-launcher-token --namespace=rstudio -o jsonpath='{.data.token}' | base64 -d && echo
   ```

2. Add the following lines to the Launcher Kubernetes configuration file, where `<KUBERNETES-API-ENDPOINT>` is the URL for the Kubernetes API, `<KUBERNETES-CLUSTER-TOKEN>` is the Kubernetes service account token from the `kubectl get secret` command above, and `<BASE-64-ENCODED-CA-CERTIFICATE>` is the base64-encoded CA certificate for the Kubernetes API:

   File: `/etc/rstudio/launcher.kubernetes.conf`

   ```ini
   api-url=<KUBERNETES-API-ENDPOINT>
   auth-token=<KUBERNETES-CLUSTER-TOKEN>
   certificate-authority=<BASE-64-ENCODED-CA-CERTIFICATE>
   ```

**Note:** You can typically locate these values from your Kubernetes cluster console or dashboard.
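If you have a working kubeconfig on hand, these values can also be read directly from it. The commands below are a sketch and assume your current kubeconfig context points at the target cluster; note that the `certificate-authority-data` field in a kubeconfig is already base64-encoded, which is the form `launcher.kubernetes.conf` expects:

```bash
# API endpoint (api-url)
$ kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.server}' && echo

# Base64-encoded CA certificate (certificate-authority)
$ kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' && echo
```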
## Step 8. Restart Workbench and Launcher Services

1. Run the following commands to restart the services:

   ```bash
   $ sudo rstudio-server restart
   $ sudo rstudio-launcher restart
   ```
## Step 9. Test Workbench with Launcher and Kubernetes

1. Run the following commands to test the installation and configuration of Workbench with Launcher and Kubernetes:

   ```bash
   $ sudo rstudio-server stop
   $ sudo rstudio-server verify-installation --verify-user=<USER>
   $ sudo rstudio-server start
   ```

   **Note:** Replace `<USER>` with a valid username of a user that is set up to run Workbench in your installation. You only need to run this test once for one valid user to verify that Workbench and Launcher can successfully communicate with Kubernetes and start sessions/Workbench jobs.

For more information on using the Launcher verification tool, refer to the Troubleshooting section in the Workbench Administration Guide.
## Additional information

### Use Custom Docker Images

You can extend or build your own custom Docker images to use with Workbench and Kubernetes with different versions of R, R packages, or system packages.

For more information on using custom Docker images, refer to the support article on Using Docker images with Workbench, Launcher, and Kubernetes.

### Perform Additional Configuration

For more information on configuring Workbench and Launcher, including configuring additional shared file storage mounts, environment variables, and ports, refer to the Workbench Administration Guide.

### Troubleshooting Workbench and Kubernetes

Refer to the documentation page on Troubleshooting Launcher and Kubernetes in Workbench for additional information on troubleshooting Workbench with Launcher and Kubernetes.
[^1]: We will continue to use the RStudio Server Pro name for references to versions prior to 1.4.