The architectures below are general deployment architectures for Posit Workbench, providing a mental model for the available deployment options and the requirements to support each variation.
The following section provides specific reference architectures organized by target environment.
Target environments
Posit Workbench can be deployed on-premises or within a cloud environment.
Workbench on a single server
In this configuration, Workbench is installed on a single Linux server and enables:
Access to RStudio, Jupyter Notebook, JupyterLab, and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of Python and R
Workbench with Launcher
In this configuration, Workbench is installed on one or more Linux servers and is configured with Launcher and an external backend where interactive sessions and non-interactive Workbench jobs are run.
Launcher is a plugin that allows you to run sessions and background jobs on external cluster resource managers. In addition to Kubernetes and Slurm, Launcher can be extended to work with other cluster resource managers using the Launcher SDK. AWS SageMaker and Altair Grid Engine are two examples where a partner used this SDK to develop a Launcher plugin for their respective cluster resource manager.
This enables:
Users to run sessions and jobs on external compute cluster(s)
Optional replicas for high availability
Access to RStudio, Jupyter Notebook, JupyterLab, and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of Python and R
Requirements
Users’ home directories must be stored on an external shared file server (typically an NFS server)
It is strongly recommended that Workbench metadata be stored on an external PostgreSQL database server
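As an illustration of these requirements, the sketch below shows a hypothetical NFS mount for user home directories and a Workbench database configuration pointing at an external PostgreSQL server. The hostnames, database name, and credentials are placeholders; the option names follow the documented /etc/rstudio/database.conf format.

    # /etc/fstab -- mount user home directories from a shared NFS server
    # (nfs.example.com and /export/home are placeholders)
    nfs.example.com:/export/home  /home  nfs  defaults  0  0

    # /etc/rstudio/database.conf -- store Workbench metadata in PostgreSQL
    provider=postgresql
    host=postgres.example.com
    port=5432
    database=rstudio
    username=rstudio
    password=<password>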
Generic architecture
The architecture below provides a mental model for how Launcher interacts with an external resource manager. For deployments using Kubernetes or Slurm, refer to the architectural overview diagrams in the corresponding sections for those backends.
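As a concrete sketch of this generic model, the snippets below show how a Workbench server might point to a Launcher instance and how that Launcher instance defines a single cluster. The addresses, users, and cluster name are placeholders; the option names follow the documented /etc/rstudio/rserver.conf and /etc/rstudio/launcher.conf formats, and the cluster type would be Kubernetes, Slurm, or a plugin built with the Launcher SDK.

    # /etc/rstudio/rserver.conf -- route sessions and jobs through Launcher
    launcher-address=127.0.0.1
    launcher-port=5559
    launcher-sessions-enabled=1

    # /etc/rstudio/launcher.conf -- Launcher server settings and one cluster definition
    [server]
    address=127.0.0.1
    port=5559
    server-user=rstudio-server
    admin-group=rstudio-server

    [cluster]
    name=Kubernetes
    type=Kubernetes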
Workbench with Launcher and Slurm
In this configuration, Workbench is installed on one or more Linux servers, is configured with Launcher and a Slurm cluster backend, and enables:
Users to run sessions and submit jobs via the Slurm Launcher against a Slurm cluster with an arbitrary number of compute nodes of a given type.
Optional replicas for high availability.
Access to RStudio, Jupyter Notebook, JupyterLab, and VS Code development IDEs.
Multiple concurrent sessions per user.
Use of multiple versions of Python and R.
Requirements
Users’ home directories must be stored on a shared file system (typically an NFS server). Shared storage typically includes /home, /scratch, data folders, and session containers.
Session components must be accessible from the Slurm compute nodes (either installed locally or mounted), or Singularity containers can be used to provide the session environment.
Users must exist on both the Workbench servers and the Slurm cluster nodes with consistent UIDs and GIDs, for example by pointing both to the same authentication provider.
An external PostgreSQL database server is required when using multiple Workbench servers.
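For reference, a minimal Launcher configuration for a Slurm backend might look like the sketch below. This assumes Slurm client binaries under /usr/bin and a dedicated slurm service user; the values are placeholders that would need to match the actual cluster, and the plugin options shown (slurm-service-user, slurm-bin-path) follow the Slurm Launcher plugin's configuration format.

    # /etc/rstudio/launcher.conf -- define a Slurm cluster for Launcher
    [cluster]
    name=Slurm
    type=Slurm

    # /etc/rstudio/launcher.slurm.conf -- Slurm plugin settings (placeholder values)
    slurm-service-user=slurm
    slurm-bin-path=/usr/bin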