Selecting the Right Architecture for Posit Workbench

The right architecture for your deployment of Posit Workbench depends on your team and use case. Before you can decide on the right architecture, you must understand how your team uses or will use Workbench, and how Workbench fits into your existing patterns for deploying and maintaining applications.

Workbench is a server-based development environment that allows users to start IDE sessions in RStudio, VSCode, JupyterLab, and Jupyter Notebooks. A single user can run one or more of these interactive sessions at the same time. Additionally, users can run non-interactive Workbench Jobs in either R or Python; these are typically used for long-running tasks that can run in the background.

Architecture considerations

Below we’ll cover some of the factors you need to understand in order to choose the right architecture for your team.

Number of users

The number of users concurrently accessing the system is one of the principal determinants of load. For example, if ten users have access to the system, but only one is on the system at any given time, then your load is one. It is good to have an idea of both the average number of concurrent sessions and the peak, as well as how regularly this peak occurs. You may also want to consider what platform adoption has looked like in the past. If you have been steadily adding users over time, you may want to budget resources for expected growth.

Workbench itself consumes only a small amount of system resources. The resources needed for good performance are therefore mostly dictated by the number of Python and R processes running inside user sessions or Workbench Jobs. By default, Python and R are both single-threaded processes that hold all of their data in memory. If users are not explicitly parallelizing their code, 1 or 2 cores per developer session is sufficient for most analytic workflows. The following rule of thumb estimates the number of CPUs needed for a single-server or load-balanced Workbench deployment that is not using parallelization or a resource manager like Kubernetes or Slurm:

Workbench CPU Rule-of-Thumb: (2 CPUs) × (number of concurrent analysts) × (number of active sessions per analyst)

If users are employing parallelization, a core is consumed by every worker, thread, or process they allocate, which increases the CPU requirement. Additionally, deployments with a high-performance computing cluster or Kubernetes on the back end have different considerations.
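As a rough illustration of the arithmetic, the sketch below applies the CPU rule of thumb in Python. The function name and the example figures (10 concurrent analysts averaging 1.5 active sessions each) are hypothetical and chosen only for the worked example:

```python
def estimate_cpus(concurrent_analysts, sessions_per_analyst, cpus_per_session=2):
    """Base CPU estimate for a single-server or load-balanced Workbench
    deployment with no Kubernetes/Slurm back end and no explicit
    parallelization. Add cores on top of this for any parallel workers."""
    return cpus_per_session * concurrent_analysts * sessions_per_analyst


# Example: 10 concurrent analysts averaging 1.5 active sessions each
# -> 2 x 10 x 1.5 = 30 CPUs
print(estimate_cpus(10, 1.5))  # 30.0
```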

Memory and disk requirements

It is important to consider how users interact with data, which impacts RAM and disk space.

If users work with large data sets, allocate more RAM accordingly: each active session needs enough memory for its data, with enough left over for every other concurrent user to do their work. The RAM required for large data sets can sometimes be reduced by offloading work to a database or to tools like Spark.

Workbench RAM Rule-of-Thumb: 2-4 GB RAM per user, minimum
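Applied to a hypothetical team, the RAM rule of thumb works out as in the sketch below; the 20-user count and the 4 GB per-user figure are assumptions chosen for illustration:

```python
def estimate_ram_gb(concurrent_users, gb_per_user=4):
    """Minimum RAM estimate using the 2-4 GB per user rule of thumb.
    Users who load large data sets into memory will need more."""
    return concurrent_users * gb_per_user


# Example: 20 concurrent users at 4 GB each -> 80 GB of RAM, minimum
print(estimate_ram_gb(20))  # 80
```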

If you have an existing Workbench installation that you want to scale, its current usage patterns give you more concrete numbers to work from than these rules of thumb.

You may also have different groups of users to consider, each of whom behaves differently on the system. Novice users may need governance to be sure they do not accidentally consume too many resources; on the other hand, some power users need governance so they do not intentionally consume too many resources. This can be managed proactively with User and Group Profiles in a single-server or load-balanced cluster architecture, or by specifying Kubernetes user and group profiles or Slurm user and group profiles.
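For example, on a single server or load-balanced cluster, per-user and per-group limits are set in the /etc/rstudio/profiles file. The sketch below is illustrative only: the group names are hypothetical, and the available settings (and their exact names) vary by Workbench version, so consult the Admin Guide for your release:

```ini
# /etc/rstudio/profiles -- illustrative sketch only; group names are
# hypothetical and available settings vary by Workbench version
[*]
# default memory limit applied to all users
max-memory-mb = 4096

[@new-analysts]
# hypothetical group of newer users with a tighter cap
max-memory-mb = 2048

[@power-users]
# hypothetical power-user group: a higher cap, but still bounded
max-memory-mb = 16384
```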

Disk requirements are driven by three factors:

  1. Space to install the Workbench application code, Python and R interpreters, and other integration tools like Quarto and Jupyter
  2. Swap space used by the system in place of RAM when all of the system RAM is in use
  3. User home directory space used for both projects and user Python and R package libraries

Workbench Disk Rule-of-Thumb: 2 GB root storage + 8 GB swap + (typical user space × number of users)
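As a worked example with hypothetical figures (50 users, roughly 10 GB of home directory space each), the rule of thumb gives 2 + 8 + (10 × 50) = 510 GB:

```python
def estimate_disk_gb(n_users, typical_user_gb, root_gb=2, swap_gb=8):
    """Disk estimate: root storage + swap + aggregate home-directory space."""
    return root_gb + swap_gb + typical_user_gb * n_users


# Example: 50 users with ~10 GB of home-directory space each
# -> 2 + 8 + (10 x 50) = 510 GB
print(estimate_disk_gb(50, 10))  # 510
```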

Another factor to consider when thinking about disk requirements is the speed and type of disk. In general, it’s preferable for your application code and swap space to be on fast disks, like modern Solid State Drives (SSDs). User home directories can be on slower disks, since in most cases the data on them is going to be read into RAM when it is accessed. Given this, you may consider a smaller, fast SSD for the OS, Workbench, and dependencies, and a mounted, larger disk for user home directories.

Stability of workflows and workloads

A team with long-established workflows is likely to have a stable set of OS system dependencies. On the other hand, a team that is rapidly iterating on new workflows may require more frequent updates to system dependencies, or a distinct set of dependencies per project type. Variable dependencies may point toward an architecture that runs sessions outside the Workbench host, such as a Kubernetes or Slurm execution back end.

Similarly, if workloads are variable, it may be beneficial to have a system that can scale on demand. This can be achieved with a Kubernetes or Slurm back end. If workloads are more stable, or subject only to moderate spikes, a single server or load-balanced cluster with a resource buffer may be sufficient.

Expectation for uptime

Your availability or uptime expectations inform architecture decisions and affect how much resource buffer you build in. They can also determine whether spreading the deployment across several nodes is preferable: if one user occupies all the resources on a given machine, other users can still work on another.

Existing tools, skills, and organizational structures

A successful deployment of Workbench requires a team with the right skills and tools to manage it. Organizationally, you may have existing tools and processes for managing infrastructure that you can leverage. For example, Workbench with Kubernetes is managed using Helm. If this tooling is unfamiliar, consider architectures that are more in line with your organization's expertise and existing processes.

Architectures for Workbench

When you have a clear picture of your team’s use case and needs, you can start evaluating the best architecture for your team’s Workbench environment.

A single Posit Workbench server is how many teams get started. This architecture is the simplest, with no requirement for external shared storage. If you do not require high availability, scaling vertically by increasing the size of a single Workbench node can be a great strategy.

Below we show a matrix that provides a starting framework for thinking about which architecture best fits your needs. It is very likely that your organization has additional criteria that are critical to making this decision. For example, you may have specific software deployment patterns that you need to follow, like always deploying apps in a container.

In general, we recommend selecting the simplest possible architecture that meets your current and near-term needs, then growing the complexity and scale as needed.

| | Single Server | Load-balanced Cluster | Kubernetes | Slurm |
|---|---|---|---|---|
| Workload size | small to medium | medium to large | large | large |
| Can support High Availability | no | yes | yes | yes |
| Well-suited for variable workflows and/or scaling workloads | no | no | yes | yes |
| Relevant admin skills required | Linux system administration | Linux system administration | Linux system administration; familiarity with Docker, Kubernetes, and Helm | Linux system administration; cluster management tooling¹ |

¹ Administration is simplified if using a cluster management tool such as AWS ParallelCluster or Azure CycleCloud.
