# Local Plugin
The Local Job Launcher Plugin provides the capability to launch executables on the local machine (the same machine that the Launcher is running on). It also provides the capability of running arbitrary PAM profiles. All of the sandboxing capability is provided via `rsandbox`.
Notable features of this plugin include:
- User control over requested resource types and counts (CPU count, memory).
- User and Group Profiles for resources.
- Resource Profiles that allow administrators to predefine and label frequently used resource combinations. Users can choose those resource profiles by referring to the label rather than the resources.
## Configuration
The local plugin does not require configuration, and it is recommended that you do not change any of the defaults. If you want to use resource limits with the Local plugin, see Enabling resource limits via Cgroups V2 below.
/etc/rstudio/launcher.local.conf
Config Option | Description | Required (Y/N) | Default Value |
---|---|---|---|
server-user | User to run the executable as. The plugin should be started as root, and will lower its privilege to this user for normal execution. It is recommended not to change the default value, as this is populated by the Launcher service itself. | N | rstudio-server |
thread-pool-size | Size of the thread pool used by the plugin. It is recommended not to change the default value, as this is populated by the Launcher service itself. | N | Number of CPUs * 2 |
enable-debug-logging | Enables/disables verbose debug logging. Can be 1 (enabled) or 0 (disabled). | N | 0 |
scratch-path | Scratch directory where the plugin writes temporary state. | N | /var/lib/rstudio-launcher/{name of plugin} |
logging-dir | Specifies the path where debug logs should be written. | N | /var/log/rstudio/launcher |
job-expiry-hours | Number of hours before completed jobs are removed from the system | N | 24 |
save-unspecified-output | Enables/disables saving of stdout/stderr that was not specified in submitted jobs. This will allow users to view their output even if they do not explicitly save it, at the cost of disk space. | N | 1 |
rsandbox-path | Location of the rsandbox executable. | N | /usr/lib/rstudio-server/bin/rsandbox |
verify-ssl-certs | Whether or not to verify SSL certificates when connecting to other Launcher instances. Only applicable if connecting over HTTPS and load balancing is in use. For production use, you should always leave the default or have this set to true, but it can be disabled for testing purposes. | N | 1 |
unprivileged | Runs the Launcher in unprivileged mode. Child processes will not require root permissions. If the plugin cannot acquire root permissions it will run without root and will not change users or perform any impersonation. | N | 0 |
node-connection-timeout-seconds | The number of seconds to allow for the process to connect to load-balanced nodes before giving up on the connection. | N | 3 |
stream-idle-timeout-seconds | The number of seconds to allow a stream to a load-balanced node to be idle before it is timed out and reconnected. This is important to keep somewhat low, as network middleware and node crashes can cause these streams to become stale. | N | 300 (5 minutes) |
load-balancer-hostname | The hostname to use for load balancing. It is recommended to set this if using external load balancing and the hosts for a particular node are mismatched. | N | Defaults to system hostname |
load-balancer-preference | Specifies the preference for which load balancing mode to use. Once the mode has switched to the preference, the other load balancing type will no longer be used. Can be either external or nfs. | N | nfs |
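As an illustration only (the defaults above are recommended and normally should not be changed), a minimal `/etc/rstudio/launcher.local.conf` override might look like the sketch below. The option names are taken from the table above; the chosen values are arbitrary examples:

```ini
# /etc/rstudio/launcher.local.conf -- illustration only; defaults are recommended
enable-debug-logging=1
job-expiry-hours=48
```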
## Session Lifecycle
RStudio, Jupyter, and VS Code sessions are launched as child processes of the Local Job Launcher Plugin. This means that restarting Posit Workbench will not necessarily cause sessions to suspend or be terminated. However, restarting the Job Launcher will cause sessions to exit.
## Load balancing considerations
In order to effectively load balance the local plugin, the following must be true across each instance of the plugin that you intend to balance:
- The hostname of each system running the local plugin must be unique.
- Each local plugin must be configured identically, and the name of the local plugin must match across all instances in the load balancer pool.
- When using `nfs` load balancing, the `scratch-path` directory above must be located on shared storage so that all instances of the plugin can see the presence of other nodes. It is recommended that you create the directory on NFS first and change its owner to the `server-user` above so that the directory is writable by the plugin. These steps can be performed as follows (assuming the default configuration values are used):
```bash
sudo mkdir -p /var/lib/rstudio-launcher/Local
sudo chown rstudio-server /var/lib/rstudio-launcher/Local
# now, mount the path created above into all hosts that you will be load balancing
```
- If using `external` load balancing, the `load-balancer-hostname` must match what is detected by the external load balancer (such as Workbench).
- Each local plugin node must be able to directly connect to the Launcher service on the other nodes in the load balance pool.
- When each node comes online, it downloads all jobs from the other nodes in the load balance pool, so if you frequently have a large number of jobs in memory you may need to raise the `max-message-size` parameter in `/etc/rstudio/launcher.conf`, described above. The default of 5 MiB should be sufficient for a workload of approximately 1000 jobs (though this will vary based on average job size).
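As a sketch, raising the message size limit in `/etc/rstudio/launcher.conf` might look like the fragment below. The `[server]` section placement and the byte units are assumptions; the 10 MiB value is an arbitrary example:

```ini
# /etc/rstudio/launcher.conf -- hypothetical fragment; assumes the value is in bytes
[server]
max-message-size=10485760
```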
Once these steps have been completed, simply start each instance of the Launcher/Local plugin that you wish to load balance.
## Enabling resource limits via Cgroups V2
To use resource limits with the Local plugin, you must do the following:
- Enable cgroups in Workbench by appending `enable-cgroups=1` to your `launcher.conf` file.
- Enable cgroups v2 on your Linux distribution.
The following distros do not have cgroups v2 enabled by default, which is required to use resource limits with the Local plugin:
- Red Hat Enterprise Linux 8
- Ubuntu Focal
### Enable Cgroups V2 on Ubuntu Focal
- Edit `/etc/default/grub`.
- Append `systemd.unified_cgroup_hierarchy=1` to the `GRUB_CMDLINE_LINUX=` line.
- Run `update-grub` to apply the changes to the system.
- Reboot the system.
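After the edit, the relevant line in `/etc/default/grub` might look like the following sketch. Any kernel arguments already present on the line vary by system and should be kept; the flag is simply appended:

```ini
# /etc/default/grub -- existing arguments on this line will vary by system
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=1"
```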
### Enable Cgroups V2 on Red Hat Enterprise Linux 8 (RHEL8)
- Use the grubby tool to update the kernel to use cgroups v2:

```bash
sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=1"
```

- Edit `/etc/systemd/system.conf` and set `DefaultCPUAccounting=yes`.
- Reboot the system.
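After the edit, the relevant portion of `/etc/systemd/system.conf` would look like this sketch (in systemd's `system.conf`, this option lives in the `[Manager]` section):

```ini
# /etc/systemd/system.conf
[Manager]
DefaultCPUAccounting=yes
```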
### Verify Cgroups V2 is enabled on Linux
Verify that cgroups v2 is enabled on your distribution by running the following command:

```bash
stat -fc %T /sys/fs/cgroup
```

If cgroups v2 is enabled, the command outputs:

```
cgroup2fs
```
## User and Group Profiles
The Local plugin also allows you to specify user and group configuration profiles, similar to Posit Workbench’s profiles, in the configuration file `/etc/rstudio/launcher.local.profiles.conf` (or an arbitrary file as specified in `profile-config` within the main configuration; see above). These are entirely optional.
Profiles are divided into sections of three different types:

- Global (`[*]`)
- Per-group (`[@groupname]`)
- Per-user (`[username]`)
Here’s an example profiles file that illustrates each of these types:
/etc/rstudio/launcher.local.profiles.conf
```ini
[*]
max-cpus=2
max-mem-mb=1024

[@posit-power-users]
resource-profiles="medium"

[jsmith]
resource-profiles="small"
```
By default, this configuration specifies that users can launch jobs with a maximum of 1024 MB of memory and 2 CPUs. It also specifies that members of the posit-power-users group can use the medium resource profile, and the user jsmith can use the small one.
The profiles file is processed from top to bottom (i.e., settings matching the current user that occur later in the file always override ones that appeared earlier). The settings available in the file are described in more depth in the table below. Also, if the Local cluster has been configured with a maximum and/or default memory value, those values apply whenever a maximum or default value is not configured for a user.
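To illustrate the top-to-bottom precedence, consider the hypothetical fragment below: for the user jsmith, the `[jsmith]` section appears later than the global section, so its value wins and jsmith's jobs may use up to 4096 MB of memory, while all other users remain capped at 1024 MB:

```ini
# hypothetical /etc/rstudio/launcher.local.profiles.conf fragment
[*]
max-mem-mb=1024

[jsmith]
max-mem-mb=4096
```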
/etc/rstudio/launcher.local.profiles.conf
Config Option | Description | Required (Y/N) | Default Value |
---|---|---|---|
default-cpus | Number of CPUs available to a job by default if not specified by the job. | N | 0.0 (infinite - managed by Systemd) |
default-mem-mb | Number of MB of RAM available to a job by default if not specified by the job. | N | 0.0 (infinite - managed by Systemd) |
max-cpus | Maximum number of CPUs available to a job. Setting this to a negative value will disable setting CPUs on a job; if set, the value of default-cpus will always be used. | N | 0.0 (infinite - managed by Systemd) |
max-mem-mb | Maximum number of MB of RAM available to a job. Setting this to a negative value will disable setting memory on a job; if set, the value of default-mem-mb will always be used. | N | 0.0 (infinite - managed by Systemd) |
resource-profiles | Available resource profiles. See Resource Profiles. | N | |
allow-custom-resources | Whether jobs can use the custom resource profile. See Resource Profiles. | N | 1 |
## Resource Profiles
Resource profiles greatly simplify the task of assigning CPU and memory. They are configured in the optional `/etc/rstudio/launcher.local.resources.conf` file. For example:
/etc/rstudio/launcher.local.resources.conf
```ini
[default]
name = "Default" # optional, derived from the section name when absent
cpus=1
mem-mb=4096

[small]
cpus=1
mem-mb=512

[hugemem]
name = "Huge Memory"
cpus=8
mem-mb=262144
```
By default, all profiles are available to all users, and jobs can also use a special `custom` profile to specify CPU and memory directly. However, users are still subject to the constraints in User and Group Profiles, and administrators may also limit access to individual resource profiles with that configuration file.
For example, suppose an administrator wants to restrict the resource profiles above such that (1) large CPU and memory jobs are only available to users in the `bioinformatics` group, and (2) only users in the `posit-power-users` group can use the `custom` resource profile to set their resources directly. This might result in the following `/etc/rstudio/launcher.local.profiles.conf` file:
/etc/rstudio/launcher.local.profiles.conf
```ini
[*]
resource-profiles=default,small
allow-custom-resources=0

[@bioinformatics]
resource-profiles=default,small,hugemem

[@posit-power-users]
resource-profiles=default,small,hugemem
allow-custom-resources=1
```
The settings available in each section of the `/etc/rstudio/launcher.local.resources.conf` file are described in more depth in the table below:
/etc/rstudio/launcher.local.resources.conf
Config Option | Description | Required (Y/N) | Default Value |
---|---|---|---|
name | A user-friendly name for the profile, e.g. Default (1 CPU, 4G mem) or m4.xlarge. | N | The section title |
cpus | The CPU limit. | Y | |
mem-mb | The memory limit, in megabytes. | Y | |