Run Chronicle on Kubernetes
Posit provides official container images and Helm charts for running Chronicle on Kubernetes. Using them involves two distinct tasks:
1. Deploying the Chronicle server to the same cluster as the supported professional product(s) using the official Helm chart.
2. Configuring the supported professional product(s) to run the Chronicle agent as a sidecar container that forwards data to this server.
Deploy the server on Kubernetes using Helm
Provided you have Helm installed and configured, you can add the Posit repository (if not already present) as follows:
Terminal
$ helm repo add posit https://helm.rstudio.com
And then install or update the chart for the Chronicle server:
Terminal
$ helm upgrade --install chronicle posit/posit-chronicle
(The above example does not use a values.yaml file, but you are likely to need one eventually.)
Customizing the server’s Helm chart
The Chronicle server has a small number of configuration options. On Kubernetes, typically you need to choose only where the server should write the collected data. Chronicle currently supports writing data to a persistent volume (the default) or to AWS S3.
To configure the Chronicle server on Kubernetes, modify the Helm values.yaml file used to deploy it. For example, to write data to AWS S3 on an EKS cluster with IAM roles for service accounts (IRSA) configured:
values.yaml
serviceAccount:
  enabled: true
  annotations:
    eks.amazonaws.com/role-arn: "<my-aws-iam-role-arn>"
config:
  s3Storage:
    enabled: true
    bucket: "posit-chronicle"
    region: "us-east-2"
(Since data written to a persistent volume can be difficult to access from outside the cluster, we generally recommend writing data to AWS S3 instead.)
The public Helm repository has complete documentation for available settings.
Run the agent as a sidecar container
If you are using the official Posit Connect Helm chart, adding something close to the following to your Helm values.yaml file should be sufficient:
Connect values.yaml
pod:
  sidecar:
    - name: chronicle-agent
      image: ghcr.io/rstudio/chronicle-agent:2025.03.0
      env:
        - name: CHRONICLE_SERVER_ADDRESS
          value: "http://posit-chronicle.chronicle.svc.cluster.local"
        - name: CHRONICLE_CONNECT_APIKEY
          value: "changeme"
We suggest storing potentially sensitive settings like the Connect API key in a Kubernetes secret rather than writing the value into the container spec directly.
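For instance, the sidecar's env entry can reference a Kubernetes Secret with valueFrom instead of an inline value. This is a minimal sketch: the Secret name chronicle-secrets and the key connect-api-key are hypothetical placeholders, not names from the chart.

```yaml
pod:
  sidecar:
    - name: chronicle-agent
      image: ghcr.io/rstudio/chronicle-agent:2025.03.0
      env:
        - name: CHRONICLE_SERVER_ADDRESS
          value: "http://posit-chronicle.chronicle.svc.cluster.local"
        # Pull the API key from a Secret rather than hard-coding it
        # into the container spec.
        - name: CHRONICLE_CONNECT_APIKEY
          valueFrom:
            secretKeyRef:
              name: chronicle-secrets  # hypothetical Secret name
              key: connect-api-key     # hypothetical key in that Secret
```

The Secret itself would be created separately (for example with kubectl create secret generic) and must exist in the same namespace as the Connect pod.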
And with the official Workbench chart, the story is very similar:
Workbench values.yaml
pod:
  sidecar:
    - name: chronicle-agent
      image: ghcr.io/rstudio/chronicle-agent:2025.03.0
      env:
        - name: CHRONICLE_SERVER_ADDRESS
          value: "http://posit-chronicle.chronicle.svc.cluster.local"
Both examples above assume that a Chronicle server is already running in the cluster in the chronicle namespace. The exact address may differ in practice.
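If you installed the server under a different release name or namespace, the in-cluster address follows the standard Kubernetes service DNS pattern; substitute the Chronicle service's actual name and namespace:

```
http://<service-name>.<namespace>.svc.cluster.local
```

You can confirm the service's name and namespace with kubectl get svc --all-namespaces.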