Kubernetes
The operator allows managing ShinyProxy servers on Kubernetes. See the Features page for an overview of all benefits.
Components and dependencies
This section describes the components and dependencies of the operator.
- Operator: the operator itself, which manages the different ShinyProxy servers.
- ShinyProxy: the ShinyProxy servers, which host the Shiny apps. You don't need to create these servers manually, since they are created by the operator. Instead, you define which servers to create, and the operator creates all necessary Kubernetes resources, without affecting any existing server or causing downtime.
- Redis: Redis is used by ShinyProxy (not by the operator) to implement session and app persistence. This ensures that when a ShinyProxy server is replaced, the user is still logged in and all apps remain active. Redis is always required when using the operator. When deploying Redis on the Kubernetes cluster, we advise using Redis Sentinel so that Redis runs in a highly available way. It's also possible to use a Redis server provided by cloud providers.
Warning
When deploying to production, it's important to change the password used to secure Redis. Each example (see below) already changes the password to `mySecurePassword12`. For an example, see the `overlays/1-namespaced/patches/redis.secret.yaml` file. Make sure to change the password before deploying; see changing the Redis password for instructions on how to change the password after deploying.
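As an illustration of what such an override looks like, below is a minimal, hedged sketch of a Kubernetes Secret holding the Redis password. It assumes the secret is named `redis` with a `redis-password` key (the same names referenced by the pod patches later on this page) and lives in the `shinyproxy` namespace; check the actual patch file in the repository for the exact structure used by the manifests:

```yaml
# Sketch only: the real patch file in the repository may be structured differently.
apiVersion: v1
kind: Secret
metadata:
  name: redis
  namespace: shinyproxy
type: Opaque
stringData:
  # replace this with a strong, unique password before deploying
  redis-password: mySecurePassword12
```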
Tutorial using minikube
This section provides a step-by-step tutorial on the basic deployment of the ShinyProxy Operator on minikube. The same steps can be used to deploy on production-grade Kubernetes clusters (e.g. AWS EKS). Although the deployment is performed using CLI tools, it's a good idea to use a GUI tool to connect to your Kubernetes cluster. This makes things more visual and allows you to easily observe what the operator does. Great tools are k9s and the official Kubernetes dashboard.
- This tutorial requires that you install some tools: the commands below use `minikube`, `kubectl` and `kustomize`.
- Start minikube:

  ```bash
  minikube start --addons=metrics-server,ingress
  ```

- Optionally install and open the Kubernetes dashboard using:

  ```bash
  minikube dashboard
  ```

- Clone the ShinyProxy Operator repository (this contains the manifests) and change the working directory:

  ```bash
  git clone https://github.com/openanalytics/shinyproxy-operator
  cd shinyproxy-operator/docs/deployment/overlays/1-namespaced
  ```
- Apply all resources:

  ```bash
  kustomize build . | kubectl apply -f - --server-side
  ```

  Note: this command may not finish successfully on the first attempt; for example, you could get the following message:

  ```
  unable to recognize "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"
  unable to recognize "STDIN": no matches for kind "ShinyProxy" in version "openanalytics.eu/v1"
  ```

  In this case, just re-run the command; the resources should then get created. (There is no way to specify the order of resources or the dependencies between resources in `kustomize`; re-running the command is the only workaround.) At this point, you can select the `shinyproxy` namespace in the Kubernetes dashboard.
- Wait for all the resources to start up. At this point the operator should start. It's now time to configure web access to the cluster. First get the IP of minikube using:

  ```bash
  minikube ip
  ```

  Next, add the following entries to `/etc/hosts`, replacing `MINIKUBE_IP` with the output of the previous command:

  ```
  MINIKUBE_IP shinyproxy-demo.local
  MINIKUBE_IP shinyproxy-demo2.local
  ```
- Once all deployments are finished, you can access ShinyProxy at `shinyproxy-demo.local`. You might get a security warning from your browser because of the invalid (self-signed) certificate. You can safely bypass this warning during this example.
- Wait until the ShinyProxy instance is fully started (before that, you will see a Not Found page).
- Log in to ShinyProxy using the username `jack` and the password `password`. Next, try to launch an app and keep this app running.
- Open the `resources/shinyproxy.shinyproxy.yaml` file. This file contains the complete configuration of ShinyProxy, which is managed using custom resources in Kubernetes. Next, make a change in the file, for example change the `title` property and instruct the operator to create two ShinyProxy replicas:

  ```yaml
  apiVersion: openanalytics.eu/v1
  kind: ShinyProxy
  metadata:
    name: shinyproxy
    namespace: shinyproxy
  spec:
    # ...
    proxy:
      store-mode: Redis
      stop-proxies-on-shutdown: false
      title: ShinyProxy 2
      # ...
    replicas: 2
    image: openanalytics/shinyproxy:3.2.0
    imagePullPolicy: Always
    fqdn: shinyproxy-demo.local
  ```
- Apply this change using `kubectl`:

  ```bash
  kubectl apply -f resources/shinyproxy.shinyproxy.yaml
  ```

  The operator now deploys a new ShinyProxy instance. The old instance is kept intact as long as a WebSocket connection is active on it, and it gets automatically removed once it no longer has any open WebSocket connections. New requests are immediately handled by the new server as soon as it's ready. Try going to the main page of ShinyProxy and check whether the change you made has been applied.
- Try the other examples. The following commands first remove the current example; next you can open another example (e.g. `2-clustered`) and deploy it using `kubectl`:

  ```bash
  kubectl delete namespace/shinyproxy
  kubectl delete namespace/shinyproxy-operator # may fail
  kubectl delete namespace/shinyproxy-dept2 # may fail
  kubectl delete namespace/my-namespace # may fail
  kubectl delete namespace/redis # may fail
  cd ../2-clustered
  kustomize build . | kubectl apply -f -
  ```
Additional examples
The Operator is designed to be flexible and to fit many types of deployments. The repository includes the following examples:
- Example 1:
  - Operator-mode: `namespaced`
  - Operator-namespace: `shinyproxy`
  - Redis-namespace: `shinyproxy`
  - ShinyProxy-namespace: `shinyproxy`
  - URLs: `https://shinyproxy-demo.local`

  This is a very simple deployment of the operator, where everything runs in the same namespace.
- Example 2:
  - Operator-mode: `clustered`
  - Operator-namespace: `shinyproxy-operator`
  - Redis-namespace: `redis`
  - ShinyProxy-namespace: `shinyproxy` and `shinyproxy-dept2`
  - URLs: `https://shinyproxy-demo.local` and `https://shinyproxy-demo2.local`

  In this example, the operator runs in `clustered` mode. Therefore, the operator looks in all namespaces for `ShinyProxy` resources and deploys these resources in their respective namespaces. This example also demonstrates how the Operator can be used in a multi-tenancy or multi-realm way. Each ShinyProxy server runs in its own namespace, isolated from the other servers. However, they're managed by a single operator.
- Example 3:
  - Operator-mode: `namespaced`
  - Operator-namespace: `shinyproxy`
  - Redis-namespace: `shinyproxy`
  - ShinyProxy-namespace: `shinyproxy`
  - URLs: `https://shinyproxy-demo.local`

  Similar to example 1; however, the `01_hello` app is now run in the `my-namespace` namespace instead of the `shinyproxy` namespace. In addition to the change in the `shinyproxy.shinyproxy.yaml` file, this configuration requires the definition of the extra namespace and the modification of the `ServiceAccount` of the ShinyProxy server.
- Example 4:
  - Operator-mode: `namespaced`
  - Operator-namespace: `shinyproxy`
  - Redis-namespace: `shinyproxy`
  - ShinyProxy-namespace: `shinyproxy`
  - URLs: `https://shinyproxy-demo.local/shinyproxy1/`, `https://shinyproxy-demo.local/shinyproxy2/` and `https://shinyproxy-demo.local/shinyproxy3/`

  Based on the second example, this example shows how multi-tenancy can be achieved using sub-paths instead of multiple domain names. Each ShinyProxy server is made available at the same domain name but at a different path under that domain name.
ShinyProxy configuration
In Kubernetes, the configuration of ShinyProxy is managed using custom resources. The `CustomResourceDefinition` of the operator can be found in the `bases/namespaced/operator/crd.yaml` file (the CRD is identical for `clustered` and `namespaced` deployments). Both the regular configuration of ShinyProxy and properties specific to the operator can be specified in the custom resource. The following properties are available:
- `spring`: config related to Spring, such as the Redis connection information
- `proxy`: the configuration of ShinyProxy; this is the same configuration as if you were manually deploying ShinyProxy
- `image`: the Docker image to use for the ShinyProxy server (e.g. `openanalytics/shinyproxy:3.2.0`)
- `imagePullPolicy`: the pull policy for the ShinyProxy image; the default value is `IfNotPresent`; valid options are `Never`, `IfNotPresent` and `Always`
- `fqdn`: the FQDN at which the service should be available, e.g. `shinyproxy-demo.local`
- `additionalFqdns`: (optional) a list of additional FQDNs that can be used to access this ShinyProxy server
- `replicas`: (optional) the number of ShinyProxy replicas to run
- `labels`: (optional) map of labels to add to the ShinyProxy pod
- `memory-request`: (optional) the minimum amount of memory available to a single ShinyProxy instance. Uses the same format as for apps, for example `1G`.
- `memory-limit`: (optional) the maximum amount of memory available to a single ShinyProxy instance. Uses the same format as for apps, for example `1G`.
- `cpu-limit`: (optional) the maximum amount of CPU time available to a single ShinyProxy instance. Uses the same format as for apps, for example `1` (= 1 CPU core).
- `dns`: (optional) list of DNS servers to be used by the ShinyProxy container
- `kubernetesPodTemplateSpecPatches`: allows patching the `PodTemplate` of the `ReplicaSet` created by the operator (see the example)
- `kubernetesIngressPatches`: allows patching the `Ingress` resources created by the operator (see the example)
- `kubernetesServicePatches`: allows patching the `Service` resources created by the operator (see the example)
- `appNamespaces`: a list of namespaces in which apps are deployed. This is only needed when you change the namespace of an app using the `kubernetes-pod-patches` feature. The namespaces of the operator and the ShinyProxy instance are automatically included.
- `antiAffinityTopologyKey`: the topology key to use in the anti-affinity configuration of the ShinyProxy pods
- `antiAffinityRequired`: if enabled, the anti-affinity configuration rules are `required` instead of `preferred`
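To illustrate how these properties fit together, here is a minimal, hedged sketch of a ShinyProxy resource combining a few of them; all values (title, FQDNs, resource sizes, topology key) are placeholders and should be adapted to your environment:

```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    store-mode: Redis
    stop-proxies-on-shutdown: false
    title: My ShinyProxy
    # ... apps, authentication, etc.
  image: openanalytics/shinyproxy:3.2.0
  imagePullPolicy: IfNotPresent
  fqdn: shinyproxy-demo.local
  additionalFqdns:
    - shinyproxy.example.com
  replicas: 2
  memory-request: 1G
  memory-limit: 1G
  cpu-limit: 1
  antiAffinityTopologyKey: kubernetes.io/hostname
```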
Modify the Ingress Resource
The ShinyProxy Operator automatically creates an ingress resource for each
ShinyProxy resource you create. This ingress resource points to the correct
Kubernetes service (which is also created by the operator). The created Ingress
resource contains everything that’s needed for a working ShinyProxy deployment.
However, in some cases it’s required to modify the resource. This can be
achieved using the kubernetesIngressPatches
field. This field should contain a
string which contains a list of JSON Patches to apply
to the Ingress resource. The above examples already include the following patch:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    # ...
  kubernetesIngressPatches: |
    - op: add
      path: /metadata/annotations
      value:
        nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/proxy-body-size: 300m
    - op: add
      path: /spec/ingressClassName
      value: nginx
    - op: add
      path: /spec/tls
      value:
        - hosts:
            - shinyproxy-demo.local
          # secretName: example # uncomment and change this line if needed
  image: openanalytics/shinyproxy:3.2.0
  imagePullPolicy: Always
  fqdn: shinyproxy-demo.local
```
The first patch adds some additional annotations to the Ingress resource, for example, in order to set up a redirect from HTTP to HTTPS. The second patch
changes the `ingressClassName` to `nginx`. Finally, the last patch configures TLS
for the ingress resource. In a production environment, you can uncomment the
line with the `secretName` to refer to a proper secret. Any patch is accepted,
but make sure that the resulting Ingress resource still works for the ShinyProxy
deployment. The ShinyProxy Operator logs the manifest before and after applying
the patch, which can be useful while creating the patches.
Note: the previous section only applies to version 2 of the operator. Version 1 behaves differently, since it used Skipper as an (intermediate) ingress controller.
Modify the ShinyProxy Pod
The Operator automatically creates a ReplicaSet for each ShinyProxy resource you
create. This ReplicaSet contains a PodTemplate
, which contains all necessary
settings for creating a proper ShinyProxy pod. In a lot of cases, it can be
useful to adapt this PodTemplate
for the specific context in which ShinyProxy
is running. For example, it’s a good idea to specify the resource requests and
limits, or sometimes it’s required to add a toleration to the pod. These
modifications can be achieved using the `kubernetesPodTemplateSpecPatches`
field.
This field should contain a string which contains a list
of JSON Patches to apply to the PodTemplate
. The
above examples already include the following patch:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    # ...
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/containers/0/env/-
      value:
        name: REDIS_PASSWORD
        valueFrom:
          secretKeyRef:
            name: redis
            key: redis-password
    - op: add
      path: /spec/containers/0/resources
      value:
        limits:
          cpu: 1
          memory: 1Gi
        requests:
          cpu: 0.5
          memory: 1Gi
    - op: add
      path: /spec/serviceAccountName
      value: shinyproxy-sa
  image: openanalytics/shinyproxy:3.2.0
  imagePullPolicy: Always
  fqdn: shinyproxy-demo.local
```
The above configuration contains three patches. The first patch adds an
environment variable with the password used for connecting to the Redis server.
The second patch configures the resource limits and requests of the ShinyProxy
pod. Finally, the last patch configures the ServiceAccount
of the pod.
Note: it’s important when using this feature to not break any existing configuration of the pod. For example, when you want to mount additional ConfigMaps, use the following configuration:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/volumes/-
      value:
        name: myconfig
        configMap:
          name: some-configmap
    - op: add
      path: /spec/containers/0/volumeMounts/-
      value:
        mountPath: /mnt/configmap
        name: myconfig
        readOnly: true
```
In this example, the `path` property of the patch always ends with a `-`; this indicates that the patch adds a new entry to the end of the existing array (e.g. `/spec/volumes`).
The following patch breaks the behavior of the ShinyProxy pod and should therefore not be used:
```yaml
# NOTE: this is a demo of a WRONG configuration - don't use
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/volumes
      value:
        - name: myconfig
          configMap:
            name: some-configmap
    - op: add
      path: /spec/containers/0/volumeMounts
      value:
        - mountPath: /mnt/configmap
          name: myconfig
          readOnly: true
```
This patch replaces the existing `/spec/volumes` and `/spec/containers/0/volumeMounts` arrays of the pod. The ShinyProxy Operator
automatically creates a mount for a ConfigMap which contains the ShinyProxy
configuration. By overriding these mounts, this ConfigMap isn't mounted and
the default (demo) configuration of ShinyProxy is loaded.
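As another illustration of the same mechanism, the following hedged sketch adds a toleration to the ShinyProxy pod (one of the use cases mentioned above). The taint key, value and effect are placeholders, and the patch assumes the pod template does not already define `/spec/tolerations`:

```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  # adds a tolerations array to the pod; adjust key/value/effect to match the taint on your nodes
  kubernetesPodTemplateSpecPatches: |
    - op: add
      path: /spec/tolerations
      value:
        - key: example.org/dedicated
          operator: Equal
          value: shinyproxy
          effect: NoSchedule
```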
Modify the Service Resource
The ShinyProxy Operator automatically creates a Service resource for each
ShinyProxy resource you create. The created Service resource contains everything
that’s needed for a working ShinyProxy deployment. However, in some cases it’s
required to modify the resource. This can be achieved using
the kubernetesServicePatches
field. This field should contain a string which
contains a list of JSON Patches to apply to the
Service resource. For example:
```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    # ...
  kubernetesServicePatches: |
    - op: add
      path: /metadata/annotations
      value:
        my-annotation: my-value
  image: openanalytics/shinyproxy:3.2.0
  imagePullPolicy: Always
  fqdn: shinyproxy-demo.local
```
This example patch adds the annotation my-annotation: my-value
to the Service
resource created by the operator.
Anti-affinity
The operator can create multiple replicas of ShinyProxy to achieve high availability and scaling. Simply add or change the `replicas` property to a number equal to or higher than 2. Starting with version 2.1.0, the operator automatically adds anti-affinity rules, such that Kubernetes tries not to schedule multiple ShinyProxy replicas on the same Kubernetes node. Note that this only has an effect when running multiple replicas of ShinyProxy. If Kubernetes is unable to satisfy the requirement, it still schedules multiple replicas on the same node. This behavior can be changed by setting `antiAffinityRequired` to `true` in your ShinyProxy configuration. It's also possible to change the topology by setting `antiAffinityTopologyKey`; e.g. to not run multiple replicas in the same availability zone, you can set this property to `topology.kubernetes.io/zone`.
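For example, a hedged sketch of a ShinyProxy resource that runs two replicas and requires them to be scheduled in different availability zones (all other fields omitted):

```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  # ... proxy configuration, image, fqdn, etc.
  replicas: 2
  antiAffinityRequired: true
  antiAffinityTopologyKey: topology.kubernetes.io/zone
```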
Note: ShinyProxy is designed to work with multiple replicas. However, there should always be only one replica of the ShinyProxy Operator. Therefore, never deploy multiple replicas of the operator.
Operator configuration
We try to keep the configuration of the Operator itself as minimal as possible.
Furthermore, we want the operator to work without configuration in most cases.
Nevertheless, for some specific cases some configuration options are available.
For now these options are specified using environment variables. All variables
start with the SPO
prefix, meaning ShinyProxyOperator.
- `SPO_ORCHESTRATOR`: (required) can either be `kubernetes` (default) or `docker`
- `SPO_MODE`: can either be `namespaced` or `clustered` (default). This specifies whether the operator should only look in its own namespace for ShinyProxy configurations or in all namespaces.
- `SPO_PROBE_INITIAL_DELAY`: specifies the initial delay of the readiness and liveness probes. This is useful when the used Kubernetes version doesn't support startup probes.
- `SPO_PROBE_FAILURE_THRESHOLD`: specifies the failure threshold of the readiness and liveness probes. This is useful when the used Kubernetes version doesn't support startup probes.
- `SPO_PROBE_TIMEOUT`: specifies the timeout in seconds of the readiness and liveness probes. This is useful when the used Kubernetes version doesn't support startup probes.
- `SPO_STARTUP_PROBE_INITIAL_DELAY`: specifies the initial delay of the startup probe. By default, this is 60 seconds.
- `SPO_LOG_LEVEL`: configures the log level of the operator, may be one of the following:
  - `OFF`: disables logging
  - `ERROR`
  - `WARN`
  - `INFO`
  - `DEBUG`: default (may change)
  - `TRACE`
  - `ALL`: enables all logging
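As an illustration, these variables can be set on the operator's container in its Deployment manifest (or via a kustomize patch). A minimal, hedged sketch, assuming the container is named `shinyproxy-operator` (the name in your manifests may differ):

```yaml
# fragment of the operator Deployment; only the env section is shown
spec:
  template:
    spec:
      containers:
        - name: shinyproxy-operator
          env:
            - name: SPO_ORCHESTRATOR
              value: kubernetes
            - name: SPO_MODE
              value: namespaced
            - name: SPO_LOG_LEVEL
              value: INFO
```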
Changing the Redis password
Each example changes the password to `mySecurePassword12`. It's important to change this password in your environment. Ideally, the password should be changed before deploying Redis for the first time, since changing the password after the initial deployment requires deleting all data.
In order to change the password after deployment:
Note: during this process ShinyProxy is stopped, all running apps are stopped and all users are logged out!
- change the password in the yaml file (e.g. in `overlays/1-namespaced/patches/redis.secret.yaml`)
- stop ShinyProxy by removing the ShinyProxy resource (pods of running apps are not automatically removed):

  ```bash
  kubectl delete shinyproxy -n shinyproxy shinyproxy
  ```

- delete all Redis related resources:

  ```bash
  kubectl delete statefulset -n shinyproxy redis-node
  kubectl delete pvc -n shinyproxy redis-data-redis-node-0
  kubectl delete pvc -n shinyproxy redis-data-redis-node-1
  kubectl delete pvc -n shinyproxy redis-data-redis-node-2
  ```
- wait for all related pods to be stopped and all resources to be removed
- check that the `PersistentVolumes` of Redis are removed using `kubectl get pv`
- re-deploy Redis and ShinyProxy:

  ```bash
  kustomize build . | kubectl apply -f - --server-side
  ```
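Optionally, you can verify that the new password is in place before logging back in to ShinyProxy. A hedged sketch, assuming the secret is named `redis` with a `redis-password` key in the `shinyproxy` namespace (as used by the pod patch examples above):

```bash
# print the decoded Redis password currently stored in the cluster
kubectl get secret redis -n shinyproxy -o jsonpath='{.data.redis-password}' | base64 -d
```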
Kubernetes versions
| | k8s 1.33.x | k8s 1.32.x | k8s 1.31.x | k8s 1.30.x | k8s 1.29.x | k8s 1.28.x | k8s 1.27.x | k8s 1.26.x | k8s 1.25.x | k8s 1.24.x | k8s 1.23.x | k8s 1.22.x | k8s >= v1.21.3 | k8s >= 1.20.10 | v1.19 | <= v1.18 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2.2.0 | ✓ | ✓ | ✓ | ✓ | | | | | | | | | | | | |
| 2.1.0 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |
| 2.0.0 | | | | | | ✓¹ | ✓¹ | ✓¹ | ✓¹ | ✓¹ | ✓¹ | ✓ | ✓ | ✓ | ✓ | ✓ |
Note:
- we only update this table when we run our automated tests on a specific Kubernetes version and after releasing new versions of the operator. However, in most cases the operator supports newer versions of Kubernetes than listed in this table. In addition, we see very few issues regarding compatibility between the operator and the Kubernetes API. Therefore, you can definitely try using a Kubernetes version that's not listed in this table.
- ¹ version 2.0.0 supports these Kubernetes versions, but might stop syncing after some time; this issue is solved in version 2.1.0
Upgrading
Note
See the general upgrading documentation as well.
Upgrade to 2.0.0
Be aware of these changes when updating to version 2.0.0:
- the old mechanism where cookies were used to assign users to specific
ShinyProxy servers is no longer used. Instead, as soon as a new server is
started, all new requests are handled by the new server, including
requests for existing apps. Only existing WebSocket connections stay open
on the old servers. This has multiple benefits:
- when a new server is started, users immediately use and see the configuration of that new server. In other words, if a new configuration includes a new app, this app is immediately available to all users (even if they’re using apps started on older servers)
- there is no longer a process of transferring users to new servers. Both the forced method and the manual method (where users have to click a button) are removed. Users immediately use the new configuration.
- apps can be run for a (very) long time, even if frequently updating the configuration and without having many old servers. Old servers are removed as soon as no WebSocket connections are running on that server.
- Skipper is no longer a dependency of the operator. There is no benefit in using Skipper with version two of the operator.
- the operator now requires ShinyProxy to store the active proxies in Redis. Therefore, since this release Redis takes a more critical role. When running Redis inside Kubernetes, it's best practice to use Redis Sentinel. This way Redis runs in a highly available mode, using three replicas. Compared to running a single Redis server, this prevents a single point of failure on Redis and the node it's running on. This repository contains all manifests required to set up Redis Sentinel (based on the Bitnami Redis helm chart).
The best way to update to version 2.0.0 of the operator is by creating a fresh deployment of the operator and migrating users to this new deployment. The following changes need to be made to the ShinyProxy configuration file:
- add the property `proxy.store-mode: Redis`
- add the property `proxy.stop-proxies-on-shutdown: false`
- optionally add the property `kubernetesIngressPatches` in order to customize the ingress created by the operator
- update the ShinyProxy image to `openanalytics/shinyproxy:3.1.1` (see the sketch below)
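A hedged sketch of the relevant part of a ShinyProxy resource after applying these changes (the optional `kubernetesIngressPatches` property is illustrated in the Ingress section above; all other configuration is omitted):

```yaml
apiVersion: openanalytics.eu/v1
kind: ShinyProxy
metadata:
  name: shinyproxy
  namespace: shinyproxy
spec:
  proxy:
    store-mode: Redis
    stop-proxies-on-shutdown: false
    # ... rest of the ShinyProxy configuration
  image: openanalytics/shinyproxy:3.1.1
```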
Upgrade to 2.1.0
The ShinyProxy CRD has been updated in version 2.1.0; it's important to update the CRD in your cluster. Running the deployment commands is enough. The CRD can be updated while ShinyProxy and the ShinyProxy Operator are running in the cluster.
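For reference, a hedged sketch of the commands involved; the second command assumes your checkout contains the CRD at the path mentioned in the ShinyProxy configuration section above:

```bash
# re-apply the full deployment (this includes the updated ShinyProxy CRD)
kustomize build . | kubectl apply -f - --server-side

# or apply only the CRD file
kubectl apply --server-side -f bases/namespaced/operator/crd.yaml
```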
Upgrade to 2.2.0
We recommend upgrading to ShinyProxy 3.2.0 when upgrading the operator to version 2.2.0, although during the upgrade it's possible to (temporarily) stay on ShinyProxy 3.1.1. The new version of the ShinyProxy Operator and ShinyProxy both need additional Kubernetes permissions. The full upgrade can be executed without downtime and without stopping apps. However, once the new ShinyProxy server is running, users need to re-login (their HTTP session is removed). The steps to upgrade are:
- modify your local yaml manifests to use version `2.2.0` of the manifests hosted on GitHub. In other words, in all your `kustomization.yaml` files change the following URLs from:

  ```yaml
  resources:
    - github.com/openanalytics/shinyproxy-operator/docs/deployment/bases/redis-sentinel?ref=v2.1.0
    - github.com/openanalytics/shinyproxy-operator/docs/deployment/bases/namespaced?ref=v2.1.0
    - github.com/openanalytics/shinyproxy-operator/docs/deployment/bases/shinyproxy?ref=v2.1.0
  ```
  to:

  ```yaml
  resources:
    - github.com/openanalytics/shinyproxy-operator/docs/deployment/bases/redis-sentinel?ref=v2.2.0
    - github.com/openanalytics/shinyproxy-operator/docs/deployment/bases/namespaced?ref=v2.2.0
    - github.com/openanalytics/shinyproxy-operator/docs/deployment/bases/shinyproxy?ref=v2.2.0
  ```
- apply the manifests:

  ```bash
  kustomize build . | kubectl apply -f - --server-side
  ```
- wait for the new version of the operator to be deployed
- change the `image` property in the ShinyProxy CRD file from:

  ```yaml
  apiVersion: openanalytics.eu/v1
  kind: ShinyProxy
  metadata:
    name: shinyproxy
    namespace: shinyproxy
  spec:
    # ...
    proxy:
      # ...
    image: openanalytics/shinyproxy:3.1.1
  ```
  to:

  ```yaml
  apiVersion: openanalytics.eu/v1
  kind: ShinyProxy
  metadata:
    name: shinyproxy
    namespace: shinyproxy
  spec:
    # ...
    proxy:
      # ...
    image: openanalytics/shinyproxy:3.2.0
  ```
- apply the manifests:

  ```bash
  kustomize build . | kubectl apply -f - --server-side
  ```
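Optionally, you can watch the rollout of the new operator and ShinyProxy server. A hedged sketch, assuming the operator Deployment is named `shinyproxy-operator` and runs in the `shinyproxy` namespace (the names may differ depending on the example you deployed):

```bash
# wait for the operator rollout to complete
kubectl rollout status deployment/shinyproxy-operator -n shinyproxy

# watch the ShinyProxy pods while the old server is replaced by the new one
kubectl get pods -n shinyproxy -w
```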
Note on Redis and Valkey
ShinyProxy uses Redis for persisting user and app sessions. The manifests of the operator contain a ready-to-use deployment of Redis Sentinel. Recently, the community has forked Redis into Valkey, because of license changes in Redis. The manifests still use Redis, because Spring (the Java framework used by ShinyProxy) only supports Valkey on a best-effort basis (i.e. not officially supported). To minimize disruptions in deployments, we decided not to update the manifests to the latest version of Redis, since this would make it more difficult for users to migrate to Valkey. At the moment, as a user you have three choices for deploying Redis:
- use the manifests included in the repo: these are known to work well, but use an older version of Redis
- use the same manifests with a newer version of Redis (including version 8): we have less experience with this option, but it should work fine
- migrate to Valkey: we have no experience with this, but it should work in theory
Feel free to share your experiences when upgrading Redis or switching to Valkey. This will help us to choose the best approach for the next version of ShinyProxy.
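If you want to try the second option, one possible approach is a kustomize image override in your `kustomization.yaml`. This is a hedged sketch only: it assumes your overlay builds the Redis manifests from this repository and that they use the `bitnami/redis` image — verify the actual image name and tag used in your cluster before applying it:

```yaml
# hypothetical image override; check the image name/tag actually used by the Redis manifests
images:
  - name: bitnami/redis
    newTag: "8.0"
```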