This page provides an overview of the different ways to deploy ShinyProxy. It is advised to first read the Getting Started guide in order to understand the basics of ShinyProxy.
The simplest way to run ShinyProxy is to use the JAR file. For testing or demonstration purposes it is sufficient to use
java -jar shinyproxy-2.5.0.jar
which will run ShinyProxy on its default port.
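ShinyProxy is built on Spring Boot and will pick up an application.yml file placed in the directory it is started from. As a minimal sketch, modeled on the Getting Started guide, such a file might look as follows (the title, credentials and app spec are illustrative):

```yaml
proxy:
  title: My ShinyProxy            # illustrative title
  port: 8080
  authentication: simple
  users:
    - name: jack                  # illustrative credentials; do not use in production
      password: password
  specs:
    - id: hello                   # illustrative app spec
      display-name: Hello Application
      container-image: openanalytics/shinyproxy-demo
```

Starting the JAR from the same directory will then serve the configured app to authenticated users.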
It is possible to use nohup to keep ShinyProxy running when you log out of the system, but it is advisable to use alternatives, e.g. the .deb or .rpm packages or the containerized ShinyProxy, to have it defined as a proper service.
Installing or upgrading the .deb package of ShinyProxy can be done using:
wget https://www.shinyproxy.io/downloads/shinyproxy_2.5.0_amd64.deb
sudo dpkg -i shinyproxy_2.5.0_amd64.deb
This will define ShinyProxy as a service running on its default port.
To see whether the service is up and running, one can use:
systemctl status shinyproxy
Looking at the logs can be done using
journalctl -u shinyproxy
The application.yml file that will be used by ShinyProxy can be found and edited under the configuration directory installed by the package.
To permanently enable the service so ShinyProxy is automatically started after a reboot one can use
sudo systemctl enable shinyproxy
To verify that the service is enabled one can use
sudo systemctl is-enabled shinyproxy
Installing the RPM package of ShinyProxy can be achieved with
wget https://www.shinyproxy.io/downloads/shinyproxy_2.5.0_x86_64.rpm
sudo rpm -i shinyproxy_2.5.0_x86_64.rpm
Upgrading the package can be done using
sudo rpm -U shinyproxy_2.5.0_x86_64.rpm
Similarly to the .deb package, the .rpm package will define ShinyProxy as a service running on its default port. The application.yml file that will be used by ShinyProxy can be found and edited in the same location as for the .deb package.
ShinyProxy uses containers to achieve scalability and security when launching multiple Shiny apps for multiple users. Each user runs a Shiny app in an isolated container, and there is no risk of apps interfering with each other, or users getting hold of other users' data.
From an infrastructure point of view, there are also great advantages to be gained. Containers are much easier to manage and scale than a series of system processes or services.
So if ShinyProxy uses containers to run Shiny apps, why not run ShinyProxy itself in a container? This would offer several additional advantages:
No need to install a Java runtime on the host. The docker image will take care of that.
Many container managers can be set up to automatically restart crashed containers. If the ShinyProxy container crashes, it can recover without requiring manual intervention.
It becomes much easier to deploy multiple ShinyProxy containers, and many clustered container managers (such as Kubernetes) allow you to deploy load balancers in front of those containers.
If you have multiple ShinyProxy containers and want to put a new configuration online, you can perform a ‘rolling update’ without causing any downtime for your users.
To run ShinyProxy in a container, several steps must be taken:
A docker image must be built, containing ShinyProxy, an application.yml configuration file, and a Java runtime. We advise you to build upon our official Docker image. This image is optimized to have a small footprint and to be secure (by running ShinyProxy as a non-root user). An example of how to build on this image can be found here.
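As a minimal sketch, a custom image can extend the official one and add a configuration file; the path /opt/shinyproxy/application.yml is where the official image is assumed to look for its configuration:

```dockerfile
# Build on the official image (the tag is illustrative)
FROM openanalytics/shinyproxy:2.5.0

# Add your own configuration; the official image is assumed to read it
# from /opt/shinyproxy/application.yml
COPY application.yml /opt/shinyproxy/application.yml
```

Building and pushing this image then gives you a self-contained, versioned ShinyProxy deployment artifact.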
Since ShinyProxy is now listening on a container address and port, an additional mapping must be made to ensure that ShinyProxy is also accessible on an external interface. This is done differently for different container managers:
For docker, the port must be published, which means the host will allocate a port that forwards traffic to the container port.
For docker swarm, you can define a service that publishes a port on the ingress overlay network to make it accessible from any node.
For Kubernetes, the port can also be published via a service that automatically allocates the port on all nodes and takes care of routing the traffic to the appropriate pod.
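For the Kubernetes case, such a mapping could be sketched as a NodePort Service (the names, labels and ports are illustrative; they assume ShinyProxy pods labeled app: shinyproxy and listening on port 8080):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shinyproxy
spec:
  type: NodePort            # allocates a port on every node
  selector:
    app: shinyproxy         # assumes the ShinyProxy pods carry this label
  ports:
    - port: 8080            # service port
      targetPort: 8080      # container port ShinyProxy listens on
```

Traffic arriving on the allocated node port on any node is then routed to one of the matching ShinyProxy pods.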
If ShinyProxy is running inside the same container manager as the Shiny containers it launches, it also becomes easier to communicate with those containers:
It is no longer necessary for each Shiny container to publish a port on the host using port-range-start: simply exposing the Shiny port (3838) on the container is enough.
Instead of constructing a URL from the docker hostname, ShinyProxy can now use the container ID to access it. For this to work, a setting must be enabled in the application.yml file: shiny.proxy.kubernetes.internal-networking: true. In addition it is necessary to use a separate Docker network, as explained in our example.
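Rendering the dotted property above as YAML, the relevant fragment of the application.yml would look like:

```yaml
shiny:
  proxy:
    kubernetes:
      internal-networking: true
```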
ShinyProxy Operator on Kubernetes
Deploying and managing ShinyProxy can get complex when many apps are used, especially when the configuration of ShinyProxy is often updated. When a running ShinyProxy instance is restarted (in order to update its configuration), users are disconnected from their running applications. The only way to guarantee that users do not lose their connection to running apps is to keep the current instance alive when updating ShinyProxy's configuration. However, manually keeping track of these instances would be too cumbersome and should therefore be automated.
The ShinyProxy operator for Kubernetes is able to manage multiple ShinyProxy instances and their configuration.
To give an example of how the operator works, assume we have a ShinyProxy configuration config1 which contains one app called app1. When the operator starts, it checks whether a ShinyProxy instance exists with that configuration. If not, it starts a ShinyProxy instance and all other required configuration. Users can now start using app1 on this instance.
Some time later, the need for a second app arises. Therefore the administrator adapts the configuration of ShinyProxy to include a second app. However, some users are still using app1 on the old instance. These apps may have some state, which should not be lost.
Therefore, the operator starts a second ShinyProxy instance using the new configuration.
The operator ensures that users who are currently using the first instance stay on that instance.
All other users are forwarded to the new server and can use the new application.
Nevertheless, users on an old instance can choose to switch to the new instance by clicking a button in the user interface.
The operator stops the old instance once it has no apps running.
The operator is packaged as a Docker container, which can easily be run on Kubernetes. The operator makes use of Skipper for routing users to the correct ShinyProxy instance. More information on deploying the operator can be found in the readme.
Readiness and liveness probes
ShinyProxy includes support for Kubernetes readiness and liveness probes. The corresponding endpoints are /actuator/health/readiness and /actuator/health/liveness respectively.
The probes can be used in a Kubernetes deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shinyproxy-deployment
  labels:
    app: shinyproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shinyproxy
  template:
    metadata:
      name: shinyproxy
      labels:
        app: shinyproxy
    spec:
      containers:
        - name: shinyproxy
          image: openanalytics/shinyproxy
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 9090
            periodSeconds: 1
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 9090
            periodSeconds: 1
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 9090
            failureThreshold: 8
            periodSeconds: 5
          volumeMounts:
            - name: config-volume
              mountPath: /etc/shinyproxy/application-in.yml
              subPath: application-in.yml
      volumes:
        - name: config-volume
          configMap:
            name: shinyproxy-application-yml
Note that you may have to tune the
periodSeconds according to your deployment.
Instead of using startup probes (e.g. because your Kubernetes version does not support them yet), you can use the initialDelaySeconds option on the readiness and liveness probes.
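For example, the readiness probe from the deployment above could be rewritten with initialDelaySeconds instead of a startup probe (the value of 40 seconds is illustrative and should match the startup time of your configuration):

```yaml
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 9090
  initialDelaySeconds: 40   # illustrative; tune to your deployment's startup time
  periodSeconds: 1
```

With this approach the kubelet simply waits before the first probe, rather than using a separate probe with its own failureThreshold to cover startup.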