This page provides an overview of the different ways to deploy ShinyProxy. It is advised to first read the Getting Started guide in order to understand the basics of ShinyProxy.
The simplest way to run ShinyProxy is to use the JAR file. For quick testing or demonstration purposes it is sufficient to use
java -jar shinyproxy-2.3.0.jar
which will run ShinyProxy on its default port.
It is possible to use nohup to keep ShinyProxy running when you log out of a system, but it is advisable to use alternatives, e.g. the .deb or .rpm packages or the containerized ShinyProxy, to have it defined as a proper service.
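For completeness, the nohup approach can be sketched as follows; the log and PID file names are illustrative choices, not part of ShinyProxy itself:

```shell
# Detach ShinyProxy from the terminal so it keeps running after logout.
# This is a quick workaround only; prefer the packaged service or a container.
nohup java -jar shinyproxy-2.3.0.jar > shinyproxy.log 2>&1 &
echo $! > shinyproxy.pid   # record the PID so the process can be stopped later
```

The process can later be stopped with `kill "$(cat shinyproxy.pid)"`.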
Installing or upgrading the .deb package of ShinyProxy can be done using:
wget https://www.shinyproxy.io/downloads/shinyproxy_2.3.0_amd64.deb
sudo dpkg -i shinyproxy_2.3.0_amd64.deb
This will define ShinyProxy as a service running on its default port.
To see whether the service is up and running, one can use:
systemctl status shinyproxy
Looking at the logs can be done using
journalctl -u shinyproxy
The application.yml file that will be used by ShinyProxy can be found and edited under the package's configuration directory.
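For reference, a minimal application.yml could look like the sketch below; the port value, authentication mode, app id and container image are illustrative assumptions:

```yaml
proxy:
  port: 8080                  # port ShinyProxy listens on (assumed value)
  authentication: none        # no login, for demonstration only
  specs:
    - id: demo-app            # illustrative app id
      container-image: openanalytics/shinyproxy-demo
```

After editing the file, restart the service (e.g. `sudo systemctl restart shinyproxy`) so that the new configuration is picked up.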
Installing the RPM package of ShinyProxy can be achieved with
wget https://www.shinyproxy.io/downloads/shinyproxy_2.3.0_x86_64.rpm
sudo rpm -i shinyproxy_2.3.0_x86_64.rpm
Upgrading the package can be done using
sudo rpm -U shinyproxy_2.3.0_x86_64.rpm
Similarly to the .deb package, the .rpm package will define ShinyProxy as a service running on its default port. The application.yml file that will be used by ShinyProxy can be found and edited in the same way as for the .deb package.
ShinyProxy uses containers to achieve scalability and security when launching multiple Shiny apps for multiple users. Each user runs a Shiny app in an isolated container, and there is no risk of apps interfering with each other, or users getting hold of other users' data.
From an infrastructure point of view, there are also great advantages to be gained. Containers are much easier to manage and scale than a series of system processes or services.
So if ShinyProxy uses containers to run Shiny apps, why not run ShinyProxy itself in a container? This would offer several additional advantages:
No need to install a Java runtime on the host. The docker image will take care of that.
Many container managers can be set up to automatically restart crashed containers. If the ShinyProxy container crashes, it can recover without requiring manual intervention.
It becomes much easier to deploy multiple ShinyProxy containers, and many clustered container managers (such as Kubernetes) allow you to deploy load balancers in front of those containers.
If you have multiple ShinyProxy containers and want to put a new configuration online, you can perform a 'rolling update' without causing any downtime for your users.
To run ShinyProxy in a container, several steps must be taken:
A docker image must be built, containing ShinyProxy, an application.yml configuration file, and a Java runtime. An example for this can be found here.
Since ShinyProxy is now listening on a container address and port, an additional mapping must be made to ensure that ShinyProxy is also accessible on an external interface. This is done differently for different container managers:
For docker, the port must be published, which means the host will allocate a port that forwards traffic to the container port.
For docker swarm, you can define a service that publishes a port on the ingress overlay network to make it accessible from any node.
For Kubernetes, the port can also be published via a service that automatically allocates the port on all nodes and takes care of routing the traffic to the appropriate pod.
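As one concrete illustration, the Kubernetes case could be sketched with a NodePort Service; all names, labels and port values here are assumptions for the example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shinyproxy
spec:
  type: NodePort              # opens the port on every node in the cluster
  selector:
    app: shinyproxy           # must match the labels on the ShinyProxy pods
  ports:
    - port: 8080              # port of the Service inside the cluster
      targetPort: 8080        # container port ShinyProxy listens on
      nodePort: 30080         # externally reachable port on each node
```

For plain docker, publishing a port amounts to something like `docker run -p 8080:8080 <shinyproxy-image>`.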
If ShinyProxy is running inside the same container manager as the Shiny containers it launches, it also becomes easier to communicate with those containers:
It is no longer necessary for each Shiny container to publish a port on the host using port-range-start: simply exposing the Shiny port (3838) on the container is enough.
Instead of constructing a URL from the docker hostname, ShinyProxy can now use the container ID to access it.
For this to work, a setting must be enabled in the ShinyProxy configuration.