Deploying Relativity with Docker
How long would it take to set up a fresh Ubuntu host, deploy a combo of PostgreSQL server + Relativity Server, and let it accept connections on port 80 via nginx? Now do this twice to create both development and production environments, and remember to keep any configuration changes in sync between them. Does this sound like a tedious and error-prone task?
Thanks to Docker and Docker Compose, this task can be done in literally minutes.
What is Docker? Well, it is a tool that takes your app and puts it into a container (think of it as a slim Virtual Machine image with a lot of configurable options). Later this container can be deployed to another host, either a physical computer or a cloud-based one like AWS, Azure, Heroku, etc.
Docker Compose is a tool that allows you to configure several Docker containers to work together.
Let's take a closer look at Docker tools and develop some Docker scripts for Relativity Server.
Note: The approach described below also works for custom Data Abstract or Remoting SDK servers.
First, the Docker toolset itself has to be installed. The installation process is very straightforward; the only thing to remember is that on Linux, Docker Compose needs to be installed separately, after the Docker Engine itself.
On macOS and Windows environments the Docker Desktop package should be installed. This package includes the Docker engine, command line tools, Docker Compose and other useful tools. Detailed installation instructions for Docker Desktop can be found at https://docs.docker.com/desktop/.
The Linux environment requires a bit more effort. First, the Docker Engine needs to be installed (see https://docs.docker.com/engine/install/ for detailed instructions for different Linux distros). Then Docker Compose can be installed (see https://docs.docker.com/compose/install/ for detailed instructions).
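For example, on a fresh Ubuntu host both tools can be installed with a handful of commands. This is only a sketch based on the official docs; check them for the current release numbers and the recommended repository-based setup:
# Install the Docker Engine using the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Install Docker Compose as a standalone binary (replace 1.29.2 with the current release)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose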
At this point the Docker environment should be up and running.
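A quick way to verify this is to ask both tools for their versions and run the tiny hello-world image:
docker --version
docker-compose --version
docker run hello-world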
The first step is to put Relativity Server into a Docker image. The Dockerfile (the configuration file that describes how to build an image) is a very simple one:
FROM mono:6.10
WORKDIR /usr/src/app
COPY . .
ENTRYPOINT [ "mono", "/usr/src/app/Relativity.exe", "--console" ]
All it says is: "take the Mono 6.10 image, copy the Relativity Server files into it, and use the given command to start the server".
Now the Docker image can be built (and tagged with the name used in the Docker Compose configuration below) by running a simple command from the directory containing the Dockerfile and the Relativity Server files:
docker build -t private-docker-repo/relativity .
After that, regardless of the host OS (be it Windows, Linux or macOS) and the version of Mono installed there (if any at all), the Relativity Server instance will always run on Mono 6.10 on Debian 10.
This makes the app behavior predictable, as its environment (runtime, OS, config options, etc) is controlled.
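Before adding a database to the mix, the freshly built image can be given a quick test run. This is just a minimal sketch; the port number below matches the one published later in the Docker Compose configuration:
docker run --rm -p 7100:7100 private-docker-repo/relativity
Stopping the container is as simple as pressing Ctrl+C.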
Of course, Relativity Server alone is not that useful. It would be nice to connect it to a database server. Also, the database should be stored in such a way that it won't be lost when the Docker container is rebuilt.
This is where Docker Compose comes to the rescue. It is a tool that allows you to configure and start several Docker containers at once.
Here are the sample configuration files: the Docker Compose file (docker-compose.yml) and the nginx reverse proxy configuration it uses:
version: '3'

services:
  relativity:
    container_name: docker-relativity
    image: private-docker-repo/relativity
    ports:
      - "7100:7100"
    depends_on:
      - docker-database
    volumes:
      - domain:/etc/relativity/
    networks:
      - docker-network-relativity

  docker-database:
    container_name: docker-database
    image: postgres:13
    restart: always
    environment:
      - POSTGRES_PASSWORD=Alpha!Omega
    volumes:
      - database:/var/lib/postgresql/data
    networks:
      - docker-network-relativity

  nginx:
    container_name: docker-nginx
    image: nginx:stable-alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf.prod:/etc/nginx/conf.d/default.conf
    depends_on:
      - relativity
    networks:
      - docker-network-relativity

volumes:
  database:
  domain:

networks:
  docker-network-relativity:
    driver: bridge
And the nginx reverse proxy configuration (the nginx.conf.prod file mounted into the nginx container above):
server {
    listen 80;
    server_name my-relativity-instance.com;

    location / {
        proxy_pass http://relativity:7099;
    }
}
What it does is:
- define 3 containers ("container" here is a very slim VM created from a source "image"):
  - a Relativity Server container (note that an unofficial Relativity Server image is used here as a sample source)
  - a PostgreSQL 13 container
  - an nginx container with a custom configuration
- define a custom network that these 3 containers use to communicate with each other
- define a set of so-called volumes (persistent storage containers) used to store the database and the Relativity Server configuration
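The volumes and the network are plain Docker objects, so they can be listed with the usual commands (Docker Compose prefixes their names with the project name, which by default is the name of the directory containing the configuration file):
docker volume ls
docker network ls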
All these containers can now be started with just a single command:
docker-compose up
This single command will start a complex process that results in exactly what was described at the beginning: a combo of PostgreSQL 13 + Relativity Server hidden behind an nginx reverse proxy.
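A few more everyday Docker Compose commands are worth knowing (standard Docker Compose CLI usage, nothing Relativity-specific):
# start the containers in the background
docker-compose up -d
# follow the log output of the Relativity Server container
docker-compose logs -f relativity
# stop and remove the containers (named volumes are kept)
docker-compose down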
What's even more important is that this same configuration file can now be used on any other computer. Docker will fetch the images and configure the containers, so literally within minutes the same configuration will be up and running there.
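For that to work, the custom Relativity Server image has to be available in a registry that the other computer can reach. Assuming the sample private-docker-repo registry used in the configuration above, publishing the image would look roughly like this:
docker login private-docker-repo
docker push private-docker-repo/relativity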
The only thing one needs to remember is that the Relativity Server instance should not try to access PostgreSQL via localhost. This won't work because, for the Relativity Server instance running in a Docker container, localhost means the container it is running in (not the host). Instead, the database server should be accessed by its container name, docker-database (the one defined in the docker-compose configuration). Docker will resolve this name to an IP address in the virtual network used by the running containers.
So the connection string should look like
NPGSQL.NET?Server=docker-database;Database=postgres;User ID=postgres;Password=****;Pooling=false
not
NPGSQL.NET?Server=localhost;Database=postgres;...
Schema Modeler will route all database access operations (like fetching the list of database tables, previewing data, etc.) through the Relativity data access services.
And that is it. A complex configuration that would take hours to install and set up by hand is up within minutes and accepting connections.
Note: This article is not a Docker tutorial. It just scratches the surface, as it is not possible for a short article to go deeper. I'd really suggest investing some time into learning Docker, Docker Compose and the other tools. Every hour spent reading the docs or watching courses will pay off dozens of times later.