We recommend that the on-site administrator of your Overleaf instance has experience administering Linux servers and working with Docker. We find that on-site administrators with these skills will be able to provide the best experience for users of Overleaf at your organization.
Installing Overleaf requires the right skills, as well as particular hardware and software.
Make sure you read this section thoroughly before proceeding with any installation of Overleaf.
We strongly recommend you run Server Pro on a recent version of Ubuntu, with a recent version of Docker. It's possible to run Server Pro on many other systems, but it will be much more difficult for us to offer support for systems that are far from the default Ubuntu/Docker setup. We also can't offer support for problems with Docker itself.
For the best experience when running Overleaf, we highly recommend using a Debian-based operating system, such as Ubuntu. This choice aligns with the software's development environment and is the preferred option among the majority of Overleaf users.
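If you are setting up a fresh Ubuntu server, one common way to install a recent Docker Engine is Docker's official convenience script; this is only a sketch, and you should use whichever installation method your organization prefers (see the Docker documentation for alternatives):
# Download and run Docker's convenience install script (requires root).
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Confirm the Docker Engine is installed and the service is running.
sudo docker --version
sudo systemctl status docker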
When using Server Pro with Sandboxed Compiles, it's important to note that the application requires root access to the Docker socket.
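As a quick check that the Docker socket is present and accessible on the host; the path below is the common default, which may differ in your environment:
# The Docker daemon's Unix socket; Sandboxed Compiles needs access to it.
ls -l /var/run/docker.sock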
Both Server CE and Server Pro currently support the following versions of dependencies:
Docker 25.0 and 29
MongoDB 7.0 and 8.0
Redis 6.2 and 7.5
MongoDB and Redis are automatically pulled by docker compose when running Server CE or Server Pro, unless configured to use a different installation.
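To confirm which database versions your instance is actually running, you can query the containers directly. This assumes the default container names, mongo and redis, as shown in the docker ps output later in this section; adjust if yours differ:
# Print the MongoDB server version from inside the mongo container.
docker exec mongo mongosh --quiet --eval "db.version()"
# Print the Redis server version from inside the redis container.
docker exec redis redis-server --version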
The Toolkit depends on the following programs:
bash
Docker
docker compose (required; generally installed as part of Docker)
We recommend that you install the most recent version of Docker that is available for your operating system.
Once Docker is installed correctly, you should be able to run these commands without error:
# Shows the installed Docker version
docker --version
# Shows the version of the docker compose plugin (v2)
docker compose version
# Lists the running Docker containers on your system
docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED        STATUS                        PORTS                NAMES
b1fadcd1dcb1   quay.io/sharelatex/sharelatex-pro:5.4.0   "/sbin/my_init"          23 hours ago   Up About a minute             0.0.0.0:80->80/tcp   sharelatex
7900ebb9ebb8   redis:6.2                                 "docker-entrypoint.s…"   45 hours ago   Up About a minute             6379/tcp             redis
fbd49d420e59   mongo:6.0                                 "docker-entrypoint.s…"   45 hours ago   Up About a minute (healthy)   27017/tcp            mongo
The Toolkit includes a handy script that produces a report pointing out any unfulfilled dependencies.
When provisioning hardware to run Overleaf, the main factor to consider is how many concurrent users will be running compiles.
For example, if you have a license for 100 total users, but only expect ~5 to be working at the same time, the minimal install will be sufficient. If you expect a higher proportion to be working (and compiling) simultaneously, you should consider provisioning a server with a higher specification.
A base requirement of 2 CPU cores and 3GB of memory is needed for basic operation with around 5 concurrent users. This minimum will also be sufficient for larger groups where there is less concurrent usage, or where longer compile times during heavier usage are acceptable.
If you are considering using an NFS (Network File System) based file system for your small instance, please see the relevant section in Troubleshooting.
As a rule of thumb, to provide a high and consistent level of service, 1 CPU core and 1GB of memory should be added to the minimal install for every 5-10 concurrent users.
This should only be taken as a guide, as factors such as the size of typical documents (larger documents use up more compile resources), how often users compile, and what tolerance there is for longer compile times during heavy usage, all affect the level of provisioning required.
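As an illustration only, the rule of thumb can be applied like this; the figure of 45 concurrent users below is an assumption, not a recommendation:
# Rough sizing sketch: base of 2 cores / 3GB, plus 1 core and 1GB per 5-10 concurrent users.
CONCURRENT=45                              # expected peak concurrent (compiling) users
EXTRA_LOW=$(( (CONCURRENT + 9) / 10 ))     # 1 core / 1GB per 10 concurrent users
EXTRA_HIGH=$(( (CONCURRENT + 4) / 5 ))     # 1 core / 1GB per 5 concurrent users
echo "Suggested CPU cores: $((2 + EXTRA_LOW)) to $((2 + EXTRA_HIGH))"
echo "Suggested memory (GB): $((3 + EXTRA_LOW)) to $((3 + EXTRA_HIGH))"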
Many of our customers look to deploy Server Pro organization wide, or across large teams. In those situations, it's difficult for us to advise on specific setup requirements, because the use cases and underlying hardware available can be quite varied.
If you're running a Server Pro installation for 300 total users, and regularly expect 30-60 of those users to be compiling documents at the same time, 7 CPU cores and 8GB of memory (the base of 2 cores and 3GB, plus an additional 5 cores and 5GB) should provide sufficient resources for your users to have a consistently high level of service.
To give an example of the hardware requirements for a larger deployment, a Server Pro installation for 1,000 total users has been set up successfully using a single server provisioned with two 4-core processors and 32GB of system memory. This has been sufficient for the team’s needs over the past year of usage.
If you have installed Overleaf at your organization and would like to share details of your setup to help us add to this section, please let us know.
Customers who are exceeding the limits of a single large server can take a look at Horizontal scaling for Server Pro.
We advise against using Network File System (NFS)/Amazon EFS/Amazon EBS for project/history storage in larger setups and explicitly do not support it for horizontal scaling.
These file systems do not provide the performance and reliability that Server Pro needs when running at high scale. When the file system cannot keep up with the load, the application stalls on blocking IO operations. These stalls can lead to overrunning Redis-based locks, which in turn can result in corrupted project data.
We advise using S3-compatible object storage instead. Slow S3 performance only affects the upload and download of files, leading at worst to an elevated number of open connections to your S3 provider, and does not affect the behavior of the rest of the application. Additionally, Server Pro can specify reasonable timeouts on S3 requests, which is not possible for file system IO operations at the application level.
By default, Overleaf Server instances limit the number of connections to 768. This includes persistent WebSocket connections, top-level HTML navigation, and AJAX requests. Once the limit is hit, the editor might not be able to connect, the editor page might not load completely, and compile requests can fail. Nginx will return status 500 responses and log "worker_connections are not enough while connecting to upstream" to /var/log/nginx/error.log inside the sharelatex container.
The worker_connections setting limits the number of concurrent connections nginx will accept per worker. The number of workers is controlled by the worker_processes setting and is set to 4 by default in our nginx configuration.
Nginx does relatively little work compared to other parts of the system, so these limits act as a safety measure, preventing too many connections from overwhelming the system. It's preferable to drop some excess connections early rather than to slow down every connection.
Overleaf Server instances expose environment variables for adjusting these nginx settings:
NGINX_WORKER_PROCESSES for worker_processes (default 4)
NGINX_WORKER_CONNECTIONS for worker_connections (default 768)
NGINX_KEEPALIVE_TIMEOUT for keepalive_timeout (default 65)
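For example, on a busier instance you might raise these limits. In a Toolkit-based install such variables are typically added to config/variables.env, while in a plain docker compose setup they go in the container's environment section; the values below are illustrative assumptions, not recommendations:
# Illustrative values only; tune to your own load.
NGINX_WORKER_PROCESSES=8
NGINX_WORKER_CONNECTIONS=1536
NGINX_KEEPALIVE_TIMEOUT=65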
LaTeX is a single-threaded program, meaning it can only use one CPU core at a time, and the CPU is the main limitation when compiling a document. Therefore, the faster your CPU's single-core performance, the faster a document will compile. More cores only help if you are compiling more documents concurrently than you have free CPU cores.