Seafile Installation Using Docker Compose and NFS

6/9/2025 3 min read

Seafile is a secure, cross-platform cloud storage solution. One way to install and run Seafile is with Docker containers combined with the Network File System (NFS). In this article, I show how to set up Seafile with Docker Compose and NFS.

Why Do I Use NFS?

NFS is a widely used filesystem protocol designed for sharing files over a network. It offers a straightforward way to share and synchronize files between systems without manually copying them, and since it was built specifically for network use, it is fast and efficient.

NFS is commonly used in enterprises that need to share large files across multiple locations and is supported by many operating systems including Linux, Unix, and Windows.

The major advantage of using NFS with Seafile is the independence from the Docker server. For example, it's possible to run Seafile on a Raspberry Pi while the data is securely stored on a PC or server. If the Raspberry Pi or the Seafile server fails, the data isn't lost.

Additionally, the Raspberry Pi doesn't offer an ideal way to connect large storage with good performance. Of course, backing up your data remains essential. In my case, I use an Ubuntu server to host the Seafile Docker containers and a Synology DiskStation as the NFS server where the files reside.
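Before starting the containers, it's worth verifying that the Docker host can actually reach the NFS export. A quick check from the Ubuntu server might look like this (the IP address and export path match the compose file below; adjust both to your environment):

```shell
# Install the NFS client tools (Debian/Ubuntu)
sudo apt install nfs-common

# List the exports offered by the NFS server (the Synology in my case)
showmount -e 192.168.20.10

# Temporarily mount an export to confirm read/write access, then clean up
sudo mount -t nfs -o vers=4.1 192.168.20.10:/volume1/DockerData/Seafile/data /mnt
sudo touch /mnt/test && sudo rm /mnt/test
sudo umount /mnt
```

If the mount or the write test fails here, the Docker volumes below will fail in the same way, so it's much easier to debug at this stage.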


Docker Compose File

version: '2.0'
services:
  db:
    image: mariadb:10.5
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_PASS}  # Set the root password for MySQL service
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - type: volume
        source: db
        target: /var/lib/mysql
        volume:
          nocopy: true
    networks:
      - seafile-net

  memcached:
    image: memcached:1.6
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net

  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    ports:
      - "5000:80"
      - "5001:8080"
    volumes:
      - type: volume
        source: data
        target: /shared
        volume:
          nocopy: true
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=${DB_PASS}              # MySQL root user password
      - TIME_ZONE=${TIME_ZONE}                 # Timezone, default is UTC
      - SEAFILE_ADMIN_EMAIL=${ADMIN_USER_EMAIL}
      - SEAFILE_ADMIN_PASSWORD=${ADMIN_USER_PASS}
      - SEAFILE_SERVER_LETSENCRYPT=false
      - SEAFILE_SERVER_HOSTNAME=${HOST_NAME}
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net

networks:
  seafile-net:

volumes:
  db:   # Volume for the database
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.20.10,rw,vers=4.1
      device: ":/volume1/DockerData/Seafile/db/"
  data: # Volume for the data
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.20.10,rw,vers=4.1
      device: ":/volume1/DockerData/Seafile/data/"
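The compose file reads several values from environment variables. One way to supply them is a `.env` file next to the `docker-compose.yml`, which Docker Compose picks up automatically. A minimal example, with placeholder values and the variable names used above:

```shell
# .env — read automatically by docker compose
DB_PASS=change-me-strong-password
TIME_ZONE=Europe/Berlin
ADMIN_USER_EMAIL=admin@example.com
ADMIN_USER_PASS=change-me-too
HOST_NAME=seafile.example.com
```

Keep this file out of version control, since it contains credentials.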

Important: Backup of the Seafile Database

It is essential to back up the Seafile database. Unlike Nextcloud, Seafile stores files in small blocks, usually only a few MB in size. When a file is updated, only the changed block is uploaded, greatly enhancing the efficiency and speed of synchronization.

These blocks are referenced by unique identifiers, allowing Seafile to manage storage efficiently. When a file is deleted, its blocks are removed the next time Seafile's garbage collector runs. However, if the database is missing, it is no longer possible to restore the files, even if the data blocks still exist.
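A simple way to dump the database regularly is running `mysqldump` inside the database container. A sketch, assuming the container name and `DB_PASS` variable from the compose file above:

```shell
# Dump all Seafile databases from the seafile-mysql container
# into a date-stamped SQL file on the host
docker exec seafile-mysql \
  mysqldump -u root -p"${DB_PASS}" --all-databases \
  > "seafile-db-$(date +%F).sql"
```

Running this from a cron job and copying the dumps to a second location (not the same NFS share) gives a basic level of protection.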


Nginx Proxy Manager

Integrating Nginx Proxy Manager into the Docker setup is advantageous for several reasons:

  • User-friendly web interface for managing reverse proxies
  • Easy setup and automatic renewal of Let's Encrypt certificates
  • Central management of SSL, redirects, and access rules
  • Simple use of subdomains or path-based routing for services like Seafile

This eliminates the need to manually edit Nginx configuration files, simplifying maintenance and reducing errors. Overall, it significantly improves the security, clarity, and flexibility of the server infrastructure.
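Nginx Proxy Manager can be added as another service to the same compose file. A sketch using the official `jc21/nginx-proxy-manager` image with its typical default ports and volumes (adjust paths as needed):

```yaml
  npm:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    ports:
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
      - "81:81"    # Admin web interface
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt
    networks:
      - seafile-net
```

Because it shares the seafile-net network, the proxy can reach the Seafile container directly by its service name instead of going through the host ports.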

You can find a guide for setting up Nginx Proxy Manager in a separate article.