A Nextcloud installation that works perfectly today is one hardware failure, one corrupted database transaction, or one accidental bulk delete away from becoming a data loss incident. Backups are not optional — they're the insurance policy that turns a catastrophe into a minor inconvenience. Yet many Nextcloud administrators either skip backups entirely or implement them partially, backing up files but forgetting the database, or backing up everything but never testing restores.

This guide covers a complete backup strategy for Nextcloud: what to back up, how to automate it, how to verify backups are actually working, and how to perform a full disaster recovery when the worst happens. Every command and script in this guide is production-tested.

The Three Components You Must Back Up

A complete Nextcloud backup consists of three distinct components. Missing any one of them means your backup is incomplete and a full restore may be impossible.

1. The Database

Nextcloud's database (PostgreSQL or MySQL/MariaDB) stores user accounts, file metadata, share permissions, activity logs, app configurations, and calendar/contact data. Without the database, your files are just an unorganized pile of blobs that Nextcloud cannot reassemble into a coherent file structure.

2. The Data Directory

This is where actual file content lives — typically at /var/www/nextcloud/data/ or a custom path. It contains every user's files, local application data, and file previews. For most deployments, the data directory is by far the largest backup component.

If you've configured external storage using S3 or Ceph object storage, file content may reside outside this directory. In that case, the data directory still needs to be backed up (it contains metadata files, encryption keys, and app data), but the bulk storage backup is handled at the object storage layer.

3. The Configuration

This includes:

  - config.php — the core Nextcloud configuration, including database credentials and secret keys
  - Custom themes and branding under themes/
  - Web server virtual host configuration (Nginx or Apache)
  - PHP-FPM pool configuration
  - TLS certificates and their renewal configuration (e.g., Let's Encrypt)
  - Cron entries for Nextcloud background jobs

RPO and RTO: Defining Your Backup Requirements

Before writing backup scripts, define two critical metrics:

Recovery Point Objective (RPO) — How much data can you afford to lose? If your RPO is 24 hours, daily backups are sufficient. If your RPO is 1 hour, you need hourly database snapshots and near-continuous file synchronization.

Recovery Time Objective (RTO) — How quickly must the system be back online? If your RTO is 4 hours, you need a tested restore procedure that completes within that window. If your RTO is 30 minutes, you likely need high availability architecture rather than backup/restore alone.

Scenario                    | Typical RPO                | Typical RTO                | Backup Strategy
Small team (1-20 users)     | 24 hours                   | 4-8 hours                  | Daily database dump + daily file sync
Mid-size org (20-200 users) | 4-8 hours                  | 1-2 hours                  | Hourly DB dumps + incremental file backup
Enterprise (200+ users)     | 1 hour                     | 15-30 minutes              | Continuous DB replication + Ceph snapshots + HA failover
Regulated industries        | Per regulatory requirement | Per regulatory requirement | All above + off-site replication + audit trail

Database Backup

The database backup must be consistent — a naive file copy of the database's data directory taken during active writes can capture a half-applied transaction and fail to restore. Both PostgreSQL and MySQL provide dump tools that create consistent snapshots without stopping the database.

PostgreSQL

# Full database dump with compression
DUMP=/backup/nextcloud/db/nextcloud_$(date +%Y%m%d_%H%M%S).pgdump
pg_dump -U nextcloud -h localhost nextcloud_db \
    --format=custom \
    --compress=6 \
    -f "$DUMP"

# Verify the dump is valid (reuse the filename — a fresh $(date) would not match)
pg_restore --list "$DUMP" > /dev/null 2>&1
echo "Exit code: $? (0 = success)"

The --format=custom flag creates a compressed, restorable archive. The --compress=6 applies zlib compression at level 6, which provides a good balance between file size and backup speed. A typical Nextcloud database for 100 users compresses to 50-200 MB.

MySQL/MariaDB

# Full database dump with single-transaction for consistency
DUMP=/backup/nextcloud/db/nextcloud_$(date +%Y%m%d_%H%M%S).sql.gz
mysqldump --single-transaction --routines --triggers \
    -u nextcloud -p nextcloud_db \
    | gzip > "$DUMP"

# Verify the dump (reuse the filename — a fresh $(date) would not match)
gunzip -t "$DUMP"
echo "Exit code: $? (0 = valid gzip)"

The --single-transaction flag is essential for InnoDB tables (which Nextcloud uses). It creates a consistent snapshot without locking the database, so users can continue working during the backup.

Scheduling Database Backups

Add a cron entry to automate database backups:

# For PostgreSQL - hourly backups, retain 48 hours
0 * * * * pg_dump -U nextcloud -h localhost nextcloud_db --format=custom --compress=6 -f /backup/nextcloud/db/nextcloud_$(date +\%Y\%m\%d_\%H\%M\%S).pgdump && find /backup/nextcloud/db/ -name "*.pgdump" -mmin +2880 -delete

# For MySQL - hourly backups, retain 48 hours
0 * * * * mysqldump --defaults-extra-file=/root/.my.cnf --single-transaction --routines --triggers nextcloud_db | gzip > /backup/nextcloud/db/nextcloud_$(date +\%Y\%m\%d_\%H\%M\%S).sql.gz && find /backup/nextcloud/db/ -name "*.sql.gz" -mmin +2880 -delete

The find ... -mmin +2880 -delete command removes backups older than 48 hours (2,880 minutes), preventing disk space exhaustion. The often-seen -mtime +2 is not equivalent: find truncates file age to whole days, so -mtime +2 only matches files older than three full days. The MySQL entry reads its credentials from /root/.my.cnf rather than passing -pYOURPASSWORD on the command line, where the password would be visible to every local user in the process list. Adjust the retention period based on your RPO requirements and available storage.
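Hard-coding a database password into a crontab or script exposes it via ps while the dump runs. mysqldump can instead read credentials from a protected options file — a minimal sketch, assuming the backup runs as root so the file is /root/.my.cnf (written here as ~/.my.cnf; values are placeholders):

```shell
# Store the MySQL credentials in an options file only the owner can read.
# Use the real credentials from config.php in place of the placeholders.
cat > ~/.my.cnf <<'EOF'
[mysqldump]
user=nextcloud
password=YOUR_DB_PASSWORD
EOF
chmod 600 ~/.my.cnf
```

With this file in place, mysqldump picks up the credentials automatically (or pass --defaults-extra-file=/root/.my.cnf explicitly as the first option), and the -p flag can be dropped from the cron entry.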

File Backups

The data directory is typically the largest component and benefits most from incremental backup strategies that only transfer changed files.

Option 1: rsync (Simple and Reliable)

For straightforward backup to a local or remote destination:

# Enable maintenance mode to ensure consistency
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --on

# Sync data directory to backup location
rsync -avz --delete \
    /var/www/nextcloud/data/ \
    /backup/nextcloud/data/

# Disable maintenance mode
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off

Maintenance mode prevents users from modifying files during the backup, ensuring consistency. The downside is a brief period of unavailability. For small deployments (under 50 GB of data), the rsync operation completes in minutes and the downtime is negligible. For larger deployments, consider the BorgBackup approach below.

Option 2: BorgBackup (Deduplicated and Efficient)

BorgBackup is a deduplicated, compressed backup tool that excels at backing up large data directories. After the initial full backup, subsequent backups only transfer changed file chunks, making them dramatically faster.

# Initialize the Borg repository (first time only)
borg init --encryption=repokey /backup/borg/nextcloud

# Create a backup
borg create \
    --stats --progress --compression zstd,6 \
    /backup/borg/nextcloud::nextcloud-{now:%Y%m%d-%H%M%S} \
    /var/www/nextcloud/data \
    /var/www/nextcloud/config \
    /var/www/nextcloud/themes

# Prune old backups (keep 24 hourly, 7 daily, 4 weekly, 6 monthly)
borg prune \
    /backup/borg/nextcloud \
    --keep-hourly=24 \
    --keep-daily=7 \
    --keep-weekly=4 \
    --keep-monthly=6

# Compact the repository to reclaim space
borg compact /backup/borg/nextcloud

BorgBackup's deduplication is particularly effective for Nextcloud data directories, where many files remain unchanged between backup cycles. A 500 GB data directory where 5% of files change daily might produce incremental backups of only 25-30 GB, completing in minutes rather than hours.

The --encryption=repokey flag encrypts the backup repository with a passphrase. Store this passphrase securely — without it, the backup is unrecoverable. Export and back up the encryption key separately:

borg key export /backup/borg/nextcloud /secure/location/borg-key-backup.txt

Option 3: Ceph Snapshots (For Object Storage Deployments)

If your Nextcloud data directory resides on Ceph RBD or CephFS (as described in our S3/Ceph configuration guide), storage-level snapshots provide near-instant point-in-time backups with zero application downtime:

# Create a CephFS snapshot
mkdir /mnt/cephfs/nextcloud-data/.snap/backup-$(date +%Y%m%d-%H%M%S)

# List existing snapshots
ls /mnt/cephfs/nextcloud-data/.snap/

# Delete old snapshots (retain 48 hours; -mindepth 1 skips .snap itself)
find /mnt/cephfs/nextcloud-data/.snap/ -mindepth 1 -maxdepth 1 -mmin +2880 -exec rmdir {} \;

Ceph snapshots are copy-on-write, meaning they consume only the space needed for data that changes after the snapshot. A 1 TB data directory with 5% daily churn creates snapshots that consume roughly 50 GB of additional space per day.

Configuration Backup

Configuration files change less frequently but are equally critical for recovery. Back them up alongside your database:

# Back up Nextcloud configuration
tar czf /backup/nextcloud/config/nc-config-$(date +%Y%m%d).tar.gz \
    /var/www/nextcloud/config/config.php \
    /var/www/nextcloud/themes/ \
    /etc/nginx/sites-available/nextcloud* \
    /etc/php/*/fpm/pool.d/nextcloud.conf \
    /etc/letsencrypt/ \
    /var/spool/cron/crontabs/www-data 2>/dev/null

# Retain 30 days of config backups
find /backup/nextcloud/config/ -name "nc-config-*.tar.gz" -mtime +30 -delete

Back up all of /etc/letsencrypt/, not just the live/ directory: live/ contains only symlinks into archive/, so an archive that captures live/ alone restores broken links instead of certificates and private keys.

Security note: The config.php file contains database passwords, secret keys, and possibly S3 credentials. Ensure backup destinations have restricted access permissions. Encrypt backups that leave the server, especially for off-site replication.
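Encrypting an archive before handing it to any off-site transport is a one-liner with openssl. A sketch with a throwaway file and an inline passphrase (in production, read the passphrase from a root-only file via -pass file:... instead):

```shell
# Encrypt an archive, then decrypt and compare to prove the round trip works.
ARCHIVE=$(mktemp)   # stands in for nc-config-YYYYMMDD.tar.gz
echo "pretend archive contents" > "$ARCHIVE"

# AES-256 with PBKDF2 key derivation
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in "$ARCHIVE" -out "$ARCHIVE.enc" -pass pass:example-passphrase

# Verification: decrypt and byte-compare against the original
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in "$ARCHIVE.enc" -out "$ARCHIVE.dec" -pass pass:example-passphrase
cmp "$ARCHIVE" "$ARCHIVE.dec"
```

Only the .enc file should leave the server; the passphrase follows the same rule as the Borg key — store it somewhere that survives the loss of this machine.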

Complete Automation Script

Here is a comprehensive backup script that handles all three components, manages maintenance mode, and logs the results. Save it as /usr/local/bin/nextcloud-backup.sh:

#!/bin/bash
# Nextcloud Complete Backup Script
# Usage: /usr/local/bin/nextcloud-backup.sh

set -euo pipefail

# Configuration
NC_PATH="/var/www/nextcloud"
BACKUP_BASE="/backup/nextcloud"
BORG_REPO="/backup/borg/nextcloud"
DB_TYPE="pgsql"  # pgsql or mysql
DB_NAME="nextcloud_db"
DB_USER="nextcloud"
LOG_FILE="/var/log/nextcloud-backup.log"
DATE=$(date +%Y%m%d_%H%M%S)

# The Borg repo was initialized with --encryption=repokey, so supply the
# passphrase non-interactively (or use BORG_PASSCOMMAND to read it from a file)
export BORG_PASSPHRASE="your-borg-passphrase"

log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"; }

log "=== Nextcloud backup started ==="

# Create backup directories
mkdir -p "$BACKUP_BASE"/{db,config}

# Lift maintenance mode even if a later step fails (set -e aborts the script)
trap 'sudo -u www-data php "$NC_PATH/occ" maintenance:mode --off' EXIT

# Step 1: Enable maintenance mode so the database dump and the file backup
# describe the same point in time
log "Enabling maintenance mode..."
sudo -u www-data php "$NC_PATH/occ" maintenance:mode --on

# Step 2: Database backup
log "Starting database backup..."
if [ "$DB_TYPE" = "pgsql" ]; then
    pg_dump -U "$DB_USER" -h localhost "$DB_NAME" \
        --format=custom --compress=6 \
        -f "$BACKUP_BASE/db/nextcloud_${DATE}.pgdump"
    log "PostgreSQL dump completed: nextcloud_${DATE}.pgdump"
elif [ "$DB_TYPE" = "mysql" ]; then
    mysqldump --single-transaction --routines --triggers \
        -u "$DB_USER" "$DB_NAME" \
        | gzip > "$BACKUP_BASE/db/nextcloud_${DATE}.sql.gz"
    log "MySQL dump completed: nextcloud_${DATE}.sql.gz"
fi

# Step 3: File backup with BorgBackup
log "Starting BorgBackup..."
borg create \
    --stats --compression zstd,6 \
    "$BORG_REPO"::nextcloud-"$DATE" \
    "$NC_PATH/data" \
    "$NC_PATH/config" \
    "$NC_PATH/themes" 2>&1 | tee -a "$LOG_FILE"
log "BorgBackup completed."

# Step 4: Disable maintenance mode (the EXIT trap repeats this harmlessly)
log "Disabling maintenance mode..."
sudo -u www-data php "$NC_PATH/occ" maintenance:mode --off

# Step 5: Configuration backup
log "Backing up configuration files..."
tar czf "$BACKUP_BASE/config/nc-config-${DATE}.tar.gz" \
    "$NC_PATH/config/config.php" \
    /etc/nginx/sites-available/nextcloud* \
    /etc/php/*/fpm/pool.d/nextcloud.conf 2>/dev/null || true
log "Configuration backup completed."

# Step 6: Prune old backups
log "Pruning old backups..."
borg prune "$BORG_REPO" \
    --keep-hourly=24 --keep-daily=7 \
    --keep-weekly=4 --keep-monthly=6
borg compact "$BORG_REPO"

find "$BACKUP_BASE/db/" -name "*.pgdump" -mtime +7 -delete
find "$BACKUP_BASE/db/" -name "*.sql.gz" -mtime +7 -delete
find "$BACKUP_BASE/config/" -name "nc-config-*.tar.gz" -mtime +30 -delete
log "Pruning completed."

log "=== Nextcloud backup finished ==="

Make it executable and schedule it:

chmod +x /usr/local/bin/nextcloud-backup.sh

# Add to cron - run at 2 AM daily
echo "0 2 * * * root /usr/local/bin/nextcloud-backup.sh" > /etc/cron.d/nextcloud-backup
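The script appends to /var/log/nextcloud-backup.log indefinitely; a logrotate drop-in keeps the log bounded. A sketch, assuming a standard logrotate installation, saved as /etc/logrotate.d/nextcloud-backup:

```
/var/log/nextcloud-backup.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```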

Off-Site Replication

Local backups protect against software failures and accidental deletion, but they don't protect against hardware failure, data center outages, or physical disasters. Off-site replication sends a copy of your backups to a geographically separate location.

Using rclone for Off-Site Sync

rclone supports dozens of storage backends (S3, Google Cloud Storage, Backblaze B2, SFTP, and more). Configure an off-site destination and sync your backup repository:

# Configure rclone (interactive wizard)
rclone config
# Add a remote named "offsite" pointing to your S3-compatible storage

# Sync the Borg repository to off-site storage
rclone sync /backup/borg/nextcloud offsite:nextcloud-backups/borg/ \
    --transfers=8 \
    --progress

# Sync database dumps
rclone sync /backup/nextcloud/db offsite:nextcloud-backups/db/ \
    --transfers=4 \
    --progress

Using rsync over SSH

For replication to a second server (e.g., in a different data center):

# Set up SSH key authentication first
ssh-keygen -t ed25519 -f /root/.ssh/backup_key -N ""
ssh-copy-id -i /root/.ssh/backup_key backup-user@offsite-server.example.com

# Sync backups to remote server
rsync -avz --delete \
    -e "ssh -i /root/.ssh/backup_key" \
    /backup/borg/nextcloud/ \
    backup-user@offsite-server.example.com:/backup/nextcloud/borg/

Add the off-site sync to your backup script or schedule it as a separate cron job that runs after the primary backup completes.

Backup Verification and Restore Testing

A backup you've never tested restoring is not a backup — it's a hope. Schedule regular restore tests (monthly at minimum) to verify your backups are complete and your recovery procedure works within your RTO.

Verifying BorgBackup Archives

# List available backups
borg list /backup/borg/nextcloud

# Verify archive integrity
borg check --verify-data /backup/borg/nextcloud

# Test extracting specific files (dry run)
borg extract --dry-run --list \
    /backup/borg/nextcloud::nextcloud-20260122-020000

Verifying Database Dumps

# PostgreSQL: restore to a temporary database
createdb -U postgres nextcloud_restore_test
pg_restore -U postgres -d nextcloud_restore_test \
    /backup/nextcloud/db/nextcloud_20260122_020000.pgdump
echo "Restore test result: $?"
dropdb -U postgres nextcloud_restore_test

# MySQL: restore to a temporary database
mysql -u root -e "CREATE DATABASE nextcloud_restore_test;"
gunzip -c /backup/nextcloud/db/nextcloud_20260122_020000.sql.gz \
    | mysql -u root nextcloud_restore_test
echo "Restore test result: $?"
mysql -u root -e "DROP DATABASE nextcloud_restore_test;"
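Restore tests only happen if they are scheduled. Wrapping the checks above in a script and invoking it from cron keeps the monthly test from being forgotten — a sketch, with an assumed script path:

```
# /etc/cron.d/nextcloud-restore-test — 3 AM on the first of each month
# (the script path is illustrative; it should run the checks above and
# mail the result)
0 3 1 * * root /usr/local/bin/nextcloud-restore-test.sh >> /var/log/nextcloud-restore-test.log 2>&1
```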

Full Disaster Recovery Procedure

When everything fails and you need to restore from scratch, follow this procedure step by step. Practice it before you need it.

Step 1: Provision New Server

Deploy a new server with the same OS and PHP version as the original. If you're using a standard installation, install the base packages: Nginx/Apache, PHP-FPM, PostgreSQL/MySQL.

Step 2: Restore Configuration

# Extract configuration backup
tar xzf /backup/nextcloud/config/nc-config-20260122.tar.gz -C /

# Restore web server and PHP configs
systemctl restart nginx php8.2-fpm   # adjust the PHP-FPM unit name to your distribution and PHP version

Step 3: Restore Database

# PostgreSQL — recreate the application role first (use the password from
# config.php), then the database owned by it
psql -U postgres -c "CREATE ROLE nextcloud LOGIN PASSWORD 'the-password-from-config.php';"
createdb -U postgres -O nextcloud nextcloud_db
pg_restore -U postgres -d nextcloud_db \
    /backup/nextcloud/db/nextcloud_20260122_020000.pgdump

# MySQL — recreate the database and the application user
mysql -u root -e "CREATE DATABASE nextcloud_db;"
mysql -u root -e "CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'the-password-from-config.php';
                  GRANT ALL PRIVILEGES ON nextcloud_db.* TO 'nextcloud'@'localhost';"
gunzip -c /backup/nextcloud/db/nextcloud_20260122_020000.sql.gz \
    | mysql -u root nextcloud_db

Step 4: Restore Files

# Using BorgBackup
cd /
borg extract /backup/borg/nextcloud::nextcloud-20260122-020000

# Fix permissions
chown -R www-data:www-data /var/www/nextcloud/data
chown -R www-data:www-data /var/www/nextcloud/config

Step 5: Verify and Finalize

# Disable maintenance mode
sudo -u www-data php /var/www/nextcloud/occ maintenance:mode --off

# Run file scan to reconcile any differences
sudo -u www-data php /var/www/nextcloud/occ files:scan --all

# Update the database schema if Nextcloud version differs
sudo -u www-data php /var/www/nextcloud/occ upgrade

# Check system status
sudo -u www-data php /var/www/nextcloud/occ status

Step 6: Validate User Access

Log in as an admin and verify:

  - Users can authenticate and see their files and folders
  - Shares (internal and public links) still resolve
  - Calendars and contacts sync correctly
  - External storage mounts, if any, are reachable
  - Background jobs are running (Settings → Administration → Basic settings)

Document the entire restore procedure with timestamps. This documentation becomes invaluable during actual disaster recovery scenarios, when stress and time pressure make it easy to skip steps or make errors. A tested, documented procedure is the difference between a 30-minute recovery and a 6-hour scramble.

Backup Strategy for Multi-Server Nextcloud Deployments

If your Nextcloud deployment spans multiple servers — a web/application server, a database server, and a file storage server — coordinating backups across them requires additional planning.

Coordinating Cross-Server Consistency

The critical challenge with multi-server backups is ensuring that the database snapshot and the file system snapshot represent the same point in time. If you back up the database at 2:00 AM and the file system at 2:15 AM, any files uploaded in that 15-minute window will exist on disk but not in the database, potentially causing orphaned files or metadata inconsistencies after a restore.

To ensure consistency across servers:

  1. Enable maintenance mode on the Nextcloud application server first. This prevents all user activity and ensures no new writes occur during the backup window.
  2. Trigger the database backup on the database server.
  3. Once the database dump is confirmed complete, trigger the file system backup on the storage server.
  4. After both backups complete, disable maintenance mode.

Orchestrate this with SSH commands from a central backup coordinator script:

#!/bin/bash
# Multi-server backup coordinator
set -euo pipefail

NC_SERVER="nextcloud-app.internal"
DB_SERVER="nextcloud-db.internal"
STORAGE_SERVER="nextcloud-storage.internal"

# Lift maintenance mode even if a backup step fails mid-run
trap 'ssh www-data@$NC_SERVER "php /var/www/nextcloud/occ maintenance:mode --off"' EXIT

# Enable maintenance mode
ssh www-data@$NC_SERVER "php /var/www/nextcloud/occ maintenance:mode --on"

# Backup database
ssh backup@$DB_SERVER "/usr/local/bin/backup-nextcloud-db.sh"

# Backup files
ssh backup@$STORAGE_SERVER "/usr/local/bin/backup-nextcloud-files.sh"

# Disable maintenance mode (the EXIT trap repeats this harmlessly)
ssh www-data@$NC_SERVER "php /var/www/nextcloud/occ maintenance:mode --off"

Handling Large Data Volumes

Organizations with terabytes of file data face a practical challenge: a full rsync or BorgBackup of a 5 TB data directory might take hours, resulting in an unacceptable maintenance mode window. Several strategies mitigate this:

  - Take a storage-level snapshot (LVM, ZFS, or Ceph) during a brief maintenance window, bring Nextcloud back online, and run the actual backup from the read-only snapshot at leisure.
  - Use a deduplicating incremental tool such as BorgBackup so that after the initial run only changed chunks are read and transferred.
  - Split the backup by user or top-level directory and run the jobs in parallel.
  - For object-storage-backed deployments, protect bulk data at the object storage layer (versioning or replication) instead of walking the file system.

Monitoring Backup Health

Automated backups that fail silently are worse than no backups at all — they give you false confidence. Implement monitoring to catch backup failures immediately.

Log-Based Monitoring

The backup script above logs to /var/log/nextcloud-backup.log. Monitor this file for the "Nextcloud backup finished" message. If the message doesn't appear within the expected window, the backup likely failed:

# Simple monitoring check — add to a monitoring cron
LAST_SUCCESS=$(grep "Nextcloud backup finished" /var/log/nextcloud-backup.log | tail -1 | cut -d']' -f1 | tr -d '[')

# An empty $LAST_SUCCESS means no completed backup was ever logged — alert on
# that too, not just on a stale timestamp
if [ -z "$LAST_SUCCESS" ] || \
   [ $(( ($(date +%s) - $(date -d "$LAST_SUCCESS" +%s)) / 3600 )) -gt 26 ]; then
    echo "WARNING: no successful Nextcloud backup completed in the last 26 hours" | \
        mail -s "Nextcloud Backup Alert" admin@example.com
fi

Backup Size Monitoring

A backup that suddenly becomes much smaller than usual might indicate a partial failure. Track backup sizes over time and alert on significant deviations:

# Check latest database dump size
LATEST_DB=$(ls -t /backup/nextcloud/db/*.pgdump 2>/dev/null | head -1)
CURRENT_SIZE=$(stat -c%s "$LATEST_DB")
# Compare with a baseline and alert if more than 20% smaller
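The commented comparison can be completed with a stored baseline that rolls forward after each check. A self-contained sketch with dummy numbers (in production, BASELINE_FILE would live somewhere persistent such as /var/lib/nextcloud-backup/, and CURRENT_SIZE would come from the latest dump as above):

```shell
# Alert when the newest dump is more than 20% smaller than the last one seen.
BASELINE_FILE=$(mktemp)          # e.g. /var/lib/nextcloud-backup/db-size-baseline
echo 1000000 > "$BASELINE_FILE"  # previous dump was ~1 MB
CURRENT_SIZE=750000              # this dump is suspiciously smaller

BASELINE=$(cat "$BASELINE_FILE")
THRESHOLD=$(( BASELINE * 80 / 100 ))   # 80% of the baseline
if [ "$CURRENT_SIZE" -lt "$THRESHOLD" ]; then
    echo "WARNING: dump shrank to ${CURRENT_SIZE} bytes (baseline ${BASELINE})"
    # pipe to: mail -s "Nextcloud Backup Size Alert" admin@example.com
fi

echo "$CURRENT_SIZE" > "$BASELINE_FILE"   # roll the baseline forward
```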

Infrastructure Considerations

Your backup strategy is only as reliable as the infrastructure supporting it. Key factors to consider:

  - Keep at least one backup copy on hardware that shares no failure domain with the primary server — separate disks at minimum, a separate data center ideally
  - Size backup storage for your full retention schedule, not just a single copy
  - Ensure enough network bandwidth to complete a full restore within your RTO
  - Encrypt backups at rest and in transit, and protect the encryption keys separately from the backups themselves
  - Monitor the backup infrastructure itself: a full backup disk silently fails every subsequent backup

MassiveGRID's managed Nextcloud hosting includes automated daily backups with off-site replication to geographically separate data centers, encrypted at rest and in transit. The backup infrastructure is monitored 24/7, and restore procedures are tested regularly by the operations team. For organizations where data protection is non-negotiable — and it always should be — managed hosting eliminates the risk of backup misconfiguration and ensures your recovery objectives are met consistently.

If you're running a self-hosted Nextcloud instance and want to offload the complexity of backup management while maintaining full data sovereignty, explore MassiveGRID's Nextcloud hosting to see how enterprise-grade backup infrastructure is built into every deployment.