Running a self-hosted platform like Coolify gives you full control over your applications, databases, and infrastructure. That freedom, however, comes with a critical responsibility: backups. Without a solid backup strategy, a single disk failure, accidental deletion, or misconfigured deployment can wipe out weeks or months of work in seconds.
This guide walks through every layer of a production-grade Coolify backup strategy — from Coolify's built-in database backup features to external storage targets, verification procedures, and infrastructure-level protection with MassiveGRID's Ceph-powered cloud.
Why Backups Matter for Self-Hosted Platforms
Managed PaaS providers handle backups behind the scenes. When you self-host with Coolify, that responsibility shifts to you. The risks are real and varied:
- Human error — An accidental docker compose down -v removes all named volumes, including database data.
- Failed deployments — A bad configuration change can corrupt application state or overwrite environment variables.
- Disk failure — Even SSDs fail. Without replication or backups, data loss is permanent.
- Security incidents — Ransomware or unauthorized access can encrypt or delete your data. Off-site backups are your last line of defense.
- Software bugs — Database corruption from application bugs or unexpected crashes can leave you with unusable data.
The 3-2-1 backup rule applies just as much to self-hosted platforms as it does to enterprise infrastructure: keep at least 3 copies of your data, on 2 different media types, with 1 copy off-site.
Coolify's Built-In Backup Features
Coolify ships with native backup capabilities that make protecting your databases straightforward. Understanding what is available out of the box is the first step toward building a complete strategy.
S3-Compatible Backup Destinations
Coolify supports configuring one or more S3-compatible storage destinations for backups. This includes:
- Amazon S3 — The original object storage service, widely supported.
- MinIO — Self-hosted S3-compatible storage you can run alongside Coolify on the same server or on a separate node.
- Backblaze B2 — Cost-effective cloud storage with S3-compatible API.
- Cloudflare R2 — Zero egress-fee object storage with S3 compatibility.
- Wasabi — Hot storage with no egress or API request fees.
To configure a backup destination in Coolify, navigate to Settings → Backup (or Server → Destinations depending on your version) and add your S3 credentials: endpoint URL, access key, secret key, bucket name, and region.
Scheduled Database Backups
Once a storage destination is configured, Coolify lets you enable automatic scheduled backups for any database resource. For each database, you can set:
- Backup frequency — Using cron expressions (e.g., 0 */6 * * * for every 6 hours).
- Retention count — How many backup copies to keep before rotating old ones.
- Backup destination — Which S3-compatible target to use.
Coolify executes the appropriate native dump tool for each database engine, compresses the output, and uploads it to your configured destination automatically.
Setting Up Database Backups in Coolify
Coolify supports automated backups for all major database engines it can deploy. The process differs slightly depending on which database you are running.
PostgreSQL Backups
PostgreSQL is one of the most popular databases deployed through Coolify. Coolify uses pg_dump under the hood to create logical backups of your PostgreSQL databases.
To enable PostgreSQL backups:
- Open your PostgreSQL resource in the Coolify dashboard.
- Navigate to the Backups tab.
- Select your pre-configured S3 destination.
- Set the backup frequency using a cron expression. For production databases, 0 */4 * * * (every 4 hours) is a reasonable starting point.
- Set the retention count — keeping at least 7 daily backups gives you a week of recovery points.
- Enable the backup schedule and save.
For larger PostgreSQL databases, consider using pg_dump --format=custom via a manual script for more efficient compression and selective restore capabilities. You can run such scripts as Coolify scheduled tasks or through standard cron jobs on the host.
# Manual PostgreSQL backup script example
#!/bin/bash
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
CONTAINER_NAME="your-postgres-container"
DB_NAME="your_database"
BACKUP_DIR="/opt/backups/postgres"
mkdir -p $BACKUP_DIR
docker exec $CONTAINER_NAME pg_dump \
-U postgres \
--format=custom \
--compress=9 \
$DB_NAME > "$BACKUP_DIR/${DB_NAME}_${TIMESTAMP}.dump"
# Upload to S3
aws s3 cp "$BACKUP_DIR/${DB_NAME}_${TIMESTAMP}.dump" \
s3://your-bucket/postgres-backups/
# Clean up local copies older than 3 days
find $BACKUP_DIR -name "*.dump" -mtime +3 -delete
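Scripts like the one above are typically wired into cron on the host. A minimal crontab sketch — the /opt/scripts paths are placeholders, not part of Coolify; point them at wherever you keep your own scripts:

```
# Example crontab entries (edit with crontab -e).
# Script paths below are assumptions — adjust to your setup.

# PostgreSQL dump every 4 hours
0 */4 * * * /opt/scripts/postgres-backup.sh >> /var/log/postgres-backup.log 2>&1

# Volume backup daily at 02:30
30 2 * * * /opt/scripts/volume-backup.sh >> /var/log/volume-backup.log 2>&1
```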
MySQL and MariaDB Backups
For MySQL and MariaDB databases, Coolify uses mysqldump to generate SQL dump files. The setup process mirrors PostgreSQL:
- Open your MySQL or MariaDB resource in the Coolify dashboard.
- Go to the Backups tab.
- Choose your S3 destination and configure the schedule.
- Set retention count and enable backups.
For databases with large tables, mysqldump with the --single-transaction flag ensures a consistent snapshot without locking tables:
# Manual MySQL/MariaDB backup with single-transaction
docker exec your-mysql-container mysqldump \
-u root \
-p"$MYSQL_ROOT_PASSWORD" \
--single-transaction \
--routines \
--triggers \
--all-databases | gzip > "mysql_backup_$(date +%Y%m%d_%H%M%S).sql.gz"
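Restoring such a dump is the mirror operation — decompress it and stream it back into the container over stdin. A minimal sketch, assuming a container name and a MYSQL_ROOT_PASSWORD environment variable that match your deployment:

```shell
#!/bin/bash
# Sketch: restore a gzipped mysqldump into a running MySQL/MariaDB container.
# Container name and MYSQL_ROOT_PASSWORD are placeholders for your setup.
restore_mysql_dump() {
  local dump_file="$1"                       # e.g. mysql_backup_20240101_020000.sql.gz
  local container="${2:-your-mysql-container}"
  gunzip -c "$dump_file" | docker exec -i "$container" \
    mysql -u root -p"$MYSQL_ROOT_PASSWORD"
}

# Usage (uncomment to run):
# restore_mysql_dump mysql_backup_20240101_020000.sql.gz your-mysql-container
```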
MongoDB Backups
If you run MongoDB through Coolify, backups use mongodump. The procedure is the same: configure the S3 destination, set your schedule and retention, and Coolify handles the rest. For replica sets, ensure your backup connects to a secondary node to avoid impacting primary performance.
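As a sketch of what the equivalent manual dump looks like — container name is an assumption, and on a standalone instance (no replica set) you would drop the read preference:

```shell
#!/bin/bash
# Sketch: archive-format mongodump from a Coolify-managed MongoDB container.
# The container name is a placeholder.
backup_mongo() {
  local container="${1:-your-mongo-container}"
  local out="mongo_backup_$(date +%Y%m%d_%H%M%S).archive.gz"
  # --readPreference=secondary keeps dump load off the primary in a replica set
  docker exec "$container" mongodump \
    --archive --gzip --readPreference=secondary > "$out"
  echo "$out"
}

# Restore is the mirror image, streamed over stdin:
# docker exec -i your-mongo-container mongorestore --archive --gzip < mongo_backup_20240101_020000.archive.gz
```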
Redis Backups
Redis is often used as a cache, but if you use it for persistent data (queues, session storage, rate limiting), backups are essential. Coolify can back up Redis via RDB snapshots. You can also configure Redis appendonly mode for point-in-time recovery and periodically copy the AOF and RDB files to your S3 destination.
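A manual RDB snapshot can be scripted in a few lines. This sketch uses the blocking SAVE command, which is fine for small datasets (for larger ones, prefer BGSAVE and poll LASTSAVE); the container name and backup path are assumptions:

```shell
#!/bin/bash
# Sketch: snapshot a Redis container's RDB file and keep a timestamped copy.
# Container name and destination directory are placeholders.
backup_redis() {
  local container="${1:-your-redis-container}"
  local dest="${2:-/opt/backups/redis}"
  mkdir -p "$dest"
  # SAVE blocks until the snapshot is written to disk (ok for small datasets)
  docker exec "$container" redis-cli SAVE
  # The official Redis image stores its data under /data
  docker cp "$container":/data/dump.rdb \
    "$dest/dump_$(date +%Y%m%d_%H%M%S).rdb"
}

# backup_redis your-redis-container /opt/backups/redis
```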
Application Data and Persistent Volume Backups
Database backups alone are not enough. Your Coolify applications likely store critical data in Docker volumes or bind mounts: uploaded files, generated assets, configuration files, SSL certificates, and more.
Identifying Critical Volumes
Start by auditing which volumes your applications use:
# List all Docker volumes
docker volume ls
# Inspect a specific volume to find its mount point
docker volume inspect your_app_data
# List volumes used by running containers
docker ps --format '{{.Names}}' | while read c; do
echo "=== $c ==="
docker inspect $c --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
done
Backing Up Docker Volumes
Docker volumes live on the host filesystem (typically under /var/lib/docker/volumes/). You can back them up by creating compressed tar archives:
# Back up a Docker volume to a tar archive
docker run --rm \
-v your_app_data:/source:ro \
-v /opt/backups:/backup \
alpine tar czf /backup/app_data_$(date +%Y%m%d_%H%M%S).tar.gz -C /source .
# Upload to S3
aws s3 cp /opt/backups/app_data_*.tar.gz s3://your-bucket/volume-backups/
Coolify Configuration Backup
Do not forget to back up Coolify itself. Coolify stores its configuration, SSH keys, and internal database in specific directories:
- /data/coolify — The primary Coolify data directory containing environment files, SSH keys, and the internal database.
- /data/coolify/ssh — SSH keys used to connect to remote servers.
- /data/coolify/database — Coolify's internal PostgreSQL database that stores all resource configurations.
# Back up the entire Coolify data directory
tar czf /opt/backups/coolify_config_$(date +%Y%m%d_%H%M%S).tar.gz \
/data/coolify
# This backup lets you restore your entire Coolify setup
# including all resource definitions, environment variables,
# and server connections.
Important: Your Coolify configuration backup contains sensitive data including database passwords, API keys, and SSH private keys. Encrypt these backups before uploading to any remote storage. Use gpg or age for encryption.
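A minimal sketch of client-side encryption with gpg in symmetric mode — the passphrase file path is an assumption; keep that file readable only by root:

```shell
#!/bin/bash
# Sketch: encrypt a backup archive before uploading it off-site.
# /root/.backup-passphrase is a placeholder for any root-only passphrase file.
encrypt_backup() {
  local archive="$1"   # e.g. /opt/backups/coolify_config_20240101.tar.gz
  gpg --batch --symmetric --cipher-algo AES256 \
    --passphrase-file /root/.backup-passphrase \
    -o "${archive}.gpg" "$archive"
}

decrypt_backup() {
  local encrypted="$1"
  gpg --batch --decrypt \
    --passphrase-file /root/.backup-passphrase \
    -o "${encrypted%.gpg}" "$encrypted"
}

# encrypt_backup /opt/backups/coolify_config_20240101.tar.gz
# aws s3 cp /opt/backups/coolify_config_20240101.tar.gz.gpg s3://your-bucket/
```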
Backup to External Storage
Storing backups on the same server as your applications defeats the purpose. External storage ensures your data survives even if the entire server is lost. Here are the most common S3-compatible targets for Coolify backups.
Amazon S3
Amazon S3 remains the gold standard for object storage. To configure it as a Coolify backup destination:
# Coolify S3 Destination Settings
Endpoint: https://s3.amazonaws.com
Region: us-east-1
Bucket: your-coolify-backups
Access Key: AKIAIOSFODNN7EXAMPLE
Secret Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Enable S3 versioning on your bucket for an additional layer of protection. Even if a backup is overwritten or deleted, you can recover previous versions. Combine this with S3 lifecycle rules to transition older backups to Glacier for cost savings.
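Both settings can be applied from the AWS CLI. A hedged sketch — the bucket name and the 30-day transition window are assumptions:

```shell
#!/bin/bash
# Sketch: enable versioning and a Glacier lifecycle rule on a backup bucket.
BUCKET="your-coolify-backups"   # placeholder

enable_bucket_protection() {
  # Keep previous versions of overwritten or deleted backups
  aws s3api put-bucket-versioning \
    --bucket "$BUCKET" \
    --versioning-configuration Status=Enabled

  # Transition backups older than 30 days to Glacier
  cat > /tmp/lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}
EOF
  aws s3api put-bucket-lifecycle-configuration \
    --bucket "$BUCKET" \
    --lifecycle-configuration file:///tmp/lifecycle.json
}

# enable_bucket_protection
```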
MinIO (Self-Hosted)
MinIO is a high-performance, S3-compatible object storage system you can self-host. Running MinIO on a separate Cloud VPS from your Coolify server creates a true off-site backup without recurring cloud storage fees beyond the VPS cost.
# Deploy MinIO via Docker on a separate server
docker run -d \
--name minio \
-p 9000:9000 \
-p 9001:9001 \
-v /data/minio:/data \
-e MINIO_ROOT_USER=minioadmin \
-e MINIO_ROOT_PASSWORD=your-secure-password \
minio/minio server /data --console-address ":9001"
# Create a backup bucket
mc alias set myminio http://localhost:9000 minioadmin your-secure-password
mc mb myminio/coolify-backups
Backblaze B2
Backblaze B2 offers S3-compatible storage at a fraction of the cost of AWS S3. At $0.006 per GB per month for storage, it is an excellent choice for backup retention. Coolify connects to B2 using the S3-compatible API:
# Backblaze B2 S3-Compatible Settings
Endpoint: https://s3.us-west-004.backblazeb2.com
Region: us-west-004
Bucket: your-coolify-backups
Access Key: your-b2-application-key-id
Secret Key: your-b2-application-key
Backup Verification and Testing
A backup you have never tested is not a backup — it is a hope. Verification should be a scheduled, repeatable process, not something you discover is broken during an actual emergency.
Automated Verification Script
Create a script that periodically downloads the latest backup and verifies its integrity:
#!/bin/bash
# backup-verify.sh - Verify Coolify database backups
set -euo pipefail
BACKUP_BUCKET="s3://your-bucket/postgres-backups"
VERIFY_DIR="/tmp/backup-verify"
LOG_FILE="/var/log/backup-verify.log"
mkdir -p $VERIFY_DIR
LATEST=$(aws s3 ls $BACKUP_BUCKET/ | sort | tail -1 | awk '{print $4}')
echo "[$(date)] Verifying backup: $LATEST" >> $LOG_FILE
# Download latest backup
aws s3 cp "$BACKUP_BUCKET/$LATEST" "$VERIFY_DIR/$LATEST"
# Verify PostgreSQL backup integrity
if pg_restore --list "$VERIFY_DIR/$LATEST" > /dev/null 2>&1; then
echo "[$(date)] PASS: Backup $LATEST is valid" >> $LOG_FILE
else
echo "[$(date)] FAIL: Backup $LATEST is corrupted!" >> $LOG_FILE
# Send alert (webhook, email, etc.)
curl -s -X POST "https://your-webhook-url" \
-d "{\"text\": \"Backup verification FAILED for $LATEST\"}"
fi
# Clean up
rm -rf $VERIFY_DIR
Restore Testing
Beyond file integrity checks, periodically perform a full restore test. Spin up a temporary database container, restore the backup into it, and run basic queries to confirm data consistency:
# Spin up a temporary PostgreSQL container for restore testing
docker run -d --name pg-restore-test \
-e POSTGRES_PASSWORD=testpassword \
postgres:16
# Wait for PostgreSQL to be ready
sleep 5
# Restore the backup
docker exec -i pg-restore-test pg_restore \
-U postgres \
-d postgres \
--create \
< /tmp/backup-verify/latest.dump
# Run a verification query
docker exec pg-restore-test psql -U postgres -d your_database \
-c "SELECT COUNT(*) FROM users;"
# Clean up
docker rm -f pg-restore-test
Schedule restore tests at least monthly. If your data changes frequently, weekly tests provide more confidence.
Disaster Recovery Workflow
When disaster strikes, you need a clear, documented recovery procedure. Panic-driven recovery attempts often cause more damage than the original incident. Here is a step-by-step disaster recovery workflow for a Coolify deployment.
Step 1: Assess the Situation
Before restoring anything, determine what was lost:
- Is the server itself intact? Can you SSH in?
- Is Coolify still running? Check with docker ps.
- Which resources are affected — databases, applications, or both?
- When did the failure occur? This determines which backup to restore from.
Step 2: Provision New Infrastructure (if needed)
If the server is unrecoverable, provision a new Dedicated VPS or Cloud VPS and install Coolify fresh. MassiveGRID instances can be provisioned in under 60 seconds, minimizing downtime.
Step 3: Restore Coolify Configuration
# On the new server, restore Coolify's data directory
tar xzf coolify_config_backup.tar.gz -C /
# Restart Coolify services
cd /data/coolify/source
docker compose up -d
Step 4: Restore Databases
# Download the latest database backup from S3
aws s3 cp s3://your-bucket/postgres-backups/latest.dump /tmp/
# Restore into the running PostgreSQL container
# Restore into the running PostgreSQL container.
# The dump lives on the host, so stream it in over stdin —
# a path argument would be resolved inside the container, where the file does not exist.
docker exec -i your-postgres-container pg_restore \
-U postgres \
-d your_database \
--clean \
--if-exists \
< /tmp/latest.dump
Step 5: Restore Application Volumes
# Download and extract volume backup
aws s3 cp s3://your-bucket/volume-backups/app_data_latest.tar.gz /tmp/
# Restore to the Docker volume
docker run --rm \
-v your_app_data:/target \
-v /tmp:/backup \
alpine sh -c "cd /target && tar xzf /backup/app_data_latest.tar.gz"
Step 6: Verify and Monitor
After restoration, verify each application is functioning correctly. Check database connectivity, test critical user flows, and monitor logs for errors over the next 24 hours. Update DNS records if you migrated to a new server.
Recommended Backup Schedule
Not all data changes at the same rate. Tailor your backup frequency to the volatility and criticality of each data type:
| Data Type | Frequency | Retention | Method |
|---|---|---|---|
| Production databases | Every 4–6 hours | 7 days (rolling) | Coolify built-in + S3 |
| Application volumes | Daily | 14 days | Cron script + S3 |
| Coolify configuration | Daily + after changes | 30 days | Cron script + S3 (encrypted) |
| Redis / cache data | Every 12 hours | 3 days | RDB snapshot + S3 |
| Full server snapshot | Weekly | 4 weeks | Infrastructure-level snapshot |
Combine these schedules into a single backup orchestration script or use a tool like restic or borg for incremental, deduplicated backups that minimize storage costs while maintaining extensive history.
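As a sketch of the restic variant — the repository URL, password file, and backed-up paths are assumptions:

```shell
#!/bin/bash
# Sketch: incremental, deduplicated backups with restic to an S3 backend.
# Repository, credentials, and paths are placeholders for your setup.
export RESTIC_REPOSITORY="s3:s3.amazonaws.com/your-bucket/restic"
export RESTIC_PASSWORD_FILE="/root/.restic-pass"

restic_backup_run() {
  # One-time setup: restic init
  restic backup /data/coolify /opt/backups
  # Prune old snapshots while keeping a rolling history
  restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 3 --prune
}

# restic_backup_run
```

Because restic deduplicates at the chunk level, daily full-looking backups cost little more than incrementals in storage.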
Infrastructure-Level Protection with MassiveGRID
Application-level backups are essential, but they represent only one layer of defense. The infrastructure your Coolify instance runs on matters just as much. MassiveGRID provides multiple layers of data protection that work alongside your backup strategy.
Ceph Distributed Storage with 3x Replication
Every Cloud VPS and Dedicated VPS on MassiveGRID is backed by Ceph distributed storage with 3x data replication. This means your disk data is automatically written to three separate physical drives across different storage nodes. If a drive or even an entire storage node fails, your data remains available with zero downtime.
This infrastructure-level replication protects against:
- Hardware failure — Drives and nodes can fail without data loss.
- Silent data corruption — Ceph performs regular scrubbing to detect and repair bit-rot.
- Storage node maintenance — Nodes can be taken offline for maintenance without affecting availability.
Note: Infrastructure-level replication protects against hardware failure but does not protect against accidental deletion, application bugs, or security breaches. You still need application-level backups for those scenarios. Think of Ceph replication as your first safety net and application backups as your second.
Multiple Data Center Locations
MassiveGRID operates data centers in New York, London, Frankfurt, and Singapore. For maximum disaster resilience, run your Coolify production server in one location and send backups to a MinIO instance or use MassiveGRID Backup Services in a different region. This geographic separation protects against data center-level incidents.
Security Hardening Your Backup Pipeline
Backups are a high-value target for attackers. Combine your backup strategy with the security practices covered in our Coolify security hardening guide:
- Encrypt backups at rest — Use client-side encryption before uploading to any S3 target.
- Use IAM policies with least privilege — Your backup credentials should only have PutObject and GetObject permissions on the specific backup bucket.
- Enable S3 Object Lock — Immutable backups prevent ransomware from encrypting or deleting your backup history.
- Rotate credentials — Change S3 access keys quarterly and after any suspected compromise.
- Audit backup access — Enable S3 access logging to track who accesses your backups and when.
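The least-privilege point translates into a small IAM policy document. A sketch — the bucket name in the ARN and the user name are placeholders:

```shell
#!/bin/bash
# Sketch: write a least-privilege IAM policy for backup credentials.
# The bucket name in the ARN is a placeholder.
write_backup_policy() {
  cat > "$1" <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::your-coolify-backups/*"
    }
  ]
}
EOF
}

write_backup_policy /tmp/backup-policy.json
# Attach with:
# aws iam put-user-policy --user-name backup-user \
#   --policy-name coolify-backup --policy-document file:///tmp/backup-policy.json
```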
Putting It All Together
A robust Coolify backup strategy combines multiple layers of protection:
- Coolify built-in backups for automated database dumps to S3-compatible storage.
- Custom scripts for application volume backups, Coolify configuration, and Redis data.
- External storage with providers like Backblaze B2, MinIO, or Amazon S3 for off-site copies.
- Verification and testing on a regular schedule to ensure backups are restorable.
- A documented disaster recovery workflow so recovery is fast and orderly under pressure.
- Infrastructure-level protection with MassiveGRID's Ceph 3x replication as your foundation.
Self-hosting with Coolify gives you control that no managed platform can match. With the right backup strategy, you get that control without sacrificing data safety. Start with Coolify's built-in database backups, layer on volume and configuration backups, send everything off-site, test regularly, and build on top of resilient infrastructure.
Infrastructure-Level Data Protection
- Cloud VPS — From $1.99/mo. Ceph distributed storage with 3x data replication at the infrastructure level.
- Dedicated VPS — From $4.99/mo. Dedicated resources with the same Ceph 3x replication underneath.
- Backup Services — Managed backup solutions for additional off-site protection.