Frontend-only deployments are simple enough — push a React or Next.js app to a VPS and you're running. But most production applications need more. They need a database. They need background workers. They need Redis for sessions or job queues. And the moment you introduce multiple services, deployment complexity multiplies.
Coolify handles this cleanly. It can deploy your application code from Git, provision PostgreSQL and Redis through one-click service templates, and orchestrate everything with Docker networking so your services discover each other automatically. One VPS, one dashboard, one bill — and your application talks to its database over a local network interface instead of crossing the internet.
This guide walks through deploying a complete Node.js + PostgreSQL stack on Coolify, from a single Git-based application to a full Docker Compose setup with Redis and automated database backups. If you haven't installed Coolify yet, start with our Coolify installation guide.
Why Self-Host Your Full Stack
Managed platforms fragment your infrastructure. Heroku charges per dyno for your app and per add-on for your database. Railway and Render separate compute from storage billing. Vercel handles the frontend but pushes you toward third-party database providers like PlanetScale or Neon, each with their own pricing model and latency characteristics.
Self-hosting your entire stack on a single VPS eliminates this fragmentation. Your Node.js application and PostgreSQL database share the same physical machine, communicating over Docker's internal bridge network with sub-millisecond latency. There are no egress charges for database queries, no connection pooling complications from distant servers, and no vendor-specific ORMs or adapters required. You use standard PostgreSQL with the standard pg driver.
The financial equation is straightforward. A Dedicated VPS starting at $4.99/month gives you dedicated CPU cores for consistent query performance, enough RAM for your application and database to coexist comfortably, and NVMe storage for fast reads and writes. That single monthly cost replaces what would be $30–$80/month across fragmented managed services for the same workload.
Deploying the Node.js Application
With Coolify running on your VPS, deploying a Node.js application starts from the dashboard. Navigate to Projects → Add New Resource → Public/Private Repository and connect your GitHub account through Coolify's GitHub App integration.
Git source configuration
Select the repository containing your Node.js application and choose the branch you want to deploy. Coolify supports monorepos — if your backend lives in a subdirectory like /api or /server, you can set the base directory in the resource configuration so Coolify builds from that path.
Coolify uses Nixpacks for automatic build detection. For a standard Node.js application, Nixpacks reads your package.json, identifies the runtime, installs dependencies, and runs your build script — no Dockerfile needed. For the start command, Nixpacks prefers the start script in package.json and falls back to node index.js (or the file named in the main field). Override the start command in Coolify's resource settings if you use a different entry point such as node dist/server.js.
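If the detected start command doesn't match your project, the simplest fix is to declare explicit scripts in package.json (the paths here are illustrative):

```json
{
  "scripts": {
    "build": "tsc",
    "start": "node dist/server.js"
  }
}
```

With a start script present, Nixpacks runs it on every deploy, so the entry point lives in your repository rather than in dashboard configuration.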
Environment variables
Configure your environment variables in Coolify's resource settings before the first deploy. At minimum, a full-stack Node.js app typically needs:
- NODE_ENV=production
- PORT=3000 (or whichever port your app listens on)
- DATABASE_URL (you'll set this after provisioning PostgreSQL)
- Any API keys, secrets, or service credentials your app requires
Coolify stores environment variables encrypted and injects them into the container at runtime. You can mark variables as build-time only if they're needed during the npm run build step but not at runtime.
Health check configuration
Add a health check endpoint to your application — a simple /health or /api/health route that returns a 200 status. In Coolify's resource configuration, set the health check path and interval. Coolify will poll this endpoint and automatically restart the container if it becomes unresponsive, providing self-healing without external monitoring tools.
```javascript
// health.js - Express example
app.get('/health', async (req, res) => {
  try {
    await db.query('SELECT 1');
    res.status(200).json({ status: 'healthy', timestamp: Date.now() });
  } catch (err) {
    res.status(503).json({ status: 'unhealthy', error: err.message });
  }
});
```
This health check also verifies database connectivity, which means Coolify will detect and respond to both application crashes and database connection failures.
Provisioning PostgreSQL via Coolify's One-Click Services
Coolify ships with one-click service templates for dozens of popular databases and tools. PostgreSQL is one of the most commonly deployed — and Coolify supports versions 14 through 18, including the pgvector extension for AI/ML workloads.
Creating the database service
Navigate to your project and select Add New Resource → Database → PostgreSQL. Choose your preferred version (PostgreSQL 16 is a safe default if you have no version constraint), and Coolify will provision a Docker container with persistent volume storage. The database credentials — host, port, username, password, and database name — are generated automatically and displayed in the resource details.
Connecting your application to the database
Copy the internal connection string from the PostgreSQL resource. Because both your Node.js application and PostgreSQL run within Coolify's Docker network, you use the internal hostname (typically the service name) rather than localhost or a public IP. The connection string looks like:
postgresql://postgres:your_generated_password@your-postgres-service:5432/your_database
Paste this as the DATABASE_URL environment variable in your Node.js application's resource settings. Redeploy the application, and it will connect to PostgreSQL over Docker's internal network — no ports exposed to the internet, no firewall rules to manage.
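In application code, this connection string is consumed by the standard pg driver mentioned earlier. A minimal sketch of validating it at startup, so a missing or malformed DATABASE_URL fails fast with a clear error instead of a mysterious query timeout. Only Node's built-in URL parser is used; the Pool usage in the trailing comment assumes the pg package is installed.

```javascript
// Fail fast if DATABASE_URL is missing or malformed, instead of letting
// the first query hang or time out at runtime.
function validateDatabaseUrl(raw) {
  if (!raw) throw new Error('DATABASE_URL is not set');
  const url = new URL(raw); // throws on malformed input
  if (url.protocol !== 'postgresql:' && url.protocol !== 'postgres:') {
    throw new Error(`unexpected protocol in DATABASE_URL: ${url.protocol}`);
  }
  return {
    host: url.hostname,
    port: Number(url.port || 5432),
    database: url.pathname.slice(1),
  };
}

// With `pg` installed, the validated string feeds straight into a Pool:
//   const { Pool } = require('pg');
//   validateDatabaseUrl(process.env.DATABASE_URL);
//   const pool = new Pool({ connectionString: process.env.DATABASE_URL, max: 10 });
```

Note that the hostname is the Coolify service name, not localhost — the check above will catch a connection string accidentally copied from a local development environment.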
Running initial migrations
If your application uses an ORM like Prisma, Drizzle, or Knex, you'll want migrations to run on deploy. The simplest approach is to add a migration command to your build or start script in package.json:
```json
{
  "scripts": {
    "build": "tsc && prisma generate",
    "start": "prisma migrate deploy && node dist/server.js"
  }
}
```
This runs prisma migrate deploy (which applies pending migrations without prompting) before starting the application on every deployment. For Knex or Drizzle, substitute the equivalent CLI command. Since the database connection is already established through Docker networking, migrations execute immediately without connection timeout issues.
Full-Stack Deployment with Docker Compose
For more complex stacks — or if you prefer defining your entire infrastructure as code — Coolify has native Docker Compose support. This is a significant advantage over some alternatives: Dokploy supports Compose files, but CapRover requires you to adapt your stack into its Captain Definition format. Coolify accepts standard docker-compose.yml files with minimal modifications.
Example: Node.js + PostgreSQL + Redis
Here's a production-ready Docker Compose file for a typical full-stack Node.js application. Create this as docker-compose.yml in your repository root:
```yaml
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://appuser:${DB_PASSWORD}@db:5432/appdb
      - REDIS_URL=redis://cache:6379
      - PORT=3000
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    ports:
      - "3000:3000"

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data
    command: redis-server --appendonly yes

volumes:
  pgdata:
  redisdata:
```
Deploying the Compose stack in Coolify
In Coolify, select Add New Resource → Docker Compose and point it to your repository. Coolify reads the docker-compose.yml, presents each service, and lets you configure domains, environment variables, and persistent volumes through the dashboard. Set the DB_PASSWORD environment variable in Coolify's UI — it will be injected into all services that reference it.
Coolify's Traefik integration automatically handles routing. Assign your domain to the app service, and Traefik will provision an SSL certificate via Let's Encrypt and proxy traffic to port 3000. The db and cache services remain internal — accessible only within the Docker network.
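Inside the app service, the cache is reached through the REDIS_URL defined in the Compose file. The classic usage is cache-aside reads: check Redis first, fall back to a PostgreSQL query, then populate the cache with a TTL. A minimal sketch, assuming a client object with async get/set methods such as node-redis provides (client setup omitted; the set options syntax follows node-redis v4):

```javascript
// Cache-aside helper: try the cache, fall back to the loader (typically a
// SQL query), then store the result with a TTL so entries expire on their own.
async function cachedFetch(cache, key, ttlSeconds, loadFn) {
  const hit = await cache.get(key);
  if (hit !== null && hit !== undefined) return JSON.parse(hit);

  const value = await loadFn(); // e.g. pool.query(...) against PostgreSQL
  await cache.set(key, JSON.stringify(value), { EX: ttlSeconds });
  return value;
}
```

Because the TTL bounds staleness, a helper like this also answers the Redis memory warning discussed later: every key written through it expires on its own.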
Dockerfile for production Node.js
If you're using Docker Compose instead of Nixpacks, you need a Dockerfile. Here's a multi-stage build that produces a lean production image:
```dockerfile
# Build stage
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install production dependencies first and stash them for the final image
RUN npm ci --omit=dev && cp -R node_modules /prod_modules
# Then install everything (including dev dependencies) for the build itself
RUN npm ci
COPY . .
RUN npm run build

# Production stage
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /prod_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json .
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
```
The multi-stage approach keeps your final image under 200MB by excluding dev dependencies, source files, and build tools. Running as the node user instead of root follows Docker security best practices — see our Coolify security hardening guide for a complete production checklist.
Configuring Automated Database Backups
Data loss is the nightmare scenario for any self-hosted deployment. Coolify includes a built-in backup system for databases that integrates with any S3-compatible storage provider.
Setting up S3-compatible backup storage
Navigate to Settings → S3 Storage in the Coolify dashboard and add your storage credentials. This works with AWS S3, MinIO, Backblaze B2, Wasabi, or any S3-compatible endpoint. Configure the bucket name, region, access key, and secret key.
Scheduling automated backups
In your PostgreSQL resource settings, navigate to the backup configuration section. Enable scheduled backups, select your S3 storage destination, and set the frequency — daily backups are the minimum for production databases, with hourly backups recommended for write-heavy applications. Coolify executes pg_dump on schedule, compresses the output, and uploads it to your configured storage.
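Coolify manages this schedule for you, but it helps to know what runs underneath: essentially a pg_dump piped through a compressor and uploaded to your bucket. If you want an independent second schedule outside Coolify, a cron entry along these lines works (container name, database, bucket, and paths are illustrative, and the aws CLI must be configured on the host):

```shell
# Nightly at 02:30: dump, compress, upload (all names are placeholders)
30 2 * * * docker exec your-postgres-service pg_dump -U postgres your_database | gzip > /tmp/db-$(date +\%F).sql.gz && aws s3 cp /tmp/db-$(date +\%F).sql.gz s3://your-bucket/backups/
```

Note the escaped percent signs: cron treats a bare % as a newline, so date format strings must use \% inside a crontab.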
Testing restores
A backup that has never been tested is not a backup. Download a recent backup from your S3 storage, spin up a temporary PostgreSQL container locally, and restore the dump:
```shell
# Download the latest backup
aws s3 cp s3://your-bucket/backups/latest.sql.gz ./

# Restore to a throwaway test instance
gunzip latest.sql.gz
docker run --name pg-test -e POSTGRES_PASSWORD=test -d postgres:16-alpine

# Wait until PostgreSQL accepts connections before restoring
until docker exec pg-test pg_isready -U postgres; do sleep 1; done
docker exec -i pg-test psql -U postgres < latest.sql

# Verify data integrity
docker exec pg-test psql -U postgres -c "SELECT count(*) FROM users;"
```
Schedule restore tests monthly. The five minutes it takes to verify a backup can save you from discovering a corrupted backup during an actual emergency.
Infrastructure-level data protection
Application-level backups are essential, but they protect against logical failures — accidental deletes, bad migrations, application bugs. They don't protect against disk failures. On MassiveGRID's infrastructure, your VPS storage runs on Ceph with 3x replication. Every block of data — including your PostgreSQL data files, Docker volumes, and Coolify's configuration — exists on three separate physical disks simultaneously. If a disk fails, the cluster rebuilds the data automatically from the remaining copies. This gives you two layers of protection: Ceph replication handles hardware failures beneath the application, while Coolify's S3 backups handle logical failures at the application level. For a comprehensive strategy, see our Coolify backup guide.
Monitoring and Health Checks
Running your full stack on a single VPS means you need visibility into resource consumption. Coolify provides several monitoring capabilities out of the box.
Coolify Sentinel
Sentinel is Coolify's built-in monitoring agent. It tracks CPU usage, memory consumption, disk I/O, and network traffic at both the server and container level. The metrics are displayed in the Coolify dashboard, giving you a real-time view of how your Node.js application, PostgreSQL database, and Redis cache are consuming resources.
Watch for these warning signs:
- PostgreSQL memory > 80% of allocated limit — your database needs more shared_buffers or total RAM
- Node.js CPU consistently > 70% — consider scaling vCPU or optimizing hot paths
- Disk usage > 85% — PostgreSQL WAL files and Docker images accumulate; prune old images and verify backup rotation
- Redis memory growing unbounded — check for missing TTLs on cached keys
Notification channels
Configure notifications in Coolify's settings to receive alerts through Discord, Slack, Telegram, or email. Set these up before something breaks. A notification that your database health check failed is far more useful than discovering the issue when a customer reports it.
Self-hosted monitoring with Uptime Kuma
For external monitoring — verifying that your application is reachable from outside your server — deploy Uptime Kuma via Coolify. It monitors HTTP endpoints, TCP ports, and DNS records, and sends alerts through the same channels you already use. Running Uptime Kuma alongside your application on a separate server gives you independent monitoring that works even if the primary server is unreachable.
Scaling Considerations
A full-stack deployment on a single VPS works well for most applications up to moderate traffic levels. Here's how to think about when and how to scale.
Vertical scaling: add resources to your server
The first scaling move is almost always vertical. If PostgreSQL queries are slowing down, adding RAM improves the buffer cache hit ratio. If Docker builds are taking too long, adding vCPU cores speeds up compilation. MassiveGRID's Cloud VPS and Dedicated VPS let you scale CPU, RAM, and storage independently, so you can add 4GB of RAM for your growing database without paying for CPU you don't need.
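That buffer cache hit ratio is easy to measure before and after a RAM upgrade. PostgreSQL exposes the raw counters in pg_stat_database, and the ratio is blks_hit divided by (blks_hit + blks_read). A small sketch of the arithmetic, with the query shown in a comment for use with a connected pg client:

```javascript
// blks_hit = reads served from shared_buffers; blks_read = reads that went
// to disk (or the OS page cache). Healthy OLTP workloads sit above ~0.99.
function cacheHitRatio(blksHit, blksRead) {
  const total = blksHit + blksRead;
  return total === 0 ? null : blksHit / total;
}

// With a connected `pg` client, the counters come from pg_stat_database:
//   const { rows } = await client.query(
//     `SELECT blks_hit, blks_read FROM pg_stat_database
//       WHERE datname = current_database()`
//   );
//   console.log(cacheHitRatio(Number(rows[0].blks_hit), Number(rows[0].blks_read)));
```

If the ratio stays low after adding RAM, raise shared_buffers in the PostgreSQL configuration so the database actually uses the new memory.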
For reference, here's a resource allocation guide for common full-stack workloads:
| Stack Profile | vCPU | RAM | Storage |
|---|---|---|---|
| Node.js + PostgreSQL (light traffic) | 2 | 4 GB | 40 GB |
| Node.js + PostgreSQL + Redis (moderate) | 4 | 8 GB | 80 GB |
| Multiple apps + PostgreSQL + Redis (production) | 6 | 16 GB | 160 GB |
See our resource planning guide for detailed sizing across different workload types.
Horizontal scaling: separate app and database
When a single server reaches its ceiling, Coolify's multi-server support lets you separate concerns. Move PostgreSQL to a dedicated VPS optimized for storage and memory, and keep the application server optimized for CPU. Coolify manages both servers from a single dashboard and handles deployment orchestration across them.
With MassiveGRID's four datacenter locations — New York, London, Frankfurt, and Singapore — you can also distribute servers geographically. Place your application server close to your users and your database in the same datacenter for minimal latency between them. See our multi-server Coolify setup guide for the full walkthrough.
Why MassiveGRID for Full-Stack Deployments
Full-stack deployments place unique demands on infrastructure. Your database needs consistent I/O performance. Your application needs CPU for request handling and Docker builds. Your data needs to survive hardware failures. MassiveGRID's platform addresses each of these requirements at the infrastructure level.
Ceph 3x-replicated storage means your PostgreSQL data files, Docker volumes, and Redis persistence files exist on three separate physical disks. A disk failure triggers automatic rebuilding without any intervention — your database stays online while the cluster heals. Proxmox HA clustering extends this protection to the server level: if the physical node hosting your VPS fails, your entire Coolify deployment — application, database, and all services — automatically migrates to a healthy node.
Independent resource scaling lets you match your VPS configuration to your actual workload. When your PostgreSQL database grows and needs more storage, you add storage. When your Node.js app needs more CPU for build processes, you add CPU. You never pay for resources you don't use — a critical advantage for full-stack deployments where application and database have fundamentally different resource profiles.
Deploy Your Full Stack on MassiveGRID
- Cloud VPS — From $1.99/mo. Independently scalable resources for development and staging full-stack environments.
- Dedicated VPS — From $4.99/mo. Dedicated CPU cores for consistent Node.js and PostgreSQL performance.
- Managed Cloud Dedicated — Automatic failover and Ceph 3x-replicated storage for production workloads with SLA guarantees.
What's Next
Your full-stack application is running on infrastructure you control, with automated backups protecting your data and monitoring keeping you informed. From here, you can explore these related guides to further strengthen your deployment:
- How to install Coolify on a VPS — the foundational setup guide
- Resource planning for multiple apps — sizing your VPS for growing stacks
- Coolify backup strategy — comprehensive data protection beyond automated database dumps
- Security hardening your Coolify server — production-ready firewall rules, SSH hardening, and Docker security
- Dokploy vs Coolify vs CapRover — if you're still deciding which self-hosted PaaS to use
- Dokploy VPS Hosting — explore an alternative PaaS platform on MassiveGRID infrastructure