What Is a Reverse Proxy and Why You Need One
A reverse proxy sits between the internet and your backend services, forwarding client requests to the appropriate server. Unlike a forward proxy that acts on behalf of clients, a reverse proxy acts on behalf of servers. It is the single point of entry to your infrastructure.
In modern self-hosted environments, you rarely run just one service. You might have a Node.js application on port 3000, a Gitea instance on port 3001, an n8n workflow engine on port 5678, and a Coolify dashboard on port 8000 — all on the same VPS. Without a reverse proxy, users would need to remember port numbers. With Nginx as a reverse proxy, each service gets its own subdomain with proper SSL, all routed through ports 80 and 443.
Key benefits of an Nginx reverse proxy:
- SSL/TLS termination — Handle encryption once at the proxy layer instead of configuring SSL in every backend service
- Clean URLs — Route app.example.com to port 3000 and git.example.com to port 3001, with no port numbers exposed
- Centralized security — Apply rate limiting, security headers, and access controls in one place
- Load balancing — Distribute traffic across multiple backend instances
- Caching — Cache static assets and responses to reduce backend load
- WebSocket support — Proxy WebSocket connections for real-time applications
Prerequisites
Before you begin, you need:
- An Ubuntu 24.04 VPS with root or sudo access (2+ vCPU, 2+ GB RAM recommended)
- A domain name with DNS A records pointing to your server’s IP (both the root domain and any subdomains you plan to use)
- Nginx installed: sudo apt update && sudo apt install nginx -y
- One or more backend services running on localhost ports
If you are running Docker containers, each container exposes a port on the host. If you are running native applications, they listen on their configured ports. The reverse proxy does not care what is behind it — Docker, Node.js, Python, Go, Java — it just forwards HTTP requests.
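Before writing any proxy configuration, confirm what each backend is actually listening on. A quick check (ss ships with iproute2 on Ubuntu; the port probed below is just an example):

```shell
# List every TCP listener and the process that owns it
sudo ss -tlnp

# Probe a specific backend directly to confirm it answers HTTP
curl -sI http://127.0.0.1:3000
```

If curl gets a response here but the proxy later returns 502, the problem is in the Nginx configuration, not the backend.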
Basic proxy_pass Configuration
The simplest reverse proxy configuration forwards all requests from a domain to a local port. Let’s proxy a Node.js application running on port 3000:
sudo nano /etc/nginx/sites-available/app.example.com
server {
listen 80;
listen [::]:80;
server_name app.example.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
# Pass original client information to the backend
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeout settings
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
}
Enable the site:
sudo ln -s /etc/nginx/sites-available/app.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
The proxy_set_header directives are important. Without X-Real-IP and X-Forwarded-For, your backend application sees every request as coming from 127.0.0.1 instead of the actual client IP. Without X-Forwarded-Proto, your application cannot tell whether the original request was HTTP or HTTPS, which breaks redirect logic and secure cookie settings.
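The reverse situation is also worth knowing: if Nginx itself sits behind another proxy or load balancer, the realip module (compiled into Ubuntu's nginx package) can restore the real client address from X-Forwarded-For. A sketch, assuming the upstream proxy lives in 10.0.0.0/8:

```nginx
# Trust X-Forwarded-For only when the connection comes from a known proxy range
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
```

Only list address ranges you control — trusting X-Forwarded-For from arbitrary clients lets anyone spoof their IP.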
SSL/TLS Termination with Certbot
Running a reverse proxy without SSL is not acceptable in production. Certbot makes it easy to get free Let’s Encrypt certificates:
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d app.example.com
Certbot automatically modifies your Nginx configuration to add SSL, creates a redirect from HTTP to HTTPS, and sets up auto-renewal. After running Certbot, your configuration will look like this:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name app.example.com;
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
server {
listen 80;
listen [::]:80;
server_name app.example.com;
return 301 https://$host$request_uri;
}
Verify auto-renewal is configured:
sudo certbot renew --dry-run
Proxying Multiple Services on Subdomains
The real power of a reverse proxy appears when you host multiple services. Create a separate configuration file for each subdomain:
# /etc/nginx/sites-available/git.example.com
server {
listen 80;
server_name git.example.com;
location / {
proxy_pass http://127.0.0.1:3001;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Gitea needs larger body size for repo pushes
client_max_body_size 100M;
}
}
# /etc/nginx/sites-available/n8n.example.com
server {
listen 80;
server_name n8n.example.com;
location / {
proxy_pass http://127.0.0.1:5678;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# n8n requires WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
# /etc/nginx/sites-available/coolify.example.com
server {
listen 80;
server_name coolify.example.com;
location / {
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Coolify dashboard uses WebSockets
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
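Those four proxy_set_header lines repeat in every server block. A shared snippet keeps them in one place, so a future change touches one file — the snippet path is illustrative:

```nginx
# /etc/nginx/snippets/proxy-headers.conf
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
```

Each location block then shrinks to a proxy_pass line plus include snippets/proxy-headers.conf;.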
Enable all sites and issue SSL certificates:
sudo ln -s /etc/nginx/sites-available/git.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/n8n.example.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/coolify.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
# Issue certificates for all subdomains
sudo certbot --nginx -d git.example.com
sudo certbot --nginx -d n8n.example.com
sudo certbot --nginx -d coolify.example.com
WebSocket Support
Many modern applications rely on WebSocket connections for real-time features: live dashboards, chat, notifications, terminal access, and collaborative editing. Without explicit WebSocket support in your reverse proxy configuration, these features silently break.
The critical headers for WebSocket proxying are:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
These tell Nginx to upgrade the HTTP connection to a WebSocket connection when the client requests it. Applications that require WebSocket support include:
- Coolify — Real-time deployment logs and terminal access
- n8n — Live workflow execution monitoring
- Gitea — Live notification updates
- Grafana — Live dashboard metric streaming
- VS Code Server / Code Server — Full IDE functionality
- Any chat application — Real-time messaging
If WebSocket connections drop after 60 seconds, increase the read timeout:
proxy_read_timeout 86400s; # 24 hours for long-lived WebSocket connections
proxy_send_timeout 86400s;
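Hardcoding Connection "upgrade" works, but it sends the upgrade header on every request, including plain HTTP ones. The idiom from the Nginx documentation is a map in the http block that only upgrades when the client asks to:

```nginx
# In the http block: derive the Connection header from the client's Upgrade header
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

In the location block, use proxy_set_header Connection $connection_upgrade; instead of the hardcoded "upgrade".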
Upstream Blocks and Load Balancing
When your application grows beyond what a single backend can handle, Nginx can distribute traffic across multiple instances. Define an upstream block:
upstream app_backend {
# Round-robin (default)
server 127.0.0.1:3000;
server 127.0.0.1:3001;
server 127.0.0.1:3002;
# Optional: weighted distribution
# server 127.0.0.1:3000 weight=3;
# server 127.0.0.1:3001 weight=1;
# Optional: least connections
# least_conn;
# Health check: mark server as down after 3 failed attempts
# server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
}
server {
listen 443 ssl;
server_name app.example.com;
location / {
proxy_pass http://app_backend;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Load balancing strategies available in Nginx:
| Method | Directive | Best For |
|---|---|---|
| Round Robin | (default) | Stateless APIs, equal-capacity backends |
| Least Connections | least_conn | Varying request durations |
| IP Hash | ip_hash | Session persistence (sticky sessions) |
| Hash | hash $request_uri | Cache consistency |
For most self-hosted applications on a VPS, you will not need load balancing immediately. But understanding the configuration means you can scale horizontally when traffic demands it — or when you need zero-downtime deployments by running two versions of your app simultaneously.
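For the failover case, the max_fails and backup parameters do most of the work — one pattern among several, sketched here with a standby instance that only receives traffic while the primary is failing:

```nginx
upstream app_backend {
    # Primary instance: marked unavailable for 30s after 3 failed attempts
    server 127.0.0.1:3000 max_fails=3 fail_timeout=30s;
    # Standby instance: used only while no primary is available
    server 127.0.0.1:3001 backup;
}
```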
Caching for Static Assets
If your backend serves static files (images, CSS, JavaScript), let Nginx cache them to avoid unnecessary proxy requests. Note that the proxy_cache_path directive belongs in the http context (for example, in /etc/nginx/nginx.conf or a file under /etc/nginx/conf.d/); the server block then references its zone:
proxy_cache_path /var/cache/nginx/proxy levels=1:2
keys_zone=proxy_cache:10m max_size=1g inactive=60m use_temp_path=off;
server {
listen 443 ssl;
server_name app.example.com;
# Cache static assets
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2|woff|ttf|eot)$ {
proxy_pass http://127.0.0.1:3000;
proxy_cache proxy_cache;
proxy_cache_valid 200 30d;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
add_header X-Cache-Status $upstream_cache_status;
expires 30d;
}
# Dynamic content — no caching
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
The X-Cache-Status header helps you verify caching is working. Check it with:
curl -I https://app.example.com/style.css | grep X-Cache
# X-Cache-Status: HIT (served from cache)
# X-Cache-Status: MISS (fetched from backend, now cached)
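One caveat before caching aggressively: responses for logged-in users should usually bypass the cache. Nginx can key off a session cookie — the cookie name here (session) is an assumption; match whatever your application actually sets:

```nginx
# Skip both cache lookup and cache storage when a session cookie is present
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
```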
Security Headers
A reverse proxy is the ideal place to enforce security headers. Apply them once, and every proxied service inherits them. Add these to your server blocks or create a shared snippet:
# /etc/nginx/snippets/security-headers.conf
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# HSTS — only enable after confirming SSL works correctly
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
# Content Security Policy — customize per application
# add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'" always;
Include the snippet in each server block:
server {
listen 443 ssl;
server_name app.example.com;
include snippets/security-headers.conf;
# ... rest of configuration
}
A few notes on these headers:
- HSTS — Only enable this after you have confirmed SSL is fully working. Once a browser receives this header, it will refuse to connect over HTTP for the specified duration. Start with a low max-age (300 seconds) for testing.
- Content-Security-Policy — This is application-specific. A restrictive CSP can break applications that use inline scripts or load resources from CDNs. Test thoroughly before deploying.
- X-Frame-Options — Prevents your site from being embedded in iframes, which protects against clickjacking attacks.
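One Nginx quirk to be aware of: add_header directives are inherited from the server block only if a location defines none of its own. The static-asset location in the caching section sets X-Cache-Status, which silently drops the inherited security headers — re-include the snippet there:

```nginx
location ~* \.(js|css|png|jpg|svg|woff2)$ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache proxy_cache;
    add_header X-Cache-Status $upstream_cache_status;
    # This location now has its own add_header, so headers inherited
    # from the server block are gone — restore them explicitly
    include snippets/security-headers.conf;
}
```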
Rate Limiting
Protect your backend services from abuse and brute-force attacks with Nginx rate limiting. Define rate limit zones in the http block of /etc/nginx/nginx.conf:
# General rate limit: 10 requests per second per IP
limit_req_zone $binary_remote_addr zone=general:10m rate=10r/s;
# Strict rate limit for login/auth endpoints: 1 request per second
limit_req_zone $binary_remote_addr zone=auth:10m rate=1r/s;
# API rate limit: 30 requests per second per IP
limit_req_zone $binary_remote_addr zone=api:10m rate=30r/s;
Apply rate limits in your server blocks:
server {
listen 443 ssl;
server_name app.example.com;
# General rate limit with burst allowance
location / {
limit_req zone=general burst=20 nodelay;
proxy_pass http://127.0.0.1:3000;
# ... proxy headers
}
# Strict rate limit on authentication endpoints
location /api/auth/ {
limit_req zone=auth burst=3 nodelay;
proxy_pass http://127.0.0.1:3000;
# ... proxy headers
}
# API endpoints with higher limits
location /api/ {
limit_req zone=api burst=50 nodelay;
proxy_pass http://127.0.0.1:3000;
# ... proxy headers
}
}
The burst parameter allows short traffic spikes. A burst of 20 with nodelay means the first 20 excess requests are served immediately, but the 21st is rejected with a 503 status code. Without nodelay, burst requests are queued and served at the configured rate.
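When tuning limits against live traffic, Nginx (1.17.1+) can log would-be rejections without enforcing them, which lets you find a realistic rate before turning enforcement on:

```nginx
# Evaluate the limit and log violations, but never actually reject requests
limit_req zone=general burst=20 nodelay;
limit_req_dry_run on;
limit_req_log_level warn;
```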
Custom error pages for rate-limited requests make for a better user experience:
limit_req_status 429;
error_page 429 /429.html;
location = /429.html {
root /var/www/error-pages;
internal;
}
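If you have trusted clients — a monitoring host, an office IP — you can exempt them by giving them an empty rate-limit key, which Nginx never limits. The geo/map pair goes in the http block; the address ranges below are examples:

```nginx
# Map trusted addresses to an empty key; empty keys are never rate limited
geo $limit {
    default        1;
    127.0.0.1      0;
    203.0.113.0/24 0;   # example trusted range
}
map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}
limit_req_zone $limit_key zone=general:10m rate=10r/s;
```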
Putting It All Together
Here is a complete, production-ready reverse proxy configuration that combines all the techniques covered in this guide:
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name app.example.com;
ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Security headers
include snippets/security-headers.conf;
# Rate limiting
limit_req zone=general burst=20 nodelay;
# Logging
access_log /var/log/nginx/app.example.com.access.log;
error_log /var/log/nginx/app.example.com.error.log;
# Client body size (adjust for file uploads)
client_max_body_size 50M;
# Static asset caching
location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg|woff2)$ {
proxy_pass http://127.0.0.1:3000;
proxy_cache proxy_cache;
proxy_cache_valid 200 7d;
expires 7d;
add_header X-Cache-Status $upstream_cache_status;
}
# WebSocket endpoint
location /ws {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_read_timeout 86400s;
}
# Default proxy
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
}
server {
listen 80;
listen [::]:80;
server_name app.example.com;
return 301 https://$host$request_uri;
}
Nginx as a reverse proxy is one of the most fundamental tools in the self-hosting toolkit. Whether you are running a single application or a dozen microservices, the pattern is the same: one Nginx instance, multiple server blocks, SSL everywhere. A MassiveGRID VPS with 2 vCPU and 4 GB RAM can comfortably proxy ten or more services, especially when you leverage caching and rate limiting to keep backend load manageable. For heavier workloads — multiple upstream backends, high WebSocket concurrency, or heavy caching — consider a Dedicated VPS where you get guaranteed resources without noisy-neighbor effects.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
→ Deploy a self-managed VPS — from $1.99/mo
→ Need dedicated resources? — from $8.30/mo
→ Want fully managed hosting? — we handle everything
Troubleshooting Common Reverse Proxy Issues
Even well-configured reverse proxies can run into problems. Here are the most common issues and how to diagnose them.
502 Bad Gateway
This means Nginx cannot reach the backend service. Common causes:
- The backend service is not running — check with systemctl status your-service or docker ps
- The port is wrong — verify with ss -tlnp | grep PORT
- The service is listening on a different interface — make sure it binds to 127.0.0.1 or 0.0.0.0, not just a Docker network IP
# Check if the backend is reachable
curl -v http://127.0.0.1:3000
# Check Nginx error logs for details
sudo tail -f /var/log/nginx/error.log
504 Gateway Timeout
The backend is reachable but taking too long to respond. Increase Nginx timeout values:
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
However, if your backend routinely takes more than 60 seconds to respond, the real fix is optimizing the backend — not increasing timeouts. Long timeouts tie up Nginx worker connections.
WebSocket Connections Dropping
If WebSocket connections close after exactly 60 seconds, you are hitting Nginx’s default proxy_read_timeout. Set it higher for WebSocket locations:
location /ws {
proxy_read_timeout 3600s; # 1 hour
proxy_send_timeout 3600s;
# ... rest of WebSocket config
}
Mixed Content Warnings After SSL
If your backend generates URLs with http:// instead of https://, the application is not receiving the X-Forwarded-Proto header correctly. Verify the header is set and that your application trusts the proxy. In Express.js:
app.set('trust proxy', 1);
In Django:
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
413 Request Entity Too Large
Nginx has a default upload limit of 1 MB. For applications that accept file uploads, set client_max_body_size in the relevant location or server block:
client_max_body_size 100M;
Monitoring Your Reverse Proxy
Enable the Nginx stub status module to monitor your proxy’s health:
server {
listen 127.0.0.1:8080;
server_name localhost;
location /nginx_status {
stub_status on;
allow 127.0.0.1;
deny all;
}
}
Query it to see active connections, request rates, and reading/writing/waiting counts:
curl http://127.0.0.1:8080/nginx_status
# Active connections: 12
# server accepts handled requests
# 847 847 3258
# Reading: 0 Writing: 5 Waiting: 7
For detailed monitoring, use access log analysis. Nginx’s log_format directive lets you include upstream response times:
log_format proxy_log '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'upstream: $upstream_addr '
'response_time: $upstream_response_time '
'cache: $upstream_cache_status';
access_log /var/log/nginx/proxy_access.log proxy_log;
This log format includes the upstream response time and cache status, making it easy to identify slow backends and cache hit rates. Tools like GoAccess can parse these logs and generate real-time dashboards.
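Even without extra tooling, the response_time field supports quick one-liners. This awk pass averages upstream response times across the log; it scans for the response_time: label rather than fixed field positions, since the quoted request and user-agent fields contain spaces:

```shell
# Average upstream response time from the proxy_log format above
awk '{ for (i = 1; i <= NF; i++)
         if ($i == "response_time:") { sum += $(i + 1); n++ } }
     END { if (n) printf "avg %.3fs over %d requests\n", sum / n, n }' \
  /var/log/nginx/proxy_access.log
```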
Security Best Practices for Reverse Proxies
Your reverse proxy is the front door to your infrastructure. Harden it accordingly:
- Hide Nginx version — Add server_tokens off; to the http block in nginx.conf
- Remove default site — Delete /etc/nginx/sites-enabled/default to prevent exposing services directly on the server's IP address
- Block unknown hosts — Add a catch-all server block that returns 444 (close connection) for requests that do not match any configured domain:
server {
listen 80 default_server;
listen 443 default_server ssl;
server_name _;
ssl_certificate /etc/nginx/ssl/dummy.crt;
ssl_certificate_key /etc/nginx/ssl/dummy.key;
return 444;
}
- Restrict backend access — Make sure backend services only listen on 127.0.0.1, not 0.0.0.0. If someone knows the port number, they should not be able to bypass the proxy by connecting directly.
- Use firewall rules — Only allow ports 80, 443, and 22 (SSH) through ufw. Block everything else:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
This ensures that even if a backend service binds to 0.0.0.0, it is not reachable from the internet — only through the Nginx reverse proxy. One caveat: Docker inserts its own iptables rules ahead of ufw, so ports published by containers can bypass the firewall entirely. Publish container ports on loopback only (for example, -p 127.0.0.1:3000:3000) to keep them behind the proxy.
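The catch-all server block shown earlier references a dummy certificate that does not exist by default — Nginx needs some certificate to complete the TLS handshake before it can close the connection. A self-signed one (never shown to legitimate users, so the subject is arbitrary) can be generated with openssl:

```shell
sudo mkdir -p /etc/nginx/ssl
# Self-signed certificate for the catch-all block; the CN value is arbitrary
sudo openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /etc/nginx/ssl/dummy.key \
  -out /etc/nginx/ssl/dummy.crt \
  -days 3650 -subj "/CN=invalid"
```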
With these techniques, a single MassiveGRID VPS becomes a fully capable application gateway. The reverse proxy pattern scales naturally — start with one service, add more as needed, and each one gets clean URLs, SSL, and centralized security without any additional cost or complexity.