If you are running multiple web applications on a single VPS -- a Node.js API on port 3000, a Python app on port 8000, a WordPress site on PHP-FPM -- you need something sitting in front of them all, listening on ports 80 and 443, and routing requests to the right backend. That something is a reverse proxy, and Nginx is the best tool for the job.
This guide covers everything from the basic concept to advanced configurations: proxying to multiple backends, WebSocket support, SSL termination, load balancing, caching, and security hardening. All on a single VPS.
What Is a Reverse Proxy and Why Use One?
A reverse proxy sits between the internet and your backend applications. Instead of exposing each application directly, all traffic flows through Nginx, which then forwards requests to the appropriate backend based on the domain name or URL path.
The benefits are significant:
- SSL termination -- handle HTTPS in one place instead of configuring SSL in every application.
- Single point of entry -- only ports 80 and 443 need to be open. Backend applications listen on localhost only.
- Static file serving -- Nginx serves static assets far more efficiently than Node.js or Python.
- Load balancing -- distribute traffic across multiple instances of the same application.
- Request buffering -- Nginx buffers slow client connections, freeing your application to handle the next request.
- Security -- rate limiting, request size limits, and header filtering all happen at the proxy layer.
Step 1: Install Nginx
sudo apt update
sudo apt install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx
Verify it is running by visiting http://YOUR_SERVER_IP in your browser. You should see the default Nginx welcome page.
Step 2: Basic Reverse Proxy Configuration
Suppose you have a Node.js application running on localhost:3000. Create an Nginx configuration to proxy requests to it:
sudo nano /etc/nginx/sites-available/app.example.com
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Let us break down what each proxy_set_header directive does:
- Host $host -- passes the original domain name to the backend, so your app knows which domain was requested.
- X-Real-IP $remote_addr -- passes the client's real IP address (otherwise the backend only sees 127.0.0.1).
- X-Forwarded-For $proxy_add_x_forwarded_for -- appends the client IP to the forwarding chain, important when multiple proxies are involved.
- X-Forwarded-Proto $scheme -- tells the backend whether the original request was HTTP or HTTPS.
Enable the site and reload Nginx:
sudo ln -s /etc/nginx/sites-available/app.example.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
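Before pointing DNS at the server, you can check the proxy path end to end from the VPS itself. A quick sanity check, assuming your Node.js app is already listening on port 3000:

# Send a request through Nginx with the Host header set explicitly
curl -i -H "Host: app.example.com" http://127.0.0.1/

# Hit the backend directly to compare
curl -i http://127.0.0.1:3000/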
Step 3: Proxy Multiple Applications by Domain
The real power of a reverse proxy shows when you run multiple applications on the same server. Each gets its own domain and Nginx server block:
# /etc/nginx/sites-available/api.example.com
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
# /etc/nginx/sites-available/dashboard.example.com
server {
    listen 80;
    server_name dashboard.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Nginx matches the server_name from the incoming request's Host header and routes to the correct backend. You can host as many applications as your server's resources allow.
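One related detail: requests whose Host header matches none of your server_name values fall through to Nginx's default server (if none is marked, the first server block defined for that port). A common hardening step, sketched here, is an explicit catch-all that drops requests for unknown hostnames:

# Catch-all for unknown hostnames; 444 makes Nginx close the connection without a response
server {
    listen 80 default_server;
    server_name _;
    return 444;
}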
Step 4: Path-Based Routing
Sometimes you want a single domain to route different URL paths to different backends. For example, /api/ goes to your Node.js backend while / serves a static frontend:
server {
    listen 80;
    server_name example.com;

    # Static frontend
    location / {
        root /var/www/frontend/dist;
        try_files $uri $uri/ /index.html;
    }

    # API backend
    location /api/ {
        proxy_pass http://127.0.0.1:3000/api/;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Trailing slash matters:
When proxy_pass includes a URI, Nginx replaces the matched location prefix with that URI. In the configuration above, proxy_pass http://127.0.0.1:3000/api/ replaces /api/ with /api/, so the backend receives the path unchanged. If you wrote proxy_pass http://127.0.0.1:3000/ instead (a bare trailing slash), the /api/ prefix would be stripped before forwarding. With no URI at all (proxy_pass http://127.0.0.1:3000), the full original path is forwarded as-is. Mixing these behaviors up is one of the most common sources of Nginx proxy bugs.
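A minimal sketch of the three behaviors, assuming a request for /api/users (these location blocks are alternatives, not meant to coexist in one server block):

location /api/ {
    proxy_pass http://127.0.0.1:3000;        # no URI: backend receives /api/users
}

location /api/ {
    proxy_pass http://127.0.0.1:3000/;       # bare "/": prefix stripped, backend receives /users
}

location /api/ {
    proxy_pass http://127.0.0.1:3000/v1/;    # prefix replaced, backend receives /v1/users
}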
Step 5: WebSocket Support
If your application uses WebSockets (Socket.io, real-time chat, live updates), you need additional headers for the upgrade handshake:
location /ws/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 86400s; # Keep WebSocket connections alive
    proxy_send_timeout 86400s;
}
The Upgrade and Connection headers tell Nginx to switch from HTTP to the WebSocket protocol. The extended timeouts prevent Nginx from closing long-lived connections.
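A common refinement (optional here, since this location only handles WebSocket traffic) is to derive the Connection header from the incoming Upgrade header with a map block in the http context, so plain HTTP requests through the same location are not forced into upgrade mode:

# In the http block (/etc/nginx/nginx.conf)
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the location block, use:
# proxy_set_header Connection $connection_upgrade;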
Step 6: SSL Termination with Let's Encrypt
Install Certbot and generate certificates for all your domains at once:
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d app.example.com -d api.example.com -d dashboard.example.com
Certbot automatically modifies your Nginx configurations to add the SSL certificate paths, enable HTTPS, and redirect HTTP to HTTPS. After running Certbot, your configuration will include blocks like:
server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # ... proxy headers ...
    }
}
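Let's Encrypt certificates are valid for 90 days. On Debian and Ubuntu the certbot package schedules renewal automatically (via a systemd timer or cron entry), and you can confirm renewal will work with a dry run:

sudo certbot renew --dry-run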
Step 7: Load Balancing
If you run multiple instances of the same application (on different ports or different servers), Nginx can distribute traffic across them:
upstream app_backend {
    least_conn; # Send traffic to the server with fewest active connections
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Nginx supports several load balancing algorithms:
| Method | Behavior |
|---|---|
| round-robin (default) | Distributes requests equally in order |
| least_conn | Sends to the server with the fewest active connections |
| ip_hash | Routes the same client IP to the same backend (sticky sessions) |
| weight | Assigns proportional traffic (e.g., server 127.0.0.1:3000 weight=3;) |
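These options can be combined in the upstream block. A sketch with weights, failure detection, and a hypothetical spare instance on port 3003 that only receives traffic when the others are down:

upstream app_backend {
    server 127.0.0.1:3000 weight=3 max_fails=3 fail_timeout=30s;  # receives roughly 3x the traffic
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3003 backup;  # only used when the servers above are unavailable
}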
Step 8: Caching Proxied Responses
For read-heavy APIs or content that does not change frequently, Nginx can cache backend responses:
# In the http block (/etc/nginx/nginx.conf)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g inactive=60m;

# In the server block
location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache app_cache;
    proxy_cache_valid 200 10m;
    proxy_cache_valid 404 1m;
    proxy_cache_use_stale error timeout updating;
    add_header X-Cache-Status $upstream_cache_status;
}
The X-Cache-Status header tells you whether a response was served from cache (HIT), fetched from the backend (MISS), or served from stale cache during an error (STALE). This is invaluable for debugging.
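One caution: do not cache personalized responses. A common pattern, sketched here with an example cookie name, is to bypass the cache whenever a request carries an Authorization header or a session cookie:

location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache app_cache;
    proxy_cache_valid 200 10m;
    # Skip the cache for requests that look personalized
    proxy_cache_bypass $http_authorization $cookie_sessionid;  # do not serve these from cache
    proxy_no_cache $http_authorization $cookie_sessionid;      # do not store them either
    add_header X-Cache-Status $upstream_cache_status;
}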
Step 9: Security Hardening
Add these directives to your server blocks for better security:
# Rate limiting (in http block of nginx.conf)
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# In your server block
server {
    # ... existing config ...

    # Request size limit (prevents large upload attacks)
    client_max_body_size 10M;

    # Rate limiting
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://127.0.0.1:3000;
        # ... proxy headers ...
    }

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Hide Nginx version
    server_tokens off;

    # Block common exploit paths
    location ~ /\.(git|env|htaccess) {
        deny all;
    }
}
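After reloading, you can verify the hardening from any machine; the domain below is a placeholder:

# Response headers should now include X-Frame-Options, X-Content-Type-Options, and so on
curl -I https://app.example.com/

# If requests arrive faster than 10r/s plus the burst of 20, the rate limiter returns 503s
for i in $(seq 1 50); do curl -s -o /dev/null -w "%{http_code}\n" https://app.example.com/api/; done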
Debugging Common Issues
- 502 Bad Gateway -- your backend application is not running, or it is listening on a different port than what Nginx expects. Check with curl http://127.0.0.1:3000 from the server.
- 504 Gateway Timeout -- your backend is taking too long to respond. Increase proxy_read_timeout or optimize your application.
- Mixed content warnings -- your app generates HTTP URLs when accessed via HTTPS. Make sure you pass X-Forwarded-Proto and your app respects it.
- Redirect loops -- often caused by your app redirecting to HTTPS while Nginx is already handling HTTPS. Check that your app trusts the proxy headers.
Always check the Nginx error log for details:
sudo tail -f /var/log/nginx/error.log
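If the error log points at a connection failure, it also helps to confirm something is actually listening on the port Nginx proxies to (port 3000 here as an example):

# Show the process bound to port 3000, if any
sudo ss -tlnp | grep :3000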
Run Your Applications on Solid Infrastructure
Nginx is lightweight and efficient, but it can only perform as well as the hardware underneath it. MassiveGRID's Cloud VPS plans start at $1.99/month with NVMe storage and data centers in New York, London, Frankfurt, and Singapore. Every VPS runs on Proxmox HA clusters with Ceph distributed storage for automatic failover and triple-replicated data. Whether you are proxying a single app or load-balancing a fleet of microservices, the infrastructure handles the heavy lifting so you can focus on building. Configure your VPS and be up and running within minutes.