Why Run Nginx as a Reverse Proxy?

A reverse proxy terminates client connections and forwards them to one or more backend services. Putting Nginx in front of your application servers gives you TLS termination, request routing, caching, compression, connection limits, and observability - all configured in a single place. This guide covers production-grade reverse proxying on Ubuntu 22.04 LTS and Ubuntu 24.04 LTS.

Install Nginx

apt update
apt install -y nginx
systemctl enable --now nginx
ufw allow 'Nginx Full'

Verify with curl -I http://localhost. The default welcome page confirms Nginx is listening on port 80.

Anatomy of a Reverse Proxy Server Block

Create a new configuration at /etc/nginx/sites-available/app.example.com:

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

Enable the site and reload:

ln -s /etc/nginx/sites-available/app.example.com /etc/nginx/sites-enabled/
nginx -t
systemctl reload nginx

The nginx -t step is mandatory - it validates config syntax before the reload.

Add HTTPS with Let's Encrypt

Run certbot to provision a TLS certificate and rewrite the server block:

apt install -y certbot python3-certbot-nginx
certbot --nginx -d app.example.com --redirect

For a deeper walkthrough, see our Let's Encrypt guide.
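With --redirect, certbot edits the server block in place. A simplified sketch of the resulting configuration (certificate paths follow Let's Encrypt's standard /etc/letsencrypt/live/ layout; certbot's actual output also adds its managed SSL include lines):

```nginx
# Port 80 now only redirects to HTTPS.
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # ...same proxy_set_header lines as in the original block...
    }
}
```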

Upstream Blocks for Load Balancing

To distribute traffic across multiple backends, declare an upstream block:

upstream app_backend {
    least_conn;
    server 10.0.0.11:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:3000 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    # Certificate paths created by certbot in the previous section.
    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # An empty Connection header is required for upstream keepalive.
        proxy_set_header Connection "";
    }
}

The least_conn policy sends each new request to the backend with the fewest active connections. keepalive reuses upstream connections, cutting latency for high-throughput APIs.
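The upstream block also supports passive failover via the backup parameter. Assuming a third, purely illustrative node at 10.0.0.13, marking it backup keeps it idle until the primaries are unavailable:

```nginx
upstream app_backend {
    least_conn;
    server 10.0.0.11:3000 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:3000 max_fails=3 fail_timeout=30s;
    # Illustrative standby: only receives traffic once both primaries
    # have been marked unavailable (after max_fails failed attempts).
    server 10.0.0.13:3000 backup;
    keepalive 32;
}
```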

WebSockets and Long-Lived Connections

WebSockets require HTTP/1.1 and the Upgrade header:

location /ws/ {
    proxy_pass http://app_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;
}

The long proxy_read_timeout stops Nginx from closing idle WebSocket connections at its default 60-second read timeout.
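Hardcoding Connection "upgrade" treats every request to /ws/ as an upgrade. The map pattern from the Nginx documentation sets the header conditionally instead; it belongs in the http context:

```nginx
# Send "Connection: upgrade" only when the client actually requested
# a protocol upgrade; otherwise close the upstream connection.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```

With the map in place, the location block can use proxy_set_header Connection $connection_upgrade; instead of the literal "upgrade".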

Caching Static Responses

Proxy caching dramatically reduces backend load for read-heavy APIs. Define a cache path in nginx.conf:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=api_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

Then enable it per location:

location /api/public/ {
    proxy_cache api_cache;
    proxy_cache_valid 200 10m;
    proxy_cache_use_stale error timeout updating;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://app_backend;
}

The X-Cache-Status header lets you verify HIT/MISS behavior in browser devtools.
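One refinement worth considering: bypass the cache for requests that carry credentials, so authenticated responses are never served from (or written into) the shared cache. A sketch assuming a cookie named session (a hypothetical name; adjust to your auth scheme):

```nginx
location /api/public/ {
    proxy_cache api_cache;
    proxy_cache_valid 200 10m;
    proxy_cache_use_stale error timeout updating;
    # Skip the cache whenever a session cookie is present
    # ("session" is a hypothetical cookie name).
    proxy_cache_bypass $cookie_session;
    proxy_no_cache $cookie_session;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://app_backend;
}
```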

Rate Limiting and Protection

Define a shared-memory zone and apply it to sensitive endpoints:

limit_req_zone $binary_remote_addr zone=login:10m rate=5r/s;

location /login {
    limit_req zone=login burst=10 nodelay;
    proxy_pass http://app_backend;
}

This throttles each client IP to 5 login attempts per second, with a burst of 10. Combine with Fail2ban for IP-level bans.
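Two related directives pair well with limit_req: returning 429 instead of the default 503 for throttled requests, and capping concurrent connections per IP. The zone name perip below is illustrative:

```nginx
# http {} context: report throttling as 429 Too Many Requests
# and track concurrent connections per client IP.
limit_req_status 429;
limit_conn_zone $binary_remote_addr zone=perip:10m;

# server {} or location {} context: allow at most 20 concurrent
# connections from any single IP.
limit_conn perip 20;
```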

gzip and Brotli Compression

Enable compression in /etc/nginx/nginx.conf:

gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied any;
gzip_types text/plain text/css application/json application/javascript
           application/rss+xml text/xml image/svg+xml;
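Brotli is not compiled into Ubuntu's stock nginx package, so the directives below assume you have installed the ngx_brotli module (from a distribution module package or a source build) and loaded it in nginx.conf:

```nginx
# Requires the ngx_brotli module (load_module if built dynamically).
brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/json application/javascript
             text/xml image/svg+xml;
```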

Observability

Nginx ships with access and error logs in /var/log/nginx/. Tail them live while testing:

tail -f /var/log/nginx/access.log /var/log/nginx/error.log

Expose stub_status on a private port for Prometheus scraping:

location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

Use Case                    Directive
HTTP to HTTPS redirect      return 301 https://$host$request_uri;
Client body limit           client_max_body_size 50m;
Real client IP              set_real_ip_from 10.0.0.0/8;
Health check                location = /healthz { return 200; }
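The health-check one-liner from the table can be expanded so load-balancer probes get a fast plain-text response without polluting the access log; a minimal sketch:

```nginx
location = /healthz {
    access_log off;            # keep probe traffic out of the logs
    default_type text/plain;
    return 200 "ok\n";
}
```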

Where to Go Next

For the base Ubuntu configuration, start with our VPS setup guide. To deploy a Python web app behind this proxy, see Django/Flask with Gunicorn.

Running production Ubuntu servers? MassiveGRID's Cloud VPS provides NVMe storage, 10 Gbps networking, and private VLANs for multi-backend architectures. For fully managed reverse proxy and load balancer setups, see our Managed Cloud Servers or contact our team.

Published by MassiveGRID - high-availability cloud hosting with managed Nginx and HAProxy configurations.