Cloudflare Tunnel lets you expose services running on your Ubuntu VPS to the internet without opening any inbound ports on your firewall. Instead of configuring port forwarding, NAT rules, or public-facing reverse proxies, the tunnel creates an outbound-only connection from your server to Cloudflare's edge network. Traffic flows through Cloudflare, gets inspected, cached, and forwarded to your local services — all while your server remains invisible to port scanners and direct attacks. This guide walks through the complete setup on an Ubuntu VPS, from installing cloudflared to routing multiple services, integrating with existing Nginx configurations, and applying zero-trust access policies.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
What Cloudflare Tunnel Actually Does
Traditional web hosting requires your server to listen on public ports — typically 80 and 443 — so that clients can connect. This means those ports must be open in your firewall, your IP address is discoverable, and every exposed service becomes a potential attack surface. Cloudflare Tunnel flips this model entirely.
When you run cloudflared on your VPS, it initiates multiple outbound connections to Cloudflare's nearest points of presence. These are long-lived, multiplexed connections using HTTP/2 or QUIC. Because the connection is outbound-only, your server never needs to accept inbound traffic on any port other than SSH for management. Cloudflare's edge receives incoming requests for your domain, routes them through the tunnel, and delivers them to whatever local service you specify — a web app on port 3000, an API on port 8080, a dashboard on port 9090, or anything else listening on localhost.
The cloudflared daemon is lightweight, typically consuming under 50MB of RAM and negligible CPU. It automatically handles reconnection, certificate rotation, and connection pooling. From the perspective of your local services, traffic arrives from localhost — they never see the external client's IP directly unless you configure header propagation.
Tunnel vs Traditional Reverse Proxy: Tradeoffs
Cloudflare Tunnel is not a direct replacement for a reverse proxy like Nginx or Caddy. It serves a different purpose and comes with its own set of tradeoffs that you should understand before committing.
The primary advantage is security through obscurity combined with real access control. Your server's IP address never appears in DNS records — Cloudflare's IPs do. Port scans against your VPS reveal nothing because no ports are open. This eliminates entire categories of attacks: direct DDoS against your origin, exploitation of exposed admin panels, and brute-force attempts against application login pages.
The tradeoff is latency. Every request takes an extra hop through Cloudflare's network. For most web applications, this adds 5-20ms of overhead, which is imperceptible to users. For latency-sensitive applications like real-time gaming servers, VoIP, or high-frequency trading APIs, the additional hop matters. You also give up direct control over TLS termination — Cloudflare handles the certificate between the client and their edge, and you configure whether the connection between Cloudflare and your origin is encrypted separately.
Another consideration is protocol support. Cloudflare Tunnel works natively with HTTP, HTTPS, SSH, RDP, and arbitrary TCP/UDP streams. However, HTTP traffic gets the most benefit because Cloudflare can apply WAF rules, caching, and bot management. Non-HTTP protocols pass through with fewer optimizations.
Prerequisites
Before starting, you need three things in place. First, a Cloudflare account — the free tier includes tunnel functionality with no usage limits. Second, a domain with its DNS managed by Cloudflare. This means your domain's nameservers must point to Cloudflare, and you should see your domain active in the Cloudflare dashboard. Third, an Ubuntu VPS with root or sudo access and outbound internet connectivity. The VPS does not need any inbound ports open other than SSH for your initial setup.
Verify your Ubuntu version and ensure your system is updated:
lsb_release -a
sudo apt update && sudo apt upgrade -y
Confirm that your domain resolves through Cloudflare by checking the nameservers:
dig NS yourdomain.com +short
You should see a pair of Cloudflare nameservers ending in .ns.cloudflare.com (for example, ada.ns.cloudflare.com and carl.ns.cloudflare.com).
Installing cloudflared
You have two options for installing cloudflared: as a standalone binary or through Docker. Both work well, but the binary installation integrates more cleanly with systemd for automatic startup.
Option A: Binary Installation (Recommended)
Cloudflare provides an official Debian/Ubuntu repository. Add it and install:
# Add Cloudflare's GPG key
sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg > /dev/null
# Add the repository
echo "deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/cloudflared.list
# Install
sudo apt update
sudo apt install cloudflared -y
# Verify
cloudflared --version
Alternatively, download the binary directly:
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb -o cloudflared.deb
sudo dpkg -i cloudflared.deb
Option B: Docker Installation
If you prefer containerized deployment, run cloudflared as a Docker container. This is useful when you want to isolate the tunnel from the host system or manage it alongside other containerized services:
docker run -d --name cloudflared \
--restart unless-stopped \
cloudflare/cloudflared:latest \
tunnel --no-autoupdate run \
--token YOUR_TUNNEL_TOKEN
The token-based approach (covered below) works best with Docker since it avoids mounting credential files into the container.
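If you manage containers with Compose, the same token-based run translates into a short service definition. This is a sketch, not an official file: it assumes a token copied from the Zero Trust dashboard, which cloudflared reads from the TUNNEL_TOKEN environment variable.

```yaml
# docker-compose.yml sketch for a token-authenticated tunnel
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel --no-autoupdate run
    environment:
      # Paste the token shown when you create the tunnel in the dashboard
      - TUNNEL_TOKEN=YOUR_TUNNEL_TOKEN
```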
Creating and Authenticating a Tunnel
Start by authenticating cloudflared with your Cloudflare account. This opens a browser-based login flow:
cloudflared tunnel login
On a headless VPS, this command outputs a URL. Copy it, open it in your local browser, select the domain you want to use, and authorize the connection. A certificate file is saved to ~/.cloudflared/cert.pem.
Next, create a named tunnel:
cloudflared tunnel create my-vps-tunnel
This generates a tunnel UUID and a credentials file at ~/.cloudflared/TUNNEL_UUID.json. Note the UUID — you will reference it in your configuration. You can list your tunnels at any time:
cloudflared tunnel list
Now create a DNS record that points your domain to the tunnel:
cloudflared tunnel route dns my-vps-tunnel app.yourdomain.com
This creates a CNAME record in Cloudflare DNS pointing app.yourdomain.com to TUNNEL_UUID.cfargotunnel.com. You can route multiple subdomains to the same tunnel.
Single Service Routing
For the simplest case — exposing one service — you can run the tunnel directly from the command line without a configuration file:
cloudflared tunnel run --url http://localhost:3000 my-vps-tunnel
This routes all traffic arriving at your configured domain through to port 3000 on localhost. It works immediately and is useful for quick testing. However, for production use, you want a configuration file and systemd service, which we cover in the multi-service section below.
If your local service uses HTTPS with a self-signed certificate, tell cloudflared to skip verification:
cloudflared tunnel run --url https://localhost:8443 --no-tls-verify my-vps-tunnel
Multi-Service Configuration
The real power of Cloudflare Tunnel emerges when you route multiple services through a single tunnel. Create a configuration file at ~/.cloudflared/config.yml:
tunnel: TUNNEL_UUID
credentials-file: /root/.cloudflared/TUNNEL_UUID.json

ingress:
  - hostname: app.yourdomain.com
    service: http://localhost:3000
  - hostname: api.yourdomain.com
    service: http://localhost:8080
  - hostname: grafana.yourdomain.com
    service: http://localhost:3001
  - hostname: portainer.yourdomain.com
    service: http://localhost:9000
  - hostname: code.yourdomain.com
    service: https://localhost:8443
    originRequest:
      noTLSVerify: true
  - service: http_status:404
The last entry is a catch-all rule — it is required by cloudflared and handles requests that do not match any hostname. Each hostname needs a corresponding DNS route. Create them all:
cloudflared tunnel route dns my-vps-tunnel app.yourdomain.com
cloudflared tunnel route dns my-vps-tunnel api.yourdomain.com
cloudflared tunnel route dns my-vps-tunnel grafana.yourdomain.com
cloudflared tunnel route dns my-vps-tunnel portainer.yourdomain.com
cloudflared tunnel route dns my-vps-tunnel code.yourdomain.com
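The five commands above can also be generated with a small loop. This sketch prints each command rather than executing it, so you can review the list before piping it to sh:

```shell
# Generate (not execute) the DNS route command for every hostname in the
# ingress config. Once the output looks right, append "| sh" to run them.
for host in app api grafana portainer code; do
  echo "cloudflared tunnel route dns my-vps-tunnel $host.yourdomain.com"
done
```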
Validate your configuration before starting the tunnel:
cloudflared tunnel ingress validate
Now install cloudflared as a systemd service so it starts automatically on boot. Note that the install step copies your configuration and credentials into /etc/cloudflared/, and the service reads from that copy — make any later config changes there:
sudo cloudflared service install
sudo systemctl enable cloudflared
sudo systemctl start cloudflared
sudo systemctl status cloudflared
Check the logs if anything goes wrong:
sudo journalctl -u cloudflared -f
Cloudflare Tunnel Alongside Existing Nginx
If you already run Nginx as a reverse proxy on your VPS, Cloudflare Tunnel does not replace it — the two complement each other. A common pattern is to point the tunnel at Nginx, which then handles routing, load balancing, caching, or header manipulation before forwarding to your application processes.
In this configuration, Nginx listens on 127.0.0.1:80 (not 0.0.0.0) and the tunnel routes traffic to it:
ingress:
  - hostname: app.yourdomain.com
    service: http://127.0.0.1:80
  - service: http_status:404
Your Nginx server block handles the rest as normal:
server {
    listen 127.0.0.1:80;
    server_name app.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $http_cf_connecting_ip;
        proxy_set_header X-Forwarded-For $http_cf_connecting_ip;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Note the use of $http_cf_connecting_ip — Cloudflare adds this header containing the real client IP. Pass it through so your application logs show actual visitor addresses instead of Cloudflare's internal IPs. By binding Nginx to 127.0.0.1 instead of 0.0.0.0, it only accepts connections from the tunnel and refuses any direct external access, even if firewall rules were misconfigured.
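As an alternative to passing the header manually, Nginx's realip module can rewrite $remote_addr itself, so access logs and modules like limit_req see the visitor's address without per-location header plumbing. A minimal sketch, assuming your Nginx build includes ngx_http_realip_module (most distribution packages do):

```nginx
# Trust the loopback source (the tunnel delivers traffic via localhost) and
# take the client address from Cloudflare's header; $remote_addr then holds
# the real visitor IP everywhere Nginx uses it.
set_real_ip_from 127.0.0.1;
real_ip_header CF-Connecting-IP;
```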
Cloudflare Access: Zero-Trust Policies
Cloudflare Tunnel becomes significantly more powerful when combined with Cloudflare Access, which adds authentication and authorization in front of your tunneled services — no changes to the applications themselves.
In the Cloudflare Zero Trust dashboard, navigate to Access > Applications and create a new self-hosted application. Set the domain to match your tunnel hostname (for example, grafana.yourdomain.com). Then define an access policy:
- Email-based: Only allow specific email addresses or email domains
- Identity provider: Integrate with Google, GitHub, Okta, Azure AD, or any SAML/OIDC provider
- IP-based: Restrict to specific IP ranges (useful for office networks)
- Multi-factor: Require MFA verification at the Cloudflare edge before traffic reaches your tunnel
- Country-based: Block or allow traffic from specific geographic regions
This effectively creates a zero-trust perimeter around your self-hosted services. Visitors hit a Cloudflare-hosted login page before any request reaches your VPS. Even if someone discovers your tunnel URL, they cannot interact with the service without passing the access policy. For internal tools like admin panels, monitoring dashboards, and development environments, this is a significant security upgrade over IP whitelisting or basic authentication alone.
Access policies are free for up to 50 users on the Cloudflare Zero Trust free plan, which is generous enough for most self-hosting scenarios.
Locking Down the Firewall
With Cloudflare Tunnel handling all inbound application traffic, you can aggressively lock down your VPS firewall. The only port that needs to remain open is SSH for remote management. Close everything else:
# Reset UFW to default deny
sudo ufw default deny incoming
sudo ufw default allow outgoing
# Allow SSH (adjust port if you've changed it)
sudo ufw allow 22/tcp
# Enable the firewall
sudo ufw enable
# Verify — only SSH should be listed
sudo ufw status verbose
That is it. No port 80, no port 443, no port 8080 — nothing. The tunnel works over outbound connections, so the allow outgoing default is all it needs. Port scanners will see only SSH, and if you move SSH to a non-standard port and use key-based authentication, your VPS becomes nearly invisible on the internet. For more advanced UFW configurations including rate limiting, connection tracking, and application profiles, see our detailed guide on advanced UFW firewall rules for Ubuntu VPS.
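After locking things down, it is worth auditing what is actually listening. This sketch prints any TCP socket bound to a non-loopback address — with the tunnel in place, ideally only sshd should appear (run with sudo and add -p to see process names):

```shell
# List listening TCP sockets, keeping the header row and any socket whose
# local address is not loopback (127.x or [::1]).
ss -tln | awk 'NR == 1 || $4 !~ /^(127\.|\[::1\])/'
```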
Performance and Latency Analysis
Running traffic through Cloudflare Tunnel introduces an additional network hop, and the performance impact varies by geography and workload type. Here is what to expect.
For HTTP/HTTPS traffic, Cloudflare routes through the nearest point of presence (PoP) to the visitor, then through the nearest PoP to your VPS. If your VPS is in Frankfurt and a visitor is in Berlin, the traffic path is: Berlin → Cloudflare Frankfurt PoP → Your VPS Frankfurt. The added latency is typically 2-8ms because Cloudflare has data centers in most major cities. If the visitor is in Sydney and your VPS is in New York, the latency is dominated by geography regardless of the tunnel.
Cloudflare maintains multiple concurrent connections per tunnel (typically 4 connections to 2 different PoPs) for redundancy. If one connection drops, traffic instantly shifts to another. The QUIC protocol option further reduces overhead compared to HTTP/2 by eliminating TCP handshake delays on reconnection.
Where you may notice impact is on the first request after an idle period. Cloudflare's connection pooling means the first request may take slightly longer as the connection warms up. Subsequent requests flow through the established connections with minimal overhead. For high-throughput applications, the tunnel can sustain hundreds of megabits per second — bandwidth is not typically the bottleneck.
To benchmark your specific setup, use curl with timing breakdown:
curl -o /dev/null -s -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nTotal: %{time_total}s\nTTFB: %{time_starttransfer}s\n" https://app.yourdomain.com
Compare this against a direct request to your VPS IP on the same service to quantify the exact tunnel overhead for your deployment.
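To put a number on that comparison, subtract the direct total from the tunneled total. A small sketch with illustrative timings — substitute the time_total values curl reports for your own endpoints:

```shell
# Example timings in seconds; replace with your measured time_total values.
tunnel_total=0.182   # request through app.yourdomain.com (via the tunnel)
direct_total=0.131   # request straight to the origin service
awk -v t="$tunnel_total" -v d="$direct_total" \
    'BEGIN { printf "Tunnel overhead: %.0f ms\n", (t - d) * 1000 }'
# → Tunnel overhead: 51 ms
```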
When to Use Tunnel vs Direct Exposure
Cloudflare Tunnel is not universally better than direct exposure. The right choice depends on your threat model, performance requirements, and the services you are hosting.
Use Cloudflare Tunnel when:
- You host admin panels, dashboards, or internal tools that should not be publicly discoverable
- You want zero-trust access control without modifying your applications
- You run multiple services on non-standard ports and want clean subdomain routing
- You need to hide your origin server IP from public DNS records
- You self-host services on a home network or behind restrictive NATs
Use direct exposure when:
- You need sub-millisecond latency for real-time applications
- You run game servers or protocols that require direct UDP connectivity
- You need full control over TLS termination, certificate pinning, or custom certificate authorities
- Your hosting provider already offers robust network-level DDoS protection — MassiveGRID VPS includes 12 Tbps DDoS mitigation at no extra cost, which eliminates the primary reason many people adopt tunnels
- You serve high-bandwidth content (video streaming, large file downloads) where the additional hop adds measurable cost
Many operators use a hybrid approach: tunnel for internal tools and admin interfaces, direct exposure for public-facing production services that benefit from the hosting provider's native DDoS protection and lower latency.
Monitoring Tunnel Health
Once your tunnel is running in production, you want visibility into its health and performance. Cloudflare provides several monitoring options.
The Cloudflare dashboard under Networks > Tunnels shows each tunnel's status, connected PoPs, and recent connection events. You can see when connections drop and reconnect, which is useful for diagnosing intermittent issues.
Locally, check the systemd service status and logs:
# Service status
sudo systemctl status cloudflared
# Recent logs
sudo journalctl -u cloudflared --since "1 hour ago"
# Follow live logs
sudo journalctl -u cloudflared -f
For automated monitoring, create a simple health check script that verifies the tunnel is running and your services are reachable through it:
#!/bin/bash
# /usr/local/bin/tunnel-health.sh

# Check if cloudflared process is running
if ! pgrep -x cloudflared > /dev/null; then
    echo "CRITICAL: cloudflared not running"
    sudo systemctl restart cloudflared
    exit 1
fi

# Check if the tunnel endpoint responds
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" https://app.yourdomain.com/health)
if [ "$HTTP_CODE" != "200" ]; then
    echo "WARNING: Tunnel endpoint returned $HTTP_CODE"
    exit 1
fi

echo "OK: Tunnel healthy"
exit 0
Schedule this with cron to run every five minutes and pipe alerts to your notification system of choice. Cloudflare also exposes tunnel metrics via their API, which you can feed into Prometheus or Grafana for historical visibility.
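One way to wire that up is a crontab entry (edit with crontab -e as root, since the script calls systemctl restart). The log path here is an arbitrary choice:

```shell
# Run the health check every five minutes; keep output for later inspection
*/5 * * * * /usr/local/bin/tunnel-health.sh >> /var/log/tunnel-health.log 2>&1
```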
For cloudflared-specific metrics, the daemon exposes a local metrics endpoint when configured:
# Add to config.yml
metrics: localhost:2000
Then scrape http://localhost:2000/metrics for Prometheus-format data including active connections, request counts, error rates, and tunnel latency percentiles.
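A quick way to pull one value out of that endpoint is an awk filter over the metrics text. The sample below is illustrative — metric names vary by cloudflared version, so check curl http://localhost:2000/metrics for the names your build actually exports:

```shell
# Sample Prometheus-format output; these metric names are illustrative.
metrics='cloudflared_tunnel_total_requests 1042
cloudflared_tunnel_request_errors 3'

# Print the value of one metric by exact match on the first field.
get_metric() {
    printf '%s\n' "$metrics" | awk -v m="$1" '$1 == m { print $2 }'
}

get_metric cloudflared_tunnel_total_requests
# → 1042
```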
Your VPS Already Has DDoS Protection
One of the most common reasons people set up Cloudflare Tunnel is DDoS protection — hiding the origin IP so attackers cannot target it directly. This is a valid concern on providers that leave you exposed to volumetric attacks, but it is worth noting that not all hosting environments have this vulnerability.
MassiveGRID's VPS platform includes 12 Tbps of DDoS mitigation as a standard feature across all plans, including the $1.99/mo entry tier. This scrubbing happens at the network edge before traffic reaches your instance, which means your public-facing services are already protected without a tunnel. And because the cloudflared daemon is lightweight (typically under 50MB of RAM), even the smallest VPS can still run a tunnel alongside its application workloads if you want one for other reasons.
This changes the calculus. If DDoS protection is your primary motivation, you may not need a tunnel at all — your VPS is already protected. Where the tunnel still adds clear value on a MassiveGRID deployment is origin IP concealment (preventing targeted attacks that bypass DNS-based protection), zero-trust access policies for internal tools, and clean multi-service routing without managing certificates for every subdomain.
For workloads where tunnel latency matters — say you are running a low-latency API or serving resources where every millisecond counts — dedicated VPS resources give you consistent performance because your CPU and RAM are not shared with other tenants. The tunnel hop adds a small overhead, but dedicated resources ensure your origin response time stays predictable, keeping total latency within acceptable bounds even with the additional network path.
Troubleshooting Common Issues
A few issues come up frequently when setting up Cloudflare Tunnel for the first time.
Tunnel connects but services return 502 errors. This means cloudflared reached your local service but got an error. Verify the service is actually running on the port specified in your config. Test locally with curl http://localhost:3000 from the VPS itself. If the service uses HTTPS locally, ensure you have noTLSVerify: true in the ingress rule's originRequest section.
DNS resolution errors. After running cloudflared tunnel route dns, the record may take a few minutes to propagate. Note that tunnel records are proxied, so a public dig CNAME app.yourdomain.com typically returns Cloudflare's answers rather than the tunnel target. Verify the record in the Cloudflare DNS dashboard instead — it should be a CNAME pointing to TUNNEL_UUID.cfargotunnel.com.
Tunnel disconnects frequently. Check your VPS's outbound connectivity and DNS resolution. Cloudflared needs stable outbound access to Cloudflare's edge. If your VPS has restrictive outbound firewall rules, ensure traffic to Cloudflare's IP ranges on ports 443 and 7844 is permitted. Also check system resources — if the VPS is running low on memory, the OOM killer may terminate cloudflared.
Real client IPs not visible. Cloudflare adds the CF-Connecting-IP header to requests. Your application or reverse proxy needs to extract this header. In Nginx, use $http_cf_connecting_ip in your proxy headers. In Node.js, read req.headers['cf-connecting-ip'].
Catch-all rule errors. The config file must end with a catch-all ingress rule (the - service: http_status:404 line). If you forget this, cloudflared refuses to start and logs an error about the ingress rules being invalid.
Prefer Managed Network Architecture?
Setting up Cloudflare Tunnel is straightforward, but it is one piece of a larger network architecture puzzle. You still need to manage TLS certificates for internal services, keep cloudflared updated, monitor tunnel health, configure access policies, maintain firewall rules, and troubleshoot connectivity issues when they arise.
If you would rather focus on your applications instead of the infrastructure underneath them, MassiveGRID's fully managed hosting handles network architecture, security configuration, monitoring, and maintenance for you. The managed team configures firewalls, sets up secure access patterns, manages DDoS protection, and keeps the network layer running so you can deploy your code without worrying about tunnel configurations, certificate renewals, or edge routing. For teams that want the security benefits of controlled network access without the operational overhead of managing it themselves, that is the most direct path.