Vercel, Netlify, and Cloudflare Pages have made deploying static sites effortless — push to Git, and your site is live in seconds. But that convenience comes with trade-offs you discover only after you're locked in: build minute limits, bandwidth caps, vendor-specific configuration files, proprietary edge functions, and pricing that scales unpredictably. When your side project suddenly gets traffic, you're scrambling to understand why your bill tripled or why your deploys are queued behind a build limit.
Self-hosting your static sites on a VPS gives you complete control over the deployment pipeline, unlimited builds, predictable costs, and zero vendor lock-in. With Nginx serving pre-built HTML, response times rival a CDN for visitors near your datacenter, sometimes beating it because there's no cold-start latency or edge-function overhead, and you can always put a CDN in front later. And the configuration you write is standard Nginx, transferable to any server, anywhere.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
VPS vs Vercel, Netlify, and Cloudflare Pages
Let's be honest about the trade-offs. Platform-as-a-service providers are excellent for getting started quickly. But here's where a VPS wins:
| Factor | VPS (Self-Hosted) | Vercel / Netlify / CF Pages |
|---|---|---|
| Build minutes | Unlimited | Capped on free tiers (e.g., 300 min/mo on Netlify, 500 builds/mo on CF Pages); paid tiers vary |
| Bandwidth | Included (1-10 TB typical) | Free tier: 100GB; overage charges apply |
| Concurrent builds | Limited only by CPU/RAM | 1 (free) to 3 (paid) |
| Build environment | Full control (any Node version, any tool) | Restricted to platform's build image |
| Custom server config | Full Nginx configuration | Limited to platform rules (netlify.toml, vercel.json) |
| Server-side logic | Anything you want | Proprietary edge/serverless functions |
| Vendor lock-in | None | Moderate to high |
| Monthly cost (multiple sites) | $1.99-5.99/mo flat | $0 (free tier) to $20+/site/mo |
| Setup effort | Medium (one-time) | Low |
| Deploy speed | Instant (after build) | 30s-3min (queued builds) |
The VPS approach makes the most sense when you're hosting multiple sites, need unlimited builds, want full control over your server configuration, or simply prefer predictable costs. Static sites are the lightest workload possible — a Cloud VPS with 1 vCPU / 1GB RAM serves dozens of static sites via Nginx without breaking a sweat.
Prerequisites
You need:
- An Ubuntu VPS — even the smallest plan works perfectly for static site hosting
- Nginx installed — see our LEMP stack guide for installation, or install Nginx standalone:
sudo apt update
sudo apt install nginx -y
sudo systemctl enable nginx
sudo systemctl start nginx
- A domain name with DNS A records pointing to your VPS IP
- Node.js installed (for building Hugo, Next.js, and Astro sites):
# Install Node.js 22 LTS via NodeSource
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install nodejs -y
node --version
npm --version
Verify Nginx is running:
sudo systemctl status nginx
curl -I http://localhost
Nginx Configuration for Static Sites
Before deploying any framework, let's set up a production-grade Nginx configuration with gzip compression, proper caching headers, and security headers. These settings dramatically improve performance and security.
First, configure Nginx's main settings for optimal static file serving:
sudo nano /etc/nginx/nginx.conf
Ensure these settings are in the http block:
http {
# ... existing settings ...
# Gzip compression
gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 256;
gzip_types
text/plain
text/css
text/xml
text/javascript
application/javascript
application/json
application/xml
application/rss+xml
application/atom+xml
image/svg+xml;
# Note: woff/woff2 fonts are already compressed — gzipping them wastes CPU for no gain
# File cache settings
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
}
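The gzip module above compresses responses on the fly. Optionally, you can pre-compress assets at deploy time and add gzip_static on; alongside the gzip settings, so Nginx serves ready-made .gz files instead of compressing per request (the gzip_static module is compiled into Ubuntu's nginx package). A sketch, demonstrated on a throwaway directory so it runs anywhere — in a real deploy, point SITE_ROOT at your web root such as /var/www/hugo-site.com:

```shell
# Pre-compress text assets so Nginx (with `gzip_static on;`) can serve the
# .gz copies directly instead of compressing per-request.
# -k keeps the originals for clients that don't accept gzip.
SITE_ROOT=$(mktemp -d)                      # demo dir; use your web root in practice
echo '<h1>hello</h1>' > "$SITE_ROOT/index.html"
find "$SITE_ROOT" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' \) \
  -exec gzip -kf9 {} +
ls "$SITE_ROOT"   # now contains index.html and index.html.gz
```

Run this as the last step of your build, before rsyncing to the web root, so the .gz copies always match the current deploy.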
Now create a reusable snippet for security and caching headers that you'll include in every site configuration. One Nginx quirk to keep in mind: a location block that declares its own add_header does not inherit add_header directives from the enclosing level, so the asset locations below carry only their caching headers; repeat the security headers inside a location if you need them on every response:
sudo nano /etc/nginx/snippets/static-site-headers.conf
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
# Cache static assets aggressively
location ~* \.(jpg|jpeg|png|gif|ico|webp|avif|svg)$ {
expires 30d;
add_header Cache-Control "public, immutable";
access_log off;
}
location ~* \.(css|js)$ {
expires 7d;
add_header Cache-Control "public";
access_log off;
}
location ~* \.(woff|woff2|ttf|otf|eot)$ {
expires 365d;
add_header Cache-Control "public, immutable";
add_header Access-Control-Allow-Origin "*";
access_log off;
}
# Deny access to hidden files (but allow .well-known, used for ACME challenges)
location ~ /\.(?!well-known) {
deny all;
access_log off;
log_not_found off;
}
Test and reload:
sudo nginx -t
sudo systemctl reload nginx
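One deliberate omission in the snippet: HTML files. Unlike hashed CSS/JS assets, HTML should be revalidated on every visit so a fresh deploy shows up immediately. If you want to make that explicit, a hedged addition to the same snippet (note that this regex location, like the others, won't inherit the server-level security headers):

```nginx
# HTML changes on every deploy — make browsers revalidate rather than cache
location ~* \.html$ {
    add_header Cache-Control "no-cache";
}
```

Browsers will still use conditional requests (If-Modified-Since / ETag), so unchanged pages cost only a 304 round trip.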
Deploying a Hugo Site
Hugo is one of the fastest static site generators — it builds thousands of pages in seconds. Install Hugo on your VPS:
# Install Hugo extended (needed for SCSS processing)
HUGO_VERSION="0.142.0"
wget "https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_extended_${HUGO_VERSION}_linux-amd64.deb"
sudo dpkg -i "hugo_extended_${HUGO_VERSION}_linux-amd64.deb"
hugo version
Create the web root and deploy directory:
sudo mkdir -p /var/www/hugo-site.com
sudo chown -R $USER:$USER /var/www/hugo-site.com
Clone your Hugo project and build it:
# Clone your Hugo site repository
cd /home/$USER
git clone https://github.com/yourusername/your-hugo-site.git
cd your-hugo-site
# Install Hugo modules (if used)
hugo mod get
# Build for production
hugo --minify --baseURL "https://hugo-site.com"
# Copy the built site to the web root
rsync -av --delete public/ /var/www/hugo-site.com/
Create the Nginx server block:
sudo nano /etc/nginx/sites-available/hugo-site.com
server {
listen 80;
server_name hugo-site.com www.hugo-site.com;
root /var/www/hugo-site.com;
index index.html;
# Include shared headers and caching
include snippets/static-site-headers.conf;
# Handle clean URLs (Hugo generates /page/index.html)
location / {
try_files $uri $uri/ =404;
}
# Custom 404 page
error_page 404 /404.html;
# Disable access log for favicon
location = /favicon.ico {
access_log off;
log_not_found off;
}
}
sudo ln -s /etc/nginx/sites-available/hugo-site.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
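The server block above answers on both the apex and www hostnames, which leaves every page reachable at two URLs. If you prefer to canonicalize on one host, a sketch of a separate redirect block (here www to apex; Certbot's nginx plugin can also configure redirects for you):

```nginx
# Redirect www to the apex domain
server {
    listen 80;
    server_name www.hugo-site.com;
    return 301 https://hugo-site.com$request_uri;
}
```

If you add this, drop www.hugo-site.com from the main block's server_name so the two blocks don't compete for the same hostname.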
Hugo build times are trivially fast — even sites with thousands of pages build in under 5 seconds. This makes Hugo an ideal choice for self-hosted deployments where you want instant builds.
Deploying a Next.js Static Export
Next.js can generate a fully static site using its output: 'export' configuration. This gives you all of Next.js's component model and routing without needing a Node.js server in production.
First, ensure your Next.js project is configured for static export. In next.config.js (or next.config.mjs):
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'export',
// Optional: Add trailing slashes to match directory-based routing
trailingSlash: true,
// Optional: Optimize images for static export
images: {
unoptimized: true,
},
}
module.exports = nextConfig
Clone, build, and deploy:
# Clone the repository
cd /home/$USER
git clone https://github.com/yourusername/your-nextjs-site.git
cd your-nextjs-site
# Install dependencies
npm ci
# Build the static export
npm run build
# The output is in the 'out' directory
ls out/
# Deploy to web root
sudo mkdir -p /var/www/nextjs-site.com
sudo chown -R $USER:$USER /var/www/nextjs-site.com
rsync -av --delete out/ /var/www/nextjs-site.com/
Create the Nginx configuration. Next.js static exports need special handling for client-side routing:
sudo nano /etc/nginx/sites-available/nextjs-site.com
server {
listen 80;
server_name nextjs-site.com www.nextjs-site.com;
root /var/www/nextjs-site.com;
index index.html;
# Include shared headers and caching
include snippets/static-site-headers.conf;
# Next.js static export with trailingSlash: true
location / {
try_files $uri $uri/ $uri.html =404;
}
# Cache Next.js static assets (hashed filenames)
location /_next/static/ {
expires 365d;
add_header Cache-Control "public, immutable";
access_log off;
}
# Custom 404 page
error_page 404 /404.html;
}
sudo ln -s /etc/nginx/sites-available/nextjs-site.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Note: Next.js features that require a server — API routes, server-side rendering (SSR), incremental static regeneration (ISR), middleware — will not work with static export. If you need these features, deploy Next.js as a Node.js application instead. See our Node.js deployment guide for that approach.
Deploying an Astro Site
Astro generates static HTML by default with zero JavaScript shipped to the client (unless you explicitly add interactive components). This makes it the ideal framework for content-heavy static sites.
Clone, build, and deploy:
# Clone the repository
cd /home/$USER
git clone https://github.com/yourusername/your-astro-site.git
cd your-astro-site
# Install dependencies
npm ci
# Build the site
npm run build
# Astro outputs to the 'dist' directory
ls dist/
# Deploy to web root
sudo mkdir -p /var/www/astro-site.com
sudo chown -R $USER:$USER /var/www/astro-site.com
rsync -av --delete dist/ /var/www/astro-site.com/
Create the Nginx configuration:
sudo nano /etc/nginx/sites-available/astro-site.com
server {
listen 80;
server_name astro-site.com www.astro-site.com;
root /var/www/astro-site.com;
index index.html;
# Include shared headers and caching
include snippets/static-site-headers.conf;
# Astro generates clean URLs with directory-based routing
location / {
try_files $uri $uri/ $uri.html =404;
}
# Cache Astro's hashed assets
location /_astro/ {
expires 365d;
add_header Cache-Control "public, immutable";
access_log off;
}
# Custom 404 page
error_page 404 /404.html;
}
sudo ln -s /etc/nginx/sites-available/astro-site.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Astro's build output is typically smaller than Next.js because it doesn't ship a JavaScript runtime by default. For content-focused sites like blogs, documentation, and marketing pages, Astro produces the leanest possible output.
Multi-Site Hosting with Nginx Server Blocks
One of the biggest advantages of a VPS over platform services is hosting unlimited sites at no additional cost. Each site gets its own Nginx server block and web root directory.
Here's the pattern:
# Create web roots for each site
sudo mkdir -p /var/www/site-one.com
sudo mkdir -p /var/www/site-two.com
sudo mkdir -p /var/www/site-three.com
# Each site gets its own Nginx config
sudo nano /etc/nginx/sites-available/site-one.com
sudo nano /etc/nginx/sites-available/site-two.com
sudo nano /etc/nginx/sites-available/site-three.com
# Enable each site
sudo ln -s /etc/nginx/sites-available/site-one.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/site-two.com /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/site-three.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Nginx handles this with near-zero overhead. A single VPS can serve hundreds of static sites simultaneously because each request just reads a file from disk — there's no application server, no database, no runtime involved.
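With many server blocks on one IP, any hostname you haven't configured (including requests to the raw IP) falls through to whichever block Nginx picks as the default. A sketch of an explicit catch-all that simply drops unmatched requests:

```nginx
# Catch-all: refuse requests for hostnames no server block claims
server {
    listen 80 default_server;
    server_name _;
    return 444;  # nginx-specific: close the connection without a response
}
```

On Ubuntu, the stock sites-enabled/default also claims default_server on port 80; remove that symlink first, or nginx -t will report a duplicate default server.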
For managing multiple WordPress sites alongside static sites, see our multi-site WordPress hosting guide.
Automated Deploys with Git Hooks (Push-to-Deploy)
Set up a bare Git repository on your VPS that automatically builds and deploys your site when you push to it. This is the simplest form of continuous deployment — no external service required.
Set Up the Bare Repository
# Create a bare repository for your site
sudo mkdir -p /opt/repos/hugo-site.git
cd /opt/repos/hugo-site.git
sudo git init --bare
sudo chown -R $USER:$USER /opt/repos/hugo-site.git
Create the Post-Receive Hook
This hook runs after every push, checks out the code, builds the site, and copies it to the web root. Create the log file first so the hook, which runs as your SSH user, can write to it:
sudo touch /var/log/deploy-hugo-site.com.log
sudo chown $USER:$USER /var/log/deploy-hugo-site.com.log
nano /opt/repos/hugo-site.git/hooks/post-receive
For a Hugo site:
#!/bin/bash
set -euo pipefail
SITE_NAME="hugo-site.com"
WORK_DIR="/tmp/build-$SITE_NAME"
WEB_ROOT="/var/www/$SITE_NAME"
LOG_FILE="/var/log/deploy-$SITE_NAME.log"
echo "=== Deploy started at $(date) ===" >> "$LOG_FILE"
# Read the branch that was pushed (stdin delivers one line per updated ref)
BRANCH=""
while read oldrev newrev refname; do
BRANCH=$(echo "$refname" | sed 's|refs/heads/||')
done
# Only deploy from the main branch
if [ "$BRANCH" != "main" ]; then
echo "Received push to $BRANCH — skipping deploy (only main triggers deploy)" >> "$LOG_FILE"
exit 0
fi
echo "Deploying branch: $BRANCH" >> "$LOG_FILE"
# Clean and checkout
rm -rf "$WORK_DIR"
mkdir -p "$WORK_DIR"
git --work-tree="$WORK_DIR" --git-dir="/opt/repos/hugo-site.git" checkout -f main
# Build
cd "$WORK_DIR"
hugo --minify --baseURL "https://$SITE_NAME" >> "$LOG_FILE" 2>&1
# Deploy
rsync -av --delete "$WORK_DIR/public/" "$WEB_ROOT/" >> "$LOG_FILE" 2>&1
# Cleanup
rm -rf "$WORK_DIR"
echo "=== Deploy completed at $(date) ===" >> "$LOG_FILE"
echo "Deploy successful!"
For a Next.js static export, replace the build section:
# Build (Next.js)
cd "$WORK_DIR"
npm ci >> "$LOG_FILE" 2>&1
npm run build >> "$LOG_FILE" 2>&1
# Deploy
rsync -av --delete "$WORK_DIR/out/" "$WEB_ROOT/" >> "$LOG_FILE" 2>&1
For an Astro site:
# Build (Astro)
cd "$WORK_DIR"
npm ci >> "$LOG_FILE" 2>&1
npm run build >> "$LOG_FILE" 2>&1
# Deploy
rsync -av --delete "$WORK_DIR/dist/" "$WEB_ROOT/" >> "$LOG_FILE" 2>&1
Make the hook executable:
chmod +x /opt/repos/hugo-site.git/hooks/post-receive
Add the VPS as a Git Remote
On your local machine, add the VPS as a deployment remote:
# On your local machine
cd /path/to/your-hugo-site
git remote add deploy ssh://user@your-vps-ip/opt/repos/hugo-site.git
# Deploy by pushing to the VPS
git push deploy main
Now every git push deploy main from your local machine automatically builds and deploys your site. The deploy typically completes in seconds for Hugo, and 30-60 seconds for Next.js or Astro (due to npm ci).
For a self-hosted Git server (if you want to avoid GitHub entirely), see our Git server setup guide.
Automated Deploys with GitHub Actions
If your code lives on GitHub and you want deploys triggered by pushes to GitHub (not directly to the VPS), use GitHub Actions. This gives you build logs in GitHub's UI and integrates with pull request workflows.
First, set up SSH key authentication for GitHub Actions. On your VPS:
# Create a deploy user (or use an existing one)
sudo adduser --disabled-password --gecos "" deploy
sudo mkdir -p /home/deploy/.ssh
sudo chmod 700 /home/deploy/.ssh
# Generate a deploy key
ssh-keygen -t ed25519 -f /tmp/deploy_key -N "" -C "github-actions-deploy"
# Add the public key to authorized_keys
sudo cp /tmp/deploy_key.pub /home/deploy/.ssh/authorized_keys
sudo chmod 600 /home/deploy/.ssh/authorized_keys
sudo chown -R deploy:deploy /home/deploy/.ssh
# Give deploy user write access to the web root
sudo chown -R deploy:deploy /var/www/hugo-site.com
# Copy the private key — you'll add this to GitHub Secrets
cat /tmp/deploy_key
# Clean up the key from the VPS
rm /tmp/deploy_key /tmp/deploy_key.pub
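Optionally, harden the deploy key so a leaked secret can only rsync into the web roots rather than open a full shell. Recent rsync packages ship an rrsync helper for exactly this; a sketch, assuming rrsync lives at /usr/bin/rrsync (its location varies by distro — older releases ship it gzipped under /usr/share/doc/rsync/scripts):

```
# /home/deploy/.ssh/authorized_keys — prefix the key with a forced command
# -wo: write-only; /var/www: the only directory tree the key can touch
command="/usr/bin/rrsync -wo /var/www",no-pty,no-agent-forwarding,no-port-forwarding ssh-ed25519 AAAA... github-actions-deploy
```

With this in place, the GitHub Actions rsync step keeps working, but the key is useless for interactive logins or reads outside /var/www.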
In your GitHub repository, go to Settings → Secrets and variables → Actions and add these secrets:
- DEPLOY_SSH_KEY — The private key content
- DEPLOY_HOST — Your VPS IP address
- DEPLOY_USER — The deploy user (e.g., deploy)
Create the GitHub Actions workflow. For a Hugo site:
mkdir -p .github/workflows
# .github/workflows/deploy.yml
name: Deploy to VPS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: true

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: '0.142.0'
          extended: true

      - name: Build
        run: hugo --minify --baseURL "https://hugo-site.com"

      - name: Deploy via rsync
        uses: burnett01/rsync-deployments@7.0.1
        with:
          switches: -avz --delete
          path: public/
          remote_path: /var/www/hugo-site.com/
          remote_host: ${{ secrets.DEPLOY_HOST }}
          remote_user: ${{ secrets.DEPLOY_USER }}
          remote_key: ${{ secrets.DEPLOY_SSH_KEY }}
For a Next.js static export:
name: Deploy Next.js to VPS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'

      - name: Install and build
        run: |
          npm ci
          npm run build

      - name: Deploy via rsync
        uses: burnett01/rsync-deployments@7.0.1
        with:
          switches: -avz --delete
          path: out/
          remote_path: /var/www/nextjs-site.com/
          remote_host: ${{ secrets.DEPLOY_HOST }}
          remote_user: ${{ secrets.DEPLOY_USER }}
          remote_key: ${{ secrets.DEPLOY_SSH_KEY }}
For Astro, it's nearly identical — just change the build output path from out/ to dist/.
For a deeper dive into CI/CD pipelines including testing, staging environments, and rollback strategies, see our CI/CD deployment guide.
SSL with Let's Encrypt
Secure every site with free SSL certificates from Let's Encrypt. Install Certbot if you haven't already:
sudo apt install certbot python3-certbot-nginx -y
Issue certificates for all your sites:
# Single site
sudo certbot --nginx -d hugo-site.com -d www.hugo-site.com
# Multiple sites in one command
sudo certbot --nginx -d hugo-site.com -d www.hugo-site.com -d nextjs-site.com -d www.nextjs-site.com -d astro-site.com -d www.astro-site.com
Certbot automatically modifies your Nginx configuration to handle HTTPS and sets up auto-renewal. Verify the renewal timer is active:
sudo systemctl status certbot.timer
sudo certbot renew --dry-run
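To eyeball a certificate's validity window after issuance, openssl's x509 inspection works against a live endpoint or a local file. A sketch — the live query is commented out (substitute your domain); the runnable demo inspects a throwaway self-signed certificate so it works anywhere:

```shell
# Live check (uncomment and substitute your domain):
# echo | openssl s_client -connect hugo-site.com:443 -servername hugo-site.com 2>/dev/null \
#   | openssl x509 -noout -dates

# Local demo of the same inspection on a throwaway self-signed certificate:
TMP=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=hugo-site.com" \
  -keyout "$TMP/key.pem" -out "$TMP/cert.pem" 2>/dev/null
openssl x509 -noout -dates -in "$TMP/cert.pem"   # prints notBefore= and notAfter= lines
```

Let's Encrypt certificates are valid for 90 days; if notAfter is ever closer than 30 days out, the certbot.timer renewal isn't firing and deserves investigation.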
For the complete SSL setup walkthrough, see our Let's Encrypt SSL guide.
CDN Integration (Cloudflare in Front of Your VPS)
For sites that need global edge caching, you can put Cloudflare in front of your VPS. This is optional — Nginx with gzip serves static files extremely well on its own — but it's useful if you have a globally distributed audience.
Set up Cloudflare:
- Add your domain to Cloudflare (free plan is sufficient)
- Update your domain's nameservers to Cloudflare's
- In Cloudflare DNS, create an A record pointing to your VPS IP with the proxy toggle enabled (orange cloud)
- Set SSL/TLS mode to Full (Strict) since you have a valid Let's Encrypt certificate on your origin server
Configure Cloudflare caching rules (Page Rules, or their newer replacement, Cache Rules) for optimal caching:
- *.yourdomain.com/*.css — Cache Level: Cache Everything, Edge Cache TTL: 1 month
- *.yourdomain.com/*.js — Cache Level: Cache Everything, Edge Cache TTL: 1 month
- *.yourdomain.com/*.html — Cache Level: Cache Everything, Edge Cache TTL: 2 hours
On your VPS, restrict Nginx to accept connections only from Cloudflare's IP ranges, so nobody can bypass Cloudflare by hitting your origin directly. The ranges below are Cloudflare's published list at the time of writing — check cloudflare.com/ips periodically, as they can change:
sudo nano /etc/nginx/snippets/cloudflare-only.conf
# Cloudflare IPv4 ranges
allow 173.245.48.0/20;
allow 103.21.244.0/22;
allow 103.22.200.0/22;
allow 103.31.4.0/22;
allow 141.101.64.0/18;
allow 108.162.192.0/18;
allow 190.93.240.0/20;
allow 188.114.96.0/20;
allow 197.234.240.0/22;
allow 198.41.128.0/17;
allow 162.158.0.0/15;
allow 104.16.0.0/13;
allow 104.24.0.0/14;
allow 172.64.0.0/13;
allow 131.0.72.0/22;
deny all;
Include it in your site's server block:
server {
# ...
include snippets/cloudflare-only.conf;
# ...
}
Also set the real IP header so your logs show visitor IPs instead of Cloudflare's:
# Add to the http block in nginx.conf
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 131.0.72.0/22;
real_ip_header CF-Connecting-IP;
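Both lists above are snapshots that need occasional refreshing. A sketch of a generator that turns Cloudflare's published ranges into both snippets — in practice you'd fetch the current list first (curl -s https://www.cloudflare.com/ips-v4 > cf-ips.txt); here it's fed a fixed two-range sample so the output is deterministic, and the cloudflare-realip.conf filename is just an illustration:

```shell
# Regenerate the allow-list and real_ip snippets from a file of CIDR ranges.
# Real usage: curl -s https://www.cloudflare.com/ips-v4 > cf-ips.txt
printf '%s\n' 173.245.48.0/20 103.21.244.0/22 > cf-ips.txt

# Access-restriction snippet: one allow per range, then deny everything else
{ awk '{print "allow " $0 ";"}' cf-ips.txt; echo "deny all;"; } > cloudflare-only.conf

# Real-IP snippet: trust the same ranges, read CF-Connecting-IP for logs
{ awk '{print "set_real_ip_from " $0 ";"}' cf-ips.txt
  echo "real_ip_header CF-Connecting-IP;"; } > cloudflare-realip.conf

cat cloudflare-only.conf
# allow 173.245.48.0/20;
# allow 103.21.244.0/22;
# deny all;
```

After regenerating, move the files into /etc/nginx/snippets/, run sudo nginx -t, and reload.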
Build-Heavy Workflows
If you're building directly on the VPS (rather than using GitHub Actions), large Next.js or Astro projects can be resource-intensive during the build phase. A Next.js project with hundreds of pages and heavy image optimization can consume 2-4 GB of RAM during npm run build.
If your Next.js or Astro build process is resource-intensive, dedicated CPU resources ensure builds complete quickly without competing with other workloads on the same physical hardware. This is especially important if you're building multiple sites on the same server.
If you're building on a smaller VPS and running into memory issues, add swap space as a safety net:
# Add 2GB swap
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify
sudo swapon --show
free -h
For more on swap management, see our swap memory management guide.
Alternatively, build on GitHub Actions (hosted Linux runners currently provide 7-16 GB of RAM, depending on plan) and just deploy the built files to your VPS via rsync. This keeps your VPS lean — it only serves files, never builds them.
Performance Testing
After deploying, verify your site's performance. Test gzip compression:
# Check if gzip is working
curl -s -H "Accept-Encoding: gzip" -I https://your-site.com | grep -i content-encoding
# Should show: content-encoding: gzip
Test caching headers:
# Check cache headers on a CSS file
curl -I https://your-site.com/css/style.css
# Should show: Cache-Control: public and Expires header
# Check cache headers on an image
curl -I https://your-site.com/images/hero.webp
# Should show: Cache-Control: public, immutable
Test security headers:
curl -I https://your-site.com | grep -iE "x-frame|x-content|x-xss|referrer|permissions"
Benchmark your site's response time:
# Simple benchmark — 100 requests, 10 concurrent (ab ships in apache2-utils)
sudo apt install apache2-utils -y
ab -n 100 -c 10 https://your-site.com/
# More detailed timing
curl -o /dev/null -s -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nTTFB: %{time_starttransfer}s\nTotal: %{time_total}s\n" https://your-site.com/
A well-configured Nginx serving static files should achieve TTFB (Time to First Byte) under 50ms from the datacenter's local network, and under 200ms for most global visitors.
For ongoing performance monitoring, see our VPS performance optimization guide.
Deployment Script for Multiple Frameworks
If you're hosting multiple sites built with different frameworks, create a unified deploy script:
sudo nano /opt/deploy-site.sh
#!/bin/bash
set -euo pipefail
SITE_DIR="$1"
SITE_DOMAIN="$2"
WEB_ROOT="/var/www/$SITE_DOMAIN"
if [ ! -d "$SITE_DIR" ]; then
echo "Error: Directory $SITE_DIR does not exist"
exit 1
fi
cd "$SITE_DIR"
# Detect the framework and build
if [ -f "hugo.toml" ] || [ -f "hugo.yaml" ] || [ -f "config.toml" ]; then
echo "Detected: Hugo"
hugo --minify --baseURL "https://$SITE_DOMAIN"
BUILD_DIR="public"
elif [ -f "next.config.js" ] || [ -f "next.config.mjs" ] || [ -f "next.config.ts" ]; then
echo "Detected: Next.js"
npm ci
npm run build
BUILD_DIR="out"
elif [ -f "astro.config.mjs" ] || [ -f "astro.config.ts" ]; then
echo "Detected: Astro"
npm ci
npm run build
BUILD_DIR="dist"
else
echo "Error: Unknown framework. Expected Hugo, Next.js, or Astro."
exit 1
fi
# Deploy (the user running this script needs write access to $WEB_ROOT;
# chown the web root to your user as shown earlier)
mkdir -p "$WEB_ROOT"
rsync -av --delete "$BUILD_DIR/" "$WEB_ROOT/"
echo "Deployed $SITE_DOMAIN from $BUILD_DIR/"
sudo chmod +x /opt/deploy-site.sh
# Usage:
# /opt/deploy-site.sh /home/user/my-hugo-site hugo-site.com
# /opt/deploy-site.sh /home/user/my-nextjs-site nextjs-site.com
# /opt/deploy-site.sh /home/user/my-astro-site astro-site.com
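The marker-file detection above can be sanity-checked without building anything. A small dry-run sketch that creates empty project directories and confirms which framework each maps to:

```shell
#!/bin/bash
# Dry-run of the marker-file detection logic from deploy-site.sh
set -euo pipefail

detect() {
  if [ -f "$1/hugo.toml" ] || [ -f "$1/hugo.yaml" ] || [ -f "$1/config.toml" ]; then
    echo hugo
  elif [ -f "$1/next.config.js" ] || [ -f "$1/next.config.mjs" ] || [ -f "$1/next.config.ts" ]; then
    echo nextjs
  elif [ -f "$1/astro.config.mjs" ] || [ -f "$1/astro.config.ts" ]; then
    echo astro
  else
    echo unknown
  fi
}

TMP=$(mktemp -d)
mkdir -p "$TMP/a" "$TMP/b" "$TMP/c"
touch "$TMP/a/hugo.toml" "$TMP/b/next.config.mjs"   # c gets no marker file
detect "$TMP/a"   # hugo
detect "$TMP/b"   # nextjs
detect "$TMP/c"   # unknown
```

Checking by config-file presence rather than inspecting package.json keeps the script dependency-free, though it does mean a repo with both a config.toml and a next.config.js would be treated as Hugo — the branches match in order.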
Final Thoughts
Self-hosting static sites on a VPS is one of the highest-value, lowest-effort hosting configurations you can set up. Nginx serves static files with extraordinary efficiency, the configuration is standard and portable, and you get unlimited sites, unlimited builds, and predictable costs from day one.
A MassiveGRID Cloud VPS with 1 vCPU and 1 GB RAM comfortably serves dozens of static sites. With the automated deploy pipelines in this guide — whether Git hooks for simplicity or GitHub Actions for CI/CD integration — your deployment workflow is as fast and reliable as any platform service, without the vendor lock-in.
For related infrastructure guides, explore our tutorials on Traefik with Docker for container-based deployments, server security hardening, and monitoring setup to keep your hosting infrastructure running smoothly.