Development teams pay between $20 and $50 per developer per month for cloud-hosted development environments — Codespaces, Gitpod, Coder — and still deal with cold starts, connection dropouts, and storage limits. There's a simpler approach: a self-hosted VPS running your own Git server, CI/CD runners, shared databases, and staging environments. You control the infrastructure, the data stays on your server, and the per-developer cost drops to a fraction of what you're paying now. This guide covers how to set up shared development infrastructure for a team of 2-15 developers on an Ubuntu VPS.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Why Self-Host Development Infrastructure
Cloud development environments solve a real problem: consistent dev environments without "works on my machine" issues. But they come with trade-offs that become painful as your team grows:
Cost at scale. GitHub Codespaces bills per core-hour. A developer using a 4-core machine for 8 hours a day, 22 days a month, costs roughly $40/month. A team of 10 pays $400/month for development environments alone. A VPS with 8 vCPU and 16GB RAM costs a fraction of that and serves the entire team.
Data ownership. Your source code, database dumps, API keys, and customer test data live on third-party infrastructure. For teams handling sensitive data — healthcare, financial services, government contracts — this may violate compliance requirements. A self-hosted VPS keeps everything on infrastructure you control.
No cold starts. Cloud environments spin down when idle and take 30-90 seconds to restart. Self-hosted infrastructure is always running. SSH in, and you're working immediately.
Customization. Need a specific system library, a particular database version, a GPU for ML development, or custom kernel parameters? On your own VPS, install what you need. No marketplace limitations, no support tickets to enable features.
Network performance. A VPS in your team's region provides low-latency access to shared databases, staging environments, and Git repositories. No cross-continent round trips to reach your dev environment.
Choosing a Datacenter by Team Location
Network latency directly affects the development experience. Every SSH keystroke, VS Code Remote operation, and Git push/pull traverses the network. Choose a datacenter close to your team.
A Cloud VPS with 4 vCPU / 8GB RAM hosts Gitea, shared databases, and staging for a small team. MassiveGRID offers four datacenter locations — choose based on your team's geography:
| Datacenter | Best For | Typical Latency |
|---|---|---|
| New York City | Americas-based teams (US, Canada, Latin America) | <20ms US East, <60ms US West, <80ms Western Europe |
| London | UK and Western Europe teams | <10ms UK, <25ms Western EU, <80ms US East |
| Frankfurt | Central/Eastern Europe teams | <5ms Germany, <20ms Central EU, <90ms US East |
| Singapore | Asia-Pacific teams (India, Southeast Asia, Australia) | <30ms SEA, <60ms India, <80ms Australia |
For distributed teams spanning multiple continents, deploy a VPS in each region and use Git's distributed nature to keep repositories in sync. Each sub-team accesses their local instance for daily work.
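One lightweight way to implement that sync, sketched below as a shell function: each region's Gitea instance is registered as a `mirror` remote and a cron job pushes all refs on a schedule. The remote name, script path, and schedule here are illustrative; Gitea's built-in push mirrors achieve the same thing from the web UI.

```shell
# mirror_push REPO_DIR MIRROR_URL
# Push every branch and tag of a local clone to a mirror remote,
# pruning refs that were deleted upstream.
mirror_push() {
  local repo_dir=$1 mirror_url=$2
  # Register the mirror remote once; update the URL if it already exists
  git -C "$repo_dir" remote add mirror "$mirror_url" 2>/dev/null \
    || git -C "$repo_dir" remote set-url mirror "$mirror_url"
  # --mirror pushes all refs (branches and tags) and deletes stale ones
  git -C "$repo_dir" push --mirror mirror
}

# Illustrative cron entry on the primary region's VPS (every 15 minutes):
# */15 * * * * git /usr/local/bin/sync-mirrors.sh >> /var/log/git-mirror.log 2>&1
```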
Team Access Management
Before installing any development tools, set up proper user accounts and access controls. Every team member gets their own Linux user account with SSH key authentication — no shared passwords, no shared accounts.
Creating User Accounts
# Create a developers group (for shared directory permissions)
sudo groupadd developers
# Create a developer user and add them to the group
sudo adduser --disabled-password --gecos "Alice Developer" alice
sudo usermod -aG developers alice
# Set up SSH key for the user
sudo mkdir -p /home/alice/.ssh
sudo nano /home/alice/.ssh/authorized_keys
# Paste Alice's public key
sudo chmod 700 /home/alice/.ssh
sudo chmod 600 /home/alice/.ssh/authorized_keys
sudo chown -R alice:alice /home/alice/.ssh
Sudo Policies
Not every developer needs root access. Create granular sudo rules:
# /etc/sudoers.d/developers
# Allow developers to manage Docker without full root
# (note: unrestricted access to the docker CLI is effectively root-equivalent)
%developers ALL=(ALL) NOPASSWD: /usr/bin/docker
%developers ALL=(ALL) NOPASSWD: /usr/bin/docker-compose
%developers ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart nginx
%developers ALL=(ALL) NOPASSWD: /usr/bin/systemctl status *
# Team lead gets broader access
alice ALL=(ALL) NOPASSWD: /usr/bin/apt update, /usr/bin/apt upgrade
alice ALL=(ALL) NOPASSWD: /usr/bin/systemctl restart *
# Validate sudoers syntax before saving
sudo visudo -c -f /etc/sudoers.d/developers
For comprehensive SSH hardening and access control, see our security hardening guide and Fail2Ban advanced configuration.
Shared Project Directories
# Create shared workspace
sudo mkdir -p /opt/projects
sudo chown root:developers /opt/projects
sudo chmod 2775 /opt/projects
# The setgid bit (2) ensures new files inherit the group
# Per-project directories
sudo mkdir -p /opt/projects/{webapp,api,mobile-backend}
sudo chown -R root:developers /opt/projects/
sudo chmod -R 2775 /opt/projects/
Self-Hosted Git with Gitea
Gitea is a lightweight, self-hosted Git service with a web interface, pull requests, issue tracking, and CI integration. It runs comfortably in as little as 256MB of RAM, making it a practical self-hosted alternative to GitHub or a full GitLab instance. Our Git server guide covers the fundamentals.
# Install Gitea via Docker
sudo mkdir -p /opt/gitea/data
docker run -d --name gitea \
-p 3000:3000 \
-p 2222:22 \
-v /opt/gitea/data:/data \
-e GITEA__database__DB_TYPE=sqlite3 \
-e GITEA__server__ROOT_URL=https://git.yourcompany.com/ \
-e GITEA__server__SSH_DOMAIN=git.yourcompany.com \
-e GITEA__server__SSH_PORT=2222 \
-e GITEA__service__DISABLE_REGISTRATION=true \
--restart unless-stopped \
gitea/gitea:latest
Or use Docker Compose for a more complete setup with PostgreSQL:
# /opt/gitea/docker-compose.yml
version: "3.8"
services:
gitea:
image: gitea/gitea:latest
environment:
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD=${GITEA_DB_PASSWORD}
- GITEA__server__ROOT_URL=https://git.yourcompany.com/
- GITEA__server__SSH_DOMAIN=git.yourcompany.com
- GITEA__server__SSH_PORT=2222
- GITEA__service__DISABLE_REGISTRATION=true
- GITEA__mailer__ENABLED=true
- GITEA__mailer__SMTP_ADDR=smtp.sendgrid.net
- GITEA__mailer__SMTP_PORT=587
volumes:
- gitea-data:/data
ports:
- "3000:3000"
- "2222:22"
depends_on:
- db
restart: unless-stopped
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: gitea
POSTGRES_USER: gitea
POSTGRES_PASSWORD: ${GITEA_DB_PASSWORD}
volumes:
- gitea-db:/var/lib/postgresql/data
restart: unless-stopped
volumes:
gitea-data:
gitea-db:
# Set up Nginx reverse proxy for Gitea
# /etc/nginx/sites-available/gitea
server {
listen 443 ssl http2;
server_name git.yourcompany.com;
ssl_certificate /etc/letsencrypt/live/git.yourcompany.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/git.yourcompany.com/privkey.pem;
client_max_body_size 100M; # Allow large repo pushes
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
For SSL certificate setup, see our Let's Encrypt guide. For reverse proxy configuration details, see our Nginx reverse proxy guide.
CI/CD with Self-Hosted GitHub Actions Runners
If your team uses GitHub for code hosting but wants to run CI/CD on your own infrastructure, self-hosted GitHub Actions runners give you the best of both worlds: GitHub's workflow syntax with your hardware resources. Our self-hosted runner guide covers the complete setup.
# Quick setup for a self-hosted runner
mkdir -p /opt/actions-runner && cd /opt/actions-runner
# Download the runner (check github.com/actions/runner/releases for the current version)
RUNNER_VERSION=2.311.0
curl -o actions-runner-linux-x64.tar.gz -L \
https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
tar xzf actions-runner-linux-x64.tar.gz
# Configure (get token from GitHub repo Settings → Actions → Runners)
./config.sh --url https://github.com/your-org/your-repo \
--token YOUR_RUNNER_TOKEN \
--name "vps-runner" \
--labels "self-hosted,linux,x64,vps" \
--work "_work"
# Install and start as a service
sudo ./svc.sh install
sudo ./svc.sh start
With a self-hosted runner, your CI/CD pipelines have direct access to shared databases, staging environments, and internal services — no need for complex networking or secret management to reach test infrastructure.
# .github/workflows/test.yml — using self-hosted runner
name: Test Suite
on: [push, pull_request]
jobs:
test:
runs-on: [self-hosted, linux, vps]
steps:
- uses: actions/checkout@v4
- name: Run tests against shared dev database
env:
DATABASE_URL: postgresql://testuser:testpass@localhost:5432/myapp_test
REDIS_URL: redis://localhost:6379/1
run: |
npm install
npm test
- name: Deploy to staging
if: github.ref == 'refs/heads/main'
run: |
cd /opt/projects/webapp-staging
git pull origin main
docker compose up -d --build
Shared Development Databases
A shared PostgreSQL instance on the VPS eliminates the "but it works with my local database" problem. Every developer connects to the same database version with the same extensions and configuration. See our PostgreSQL installation guide for the full setup.
# Create per-project databases and users
sudo -u postgres psql
-- Project: webapp
CREATE USER webapp_dev WITH PASSWORD 'dev_webapp_pass';
CREATE DATABASE webapp_dev OWNER webapp_dev;
-- Project: api
CREATE USER api_dev WITH PASSWORD 'dev_api_pass';
CREATE DATABASE api_dev OWNER api_dev;
-- Test databases (CI/CD uses these)
-- CREATEDB lets test_runner drop and recreate databases for test isolation
CREATE USER test_runner WITH PASSWORD 'test_pass' CREATEDB;
CREATE DATABASE webapp_test OWNER test_runner;
CREATE DATABASE api_test OWNER test_runner;
# PostgreSQL pg_hba.conf — allow connections from team VPN
# /etc/postgresql/16/main/pg_hba.conf
# Local connections for system users
local all postgres peer
# Developer connections via VPN
host all all 10.0.0.0/24 scram-sha-256
# CI/CD runner (local connection)
host all test_runner 127.0.0.1/32 scram-sha-256
#!/bin/bash
# /opt/projects/scripts/seed-dev-db.sh (database seeding script for developers)
DB_NAME=${1:-webapp_dev}
SEED_FILE="/opt/projects/webapp/db/seeds/development.sql"
echo "Resetting database: $DB_NAME"
dropdb --if-exists "$DB_NAME"
createdb "$DB_NAME"
echo "Running migrations..."
cd /opt/projects/webapp
DATABASE_URL="postgresql://webapp_dev:dev_webapp_pass@localhost:5432/$DB_NAME" \
npx prisma migrate deploy
echo "Seeding data..."
psql -d "$DB_NAME" -f "$SEED_FILE"
echo "Done. Database $DB_NAME ready for development."
For Redis as a shared cache and session store, see our Redis installation guide.
Staging Environments on the Same VPS
Docker makes it practical to run multiple staging environments on a single VPS. Each project gets its own Docker Compose stack with isolated networking. If you need to brush up on Docker networking, see our Docker networking guide.
# /opt/projects/webapp-staging/docker-compose.yml
version: "3.8"
services:
app:
build: .
environment:
NODE_ENV: staging
DATABASE_URL: postgresql://webapp:staging_pass@db:5432/webapp_staging
REDIS_URL: redis://redis:6379/0
depends_on:
- db
- redis
networks:
- webapp-staging
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: webapp_staging
POSTGRES_USER: webapp
POSTGRES_PASSWORD: staging_pass
volumes:
- webapp-staging-db:/var/lib/postgresql/data
networks:
- webapp-staging
redis:
image: redis:7-alpine
networks:
- webapp-staging
nginx:
image: nginx:alpine
volumes:
- ./nginx-staging.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- "8081:80"
depends_on:
- app
networks:
- webapp-staging
networks:
webapp-staging:
driver: bridge
volumes:
webapp-staging-db:
# Run multiple staging environments on different ports
# webapp-staging: port 8081
# api-staging: port 8082
# mobile-backend-staging: port 8083
# Nginx reverse proxy routes subdomains to staging ports
# /etc/nginx/sites-available/staging
server {
listen 443 ssl http2;
server_name staging-webapp.yourcompany.com;
ssl_certificate /etc/letsencrypt/live/yourcompany.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourcompany.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8081;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
server {
listen 443 ssl http2;
server_name staging-api.yourcompany.com;
ssl_certificate /etc/letsencrypt/live/yourcompany.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourcompany.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8082;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Deploy Script for Staging
#!/bin/bash
# /opt/projects/scripts/deploy-staging.sh
# Usage: deploy-staging.sh webapp main
set -euo pipefail
PROJECT=${1:?usage: deploy-staging.sh project [branch]}
BRANCH=${2:-main}
PROJECT_DIR="/opt/projects/${PROJECT}-staging"
if [ ! -d "$PROJECT_DIR" ]; then
echo "Error: Project directory $PROJECT_DIR does not exist"
exit 1
fi
cd "$PROJECT_DIR"
echo "Deploying $PROJECT (branch: $BRANCH) to staging..."
git fetch origin
git checkout "$BRANCH"
git pull origin "$BRANCH"
echo "Rebuilding containers..."
docker compose build --no-cache
docker compose up -d
echo "Running migrations..."
docker compose exec -T app npm run migrate
echo "Staging deployed: https://staging-${PROJECT}.yourcompany.com"
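The script reports success as soon as the containers start, which misses builds that boot and then crash. A small post-deploy health check closes that gap. This is a sketch; the `/healthz` path is an assumption, so substitute whatever health endpoint your app actually exposes.

```shell
# wait_for_http URL [TIMEOUT_SECONDS]
# Poll URL until it answers HTTP 200 or the timeout expires.
wait_for_http() {
  local url=$1 timeout=${2:-60} deadline code
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
    if [ "$code" = "200" ]; then
      echo "OK: $url answered 200"
      return 0
    fi
    sleep 2
  done
  echo "FAIL: $url did not answer 200 within ${timeout}s"
  return 1
}

# Candidate last line of deploy-staging.sh (health path is an assumption):
# wait_for_http "https://staging-${PROJECT}.yourcompany.com/healthz" 90
```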
VS Code Remote Development
VS Code Remote - SSH lets developers write code locally while executing it on the VPS. Files, terminals, extensions, and debuggers all run on the server. This gives every developer access to the same environment regardless of their local machine. Our VS Code remote development guide covers the complete setup.
# Developer's local SSH config (~/.ssh/config)
Host dev-server
HostName dev.yourcompany.com
User alice
Port 22
IdentityFile ~/.ssh/id_ed25519
ForwardAgent yes
# Persistent connections for faster reconnects
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h-%p
ControlPersist 600
# Compression for better remote editing performance
Compression yes
# Server-side configuration for VS Code Remote
# Each developer gets their own workspace
# Create workspace directories
sudo mkdir -p /home/alice/workspaces
sudo mkdir -p /home/bob/workspaces
# Symlink shared projects into each workspace
ln -s /opt/projects/webapp /home/alice/workspaces/webapp
ln -s /opt/projects/api /home/alice/workspaces/api
Each developer connects to the VPS through VS Code, opens their workspace, and has full IDE functionality — IntelliSense, debugging, integrated terminal — running on server hardware with direct access to databases, Redis, and staging environments.
Docker-Based Development Environments
Devcontainers provide reproducible, project-specific development environments. Each project defines its environment in a .devcontainer configuration, ensuring every developer gets identical tooling regardless of when they join the team. If Docker isn't installed yet, see our Docker installation guide.
// .devcontainer/devcontainer.json (in your project repo)
{
"name": "Webapp Development",
"dockerComposeFile": "docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspace",
"customizations": {
"vscode": {
"extensions": [
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode",
"ms-vscode.vscode-typescript-next",
"bradlc.vscode-tailwindcss"
],
"settings": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode"
}
}
},
"forwardPorts": [3000, 5432, 6379],
"postCreateCommand": "npm install && npm run migrate",
"remoteUser": "node"
}
# .devcontainer/docker-compose.yml
version: "3.8"
services:
app:
build:
context: ..
dockerfile: .devcontainer/Dockerfile
volumes:
- ..:/workspace:cached
command: sleep infinity
networks:
- dev
db:
image: postgres:16-alpine
environment:
POSTGRES_DB: myapp_dev
POSTGRES_USER: dev
POSTGRES_PASSWORD: devpass
volumes:
- devcontainer-db:/var/lib/postgresql/data
networks:
- dev
redis:
image: redis:7-alpine
networks:
- dev
networks:
dev:
volumes:
devcontainer-db:
# .devcontainer/Dockerfile
FROM node:20-bookworm
# Install development tools
RUN apt-get update && apt-get install -y \
git \
curl \
postgresql-client \
redis-tools \
&& rm -rf /var/lib/apt/lists/*
# Install global dev tools
RUN npm install -g typescript ts-node nodemon prisma
# Create non-root user workspace
RUN mkdir -p /workspace && chown node:node /workspace
WORKDIR /workspace
USER node
Team VPN for Secure Access
Development infrastructure should not be exposed to the public internet. A WireGuard VPN provides encrypted access to the VPS for all team members. Our WireGuard VPN guide covers the full setup.
# Server WireGuard configuration
# /etc/wireguard/wg0.conf
[Interface]
PrivateKey = SERVER_PRIVATE_KEY
Address = 10.0.0.1/24
ListenPort = 51820
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
# Alice
[Peer]
PublicKey = ALICE_PUBLIC_KEY
AllowedIPs = 10.0.0.2/32
# Bob
[Peer]
PublicKey = BOB_PUBLIC_KEY
AllowedIPs = 10.0.0.3/32
# Charlie
[Peer]
PublicKey = CHARLIE_PUBLIC_KEY
AllowedIPs = 10.0.0.4/32
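Each developer then needs a matching client config. A template for Alice (keys are placeholders; the Endpoint assumes dev.yourcompany.com resolves to the VPS):

```ini
# Alice's machine: /etc/wireguard/wg0.conf (or import into the WireGuard app)
[Interface]
PrivateKey = ALICE_PRIVATE_KEY
Address = 10.0.0.2/32

[Peer]
PublicKey = SERVER_PUBLIC_KEY
Endpoint = dev.yourcompany.com:51820
# Split tunnel: route only the dev network through the VPN
AllowedIPs = 10.0.0.0/24
# Keep NAT mappings alive for clients behind home routers
PersistentKeepalive = 25
```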
# Restrict services to VPN only (using UFW)
# Allow SSH from anywhere (for initial connection)
sudo ufw allow 22/tcp
# Allow WireGuard
sudo ufw allow 51820/udp
# Allow Gitea only from VPN
sudo ufw allow from 10.0.0.0/24 to any port 3000
# Allow staging only from VPN
sudo ufw allow from 10.0.0.0/24 to any port 8081
sudo ufw allow from 10.0.0.0/24 to any port 8082
sudo ufw allow from 10.0.0.0/24 to any port 8083
# Allow PostgreSQL only from VPN
sudo ufw allow from 10.0.0.0/24 to any port 5432
For advanced firewall configuration, see our UFW advanced rules guide.
Onboarding a New Team Member
#!/bin/bash
# /opt/projects/scripts/onboard-developer.sh
# Usage: onboard-developer.sh username "Full Name" ssh_public_key
USERNAME=${1:?usage: onboard-developer.sh username full_name ssh_public_key}
FULL_NAME=${2:?missing full name}
SSH_KEY=${3:?missing SSH public key}
echo "=== Onboarding: $FULL_NAME ($USERNAME) ==="
# 1. Create system user
sudo adduser --disabled-password --gecos "$FULL_NAME" "$USERNAME"
sudo usermod -aG developers,docker "$USERNAME"
# 2. Set up SSH
sudo mkdir -p /home/$USERNAME/.ssh
echo "$SSH_KEY" | sudo tee /home/$USERNAME/.ssh/authorized_keys
sudo chmod 700 /home/$USERNAME/.ssh
sudo chmod 600 /home/$USERNAME/.ssh/authorized_keys
sudo chown -R $USERNAME:$USERNAME /home/$USERNAME/.ssh
# 3. Create workspace with project symlinks
sudo mkdir -p /home/$USERNAME/workspaces
for project in /opt/projects/*/; do
project_name=$(basename "$project")
sudo ln -sfn "$project" /home/$USERNAME/workspaces/$project_name
done
# -h changes the symlinks themselves, not the shared directories they point to
sudo chown -Rh $USERNAME:$USERNAME /home/$USERNAME/workspaces
# 4. Generate WireGuard keys (umask 077 keeps the private key root-readable only)
(umask 077; wg genkey | tee /tmp/${USERNAME}_wg_private | wg pubkey > /tmp/${USERNAME}_wg_public)
echo ""
echo "WireGuard public key: $(cat /tmp/${USERNAME}_wg_public)"
echo "Add this peer to /etc/wireguard/wg0.conf"
# 5. Create Gitea account
echo "Create Gitea account manually at https://git.yourcompany.com/-/admin/users"
# 6. Create personal dev database
sudo -u postgres createuser "$USERNAME" -P
sudo -u postgres createdb "${USERNAME}_sandbox" -O "$USERNAME"
echo ""
echo "=== Onboarding complete ==="
echo "Share with $FULL_NAME:"
echo " - SSH: ssh $USERNAME@dev.yourcompany.com"
echo " - VPN config: Generate from WireGuard keys"
echo " - Gitea: https://git.yourcompany.com"
echo " - Database: postgresql://$USERNAME:PASSWORD@10.0.0.1:5432/${USERNAME}_sandbox"
Growth Path: Matching Infrastructure to Team Size
Small Team (2-5 Developers) — Cloud VPS
A Cloud VPS with 4 vCPU / 8GB RAM handles Gitea, shared databases, CI runners, and 2-3 staging environments comfortably. Resource usage at this scale:
# Typical resource profile for 2-5 developer team
# Gitea: 256MB RAM, minimal CPU
# PostgreSQL: 1-2GB RAM (shared_buffers)
# Redis: 128MB RAM
# 2 staging envs: 512MB each
# CI runner: 1-2GB during builds (idle otherwise)
# VS Code servers: ~300MB per connected developer
# OS overhead: 512MB
# ---
# Total peak: ~5-6GB with 3 developers connected
# Comfortable on: 8GB VPS
Medium Team (5-15 Developers) — Cloud VDS
When 5+ developers build and test simultaneously, dedicated resources ensure consistent build times for everyone. A Cloud VDS eliminates the resource contention that causes "the server is slow" complaints during team-wide development sprints.
At this scale, you need:
- 8-16 vCPU for parallel CI builds and multiple VS Code servers
- 32GB+ RAM for concurrent database connections, build caches, and staging environments
- Dedicated I/O for consistent build and test performance
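Before paying for an upgrade, confirm the server is actually resource-bound. A quick snapshot built from plain coreutils, run during a busy sprint, shows whether CPU, RAM, or disk is the constraint:

```shell
# resource_snapshot: one-shot summary of CPU load, memory, and root-disk usage
resource_snapshot() {
  echo "cpus=$(nproc) load1m=$(cut -d' ' -f1 /proc/loadavg)"
  free -m | awk '/^Mem:/ {printf "ram_used=%dMB ram_total=%dMB\n", $3, $2}'
  df -h / | awk 'NR==2 {printf "disk_used=%s disk_total=%s\n", $3, $2}'
}
resource_snapshot
# Rough rule of thumb: if load1m regularly exceeds the cpu count, or
# ram_used approaches ram_total during builds, it's time to scale up.
```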
No DevOps Engineer — Managed Dedicated
If your team doesn't include someone comfortable managing servers, Managed Dedicated Servers remove that requirement. The hosting team handles OS updates, security patches, backup configuration, and monitoring, leaving your developers free to focus on code.
Cost Comparison
Real numbers for a 5-developer team:
| Item | GitHub Codespaces (4-core) | Gitpod (Standard) | Self-Hosted VPS (8 vCPU/16GB) |
|---|---|---|---|
| Per developer/month | $40 | $25 | — |
| 5 developers | $200/mo | $125/mo | — |
| GitHub Teams (repos) | $20/mo (5 users) | $20/mo (5 users) | $0 (Gitea) |
| CI/CD minutes | $30-50/mo overage | Included | $0 (self-hosted runner) |
| Staging environments | Separate hosting needed | Separate hosting needed | $0 (same server) |
| Shared databases | Separate hosting needed | Separate hosting needed | $0 (same server) |
| VPS cost | — | — | $20-40/mo |
| Total monthly cost | $250-270/mo | $145/mo | $20-40/mo |
| Annual cost | $3,000-3,240 | $1,740 | $240-480 |
The self-hosted approach costs 6-10x less. The trade-off: someone on the team needs to be comfortable with basic Linux administration — SSH, apt, systemd, Docker. If your team has that expertise (and most development teams do), the savings are significant.
Note on the comparison: Cloud development environment pricing changes frequently. These figures are based on published pricing as of early 2026. The relative cost difference — self-hosted being dramatically cheaper — remains consistent regardless of exact pricing.
Maintenance and Monitoring
Shared infrastructure requires basic maintenance to keep the team productive. Set up monitoring to catch issues before developers report them.
# Install Uptime Kuma for service monitoring (see our detailed guide)
docker run -d --name uptime-kuma \
-p 3001:3001 \
-v uptime-kuma-data:/app/data \
--restart unless-stopped \
louislam/uptime-kuma:latest
# Monitor these endpoints:
# - Gitea web interface (HTTPS)
# - PostgreSQL port (TCP)
# - Redis port (TCP)
# - Each staging environment (HTTPS)
# - VPN endpoint (ping an address inside the tunnel; WireGuard itself won't answer port probes)
For comprehensive monitoring, see our Uptime Kuma guide and Prometheus and Grafana guide.
# Automated maintenance cron jobs
# /etc/cron.d/dev-infrastructure
# Clean Docker build cache weekly (Sunday 3 AM)
0 3 * * 0 root docker builder prune -f --keep-storage=5G >> /var/log/docker-cleanup.log 2>&1
# Backup Gitea data daily (2 AM); gitea dump must run as the container's git user
0 2 * * * root docker exec -u git -w /data gitea gitea dump -c /data/gitea/conf/app.ini >> /var/log/gitea-backup.log 2>&1
# Backup all databases daily (2:30 AM); pg_dumpall must run as the postgres user
30 2 * * * root sudo -u postgres pg_dumpall | gzip > /opt/backups/pg-all-$(date +\%Y\%m\%d).sql.gz 2>> /var/log/db-backup.log
# Remove old backups (keep 14 days)
0 4 * * * root find /opt/backups -name "*.gz" -mtime +14 -delete
For automated backup strategies, see our automated backups guide.
Getting Started
The setup order matters. Follow this sequence:
- Deploy a VPS in your team's region — see our setup guide
- Secure the server — see our security hardening guide
- Set up WireGuard VPN — see our WireGuard guide
- Install Docker — see our Docker guide
- Deploy Gitea for Git hosting
- Install PostgreSQL and Redis for shared databases
- Set up CI/CD runner — see our Actions runner guide
- Configure staging environments with Docker Compose
- Create user accounts and SSH keys for each developer
- Set up monitoring — see our Uptime Kuma guide
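After step 10, a quick smoke test confirms each service actually answers. The helper below uses bash's built-in /dev/tcp, so it needs no extra packages; the hosts and ports are the ones assumed throughout this guide.

```shell
# check_tcp HOST PORT NAME: report whether a TCP service accepts connections
check_tcp() {
  local host=$1 port=$2 name=$3
  # bash-specific: redirecting to /dev/tcp/HOST/PORT opens a TCP connection
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "OK   $name ($host:$port)"
  else
    echo "DOWN $name ($host:$port)"
    return 1
  fi
}

# Ports assume the layout in this guide; adjust to match your setup:
# check_tcp 127.0.0.1 3000 "Gitea"
# check_tcp 127.0.0.1 5432 "PostgreSQL"
# check_tcp 127.0.0.1 6379 "Redis"
# check_tcp 127.0.0.1 8081 "webapp staging"
```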
Total setup time for an experienced admin: 2-3 hours. The result: a complete, self-hosted development platform that your team controls, at a fraction of the cost of cloud-hosted alternatives.