Docker fundamentally changes how you deploy and manage applications on a VPS. Instead of installing software directly on the host OS — where conflicting dependencies, version mismatches, and configuration drift create headaches over time — you package each application in an isolated container with exactly the dependencies it needs. The container runs the same way on your laptop, your staging server, and your production VPS.

This guide covers installing Docker Engine and Docker Compose on an Ubuntu 24.04 VPS, configuring Docker for production use, and deploying your first multi-container application. By the end, you'll have a properly configured Docker host ready for real workloads.

MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10


Why Docker on a VPS?

If you've ever spent an afternoon debugging why an application works on one server but not another, Docker solves that problem. Containers encapsulate the application code, runtime, libraries, and configuration into a single portable unit that behaves the same on every host.

On a VPS, Docker is particularly valuable because you can run multiple isolated services on a single server without them interfering with each other. A $10/month VPS can simultaneously run a web application, a database, a Redis cache, a reverse proxy, and a monitoring stack — all in separate containers.

Prerequisites

Before starting, you need an Ubuntu 24.04 VPS, a non-root user with sudo privileges, and an SSH connection to the server.

Step 1: Remove Old Docker Packages

Ubuntu's default repositories include unofficial Docker packages (docker.io, docker-compose) that are outdated. Remove them before installing the official Docker packages:

sudo apt-get remove -y docker docker-engine docker.io containerd runc docker-compose docker-doc podman-docker

Don't worry if apt-get reports that some packages aren't installed — that's expected on a fresh server. This command ensures a clean slate regardless of what was previously installed.

Existing Docker images, containers, volumes, and networks stored in /var/lib/docker/ are preserved when uninstalling packages. If you want a completely fresh start, you can remove that directory too (warning: this deletes all Docker data):

# Only run this if you want to delete ALL existing Docker data
# sudo rm -rf /var/lib/docker
# sudo rm -rf /var/lib/containerd

Step 2: Add Docker's Official APT Repository

Docker maintains its own package repository with the latest stable releases. Adding it involves installing prerequisites, importing Docker's GPG key (to verify package authenticity), and adding the repository URL.

Install the prerequisites for adding HTTPS repositories:

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg

Create the keyring directory and download Docker's official GPG key:

sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

Add the Docker repository to your APT sources:

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Update the package index to include Docker's packages:

sudo apt-get update

You should see download.docker.com in the update output, confirming the repository was added successfully.

Step 3: Install Docker Engine, CLI, and Compose

Install the complete Docker package set:

sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Here's what each package provides:

Package                 Purpose
docker-ce               Docker Engine — the daemon that runs containers
docker-ce-cli           Docker CLI — the docker command-line tool
containerd.io           Container runtime — manages the container lifecycle
docker-buildx-plugin    Extended build capabilities (multi-platform builds)
docker-compose-plugin   Docker Compose v2 — multi-container orchestration

Verify Docker is installed and running:

sudo docker --version
sudo docker compose version

Expected output (versions may vary):

Docker version 27.x.x, build xxxxxxx
Docker Compose version v2.x.x

Run the hello-world container to verify everything works end-to-end:

sudo docker run hello-world

This downloads a tiny test image from Docker Hub, creates a container, runs it (which prints a confirmation message), and exits. If you see "Hello from Docker!" in the output, Docker is fully functional.

Step 4: Post-Installation Configuration

Run Docker Without sudo

By default, Docker commands require sudo because the Docker daemon runs as root. To run Docker commands as your regular user, add yourself to the docker group:

sudo usermod -aG docker $USER

Important: You must log out and log back in for the group change to take effect, or run newgrp docker to activate the new group in your current session.
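To confirm the change took effect in your current session, check your group membership (a quick sanity check):

```shell
# List the current user's groups; "docker" should appear after re-login
id -nG "$USER" | tr ' ' '\n' | grep -x docker
```

If the command prints nothing, the new group hasn't been picked up yet — log out and back in.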

Verify it works without sudo:

docker run hello-world

Security note: Adding a user to the docker group effectively grants them root-equivalent privileges — anyone who can run Docker commands can mount the host filesystem and access any file. Only add trusted users to this group.

Enable Docker to Start on Boot

sudo systemctl enable docker.service
sudo systemctl enable containerd.service

These should be enabled by default after installation, but it's worth confirming. After a server reboot, Docker will start automatically and all containers with restart: unless-stopped or restart: always policies will be restarted.
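A quick way to confirm both units are enabled and the daemon is running:

```shell
# Each is-enabled check should print "enabled"
systemctl is-enabled docker.service containerd.service
# Should print "active"
systemctl is-active docker.service
```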

Step 5: Docker Compose Basics

Docker Compose lets you define and run multi-container applications using a YAML configuration file. Instead of running multiple docker run commands with long flag lists, you define everything declaratively in docker-compose.yml.

Create a project directory and a simple Compose file:

mkdir -p ~/myapp
nano ~/myapp/docker-compose.yml

Here's a minimal example that runs Nginx:

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    restart: unless-stopped
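For comparison, here is roughly the same service expressed as a single docker run command — a sketch; the container name myapp-web is arbitrary:

```shell
# Imperative equivalent of the Compose service above
docker run -d \
  --name myapp-web \
  -p 8080:80 \
  -v "$(pwd)/html:/usr/share/nginx/html:ro" \
  --restart unless-stopped \
  nginx:alpine
```

Compose's value becomes obvious as soon as a second service or a network is involved — the YAML stays readable where long flag lists do not.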

Create a test HTML file:

mkdir -p ~/myapp/html
echo "<h1>Docker is working!</h1>" > ~/myapp/html/index.html

Start the container:

cd ~/myapp
docker compose up -d

The -d flag runs containers in detached mode (background). Verify it's running:

docker compose ps

Test it:

curl http://localhost:8080

You should see your HTML content. Essential Docker Compose commands:

# View logs
docker compose logs -f

# Stop all services
docker compose down

# Rebuild and restart (after changing Dockerfile or compose file)
docker compose up -d --build

# View resource usage
docker compose stats

# Execute a command inside a running container
docker compose exec web sh

Step 6: Production Docker Configuration

The default Docker daemon configuration is fine for development but needs tuning for production. Create or edit the Docker daemon configuration file:

sudo nano /etc/docker/daemon.json

Add the following production configuration:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3",
    "compress": "true"
  },
  "default-address-pools": [
    {
      "base": "172.17.0.0/16",
      "size": 24
    }
  ],
  "storage-driver": "overlay2",
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true,
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 32768
    }
  }
}

Let's break down each setting:

Setting                     Purpose
log-driver: json-file       Default logging driver — stores logs as JSON files on disk
max-size: 10m               Rotate log files when they reach 10 MB
max-file: 3                 Keep only 3 rotated log files per container (30 MB max per container)
compress: true              Compress rotated log files to save disk space
storage-driver: overlay2    Use the recommended storage driver for modern Linux kernels
live-restore: true          Containers keep running during Docker daemon restarts/upgrades
userland-proxy: false       Use iptables for port mapping instead of a userland proxy (better performance)
no-new-privileges: true     Prevent container processes from gaining new privileges via setuid/setgid
default-ulimits             Set default file descriptor limits for all containers

The log rotation settings are critical. Without them, container logs grow indefinitely. A busy web application can generate gigabytes of logs in days, eventually filling your disk. With max-size: 10m and max-file: 3, each container's logs are capped at 30 MB.
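To see how much space a particular container's logs occupy, look up its log path with docker inspect (replace web with a container name or ID from docker ps):

```shell
# Locate the container's JSON log file and report its size on disk
LOG_PATH=$(docker inspect --format '{{.LogPath}}' web)
sudo du -h "$LOG_PATH"
```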

Restart Docker to apply the configuration:

sudo systemctl restart docker

Verify the daemon configuration loaded correctly:

docker info | grep -A 5 "Logging Driver"
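You can also query individual settings directly with Go-template formatting:

```shell
# Spot-check specific daemon settings after the restart
docker info --format 'logging: {{.LoggingDriver}}'
docker info --format 'live-restore: {{.LiveRestoreEnabled}}'
```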

Step 7: Docker Compose Resource Limits

In production, always set resource limits on containers. Without limits, a single runaway container can consume all available CPU and RAM, affecting everything else on the server.

Add resource constraints to your Compose file:

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 128M
    healthcheck:
      # nginx:alpine ships BusyBox wget but not curl, so use wget for the probe
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

Resource limit options explained: limits are hard caps — this container can never use more than 1 CPU core or 512 MB of RAM (a container that exceeds its memory limit is killed). reservations are soft guarantees used for scheduling — the container is assured at least 0.25 cores and 128 MB. The healthcheck runs the test command every 30 seconds, allows a 10-second grace period at startup, and marks the container unhealthy after 3 consecutive failures.
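To watch the limits in action, compare actual usage against the configured caps:

```shell
# One-shot snapshot of per-container CPU and memory usage,
# including each container's memory limit
docker stats --no-stream
```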

Docker Storage on Ceph: Built-In Data Protection

Docker stores all its data — images, containers, volumes, build cache — in /var/lib/docker/. On MassiveGRID, this directory sits on Ceph 3x replicated NVMe storage. Every Docker volume, every container filesystem layer, every database file inside a container is automatically replicated across three independent physical drives.

This has practical implications:

That said, you should still maintain application-level backups. Ceph protects against hardware failures; it doesn't protect against docker volume rm or accidental data deletion at the application level.
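A simple application-level backup pattern is to archive a named volume from a short-lived helper container. Note that Compose prefixes volume names with the project directory name, so the postgres-data volume from a project in ~/webapp typically becomes webapp_postgres-data (check yours with docker volume ls):

```shell
# Archive the PostgreSQL volume into a tarball in the current directory
docker run --rm \
  -v webapp_postgres-data:/data:ro \
  -v "$(pwd):/backup" \
  alpine tar czf /backup/postgres-backup.tar.gz -C /data .
```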

Need Dedicated CPU for Container Builds?

Docker image builds — especially multi-stage builds with compilation steps — are CPU-intensive. On a shared VPS, build times can vary based on host load. If you're running CI/CD pipelines, building images frequently, or compiling code in containers, a Dedicated VPS (VDS) provides exclusively allocated CPU cores that deliver consistent build performance regardless of other tenants' activity.

Step 8: Deploying a Multi-Container Application

Here's a realistic production example: a web application with Nginx as a reverse proxy, PostgreSQL as the database, and Redis for caching.

Create the project structure:

mkdir -p ~/webapp
nano ~/webapp/docker-compose.yml

Paste the following configuration:

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./nginx/ssl:/etc/nginx/ssl:ro
      - app-static:/var/www/static:ro
    depends_on:
      app:
        condition: service_healthy
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M

  app:
    build: ./app
    expose:
      - "8000"
    environment:
      - DATABASE_URL=postgresql://appuser:securepass123@postgres:5432/appdb
      - REDIS_URL=redis://redis:6379/0
      - SECRET_KEY=${SECRET_KEY}
    volumes:
      - app-static:/app/static
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: appdb
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: securepass123
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 1G

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 256M

volumes:
  postgres-data:
  redis-data:
  app-static:

Key patterns in this configuration: depends_on with condition: service_healthy makes Nginx wait until the app passes its health check, and the app wait for PostgreSQL and Redis; expose publishes port 8000 only to other containers on the Docker network, not to the host; named volumes (postgres-data, redis-data, app-static) persist data across container restarts and rebuilds; and SECRET_KEY is read from a .env file rather than hardcoded in the Compose file.

Create a .env file for sensitive configuration:

nano ~/webapp/.env

Add a line defining your secret key:

SECRET_KEY=your-randomly-generated-secret-key-here

Never commit .env files to version control. Add .env to your .gitignore.
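One way to generate a strong key and keep the file out of git in one go (paths from the example above):

```shell
# Write a random 256-bit key into .env and ignore the file in git
echo "SECRET_KEY=$(openssl rand -hex 32)" >> ~/webapp/.env
echo ".env" >> ~/webapp/.gitignore
```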

Docker Networking Basics

Docker creates isolated networks for container communication. Understanding Docker networking is essential for multi-container deployments.

When you use Docker Compose, it automatically creates a bridge network for each project. Containers within the same Compose project can reach each other using their service names as hostnames. In the example above, the app container connects to PostgreSQL using postgres as the hostname — Docker's embedded DNS resolves this to the container's internal IP.
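You can watch this resolution happen from inside a running container (service names from the example above):

```shell
# Ask the embedded DNS for the postgres service's internal address
docker compose exec app getent hosts postgres
```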

View existing Docker networks:

# List all networks
docker network ls

# Inspect a specific network to see connected containers
docker network inspect webapp_default

For more complex setups where multiple Compose projects need to communicate, create a shared external network:

# Create a shared network
docker network create shared-net

Then reference it in your Compose files:

services:
  app:
    image: myapp
    networks:
      - shared-net

networks:
  shared-net:
    external: true

Key networking rules to remember: containers on the same network reach each other by service name; ports: publishes a port to the host, while expose: keeps it reachable only from other containers on the network; and containers in different Compose projects cannot communicate unless they are attached to a shared external network.

Docker Housekeeping: Managing Disk Space

Docker accumulates unused images, stopped containers, and orphaned volumes over time. On a VPS with limited storage, regular cleanup is important:

# Remove stopped containers, unused networks, dangling images, and build cache
docker system prune -f

# Also remove unused images (not just dangling ones)
docker system prune -a -f

# Remove unused volumes (careful — this deletes data!)
docker volume prune -f

# Check disk usage by Docker
docker system df

For automated cleanup, add a weekly cron job:

sudo crontab -e

Add this line to clean up weekly at 3 AM on Sundays:

0 3 * * 0 docker system prune -f --filter "until=168h" >> /var/log/docker-cleanup.log 2>&1

The --filter "until=168h" flag only removes objects older than 7 days, so recent images and containers are preserved.

Next Steps

With Docker installed and configured, you're ready to deploy real applications. Popular self-hosted platforms — from Git services and wikis to monitoring stacks — run beautifully on a Docker-powered VPS, and each builds on the Docker foundation you've set up here.

Want Managed Docker Hosting?

Docker simplifies application deployment, but managing the underlying server — OS updates, security patches, Docker daemon upgrades, storage monitoring, backup verification — is still your responsibility on a self-managed VPS. If you'd rather focus entirely on your containers and let someone else handle the host, MassiveGRID's Managed Dedicated Cloud Servers give you dedicated resources with full server management. Your Docker containers run on hardware managed by a team of engineers who handle everything from kernel updates to storage monitoring — 24 hours a day, 7 days a week.