Every Docker container you launch gets its own isolated network namespace — its own IP address, its own routing table, its own port space. Two containers running on the same VPS are as isolated from each other as two separate physical machines, unless you explicitly connect them. Understanding Docker networking is the difference between containers that "just work" and hours of debugging why your app can't reach its database.

This guide covers the core Docker network drivers, when to use each one, how Docker Compose handles networking automatically, how to architect multi-application stacks securely, and how to diagnose the three most common networking problems.


The Docker Networking Mental Model

Without networking configuration, a Docker container is a black box. It can reach the internet (outbound), but nothing can reach it (inbound), and it cannot communicate with other containers by name. Every networking decision you make is about opening specific, controlled pathways between containers and between containers and the outside world.

Docker networking runs entirely within your Cloud VPS — all Docker network modes work out of the box on MassiveGRID Cloud VPS.

List the networks Docker created at installation:

docker network ls

You will see three default networks:

NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
1a2b3c4d5e6f   none      null      local

These three networks represent the three fundamental modes of container networking. Every other configuration is built on top of them.

Network Drivers Explained

Bridge Networks (Default)

A bridge network creates a virtual switch (a Linux bridge device) on your host. Containers attached to the same bridge can communicate with each other. The bridge also provides NAT-based internet access for containers.

# Run a container on the default bridge
docker run -d --name web1 nginx

# Inspect its network settings
docker inspect web1 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# Output: 172.17.0.2

The default bridge assigns IP addresses from the 172.17.0.0/16 subnet. Containers on this bridge can reach each other by IP address, but not by container name. This is one of several reasons you should never use the default bridge.

Host Network

Host networking removes all isolation between the container and the host's network stack. The container shares the host's IP address, port space, and network interfaces directly.

# Run Nginx directly on the host network
docker run -d --network host --name web-host nginx

# Nginx is now listening on the HOST's port 80
curl http://localhost:80

Use host networking when:

- Performance is critical and you cannot afford the NAT/bridge overhead (high-throughput proxies, VPNs, monitoring agents)
- The container must bind a large or dynamic range of ports, making individual -p mappings impractical
- The application needs to see the host's real network interfaces

Avoid host networking when:

- You run multiple containers that listen on the same port — there is no per-container port space, so they will conflict
- You want any network isolation between the container and the host
- You rely on -p port mappings — Docker ignores them in host mode
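One consequence worth seeing once: because a host-mode container shares the host's port space, the -p flag does nothing there, and two host-mode services wanting the same port collide. A quick sketch (container names illustrative):

```shell
# -p is ignored in host mode — newer Docker versions print a warning
docker run -d --network host -p 8080:80 --name web-host1 nginx

# A second host-mode nginx cannot bind port 80 — web-host1 already holds it
docker run -d --network host --name web-host2 nginx
docker logs web-host2
# expect a bind() error mentioning "Address already in use"
```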

None Network

The none driver gives the container no network access at all. The container only has a loopback interface.

# Run a container with no network
docker run -d --network none --name isolated alpine sleep 3600

# Verify — no network interfaces except lo
docker exec isolated ip addr
# Only the loopback interface is listed: 1: lo: <LOOPBACK,UP,LOWER_UP> ...

This is useful for batch processing jobs that should never make network connections, or for security-sensitive workloads that only interact through mounted volumes.

Macvlan Network

Macvlan assigns a real MAC address to each container, making it appear as a physical device on your network. Containers get IP addresses from your actual network's DHCP server or from a statically defined range.

# Create a macvlan network
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  my-macvlan

# Run a container with a real network presence
docker run -d --network my-macvlan --ip 192.168.1.100 --name direct-net nginx

Macvlan is rarely needed on a VPS. It is most useful in on-premises environments where containers need to appear as first-class network citizens. On a cloud VPS, bridge networks with port publishing handle virtually every use case.

Default Bridge vs Custom Bridge

This is the single most important networking concept to internalize: always create custom bridge networks instead of using the default bridge.

Feature | Default Bridge | Custom Bridge
DNS resolution by container name | No | Yes
Automatic container discovery | No | Yes
Network isolation from other stacks | No (all containers share it) | Yes (per-network isolation)
Connect/disconnect without restart | No | Yes
Custom subnet configuration | Limited | Full control

Create a custom bridge network and test DNS resolution:

# Create the network
docker network create app-network

# Launch two containers on the same network
docker run -d --network app-network --name api alpine sleep 3600
docker run -d --network app-network --name worker alpine sleep 3600

# Test DNS resolution — worker can find api by name
docker exec worker ping -c 3 api
# PING api (172.18.0.2): 56 data bytes
# 64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.089 ms

On the default bridge, that ping api command would fail with ping: bad address 'api'. This DNS resolution is what makes custom bridges essential for any multi-container application.
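"Full control" over subnets means you can also pin addressing down at network-creation time. A sketch using an illustrative subnet and names:

```shell
# Create a bridge with an explicit subnet and gateway
docker network create \
  --subnet 172.30.0.0/24 \
  --gateway 172.30.0.1 \
  app-static

# Attach a container with a static IP from that subnet
docker run -d --network app-static --ip 172.30.0.10 --name fixed-ip nginx

# Confirm the assignment
docker inspect fixed-ip --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
```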

Docker Compose Networking

Docker Compose creates a custom bridge network automatically for each project. Every service in a docker-compose.yml file is connected to this network by default, and each service is discoverable by its service name.

# docker-compose.yml
services:
  app:
    image: node:20-alpine
    working_dir: /app
    volumes:
      - ./app:/app
    command: node server.js
    ports:
      - "3000:3000"

  db:
    image: postgres:16
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secretpass
    volumes:
      - pgdata:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  pgdata:

When you run docker compose up, Docker creates a network named <project-directory>_default. Inside the app container, you can connect to the database at db:5432 and Redis at redis:6379 — using service names directly as hostnames.

# In your Node.js application
const { Pool } = require('pg');
const pool = new Pool({
  host: 'db',          // Service name from docker-compose.yml
  port: 5432,
  database: 'myapp',
  user: 'appuser',
  password: 'secretpass'
});

const Redis = require('ioredis');
const redis = new Redis({
  host: 'redis',       // Service name from docker-compose.yml
  port: 6379
});

If you have already installed Docker on your Ubuntu VPS, you have Docker Compose included — it is built into the Docker CLI as docker compose (no hyphen).
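You can confirm what Compose created for the file above. Assuming the project lives in a directory named myapp (illustrative):

```shell
cd ~/apps/myapp && docker compose up -d

# The network is named after the project directory
docker network ls --filter name=myapp_default

# List the containers Compose attached to it
docker network inspect myapp_default \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```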

Connecting Containers Across Compose Files

When you run multiple Compose projects on the same VPS (a common pattern), their networks are isolated from each other by default. To connect containers across projects, use external networks.

First, create a shared network:

docker network create shared-backend

Then reference it in each Compose file:

# ~/apps/api/docker-compose.yml
services:
  api:
    image: node:20-alpine
    networks:
      - default
      - shared-backend
    environment:
      DATABASE_URL: postgres://appuser:secretpass@shared-db:5432/myapp

networks:
  shared-backend:
    external: true

# ~/apps/database/docker-compose.yml
services:
  shared-db:
    image: postgres:16
    networks:
      - shared-backend
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secretpass
    volumes:
      - pgdata:/var/lib/postgresql/data

networks:
  shared-backend:
    external: true

volumes:
  pgdata:

The api service can now reach shared-db by name because they both exist on the shared-backend network, even though they are defined in completely separate Compose projects.
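To verify the cross-project wiring, inspect the shared network and test DNS from the api service:

```shell
# Containers from both Compose projects should appear here
docker network inspect shared-backend \
  --format '{{range .Containers}}{{.Name}} {{end}}'

# DNS works across project boundaries on the shared network
docker compose -f ~/apps/api/docker-compose.yml exec api nslookup shared-db
```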

Network Security Architecture: Isolating Stacks

On a production VPS running multiple services, you should segment networks by trust level. The principle: a container should only be able to reach the containers it actually needs to communicate with.

# docker-compose.yml — Multi-network security architecture
services:
  # === Public-facing tier ===
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      - frontend
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro

  # === Application tier ===
  app:
    build: ./app
    networks:
      - frontend      # Reachable by nginx
      - backend       # Can reach database and cache
    environment:
      DATABASE_URL: postgres://appuser:secretpass@db:5432/myapp
      REDIS_URL: redis://cache:6379

  # === Data tier ===
  db:
    image: postgres:16
    networks:
      - backend       # Only reachable from backend network
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secretpass

  cache:
    image: redis:7-alpine
    networks:
      - backend       # Only reachable from backend network

networks:
  frontend:           # nginx ↔ app communication
  backend:            # app ↔ db/cache communication

volumes:
  pgdata:

In this architecture:

- nginx is attached only to frontend — it can reach app, but has no route to db or cache
- app sits on both networks and is the only bridge between the tiers
- db and cache live only on backend, with no published ports, so they are unreachable from the internet and invisible to nginx

If an attacker compromises the nginx container, they cannot access the database. They would need to pivot through the app container first — a meaningful additional barrier.
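You can verify the segmentation directly. From nginx (frontend only), the db name should not even resolve; from app (on both networks), it should — using the service names from the Compose file above:

```shell
# From nginx — db is on a network nginx doesn't belong to
docker compose exec nginx ping -c 2 db
# expect: ping: bad address 'db'

# From app — db resolves and responds
docker compose exec app ping -c 2 db
```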

Port Publishing: Exposing Containers to the Outside World

Containers are not reachable from outside the VPS unless you explicitly publish ports. The -p flag (or ports: in Compose) creates iptables rules that forward traffic from a host port to a container port.

# Publish container port 3000 on host port 3000 (all interfaces)
docker run -d -p 3000:3000 myapp

# Publish on localhost only (not accessible from outside the VPS)
docker run -d -p 127.0.0.1:3000:3000 myapp

# Publish on a specific host port, mapping to a different container port
docker run -d -p 8080:3000 myapp

In a reverse proxy setup (which is the recommended pattern — see our Nginx reverse proxy guide), you should publish application containers on 127.0.0.1 only:

services:
  app:
    build: .
    ports:
      - "127.0.0.1:3000:3000"   # Only reachable from localhost/nginx

This ensures the application is only accessible through your reverse proxy, not directly via http://your-vps-ip:3000.
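You can confirm the binding from the VPS itself — the listener should show 127.0.0.1, not 0.0.0.0:

```shell
# The app should be bound to loopback only
sudo ss -tlnp | grep :3000
# expect: LISTEN ... 127.0.0.1:3000 ...

# Reachable locally (which is how nginx proxies to it)...
curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:3000/

# ...while http://your-vps-ip:3000 from any other machine is refused or times out
```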

DNS Resolution Between Containers

Docker runs an embedded DNS server at 127.0.0.11 inside every container connected to a custom bridge network. When a container makes a DNS query for another container's name, Docker's DNS server responds with the target container's IP address on that network.

# Inspect how DNS is configured inside a container
docker run --rm --network app-network alpine cat /etc/resolv.conf
# nameserver 127.0.0.11
# options ndots:0

Key DNS behaviors to understand:

- Resolution is scoped per network: a container can only resolve the names of containers that share at least one network with it
- A name resolves to the target's IP on the shared network — the same container can hold different IPs on different networks
- Network aliases let several containers answer to a single name, and the embedded DNS server rotates between them (round-robin)
- Discovery does not use /etc/hosts; every lookup goes through the embedded DNS server at 127.0.0.11

# Scaling with DNS round-robin
services:
  worker:
    build: ./worker
    networks:
      backend:
        aliases:
          - workers
    deploy:
      replicas: 3

Other containers resolving workers will receive a different IP each time, distributing connections across all three replicas.
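You can watch the rotation by resolving the alias repeatedly from another container on the backend network (here an assumed companion service named app):

```shell
# Each lookup may return the three replica IPs in a different order
for i in 1 2 3; do
  docker compose exec app getent hosts workers | head -1
done
```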

Troubleshooting: "Container Can't Reach the Internet"

This is the most common Docker networking problem. Follow this diagnostic sequence:

Step 1: Verify DNS Resolution

# Try to resolve an external hostname
docker exec mycontainer nslookup google.com

# If DNS fails, check resolv.conf
docker exec mycontainer cat /etc/resolv.conf

Step 2: Check IP Connectivity

# Can the container reach external IPs? (bypass DNS)
docker exec mycontainer ping -c 3 8.8.8.8

# If ping works but DNS doesn't → DNS problem
# If ping fails → routing/firewall problem

Step 3: Check Host Forwarding

# On the host, verify IP forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward
# Should output: 1

# If it's 0, enable it
sudo sysctl -w net.ipv4.ip_forward=1

# Make permanent
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf

Step 4: Check iptables NAT Rules

# Docker should have created MASQUERADE rules
sudo iptables -t nat -L POSTROUTING -n -v

# You should see rules like:
# MASQUERADE  all  --  172.17.0.0/16  0.0.0.0/0
# MASQUERADE  all  --  172.18.0.0/16  0.0.0.0/0

Step 5: Restart Docker's Networking

# Nuclear option — restart Docker daemon
sudo systemctl restart docker

# This recreates all network infrastructure but stops all containers

Troubleshooting: "Containers Can't Talk to Each Other"

Step 1: Verify They Are on the Same Network

# List networks for each container
docker inspect app --format '{{json .NetworkSettings.Networks}}' | jq .
docker inspect db --format '{{json .NetworkSettings.Networks}}' | jq .

# They must share at least one network name

Step 2: Check DNS Resolution

# From the app container, try to resolve the db container
docker exec app nslookup db

# If using the default bridge — DNS won't work. Switch to a custom bridge.
docker network create mynet
docker network connect mynet app
docker network connect mynet db

Step 3: Check if the Target Service is Listening

# Is PostgreSQL actually running and listening?
docker exec db pg_isready
# /var/run/postgresql:5432 - accepting connections

# Is it listening on all interfaces, not just localhost?
docker exec db ss -tlnp | grep 5432
# LISTEN  0  128  0.0.0.0:5432  0.0.0.0:*

Step 4: Test Connectivity Directly

# Install network tools if needed (Debian/Ubuntu-based images)
docker exec app sh -c 'apt-get update && apt-get install -y netcat-openbsd'
# On Alpine-based images: docker exec app apk add --no-cache netcat-openbsd

# Test TCP connectivity
docker exec app nc -zv db 5432
# Connection to db (172.18.0.3) 5432 port [tcp/postgresql] succeeded!

Troubleshooting: "Port Conflicts Between Containers"

Port conflicts happen when two containers try to publish on the same host port. Containers can internally listen on the same port — the conflict only occurs at the host level.

# This fails — both want host port 80
docker run -d -p 80:80 --name web1 nginx
docker run -d -p 80:80 --name web2 nginx
# Error: bind: address already in use

Solutions:

# Solution 1: Use different host ports
docker run -d -p 80:80 --name web1 nginx
docker run -d -p 8080:80 --name web2 nginx

# Solution 2: Bind to different interfaces
docker run -d -p 127.0.0.1:80:80 --name web1 nginx
docker run -d -p 127.0.0.2:80:80 --name web2 nginx

# Solution 3: Don't publish ports — use a reverse proxy instead
# (The recommended approach for production)

Find which process is using a port:

# What's listening on port 80?
sudo ss -tlnp | grep :80
# LISTEN  0  511  0.0.0.0:80  0.0.0.0:*  users:(("docker-proxy",pid=12345,fd=4))

# Find which container owns that docker-proxy process
docker ps --filter "publish=80"

Docker Networking and UFW: The Bypass Problem

This is a critical security issue that catches many administrators. Docker manipulates iptables directly to publish ports, and these rules bypass UFW entirely. If you publish a container on port 5432, it is accessible from the internet even if UFW denies port 5432.

# You might think this blocks external access to PostgreSQL:
sudo ufw deny 5432

# But this container is STILL accessible from the internet:
docker run -d -p 5432:5432 postgres:16

# Docker's iptables rules are evaluated BEFORE UFW's rules

For a complete solution, read our UFW advanced rules guide. The essential fix:

# Edit Docker daemon configuration
sudo nano /etc/docker/daemon.json
{
  "iptables": false
}

# Apply the change
sudo systemctl restart docker

Then manually configure iptables rules for Docker traffic. Alternatively, the more practical approach: never publish ports on 0.0.0.0 — always bind to 127.0.0.1 and use a reverse proxy:

services:
  db:
    image: postgres:16
    ports:
      - "127.0.0.1:5432:5432"   # Localhost only — UFW bypass is irrelevant

  app:
    build: .
    ports:
      - "127.0.0.1:3000:3000"   # Only reachable by the nginx reverse proxy

When every container only publishes to 127.0.0.1, and only your reverse proxy (Nginx or Traefik) listens on public ports 80 and 443, the UFW bypass becomes a non-issue.
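You can see the difference in Docker's NAT table: a 0.0.0.0 publish creates a DNAT rule matching any destination, while a 127.0.0.1 publish is restricted to loopback:

```shell
# Inspect the DNAT rules Docker created for port 5432
sudo iptables -t nat -L DOCKER -n | grep 5432
# destination 0.0.0.0/0  → world-reachable, regardless of UFW
# destination 127.0.0.1  → loopback only

# From an external machine, the loopback-bound port should be closed
nc -zv your-vps-ip 5432
```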

Network Architecture for a Multi-App VPS

The recommended architecture for running multiple applications on a single VPS uses Traefik as a Docker-aware reverse proxy. Here is the network topology:

# Shared proxy network — all web-facing services connect here
docker network create traefik-public

# ~/traefik/docker-compose.yml
services:
  traefik:
    image: traefik:v3.0
    ports:
      - "80:80"
      - "443:443"
    networks:
      - traefik-public
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml:ro
      - ./acme.json:/acme.json

networks:
  traefik-public:
    external: true

# ~/apps/blog/docker-compose.yml
services:
  blog:
    image: ghost:5
    networks:
      - traefik-public
      - internal
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.blog.rule=Host(`blog.example.com`)"
      - "traefik.http.routers.blog.tls.certresolver=letsencrypt"
      - "traefik.http.services.blog.loadbalancer.server.port=2368"

  blog-db:
    image: mysql:8
    networks:
      - internal

networks:
  traefik-public:
    external: true
  internal:             # Isolated — only blog ↔ blog-db

# ~/apps/api/docker-compose.yml
services:
  api:
    build: ./api
    networks:
      - traefik-public
      - api-internal
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`api.example.com`)"
      - "traefik.http.routers.api.tls.certresolver=letsencrypt"

  api-db:
    image: postgres:16
    networks:
      - api-internal

networks:
  traefik-public:
    external: true
  api-internal:         # Isolated — only api ↔ api-db

This pattern gives you:

- A single public entry point — only Traefik publishes ports 80 and 443
- Automatic HTTPS per application via the Let's Encrypt certificate resolver in the router labels
- Per-app internal networks, so each application's database is reachable only by that application
- Zero-touch expansion — a new app just joins traefik-public and declares its labels; Traefik picks it up through the Docker socket

High-throughput container traffic consumes real CPU for packet processing. For production multi-app deployments, consider MassiveGRID Cloud VDS with dedicated resources starting at $19.80/mo.

Inspecting and Debugging Networks

Essential commands for understanding your Docker network state:

# List all networks
docker network ls

# Inspect a network — see all connected containers
docker network inspect app-network

# See which networks a container is connected to
docker inspect mycontainer --format '{{range $net, $config := .NetworkSettings.Networks}}{{$net}}: {{$config.IPAddress}}{{"\n"}}{{end}}'

# View the Linux bridge device Docker created
ip link show type bridge

# See the iptables rules Docker manages
sudo iptables -L DOCKER -n -v
sudo iptables -t nat -L DOCKER -n -v

# Monitor network traffic between containers (on the host)
sudo tcpdump -i br-$(docker network inspect app-network --format '{{.Id}}' | cut -c1-12) -n

To connect or disconnect a running container from a network without restarting it:

# Add a container to an additional network
docker network connect shared-backend mycontainer

# Remove a container from a network
docker network disconnect shared-backend mycontainer

Network Performance Considerations

Docker bridge networking adds minimal overhead for most workloads, but it is measurable under high-throughput scenarios:

Network Mode | Latency Overhead | Throughput Impact | Use Case
Host | ~0 (native) | ~0 (native) | Performance-critical services
Custom bridge | ~10-50 μs | ~5-10% reduction | Most applications (recommended)
Default bridge | ~10-50 μs | ~5-10% reduction | Never recommended
Macvlan | ~0 (native) | ~0 (native) | Special network topology needs

For applications where container-to-container latency is critical (such as high-frequency database queries), you can measure the actual overhead:

# Benchmark: Bridge network latency
docker run --rm --network app-network alpine sh -c \
  "apk add --no-cache iputils && ping -c 100 -i 0.01 db" 2>/dev/null | tail -1

# Benchmark: Host network latency (for comparison)
ping -c 100 -i 0.01 127.0.0.1 | tail -1

Docker Compose Network Configuration Reference

The full set of network configuration options available in Compose files:

networks:
  app-network:
    driver: bridge                    # Network driver (bridge, host, none, macvlan)
    driver_opts:
      com.docker.network.bridge.name: br-app   # Custom Linux bridge name
    ipam:
      driver: default
      config:
        - subnet: 172.28.0.0/16      # Custom subnet
          ip_range: 172.28.5.0/24    # Allocate IPs from this range
          gateway: 172.28.0.1         # Gateway IP
    internal: false                   # If true, no external/internet access
    labels:
      environment: production
      app: mystack

The internal: true option is valuable for database networks. It creates a bridge with no route to the internet, so even if someone gets a shell inside the database container, they cannot download malware or exfiltrate data:

networks:
  database:
    internal: true      # No internet access for containers on this network
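A quick way to see the effect, with a throwaway network (names illustrative):

```shell
# Create an internal-only bridge
docker network create --internal db-isolated

# Containers on it can talk to each other, but there is no NAT to the outside
docker run --rm --network db-isolated alpine ping -c 2 -W 2 8.8.8.8
# expect failure — no route out of an internal network
```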

Cleaning Up Networks

Over time, Docker accumulates unused networks from stopped Compose projects. Clean them up:

# Remove all networks not used by any container
docker network prune

# Remove a specific network
docker network rm app-network

# See which networks are unused
docker network ls --filter "dangling=true"

Network cleanup is part of general disk space management on your VPS. While networks themselves consume negligible disk space, each network claims a subnet from Docker's default address pools, so leftover networks from old Compose projects can eventually exhaust the pools and cause docker network create to fail.
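If you do run into subnet exhaustion, Docker's default address pools can be widened in the daemon configuration. The ranges below are illustrative — pick ones that don't overlap networks your VPS actually uses, then restart Docker:

```shell
# /etc/docker/daemon.json — then: sudo systemctl restart docker
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```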

Summary: Docker Networking Decision Tree

Scenario | Network Type | Port Publishing
Single container, needs internet access | Custom bridge | 127.0.0.1:port:port
Multi-container app (web + db) | Compose default (custom bridge) | Only the web container
Multiple apps, one VPS | Traefik + shared external + per-app internal | Only Traefik: 80, 443
Performance-critical service | Host network | Not applicable (uses host ports)
Batch processing, no network needed | None | Not applicable
Database tier | Internal custom bridge | Never publish externally

Docker networking is one of those topics where understanding the fundamentals eliminates entire categories of debugging. Once you internalize that custom bridges provide DNS resolution, that Compose creates networks automatically, and that port publishing bypasses UFW, you can architect multi-application stacks with confidence. Start with the Traefik multi-app pattern, and you will rarely need to think about Docker networking again.