Your Dokploy builds sometimes finish in 30 seconds, other times take 5 minutes. Same code, same Dockerfile. You have not changed anything. What is happening?
If you are running Dokploy on a shared VPS, the answer is almost certainly resource contention. The server you are on is shared with other tenants, and when they are busy, your builds slow down. This guide explains exactly why this happens, how to diagnose it, and what to do about it.
The Noisy-Neighbor Problem
Docker builds are bursty CPU workloads. When Dokploy triggers a build, the process spikes to 100% CPU utilization during compilation, dependency installation, and image layer creation. Then it drops to near-zero once the build completes and the container is running.
On shared infrastructure, your VPS is allocated a number of CPU cores, but those cores are shared with other tenants on the same physical host. The hypervisor uses CPU scheduling to divide time between virtual machines. When your neighbor's workload is idle, you get the full performance of your allocated cores. When they are running their own builds, database queries, or batch jobs, the hypervisor throttles your CPU time to maintain fairness.
This creates the exact symptom described above: inconsistent build times. The same `docker build` command takes 30 seconds when the host is quiet and 5 minutes when it is busy. You have no visibility into what other tenants are doing, and no control over when the contention occurs.
The impact is not limited to builds. If your Dokploy instance is serving production traffic while also running a build, the CPU contention affects both. Response times for your running applications degrade during builds, and builds take longer because the application is consuming CPU. On shared infrastructure, these two workloads compete for the same throttled resource.
Diagnosing Your Bottleneck
Before changing your server, confirm that resource contention is actually the problem. Dokploy provides built-in monitoring that shows CPU, RAM, and disk utilization for your server and individual containers.
CPU Bottleneck Indicators
Check CPU utilization during a build. If you see the following pattern, CPU is your bottleneck:
- CPU hits 100% (or close to it) during builds
- Build times vary significantly between runs (more than 2x difference)
- Application response times degrade during builds
- `docker stats` shows your build container consuming all available CPU
You can also check for CPU steal time, which directly measures how much CPU time the hypervisor is taking from your VM to give to other tenants:
```bash
# Check CPU steal time (st column)
top -bn1 | head -5

# Or use vmstat for a snapshot
vmstat 1 5
```
If the `st` (steal) value is consistently above 5-10%, your VM is losing significant CPU time to other tenants. This is the definitive indicator of the noisy-neighbor problem.
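If `top` or `vmstat` are not installed, you can compute steal time directly from the kernel's counters. A minimal sketch for Linux, using the cumulative CPU jiffies in `/proc/stat` (field order: user, nice, system, idle, iowait, irq, softirq, steal):

```bash
# Sample the aggregate "cpu" line twice, one second apart
read -r _ u1 n1 s1 i1 w1 q1 sq1 st1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 sq2 st2 _ < /proc/stat

# Deltas over the one-second window
total=$(( u2+n2+s2+i2+w2+q2+sq2+st2 - u1-n1-s1-i1-w1-q1-sq1-st1 ))
steal=$(( st2 - st1 ))

echo "steal: $(( steal * 100 / total ))% of CPU time over 1s"
```

Run this while a build is in progress; a quiet system with a busy neighbor will show the steal percentage climbing even though your own processes are not the cause.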
RAM Bottleneck Indicators
RAM bottlenecks present differently. Docker builds cache layers in memory, and running databases (PostgreSQL, MySQL, Redis) consume RAM at rest. If RAM is the issue, you will see:
- High swap usage during builds (`free -h` shows significant swap utilization)
- The OOM killer terminating build processes (check `dmesg | grep -i oom`)
- Builds failing outright with "out of memory" errors
- Consistently slow builds regardless of time of day (unlike CPU contention, RAM bottlenecks are consistent)
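The checks above can be run in one pass. A sketch that reads swap usage from `/proc/meminfo` (the same numbers `free` reports) and scans the kernel log for OOM kills:

```bash
# Swap in use = SwapTotal - SwapFree (kB). Compare a sample taken
# during a build against one taken at rest.
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t - f}' /proc/meminfo)
echo "swap in use: ${swap_used_kb} kB"

# OOM-killer events, if any, appear in the kernel log (may need root).
dmesg 2>/dev/null | grep -i "out of memory" || echo "no OOM events logged"
```

If swap usage jumps by hundreds of megabytes the moment a build starts, RAM is your bottleneck regardless of what CPU steal time shows.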
RAM contention is not a noisy-neighbor issue. RAM allocated to your VPS is typically guaranteed, even on shared infrastructure. If you are running out of RAM, it means your workload genuinely needs more memory, not that someone else is using yours.
Fixing CPU Bottlenecks
If CPU steal time confirms the noisy-neighbor problem, you have two options:
Option 1: Scale CPU on Your Current VPS
On MassiveGRID's Cloud VPS, CPU, RAM, and storage are scaled independently. If your current allocation is 2 vCPU / 4 GB RAM / 80 GB SSD, you can bump to 4 vCPU without changing your RAM or storage. This costs less than upgrading to the next fixed-size package because you are only paying for the resource you actually need.
Adding 2 more CPU cores gives your Dokploy builds more headroom during compilation. If the contention was marginal (builds were just barely exceeding available CPU), this may be sufficient.
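After scaling, it is worth confirming the guest actually sees the new allocation before re-running builds:

```bash
# vCPUs visible to the guest; should match the upgraded allocation
nproc
grep -c '^processor' /proc/cpuinfo
```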
Option 2: Move to a Dedicated VPS (VDS)
If CPU steal time is consistently high, adding more shared CPU cores may not solve the problem. The cores are still shared, and a busier host means more contention across all of them.
A Dedicated VPS (VDS) assigns physical CPU cores exclusively to your instance. No other tenant can use them. Your 4 CPU cores are always your 4 CPU cores, with zero steal time. Build times become consistent and predictable.
The difference is measurable. Here is a representative example for a Node.js application with a multi-stage Docker build (dependency install + TypeScript compilation + production image):
| Metric | Shared VPS (4 vCPU) | Dedicated VDS (4 vCPU) |
|---|---|---|
| Best-case build time | 35 seconds | 32 seconds |
| Worst-case build time | 4 min 20 sec | 38 seconds |
| Average build time | 1 min 45 sec | 34 seconds |
| CPU steal time | 8-25% | 0% |
| Build time variance | High (unpredictable) | Low (consistent) |
The best-case times are similar because when the shared host is idle, you get near-dedicated performance. The critical difference is the worst case: a 7x variation on shared infrastructure versus near-zero variation on dedicated.
Fixing RAM Bottlenecks
Docker builds cache intermediate layers in memory. Each RUN instruction in your Dockerfile creates a layer, and the build process holds previous layers in memory while building the next. For a complex multi-stage build, this can consume several gigabytes.
Meanwhile, your running services (databases, application containers, Dokploy itself) maintain their own memory footprint. PostgreSQL's `shared_buffers` is commonly tuned to around 25% of system RAM. Redis keeps its entire dataset in memory. Your Node.js or Python application has its own heap.
When a build triggers on top of these running services, total memory demand can exceed your allocation. The kernel starts swapping, and everything slows down dramatically. Swap is backed by disk I/O, which is orders of magnitude slower than RAM.
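You can confirm the kernel is actively swapping (not just that swap is allocated) by sampling its swap counters. A sketch using `/proc/vmstat`:

```bash
# pswpin/pswpout count pages swapped in/out since boot; a rising
# delta between two samples means swapping is happening right now.
a_in=$(awk '/^pswpin / {print $2}' /proc/vmstat)
a_out=$(awk '/^pswpout / {print $2}' /proc/vmstat)
sleep 2
b_in=$(awk '/^pswpin / {print $2}' /proc/vmstat)
b_out=$(awk '/^pswpout / {print $2}' /proc/vmstat)
echo "pages swapped in: $((b_in - a_in)), out: $((b_out - a_out)) over 2s"
```

Sustained non-zero deltas during a build are the signature of the build-on-top-of-running-services scenario described above.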
The fix: scale RAM independently. On MassiveGRID, you can add 2 GB or 4 GB of RAM without changing your CPU or storage allocation. Calculate your baseline memory usage (all running containers at rest) and add headroom for builds. A good rule of thumb is: baseline memory usage plus 2 GB for Docker build overhead.
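The rule of thumb above can be computed directly. A sketch, assuming the baseline is measured while all services are running at rest with no build in flight:

```bash
# Baseline = memory currently in use (MemTotal - MemAvailable, in MB)
baseline_mb=$(awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2} END {print int((t - a) / 1024)}' /proc/meminfo)

# Add ~2 GB of headroom for Docker build overhead
target_mb=$(( baseline_mb + 2048 ))
echo "baseline: ${baseline_mb} MB -> suggested allocation: ${target_mb} MB or more"
```

Round the result up to the nearest tier your provider offers; undershooting here just moves the OOM kills to a busier day.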
When Both Are the Problem
If you are seeing both CPU steal time and RAM pressure, the issue is that your workload has outgrown your current server tier. This typically happens when:
- You are running multiple applications on a single Dokploy instance
- You added a database alongside your application containers
- Build frequency increased (CI/CD pushing multiple times per hour)
- Your application's resource footprint grew (more dependencies, larger builds)
For these scenarios, consider two paths:
Dedicated VPS (VDS) for guaranteed CPU plus independently scaled RAM. This is the right choice when you want full control and predictable performance without management overhead.
Cloud Dedicated with HA when you also need automatic failover and managed infrastructure. This tier adds MassiveGRID's management layer on top of dedicated resources, with automatic migration if the underlying hardware fails.
MassiveGRID for Dokploy
- Independent Resource Scaling — Scale CPU, RAM, and storage separately. Fix your actual bottleneck without overpaying for resources you do not need
- Dedicated VPS (VDS) — Physical CPU cores assigned exclusively to your instance. Zero steal time, consistent build performance
- Cloud VPS — Cost-effective shared resources with per-resource scaling for development and staging
- HA Cloud Dedicated — Dedicated resources plus automatic failover and managed infrastructure for production
- Real-time Scaling — Add 2 CPU cores or 4 GB RAM without migration or downtime
The Bottom Line
Inconsistent Dokploy build times are not a Dokploy problem. They are an infrastructure problem. Docker builds are CPU-intensive, bursty workloads that expose the limitations of shared infrastructure more than steady-state web serving ever will.
Diagnose first: check CPU steal time and RAM utilization during builds. If steal time is the issue, dedicated CPU (VDS) eliminates it. If RAM is the issue, scale RAM independently. If both, move to a dedicated tier that gives you full control over resource allocation.
With most providers, fixing slow builds means jumping to the next package and paying for resources you do not need. MassiveGRID lets you scale the exact resource that is bottlenecking you, whether that is 2 additional CPU cores, 4 GB more RAM, or a move to fully dedicated infrastructure.
For the initial Dokploy setup, see our installation guide. To understand which server tier fits your workload, read choosing the best VPS for Dokploy. If your performance needs require horizontal scaling, the multi-node Docker Swarm guide covers distributing Dokploy across multiple servers.