Introduction
Choosing the wrong VPS for n8n does not produce a visible error page. It produces something worse: webhook timeouts that silently drop incoming data, workflow executions that fail mid-step and leave your CRM half-updated, and overnight jobs that die because the server ran out of memory at 3 AM. The failure mode is not "site down" — it is data loss, broken automations, and hours spent tracing why a Slack notification never fired.
n8n is not a website. It is not a blog, a landing page, or even a web application in the traditional sense. It is a 24/7 automation brain — listening for webhooks, polling APIs on schedules, processing data transformations, and orchestrating multi-step workflows that connect your entire tool stack. It needs to be running, responsive, and reliable every minute of every day, including when you are asleep.
Most "best VPS for n8n" lists rank providers by the same criteria they use for WordPress hosting: price per month, disk space, and how fast a homepage loads. None of that matters for workflow automation. What matters is whether your server can handle a burst of 50 concurrent webhook calls, whether the CPU is shared with noisy neighbors, whether your PostgreSQL database survives a hardware failure, and whether 99.9% uptime is actually good enough for a system that silently misses events during downtime. This guide covers the real requirements.
What n8n Actually Needs from a VPS
CPU: Three Distinct Workload Patterns
n8n's CPU demands are not constant — they spike in three different patterns that most VPS reviews ignore entirely.
First, Docker builds and updates. Every time you update n8n (roughly monthly releases), Docker pulls the new image, extracts its layers, and recreates the container — and if you maintain a custom image, rebuilds it. This is a short, intense CPU burst — 30 to 90 seconds of heavy utilization across all cores. On a shared VPS with burstable CPU, this work competes with other tenants on the same physical host. Your update that should take 45 seconds takes 3 minutes instead.
Second, AI nodes and LLM processing. If you use n8n's AI agent nodes, LangChain integrations, or any workflow that processes text through an embedding model, CPU usage is sustained and heavy. Parsing large JSON payloads, chunking documents, and managing AI agent memory all demand consistent compute — not burst credit that runs out after 10 minutes.
Third, queue mode workers. Production n8n deployments use queue mode, where a main process accepts webhooks and multiple worker processes execute workflows in parallel. Each worker is a separate Node.js process consuming its own CPU allocation. Two workers on a 2-core shared VPS means constant contention. Four workers need at least 4 dedicated cores to avoid queuing delays.
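The queue-mode topology can be sketched in Docker Compose. This is a minimal sketch, not a production config: the service names and wiring are illustrative, and a real deployment also needs PostgreSQL, an encryption key, and a reverse proxy.

```yaml
# Minimal queue-mode sketch; service names, port, and image tag are illustrative
services:
  n8n-main:
    image: n8nio/n8n
    environment:
      - EXECUTIONS_MODE=queue          # main process receives webhooks and enqueues jobs
      - QUEUE_BULL_REDIS_HOST=redis
    ports:
      - "5678:5678"
  n8n-worker:
    image: n8nio/n8n
    command: worker                    # pulls jobs from Redis and executes workflows
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
  redis:
    image: redis:7
```

Scale workers with `docker compose up -d --scale n8n-worker=2`. Each replica is a separate Node.js process with its own CPU and RAM footprint, which is why the core counts above matter.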
Shared/burstable vCPUs advertise a core count, but you only get full performance in short bursts. For n8n with AI nodes or queue mode, you need dedicated cores — physical CPU exclusively allocated to your server. Check whether your provider's "vCPU" means dedicated or shared.
RAM: The Real Minimum Is Not 2 GB
n8n's documentation suggests 2 GB minimum. That is for n8n alone — a single process with SQLite, no queue mode, no production database. A real self-hosted deployment stacks up fast:
- PostgreSQL (required for production): 500 MB – 1 GB depending on `shared_buffers` and connection count
- n8n main process: ~500 MB with moderate workflow history
- Redis (required for queue mode): ~256 MB
- Docker overhead (daemon + networking): ~200 MB
- Queue mode workers: 200 – 500 MB each
For a basic setup without queue mode (PostgreSQL + n8n + Docker): 4 GB minimum. For queue mode with two workers (PostgreSQL + Redis + n8n + 2 workers + Docker): 8 GB. For AI agent workloads with larger payload processing: 8 – 16 GB.
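As a sanity check, the component figures above can be summed. This is a back-of-envelope sketch using rough midpoints of the ranges quoted above, not measurements:

```shell
# Rough RAM budget for queue mode with two workers, in MB (figures from the list above)
awk 'BEGIN {
  total = 1024 + 512 + 256 + 200 + 2 * 500   # PostgreSQL + n8n main + Redis + Docker + 2 workers
  printf "~%.1f GB before OS, page cache, and payload spikes\n", total / 1024
}'
```

That baseline is only about 3 GB; the 8 GB recommendation leaves headroom for large payloads, PostgreSQL's page cache, and the operating system itself.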
When you exceed available RAM, the Linux OOM killer terminates the most memory-hungry process. In an n8n stack, that is usually PostgreSQL — which means your workflow execution database disappears mid-run, and every in-flight execution fails without logging the error.
Storage: Execution History Is the Silent Growth Factor
n8n stores every workflow execution in the database: input data, output data, error logs, timing information. A moderately active instance running 50 workflows processes thousands of executions per month. Each execution record ranges from a few KB (simple HTTP requests) to several MB (data transformations with large payloads).
In practice, execution history grows between 500 MB and 10 GB per month depending on workflow complexity and data volume. PostgreSQL's write-ahead log adds another 20-30% on top. After six months of production use without cleanup policies, a 20 GB disk can be full.
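n8n ships pruning controls for exactly this problem. A sketch of the relevant environment variables follows; the values are illustrative, so pick a retention window that matches your audit needs:

```shell
# Automatic execution pruning (n8n environment variables; values illustrative)
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336            # hours: keep roughly 14 days of history
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000  # hard cap on stored execution records
```

With pruning enabled, disk growth flattens out instead of compounding month over month.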
Storage speed matters too. PostgreSQL performs constant write operations for execution logging, and NVMe SSDs handle these writes 3-5x faster than standard SATA SSDs. On a slow disk, your workflow execution logging becomes the bottleneck — n8n waits for the database write to confirm before proceeding to the next node.
Independent storage scaling is valuable here: your CPU and RAM needs may stay constant, but execution history grows linearly with time. You should be able to add 50 GB of storage without being forced into a larger compute package.
Network: Webhook Latency Is Not Optional
n8n listens for incoming webhook calls from external services — Stripe payment events, GitHub push notifications, form submissions, CRM updates. When Stripe sends a webhook, it expects a response within a few seconds. If your server is slow to respond (high latency, overloaded CPU), Stripe retries. If it keeps failing, Stripe disables the webhook endpoint.
Deploy your n8n instance in the datacenter closest to your primary integrations. If most of your webhooks come from US-based services, choose a US datacenter. If your stack is European, choose Frankfurt or London.
Uptime: Do the Math on Missed Webhooks
A 99.9% uptime SLA means up to 8.7 hours of downtime per year. For a website, that is an inconvenience. For an automation platform, it is 8.7 hours of silently missed webhook events. Every Stripe payment notification, every form submission, every scheduled job that was supposed to run during that window — gone. Most webhook senders retry a few times and then give up. Your data is lost without any record it was ever sent.
A 99.99% SLA reduces that to 52 minutes per year. A 100% SLA — backed by high-availability infrastructure with automatic failover — means the server migrates to healthy hardware before your webhooks even notice.
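The downtime figures above fall out of simple arithmetic; a quick sketch:

```shell
# Maximum downtime per year implied by an uptime SLA
for sla in 99.9 99.99; do
  awk -v s="$sla" 'BEGIN {
    printf "%s%% SLA -> up to %.1f minutes of downtime per year\n", s, (1 - s/100) * 365 * 24 * 60
  }'
done
```

The 525.6 minutes for 99.9% is the "8.7 hours" quoted above; the "52 minutes" for 99.99% rounds the 52.6-minute figure down.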
VPS Evaluation Criteria for Automation
When evaluating VPS providers specifically for workflow automation, the standard benchmarks (PageSpeed scores, PHP performance, WordPress load times) are irrelevant. Here is what actually differentiates providers for an n8n workload.
Dedicated vs. Shared Resources
The single most important distinction. A "2 vCPU" shared instance means you get access to 2 virtual cores on a hypervisor that oversubscribes its physical CPUs across dozens of tenants. During off-peak hours, you might get close to full performance. During peak hours, when your neighbors are also running builds, your n8n workers compete for the same physical silicon. Workflow execution times become unpredictable — a data transformation that takes 200ms one hour takes 800ms the next.
A Dedicated VPS allocates physical CPU cores exclusively to your server. Your performance is identical at 2 AM and 2 PM, regardless of what other tenants are doing. For production automation where consistent execution speed matters, this is not optional.
Uptime SLA: The Tiers That Matter
Not all uptime numbers are equal:
- 99.9% — Up to 8 hours 45 minutes of downtime per year. Common for budget providers. Acceptable for development instances.
- 99.99% — Up to 52 minutes per year. Better, but still means missed webhooks during maintenance windows.
- 100% — Requires high-availability architecture with automatic failover. Your VM migrates to a healthy physical host before your workflows notice. The only tier suitable for business-critical automation.
Storage Redundancy: Single Disk vs. Replicated
Standard VPS storage uses a local SSD on a single physical server. If that drive fails, your PostgreSQL database — containing all workflow definitions, credentials, and execution history — is gone. Recovery depends on your backup discipline.
Distributed storage systems like Ceph replicate every block of data across three independent drives on different physical servers. A single drive failure is invisible to your applications. This is the difference between "we lost the last 6 hours of execution data" and "we didn't even notice."
Automatic Failover vs. Manual Recovery
When the physical server hosting your VPS fails, what happens next defines your provider. With manual recovery, you file a support ticket and wait. With automatic failover, the hypervisor detects the failure and migrates your VM to a healthy node — typically within seconds. For an automation platform that other systems depend on, the distinction between 2-second failover and 2-hour recovery is the distinction between a non-event and a crisis.
Datacenter Locations
Your n8n instance should be geographically close to the services it integrates with. If your webhooks come from US SaaS platforms (Stripe, HubSpot, Salesforce), a US East Coast datacenter minimizes round-trip latency. European integrations benefit from Frankfurt or London. Asian workflows from Singapore. Multiple datacenter options give you the flexibility to deploy where your data flows.
Independent Resource Scaling
n8n's resource demands grow unevenly. You might need more RAM (added a queue mode worker) without needing more CPU. Or more storage (execution history growth) without changing compute. Providers that bundle resources into fixed tiers force you to overpay — you buy 8 vCPUs to get 16 GB RAM, even though 4 vCPUs was plenty. Independent scaling lets you adjust each dimension separately.
Human Support Quality
When your n8n instance is unresponsive at midnight and you cannot determine whether the issue is Docker, PostgreSQL, the Linux kernel OOM killer, or the underlying hypervisor, a chatbot reading from a knowledge base will not help. Human support with infrastructure expertise — engineers who understand the difference between a Docker bridge network issue and a hypervisor-level IOPS throttle — is the difference between a 15-minute resolution and a 6-hour debugging session.
Sizing Guide by Use Case
The following configurations use MassiveGRID's Dedicated VPS pricing, which charges per resource: $2.87/vCPU + $0.80/GB RAM + $0.01/GB SSD per month. This matters because you can scale each dimension independently — the tiers below are starting points, not rigid packages.
| Tier | Configuration | Monthly | Annual (20% off) | Use Case |
|---|---|---|---|---|
| Starter | 2 vCPU / 4 GB / 64 GB NVMe | $9.58/mo | $7.66/mo | Up to 20 workflows, no queue mode, basic integrations |
| Professional | 4 vCPU / 8 GB / 128 GB NVMe | $19.16/mo | $15.33/mo | 20–100 workflows, queue mode, AI nodes, Redis |
| Enterprise | 8 vCPU / 16 GB / 256 GB NVMe | $38.32/mo | $30.66/mo | 100+ workflows, multi-worker queue mode, heavy AI |
How the math works:
- Starter: (2 × $2.87) + (4 × $0.80) + (64 × $0.01) = $5.74 + $3.20 + $0.64 = $9.58/mo
- Professional: (4 × $2.87) + (8 × $0.80) + (128 × $0.01) = $11.48 + $6.40 + $1.28 = $19.16/mo
- Enterprise: (8 × $2.87) + (16 × $0.80) + (256 × $0.01) = $22.96 + $12.80 + $2.56 = $38.32/mo
With annual billing, MassiveGRID applies a 20% discount: Starter drops to $7.66/mo, Professional to $15.33/mo, Enterprise to $30.66/mo.
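The arithmetic above can be wrapped in a small helper. This is a sketch: the rates are the per-resource prices quoted above and may change.

```shell
# Monthly and annual-discounted cost at $2.87/vCPU, $0.80/GB RAM, $0.01/GB SSD
cost() {
  awk -v c="$1" -v r="$2" -v d="$3" 'BEGIN {
    m = c * 2.87 + r * 0.80 + d * 0.01
    printf "%d vCPU / %d GB / %d GB: $%.2f/mo monthly, $%.2f/mo annual\n", c, r, d, m, m * 0.8
  }'
}
cost 2 4 64     # Starter
cost 4 8 128    # Professional
cost 8 16 256   # Enterprise
```

Running it reproduces the table: $9.58/$7.66, $19.16/$15.33, and $38.32/$30.66 per month.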
Start with the Starter tier and monitor resource usage with htop and docker stats for two weeks. Scale up the specific resource that hits its limit first — usually RAM (more workflows) or CPU (AI nodes). You do not need to guess.
MassiveGRID for n8n Automation
Here is how MassiveGRID's infrastructure maps to the evaluation criteria outlined above — factually, without superlatives.
Automatic failover via HA cluster. MassiveGRID's Managed Cloud Dedicated Servers run on clustered physical hosts. If the underlying hardware fails, the hypervisor migrates your VM to a healthy node automatically. Your n8n instance keeps running, your webhooks keep arriving, and you find out about the hardware event from a monitoring log — not from a cascade of customer complaints about failed automations.
Ceph distributed storage for PostgreSQL protection. Your workflow definitions, encrypted credentials, and execution history live in PostgreSQL. On MassiveGRID's HA tier, that data is stored on Ceph with 3x replication across independent physical drives. A single disk failure does not touch your data. This is not a backup — it is real-time redundancy at the storage layer.
Dedicated resources for consistent performance. The Dedicated VPS (VDS) tier allocates physical CPU cores exclusively to your server. No noisy neighbors, no burst credits, no performance variability. Your n8n queue mode workers get the same CPU throughput at 2 PM on Monday as they do at 3 AM on Sunday.
Four datacenter locations. New York, London, Frankfurt, and Singapore. Deploy your n8n instance in the region closest to your webhook sources and API integrations. If you are a European agency handling client data under GDPR, Frankfurt keeps your execution data within EU jurisdiction.
100% uptime SLA. Not 99.9%, not 99.99%. The HA tier is backed by a 100% uptime guarantee — because the architecture is designed for zero downtime through automatic failover and redundant storage.
Independent resource scaling. Add RAM without changing CPU. Add storage without changing anything else. Scale the resource that your monitoring shows is actually constrained, not the one a fixed-tier pricing table forces you to upgrade.
For hobby projects running under 10 simple workflows with no queue mode and no AI nodes, a $5 shared VPS from any reputable provider is fine. MassiveGRID's dedicated infrastructure is designed for production workloads where reliability, consistent performance, and data protection justify the cost difference. If your automations are not business-critical, you do not need HA failover.
Cost Comparison: Self-Hosted vs n8n Cloud
n8n offers a managed cloud product alongside the self-hosted open-source version. The pricing models are fundamentally different, and the right choice depends on your execution volume and technical comfort level.
n8n Cloud Pricing (as of 2025)
| Plan | Price | Execution Limit | Active Workflows |
|---|---|---|---|
| Starter | €24/mo | 2,500 executions | 5 active |
| Pro | €60/mo | 10,000 executions | 50 active |
| Enterprise | Custom | Custom | Unlimited |
Self-Hosted on MassiveGRID VDS
| Config | Price | Execution Limit | Active Workflows |
|---|---|---|---|
| 2 vCPU / 4 GB / 64 GB | $9.58/mo (~€9) | Unlimited | Unlimited |
| 4 vCPU / 8 GB / 128 GB | $19.16/mo (~€18) | Unlimited | Unlimited |
The cost difference is stark. n8n Cloud's Starter plan at €24/mo gives you 2,500 executions and 5 active workflows. A self-hosted instance at roughly the same price (€18/mo for the Professional tier) gives you unlimited executions and unlimited active workflows, with 4 dedicated CPU cores and 8 GB of RAM.
For context: a moderately active n8n instance with 30 workflows running on schedules and webhooks easily produces 5,000–15,000 executions per month. On n8n Cloud, that requires the Pro plan at €60/mo. Self-hosted, the same workload fits comfortably on the $19.16/mo (~€18) Professional VDS, saving you roughly €40/month (around €500/year).
If you scale to 100+ workflows with queue mode and multiple workers, n8n Cloud's Enterprise tier is custom-priced but typically runs into hundreds of euros per month. The self-hosted Enterprise VDS at $38.32/mo handles that workload without execution caps.
The Honest Trade-Off
Self-hosting has real costs beyond the VPS bill. It requires:
- Docker and Linux comfort — you need to be able to SSH into a server, run `docker compose` commands, and read log output
- Initial setup time — roughly 30–60 minutes to get n8n, PostgreSQL, Redis, and a reverse proxy running (see our step-by-step Docker Compose guide)
- Ongoing maintenance — approximately 30 minutes per month for n8n version updates, security patches, and database backups
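The backup half of that maintenance can be a single cron entry. This is illustrative only: the container name `n8n-postgres`, the database user and name, and the backup path are assumptions about your setup.

```shell
# Crontab line: nightly compressed dump of the n8n database at 03:00 (names assumed)
0 3 * * * docker exec n8n-postgres pg_dump -U n8n n8n | gzip > /backups/n8n-$(date +\%F).sql.gz
```

Pair it with an off-server copy. Replicated storage protects against disk failure, not against a bad `DELETE`.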
If those requirements feel uncomfortable, n8n Cloud is a legitimate choice — you are paying for managed infrastructure and zero maintenance. But if you (or anyone on your team) can handle basic Docker operations, the cost savings of self-hosting are substantial and compound every month.
Bottom Line
The best VPS for n8n is not the one with the highest synthetic benchmark score or the lowest sticker price. It is the one that provides dedicated CPU cores, fast random-write storage, enough RAM to keep PostgreSQL and Node.js happy, and infrastructure that stays online when hardware fails. For most self-hosted n8n deployments, that means a 2 vCPU / 4 GB RAM / 64 GB SSD tier on high-availability infrastructure — a setup that costs under $10/month and handles tens of thousands of monthly executions without throttling, silent failures, or 3 AM outage pages.