Memory is the most common bottleneck on a VPS. When your server runs out of physical RAM, the Linux kernel's OOM (Out of Memory) killer terminates processes — often your database or application — causing downtime. A properly configured swap file acts as a safety net, giving the system overflow space on disk when RAM is exhausted.

This guide covers everything about memory management on an Ubuntu 24.04 VPS: creating and sizing swap files, tuning kernel parameters like vm.swappiness and vfs_cache_pressure, monitoring memory with standard tools, configuring the OOM killer, deciding when to add swap versus upgrading RAM, and tuning memory limits for common applications like MySQL, PostgreSQL, Redis, and Node.js. By the end, your VPS will handle memory pressure gracefully instead of crashing.

Prerequisites

Before starting, you need:

MassiveGRID Ubuntu VPS — Ubuntu 24.04 LTS pre-installed, Proxmox HA cluster with automatic failover, Ceph 3x replicated NVMe storage, independent CPU/RAM/storage scaling, 12 Tbps DDoS protection, 4 global datacenter locations, 100% uptime SLA, and 24/7 human support rated 9.5/10. Deploy a self-managed VPS from $1.99/mo.

Understanding Memory on Linux

Before configuring anything, you need to understand how Linux uses memory. The kernel divides physical RAM into several categories: memory actively used by processes, buffers and page cache (filesystem data kept in RAM but released on demand), truly free memory, and the derived "available" estimate of how much memory new applications can claim without swapping.

Check your current memory status:

free -h

Example output on a 2 GB VPS:

               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       892Mi       124Mi        12Mi       932Mi       852Mi
Swap:             0B          0B          0B

In this example, only 124 MB is "free," but 852 MB is "available" because the kernel will release cache memory on demand. The available column is what you should monitor — when it approaches zero, you're in trouble.
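To script the same check, MemAvailable can be read straight from /proc/meminfo; a minimal sketch (the 200 MB threshold is an arbitrary example, not a kernel default):

```shell
#!/bin/sh
# Read MemAvailable (kB) straight from the kernel and warn when it gets low
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
avail_mb=$((avail_kb / 1024))

echo "Available memory: ${avail_mb} MB"

# 200 MB is an arbitrary example threshold, not a kernel default
if [ "$avail_mb" -lt 200 ]; then
    echo "WARNING: available memory is running low"
fi
```

This is the same number free -h shows in the available column, so it works well in cron jobs or alert scripts.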

Notice that Swap shows 0B — no swap is configured. This is the default on most VPS providers, including fresh Ubuntu 24.04 installations.

When and Why You Need Swap

Swap is disk space that the kernel uses as overflow when physical RAM is full. The kernel moves ("swaps out") inactive memory pages from RAM to disk, freeing RAM for active processes.
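To see which processes currently have pages in swap, each process exposes a VmSwap line in /proc/[pid]/status; a minimal sketch that lists the top offenders:

```shell
#!/bin/sh
# List the processes with the most pages swapped out (VmSwap, in kB)
report=$(
    for status in /proc/[0-9]*/status; do
        pid=${status#/proc/}; pid=${pid%/status}
        awk -v pid="$pid" '
            /^Name:/   {name=$2}
            /^VmSwap:/ {kb=$2}
            END {if (kb + 0 > 0) printf "%-8s %-10s %s\n", pid, kb, name}
        ' "$status" 2>/dev/null
    done | sort -k2 -rn | head -10
)

printf "%-8s %-10s %s\n" "PID" "SWAP_KB" "COMMAND"
echo "$report"
```

On a system with no swap in use, only the header prints.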

When Swap Helps

  - Absorbing temporary spikes: cron jobs, package upgrades, backups, or traffic bursts that briefly exceed RAM
  - Parking rarely used pages: idle daemons can be swapped out, freeing RAM for active processes
  - Acting as an OOM buffer: overflow space gives the kernel an alternative to killing processes

When Swap Doesn't Help

  - Chronic memory shortage: if your working set permanently exceeds RAM, the system thrashes, and disk is orders of magnitude slower than RAM
  - Latency-sensitive services: a swapped-out database or cache answers queries dramatically slower

Rule of thumb: Swap should handle temporary spikes, not chronic memory shortage. If your VPS is consistently using swap, it's time to upgrade RAM.

Creating a Swap File on Ubuntu 24.04

Ubuntu 24.04 does not create a swap file or swap partition by default on most VPS providers. Here's how to create one.

Step 1: Verify No Swap Exists

sudo swapon --show

If this produces no output, no swap is configured. You can also check with:

free -h | grep Swap

Step 2: Create the Swap File

Use fallocate to create a file of the desired size. We'll start with 2 GB:

sudo fallocate -l 2G /swapfile

If fallocate is not available or fails (some filesystems don't support it), use dd instead:

sudo dd if=/dev/zero of=/swapfile bs=1M count=2048 status=progress

Step 3: Set Correct Permissions

The swap file must only be readable by root:

sudo chmod 600 /swapfile

Verify:

ls -lh /swapfile
# -rw------- 1 root root 2.0G Feb 27 10:00 /swapfile

Step 4: Format as Swap

sudo mkswap /swapfile

Output:

Setting up swapspace version 1, size = 2 GiB (2147479552 bytes)
no label, UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

Step 5: Enable the Swap File

sudo swapon /swapfile

Verify it's active:

sudo swapon --show
NAME      TYPE SIZE USED PRIO
/swapfile file   2G   0B   -2
free -h
               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       892Mi       124Mi        12Mi       932Mi       852Mi
Swap:          2.0Gi          0B       2.0Gi

Step 6: Make Swap Persistent Across Reboots

Add the swap file to /etc/fstab:

echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

Verify the fstab entry:

grep swap /etc/fstab

Test that fstab is valid (prevents boot issues from typos):

sudo findmnt --verify

Setting the Right Swap Size

The optimal swap size depends on your VPS RAM, workload, and whether you use hibernation (which you won't on a VPS). Here are practical recommendations:

| VPS RAM | Recommended Swap | Reasoning |
|---------|------------------|-----------|
| 512 MB  | 1 GB    | 2x RAM — essential to prevent OOM on small instances |
| 1 GB    | 1-2 GB  | 1-2x RAM — gives solid buffer for spikes |
| 2 GB    | 2 GB    | 1x RAM — handles typical web server spikes |
| 4 GB    | 2-4 GB  | 0.5-1x RAM — database servers benefit from more swap |
| 8 GB    | 2-4 GB  | 0.25-0.5x RAM — enough for temporary spikes |
| 16 GB+  | 2-4 GB  | Fixed size — mostly a safety net at this point |

Don't over-allocate swap. A 1 GB VPS with 8 GB of swap won't perform well — if the system is using more than 1-2 GB of swap, it's thrashing and needs more RAM, not more swap. Excessively large swap files also waste disk space.
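To check whether usage has crossed that 1-2 GB line, swap consumption can be computed from /proc/meminfo; a small sketch that also handles the no-swap case:

```shell
#!/bin/sh
# Report swap usage as a percentage of total swap (values in kB)
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)

if [ "$swap_total" -eq 0 ]; then
    echo "No swap configured"
else
    swap_used=$((swap_total - swap_free))
    echo "Swap used: $((swap_used * 100 / swap_total))% (${swap_used} kB of ${swap_total} kB)"
fi
```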

Resizing an Existing Swap File

To change the swap file size, disable it first, resize it, then re-enable it. Note that swapoff has to move any swapped-out pages back into RAM, so it can take a while on a busy system:

# Disable the current swap file
sudo swapoff /swapfile

# Resize (example: change to 4 GB)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify
free -h

The /etc/fstab entry doesn't need to change since it references the file path, not the size.

Tuning vm.swappiness

The vm.swappiness parameter controls how aggressively the kernel moves memory pages from RAM to swap. It's a value between 0 and 200 (on modern kernels), with a default of 60.

Check the current value:

cat /proc/sys/vm/swappiness
# 60

For most VPS workloads (web servers, application servers, databases), a swappiness of 10 is optimal. This tells the kernel to prefer keeping data in RAM and only swap when memory pressure is high:

# Set temporarily (until reboot)
sudo sysctl vm.swappiness=10

# Set permanently
echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.d/99-swap.conf
sudo sysctl -p /etc/sysctl.d/99-swap.conf

Verify:

cat /proc/sys/vm/swappiness
# 10

Recommended Swappiness by Workload

| Workload | Recommended Swappiness | Why |
|----------|------------------------|-----|
| Database server (MySQL, PostgreSQL) | 10 | Keep database pages in RAM as long as possible |
| Redis / Memcached | 1-10 | In-memory stores should almost never swap |
| Web application server | 10-20 | Balance between app and cache memory |
| General purpose / development | 30-60 | Default behavior is fine |
| Build server / CI runner | 60 | Short-lived processes; default is acceptable |

Tuning vfs_cache_pressure

The vfs_cache_pressure parameter controls how aggressively the kernel reclaims memory used for caching directory and inode information (filesystem metadata). The default is 100.

For most VPS workloads, set it to 50:

# Set temporarily
sudo sysctl vm.vfs_cache_pressure=50

# Set permanently
echo 'vm.vfs_cache_pressure=50' | sudo tee -a /etc/sysctl.d/99-swap.conf
sudo sysctl -p /etc/sysctl.d/99-swap.conf

The combined /etc/sysctl.d/99-swap.conf file should now contain:

vm.swappiness=10
vm.vfs_cache_pressure=50

Monitoring Memory Usage

Regular monitoring helps you understand memory patterns and catch problems before they cause downtime.

free Command

The simplest memory overview:

free -h
               total        used        free      shared  buff/cache   available
Mem:           1.9Gi       1.2Gi        89Mi        12Mi       652Mi       542Mi
Swap:          2.0Gi       128Mi       1.9Gi

Key things to watch:

  - available: should stay well above zero; sustained values under ~10% of total RAM mean trouble
  - Swap used: some usage is normal, but steady growth indicates memory pressure
  - buff/cache: high values are healthy, since the kernel releases cache on demand

htop

Interactive process viewer with real-time memory display:

sudo apt install -y htop
htop

In htop, the top bars show CPU and memory usage. Press M to sort processes by memory usage, making it easy to identify which process is consuming the most RAM.

vmstat

vmstat shows memory, swap, I/O, and CPU statistics in a compact format:

# Show stats every 2 seconds, 10 iterations
vmstat 2 10
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0 131072  91200  45312 622080    0    2    12    48  120  210  3  1 95  1  0
 0  0 131072  90880  45312 622080    0    0     0     4   95  185  1  0 99  0  0

The critical columns for memory:

| Column | Meaning | Watch For |
|--------|---------|-----------|
| swpd | Virtual memory used (KB) | Steadily increasing = memory pressure |
| free | Free memory (KB) | Consistently near zero = low memory |
| si | Swap in (KB/s) — pages moved from swap to RAM | High values = actively reading from swap |
| so | Swap out (KB/s) — pages moved from RAM to swap | High values = actively swapping out (bad) |

If si and so are consistently non-zero, your server is actively swapping (thrashing). This is a strong signal that you need more RAM.
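You can detect active swapping without leaving vmstat running: the kernel's cumulative pswpin/pswpout counters (in pages) live in /proc/vmstat and can be sampled over an interval; a minimal sketch:

```shell
#!/bin/sh
# Sample cumulative swap counters from /proc/vmstat over a short interval
read_counter() { awk -v key="$1" '$1 == key {print $2}' /proc/vmstat; }

interval=2
in1=$(read_counter pswpin);  out1=$(read_counter pswpout)
sleep "$interval"
in2=$(read_counter pswpin);  out2=$(read_counter pswpout)

# Counters are in pages (typically 4 kB each); sustained non-zero rates = thrashing
echo "Swap in:  $(( (in2 - in1) / interval )) pages/s"
echo "Swap out: $(( (out2 - out1) / interval )) pages/s"
```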

smem — Per-Process Memory Reporting

smem provides accurate per-process memory usage including shared memory:

sudo apt install -y smem

# Show memory usage per process, sorted by RSS
sudo smem -r -s rss

# Show memory usage as percentages
sudo smem -p -s uss

# Summary by user
sudo smem -u

/proc/meminfo — Detailed Kernel Memory Stats

cat /proc/meminfo

This shows every memory metric the kernel tracks. The most useful lines:

grep -E "MemTotal|MemFree|MemAvailable|SwapTotal|SwapFree|Cached|Buffers|Dirty" /proc/meminfo

Continuous Monitoring Script

For ongoing monitoring, create a simple script that logs memory stats. Save as /usr/local/bin/memwatch.sh:

#!/bin/bash
# Log memory stats every 5 minutes
LOG="/var/log/memwatch.log"

while true; do
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
    MEMINFO=$(free -m | awk '/Mem:/ {printf "RAM: %sMB used / %sMB total (%sMB available)", $3, $2, $7}')
    SWAPINFO=$(free -m | awk '/Swap:/ {printf "Swap: %sMB used / %sMB total", $3, $2}')
    echo "$TIMESTAMP | $MEMINFO | $SWAPINFO" >> "$LOG"
    sleep 300
done
sudo chmod +x /usr/local/bin/memwatch.sh

Run it as a systemd service for reliability:

sudo nano /etc/systemd/system/memwatch.service
[Unit]
Description=Memory usage monitor
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/memwatch.sh
Restart=always

[Install]
WantedBy=multi-user.target
sudo systemctl enable --now memwatch
sudo systemctl status memwatch

For a complete monitoring stack with dashboards and alerting, see our Ubuntu VPS monitoring guide.

OOM Killer Configuration

The Linux OOM (Out of Memory) killer is a kernel mechanism of last resort. When the system is completely out of memory (RAM and swap), the OOM killer selects a process to terminate, freeing memory so the system can continue running.

How the OOM Killer Selects Victims

Each process has an OOM score visible in /proc/[pid]/oom_score. The kernel calculates this based on memory usage, process age, and other factors. The process with the highest score gets killed.

View OOM scores for all processes:

# List processes sorted by OOM score
printf "%-10s %-6s %-30s %s\n" "PID" "SCORE" "COMMAND" "ADJ"
for pid in $(ls /proc/ | grep -E '^[0-9]+$'); do
    if [ -f "/proc/$pid/oom_score" ] && [ -f "/proc/$pid/comm" ]; then
        score=$(cat /proc/$pid/oom_score 2>/dev/null)
        comm=$(cat /proc/$pid/comm 2>/dev/null)
        adj=$(cat /proc/$pid/oom_score_adj 2>/dev/null)
        if [ -n "$score" ] && [ "$score" -gt 0 ]; then
            printf "%-10s %-6s %-30s %s\n" "$pid" "$score" "$comm" "$adj"
        fi
    fi
done | sort -k2 -rn | head -20

Protecting Critical Processes

You can adjust the OOM score of critical processes to make them less likely (or more likely) to be killed. The oom_score_adj value ranges from -1000 (never kill) to 1000 (kill first).

Protect your database:

# Find the PostgreSQL process
pgrep -a postgres

# Protect it from OOM killer (set adj to -500)
echo -500 | sudo tee /proc/$(pgrep -o postgres)/oom_score_adj

To make this permanent, add it to the systemd service file. Create an override:

sudo systemctl edit postgresql

Add:

[Service]
OOMScoreAdjust=-500

For MySQL:

sudo systemctl edit mysql
[Service]
OOMScoreAdjust=-500

For Nginx (which should almost never be killed):

sudo systemctl edit nginx
[Service]
OOMScoreAdjust=-900

Checking OOM Kill History

When the OOM killer activates, it logs the event. Check for past OOM kills:

# Check kernel log for OOM events
journalctl -k | grep -i "oom\|out of memory\|killed process"

# Check dmesg
sudo dmesg | grep -i "oom\|out of memory\|killed process"

Example OOM kill log entry:

Out of memory: Killed process 12345 (mysqld) total-vm:2048000kB, anon-rss:1536000kB, file-rss:0kB, shmem-rss:0kB

Disabling Memory Overcommit

By default, Linux overcommits memory — it promises more memory than is physically available, betting that not all processes will use their full allocation simultaneously. You can change this behavior:

# Check current overcommit mode
cat /proc/sys/vm/overcommit_memory
# 0 = heuristic (default), 1 = always, 2 = never

For most VPS workloads, the default (0) is fine. If you run a database that allocates a large amount of memory upfront, mode 2 (never overcommit) prevents the OOM killer from activating unexpectedly:

# Set to strict (no overcommit)
echo 'vm.overcommit_memory=2' | sudo tee -a /etc/sysctl.d/99-swap.conf

# Set overcommit ratio (percentage of RAM + swap that can be allocated)
echo 'vm.overcommit_ratio=80' | sudo tee -a /etc/sysctl.d/99-swap.conf

sudo sysctl -p /etc/sysctl.d/99-swap.conf

Warning: With overcommit disabled, processes that try to allocate more memory than available will fail with "Cannot allocate memory" errors instead of being OOM-killed later. This is safer for databases but may cause issues with applications that allocate large amounts of memory upfront (like Java or Redis with large datasets).

When to Upgrade RAM vs. Use Swap

Swap and RAM upgrades solve different problems. Here's a decision framework:

| Symptom | Solution |
|---------|----------|
| Occasional spikes cause OOM kills, but normal usage fits in RAM | Add swap (1-2x RAM) |
| Swap is used but not growing; si/so are near zero | Swap is working correctly — no action needed |
| si/so are consistently non-zero (active thrashing) | Upgrade RAM immediately |
| Available memory is consistently below 100 MB | Upgrade RAM |
| Load average is high but CPU usage is low | Likely I/O wait from swap thrashing — upgrade RAM |
| Database queries are slow despite low CPU | Database working set doesn't fit in RAM — upgrade RAM |
| Application works fine but system processes (cron, apt) trigger OOM | Add swap — these are temporary spikes |

With a MassiveGRID Cloud VPS, you can scale RAM independently without upgrading CPU or storage. This means you can add RAM specifically to resolve memory pressure without paying for resources you don't need. For workloads where consistent memory performance is critical, a Dedicated VPS (VDS) provides guaranteed RAM that is never shared with other tenants.

Memory Management for Common Applications

Each application has its own memory configuration that determines how much RAM it consumes. Tuning these settings prevents any single application from monopolizing memory.

MySQL / MariaDB

MySQL's largest memory consumer is the InnoDB buffer pool. Edit /etc/mysql/mysql.conf.d/mysqld.cnf:

[mysqld]
# InnoDB buffer pool — typically 50-70% of available RAM
# On a 2 GB VPS with other services, allocate ~800 MB
innodb_buffer_pool_size = 800M

# InnoDB redo log size — larger = better write performance
# (deprecated in MySQL 8.0.30+ in favor of innodb_redo_log_capacity)
innodb_log_file_size = 128M

# Per-connection memory (watch out with many connections)
sort_buffer_size = 2M
read_buffer_size = 1M
join_buffer_size = 2M
tmp_table_size = 32M
max_heap_table_size = 32M

# Limit connections to prevent memory exhaustion
max_connections = 50

# Thread cache (reduces thread creation overhead)
thread_cache_size = 16

Memory calculation for MySQL:

# Approximate max memory usage:
# innodb_buffer_pool_size
# + max_connections * (sort_buffer_size + read_buffer_size + join_buffer_size)
# + innodb_log_file_size
# + overhead (~200 MB)
#
# Example: 800M + 50*(2M+1M+2M) + 128M + 200M = ~1378 MB
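The same estimate can be scripted; a sketch using the example values from the config above (adjust to match your own my.cnf):

```shell
#!/bin/sh
# Rough MySQL peak-memory estimate in MB, using the example values above
buffer_pool=800   # innodb_buffer_pool_size
max_conn=50       # max_connections
per_conn=5        # sort_buffer + read_buffer + join_buffer (2 + 1 + 2 MB)
log_file=128      # innodb_log_file_size
overhead=200      # rough baseline server overhead

total=$((buffer_pool + max_conn * per_conn + log_file + overhead))
echo "Estimated MySQL peak memory: ${total} MB"
```

If the estimate exceeds your RAM budget for MySQL, shrink the buffer pool or max_connections first.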

Restart MySQL after changes:

sudo systemctl restart mysql

PostgreSQL

Edit /etc/postgresql/16/main/postgresql.conf:

# Shared buffers — typically 25% of total RAM
# On a 2 GB VPS: 512 MB
shared_buffers = 512MB

# Effective cache size — how much memory the OS can use for caching
# Set to ~75% of total RAM
effective_cache_size = 1536MB

# Work memory — per-query sort/hash memory
# Be conservative; each query can use this amount
work_mem = 4MB

# Maintenance work memory — for VACUUM, CREATE INDEX, etc.
maintenance_work_mem = 128MB

# WAL buffers
wal_buffers = 16MB

# Max connections
max_connections = 50

Memory calculation for PostgreSQL:

# Approximate max memory usage:
# shared_buffers + (max_connections * work_mem) + maintenance_work_mem + overhead
# 512M + (50 * 4M) + 128M + ~200M = ~1040 MB
sudo systemctl restart postgresql

Redis

Redis stores everything in memory. Set a hard memory limit to prevent it from consuming all RAM. Edit /etc/redis/redis.conf:

# Maximum memory Redis can use
maxmemory 256mb

# Eviction policy when maxmemory is reached
# allkeys-lru: Evict least recently used keys (good for caches)
# noeviction: Return error on write (good for queues/data stores)
maxmemory-policy allkeys-lru
sudo systemctl restart redis-server

Monitor Redis memory usage:

redis-cli INFO memory | grep -E "used_memory_human|maxmemory_human|mem_fragmentation"

Node.js

Node.js (V8) sets a default old-space heap limit based on the Node version and available system memory (historically about 1.5 GB on 64-bit systems; recent releases scale it with available RAM, up to roughly 4 GB). For applications that need more or less, set it explicitly:

# Limit to 512 MB heap
node --max-old-space-size=512 app.js

# Or via environment variable
export NODE_OPTIONS="--max-old-space-size=512"
node app.js

If using PM2:

# In ecosystem.config.js
module.exports = {
  apps: [{
    name: 'myapp',
    script: './app.js',
    node_args: '--max-old-space-size=512',
    max_memory_restart: '600M'  // PM2 restarts if memory exceeds this
  }]
};

PHP-FPM

PHP-FPM's memory usage depends on the number of worker processes and each worker's memory consumption. Edit the pool configuration:

sudo nano /etc/php/8.3/fpm/pool.d/www.conf
; Static pool — fixed number of workers (predictable memory usage)
pm = static
pm.max_children = 5

; Or dynamic pool — scales up and down
; pm = dynamic
; pm.max_children = 10
; pm.start_servers = 3
; pm.min_spare_servers = 2
; pm.max_spare_servers = 5

Each PHP-FPM worker typically uses 30-80 MB depending on your application. Calculate the max:

# max_children * average_worker_memory = total PHP-FPM memory
# 5 workers * 60 MB = 300 MB
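Working the calculation the other way, pm.max_children can be derived from a memory budget; a sketch using the example figures above (the 60 MB per-worker average is illustrative; measure your own with smem or ps):

```shell
#!/bin/sh
# Derive pm.max_children from a PHP-FPM memory budget
budget_mb=300    # RAM allocated to PHP-FPM in your memory budget
worker_mb=60     # average memory per worker; measure yours on a loaded system

max_children=$((budget_mb / worker_mb))
echo "pm.max_children = ${max_children}"
```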

Set the per-script memory limit in php.ini:

; /etc/php/8.3/fpm/php.ini
memory_limit = 128M
sudo systemctl restart php8.3-fpm

Memory Budget Planning

On a VPS, you should plan your memory budget to ensure all services fit within available RAM. Here's an example for a 2 GB VPS running a typical LEMP stack:

| Component | Allocated RAM | Configuration |
|-----------|---------------|---------------|
| OS + systemd services | 200 MB | Base system overhead |
| Nginx | 50 MB | Low-memory footprint by default |
| PHP-FPM (5 workers) | 300 MB | pm.max_children = 5 |
| MySQL | 800 MB | innodb_buffer_pool_size = 800M |
| Redis | 256 MB | maxmemory 256mb |
| Monitoring / misc | 100 MB | Node exporter, fail2ban, etc. |
| Total | 1706 MB | |
| Buffer for filesystem cache | ~300 MB | Kernel uses free RAM for caching |
| Swap (safety net) | 2 GB | For temporary spikes |

If this budget exceeds your RAM, you have three options:

  1. Reduce application memory settings (smaller buffer pool, fewer workers)
  2. Upgrade RAM — with MassiveGRID VPS, scale RAM independently from CPU and storage
  3. Move services to separate VPS instances (e.g., database on one VPS, application on another)
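As a sanity check, the budget can be summed and compared against total RAM; a sketch using the 2 GB example values:

```shell
#!/bin/sh
# Sum the example 2 GB budget and report remaining headroom (MB)
total_ram=2048

# OS + Nginx + PHP-FPM + MySQL + Redis + monitoring, from the table above
budget=$((200 + 50 + 300 + 800 + 256 + 100))
headroom=$((total_ram - budget))

echo "Budget: ${budget} MB of ${total_ram} MB (headroom: ${headroom} MB)"
if [ "$headroom" -lt 0 ]; then
    echo "Over budget: reduce settings, upgrade RAM, or split services"
fi
```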

For a detailed guide on overall VPS performance tuning, see our Ubuntu VPS performance optimization guide.

Complete Setup Summary

Here's the full sequence of commands to set up swap and tune memory on a fresh Ubuntu 24.04 VPS:

# 1. Create a 2 GB swap file
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# 2. Tune kernel parameters
sudo tee /etc/sysctl.d/99-swap.conf <<EOF
vm.swappiness=10
vm.vfs_cache_pressure=50
EOF
sudo sysctl -p /etc/sysctl.d/99-swap.conf

# 3. Verify
free -h
swapon --show
cat /proc/sys/vm/swappiness
cat /proc/sys/vm/vfs_cache_pressure

What's Next