Redis is an in-memory data store that serves three critical roles in modern web applications: object caching (reducing database load by orders of magnitude), session storage (fast, centralized session management for stateless application servers), and message queuing (background job processing for tasks like email sending, image resizing, and webhook delivery). It's the Swiss Army knife of application infrastructure.

This guide covers installing Redis on Ubuntu 24.04, securing it, configuring memory management, and setting it up for all three use cases. Whether you're running WordPress, Laravel, Django, Node.js, or Rails, adding Redis to your stack is one of the highest-impact performance improvements you can make.

What Redis Does and Why You Need It

Without Redis, every page load hits your database. A WordPress site generating 100 requests per minute might execute 800+ database queries per minute — for the same content. A Django or Laravel API might query the same user permissions table on every single request.

Redis eliminates this waste by keeping frequently accessed data in RAM, where lookups complete in microseconds instead of the milliseconds a database round trip costs. A typical Redis instance handles 100,000+ operations per second on modest hardware.

Prerequisites

You need:

- An Ubuntu 24.04 server (a fresh VPS works fine)
- A user with sudo privileges
- Basic familiarity with the Linux command line

Installing Redis from the Official Repository

Ubuntu 24.04 includes Redis in its default repositories, but the official Redis repository provides the latest stable version with timely security patches.

Add the Redis repository:

curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list

Install Redis:

sudo apt update
sudo apt install -y redis-server

Verify the installation:

redis-server --version
# Redis server v=7.4.2 sha=00000000:0 malloc=jemalloc-5.3.0 bits=64 build=...

sudo systemctl status redis-server

Redis starts automatically and is enabled to start on boot.

Basic Redis CLI and Testing

Connect to Redis using the CLI:

redis-cli

Test basic operations:

127.0.0.1:6379> PING
PONG

127.0.0.1:6379> SET greeting "Hello from Redis"
OK

127.0.0.1:6379> GET greeting
"Hello from Redis"

127.0.0.1:6379> SET counter 0
OK

127.0.0.1:6379> INCR counter
(integer) 1

127.0.0.1:6379> INCR counter
(integer) 2

127.0.0.1:6379> DEL greeting counter
(integer) 2

127.0.0.1:6379> EXIT

Check Redis server information:

redis-cli INFO server | head -20
redis-cli INFO memory
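The INCR pattern shown above is the basis for simple rate limiting: the first INCR on a key creates it at 1, and an EXPIRE gives the counter a fixed time window. Here is a minimal Python sketch of that idea — the `FakeRedis` class is an in-memory stand-in for a real client, not an actual Redis API:

```python
import time

class FakeRedis:
    """Tiny in-memory stand-in for a Redis client (illustration only)."""
    def __init__(self):
        self.data = {}  # key -> [value, expires_at or None]

    def _get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self.data[key]  # Redis also expires keys lazily on access
            return None
        return value

    def incr(self, key):
        current = self._get(key)
        if current is None:
            self.data[key] = [1, None]
            return 1
        self.data[key][0] = current + 1
        return current + 1

    def expire(self, key, seconds):
        if key in self.data:
            self.data[key][1] = time.monotonic() + seconds

def allow_request(r, client_id, limit=3, window=60):
    """Fixed-window rate limit: at most `limit` requests per `window` seconds."""
    key = f"ratelimit:{client_id}"
    count = r.incr(key)
    if count == 1:          # first request in this window: start the clock
        r.expire(key, window)
    return count <= limit

r = FakeRedis()
results = [allow_request(r, "1.2.3.4") for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

With a real client the flow is identical — two commands per request — which is why Redis is a common choice for API throttling.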

Security Configuration

By default, Redis has no authentication and listens on all interfaces — a serious security risk. Edit the Redis configuration:

sudo nano /etc/redis/redis.conf

Set a Strong Password

Generate a strong password first:

openssl rand -base64 48

Then find and uncomment the requirepass directive, substituting the generated value:

requirepass your-very-strong-redis-password-at-least-32-characters

Bind to Localhost Only

Unless you need remote Redis access, bind only to the loopback interface:

bind 127.0.0.1 -::1

Disable Dangerous Commands

Rename or disable commands that should never be exposed in production. Renaming a command to an empty string disables it entirely; renaming it to an obscure string keeps it available to administrators who know the new name. (On Redis 6+, ACLs are the preferred mechanism, but rename-command still works.)

# Disable FLUSHDB and FLUSHALL entirely (they delete all data)
rename-command FLUSHDB ""
rename-command FLUSHALL ""

# Rename CONFIG (changes settings at runtime) to an obscure name
rename-command CONFIG "REDIS_CONFIG_b7f8a3c2"

# Disable DEBUG (exposes internals)
rename-command DEBUG ""

# Rename SHUTDOWN (stops the server) to an obscure name
rename-command SHUTDOWN "REDIS_SHUTDOWN_e4d1f9a6"

Keep Protected Mode Enabled

Protected mode is enabled by default and refuses connections from external interfaces when no password is set. Leave it on:

protected-mode yes

Apply the changes:

sudo systemctl restart redis-server

Test authentication:

redis-cli
127.0.0.1:6379> PING
(error) NOAUTH Authentication required.

127.0.0.1:6379> AUTH your-very-strong-redis-password-at-least-32-characters
OK

127.0.0.1:6379> PING
PONG

Or authenticate directly from the command line:

redis-cli -a your-very-strong-redis-password-at-least-32-characters PING

Memory Configuration

Redis stores everything in RAM. Without memory limits, it will grow until it consumes all available memory and the OS kills it (or your application). Always set explicit memory limits.

sudo nano /etc/redis/redis.conf

Set Maximum Memory

# Allocate 512 MB to Redis (adjust based on your server size)
maxmemory 512mb

Set Eviction Policy

When Redis reaches maxmemory, the eviction policy determines which keys to remove. The right policy depends on your use case:

# For caching: evict least recently used keys (most common)
maxmemory-policy allkeys-lru

# For sessions: evict keys with TTL set, least recently used
# maxmemory-policy volatile-lru

# For queues: don't evict anything — return errors on write when full
# maxmemory-policy noeviction

| Policy | Behavior | Best For |
| --- | --- | --- |
| allkeys-lru | Evict least recently used key from all keys | General caching |
| volatile-lru | Evict LRU key only from keys with TTL | Mixed cache + persistent data |
| allkeys-lfu | Evict least frequently used key | Frequency-based caching |
| volatile-ttl | Evict key with shortest remaining TTL | Time-sensitive caches |
| noeviction | Return error on writes when full | Queues, critical data |
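To make the allkeys-lru behavior concrete, here is a small Python simulation of LRU eviction built on OrderedDict. This illustrates the policy's semantics only — Redis itself uses an approximated, sampling-based LRU rather than a strict ordering:

```python
from collections import OrderedDict

class LruCache:
    """Illustrative allkeys-lru: evict the least recently used key when full."""
    def __init__(self, max_keys):
        self.max_keys = max_keys
        self.data = OrderedDict()  # insertion order tracks recency

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)          # touching a key makes it "recent"
        self.data[key] = value
        if len(self.data) > self.max_keys:
            evicted, _ = self.data.popitem(last=False)  # drop the oldest entry
            print(f"evicted {evicted}")

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)              # reads also refresh recency
        return self.data[key]

cache = LruCache(max_keys=3)
for k in ("a", "b", "c"):
    cache.set(k, k.upper())
cache.get("a")            # "a" is now the most recently used key
cache.set("d", "D")       # cache is full: evicts "b", the LRU key
print(sorted(cache.data))  # ['a', 'c', 'd']
```

Note that reading a key protects it from eviction — which is exactly why allkeys-lru keeps your hottest cache entries resident.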

Restart Redis:

sudo systemctl restart redis-server

Verify memory configuration:

redis-cli -a your-password INFO memory

Use Case: Object Caching

Object caching stores the results of expensive database queries or API calls in Redis so subsequent requests skip the database entirely.
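All of the framework integrations below share the same cache-aside pattern: check Redis first, fall back to the database on a miss, and write the result back with a TTL. A language-agnostic Python sketch, with a plain dict standing in for Redis and a fake query function standing in for the database (both are illustrative, not real APIs):

```python
import json

cache = {}      # stand-in for Redis; real code would SET with an EX ttl
db_queries = 0  # counts trips to the "database"

def query_database(user_id):
    """Stand-in for an expensive SQL query."""
    global db_queries
    db_queries += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:          # hit: skip the database entirely
        return json.loads(cached)
    user = query_database(user_id)  # miss: pay the full cost once
    cache[key] = json.dumps(user)   # with real Redis: SET key value EX 3600
    return user

for _ in range(3):
    get_user(42)
print(db_queries)  # 1 — only the first call reached the database
```

The TTL matters: without it, stale data lives forever; with it, the worst case is serving data that is one TTL old.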

WordPress with Redis Object Cache

WordPress is one of the biggest beneficiaries of Redis caching. Every page load can trigger 30-100+ database queries. With Redis object cache, repeated queries are served from memory. If you're running WordPress on Ubuntu, see our multi-site WordPress hosting guide for the full setup.

Install the Redis Object Cache plugin and add to wp-config.php:

define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_REDIS_PASSWORD', 'your-redis-password');
define('WP_REDIS_DATABASE', 0);
define('WP_REDIS_TIMEOUT', 1);
define('WP_REDIS_READ_TIMEOUT', 1);

Laravel with Redis Cache

Laravel has built-in Redis support. Install the PHP Redis extension and configure .env:

CACHE_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=your-redis-password
REDIS_PORT=6379

Use cache in your controllers (see our Laravel deployment guide for the complete setup):

// Cache a database query for 1 hour
$users = Cache::remember('active_users', 3600, function () {
    return User::where('active', true)->get();
});

// Cache with tags for selective invalidation
Cache::tags(['users'])->put('user_profile_' . $id, $profile, 3600);
Cache::tags(['users'])->flush(); // Clear all user caches

Django with Redis Cache

Install django-redis:

pip install django-redis

Configure in settings.py:

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
            "PASSWORD": "your-redis-password",
            "SOCKET_CONNECT_TIMEOUT": 5,
            "SOCKET_TIMEOUT": 5,
            "COMPRESSOR": "django_redis.compressors.zlib.ZlibCompressor",
        },
        "KEY_PREFIX": "myapp",
        "TIMEOUT": 3600,  # Default TTL: 1 hour
    }
}

Use the cache in views:

from django.core.cache import cache

# Cache a queryset
products = cache.get('featured_products')
if products is None:
    products = Product.objects.filter(featured=True).select_related('category')
    cache.set('featured_products', products, timeout=3600)

# Cache entire views with decorator
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)  # Cache for 15 minutes
def product_list(request):
    ...

Node.js with ioredis

npm install ioredis

const Redis = require('ioredis');

const redis = new Redis({
  host: '127.0.0.1',
  port: 6379,
  password: 'your-redis-password',
  db: 0,
  maxRetriesPerRequest: 3
});

// Cache a database query
async function getUser(userId) {
  const cacheKey = `user:${userId}`;
  const cached = await redis.get(cacheKey);
  
  if (cached) {
    return JSON.parse(cached);
  }
  
  const user = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
  await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600); // 1 hour TTL
  return user;
}

Use Case: Session Store

Storing sessions in Redis instead of files or databases provides sub-millisecond lookups and enables multiple application servers to share session state.
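The core of every Redis session backend below is the same: an opaque token maps to serialized session data with a TTL, so idle sessions expire automatically. A Python sketch of that mechanism, using a dict with explicit expiry times in place of Redis SETEX (the `SessionStore` class is illustrative, not a real library):

```python
import json
import secrets
import time

class SessionStore:
    """Illustrative session store: opaque token -> JSON payload with a TTL."""
    def __init__(self, ttl=86400):
        self.ttl = ttl
        self.data = {}  # token -> (payload, expires_at)

    def create(self, session):
        token = secrets.token_urlsafe(32)  # the opaque ID that goes in the cookie
        self.data[token] = (json.dumps(session), time.monotonic() + self.ttl)
        return token

    def load(self, token):
        entry = self.data.get(token)
        if entry is None:
            return None
        payload, expires_at = entry
        if time.monotonic() >= expires_at:  # Redis would expire this key itself
            del self.data[token]
            return None
        return json.loads(payload)

store = SessionStore(ttl=86400)
token = store.create({"user_id": 7, "role": "admin"})
print(store.load(token)["role"])   # admin
print(store.load("forged-token"))  # None
```

Because the store is centralized, any application server holding the cookie's token can load the same session — which is what makes stateless horizontal scaling possible.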

PHP Sessions

Install the PHP Redis extension:

sudo apt install -y php-redis

Update php.ini:

session.save_handler = redis
session.save_path = "tcp://127.0.0.1:6379?auth=your-redis-password&database=1"

Using database 1 separates sessions from cache data (database 0). Restart PHP-FPM:

sudo systemctl restart php8.3-fpm

Express.js Sessions

npm install express-session connect-redis ioredis

const session = require('express-session');
const RedisStore = require('connect-redis').default;
const Redis = require('ioredis');

const redisClient = new Redis({
  host: '127.0.0.1',
  port: 6379,
  password: 'your-redis-password',
  db: 1
});

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,      // Requires HTTPS
    httpOnly: true,     // Not accessible via JavaScript
    maxAge: 86400000,   // 24 hours
    sameSite: 'lax'
  }
}));

Django Sessions

With django-redis already configured (from the caching section above):

SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"

MassiveGRID Ubuntu VPS Includes

Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10

→ Deploy a self-managed VPS — from $1.99/mo
→ Need dedicated resources? — from $19.80/mo
→ Want fully managed hosting? — we handle everything

Use Case: Queue Backend

Redis excels as a message broker for background job processing. Instead of handling slow operations during HTTP requests, you push jobs into a Redis-backed queue and worker processes handle them asynchronously.
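Under the hood, Redis queues are usually built on a list: producers LPUSH serialized jobs and workers BRPOP them, so each job is delivered to exactly one worker. A minimal Python sketch of that push/pop flow, with a deque standing in for the Redis list (not a real client API):

```python
import json
from collections import deque

queue = deque()  # stand-in for a Redis list, e.g. the key "queue:email"

def enqueue(name, payload):
    """Producer side (inside the HTTP request): LPUSH a serialized job."""
    queue.appendleft(json.dumps({"name": name, "data": payload}))

def work_one():
    """Worker side: BRPOP the oldest job — each job reaches exactly one worker."""
    if not queue:
        return None  # a real worker would block here waiting for work
    job = json.loads(queue.pop())
    # ...do the slow work: send the email, resize the image, call the webhook...
    return f"processed {job['name']} for {job['data']['to']}"

enqueue("welcome-email", {"to": "user@example.com"})
enqueue("welcome-email", {"to": "second@example.com"})
print(work_one())  # processed welcome-email for user@example.com
print(work_one())  # processed welcome-email for second@example.com
```

Libraries like BullMQ and Celery add retries, scheduling, and result tracking on top, but this producer/worker hand-off is the foundation.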

BullMQ (Node.js)

npm install bullmq

const { Queue, Worker } = require('bullmq');

const connection = {
  host: '127.0.0.1',
  port: 6379,
  password: 'your-redis-password'
};

// Create a queue
const emailQueue = new Queue('email', { connection });

// Add a job to the queue
await emailQueue.add('welcome-email', {
  to: 'user@example.com',
  subject: 'Welcome!',
  template: 'welcome'
});

// Create a worker to process jobs
const worker = new Worker('email', async (job) => {
  console.log(`Processing ${job.name} for ${job.data.to}`);
  await sendEmail(job.data);
}, { connection, concurrency: 5 });

worker.on('completed', (job) => {
  console.log(`Job ${job.id} completed`);
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed:`, err);
});

Celery (Python / Django / Flask)

pip install celery redis

Create celery_app.py (Django example):

import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()

Add to settings.py:

CELERY_BROKER_URL = 'redis://:your-redis-password@127.0.0.1:6379/2'
CELERY_RESULT_BACKEND = 'redis://:your-redis-password@127.0.0.1:6379/3'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'

Create a task:

from celery import shared_task

@shared_task
def send_welcome_email(user_id):
    user = User.objects.get(id=user_id)
    # Send the email...
    return f"Email sent to {user.email}"

Call the task asynchronously:

send_welcome_email.delay(user.id)  # Returns immediately

Start the Celery worker:

celery -A myproject worker --loglevel=info --concurrency=4

Create a systemd service for the Celery worker so it starts on boot:

sudo nano /etc/systemd/system/celery.service

[Unit]
Description=Celery Worker
After=redis-server.service

[Service]
Type=forking
User=deploy
Group=deploy
RuntimeDirectory=celery
WorkingDirectory=/var/www/myapp/src
ExecStart=/var/www/myapp/venv/bin/celery multi start worker \
    -A myproject \
    --pidfile=/var/run/celery/%n.pid \
    --logfile=/var/log/celery/%n%I.log \
    --loglevel=INFO \
    --concurrency=4
ExecStop=/var/www/myapp/venv/bin/celery multi stopwait worker \
    --pidfile=/var/run/celery/%n.pid
ExecReload=/var/www/myapp/venv/bin/celery multi restart worker \
    -A myproject \
    --pidfile=/var/run/celery/%n.pid \
    --logfile=/var/log/celery/%n%I.log \
    --loglevel=INFO \
    --concurrency=4
Restart=on-failure

[Install]
WantedBy=multi-user.target
sudo mkdir -p /var/run/celery /var/log/celery
sudo chown deploy:deploy /var/run/celery /var/log/celery
sudo systemctl daemon-reload
sudo systemctl enable celery
sudo systemctl start celery

Laravel Horizon

Laravel's Horizon provides a dashboard and management layer for Redis-powered queues. Install it:

composer require laravel/horizon
php artisan horizon:install

Configure .env:

QUEUE_CONNECTION=redis

Start Horizon:

php artisan horizon

For production, create a systemd service or use Supervisor to keep Horizon running. See our Laravel deployment guide for the full setup.

Persistence: RDB vs AOF

By default, Redis periodically saves snapshots of the dataset to disk (RDB). You can also enable Append Only File (AOF) for more durable persistence. Understanding the trade-offs:

RDB (Redis Database Backup)

RDB creates point-in-time snapshots at configured intervals:

# Example snapshot thresholds in redis.conf
save 900 1     # Save if at least 1 key changed in 900 seconds
save 300 10    # Save if at least 10 keys changed in 300 seconds
save 60 10000  # Save if at least 10000 keys changed in 60 seconds

# Output file
dbfilename dump.rdb
dir /var/lib/redis

Pros: Compact, fast restarts, minimal performance impact during normal operation.

Cons: You can lose the data written since the last snapshot (up to 15 minutes with the thresholds above).

AOF (Append Only File)

AOF logs every write operation, providing near-zero data loss:

appendonly yes
appendfilename "appendonly.aof"

# Sync policy:
# everysec — fsync once per second (good balance)
# always — fsync on every write (safest, slowest)
# no — let the OS decide when to flush (fastest, least safe)
appendfsync everysec

Recommended: Both RDB + AOF

For production, enable both:

save 900 1
save 300 10
save 60 10000

appendonly yes
appendfsync everysec

# Auto-rewrite AOF when it doubles in size
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

When both are enabled, Redis rebuilds state from the AOF on restart because it is the more complete record; RDB snapshots remain useful for compact backups and faster full restores.

Persistence competing with application I/O? RDB snapshots and AOF rewrites create disk I/O spikes. On shared VPS infrastructure, this I/O competes with other tenants' workloads, potentially causing latency spikes during saves. A Dedicated VPS (VDS) provides guaranteed I/O bandwidth, ensuring Redis persistence operations complete predictably without impacting your application's response times. This is especially important when running Redis alongside PostgreSQL — both services generate significant I/O, and dedicated resources prevent them from starving each other during peak persistence activity.

Redis as a Systemd Service and Monitoring

Redis installs as a systemd service automatically. Verify its configuration:

sudo systemctl status redis-server
sudo systemctl is-enabled redis-server

Monitoring Redis Health

Create a simple monitoring script that checks key Redis metrics:

sudo nano /usr/local/bin/redis-health.sh

#!/bin/bash
REDIS_PASS="your-redis-password"

echo "=== Redis Health Check ==="

# Memory usage
echo -e "\n--- Memory ---"
redis-cli -a "$REDIS_PASS" --no-auth-warning INFO memory | grep -E "used_memory_human|used_memory_peak_human|maxmemory_human|mem_fragmentation_ratio"

# Connected clients
echo -e "\n--- Clients ---"
redis-cli -a "$REDIS_PASS" --no-auth-warning INFO clients | grep -E "connected_clients|blocked_clients"

# Key statistics
echo -e "\n--- Keyspace ---"
redis-cli -a "$REDIS_PASS" --no-auth-warning INFO keyspace

# Hit/miss ratio
echo -e "\n--- Cache Performance ---"
HITS=$(redis-cli -a "$REDIS_PASS" --no-auth-warning INFO stats | grep keyspace_hits | cut -d: -f2 | tr -d '\r')
MISSES=$(redis-cli -a "$REDIS_PASS" --no-auth-warning INFO stats | grep keyspace_misses | cut -d: -f2 | tr -d '\r')
if [ "$((HITS + MISSES))" -gt 0 ]; then
    RATIO=$(echo "scale=2; $HITS * 100 / ($HITS + $MISSES)" | bc)
    echo "Hit ratio: ${RATIO}% (hits: $HITS, misses: $MISSES)"
fi

# Persistence status
echo -e "\n--- Persistence ---"
redis-cli -a "$REDIS_PASS" --no-auth-warning INFO persistence | grep -E "rdb_last_save_time|rdb_last_bgsave_status|aof_enabled|aof_last_bgrewrite_status"

# Replication (if applicable)
echo -e "\n--- Replication ---"
redis-cli -a "$REDIS_PASS" --no-auth-warning INFO replication | grep -E "role|connected_slaves"

Make it executable:

sudo chmod +x /usr/local/bin/redis-health.sh

Real-Time Monitoring

Watch Redis commands in real time (useful for debugging):

redis-cli -a your-redis-password MONITOR

Watch latency:

redis-cli -a your-redis-password --latency

Track slow queries (commands taking longer than 10ms):

# Configure the slow log at runtime (if you renamed CONFIG earlier, use the
# renamed command — or set these values in redis.conf and restart)
redis-cli -a your-redis-password CONFIG SET slowlog-log-slower-than 10000
redis-cli -a your-redis-password CONFIG SET slowlog-max-len 128

# View slow queries
redis-cli -a your-redis-password SLOWLOG GET 10

For comprehensive server monitoring including Redis metrics, dashboards, and alerting, see our VPS performance optimization guide.

Redis Memory Usage by Database

Check how many keys exist in each database:

redis-cli -a your-redis-password INFO keyspace
# Keyspace
db0:keys=1523,expires=1400,avg_ttl=1800000
db1:keys=245,expires=245,avg_ttl=86400000
db2:keys=18,expires=0,avg_ttl=0

Scan for large keys consuming the most memory:

redis-cli -a your-redis-password --bigkeys

Redis Configuration Best Practices Summary

Here is a consolidated reference for a production Redis configuration:

# /etc/redis/redis.conf — Production configuration

# Network
bind 127.0.0.1 -::1
port 6379
protected-mode yes
tcp-backlog 511
timeout 300
tcp-keepalive 300

# Security
requirepass your-very-strong-redis-password-at-least-32-characters
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command DEBUG ""

# Memory
maxmemory 512mb
maxmemory-policy allkeys-lru

# Persistence (RDB + AOF)
save 900 1
save 300 10
save 60 10000
dbfilename dump.rdb
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Logging
loglevel notice
logfile /var/log/redis/redis-server.log

# Slow log
slowlog-log-slower-than 10000
slowlog-max-len 128

# Data directory
dir /var/lib/redis

Prefer Managed Caching Infrastructure?

Redis configuration, memory management, persistence tuning, and security updates are ongoing concerns — especially when Redis is critical to your application's response times. Misconfigured eviction policies lead to cache stampedes. Unbounded memory growth crashes your server. Missing persistence means data loss on restart. If you'd rather not manage this infrastructure, MassiveGRID's Managed Dedicated Cloud Servers include Redis administration as part of the managed service. The team handles configuration optimization, memory monitoring, persistence verification, security patching, and 24/7 incident response — all running on Proxmox HA clusters with automatic failover and triple-replicated Ceph NVMe storage.

Next Steps