PostgreSQL is widely regarded as the most advanced open-source relational database, and for good reason — it handles complex queries, supports JSONB for semi-structured data, provides robust ACID compliance, and scales from small applications to enterprise workloads. Whether you're running Django, Rails, Laravel, Node.js, or Go, PostgreSQL is a natural choice for your production database.
This guide covers installing PostgreSQL 16 on Ubuntu 24.04, configuring it for production use, tuning memory settings, setting up automated backups, enabling remote connections, and monitoring query performance. By the end, you'll have a production-ready PostgreSQL instance with proper security, optimized performance, and reliable backups.
Prerequisites
Before starting, you need:
- An Ubuntu 24.04 VPS. Deploy a Cloud VPS with at least 2 vCPU / 4 GB RAM. PostgreSQL uses shared memory aggressively, and 4 GB gives you room for meaningful shared_buffers allocation alongside your application.
- Root or sudo access. If you haven't set up your server yet, start with our Ubuntu VPS setup guide and security hardening guide.
Installing PostgreSQL 16 from the Official Repository
Ubuntu 24.04's default repositories include PostgreSQL, but the version may lag behind. Install from the official PostgreSQL Apt repository to get the latest stable release with timely security updates.
Add the PostgreSQL repository and signing key:
sudo apt install -y curl ca-certificates
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc \
--fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
echo "deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] \
https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" | \
sudo tee /etc/apt/sources.list.d/pgdg.list
Update the package list and install PostgreSQL 16:
sudo apt update
sudo apt install -y postgresql-16 postgresql-contrib-16
Verify the installation:
sudo systemctl status postgresql
psql --version
# psql (PostgreSQL) 16.6
PostgreSQL starts automatically after installation and is enabled to start on boot. The service runs as the postgres system user.
Initial Configuration
PostgreSQL creates a system user called postgres during installation. This user has superuser privileges within the database. Switch to this user to access the PostgreSQL prompt:
sudo -u postgres psql
You're now in the PostgreSQL interactive terminal. Set a password for the postgres superuser:
ALTER USER postgres WITH PASSWORD 'your-strong-postgres-password';
\q
Check the cluster is running and note the data directory:
pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
16 main 5432 online postgres /var/lib/postgresql/16/main /var/log/postgresql/postgresql-16-main.log
The default cluster is named main, runs on port 5432, and stores data in /var/lib/postgresql/16/main.
Creating Application Users and Databases
Never use the postgres superuser for application connections. Create a dedicated user with only the privileges your application needs.
sudo -u postgres psql
-- Create the application user
CREATE USER myapp_user WITH PASSWORD 'strong-random-password-here';
-- Create the application database
CREATE DATABASE myapp_db OWNER myapp_user;
-- Connect to the new database
\c myapp_db
-- Grant schema permissions
GRANT ALL PRIVILEGES ON SCHEMA public TO myapp_user;
-- Optional: ensure the user cannot create new databases (NOCREATEDB is the default)
ALTER USER myapp_user NOCREATEDB; -- grant CREATEDB instead only if the app needs it
\q
Test the connection with the new user:
psql -U myapp_user -d myapp_db -h localhost
If this fails with a peer authentication error, update the authentication configuration (covered in the pg_hba.conf section below).
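Applications usually connect with a libpq connection URI rather than through interactive psql. A typical DATABASE_URL value for the user and database created above (the password is the placeholder from earlier — substitute your own):

```
postgresql://myapp_user:strong-random-password-here@localhost:5432/myapp_db
```

Most drivers (psycopg, node-postgres, and JDBC with its own jdbc:postgresql:// prefix) accept this general form.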
Create additional users for read-only access (useful for reporting or analytics):
sudo -u postgres psql -d myapp_db
CREATE USER readonly_user WITH PASSWORD 'another-strong-password';
GRANT CONNECT ON DATABASE myapp_db TO readonly_user;
GRANT USAGE ON SCHEMA public TO readonly_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly_user;
-- Automatically grant SELECT on tables that myapp_user creates in future
-- (default privileges are tracked per creating role, hence FOR ROLE)
ALTER DEFAULT PRIVILEGES FOR ROLE myapp_user IN SCHEMA public GRANT SELECT ON TABLES TO readonly_user;
\q
Authentication Configuration (pg_hba.conf)
PostgreSQL's client authentication is controlled by pg_hba.conf. This file determines who can connect, from where, and how they authenticate.
sudo nano /etc/postgresql/16/main/pg_hba.conf
The default configuration uses peer authentication for local Unix-socket connections (matching the system username to the database username) and scram-sha-256 for TCP/IP connections. To allow password logins over the local socket as well, change the second local line from peer to scram-sha-256:
# TYPE DATABASE USER ADDRESS METHOD
# Local connections
local all postgres peer
local all all scram-sha-256
# IPv4 local connections
host all all 127.0.0.1/32 scram-sha-256
# IPv6 local connections
host all all ::1/128 scram-sha-256
If your application connects via localhost (which most do), scram-sha-256 authentication for IPv4 connections is correct and secure. Reload PostgreSQL to apply changes:
sudo systemctl reload postgresql
Test the password-based connection:
psql -U myapp_user -d myapp_db -h 127.0.0.1
Production Tuning
PostgreSQL's default configuration is extremely conservative — designed to run on minimal hardware without consuming too many resources. For a production VPS with 4 GB RAM, these defaults leave significant performance on the table. Edit the main configuration file:
sudo nano /etc/postgresql/16/main/postgresql.conf
Memory Settings
# Shared memory buffer pool — primary performance lever
# Set to 25% of total RAM (1 GB on a 4 GB server)
shared_buffers = 1GB
# Planner's estimate of available OS cache
# Set to 50-75% of total RAM
effective_cache_size = 3GB
# Memory per query operation (sorts, joins, hash tables)
# Per-connection setting — be conservative on high-connection servers
work_mem = 16MB
# Memory for maintenance operations (VACUUM, CREATE INDEX, ALTER TABLE)
maintenance_work_mem = 256MB
Write-Ahead Log (WAL) Settings
# WAL buffer size — the default (1/32 of shared_buffers, capped at 16MB)
# is adequate for most workloads
wal_buffers = 16MB
# Lower bound below which WAL files are recycled rather than removed
min_wal_size = 1GB
# WAL volume that triggers a checkpoint
max_wal_size = 4GB
# Spread checkpoint writes over 90% of the checkpoint interval
checkpoint_completion_target = 0.9
Connection Settings
# Maximum simultaneous connections
# Each backend uses several MB even when idle, plus up to work_mem per
# sort/hash while querying. With 4 GB RAM: keep under 100
max_connections = 100
# Enable huge pages for better memory performance (if OS supports it)
huge_pages = try
Query Planner Settings
# Encourage index scans over sequential scans (for SSD storage)
random_page_cost = 1.1
# Parallel query settings
max_parallel_workers_per_gather = 2
max_parallel_workers = 4
max_worker_processes = 8
Logging Settings
# Log queries that take longer than 1 second (value in milliseconds)
log_min_duration_statement = 1000
# Log checkpoints
log_checkpoints = on
# Log connection attempts
log_connections = on
log_disconnections = on
# Log lock waits
log_lock_waits = on
# Log every temporary file created (0 = no minimum size threshold)
log_temp_files = 0
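With log_min_duration_statement in effect, slow statements are written to the cluster log with a duration: marker. A small awk helper (a sketch — the log path assumes the default 16/main cluster) counts them and reports the worst offender:

```shell
#!/bin/bash
# Summarize slow-query log entries: count them and report the worst duration
slow_summary() {
  awk '/duration:/ {
    # find the "duration:" field and read the milliseconds that follow it
    for (i = 1; i <= NF; i++)
      if ($i == "duration:") { d = $(i + 1); break }
    n++
    if (d + 0 > max) max = d + 0
  }
  END { printf "slow queries: %d, worst: %.0f ms\n", n, max }' "$@"
}

LOG=/var/log/postgresql/postgresql-16-main.log
[ -r "$LOG" ] && slow_summary "$LOG" || true
```

Run it periodically (or from cron) to spot regressions before users report them.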
Apply the changes:
sudo systemctl restart postgresql
Verify the settings took effect:
sudo -u postgres psql -c "SHOW shared_buffers;"
sudo -u postgres psql -c "SHOW effective_cache_size;"
sudo -u postgres psql -c "SHOW work_mem;"
Tuning Reference by Server Size
| Setting | 4 GB RAM | 8 GB RAM | 16 GB RAM | 32 GB RAM |
|---|---|---|---|---|
| shared_buffers | 1 GB | 2 GB | 4 GB | 8 GB |
| effective_cache_size | 3 GB | 6 GB | 12 GB | 24 GB |
| work_mem | 16 MB | 32 MB | 64 MB | 128 MB |
| maintenance_work_mem | 256 MB | 512 MB | 1 GB | 2 GB |
| max_connections | 100 | 200 | 300 | 400 |
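The ratios behind this table can be approximated with a small helper script. Treat the output as a starting point, not a benchmark-verified answer:

```shell
#!/bin/bash
# Rough tuning helper mirroring the table above — a sketch, not a substitute
# for workload-specific benchmarking.
pg_tune() {
  local ram_gb="$1"
  echo "shared_buffers = $((ram_gb / 4))GB"            # 25% of RAM
  echo "effective_cache_size = $((ram_gb * 3 / 4))GB"  # 75% of RAM
  echo "work_mem = $((ram_gb * 4))MB"                  # 16MB per 4 GB
  echo "maintenance_work_mem = $((ram_gb * 64))MB"     # 256MB per 4 GB
}

# Detect total RAM (GB, rounded) and print suggested values
pg_tune "$(awk '/MemTotal/ {printf "%.0f", $2/1024/1024}' /proc/meminfo)"
```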
Need Predictable Database I/O?
Database performance is directly tied to storage I/O. PostgreSQL constantly reads pages from disk into shared_buffers, writes WAL segments, and performs checkpoint flushes. On shared VPS infrastructure, your NVMe bandwidth is shared with other tenants — a neighbor running large sequential scans or bulk inserts can saturate the storage controller, causing your query latencies to spike unpredictably.
A Dedicated VPS (VDS) provides physically dedicated CPU and guaranteed I/O bandwidth. This is the single most impactful upgrade for database workloads. When your PostgreSQL instance has dedicated resources, query planning estimates become accurate (because I/O latency is consistent), vacuum operations complete on schedule, and your 99th-percentile query times stop spiking during other tenants' peak hours.
For high-throughput databases serving thousands of queries per second, dedicated resources aren't a luxury — they're the difference between predictable response times and intermittent slowdowns that are impossible to debug at the application layer.
Your Data Is Protected: Ceph 3x Replication
MassiveGRID VPS instances run on Ceph distributed storage with 3x replication. Every block of data written by PostgreSQL is automatically replicated across three independent storage nodes. This means:
- No single point of failure. If a storage node fails, your data is intact on two other nodes. Ceph automatically re-replicates to restore the third copy.
- No data loss from hardware failure. Unlike local SSD setups where a drive failure means data loss, Ceph replication protects against disk, node, and even rack-level failures.
- Consistent I/O performance. Ceph distributes reads across replicas, providing parallel read bandwidth that exceeds single-disk performance.
This replication happens at the block storage level — transparently below PostgreSQL. Your database writes to what appears to be a local NVMe volume, but every write is confirmed only after it's persisted to three separate storage nodes. This is enterprise-grade storage durability, included with every MassiveGRID Cloud VPS.
Note that Ceph replication protects against hardware failure, not application-level mistakes. You still need logical backups (next section) to protect against accidental DROP TABLE or bad migrations.
Automated Backups with pg_dump and Cron
Set up daily automated backups using pg_dump. Create a backup script:
sudo mkdir -p /var/backups/postgresql
sudo chown postgres:postgres /var/backups/postgresql
sudo nano /usr/local/bin/pg-backup.sh
#!/bin/bash
# PostgreSQL daily backup script
set -euo pipefail  # abort on any failure, including pg_dump inside a pipe
BACKUP_DIR="/var/backups/postgresql"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
DAYS_TO_KEEP=14
# Backup all databases individually
for DB in $(sudo -u postgres psql -At -c "SELECT datname FROM pg_database WHERE datistemplate = false AND datname != 'postgres'"); do
FILENAME="${BACKUP_DIR}/${DB}_${TIMESTAMP}.sql.gz"
sudo -u postgres pg_dump "$DB" | gzip > "$FILENAME"
echo "Backed up: $DB -> $FILENAME ($(du -h "$FILENAME" | cut -f1))"
done
# Also create a full cluster backup
CLUSTER_FILE="${BACKUP_DIR}/full_cluster_${TIMESTAMP}.sql.gz"
sudo -u postgres pg_dumpall | gzip > "$CLUSTER_FILE"
echo "Full cluster backup: $CLUSTER_FILE ($(du -h "$CLUSTER_FILE" | cut -f1))"
# Remove backups older than retention period
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$DAYS_TO_KEEP -delete
echo "Cleaned up backups older than $DAYS_TO_KEEP days"
sudo chmod +x /usr/local/bin/pg-backup.sh
Test the backup script:
sudo /usr/local/bin/pg-backup.sh
Add a cron job to run daily at 2 AM:
sudo crontab -e
0 2 * * * /usr/local/bin/pg-backup.sh >> /var/log/pg-backup.log 2>&1
For a more comprehensive backup strategy including off-site storage, see our automated backups guide.
To restore from a backup:
# Restore a single database
gunzip -c /var/backups/postgresql/myapp_db_20260227_020000.sql.gz | sudo -u postgres psql myapp_db
# Restore the full cluster
gunzip -c /var/backups/postgresql/full_cluster_20260227_020000.sql.gz | sudo -u postgres psql
Test your backups. An untested backup is not a backup. Schedule periodic restore tests on a separate database to verify integrity.
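Between full restore tests, a quick sweep can at least catch truncated or corrupted archives. This sketch verifies that every backup decompresses cleanly — note that gzip -t checks archive integrity only, not that the SQL inside actually restores:

```shell
#!/bin/bash
# Verify that every .sql.gz backup in a directory decompresses without error
check_backups() {
  local dir="$1" status=0 f
  for f in "$dir"/*.sql.gz; do
    [ -e "$f" ] || continue          # directory empty: nothing to check
    if gzip -t "$f" 2>/dev/null; then
      echo "OK   $f"
    else
      echo "FAIL $f"
      status=1
    fi
  done
  return $status
}

check_backups /var/backups/postgresql
```

A nonzero exit status makes this easy to wire into cron alerting alongside the backup job itself.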
MassiveGRID Ubuntu VPS Includes
Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
→ Deploy a self-managed VPS — from $1.99/mo
→ Need dedicated resources? — from $19.80/mo
→ Want fully managed hosting? — we handle everything
Remote Connections
By default, PostgreSQL only listens on localhost. To allow connections from other servers (application servers, remote administration), you need to change two configuration files.
Step 1: Update listen_addresses
sudo nano /etc/postgresql/16/main/postgresql.conf
# Listen on all interfaces (or specify a specific IP)
listen_addresses = '*'
Step 2: Add Remote Access Rules to pg_hba.conf
sudo nano /etc/postgresql/16/main/pg_hba.conf
Add a line for the specific IP or subnet that needs access:
# Allow a specific application server
host myapp_db myapp_user 10.0.1.5/32 scram-sha-256
# Allow an entire subnet
host myapp_db myapp_user 10.0.1.0/24 scram-sha-256
Never add 0.0.0.0/0 unless you absolutely know what you're doing. This opens your database to the entire internet.
Step 3: Configure SSL for Remote Connections
PostgreSQL supports SSL encryption for client connections. On Ubuntu 24.04, SSL is enabled by default with a self-signed certificate:
sudo -u postgres psql -c "SHOW ssl;"
# on
For production remote connections, configure PostgreSQL to require SSL:
# In pg_hba.conf, use hostssl instead of host
hostssl myapp_db myapp_user 10.0.1.0/24 scram-sha-256
Step 4: Firewall Configuration
Open port 5432 only for trusted IPs:
sudo ufw allow from 10.0.1.5 to any port 5432
sudo ufw reload
Restart PostgreSQL (changing listen_addresses requires a full restart; pg_hba.conf edits alone only need a reload):
sudo systemctl restart postgresql
Test the remote connection from the application server:
psql -U myapp_user -d myapp_db -h db-server-ip
Basic Monitoring
PostgreSQL includes powerful built-in monitoring views. Enable the pg_stat_statements extension for query-level performance tracking:
sudo nano /etc/postgresql/16/main/postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000
pg_stat_statements.track = all
Restart PostgreSQL (this setting requires a restart, not just a reload):
sudo systemctl restart postgresql
Enable the extension in your database:
sudo -u postgres psql -d myapp_db -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"
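Statistics in pg_stat_statements accumulate until explicitly reset, so after a deploy or a tuning change it is often useful to clear them and let fresh numbers build up. As the postgres superuser:

```sql
-- Discard accumulated statement statistics and start fresh
SELECT pg_stat_statements_reset();
```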
Top 10 Slowest Queries (by Average Time)
SELECT
calls,
round(total_exec_time::numeric, 2) AS total_time_ms,
round(mean_exec_time::numeric, 2) AS avg_time_ms,
round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS pct_total,
left(query, 100) AS query_preview
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
Connection Count by State
SELECT
state,
count(*) AS connections,
round(100.0 * count(*) / sum(count(*)) OVER (), 1) AS pct
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY connections DESC;
Database Size
SELECT
datname AS database,
pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
WHERE datistemplate = false
ORDER BY pg_database_size(datname) DESC;
Table Sizes (Including Indexes)
SELECT
schemaname || '.' || tablename AS table,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS total_size,
pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) AS data_size,
pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename) - pg_relation_size(schemaname || '.' || tablename)) AS index_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC
LIMIT 20;
Cache Hit Ratio
SELECT
sum(heap_blks_read) AS blocks_read_from_disk,
sum(heap_blks_hit) AS blocks_read_from_cache,
round(100.0 * sum(heap_blks_hit) / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0), 2) AS cache_hit_pct
FROM pg_statio_user_tables;
A cache hit ratio above 99% means your shared_buffers is appropriately sized. Below 95% suggests you may need more RAM. For comprehensive server-level monitoring with dashboards and alerting, see our VPS performance optimization guide.
Useful Maintenance Queries
-- Find tables that need VACUUM
SELECT
schemaname || '.' || relname AS table,
n_dead_tup AS dead_rows,
n_live_tup AS live_rows,
round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC
LIMIT 10;
-- Find unused indexes (candidates for removal)
SELECT
schemaname || '.' || indexrelname AS index,
pg_size_pretty(pg_relation_size(indexrelid)) AS size,
idx_scan AS scans
FROM pg_stat_user_indexes
WHERE idx_scan = 0
AND indexrelname NOT LIKE '%pkey%'
ORDER BY pg_relation_size(indexrelid) DESC;
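If the first query surfaces a badly bloated table, you can vacuum it immediately rather than waiting for autovacuum to get around to it. The table name below is a placeholder:

```sql
-- Reclaim dead rows and refresh planner statistics for one table
VACUUM (VERBOSE, ANALYZE) public.your_table;
```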
Next Steps
With PostgreSQL installed and tuned for production, consider these extensions and advanced configurations:
- PostGIS — spatial and geographic data support
- Streaming replication — read replicas for scaling read traffic
- PgBouncer — connection pooling for high-connection-count applications
- pg_partman — automatic table partitioning for large datasets
- TimescaleDB — time-series data extension
Related guides:
- Ubuntu VPS initial setup — server fundamentals
- Security hardening — firewall, SSH, fail2ban
- Automated backups — off-site backup strategies
- VPS performance optimization — system-level tuning
- Deploy Django/Flask with Gunicorn — connect your Python app to PostgreSQL
Prefer Managed Databases?
Production PostgreSQL demands continuous attention: vacuum tuning, index maintenance, query plan analysis, WAL archiving, backup verification, minor version upgrades, and security patches. If database administration isn't your core expertise, MassiveGRID's Managed Dedicated Cloud Servers handle all of it. The managed service includes database optimization, automated backup verification, proactive monitoring, and 24/7 expert support — running on Proxmox HA clusters with automatic failover and Ceph triple-replicated NVMe storage. You focus on your application's SQL; we handle everything beneath it.