Database migrations are the most nerve-wracking part of any server move. Unlike application servers, where a failed deployment can be rolled back in seconds, a botched database migration can mean corrupted records, lost transactions, and hours of painful recovery. Data integrity is paramount, and every minute of downtime can translate directly into lost revenue and eroded user trust.

Whether you are moving a MySQL instance off an aging VPS, migrating PostgreSQL to a cloud server with better I/O performance, or relocating a MongoDB replica set to a new data center, the fundamentals remain the same: plan thoroughly, test relentlessly, and always have a rollback strategy. This guide covers the strategies, commands, and verification steps you need to migrate your database server to a new cloud server with minimal or zero downtime.

Database Migration Strategies

Before you touch a single command, you need to choose the right migration strategy. The best approach depends on your database size, acceptable downtime window, and operational complexity tolerance. There are three primary methods for migrating a database server.

1. Dump and Restore

The dump-and-restore approach is the simplest and most universally compatible method. You export the entire database to a file (SQL dump, custom archive, or BSON), transfer that file to the new server, and import it. Every major database engine supports this pattern natively.

The trade-off is downtime. Your application must stop writing to the old database before you take the final dump, and it cannot resume until the restore completes on the new server. For a 5 GB database on a fast network, this might take minutes. For a 500 GB database, it could take hours. Dump and restore is best suited for smaller databases (under 50 GB), development and staging environments, or any situation where a maintenance window is acceptable.
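
To sanity-check whether a maintenance window is realistic, the transfer math is simple enough to script. This is a rough planning sketch only; the dump size and link speed below are assumptions you would replace with your own measurements:

```shell
#!/usr/bin/env bash
# Rough estimate of the transfer leg of a dump-and-restore migration.
# Both inputs are assumptions; measure your own dump size and throughput.
DUMP_GB=50        # compressed dump size in gigabytes (assumed)
LINK_MBPS=500     # usable network throughput in megabits/s (assumed)

# gigabytes -> megabits: GB * 8 * 1000, then divide by link speed
transfer_seconds=$(( DUMP_GB * 8 * 1000 / LINK_MBPS ))
echo "Estimated transfer time: ${transfer_seconds}s (~$(( transfer_seconds / 60 )) min)"
# prints: Estimated transfer time: 800s (~13 min)
```

Keep in mind this covers only the network transfer; dump generation and the restore itself usually dominate, so pad the window generously.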

2. Replication-Based Migration

Replication-based migration is the gold standard for production databases that cannot afford significant downtime. The idea is straightforward: configure the new server as a replica (slave) of the old server (master), let it catch up with all historical data, then promote the replica to become the new primary. The actual cutover takes seconds rather than hours.

This approach requires that both servers be accessible to each other over the network during the synchronization period, and it adds operational complexity. You need to understand replication configuration, monitor replication lag, and coordinate the promotion carefully. But the payoff is near-zero downtime, often measured in single-digit seconds. Use replication-based migration for production databases of any size where downtime must be minimized.

3. File-Level Copy

File-level copy means stopping the database process entirely, then copying the raw data directory from the old server to the new one using rsync or scp. This bypasses the database engine's export/import logic entirely, making it the fastest method for very large databases where a dump would take prohibitively long to generate.

The critical requirement is that the database must be completely stopped during the copy. There is no way to take a consistent file-level snapshot of a running database without specialized tools. Additionally, both servers must run the same database version and operating system architecture. File-level copy is best for very large databases (hundreds of gigabytes or more) when a maintenance window is available and dump/restore would be too slow.

Strategy          Downtime           Complexity     Best For
Dump & Restore    Minutes to hours   Low            Small databases, dev/staging
Replication       Seconds            Medium–High    Production, any size
File-Level Copy   Minutes to hours   Low            Very large databases

Migrating MySQL/MariaDB

MySQL and MariaDB remain the most widely deployed open-source databases, and their migration tooling is mature and well-documented. Here are three methods in order of increasing sophistication.

Method 1: mysqldump with --single-transaction

The mysqldump utility is the standard tool for logical backups. The --single-transaction flag is critical for InnoDB tables because it takes a consistent snapshot without locking the database, allowing reads and writes to continue during the dump process itself. However, writes that happen after the snapshot point are not included in the dump, so you must stop application writes before taking the dump you intend to restore, or those changes will be lost.

# On the OLD server: export all databases
mysqldump --all-databases --single-transaction \
  --routines --triggers --events \
  --set-gtid-purged=OFF \
  -u root -p > full_dump.sql

# Transfer to new server
rsync -avz --progress full_dump.sql user@NEW_SERVER_IP:/tmp/

# On the NEW server: import
mysql -u root -p < /tmp/full_dump.sql

For individual databases, replace --all-databases with the database name:

# Export a single database
mysqldump --single-transaction --routines --triggers \
  -u root -p my_application_db > my_application_db.sql

# Compress for faster transfer
gzip my_application_db.sql

# Transfer and decompress on new server
rsync -avz my_application_db.sql.gz user@NEW_SERVER_IP:/tmp/
# -t allocates a TTY so the mysql password prompt works over SSH;
# the target database must already exist on the new server
ssh -t user@NEW_SERVER_IP "gunzip /tmp/my_application_db.sql.gz && mysql -u root -p my_application_db < /tmp/my_application_db.sql"

Method 2: Set Up Replication

For near-zero downtime, configure the new server as a replica of the old server. This method streams all changes in real time, and the final cutover is just a matter of pointing your application to the new server.

# On the OLD server (master): enable binary logging
# Add to /etc/mysql/mysql.conf.d/mysqld.cnf:
# [mysqld]
# server-id = 1
# log_bin = /var/log/mysql/mysql-bin.log
# binlog_format = ROW

# Create a replication user on the OLD server
mysql -u root -p -e "
CREATE USER 'repl_user'@'NEW_SERVER_IP' IDENTIFIED BY 'strong_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl_user'@'NEW_SERVER_IP';
FLUSH PRIVILEGES;
"

# Get the current binary log position
mysql -u root -p -e "SHOW MASTER STATUS\G"
# Note the File and Position values

# Take an initial dump for the replica
# (--master-data=2 records the binlog coordinates in the dump as a
# comment; on MySQL 8.0.26+ the option is named --source-data)
mysqldump --all-databases --single-transaction \
  --master-data=2 --routines --triggers \
  -u root -p > replica_seed.sql

# Transfer the seed dump to the new server
rsync -avz replica_seed.sql user@NEW_SERVER_IP:/tmp/

# On the NEW server (replica):
# Set server-id = 2 in my.cnf, restart MySQL

# Import the seed dump
mysql -u root -p < /tmp/replica_seed.sql

# Configure replication
mysql -u root -p -e "
CHANGE MASTER TO
  MASTER_HOST='OLD_SERVER_IP',
  MASTER_USER='repl_user',
  MASTER_PASSWORD='strong_password',
  MASTER_LOG_FILE='mysql-bin.000003',  -- File value from SHOW MASTER STATUS
  MASTER_LOG_POS=154;                  -- Position value from SHOW MASTER STATUS
START SLAVE;
"

# Verify replication is running
mysql -u root -p -e "SHOW SLAVE STATUS\G"
# Check: Slave_IO_Running = Yes, Slave_SQL_Running = Yes
# Check: Seconds_Behind_Master = 0 (fully caught up)

Once Seconds_Behind_Master reaches 0, the replica has caught up. At this point, stop writes to the old server, verify the replica is still at 0 lag, then point your application to the new server. The cutover itself takes seconds.
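
That readiness check is easy to script so the cutover cannot start early. The helper below parses SHOW SLAVE STATUS\G output (fed from an inline sample here for illustration; in practice you would pipe mysql -u root -p -e "SHOW SLAVE STATUS\G" into it) and succeeds only when both threads are running with zero lag:

```shell
#!/usr/bin/env bash
# Gate the cutover on replica health parsed from SHOW SLAVE STATUS\G output.
check_replica() {
  awk '
    /Slave_IO_Running:/      { io  = $2 }
    /Slave_SQL_Running:/     { sql = $2 }
    /Seconds_Behind_Master:/ { lag = $2 }
    END {
      if (io == "Yes" && sql == "Yes" && lag == 0) {
        print "OK to cut over"; exit 0
      }
      print "NOT ready (io=" io ", sql=" sql ", lag=" lag ")"; exit 1
    }'
}

# Inline sample of the relevant SHOW SLAVE STATUS\G lines:
check_replica <<'EOF'
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
EOF
# prints: OK to cut over
```

A wrapper like this can run in a loop and trigger the application switch-over only after the check passes, removing the temptation to cut over by eyeball.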

Method 3: Percona XtraBackup for Large Databases

For databases in the hundreds of gigabytes, mysqldump can be painfully slow. Percona XtraBackup creates hot, non-blocking physical backups of InnoDB databases. It copies the data files directly while tracking changes, then replays the transaction log to bring the backup to a consistent state.

# On the OLD server: create a full backup
xtrabackup --backup --target-dir=/backup/full \
  --user=root --password=your_password

# Prepare the backup (apply transaction logs)
xtrabackup --prepare --target-dir=/backup/full

# Transfer to new server (stop MySQL there first and make sure
# /var/lib/mysql is empty before copying into it)
rsync -avz /backup/full/ user@NEW_SERVER_IP:/var/lib/mysql/

# On the NEW server: set correct ownership and start MySQL
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql

Verification: Ensuring Data Integrity

Never trust a migration without verification. At minimum, compare row counts across every table and run checksum comparisons.

# Compare row counts for all tables in a database
# (note: table_rows is an estimate for InnoDB; run SELECT COUNT(*)
# on critical tables for exact figures)
mysql -u root -p -e "
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'my_application_db'
ORDER BY table_name;
"

# Run checksum on specific tables
mysql -u root -p -e "CHECKSUM TABLE my_application_db.users, my_application_db.orders;"

# Compare checksums between old and new servers
# Run the same CHECKSUM TABLE command on both and diff the results
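
To mechanize that comparison, save the checksum output from each server to a file and diff the files; an empty diff means the tables match. The files below are fabricated samples standing in for output captured with mysql -N -e "CHECKSUM TABLE ...":

```shell
#!/usr/bin/env bash
# Diff saved CHECKSUM TABLE output from the old and new servers.
# These sample files are fabricated; capture real ones with e.g.:
#   mysql -N -u root -p -e "CHECKSUM TABLE my_application_db.users;" > checksums_old.txt
printf 'my_application_db.users\t1967212619\nmy_application_db.orders\t3843942716\n' > checksums_old.txt
printf 'my_application_db.users\t1967212619\nmy_application_db.orders\t3843942716\n' > checksums_new.txt

if diff -u checksums_old.txt checksums_new.txt; then
  echo "checksums match"
else
  echo "CHECKSUM MISMATCH - do not cut over" >&2
  exit 1
fi
```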

Migrating PostgreSQL

PostgreSQL offers some of the most sophisticated migration tooling of any open-source database. From the venerable pg_dump to built-in logical replication, there is a method for every scenario.

Method 1: pg_dump/pg_restore with Custom Format

The --format=custom flag produces a compressed, flexible archive that supports parallel restore and selective table restoration. This is almost always preferable to plain SQL dumps for PostgreSQL.

# On the OLD server: dump in custom format
pg_dump -U postgres -Fc -Z5 --verbose \
  my_application_db > my_application_db.dump

# Transfer to new server
rsync -avz --progress my_application_db.dump user@NEW_SERVER_IP:/tmp/

# On the NEW server: create the database and restore
createdb -U postgres my_application_db

pg_restore -U postgres -d my_application_db \
  --verbose --jobs=4 /tmp/my_application_db.dump

The --jobs=4 flag enables parallel restore, which can dramatically speed up the import on multi-core servers. For a full cluster migration including all databases and roles:

# Export all roles (users/groups) first
pg_dumpall -U postgres --roles-only > roles.sql

# Export each database individually in custom format
for db in $(psql -U postgres -t -c "SELECT datname FROM pg_database WHERE datname NOT IN ('template0','template1','postgres')"); do
  pg_dump -U postgres -Fc -Z5 "$db" > "${db}.dump"
done

# On the NEW server: restore roles, then databases
psql -U postgres < /tmp/roles.sql

for dump in /tmp/*.dump; do
  dbname=$(basename "$dump" .dump)
  createdb -U postgres "$dbname"
  pg_restore -U postgres -d "$dbname" --jobs=4 "$dump"
done

Method 2: pg_basebackup + Streaming Replication

PostgreSQL's built-in streaming replication is the preferred approach for zero-downtime migrations. The new server receives a physical copy of the entire cluster, then continuously streams WAL (Write-Ahead Log) records to stay synchronized.

# On the OLD server (primary): configure replication
# In postgresql.conf:
# wal_level = replica
# max_wal_senders = 5
# wal_keep_size = 1024   (megabytes of WAL retained for the standby)

# In pg_hba.conf: allow replication connections from new server
# host replication repl_user NEW_SERVER_IP/32 scram-sha-256

# Create replication user
psql -U postgres -c "CREATE ROLE repl_user WITH REPLICATION LOGIN PASSWORD 'strong_password';"

# Reload configuration
psql -U postgres -c "SELECT pg_reload_conf();"

# On the NEW server (standby): stop PostgreSQL and empty its data
# directory, then take a base backup
pg_basebackup -h OLD_SERVER_IP -U repl_user \
  -D /var/lib/postgresql/16/main \
  -Fp -Xs -P -R

# The -R flag auto-creates standby.signal and sets
# primary_conninfo in postgresql.auto.conf

# Start PostgreSQL on the new server
systemctl start postgresql

# Verify replication status from the OLD server
psql -U postgres -c "SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;"

When the standby has caught up (sent_lsn equals replay_lsn), you are ready for the cutover. Stop your application, verify no new writes are arriving, then promote the standby:

# Promote the standby to primary
psql -U postgres -c "SELECT pg_promote();"

# Or use: pg_ctl promote -D /var/lib/postgresql/16/main

# Verify it is now a primary
psql -U postgres -c "SELECT pg_is_in_recovery();"
# Should return 'f' (false = primary)

Method 3: Logical Replication (PostgreSQL 10+)

Logical replication is uniquely powerful for PostgreSQL migrations because it replicates at the logical (row-change) level rather than the physical (WAL block) level. This means you can replicate between different PostgreSQL major versions, replicate a subset of tables, and even make schema changes on the subscriber before cutover.

# On the OLD server (publisher):
# Set wal_level = logical in postgresql.conf and restart

# Create a publication for all tables
psql -U postgres -d my_application_db -c "
CREATE PUBLICATION my_migration FOR ALL TABLES;
"

# On the NEW server (subscriber):
# Create the database, then copy the schema only (no data) from the old server
createdb -U postgres my_application_db
pg_dump -h OLD_SERVER_IP -U postgres -s my_application_db | psql -U postgres -d my_application_db

# Create a subscription
psql -U postgres -d my_application_db -c "
CREATE SUBSCRIPTION my_migration_sub
  CONNECTION 'host=OLD_SERVER_IP dbname=my_application_db user=repl_user password=strong_password'
  PUBLICATION my_migration;
"

# Monitor replication progress
psql -U postgres -c "SELECT * FROM pg_stat_subscription;"

Verification: Ensuring Data Integrity

# Compare row counts across all tables
psql -U postgres -d my_application_db -c "
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY schemaname, relname;
"

# Schema comparison: dump schema from both servers and diff
pg_dump -U postgres -s my_application_db > old_schema.sql  # on old server
pg_dump -U postgres -s my_application_db > new_schema.sql  # on new server
diff old_schema.sql new_schema.sql

# Spot-check specific tables with row hashes
psql -U postgres -d my_application_db -c "
SELECT md5(string_agg(t::text, '' ORDER BY id))
FROM my_critical_table t;
"
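
An alternative that keeps the comparison outside the database is to export each table deterministically (note the ORDER BY) and hash the export on both sides. The files below are fabricated stand-ins for output you might capture with psql's \copy; the table and column names are illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Compare md5 digests of deterministic table exports from both servers.
# In practice each file would come from something like:
#   psql -d my_application_db -c "\copy (SELECT * FROM my_critical_table ORDER BY id) TO 'export.csv' CSV"
printf '1,alice\n2,bob\n' > old_export.csv    # fabricated sample
printf '1,alice\n2,bob\n' > new_export.csv    # fabricated sample

old_hash=$(md5sum old_export.csv | awk '{print $1}')
new_hash=$(md5sum new_export.csv | awk '{print $1}')

if [ "$old_hash" = "$new_hash" ]; then
  echo "table contents match"
else
  echo "hash mismatch: $old_hash vs $new_hash" >&2
  exit 1
fi
```

The ORDER BY in the export query matters: without a deterministic row order, identical tables can produce different files and false mismatches.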

Migrating MongoDB

MongoDB migrations follow similar patterns but use MongoDB-specific tooling. The two primary approaches are logical dump/restore and replica set migration.

Method 1: mongodump/mongorestore

The mongodump and mongorestore utilities are the MongoDB equivalents of pg_dump/pg_restore. They export data in BSON format and support parallel operations.

# On the OLD server: dump all databases
mongodump --uri="mongodb://admin:password@localhost:27017" \
  --out=/backup/mongodump_full

# For a single database
mongodump --uri="mongodb://admin:password@localhost:27017/my_app_db" \
  --out=/backup/mongodump_single

# Transfer to new server
rsync -avz /backup/mongodump_full/ user@NEW_SERVER_IP:/tmp/mongodump/

# On the NEW server: restore everything (restoring all dumped
# databases is mongorestore's default behavior)
mongorestore --uri="mongodb://admin:password@localhost:27017" \
  --numParallelCollections=4 /tmp/mongodump/

Method 2: Replica Set Migration

For zero-downtime MongoDB migrations, add the new server as a secondary member of the existing replica set, wait for it to sync, then step down the old primary and reconfigure the set.

# From the current primary: add the new server to the replica set
rs.add("NEW_SERVER_IP:27017")

# Monitor sync progress
rs.status()
# Watch for the new member's stateStr to become "SECONDARY"
# and its optimeDate to match the primary

# Once fully synced, step down the old primary
rs.stepDown()

# Remove the old server from the replica set
rs.remove("OLD_SERVER_IP:27017")

# Force the new server to become primary if needed
cfg = rs.conf()
# Raise the priority of the new server's member entry (index 0 here
# is illustrative; inspect cfg.members for the right index)
cfg.members[0].priority = 2
rs.reconfig(cfg)

Verification

# Compare document counts across collections
mongosh --eval "
db.getMongo().getDBNames().forEach(function(dbName) {
  var siblingDB = db.getSiblingDB(dbName);
  siblingDB.getCollectionNames().forEach(function(collName) {
    print(dbName + '.' + collName + ': ' + siblingDB[collName].countDocuments({}));
  });
});
"

# Compare collection stats
mongosh --eval "db.getSiblingDB('my_app_db').my_collection.stats().count"

Why Databases Need Dedicated Resources

Databases are the single workload most harmed by shared infrastructure. Unlike a web server that can tolerate an occasional slow response, a database experiencing resource contention creates a cascading failure across every application that depends on it. Understanding why this happens is essential to choosing the right server for your migrated database.

I/O contention causes query latency spikes. Databases are fundamentally I/O-bound workloads. Every query that cannot be served from the buffer pool must hit disk. On a shared hypervisor, your carefully optimized query that normally takes 2 ms can suddenly take 200 ms because another tenant on the same physical host launched a bulk import job. These latency spikes are invisible in averages but devastating to your p99 response times. For transactional databases, this means lock wait times spike as well, creating a chain reaction of slow queries that backs up your entire connection pool.
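
The averages-versus-p99 point is easy to demonstrate with a toy latency sample: 98 queries at 2 ms plus two contention-hit queries at 200 ms barely move the mean but pin the p99 at the spike:

```shell
#!/usr/bin/env bash
# 98 fast queries (2 ms) and 2 contention spikes (200 ms):
# the mean stays low while the p99 equals the spike latency.
{ for _ in $(seq 1 98); do echo 2; done; echo 200; echo 200; } |
  sort -n |
  awk '{ v[NR] = $1; sum += $1 }
       END {
         printf "mean = %.1f ms\n", sum / NR        # 6.0 ms
         printf "p99  = %d ms\n", v[int(NR * 0.99)] # 200 ms (nearest-rank, simplified)
       }'
```

A dashboard showing the 6 ms mean looks healthy; the users who hit the 200 ms queries disagree.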

Shared CPU means unpredictable lock wait times. Database engines rely on CPU to manage concurrency: acquiring and releasing locks, resolving deadlocks, running the query planner, and processing transactions. When CPU cycles are stolen by noisy neighbors, lock hold times increase. A lock that should be held for microseconds is held for milliseconds instead. Every other query waiting on that lock stalls, and your application throughput collapses. This is particularly dangerous for write-heavy workloads where row-level locking in InnoDB or MVCC in PostgreSQL depends on rapid lock turnover.

Memory pressure destroys cache hit ratios. Databases are designed to keep frequently accessed data in memory. MySQL's InnoDB buffer pool, PostgreSQL's shared_buffers, MongoDB's WiredTiger cache — these all work on the same principle: serve data from RAM to avoid disk I/O. On shared infrastructure with memory overcommitment, the hypervisor can reclaim pages from your database's cache to serve other tenants. Your buffer pool shrinks silently, your cache hit ratio drops from 99% to 80%, and suddenly every fifth query hits disk instead of RAM. The performance impact is not linear — it is catastrophic.

The bottom line: for any database that matters, dedicated resources are not a luxury. They are the foundation of predictable performance.

Choosing the Right Server for Database Workloads

Not every database needs the same tier of infrastructure. A development database running test data has completely different requirements from a production PostgreSQL instance handling thousands of transactions per second. Here is how to match your database workload to the right MassiveGRID cloud server tier.

Development/Staging Database: H/A Cloud VPS

For development, QA, and staging databases, an H/A Cloud VPS provides the flexibility to right-size your resources without overspending. The key advantage here is the ability to scale RAM independently. A staging database that mirrors production data might need 16 GB of RAM for realistic query performance testing, but only 2 vCPU because the query volume is low. With MassiveGRID's independent resource scaling, you pay for the RAM you need without being forced into a higher CPU tier.

Production Database with Consistent Load: H/A Cloud VDS

For production databases, an H/A Cloud VDS (starting at $19.80/mo) is the default recommendation. The "D" in VDS stands for Dedicated — your vCPU cores and RAM are exclusively allocated to your server. No noisy neighbors, no CPU steal time, no memory pressure from other tenants. This is the difference between a database that consistently delivers 2 ms query response times and one that intermittently spikes to 200 ms during peak hours on the shared hypervisor.

A Cloud VDS provides the predictable I/O, CPU, and memory performance that databases require. For the vast majority of production database workloads — from e-commerce transaction databases to SaaS application backends — this is the right choice. You get dedicated resources at a fraction of the cost of bare metal, with the flexibility to scale up as your data grows.

Production Database Without a DBA: H/A Managed Cloud Dedicated Servers

If your team does not include a dedicated database administrator, H/A Managed Cloud Dedicated Servers (starting at $76.19/mo) combine dedicated resources with a management layer. MassiveGRID's operations team monitors server health, handles OS-level patches and security updates, manages backup schedules, and provides proactive alerting on resource utilization trends. You still manage your database engine and schema, but the underlying infrastructure is taken care of.

This tier is particularly valuable for small and mid-size teams running critical production databases. Instead of splitting your engineering team's attention between database performance tuning and server maintenance, you can focus entirely on your application while knowing the infrastructure is monitored and maintained by specialists.

Independent Resource Scaling: A Database-Specific Advantage

One of the most significant advantages of MassiveGRID for database workloads is the ability to scale individual resources independently. Databases have a unique scaling profile that differs from most other workloads.

Consider a growing PostgreSQL instance. As your dataset expands from 20 GB to 100 GB, the critical resource you need more of is RAM — specifically for shared_buffers and effective_cache_size. More RAM means a higher cache hit ratio, which means fewer disk reads, which means faster queries. But your CPU requirements stay relatively flat because you are serving the same query volume; you just have more data to cache.

On MassiveGRID, you can scale RAM from 8 GB to 32 GB while keeping 4 vCPU — across any tier. You are not forced into a "large" plan with 16 vCPU just because you need more memory. This independent scaling applies to storage as well. A database that grows its data volume from 100 GB to 500 GB but maintains the same query patterns needs more SSD, not more CPU. You scale exactly what the workload demands, and nothing more.
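
To make that concrete, here is the kind of postgresql.conf change a RAM-only resize implies. The values follow common rules of thumb (roughly 25% of RAM for shared_buffers, 50–75% for effective_cache_size) and are illustrative, not tuned recommendations:

```
# postgresql.conf memory settings, before and after a RAM-only resize
# (rule-of-thumb values; tune for your actual workload)

# With 8 GB RAM:
shared_buffers = 2GB              # ~25% of RAM
effective_cache_size = 6GB        # planner hint, not an allocation

# With 32 GB RAM on the same 4 vCPU:
shared_buffers = 8GB
effective_cache_size = 24GB
```

CPU-oriented settings such as worker counts would stay put in this scenario, which is exactly why independent RAM scaling matches the database growth profile.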

Free Migration Assistance

If the commands and procedures in this guide feel overwhelming, you do not have to do it alone. MassiveGRID offers free migration assistance for customers moving their database servers to any MassiveGRID cloud server. Our engineering team will work with you to plan the migration, execute the data transfer, verify integrity, and coordinate the cutover to minimize downtime.

Recommended setup for production databases: Deploy an H/A Cloud VDS with dedicated resources to eliminate noisy neighbor problems. Start with 4 vCPU and 8 GB RAM, and scale independently as your dataset grows. For teams without a DBA, consider Managed Cloud Dedicated Servers for infrastructure monitoring and maintenance included.

Conclusion

Migrating a database server is a high-stakes operation, but it does not have to be a stressful one. By choosing the right strategy — dump and restore for simplicity, replication for zero downtime, or file-level copy for speed — and following methodical verification steps, you can move MySQL, PostgreSQL, or MongoDB to a new cloud server with confidence.

The key to a successful database migration is preparation: test your procedure against a staging copy first, verify every row after the migration, and always have a rollback plan. And when choosing your destination server, remember that databases are uniquely sensitive to resource contention. Dedicated resources on a Cloud VDS or Managed Cloud Dedicated Server will pay for themselves in consistent query performance and fewer 3 AM pages.

Ready to migrate your database server? Contact our team for free migration planning, or explore MassiveGRID's cloud server options to find the right fit for your workload.