If you have been running Nextcloud for a small team, it probably felt perfectly snappy -- but scaling to 100 or more users can be a rude awakening. The file browser takes five seconds to load a directory listing. Large uploads stall or time out entirely. Collabora Online or ONLYOFFICE becomes sluggish to the point of unusable during peak hours. Desktop and mobile sync clients report constant conflicts and delays, and your users start wondering if they should just go back to Dropbox.

These are not unusual problems. Nextcloud is a powerful, feature-rich platform, but its default configuration is optimized for small deployments -- a handful of users on modest hardware. Once you push past 50 concurrent users, and especially beyond 100, the default settings actively work against you. PHP processes exhaust available memory, the database becomes a bottleneck on every file listing, preview generation consumes CPU cycles that should be serving requests, and the caching layer -- if you even have one -- cannot keep up.

Before diving into software tuning, though, there is a more fundamental question to answer: is your infrastructure the actual bottleneck? No amount of PHP configuration will compensate for a server that does not have enough CPU cores, sufficient RAM, or adequate storage I/O. If you are running on shared infrastructure where your resources are contested by other tenants, you may be optimizing the wrong layer entirely.

Diagnosing Infrastructure vs. Software Bottlenecks

The first step in any performance tuning exercise is determining whether the problem is in your software configuration or in the underlying infrastructure. This distinction matters because the solutions are fundamentally different. Software bottlenecks can be fixed with configuration changes. Infrastructure bottlenecks require more resources or better resources -- and if you are on shared hosting, they may require a completely different hosting approach.

CPU Contention

Run top or htop during peak usage and watch your CPU utilization. If your vCPUs are consistently above 80% during normal operations (not during background jobs like preview generation), you either need more cores or your existing cores are being contested. On shared infrastructure, the CPU time you are "allocated" is not guaranteed -- other tenants on the same physical host can steal cycles from your workload through CPU contention. Watch the st (steal) value in top's %Cpu summary line, or run vmstat 1 and check the st column. If steal time is regularly above 5%, your hypervisor is taking CPU cycles away from your VM to serve other tenants. This is a clear signal that you are on overcommitted shared infrastructure.
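For a scriptable check, steal time can also be computed directly from /proc/stat, where the ninth field of the aggregate cpu line is cumulative steal jiffies. A minimal sketch, assuming a Linux guest:

# Sample steal and total jiffies one second apart, then compute steal %
s1=$(awk '/^cpu /{print $9, $2+$3+$4+$5+$6+$7+$8+$9}' /proc/stat)
sleep 1
s2=$(awk '/^cpu /{print $9, $2+$3+$4+$5+$6+$7+$8+$9}' /proc/stat)
echo "$s1 $s2" | awk '{printf "steal: %.1f%%\n", 100 * ($3 - $1) / ($4 - $2)}'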

RAM Pressure

Check memory usage with free -h. Nextcloud with 100+ users needs RAM for PHP-FPM workers, database caching, Redis, and the operating system itself. If you are seeing heavy swap usage, your performance problems are almost certainly RAM-related. Swap is orders of magnitude slower than physical memory, and any request that triggers swap I/O will feel painfully slow to the end user. For 100+ users, plan for a minimum of 16 GB of RAM -- and more if you are running the database and Redis on the same server.

Storage IOPS

Nextcloud is heavily I/O dependent. Every file listing, every thumbnail load, every sync check involves disk operations. Run iostat -x 1 and watch the %util and await columns. If your storage device is consistently at 90%+ utilization or your average wait times exceed 10ms, storage is your bottleneck. Traditional SATA SSDs may not be sufficient for large deployments. NVMe storage, with its dramatically higher IOPS capability and lower latency, makes a measurable difference for Nextcloud workloads.
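If you suspect the storage tier itself, a short synthetic benchmark gives you a baseline to compare against your provider's claims. A sketch using fio -- assuming it is installed and that /var/www sits on the volume you want to test; adjust --directory accordingly:

# 30-second 4K random-read test (creates a 1 GB test file you can delete afterwards)
fio --name=nc-randread --directory=/var/www --rw=randread --bs=4k \
    --size=1G --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=30 --time_based

Healthy NVMe storage should report tens of thousands of read IOPS at this queue depth; four-figure results point to a saturated or throttled storage tier.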

The Noisy Neighbor Problem

If you are on a standard VPS from a typical cloud provider, you are sharing physical hardware with other tenants. Even if your CPU, RAM, and disk metrics look acceptable in isolation, inconsistent performance -- fast one minute, slow the next -- is a telltale sign of noisy neighbors. The other tenants on your physical host are competing for the same CPU cache, memory bandwidth, and storage controller. If your diagnostic checks reveal infrastructure-level problems -- CPU steal, insufficient RAM, inadequate IOPS, or inconsistent performance -- no amount of PHP tuning will fix them. The solution is single-tenant infrastructure or a high-availability private cloud where all resources are dedicated exclusively to your Nextcloud deployment.

PHP-FPM Optimization

PHP-FPM (FastCGI Process Manager) is the engine that executes Nextcloud's PHP code. Every page load, every API call, every WebDAV operation goes through a PHP-FPM worker process. If your pool configuration does not match your workload, you will see either wasted memory (too many idle workers) or queued requests and timeouts (too few workers).

Process Manager Mode

PHP-FPM offers three process manager modes: static, dynamic, and ondemand. For a Nextcloud server handling 100+ users, static mode is almost always the right choice. It pre-spawns a fixed number of worker processes and keeps them in memory. There is no latency from spawning new workers under load, and you have a predictable, consistent memory footprint.

The dynamic mode attempts to scale workers up and down based on demand, but the spawning overhead creates latency spikes during traffic bursts -- exactly when you need performance most. The ondemand mode is even worse for high-traffic scenarios, as it kills idle workers and respawns them, adding significant latency to the first request after an idle period.

Calculating max_children

The pm.max_children value determines how many simultaneous PHP requests your server can handle. Set it too low and requests queue up, causing timeouts. Set it too high and you exhaust memory, and the kernel's OOM killer starts terminating processes.

The formula is straightforward:

max_children = (Total RAM - RAM for OS - RAM for DB - RAM for Redis) / Per-process memory

For a 16 GB server running the database and Redis locally, a reasonable allocation might look like this:

- Operating system and supporting services: ~1.5 GB
- PostgreSQL (shared_buffers plus connection overhead): ~4.5 GB
- Redis: ~0.5 GB
- Remaining for PHP-FPM workers: ~9.5 GB

With a memory_limit of 512 MB (Nextcloud's recommended minimum) and real-world average usage of around 180 MB per process, that remainder supports 50-60 workers (9.5 GB / 180 MB ≈ 52). Be conservative -- it is better to queue a few requests than to trigger the OOM killer.
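Rather than guessing the per-process figure, measure it on your own workload during peak hours. A quick sketch, assuming the Debian/Ubuntu process name php-fpm8.3 -- adjust for your distribution:

# Average resident memory per PHP-FPM worker, in MB
# (includes the pool master process, which is close enough for sizing)
ps -o rss= -C php-fpm8.3 | awk '{sum += $1; n++} END {if (n) printf "%d processes, avg RSS %.0f MB\n", n, sum / (n * 1024)}'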

Recommended PHP-FPM Configuration

; /etc/php/8.3/fpm/pool.d/nextcloud.conf

[nextcloud]
user = www-data
group = www-data

listen = /run/php/php8.3-fpm-nextcloud.sock
listen.owner = www-data
listen.group = www-data

pm = static
pm.max_children = 50

; Recycle workers after 500 requests to prevent memory leaks
pm.max_requests = 500

; Slow log for debugging (requests taking longer than 5s)
request_slowlog_timeout = 5s
slowlog = /var/log/php-fpm/nextcloud-slow.log

; Terminate requests that run longer than 300s
request_terminate_timeout = 300s

; PHP settings for Nextcloud
php_value[memory_limit] = 512M
php_value[upload_max_filesize] = 16G
php_value[post_max_size] = 16G
php_value[max_execution_time] = 3600
php_value[max_input_time] = 3600

OPcache Configuration

OPcache stores precompiled PHP bytecode in shared memory, eliminating the need to parse and compile PHP scripts on every request. For Nextcloud, which loads hundreds of PHP files per request, a properly configured OPcache is one of the biggest single performance wins available.

; /etc/php/8.3/fpm/conf.d/10-opcache.ini

opcache.enable=1
opcache.interned_strings_buffer=32
opcache.max_accelerated_files=20000
opcache.memory_consumption=256
opcache.save_comments=1
opcache.revalidate_freq=60
opcache.jit=1255
opcache.jit_buffer_size=128M

Key settings explained:

- memory_consumption=256: Nextcloud's core plus a typical set of apps easily outgrows the default bytecode cache; 256 MB leaves headroom so compiled scripts are never evicted.
- max_accelerated_files=20000: a Nextcloud installation contains well over 10,000 PHP files, so the default file slot limit is too low.
- interned_strings_buffer=32: shared storage for deduplicated strings; Nextcloud's admin checks flag values that are too small on larger installations.
- save_comments=1: Nextcloud reads code annotations at runtime and breaks if doc-block comments are stripped from the cache. Do not disable this.
- revalidate_freq=60: check file timestamps at most once per minute instead of on every request; safe because Nextcloud's code only changes during upgrades.
- jit=1255 with a 128 MB buffer: enables PHP's tracing JIT; the gains for Nextcloud are modest but measurable on CPU-heavy requests.

Redis for Caching and File Locking

Nextcloud supports two levels of caching: local cache (APCu) and distributed cache (Redis or Memcached). For a single-server deployment with fewer than 20 users, APCu alone may be sufficient. For 100+ users, Redis is essential -- and it should handle both distributed caching and transactional file locking.

Why Redis Over APCu

APCu operates within each individual PHP-FPM process's memory space. With 50 PHP-FPM workers running in static mode, APCu caching means 50 separate cache instances, each with their own copy of cached data. This wastes memory and creates cache inconsistencies between workers. Redis, by contrast, is a separate process that all PHP-FPM workers share. One cache, one source of truth, dramatically better memory efficiency.

More critically, APCu cannot handle transactional file locking across multiple workers. When two users edit the same file simultaneously, Nextcloud needs a locking mechanism that all PHP workers can see. APCu cannot provide this. Redis can.

Redis Configuration

Install Redis and configure it to use a Unix socket for lower latency (eliminating TCP overhead on localhost connections):

# /etc/redis/redis.conf

# Use Unix socket instead of TCP
unixsocket /var/run/redis/redis-server.sock
unixsocketperm 770

# Memory limit -- adjust based on your dataset
maxmemory 512mb
maxmemory-policy allkeys-lru

# Disable persistence for cache-only usage (optional, improves performance)
save ""
appendonly no

# Increase connection limit
maxclients 512
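One operational gotcha with unixsocketperm 770: the PHP-FPM workers must belong to Redis's group to use the socket. On Debian/Ubuntu, where the group is named redis, something like:

# Let the web server user access the Redis socket, then restart both services
usermod -aG redis www-data
systemctl restart redis-server php8.3-fpm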

Nextcloud config.php Additions

Add the following to your Nextcloud config/config.php to enable Redis for all caching layers:

'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => '/var/run/redis/redis-server.sock',
    'port' => 0,
    'timeout' => 1.5,
],

This configuration uses APCu as the local per-process cache (fast for frequently accessed small values) and Redis for distributed caching and file locking. It is the recommended dual-cache setup for production Nextcloud deployments and provides the best balance of performance and consistency.
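After restarting PHP-FPM, verify that locking actually goes through Redis. A quick check, assuming the paths used above:

# Confirm the locking backend Nextcloud has registered
sudo -u www-data php /var/www/nextcloud/occ config:system:get memcache.locking

# Confirm Redis answers over the socket and is accumulating keys
redis-cli -s /var/run/redis/redis-server.sock ping
redis-cli -s /var/run/redis/redis-server.sock info keyspace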

Database Optimization

The database is involved in virtually every Nextcloud operation. File listings query the oc_filecache table. User authentication hits the oc_users and oc_authtoken tables. Share lookups, activity logs, notifications -- every feature adds database queries. For 100+ users with tens of thousands of files, database performance directly determines how fast Nextcloud feels.

PostgreSQL vs. MySQL

Both work. But for large Nextcloud deployments, PostgreSQL consistently outperforms MySQL in the workload patterns that matter most: complex joins on the file cache table, concurrent write operations from multiple sync clients, and full-text search queries. PostgreSQL's query planner handles Nextcloud's auto-generated queries more efficiently, and its MVCC (Multi-Version Concurrency Control) implementation provides better performance under heavy concurrent write loads than MySQL's InnoDB.

If you are starting fresh, choose PostgreSQL. If you are already on MySQL and performance is acceptable, migration may not be worth the effort -- but if you are hitting database bottlenecks, it is worth considering.
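Nextcloud ships a built-in migration command for exactly this case. A hedged sketch -- the database name, user, and host below are placeholders, the target PostgreSQL database must already exist, and you should take a full backup first and expect downtime proportional to the size of oc_filecache:

# Prompts for the PostgreSQL password; rewrites config.php on success
sudo -u www-data php occ maintenance:mode --on
sudo -u www-data php occ db:convert-type --all-apps pgsql nextcloud 127.0.0.1 nextcloud
sudo -u www-data php occ maintenance:mode --off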

PostgreSQL Tuning for Nextcloud

# /etc/postgresql/16/main/postgresql.conf

# Memory
shared_buffers = 4GB              # 25% of total RAM (for a 16 GB server)
effective_cache_size = 12GB       # 75% of total RAM
work_mem = 64MB                   # Per-operation sort/hash memory
maintenance_work_mem = 1GB        # For VACUUM, CREATE INDEX

# Write-Ahead Log
wal_buffers = 64MB
checkpoint_completion_target = 0.9
min_wal_size = 1GB
max_wal_size = 4GB

# Query Planning
random_page_cost = 1.1            # Set to 1.1 for SSD/NVMe storage
effective_io_concurrency = 200    # NVMe can handle high concurrency
default_statistics_target = 200   # Better query plans at cost of ANALYZE time

# Connections
max_connections = 100             # Match to PHP-FPM max_children + headroom
shared_preload_libraries = 'pg_stat_statements'

The shared_buffers setting deserves special attention. This is PostgreSQL's internal cache for table and index data. For Nextcloud, the oc_filecache table and its indexes are the most frequently accessed data, and keeping them in memory eliminates disk reads for file listings and searches. On a 16 GB server, 4 GB of shared_buffers is a reasonable starting point -- but if your file cache table is larger, you need more.
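To see whether 4 GB is enough, check how large the hot data actually is. A quick query -- the database name nextcloud is a placeholder:

-- Run inside the Nextcloud database, e.g. sudo -u postgres psql nextcloud
SELECT pg_size_pretty(pg_total_relation_size('oc_filecache')) AS filecache,
       pg_size_pretty(pg_database_size('nextcloud'))          AS whole_db;

If the filecache figure (table plus indexes) approaches your shared_buffers setting, increase the setting.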

This is where infrastructure flexibility becomes critical. If your database needs more RAM for caching, you need to be able to add RAM without being forced into a larger plan that also increases your CPU and storage costs. With MassiveGRID's independent resource scaling, you can add RAM specifically for database caching without changing your CPU or storage allocation -- paying only for the resource you actually need.

Connection Pooling

With 50 PHP-FPM workers, each maintaining a persistent database connection, you can quickly exhaust PostgreSQL's connection limit. Each connection consumes approximately 5-10 MB of RAM on the database side. For larger deployments, consider PgBouncer as a connection pooler:

# /etc/pgbouncer/pgbouncer.ini

[databases]
nextcloud = host=/var/run/postgresql port=5432 dbname=nextcloud

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 200
default_pool_size = 25
min_pool_size = 10
reserve_pool_size = 5

Transaction-mode pooling allows PgBouncer to multiplex many client connections over a smaller number of actual PostgreSQL connections, reducing memory overhead and improving connection efficiency.
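Nextcloud needs no PgBouncer-specific driver -- point dbhost at the pooler's port instead of PostgreSQL's. Assuming the listen address above:

// In config/config.php -- route database traffic through PgBouncer
'dbtype' => 'pgsql',
'dbhost' => '127.0.0.1:6432',
'dbname' => 'nextcloud',

One caveat: transaction pooling historically did not mix with PostgreSQL prepared statements. PgBouncer 1.21 added protocol-level prepared statement support (max_prepared_statements); on older releases, test carefully.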

Preview Generation

Preview generation is one of the most underestimated performance drains on large Nextcloud installations. Every time a user opens a folder in the web interface, Nextcloud generates thumbnail previews for images, PDFs, videos, and office documents. On an installation that has just ingested thousands of existing files, this happens on the fly -- and it happens on every page load until all previews are cached. The result is a file browser that takes 10-15 seconds to render a folder with 200 images, with the web server's CPU pinned at 100%.

The Preview Generator App

The solution is the Preview Generator app (available from the Nextcloud App Store). Instead of generating previews on demand when users browse folders, it pre-generates all previews as a background job. Once previews are generated, folder browsing becomes near-instantaneous because Nextcloud simply serves the cached preview files.

Install and enable it from the command line (run from the Nextcloud installation directory):

sudo -u www-data php occ app:install previewgenerator

Initial Preview Generation

For an existing installation with a large file library, the initial preview generation pass can take hours or even days depending on the number of files and the server's CPU capacity. Run it manually first to generate all existing previews:

sudo -u www-data php occ preview:generate-all

This command is heavily CPU-intensive. On a server with 4 CPU cores, generating previews for 100,000 files can take 8-12 hours. This is where infrastructure elasticity pays off. With MassiveGRID's independent resource scaling, you can temporarily add CPU cores during the initial generation pass -- scaling from 4 to 8 or even 16 cores for the duration of the job -- and then scale back down once it completes. You pay for the extra CPU only for the hours you use it, rather than permanently sizing your server for a one-time workload.

Ongoing Preview Generation via Cron

After the initial pass, configure a cron job to generate previews for newly uploaded files:

# /etc/cron.d/nextcloud-preview
# Run preview generation for new files every 10 minutes
*/10 * * * * www-data php /var/www/nextcloud/occ preview:pre-generate

Note the distinction: preview:generate-all processes all files (used for the initial pass), while preview:pre-generate only processes files that have been added or modified since the last run.

Limiting Preview Sizes

By default, Nextcloud generates previews at multiple resolutions. For most deployments, you do not need all of them. Limiting preview sizes reduces both CPU usage and storage consumption:

// In config/config.php
'preview_max_x' => 2048,
'preview_max_y' => 2048,
'preview_max_scale_factor' => 1,
'enabledPreviewProviders' => [
    'OC\Preview\PNG',
    'OC\Preview\JPEG',
    'OC\Preview\GIF',
    'OC\Preview\BMP',
    'OC\Preview\XBitmap',
    'OC\Preview\MP3',
    'OC\Preview\TXT',
    'OC\Preview\MarkDown',
    'OC\Preview\PDF',
    'OC\Preview\HEIC',
    'OC\Preview\Movie',
],

Disabling preview providers you do not need (for example, removing Movie if your users do not upload video content) significantly reduces the CPU overhead of preview generation.
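The Preview Generator app also accepts its own size hints, so the background job only produces the resolutions you keep. These app-config keys come from the app's documentation -- verify them against the version you install:

sudo -u www-data php occ config:app:set --value="64 256" previewgenerator squareSizes
sudo -u www-data php occ config:app:set --value="1024" previewgenerator widthSizes
sudo -u www-data php occ config:app:set --value="768" previewgenerator heightSizes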

Collabora Online and ONLYOFFICE Performance

Collaborative document editing is one of Nextcloud's most compelling features -- and one of its most resource-intensive. Collabora Online (based on LibreOffice) and ONLYOFFICE both run as separate services that receive documents from Nextcloud, render them in a browser-based editor, and handle real-time collaboration between multiple users.

Dedicated Resources Are Not Optional

The most common mistake in Nextcloud deployments with document editing is running the editing engine on the same server as Nextcloud itself. Collabora Online, for example, spawns a separate LibreOffice process for each open document. Each process consumes 200-500 MB of RAM and significant CPU during active editing, spell-checking, and document rendering. When 10 users are simultaneously editing documents, that is an additional 2-5 GB of RAM and multiple CPU cores consumed -- directly competing with PHP-FPM, the database, and Redis for the same resources.

For deployments with 100+ users, run Collabora or ONLYOFFICE on a separate server or container. This eliminates resource contention and allows you to scale the editing engine independently of the Nextcloud application server.
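For a containerized deployment, the official collabora/code image makes the separation straightforward. A minimal sketch, assuming Docker, a reverse proxy terminating TLS in front of the container, and cloud.example.com as a placeholder for your Nextcloud host:

# Dots in aliasgroup1 are escaped because the value is treated as a regex
docker run -d --name collabora --restart always -p 9980:9980 \
  -e "aliasgroup1=https://cloud\.example\.com:443" \
  -e "extra_params=--o:ssl.enable=false --o:ssl.termination=true" \
  collabora/code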

Recommended Resource Allocation

Concurrent Editors | Collabora (RAM / CPU) | ONLYOFFICE (RAM / CPU)
10-20              | 4 GB / 2 vCPU         | 4 GB / 2 vCPU
20-50              | 8 GB / 4 vCPU         | 8 GB / 4 vCPU
50-100             | 16 GB / 8 vCPU        | 12 GB / 6 vCPU
100+               | 32 GB / 16 vCPU       | 24 GB / 12 vCPU

Connection Limits

Both Collabora and ONLYOFFICE have configurable connection limits that should be tuned to match your resource allocation. For Collabora Online (CODE), the key settings in coolwsd.xml are:

<num_prespawn_children>4</num_prespawn_children>
<per_document>
    <max_concurrency>4</max_concurrency>
    <idle_timeout_secs>3600</idle_timeout_secs>
</per_document>
<max_file_size>104857600</max_file_size>

Setting num_prespawn_children to match the expected number of concurrent documents keeps editor startup latency low. The idle_timeout_secs value controls how long an unused editor session stays alive -- lower values free resources faster but may cause brief delays when users return to a document after a break.

Monitoring and Ongoing Optimization

Performance tuning is not a one-time exercise. Usage patterns change, file libraries grow, new apps are installed, and Nextcloud updates may introduce new performance characteristics. Ongoing monitoring ensures you catch performance regressions before your users do.

Nextcloud's Built-in Monitoring

Nextcloud includes a server status endpoint at /ocs/v2.php/apps/serverinfo/api/v1/info that provides real-time metrics including active user count, storage usage, PHP memory consumption, database size, and cache hit rates. Enable the Monitoring app in Nextcloud to get a visual dashboard of these metrics over time.
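The endpoint can also be queried directly, which is handy for wiring into external monitoring. A sketch with placeholder credentials and hostname -- use an admin account (or an app password if two-factor authentication is enabled):

curl -s -u admin:password -H "OCS-APIRequest: true" \
  "https://cloud.example.com/ocs/v2.php/apps/serverinfo/api/v1/info?format=json"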

Check the OPcache hit rate regularly. If it drops below 99%, your opcache.max_accelerated_files or opcache.memory_consumption settings are too low and the cache is evicting entries. Check Redis memory usage with redis-cli info memory -- if it is consistently near the maxmemory limit, increase the allocation. Monitor PostgreSQL with pg_stat_statements to identify slow queries that may need index optimization.
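With pg_stat_statements preloaded (as in the PostgreSQL configuration above), the slowest queries are one SELECT away. Column names below match PostgreSQL 13 and later:

-- Run CREATE EXTENSION pg_stat_statements; once in the Nextcloud database first
SELECT calls,
       round(mean_exec_time::numeric, 1) AS avg_ms,
       left(query, 60)                   AS query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;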

External Monitoring

For production deployments, complement Nextcloud's built-in monitoring with external tools. Prometheus with node_exporter provides system-level metrics (CPU, RAM, disk I/O, network). The Nextcloud Exporter feeds Nextcloud-specific metrics into Prometheus. Grafana dashboards can then visualize both infrastructure and application metrics in a single view, making it straightforward to correlate infrastructure events (CPU spikes, disk latency increases) with application-level symptoms (slow page loads, failed sync operations).
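A minimal scrape configuration tying these together might look like the following. Ports 9100 and 9205 are the defaults for node_exporter and the community nextcloud-exporter -- adjust if you have changed them:

# prometheus.yml (scrape section only)
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
  - job_name: nextcloud
    static_configs:
      - targets: ['localhost:9205']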

Key Metrics to Watch

- PHP-FPM: listen queue length and entries in the slow log (requests exceeding the 5s threshold configured earlier)
- OPcache: hit rate (target above 99%) and free cache memory
- Redis: memory usage relative to maxmemory, evicted keys, cache hit rate
- PostgreSQL: slowest queries via pg_stat_statements, active connections versus max_connections
- System: CPU steal time, disk await and %util, swap activity

When to Scale vs. When to Optimize

There is a point of diminishing returns with software tuning. If you have followed every recommendation in this guide and your Nextcloud instance is still slow for 100+ users, the answer is almost always infrastructure. Either you need more resources (more CPU cores, more RAM, faster storage) or you need better resources (dedicated instead of shared, NVMe instead of SATA SSD, bare-metal instead of virtualized).

The advantage of hosting on infrastructure that supports granular, independent scaling is that you can address specific bottlenecks without overhauling your entire server. Need more RAM for PostgreSQL caching? Add RAM. Need more CPU for a preview generation backlog? Add CPU temporarily. Need more storage IOPS? Move to a higher-performance storage tier. This targeted approach is more cost-effective than upgrading to the next fixed plan tier where you pay for resources you do not need.

Putting It All Together

Here is a summary of the recommended configuration stack for a Nextcloud deployment serving 100+ users:

Component            | Recommendation                                    | Why
Infrastructure       | Single-tenant / dedicated resources, NVMe storage | Eliminates noisy neighbor issues and I/O bottlenecks
PHP-FPM              | Static mode, 50 workers, 512M memory_limit        | Predictable performance, no spawning latency
OPcache              | 256 MB memory, 20K files, JIT enabled             | Eliminates PHP compilation overhead
Redis                | Unix socket, 512 MB, LRU eviction                 | Shared cache across all workers + file locking
Database             | PostgreSQL, 4 GB shared_buffers, PgBouncer        | Better concurrency handling, efficient connection use
Previews             | Pre-generated via cron every 10 minutes           | Eliminates on-demand CPU spikes during browsing
Collabora/ONLYOFFICE | Separate server or container                      | Prevents resource contention with core Nextcloud

Performance tuning Nextcloud for scale is a methodical process. Start with the infrastructure -- make sure you have the raw resources needed and that they are not being shared with other workloads. Then tune the software stack layer by layer: PHP-FPM first (it is the execution engine), OPcache next (the cheapest performance win), then Redis (shared caching and locking), then the database (the most complex but highest-impact layer), and finally preview generation and document editing (the biggest CPU consumers).

If you are running into performance walls with your current hosting, or if you are planning a Nextcloud deployment for a large team and want to start on infrastructure that will not hold you back, talk to MassiveGRID's Nextcloud specialists. We offer single-tenant infrastructure with independent resource scaling, NVMe storage, and 24/7 human support from engineers who understand Nextcloud's architecture inside and out. No chatbots. No tier-1 scripts. Just direct access to people who can help you get your deployment running at peak performance.

Running into performance walls? Migrate your Nextcloud to single-tenant infrastructure with dedicated resources, NVMe storage, and expert support. Get started with MassiveGRID Nextcloud hosting.