Slack charges per user. Microsoft Teams locks you into the Microsoft ecosystem. Discord mines your data for advertising profiles. Every mainstream team chat platform comes with trade-offs that grow more uncomfortable as your organization scales, handles sensitive data, or simply values digital sovereignty. Matrix offers a fundamentally different approach: a federated, open-standard communication protocol with end-to-end encryption baked in from the ground up. When you self-host Matrix using Synapse as your homeserver and Element as your client, you get a Slack-quality team messaging experience where you own every message, every file, every encryption key, and every byte of metadata.
Running Matrix on your own Ubuntu VPS means no per-user pricing ceilings, no vendor lock-in, no third-party access to your conversations, and complete control over data retention policies. Whether you are building internal communications for a startup, setting up secure channels for a legal team, or replacing a patchwork of messaging tools across a distributed organization, self-hosted Matrix delivers enterprise-grade chat infrastructure at a fraction of the cost.
MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10
Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything
Why Self-Host Your Team Chat
The case for self-hosting team communication infrastructure goes beyond philosophical preference. There are concrete, measurable reasons why organizations move away from hosted chat platforms.
True end-to-end encryption. Matrix implements the Olm and Megolm cryptographic ratchets, the same double-ratchet algorithm family used by Signal. When E2EE is enabled, messages are encrypted on the sender's device and can only be decrypted by verified recipient devices. Your server never sees plaintext content. This is fundamentally different from platforms that encrypt data "at rest" on their servers — those platforms can still read your messages.
No per-user pricing. Slack's Business+ plan costs $12.50 per user per month. At 100 users, that is $15,000 per year for a chat application. A self-hosted Matrix deployment on a MassiveGRID VPS with 4 vCPUs and 8 GB RAM can comfortably serve 100-200 users for a tiny fraction of that cost, with no artificial user caps.
Data sovereignty. When your chat server runs on infrastructure you control, your messages stay where you put them. There are no third-party subprocessors, no data residency ambiguities, and no surprise changes to terms of service. For organizations subject to GDPR, HIPAA, or industry-specific compliance frameworks, this level of control is often not optional — it is required.
No vendor lock-in. Matrix is an open standard maintained by the Matrix.org Foundation. Your data is portable. Your clients are interchangeable. If you decide to switch homeserver implementations, migrate to a different host, or federate with other organizations, nothing stops you.
The Matrix Ecosystem
Matrix is not a single application — it is a protocol ecosystem with multiple interchangeable components. Understanding the pieces helps you make informed deployment decisions.
Synapse is the reference homeserver implementation, written in Python. It is the most mature and feature-complete Matrix server, supporting the full specification including E2EE, federation, application services, and server-side search. Synapse typically uses 2-4 GB of RAM for small to mid-size deployments, scaling with the number of active users and joined rooms.
Element is the flagship Matrix client, available as a web application (Element Web), desktop application (Electron-based), and native mobile apps for iOS and Android. Element Web is a static JavaScript application that you can self-host alongside Synapse, giving your team a branded, polished chat interface accessible from any browser.
Bridges are application services that connect Matrix rooms to external platforms. You can bridge conversations to Slack, Discord, IRC, Telegram, WhatsApp, and more. This means you can adopt Matrix incrementally — team members on other platforms can continue using their preferred client while messages flow bidirectionally through Matrix.
Bots in Matrix work through the same application service interface. You can build custom bots for notifications, CI/CD alerts, on-call rotations, or automated moderation using libraries like matrix-bot-sdk (TypeScript) or maubot (Python plugin framework).
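As a hedged sketch of what a bot looks like in practice, the snippet below outlines a minimal command bot built with matrix-nio, a community Python SDK (a third alternative to the libraries named above). The homeserver URL, bot account, password, and the `!status` command are placeholders for illustration, not part of this deployment:

```python
# Minimal notification-bot sketch using matrix-nio (pip install matrix-nio).
# All credentials and URLs below are placeholders.
import asyncio

def format_alert(pipeline: str, status: str, url: str) -> str:
    """Pure helper: build the message body posted for a CI event."""
    icon = "OK" if status == "success" else "FAIL"
    return f"[{icon}] {pipeline}: {status} ({url})"

async def main() -> None:
    # Imported inside main() so format_alert stays usable without matrix-nio.
    from nio import AsyncClient, RoomMessageText

    client = AsyncClient("https://matrix.example.com", "@ci-bot:example.com")
    await client.login("bot_password")

    async def on_message(room, event) -> None:
        # Respond to a "!status" command in any room the bot has joined.
        if event.body.strip() == "!status":
            await client.room_send(
                room.room_id,
                message_type="m.room.message",
                content={
                    "msgtype": "m.text",
                    "body": format_alert("deploy", "success",
                                         "https://ci.example.com/builds/42"),
                },
            )

    client.add_event_callback(on_message, RoomMessageText)
    await client.sync_forever(timeout=30000)  # long-poll /sync indefinitely

# To run the bot: asyncio.run(main())
```

Because the bot speaks the same client-server API as Element, it needs nothing more than an ordinary Matrix account on your homeserver.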
Federation vs. Private Deployment
One of Matrix's defining features is federation — the ability for multiple homeservers to communicate with each other, similar to how email servers interoperate. When federation is enabled, your users can join rooms hosted on other Matrix servers, and external users can join rooms on yours.
For internal team chat, you likely want to disable federation. A private, non-federated deployment means your server only communicates with itself. No external servers can discover your rooms, no outside users can attempt to join, and your server does not need to handle federation traffic. This simplifies your security model and reduces resource usage.
You can always enable federation later if your needs change. The configuration is a single flag in the homeserver configuration file, and rooms can be set to invite-only regardless of federation status.
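For example, if you later agree to federate with a single partner organization, you would list just that homeserver in homeserver.yaml (the domain is illustrative):

```yaml
# Allow federation only with an explicit list of homeservers
federation_domain_whitelist:
  - partner-org.example
```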
Prerequisites
For a Matrix deployment serving a small to mid-size team (up to 200 users), you need:
- VPS specifications: 4 vCPUs, 8 GB RAM, 80 GB NVMe storage. Synapse itself uses 2-4 GB RAM under normal load, PostgreSQL needs 1-2 GB for caching, and you want headroom for Element Web, Nginx, and occasional spikes during media uploads or room state resolution.
- Ubuntu 24.04 LTS — a fresh installation with root or sudo access.
- Docker and Docker Compose — we will use containerized deployment for isolation and reproducibility. If you have not set up Docker yet, follow our guide on installing Docker on an Ubuntu VPS.
- A domain name with DNS configured: you need at least two subdomains, one for the Synapse API (e.g., matrix.example.com) and one for Element Web (e.g., chat.example.com).
- Ports 80 and 443 open for HTTPS traffic through your firewall.
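A minimal DNS layout for the two subdomains might look like this (the address is a documentation placeholder from the 203.0.113.0/24 example range; substitute your VPS IP):

```
matrix.example.com.   300   IN   A   203.0.113.10
chat.example.com.     300   IN   A   203.0.113.10
```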
A MassiveGRID VPS with 4 vCPUs and 8 GB RAM provides a solid foundation. The Ceph 3x replicated NVMe storage ensures your message database and uploaded media survive disk failures without any intervention on your part.
Docker Compose: Synapse, PostgreSQL, and Element Web
Create your project directory and the Docker Compose configuration:
mkdir -p /opt/matrix/{synapse,element}
cd /opt/matrix
Create the docker-compose.yml file:
version: "3.8"

services:
  postgres:
    image: postgres:16-alpine
    container_name: matrix-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: synapse
      POSTGRES_PASSWORD: your_secure_db_password_here
      POSTGRES_DB: synapse
      POSTGRES_INITDB_ARGS: "--encoding=UTF8 --lc-collate=C --lc-ctype=C"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - matrix
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U synapse"]
      interval: 10s
      timeout: 5s
      retries: 5

  synapse:
    image: matrixdotorg/synapse:latest
    container_name: matrix-synapse
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      SYNAPSE_CONFIG_DIR: /data
      SYNAPSE_CONFIG_PATH: /data/homeserver.yaml
    volumes:
      - ./synapse:/data
      - synapse_media:/data/media_store
    ports:
      - "8008:8008"
    networks:
      - matrix

  element:
    image: vectorim/element-web:latest
    container_name: matrix-element
    restart: unless-stopped
    volumes:
      - ./element/config.json:/app/config.json:ro
    ports:
      - "8080:80"
    networks:
      - matrix

volumes:
  postgres_data:
  synapse_media:

networks:
  matrix:
    driver: bridge
Before starting the services, you need to generate the initial Synapse configuration. Run:
docker run -it --rm \
-v /opt/matrix/synapse:/data \
-e SYNAPSE_SERVER_NAME=example.com \
-e SYNAPSE_REPORT_STATS=no \
matrixdotorg/synapse:latest generate
Replace example.com with your actual domain. This creates the homeserver.yaml file and signing keys in the synapse/ directory. The server name is permanent — it becomes part of every user ID (@user:example.com) and cannot be changed after the server begins operation.
Homeserver Configuration
Open /opt/matrix/synapse/homeserver.yaml and configure the critical settings. Here is a focused walkthrough of the sections that matter most:
# Server identity
server_name: "example.com"
public_baseurl: "https://matrix.example.com/"
serve_server_wellknown: true

# Database: switch from the default SQLite to PostgreSQL
database:
  name: psycopg2
  args:
    user: synapse
    password: your_secure_db_password_here
    database: synapse
    host: postgres
    port: 5432
    cp_min: 5
    cp_max: 10

# Listener configuration
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    resources:
      - names: [client, federation]
        compress: false

# Disable federation for a private deployment
federation_domain_whitelist: []

# Registration
enable_registration: false
enable_registration_without_verification: false

# Media storage
media_store_path: "/data/media_store"
max_upload_size: "50M"
url_preview_enabled: true
url_preview_ip_range_blacklist:
  - '127.0.0.0/8'
  - '10.0.0.0/8'
  - '172.16.0.0/12'
  - '192.168.0.0/16'

# Logging
log_config: "/data/log.config"

# Signing keys
signing_key_path: "/data/signing.key"
trusted_key_servers: []

# Retention policy
retention:
  enabled: true
  default_policy:
    min_lifetime: 1d
    max_lifetime: 365d

# Rate limiting
rc_message:
  per_second: 5
  burst_count: 25
rc_login:
  address:
    per_second: 0.5
    burst_count: 3
  account:
    per_second: 0.5
    burst_count: 3
Key decisions in this configuration: federation is effectively disabled by whitelisting no domains. Registration is disabled because you will create accounts manually or via SSO. The database points to the PostgreSQL container. Rate limiting is configured to prevent abuse without hindering normal usage.
Nginx Reverse Proxy with SSL
Synapse and Element Web both need HTTPS access. If you do not already have Nginx configured as a reverse proxy, follow our Nginx reverse proxy setup guide first, which covers installation, Let's Encrypt certificates, and security hardening.
Create the Nginx configuration for Synapse:
server {
    listen 443 ssl http2;
    server_name matrix.example.com;

    ssl_certificate /etc/letsencrypt/live/matrix.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/matrix.example.com/privkey.pem;

    client_max_body_size 50M;

    location / {
        proxy_pass http://127.0.0.1:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_read_timeout 600s;
    }

    location /_synapse/client {
        proxy_pass http://127.0.0.1:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }
}
Create the Nginx configuration for Element Web:
server {
    listen 443 ssl http2;
    server_name chat.example.com;

    ssl_certificate /etc/letsencrypt/live/chat.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chat.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
    }
}
You also need a .well-known delegation on your base domain so that Matrix clients and servers know where to find your homeserver. If your base domain (example.com) is served by a separate web server, add these routes:
location /.well-known/matrix/server {
    default_type application/json;
    add_header Access-Control-Allow-Origin *;
    return 200 '{"m.server": "matrix.example.com:443"}';
}

location /.well-known/matrix/client {
    default_type application/json;
    add_header Access-Control-Allow-Origin *;
    return 200 '{"m.homeserver": {"base_url": "https://matrix.example.com"}}';
}
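If you want to double-check those payloads before deploying, the exact JSON bodies can be validated with a few lines of Python, since json.loads raises on malformed input (a quick local sanity check, not part of the deployment itself):

```python
import json

# The exact bodies served by the two .well-known routes above.
server_body = '{"m.server": "matrix.example.com:443"}'
client_body = '{"m.homeserver": {"base_url": "https://matrix.example.com"}}'

server = json.loads(server_body)  # raises json.JSONDecodeError if malformed
client = json.loads(client_body)

assert server["m.server"] == "matrix.example.com:443"
assert client["m.homeserver"]["base_url"] == "https://matrix.example.com"
print("well-known payloads are valid JSON")
```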
Test the configuration and reload Nginx:
sudo nginx -t
sudo systemctl reload nginx
Element Web Configuration
Element Web is configured through a single JSON file. Create /opt/matrix/element/config.json:
{
  "default_server_config": {
    "m.homeserver": {
      "base_url": "https://matrix.example.com",
      "server_name": "example.com"
    }
  },
  "brand": "Your Team Chat",
  "integrations_ui_url": "",
  "integrations_rest_url": "",
  "integrations_widgets_urls": [],
  "disable_custom_urls": true,
  "disable_guests": true,
  "disable_login_language_selector": false,
  "disable_3pid_login": false,
  "default_country_code": "US",
  "show_labs_settings": false,
  "default_theme": "dark",
  "room_directory": {
    "servers": ["example.com"]
  },
  "features": {
    "feature_pinning": "labs",
    "feature_thread": true,
    "feature_video_rooms": true
  },
  "setting_defaults": {
    "breadcrumbs": true,
    "MessageComposerInput.showStickersButton": false
  }
}
This configuration points Element at your homeserver, disables integration managers (which would connect to external services), enables threading and video rooms, and sets the dark theme as default. Customize the brand field with your organization name.
Starting the Stack
With all configuration files in place, start the services:
cd /opt/matrix
docker compose up -d
Monitor the startup logs to ensure everything initializes correctly:
docker compose logs -f synapse
Synapse will run database migrations on first startup, which may take a minute. Once you see a message indicating the server is listening on port 8008, your homeserver is operational. Verify it by visiting https://matrix.example.com/_matrix/client/versions in your browser — you should see a JSON response listing supported API versions.
User Registration and Administration
Since we disabled open registration, create your first admin user from the command line:
docker exec -it matrix-synapse register_new_matrix_user \
-c /data/homeserver.yaml \
-u admin \
-p your_secure_admin_password \
-a \
http://localhost:8008
The -a flag grants server administrator privileges. This user can manage rooms, deactivate accounts, and access the Synapse Admin API.
For subsequent users, you have several options:
- Admin API: use the Synapse Admin API to create users programmatically with PUT /_synapse/admin/v2/users/@username:example.com and a JSON body containing the password and display name.
- Registration tokens: enable token-based registration in homeserver.yaml by setting registration_requires_token: true and creating tokens via the Admin API. Share tokens with new team members for self-service onboarding.
- Synapse Admin UI: deploy the community-maintained synapse-admin web interface as another Docker container for a graphical user management panel.
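As a sketch of the Admin API route, the request URL and body can be assembled like this in Python (the base URL, user ID, and password are placeholders; the actual PUT is left commented out because it needs a live server and an admin access token):

```python
import json
from urllib.parse import quote

def admin_user_url(base_url: str, user_id: str) -> str:
    # The v2 user endpoint; the Matrix user ID must be percent-encoded
    # in the URL path (quote encodes '@' and ':').
    return f"{base_url}/_synapse/admin/v2/users/{quote(user_id, safe='')}"

def new_user_body(password: str, display_name: str) -> bytes:
    # Minimal body; the Admin API accepts further fields (admin, avatar_url, ...).
    return json.dumps({"password": password, "displayname": display_name}).encode()

url = admin_user_url("https://matrix.example.com", "@alice:example.com")
body = new_user_body("a_strong_password", "Alice")

# import urllib.request
# req = urllib.request.Request(
#     url, data=body, method="PUT",
#     headers={"Authorization": "Bearer <admin_access_token>",
#              "Content-Type": "application/json"})
# urllib.request.urlopen(req)
```

The Authorization header carries the access token of any account created with the -a flag above.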
Rooms, Spaces, and Team Structure
Matrix organizes conversations into rooms (individual channels) and spaces (hierarchical groupings of rooms, similar to Slack workspaces or Discord servers).
For a team deployment, structure your spaces logically:
- Company space (top level) — contains all sub-spaces and common rooms.
- Department spaces — Engineering, Marketing, Sales, etc. Each contains department-specific rooms.
- Project spaces — temporary spaces for active projects, archived when complete.
- Common rooms — #general, #announcements (read-only for most users), #random, #help-desk.
Set the company space as the default space that new users automatically join. Configure room power levels to control who can post in announcement channels, who can invite users, and who can modify room settings. Matrix's power level system is granular — you can set different thresholds for sending messages, changing room names, kicking users, and dozens of other actions.
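Under the hood, these thresholds live in each room's m.room.power_levels state event. A sketch of the event content for a read-mostly announcements room might look like this (user IDs are illustrative):

```json
{
  "users_default": 0,
  "events_default": 50,
  "invite": 50,
  "kick": 50,
  "redact": 50,
  "users": {
    "@admin:example.com": 100,
    "@comms-lead:example.com": 50
  }
}
```

With events_default raised to 50, only users explicitly granted level 50 or above can post, while everyone else can still read.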
End-to-End Encryption Configuration
Matrix supports E2EE at the room level. You can enable encryption on individual rooms or set it as the default for all new rooms:
# In homeserver.yaml
encryption_enabled_by_default_for_room_type: all
When E2EE is enabled, each user device generates its own set of cryptographic keys. Messages are encrypted per-device, meaning a user logged in on three devices has three separate decryption key sets. This has important implications for key management.
Key backup. Configure server-side key backup so users can recover their message history when they sign in on new devices. Element prompts users to set up a Security Key or Security Phrase during their first login. Emphasize to your team that this step is not optional — without key backup, switching devices means losing access to encrypted message history.
Cross-signing. Matrix uses cross-signing to establish trust between a user's devices. When a user verifies a new device (by scanning a QR code or comparing emoji sequences), the cross-signing keys attest that both devices belong to the same user. This prevents man-in-the-middle attacks at the device level.
Room key rotation. Megolm session keys rotate automatically. By default, keys rotate after 100 messages or 1 week, whichever comes first. You can adjust these thresholds, but the defaults are suitable for most team environments.
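Those thresholds are carried in each room's m.room.encryption state event; the content below writes the defaults out explicitly (604800000 ms is one week):

```json
{
  "algorithm": "m.megolm.v1.aes-sha2",
  "rotation_period_ms": 604800000,
  "rotation_period_msgs": 100
}
```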
SSO with Authentik
For organizations that already have an identity provider, integrating Single Sign-On eliminates the need to manage Matrix passwords separately. If you are running Authentik on your Ubuntu VPS, you can connect it to Synapse using OpenID Connect.
In Authentik, create a new OAuth2/OIDC provider for Synapse with the following settings: set the redirect URI to https://matrix.example.com/_synapse/client/oidc/callback, select the openid, profile, and email scopes, and note the client ID and client secret.
Then add the OIDC configuration to homeserver.yaml:
oidc_providers:
  - idp_id: authentik
    idp_name: "Company SSO"
    issuer: "https://auth.example.com/application/o/matrix/"
    client_id: "your_client_id"
    client_secret: "your_client_secret"
    scopes: ["openid", "profile", "email"]
    user_mapping_provider:
      config:
        localpart_template: "{{ user.preferred_username }}"
        display_name_template: "{{ user.name }}"
    allow_existing_users: true
    backchannel_logout_enabled: true
With SSO configured, the Element login page displays a "Sign in with Company SSO" button. Users authenticate through Authentik and are automatically mapped to Matrix accounts. This also means you can enforce MFA, password policies, and session management through your centralized identity provider.
Synapse Performance Tuning with Workers
Synapse's default single-process mode handles most small deployments well, but as your team grows beyond 100-150 active users or joins large federated rooms, you may notice increased latency. Synapse supports a worker architecture that splits processing across multiple processes.
Common workers to deploy first:
- Sync workers (historically called synchrotron, now run as generic_worker instances): handle /sync requests, the most resource-intensive endpoint, since every connected client polls it continuously. Offloading sync to dedicated workers dramatically reduces main-process load.
- media_repository: handles media uploads, downloads, and thumbnail generation. Isolating media processing prevents large file uploads from slowing down message delivery.
- federation_sender: handles outbound federation traffic. Only relevant if you enable federation.
- pusher: handles push notifications to mobile devices.
Workers communicate through Redis, which you add to your Docker Compose stack:
redis:
  image: redis:7-alpine
  container_name: matrix-redis
  restart: unless-stopped
  networks:
    - matrix
Enable Redis in homeserver.yaml:
redis:
  enabled: true
  host: redis
  port: 6379
Start with sync workers if you hit performance limits. Each worker runs as a separate Synapse process with its own configuration file specifying which endpoints it handles. The main Synapse documentation provides detailed worker configuration templates.
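As an illustration, a dedicated sync worker's configuration file might look like the sketch below. The worker name, port, and file path are assumptions, and recent Synapse versions additionally require the worker to be listed in the main process's instance_map; consult the worker documentation for the full wiring.

```yaml
# /opt/matrix/synapse/workers/sync1.yaml (illustrative path)
worker_app: synapse.app.generic_worker
worker_name: sync1

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client]
```

Your reverse proxy then routes /sync requests to port 8083 while everything else continues to hit the main process on 8008.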
Media Storage and Independent Scaling
Chat platforms accumulate media quickly. File shares, screenshots, profile avatars, and link preview thumbnails all consume storage. Over months of active use, a 50-person team can easily generate 20-50 GB of media data.
In the Docker Compose configuration, media is stored in the synapse_media named volume. For production deployments, consider these storage strategies:
- Volume monitoring: set up alerts when media storage exceeds 70% of available disk space. Use docker system df -v to track volume sizes.
- Media retention: configure Synapse's media retention settings to automatically purge remote media (cached from federated servers) after a set period. Local media (uploaded by your users) should be retained longer or indefinitely.
- Independent storage scaling: with MassiveGRID's VPS, you can scale storage independently from CPU and RAM. If your Synapse instance is running fine on 4 vCPUs and 8 GB RAM but needs more disk space for media, add storage without changing compute resources.
For large deployments, you can also configure Synapse to use S3-compatible object storage for media via the synapse-s3-storage-provider module, offloading media to an external storage backend while keeping the database on fast local NVMe.
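The module is configured under media_storage_providers in homeserver.yaml; the fragment below follows the module's documented layout, with the bucket and region as placeholders:

```yaml
media_storage_providers:
  - module: s3_storage_provider.S3StorageProviderBackend
    store_local: true
    store_remote: true
    store_synchronous: true
    config:
      bucket: matrix-media-example
      region_name: eu-central-1
```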
PostgreSQL Optimization
Synapse's performance is heavily dependent on database performance. The default PostgreSQL configuration is conservative and tuned for minimal resource usage, not for a busy chat server. If you want a deeper understanding of PostgreSQL setup and tuning, reference our PostgreSQL installation guide.
Key PostgreSQL tuning parameters for a Matrix deployment on a 4 vCPU / 8 GB RAM server:
# postgresql.conf adjustments
shared_buffers = 2GB
effective_cache_size = 4GB
work_mem = 16MB
maintenance_work_mem = 512MB
max_connections = 100
checkpoint_completion_target = 0.9
wal_buffers = 64MB
random_page_cost = 1.1
effective_io_concurrency = 200
These settings allocate 2 GB for PostgreSQL's shared buffer pool (25% of total RAM), set the effective cache size to account for OS-level file caching, and configure WAL settings for write-heavy workloads. Synapse generates significant write activity — every message, state event, receipt, and presence update results in database writes.
Regular maintenance tasks:
- Run VACUUM ANALYZE weekly on the Synapse database, or configure autovacuum aggressively, since Synapse's update patterns can cause table bloat.
- Monitor the state_groups_state table: it is typically the largest table in a Synapse database and can grow to tens of gigabytes in deployments that join many rooms.
- Use the Synapse state compressor tool (synapse_auto_compressor) periodically to reduce the size of room state storage.
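To see how much disk that state table is actually consuming, a query like this works from psql inside the postgres container:

```sql
-- Total on-disk size of the table plus its indexes
SELECT pg_size_pretty(pg_total_relation_size('state_groups_state'));
```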
Mobile Clients
Element is available as native mobile apps for both iOS and Android. Once your server is accessible via HTTPS with proper well-known delegation, mobile setup is straightforward:
- Install Element from the App Store or Google Play.
- On the login screen, tap "Other" under the server selection.
- Enter your homeserver URL (https://matrix.example.com) or your server name (example.com; the app will use well-known discovery).
- Sign in with username/password or SSO, depending on your configuration.
- Complete device verification by scanning a QR code from an existing session or entering the Security Key.
Mobile clients support push notifications through a push gateway. For self-hosted deployments, Element's default push gateway (sygnal) sends notifications through Google FCM (Android) and Apple APNS (iOS). The default Element apps use Matrix.org's Sygnal instance for push delivery, which means notification metadata (who is messaging whom, not message content) passes through Matrix.org's servers. If this is a concern, you can self-host Sygnal, though it requires registering your own FCM and APNS credentials.
Alternative Matrix clients like FluffyChat, SchildiChat (an Element fork with additional features), and Cinny (a Discord-like UI) also work with any standard Matrix homeserver. Let your team experiment to find their preferred client — the open protocol means client choice does not affect interoperability.
Backup Strategy
Your Matrix deployment has three critical data stores that require regular backups:
- PostgreSQL database: contains all messages, room state, user accounts, and device keys. Use pg_dump for logical backups or configure continuous WAL archiving for point-in-time recovery. This is the most important backup: without it, you lose everything.
- Media store: contains all uploaded files, avatars, and thumbnails. Back up the synapse_media Docker volume. Media is large but low-priority compared to the database, since losing media means broken image links but not lost conversations.
- Synapse signing keys: the signing.key file in your Synapse data directory. If you lose this key and need to regenerate it, other federated servers will reject your server's identity. Back it up once and store it securely.
Automate daily database backups with a cron job:
0 3 * * * docker exec matrix-postgres pg_dump -U synapse synapse | gzip > /backup/matrix/synapse-$(date +\%Y\%m\%d).sql.gz
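Daily dumps accumulate, so pair the cron job with a retention script. The sketch below is a hypothetical companion (the backup path and 30-day window are assumptions; adjust to your retention policy):

```python
# Delete gzipped SQL dumps older than keep_days from a backup directory.
import time
from pathlib import Path

def prune_backups(directory: str, keep_days: int = 30) -> list[str]:
    """Remove *.sql.gz files older than keep_days; return deleted filenames."""
    cutoff = time.time() - keep_days * 86400
    deleted = []
    for path in Path(directory).glob("*.sql.gz"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            deleted.append(path.name)
    return sorted(deleted)

# Example: prune_backups("/backup/matrix")  # run from a second daily cron job
```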
Monitoring and Health Checks
Synapse exposes Prometheus metrics on the /_synapse/metrics endpoint when enabled in homeserver.yaml:
enable_metrics: true
metrics_flags:
  known_servers: true
Key metrics to monitor:
- synapse_http_server_response_time: request latency, especially for /sync endpoints.
- synapse_storage_events_persisted_total: rate of events being written to the database.
- synapse_federation_server_pdu_count: federation traffic volume (if federation is enabled).
- process_resident_memory_bytes: Synapse's RAM usage, which should stay below your allocated limits.
Set up alerts for sustained high latency on /sync (indicating the server is struggling to keep up with client requests) and for RAM usage approaching container limits (indicating potential OOM kills).
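As a starting point, a Prometheus alerting rule for the memory condition might look like this (the 6 GB threshold and the job label are assumptions for an 8 GB server scraping Synapse under job="synapse"):

```yaml
groups:
  - name: synapse
    rules:
      - alert: SynapseHighMemory
        expr: process_resident_memory_bytes{job="synapse"} > 6e9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Synapse RSS above 6 GB for 10 minutes"
```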
Scaling to Dedicated Resources
As your Matrix deployment grows — more users, more rooms, more media, more integrations — you may hit the limits of shared infrastructure. Database-intensive operations like room state resolution, full-text search across large rooms, and media transcoding benefit from guaranteed CPU time and dedicated memory.
A MassiveGRID VDS (Virtual Dedicated Server) provides dedicated physical resources rather than shared allocations. This eliminates noisy-neighbor effects where another tenant's workload impacts your database write latency or media processing speed. For Matrix deployments serving 200+ users or handling significant media traffic, dedicated resources ensure consistent performance during peak usage periods.
Prefer Managed Hosting?
Self-hosting Matrix gives you maximum control, but it also means you are responsible for OS updates, security patches, database maintenance, backup verification, and incident response. If your team needs always-on chat infrastructure but does not have the bandwidth to manage the underlying systems, MassiveGRID's fully managed hosting handles the infrastructure layer entirely.
With managed hosting, your team focuses on configuring Matrix and Element to match your workflows — room structures, integrations, access policies — while MassiveGRID handles server hardening, automated backups, proactive monitoring, kernel updates, and 24/7 incident response. The Proxmox HA cluster with automatic failover ensures your chat infrastructure survives hardware failures without manual intervention, and the 100% uptime SLA backs that commitment. For communication infrastructure that your entire organization depends on, managed hosting is not a luxury — it is operational insurance.