Infrastructure as Code has fundamentally changed the way teams provision and manage cloud resources. Instead of clicking through dashboards or running ad-hoc CLI commands, you define your entire infrastructure in declarative configuration files that can be versioned, reviewed, and reused. Terraform, created by HashiCorp, is the most widely adopted tool for this approach, and when paired with Ubuntu VPS instances on MassiveGRID, it gives you a repeatable, auditable, and automated path from zero to a fully provisioned environment. This guide walks through every stage of using Terraform to manage your Ubuntu VPS infrastructure — from single-server definitions to multi-tier architectures with DNS, secrets management, and disaster recovery workflows.

MassiveGRID Ubuntu VPS includes: Ubuntu 24.04 LTS pre-installed · Proxmox HA cluster with automatic failover · Ceph 3x replicated NVMe storage · Independent CPU/RAM/storage scaling · 12 Tbps DDoS protection · 4 global datacenter locations · 100% uptime SLA · 24/7 human support rated 9.5/10

Deploy a self-managed VPS — from $1.99/mo
Need dedicated resources? — from $19.80/mo
Want fully managed hosting? — we handle everything

What Terraform Does — and What It Does Not

There is a critical distinction that trips up newcomers to Infrastructure as Code: Terraform is a provisioning tool, not a configuration management tool. Terraform excels at creating, modifying, and destroying infrastructure resources — spinning up VPS instances, attaching storage volumes, creating DNS records, configuring firewalls at the network level, and managing load balancers. It answers the question "what infrastructure exists?" and ensures that reality matches your declared desired state.

What Terraform does not do is configure the software running inside those instances. It will not install packages, write configuration files to disk, manage systemd services, or set up user accounts on a running server. That is the domain of configuration management tools like Ansible, which operates at the operating system level rather than the infrastructure level. Understanding this boundary is essential because the most effective IaC pipelines use both tools together — Terraform to create the servers and Ansible to configure them after provisioning.

Terraform operates declaratively. You describe what you want your infrastructure to look like, and Terraform figures out the sequence of API calls required to get there. If you declare three VPS instances and currently have two, Terraform creates one more. If you reduce to two, it destroys the extra one. This declarative model is what makes Terraform so powerful for managing infrastructure at scale — you never write imperative scripts that might fail halfway through and leave your environment in an inconsistent state.

The Terraform and Ansible Pipeline

The most robust Infrastructure as Code pipeline for VPS management combines Terraform and Ansible into a two-phase workflow. In the first phase, Terraform provisions the raw infrastructure: it creates VPS instances with the specified CPU, RAM, and storage configurations, sets up networking, creates DNS records, and outputs the IP addresses and connection details of the newly created servers. In the second phase, Ansible takes over, using those IP addresses as its inventory to connect via SSH and configure every instance — installing packages, deploying application code, hardening the OS, and starting services.

This separation of concerns keeps each tool doing what it does best. Terraform manages the lifecycle of infrastructure resources through provider APIs, while Ansible manages the state of the operating system and applications through SSH. Changes to infrastructure (scaling up, adding servers, changing regions) go through Terraform. Changes to configuration (updating nginx settings, rotating certificates, deploying new application versions) go through Ansible. The result is a clean, maintainable pipeline where every aspect of your environment is defined in code.

Prerequisites

Before writing your first Terraform configuration, you need two things in place: a working Terraform installation and API credentials for your infrastructure provider.

Installing Terraform on Ubuntu is straightforward. HashiCorp maintains an official APT repository:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform

Verify the installation with terraform version. You should see a version number confirming Terraform is available in your PATH.

For API credentials, you will need an API token from your hosting provider's control panel. Store this token securely — it grants programmatic access to create and destroy resources in your account. Never commit API tokens directly into your Terraform files or version control. Instead, use environment variables or a secrets manager, both of which we will cover later in this guide.

You should also have Git installed for version-controlling your Terraform configurations, and SSH keys configured for connecting to provisioned instances. If you plan to follow the full pipeline, install Ansible as well (sudo apt install ansible) so you can configure servers after Terraform creates them.
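If you do not yet have a dedicated key pair for your provisioned servers, a minimal sketch follows; the file name terraform_vps_key and the comment are arbitrary choices, not requirements:

```shell
# Generate a dedicated Ed25519 key pair for Terraform-provisioned servers
# (the file name terraform_vps_key is arbitrary)
ssh-keygen -t ed25519 -f ./terraform_vps_key -N "" -C "terraform-vps"

# Print the public key's fingerprint; some provider APIs reference
# uploaded keys by fingerprint rather than by the key material itself
ssh-keygen -lf ./terraform_vps_key.pub
```

The fingerprint printed by the second command is the kind of value you would pass into the ssh_key_fingerprint variable used later in this guide.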

Terraform Fundamentals

Terraform configurations are written in HCL (HashiCorp Configuration Language), a declarative syntax that is both human-readable and machine-parseable. Every Terraform project consists of a few core concepts that you need to understand before writing configurations.

Providers

Providers are plugins that allow Terraform to interact with specific APIs. Each cloud platform, DNS service, or infrastructure provider has its own Terraform provider. Providers handle authentication, API versioning, and the translation of HCL resource definitions into actual API calls. You declare which providers your configuration requires in a terraform block. Note that the examples throughout this guide use a generic placeholder provider named cloud; substitute the source address, resource types, and attribute names from your own provider's documentation:

terraform {
  required_providers {
    cloud = {
      source  = "provider/cloud"
      version = "~> 2.0"
    }
  }
}

provider "cloud" {
  api_token = var.api_token
}

Resources

Resources are the fundamental building blocks of Terraform. Each resource block describes one piece of infrastructure — a VPS instance, a DNS record, a firewall rule, a storage volume. Resources have a type (determined by the provider) and a name (chosen by you), and they contain arguments that define the resource's configuration:

resource "cloud_server" "web" {
  name       = "web-server-01"
  image      = "ubuntu-24-04"
  size       = "4vcpu-8gb"
  region     = "nyc1"
}

Variables

Variables make your configurations reusable by parameterizing values that change between environments, deployments, or team members. You declare variables with a type, description, and optional default value, then reference them with the var. prefix:

variable "server_size" {
  type        = string
  description = "The VPS size slug"
  default     = "2vcpu-4gb"
}

variable "datacenter" {
  type        = string
  description = "Datacenter region"
  default     = "nyc1"
}
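Variable declarations can also enforce constraints at plan time. A sketch extending the datacenter variable with a validation block (supported since Terraform 0.13); the region slugs are the illustrative ones used throughout this guide:

```hcl
variable "datacenter" {
  type        = string
  description = "Datacenter region"
  default     = "nyc1"

  # Reject unknown regions before any API call is made
  validation {
    condition     = contains(["nyc1", "lon1", "fra1", "sgp1"], var.datacenter)
    error_message = "Datacenter must be one of: nyc1, lon1, fra1, sgp1."
  }
}
```

A typo like "nyc2" now fails at terraform plan with a clear message instead of surfacing as a provider API error mid-apply.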

Outputs

Outputs expose values from your Terraform state after provisioning — typically IP addresses, hostnames, or resource IDs that you need for subsequent steps like running Ansible or updating documentation:

output "server_ip" {
  value       = cloud_server.web.ipv4_address
  description = "Public IP of the web server"
}

State

Terraform state is the mechanism that allows Terraform to know what infrastructure it currently manages. After running terraform apply, Terraform writes a state file (terraform.tfstate) that maps your configuration resources to real-world objects. This state file is critical — it is how Terraform knows what to create, update, or destroy on the next run. We will discuss remote state management and state locking later, as these become essential in team environments.
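Two built-in commands let you inspect what state currently tracks; the resource address cloud_server.web below is illustrative:

```shell
# List every resource recorded in the current state
terraform state list

# Show the attributes Terraform has recorded for one resource
terraform state show cloud_server.web
```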

Defining a Single VPS Resource

Let us start with the simplest meaningful Terraform configuration: provisioning a single Ubuntu 24.04 VPS. Create a new directory for your project and add three files — main.tf, variables.tf, and outputs.tf:

# variables.tf
variable "api_token" {
  type        = string
  description = "API authentication token"
  sensitive   = true
}

variable "ssh_key_fingerprint" {
  type        = string
  description = "SSH key fingerprint for server access"
}

variable "server_name" {
  type        = string
  default     = "ubuntu-vps-01"
}

variable "region" {
  type        = string
  default     = "nyc1"
  description = "Datacenter: nyc1, lon1, fra1, sgp1"
}

variable "size" {
  type        = string
  default     = "2vcpu-4gb-80gb"
  description = "Server size: vCPU, RAM, NVMe storage"
}

# main.tf
resource "cloud_server" "ubuntu_vps" {
  name     = var.server_name
  image    = "ubuntu-24-04-lts"
  size     = var.size
  region   = var.region
  ssh_keys = [var.ssh_key_fingerprint]

  tags = ["terraform-managed", "ubuntu"]
}

# outputs.tf
output "vps_ip" {
  value       = cloud_server.ubuntu_vps.ipv4_address
  description = "Public IPv4 address of the VPS"
}

output "vps_id" {
  value       = cloud_server.ubuntu_vps.id
  description = "Unique identifier of the VPS"
}

This configuration creates a single VPS running Ubuntu 24.04 LTS with 2 vCPUs, 4 GB RAM, and 80 GB NVMe storage in the NYC datacenter. The SSH key ensures you can connect immediately after provisioning. The sensitive flag on the API token variable prevents Terraform from displaying it in logs or output.

Variables and Environments: Dev, Staging, and Production

Real-world deployments rarely involve a single identical server. You need different specifications for different environments — a small instance for development, a medium one for staging, and a production instance with maximum resources. Terraform handles this through variable files (.tfvars) that override default values per environment.

Create separate variable files for each environment:

# environments/dev.tfvars
server_name = "dev-web-01"
region      = "nyc1"
size        = "1vcpu-2gb-40gb"

# environments/staging.tfvars
server_name = "staging-web-01"
region      = "nyc1"
size        = "2vcpu-4gb-80gb"

# environments/prod.tfvars
server_name = "prod-web-01"
region      = "fra1"
size        = "8vcpu-32gb-320gb"

Apply a specific environment by passing the variable file:

terraform apply -var-file="environments/prod.tfvars"

This pattern keeps your Terraform modules generic and reusable while allowing precise control over what gets deployed where. Your development environment uses a small VPS with minimal resources to keep costs low, staging mirrors production at a reduced scale, and production uses a Dedicated VPS with guaranteed resources to handle real traffic. Each environment is defined entirely in code, making it trivial to recreate from scratch.
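An alternative to one .tfvars file per environment is a single map keyed by environment name. A sketch, where the environment variable and the size slugs are assumptions carried over from the examples above:

```hcl
variable "environment" {
  type        = string
  description = "Deployment environment: dev, staging, or prod"
}

locals {
  # Per-environment size slugs, resolved at plan time
  sizes = {
    dev     = "1vcpu-2gb-40gb"
    staging = "2vcpu-4gb-80gb"
    prod    = "8vcpu-32gb-320gb"
  }
}

resource "cloud_server" "web" {
  name   = "${var.environment}-web-01"
  image  = "ubuntu-24-04-lts"
  size   = local.sizes[var.environment]
  region = var.region
}
```

You would then run terraform apply -var="environment=prod" instead of passing a variable file. The per-file approach scales better once environments diverge in more than one or two values; the map keeps everything in one place while the differences are small.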

Plan, Apply, and Verify

Terraform's day-to-day workflow follows a three-step cycle of plan, apply, and verify, preceded by a one-time initialization of the working directory. This cycle provides safety and predictability, especially when managing production infrastructure.

terraform init initializes the working directory, downloading provider plugins and setting up the backend. Run this once when starting a new project or after adding providers:

terraform init

terraform plan generates an execution plan showing exactly what Terraform will do without making any changes. This is your safety net — review the plan carefully before proceeding:

terraform plan -var-file="environments/prod.tfvars"

# Output shows:
# + cloud_server.ubuntu_vps will be created
#   + name     = "prod-web-01"
#   + image    = "ubuntu-24-04-lts"
#   + size     = "8vcpu-32gb-320gb"
#   + region   = "fra1"
# Plan: 1 to add, 0 to change, 0 to destroy.

terraform apply executes the plan, making the actual API calls to create or modify infrastructure. Terraform will show the plan again and prompt for confirmation:

terraform apply -var-file="environments/prod.tfvars"

After apply completes, verify your outputs:

terraform output vps_ip
# Returns: 203.0.113.42

Confirm SSH access to validate the instance is running and accessible:

ssh root@$(terraform output -raw vps_ip) "lsb_release -a"

This plan-apply-verify cycle ensures that you never make blind changes to infrastructure. Every modification is previewed, approved, executed, and verified.

Multi-VPS Architecture in Terraform

Single-server setups work for development and small applications, but production workloads benefit from splitting services across multiple servers. Terraform makes multi-server architectures straightforward to define and manage. Here is a three-tier setup with web, database, and monitoring servers:

# Web server — handles HTTP traffic
resource "cloud_server" "web" {
  name     = "prod-web-01"
  image    = "ubuntu-24-04-lts"
  size     = "4vcpu-8gb-160gb"
  region   = var.region
  ssh_keys = [var.ssh_key_fingerprint]
  tags     = ["web", "terraform-managed"]
}

# Database server — larger storage allocation
resource "cloud_server" "database" {
  name     = "prod-db-01"
  image    = "ubuntu-24-04-lts"
  size     = "4vcpu-16gb-500gb"
  region   = var.region
  ssh_keys = [var.ssh_key_fingerprint]
  tags     = ["database", "terraform-managed"]
}

# Monitoring server — lightweight instance
resource "cloud_server" "monitoring" {
  name     = "prod-monitor-01"
  image    = "ubuntu-24-04-lts"
  size     = "2vcpu-4gb-80gb"
  region   = var.region
  ssh_keys = [var.ssh_key_fingerprint]
  tags     = ["monitoring", "terraform-managed"]
}

For larger deployments with multiple identical servers, use the count or for_each meta-argument:

variable "web_server_count" {
  type    = number
  default = 3
}

resource "cloud_server" "web_pool" {
  count    = var.web_server_count
  name     = "prod-web-${count.index + 1}"
  image    = "ubuntu-24-04-lts"
  size     = "4vcpu-8gb-160gb"
  region   = var.region
  ssh_keys = [var.ssh_key_fingerprint]
  tags     = ["web-pool", "terraform-managed"]
}

output "web_pool_ips" {
  value = cloud_server.web_pool[*].ipv4_address
}

This creates three identical web servers with sequential names. Changing web_server_count to five and running terraform apply adds two more servers without touching the existing three. Terraform's understanding of state makes scaling up or down a single variable change.
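For comparison, here is the same pool expressed with for_each, which keys instances by name rather than by index. The practical difference: removing "web-b" from the set destroys only that server, whereas lowering count always removes the highest-numbered instances. This is a drop-in alternative to the count block above; the output is renamed to avoid clashing with web_pool_ips:

```hcl
variable "web_servers" {
  type    = set(string)
  default = ["web-a", "web-b", "web-c"]
}

resource "cloud_server" "web_pool" {
  for_each = var.web_servers
  name     = "prod-${each.key}"
  image    = "ubuntu-24-04-lts"
  size     = "4vcpu-8gb-160gb"
  region   = var.region
  ssh_keys = [var.ssh_key_fingerprint]
  tags     = ["web-pool", "terraform-managed"]
}

output "web_pool_addresses" {
  # Map of server name to public IP, e.g. { "web-a" = "203.0.113.10", ... }
  value = { for name, server in cloud_server.web_pool : name => server.ipv4_address }
}
```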

DNS Record Management

Terraform can manage DNS records alongside your server infrastructure, ensuring that domain names automatically point to the correct IP addresses after provisioning. This eliminates the manual step of updating DNS after deploying new servers:

resource "cloud_domain" "primary" {
  name = "example.com"
}

resource "cloud_dns_record" "web_a" {
  domain = cloud_domain.primary.id
  type   = "A"
  name   = "www"
  value  = cloud_server.web.ipv4_address
  ttl    = 300
}

resource "cloud_dns_record" "db_internal" {
  domain = cloud_domain.primary.id
  type   = "A"
  name   = "db"
  value  = cloud_server.database.ipv4_address_private
  ttl    = 300
}

resource "cloud_dns_record" "monitor" {
  domain = cloud_domain.primary.id
  type   = "A"
  name   = "monitor"
  value  = cloud_server.monitoring.ipv4_address
  ttl    = 300
}

By managing DNS in Terraform, the entire flow — from server creation to DNS propagation — is automated. When you replace a server (destroy and recreate), the DNS record automatically updates to the new IP address on the next terraform apply.

Combining Terraform with Ansible

The handoff between Terraform and Ansible is where your IaC pipeline comes together. After Terraform provisions your VPS instances, you need Ansible to configure them. The bridge between the two tools is typically a dynamic inventory or an output-generated inventory file.

Add a local_file resource to your Terraform configuration that generates an Ansible inventory from your provisioned servers:

resource "local_file" "ansible_inventory" {
  content = templatefile("${path.module}/templates/inventory.tpl", {
    web_ip     = cloud_server.web.ipv4_address
    db_ip      = cloud_server.database.ipv4_address
    monitor_ip = cloud_server.monitoring.ipv4_address
  })
  filename = "${path.module}/ansible/inventory.ini"
}

# templates/inventory.tpl
[webservers]
${web_ip}

[databases]
${db_ip}

[monitoring]
${monitor_ip}

After running terraform apply, you now have a fresh inventory file. Run Ansible to configure every server:

cd ansible/
ansible-playbook -i inventory.ini site.yml

For a fully automated pipeline, chain the commands:

terraform apply -auto-approve -var-file="environments/prod.tfvars" && \
ansible-playbook -i ansible/inventory.ini ansible/site.yml

This single command provisions all infrastructure and configures every server, taking you from nothing to a fully operational environment in minutes. Terraform handles the creation of Ubuntu VPS instances with the right specs, and Ansible handles the configuration — installing packages, deploying code, hardening the OS, and starting services.
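One practical wrinkle: a newly created VPS can take some seconds before sshd accepts connections, so chaining Ansible immediately after apply can fail on the first connection. A small sketch that polls port 22 before the handoff; it assumes the nc utility is installed, and the timeout values are arbitrary:

```shell
WEB_IP=$(terraform output -raw vps_ip)

# Poll the SSH port for up to ~2 minutes before starting Ansible
for attempt in $(seq 1 24); do
  nc -z -w 5 "$WEB_IP" 22 && break
  sleep 5
done

ansible-playbook -i ansible/inventory.ini ansible/site.yml
```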

State Management: Remote State and Locking

By default, Terraform stores state in a local file (terraform.tfstate). This works for solo developers but becomes problematic in team environments where multiple people might run Terraform simultaneously. Remote state backends solve this by storing state in a shared, centralized location with locking to prevent concurrent modifications.

Configure a remote backend in your main.tf:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/ubuntu-vps/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

The S3 backend stores state in an encrypted object storage bucket, while the DynamoDB table provides state locking. (On Terraform 1.10 and later, S3-native locking via the use_lockfile option can replace the DynamoDB table.) When one team member runs terraform apply, the lock prevents anyone else from making concurrent changes. This is essential for production environments where an accidental concurrent apply could corrupt state or create conflicting resources.

Other backend options include HCP Terraform (formerly Terraform Cloud, HashiCorp's managed service), Consul, and PostgreSQL. The choice depends on your existing infrastructure and team preferences, but the principle remains the same: never rely on local state files for production infrastructure managed by more than one person.

State files contain sensitive information including IP addresses, resource IDs, and sometimes passwords or tokens. Always encrypt state at rest and restrict access to the backend storage. Treat your state file with the same security posture as your production database credentials.

Secrets Management with Terraform

Managing secrets in Terraform requires careful thought. API tokens, database passwords, SSL certificates, and other sensitive values must be available to Terraform without being exposed in configuration files or version control.

The simplest approach is environment variables. Terraform automatically reads environment variables prefixed with TF_VAR_:

export TF_VAR_api_token="your-api-token-here"
export TF_VAR_db_password="secure-database-password"
terraform apply

For more sophisticated secrets management, integrate with HashiCorp Vault. Terraform has a native Vault provider that can read secrets at plan and apply time:

data "vault_generic_secret" "api_credentials" {
  path = "secret/terraform/cloud-provider"
}

provider "cloud" {
  api_token = data.vault_generic_secret.api_credentials.data["api_token"]
}

With Vault integration, secrets are never stored in Terraform files, variable files, or even environment variables on developer machines. They live exclusively in Vault, accessed only at runtime. This approach provides audit logging, secret rotation, dynamic credential generation, and fine-grained access control — all critical for production environments handling sensitive data.

Regardless of your secrets management approach, follow these non-negotiable rules: never commit secrets to Git, always mark sensitive variables with sensitive = true, encrypt your state files, and rotate credentials regularly.
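A minimal .gitignore sketch reflecting these rules; whether variable files belong here depends on their contents, since environment files holding only sizes and regions are safe to commit:

```text
# .gitignore for a Terraform project
.terraform/
*.tfstate
*.tfstate.backup
crash.log

# Ignore variable files only if they may contain secrets
secrets.auto.tfvars
```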

Destroy and Recreate: Disaster Recovery with Terraform

One of Terraform's most powerful capabilities for disaster recovery is the ability to completely destroy and recreate infrastructure from code. If a server is compromised, corrupted, or needs to be rebuilt from scratch, the process is simple:

# Destroy the compromised server
terraform destroy -target=cloud_server.web -var-file="environments/prod.tfvars"

# Recreate it with identical specs
terraform apply -var-file="environments/prod.tfvars"

# Reconfigure with Ansible
ansible-playbook -i ansible/inventory.ini ansible/site.yml

The -target flag limits the destroy operation to a specific resource, leaving the rest of your infrastructure untouched. After destruction, terraform apply detects that the resource is missing from the actual infrastructure and creates a new one matching the specification. Ansible then reconfigures the fresh instance to match the desired state.
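On Terraform 0.15.2 and later, the destroy-then-apply sequence can be collapsed into one step with the -replace planning option, which plans the destruction and recreation together and avoids the window where the resource exists in configuration but not in reality:

```shell
# Plan and execute destroy + recreate of a single resource in one run
terraform apply -replace=cloud_server.web -var-file="environments/prod.tfvars"
```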

This destroy-and-recreate pattern is also the foundation of immutable infrastructure, where servers are never patched or modified in place — they are replaced entirely. When a security update is needed, you update the Ansible playbook, destroy the old servers, create new ones with terraform apply, and let Ansible configure them with the latest patches. The old servers never accumulate configuration drift because they are destroyed rather than modified.

For full environment disaster recovery, maintain your Terraform configurations and Ansible playbooks in a Git repository with remote state. If your entire environment is destroyed, recovery is a matter of cloning the repo and running terraform apply followed by ansible-playbook. The complete infrastructure — servers, DNS, networking — is rebuilt from code in minutes rather than hours or days of manual reconstruction.

Best Practices for Terraform VPS Management

As your Terraform usage matures, these practices will save you from common pitfalls and keep your configurations maintainable:

Version pin your providers. Always specify exact or constrained provider versions to prevent unexpected behavior from provider updates. Use ~> 2.0 for minor version flexibility or exact versions for production stability.

Use modules for reusable patterns. If you find yourself copying the same resource blocks across projects, extract them into Terraform modules. A VPS module that accepts size, region, and name as inputs can be reused across every project without duplication.
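A sketch of what calling such a module might look like; the module path ./modules/vps and its input names are assumptions, not a published module:

```hcl
module "web" {
  source = "./modules/vps"

  name   = "prod-web-01"
  size   = "4vcpu-8gb-160gb"
  region = "fra1"
}

module "monitoring" {
  source = "./modules/vps"

  name   = "prod-monitor-01"
  size   = "2vcpu-4gb-80gb"
  region = "fra1"
}
```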

Tag everything. Tags like terraform-managed, environment:prod, and team:platform make it easy to identify and audit resources in your hosting dashboard. Tags also enable cost tracking and cleanup scripts.

Plan before every apply. Never run terraform apply -auto-approve in production without first reviewing a terraform plan. The plan output is your last chance to catch unintended changes before they affect live infrastructure.

Keep state secure. Use remote backends with encryption, enable state locking for team environments, and never commit terraform.tfstate to version control.

Separate environments completely. Use different state files for dev, staging, and production. A mistake in one environment's configuration should never propagate to another.

Use terraform fmt and terraform validate in CI. Enforce consistent formatting and catch syntax errors before they reach a human reviewer. These commands are fast and catch a surprising number of issues early.
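The check itself is two commands that any CI system can run as a pipeline step; -check makes fmt exit non-zero on unformatted files instead of rewriting them, and validate needs an initialized directory but not a real backend:

```shell
terraform fmt -check -recursive
terraform init -backend=false   # download providers without touching state
terraform validate
```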

Define Your VPS Specs in Code

Terraform transforms VPS management from a manual, error-prone process into a disciplined engineering practice. Every server specification, every DNS record, every environment variable is defined in version-controlled files that serve as both documentation and executable configuration. When you need a new VPS, you do not log into a dashboard — you write a resource block, run terraform plan to review, and execute terraform apply to provision.

MassiveGRID's VPS platform gives you the ideal foundation for Terraform-managed infrastructure. Define your vCPU, RAM, and NVMe storage specifications in code and deploy with a single terraform apply command. Ubuntu 24.04 LTS comes pre-installed, Proxmox HA clustering provides automatic failover at the hypervisor level, and Ceph 3x replicated NVMe storage ensures your data survives hardware failures — all starting from $1.99 per month.

For environments where you need guaranteed resources per tier — a lean development instance alongside a production server with dedicated compute — MassiveGRID's Dedicated VPS provides isolated resources that never share with noisy neighbors. Define different size slugs in your dev.tfvars and prod.tfvars files to scale resource allocation per environment while keeping the same Terraform modules.

Prefer Human-Managed Infrastructure?

Not every team wants to build and maintain a Terraform pipeline. If you would rather focus on your application while infrastructure experts handle provisioning, scaling, security hardening, and ongoing operations, MassiveGRID's fully managed hosting provides exactly that. The managed team handles server provisioning, OS updates, security patches, monitoring, backups, and scaling decisions — so you get the benefits of professionally managed infrastructure without writing a single line of HCL. For teams that want Infrastructure as Code without the overhead, the managed option gives you expert-level provisioning and scaling handled entirely by humans who know the platform inside and out.