
This post is part of a series on Terraform State.
Terraform state is how your code understands the infrastructure it controls, acting as a record of deployed resources. When multiple teams or configurations need to interact, sharing this state effectively becomes a core requirement for collaboration. This article explains Terraform state, the methods for secure sharing across deployments, and how managed platforms like Scalr and Terraform Cloud simplify the process.
At its core, Terraform state is a snapshot of your infrastructure. When you run terraform apply, Terraform records the mapping between your configuration and the real-world resources it created. This state file, typically named terraform.tfstate, is essential for Terraform to:
- Map the resources in your configuration to real-world objects
- Track metadata such as resource dependencies
- Determine what to create, update, or destroy on the next plan or apply
Without a valid state file, Terraform has no idea what it's managing, leading to potential resource duplication or accidental destruction. A closely related problem is drift: the real infrastructure diverging from what the state records.
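To make this concrete, here is a heavily trimmed sketch of what a terraform.tfstate file looks like (the exact fields vary by Terraform version, and the resource details here are illustrative):

```json
{
  "version": 4,
  "terraform_version": "1.7.0",
  "serial": 12,
  "lineage": "3f2a9c1e-0000-0000-0000-000000000000",
  "outputs": {
    "vpc_id": { "value": "vpc-0abc123", "type": "string" }
  },
  "resources": [
    {
      "mode": "managed",
      "type": "aws_vpc",
      "name": "main",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        { "attributes": { "id": "vpc-0abc123", "cidr_block": "10.0.0.0/16" } }
      ]
    }
  ]
}
```

The serial increments on every state change and lineage identifies the state's ancestry; backends use both to detect conflicting writes.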
While a single Terraform configuration often generates a single state file, real-world deployments rarely live in isolation. Often, one set of Terraform state files depends on another. For example:
- A networking team provisions the VPC and subnets that an application team deploys into.
- A data team manages a shared database that several application configurations connect to.
In these scenarios, the output of one Terraform deployment (e.g., VPC ID, subnet IDs, database endpoint) becomes an input for another. Without a mechanism to share these outputs, each team would have to manually discover and enter these values, which is error-prone and defeats the purpose of IaC.
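For instance, the networking configuration might expose such values with output blocks (a sketch; the resource names are assumptions):

```hcl
# In the network team's configuration
output "vpc_id" {
  description = "ID of the shared VPC"
  value       = aws_vpc.main.id
}

output "private_subnet_ids" {
  description = "Subnets available for application workloads"
  value       = aws_subnet.private[*].id
}
```

Anything declared as an output lands in the state file, which is what makes cross-configuration sharing possible.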
Sharing Terraform state becomes invaluable in several common scenarios: cross-team dependencies (networking, data, and application teams), large configurations split into smaller composable ones, and environments that build on shared foundations.
Sharing Terraform state, while powerful, introduces security considerations:
Sensitive data. Terraform state can contain sensitive information like database passwords, API keys, or private IP addresses. Encrypt state at rest and in transit, and never commit terraform.tfstate files directly to version control.
Access control (least privilege). Not everyone needs access to all state files. Grant each consumer read access only to the state it actually needs.
State locking. Concurrent operations on the same state file can corrupt it.
State history and rollback. Leverage remote backends that provide versioning for state files. This allows you to revert to a previous working state if a deployment goes wrong.
Auditing. Enable logging and auditing for your state backend to track who accessed and modified state files. Dedicated Terraform platforms make this much easier.
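As a small illustration of the sensitive-data point, outputs that carry secrets can be flagged so Terraform redacts them in CLI output. Note they are still stored in plain text inside the state file itself, which is why encryption at rest matters (the resource reference below is illustrative):

```hcl
output "db_password" {
  value     = aws_db_instance.main.password
  sensitive = true
}
```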
When operating with open-source Terraform alone — without a managed platform — state management and sharing become your responsibility. The core mechanism is the remote backend: cloud object storage that holds your terraform.tfstate file in a centralized, accessible location.
This approach requires you to manually configure the backend and often set up supporting services for state locking.
The most common approach is cloud object storage, which offers high durability, versioning, and often built-in locking.
General steps:
General steps:
- Create the storage resource (bucket or container) in your cloud provider.
- Add a backend block to your root module.
- Run terraform init.

AWS S3. Prerequisites: an S3 bucket (with versioning enabled) and a DynamoDB table whose partition key is LockID (string) for state locking.

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "path/to/my/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-lock-table"
  }
}

Azure Blob Storage. Prerequisites: a storage account and container, and the "Storage Blob Data Contributor" role on the identity running Terraform. Azure Blob Storage provides its own locking mechanism via blob leases.
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "myazuretfstateaccount"
    container_name       = "tfstate"
    key                  = "path/to/my/terraform.tfstate"
  }
}

Google Cloud Storage. Prerequisites: a GCS bucket with object versioning enabled, and the "Storage Object Admin" role on the identity running Terraform. GCS provides native state locking.
terraform {
  backend "gcs" {
    bucket = "my-gcs-terraform-state-bucket"
    prefix = "path/to/my/states"
  }
}

After defining the backend block, run terraform init. Terraform detects the backend configuration and prompts you to migrate any existing local state.
Once your state is in a remote backend, other Terraform configurations (in different directories or even different Git repositories) can access its outputs using the terraform_remote_state data source. For a deeper dive on the output block itself (arguments, for expressions, sensitive flags, and child-module outputs), see Terraform Outputs: How to with Examples.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state-bucket"
    key            = "path/to/my/network-terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-lock-table"
  }
}

resource "aws_instance" "app_server" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.private_subnet_id
}

The consumer needs the right permissions to read the state files in the bucket.
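What "the right permissions" means depends on the backend. For the S3 example, a least-privilege IAM policy for a read-only consumer might look roughly like this (the bucket, key, and table names are the illustrative ones used in this article; verify the exact actions against current AWS documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadNetworkState",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-terraform-state-bucket/path/to/my/network-terraform.tfstate"
    },
    {
      "Sid": "UseLockTable",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/terraform-lock-table"
    }
  ]
}
```

The DynamoDB statement is only needed by identities that run plan or apply against that backend and therefore acquire locks; a writer of the state additionally needs s3:PutObject and s3:ListBucket.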
Without state locking, concurrent apply operations can corrupt state. If you use CLI workspaces, terraform workspace commands map each workspace to a distinct key within the bucket.

Platforms like Scalr and Terraform Cloud simplify the learning curve for Terraform state management by providing a centralized, managed backend. They natively handle secure remote storage, robust state locking, and versioning for easy rollbacks. This integration becomes truly powerful when combined with run triggers: a feature that automatically initiates a run in a dependent downstream workspace whenever its upstream counterpart successfully completes an apply.
Both platforms abstract away the complexity of manual state management through the concept of workspaces. A workspace is the fundamental organizational unit — an isolated container for managing a specific set of infrastructure, where the state and all related deployment artifacts are stored.
Terraform Cloud is HashiCorp's managed service for a collaborative Terraform workflow. State management is a core feature:
- Each workspace stores its state securely, with full version history.
- State is locked automatically during runs.
- A workspace's remote state sharing settings control which other workspaces (or the entire organization) may read its state.

Once state sharing is configured, consumers use the terraform_remote_state data source:
# Workspace A (e.g., networking)
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}

# Workspace B (e.g., compute), consuming outputs from Workspace A
data "terraform_remote_state" "network" {
  backend = "remote"
  config = {
    organization = "your-organization-name"
    workspaces = {
      name = "workspace-a-name"
    }
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.network.outputs.public_subnet_ids[0]
}

Terraform Cloud handles the secure retrieval of state from the upstream workspace and provides its outputs downstream.
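Terraform Cloud also offers a more targeted alternative: the tfe_outputs data source from the tfe provider, which fetches only a workspace's outputs rather than the full state file. A sketch (the organization and workspace names are placeholders):

```hcl
data "tfe_outputs" "network" {
  organization = "your-organization-name"
  workspace    = "workspace-a-name"
}

# Output values are exposed under .values and are marked sensitive by default
locals {
  first_subnet = data.tfe_outputs.network.values.public_subnet_ids[0]
}
```

Because tfe_outputs never downloads the full state, the consumer doesn't gain access to other sensitive attributes stored in it.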
Scalr is another Terraform automation platform with enterprise-grade state management and sharing.

Once a workspace is granted permission to pull outputs from another workspace's state, consumers define the data source. The configuration is very similar to Terraform Cloud:
# Scalr Workspace A (e.g., base-infrastructure)
output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = aws_subnet.public[*].id
}

# Scalr Workspace B (e.g., application-deployment)
data "terraform_remote_state" "base_infra" {
  backend = "remote"
  config = {
    hostname     = "your-account.scalr.io"
    organization = "your-scalr-environment-id"
    workspaces = {
      name = "base-infrastructure-workspace-name"
    }
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0abcdef1234567890"
  instance_type = "t2.micro"
  subnet_id     = data.terraform_remote_state.base_infra.outputs.public_subnet_ids[0]
}

Note: with Scalr, the hostname points at your Scalr account and the organization value in terraform_remote_state is your Scalr environment ID. See the Scalr documentation for the precise values.
Sharing outputs is only half the story. A successful terraform apply in an upstream workspace can change its outputs, and those changes then need to propagate automatically to the workspaces that consume them.
Run triggers, available in both Terraform Cloud and Scalr, chain workspace runs together. They create an explicit dependency: the successful completion of terraform apply in one "upstream" workspace automatically initiates a new run in a "downstream" workspace.

Run triggers let you orchestrate deployments and avoid monolithic configurations — instead breaking deployments down into smaller workspaces that depend on each other, which is much more manageable and a better developer experience.
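If you manage the platform itself as code, run triggers in Terraform Cloud can be declared with the tfe provider (a sketch; the workspace references are placeholders, and Scalr offers an equivalent provider with its own resources):

```hcl
resource "tfe_run_trigger" "network_to_compute" {
  # Runs in the downstream (compute) workspace start automatically
  # after a successful apply in the upstream (network) workspace.
  workspace_id  = tfe_workspace.compute.id
  sourceable_id = tfe_workspace.network.id
}
```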
If you're rolling your own with open-source Terraform, you can approximate this with CI/CD glue. Here's a GitHub Actions sketch that uses repository_dispatch to chain a network-infra repo to an app-servers repo:
# network-infra/.github/workflows/deploy-network.yml
name: Deploy Network Infrastructure
on:
  push:
    branches: [main]
    paths: ['network-infra/**']
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: us-east-1
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -backend-config="bucket=my-tf-states" -backend-config="key=prod/network/terraform.tfstate" -backend-config="region=${{ env.AWS_REGION }}" -backend-config="dynamodb_table=terraform-locks"
        working-directory: network-infra
      - run: terraform apply -auto-approve
        working-directory: network-infra
      - name: Trigger app-servers
        if: success()
        uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.PAT_TOKEN }}
          repository: your-org/app-servers
          event-type: deploy-app-servers

# app-servers/.github/workflows/deploy-app-servers.yml
name: Deploy Application Servers
on:
  push:
    branches: [main]
    paths: ['app-servers/**']
  repository_dispatch:
    types: [deploy-app-servers]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: us-east-1
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init -backend-config="bucket=my-tf-states" -backend-config="key=prod/app-servers/terraform.tfstate" -backend-config="region=${{ env.AWS_REGION }}" -backend-config="dynamodb_table=terraform-locks"
        working-directory: app-servers
      - run: terraform apply -auto-approve
        working-directory: app-servers

This gets the job done, but the connecting glue and credentials management add real maintenance burden. That's where Scalr's built-in federated environments, run triggers, and state sharing come in.
Terraform state is the cornerstone of effective infrastructure management. Understanding its role, the dependencies between state files, and the various methods for secure sharing is paramount for collaborative and scalable IaC deployments. By embracing remote backends, implementing strong access controls, and leveraging platforms like Terraform Cloud or Scalr, you can ensure your Terraform state is not just managed, but managed securely and efficiently.
Try Scalr for free.
