
Infrastructure drift is one of the most insidious challenges in modern Infrastructure as Code workflows. It occurs silently, can accumulate over time, and introduces security vulnerabilities, compliance risks, and operational instability. This guide provides everything you need to detect, prevent, and remediate infrastructure drift in your Terraform and OpenTofu environments.
**Key takeaways:**

- Drift is what happens when live infrastructure no longer matches your `.tf` files. It happens silently — manual console changes, broken automation, partial applies, parallel tooling — and accumulates until something breaks at the worst possible time.
- `terraform plan` is your first detector. Run it on a schedule, not just on PRs — see how to set up scheduled drift detection for the cron, alerting, and exit-code patterns.
- Remediation is a choice between three paths: revert (`terraform apply` to restore code-as-truth), align (update `.tf` to match reality, useful when the drift was intentional), or ignore (use `lifecycle.ignore_changes` for fields that legitimately drift, like auto-scaler counts).
- Prevention beats detection: GitOps workflows, restricted console access, and policy checks before every `apply`.

Infrastructure drift, or configuration drift, occurs when the actual, live state of your deployed infrastructure diverges from the intended state defined in your Infrastructure as Code configuration files and state. Simply put: your code no longer accurately represents what's running in your cloud environment.
In a Terraform context, drift means the difference between:

- The desired state: what your `.tf` configuration and state file say should exist
- The actual state: what is really deployed in your cloud provider
Imagine your Terraform code defines an S3 bucket with public access disabled. Then a developer, responding to an urgent request, logs into the AWS console and manually enables public access. Your live infrastructure now differs from your code--that's drift. The code says "private," but the reality is "public."
Drift isn't usually malicious; it creeps in through everyday operational realities:
The most common cause. Engineers make quick changes directly via cloud provider consoles to fix urgent issues or test something, bypassing the IaC workflow entirely. Emergency security patches, performance tuning, and debugging often trigger manual changes.
Multiple tools managing the same resources without proper coordination cause conflicting changes. Terraform provisions a server while Ansible later modifies its network configuration independently, or a Lambda-based auto-remediation tool alters resources outside the IaC system.
Critical incidents sometimes necessitate immediate manual changes to restore service. If these aren't backported to the IaC code, they become persistent drift that diverges further over time.
Operations teams or developers run custom scripts to modify resources outside the purview of the primary IaC tool, often without documentation or version control.
Team members unfamiliar with IaC principles might make direct changes, underestimating the cascading impact on infrastructure consistency.
Auto-scaling groups replace instances, managed databases perform automated maintenance, cloud providers change default settings--these provider-initiated changes can alter resource configurations dynamically.
Ignoring drift introduces serious business risks:
| Risk | Impact |
|---|---|
| Security Gaps | Drift can undo carefully configured security settings--altered firewall rules, S3 bucket policies, IAM permissions--inadvertently opening vulnerabilities to attacks |
| Compliance Violations | Unauthorized changes can breach PCI DSS, HIPAA, SOC 2, or GDPR requirements, resulting in failed audits and potential fines |
| Budget Blowouts | Unmanaged resources or unintended scaling lead to surprise cost increases and operational overhead in tracking "ghost" infrastructure |
| Stability & Reliability | When code isn't the source of truth, troubleshooting becomes guesswork, leading to unpredictable behavior and downtime |
| Reduced Agility | Teams hesitant to deploy changes slow down innovation and increase deployment friction |
Terraform and OpenTofu provide foundational tools for detecting drift. These native commands are your first line of defense — but they only work as well as your state file and remote backend setup allow.
The terraform plan command is your primary drift detection tool. When executed, Terraform performs a four-step process:

1. Reads the recorded state from your state file
2. Refreshes that state by querying the provider APIs for the real-world status of each resource
3. Compares the refreshed state against your configuration
4. Reports any differences as proposed changes
```
$ terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # aws_s3_bucket.example will be updated in-place
  ~ resource "aws_s3_bucket" "example" {
        id = "my-example-bucket"

      ~ versioning {
          ~ enabled = false -> true
        }
    }

Plan: 0 to add, 1 to change, 0 to destroy.
```
The ~ symbol marks an in-place update — in other words, drift. Here, the versioning block shows that the live bucket has versioning disabled (false) while your code requires it enabled (true).
While terraform plan implicitly performs a refresh, you can run terraform refresh as a standalone command. This updates your state file to reflect the real-world state of resources without making any changes to your infrastructure.
```
terraform refresh
```

Important: both Terraform and OpenTofu treat the standalone refresh command as deprecated because it rewrites state without showing you what will change first. Instead, use terraform apply -refresh-only or tofu apply -refresh-only, which perform the same refresh but allow review of changes before committing them to state.
```
# Recommended approach (works for both Terraform and OpenTofu)
terraform apply -refresh-only
tofu apply -refresh-only
```

For CI/CD pipeline integration, use the -detailed-exitcode flag:
```
terraform plan -detailed-exitcode
# Returns:
# 0 - No changes (no drift)
# 1 - Error occurred
# 2 - Changes present (drift detected)
```

While essential, native commands have significant limitations:

- They are point-in-time checks: drift that appears between runs goes unnoticed until the next plan
- They only inspect resources already in state; resources created entirely outside Terraform are invisible
- Output is built for a human at a terminal; there is no built-in alerting, history, or dashboarding
- Every run acquires the state lock, so ad hoc checks can contend with real deployments
Prevention is always more efficient than remediation. Effective drift prevention requires combining technical controls with organizational practices. Layer in Policy as Code to block guardrail-violating changes before they reach apply, and pair with IaC security scanning on every PR.
Make Git your single source of truth. All infrastructure changes must flow through pull requests with required reviews before being applied. This creates an audit trail, ensures all changes are codified, and enables automatic rollback.
Key practices:

- Require pull requests and at least one review for every infrastructure change
- Protect main branches so configuration cannot be pushed directly
- Apply changes only from CI/CD, never from individual laptops
- Tag applied commits so any resource can be traced back to the change that created it
Limit who can make manual changes in your cloud environment using Role-Based Access Control (RBAC) and the principle of least privilege. Separate read-only access (for debugging) from change access.
Implementation:

- Default to read-only console access for engineers; grant write access only to CI/CD service roles
- Provide time-boxed "break-glass" roles for genuine emergencies, with automatic expiry and audit logging
- Use AWS SCPs (or the Azure/GCP equivalents) to deny mutating actions on IaC-managed resources
Regularly schedule drift detection to catch unauthorized changes quickly. Detection frequency depends on your risk tolerance and operational tempo.
Scheduling strategies:

- Production and security-critical environments: daily, or even hourly for high-risk resources
- Staging: daily or every few days
- Development sandboxes: weekly, or skip detection entirely if drift there is acceptable
Define and enforce policies automatically using Open Policy Agent (OPA) or Sentinel. Policies are checked before terraform apply runs, preventing non-compliant changes.
```rego
# Example OPA policy
package terraform.aws.s3

deny[msg] {
  rc := input.resource_changes[_]
  rc.type == "aws_s3_bucket"
  not rc.change.after.server_side_encryption_configuration
  msg := "S3 buckets must have server-side encryption configured."
}
```

This policy blocks plans that would create unencrypted S3 buckets, eliminating a common source of drift and security violations.
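For intuition, here is the same check expressed in Python over Terraform's JSON plan representation (`resource_changes[].change.after` follows the documented plan format; the sample plan below is made up):

```python
def unencrypted_s3_buckets(plan):
    """Mirror of the OPA rule: flag aws_s3_bucket changes whose planned
    state lacks server_side_encryption_configuration."""
    messages = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_s3_bucket":
            continue
        after = rc.get("change", {}).get("after") or {}
        if not after.get("server_side_encryption_configuration"):
            messages.append(
                f"{rc['address']}: S3 buckets must have "
                "server-side encryption configured."
            )
    return messages

sample_plan = {
    "resource_changes": [
        {"address": "aws_s3_bucket.logs", "type": "aws_s3_bucket",
         "change": {"after": {"bucket": "logs"}}},
    ]
}
print(unencrypted_s3_buckets(sample_plan))
```

The Rego version runs in your pipeline before apply; this translation just shows what the rule is looking at.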
Once drift is detected, you have three main options — revert, align, or ignore — and the choice matters. Making the wrong call can be worse than the drift itself: blindly reverting an emergency scaling event could cause an outage, while ignoring a security group modification could leave you exposed.
Prioritize your Terraform code as the source of truth. Run terraform apply to revert the infrastructure to match your coded state. This is the right call when drift is unauthorized or unintentional -- someone opened a security group port that shouldn't be open, or a manual change broke the expected configuration.
When to revert:

- The change was unauthorized or cannot be explained
- A security-sensitive setting (security group, IAM policy, encryption, public access) was modified
- The change breaks a configuration guarantee your code is supposed to provide

When NOT to revert:

- The change was a deliberate emergency fix that is still keeping the service healthy
- The drifted attribute is legitimately managed by another system (auto-scaling, an operator)
- You don't yet understand why the drift happened — investigate first

Process:

1. Run terraform plan and review exactly what the revert will change
2. Confirm with the team that nothing depends on the drifted state
3. Run terraform apply to restore the coded configuration
4. Follow up on how the manual change got through, and close that gap
The risk with reverting is timing. If you revert automatically at 3 AM without understanding the context, you might undo an emergency change that's keeping production running. This is why platforms like Scalr deliberately keep a human in the loop for remediation -- presenting the drift and letting an engineer decide, rather than auto-reverting.
Accept the drifted state as the new desired state. Update your Terraform .tf files to match the actual infrastructure. Suitable for intentional changes like emergency hotfixes that need codification.
When to align:

- An intentional emergency change needs to become permanent
- The team agrees the drifted configuration is actually the better one
- The change went through an approved but out-of-band process and should now be codified

Process:

1. Update the .tf files to describe the drifted state
2. Run terraform plan and confirm it reports zero changes
3. Commit the update through your normal PR workflow so the decision is reviewed and recorded
For simple attribute changes, updating the .tf file and running terraform plan to confirm zero diff is straightforward. For resources that were created outside Terraform entirely, you'll need terraform import to bring them into state.
Not all drift requires action. Some changes are expected, temporary, or managed by other systems. The key is to acknowledge them deliberately rather than leaving them as unreviewed noise in your drift reports.
When to ignore:

- The attribute is owned by another system — auto-scaler desired counts, operator-managed settings
- The change is ephemeral and will converge on its own
- The attribute is cosmetic, with no security or cost impact

In those cases, codify the decision with lifecycle { ignore_changes = [...] } rather than letting the alert recur.
The danger of ignoring drift is that it becomes habitual. If your team starts dismissing all drift alerts, you'll miss the one that actually matters. Good drift hygiene means reviewing every detection, making an explicit decision, and documenting why you chose to ignore it.
When you detect drift, run through these questions in order:
1. Is the change security-sensitive? If an IAM policy, security group, encryption setting, or access control was modified, treat it as high-priority. Revert immediately unless you can confirm the change was authorized and intentional.
2. Was the change intentional? Check with your team. If someone made a deliberate change during an incident or as part of a planned activity, the right path is usually to align your code rather than revert.
3. Is the change still needed? Emergency scaling events are intentional but temporary. If the incident is resolved and the extra capacity isn't needed, revert. If it is, align your code.
4. Is it managed by another system? Auto-scaling groups, Kubernetes operators, and other automation tools legitimately modify resources. If another system is authoritative for that resource attribute, consider using lifecycle { ignore_changes } in your Terraform configuration to prevent false positives going forward.
5. Can you explain why you're ignoring it? If you can't articulate a clear reason, don't ignore it. The inability to explain drift is itself a signal that something unexpected happened.
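The checklist above can be summarized as a small decision function. This is one possible encoding, purely illustrative — real triage involves human judgment:

```python
def triage(security_sensitive, intentional, still_needed, other_system_managed):
    """Return 'revert', 'align', or 'ignore' for a detected drift,
    roughly following the question order above."""
    if security_sensitive:
        return "revert"   # question 1: revert unless proven authorized
    if other_system_managed:
        return "ignore"   # question 4: plus lifecycle.ignore_changes going forward
    if intentional and still_needed:
        return "align"    # questions 2-3: codify the change
    return "revert"       # unintentional, or no longer needed

# An unexplained security group change:
print(triage(security_sensitive=True, intentional=False,
             still_needed=False, other_system_managed=False))  # revert
```

Encoding the order makes the precedence explicit: security concerns win, system-owned attributes get ignored deliberately, and everything else defaults back to code-as-truth.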
When drift represents intentional changes that should be captured, update your state file without modifying infrastructure:
```
# Terraform - validate non-destructive changes first
terraform plan -target=aws_instance.example

# Update state to match actual infrastructure
terraform apply -refresh-only
```

When drift represents unauthorized changes, generate and apply a plan to revert:
```
# Create a narrowly targeted plan
terraform plan -target=aws_security_group.web_sg -out=tf.plan

# Review the plan carefully
terraform show tf.plan

# Apply if correct
terraform apply tf.plan
```

When resources were created outside Terraform, import them to bring them under IaC management:
```
# Import existing resource
terraform import aws_s3_bucket.data bucket-name
```

For Terraform 1.5+, use declarative import blocks instead:

```hcl
import {
  to = aws_instance.web
  id = "i-1234567890abcdef0"
}
```

Not all drift requires immediate attention. Establish a prioritization framework based on business impact and risk:
| Type | Priority | Example | Approach |
|---|---|---|---|
| Security-critical | P0 | Modified security groups, IAM policies | Immediate remediation |
| Business-critical | P1 | Changes to production databases, load balancers | Scheduled remediation |
| Configuration drift | P2 | Instance type changes, tag modifications | Batch remediation |
| Informational | P3 | Comment changes, cosmetic differences | Document for next update |
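A framework like this can be encoded so drift alerts are ranked automatically. A sketch (the type-to-priority mapping is illustrative and would be tuned per organization):

```python
PRIORITY_RULES = [
    ("P0", {"aws_security_group", "aws_iam_policy", "aws_iam_role"}),
    ("P1", {"aws_db_instance", "aws_lb", "aws_elasticache_cluster"}),
]

def classify(resource_type):
    """Map a drifted resource type to a remediation priority;
    anything unlisted lands in P2 (batch remediation)."""
    for priority, types in PRIORITY_RULES:
        if resource_type in types:
            return priority
    return "P2"

print(classify("aws_security_group"))  # P0 - immediate remediation
```

Feeding classifications like this into your alerting keeps P0 drift from drowning in P3 noise.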
Assign specific roles for drift management:

- A detection owner per environment who reviews drift reports
- A remediation approver who signs off on revert vs. align decisions
- An auditor (often security or compliance) who tracks recurring drift sources
While native Terraform commands provide a foundation, mature IaC management platforms offer significantly enhanced capabilities. The difference isn't just convenience -- it changes the operational model fundamentally. For a hands-on look at scheduled drift checks, see how to set up scheduled drift detection; for the integrated platform approach, see our deep dive into Scalr's platform architecture.
Before exploring platform solutions, it's worth understanding what manual drift detection actually requires at scale. The shell script running terraform plan -detailed-exitcode is the easy part. The hard part is everything around it:
Credential management. Every drift check needs valid cloud provider credentials. For AWS, that means IAM roles or access keys for each account. For multi-cloud setups, you're managing credentials across AWS, Azure, GCP, and whatever else you run. These credentials need rotation, and if they expire, your drift detection silently stops working.
State locking. When your drift detection script runs terraform plan, it acquires a state lock. If a developer triggers a real plan at the same time, one of them fails. At scale, this contention becomes a real problem -- your drift checks start interfering with production deployments.
Per-workspace configuration. Each Terraform workspace needs its own script invocation with the right backend config, variable files, and provider configuration. Adding a new workspace means updating your drift detection setup. Teams inevitably forget, and new workspaces go unmonitored.
Alerting and reporting. A cron job that prints "DRIFT DETECTED" to a log file isn't actionable. You need to parse the plan output, send structured alerts, track which workspaces have unresolved drift, and give someone a way to act on the findings. This is effectively building a dashboard from scratch.
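The parsing step, at least, is tractable: `terraform show -json <planfile>` emits a documented JSON format whose `resource_changes` list can be filtered for real changes. A sketch (the sample plan below is made up):

```python
import json  # in a real pipeline, used to parse `terraform show -json` output

def drifted_addresses(plan):
    """Return addresses of resources the plan would modify
    (anything whose change.actions is not a pure no-op)."""
    return [
        rc["address"]
        for rc in plan.get("resource_changes", [])
        if rc.get("change", {}).get("actions") not in ([], ["no-op"], None)
    ]

# Real pipeline: plan = json.loads(subprocess.check_output(
#     ["terraform", "show", "-json", "tf.plan"]))
sample = {"resource_changes": [
    {"address": "aws_s3_bucket.example", "change": {"actions": ["update"]}},
    {"address": "aws_instance.web", "change": {"actions": ["no-op"]}},
]}
print(drifted_addresses(sample))  # ['aws_s3_bucket.example']
```

That gets you the list of drifted resources; routing it into Slack, tracking it over time, and attaching remediation actions is the part that turns into a standing engineering project.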
Maintenance. Terraform CLI updates, provider version changes, backend configuration changes, and CI/CD platform migrations all break drift detection scripts. These scripts are never anyone's primary responsibility, so they break silently and stay broken for weeks.
If you're running fewer than 10 workspaces, all in one cloud provider, with a small team -- the manual approach is fine. Beyond that, the operational cost justifies a platform.
Scalr is an Infrastructure as Code management platform providing robust drift detection, reporting, and remediation options for both Terraform and OpenTofu environments. It treats drift detection as a first-class platform feature rather than something you bolt on with scripts.
Scalr employs flexible detection strategies:
This dual-source comparison catches more deviations than plan-based detection alone.
In Scalr, drift detection is enabled per environment, not per workspace. When you turn it on for your production environment and set a daily schedule, every workspace in that environment is automatically covered. New workspaces inherit the drift detection policy the moment they're created -- there's no separate configuration step to forget.
This is a meaningful architectural difference. Manual approaches require explicit opt-in per workspace. Scalr's environment-level model is opt-out -- you'd have to deliberately exclude a workspace from drift detection. The default is coverage, not gaps.
To set it up:

1. Open the environment's settings in Scalr
2. Enable drift detection and choose a schedule (for example, daily)
3. Optionally connect the Slack integration so detections generate alerts
There's no script to maintain, no credentials to manage separately, and no state lock conflicts -- Scalr coordinates drift checks with regular runs automatically.
Scalr deliberately does not provide fully automated remediation. The platform requires explicit user intervention, prioritizing safety and deliberate action. For each detection you choose to:

- Revert: queue a run that re-applies the coded configuration
- Sync: accept the live state into Terraform state, then align your code
- Ignore: dismiss the detection, with the decision recorded for audit
The advantage of handling remediation through a platform rather than CLI commands is visibility. When an engineer reverts drift from Scalr, the action is logged, attributed to a user, and visible to the team. When someone runs terraform apply from their laptop to fix drift, nobody else knows it happened.
| Capability | Manual (cron/CI) | Scalr |
|---|---|---|
| Setup per workspace | Script + config per workspace | None -- inherits from environment |
| New workspace coverage | Manual opt-in (often forgotten) | Automatic |
| Credential management | Separate service accounts | Reuses existing credentials |
| State lock handling | Contention with prod runs | Coordinated automatically |
| Alerting | Build your own | Native Slack integration |
| Org-wide visibility | Build your own dashboard | Built-in dashboards |
| Remediation | Separate CLI step | Integrated in same UI |
| Audit trail | CI/CD logs (if retained) | Full audit of detections + actions |
| Maintenance burden | Scripts break with TF/provider updates | Zero -- managed by platform |
| Best for | <10 workspaces, single cloud | 10+ workspaces, multi-team |
The inflection point is usually around 15-20 workspaces, or when a second team starts managing infrastructure. At that point, the operational cost of maintaining per-workspace scripts, managing credentials, and aggregating alerts exceeds the cost of adopting a platform.
The drift detection landscape offers multiple solutions, each with distinct philosophies and strengths.
| Feature | Scalr | env0 | Terramate | Driftive | Snyk IaC |
|---|---|---|---|---|---|
| Primary Focus | User-controlled drift mgmt | AI-powered analysis | Orchestration + auto-remediate | Notification-first detection | Unmanaged resources |
| Scheduled Detection | Yes (Native) | Yes (Native) | Yes (CI/CD config) | Manual/scripted | Yes (Integrated) |
| Unmanaged Resources | Not prioritized | Not prioritized | Limited | Limited | Yes (Primary) |
| Remediation | Ignore/Sync/Revert | Auto-policies & more | Automated reconcile | Manual via notifications | Manual |
| OpenTofu Support | Yes (Founding member) | Yes (Founding member) | Yes | Yes | Unconfirmed |
| Reporting & Alerts | UI/Dashboard/Slack | UI/Notifications/AI | Cloud UI/Slack | Slack/GitHub Issues | CLI/Snyk UI |
| Best For | Control-focused orgs | Deep analysis needs | High automation orgs | OSS/self-hosted | Shadow IT concerns |
For large AWS or multi-cloud environments, manual detection becomes impractical. Implement automated, scaled detection:
Schedule regular drift detection in your CI/CD pipeline:
```yaml
# GitHub Actions example
name: Terraform Drift Detection

on:
  schedule:
    - cron: '0 8 * * *'  # Daily at 8 AM

jobs:
  detect_drift:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      - name: Terraform Init
        run: terraform init
      - name: Check Drift
        run: |
          # -detailed-exitcode returns 2 on drift, which would fail the
          # step under the default errexit shell, so capture the code
          # instead of letting the shell abort.
          set +e
          terraform plan -detailed-exitcode -input=false
          exit_code=$?
          if [ "$exit_code" -eq 2 ]; then
            echo "Drift detected!"
            # Send notification to Slack/email
          elif [ "$exit_code" -eq 1 ]; then
            exit 1
          fi
```

For AWS Organizations:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": [
        "ec2:ModifyInstanceAttribute",
        "rds:ModifyDBInstance"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {"aws:ResourceTag/ManagedBy": "Terraform"}
      }
    }
  ]
}
```

This service control policy denies manual modification of resources tagged as Terraform-managed, stopping drift at the source. In practice you would also add a condition exempting your Terraform execution role's principal, or the pipeline itself will be denied.
Problem: Team members make emergency changes via the AWS console
Prevention:

- Restrict console write access with RBAC and SCPs
- Provide a fast, well-documented IaC path for emergencies so the console isn't the easy option
- Train the team on why out-of-band changes hurt
Recovery:

- Run terraform plan to see exactly what was changed by hand
- Decide per change: revert to the coded state, or align the code
- Use terraform import operations to bring any newly created resources under IaC

Problem: Auto Scaling Groups and managed services automatically modify resources
Solution:
Use lifecycle blocks to ignore expected changes:
```hcl
lifecycle {
  ignore_changes = [instance_type, tags]
}
```

Problem: terraform apply operations fail midway, leaving partial state
Solution:
- Use -target carefully, and rely on state locking so only one apply runs at a time
- Re-run the apply once the underlying failure is fixed rather than leaving state partial

Implement recovery procedures:

```
terraform apply -refresh-only       # Synchronize state
terraform plan -detailed-exitcode   # Validate state
```

Problem: Other systems modify AWS resources independently
Solution:

- Designate one authoritative tool per resource (or per attribute) and document it
- Add lifecycle { ignore_changes } for attributes owned by the other system
- Where possible, move the other tool's changes into the Terraform workflow instead
For compliance-grade evidence of who-changed-what, layer Terraform audit logs into the workflows below.
Leadership & Documentation:

- Document your drift policy: detection cadence, triage owners, and when to revert vs. align
- Make every remediation decision traceable to a person and a reason

Team Training:

- Teach the IaC workflow so the console isn't the default for urgent changes
- Run post-incident reviews on recurring drift sources

Tooling:

- Combine native commands with platform-based detection: scheduled terraform plan for baseline coverage, a platform for alerting, dashboards, and audit trails
- Create decision trees for different drift types so triage is consistent

Prevention:

- Prevention is vastly more efficient than remediation: GitOps discipline, least-privilege access, and Policy as Code stop most drift before it exists

Metrics:

- Maintain visibility into drift patterns: frequency per environment, most common sources, and time-to-remediation
Go deeper: For a hands-on implementation guide, see how to set up scheduled drift detection.
Infrastructure drift is an inevitable challenge in dynamic cloud environments. However, by combining diligent detection practices, strategic remediation, proactive prevention measures, and appropriate tooling, you can maintain infrastructure integrity, security, and reliability.
The journey from manual drift detection with Terraform commands to automated platform-based management with tools like Scalr represents the maturity progression most organizations follow. Start with native commands to understand your baseline, implement scheduled detection early, establish clear remediation procedures, and invest in prevention through GitOps discipline and policy enforcement.
By applying these practices, your Terraform infrastructure will remain securely aligned with your code, ensuring the IaC investment continues to deliver on its promise of stability, security, and speed--even as your infrastructure grows in complexity and scale.
