Terraform Import: The Complete Guide to Importing Existing Infrastructure
Master Terraform import in minutes: map existing cloud resources into code, dodge common pitfalls, and streamline your IaC with this step-by-step guide.
Terraform import bridges the gap between manually created infrastructure and infrastructure as code. This comprehensive guide provides everything you need to successfully import existing resources into Terraform, from basic concepts to advanced techniques and modern best practices for 2026.
Why Import Existing Infrastructure
Terraform import allows you to bring existing infrastructure resources under Terraform management without rebuilding them. This functionality maps real-world resources created outside of Terraform to resource blocks in your configuration, creating a binding between the remote object and a resource instance in your state file.
Key Use Cases for Terraform Import
Terraform import is particularly valuable in several common scenarios:
- Migrating to IaC: When an organization adopts Terraform after already having existing infrastructure
- Moving between tools: When transitioning from other IaC tools like CloudFormation or Pulumi
- Handling emergency resources: For resources created through a cloud console during incidents
- Disaster recovery of state: If your Terraform state file becomes corrupted or lost
- Incremental adoption: Facilitating phased adoption of Terraform within an organization
- Refactoring state: When splitting large state files into smaller, more manageable pieces
Understanding the Fundamental Limitation
The fundamental limitation of Terraform import is that it only updates the state file; it doesn't automatically generate the corresponding configuration code (though newer versions provide options for this). You must still write or generate the resource configuration to match the imported state.
Understanding Import Methods
Two primary import methods are available in modern Terraform:
- The `terraform import` CLI command - Available in all Terraform versions; provides imperative, one-off imports
- The `import` block - Introduced in Terraform v1.5.0, enabling declarative, configuration-driven imports
Each approach has distinct advantages depending on your workflow and use case.
Terraform Import Command (Legacy)
The terraform import command allows you to link a remote, pre-existing resource to a corresponding resource block in your Terraform configuration. This method is still valuable for one-off, interactive imports and quick fixes.
Basic Syntax
terraform import aws_instance.example i-1234567890abcdef0
The basic syntax involves specifying the Terraform resource address (e.g., aws_instance.example) and the unique ID of the existing remote resource.
Common Import Flags
Here are the most useful flags for the terraform import command:
- `-config=path`: Specifies the path to the directory containing your Terraform configuration files
- `-input=true/false`: Determines whether Terraform should ask for interactive input (set to `false` for automation)
- `-lock=false`: Disables state locking (generally not recommended in collaborative environments)
- `-lock-timeout=0s`: Sets a duration to retry acquiring a state lock before failing
- `-no-color`: Disables colorized output
- `-parallelism=n`: Limits the number of concurrent operations (default is 10)
- `-var 'foo=bar'`: Sets a variable from the command line
Provider-Specific Examples
AWS EC2 Instance
terraform import aws_instance.web_server i-abcd1234
AWS S3 Bucket
terraform import aws_s3_bucket.data_lake my-data-lake
Azure Virtual Machine
Azure resource IDs are typically full paths:
terraform import azurerm_virtual_machine.app_server /subscriptions/xxx-xxx-xxx-xxx-xxx/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM
GCP Compute Instance
terraform import google_compute_instance.app_server projects/my-project/zones/us-central1-a/instances/app-server
Handling Computed and Default Attributes
When using terraform import, a common challenge arises with computed attributes or default settings. Computed attributes are values determined by the cloud provider after resource creation (like timestamps, default security group IDs, or ARN components).
Solution 1: Explicitly Ignoring Attributes
Use the ignore_changes lifecycle meta-argument to tell Terraform to disregard drift for specific attributes:
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
lifecycle {
ignore_changes = [
default_security_group_id,
tags_all,
]
}
}
Solution 2: Including Default Values in HCL
For attributes where you can reliably determine the default value, explicitly include that value in your HCL:
resource "aws_db_instance" "example" {
# ... other imported attributes
backup_retention_period = 7 # Explicitly set the provider's default
}
Import Blocks: The Modern Approach
Introduced in Terraform 1.5, the import block provides a declarative alternative integrated directly into the Terraform configuration language. This block allows you to define which resources should be imported as part of your normal terraform plan and apply workflow.
Why Import Blocks Matter
Import blocks revolutionized the import process by integrating imports into the standard plan/apply workflow:
| Feature/Aspect | Import Block (Modern) | terraform import CLI (Legacy) |
|---|---|---|
| Preview | Yes, see plan before changes | No, modifies state immediately |
| HCL Generation | Yes, auto-generate a draft | No, write all HCL manually |
| Safety | High, part of plan/apply workflow | Low, easy to make mistakes |
| CI/CD Integration | Works perfectly in pipelines | Risky and difficult |
| Version Control | Import definitions reviewable in PRs | No visibility into imports |
Basic Import Block Syntax
import {
to = aws_s3_bucket.legacy_bucket
id = "my-legacy-data-bucket"
}
When you run terraform plan and terraform apply with an import block present, Terraform performs the import operation. You can also generate configuration automatically using:
terraform plan -generate-config-out=generated.tf
Import Block with for_each
For importing multiple resources of the same type:
locals {
buckets = {
"staging" = "staging-bucket"
"uat" = "uat-bucket"
"prod" = "production-bucket"
}
}
import {
for_each = local.buckets
to = aws_s3_bucket.app_data[each.key]
id = each.value
}
resource "aws_s3_bucket" "app_data" {
for_each = local.buckets
bucket = each.value
}
Import Block into Modules
To import resources into modules:
import {
to = module.servers.aws_instance.app_server
id = "i-1234567890abcdef0"
}
Step-by-Step Import Block Workflow
Step 1: Prepare Import Block
Create an import block in your configuration specifying the resource to import:
import {
to = aws_s3_bucket.legacy_bucket
id = "my-legacy-data-bucket"
}
Step 2: Generate Configuration
Run plan with configuration generation enabled:
terraform plan -generate-config-out=generated.tf
Terraform will create a generated.tf file with configuration based on the live resource.
Step 3: Review and Refine Configuration
Do not blindly trust generated configuration. Review carefully:
- Clean it up: Remove computed attributes (like `arn`, `hosted_zone_id`) and default values
- Correct it: Fix any invalid syntax or errors
- Make it yours: Refactor to fit your team's standards and use variables
Example cleanup:
# BEFORE - Generated
resource "aws_s3_bucket" "legacy_bucket" {
bucket = "my-legacy-data-bucket"
bucket_domain_name = "my-legacy-data-bucket.s3.amazonaws.com" # Remove
hosted_zone_id = "Z3AQBSTGFYJSTF" # Remove
region = "us-east-1" # Remove
}
# AFTER - Cleaned
resource "aws_s3_bucket" "legacy_bucket" {
bucket = "my-legacy-data-bucket"
tags = {
Name = "Legacy Data Bucket"
Environment = "production"
}
}
Step 4: Run Validation Plan
terraform plan
The output should show "1 to import, 0 to add, 0 to change, 0 to destroy." If Terraform proposes any changes, adjust your HCL until the plan is clean.
Step 5: Apply the Import
terraform apply
Once approved, the import becomes permanent.
Step 6: Verify and Clean Up
Verify the import succeeded:
terraform state show aws_s3_bucket.legacy_bucket
You can now remove the import block from your configuration—it's a one-time operation.
Common Mistakes to Avoid with Import Blocks
- Blindly trusting generated code - Always review and clean up generated configuration
- Forgetting the resource block - Import blocks require a corresponding `resource` block
- Leaving import blocks in configuration - Remove them after successful import
- Wrong resource addressing - Missing array indices for `count` or keys for `for_each`
- Using dynamic values - All import block values must be known at plan time; no data sources or computed values
Azure aztfexport Tool
For Azure environments, Microsoft provides aztfexport (formerly aztfy), a command-line tool designed to scan existing Azure resources, generate corresponding Terraform HCL code, and create a state file.
What is Aztfexport
Azure Export for Terraform (aztfexport) is an open-source tool from Microsoft that:
- Scans existing Azure resources
- Generates corresponding Terraform HCL code
- Creates a state file mapping to live infrastructure
- Supports both `azurerm` and `azapi` Terraform providers
How Aztfexport Works
The tool combines three components:
- `aztft`: A Go program that maps Azure resource IDs to Terraform resource types
- `terraform import`: The standard Terraform command that reads resource configuration from Azure
- `tfadd`: A Go component that reads state and generates associated HCL code
Installation
Linux (apt - Debian/Ubuntu):
curl -sSL https://packages.microsoft.com/keys/microsoft.asc | sudo tee /etc/apt/trusted.gpg.d/microsoft.asc > /dev/null
sudo apt-add-repository https://packages.microsoft.com/ubuntu/20.04/prod
sudo apt-get update
sudo apt-get install aztfexport
Linux/macOS (Homebrew):
brew install aztfexport
Windows (winget):
winget install aztfexport
Basic Commands
Export a Resource Group:
aztfexport resource-group myRG
Use the -n flag for non-interactive mode with large resource groups.
Export using Azure Resource Graph Query:
aztfexport query "resourceGroup =~ 'myRG' and type =~ 'microsoft.network/virtualnetworks'"
Export a Single Resource:
aztfexport resource /subscriptions/your-sub-id/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM
The tool generates .tf files, a terraform.tfstate file, and a JSON mapping file (aztfexportResourceMapping.json).
Primary Use Cases
- IaC Adoption Acceleration: Automates the initial import of existing resources
- Migration Support: Provides a snapshot of legacy resources for reference
- Configuration Auditing: Generated HCL documents point-in-time resource configurations
- Learning Aid: Practical way to see how Azure resources are defined in HCL
Post-Export Refactoring Requirements
Note that Microsoft states the generated code is not intended to be fully reproducible from scratch. The generated output requires manual review and refactoring:
- Rename resources to align with standards
- Replace hardcoded IDs with dynamic references
- Introduce variables for parameterization
- Organize code into logical modules
- Secure sensitive data that may have been exported
- Ensure compliance with security and coding standards
Comparison: Aztfexport vs Terraform Import
| Feature | aztfexport | terraform import (CLI) | terraform import (Block) |
|---|---|---|---|
| HCL Code Generation | Automated | Manual | Automated |
| State Import | Automated | Automated | Automated |
| Resource Discovery | Supported (RG, Query, Interactive) | Manual | Manual identification |
| Bulk Operations | High | Low | Medium |
| Manual Effort | Medium (refinement) | Very High | Medium |
Step-by-Step Import Workflow
Phase 1: Preparation
Before importing resources, complete these essential preparation steps:
- Inventory existing resources you want to bring under Terraform management
- Identify dependencies between resources to determine import order
- Verify access and permissions to view and manage target resources
- Install Terraform (v0.12+ for CLI import, v1.5+ for import blocks)
- Configure providers with necessary authentication
- Establish import naming conventions for consistency
Phase 2: Configuration Setup
For each resource you plan to import, create a minimal resource block:
resource "aws_instance" "web_server" {
# Minimal required configuration - will be filled in later
}
The resource address must exactly match what you'll use in the import.
Phase 3: Identify Resource ID Format
Each resource type has a specific format for its import ID. This varies by provider:
- AWS: Often simple IDs (e.g., `i-1234567890abcdef0` for EC2 instances)
- Azure: Full resource paths (e.g., `/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Compute/virtualMachines/{vm-name}`)
- GCP: Various formats including project information (e.g., `projects/{project}/zones/{zone}/instances/{instance}`)
Always consult provider documentation for the exact format.
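As a sketch, the same three ID formats expressed as import blocks (all addresses, names, and IDs below are illustrative placeholders, not real resources):

```hcl
import {
  to = aws_instance.web
  id = "i-1234567890abcdef0" # AWS: plain resource ID
}

import {
  to = azurerm_linux_virtual_machine.app
  # Azure: full ARM resource path
  id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM"
}

import {
  to = google_compute_instance.app
  id = "projects/my-project/zones/us-central1-a/instances/app-server" # GCP: project/zone-qualified path
}
```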
Phase 4: Execute Import
Using import blocks (recommended for modern workflows):
terraform plan -generate-config-out=generated.tf
Or using the CLI command (for one-off imports):
terraform import aws_instance.web_server i-1234567890abcdef0
Phase 5: Refine Configuration
Run terraform plan to identify differences:
terraform plan
Update your configuration to align with the imported state. Continue until plan shows no changes.
Phase 6: Verify Successful Import
Confirm the import:
terraform state list
terraform state show aws_instance.web_server
Successful import means the resource appears in state with all imported attributes.
Handling Import Conflicts
Resource Already Exists in State
If a resource already exists in your state file with a different address:
# Remove from old address
terraform state rm aws_instance.old_name
# Then import to new address
terraform import aws_instance.new_name i-1234567890abcdef0
Resolving Import Errors
Error: Cannot import non-existent remote object
- Verify resource ID format matches provider documentation
- Check permissions to access the resource
- Confirm the resource actually exists in the cloud provider
Error: Resource address does not exist in the configuration
- Create the resource block before importing
- Ensure the resource block address matches the import statement exactly
Error: Error acquiring the state lock
- Check for other running Terraform processes
- Verify state locking mechanism is working correctly
- Review remote backend configuration
Error: Invalid provider configuration
- Ensure provider configuration only depends on variables, not data sources
- Check authentication credentials and permissions
- Verify provider version compatibility
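A minimal sketch of the variable-only rule, using a hypothetical AWS provider setup — provider arguments must be resolvable before any resources exist, so they should come from variables rather than computed values:

```hcl
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

provider "aws" {
  # Fine: known at plan time, before the import runs.
  region = var.aws_region

  # Avoid: anything derived from data sources or resource attributes,
  # which is only computed during the run.
}
```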
Handling Dependency Errors
When importing resources with dependencies:
- Import dependencies first (VPC, subnets, IAM roles)
- Use the `-target` flag to apply subsets of configuration
- For circular dependencies, temporarily remove references and reestablish them after import
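The ordering advice can be sketched with import blocks. Because the subnet's HCL references the VPC resource, Terraform sequences the operations correctly even when both imports sit in one plan (all IDs here are hypothetical):

```hcl
import {
  to = aws_vpc.main
  id = "vpc-0abc1234"
}

import {
  to = aws_subnet.app
  id = "subnet-0def5678"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "app" {
  # The reference to aws_vpc.main.id establishes the dependency order.
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}
```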
State Management During Import
How State Works with Import
Terraform state is central to how import works:
- State-only operation: Import updates the state file but not the configuration
- One-to-one mapping: Each remote resource should be imported to only one resource address
- Complex imports: Some resources result in multiple related resources in state
- State locking: Imports respect state locking to prevent concurrent modifications
- State backends: Import works with all backends (local, S3, Azure, GCS, etc.)
Remote Backend Considerations
When using remote backends like Terraform Cloud or Scalr, ensure your backend configuration is in place:
terraform {
backend "remote" {
hostname = "my-account.scalr.io"
organization = "<environment-id>"
workspaces {
name = "<workspace-name>"
}
}
}
Once configured, imported resources are added to the remote state automatically.
State Manipulation After Import
After importing, you might need to reorganize your state:
- Splitting state: Move resources between state files using `terraform state mv` with `-state` and `-state-out`
- Renaming resources: Use `terraform state mv` to rename without destroying
- Removing from state: Use `terraform state rm` to remove without destroying
terraform state mv aws_instance.old_name aws_instance.new_name
terraform state mv aws_instance.standalone module.servers.aws_instance.server
Importing at Scale
Bulk Import Strategies
Using Third-Party Tools
Terraformer is a multi-cloud tool for bulk imports:
# Install Terraformer
go install github.com/GoogleCloudPlatform/terraformer@latest
# Import all EC2 instances in a region
terraformer import aws --resources=ec2_instance --regions=us-west-2
Using for_each with Import Blocks
For bulk imports in modern Terraform:
locals {
instances = {
"web" = "i-12345678"
"app" = "i-23456789"
"db" = "i-34567890"
}
}
import {
for_each = local.instances
to = aws_instance.servers[each.key]
id = each.value
}
resource "aws_instance" "servers" {
for_each = local.instances
# configuration...
}
Scripted Bulk Imports
For large-scale imports using the CLI:
#!/bin/bash
for instance_id in i-12345678 i-23456789 i-34567890; do
terraform import aws_instance.server_${instance_id} ${instance_id}
done
Resource Organization at Scale
- Logical separation: Group resources by service, application, or team
- Import order: Import foundation resources first (networking, IAM) before dependent resources
- State segmentation: Consider splitting resources across multiple state files
- Batch imports: For large infrastructures, import in stages across multiple planned sessions
Refactoring with Moved Blocks
Terraform's moved block feature (v1.1+) allows you to safely rename or relocate resources without destroying and recreating them. This is particularly useful when organizing imported resources.
Basic Moved Block Syntax
moved {
from = aws_instance.old_name
to = aws_instance.new_name
}
Practical Refactoring Examples
Renaming a Resource
resource "aws_security_group" "api_security_group" {
name = "api-security-group"
}
moved {
from = aws_security_group.sg
to = aws_security_group.api_security_group
}
Moving a Resource into a Module
# After creating module/storage/main.tf with the resource
module "storage" {
source = "./modules/storage"
bucket_name = "application-logs"
}
moved {
from = aws_s3_bucket.logs
to = module.storage.aws_s3_bucket.logs
}
Converting from count to for_each
locals {
servers = {
"web" = {}
"api" = {}
}
}
resource "aws_instance" "server" {
for_each = local.servers
}
moved {
from = aws_instance.server[0]
to = aws_instance.server["web"]
}
moved {
from = aws_instance.server[1]
to = aws_instance.server["api"]
}
Moved Blocks Best Practices
- Document and retain: Keep moved blocks in code indefinitely with explanatory comments
- Use chained moves: Document the full history when resources move multiple times
- Plan before applying: Always run `terraform plan` to verify the moves are correct
- Refactor incrementally: Move resources in smaller batches rather than all at once
- Consider module shims: Create shim modules for backward compatibility when making breaking changes to modules
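A sketch of the chained-moves practice, assuming a hypothetical bucket that was first renamed and later relocated into a module — keeping both blocks documents the full history for anyone upgrading from an older state:

```hcl
# Step 1 (historical): the resource was renamed.
moved {
  from = aws_s3_bucket.bucket
  to   = aws_s3_bucket.logs
}

# Step 2 (later): the renamed resource was moved into a module.
moved {
  from = aws_s3_bucket.logs
  to   = module.storage.aws_s3_bucket.logs
}
```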
For comprehensive details on moved blocks and advanced refactoring scenarios, see the dedicated article: Terraform Moved Blocks: Refactoring Without Pain
Best Practices and Common Pitfalls
Best Practices Summary
Resource Organization Strategy
- Logical separation: Group resources by service, application, or team
- Import order: Import foundation resources first (networking, IAM) before dependent resources
- State segmentation: Consider splitting resources across multiple state files for better manageability
Configuration Management
- Minimal initial configuration: Start with only required attributes, then expand
- Configuration pruning: Remove attributes managed by the provider or defaulted values
- Version control: Commit configuration before and after import with clear change descriptions
Team Coordination
- Document imports: Maintain an import log documenting what was imported and when
- Staged imports: For large infrastructures, import in stages across multiple planned sessions
- Approval workflows: Implement approval processes for import operations affecting production
Automation Strategies
- Scripted imports: For bulk operations, use scripts to automate sequential imports
- For-each import blocks: Use Terraform 1.5+ import blocks with for_each for multiple resources
- CI/CD integration: Use import blocks in CI/CD pipelines for consistent, reproducible imports
State Management
- State backup: Always back up state before large import operations
- Remote state: Use remote state storage with locking for team environments
- State manipulation: Use `terraform state mv` and `terraform state rm` carefully for reorganization
Common Pitfalls to Avoid
Frequent Mistakes
- Incorrect resource ID format - Always verify the exact format required by your provider
- Missing resource block - Create the resource block before importing
- Trusting generated code blindly - Always review and clean generated configuration
- Mixing old and new import methods - Choose one approach for consistency
- Not handling computed attributes - Use `ignore_changes` or explicit defaults
- Forgetting dependencies - Import in the correct order
- Concurrent state operations - Ensure only one Terraform operation at a time
Troubleshooting Techniques
Enable debug logging:
export TF_LOG=TRACE
export TF_LOG_PATH=terraform.log
Inspect state:
terraform state list
terraform state show <resource_address>
Validate configuration:
terraform validate
terraform fmt
When Not to Use Import
Consider alternatives when:
- Resources are easily recreatable: If the resource is simple to recreate from scratch
- Core infrastructure with high risk: Importing networking or IAM can be risky; consider creating parallel resources
- Uncertain existing configuration: If you don't fully understand the resource configuration, importing can lead to unexpected changes
Provider-Specific Considerations
Each provider has unique import requirements:
- AWS Provider 4.0+: S3 buckets split into multiple resources; must import separately
- Azure: Resources require full ARM IDs with proper escaping in PowerShell
- GCP: Resources may need project ID in import identifier even if set in provider
- Kubernetes: Different approach using `kubernetes_resource` or manual state patching
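For the AWS provider 4.0+ S3 split, here is a sketch of importing a bucket together with its versioning sub-resource (the bucket name is hypothetical; in the AWS provider, both resources import by bucket name):

```hcl
import {
  to = aws_s3_bucket.data
  id = "my-data-bucket"
}

# Versioning is now a separate resource and must be imported on its own.
import {
  to = aws_s3_bucket_versioning.data
  id = "my-data-bucket"
}

resource "aws_s3_bucket" "data" {
  bucket = "my-data-bucket"
}

resource "aws_s3_bucket_versioning" "data" {
  bucket = aws_s3_bucket.data.id
  versioning_configuration {
    status = "Enabled"
  }
}
```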
Integration with Infrastructure as Code Workflows
Incremental Adoption Strategy
Import facilitates a phased approach to IaC adoption:
- Assessment: Inventory existing infrastructure and dependencies
- Prioritization: Identify critical resources to import first
- Import foundation: Import base infrastructure (networking, IAM)
- Import applications: Import application resources
- Standardize: Refactor configuration to follow standards
- Expand coverage: Incrementally bring more resources under management
CI/CD Integration
With import blocks, Terraform import integrates seamlessly with CI/CD pipelines:
- Detect: Identify resources to import
- Configure: Generate import blocks and initial configuration
- Review: Create PR for team review
- Plan: Run `terraform plan` to preview the import
- Apply: Execute the import after approval
GitOps Methodology
Import enables GitOps workflows for existing infrastructure:
- Repository setup: Create infrastructure repository with configurations
- Import process: Import existing resources into Terraform state
- Version control: Commit configurations to Git
- CI/CD integration: Set up pipelines for plan/apply
- Pull request workflow: All changes go through PRs
2026 Best Practices and Modern Considerations
Recommended Approach in 2026
As of 2026, the recommended approach incorporates:
- Default to import blocks over CLI commands for new work
- Use `-generate-config-out` to accelerate configuration creation
- Implement policy-as-code to validate imported resources match standards
- Integrate with OIDC-based authentication for keyless provider access
- Use workspace patterns to organize imported resources by environment
- Implement drift detection to identify configuration changes after import
- Document import decisions in architecture decision records (ADRs)
Security Considerations
When importing, consider:
- Credential exposure: Never commit sensitive values; use variable files or secret management
- Audit logging: Enable provider audit logging to track import operations
- Access control: Limit who can perform import operations using RBAC
- State encryption: Ensure remote state is encrypted at rest and in transit
- Sensitive attributes: Use `sensitive = true` for imported secrets
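A minimal sketch of the sensitive-attribute practice (the resource and variable names are illustrative). Marking the variable sensitive redacts it from plan output, though the value still lands in plaintext in state — which is why state encryption matters:

```hcl
variable "db_password" {
  type      = string
  sensitive = true # redacted in plan/apply output
}

resource "aws_db_instance" "example" {
  # ... other imported attributes
  password = var.db_password
}
```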
Performance Optimization
For large-scale imports:
- Parallelism: Use the `-parallelism` flag to control concurrent operations
- Batching: Import resources in logical batches to reduce plan/apply time
- Caching: Enable provider caching where supported
- State backend optimization: Choose backends optimized for your use case
Conclusion
Terraform import provides a crucial bridge between existing infrastructure and the infrastructure-as-code paradigm. While it has challenges, particularly around configuration generation and complex resource relationships, continuous improvements have made the process increasingly streamlined—especially with the introduction of import blocks in version 1.5.
By following the step-by-step processes and best practices outlined in this guide, you can successfully bring your existing infrastructure under Terraform management, unlocking the benefits of version control, reproducibility, and automation for your entire infrastructure landscape.
The modern approach emphasizes using declarative import blocks within your normal Terraform workflow, leveraging automated configuration generation, and maintaining proper documentation and refactoring practices. Whether you're migrating a small set of resources or managing a large brownfield infrastructure estate, the tools and techniques covered here provide a comprehensive pathway to successful IaC adoption.
Additional Reading
- Using Terraform Moved blocks to refactor