
API-Driven Workflows for Terraform and OpenTofu

Learn how to take an API-driven approach, with examples
Ryan Fee, February 27, 2024

This post is part of a series on What is OpenTofu?.

In Scalr, there are well-known workflows for executing a Terraform or OpenTofu run, such as CLI-driven or VCS-driven (version control system), but today we are focusing on one of the lesser-known options: API-driven. The key difference with an API-driven run, as opposed to a VCS-driven one, is that you are in complete control of when it happens: it is a push-based model, since you are the one making the API call. Let's take a look at some examples of why you would use an API-driven run.

Real-time or Event-Driven Infrastructure Provisioning

In certain scenarios, infrastructure provisioning needs to be triggered in real-time or based on specific events. An API-driven run can be initiated programmatically in response to these events, ensuring that the cloud infrastructure aligns with current needs. This is especially relevant in dynamic environments where manual intervention is not practical or feasible.

Integration with External Systems

When tight integration with external systems is required, using the API directly might be preferable. This is common in cases where Terraform needs to interact with custom scripts, external APIs, or other tools that aren't directly connected to the VCS provider. An API-driven approach allows for flexibility and customization in handling these integrations. For example, if an alert goes off in New Relic, you might have a specific workspace and configuration update that needs to be made in response to that alert. We also have a few customers integrate with Slack to kick off runs through Slack based on certain scenarios.
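To make the triggering pattern concrete, here is a hedged sketch of what an alert handler might do. The endpoint and payload shape mirror the script later in this post; the alert-handling logic itself (the `severity` field and the critical/non-critical rule) is hypothetical.

```python
import requests

def build_run_payload(ws_id, cv_id, dry_run=True):
    """Build the JSON:API body for a Scalr run request (same shape as the script below)."""
    return {
        'data': {
            'attributes': {'is-dry': dry_run},
            'relationships': {
                'configuration-version': {
                    'data': {'type': 'configuration-versions', 'id': cv_id}
                },
                'workspace': {
                    'data': {'type': 'workspaces', 'id': ws_id}
                },
            },
            'type': 'runs',
        }
    }

def handle_alert(alert, base_url, headers, ws_id, cv_id):
    # Hypothetical rule: critical alerts trigger a full run, anything else a plan only.
    dry = alert.get('severity') != 'critical'
    payload = build_run_payload(ws_id, cv_id, dry_run=dry)
    return requests.post(f'{base_url}/api/iacp/v3/runs', headers=headers, json=payload)
```

The same builder could sit behind a Slack slash command or any other webhook receiver; only the decision about `dry_run` changes.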

Custom Workflows and Complex Automation

If your infrastructure management requires complex automation workflows or custom processes that go beyond the capabilities of a VCS-driven approach, an API-driven run offers more flexibility. You can orchestrate intricate sequences of actions, integrate with external systems, and handle specific use cases that may not be easily achieved through VCS alone. It is not uncommon to see the API approach used with tools such as GitHub Actions, Harness, Bamboo, or Jenkins.

Let's walk through an example of how the API-driven workflow works.

Example

In this example, we are going to create a new workspace, upload a new configuration version (which can be Terraform or OpenTofu), and then execute a run.

Prerequisites

To start, you'll need to obtain values for the following objects in Scalr:

  • API token
  • URL for your Scalr account
  • Workspace name
  • Environment ID
  • Terraform or OpenTofu configuration files packaged in a tar.gz archive
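The tar.gz archive can be produced with any standard tooling; a minimal sketch in Python (the directory name is whatever holds your configuration files):

```python
import os
import tarfile

def archive_configuration(config_dir, output_path='config.tar.gz'):
    """Package a Terraform/OpenTofu configuration directory into a tar.gz archive,
    with paths stored relative to the directory root."""
    with tarfile.open(output_path, 'w:gz') as tar:
        # arcname='.' keeps main.tf etc. at the root of the archive,
        # which is where Scalr expects them.
        tar.add(config_dir, arcname='.')
    return output_path
```

Write the archive outside of `config_dir` so it does not end up inside itself.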

Script

Below is a sample script that you can modify to your liking; it walks through the basic steps to get started. Make sure to update token, base_url, env_id, ws_id, and upload_archive_path with values specific to your Scalr account. You can also set the is_dry_run flag based on the type of run you want to execute: a dry run (speculative plan) or a full Terraform apply.
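The workspace itself can also be created through the API, which is why the prerequisites include a workspace name and an environment ID. As a hedged sketch (the workspaces endpoint and payload shape are assumed to follow the same JSON:API conventions as the calls in the script below; check the Scalr API reference for the exact schema), that step might look like:

```python
import requests

def build_workspace_payload(name, env_id):
    """JSON:API body for creating a workspace in a given environment.
    Shape is assumed from the configuration-version and run payloads used later."""
    return {
        'data': {
            'type': 'workspaces',
            'attributes': {'name': name},
            'relationships': {
                'environment': {
                    'data': {'type': 'environments', 'id': env_id}
                }
            },
        }
    }

def create_workspace(base_url, headers, name, env_id):
    response = requests.post(f'{base_url}/api/iacp/v3/workspaces',
                             headers=headers,
                             json=build_workspace_payload(name, env_id))
    if response.status_code != 201:
        raise Exception(f"Error: {response.status_code} - {response.text}")
    return response.json()['data']['id']
```

The returned ID would then be used as ws_id in the script below.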

import requests
 
token = ''  # Scalr API token
base_url = 'https://example.scalr.io'  # your Scalr account URL (no trailing slash)
headers = {
    'Prefer': 'profile=preview',
    'accept': 'application/vnd.api+json',
    'content-type': 'application/vnd.api+json',
    'Authorization': f'Bearer {token}'
}
env_id = ''  # environment ID
ws_id = ''  # workspace ID
is_dry_run = True  # True = speculative plan only; False = full apply
upload_archive_path = ''  # path to the tar.gz containing your configuration files
 
## Create a configuration version (CV)
url = f'{base_url}/api/iacp/v3/configuration-versions'
data = {
    'data': {
        'attributes': {
            "auto-queue-runs": False,
        },
        'relationships': {
            'workspace': {
                'data': {
                    'type': 'workspaces',
                    'id': ws_id
                }
            }
        },
        'type': 'configuration-versions'
    }
}
 
response = requests.post(url, headers=headers, json=data)
 
cv_id = None
if response.status_code == 201:
    # Successful request
    result = response.json()
    # Process the response data
    print(result)
    cv_id = result['data']['id']
else:
    # Request failed
    raise Exception(f"Error: {response.status_code} - {response.text}")
 
upload_url = result['data']['links']['upload']
print(upload_url)
 
with open(upload_archive_path, 'rb') as archive:
    upload = requests.put(upload_url, headers={'Content-Type': 'application/octet-stream'}, data=archive)
if upload.status_code not in (200, 201):
    raise Exception(f"Upload failed: {upload.status_code} - {upload.text}")
print(upload.status_code)
 
## create run
url = f'{base_url}/api/iacp/v3/runs'
data = {
    'data': {
        'attributes': {
            "is-dry": is_dry_run,
        },
        'relationships': {
            'configuration-version': {
                'data': {
                    'type': 'configuration-versions',
                    'id': cv_id
                }
            },
            'workspace': {
                'data': {
                    'type': 'workspaces',
                    'id': ws_id
                }
            }
        },
        'type': 'runs'
    }
}
 
response = requests.post(url, headers=headers, json=data)
if response.status_code == 201:
    # Successful request
    result = response.json()
    # Process the response data
    print(result)
else:
    # Request failed
    raise Exception(f"Error: {response.status_code} - {response.text}")

It's important to note that the same flow works whether the workspace is newly created through the API or already exists; the script simply targets whatever workspace ID you supply. There are also many more workspace settings that control whether runs execute immediately or wait for a separate approval request. All documentation on the Scalr API can be found in the Scalr API reference.
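If you want your automation to wait for the outcome instead of just firing the run, you can poll the run endpoint. A sketch, assuming a GET on the same runs endpoint used above; the status names here are illustrative, so check the Scalr API reference for the authoritative list:

```python
import time
import requests

# Illustrative terminal states; consult the Scalr API docs for the exact set.
TERMINAL_STATUSES = {'applied', 'planned', 'errored', 'canceled', 'discarded'}

def is_terminal(status):
    """Return True once a run has reached a state it will not leave."""
    return status in TERMINAL_STATUSES

def wait_for_run(base_url, headers, run_id, interval=10, timeout=1800):
    """Poll GET /runs/{id} until the run reaches a terminal status or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = requests.get(f'{base_url}/api/iacp/v3/runs/{run_id}', headers=headers)
        r.raise_for_status()
        status = r.json()['data']['attributes']['status']
        if is_terminal(status):
            return status
        time.sleep(interval)
    raise TimeoutError(f'Run {run_id} did not finish within {timeout}s')
```

A CI job would typically fail its step when the returned status is anything other than applied (or planned, for a dry run).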

Summary

In many cases, a combination of VCS and API-driven runs might be the most effective approach. VCS remains a powerful tool for versioning and managing infrastructure as code configurations, while API-driven runs provide the agility and flexibility needed for dynamic and real-time changes. The choice between these approaches should be based on the specific requirements of your infrastructure and workflows.

About the author
Ryan Fee is the director of platform engineering at Scalr, with over 15 years of experience improving infrastructure experiences at companies large and small.