Scalr
April 17, 2024

Why Terraform Concurrency Matters

By Ryan Fee

One of the main benefits of using Terraform is the speed at which engineers can develop code and deploy it. A single module can be used to make changes across thousands of resources impacting hundreds of applications. The problem is that Terraform is only as fast as the system it is running on.

Limited concurrency doesn't just slow down how fast resources can be delivered; it also hurts the developer experience. As a developer, you might be under pressure to get changes out the door as fast as possible during a maintenance window or an incident, and the last thing you want is to be throttled by the concurrency limits of the system that is deploying Terraform.

How Scalr Previously Handled Concurrency

Every Scalr account automatically gets five concurrent runs when subscribing to a paid plan. While we would love to give everyone unlimited run concurrency, we have to have controls in place to prevent abuse of the system, one of our core principles. However, those concurrency limits also counted runs executing on self-hosted agents, which put no real load on the core of Scalr.io.

Because those two points were somewhat in conflict, we decided to change how we handle concurrency for self-hosted agents.

How Scalr Now Handles Concurrency

We decoupled the concurrency applied to the Scalr.io runners from that of the self-hosted agents. Each agent can run up to five concurrent runs at a time, and our customers are in full control of how many agents they deploy.

For example, if you have 5 concurrent runs on the Scalr.io runners and you deploy 5 agents, you now have 30 concurrent runs at no extra cost.
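To make the arithmetic concrete, here is a minimal illustrative sketch (not Scalr code, just the formula described above), assuming the per-agent limit of five runs:

  # Illustrative only: total concurrency = Scalr.io runner slots + (agents x runs per agent)
  def total_concurrency(scalr_io_runs: int, agents: int, runs_per_agent: int = 5) -> int:
      return scalr_io_runs + agents * runs_per_agent

  print(total_concurrency(scalr_io_runs=5, agents=5))    # 30, as in the example above
  print(total_concurrency(scalr_io_runs=15, agents=10))  # 65, as in the customer example below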

What Was the Result?

While we knew this would have a positive impact on the developer experience, we didn't expect the effect to be as dramatic as it was. Across our entire user base we saw the following:

  • The number of agents deployed went up by 114%.
  • The average wait time from when a run was triggered to when it actually started executing went from 39 seconds to 0 seconds.

Looking deeper at one of the most impacted customers, a large hardware retailer:

  1. This customer had 15 concurrent runs before we made the change. With 10 agents deployed across their account, they instantly went from 15 concurrent runs to 65 when the limits were removed.
  2. In the 20 days leading up to the release of this feature, they had an average wait time of 5 minutes and 48 seconds across 3,670 runs, a total delay of 355 hours (a quick check of this arithmetic follows the list).
  3. Since we removed the concurrency limits from the agents, the customer has had an average wait time of 0.14 milliseconds across 3,330 runs. Their wait time has gone from 355 hours to essentially zero.
  4. We see this same result over and over across many of our customers, greatly improving the overall developer experience.
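The 355-hour figure is simply the stated average multiplied by the run count; here is a quick back-of-the-envelope check, assuming only the numbers quoted above:

  # Rough check of the cumulative wait time quoted above
  runs = 3670
  avg_wait_s = 5 * 60 + 48        # 5 minutes 48 seconds per run
  total_delay_h = runs * avg_wait_s / 3600
  print(round(total_delay_h))     # ~355 hours of cumulative waiting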

Compared to the Competition

Scalr is a drop-in replacement for Terraform Cloud. See the table below for how the Scalr offering compares to TFC:

Comparison of Terraform Cloud & Scalr's Agent & Concurrency Availability by Plan

By giving customers control over their concurrency, Scalr helps them deliver to their own customers faster and improve their developer experience. Don't be limited by concurrency for no reason: sign up for Scalr and give it a try today.

Disclaimer: All information was accurate as of April 10th, 2024, based on publicly available information.

Note: While this blog references Terraform, everything mentioned here also applies to OpenTofu. New to OpenTofu? It is a fork of Terraform 1.5.7, created after HashiCorp changed the license from MPL to BUSL. OpenTofu is an open-source alternative to Terraform governed by the Linux Foundation. All features available in Terraform 1.5.7 or earlier are also available in OpenTofu. Find out the history of OpenTofu here.

