Fanatical Support for AWS
Product Guide

Caution

Information in this section refers to a future offering that may not be available at this time. This documentation may also be updated at any time. Please reach out to your Account Manager for more information.

Terraform Standards

General Style Guide

Terraform files should obey the syntax rules of the HashiCorp Configuration Language (HCL) and the general formatting guidelines provided by the Terraform project through the terraform fmt command.

In addition, Rackspace adheres to the following standards when writing Terraform (a brief sketch follows the list):

  • Use snake_case for all resource names
  • Declare all variables in variables.tf, including a description and type
  • Declare all outputs in outputs.tf, including a description
  • Pin all modules and providers to a specific version or tag
  • Always use relative paths and the file() helper
  • Prefer separate resources over inline blocks (e.g. aws_security_group_rule over aws_security_group)
  • Always define AWS region as a variable when building modules
  • Prefer variables.tf over terraform.tfvars to provide sensible defaults
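
As a hypothetical sketch of several of these conventions (the variable and output names are examples only, and the output assumes an aws_vpc.main resource exists in the module):

  # variables.tf -- snake_case names, each with a description and type
  variable "aws_region" {
    description = "AWS region in which resources are built."
    type        = "string"
    default     = "us-west-2"
  }

  # outputs.tf -- every output carries a description
  output "vpc_id" {
    description = "ID of the VPC managed by this module."
    value       = "${aws_vpc.main.id}"
  }

  # main.tf -- provider pinned to a specific version, region passed as a variable
  provider "aws" {
    region  = "${var.aws_region}"
    version = "~> 1.60"
  }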

Running terraform

  • All changes should be made through CI/CD tooling, and Terraform should not be run locally – especially terraform apply
  • Terraform versions and provider versions should be pinned, as it’s not possible to safely downgrade a state file once it has been used with a newer version of Terraform
  • Create “GitHub release” objects for releases; these automatically create tags, let us define release notes / a changelog, and provide download links
  • Store state in S3 (standard region) with DynamoDB locking, and with versioning and server-side encryption enabled in S3 (see the backend sketch after this list)
  • Standard bucket name: tf-state-(aws acct #)
  • All changes should be mapped to a specific AWS account, with each AWS account represented as a GitHub repo
  • Read-only state should be visible directly in S3
  • Repositories should always have a .terraform-version file.
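
As an illustrative sketch only (the account number, state key, and lock-table name are placeholders following the naming convention above), a backend configuration following these conventions might look like:

  terraform {
    backend "s3" {
      bucket         = "tf-state-123456789012"            # tf-state-(aws acct #)
      key            = "layers/000base/terraform.tfstate" # placeholder state key
      region         = "us-east-1"
      encrypt        = true                               # server-side encryption
      dynamodb_table = "tf-state-lock"                    # DynamoDB table used for state locking
    }
  }

  # Versioning is enabled on the S3 bucket itself, not in this block.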

Grouping state into layers

There are a few different designs employed in the Terraform community for how to structure your Terraform files, modules, and directories. The community around Terraform has written blog posts and spoken at HashiConf about how these patterns evolve over time; many of the recommendations focus on how to best group Terraform state. At Rackspace, we’ve built upon the existing best practices (e.g. ‘Terraservice’) and created a concept we call layers in order to isolate Terraform state.

What is a layer?

Put simply, a layer is a directory that is treated as a single Terraform configuration. It is a logical grouping of related resources that should be managed together by Terraform. Layers are placed in the layers/ directory inside an Account Repository. Our automation will perform all of the usual Terraform workflow steps (init, plan, apply) on each layer, alphabetically.
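
As a purely hypothetical illustration, an Account Repository might contain layers laid out like this; the numeric prefixes are one way to control the alphabetical order in which layers are processed:

  layers/
    000base/      # VPC, subnets, and other foundational resources
      main.tf
      variables.tf
      outputs.tf
    100data/      # database tier
    200web/       # web servers and load balancers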

In collaboration with experienced Rackers, you should carefully consider how to logically group the state of your AWS resources into layers; layers could represent environments like production or test, regions your application may be hosted in, application tiers like “database” or “web servers,” or even applications that share availability requirements.

Here are some considerations that Rackers will discuss with you when planning out your environment:

  • ensure resources frequently modified together are also grouped together in the same layer
  • keep layers small, to limit blast radius and ensure refreshing state is quick/safe
  • keep dependencies between layers simple, as changes must take dependencies into consideration manually
  • consider reading state from another layer using a data source (see the example after this list); never write to another layer’s state
  • for small environments, consider that a single layer may be acceptable, but moving resources between layers is hard
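
For instance, reading an output from another layer’s state can be done with the terraform_remote_state data source, which is read-only; the bucket, key, and output names below are placeholders:

  data "terraform_remote_state" "base" {
    backend = "s3"

    config {
      bucket = "tf-state-123456789012"
      key    = "layers/000base/terraform.tfstate"
      region = "us-east-1"
    }
  }

  # An output exported by the base layer is then referenced as:
  #   "${data.terraform_remote_state.base.vpc_id}"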

Writing and organizing Terraform with modules

Generally, Rackspace maintains modules for most common use cases, and uses these modules to build out your account. If we do not have a pre-existing module, the next best choice is to use the built-in aws_* resources offered by the AWS provider for Terraform. Please let us know if we don’t have a module or best practice for building out a specific resource or AWS product.

A common recommendation in the Terraform community is to think of modules as functions that take an input and provide an output. Modules can be built in your shared repository or in your account repositories. If you find yourself writing the same set of resources or functionality over and over again, consider building a module instead.
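
As a hypothetical sketch of this “module as function” idea (the source repository, tag, and input names are placeholders), a module is invoked with inputs and its outputs are read elsewhere, with the source pinned to a release tag:

  module "web_security_group" {
    source = "git::https://github.com/rackspace-infrastructure-automation/aws-terraform-security_group.git?ref=v1.0.0"

    environment = "${var.environment}"
    vpc_id      = "${var.vpc_id}"
  }

  # Outputs from the module are read as "${module.web_security_group.<output_name>}"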

When to consider writing a module:

  • When multiple resources should always be used together (e.g. a CloudWatch Alarm and EC2 instance, for autorecovery)
  • When Rackspace has a strong opinion that overrides default values for a resource
  • When module re-use remains shallow (don’t nest modules if at all possible)

Rackspace Module Standards

Rackspace maintains a number of Terraform modules available at https://github.com/rackspace-infrastructure-automation. Contributions should follow these guidelines.

  • use semantic versioning for shared code and modules
  • always point to GitHub releases (over a binary or master) when referencing external modules
  • always extend, don’t re-create resources manually
  • parameters control counts, for non-standard numbers of subnets/AZs/etc.
  • use overrides to implement Rackspace best practices
  • use variables with good defaults for things Rackspace expects to configure
  • Modules should use semantic versioning light (Major.minor.0) for AWS account repositories
  • Modules should be built using the standard files: main.tf, variables.tf, output.tf (see the layout sketch after this list)
  • Consider writing tests and examples, and shipping them in directories of the same name
  • Readme files should contain a description of the module as well as documentation of variables. An example of documentation can be found here.
  • The files in .circleci are managed by Rackspace and should not be changed. If you would like to submit a module, please do so without this folder.
  • The files in example can be named anything as long as they have .tf as the extension.
  • The tests directory must be called tests and each test must be named test#. Inside each test# folder there should be exactly one file called main.tf
  • Use GitHub’s .gitignore contents for Terraform.
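
Taken together, a module repository following these standards might be laid out as follows (everything beyond the required file names is illustrative):

  main.tf
  variables.tf
  output.tf
  README.md
  examples/
    basic_usage.tf   # any name, as long as the extension is .tf
  tests/
    test1/
      main.tf        # exactly one main.tf per test# folder
    test2/
      main.tf
  .circleci/         # managed by Rackspace; omit when submitting a module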

variables.tf

This file must include the following code block at the beginning or end of the file.

  variable "environment" {
    description = "Application environment for which this network is being created. one of: ('Development', 'Integration', 'PreProduction', 'Production', 'QA', 'Staging', 'Test')"
    type        = "string"
    default     = "Development"
  }
   variable "tags" {
    description = "Custom tags to apply to all resources."
    type        = "map"
    default     = {}
  }

main.tf

This file must include the following code block at the top of the file. Other variables can be added to this block.

  locals {
    tags = {
      Name            = "${var.name}"
      ServiceProvider = "Rackspace"
      Environment     = "${var.environment}"
    }
  }

In any resource block that supports tags, the following code should be used:

tags = "${merge(var.tags, local.tags)}"

This takes the tag values defined in variables.tf and combines them with any values defined in the locals block in main.tf.
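
For example, in a hypothetical resource that supports tags (the S3 bucket below is illustrative only):

  resource "aws_s3_bucket" "logs" {
    bucket = "${var.name}-logs"

    tags = "${merge(var.tags, local.tags)}"
  }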