Fanatical Support for AWS
Product Guide

Important

Information in this section refers to an offering that is currently in Limited Availability. This documentation may also be updated at any time. Please reach out to your Account Manager for more information.

Terraform Standards

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. https://www.terraform.io/intro/index.html

Making changes with Terraform

Rackspace strongly recommends that all changes be made through CI/CD tooling. Terraform should not be run locally except in extreme cases; this applies especially to terraform apply. Because every repository contains a .terraform-version file and is named for a specific AWS account, our tooling ensures that the correct version of Terraform is executed against the correct account.

As mentioned in the Using GitHub section of this documentation, there is also a shared repository for Terraform modules you may wish to reuse across multiple accounts. Rackspace will create “GitHub release” objects in this repository, which automatically makes tags that we can use to reference specific modules at specific points in time.
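For example, a module in the shared repository can be referenced from an account repository by pinning its source to a release tag (the organization, repository, and module names below are illustrative):

```hcl
# Reference a shared module, pinned to the release tag "1.2.0".
# Organization, repository, and module path are illustrative.
module "vpc" {
  source      = "git::https://github.com/example-org/shared-modules.git//vpc?ref=1.2.0"
  environment = "Production"
}
```

Because the source points at an immutable tag, re-running Terraform later will always use the same module code until the pin is deliberately updated.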

Please see the later part of this document for specific standards Rackspace recommends when creating Terraform modules and Terraform configuration.

Grouping state into layers

There are a few different designs employed in the Terraform community for how to structure your Terraform files, modules, and directories. The community around Terraform has written blog posts and spoken at HashiConf about how these patterns evolve over time; many of the recommendations focus on how to best group Terraform state. At Rackspace, we’ve built upon the existing best practices (e.g. ‘module-per-environment’) and created a concept we call layers in order to isolate Terraform state.

Where is state stored?

Rackspace maintains a separate S3 bucket for storing Terraform state of each AWS account. This bucket is restricted to Customers and Rackers that also have access to the corresponding AWS account. In addition, Rackspace implements best practices such as requiring bucket encryption, access logging, versioning, and preventing accidental public access. You can locate the name of this bucket by looking at the logged output from the corresponding CI/CD job.

What is a layer?

Put simply, a layer is a directory that is treated as a single Terraform configuration. It is a logical grouping of related resources that should be managed together by Terraform. Layers are placed in the layers/ directory inside an Account Repository. Our automation will perform all of the usual Terraform workflow steps (init, plan, apply) on each layer, alphabetically.
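Because layers are processed alphabetically, layer directory names can be prefixed to control ordering (the names below are illustrative, not required):

```
layers/
  000base/     # VPC, IAM, and other shared infrastructure
  100database/ # data tier, applied after 000base
  200webapp/   # application tier, applied last
```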

In collaboration with experienced Rackers, you should carefully consider how to logically group the state of your AWS resources into layers; layers could represent environments like production or test, regions your application may be hosted in, application tiers like “database” or “web servers,” or even applications that share availability requirements.

Here are some considerations that Rackers will discuss with you when planning out your environment:

  • ensure resources frequently modified together are also grouped together in the same layer
  • keep layers small, to limit blast radius and ensure refreshing state is quick/safe
  • keep dependencies between layers simple, as changes must take dependencies into consideration manually
  • consider reading state from another layer, using a data source; never write to another layer’s state
  • for small environments, consider that a single layer may be acceptable, but be aware that moving resources between layers later is hard
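As an illustration of the read-only pattern above, one layer can consume another layer’s outputs through the terraform_remote_state data source. The bucket, key, and output names below are illustrative; the real backend configuration appears in the logged output of your CI/CD job:

```hcl
# Read outputs from the "network" layer's state (read-only).
# Bucket, key, and output names are illustrative.
data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket = "example-terraform-state-bucket"
    key    = "layers/network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # illustrative AMI ID
  instance_type = "t2.micro"
  subnet_id     = "${data.terraform_remote_state.network.private_subnet_id}"
}
```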

Writing and organizing Terraform with modules

Generally, Rackspace maintains modules for most common use cases, and uses these modules to build out your account. If we do not have a pre-existing module, the next best choice is to use the built-in aws_* resources offered by the AWS provider for Terraform. Please let us know if we don’t have a module or best practice for building out a specific resource or AWS product.

A common recommendation in the Terraform community is to think of modules as functions that take an input and provide an output. Modules can be built in your shared repository or in your account repositories. If you find yourself writing the same set of resources or functionality over and over again, consider building a module instead.

When to consider writing a module:

  • When multiple resources should always be used together (e.g. a CloudWatch Alarm and EC2 instance, for autorecovery)
  • When Rackspace has a strong opinion that overrides default values for a resource
  • When module re-use remains shallow (don’t nest modules if at all possible)
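As a sketch of the “module as a function” idea, the first bullet above (an EC2 instance paired with a CloudWatch alarm for autorecovery) might be captured in a module that takes an instance ID as input and returns the alarm’s ARN as output. All names below are illustrative:

```hcl
# Illustrative module: attach an autorecovery alarm to an instance.
variable "instance_id" {
  description = "ID of the instance to attach the autorecovery alarm to."
  type        = "string"
}

resource "aws_cloudwatch_metric_alarm" "autorecover" {
  alarm_name          = "autorecover-${var.instance_id}"
  namespace           = "AWS/EC2"
  metric_name         = "StatusCheckFailed_System"
  comparison_operator = "GreaterThanThreshold"
  threshold           = "0"
  statistic           = "Minimum"
  period              = "60"
  evaluation_periods  = "2"

  dimensions {
    InstanceId = "${var.instance_id}"
  }

  # Region in this ARN is illustrative.
  alarm_actions = ["arn:aws:automate:us-east-1:ec2:recover"]
}

output "alarm_arn" {
  description = "ARN of the autorecovery alarm."
  value       = "${aws_cloudwatch_metric_alarm.autorecover.arn}"
}
```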

General Terraform Style Guide

Terraform files should obey the syntax rules for the HashiCorp Configuration Language (HCL) and the general formatting guidelines provided by the Terraform Project through the fmt command.

In addition, Rackspace follows the following standards when writing Terraform:

  • Use snake_case for all resource names
  • Declare all variables in variables.tf, including a description and type
  • Declare all outputs in outputs.tf, including a description
  • Pin all modules and providers to a specific version or tag
  • Always use relative paths and the file() helper
  • Prefer separate resources over inline blocks (e.g. aws_security_group_rule over aws_security_group)
  • Always define AWS region as a variable when building modules
  • Prefer variables.tf over terraform.tfvars to provide sensible defaults
  • Terraform versions and provider versions should be pinned, as it’s not possible to safely downgrade a state file once it has been used with a newer version of Terraform
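The pinning and region bullets above can be sketched as follows (the version numbers are illustrative, not a Rackspace-mandated release):

```hcl
# Pin the Terraform version and the AWS provider version.
# Version numbers shown are illustrative.
terraform {
  required_version = "= 0.11.14"
}

provider "aws" {
  version = "~> 1.60"
  region  = "${var.aws_region}" # region supplied as a variable, not hard-coded
}
```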

Rackspace Module Standards

Rackspace maintains a number of Terraform modules available at https://github.com/rackspace-infrastructure-automation. Contributions should follow these guidelines.

  • use semantic versioning for shared code and modules
  • always point to GitHub releases (over a binary or master) when referencing external modules
  • always extend, don’t re-create resources manually
  • parameters control counts, for non-standard numbers of subnets/AZs/etc.
  • use overrides to implement Rackspace best practices
  • use variables with good defaults for things Rackspace expects to configure
  • Modules should use semantic versioning light (Major.minor.0) for AWS account repositories
  • Modules should be built using the standard files: main.tf, variables.tf, output.tf
  • Consider writing tests and examples, and shipping them in directories of the same name
  • Readme files should contain a description of the module as well as documentation of variables. An example of documentation can be found here.
  • The files in .circleci are managed by Rackspace and should not be changed. If you would like to submit a module, please do so without this folder.
  • The files in example can be named anything as long as they have .tf as the extension.
  • The tests directory must be called tests, and each test must live in a numbered folder named test# (e.g. test1, test2). Inside each test# folder there should be exactly one file, called main.tf
  • Use GitHub’s .gitignore contents for Terraform.
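Putting the file standards above together, an illustrative module layout (the module name is hypothetical):

```
example-module/
  main.tf
  variables.tf
  output.tf
  README.md
  examples/
    basic.tf
  tests/
    test1/
      main.tf
```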

variables.tf

This file must include the following code block at the beginning or end of the file.

  variable "environment" {
    description = "Application environment for which this network is being created. one of: ('Development', 'Integration', 'PreProduction', 'Production', 'QA', 'Staging', 'Test')"
    type        = "string"
    default     = "Development"
  }

  variable "tags" {
    description = "Custom tags to apply to all resources."
    type        = "map"
    default     = {}
  }

main.tf

This file must include the following code block at the top of the file. Other variables can be added to this block.

  locals {
    tags = {
      Name            = "${var.name}"
      ServiceProvider = "Rackspace"
      Environment     = "${var.environment}"
    }
  }

In any resource block that supports tags the following code should be used:

  tags = "${merge(var.tags, local.tags)}"

This takes the tag values defined in variables.tf and combines them with the values defined in the locals block in main.tf. Note that merge gives precedence to later arguments, so the standard local.tags values win any conflicts with custom tags.
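For example, applied to an illustrative resource:

```hcl
resource "aws_s3_bucket" "example" {
  bucket = "example-bucket" # illustrative bucket name

  # Custom tags first, then standard tags; standard tags win conflicts.
  tags = "${merge(var.tags, local.tags)}"
}
```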