Terraform Cloud Series – Part 4 (remote state)

Reading Time: 2 minutes

Continuing from where we left off: in this post, I will discuss how to tap into a workspace's state file.

In the previous post, we connected workspaces with a trigger, allowing execution of a child workspace. In some cases, however, a stack needs to fetch data sources in order to cross-reference resource names, IDs, etc., which makes our Terraform code more reusable and flexible.

Let’s look at an example of how to pull data from a remote state file stored in Terraform Cloud.

If we look at the execution flow in the previous post, we executed 1-poc-network and the stack trigger executed 2-poc-security-groups. But 2-poc-security-groups requires the vpc_id created in 1-poc-network. So, let’s look at the code and break it down a bit.

module "vote_service_sg" {
  source = "terraform-aws-modules/security-group/aws"
  name        = "access-security-group"
  description = "Security group for user-service with custom ports open 
  within VPC, and PostgreSQL publicly open"
  vpc_id      = "VPC_ID" # --> VPC ID associating Security group to VPC
  ingress_cidr_blocks      = ["10.10.0.0/16","10.10.105.0/24","78.1.10.100"]
  ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8090
      protocol    = "tcp"
      description = "User-service ports"
      cidr_blocks = "10.10.0.0/16"
    },
    {
      rule        = "postgresql-tcp"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
  tags = var.default_tags
}

Looking at the vpc_id argument, notice we have to provide the VPC ID every time this code is executed.

vpc_id      = "VPC_ID" # --> VPC ID associating the security group with the VPC

If we were to turn this into a variable, it would work, but it would require someone to look up the VPC ID and input the value; a lot of work!
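For completeness, a minimal sketch of that variable approach might look like this (the variable name is hypothetical):

# Hypothetical input variable; the value still has to be looked up
# and supplied by hand (e.g. as a workspace variable).
variable "vpc_id" {
  description = "ID of the VPC to associate the security group with"
  type        = string
}

# ...and in the module call:
#   vpc_id = var.vpc_id

The value still has to be found and typed in manually, which is exactly the toil we want to avoid.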

What if we could fetch the data from the previous stack and let Terraform figure this out? We need to add the following code block to our Terraform stack:

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"
    }
  }
}

Let me explain how to interpret the remote state:

data "terraform_remote_state" "vpc" {
  backend = "remote"

The section above declares a terraform_remote_state data source named “vpc” with a backend type of remote (Terraform Cloud).

  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"

And in the section above, we are setting up the config block that lets us fetch the needed data from the remote state file. Notice that there are two required inputs (a hostname argument is also available; it defaults to app.terraform.io, which is what we want for Terraform Cloud):

  • organization
  • workspace name
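One prerequisite worth calling out: a workspace can only share what it explicitly publishes as root-level outputs. As an assumption about how 1-poc-network is written (it is not shown in the original post), wrapping the value in an object like this would produce the .vpc_id.vpc_id reference path used below:

# Assumed output in the 1-poc-network workspace. Exposing the VPC ID
# inside an object yields the two-level path
# data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id in consumers.
output "vpc_id" {
  value = {
    vpc_id = module.vpc.vpc_id
  }
}

If you instead output the string directly (value = module.vpc.vpc_id), the consumer reference would shorten to data.terraform_remote_state.vpc.outputs.vpc_id.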

Now that we have our remote state set up, let’s change the code to fetch data from it:

data "terraform_remote_state" "vpc" {
  backend = "remote"
  config = {
    organization = "securectl-poc"
    workspaces = {
      name = "1-poc-network"
    }
  }
}

module "vote_service_sg" {
  source = "terraform-aws-modules/security-group/aws"
  name        = "access-security-group"
  description = "Security group for user-service with custom ports open 
  within VPC, and PostgreSQL publicly open"
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id
  ingress_cidr_blocks      = ["10.10.0.0/16","10.10.105.0/24","78.1.10.100"]
  ingress_rules            = ["https-443-tcp"]
  ingress_with_cidr_blocks = [
    {
      from_port   = 8080
      to_port     = 8090
      protocol    = "tcp"
      description = "User-service ports"
      cidr_blocks = "10.10.0.0/16"
    },
    {
      rule        = "postgresql-tcp"
      cidr_blocks = "0.0.0.0/0"
    },
  ]
  tags = var.default_tags
}

Notice that vpc_id now points to a data value read from the remote state file of workspace 1-poc-network.

data.terraform_remote_state.vpc.outputs.vpc_id.vpc_id

As you can see, our code is now reusable, allowing us to extract output information from the remote state. (Note the trailing .vpc_id: the path after outputs. must match the shape of the output as defined in 1-poc-network, as shown in the sketch earlier.)

Using this method, we can create dependencies within our Terraform stack, using the remote state to extract required attributes. I hope this helped you understand how the backend/remote state works; try it out yourself!

Terraform Cloud Series – Part 3 (Connect Workspace Trigger)

Reading Time: 4 minutes

In the previous post in the series, we covered how to get started with Terraform Cloud and set up VCS with our source repository. In this post, we will look at how we can use the “trigger” capability for dependent workspaces/stacks.

For the purpose of the demo, I will create the following resources using the trigger feature of TF Cloud, in the following stack order:

  • 1-poc-network
  • 2-poc-security-groups
  • 3-poc-buckets

1-poc-network

This will create the required network, i.e. the VPC, subnets, IGW, and SG resources needed to create AWS EC2 instances and other resources that require a network.

2-poc-security-groups

This will create an application-specific security group for the purpose of the demo.

3-poc-buckets

Additional resources needed to support the application, i.e. S3 buckets, policies, etc.

Putting it all together, here is how the visualization looks:

Essentially, we are creating a job dependency, but my experience with the trigger has been mixed, as there seem to be a lot of limitations with checks and balances. In my opinion, the workflow above is good for a repeatable process where you don’t care if the process is executed multiple times and you expect the same result every time regardless of the number of executions; in other words, where runs are idempotent.

What I experienced is that if a parent job errors during the apply phase, TF Cloud will still trigger the downstream jobs; there seems to be no way to tell a downstream job that its parent failed. Regardless of these limitations, it is still a good feature for setting up simple chaining. If you need a more task-driven setup, in my opinion, GitLab CI/CD is a better tool.

Now let’s look at the workspaces and how to set up the trigger for child jobs:

If we look at the 1-poc-network workspace, under the run triggers option we have the ability to attach a child workspace.

Note: Even if a run trigger is set up, a child job can still be executed by itself or via a VCS code commit.

Notice that I don’t have a trigger set up on the parent job; that is because the trigger is configured on 2-poc-security-groups and fires when 1-poc-network executes. Yes, I know it is confusing; it took me by surprise too!

So, let’s look at the trigger properties for 2-poc-security-groups:

So, basically, we are saying that when the 1-poc-network job is executed, TF Cloud should also execute 2-poc-security-groups. Now, let’s also look at 3-poc-buckets:

Now you get the idea of how the flow works! Also, if you are planning on taking the HashiCorp Terraform Associate exam, knowing TF Cloud is a plus and will help you pass. I will do another post on the TF Associate exam.
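As a side note, the screenshots above configure run triggers through the UI; recent versions of the hashicorp/tfe provider also let you manage them as code. A minimal sketch, using the workspace names from this demo and assuming the provider is authenticated via the TFE_TOKEN environment variable:

provider "tfe" {
  # Credentials come from the TFE_TOKEN environment variable
  # or a credentials block in ~/.terraformrc.
}

data "tfe_workspace" "network" {
  name         = "1-poc-network"
  organization = "securectl-poc"
}

data "tfe_workspace" "security_groups" {
  name         = "2-poc-security-groups"
  organization = "securectl-poc"
}

# Runs in 1-poc-network (the sourceable) will queue runs
# in 2-poc-security-groups, mirroring the UI setup above.
resource "tfe_run_trigger" "sg_after_network" {
  workspace_id  = data.tfe_workspace.security_groups.id
  sourceable_id = data.tfe_workspace.network.id
}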

Trigger the parent job

Now – let me trigger the job (in this case from a git repo commit) – as soon as I commit, the Terraform job is scheduled and executed.

Notice that it picked up the trigger; TF Cloud will execute the dependent workspace after the apply is completed for the source job.

Similar to before, 2-poc-security-groups also detected the downstream trigger:

Now, notice that there is nothing to do, as my bucket was already created. However, when I changed the bucket name in the repo, the job still executed independently.

Conclusion

The Terraform workspace trigger feature allows users to create stack dependencies when working with a large stack. This is a good method when you need multiple connected workspaces, or when you are changing dependent resources that require a complete teardown and re-create.

Terraform Cloud Series – Part 2

Reading Time: 5 minutes

So, let’s continue from where we left off. In this post, I will focus on the same build process of an AWS VPC, but this time the code will reside in a git repository (GitLab for this demo). I am assuming the audience is familiar with GitLab/GitHub; otherwise, I would recommend understanding the basics of Git before continuing with the rest of the demo.

For Part 2 of this series, I will be creating a new workspace for simplicity.

And I will break the blog into the following areas:

  • Connect to a version control provider
    • Setup/Configure application
    • Generate Tokens
  • Integrate Terraform Cloud with Gitlab
  • Create AWS VPC network using git repo
  • Setup Cloud provider tokens/run time vars
  • Update the code base in git

Connect to a version control provider

Once signed into Terraform Cloud, click “New workspace” and you will be asked to set up the backend repository and cloud provider token:

For the purpose of this lab, I will be using GitLab for my backend.

Note: If you are planning on using GitHub or GitLab, one thing to keep in mind is that each environment lifecycle should be its own repo/project. If you combine your code into the root repo, it will be very difficult to manage stack deployment and organization.

Setup/Configure application

Note: You will need properties from both GitLab and Terraform Cloud, hence I suggest opening two windows/tabs to work in parallel.

Once you have signed into GitLab, go to your account settings and then Applications:

Here we will create a new application that will integrate with Terraform Cloud; I am going to call my application “TerraformCloudisFun”.

Notice the redirect URL is a placeholder value; that is on purpose. We will come back and fix it later. Go ahead and save the application.

Now, let’s configure the Terraform Cloud section:

  • You should already be on the “VCS Providers” section under your organization.
  • If not, you can get there by clicking org –> New Workspace –> VCS provider.
  • Again, I am calling my provider “TerraformCloudisFun” to keep the naming consistent.
  • We will need to provide the application ID & secret generated in the step above.
  • Add the VCS provider and the application is created.

Integrate Terraform Cloud with Gitlab

Locate the callback URL and copy it – we need to add it to the GitLab application we created in an earlier step.

  • If you are still on the application page, click the edit button and update the callback URL with the Terraform Cloud callback URL:
  • Save & update the application.
  • Now, back in Terraform Cloud, click “connect organization”.
  • Terraform will try to access GitLab.com and authorize the application.

That’s it – the backend is configured and ready to be used.

Create AWS VPC network using git repo

Now that we have our backend ready, let’s try to create the AWS VPC by pulling the code directly from version control.

The Terraform application we created will fetch the repos/projects from GitLab.com:

Select your repository or working project for provisioning and create the workspace:

You might have to wait a bit before the workspace is ready to be configured.

Hit the configure button and provide the required properties for the cloud provider:

Setup Cloud provider tokens/run time vars

I will add my AWS IAM user’s access key and secret, which are needed to create the stack in AWS.

  1. AWS Access Key
  2. AWS Access Secret
  3. Additional tag values

Notice that TF Cloud allows you to protect secret values, but keep in mind this information may still appear in TF outputs/debug logs.

  • Select the “Sensitive” checkbox & save the variables.
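For reference, here is a minimal sketch of how those workspace variables might be wired into the AWS provider. The variable names are hypothetical and must match whatever you enter in the workspace UI (alternatively, TF Cloud can pass credentials as the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables, in which case no provider arguments are needed):

# Hypothetical variable names; set their values as sensitive
# Terraform variables in the workspace.
variable "aws_access_key" {
  description = "AWS access key ID"
}

variable "aws_secret_key" {
  description = "AWS secret access key"
}

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}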

Now we are ready to create the stack using TF Cloud.

Hit the “Queue Plan” button and stack creation will begin; TF Cloud generates a plan and, if there are any errors, it will stop:

If all looks good, TF Cloud will ask the user to verify and apply the changes:

Apply the changes and provide comments.

While it is creating the stack, you can look at the raw logs:

If everything goes as planned, the job will change its status to success:

Update the code base in git

For the final piece, I will update one of the subnet CIDR ranges in the TF code block from 10.10.104.0/24 to 10.10.105.0/24 and push the changes to GitLab.

From:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = "2.29.0"

  # insert the 12 required variables here
  name = "poc-vpc-${var.prefix}"
  cidr = "10.10.0.0/16"
  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.10.1.0/24"]
  public_subnets  = ["10.10.104.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true
  enable_s3_endpoint = true

  tags = var.default_tags
}

To:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  version = "2.29.0"

  # insert the 12 required variables here
  name = "poc-vpc-${var.prefix}"
  cidr = "10.10.0.0/16"
  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.10.1.0/24"]
  public_subnets  = ["10.10.105.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true
  enable_s3_endpoint = true

  tags = var.default_tags
}


Terraform detected the changes from the backend and generated a new infra plan:

The changes are detected: we can see the subnet will be re-created.

Since it is obvious that pushing this change impacts the network, I have the ability to discard the run with a comment:

In the next post in the series, we will discuss organization of projects/workspaces, the state file, and advanced features.

Hope this post helped to get you started with Terraform Cloud.

AWS Pricing Calculator (NEW)

Reading Time: 3 minutes

Recently I needed to create a quote for AWS infrastructure, and I noticed that AWS is switching from the “AWS Simple Monthly Calculator” to the “AWS Pricing Calculator” – so, let’s give it a try.

The process is pretty straightforward: you punch in some inputs and AWS generates a cost estimate (TCO) for your AWS kit. There is a bit of a learning curve, but it’s not bad.

https://calculator.aws/#/addService


Once you click the URL, you will start with a blank pricing sheet, which allows you to add services and simply input your requirements.

For instance, let’s say we need to provision 10 EC2 instances; simply click configure and add your inputs.

There are two methods:

  • Quick estimate
  • Advanced estimate

For this demo, I am sticking with a quick estimate!

Check out the nice feature where I just plug in my numbers for “vCPUs” and “Memory” and AWS automatically suggested “r5a.8xlarge” – this is pretty nice, since I don’t have to scramble to figure out which shape to select for my use case.

Next, I need to define how many EC2 instances I need to add.


Great, but what about the pricing model? Not to worry! The new pricing calculator allows us to select the pricing model:

Another example with “Standard Reserved Instances”:

Next, we can add EBS block storage volumes:

Finally, we add the EC2 estimate to the overall pricing estimate and continue adding additional resources.

Give it a try! It is free!

Attached is an example of exported output from the Pricing Calculator: