Terraform: an Introduction

John MacLean
12 min read · Jun 10, 2023


Introduction

Imagine, if you will, a blueprint. But not just any blueprint. A blueprint that can create, alter, and manage the full ‘real estate’ of a digital world in the blink of an eye. That’s what we’ll explore today. Say hello to Terraform — a tool that does for cloud infrastructure what architects do for buildings.

In simple terms, Terraform is an Infrastructure as Code (IaC) tool, but what does that mean, you ask? It’s like a construction kit for the cloud. Rather than clicking around a website to create databases, servers and other components, IaC lets us write code to do it instead. We can then use that code again and again, eliminating human error and ensuring we get exactly the same result each time.

The spotlight in today’s adventure falls on a task that every DevOps team will appreciate — setting up a Continuous Integration/Continuous Deployment (CI/CD) server using Jenkins, a popular automation tool, and Amazon Web Services (AWS), one of the world’s most widely-used cloud platforms. All the while, we’ll be unlocking the power of Terraform to keep track of our changes and make our setup easily reproducible.

Here’s a quick rundown of terms I’ve used to help navigate the cloud jargon:

  • Terraform: An Infrastructure as Code tool that helps us automate the creation and management of cloud resources.
  • DevOps: A practice that combines software development (Dev) and IT operations (Ops) with the goal of shortening the system development life cycle and providing continuous delivery.
  • Jenkins: An open-source automation tool for developers, with plugins built for developing, deploying, and automating any project.
  • AWS (Amazon Web Services): A popular cloud platform providing a wide range of services such as compute power, databases, and storage.
  • EC2 (Elastic Compute Cloud): A web service in AWS that allows users to rent virtual computers to run their own computer applications.
  • S3 (Simple Storage Service): A service offered by AWS that provides scalable object storage for data backup, archival and analytics.

Pre-requisites

  1. A non-root AWS IAM account with enough rights to perform the required actions.
  2. A development environment for creating and manipulating the Terraform files. I am using a Windows 11 workstation with PowerShell and VSCode installed.
  3. The AWS CLI should be installed on your system and your AWS account details configured. You can verify both with the quick check shown just after this list.
  4. Terraform installed on your machine. The installers can be found on the official HashiCorp website.
  5. As ever, some general AWS service and architecture knowledge is useful, but hopefully I will explain everything clearly!
  6. Finally, knowledge of Git and GitHub would be handy. You can check out some of my previous articles to help with that.
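
Before we start, here is a minimal sketch for verifying points 3 and 4 (assuming a Unix-style shell; the same commands work in PowerShell):

# Confirm the AWS CLI is installed and your credentials are configured
aws --version
aws sts get-caller-identity

# Confirm Terraform is installed
terraform -version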

Please also feel free to check out my GitHub repository, where you can find the source code from this article. Link: Johnny Mac’s Terraform GitHub repo

The Challenge!

As usual, our challenge is broken down into three stages, growing in complexity or difficulty. However, as the latter stages are quite short in this instance, I’m just going to complete everything in the same article.

Foundational Stage:

  1. Deploy 1 EC2 Instance in your Default VPC.
  2. Bootstrap the EC2 instance with a script that will install and start Jenkins. Review the official Jenkins Documentation for more information: https://www.jenkins.io/doc/book/installing/linux/
  3. Create and assign a Security Group to the Jenkins EC2 instance that allows traffic on port 22 from your IP and traffic on port 8080.
  4. Create an S3 bucket for your Jenkins artifacts that is not open to the public.
  5. Verify that you can reach your Jenkins install via port 8080 in your browser. Be sure to include a screenshot of the Jenkins login screen in your documentation.
  6. Push your code to GitHub and include the link in your write up.

Advanced Stage:

  1. Add a variables.tf file and make sure nothing is hardcoded in your main.tf.
  2. Create a separate providers.tf file if you have not already done so.

Complex Stage:

  1. Create an IAM Role that allows S3 read/write access for the Jenkins Server and assign that role to your Jenkins Server EC2 instance.
  2. You can confirm this by SSHing into your instance and, without using your credentials, testing some S3 AWS CLI commands. We’ll leave it up to you to determine which commands you feel are appropriate to validate the permissions.

Task Preparation

We are going to create three Terraform files to complete this task:

providers.tf — this file is where we connect Terraform to the external services it will manage. For our task, we’re using AWS, so we define that connection here. Separating out the service connection keeps our core code neat and focused on what it needs to do, rather than on how it connects to the services it uses.

variables.tf — this file is like the settings or preferences for our Terraform code. We can define variables that we might want to change each time we run our code. This makes our setup flexible and easy to customize, without needing to modify the core code.

main.tf — this is our action file, where we tell Terraform what to do. Here, we specify the resources we want to create and manage on AWS. Keeping this separate allows us to focus on the task at hand, without being distracted by connection details or variable values.

Structuring our code in this manner is essentially about keeping our Terraform code clean, organized and easy to customize.

I just created a folder and then created each file in that folder using VSCode. However, you can simply do the same with any text editor.
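
For reference, a minimal sketch of that setup from a Unix-style shell (the folder name terraform-jenkins is just my choice; in PowerShell, New-Item plays the role of touch):

# Create a project folder and the three empty Terraform files
mkdir terraform-jenkins
cd terraform-jenkins
touch providers.tf variables.tf main.tf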

In the following sections, I will break down each file’s contents.

providers.tf

# providers.tf
provider "aws" {
  region = var.region
}

Ok, not much to say here — this is really simple! Just set your desired provider as shown.

variables.tf

# variables.tf
variable "region" {
  description = "Region to deploy resources into"
  default     = "eu-west-2"
}

variable "instance_type" {
  description = "Type of instance"
  default     = "t2.micro"
}

variable "ami_id" {
  description = "AMI ID for the EC2 instance"
  default     = "ami-03195679f849f5cee" # This is an Amazon Linux 2 AMI for our preferred region
}

variable "key_name" {
  description = "Name of an existing AWS key pair"
  type        = string
  default     = "LUITProject5KeyPair"
}

Here is where we define our variables that will be used in our main.tf.
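
Because every variable has a default, the code runs as-is, but any value can be overridden at run time without editing the files. A quick sketch (the override values here are purely illustrative):

# Override individual variables on the command line...
terraform plan -var="region=us-east-1" -var="instance_type=t3.micro"

# ...or place overrides in a terraform.tfvars file, which Terraform loads automatically
echo 'key_name = "MyOtherKeyPair"' > terraform.tfvars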

Well, I had planned to be very clever and programmatically get the latest AMI ID so that I wouldn’t need to hard-code a specific version, but wouldn’t you know it — the latest Amazon Linux 2023 image currently has an issue that prevents Jenkins from launching. Thank you to my fellow Level Up In Tech Gold Team cohorts for discovering that!

Therefore, what we need to do is get a suitable AMI ID for our preferred AWS region the old-fashioned, manual way. So let’s head off to the AWS Console.

Go to the EC2 Dashboard and click Launch instance.

Scroll down to Application and OS Images. The Amazon Linux image is the default and the Amazon Linux 2023 AMI is selected initially.

Click in that box to bring up a list of all Amazon Linux images. We’ll select the older Amazon Linux 2 AMI (HVM) which is still Free tier eligible.

Now we can just copy the AMI ID field as highlighted and paste it into our file.
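
If you prefer the terminal to the console, the same lookup can be done with the AWS CLI using the public SSM parameter AWS maintains for Amazon Linux 2. A sketch, assuming your CLI is configured for your preferred region:

# Look up the current Amazon Linux 2 AMI ID for the configured region
aws ssm get-parameter \
  --name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --query 'Parameter.Value' \
  --output text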

main.tf

# main.tf

# IAM policy that allows read/write access to S3
resource "aws_iam_policy" "s3_access" {
  name        = "S3Access"
  description = "Allows read/write access to S3"

  policy = <<-EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": [
            "s3:GetObject",
            "s3:PutObject",
            "s3:DeleteObject"
          ],
          "Resource": "arn:aws:s3:::*",
          "Effect": "Allow"
        }
      ]
    }
  EOF
}

# IAM role for EC2
resource "aws_iam_role" "ec2_s3_access_role" {
  name = "EC2S3AccessRole"

  assume_role_policy = <<-EOF
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Action": "sts:AssumeRole",
          "Principal": {
            "Service": "ec2.amazonaws.com"
          },
          "Effect": "Allow"
        }
      ]
    }
  EOF
}

# Attaching IAM policy to the IAM role
resource "aws_iam_role_policy_attachment" "s3_access_policy_attachment" {
  role       = aws_iam_role.ec2_s3_access_role.name
  policy_arn = aws_iam_policy.s3_access.arn
}

# Instance profile that we will assign to our EC2 instance
resource "aws_iam_instance_profile" "instance_profile" {
  name = "jenkins_instance_profile"
  role = aws_iam_role.ec2_s3_access_role.name
}

# EC2 Instance
resource "aws_instance" "jenkins" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  iam_instance_profile   = aws_iam_instance_profile.instance_profile.name
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.jenkins_sg.id]

  # This will run a script to install and start Jenkins
  user_data = <<-EOF
    #!/bin/bash
    sudo yum update -y
    sudo amazon-linux-extras install -y java-openjdk11
    sudo yum install -y wget
    sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat/jenkins.repo
    sudo rpm --import https://pkg.jenkins.io/redhat/jenkins.io.key
    sudo yum install jenkins -y
    sudo systemctl start jenkins
    sudo systemctl enable jenkins
  EOF

  tags = {
    Name = "my-jenkins-server"
  }
}

# Security Group
resource "aws_security_group" "jenkins_sg" {
  name        = "jenkins_sg"
  description = "Allow inbound traffic on port 22 and 8080 and outbound traffic"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Put your IP CIDR in here
  }

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Put your IP CIDR in here
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# S3 bucket for Jenkins artifacts
resource "aws_s3_bucket" "jenkins_artifacts" {
  bucket = "johnnymac-luit-week20-jenkins-artifacts"
}

# Ensure S3 bucket is private
resource "aws_s3_bucket_public_access_block" "jenkins_artifacts_acl" {
  bucket = aws_s3_bucket.jenkins_artifacts.id

  block_public_acls   = true
  block_public_policy = true
}

As the name suggests, our main.tf is where the core of our resource creation is coded. We define the following resources:

  1. An EC2 Instance is created: This is essentially a virtual server in Amazon’s cloud. We specify its type, the machine image (Amazon Linux 2) and the key pair to use.
  2. A Security Group is created and assigned: Security groups are virtual firewalls for your instance to control inbound and outbound traffic. For our EC2 instance, we’ve specified the necessary rules for secure access, allowing SSH and Jenkins inbound traffic and internet access outbound. If copying the code, be sure to replace 0.0.0.0/0 with your own workstation’s IP address in CIDR notation (e.g. 203.0.113.5/32) on the ingress lines marked Put your IP CIDR in here.
  3. IAM Role and Policy are created: IAM roles are a secure way to grant permissions to entities that you trust. In our case, we’ve granted our EC2 instance permissions to access S3 resources.
  4. An IAM Instance Profile is created: An instance profile is a container for an IAM role that lets you pass that role to an EC2 instance, granting permissions to applications running on it.
  5. An S3 Bucket is created: S3, or Simple Storage Service, provides object storage through a web service interface. We’ve created a private bucket for storing Jenkins artifacts.
  6. User Data script is provided: This script is used to bootstrap our instance, which includes installing Jenkins on our EC2 instance.

I had a little bit of an issue installing Jenkins via the User Data initially, but the official Jenkins Red Hat packages page has the correct commands needed, and once I’d copied those in, everything went smoothly.

Terraform Commands

Now that our files are complete, we can create our environment by running the relevant Terraform commands.

Firstly on our development machine, let’s check we’re in the correct folder containing the Terraform files.

Once we’ve confirmed we’re in the correct location, the first command we will run is terraform init.

This sets up everything Terraform needs to run our code.

Next we will run terraform fmt to check the formatting of our Terraform files. The command will also rewrite them to conform to the required standard if necessary.

Here we can see main.tf was output in my case, meaning the contents of that file were rearranged to comply with the Terraform standards.

Next we execute terraform validate which checks our code, ensuring there are no typos, mistakes or missing pieces before we try to run it.

Everything is good, so terraform plan is next. This will effectively preview what Terraform will do. You can review all the changes and make sure everything looks right before proceeding.

In our case, we get a long list of items that will be created.

Once we review and confirm everything looks as we expect, we can then run terraform apply to create the resources specified in our code.

Since we entered terraform apply without the optional -auto-approve switch, we get a final chance to review the resources to be created.

Entering yes at the prompt lets the run proceed.
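
For reference, the whole sequence we have just walked through is:

terraform init      # set up the working directory and download the AWS provider
terraform fmt       # rewrite the files into the canonical Terraform style
terraform validate  # check for syntax errors and missing references
terraform plan      # preview the resources that will be created
terraform apply     # create the resources (prompts for a final yes)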

Our command output looks good, so let’s head back over to the AWS Console to check our new resources and test our environment.

Testing

In the EC2 Dashboard, we can see our instance has been spun up and is available…

…and checking its details, we can see it has been assigned the required IAM Role and Security group.

We can SSH into the instance using our terminal by copying its public IP address from the AWS Console and connecting with our existing key pair.

Entering systemctl status jenkins shows us that Jenkins is active and running.
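
Put together, the connection and check look roughly like this (the .pem filename is assumed to match the key pair named in variables.tf, and the IP address is a placeholder):

# SSH to the instance; ec2-user is the default user on Amazon Linux
ssh -i LUITProject5KeyPair.pem ec2-user@<instance-public-ip>

# Once connected, confirm the Jenkins service is up
systemctl status jenkins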

Therefore if we put <instance public IP address>:8080 into a browser on our workstation, we should see the Jenkins startup page…

…and thankfully that is the case!
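
One extra tip: the Jenkins login screen asks for an initial administrator password, which on a standard Linux install can be read straight off the instance:

# Print the initial admin password generated at install time
sudo cat /var/lib/jenkins/secrets/initialAdminPassword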

The final test is to check our EC2 instance has access to the S3 bucket we created.

Checking out the Buckets page in the AWS Console, we can see the bucket is currently empty.

Now, if you remember from our main.tf file, we gave our S3 policy permissions to get, put and delete files from the bucket.

So let’s try a couple of those actions from our EC2 instance.

First, create a test file…

…then copy (put) it up to our bucket.

Checking our bucket in the console, we can confirm the file was uploaded successfully.

Now let’s create a new folder and confirm it’s empty…

…then we can copy (get) the file from our bucket to the EC2 instance. Listing confirms that operation was successful too.

Finally, let’s try deleting the file from the bucket. Run the command from the EC2 instance…

…and a check of the console confirms the file was successfully deleted.
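
For reference, the full set of commands run on the instance for this test looks like the sketch below. The bucket name comes from our main.tf; testfile.txt and the download folder are just my illustrations. Note there was no need to run aws configure, as the credentials come from the instance profile:

# Create a test file and upload (put) it to the bucket
echo "hello from the jenkins server" > testfile.txt
aws s3 cp testfile.txt s3://johnnymac-luit-week20-jenkins-artifacts/

# Fetch (get) the file into a fresh folder and confirm it arrived
mkdir download
aws s3 cp s3://johnnymac-luit-week20-jenkins-artifacts/testfile.txt download/
ls download

# Finally, delete the file from the bucket
aws s3 rm s3://johnnymac-luit-week20-jenkins-artifacts/testfile.txt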

Therefore everything has worked and we have completed the challenge!

Housekeeping

One of the best things about Terraform is that it makes tidying everything up so simple and convenient.

All the resources are removed by running the terraform destroy command, so you don’t have to worry that you have left an unattended resource somewhere costing you money.

This time I just run terraform destroy -auto-approve and everything is gone in 30 seconds. Naturally, be very, very careful using the -auto-approve switch when destroying!

Conclusion

As our journey in cloud automation wraps up, we’ve unveiled the potential of Terraform and AWS in creating a streamlined Jenkins setup.

Each meticulous step — from establishing our virtual server, crafting its security shield, authorizing permissions, to setting up our storage bucket — reflected the power of code to handle intricate tasks with precision.

Through the Terraform lens, we’ve taken a complex problem and created a reproducible, scalable solution. In doing so, we’ve demystified cloud technology, showing it to be a useful tool, not a daunting beast.

After all, the essence of technology is to simplify, and our dive into Terraform and AWS has done just that.

Many thanks once again dear reader for making it this far. Please feel free to leave any questions or comments — they are gratefully received.
