Complete Infrastructure Automation on AWS with Terraform

Lakshyasinghvi
5 min read · Jun 14, 2020

What is Cloud Automation?

Cloud automation is a broad term that refers to the processes and tools an organization uses to reduce the manual efforts associated with provisioning and managing cloud computing workloads. IT teams can apply cloud automation to private, public and hybrid cloud environments. Cloud automation enables IT teams and developers to create, modify, and tear down resources on the cloud automatically.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.
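As a minimal illustration of Terraform's declarative style (the resource and bucket names here are hypothetical), a few lines of HCL are enough to describe a piece of infrastructure, and Terraform works out the API calls needed to create it:

# A hypothetical, minimal Terraform configuration:
# declare the desired state, and `terraform apply` makes it so.
resource "aws_s3_bucket" "demo" {
  bucket = "my-demo-bucket-2020"  # S3 bucket names must be globally unique
  acl    = "private"
}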

Task Description (Task-1):

Creation of complete infrastructure for hosting a web page on AWS Cloud using Terraform.

Steps to Follow:

1. Create the key and a security group which allows port 80 (for HTTP) and port 22 (for SSH).

2. Launch an EC2 instance with the key and security group created in the first step.

3. Configure the OS so that it can host a web page: install the Apache web server and PHP, and start the required services.

4. Launch an EBS volume, format it, and mount it at /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Clone the GitHub repo into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the bucket, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Software Requirements:

  1. Terraform
  2. AWS CLI

Proceed to the Code:

  1. Declare the cloud provider and the account profile so that Terraform can access our AWS account. We also specify the region we want to work in and the version of the AWS provider plugin.

provider "aws" {
  region  = "ap-south-1"
  profile = "lakshya"
  version = "~> 2.66"
}

2. Then, create a private/public key pair using the tls_private_key resource, save the private key locally, and register the public key as an AWS key pair.

resource "tls_private_key" "key-pair" {
  algorithm = "RSA"
}

resource "local_file" "private-key" {
  content  = tls_private_key.key-pair.private_key_pem
  filename = "mykey.pem"
}

resource "aws_key_pair" "deployer" {
  key_name   = "mykey_new"
  public_key = tls_private_key.key-pair.public_key_openssh
}

3. Create a security group which allows HTTP and SSH inbound traffic from all sources. The ICMP protocol is also enabled from all sources, so that we can ping our instance.

resource "aws_security_group" "MyFirewall" {
  name        = "allow_http_ssh"
  description = "Allow HTTP and SSH inbound traffic"

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = -1
    to_port          = -1
    protocol         = "icmp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

4. Launch the instance using the key and security group created in the steps above; the Amazon Linux 2 AMI is used here. We also install the required software (Apache web server, PHP, and Git) and enable the httpd service.

resource "aws_instance" "MyWebOS" {
  ami             = "ami-0447a12f28fddb066"
  instance_type   = "t2.micro"
  key_name        = aws_key_pair.deployer.key_name
  security_groups = [aws_security_group.MyFirewall.name]

  tags = {
    Name = "MyTerraOS"
  }

  # Inside a resource's own connection/provisioner blocks we must use
  # "self" -- referencing aws_instance.MyWebOS here would be a cycle.
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key-pair.private_key_pem
    host        = self.public_ip
  }

  provisioner "local-exec" {
    command = "echo ${self.public_ip} > public_IP.txt"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install httpd php git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }
}

5. Create a new EBS volume and attach it to our instance. When writing the configuration, we don't yet know which availability zone the instance will land in. Terraform solves this: we reference the instance's availability_zone attribute, which is resolved at apply time.

resource "aws_ebs_volume" "MY-HD" {
  availability_zone = aws_instance.MyWebOS.availability_zone
  size              = 1

  tags = {
    Name = "HDVol"
  }
}

resource "aws_volume_attachment" "ebs_att" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.MY-HD.id
  instance_id  = aws_instance.MyWebOS.id
  force_detach = true
}

6. Now we have to format the attached volume, mount it on the default web server directory (/var/www/html), and clone the GitHub repo into it. Note that a volume attached as /dev/sdh shows up as /dev/xvdh inside the Amazon Linux instance.

resource "null_resource" "nullremote" {
  depends_on = [
    aws_volume_attachment.ebs_att,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key-pair.private_key_pem
    host        = aws_instance.MyWebOS.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/lakshyasinghvi/TerraTask1.git /var/www/html/",
    ]
  }
}

7. Let's create an S3 bucket to store the image for our web page, then upload the image and make it publicly readable.

resource "aws_s3_bucket" "my_image_bucket" {
  depends_on = [
    null_resource.nullremote,
  ]

  bucket = "lakshyaimages"
  acl    = "public-read"
}

resource "aws_s3_bucket_object" "upload" {
  bucket = aws_s3_bucket.my_image_bucket.bucket
  key    = "terra-test.jpg"
  source = "D:/terra-test.jpg"
  acl    = "public-read"
}

8. In the final step, we create the CloudFront distribution in front of the bucket and append an img tag pointing at the CloudFront URL to our web page.

locals {
  s3_origin_id = "S3-${aws_s3_bucket.my_image_bucket.bucket}"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  enabled         = true
  is_ipv6_enabled = true

  origin {
    domain_name = aws_s3_bucket.my_image_bucket.bucket_domain_name
    origin_id   = local.s3_origin_id
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.MyWebOS.public_ip
    private_key = tls_private_key.key-pair.private_key_pem
  }

  # "self" here refers to the CloudFront distribution being created,
  # so self.domain_name is its *.cloudfront.net domain.
  provisioner "remote-exec" {
    inline = [
      "sudo su << EOF",
      "echo \"<img src='http://${self.domain_name}/${aws_s3_bucket_object.upload.key}'>\" >> /var/www/html/index.php",
      "EOF",
    ]
  }
}

Commands to run the code:

On the terminal, run the following commands (note that the binary name is lowercase):

# To initialize the provider plugins
terraform init

# To validate the configuration files in the directory
terraform validate

# To create the infrastructure
terraform apply -auto-approve

# To destroy the infrastructure
terraform destroy -auto-approve
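Optionally, output blocks can be added so that terraform apply prints the values we need at the end of a run, such as the instance's public IP and the CloudFront domain. This is a sketch assuming the resource names used earlier in this article:

# Printed after `terraform apply`; also available via `terraform output`.
output "instance_public_ip" {
  value = aws_instance.MyWebOS.public_ip
}

output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}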

Output:

Opening the webpage on the displayed IP address:
