Automating complete website hosting service using Terraform & AWS features (EFS, S3, CloudFront)

What is Cloud Automation?

Cloud automation is a broad term that refers to the processes and tools an organization uses to reduce the manual efforts associated with provisioning and managing cloud computing workloads. IT teams can apply cloud automation to private, public and hybrid cloud environments. Cloud automation enables IT teams and developers to create, modify, and tear down resources on the cloud automatically.

What is Terraform?

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions. The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

What is EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS offers two storage classes: the Standard storage class and the Infrequent Access storage class (EFS IA).
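As a sketch of how the IA class is used in Terraform (a hypothetical example using the `aws_efs_file_system` resource that also appears later in this article), a lifecycle policy can transition files that have not been accessed for 30 days to EFS IA:

```hcl
# Hypothetical example: files untouched for 30 days move to the IA class
resource "aws_efs_file_system" "example" {
  creation_token = "example-efs"

  lifecycle_policy {
    transition_to_ia = "AFTER_30_DAYS"
  }
}
```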

What is S3?

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements. Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.

What is CloudFront?

Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.

Task Description (Task-2):

Perform Task 1 using the EFS service on AWS instead of EBS: create/launch a web application using Terraform.

Steps to Follow :

1. Create a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the existing/provided key and the security group created in step 1.

4. Create a file system using the EFS service, attach it to your VPC, then mount it onto /var/www/html.

5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and make them publicly readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

Software Requirements :

  1. Terraform
  2. AWS CLI

Proceed to Code :

  1. Declaring our cloud provider and giving our account details so that Terraform can access our AWS account. We also set the region we want to work in and pin the version of the AWS provider.

provider "aws" {
  region  = "ap-south-1"
  profile = "lakshya"
  version = "~> 2.66"
}
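The `profile = "lakshya"` line assumes a matching named profile already exists in the local AWS CLI configuration (created with `aws configure --profile lakshya`). The credentials file would look roughly like this; the values below are placeholders, not real keys:

```
# ~/.aws/credentials (placeholder values)
[lakshya]
aws_access_key_id     = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```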

2. Then, create a private/public key pair using the tls_private_key resource.

resource "tls_private_key" "key-pair" {
algorithm ="RSA"
}
resource "local_file" "private-key" {
content = tls_private_key.key-pair.private_key_pem
filename = "mykey.pem"
}
resource "aws_key_pair" "deployer" {
key_name = "mykey_new"
public_key = tls_private_key.key-pair.public_key_openssh
}

3. Create a security group which will allow HTTP, SSH, and NFS inbound traffic from all sources. The ICMP protocol is also enabled from all sources, which lets us ping our instance. The NFS rule (TCP port 2049) is needed later by the EFS mount target.

resource "aws_security_group" "MyFirewall" {
  name        = "allow_http_ssh"
  description = "Allow HTTP, SSH and NFS inbound traffic"

  ingress {
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  # NFS, used by the EFS mount target
  ingress {
    from_port        = 2049
    to_port          = 2049
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port        = -1
    to_port          = -1
    protocol         = "icmp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

4. Launching our instance using the key and security group created in the steps above; the Amazon Linux 2 AMI is used here. We also install the required software (the Apache web server, PHP, and Git) and enable the httpd service.

resource "aws_instance" "MyWebOS" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = aws_key_pair.deployer.key_name
security_groups = [ aws_security_group.MyFirewall.name ]

tags = {
Name = "MyTerraOS"
}
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.key-pair.private_key_pem
host = aws_instance.MyWebOS.public_ip
}
provisioner "local-exec" {
command = "echo ${aws_instance.MyWebOS.public_ip} > public_IP.txt"
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd php git -y",
"sudo systemctl restart httpd",
"sudo systemctl enable httpd",
]
}
}

5. Creating a new EFS file system and a mount target for it. The mount target must live in the same VPC, subnet, and availability zone as the instance, so instead of hard-coding these we reference the instance's subnet ID directly; its security group must also allow NFS traffic (the port 2049 rule added in step 3).

resource "aws_efs_file_system" "MY-EFS" {
creation_token = "EFS-FILE"
tags = {
Name = "MY-EFS"
}
}
resource "aws_efs_mount_target" "alpha" {
file_system_id = aws_efs_file_system.MY-EFS.id
subnet_id = aws_subnet.alpha.id
}
resource "aws_vpc" "MY-EFS" {
cidr_block = "10.0.0.0/16"
}
resource "aws_subnet" "alpha" {
vpc_id = aws_vpc.MY-EFS.id
availability_zone = "ap-south-1a"
cidr_block = "10.0.1.0/24"
}
output "myos_ip" {
value = aws_instance.MyWebOS.public_ip
}
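Optionally, the file system's DNS name can also be exported, which is handy when debugging the NFS mount in the next step. A small sketch using the same resource names as above (the output name is arbitrary):

```hcl
# Hypothetical extra output: the EFS endpoint used for NFS mounts
output "efs_dns" {
  value = aws_efs_file_system.MY-EFS.dns_name
}
```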

6. Now, we mount the EFS file system onto the default web-server directory (/var/www/html). EFS is a network file system (NFS), so it is mounted over the network; unlike an EBS block device, it never needs to be formatted with mkfs.

resource "null_resource" "nullremote"  {depends_on = [
aws_efs_mount_target.alpha,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = tls_private_key.key-pair.private_key_pem
host = aws_instance.MyWebOS.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdh",
"sudo mount /dev/xvdh /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/lakshyasinghvi/TerraTask1.git /var/www/html/"
]
}
}

7. Let’s create an S3 bucket to store the image for our web page, and upload the image with a public-read ACL.

resource "aws_s3_bucket" "my_image_bucket" {
depends_on = [
null_resource.nullremote,
]
bucket = "lakshyaimages"
acl = "public-read"
}resource "aws_s3_bucket_object" "upload" {
bucket = aws_s3_bucket.my_image_bucket.bucket
key = "terra-test.jpg"
source = "D:/terra-test.jpg"
acl = "public-read"

}

8. We will create the CloudFront distribution in front of the bucket, then append an img tag pointing at the CloudFront domain to our web page.

locals {
  s3_origin_id = "S3-${aws_s3_bucket.my_image_bucket.bucket}"
}

resource "aws_cloudfront_distribution" "s3_distribution" {
  enabled         = true
  is_ipv6_enabled = true

  origin {
    domain_name = aws_s3_bucket.my_image_bucket.bucket_domain_name
    origin_id   = local.s3_origin_id
  }

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  connection {
    type        = "ssh"
    user        = "ec2-user"
    host        = aws_instance.MyWebOS.public_ip
    private_key = tls_private_key.key-pair.private_key_pem
  }

  provisioner "remote-exec" {
    inline = [
      "echo \"<img src='https://${self.domain_name}/${aws_s3_bucket_object.upload.key}'>\" | sudo tee -a /var/www/html/index.php",
    ]
  }
}

9. In the final step, we run a command on the local machine to open the site's IP address in the Chrome browser. (The start command is Windows-specific; on macOS or Linux, use open or xdg-open instead.)

resource "null_resource" "nulllocal1" {depends_on = [
null_resource.nullremote,
]
provisioner "local-exec" {
command = "start chrome ${aws_instance.MyWebOS.public_ip}"
}
}

Commands to run the code :

On the terminal, just run the following commands -

# To initialize the plugins
terraform init

# To validate the configuration files in the directory
terraform validate

# To preview the changes
terraform plan

# To create the infrastructure
terraform apply -auto-approve

# To destroy the infrastructure
terraform destroy -auto-approve
