DEPLOYMENT OF WordPress & MySQL on Amazon EKS

Lakshyasinghvi
Jul 14, 2020

What is Amazon EKS?

Amazon EKS is a managed service that helps make it easier to run Kubernetes on AWS. Through EKS, organizations can run Kubernetes without installing and operating a Kubernetes control plane or worker nodes. Simply put, EKS is a managed containers-as-a-service (CaaS) that drastically simplifies Kubernetes deployment on AWS.

What is Kubernetes?

Kubernetes is an open-source system that allows organizations to deploy and manage containerized applications like platforms as a service (PaaS), batch processing workers, and microservices in the cloud at scale. Through an abstraction layer created on top of a group of hosts, development teams can let Kubernetes manage a host of functions — including load balancing, monitoring and controlling resource consumption by team or application, limiting resource consumption and leveraging additional resources from new hosts added to a cluster, and other workflows.

Through Amazon EKS, organizations using AWS can get the full functions of Kubernetes without having to install or manage Kubernetes itself.

Why use EKS?

Through EKS, normally cumbersome steps are done for you, like creating the Kubernetes master cluster, as well as configuring service discovery, Kubernetes primitives, and networking. Existing tools will more than likely work through EKS with minimal mods, if any.

With Amazon EKS, the Kubernetes control plane — including the backend persistence layer and the API servers — is provisioned and scaled across various AWS availability zones, resulting in high availability and eliminating a single point of failure. Unhealthy control plane nodes are detected and replaced, and patching is provided for the control plane. The result is a resilient AWS-managed Kubernetes cluster that can withstand even the loss of an availability zone.

Organizations can choose to run EKS using AWS Fargate — a serverless compute engine for containers. With Fargate, there’s no longer a need to provision and manage servers; organizations can specify and pay for resources per application. Fargate, through application isolation by design, also improves security.

And of course, as part of the AWS landscape, EKS is integrated with various AWS services, making it easy for organizations to scale and secure applications seamlessly. From AWS Identity and Access Management (IAM) for authentication to Elastic Load Balancing for load distribution, the straightforwardness and convenience of using EKS can't be overstated.

What is EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS offers two storage classes: the Standard storage class and the Infrequent Access storage class (EFS IA).

Why use EFS?

EFS provides centralized, persistent storage that can be mounted by many pods simultaneously, whereas EBS falls short on both counts. That is why EFS is used here.

Task Description :

Deployment & Integration of WordPress & MySQL on Amazon EKS using Amazon EFS.

Software Requirements :

  • AWS CLI V2 Windows Installer
  • Kubectl Windows Binary
  • Eksctl Windows Binary
  • Helm Windows Binary
  • Tiller Windows Binary
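
Once these tools are installed, the binaries can be verified from a terminal, for example:

aws --version
kubectl version --client
eksctl version
helm version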

Steps to Follow :

1. Configure the AWS CLI : Enter the details as asked by the AWS CLI. It is recommended to create an IAM user and log in with its credentials.
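
A minimal example of the prompts (the access key and secret key are placeholders for the IAM user's credentials; ap-south-1 matches the region used in the cluster configuration below):

aws configure
AWS Access Key ID [None]: <access-key-id>
AWS Secret Access Key [None]: <secret-access-key>
Default region name [None]: ap-south-1
Default output format [None]: json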

2. Creating the Cluster : Write a YAML file describing the cluster and create it with the eksctl command.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: lscluster
  region: ap-south-1
nodeGroups:
  - name: ng1
    instanceType: t2.micro
    desiredCapacity: 2
    ssh:
      publicKeyName: mykey1111
  - name: ng2
    instanceType: t2.small
    desiredCapacity: 1
    ssh:
      publicKeyName: mykey1111
  - name: ng-mixed
    minSize: 2
    maxSize: 5
    instancesDistribution:
      maxPrice: 0.017
      instanceTypes: ["t3.small", "t3.medium"] # At least one instance type should be specified
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 50
      spotInstancePools: 2
    ssh:
      publicKeyName: mykey1111

eksctl create cluster -f cluster.yml

Check that the cluster and its node groups have been created using the following commands:

eksctl get cluster

eksctl get nodegroup --cluster lscluster

Connect kubectl to this cluster by updating the kubeconfig file. This can be done using the command:

aws eks update-kubeconfig --name lscluster

The cluster creation can also be verified from the Web console.

This cluster has a total of 5 nodes, which can be verified using both the CLI and the Web console.
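
For example, once the kubeconfig has been updated, the registered worker nodes can be listed from the CLI with:

kubectl get nodes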

3. EFS Configuration : Create a file system using the EFS (Elastic File System) service of AWS to provide persistent storage to the pods. While creating the EFS file system, make sure it is in the same VPC in which the nodes have been created so that they can reach the EFS storage, and attach the same security group that is used by the nodes.

First, install amazon-efs-utils on all the worker nodes; this utility must be present on every node so that the pods can mount the EFS storage. It can be installed on each node with the following command:

yum install amazon-efs-utils -y

Then connect to each instance via the browser-based SSH connection (EC2 Instance Connect) or any other method.

Execute the command above on every node to install the required package.
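
As an example of the SSH approach (assuming the key pair mykey1111 from the cluster configuration is available locally as mykey1111.pem and <node-public-ip> is a worker node's public IP):

ssh -i mykey1111.pem ec2-user@<node-public-ip>
sudo yum install -y amazon-efs-utils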

Now create an EFS file system.

Important : Remember to select the same VPC and security groups as the EKS cluster, as shown. Also create mount targets in all the available AZs.

Keep a record of the DNS name and the FILE_SYSTEM_ID of the EFS volume; they will be needed later.
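
If preferred, the file system ID can also be looked up from the CLI (assuming the AWS CLI is configured for the same region):

aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text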

Next, create a new namespace 'efsns' for this project and set it as the default namespace using the following command:

kubectl config set-context --current --namespace=efsns
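
Note that the namespace must exist before it can be set as the default; if it has not been created yet, create it first with:

kubectl create namespace efsns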

create-efs-provisioner.yaml

Next, create an EFS provisioner with the help of the following YAML. Provide the file system ID and DNS name from the EFS file system created earlier in the file ‘create-efs-provisioner.yaml’, so that the cluster can connect to the EFS storage running in the cloud.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-43fe6b92
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: aws-efs-provisioner
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-43fe6b92.efs.ap-south-1.amazonaws.com
            path: /

create-rbac.yaml

Create a file ‘create-rbac.yaml’ to grant the required permissions using Role-Based Access Control (RBAC). The provisioner needs the authority to create and manage persistent volumes on behalf of the pods, and that authority is granted through a cluster role binding.

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: efsns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

create-storage.yaml

Create a file ‘create-storage.yaml’ that defines a new SC (StorageClass) and two PVCs (PersistentVolumeClaims) that will be attached to the pods. A dynamic PV (PersistentVolume) will be created by the provisioner for each claim against this storage class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: aws-efs-provisioner
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Now set the default namespace as efsns.

kubectl config set-context --current --namespace=efsns

Next, create a secret so that sensitive information is not exposed while creating the MySQL and WordPress pods. This secret will be used to supply environment variables such as the MySQL password.

kubectl create secret generic mysqlsecret --from-literal=password=redhat
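
The secret can be inspected without printing its value, for example:

kubectl describe secret mysqlsecret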

Then run all these files in the following order:

kubectl create -f create-efs-provisioner.yaml

kubectl create -f create-rbac.yaml

kubectl create -f create-storage.yaml
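
Before moving on, it is worth confirming that the provisioner pod is running and that both claims are bound to dynamically created volumes:

kubectl get pods
kubectl get sc,pvc,pv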

deploy-mysql.yml

Now launch the MySQL database that WordPress will connect to. The deployment below launches a pod from the mysql:5.6 image, picks its environment variables from the pre-created secret, and creates a headless ClusterIP service that points to the pod.

apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlsecret
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: efs-mysql

deploy-wordpress.yml

Next, launch a pod using the wordpress:4.8-apache image. Provide the environment variables and create a service of type LoadBalancer, which exposes the pod to the outside world so that WordPress can be reached. For this service type, Kubernetes calls the ELB (Elastic Load Balancing) service of AWS to create a load balancer and connects it to the pod.

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlsecret
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: efs-wordpress

Now run these files as well :

kubectl create -f deploy-mysql.yml

kubectl create -f deploy-wordpress.yml

After running these files, check from the CLI or the Web console that a Load Balancer has been created as well. Browse to the DNS name of the Load Balancer.
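
The DNS name is also visible from the CLI as the EXTERNAL-IP of the wordpress service:

kubectl get svc wordpress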

It will land at the WordPress welcome page:

Now continue through the basic installation wizard :

Create one blog post here and then browse to the newly generated page. It will look like this:

Even if any pods are deleted now with kubectl, they will be re-created automatically by the ReplicaSets that are part of the Kubernetes Deployments created above. No data will be lost either, because the Persistent Volumes are backed by the centralized AWS EFS volume.
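
This self-healing can be seen by deleting a pod and listing the pods again (the pod name below is a placeholder; the actual names are shown by kubectl get pods):

kubectl get pods
kubectl delete pod <wordpress-pod-name>
kubectl get pods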

That’s it for the complete WordPress deployment on Amazon EKS.

Finally, delete the cluster to avoid any unwanted costs:

eksctl delete cluster -f cluster.yml
