AWS - EKS

MAYANK VARSHNEY
10 min read · Jul 12, 2020


Currently the most popular and in-demand managed Kubernetes service on the market

This is how AWS-EKS works.

What is Amazon EKS?

Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. Customers such as Intel, Snap, Intuit, GoDaddy, and Autodesk trust EKS to run their most sensitive and mission-critical applications because of its security, reliability, and scalability.

  • EKS is the best place to run Kubernetes for several reasons. First, you can choose to run your EKS clusters using AWS Fargate, which is serverless compute for containers.
  • Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
  • Second, EKS is deeply integrated with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), providing you a seamless experience to monitor, scale, and load-balance your applications.
  • Third, EKS integrates with AWS App Mesh and provides a Kubernetes native experience to consume service mesh features and bring rich observability, traffic controls and security features to applications.
  • Additionally, EKS provides a scalable and highly available control plane that runs across multiple availability zones to eliminate any single point of failure.

Kubernetes (K8S) is an open-source system for automating deployment, scaling, and management of containerized applications.

Kubernetes is a tool that continuously monitors the containers; if any container goes down, behind the scenes it asks the container engine to create another container with the same configuration.
Docker has a similar tool, Docker Swarm, which does much the same job as Kubernetes, but Docker Swarm only works with Docker, whereas Kubernetes supports all container engines, such as Podman, Docker, CRI-O, etc.

We can connect to AWS EKS in several ways, such as the web UI, the CLI, or Terraform.

Before diving into EKS:
we have to install eksctl on our base OS!

eksctl is a simple CLI tool for creating clusters on EKS by running a single command.
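The installation itself isn't shown in the post; one common way to install eksctl on a Linux base OS is the method from the eksctl README (treat the exact URL as subject to change):

# download the latest eksctl release and place the binary on the PATH
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version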

WHY EKS?

There is no need to launch the master node: behind the scenes, EKS launches and manages it, so we don't have to worry about it. It creates the worker nodes using EC2 instances. We use the eksctl command to work with EKS.
To create worker nodes, we can write a YAML file and run it with eksctl. eksctl contacts EKS, and behind the scenes EKS contacts AWS EC2 to launch the instances (nodes).

OBJECTIVES/TASK :

  • 👉Create a Kubernetes cluster using AWS EKS.
  • 👉Integrate EKS with EC2, ELB, EBS, EFS.
  • 👉Deploy WordPress & MySQL on top of AWS EKS.
  • 👉Use Helm to install and integrate Prometheus and Grafana.
  • 👉Create a Fargate cluster.

Creating a k8s cluster on AWS EKS

Prerequisites: install the AWS CLI on our base OS, log in to our AWS account with an access key and secret key, and set up eksctl on our base OS.
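For reference, logging in with the access and secret keys is a one-time aws configure step (the values shown are placeholders):

# aws configure
AWS Access Key ID [None]: AKIA************
AWS Secret Access Key [None]: ****************
Default region name [None]: ap-south-1
Default output format [None]: json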

Now create a YAML file for creating the EKS cluster.

The code can be edited as per your preferences.
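The original post shows this file only as a screenshot; here is a minimal sketch of what it might look like. The cluster and node-group names, instance types, and counts are illustrative assumptions:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mkcluster          # illustrative name
  region: ap-south-1       # Mumbai
nodeGroups:
  - name: ng1              # on-demand node group
    instanceType: t2.micro
    desiredCapacity: 1
  - name: ng2-spot         # node group backed by a spot instance
    minSize: 1
    maxSize: 2
    instancesDistribution:
      instanceTypes: ["t3.small"]
      onDemandPercentageAboveBaseCapacity: 0   # 100% spot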

In the above code, we have defined two node groups, one of which uses a spot instance, in the Mumbai region (ap-south-1).

eksctl create cluster -f filename.yml

Just by running the above command, EKS creates an entire cluster with master and worker nodes.

To see the cluster → eksctl get cluster

To see node groups → eksctl get nodegroup --cluster clustername

The output of the above commands is shown in the screenshots below:

cluster is created successfully
we can confirm through CLI
We can manually check in AWS Console

Now our cluster is ready. Let's deploy WordPress and MySQL and attach PVCs using a storage class backed by EBS.

Why should we attach a PVC?

If we write anything inside a pod and the pod restarts, we lose all of that data. If we attach a PVC, we can retrieve our data even if the pods are deleted or restarted.

So let's write a YAML file for deploying our WordPress app.

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim

Now let's write the YAML file for the MySQL deployment.

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim

Now let's write a kustomization file.

In it, we list the files we want to run, so there is no need to apply each file separately; by running a single command we can launch the entire WordPress-MySQL application.

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mysql-pass
    literals:
      - password=redhat
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml

Now run the command → kubectl create -k .

Just by this simple command, our entire WordPress app will be launched.

To check just run this → kubectl get all

The LoadBalancer service gives us a public DNS name so that the outside world can connect to our pods through the load balancer while the pods themselves stay isolated, and the incoming traffic is distributed across them based on the load. I copy our load balancer's address into the browser so that I can connect to our WordPress site.
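To find that address, inspect the Service created earlier; the ELB's DNS name appears in the EXTERNAL-IP column:

→ kubectl get svc wordpress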

WordPress installation successful.

But there is a challenge/problem

If we scale the pods, the PVC won't be able to connect to all of them, because an EBS volume belongs to one particular availability zone (AZ), so we can't attach that PVC to pods running elsewhere.

We attached a PVC to our pods, and the PVC takes its storage from an AWS EBS volume. So if we want to create another pod and attach the same EBS volume (PVC), we can't. We would have to create one more PVC/EBS volume per pod, and resource usage keeps increasing.

So what might be the solution to this challenge?

If we have centralized storage, then we can attach it to any number of pods.

Here comes the role of EFS (Elastic File System).

Amazon EFS is a regional service that stores data within and across multiple Availability Zones (AZs) for high availability and durability.

So let’s create EFS from our AWS WebUI

Select appropriate security groups so that every node can reach the file system from whichever availability zone its pods are launched in.
EFS created successfully.
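What "appropriate" means here: the EFS mount-target security group must allow inbound NFS (TCP 2049) from the worker nodes. A sketch with placeholder group IDs (both IDs are assumptions for illustration):

# allow the worker-node security group to reach EFS over NFS
aws ec2 authorize-security-group-ingress \
    --group-id sg-0efs0000000000000 \
    --protocol tcp \
    --port 2049 \
    --source-group sg-0workers000000000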

Before launching the WordPress & MySQL pods, we have to do a small setup.

We have to create the EFS provisioner, storage class, and RBAC resources!

BUT WHY?

EFS PROVISIONER:-

The EFS provisioner is deployed as a pod that has a container with access to an AWS EFS file system. The container reads a ConfigMap containing the file system ID, the Amazon region of the EFS file system, and the name of the provisioner.
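For reference, the upstream efs-provisioner example feeds those three values in through a ConfigMap like the sketch below; the deployment later in this article sets them directly as environment variables instead, which has the same effect:

kind: ConfigMap
apiVersion: v1
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-83078d52          # our EFS file system ID
  aws.region: ap-south-1
  provisioner.name: lw-course/aws-efs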

STORAGE CLASS:-

A StorageClass resource is defined whose provisioner attribute determines which volume plugin is used for provisioning a PersistentVolume (PV). In this case, the StorageClass specifies the EFS provisioner pod as an external provisioner by referencing the value of the provisioner.name key in the ConfigMap.

RBAC :

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing you to dynamically configure policies through the Kubernetes API.

Let's write the YAML files for these three!

# efs-provisioner.yml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-83078d52
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: lw-course/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-83078d52.efs.ap-south-1.amazonaws.com
            path: /

# rbac.yml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: lwns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

# storage.yml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: lw-course/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

Now run these commands to create the EFS provisioner, storage class, and RBAC resources:

→ kubectl create -f efs-provisioner.yml

→ kubectl create -f storage.yml

→ kubectl create -f rbac.yml

After everything is launched, we are ready with our website.
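One detail worth making explicit: the WordPress and MySQL deployments must now reference the EFS-backed claims instead of the EBS-backed ones. This step is implied by the article's screenshots rather than shown; a sketch of the change in the WordPress deployment's volumes section (the MySQL deployment changes analogously to efs-mysql):

volumes:
  - name: wordpress-persistent-storage
    persistentVolumeClaim:
      claimName: efs-wordpress   # previously wp-pv-claim (EBS-backed)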

PROMETHEUS & GRAFANA INTEGRATION USING HELM

Let's get an overview of Helm, Prometheus, and Grafana.

PROMETHEUS:

Prometheus is an open-source monitoring system with a dimensional data model, a flexible query language, an efficient time-series database, and a modern alerting approach. Prometheus collects metric data from exporter programs and saves it in its time-series database. An exporter is a program installed on the nodes that exposes the metrics (for example, RAM and CPU utilization are metric data).

Prometheus stores 3 types of data in its database:

1. Timestamps
2. Labels
3. Metric data

GRAFANA:

Grafana is a multi-platform, open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. It is expandable through a plug-in system, and end users can create complex monitoring dashboards using interactive query builders.

HELM:

Helm helps you install and manage Kubernetes applications: Helm charts let you define, install, and upgrade even the most complex Kubernetes application. By using Helm, we can launch Kubernetes applications within seconds.

Prerequisite:

The first thing is to install Helm and Tiller!
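The installation itself isn't shown in the post; one common way to fetch the Helm 2 client on a Linux base OS was the official install script (Tiller is then deployed by the helm init commands below):

# curl -L https://git.io/get_helm.sh | bash
# helm version --client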

CLI commands:

# helm init

# helm repo add stable https://kubernetes-charts.storage.googleapis.com/

# helm repo list

# helm repo update

# kubectl -n kube-system create serviceaccount tiller

# kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

# helm init --service-account tiller

# kubectl get pods --namespace kube-system

Now, let's launch Prometheus:

# kubectl create namespace prometheus

# helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"

# kubectl get svc -n prometheus

# kubectl -n prometheus port-forward svc/flailing-buffalo-prometheus-server 8888:80

Install Grafana through Helm:

# kubectl create namespace grafana

# helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword='GrafanaAdm!n' --set datasources."datasources\.yaml".apiVersion=1 --set datasources."datasources\.yaml".datasources[0].name=Prometheus --set datasources."datasources\.yaml".datasources[0].type=prometheus --set datasources."datasources\.yaml".datasources[0].url=http://prometheus-server.prometheus.svc.cluster.local --set datasources."datasources\.yaml".datasources[0].access=proxy --set datasources."datasources\.yaml".datasources[0].isDefault=true --set service.type=LoadBalancer

# kubectl get secret worn-bronco-grafana --namespace grafana -o yaml

Grafana now monitors the cluster's metrics with real-time series graphs for all the target nodes.

FARGATE:-

Fargate is a serverless architecture. Depending on the load, it scales the number of nodes out and in. There is no need to specify the nodes; they are provisioned dynamically behind the scenes.

Create a file for the Fargate cluster:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: far-mkcluster
  region: ap-southeast-1
fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default

eksctl create cluster -f fcluster.yml

This launches a cluster in the Singapore region whose master and worker capacity is managed dynamically, depending on the load.

You can see the cluster named far-mkcluster created in the Singapore region. To make use of the cluster, I have updated the kubeconfig.
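Updating the kubeconfig can be done with eksctl or the AWS CLI; for example, using the name and region from the config above:

# aws eks update-kubeconfig --name far-mkcluster --region ap-southeast-1
# kubectl get nodes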

By default, when we create the Fargate cluster, it launches 2 nodes.

Now I am launching a pod from the httpd image. A node is dynamically launched to run the httpd pod.
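The exact command isn't shown in the post; a minimal equivalent (the pod name webpod is an arbitrary choice):

# kubectl run webpod --image=httpd
# kubectl get pods -o wide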

As we know, Fargate provides a serverless architecture, so to check this we launch a new pod and watch the output:

As soon as the pod is created, a new node launches dynamically.
No permanent worker node runs here; internally, everything operates serverlessly.

Now I will delete the pod. Let's see whether it deletes the node dynamically or not.
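Again using the hypothetical pod name from above:

# kubectl delete pod webpod
# kubectl get nodes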

As soon as we delete the pod, the node is dynamically deleted; and if we launch a pod again, a node is created dynamically once more.

Thank you for reading!

LinkedIn profile: https://www.linkedin.com/in/mayank-varshney-62744a163

Source Code: https://github.com/mayank-aly/aws-eks.git
