AWS - EKS

This is how AWS EKS works.

What is Amazon EKS?

Amazon Elastic Kubernetes Service (EKS) is a managed service that lets you run Kubernetes on AWS without having to install and operate your own Kubernetes control plane or worker nodes.

  • First, EKS supports AWS Fargate: Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
  • Second, EKS is deeply integrated with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), providing you a seamless experience to monitor, scale, and load-balance your applications.
  • Third, EKS integrates with AWS App Mesh and provides a Kubernetes native experience to consume service mesh features and bring rich observability, traffic controls and security features to applications.
  • Additionally, EKS provides a scalable and highly-available control plane that runs across multiple availability zones to eliminate a single point of failure.

Kubernetes (K8S) is an open-source system for automating deployment, scaling, and management of containerized applications.

WHY EKS?

There is no need to launch the master node ourselves: behind the scenes, EKS launches and manages it, so we don't have to worry about the master node. It creates the slave (worker) nodes using EC2 instances. We use the eksctl command to work with EKS: for creating the cluster and its worker nodes we write a YAML file and run eksctl with it; eksctl contacts EKS, and behind the scenes EKS contacts AWS EC2 to launch the instances (nodes), as the sketch below shows.
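A rough sketch of that workflow (the file and cluster names here are only placeholders, not from the original setup):

# create the cluster and its node groups from a config file
eksctl create cluster -f cluster.yml
# point kubectl at the new cluster (eksctl also writes the kubeconfig itself)
aws eks update-kubeconfig --name mycluster
# verify that the worker nodes have joined
kubectl get nodes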

OBJECTIVES/TASK :

  • 👉Create a Kubernetes cluster using AWS EKS.
  • 👉Integrate EKS with EC2, ELB, EBS and EFS.
  • 👉Deploy WordPress & MySQL on top of AWS EKS.
  • 👉Use Helm to install and integrate Prometheus and Grafana.
  • 👉Create a Fargate cluster.

Creating a k8s cluster on AWS EKS

Prerequisites: Install the AWS CLI on our base OS and log in to our AWS account with the access key and secret key. Set up eksctl on our base OS as well.
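For reference, a minimal eksctl cluster-config file could look like the following sketch (the cluster name, region, instance type, node count and key-pair name are placeholders; adjust them to your own setup):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: mycluster          # placeholder cluster name
  region: ap-south-1       # placeholder region
nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro
    ssh:
      publicKeyName: mykey # existing EC2 key pair (placeholder)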

The cluster-config file can be edited as per preference. Once eksctl finishes, the cluster is created successfully; we can confirm it through the CLI and also check manually in the AWS Console. Next, we write the Kubernetes manifests for WordPress and MySQL.
# wordpress-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - image: wordpress:4.8-apache
          name: wordpress
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim
# mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
secretGenerator:
  - name: mysql-pass
    literals:
      - password=redhat
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
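Assuming all three files sit in the current working directory, the whole stack can then be applied in one shot (a sketch; the namespace defaults to the current one):

# apply the kustomization (secret, PVCs, services, deployments)
kubectl apply -k .
# watch the pods come up and grab the ELB address of the wordpress service
kubectl get pods
kubectl get svc wordpress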
WordPress installation successful.

But there is a challenge/problem

If we scale the pods, the PVC won't be able to attach to all of them, because an EBS volume belongs to a particular availability zone (AZ) and supports only single-node access, so we can't attach the same PVC to pods scheduled on nodes in other AZs.

So what might be the solution to our challenge?

If we have centralized storage, we can attach it to any number of pods. Amazon EFS gives us exactly that: an NFS file system that nodes in every availability zone can mount.

Select appropriate security groups so that the worker nodes can reach the EFS mount targets (NFS traffic) from whichever availability zone a pod is launched in; one possible CLI sequence is sketched below.
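A sketch of creating the file system and its mount targets from the AWS CLI (the file-system, subnet and security-group IDs below are placeholders, not the ones used in this setup):

# create the EFS file system
aws efs create-file-system --creation-token eks-efs --region ap-south-1
# create one mount target per subnet used by the worker nodes
aws efs create-mount-target --file-system-id fs-xxxxxxxx \
    --subnet-id subnet-xxxxxxxx --security-groups sg-xxxxxxxx
# on each worker node, the NFS client utilities must be installed
sudo yum install -y amazon-efs-utils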
EFS created successfully.

BUT WHY?

Kubernetes can't consume the EFS file system directly; a few extra pieces are needed before our PVCs can use it:

EFS_PROVISIONER:-

The EFS provisioner is a pod (deployed below) that mounts the EFS file system over NFS and dynamically carves out PersistentVolumes from it whenever a claim asks for its storage class.

STORAGE CLASS:-

A StorageClass tells Kubernetes which provisioner to call for dynamic provisioning; ours points at the provisioner name lw-course/aws-efs.

RBAC :

Role-Based Access Control; the provisioner's service account needs permission to create and bind PersistentVolumes, so we bind it to a cluster role.

# EFS-PROVISIONER
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  selector:
    matchLabels:
      app: efs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-83078d52
            - name: AWS_REGION
              value: ap-south-1
            - name: PROVISIONER_NAME
              value: lw-course/aws-efs
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-83078d52.efs.ap-south-1.amazonaws.com
            path: /
# RBAC
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: lwns
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
# STORAGE-CLASS
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: lw-course/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-wordpress
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs-mysql
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
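With the StorageClass and the two EFS-backed claims in place, the remaining change is to make the WordPress and MySQL Deployments use them instead of the EBS-backed claims. A sketch of that edit for the WordPress Deployment (MySQL is changed the same way, pointing at efs-mysql):

      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: efs-wordpress   # was: wp-pv-claim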
After everything is launched, the claims bind to EFS-backed volumes and we are ready with our website.

PROMETHEUS — GRAFANA INTEGRATION BY USING HELM

PROMETHEUS :

Prometheus is an open-source monitoring and alerting toolkit that scrapes time-series metrics from configured targets (here, the cluster and its nodes).

GRAFANA :

Grafana is an open-source analytics and visualization tool; we point it at Prometheus as a data source and build dashboards from the collected metrics.

HELM :

Helm is the package manager for Kubernetes; charts let us install complete applications such as Prometheus and Grafana with a single command. Helm v2 uses a server-side component inside the cluster called Tiller.

Pre-requisite :

A running EKS cluster with kubectl configured, plus the helm binary installed on our base OS.

The first thing is to install Helm and Tiller!
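A typical Helm v2 setup sequence looks roughly like this (a sketch; the service-account name tiller is conventional, and the stable repo URL is the one in use at the time of writing):

# service account and permissions for Tiller
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller
# install Tiller into the cluster
helm init --service-account tiller
# add the stable chart repository and refresh it
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo update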

Now, let's launch Prometheus.
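One way to do that with the stable chart (the namespace, storage class and local port are my own illustrative choices, not necessarily the article's exact flags):

kubectl create namespace prometheus
helm install stable/prometheus --namespace prometheus \
    --set alertmanager.persistentVolume.storageClass=gp2 \
    --set server.persistentVolume.storageClass=gp2
# note the ...-prometheus-server service and forward it locally
kubectl -n prometheus get svc
kubectl -n prometheus port-forward svc/<prometheus-server-svc> 9090:80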

Install Grafana through a helm command:
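A comparable sketch for Grafana (again, the namespace, admin password and service type are illustrative choices):

kubectl create namespace grafana
helm install stable/grafana --namespace grafana \
    --set persistence.enabled=true \
    --set persistence.storageClassName=gp2 \
    --set adminPassword=redhat \
    --set service.type=LoadBalancer
# the ELB address of the Grafana service:
kubectl -n grafana get svc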

Grafana now monitors the cluster's metrics with real-time time-series graphs across all the target nodes.

FARGATE:-

AWS Fargate is a serverless compute engine for containers: we don't manage any worker nodes ourselves, and EKS runs the pods on Fargate-provisioned infrastructure. With eksctl, a Fargate-backed cluster can be created from a config file like this one:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: far-mkcluster
  region: ap-southeast-1
fargateProfiles:
  - name: fargate-default
    selectors:
      - namespace: kube-system
      - namespace: default
As soon as a pod is created, a new node is launched dynamically for it. No slave node is running here permanently; internally everything operates serverless.
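A quick way to see that behaviour (the config file name, deployment name and image are arbitrary placeholders):

# create the Fargate cluster from the config file above (if not already created)
eksctl create cluster -f fargate-cluster.yml
# launch a pod in the default namespace and watch a Fargate node appear for it
kubectl create deployment test-nginx --image=nginx
kubectl get pods -o wide
kubectl get nodes    # nodes named fargate-ip-... are provisioned on demand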

Thank you for reading.

LinkedIn profile: https://www.linkedin.com/in/mayank-varshney-62744a163
