Kubernetes with EKS and AWS Console

Deploying a Spring Boot application on a Kubernetes cluster on AWS using EKS

The goal of this post is to create a Spring Boot application, deploy it first on a local minikube cluster, and then deploy it on a Kubernetes cluster on AWS using Amazon EKS (Amazon Elastic Kubernetes Service).

The post is derived from the official AWS documentation which is available at the following location:
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html


Pre-Requisites:

  • You should have an AWS account set up.
           Note: completing this activity will incur some expenses, so kindly provision your account accordingly.
  • Install / Setup Kubectl
                  https://kubernetes.io/docs/tasks/tools/install-kubectl/
  • Install / Setup AWS CLI
                https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html


Prefer AWS CLI version 1.

Run the following commands:
aws --version
aws configure

Note: aws configure is used to configure your CLI with the region and access keys.
You might need to create an access key and secret access key for your account in the IAM console.
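Once configured, you can verify that the CLI is talking to the right account with the following command (it prints the account id and user ARN):
aws sts get-caller-identity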

IAM-Service-Role:
 
You need to create an IAM role that allows Kubernetes to create AWS resources.


 Additionally, you need to attach the following inline policy to the role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "ec2:DescribeAccountAttributes",
            "Resource": "*"
        }
    ]
}
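
If you prefer the CLI over the console, the role can be created along these lines (the role name and JSON file names are placeholders; the trust policy file must allow eks.amazonaws.com to assume the role):

aws iam create-role --role-name eksServiceRole --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eksServiceRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam put-role-policy --role-name eksServiceRole --policy-name eks-describe-account-attributes --policy-document file://inline-policy.json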

Create VPC Stack:
 The EKS setup requires us to create a VPC. The easiest way is to create a VPC stack using CloudFormation.

Navigate to CloudFormation, create a new stack, and enter the following template URL:
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-vpc-sample.yaml
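
Alternatively, the same stack can be created from the AWS CLI (the stack name is just a placeholder):

aws cloudformation create-stack --stack-name eks-vpc --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-vpc-sample.yaml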


After the stack is created, navigate to the Outputs tab and note the following:
  • Security groups
  • Subnet Ids
  • VPC ID

Note: This can take up to 5 minutes.

EKS Cluster Setup:
Navigate to the EKS service and create a new EKS cluster by providing a desired EKS cluster name.


 

Choose the role created in the "IAM-Service-Role" step and select the VPC from the "Create VPC Stack" step.
This then takes around 10-15 minutes. Once completed, note the name of your EKS cluster and your region, as these will be required to connect to the Kubernetes cluster using kubectl.
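If you want to check the provisioning status from the CLI instead of refreshing the console, something like the following should work (cluster name and region are placeholders):

aws eks --region eu-central-1 describe-cluster --name my-eks-cluster --query "cluster.status"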

Connecting to EKS Cluster from kubectl: 
Backup of kube config

If you use minikube on your local machine, or if you're connecting to a different Kubernetes cluster using kubectl, it is recommended to take a backup of your kube config file (~/.kube/config).

AWS CLI setup

The AWS CLI should be set up and the following command should have been executed: 
aws configure
 
When prompted, provide your IAM user's access key, secret access key, and default region.

Update Kubectl config
aws eks --region [region name] update-kubeconfig --name [eks cluster name]
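
For example, with a hypothetical cluster name and region, followed by a quick connectivity check:

aws eks --region eu-central-1 update-kubeconfig --name my-eks-cluster
kubectl get svc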

Create IAM Role For Worker Node:
Before you can launch worker nodes and register them into a cluster, you must create an IAM role for those worker nodes to use when they are launched.
This can be easily done using CloudFormation with the following template:
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup-role.yaml
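
As with the VPC, this stack can also be created from the CLI; note that creating IAM resources requires the CAPABILITY_IAM acknowledgement (the stack name is a placeholder):

aws cloudformation create-stack --stack-name eks-worker-node-role --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup-role.yaml --capabilities CAPABILITY_IAM
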
Attaching Worker Node to EKS Cluster:
Before you proceed with this step, make sure that you have created an EC2 key pair which can be used to connect to an EC2 instance.
This can be easily done using the following link:
 
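Alternatively, a key pair can be created with the AWS CLI (the key name and file name are placeholders):

aws ec2 create-key-pair --key-name eks-worker-nodes --query "KeyMaterial" --output text > eks-worker-nodes.pem
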
Then navigate to the EKS cluster on the AWS console and click on "configure node group".



Select the IAM role for the worker node, which was created in the previous step, and the EC2 key pair when prompted.
The wizard provides an option to choose the EC2 instance types.

I used a t3.medium instance with a desired capacity of 1 and a max capacity of 2.
Once the setup is complete, you can verify if the nodes are connected to your cluster by running the following command:

kubectl get nodes


Deploying the application:
For testing the Kubernetes cluster, we will deploy a simple Spring Boot application.
I have created a Spring Boot application with a home page, which we will try to deploy on AWS.
The source code for the same can be found on GitHub

After cloning the application from Github, build the project using:
mvn clean package

Dockerization:
The source code contains a Dockerfile which can be used to create a docker image.
Execute the following commands:

docker build -t springboot-kubernetes .

docker tag springboot-kubernetes:latest [docker public repo]/springboot-kubernetes:one

docker push [docker public repo]/springboot-kubernetes:one


Kubernetes Deployment:

IAM Setup (Optional)

Note: By default, only the IAM user who created the cluster has access to the cluster.
To provide other IAM users access to the cluster via kubectl, we can create a YAML file as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn:
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn:
      username:
      groups:
        - system:masters
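
Assuming the ConfigMap above is saved as aws-auth-cm.yaml (the file name is arbitrary), it can be applied and verified as follows:

kubectl apply -f aws-auth-cm.yaml
kubectl describe configmap -n kube-system aws-auth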


Further reading:
https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Deployment : 

Once the image is pushed to a public repo, you can run the following kubectl commands on your EKS cluster: 

Navigate to the root folder of the application and run the following commands:
 kubectl create -f ./deployment.yaml
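
The deployment.yaml itself ships with the source code and is not reproduced in this post; a minimal sketch of what such a manifest could look like is shown below (the names, labels, image and container port are assumptions based on the docker commands above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springboot-kubernetes
  template:
    metadata:
      labels:
        app: springboot-kubernetes
    spec:
      containers:
        - name: springboot-kubernetes
          image: [docker public repo]/springboot-kubernetes:one
          ports:
            - containerPort: 8080
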
You can verify that the application is deployed correctly by running the following command:
kubectl get pods

If the pods are created successfully, expose the application as a LoadBalancer service by running the following command:
kubectl create -f ./kubernetes_service_loadbalancer.yaml
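
Again, this manifest is part of the source code; a minimal sketch of a LoadBalancer service for the deployment above might look like this (the service name, port and labels are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: springboot-kubernetes-service
spec:
  type: LoadBalancer
  selector:
    app: springboot-kubernetes
  ports:
    - port: 80
      targetPort: 8080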

The service can be verified with the following command:
kubectl get svc 


Note (for private container registries):
It's possible that you don't want your images to be in a public repo like Docker Hub and instead want to push them to a private/company repository.

In that case, after pushing your docker images to the private repo, you should create a Kubernetes secret with the following command:

kubectl create secret docker-registry regcred --docker-server=[your-registry-server] --docker-username=[user name for registry] --docker-password=[password for reg] --docker-email=[email address]

Once created, you can reference this secret in the deployment.yaml as follows:

....
spec:
  containers:
    ...
  imagePullSecrets:
    - name: regcred

This way, Kubernetes can pull the image from your private repo and deploy it.


Since we have created a service of type LoadBalancer, AWS will assign an external IP/hostname to the service. This can then be used to access the application.

Note: Initially, when the service is created, the external IP will be in a pending state. It then gets assigned an address.

Note: The application is accessible only after a minute or two.

For troubleshooting, describe the pod and the services.
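
The usual commands for this are (pod and service names are placeholders):

kubectl describe pod [pod name]
kubectl describe svc [service name]
kubectl logs [pod name]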

The application can then be accessed using the external IP as follows:
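For example, using the EXTERNAL-IP (or hostname) and port reported by kubectl get svc:

curl http://[external-ip or hostname]:[service port]/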

Cleaning Up:
Once you are done, you can clean up the resources in the following order:
  1. Delete node groups.
  2. Delete the EKS cluster.
  3. Delete EKS-worker-Node-Role stack from cloud formation.
  4. Delete VPC stack from cloud formation.
If vpc stack deletion fails or hangs:
  1. Delete Load Balancer. (created by Kubernetes Service)
  2. Delete Network Interfaces.
  3. Delete VPC 
Total Cost:
After completing the entire exercise and cleaning up the resources immediately, I incurred a cost of around $0.33.

The split up of the cost is as follows:
 

Kubernetes Cluster on AWS using KOPS

The cluster creation on AWS can also be done using kops.


The following steps are to be followed for the cluster creation:


1. KOPS CLI setup
kops is a command-line tool which can be set up using the following link:

2. Create a new IAM user named kops with programmatic access and assign the following policies:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess

Note down the access keys and secret for the user.
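
If you prefer the CLI, the user and its policies can be created roughly like this (attaching the policies to a dedicated group is an assumption, following the approach used in the kops documentation):

aws iam create-group --group-name kops
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam attach-group-policy --group-name kops --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops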

3. AWS CLI Setup:
Ensure that the AWS CLI is set up and run aws configure using the credentials of the user created in the previous step.

4. S3 Bucket:
Create an S3 bucket and enable versioning on it.
Note down the bucket name.
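
With the AWS CLI, this could look like the following (the bucket name and region are placeholders; outside us-east-1 a LocationConstraint is required):

aws s3api create-bucket --bucket my-kops-state-store --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
aws s3api put-bucket-versioning --bucket my-kops-state-store --versioning-configuration Status=Enabled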

5. Public key file
Create a new SSH key pair inside the .ssh folder (inside your user profile folder):
ssh-keygen
This will prompt you for a file name and passphrase. Note them down.
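
For example, to generate the key with the file name used later in step 6.3 (the options shown are only a suggestion):

ssh-keygen -t rsa -b 4096 -f .ssh/id_rsa_kops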

6. Open a new command prompt and run the following commands:
  
    6.1 set KOPS_CLUSTER_NAME=<name of cluster you want>
          set KOPS_STATE_STORE=s3://<name of s3 bucket created in step 4>

    6.2 Check availability zones available to us in the region:
           aws ec2 describe-availability-zones --region eu-central-1

    6.3  Create kubernetes cluster with the following command:

          kops create cluster --node-count=1 --node-size=t2.micro --master-size=t2.micro --zones=eu-central-1a,eu-central-1b --ssh-public-key .ssh/id_rsa_kops.pub


         where --zones indicates the availability zones as per the output of 6.2,
                    --ssh-public-key is the path of the newly created .pub file from step 5,
                    and node-count, node-size and master-size can be adjusted as per your requirements.

           Executing the above command will give you a summary of the infrastructure that will be created
           and the resultant configuration will be saved in the s3 bucket created in step 4.

    6.4  kops update cluster --name <name of cluster> --yes

            This command will now start creating the required infrastructure, which includes (but is not limited to) the following:
             Kubernetes master node (EC2 instance)
             Kubernetes worker node (EC2 instance)
             Load balancer
             VPC etc.

     6.5 Accessing the nodes:
            kubectl get nodes
            kubectl config get-contexts
            kubectl config use-context <required context name>

     6.6 Accessing / Editing node configuration:
             kops get ig
             kops edit ig <ig name>
            (make changes using the editor and save them)
            kops update cluster --name <name of cluster> --yes

7. Terminating the cluster:
     kops delete cluster --yes

For further reading: