Deploy Spring Boot Application on Kubernetes

Deployment on Kubernetes


The blog assumes that minikube and kubectl are set up on your machine and that the OS supports virtualization.

The setup can be done using the following link:

The blog focuses on deploying your Spring Boot application on a Kubernetes cluster.
As a prerequisite, the Docker images for the applications to be deployed should already be present in a public Docker Hub repository.
For more instructions, refer to the previous blog post:

Configuration Files

Creation of Deployment.yaml
The first step is to create a deployment.yaml for each application.
The deployment yaml contains instructions such as the name of the application, the number of replicas, the name of the image, etc.

The yaml files can be executed using kubectl, which lets Kubernetes (minikube) create a service and a pod, and deploy the application on the cluster.

A sample deployment.yaml looks like the following:
--------------------------------------------------------------------------------------------------------------

---
kind: Service
apiVersion: v1
metadata:
  name: employee-service
spec:
  selector:
    # Should match the template.metadata.labels.app value
    app: employee
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 80
      # Port to forward to inside the pod
      targetPort: 7003
      # Port accessible outside cluster
      nodePort: 30002
      name: http
  type: NodePort


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-deployment
spec:
  selector:
    matchLabels:
      app: employee
  replicas: 1
  template:
    metadata:
      labels:
        app: employee
    spec:
      containers:
        - name: employee
          image: /employee:one
          # Same port as mentioned in the application's application.properties
          ports:
            - containerPort: 7003
--------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------------------------------------
---
kind: Service
apiVersion: v1
metadata:
  name: auth-server-service
spec:
  selector:
    # Should match the template.metadata.labels.app value
    app: oauth-server
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 80
      # Port to forward to inside the pod
      targetPort: 7777
      # Port accessible outside cluster
      nodePort: 30001
      name: http
  type: NodePort

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth-server-deployment
spec:
  selector:
    matchLabels:
      app: oauth-server
  replicas: 1
  template:
    metadata:
      labels:
        app: oauth-server
    spec:
      containers:
        - name: oauth-server
          image: /oauth-server:one
          # Same port as mentioned in the application's application.properties
          ports:
            - containerPort: 7777
--------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------------------------------
---
kind: Service
apiVersion: v1
metadata:
  name: auth-client-service
spec:
  selector:
    # Should match the template.metadata.labels.app value
    app: oauth-client
  ports:
    - protocol: "TCP"
      # Port accessible inside cluster
      port: 80
      # Port to forward to inside the pod
      targetPort: 7004
      # Port accessible outside cluster
      nodePort: 30003
      name: http
  type: NodePort



---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oauth-client-deployment
spec:
  selector:
    matchLabels:
      app: oauth-client
  replicas: 1
  template:
    metadata:
      labels:
        app: oauth-client
    spec:
      containers:
        - name: oauth-client
          image: /oauthclient:one
          # Same port as mentioned in the application's application.properties
          ports:
            - containerPort: 7004
          env:
            # The value should be the cluster IP of the oauth server; see the
            # note below on the {SVCNAME}_SERVICE_HOST variables Kubernetes injects
            - name: oauth2.server.uri
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 7004
            initialDelaySeconds: 10
            timeoutSeconds: 2
            periodSeconds: 3
            failureThreshold: 1
          livenessProbe:
            httpGet:
              path: /actuator/health
              port: 7004
            initialDelaySeconds: 20
            timeoutSeconds: 2
            periodSeconds: 8
            failureThreshold: 1

-----------------------------------------------------------------------------------------------------------------

Note: The oauth client needs to make an inter-service call to the oauth server to fetch tokens.
This can be achieved by passing the cluster IP of the auth server as an environment variable, as highlighted above.
Whenever we create a service, Kubernetes exposes environment variables such as {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT, which can be used for inter-service calls within the same Kubernetes cluster.
In the above example, we pass the IP address of the oauth server, which is then used in the RestController of the oauth client to make an inter-service call to the oauth server using RestTemplate.
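
As an illustration, here is a minimal sketch of how the oauth client might read this property and call the oauth server with RestTemplate. Only the oauth2.server.uri property comes from the deployment.yaml above; the controller name, mapping, and token path are assumptions for illustration.

--------------------------------------------------------------------------------------------------------------
import org.springframework.beans.factory.annotation.Value;
import org.springframework.http.HttpEntity;
import org.springframework.http.ResponseEntity;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class TokenController {

    // Resolved from the oauth2.server.uri env entry in the deployment.yaml,
    // i.e. the cluster IP of the oauth server injected by Kubernetes
    @Value("${oauth2.server.uri}")
    private String oauthServerUri;

    private final RestTemplate restTemplate = new RestTemplate();

    @GetMapping("/token")
    public ResponseEntity<String> fetchToken() {
        // Inter-service call to the oauth server inside the cluster
        String tokenUrl = "http://" + oauthServerUri + "/oauth/token";
        return restTemplate.postForEntity(tokenUrl,
                new HttpEntity<>(new LinkedMultiValueMap<String, String>()), String.class);
    }
}
--------------------------------------------------------------------------------------------------------------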

The deployment.yaml for the respective microservice applications can be found in the root folder of each application on GitHub.

Note: Change the image name in the deployment.yaml to point it to your repository and image.

In the above yaml, we defined a deployment, which specifies where to fetch the image from. The deployment is then exposed as a service so that it can be reached by other services.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.

ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.

NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>. This is the type used in the sample yaml above.

LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value. No proxying of any kind is set up.
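
For illustration, here is a minimal sketch of the employee Service switched to type LoadBalancer, as would be needed on a cloud provider. This variant is not part of the sample deployment.yaml files above.

---------------------------------------------------------------
---
kind: Service
apiVersion: v1
metadata:
  name: employee-service
spec:
  selector:
    app: employee
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 7003
      name: http
  # On a cloud provider this provisions an external load balancer;
  # on minikube the external IP stays pending unless "minikube tunnel" is running
  type: LoadBalancer
---------------------------------------------------------------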

For further reading regarding Services, refer to the link.

Minikube setup

Once the deployment.yaml is created for each application, the applications can be deployed in minikube.

--to start minikube (with Hyper-V)
minikube start --vm-driver=hyperv

--for the employee microservice
Note: Navigate to the employee microservice root folder.
kubectl create -f ./employee_deployment.yaml

--for the oauth server
Note: Navigate to the oauth server microservice root folder.
kubectl create -f ./oauth_server_deployment.yaml

--for the oauth client
Note: Navigate to the oauth client microservice root folder.
kubectl create -f ./oauth_client_deployment.yaml

--to update deployment or service yamls
kubectl apply -f ./oauth_client_deployment.yaml

--to open the minikube dashboard
minikube dashboard

--Common commands
kubectl get pods
kubectl get deployments
kubectl get services

Note:
It's possible that you don't want your images in a public repo like Docker Hub, and instead want to push them to a private/company repository.

In that case, after pushing your docker images to the private repo, you should create a Kubernetes secret with the following command:

kubectl create secret docker-registry regcred --docker-server=[your-registry-server] --docker-username=[user name for registry] --docker-password=[password for reg] --docker-email=[email address]

Once created, you can reference this secret in the deployment.yaml as follows:

....
spec:
  containers:
    ...
  imagePullSecrets:
    - name: regcred

This way, Kubernetes can pull the image from your private repo and deploy it.
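
You can confirm the secret exists before referencing it:

--verify the secret
kubectl get secret regcred --output=yaml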

Architecture view:

Accessing the application

As seen above, the application is deployed using kubectl and deployment.yaml.
To access the application, we need to know the port number and the IP address
where the application is deployed within Kubernetes.

Port number: This points to the nodePort attribute in the deployment.yaml,
e.g. for the employee microservice it's 30002,
for the oauth-server it's 30001,
and for the oauth-client it's 30003.

IP Address: The IP address will be the IP address of minikube.
This can be found using the following command:
--minikube ip address
minikube ip

So in our case, the request from Postman can now be modified to use the following address:
http://<minikube-ip>:30001/oauth/token
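
For example, using the minikube IP from the hosts-file section later in this post (172.17.128.125), the employee microservice can be checked from a shell; the /employee/1 path matches the endpoint used in the Ingress section below:

--query the employee microservice via its NodePort
curl http://172.17.128.125:30002/employee/1

--or let minikube print the service URL directly
minikube service employee-service --url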



Ingress:

It is sometimes cumbersome to access each microservice using the ipAddress:port combination.
Imagine there were many more microservices; the client would then need to keep track of the ipAddress:port for each and every microservice.

Additionally, let's consider a scenario where we have to expose our microservices on a cloud. This would first require us to expose our services with type "LoadBalancer",
and then we would need to configure an Elastic Load Balancer for each of the exposed services. This would be costly, as we would be charged for each load balancer instance.

To overcome these problems, Kubernetes suggests using "Ingress".
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

For further reading, refer to the link from Kubernetes on Ingress.


Ingress Setup on Minikube:

To set up ingress on any environment we need the following:
  1. Ingress Controller - a daemon, deployed as a Kubernetes Pod, that watches the apiserver's /ingresses endpoint for updates to the Ingress resource.
  2. Ingress Resources - a set of rules which decide how to route to the exposed services based on the requested url path.
By default, ingress is not enabled on minikube.
This can be enabled with the following command:

--enabling ingress on minikube
minikube addons enable ingress

You can verify that ingress is enabled by checking that the NGINX Ingress controller pod is running:
-- checking all pods
kubectl get pods -n kube-system

NAME                                        READY   STATUS
---------------------------------------------------------------
coredns-5644d7b6d9-mnvlm                    1/1     Running
coredns-5644d7b6d9-wb54d                    1/1     Running
etcd-minikube                               1/1     Running
....
nginx-ingress-controller-57bf9855c8-z9wzr   1/1     Running


We have already exposed our deployments as services in the respective deployment.yaml files. Now we should create an ingress resource to define rules for how the services are accessed.

Ingress file without hosts:

---------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: microservices-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Port, X-Forwarded-Prefix"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/x-forwarded-prefix: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /emp/(.+)
            backend:
              # Service name and port from employee_deployment.yaml
              serviceName: employee-service
              # Points to the port attribute of the service
              servicePort: 80
          - path: /auth/(.+)
            backend:
              # Service name and port from oauth_server_deployment.yaml
              serviceName: auth-server-service
              # Points to the port attribute of the service
              servicePort: 80
----------------------------------------------------------------
--applying the ingress file
kubectl apply -f C:\prashant\SelfLearning\Microservices-blog\Oauth2WithSpringBoot2\ingress-microservices.yaml

--check ingress
kubectl get ingress

NAME                    HOSTS   ADDRESS          PORTS   AGE
-------------------------------------------------------------------
microservices-ingress   *       172.17.128.125   80      21h

Once this is done, the microservice endpoints can be accessed with the following URLs, using the ADDRESS shown by kubectl get ingress:
http://172.17.128.125:80/emp/employee/1
http://172.17.128.125:80/auth/oauth/token
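
Note that the rewrite-target: /$1 annotation strips the routing prefix before forwarding, so a request like the following reaches the employee pod as /employee/1:

--the /emp/ prefix is removed by the ingress before the request hits the service
curl http://172.17.128.125/emp/employee/1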


Ingress file with hosts:

Sometimes it's required to redirect traffic based on the host name. For example, if the request comes from xyz.com, redirect it to service 1; if the request comes from abc.com, redirect it to service 2, and so on.
This can be achieved by adding the host attribute to the ingress resource.
---------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: microservices-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Port, X-Forwarded-Prefix"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/x-forwarded-prefix: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: local-microservices.com
      http:
        paths:
          - path: /emp/?(.+)
            backend:
              # Service name and port from employee_deployment.yaml
              serviceName: employee-service
              # Points to the port attribute of the service
              servicePort: 80
          - path: /auth/?(.+)
            backend:
              # Service name and port from oauth_server_deployment.yaml
              serviceName: auth-server-service
              # Points to the port attribute of the service
              servicePort: 80
---------------------------------------------------------------
In the above example, if the request comes from the host local-microservices.com,
then based on the path, the request is redirected to the respective application.

But since local-microservices.com is not a real domain, we need to add an entry to our hosts file to map the domain name to the minikube IP address:
C:\Windows\System32\drivers\etc\hosts
172.17.128.125 local-microservices.com
where 172.17.128.125 is the IP address of my minikube system, derived using the command minikube ip.

Once the above setup is done, the endpoints can now be accessed using the following URLs:
http://local-microservices.com/emp/employee/1
http://local-microservices.com/auth/oauth/token
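
Alternatively, instead of editing the hosts file, the Host header can be set explicitly when testing from a shell:

--send the expected host name without a hosts-file entry
curl -H "Host: local-microservices.com" http://172.17.128.125/emp/employee/1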

Conclusion

We have successfully deployed the microservices on minikube and exposed them as services.
The different configuration files can be found in the following locations: