Stateful service with sticky sessions and header-based routing using the Istio service mesh

Kaleeswaran Karuppusamy
May 16, 2020

In this post, we will see how to define service routes for a stateful app using header-based and sticky-session routing with an Istio Gateway, VirtualService, and DestinationRule.

USE CASE

We will take the use case below and walk through, step by step, how to achieve it.

  1. Deploy three simple Python apps (app1.py, app2.py, and app3.py) as the corresponding backend services:
    app1.py -> backend-1
    app2.py -> backend-2
    app3.py -> backend-3
  2. Each app exposes the same HTTP REST endpoint (/hit_backend), receives a message, and responds with the following output:
    Request:
    {
    "target": "backend-1",
    "username": "user one"
    }
    Response:
    {
    "service_name": "backend-1",
    "username": "user one",
    "pod_id": "1"
    }
  3. The services are hosted on minikube.
  4. Each backend has a minimum of 3 and a maximum of 5 replicas.
  5. A load balancer with a k8s Ingress or a custom Nginx/HAProxy ingress controller routes /hit_backend traffic based on the following routing rules.

Rules

Rule 1: Use the "target" field in the request body or request header to route to the corresponding backend (e.g., "target": "backend-1" routes to service backend-1).
Rule 2: Sticky session: requests with the same "username" should reach the same replica if made within 1 minute of the previous request.

Diagram

Key features

1. Header-based routing using Istio VirtualService and Gateway
2. Username-based sticky sessions using Istio DestinationRule
3. The sticky session should expire in one minute
4. Horizontal Pod Autoscaling enabled when CPU utilization reaches 50%

Steps

1. Steps to install Istio in minikube

  1. Minikube should be installed and up and running
    a. brew cask reinstall minikube
    b. minikube start --memory=5120
    c. minikube stop
    d. minikube restart

2. Download Istio https://doc.istio.cn/en/docs/setup/kubernetes/download-release/
3. Follow steps https://istio.io/docs/setup/kubernetes/#downloading-the-release
4. https://github.com/istio/istio/releases
5. https://istio.io/docs/setup/kubernetes/install/kubernetes/ (change the gateway type to NodePort and install the demo profile, istio-demo.yaml)
6. https://istio.io/docs/tasks/traffic-management/ingress/ingress-control/#determining-the-ingress-ip-and-ports
7. https://istio.io/docs/examples/bookinfo/
8. Change the istio-ingressgateway service type from LoadBalancer to NodePort in install/kubernetes/istio-demo.yaml (the ingress host and port can then be determined as shown below)
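
Once the gateway type is switched to NodePort, the ingress host and port can be picked up as below (a minimal sketch following the Istio docs; the port name http2 is the default in the demo profile):

# Ingress host is the minikube VM IP, ingress port is the NodePort of the http2 server port
export INGRESS_HOST=$(minikube ip)
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
echo "$INGRESS_HOST:$INGRESS_PORT"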

2. Backend Server — Python (REST service)

Use the same code for app1.py, app2.py, and app3.py, changing the "target" value to backend-2 and backend-3 respectively.

from flask import Flask, jsonify, request
import socket

app = Flask(__name__)

@app.route('/hit_backend', methods=['POST'])
def backend1():
    # Echo the request body back, tagged with this backend's name and pod IP
    content = request.get_json()
    content["target"] = "backend-1"
    content["podIp"] = socket.gethostbyname(socket.gethostname())
    return jsonify(content)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
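
A quick local test of the app (assuming it is running locally on port 8080):

curl -X POST http://localhost:8080/hit_backend \
  -H 'Content-Type: application/json' \
  -d '{"username": "user one", "target": "backend-1"}'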

3. Dockerfile for Python

FROM alpine:3.9

# Install Python 3 and pip on Alpine
RUN apk add --no-cache python3 && \
    python3 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip3 install --upgrade pip setuptools && \
    if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
    if [ ! -e /usr/bin/python ]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
    rm -r /root/.cache

WORKDIR /app
COPY . /app

# Install the app dependencies directly; a virtualenv activated in a RUN step
# would not persist into later layers, so it is not used here
RUN pip install flask flask-jsonpify flask-sqlalchemy flask-restful requests

EXPOSE 8080
ENTRYPOINT [ "python" ]
CMD [ "app1.py" ]

4. K8s Resources

  1. K8s Services: backend-1, backend-2, and backend-3
  2. A ReplicaSet for each service
  3. Horizontal Pod Autoscaling for each ReplicaSet

5. Backend-1 DestinationRule

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend-1
spec:
  host: backend-1.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-user
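
Note that consistentHash on an HTTP header has no expiry, so the one-minute rule has to be handled by the client or the app. If the mesh itself should expire the stickiness, a cookie-based hash with a TTL is an alternative; a minimal sketch (the cookie name user-session is an assumption, not part of the original setup):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend-1
spec:
  host: backend-1.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user-session   # assumed cookie name
          ttl: 60s             # stickiness expires after one minute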

6. Backend Service

kind: Service
apiVersion: v1
metadata:
  name: backend-1
spec:
  selector:
    app: backend-1
  ports:
  - port: 8080 # Default port for image

---

kind: ReplicaSet
apiVersion: apps/v1
metadata:
  name: backend-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-1
  template:
    metadata:
      labels:
        app: backend-1
    spec:
      containers:
      - name: backend-1
        image: kaleeswarankaruppusamy/k8s:backend1
        imagePullPolicy: "Always"
        ports:
        - containerPort: 8080
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpas-backend-1
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: backend-1
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
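
For the HPA to scale on CPU, the cluster needs a metrics source; on minikube this can be enabled with the metrics-server addon (a sketch, assuming a reasonably recent minikube):

minikube addons enable metrics-server
kubectl get hpa hpas-backend-1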

7. VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hit-backend-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - hit-backend-gateway
  http:
  - match:
    - headers:
        target:
          exact: backend-2
    route:
    - destination:
        host: backend-2.default.svc.cluster.local
        port:
          number: 8080
  - match:
    - headers:
        target:
          exact: backend-1
    route:
    - destination:
        host: backend-1.default.svc.cluster.local
        port:
          number: 8080
  - match:
    - headers:
        target:
          exact: backend-3
    route:
    - destination:
        host: backend-3.default.svc.cluster.local
        port:
          number: 8080
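
A request without a matching target header gets no route and Istio returns a 404. If a fallback is desired, a default route can be appended as the last entry under http (routing unmatched traffic to backend-1 here is only an assumption):

  - route:
    - destination:
        host: backend-1.default.svc.cluster.local
        port:
          number: 8080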

8. Gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hit-backend-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

9. K8s Istio Resources

  1. DestinationRule to maintain the sticky session
  2. VirtualService for header-based routing
  3. Istio Gateway

10. Docker Build and Push Commands

  1. docker build -t kubia:1.0.0 .
  2. docker run -d --name kubia -it kubia:1.0.0
  3. docker tag kubia:1.0.0 <YOUR_REPO>/e2esystem:kubia4
  4. docker push <DOCKER_REPO>:kubia4
  5. docker exec -it kubia sh

11. K8s Deployment Files (use --validate=false if needed)

  1. kubectl apply -f backend-1.yaml
  2. kubectl apply -f backend-2.yaml
  3. kubectl apply -f backend-3.yaml
  4. kubectl apply -f backend-1-Destination-Rule.yaml
  5. kubectl apply -f backend-2-Destination-Rule.yaml
  6. kubectl apply -f backend-3-Destination-Rule.yaml
  7. kubectl apply -f backend-gateway.yaml
  8. kubectl apply -f backend-gateway-virtualservice.yaml
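
After applying the manifests, the resources can be verified (assuming everything is in the default namespace):

kubectl get pods -l app=backend-1
kubectl get svc backend-1 backend-2 backend-3
kubectl get gateway,virtualservice,destinationrule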

12. Sample Request and Response

POST /hit_backend HTTP/1.1
target: backend-1
Content-Type: application/json
x-user: kaleeswaran

{
"username": "user",
"target": "backend-1"
}

Response:
{
"target": "backend-1",
"username": "user",
"podId": "backend-1-zlxtc"
}
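
The same request can be sent through the Istio ingress gateway with curl (a sketch, assuming INGRESS_HOST and INGRESS_PORT were exported as shown in the installation step):

curl -X POST "http://$INGRESS_HOST:$INGRESS_PORT/hit_backend" \
  -H 'Content-Type: application/json' \
  -H 'target: backend-1' \
  -H 'x-user: kaleeswaran' \
  -d '{"username": "user", "target": "backend-1"}'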

Useful Docker commands

docker image prune --force
docker image ls
docker image rm <imageid>
docker load / docker save
docker ps -a
docker stop <containerid>
docker rm <containerid>
docker logs <containername>

K8s Commands

  1. brew cask install minikube # install using VirtualBox
  2. minikube status
  3. minikube delete
  4. minikube ip
  5. kubectl describe ing backend-ingress
  6. minikube dashboard # K8s UI
  7. kubectl describe service <servicename>
  8. kubectl delete service <servicename>
  9. kubectl delete deployment <deploymentname>
  10. kubectl delete pod <podname>
  11. kubectl get service
  12. kubectl get pod
  13. kubectl get deployment
  14. kubectl get replicaset

Run a Docker registry on your local machine

docker run -d -p 5000:5000 --restart=always --name registry registry:2
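
To push an image there, tag it with the local registry prefix first (a sketch using the kubia image built earlier):

docker tag kubia:1.0.0 localhost:5000/kubia:1.0.0
docker push localhost:5000/kubia:1.0.0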

Refer to the Git repo for the source code:

https://github.com/kaleeswaran393/kubernetis_istio_ServiceMesh_python.git

Kaleeswaran Karuppusamy

Lead Architect @ Lumen Technologies, working as a Cloud Application Architect on Agile, DevOps, containers, and cloud-native application development.