Reference: Jira ticket CTDS-234. The diagram below shows an upgrade from v1 to v2 while serving live traffic.

What is Canary deployment (in k8s)?
Canary (a.k.a. incremental rollout) is a deployment strategy in which the new version of the application is gradually deployed to the Kubernetes cluster while getting a very small amount of live traffic (i.e. a subset of live users connect to the new version while the rest are still using the previous version).
How to achieve canary deployment in Linkerd?
The answer to the above question is Flagger. Flagger is a Kubernetes operator that automates the promotion of canary deployments, using Istio, Linkerd, App Mesh, NGINX or Gloo routing for traffic shifting and Prometheus metrics for canary analysis. The canary analysis can be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom validation. Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like HTTP request success rate, average request duration and pod health. Based on the analysis of these KPIs, a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
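As an illustration of that control loop, here is a minimal sketch of the analysis section of a Canary resource (field names as per the Flagger v1alpha3 examples; the interval, step and threshold values are assumptions to tune per service):-
  canaryAnalysis:
    interval: 30s        # how often Flagger evaluates the KPIs
    threshold: 5         # failed checks before the canary is aborted
    maxWeight: 50        # maximum traffic percentage shifted to the canary
    stepWeight: 5        # traffic percentage added on each successful check
    metrics:
    - name: request-success-rate
      threshold: 99      # minimum HTTP request success rate (%)
      interval: 1m
    - name: request-duration
      threshold: 500     # maximum average request duration (ms)
      interval: 1m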
How to set up Flagger?
It is very simple but depends on the version of Kubernetes you are currently working with. At the time of documenting there are two versions of the Kubernetes cluster, 1.13 and 1.14; the project decided to use version 1.14 in the UAT and PROD environments, so to set up Flagger you just need to fire this command (needs kubectl version 1.14):
kubectl apply -k github.com/weaveworks/flagger//kustomize/linkerd
This installs Flagger in the linkerd namespace.
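To verify the installation before moving on, standard kubectl checks such as the following can be used (the deployment name flagger is what the kustomize overlay creates):
kubectl -n linkerd rollout status deploy/flagger   # wait until Flagger is ready
kubectl -n linkerd get deploy flagger              # confirm the deployment exists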
Example implementation (referenced from Flagger):-
Steps to follow:-
- Set up a namespace for the implementation >> kubectl create ns test (can be any namespace).
- Inject the Linkerd proxy into the newly created namespace >> kubectl annotate namespace test linkerd.io/inject=enabled
- This is optional but it is good to have a horizontal pod autoscaler; refer to metrics-server for setting up the metrics server/Heapster.
- Deploy the sample app (podinfo) together with its HPA >> kubectl apply -k github.com/weaveworks/flagger//kustomize/podinfo
- Create a custom resource "Canary" for the deployment object that needs canary deployment. Please refer to the attached file and replace the parameters below (a rough sketch of the resource follows the placeholder table):-
Placeholder - Description
- __NameOfYourChoice__ : name of your canary deployment object (e.g. podinfo).
- __NameOfYourNameSpace__ : name of the namespace where the deployment lives and where the canary deployment will live.
- __NameOfYourDeployment__ : name of the target deployment (e.g. podinfo).
- __NameOfYourDeployment__ : optional; name of the target deployment (e.g. podinfo).
- __ClusterIPPORTNumber__ : port number of the deployed ClusterIP service.
- __PODPortNumber__ : port number of the pod underneath the service (optional).
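For reference (the attached file is not reproduced here), a rough sketch of such a Canary resource with the placeholders filled in could look like the following; the structure follows the Flagger v1alpha3 examples and the analysis values are assumptions to tune:-
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: __NameOfYourChoice__
  namespace: __NameOfYourNameSpace__
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: __NameOfYourDeployment__
  autoscalerRef:                    # optional, the second __NameOfYourDeployment__ placeholder
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: __NameOfYourDeployment__
  service:
    port: __ClusterIPPORTNumber__
    targetPort: __PODPortNumber__   # optional
  canaryAnalysis:
    # metrics and thresholds as in the analysis sketch above
    interval: 30s
    threshold: 5
    maxWeight: 50
    stepWeight: 5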
It is good (though optional) to have a test that can send requests and keep checking that the pod deployments are going well.
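One option for such a test is Flagger's own load tester, which the canary analysis can invoke through a webhook; the install command below follows the Flagger kustomize layout, and the webhook values (service URL, port 9898) are the podinfo defaults, to be adapted for other services:
kubectl apply -k github.com/weaveworks/flagger//kustomize/tester
and in the Canary resource:
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"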
kubectl apply -f ./canary-podinfo.yaml
On execution of the above command, a few objects are applied and a few are generated:
# applied
deployment.apps/__NameOfYourDeployment__
horizontalpodautoscaler.autoscaling/__NameOfYourDeployment__
ingresses.extensions/__NameOfYourDeployment__
canary.flagger.app/__NameOfYourDeployment__
# generated
deployment.apps/__NameOfYourDeployment__-primary
horizontalpodautoscaler.autoscaling/__NameOfYourDeployment__-primary
service/__NameOfYourDeployment__
service/__NameOfYourDeployment__-canary
service/__NameOfYourDeployment__-primary
trafficsplits.split.smi-spec.io/__NameOfYourDeployment__
Here is the tricky bit which actually sets up the canary deployment: after bootstrapping, the original deployment is scaled down to zero and the generated primary deployment comes up and starts serving on its address. At this point the canary deployment setup is ready to cater for requests. Below is a link to a video which shows how a canary deployment happens for a sample app. The clarity is not at its best, but it gives some idea of the objects moving through the process. The video shows a canary deploy from version 3.1.1 to 3.1.2.
podinfo-primary-858fdd8d-grf7m.mp4
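To reproduce what the video shows, a canary run can be triggered by bumping the container image and then watching the Canary object; the container name podinfod and the image tag come from the upstream podinfo example and may differ in your setup:
kubectl -n test set image deployment/podinfo podinfod=stefanprodan/podinfo:3.1.2
kubectl -n test get canary podinfo -w    # watch the canary status and weight change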
Below is the TrafficSplit resource that gets generated when the kubectl apply -f ./canary-podinfo.yaml step above runs:
Name:         podinfo
Namespace:    test
Labels:       <none>
Annotations:  <none>
API Version:  split.smi-spec.io/v1alpha1
Kind:         TrafficSplit
Metadata:
  Creation Timestamp:  2019-12-08T19:06:15Z
  Generation:          67
  Owner References:
    API Version:           flagger.app/v1alpha3
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Canary
    Name:                  podinfo
    UID:                   a57bc070-4a23-42aa-9c35-1d556f8c97de
  Resource Version:  270255
  Self Link:         /apis/split.smi-spec.io/v1alpha1/namespaces/test/trafficsplits/podinfo
  UID:               24bd60f7-336e-4ad9-8774-8413b8ef361f
Spec:
  Backends:
    Service:  podinfo-canary
    Weight:   0
    Service:  podinfo-primary
    Weight:   100
  Service:  podinfo
Events:     <none>
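During an analysis run, the Weight values above move step by step (e.g. 95/5, 90/10, and so on up to the configured maxWeight) before being reset to 100/0 on promotion. One way to observe this, assuming the watch utility is available:
kubectl -n test get trafficsplit podinfo -o yaml      # inspect the current weights
watch kubectl -n test describe trafficsplit podinfo   # observe the weights shifting during a run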
NOTE:- The TrafficSplit can also be modified, or created as a custom resource, for traffic splitting pointing to different services.
What is a TrafficSplit?
This resource allows users to incrementally direct percentages of traffic between various services. It is used by clients such as ingress controllers or service mesh sidecars to split the outgoing traffic to different destinations. For example, with two versions of a deployment, V1 and V2, both versions can be deployed and the traffic split between them: V1 takes half the traffic and V2 takes the other half, or in ratios of 10/90, 20/80, 30/70 and so on. When a specific deployment version is preferred, all the traffic can be routed to that version with a 0/100 split. Sample file for the traffic split:-
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: service
spec:
  service: service
  backends:
  - service: service-V1
    weight: 50
  - service: service-V2
    weight: 50
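Assuming this is saved as trafficsplit.yaml, it can be applied and inspected like any other resource; clients that call the apex service are then split 50/50 between service-V1 and service-V2:
kubectl apply -f ./trafficsplit.yaml
kubectl get trafficsplit service -o yaml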
Output of the sample implementation.

Challenges to be addressed while implementing:-
- Database changes and their impact on the old version of the deployment.
Unlike blue/green deployments, canary releases are based on the following assumptions:
- Multiple versions of your application can exist together at the same time, getting live traffic.
- If you don't use some kind of sticky-session mechanism, some customers might hit a production server in one request and a canary server in another; something like the User-Agent header can be used to identify the source of the request and route it to the respective server.
References
- https://linkerd.io/2/tasks/canary-release/ (setup)
- https://docs.flagger.app/usage/linkerd-progressive-delivery (canary deployment using Linkerd)
- https://github.com/weaveworks/flagger (Flagger code base)