Table of Contents
- Definition
- Multi-Tenant App Considerations
- Simple Method: Using Labels and Services
- Advanced Method: Using Ingress Controller
- Check Canary Using Bash
Definition
A Canary Deployment is a progressive delivery strategy used to roll out new versions of an application incrementally — to a small subset of users or traffic first — before rolling it out to everyone.
It's named after the "canary in a coal mine" analogy: you test a new version on a small audience to detect problems early, without impacting all users.
Instead of upgrading all pods from v1 to v2 at once, which is what a rolling update does, a canary deployment runs both versions side-by-side and routes only part of the traffic to the new one.
| Version | Pods | % Traffic |
|---|---|---|
| v1 (stable) | 3 | 80% |
| v2 (canary) | 1 | 20% |
If no errors are detected in v2 after monitoring, you gradually increase traffic to v2, until it replaces v1 completely.
Advantages:
- Early detection of bugs, performance regressions, or config issues.
- Minimal user impact if something goes wrong.
- Real-world testing with production traffic.
- Smooth rollback: just route traffic back to v1.
Challenges:
- Complex traffic management.
- Must ensure schema compatibility if both versions hit the same DB.
- Requires observability (metrics, logs, tracing).
Multi-Tenant App Considerations
Ensure DB backward compatibility: Both v1 and v2 will access the same schema. Always deploy schema changes in a backward-compatible manner (e.g., additive changes, not destructive).
Tenant targeting: If you want to expose canary features only to certain tenants, route based on headers (e.g., X-Tenant-ID) or subdomain (tenantX.example.com). This is achievable using Istio VirtualService routing rules or NGINX Ingress annotations.
Monitoring and rollback: Track error rate, latency, and logs specifically for version=v2. Automate rollback if canary exceeds thresholds.
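For example, with the NGINX Ingress Controller the tenant targeting described above can be expressed with canary-by-header annotations. The following is only a minimal sketch: the Ingress name, namespace, backend service, and the tenant value tenant-a are assumptions, and the X-Tenant-ID header must actually be present on incoming requests (set by clients or an upstream proxy).

# Hypothetical canary Ingress that receives traffic only for one tenant.
# Requests carrying "X-Tenant-ID: tenant-a" go to the canary backend;
# all other requests stay on the stable backend of the primary Ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-canary-tenant        # assumed name
  namespace: prod                # assumed namespace
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Tenant-ID"
    nginx.ingress.kubernetes.io/canary-by-header-value: "tenant-a"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-v2     # assumed canary service
                port:
                  number: 80

Note that canary-by-header-value matches the exact header value, so each tenant you want to put on the canary needs its own value or rule.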
Simple Method: Using Labels and Services
You can control the traffic proportion manually or via CI/CD by adjusting replica counts.
Deployment 1:
# v1 deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: v1
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
        - name: myapp
          image: myapp:v1
Deployment 2:
# v2 deployment (canary)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
        - name: myapp
          image: myapp:v2
A single Service that matches both:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
Traffic is load-balanced across all four pods, so effectively 25% goes to the canary (v2) and 75% to stable (v1). This is simple and uses only native Kubernetes, but you can't precisely control traffic weights or target users by headers or cookies.
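If you also want to hit the canary pods directly (for smoke tests or debugging) without going through the shared Service, a second, version-scoped Service can be added. This is a sketch using the labels from the deployments above; the name myapp-v2-direct is an assumption.

# Hypothetical Service that selects only the canary pods (version: v2),
# bypassing the 75/25 split of myapp-service.
apiVersion: v1
kind: Service
metadata:
  name: myapp-v2-direct    # assumed name
spec:
  selector:
    app: myapp
    version: v2            # narrower selector than myapp-service
  ports:
    - port: 80
      targetPort: 8080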
Advanced Method: Using Ingress Controller
The canary feature described here is built into the official kubernetes/ingress-nginx controller, not into the separate nginxinc/kubernetes-ingress controller maintained by NGINX Inc./F5.
This feature lets you split traffic between Ingress resources using annotations like:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "20"
So first, confirm you’re running the official Ingress NGINX and not a different ingress class.
kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
You should see something like:
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-5f867dd84f-bd79f 1/1 Running 1 (299d ago) 476d
ingress-nginx-controller-5f867dd84f-mqzjz 1/1 Running 0 476d
Check the image:
kubectl get deployment ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.template.spec.containers[0].image}'
You should see something like:
registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
Canary support was added long ago (around v0.25+), but you should be on v1.0.0 or higher for full stability.
The controller supports canary annotations out of the box: there is no runtime toggle, no Helm value such as controller.config.enable-canary, and no equivalent ConfigMap key.
Backend Canary
You can re-use an existing Helm chart and deploy the canary release with its own canary Ingress.
helm install myapp-stable ./myapp -f values-stable.yaml
helm install myapp-canary ./myapp -f values-canary.yaml
Here is an example of the resulting canary Ingress, with the annotations set from values-canary.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-canary
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "20"
    # Optional: route to canary by header
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-by-header-value: "always"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-v2
                port:
                  number: 80
- nginx.ingress.kubernetes.io/canary: "true" signals this Ingress is a canary.
- nginx.ingress.kubernetes.io/canary-weight: "20" means 20% of matching traffic goes to the canary service.
- nginx.ingress.kubernetes.io/canary-by-header + ...-value allow routing to the canary based on a header rather than by percentage.
After you apply both Ingresses (stable + canary), the controller merges them internally and splits traffic accordingly. You can test by accessing example.com: across repeated requests, roughly 20% should hit the canary service. If you set the header X-Canary: always, the request is forced to the canary, but only if header-based routing is enabled.
Ensure both Ingress objects use the same host (and path) so that the controller sees them as alternative backends of the same "virtual host".
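For reference, the matching primary (non-canary) Ingress for the example above could look like the following sketch; the name app-stable and the backend service app-v1 are assumptions.

# Hypothetical primary Ingress: same host and path as the canary Ingress,
# but without canary annotations, pointing at the stable service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-stable         # assumed name
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-v1   # assumed stable service
                port:
                  number: 80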
Only one canary Ingress per host/path is typically supported.
Some recent versions had reports where canary ingress didn't show up in nginx.conf, so you must verify your controller version supports the annotations.
If you use CRD-based controllers (non-ingress-nginx), the approach may differ (e.g., using VirtualServer and splits).
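For instance, the NGINX Inc. controller expresses weighted canary traffic through a VirtualServer resource with splits rather than annotations. The following is a rough sketch: resource and service names are assumptions, and the exact schema should be checked against your controller version.

# Hypothetical VirtualServer splitting traffic 80/20 between stable and canary.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: myapp              # assumed name
spec:
  host: example.com
  upstreams:
    - name: stable
      service: app-v1      # assumed stable Service
      port: 80
    - name: canary
      service: app-v2      # assumed canary Service
      port: 80
  routes:
    - path: /
      splits:
        - weight: 80
          action:
            pass: stable
        - weight: 20
          action:
            pass: canary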
Backend Canary with Frontend Ingress
The above applies when you access the application directly through the Ingress Controller. But what if you have a frontend service, and the Ingress routes traffic both to the frontend itself on / and to the backend on /api? Things become more complicated here.
The NGINX Ingress Controller merges all Ingress resources within the same namespace and for the same host into a single NGINX configuration. The merge happens by combining spec.rules and generating one large server {} block per host.
If you have multiple Ingresses that share the same namespace and spec.rules[].host then they will all be merged together for that host.
A canary Ingress works by attaching additional routing rules to an existing (primary) Ingress for the same host and path.
Specifically, when you add:
nginx.ingress.kubernetes.io/canary: "true"
The controller looks for an existing non-canary Ingress with the same host and the same path (and pathType) in the same namespace. Then, NGINX dynamically adds a new upstream to that route (e.g. api-canary-8080) and applies canary logic (weight, header, cookie).
The annotations nginx.ingress.kubernetes.io/canary and canary-weight are global per location, not per-Ingress!
Helm Chart values for two ingress declarations:
ingressStable:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
  hosts:
    application-external:
      host: application.stage.domain.world
      paths:
        - path: /api
          pathType: Prefix
          backend:
            service:
              name: application-api
              port:
                number: 8080
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - application.stage.domain.world
      secretName: wildcard-stage-tls

ingressCanary:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  hosts:
    application-external:
      host: application.stage.domain.world
      paths:
        - path: /api
          pathType: Prefix
          backend:
            service:
              name: application-api-canary
              port:
                number: 8080
  tls:
    - hosts:
        - application.stage.domain.world
      secretName: wildcard-stage-tls
The ingress-nginx controller merges Ingresses not only by host but by (namespace + serviceName + port) — by backend identity.
When multiple Ingresses in the same namespace reference the same service backend, even if they have different hosts, the controller will merge them into a single "backend" definition in memory.
backend (api-8080)
├── endpoints:
│   ├── podA:8080 (weight=80)
│   └── podB:8080 (weight=20, canary=true)
└── used_by:
    ├── domain1.com
    └── domain2.com
Key merging dimensions:
| Dimension | Merge Behavior |
|---|---|
| namespace | All Ingresses within the same namespace are combined |
| ingressClass | Only Ingresses with the same ingress class are processed by a given controller |
| host | Rules for the same host are combined into a single NGINX server block |
| path | Rules under the same host and path are merged as upstream backends |
The “canary” annotation affects per-host, per-path matching.
This implies that all frontend applications that share the same backend service will be subject to the canary split, even if the canary was configured for only one of them.
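If you need the split to apply to only one frontend, one workaround that follows from this merging behavior is to give the other frontends their own backend Service names (selecting the same pods), so the backend identities no longer match. A minimal sketch, assuming the stable backend pods carry the label app: application-api and a second frontend should stay on stable only:

# Hypothetical duplicate Service with a different name but the same selector;
# the second frontend's Ingress references this backend identity instead of
# application-api, so it is not affected by the canary Ingress.
apiVersion: v1
kind: Service
metadata:
  name: application-api-frontend-b   # assumed name
  namespace: prod                    # assumed namespace
spec:
  selector:
    app: application-api             # assumed labels of the stable pods
  ports:
    - port: 8080
      targetPort: 8080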
Check Canary Using Bash
#!/usr/bin/env bash
URL="https://example.com/your-endpoint"

while true; do
  start=$(date +%s%3N)   # ms timestamp (GNU date)
  # -s silent, -S show errors, -w format, -o /dev/null drop body
  http_code=$(curl -sS -o /dev/null -w "%{http_code}" "$URL")
  end=$(date +%s%3N)
  elapsed_ms=$((end - start))
  printf '%s status=%s latency_ms=%d\n' "$(date -Iseconds)" "$http_code" "$elapsed_ms"
  sleep 1
done
In parallel, tail the Ingress controller logs and filter for the canary backend:
kubectl -n ingress-nginx logs -f -l app.kubernetes.io/name=ingress-nginx | grep canary