Tracing its lineage to the saying "canary in a coal mine," the canary deployment/release methodology is an incremental release strategy focused on safety. If the canary does not pass, the deployment does not continue or is rolled back. Taking a jog down memory lane, like Kubernetes the Hard Way, a few years ago a canary deployment in Kubernetes was quite the undertaking. Today, there is certainly Continuous Delivery tooling focused on canary deployments with varying levels of sophistication, such as Kayenta or Harness CD.
From a developer experience [DX] standpoint, though, if developers had to author or execute their own canary deployments, there is certainly a learning curve. From a DevOps or platform engineering perspective, each tool has a different canary implementation. The beauty of Shipa is that you can execute a canary release on Kubernetes using multiple tools, and the experience is the same with ArgoCD, Terraform, GitHub Actions, and so on. In this example, we will execute a canary deployment leveraging the Shipa UI and then ArgoCD.
Shipa UI Canary Deployment
If you are testing out Shipa for the first time, a quick way to create a canary deployment is to leverage the UI. The first step is to connect a Kubernetes cluster to Shipa and then create a new Framework and Application to deploy.
Here, a generic Framework called “canary-framework” was created. Now you can create an Application and execute a canary immediately.
Applications -> + Create Application.
Framework: canary-framework [or anything you have created].
Deployment Source: Public registry
Image [Test with Sample Image]: docker.io/shipasoftware/hello-shipa:latest
Then click Deploy for the initial release. Since canaries are incremental/subsequent releases for a running application, the initial release will create a running application.
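If you prefer a terminal over the UI, the same initial release can be sketched with the Shipa CLI. This is a hedged sketch, not an exact recipe: the application name `ui-canary` and the exact flag spellings are assumptions based on the UI fields above, so check `shipa app create --help` and `shipa app deploy --help` for your version.

```shell
# Sketch, assuming a configured Shipa CLI: create the application
# bound to the Framework, then deploy the sample image as the
# initial (non-canary) release.
shipa app create ui-canary --framework canary-framework
shipa app deploy -a ui-canary -i docker.io/shipasoftware/hello-shipa:latest
```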
Once deployed, for subsequent deployments, you can enable a Canary from the UI.
Applications -> ui-canary -> + Deploy
Now the "Create canary deployment" option will be available. Check the box and click Next.
You can fill out some basics about the canary deployment:
Number of Steps: 2
Step Interval: 30
Step Interval Unit: Seconds
Then click Deploy.
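The form above maps to flags on the Shipa CLI's deploy command. A hedged sketch of the equivalent, assuming the `--steps`, `--step-weight`, and `--step-interval` flags from Shipa's canary documentation (the `:v2` image tag is a hypothetical subsequent release):

```shell
# Sketch, assuming a configured Shipa CLI: roll out a new image as a
# canary in 2 steps, waiting 30 seconds between traffic shifts.
shipa app deploy -a ui-canary -i docker.io/shipasoftware/hello-shipa:v2 \
  --steps=2 \
  --step-weight=50 \
  --step-interval=30s
```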
You can watch the Canary Rollout occur with kubectl.
kubectl get pods -Aw
To show the similarities across the UI, CLI, and providers with Shipa, you can leverage a CD tool like ArgoCD to execute a Shipa canary deployment.
ArgoCD Canary Deployment with Shipa
The Argo Project does have a project called Argo Rollouts, which helps enable canary deployments in ArgoCD. However, Argo Rollouts is not portable to other CI/CD solutions such as GitHub Actions. Leveraging Shipa is a good fit for enabling canary functionality across a heterogeneous CI/CD stack.
Assuming you have wired ArgoCD and Shipa together via Crossplane, executing the same canary as we executed in the UI above is straightforward. You can add this sample repository to ArgoCD to kick off the canary process.
In a similar fashion, we will configure ArgoCD to execute the following Kubernetes manifest. The canary functionality is enabled by setting steps, step-weight, and step-interval.
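The manifest in the sample repository is the source of truth, but as a rough sketch, it takes the shape below. This assumes a Shipa Crossplane provider resource; the `apiVersion`, `kind`, and field names here are assumptions for illustration, so verify them against the provider's CRDs and the repository itself.

```yaml
# Sketch of a Shipa-via-Crossplane deploy resource with canary settings.
# Field names are assumed; consult the provider CRD for your version.
apiVersion: shipa.crossplane.io/v1alpha1
kind: AppDeploy
metadata:
  name: argo-canary-deploy
spec:
  forProvider:
    app: argo-canary
    deploy:
      image: docker.io/shipasoftware/hello-shipa:latest
      canarySettings:
        steps: 2          # number of incremental traffic-shift steps
        stepWeight: 50    # percentage of traffic shifted per step
        stepInterval: 30  # seconds between steps
```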
Back in ArgoCD, you can wire up a new Argo Application.
Repository URL: https://github.com/ravilach/canary-gitops.git
Cluster URL: https://kubernetes.default.svc
Click Create and you are ready for Argo to sync the manifest. Click on the Application Name and Sync the resource [if auto-sync is off].
After ArgoCD executes, you will have run a canary deployment through ArgoCD without Argo Rollouts.
Congratulations on your canary deployments! This is just the tip of the iceberg of Shipa's functionality.
Shipa, Your Partner in Developer Experience and Engineering Efficiency
There are certainly a lot of choices today when it comes to enabling a canary deployment with Kubernetes. Shipa is laser-focused on developer experience, allowing anyone to create a canary deployment if they choose to. From an engineering efficiency standpoint, deploying with and/or against Shipa allows common-sense abstractions and policies to be enforced. The mighty canary is just one avenue for workloads to be deployed to Kubernetes.