Setting Up Canary Deployments with Argo Rollouts
In modern software development, deploying applications to production environments can be a risky endeavor. Traditional deployment strategies often involve pushing changes to all users simultaneously, which can lead to widespread issues if something goes wrong. To mitigate this risk, organizations have adopted various deployment strategies such as blue-green deployments, rolling updates, and canary deployments.
Among these strategies, canary deployments have gained significant popularity due to their ability to minimize risk by gradually rolling out changes to a small subset of users before a full-scale deployment. This approach allows teams to detect issues early and roll back changes quickly if something goes awry.
One tool that simplifies the implementation of canary deployments in Kubernetes environments is Argo Rollouts. Argo Rollouts is an open-source project designed to provide advanced deployment capabilities for Kubernetes, including canary and blue-green deployments. In this article, we will delve into the process of setting up canary deployments using Argo Rollouts.
# What Are Canary Deployments?
Canary deployments are a deployment strategy where a new version of an application is released to a small subset of users before a full rollout. The term “canary” comes from the historical practice of taking a canary into a coal mine to detect toxic gases; if the canary died, miners knew it was unsafe to enter.
In the context of software deployment, a canary deployment acts as an early warning system. By exposing the new version of the application to a small percentage of users, teams can monitor for issues without affecting the entire user base. If the canary deployment is successful, the rollout continues; if issues are detected, the deployment is halted or rolled back.
# Benefits of Canary Deployments
- Risk Reduction: Canary deployments minimize the risk of deploying faulty code by testing it with a small audience first.
- Early Detection of Issues: Problems can be identified and addressed before they affect all users.
- Gradual Rollout: The deployment is rolled out incrementally, allowing for better control over the release process.
- Faster Recovery: In case of an issue, the rollback process is quicker since only a small portion of the user base is affected.
# Introduction to Argo Rollouts
Argo Rollouts is a Kubernetes operator that provides advanced deployment capabilities. It extends the functionality of the Kubernetes Deployment resource by adding features such as:
- Canary Deployments: Allows for incremental rollouts of new versions.
- Blue-Green Deployments: Enables zero-downtime deployments by running two production environments.
- Customizable Rollout Strategies: Gives teams fine-grained control over each rollout through steps, pauses, traffic weights, and metric analysis.
- Integration with Kubernetes: Argo Rollouts integrates seamlessly with Kubernetes, making it easier to manage deployments within the ecosystem.
## Why Use Argo Rollouts?
- Simplified Canary Deployments: Argo Rollouts provides a straightforward way to implement canary deployments without the need for custom scripts or complex configurations.
- Advanced Features: Offers features like automated rollbacks, traffic shifting, and integration with observability tools.
- Community Support: As an open-source project, Argo Rollouts benefits from community contributions and support.
# Setting Up Canary Deployments with Argo Rollouts
Now that we have covered the basics of canary deployments and Argo Rollouts, let’s dive into the process of setting up a canary deployment using Argo Rollouts.
## Prerequisites
Before you start, ensure you have the following:
- Kubernetes Cluster: A Kubernetes cluster (e.g., Minikube, Kind, or a cloud-based cluster like GKE, EKS, or AKS).
- kubectl: The command-line tool for interacting with your Kubernetes cluster.
- Argo Rollouts Installed: Argo Rollouts must be installed in your Kubernetes cluster.
- Prometheus (for Step 4): A Prometheus instance is needed for the metric-based analysis later in this guide; one typical installation is sketched below.
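If Prometheus is not already running in your cluster, one common way to install it is the kube-prometheus-stack Helm chart. This is only a sketch; Helm itself, the release name, and the `monitoring` namespace used here are assumptions, and any working Prometheus setup is fine:

```bash
# Add the Prometheus community chart repository and install kube-prometheus-stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace
```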
## Step 1: Install Argo Rollouts
If you haven’t already installed Argo Rollouts in your Kubernetes cluster, you can do so using the following commands:
```bash
# Create the Argo Rollouts namespace
kubectl create namespace argo-rollouts

# Apply the Argo Rollouts installation manifests
kubectl apply -n argo-rollouts -f https://raw.githubusercontent.com/argoproj/argo-rollouts/stable/manifests/install.yaml
```

After applying these commands, Argo Rollouts will be installed in your Kubernetes cluster. You can verify the installation by checking the pods in the `argo-rollouts` namespace:

```bash
kubectl get pods -n argo-rollouts
```
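The later steps also use the Argo Rollouts kubectl plugin (`kubectl argo rollouts ...`). If you don't have it yet, a typical installation on Linux looks like this; the download URL points at the standard release artifact, and the architecture and install path are assumptions to adjust for your platform:

```bash
# Download the Argo Rollouts kubectl plugin (Linux amd64 assumed)
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts

# Verify the plugin is available
kubectl argo rollouts version
```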
## Step 2: Create a Canary Deployment Configuration
To create a canary deployment using Argo Rollouts, you need to define a Rollout resource. A Rollout is similar to a Kubernetes Deployment but with additional features for canary and blue-green deployments.
Below is an example of a Rollout configuration file that implements a canary deployment strategy:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: my-canary-deployment
spec:
  replicas: 5          # several replicas so the canary weights translate into pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: <your-image-name>:<tag>
          ports:
            - containerPort: 80
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause:
            duration: 30s
        - analysis:
            templates:
              - templateName: deployment-template
        - setWeight: 40
        - pause:
            duration: 60s
        - analysis:
            templates:
              - templateName: deployment-template
        - setWeight: 60
        - pause:
            duration: 120s
        - analysis:
            templates:
              - templateName: deployment-template
        - setWeight: 80
        - pause:
            duration: 180s
        - analysis:
            templates:
              - templateName: deployment-template
```
This configuration rolls the new version out in stages, increasing the canary weight and pausing between stages so issues can be detected. After each pause, an analysis step checks metrics before proceeding; the analysis steps reference an AnalysisTemplate named `deployment-template`, which we will define in Step 4.
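The Rollout only manages pods; in practice the application is also exposed through a Service, which the ServiceMonitor in Step 4 relies on as well. A minimal sketch follows; the Service name and the port name `http` are assumptions chosen to match the labels used above and the ServiceMonitor below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # assumed name; adjust to your environment
  labels:
    app: my-app
spec:
  selector:
    app: my-app
  ports:
    - name: http          # the ServiceMonitor in Step 4 scrapes this named port
      port: 80
      targetPort: 80
```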
## Step 3: Apply the Rollout Configuration
Apply the Rollout configuration file using `kubectl`:

```bash
kubectl apply -f my-canary-deployment.yaml
```
You can monitor the progress of the rollout using the Argo Rollouts kubectl plugin:

```bash
kubectl argo rollouts get rollout my-canary-deployment -n <your-namespace>
```
This will show you the current status of the rollout, including the weights assigned to each version and any pauses or analyses in progress.
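To follow the rollout live, the plugin also supports a watch flag and a local dashboard; both are standard plugin subcommands, and the rollout name below is the one from our example:

```bash
# Stream status updates as the rollout progresses
kubectl argo rollouts get rollout my-canary-deployment -n <your-namespace> --watch

# Optionally, serve the Argo Rollouts dashboard locally and open it in a browser
kubectl argo rollouts dashboard
```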
## Step 4: Define Metrics for Analysis
Argo Rollouts uses metrics to determine whether to proceed with a rollout or abort it. In our example configuration, the analysis steps reference an AnalysisTemplate named `deployment-template` that evaluates a `request-success-rate` metric. Two pieces are needed: the application's metrics must be scraped into Prometheus (for example with a Prometheus Operator ServiceMonitor), and an AnalysisTemplate must tell Argo Rollouts how to query and evaluate them.
First, the ServiceMonitor:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service-monitor
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
    - port: http
      interval: 30s
```
This ServiceMonitor tells Prometheus to scrape the `http` port of Services labeled `app: my-app` every 30 seconds.
Next, define the AnalysisTemplate named `deployment-template` that the Rollout's analysis steps reference. It specifies the Prometheus query for the success rate and the condition that must hold for the rollout to continue; if the condition is not met, the AnalysisRun fails and Argo Rollouts aborts the rollout.
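A minimal sketch of such an AnalysisTemplate, assuming Prometheus is reachable in-cluster at `http://prometheus-k8s.monitoring:9090` (a typical kube-prometheus address; adjust to your setup). The PromQL query is only a placeholder: it assumes the application exports an `http_requests_total` counter with a `status` label, and you should substitute whatever metrics your application actually emits:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: deployment-template
spec:
  metrics:
    - name: request-success-rate
      interval: 30s
      # Require at least 95% of requests to succeed; adjust to your SLO.
      successCondition: result[0] >= 0.95
      failureLimit: 1
      provider:
        prometheus:
          address: http://prometheus-k8s.monitoring:9090   # assumed in-cluster Prometheus address
          # Placeholder query: assumes an http_requests_total counter with a `status` label.
          query: |
            sum(rate(http_requests_total{app="my-app",status=~"2.."}[5m]))
            /
            sum(rate(http_requests_total{app="my-app"}[5m]))
```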
## Step 5: Automate Rollback on Failure
Argo Rollouts rolls back automatically when analysis fails: if an AnalysisRun attached to a rollout does not meet its success condition, the rollout is aborted and the stable version is scaled back up. The per-step analysis used above already gives you this behavior at each stage. You can also run a background analysis for the entire duration of the canary by adding an `analysis` section to the canary strategy:

```yaml
strategy:
  canary:
    # Background analysis runs alongside the steps and aborts the rollout
    # as soon as the referenced template's metric fails.
    analysis:
      templates:
        - templateName: deployment-template
      startingStep: 1   # start analysis after the first setWeight step
    steps:
      - setWeight: 20
      # ...remaining steps as shown in Step 2
```

With this in place, if the `request-success-rate` metric violates the success condition at any point, Argo Rollouts automatically aborts the rollout and shifts traffic back to the previous stable version.
## Step 6: Test the Canary Deployment
To test your canary deployment, you can manually trigger a rollout by updating the image tag in the Rollout configuration. For example:
```yaml
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: <your-image-name>:<new-tag>
```
Apply this change using `kubectl apply` and observe how Argo Rollouts gradually increases the weight of the new version, pausing and analyzing at each step.
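Alternatively, the Argo Rollouts kubectl plugin can update the image without editing the manifest; the rollout and container names below match the example above:

```bash
# Update the container image and start a new canary rollout
kubectl argo rollouts set image my-canary-deployment my-container=<your-image-name>:<new-tag> -n <your-namespace>
```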
# Best Practices for Canary Deployments
- Start Small: Begin with a small percentage of traffic directed to the canary version (e.g., 10%) and gradually increase it.
- Monitor Thoroughly: Set up comprehensive monitoring and logging to detect issues early.
- Automate Rollbacks: Use automated rollback policies to quickly revert if something goes wrong.
- Test in Production: Perform canary deployments in production, but ensure they are well-monitored and have minimal impact on users.
- Document Processes: Keep detailed documentation of your canary deployment process for consistency across teams.
# Conclusion
Canary deployments are an effective strategy for reducing the risk associated with software releases. By gradually rolling out new versions to a small subset of users, teams can identify and address issues before they affect the entire user base. Argo Rollouts simplifies the implementation of canary deployments by providing a robust set of features that integrate seamlessly with Kubernetes.
By following the steps outlined in this guide, you can leverage Argo Rollouts to implement canary deployments in your own environment, ensuring safer and more reliable software releases.
# Quick Reference
To implement canary deployments with Argo Rollouts in Kubernetes, follow these structured steps:
## Prerequisites
- Kubernetes Cluster: Ensure a Kubernetes cluster is set up (e.g., Minikube, GKE).
- Argo Rollouts Installed: Install the Argo Rollouts controller and its CRDs.
- Prometheus Setup: Have Prometheus installed for metrics collection.
## Step-by-Step Guide
1. Define Deployment Configuration: Create a YAML file (e.g., `deployment.yaml`) that defines your application deployment, including service and pod configurations with appropriate labels.
2. Create an Argo Rollouts Configuration: Create another YAML file (e.g., `rollout.yaml`) specifying the rollout strategy. Include stages for gradual traffic shifting, analysis intervals, and rollback policies based on metrics.
3. Apply Configurations to the Cluster:
   ```bash
   kubectl apply -f deployment.yaml
   kubectl apply -f rollout.yaml
   ```
4. Define Metrics and Service Monitors: Create a `ServiceMonitor` so Prometheus scrapes application metrics, and an `AnalysisTemplate` so Argo Rollouts can query and evaluate them.
5. Deploy the Application and Monitor the Rollout:
   ```bash
   kubectl argo rollouts get rollout <rollout-name> -n <namespace>
   ```
   This command shows the rollout status, including weights and analysis results.
6. Automate Rollbacks (Optional): Reference analysis templates in your rollout strategy so Argo Rollouts aborts and rolls back automatically when metrics fail.
## Example Rollout Configuration
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: example-rollout
spec:
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-container
          image: your-docker-image:latest
          ports:
            - containerPort: 80
  strategy:
    canary:
      # Background analysis aborts and rolls back the rollout automatically
      # if the referenced template's metric fails.
      analysis:
        templates:
          - templateName: deployment-template
      steps:
        - setWeight: 10
        - pause:
            duration: 30s
        - setWeight: 20
        - pause:
            duration: 30s
        - setWeight: 30
        - pause:
            duration: 30s
        # Continue until reaching full weight
```
## Additional Commands
List rollouts:
```bash
kubectl argo rollouts list rollouts -n <namespace>
```
Check rollout status:
```bash
kubectl argo rollouts status <rollout-name> -n <namespace>
```
Roll back to a previous revision (if needed):
```bash
kubectl argo rollouts undo <rollout-name> -n <namespace>
```
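Two other plugin commands that are useful while a canary is in progress, both standard subcommands of the Argo Rollouts kubectl plugin:

```bash
# Resume a paused rollout and promote the canary to the next step
kubectl argo rollouts promote <rollout-name> -n <namespace>

# Abort the rollout; traffic returns to the stable version
kubectl argo rollouts abort <rollout-name> -n <namespace>
```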
## Summary
By leveraging Argo Rollouts, you can safely implement Canary Deployments in Kubernetes. This approach allows gradual exposure of new versions, automated monitoring with Prometheus, and quick rollback capabilities to minimize downtime and risks during software releases.