Today’s post was originally published on Jade Meskill’s personal blog and is being shared here.
At Octoblu, we deploy very frequently and we’re tired of our users seeing the occasional blip when a new version is put into production.
Though we’re using Amazon OpsWorks to manage our infrastructure more easily, our updates can take a while: dependencies have to be installed before the service restarts – not a great experience.
Enter Kubernetes.
We knew that moving to an immutable infrastructure approach would let us deploy our apps, which range from extremely simple web services to complex near-real-time messaging systems, more quickly and easily.
Containerization is the future of app deployment, but managing and scaling a bunch of Docker instances, along with all of their port mappings, is not a simple proposition.
Kubernetes simplified that part of our deployment strategy. However, we still had a problem: while Kubernetes was spinning up new versions of our Docker instances, we could enter a state where old and new versions were serving traffic at the same time. If we shut down the old before bringing up the new, we would instead have a brief (sometimes not so brief) period of downtime.
Blue/Green Deploys
I first read about blue/green deploys in Martin Fowler’s excellent article BlueGreen Deployment; it’s a simple but powerful concept. We started to build out a way to do this in Kubernetes. After some complicated attempts, we came up with a simple idea: use an Amazon ELB as the router. Kubernetes handles the complexity of routing a request to the appropriate minion by listening on a given port on every minion, which makes ELB load balancing a piece of cake: have the ELB listen on ports 80 and 443, then forward requests to the Kubernetes port on all minions.
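For a sense of what that wiring looks like (a sketch only – the load balancer name and node port here are placeholders, not our real values), pointing a classic ELB at the port a Kubernetes service exposes on every minion is just a listener definition:

# Sketch: forward the ELB's public port to the Kubernetes service port
# exposed on every minion. "my-app-elb" and 30080 are placeholder values.
aws elb create-load-balancer-listeners --load-balancer-name my-app-elb \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=30080
# Port 443 works the same way, with an SSLCertificateId added to the listener.

The full deploy script below does exactly this, just with our real load balancer and the newly active color’s port.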
Blue or Green?
The next problem was figuring out whether blue or green is currently active. Another simple idea: store a blue port and a green port as tags on the ELB, then look at the ELB’s current listener configuration to see which one is live. There’s no need to store the value somewhere that might drift out of date.
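Setting up those tags is a one-time step; it looks roughly like this (a sketch – the port numbers are placeholders):

# Sketch: record the blue and green service ports as tags on the ELB (placeholder values).
aws elb add-tags --load-balancer-names triggers-octoblu-com --tags Key=blue,Value=30080 Key=green,Value=30081

# The live color is whichever tagged port the ELB's listener currently targets:
aws elb describe-load-balancers --load-balancer-name triggers-octoblu-com | jq '.LoadBalancerDescriptions[0].ListenerDescriptions[0].Listener.InstancePort'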
Putting it all together
We currently use a combination of Travis CI and Amazon CodeDeploy to kick off the blue/green deploy process.
The following is part of a script that runs on our Trigger Service deploy. You can check out the code on GitHub if you want to see how it all works together.
I’ve added some annotations to help explain what is happening.
#!/bin/bash

SCRIPT_DIR=`dirname $0`
DISTRIBUTION_DIR=`dirname $SCRIPT_DIR`

export PATH=/usr/local/bin:$PATH
export AWS_DEFAULT_REGION=us-west-2

# Query ELB to get the blue port label
BLUE_PORT=`aws elb describe-tags --load-balancer-name triggers-octoblu-com | jq '.TagDescriptions[0].Tags[] | select(.Key == "blue") | .Value | tonumber'`

# Query ELB to get the green port label
GREEN_PORT=`aws elb describe-tags --load-balancer-name triggers-octoblu-com | jq '.TagDescriptions[0].Tags[] | select(.Key == "green") | .Value | tonumber'`

# Query ELB to figure out the current port
OLD_PORT=`aws elb describe-load-balancers --load-balancer-name triggers-octoblu-com | jq '.LoadBalancerDescriptions[0].ListenerDescriptions[0].Listener.InstancePort'`

# figure out if the new color is blue or green
NEW_COLOR=blue
NEW_PORT=${BLUE_PORT}

if [ "${OLD_PORT}" == "${BLUE_PORT}" ]; then
  NEW_COLOR=green
  NEW_PORT=${GREEN_PORT}
fi

export BLUE_PORT GREEN_PORT OLD_PORT NEW_COLOR NEW_PORT

# crazy template stuff, don't ask.
#
# Some people, when confronted with a problem,
# think "I know, I'll use regular expressions."
# Now they have two problems.
#   -- jwz
REPLACE_REGEX='s;(\\*)(\$([a-zA-Z_][a-zA-Z_0-9]*)|\$\{([a-zA-Z_][a-zA-Z_0-9]*)\})?;substr($1,0,int(length($1)/2)).($2&&length($1)%2?$2:$ENV{$3||$4});eg'

perl -pe $REPLACE_REGEX $SCRIPT_DIR/triggers-service-blue-service.yaml.tmpl > $SCRIPT_DIR/triggers-service-blue-service.yaml
perl -pe $REPLACE_REGEX $SCRIPT_DIR/triggers-service-green-service.yaml.tmpl > $SCRIPT_DIR/triggers-service-green-service.yaml

# Always create both services
kubectl delete -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-service.yaml
kubectl create -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-service.yaml

# destroy the old version of the new color
kubectl stop rc -lname=triggers-service-${NEW_COLOR}
kubectl delete rc -lname=triggers-service-${NEW_COLOR}
kubectl delete pods -lname=triggers-service-${NEW_COLOR}

kubectl create -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-controller.yaml

# wait for Kubernetes to bring up the instances properly
x=0
while [ "$x" -lt 20 -a -z "$KUBE_STATUS" ]; do
  x=$((x+1))
  sleep 10
  echo "Checking kubectl status, attempt ${x}..."
  KUBE_STATUS=`kubectl get pod -o json -lname=triggers-service-${NEW_COLOR} | jq ".items[].currentState.info[\"triggers-service-${NEW_COLOR}\"].ready" | uniq | grep true`
done

if [ -z "$KUBE_STATUS" ]; then
  echo "triggers-service-${NEW_COLOR} is not ready, giving up."
  exit 1
fi

# remove the port mappings on the ELB
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 80
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 443

# create new port mappings
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=${NEW_PORT}
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com --listeners Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=${NEW_PORT},SSLCertificateId=arn:aws:iam::822069890720:server-certificate/startinter.octoblu.com

# reconfigure the health check
aws elb configure-health-check --load-balancer-name triggers-octoblu-com --health-check Target=HTTP:${NEW_PORT}/healthcheck,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
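The “crazy template stuff” above is just environment-variable substitution on the service templates. In isolation it behaves roughly like this (the template content and port value here are a made-up example, not our real templates):

# Example only: render a tiny, made-up template that references ${NEW_PORT}.
export NEW_PORT=30081
echo 'port: ${NEW_PORT}' > /tmp/example.tmpl
perl -pe 's;(\\*)(\$([a-zA-Z_][a-zA-Z_0-9]*)|\$\{([a-zA-Z_][a-zA-Z_0-9]*)\})?;substr($1,0,int(length($1)/2)).($2&&length($1)%2?$2:$ENV{$3||$4});eg' /tmp/example.tmpl
# prints: port: 30081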
Oops happens!
Sometimes Peter makes a mistake and we have to quickly roll back to a prior version. If the prior version is still running on the off cluster, rolling back is as simple as re-mapping the ELB to forward to the old ports. Sometimes, though, Peter tries to fix his mistake with a new deploy, and now we have a real mess.
Because this happened more than once, we created oops. Oops lets us instantly roll back to the off cluster by executing oops-rollback, or quickly re-deploy a previous version with oops-deploy git-commit.
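Under the hood, a rollback is essentially just re-pointing the ELB listeners at the old color’s port. A minimal sketch of that idea (not the actual oops implementation; the port value is a placeholder):

# Sketch only -- not the actual oops code. Re-point the ELB at the old color's port.
OLD_PORT=30080   # the port of the previously live color (placeholder value)

aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 80
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=${OLD_PORT}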
We add an .oopsrc to all our apps that looks something like this:
{
  "elb-name": "triggers-octoblu-com",
  "application-name": "triggers-service",
  "deployment-group": "master",
  "s3-bucket": "octoblu-deploy"
}
oops list will show us all available deployments.
We are always looking for ways to get better results, so if you have suggestions, let us know.