Benefits of Kubernetes: Streamlining Production and Operations
Learning the Developments of Kubernetes
I recently had the pleasure of attending KubeCon / Cloud Native Con North America in Austin and learning about many of the new developments in the exciting world of Kubernetes. This was a great opportunity to broaden my horizons and learn from peers and leaders in the industry. On top of many great announcements, the most interesting part of the conference for me was the panel with Kelsey Hightower regarding continuous integration and continuous delivery (CI/CD) pipelines.
As a member of the operations team at Jungle Disk, when everything is running smoothly behind the scenes, my primary focus is to leverage technologies to make our developers’ lives easier. This in turn gives them more time to focus on their work and less time waiting on artificial bottlenecks between writing code and testing it. While we already use tools like Jenkins to automate some tasks and Ansible to document instructions for physical machines and cloud servers, there is still manual work needed between our developers writing code and being able to see that code running somewhere other than their own computers. At the panel, Kelsey Hightower said, “if you’re using kubectl to deploy from your laptop to production, you’re missing a few steps.” He elaborated, “if you’re doing it right, nobody should even know you’re using Kubernetes.” This comment, while innocent enough on its own, was quite profound for me.
Using voice commands to build a Kubernetes cluster
In the keynote, he demoed a few fun tricks, such as using voice commands on his phone to build a cluster on demand and scale it. He also demoed an example of a basic build pipeline with a simple “hello world” app written in Go. Making a small change to this simple app and committing it kicked off a multi-stage build process that built his app and deployed it directly to his Kubernetes cluster, based on conditions like which repo was pushed and which branch it was pushed to. I was expecting to learn a fair bit about popular use cases for Kubernetes from this conference, but this demo had me genuinely excited. After that day at the conference, all I could think about was getting back to my hotel room so I could play with build triggers and dynamically create workloads without ever touching kubectl.
The initial step in getting the process started is, of course, to fully understand what’s happening. I pulled up the staging GitHub repo from the demo to try to understand how the process works from beginning to end. The example build pipeline starts when a commit is made to GitHub, which fires a webhook set up in his Google Cloud account to notify Google that a watched branch has been updated. This kicks off a build trigger, which looks for a file called either ‘Dockerfile’ or ‘cloudbuild.yaml’ for instructions (in this case, a cloudbuild.yaml file). The cloudbuild.yaml file then provides step-by-step instructions: build a Docker image, decrypt a file encrypted with KMS keys, generate a kubeconfig (so the build can talk to the existing cluster), patch the existing deployment using ‘kubectl patch’ with the commit SHA of the newly built image, and finally commit it all back to an infrastructure repo in GitHub.
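As a rough sketch, those steps might look something like the cloudbuild.yaml below. This is not the demo’s actual file: the image name, encrypted file, keyring, cluster name, and zone are all placeholders I chose for illustration.

```yaml
steps:
  # Build the application image, tagged with the commit SHA of the push.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/hello-world:$COMMIT_SHA', '.']

  # Decrypt a deploy credential that was encrypted with Cloud KMS.
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['kms', 'decrypt',
           '--ciphertext-file=deploy-key.enc',
           '--plaintext-file=deploy-key',
           '--location=global', '--keyring=pipeline', '--key=deploy']

  # Patch the running deployment to the freshly built image. The kubectl
  # builder generates a kubeconfig for the named cluster from these env vars.
  - name: 'gcr.io/cloud-builders/kubectl'
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=staging'
    args:
      - 'patch'
      - 'deployment'
      - 'hello-world'
      - '--patch'
      - '{"spec":{"template":{"spec":{"containers":[{"name":"hello-world","image":"gcr.io/$PROJECT_ID/hello-world:$COMMIT_SHA"}]}}}}'

  # (The final step from the demo, committing the patched manifest back to
  # an infrastructure repo, is omitted from this sketch.)

images:
  - 'gcr.io/$PROJECT_ID/hello-world:$COMMIT_SHA'
```

The built-in `$COMMIT_SHA` substitution is what ties the deployed image back to the exact commit that triggered the build.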
That final commit triggers a second build job, which does a recursive apply on the directory where the previous deployment manifests live. This updates the existing deployment, causing Kubernetes to replace the old pods with new pods running the Docker container with your updated app. With an appropriately defined service and ingress controller in the Kubernetes deployment, the app was accessible on the Web within 60 seconds of the push.
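The second build job can be much shorter. A hedged sketch, assuming a hypothetical cluster named `staging` in `us-central1-a` and manifests checked into a `deployments/` directory of the infrastructure repo:

```yaml
steps:
  # Recursively apply every manifest under deployments/. Kubernetes notices
  # the changed image reference and rolls old pods over to new ones.
  - name: 'gcr.io/cloud-builders/kubectl'
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=staging'
    args: ['apply', '-R', '-f', 'deployments/']
```

Because `kubectl apply` is declarative, rerunning this job is harmless when nothing has changed, which makes it a good fit for a trigger that fires on every push.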