So there’s this tool called Helm, which describes itself as “The package manager for Kubernetes”, and if you’re anything like me, it can take a while to wrap your head around what that actually means in practice.
Let’s dig into what Helm does in practical terms and find out what value it can bring to anyone using Kubernetes to deploy applications.
In short, Helm helps you manage complex application deployment stacks inside Kubernetes. It helps you keep a configuration repeatable while still allowing you to customize each instance of it. Helm is often compared to package managers like APT or YUM, which exist inside Linux distributions. Helm does a similar job to those tools, in that it uses a repository to pull packages (called Charts, in Helm lingo) and deploys them (albeit with the help of a server-side component called Tiller that exists in a Kubernetes cluster). Once you go into a little more detail, you’ll notice that this is not a 1:1 comparison, but it is a starting point.
Fair warning: you’ll need to have basic knowledge of Kubernetes API concepts to fully grasp what value Helm really brings.
For Packaging & Dependency Management
To understand why Helm is actually useful, we need to think about what deploying an end-to-end application stack looks like in Kubernetes. Let’s take the example of a typical 3-tier web app that you’ve containerized. Chances are you’ll need at least these components in your deployment:
- Deployments / Pods
- Services / Ingress
- ConfigMaps / Secrets
- Volumes / Persistent Volume Claims
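To make that concrete, here is a minimal sketch of just two of those components, a Deployment and a Service for a hypothetical web tier. The names, labels and image are placeholders for illustration, not from a real chart:

```yaml
# Hypothetical Deployment for a web tier
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.17
          ports:
            - containerPort: 80
---
# Hypothetical Service exposing that Deployment
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

And that’s before you’ve added ConfigMaps, Secrets, Ingress rules or storage.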
That’s a lot of components to have in a single YAML file! Best practice dictates that you split (nearly) every component into its own YAML file. But then you have a slew of issues: juggling the different YAML files, their variables’ values, and the dependencies between the objects.
That’s where Kubernetes’ sidekick, Helm, and its Chart mechanism come in. Helm makes your life easier by packaging all of those YAML files into a single package, called a Chart.
If you decide to go down the path of breaking down your app into microservices, which can be beneficial in many cases from an organizational efficiency point of view, you could end up with a ton of components and YAML files to deploy. A tool like Helm will quickly become a necessity in that scenario.
Helm manages each Chart as an atomic item. For example, you can install a WordPress Chart (like the one we’ll look at below), which includes many YAML files but is managed as a single entity.
Helm allows you to use Charts in a way that makes upgrades, rollbacks and versioning of a complex Kubernetes application a breeze with a few simple commands.
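As a sketch of that workflow, assuming a Helm v2 client already initialized against a cluster (the release name and values below are illustrative):

```
# Install a chart as a named release
helm install stable/wordpress --name my-blog

# Upgrade the release with a changed value
helm upgrade my-blog stable/wordpress --set wordpressBlogName="My Blog"

# Inspect the release history, then roll back to a previous revision
helm history my-blog
helm rollback my-blog 1
```

Every upgrade creates a new numbered revision, which is what makes rollbacks a one-liner.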
If you want to see Helm in action, here’s a great introduction video that walks you through some basic Helm demos and use cases:
Side note: the CNCF YouTube channel is a great resource; there are plenty of interesting Kubernetes-related videos covering a wide range of topics from past KubeCon conferences.
Also, when deploying apps you’ll often need to deploy more than one stack of the same app, and managing multiple deployments’ YAML files can quickly become challenging. Helm can help here too: it allows you to keep your YAML files “dry”, meaning they are free of hardcoded values and only contain variables pointing to values stored somewhere else, which get injected dynamically. That “somewhere else” is Helm’s values.yaml file. This approach lets you maintain a single set of “dry” YAML files for your whole application that can be deployed multiple times, with each deployment having its own values.yaml file.
The package that contains your Kubernetes YAML files and the values file is what’s referred to as a Helm Chart. As in this example, you can build your own Charts, or you can use the public registry of Helm Charts, called KubeApps Hub. By the way, you can build your own private Helm Chart repositories as well.
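If you want to try building your own, the Helm CLI will scaffold a chart for you (the chart name below is made up):

```
helm create mychart      # scaffold a new chart directory with sample templates
helm lint mychart        # check the chart for common issues
helm package mychart     # bundle it into a versioned .tgz archive
helm install ./mychart   # or install it straight from the directory
```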
Real world use cases for Helm
Helm Charts can vary greatly in complexity: they can contain anything from a simple WordPress installation (a LAMP stack) to a full-blown OpenStack distribution built on containers (SAP Converged Cloud).
If you want to start with a full-blown, complex, real-world example of a Helm Chart, here’s a presentation from OpenStack Summit 2017 where SAP walks you through their experience with Helm and how it was used to package Converged Cloud:
But in case you’re here for more of a light read, let’s explore what the internals of a much simpler WordPress Chart look like (reference: https://github.com/helm/helm/blob/master/docs/charts.md):
```
wordpress/
  Chart.yaml          # A YAML file containing information about the chart
  LICENSE             # OPTIONAL: A plain text file containing the license for the chart
  README.md           # OPTIONAL: A human-readable README file
  requirements.yaml   # OPTIONAL: A YAML file listing dependencies for the chart
  values.yaml         # The default configuration values for this chart
  charts/             # A directory containing any charts upon which this chart depends
  templates/          # A directory of templates that, when combined with values,
                      # will generate valid Kubernetes manifest files
  templates/NOTES.txt # OPTIONAL: A plain text file containing short usage notes
```
For more details on the structure and contents of each component in a Helm Chart, take a look at this page from Helm’s official documentation.
Alternatives to Helm
There are definitely alternatives to Helm, but none nearly as popular for this specific use case. HashiCorp’s Terraform, for example, has a Kubernetes provider and could technically achieve the same result, but you wouldn’t benefit from the standardized Helm Chart format, which is far more common in the community for sharing bundles of YAML files. Not to mention that Helm has a closer relationship to Kubernetes than Terraform ever could, since a significant portion of Helm’s code base comes from Google, just like Kubernetes. Also, weirdly enough, there is a recently released Helm provider for Terraform, which basically acts as a Terraform wrapper for Helm. To me that seems a little redundant, but Terraform has surprised me in the past with its usefulness, so I might be wrong.
There’s also a tool called Kustomize, which is another way to customize Kubernetes YAML files at deployment time and make every one of those deployments unique, with its own values. You can even apply changes to your YAML files in layers by “patching” them. It looks like a very cool tool and its GitHub page is worth a read. Kustomize targets a slightly different but related use case, and some would say you could use both Kustomize and Helm in some circumstances.
Obviously, there is no one-size-fits-all solution for complex problems like managing Kubernetes deployments, so always be diligent in choosing which tool or combination of tools is right for you. For the time being, however, Helm is very popular, if not the most popular way to deploy and manage packages that contain multiple YAML files. After all, it is one of the Cloud Native Computing Foundation’s “Incubating” projects, so it is sure to garner plenty of attention in the coming months and years.
Here’s the important part…
There are multiple tools that get the job done, but one crucial thing to keep in mind is that the real benefit of these tools comes when you integrate them into an automated CI/CD pipeline. If all you do is go from one CLI tool like kubectl apply to another like helm install, you might feel a little cooler, but your CFO might not agree just yet.
In other words, deploying and updating production code should never involve running those CLI commands by hand; it should be automated, perhaps using a CI/CD tool like Spinnaker, Argo, Concourse or Jenkins, or even a very simple tool like Keel. If you’re going to use Helm, use it right!
Uncharted waters – How to get started with Helm
Let’s walk through the process of setting up Helm (and its server-side component, Tiller) on a new Kubernetes cluster. There are only three short steps to get started:
1. Create a Service Account in the Kubernetes cluster
Create a YAML file called ServiceAccount.yml with this content:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
```
Then, apply it to your Kubernetes cluster:
```
kubectl apply -f ServiceAccount.yml
```
2. Bind the Service Account to a Role AKA create a Cluster Role Binding
Create a YAML file called ClusterRoleBinding.yml with this content:
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller-clusterrolebinding
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
```

Note that the roleRef’s apiGroup must be rbac.authorization.k8s.io; an empty string there will be rejected by the API server.
Then, apply it to your Kubernetes cluster:
```
kubectl apply -f ClusterRoleBinding.yml
```
3. Initialize the Helm CLI client
Make sure your kubectl context is set to the right cluster, because Helm uses that same authentication mechanism.
```
helm init --service-account tiller
```
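Once init completes, you can confirm that both the client and the Tiller server respond:

```
helm version
```

If the server version comes back, Tiller is deployed and reachable.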
That’s it, you’re ready to start Helming!
You’ll notice the similarities between helm and yum/apt. You can try these commands and more:
```
helm search <keyword>   # search repositories for charts
helm install <chart>    # deploy a chart as a new release
helm list               # list deployed releases
helm delete <release>   # remove a release
```
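For example, to point an install at your own values file (the file name here is hypothetical):

```
helm install stable/wordpress -f my-values.yaml
```

Anything not overridden in my-values.yaml falls back to the chart’s default values.yaml.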
Embedding Tiller in Pivotal Container Service clusters
Staying true to my VMware roots, here is a quick rundown of how to simplify this process if you use Pivotal Container Service, by configuring Add-Ons, which are essentially post-cluster-deployment YAMLs. In layman’s terms, that means that every time PKS deploys a Kubernetes cluster using this Plan, it will always apply these YAML files before it marks the cluster creation process as complete.
Here we go…
In the PCF OpsMan interface, go to the PKS tile, click on one of the PKS Plans and scroll down to the section entitled “Add-Ons – Use with caution”.
And while we’re on the subject of Pivotal Container Service (PKS), the included enterprise-grade container registry, VMware Harbor, can also be used as a Helm Chart repository as well as a Docker registry. Convenient, right?
In case you want to learn more about PKS, here’s a link to one of my earlier articles, The Beginner’s Guide to Pivotal Container Service (PKS).
Plotting the course – What’s next for Helm?
In the upcoming Helm v3, Helm will make use of Kubernetes Custom Resource Definitions (CRDs), and installing the Tiller component will no longer be necessary, since CRDs are a built-in mechanism that any Kubernetes cluster can use to extend functionality. So there will be one less step to using Helm in the near future, and it should be much more tightly integrated with native Kubernetes functionality as well.
And by the way, the trend seems to be that Kubernetes CRDs will become increasingly important in future Kubernetes releases, so you might as well familiarize yourself with them now.
Helm 3 is bound to come with a bunch of new features, and if you’re interested in learning more about the history and the future of Helm, here’s an interesting article about it: A First Look at the Helm 3 Plan.
Gangway – The Bottom Line
The Kubernetes ecosystem is growing at a frantic pace and new tools emerge every day; Helm is a jewel among them. Helm is a must-learn for any Kubernetes administrator or developer, and it stands on its own as a great tool. But keep in mind that tools like Helm are meant to be used in an automated environment, and should be part of a complete automation and deployment strategy.