Spinning up Kubernetes clusters is less than half the battle

Cloud computing and containers have taught IT people a lot about the differences between pets and cattle. Pets are named and cared for. Cattle are numbered and disposable. We are told that we should treat our application instances like cattle. Give them random names, kill them when they are sick, replace them when they die, get more when we need them, and euthanize them when we don't. Infrastructure as a Service (IaaS) lets us do the same with what was formerly known as "hardware," e.g., servers, disks, and networks. More recently, Kubernetes has provided the resources and features to treat even databases, workloads traditionally thought of as persistent, like cattle. Now, people are even suggesting that we treat Kubernetes clusters like cattle.

When talking about treating Kubernetes clusters like cattle, people tend to focus on creating clusters. In fact, every article or talk I've seen about treating Kubernetes clusters like cattle focuses entirely on spinning up a Kubernetes cluster, usually discussing one or more ways to create clusters. But as anyone who has architected an application, much less a database, to behave properly when it is treated like cattle can tell you, there's a lot more to it than just "spinning it up."

If you are running Kubernetes clusters, you are typically not just running your applications there, but also core services for log forwarding, metrics collection, TLS certificate management, external DNS synchronization, etc. Now that there are so many ways to create a Kubernetes cluster, from managed services like GKE, AKS, and EKS to command-line tools like minikube, k3s, and kube-aws, one can safely say that bootstrapping your Kubernetes resources is more difficult than creating the Kubernetes cluster.

Making the impossible possible

Fortunately, there are tools to help you not only spin up a cluster, but also provision your resources in the cluster once it is created. If you use GitOps with Kubernetes, then an up-to-date record of all the resources in your cluster, not to mention their entire history, is available in a Git repository. Recreating those resources in a new Kubernetes cluster can be a matter of running kubectl apply.
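For example, a GitOps repository might contain a resource spec like the following minimal Deployment (the names and image here are illustrative); running kubectl apply -f against the repository's directory recreates each resource in a fresh cluster.

```yaml
# Illustrative minimal Deployment spec as it might be stored in a GitOps repo.
# Recreate it in a new cluster with: kubectl apply -f <repo-directory>
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-org/my-app:1.0.0
```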

GitOps on Kubernetes

Making the possible easy

But what if you aren't using GitOps? Or what if you would like to use GitOps but aren't sure how to get started? We've recently released a new version of the open source Atomist CLI that includes a command to fetch resource specs from a Kubernetes cluster and save them to local files.

$ atomist kube-fetch

The above command will use your currently configured Kubernetes credentials to fetch a default set of resources, excluding those typically managed by Kubernetes itself, remove common read-only properties populated by the Kubernetes system, and write the resulting resource specs to YAML files in the current directory. Once you have fetched the resource specs and modified them to suit your needs, you can commit them to a Git repository and fully embrace GitOps. You can use the Atomist GitOps solution to provision the current set of resources and manage deployment of new versions of your applications to the cluster, ensuring those updates are also persisted to Git.
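To make the cleanup step concrete, here is a rough sketch in Python of the kind of transformation kube-fetch performs on each fetched resource: dropping the status section and the read-only metadata properties that the Kubernetes system populates. The field list below is illustrative, not the exact set kube-fetch removes.

```python
# Illustrative sketch: strip system-populated, read-only properties from a
# fetched Kubernetes resource spec so it can be committed to Git and later
# re-applied to a cluster. The exact fields kube-fetch removes may differ.

READ_ONLY_METADATA = (
    "creationTimestamp",
    "generation",
    "resourceVersion",
    "selfLink",
    "uid",
    "managedFields",
)

def clean_spec(spec: dict) -> dict:
    """Return a copy of a resource spec with read-only properties removed."""
    # Drop the status section entirely; it is populated by the system.
    cleaned = {k: v for k, v in spec.items() if k != "status"}
    # Remove read-only metadata fields, keeping name, namespace, labels, etc.
    metadata = dict(cleaned.get("metadata", {}))
    for field in READ_ONLY_METADATA:
        metadata.pop(field, None)
    cleaned["metadata"] = metadata
    return cleaned

# A fetched spec as the Kubernetes API might return it (values illustrative).
fetched = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "my-app",
        "namespace": "default",
        "uid": "0000-1111",
        "resourceVersion": "123456",
        "creationTimestamp": "2019-01-01T00:00:00Z",
    },
    "spec": {"replicas": 2},
    "status": {"readyReplicas": 2},
}

print(clean_spec(fetched))
```

The cleaned spec retains only what is needed to recreate the resource, which is exactly what you want checked in to Git.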

The kube-fetch command takes several options allowing you to customize where and what is fetched and written, including optionally encrypting Kubernetes secret data values. See the Atomist GitOps documentation for more details.

$ atomist kube-fetch --help
Fetch resources from a Kubernetes cluster using the currently configured
Kubernetes credentials, remove system-populated properties, and save each
resource specification to a file

Options:
  --help, -h, -?   Show help
  --options-file   Path to file containing a JSON object defining options
                   selecting which resources to fetch, see API docs for
                   details on the structure of the object
  --output-dir     Directory to write spec files in, if not provided current
                   directory is used
  --output-format  File format to write spec files in, supported formats are
                   'json' and 'yaml', 'yaml' is the default
  --secret-key     Key to use to encrypt secret data values before writing to
                   file

Moving on

You can use atomist kube-fetch in a variety of ways to improve how you use Kubernetes. If you haven't moved to a GitOps flow for deploying to Kubernetes, you can use kube-fetch to bootstrap your GitOps journey. If you want to migrate your resources to a new Kubernetes cluster, either because your current cluster is in a bad state or because you want to move to a new Kubernetes provider, you can use kube-fetch to fetch all the resources you will need to recreate when migrating. If you would like to evolve your current single-cluster, multi-zone high-availability Kubernetes architecture into a multi-cluster, multi-region, extreme-availability architecture, you can use kube-fetch to replicate resources across clusters.

However you use Kubernetes, tools like atomist kube-fetch and the k8s-sdm GitOps solution will make interacting with Kubernetes easier, more secure, and more fun. Oh, and they will let you really treat your Kubernetes clusters like cattle.

An unsuspecting Kubernetes cluster