

Draft vs Gitkube vs Helm vs Ksonnet vs Metaparticle vs Skaffold

A comparison of tools that help developers build and deploy their apps on Kubernetes

TL;DR

  • Draft
    - deploy code to k8s cluster (automates build-push-deploy)
    - deploy code in draft-pack supported languages without writing dockerfile or k8s manifests
    - needs draft cli, helm cli, tiller on cluster, local docker, docker registry
  • Gitkube
    - deploy code to k8s cluster (automates build-push-deploy)
    - git push to deploy, no dependencies on your local machine
    - needs dockerfile, k8s manifests in the git repo, gitkube on cluster
  • Helm
    - deploy and manage charts (collection of k8s objects defining an application) on a k8s cluster
    - ready-made charts for many common applications, like MySQL, MediaWiki etc.
    - needs helm cli, tiller on cluster, chart definition locally or from a repo
  • Ksonnet
    - define k8s manifests in jsonnet, deploy them to k8s cluster
    - reusable components for common patterns and stacks, like deployment+service, redis
    - needs jsonnet knowledge, ksonnet cli
  • Metaparticle
    - deploy your code in metaparticle supported languages to k8s (automates build-push-deploy)
    - define containerizing and deploying to k8s in the language itself, in an idiomatic way, without writing dockerfile or k8s yaml
    - needs metaparticle library for language, local docker
  • Skaffold
    - deploy code to k8s cluster (automates build-push-deploy)
    - watches source code and triggers build-push-deploy when change happens, configurable pipeline
    - needs skaffold cli, dockerfile, k8s manifests, skaffold manifest in folder, local docker, docker registry

Want to know more? Read ahead.

Kubernetes is super popular nowadays, and people are looking for more ways and workflows to deploy applications to a Kubernetes cluster. kubectl itself has come to feel like a low-level tool, with many looking for even easier workflows. Draft, Gitkube, Helm, Ksonnet, Metaparticle and Skaffold are some of the tools around that help developers build and deploy their apps on Kubernetes.

Draft, Gitkube and Skaffold reduce the effort needed, while you are still building an application, to get it running on a Kubernetes cluster as quickly as possible. Helm and Ksonnet help with the deployment process once your app is built and ready to ship, by defining applications, handling the rollout of new versions, handling different clusters etc. Metaparticle is the odd one out here, since it pulls everything into your code: the yaml, the dockerfile, all of it lives in the code itself.

So, what should you use for your use case?

Let’s discuss.

Contents

  1. Draft
  2. Gitkube
  3. Helm
  4. Ksonnet
  5. Metaparticle
  6. Skaffold

Draft

Simple app development & deployment — on to any Kubernetes cluster.

As the name suggests, Draft makes developing apps that run on Kubernetes clusters easier. The official statement says that Draft is a tool for developing applications that run on Kubernetes, not for deploying them. Helm is the recommended way of deploying applications as per Draft documentation.

The goal is to get the current code on the developer’s machine to a Kubernetes cluster while the developer is still hacking on it, before it is committed to version control. Once the developer is satisfied with the changes made and deployed using Draft, the code is committed to version control.

Draft is not meant to be used for production deployments, as it is purely intended for a very quick development workflow when writing applications for Kubernetes. It does, however, integrate very well with Helm, as it internally uses Helm to deploy the changes.

Architecture

Draft: architecture diagram

As we can see from the diagram, the draft CLI is the key component. It can detect the language used from the source code and then use an appropriate pack from a repo. A pack is a combination of a Dockerfile and a Helm chart which together define the environment for an application. Packs can be defined and distributed in repos. Users can define their own packs and repos, since these are just files on the local system or in a git repo.

Any directory with source code can be deployed if there is a pack for that stack. Once the directory is set up using draft create (this adds a Dockerfile, Helm chart and draft.toml), draft up can build the docker image, push it to a registry and roll out the app using the Helm chart (provided Helm is installed). Every time a change is made, executing the command again results in a new build being deployed.

There is a draft connect command which can port-forward connections to your local system as well as stream logs from the container. Draft can also integrate with nginx-ingress to provide domain names for each app it deploys.

From zero to k8s

Here are the steps required to get a python app working on a k8s cluster using Draft. (See docs for a more detailed guide)

Prerequisites:

  • k8s cluster (hence kubectl)
  • helm CLI
  • draft CLI
  • docker
  • docker repository to store images

$ helm init
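
From there, the rest of the workflow looks roughly like this (the directory name is a placeholder, and exact behaviour and registry configuration flags depend on the draft version):

$ draft init        ## set up draft (what this installs varies by version)
$ cd my-python-app  ## any directory with source code
$ draft create      ## detects python, adds a Dockerfile, chart/ and draft.toml
$ draft up          ## builds the image, pushes it and deploys via Helm
$ draft connect     ## port-forwards to the app and streams its logs
## edit code, then run draft up again to redeploy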

Use case

  • Developing apps that run on Kubernetes
  • Used in the “inner loop”, before code is committed to version control
  • Pre-CI: Once development is complete using draft, CI/CD takes over
  • Not to be used in production deployment

More details here.

Gitkube

Build and deploy Docker images to Kubernetes using git push

Gitkube is a tool that takes care of building and deploying your Docker images on Kubernetes, using git push. Unlike Draft, Gitkube has no CLI and runs exclusively on the cluster.

Any source code repo with a dockerfile can be deployed using gitkube. Once gitkube is installed and exposed on the cluster, the developer creates a Remote custom resource, which provides a git remote url. The developer can then push to the given url, and a docker build followed by a kubectl rollout happens on the cluster. The actual application manifests can be created using any tool (kubectl, helm etc.)

The focus is on plug-and-play installation and usage of existing, well-known tools (git and kubectl). No assumptions are made about the repo to be deployed. The docker build context and dockerfile path, along with the deployments to be updated, are configurable. Authentication to the git remote is based on SSH public keys. Whenever any change in code is made, committing and pushing it using git will trigger a build and rollout.

Architecture

Gitkube: architecture diagram

There are three components on the cluster: a Remote CRD which defines what should happen when a push is made to a remote url, gitkubed which builds docker images and updates deployments, and a gitkube-controller which watches the CRD to configure gitkubed.

Once these objects are created on the cluster, a developer can create their own applications, using kubectl. The next step is to create a Remote object which tells gitkube what has to happen when a git push is made to a particular remote. Gitkube writes the remote url back into the status field of the Remote object.

From zero to k8s

Prerequisites:

  • k8s cluster (kubectl)
  • git
  • gitkube installed on the cluster (kubectl create)

Here are the steps required to get your application on Kubernetes, including installation of gitkube:

$ git clone https://github.com/hasura/gitkube-example

$ cd gitkube-example

$ kubectl create -f k8s.yaml

$ cat ~/.ssh/id_rsa.pub | awk '$0="  - "$0' >> "remote.yaml"

$ kubectl create -f remote.yaml

$ kubectl get remote example -o json | jq -r '.status.remoteUrl'

$ git remote add example [remoteUrl]

$ git push example master

## edit code
## commit and push

Use case

  • Easy deployments using git, without docker builds
  • Developing apps on Kubernetes
  • During development, a WIP branch can be pushed multiple times to see immediate results

More details here.

Helm

The package manager for Kubernetes

As the tagline suggests, Helm is a tool to manage applications on Kubernetes, in the form of Charts. Helm takes care of creating the Kubernetes manifests and versioning them so that rollbacks can be performed across all kinds of objects, not just deployments. A chart can have deployments, services, configmaps etc. Charts are also templated, so variables can be changed easily. Helm can be used to define complex applications with dependencies.

Helm is primarily intended as a tool to deploy manifests and manage them in a production environment. In contrast to Draft or Gitkube, Helm is not for developing applications, but for deploying them. There is a wide variety of pre-built charts ready to be used with Helm.

Architecture

Helm: architecture diagram

Let’s look at Charts first. As we mentioned earlier, a chart is a bundle of information necessary to create an instance of a Kubernetes application. It can have deployments, services, configmaps, secrets, ingress etc., all defined as yaml files, which in turn are templates. Developers can also define certain charts as dependencies of other charts, or nest charts inside one another. Charts can be published or collated together in a Chart repo.

Helm has two major components, the Helm CLI and the Tiller server. The CLI helps in managing charts and repos, and it interacts with the Tiller server to deploy and manage these charts.

Tiller is a component running on the cluster, talking to the k8s API server to create and manage the actual objects. It also renders the chart to build a release. When the developer runs helm install <chart-name>, the CLI contacts Tiller with the name of the chart, and Tiller fetches the chart, compiles the templates and deploys it on the cluster.

Helm does not handle your source code. You need to use some sort of CI/CD system to build your image and then use Helm to deploy the correct image.
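
For example, a typical handoff from CI looks something like this. The chart path, release name and the image.tag value below are assumptions: the chart has to expose such a value in its templates for this to work.

## deploy the image that CI just built, assuming the chart templates
## reference .Values.image.tag
$ helm install ./my-app-chart --name my-app --set image.tag=v1.2.3
## after the next build
$ helm upgrade my-app ./my-app-chart --set image.tag=v1.2.4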

From zero to k8s

Prerequisites:

  • k8s cluster
  • helm CLI

Here is an example of deploying a WordPress blog onto a k8s cluster using Helm:

$ helm init
$ helm repo update
$ helm install stable/wordpress
## make new version
$ helm upgrade [release-name] [chart-name]

Use case

  • Packaging: Complex applications (many k8s objects) can be packaged together
  • Reusable chart repo
  • Easy deployments to multiple environments
  • Nesting of charts — dependencies
  • Templates — changing parameters is easy
  • Distribution and reusability
  • Last mile deployment: Continuous delivery
  • Deploy an image that is already built
  • Upgrades and rollbacks of multiple k8s objects together — lifecycle management

More details here.

Ksonnet

A CLI-supported framework for extensible Kubernetes configurations

Ksonnet is an alternate way of defining application configuration for Kubernetes. It uses Jsonnet, a JSON templating language, instead of the default yaml files to define k8s manifests. The ksonnet CLI renders the final yaml files and then applies them on the cluster.

It is intended to be used for defining reusable components and incrementally using them to build an application.

Architecture

Ksonnet: overview

The basic building blocks are called parts, which can be mixed and matched to create prototypes. A prototype along with its parameters becomes a component, and components can be grouped together as an application. An application can be deployed to multiple environments.

The basic workflow is to create an application directory using ks init, auto-generate a manifest (or write your own) for a component using ks generate, and deploy the application to a cluster/environment using ks apply <env>. Different environments are managed using the ks env command.

In short, Ksonnet helps you define and manage applications as collections of components using Jsonnet, and then deploy them on different Kubernetes clusters.

Like Helm, Ksonnet does not handle source code; it is a tool for defining applications for Kubernetes, using Jsonnet.

From zero to k8s

Prerequisites:

  • k8s cluster
  • ksonnet CLI

Here’s a guestbook example:

$ ks init
$ ks generate deployed-service guestbook-ui \
    --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
    --type ClusterIP
$ ks apply default
## make changes
$ ks apply default
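
Deploying the same application to another cluster is then just a matter of adding an environment; the server URL and namespace below are placeholders:

$ ks env add staging --server=https://staging.example.com --namespace=guestbook
$ ks param set guestbook-ui replicas 3 --env=staging
$ ks apply staging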

Use case

  • Flexibility in writing configuration using Jsonnet
  • Packaging: Complex configurations can be built as mixing and matching components
  • Reusable component and prototype library: avoid duplication
  • Easy deployments to multiple environments
  • Last-mile deployment: CD step

More details here.

Metaparticle

Cloud native standard library for Containers and Kubernetes

Positioning itself as the standard library for cloud-native applications, Metaparticle helps developers easily adopt proven patterns for distributed-system development through primitives exposed via familiar programming-language interfaces.

It provides idiomatic language interfaces which help you build systems that can containerize and deploy your application to Kubernetes, develop replicated load-balanced services and a lot more. You never define a Dockerfile or a Kubernetes manifest. Everything is handled through idioms native to the programming language that you use.

For example, for a Python web application, you add a decorator called containerize (imported from the metaparticle package) to your main function. When you execute the python code, the docker image is built and deployed onto your Kubernetes cluster according to the parameters given in the decorator. The default kubectl context is used to connect to the cluster, so switching environments means changing the current context.

Similar primitives are available for NodeJS, Java and .NET, and support for more languages is a work in progress.

Architecture

The metaparticle library for the corresponding language has the required primitives and bindings for building the code as a docker image, pushing it to a registry, creating the k8s yaml files and deploying the app onto a cluster.

The Metaparticle Package contains these language idiomatic bindings for building containers. Metaparticle Sync is a library within Metaparticle for synchronization across multiple containers running on different machines.

JavaScript/NodeJS, Python, Java and .NET are supported at the time of writing.

From zero to k8s

Prerequisites:

  • k8s cluster
  • metaparticle library for the supported language
  • docker
  • docker repository to store images

A python example (only the relevant portion) for building a docker image from the code and deploying it to a k8s cluster:

# containerize comes from the metaparticle package
from metaparticle import containerize

@containerize(
    'docker.io/your-docker-user-goes-here',
    options={
        'ports': [8080],
        'replicas': 4,
        'runner': 'metaparticle',
        'name': 'my-image',
        'publish': True
    })
def main():
    # MyHandler and port are defined in the omitted part of the example
    Handler = MyHandler
    httpd = SocketServer.TCPServer(("", port), Handler)
    httpd.serve_forever()
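
There is no separate build or deploy step: running the file (the file name below is a placeholder) builds and pushes the image and creates the corresponding objects on the cluster pointed to by the current kubectl context.

$ pip install metaparticle   ## python bindings, package name at the time of writing
$ python my_app.py           ## builds, pushes and deploys as per the decorator
$ kubectl get pods           ## the replicas show up on the current context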

Use case

  • Develop applications without worrying about k8s yaml or dockerfile
  • Developers no longer need to master multiple tools and file formats to harness the power of containers and Kubernetes.
  • Quickly develop replicated, load-balanced services
  • Handle synchronization like locking and master election between distributed replicas
  • Easily develop cloud-native patterns like sharded systems

More details here.

Skaffold

Easy and Repeatable Kubernetes Development

Skaffold handles the workflow of building, pushing and deploying an application to Kubernetes. As with Gitkube, any directory with a dockerfile can be deployed to a k8s cluster using Skaffold.

Skaffold builds the docker image locally, pushes it to a registry and rolls out the deployment using the skaffold CLI tool. It also watches the directory so that rebuilds and redeploys happen whenever the code inside the directory changes, and it streams logs from the running containers.

The build, push and deploy steps of the pipeline are configurable using a yaml file, so that developers can mix and match the tools they like for each step: e.g. docker build vs Google Container Builder for building images, kubectl vs helm for rollout etc.

Architecture

Skaffold: overview

The Skaffold CLI does all the work here. It looks at a file called skaffold.yaml which defines what has to be done. A typical example: build the docker image using the dockerfile in the directory where skaffold dev is run, tag it with its sha256, push the image, set that image in the k8s manifests pointed to by the yaml file, and apply those manifests on the cluster. This runs continuously in a loop, triggered by every change in the directory. Logs from the deployed container are streamed to the same watch window.

Skaffold is very similar to Draft and Gitkube, but more flexible, as it can manage different build-push-deploy pipelines, like the one shown above.
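
For one-off runs, for example in a CI pipeline, the CLI at the time of writing also offers skaffold run, which executes the same pipeline once instead of watching the directory:

$ skaffold dev    ## watch the directory, rebuild and redeploy on every change
$ skaffold run    ## run the build-push-deploy pipeline once and exit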

From zero to k8s

Prerequisites:

  • k8s cluster
  • skaffold CLI
  • docker
  • docker repository to store images

Here are the steps to deploy an example Go application that prints hello-world:

$ git clone https://github.com/GoogleCloudPlatform/skaffold
$ cd examples/getting-started
## edit skaffold.yaml to add docker repo
$ skaffold dev
## open new terminal: edit code

Use case

  • Easy deployments
  • Iterative builds — Continuous build-deploy loop
  • Developing apps on Kubernetes
  • Define build-push-deploy pipelines in CI/CD flow

More details here.


Let me know in the comments if I missed something out or got something wrong. I have not mentioned tools like Ksync and Telepresence, as I plan to write another post about them soon. But, if there are other tools that fit into the category of those mentioned here, please drop a note in the comments.

You can find the discussion on Hacker News here.

We’ve been trying to solve similar problems around deployment with the Hasura platform and our current approach is to:

  1. Bootstrap a monorepo of microservices that contains Dockerfiles and Kubernetes specs
  2. git push to build and deploy all the microservices in one shot

This method helps you get started without knowing about Dockerfiles and Kubernetes (using boilerplates) but keeps them around in case you need access to them. It also has a very simple (git push based) deployment method.

29 Mar, 2018
