Evaluating Kubermatic as a Kubernetes Distribution

In the rapidly evolving field of container orchestration, Kubernetes reigns supreme. This post explores Kubermatic, a potential alternative to other Kubernetes distributions, focusing on its features and user experience after it caught our attention at KubeCon 2023.

In the fast-evolving world of container orchestration and management, Kubernetes has solidified its position as the de facto standard. As organizations leverage Kubernetes for their containerized applications, the need for reliable management and orchestration solutions becomes crucial.

Besides our tried and tested solution Red Hat OpenShift, we have been using Rancher by SUSE for all customers needing a good multi-cluster management solution.
After hearing a lot about Kubermatic at KubeCon 2023, we decided to invest a few days of our “/mid Week 2023” (a week in which we collectively explore and evaluate new technology) to dive into Kubermatic ourselves and evaluate whether it would be a worthy competitor to Rancher for future customers.

In this post, we will delve into Kubermatic, exploring its functionalities, installation experiences, and usability.

Understanding Kubermatic: Platform and benefits

Kubermatic is a Kubernetes automation platform that aims to simplify the deployment and management of Kubernetes clusters across various environments. Its flagship components, the Kubermatic Kubernetes Platform (KKP) and KubeOne, take over different parts of managing Kubernetes at scale.

Kubermatic Kubernetes Platform (KKP)

KKP provides a centralized interface for managing multiple Kubernetes clusters, offering features like user-friendly cluster creation, monitoring, and cluster lifecycle management. Its emphasis on multi-cloud support and ease of use makes it appealing for enterprises seeking scalable and efficient Kubernetes management.

KubeOne

KubeOne focuses on automating the deployment and operation of a single production-ready Kubernetes cluster. It streamlines the process by providing a simple command-line interface to set up Kubernetes clusters across various infrastructure providers.

Installation Experience

KubeOne

Our team embarked on evaluating Kubermatic’s KubeOne installation on two distinct infrastructures: AWS and vSphere.

Generally, the installation of a KubeOne cluster consists of two steps: first, the infrastructure needed for the control plane is set up; for this purpose, Kubermatic provides example Terraform code for different cloud providers. After that, the KubeOne installer is fed with the Terraform output file, which it uses to bootstrap the Kubernetes control plane (kubeone apply -m kubeone.yaml -t tf.json). Worker nodes are then provisioned dynamically on the chosen cloud provider via Kubermatic’s machine-controller and its MachineDeployment API.
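
As a reference, the flow on AWS looked roughly like the following. The kubeone.yaml is a minimal sketch with placeholder values (Kubernetes version, use of the external CCM), not a complete, verified configuration.

    # kubeone.yaml (minimal sketch)
    apiVersion: kubeone.k8c.io/v1beta2
    kind: KubeOneCluster
    versions:
      kubernetes: "1.27.4"   # placeholder version
    cloudProvider:
      aws: {}
      external: true         # use the external cloud-controller-manager

    # Provision the control-plane infrastructure with the example Terraform code,
    # then let KubeOne bootstrap the control plane and create the MachineDeployments
    cd examples/terraform/aws
    terraform init && terraform apply
    terraform output -json > tf.json
    kubeone apply -m kubeone.yaml -t tf.json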

AWS:

It took some time until we had our KubeOne cluster up and running on AWS. Challenges surfaced mainly because of missing or confusing documentation.

  • There is little documentation on the specific requirements and installation steps for the different cloud providers. For example, there were no docs on which IAM policy to use or which requirements a VPC has to meet.
  • The add-on management severely lacks documentation (which add-ons are installed by default on which cloud providers, how to enable a different CNI, and so on).
  • An installation failure caused by the add-ons seems to have interfered with provisioning the worker nodes, which we then had to add to the cluster ourselves by searching through the documentation and manually applying the MachineDeployments generated with Terraform (see the sketch below).
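
Concretely, the workaround we ended up with was roughly the following (from our notes; the exact config subcommand may differ between KubeOne versions):

    # Render the MachineDeployments that KubeOne would normally create itself
    kubeone config machinedeployments -m kubeone.yaml -t tf.json > machinedeployments.yaml
    # Apply them against the freshly bootstrapped cluster
    kubectl apply -f machinedeployments.yaml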

vSphere:
During the vSphere infrastructure installation process, we again encountered lacking and/or inconsistent documentation.
Much had to be inferred or manually looked up in the Terraform files;
it felt as if the person following the guide was presumed to already know the ins and outs of the provided configuration and example files.

Some annoyances:

  • Out of the box, the Terraform code in the vSphere example folder was only really compatible with Ubuntu.
    • Specifically, the vApp options in the code had to be changed for different OS images.
    • The fact that this had to be changed was not explicitly mentioned in the documentation, requiring us to dig into the code ourselves.
  • Ultimately, we only got an Ubuntu image to fully run.
    • We’ve tried Flatcar and RockyLinux images as well, but both failed due to issues on our end.
      • Flatcar failed due to a bad ignition file.
      • RockyLinux failed due to VMDK conversion issues.
        • The image is provided as a qcow2 file that needs to be converted.
          However, simply converting it with qemu-img is not enough;
          the resulting VMDK also needs to be prepared for ESXi.
          Terraform failed with the error “expected flat VMDK version 2”.
  • We failed to provision a KubeOne cluster after the control-plane nodes were deployed.
    • The process got stuck at the ccm-vsphere add-on step.
    • This might have been due to an improper cloudProvider.cloudConfig configuration (a sketch of ours follows this list).
      • Unfortunately, the guide did not provide further resources on possible configuration options.
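
For context, the cloudConfig in question is the vSphere cloud-provider INI that KubeOne hands to the CCM. Ours looked roughly like the excerpt below (server, datacenter and datastore names are placeholders; we never confirmed which field ultimately made the ccm-vsphere add-on hang):

    # Excerpt from our kubeone.yaml for vSphere (sketch)
    cloudProvider:
      vsphere: {}
      external: true
      cloudConfig: |
        [Global]
        secret-name      = "vsphere-ccm-credentials"
        secret-namespace = "kube-system"

        [VirtualCenter "vcenter.example.com"]

        [Workspace]
        server            = "vcenter.example.com"
        datacenter        = "dc-1"
        default-datastore = "datastore-1"
        resourcepool-path = ""
        folder            = "kubeone"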

Additionally, the guide seems to have been written for vSphere 6.7 and not updated since.
We were using vSphere 8.0 but luckily did not run into issues preparing the VMware environment:
both a handy script and a Terraform plan that prepare the necessary vSphere users and permissions were provided.
Although neither has been updated since 2020, they still worked just fine.

Ultimately, we’ve abandoned this approach in favor of other deployment options, such as AWS.

Kubermatic Kubernetes Platform and Seed Cluster

Some terminology to better understand KKP (see Architecture).

  • Master Cluster: The master cluster is a Kubernetes cluster which is responsible for storing the information about users, projects, SSH keys and credentials for infrastructure providers. All sensitive information is stored in etcd, which is split across the nodes of the master cluster. It hosts the KKP components and might also act as a seed cluster.
  • Seed Cluster: The seed cluster is a Kubernetes cluster which is responsible for hosting the master components of a user cluster including credentials for the available infrastructure providers. All sensitive information is stored in etcd, which is split across the nodes of the seed cluster.
  • User Cluster: The user cluster is a Kubernetes cluster created and managed by KKP. Its control plane runs as pods on a seed cluster.

Once the KubeOne cluster was set up, our focus shifted to deploying the KKP itself, as well as the seed cluster. The process involved the installation of the KKP components, followed by setting up the seed cluster.

For testing, we chose the Small Scale Deployment Type for the KKP, in which the master and seed cluster are shared. This means that our previously installed KubeOne cluster hosts the KKP itself and also acts as the seed cluster.
The configuration for the KKP consists of two files:

  • values.yaml, which contains all Helm values for the different charts provided by Kubermatic.
  • kubermatic.yaml, which contains the configuration for the KKP itself. It is an instance of the KubermaticConfiguration custom resource and is managed by the Kubermatic Operator (a minimal sketch of both files follows).
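
In our setup, the two files boiled down to roughly the following (domains, e-mail addresses and hashes are placeholders; the Dex values reflect the example values.yaml as we remember it and should be double-checked against the current charts):

    # kubermatic.yaml
    apiVersion: kubermatic.k8c.io/v1
    kind: KubermaticConfiguration
    metadata:
      name: kubermatic
      namespace: kubermatic
    spec:
      ingress:
        domain: kkp.example.com
      auth:
        tokenIssuer: https://kkp.example.com/dex

    # values.yaml (excerpt: Dex with static users)
    dex:
      ingress:
        host: kkp.example.com
      staticPasswords:
        - email: admin@example.com
          hash: "<bcrypt hash>"
          username: admin
          userID: "<uuid>"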

For our tests, we skipped setting up MinIO and therefore the etcd backups. We set up Dex with two static users and installed cert-manager. After that, we had the KKP running and were able to log in to the web UI, where we created one project for our evaluation.
To add the master cluster itself as a seed cluster, we created the needed kubeconfig Secret and Seed custom resource, specifying our chosen AWS datacenter regions. After applying the manifests, the seed cluster was instantly available in the KKP console.
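
The two manifests looked roughly like this (the datacenter entries reflect the AWS regions we picked; the Seed fields are a sketch from our notes rather than a verified reference):

    # Secret with the kubeconfig of the shared master/seed cluster
    apiVersion: v1
    kind: Secret
    metadata:
      name: kubeconfig-kubermatic
      namespace: kubermatic
    type: Opaque
    stringData:
      kubeconfig: |
        <kubeconfig of the KubeOne cluster>

    # Seed referencing that Secret and defining the available datacenters
    apiVersion: kubermatic.k8c.io/v1
    kind: Seed
    metadata:
      name: kubermatic
      namespace: kubermatic
    spec:
      kubeconfig:
        name: kubeconfig-kubermatic
        namespace: kubermatic
      datacenters:
        aws-eu-central-1:
          country: DE
          location: "AWS Frankfurt"
          spec:
            aws:
              region: eu-central-1
        aws-eu-west-1:
          country: IE
          location: "AWS Ireland"
          spec:
            aws:
              region: eu-west-1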

Experiences using Kubermatic Kubernetes Platform

Our evaluation encompassed creating clusters on AWS via the seed cluster, bringing our own VMs using kubeadm, and working with external clusters such as EKS.
Additionally, we tested KKP’s monitoring capabilities by deploying the MLA stack (Monitoring, Logging, and Alerting).

While KKP provides a nice UI for all day-to-day functionality, most things can also be managed via custom resources on the Kubermatic master cluster. For simplicity, we decided to primarily use the UI for our tests.

Cluster creation using the seed cluster

This seems to be the flagship feature of Kubermatic Kubernetes Platform. It allows hosting the Kubernetes control plane on the seed cluster as pods, while having just the worker nodes as fully fledged VMs in the cloud provider of your choice or even on your own machines.

AWS
Creating a cluster on AWS was straightforward. It consisted of two steps:

  1. Defining a so-called Preset, which contains the cloud credentials to be used for setting up the infrastructure on AWS (a sketch follows this list).
  2. Creating a cluster using the “Create Cluster” wizard in the UI. The following settings can easily be set during the process:
    • Cloud provider + datacenter
    • Cluster name
    • CNI (our favorite CNI, Cilium, is officially supported)
    • K8s version
    • SSH key for the VMs
    • Some opt-in applications (OPA, Audit Logging, User Logging/Monitoring, Operating System Manager…)
    • Initial worker nodes (OS, replicas, subnet/AZ…)
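
The Preset from step 1 is itself just a small custom resource holding the cloud credentials; ours amounted to something like this (credentials redacted):

    apiVersion: kubermatic.k8c.io/v1
    kind: Preset
    metadata:
      name: aws-evaluation
    spec:
      aws:
        accessKeyID: "<access key id>"
        secretAccessKey: "<secret access key>"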

As soon as the cluster is created, the kubeconfig can be downloaded and the cluster is ready to be used.

Kubermatic can then be used to…

  • lifecycle the cluster and the CNI with one click
  • adjust the worker machine deployments
  • deploy the Kubernetes Dashboard
  • integrate with OIDC using Dex
  • deploy a simple web terminal that allows using kubectl directly from the KKP GUI
  • create K8s RBAC resources

Bring Your Own (Kubeadm)
As we typically use the Swiss cloud provider cloudscale.ch, we were interested to find out how easy it is to bring your own VMs.
After realizing that there is a mismatch in the documentation (the UI shows kubeadm as the provider, while the documentation only speaks of bringyourown), we were able to create a control plane in the seed cluster and received a kubeadm join command, which we successfully executed on a custom Ubuntu VM we had created on cloudscale.ch.
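
With token and CA hash redacted, the join command we received had the standard kubeadm shape:

    kubeadm join <control-plane-endpoint>:<port> --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>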

While this approach still provides lifecycle functionality for the K8s version as well as the CNI, obviously there is no dynamic machine management.

External clusters

Besides using the seed cluster to deploy hosted user clusters, KKP also integrates with the managed Kubernetes offerings of the “Big Three”: you can either create a cluster directly on EKS, AKS or GKE, or import an existing one.

EKS
As we were already using AWS for our other infrastructure, we created an EKS cluster using KKP. This process was again very straightforward. Using KKP, you are able to adjust the machine deployments for the worker nodes and lifecycle the K8s version. Beyond that, KKP seems to offer no additional benefits.

Monitoring, Logging & Alerting Stack (MLA)

Kubermatic’s monitoring and logging stack plays a crucial role in ensuring the stability and performance of Kubernetes clusters. However, it is worth noting that the approach taken by Kubermatic differs from the widely adopted Prometheus Operator stack: in a departure from the modern trend of leveraging the Prometheus Operator for streamlined management, Kubermatic has opted for a more traditional deployment of its monitoring components.

Unfortunately, this choice means that the monitoring and logging stack lacks the out-of-the-box automation and ease of management offered by the Prometheus Operator. Unlike the Operator’s declarative approach to defining and managing Prometheus instances, alerting rules, and related components, Kubermatic’s approach involves deploying these components individually, reminiscent of earlier practices in the Kubernetes ecosystem.
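
To illustrate the difference: with the Prometheus Operator, scrape targets are declared as small custom resources that the operator turns into configuration, while a classic deployment maintains the scrape configuration by hand. Both snippets below are generic examples, not Kubermatic’s actual manifests:

    # Operator style: a ServiceMonitor the operator reconciles into scrape configuration
    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: my-app
      namespace: monitoring
    spec:
      selector:
        matchLabels:
          app: my-app
      endpoints:
        - port: metrics
          interval: 30s

    # Classic style: the equivalent job maintained directly in prometheus.yml
    scrape_configs:
      - job_name: my-app
        kubernetes_sd_configs:
          - role: endpoints
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_label_app]
            regex: my-app
            action: keep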

Conclusion

Our focus while evaluating Kubermatic was whether we could use KKP/KubeOne as an additional option when choosing the best-fitting Kubernetes distribution for our customers. Kubermatic plays to its strengths at scale, and since most of our customers’ Kubernetes environments are small to medium-sized, Kubermatic is not the right fit for them. This is by no means a shortcoming of Kubermatic, but rather a misjudgment on our side going into the evaluation.

Although it is a very promising product, we feel that it is at times held back by a somewhat frustrating installation experience. A big factor here was the lacking documentation, which left much to be deduced by the user.

Being accustomed to the Prometheus Operator stack, we were surprised to see that Kubermatic deploys its monitoring components manually instead of following the operator-based approach that has become the de facto standard. We feel that this introduces additional complexity and requires extra attention from the user.

Support for external clusters like EKS showcased Kubermatic’s adaptability to different Kubernetes flavors, although we had expected the integrations to go deeper than machine management and version upgrades.

We appreciate the user-friendly interface, multi-cloud support, and scalability. If Kubermatic addresses the lacking documentation and improves the installation experience, we are confident that the product can reach its full potential.
