Alternative to Kubernetes: Kontena

Created on 2020-05-18 11:11

Published on 2020-05-18 11:42

The Application Kontena

Kontena offers support to companies that need to run containers at large scale. Founded in March 2015, Kontena has developed an open-source platform for managing apps and microservices in containerized environments.

The company offers a user-friendly service for on-premise, cloud, or hybrid infrastructures. Little knowledge of DevOps or Linux is required to use the system, which aims to provide "everything needed to run and scale containers in production."

Kontena Helps with the Management of Kubernetes Clusters

A free version of Kontena is available for the management of Kubernetes clusters. This free desktop application for container orchestration is provided in parallel with the enterprise version.

The free version of Kontena is available for download for Linux, Windows, and macOS. With the Kontena Management Dashboard, you can granularly understand what is going on in the clusters.

The dashboard provides real-time visualization of the most important metrics, configurations, and log streams. Users get insight into their Kubernetes clusters, including all nodes and current workloads. This can be used, for example, to verify that a cluster is properly set up and configured.

An integrated terminal allows applications to be inspected or corrected without losing context. Access to data, namespaces, and other resources is restricted by role-based access control. To support this, Kontena integrates with common external authentication systems through its user management and integration APIs.

The free version differs from the Enterprise version in that it does not offer a browser view, its authentication options are restricted, and there is no premium support.

Features

Kubernetes Distribution Kontena Pharos available as a Beta

With Pharos, Kontena has announced a certified Kubernetes distribution: a free, open-source solution licensed under Apache 2 that aims to convince through its solidity and simplicity in both private and commercial environments.

According to the manufacturer, Kontena Pharos offers a foundation for Kubernetes clusters of all sizes. It is based on the latest Kubernetes sources with all the essential components - including tools designed to make it easy to keep the system up to date with security fixes and platform updates.

Pharos is designed to work not only in the cloud but on any infrastructure. In particular, its administration, which normally requires a lot of resources and specialist knowledge, is meant to be simplified, underlines Miska Kaipiainen, CEO and founder of Kontena Inc.: "We have made it our mission to relieve developers and companies of the immense complexity of container technology, and of Kubernetes in particular."

The development of Pharos drew on the experience gathered with the company's own container platform, available since 2015: "With Kontena Pharos, companies can benefit from container technology immediately, and not only after months or even years," says Kaipiainen.

Version 1.0 of Kontena Pharos was released in May 2018 during KubeCon Europe in Copenhagen, Denmark. The freely available version can be found in the Pharos cluster repository on GitHub. A trial version of Kontena Pharos is available on the manufacturer's website. For companies, Kontena offers commercial subscriptions with support and SLA agreements, as well as consulting and training packages.

Kontena Pharos 2.4, Kontena Network Load Balancer / Universal Load Balancer, Kontena Lens, and Kontena Storage in action on Bare Metal instances at Scaleway

Kontena Pharos 2.4 was announced with new features, independence from the Kontena Lens component, and support for Kubernetes version 1.14.3.


As before, we launch three Bare Metal instances of type C2L at Scaleway, with Ubuntu 18.04 LTS (in the Amsterdam region).


To deploy the Kubernetes cluster, we will therefore use this new version of Kontena Pharos. The community version is available on GitHub:


Or you can go for Pro:


Kontena Pharos OSS is the basic version and contains all the essential functionality to take full advantage of Kubernetes at any scale, on any infrastructure. It is 100% open source under the Apache 2 license. You can use it for free, for any purpose.

Kontena Pharos PRO is based on Kontena Pharos OSS but adds enhanced features and advanced functionality. It is commercial, but you can evaluate it for free for as long as you need.

We start by preparing the cluster.yml configuration file, which includes a number of add-ons supported by the PRO version of Kontena:

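For reference, a minimal cluster.yml for a three-node Pharos cluster could look roughly like the sketch below. The addresses, SSH user, address pool, and the exact add-on keys are illustrative assumptions; check the Pharos documentation for the add-on names and options matching your version:

```yaml
# Sketch of a Pharos cluster.yml (values are placeholders, not from the article)
hosts:
  - address: 10.147.17.101   # ZeroTier address of the first instance
    role: master
    user: root
  - address: 10.147.17.102
    role: worker
    user: root
  - address: 10.147.17.103
    role: worker
    user: root

addons:
  kontena-lens:              # add-on keys assumed from the components discussed in this article
    enabled: true
  kontena-storage:
    enabled: true
  kontena-network-lb:
    enabled: true
    address_pools:
      - name: default
        protocol: layer2
        addresses:
          - 10.147.17.200-10.147.17.220   # range inside the ZeroTier subnet
```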

Beforehand, we applied this cloud-init script to the instances:

#!/bin/sh
apt install sudo iputils-ping -y
echo "root ALL=(ALL) NOPASSWD:ALL" | tee /etc/sudoers.d/root
yes | mkfs.ext4 /dev/sda
mkdir -p /var/lib/docker
fs_uuid=$(blkid -o value -s UUID /dev/sda)
echo "UUID=$fs_uuid /var/lib/docker ext4 defaults 0 0" >> /etc/fstab
mount -a


curl -s https://install.zerotier.com/ | bash 

zerotier-cli join <YOUR NETWORK-ID>

We indeed have a second 250 GB disk on these instances (intended in particular for Kontena Storage):


These instances are added to ZeroTier (P2P VPN) on a private subnet where the Ethernet Bridging mode is activated (for Kontena Network Load Balancer):


And we launch it all:

$ pharos up -c cluster.yml


The Kubernetes cluster is then available:


We have access to the Kontena Lens dashboard:


We deployed Kontena Storage here, which brings Rook (Ceph) into the cluster along with an associated dashboard. To make the dashboard accessible, we modify the manifest of its service, giving it the LoadBalancer type via Kontena Network Load Balancer (which builds on MetalLB):

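The relevant change in the service manifest amounts to switching the spec.type field. A minimal fragment, assuming the Rook default service name rook-ceph-mgr-dashboard and the kontena-storage namespace used in this article:

```yaml
# Service manifest fragment: expose the Ceph dashboard through the load balancer
# (service name assumed from Rook defaults; adjust to your deployment)
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: kontena-storage
spec:
  type: LoadBalancer   # previously ClusterIP
```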

An address from the ZeroTier private network pool is automatically assigned by Kontena Network Load Balancer:


This allows access to the dashboard. The password associated with the admin user is retrieved as follows:

$ kubectl -n kontena-storage get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo


We can also use the command line to check the health of the Ceph cluster:

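One common way to do this, assuming the Rook toolbox pod is deployed in the same namespace, is to run the ceph CLI inside it (pod label and namespace are assumptions based on Rook defaults):

```shell
# Exec the ceph CLI in the Rook toolbox pod to get cluster health
kubectl -n kontena-storage exec -it \
  $(kubectl -n kontena-storage get pod -l app=rook-ceph-tools \
    -o jsonpath='{.items[0].metadata.name}') -- ceph status
```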

Kontena Lens also offers access to a catalog of Helm charts for installing applications in the Kubernetes cluster:


We modify the chart parameters for Weave Scope from a Kontena Lens terminal, once again setting a LoadBalancer-type service:

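From a terminal, the same change can be sketched with Helm. The release name, namespace, and value path below are assumptions (the exact parameter depends on the chart version; check `helm inspect values` for the chart you install):

```shell
# Sketch: install/upgrade Weave Scope with a LoadBalancer-type service
helm upgrade --install weave-scope stable/weave-scope \
  --namespace weave \
  --set service.type=LoadBalancer
```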

and launch the deployment:


We then have access to the visualization via Weave Scope deployed in the cluster:


Rook, like OpenEBS, makes it possible to deploy MinIO, for example. We deployed MinIO here in distributed mode in the cluster:


We take the sources of the FC chatbot and deploy it as a static site within a bucket in MinIO:

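Pushing a static site into a MinIO bucket can be sketched with the MinIO client mc (the alias, endpoint, bucket name, credentials, and local path are placeholders):

```shell
# Register the MinIO endpoint, create a bucket, upload the site, make it readable
mc config host add minio http://<MINIO-LB-ADDRESS>:9000 <ACCESS-KEY> <SECRET-KEY>
mc mb minio/chatbot
mc cp --recursive ./site/ minio/chatbot
mc policy set download minio/chatbot
```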

We reuse Cloudflare Argo Tunnel to make this chatbot publicly accessible:

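In its simplest free mode, cloudflared can forward a local or LAN endpoint and print a temporary public URL; the endpoint below is a placeholder:

```shell
# Expose the MinIO-hosted site; cloudflared prints the generated public URL
cloudflared tunnel --url http://<MINIO-LB-ADDRESS>:9000
```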

The Chatbot is accessible via the URL returned by Argo Tunnel:


with correct performance:


Another test: "GitOps" via Flagger and Istio in this cluster. We start from the sources offered in this GitHub repository, following this workflow:


Istio, Weave Flux, Flagger, Prometheus, and Helm are loaded into the cluster:

kubectl -n kube-system create sa tiller

kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

helm init --service-account tiller --wait

git clone https://github.com/<YOUR-USERNAME>/gitops-istio
cd gitops-istio

./scripts/flux-init.sh git@github.com:<YOUR-USERNAME>/gitops-istio

At startup, Weave Flux generates an SSH key and logs the public key. The flux-init.sh command above prints this public key.


To synchronize your cluster state with Git, you must copy the public key and create a deploy key with write access on your GitHub repository. On GitHub, select Settings > Deploy keys, click Add deploy key, check Allow write access, paste the Flux public key, and click Add key.


Once Weave Flux has write access to your repository, it will do the following:


When Weave Flux synchronizes the Git repository with the cluster, it creates the frontend/backend deployments, HPAs, and a canary object. Flagger uses this definition to create a series of objects: Kubernetes deployments, ClusterIP services, and Istio virtual services:

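A canary object of the kind Flagger consumes looks roughly like the sketch below (names, namespace, port, and thresholds are illustrative, modeled on Flagger's documented examples; the values in the gitops-istio repository may differ):

```yaml
# Sketch of a Flagger Canary resource for progressive delivery via Istio
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: frontend
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  service:
    port: 9898
  canaryAnalysis:
    interval: 1m        # how often metrics are checked
    threshold: 5        # failed checks before rollback
    maxWeight: 50       # max traffic shifted to the canary
    stepWeight: 10      # traffic increment per step
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m
```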

Flagger detects that the deployment revision has changed and initiates a new deployment:


All this is monitored with Grafana:


and viewable in Weave Scope:


Or in Weave Cloud, where it is possible to initiate an automated deployment in GitOps mode (example here with the FC demonstrator):


We make a change to the deployment manifest in its GitHub repository; the change is automatically detected, followed by a redeployment:


accompanied by monitoring:


The FC demonstrator remains accessible throughout (via the IP address provided by Kontena Network Load Balancer).

Finally, we can use Kontena Universal Load Balancer, which builds on Akrobateo (seen previously), by modifying the cluster.yml file at the add-on level:

addons:
  kontena-universal-lb:
    enabled: true

For this, we start from a cluster of Bare Metal instances of type C2M at Scaleway:


Once the deployment is complete, Kontena Universal Load Balancer (Akrobateo) is installed. Akrobateo is a simple Kubernetes operator that exposes the cluster's LoadBalancer services as node hostPorts using DaemonSets:

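In practice this means an ordinary Service of type LoadBalancer is enough; Akrobateo handles the node-level exposure. A minimal example (names and ports are illustrative):

```yaml
# Any LoadBalancer service is picked up by Akrobateo and exposed on the nodes
apiVersion: v1
kind: Service
metadata:
  name: demo-web
spec:
  type: LoadBalancer
  selector:
    app: demo-web
  ports:
    - port: 80
      targetPort: 8080
```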

and always with Kontena Lens:


We hope this complete step-by-step guide has clarified the configuration of Kontena Pharos 2.4, Kontena Network Load Balancer / Universal Load Balancer, Kontena Lens, and Kontena Storage in action on Bare Metal instances at Scaleway. If you still have questions, you can contact us for further information.