For this tutorial, two virtual machines running Ubuntu 20.04.1 LTS were used.

If you need an on-premises Kubernetes cluster, K3s is a good option because there's only a single small binary to install on each node.

Please note that I have blanked out all domain names, IP addresses, and so forth for privacy reasons.

Installing the Cluster

On the machine that will be the main node, install the K3s binary:

curl -sfL https://get.k3s.io | sh -
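
The script sets K3s up as a systemd service (named k3s by the installer), so a quick status check confirms the server came up:

sudo systemctl status k3s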

Get the node token, which is needed in the next step:

cat /var/lib/rancher/k3s/server/node-token

On each machine that will be a worker node:

curl -sfL https://get.k3s.io | K3S_URL=https://kubernetes01.domain.de:6443 K3S_TOKEN=<the-token-from-the-step-before> sh -
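
On workers the installer registers a k3s-agent service instead; checking it is a quick way to see whether the node joined:

sudo systemctl status k3s-agent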

Back on the main node, check if all nodes are present:

sudo k3s kubectl get node
NAME                     STATUS   ROLES                  AGE   VERSION
kubernetes01.domain.de   Ready    control-plane,master   18m   v1.20.0+k3s2
kubernetes02.domain.de   Ready    <none>                 93s   v1.20.0+k3s2

Deploying a Service

For a first test, deploy some NGINX instances to the cluster with the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 10
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Save this as nginx-deployment.yml and install it:

kubectl apply -f nginx-deployment.yml

There should now be 10 pods running NGINX. Why 10? Just for fun; adjust the count to your liking.
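
To change the replica count on a running deployment without editing the manifest, kubectl scale works in place (5 is just an example):

kubectl scale deployment/nginx-deployment --replicas=5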

Check where those pods are running:

kubectl get pods -l app=nginx --output=wide

Creating a Service

This is the first of two steps needed to make the internal deployment reachable from outside the cluster. The service is defined by the following manifest:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx

Save as nginx-service.yml and apply it:

kubectl apply -f nginx-service.yml

Check if it worked:

sudo kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   00.0.0.0        <none>        443/TCP   4h9m
nginx        ClusterIP   00.00.000.000   <none>        80/TCP    13
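
To confirm the service actually selected the pods, list its endpoints; the IPs of all ten NGINX pods should show up:

kubectl get endpoints nginx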

But it needs to be load balanced, right?

Right!

Ingress with HAProxy

After creating the service, make the NGINX instances accessible from outside by creating an ingress using HAProxy.

The original manifest provided by HAProxy is rather long, so it is not repeated here. It can be installed directly from its source:

kubectl apply -f https://raw.githubusercontent.com/haproxytech/kubernetes-ingress/v1.4/deploy/haproxy-ingress.yaml
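
That manifest only installs the ingress controller; the routing rule itself comes from a separate Ingress resource. The exact resource used here isn't shown, but a minimal sketch matching the output below (the name nginx-ingress and the path /nginx are assumptions taken from that output) could look like this:

# Routes requests for /nginx to the nginx service created above
# (name and path are assumptions based on the output below).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /nginx
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

Save it as nginx-ingress.yml and apply it with kubectl apply -f nginx-ingress.yml.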

Check that the ingress was created:

sudo kubectl get ingress
NAME            CLASS    HOSTS   ADDRESS        PORTS   AGE
nginx-ingress   <none>   *       00.000.0.000   80      2m38s

Additionally, let’s “talk” to one of the pods through the new ingress:

curl 10.199.7.120/nginx
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>

The 404 is expected, since all NGINX instances are “empty” and serve no content yet.

Changing the Local Kubernetes Context

It's more convenient to control the cluster from your local terminal instead of SSHing into the master node.

Copy the contents of /etc/rancher/k3s/k3s.yaml from the main node and add them to your local ~/.kube/config.

Replace “localhost” with the IP or name of the main K3s node.
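
For example, the server line in the copied section would change from the loopback address to the main node (hostname taken from the join command earlier):

server: https://kubernetes01.domain.de:6443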

On the local machine, change the context with:

kubectl config use-context <yourcontext>
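
If you are unsure of the context name, list the available ones first:

kubectl config get-contexts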

Verify by retrieving pods, for example:

kubectl get pods -o wide

Resources