
Fedora CoreOS - Basic Kubernetes Setup

A short overview of how to set up a simple single-node Kubernetes cluster on a Fedora CoreOS (FCOS) instance, using kubeadm, CRI-O as the container runtime, and Flannel as the CNI network provider.

At the time of writing, FCOS 31.20200127.3.0 was used; some of the following steps may change in future versions. This guide also relies on package layering and includes some non-straightforward installation steps.

Components involved

The setup consists of the following components:

- CRI-O as the container runtime
- kubeadm, kubelet and kubectl to set up and manage the cluster
- Flannel as the CNI network provider

CRI-O as the container runtime

FCOS ships with Docker as a container runtime. Some outdated FCOS documentation states that it comes with CRI-O out of the box as well, but this was changed for the final release, as the CRI-O version needs to match the used Kubernetes version. Therefore, it needs to be installed separately.

Activating Fedora module repositories

FCOS does not enable the Fedora module repositories by default. As CRI-O is released as a module, these repositories need to be activated with:

sed -i -z s/enabled=0/enabled=1/ /etc/yum.repos.d/fedora-modular.repo
sed -i -z s/enabled=0/enabled=1/ /etc/yum.repos.d/fedora-updates-modular.repo
# At the time of writing cri-o in version 1.17 is only available in testing
sed -i -z s/enabled=0/enabled=1/ /etc/yum.repos.d/fedora-updates-testing-modular.repo
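
As a quick sanity check, the repository files can be inspected to make sure the flag was actually flipped:

# All three repository files should now contain enabled=1
grep enabled= /etc/yum.repos.d/fedora-modular.repo \
    /etc/yum.repos.d/fedora-updates-modular.repo \
    /etc/yum.repos.d/fedora-updates-testing-modular.repo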

Setting up the CRI-O module

FCOS allows the installation of additional packages with the package layering mechanism provided by rpm-ostree. Unfortunately, straightforward support for modules is not integrated. Even with module repositories enabled, only the default stream of a module seems to be respected. Therefore, a separate configuration for the CRI-O module needs to be dropped in place.

mkdir -p /etc/dnf/modules.d
cat <<EOF > /etc/dnf/modules.d/cri-o.module
[cri-o]
name=cri-o
stream=1.17
profiles=
state=enabled
EOF

Installing CRI-O

After the previous steps, CRI-O can be installed with:

rpm-ostree install cri-o

# Make sure to reboot to have the layered package available
systemctl reboot
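
After the reboot, it can be verified that the package was layered and that the installed CRI-O version matches the chosen module stream:

# cri-o should appear under LayeredPackages
rpm-ostree status

# Should report a 1.17.x version
crio version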

In order to run CRI-O, some kernel modules and additional network settings are required:

# Load required kernel modules
modprobe overlay && modprobe br_netfilter

# Kernel modules should be loaded on every reboot
cat <<EOF > /etc/modules-load.d/crio-net.conf
overlay
br_netfilter
EOF

# Network settings
cat <<EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system
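
A quick check confirms that the modules are loaded and the settings took effect:

# Both modules should be listed
lsmod | grep -E 'overlay|br_netfilter'

# All three values should report 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward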

CRI-O configuration

I found that with CRI-O 1.17 (at the time of writing), the default configuration sets the path for OCI hooks to a directory that is read-only in FCOS. This needs to be changed with:

sed -i -z s+/usr/share/containers/oci/hooks.d+/etc/containers/oci/hooks.d+ /etc/crio/crio.conf
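
As the new hooks path might not exist yet, it is probably a good idea to create it as well; the grep afterwards verifies that the substitution worked:

# Create the writable hooks directory referenced above
mkdir -p /etc/containers/oci/hooks.d

# Should now show /etc/containers/oci/hooks.d
grep hooks /etc/crio/crio.conf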

Installing kubeadm, kubelet and kubectl

Installing the components needed to set up and manage the Kubernetes cluster requires an additional package repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

Then, the usual package layering can be used:

rpm-ostree install kubelet kubeadm kubectl

# Make sure to reboot to have layered packages available
systemctl reboot
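
After this reboot, all three components should report the same Kubernetes version:

kubeadm version
kubelet --version
kubectl version --client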

Since kubelet's SELinux support leaves room for improvement, the upstream documentation recommends setting it to permissive mode (which basically disables SELinux).

# Set SELinux to permissive mode
setenforce 0

# Make setting persistent
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
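
The active mode can be double-checked with:

# Should report "Permissive"
getenforce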

Kubernetes cluster setup

After installing all required packages, the system has to be rebooted, as packages layered via rpm-ostree only become available after a reboot. The steps above already included this.

The next step is to activate all required services:

systemctl enable --now crio && systemctl enable --now kubelet

Since CRI-O was chosen as the container runtime, the matching cgroup driver for kubelet has to be set:

echo "KUBELET_EXTRA_ARGS=--cgroup-driver=systemd" | tee /etc/sysconfig/kubelet

Before starting the installation a custom cluster configuration needs to be created:

cat <<EOF > clusterconfig.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
controllerManager:
  extraArgs:
    flex-volume-plugin-dir: "/etc/kubernetes/kubelet-plugins/volume/exec"
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
EOF

This is necessary to provide the following custom settings:

- flex-volume-plugin-dir: the default FlexVolume path lies under /usr, which is mounted read-only in FCOS, so a writable path is used instead
- podSubnet: the pod network CIDR 10.244.0.0/16, which Flannel expects by default
- criSocket: registers the node with the CRI-O socket instead of the default Docker socket

Starting the installation can then be done with:

kubeadm init --config clusterconfig.yml

After the installation, one can give specific users access to the cluster. This requires copying the kubeconfig file to the user's .kube directory with the correct ownership.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
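
Access can be verified with a first query against the API server. Note that the node will report a NotReady status until the CNI network provider is set up in a later step:

kubectl get nodes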

Allowing master node to schedule pods

As this guide describes the setup of a single-node cluster, it is necessary to allow the master node to schedule pods (which is disabled by default for security reasons). This is done by removing a taint:

kubectl taint nodes --all node-role.kubernetes.io/master-

Setting up networking

Flannel, the chosen networking solution, was already taken into account during the kubeadm init step, as it requires the cluster to be configured with a matching podSubnet.

The installation of Flannel is as simple as:

sudo sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This should result in a number of pods running inside the cluster:

kubectl get pods --all-namespaces

NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-...                     1/1     Running   0          67s
kube-system   coredns-...                     1/1     Running   0          67s
kube-system   etcd-...                        1/1     Running   0          63s
kube-system   kube-apiserver-...              1/1     Running   0          63s
kube-system   kube-controller-manager-...     1/1     Running   0          63s
kube-system   kube-flannel-ds-...             1/1     Running   0          12s
kube-system   kube-proxy-...                  1/1     Running   0          67s
kube-system   kube-scheduler-...              1/1     Running   0          63s

Running an example

After setting all this up, an example deployment can be created with:

# Deploy nginx
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type NodePort --port=80

# Find service port
PORT=$(kubectl get svc/hello -o go-template='{{ (index .spec.ports 0).nodePort }}')

# Curl nginx after pod and service are available
curl --ipv4 localhost:$PORT
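
Once the example has served its purpose, it can be removed again:

# Clean up the example resources
kubectl delete service hello
kubectl delete deployment hello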
