Nabla on Kubernetes!
NOTE: This post uses an old version of runnc. If you are using the latest version, please use the v0.3 tag of the images with runnc_v0.3.
A while back, we released a demo of the nabla containers runtime, runnc (v0.1). Since then, we have improved our runtime in several aspects, such as faster start-up times and better memory density. In addition (and the focus of this blogpost), we have implemented Nabla support for kubernetes! We will show a simple local setup of kubernetes using the nabla containers runtime, runnc (v0.2), through containerd via the untrusted workload CRI plugin and a pod annotation.
There will be several steps to achieve this on a local machine:
- Build and install the Nabla runtime, runnc
- Setup a CNI config
- Setup containerd with runnc
- Setup a local kubernetes cluster with containerd
- Run Nabla on k8s!
Note that the following versions were used for this post:
- ubuntu: 16.04.3 LTS
  - kernel: 4.15.0-24-generic
- runnc: runnc_v0.2
  - commit: 4b1eb805c29f1df6938ad2651cb062b5a122d167
- containerd: v1.2.0-rc1
  - commit: 0c5f8f63c3368856c320ae8a1c125e703b73b51d
- kubernetes: v1.12.0
  - commit: 0ed33881dc4355495f623c6f22e7dd0b7632b7c0
- containernetworking:
  - commit: b93d284d18dfc8ba93265fa0aa859c7e92df411b
Build and install the Nabla runtime runnc
The first step to running nabla on kubernetes is to run nabla! Make sure you have a working installation of the new (v0.2) version of runnc. For a step-by-step installation guide, take a look at our previous article, Running a Nabla Container, or at the README.
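Once installed, it is worth confirming that the runnc binary is at the path the containerd configuration below expects; we assume the default install location of /usr/local/bin/runnc (your path may differ depending on how you installed it):
lumjjb@lumjjb-ThinkPad-P50:~$ which runnc
/usr/local/bin/runnc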
Setting up a CNI config
Note: We are aware of some potential issues with network configurations that use a /32 CIDR with ARP-proxying, and we have a temporary fix for a subset of network configurations. Track Issue
Note: if you already have CNI configured, you may skip this step.
The Container Network Interface (CNI) is the interface that kubernetes uses for networking. A CNI configuration is required for our setup with hack/local-up-cluster.sh. For simplicity, we use the basic bridge CNI plugin. This is done by installing the CNI plugins and adding a configuration for them.
Installing the plugins
We will use the CNI plugins from the containernetworking repo. (The "can't load package" message from go get -d is expected here, since the repository's root directory contains no Go files; the source is still downloaded.) We then build the plugins and copy the resulting binaries into /opt/cni/bin:
lumjjb@lumjjb-ThinkPad-P50:~$ go get -d github.com/containernetworking/plugins
can't load package: package github.com/containernetworking/plugins: no Go files in /home/ubuntu/go/src/github.com/containernetworking/plugins
lumjjb@lumjjb-ThinkPad-P50:~$ cd $GOPATH/src/github.com/containernetworking/plugins
lumjjb@lumjjb-ThinkPad-P50:~/go/src/github.com/containernetworking/plugins$ ./build.sh
lumjjb@lumjjb-ThinkPad-P50:~/go/src/github.com/containernetworking/plugins$ sudo mkdir -p /opt/cni/bin
lumjjb@lumjjb-ThinkPad-P50:~/go/src/github.com/containernetworking/plugins$ sudo cp bin/* /opt/cni/bin/
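As a quick sanity check, list the installed binaries; the bridge, host-local, loopback, and portmap plugins used by the configuration below should all be present (the exact set of binaries depends on the commit you built):
lumjjb@lumjjb-ThinkPad-P50:~$ ls /opt/cni/bin
bridge  dhcp  host-device  host-local  ipvlan  loopback  macvlan  portmap  ptp  tuning  vlan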
Creating a simple bridge config
We will create the configuration in the following two files: /etc/cni/net.d/10-mynet.conflist and /etc/cni/net.d/99-loopback.conf. This configuration creates a bridge named cni0 and uses it to connect the pods created in the kubernetes cluster.
lumjjb@lumjjb-ThinkPad-P50:~$ cat /etc/cni/net.d/10-mynet.conflist
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.30.0.0/16",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "snat": true
    }
  ]
}
lumjjb@lumjjb-ThinkPad-P50:~$ cat /etc/cni/net.d/99-loopback.conf
{
  "cniVersion": "0.3.1",
  "type": "loopback"
}
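Note that nothing is created on the host yet; the plugins only run when kubernetes sets up the first pod sandbox. Once a pod is running later in this walkthrough, you should be able to see the bridge, holding the first address of the configured subnet since we set isGateway (output abbreviated):
lumjjb@lumjjb-ThinkPad-P50:~$ ip addr show cni0
4: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    inet 10.30.0.1/16 scope global cni0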
Setup containerd with runnc
We use containerd as the runtime because it supports selecting an alternative runtime for a pod via a pod annotation in kubernetes. This feature is provided by the containerd CRI plugin through its untrusted_workload_runtime configuration option.
We enable this feature by modifying the containerd configuration to include the CRI plugin section:
lumjjb@lumjjb-ThinkPad-P50:~$ cat /etc/containerd/config.toml
subreaper = true
oom_score = -999

[debug]
  level = "debug"

[metrics]
  address = "127.0.0.1:1338"

[plugins.linux]
  runtime = "runc"
  shim_debug = true

### ADD THE FOLLOWING CONFIGURATION ###
[plugins]
  [plugins.cri.containerd]
    [plugins.cri.containerd.untrusted_workload_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runnc"
We then run the containerd service in preparation for setting up kubernetes (Note: this may require running with root or sudo):
root@lumjjb-ThinkPad-P50:/home/lumjjb# /usr/local/bin/containerd --config /etc/containerd/config.toml
INFO[2018-10-22T21:40:27.299644455-04:00] starting containerd revision=de4bb2ddfbb6b2b6a112b3478a935ca74dd7b796 version=v1.2.0-rc.0-44-gde4bb2d
DEBU[2018-10-22T21:40:27.299684880-04:00] changing OOM score to -999
INFO[2018-10-22T21:40:27.299892674-04:00] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2018-10-22T21:40:27.299909914-04:00] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
<<< TRUNCATED >>>
Note that this process runs as a daemon, so we will need to keep this running for the rest of the setup. Optionally, you may set it up as a systemd service.
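Optionally, you can verify that containerd's CRI endpoint is responding before wiring in kubernetes. This assumes you have crictl from the cri-tools project installed; it is not required for the rest of the setup:
root@lumjjb-ThinkPad-P50:/home/lumjjb# crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.2.0-rc.0-44-gde4bb2d
RuntimeApiVersion:  v1alpha2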
Setup a local kubernetes cluster with containerd
Finally, to tie everything together, we will deploy kubernetes!
For setting up kubernetes, we use the hack/local-up-cluster.sh script, which is part of the kubernetes code base. This is one of many ways to set up kubernetes. To point the script at the container runtime configured above, we set the environment variables CONTAINER_RUNTIME=remote and CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock.
Note: For a regular kubernetes setup, the only change to the deployment of a kubernetes cluster is that the kubelet must be explicitly told to use the containerd endpoint we’ve created. On each worker, this can be done by setting the kubelet arguments --container-runtime=remote and --container-runtime-endpoint=unix:///run/containerd/containerd.sock (the default containerd socket).
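For example, on a systemd-managed worker this could be a kubelet drop-in along the following lines (an illustrative sketch; the file name is arbitrary and the right mechanism depends on how your kubelet is deployed):
# /etc/systemd/system/kubelet.service.d/0-containerd.conf (illustrative)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"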
Note: This requires running with root or sudo.
root@lumjjb-ThinkPad-P50:~/go/src/k8s.io/kubernetes# CONTAINER_RUNTIME=remote CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock hack/local-up-cluster.sh
WARNING : The kubelet is configured to not fail even if swap is enabled; production deployments should disable swap.
make: Entering directory '/home/lumjjb/go/src/k8s.io/kubernetes'
make[1]: Entering directory '/home/lumjjb/go/src/k8s.io/kubernetes'
make[1]: Leaving directory '/home/lumjjb/go/src/k8s.io/kubernetes'
+++ [1022 09:18:08] Building go targets for linux/amd64:
cmd/kubectl
cmd/hyperkube
<<< TRUNCATED >>>
storageclass.storage.k8s.io/standard created
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.
Logs:
/tmp/kube-apiserver.log
/tmp/kube-controller-manager.log
/tmp/kube-proxy.log
/tmp/kube-scheduler.log
/tmp/kubelet.log
To start using your cluster, you can open up another terminal/tab and run:
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
cluster/kubectl.sh
Alternatively, you can write to the default kubeconfig:
export KUBERNETES_PROVIDER=local
cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
cluster/kubectl.sh config set-context local --cluster=local --user=myself
cluster/kubectl.sh config use-context local
cluster/kubectl.sh
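With kubectl configured, a quick way to confirm that the kubelet is actually talking to containerd is to check the runtime reported by the node (output abbreviated; hack/local-up-cluster.sh registers the node as 127.0.0.1, and the version string will match your containerd build):
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl get nodes -o wide
NAME        STATUS   ...   CONTAINER-RUNTIME
127.0.0.1   Ready    ...   containerd://1.2.0-rc.0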
Let’s run some Nabla pods!
We’ll start by creating a deployment of the Nabla node-express image. We specify the following deployment, adding the annotation io.kubernetes.cri.untrusted-workload: "true" so that containerd runs the pod with our Nabla runtime.
lumjjb@lumjjb-ThinkPad-P50:~$ cat ~/deploys/nabla.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: nabla
  name: nabla
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nabla
      name: nabla
      annotations:
        io.kubernetes.cri.untrusted-workload: "true"
    spec:
      containers:
      - name: nabla
        image: nablact/node-express-nabla:v0.2
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
We create the deployment and verify that our pod has been created:
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl create -f ~/deploys/nabla.yaml
deployment.apps/nabla created
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nabla-857c6d9b9-zp6dd   1/1     Running   0          6s
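Since this annotation is what steers the pod onto runnc, it is also worth confirming that it propagated to the pod (pod name taken from our run; yours will differ):
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl get pod nabla-857c6d9b9-zp6dd -o jsonpath='{.metadata.annotations.io\.kubernetes\.cri\.untrusted-workload}'
true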
We can verify that the nabla container is running by observing the solo5 logo in the logs:
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl logs nabla-857c6d9b9-zp6dd
<< TRUNCATED >>
            |      ___|
  __|  _ \  |  _ \ __ \
\__ \ (   | | (   |  ) |
____/\___/ _|\___/____/
Solo5: Memory map: 512 MB addressable:
Solo5: unused @ (0x0 - 0xfffff)
Solo5: text @ (0x100000 - 0xaed6f7)
Solo5: rodata @ (0xaed6f8 - 0xda55b7)
Solo5: data @ (0xda55b8 - 0xfdbea7)
Solo5: heap >= 0xfdc000 < stack < 0x20000000
rump kernel bare metal bootstrap
<< TRUNCATED >>
=== calling "/run/containerd/io.containerd.runtime.v1.linux/k8s.io/e4929acb4a46fae8aaf5e46ffd257318073282e1784a48c217992d6aeb58b83e/rootfs/node.nabla" main() ===
rumprun: call to ``_sys___sigprocmask14'' ignored
rumprun: call to ``sigaction'' ignored
Listening on port 8080
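As an extra host-side check, remember that the unikernel runs as an ordinary process: you can look for the solo5/nabla tender process launched by runnc, which has the .nabla binary on its command line (the exact process name depends on the runnc version):
lumjjb@lumjjb-ThinkPad-P50:~$ ps aux | grep [n]abla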
To demonstrate that the application is accessible, we will create a service for the deployment and access it.
lumjjb@lumjjb-ThinkPad-P50:~$ cat ~/services/nabla.yaml
kind: Service
apiVersion: v1
metadata:
  name: nabla-service
spec:
  selector:
    app: nabla
  ports:
  - port: 8080
    targetPort: 8080
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl create -f ~/services/nabla.yaml
service/nabla-service created
lumjjb@lumjjb-ThinkPad-P50:~$ kubectl get services
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.0.0.1     <none>        443/TCP    3d
nabla-service   ClusterIP   10.0.0.98    <none>        8080/TCP   2m
At last, we curl the service endpoint and see the response from our Nabla container!
lumjjb@lumjjb-ThinkPad-P50:~$ curl 10.0.0.98:8080
Nabla!
What’s next?
We are currently working in collaboration with the Kata Containers community to get both Nabla and Kata Containers running on the same kubernetes cluster via the newly released alpha feature, RuntimeClass.
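As a preview, RuntimeClass replaces the per-pod annotation with a named runtime handler. A minimal sketch of what this could look like, assuming a containerd handler named nabla has been configured (the names and the alpha API group are illustrative and may change as the feature matures):
apiVersion: node.k8s.io/v1alpha1
kind: RuntimeClass
metadata:
  name: nabla
spec:
  runtimeHandler: nabla
A pod would then opt in by setting runtimeClassName: nabla in its spec instead of using the untrusted-workload annotation.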