guest, 2025-10-19
Asked Dejan Lesjak (F9) to create a VM running Debian 13.1.
Tasks:
- studen as part of the sudo group
- labkey to manage resources (not sudo, but probably docker; see how it works out with Kubernetes)

labkey user:
studen@labkey:~$ sudo mkdir /data0/labkey
studen@labkey:~$ sudo mkdir /data1/labkey
studen@labkey:~$ sudo chown labkey:labkey /data0/labkey
studen@labkey:~$ sudo chown labkey:labkey /data1/labkey
Follow installation instructions. Specifically:
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done;
sudo apt-get install ca-certificates curl
sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
sudo vi /etc/apt/sources.list.d/docker.list
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian trixie stable
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Add labkey to the docker group (-a appends, so existing supplementary groups are preserved):
studen@labkey:~$ sudo usermod -aG docker labkey
Now labkey can start docker:
studen@labkey:~$ sudo su labkey
labkey@labkey:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The idea is to run a single Node with three Pods: a disk server (/data1/labkey), a database server (/data0/labkey) and a labkey service.
Start with installing kubeadm. Made sure we are running an LTS kernel:
studen@labkey:~$ uname -r
6.12.48+deb13-amd64
Add repository:
studen@labkey:~$ sudo apt-get install -y apt-transport-https ca-certificates curl gpg
studen@labkey:~$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.34/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
studen@labkey:~$ cat /etc/apt/sources.list.d/kubernetes.list
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.34/deb/ /
Install kubelet, kubeadm and kubectl:
studen@labkey:~$ sudo apt-get install -y kubelet kubeadm kubectl
studen@labkey:~$ sudo apt-mark hold kubelet kubeadm kubectl
From apt: hold is used to mark a package as held back, which will prevent the package from being automatically installed, upgraded or removed.
Kubernetes abstracts the Docker service at the containerd level. The containerd service is part of the Docker bundle and is managed by systemd, so its status is checked with:
studen@labkey:~$ systemctl status containerd
Docker actually ships the containerd.io package, which contains both the containerd daemon and runc, the low-level runtime that actually starts containers.
The configuration file /etc/containerd/config.toml it installs is Docker-centric. To fit better with the instructions on container runtimes, it is better to regenerate a default containerd configuration:
studen@labkey:~$ sudo /bin/bash -c 'containerd config default > /etc/containerd/config.toml'
Change the containerd configuration to use the systemd cgroup driver, since Trixie is recent enough to use the cgroup v2 API. Below we check the cgroup version and verify that SystemdCgroup is set to true (it was false originally).
studen@labkey:~$ stat -fc %T /sys/fs/cgroup/
cgroup2fs
studen@labkey:~$ grep SystemdCgroup /etc/containerd/config.toml
SystemdCgroup = true
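If the regenerated config still has SystemdCgroup = false, it can be flipped with sed. A sketch operating on a scratch copy; on the real system run the same sed with sudo against /etc/containerd/config.toml:

```shell
# Sketch: flip SystemdCgroup to true, shown on a scratch copy of the config.
# Real system: sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
cfg=$(mktemp)
printf '            ShimCgroup = ""\n            SystemdCgroup = false\n' > "$cfg"
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
grep SystemdCgroup "$cfg"
```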
Restarting the containerd service runs OK; it just complains that there is no CNI at init, which is apparently fine since we haven't started a cluster yet:
studen@labkey:~$ sudo systemctl restart containerd
studen@labkey:~$ sudo systemctl status containerd
[...]
Sep 30 11:25:57 labkey containerd[12742]: time=[...] level=error msg="failed to load cni during init, please check CRI plugin status ..."
While selecting a CNI seems to be a crucial point, it is done with kubectl, which won't work until the control plane is up and running.
To start, disable swap and initialize the control plane with default parameters.
studen@labkey:~$ sudo swapoff -a
#studen@labkey:~$ sudo kubeadm init
#option to specify IP of the pods
studen@labkey:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
This disables swap only until reboot; after a reboot the same command must be run again before starting kubeadm (or the swap entry in /etc/fstab commented out to make it permanent).
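To make the swap-off survive reboots, the swap line in /etc/fstab can be commented out. A sketch on a scratch copy (on the real system, run the sed with sudo against /etc/fstab itself):

```shell
# Sketch: comment out any swap entry in an fstab-style file (scratch copy).
fstab=$(mktemp)
printf 'UUID=7aa97821 / ext4 defaults 0 1\nUUID=f54d2ee3 none swap sw 0 0\n' > "$fstab"
# prefix lines containing the filesystem type "swap" with '#'
sed -i '/\bswap\b/s/^/#/' "$fstab"
cat "$fstab"
```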
Now we can start managing the cluster with kubectl. Delegate management to a temporary user to prevent admin certificate corruption. First, assign cluster-admin rights to user studen (see how the second command uses the admin.conf created by kubeadm). Then, create a kubeconfig for studen and move it to the default kubeconfig location, i.e. ~/.kube/config. A configuration file clusterConfig.yaml contains basic data of the cluster, which can be extracted using kubectl and admin.conf.
# this is only done once, even after cert in admin/studen.conf expires
studen@labkey:~$ sudo kubectl --kubeconfig /etc/kubernetes/admin.conf create clusterrolebinding studen --clusterrole=cluster-admin --user=studen
# check cluster configuration to generate clusterConfig.yaml
studen@labkey:~$ sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}"
# create a temporary certificate for studen
studen@labkey:~$ sudo kubeadm kubeconfig user --config admin/clusterConfig.yaml --client-name studen --validity-period 24h > admin/studen.conf
# make it the default config for kubectl
# from this point onwards we don't need root
studen@labkey:~$ cp admin/studen.conf .kube/config
#now studen can do administration
studen@labkey:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
labkey NotReady control-plane 13d v1.34.1
So there is at least the control node that is available. It seems the configuration is not complete, but at least it is there.
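Since studen.conf is only valid for 24 hours, it helps to check whether a client certificate is still valid before blaming the cluster. A sketch using openssl x509 -checkend, demonstrated on a throwaway self-signed certificate (on the real system, one would first base64-decode the client-certificate-data field from studen.conf):

```shell
# Generate a throwaway certificate valid for 1 day, mimicking the 24h kubeconfig cert.
key=$(mktemp); crt=$(mktemp)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$key" -out "$crt" \
  -days 1 -subj "/CN=studen" 2>/dev/null
# exit status 0 means the cert is still valid one hour from now
openssl x509 -checkend 3600 -noout -in "$crt" && echo "cert still valid"
```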
We can also look at config:
studen@labkey:~$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://178.172.43.129:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: studen
name: studen@kubernetes
current-context: studen@kubernetes
kind: Config
users:
- name: studen
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
We can add a CNI, which toggles the node Status to Ready. Going by general advice on CNI selection, Flannel seemed like a good candidate, but it failed here; Antrea was used instead. The yml file applied is the one copied from the Antrea git releases.
studen@labkey:~$ mkdir cni && cd cni
studen@labkey:~$ wget https://github.com/antrea-io/antrea/releases/download/v2.4.3/antrea.yml
# Flannel failed
# studen@labkey:~$ curl https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
#studen@labkey:~$ kubectl apply -f cni/kube-flannel.yml
studen@labkey:~$ cd
studen@labkey:~$ kubectl apply -f cni/antrea.yml
[..]
#Equivalent to docker ps -a, across all namespaces
studen@labkey:~$ kubectl get pods -A
[..]
studen@labkey:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
labkey Ready control-plane 15m v1.34.1
Once the CNI is applied, all pods should go to the Running state and the node becomes Ready.
Allow scheduling of pods on control-plane node (since it is a single PC setup):
studen@labkey:~$ kubectl taint nodes --all node-role.kubernetes.io/control-plane-
node/labkey untainted
Test using a sample image:
studen@labkey:~$ kubectl create deployment kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1
deployment.apps/kubernetes-bootcamp created
studen@labkey:~$ kubectl get pods -A
[..]
#after a while
NAMESPACE NAME READY STATUS RESTARTS AGE
default kubernetes-bootcamp-658f6cbd58-sthch 1/1 Running 0 28s
kube-system antrea-agent-b8nfb 2/2 Running 0 4m48s
kube-system antrea-controller-547dbb7586-r6gnv 1/1 Running 0 4m48s
kube-system coredns-66bc5c9577-6r6sw 1/1 Running 0 7m9s
kube-system coredns-66bc5c9577-rfltb 1/1 Running 0 7m9s
kube-system etcd-labkey 1/1 Running 4 7m14s
kube-system kube-apiserver-labkey 1/1 Running 4 7m14s
kube-system kube-controller-manager-labkey 1/1 Running 0 7m15s
kube-system kube-proxy-lqhw6 1/1 Running 0 7m9s
kube-system kube-scheduler-labkey 1/1 Running 14 7m14s
Further details on the pod (analogous to docker inspect and docker logs):
studen@labkey:~$ kubectl describe pod kubernetes-bootcamp-658f6cbd58-sthch
[..]
studen@labkey:~$ kubectl logs kubernetes-bootcamp-658f6cbd58-sthch
Kubernetes Bootcamp App Started At: 2025-10-15T14:18:10.545Z | Running On: kubernetes-bootcamp-658f6cbd58-sthch
Running On: kubernetes-bootcamp-658f6cbd58-sthch | Total Requests: 1 | App Uptime: 77.342 seconds | Log Time: 2025-10-15T14:19:27.887Z
Also, if running kubectl proxy
in a separate shell, one can access ports of image directly:
studen@labkey:~$ curl http://localhost:8001/api/v1/namespaces/default/pods/kubernetes-bootcamp-658f6cbd58-sthch:8080/proxy/
Delete sample deployment:
studen@labkey:~$ kubectl delete deployment kubernetes-bootcamp
Tasks: partition and mount the data disks. Note that fdisk is under /sbin:
studen@labkey:~$ sudo /sbin/fdisk -l
Disk /dev/sda: 32 GiB, 34359738368 bytes, 67108864 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd80d0738
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 63578111 63576064 30.3G 83 Linux
/dev/sda2 63580158 67106815 3526658 1.7G f W95 Ext'd (LBA)
/dev/sda5 63580160 67106815 3526656 1.7G 82 Linux swap / Solaris
Disk /dev/sdb: 64 GiB, 68719476736 bytes, 134217728 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdc: 2 TiB, 2199023255552 bytes, 4294967296 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
/dev/sda and associated partitions are for the system, which seems to be ext4-formatted:
studen@labkey:~$ sudo file -sL /dev/sda*
/dev/sda: DOS/MBR boot sector
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=7aa97821-8328-41bf-add1-199fde502af2 (needs journal recovery) (extents) (64bit) (large files) (huge files)
/dev/sdb is for the database and needs partitioning:
studen@labkey:~$ sudo fdisk /dev/sdb
Command (m for help): g
Created a new GPT disklabel (GUID: CEB79293-AAAF-4E3B-AC17-D7D5C4030135).
Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-134217694, default 2048): <Enter>
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-134217694, default 134215679): <Enter>
Created a new partition 1 of type 'Linux filesystem' and of size 64 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
studen@labkey:~$ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.41).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 64 GiB, 68719476736 bytes, 134217728 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CEB79293-AAAF-4E3B-AC17-D7D5C4030135
Device Start End Sectors Size Type
/dev/sdb1 2048 134215679 134213632 64G Linux filesystem
Command (m for help): q
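The 64 GiB that fdisk reports can be sanity-checked from the sector count (134213632 sectors of 512 bytes):

```shell
# Partition size = sectors * 512 bytes; fdisk rounds the human-readable size up.
sectors=134213632
bytes=$((sectors * 512))
echo "$bytes bytes"
# 68717379584 bytes, i.e. just under a full 64 GiB (64 GiB = 68719476736 bytes)
```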
studen@labkey:~$ sudo mkfs.ext4 /dev/sdb1
mke2fs 1.47.2 (1-Jan-2025)
Discarding device blocks: done
Creating filesystem with 16776704 4k blocks and 4194304 inodes
Filesystem UUID: 4296eee3-c46d-42f5-8d84-da8dc712e3d7
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (65536 blocks): done
Writing superblocks and filesystem accounting information: done
studen@labkey:~$ sudo blkid
/dev/sdb1: UUID="4296eee3-c46d-42f5-8d84-da8dc712e3d7" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="50081ae0-6c42-4de7-8010-7ce19fbdfbcd"
/dev/sda5: UUID="f54d2ee3-8dee-4792-a2c7-b529c2889ecd" TYPE="swap" PARTUUID="d80d0738-05"
/dev/sda1: UUID="7aa97821-8328-41bf-add1-199fde502af2" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="d80d0738-01"
Mount it to /data0 by editing /etc/fstab:
studen@labkey:~$ sudo vi /etc/fstab
#database disk (/dev/sdb1), 20250926
UUID=4296eee3-c46d-42f5-8d84-da8dc712e3d7 /data0 ext4 defaults 0 2
studen@labkey:~$ sudo mkdir /data0
studen@labkey:~$ sudo mount -a
studen@labkey:~$ sudo systemctl daemon-reload
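A quick sanity check before running mount -a: a well-formed fstab entry has exactly six whitespace-separated fields (device, mount point, type, options, dump, pass). Checking the new /data0 line:

```shell
# The /data0 entry added above; count its fields with awk.
line='UUID=4296eee3-c46d-42f5-8d84-da8dc712e3d7 /data0 ext4 defaults 0 2'
nfields=$(echo "$line" | awk '{print NF}')
echo "fields: $nfields"
```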
/dev/sdc is for files/images and needs partitioning:
studen@labkey:~$ sudo fdisk /dev/sdc
[...]
Device does not contain a recognized partition table.
The size of this disk is 2 TiB (2199023255552 bytes). DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).
Created a new DOS (MBR) disklabel with disk identifier 0xb95bd7d0.
Command (m for help): g
Created a new GPT disklabel (GUID: F9E0CF5C-25C6-4086-91D4-A36AEB8C67F8).
Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-4294967262, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4294967262, default 4294965247):
Created a new partition 1 of type 'Linux filesystem' and of size 2 TiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
studen@labkey:~$ sudo fdisk /dev/sdc
[...]
Command (m for help): p
Disk /dev/sdc: 2 TiB, 2199023255552 bytes, 4294967296 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F9E0CF5C-25C6-4086-91D4-A36AEB8C67F8
Device Start End Sectors Size Type
/dev/sdc1 2048 4294965247 4294963200 2T Linux filesystem
studen@labkey:~$ sudo mkfs.ext4 /dev/sdc1
mke2fs 1.47.2 (1-Jan-2025)
Discarding device blocks: done
Creating filesystem with 536870400 4k blocks and 134217728 inodes
Filesystem UUID: aeffcc3c-c7f5-4856-9f30-6a5b402b764f
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
Mount /dev/sdc1 as /data1:
studen@labkey:~$ sudo blkid
[...]
/dev/sdc1: UUID="aeffcc3c-c7f5-4856-9f30-6a5b402b764f" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="390885ba-f0f8-4871-985d-245d327f559e"
studen@labkey:~$ sudo vi /etc/fstab
#files disk (/dev/sdc1), 20250926
UUID=aeffcc3c-c7f5-4856-9f30-6a5b402b764f /data1 ext4 defaults 0 2
studen@labkey:~$ sudo mkdir /data1
studen@labkey:~$ sudo mount -a
studen@labkey:~$ sudo systemctl daemon-reload
Final layout:
studen@labkey:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.8G 0 7.8G 0% /dev
tmpfs 1.6G 520K 1.6G 1% /run
/dev/sda1 30G 1.2G 27G 5% /
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-journald.service
tmpfs 7.9G 0 7.9G 0% /tmp
tmpfs 1.0M 0 1.0M 0% /run/credentials/getty@tty1.service
tmpfs 1.6G 8.0K 1.6G 1% /run/user/1002
/dev/sdb1 63G 2.1M 60G 1% /data0
/dev/sdc1 2.0T 2.1M 1.9T 1% /data1
Install software:
studen@labkey:~$ sudo apt-get update
[..]
W: https://pkgs.k8s.io/core:/stable:/v1.34/deb/InRelease: Policy will reject signature within a year, see --audit for details
studen@labkey:~$ sudo apt-get install nfs-kernel-server
#check if it is running
studen@labkey:~$ systemctl status nfs-kernel-server
Edit /etc/exports:
/data1 178.172.43.129(rw,sync,no_root_squash,no_subtree_check)
/data1 127.0.0.1(rw,sync,no_root_squash,no_subtree_check)
This allows mounting either via localhost (second line) or via the actual IP of the labkey server.
Reload the configuration and restart the server:
studen@labkey:~$ sudo exportfs -ra
studen@labkey:~$ sudo systemctl restart nfs-kernel-server
Test mounting and unmounting
studen@labkey:~$ sudo mkdir /mnt/disk
studen@labkey:~$ sudo mount 127.0.0.1:/data1 /mnt/disk
studen@labkey:~$ sudo umount /mnt/disk
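With the export working, the share can later be consumed from the cluster as an NFS PersistentVolume. A hypothetical sketch of such a manifest (the PV name, capacity and file location are assumptions, not part of the setup above; the server IP is the labkey server's):

```shell
# Write a sketch PersistentVolume manifest for the exported /data1.
cat > /tmp/data1-pv.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data1-nfs
spec:
  capacity:
    storage: 2Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 178.172.43.129
    path: /data1
EOF
# apply later with: kubectl apply -f /tmp/data1-pv.yaml
```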
Create a dummy pod and connect to it, using a dummy.yaml adapted from the Kubernetes docs. BusyBox has no /bin/bash, so /bin/sh must be used.
cat pods/dummy.yaml
[..]
metadata:
name: hello
[..]
kubectl apply -f pods/dummy.yaml
#make sure pod is running
#wide -A: show IPs of pods/containers
kubectl get pods -o wide -A
#some information on pod
kubectl describe pods hello
#check logs (says something like Hello Kubernetes!)
kubectl logs hello
#connect to running pod
kubectl exec --stdin --tty hello -- /bin/sh
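For reference, the elided dummy.yaml is roughly a minimal busybox pod. A hedged reconstruction written to a scratch path (the image tag and the echoed message are assumptions based on the logged output, not the original file):

```shell
# Reconstruct a minimal pods/dummy.yaml (sketch; the original file was elided).
mkdir -p /tmp/pods
cat > /tmp/pods/dummy.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: busybox
    # busybox has no /bin/bash, hence /bin/sh
    command: ["/bin/sh", "-c", "echo 'Hello Kubernetes!'; sleep 3600"]
EOF
```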