
Turning Proxmox VE into a Kubernetes Node with LXC and lxcri

Written by ~ zwindler ~

He didn’t know it was completely stupid, so he did it

What a lie… Of course, I know perfectly well it’s dumb. Which is precisely why this idea is irresistible.

If you follow the blog, you know I use Proxmox VE extensively (the articles that attract the most traffic on the blog are actually about this virtualization technology).

Proxmox VE is pretty cool; you can run VMs (QEMU) with it, but also, if you don’t have VT-x or want “lightweight VMs” (a term we’ve overused), you can install complete OSes in containers with LXC.

But beyond these two solutions, the Proxmox VE project devs are quite rigid (and not just about that). That’s why I’ve posted a few “hacks” to work around Proxmox’s limitations, notably this article where I explain how to run Docker containers (via LXC) in Proxmox VE (normally not possible).

What if we went even further with the stupid hack?

What if we not only ran OCI containers in Proxmox VE, but ALSO transformed our Proxmox VE hosts into ✨️ Kubernetes Nodes ✨️ ???

How do we do this?

The goal is obviously not to install Kubernetes alongside Proxmox VE, but to reuse as much as possible of what's already there. In particular, I'm going to reuse the virtualization technology.

With Proxmox VE supporting 2 different virtualization technologies (qemu and LXC), you have to choose. And since I like LXC and have already experimented with the Docker => LXC hack on Proxmox VE, the question was quickly answered.

The whole difficulty of this exercise is figuring out how to make LXC "CRI-compatible". And we're in luck: Linux Containers, the foundation that oversees LXC and Incus (formerly LXD, "closed off" by Canonical), wrote an OCI runtime wrapper for LXC called lxcri a few years ago, precisely so it could be plugged into a CRI. And of course, since it's a project of questionable utility, the thing has been unmaintained since 2021.

I had actually tried to use it and spent quite a bit of time on it in February 2024 (I hit bug after bug, went through forks, …), only to fail miserably on a compatibility problem with my LXC version (5+ on Proxmox 8, 6 on Proxmox 9), among other bugs.

lxcri is a wrapper around LXC which can be used as a drop-in container runtime replacement for use by CRI-O.

So we’re going to need 3 additional things on our Proxmox server for my dumb idea to work:

  • lxcri as the low-level container runtime
  • cri-o as the high-level container runtime
  • kubelet to control the runtime and communicate with the control plane
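
Schematically, once everything is wired up, the call chain should look like this (my own mental model, not an official diagram):

kubelet ──(CRI, gRPC on /var/run/crio/crio.sock)──► cri-o ──(OCI)──► lxcri ──► liblxc (the same engine Proxmox VE uses for its own containers)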

Let’s go

OK. So lxcri relies on CRI-O, Red Hat's container runtime. Let's start by installing CRI-O:

Note: my Proxmox VE 8 server back then was using Debian 12 as its base OS, so that's what we should use for the $OS variable (as indicated in the docs). However, when I first tested, the CRI-O repositories were being migrated, with outdated documentation and missing releases. The repo download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o had neither a Debian_12 path nor Kubernetes version 1.29… I was SO pissed.

We shouldn’t have any issues now:

export KUBERNETES_VERSION=v1.32
export CRIO_VERSION=v1.32

# create a directory for keyrings (it doesn't always exist on a fresh install)
sudo mkdir -p /etc/apt/keyrings

# kubernetes repo
curl -fsSL https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/deb/ /" |
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# crio repo
curl -fsSL https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/deb/Release.key |
  sudo gpg --dearmor -o /etc/apt/keyrings/cri-o-apt-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/cri-o-apt-keyring.gpg] https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/deb/ /" |
  sudo tee /etc/apt/sources.list.d/cri-o.list

sudo apt update
sudo apt install cri-o

From here, we have CRI-O. We could configure it as-is, but then it would use runc, and we wouldn't be using LXC as the runtime.

That’s not the point of this hack!

So at this point, we switch back to the lxcri documentation:

There’s no release

That's right… we first need to build the lxcri binary ourselves and put it under /usr/local on our Proxmox VE server. There's no precompiled binary in the project; there's an issue asking for one, opened just before the project was abandoned 😬😬.

And to make it even more fun, the build process goes through a Dockerfile, which is funny since we don't have Docker on our Proxmox node…

So I tried to build lxcri on a machine with Docker, following the command shown on GitHub. Boom. (First off, there's a missing "." at the end of the docker build command in the docs.) It fails miserably at compile time…

Note: I no longer get that error; it was probably a dependency problem. It's annoying, because we'll have to do a lot of things by hand… But don't bother with the git clone: we're going to use a fork (of a fork).

Looking at the Dockerfile, we quickly realize we’re just running a script (install.sh)…

When we call docker build with the build arg installcmd=install_runtime, it runs the install_runtime function, which calls install_runtime_noclean, which runs add_lxc then add_lxcri.
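
For the record, the command from the README looks roughly like this (reconstructed from memory, with the trailing "." the docs forgot; the image tag is mine):

docker build --build-arg installcmd=install_runtime -t lxcri-build .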

I just want to compile lxc (for the bindings) and lxcri. So I'll do it manually. For that, the upstream project wants golang 1.16 (that's old…).

There’s quite a bit of broken code everywhere and several open issues where maintainers recommend going with a fork:

I spent quite a bit of time debugging it, and in the end I made my own fork (https://github.com/zwindler/lxcri), which requires golang 1.22+ and adds a bunch of fixes.

Prerequisites for building lxcri

Let’s install a bunch of stuff:

sudo apt update
sudo apt install curl git meson pkg-config cmake libdbus-1-dev docbook2x

Let's install Go if we don't already have it (it's unlikely you have Go on a Proxmox VE node):

wget https://go.dev/dl/go1.25.4.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.25.4.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version
  go version go1.25.4 linux/amd64

I’ll also need to fetch and compile lxc. For a while, I was stuck on version 4.0.12 (very precisely) because the 4.0.6 found on Debian 11 does not work with the lxcri code (I hit lots of bugs).

But the good news is that, since I'm an absolute try-hard, with all the fixes it carries, my fork works with the latest version of lxc (6.x), which is convenient because that's the version shipped with my updated Proxmox VE 9.

git clone https://github.com/lxc/lxc
cd lxc

meson setup -Dprefix=/usr -Dsystemd-unitdir=PATH build
  Build targets in project: 30

  lxc 6.0.0

    User defined options
      prefix         : /usr
      systemd-unitdir: PATH

  Found ninja-1.12.1 at /usr/bin/ninja

meson compile -C build
  INFO: autodetecting backend as ninja
  INFO: calculating backend command to run: /usr/bin/ninja -C /root/lxc/build
  ninja: Entering directory `/root/lxc/build'
  [544/544] Linking target src/lxc/tools/lxc-monitor

It built lots of things, that’s cool. But the files I’m interested in are here:

find . -name "lxc.pc"
./build/meson-private/lxc.pc

ls build/*lxc*
build/liblxc.so  build/liblxc.so.1  build/liblxc.so.1.8.0  build/lxc.spec

build/liblxc.so.1.8.0.p:
liblxc.so.1.8.0.symbols

I put this in the right places on my Proxmox VE with:

# install the build artifacts (lxc 6.x builds with meson/ninja, there's no Makefile)
sudo meson install -C build
sudo ldconfig
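
Quick optional sanity check that the freshly installed liblxc and its lxc.pc are visible (the lxcri build below relies on pkg-config finding lxc):

ldconfig -p | grep liblxc
pkg-config --modversion lxc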

Build like never before

OK, time to play with my fork:

git clone https://github.com/zwindler/lxcri.git lxcri.zwindler
cd lxcri.zwindler
make build
  go build -ldflags '-X main.version=8805687-dirty -X github.com/lxc/lxcri.defaultLibexecDir=/usr/local/libexec/lxcri' -o lxcri ./cmd/lxcri
  cc -Werror -Wpedantic -o lxcri-start cmd/lxcri-start/lxcri-start.c $(pkg-config --libs --cflags lxc)
  CGO_ENABLED=0 go build -o lxcri-init ./cmd/lxcri-init
  # this is paranoia - but ensure it is statically compiled
  ! ldd lxcri-init  2>/dev/null
  go build -o lxcri-hook ./cmd/lxcri-hook
  go build -o lxcri-hook-builtin ./cmd/lxcri-hook-builtin

ls -alrt
  total 15288
  [...]
  -rwxr-xr-x 1 debian debian 7108288 Nov 16 20:06 lxcri
  -rwxr-xr-x 1 debian debian   17520 Nov 16 20:06 lxcri-start
  -rwxr-xr-x 1 debian debian 2942584 Nov 16 20:06 lxcri-init
  -rwxr-xr-x 1 debian debian 2834743 Nov 16 20:06 lxcri-hook
  -rwxr-xr-x 1 debian debian 2519097 Nov 16 20:06 lxcri-hook-builtin

If we were on a dev machine, we could now send the binaries over to the Proxmox VE server (lxcri in /usr/local/bin, the rest in /usr/local/libexec/lxcri).
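
Something like this would do (just a sketch; the "proxmox" host and root login are placeholders for your own setup):

scp lxcri root@proxmox:/usr/local/bin/
ssh root@proxmox "mkdir -p /usr/local/libexec/lxcri"
scp lxcri-start lxcri-init lxcri-hook lxcri-hook-builtin root@proxmox:/usr/local/libexec/lxcri/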

In my case, I’m directly on the machine that builds and runs, so I do:

$ sudo make install
    mkdir -p /usr/local/bin
    cp -v lxcri /usr/local/bin
    'lxcri' -> '/usr/local/bin/lxcri'
    mkdir -p /usr/local/libexec/lxcri
    cp -v lxcri-start lxcri-init lxcri-hook lxcri-hook-builtin /usr/local/libexec/lxcri
    'lxcri-start' -> '/usr/local/libexec/lxcri/lxcri-start'
    'lxcri-init' -> '/usr/local/libexec/lxcri/lxcri-init'
    'lxcri-hook' -> '/usr/local/libexec/lxcri/lxcri-hook'
    'lxcri-hook-builtin' -> '/usr/local/libexec/lxcri/lxcri-hook-builtin'

and we can continue:

$ /usr/local/bin/lxcri help
NAME:
   lxcri - lxcri is a OCI compliant runtime wrapper for lxc

USAGE:
   lxcri [global options] command [command options] [arguments...]

VERSION:
   8805687
[...]

Back to CRI-O

OK, we've built lxcri and we have all the dependencies needed to use it. So we can go back to the official lxcri doc, setup.md, and configure CRI-O so that it uses lxcri instead of runc.

sudo tee /etc/crio/crio.conf.d/10-crio.conf > /dev/null <<'EOF'
[crio.image]
signature_policy = "/etc/crio/policy.json"

[crio.runtime]
default_runtime = "lxcri"

[crio.runtime.runtimes.lxcri]
runtime_path = "/usr/local/bin/lxcri"
runtime_type = "oci"
runtime_root = "/var/lib/lxc" #proxmox lxc folder
inherit_default_runtime = false
runtime_config_path = ""
container_min_memory = ""
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "system.slice"
monitor_exec_cgroup = ""
privileged_without_host_devices = false
EOF

Note: the setup.md installation doc tells us to generate a clean config with the crio binary and its config subcommand, but it doesn't really work and we end up launching runc or crun without meaning to. I just overwrite everything; it's simpler.
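
For reference, the doc's approach is roughly the following (from memory, don't take the exact invocation as gospel):

crio config --default | sudo tee /etc/crio/crio.conf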

And now, we can start crio:

sudo systemctl enable crio && sudo systemctl start crio

From here, we have a server with a presumably functional CRI.

systemctl status crio
● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/lib/systemd/system/crio.service; enabled; vendor preset: enabled)
     Active: active (running) since Mon 2025-11-17 12:02:37 UTC; 12s ago

journalctl -u crio
Nov 17 20:29:43 instance-2025-11-16-15-31-55 systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
[...]
Nov 17 20:29:43 instance-2025-11-16-15-31-55 crio[11906]: time="2025-11-17T20:29:43.61758425Z" level=info msg="Using runtime handler lxcri version 8805687"

Yeeees :D
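
Optionally, if you install cri-tools, you can also poke the CRI socket directly with crictl before involving any kubelet (just a sanity check, nothing mandatory):

sudo apt install cri-tools
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info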

So we can install the kubelet, then add the node to an existing Kubernetes cluster.
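
If you went the classic route, the Kubernetes apt repository we added earlier makes that part a one-liner (I won't do it this way below, though):

sudo apt install kubelet kubeadm kubectl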

Fast forward

I was too lazy to set up a proper cluster with kubeadm or the like and then enroll a node manually with the token + CA cert hash. I could also have replayed my fun PoC with Clever Cloud's "linux" workers to create a control plane at Clever, then manually enroll the node with a bootstrap token (I did things fairly cleanly in that PoC).

So instead, I roughly replayed my demystifions-kubernetes project, which lets you set up a single-node cluster by hand, just by running the binaries.

Once the control plane is bootstrapped (etcd, api-server, controller-manager, scheduler), we stop BEFORE the containerd part (we’ve already configured cri-o) and manually start kubelet:

sudo bin/kubelet --kubeconfig admin.conf --container-runtime-endpoint=unix:///var/run/crio/crio.sock --fail-swap-on=false --cgroup-driver="systemd"
[...]
I1117 13:41:58.741561   88434 kubelet_node_status.go:78] "Successfully registered node" node="instance-2025-11-16-15-31-55"

We’re making progress! Let’s try to see the cluster’s health and deploy a pod:

$ export KUBECONFIG=admin.conf
$ kubectl get nodes
    NAME                           STATUS   ROLES    AGE   VERSION
    instance-2025-11-16-15-31-55   Ready    <none>   13m   v1.34.2

$ kubectl create deployment web --image=zwindler/vhelloworld
    deployment.apps/web created


$ kubectl get pods
    NAME                   READY   STATUS    RESTARTS   AGE
    web-6c8cc48c68-cdbtj   1/1     Running   0          4m17s

$ kubectl logs web-6c8cc48c68-cdbtj
    [Vweb] Running app on http://localhost:8081/
    [Vweb] We have 3 workers

Great success!

Conclusion

From here, CRI-O functionally shares Proxmox VE’s LXC engine.

To convince yourself, you can run the lxc-ls command, which displays a mix of "real" LXC containers (created with Proxmox: 151 and 200) and pod containers (the STOPPED ones are Cilium init containers):

$ lxc-ls -f
NAME                                                             STATE   AUTOSTART GROUPS IPV4                                               IPV6                 UNPRIVILEGED 
151                                                              STOPPED 0         -      -                                                  -                    true         
200                                                              STOPPED 0         -      -                                                  -                    true         
3045d1f3abcc069b1009acc869a27b09cf9531d0701a3f9d5a760213d57c7b20 STOPPED 0         -      -                                                  -                    false        
78057100e5d0613944c2be859dfd09f790eeba88bceebea0e04970841c9bd950 RUNNING 0         -      10.0.0.90                                          -                    false        
8e5506beea9a50214c46f31fa7fa6d04397963cb912aa397261359865d40a0cd RUNNING 0         -      10.0.0.167, 10.244.0.1, 203.0.113.42, 192.168.1.10 2001:db8::1 false        
abdcd5be0a27417aa9e11cc2b27882bc86a63d94f547f909332ca7470a5e075a STOPPED 0         -      -                                                  -                    false        
aeaed9c2f56328bb6196ede4a78f06c4bb2d3edf5dca7912cf38407b9bf6610a STOPPED 0         -      -                                                  -                    false        
ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d RUNNING 0         -      10.0.0.167, 10.244.0.1, 203.0.113.42, 192.168.1.10 2001:db8::1 false        
edb9d42e35a4029cfec5bed5597746bdf67fbe438f21446230252265ed1c849d STOPPED 0         -      -                                                  -                    false        

$ ps -ef |grep ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d
root     2674943       1  0 22:32 ?        00:00:00 /usr/libexec/crio/conmon -b /run/containers/storage/overlay-containers/ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d/userdata -c ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d --exit-dir /var/run/crio/exits -l /var/log/pods/kube-system_cilium-operator-56d6cd6767-dps78_6082ea2d-331e-4262-b8b3-d3f88eb4e446/cilium-operator/1.log --log-level info -n k8s_cilium-operator_cilium-operator-56d6cd6767-dps78_kube-system_6082ea2d-331e-4262-b8b3-d3f88eb4e446_1 -P /run/containers/storage/overlay-containers/ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d/userdata/conmon-pidfile -p /run/containers/storage/overlay-containers/ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d/userdata/pidfile --persist-dir /var/lib/containers/storage/overlay-containers/ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d/userdata -r /usr/local/bin/lxcri --runtime-arg --root=/var/lib/lxc --socket-dir-path /var/run/crio --syslog -u ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d -s
root     2674951 2674943  0 22:32 ?        00:00:00 /usr/local/libexec/lxcri/lxcri-start ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d /var/lib/lxc /var/lib/lxc/ed9aaf5a0fd4b11be121296290472d6d71d9016e4c6bb0d05fb2e4a7f4b7a85d/config

Nothing would prevent us from pushing it further and writing a small script that retrieves all the container information and creates the matching /etc/pve/lxc/xxx.conf files so they show up in the UI, as I did in the Docker article mentioned at the beginning.
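
A very rough sketch of the idea, purely hypothetical and untested (the generated configs would be cosmetic, nothing more):

# give an arbitrary free VMID range to the CRI-O-created containers
NEXT=900
for name in $(lxc-ls); do
  # skip the "real" Proxmox containers, whose names are purely numeric
  echo "$name" | grep -qE '^[0-9]+$' && continue
  # write a minimal config stub so the container shows up in the UI
  printf 'hostname: %s\nmemory: 512\n' "$(echo "$name" | cut -c1-12)" > "/etc/pve/lxc/${NEXT}.conf"
  NEXT=$((NEXT+1))
done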

In theory, if this step made sense, I could create releases myself on my fork to facilitate installation.

Finally, I could try to contribute my modifications to have everything merged into the main project, except the maintainers refuse PRs because the project is no longer maintained.

But I admit that after probably fifteen-odd hours of debugging, Go and C compilation, spread over 2 years, I've kind of reached the end of my patience for this "joke".

OK it was dumb, now that I’ve succeeded, time to sleep X_X.

Licensed under CC BY-SA 4.0
