
RKE, Talos Linux, ... : MountVolume.NewMounter initialization failed for volume : path does not exist

Written by ~ zwindler ~

It all started…

Without revealing too much about my new job, I work on a Kubernetes platform that we manage ourselves with Talos Linux (an immutable Kubernetes OS that’s pretty cool!).

The platform I’m working on is still quite young, and in my development environment I have what I need for S3 buckets, but nothing to provision “block” storage PVCs. I could have hastily thrown together a Rook install.

But what can I say… Out of professional conscientiousness (aka being too lazy to type), and to save myself 4 and a half seconds of Google/ChatGPT queries, I ask a colleague whether he has a little piece of YAML lying around to quickly set up a small PVC backed by a hostPath.

See kubernetes.io/docs/concepts/storage/storage-classes/#local
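
For the record, the YAML in question looked something like this (a reconstruction: it’s simply the StorageClass and the PersistentVolume you’ll see corrected below, minus the nodeAffinity block):

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: for-science-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: for-science-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  local:
    path: /opt/tempDir
  storageClassName: for-science-sc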

It starts badly

First of all, if you try to apply this manifest, it won’t work. Indeed, what the official documentation doesn’t say (not on the storage classes page, anyway) is that a local PV of this kind is rejected outright if you don’t give it a nodeAffinity.

% kubectl apply -f toto.yaml
storageclass.storage.k8s.io/for-science-sc created
The PersistentVolume "for-science-pv" is invalid: spec.nodeAffinity: Required value: Local volume requires node affinity

This is easily corrected:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: for-science-sc
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: for-science-pv
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  local:
    path: /opt/tempDir
  storageClassName: for-science-sc
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1

From there, it works (on paper). Once the manifests are applied, you should have a StorageClass and a PV, ready to be used. We can even go one step further and create the PVC that will (eventually) bind to the PV.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: for-science-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: for-science-sc

End of article?

kubectl apply -f toto.yaml
storageclass.storage.k8s.io/for-science-sc unchanged
persistentvolume/for-science-pv created
persistentvolumeclaim/for-science-pvc created

No, of course not…
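
In fact, if you check at this point, the PVC isn’t even bound yet. Because of volumeBindingMode: WaitForFirstConsumer, it legitimately sits in Pending until a Pod consumes it, so we need a consumer:

kubectl get pv for-science-pv    # "Available" for now
kubectl get pvc for-science-pvc  # "Pending" until a consumer Pod is scheduled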

Run, pod! Ruuuuuun!

So let’s launch a small debug Pod to see if we can mount our PVC in a container:

---
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-pvc
spec:
  containers:
    - name: ubuntu
      image: ubuntu:latest
      command: ["/bin/bash", "-c", "sleep 3600"]
      volumeMounts:
        - name: storage
          mountPath: /toto
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: for-science-pvc
  restartPolicy: Never

Here, no need to add an affinity to force the Pod onto worker-1 (where the PV lives): the Kubernetes scheduler is smart enough to deduce it from the PV’s nodeAffinity.

% kubectl apply -f toto2.yaml 
pod/ubuntu-pvc created

And there, disaster strikes. The Pod stays stuck in ContainerCreating!!

%  kubectl get pods
NAME         READY   STATUS              RESTARTS   AGE
ubuntu-pvc   0/1     ContainerCreating   0          52s

% kubectl get events
LAST SEEN   TYPE      REASON                 OBJECT                                   MESSAGE
4m6s        Warning   ProvisioningFailed     persistentvolumeclaim/for-science-pvc   storageclass.storage.k8s.io "for-science-pv" not found
100s        Normal    WaitForFirstConsumer   persistentvolumeclaim/for-science-pvc   waiting for first consumer to be created before binding
88s         Normal    Scheduled              pod/ubuntu-pvc                           Successfully assigned default/ubuntu-pvc to worker-1
25s         Warning   FailedMount            pod/ubuntu-pvc                           MountVolume.NewMounter initialization failed for volume "for-science-pv" : path "/opt/tempDir" does not exist

Hmm… ok, path "/opt/tempDir" does not exist: as error messages go, that one is pretty explicit. I forgot to create the folder on the host in question.

Since Talos doesn’t give you a shell on its hosts, I’ll create a new Pod that mounts the host’s /opt at /host/opt, and create the folder manually from there:

---
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-hostpath
  namespace: somespecialnamespace
spec:
  containers:
    - name: ubuntu
      image: ubuntu:latest
      command: ["/bin/bash", "-c", "sleep 3600"]
      volumeMounts:
        - mountPath: /host/opt
          name: host
      securityContext:
        privileged: true
  volumes:
    - name: host
      hostPath:
        path: /opt
        type: Directory
  restartPolicy: Never
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                  - worker-1

Note: Talos’s default PodSecurity configuration prevents me from creating this Pod (hostPath + privileged) unless I put it in a namespace that carries an exception. But let’s say we fixed it.
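
For reference, such an exception can be declared with the standard Pod Security admission labels on the namespace (a sketch; somespecialnamespace is the placeholder name used above):

---
apiVersion: v1
kind: Namespace
metadata:
  name: somespecialnamespace
  labels:
    pod-security.kubernetes.io/enforce: privileged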

% kubectl exec -it -n somespecialnamespace pods/ubuntu-hostpath -- /bin/bash
root@ubuntu-hostpath:/# mkdir /host/opt/tempDir/
root@ubuntu-hostpath:/# [Ctrl]+[D]
exit
command terminated with exit code 1

Is it good now?

Directory: created. Mission accomplished, right? Right?

Well no!! The Pod is still stuck.

Why on earth can I see this folder in my ubuntu-hostpath Pod, but not in ubuntu-pvc, which mounts the same folder, but via a PVC???

The reason is… Because it’s a PVC.

Ok, alright, it’s not JUST that. It’s because on one side I go through a hostPath, and on the other through a PVC.

The most astute among you will have guessed that there’s a connection with the title of this article: the problem is the same on RKE, on Talos Linux, and surely on other similar systems.

These systems share the particularity of being “container OSes”. The Kubernetes components aren’t “installed” in the OS per se, but launched as containers. The OS itself is really just a big empty shell with just enough to run containers, which limits the attack surface and keeps these distributions rather lightweight.

And there’s a fundamental difference between how “a Pod with a hostPath” and “a Pod mounting a PVC” are handled. In the case of a PV created with kubernetes.io/no-provisioner, it’s the kubelet that has to find and mount the folder before handing it over to the Pod.

You see where I’m going? In a container OS, the kubelet is itself a container. It sees nothing but its own container filesystem (not the host’s /, just the contents of the big ZIP we pulled from the Docker Hub), plus whatever host folders were explicitly mounted into it.

In my case, this /opt/tempDir folder doesn’t exist for the kubelet container, simply because nothing ever mounted (or created) it inside the kubelet’s own filesystem.
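
You can convince yourself of this on an RKE1 node, for example, where the kubelet runs as a Docker container named kubelet (a sketch, assuming such a setup):

# On the host: the folder exists
ls -ld /opt/tempDir

# From inside the kubelet container: it doesn't,
# unless /opt was explicitly mounted into it
docker exec kubelet ls -ld /opt/tempDir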

The solution

Actually, as is often the case, I just had to read the documentation. Talos Linux gives 2 methods to solve this problem: bind-mounting the folder into the kubelet container via machine.kubelet.extraMounts in the machine config, or letting a provisioner like the Local Path Provisioner work out of /var, the part of the filesystem Talos keeps writable (see the sketch below for the first method).
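
For the extraMounts method, the machine config patch looks roughly like this (a sketch based on the Talos documentation; Talos keeps its writable storage under /var, hence a path like /var/mnt rather than my /opt/tempDir):

machine:
  kubelet:
    extraMounts:
      - destination: /var/mnt   # path as seen by the kubelet container
        type: bind
        source: /var/mnt        # same path on the Talos host
        options:
          - bind
          - rshared
          - rw

The PV’s local.path then has to point somewhere under that mount.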

I tested both, and they work perfectly. But beyond the solution, it’s really the journey that I found interesting to share with you :)

Licensed under CC BY-SA 4.0

