Kubernetes in the Wild - Invalid Option "reclaimPolicy"


In this episode of Kubernetes in the Wild, we look at a pod that is failing to start in EKS. The Deployment YAML is shown below and references a persistent volume claim.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      hostname: db
      volumes:
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb-data
      containers:
      - name: db
        image: xxxxxxxxxxxx.dkr.ecr.ap-southeast-2.amazonaws.com/db
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /data

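Before looking at the claim itself, the failing pod can be inspected directly; its events usually point at the volume. A couple of starting points, using the app=db label from the Deployment above:

kubectl get pods -l app=db        # check pod status
kubectl describe pod -l app=db    # inspect pod events for volume-related errors
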
Checking the persistent volume claim with the following command reveals the problem.

kubectl describe pvc mongodb-data

Output:

Name: mongodb-data
Namespace: default
StorageClass: gp2-retain
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mongodb-data","namespace":"default"},"spec":{"acces…
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/aws-ebs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Block
Events:
Type     Reason              Age                 From                         Message
----     ------              ----                ----                         -------
Warning  ProvisioningFailed  6s (x10 over 8m6s)  persistentvolume-controller  Failed to provision volume with StorageClass "gp2-retain": invalid option "reclaimPolicy" for volume plugin kubernetes.io/aws-ebs
Mounted By: <none>

Reviewing the YAML definition of the gp2-retain storage class named in the error output reveals the source of the problem: reclaimPolicy is indented under parameters, where it doesn't belong, as it is a top-level StorageClass field.
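
If the original manifest isn't to hand, the live definition can be pulled straight from the cluster:

kubectl get storageclass gp2-retain -o yaml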

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
  reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

To resolve the problem, redefine the storage class with reclaimPolicy moved out of parameters to the top level.

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2-retain
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer

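StorageClass objects are largely immutable once created, so applying the corrected manifest over the existing object will most likely be rejected. The simplest path is to delete and recreate the class; the filename below is only an example:

kubectl delete storageclass gp2-retain          # remove the broken class
kubectl apply -f gp2-retain-storageclass.yaml   # recreate it from the corrected manifest
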
After fixing the storage class, the pod still failed to start; this time the cause was the claim's volumeMode being set to Block instead of Filesystem. The corrected PVC YAML is shown below for reference.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  storageClassName: gp2-retain
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
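
A claim's volumeMode cannot be changed in place, so the pending claim also needs to be deleted and recreated from the corrected manifest (safe here, as no volume was ever provisioned). The filenames below are only examples:

kubectl delete pvc mongodb-data                 # remove the pending claim
kubectl apply -f mongodb-data-pvc.yaml          # recreate it with volumeMode: Filesystem
kubectl rollout restart deployment/db           # restart the pod so it picks up the new claim
kubectl get pvc mongodb-data                    # confirm the claim reports Bound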