I set up a Ceph cluster and want to use it to provide storage for Kubernetes. I created a StorageClass backed by the Ceph cluster and then created a PVC.
Both the StorageClass and the PVC were created successfully, but the PVC stays in the Pending state and cannot be bound for use by the pod.
root@master:~# kubectl get storageclasses.storage.k8s.io
NAME           PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-storage   ceph.com/cephfs   Delete          Immediate           false                  20m
root@master:~# kubectl describe pvc myclaim |tail
volume.kubernetes.io/storage-provisioner: ceph.com/cephfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: nginx-deployment-69d9bb7478-nzspt
Events:
  Type    Reason                Age                 From                         Message
  ----    ------                ----                ----                         -------
  Normal  ExternalProvisioning  75s (x62 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "ceph.com/cephfs" or manually created by system administrator
Here is the YAML file I used to create the SC:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
data:
  keyring: |-
    QVFCTDlieGs1cjFZQ2hBQTE3a2lKdXFveUFmR0RIWU1xM0srN2c9PQ==
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage
provisioner: ceph.com/cephfs
parameters:
  monitors: monitor:6789
  pool: kubernetes
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  userId: admin
  userSecretName: ceph-secret
  fsName: ext4
  readOnly: "false"
Here is the YAML file for the PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ceph-storage
  resources:
    requests:
      storage: 10Mi
Here is the status of the Ceph cluster:
root@monitor:~# ceph -s
  cluster:
    id:     49336b62-5502-4af7-97f5-67a289e244c7
    health: HEALTH_WARN
            mon is allowing insecure global_id reclaim
            Module 'restful' has failed dependency: PyO3 modules may only be initialized once per interpreter process
            1 monitors have not enabled msgr2

  services:
    mon: 1 daemons, quorum monitor (age 45m)
    mgr: monitor(active, since 40m)
    mds: 1/1 daemons up
    osd: 2 osds: 2 up (since 45m), 2 in (since 2d)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 49 pgs
    objects: 26 objects, 8.8 KiB
    usage:   45 MiB used, 9.9 GiB / 10 GiB avail
    pgs:     49 active+clean
Then, starting from the official Kubernetes CephFS example, I made a few small changes to its YAML file; the content is as follows:
apiVersion: v1
kind: Pod
metadata:
  name: cephfs2
spec:
  containers:
    - name: cephfs-rw
      image: docker.io/library/debian:unstable-slim
      command: ["tail"]
      args: ["-f", "/etc/hosts"]
      volumeMounts:
        - mountPath: "/mnt/cephfs"
          name: cephfs
  volumes:
    - name: cephfs
      cephfs:
        monitors:
          - monitor:6789
        user: admin
        secretRef:
          name: ceph-secret
        readOnly: true
This pod specifies the Ceph storage directly. No PVC or PV was created this way, yet it mounted the storage successfully.
It is probably a problem with the StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs
provisioner: ceph.com/cephfs
parameters:
  monitors: 172.24.0.6:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: "kube-system"
  claimRoot: /pvc-volumes
Run kubectl get events to check for error events. Ideally find the actual error log, or share more of your configuration.
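A minimal set of commands for collecting that information; the provisioner pod's name and namespace below are assumptions, since ceph.com/cephfs is normally served by a separate external cephfs-provisioner pod that has to be running in the cluster:

# recent events, oldest first
kubectl get events --sort-by=.metadata.creationTimestamp
# full event history for the stuck claim
kubectl describe pvc myclaim
# look for the external cephfs provisioner and read its logs
kubectl get pods -A | grep -i provision
kubectl logs -n kube-system deploy/cephfs-provisioner   # name and namespace are assumptions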
You should first kubectl describe the storage objects and find the root cause in the error events. For now I can only speculate as follows:
When a PVC (PersistentVolumeClaim) backed by Ceph storage stays in the Pending state, the usual causes are:
Insufficient pool capacity: if the storage pool does not have enough free space to satisfy the PVC, the PVC stays Pending. You can check the pool's usage to rule this out (see the command sketch after this list).
StorageClass problem: if the StorageClass is misconfigured or cannot satisfy the PVC, the PVC also stays Pending. Review the StorageClass's configuration and parameters and make sure they match what the PVC requests.
PV problem: if there is no available PV (PersistentVolume), or no PV satisfies the PVC, the PVC also stays Pending. Check the status and attributes of the PVs.
Ceph cluster problem: a failure or connectivity issue in the Ceph cluster can also leave the PVC Pending. Check the Ceph cluster's status and logs.
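A quick sketch for checking the pool-capacity and PV points above, run on the Ceph monitor and the Kubernetes master respectively:

# capacity on the Ceph side, overall and per pool
ceph df
ceph osd pool ls detail
# existing PVs and whether any Available one matches the claim
kubectl get pv
kubectl get pvc myclaim -o wide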
To address these, you can try the following:
Make sure the storage pool has enough free space and that the Ceph cluster is healthy.
Check the StorageClass's configuration and parameters and make sure they can satisfy the PVC.
Check the PVs' status and attributes and make sure one of them can satisfy the PVC.
If you still cannot resolve it, look at the PVC's event log for more detail on why it is Pending, and try recreating the PVC.
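A minimal sketch of that last step, assuming the PVC and StorageClass manifests are saved as pvc.yaml and sc.yaml (the file names are assumptions):

# re-read the events for the stuck claim
kubectl describe pvc myclaim
# recreate the claim after fixing the StorageClass; note that deletion only
# completes once no pod is still using the PVC (pvc-protection finalizer)
kubectl delete -f pvc.yaml
kubectl apply -f sc.yaml -f pvc.yaml
kubectl get pvc myclaim -w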