In a previous post I covered the NFS service and mounting Kubernetes storage volumes onto NFS. But that approach has an obvious drawback: for every PVC an application needs, a matching PV has to be created by hand, which clearly goes against a programmer's lazy nature. This post shows how to provision PVs for PVCs dynamically.
Create rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: jhub
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: jhub
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: jhub
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: jhub
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: jhub
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
Run kubectl create -f rbac.yaml
Alternatively, you can use the YAML provided upstream; the effect is the same:
# wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/rbac.yaml
# kubectl apply -f rbac.yaml
Create the storage class, Storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  namespace: jhub # note: StorageClass is cluster-scoped, so this field is ignored
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
Run kubectl create -f Storageclass.yaml
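Optionally, if you want PVCs that don't specify any storageClassName to fall back to this class, you can also mark it as the cluster default. This is an optional tweak, not part of the steps above; a minimal sketch:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    # marks this class as the cluster-wide default for PVCs with no explicit class
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
```

Only one StorageClass in the cluster should carry this annotation at a time.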
Create deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: jhub
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.0.159
            - name: NFS_PATH
              value: /home/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.159
            path: /home/nfs
The parts to watch here are the IP address of the NFS server and the exported directory; mine are 192.168.0.159 and /home/nfs.
Check the result. The namespace I am using is jhub:
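For reference, the NFS server itself must export that directory to the cluster nodes before the provisioner can use it. A minimal sketch of /etc/exports on 192.168.0.159; the network range and export options below are assumptions, so adjust them to your environment:

```
# /etc/exports on the NFS server (assumed subnet and options; adjust as needed)
/home/nfs 192.168.0.0/24(rw,sync,no_root_squash)
```

After editing the file, run `exportfs -ra` on the NFS server to re-export the shares.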
[root@master nfs]# kubectl get all -n jhub
NAME                                         READY   STATUS    RESTARTS   AGE
pod/hub-58c98649b4-rj7dh                     1/1     Running   7          29d
pod/nfs-client-provisioner-548b89898-8pwq4   1/1     Running   0          10m
pod/proxy-84bf658c4c-95p8b                   1/1     Running   4          29d

NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/hub                 ClusterIP   10.101.86.29   <none>        8081/TCP       29d
service/proxy-api           ClusterIP   10.104.71.43   <none>        8001/TCP       29d
service/proxy-public        NodePort    10.106.54.58   <none>        80:30607/TCP   29d
service/redis-jhub-master   ClusterIP   10.98.88.151   <none>        6379/TCP       22h

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hub                      1/1     1            1           29d
deployment.apps/nfs-client-provisioner   1/1     1            1           10m
deployment.apps/proxy                    1/1     1            1           29d

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/hub-58c98649b4                     1         1         1       29d
replicaset.apps/nfs-client-provisioner-548b89898   1         1         1       10m
replicaset.apps/proxy-84bf658c4c                   1         1         1       29d

NAME                                READY   AGE
statefulset.apps/user-placeholder   0/0     29d
Since deployment.apps/nfs-client-provisioner shows READY 1/1, the service is up and running. Now let's run a few simple tests.
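As a simple test, a PVC that references the new StorageClass should get a PV created for it automatically. A minimal sketch; the claim name and requested size below are placeholders of mine, not from the setup above:

```yaml
# test-claim.yaml -- hypothetical test PVC; name and size are placeholders
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  namespace: jhub
spec:
  storageClassName: managed-nfs-storage   # must match the StorageClass created earlier
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

After `kubectl create -f test-claim.yaml`, `kubectl get pv,pvc -n jhub` should show an automatically created PV bound to the claim, and a matching subdirectory should appear under /home/nfs on the NFS server.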