NFS Provisioner Configuration

Reference configuration: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

When deploying this to Kubernetes, every node must have the NFS client packages installed and must be able to mount the NFS-shared path correctly.
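A quick sketch of that per-node preparation, assuming a Debian/Ubuntu or RHEL-family node and the NFS server address and export path used later in this document (192.168.0.215, /root/nfs_share):

```shell
# Install the NFS client tools (package name differs by distro)
sudo apt-get install -y nfs-common   # Debian/Ubuntu
# sudo yum install -y nfs-utils      # RHEL/CentOS/Rocky

# Confirm the server exports the expected path
showmount -e 192.168.0.215

# Verify the share actually mounts, then clean up
sudo mount -t nfs 192.168.0.215:/root/nfs_share /mnt
sudo umount /mnt
```

If `mount` fails here, the provisioner Pod will fail to mount the share for the same reason, so it is worth running this check on every node before applying the manifests.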

The provisioner automatically creates persistent volumes for containers as subdirectories of the NFS share.

It must be used together with a PVC (PersistentVolumeClaim).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1 # a single replica
  strategy:
    type: Recreate # delete the old Pod first, then create the new one; no rolling update
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner # the ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner
          image: 192.168.0.215:37080/public/nfs-subdir-external-provisioner:v4.0.2 # image to use
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: 192.168.0.215/nfs # provisioner name
            - name: NFS_SERVER
              value: 192.168.0.215 # NFS server address
            - name: NFS_PATH
              value: /root/nfs_share # NFS shared directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.215 # NFS server address
            path: /root/nfs_share # NFS shared directory
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: 192.168.0.215/nfs # provisioner name; must match the PROVISIONER_NAME env var in the Deployment above
parameters:
  archiveOnDelete: "false"
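Once the manifests above are saved, a deployment sketch might look like the following (the filename nfs-provisioner.yaml is an assumption; use whatever you saved the manifests as):

```shell
# Apply all of the provisioner manifests in one go
kubectl apply -f nfs-provisioner.yaml

# The provisioner Pod should reach Running
kubectl get pods -l app=nfs-client-provisioner

# The StorageClass should now be listed
kubectl get storageclass nfs-client
```

Optionally, the class can be made the cluster default so PVCs without an explicit storageClassName use it:

```shell
kubectl patch storageclass nfs-client \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```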

Test Configuration

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nginx-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-nginx-server
  template:
    metadata:
      labels:
        app: test-nginx-server
    spec:
      containers:
        - name: test-nginx
          image: 192.168.0.215:37080/public/nginx:1.24.0
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: www
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: test-nginx-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: test-nginx-service
spec:
  type: NodePort
  ports:
    - name: app-port
      port: 8080 # port the Service exposes inside the cluster
      targetPort: 80 # port the Pod itself listens on
      nodePort: 31000 # port opened on every cluster node for external access
  selector:
    app: test-nginx-server
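A sketch of verifying the test deployment end to end (the filename test-nginx.yaml and the node IP are assumptions; the NFS share path matches the provisioner configuration above):

```shell
# Apply the test PVC, Deployment, and Service
kubectl apply -f test-nginx.yaml

# The PVC should show STATUS "Bound", proving the provisioner created a PV
kubectl get pvc test-nginx-pvc

# The backing subdirectory appears on the NFS server, named after
# <namespace>-<pvc-name>-<pv-name>; write a page into it there:
#   echo "hello from nfs" > /root/nfs_share/default-test-nginx-pvc-pvc-*/index.html

# From outside the cluster, fetch the page via the NodePort
curl http://192.168.0.215:31000/
```

Deleting and re-creating the nginx Pod should serve the same index.html, since the data lives on the NFS share rather than in the container.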