# Installing Kubernetes with kubeadm, with an External TLS-Encrypted etcd Cluster

etcd cluster setup: [[ETCD 集群安装配置]] · TLS certificate creation: [[CFSSL 创建证书]]
## Prerequisites

All of the prerequisite steps must be performed on every node.

Requirements:

- CPU: two cores or more recommended
- Memory: at least 2 GB
- MAC address: must be unique on every node
- Swap: disabled
- Unobstructed network connectivity between all nodes
### Set hostnames

Give each node its own hostname:

```bash
hostnamectl set-hostname <name>
```
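For example, for the two machines used throughout this article (the hostnames and IPs below are the ones that appear in later command output):

```bash
# on 192.168.5.128
hostnamectl set-hostname k8s-master
# on 192.168.5.129
hostnamectl set-hostname k8s-node-1
```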
### Edit hosts

In `/etc/hosts` on every node, map each node's IP address to its hostname (the second entry below is reconstructed from the node IP shown later in this article):

```
192.168.5.128 k8s-master
192.168.5.129 k8s-node-1
```
### Disable the firewall and enable kernel network parameters

```bash
systemctl stop firewalld
systemctl disable firewalld

# add the bridge netfilter settings to /etc/sysctl.conf and reload
cat >> /etc/sysctl.conf << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p

# load the br_netfilter module now and on every boot
modprobe br_netfilter
echo modprobe br_netfilter >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
```
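To confirm the settings took effect, a quick check (not part of the original steps):

```bash
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```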
### Disable SELinux

```bash
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
```
### Disable swap

Comment out the line containing `swap` in `/etc/fstab`, as in the example below (the commented swap entry follows the CentOS default volume naming visible in the other lines):

```
/dev/mapper/centos-root /                          ext4 defaults 1 1
UUID=b6a81016-1920-44c6-b713-2547ccbc9adf /boot    ext4 defaults 1 2
/dev/mapper/centos-home /home                      ext4 defaults 1 2
#/dev/mapper/centos-swap swap                      swap defaults 0 0
```
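To turn swap off immediately, without waiting for the reboot (standard practice, not shown in the original):

```bash
swapoff -a
```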
### Reboot

Reboot each node so that all of the changes above take effect.
## Install Docker

Docker must be installed on every node, and the service must be enabled to start on boot.

```bash
# remove any old Docker packages
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

# install prerequisites and add the Aliyun Docker CE repository
yum install -y yum-utils \
           device-mapper-persistent-data \
           lvm2
yum-config-manager \
    --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# install the latest Docker CE ...
yum install docker-ce docker-ce-cli containerd.io -y

# ... or list the available versions and install a specific one
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-20.10.8-3.el8

systemctl start docker
systemctl enable docker
```

Registry mirror configuration (this JSON goes into Docker's daemon configuration, `/etc/docker/daemon.json`):

```json
{
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com",
    "https://cr.console.aliyun.com/"
  ]
}
```

To use Docker as a non-root user, add yourself to the `docker` group and restart the service:

```bash
sudo usermod -aG docker <your username>
sudo systemctl restart docker
```
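A sketch for applying the mirror configuration above and confirming Docker picked it up (this assumes the file did not exist before; adjust if you already have a daemon.json):

```bash
sudo mkdir -p /etc/docker
sudo vim /etc/docker/daemon.json              # paste the JSON above
sudo systemctl restart docker
docker info | grep -A 4 'Registry Mirrors'    # the mirrors should be listed here
```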
## Required when using Docker on k8s 1.24+: cri-dockerd

Kubernetes 1.24 and later removed the built-in dockershim, so using Docker as the container runtime requires manually installing the CRI shim, cri-dockerd.

On a normal x86 server you can simply download a package from the project's releases page and install it. On arm machines it has to be built by hand.

Build procedure on RockyLinux 9.1, with Docker installed and running beforehand:

```bash
git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
mkdir bin
go build -o bin/cri-dockerd

mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd

# install the systemd units and point them at the binary location used above
cp -a packaging/systemd/* /usr/lib/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /usr/lib/systemd/system/cri-docker.service

systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
```
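Before pointing kubeadm at it, confirm the socket is up (a quick check; the socket path matches the `--cri-socket` value used when joining nodes later in this article):

```bash
systemctl status cri-docker.socket --no-pager
ls -l /run/cri-dockerd.sock
```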
## Install kubeadm, kubelet, and kubectl

Add the package mirror (`/etc/yum.repos.d/kubernetes.repo`):

```ini
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
```

```bash
yum clean all && yum makecache
```
Install the packages, picking the version you are deploying:

```bash
# for a 1.21.4 cluster
yum install -y kubelet-1.21.4 kubeadm-1.21.4 kubectl-1.21.4
# for a 1.26.1 cluster
yum install -y kubelet-1.26.1 kubeadm-1.26.1 kubectl-1.26.1
```
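Optionally verify what was installed:

```bash
kubeadm version -o short
kubelet --version
kubectl version --client
```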
## Prepare to initialize the cluster (master node)

Print the default init configuration and export it to a file:

```bash
kubeadm config print init-defaults > init-defaults.yaml
```
Standard setup: edit the exported file following the example below. If you intend to use calico, read the section [[#Configure the calico network (verified on 1.26.1)]] of this article first.
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456780abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.5.128
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.244.0.0/16
scheduler: {}
```
If you are using an external etcd cluster with TLS enabled, copy the certificates to the same path on every node and modify the `etcd` section of the init configuration. A complete example:
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.5.200
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master-1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    endpoints:
    - https://192.168.5.200:2379
    - https://192.168.5.201:2379
    - https://192.168.5.202:2379
    - https://192.168.5.203:2379
    - https://192.168.5.204:2379
    caFile: /root/etcd/cert/ca.pem
    certFile: /root/etcd/cert/etcd.pem
    keyFile: /root/etcd/cert/etcd-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.244.0.0/16
scheduler: {}
```
### View and download the images

You can pull the images in advance and import them into your local Docker:
```
[root@k8s-master k8s-install-file]# kubeadm config images list --config init-defaults.yaml
registry.aliyuncs.com/k8sxio/kube-apiserver:v1.22.0
registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.22.0
registry.aliyuncs.com/k8sxio/kube-scheduler:v1.22.0
registry.aliyuncs.com/k8sxio/kube-proxy:v1.22.0
registry.aliyuncs.com/k8sxio/pause:3.5
registry.aliyuncs.com/k8sxio/etcd:3.5.0-0
registry.aliyuncs.com/k8sxio/coredns:v1.8.4

[root@k8s-master k8s-install-file]# kubeadm config images pull --config init-defaults.yaml
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.21.0
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.21.0
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.21.0
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.21.0
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.4.13-0
failed to pull image "registry.aliyuncs.com/k8sxio/coredns:v1.8.0": output: Error response from daemon: manifest for registry.aliyuncs.com/k8sxio/coredns:v1.8.0 not found: manifest unknown: manifest unknown, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

[root@k8s-master k8s-install-file]# docker search coredns:v1.8.0
NAME                       DESCRIPTION                               STARS  OFFICIAL  AUTOMATED
louwy001/coredns-coredns   k8s.gcr.io/coredns/coredns:v1.8.0         1
ninokop/coredns            k8s.gcr.io/coredns/coredns:v1.8.0         0
xwjh/coredns               from k8s.gcr.io/coredns/coredns:v1.8.0    0
hhhlhh/coredns-coredns     FROM k8s.gcr.io/coredns/coredns:v1.8.0    0
suxishuo/coredns           k8s.gcr.io/coredns/coredns:v1.8.0         0
fengbb/coredns             k8s.gcr.io/coredns/coredns:v1.8.0         0

[root@k8s-master k8s-install-file]# docker pull louwy001/coredns-coredns:v1.8.0
v1.8.0: Pulling from louwy001/coredns-coredns
c6568d217a00: Pull complete
5984b6d55edf: Pull complete
Digest: sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe61
Status: Downloaded newer image for louwy001/coredns-coredns:v1.8.0
docker.io/louwy001/coredns-coredns:v1.8.0

[root@k8s-master k8s-install-file]# docker tag louwy001/coredns-coredns:v1.8.0 registry.aliyuncs.com/k8sxio/coredns:v1.8.0
[root@k8s-master k8s-install-file]#
[root@k8s-master k8s-install-file]# docker rmi louwy001/coredns-coredns:v1.8.0
Untagged: louwy001/coredns-coredns:v1.8.0
Untagged: louwy001/coredns-coredns@sha256:10ecc12177735e5a6fd6fa0127202776128d860ed7ab0341780ddaeb1f6dfe6
[root@k8s-master k8s-install-file]#
```
### Tearing the cluster down

If initialization failed, or you passed the wrong parameters, run the commands below to reset everything:

```bash
kubeadm reset
iptables -F
iptables -X
ipvsadm -C
rm -rf /etc/cni/net.d
rm -rf $HOME/.kube/config
```
## Run the initialization

```bash
kubeadm init --config init-defaults.yaml
```
After initialization finishes, follow the printed instructions to do the initial setup, and write down the cluster join command and its parameters:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
systemctl enable kubelet.service

# the join command printed by kubeadm init
kubeadm join 192.168.5.128:6443 --token abcdef.0123456780abcdef \
    --discovery-token-ca-cert-hash sha256:d27cf2fd4a45c3ce8c59cdf0163edbf7cd4bc55a994a34404c0e175a47770798

# if you lose it, you can regenerate or inspect tokens later
kubeadm token create --print-join-command
kubeadm token list
```
## Joining other nodes to the cluster

Make sure kubeadm, kubelet, and kubectl are installed, then run the join command printed above on each node and enable kubelet on boot. You can also copy the cluster config file from the master to the node so that kubectl works on the node as well; this step is optional.

```bash
systemctl enable kubelet.service
scp /etc/kubernetes/admin.conf k8s-node-1:~/.kube/config
```
On 1.24.x and later, because of the CRI change, joining a cluster while still using Docker requires specifying the CRI socket:

```bash
kubeadm join 192.168.36.200:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:9af4803dd7446649d887ebdf0da47edc5e713fa9cb4e32bf7d7f8f49e75cb8fa --cri-socket=unix:///run/cri-dockerd.sock
```
## Configure the calico network (verified on 1.26.1)

Official quickstart: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/quickstart#install-calico

Download the two YAML manifests referenced by the quickstart:

```bash
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/custom-resources.yaml
```
In the second file, `custom-resources.yaml`, change the `ipPools.cidr` value to a subnet that does not overlap anything in your LAN; it must also be identical to the `podSubnet` value in the kubeadm-config.
> **Note**: podSubnet can be set in several ways (a YAML sketch follows this note):
>
> - Pass `--pod-network-cidr=192.168.0.0/16` to `kubeadm init` when initializing the cluster.
> - Edit the `networking.podSubnet` value in the init configuration exported by `kubeadm config`; if the field does not exist, add it by hand.
> - If you missed it at init time, edit the in-cluster kubeadm-config directly with `kubectl edit configmap kubeadm-config -n kube-system -o yaml`: find `networking` and add the `podSubnet` field, or modify it if it already exists.
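A minimal sketch of the second option against the `init-defaults.yaml` exported earlier; only the `networking` block is shown, and the cidr used is the calico quickstart default:

```yaml
# fragment of init-defaults.yaml (ClusterConfiguration section)
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 192.168.0.0/16   # must match ipPools.cidr in custom-resources.yaml
```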
> **Note**: the difference between serviceSubnet and podSubnet
>
> In a Kubernetes cluster, `serviceSubnet` and `podSubnet` are two different subnets used for different purposes.
>
> - `serviceSubnet` defines the IP pool for Service resources. Every Service is assigned a virtual IP (its ClusterIP) from this subnet, which load-balances to the backing Pods. By default its CIDR is `10.96.0.0/12`; it can be changed with the kube-apiserver flag `--service-cluster-ip-range`, e.g. `--service-cluster-ip-range=10.244.0.0/16`.
> - `podSubnet` defines the IP pool for the Pod network. Every Pod is assigned an IP from this subnet; these addresses are used inside the cluster, allocated in the network namespaces created on each node. A common choice is `10.244.0.0/16` (the flannel default); with kubeadm it is set as described in the previous note.
>
> If you give `serviceSubnet` and `podSubnet` the same CIDR they will overlap, which can lead to network conflicts and unpredictable behavior. Make sure they are assigned different CIDRs.
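With the edits done, install the operator and then the custom resources, in the order given by the official quickstart:

```bash
kubectl create -f tigera-operator.yaml
kubectl create -f custom-resources.yaml
```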
If you modified the in-cluster kubeadm-config, restart the cluster for the change to take effect. Otherwise just wait for calico to become healthy, checking with the command from the official docs:

```bash
kubectl get pods -n calico-system
```

If that command says the namespace does not exist, and the nodes never reach Ready after configuration, use the following to see what went wrong inside calico:

```bash
kubectl get tigerastatus -o yaml
```
## Configure the Flannel network (not verified on 1.26.x)

Install flannel so that Pods on different nodes can reach each other.

Edit the cluster's `kube-controller-manager.yaml` manifest and append the network parameters:

```bash
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# append these two flags to the kube-controller-manager command:
#   --allocate-node-cidrs=true
#   --cluster-cidr=10.244.0.0/16

systemctl restart kubelet
```
If a machine has multiple network cards, you may need to specify the interface. As the referenced write-up notes in its "安装 Pod Network" section:

"Also note that if your nodes have more than one NIC, you need to use the `--iface` argument in kube-flannel.yml to specify the name of the interface on the cluster's internal network; otherwise DNS may fail to resolve."

I would guess it is added at the position shown below; note the `kind` and the information under `metadata`:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      # ...
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33
        resources:
          requests:
            # ...
```
Fetch the flannel deployment manifest and pull its image:

```bash
curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml > kube-flannel.yml
cat kube-flannel.yml | grep image

# quay.io may be unreachable; pull a mirrored copy and re-tag it
docker search flannel:v0.14.0
docker pull xwjh/flannel:v0.14.0
docker tag xwjh/flannel:v0.14.0 quay.io/coreos/flannel:v0.14.0
docker rmi xwjh/flannel:v0.14.0

kubectl create -f kube-flannel.yml
```
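To check that the DaemonSet rolled out a Pod on every node (the label comes from the manifest shown above):

```bash
kubectl -n kube-system get pods -l app=flannel -o wide
```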
## Verification & other settings

This completes the basic single-master setup. For additional nodes or multiple masters, look up the relevant procedures for joining them to the cluster.
### Verify node status

Run `kubectl get node` to check the state of the cluster's nodes. If you run it before installing flannel, you will see something like this:

```
[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   21h   v1.21.4
k8s-node-1   NotReady   <none>                 21h   v1.21.4
[root@k8s-master ~]#
```
Once flannel is installed correctly, both nodes switch to the `Ready` state:

```
[root@k8s-master k8s-install-file]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   22h   v1.21.4
k8s-node-1   Ready    <none>                 21h   v1.21.4
[root@k8s-master k8s-install-file]#
```
### Verify coredns status

After installation, listing the Pods may show coredns in an error state, unable to start:

```
[root@k8s-master k8s-install-file]# kubectl get pod --all-namespaces
NAMESPACE     NAME                                 READY   STATUS             RESTARTS   AGE
kube-system   coredns-67574f65b-fh2kq              0/1     ImagePullBackOff   0          22h
kube-system   coredns-67574f65b-qspjm              0/1     ImagePullBackOff   0          22h
kube-system   etcd-k8s-master                      1/1     Running            1          22h
kube-system   kube-apiserver-k8s-master            1/1     Running            1          22h
kube-system   kube-controller-manager-k8s-master   1/1     Running            1          5h44m
kube-system   kube-flannel-ds-h5fd6                1/1     Running            0          7m33s
kube-system   kube-flannel-ds-z945p                1/1     Running            0          7m33s
kube-system   kube-proxy-rmwcx                     1/1     Running            1          21h
kube-system   kube-proxy-vzmjw                     1/1     Running            1          22h
kube-system   kube-scheduler-k8s-master            1/1     Running            1          22h
[root@k8s-master k8s-install-file]#
```
Let's look at the Pod's error details:

```
[root@k8s-master k8s-install-file]# kubectl -n kube-system describe pod coredns-67574f65b-fh2kq
Name:                 coredns-67574f65b-fh2kq
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-node-1/192.168.5.129
Start Time:           Tue, 17 Aug 2021 14:54:36 +0800
Labels:               k8s-app=kube-dns
                      pod-template-hash=67574f65b
Annotations:          <none>
Status:               Pending
IP:                   10.244.1.3
IPs:
  IP:           10.244.1.3
Controlled By:  ReplicaSet/coredns-67574f65b
Containers:
  coredns:
    Container ID:
    Image:         registry.aliyuncs.com/k8sxio/coredns:v1.8.0
    Image ID:
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-trjcg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  kube-api-access-trjcg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  4h53m (x1020 over 21h)  default-scheduler  0/2 nodes are available: 2 node(s) had taint {node.kubernetes.io/not-ready:}, that the pod didn't tolerate.
  Warning  FailedScheduling  8m6s (x9 over 14m)      default-scheduler  0/2 nodes are available: 2 node(s) had taint {node.kubernetes.io/not-ready:}, that the pod didn't tolerate.
  Normal   Scheduled         7m56s                   default-scheduler  Successfully assigned kube-system/coredns-67574f65b-fh2kq to k8s-node-1
  Normal   Pulling           6m27s (x4 over 7m54s)   kubelet            Pulling image "registry.aliyuncs.com/k8sxio/coredns:v1.8.0"
  Warning  Failed            6m26s (x4 over 7m53s)   kubelet            Failed to pull image "registry.aliyuncs.com/k8sxio/coredns:v1.8.0": rpc error: code = Unknown desc = Error response from daemon: manifest for registry.aliyuncs.com/k8sxio/coredns:v1.8.0 not found: manifest unknown: manifest unknown
  Warning  Failed            6m26s (x4 over 7m53s)   kubelet            Error: ErrImagePull
  Warning  Failed            6m15s (x6 over 7m53s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff           2m45s (x21 over 7m53s)  kubelet            Back-off pulling image "registry.aliyuncs.com/k8sxio/coredns:v1.8.0"
```
The error is an image pull failure. The master does have this image, so it is the node that is missing it. Export `registry.aliyuncs.com/k8sxio/coredns:v1.8.0` from the master, copy it to the node, and import it there:

```bash
# on the master
docker save -o coredns.zip registry.aliyuncs.com/k8sxio/coredns:v1.8.0
scp coredns.zip k8s-node-1:~

# on the node
docker load -i coredns.zip
```
Check the status again:

```
[root@k8s-master k8s-install-file]# kubectl -n kube-system get pods
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-67574f65b-fh2kq              1/1     Running   0          22h
coredns-67574f65b-qspjm              1/1     Running   0          22h
etcd-k8s-master                      1/1     Running   1          22h
kube-apiserver-k8s-master            1/1     Running   1          22h
kube-controller-manager-k8s-master   1/1     Running   1          5h58m
kube-flannel-ds-h5fd6                1/1     Running   0          21m
kube-flannel-ds-z945p                1/1     Running   0          21m
kube-proxy-rmwcx                     1/1     Running   1          22h
kube-proxy-vzmjw                     1/1     Running   1          22h
kube-scheduler-k8s-master            1/1     Running   1          22h
[root@k8s-master k8s-install-file]#
```
### Node role shows as none

Looking at the detailed node information, the node's role is `<none>`; we will manually mark it as a worker:

```
[root@k8s-master k8s-install-file]# kubectl get node -o wide
NAME         STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   22h   v1.21.4   192.168.5.128   <none>        CentOS Linux 8   4.18.0-305.12.1.el8_4.x86_64   docker://20.10.8
k8s-node-1   Ready    <none>                 22h   v1.21.4   192.168.5.129   <none>        CentOS Linux 8   4.18.0-305.12.1.el8_4.x86_64   docker://20.10.8
[root@k8s-master k8s-install-file]
```
Run the following command to change the node role:

```bash
kubectl label node <node name> node-role.kubernetes.io/node=
```

```
[root@k8s-master k8s-install-file]# kubectl label node k8s-node-1 node-role.kubernetes.io/node=
node/k8s-node-1 labeled
[root@k8s-master k8s-install-file]#
[root@k8s-master k8s-install-file]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   22h   v1.21.4
k8s-node-1   Ready    node                   22h   v1.21.4
[root@k8s-master k8s-install-file]#
```
### Setting node roles

```bash
# label a node as master / worker
kubectl label node <node name> node-role.kubernetes.io/master=
kubectl label node <node name> node-role.kubernetes.io/node=

# taint a master so regular workloads are not scheduled onto it
kubectl taint node <node name> node-role.kubernetes.io/master=true:NoSchedule
kubectl taint node <node name> node-role.kubernetes.io/master=:NoSchedule

# remove a role label (note the trailing "-")
kubectl label node k8s-node-1 node-role.kubernetes.io/node-
```
Allow Pods to run on all nodes, including the master:

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
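On 1.24+ the taint was renamed, so on newer clusters the equivalent command is the following (an addition based on the upstream master-to-control-plane rename, not in the original):

```bash
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```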
### Change the NodePort port range

The default port range is 30000-32767. The change takes effect a short while after editing:

```
--service-node-port-range=0-65535
```
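This flag belongs in the API server's static Pod manifest. A minimal sketch, assuming the default kubeadm layout (kubelet restarts the Pod automatically when the manifest changes):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=0-65535   # appended line
    # ...existing flags...
```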
## Install ingress-nginx

Reference: https://kubernetes.github.io/ingress-nginx/deploy/#docker-desktop

Use the example manifest provided by the official docs (Deployment → Installation Guide → Docker Desktop):

```bash
kubectl apply -f ingress-nginx.yaml
```

If the image pull fails, use a proxy, or search for a copy someone else has uploaded and re-tag it.
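The re-tag trick is the same as for coredns and flannel above. A sketch: the mirror repository below is a placeholder, and the target image and tag must match whatever your `ingress-nginx.yaml` actually references:

```bash
# <mirror>/ingress-nginx-controller is hypothetical; find a real copy with `docker search`
docker pull <mirror>/ingress-nginx-controller:v1.5.1
docker tag <mirror>/ingress-nginx-controller:v1.5.1 registry.k8s.io/ingress-nginx/controller:v1.5.1
docker rmi <mirror>/ingress-nginx-controller:v1.5.1
```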