Scaling Out a K8s Cluster
1. Create a token with kubeadm as the credential for kubelets joining the cluster
kubeadm token create ktoken.tokencreateadmin --ttl 0 --print-join-command
Note: ktoken.tokencreateadmin is customizable, but the format is fixed: a 6-character part, a dot, then a 16-character part. Also save the join command this prints; you will need it shortly.....
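If you want to double-check the token afterwards, kubeadm token list is a standard subcommand you can run on the master:
# List bootstrap tokens; a TTL of <forever> confirms --ttl 0 took effect
kubeadm token list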
2. Prepare the environment on the new node
| Node | Hostname |
|---|---|
| 10.0.0.234 | worker234 |
2.1 Install the Docker environment
Omitted here; this should be familiar ground by now (one common path is sketched below for reference)......
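A minimal sketch of one common install path on CentOS 7; the Aliyun docker-ce mirror URL is an assumption, any docker-ce repo works:
# Add a docker-ce yum repo (mirror URL is an assumption) and install
yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable --now docker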
2.2 Prepare the VM operating system
2.2.1 Disable the swap partition
# Disable temporarily (until reboot)
swapoff -a && sysctl -w vm.swappiness=0
# Disable persistently via the config file
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
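To confirm swap is really off (both checks use standard tooling):
# The Swap line should show all zeros
free -h | grep -i swap
# swappiness should report 0
sysctl vm.swappiness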
2.2.2 Make sure every node's MAC address and product_uuid are unique
ifconfig ens33 | grep ether | awk '{print $2}'
cat /sys/class/dmi/id/product_uuid
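To compare all nodes in one pass, a loop like this works; it assumes passwordless SSH from the master, and ens33 plus the IPs come from this lab's layout:
# Print MAC and product_uuid for every node so duplicates stand out
for ip in 10.0.0.231 10.0.0.232 10.0.0.233 10.0.0.234; do
  echo "== $ip =="
  ssh root@$ip "ifconfig ens33 | awk '/ether/{print \$2}'; cat /sys/class/dmi/id/product_uuid"
done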
2.2.3 Check that the nodes can reach each other over the network
Use the ping command, for example as below.
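A quick loop you might run from the new node worker234 (node IPs taken from this cluster):
# One ping per existing node; any UNREACHABLE must be fixed before joining
for ip in 10.0.0.231 10.0.0.232 10.0.0.233; do
  ping -c 1 -W 1 $ip >/dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done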
2.2.4 Allow iptables to inspect bridged traffic
cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
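One caveat: modules-load.d is only read at boot, so load the module now as well before applying the sysctls:
# Load br_netfilter immediately (the k8s.conf above only takes effect on reboot)
modprobe br_netfilter
# Confirm it is loaded
lsmod | grep br_netfilter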
# Apply the configuration
sysctl --system
2.2.5 Check whether the required ports are in use
See the official reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/
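As a spot check on a worker node, something like this does the job; ss is from iproute2, and 10250 (kubelet) and 10256 (kube-proxy) are the worker ports listed on the page above:
# No output means the ports are still free
ss -tlnp | grep -E ':(10250|10256)\b' || echo "worker ports are free"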
2.2.6 Disable the firewall
systemctl disable --now firewalld
2.2.7 Disable SELinux
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# Check it
grep ^SELINUX= /etc/selinux/config
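The sed edit only applies after a reboot; to drop SELinux to permissive mode for the current session as well:
# Permissive immediately; the config file handles future boots
setenforce 0
getenforce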
2.2.8 Modify Docker's underlying container runtime settings
[root@worker234 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/","https://reg-mirror.qiniu.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart docker
systemctl restart docker
# Verify the configuration
# Tip: if the cgroup driver is not changed to systemd, it stays at the default cgroupfs, and initializing the master node will fail
[root@worker234 ~]# docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
2.2.9 Prepare the new node worker234 to join the cluster
2.2.9.1 Configure the package repository on the new node
[root@worker234 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
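To make sure the repo resolves and the pinned version is actually available (standard yum commands; the version pin comes from this guide):
# Refresh metadata and confirm kubeadm 1.23.17 is offered
yum makecache fast
yum list kubeadm --showduplicates | grep 1.23.17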
2.2.9.2 Install the kubeadm, kubelet, and kubectl packages on the new node
[root@worker234 ~]# yum -y install kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0
2.2.9.3 Enable kubelet at boot on the new node
systemctl enable --now kubelet
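Do not be alarmed if kubelet restarts in a loop at this point; until the node joins, it has no cluster config to run with:
# Expected to show activating/auto-restart until kubeadm join succeeds
systemctl status kubelet --no-pager | head -n 5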
2.2.9.4 Join the new node to the existing cluster
Note: join using the command we generated in step 1. Do not copy mine: the discovery hash is computed from your cluster's CA certificate, so yours will definitely differ, and copying mine is guaranteed to fail.
kubeadm join 10.0.0.231:6443 --token ktoken.tokencreateadmin --discovery-token-ca-cert-hash sha256:c5f7a6b59aef4758184213cb6474d0e384e44e4428888f8d8bf01c7a78de5f79
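If you lose the join command, you can regenerate it on the master at any time:
# Creates a fresh token and prints the full join command
kubeadm token create --print-join-command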
3. Verify on the master that the new node joined successfully
3.1 View the current node list
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 9d v1.23.17
worker232 Ready <none> 9d v1.23.17
worker233 Ready <none> 9d v1.23.17
worker234   Ready    <none>                 3m34s   v1.23.17
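Optionally, label the new node so ROLES no longer shows <none>; this is purely cosmetic, and the label key follows the usual node-role convention:
# Give worker234 a worker role label for the ROLES column
kubectl label node worker234 node-role.kubernetes.io/worker=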
3.2 Check whether the flannel component was installed automatically
[root@master231 ~]# kubectl get pods -o wide -n kube-flannel
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-8wq6n 1/1 Running 0 3h36m 10.0.0.232 worker232 <none> <none>
kube-flannel-ds-9xd8m 1/1 Running 0 3h18m 10.0.0.231 master231 <none> <none>
kube-flannel-ds-n7nk6 1/1 Running 0 3m47s 10.0.0.234 worker234 <none> <none>
kube-flannel-ds-sddmz   1/1     Running   0          3h20m   10.0.0.233   worker233   <none>           <none>
3.3 Create a new Pod and check that it is scheduled onto the new node
Omitted here..... (a minimal test is sketched below). At this point, the K8s scale-out is complete.
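A minimal sketch of such a test; the pod name scale-test and image nginx:alpine are placeholders, and nodeName pins the Pod directly to worker234, bypassing the scheduler:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scale-test
spec:
  nodeName: worker234
  containers:
  - name: web
    image: nginx:alpine
EOF
# The NODE column should read worker234
kubectl get pod scale-test -o wide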
Scaling In a K8s Cluster
1. View the Pods before the eviction
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-apps-v1-848cdd4dbb-6bzr8 1/1 Running 0 24m 10.100.2.37 worker233 <none> <none>
deploy-apps-v1-848cdd4dbb-btzdv 1/1 Running 0 24m 10.100.2.36 worker233 <none> <none>
deploy-apps-v1-848cdd4dbb-k4mpc 1/1 Running 0 24m 10.100.1.248 worker232 <none> <none>
deploy-apps-v1-848cdd4dbb-lxtss 1/1 Running 0 24m 10.100.1.249 worker232 <none> <none>
deploy-apps-v1-848cdd4dbb-pp9vp 1/1 Running 0 24m 10.100.3.2 worker234 <none> <none>
deploy-apps-v1-848cdd4dbb-tj4tk   1/1     Running   0          24m   10.100.3.3     worker234   <none>           <none>
2. View the node list
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 9d v1.23.17
worker232 Ready <none> 9d v1.23.17
worker233 Ready <none> 9d v1.23.17
worker234   Ready    <none>                 32m   v1.23.17
3. On the master, drain the node being decommissioned, ignoring Pods created by DaemonSets
[root@master231 ~]# kubectl drain worker234 --ignore-daemonsets
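Two related subcommands worth knowing: cordon only stops new scheduling without evicting anything, and uncordon reverses a drain if you change your mind:
# Mark the node unschedulable without evicting anything
kubectl cordon worker234
# Return a drained/cordoned node to service
kubectl uncordon worker234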
3.1 Check the node list again
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 9d v1.23.17
worker232 Ready <none> 9d v1.23.17
worker233 Ready <none> 9d v1.23.17
worker234   Ready,SchedulingDisabled   <none>                 46m   v1.23.17
Note: after the drain evicts the Pods, the node is marked unschedulable and carries a NoSchedule taint.
4. Stop the kubelet service on the drained node
[root@worker234 ~]# systemctl disable --now kubelet
Removed symlink /etc/systemd/system/multi-user.target.wants/kubelet.service.
5. Reinstall the OS on the worker node, to make sure no data leaks out
Omitted......
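If a full reinstall is more than your situation calls for, kubeadm's own cleanup is the lighter alternative; a sketch, noting that kubeadm reset itself reminds you to clean CNI and iptables leftovers by hand:
# Wipe the node's cluster state (run on worker234)
kubeadm reset -f
# Remove leftover CNI config and flush iptables rules
rm -rf /etc/cni/net.d
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X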
6. On the master, remove the drained node from the cluster
[root@master231 ~]# kubectl delete nodes worker234
7. View the node list
[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 9d v1.23.17
worker232 Ready <none> 9d v1.23.17
worker233   Ready    <none>                 9d    v1.23.17
As you can see, worker234 has been removed from the cluster; the scale-in is now complete.