3. Kubernetes Node Management


I. View cluster information

[root@k8s-master02 ~]# kubectl  cluster-info 
Kubernetes control plane is running at https://192.168.122.100:16443
CoreDNS is running at https://192.168.122.100:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master02 ~]# 
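
The dump subcommand suggested in the output writes the full cluster state out for offline inspection; a small sketch (the target directory is an arbitrary choice of mine):

# Dump objects from all namespaces into a directory instead of stdout
kubectl cluster-info dump --all-namespaces --output-directory=/tmp/cluster-dump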

II. View node information

1. List the cluster nodes

[root@k8s-master02 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   21h   v1.21.0
k8s-master02   Ready    control-plane,master   21h   v1.21.0
k8s-master03   Ready    control-plane,master   21h   v1.21.0
k8s-worker01   Ready    <none>                 21h   v1.21.0
k8s-worker02   Ready    <none>                 21h   v1.21.0
k8s-worker03   Ready    <none>                 21h   v1.21.0
k8s-worker04   Ready    <none>                 21h   v1.21.0
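
If you are adding or draining nodes and want to see status transitions as they happen, kubectl can watch the list; a sketch, plus a machine-readable one-liner using standard jsonpath:

# Print the node list, then block and stream subsequent changes
kubectl get nodes --watch
# Emit "name=Ready-status" per node, e.g. k8s-master01=True
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}={.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'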

2. List the cluster nodes with extended details

[root@k8s-master02 ~]# kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                           KERNEL-VERSION              CONTAINER-RUNTIME
k8s-master01   Ready    control-plane,master   21h   v1.21.0   192.168.122.11   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
k8s-master02   Ready    control-plane,master   21h   v1.21.0   192.168.122.12   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
k8s-master03   Ready    control-plane,master   21h   v1.21.0   192.168.122.13   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
k8s-worker01   Ready    <none>                 21h   v1.21.0   192.168.122.14   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
k8s-worker02   Ready    <none>                 21h   v1.21.0   192.168.122.15   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
k8s-worker03   Ready    <none>                 21h   v1.21.0   192.168.122.16   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
k8s-worker04   Ready    <none>                 21h   v1.21.0   192.168.122.17   <none>        Rocky Linux 8.5 (Green Obsidian)   6.7.2-1.el8.elrepo.x86_64   docker://20.10.9
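
When only a couple of the wide columns matter, custom-columns lets you pick them explicitly; a sketch (the column names are my own, and on this cluster the first address entry is the InternalIP):

# Just the node name, first address, and kubelet version
kubectl get nodes -o custom-columns=NAME:.metadata.name,IP:.status.addresses[0].address,VERSION:.status.nodeInfo.kubeletVersion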

3. Describe a node in full detail

[root@k8s-master02 ~]# kubectl describe nodes k8s-master01
Name:               k8s-master01
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=k8s-master01
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 192.168.122.11/24
                    projectcalico.org/IPv4VXLANTunnelAddr: 10.244.32.128
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Tue, 30 Jan 2024 17:09:44 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  k8s-master01
  AcquireTime:     <unset>
  RenewTime:       Wed, 31 Jan 2024 15:03:31 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 31 Jan 2024 11:54:11 +0800   Wed, 31 Jan 2024 11:54:11 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Wed, 31 Jan 2024 14:59:10 +0800   Tue, 30 Jan 2024 17:09:44 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Wed, 31 Jan 2024 14:59:10 +0800   Tue, 30 Jan 2024 17:09:44 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Wed, 31 Jan 2024 14:59:10 +0800   Tue, 30 Jan 2024 17:09:44 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Wed, 31 Jan 2024 14:59:10 +0800   Tue, 30 Jan 2024 17:19:02 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.122.11
  Hostname:    k8s-master01
Capacity:
  cpu:                6
  ephemeral-storage:  145676804Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4995824Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  134255742345
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             4893424Ki
  pods:               110
System Info:
  Machine ID:                 aa35dc0d310e4ebe8e258798337f4830
  System UUID:                ace15fcf-6257-400f-a8f4-fb4264bd23f0
  Boot ID:                    e47fd875-abf2-4134-935e-529d675b744c
  Kernel Version:             6.7.2-1.el8.elrepo.x86_64
  OS Image:                   Rocky Linux 8.5 (Green Obsidian)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.9
  Kubelet Version:            v1.21.0
  Kube-Proxy Version:         v1.21.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (12 in total)
  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
  calico-system               calico-kube-controllers-77b5cb49c9-dwvqs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21h
  calico-system               calico-node-7572t                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21h
  kube-system                 coredns-57d4cbf879-9xxfd                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (3%)     21h
  kube-system                 coredns-57d4cbf879-dq5kk                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (3%)     21h
  kube-system                 etcd-k8s-master01                           100m (1%)     0 (0%)      100Mi (2%)       0 (0%)         21h
  kube-system                 kube-apiserver-k8s-master01                 250m (4%)     0 (0%)      0 (0%)           0 (0%)         21h
  kube-system                 kube-controller-manager-k8s-master01        200m (3%)     0 (0%)      0 (0%)           0 (0%)         21h
  kube-system                 kube-proxy-ldmsj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21h
  kube-system                 kube-scheduler-k8s-master01                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         21h
  kube-system                 metrics-server-6bf679fb9b-q2z2f             100m (1%)     0 (0%)      200Mi (4%)       0 (0%)         21h
  kuboard                     kuboard-etcd-lxhdm                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21h
  kuboard                     kuboard-pv-browser-rl7lw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (15%)  0 (0%)
  memory             440Mi (9%)  340Mi (7%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:              <none>
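
describe is meant for humans; to script against a single field shown above, query the Node object directly with jsonpath. Two sketches against fields in this output:

# The master taint shown under Taints:
kubectl get node k8s-master01 -o jsonpath='{.spec.taints}'
# The allocatable pod count shown under Allocatable:
kubectl get node k8s-master01 -o jsonpath='{.status.allocatable.pods}'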

III. Managing the cluster from a worker node

On a cluster installed with kubeadm, running kubectl on a worker node fails with the following error 😫

[root@k8s-worker01 ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Simply copy /etc/kubernetes/admin.conf from a master node to $HOME/.kube/config on the worker node, and kubectl on the worker can then manage the cluster.

1. Create the .kube directory in root's home directory on the worker node

[root@k8s-worker01 ~]# mkdir /root/.kube

2. On the master node, copy the config to the worker

[root@k8s-master01 ~]# scp /etc/kubernetes/admin.conf k8s-worker01:/root/.kube/config
admin.conf

3. Verify on the worker node 😀

[root@k8s-worker01 ~]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   22h   v1.21.0
k8s-master02   Ready    control-plane,master   22h   v1.21.0
k8s-master03   Ready    control-plane,master   22h   v1.21.0
k8s-worker01   Ready    <none>                 22h   v1.21.0
k8s-worker02   Ready    <none>                 22h   v1.21.0
k8s-worker03   Ready    <none>                 22h   v1.21.0
k8s-worker04   Ready    <none>                 22h   v1.21.0
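
Keep in mind that admin.conf carries cluster-admin credentials, so copy it to worker nodes with care. As an alternative sketch, you can point the KUBECONFIG environment variable at a copy for the current shell only instead of installing it as ~/.kube/config (the /root/admin.conf path is my assumption):

# Use the copied file for this session only
export KUBECONFIG=/root/admin.conf
kubectl get nodes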

IV. Node labels

If a k8s cluster is made up of a large number of nodes, you can tag them with appropriate labels and then filter nodes by label, which makes it easier to select and match related resource objects.

1. View node labels

[root@k8s-master01 ~]# kubectl get node --show-labels 
NAME           STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master01   Ready    control-plane,master   22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master02   Ready    control-plane,master   22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master03   Ready    control-plane,master   22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-worker01   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux
k8s-worker02   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux
k8s-worker03   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker03,kubernetes.io/os=linux
k8s-worker04   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker04,kubernetes.io/os=linux

2. Set a node label

Give k8s-worker01 a region=wuhan label:

[root@k8s-master01 ~]# kubectl label node k8s-worker01 region=wuhan
node/k8s-worker01 labeled

## Verify
[root@k8s-master01 ~]# kubectl get node --show-labels
NAME           STATUS   ROLES                  AGE   VERSION   LABELS
k8s-master01   Ready    control-plane,master   22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master02   Ready    control-plane,master   22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-master03   Ready    control-plane,master   22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-worker01   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker01,kubernetes.io/os=linux,region=wuhan
k8s-worker02   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker02,kubernetes.io/os=linux
k8s-worker03   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker03,kubernetes.io/os=linux
k8s-worker04   Ready    <none>                 22h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-worker04,kubernetes.io/os=linux
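
--show-labels gets hard to read as labels accumulate; kubectl label --list prints a single node's labels one per line. A sketch:

# List k8s-worker01's labels, one per line
kubectl label nodes k8s-worker01 --list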

Viewed in Kuboard (screenshot unavailable)

3. Show the region label for all nodes

[root@k8s-master01 ~]# kubectl get node -L region
NAME           STATUS   ROLES                  AGE   VERSION   REGION
k8s-master01   Ready    control-plane,master   22h   v1.21.0   
k8s-master02   Ready    control-plane,master   22h   v1.21.0   
k8s-master03   Ready    control-plane,master   22h   v1.21.0   
k8s-worker01   Ready    <none>                 22h   v1.21.0   wuhan
k8s-worker02   Ready    <none>                 22h   v1.21.0   
k8s-worker03   Ready    <none>                 22h   v1.21.0   
k8s-worker04   Ready    <none>                 22h   v1.21.0   

4. Set labels along multiple dimensions

You can also add labels along other dimensions for scenarios that need finer distinctions.
For example, to label k8s-worker02 as Wuhan region, data-center zone A, test environment, game business:

[root@k8s-master01 ~]# kubectl label node k8s-worker02 region=wuhan zone=A env=test business=game
node/k8s-worker02 labeled
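
Multiple labels combine in one selector, with the comma acting as a logical AND; a sketch against the labels just set (only k8s-worker02 carries both):

# Nodes in wuhan AND in the test environment
kubectl get nodes -l region=wuhan,env=test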

5. Display a node's labels

Viewed in Kuboard (screenshot unavailable)

6. Find nodes with region=wuhan

[root@k8s-master01 ~]# kubectl get nodes -l region=wuhan
NAME           STATUS   ROLES    AGE   VERSION
k8s-worker01   Ready    <none>   22h   v1.21.0
k8s-worker02   Ready    <none>   22h   v1.21.0

Note: lowercase -l filters nodes by a label selector (key=value); uppercase -L adds a column showing each node's value for that label key.
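
The same selectors drive scheduling: a Pod's nodeSelector restricts it to nodes whose labels match. A minimal sketch (the pod name and nginx image are arbitrary choices of mine):

# Schedule a pod only onto nodes labeled region=wuhan
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-wuhan
spec:
  nodeSelector:
    region: wuhan
  containers:
  - name: nginx
    image: nginx
EOF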

7. Modify a label

k8s-worker01 and k8s-worker02 are currently both labeled wuhan; let's change k8s-worker02's region label to beijing:

[root@k8s-master01 ~]# kubectl label node k8s-worker02 region=beijing --overwrite=true
node/k8s-worker02 labeled
[root@k8s-master01 ~]# kubectl get nodes -L region
NAME           STATUS   ROLES                  AGE   VERSION   REGION
k8s-master01   Ready    control-plane,master   23h   v1.21.0   
k8s-master02   Ready    control-plane,master   23h   v1.21.0   
k8s-master03   Ready    control-plane,master   23h   v1.21.0   
k8s-worker01   Ready    <none>                 23h   v1.21.0   wuhan
k8s-worker02   Ready    <none>                 23h   v1.21.0   beijing
k8s-worker03   Ready    <none>                 23h   v1.21.0   
k8s-worker04   Ready    <none>                 23h   v1.21.0   

8. Delete a label

Delete the region label from k8s-worker01:

[root@k8s-master01 ~]# kubectl label nodes k8s-worker01 region-
node/k8s-worker01 labeled

[root@k8s-master01 ~]# kubectl get nodes -L region
NAME           STATUS   ROLES                  AGE   VERSION   REGION
k8s-master01   Ready    control-plane,master   23h   v1.21.0   
k8s-master02   Ready    control-plane,master   23h   v1.21.0   
k8s-master03   Ready    control-plane,master   23h   v1.21.0   
k8s-worker01   Ready    <none>                 23h   v1.21.0   
k8s-worker02   Ready    <none>                 23h   v1.21.0   beijing
k8s-worker03   Ready    <none>                 23h   v1.21.0   
k8s-worker04   Ready    <none>                 23h   v1.21.0   
[root@k8s-master01 ~]# 
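
Several labels can be dropped in one command by appending - to each key; a sketch assuming the labels set on k8s-worker02 earlier are still present:

# Remove two labels from k8s-worker02 in one call
kubectl label nodes k8s-worker02 zone- business-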

9. Label selectors

There are two main kinds of label selectors:

  • Equality-based: =, !=
  • Set-based: KEY in (VALUE1,VALUE2, ...)

[root@k8s-master01 ~]# kubectl label nodes k8s-worker01 env=test1
node/k8s-worker01 labeled
[root@k8s-master01 ~]# kubectl label nodes k8s-worker02 env=test1 --overwrite=true
node/k8s-worker02 labeled

[root@k8s-master01 ~]# kubectl get nodes -l "env in(test1,test2)"
NAME           STATUS   ROLES    AGE   VERSION
k8s-worker01   Ready    <none>   23h   v1.21.0
k8s-worker02   Ready    <none>   23h   v1.21.0
[root@k8s-master01 ~]# kubectl get nodes -l env!=test1
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   23h   v1.21.0
k8s-master02   Ready    control-plane,master   23h   v1.21.0
k8s-master03   Ready    control-plane,master   23h   v1.21.0
k8s-worker03   Ready    <none>                 23h   v1.21.0
k8s-worker04   Ready    <none>                 23h   v1.21.0
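
Set-based selectors also support notin and bare-key existence tests; a few sketches:

# Nodes whose env is NOT test1 or test2 (nodes with no env label also match)
kubectl get nodes -l 'env notin (test1,test2)'
# Nodes that have a region label at all
kubectl get nodes -l 'region'
# Nodes that do NOT have a region label
kubectl get nodes -l '!region'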