Deploying Kubernetes 1.21.9 with RKE

芳华是个男孩!
2024-11-06 / 0 comments / 0 likes / 11 reads

1. Host Operating System

No.   OS and version   Notes
1     CentOS 7.6

2. Host Hardware Configuration

CPU   Memory   Disk   Role            Hostname       IP address      Notes
4C    8G       100G   master          k8s-master01   192.168.3.31
4C    8G       100G   master          k8s-master02   192.168.3.32
4C    8G       100G   etcd            k8s-etcd       192.168.3.33
4C    8G       100G   worker (node)   k8s-worker01   192.168.3.34
4C    8G       100G   worker (node)   k8s-worker02   192.168.3.35
4C    8G       100G   worker (node)   k8s-worker03   192.168.3.36

Optional shell cosmetics

# Prompt customization:
echo "export PS1='\[\033[01;31m\]\u\[\033[00m\]@\[\033[01;32m\]\h\[\033[00m\][\[\033[01;33m\]\t\[\033[00m\]]:\[\033[01;34m\]\w\[\033[00m\]$ '" >>/etc/profile
source /etc/profile

# Timestamped shell history:
export HISTTIMEFORMAT='%F %T '
echo "export HISTTIMEFORMAT='%F %T '" >>/etc/profile
source /etc/profile

Configuring the yum repositories, setting hostnames and /etc/hosts entries, tuning the nodes, upgrading the kernel, and installing Docker are covered in the previous post.
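For reference, the /etc/hosts entries implied by the inventory table above would look like this (a sketch; run as root on every node and adjust if your IPs differ):

```shell
# Append name resolution for all six cluster nodes.
# Hostnames and IPs are taken from the hardware table in section 2.
cat >> /etc/hosts <<'EOF'
192.168.3.31 k8s-master01
192.168.3.32 k8s-master02
192.168.3.33 k8s-etcd
192.168.3.34 k8s-worker01
192.168.3.35 k8s-worker02
192.168.3.36 k8s-worker03
EOF
```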

3. Create the User and Distribute SSH Keys

#Create the user (on every node)
useradd admin
usermod -aG docker admin
echo 123 | passwd --stdin admin

#Generate an SSH key pair (as the admin user)
ssh-keygen

#Copy the public key to every host in the cluster
chown -R admin:admin /home/admin
ssh-copy-id admin@k8s-master01
ssh-copy-id admin@k8s-master02
ssh-copy-id admin@k8s-etcd
ssh-copy-id admin@k8s-worker01
ssh-copy-id admin@k8s-worker02
ssh-copy-id admin@k8s-worker03
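Before handing the hosts to RKE it is worth confirming both prerequisites at once: passwordless SSH works and admin is in the docker group on every node. A hypothetical check loop (assumes the hostnames resolve):

```shell
# BatchMode makes ssh fail instead of prompting when key auth is broken,
# so a FAILED line means either ssh-copy-id or the docker group is missing.
for h in k8s-master01 k8s-master02 k8s-etcd \
         k8s-worker01 k8s-worker02 k8s-worker03; do
  if ssh -o BatchMode=yes admin@"$h" 'id -nG | tr " " "\n" | grep -qx docker'; then
    echo "$h: ok"
  else
    echo "$h: FAILED (check ssh-copy-id / usermod -aG docker)"
  fi
done
```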

4. Download the RKE Binary

# wget https://github.com/rancher/rke/releases/download/v1.5.3/rke_linux-amd64
# mv rke_linux-amd64 /usr/local/bin/rke
# chmod +x /usr/local/bin/rke
# rke --version
rke version v1.5.3
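If you want to verify the download, the rancher/rke release page should also publish a sha256sum.txt next to each binary (an optional sketch, not part of the original steps):

```shell
# Fetch the binary and the published checksum list, then verify the hash
# before installing. A "rke_linux-amd64: OK" line means the file is intact.
wget -q https://github.com/rancher/rke/releases/download/v1.5.3/rke_linux-amd64
wget -q https://github.com/rancher/rke/releases/download/v1.5.3/sha256sum.txt
grep 'rke_linux-amd64$' sha256sum.txt | sha256sum -c -
```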

5. Generate the RKE Cluster Configuration

mkdir -p /app/admin
cd /app/admin
rke config --name cluster.yml
##Prompts and answers (annotation after each line):
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]: cluster-wide SSH private key path
[+] Number of Hosts [1]: 3 three nodes entered here (the other three are added to cluster.yml afterwards)
[+] SSH Address of host (1) [none]: 192.168.3.31 first node's IP address
[+] SSH Port of host (1) [22]: 22 SSH port of the first node
[+] SSH Private Key Path of host (192.168.3.31) [none]: ~/.ssh/id_rsa private key path for the first node
[+] SSH User of host (192.168.3.31) [ubuntu]: admin remote user
[+] Is host (192.168.3.31) a Control Plane host (y/n)? [y]: y control-plane node
[+] Is host (192.168.3.31) a Worker host (y/n)? [n]: n not a worker node
[+] Is host (192.168.3.31) an etcd host (y/n)? [n]: n not an etcd node
[+] Override Hostname of host (192.168.3.31) [none]: keep the existing hostname
[+] Internal IP of host (192.168.3.31) [none]: internal (LAN) IP address
[+] Docker socket path on host (192.168.3.31) [/var/run/docker.sock]: docker.sock path on the host
[+] SSH Address of host (2) [none]: 192.168.3.33 second node
[+] SSH Port of host (2) [22]: 22 SSH port
[+] SSH Private Key Path of host (192.168.3.33) [none]: ~/.ssh/id_rsa private key path
[+] SSH User of host (192.168.3.33) [ubuntu]: admin remote user
[+] Is host (192.168.3.33) a Control Plane host (y/n)? [y]: n not a control-plane node
[+] Is host (192.168.3.33) a Worker host (y/n)? [n]: n not a worker node
[+] Is host (192.168.3.33) an etcd host (y/n)? [n]: y etcd node
[+] Override Hostname of host (192.168.3.33) [none]: keep the existing hostname
[+] Internal IP of host (192.168.3.33) [none]: internal (LAN) IP address
[+] Docker socket path on host (192.168.3.33) [/var/run/docker.sock]: docker.sock path on the host
[+] SSH Address of host (3) [none]: 192.168.3.34 third node
[+] SSH Port of host (3) [22]: 22 SSH port
[+] SSH Private Key Path of host (192.168.3.34) [none]: ~/.ssh/id_rsa private key path
[+] SSH User of host (192.168.3.34) [ubuntu]: admin remote user
[+] Is host (192.168.3.34) a Control Plane host (y/n)? [y]: n not a control-plane node
[+] Is host (192.168.3.34) a Worker host (y/n)? [n]: y worker node
[+] Is host (192.168.3.34) an etcd host (y/n)? [n]: n not an etcd node
[+] Override Hostname of host (192.168.3.34) [none]: keep the existing hostname
[+] Internal IP of host (192.168.3.34) [none]: internal (LAN) IP address
[+] Docker socket path on host (192.168.3.34) [/var/run/docker.sock]: docker.sock path on the host
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]: network plugin
[+] Authentication Strategy [x509]: authentication strategy
[+] Authorization Mode (rbac, none) [rbac]: authorization mode
[+] Kubernetes Docker image [rancher/hyperkube:v1.21.9-rancher1]: cluster image
[+] Cluster domain [cluster.local]: cluster domain
[+] Service Cluster IP Range [10.43.0.0/16]: Service IP range for the cluster
[+] Enable PodSecurityPolicy [n]: whether to enable PodSecurityPolicy
[+] Cluster Network CIDR [10.42.0.0/16]: Pod network CIDR
[+] Cluster DNS Service IP [10.43.0.10]: cluster DNS Service IP
[+] Add addon manifest URLs or YAML files [no]: extra addon manifest URLs or YAML files

##Initialization writes cluster.yml to the current directory. Only 3 nodes were
##entered above but the cluster has 6, so edit the file and add the other 3:
##copy an existing node block (14 lines per node), change the IP address, and
##leave everything else alone. To speed up deployment I also run a private
##registry locally. My final config follows.

# If you intended to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 192.168.3.31     ------------> node 1
  port: "22"
  internal_address: ""
  role:
  - controlplane
  hostname_override: ""
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.3.32     ------------> node 2
  port: "22"
  internal_address: ""
  role:
  - controlplane
  hostname_override: ""
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.3.33     ------------> node 3
  port: "22"
  internal_address: ""
  role:
  - etcd
  hostname_override: ""
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.3.34     ------------> node 4
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.3.35     ------------> node 5
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.3.36     ------------> node 6
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: admin
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    pod_security_configuration: ""
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_args_array: {}
    extra_binds: []
    extra_env: []
    win_extra_args: {}
    win_extra_args_array: {}
    win_extra_binds: []
    win_extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
  tolerations: []
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:   -------------> the images below point at my private registry; by default RKE pulls from the public registry. Change these if you have a local registry, otherwise leave them alone.
  etcd: 192.168.3.254/demo/mirrored-coreos-etcd:v3.5.9
  alpine: 192.168.3.254/demo/rke-tools:v0.1.96
  nginx_proxy: 192.168.3.254/demo/rke-tools:v0.1.96
  cert_downloader: 192.168.3.254/demo/rke-tools:v0.1.96
  kubernetes_services_sidecar: 192.168.3.254/demo/rke-tools:v0.1.96
  kubedns: 192.168.3.254/demo/mirrored-k8s-dns-kube-dns:1.22.28
  dnsmasq: 192.168.3.254/demo/mirrored-k8s-dns-dnsmasq-nanny:1.22.28
  kubedns_sidecar: 192.168.3.254/demo/mirrored-k8s-dns-sidecar:1.22.28
  kubedns_autoscaler: 192.168.3.254/demo/mirrored-cluster-proportional-autoscaler:1.8.6
  coredns: 192.168.3.254/demo/mirrored-coredns-coredns:1.10.1
  coredns_autoscaler: 192.168.3.254/demo/mirrored-cluster-proportional-autoscaler:1.8.6
  nodelocal: 192.168.3.254/demo/mirrored-k8s-dns-node-cache:1.22.28
  kubernetes: 192.168.3.254/demo/hyperkube:v1.21.9-rancher1
  flannel: 192.168.3.254/demo/mirrored-flannel-flannel:v0.21.4
  flannel_cni: 192.168.3.254/demo/flannel-cni:v0.3.0-rancher8
  calico_node: 192.168.3.254/demo/mirrored-calico-node:v3.26.3
  calico_cni: 192.168.3.254/demo/calico-cni:v3.26.3-rancher1
  calico_controllers: 192.168.3.254/demo/mirrored-calico-kube-controllers:v3.26.3
  calico_ctl: 192.168.3.254/demo/mirrored-calico-ctl:v3.26.3
  calico_flexvol: 192.168.3.254/demo/mirrored-calico-pod2daemon-flexvol:v3.26.3
  canal_node: 192.168.3.254/demo/mirrored-calico-node:v3.26.3
  canal_cni: 192.168.3.254/demo/calico-cni:v3.26.3-rancher1
  canal_controllers: 192.168.3.254/demo/mirrored-calico-kube-controllers:v3.26.3
  canal_flannel: 192.168.3.254/demo/mirrored-flannel-flannel:v0.21.4
  canal_flexvol: 192.168.3.254/demo/mirrored-calico-pod2daemon-flexvol:v3.26.3
  weave_node: 192.168.3.254/demo/weave-kube:2.8.1
  weave_cni: 192.168.3.254/demo/weave-npc:2.8.1
  pod_infra_container: 192.168.3.254/demo/mirrored-pause:3.7
  ingress: 192.168.3.254/demo/nginx-ingress-controller:nginx-1.9.4-rancher1
  ingress_backend: 192.168.3.254/demo/mirrored-nginx-ingress-controller-defaultbackend:1.5-rancher1
  ingress_webhook: 192.168.3.254/demo/mirrored-ingress-nginx-kube-webhook-certgen:v20231011-8b53cabe0
  metrics_server: 192.168.3.254/demo/mirrored-metrics-server:v0.6.3
  windows_pod_infra_container: 192.168.3.254/demo/mirrored-pause:3.7
  aci_cni_deploy_container: 192.168.3.254/demo/cnideploy:6.0.3.1.81c2369
  aci_host_container: 192.168.3.254/demo/aci-containers-host:6.0.3.1.81c2369
  aci_opflex_container: 192.168.3.254/demo/opflex:6.0.3.1.81c2369
  aci_mcast_container: 192.168.3.254/demo/opflex:6.0.3.1.81c2369
  aci_ovs_container: 192.168.3.254/demo/openvswitch:6.0.3.1.81c2369
  aci_controller_container: 192.168.3.254/demo/aci-containers-controller:6.0.3.1.81c2369
  aci_gbp_server_container: ""
  aci_opflex_server_container: ""
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: null
enable_cri_dockerd: null
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
  http_port: 0
  https_port: 0
  network_mode: ""
  tolerations: []
  default_backend: null
  default_http_backend_priority_class_name: ""
  nginx_ingress_controller_priority_class_name: ""
  default_ingress_class: null
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
win_prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
  ignore_proxy_env_vars: false
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
  tolerations: []
  metrics_server_priority_class_name: ""
restore:
  restore: false
  snapshot_name: ""
rotate_encryption_key: false
dns: null

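Since each node was added to cluster.yml by hand, a cheap sanity check before `rke up` is to count the node entries; every node block starts with "- address:", so six hosts should yield six matches:

```shell
# Count node entries in cluster.yml; the inventory above has 6 hosts.
n=$(grep -c '^- address:' cluster.yml)
echo "nodes defined: $n"
[ "$n" -eq 6 ] || echo "WARNING: expected 6 nodes, check cluster.yml"
```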
6. Deploy Kubernetes with RKE (fast in my case because the images were already loaded into the local registry)

root@k8s-master01[14:41:25]:/app/admin$ rke up
INFO[0000] Running RKE version: v1.5.3                  
INFO[0000] Initiating Kubernetes cluster                
INFO[0000] [dialer] Setup tunnel for host [192.168.3.35] 
INFO[0000] [dialer] Setup tunnel for host [192.168.3.34] 
INFO[0000] [dialer] Setup tunnel for host [192.168.3.31] 
INFO[0000] [dialer] Setup tunnel for host [192.168.3.32] 
INFO[0000] [dialer] Setup tunnel for host [192.168.3.36] 
INFO[0000] [dialer] Setup tunnel for host [192.168.3.33] 
INFO[0000] Finding container [cluster-state-deployer] on host [192.168.3.33], try #1 
INFO[0000] Pulling image [192.168.3.254/demo/rke-tools:v0.1.96] on host [192.168.3.33], try #1 
INFO[0010] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0011] Starting container [cluster-state-deployer] on host [192.168.3.33], try #1 
INFO[0012] [state] Successfully started [cluster-state-deployer] container on host [192.168.3.33] 
INFO[0012] Finding container [cluster-state-deployer] on host [192.168.3.31], try #1 
INFO[0012] Pulling image [192.168.3.254/demo/rke-tools:v0.1.96] on host [192.168.3.31], try #1 
INFO[0023] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0024] Starting container [cluster-state-deployer] on host [192.168.3.31], try #1 
INFO[0025] [state] Successfully started [cluster-state-deployer] container on host [192.168.3.31] 
INFO[0025] Finding container [cluster-state-deployer] on host [192.168.3.32], try #1 
INFO[0025] Pulling image [192.168.3.254/demo/rke-tools:v0.1.96] on host [192.168.3.32], try #1 
INFO[0035] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0036] Starting container [cluster-state-deployer] on host [192.168.3.32], try #1 
INFO[0037] [state] Successfully started [cluster-state-deployer] container on host [192.168.3.32] 
INFO[0037] Finding container [cluster-state-deployer] on host [192.168.3.34], try #1 
INFO[0037] Pulling image [192.168.3.254/demo/rke-tools:v0.1.96] on host [192.168.3.34], try #1 
INFO[0047] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0048] Starting container [cluster-state-deployer] on host [192.168.3.34], try #1 
INFO[0049] [state] Successfully started [cluster-state-deployer] container on host [192.168.3.34] 
INFO[0050] Finding container [cluster-state-deployer] on host [192.168.3.35], try #1 
INFO[0050] Pulling image [192.168.3.254/demo/rke-tools:v0.1.96] on host [192.168.3.35], try #1 
INFO[0062] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0064] Starting container [cluster-state-deployer] on host [192.168.3.35], try #1 
INFO[0065] [state] Successfully started [cluster-state-deployer] container on host [192.168.3.35] 
INFO[0065] Finding container [cluster-state-deployer] on host [192.168.3.36], try #1 
INFO[0066] Pulling image [192.168.3.254/demo/rke-tools:v0.1.96] on host [192.168.3.36], try #1 
INFO[0075] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0076] Starting container [cluster-state-deployer] on host [192.168.3.36], try #1 
INFO[0077] [state] Successfully started [cluster-state-deployer] container on host [192.168.3.36] 
INFO[0077] [certificates] Generating CA kubernetes certificates 
INFO[0077] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates 
INFO[0078] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates 
INFO[0078] [certificates] Generating Kubernetes API server certificates 
INFO[0078] [certificates] Generating Service account token key 
INFO[0078] [certificates] Generating Kube Controller certificates 
INFO[0079] [certificates] Generating Kube Scheduler certificates 
INFO[0079] [certificates] Generating Kube Proxy certificates 
INFO[0079] [certificates] Generating Node certificate   
INFO[0079] [certificates] Generating admin certificates and kubeconfig 
INFO[0079] [certificates] Generating Kubernetes API server proxy client certificates 
INFO[0080] [certificates] Generating kube-etcd-192-168-3-33 certificate and key 
INFO[0080] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0080] Building Kubernetes cluster                  
INFO[0080] [dialer] Setup tunnel for host [192.168.3.34] 
INFO[0080] [dialer] Setup tunnel for host [192.168.3.36] 
INFO[0080] [dialer] Setup tunnel for host [192.168.3.33] 
INFO[0080] [dialer] Setup tunnel for host [192.168.3.31] 
INFO[0080] [dialer] Setup tunnel for host [192.168.3.35] 
INFO[0080] [dialer] Setup tunnel for host [192.168.3.32] 
INFO[0080] [network] Deploying port listener containers 
INFO[0080] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0081] Starting container [rke-etcd-port-listener] on host [192.168.3.33], try #1 
INFO[0082] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.3.33] 
INFO[0082] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0082] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0082] Starting container [rke-cp-port-listener] on host [192.168.3.31], try #1 
INFO[0082] Starting container [rke-cp-port-listener] on host [192.168.3.32], try #1 
INFO[0083] [network] Successfully started [rke-cp-port-listener] container on host [192.168.3.32] 
INFO[0083] [network] Successfully started [rke-cp-port-listener] container on host [192.168.3.31] 
INFO[0083] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0083] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0083] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0083] Starting container [rke-worker-port-listener] on host [192.168.3.35], try #1 
INFO[0083] Starting container [rke-worker-port-listener] on host [192.168.3.36], try #1 
INFO[0083] Starting container [rke-worker-port-listener] on host [192.168.3.34], try #1 
INFO[0084] [network] Successfully started [rke-worker-port-listener] container on host [192.168.3.34] 
INFO[0084] [network] Successfully started [rke-worker-port-listener] container on host [192.168.3.35] 
INFO[0084] [network] Successfully started [rke-worker-port-listener] container on host [192.168.3.36] 
INFO[0084] [network] Port listener containers deployed successfully 
INFO[0084] [network] Running control plane -> etcd port checks 
INFO[0084] [network] Checking if host [192.168.3.31] can connect to host(s) [192.168.3.33] on port(s) [2379], try #1 
INFO[0084] [network] Checking if host [192.168.3.32] can connect to host(s) [192.168.3.33] on port(s) [2379], try #1 
INFO[0084] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0084] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0085] Starting container [rke-port-checker] on host [192.168.3.31], try #1 
INFO[0085] Starting container [rke-port-checker] on host [192.168.3.32], try #1 
INFO[0085] [network] Successfully started [rke-port-checker] container on host [192.168.3.31] 
INFO[0085] Removing container [rke-port-checker] on host [192.168.3.31], try #1 
INFO[0086] [network] Successfully started [rke-port-checker] container on host [192.168.3.32] 
INFO[0086] Removing container [rke-port-checker] on host [192.168.3.32], try #1 
INFO[0086] [network] Running control plane -> worker port checks 
INFO[0086] [network] Checking if host [192.168.3.32] can connect to host(s) [192.168.3.34 192.168.3.35 192.168.3.36] on port(s) [10250], try #1 
INFO[0086] [network] Checking if host [192.168.3.31] can connect to host(s) [192.168.3.34 192.168.3.35 192.168.3.36] on port(s) [10250], try #1 
INFO[0086] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0086] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0086] Starting container [rke-port-checker] on host [192.168.3.31], try #1 
INFO[0086] Starting container [rke-port-checker] on host [192.168.3.32], try #1 
INFO[0086] [network] Successfully started [rke-port-checker] container on host [192.168.3.31] 
INFO[0086] Removing container [rke-port-checker] on host [192.168.3.31], try #1 
INFO[0086] [network] Successfully started [rke-port-checker] container on host [192.168.3.32] 
INFO[0086] Removing container [rke-port-checker] on host [192.168.3.32], try #1 
INFO[0086] [network] Running workers -> control plane port checks 
INFO[0086] [network] Checking if host [192.168.3.35] can connect to host(s) [192.168.3.31 192.168.3.32] on port(s) [6443], try #1 
INFO[0086] [network] Checking if host [192.168.3.34] can connect to host(s) [192.168.3.31 192.168.3.32] on port(s) [6443], try #1 
INFO[0086] [network] Checking if host [192.168.3.36] can connect to host(s) [192.168.3.31 192.168.3.32] on port(s) [6443], try #1 
INFO[0086] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0086] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0086] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0086] Starting container [rke-port-checker] on host [192.168.3.34], try #1 
INFO[0086] Starting container [rke-port-checker] on host [192.168.3.36], try #1 
INFO[0086] Starting container [rke-port-checker] on host [192.168.3.35], try #1 
INFO[0087] [network] Successfully started [rke-port-checker] container on host [192.168.3.34] 
INFO[0087] Removing container [rke-port-checker] on host [192.168.3.34], try #1 
INFO[0087] [network] Successfully started [rke-port-checker] container on host [192.168.3.36] 
INFO[0087] Removing container [rke-port-checker] on host [192.168.3.36], try #1 
INFO[0087] [network] Successfully started [rke-port-checker] container on host [192.168.3.35] 
INFO[0087] Removing container [rke-port-checker] on host [192.168.3.35], try #1 
INFO[0087] [network] Checking KubeAPI port Control Plane hosts 
INFO[0087] [network] Removing port listener containers  
INFO[0087] Removing container [rke-etcd-port-listener] on host [192.168.3.33], try #1 
INFO[0087] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.3.33] 
INFO[0087] Removing container [rke-cp-port-listener] on host [192.168.3.31], try #1 
INFO[0087] Removing container [rke-cp-port-listener] on host [192.168.3.32], try #1 
INFO[0088] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.3.32] 
INFO[0088] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.3.31] 
INFO[0088] Removing container [rke-worker-port-listener] on host [192.168.3.34], try #1 
INFO[0088] Removing container [rke-worker-port-listener] on host [192.168.3.36], try #1 
INFO[0088] Removing container [rke-worker-port-listener] on host [192.168.3.35], try #1 
INFO[0088] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.3.35] 
INFO[0088] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.3.34] 
INFO[0088] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.3.36] 
INFO[0088] [network] Port listener containers removed successfully 
INFO[0088] [certificates] Deploying kubernetes certificates to Cluster nodes 
INFO[0088] Finding container [cert-deployer] on host [192.168.3.34], try #1 
INFO[0088] Finding container [cert-deployer] on host [192.168.3.35], try #1 
INFO[0088] Finding container [cert-deployer] on host [192.168.3.36], try #1 
INFO[0088] Finding container [cert-deployer] on host [192.168.3.32], try #1 
INFO[0088] Finding container [cert-deployer] on host [192.168.3.31], try #1 
INFO[0088] Finding container [cert-deployer] on host [192.168.3.33], try #1 
INFO[0088] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0088] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0088] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0088] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0088] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0088] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0088] Starting container [cert-deployer] on host [192.168.3.32], try #1 
INFO[0088] Starting container [cert-deployer] on host [192.168.3.33], try #1 
INFO[0088] Starting container [cert-deployer] on host [192.168.3.36], try #1 
INFO[0088] Starting container [cert-deployer] on host [192.168.3.35], try #1 
INFO[0088] Starting container [cert-deployer] on host [192.168.3.34], try #1 
INFO[0088] Starting container [cert-deployer] on host [192.168.3.31], try #1 
INFO[0089] Finding container [cert-deployer] on host [192.168.3.32], try #1 
INFO[0089] Finding container [cert-deployer] on host [192.168.3.34], try #1 
INFO[0089] Finding container [cert-deployer] on host [192.168.3.33], try #1 
INFO[0089] Finding container [cert-deployer] on host [192.168.3.36], try #1 
INFO[0089] Finding container [cert-deployer] on host [192.168.3.35], try #1 
INFO[0089] Finding container [cert-deployer] on host [192.168.3.31], try #1 
INFO[0094] Finding container [cert-deployer] on host [192.168.3.32], try #1 
INFO[0094] Removing container [cert-deployer] on host [192.168.3.32], try #1 
INFO[0094] Finding container [cert-deployer] on host [192.168.3.34], try #1 
INFO[0094] Removing container [cert-deployer] on host [192.168.3.34], try #1 
INFO[0094] Finding container [cert-deployer] on host [192.168.3.33], try #1 
INFO[0094] Removing container [cert-deployer] on host [192.168.3.33], try #1 
INFO[0094] Finding container [cert-deployer] on host [192.168.3.36], try #1 
INFO[0094] Removing container [cert-deployer] on host [192.168.3.36], try #1 
INFO[0094] Finding container [cert-deployer] on host [192.168.3.35], try #1 
INFO[0094] Removing container [cert-deployer] on host [192.168.3.35], try #1 
INFO[0094] Finding container [cert-deployer] on host [192.168.3.31], try #1 
INFO[0094] Removing container [cert-deployer] on host [192.168.3.31], try #1 
INFO[0094] [reconcile] Rebuilding and updating local kube config 
INFO[0094] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
WARN[0094] [reconcile] host [192.168.3.31] is a control plane node without reachable Kubernetes API endpoint in the cluster 
INFO[0094] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] 
WARN[0094] [reconcile] host [192.168.3.32] is a control plane node without reachable Kubernetes API endpoint in the cluster 
WARN[0094] [reconcile] no control plane node with reachable Kubernetes API endpoint in the cluster found 
INFO[0094] [certificates] Successfully deployed kubernetes certificates to Cluster nodes 
INFO[0094] [file-deploy] Deploying file [/etc/kubernetes/admission.yaml] to node [192.168.3.32] 
INFO[0094] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0095] Starting container [file-deployer] on host [192.168.3.32], try #1 
INFO[0095] Successfully started [file-deployer] container on host [192.168.3.32] 
INFO[0095] Waiting for [file-deployer] container to exit on host [192.168.3.32] 
INFO[0095] Waiting for [file-deployer] container to exit on host [192.168.3.32] 
INFO[0095] Container [file-deployer] is still running on host [192.168.3.32]: stderr: [], stdout: [] 
INFO[0096] Removing container [file-deployer] on host [192.168.3.32], try #1 
INFO[0096] [remove/file-deployer] Successfully removed container on host [192.168.3.32] 
INFO[0096] [file-deploy] Deploying file [/etc/kubernetes/admission.yaml] to node [192.168.3.31] 
INFO[0096] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0096] Starting container [file-deployer] on host [192.168.3.31], try #1 
INFO[0097] Successfully started [file-deployer] container on host [192.168.3.31] 
INFO[0097] Waiting for [file-deployer] container to exit on host [192.168.3.31] 
INFO[0097] Waiting for [file-deployer] container to exit on host [192.168.3.31] 
INFO[0097] Container [file-deployer] is still running on host [192.168.3.31]: stderr: [], stdout: [] 
INFO[0098] Removing container [file-deployer] on host [192.168.3.31], try #1 
INFO[0098] [remove/file-deployer] Successfully removed container on host [192.168.3.31] 
INFO[0098] [/etc/kubernetes/admission.yaml] Successfully deployed admission control config to Cluster control nodes 
INFO[0098] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.3.31] 
INFO[0098] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0098] Starting container [file-deployer] on host [192.168.3.31], try #1 
INFO[0099] Successfully started [file-deployer] container on host [192.168.3.31] 
INFO[0099] Waiting for [file-deployer] container to exit on host [192.168.3.31] 
INFO[0099] Waiting for [file-deployer] container to exit on host [192.168.3.31] 
INFO[0099] Container [file-deployer] is still running on host [192.168.3.31]: stderr: [], stdout: [] 
INFO[0100] Removing container [file-deployer] on host [192.168.3.31], try #1 
INFO[0100] [remove/file-deployer] Successfully removed container on host [192.168.3.31] 
INFO[0100] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.3.32] 
INFO[0100] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0100] Starting container [file-deployer] on host [192.168.3.32], try #1 
INFO[0100] Successfully started [file-deployer] container on host [192.168.3.32] 
INFO[0100] Waiting for [file-deployer] container to exit on host [192.168.3.32] 
INFO[0100] Waiting for [file-deployer] container to exit on host [192.168.3.32] 
INFO[0100] Container [file-deployer] is still running on host [192.168.3.32]: stderr: [], stdout: [] 
INFO[0101] Removing container [file-deployer] on host [192.168.3.32], try #1 
INFO[0101] [remove/file-deployer] Successfully removed container on host [192.168.3.32] 
INFO[0101] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes 
INFO[0101] [reconcile] Reconciling cluster state        
INFO[0101] [reconcile] This is newly generated cluster  
INFO[0101] Pre-pulling kubernetes images                
INFO[0101] Pulling image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] on host [192.168.3.31], try #1 
INFO[0101] Pulling image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] on host [192.168.3.33], try #1 
INFO[0101] Pulling image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] on host [192.168.3.35], try #1 
INFO[0101] Pulling image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] on host [192.168.3.34], try #1 
INFO[0101] Pulling image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] on host [192.168.3.36], try #1 
INFO[0101] Pulling image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] on host [192.168.3.32], try #1 
INFO[0160] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.36] 
INFO[0160] Pulling image [192.168.3.254/demo/mirrored-pause:3.7] on host [192.168.3.36], try #1 
INFO[0161] Image [192.168.3.254/demo/mirrored-pause:3.7] exists on host [192.168.3.36] 
INFO[0161] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.31] 
INFO[0161] Pulling image [192.168.3.254/demo/mirrored-pause:3.7] on host [192.168.3.31], try #1 
INFO[0161] Image [192.168.3.254/demo/mirrored-pause:3.7] exists on host [192.168.3.31] 
INFO[0161] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.32] 
INFO[0161] Pulling image [192.168.3.254/demo/mirrored-pause:3.7] on host [192.168.3.32], try #1 
INFO[0162] Image [192.168.3.254/demo/mirrored-pause:3.7] exists on host [192.168.3.32] 
INFO[0162] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.33] 
INFO[0162] Pulling image [192.168.3.254/demo/mirrored-pause:3.7] on host [192.168.3.33], try #1 
INFO[0162] Image [192.168.3.254/demo/mirrored-pause:3.7] exists on host [192.168.3.33] 
INFO[0162] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.34] 
INFO[0162] Pulling image [192.168.3.254/demo/mirrored-pause:3.7] on host [192.168.3.34], try #1 
INFO[0163] Image [192.168.3.254/demo/mirrored-pause:3.7] exists on host [192.168.3.34] 
INFO[0194] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.35] 
INFO[0194] Pulling image [192.168.3.254/demo/mirrored-pause:3.7] on host [192.168.3.35], try #1 
INFO[0194] Image [192.168.3.254/demo/mirrored-pause:3.7] exists on host [192.168.3.35] 
INFO[0194] Kubernetes images pulled successfully        
INFO[0194] [etcd] Building up etcd plane..              
INFO[0194] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0194] Starting container [etcd-fix-perm] on host [192.168.3.33], try #1 
INFO[0195] Successfully started [etcd-fix-perm] container on host [192.168.3.33] 
INFO[0195] Waiting for [etcd-fix-perm] container to exit on host [192.168.3.33] 
INFO[0195] Waiting for [etcd-fix-perm] container to exit on host [192.168.3.33] 
INFO[0195] Container [etcd-fix-perm] is still running on host [192.168.3.33]: stderr: [], stdout: [] 
INFO[0196] Removing container [etcd-fix-perm] on host [192.168.3.33], try #1 
INFO[0196] [remove/etcd-fix-perm] Successfully removed container on host [192.168.3.33] 
INFO[0196] Pulling image [192.168.3.254/demo/mirrored-coreos-etcd:v3.5.9] on host [192.168.3.33], try #1 
INFO[0199] Image [192.168.3.254/demo/mirrored-coreos-etcd:v3.5.9] exists on host [192.168.3.33] 
INFO[0199] Starting container [etcd] on host [192.168.3.33], try #1 
INFO[0200] [etcd] Successfully started [etcd] container on host [192.168.3.33] 
INFO[0200] [etcd] Running rolling snapshot container [etcd-rolling-snapshots] on host [192.168.3.33] 
INFO[0200] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0200] Starting container [etcd-rolling-snapshots] on host [192.168.3.33], try #1 
INFO[0200] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.3.33] 
INFO[0205] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0205] Starting container [rke-bundle-cert] on host [192.168.3.33], try #1 
INFO[0206] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.3.33] 
INFO[0206] Waiting for [rke-bundle-cert] container to exit on host [192.168.3.33] 
INFO[0206] Container [rke-bundle-cert] is still running on host [192.168.3.33]: stderr: [], stdout: [] 
INFO[0207] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.3.33] 
INFO[0207] Removing container [rke-bundle-cert] on host [192.168.3.33], try #1 
INFO[0207] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0207] Starting container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0208] [etcd] Successfully started [rke-log-linker] container on host [192.168.3.33] 
INFO[0208] Removing container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0208] [remove/rke-log-linker] Successfully removed container on host [192.168.3.33] 
INFO[0208] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0208] Starting container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0209] [etcd] Successfully started [rke-log-linker] container on host [192.168.3.33] 
INFO[0209] Removing container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0209] [remove/rke-log-linker] Successfully removed container on host [192.168.3.33] 
INFO[0209] [etcd] Successfully started etcd plane.. Checking etcd cluster health 
INFO[0209] [etcd] etcd host [192.168.3.33] reported healthy=true 
INFO[0209] [controlplane] Building up Controller Plane.. 
INFO[0209] Finding container [service-sidekick] on host [192.168.3.31], try #1 
INFO[0209] Finding container [service-sidekick] on host [192.168.3.32], try #1 
INFO[0209] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0209] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0209] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.31] 
INFO[0209] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.32] 
INFO[0209] Starting container [kube-apiserver] on host [192.168.3.31], try #1 
INFO[0209] Starting container [kube-apiserver] on host [192.168.3.32], try #1 
INFO[0209] [controlplane] Successfully started [kube-apiserver] container on host [192.168.3.31] 
INFO[0209] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.3.31] 
INFO[0209] [controlplane] Successfully started [kube-apiserver] container on host [192.168.3.32] 
INFO[0209] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.3.32] 
INFO[0220] [healthcheck] service [kube-apiserver] on host [192.168.3.31] is healthy 
INFO[0220] [healthcheck] service [kube-apiserver] on host [192.168.3.32] is healthy 
INFO[0220] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0220] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0220] Starting container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0220] Starting container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0221] [controlplane] Successfully started [rke-log-linker] container on host [192.168.3.31] 
INFO[0221] Removing container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0221] [controlplane] Successfully started [rke-log-linker] container on host [192.168.3.32] 
INFO[0221] Removing container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0221] [remove/rke-log-linker] Successfully removed container on host [192.168.3.31] 
INFO[0221] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.31] 
INFO[0221] [remove/rke-log-linker] Successfully removed container on host [192.168.3.32] 
INFO[0221] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.32] 
INFO[0221] Starting container [kube-controller-manager] on host [192.168.3.31], try #1 
INFO[0221] Starting container [kube-controller-manager] on host [192.168.3.32], try #1 
INFO[0221] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.3.31] 
INFO[0221] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.3.31] 
INFO[0221] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.3.32] 
INFO[0221] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.3.32] 
INFO[0226] [healthcheck] service [kube-controller-manager] on host [192.168.3.31] is healthy 
INFO[0226] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0226] [healthcheck] service [kube-controller-manager] on host [192.168.3.32] is healthy 
INFO[0226] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0226] Starting container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0226] Starting container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0227] [controlplane] Successfully started [rke-log-linker] container on host [192.168.3.31] 
INFO[0227] Removing container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0227] [controlplane] Successfully started [rke-log-linker] container on host [192.168.3.32] 
INFO[0227] Removing container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0227] [remove/rke-log-linker] Successfully removed container on host [192.168.3.31] 
INFO[0227] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.31] 
INFO[0227] Starting container [kube-scheduler] on host [192.168.3.31], try #1 
INFO[0227] [remove/rke-log-linker] Successfully removed container on host [192.168.3.32] 
INFO[0227] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.32] 
INFO[0227] Starting container [kube-scheduler] on host [192.168.3.32], try #1 
INFO[0227] [controlplane] Successfully started [kube-scheduler] container on host [192.168.3.31] 
INFO[0227] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.3.31] 
INFO[0227] [controlplane] Successfully started [kube-scheduler] container on host [192.168.3.32] 
INFO[0227] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.3.32] 
INFO[0232] [healthcheck] service [kube-scheduler] on host [192.168.3.31] is healthy 
INFO[0232] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0232] [healthcheck] service [kube-scheduler] on host [192.168.3.32] is healthy 
INFO[0232] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0233] Starting container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0233] Starting container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0233] [controlplane] Successfully started [rke-log-linker] container on host [192.168.3.31] 
INFO[0233] Removing container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0233] [remove/rke-log-linker] Successfully removed container on host [192.168.3.31] 
INFO[0233] [controlplane] Successfully started [rke-log-linker] container on host [192.168.3.32] 
INFO[0233] Removing container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0234] [remove/rke-log-linker] Successfully removed container on host [192.168.3.32] 
INFO[0234] [controlplane] Successfully started Controller Plane.. 
INFO[0234] [authz] Creating rke-job-deployer ServiceAccount 
INFO[0234] [authz] rke-job-deployer ServiceAccount created successfully 
INFO[0234] [authz] Creating system:node ClusterRoleBinding 
INFO[0234] [authz] system:node ClusterRoleBinding created successfully 
INFO[0234] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding 
INFO[0234] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully 
INFO[0234] Successfully Deployed state file at [./cluster.rkestate] 
INFO[0234] [state] Saving full cluster state to Kubernetes 
INFO[0234] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state 
INFO[0234] [worker] Building up Worker Plane..          
INFO[0234] Finding container [service-sidekick] on host [192.168.3.31], try #1 
INFO[0234] Finding container [service-sidekick] on host [192.168.3.32], try #1 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0234] [sidekick] Sidekick container already created on host [192.168.3.31] 
INFO[0234] [sidekick] Sidekick container already created on host [192.168.3.32] 
INFO[0234] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.32] 
INFO[0234] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.31] 
INFO[0234] Starting container [kubelet] on host [192.168.3.32], try #1 
INFO[0234] Starting container [kubelet] on host [192.168.3.31], try #1 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0234] [worker] Successfully started [kubelet] container on host [192.168.3.32] 
INFO[0234] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.3.32] 
INFO[0234] Starting container [nginx-proxy] on host [192.168.3.33], try #1 
INFO[0234] [worker] Successfully started [kubelet] container on host [192.168.3.31] 
INFO[0234] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.3.31] 
INFO[0234] Starting container [nginx-proxy] on host [192.168.3.35], try #1 
INFO[0234] Starting container [nginx-proxy] on host [192.168.3.36], try #1 
INFO[0234] Starting container [nginx-proxy] on host [192.168.3.34], try #1 
INFO[0234] [worker] Successfully started [nginx-proxy] container on host [192.168.3.33] 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0234] [worker] Successfully started [nginx-proxy] container on host [192.168.3.35] 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0234] [worker] Successfully started [nginx-proxy] container on host [192.168.3.34] 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0234] [worker] Successfully started [nginx-proxy] container on host [192.168.3.36] 
INFO[0234] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0234] Starting container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0234] Starting container [rke-log-linker] on host [192.168.3.35], try #1 
INFO[0234] Starting container [rke-log-linker] on host [192.168.3.36], try #1 
INFO[0234] Starting container [rke-log-linker] on host [192.168.3.34], try #1 
INFO[0235] [worker] Successfully started [rke-log-linker] container on host [192.168.3.33] 
INFO[0235] Removing container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0235] [worker] Successfully started [rke-log-linker] container on host [192.168.3.35] 
INFO[0235] Removing container [rke-log-linker] on host [192.168.3.35], try #1 
INFO[0235] [remove/rke-log-linker] Successfully removed container on host [192.168.3.33] 
INFO[0235] Finding container [service-sidekick] on host [192.168.3.33], try #1 
INFO[0235] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0235] [worker] Successfully started [rke-log-linker] container on host [192.168.3.34] 
INFO[0235] Removing container [rke-log-linker] on host [192.168.3.34], try #1 
INFO[0235] [remove/rke-log-linker] Successfully removed container on host [192.168.3.35] 
INFO[0235] Finding container [service-sidekick] on host [192.168.3.35], try #1 
INFO[0235] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0235] [worker] Successfully started [rke-log-linker] container on host [192.168.3.36] 
INFO[0235] Removing container [rke-log-linker] on host [192.168.3.36], try #1 
INFO[0235] [remove/rke-log-linker] Successfully removed container on host [192.168.3.34] 
INFO[0235] Finding container [service-sidekick] on host [192.168.3.34], try #1 
INFO[0235] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0235] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.33] 
INFO[0235] Starting container [kubelet] on host [192.168.3.33], try #1 
INFO[0235] [remove/rke-log-linker] Successfully removed container on host [192.168.3.36] 
INFO[0235] Finding container [service-sidekick] on host [192.168.3.36], try #1 
INFO[0235] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0236] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.35] 
INFO[0236] Starting container [kubelet] on host [192.168.3.35], try #1 
INFO[0236] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.34] 
INFO[0236] Starting container [kubelet] on host [192.168.3.34], try #1 
INFO[0236] [worker] Successfully started [kubelet] container on host [192.168.3.33] 
INFO[0236] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.3.33] 
INFO[0236] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.36] 
INFO[0236] [worker] Successfully started [kubelet] container on host [192.168.3.35] 
INFO[0236] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.3.35] 
INFO[0236] Starting container [kubelet] on host [192.168.3.36], try #1 
INFO[0236] [worker] Successfully started [kubelet] container on host [192.168.3.34] 
INFO[0236] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.3.34] 
INFO[0236] [worker] Successfully started [kubelet] container on host [192.168.3.36] 
INFO[0236] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.3.36] 
INFO[0254] [healthcheck] service [kubelet] on host [192.168.3.31] is healthy 
INFO[0254] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0254] [healthcheck] service [kubelet] on host [192.168.3.32] is healthy 
INFO[0254] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0255] Starting container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0255] Starting container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0255] [worker] Successfully started [rke-log-linker] container on host [192.168.3.32] 
INFO[0255] Removing container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0255] [remove/rke-log-linker] Successfully removed container on host [192.168.3.32] 
INFO[0255] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.32] 
INFO[0256] Starting container [kube-proxy] on host [192.168.3.32], try #1 
INFO[0256] [worker] Successfully started [kube-proxy] container on host [192.168.3.32] 
INFO[0256] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.3.32] 
INFO[0256] [worker] Successfully started [rke-log-linker] container on host [192.168.3.31] 
INFO[0256] Removing container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0256] [remove/rke-log-linker] Successfully removed container on host [192.168.3.31] 
INFO[0256] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.31] 
INFO[0256] Starting container [kube-proxy] on host [192.168.3.31], try #1 
INFO[0256] [worker] Successfully started [kube-proxy] container on host [192.168.3.31] 
INFO[0256] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.3.31] 
INFO[0256] [healthcheck] service [kubelet] on host [192.168.3.33] is healthy 
INFO[0256] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0256] [healthcheck] service [kubelet] on host [192.168.3.35] is healthy 
INFO[0256] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0256] [healthcheck] service [kubelet] on host [192.168.3.34] is healthy 
INFO[0256] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0256] [healthcheck] service [kubelet] on host [192.168.3.36] is healthy 
INFO[0256] Starting container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0256] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0257] Starting container [rke-log-linker] on host [192.168.3.35], try #1 
INFO[0257] Starting container [rke-log-linker] on host [192.168.3.34], try #1 
INFO[0257] Starting container [rke-log-linker] on host [192.168.3.36], try #1 
INFO[0257] [worker] Successfully started [rke-log-linker] container on host [192.168.3.33] 
INFO[0257] Removing container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0257] [worker] Successfully started [rke-log-linker] container on host [192.168.3.34] 
INFO[0257] Removing container [rke-log-linker] on host [192.168.3.34], try #1 
INFO[0257] [worker] Successfully started [rke-log-linker] container on host [192.168.3.35] 
INFO[0257] Removing container [rke-log-linker] on host [192.168.3.35], try #1 
INFO[0257] [remove/rke-log-linker] Successfully removed container on host [192.168.3.33] 
INFO[0257] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.33] 
INFO[0257] Starting container [kube-proxy] on host [192.168.3.33], try #1 
INFO[0257] [worker] Successfully started [rke-log-linker] container on host [192.168.3.36] 
INFO[0257] Removing container [rke-log-linker] on host [192.168.3.36], try #1 
INFO[0258] [remove/rke-log-linker] Successfully removed container on host [192.168.3.34] 
INFO[0258] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.34] 
INFO[0258] Starting container [kube-proxy] on host [192.168.3.34], try #1 
INFO[0258] [remove/rke-log-linker] Successfully removed container on host [192.168.3.35] 
INFO[0258] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.35] 
INFO[0258] Starting container [kube-proxy] on host [192.168.3.35], try #1 
INFO[0258] [remove/rke-log-linker] Successfully removed container on host [192.168.3.36] 
INFO[0258] [worker] Successfully started [kube-proxy] container on host [192.168.3.33] 
INFO[0258] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.3.33] 
INFO[0258] Image [192.168.3.254/demo/hyperkube:v1.21.9-rancher1] exists on host [192.168.3.36] 
INFO[0258] Starting container [kube-proxy] on host [192.168.3.36], try #1 
INFO[0258] [worker] Successfully started [kube-proxy] container on host [192.168.3.34] 
INFO[0258] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.3.34] 
INFO[0258] [worker] Successfully started [kube-proxy] container on host [192.168.3.35] 
INFO[0258] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.3.35] 
INFO[0258] [worker] Successfully started [kube-proxy] container on host [192.168.3.36] 
INFO[0258] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.3.36] 
INFO[0261] [healthcheck] service [kube-proxy] on host [192.168.3.32] is healthy 
INFO[0261] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0261] Starting container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0261] [healthcheck] service [kube-proxy] on host [192.168.3.31] is healthy 
INFO[0261] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0262] Starting container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0262] [worker] Successfully started [rke-log-linker] container on host [192.168.3.32] 
INFO[0262] Removing container [rke-log-linker] on host [192.168.3.32], try #1 
INFO[0262] [remove/rke-log-linker] Successfully removed container on host [192.168.3.32] 
INFO[0263] [worker] Successfully started [rke-log-linker] container on host [192.168.3.31] 
INFO[0263] [healthcheck] service [kube-proxy] on host [192.168.3.33] is healthy 
INFO[0263] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0263] Removing container [rke-log-linker] on host [192.168.3.31], try #1 
INFO[0263] [remove/rke-log-linker] Successfully removed container on host [192.168.3.31] 
INFO[0263] [healthcheck] service [kube-proxy] on host [192.168.3.34] is healthy 
INFO[0263] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0263] [healthcheck] service [kube-proxy] on host [192.168.3.35] is healthy 
INFO[0263] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0263] [healthcheck] service [kube-proxy] on host [192.168.3.36] is healthy 
INFO[0263] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0263] Starting container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0263] Starting container [rke-log-linker] on host [192.168.3.34], try #1 
INFO[0263] Starting container [rke-log-linker] on host [192.168.3.35], try #1 
INFO[0263] Starting container [rke-log-linker] on host [192.168.3.36], try #1 
INFO[0264] [worker] Successfully started [rke-log-linker] container on host [192.168.3.34] 
INFO[0264] Removing container [rke-log-linker] on host [192.168.3.34], try #1 
INFO[0264] [worker] Successfully started [rke-log-linker] container on host [192.168.3.33] 
INFO[0264] Removing container [rke-log-linker] on host [192.168.3.33], try #1 
INFO[0264] [worker] Successfully started [rke-log-linker] container on host [192.168.3.35] 
INFO[0264] Removing container [rke-log-linker] on host [192.168.3.35], try #1 
INFO[0264] [worker] Successfully started [rke-log-linker] container on host [192.168.3.36] 
INFO[0264] Removing container [rke-log-linker] on host [192.168.3.36], try #1 
INFO[0264] [remove/rke-log-linker] Successfully removed container on host [192.168.3.34] 
INFO[0264] [remove/rke-log-linker] Successfully removed container on host [192.168.3.33] 
INFO[0264] [remove/rke-log-linker] Successfully removed container on host [192.168.3.35] 
INFO[0264] [remove/rke-log-linker] Successfully removed container on host [192.168.3.36] 
INFO[0264] [worker] Successfully started Worker Plane.. 
INFO[0264] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.31] 
INFO[0264] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.36] 
INFO[0264] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.32] 
INFO[0264] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.33] 
INFO[0264] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.34] 
INFO[0264] Image [192.168.3.254/demo/rke-tools:v0.1.96] exists on host [192.168.3.35] 
INFO[0264] Starting container [rke-log-cleaner] on host [192.168.3.33], try #1 
INFO[0264] Starting container [rke-log-cleaner] on host [192.168.3.35], try #1 
INFO[0264] Starting container [rke-log-cleaner] on host [192.168.3.36], try #1 
INFO[0264] Starting container [rke-log-cleaner] on host [192.168.3.31], try #1 
INFO[0264] Starting container [rke-log-cleaner] on host [192.168.3.34], try #1 
INFO[0265] Starting container [rke-log-cleaner] on host [192.168.3.32], try #1 
INFO[0266] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.3.36] 
INFO[0266] Removing container [rke-log-cleaner] on host [192.168.3.36], try #1 
INFO[0266] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.3.31] 
INFO[0266] Removing container [rke-log-cleaner] on host [192.168.3.31], try #1 
INFO[0266] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.3.33] 
INFO[0266] Removing container [rke-log-cleaner] on host [192.168.3.33], try #1 
INFO[0266] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.3.35] 
INFO[0266] Removing container [rke-log-cleaner] on host [192.168.3.35], try #1 
INFO[0266] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.3.34] 
INFO[0266] Removing container [rke-log-cleaner] on host [192.168.3.34], try #1 
INFO[0266] [remove/rke-log-cleaner] Successfully removed container on host [192.168.3.36] 
INFO[0266] [remove/rke-log-cleaner] Successfully removed container on host [192.168.3.31] 
INFO[0266] [remove/rke-log-cleaner] Successfully removed container on host [192.168.3.33] 
INFO[0266] [remove/rke-log-cleaner] Successfully removed container on host [192.168.3.35] 
INFO[0266] [remove/rke-log-cleaner] Successfully removed container on host [192.168.3.34] 
INFO[0266] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.3.32] 
INFO[0266] Removing container [rke-log-cleaner] on host [192.168.3.32], try #1 
INFO[0266] [remove/rke-log-cleaner] Successfully removed container on host [192.168.3.32] 
INFO[0266] [sync] Syncing nodes Labels and Taints       
INFO[0267] [sync] Successfully synced nodes Labels and Taints 
INFO[0267] [network] Setting up network plugin: canal   
INFO[0267] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0267] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes 
INFO[0267] [addons] Executing deploy job rke-network-plugin 
INFO[0272] [addons] Setting up coredns                  
INFO[0272] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0272] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes 
INFO[0272] [addons] Executing deploy job rke-coredns-addon 
INFO[0282] [addons] CoreDNS deployed successfully       
INFO[0282] [dns] DNS provider coredns deployed successfully 
INFO[0282] [addons] Setting up Metrics Server           
INFO[0282] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0282] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes 
INFO[0282] [addons] Executing deploy job rke-metrics-addon 
INFO[0297] [addons] Metrics Server deployed successfully 
INFO[0297] [ingress] Setting up nginx ingress controller 
INFO[0297] [ingress] removing admission batch jobs if they exist 
INFO[0297] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0298] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes 
INFO[0298] [addons] Executing deploy job rke-ingress-controller 
INFO[0308] [ingress] removing default backend service and deployment if they exist 
INFO[0308] [ingress] ingress controller nginx deployed successfully 
INFO[0308] [addons] Setting up user addons              
INFO[0308] [addons] no user addons defined              
INFO[0308] Finished building Kubernetes cluster successfully 
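After a successful `rke up`, RKE leaves the cluster artifacts next to `cluster.yml` in the working directory (the log above shows the state file being written to `./cluster.rkestate`). A quick sanity check, assuming the `/app/admin` working directory used earlier:

```shell
# Files RKE leaves in the working directory after a successful run:
#   cluster.yml             - the cluster definition written by `rke config`
#   cluster.rkestate        - full cluster state; keep it safe, future `rke up` runs need it
#   kube_config_cluster.yml - admin kubeconfig for this cluster
ls -l /app/admin/cluster.yml /app/admin/cluster.rkestate /app/admin/kube_config_cluster.yml
```

Back up `cluster.rkestate` and `kube_config_cluster.yml`; losing the state file makes later upgrades and node changes with RKE much harder.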

7、Install the kubectl tool

# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.9/bin/linux/amd64/kubectl
# chmod +x kubectl 
# mv kubectl /usr/local/bin/kubectl
# kubectl version --client
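Before `kubectl` can talk to the cluster it needs the kubeconfig that RKE generated. A minimal setup, assuming the default output file `kube_config_cluster.yml` in the `/app/admin` directory where `rke up` was run:

```shell
# Point kubectl at the RKE-generated admin kubeconfig
mkdir -p ~/.kube
cp /app/admin/kube_config_cluster.yml ~/.kube/config
kubectl get nodes   # should now list all six nodes
```

Alternatively, export `KUBECONFIG=/app/admin/kube_config_cluster.yml` instead of copying the file.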

8、Verify the cluster

root@k8s-master01[16:45:49]:~$ kubectl get nodes
NAME           STATUS   ROLES          AGE    VERSION
192.168.3.31   Ready    controlplane   120m   v1.21.9
192.168.3.32   Ready    controlplane   120m   v1.21.9
192.168.3.33   Ready    etcd           120m   v1.21.9
192.168.3.34   Ready    worker         120m   v1.21.9
192.168.3.35   Ready    worker         120m   v1.21.9
192.168.3.36   Ready    worker         120m   v1.21.9
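Beyond node status, it is worth confirming that the add-ons from the deploy log (canal, CoreDNS, Metrics Server, nginx ingress) are actually running. A quick check; RKE deploys the ingress controller into the `ingress-nginx` namespace:

```shell
# All pods should be Running or Completed
kubectl get pods -n kube-system -o wide
kubectl get pods -n ingress-nginx
```

If any pod is stuck in `ImagePullBackOff`, re-check that every node can reach the private registry 192.168.3.254.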

9、Install the Kuboard dashboard for the cluster. After deployment it turned out the Kubernetes apiserver endpoint points at master02, so Kuboard is installed on master02

On master01, check the Kubernetes apiserver address: cat .kube/config

sudo docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://192.168.3.32:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  192.168.3.254/demo/kuboard:v3
  # Alternatively use the image swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3 for a faster pull.
  # Do not use 127.0.0.1 or localhost as the internal IP.
  # Kuboard does not need to be on the same subnet as the K8s cluster; the Kuboard Agent can even reach the Kuboard Server through a proxy.
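Once the container is up, Kuboard should answer on port 80 of master02. A quick reachability check (the default v3 login is commonly admin / Kuboard123; confirm against Kuboard's own documentation):

```shell
# Verify the container is running and the web UI responds
sudo docker ps --filter name=kuboard
curl -sI http://192.168.3.32:80 | head -n 1
```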

10、Open the web UI and follow the prompts to import the cluster
