9. Kubernetes Core Concepts: Service

芳华是个男孩!
2024-10-15 / 0 comments / 0 likes / 11 reads

I. The Role of Service and kube-proxy's Three Proxy Modes


1. The role of Service

When running workloads on a Kubernetes cluster, Pods are routinely destroyed and recreated, so their IP addresses change constantly, which makes it impractical to reach a Pod's service directly by IP. Kubernetes solves this with the Service object: a Service sits in front of a set of Pods and proxies traffic to them. However the Pods change, as long as they carry the right labels the Service can find them, add their IPs to its endpoint list (Endpoints), and thereby keep track of the Pods, so that clients reach the Pods through the Service.

  • Gives clients a stable entry point for accessing the Pods behind it
  • Uses labels to track Pod IP changes dynamically
  • Prevents Pods from becoming unreachable
  • Defines access policies for the Pods
  • Associates Pods via a label selector
  • Load-balances across Pods at layer 4 (TCP/UDP)
  • Is implemented underneath by kube-proxy in one of three proxy modes: userspace, iptables, or ipvs

2. kube-proxy's three proxy modes

  • A Kubernetes cluster contains both real networks, such as the node network and the Pod network, which use real IP addresses, and virtual ones, such as the cluster (Service) network, whose virtual IPs never appear on any network interface — they exist only inside Service definitions
  • kube-proxy continuously watches the kube-apiserver for changes to Service-related resources; whenever it learns of a change, it converts that information into rules on the local node that steer traffic for the Service to the right Pods, so that accessing the Service reaches the service the Pods provide
  • kube-proxy supports three proxy modes: userspace, iptables, and ipvs

2.1 userspace mode

The userspace mode is the first-generation kube-proxy mode, supported since Kubernetes v1.0. Its design is illustrated below:

(image unavailable)

For every Service, kube-proxy listens on a random local port (the proxy port) and installs an iptables rule, so that packets addressed to ClusterIP:Port are redirected to that proxy port. When kube-proxy receives a packet on the proxy port, it picks a backend Pod — round robin by default, or session affinity, where requests from the same client IP always go to the same Pod over the same path — and forwards the request to it.
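The backend-selection step described above can be sketched as follows. This is a conceptual Python simulation, not kube-proxy's actual code; the class name and the endpoint addresses are made up for illustration:

```python
import itertools

class UserspaceProxy:
    """Sketch of userspace-mode backend selection: round robin by
    default, or session affinity pinning each client IP to one backend."""
    def __init__(self, endpoints, session_affinity=False):
        self.session_affinity = session_affinity
        self._rr = itertools.cycle(endpoints)   # round-robin iterator
        self._pinned = {}                       # client IP -> backend

    def pick_backend(self, client_ip):
        if self.session_affinity:
            # first request from a client picks a backend; later
            # requests from the same client reuse it
            if client_ip not in self._pinned:
                self._pinned[client_ip] = next(self._rr)
            return self._pinned[client_ip]
        return next(self._rr)

proxy = UserspaceProxy(["10.244.1.10:80", "10.244.2.11:80"])
print([proxy.pick_backend("192.168.0.5") for _ in range(4)])
# round robin alternates between the two backends

sticky = UserspaceProxy(["10.244.1.10:80", "10.244.2.11:80"],
                        session_affinity=True)
print({sticky.pick_backend("192.168.0.5") for _ in range(4)})
# with session affinity the same client always lands on one backend
```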

Because every packet traverses user space in this mode (a Service request first passes from user space into kernel iptables, then back up to user space, where kube-proxy selects the backend Endpoint and proxies the connection), traffic crosses the kernel/user-space boundary repeatedly, which is costly. The mode is therefore inefficient and slow, and its use is discouraged.

2.2 iptables mode

The iptables mode is the second-generation kube-proxy mode, supported since Kubernetes v1.1 and kube-proxy's default since v1.2.

In iptables mode, load balancing is implemented by the kernel's netfilter/iptables rules: kube-proxy uses the Informer watch mechanism to track Service and Endpoint change events in real time and syncs the iptables rules whenever they change.

The iptables mode is illustrated below:

(image unavailable)

As the diagram shows, in iptables mode kube-proxy acts only as a controller, not as a server: the actual forwarding is done by the kernel's netfilter, surfaced to user space as iptables. Overall efficiency is therefore much higher than in userspace mode.


2.3 ipvs mode

The ipvs mode is the third-generation kube-proxy mode. It was introduced in Kubernetes v1.8, reached beta in v1.9, and became generally available in v1.11.

IPVS (IP Virtual Server) is part of the Linux kernel and implements transport-layer (layer-4) load balancing. Running on a host, IPVS acts as a load balancer in front of a group of real servers: it forwards TCP- and UDP-based requests to the real servers, and makes their services appear as a single virtual service on one IP address.

The ipvs mode is illustrated below:

(image unavailable)


Both ipvs and iptables are built on netfilter, so what does ipvs mode do better?

  • ipvs scales and performs better in large clusters
  • ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted variants, and so on)
  • ipvs supports server health checks and connection retries
  • ipset sets can be modified dynamically, even while iptables rules are actively using them

ipvs still depends on iptables: it uses iptables for packet filtering, hairpin-masquerade tricks, SNAT and so on. But instead of generating rule chains through iptables directly, it uses the iptables extension ipset: the source and destination addresses of traffic to DROP or masquerade are stored in ipset sets, which keeps the number of iptables rules constant no matter how many Services or Pods exist.

Why is ipset better than plain iptables? iptables rules form a linear data structure, while ipset introduces indexed data structures, so lookups and matches stay efficient even with very many entries. You can think of an ipset as a set of IPs: its members can be IP addresses, IP ranges, ports, and so on, and iptables rules can match directly against this mutable set, which drastically reduces the number of iptables rules and thus the performance cost.

For example, suppose we want to block tens of thousands of IPs from reaching our server. With plain iptables we would have to add one rule per address, producing a huge rule list; with ipset we simply add the addresses (or ranges) to a set, and a handful of iptables rules achieves the same goal.
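The difference can be illustrated with a small sketch. This is conceptual Python, not real iptables/ipset; it only shows why a set lookup beats a linear rule scan:

```python
# Conceptual sketch: matching a packet's source IP against one
# iptables rule per blocked address is a linear scan, while an
# ipset-backed rule is a single hash-set membership test.
blocked_rules = [f"10.0.{i}.{j}" for i in range(100) for j in range(100)]

def linear_match(ip, rules):
    # one rule per IP, checked in order: O(N) comparisons
    return any(ip == rule for rule in rules)

blocked_ipset = set(blocked_rules)  # one rule + one set: O(1) average

# both approaches give the same answer; only the cost differs
assert linear_match("10.0.42.7", blocked_rules)
assert "10.0.42.7" in blocked_ipset
assert not linear_match("8.8.8.8", blocked_rules)
assert "8.8.8.8" not in blocked_ipset
```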

The table below lists the ipset sets maintained in ipvs mode:

| Set name | Members | Usage |
|---|---|---|
| KUBE-CLUSTER-IP | all Service IPs + ports | masquerade traffic to Service cluster IPs when masquerade-all=true or clusterCIDR is specified, to avoid packet-spoofing problems |
| KUBE-LOOP-BACK | all Service IPs + ports + IPs | resolve hairpin packet-spoofing problems |
| KUBE-EXTERNAL-IP | Service external IPs + ports | masquerade packets to Service external IPs |
| KUBE-LOAD-BALANCER | load-balancer ingress IPs + ports | masquerade packets to LoadBalancer-type Services |
| KUBE-LOAD-BALANCER-LOCAL | load-balancer ingress IPs + ports, with externalTrafficPolicy=local | accept packets to LoadBalancer Services with externalTrafficPolicy=local |
| KUBE-LOAD-BALANCER-FW | load-balancer ingress IPs + ports, with loadBalancerSourceRanges | drop packets for LoadBalancer-type Services that specify loadBalancerSourceRanges |
| KUBE-LOAD-BALANCER-SOURCE-CIDR | load-balancer ingress IPs + ports + source CIDRs | accept packets for LoadBalancer-type Services that specify loadBalancerSourceRanges |
| KUBE-NODE-PORT-TCP | NodePort-type Service TCP ports | masquerade packets to NodePort (TCP) |
| KUBE-NODE-PORT-LOCAL-TCP | NodePort-type Service TCP ports, with externalTrafficPolicy=local | accept packets to NodePort Services with externalTrafficPolicy=local |
| KUBE-NODE-PORT-UDP | NodePort-type Service UDP ports | masquerade packets to NodePort (UDP) |
| KUBE-NODE-PORT-LOCAL-UDP | NodePort-type Service UDP ports, with externalTrafficPolicy=local | accept packets to NodePort Services with externalTrafficPolicy=local |

2.4 iptables vs. ipvs

  • iptables
    • works in kernel space
    • pros
      • flexible and powerful (packets can be manipulated at different stages of processing)
    • cons
      • with many rules in the tables, responses slow down: rule matching and updates are traversed linearly, so latency grows linearly with rule count
  • ipvs
    • works in kernel space
    • pros
      • high forwarding efficiency
      • rich scheduling algorithms: rr, wrr, lc, wlc, ip hash, …
    • cons
      • incomplete support on older kernels; a sufficiently recent kernel (roughly 4.x or later) is required
  • when each mode is used
    • before v1.10, iptables was the usual mode (before v1.2, userspace was used for forwarding)
    • from v1.11 onward both iptables and ipvs are fully supported; iptables remains the default, and if ipvs mode is selected but the required kernel modules are not loaded, kube-proxy automatically falls back to iptables
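The fallback behaviour in the last bullet can be sketched as follows. This is a simplified illustration, not kube-proxy's actual startup logic; the function name is made up:

```python
def choose_proxy_mode(requested: str, ipvs_modules_loaded: bool) -> str:
    """Simplified sketch of kube-proxy mode selection: if ipvs is
    requested but the kernel modules are missing, fall back to
    iptables; otherwise honour the requested mode."""
    if requested == "ipvs" and not ipvs_modules_loaded:
        return "iptables"   # graceful degradation
    return requested

print(choose_proxy_mode("ipvs", ipvs_modules_loaded=False))  # iptables
print(choose_proxy_mode("ipvs", ipvs_modules_loaded=True))   # ipvs
```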

II. Service Types and Creation


1. Service types

The type of a Service determines how it can be accessed.

ClusterIP

  • The default type: allocates a virtual IP reachable only from inside the cluster

NodePort

  • Allocates a port on every node as an entry point for external access
  • The nodePort range is 30000-32767

LoadBalancer

  • Works on a specific cloud provider, e.g. Google Cloud, AWS, OpenStack

ExternalName

  • Maps a service outside the cluster into the cluster, so that in-cluster Pods can talk to the external service

1.2 Service parameters

  • port — the port clients use to access the Service
  • targetPort — the container port inside the Pod
  • nodePort — the node port through which external users reach the in-cluster Service (30000-32767)
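How the three port fields relate can be sketched with a small helper. This is illustrative only; the function name and the addresses are made up, and the nodePort range check mirrors the default documented above:

```python
def describe_route(cluster_ip, node_ip, port, target_port, node_port=None):
    """Sketch of the traffic path implied by a Service's port fields:
    `port` lives on the ClusterIP, `targetPort` on the Pod's container,
    and `nodePort` (30000-32767) on every node for external access."""
    hops = [f"{cluster_ip}:{port} -> pod:{target_port}"]
    if node_port is not None:
        if not 30000 <= node_port <= 32767:
            raise ValueError("nodePort must be in 30000-32767")
        hops.insert(0, f"{node_ip}:{node_port} -> {cluster_ip}:{port}")
    return hops

# ClusterIP-only Service: one hop from the virtual IP to the Pod
print(describe_route("10.96.0.100", "192.168.122.14", 80, 80))
# NodePort Service: external hop prepended
print(describe_route("10.96.0.100", "192.168.122.14", 80, 80, 30001))
```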

2. Creating a Service

In practice a Service is created in one of two ways: from the command line, or from a YAML resource manifest.

2.1 ClusterIP type

Depending on whether a cluster IP is actually allocated, ClusterIP Services are further divided into plain Services and headless Services:

  • Plain Service: Kubernetes assigns the Service a fixed, cluster-internal virtual IP (the cluster IP), providing access from inside the cluster.

  • Headless Service: no cluster IP is allocated, and kube-proxy performs no reverse proxying or load balancing for it. Instead, DNS provides a stable network identity: a headless Service's name resolves directly to the list of backing Pod IPs.
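The distinction can be sketched as follows. This is a conceptual simulation of name resolution, not a real DNS client; the IPs are made up:

```python
def resolve(service, headless=False):
    """Conceptual sketch: a plain ClusterIP Service name resolves to
    its single virtual IP, while a headless Service name resolves
    directly to the backing Pod IPs."""
    if headless:
        return service["pod_ips"]       # one A record per Pod
    return [service["cluster_ip"]]      # one A record: the virtual IP

svc = {"cluster_ip": "10.107.191.239",
       "pod_ips": ["10.244.1.10", "10.244.2.11"]}
print(resolve(svc))         # plain: the single cluster IP
print(resolve(svc, True))   # headless: the Pod IP list
```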


2.2 Creating a plain ClusterIP Service

2.2.1 Creating the Service from the command line

Create a Deployment-type application:

[root@k8s-master01 service]# cat deployment-nginx-01.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name:  nginx-server-01
  namespace: default
  labels:
    app:  nginx01
spec:
  selector:
    matchLabels:
      app: nginx01
  replicas: 2
  template:
    metadata:
      labels:
        app:  nginx01
    spec:
      containers:
      - name:  nginx01
        image:  nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80                # port exposed by the container

Check the syntax (dry run) and apply:

[root@k8s-master01 service]# kubectl apply -f deployment-nginx-01.yaml --dry-run=client
deployment.apps/nginx-server-01 created (dry run)
[root@k8s-master01 service]# kubectl apply -f deployment-nginx-01.yaml
deployment.apps/nginx-server-01 created

Verify the Pods:

[root@k8s-master01 service]# kubectl get pod
NAME                                     READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d16h
nginx-server-01-67bb8b9854-vx7lf         1/1     Running   0          76s
nginx-server-01-67bb8b9854-xvjc8         1/1     Running   0          76s

[root@k8s-master01 service]# kubectl get deployments.apps 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           4d16h
nginx-server-01          2/2     2            2           2m22s

[root@k8s-master01 service]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d16h   10.244.39.234    k8s-worker03   <none>           <none>
nginx-server-01-67bb8b9854-vx7lf         1/1     Running   0          2m48s   10.244.203.205   k8s-worker04   <none>           <none>
nginx-server-01-67bb8b9854-xvjc8         1/1     Running   0          2m48s   10.244.69.214    k8s-worker02   <none>           <none>

You can also inspect this in Kuboard (screenshot not available).

Create the Service:

[root@k8s-master01 service]# kubectl expose deployment nginx-server-01 --type=ClusterIP --target-port=80 --port=80
service/nginx-server-01 exposed

Verify the Service:

[root@k8s-master01 service]# kubectl get service
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP   26d
nginx-server-01   ClusterIP   10.107.191.239   <none>        80/TCP    2m37s

[root@k8s-master01 service]# kubectl describe service nginx-server-01 
Name:              nginx-server-01
Namespace:         default
Labels:            app=nginx01
Annotations:       <none>
Selector:          app=nginx01
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.107.191.239                # the IP used to access the Pods' service
IPs:               10.107.191.239
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.203.205:80,10.244.69.214:80
Session Affinity:  None
Events:            <none>

Access the service provided by the Pods:

[root@k8s-master01 service]# curl http://10.107.191.239
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can also inspect this in Kuboard (screenshot not available).

A Service load-balances across its Pods. To verify this, write a distinct test index page into each of the two Pods:

[root@k8s-master01 service]# kubectl exec -it nginx-server-01-67bb8b9854-vx7lf -- /bin/bash
root@nginx-server-01-67bb8b9854-vx7lf:/# cd /usr/share/nginx/html/
root@nginx-server-01-67bb8b9854-vx7lf:/usr/share/nginx/html# echo "web1" > index.html 
root@nginx-server-01-67bb8b9854-vx7lf:/usr/share/nginx/html# cat index.html 
web1
root@nginx-server-01-67bb8b9854-vx7lf:/usr/share/nginx/html# exit
exit

[root@k8s-master01 service]# kubectl exec -it nginx-server-01-67bb8b9854-xvjc8 -- /bin/bash
root@nginx-server-01-67bb8b9854-xvjc8:/# cd /usr/share/nginx/html/
root@nginx-server-01-67bb8b9854-xvjc8:/usr/share/nginx/html# echo "web2" > index.html 
root@nginx-server-01-67bb8b9854-xvjc8:/usr/share/nginx/html# cat index.html 
web2
root@nginx-server-01-67bb8b9854-xvjc8:/usr/share/nginx/html# exit
exit

Verify the load balancing:

[root@k8s-master01 service]# curl http://10.107.191.239
web2
[root@k8s-master01 service]# curl http://10.107.191.239
web1
[root@k8s-master01 service]# curl http://10.107.191.239
web1
[root@k8s-master01 service]# curl http://10.107.191.239
web1
[root@k8s-master01 service]# curl http://10.107.191.239
web2
[root@k8s-master01 service]# curl http://10.107.191.239
web2
[root@k8s-master01 service]# curl http://10.107.191.239
web1

Test with a loop:

[root@k8s-master01 service]# while true
> do
> curl http://10.107.191.239
> sleep 1
> done
web2
web2
web2
web1
web1
web2
web2
web1
web2
web2
web1
web1
web2
web2
web1

You can also inspect this in Kuboard (screenshots not available).

2.2.2 Creating the Service from a resource manifest

Create the YAML file:

[root@k8s-master01 service]# cat deployment-nginx-02.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name:  nginx-server-02
  namespace: default
  labels:
    app:  nginx02
spec:
  selector:
    matchLabels:
      app: nginx02
  replicas: 2
  template:
    metadata:
      labels:
        app:  nginx02
    spec:
      containers:
      - name:  nginx02
        image:  nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx02
  namespace: default
spec:
  selector:                    # label selector used to match the Pods' labels
    app: nginx02
  type: ClusterIP
  ports:
  - name: nginx02
    protocol: TCP
    port: 80            # Service port
    targetPort: 80            # container port exposed in the Pod

Check the syntax and apply:

[root@k8s-master01 service]# kubectl apply -f deployment-nginx-02.yaml --dry-run=client
deployment.apps/nginx-server-02 created (dry run)
service/nginx02 created (dry run)
[root@k8s-master01 service]# kubectl apply -f deployment-nginx-02.yaml
deployment.apps/nginx-server-02 created
service/nginx02 created

Verify the Pods and the Service:

[root@k8s-master01 service]# kubectl get pod -o  wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d17h   10.244.39.234    k8s-worker03   <none>           <none>
nginx-server-02-84d4bdf94d-9qv2q         1/1     Running   0          26s     10.244.79.101    k8s-worker01   <none>           <none>
nginx-server-02-84d4bdf94d-khsj4         1/1     Running   0          26s     10.244.203.208   k8s-worker04   <none>           <none>

[root@k8s-master01 service]# kubectl get service
NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1        <none>        443/TCP   26d
nginx-server-01   ClusterIP   10.107.191.239   <none>        80/TCP    30m
nginx-server-02   ClusterIP   10.105.237.154   <none>        80/TCP    29s

[root@k8s-master01 service]# kubectl describe service nginx-server-02
Name:              nginx-server-02
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx02
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.105.237.154
IPs:               10.105.237.154
Port:              nginx02  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.203.208:80,10.244.79.101:80
Session Affinity:  None
Events:            <none>

[root@k8s-master01 service]# kubectl get endpoints
NAME                                          ENDPOINTS                                                     AGE
k8s-sigs.io-nfs-subdir-external-provisioner   <none>                                                        4d17h
kubernetes                                    192.168.122.11:6443,192.168.122.12:6443,192.168.122.13:6443   26d
nfs.provisioner                               <none>                                                        4d17h
nginx-server-01                               <none>                                                        37m
nginx-server-02                               10.244.203.208:80,10.244.79.101:80                            6m55s

[root@k8s-master01 service]# curl http://10.105.237.154
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

3. Headless Service

  • A plain ClusterIP Service resolves the Service name to the cluster IP, and the cluster IP in turn maps to the Pod IPs behind it
  • A headless Service resolves the Service name directly to the Pod IPs behind it

3.1 Create a Deployment resource manifest

Write the manifest:

[root@k8s-master01 service]# cat deployment-nginx-03.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name:  nginx-server-03
  namespace: default
  labels:
    app:  nginx03
spec:
  selector:
    matchLabels:
      app: nginx03
  replicas: 2
  template:
    metadata:
      labels:
        app:  nginx03
    spec:
      containers:
      - name:  nginx03
        image:  nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

Check the syntax and apply:

[root@k8s-master01 service]# kubectl apply -f deployment-nginx-03.yaml --dry-run=client
deployment.apps/nginx-server-03 created (dry run)
[root@k8s-master01 service]# kubectl apply -f deployment-nginx-03.yaml
deployment.apps/nginx-server-03 created

Verify:

[root@k8s-master01 service]# kubectl get deployments
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           4d17h
nginx-server-03          2/2     2            2           30s
[root@k8s-master01 service]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d17h   10.244.39.234    k8s-worker03   <none>           <none>
nginx-server-03-5db57fcf66-rs96s         1/1     Running   0          37s     10.244.203.209   k8s-worker04   <none>           <none>
nginx-server-03-5db57fcf66-vzq85         1/1     Running   0          37s     10.244.39.236    k8s-worker03   <none>           <none>

3.2 Create the headless Service from a resource manifest

Write the manifest:

[root@k8s-master01 service]# cat service-nginx-03.yaml 
apiVersion: v1
kind: Service
metadata:
  name: headless-service-nginx03
  namespace: default
spec:
  selector:
    app: nginx03                # label of the backend Pods
  type: ClusterIP                # ClusterIP type
  clusterIP: None                # None makes this a headless Service
  ports:                        # Service port and container port
  - name: nginx03
    protocol: TCP
    port: 80                    # Service port
    targetPort: 80                # Pod container port

Check the syntax and apply:

[root@k8s-master01 service]# kubectl apply -f service-nginx-03.yaml --dry-run=client
service/headless-service-nginx03 created (dry run)
[root@k8s-master01 service]# kubectl apply -f service-nginx-03.yaml
service/headless-service-nginx03 created

Verify:

[root@k8s-master01 service]# kubectl get service
NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
headless-service-nginx03   ClusterIP   None         <none>        80/TCP    68s
kubernetes                 ClusterIP   10.96.0.1    <none>        443/TCP   27d

[root@k8s-master01 service]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE    IP               NODE           NOMINATED NODE   READINESS GATES
bbp                                      1/1     Running   1          6h2m   10.244.79.103    k8s-worker01   <none>           <none>
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          5d     10.244.39.234    k8s-worker03   <none>           <none>
nginx-server-03-5db57fcf66-lrxnm         1/1     Running   0          104s   10.244.69.220    k8s-worker02   <none>           <none>
nginx-server-03-5db57fcf66-vtwzv         1/1     Running   0          104s   10.244.203.219   k8s-worker04   <none>           <none>

[root@k8s-master01 service]# kubectl get endpoints
NAME                                          ENDPOINTS                                                     AGE
headless-service-nginx03                      10.244.203.219:80,10.244.69.220:80                            112s
k8s-sigs.io-nfs-subdir-external-provisioner   <none>                                                        5d
kubernetes                                    192.168.122.11:6443,192.168.122.12:6443,192.168.122.13:6443   27d
nfs.provisioner                               <none>                                                        5d

4. DNS

The cluster DNS service watches the Kubernetes API and creates a DNS record for every Service, used for name resolution.
Headless Services in particular rely on DNS for access.
The record format is: <service-name>.<namespace>.svc.cluster.local
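The record format can be expressed as a one-line helper. cluster.local is the default cluster domain; a cluster may be configured with a different one, so the parameter is an assumption:

```python
def service_fqdn(name: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build a Service's DNS name: <service>.<namespace>.svc.<domain>."""
    return f"{name}.{namespace}.svc.{cluster_domain}"

# matches the name queried with dig later in this section
print(service_fqdn("headless-service-nginx03", "default"))
# headless-service-nginx03.default.svc.cluster.local
```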

4.1 Look up the kube-dns service IP

[root@k8s-master01 service]# kubectl get svc -n kube-system 
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   27d
metrics-server   ClusterIP   10.96.197.146   <none>        443/TCP                  27d

# the CoreDNS service address is 10.96.0.10

4.2 Query the headless Service's DNS records from a cluster host via the DNS service address

[root@k8s-master01 service]# dig -t a headless-service-nginx03.default.svc.cluster.local. @10.96.0.10

; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> -t a headless-service-nginx03.default.svc.cluster.local. @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42334
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 2264e66987bbf0da (echoed)
;; QUESTION SECTION:
;headless-service-nginx03.default.svc.cluster.local. IN A

;; ANSWER SECTION:
headless-service-nginx03.default.svc.cluster.local. 30 IN A 10.244.69.220
headless-service-nginx03.default.svc.cluster.local. 30 IN A 10.244.203.219

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Feb 26 17:52:58 CST 2024
;; MSG SIZE  rcvd: 223

Create a Pod to test access from inside the cluster:

# run a pod from the busybox:1.28 image
[root@k8s-master01 service]# kubectl run -it bbp --image=busybox:1.28
If you don't see a command prompt, try pressing enter.
/ # nslookup headless-service-nginx03.default.svc.cluster.local.
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      headless-service-nginx03.default.svc.cluster.local.
Address 1: 10.244.203.219 10-244-203-219.headless-service-nginx03.default.svc.cluster.local
Address 2: 10.244.69.220 10-244-69-220.headless-service-nginx03.default.svc.cluster.local

/ # ping headless-service-nginx03.default.svc.cluster.local.
PING headless-service-nginx03.default.svc.cluster.local. (10.244.69.220): 56 data bytes
64 bytes from 10.244.69.220: seq=0 ttl=62 time=0.342 ms
64 bytes from 10.244.69.220: seq=1 ttl=62 time=0.289 ms
64 bytes from 10.244.69.220: seq=2 ttl=62 time=0.154 ms
64 bytes from 10.244.69.220: seq=3 ttl=62 time=0.265 ms
64 bytes from 10.244.69.220: seq=4 ttl=62 time=0.226 ms
^C
--- headless-service-nginx03.default.svc.cluster.local. ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.154/0.255/0.342 ms

5. Creating a NodePort Service

A NodePort Service opens in-cluster applications to access from outside the cluster: a port on every node is associated with the backing Pods, so external clients can reach the application through any node's IP.

5.1 Deploy an application with a NodePort Service

Write the YAML file:

[root@k8s-master01 service]# cat nodeport-nginx-04.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name:  nginx-server-04
  namespace: default
  labels:
    app:  nginx04
spec:
  selector:
    matchLabels:
      app: nginx04
  replicas: 2
  template:
    metadata:
      labels:
        app:  nginx04
    spec:
      containers:
      - name:  nginx04
        image:  nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service-nginx04
  namespace: default
spec:
  selector:
    app: nginx04
  type: NodePort
  ports:
  - name: nginx04
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30001

Check the syntax and apply:

[root@k8s-master01 service]# kubectl apply -f nodeport-nginx-04.yaml 
deployment.apps/nginx-server-04 created
service/nodeport-service-nginx04 created

Verify the Pods and the Service:

[root@k8s-master01 service]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
bbp                                      1/1     Running   1          130m    10.244.79.103    k8s-worker01   <none>           <none>
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d20h   10.244.39.234    k8s-worker03   <none>           <none>
nginx-server-04-54c4c6f868-4xjf9         1/1     Running   0          62s     10.244.69.217    k8s-worker02   <none>           <none>
nginx-server-04-54c4c6f868-pw47r         1/1     Running   0          62s     10.244.203.212   k8s-worker04   <none>           <none>

[root@k8s-master01 service]# kubectl get service
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                 ClusterIP   10.96.0.1      <none>        443/TCP        26d
nodeport-service-nginx04   NodePort    10.108.1.242   <none>        80:30001/TCP   78s

Check port 30001 on the cluster nodes; every node in the cluster opens this port:

[root@k8s-master01 service]# ss -anput | grep ":30001"
tcp   LISTEN     0      16384                  0.0.0.0:30001                 0.0.0.0:*     users:(("kube-proxy",pid=4306,fd=17))  

Verify access — any node IP in the cluster plus the port works:

[root@k8s-master01 service]# curl http://k8s-worker01:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

[root@k8s-master01 service]# curl http://k8s-worker02:30001
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

You can also inspect this in Kuboard (screenshot not available).

6. LoadBalancer type

The access path from outside the cluster is:

user → domain name → cloud provider's load-balancer service → NodeIP:Port (Service) → PodIP:port

7. A self-hosted LoadBalancer for Kubernetes: MetalLB

MetalLB provides network load balancing for Services in a Kubernetes cluster. It has two main functions:

  • address allocation, similar to DHCP
  • external announcement: once MetalLB has assigned an external IP address to a Service, it must make the network outside the cluster aware that the IP "exists" in the cluster. MetalLB uses standard routing protocols to do this: ARP, NDP, or BGP
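The address-allocation half can be illustrated with a small sketch that expands a MetalLB-style address range into individual assignable IPs. This is conceptual only; real MetalLB also tracks which addresses are already in use:

```python
import ipaddress

def pool_addresses(pool: str) -> list[str]:
    """Expand a MetalLB-style range "a.b.c.d-a.b.c.e" (inclusive)
    into the individual external IPs available for allocation."""
    start, end = (ipaddress.IPv4Address(p) for p in pool.split("-"))
    return [str(ipaddress.IPv4Address(i))
            for i in range(int(start), int(end) + 1)]

# the same pool configured in the ConfigMap later in this section
addrs = pool_addresses("192.168.122.100-192.168.122.110")
print(len(addrs))   # 11 assignable external IPs
# a LoadBalancer Service would be handed the first free address
```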

Reference: https://metallb.universe.tf/installation/

Switch kube-proxy's forwarding mode to ipvs and enable strictARP:

[root@k8s-master01 service]# kubectl edit configmap -n kube-system kube-proxy
configmap/kube-proxy edited
--------
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # proxy mode
ipvs:
  strictARP: true

7.1 Deploy MetalLB

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Verify the MetalLB deployment:

[root@k8s-master01 ~]# kubectl get ns
NAME               STATUS   AGE
calico-apiserver   Active   26d
calico-system      Active   26d
default            Active   26d
kube-node-lease    Active   26d
kube-public        Active   26d
kube-system        Active   26d
kuboard            Active   26d
metallb-system     Active   59s    # the namespace now exists
test               Active   6d2h
tigera-operator    Active   26d

[root@k8s-master01 ~]# kubectl get pod -n metallb-system   # list the pods in this namespace: one speaker pod is created per node
NAME                          READY   STATUS              RESTARTS   AGE
controller-6884d48c7c-kq4hx   1/1     Running             0          92s
speaker-8qsb7                 0/1     ContainerCreating   0          92s  # still being created
speaker-9jppz                 0/1     ContainerCreating   0          92s
speaker-bgsfh                 0/1     ContainerCreating   0          92s
speaker-dvg5p                 0/1     ContainerCreating   0          92s
speaker-fnnpp                 0/1     ContainerCreating   0          92s
speaker-pq4hg                 0/1     ContainerCreating   0          92s
speaker-sjrjp                 0/1     ContainerCreating   0          92s

[root@k8s-master01 ~]# kubectl get pod -n metallb-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
controller-6884d48c7c-kq4hx   1/1     Running   0          4m30s   10.244.203.213   k8s-worker04   <none>           <none>
speaker-8qsb7                 1/1     Running   0          4m30s   192.168.122.15   k8s-worker02   <none>           <none>
speaker-9jppz                 1/1     Running   0          4m30s   192.168.122.12   k8s-master02   <none>           <none>
speaker-bgsfh                 1/1     Running   0          4m30s   192.168.122.14   k8s-worker01   <none>           <none>
speaker-dvg5p                 1/1     Running   0          4m30s   192.168.122.13   k8s-master03   <none>           <none>
speaker-fnnpp                 1/1     Running   0          4m30s   192.168.122.11   k8s-master01   <none>           <none>
speaker-pq4hg                 1/1     Running   0          4m30s   192.168.122.17   k8s-worker04   <none>           <none>
speaker-sjrjp                 1/1     Running   0          4m30s   192.168.122.16   k8s-worker03   <none>           <none>

[root@k8s-master01 ~]# kubectl get configmaps -n metallb-system   # no config map yet
NAME                DATA   AGE
kube-root-ca.crt    1      5m4s

7.2 Prepare the MetalLB configuration

[root@k8s-master01 ~]# mkdir metallb
[root@k8s-master01 ~]# cd metallb/
[root@k8s-master01 metallb]# cat metallb-conf.yaml 
kind: ConfigMap
apiVersion: v1
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.122.100-192.168.122.110                        # an address pool on the same subnet as the cluster nodes

Check the syntax and apply:

[root@k8s-master01 metallb]# kubectl apply -f metallb-conf.yaml --dry-run=client
configmap/config created (dry run)
[root@k8s-master01 metallb]# kubectl apply -f metallb-conf.yaml
configmap/config created

Confirm the ConfigMap was created:

[root@k8s-master01 metallb]# kubectl get configmaps -n metallb-system 
NAME                DATA   AGE
config              1      80s
kube-root-ca.crt    1      15m

[root@k8s-master01 metallb]# kubectl describe cm config -n metallb-system  # inspect the config
Name:         config
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>

Data
====
config:
----
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.122.100-192.168.122.110

7.3 Publish a Deployment whose Service type is LoadBalancer

Create the Deployment-type application nginx-metallb together with a Service of type LoadBalancer:

[root@k8s-master01 metallb]# cat 01-nginx-metallb.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-metallb
  namespace: default
  labels:
    app: nginx-metallb
spec:
  selector:
    matchLabels:
      app: nginx-metallb
  template:
    metadata:
      labels:
        app: nginx-metallb
    spec:
      containers:
      - name: nginx-metallb
        image: nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-metallb
  namespace: default
spec:
  ports:
  - name: nginx-metallb
    protocol: TCP
    port: 8090
    targetPort: 80
  selector:
    app: nginx-metallb
  type: LoadBalancer

Check the syntax and apply:

[root@k8s-master01 metallb]# kubectl apply -f 01-nginx-metallb.yaml --dry-run=client
deployment.apps/nginx-metallb created (dry run)
service/nginx-metallb created (dry run)
[root@k8s-master01 metallb]# kubectl apply -f 01-nginx-metallb.yaml
deployment.apps/nginx-metallb created
service/nginx-metallb created

Verify the Pods:

[root@k8s-master01 metallb]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
bbp                                      1/1     Running   1          3h5m    10.244.79.103   k8s-worker01   <none>           <none>
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d21h   10.244.39.234   k8s-worker03   <none>           <none>
nginx-metallb-668bbc7ffb-jcscn           1/1     Running   0          47s     10.244.69.218   k8s-worker02   <none>           <none>

Verify the Service:

[root@k8s-master01 metallb]# kubectl get service
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
kubernetes      ClusterIP      10.96.0.1       <none>            443/TCP          26d
nginx-metallb   LoadBalancer   10.107.123.44   192.168.122.100   8090:30968/TCP   2m1s

# An external address, 192.168.122.100, has been assigned to the service; test connectivity
[root@k8s-master01 metallb]# ping -c 4 192.168.122.100
PING 192.168.122.100 (192.168.122.100) 56(84) bytes of data.
64 bytes from 192.168.122.100: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 192.168.122.100: icmp_seq=2 ttl=64 time=0.034 ms
64 bytes from 192.168.122.100: icmp_seq=3 ttl=64 time=0.033 ms
64 bytes from 192.168.122.100: icmp_seq=4 ttl=64 time=0.037 ms

--- 192.168.122.100 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3109ms
rtt min/avg/max/mdev = 0.033/0.040/0.058/0.012 ms

# Verify access to the service
[root@k8s-master01 metallb]# curl http://192.168.122.100:8090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# The service is also reachable via any node IP plus the allocated NodePort, 30968
[root@k8s-master01 metallb]# ss -anput | grep ":30968"
tcp   LISTEN     0      16384                  0.0.0.0:30968                 0.0.0.0:*     users:(("kube-proxy",pid=4306,fd=17))
[root@k8s-master01 metallb]# curl http://k8s-worker02:30968
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

6、The ExternalName type


6.1、What ExternalName does

An ExternalName Service brings a service that lives outside the cluster into the cluster, so that Pods inside the cluster can communicate with the external service.
It is suited to external services addressed by a domain name; one limitation is that no port can be specified.
Also note that Pods inherit the DNS resolution rules of the Node they run on, so any name the Node can resolve is also resolvable from Pods; this is what makes in-cluster access to out-of-cluster services work.
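Cluster DNS gives every Service a name of the form `<service>.<namespace>.svc.cluster.local`, and for an ExternalName Service the answer is simply a CNAME to the configured external name. A toy Python model of that lookup rule, just to illustrate the naming scheme (the resolver table is illustrative, not a real resolver):

```python
# Toy model of cluster DNS behaviour for ExternalName Services:
# the in-cluster name <svc>.<ns>.svc.cluster.local answers with a
# CNAME pointing at the configured externalName.
EXTERNALNAME_SERVICES = {
    ("my-externalname", "default"): "www.baidu.com.",
}

def resolve_cname(fqdn):
    """Return the CNAME target for an in-cluster service name, or None."""
    labels = fqdn.rstrip(".").split(".")
    # expect exactly: <svc>.<ns>.svc.cluster.local
    if len(labels) == 5 and labels[2:] == ["svc", "cluster", "local"]:
        return EXTERNALNAME_SERVICES.get((labels[0], labels[1]))
    return None

print(resolve_cname("my-externalname.default.svc.cluster.local."))  # www.baidu.com.
print(resolve_cname("www.baidu.com"))                               # None (not an in-cluster name)
```

A regular (non-ExternalName) Service name would fall through to an A record instead of a CNAME, which is what the dig examples in this section demonstrate.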

6.2、Bringing a public domain name into the cluster

Write the YAML manifest

[root@k8s-master01 ~]# mkdir externelname
[root@k8s-master01 ~]# cd externelname/
[root@k8s-master01 externelname]# cat externelname.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-externalname
  namespace: default
spec:
  type: ExternalName
  externalName: www.baidu.com

Validate the manifest (dry run) and apply it

[root@k8s-master01 externelname]# kubectl apply -f externelname.yaml --dry-run=client
service/my-externalname created (dry run)
[root@k8s-master01 externelname]# kubectl apply -f externelname.yaml
service/my-externalname created

Verify the result

[root@k8s-master01 externelname]# kubectl get service
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)          AGE
kubernetes        ClusterIP      10.96.0.1       <none>            443/TCP          26d
my-externalname   ExternalName   <none>          www.baidu.com     <none>           2m7s
nginx-metallb     LoadBalancer   10.107.123.44   192.168.122.100   8090:30968/TCP   74m

Check the DNS resolution of my-externalname

[root@k8s-master01 externelname]# dig -t A my-externalname.default.svc.cluster.local  @10.96.0.10

; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> -t A my-externalname.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23137
;; flags: qr aa rd; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: e0eb6f8f9c490128 (echoed)
;; QUESTION SECTION:
;my-externalname.default.svc.cluster.local. IN A

;; ANSWER SECTION:
my-externalname.default.svc.cluster.local. 5 IN CNAME www.baidu.com.
www.baidu.com.          5       IN      CNAME   www.a.shifen.com.
www.a.shifen.com.       5       IN      A       180.101.50.242
www.a.shifen.com.       5       IN      A       180.101.50.188

;; Query time: 20 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Feb 26 16:37:43 CST 2024
;; MSG SIZE  rcvd: 257

Verify the resolution from inside a Pod

[root@k8s-master01 externelname]# kubectl get pod 
NAME                                     READY   STATUS    RESTARTS   AGE
bbp                                      1/1     Running   1          4h55m
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          4d23h
nginx-metallb-668bbc7ffb-tcq4l           1/1     Running   0          85m

[root@k8s-master01 externelname]# kubectl exec -it bbp -- /bin/sh
/ # nslookup www.baidu.com
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      www.baidu.com
Address 1: 2408:873d:22:1a01:0:ff:b087:eecc
Address 2: 2408:873d:22:18ac:0:ff:b021:1393
Address 3: 180.101.50.242
Address 4: 180.101.50.188

/ # nslookup my-externalname.default.svc.cluster.local 
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      my-externalname.default.svc.cluster.local
Address 1: 2408:873d:22:1a01:0:ff:b087:eecc
Address 2: 2408:873d:22:18ac:0:ff:b021:1393
Address 3: 153.3.238.102
Address 4: 153.3.238.110

7、Accessing services across namespaces with ExternalName


7.1、Create the ns1 namespace and its Deployment, Pod, and Services

[root@k8s-master01 externelname]# cat ns1-nginx.yml 
apiVersion: v1                                                  
kind: Namespace                                                 
metadata:                                                             
  name: ns1                                                     # create the ns1 namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx                    
  namespace: ns1                                                # in the ns1 namespace
spec:
  replicas: 1                                  
  selector:
    matchLabels:
      app: nginx                                
  template:                                        
    metadata:
      labels:
        app: nginx                             
    spec:
      containers:                              
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc1                                # service name
  namespace: ns1                            # in the ns1 namespace
spec:
  selector:
    app: nginx
  clusterIP: None                           # headless Service
  ports:
  - port: 80                         
    targetPort: 80                  
---
kind: Service
apiVersion: v1
metadata:
  name: external-svc1
  namespace: ns1                            # in the ns1 namespace
spec:
  type: ExternalName
  externalName: svc2.ns2.svc.cluster.local   # alias the svc2 service in ns2 into the ns1 namespace

7.2、Create the ns2 namespace and its Deployment, Pod, and Services

apiVersion: v1                                                  
kind: Namespace                                                 
metadata:                                                             
  name: ns2                                                     # create the ns2 namespace
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx                    
  namespace: ns2                                                # in the ns2 namespace
spec:
  replicas: 1                                  
  selector:
    matchLabels:
      app: nginx                                
  template:                                        
    metadata:
      labels:
        app: nginx                             
    spec:
      containers:                              
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc2                                # service name
  namespace: ns2                            # in the ns2 namespace
spec:
  selector:
    app: nginx
  clusterIP: None                           # headless Service
  ports:
  - port: 80                         
    targetPort: 80                  
---
kind: Service
apiVersion: v1
metadata:
  name: external-svc1
  namespace: ns2                            # in the ns2 namespace
spec:
  type: ExternalName
  externalName: svc1.ns1.svc.cluster.local   # alias the svc1 service in ns1 into the ns2 namespace

7.3、Validate the manifests (dry run) and apply them

[root@k8s-master01 externelname]# kubectl apply -f ns1-nginx.yml --dry-run=client
namespace/ns1 created (dry run)
deployment.apps/deploy-nginx created (dry run)
service/svc1 created (dry run)
service/external-svc1 created (dry run)
[root@k8s-master01 externelname]# kubectl apply -f ns2-nginx.yml --dry-run=client
namespace/ns2 created (dry run)
deployment.apps/deploy-nginx created (dry run)
service/svc2 created (dry run)
service/external-svc1 created (dry run)
[root@k8s-master01 externelname]# kubectl apply -f ns1-nginx.yml
namespace/ns1 created
deployment.apps/deploy-nginx created
service/svc1 created
service/external-svc1 created
[root@k8s-master01 externelname]# kubectl apply -f ns2-nginx.yml
namespace/ns2 created
deployment.apps/deploy-nginx created
service/svc2 created
service/external-svc1 created

7.4、Verify the namespaces, Pods, and Services

[root@k8s-master01 externelname]# kubectl get service -n ns1
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP                  PORT(S)   AGE
external-svc1   ExternalName   <none>       svc2.ns2.svc.cluster.local   <none>    111s
svc1            ClusterIP      None         <none>                       80/TCP    111s
[root@k8s-master01 externelname]# kubectl get service -n ns2
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP                  PORT(S)   AGE
external-svc1   ExternalName   <none>       svc1.ns1.svc.cluster.local   <none>    109s
svc2            ClusterIP      None         <none>                       80/TCP    109s

[root@k8s-master01 externelname]# kubectl get pod -n ns1 deploy-nginx-6d9d558bb6-65g9k -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
deploy-nginx-6d9d558bb6-65g9k   1/1     Running   0          14m   10.244.69.219   k8s-worker02   <none>           <none>
[root@k8s-master01 externelname]# kubectl get pod -n ns2 deploy-nginx-6d9d558bb6-swvsq -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
deploy-nginx-6d9d558bb6-swvsq   1/1     Running   0          15m   10.244.39.237   k8s-worker03   <none>           <none>

7.5、Verify resolution of svc1

[root@k8s-master01 externelname]# dig -t a svc1.ns1.svc.cluster.local @10.96.0.10

; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> -t a svc1.ns1.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 18556
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 2ca3211db79c7100 (echoed)
;; QUESTION SECTION:
;svc1.ns1.svc.cluster.local.    IN      A

;; ANSWER SECTION:
svc1.ns1.svc.cluster.local. 30  IN      A       10.244.69.219

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Feb 26 16:58:56 CST 2024
;; MSG SIZE  rcvd: 109

# Check the Pod's IP to confirm the name resolved to the correct address
[root@k8s-master01 externelname]# kubectl get pod -n ns1 deploy-nginx-6d9d558bb6-65g9k -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP              NODE           NOMINATED NODE   READINESS GATES
deploy-nginx-6d9d558bb6-65g9k   1/1     Running   0          4m1s   10.244.69.219   k8s-worker02   <none>           <none>

7.6、Verify resolution of svc2

[root@k8s-master01 externelname]# dig -t a svc2.ns2.svc.cluster.local @10.96.0.10

; <<>> DiG 9.11.26-RedHat-9.11.26-6.el8 <<>> -t a svc2.ns2.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 48341
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 8c9590e55395f916 (echoed)
;; QUESTION SECTION:
;svc2.ns2.svc.cluster.local.    IN      A

;; ANSWER SECTION:
svc2.ns2.svc.cluster.local. 30  IN      A       10.244.39.237

;; Query time: 0 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Mon Feb 26 17:01:14 CST 2024
;; MSG SIZE  rcvd: 109

# Check the Pod's IP to confirm the name resolved to the correct address
[root@k8s-master01 externelname]# kubectl get pod -n ns2 deploy-nginx-6d9d558bb6-swvsq -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
deploy-nginx-6d9d558bb6-swvsq   1/1     Running   0          5m34s   10.244.39.237   k8s-worker03   <none>           <none>

7.7、From a Pod in ns1, test DNS resolution and connectivity to the Pod in ns2

[root@k8s-master01 externelname]# kubectl exec -it -n ns1 deploy-nginx-6d9d558bb6-65g9k -- /bin/sh
/ # ping svc1.ns1.svc.cluster.local.
PING svc1.ns1.svc.cluster.local. (10.244.69.219): 56 data bytes
64 bytes from 10.244.69.219: seq=0 ttl=64 time=0.070 ms
64 bytes from 10.244.69.219: seq=1 ttl=64 time=0.036 ms
64 bytes from 10.244.69.219: seq=2 ttl=64 time=0.051 ms
^C
--- svc1.ns1.svc.cluster.local. ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.036/0.052/0.070 ms
/ # nslookup svc1.ns1.svc.cluster.local.
nslookup: can't resolve '(null)': Name does not resolve

Name:      svc1.ns1.svc.cluster.local.
Address 1: 10.244.69.219 deploy-nginx-6d9d558bb6-65g9k

/ # nslookup svc1.ns1.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      svc1.ns1.svc.cluster.local
Address 1: 10.244.69.219 deploy-nginx-6d9d558bb6-65g9k
/ # nslookup svc2.ns2.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve

Name:      svc2.ns2.svc.cluster.local
Address 1: 10.244.39.237 10-244-39-237.svc2.ns2.svc.cluster.local

/ # ping  svc2.ns2.svc.cluster.local
PING svc2.ns2.svc.cluster.local (10.244.39.237): 56 data bytes
64 bytes from 10.244.39.237: seq=0 ttl=62 time=0.185 ms
64 bytes from 10.244.39.237: seq=1 ttl=62 time=0.299 ms
64 bytes from 10.244.39.237: seq=2 ttl=62 time=0.237 ms
64 bytes from 10.244.39.237: seq=3 ttl=62 time=0.373 ms
^C
--- svc2.ns2.svc.cluster.local ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 0.185/0.271/0.373 ms

8、Pinning clients with sessionAffinity


Session stickiness: a Service load-balances across its backends by default. If a given client needs to keep reaching the same backend Pod, use sessionAffinity: set it to ClientIP (similar to nginx's ip_hash algorithm or the LVS sh scheduler, i.e. hashing on the source IP address).
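The two behaviours can be sketched in a few lines of Python (a toy model, not kube-proxy itself; the endpoint list matches the example below, and the default mode is modelled as round-robin for simplicity):

```python
import hashlib
from itertools import cycle

# The two nginx endpoints from the example.
backends = ["10.244.203.217:80", "10.244.39.238:80"]

# Default Service behaviour: requests are spread over all endpoints.
rr = cycle(backends)

def pick_default():
    return next(rr)

# sessionAffinity: ClientIP -- hash the client's source IP so the same
# client always lands on the same backend (cf. nginx ip_hash, LVS sh).
def pick_client_ip(client_ip):
    digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

# Repeated requests from one IP always hit the same backend:
assert len({pick_client_ip("192.168.10.160") for _ in range(10)}) == 1
```

With affinity enabled, which backend a client gets still depends on the hash of its source IP; it is fixed per client, not chosen by the client.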

8.1、Create a Deployment-managed application

[root@k8s-master01 ~]# mkdir sessionaffinity
[root@k8s-master01 sessionaffinity]# cat create-deployment-app-nginx-with-service.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.24.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - name: nginx
    protocol: TCP
    port: 80
    targetPort: 80
  selector:
    app: nginx
  type: ClusterIP

8.2、Validate the manifest (dry run) and apply it

[root@k8s-master01 sessionaffinity]# kubectl apply -f create-deployment-app-nginx-with-service.yaml --dry-run=client
deployment.apps/nginx created (dry run)
service/nginx created (dry run)
[root@k8s-master01 sessionaffinity]# kubectl apply -f create-deployment-app-nginx-with-service.yaml
deployment.apps/nginx created
service/nginx unchanged

8.3、Verify the Pods

[root@k8s-master01 sessionaffinity]# kubectl get deployments.apps 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           5d
nginx                    2/2     2            2           2m56s
[root@k8s-master01 sessionaffinity]# kubectl get pod -o wide
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
bbp                                      1/1     Running   1          5h42m   10.244.79.103    k8s-worker01   <none>           <none>
nfs-client-provisioner-856696f4c-cmlgq   1/1     Running   1          5d      10.244.39.234    k8s-worker03   <none>           <none>
nginx-5998fbf756-4jt4v                   1/1     Running   0          3m4s    10.244.203.217   k8s-worker04   <none>           <none>
nginx-5998fbf756-9dcr7                   1/1     Running   0          36s     10.244.39.238    k8s-worker03   <none>           <none>

8.4、Give the two nginx Pods different index pages so the backends can be told apart during testing

[root@k8s-master01 sessionaffinity]# kubectl exec -it nginx-5998fbf756-4jt4v -- /bin/bash
root@nginx-5998fbf756-4jt4v:/# echo "web1" > /usr/share/nginx/html/index.html 
root@nginx-5998fbf756-4jt4v:/# exit
exit

[root@k8s-master01 sessionaffinity]# kubectl exec -it nginx-5998fbf756-9dcr7 -- /bin/bash
root@nginx-5998fbf756-9dcr7:/# echo "web2" > /usr/share/nginx/html/index.html 
root@nginx-5998fbf756-9dcr7:/# exit

8.5、Test the Service's built-in load balancing

[root@k8s-master01 sessionaffinity]# kubectl get service 
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   27d
nginx        ClusterIP   10.109.71.42   <none>        80/TCP    7m16s
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web1
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web1
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web1

8.6、Suppose we want each client's requests to stick to a single backend

[root@k8s-master01 sessionaffinity]# kubectl describe service nginx
Name:              nginx
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.109.71.42
IPs:               10.109.71.42
Port:              nginx  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.203.217:80,10.244.39.238:80
Session Affinity:  None    # this is the field to change
Events:            <none>

# Change it by patching the Service
[root@k8s-master01 sessionaffinity]# kubectl patch service nginx -p '{"spec":{"sessionAffinity":"ClientIP"}}'
service/nginx patched
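Instead of patching, the affinity can also be declared directly in the Service manifest. A sketch, assuming the same Service as above (`timeoutSeconds: 10800`, i.e. 3 hours of inactivity, is the API default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
  ports:
  - name: nginx
    protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
  sessionAffinity: ClientIP          # pin each client IP to one backend
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800          # affinity expires after 3h of inactivity
```

After applying, `kubectl describe service nginx` should show `Session Affinity: ClientIP`, the same as the patched version below.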

8.7、Test again

[root@k8s-master01 sessionaffinity]# kubectl describe service nginx
Name:              nginx
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.109.71.42
IPs:               10.109.71.42
Port:              nginx  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.203.217:80,10.244.39.238:80
Session Affinity:  ClientIP                            # now changed
Events:            <none>

[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2
[root@k8s-master01 sessionaffinity]# curl http://10.109.71.42
web2