1. Fix insecure kubelet and etcd configurations
Solution:
## Log in to the exam host and switch to root
candidate@base:~$ ssh cks001091
candidate@master01:~$ sudo -i
## Run this question's initialization script (not needed in a real environment)
root@master01:~# sh kubelet-etcd.sh
请稍等3分钟,正在初始化这道题的环境配置。
Please wait for 3 minutes, the environment configuration for this question is being initialized.
## Fix the kubelet issues
root@master01:~# vim /var/lib/kubelet/config.yaml
// Under authentication:, set anonymous to false and webhook to true, and set the authorization mode to Webhook (capital W). In this config file these are on lines 4, 7, and 11 respectively.
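After these edits, the relevant part of /var/lib/kubelet/config.yaml should look roughly like this (a sketch; the clientCAFile path shown is the usual kubeadm default and is an assumption about this environment):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # was true: reject unauthenticated requests
  webhook:
    enabled: true         # was false: delegate authentication to the API server
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt   # typical kubeadm default (assumption)
authorization:
  mode: Webhook           # was AlwaysAllow: delegate authorization to the API server
```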
## Fix the etcd issue
root@master01:~# vim /etc/kubernetes/manifests/etcd.yaml
// Change - --client-cert-auth=false to true (line 18)
Verification:
## Restart the kubelet service
root@master01:~# systemctl daemon-reload
root@master01:~# systemctl restart kubelet.service
## Wait up to 5 minutes, then verify
All pods should be running, except those in the clever-cactus namespace.
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001091 closed.
candidate@base:~$
2. TLS Secret
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000040
root@master01:~#
## In the clever-cactus namespace, create a TLS Secret named clever-cactus for the existing Deployment clever-cactus
root@master01:~# kubectl -n clever-cactus create secret tls clever-cactus --cert=/home/candidate/ca-cert/web.k8s.local.crt --key=/home/candidate/ca-cert/web.k8s.local.key
secret/clever-cactus created
Verification
## Check the secret and the deployment
root@master01:~# kubectl get secrets -n clever-cactus
NAME TYPE DATA AGE
clever-cactus kubernetes.io/tls 2 52s
root@master01:~# kubectl get deployments.apps -n clever-cactus
NAME READY UP-TO-DATE AVAILABLE AGE
clever-cactus 1/1 1 1 115d
root@master01:~# kubectl get deployments.apps -n clever-cactus -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
clever-cactus 1/1 1 1 115d httpd nginx app=clever-cactus
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000040 closed.
candidate@base:~$
3. Dockerfile security best practices
Solution
## Log in to the exam node
candidate@base:~$ ssh cks001094
candidate@master01:~$ sudo -i
## Fix the Dockerfile
root@master01:~# vim /cks/docker/Dockerfile
// On line 17, change USER root to USER nobody
## Fix the manifest file
root@master01:~# vim /cks/docker/deployment.yaml
// On line 54, change readOnlyRootFilesystem: false to true
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001094 closed.
candidate@base:~$
4. Pod accessing /dev/mem
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000026
candidate@node02:~$ sudo -i
## Write the Falco rules file
root@node02:~# vim /etc/falco/falco_rules.local.yaml
// Contents:
# Your custom rules!
- list: mem_file
  items: [/dev/mem]
- rule: devmem
  desc: devmem
  condition: >
    fd.name in (mem_file)
  output: >
    Shell (container_id=%container.id)
  priority: NOTICE
  tags: [file]
## Run the scan
root@node02:~# falco -M 30 -r /etc/falco/falco_rules.local.yaml >> devmem.log
// Output:
Thu Feb 27 14:28:28 2025: Falco version: 0.34.0 (x86_64)
Thu Feb 27 14:28:28 2025: Falco initialized with configuration file: /etc/falco/falco.yaml
Thu Feb 27 14:28:28 2025: Loading rules from file /etc/falco/falco_rules.local.yaml
Rules match ignored syscall: warning (ignored-evttype):
Loaded rules match the following events: write, mlock2, fsconfig, send, getsockname, getpeername, setsockopt, recv, sendmmsg, recvmmsg, semop, getrlimit, page_fault, sendfile, fstat, io_uring_enter, getdents64, mlock, pwrite, ppoll, mlockall, io_uring_register, copy_file_range, getegid, fstat64, semget, munlock, signaldeliver, access, stat64, epoll_wait, lseek, poll, munmap, mprotect, getdents, lstat64, pluginevent, getresgid, getresuid, geteuid, getuid, semctl, munlockall, mmap, read, splice, brk, switch, pwritev, getgid, nanosleep, preadv, writev, readv, pread, mmap2, getcwd, select, llseek, lstat, stat, futex
These events might be associated with syscalls undefined on your architecture (please take a look here: https://marcin.juszkiewicz.com.pl/download/tables/syscalls.html). If syscalls are instead defined, you have to run Falco with `-A` to catch these events
Thu Feb 27 14:28:28 2025: The chosen syscall buffer dimension is: 8388608 bytes (8 MBs)
Thu Feb 27 14:28:28 2025: Starting health webserver with threadiness 2, listening on port 8765
Thu Feb 27 14:28:28 2025: Enabled event sources: syscall
Thu Feb 27 14:28:28 2025: Opening capture with Kernel module
Syscall event drop monitoring:
- event drop detected: 0 occurrences
- num times actions taken: 0
## View the scan results
root@node02:~# cat devmem.log
14:28:33.737638157: Notice Shell (container_id=f3488f5b92c0)
14:28:33.741814822: Notice Shell (container_id=f3488f5b92c0)
14:28:33.741817422: Notice Shell (container_id=f3488f5b92c0)
14:28:43.744160043: Notice Shell (container_id=f3488f5b92c0)
14:28:43.749043538: Notice Shell (container_id=f3488f5b92c0)
14:28:43.749046637: Notice Shell (container_id=f3488f5b92c0)
14:28:53.752097192: Notice Shell (container_id=f3488f5b92c0)
14:28:53.756811233: Notice Shell (container_id=f3488f5b92c0)
14:28:53.756814272: Notice Shell (container_id=f3488f5b92c0)
Events detected: 9
Rule counts by severity:
NOTICE: 9
Triggered rules by rule name:
devmem: 9
## Find the pod with container ID f3488f5b92c0 (in a real environment, use docker ps)
root@node02:~# crictl ps | grep f3488f5b92c0
f3488f5b92c01 27a71e19c9562 3 hours ago Running busybox 7 11223daacb09e cpu-65cf4d685c-lvnqk
## Scale the matching Deployment down to 0 replicas
root@node02:~# kubectl scale deployment cpu --replicas 0
deployment.apps/cpu scaled
## Check the result
root@node02:~# kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
amd-gpu 1/1 1 1 115d
cpu 0/0 0 0 115d
nvidia-gpu 1/1 1 1 115d
Final steps
## Return to the base node
root@node02:~# exit
logout
candidate@node02:~$ exit
logout
Connection to cks000026 closed.
5. Container Security Context
Solution
## Log in to the exam node
candidate@base:~$ ssh cks001097
candidate@master01:~$ sudo -i
## Edit the Deployment manifest
root@master01:~# vim /home/candidate/sec-ns_deployment.yaml
// After imagePullPolicy: IfNotPresent, add the following 4 lines:
        securityContext:                    ## security context
          readOnlyRootFilesystem: true      ## read-only root filesystem
          allowPrivilegeEscalation: false   ## disallow privilege escalation
          runAsUser: 30000                  ## run as user ID 30000
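In context, the added block sits at the same indentation level as the other container fields. A placement sketch (the container name and image here are placeholders; use the ones in sec-ns_deployment.yaml):

```yaml
spec:
  template:
    spec:
      containers:
      - name: sec-ctx-demo            # placeholder: use the name from the exam manifest
        image: busybox                # placeholder image
        imagePullPolicy: IfNotPresent
        securityContext:
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          runAsUser: 30000
```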
## Apply the updated Deployment
root@master01:~# kubectl apply -f /home/candidate/sec-ns_deployment.yaml
deployment.apps/secdep configured
## Check the Deployment
root@master01:~# kubectl get deployments.apps,pod -n sec-ns
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/secdep 1/1 1 1 115d
NAME READY STATUS RESTARTS AGE
pod/secdep-86cb54968-nc9gz 2/2 Running 0 63s
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001097 closed.
candidate@base:~$
6. Log audit
Solution
## Log in to the exam node
candidate@base:~$ ssh cks001098
candidate@master01:~$ sudo -i
## Initialize the environment
root@master01:~# sh log-audit.sh
请稍等3分钟,正在初始化这道题的环境配置。
Please wait for 3 minutes, the environment configuration for this question is being initialized.
## Extend the basic audit policy
root@master01:~# vim /etc/kubernetes/logpolicy/sample-policy.yaml
// Add the following:
# Log persistentvolume changes at the RequestResponse level.
- level: RequestResponse
  resources:
  - group: ""
    resources: ["persistentvolumes"]
# Log the request body of configmap changes in the front-apps namespace.
- level: Request
  resources:
  - group: "" # core API group
    resources: ["configmaps"]
  # This rule applies only to resources in the "front-apps" namespace.
  # The empty string "" can be used to select non-namespaced resources.
  namespaces: ["front-apps"]
# Log configmap and secret changes in all other namespaces at the Metadata level.
- level: Metadata
  resources:
  - group: "" # core API group
    resources: ["secrets", "configmaps"]
# A catch-all rule that logs all other requests at the Metadata level.
- level: Metadata
  # Long-running requests such as watches that match this rule will not
  # generate audit events in the RequestReceived stage.
  omitStages:
  - "RequestReceived"
## Edit the kube-apiserver configuration
root@master01:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
// Below - --authorization-mode=Node,RBAC, add the following
    - --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml   ## the audit policy to use
    - --audit-log-path=/var/log/kubernetes/audit-logs.txt                ## where the log is stored
    - --audit-log-maxage=10                                              ## retain logs for 10 days
    - --audit-log-maxbackup=2                                            ## keep 2 log files
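Note that the kube-apiserver static pod must also be able to read the policy file and write the log. In the exam environment these hostPath mounts are usually pre-configured; if they were missing, entries along these lines would be needed in the same manifest (a sketch, volume names are illustrative):

```yaml
# under the container's volumeMounts:
    - mountPath: /etc/kubernetes/logpolicy/sample-policy.yaml
      name: audit            # illustrative volume name
      readOnly: true
    - mountPath: /var/log/kubernetes/
      name: audit-log        # illustrative volume name
# under the pod's volumes:
  - name: audit
    hostPath:
      path: /etc/kubernetes/logpolicy/sample-policy.yaml
      type: File
  - name: audit-log
    hostPath:
      path: /var/log/kubernetes/
      type: DirectoryOrCreate
```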
## Restart the kubelet service
root@master01:~# systemctl daemon-reload
root@master01:~# systemctl restart kubelet.service
## Wait about 2 minutes, then check that all pods are running
root@master01:~# kubectl get pod -A
## Check the audit log
// If there is output, everything is working
root@master01:~# tail /var/log/kubernetes/audit-logs.txt
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001098 closed.
candidate@base:~$
7. Deny and Allow network policies
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000031
candidate@master01:~$ sudo -i
## Create a NetworkPolicy file
root@master01:~# vim 7.yaml
// Enter the following:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-policy
  namespace: prod
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-prod
  namespace: data
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          env: prod
## Apply the policies
root@master01:~# kubectl apply -f 7.yaml
networkpolicy.networking.k8s.io/deny-policy created
networkpolicy.networking.k8s.io/allow-from-prod created
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000031 closed.
8. Expose an HTTPS service with Ingress
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000032
candidate@master01:~$ sudo -i
## Check the existing Service's port
root@master01:~# kubectl -n prod02 get service web -o yaml | grep prot
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"web"},"name":"web","namespace":"prod02"},"spec":{"ports":[{"name":"80-80","port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"web"},"type":"ClusterIP"}}
protocol: TCP
## Create a manifest file
root@master01:~# vim web.yaml
// Add the following:
root@master01:~# cat web.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: prod02
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - web.k8sng.local
    secretName: web-cert
  rules:
  - host: web.k8sng.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
## Apply the Ingress
root@master01:~# kubectl apply -f web.yaml
ingress.networking.k8s.io/web created
## Check
root@master01:~# kubectl -n prod02 get ingress web
NAME CLASS HOSTS ADDRESS PORTS AGE
web nginx web.k8sng.local 10.110.73.189 80, 443 28s
## Test the result
root@master01:~# curl -Lk https://web.k8sng.local
Hello World ^_^
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000032 closed.
9. Disable automatic mounting of API credentials
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000033
candidate@master01:~$ sudo -i
## Edit the stats-monitor-sa ServiceAccount to disable automatic token mounting
root@master01:~# kubectl -n monitoring edit serviceaccounts stats-monitor-sa
// Add the following at the end:
automountServiceAccountToken: false
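For reference, after this edit the ServiceAccount object looks like this (server-generated metadata omitted):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stats-monitor-sa
  namespace: monitoring
automountServiceAccountToken: false
```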
## Modify the existing stats-monitor Deployment in the monitoring namespace to inject a ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/token
root@master01:~# vim /home/candidate/stats-monitor/deployment.yaml
// Under volumeMounts (line 22), add:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount/token
          name: token
          readOnly: true
// Under volumes (line 31), add:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: token
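Put together, the two additions sit in the pod template like this (a sketch; the container name is a placeholder, use the one in deployment.yaml):

```yaml
spec:
  template:
    spec:
      containers:
      - name: stats-monitor           # placeholder container name
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount/token
          name: token
          readOnly: true
      volumes:
      - name: token
        projected:
          sources:
          - serviceAccountToken:
              path: token
```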
## Apply the update
root@master01:~# kubectl apply -f /home/candidate/stats-monitor/deployment.yaml
deployment.apps/stats-monitor configured
## Check
root@master01:~# kubectl get deployments.apps -n monitoring
NAME READY UP-TO-DATE AVAILABLE AGE
stats-monitor 1/1 1 1 115d
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000033 closed.
candidate@base:~$
10. Upgrade a cluster node
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000034
candidate@master01:~$ ssh node02
candidate@node02:~$ sudo -i
## Upgrade kubelet
root@node02:~# apt install kubelet=1.31.1-1.1 -y
## Restart kubelet
root@node02:~# systemctl daemon-reload
root@node02:~# systemctl restart kubelet.service
## Verify the result
root@node02:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
base Ready <none> 122d v1.31.0
master01 Ready control-plane 122d v1.31.0
node02 Ready <none> 122d v1.31.1
Final steps
## Return to the base node
root@node02:~# exit
logout
candidate@node02:~$ exit
logout
Connection to node02 closed.
candidate@master01:~$ exit
logout
Connection to cks000034 closed.
11. Generate an SPDX document with the bom tool
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000035
candidate@master01:~$ sudo -i
## Inspect the manifest file
root@master01:~# cat /home/candidate/alipine-deployment.yaml
// Key excerpt:
spec:
  containers:
  - name: alpine-a
    image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.20.0
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - while true; do sleep 360000; done
  - name: alpine-b
    image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - while true; do sleep 360000; done
  - name: alpine-c
    image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.16.9
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - while true; do sleep 360000; done
## Find out which version of the alpine image contains the libcrypto3 package at version 3.1.4-r5
// The libcrypto3 package at that version is found in alpine-b
root@master01:~# kubectl -n alpine exec -it alpine-5b9c8fd489-wcjqm -c alpine-a -- apk list | grep libcrypto3
libcrypto3-3.3.0-r2 x86_64 {openssl} (Apache-2.0) [installed]
root@master01:~# kubectl -n alpine exec -it alpine-5b9c8fd489-wcjqm -c alpine-b -- apk list | grep libcrypto3
libcrypto3-3.1.4-r5 x86_64 {openssl} (Apache-2.0) [installed]
root@master01:~# kubectl -n alpine exec -it alpine-5b9c8fd489-wcjqm -c alpine-c -- apk list | grep libcrypto3
root@master01:~#
## Use the bom tool to create an SPDX document at ~/alpine.spdx for the identified image version
root@master01:~# bom generate --image registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1 --output /home/candidate/alipine.spdx
// Output:
INFO bom v0.6.0: Generating SPDX Bill of Materials
INFO Processing image reference: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
INFO Reference registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1 points to a single image
INFO Generating single image package for registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
INFO Package describes image registry.cn-qingdao.aliyuncs.com/containerhub/alpine:6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0
INFO Image manifest lists 1 layers
INFO Scan of container image returned 15 OS packages in layer #0
WARN Document has no name defined, automatically set to SBOM-SPDX-f91c0f79-7864-46d3-b043-c58cdc6378a5
INFO Package SPDXRef-Package-sha256-6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0 has 1 relationships defined
INFO Package SPDXRef-Package-registry.cn-qingdao.aliyuncs.com-containerhub-alpine-6457d53fb065d6f250e1504b9bc42d5b6c65941d57532c072d929dd0628977d0-sha256-4abcf20661432fb2d719aaf90656f55c287f8ca915dc1c92ec14ff61e67fbaf8 has 15 relationships defined
root@master01:~#
## Update the alpine Deployment, deleting the container that uses the identified image version
root@master01:~# vim /home/candidate/alipine-deployment.yaml
// Delete the following block (quick delete: move the cursor to the - name: alpine-b line and type 7dd)
  - name: alpine-b
    image: registry.cn-qingdao.aliyuncs.com/containerhub/alpine:3.19.1
    imagePullPolicy: IfNotPresent
    args:
    - /bin/sh
    - -c
    - while true; do sleep 360000; done
## Apply the updated manifest
root@master01:~# kubectl apply -f /home/candidate/alipine-deployment.yaml
namespace/alpine unchanged
deployment.apps/alpine configured
## Check
root@master01:~# kubectl get deployments.apps,pod -n alpine
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/alpine 1/1 1 1 115d
NAME READY STATUS RESTARTS AGE
pod/alpine-5b9c8fd489-wcjqm 3/3 Terminating 24 (4h39m ago) 115d
pod/alpine-75997c7d75-hpxgs 2/2 Running 0 29s
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000035 closed.
12. Restricted Pod Security Standards
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000036
candidate@master01:~$ sudo -i
## First delete the Deployment, then re-apply it and inspect the warning
root@master01:~# kubectl delete -f /home/candidate/nginx-unprivileged.yaml
deployment.apps "nginx-unprivileged-deployment" deleted
root@master01:~# kubectl apply -f /home/candidate/nginx-unprivileged.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/nginx-unprivileged-deployment created
## Complete the manifest based on the warning
root@master01:~# vim /home/candidate/nginx-unprivileged.yaml
// After line 23 (- containerPort: 8080), add the following:
24         securityContext:
25           seccompProfile:
26             type: RuntimeDefault
27           runAsNonRoot: true
28           allowPrivilegeEscalation: false
29           capabilities:
30             drop: ["ALL"]
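In context, the container spec ends up roughly like this (a sketch; the image name is a placeholder, not taken from the exam file):

```yaml
      containers:
      - name: nginx
        image: nginxinc/nginx-unprivileged:latest   # placeholder image
        ports:
        - containerPort: 8080
        securityContext:
          seccompProfile:
            type: RuntimeDefault
          runAsNonRoot: true
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
```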
## Apply the updated manifest
root@master01:~# kubectl apply -f /home/candidate/nginx-unprivileged.yaml
deployment.apps/nginx-unprivileged-deployment configured
## Check
root@master01:~# kubectl get pod -n confidential
NAME READY STATUS RESTARTS AGE
nginx-unprivileged-deployment-8db94f657-6b68f 1/1 Running 0 54s
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000036 closed.
13. Docker daemon
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000037
candidate@node02:~$ sudo -i
## Check the user's groups, then remove user developer from the docker group
root@node02:~# id developer
uid=1001(developer) gid=0(root) groups=0(root),40(src),100(users),998(docker)
root@node02:~# gpasswd -d developer docker
Removing user developer from group docker
## Make sure the socket file at /var/run/docker.sock is owned by the root group
root@node02:~# vim /usr/lib/systemd/system/docker.socket
// Change SocketGroup=docker to root
root@node02:~# cat /usr/lib/systemd/system/docker.socket
[Unit]
Description=Docker Socket for the API
[Socket]
# If /var/run is not implemented as a symlink to /run, you may need to
# specify ListenStream=/var/run/docker.sock instead.
ListenStream=/run/docker.sock
SocketMode=0660
SocketUser=root
SocketGroup=root ## changed to root
[Install]
WantedBy=sockets.target
## Make sure the service does not listen on any TCP port
root@node02:~# vim /usr/lib/systemd/system/docker.service
// On line 13, delete -H tcp://0.0.0.0:2375
## Restart docker.socket and docker.service
root@node02:~# systemctl daemon-reload
root@node02:~# systemctl restart docker.socket
root@node02:~# systemctl restart docker.service
## Verify the socket is owned by root and nothing is listening on port 2375
root@node02:~# ls -l /var/run/docker.sock
srw-rw---- 1 root root 0 Feb 27 16:23 /var/run/docker.sock
root@node02:~# ss -tuln | grep 2375
root@node02:~#
Final steps
## Return to the base node
root@node02:~# exit
logout
candidate@node02:~$ exit
logout
Connection to cks000037 closed.
14. Cilium network policy
Solution
## Log in to the exam node
candidate@base:~$ ssh cks000039
candidate@master01:~$ sudo -i
## Create the network policy file
root@master01:~# vim 14.yaml
// Enter the following
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "nodebb"
  namespace: nodebb
spec:
  endpointSelector:
    matchLabels:
      app: nodebb
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s.io.kubernetes.pod.namespace: ingress-nginx
    authentication:
      mode: "required"
  - fromEntities:
    - host
## Create it
root@master01:~# kubectl apply -f 14.yaml
ciliumnetworkpolicy.cilium.io/nodebb created
## Verify
root@master01:~# kubectl -n nodebb get ciliumnetworkpolicie
NAME AGE
nodebb 27s
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks000039 closed.
15. ImagePolicyWebhook container image policy
Solution
## Log in to the exam node
candidate@base:~$ ssh cks001094
candidate@master01:~$ sudo -i
## Run the initialization script
root@master01:~# sh imagePolicy.sh
请稍等3分钟,正在初始化这道题的环境配置。
Please wait for 3 minutes, the environment configuration for this question is being initialized.
## Fix the incomplete configuration file
root@master01:~# vim /etc/kubernetes/epconfig/image-policy-config.yaml
// Change defaultAllow: true to false
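For reference, the admission configuration wiring up ImagePolicyWebhook normally follows this shape; depending on the environment, image-policy-config.yaml may be the whole AdmissionConfiguration or just the imagePolicy block. The TTL/backoff values here are illustrative; only defaultAllow: false is required by the task:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/epconfig/kube-config.yaml
      allowTTL: 50          # illustrative value
      denyTTL: 50           # illustrative value
      retryBackoff: 500     # illustrative value
      defaultAllow: false   # the required fix: deny when the webhook is unreachable
```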
## Edit kube-config.yaml to add the HTTPS endpoint address
root@master01:~# vim /etc/kubernetes/epconfig/kube-config.yaml
// On line 6, after server:, add: https://image-bouncer-webhook.default.svc:1323/image_policy
## Edit kube-apiserver.yaml to enable ImagePolicyWebhook
root@master01:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
// On line 24, append ImagePolicyWebhook to the admission plugin list
24 - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
## Restart the kubelet service
root@master01:~# systemctl daemon-reload
root@master01:~# systemctl restart kubelet.service
## Test the image policy (in the exam environment the apply will succeed)
root@master01:~# kubectl apply -f /home/candidate/web1.yaml
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001094 closed.
16. Enable API server authentication
Solution
## Log in to the exam host
candidate@base:~$ ssh cks001092
candidate@master01:~$ sudo -i
## Run the initialization script
root@master01:~# sh api.sh
请稍等3分钟,正在初始化这道题的环境配置。
Please wait for 3 minutes, the environment configuration for this question is being initialized.
## Reconfigure the cluster's API server
root@master01:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
// Disable anonymous authentication; change line 48 to
48 - --anonymous-auth=false
// Use the NodeRestriction admission controller; change line 24 to
24 - --enable-admission-plugins=NodeRestriction
## Restart the kubelet service
root@master01:~# systemctl daemon-reload
root@master01:~# systemctl restart kubelet.service
## Test access to the protected cluster using the original kubectl config at /etc/kubernetes/admin.conf
root@master01:~# kubectl --kubeconfig=/etc/kubernetes/admin.conf get pod -A
## Delete the ClusterRoleBinding required by the question
root@master01:~# kubectl --kubeconfig=/etc/kubernetes/admin.conf delete clusterrolebinding system:anonymous
clusterrolebinding.rbac.authorization.k8s.io "system:anonymous" deleted
Final steps
## Return to the base node
root@master01:~# exit
logout
candidate@master01:~$ exit
logout
Connection to cks001092 closed.