Author: Lao Z, operations architect at the Shandong branch of China Telecom Digital Intelligence Technology Co., Ltd., cloud-native enthusiast, currently focused on cloud-native operations; his cloud-native stack covers Kubernetes, KubeSphere, DevOps, OpenStack, Ansible, and more.
Preface

Test server configuration
Hostname | IP | CPU | Memory (GB) | System Disk (GB) | Data Disk (GB) | Role |
---|---|---|---|---|---|---|
zdeops-master | 192.168.9.9 | 2 | 4 | 40 | 200 | Ansible control node |
ks-k8s-master-0 | 192.168.9.91 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker |
ks-k8s-master-1 | 192.168.9.92 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker |
ks-k8s-master-2 | 192.168.9.93 | 4 | 16 | 40 | 200+200 | KubeSphere/k8s-master/k8s-worker |
storage-node-0 | 192.168.9.95 | 2 | 8 | 40 | 200+200 | ElasticSearch/GlusterFS |
storage-node-1 | 192.168.9.96 | 2 | 8 | 40 | 200+200 | ElasticSearch/GlusterFS |
storage-node-2 | 192.168.9.97 | 2 | 8 | 40 | 200+200 | ElasticSearch/GlusterFS |
harbor | 192.168.9.89 | 2 | 8 | 40 | 200 | Harbor |
Total | 8 hosts | 22 | 84 | 320 | 2800 | |
Software versions in the test environment
- Operating system: CentOS-7.9-x86_64
- Ansible:2.8.20
- KubeSphere:3.3.0
- Kubernetes:v1.24.1
- GlusterFS:9.5.1
- ElasticSearch:7.17.5
- Harbor:2.5.1
Introduction
During a security assessment, a production Kubernetes cluster deployed with KubeSphere 3.3.0 was found to have several vulnerabilities, among them the SSL/TLS information disclosure vulnerability (CVE-2016-2183).
This article describes in detail the cause of the vulnerability, the remediation plan, the remediation procedure, and the points to watch out for.
Vulnerability details and remediation plan
Vulnerability details
The scan report gives the following details for the SSL/TLS information disclosure vulnerability (CVE-2016-2183):


Vulnerability analysis
- Analyzing the scan report, we find that the vulnerability involves the following ports and services:
Port | Service |
---|---|
2379/2380 | etcd |
6443 | kube-apiserver |
10250 | kubelet |
10257 | kube-controller-manager |
10259 | kube-scheduler |
- On a vulnerable node (any master node), check and confirm which services listen on those ports:
# etcd
[root@ks-k8s-master-0 ~]# ss -ntlup | grep etcd | grep -v "127.0.0.1"
tcp LISTEN 0 128 192.168.9.91:2379 *:* users:(("etcd",pid=1341,fd=7))
tcp LISTEN 0 128 192.168.9.91:2380 *:* users:(("etcd",pid=1341,fd=5))
# kube-apiserver
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 6443
tcp LISTEN 0 128 [::]:6443 [::]:* users:(("kube-apiserver",pid=1743,fd=7))
# kubelet
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 10250
tcp LISTEN 0 128 [::]:10250 [::]:* users:(("kubelet",pid=1430,fd=24))
# kube-controller-manager (the process name below is truncated to 15 characters by the kernel)
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 10257
tcp LISTEN 0 128 [::]:10257 [::]:* users:(("kube-controller",pid=19623,fd=7))
# kube-scheduler
[root@ks-k8s-master-0 ~]# ss -ntlup | grep 10259
tcp LISTEN 0 128 [::]:10259 [::]:* users:(("kube-scheduler",pid=1727,fd=7))
- Root cause:
The configuration of the affected services allows cipher suites based on the IDEA, DES, and 3DES algorithms.
- Verify the vulnerability with a test tool:
You can verify with either Nmap or openssl; this article focuses on the Nmap method.
**Note:** the openssl method produces output that is verbose and hard to judge at a glance; if you are interested, try the command
openssl s_client -connect 192.168.9.91:10257 -cipher "DES:3DES"
.
Install the test tool Nmap on any node and run the test commands.
The wrong way, shown only to illustrate that picking the right Nmap version matters; do not actually run this.
# Install nmap from the default CentOS repository
yum install nmap
# Run the ssl-enum-ciphers probe against port 2379
nmap --script ssl-enum-ciphers -p 2379 192.168.9.91
# The output looks like this
Starting Nmap 6.40 ( http://nmap.org ) at 2023-02-13 14:14 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00013s latency).
PORT STATE SERVICE
2379/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 0.30 seconds
Note: the output contains no warnings at all. The reason is that this Nmap is too old; version 7.x or later is required.
The right way, the steps actually performed:
# Download and install the current package from the Nmap site
rpm -Uvh https://nmap.org/dist/nmap-7.93-1.x86_64.rpm
# Run the ssl-enum-ciphers probe against port 2379
# nmap -sV --script ssl-enum-ciphers -p 2379 192.168.9.91 (this variant prints more detail but takes longer; to save space the simpler form below is used)
nmap --script ssl-enum-ciphers -p 2379 192.168.9.91
# The output looks like this
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-13 17:28 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00013s latency).
PORT STATE SERVICE
2379/tcp open etcd-client
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (ecdh_x25519) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
|_ least strength: C
Nmap done: 1 IP address (1 host up) scanned in 0.66 seconds
# Run the ssl-enum-ciphers probe against port 2380
nmap --script ssl-enum-ciphers -p 2380 192.168.9.91
# The output looks like this
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-13 17:28 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00014s latency).
PORT STATE SERVICE
2380/tcp open etcd-server
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (ecdh_x25519) - C
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
|_ least strength: C
Nmap done: 1 IP address (1 host up) scanned in 0.64 seconds
# Run the ssl-enum-ciphers probe against port 6443 (ports 10250/10257/10259 scan identically)
nmap --script ssl-enum-ciphers -p 6443 192.168.9.91
# The output looks like this
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-13 17:29 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00014s latency).
PORT STATE SERVICE
6443/tcp open sun-sr-https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
| TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
| compressors:
| NULL
| cipher preference: server
| warnings:
| 64-bit block cipher 3DES vulnerable to SWEET32 attack
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: C
Nmap done: 1 IP address (1 host up) scanned in 0.66 seconds
Note: the part of the scan output to focus on is the warnings section: "64-bit block cipher 3DES vulnerable to SWEET32 attack".
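As a small convenience (a sketch of my own, not part of the scanner itself), the Nmap text output can also be checked programmatically: the snippet below walks a report, remembers which open port the current ssl-enum-ciphers section belongs to, and reports every port that carries the SWEET32 warning. The sample text is a trimmed version of the scan above.

```python
import re

# Trimmed sample of the nmap ssl-enum-ciphers output shown above.
SAMPLE = """\
PORT     STATE SERVICE
2379/tcp open  etcd-client
| ssl-enum-ciphers:
|   TLSv1.2:
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack
|_  least strength: C
"""

def sweet32_ports(nmap_output):
    """Return the TCP ports whose ssl-enum-ciphers section warns about SWEET32."""
    flagged, current = [], None
    for line in nmap_output.splitlines():
        m = re.match(r"(\d+)/tcp\s+open", line)
        if m:
            current = int(m.group(1))        # section header: remember the port
        elif "SWEET32" in line and current is not None:
            flagged.append(current)          # warning belongs to the last seen port
    return flagged

print(sweet32_ports(SAMPLE))  # → [2379]
```

Feeding it the full multi-port scan text flags every vulnerable port at once, which is handy when re-checking all five services after the fixes below.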
Remediation plan
The fix suggested in the vulnerability scan report does not apply to etcd and the Kubernetes components.
For etcd, Kubernetes, and similar services, the effective fix is to edit the service configuration files and disable the 3DES-related cipher settings.
For choosing the cipher-suites parameter you can refer to the official etcd documentation or IBM's private-cloud documentation; many configurations found online trace back to the IBM document, which you can use as-is if you want to save effort.
For my final choice I took the most straightforward approach: concatenating the cipher values listed in the scan results. Since the impact radius was unclear, I conservatively kept the original set and only removed the 3DES-related entries.
The following summarizes usable values for the cipher-suites parameter.
- Cipher suites from the original scan output:
- TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
- The original scan output minus the 3DES entries:
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
If you use this set, you must configure it in exactly the order below; in my tests, a different order caused the etcd service to restart repeatedly.
ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Although the ciphers are identical, with the order below the etcd service restarted over and over; I spent a long time troubleshooting without pinning down the root cause. It may also be a mistake on my side, but repeated comparison turned up nothing abnormal, so for now I attribute it to the ordering.
ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
Note: only the etcd service turned out to be order-sensitive; the kube components showed no anomaly with a different order.
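The working value does not have to be assembled by hand. As a sketch of my own (the ordering rule is inferred from the list that worked, not taken from etcd documentation), the snippet below starts from the suites found by the scan, drops the 3DES entries (SWEET32) as well as CHACHA20_POLY1305 (which this etcd build rejected; see the common issues section), and then sorts ECDHE ahead of static RSA, GCM ahead of CBC, and AES-128 ahead of AES-256, reproducing the order used above.

```python
# Suites reported by the nmap scan of the etcd ports (the grade-C entries are 3DES).
SCANNED = [
    "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA",
    "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
    "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_256_CBC_SHA",
    "TLS_RSA_WITH_AES_256_GCM_SHA384",
]

def build_suite_string(suites):
    """Drop 3DES (SWEET32) and CHACHA20 (rejected by this etcd build), then
    order: ECDHE before static RSA, GCM before CBC, AES-128 before AES-256."""
    kept = [s for s in suites if "3DES" not in s and "CHACHA20" not in s]
    rank = lambda s: ("ECDHE" not in s, "GCM" not in s, "_256_" in s)
    return ",".join(sorted(kept, key=rank))

# Prints the ETCD_CIPHER_SUITES value from the working configuration above.
print("ETCD_CIPHER_SUITES=" + build_suite_string(SCANNED))
```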
- Cipher suites from the IBM documentation:
The configuration cited most often in articles found online. It also worked well in my tests: all services started normally and etcd showed no restart loop. If you have no special requirements you can adopt this set; the fewer suites you allow, the smaller the chance of another security finding.
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
Fixing the vulnerability
The recommended order of remediation is:
- etcd
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- kubelet
The key point in this sequence is to fix and restart etcd first. The kube components depend on etcd at runtime; in my validation I upgraded etcd last, and when etcd failed to start (restart loop), the other services terminated abnormally because they could not connect to etcd. So make sure etcd runs normally before fixing the other components.
All operations in this article are demonstrated on a single node. When multiple nodes are affected, work component by component: finish one component on all nodes and confirm it is healthy before moving on to the next component.
The steps below are experience I validated hands-on and are for reference only; for production, be sure to verify and test thoroughly before executing!
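For reference, the fix order, listening ports, and configuration files from the sections below can be summarized in one place (paths and the node IP are the ones used in this article; this sketch of mine only prints the per-port verification commands, it does not run them):

```python
NODE = "192.168.9.91"  # example master node from the lab environment above

# Which file to edit per component, and which port to rescan afterwards.
COMPONENTS = {
    "etcd":                    {"ports": [2379, 2380], "config": "/etc/etcd.env"},
    "kube-apiserver":          {"ports": [6443],  "config": "/etc/kubernetes/manifests/kube-apiserver.yaml"},
    "kube-controller-manager": {"ports": [10257], "config": "/etc/kubernetes/manifests/kube-controller-manager.yaml"},
    "kube-scheduler":          {"ports": [10259], "config": "/etc/kubernetes/manifests/kube-scheduler.yaml"},
    "kubelet":                 {"ports": [10250], "config": "/var/lib/kubelet/config.yaml"},
}

def verify_commands(node=NODE):
    """One filtered nmap scan per port; empty output means the port is clean."""
    return [
        f"nmap --script ssl-enum-ciphers -p {port} {node} | grep SWEET32"
        for comp in COMPONENTS.values()
        for port in comp["ports"]
    ]

for cmd in verify_commands():
    print(cmd)
```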
Fixing etcd
- Edit the etcd configuration file /etc/etcd.env:
KubeSphere 3.3.0 deploys etcd as a plain binary; the relevant files are /etc/systemd/system/etcd.service and /etc/etcd.env, and the parameters live in /etc/etcd.env.
# Append the configuration at the end of the file (automated with cat)
cat >> /etc/etcd.env << "EOF"
# TLS CIPHER SUITES settings
ETCD_CIPHER_SUITES=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
EOF
- Restart the etcd service:
# Restart the service
systemctl restart etcd
# Verify the service is up
ss -ntlup | grep etcd
# The correct result looks like this
tcp LISTEN 0 128 192.168.9.91:2379 *:* users:(("etcd",pid=40160,fd=7))
tcp LISTEN 0 128 127.0.0.1:2379 *:* users:(("etcd",pid=40160,fd=6))
tcp LISTEN 0 128 192.168.9.91:2380 *:* users:(("etcd",pid=40160,fd=5))
# Keep watching to make sure the service is not restarting in a loop
watch -n 1 -d 'ss -ntlup | grep etcd'
Note: in a multi-node setup, be sure to edit the configuration file on every node first and then restart the etcd service on all nodes at the same time. etcd is interrupted during the restart; operate with caution in production.
- Verify the fix:
# Run the scan
nmap --script ssl-enum-ciphers -p 2379 192.168.9.91
# The output looks like this
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-14 17:48 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00015s latency).
PORT STATE SERVICE
2379/tcp open etcd-client
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| compressors:
| NULL
| cipher preference: client
|_ least strength: A
Nmap done: 1 IP address (1 host up) scanned in 0.64 seconds
# To save space, the full scan output for port 2380 is omitted; it matches port 2379
# You can instead run a filtered scan: if the command below prints nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 2380 192.168.9.91 | grep SWEET32
Fixing kube-apiserver
- Edit the kube-apiserver manifest /etc/kubernetes/manifests/kube-apiserver.yaml:
# Add the flag below (insert one line after line 47 of the original file)
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
# The result looks like this (line numbers added for clarity instead of a screenshot)
46 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
47 - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
48 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
- Restart kube-apiserver:
No manual restart is needed; it is a static Pod, and Kubernetes restarts it automatically.
- Verify the fix:
# Run the scan
nmap --script ssl-enum-ciphers -p 6443 192.168.9.91
# The output looks like this
Starting Nmap 7.93 ( https://nmap.org ) at 2023-02-14 09:22 CST
Nmap scan report for ks-k8s-master-0 (192.168.9.91)
Host is up (0.00015s latency).
PORT STATE SERVICE
6443/tcp open sun-sr-https
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
| TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
| TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
| TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
| TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: A
Nmap done: 1 IP address (1 host up) scanned in 0.68 seconds
Note: compared with the earlier findings, the scan output no longer contains "64-bit block cipher 3DES vulnerable to SWEET32 attack", which means the fix succeeded.
Fixing kube-controller-manager
- Edit the kube-controller-manager manifest /etc/kubernetes/manifests/kube-controller-manager.yaml:
# Add the flag below (insert one line after line 33 of the original file)
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
# The result looks like this (line numbers added for clarity instead of a screenshot)
33 - --use-service-account-credentials=true
34 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
- Restart kube-controller-manager:
No manual restart is needed; it is a static Pod, and Kubernetes restarts it automatically.
- Verify the fix:
# Run the full scan
nmap --script ssl-enum-ciphers -p 10257 192.168.9.91
# To save space, the full output is omitted; it matches kube-apiserver's
# You can instead run a filtered scan: if the command below prints nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 10257 192.168.9.91 | grep SWEET32
Note: compared with the earlier findings, the scan output no longer contains "64-bit block cipher 3DES vulnerable to SWEET32 attack", which means the fix succeeded.
Fixing kube-scheduler
- Edit the kube-scheduler manifest /etc/kubernetes/manifests/kube-scheduler.yaml:
# Add the flag below (insert one line after line 19 of the original file)
- --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
# The result looks like this (line numbers added for clarity instead of a screenshot)
19 - --leader-elect=true
20 - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
- Restart kube-scheduler:
No manual restart is needed; it is a static Pod, and Kubernetes restarts it automatically.
- Verify the fix:
# Run the full scan
nmap --script ssl-enum-ciphers -p 10259 192.168.9.91
# To save space, the full output is omitted; it matches kube-apiserver's
# You can instead run a filtered scan: if the command below prints nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 10259 192.168.9.91 | grep SWEET32
Note: compared with the earlier findings, the scan output no longer contains "64-bit block cipher 3DES vulnerable to SWEET32 attack", which means the fix succeeded.
Fixing kubelet
- Edit the kubelet configuration file /var/lib/kubelet/config.yaml:
# Append at the end of the configuration file
tlsCipherSuites: [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA]
Tip: for more cipher-suites options, see the official Kubernetes documentation.
- Restart kubelet:
systemctl restart kubelet
Restarting carries risk; proceed with caution!
- Verify the fix:
# Run the full scan
nmap --script ssl-enum-ciphers -p 10250 192.168.9.91
# To save space, the full output is omitted; it matches kube-apiserver's
# You can instead run a filtered scan: if the command below prints nothing, the vulnerability is fixed
nmap --script ssl-enum-ciphers -p 10250 192.168.9.91 | grep SWEET32
Note: compared with the earlier findings, the scan output no longer contains "64-bit block cipher 3DES vulnerable to SWEET32 attack", which means the fix succeeded.
Common issues
etcd fails to start
Error message:
Feb 13 16:17:41 ks-k8s-master-0 etcd: etcd Version: 3.4.13
Feb 13 16:17:41 ks-k8s-master-0 etcd: Git SHA: ae9734ed2
Feb 13 16:17:41 ks-k8s-master-0 etcd: Go Version: go1.12.17
Feb 13 16:17:41 ks-k8s-master-0 etcd: Go OS/Arch: linux/amd64
Feb 13 16:17:41 ks-k8s-master-0 etcd: setting maximum number of CPUs to 4, total number of available CPUs is 4
Feb 13 16:17:41 ks-k8s-master-0 etcd: the server is already initialized as member before, starting as etcd member...
Feb 13 16:17:41 ks-k8s-master-0 etcd: unexpected TLS cipher suite "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
Feb 13 16:17:42 ks-k8s-master-0 systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
Feb 13 16:17:42 ks-k8s-master-0 systemd: Failed to start etcd.
Feb 13 16:17:42 ks-k8s-master-0 systemd: Unit etcd.service entered failed state.
Feb 13 16:17:42 ks-k8s-master-0 systemd: etcd.service failed.
Solution:
Remove the TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 entry from the configuration file. I did not dig into the root cause; a likely explanation is that this etcd binary was built with Go 1.12 (see the version banner above), which only knows this suite under the older name TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, without the _SHA256 suffix.
etcd keeps restarting
Error message (partially omitted):
After editing the configuration file and restarting etcd, the start command itself reported no error, but the service status was abnormal and /var/log/messages contained the following:
Feb 13 16:25:55 ks-k8s-master-0 systemd: etcd.service holdoff time over, scheduling restart.
Feb 13 16:25:55 ks-k8s-master-0 systemd: Stopped etcd.
Feb 13 16:25:55 ks-k8s-master-0 systemd: Starting etcd...
Feb 13 16:25:55 ks-k8s-master-0 etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=https://192.168.9.91:2379
Feb 13 16:25:55 ks-k8s-master-0 etcd: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Feb 13 16:25:55 ks-k8s-master-0 etcd: [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
Feb 13 16:25:55 ks-k8s-master-0 etcd: recognized and used environment variable ETCD_AUTO_COMPACTION_RETENTION=8
..... (omitted)
Feb 13 16:25:58 ks-k8s-master-0 systemd: Started etcd.
Feb 13 16:25:58 ks-k8s-master-0 etcd: serving client requests on 192.168.9.91:2379
Feb 13 16:25:58 ks-k8s-master-0 etcd: serving client requests on 127.0.0.1:2379
Feb 13 16:25:58 ks-k8s-master-0 etcd: accept tcp 127.0.0.1:2379: use of closed network connection
Feb 13 16:25:58 ks-k8s-master-0 systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
Feb 13 16:25:58 ks-k8s-master-0 systemd: Unit etcd.service entered failed state.
Feb 13 16:25:58 ks-k8s-master-0 systemd: etcd.service failed.
Solution:
In practice I hit errors similar to the above in two scenarios:
First, in a multi-node etcd cluster you must edit the etcd configuration file on all nodes first, and then restart the etcd service on all nodes simultaneously.
Second, the ordering of the etcd cipher parameters; after repeated trials settled the final order (see the configuration in the main text), the restart loop did not recur.
This article is published via OpenWrite, a multi-platform blog publishing service!