K8s Getting Started Series: Binary Cluster Deployment --> Master (Part 2)

Contents

Component Versions and Configuration Strategy

Component versions

  • Kubernetes 1.16.2
  • Docker 19.03-ce
  • Etcd 3.3.17 https://github.com/etcd-io/etcd/releases/
  • Flanneld 0.11.0 https://github.com/coreos/flannel/releases/

  • Add-ons:

  • Image registries:
    docker registry
    harbor

Main configuration strategy

  • kube-apiserver:
    3-node high availability via keepalived and haproxy;
    the insecure port 8080 and anonymous access are disabled;
    https requests are served on the secure port 6443;
    strict authentication and authorization policies (x509, token, RBAC);
    bootstrap token authentication is enabled, supporting kubelet TLS bootstrapping;
    kubelet and etcd are accessed over https, encrypting the traffic;

  • kube-controller-manager:
    3-node high availability: one active (leader), two standby;
    the insecure port is closed; https requests are served on the secure port 10252;
    accesses the apiserver's secure port via kubeconfig;
    kubelet certificate signing requests (CSRs) are approved automatically, and certificates are rotated automatically on expiry;
    each controller uses its own ServiceAccount to access the apiserver;

  • kube-scheduler:
    3-node high availability: one active (leader), two standby;
    accesses the apiserver's secure port via kubeconfig;

  • kubelet:
    bootstrap tokens are created dynamically with kubeadm (they can also be configured statically in the apiserver);
    client and server certificates are generated via the TLS bootstrap mechanism and rotated automatically on expiry;
    main parameters are configured in a JSON file of type KubeletConfiguration;
    the read-only port is closed; https requests are served on the secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access;
    accesses the apiserver's secure port via kubeconfig;

  • kube-proxy:
    accesses the apiserver's secure port via kubeconfig;
    main parameters are configured in a JSON file of type KubeProxyConfiguration;
    uses the ipvs proxy mode;

  • Cluster add-ons:

1. System Initialization

1.01 System environment && basic configuration

[root@localhost ~]# uname -a
Linux localhost.localdomain 4.18.0-80.11.2.el8_0.x86_64 #1 SMP Tue Sep 24 11:32:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# cat /etc/redhat-release 
CentOS Linux release 8.0.1905 (Core) 

1.02 Set the hostname on each node and write all entries to /etc/hosts

hostnamectl set-hostname k8s-master01
...
# Write the hosts entries --> note that >> appends without overwriting the existing content!
cat>> /etc/hosts <<EOF

192.168.2.201 k8s-master01
192.168.2.202 k8s-master02
192.168.2.203 k8s-master03
192.168.2.11 k8s-node01
192.168.2.12 k8s-node02
EOF

1.03 Install dependencies and common tools

yum install wget vim yum-utils net-tools tar chrony curl jq ipvsadm ipset conntrack iptables sysstat libseccomp -y

1.04 Disable firewalld, dnsmasq, SELinux, and swap on all nodes

# Disable the firewall and flush its rules
systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT

# Disable dnsmasq, otherwise docker containers may fail to resolve domain names! (not present on CentOS 8!)
systemctl disable --now dnsmasq 

# Disable SELinux ---> SELINUX=disabled takes effect after a reboot!
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Disable swap ---> comments out the swap line in /etc/fstab; takes effect after a reboot!
swapoff -a && sed -i '/ swap / s/^\(.*\)$/# \1/g' /etc/fstab

1.05 Configure time synchronization on all nodes

timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0

yum install chrony -y
systemctl enable chronyd && systemctl start chronyd && systemctl status chronyd

1.06 Tune the kernel parameters required by k8s!

# Load the module first
modprobe br_netfilter
cat> kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max = 6553500
net.nf_conntrack_max = 6553500
net.ipv4.tcp_max_tw_buckets = 4096
EOF
  • net.bridge.bridge-nf-call-iptables=1: packets forwarded by layer-2 bridges are also filtered by the iptables FORWARD rules
  • net.ipv6.conf.all.disable_ipv6=1: disables every IPv6 interface on the system, to avoid triggering a docker bug
  • net.netfilter.nf_conntrack_max: defaults to 65535. Once the server's connection count exceeds it, packets are dropped until the count falls below the limit or entries expire (net.netfilter.nf_conntrack_tcp_timeout_established, default 432000 seconds, i.e. 5 days); packets arriving in the meantime are all dropped.
  • net.ipv4.tcp_max_tw_buckets: defaults to 18000 and caps the number of sockets in the TIME-WAIT state; beyond the limit, new TIME-WAIT sockets are released immediately. Too many TIME-WAIT sockets hurt server performance, so tune this to your workload.
cp kubernetes.conf  /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
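
To confirm the parameters took effect, read them back (a quick sanity check; key names as written above):

sysctl net.bridge.bridge-nf-call-iptables net.netfilter.nf_conntrack_max
# Expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.netfilter.nf_conntrack_max = 6553500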

1.07 Create the k8s working directories and set the environment variables on all nodes!

# Create the directories on every machine:
mkdir -p /opt/k8s/{bin,cert,script,kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy}
mkdir -p /opt/etcd/{bin,cert}
mkdir -p /opt/lib/etcd
mkdir -p /opt/flanneld/{bin,cert}
mkdir -p /root/.kube
mkdir -p /var/log/kubernetes

# Add the environment variables on every machine:
sh -c "echo 'PATH=/opt/k8s/bin:/opt/etcd/bin:/opt/flanneld/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >> /etc/profile.d/k8s.sh"

source /etc/profile.d/k8s.sh

1.08 Passwordless ssh login to the other nodes (for deployment convenience!!!)

Generate a key pair

[root@k8s-master01 ~]# ssh-keygen

Copy the public key to the other servers

[root@k8s-master01 ~]# ssh-copy-id root@k8s-master01
[root@k8s-master01 ~]# ssh-copy-id root@k8s-master02
[root@k8s-master01 ~]# ssh-copy-id root@k8s-master03

2. Create the CA Root Certificate and Key

  • To keep the cluster secure, the kubernetes components use x509 certificates to encrypt and authenticate their communication.
  • The CA (Certificate Authority) is a self-signed root certificate used to sign all other certificates created later.

2.01 Install the cfssl toolset

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master01 ~]# mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master01 ~]# mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

[root@k8s-master01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master01 ~]# mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/*

2.02 Create the root CA

  • The CA certificate is shared by every node in the cluster; create it once, and all subsequent certificates are signed by it.

2.02.01 Create the configuration file

  • The CA config file defines the signing profiles and their parameters (usages, expiry, server auth, client auth, encryption, etc.); a specific profile is referenced later when signing other certificates.
[root@k8s-master01 ~]# cd /opt/k8s/cert/
[root@k8s-master01 cert]# cat> ca-config.json <<EOF
{
    "signing": {
        "default": {
            "expiry": "876000h"
        },
        "profiles": {
            "kubernetes": {
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ],
                "expiry": "876000h"
            }
        }
    }
}
EOF
  • signing: this certificate can sign other certificates; the generated ca.pem contains CA=TRUE;
  • server auth: a client may use this certificate to verify certificates presented by servers;
  • client auth: a server may use this certificate to verify certificates presented by clients;

2.02.02 Create the certificate signing request file

[root@k8s-master01 cert]# cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "steams"
        }
    ]
}
EOF
  • CN: Common Name. kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use it to verify whether a site is legitimate;
  • O: Organization. kube-apiserver extracts this field as the group (Group) the requesting user belongs to;
  • kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

2.02.03 Generate the CA certificate and key

[root@k8s-master01 cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Check that the files were generated!
[root@k8s-master01 cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
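
As an optional sanity check (not part of the original flow), openssl can confirm the certificate really is a CA and show its validity window:

[root@k8s-master01 cert]# openssl x509 -in ca.pem -noout -subject -dates
[root@k8s-master01 cert]# openssl x509 -in ca.pem -noout -text | grep -A1 'Basic Constraints'
# Expect CA:TRUE in the Basic Constraints extension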

2.02.04 Distribute the certificate files

  • A simple script; mind the positional arguments! It can be reused later if you write a consolidated deployment script.
  • Copy the generated CA certificate, key, and config file into /opt/k8s/cert on all nodes:
[root@k8s-master01 cert]# vi /opt/k8s/script/scp_k8s_cacert.sh 

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /opt/k8s/cert/ca*.pem /opt/k8s/cert/ca-config.json root@${master_ip}:/opt/k8s/cert
done
[root@k8s-master01 cert]# bash /opt/k8s/script/scp_k8s_cacert.sh 192.168.2.201 192.168.2.202 192.168.2.203

3. Deploy the etcd Cluster

  • etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and coordination (leader election, distributed locks, etc.)
  • kubernetes stores all of its runtime data in etcd, so we deploy it as a three-node high-availability cluster!

3.01 Download the binaries

[root@k8s-master01 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.17/etcd-v3.3.17-linux-amd64.tar.gz
[root@k8s-master01 ~]# tar -xvf etcd-v3.3.17-linux-amd64.tar.gz 

3.02 Create the etcd certificate and key

  • The etcd cluster communicates with the k8s apiserver, so certificate-based mutual authentication is required!

3.02.01 Create the certificate signing request

[root@k8s-master01 cert]# cat > /opt/etcd/cert/etcd-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.2.201",
        "192.168.2.202",
        "192.168.2.203"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "steams"
        }
    ]
}
EOF
  • The hosts field lists the IPs or domain names of the etcd nodes authorized to use this certificate; all three etcd node IPs are listed here

3.02.02 Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem -ca-key=/opt/k8s/cert/ca-key.pem -config=/opt/k8s/cert/ca-config.json -profile=kubernetes /opt/etcd/cert/etcd-csr.json | cfssljson -bare /opt/etcd/cert/etcd

# Check that the files were generated!
[root@k8s-master01 ~]# ls /opt/etcd/cert/*
etcd.csr       etcd-csr.json  etcd-key.pem   etcd.pem  
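
Optionally, verify that the SANs cover all three etcd node IPs before distributing (a quick openssl check, not in the original flow):

[root@k8s-master01 ~]# openssl x509 -in /opt/etcd/cert/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
# Expect: IP Address:127.0.0.1, IP Address:192.168.2.201, IP Address:192.168.2.202, IP Address:192.168.2.203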

3.02.03 Distribute the generated certificate, private key, and etcd binaries to each etcd node

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_etcd.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /root/etcd-v3.3.17-linux-amd64/etcd* root@${master_ip}:/opt/etcd/bin
        ssh root@${master_ip} "chmod +x /opt/etcd/bin/*"
        scp /opt/etcd/cert/etcd*.pem root@${master_ip}:/opt/etcd/cert/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_etcd.sh 192.168.2.201 192.168.2.202 192.168.2.203

3.03 Create the etcd systemd unit template and etcd configuration

Create the etcd systemd unit template

[root@k8s-master01 ~]# vi /opt/etcd/etcd.service.template

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=root
Type=notify
WorkingDirectory=/opt/lib/etcd/
ExecStart=/opt/etcd/bin/etcd \
    --data-dir=/opt/lib/etcd \
    --name ##ETCD_NAME## \
    --cert-file=/opt/etcd/cert/etcd.pem \
    --key-file=/opt/etcd/cert/etcd-key.pem \
    --trusted-ca-file=/opt/k8s/cert/ca.pem \
    --peer-cert-file=/opt/etcd/cert/etcd.pem \
    --peer-key-file=/opt/etcd/cert/etcd-key.pem \
    --peer-trusted-ca-file=/opt/k8s/cert/ca.pem \
    --peer-client-cert-auth \
    --client-cert-auth \
    --listen-peer-urls=https://##MASTER_IP##:2380 \
    --initial-advertise-peer-urls=https://##MASTER_IP##:2380 \
    --listen-client-urls=https://##MASTER_IP##:2379,http://127.0.0.1:2379 \
    --advertise-client-urls=https://##MASTER_IP##:2379 \
    --initial-cluster-token=etcd-cluster-0    \
    --initial-cluster=etcd0=https://192.168.2.201:2380,etcd1=https://192.168.2.202:2380,etcd2=https://192.168.2.203:2380 \
    --initial-cluster-state=new    
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
  • This example uses a script to substitute the variables --> ##ETCD_NAME##, ##MASTER_IP##
  • WorkingDirectory, --data-dir: the working and data directory, /opt/lib/etcd; it must be created before the service starts;
  • --name: each node's name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: certificate and private key used by the etcd server when talking to clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file, --peer-key-file: certificate and private key used for etcd peer communication;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;

3.04 Create and distribute the etcd systemd unit files for each node

[root@k8s-master01 ~]# vi /opt/k8s/script/etcd_service.sh

ETCD_NAMES=("etcd0" "etcd1" "etcd2")
MASTER_IPS=("$1" "$2" "$3")
#Substitute the template variables to create a systemd unit file for each node
for (( i=0; i < 3; i++ ));do
        sed -e "s/##ETCD_NAME##/${ETCD_NAMES[i]}/g" -e "s/##MASTER_IP##/${MASTER_IPS[i]}/g" /opt/etcd/etcd.service.template > /opt/etcd/etcd-${MASTER_IPS[i]}.service
done
#Distribute the generated systemd unit files:
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/etcd/etcd-${master_ip}.service root@${master_ip}:/etc/systemd/system/etcd.service
done
[root@k8s-master01 ~]# bash /opt/k8s/script/etcd_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

3.05 Start the etcd service

[root@k8s-master01 ~]# vi /opt/k8s/script/etcd.sh

MASTER_IPS=("$1" "$2" "$3")
#Start the etcd service
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        ssh root@${master_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"
done
#Check the startup result; make sure the status is active (running)
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        ssh root@${master_ip} "systemctl status etcd|grep Active"
done
#Verify service health; the cluster is working when every endpoint reports healthy
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
            --endpoints=https://${master_ip}:2379 \
            --cacert=/opt/k8s/cert/ca.pem \
            --cert=/opt/etcd/cert/etcd.pem \
            --key=/opt/etcd/cert/etcd-key.pem endpoint health
done 
[root@k8s-master01 ~]# bash /opt/k8s/script/etcd.sh 192.168.2.201 192.168.2.202 192.168.2.203
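
Beyond endpoint health, etcdctl can also list the members and confirm all three joined the cluster (same TLS flags as the script above):

[root@k8s-master01 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--endpoints=https://192.168.2.201:2379 \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem member list
# Expect three started members: etcd0, etcd1, etcd2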

4. Deploy the flannel Network

  • kubernetes requires that all nodes in the cluster (masters included) can reach each other over the Pod network. flannel uses vxlan to build an interconnected Pod network across nodes; it uses UDP port 8472, which must be opened (e.g. on public clouds such as AWS).
  • On first startup, flannel reads the Pod network configuration from etcd, allocates an unused /24 subnet to the local node, and creates the flannel.1 interface (the name may differ).
  • flannel writes the allocated Pod subnet information to /run/flannel/docker; docker later uses the environment variables in this file to configure the docker0 bridge.

4.01 Download the flannel binaries

[root@k8s-master01 ~]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@k8s-master01 ~]# mkdir flanneld
[root@k8s-master01 ~]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz -C flanneld

4.02 Create the flannel certificate and key

  • flannel reads and writes subnet allocations in etcd, and the etcd cluster enforces mutual x509 authentication, so flanneld needs its own certificate and private key.

4.02.01 Create the certificate signing request

[root@k8s-master01 ~]# cat > /opt/flanneld/cert/flanneld-csr.json <<EOF
{
    "CN": "flanneld",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "steams"
        }
    ]
}
EOF
  • This certificate is only used by flanneld as a client certificate, so the hosts field is empty;

4.02.02 Generate the certificate and key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem -ca-key=/opt/k8s/cert/ca-key.pem -config=/opt/k8s/cert/ca-config.json -profile=kubernetes /opt/flanneld/cert/flanneld-csr.json | cfssljson -bare /opt/flanneld/cert/flanneld

[root@k8s-master01 ~]# ll /opt/flanneld/cert/flanneld*
flanneld.csr       flanneld-csr.json  flanneld-key.pem   flanneld.pem   

4.02.03 Distribute the flanneld binaries and the generated certificate and key to all nodes (including the node machines!)

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_flanneld.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/flanneld/flanneld /root/flanneld/mk-docker-opts.sh root@${master_ip}:/opt/flanneld/bin/
    ssh root@${master_ip} "chmod +x /opt/flanneld/bin/*"
    scp /opt/flanneld/cert/flanneld*.pem root@${master_ip}:/opt/flanneld/cert
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_flanneld.sh 192.168.2.201 192.168.2.202 192.168.2.203

4.03 Write the cluster Pod network configuration into etcd

  • The current flanneld release (v0.11.0) does not support the etcd v3 API, so the config key and subnet data must be written via the etcd v2 API;
  • Pay close attention to etcd version compatibility!!!

Write the cluster Pod network configuration into etcd (run on one node only!)

[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
set /atomic.io/network/config '{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

# Returns the following (the Pod network "Network" written here must be a /16 range and must match kube-controller-manager's --cluster-cidr parameter)
{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

4.04 Create the flanneld systemd unit file

[root@k8s-master01 ~]# vi /opt/flanneld/flanneld.service.template

[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flanneld/bin/flanneld \
-etcd-cafile=/opt/k8s/cert/ca.pem \
-etcd-certfile=/opt/flanneld/cert/flanneld.pem \
-etcd-keyfile=/opt/flanneld/cert/flanneld-key.pem \
-etcd-endpoints=https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379 \
-etcd-prefix=/atomic.io/network \
-iface=eth0
ExecStartPost=/opt/flanneld/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
  • The mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later, it uses the environment variables in this file to configure the docker0 bridge (a sketch of that wiring follows this list);
  • flanneld communicates with other nodes over the interface of the system default route; on nodes with multiple interfaces (e.g. internal and public), use the -iface parameter to pick the interface, as with eth0 above;
  • flanneld must run with root privileges;
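
For reference, docker consumes that file through an EnvironmentFile directive; a minimal systemd drop-in sketch (the variable name matches the -k DOCKER_NETWORK_OPTIONS flag above; the dockerd path is an assumption for your install):

# /etc/systemd/system/docker.service.d/flannel.conf (sketch)
[Service]
EnvironmentFile=-/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS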

4.05 Distribute the flanneld systemd unit file to all nodes, then start and check the flanneld service

[root@k8s-master01 ~]# vi /opt/k8s/script/flanneld_service.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
 #Distribute the flanneld systemd unit file to all nodes
    scp /opt/flanneld/flanneld.service.template root@${master_ip}:/etc/systemd/system/flanneld.service
 #Start the flanneld service
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
 #Check the startup result
    ssh root@${master_ip} "systemctl status flanneld|grep Active"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/flanneld_service.sh 192.168.2.201 192.168.2.202 192.168.2.203
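
On any node you can also confirm that flanneld wrote the docker options file (values vary per node; output shape follows mk-docker-opts.sh):

[root@k8s-master01 ~]# cat /run/flannel/docker
# Expect something like:
# DOCKER_OPT_BIP="--bip=10.30.34.1/24"
# DOCKER_OPT_IPMASQ="--ip-masq=true"
# DOCKER_OPT_MTU="--mtu=1450"
# DOCKER_NETWORK_OPTIONS=" --bip=10.30.34.1/24 --ip-masq=true --mtu=1450"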

4.06 Check the Pod subnets allocated to each flanneld

# View the cluster Pod network (/16)
[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
get /atomic.io/network/config
# Output:
{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

# List the allocated Pod subnets (/24)
[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
ls /atomic.io/network/subnets
# Output:
/atomic.io/network/subnets/10.30.34.0-24
/atomic.io/network/subnets/10.30.41.0-24
/atomic.io/network/subnets/10.30.7.0-24

# Look up the node IP and flannel interface address for a given Pod subnet
[root@k8s-master01 ~]# ETCDCTL_API=2 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flanneld/cert/flanneld.pem \
--key-file=/opt/flanneld/cert/flanneld-key.pem \
get /atomic.io/network/subnets/10.30.34.0-24
# Output:
{"PublicIP":"192.168.2.202","BackendType":"vxlan","BackendData":{"VtepMAC":"e6:b2:85:07:9f:c0"}}

# Verify that the nodes can reach each other over the Pod network; note the subnets in the output above!
[root@k8s-master01 ~]# vi /opt/k8s/script/ping_flanneld.sh
MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
 #After flannel is deployed on each node, check that the flannel interface exists (it may be named flannel0, flannel.0, flannel.1, etc.)
    ssh ${master_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
 #From each node, ping every flannel interface IP to make sure they are reachable
    ssh ${master_ip} "ping -c 1 10.30.34.0"
    ssh ${master_ip} "ping -c 1 10.30.41.0"
    ssh ${master_ip} "ping -c 1 10.30.7.0"
done
# Run it!
[root@k8s-master01 ~]# bash /opt/k8s/script/ping_flanneld.sh 192.168.2.201 192.168.2.202 192.168.2.203

5. Deploy the kubectl Command-Line Tool

  • kubectl is the command-line management tool for the kubernetes cluster
  • By default, kubectl reads the kube-apiserver address, certificate, user name, and so on from ~/.kube/config; without that file, kubectl commands may fail:
  • This only needs to be done once; the generated kubeconfig file is machine-independent.

5.01 Download the kubectl binary and distribute it to all nodes (including the node machines!)

  • kubernetes-server-linux-amd64.tar.gz contains all the components!
[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz

[root@k8s-master01 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 ~]# vi /opt/k8s/script/kubectl_environment.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/kubernetes/server/bin/kubectl root@${master_ip}:/opt/k8s/bin/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/kubectl_environment.sh 192.168.2.201 192.168.2.202 192.168.2.203

5.02 Create the admin certificate and private key

  • kubectl talks to the apiserver's https secure port, and the apiserver authenticates and authorizes the presented certificate.
  • As the cluster's management tool, kubectl needs the highest privileges, so here we create an admin certificate with full permissions.

Create the certificate signing request

[root@k8s-master01 ~]# cat > /opt/k8s/cert/admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:masters",
            "OU": "steams"
        }
    ]
}
EOF
  • O is system:masters: when kube-apiserver receives this certificate, it sets the request's Group to system:masters;
  • the predefined ClusterRoleBinding cluster-admin binds Group system:masters to ClusterRole cluster-admin, which grants access to every API;
  • this certificate is only used by kubectl as a client certificate, so the hosts field is empty;

Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/admin-csr.json | cfssljson -bare /opt/k8s/cert/admin

[root@k8s-master01 ~]# ll /opt/k8s/cert/admin*
admin.csr       admin-csr.json  admin-key.pem   admin.pem  

5.03 Create and distribute the kubeconfig file

5.03.01 Create the kubeconfig file

  • kubeconfig is kubectl's configuration file and contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate;

step.1 Set the cluster parameters. --server=${KUBE_APISERVER} specifies the IP and port; this document uses the haproxy VIP and port. Without a haproxy proxy, use the actual service IP and port!

[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.2.210:8443 \
--kubeconfig=/root/.kube/kubectl.kubeconfig

step.2 Set the client authentication parameters

[root@k8s-master01 ~]# kubectl config set-credentials kube-admin \
--client-certificate=/opt/k8s/cert/admin.pem \
--client-key=/opt/k8s/cert/admin-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kubectl.kubeconfig

step.3 Set the context parameters

[root@k8s-master01 ~]#  kubectl config set-context kube-admin@kubernetes \
--cluster=kubernetes \
--user=kube-admin \
--kubeconfig=/root/.kube/kubectl.kubeconfig

step.4 Set the default context

[root@k8s-master01 ~]# kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig

--certificate-authority: the root certificate used to verify the kube-apiserver certificate;
--client-certificate, --client-key: the admin certificate and private key just generated, used when connecting to kube-apiserver;
--embed-certs=true: embeds the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it, only the certificate file paths are written);

5.03.02 Verify the kubeconfig file

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/root/.kube/kubectl.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.210:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-admin
  name: kube-admin@kubernetes
current-context: kube-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kube-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

5.03.03 Distribute kubectl and the kubeconfig file to every node that runs kubectl commands

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_kubectl_config.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/kubernetes/server/bin/kubectl root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
    scp /root/.kube/kubectl.kubeconfig root@${master_ip}:/root/.kube/config
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_kubectl_config.sh 192.168.2.201 192.168.2.202 192.168.2.203

6. Deploy the Master Nodes

  • The kubernetes master nodes run the following components:
    kube-apiserver
    kube-scheduler
    kube-controller-manager

  • kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others block.
  • Multiple kube-apiserver instances can run at once, but the other components need a single highly available address to reach them. This document uses keepalived and haproxy to give kube-apiserver a VIP with high availability and load balancing.
  • Because the masters sit behind keepalived, any of the three servers may become the active one (if the primary fails, a backup is promoted), so every master step must be performed on all three servers.

Download the latest binaries (you may need a workaround to reach the download site!!!)

[root@k8s-master01 ~]# wget https://dl.k8s.io/v1.16.2/kubernetes-server-linux-amd64.tar.gz

Copy the binaries to all master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_master.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /root/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler} root@${master_ip}:/opt/k8s/bin/
    ssh root@${master_ip} "chmod +x /opt/k8s/bin/*"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_master.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.01 Deploy the high-availability components

  • This section walks through using keepalived and haproxy to make kube-apiserver highly available:
    keepalived provides the VIP through which kube-apiserver is exposed;
    haproxy listens on the VIP and proxies to all kube-apiserver instances, providing health checks and load balancing;
  • Nodes running keepalived and haproxy are called LB nodes. keepalived runs in active/backup mode, so at least two LB nodes are needed.
  • This document reuses the three master machines; haproxy's listen port (8443) must differ from kube-apiserver's port 6443 to avoid a conflict.
  • keepalived periodically checks the local haproxy process; if it detects a failure, it triggers re-election and the VIP floats to the newly elected node, keeping the VIP highly available.
  • All components (kubectl, apiserver, controller-manager, scheduler, etc.) access kube-apiserver through the VIP on haproxy's port 8443.

6.01.01 Install the packages and write the haproxy configuration

[root@k8s-master01 ~]# yum install keepalived haproxy -y

[root@k8s-master01 ~]# vi /etc/haproxy/haproxy.cfg 
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1
defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m
listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth haproxy:123456
    stats hide-version
    stats admin if TRUE
listen k8s-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.2.201 192.168.2.201:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.2.202 192.168.2.202:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.2.203 192.168.2.203:6443 check inter 2000 fall 2 rise 2 weight 1
  • haproxy serves its status page on port 10080;
  • haproxy listens on port 8443 on all interfaces; this port must match the one in the ${KUBE_APISERVER} environment variable;
  • the server lines list the IPs and ports of all the kube-apiserver instances;

6.01.02 Install haproxy on the other servers, push the configuration, then start and check the haproxy service

[root@k8s-master01 ~]# vi /opt/k8s/script/haproxy.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
 #Install haproxy
    ssh root@${master_ip} "yum install -y keepalived haproxy"
 #Push the configuration file
    scp /etc/haproxy/haproxy.cfg root@${master_ip}:/etc/haproxy
 #Start and check the haproxy service
    ssh root@${master_ip} "systemctl restart haproxy"
    ssh root@${master_ip} "systemctl enable haproxy.service"
    ssh root@${master_ip} "systemctl status haproxy|grep Active"
 #Check that haproxy is listening on port 8443
    ssh root@${master_ip} "netstat -lnpt|grep haproxy"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/haproxy.sh 192.168.2.201 192.168.2.202 192.168.2.203

The output resembles:

Active: active (running) since Tue 2019-11-12 01:54:41 CST; 543ms ago
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      4995/haproxy        
tcp        0      0 0.0.0.0:10080           0.0.0.0:*               LISTEN      4995/haproxy   

6.01.03 Configure and start the keepalived service

  • keepalived runs with one master (MASTER) and multiple backups (BACKUP), so there are two kinds of configuration files.
  • There is a single master configuration; the number of backup configurations depends on the node count. For this document the plan is:
    master: 192.168.2.201
    backup: 192.168.2.202, 192.168.2.203

On 192.168.2.201, the master; configuration file:

[root@k8s-master01 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
    router_id keepalived_ha_121
}
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}
vrrp_instance VI-k8s-master {
    state MASTER
    priority 120    # the first backup uses 110, and so on!
    dont_track_primary
    interface eth0
    virtual_router_id 121
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        192.168.2.210
    }
}
  • The interface holding my VIP is eth0; adjust it to your environment
  • killall -0 haproxy checks whether the local haproxy process is alive; on failure the weight is reduced (-30), which triggers re-election;
  • router_id and virtual_router_id identify the keepalived instances that belong to this HA group; if you run multiple keepalived HA groups, each must use distinct values;

On the two backups, 192.168.2.202 and 192.168.2.203; configuration file:

[root@k8s-master02 ~]# vi /etc/keepalived/keepalived.conf

global_defs {
        router_id keepalived_ha_122_123
}
vrrp_script check-haproxy {
        script "killall -0 haproxy"
        interval 5
        weight -30
}
vrrp_instance VI-k8s-master {
        state BACKUP
        priority 110   # the second backup uses 100
        dont_track_primary
        interface eth0
        virtual_router_id 121
        advert_int 3
        track_script {
        check-haproxy
        }
        virtual_ipaddress {
            192.168.2.210
        }
}
  • priority must be lower than the master's value, and the two backups' values must also differ from each other;

Start the keepalived service

[root@k8s-master01 ~]# systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
[root@k8s-master01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:00:68:05 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.201/24 brd 192.168.2.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 192.168.2.210/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f726:9d22:2b89:694c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
    link/ether aa:43:e5:bb:88:28 brd ff:ff:ff:ff:ff:ff
    inet 10.30.34.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::a843:e5ff:febb:8828/64 scope link 
       valid_lft forever preferred_lft forever
  • On the master server, the VIP address is now visible on the eth0 interface
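
A quick failover test (optional; interface name and IPs follow this document's plan): stop haproxy on the VIP holder and watch the VIP move to a backup.

# On k8s-master01 (current VIP holder):
systemctl stop haproxy    # check-haproxy fails and the priority drops by 30
# On k8s-master02, after a few advert intervals:
ip addr show eth0 | grep 192.168.2.210
# Restore the original state afterwards:
systemctl start haproxy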

6.01.04 View the haproxy status page

  • Open 192.168.2.210:10080/status in a browser

6.02 Deploy the kube-apiserver component

Download the binaries

  • Included in the kubernetes-server package, already extracted to /opt/k8s/bin

6.02.01 Create the kube-apiserver certificate and private key

Create the certificate signing request

[root@k8s-master01 ~]# cat > /opt/k8s/cert/kube-apiserver-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "192.168.2.201",
        "192.168.2.202",
        "192.168.2.203",
        "192.168.2.210",
        "10.96.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "steams"
        }
    ]
}
EOF
  • The hosts field lists the IPs and domain names authorized to use this certificate; here it contains the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
  • A domain name must not end with a dot (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise resolution fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
  • If you use a domain other than cluster.local, e.g. opsnull.com, change the last two names in the list to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com
  • The kubernetes service IP is created automatically by the apiserver, normally the first IP of the range given by --service-cluster-ip-range; it can be retrieved later with:
    kubectl get svc kubernetes

Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-apiserver-csr.json | cfssljson -bare /opt/k8s/cert/kube-apiserver

[root@k8s-master01 ~]# ll /opt/k8s/cert/kube-apiserver*
kube-apiserver.csr      kube-apiserver-csr.json  kube-apiserver-key.pem  kube-apiserver.pem 

6.02.02 Create the encryption configuration file

Generate a key used to encrypt data stored in etcd:

[root@k8s-master01 ~]# head -c 32 /dev/urandom | base64
# Returns a key; every master node must use the same key!!!
muqIUutYDd5ARLtsg/W1CYWs3g8Fq9uJO/lDpSsv9iw=

Create the encryption config file with this key

[root@k8s-master01 ~]# vi encryption-config.yaml

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: muqIUutYDd5ARLtsg/W1CYWs3g8Fq9uJO/lDpSsv9iw=
      - identity: {}

6.02.03 Copy the generated certificate, private key, and encryption config to /opt/k8s on the master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_apiserver.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo  ">>> ${master_ip}"
    scp /opt/k8s/cert/kube-apiserver*.pem root@${master_ip}:/opt/k8s/cert/
    scp /root/encryption-config.yaml root@${master_ip}:/opt/k8s/
done 
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_apiserver.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.02.04 Create the kube-apiserver systemd unit template

[root@k8s-master01 ~]# vi /opt/k8s/kube-apiserver/kube-apiserver.service.template

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \
--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--experimental-encryption-provider-config=/opt/k8s/encryption-config.yaml \
--advertise-address=##MASTER_IP## \
--bind-address=##MASTER_IP## \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/k8s/cert/kube-apiserver.pem \
--tls-private-key-file=/opt/k8s/cert/kube-apiserver-key.pem \
--client-ca-file=/opt/k8s/cert/ca.pem \
--kubelet-client-certificate=/opt/k8s/cert/kube-apiserver.pem \
--kubelet-client-key=/opt/k8s/cert/kube-apiserver-key.pem \
--service-account-key-file=/opt/k8s/cert/ca-key.pem \
--etcd-cafile=/opt/k8s/cert/ca.pem \
--etcd-certfile=/opt/k8s/cert/kube-apiserver.pem \
--etcd-keyfile=/opt/k8s/cert/kube-apiserver-key.pem \
--etcd-servers=https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
  • --experimental-encryption-provider-config: enables encryption at rest;
  • --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes, rejecting unauthorized requests;
  • --enable-admission-plugins: enables the listed admission plugins, including ServiceAccount and NodeRestriction;
  • --service-account-key-file: the public key that verifies ServiceAccount tokens; it pairs with the private key that kube-controller-manager receives via --service-account-private-key-file;
  • --tls-*-file: the certificate, private key, and CA files used by the apiserver. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the certificate's user (the kube-apiserver*.pem certificate above belongs to user kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
  • --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
  • --insecure-port=0: closes the insecure port (8080);
  • --service-cluster-ip-range: the Service Cluster IP address range;
  • --service-node-port-range: the NodePort port range;
  • --runtime-config=api/all: enables APIs of all versions, e.g. autoscaling/v2alpha1;
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrapping;
  • --apiserver-count=3: the number of apiserver instances in the cluster; all instances serve requests concurrently (unlike controller-manager and scheduler, the apiserver does not use leader election);

6.02.05 Create and distribute the kube-apiserver systemd unit files to each master node; start and check the kube-apiserver service

[root@k8s-master01 ~]# vi /opt/k8s/script/apiserver_service.sh

MASTER_IPS=("$1" "$2" "$3")
#Substitute the template variables to create a systemd unit file for each node
for (( i=0; i < 3; i++ ));do
    sed "s/##MASTER_IP##/${MASTER_IPS[i]}/" /opt/k8s/kube-apiserver/kube-apiserver.service.template > /opt/k8s/kube-apiserver/kube-apiserver-${MASTER_IPS[i]}.service
done
#Start and check the kube-apiserver service
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /opt/k8s/kube-apiserver/kube-apiserver-${master_ip}.service root@${master_ip}:/etc/systemd/system/kube-apiserver.service
    ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
    ssh root@${master_ip} "systemctl status kube-apiserver |grep 'Active:'"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/apiserver_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.02.06 Print the data kube-apiserver wrote to etcd

[root@k8s-master01 ~]# ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.2.201:2379,https://192.168.2.202:2379,https://192.168.2.203:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/ --prefix --keys-only
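
To confirm the EncryptionConfig is effective, create a throwaway secret and read its raw value from etcd; it should be stored with a k8s:enc:aescbc:v1: prefix rather than in plaintext (an optional sanity check, not part of the original flow):

[root@k8s-master01 ~]# kubectl create secret generic enc-test --from-literal=foo=bar
[root@k8s-master01 ~]# ETCDCTL_API=3 etcdctl \
--endpoints=https://192.168.2.201:2379 \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/secrets/default/enc-test --print-value-only | head -c 40
# Expect output beginning with: k8s:enc:aescbc:v1:key1
[root@k8s-master01 ~]# kubectl delete secret enc-test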

6.02.07 Check the cluster information

[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.2.210:8443

[root@k8s-master01 ~]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   49m

# 6443: the secure port for https requests; every request is authenticated and authorized;
# Since the insecure port is closed, nothing listens on 8080;
[root@k8s-master01 ~]# ss -nutlp |grep apiserver
tcp    LISTEN   0        128         192.168.2.201:6443           0.0.0.0:*      users:(("kube-apiserver",pid=4425,fd=6)) 
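
As one more check, the secure port answers /healthz through the VIP when presented with a trusted client certificate (here the admin cert from section 5.02):

[root@k8s-master01 ~]# curl -s --cacert /opt/k8s/cert/ca.pem \
--cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem \
https://192.168.2.210:8443/healthz
ok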

6.03 Deploy the highly available kube-controller-manager cluster

  • The cluster has 3 nodes; after startup, leader election picks one leader while the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new one, keeping the service available.
  • To secure communication, this document first generates an x509 certificate and private key. kube-controller-manager uses the certificate in two situations:
    when talking to kube-apiserver's secure port;
    when serving prometheus-format metrics on its secure port (https, 10252);

Preparation: download the kube-controller-manager binary (included in the kubernetes-server package, already extracted and distributed)

6.03.01 Create the kube-controller-manager certificate and private key

Create the certificate signing request:

[root@k8s-master01 ~]# cat > /opt/k8s/cert/kube-controller-manager-csr.json <<EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
        "127.0.0.1",
        "192.168.2.201",
        "192.168.2.202",
        "192.168.2.203"
    ],
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:kube-controller-manager",
            "OU": "steams"
        }
    ]
}
EOF
  • The hosts list contains all kube-controller-manager node IPs;
  • CN is system:kube-controller-manager and O is system:kube-controller-manager (the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.)

Generate the certificate and private key

cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-controller-manager-csr.json | cfssljson -bare /opt/k8s/cert/kube-controller-manager

[root@k8s-master01 ~]# ll /opt/k8s/cert/kube-controller-manager*
kube-controller-manager.csr       kube-controller-manager-csr.json  kube-controller-manager-key.pem   kube-controller-manager.pem  

6.03.02 Create the kubeconfig file

  • The kubeconfig file contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate;

Run the following to generate the kube-controller-manager.kubeconfig file

# step.1 Set the cluster parameters:
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.2.210:8443 \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

# step.2 Set the client authentication parameters
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/k8s/cert/kube-controller-manager.pem \
--client-key=/opt/k8s/cert/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

# step.3 Set the context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

# step.4 Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig

Verify the kube-controller-manager.kubeconfig file

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.210:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Distribute the generated certificate, private key, and kubeconfig to all master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_controller_manager.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
    echo ">>> ${master_ip}"
    scp /opt/k8s/cert/kube-controller-manager*.pem root@${master_ip}:/opt/k8s/cert/
    scp /opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig root@${master_ip}:/opt/k8s/kube-controller-manager/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_controller_manager.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.03.03 Create and distribute the kube-controller-manager systemd unit file

[root@k8s-master01 ~]# vi /opt/k8s/kube-controller-manager/kube-controller-manager.service.template

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \
--port=0 \
--secure-port=10252 \
--bind-address=127.0.0.1 \
--kubeconfig=/opt/k8s/kube-controller-manager/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/k8s/cert/ca.pem \
--cluster-signing-key-file=/opt/k8s/cert/ca-key.pem \
--experimental-cluster-signing-duration=8760h \
--root-ca-file=/opt/k8s/cert/ca.pem \
--service-account-private-key-file=/opt/k8s/cert/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/opt/k8s/cert/kube-controller-manager.pem \
--tls-private-key-file=/opt/k8s/cert/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • --port=0: closes the http /metrics port; --address then has no effect and --bind-address takes effect;
  • --secure-port=10252, --bind-address=127.0.0.1: serve https /metrics requests on 127.0.0.1:10252 (per the unit file above; binding 0.0.0.0 would expose the port on all interfaces);
  • --kubeconfig: path to the kubeconfig file that kube-controller-manager uses to connect to and authenticate with kube-apiserver;
  • --cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: validity period of TLS Bootstrap certificates;
  • --root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify the kube-apiserver certificate;
  • --service-account-private-key-file: the private key that signs ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range; must match the same parameter on kube-apiserver;
  • --leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;
  • --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
  • --controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner automatically removes expired bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics-related parameters, supporting autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: each controller runs with its own ServiceAccount credentials (see 6.03.04);

6.03.04 kube-controller-manager permissions

  • The ClusterRole system:kube-controller-manager itself has very few permissions, only enough to create secrets, serviceaccounts, and similar objects; each controller's permissions are split out into ClusterRole system:controller:XXX.
  • --use-service-account-credentials=true must be added to kube-controller-manager's startup parameters so that the main controller creates a ServiceAccount XXX-controller for each controller.
  • The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the matching ClusterRole system:controller:XXX permissions.

6.03.05 Distribute the systemd unit file to all master nodes; start and check the kube-controller-manager service

[root@k8s-master01 ~]# vi /opt/k8s/script/controller_manager_service.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/k8s/kube-controller-manager/kube-controller-manager.service.template root@${master_ip}:/etc/systemd/system/kube-controller-manager.service
        ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager "
        ssh root@${master_ip} "systemctl status kube-controller-manager|grep Active"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/controller_manager_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.03.06 View the exposed metrics

[root@k8s-master01 ~]# ss -nutlp |grep kube-controll
tcp    LISTEN   0        128             127.0.0.1:10252          0.0.0.0:*      users:(("kube-controller",pid=9382,fd=6))  
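
The secure port can be probed directly; /healthz is typically served without extra authorization, while /metrics over https requires an authorized client (a quick check, assuming the defaults above):

[root@k8s-master01 ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://127.0.0.1:10252/healthz
ok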

6.03.07 Test kube-controller-manager cluster high availability

  • Stop the kube-controller-manager service on one or two nodes and watch the other nodes' logs to see whether a new node acquires leadership.
  • View the current leader
[root@k8s-master02 ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml
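
The leader is recorded in the control-plane.alpha.kubernetes.io/leader annotation of that Endpoints object; the output looks roughly like this (holderIdentity names the current leader; values are illustrative):

apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master01_<id>","leaseDurationSeconds":15,"renewTime":"..."}'
  name: kube-controller-manager
  namespace: kube-system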

6.04 Deploy the highly available kube-scheduler cluster

  • The cluster has 3 nodes; after startup, leader election picks one leader while the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new one, keeping the service available.
  • To secure communication, this document first generates an x509 certificate and private key. kube-scheduler uses the certificate in two situations:
    when talking to kube-apiserver's secure port;
    when serving prometheus-format metrics on its secure port (https, 10259);

Preparation: download the kube-scheduler binary (see above ^^^)

6.04.01 Create the kube-scheduler certificate and private key

Create the certificate signing request:

[root@k8s-master01 ~]# cat > /opt/k8s/cert/kube-scheduler-csr.json <<EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.2.201",
      "192.168.2.202",
      "192.168.2.203"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "steams"
      }
    ]
}
EOF
  • The hosts list contains all kube-scheduler node IPs;
  • CN is system:kube-scheduler and O is system:kube-scheduler (the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.)

Generate the certificate and private key

[root@k8s-master01 ~]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes /opt/k8s/cert/kube-scheduler-csr.json | cfssljson -bare /opt/k8s/cert/kube-scheduler

[root@k8s-master01 ~]# ll /opt/k8s/cert/kube-scheduler*
kube-scheduler.csr       kube-scheduler-csr.json  kube-scheduler-key.pem   kube-scheduler.pem  

6.04.02 Create the kubeconfig file

  • The kubeconfig file contains everything needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate;

Run the following to generate the kube-scheduler.kubeconfig file

# step.1 Set the cluster parameters
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.2.210:8443 \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

# step.2 Set the client authentication parameters
[root@k8s-master01 ~]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/k8s/cert/kube-scheduler.pem \
--client-key=/opt/k8s/cert/kube-scheduler-key.pem \
--embed-certs=true  \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

# step.3 Set the context parameters
[root@k8s-master01 ~]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

# step.4 Set the default context
[root@k8s-master01 ~]# kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig

Verify the kube-scheduler.kubeconfig file

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.2.210:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

6.04.03 Distribute the generated certificate, private key, and kubeconfig to all master nodes

[root@k8s-master01 ~]# vi /opt/k8s/script/scp_scheduler.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/k8s/cert/kube-scheduler*.pem root@${master_ip}:/opt/k8s/cert/
        scp /opt/k8s/kube-scheduler/kube-scheduler.kubeconfig root@${master_ip}:/opt/k8s/kube-scheduler/
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scp_scheduler.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.04.04 Create the kube-scheduler systemd unit file

[root@k8s-master01 ~]# vi /opt/k8s/kube-scheduler/kube-scheduler.service.template

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/opt/k8s/kube-scheduler/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
  • ps: this unit serves kube-scheduler metrics over plain http, so a whole pile of security-related settings are omitted!
  • --address: serve http /metrics requests on 127.0.0.1:10251 (this release also opens the https port 10259, as the ss output below shows);
  • --kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate with kube-apiserver;
  • --leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;

6.04.05 Distribute the systemd unit file to all master nodes; start and check the kube-scheduler service

[root@k8s-master01 ~]# vi /opt/k8s/script/scheduler_service.sh

MASTER_IPS=("$1" "$2" "$3")
for master_ip in ${MASTER_IPS[@]};do
        echo ">>> ${master_ip}"
        scp /opt/k8s/kube-scheduler/kube-scheduler.service.template root@${master_ip}:/etc/systemd/system/kube-scheduler.service
        ssh root@${master_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler|grep Active"
done
[root@k8s-master01 ~]# bash /opt/k8s/script/scheduler_service.sh 192.168.2.201 192.168.2.202 192.168.2.203

6.04.06 View the exposed metrics

  • kube-scheduler listens on port 10251 for http requests:
[root@k8s-master01 ~]# ss -nutlp |grep kube-scheduler
tcp    LISTEN   0        128             127.0.0.1:10251          0.0.0.0:*      users:(("kube-scheduler",pid=8584,fd=6))                                       
tcp    LISTEN   0        128                     *:10259                *:*      users:(("kube-scheduler",pid=8584,fd=7))   
                                    
[root@k8s-master01 ~]# curl -s http://127.0.0.1:10251/metrics |head
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds [ALPHA] Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

6.04.07 Test kube-scheduler cluster high availability

  • Stop the kube-scheduler service on one or two nodes and watch the other nodes' logs to see whether a new node acquires leadership.
  • View the current leader
[root@k8s-master02 ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

The master cluster is now fully deployed!
