Building a Production-Grade K8S High-Availability Cluster (2)

Preface:

To reflect real production conditions, this K8S build is not the one-shot, copy-paste walkthrough you usually find online. The main differences:

  • Latest K8S 1.16;
  • A fully offline binary HA deployment (typical for government/enterprise environments). Offline package: https://pan.baidu.com/s/1aCmiYdfn5gujnyMutVdaVw (extraction code: m39k);
  • Every component deployed from binaries (including Docker);
  • Each component's configuration file examined one by one, so that production faults can be pinpointed quickly;
  • Since this is a distributed system, the installation proceeds in stages:
    • from a single master to a dual-master HA setup
    • how a new node joins the cluster

Recommended server hardware configuration:

(image: recommended hardware sizing table)

Production K8S platform plan – single-master cluster

(image: single-master cluster topology)

Production K8S platform plan – multi-master cluster (HA)

(image: multi-master HA cluster topology)

1. Server Planning

Role    IP    Components
k8s-master1 192.168.171.134 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-master2 192.168.171.135 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
k8s-node1 192.168.171.136 kubelet,kube-proxy,docker,etcd
k8s-node2 192.168.171.137 kubelet,kube-proxy,docker
Load Balancer(Master) 192.168.171.138,192.168.171.188 (VIP) Nginx L4,Keepalived
Load Balancer(Backup) 192.168.171.139 Nginx L4,Keepalived

1.1 System Initialization

Disable the firewall:
# systemctl stop firewalld
# systemctl disable firewalld

Disable SELinux:
# setenforce 0 # temporary
# sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent

Disable swap:
# swapoff -a  # temporary
# vim /etc/fstab  # permanent (comment out the swap entry)

Sync the system time:
# ntpdate time.windows.com

Add hosts entries:
# vim /etc/hosts
192.168.171.134 k8s-master1
192.168.171.135 k8s-master2
192.168.171.136 k8s-node1
192.168.171.137 k8s-node2

Set the hostname (adjust on each machine):
hostnamectl set-hostname k8s-master1

## Enable forwarding and related kernel parameters
cat /etc/sysctl.d/kubernetes.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100

sysctl -p  /etc/sysctl.d/kubernetes.conf
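
All of these initialization steps must run on every machine. A small automation sketch, assuming passwordless root SSH to the other nodes (the loop body simply replays the commands above):

# Replay the initialization on the remaining machines (run from k8s-master1).
NODES="192.168.171.135 192.168.171.136 192.168.171.137"
for ip in $NODES; do
    scp /etc/hosts root@$ip:/etc/hosts
    scp /etc/sysctl.d/kubernetes.conf root@$ip:/etc/sysctl.d/
    ssh root@$ip "systemctl stop firewalld; systemctl disable firewalld; \
                  setenforce 0; sed -i 's/enforcing/disabled/' /etc/selinux/config; \
                  swapoff -a; sysctl -p /etc/sysctl.d/kubernetes.conf"
done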

2. ETCD Cluster

Every component in the cluster communicates over HTTPS, so we first need to generate self-signed certificates and distribute them to each service.


2.1 Upload the Downloaded Certificate Toolkit to k8s-master1 and Extract It

[root@k8s-master1 ~]# ls
anaconda-ks.cfg  TLS.tar.gz
[root@k8s-master1 ~]# tar zxvf TLS.tar.gz
TLS/
TLS/cfssl
TLS/cfssl-certinfo
TLS/cfssljson
TLS/etcd/
TLS/etcd/ca-config.json
TLS/etcd/ca-csr.json
TLS/etcd/generate_etcd_cert.sh
TLS/etcd/server-csr.json
TLS/k8s/
TLS/k8s/ca-config.json
TLS/k8s/ca-csr.json
TLS/k8s/kube-proxy-csr.json
TLS/k8s/server-csr.json
TLS/k8s/generate_k8s_cert.sh
TLS/cfssl.sh
[root@k8s-master1 ~]# cd TLS
[root@k8s-master1 TLS]# ls
cfssl  cfssl-certinfo  cfssljson  cfssl.sh  etcd  k8s

Move the cfssl binaries into an executable directory by running the script cfssl.sh (note: in the offline package the curl lines are commented out):

[root@k8s-master1 TLS]# cat cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
cp -rf cfssl cfssl-certinfo cfssljson /usr/local/bin
chmod +x /usr/local/bin/cfssl*
After the script completes:
[root@k8s-master1 TLS]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson

[root@k8s-master1 TLS]# ls
cfssl  cfssl-certinfo  cfssljson  cfssl.sh  etcd  k8s
[root@k8s-master1 TLS]# cd etcd/
[root@k8s-master1 etcd]# ls
ca-config.json  ca-csr.json  generate_etcd_cert.sh  server-csr.json
[root@k8s-master1 etcd]# vim server-csr.json
[root@k8s-master1 etcd]# cat server-csr.json    ### set the hosts list to the three etcd node IPs
{
    "CN": "etcd",
    "hosts": [
        "192.168.171.134",
        "192.168.171.135",
        "192.168.171.136"
        ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}

Run the script:

[root@k8s-master1 etcd]# sh generate_etcd_cert.sh
2019/11/29 20:15:53 [INFO] generating a new CA key and certificate from CSR
2019/11/29 20:15:53 [INFO] generate received request
2019/11/29 20:15:53 [INFO] received CSR
2019/11/29 20:15:53 [INFO] generating key: rsa-2048
2019/11/29 20:15:53 [INFO] encoded CSR
2019/11/29 20:15:53 [INFO] signed certificate with serial number 24102972475512203247000931916818116185424147280
2019/11/29 20:15:53 [INFO] generate received request
2019/11/29 20:15:53 [INFO] received CSR
2019/11/29 20:15:53 [INFO] generating key: rsa-2048
2019/11/29 20:15:53 [INFO] encoded CSR
2019/11/29 20:15:53 [INFO] signed certificate with serial number 12936195516565485048517952341546410494181088290
2019/11/29 20:15:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master1 etcd]# ls server*
server.csr  server-csr.json  server-key.pem  server.pem
The etcd key and certificates are now generated!
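
Since a wrong or missing SAN is the usual cause of TLS failures later, it is worth confirming the three node IPs actually landed in the certificate (a quick check; openssl ships with stock CentOS 7). The output should list IP Address entries for .134, .135 and .136:

[root@k8s-master1 etcd]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"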

Upload etcd.tar.gz to k8s-master1 and extract it:

[root@k8s-master1 ~]# tar zxvf etcd.tar.gz
etcd/
etcd/bin/
etcd/bin/etcd
etcd/bin/etcdctl
etcd/cfg/
etcd/cfg/etcd.conf
etcd/ssl/
etcd/ssl/ca.pem
etcd/ssl/server.pem
etcd/ssl/server-key.pem
etcd.service
First, a look at etcd.service:
[root@k8s-master1 ~]# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf     ## etcd configuration file
ExecStart=/opt/etcd/bin/etcd \      ## etcd binary location
        --name=${ETCD_NAME} \
        --data-dir=${ETCD_DATA_DIR} \
        --listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
        --listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
        --advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
        --initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
        --initial-cluster=${ETCD_INITIAL_CLUSTER} \
        --initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
        --initial-cluster-state=new \
        --cert-file=/opt/etcd/ssl/server.pem \
        --key-file=/opt/etcd/ssl/server-key.pem \
        --peer-cert-file=/opt/etcd/ssl/server.pem \
        --peer-key-file=/opt/etcd/ssl/server-key.pem \
        --trusted-ca-file=/opt/etcd/ssl/ca.pem \
        --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@k8s-master1 etcd]# ls
bin  cfg  ssl
[root@k8s-master1 etcd]# cd bin/        ## the etcd binaries live here (to upgrade later, just drop new binaries over these)
[root@k8s-master1 bin]# ls
etcd  etcdctl

Now the etcd configuration file:

[root@k8s-master1 cfg]# cat etcd.conf

#[Member]
ETCD_NAME="etcd-1"      ## this cluster member's name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  ## data directory
ETCD_LISTEN_PEER_URLS="https://192.168.171.134:2380"    ## peer (cluster-internal) URL
ETCD_LISTEN_CLIENT_URLS="https://192.168.171.134:2379"  ## client URL

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.171.134:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.171.134:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.171.134:2380,etcd-2=https://192.168.171.135:2380,etcd-3=https://192.168.171.136:2380"  ##集群节点的配置信息
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"   ##集群简单认证的TOKEN
ETCD_INITIAL_CLUSTER_STATE="new"    ##集群的状态(新增的节点要改为existing)
copy刚刚生成的etcd证书文件到指定的目录(/root/etcd/ssl)
[root@k8s-master1 etcd]# cp /root/TLS/etcd/{ca,server,server-key}.pem ssl/
[root@k8s-master1 etcd]# ls ssl/
ca.pem  server-key.pem  server.pem

Then push the etcd directory and etcd.service to the three cluster machines:
[root@k8s-master1 ~]# ls
anaconda-ks.cfg  etcd  etcd.service  etcd.tar.gz  TLS  TLS.tar.gz
[root@k8s-master1 ~]# scp -r etcd root@192.168.171.134:/opt/
etcd                                                                                                                                100%   16MB  51.2MB/s   00:00
etcdctl                                                                                                                             100%   13MB  58.8MB/s   00:00
.etcd.conf.swp                                                                                                                      100%   12KB  11.8MB/s   00:00
etcd.conf                                                                                                                           100%  523   634.0KB/s   00:00
ca.pem                                                                                                                              100% 1265   788.8KB/s   00:00
server.pem                                                                                                                          100% 1338     1.8MB/s   00:00
server-key.pem                                                                                                                      100% 1675     1.5MB/s   00:00
[root@k8s-master1 ~]# scp -r etcd root@192.168.171.135:/opt/
root@192.168.171.135's password:
etcd                                                                                                                                100%   16MB  82.4MB/s   00:00
etcdctl                                                                                                                             100%   13MB  92.3MB/s   00:00
.etcd.conf.swp                                                                                                                      100%   12KB   7.7MB/s   00:00
etcd.conf                                                                                                                           100%  523   169.7KB/s   00:00
ca.pem                                                                                                                              100% 1265     1.3MB/s   00:00
server.pem                                                                                                                          100% 1338     1.4MB/s   00:00
server-key.pem                                                                                                                      100% 1675     1.5MB/s   00:00
[root@k8s-master1 ~]# scp -r etcd root@192.168.171.136:/opt/
etcd                                                                                                                                100%   16MB  68.7MB/s   00:00
etcdctl                                                                                                                             100%   13MB  80.8MB/s   00:00
.etcd.conf.swp                                                                                                                      100%   12KB  12.5MB/s   00:00
etcd.conf                                                                                                                           100%  523   385.2KB/s   00:00
ca.pem                                                                                                                              100% 1265     1.5MB/s   00:00
server.pem                                                                                                                          100% 1338     2.0MB/s   00:00
server-key.pem                                                                                                                      100% 1675     2.2MB/s   00

Similarly, copy the etcd.service file:
[root@k8s-master1 ~]# scp etcd.service root@192.168.171.134:/usr/lib/systemd/system/
etcd.service                                                                                                                        100% 1078   577.1KB/s   00:00
[root@k8s-master1 ~]# scp etcd.service root@192.168.171.135:/usr/lib/systemd/system/
etcd.service                                                                                                                        100% 1078   780.0KB/s   00:00
[root@k8s-master1 ~]# scp etcd.service root@192.168.171.136:/usr/lib/systemd/system/
etcd.service

Edit the configuration on the other two etcd machines:

On 192.168.171.135:

[root@k8s-master2 ~]# cat /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.171.135:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.171.135:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.171.135:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.171.135:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.171.134:2380,etcd-2=https://192.168.171.135:2380,etcd-3=https://192.168.171.136:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On 192.168.171.136:

[root@k8s-node1 ~]# cat /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.171.136:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.171.136:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.171.136:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.171.136:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.171.134:2380,etcd-2=https://192.168.171.135:2380,etcd-3=https://192.168.171.136:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
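
Hand-editing works, but the per-node changes are mechanical; a sed sketch (run on 192.168.171.135; swap in etcd-3 and .136 for the third machine):

sed -i -e 's/^ETCD_NAME="etcd-1"/ETCD_NAME="etcd-2"/' \
       -e 's/192.168.171.134:2380"/192.168.171.135:2380"/' \
       -e 's/192.168.171.134:2379"/192.168.171.135:2379"/' \
       /opt/etcd/cfg/etcd.conf

(The trailing quote in the patterns keeps ETCD_INITIAL_CLUSTER, where .134 must stay, untouched.)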

Start etcd on all three machines (the first one is slow to start because it is waiting for its peers to join):
[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start etcd
[root@k8s-master1 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Watch the etcd cluster logs:

[root@k8s-master1 ~]# tail /var/log/messages -f
Nov 29 21:06:20 localhost etcd: set the initial cluster version to 3.0
Nov 29 21:06:20 localhost etcd: enabled capabilities for version 3.0
Nov 29 21:06:24 localhost etcd: peer 92fcf2aa055d676f became active
Nov 29 21:06:24 localhost etcd: established a TCP streaming connection with peer 92fcf2aa055d676f (stream Message reader)
Nov 29 21:06:24 localhost etcd: established a TCP streaming connection with peer 92fcf2aa055d676f (stream MsgApp v2 reader)
Nov 29 21:06:24 localhost etcd: established a TCP streaming connection with peer 92fcf2aa055d676f (stream Message writer)
Nov 29 21:06:24 localhost etcd: established a TCP streaming connection with peer 92fcf2aa055d676f (stream MsgApp v2 writer)
Nov 29 21:06:24 localhost etcd: updating the cluster version from 3.0 to 3.3
Nov 29 21:06:24 localhost etcd: updated the cluster version from 3.0 to 3.3
Nov 29 21:06:24 localhost etcd: enabled capabilities for version 3.3

Check the etcd cluster health:
# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.171.134:2379,https://192.168.171.135:2379,https://192.168.171.136:2379" \
cluster-health

member 3530acf25e9921b5 is healthy: got healthy result from https://192.168.171.134:2379
member 833528c821fcdcd2 is healthy: got healthy result from https://192.168.171.135:2379
member 92fcf2aa055d676f is healthy: got healthy result from https://192.168.171.136:2379
cluster is healthy
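
The check above uses the legacy v2 client interface; the bundled binary is etcd 3.3 (see the "cluster version 3.3" log lines above), which also speaks the v3 API, where the equivalent check is:

# ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
--cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.171.134:2379,https://192.168.171.135:2379,https://192.168.171.136:2379" \
endpoint health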

3. Deploying the Master Node

3.1 Self-Signed Certificates

[root@k8s-master1 ~]# cd TLS/k8s/
[root@k8s-master1 k8s]# pwd
/root/TLS/k8s
[root@k8s-master1 k8s]# ls
ca-config.json  ca-csr.json  generate_k8s_cert.sh  kube-proxy-csr.json  server-csr.json

kube-proxy-csr.json: the certificate request for kube-proxy
ca-config.json, ca-csr.json, server-csr.json: the CA config and the certificate request for kube-apiserver

3.2 Key Point (K8S components verify each other with certificates)

  • Be sure to write the IP of every service that talks to the API-SERVER into the hosts list below (master nodes, LBs, etcd, keepalived VIP);
  • This was also a question I had earlier: if more masters are added later, how do they join the current cluster?
    • The verified answer so far: reserve extra IPs in the certificate up front
      [root@k8s-master1 k8s]# cat server-csr.json
      {
          "CN": "kubernetes",
          "hosts": [
            "10.0.0.1",
            "127.0.0.1",
            "kubernetes",
            "kubernetes.default",
            "kubernetes.default.svc",
            "kubernetes.default.svc.cluster",
            "kubernetes.default.svc.cluster.local",
            "192.168.171.134",
            "192.168.171.135",
            "192.168.171.136",
            "192.168.171.137",
            "192.168.171.138",
            "192.168.171.139",
            "192.168.171.188",
            "192.168.171.140",
            "192.168.171.141",
            "192.168.171.142"
          ],
          "key": {
              "algo": "rsa",
              "size": 2048
          },
          "names": [
              {
                  "C": "CN",
                  "L": "BeiJing",
                  "ST": "BeiJing",
                  "O": "k8s",
                  "OU": "System"
              }
          ]
      }
      Run the script to generate the certificates:
      [root@k8s-master1 k8s]# sh generate_k8s_cert.sh
      2019/11/30 16:08:18 [INFO] generating a new CA key and certificate from CSR
      2019/11/30 16:08:18 [INFO] generate received request
      2019/11/30 16:08:18 [INFO] received CSR
      2019/11/30 16:08:18 [INFO] generating key: rsa-2048
      2019/11/30 16:08:18 [INFO] encoded CSR
      2019/11/30 16:08:18 [INFO] signed certificate with serial number 341826322118494245750742070723426886230473381959
      2019/11/30 16:08:18 [INFO] generate received request
      2019/11/30 16:08:18 [INFO] received CSR
      2019/11/30 16:08:18 [INFO] generating key: rsa-2048
      2019/11/30 16:08:18 [INFO] encoded CSR
      2019/11/30 16:08:18 [INFO] signed certificate with serial number 298916502664941699479785933454138161410913060966
      2019/11/30 16:08:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
      websites. For more information see the Baseline Requirements for the Issuance and Management
      of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
      specifically, section 10.2.3 ("Information Requirements").
      2019/11/30 16:08:18 [INFO] generate received request
      2019/11/30 16:08:18 [INFO] received CSR
      2019/11/30 16:08:18 [INFO] generating key: rsa-2048
      2019/11/30 16:08:19 [INFO] encoded CSR
      2019/11/30 16:08:19 [INFO] signed certificate with serial number 11454632622297749262296986610747834462011118952
      2019/11/30 16:08:19 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
      websites. For more information see the Baseline Requirements for the Issuance and Management
      of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
      specifically, section 10.2.3 ("Information Requirements").
      List the generated certificates:
      [root@k8s-master1 k8s]# ls *.pem
      ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
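
Because missing SANs are the most common cause of "x509: certificate is valid for ..." errors later, verify the reserved IPs are all present before continuing (same check as for etcd); the output should list every IP from server-csr.json, including the VIP 192.168.171.188 and the three spare addresses:

[root@k8s-master1 k8s]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"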

Now deploy the master components.

Binary download: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.16.md#v1161

Upload k8s-master.tar.gz from the offline package to the / directory:

[root@k8s-master1 ~]# tar zxvf k8s-master.tar.gz
kubernetes/
kubernetes/bin/
kubernetes/bin/kubectl
kubernetes/bin/kube-apiserver
kubernetes/bin/kube-controller-manager
kubernetes/bin/kube-scheduler
kubernetes/cfg/
kubernetes/cfg/token.csv
kubernetes/cfg/kube-apiserver.conf
kubernetes/cfg/kube-controller-manager.conf
kubernetes/cfg/kube-scheduler.conf
kubernetes/ssl/
kubernetes/logs/
kube-apiserver.service
kube-controller-manager.service
kube-scheduler.service

Copy the freshly generated certificates into the ssl directory:

[root@k8s-master1 kubernetes]# cp /root/TLS/k8s/*pem ssl/
[root@k8s-master1 kubernetes]# ls
bin  cfg  logs  ssl
[root@k8s-master1 kubernetes]# ls ssl/
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
kube-apiserver
[root@k8s-master1 cfg]# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--logtostderr=false \      ## log to files, not stderr
--v=2 \     ## log verbosity
--log-dir=/opt/kubernetes/logs \    ## log directory
--etcd-servers=https://192.168.171.134:2379,https://192.168.171.135:2379,https://192.168.171.136:2379 \   ## etcd cluster endpoints
--bind-address=192.168.171.134 \    ## bind address (may be a public IP)
--secure-port=6443 \    ## secure port
--advertise-address=192.168.171.134 \   ## address advertised inside the cluster
--allow-privileged=true \   ## allow privileged pods
--service-cluster-ip-range=10.0.0.0/24 \    ## Service cluster IP range
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \      ## enabled admission plugins
--authorization-mode=RBAC,Node \    ## authorization modes
--enable-bootstrap-token-auth=true \    ## bootstrap token auth, for automatic certificate issuance
--token-auth-file=/opt/kubernetes/cfg/token.csv \   ## token file
--service-node-port-range=30000-32767 \     ## NodePort port range for Services
--kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \
--kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \     ## audit log retention policy (this flag and the three below)
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/opt/kubernetes/logs/k8s-audit.log"

kube-controller-manager
[root@k8s-master1 cfg]# cat kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \    ## log directory
--leader-elect=true \       ## leader election
--master=127.0.0.1:8080 \   ## connect to the local apiserver (insecure port)
--address=127.0.0.1 \   ## listen address
--allocate-node-cidrs=true \    ## allocate pod CIDRs to nodes (for the CNI plugin)
--cluster-cidr=10.244.0.0/16 \  ## pod network CIDR (must match the CNI plugin)
--service-cluster-ip-range=10.0.0.0/24 \    ## Service range, must match the apiserver
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
kube-scheduler
[root@k8s-master1 cfg]# cat kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--leader-elect \
--master=127.0.0.1:8080 \
--address=127.0.0.1"

Start kube-apiserver

[root@k8s-master1 cfg]# cd
[root@k8s-master1 ~]# mv kubernetes/ /opt/
[root@k8s-master1 ~]# mv *.service /usr/lib/systemd/system/

[root@k8s-master1 ~]# systemctl daemon-reload
[root@k8s-master1 ~]# systemctl start kube-apiserver
[root@k8s-master1 ~]# less /opt/kubernetes/logs/kube-apiserver.INFO    ## check the startup log
[root@k8s-master1 ~]# ps aux | grep kube
root      17717 24.2 18.0 549604 336048 ?       Ssl  16:39   0:06 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.171.134:2379,https://192.168.171.135:2379,https://192.168.171.136:2379 --bind-address=192.168.171.134 --secure-port=6443 --advertise-address=192.168.171.134 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log
root      17731  0.0  0.0 112724   988 pts/1    S+   16:39   0:00 grep --color=auto kube

Now start kube-controller-manager and kube-scheduler as well:

[root@k8s-master1 ~]# systemctl start kube-controller-manager
[root@k8s-master1 ~]# systemctl start kube-scheduler
[root@k8s-master1 ~]# systemctl enable kube-apiserver
[root@k8s-master1 ~]# systemctl enable kube-controller-manager
[root@k8s-master1 ~]# systemctl enable kube-scheduler

Move kubectl into an executable directory:
[root@k8s-master1 ~]# mv /opt/kubernetes/bin/kubectl /usr/local/bin/
[root@k8s-master1 ~]# kubectl get node
No resources found in default namespace.
[root@k8s-master1 ~]# kubectl get cs    ## the blank output here is a known bug in this version
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-2               <unknown>
etcd-0               <unknown>
etcd-1               <unknown>

The bug above: https://segmentfault.com/a/1190000020912684

Enable TLS Bootstrapping

Authorize kubelet TLS bootstrapping:

# cat /opt/kubernetes/cfg/token.csv 
c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper"

Format: token,user,uid,group

Bind the kubelet-bootstrap user so that kubelet certificates can be issued automatically:

kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

The token can also be regenerated and replaced yourself:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '

==But the token configured on the apiserver must match the one in each node's bootstrap.kubeconfig.==
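
Putting the two together, a token-rotation sketch (hypothetical helper; the paths and the token.csv format are the ones used in this article):

# Generate a fresh token and rewrite token.csv on the master:
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > /opt/kubernetes/cfg/token.csv
# Then write the same token into every node's bootstrap.kubeconfig
# and restart kube-apiserver and the kubelets.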

4. Deploying the Worker Nodes

Docker binary download: https://download.docker.com/linux/static/stable/x86_64/

Upload k8s-node.tar.gz to the node:

[root@k8s-node1 ~]# tar zxvf k8s-node.tar.gz
cni-plugins-linux-amd64-v0.8.2.tgz
daemon.json
docker-18.09.6.tgz
docker.service
kubelet.service
kube-proxy.service
kubernetes/
kubernetes/bin/
kubernetes/bin/kubelet
kubernetes/bin/kube-proxy
kubernetes/cfg/
kubernetes/cfg/kubelet-config.yml
kubernetes/cfg/bootstrap.kubeconfig
kubernetes/cfg/kube-proxy.kubeconfig
kubernetes/cfg/kube-proxy.conf
kubernetes/cfg/kubelet.conf
kubernetes/cfg/kube-proxy-config.yml
kubernetes/ssl/
kubernetes/logs/

4.1 Configure and Start Docker

# tar zxvf docker-18.09.6.tgz
# mv docker/* /usr/bin
[root@k8s-node1 ~]# ls /usr/bin/
docker        dockerd       docker-init   docker-proxy  domainname
# mkdir /etc/docker
[root@k8s-node1 ~]# cat daemon.json     ## registry mirror plus an insecure private registry
{
    "registry-mirrors": ["http://bc437cce.m.daocloud.io"],
    "insecure-registries": ["192.168.171.170"]
}
# mv daemon.json /etc/docker
# mv docker.service /usr/lib/systemd/system
# systemctl start docker
# systemctl enable docker
[root@k8s-node1 ~]# ps aux | grep docker
root      17326  2.1  1.5 405704 28404 ?        Ssl  17:05   0:00 /usr/bin/dockerd
root      17333  1.2  0.8 316224 15048 ?        Ssl  17:05   0:00 containerd --config /var/run/docker/containerd/containerd.toml --log-level info
root      17534  0.0  0.0 112724   988 pts/2    R+   17:05   0:00 grep --color=auto docker
Running docker info then reveals two warnings:
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
The fix (these are the same keys we set in /etc/sysctl.d/kubernetes.conf in section 1.1; apply them here if that step was skipped on this node):
vim /etc/sysctl.conf

Add the following:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Then apply:
sysctl -p

4.2 Deploy kubelet and kube-proxy

On the master, copy the certificates out to the nodes (scp to every node you have):

[root@k8s-master1 ~]# cd TLS/k8s/
[root@k8s-master1 k8s]# scp ca.pem kube-proxy*.pem root@192.168.171.136:/opt/kubernetes/ssl/
[root@k8s-master1 k8s]# scp ca.pem kube-proxy*.pem root@192.168.171.137:/opt/kubernetes/ssl/

Directory layout on the node:

[root@k8s-node1 ~]# cd kubernetes/
[root@k8s-node1 kubernetes]# tree .
.
├── bin
│   ├── kubelet
│   └── kube-proxy
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── kubelet.conf
│   ├── kubelet-config.yml
│   ├── kube-proxy.conf
│   ├── kube-proxy-config.yml
│   └── kube-proxy.kubeconfig
├── logs
└── ssl

First, the main configuration files:
[root@k8s-node1 cfg]# ls
bootstrap.kubeconfig  kubelet.conf  kubelet-config.yml  kube-proxy.conf  kube-proxy-config.yml  kube-proxy.kubeconfig

*.conf: basic startup flags
*.kubeconfig: configs for connecting to the apiserver
*.yml: the main configuration

kubelet.conf
[root@k8s-node1 cfg]# cat kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node1 \     ## this node's name (must be unique)
--network-plugin=cni \      ## use a CNI network plugin
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \    ## kubeconfig (generated after bootstrap)
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"     ## pause (pod infrastructure) image

bootstrap.kubeconfig (used to issue certificates automatically to nodes about to join the cluster):
[root@k8s-node1 cfg]# cat bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem  ## the master's CA certificate
    server: https://192.168.171.134:6443    ## the master's address
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: c47ffb939f5ca36231d9e3121a252940     ## must match the token on the master above

Let's also look at how kubelet communicates with the apiserver after startup:

kubelet starts up carrying bootstrap.kubeconfig and contacts the apiserver, which first validates the token inside it: if the token is correct a certificate is issued, otherwise the kubelet fails to start.
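
For reference, a bootstrap.kubeconfig like the one above can be produced with kubectl config (a sketch; the packaged file was pre-generated, and the token and paths are the ones shown in this article):

KUBE_APISERVER="https://192.168.171.134:6443"
TOKEN="c47ffb939f5ca36231d9e3121a252940"

# Record the cluster, the bootstrap user (token only), and a default context:
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig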

kubelet-config.yml
[root@k8s-node1 cfg]# cat kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs  ## cgroup driver (must match Docker's)
clusterDNS:     ## cluster DNS address
- 10.0.0.2
clusterDomain: cluster.local    ## cluster domain
failSwapOn: false   ## don't refuse to start when swap is on
authentication:     ## authentication settings
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:   ## eviction thresholds
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
kube-proxy.kubeconfig
[root@k8s-node1 cfg]# cat kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.171.134:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate: /opt/kubernetes/ssl/kube-proxy.pem
    client-key: /opt/kubernetes/ssl/kube-proxy-key.pem
kube-proxy-config.yml
[root@k8s-node1 cfg]# cat kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
address: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node1     ## must be unique per node
clusterCIDR: 10.0.0.0/24
mode: ipvs      ## proxy mode
ipvs:
  scheduler: "rr"
iptables:
  masqueradeAll: true
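
ipvs mode needs the ip_vs kernel modules; kube-proxy silently falls back to iptables mode if they are missing. A quick load-and-check sketch (module names are for the stock CentOS 7 3.10 kernel):

for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done
lsmod | grep ip_vs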

Start the kubelet and kube-proxy services

# mv kubernetes /opt
# cp kubelet.service kube-proxy.service /usr/lib/systemd/system

Check/adjust the server IP in these three files:
# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.171.134:6443
kubelet.kubeconfig:    server: https://192.168.171.134:6443
kube-proxy.kubeconfig:    server: https://192.168.171.134:6443

Check/adjust the hostname in these two files:
# grep hostname *
kubelet.conf:--hostname-override=k8s-node1 \
kube-proxy-config.yml:hostnameOverride: k8s-node1
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-11-30 19:10:01 CST; 11s ago
 Main PID: 17702 (kubelet)
    Tasks: 9
   Memory: 17.2M
   CGroup: /system.slice/kubelet.service
           └─17702 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-node1 --network-plugin=cni --kubeco...

Nov 30 19:10:01 k8s-node1 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 30 19:10:01 k8s-node1 systemd[1]: Stopped Kubernetes Kubelet.
Nov 30 19:10:01 k8s-node1 systemd[1]: Unit kubelet.service entered failed state.
Nov 30 19:10:01 k8s-node1 systemd[1]: kubelet.service failed.
Nov 30 19:10:01 k8s-node1 systemd[1]: Started Kubernetes Kubelet.

[root@k8s-node1 ~]# systemctl enable kubelet

Check the kubelet log:

less /opt/kubernetes/logs/kubelet.INFO

In it you will see:
W1130 19:27:08.379468   17702 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
E1130 19:27:08.929388   17702 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

This is because the CNI plugin is not installed yet; it clears up once we install it below.

Now back on the master, check whether the node has requested to join:

[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-TuI-Gg5yTowa_3OkCafMXBVynLwUJB2ZKrwtYG-EdNo   2m1s   kubelet-bootstrap   Pending
[root@k8s-master1 k8s]# kubectl certificate approve node-csr-TuI-Gg5yTowa_3OkCafMXBVynLwUJB2ZKrwtYG-EdNo
certificatesigningrequest.certificates.k8s.io/node-csr-TuI-Gg5yTowa_3OkCafMXBVynLwUJB2ZKrwtYG-EdNo approved
[root@k8s-master1 k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-TuI-Gg5yTowa_3OkCafMXBVynLwUJB2ZKrwtYG-EdNo   14m   kubelet-bootstrap   Approved,Issued
[root@k8s-master1 k8s]# kubectl get node        ## turns Ready once CNI is configured
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   NotReady   <none>   25s   v1.16.0
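
If several nodes are joining at once, the pending CSRs can be approved in one pass (fine in a trusted environment, since it approves everything pending):

kubectl get csr -o name | xargs kubectl certificate approve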
Start the kube-proxy service:
[root@k8s-node1 ~]# systemctl start kube-proxy
[root@k8s-node1 ~]# systemctl status kube-proxy
Check the kube-proxy log:
[root@k8s-node1 ~]# tailf /opt/kubernetes/logs/kube-proxy.INFO

I1130 19:32:23.692156   18623 proxier.go:1729] Not using `--random-fully` in the MASQUERADE rule for iptables because the local version of iptables does not support it

Workaround: https://blog.51cto.com/juestnow/2440260

4.3 Deploy the CNI Network

Binary download: https://github.com/containernetworking/plugins/releases

# mkdir /opt/cni/bin /etc/cni/net.d
# tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin

Make sure kubelet has CNI enabled:

# cat /opt/kubernetes/cfg/kubelet.conf 
--network-plugin=cni

4.4 Add the Second Node the Same Way

57  tar zxvf k8s-node.tar.gz
58  mv *.service /usr/lib/systemd/system/
59  tar zxvf docker-18.09.6.tgz
60  mv docker/* /usr/bin/
61  mkdir /etc/docker
62  vim daemon.json
63  mv daemon.json /etc/docker/
64  systemctl start docker
65  systemctl enable docker
66  systemctl status docker
67  mv kubernetes/ /opt/
68  cd /opt/kubernetes/
69  ls
70  cd cfg/
71  ls
72  vim bootstrap.kubeconfig
73  vim kubelet.conf
74  vim kubelet-config.yml
75  vim kube-proxy.conf
76  vim kube-proxy-config.yml
77  vim kube-proxy.kubeconfig
78  grep 192 *
79  grep hostname *
80  systemctl start kubelet
81  systemctl start kube-proxy
82  systemctl enable kubelet
83  systemctl enable kube-proxy
84   systemctl restart kubelet && systemctl restart kube-proxy
85  mkdir /opt/cni/bin /etc/cni/net.d -p
86  cd
87  tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/

The above is just the shell history copied from node2, but it captures the correct sequence of steps. The only places needing per-node attention are the kubelet and kube-proxy configuration files (server IP and hostname), as noted earlier; a consolidated sketch of the whole join follows.
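
The same history, condensed into a reusable sketch (join-node.sh is a hypothetical helper; it assumes k8s-node.tar.gz and docker-18.09.6.tgz are already uploaded to the new node's home directory):

#!/bin/bash
# join-node.sh <node-name> <apiserver-ip> — replay the node-join steps above.
NODE_NAME=$1        # e.g. k8s-node2
APISERVER=$2        # e.g. 192.168.171.134 (later: the VIP)

tar zxvf k8s-node.tar.gz
tar zxvf docker-18.09.6.tgz && mv docker/* /usr/bin/
mkdir -p /etc/docker /opt/cni/bin /etc/cni/net.d
mv daemon.json /etc/docker/
mv *.service /usr/lib/systemd/system/
mv kubernetes/ /opt/
tar zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/

# Point the kubeconfigs at the apiserver and set this node's unique name:
cd /opt/kubernetes/cfg
sed -i "s#https://[0-9.]*:6443#https://${APISERVER}:6443#" *.kubeconfig
sed -i "s/--hostname-override=.*/--hostname-override=${NODE_NAME} \\\\/" kubelet.conf
sed -i "s/hostnameOverride:.*/hostnameOverride: ${NODE_NAME}/" kube-proxy-config.yml

systemctl daemon-reload
systemctl enable --now docker kubelet kube-proxy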

4.5 Deploy the flannel Component

To provide the CNI overlay network, we deploy flannel, the component that implements it.

On the master:

Upload kube-flannel.yaml to the / directory.

[root@k8s-master1 ~]# cat kube-flannel.yaml     ## the key pieces of information:
1.
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }

The Network value above must match the controller-manager's pod CIDR:
[root@k8s-master1 ~]# cat /opt/kubernetes/cfg/kube-controller-manager.conf
--cluster-cidr=10.244.0.0/16 \

2. (DaemonSet mode: every node automatically runs a copy of this service)
apiVersion: apps/v1
kind: DaemonSet
Apply it on the master:
[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yaml
[root@k8s-master1 ~]# kubectl get po -n kube-system
NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel-ds-amd64-d2gzx   1/1     Running   0          51s
kube-flannel-ds-amd64-lwsnd   1/1     Running   0          51s
[root@k8s-master1 ~]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    <none>   67m   v1.16.0
k8s-node2   Ready    <none>   30m   v1.16.0

Authorize the apiserver to access kubelet

For security, kubelet refuses anonymous access, so the apiserver must be explicitly authorized before things like kubectl logs will work.

Upload apiserver-to-kubelet-rbac.yaml to the / directory:

[root@k8s-master1 ~]# cat apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:      ## the kubelet subresources the apiserver may access
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
Test:
[root@k8s-master1 ~]# kubectl logs kube-flannel-ds-amd64-d2gzx -n kube-system     ## not permitted yet
Error from server (Forbidden): Forbidden (user=kubernetes, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-flannel-ds-amd64-d2gzx)
[root@k8s-master1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
[root@k8s-master1 ~]# kubectl logs kube-flannel-ds-amd64-d2gzx -n kube-system     ## works now
I1130 12:30:26.695707       1 main.go:514] Determining IP address of default interface
I1130 12:30:26.698072       1 main.go:527] Using interface with name ens33 and address 192.168.171.136
I1130 12:30:26.698106       1 main.go:544] Defaulting external address to interface address (192.168.171.136)

[root@k8s-master1 ~]# kubectl create deployment web --image=nginx       ## create a test deployment
deployment.apps/web created

[root@k8s-master1 ~]# kubectl get po -o wide
NAME                  READY   STATUS    RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
web-d86c95cc9-ztx9n   1/1     Running   0          2m49s   10.244.0.2   k8s-node1   <none>           <none>

[root@k8s-master1 ~]# kubectl expose deployment web --port=80 --type=NodePort     ## expose a NodePort to test nginx
service/web exposed

[root@k8s-master1 ~]# kubectl get po,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-d86c95cc9-k9vnf   1/1     Running   0          2m34s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        4h49m
service/web          NodePort    10.0.0.34    <none>        80:32762/TCP   17m
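
The NodePort shown above (32762 here; yours may differ) should now serve the default nginx page from any node IP:

# curl -I http://192.168.171.136:32762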

The single-master K8S cluster is now complete!


5. Deploying the Web UI and DNS

Upload yaml/dashboard.yaml.

# vi dashboard.yaml     ## the Service section below exposes the dashboard on NodePort 30001

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard


[root@k8s-master1 ~]# kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

(screenshot: the dashboard login page, reachable at https://<NodeIP>:30001)

Create a token to log in:

## before creating a token we first need a service account

[root@k8s-master1 ~]# cat dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the service account and bind it to the built-in cluster-admin role:

[root@k8s-master1 ~]# kubectl apply -f dashboard-adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Retrieve the token:

[root@k8s-master1 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-bccww
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 6e6e1b2d-a0a3-4150-a611-98ce1653b79c

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IktJYmdRdDdkbW1US0dnOHRKemdPMjJ6eUEzTXEtMGQyS0h6cWRpRUVLRE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWJjY3d3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2ZTZlMWIyZC1hMGEzLTQxNTAtYTYxMS05OGNlMTY1M2I3OWMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.YusDtTl_glNewEO0kMaiZDqOcbSMkRNY6sRT9BQYbzTjdmediGHcEB49wHepo_mXsW0isBnu4Mgpb4KL5y27OkE2hFICwQwQBX5gvHQI2CxuoHaVVi7G8eZn85fR7aKmKi7Uxppv6qOL5icZyl_74_-iQVIm3U59B-x2zoyoUa3tsFgQEpUWvkmbCajD-4sANU-UMyisR3uMdXvnyvz2oCUQBjuqJ5ZqqAupqrvtoJ1L27vHK1t7i_sLgVR_2X8MARrwgynHatEYAODVEsVRMJCBzR4ZW09xcCSbeQ1CopNyGbyPi7o9re_9FyGK18y3q7EmjaEOr2NJ3Yk0MesIyw

(screenshot: the dashboard after logging in with the token)

Deploy CoreDNS

[root@k8s-master1 ~]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Test that DNS works:

[root@k8s-master1 k8s]# kubectl apply -f bs.yaml
pod/busybox created

[root@k8s-master1 ~]# kubectl exec -it busybox sh
/ # ping 10.0.0.34  ## test connectivity to a Service cluster IP
PING 10.0.0.34 (10.0.0.34): 56 data bytes
64 bytes from 10.0.0.34: seq=0 ttl=64 time=0.086 ms
64 bytes from 10.0.0.34: seq=1 ttl=64 time=0.068 ms
^C
--- 10.0.0.34 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.068/0.077/0.086 ms
/ # ping web    ## test that DNS resolves the Service name
PING web (10.0.0.34): 56 data bytes
64 bytes from 10.0.0.34: seq=0 ttl=64 time=0.049 ms
64 bytes from 10.0.0.34: seq=1 ttl=64 time=0.065 ms

/ # nslookup kubernetes    ## resolves as well
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local

6. Master High Availability

6.1 Deploy the Master Components (identical to master1)

Copy /opt/kubernetes, the etcd certificates, and the service files from master1:

# scp -r /opt/kubernetes root@192.168.171.135:/opt
# scp -r /opt/etcd/ssl/ root@192.168.171.135:/opt/etcd/
# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.171.135:/usr/lib/systemd/system
# scp /usr/local/bin/kubectl root@192.168.171.135:/usr/local/bin/

Change the apiserver config to the local IP:

# cat /opt/kubernetes/cfg/kube-apiserver.conf 
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--etcd-servers=https://192.168.171.134:2379,https://192.168.171.135:2379,https://192.168.171.136:2379 \
--bind-address=192.168.171.135 \
--secure-port=6443 \
--advertise-address=192.168.171.135 \
……

Start kube-apiserver, kube-controller-manager, and kube-scheduler:

[root@k8s-master2 cfg]# systemctl start kube-apiserver
[root@k8s-master2 cfg]# systemctl start kube-controller-manager
[root@k8s-master2 cfg]# systemctl start kube-scheduler
[root@k8s-master2 cfg]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master2 cfg]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master2 cfg]# systemctl enable kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

From master2, check the nodes and pods:

[root@k8s-master2 cfg]# kubectl get node -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-node1   Ready    <none>   25h   v1.16.0   192.168.171.136   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6
k8s-node2   Ready    <none>   25h   v1.16.0   192.168.171.137   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   docker://18.9.6
[root@k8s-master2 cfg]# kubectl get po -n kube-system -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP                NODE        NOMINATED NODE   READINESS GATES
coredns-6d8cfdd59d-gbd2m      1/1     Running   2          21h   10.244.0.9        k8s-node1   <none>           <none>
kube-flannel-ds-amd64-d2gzx   1/1     Running   1          24h   192.168.171.136   k8s-node1   <none>           <none>
kube-flannel-ds-amd64-lwsnd   1/1     Running   2          24h   192.168.171.137   k8s-node2   <none>           <none>
[root@k8s-master2 cfg]# kubectl get po -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-566cddb686-wrkfl   1/1     Running   1          23h   10.244.1.8   k8s-node2   <none>           <none>
kubernetes-dashboard-7b5bf5d559-csfwm        1/1     Running   1          23h   10.244.1.6   k8s-node2   <none>           <none>

6.2 Deploy the Nginx Load Balancer

Nginx RPM packages: http://nginx.org/packages/rhel/7/x86_64/RPMS/

Install on both LB machines (192.168.171.138 and 192.168.171.139):
# rpm -vih http://nginx.org/packages/rhel/7/x86_64/RPMS/nginx-1.16.0-1.el7.ngx.x86_64.rpm

[root@localhost ~]# cat /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}

#### the added stream block starts here ↓
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
                server 192.168.171.134:6443;
                server 192.168.171.135:6443;
            }

    server {
       listen 6443;
       proxy_pass k8s-apiserver;
    }
}
#### the added stream block ends here ↑

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}


# systemctl start nginx
# systemctl enable nginx

[root@localhost ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      7142/nginx: master
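
A quick sanity check that the L4 proxy actually reaches an apiserver; getting the version JSON (or an authorization error from the apiserver, depending on anonymous-auth settings) back through the proxy proves the passthrough path works:

# curl -k https://192.168.171.138:6443/version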

6.3 Nginx + Keepalived High Availability

Primary node (192.168.171.138):
# yum install keepalived

# vim /etc/keepalived/keepalived.conf
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 100    # priority; set 90 on the backup
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.171.188/24
    } 
    track_script {
        check_nginx
    } 
}


# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi

# chmod +x /etc/keepalived/check_nginx.sh

# systemctl start keepalived
# systemctl enable keepalived
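
One caveat: check_nginx.sh only verifies that an nginx process exists. A slightly stricter variant (a sketch; assumes curl is installed) fails the VRRP check unless nginx actually answers on the proxied port, so the VIP also moves when nginx is up but wedged:

#!/bin/bash
# Probe the local L4 proxy; any answer from the apiserver behind it counts.
if curl -ks --max-time 2 https://127.0.0.1:6443/version -o /dev/null; then
    exit 0
else
    exit 1
fi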

Backup node (192.168.171.139):

# cat /etc/keepalived/keepalived.conf
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_BACKUP
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface ens33
    virtual_router_id 51 # VRRP router ID; must be unique per instance
    priority 90    # priority; the backup uses 90
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1s
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        192.168.171.188/24
    } 
    track_script {
        check_nginx
    } 
}


# cat /etc/keepalived/check_nginx.sh 
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi

# chmod +x /etc/keepalived/check_nginx.sh

# systemctl start keepalived
# systemctl enable keepalived

Check the virtual IP (VIP) on the primary:

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:9b:85:86 brd ff:ff:ff:ff:ff:ff
    inet 192.168.171.138/24 brd 192.168.171.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.171.188/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::d3c5:e3e2:26f6:f6b5/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

6.4 Point the Nodes at the VIP

[root@k8s-node1 ~]# cd /opt/kubernetes/cfg/

[root@k8s-node1 cfg]# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.171.134:6443
kubelet.kubeconfig:    server: https://192.168.171.134:6443
kube-proxy.kubeconfig:    server: https://192.168.171.134:6443

[root@k8s-node1 cfg]# sed -i 's#192.168.171.134#192.168.171.188#' *

[root@k8s-node1 cfg]# grep 192 *
bootstrap.kubeconfig:    server: https://192.168.171.188:6443
kubelet.kubeconfig:    server: https://192.168.171.188:6443
kube-proxy.kubeconfig:    server: https://192.168.171.188:6443

[root@k8s-node1 cfg]# systemctl restart kubelet && systemctl restart kube-proxy

Repeat on the other nodes.

Test that the VIP works:

[root@k8s-node2 cfg]# curl -k --header "Authorization: Bearer c47ffb939f5ca36231d9e3121a252940" https://192.168.171.188:6443/version
{
  "major": "1",
  "minor": "16",
  "gitVersion": "v1.16.0",
  "gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
  "gitTreeState": "clean",
  "buildDate": "2019-09-18T14:27:17Z",
  "goVersion": "go1.12.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
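
Finally, test failover itself (run on the primary LB, 192.168.171.138):

# systemctl stop nginx     ## check_nginx.sh now exits 1, so keepalived releases the VIP

On the backup (192.168.171.139) the VIP should appear within a few seconds:

# ip a show ens33 | grep 192.168.171.188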

Run the version test from both node1 and node2 and you will see Nginx round-robin the requests across the two apiservers.
The production-grade K8S HA cluster is now complete!