Deploying Kubernetes 1.16 automatically with Ansible
Overview:
The cluster ships with CoreDNS, CNI plugins, nginx-ingress, HA (nginx + keepalived) and flanneld.
Baidu Netdisk link: https://pan.baidu.com/s/1KYbpshhpTu62DnQwF1LUnQ  extraction code: vi5e
1. Single-master deployment
[root@k8s-ansible1 ~]# tree ansible-install-k8s-master
ansible-install-k8s-master
├── add-node.yml
├── ansible.cfg
├── group_vars
│   └── all.yml
├── hosts
├── multi-master-deploy.yml
├── multi-master.jpg
├── README.md
├── roles
│   ├── addons
│   │   ├── files
│   │   │   ├── coredns.yaml
│   │   │   ├── ingress-controller.yaml
│   │   │   ├── kube-flannel.yaml
│   │   │   └── kubernetes-dashboard.yaml
│   │   └── tasks
│   │       └── main.yml
│   ├── common
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── hosts.j2
│   ├── docker
│   │   ├── files
│   │   │   ├── daemon.json
│   │   │   └── docker.service
│   │   └── tasks
│   │       └── main.yml
│   ├── etcd
│   │   ├── files
│   │   │   └── etcd_cert
│   │   │       ├── ca-key.pem
│   │   │       ├── ca.pem
│   │   │       ├── server-key.pem
│   │   │       └── server.pem
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── etcd.conf.j2
│   │       ├── etcd.service.j2
│   │       └── etcd.sh.j2
│   ├── ha
│   │   ├── files
│   │   │   └── check_nginx.sh
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── keepalived.conf.j2
│   │       └── nginx.conf.j2
│   ├── master
│   │   ├── files
│   │   │   ├── apiserver-to-kubelet-rbac.yaml
│   │   │   ├── etcd_cert
│   │   │   │   ├── ca.pem
│   │   │   │   ├── server-key.pem
│   │   │   │   └── server.pem
│   │   │   ├── k8s_cert
│   │   │   │   ├── admin-key.pem
│   │   │   │   ├── admin.pem
│   │   │   │   ├── ca-key.pem
│   │   │   │   ├── ca.pem
│   │   │   │   ├── kube-proxy-key.pem
│   │   │   │   ├── kube-proxy.pem
│   │   │   │   ├── server-key.pem
│   │   │   │   └── server.pem
│   │   │   ├── kubelet-bootstrap-rbac.yaml
│   │   │   └── token.csv
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── kube-apiserver.conf.j2
│   │       ├── kube-apiserver.service.j2
│   │       ├── kube-controller-manager.conf.j2
│   │       ├── kube-controller-manager.service.j2
│   │       ├── kube-scheduler.conf.j2
│   │       └── kube-scheduler.service.j2
│   ├── node
│   │   ├── files
│   │   │   └── k8s_cert
│   │   │       ├── ca.pem
│   │   │       ├── kube-proxy-key.pem
│   │   │       └── kube-proxy.pem
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── bootstrap.kubeconfig.j2
│   │       ├── kubelet-config.yml.j2
│   │       ├── kubelet.conf.j2
│   │       ├── kubelet.service.j2
│   │       ├── kube-proxy-config.yml.j2
│   │       ├── kube-proxy.conf.j2
│   │       ├── kube-proxy.kubeconfig.j2
│   │       └── kube-proxy.service.j2
│   └── tls
│       ├── files
│       │   ├── generate_etcd_cert.sh
│       │   └── generate_k8s_cert.sh
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           ├── etcd
│           │   ├── ca-config.json.j2
│           │   ├── ca-csr.json.j2
│           │   └── server-csr.json.j2
│           └── k8s
│               ├── admin-csr.json.j2
│               ├── ca-config.json.j2
│               ├── ca-csr.json.j2
│               ├── kube-proxy-csr.json.j2
│               └── server-csr.json.j2
├── single-master-deploy.yml
└── single-master.jpg
1.1 Extract binary_pkg.tar.gz
[root@k8s-ansible1 ~]# ls
anaconda-ks.cfg ansible-install-k8s-master ansible-install-k8s-master.zip binary_pkg.tar.gz
[root@k8s-ansible1 ~]# tar zxvf binary_pkg.tar.gz
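The archive unpacks into /root/binary_pkg, the software_dir referenced by group_vars/all.yml. A quick listing confirms the packages the roles expect are in place (file names inferred from the playbook output further down; versions may differ in your copy, plus any pre-staged image archives used by the node role):
ls /root/binary_pkg
# expected, among others:
#   docker-18.09.6.tgz
#   etcd-v3.3.13-linux-amd64.tar.gz
#   kubernetes-server-linux-amd64-1.16.tar.gz
#   cni-plugins-linux-amd64-v0.8.2.tgz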
1.2 Update /etc/hosts on all nodes and the Ansible inventory (hosts)
[root@k8s-ansible1 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.171.11 k8s-ansible1 ## Ansible control node
192.168.171.12 k8s-ansible2 ## master
192.168.171.13 k8s-ansible3 ## node1
192.168.171.14 k8s-ansible4 ## node2
[root@k8s-ansible1 ~]# cd ansible-install-k8s-master
[root@k8s-ansible1 ansible-install-k8s-master]# cat hosts
[master]
# For a single-master deployment, keep only one master node
192.168.171.12 node_name=k8s-ansible2
#192.168.171.111 node_name=k8s-master2
[node]
192.168.171.13 node_name=k8s-ansible3
192.168.171.14 node_name=k8s-ansible4
[etcd]
192.168.171.12 etcd_name=k8s-ansible2
192.168.171.13 etcd_name=k8s-ansible3
192.168.171.14 etcd_name=k8s-ansible4
[lb]
# Ignore this group when deploying a single master
192.168.31.63 lb_name=lb-master
192.168.31.71 lb_name=lb-backup
[k8s:children]
master
node
[newnode]
#192.168.31.91 node_name=k8s-node3
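Before touching any playbook, it is worth checking that the control node can reach every host in the inventory. A minimal ad-hoc connectivity test, using the same password-based SSH login as the deployment command later on:
# ping every inventory host over SSH (prompts once for the root password)
ansible -i hosts all -m ping -uroot -k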
1.3 Adjust the global variables
[root@k8s-ansible1 ansible-install-k8s-master]# cat group_vars/all.yml
# Installation directories
software_dir: '/root/binary_pkg'
k8s_work_dir: '/opt/kubernetes'
etcd_work_dir: '/opt/etcd'
tmp_dir: '/tmp/k8s'
# Cluster networking
service_cidr: '10.0.0.0/24'
cluster_dns: '10.0.0.2' # must match the clusterIP in roles/addons/files/coredns.yaml
pod_cidr: '10.244.0.0/16' # must match the network in roles/addons/files/kube-flannel.yaml
service_nodeport_range: '30000-32767'
cluster_domain: 'cluster.local'
# High availability; ignore when deploying a single master
vip: '192.168.31.88'
nic: 'ens33'
# IP SANs trusted by the self-signed certificates; add spare IPs now to make later expansion easier
cert_hosts:
  # all LB, VIP and master IPs plus the first IP of service_cidr (the more the better; reserve a few for future growth)
  k8s:
    - 10.0.0.1
    - 192.168.171.11
    - 192.168.171.12
    - 192.168.171.13
    - 192.168.171.14
    - 192.168.171.15
    - 192.168.171.16
    - 192.168.171.17
    - 192.168.171.18
    - 192.168.171.19
    - 192.168.171.111
  # all etcd node IPs
  etcd:
    - 192.168.171.12
    - 192.168.171.13
    - 192.168.171.14
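The two inline comments above are the ones that bite: cluster_dns must equal the clusterIP defined in coredns.yaml, and pod_cidr must equal the network defined in kube-flannel.yaml. A simple grep sketch to sanity-check this before deploying (adjust the patterns if you changed the defaults):
# both commands should print at least one match
grep -n '10.0.0.2' roles/addons/files/coredns.yaml
grep -n '10.244.0.0/16' roles/addons/files/kube-flannel.yaml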
2. Running the deployment
2.1 Single-master version:
ansible-playbook -i hosts single-master-deploy.yml -uroot -k
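Before the real run you can validate the playbook and preview the task list; both are standard ansible-playbook options and execute nothing on the targets:
# syntax check only
ansible-playbook -i hosts single-master-deploy.yml --syntax-check
# list every task the play would run
ansible-playbook -i hosts single-master-deploy.yml --list-tasks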
2.2 Run log
[root@k8s-ansible1 ansible-install-k8s-master]# ansible-playbook -i hosts single-master-deploy.yml -uroot -k
SSH password:
PLAY [0.系统初始化] ********************************************************************************************************************************************************
TASK [common : 关闭firewalld] *******************************************************************************************************************************************
ok: [192.168.171.14]
ok: [192.168.171.12]
ok: [192.168.171.13]
TASK [common : 关闭selinux] *********************************************************************************************************************************************
ok: [192.168.171.14]
ok: [192.168.171.13]
ok: [192.168.171.12]
TASK [common : 关闭swap] ************************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [common : 即时生效] **************************************************************************************************************************************************
changed: [192.168.171.13]
changed: [192.168.171.12]
changed: [192.168.171.14]
TASK [common : 拷贝时区] **************************************************************************************************************************************************
ok: [192.168.171.14]
ok: [192.168.171.12]
ok: [192.168.171.13]
TASK [common : 添加hosts] ***********************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.14]
ok: [192.168.171.13]
PLAY [1.自签证书] *********************************************************************************************************************************************************
TASK [tls : 获取Ansible工作目录] ********************************************************************************************************************************************
changed: [localhost]
TASK [tls : 创建工作目录] ***************************************************************************************************************************************************
ok: [localhost] => (item=etcd)
ok: [localhost] => (item=k8s)
TASK [tls : 准备cfssl工具] ************************************************************************************************************************************************
ok: [localhost]
TASK [tls : 准备etcd证书请求文件] *********************************************************************************************************************************************
ok: [localhost] => (item=ca-config.json.j2)
ok: [localhost] => (item=ca-csr.json.j2)
ok: [localhost] => (item=server-csr.json.j2)
TASK [tls : 准备生成etcd证书脚本] *********************************************************************************************************************************************
ok: [localhost]
TASK [tls : 生成etcd证书] *************************************************************************************************************************************************
changed: [localhost]
TASK [tls : 准备k8s证书请求文件] **********************************************************************************************************************************************
ok: [localhost] => (item=ca-config.json.j2)
ok: [localhost] => (item=ca-csr.json.j2)
ok: [localhost] => (item=server-csr.json.j2)
ok: [localhost] => (item=admin-csr.json.j2)
ok: [localhost] => (item=kube-proxy-csr.json.j2)
TASK [tls : 准备生成k8s证书脚本] **********************************************************************************************************************************************
ok: [localhost]
TASK [tls : 生成k8s证书] **************************************************************************************************************************************************
changed: [localhost]
PLAY [2.部署Docker] *****************************************************************************************************************************************************
TASK [docker : 创建临时目录] ************************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.13]
ok: [192.168.171.14]
TASK [docker : 分发并解压docker二进制包] ***************************************************************************************************************************************
ok: [192.168.171.14] => (item=/root/binary_pkg/docker-18.09.6.tgz)
ok: [192.168.171.12] => (item=/root/binary_pkg/docker-18.09.6.tgz)
ok: [192.168.171.13] => (item=/root/binary_pkg/docker-18.09.6.tgz)
TASK [docker : 移动docker二进制文件] *****************************************************************************************************************************************
changed: [192.168.171.13]
changed: [192.168.171.12]
changed: [192.168.171.14]
TASK [docker : 分发service文件] *******************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.13]
ok: [192.168.171.14]
TASK [docker : 创建目录] **************************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.13]
ok: [192.168.171.14]
TASK [docker : 配置docker] **********************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.13]
ok: [192.168.171.14]
TASK [docker : 启动docker] **********************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [docker : 查看状态] **************************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [docker : debug] *************************************************************************************************************************************************
ok: [192.168.171.13] => {
"docker.stdout_lines": [
"Containers: 0",
" Running: 0",
" Paused: 0",
" Stopped: 0",
"Images: 0",
"Server Version: 18.09.6",
"Storage Driver: overlay2",
" Backing Filesystem: xfs",
" Supports d_type: true",
" Native Overlay Diff: true",
"Logging Driver: json-file",
"Cgroup Driver: cgroupfs",
"Plugins:",
" Volume: local",
" Network: bridge host macvlan null overlay",
" Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog",
"Swarm: inactive",
"Runtimes: runc",
"Default Runtime: runc",
"Init Binary: docker-init",
"containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84",
"runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30",
"init version: fec3683",
"Security Options:",
" seccomp",
" Profile: default",
"Kernel Version: 3.10.0-957.el7.x86_64",
"Operating System: CentOS Linux 7 (Core)",
"OSType: linux",
"Architecture: x86_64",
"CPUs: 2",
"Total Memory: 1.777GiB",
"Name: k8s-ansible3",
"ID: O3LF:KDZ3:CXD6:MU6T:3DKL:PS42:6ATX:R4QE:GMI7:QHNO:CVQO:7ZW6",
"Docker Root Dir: /var/lib/docker",
"Debug Mode (client): false",
"Debug Mode (server): false",
"Registry: https://index.docker.io/v1/",
"Labels:",
"Experimental: false",
"Insecure Registries:",
" 192.168.31.70",
" 127.0.0.0/8",
"Registry Mirrors:",
" http://bc437cce.m.daocloud.io/",
"Live Restore Enabled: false",
"Product License: Community Engine"
]
}
ok: [192.168.171.12] => {
"docker.stdout_lines": [
"Containers: 0",
" Running: 0",
" Paused: 0",
" Stopped: 0",
"Images: 0",
"Server Version: 18.09.6",
"Storage Driver: overlay2",
" Backing Filesystem: xfs",
" Supports d_type: true",
" Native Overlay Diff: true",
"Logging Driver: json-file",
"Cgroup Driver: cgroupfs",
"Plugins:",
" Volume: local",
" Network: bridge host macvlan null overlay",
" Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog",
"Swarm: inactive",
"Runtimes: runc",
"Default Runtime: runc",
"Init Binary: docker-init",
"containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84",
"runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30",
"init version: fec3683",
"Security Options:",
" seccomp",
" Profile: default",
"Kernel Version: 3.10.0-957.el7.x86_64",
"Operating System: CentOS Linux 7 (Core)",
"OSType: linux",
"Architecture: x86_64",
"CPUs: 2",
"Total Memory: 1.777GiB",
"Name: k8s-ansible2",
"ID: DFQT:2YYV:YWY5:IXSS:U6VS:BW7R:6MPH:WCLC:QKOW:Y63I:5TV6:C3HT",
"Docker Root Dir: /var/lib/docker",
"Debug Mode (client): false",
"Debug Mode (server): false",
"Registry: https://index.docker.io/v1/",
"Labels:",
"Experimental: false",
"Insecure Registries:",
" 192.168.31.70",
" 127.0.0.0/8",
"Registry Mirrors:",
" http://bc437cce.m.daocloud.io/",
"Live Restore Enabled: false",
"Product License: Community Engine"
]
}
ok: [192.168.171.14] => {
"docker.stdout_lines": [
"Containers: 0",
" Running: 0",
" Paused: 0",
" Stopped: 0",
"Images: 0",
"Server Version: 18.09.6",
"Storage Driver: overlay2",
" Backing Filesystem: xfs",
" Supports d_type: true",
" Native Overlay Diff: true",
"Logging Driver: json-file",
"Cgroup Driver: cgroupfs",
"Plugins:",
" Volume: local",
" Network: bridge host macvlan null overlay",
" Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog",
"Swarm: inactive",
"Runtimes: runc",
"Default Runtime: runc",
"Init Binary: docker-init",
"containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84",
"runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30",
"init version: fec3683",
"Security Options:",
" seccomp",
" Profile: default",
"Kernel Version: 3.10.0-957.el7.x86_64",
"Operating System: CentOS Linux 7 (Core)",
"OSType: linux",
"Architecture: x86_64",
"CPUs: 2",
"Total Memory: 1.777GiB",
"Name: k8s-ansible4",
"ID: M3EP:OAKQ:6AMI:RDC6:QX2H:U34M:5GTT:Q2E7:AFP7:C4M3:FUO2:UHS3",
"Docker Root Dir: /var/lib/docker",
"Debug Mode (client): false",
"Debug Mode (server): false",
"Registry: https://index.docker.io/v1/",
"Labels:",
"Experimental: false",
"Insecure Registries:",
" 192.168.31.70",
" 127.0.0.0/8",
"Registry Mirrors:",
" http://bc437cce.m.daocloud.io/",
"Live Restore Enabled: false",
"Product License: Community Engine"
]
}
PLAY [3.部署ETCD集群] *****************************************************************************************************************************************************
TASK [etcd : 创建工作目录] **************************************************************************************************************************************************
ok: [192.168.171.12] => (item=bin)
ok: [192.168.171.13] => (item=bin)
ok: [192.168.171.14] => (item=bin)
ok: [192.168.171.12] => (item=cfg)
ok: [192.168.171.13] => (item=cfg)
ok: [192.168.171.14] => (item=cfg)
ok: [192.168.171.12] => (item=ssl)
ok: [192.168.171.13] => (item=ssl)
ok: [192.168.171.14] => (item=ssl)
TASK [etcd : 创建临时目录] **************************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.13]
ok: [192.168.171.14]
TASK [etcd : 分发并解压etcd二进制包] *******************************************************************************************************************************************
ok: [192.168.171.12] => (item=/root/binary_pkg/etcd-v3.3.13-linux-amd64.tar.gz)
ok: [192.168.171.14] => (item=/root/binary_pkg/etcd-v3.3.13-linux-amd64.tar.gz)
ok: [192.168.171.13] => (item=/root/binary_pkg/etcd-v3.3.13-linux-amd64.tar.gz)
TASK [etcd : 移动etcd二进制文件] *********************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.14]
changed: [192.168.171.13]
TASK [etcd : 分发证书] ****************************************************************************************************************************************************
changed: [192.168.171.14] => (item=ca.pem)
changed: [192.168.171.12] => (item=ca.pem)
changed: [192.168.171.13] => (item=ca.pem)
changed: [192.168.171.12] => (item=server.pem)
changed: [192.168.171.13] => (item=server.pem)
changed: [192.168.171.14] => (item=server.pem)
changed: [192.168.171.14] => (item=server-key.pem)
changed: [192.168.171.13] => (item=server-key.pem)
changed: [192.168.171.12] => (item=server-key.pem)
TASK [etcd : 分发etcd配置文件] **********************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [etcd : 分发service文件] *********************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [etcd : 启动etcd] **************************************************************************************************************************************************
changed: [192.168.171.14]
changed: [192.168.171.12]
changed: [192.168.171.13]
TASK [etcd : 分发etcd脚本] ************************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.14]
changed: [192.168.171.13]
TASK [etcd : 获取etcd集群状态] **********************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [etcd : debug] ***************************************************************************************************************************************************
ok: [192.168.171.13] => {
"status.stdout_lines": [
"member 5da6acb8be0f9647 is healthy: got healthy result from https://192.168.171.14:2379",
"member a747b20fc3712bbc is healthy: got healthy result from https://192.168.171.12:2379",
"member f351513a2b4642ac is healthy: got healthy result from https://192.168.171.13:2379",
"cluster is healthy"
]
}
ok: [192.168.171.12] => {
"status.stdout_lines": [
"member 5da6acb8be0f9647 is healthy: got healthy result from https://192.168.171.14:2379",
"member a747b20fc3712bbc is healthy: got healthy result from https://192.168.171.12:2379",
"member f351513a2b4642ac is healthy: got healthy result from https://192.168.171.13:2379",
"cluster is healthy"
]
}
ok: [192.168.171.14] => {
"status.stdout_lines": [
"member 5da6acb8be0f9647 is healthy: got healthy result from https://192.168.171.14:2379",
"member a747b20fc3712bbc is healthy: got healthy result from https://192.168.171.12:2379",
"member f351513a2b4642ac is healthy: got healthy result from https://192.168.171.13:2379",
"cluster is healthy"
]
}
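The same health check can be reproduced by hand on any etcd node; a sketch using the etcd v2 etcdctl flags that match the output above, assuming etcdctl was installed alongside etcd under the default etcd_work_dir (/opt/etcd):
/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.171.12:2379,https://192.168.171.13:2379,https://192.168.171.14:2379" \
  cluster-health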
PLAY [4.部署K8S Master] *************************************************************************************************************************************************
TASK [master : 创建工作目录] ************************************************************************************************************************************************
changed: [192.168.171.12] => (item=bin)
changed: [192.168.171.12] => (item=cfg)
changed: [192.168.171.12] => (item=ssl)
changed: [192.168.171.12] => (item=logs)
TASK [master : 创建临时目录] ************************************************************************************************************************************************
ok: [192.168.171.12]
TASK [master : 分发并解压k8s二进制包] ******************************************************************************************************************************************
changed: [192.168.171.12] => (item=/root/binary_pkg/kubernetes-server-linux-amd64-1.16.tar.gz)
TASK [master : 移动k8s master二进制文件] *************************************************************************************************************************************
changed: [192.168.171.12]
TASK [master : 分发k8s证书] ***********************************************************************************************************************************************
changed: [192.168.171.12] => (item=ca.pem)
changed: [192.168.171.12] => (item=ca-key.pem)
changed: [192.168.171.12] => (item=server.pem)
changed: [192.168.171.12] => (item=server-key.pem)
TASK [master : 分发etcd证书] **********************************************************************************************************************************************
changed: [192.168.171.12] => (item=ca.pem)
changed: [192.168.171.12] => (item=server.pem)
changed: [192.168.171.12] => (item=server-key.pem)
TASK [master : 分发token文件] *********************************************************************************************************************************************
changed: [192.168.171.12]
TASK [master : 分发k8s配置文件] *********************************************************************************************************************************************
changed: [192.168.171.12] => (item=kube-apiserver.conf.j2)
changed: [192.168.171.12] => (item=kube-controller-manager.conf.j2)
changed: [192.168.171.12] => (item=kube-scheduler.conf.j2)
TASK [master : 分发service文件] *******************************************************************************************************************************************
changed: [192.168.171.12] => (item=kube-apiserver.service.j2)
changed: [192.168.171.12] => (item=kube-controller-manager.service.j2)
changed: [192.168.171.12] => (item=kube-scheduler.service.j2)
TASK [master : 启动k8s master组件] ****************************************************************************************************************************************
changed: [192.168.171.12] => (item=kube-apiserver)
changed: [192.168.171.12] => (item=kube-controller-manager)
changed: [192.168.171.12] => (item=kube-scheduler)
TASK [master : 查看集群状态] ************************************************************************************************************************************************
changed: [192.168.171.12]
TASK [master : debug] *************************************************************************************************************************************************
ok: [192.168.171.12] => {
"cs.stdout_lines": [
"NAME AGE",
"scheduler <unknown>",
"controller-manager <unknown>",
"etcd-0 <unknown>",
"etcd-1 <unknown>",
"etcd-2 <unknown>"
]
}
TASK [master : 拷贝RBAC文件] **********************************************************************************************************************************************
changed: [192.168.171.12] => (item=kubelet-bootstrap-rbac.yaml)
changed: [192.168.171.12] => (item=apiserver-to-kubelet-rbac.yaml)
TASK [master : 授权APIServer访问Kubelet与授权kubelet bootstrap] **************************************************************************************************************
changed: [192.168.171.12]
PLAY [5.部署K8S Node] ***************************************************************************************************************************************************
TASK [node : 创建工作目录] **************************************************************************************************************************************************
changed: [192.168.171.13] => (item=bin)
changed: [192.168.171.14] => (item=bin)
ok: [192.168.171.12] => (item=bin)
changed: [192.168.171.13] => (item=cfg)
changed: [192.168.171.14] => (item=cfg)
ok: [192.168.171.12] => (item=cfg)
changed: [192.168.171.14] => (item=ssl)
changed: [192.168.171.13] => (item=ssl)
ok: [192.168.171.12] => (item=ssl)
changed: [192.168.171.14] => (item=logs)
ok: [192.168.171.12] => (item=logs)
changed: [192.168.171.13] => (item=logs)
TASK [node : 创建cni插件目录] ***********************************************************************************************************************************************
changed: [192.168.171.14] => (item=/opt/cni/bin)
changed: [192.168.171.13] => (item=/opt/cni/bin)
changed: [192.168.171.12] => (item=/opt/cni/bin)
changed: [192.168.171.13] => (item=/etc/cni/net.d)
changed: [192.168.171.14] => (item=/etc/cni/net.d)
changed: [192.168.171.12] => (item=/etc/cni/net.d)
TASK [node : 创建临时目录] **************************************************************************************************************************************************
ok: [192.168.171.12]
ok: [192.168.171.13]
ok: [192.168.171.14]
TASK [node : 分发并解压k8s二进制包] ********************************************************************************************************************************************
ok: [192.168.171.12] => (item=/root/binary_pkg/kubernetes-server-linux-amd64-1.16.tar.gz)
changed: [192.168.171.13] => (item=/root/binary_pkg/kubernetes-server-linux-amd64-1.16.tar.gz)
changed: [192.168.171.14] => (item=/root/binary_pkg/kubernetes-server-linux-amd64-1.16.tar.gz)
TASK [node : 分发并解压cni插件二进制包] ******************************************************************************************************************************************
changed: [192.168.171.14] => (item=/root/binary_pkg/cni-plugins-linux-amd64-v0.8.2.tgz)
changed: [192.168.171.12] => (item=/root/binary_pkg/cni-plugins-linux-amd64-v0.8.2.tgz)
changed: [192.168.171.13] => (item=/root/binary_pkg/cni-plugins-linux-amd64-v0.8.2.tgz)
TASK [node : 移动k8s node二进制文件] *****************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.13]
changed: [192.168.171.14]
TASK [node : 分发k8s证书] *************************************************************************************************************************************************
ok: [192.168.171.12] => (item=ca.pem)
changed: [192.168.171.13] => (item=ca.pem)
changed: [192.168.171.14] => (item=ca.pem)
changed: [192.168.171.12] => (item=kube-proxy.pem)
changed: [192.168.171.14] => (item=kube-proxy.pem)
changed: [192.168.171.13] => (item=kube-proxy.pem)
changed: [192.168.171.12] => (item=kube-proxy-key.pem)
changed: [192.168.171.13] => (item=kube-proxy-key.pem)
changed: [192.168.171.14] => (item=kube-proxy-key.pem)
TASK [node : 分发k8s配置文件] ***********************************************************************************************************************************************
changed: [192.168.171.12] => (item=bootstrap.kubeconfig.j2)
changed: [192.168.171.13] => (item=bootstrap.kubeconfig.j2)
changed: [192.168.171.14] => (item=bootstrap.kubeconfig.j2)
changed: [192.168.171.14] => (item=kubelet.conf.j2)
changed: [192.168.171.13] => (item=kubelet.conf.j2)
changed: [192.168.171.12] => (item=kubelet.conf.j2)
changed: [192.168.171.12] => (item=kubelet-config.yml.j2)
changed: [192.168.171.13] => (item=kubelet-config.yml.j2)
changed: [192.168.171.14] => (item=kubelet-config.yml.j2)
changed: [192.168.171.14] => (item=kube-proxy.kubeconfig.j2)
changed: [192.168.171.13] => (item=kube-proxy.kubeconfig.j2)
changed: [192.168.171.12] => (item=kube-proxy.kubeconfig.j2)
changed: [192.168.171.14] => (item=kube-proxy.conf.j2)
changed: [192.168.171.12] => (item=kube-proxy.conf.j2)
changed: [192.168.171.13] => (item=kube-proxy.conf.j2)
changed: [192.168.171.12] => (item=kube-proxy-config.yml.j2)
changed: [192.168.171.14] => (item=kube-proxy-config.yml.j2)
changed: [192.168.171.13] => (item=kube-proxy-config.yml.j2)
TASK [node : 分发service文件] *********************************************************************************************************************************************
changed: [192.168.171.12] => (item=kubelet.service.j2)
changed: [192.168.171.13] => (item=kubelet.service.j2)
changed: [192.168.171.14] => (item=kubelet.service.j2)
changed: [192.168.171.14] => (item=kube-proxy.service.j2)
changed: [192.168.171.12] => (item=kube-proxy.service.j2)
changed: [192.168.171.13] => (item=kube-proxy.service.j2)
TASK [node : 启动k8s node组件] ********************************************************************************************************************************************
changed: [192.168.171.13] => (item=kubelet)
changed: [192.168.171.14] => (item=kubelet)
changed: [192.168.171.12] => (item=kubelet)
changed: [192.168.171.14] => (item=kube-proxy)
changed: [192.168.171.13] => (item=kube-proxy)
changed: [192.168.171.12] => (item=kube-proxy)
TASK [node : 分发预准备镜像] *************************************************************************************************************************************************
changed: [192.168.171.14]
changed: [192.168.171.13]
changed: [192.168.171.12]
TASK [node : 导入镜像] ****************************************************************************************************************************************************
changed: [192.168.171.12]
changed: [192.168.171.14]
changed: [192.168.171.13]
PLAY [6.部署插件] *********************************************************************************************************************************************************
TASK [addons : 允许Node加入集群] ********************************************************************************************************************************************
changed: [192.168.171.12]
TASK [addons : 拷贝YAML文件到Master] ***************************************************************************************************************************************
changed: [192.168.171.12] => (item=/root/ansible-install-k8s-master/roles/addons/files/coredns.yaml)
changed: [192.168.171.12] => (item=/root/ansible-install-k8s-master/roles/addons/files/ingress-controller.yaml)
changed: [192.168.171.12] => (item=/root/ansible-install-k8s-master/roles/addons/files/kube-flannel.yaml)
changed: [192.168.171.12] => (item=/root/ansible-install-k8s-master/roles/addons/files/kubernetes-dashboard.yaml)
TASK [addons : 部署Flannel,Dashboard,CoreDNS,Ingress] *******************************************************************************************************************
changed: [192.168.171.12]
TASK [addons : 替换Dashboard证书] *****************************************************************************************************************************************
changed: [192.168.171.12]
TASK [addons : 查看Pod状态] ***********************************************************************************************************************************************
changed: [192.168.171.12]
TASK [addons : debug] *************************************************************************************************************************************************
ok: [192.168.171.12] => {
"getall.stdout_lines": [
"NAMESPACE NAME READY STATUS RESTARTS AGE",
"kube-system pod/coredns-6d8cfdd59d-hcfw5 0/1 Pending 0 2s",
"kubernetes-dashboard pod/dashboard-metrics-scraper-566cddb686-nk7t8 0/1 Pending 0 1s",
"kubernetes-dashboard pod/kubernetes-dashboard-c4bc5bd44-cxgb6 0/1 Pending 0 1s",
"",
"NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE",
"default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2m19s",
"ingress-nginx service/ingress-nginx ClusterIP 10.0.0.158 <none> 80/TCP,443/TCP 2s",
"kube-system service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 2s",
"kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.0.38 <none> 8000/TCP 1s",
"kubernetes-dashboard service/kubernetes-dashboard NodePort 10.0.0.180 <none> 443:30001/TCP 1s",
"",
"NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE",
"ingress-nginx daemonset.apps/nginx-ingress-controller 0 0 0 0 0 <none> 2s",
"kube-system daemonset.apps/kube-flannel-ds-amd64 0 0 0 0 0 <none> 2s",
"",
"NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE",
"kube-system deployment.apps/coredns 0/1 1 0 2s",
"kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 0/1 1 0 1s",
"kubernetes-dashboard deployment.apps/kubernetes-dashboard 0/1 1 0 1s",
"",
"NAMESPACE NAME DESIRED CURRENT READY AGE",
"kube-system replicaset.apps/coredns-6d8cfdd59d 1 1 0 2s",
"kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-566cddb686 1 1 0 1s",
"kubernetes-dashboard replicaset.apps/kubernetes-dashboard-c4bc5bd44 1 1 0 1s"
]
}
TASK [addons : 创建Dashboard管理员令牌] **************************************************************************************************************************************
changed: [192.168.171.12]
TASK [addons : 获取Dashboard管理员令牌] **************************************************************************************************************************************
changed: [192.168.171.12]
TASK [addons : Kubernetes Dashboard登录信息] ******************************************************************************************************************************
ok: [192.168.171.12] => {
"ui.stdout_lines": [
"访问地址--->https://NodeIP:30001",
"令牌内容--->eyJhbGciOiJSUzI1NiIsImtpZCI6IlhOV0FZU1ZXRU80MU5oRUlYeGsxbExFcVB1R1k0bEEzMDhQQWdWVE5oZG8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tenQ2Y3ciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTU3YjEyYmYtNjNjNC00NzU1LWI4YTAtN2IyY2ZkZmRmNmE3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.fKCHvmsmZmIErB9YLHtQrqWQBL_b89W0i_gDa4rwgV9x4UfzVAXskUiZiQs_yAHNmyUaIqPpBdUI64pvAoXilr-6wIk8-R8hpp4BJXLL4OsTtPXxrhIQF4_NP0D-4flg9sHba-I9X9A_2RWskcY53PAPTOjlyOQuldUyTdIT9tXi6jeSgj8CrDBc9O_A3xYWZ1f7RvrdEdU4Kkotc1rsBeGg-OzabU1nNLxWAaDHZJFciYeABtbPoY2fTkdz0JGoIxLpAqcQKoFp9ztGPcoOboCOqeb_hc-caBAmyvVIfbPvBiywdtuidjvb1IazETt_GQlzg7FMBoUpHhJYOTvnAA"
]
}
PLAY RECAP ************************************************************************************************************************************************************
192.168.171.12 : ok=61 changed=40 unreachable=0 failed=0
192.168.171.13 : ok=38 changed=23 unreachable=0 failed=0
192.168.171.14 : ok=38 changed=23 unreachable=0 failed=0
localhost : ok=9 changed=3 unreachable=0 failed=0
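If the Dashboard token scrolls away, it can be fetched again on the master at any time; the dashboard-admin service account name is visible in the decoded token payload above. A sketch (adjust the grep pattern if the secret is named differently in your cluster):
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')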
2.3 Verification
[root@k8s-ansible2 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-ansible2 Ready <none> 117s v1.16.0
k8s-ansible3 Ready <none> 117s v1.16.0
k8s-ansible4 Ready <none> 117s v1.16.0
[root@k8s-ansible2 ~]# kubectl get po --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx nginx-ingress-controller-7g9fh 1/1 Running 0 2m41s 192.168.171.13 k8s-ansible3 <none> <none>
ingress-nginx nginx-ingress-controller-hc492 1/1 Running 0 2m41s 192.168.171.14 k8s-ansible4 <none> <none>
ingress-nginx nginx-ingress-controller-s4slm 1/1 Running 0 2m41s 192.168.171.12 k8s-ansible2 <none> <none>
kube-system coredns-6d8cfdd59d-hcfw5 1/1 Running 0 3m8s 10.244.1.2 k8s-ansible2 <none> <none>
kube-system kube-flannel-ds-amd64-6nbt4 1/1 Running 0 2m51s 192.168.171.12 k8s-ansible2 <none> <none>
kube-system kube-flannel-ds-amd64-mfksz 1/1 Running 0 2m51s 192.168.171.14 k8s-ansible4 <none> <none>
kube-system kube-flannel-ds-amd64-vclgg 1/1 Running 0 2m51s 192.168.171.13 k8s-ansible3 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-566cddb686-nk7t8 1/1 Running 0 3m7s 10.244.2.2 k8s-ansible4 <none> <none>
kubernetes-dashboard kubernetes-dashboard-c4bc5bd44-cxgb6 1/1 Running 0 3m7s 10.244.0.2 k8s-ansible3 <none> <none>
[root@k8s-ansible2 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-96KwqYYgTQ0ajISzq2pUc5Dzu07UzjaKRZqRWs31yUk 5m5s kubelet-bootstrap Approved,Issued
node-csr-9PduNyHHpXtDmuNFj3fCkpoNkGcDkO2NPEk3uGQ3kIk 5m6s kubelet-bootstrap Approved,Issued
node-csr-WLFKgflHlDK2f0RFvuTYKlkHcr8hz0iOrzYcp2V50JE 5m6s kubelet-bootstrap Approved,Issued
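As a final smoke test, schedule a throwaway workload and expose it through a NodePort; this assumes the nodes can pull the nginx image from a reachable registry or mirror:
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods,svc -o wide
# then curl any node IP on the allocated NodePort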

3. Multi-master deployment
3.1 hosts inventory
[root@k8s-ansible1 ansible-install-k8s-master]# cat hosts
[master]
# For a single-master deployment, keep only one master node
192.168.171.11 node_name=k8s-master1
192.168.171.12 node_name=k8s-master2
[node]
192.168.171.13 node_name=k8s-node1
192.168.171.14 node_name=k8s-node2
[etcd]
192.168.171.11 etcd_name=etcd-1
192.168.171.12 etcd_name=etcd-2
192.168.171.13 etcd_name=etcd-3
[lb]
# Ignore this group when deploying a single master
192.168.171.15 lb_name=lb-master
192.168.171.16 lb_name=lb-backup
[k8s:children]
master
node
[newnode]
#192.168.31.91 node_name=k8s-node3
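The [newnode] group pairs with add-node.yml in the repository root: to scale out later, add (or uncomment) an entry under [newnode] and run only that playbook. A sketch of the workflow, assuming add-node.yml targets the newnode group as its name and the inventory comment suggest (the IP below is just an example):
# hosts
# [newnode]
# 192.168.171.21 node_name=k8s-node3
ansible-playbook -i hosts add-node.yml -uroot -k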
3.2 Global variables
[root@k8s-ansible1 ansible-install-k8s-master]# cat group_vars/all.yml
# Installation directories
software_dir: '/root/binary_pkg'
k8s_work_dir: '/opt/kubernetes'
etcd_work_dir: '/opt/etcd'
tmp_dir: '/tmp/k8s'
# Cluster networking
service_cidr: '10.0.0.0/24'
cluster_dns: '10.0.0.2' # must match the clusterIP in roles/addons/files/coredns.yaml
pod_cidr: '10.244.0.0/16' # must match the network in roles/addons/files/kube-flannel.yaml
service_nodeport_range: '30000-32767'
cluster_domain: 'cluster.local'
# High availability; ignore when deploying a single master
vip: '192.168.171.88'
nic: 'ens33'
# IP SANs trusted by the self-signed certificates; add spare IPs now to make later expansion easier
cert_hosts:
  # all LB, VIP and master IPs plus the first IP of service_cidr (the more the better; reserve a few for future growth)
  k8s:
    - 10.0.0.1
    - 192.168.171.11
    - 192.168.171.12
    - 192.168.171.13
    - 192.168.171.14
    - 192.168.171.15
    - 192.168.171.16
    - 192.168.171.17
    - 192.168.171.18
    - 192.168.171.19
    - 192.168.171.10
    - 192.168.171.21
    - 192.168.171.88
  # all etcd node IPs
  etcd:
    - 192.168.171.11
    - 192.168.171.12
    - 192.168.171.13
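vip and nic feed the keepalived and nginx templates in roles/ha. A small preflight sketch: make sure the VIP is not already in use on the LAN and that the interface name matches what the LB nodes actually have:
# should show 100% packet loss if the VIP is still free
ping -c 2 -W 1 192.168.171.88
# run on each LB node; the interface must match nic in group_vars/all.yml
ip -br addr show ens33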
3.3 Run the deployment
Multi-master version:
ansible-playbook -i hosts multi-master-deploy.yml -uroot -k
3.4 Run log (the result is what matters)
Omitted...
3.5 Verification
[root@k8s-ansible1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 85s v1.16.0
k8s-master2 Ready <none> 85s v1.16.0
k8s-node1 Ready <none> 85s v1.16.0
k8s-node2 Ready <none> 85s v1.16.0
[root@k8s-ansible1 ~]# kubectl get po,svc --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx pod/nginx-ingress-controller-92b8v 1/1 Running 0 90s
ingress-nginx pod/nginx-ingress-controller-dfkp5 1/1 Running 0 90s
ingress-nginx pod/nginx-ingress-controller-hckvr 1/1 Running 0 91s
ingress-nginx pod/nginx-ingress-controller-qckdd 1/1 Running 0 90s
kube-system pod/coredns-6d8cfdd59d-lsdps 1/1 Running 0 117s
kube-system pod/kube-flannel-ds-amd64-2mc74 1/1 Running 1 100s
kube-system pod/kube-flannel-ds-amd64-4hqq7 1/1 Running 0 101s
kube-system pod/kube-flannel-ds-amd64-dgzrb 1/1 Running 0 100s
kube-system pod/kube-flannel-ds-amd64-zjtpq 1/1 Running 0 100s
kubernetes-dashboard pod/dashboard-metrics-scraper-566cddb686-9xh7b 1/1 Running 0 116s
kubernetes-dashboard pod/kubernetes-dashboard-c4bc5bd44-4f45q 1/1 Running 0 116s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 5m40s
ingress-nginx service/ingress-nginx ClusterIP 10.0.0.170 <none> 80/TCP,443/TCP 117s
kube-system service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 117s
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.0.172 <none> 8000/TCP 116s
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.0.0.57 <none> 443:30001/TCP 116s
[root@k8s-ansible1 ~]# kubectl get po,svc --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx pod/nginx-ingress-controller-92b8v 1/1 Running 0 96s 192.168.171.11 k8s-master1 <none> <none>
ingress-nginx pod/nginx-ingress-controller-dfkp5 1/1 Running 0 96s 192.168.171.12 k8s-master2 <none> <none>
ingress-nginx pod/nginx-ingress-controller-hckvr 1/1 Running 0 97s 192.168.171.13 k8s-node1 <none> <none>
ingress-nginx pod/nginx-ingress-controller-qckdd 1/1 Running 0 96s 192.168.171.14 k8s-node2 <none> <none>
kube-system pod/coredns-6d8cfdd59d-lsdps 1/1 Running 0 2m3s 10.244.1.2 k8s-master2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-2mc74 1/1 Running 1 106s 192.168.171.12 k8s-master2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-4hqq7 1/1 Running 0 107s 192.168.171.13 k8s-node1 <none> <none>
kube-system pod/kube-flannel-ds-amd64-dgzrb 1/1 Running 0 106s 192.168.171.14 k8s-node2 <none> <none>
kube-system pod/kube-flannel-ds-amd64-zjtpq 1/1 Running 0 106s 192.168.171.11 k8s-master1 <none> <none>
kubernetes-dashboard pod/dashboard-metrics-scraper-566cddb686-9xh7b 1/1 Running 0 2m2s 10.244.2.2 k8s-master1 <none> <none>
kubernetes-dashboard pod/kubernetes-dashboard-c4bc5bd44-4f45q 1/1 Running 0 2m2s 10.244.3.2 k8s-node2 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 5m46s <none>
ingress-nginx service/ingress-nginx ClusterIP 10.0.0.170 <none> 80/TCP,443/TCP 2m3s app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
kube-system service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP 2m3s k8s-app=kube-dns
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.0.172 <none> 8000/TCP 2m2s k8s-app=dashboard-metrics-scraper
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.0.0.57 <none> 443:30001/TCP 2m2s k8s-app=kubernetes-dashboard
### Check the two HA (load balancer) nodes:
[root@k8s-ansible5 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:85:37:e0 brd ff:ff:ff:ff:ff:ff
inet 192.168.171.15/24 brd 192.168.171.255 scope global noprefixroute ens33
valid_lft forever preferred_lft forever
inet 192.168.171.88/24 scope global secondary ens33 ## virtual IP (VIP)
valid_lft forever preferred_lft forever
inet6 fe80::d20b:b903:7edd:b18b/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::5d9e:cf1f:ea7f:801f/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::5c0:8885:2874:a77b/64 scope link tentative noprefixroute dadfailed
valid_lft forever preferred_lft forever
[root@k8s-ansible5 ~]# ps aux | grep nginx
root 7895 0.0 0.0 46356 1168 ? Ss 22:57 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
nginx 7896 0.1 0.1 47184 2308 ? S 22:57 0:00 nginx: worker process
nginx 7897 0.0 0.1 46780 1980 ? S 22:57 0:00 nginx: worker process
nginx 7898 0.0 0.1 46924 2216 ? S 22:57 0:00 nginx: worker process
nginx 7899 0.0 0.1 46876 2220 ? S 22:57 0:00 nginx: worker process
root 11331 0.0 0.0 112728 988 pts/0 S+ 23:07 0:00 grep --color=auto nginx
[root@k8s-ansible5 ~]# ps aux | grep keepalived
root 7975 0.0 0.0 122884 1404 ? Ss 22:57 0:00 /usr/sbin/keepalived -D
root 7976 0.0 0.1 133844 3336 ? S 22:57 0:00 /usr/sbin/keepalived -D
root 7977 0.0 0.1 133784 2892 ? S 22:57 0:00 /usr/sbin/keepalived -D
root 11369 0.0 0.0 112724 992 pts/0 R+ 23:07 0:00 grep --color=auto keepalived
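To confirm failover really works, stop nginx on the LB currently holding the VIP and watch keepalived (via check_nginx.sh) move 192.168.171.88 to the backup. A manual drill, assuming nginx and keepalived are managed by systemd on the LB nodes:
# on the LB holding the VIP (here k8s-ansible5 / 192.168.171.15)
systemctl stop nginx
# on the backup LB: the VIP should appear within a few seconds
ip addr show ens33 | grep 192.168.171.88
# restore the original state afterwards
systemctl start nginx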
