K8S: Ingress-Nginx High Availability

Following on from the previous article, we have ingress deployed. To make it production-grade, we still have to configure high availability for ingress:

Suppose we have deployed ingress-nginx on two designated worker nodes of the Kubernetes cluster to proxy traffic to the backend pods. We then use keepalived to provide high availability by exposing a VIP, so the upstream of the external LB only needs to be bound to this VIP.


First, make sure that ingress-nginx is deployed on two worker nodes.
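
How you land two controller replicas on those nodes depends on how ingress-nginx was installed. A minimal sketch, assuming the controller runs as a Deployment named nginx-ingress-controller in the ingress-nginx namespace (as the pod names in step 1 suggest) and that the node label ingress=true is free to use; note that without pod anti-affinity two replicas are not guaranteed to land on different nodes, so a DaemonSet with the same nodeSelector is the stricter alternative:

# Label the two worker nodes that should host the ingress controller
kubectl label nodes k8s-node02 k8s-node03 ingress=true

# Pin the controller to the labeled nodes and run two replicas
kubectl -n ingress-nginx patch deployment nginx-ingress-controller \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress":"true"}}}}}'
kubectl -n ingress-nginx scale deployment nginx-ingress-controller --replicas=2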

In this lab, the environment is as follows:

IP address    Hostname        Description
10.0.0.31     k8s-master01
10.0.0.34     k8s-node02      ingress nginx, keepalived
10.0.0.35     k8s-node03      ingress nginx, keepalived

1. Check the ingress-nginx status

[root@k8s-master01 Ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP          NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-85bd8789cd-8c4xh   1/1     Running   0          62s     10.0.0.34   k8s-node02   <none>           <none>
nginx-ingress-controller-85ff8dfd88-vqkhx   1/1     Running   0          3m56s   10.0.0.35   k8s-node03   <none>           <none>

Create a namespace for the test environment:

kubectl  create namespace test

2. Deploy a Deployment (for testing)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myweb-deploy
  # deployed in the test namespace
  namespace: test
spec:
  replicas: 3
  selector:
    matchLabels:
      name: myweb
      type: test
  template:
    metadata:
      labels:
        name: myweb
        type: test
    spec:
      containers:
      - name: nginx
        image: nginx:1.13
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 80
---
# service
apiVersion: v1
kind: Service
metadata:
  name: myweb-svc
  # must live in the same namespace as the pods it selects
  namespace: test
spec:
  selector:
    name: myweb
    type: test
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
---
# ingress
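# (The manifest in the original post stops at this "# ingress" stub. A minimal
#  sketch of the missing piece is filled in below: it routes a hypothetical host
#  myweb.test.com to myweb-svc. The resource name and host are assumptions, and
#  on Kubernetes 1.19+ you would write this as networking.k8s.io/v1 with
#  ingressClassName and pathType instead of the older form used here.)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myweb-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myweb.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myweb-svc
          servicePort: 80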

Run kubectl create to apply the manifest:

kubectl create -f myweb-demo.yaml

Check whether the deployment was created successfully:

[root@k8s-master01 Project]# kubectl get pods -n test -o wide | grep "myweb"
myweb-deploy-6d586d7db4-2g5ll   1/1     Running   0          23s     10.244.3.240   k8s-node02   <none>           <none>
myweb-deploy-6d586d7db4-cf7w7   1/1     Running   0          4m2s    10.244.1.132   k8s-node01   <none>           <none>
myweb-deploy-6d586d7db4-rp5zc   1/1     Running   0          3m59s   10.244.2.5     k8s-node03   <none>
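
Before involving keepalived, you can confirm the whole chain works by checking that the Service picked up endpoints and by sending a test request through one of the ingress nodes. The Host header below matches the hypothetical Ingress sketch above, and the request assumes the controller is reachable on port 80 of the node (hostNetwork/hostPort); substitute the NodePort if that is how your controller is exposed:

# the Service should list the three pod IPs as endpoints
kubectl get endpoints myweb-svc -n test

# smoke test through the controller running on k8s-node02
curl -H "Host: myweb.test.com" http://10.0.0.34/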

3. Deploy keepalived on the two worker nodes

VIP: 10.0.0.130, interface: eth0

3.1 Install keepalived

yum -y install keepalived

3.1.1 Configure keepalived on k8s-node03 as the MASTER

[root@k8s-node03 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email_from Alexandre.Cassen@firewall.loc
   router_id k8s-node03
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.130/24 dev eth0 label eth0:1
    }
}

3.1.2 Configure keepalived on k8s-node02 as the BACKUP

[root@k8s-node02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s-node02
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      10.0.0.130/24 dev eth0 label eth0:1
    }
}
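
As written, this setup only fails over when a node or keepalived itself dies, not when the ingress controller on the MASTER stops answering. Optionally, both keepalived.conf files can also track a health check. The sketch below is an assumption: it probes port 80 on the local node (adjust the URL to however your controller is exposed) and treats any HTTP response, even a 404 from the default backend, as healthy; only a refused or timed-out connection fails the check.

vrrp_script chk_ingress {
    # a non-zero exit code marks this node unhealthy and triggers failover
    script "/usr/bin/curl -s -o /dev/null --max-time 3 http://127.0.0.1:80/"
    interval 3
    fall 2
    rise 2
}

# and inside the existing vrrp_instance VI_1 block on both nodes:
#     track_script {
#         chk_ingress
#     }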

Start keepalived on both nodes and enable it at boot:

systemctl start keepalived.service
systemctl enable keepalived.service

Once started, check that the VIP is present on k8s-node03:

[root@k8s-node03 ~]# ip add | grep "130"
    inet 10.0.0.130/24 scope global secondary eth0:1
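
An optional failover test: stop keepalived on the MASTER and confirm the VIP moves to the BACKUP, then restore it. With the priorities above (110 vs 100) and default preemption, the VIP returns to k8s-node03 once its keepalived is running again.

# on k8s-node03 (MASTER): simulate a failure
systemctl stop keepalived.service

# on k8s-node02 (BACKUP): the VIP should now be bound here
ip add | grep "130"

# on k8s-node03: restore; the VIP preempts back after a few seconds
systemctl start keepalived.service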

Then point the backend (upstream) of the external LB at this VIP, and we get the intended result.
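
What that looks like depends on your external LB. A minimal sketch for an nginx-based external LB, assuming the ingress controller listens on port 80 behind the VIP and that *.test.com is the domain space routed by the Ingress (both are assumptions):

# hypothetical /etc/nginx/conf.d/ingress-vip.conf on the external LB
upstream ingress_vip {
    server 10.0.0.130:80;        # the keepalived VIP, not an individual node
}

server {
    listen 80;
    server_name *.test.com;      # the domains served by the ingress

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://ingress_vip;
    }
}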

The final result is not shown here; running the experiment yourself is the best demonstration!