K8s Load Balancing with nginx-ingress

++This article is compiled from several other excellent write-ups, combined with my own understanding:++

### 1. Current situation
k8s already has coreDNS, which lets containers inside the cluster reach each other by DNS name, but those in-cluster domain names cannot be reached from outside, and layer-7 (domain-based) load balancing remains unsolved. nginx-ingress exists precisely to provide layer-7 load balancing on top of k8s; it is added to the cluster as an addon and runs as Pods, with multiple replicas for high availability.

### 2. Nginx Ingress generally consists of three components:

- 1) ingress is a kubernetes resource object used to define the forwarding rules.
- 2) A reverse-proxy load balancer, usually running inside the cluster and exposed through a Service port, which receives traffic and forwards it according to the rules defined by the ingress; common choices are nginx, haproxy and traefik, and this article uses nginx.
- 3) ingress-controller, which watches the apiserver for service additions, deletions and other changes, combines them with the ingress rules to dynamically update the reverse-proxy load balancer, and reloads its configuration so the changes take effect.

When these three work together, services in a Kubernetes cluster can be exposed to the outside world; a minimal sketch of an Ingress rule follows.
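
To make the first component concrete, here is a minimal Ingress sketch (demo.example.com and the Service web-svc are made-up placeholders; the API version is the one used later in this article):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com          # requests whose Host header matches this domain...
    http:
      paths:
      - path: /
        backend:
          serviceName: web-svc      # ...are forwarded to this Service
          servicePort: 80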

Here is a diagram:

[figure]

Nginx reverse-proxies the backend services (Service1, Service2); its configuration file records the mapping from domain names to the Endpoints of those services. Clients map the domain names to the Nginx proxy server either via a DNS service or by editing their local hosts file. When a client visits service1.com, the browser sends the request carrying that domain name to the nginx server; nginx uses the domain to pick the matching Service, here Service1, and then, following its load-balancing policy, selects one of Service1's containers to receive the client's request and respond. The process is simple: throughout, nginx behaves like a "router" that forwards requests based on the domain name, and that is the overall workflow of a layer-7 proxy!

We now have a rough idea of what the Nginx reverse proxy does. In a k8s cluster, however, backend services change very frequently, and keeping the nginx configuration file up to date by hand is next to impossible; nginx-ingress was born for exactly this. Nginx-ingress watches the state changes of k8s resources and updates the nginx configuration automatically; the rest of this section analyses how it works.

#### 2.1 nginx-ingress workflow analysis

First, an overall architecture diagram of its working mode (focusing only on configuration synchronization):

[figure]

Leaving aside auxiliary features such as nginx status collection, the nginx-ingress module has three main actors at runtime: NginxController, Store and SyncQueue.

- Store collects runtime information from the kubernetes APIServer, watches for changes to resources such as Ingress and Service, and writes the corresponding update events into a ring channel;
- The SyncQueue goroutine periodically scans the syncQueue; whenever it finds a task it performs the update: it pulls the latest runtime data via Store, generates a new nginx configuration according to the rules (some updates require a reload, in which case the new config is written locally and a reload is executed), and then performs a dynamic update by building POST data and sending it to the local Nginx Lua service module;
- NginxController is the coordinator in the middle: it listens on updateChannel, and whenever a configuration-update event arrives it writes an update request into the syncQueue.

In plain terms:

- 1) The ingress controller talks to the kubernetes api and dynamically senses changes to the Ingress rules in the cluster; it reads them and, following those rules (which state which domain maps to which service), generates a piece of nginx configuration;
- 2) It then writes that into the nginx-ingress-controller pod. This Ingress controller pod runs an Nginx service, and the controller writes the generated nginx configuration into /etc/nginx/nginx.conf;
- 3) Finally it reloads nginx so the configuration takes effect. This is how per-domain configuration and dynamic updates are achieved.

#### 2.2 What problems Ingress solves

##### 1. Dynamic service configuration

With the traditional approach, every time we add a new service we would have to add a reverse-proxy entry at the traffic entry point pointing at the new k8s service. With Ingress, we only need to configure the service; when it starts it is automatically registered with the Ingress, and no extra manual steps are required.

##### 2. Fewer unnecessary exposed ports

Anyone who has set up k8s knows the first step is usually to disable the firewall, largely because many k8s services are exposed via NodePort, which effectively punches lots of holes in the host machines: neither secure nor elegant. Ingress avoids this; apart from the Ingress itself, which may need to be exposed, no other service needs to use NodePort (a minimal sketch follows).
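
A minimal sketch of such a service (all names are placeholders): a plain ClusterIP Service opens no port on any node at all and is reachable only through the Ingress rules in front of it.

apiVersion: v1
kind: Service
metadata:
  name: web-svc            # placeholder name
spec:
  type: ClusterIP          # the default type: no NodePort, nothing opened on the node
  selector:
    app: web               # selects the backend Pods
  ports:
  - port: 80               # port of the Service inside the cluster
    targetPort: 8080       # port the containers actually listen on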

#### 2.3 The relationship between Pods and Ingress

- The Ingress Controller load-balances traffic to Pods, supporting both layer 4 (TCP/UDP) and layer 7 (HTTP); see the tcp-services sketch below for the layer-4 side.
- The Ingress resource only defines the rules; the actual load balancing is carried out by the Ingress controller.
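
For the layer-4 side, ingress-nginx reads the tcp-services / udp-services ConfigMaps (both are created by the mandatory.yaml below). A hedged sketch, assuming a hypothetical mysql Service in the default namespace, that exposes it over plain TCP on the controller's port 3306:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # "<port opened on the controller>": "<namespace>/<service name>:<service port>"
  "3306": "default/mysql:3306"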

Access flow: user ---> Ingress Controller (Node) ---> Service ---> Pod

### 3. Deploying nginx-ingress-controller and defining Ingress policies

The latest version as of 2019-12-05:

Manifests are available at: https://github.com/kubernetes/ingress-nginx/tree/nginx-0.26.1/deploy


#### 3.1 Download the deployment manifests

Two options are provided:

- Download the latest yaml by default:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
- Or download the yaml for a specific version (e.g. replace master in the URL above with a release tag such as nginx-0.26.1).

Change the image path (image):

[root@localhost src]# grep image mandatory.yaml
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1

I was unable to pull the image above, so I switched to the Aliyun mirror of the google_containers registry:

image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1

The modified yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
 
---
 
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
 
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
 
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
 
---
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

This single mandatory.yaml creates a lot of resources: the namespace, ConfigMaps, roles, the ServiceAccount and everything else the ingress-controller needs. Rather than walk through all of it, let's focus on the Deployment section above ↑

- We can see it mainly uses the image "registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1".

- Several startup arguments are passed; ports 80 and 443 are opened, and a health check is served on port 10254.

Next, change the Deployment section of mandatory.yaml above to the following:

# change the kind from Deployment to DaemonSet; the original was:
# apiVersion: apps/v1
# kind: Deployment
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
# replicas removed (a DaemonSet has no replicas field)
# replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # schedule only onto nodes carrying the matching label
      nodeSelector:
        isIngress: "true"
      # expose the service via hostNetwork
      hostNetwork: true
      containers:
        - name: nginx-ingress-controller
          image: registry.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---
kind: DaemonSet: the official file uses a Deployment with replicas: 1, which starts the nginx-ingress-controller pod on one node; external traffic reaches that node and is load-balanced from there to the internal Services. To avoid a single point of failure in this test environment, we change it to a DaemonSet, drop replicas, and use node selection so the nginx-ingress-controller pod starts on designated nodes; with several nodes running the pod, those nodes can later be added to an external hardware load-balancer pool for high availability.

hostNetwork: true: adding this field exposes the nginx-ingress-controller pod's service ports (80/443) directly on the node.

nodeSelector: constrains scheduling; only nodes labelled isIngress="true" will run this DaemonSet.

Label the nodes that should run nginx-ingress-controller; in this test they are k8s-node1, k8s-node2 and k8s-node3.

$ kubectl label node k8s-node1 isIngress="true"
$ kubectl label node k8s-node2 isIngress="true"
$ kubectl label node k8s-node3 isIngress="true"

Apply the yaml to deploy:

[root@k8s-master1 src]# kubectl apply -f mandatory.yaml
 
# result
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created

#### 3.2 Check the deployment (my machine has limited resources, so I only labelled one node, as below:)

[root@k8s-master1 src]# kubectl get daemonset -n ingress-nginx
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR    AGE
nginx-ingress-controller   1         1         1       1            1           isIngress=true   3m24s
[root@k8s-master1 src]# kubectl get po -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE     IP                NODE        NOMINATED NODE   READINESS GATES
nginx-ingress-controller-ql4x5   1/1     Running   0          3m39s   192.168.171.136   k8s-node1   <none>           <none>

As you can see, the nginx-controller pod has been deployed on k8s-node1.

Check the local listening ports on node1:

[root@k8s-node1 ~]# netstat -lntp | grep nginx
tcp        0      0 127.0.0.1:10247         0.0.0.0:*               LISTEN      34132/nginx: master
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      34132/nginx: master
tcp        0      0 0.0.0.0:8181            0.0.0.0:*               LISTEN      34132/nginx: master
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      34132/nginx: master
tcp        0      0 127.0.0.1:10245         0.0.0.0:*               LISTEN      34078/nginx-ingress
tcp        0      0 127.0.0.1:10246         0.0.0.0:*               LISTEN      34132/nginx: master
tcp6       0      0 :::10254                :::*                    LISTEN      34078/nginx-ingress
tcp6       0      0 :::80                   :::*                    LISTEN      34132/nginx: master
tcp6       0      0 :::8181                 :::*                    LISTEN      34132/nginx: master
tcp6       0      0 :::443                  :::*                    LISTEN      34132/nginx: master

Because hostNetwork is configured, nginx listens on ports 80/443/8181 directly on the node; 8181 is the default backend that nginx-controller configures out of the box. So as long as the node has a public IP, you can map domain names to it and expose services to the outside directly. For nginx high availability, deploy it on several nodes and put an LVS + keepalived load balancer in front. Another benefit of hostNetwork: LVS in DR mode does not support port mapping, so exposing non-standard ports via NodePort would be painful to manage.

Key point: production notes

Tie keepalived to the ingress

Current situation:

Pods can be scheduled onto many nodes. If a domain name is bound to a single node and that node fails, the domain goes down with it, so there is no high availability.

Solution:

Install keepalived on each node, configure a VIP with one node as MASTER and the others as BACKUP, and bind the domain name to the VIP. (For example, with 10 nodes, make 1 the MASTER and the other 9 BACKUP; if the MASTER fails, one of the 9 takes over immediately, and the domain simply points at the virtual IP.)

#### 3.3 Deploy a Service to expose the controller externally

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml

Edit the service file and pin the nodePorts: use 30080 and 30443.

The modified file:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080   # http requests are exposed externally on port 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443  # https requests are exposed externally on port 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

#### 3.4 Deploy a Tomcat to test Ingress forwarding

vim k8s-tomcat-test.yaml

apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
   app: tomcat
   release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
 
---
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 1
  selector:
   matchLabels:
     app: tomcat
     release: canary
  template:
   metadata:
     labels:
       app: tomcat
       release: canary
   spec:
     containers:
     - name: tomcat
       image: tomcat
       ports:
       - name: http
         containerPort: 8080

#### 3.5 Define the Ingress policy

vim k8s-tomcat-test-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.zhdya.com
    http:
      paths:
      - path:
        backend:
          serviceName: tomcat
          servicePort: 8080
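
Side note: the annotation key must be kubernetes.io/ingress.class. On newer clusters where the extensions/v1beta1 Ingress API has been removed, roughly the following networking.k8s.io/v1 form would be used instead (a sketch only, not tested against the 0.26.1 setup used here):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-tomcat
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.zhdya.com
    http:
      paths:
      - path: /
        pathType: Prefix             # pathType is required in the v1 API
        backend:
          service:
            name: tomcat
            port:
              number: 8080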

Bind the host manually (in /etc/hosts) for testing:

192.168.171.188 myapp.zhdya.com
Publish the tomcat Service through the Ingress (apply the two yaml files above).

Open in a browser: http://myapp.zhdya.com:30080/


#### 3.6 Add HTTPS to the tomcat service

[root@k8s-master ingress-nginx]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
.......+++
..............................+++
e is 65537 (0x10001)
[root@k8s-master ingress-nginx]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=myapp.zhdya.com # note: the CN must match the service's domain name
[root@k8s-master ingress-nginx]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key # create the secret
secret "tomcat-ingress-secret" created
[root@k8s-master ingress-nginx]# kubectl get secret
NAME                    TYPE                                  DATA      AGE
default-token-bf52l     kubernetes.io/service-account-token   3         9d
tomcat-ingress-secret   kubernetes.io/tls                     2         7s
[root@k8s-master ingress-nginx]# kubectl describe secret tomcat-ingress-secret
Name:         tomcat-ingress-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1294 bytes  # base64-encoded
tls.key:  1679 bytes
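
Instead of kubectl create secret tls, the same secret could also be declared in YAML (a sketch; the base64 values are placeholders to be filled from the generated tls.crt / tls.key, e.g. with base64 -w0):

apiVersion: v1
kind: Secret
metadata:
  name: tomcat-ingress-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: "<base64 of tls.crt>"    # placeholder: output of base64 -w0 tls.crt
  tls.key: "<base64 of tls.key>"    # placeholder: output of base64 -w0 tls.key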

Apply the certificate to the tomcat service

[root@k8s-master01 ingress]# vim k8s-tomcat-test-ingress-tls.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - myapp.zhdya.com        # must match the domain in the secret's certificate
    secretName: tomcat-ingress-secret   # name of the TLS secret
  rules:
  - host: myapp.zhdya.com
    http:
      paths:
      - path: 
        backend:
          serviceName: tomcat
          servicePort: 8080
[root@k8s-master01 ingress]#  kubectl apply -f k8s-tomcat-test-ingress-tls.yaml

Access the service again:

https://myapp.zhdya.com:30443/


A bonus to wrap up:

If you look back over 3.3–3.6, you may have noticed that when testing the tomcat container I manually created the service-nodeport.yaml file discussed earlier, whose job is to expose the in-cluster service externally; we also manually bound the domain to the IP of the node where the tomcat pod lives:

192.168.171.188 myapp.zhdya.com
That was to show how a Service is exposed from inside the Pods. But the attentive reader will have spotted that in section 3.1 we already created the nginx-ingress-controller, which can perfectly well expose services for us. Exactly right!

See the beginning of this article for what this important nginx-ingress-controller component actually does!

[root@k8s-master1 ~]# kubectl get po,svc,ep --all-namespaces -o wide
NAMESPACE              NAME                                             READY   STATUS    RESTARTS   AGE    IP                NODE        NOMINATED NODE   READINESS GATES
default                pod/busybox                                      1/1     Running   7          8d     10.244.0.16       k8s-node1   <none>           <none>
default                pod/tomcat-deploy-758b795dcc-69gjz               1/1     Running   1          2d6h   10.244.0.17       k8s-node1   <none>           <none>
default                pod/tomcat-deploy-758b795dcc-llcp5               1/1     Running   2          2d6h   10.244.1.17       k8s-node2   <none>           <none>
default                pod/web-d86c95cc9-k9vnf                          1/1     Running   4          9d     10.244.1.20       k8s-node2   <none>           <none>
default                pod/web-d86c95cc9-x2wn6                          1/1     Running   4          8d     10.244.0.18       k8s-node1   <none>           <none>
ingress-nginx          pod/nginx-ingress-controller-ql4x5               1/1     Running   0          47m    192.168.171.136   k8s-node1   <none>           <none>
kube-system            pod/coredns-6d8cfdd59d-gbd2m                     1/1     Running   5          8d     10.244.0.19       k8s-node1   <none>           <none>
kube-system            pod/kube-flannel-ds-amd64-d2gzx                  1/1     Running   3          9d     192.168.171.136   k8s-node1   <none>           <none>
kube-system            pod/kube-flannel-ds-amd64-lwsnd                  1/1     Running   4          9d     192.168.171.137   k8s-node2   <none>           <none>
kubernetes-dashboard   pod/dashboard-metrics-scraper-566cddb686-wrkfl   1/1     Running   3          9d     10.244.1.19       k8s-node2   <none>           <none>
kubernetes-dashboard   pod/kubernetes-dashboard-7b5bf5d559-csfwm        1/1     Running   4          9d     10.244.1.18       k8s-node2   <none>           <none>
[root@k8s-master1 src]# kubectl exec -it nginx-ingress-controller-ql4x5 -n ingress-nginx -- cat nginx.conf

... (irrelevant config at the start omitted) ...
## start server myapp.zhdya.com
	server {
		server_name myapp.zhdya.com ;

		listen 80  ;
		listen [::]:80  ;
		listen 443  ssl http2 ;
		listen [::]:443  ssl http2 ;

		set $proxy_upstream_name "-";

		ssl_certificate_by_lua_block {
			certificate.call()
		}

		location / {

			set $namespace      "default";
			set $ingress_name   "ingress-tomcat";
			set $service_name   "tomcat";
			set $service_port   "8080";
			set $location_path  "/";

			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				})
				balancer.rewrite()
				plugins.run()
			}

			header_filter_by_lua_block {

				plugins.run()
			}
			body_filter_by_lua_block {

			}

			log_by_lua_block {

				balancer.log()

				monitor.call()

				plugins.run()
			}

			port_in_redirect off;

			set $balancer_ewma_score -1;
			set $proxy_upstream_name "default-tomcat-8080";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			set $pass_server_port    $server_port;
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;

			set $proxy_alternative_upstream_name "";

			client_max_body_size                    1m;

			proxy_set_header Host                   $best_http_host;

			# Pass the extracted client certificate to the backend

			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;

			proxy_set_header                        Connection        $connection_upgrade;

			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;

			proxy_set_header X-Forwarded-For        $remote_addr;

			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

			proxy_set_header X-Scheme               $pass_access_scheme;

			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";

			# Custom headers to proxied server

			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;

			proxy_buffering                         off;
			proxy_buffer_size                       4k;
			proxy_buffers                           4 4k;

			proxy_max_temp_file_size                1024m;

			proxy_request_buffering                 on;
			proxy_http_version                      1.1;

			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;

			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;

			proxy_pass http://upstream_balancer;

			proxy_redirect                          off;

		}

	}
	## end server myapp.zhdya.com
... (irrelevant config at the end omitted) ...

I strongly recommend reading this nginx.conf carefully; it will convince you just how much work the nginx-ingress-controller component does for you!

Then replace the previously hand-bound hosts entry with the node where the ingress-nginx pod runs:

192.168.171.136 myapp.zhdya.com
Access it again: no more need for that 30443 port, right?


And by now you also know how to set up and bind the external LB in front of all this; and don't forget the key points highlighted earlier in the article!