Kubernetes 1.11.2 Notes Ⅲ

Testing the cluster

# Create an nginx Deployment and Service

apiVersion: extensions/v1beta1 
kind: Deployment 
metadata: 
  name: nginx-dm
spec: 
  replicas: 2
  template: 
    metadata: 
      labels: 
        name: nginx 
    spec: 
      containers: 
        - name: nginx 
          image: nginx:alpine 
          imagePullPolicy: IfNotPresent
          ports: 
            - containerPort: 80
            
---

apiVersion: v1 
kind: Service
metadata: 
  name: nginx-svc 
spec: 
  ports: 
    - port: 80
      targetPort: 80
      protocol: TCP 
  selector: 
    name: nginx

Create the testnginx Deployment:

[root@master1 ~]# kubectl create -f testnginx.yaml
deployment.extensions/nginx-dm created
service/nginx-svc created

[root@master1 ~]# kubectl get po -o wide
NAME                       READY     STATUS              RESTARTS   AGE       IP               NODE      NOMINATED NODE
nginx-dm-fff68d674-j7dlk   1/1       Running             0          9m        10.254.108.115   node2     <none>
nginx-dm-fff68d674-r5hb6   1/1       Running             0          9m        10.254.102.133   node1     <none>

curl one of the pod IPs from a node that has the Calico network installed:

[root@node2 ~]# curl 10.254.102.133
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Check the IPVS rules:

[root@node2 ssl]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.161.161:6443         Masq    1      1          0
  -> 192.168.161.162:6443         Masq    1      0          0
TCP  10.254.18.37:80 rr
  -> 10.254.75.1:80               Masq    1      0          0
  -> 10.254.102.133:80            Masq    1      0          0
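
Here 10.254.0.1:443 is the kubernetes apiserver Service balancing to the two masters, and 10.254.18.37:80 should be the ClusterIP of nginx-svc, round-robin (rr) balancing to the pod IPs. A quick cross-check (a sketch; run kubectl on the master and ipvsadm on a node, using the ClusterIP from the output above):

# The virtual server address should match the Service's ClusterIP
kubectl get svc nginx-svc

# Inspect a single virtual service and its real (pod) backends
ipvsadm -L -n -t 10.254.18.37:80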

Configure CoreDNS

Official site: https://coredns.io

Download the yaml file:

wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed

mv coredns.yaml.sed coredns.yaml

Modify the following parts of the config file:

# vi coredns.yaml

First change:
...
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local 10.254.0.0/18 {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
...       

Second change: search for /clusterIP
  clusterIP: 10.254.0.2
### Configuration notes
1) errors: logs any errors encountered during query processing to stdout.

2) health: health check; exposes an HTTP endpoint on the configured port (8080 by default) that returns "OK" while the instance is healthy.

3) cluster.local: the domain CoreDNS serves for Kubernetes. The 10.254.0.0/18 block additionally tells the kubernetes plugin that it answers PTR requests for the corresponding in-addr.arpa reverse zone; in other words, it enables reverse DNS resolution of service IPs. (A DNS server typically has both a forward lookup zone and a reverse lookup zone: forward lookup is ordinary name resolution, mapping a name to an address via A records, while reverse lookup maps an IP address back to a fully qualified domain name via its PTR record, which must exist for the reverse query to succeed.)

4) proxy: queries not handled by the plugins above are forwarded here; multiple upstream nameservers can be configured, or, as in this case, it can defer to the nameservers defined in /etc/resolv.conf.

5) cache: caches both kinds of response, positive (the query returned a result) and negative ("no such domain"), with separate cache sizes and TTLs.

# Here, kubernetes cluster.local is followed by the IP range Services (svc) are created in

kubernetes cluster.local 10.254.0.0/18 

# clusterIP is the designated IP of the DNS Service

clusterIP: 10.254.0.2
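
Once CoreDNS is created below, both directions can be spot-checked from any node or pod with nslookup available, by querying the DNS ClusterIP directly; a minimal sketch using the addresses configured above:

# Forward lookup of the apiserver Service, asked directly of CoreDNS
nslookup kubernetes.default.svc.cluster.local 10.254.0.2

# Reverse (PTR) lookup of a service IP, answered thanks to the in-addr.arpa reverse zone above
nslookup 10.254.0.1 10.254.0.2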

Create CoreDNS:

[root@master1 src]# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.extensions/coredns created
service/kube-dns created

Check what was created:

[root@master1 src]# kubectl get pod,svc -n kube-system -o wide
NAME                                          READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
pod/calico-kube-controllers-79cfd7887-scnnp   1/1       Running   1          2d        192.168.161.78   node2     <none>
pod/calico-node-pwlq4                         2/2       Running   2          2d        192.168.161.77   node1     <none>
pod/calico-node-vmrrq                         2/2       Running   2          2d        192.168.161.78   node2     <none>
pod/coredns-55f86bf584-fqjf2                  1/1       Running   0          23s       10.254.102.139   node1     <none>
pod/coredns-55f86bf584-hsrbp                  1/1       Running   0          23s       10.254.75.21     node2     <none>

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE       SELECTOR
service/kube-dns   ClusterIP   10.254.0.2   <none>        53/UDP,53/TCP   23s       k8s-app=kube-dns

Check the logs:

[root@master1 src]# kubectl logs coredns-55f86bf584-hsrbp -n kube-system
.:53
2018/09/22 02:03:06 [INFO] CoreDNS-1.2.2
2018/09/22 02:03:06 [INFO] linux/amd64, go1.11, eb51e8b
CoreDNS-1.2.2
linux/amd64, go1.11, eb51e8b

Verify the DNS service

Before verifying DNS: pods, deployments, and other workloads ++created before DNS was deployed must all be deleted++ and redeployed, otherwise name resolution will fail inside them.
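
For example, the nginx test resources from the beginning of this section can be bounced like this (a sketch, reusing testnginx.yaml):

# delete and recreate so the new pods pick up the cluster DNS config
kubectl delete -f testnginx.yaml
kubectl create -f testnginx.yaml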

Create a pod to test DNS:

apiVersion: v1
kind: Pod
metadata:
  name: alpine
spec:
  containers:
  - name: alpine
    image: alpine
    command:
    - sleep
    - "3600"

Check the created resources:

[root@master1 ~]# kubectl get po,svc -o wide
NAME                           READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
pod/alpine                     1/1       Running   0          52s       10.254.102.141   node1     <none>
pod/nginx-dm-fff68d674-fzhqk   1/1       Running   0          3m        10.254.102.140   node1     <none>
pod/nginx-dm-fff68d674-h8n79   1/1       Running   0          3m        10.254.75.22     node2     <none>

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE       SELECTOR
service/kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP   20d       <none>
service/nginx-svc    ClusterIP   10.254.10.144   <none>        80/TCP    3m        name=nginx

Test:

[root@master1 ~]#  kubectl exec -it alpine nslookup nginx-svc
nslookup: can't resolve '(null)': Name does not resolve

Name:      nginx-svc
Address 1: 10.254.10.144 nginx-svc.default.svc.cluster.local
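
The nslookup: can't resolve '(null)' line is a quirk of the busybox/alpine nslookup binary and can be ignored; the lookup itself succeeded. The fully qualified name should resolve to the same address (a sketch):

kubectl exec -it alpine -- nslookup nginx-svc.default.svc.cluster.local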

Deploy DNS autoscaling

Scale the number of DNS replicas automatically with the number of nodes.
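
The autoscaler's linear mode, configured in the yaml below, sizes the target Deployment from the cluster's node and core counts. A worked example for this 2-node cluster, using the default-params set below:

# linear mode, per the cluster-proportional-autoscaler documentation:
#   replicas = max( ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica) )
# with coresPerReplica=256, nodesPerReplica=16 and 2 nodes:
#   replicas = max( ceil(totalCores / 256), ceil(2 / 16) ) = 1
# preventSinglePointFailure=true then raises the result to 2 whenever there is more than one node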

vim dns-auto-scaling.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list"]
  - apiGroups: [""]
    resources: ["replicationcontrollers/scale"]
    verbs: ["get", "update"]
  - apiGroups: ["extensions"]
    resources: ["deployments/scale", "replicasets/scale"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:kube-dns-autoscaler
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: kube-dns-autoscaler
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:kube-dns-autoscaler
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-dns-autoscaler
  namespace: kube-system
  labels:
    k8s-app: kube-dns-autoscaler
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns-autoscaler
  template:
    metadata:
      labels:
        k8s-app: kube-dns-autoscaler
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      containers:
      - name: autoscaler
        image: jicki/cluster-proportional-autoscaler-amd64:1.1.2-r2
        resources:
            requests:
                cpu: "20m"
                memory: "10Mi"
        command:
          - /cluster-proportional-autoscaler
          - --namespace=kube-system
          - --configmap=kube-dns-autoscaler
          - --target=Deployment/coredns
          - --default-params={"linear":{"coresPerReplica":256,"nodesPerReplica":16,"preventSinglePointFailure":true}}
          - --logtostderr=true
          - --v=2
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      serviceAccountName: kube-dns-autoscaler
#### Apply the file

[root@master1 ~]# kubectl apply -f dns-auto-scaling.yaml
serviceaccount/kube-dns-autoscaler created
clusterrole.rbac.authorization.k8s.io/system:kube-dns-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns-autoscaler created
deployment.apps/kube-dns-autoscaler created

++The images used above are listed below; if the originals cannot be pulled, use these instead++:

registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:coredns-1.2.2

registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:cluster-proportional-autoscaler-amd64_1.1.2-r2

Deploy Ingress and Dashboard

Deploy heapster

Official dashboard GitHub: https://github.com/kubernetes/dashboard

Official heapster GitHub: https://github.com/kubernetes/heapster

Download the heapster yaml files:

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

==The official images above are updated continually; when editing, update the version numbers below to match↓==

Download the heapster images

# Official images
k8s.gcr.io/heapster-grafana-amd64:v4.4.3
k8s.gcr.io/heapster-amd64:v1.5.3
k8s.gcr.io/heapster-influxdb-amd64:v1.3.3

# Personal mirrors
jicki/heapster-grafana-amd64:v4.4.3
jicki/heapster-amd64:v1.5.3
jicki/heapster-influxdb-amd64:v1.3.3

# Backup Aliyun mirrors
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:heapster-grafana-amd64-v4.4.3
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:heapster-amd64-v1.5.3
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:heapster-influxdb-amd64-v1.3.3

# Replace the image addresses in every yaml file

sed -i 's/k8s\.gcr\.io/jicki/g' *.yaml
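
A quick check that the substitution took effect (sketch):

# every image line should now point at the jicki mirror
grep -n 'image:' *.yaml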

Modify the yaml files

# heapster.yaml

#### Change the following ####

Because the kubelet has HTTPS enabled, the source URL must include the HTTPS kubelet port:

        - --source=kubernetes:https://kubernetes.default
change to
        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
# heapster-rbac.yaml

#### Change to the following ####

Bind the ServiceAccount kube-system:heapster to the ClusterRole system:kubelet-api-admin, granting it permission to call the kubelet API:


kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster-kubelet-api
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kubelet-api-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system

Create:

[root@master1 dashboard180922]# kubectl apply -f .
deployment.extensions/monitoring-grafana created
service/monitoring-grafana created
clusterrolebinding.rbac.authorization.k8s.io/heapster created
clusterrolebinding.rbac.authorization.k8s.io/heapster-kubelet-api created
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created

This may take a while here, depending on your server's network:

[root@node1 ~]# journalctl -u kubelet -f
-- Logs begin at Sat 2018-09-22 09:07:48 CST. --
Sep 22 10:34:55 node1 kubelet[2301]: I0922 10:34:55.701016    2301 kube_docker_client.go:345] Pulling image "jicki/heapster-grafana-amd64:v4.4.3": "a05a7a3d2d4f: Downloading [=======>              ]  7.617MB/50.21MB"
Sep 22 10:35:05 node1 kubelet[2301]: I0922 10:35:05.700868    2301 kube_docker_client.go:345] Pulling image "jicki/heapster-grafana-amd64:v4.4.3": "a05a7a3d2d4f: Downloading [========>             ]  8.633MB/50.21MB"
...
Sep 22 10:36:35 node1 kubelet[2301]: I0922 10:36:35.701950    2301 kube_docker_client.go:345] Pulling image "jicki/heapster-grafana-amd64:v4.4.3": "a05a7a3d2d4f: Downloading [==================================>           ]  34.55MB/50.21MB"
#### Check the deployment status
[root@master1 dashboard180922]# kubectl get po,svc -n kube-system -o wide
NAME                                          READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
pod/calico-kube-controllers-79cfd7887-scnnp   1/1       Running   1          2d        192.168.161.78   node2     <none>
pod/calico-node-pwlq4                         2/2       Running   2          2d        192.168.161.77   node1     <none>
pod/calico-node-vmrrq                         2/2       Running   2          2d        192.168.161.78   node2     <none>
pod/coredns-55f86bf584-fqjf2                  1/1       Running   0          44m       10.254.102.139   node1     <none>
pod/coredns-55f86bf584-hsrbp                  1/1       Running   0          44m       10.254.75.21     node2     <none>
pod/heapster-745d7bc8b7-zk65c                 1/1       Running   0          13m       10.254.75.51     node2     <none>
pod/kube-dns-autoscaler-66d448df8f-4zvw6      1/1       Running   0          32m       10.254.102.142   node1     <none>
pod/monitoring-grafana-558c44f948-m2tzz       1/1       Running   0          1m        10.254.75.6      node2     <none>
pod/monitoring-influxdb-f6bcc9795-496jd       1/1       Running   0          13m       10.254.102.147   node1     <none>

NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE       SELECTOR
service/heapster              ClusterIP   10.254.4.11    <none>        80/TCP          13m       k8s-app=heapster
service/kube-dns              ClusterIP   10.254.0.2     <none>        53/UDP,53/TCP   44m       k8s-app=kube-dns
service/monitoring-grafana    ClusterIP   10.254.25.50   <none>        80/TCP          1m        k8s-app=grafana
service/monitoring-influxdb   ClusterIP   10.254.37.83   <none>        8086/TCP        13m       k8s-app=influxdb
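
With heapster running, kubectl top should begin returning metrics after a minute or two; a quick sanity check (output varies per cluster):

# node and pod resource usage, served by heapster in 1.11
kubectl top node
kubectl top pod -n kube-system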

Deploy dashboard

Download the dashboard image

# Official image
k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

# Personal mirror
jicki/kubernetes-dashboard-amd64:v1.8.3

# Aliyun mirror
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:kubernetes-dashboard-amd64-v1.8.3

Download the yaml file

curl -O https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the yaml

# Replace all image references; note: set the image tag to v1.8.3

sed -i 's/k8s\.gcr\.io/jicki/g' kubernetes-dashboard.yaml
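
After the sed, it is worth confirming the image line, and hand-editing the tag to v1.8.3 if the upstream yaml references a different version (sketch):

# should read jicki/kubernetes-dashboard-amd64:v1.8.3
grep 'image:' kubernetes-dashboard.yaml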

Create the dashboard:

[root@master1 dashboard180922]# kubectl apply -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Check the created dashboard:
[root@master1 dashboard180922]# kubectl get po,svc -n kube-system -o wide | grep dashboard
pod/kubernetes-dashboard-65666d4586-bb66s     1/1       Running   0          7m        10.254.102.151   node1     <none>

service/kubernetes-dashboard   ClusterIP   10.254.3.42    <none>        443/TCP         7m        k8s-app=kubernetes-dashboard

Deploy Nginx Ingress

++Kubernetes currently has only three ways to expose a service: LoadBalancer Service, NodePort Service, and Ingress. What is Ingress? Ingress exposes Kubernetes services through load balancers such as Nginx or HAProxy.++

Official Nginx Ingress GitHub: https://github.com/kubernetes/ingress-nginx/

Configure node scheduling

# ingress can be deployed in several ways

1.  deployment: freely scheduled, sized via replicas
2.  daemonset: scheduled globally, one pod on every node

#  With deployment scheduling we need to constrain the controller to specific nodes, so the nodes must be labeled

# Defaults:
[root@master1 ~]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     <none>    20d       v1.11.2
node2     Ready     <none>    8d        v1.11.2

# Label node1 and node2

[root@master1 ~]# kubectl label nodes node1 ingress=proxy
node/node1 labeled
[root@master1 ~]# kubectl label nodes node2 ingress=proxy
node/node2 labeled

# After labeling

[root@master1 ~]# kubectl get nodes --show-labels
NAME      STATUS    ROLES     AGE       VERSION   LABELS
node1     Ready     <none>    20d       v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=proxy,kubernetes.io/hostname=node1
node2     Ready     <none>    9d        v1.11.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=proxy,kubernetes.io/hostname=node2
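
If a node later needs to be taken out of the ingress pool, the label is removed with the key- syntax (sketch):

# remove the ingress=proxy label from node2
kubectl label nodes node2 ingress-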

Download the images

# Official images
gcr.io/google_containers/defaultbackend:1.4
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.16.2

# CN mirrors
jicki/defaultbackend:1.4
jicki/nginx-ingress-controller:0.16.2

# Aliyun mirrors
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:defaultbackend-1.4
registry.cn-hangzhou.aliyuncs.com/zhdya_centos_docker/zhdya_cc:nginx-ingress-controller-0.16.2

Download the yaml files

Deploy the Nginx backend; the Nginx backend catches requests for unmatched domains and forwards them to a default page.

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml

# Deploy the Ingress RBAC rules

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml


# Deploy the Ingress Controller component

curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

# tcp-services and udp-services: Ingress itself cannot forward raw TCP or UDP, so two ConfigMap-backed services are configured here; TCP and UDP forwarding is then enabled via the --tcp-services-configmap and --udp-services-configmap flags


# Two examples, to make this easier to follow:

# TCP example

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/tomcat:8080"
  
#  The above config forwards default/tomcat:8080 to port 9000 on the ingress node

# UDP example

apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  53: "kube-system/kube-dns:53"


# Replace all images

sed -i 's/gcr\.io\/google_containers/jicki/g' *
sed -i 's/quay\.io\/kubernetes-ingress-controller/jicki/g' *


# Both nodes were labeled above, so set replicas: 2
# Edit the yaml: add the rbac serviceAccountName, hostNetwork, and nodeSelector under the second spec.

vim with-rbac.yaml

First change: ↓
spec:
  replicas: 2
  
Second change: ↓ (search for /nginx-ingress-serviceaccount and add beneath it)
  ....
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        ingress: proxy
    ....
Third change: ↓
          # add an "other" port here, used for TCP forwarding later
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          - name: other
            containerPort: 8888

Apply the yaml files

[root@master1 ingress-service]# kubectl apply -f namespace.yaml
namespace/ingress-nginx created

[root@master1 ingress-service]# kubectl get ns
NAME            STATUS    AGE
default         Active    20d
ingress-nginx   Active    6s
kube-public     Active    20d
kube-system     Active    20d

[root@master1 ingress-service]# kubectl apply -f .
configmap/nginx-configuration created
deployment.extensions/default-http-backend created
service/default-http-backend created
namespace/ingress-nginx configured
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
configmap/tcp-services created
configmap/udp-services created
deployment.extensions/nginx-ingress-controller created

# Check the pods; the two controller replicas were scheduled onto node1 and node2 (the .77 and .78 hosts)

[root@master1 ingress-service]# kubectl get pods -n ingress-nginx -o wide
NAME                                       READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
default-http-backend-6b89c8bdcb-vvl9f      1/1       Running   0          9m        10.254.102.163   node1     <none>
nginx-ingress-controller-cf8d4564d-5vz7h   1/1       Running   0          9m        10.254.75.16     node2     <none>
nginx-ingress-controller-cf8d4564d-z7q4b   1/1       Running   0          9m        10.254.102.158   node1     <none>
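
Because with-rbac.yaml was given hostNetwork: true, the controller binds directly on each labeled node; this can be checked from node1 or node2 (a sketch: 80 and 443 should be listening now, and 8888 is expected only after the tcp-services entry is added further below):

# nginx-ingress-controller should be listening on the host itself
ss -tlnp | grep -E ':(80|443|8888) '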

# Check our existing pods

[root@master1 ingress-service]#  kubectl get pods -o wide
NAME                       READY     STATUS    RESTARTS   AGE       IP               NODE      NOMINATED NODE
alpine                     1/1       Running   3          6h        10.254.102.141   node1     <none>
nginx-dm-fff68d674-fzhqk   1/1       Running   0          6h        10.254.102.140   node1     <none>
nginx-dm-fff68d674-h8n79   1/1       Running   0          6h        10.254.75.22     node2     <none>

Create an Ingress based on nginx-dm

vi nginx-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.zhdya.cn
    http:
      paths:
      - backend:
          serviceName: nginx-svc
          servicePort: 80
          
How to read this:

- host is the virtual domain name; the actual address (as I understand it, the address of the host where the Ingress-controller pod runs) should be added to /etc/hosts, so that all requests for nginx.zhdya.cn are sent to nginx

- servicePort is the port defined by the Service, not the NodePort.
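
Before editing /etc/hosts, the rule can also be exercised by presenting the Host header directly to an ingress node (a sketch; 192.168.161.77 is node1 per the earlier output):

# bypass DNS: hit the controller on node1 with the expected Host header
curl -H 'Host: nginx.zhdya.cn' http://192.168.161.77/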

# Check the service

[root@master1 ingress-service]# kubectl create -f nginx-ingress.yaml
ingress.extensions/nginx-ingress created

[root@master1 ingress-service]#  kubectl get ingress
NAME            HOSTS            ADDRESS   PORTS     AGE
nginx-ingress   nginx.zhdya.cn             80        10s

# Test access

[root@node1 ~]# curl nginx.zhdya.cn
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Of course, to access it from a local browser we also need to add the hosts entry.

# Create an HTTPS ingress for the dashboard
# Newer dashboard versions serve SSL by default, so TCP is proxied straight through to port 443 here

# Check the dashboard svc

[root@master1 ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.254.4.11    <none>        80/TCP          2d
kube-dns               ClusterIP   10.254.0.2     <none>        53/UDP,53/TCP   3d
kubernetes-dashboard   ClusterIP   10.254.3.42    <none>        443/TCP         2d
monitoring-grafana     ClusterIP   10.254.25.50   <none>        80/TCP          2d
monitoring-influxdb    ClusterIP   10.254.37.83   <none>        8086/TCP        2d

# Edit the tcp-services-configmap.yaml file

[root@master1 src]# vim tcp-services-configmap.yaml

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  8888: "kube-system/kubernetes-dashboard:443"

# Load the config file

[root@master1 src]# kubectl apply -f tcp-services-configmap.yaml
configmap/tcp-services configured

# Check the result

[root@master1 src]#  kubectl get configmap/tcp-services -n ingress-nginx
NAME           DATA      AGE
tcp-services   1         2d

[root@master1 src]# kubectl describe configmap/tcp-services -n ingress-nginx
Name:         tcp-services
Namespace:    ingress-nginx
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"8888":"kube-system/kubernetes-dashboard:443"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"tcp-services","namesp...

Data
====
8888:
----
kube-system/kubernetes-dashboard:443
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  2d    nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  CREATE  2d    nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  CREATE  2d    nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  CREATE  2d    nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  CREATE  20m   nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  CREATE  19m   nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  CREATE  19m   nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services
  Normal  UPDATE  1m    nginx-ingress-controller  ConfigMap ingress-nginx/tcp-services

# Test access

[root@node1 ~]# curl -I -k https://dashboard.zhdya.cn:8888
curl: (6) Could not resolve host: dashboard.zhdya.cn; Name or service not known
This error is perfectly normal; we just need to add a hosts entry.

Look it up on the master:
[root@master1 src]# kubectl get svc -n kube-system -o wide | grep dashboard
kubernetes-dashboard   ClusterIP   10.254.3.42    <none>        443/TCP         2d        k8s-app=kubernetes-dashboard

Then add the hosts entry on the node:
[root@node1 ~]# vim /etc/hosts

10.254.3.42 dashboard.zhdya.cn

[root@node1 ~]# curl -I -k https://dashboard.zhdya.cn:8888
HTTP/1.1 200 OK
Accept-Ranges: bytes
Cache-Control: no-store
Content-Length: 990
Content-Type: text/html; charset=utf-8
Last-Modified: Tue, 13 Feb 2018 11:17:03 GMT
Date: Tue, 25 Sep 2018 02:51:18 GMT
# Configure a domain-based HTTPS ingress

# Create a certificate for our own domain

[root@master1 dashboard-keys]# openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout dashboard.zhdya.cn-key.key -out dashboard.zhdya.cn.pem -subj "/CN=dashboard.zhdya.cn"
Generating a 2048 bit RSA private key
.......+++
..............+++
writing new private key to 'dashboard.zhdya.cn-key.key'
-----

[root@master1 dashboard-keys]# kubectl create secret tls dashboard-secret --namespace=kube-system --cert dashboard.zhdya.cn.pem --key dashboard.zhdya.cn-key.key
secret/dashboard-secret created

# Check the secret

[root@master1 dashboard-keys]# kubectl get secret -n kube-system | grep dashboard
dashboard-secret                      kubernetes.io/tls                     2         55s
kubernetes-dashboard-certs            Opaque                                0         2d
kubernetes-dashboard-key-holder       Opaque                                2         2d
kubernetes-dashboard-token-r98wk      kubernetes.io/service-account-token   3         2d

# Create an ingress

vi dashboard-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
  - hosts:
    - dashboard.zhdya.cn
    secretName: dashboard-secret
  rules:
  - host: dashboard.zhdya.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443

# Apply the config file
[root@master1 src]# kubectl apply -f dashboard-ingress.yaml
ingress.extensions/kubernetes-dashboard created

[root@master1 src]# kubectl get ingress -n kube-system
NAME                   HOSTS                ADDRESS   PORTS     AGE
kubernetes-dashboard   dashboard.zhdya.cn             80, 443   37s

Test access

# Login authentication

# First create a dashboard RBAC superuser

vi dashboard-admin-rbac.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

# Apply the config file

[root@master1 src]# kubectl apply -f dashboard-admin-rbac.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created

# Find the name of the superuser's token secret

[root@master1 src]# kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-kq27d   kubernetes.io/service-account-token   3         38s

# View the token

[root@master1 src]# kubectl describe -n kube-system secret/kubernetes-dashboard-admin-token-kq27d
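
The two lookups can also be collapsed into one line that prints only the token (a sketch; the secret name suffix is random in each cluster):

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') \
  | grep '^token:'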

Then log in to the web UI and choose token login.

And there it is, that familiar sight.

