
This article follows on from the previous one. Here we look at how Prometheus uses its Kubernetes-based service discovery to retrieve target information and monitor it.

For the monitoring strategy we mix white-box and black-box monitoring to build full coverage: infrastructure (nodes), application containers (Docker), Kubernetes components, and Kubernetes resource objects.

I. Monitoring the Nodes

  1. Deploy node-exporter as a DaemonSet

Create a node_exporter-daemonset.yml file with the following content. A tolerations entry is added under spec to tolerate the master taint, so the DaemonSet is scheduled on master nodes as well.
yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      tolerations:   # tolerate the master taint so node-exporter also runs on masters
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - name: scrape
          containerPort: 9100
          hostPort: 9100
      hostNetwork: true
      hostPID: true
      securityContext:
        runAsUser: 0

Apply the manifest:

bash
$ kubectl apply -f node_exporter-daemonset.yml
daemonset.apps/node-exporter created

Confirm that the DaemonSet and its Pods are in the expected state:

bash
$ kubectl get daemonset -n monitoring 
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
node-exporter   3         3         3       3            3           <none>          13m

$ kubectl get pod -n monitoring |grep node-exporter
node-exporter-76qz8          1/1     Running   0          14m
node-exporter-8fqmm          1/1     Running   0          14m
node-exporter-w9jxd          1/1     Running   0          2m6s
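
To be safe, you can confirm the exporter actually answers on the host port before pointing Prometheus at it. A minimal check, run on any cluster node (port 9100 and the metric name below are node-exporter defaults):

bash
$ curl -s http://localhost:9100/metrics | grep -m1 node_cpu_seconds_total
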
  2. Configure the Prometheus scrape job

Add the following job to the prometheus-config.yml file and reload the configuration so it takes effect.

yaml
- job_name: 'kubernetes-node'
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - source_labels: [__address__]
    regex: '(.*):10250'
    replacement: '${1}:9100'
    target_label: __address__
    action: replace
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)

Note: this job discovers node addresses dynamically through the node role and uses relabeling to rewrite the target port to the node-exporter port (9100), so cluster nodes are picked up and monitored automatically.

Once the job takes effect, Prometheus automatically picks up the node information and starts monitoring the nodes.
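
To double-check from the Prometheus side, you can list the active targets through its HTTP API. A minimal sketch: the prometheus.monitoring.svc.cluster.local:9090 address is an assumption, substitute wherever your Prometheus is reachable.

bash
$ curl -s 'http://prometheus.monitoring.svc.cluster.local:9090/api/v1/targets?state=active' \
    | grep -o '"job":"kubernetes-node"' | wc -l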

II. Monitoring Containers

Besides exposing its own metrics, the kubelet on each Kubernetes node has built-in support for cAdvisor. In the earlier article on container monitoring we installed cAdvisor to watch the containers on a node; in a Kubernetes cluster the kubelet offers the same capability, so there is no need to install cAdvisor separately.
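
If you want to look at this endpoint before configuring Prometheus, the kubelet's cAdvisor metrics can be pulled through the API server proxy with kubectl. A quick sketch, assuming your kubeconfig user is allowed to proxy to nodes:

bash
# pick the first node and fetch its cAdvisor metrics via the API server proxy
$ NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
$ kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics/cadvisor" | head -n 5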

Configure the Prometheus scrape job

Add the following job to the prometheus-config.yml file and reload the configuration so it takes effect.

yaml

- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)


Note: this job discovers node addresses dynamically through the node role. Because scraping the kubelet address directly runs into certificate verification problems, relabeling rewrites the target address and metrics path so that the kubelet's /metrics/cadvisor endpoint is reached through the proxy exposed by the API server.

Once the job takes effect, you can see that Prometheus has automatically generated the corresponding targets.
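
With these targets up, container-level queries become possible. A sample query against the Prometheus HTTP API; container_cpu_usage_seconds_total is a standard cAdvisor metric, and the Prometheus address is an assumption as before:

bash
# top 5 Pods by CPU usage over the last 5 minutes
$ curl -s 'http://prometheus.monitoring.svc.cluster.local:9090/api/v1/query' \
    --data-urlencode 'query=topk(5, sum by (pod) (rate(container_cpu_usage_seconds_total{image!=""}[5m])))'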

III. Monitoring the Kube API Server

As the management entry point of the whole Kubernetes cluster, the Kube API Server exposes the Kubernetes API, and its stability directly affects cluster availability. Monitoring it gives us a clear view of API request latency, errors, and availability.

The Kube API Server is usually deployed outside the cluster, running on the master hosts. So that in-cluster applications can interact with the API, Kubernetes creates a Service named kubernetes in the default namespace for internal access.

bash
$ kubectl  get service  kubernetes  -o wide 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
kubernetes   ClusterIP   10.220.0.1   <none>        443/TCP   77d   <none>

The actual backend addresses behind this kubernetes Service are maintained through an Endpoints object, which points at port 6443 on the master node, i.e. the port of the Kube API Server process running on the master.

bash
$ kubectl get endpoints kubernetes
NAME         ENDPOINTS         AGE
kubernetes   10.12.61.1:6443   77d

$ netstat -lnpt  |grep 6443
tcp6       0      0 :::6443                 :::*                    LISTEN      30458/kube-apiserver

We can therefore use Prometheus's endpoints role discovery to find and monitor the Kube API Server targets.

Configure the Prometheus scrape job

Add the following job to the prometheus-config.yml file and reload the configuration so it takes effect.

yaml
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  relabel_configs:
  - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: default;kubernetes;https
  - target_label: __address__
    replacement: kubernetes.default.svc:443

Note: this job discovers Endpoints objects dynamically through the endpoints role, and relabeling keeps only the targets that match the regular expression default;kubernetes;https.

Once the job takes effect, check that Prometheus has automatically generated the corresponding targets.
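
With the apiserver target active, request-level queries are available. A sample sketch against the Prometheus HTTP API; apiserver_request_total is the upstream metric name on recent Kubernetes versions (older clusters expose apiserver_request_count), and the Prometheus address is an assumption:

bash
# API server request rate, broken down by HTTP status code
$ curl -s 'http://prometheus.monitoring.svc.cluster.local:9090/api/v1/query' \
    --data-urlencode 'query=sum by (code) (rate(apiserver_request_total[5m]))'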

IV. Monitoring the Kubelet

The kubelet runs on every worker node and carries out the work the master assigns to that node, including managing Pods and their containers. It registers its node with the Kube API Server and periodically reports the node's resource usage to the cluster.

Whether the kubelet is healthy determines whether its node can function at all, so given the component's importance it is worth monitoring the kubelet on every node.

Configure the Prometheus scrape job

Add the following job to the prometheus-config.yml file and reload the configuration so it takes effect.

yaml
- job_name: 'k8s-kubelet'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

Note: this job discovers node addresses dynamically through the node role. Because scraping the kubelet address directly runs into certificate verification problems, relabeling rewrites the target address and metrics path so that the kubelet's /metrics endpoint is reached through the proxy exposed by the API server.

Once the job takes effect, check that Prometheus has automatically generated the corresponding targets.
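
As a quick check that kubelet metrics are flowing, you can query one of its standard gauges. kubelet_running_pods is the metric name on recent kubelets (older versions expose kubelet_running_pod_count), and the Prometheus address is an assumption:

bash
# number of Pods each kubelet reports as running
$ curl -s 'http://prometheus.monitoring.svc.cluster.local:9090/api/v1/query' \
    --data-urlencode 'query=kubelet_running_pods'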

V. Monitoring Kubernetes Resources

Kubernetes resource objects include Pods, Deployments, StatefulSets, and so on. We need to know their usage and state, for example whether a Pod is running normally. Since not all resources expose Prometheus metrics themselves, we use the open-source kube-state-metrics project to obtain these metrics.

1. Deploy kube-state-metrics

kube-state-metrics has Kubernetes version requirements. Our environment runs Kubernetes 1.18, so we need v2.0.0 or later.

kube-state-metrics is a project under the Kubernetes organization. It watches the Kube API Server for the latest state of resources and objects, and exposes an HTTP endpoint from which Prometheus scrapes the metrics.

Clone the repository and apply the standard manifests:

bash
$ git clone https://github.com/kubernetes/kube-state-metrics.git
$ cd kube-state-metrics/
$ kubectl  apply -f examples/standard/
clusterrolebinding.rbac.authorization.k8s.io/kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/kube-state-metrics created
deployment.apps/kube-state-metrics created
serviceaccount/kube-state-metrics created
service/kube-state-metrics created

Check the deployment status:

bash
$ kubectl get deploy kube-state-metrics -n kube-system
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-state-metrics   1/1     1            1           6m20s
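
Before adding the scrape job, you can inspect what kube-state-metrics exposes by port-forwarding its Service; 8080 is the metrics port in the standard manifests:

bash
$ kubectl -n kube-system port-forward svc/kube-state-metrics 8080:8080 &
$ curl -s http://localhost:8080/metrics | grep -m1 kube_pod_status_phase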

2. Configure the Prometheus scrape job

Add the following job to the prometheus-config.yml file and reload the configuration so it takes effect.

yaml
- job_name: kube-state-metrics
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    regex: kube-state-metrics
    action: keep
  - source_labels: [__address__]
    regex: '(.*):8080'
    action: keep

Note: this job keeps only the endpoints whose Service carries the label app.kubernetes.io/name: kube-state-metrics and whose address ends in the metrics port 8080. Once the job takes effect, check that Prometheus has automatically generated the corresponding targets.
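
A sample query over the new metrics; kube_pod_status_phase is a standard kube-state-metrics series, and the Prometheus address is an assumption:

bash
# count Pods by phase across the cluster
$ curl -s 'http://prometheus.monitoring.svc.cluster.local:9090/api/v1/query' \
    --data-urlencode 'query=sum by (phase) (kube_pod_status_phase)'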

VI. Monitoring Service Availability

Within a Kubernetes cluster we can also adopt black-box monitoring: Prometheus probes each Service through a prober so that we learn about business availability promptly.

To run such probes we need to install the Blackbox Exporter in the cluster.

1. Deploy the Blackbox Exporter

Create a blackbox-exporter.yml file with the following content:

yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
  namespace: monitoring
spec:
  ports:
  - name: blackbox
    port: 9115
    protocol: TCP
  selector:
    app: blackbox-exporter
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: blackbox-exporter
  name: blackbox-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      containers:
      - name: blackbox-exporter
        image: prom/blackbox-exporter
        imagePullPolicy: IfNotPresent


Apply the manifest:

bash
$ kubectl apply -f blackbox-exporter.yml
service/blackbox-exporter created
deployment.apps/blackbox-exporter created

Check that the blackbox-exporter Service and Deployment are running normally:

bash
$ kubectl get svc blackbox-exporter -n monitoring
NAME                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
blackbox-exporter   ClusterIP   10.220.50.176   <none>        9115/TCP   11m

$ kubectl  get deploy blackbox-exporter  -n monitoring   
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
blackbox-exporter   1/1     1            1           12m
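
You can also exercise the exporter by hand before adding the scrape job. The /probe endpoint with module and target parameters is the Blackbox Exporter's standard API; run this from any Pod that has curl, substituting a Service URL you expect to answer HTTP (my-service.default.svc is a placeholder):

bash
$ curl -s 'http://blackbox-exporter.monitoring.svc.cluster.local:9115/probe?module=http_2xx&target=http://my-service.default.svc:80' \
    | grep probe_success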

2. Configure the Prometheus scrape job

With the Blackbox Exporter deployed, Prometheus can call it through the in-cluster address blackbox-exporter.monitoring.svc.cluster.local. Add the following job to the prometheus-config.yml file and reload the configuration so it takes effect.

yaml
- job_name: 'kubernetes-services'
  kubernetes_sd_configs:
  - role: service
  metrics_path: /probe
  params:
    module: [http_2xx]
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
    action: keep
    regex: true
  - source_labels: [__address__]
    target_label: __param_target
  - target_label: __address__
    replacement: blackbox-exporter.monitoring.svc.cluster.local:9115
  - source_labels: [__param_target]
    target_label: instance
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    target_label: kubernetes_name

Note: this job discovers Service objects in the cluster through the service role and keeps only Services carrying the prometheus.io/probe: "true" annotation, so only annotated Services are monitored. The original service address is written into the target probe parameter, __address__ is rewritten to point at the Blackbox Exporter instance, and the instance label is rewritten to the probed target.
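
To opt a Service into probing, give it the annotation the keep rule looks for; my-service below is a placeholder name:

bash
$ kubectl annotate service my-service prometheus.io/probe=true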

Once the job takes effect, check that Prometheus has automatically generated the corresponding targets.